If you have a large social site, you could just set up real Turing tests between potential sign-ups and a pool of existing users, with users removed from the pool if they produce too many false positives or false negatives.
In a similar vein, I had an idea for a social site where everyone is invited by an existing member, establishing a chain of invites. Spam from a leaf node propagates upwards as bad karma, decaying exponentially (or by some power, depending on how strict you want the anti-spam enforcement to be) with each parent step. If an inviter's karma drops below a certain threshold, they can no longer invite people. If it drops lower still, the branch dies and everyone on it gets banned.
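A minimal sketch of that propagation scheme, for concreteness. The penalty, decay factor, and both thresholds are illustrative assumptions, not values from the comment:

```python
# Sketch of the invite-tree karma scheme: spam reports push bad karma
# up the invite chain, decaying at each parent step. All constants
# below are assumed for illustration.

INVITE_THRESHOLD = -5.0   # below this, a user can no longer invite
BAN_THRESHOLD = -10.0     # below this, the user's whole branch is banned

class User:
    def __init__(self, name, inviter=None):
        self.name = name
        self.inviter = inviter
        self.invitees = []
        self.karma = 0.0
        self.banned = False
        if inviter is not None:
            inviter.invitees.append(self)

    def can_invite(self):
        return not self.banned and self.karma > INVITE_THRESHOLD

def report_spam(spammer, penalty=8.0, decay=0.5):
    """Propagate bad karma from a spammer up the invite chain,
    halving (by default) at each parent step."""
    node, hit = spammer, penalty
    while node is not None:
        node.karma -= hit
        if node.karma < BAN_THRESHOLD:
            ban_branch(node)  # the branch dies: ban everyone under it
        node, hit = node.inviter, hit * decay

def ban_branch(node):
    """Ban a user and everyone they (transitively) invited."""
    node.banned = True
    for child in node.invitees:
        ban_branch(child)
```

With these numbers, one spam report costs the spammer 8 karma, their inviter 4, the inviter's inviter 2, and so on; two reports against the same leaf push it past the ban threshold while its inviter merely loses invite privileges.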
This is probably not a great user experience (getting banned because someone above you invited a bunch of spammers), but I think it's conceptually interesting to distribute the anti-spam onus to the users of the community instead of the administration in a form other than 'report spam'.
This wouldn't 'distribute the anti-spam onus', though. No one wants spam in their community; you don't need to teach people that it's bad. What this system would do is distribute the punishment for the presence of whatever the moderators decide is spam in a way that is disproportionate to users' actual responsibility for it.
Just having an invite-only system should solve the problem just as effectively (although it creates its own issues with politics, groupthink, etc.).