As with most kinds of gated security, the better solutions come from inspecting the payload rather than from checking who's sending it.
Captchas prevent bots from submitting spam, but they don't prevent humans from submitting spam. In 99% of cases, your problem is the spam, not who is submitting it. The non-lazy solution is to look at the content itself and directly determine whether it's spam, instead of relying on a related heuristic (e.g. who submitted it) to make an informed guess.
This isn't an either/or alternative; content inspection can run alongside measures that make drive-by bot spam harder.
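To make that concrete, here's a rough sketch of what "look at the content itself" can mean, using a bag-of-words naive Bayes classifier. The training examples and the looks_like_spam helper are made up for illustration, and a real filter would need a far larger labelled corpus:

    # Rough sketch: classify the payload itself, not the submitter.
    # spam_examples / ham_examples are hypothetical placeholders for a
    # labelled corpus of past submissions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    spam_examples = ["Buy cheap pills now!!!", "Click here to claim your prize"]
    ham_examples = ["Great write-up, thanks for sharing", "How do I configure this on nginx?"]

    texts = spam_examples + ham_examples
    labels = [1] * len(spam_examples) + [0] * len(ham_examples)

    # Bag-of-words + naive Bayes: crude, but it judges the content directly
    # instead of guessing from who (or what) submitted it.
    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(texts, labels)

    def looks_like_spam(comment: str) -> bool:
        return bool(classifier.predict([comment])[0])

    print(looks_like_spam("Claim your FREE prize now, click here"))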
For example, let's look at an actual service for identifying spam payloads: Akismet. It still lets a lot of spam through, especially in non-English languages.
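For what it's worth, wiring a form handler up to something like Akismet is only a few lines. The sketch below follows my understanding of their comment-check REST endpoint; the key, site URL, and sample fields are placeholders:

    # Sketch of a payload check against Akismet's comment-check endpoint.
    # AKISMET_KEY and SITE_URL are placeholders; the response body is a
    # bare "true" (spam) or "false" (not spam).
    import requests

    AKISMET_KEY = "your-api-key"      # placeholder
    SITE_URL = "https://example.com"  # placeholder

    def is_spam(comment_text: str, author_ip: str, user_agent: str) -> bool:
        resp = requests.post(
            f"https://{AKISMET_KEY}.rest.akismet.com/1.1/comment-check",
            data={
                "blog": SITE_URL,
                "user_ip": author_ip,
                "user_agent": user_agent,
                "comment_type": "comment",
                "comment_content": comment_text,
            },
            timeout=10,
        )
        return resp.text.strip() == "true"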
Content inspection, if done "perfectly", should catch 100% of spam submissions. Captchas can't, because they test something other than the end goal (no spam) to guess whether a submission is spam, ignore spam from humans (or from humans solving captchas on behalf of bots), and cause problems for both humans and benign bots.

Obviously, it's an extremely hard problem to get 100% right. But it's a viable non-lazy solution (one that still needs a lot more work than the current state-of-the-art implementations) compared to the lazy solution of just putting captchas on the page.
The ideal solution would get rid of spam without inconveniencing users who aren't submitting spam, I'd think, which means captchas aren't it.