
Do these "superforecasters" lose something when they are wrong? Do they have "skin in the game"?

I'm a big fan of predication markets (e.g. Polymarket, PredictIt) for exactly that reason – proper incentives are there.




Prediction markets' liquidity is tiny, so their predictions have very little "skin in the game" behind them.

This was made very clear during a couple of big world events (major elections and the like), where I was simultaneously watching financial markets, dedicated event-betting markets, and prediction markets. The conclusion was that financial markets are where the real super-forecasters work.


Sure, but often there isn't an obvious financial market to check. Prediction markets still update faster and more accurately than, say, the news or pundits.

Hopefully they stop being hindered and become more popular so liquidity increases though. It'd be much more useful for everyone if a portion of sports and other gambling spending can be redirected towards them.


> It'd be much more useful for everyone if a portion of sports and other gambling spending can be redirected towards them.

The problem is that prediction markets are typically about very rare events that happen once or twice a year.

Your typical sports punter doesn't have the patience for these kinds of long bets.

> Hopefully they stop being hindered and become more popular so liquidity increases though

That's just a US problem (as always). In the rest of the world there are no major obstacles to creating prediction markets. In some form, they already exist – BetFair has election betting, ...


The original study found their technique to have a superior Brier score compared to a prediction market (https://goodjudgment.io/docs/Goldstein-et-al-2015.pdf). Metaculus also employs a similar technique.
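For reference, the Brier score is just the mean squared error between probabilistic forecasts and binary outcomes – lower is better. A minimal sketch (my own illustration, not code from the paper):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; always guessing 50% scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, mostly-correct forecaster scores close to 0:
score = brier_score([0.9, 0.1, 0.8], [1, 0, 1])  # = 0.02
```

This is what makes the comparison in the paper possible: any source of probability estimates (a market price, a pundit, a pooled panel) can be scored on the same scale.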

It's been a while since I've read it, but the book (Superforecasting) also had an additional section elaborating on a comparison against prediction markets.

From memory, the core thesis of the GJP is that some individuals are good at making forecasts, and this accuracy is not domain-specific and doesn't require insider information. Once measured, more weight is put onto those who make better opinions. As an analogy, consider asking 100 chess players for the next move in a game – those with a higher Elo are more likely to find a better next move. Conventional prediction markets don't have this kind of "long-term weighting", instead relying on individuals to bet according to their confidence (which may not always correspond to their accuracy).

Of interest is this article (https://mikesaintantoine.substack.com/p/scoring-midterm-elec...) which compared PredictIt, FiveThirtyEight, and Manifold Markets (a prediction market with play money, so in theory no "proper" incentives). Even with the "proper incentives", PredictIt did no better than Manifold Markets and a decent bit worse than FiveThirtyEight.


> Once measured, more weight is put onto those who make better opinions.

So basically "boosting" or https://en.wikipedia.org/wiki/Multiplicative_weight_update_m...?
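Roughly, yes. A minimal multiplicative-weights sketch of the idea – this is my own illustration of the general technique, not the GJP's actual aggregation scheme, and the `eta` parameter and loss function are invented for the example:

```python
def aggregate(weights, forecasts):
    """Weighted average of the forecasters' probability estimates."""
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

def update(weights, forecasts, outcome, eta=0.5):
    """Shrink each weight in proportion to that forecaster's squared error."""
    return [w * (1 - eta * (f - outcome) ** 2) for w, f in zip(weights, forecasts)]

weights = [1.0, 1.0]  # no track record yet: equal weight
rounds = [([0.9, 0.2], 1), ([0.8, 0.3], 1)]  # (forecasts, actual outcome)
for forecasts, outcome in rounds:
    weights = update(weights, forecasts, outcome)
# After two rounds the consistently accurate forecaster dominates,
# so the aggregate forecast leans toward their estimates.
```

The point of contrast with a market: here the weighting is done by the aggregator from measured track records, whereas a market only weights by how much money each participant chooses to stake.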


> Good Judgment maintains a global network of elite Superforecasters who collaborate to tackle our clients’ forecasting questions with unparalleled accuracy. We continue to grow this network by identifying and recruiting fresh talent from our public forecasting platform, Good Judgment Open. And, we train others to apply the methods that make our Superforecasters so accurate.

https://goodjudgment.com/about/


LMAO. Amazing to see this trash (the OP's link) posted on HN.


Maybe you can expand on that? And please keep in mind the HN comment guidelines (https://news.ycombinator.com/newsguidelines.html)

This recent study (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7333631/) seems to at least partially reproduce and corroborate the original results?


Instead of "laughing your ass off", could you offer some constructive commentary about your reason for dismissing this?

I don't know anything about that organization and I come to HN to learn about things like this. So if you do know something about it, please share your knowledge.


I enrolled with the Good Judgment Project for a while. Most of these super high-level assessments are useless, and may even be put out as a bit of disinformation. What they really get is a lot of text from the participants which is essentially free amalgamation of OSINT that they turn over to the sponsors.


> What they really get is a lot of text from the participants which is essentially free amalgamation of OSINT that they turn over to the sponsors.

So, Meta's business model?


> big fan of predication markets

I'd love to see a predication market


Maybe I'm just not getting it, but this just seems like gambling by any other name.


It's gambling for information discovery. Rather than gambling for fun.

You see some of this in sports betting, but it is distorted by fans, and sport-outcomes are not really important.


How do they avoid attracting people who would see it as gambling for fun?

A lot of people are attracted to gambling - possibly everyone, to some degree, for a wide enough definition of "gambling" - even when there's no money involved. Put even a small monetary reward in and you'll get loads of people taking part just for fun, in most cases.

So how do prediction markets avoid this?


Why would they want to avoid them? They just add liquidity to the market, incentivising better forecasters and improving the overall accuracy and robustness. More useful for them to gamble on that than something else.


Having a lot of such gamblers distributed unevenly across markets might incentivise competent participants to focus on those easy pickings.


In some sense they are 'suckers' who give the good predictors an incentive to play. It's much nicer to participate if most of the other bets are poorly made.

Whether this is worse for the gamblers than other forms of gambling is another question.





