But starting from the 10th paper, the value is also pretty low I imagine. How many new things can you discover from the same team and same research? That's 3 papers per year for a 30-year career. Every single year, no breaks.
> How many new things can you discover from the same team and same research?
That all depends on how you measure discoveries. The most common metric is... publications. Publications are what advance your career and are what you are evaluated on. The content may or may not matter (lol who reads your papers?) but the number certainly does. So the best way to advance your career is to write a minimum viable paper and submit as often as possible. I think we all forget how Goodhart's Law comes to bite everyone in the ass.
Well, to be sure, mouse research consistently produces amazing cures for cancer, insomnia, lost limbs, and even gravity itself. Sure, none of it translates to humans, but it's an important source of headlines for high impact journals and science columnists.
This is also true for machine learning papers. They cure cancer, discover physics, and all sorts of things. Sure, they don't actually translate to useful science, but they are highly valuable pieces of advertising. And hey, maybe someday they might!
I just mean it demonstrated solving something challenging in a convincing way, justifying a great deal of additional resources being dedicated to applying ML to a wide range of biological research.
Not that it actually generates revenue or solves any really important health problems.
This comment is much more tempered and I do not think it would generate a strong response. But I would suggest taking care when acting as an evangelist for a product. There's a big difference between "this technology shows great promise and warrants more funding, as it won't be surprising if the benefits exceed more than a decade's worth of losses at DeepMind" vs "this is a product right now generating billions of dollars a year".
The big problem with the latter statement isn't so much the exaggeration, but something a bit more subtle: people start to believe you. But then they sit waiting, and in that waiting they eventually get disappointed. When that happens, the usual response often feeds into conspiracy theories (perpetuating the overall distrust in science) or generates a generally bad sentiment against the whole domain.
The problem is that companies are bootstrapping with hype. That leads to bubbles and makes the space ripe for conmen, who just accelerate the bubble. There's no problem with Google/Microsoft/OpenAI/Etc talking to researchers/developers in the language of researchers/developers, but there is a problem with them talking to the average person in the language of the future. It's what enables the space for snakeoil like Rabbit or Devin. Those steal money from normal people and take money from investors that could be better spent on actually pushing the research forward so that we can eventually have those products.
I understand some bootstrapping may be necessary due to needing money to even develop things, but the big companies are certainly not lacking in funding, and we can still achieve the same goals while being more honest. The excitement and hope aren't the problem; the lying is. "Is/Can" vs "will/we hope to".
Just be aware, the person you're arguing with has several decades of experience working on the problem that AlphaFold just solved, and worked for Google on protein folding/design/drug discovery and machine learning for years. When I speak casually on Hacker News, I think people know enough from my writing style to not get triggered and write long analytic responses (but clearly, that's not always true). Think of me as a lawful neutral edge lord.
Either way, AlphaFold is one of the greatest achievements in science so far, and the funding agencies definitely are paying lots of attention to funding additional work in machine learning/biology, so in some sense, my statement is effectively true, even if not pedantically, literally correct.
> When I speak casually on Hacker News, I think people know enough from my writing style to not get triggered and write long analytic responses (but clearly, that's not always true).
If your "causal speech" is lying, then I don't think the problem is someone getting "triggered", I think it is because you lied.
> write long analytic responses
I'll concede that I'm verbose, but this isn't Twitter. I'd rather have real conversations.
Why would randoms on the internet be aware of your writing style in a massive online forum? You aren't speaking from authority in this case; you can't compare it to speaking at a conference, for example.
I'd like to point out that AlphaFold does not constitute all, nor even the majority of ML works.
My comment was a bit tongue in cheek. Not all research is going to be profitable, or even eventually profitable, but that also doesn't mean it isn't useful. If we're willing to account for the indirect profits via learning what doesn't work (an important part of science), then this vastly diminishes the number of worthless papers (to essentially those that are fraudulent or plagiarized).
But specifically for AlphaFold, I'm going to need a citation on that. If I understand the calculus correctly, Google acquired DeepMind in 2014 for somewhere between $525 million and $850 million, spends a similar amount each year, and forgave a $1.5bn debt[0]. So I think (VERY) conservatively we can say $2bn (I think even $4bn is likely conservative here)? While I see articles that discuss how the value could be north of $100bn[1] (which certainly surpasses a very liberal estimate of costs), I have yet to see evidence that this is actual value that has been returned to Google. I can only find information about 2022 and 2023 having profits in the ballpark of $60m.
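To spell out that back-of-envelope (a rough illustrative sketch using only the figures above, ignoring the annual operating spend entirely, so treat it as a floor rather than actual accounting):

    \$0.525\,\text{bn (acquisition, low end)} + \$1.5\,\text{bn (forgiven debt)} \approx \$2\,\text{bn}
    \$2\,\text{bn} + \text{a few years of spend at } {\sim}\$0.5\,\text{bn/yr} \approx \$3.5\text{--}4\,\text{bn}

Even that very conservative floor is what the reported ~$60m/yr in profits would have to be measured against.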
This isn't to say that AlphaFold won't offset all the costs (I actually believe it will), but your sentence does not suggest speculation but rather actualization ("has basically paid", "been"). I think that difference matters enough that we have a dozen cliches with similar sentiment. In the same way, my annoyance is not that we are investing in ML[2], but how quick we are to make promises and celebrate success[3]. My real concern is that while hype is necessary, overdoing it allows charlatans[4] to more easily enter the space. And if they gain a significant foothold (I believe that is happening/has happened), then that is destructive to those who genuinely wish to push the technology forward.
[2] Disclosure: I'm an ML researcher. I am actually in favor of more funding, though with a different allocation.
[3] I'm willing to concede that success is realistically determined by how one measures success, and that this may be arbitrary and no objective measure actually exists or is possible.
[4] One need not knowingly be a charlatan; it only matters that the claims made are false or inaccurate. There are many charlatans who believe in the snake oil they sell. Most of these are unwilling to acknowledge critiques. A clear example is religion: if you believe in a religion, this applies to all religious organizations except the one you are a part of. If you are not religious, the same sentence holds true, but the resultant set is one larger.
Did you chuck gravity in there to be hyperbolic or has someone really published a paper where they have data implying they got gravity not to apply to mice?