I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see many of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit society, and we're merely charging for it as fair compensation for our efforts, much like a baker charges you for bread" or variants of it.
The latter is much closer to what most tech companies actually argue, and the distinction seems to escape a lot of otherwise sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory doesn't improve anything (though it sure does drive engagement online, making advertisers happy; a big part of why the press does this routinely too).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about that, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.