> At scale it does when “serious downsides” are both common and actually serious, like death.
Yeah, but the argument about how it works today is completely different from the argument about "theoretical limitations of the underlying technology". In theory, it could be made orders of magnitude less common.
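For scale, here's a back-of-envelope sketch of what "orders of magnitude less common" would mean. The 5% starting error rate is a number made up for illustration, not a figure from this thread:

```python
# Hypothetical starting error rate -- an assumption for illustration only.
current_error = 0.05  # 5%

for k in range(1, 4):
    print(f"{k} order(s) of magnitude better: {current_error / 10**k:.4%}")
# -> 0.5000%, 0.0500%, 0.0050%
```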
> Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance, I don’t care who you’re asking to design your exercise routine.
We're talking about poor accuracy, aren't we? That doesn't fundamentally sabotage the plan. Accuracy can be improved, and the best we have (humans) have accuracy problems too.
> In theory, it could be made orders of magnitude less common.
LLMs can’t get 3+ orders of magnitude better here. There are no vast untapped reserves of clean training data, and throwing more processing power at it quickly results in overfitting the existing training data.
Eventually you need to use different algorithms.
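That overfitting dynamic is easy to see in a toy setting. Below is a minimal sketch under made-up assumptions (synthetic data and a polynomial fit, with degree standing in for model capacity / extra compute), nothing LLM-specific: holding the dataset fixed while growing capacity keeps pushing training error down, while held-out error typically bottoms out and then climbs.

```python
import numpy as np

# Toy sketch of the fixed-data problem (illustrative assumptions, not an LLM):
# keep the dataset fixed and keep increasing model capacity -- training error
# keeps falling, but error on held-out data eventually gets worse, not better.
rng = np.random.default_rng(0)

def sample(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.3, n)  # noisy ground truth
    return x, y

def fit_poly(x, y, degree):
    # Plain least-squares polynomial fit via a Vandermonde design matrix.
    X = np.vander(x, degree + 1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def mse(coef, x, y):
    return float(np.mean((np.vander(x, len(coef)) @ coef - y) ** 2))

x_train, y_train = sample(20)   # small, fixed "training set"
x_test, y_test = sample(500)    # held-out data

for degree in (1, 3, 9, 15):    # degree stands in for model capacity
    coef = fit_poly(x_train, y_train, degree)
    print(f"degree {degree:2d}: "
          f"train MSE {mse(coef, x_train, y_train):.4f}, "
          f"test MSE {mse(coef, x_test, y_test):.4f}")
```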