> I guess funding an "outsider" non-LLM AI project now requires finding someone like Carmack to get on board
And I think this is a big problem, especially since these investments tend to be a lot cheaper than the existing ones. Hell, there's stuff in my PhD I tabled, and several models I built whose performance I'm confident I could have doubled with less than a million dollars' worth of compute. My methods could already compete while requiring less compute, so why not give them a chance to scale? I've seen this happen to hundreds of methods. If "scale is all you need," then shouldn't the belief be that any of those methods would also scale?