> "practically every CVE is on code you can read."
This is probably true due to a sort of survivorship bias. Code you can read is much easier to analyze, test, and report on. Closed-source internal code has a lot of security by obscurity built into it. Not to dismiss security by obscurity; I am sure it keeps an absolutely frightening amount of code safe.
Lol what? Since when is asserting "finite resources will eventually be unable to sustain exponential growth" disproven by observing that "Yeah, well, they haven't yet"?
100 years is virtually no time at all; I'm having a hard time believing you're not joking when your optimism amounts to thinking we have at least that long.
For roughly the same reason that "if I keep driving in a straight line, I'll crash into the tree at the end of my street" is disproven by the many years in which it hasn't happened yet: it turns out the car has a steering wheel, and I can turn when I reach the corner.
The steering wheel, in the case of overpopulation, is a general reduction of living standards in the best case and widespread genocide and refugee crises in the worst.
The steering wheel in the case of overpopulation is gentrification. Turns out folks with all their basic needs met have way fewer children than folks who are barely scraping out an existence.
No, not true. Many societies in the world have voluntarily reduced reproduction to the point where population growth would be zero, except for improvements in longevity.
These changes largely pre-date small recent drops in living standards, having taken place at the end of unprecedented runs of improvements in health and quality of life.
And how will you deal with the societies whose populations are booming in poverty and who will desire to share your living standards? All the excess population of Latin America, Southeast Asia, and Africa will soon be displaced north by climate change.
I don't know why you're so skeptical that a drug possessing the primary effect of flooding one's brain with serotonin can cause SSRI withdrawal symptoms upon cessation.
As for whether or not it's 'really MDMA', not everybody is popping pressed ecstasy tablets. It's trivial to send a few crushed crystals to a spectroscopy lab and I very rarely see samples cut with anything other than carbohydrates.
On the other side of the same coin, when animating VFX for live action, animation which looks "too clean" is also a failure mode. You want to make your poses a little less good for camera, introduce a little bit of grime and imperfection, etc.
Animation is a great art and it takes a lot of skill to make things look the way they ought to for whatever it is you are trying to achieve.
Most animators don't like the "digital makeup" comparison (because it's often used in a way which feels marginalizing to their work on mocap-heavy shows), but if you interpret it in the sense that makeup makes people look the way they are "supposed to" I think it's a good model for understanding why rotoscope and motion capture don't yet succeed without them.
In what respect is generating text a better predictor of real world applicability than the ability to achieve goals in a complex simulated environment containing other agents?
It's not one or the other. We need both supervised pre-training and reinforcement learning. The first part represents past human experience encoded as language. It can bring a model to human level on most tasks, but not make it smarter.
The second approach, with RL, is based on immediate feedback and could make a model smarter than us. Just think of AlphaZero or AlphaTensor. But this requires deploying a wide search over possible solutions and using a mechanism to rank or filter the bad ideas out (code execution, running a simulation or a game, optimizing some metric).
So models need both past experience and new experience to advance. They can use organic text initially, but later need to develop their own training examples. The feedback they get will be on topic, both with the human user and with the model mistakes. That's very valuable. Feedback learning is what could make LLMs finally graduate from mediocre results.
DeepMind is saying they are using both, and feedback learning is dialed up.
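The generate-then-filter loop described above can be sketched in a few lines. This is a toy, hedged illustration, not anyone's actual pipeline: the proposer here is just a random sampler standing in for a model, and the verifier is a trivial mechanical check standing in for code execution or a simulator.

```python
import random

def propose(n_candidates, rng):
    """Stand-in for a model sampling candidate answers (here: random ints)."""
    return [rng.randint(-10, 10) for _ in range(n_candidates)]

def verifier(x):
    """Mechanical feedback signal (toy): does the candidate solve x**2 == 9?"""
    return x * x == 9

def best_of_n(n_candidates, seed=0):
    """Wide search plus a filter: sample many ideas, keep the ones that check out."""
    rng = random.Random(seed)
    candidates = propose(n_candidates, rng)
    # The verifier plays the role of code execution / a game / a metric:
    # it ranks or filters the bad ideas out.
    return [c for c in candidates if verifier(c)]

print(best_of_n(1000))
```

The surviving candidates (and only those) would then become new training examples, which is the sense in which models "develop their own training examples" from feedback rather than from organic text.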
The context in simulated game environments is far less complex than the real world, and the available interactions are far fewer.
It would be different if the agent were exposed to the real world and used multisensory data to predict the next "token", i.e. a thought or action.
Have you seen what other people were painting in the early 1500s? Leonardo's work is extraordinary for his time. Comparing the Mona Lisa with later art is a bit like measuring somebody while you're standing on their shoulders.
"It's opensource, so it is going to be a better engine in the long run." Citation needed.