On the other hand, I'm baffled to encounter recursive self-improvement being discussed as something not only weird to expect, but as damning evidence of sloppy thinking by those who speculate about it.
We have an existence proof for intelligence that can improve AI: humans.
If AI ever gets to human-level intelligence, it would be quite strange if it couldn't improve itself.
Are people really that sceptical that AI will get to human level intelligence?
Is that an insane belief worthy of being a primary example of a community not thinking clearly?
Come on! There is a good chance AI will recursively self-improve! Those pooh-poohing this idea are the ones not thinking clearly.
Consider that even the name of the phenomenon is sloppy: "recursive self-improvement" does not imply "self-improvement without bounds". This is the "what if you hit diminishing returns and never get past it" objection. Absolutely no justification for the jump, ever, among AI boosters.
> If AI ever gets to human-level intelligence
This picture of intelligence as a numerical scale that you just go up or down, with ants at the bottom and humans/AI at the top, is very very shaky. AI is vulnerable to this problem, because we do not have a definition of intelligence. We can attempt to match up capabilities LLMs seem to have with capabilities humans have, and if the capability is well-defined we may even be able to reason about how stable it is relative to how LLMs work.
For "reasoning" we categorically do not have this. There is not even any evidence that LLM capabilities will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure. IIRC there was a recent paper about giving LLMs more processing time, and this reduced performance. Adding extraneous details sometimes reduces performance too. What if eventually everything you try reduces performance? Totally unaddressed.
> Is that an insane belief worthy of being a primary example of a community not thinking clearly?
I really need to stress this: thinking clearly is about the reasoning, not the conclusion. Given the available evidence, no legitimate argument has been presented that implies the conclusion. This does not mean the conclusion is wrong! But just putting your finger in the air and saying "the wind feels right, we'll probably have AGI tomorrow" is how you get bubbles and winters.
>"recursive self improvement" does not imply "self improvement without bounds"
I was thinking that. If you look at something like AlphaGo, it was based on human training data; then they made AlphaZero, which learned by playing against itself and got very good, but not infinitely good, as it was still constrained by hardware. In chess, the best human is about 2800 on the Elo scale and computers are about 3500. I imagine self-improving AI would be like that: smarter than humans, but not infinitely so, and constrained by hardware.
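To make that Elo gap concrete, the standard logistic Elo model turns a rating difference into an expected score. A quick sketch (function name is mine; the 2800/3500 figures are the rough ones quoted above):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score for player A under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# Best human (~2800) vs a top engine (~3500): a 700-point gap.
p_human = expected_score(2800, 3500)  # roughly 0.017, i.e. under 2% expected score
```

So "smarter than humans but not infinitely so" still translates to humans scoring under 2% against the machine, which is the kind of gap the comment is gesturing at.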
Also, just as humans still play chess even though computers are better, I imagine humans will still do the usual kinds of things even if computers get smarter.
> "recursive self improvement" does not imply "self improvement without bounds"
Obviously not, but thinking that the bounds will lie between where AI intelligence is now and human intelligence is, I think, unwarranted. As mentioned, humans are unlikely to be the peak of what's possible, since evolution did not optimise us for intelligence alone.
If you think the recursive self-improvement people are arguing for improvement without bounds, I think you're simply mistaken, and it seems like you have not made a good faith effort to understand their view.
AI only needs to be somewhat smarter than humans to be very powerful, the only arguments worth having IMHO are over whether recursive self-improvement will lead to AI being a head above humans or not. Diminishing returns will happen at some point (in the extreme due to fundamental physics, if nothing sooner), but whether it happens in time to prevent AI from becoming meaningfully more powerful than humans is the relevant question.
> we do not have a definition of intelligence
This strikes me as an unserious argument to make. Some animals are clearly more intelligent than others, whether you use a shaky definition or not. Pick whatever metric of performance on intellectual tasks you like, there is such a thing as human-level performance, and humans and AIs can be compared. You can't even make your subsequent arguments about AI performance being made worse by various factors unless you acknowledge such performance is measuring something meaningful. You can't even argue against recursive self-improvement if you reject that there is anything measurable that can be improved. I think you should retract this point as it prevents you making your own arguments.
> There is not even any evidence that LLMs will continue increasing as techniques improve, except in the tautological sense that if LLMs don't appear to resemble humans more closely we will call the technique a failure.
I'm pretty confused by this claim - whatever our difficulties defining intelligence, "resembling humans" is not it. Do you not believe there are tasks on which performance can be objectively graded beyond similarity to humans? I think it's quite easy to define tasks that we can judge the success of without being able to do it ourselves. If AI solves all the Millennium Prize Problems, that would be amazing! I don't need to have resolved all issues with a definition of intelligence to be impressed.
Anyway, is there really no evidence? AI having improved so far is not any evidence that it might continue, even a little bit? Are we really helpless to predict whether there will be any better chatbots released in the remainder of this year than we already have?
I do not think we are that helpless - if you entirely reject past trends as an indicator of future trends, and treat them as literally zero evidence at all, then this is simply faulty reasoning. Past trends are not a guarantee of future trends, but neither are they zero evidence. They are a nonzero medium amount of evidence, the strength of which depends on how long the trends have been going on and how well we understand the fundamentals driving them.
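One toy way to formalise "past trends are nonzero but not conclusive evidence" is Laplace's rule of succession, which gives a probability that a streak continues given only how long it has run (the framing as years of continued progress is my illustration, not a real forecast):

```python
def p_continue(successes: int) -> float:
    """Laplace's rule of succession: after s successes in s trials,
    P(next trial also succeeds) = (s + 1) / (s + 2)."""
    return (successes + 1) / (successes + 2)

# With no track record the estimate is 50/50; after 8 years of continued
# progress it rises to 90%, but never reaches certainty.
p0 = p_continue(0)  # 0.5
p8 = p_continue(8)  # 0.9
```

It is a crude model, but it captures the point: a longer-running trend licenses more confidence, yet never a guarantee.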
> thinking clearly is about the reasoning, not the conclusion.
And I think we have good arguments! You seem to have strong priors that the default is that machines can't reach human intelligence/performance or beyond, and you really need convincing otherwise. I think the fact that we have an existence proof in humans of human intelligence, and an algorithm to get there, proves it's possible. And I consider it quite unlikely that humans are the peak of intelligence/performance-on-whatever-metrics that is possible, given it's not what we were optimised for specifically.
All your arguments about why progress might slow or stop short of superhuman levels are legitimate and can't be ruled out, and yet these things have not been limiting factors so far, even though the same arguments would have been equally valid at any time in the past few years.
> no legitimate argument has been presented that implies the conclusion
I mean it's probabilistic, right? I'm expecting something like an 85% chance of AGI before 2040. I don't think it's guaranteed, but when you look at progress so far, and that nature gives us proof (in the form of the human brain) that it's not impossible in any fundamental way, I think that's reasonable. Reasonability arguments and extrapolations are all we have, we can't imply anything definitively.
> We have an existence proof for intelligence that can improve AI: humans.
I don't understand what you mean by this. The human brain has not meaningfully changed, biologically, in the past 40,000 years.
We, collectively, have built a larger base of knowledge and learned to cooperate effectively enough to make large changes to our environment. But that is not the same thing as recursive self-improvement. No one has been editing our genes or performing brain surgery on children to increase our intelligence or change the fundamental way it works.
Modern brains don't work "better" than those of ancient humans, we just have more knowledge and resources to work with. If you took a modern human child and raised them in the middle ages, they would behave like everyone else in the culture that raised them. They would not suddenly discover electricity and calculus just because they were born in 2025 instead of 950.
----
And, if you are talking specifically about the ability to build better AI, we haven't matched human intelligence yet and there is no indication that the current LLM-heavy approach will ever get there.
I just mean that the existence of the human brain is proof that human-level intelligence is possible.
Yes, it took billions of years all said and done, but it shows that there are no fundamental limits preventing this level of intelligence. It even proves it can in principle be done with a few tens of watts and a certain approximate amount of computational power.
Some used to think the first AIs would be brain uploads, for this reason. They thought we'd have the computing power and scanning techniques to scan and simulate all the neurons of a human brain before inventing any other architecture capable of coming close to the same level of intelligence. That now looks to be less likely.
Current state-of-the-art AIs still operate with less computational power than the human brain, and they are far less efficient at learning than humans are (there is a sense in which a human intelligence takes merely years to develop, i.e. childhood, rather than billions of years; this is also a relevant comparison to make). Humans can learn from far fewer examples than current AI can.
So we've got some catching up to do - but humans prove it's possible.
The sun is very close to a black-body radiator, so it emits across all wavelengths. The atmosphere and water filter a lot of them.
It is actually quite strange that plants are green: that's a wavelength the atmosphere lets through particularly well, so it would be particularly good to absorb rather than reflect for energy production. It seems nature hasn't come up with a good, cheap way to move the absorption into that wavelength.
Well, black-body radiation still peaks around a certain range of wavelengths depending on the temperature; it's not just equal power at all wavelengths.
Light visible to humans is at the peakiest bit of the sun's black-body spectrum; see the image here: https://i.sstatic.net/kRUju.png
Green isn't just the wavelength the atmosphere lets through best, or the wavelength humans are most sensitive to; it's also near the peak of the sun's black-body spectrum.
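You can check that last claim with Wien's displacement law, which gives the peak wavelength of a black body's spectrum (in the per-wavelength parameterisation; the ~5778 K solar temperature is the usual textbook figure):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, metre-kelvins

def peak_wavelength_nm(temperature_k: float) -> float:
    """Peak of the per-wavelength black-body spectrum, in nanometres."""
    return WIEN_B / temperature_k * 1e9

sun_peak = peak_wavelength_nm(5778)  # ~501 nm, green light
```

One caveat: "the peak" depends on whether you plot the spectrum per unit wavelength or per unit frequency, so the green peak is a convention as much as a fact, but under the common per-wavelength convention the sun does peak in the green.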
Andrew is mistaken, the paper he cites doesn't say that levitation is possible in a dipole field. Brauenbecker showed that diamagnetic levitation was possible at all (e.g. in a quadrupole field), but not in a dipole field.
Stable levitation in a dipole field is still thought to be something only type II superconductors can do, and Andrew should not uncritically repeat what he read on /sci/ - which is one of the only other google results for "Brauenbecker extension" currently (after his tweet).
Here in Australia, the regulator requires banks to assess your ability to make repayments at an interest rate three percentage points higher than the actual rate. A few years ago it was a floor of 7% rather than a buffer, and they'll probably go back to something like a 2% buffer plus a 7% floor (rates increased by more than three percentage points recently, so it looks like a 3% buffer alone isn't enough).
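To see what that buffer means in dollars, here's the standard amortisation formula applied to an illustrative loan (the $500k/30-year figures are my example, not from the regulator):

```python
def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortised repayment with monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

actual = monthly_repayment(500_000, 0.06, 30)    # ~$2,998/month at the actual rate
assessed = monthly_repayment(500_000, 0.09, 30)  # ~$4,023/month at rate + 3pp buffer
```

So a borrower at 6% has to show they could service roughly a third more than their actual repayment, which is the cushion that lets most people "begrudgingly pay it" when rates rise.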
But yeah, mostly what happens if rates go up is that you begrudgingly pay it, because most in that situation can afford it (those who have changed circumstances may not be able to, but most can).
Edit: also, the fact that mortgage holders are more sensitive to rate increases means (it is thought that) the central bank doesn't need to change rates by as much to get the same effect. If there would be widespread mortgage defaults given a certain sized rate increase, then that probably means the central bank can stop short of an increase that large.
So the problem is sort of self-limiting. Rate hikes are designed to induce financial strain, but too much isn't desirable, so central banks don't hike too much on purpose (they sometimes do by accident).
Yep. I just got diagnosed recently. Still figuring out the right dose of the right meds, but in the meantime it has become painfully obvious that a lot of discussion around productivity and procrastination is happening without a lot of people realising they may simply have ADHD. There's a reason psychiatrists treat ADHD with medication first rather than trying to get you to follow productivity hacks or change your habits. Would have been good if ADHD was on my radar as a possibility much earlier.
Yeah! I think of Metaculus vs Manifold a bit like NYT vs Twitter -- or to borrow another analogy, lawful good vs chaotic evil. But we're both great places to think about the future in a bit more structured way than pure punditry!
Intelligence is consistent with the laws of physics but given our rudimentary understanding of it we have no idea if super-intelligence is. I'm defining super-intelligence as at least an order of magnitude increase in intelligence, not just "high IQ".
Intelligence is correlated with brain size, not just within humans but also between species. No physical law prevents us from building the equivalent of a house-sized brain, perhaps not even a moon-sized one. The only real limits seem to be mass (we don't want the whole thing to collapse under its own weight) and signal delay due to the speed of light.
> No physical law prevents us from building the equivalent to a house sized brain
I think you'll find the speed of thought is much lower than the speed of light; nerve impulses max out at about 120 m/s, I believe. The difficulty of supporting such a large mass of nerve tissue is also a problem, as is the question of whether intelligence is purely electrically mediated or relies on the soup of hormones and the molecular machinery of DNA and proteins for some of its functions. We also need to specifically consider the neocortex here, since it is the only differentiating brain structure in animals with human intelligence; other animals have much larger brains but no signs of human-like abilities.
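A back-of-envelope calculation shows how much that speed difference matters at house scale (the 10 m size is my assumption for "house-sized"; 120 m/s is the rough figure for fast myelinated axons):

```python
NERVE_SPEED = 120.0   # m/s, roughly the fastest myelinated nerve fibres
LIGHT_SPEED = 3.0e8   # m/s
SIZE = 10.0           # metres, an assumed house-sized brain

nerve_delay_ms = SIZE / NERVE_SPEED * 1000  # ~83 ms for one crossing
light_delay_ms = SIZE / LIGHT_SPEED * 1000  # ~0.00003 ms for one crossing
```

A single nerve-speed crossing of a house-sized brain takes about as long as a full human reaction time, so a scaled-up biological brain would think slowly; electronic signalling at near light speed wouldn't hit that wall until much larger scales.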
If you are suggesting we build an artificial brain then we need to wait at least a few more years if not decades to find out if that is possible. The current transformer models are quite advanced at language production and seem to have some simple reasoning abilities but they are very far from being intelligent.
I mean, we don't need to call our house or Moon sized AI "brain", but it seems clear that no laws of physics are preventing us from building something so intelligent that we are mere ants in comparison. The old Gods, while powerful, would pale in the light of its intelligence. A true God, hopefully a benevolent one. It doesn't seem many decades away.
The issue, even in environments not constrained to ASCII, is that the em dash is still not easily accessible on most keyboards.
I do know and use the compose key, but it's not the same as having a standard key for it. Trying on a mobile device, long-pressing the dash key suggests two dashes (not sure if the second choice is an en dash or em dash), which covers some of them, but not the four types discussed here.
Do you know of one where it's easy to type? There are many international keyboard layouts, but I don't think any has that many dashes. The compose key is the best option, I think.
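For reference, these are the four dash-like characters such discussions usually distinguish (which four the article means is my assumption), with their Unicode code points:

```python
# Dash-like characters commonly confused with one another.
dashes = {
    "hyphen-minus": "\u002D",  # the ASCII key on every keyboard
    "hyphen":       "\u2010",  # the typographically "real" hyphen
    "en dash":      "\u2013",  # ranges, e.g. 1999-2001
    "em dash":      "\u2014",  # parenthetical breaks
}
for name, ch in dashes.items():
    print(f"{name:>12}: {ch}  U+{ord(ch):04X}")
```

Only the first of these has a dedicated key, which is exactly the accessibility problem being described.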
It’s easier to type HYPHEN-MINUS‐the‐keyboard-key, but text should be displayed using the appropriate glyph, depending on its semantic meaning, which is never HYPHEN-MINUS‐the-glyph.
(This discussion is similar to the classic net discussion about TAB‐the‐ASCII‐character versus TAB‐the‐keyboard‐key, with some people having trouble conceptualizing the difference.)