>> His precise theorem is this: Define "LISP program-size complexity" to be the size of a LISP subroutine that examines a proof, determines whether it is correct, and returns either the theorem established by the proof (if the proof is correct) or an error message (if the proof is incorrect). Then, given a formal axiomatic system A, with LISP program-size complexity N, A cannot be used to prove that any LISP expression longer than N + 356 characters is elegant.
Doesn't this in fact prove that numbers are discovered, not invented?
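To make "elegant" concrete: Chaitin calls a program elegant if no shorter program produces the same output. Here's a toy sketch of that idea, just a Python brute force over a tiny expression alphabet, nothing like his actual LISP construction:

```python
# Toy version of Chaitin's "elegance": a program is elegant if no strictly
# shorter program produces the same output. "Programs" here are arithmetic
# expressions over a tiny alphabet, evaluated with eval(); purely illustrative.
from itertools import product

ALPHABET = "0123456789+*"

def output_of(prog):
    try:
        return eval(prog, {"__builtins__": {}})
    except Exception:
        return None  # syntax errors etc. count as "no output"

def is_elegant(prog):
    target = output_of(prog)
    if target is None:
        return False
    # Exhaustively check every strictly shorter program for the same output.
    for n in range(1, len(prog)):
        for chars in product(ALPHABET, repeat=n):
            if output_of("".join(chars)) == target:
                return False
    return True

print(is_elegant("2+2"))  # False: "4" is shorter and yields the same value
print(is_elegant("81"))   # True: no single character evaluates to 81
```

The theorem then says a formal axiomatic system can only certify elegance for expressions up to a fixed size beyond the system's own complexity.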
I'm fascinated by Chaitin's Constant and his use of the word "elegance". His ideas challenge my current belief system.
From the article:
>> [what is] the probability that a randomly constructed program will halt [?]
Where are you in life when this is a question that needs to be pondered? My bet is you're at a point where (when?) you question nature and/or human nature.
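For reference, the quantity being asked about has a precise definition: Chaitin's halting probability for a prefix-free universal machine,

$$\Omega = \sum_{p\ \text{halts}} 2^{-|p|},$$

where the sum runs over every program $p$ that halts and $|p|$ is its length in bits.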
>> Real numbers are real
I meant to say, real numbers existed all along and were discovered, as opposed to being an invention.
What made me come to this conclusion? Here's Chaitin (paraphrased); a toy sketch in code follows the list:
- run a process that through a series of operations produces a scalar, deterministically.
- alter that process.
- observe that the scalar has increased/decreased in value.
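A minimal sketch of those three steps, with an arbitrary deterministic rule standing in for whatever process Chaitin had in mind:

```python
# The three steps above: a deterministic process yields a scalar; altering the
# process changes the scalar; we observe whether it went up or down.
def process(steps):
    """Deterministic toy process: apply a fixed rule for `steps` iterations."""
    value = 1.0
    for i in range(1, steps + 1):
        value = value * 1.1 + i  # arbitrary but fixed rule
    return value

original = process(10)
altered = process(11)  # "alter that process": run one extra step
print(original, altered, altered > original)  # the scalar increased, deterministically
```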
Why do you think that? A Boltzmann brain will immediately start to degrade but that does not mean there is zero time for thought.
If I were a Boltzmann brain, I would spend most if not all of my existence dwelling, contemplating, ruminating over the things that took me to my current point in space and time. Then I would quickly type something up and post it on a forum to confirm that I
OT, but his style is so dense in that book. Why did he write it like that? I've never been able to finish it. I find it almost impenetrable. The Hobbit I read with joy and ease even as a young child. In LOTR, sometimes after reading a paragraph I needed to pause to digest and to connect the dots. In The Silmarillion I need that pause after each clause.
This sucks, because I _need_ to know what's in the book.
I think there are two key points regarding your question of _why_ he wrote it like that. First, it is not a novel that he planned out and then sat down and wrote. It's an amalgamation of disparate stories that Christopher cobbled together into a single book because that was the only way they could sell it to the publishers. It's also the quasi-religious tome of the Tolkien world, so rather than comparing the readability to a Stephen King novel, compare it to something like the Bible or the Torah.
All that said, it took me several reads before I felt like I really 'got' it. The hardest part for me was grasping the long timelines since most of it is a story of the elves and they are immortal. You might be following the same character arc for thousands of years. All that struggle was worth it though, because when you reread LOTR _after_ reading The Silmarillion, you pick out things in LOTR that you didn't even know were there before.
Afaik The Silmarillion was more of a backstory, not really fit for publishing at Tolkien's death. Christopher, I believe, was the one who collected all the materials into its current form. Don't forget Tolkien himself was a master linguist!
Check out this talk by Brandon Rhodes - it explains the context in which The Hobbit and LotR were written. I won't spoil it, but I think it'll answer your question.
I feel your bro science makes a lot of sense. I'm someone who's been identified as bipolar and put on Lithium, which I quickly dropped (because it made me feel suicidal), and I instead turned to illegally acquired cannabis, which helps me not feel the anxiety of not yet having solved a problem, something that these days freaks the living shit out of me if I'm not doped up.
Lithium should almost certainly NOT make one suicidal when dosed correctly for Bipolar. You might wish to explore the uses of a micronutritional form of lithium, namely, lithium orotate, which gets metabolized in a very different (and safer) way than pharmacological lithium.
Sean Carroll's solo podcast where he explains how the Planck length is relative and how we should arrive at quantum gravity by means of quantum physics, not by quantizing relativity. Blew my mind several times over.
At my day job, the meme that won and now dominates is "the perceived performance of a SPA is greater than that of a non-SPA". My calls for making our backend more performant are ignored (and probably ridiculed, because of how out-of-the-loop I seem). React for the win!
The progress academics continuously make in NLP takes us closer and closer to a local maximum that, when we reach it, will mark the longest and coldest AI winter ever experienced by man, because of how far we are from a global maximum. Progress made by academically untrained researchers will, in the end, be what melts the snow, because of how "out-of-the-current-AI-box" they are in their theories about language and intelligence in general.
We hear that opinion all the time. As someone working in neural net-based computer vision I'd basically agree that the current approaches are tending towards a non-AGI local maximum, but I'd note that as compared to the 80s, this is an economically productive local maximum, which will likely help fuel new developments more efficiently than in previous waves. The next breakthrough may be made by someone academically untrained, but you can bet they'll have learned a whole lot of math, computer science, data science, and maybe neuroscience first.
Agreed. I'm nowhere near expert enough to opine on how far the state of the art is from some global maximum.
I'd contend that, for the most part, it doesn't matter. It's a bit like the whole ML vs AGI debate ("but ML is just curve fitting, it's not real intelligence"). The more pertinent question for human society is the impact it has - positive or negative. ML, with all its real or perceived weaknesses, is having a significant impact on the economy specifically and society generally.
It'll be little consolation for white collar workers who lose their jobs that the bot replacing them isn't "properly intelligent". Equally, few people using Siri to control their room temperature or satnav will care that the underlying "intelligence" isn't as clever as we like to think we are.
Maybe current approaches will prove to have a cliff-edge limitation like previous AI approaches did. That will be interesting from a scientific progress perspective. But even in its current state, contemporary ML has plenty of scope to bring about massive changes in society (and already is). We should be careful not to miss that in criticising current limitations.
Word. I think we’re actually at a level now that we’ll soon start questioning how intelligent people really are, and how much of human intelligence is just an uncanny ability to hide incompetence/lack of deeper comprehension.
(Of course we’re a hell of a long way from A.I. with deep comprehension, and may remain so for hundreds of years. It’s impossible to predict that kind of quantum leap IMHO.)
This perspective makes sense pragmatically, but in philosophical terms it’s a little absurd.
Going back to Turing, the argument was for true, human creativity. The claim was that there is no theoretical reason a machine cannot write a compelling sonnet.
After spending the better part of a century on that problem, we have made essentially zero progress. We still believe that there is no theoretical reason a machine cannot write a compelling sonnet. We still have zero models for how that could actually work.
If you are a non-technical person who has been reading popular reporting about ML, you might well have been given the impression that something like GPT2 reflects progress on the sonnet problem. Some very technical people seem to believe this too? Which seems like an issue, because there’s just no evidence for it.
Maybe a larger/deeper/more recurrent ML approach will magically solve the problem in the next twenty years.
And maybe the first machine built in the 20th century that could work out symbolic logic faster than all of the human computers in the world would have magically solved it.
There was no systematic model for the problem, so there was no reason to conclude one way or another, just as there isn’t any today.
ML is a powerful metaprogramming technique, probably the most productive one developed yet. And that matters.
It’s just still not at all what we’ve been proposing to the public for a hundred years. To the best of our understanding, it’s not even meaningfully closer. And that matters too, even if Siri still works fine.
Re sonnet problem: we can use GPT-2 to generate 10k sonnets, then choose the best one (say by popular vote, or expert opinion, etc), it's quite likely to be "compelling" or at least on par with an average published sonnet. Do you agree? If yes, then with some further deep learning research, more training data, and bigger models, we will probably be able to eventually shrink the output space to 1k, 100, and eventually maybe just 10 sonnets to choose from, to get similar quality. Would this be considered "progress for that problem" in your opinion?
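A rough sketch of the "generate many, then pick" loop, assuming the Hugging Face `transformers` library; the `score` function below is a made-up placeholder standing in for "popular vote, or expert opinion, etc":

```python
# Sample many GPT-2 continuations and keep the highest-scoring one.
# The scoring heuristic is purely a placeholder, not a real measure of how
# "compelling" a sonnet is.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Shall I compare thee"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

outputs = model.generate(
    input_ids,
    do_sample=True,            # sample rather than greedy-decode
    top_p=0.9,
    max_length=120,
    num_return_sequences=20,   # scale this up towards 10k in principle
    pad_token_id=tokenizer.eos_token_id,
)
candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def score(text):
    # Hypothetical stand-in for human judgement: crude lexical diversity.
    return len(set(text.split()))

print(max(candidates, key=score))
```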
> After spending the better part of a century on that problem, we have made essentially zero progress.
I dunno, I heard music composed with neural nets that is above what the average human could achieve [1]. Not on par with the greatest composers, but over average human level.
In the same line of thought, I have seen models do symbolic math better than automatic solvers, generate paintings better than average humans could paint, even translate better than average second language learners.
I would rate the current level of AI at 50% of human intelligence on average, and most of that was accomplished in recent years.
> Going back to Turing, the argument was for true, human creativity.
That's not true. The Turing test is that one can't tell the difference between a human and a machine intelligence by communicating with it. That's it.
> The claim was that there is no theoretical reason a machine cannot write a compelling sonnet.
And that's absolutely not true. I can't write a compelling sonnet.
> If you are a non-technical person who has been reading popular reporting about ML, you might well have been given the impression that something like GPT2 reflects progress on the sonnet problem. Some very technical people seem to believe this too? Which seems like an issue, because there’s just no evidence for it.
I work in the field of NLP and I believe it does reflect progress, and I think there is evidence for it.
The gods are they who came to earth
And set the seas ablaze with gold.
There is a breeze upon the sea,
A sea of summer in its folds,
A salt, enchanted breeze that mocks
The scents of life, from far away
Comes slumbrous, sad, and quaint, and quaint.
The mother of the gods, that day,
With mortal feet and sweet voice speaks,
And smiles, and speaks to men: "My Sweet,
I shall not weary of thy pain."
GPT2 small generated poetry.
For the youth, who, long ago,
Came up the long and winding way
Beneath my father's roof, in sorrow—
Sorrow that I would not bless
With his very tears. Oh,
My son the sorrowing,
Sorrow's child. God keep thy head,
Where it is dim with age,
Gentle in her death!
> The next breakthrough may be made by someone academically untrained, but you can bet they'll have learned a whole lot of math, computer science, data science, and maybe neuroscience first.
I found this sentence particularly intriguing given that John Carmack recently announced that he was switching his main focus to AI.
> [..] this is an economically productive local maximum, which will likely help fuel new developments more efficiently than in previous waves.
This is exactly it. Every previous AI winter had in common that funding was cut back. However, Google and other companies in this realm could eventually reach a point where further investment wouldn't make sense to them. Until then there won't be a winter, maybe an autumn, as smaller players disappear.
The progress that has been made so far is already good enough to deliver tons of real business value. The tech is way ahead of application, as the tech has jumped forward so much in the last 5 years, and progress continues to be rapid.
I've direct evidence of that from my day job (building NLP chat bots for Intercom).
That business value will increase as NLP progresses, even if we're moving towards a local optimum.
Even if we do get stuck, real products and real revenue powered by NLP will help fund research on successive generations.
Of course there's tons of hype about AI. But there's also a big virtuous cycle which just wasn't present in the setup that created previous AI winters.
People thought that with LSTMs, and then we got transformers. People thought that with CNNs, and then we got ResNets.
Progress is always that way. It plateaus, then suddenly jumps and then plateaus again.
If your complaint is about the general move away from statistics and deep learning becoming the norm, then there are a pretty decent number of labs working on coming up with whatever the next deep learning is. There is probabilistic programming, and there are models with newer, biologically inspired computation structures.
Even inside ML and deep learning, people are trying to come up with ways to better leverage unsupervised learning and building large common sense representations of the world.
There is certainly an oversupply of applied deep learning practitioners, but there are other approaches being explored in the AI/ML community too.
Like the local maximum that the GLUE benchmark was for a few weeks (months?) before SuperGLUE got released? This field is moving so fast, it's probably wiser to hold off on over-the-top ominous predictions for a little while.
It is a summer/winter dichotomy only if you choose to think of it that way. Such a construction is superficial and drama-oriented while reflecting very little of what reality actually presents.
The current A.I. B.O.O.M. is due to end, or is ending already, but this only means we are now equipped with really powerful approximators that previous generations of researchers would not even dream of, which leaves us with a really tantalizing question:
What is the right question to ask?
We have undoubtedly proved that machines are superior at fitting; now we need to make them curious.
Yeah, some people even started giving it a name [0]: Schmidhuber's (local) optimum. It is a bit tongue-in-cheek, but the idea is that as long as Schmidhuber says he did it before, we are probably in the same basin of attraction as we were in the nineties.
The open question is whether AGI is the same as Schmidhuber's optimum, or even lies within Schmidhuber's basin.
[0] Cambridge-style debate on the topic at NeurIPS 2019.
But why does it have to be the longest AI winter? I would agree that current NLP approaches do not get us any closer to NLU. They won't hurt either though. They may even help to motivate people. I started working on NLU because the current state of voice assistants is so frustrating...
> But why does it have to be the longest AI winter?
Because we have explored the existing paradigms (symbolic, subsymbolic, and hybrid AI). The research has covered them all, and no other paradigm exists.
Curve fitting (subsymbolic) is inherently limited.
Maybe we need to reinvent symbolic AI, but almost nobody is working on it, and I'm not aware of any promising research paths/ideas for symbolic AI.
My feeling is that symbolic AI hasn't really been explored that much with modern tools and modern computing capacity. It just needs that one breakthrough to show that for certain areas (IMO NLU) it is far superior.
There's no way of predicting when this will happen, but given the current interest in AI I don't think it will take that long, even if currently there are far more people working on subsymbolic AI.
Isn't physics in the same situation? The theory is useful for countless applications but is ultimately flawed, and researchers are aware of that, of course. But we don't hear it every time there is a new application of physics. Why should the ultimate high standard be invoked so often in discussion only for ML? Other fields like psychology or economics are probably in an even worse position with their theories vs the reality.
ML is an empirical science, or a craft if you want, with useful applications. It's not the ultimate theory of intelligence.
Personally, seeing that connectionists and symbolists have started talking to each other gives me hope that there won't be another AI winter before the AI singularity.
> progress made by academically untrained researchers will, in the end, be what melts the snow, because of how "out-of-the-current-AI-box" they are in their theories
This is an unnecessarily uncharitable view of academia.
"Outside the box thinking" is frequently just ignorance and Dunning-Kruger.
Current academic NLP would have been considered quite out-of-the-current-box 10 years ago. Most academic progress is driven by young graduate students who think similarly to you.
Can we also have some sort of a scrum-ish gathering, ASAP, to agree on the correct use of the word "definition"? Sometimes it seems we use this word arbitrarily.
But is the parent really needed here? This discussion seems to have become "how dare you take a stance in this issue that we cannot fully understand"?
How about, while hungry they are sickened by the cruelty involved in producing food, but while full, they simply have other, more important things on their mind?
> Doesn't this in fact prove that numbers are discovered, not invented?
He defines elegance to be "N". He defines
N = 1
356 + N != N
Thus, real numbers are real.