> He went on a week-long cabin-in-the-middle-of-nowhere trip about a year ago to dive into AI (that's all this guy needs to become pretty damn proficient).
You must be joking, right? I'm as much of a Carmack fan as anyone here, but overstating the skills of one personal hero does no good to anyone.
What a weird future it would be if Carmack turns out to be the one to figure out the critical path and get it all working. An entire field of brilliant researchers be damned.
History books (for as long as those continue to exist) would cite AGI as his major contribution to society, and his name would be more renowned than Edison or Tesla. An Einstein. None of his other contributions will matter, as the machines will replace them all.
People have approaches. There's no end to half-assed "I thought about this for 10 seconds, how hard could it be!" solutions, really old approaches from decades ago where the brightest academics thought they could lick the problem over a summer, and some new public or hidden approaches that might be promising but (I can't know of course) I predict will still look a lot different than the final thing.
I think a big reason so few people work on AGI is the PR success of the Machine Intelligence Research Institute and friends. They make a good case that things are unlikely to end well for us humans if a serious attempt at AGI is made now and proves successful without the alignment problem having been solved or mitigated first.
MIRI's concerns are vastly overrated IMHO. Any AGI that's intelligent enough to misinterpret its goals to mean "destroy humanity" is also intelligent enough to wirehead itself. Since wireheading is easier than destroying humanity, it's unlikely that AGI will destroy humanity.
Trying to make the AGI's sensors wirehead-proof is the exact same problem as trying to make the AGI's objective function align properly with human desires. In both cases, it's a matter of either limiting or outsmarting an intelligence that's (presumably) going to become much more intelligent than humans.
Hutter wrote some papers on avoiding the wireheading problem, and other people have written papers on making the AGI learn values itself so that it won't be tempted to wirehead. I wouldn't be surprised if both also mitigate the alignment problem, due to the equivalence between the two.
Yes, AGI right now is as much cognitive neuroscience and philosophy as computer science, if not more, but a lot depends on the approach one is taking. It's funny to think you have some kind of working model you can throw research data against to see how it holds up, and then doubt yourself when you spend 3 hours on Twitter arguing over fundamentals with another person who is just as convinced of their model. A lot of popular ideas sound crazy (or non-workable), so you just have to accept that whatever idea you are pushing is going to sound crazy as well.
The problem of ensuring that the AI's values are aligned with ours. One big fear is that an AI will very effectively pursue the goals we give it, but unless we define those goals (and/or the method by which it modifies and creates its own goals) perfectly -- including all sorts of constraints that a human would take for granted, and others that are just really hard to define precisely -- we might get something very different from what we actually wanted.
>A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI.
Hassabis and DeepMind have a fairly organised approach of looking at how real brains work and trying to model different problems: Atari games, then Go, and recently StarCraft. Not quite sure what's next up.
I'm not sure I want AGI to succeed, given some of the possibilities. Sure if it plays nicely alongside us, amplifying human society, that's great. But if we get relegated to second class with the AIs doing everything meaningful, then no thanks.
Why not? I'd say that a world that is managed by AGI with limited input from human beings is a good goal to have. If AGI could be done without the nasty parts of human psychology, and they're inherently superior to genetically intact human beings, why shouldn't we embrace it?
I understand that it's a big assumption to make -- that a benevolent AI could be constructed. But under that assumption, why not have a benevolent dictator in the form of an AI?
Maybe. Yeah, human politics and justice systems leave something to be desired. But my worry was a little bit beyond that: that the AIs would take all the meaningful work, discoveries and creativity away from us, leaving us just to amuse ourselves. Some people might be okay with that, but I don't think becoming pets is the best goal for the human race.
If the benevolent AI ruler(s) restrained themselves to allow for humans to flourish, then okay. Assuming it could be constructed benevolently.
There is another threat when things go wrong (and they eventually always do): no matter how horrible some dictator is, eventually he/she will die, and at some point things get reshuffled by war, revolution, or some other more peaceful means.
With AI, it would try its best to preserve/enhance/spread itself forever. And its best might be much better than our best...
His interviews are not adversarial and he is not judgemental towards his guests. He isn't there to put his guests on the spot. He isn't there to get a juicy soundbite taken out of context. He allows his guests to speak for as long as they want. And his guests appear to enjoy themselves.
These things are all true even if the guest or their ideas are extremely controversial. Maybe Joe Rogan is just smart in a way that's different to the way that you are smart.
Joe Rogan might not be the most knowledgeable, but he has a key characteristic that a lot of people lack: he is willing to admit that he is wrong when shown evidence, and will adopt the more reasonable view as his own, while a lot of "smart" people will defend their views beyond reason just because admitting fault goes against their "being smart" persona.
I don't think people listen to the show to listen to him, and he probably knows that. He does, however, seem to be reasonably good at getting his guests to talk about interesting things.
Joe Rogan has pushed the “DMT is produced in our pineal gland” narrative, but there is no evidence to back this up. I’ll repost a comment I made elsewhere and also link a separate reddit discussion which cites various sources. In fairness to Joe, he said this a while ago, so perhaps he’s not so quick to jump the gun now; I don’t know, as I don’t listen to his podcasts.
“We all have it in our bodies” — This is an often-repeated myth that has never been proven. The myth originates from Rick Strassman’s work, who himself has said that he only detected a precursor, not DMT itself, and that everything else he wrote about it was hypothetical speculation. There have, apparently, been recent studies that found DMT synthesised in rat brains, but it has not yet been proven whether this translates to humans or not. Cognitive neuroscientist Dr. Indre Viskontas stated that while DMT shares a similar molecular structure to serotonin and melatonin, there is no evidence that it is made inside the brain. Similarly, Dr. Bryan Yamamoto of the neuroscience department at the University of Toledo said: “I know of no evidence that DMT is produced anywhere in the body. Its chemical structure is similar to serotonin and melatonin, but their endogenous actions are very different from DMT.”
There is a difference between the current politicized phrase "spreading misinformation" and being wrong.
Anyone who speaks on the record about their hobbies for thousands of hours will say some things that are incorrect. He might not understand something, and he is usually pretty humble about his knowledge level.
But "spreading misinformation" is something that people do because they are intentionally misleading others, or have something to gain.
I don't think he is benefiting much from the pineal gland narrative. And it sounds like, from the information you cited, it may even be correct, even if it's premature to state it as fact.
That’s fair, thanks for pointing it out. I’ll be more careful with how I express such things in future.
Regarding the pineal gland, it might be true, but it hasn’t been proven and multiple neuroscientists have stated that while DMT is similar to compounds found in the brain, it still functions quite differently and they have never seen any evidence to suggest that DMT exists in our bodies. There was a study finding it in mice brains, so it may still turn out that we have it in ours, but it’s definitely premature to make any such assumptions and definitely premature to repeat the trope.
I wonder how many historical figures went through the same thing? Who do we know for their contributions to field X, when 99% of their life was spent contributing to field Y?
Isaac Newton spent most of his life pursuing alchemy and obscure theological ideas, and found it a real nuisance whenever anyone pestered him about math or physics.
"Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago. Isaac Newton, a posthumous child bom with no father on Christmas Day, 1642, was the last wonderchild to whom the Magi could do sincere and appropriate homage."
"Researchers in England may have finally settled the centuries-old debate over who gets credit for the creation of calculus.
For years, English scientist Isaac Newton and German philosopher Gottfried Leibniz both claimed credit for inventing the mathematical system sometime around the end of the seventeenth century.
Now, a team from the universities of Manchester and Exeter says it knows where the true credit lies — and it's with someone else completely.
The "Kerala school," a little-known group of scholars and mathematicians in fourteenth century India, identified the "infinite series" — one of the basic components of calculus — around 1350."
However, calculus proper (derivatives and integrals of general functions, and the connections between them) did not exist until Newton and Leibniz. Other mathematicians made important steps towards it earlier in the 1600s, and if Newton and Leibniz had not existed, others would have figured it out around the same time.
These are interesting articles that seem to agree with what I said. The first one defines calculus in a much more limited way, and refers to some of the earlier basic components I mentioned.
I'm not a historian, but a few months ago I spent some time analysing one of Fibonacci's trigonometric tables (chords, not sine or sine-differences). Aryabhata's sine-differences were much earlier.
Very true. Newton was an alchemist first and foremost and spent the vast majority of his time practicing alchemy rather than what today one would call science. One has to wonder what private reasons/results a genius of his magnitude had in order to do that.
This little-known fact is so embarrassing to some institutions [2] that they made up a new word, "chymistry", in order to further obscure the issue and not outright admit the obvious.
> One has to wonder what private reasons/results a genius of his magnitude had, in order to do that.
Is there a reason to expect that someone who wanted to investigate the laws of the composition and reactivity of matter, in the late 1600s/early 1700s, would end up studying chemistry rather than alchemy? Sure, Boyle had introduced “chemistry” as an idea in 1661 (before Newton was born), but I imagine that alchemy would still be quite active in the late 1600s as an academic “field”, with many contributors already late in their careers studying it; whereas chemistry would have been just getting off the ground, without many potential collaborators.
Alchemy was never an academic field. It was a tradition veiled in secrecy, requiring years of private work and knowledge transmission through strict and very narrow (typically teacher-student) channels.
Your point has been brought up before -usually as an attempt by established institutions to whitewash and explain away Newton's idiosyncrasies- but there is no evidence whatsoever to back it. On the contrary, what we know (and there is a lot we do know thanks to his writings) about Newton and alchemy absolutely indicates him being immersed in the Hermetic worldview and alchemical paradigm. Clearly, Newton was practicing alchemy not as a way to look for novel techniques or as a way to bridge the old and new worlds together, but primarily because he was a devout believer.
Newton -a profound genius- stood at the threshold of two worlds colliding. He was also a groundbreaking scientist in optics/mechanics/mathematics. He was aware of Boyle's chemical research. Knowing all of that, he _absolutely_ chose to dedicate his life to alchemy. That is immensely interesting.
"Much of Newton's writing on alchemy may have been lost in a fire in his laboratory, so the true extent of his work in this area may have been larger than is currently known. Newton also suffered a nervous breakdown during his period of alchemical work, possibly due to some form of chemical poisoning (perhaps from mercury, lead, or some other substance)."
(Not OP) I don't think it does. It backs you up (barring quibbles on what you mean by "most"; years active or hours spent): "Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology".
Very few, at least in STEM fields. If you look at notable scientists in any given field, they had typically already made their main contributions in their area of expertise before the thing that made them famous. Teller had already made serious contributions to physics before the atom bomb. Jennifer Doudna (CRISPR-Cas9) was the first to see the structure of RNA (other than tRNA), using an innovative crystallographic technique. Planck is mainly known for quantum physics, but made huge contributions to the field in general.
It's hard to think of many famous scientists who weren't already well known in their field. Some stand out: Einstein, for example, had a fairly lackluster career until his Annus Mirabilis papers. Mark Z. Danielewski (House of Leaves) bounced between various jobs. But largely, the idea of the brilliant outsider is like the 10x engineer: it exists, but it's rare.
Even in Einstein's case, I would not say he lacked formal training. He had been in and around academia for most of his life. He was obviously far ahead of the curve, but he did accumulate the formal training. His stint in a regular job was more of an anomaly than his affinity for academia and physics.
Right, even Einstein had some serious academic training and mathematical chops. But I would argue that he was a bit of a wild card, because he was unable to secure a teaching position and looked very mediocre from an academic perspective. But fair point, even the geniuses had formal training and instruction.
Not an extreme example, but Albert Szent-Györgyi is known for his work with Vitamin C, when his work on bioenergetics and cancer is more interesting and possibly more promising.
To be fair, a whole uninterrupted week of highly focussed work can get you pretty far (assuming you have the necessary background, which Carmack has: linear algebra, stats, programming, etc.).
Yes, but let's not assume that the hundreds of other scientists in the field have just been twiddling their thumbs the whole time. It is preposterous to assume that someone largely new to a highly specialized field can somehow start pushing the envelope within a week. Yes, JC is nothing short of brilliant, but these sorts of assumptions just set him up to disappoint and are also highly unfair to all the other hardworking, brilliant people in the field.
How many of them are doing real research, though? Corporate researchers improve ad impressions, and academic researchers are busy generating pointless papers or they won't be paid. Very few, if any, do actual research.
And I disagree violently. The DeepMind folks are on salary, and every year they need to prove that they are worth the money. This applies to Demis himself: he needs to prove that his org deserves these gazillions of dollars per year.
I don’t think all papers are pointless, but it’s been shown that many are not reproducible, and those are worthless and pointless. There was that guy a few months ago who tried to reproduce the results of 130 papers on financial forecasting (using ML and other such techniques) and found none of them could be reproduced; most were p-hacked or contained obvious flaws like leaking test data into the training data. An academic friend of mine who works in brain-computer interfacing also says that a large number of papers he reviews are borderline or even outright fraudulent, but many get published anyway because other reviewers let them through.
So I definitely wouldn’t dismiss all papers as pointless, but there certainly is a large percentage that are, enough that you can’t simply accept a published paper’s results without reproducing them yourself.
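To make the "leaking test data into the training data" point concrete, here's a toy sketch (entirely my own made-up random-walk data and an off-the-shelf scikit-learn model, not anything from the papers in question): with overlapping time-series windows, a random train/test split scatters near-duplicates of the test windows into the training set, so the score typically comes out far better than an honest chronological split.

```python
# Toy illustration of train/test leakage in time-series forecasting.
# Hypothetical data: a random walk standing in for a price series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=2000))               # fake "price" series
X = np.stack([prices[i:i + 20] for i in range(1900)])   # overlapping 20-step windows
y = prices[20:1920]                                      # value right after each window

# Leaky: a random split mixes overlapping windows across train and test,
# so the model is effectively evaluated on data it has already seen.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=True, random_state=0)
leaky = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

# Honest: train strictly on the past, test strictly on the future.
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]
honest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr).score(X_te, y_te)

print(f"random-split R^2: {leaky:.2f}   chronological-split R^2: {honest:.2f}")
```

Exact numbers depend on the seed, but the gap between the two evaluations is the kind of flaw being described.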
The need to generate publishable papers means that a researcher can only participate in activity that leads to such a paper. He can't try to work on an idea for 5 years, because if no big papers follow, he's toast (he'd probably lose funding long before that).
You have to earn the right to work on your idea for 5 years and get paid. Otherwise we would be funding all kind of crackpots. First you demonstrate you're a good researcher by producing good results. Then you can work on whatever you feel like (either by getting hired at places like DeepMind, or by finding funding sources that want to pay for what you want to work on).
This is what I meant. In our society, only a very few people, usually already rich, can try their own ideas. Most of us have to stick with known ideas that bring profit to business owners or meaningful visibility to universities. When I was in college, I had to work on ideas approved by my professor. Now I have to work on ideas approved by my corporation. But if I had money, I'd work on something completely different. Sure, in 15 years I will be rich and can start doing my own stuff, but I'll also be old, and my ability will be nowhere near its peak at 25.
What would you work on if you could? Would you say you deserve to be paid for 5 years of uninterrupted research? Do you think you have a decent chance to make a breakthrough in some field? These are the questions I ask myself.
I have some interesting ideas about managing software complexity in general (i.e. why this complexity inevitably snowballs and how we could deal with that), or about a better way to surf the internet (which may be a really big idea, tbh). But all of these are moonshot ideas that have a slim chance of success, while I need to pay ever-rising bills. On the other hand, I have a couple of solid money-making business ideas that I'm working on, which will bring me a few tens of millions but will be of no use to society, and I have a fallback plan: a corporate job with outstanding pay that brings exactly nothing to this world (it's about reshaping certain markets to make my employer slightly richer).
Do I deserve to be paid for 5 years for something that may not work? "Deserving" something doesn't have much meaning: we, the humans, merely transform solar energy into some fluff like stadiums and cruise ships. Getting paid just means getting a portion of that stream of solar energy. There is no reason I need to "deserve it", since it's unlimited and doesn't belong to anyone. A better question to ask is how we can change our society so that everyone, especially young people, gets a sufficient portion of resources to not have to think about paying bills.
Chances of making a breakthrough are small, but that doesn't matter. It's a big numbers game: if the chances are one in a million, we let 1 billion people try and see 1000 successes. The problem currently is that we have these billions of people, but they are forced by the silly constraints of our society to non-stop solve fictional problems like paying rent.
When you have tenure, you can work on whatever you want for as long as you want. Nobody works on an idea for five years without publishing anything, though. Progress is made step by step.
Take Albert Einstein as an example, who arguably made one of the largest leaps in physics with his theory of general relativity. He never stopped publishing during that time.
> When you have tenure, you can work on whatever you want for as long as you want
Not quite. When you are a professor, you essentially become a manager for a group of researchers. You don't really do research yourself. Therefore, your main obligation becomes finding money to pay these researchers. So in reality you can only support the research someone is willing to pay for (via grants, scholarships, etc).
Funny, I wanted to make the same comment last night but was too lazy.
It wasn't the first time John did what he did, and it's not the usual kind of learning either: he was learning from first principles. I truly love this idea of replaying in your own mind what went on when something was discovered (or at least coming close to it).
Contrast that with how ML & AI are taught nowadays: thrown into a Jupyter notebook with all the FAANG libraries loaded for you...
I'm not saying he's LeCun, I'm just saying he gets up to speed absurdly fast. So it's not unreasonable to suppose that by now, he's learned enough to start seriously contributing to this kind of problem.
edit: to be clear, all I'm saying is he can catch up to the body of research already out there quicker than the average bear, and he's shown a real knack for designing solutions and being crazy productive. I'm not pretending he's gonna be publishing insane novel research anytime soon, just that I wouldn't be surprised if he ends up being a real voice in the field.
No, you can’t push the envelope in AGI after a week in the woods. That must come off as pretty insulting to the hundreds of world class scientists who have been working in the field for decades.
I never said that, where the hell did I say he pushed anything? All I'm saying is he's shown to be insanely productive and effective and I think he can catch up to the body of research (created and shared by those hundreds of scientists) to become a real contributor very quickly.
FWIW, "seriously contributing to this kind of problem" sounds basically the same as "pushing the envelope" to me. They both suggest contributing something novel and useful.
What are not basically the same thing are "he started seriously contributing to this kind of problem after a week in the woods" and "he spent a week in the woods a year ago and is ready to start contributing now. A year after that week in the woods."
You seem to have a very blasé understanding of scientific progress and genius. The fact that hundreds of world-class scientists have been working in a field for decades does not at all mean that a genius can't come along and make groundbreaking progress. That's the very definition of genius: someone who makes a leap "off the path" that nobody before him could make.