This is a contrarian take: instead of open source models running on private infrastructure becoming the dominant mode of AI, they argue that the returns to scale from caching intermediate reasoning steps could give centralized models a lasting performance advantage.
He came up with a framework in which time is assumed to be real and the laws of physics evolve over time, then goes on to develop an evolutionary theory of universes: universes reproduce by producing black holes, which spawn baby universes with slightly different laws of physics. He then predicts that this process should tend to produce universes optimized to produce black holes.
The point I heard him make is that when people dismiss free will, they start accepting as inevitable the kinds of global, systemic problems that face us, of which climate change is only one. It's an element of human nature.
That's an interesting point once you start following it to its different conclusions.
For example, it’s funny how often people who claim to believe in free will adopt the language of determinism, saying things like:
“It was meant to be”
Or, “Everything happens for a reason”
Personally, I think the general population is just confused and not particularly inquisitive when it comes to asking whether they have free will, or whether their actions and language reflect that belief or lack thereof.
I suspect a feeling of determinism probably takes over when one feels like one is on autopilot and lacks a locus of control.
Does it correlate with being susceptible to cults, group think and parroting other people’s ideas like a consumer rather than synthesizing your own?
> Does it correlate with being susceptible to cults, group think and parroting other people’s ideas like a consumer rather than synthesizing your own?
It's known that being deluged with information tends to suspend one's own critical faculties in attempting to cope with the flow. Perhaps internet life, by tending to crowd out one's own thoughts, contributes to a sense of loss of will.
People who emerge from an "internet cleanse" seem to be refreshed in a way that might be interpreted as the regaining of will, of freedom to formulate.
It's still worthwhile to look at 10x and 100x concentrations, since these things bioaccumulate. Whatever negative effects are happening at 1x should be studied as we crank the concentration up. It might be fine now, but in 100 years? We should probably have an idea of how the harm/effects scale.
It all makes sense. Google is playing the editor factions against each other so they won't contest the Chrome-based Electron editors that prop up their monopoly.
The choice of totally novel and cryptic names for everything was intentional. The project is ambitious, and aims to build a completely fresh stack (OS, drivers, network stack, identity, filesystem, etc.). Given that level of ambition, the names were chosen to remind the user that this is not just Unix and TCP/IP rewritten; this is a whole new alien OS, based on distinct ideas.
The fact that Arvo (the kernel) doesn't even have a distinction between RAM and disk, or between PCI input and network input, is a much bigger deal than remembering that "Arvo is the kernel".
OP is right that naming can give a false sense of familiarity, so inverting that for things like this makes sense: create a false sense of unfamiliarity to keep users paying attention while they learn what they are using. That's been my experience so far, a heightened sense of awareness while reading through the cryptic documentation.
I think a better question would be not how long but how well people live at the end of life. Most people would prefer living to 90 fully aware and mostly active over living to 100 in misery and dementia.
Well, they could measure the biological ages of people who die (or perhaps who have a terminal illness) and compare them to people who continue to live on.
If they can get the price of the measurement low enough (which Sinclair claims will happen soon), then we can just wait and see: by doing a large number of measurements, some of the people measured will end up dying fairly quickly. (Yet another way to do it would be to collect a large number of samples and, during the experiment, compare the measurements of those who die with those of some who do not.)
You can approximate this somewhat by watching their blood tests, kidney function, pulmonary function, grip strength, state of their arteries, strength of their immune systems, etc.
People start deteriorating in these parameters long before they develop actual life threatening diseases.
Specifically, watch Bill Gates, Bezos, Branson, Thiel, and Musk. They're first in line with all the influence and resources needed, so if current life extension efforts pan out, we should see it happen.
If it happens soon, the Putin situation could get weird.
Much of the information we collect has a half-life, or situations change and that information, while still valid, is no longer applicable to the issues we currently face. The vast majority of things we learn end up getting discarded at some point.
Think of the most recent book you read. How many of the details actually stuck with you at the moment you closed it? Now think of a book you read 10 years ago. How much of that do you remember?
I took years of mathematics and was able to do well on tests, but 40 years later I only recall the "shape" of PDE solutions and couldn't actually solve anything anymore. Instead my brain is stuffed full of arcane knowledge I need to do my job, and if I don't use it within a year, I probably need to learn it all over again.
This is a valid question for different reasons, too. I have a problem with the speech and behavior of immortal characters in books - why would a 300 year old powerful vampire behave like an immature emo teenager?
There's a reason we associate wisdom with age. The more times anyone of reasonable intelligence makes mistakes, the more opportunities they have to learn, and their behavior changes according to the degree to which they take on the lessons of life.
One big danger of immortal dictators is the simple fact that they'll stop learning. Through wealth and power they shield themselves from the consequences of mistakes, getting themselves and their people stuck in a local minimum.
Imagine immortal Mitch McConnell, ever increasingly wealthy through passive income, maintaining power and privilege for his constituents and thus his hold on a senate seat.
If life extension pans out, liberal societies will have to impose term limits in a serious and well-considered way. Humans aren't ready for the existing pace of technological development, and we're going to encounter an exponentially increasing number of problems, the politics of immortality among them. The best thing we could do would be to maximize freedom of expression and minimize the duration of social institutions, to achieve sufficient maneuverability to adapt to modern life.
It wouldn't be a limitation for centuries, possibly millennia. The brain packs information very efficiently and integrates with external storage. We'll be able to digitally augment brains directly long before temporal memory capacity becomes a problem.