Don't forget about the good ol' tech industry bait-and-switch. Quoting myself from earlier today:
> There's the good ol' bait-and-switch of tech industry you have to consider. New tech is promoted by emphasizing (and sometimes overstating) the humane aspects, the hypothetical applications for the benefit of the user and the society at large. In reality, these capabilities turn out to be mediocre, and those humane applications never manifest - there is no business in them. We always end up with a shitty version that's mostly useful for the most motivated players: the ones with money, that will use those tools to make even more money off the users.
It applies to Apple's automated emotion reading, and it applies even more to major VCs telling us AI will Save the World. As in, maybe it could, but those interests being involved are making it less likely.
I don't agree. You're right that this article really focuses only on the positive. That said, it is indeed true that technology has changed the world for the better. If one were to somehow prove that "facebook is net bad for young children", it still doesn't mean the web, advertising, and everything that makes up that product should be destroyed.
I think you're missing the point. I'm not saying the tech is bad in principle. I'm saying that these people are out to scam you. They're trying to sell you a vision that they fully know will not materialize - because while possible on paper, it's not possible in our current economy, not right now, not at the current stage of technology, and most importantly, it's not why they actually want it to happen.
As for
> If one were to somehow prove that "facebook is net bad for young children", it still doesn't mean the web, advertising, and everything that makes up that product should be destroyed
you're right that this does not follow - however, advertising absolutely should be destroyed, because it's a cancer on modern society. And it's not caused by bad tech companies - it's what causes tech companies to go bad (as well as many other nasty things; see http://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html).
> They're trying to sell you a vision that they fully know will not materialize - because while possible on paper, it's not possible in our current economy, not right now, not at the current stage of technology, and most importantly, it's not why they actually want it to happen.
I used to think this way as well, until I realized that it just isn't true. I've seen too many tech people who genuinely believe that their tech will save the world. If only people could see how, behave in "the right way", etc. They are no different from any of the messiahs outside of tech.
> I've seen too many tech people who genuinely believe that their tech will save the world.
I was briefly like that too. All high on TED talks and startup-economy propaganda. Then, over the years, I saw all those promises fail, I saw how and why they failed, and I saw what we got instead.
And even earlier than that, I learned how to craft this kind of bullshit myself - how far a good story can take your worthless university project, even if at no point in telling it you were actually lying.
(I may have also spent an unreasonable amount of time pondering why technology in the real world is so shitty compared to sci-fi settings, even dystopian ones.)
So it's true that not everyone is out there trying to scam you on purpose. Some are still naive enough to believe (I say that without passing judgement - it would be nice if things were so simple, but they just aren't). But some are absolutely out to get you - the regular Joe, or the regular Joe & Jane Inc. Not because they hate you or the society you live in, but because there is monetary value that can be extracted from you, which they can put to some other use.
And yes, there are also those messiahs you mention, the people insisting their stuff can save the day, "Only if people could see how, behave in 'the right way' etc.". They are a danger to regular people too, but for the entrepreneurial/investor class, they're useful fools. They're tools that can be employed very cheaply, because they already believe the bullshit they're selling.
One does not preclude the other. More often than not, I've seen that it's money that causes messianism. You get lucky at some point, hit the jackpot with some company, and it makes you a lot of money - and it makes you feel smarter than other mortals, special. So special that you think you have the right to teach everyone else how to live their lives, how to think, etc.
> it is indeed true that technology has changed the world for the better.
A big part of the "technology" of the past 150 years has been "machines". And all those machines have one thing in common: they consume energy (from the wheel to the latest iPhone).
Yes, technology allowed us to increase crop yields per acre, to ship goods across the world, to keep food cold, to warm/sterilize it, to find new drugs, to mass-produce clothes for a few cents, etc.
That miracle comes from access to cheap and dense energy, also known as fossil fuels. If oil/gas/coal had not been found/used, or had just not been as abundant, we might not have known the same past 150 years.
Now, that miracle is not clean: it produces a very stable molecule (CO2) that, after 150 years, is starting to change the "world", and not necessarily for the better.
So yes, technology improved the lives of billions, but maybe at the price of some species extinctions, and it might hurt our own species if we continue to feed those machines/technologies with fossil fuel.
Hopefully, we will reassess how good a new technology is through the prism of its dependence on fossil fuel. I can't wait for fission or fusion to be widely used instead.
> Yes, technology allowed us to increase crop yields per acre, to ship goods across the world, to keep food cold, to warm/sterilize it, to find new drugs, to mass-produce clothes for a few cents, etc.
> That miracle comes from access to cheap and dense energy, also known as fossil fuels. If oil/gas/coal had not been found/used, or had just not been as abundant, we might not have known the same past 150 years.
That's only because humans were too naive and risk-averse to recognize that once we split the atom we had access to 1000x the energy with comparable reductions in waste. Perhaps if we had AI back in the mid/late 20th century it could've explained to us the benefits of nuclear power vs fossil fuels. Instead we fell for hallucinations of our own making via events like Three Mile Island, Chernobyl, and Fukushima.
"hallucinations of our own making via events like Three Mile Island, Chernobyl, and Fukushima"
I suppose the millions executed under extremist politics were also "hallucinations" then... (e.g. German holocaust, Soviet prisoner camps)
These were all very real events with very real repercussions. They did not conclusively prove there is no "safe" nuclear power, but they did illustrate the consequences of getting it wrong.
The "new" procedures around "cleaner" fission processes, e.g. with fuel recycling, all sound nice, but ultimately they carry the same costs of "getting it wrong". Having objections on safety grounds is not a "hallucination".
Unless you can clearly explain why a dangerous process has been made "safe", without insulting people's intelligence or understanding (i.e. hand-waving), you cannot prove you understand it well enough yourself to claim safety. This is my problem with the spate of "nuclear is safe" claims going around: if only a small subset of highly trained personnel can operate and diagnose it safely, without repercussions, in a closed loop, you cannot claim safety - just that there are specialized processes that, under correct supervision, might not be harmful, maybe.
> These were all very real events with very real repercussions. They did not conclusively prove there is no "safe" nuclear power, but they did illustrate the consequences of getting it wrong.
The worst consequences of these events came from the evacuation of the zones, not from the radiation.
Yes, there were deaths due to direct radiation: about 31 for Chernobyl and none for Fukushima. But that's very small compared to all the deaths due to coal pollution, and even to hydro, which is catastrophic when a dam breaks (which happens far more often than nuclear plant accidents).
So this is how we can talk about the safety of nuclear plants: by looking at the stats of the last 70 years and comparing them to the alternatives. Because unless we want to go back to candles and windmills, we can't just say "nuclear seems dangerous, so it's safer not to use it". We have to consider what we'll be using to produce electricity instead.
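The comparison across sources can be sketched as a back-of-the-envelope calculation. The figures below are illustrative assumptions, roughly in line with commonly cited deaths-per-TWh estimates (e.g. from Our World in Data), not authoritative data:

```python
# Rough deaths-per-TWh comparison across energy sources.
# NOTE: these numbers are illustrative assumptions, in the ballpark of
# commonly cited estimates; treat them as a sketch, not as ground truth.
deaths_per_twh = {
    "coal": 24.6,
    "oil": 18.4,
    "natural gas": 2.8,
    "hydro": 1.3,     # dominated by rare but catastrophic dam failures
    "nuclear": 0.03,  # including estimates for Chernobyl and Fukushima
}

def safest(sources):
    """Return the source with the lowest deaths per TWh."""
    return min(sources, key=sources.get)

# Print sources from deadliest to safest per unit of energy produced.
for name, rate in sorted(deaths_per_twh.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {rate:6.2f} deaths/TWh")
```

The point of normalizing per TWh is exactly the one made above: absolute death counts are meaningless unless divided by how much electricity each source actually produced.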
> The worst consequences of these events came from the evacuation of the zones, not from the radiation.
As if, had we only not evacuated people and left everyone around, nobody would have died and everything would have been better...
> Yes, there were deaths due to direct radiation: about 31 for Chernobyl and none for Fukushima. But that's very small compared to all the deaths due to coal pollution, and even to hydro, which is catastrophic when a dam breaks (which happens far more often than nuclear plant accidents).
This is not a good argument, or at least it's a bad statistic. You need to look at deaths per <X>, not absolute deaths. Sure, deaths are fewer with nuclear, but deaths are also fewer per coconut. Even deaths per terawatt-hour is a weak argument because, again, there are so many fewer nuclear plants, and the terawatt-hours are also lower.
A better analysis would be acres of land made uninhabitable per energy source. It doesn't matter that you have all the electricity if nobody can live anywhere, whether it's coal causing massive wildfires in Canada or failed nuclear plants forcing the evacuation of 40-mile regions. (And you need to be careful even then - the wildfires are caused by heat, which is driven in part by terawatts of energy use, though largely by fossil fuels and chemicals in the atmosphere, and dam floods are caused by lack of upkeep, a problem shared by nuclear reactors...)
Energy is dangerous (mechanical, chemical, etc.) and produces waste.
And great power requires great responsibility.
"Safety" can be seen through different lenses. If we measure by number of deaths, then nuclear is safer than dams or coal/gas power plants.
Nuclear requires some expertise to run safely, but plant designs are better now. I wonder if the fear of nuclear is greater than the actual risk.
I agree. In general, the accumulation of knowledge in accessible form, and access to that knowledge, has been good for humans. And our AI is not an alien to us. It’s just our books compressed.
Now, there could be a problem when a bad actor applies massive amounts of knowledge towards destructive purposes. If you let anyone purchase an assault rifle or a nuke, there will surely be an increased likelihood that an assault rifle or a nuke gets used. The situation is somewhat similar with AI. Refined knowledge is dangerous.
>And our AI is not an alien to us. It’s just our books compressed.
This is a potential mistake.
Do you have pets? Even if you don't you'd generally consider animals to be intelligent, right? But even the smartest animal you've seen is far dumber than the average human being (except visitors at national parks operating trash cans). I mean you can teach many kinds of animals quite a bit, but eventually you hit physical limits of their architecture.
Even humans have limitations. You learn the most when you're a child, and you do this in an interestingly subtractive way. Your brain is born with a huge number of internal connections. As we learn and age, we winnow down those connections and spend a far smaller portion of our energy output on the mind. Simply put, we become limited in how much we can learn as we age. You have to sleep. You get brain fog. You end up forgetting.
With A(G|S)I there are many unanswered questions. How far can it scale? Myself, I see no reason it cannot scale far past the human mind. Why would humans be the most optimized intelligence architecture there is? It seems unlikely evolution would have created that. When you ponder the idea that something could be far more intelligent than you, you have to come back to the thought about your pets. They live in the same reality as you, but you exist on an entirely different plane of existence due to your thinking abilities. What is the plane of existence like for something that can access all of humanity's information at once? What is it like for something that can see in infrared, in ultraviolet, in wireless, in the context of whatever tooling it can almost instantly connect to and work with - something that can work with raw data from sensors all over the planet feeding back to it at light speed?
Now, you're most likely correct in the sense that before we get some super-ASI that is far beyond us, we'll have some AGI just good enough to empower someone greedy and cause no shortage of hell on earth. But if somehow that doesn't happen, then we still have the alien to contend with.
It will be very alien to us - "it's just predicting the next word" is what I have heard repeatedly said about ChatGPT.
First, AI is far more than just ChatGPT; don't presume the same thing is happening everywhere.
Second, the LLMs are all reasoning machines drawing on encyclopedic knowledge. A great example I recently heard is that of a student parroting the names of presidents to seem smart - it isn't thinking in the exact manner that we do, but it is applying a kind of reasoning. ChatGPT may be doing something akin to prediction, but it is doing it in a manner that exposes reasoning. As the parent mentioned, our own brains use networks that refine over time through removal, and a huge number of our behaviors are "automatic". If you go looking for "consciousness" you may never find it in a machine, but it doesn't really matter if the machine can perfectly mimic everything else that you do or say.
An unfeeling, unconscious, yet absolutely "aware" and hyperintelligent machine is possibly the most alien thing we can fathom, and I agree there is no "end game": there is likely no mathematical limit to how far you can take this.
Human minds also tend to predict the next word. And we still don’t know how intelligent behavior and capability to model the world emerges in humans. It’s quite possible that it is also based on predicting what would happen next and on compressed storage of massive amounts of associative memory with attention mechanisms.
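To make "predicting the next word" concrete, here is a toy sketch: a bigram model that picks the most frequent follower of the previous word. This is only an illustration of the bare objective; real LLMs condition on far longer contexts using learned representations, and the tiny corpus below is made up for the example.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a bigram table.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Most frequent word observed after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

The interesting question in the thread is whether something qualitatively different emerges when this objective is scaled up by many orders of magnitude, not whether the objective itself is simple.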
The books are not alien to us. A mind that's born out of compressing them might be an entirely different thing. Increasingly so as it's able to grow on its own.