Whenever I read prominent figures talking about the dangers of AI, I feel like they are missing the mark.
I don't think the imminent dangers of AI are self-conscious machines rebelling and deciding to kill people, but a much more subtle and nuanced problem.
Let's say that you go to a bank and ask for a loan. The bank has a procedure to decide if they give you the loan or not. If your request is rejected, they can (hopefully) explain why you didn't qualify.
Now imagine instead that the whole process gets offloaded to a learning AI that makes that decision for the bank. For the most part, we don't really understand why systems like neural networks make a decision. So if the bank denies your loan, the best explanation they can give is 'huh, the computer said so'.
What if, suspiciously, the banks start rejecting applications from a specific minority? Has the AI determined that this particular minority is not trustworthy? Does it even matter?
What about when AI systems are introduced into legal proceedings? What happens when an AI decides whether or not someone goes to jail?
Maybe I'm out of the loop but I feel these are more pressing issues about the widespread use of AI that are not being talked about enough (definitely much less than the doomsday scenario).
> Let's say that you go to a bank and ask for a loan. The bank has a procedure to decide if they give you the loan or not. If your request is rejected, they can (hopefully) explain why you didn't qualify. Now imagine instead that the whole process gets offloaded to a learning AI that makes that decision for the bank. For the most part, we don't really understand why systems like neural networks make a decision. So if the bank denies your loan, the best explanation they can give is 'huh, the computer said so'.
This is similar to the current situation in the UK for various financial and insurance applications. The application is farmed out to an organisation that ensures compliance with whatever regulations apply, but often simply gives a yes or no answer, without details. Or at least, the details are not passed back to the customer.
> What if, suspiciously, the banks start rejecting applications from a specific minority? Has the AI determined that this particular minority is not trustworthy? Does it even matter?
This is a very real concern. Specific subsets of society tend to build and train AIs and may consider their outputs accurate or as expected, whereas other subsets of society may disagree but may be underrepresented among AI engineers. There is a danger that an AI's ability to distinguish subsets of society will be paired with value judgements. It doesn't have to be this way; we need to involve all members of society in discussions that are currently somewhat restricted.
> What if, suspiciously, the banks start rejecting applications from a specific minority?
These issues are not that specific to AI. You can have a look at what applications are turned down whether humans, algorithms or AIs are running things, and then figure out whether anything should be done about it.
Whilst that's an issue with current popular approaches (e.g. neural networks), I don't think it's too catastrophic.
One approach to dealing with this is to use algorithms which are more amenable to explanations, like decision trees or inductive logic programming: that way, we can see exactly what criteria are being used to reach a decision, and we can use existing regulations/audits/etc. to keep them in check.
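To make that concrete, here's a minimal sketch (Python with scikit-learn; the feature names, data and thresholds are invented for illustration, not any bank's real criteria) of how a decision tree's path for a single application can be read back as explicit rules:

    # Minimal sketch: a decision tree whose decision path for one loan
    # application can be printed as explicit criteria. Data is made up.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    features = ["income", "debt_ratio", "years_employed"]
    X = np.array([[30000, 0.6, 1], [80000, 0.2, 5], [50000, 0.4, 3], [20000, 0.8, 0]])
    y = np.array([0, 1, 1, 0])  # 0 = reject, 1 = approve

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

    applicant = np.array([[45000.0, 0.5, 2.0]])
    path = tree.decision_path(applicant)
    leaf = tree.apply(applicant)[0]

    # Walk from root to leaf, printing each test the applicant was subjected to.
    for node in path.indices:
        if node == leaf:
            print("decision:", "approve" if tree.predict(applicant)[0] == 1 else "reject")
            break
        f, thr = tree.tree_.feature[node], tree.tree_.threshold[node]
        op = "<=" if applicant[0, f] <= thr else ">"
        print(f"{features[f]} = {applicant[0, f]} {op} {thr:.2f}")

A customer or regulator can at least read those tests, which is not something you can do with the weights of a deep network.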
Another approach is to have the AI system produce evidence rather than an opaque response. One area where this already exists is in automated theorem proving: given a (suitably formalised) mathematical statement, programs like E and Vampire can (try to) determine whether it's true or false. In theory, we could ask such programs whether e.g. a loan application satisfies our eligibility criteria, but the resulting true/false answer would be about as opaque as using deep learning.
However, we don't have to be content with a simple boolean: programs like metis can output proofs of a statement (if it's true), whilst programs like MACE can output specific counterexamples (if it's false). It doesn't seem too far-fetched to have an AI system generate justifications for and/or against an application.
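As a toy illustration of the evidence idea (using the Z3 SMT solver as a stand-in for the provers named above; the eligibility rules and numbers are invented), an unsat core already acts as a crude justification for a rejection:

    # Toy example of "produce evidence rather than an opaque yes/no",
    # using Z3 in place of E/Vampire/metis/MACE. Rules are invented.
    from z3 import Solver, Real, sat

    income, debt_ratio = Real("income"), Real("debt_ratio")
    s = Solver()
    s.set(unsat_core=True)

    # The applicant's actual figures (untracked, treated as facts).
    s.add(income == 25000, debt_ratio == 0.7)

    # Eligibility criteria, each tracked under a name so failures can be reported.
    s.assert_and_track(income >= 30000, "minimum_income")
    s.assert_and_track(debt_ratio <= 0.5, "maximum_debt_ratio")

    if s.check() == sat:
        print("eligible; witness:", s.model())
    else:
        # The unsat core lists tracked criteria that cannot all be met.
        print("not eligible; failed criteria:", s.unsat_core())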
This is not AI. This is our general statistical regression and overall reductive quantitative approach to decision making, of which AI is a subfield. Computation lets us scale up the stupidity.
> Whenever I read prominent figures talking about the dangers of AI, I feel like they are missing the mark. I don't think the imminent dangers of AI are self-conscious machines rebelling and deciding to kill people, but a much more subtle and nuanced problem.
People are plenty aware of this. It is just that they are not always talking about the most imminent dangers. Sort of like how people talk about global warming dangers down the road, even though they are aware of e.g. cancer today.
This is already happening with generating income on YouTube.
Videos are eventually evaluated by an AI as to whether or not they're advertiser-friendly, and get demonetized if the AI deems it unsuitable, with no explanation or feedback as to how to bring it into compliance. The AI's training likely changes over time, so there's no guarantee you'll be staying in its good graces.
Many folks' business & livelihood is now dependent on the whim of a statistical AI.
That may be new to YouTube, but it's been a part of Google search for years. Whether you make the top 3 hits of a particular keyword can make or break many businesses; also based on a statistical AI.
Google is not bad at generally pulling 3 decent results, but for at least a few searches a day it returns garbage results.
Yes people are still anthropomorphizing AI. It's a subtle difference talking about their learned judgments of reality v. some sort of human-like logical decision-making process, and that's where most people get tripped up. The AI learns what you feed it. Garbage in, garbage out. The initialization of its world model is arguably the most important thing, and the hardest to make objective. Every decision following that, assuming it learns in an online manner, will be conditioned on its previous decisions since it can only learn about the things it experiences, i.e., chooses to engage with and how.
> we don't really understand why systems like neural networks make a decision
Maciej Cegłowski gave a very good talk about that specific problem, which includes a few recommendations about how we can avoid some of the worst problems as machine learning and capitalist democracy merge.
"Build a Better Monster: Morality, Machine Learning and Mass Surveillance"
> For the most part, we don't really understand why systems like neural networks make a decision.
...
> Maybe I'm out of the loop but I feel these are more pressing issues about the widespread use of AI that are not being talked about enough (definitely much less than the doomsday scenario).
This is essentially about Fairness, Accountability and Transparency with algorithms. There has been a lot of discussion and research in this space.
I think the dangers are even more real and present than you allude to. Social networks and search engines deciding what media we consume is already a form of AI, and it's already causing real harm.
Case in point: just yesterday an innocent man was falsely labeled a mass murderer on 4chan, and those threads were surfaced at the top of Google search results and on Facebook.
> I don't think the imminent dangers of AI are self-conscious machines rebelling and deciding to kill people
Those descriptions are too anthropomorphic. Why not "optimization algorithm finds and implements a solution for a problem of growing more tomatoes, which is incompatible with continued existence of humanity".
I'm more concerned with the opposite problem of how we will behave in response to AI control.
AI has no notion of the Lucas critique. It doesn't consider the perverse incentives its optimized judgement scheme may create. The costs of looking good to proprietary algorithms are only going to grow.
None of that is really an existential threat, i.e. capable of wiping out the human race. Those prominent figures are focusing on the risk of losing our status as the dominant species.
Even if that takes 10000 years, it's significantly more serious than being put in jail or failing to get a bank loan.
What you're describing basically began the first time a spreadsheet was opened by a decision maker in a bank.
From then on, the calculations only got more advanced, but computers have always hidden how they work (especially once they reach a certain scale of customers).
Discrimination is bad, but even the worst discrimination will leave millions of human survivors. People talking about the existential dangers of AI are talking about disasters that leave no human survivors.
Good! Bring on this future! Humans should never ever ever be in charge of such things. In most domains where relevant statistical data is available, like your examples, even crude algorithms like linear regression will almost always beat human experts.
Humans are biased as hell. Studies show judges give much harsher sentences just before lunch, when they are hungry. Attractive defendants get half the sentences as unattractive ones. Not to mention all the classic race and gender ones, and random factors we can't even measure.
It is inexcusable to have humans in charge of any kind of decision process that a superior unbiased algorithm can do.
What makes you think computers are less likely to be biased? It's far easier for an algorithm to "learn" an association between probability of guilt and the colour of pixels in the moving bit of the video screen or dialect words used in the defendant's transcript than it is for the algorithm to actually evaluate the testimony itself.
Your argument is predicated on the definitions of "superior" and "unbiased", and also on the data being fed into the statistical algorithm.
With respect to your definition of "superior"... One of the major tenets of U.S. law is that in criminal cases you are entitled to a jury; you're judged by your peers. It's certainly possible that one day an algorithm may be more accurate than a jury in determining whether an accused person is guilty or innocent. But the jury is more than a fact checker; it provides credibility, independence, egalitarianism, and self-determination to the entire criminal justice system. Without those, the entire system could arguably collapse.
So even assuming computers will eventually have humans beat on accuracy, it's hard to imagine algorithms providing the additional values that are necessary for a functioning complex system.
Juries provide the appearance of credibility, etc. In reality jury selection is a game played by lawyers with the explicit goal of removing independent insight from the process.
The same applies to our political and economic systems, which exist to provide rhetorical air cover for entrenched power and resource appropriation mechanisms that disfavour most of the population.
There is very little chance of independent AI spontaneously deciding to rule the world. But there is every chance of the standard entrenched political factions adding AI to the kit of tools they use for social and political manipulation.
A truly independent hyper-smart AI would be the biggest possible threat to them. Not only would it be less gullible and easy to manipulate than most humans, it might conceivably be better at manipulation than they are, with unpredictable goals.
> Juries provide the appearance of credibility, etc. In reality jury selection is a game played by lawyers with the explicit goal of removing independent insight from the process.
I beg to differ; juries may be the single most important part of our justice system. In my experience witnessing juries first hand, they take their responsibility seriously. They aren't perfect, but they are very good, and more importantly, they bring credibility to the entire system by directly bringing citizens into it.
The purpose of the jury selection "game", formally called voir dire, is to remove bias. Both sides ask questions, and if a juror demonstrates bias, they are excused. The idea that there is a process to remove bias from a jury pool is unquestionably a good one. Sometimes jurors are excused for other reasons, sometimes racist reasons, but this is illegal (see recent supreme court case on this). Also, the adversarial two-sided nature of the US justice system ensures that both plaintiff and defendant get to play the game equally.
Also, your flippant attitude towards "appearances" is misplaced. In any justice system appearances are extremely important. If a justice system does not appear fair, even if it is fair, it will collapse. That's why judges must recuse themselves from cases even if there is only "an appearance of bias" (caveat: the Supreme Court isn't subject to this rule, but that's another story).
If you really want to replace jurors with AI, an AI that decides guilty or innocent wouldn't be enough. It would need to convince people that it's fair and independent. Being judged by a jury of peers in your community provides some of those assurances. It'll be a much harder sell for a robot.
I think you completely misunderstood the parent's concern, or chose to ignore it.
The question is - what happens if an algorithm develops a bias? A bias against a specific race/sex/occupation, whatever - what happens to the principle of a free and equal society then?
Super simple example - you don't need AI to determine that men claim more frequently and more on car insurance than women. So....over the years, the rates for men have risen, until finally it was completely outlawed to take into account the sex of the applicant for car insurance quotes (at least here in the EU).
We need the human touch to iron out problems like this. An all-knowing, omnipotent algorithm is going to discriminate based on statistics, and that's something I don't think society wants.
You can of course remove race and gender data from the algorithm as legally required. There are even more advanced methods to prevent it learning that information indirectly and average out its predictions.
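A rough sketch of those two steps (Python, with synthetic data and invented column names, not any insurer's actual method): train only on permitted features, then audit whether a proxy has leaked the protected attribute back into the predictions.

    # Step 1: exclude the protected attribute from training.
    # Step 2: audit predictions for indirect (proxy) discrimination.
    # All data below is synthetic and for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    gender = rng.integers(0, 2, n)                         # protected attribute
    mileage = rng.normal(12000 + 3000 * gender, 2000, n)   # a correlated proxy
    claims = (rng.random(n) < 0.10 + 0.05 * gender).astype(int)

    # Train on permitted features only (gender itself is never seen).
    X = mileage.reshape(-1, 1)
    risk = LogisticRegression().fit(X, claims).predict_proba(X)[:, 1]

    # Audit: a large gap in mean predicted risk between groups means a proxy
    # is reintroducing the information that was removed.
    gap = risk[gender == 1].mean() - risk[gender == 0].mean()
    print(f"mean predicted risk gap between groups: {gap:.3f}")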
But I don't really see the point. Men really are riskier drivers. They should pay more on insurance. In the US they do, and no one seems to care. Men, as a group, pay exactly the rate that matches their risk. Anything else would mean women subsidizing men and paying more than their actual risk.
I like this example because it's the reverse of the typical stereotype that women are terrible drivers. The algorithm doesn't care about stereotypes. And it's a small effect, something like 5%. Men aren't being banned from driving or ostracized from society, they just pay a bit more to make up for their higher risk. Other factors like age or previous accidents matter far more.
And as the insurance companies get better data, that proxy will matter less and less. In the future they will just offer to record your driving behaviour on a test track and statistically analyze how skilled you actually are, or something like that.
And anyway this has gotten far away from the point. If getting insurance required a human to interview you and judge your character, it would be 100 times worse. You think humans can be made blind to gender and race? When the choice is between a human and an algorithm, it's indefensible to choose the human.
Would you be as ok with it if the algorithm was charging black people more? Would you also reply "well, the data shows that black people claim more, so it's only fair - otherwise, white people are subsidizing black people driving"? At the end of the day, you have as much control over being a man as you do over being black, and you are being penalized for the actions of others; that makes no sense.
>>When the choice is between a human and an algorithm, it's indefensible to choose the human.
That's where I vehemently disagree.
I don't see any reason to believe blacks are worse drivers than whites. Especially after taking other variables into account, like economic status.
If there even is an effect, just like your gender example, I doubt it would be large or align with our stereotypes. E.g. it could be the case that whites are 1% worse drivers. And yeah, make them pay more; again, as a group they aren't being treated unfairly, and anything else would amount to subsidies to them from other groups. That's not fair either.
>being penalized for the actions of others,
This difference in how we view the situation is perhaps why we disagree. I don't see it as a "penalty". If you live in a wood house, you will pay more on fire insurance. You aren't being penalized; your house is just objectively riskier. People in stone houses shouldn't be forced to subsidise your extra risk.
If you banned discrimination and removed all female drivers from the planet, insurance rates would go up to match what they are for men now. Men now pay what they would if there were only men.
>That's where I vehemently disagree.
Yeah well that's not an argument. And you can't make me bite the racism bullet if you aren't even going to acknowledge how disgusting your alternative is. After all your system has far more racism because human judges are racist. And why should someone spend twice as long in prison or be unable to get a loan because they are less attractive?
It's algorithms or nothing. Humans are not even an option. And in many cases nothing isn't an option either.
>>After all your system has far more racism because human judges are racist. And why should someone spend twice as long in prison or be unable to get a loan because they are less attractive?
Because I don't think we can quantify the justice system. Like other commenters in this thread have pointed out, the jury system is explicitly based on the idea of being judged "by your peers", with all the biases and ideas that brings with it. Could you replace that with an AI? Maybe - but how will you know you've reached the "correct" judgement then? In some bizarre scenario you might arrive at a situation where the AI reaches a judgement that literally no one is happy with - and at that point we're just ruled by a hivemind overlord, no? I'm being sarcastic, but there is a point where we serve the algorithm and not the other way around.

I mean, don't get me wrong, I would gladly submit to Culture-style Minds (Iain Banks) because I think they were being fair as described in fiction, but I have no trust that whatever we develop will be that fair. We seem to be using scattershot systems that look at the lowest common denominator and make a decision based on something that is easy and obvious, or worse, we train them on existing systems. Which (and I am sorry for bringing up the racism example again) means an AI trained on the current population of the US prison system would conclude that it has to target certain groups more because... well, clearly they commit more crime! I have no trust it would be anything other than a shallow, simple-stat-based oracle that everyone would listen to.
> Super simple example - you don't need AI to determine that men claim more frequently and more on car insurance than women. So....over the years, the rates for men have risen, until finally it was completely outlawed to take into account the sex of the applicant for car insurance quotes (at least here in the EU).
I believe that law should be repealed, because repealing it would lead to fewer people dying on the roads. So there is a case where not everyone agrees the human touch is actually a good thing.
>>I believe that law should be repealed, because it would lead to fewer people dying on the roads
Citation needed.
Counter argument - vast majority of males in prison for murder in US are black. Therefore, if we stop anti-discrimination policies in US, that will lead to saving more lives. I mean, statistics don't lie!
Or, maybe, just maybe - the statistics omit a lot of data that would allow us to narrow down what makes men claim more and why there are so many black prisoners in the US?
And that brings us back to the original problem - AI can just come to a conclusion = charge men more for insurance = fewer accidents! Or... put more black people in prisons = fewer murders! And the results might be exactly what it expects, so of course it will be declared a huge success - but it's anything but.
Do you think it should be legal for insurance companies to charge higher premiums to smokers? If so, then you agree it's okay to simply go by the statistics in some contexts, so the existence of some other contexts in which it's not, isn't a counterargument.
I think it really depends on what parameter(s) the AI is optimising for, or weighting the most. If it's efficiency or profit, there are tons of things that make sense financially, but would probably be considered unethical. E.g. we could encourage people to smoke, as it tends to lower the cost of healthcare overall (since they die younger / quicker, on average). That's just an example, the point is that I'm sure there are tons of "truths" from some perspective that an AI might employ, but we'd find unethical.
It's an interesting thought, and I tend to agree in a way.
That said, I suspect people would really quickly learn that ruthless efficiency and "unbiasedness" is probably not what they want. We have rules and laws that ban discrimination along certain lines, but would that be applicable to an opaque AI entity?
I think these postcards were great. Until personal flying machines become widespread, there's progress to be made. Phones and blockchains are nice, but won't substitute.
We actually have personal helicopters but only the super rich can afford them. Landing space is limited, fuel is quite expensive, piloting is hard.
Now the trouble with those is mostly the space required to take off and land.
Think how skilled those firemen in the picture would have to be to operate even slow jetpacks or helicopters and not crash into one another as well as land reliably.
Birds have the advantage of advanced evolved neural circuitry to handle all this.
I'd say the main advantage of birds is small weight. If humans were as light as birds, personal flying machines would've become widespread long before computers. Would've been much safer too, because light creatures can endure falls from much greater height.
Tldr from the sidebar at the end (although it wasn't very long and it was worth reading):
We should stop describing these modern marvels as proto-humans and instead talk about them as a new generation of flexible and powerful machines. We should be careful about how we deploy and use AI, but not because we are summoning some mythical demon that may turn against us. Rather, we should resist our predisposition to attribute human traits to our creations and accept these remarkable inventions for what they really are—potent tools that promise a more prosperous and comfortable future.
>Rather, we should resist our predisposition to attribute human traits to our creations
The whole idea behind AI is to add a human trait -- intelligence -- to machines. So not only is it ok to attribute human traits to these particular creations, but this is the very reason they were created, and what we strive not just to attribute, but to actually give to them.
>and accept these remarkable inventions for what they really are—potent tools that promise a more prosperous and comfortable future.
Potent tools can also be used to bring about a less prosperous and comfortable future. They've used many "potent tools" in WWI and WWII -- from airplanes to nuclear bombs.
Now imagine AI-cops, AI-surveillance, AI-armies -- including robotic-AI armies. Not sure what those machines will do if they get autonomous in some "singularity", but I'm pretty sure what the greedy, corrupt, etc bastards in power in various places around the world would use them for -- more restrictions, more control, more gain for their personal interests, more and easier wars for the side that has them, more terrorism for the sides that can hack one.
Here's the thing: it is like with philosophy. Once something is understood and solved, it is no longer philosophy but science.
Likewise with crude alleged intelligence.
The proper label to be used is usually robustness, analysis or optimization. (We're far away from actual creativity by the way, and quite far from even robustness in most cases.)
Or autonomous automated decision making. The scariest of the bunch. If not employed right or not robust or not "reasonable" (possible to reason about), it can have really bad results.
(By the way, automated decision making is pretty old, as old as industrial revolution.)
I mean, it is intelligence that we are trying to create -- and it is artificial, since by artifice we denote anything man-made.
If the objection is about the degree to which the applications I've mentioned are actually intelligence (and not something cruder/dumber), I think we could still call it AI, and not just reserve the term for actual human-or-above-level intelligence.
That's ridiculous. The entire point of AI safety is that we must not attribute human traits to these creations, such as "social instinct" and "basic decency."
A machine learning powered future is more likely in the near term. That's going to consist of systems which take in lots of information and optimize for something. Probably profits.
I imagine these systems are going to be able to understand every motive of every person. If I am thinking this through correctly, at some point they are going to be able to precisely control the behavior of everybody. The systems will know what information to show you for a desired outcome. Who knows what that future is going to look like.
That's a sci-fi possibility, but it's more likely that buggy AIs will be the problem. They'll be rushed to market in hopes of getting that market-share and will make tons of errors and miscalculations. Most will be in ways that are hard to detect, but some will be obvious.
Rather than making a perfect AI that can control everyone, I think it's far more likely we'll develop a buggy one that accidentally kills us all.