We need a word or phrase for this phenomenon, where we attempt to substitute human pattern recognition with algorithms that just aren't up to the job. Facebook moderation, Tesla Full Self Driving, the War Games movie, arrests for mistaken facial identification. It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label. Maybe there's a ten syllable German word that expresses it perfectly?
"Malgorithm" is used by Private Eye (the British satirical/political magazine) to highlight adverts auto-generated inappropriately to accompany articles on news and other websites.
Yeah, it's definitely the most sellable one I've seen.
I don't like the implication that it's the algorithm that's malicious rather than the person who wrote it (no algorithm is inherently malevolent or benevolent in my opinion, it's just an algorithm), but I also know this distinction is completely pointless for the vast majority of people, and "malgorithm" gets the gist across very well.
While "malevolent"/"malicious" definitely has a "wicked" connotation, you also see it in words like "maladapted" or "malodorous" which are just "bad" without the "wickedness".
That's still dumping a "bad" on the poor old algorithm though, which has done nothing wrong simply by existing and doing what it is programmed to do. Algorithms aren't bad, it's the programmers who write them and the managers who decide they should be written who are bad when things like this happen.
I don't see how you could believe this point, unless you think things like "There's no bad music, only bad musicians" as well.
Perhaps an elucidating counterpoint is an algorithm written in such a manner that it is deliberately worthless, as a joke (e.g., StackSort, Bogosort). Obviously those aren't the result of a bad programmer; they're just inherently bad algorithms.
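For the curious, a rough Python sketch of bogosort, a precisely defined algorithm that is bad by design (a toy example, nothing more):

```python
import random

def bogosort(items):
    # Shuffle until the list happens to be sorted.
    # Perfectly well-defined, intentionally terrible:
    # expected running time grows like n * n!.
    items = list(items)
    while any(items[i] > items[i + 1] for i in range(len(items) - 1)):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # eventually prints [1, 2, 3]
```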
>unless you think things like "There's no bad music, only bad musicians" as well.
I actually do though, and I say that as a musician myself. These things are inherently subjective; writing good music isn't just a matter of how closely the musician adheres to a pre-determined set of rules. The qualities of goodness and badness exist in the minds of the creators and the audience rather than being attached to the music itself in some sense. Any attempt to classify "good" versus "bad" music in the sense people usually understand it is just an appeal to authority fallacy; the only thing that makes music good is "do I personally enjoy listening to it or not?". You can try to classify music based on how closely it fits a genre's set of rules, but this quickly breaks down into absurdity in practice (for example, acts like the Grateful Dead which span many genres).
Not really, the fox crap that smells awful to me smells wonderful to a golden retriever. The "badness" of the smell is entirely down to the nose that's smelling it, the subjective experience of smelling comes from the mind rather than the particular chemical compounds which we understand as a smell.
That subjective information which describes the badness of a smell doesn't exist within the smell itself, it exists within the mind.
Hmm? I don't think an algorithm has to give "correct" answers, it just has to be precisely defined. For example, one could say "For this problem, a greedy algorithm yields decent but suboptimal answers."
Merriam-Webster online says: "a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation" and "broadly : a step-by-step procedure for solving a problem or accomplishing some end".
Algorithms solve problems. Wrong answers are not solutions to (i.e. do not solve) a given problem. Hence, algorithm implies that it provides correct answers within the parameters of the problem.
Disagree. We see this phenomenon all the time: Right solution, bad input data. Right solution, wrong problem. Worlds turned to grey goo by replicators working to some technically correct algorithm.
The definition of algorithm is wider than you think. As your parent poster noted, "greedy algorithms" exist (as do many other algorithms which provide suboptimal answers). You can easily verify this by googling.
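To make the parent's point concrete, here's a toy greedy coin-change sketch (denominations and amount invented for illustration): a perfectly well-defined algorithm that returns a valid but suboptimal answer.

```python
def greedy_change(amount, coins=(4, 3, 1)):
    # Always take the largest coin that still fits.
    # A valid algorithm, but not guaranteed to minimize the coin count.
    result = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(greedy_change(6))  # [4, 1, 1] -- three coins, although 3 + 3 needs only two
```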
The Scunthorpe problem [1] is used to describe false positives from automatic filters, which are often the result of naive substring matching. In a way, the current problem is similar, but at the semantic level.
However, it doesn't cover the other AI mistakes you mentioned, like self-driving.
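For illustration, a toy sketch of the naive substring matching behind Scunthorpe-style false positives (the banned-word list here is made up, not any real platform's filter):

```python
# Naive substring filter: flags any text containing a "banned" word,
# ignoring word boundaries and context entirely.
BANNED = ["ass", "hell"]  # illustrative only

def flagged_words(text):
    lowered = text.lower()
    return [bad for bad in BANNED if bad in lowered]

print(flagged_words("Classic hello-world assignment, please assess it"))
# ['ass', 'hell'] -- false positives from 'classic', 'assignment', 'assess' and 'hello',
# the same mechanism that trips up the town name Scunthorpe.
```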
The problem also goes in the other direction: platforms rely so much on automation that the human signal gets too faint. For example, humans have a hard time flagging actual hate speech on these platforms as well. Another example: every so often there is a front-page post on HN where a company like Google automatically shuts down service for a customer (a false positive). The customer has a hard time getting through and having the false positive corrected because their signal can't reach through the layers of automation.
The concept of "so-so automation" [1] seems relevant: innovation that allows a business or organization to eliminate human employees, but doesn't result in overall productivity gains or cost savings for society that could then be redistributed to the laid-off employees.
I think so-so automation is often used in places where there's a lot of zero-sum conflict between workers and management, or where the work itself causes a lot of negative human externalities. (This can be a good thing: it's probably okay to settle for a "worse result" from an automated system if it eliminates a lot of physical or psychological harm to people. Some content moderation issues probably fall under this case, but not this one.)
It enabled things like Facebook to displace message boards for the most part. Look at Facebook or Reddit: they are barely able to police obvious noxious behavior in English, and military juntas in Myanmar are able to organize on the platforms.
"Totalitalgorithms" (Totalgorithms?) captures the spirit of these algorithms. They seem like bugs but they're actually undirected, organic features of a total technocratic political system that is rapidly coming to dominate life in our modern societies. The filters will be tuned but not fixed because they aren't broken. They're part of what Tocqueville described as 'soft despotism':
"Thus, After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd."
> It's becoming an increasingly dystopic force in our lives and will likely get much worse before getting even worse. So it needs a label.
It's not just something that happens to us, it's something we do. We need to do better rather than accepting it as inevitable, and let future generations worry about what the best name for it was.
> Maybe there's a ten syllable German word that expresses it perfectly?
That being said, I propose Urteilsfähigkeitsauslagerungsnekrose: The necrosis that follows the outsourcing of our capability of judgement.
It's easy to blame this on imperfect technology, but I'm not so sure. A couple of months back, when all the tech companies started their holier-than-thou publicity campaigns with token actions, we faced the same issue.
"Blacklist" was banned as a term because it was deemed racist. No matter that people understand black and white outside the race issue.
So if "blacklist" is deemed racist by people, not technology, thus removing context from the equation, why is the AI wrong to assume that "attack the black soldier in C4" is hate speech?
Human interactions have a social/cultural context. You can't just recognise $thing, you have to recognise how $thing depends on $context for the correct interpretation.
Current AI either ignores context completely or doesn't parse it correctly.
It's a rediscovery of the ancient "Time flies like an arrow, fruit flies like a banana" problem.
If you're out on a date and you say "Let's go back to mine" it implies one thing. If you say it to some friends after an evening out it usually means something completely different.
Sometimes it means the same thing - but you need to know a lot about the people involved to be able to infer that accurately.
And sometimes humans can't parse these nuances accurately either.
AI-by-grep or stat bucket can't handle them at all, because the inferences are contextual and specific to situations and/or individuals. They can't be extracted from just the words themselves.
Minsky & co researched some of this in the 70s, and eventually it motivated the semantic web people. But it was too hard a problem for the technology of the time. Now it seems somewhat forgotten.
I've been using the phrase "K ohne I" for years now, which basically means "künstlich ohne Intelligenz" (artificial without intelligence). We all saw this coming. The topic has been gone over in sci-fi literature. And still, big tech decided it's time to roll it out. "A human would also not be perfect, and we claim this algo is better than the avg. human" is the last thing you hear before discriminating tech is rolled out. And since politics is in the grip of commerce, regulations will not happen early enough. We are fucked. 2040 will be horrible.
I don't have a word for the phenomenon, but the problem reminds me of a quote by Wilfrid Sellars.
"The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term"
Call it philosophy-lacking, maybe, or worldview-lacking, but understanding how things 'hang together' in broad terms is precisely what our 'intelligent' systems cannot do. Agents in the world have an integrated point of view; they're not assemblages of models, and there seems to be very little interest in building anything but the latter.
It's basically perception and reaction without cognition.
In the natural world we would call that instinct. So maybe 'artificial instinct' if we want to keep 'AI' or 'synthetic instinct' because I think it sounds better.
It is basically about using the wrong tool for the job, and continuing to do so even after you (and everyone else) are fully aware of just how wrong you are. Personifying the tool would just distract from the root cause.
"I was using a ballpeen hammer to pound in roofing nails, everything was going great until the hammer's 'synthetic instinct' resulted in a painful blow to my groin and a near fatal three story drop. It'll go better next time - I've painted the ballpeen hammer a different color."
If a human made the same mistake, we would call them incompetent, careless, and negligent.
- incompetent system
- incompetent robot
- incompesys
- incompebot
- inept system
- inept robot
- inepsys
- ineptobot
- inepobot
- bunglebot
- hambot
- sloppybot
- careless system
- careless robot
- carelessys
- carelessbot
- neglisys
- negligent robot
- neglibot
I like 'neglibot'. "YouTube neglibotted my video." "This is my new email address. My old one got neglibotted." "The app works for typing in data, but the camera feature is neglibotty." "They spent a lot of effort and turned their neglibot automod into carefulbot. In another year it may be meticubot."
Industrial revolutions have happened a few times in the past, and every time one occurs, we change our world to adopt it.
I can't help but wonder what the world will look like if services in the future are provided primarily by AIs. How do we adopt them? Do we have to invent a "New Speech" just to make the AI understand us better so we can live an easier life?
"Question: Have you consumed your food today?", "Answer: I have consumed my food today."
Or a more subtle example:
"Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)
> "Hi! Welcome to ___. What do you want to eat today?", "I want to eat item one, fifty-six, eighty-seven and ninety-one", "Do you want to eat item one, fifty-six, eighty-seven and ninety-one?", "Yes", "Please make a seat. The food will come right away!" (all without making any Hmm or Err noise)
This already happens today, with human servers. The menu items are numbered, and you tell them the number instead of the name of the dish.
And then in some places they hand you another number - a pooled session identifier - to take to your table. Then you are expected to respond to events broadcast regarding this number if the food fails to reach the destination automatically.
Blockchain Chicken Farm gives an interesting angle on this. Part of the book describes an AI-controlled pig farm. For the outcome to be good, as many variables as possible have to be removed from the pigs' lives, for example total isolation from the outside world. Otherwise there is too much for the AI to account for, and the training set also needs to grow. What does that mean for our lives as AI gets control over more aspects? What variables can be removed?
Yea, I'm Trinidadian, and even though my accent has changed significantly from living in Canada for 7 years, people and especially voice recognition get confused by some of my speech patterns.
An example is that people always hear 50 when I say 30, because of how I pronounce the "y". Anything ending in "th" or "ing" gets confused a lot by people who don't know me.
A new vernacular to interface with tools that never reveal the actual state of a system under their control, and forbid you to directly influence it. You are granted the privilege to express your limited desires, from which the system will "learn" your preferences.
If you want a vision of the future, imagine a man repeating "Hey Thermostat! Can you cool it down in here a little?" over and over–forever.
In a government system, a similar problem is called bureaucracy. It is similar in the sense that the system is very complex, beyond any single person's comprehension; the bureaucratic system is unforgiving in its conclusions; and it is the responsibility of the victim to deal with a false positive using the same (or a similarly complex) system to attempt correction.
However, this is different from bureaucracy in the level of automation and statistical inference. A bureaucratic system doesn't do inference (or at least that is not its main function), and the steps in between require human input (albeit from really quite automated humans most of the time).
I suggest automatacracy which strings together automation and bureaucracy.
How about calling it a "Buttle", after the 1985 movie Brazil, in which a certain Mr. Buttle gets arrested and killed instead of a Mr. Tuttle due to a fly in a teleprinter?
It's funny that you should mention War Games, because the only way to win this battle is not to play at all. Why are we so hell-bent on restricting speech and burning all these engineering hours trying to moderate something that cannot be moderated? Languages -- and people -- are "transformable" enough to avoid triggering "hate speech" (whatever that actually is, and whoever it is that determines it) algorithms. Let people downvote or shut their computer off if they don't like it, and leave it at that. Are we that scared of words or ideas?
I used to live in a communist country (probably the same one that Mr. Radic lives/lived in), where "hate speech" -- which was called anti-state, or anti-establishment speech back then -- was punishable by a middle-of-the-night visit by dark-clad police. Yet, people still found ways to openly criticize the Party, and there were even popular songs that openly defied them, which only demonstrates that not even human pattern recognition is good enough to detect these things, as you state in your first sentence.
It's a waste of time, but more importantly it is detrimental to society.
I am scared of the masses believing nonsense. What fraction of Americans believe the election was stolen? How many people are still going to refuse to be vaccinated? Indeed, bad ideas are scary. I am scared that my countrymen will lead another insurrection or allow themselves to be a needlessly vulnerable vector for a highly transmissible and dangerous virus.
Misinformation spreads like wildfire.
The notion that good speech counters bad speech relies on rational, well informed, critical thinking skills that are lacking in a significant portion of the populace.
It is fundamentally different for a company to exercise its right to free speech in choosing what is said on its platform than for a government to use free speech as a fig-leaf cover for suppressing dissent.
Putting up no fight against misinformation and hate speech is not a winning strategy. Society loses much more from nonsense spreading than it does from having to wait 24 hours for high quality chess content to get remoderated.
It’s not just about a game of chess being moderated.
It’s about silencing legitimate speech that you disagree with, whether it’s on a factual level or other motivations.
Perfect example: coronavirus being manufactured in a lab rather than originating in the wild. Anyone who suggested it was from a lab was moderated heavily, until a report came out saying that in fact it was.
Was this not damaging to society?
And to your point of hate speech, can you give me a definition of the term? You can’t because there isn’t one that a) covers all scenarios and, more importantly, b) doesn’t cover legitimate speech.
It is up to me to think for myself, not up to someone else to do it for me.
So instead put power in the hands of giant companies because regular people can’t be bothered to not get hot-and-bothered by reading something they don’t like? What makes these companies so morally pure?
The only way this can make sense - especially on HN - is if the people who advocate for company control of regular peoples’ lives work at these companies.
We definitely need a term for this so when we are a victim of this, we can easily raise a flag. I have a few ideas:
- Bot blunder
- Artificial stupidity
- Algofail
- Machine madness
- Neural slip
I think this captures the unnatural, unjust, and just basically wrong quality of using AI to act like humans.
"human-ops" is the justification to remove human pattern recognition because it's better for a small group of employees (e.g. "we cant let our expensive staff view flags of distressing pictures or boring chess videos, so we will get an algorithm instead". The tech companies HR say that this is pro mental health as an additional way to justify this change and any unemployment.
"In modern times, legal or administrative bodies with strict, arbitrary rulings, no "due process" rights to those accused, and secretive proceedings are sometimes called "star chambers" as a metaphor."
Example uses:
"I got starbotted."
"Instagram's automod is a starbot."
"YouTube is too starbotty for your lectures. Better post with your school account."
"We're suing them because their starbot took down our site right after our superbowl ad ran."
"Play Store starbotted release 14 so we cut a dupe release 15. How much will it cost to push the ad campaign back 2 weeks?"
"We use Gmail and Google Docs but not Google Cloud because of the starbots."
"I tried to put Google ads on it, but their starbot rejected the site because it doesn't have enough pages. It's a single-page JavaScript utility." (This is my true story about https://www.cloudping.info )
"Our site gets a lot of traffic but we don't use Google ads because of the starbot risk. Nobody needs that trouble."
"The STAHP Bill (Starbot Tempering by Adding Humans to Process) just passed the Senate! Big Tech is finally getting de-Kafka'ed. About f**ing time."
The suggested “malgorithms” is probably the best noun form for these algorithms themselves.
As for terminology that captures society's general over-reliance on automation and algorithms to handle things that really ought to have direct human intervention, I like Rouvroy's "algorithmic governmentality".
"AS", or just "artificial stupidity", is one I've heard a couple of times. It's quite mind-boggling if you think about how many people had to engineer tensors and train networks for months if not years to create a system capable of such blatant stupidity.
In my opinion this is not just about AI. This is more general. We as humans try to fix social issues with technical measures. All the racists are not going to suddenly become good people if we push them to separate platforms.
Yes, but at least you won't be labeled a conspiracy-spreading platform by the /very important/ media, and your woke employees won't walk out and disappear.
Oh, and the current administration won't pass legislation restricting your platforms or revoking Section 230.
Sorry, are you railing against freedom of association? Because it sounds like you are. If you're an insufferable ass (hypothetically speaking) and no one wants to work for you, and so they do not, that's freedom, baby.
There's an acronym: OOD - out of distribution - for these situations.
There's no reason YT can't distinguish chess from hate speech if they updated their training set. Maybe they weren't aware of this failure case, or they didn't get around to fixing it, or trying to fix it caused more false negatives. The way they assign cost to a false positive vs a false negative is also related.
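To illustrate that cost point with made-up numbers, here's a toy sketch of how the relative cost assigned to false positives vs false negatives changes which threshold looks "best" to the system:

```python
# Toy moderation scores (higher = "more likely hate speech") with true labels.
# All numbers are invented for illustration.
examples = [  # (score, is_actually_hate_speech)
    (0.95, True), (0.80, True), (0.70, False),   # 0.70 could be a chess video: "black attacks white"
    (0.55, True), (0.40, False), (0.10, False),
]

def total_cost(threshold, cost_fp, cost_fn):
    cost = 0
    for score, is_hate in examples:
        flagged = score >= threshold
        if flagged and not is_hate:
            cost += cost_fp   # innocent content taken down
        elif not flagged and is_hate:
            cost += cost_fn   # hate speech slipped through
    return cost

for threshold in (0.3, 0.6, 0.9):
    print(threshold,
          "FN expensive:", total_cost(threshold, cost_fp=1, cost_fn=10),
          "FP expensive:", total_cost(threshold, cost_fp=10, cost_fn=1))
# With false negatives priced high, the low threshold wins (flag aggressively);
# with false positives priced high, the high threshold wins (flag cautiously).
```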
"Cheap AI"/"cheap automation", to dispel the notion that throwing data at a neural network is high science or serious engineering. Or, even more directly, reduce it to "fuzzy matching". AI is just a fuzzy pattern database.
The flagging itself wouldn't be so much of an issue if it weren't for the fear that you can't get an actual human being at Google to get in touch with you.
> Ooopsie woopsie! Our AI made a fucky wucky and locked your account for no reason lol. We pwomise nothing pls go away and die uwu PS: If you make a new account we gonna shadowban iwt immediately lmao, if you have any compwaints please write a letter and addwess it to the hospital you were born in.
It's not like humans are generally better than this. I mean, look at the GitHub master branch fiasco. It had a completely different meaning than "master" with slavery connotations, yet the outrage was so large that GitHub changed the name. I'd say this is the same behavior as this algorithm: seeing a word and getting "triggered", marking it as toxic even though it has a completely different meaning.
A receiver operating characteristic curve (ROC curve) [1] describes the tradeoff between the true positive rate (sensitivity) and the false positive rate. No matter how sophisticated we think our classifiers are, the confluence of physics and mathematics will always limit the accuracy of our automated systems. It is just a matter of what kinds of errors we are willing to tolerate.
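A minimal sketch of that tradeoff (scores and labels invented for illustration): sweeping the decision threshold just moves you along the ROC curve, and no threshold drives both error rates to zero at once when the score distributions overlap.

```python
# Made-up classifier scores and true labels, just to trace a few ROC points.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
labels = [1,   1,   0,   1,   0,   0]   # 1 = positive class

def roc_point(threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos, neg = labels.count(1), labels.count(0)
    return fp / neg, tp / pos   # (false positive rate, true positive rate)

for t in (0.1, 0.5, 0.75, 0.95):
    fpr, tpr = roc_point(t)
    print(f"threshold={t:.2f}  FPR={fpr:.2f}  TPR={tpr:.2f}")
# Lowering the threshold raises TPR but also FPR; raising it does the reverse.
```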
No. What we need is something that is amazingly simple:
* you, a programmer who coded something that caused company X to lose Y dollars because of a "mistake", have to be on the hook for Y dollars.
* you, a manager who managed a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
* you, the CEO of the company that had a product owner who accepted the work of the manager of a project that caused company X to lose Y dollars because of a "mistake" by your programmer, have to be on the hook for Y dollars.
Yes, I understand that the cost of a mistake is now higher than the loss suffered by X. That's the incentive to ensure that it does not happen, because the wives or husbands or partners of the people who would now have to pay are going to make sure they do not take those wacky ideas and implement them -- wacky ideas are abstract, but the loss of nice housing, nice organic food, nice daycare for the kids and a nice scholarship fund is real.
Funny, we hold civil engineers accountable and there seem to be plenty of capable engineers to do things that need to be done, and we hold bad pilots accountable, and yet the only reason airlines have a pilot shortage is because they refuse to pay more than $20k a year.
Developers need to stop being so allergic to accountability. It's kind of pathetic
I studied mechanical engineering myself, so I hear you.
That being said, software development and civil engineering are very different in terms of risk management. There are strict regulations around what you do, and if you play by the rules, you have minimal to no risk. Even if dozens of people die under a collapsing building, you are not accountable if everything you did was by the book.
Software development, on the other hand, is more like the Wild West. There are minimal regulations, only best practices. One developer can never know if the end product is free of errors.
Pilots are a completely different story. The OP was talking about "causing company X to lose Y dollars", not human lives.
What about the customers who bought/consumed that faulty product? They should also pay Y dollars, since it was their decision and responsibility to take the risk of buying the product and not vetting management, developers and the CEO before hand.