The parent's point is that they may not be absorbing stereotypes from humans at all. They may be generating accurate beliefs about the world from text representations of the world.
My point is that neither "insects are unpleasant" nor "plants are pleasant" nor "doctors are 66% male" is an immutable feature of the universe. They are merely snapshots of the human view of world conditions, as the world is now. "True now", but not "true forever and always".
The paper seems to advocate for designing ML systems that learn that what is "true now" may not be "true forever and always". It seems to be quite the opposite of "there are certain truths that ML systems should not learn."
If your standard for truth is "immutable feature of the universe" then you might as well give up now because we don't know about any of those, or indeed if any exist at all.
Setting such a standard for a machine is ridiculous if all you want is a new tool to get some work done.
Quite possibly. Words relating to insects will occur in news articles about malaria, zika, crop destruction, etc. Words relating to plants might occur in articles about arbor day, spring time, environmentalism, etc.
An exercise: Words relating to insects will occur in news articles about environmentalism, crop production, rituals of rebirth, etc. Words relating to plants might occur in articles about crop destruction, the international drug trade, people getting poisoned, etc.
rmxt questioned the universality of sentiment analysis. Responding with a list of specific contexts, absent any coherent general structure, amounts to an argument against the universal truth of the discovered sentiments.
But it is a universal truth that humans generally find plants pleasant and insects unpleasant. And the word "pleasant" is entirely based on human preferences after all.
What I'm probably missing indeed is that scoping of universality to humans. Lately I've been trying to be more explicit in my written communications in an attempt to understand both the limits of my knowledge and perceptions and the limits of the sources of information that I digest.
Is suggesting that pleasantness is a sentiment that's not unique to humans really that controversial?
super late edit: it's specifically flowers, not plants, that people are biased towards finding pleasant
Do you have any evidence that this effect results in machines making systematically wrong inferences?
Near as I can tell, your paper shows that these "biases" result in significantly more accurate predictions. For example, Fig 1 shows that a machine trained on human language can accurately predict the % female of many professions. Fig 2 shows the machine can accurately predict the gender of humans.
Normally I'd expect a "bias" to result in wrong predictions - but in this case (due to an unusual redefinition of "bias") the exact opposite seems to occur.
Accuracy might mean "positively" right, as your post suggests, but that doesn't necessarily mean "normatively" right.
From what I understand, the fear surrounding embedding human stereotypes into ML systems is that the stereotypes will get reinforced. In some way or form, there will be less equality of opportunity in the future than exists today, because machines will make decisions that humans are currently making. Societal norms evolve over time, yet code can become locked in place.
Is your takeaway from this paper that we, as the creators of intelligent machines, should allow them to continue making "positively" right assumptions simply because that's the way we, as humans, have always done things? Is "positively" right, in your opinion, in all cases equivalent to "normatively" right?
I think your questions would be answered by reading the article. Particularly:
"In AI and machine learning, bias refers generally to prior information, a necessary prerequisite for intelligent action (4). Yet bias can be problematic where such information is derived from aspects of human culture known to lead to harmful behavior. Here, we will call such biases “stereotyped” and actions taken on their basis “prejudiced.”"
This definition is not unusual. This is about inferences that are wrong in the sense of prejudiced, not necessarily inaccurate.
The usual definition of bias in ML papers is E[theta_hat - theta], the expected deviation of an estimator from the true value. That is explicitly a systematically wrong prediction.
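To make that definition concrete, here's a small sketch (class and method names are mine, and the Monte Carlo setup is just one illustrative assumption): the 1/n sample-variance estimator has E[theta_hat - theta] = -sigma^2/n, which you can see empirically.

```java
import java.util.Random;

public class BiasDemo {
    // The 1/n sample-variance estimator: divides by n rather than n-1,
    // so it systematically underestimates the true variance.
    static double naiveVariance(double[] x) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;
        double ss = 0;
        for (double v : x) ss += (v - mean) * (v - mean);
        return ss / x.length; // biased: E[estimate] = (n-1)/n * sigma^2
    }

    // Monte Carlo estimate of E[theta_hat - theta] over N(0,1) samples of size n.
    static double empiricalBias(int n, int trials, long seed) {
        Random rng = new Random(seed);
        double sum = 0;
        for (int t = 0; t < trials; t++) {
            double[] x = new double[n];
            for (int i = 0; i < n; i++) x[i] = rng.nextGaussian();
            sum += naiveVariance(x);
        }
        return sum / trials - 1.0; // true variance of N(0,1) is 1
    }

    public static void main(String[] args) {
        // Theory predicts bias = -1/n = -0.2 for n = 5.
        System.out.printf("empirical bias: %.3f%n", empiricalBias(5, 200_000, 42));
    }
}
```

That systematic underestimate is "bias" in the statistical sense: the prediction is wrong on average, in a fixed direction.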
In any case, the paper suggests that this "bias" or "prejudice" is better described as "truths I don't like". I'm asking if the author knows of any cases where they are actually not truthful. The paper does not suggest any, but maybe there are some?
Again, per the article "bias refers generally to prior information, a necessary prerequisite for intelligent action (4)." This includes a citation to a well-known ML text. This seems broader than the statistical definition you cite.
Think for example of an inductive bias. If I see a couple of white swans, I may conclude that all swans are white, and we all know this is wrong. Similarly, I may conclude the sun rises everyday, and for all practical purposes this is correct. This kind of bias is neither wrong nor right, but, in the words of the article "a necessary prerequisite for intelligent action", because no induction/generalization would be possible without it.
There are undoubtedly examples where the prejudiced kind of biases lead to both truthful and untruthful predictions, but that seems beside the point, which is to design a system with the biases you want, and without the ones you don't.
This article is a little odd. It describes conservatives as being worried that professors are "subversive", but it's exactly the opposite - conservatives worry that colleges are enforcing and indoctrinating people in the orthodoxy.
It's the rare subversive academics, e.g. Charles Murray, who are being assaulted and chased off campus for holding unorthodox opinions.
Orthodoxy is not a few people (correctly) arguing that we should STOP stating that black people are inferior. Furthermore, a Nazi who happens to be against a particular war is not "peaceful but unorthodox".
Charles Murray wasn't discussing human biodiversity at all. His talk was on an entirely different topic. No one argued against him at all. They merely disrupted his talk and then assaulted him and others.
The Nazi in question did not engage in any violent acts at that gathering. That makes him a peaceful protester.
That Murray was attacked during an unrelated talk is uncontested. My objection is that you referred to the defenders of centuries-old superstition as unorthodox simply because the cultural tide began to turn against them in the 20th century.
I agree that in some historical time, some of Murray's views might be considered orthodox. That doesn't make them orthodox today - nowadays he's subversive.
Only if we accept that racism is dead. To find proof to the contrary, we need only look to popular responses to videos of unarmed black men being killed by men with guns. And that's just scratching the surface.
That's a pretty misleading way to summarize Charles Murray, who is probably America's foremost exponent of the Just World Fallacy, having written extensively about why blacks, women, and just about every other minority are in whatever diminished circumstances they find themselves in due to intrinsic deficiencies. Particularly blacks.
No, they didn't play us. They did exactly what they promised. Thanks to Uber, I don't get racially discriminated against on a daily basis. Before Uber I did.
Some folks may hate them because of Susan Fowler, but that doesn't change the fact that they've drastically improved the world for consumers.
They're still giving us exactly what they promised - promised not by just words, but actions.
Initially, Uber set itself out as the hero of the day, riding on a shining horse to fight the Evil Taxi Mafia. Anyone who looked closely at how they did that could easily predict that what they want to become is the new, but worse, Taxi Mafia. They've been assholes almost from the start, they continue to be assholes now. That so many people only got angry after sexism accusations, of all the things, only makes me sad about the state of humanity.
As a homo economicus blindfoldus - nothing. Cheaper and better service here right now? Yay, party time!
As a responsible citizen of a civilized society however, one should be interested in how such a service comes to be, and what it means to people involved and the society at large.
--
BTW. I'm founding a biotech startup now; we provide personalized medication for free, OTC, ordered through our mobile app. We can do that because of our innovative manufacturing model, which involves doing BSL-4 level work with pathogens in our garage. I.e. we're disrupting the corrupt dinosaur regulations to provide a cheaper and better service.
You're a HN regular, so you've probably seen pretty much every single Uber misbehaviour over the last few years. I don't think there's more to be added.
I found that to be true at the beginning. But now, Uber drivers are Lyft drivers are taxi drivers. They all use apps now. And evidence suggests drivers adapted and found new ways to discriminate. https://www.theatlantic.com/business/archive/2016/10/uber-ly... The only difference now is who is taking how much out of the driver's paycheck--and whether or not that company is paying taxes or just extracting wealth from any given city.
I have read the study; there is just no longer a difference. "Traditional" taxis now use apps & drivers now drive for Uber, Lyft and local taxi companies.
In a taxi you can rip off a firangi or fail to pick up a black passenger at your leisure. With Uber, this will cause your ratings to drop below 4.3, or your acceptance rate/cancellation rate to drop below/above whatever threshold they use. Then you get kicked off the platform.
Uber is providing the necessary regulation of the system that governments fail to provide.
Since taxis also operate via apps & the drivers are the same people, this is simply no longer true.
While I'll agree that Uber used to provide a higher quality experience (nicer cars etc.), this is no longer the case. The rating system is also broken - e.g. it now offers too much power to drunk customers. Uber now operates only to benefit itself - not its customers, not the drivers, and certainly not the public. Perhaps you'd prefer to be regulated by the whims of a corporation & their pursuit of profit extraction rather than the democratic legitimacy of government, but thankfully, most of us would not. Hence, even if it's taken some time for local & national governments to get up to speed, Uber is being banned, taxed, and forced to abide by the public & workplace safety rules every other company must follow.
Perhaps you'd prefer to be regulated by the whims of a corporation & their pursuit of profit extraction rather than the democratic legitimacy of government, but thankfully, most of us would not.
On the contrary, most of us would prefer this. That's why yellow cabs are losing market share everywhere that men with guns don't take away their right to choose.
>"Racism at Uber is vastly smaller than racism via traditional taxis."
And that makes racism more acceptable?
Also I guess you are not familiar with The Atlantic.
It is a 160-year-old institution, and it is very well respected. Past writers include Ralph Waldo Emerson, Oliver Wendell Holmes and Harriet Beecher Stowe. It does not trade in clickbait.
They in no way said it was more acceptable. Just that to the individual, it is more pleasant to have to deal with a company which is less explicitly racist. Certainly there is still room for improvement.
The comment two levels above said that Uber reduces my daily dose of racism to levels far lower than "daily". The comment one level up says that the study your clickbait article cites supports this point.
Yes, a small amount of racism that I don't even notice is far more acceptable than a large amount which inconveniences me daily.
Racism isn't like homeopathy, where any quantity at all has the same effect. More is worse, less is better.
In that study, the median black taxi rider is passed by 2 taxis before being picked up, vs 0 for the median white (see fig A.6). The difference for Uber is not remotely as large.
Oh, the tired old accusation of "that's just a ______ fallacy". The rest of my comment is simply pointing to the fact that the publication is not one whose business model is clickbait a la Buzzfeed. There is no claim of "authority" in that.
A venerable institution can fall on hard times and make decisions that reflect poorly on it, but generate revenue. And even in the best of times, a dud can slip through the editorial cracks.
How did you come to the conclusion that the Atlantic has fallen on hard times?
An article being a "dud" is highly subjective; even if you yourself haven't found an article to be a worthwhile read, that does not qualify it as "clickbait".
There is nothing sensational in the article, it is simply discussing findings in a study.
The Atlantic has actually fared pretty well:
[1] "The Atlantic saw the highest increase in circulation, expanding slightly by 2% in 2015."
>How did you come to the conclusion that the Atlantic has fallen on hard times?
I haven't. I'm simply pointing out that their venerability doesn't guarantee that everything they put out is of the highest caliber. The bevy of think (or whine) pieces it has published about millennials and safe spaces speaks to that, I feel.
>There is nothing sensational in the article, it is simply discussing findings in a study.
Which is sufficient as a rebuttal to the claim that it is clickbait.
This seems flippant. If a car is fast we generally understand why. We don't need to worry that under some rarely-encountered combination of circumstances it will unexpectedly do a handbrake turn and open the fuel cap.
Speed limits on public roads place legal limits on the operation of automobiles. Rules in formula racing place limits on the speed at which race cars can travel.
If you really want good static typing while being fairly Python-like, there are other great options. Java or C++ both have decent generics, speed comparable to Go, great libraries, and years of collective experience using them.
I truly hate the social hacks that are causing Go to displace much better languages.
Java is pretty darned graspable, at least I think it's way closer to Go than C++. What do you think are the biggest things that make it hard? The language? Warts like int/Integer? Library size or organization? (This is a real question, I can't look at this stuff with "fresh eyes.")
I can write a simple program in C/C++/Go/Rust/etc., and compile and run it.
In Java, a bloated IDE isn't just a recommendation, it's practically a requirement.
Every time I try to learn Java, I smack my head against the toolchain. I ended up just giving up when I tried to wrap my head around the classpath, and haven't tried since. I could probably figure it out pretty easily now, but I just don't see any allure in learning Java anyway.
One of these days, I might start using Clojure, but the JVM just feels so messy that I don't want to start.
Could you illustrate what you mean by "bloated syntax"?
Everything else you mention (besides native executables) Java also has. From what I can see, Java provides far better concurrency support - more paradigms than simply goroutines.
Classes are required. Sometimes I just want a C-style program without a class. Java looks a lot like C++ (just with garbage collection and some more consistently defined behavior). In comparison, Go (and Rust) have some nice syntactic differences: inferred types, postfix type annotations, etc.
Goroutines are nice when that is all the complexity you want. Having more paradigms isn't an interesting prospect until one becomes specifically useful.
Java may be more usable in many cases, but it, in my experience, just isn't easy to work with.
Apart from the boilerplate enclosing class (4 lines, vs. 2 in C), you can write a C-style program without classes in Java. There is literally nothing stopping you, and generics still help you significantly here.
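For what it's worth, here's a minimal sketch of what that C-style Java looks like (class and method names are mine, purely illustrative):

```java
// A C-style Java program: one enclosing class, everything static, no objects.
public class Main {
    public static void main(String[] args) {
        System.out.println(greet("world"));
    }

    // Plain functions are just static methods on the enclosing class.
    static String greet(String name) {
        return "hello, " + name;
    }
}
```

The only ceremony beyond C is the `public class Main {` / `}` wrapper and the `main` signature.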
That's completely true, but I just don't like the boilerplate. It's not hugely important in the grand scheme of things, but the more barriers I have between writing a function and running code, the less I want to deal with the toolchain/language in the first place.
I think yupyup is referring to the fact that all types must be explicitly annotated.
(I favor this and explicitly annotate most of my types in Haskell and Scala. Makes reading easier. But I did occasionally find it annoying in Java when it was mandatory.)
So Italy's law is basically "be extremely inefficient and don't compete with political insiders".
In the US we had similar rules enforced by armed men - don't compete with the Lucchese in the Bronx, the Genovese in Little Italy, etc. Clearly this system is working well for Italy. Maybe in the US we can end RICO and bring back the old system ruled by the capo di tutti capo. That could help boost our economy to Italian levels.