Hacker News | new | past | comments | ask | show | jobs | submit | horrified's comments

Except it didn't because Tesla reimbursed them with no fuss.


Just turn off the news. They feed on fear.


Can you explain to me then why China is still building new coal based power plants? If solar is cheaper, why would anybody build something else?


Because solar is not the cheapest form of 24/7 energy.

Baseload is the minimum level of demand on an electrical grid over some timespan. Providing that baseload with solar or wind is not as cheap as hydro, coal, natural gas or nuclear.


Because solar only works when it’s sunny.

The quoted prices for cheap solar energy don’t include storage. Solar plus storage is still expensive.
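To put rough numbers on that, here's a back-of-the-envelope sketch. All figures are made up purely for illustration, not real quotes; `solar_lcoe`, `battery_cost_per_kwh`, and `fraction_stored` are hypothetical parameters:

```python
# Back-of-the-envelope sketch with purely illustrative (invented) numbers.
solar_lcoe = 0.04            # $/kWh for panels alone (hypothetical)
battery_cost_per_kwh = 0.10  # extra $/kWh for energy cycled through storage (hypothetical)
fraction_stored = 0.5        # share of demand that must be served from storage at night

# Daytime demand is served directly from panels; nighttime demand pays
# the storage premium on top of the panel cost.
effective_cost = ((1 - fraction_stored) * solar_lcoe
                  + fraction_stored * (solar_lcoe + battery_cost_per_kwh))

print(round(effective_cost, 3))  # 0.09 - more than double the panel-only price
```

With these invented inputs, needing to store half the energy more than doubles the effective price, which is why headline solar prices and solar-plus-storage prices diverge so much.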


Inertia.


Also switching to alternative energy has a huge upfront cost in CO2 - producing all those windmills and solar panels costs a lot of energy.

If the time to turn things around is really so short, maybe that upfront CO2 explosion is not the right way to go.

Similar things hold for other projects, like energy-neutral buildings - actually building them has a huge upfront cost in CO2.


Which one do you use?


Not the person you asked, but I use an Acer XB273K GP 27" 4K as my middle/main screen. It does 120Hz just fine and supports G-Sync too.

I also have 2 LG 27UD68-W 27" 4K screens, one on the left in portrait mode, the other to the right in landscape. These were cheap, work fine at 60Hz, but have an incredibly annoying bright standby light that flashes every 2 seconds. There's no way to switch it off, the light comes through the menu joystick on the bottom of the monitor so you can't cover it with electrical tape. And the whole back of the monitor is white plastic, so it glows/reflects some of the LED light. Do not recommend!!


Thanks!


A Lenovo Legion Y27-Q20; the stand is pretty big, but otherwise it's a great monitor.


Thanks!


I feel like I have read about a similar YC startup before. Search brings up afrostream.tv - how did they fare, and what is different? https://www.ycombinator.com/companies/afrostream

Not that there can't be more than one.

For non-blacks, what would be recommended shows to watch to get a feel for black culture (or your vision of black culture, I guess)?

Edit: it seems Afrostream shut down in 2017 https://techpoint.africa/2017/09/22/afrostream-shut-down/


If an entity needs to cooperate to survive, it is selfish to cooperate. I feel like the author, like many other critics of economic theory, has not fully understood economics.

Also, the selfish gene metaphor was misleading in the sense that it is of course nonsense to ascribe any kind of motivation to genes. I think it was useful to break a common misconception, that evolution would work for the benefit of the individual.


If I were to list a dozen behaviours, such as "gives to charity", "moves all income to offshore accounts to save taxes", etc, you would probably be able to sort them into "cooperative" and "selfish".

So those are well-defined categories. That one might just be a detour to the other doesn't seem to be relevant.

And, of course, the problem is that, far too often, fans of economic theory make the opposite mistake, and consider cooperative behaviour as a failure of rationality, when in reality they are just not thinking it through to the end.


This misses the point, as does the original article.

In the economic framework, "selfishness" is just whatever personal objective a person is optimizing for. Cooperation is a tactic for achieving "selfish" goals, and the two are not necessarily at odds.

People donate to charity in the pursuit of what they personally value.

I agree that some "fans of economics" make the mistake of viewing outcomes that don't increase a personal bank account as irrational. Well respected economic thinkers clearly understand that people have other goals in life. On the flip side, "critics of economics" make the opposite mistake, thinking that values in life besides maximizing your bank account are at odds with economic theory.


It depends on what level you look at. Cooperation between individuals in a company could be for the selfish gain of the company. Cooperation between entities like companies can be for the selfish gain of even larger entities like nation states. Cooperation between nation states, e.g. members in the EU, can be for the selfish gain of the group, at the expense of states outside the group that get second class treatment.


If an entity is made out of smaller constituents, it needs cooperation in order to be selfish.

My formulation is just as correct as the standard economics one, but not as misleading, because it rightly implies that you are part of/made of larger and smaller entities, whereas the economics formulation falsely implies that you are an independent entity, capable of pure selfishness. If one cell in your body is selfish, that's called cancer and you die.


Somatic cells are clearly exploited by their reproductive-cell masters and the collaborator brain cells. They are worked to death and expendable. If they step out of line, they are relentlessly hunted down by the boot-licking immune cells. The fruits of their labor are taken from them, and they are given the bare minimum to survive, until they are no longer of use and they all die.


A cell faces incentives not to be cancerous: if it goes replicator, there's a large chance it gets killed by a hunter cell.


Also it will die when the organism it is part of dies because of cancer.


Yeah it totally lost its way by the end where the analogy to biology is thrown out the window:

> [...] And we need coop-cop regulations of selfishness at every level to avoid parts harming their needed wholes. Otherwise, harsh karma awaits as our co-fate for allowing a plague of parasitic plutocratic plundering.


seems there's a lot of disagreement about economics.

what can economists predict?


One of the examples given in the article is the prisoner's dilemma.

They present it as a celebration of "rational" yet self-defeating selfishness.

There are a lot of crazy things said in economics and by economists, but I don't think I've ever heard anyone actually claim that.

What it is useful for, what it "predicts", is that even perfectly rational agents, smarter than humanly possible, can still find themselves in situations where they can't get the best outcome for themselves by doing whatever is best for them alone.

The general conclusion and recommendation that arises is to implement some kind of meta-regulatory framework to ensure that the more desired outcome arises.

And indeed, we see exactly that in situations like criminal gangs, that co-operate to punish those that "snitch" and so solve the dilemma. The evolutionary coop-cop things mentioned are other examples.

Because the ideal and reality of the "prisoner" part so often diverge, I generally prefer the example of trying to do a drug deal or spy swap. You have something they want, and they have something you want. But unless there's some guarantee you won't be ripped off, the trade, profitable to both, won't happen.
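For anyone who hasn't seen it worked through, the dilemma's structure is easy to show in code. A minimal sketch using the textbook payoff numbers (years in prison, so lower is better; the exact values are just the standard illustration, not from any particular source):

```python
# Classic prisoner's dilemma payoffs (years in prison - lower is better).
# (my_move, their_move) -> (my_sentence, their_sentence)
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),  # both stay silent
    ("cooperate", "defect"):    (3, 0),  # I get ripped off
    ("defect",    "cooperate"): (0, 3),  # I rip them off
    ("defect",    "defect"):    (2, 2),  # both snitch
}

def best_response(their_move):
    """Return the move that minimizes my sentence, given their move."""
    return min(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defecting is dominant: it's my best move whatever they do...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...yet mutual defection (2, 2) is worse for both than mutual cooperation (1, 1).
```

Defection is the dominant strategy for each player in isolation, yet mutual defection leaves both worse off than mutual cooperation - which is exactly the gap that gang enforcement or an escrow mechanism closes.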


Economic policy does not favour cooperation.

It gets labels like "cartel" and "monopoly"


Economic policy is cooperation.


But there actually ARE conspiracies. Practically every political group is scheming how to gain power and get their demands implemented.


Yes. But the degree to which I blame my own frustrations on this or that conspiracy's machinations is the degree to which I'm not seeing the situation clearly. It's the difference between something raining on one's parade and parading in the rain.

Or as it was put to me: 'To a man with no slack, even "Bob" served the Conspiracy. To a man with "Bob"'s slack, even the Conspiracy is of service.'


I would argue, moreover, that conspiracy is the very definition of capitalism: everyone competing against each other and making alliances to crush competition.

It's not disconnected from reality to consider that some groups of people have too much power and conspire against the best interests of society at large for their own benefit. What is delusional is to link that to reptiles or the Illuminati, when the people pulling the strings are the same ones we see smiling on TV: multinationals and their lobbyists, bosses' unions (e.g. MEDEF in France), political parties.

Some of these interest groups tend to be more secretive (e.g. Bilderberg, Dîners du siècle), but they are composed of well-known figures whose interests align, and there's certainly not a single group of people controlling the entire planet's fate. It just so happens the rich and powerful have strong incentives to unite in order to screw the rest of us.


I think you would find a LOT of conspiracies in the history of communism.


Of course! Whether it's plots to assassinate powerful people, or police setups like Sacco & Vanzetti, or FBI's CoIntelPro infiltration, sabotage, setup and assassination program of all US revolutionary groups of the 50s/60s (Black Panthers, Young Lords, MOVE..)

If you're strictly speaking about marxism-leninism, which as an anarchist i do not recognize as a form of communism (but rather a form of State capitalism), that's also the case long before Stalin came in. Notoriously, the revolt of the Kronstadt soviet in 1921 was framed by Lenin and Trotsky as counter-revolutionary, see: https://en.wikisource.org/wiki/Trotsky_Protests_Too_Much

However, communism as a political perspective (whether libertarian or authoritarian) promotes mutual aid and emulation as principles of organizing society, not competition. So my specific critique of capitalist mindset/education/system as a major factor in the development of actual conspiracies does not really apply to communist ideas in themselves.


Networks might be a more useful term than conspiracies.


Horrible story :-( And once again the grievance strategy was successful.


I don't know how folks can be aware of how the exchange went down and say that it was a "successful" "grievance strategy". LeCun wasn't necessarily in the right here, and it wasn't only Gebru's twitter followers going on the offensive.


Well, LeCun quit Twitter, so it is "one down". That is what I meant by successful. And Gebru's "arguments" weren't even arguments, just "whatever you say is wrong because you are white and don't recognise our special grievances".

I personally agree with what he said when he said it is a difference between a research project and a commercial product. No actual harm was done when the AI completed Obama's image into a white person. You could just laugh about it and move on.


Not to disagree, but a couple of FYIs:

* LeCun did not really quit Twitter, he's still active on there and has been for a while - but I guess he did temporarily when all this happened.

* many researchers agreed with Gebru's opposition to LeCun's original point - see tweets by Charles Isbell, yoavgo, and Dirk Hovy embedded here https://thegradient.pub/pulse-lessons/ under 'On the Source of Bias in Machine Learning Systems' (warning - it takes a while to load). There was a civil back-and-forth between him and these other researchers, as you can see in that post, so it was a point worth discussing. Gebru mostly did not participate in this beyond her initial tweets, as far as I remember.

* LeCun got into more heat when he posted a long set of tweets to Gebru, which to many seemed like he was lecturing her on her own area of expertise, aka 'mansplaining'. I am sure many would see that as nonsense, but afaik the number of people making that point was what led to his quitting Twitter.


Thanks for the further background information. I have to say it doesn't really make it better for me. The "angry people" are of course correct that you can also create bias in other ways than data sets. But are they implying that people generally deliberately introduce such biases to uphold discrimination? That seems like a very serious and offensive claim to make, and not very helpful either.

The whole way of thinking about these issues is backwards, in my opinion. I would think that usually, when you train some algorithm, you tune and experiment until it roughly does what you want it to do. I don't think anybody starts out by saying "let's use the L2 loss function so that everybody starts white". They'll start with some loss function, and if the results are not as good as they hope, they'll try another one. In fact the usual approach will lead back to issues with the data set, because that is what people will test and tweak their algorithms with. If the dataset doesn't contain "problematic" cases, they won't be detected.

But overall, such misclassifications are simply "bugs" that should get a ticket and be fixed, not trigger huge debates. I think it is toxic to try to frame everything as an issue of race.


> Thanks for the further background information. I have to say it doesn't really make it better for me. The "angry people" are of course correct that you can also create bias in other ways than data sets. But are they implying that people generally deliberately introduce such biases to uphold discrimination? That seems like a very serious and offensive claim to make, and not very helpful either.

No.

I think Isbell's Neurips Keynote (https://nips.cc/virtual/2020/public/invited_16166.html), titled "You Can’t Escape Hyperparameters and Latent Variables" does a good job of explaining this.

The humans who ultimately validate the model (and who decide on the dataset) are a hyperparameter. Often ignored, yes, but they are still part of the training loop. They decide what the other hyperparams are, when to stop training and publish, etc.

To use a question I've asked on HN before: say you're training a model to detect criminality based on facial structure. This has come up as a real world example, papers have been published on this topic. What does a "good" dataset look like? Or similarly, for a system that decides on bail or sentence length. Do you use historical data on bail or sentencing? We have very well documented examples of bias in both of those things, even in the ground truth. So how do you decide to mitigate that bias? Or do you choose not to, and to continue enforcing said biases in your model?

> But overall, such misclassifications are simply "bugs" that should get a ticket and be fixed, not trigger huge debates

But when such "bugs" aren't prioritized because people don't think they are bugs, you have to debate whether or not they are bugs at all! The hyperparameter here is "who decides what is or isn't a bug"
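A toy sketch of that point, with entirely fabricated data: aggregate accuracy can look fine while one group is served not at all, and nothing in the training loop itself forces anyone to compute the second number - that's a human decision. The group names and split below are hypothetical:

```python
# Hypothetical skewed dataset: (group, label) pairs, 90% group_a with label 1.
dataset = [("group_a", 1)] * 90 + [("group_b", 0)] * 10

def majority_model(example):
    # A degenerate "model" that learned to always predict the majority label.
    return 1

# Aggregate accuracy: the number everyone reports by default.
overall_acc = sum(majority_model(x) == y for x, y in dataset) / len(dataset)

# Per-group accuracy: only exists if someone decides it's worth measuring.
group_b = [(x, y) for x, y in dataset if x == "group_b"]
group_b_acc = sum(majority_model(x) == y for x, y in group_b) / len(group_b)

print(overall_acc)   # 0.9 - looks fine if you only check the aggregate
print(group_b_acc)   # 0.0 - invisible unless someone chooses to look
```

Whether the second line counts as a bug worth a ticket, or is never computed at all, is decided by the humans validating the model - which is the "hyperparameter" being pointed at.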


"say you're training a model to detect criminality based on facial structure. This has come up as a real world example, papers have been published on this topic. What does a "good" dataset look like?"

I don't think anybody who is respected says "here is this data set of criminals, we have trained the algorithm on it, and therefore it is proven that such and such facial features predict criminality". I mean yeah this mistake has been made over and over again (even before the invention of computers), but it has long been debunked.

Also presumably "black skin" is a good predictor for criminality - in the current day, the crime rate is higher for black people. The algorithm only detects that, it doesn't interpret it. It is up to the humans who use the algorithm to interpret it. If you interpret it as "black people have a genetic disposition to criminality", you are wrong. But it wouldn't be the fault of the algorithm. What is insanity, but basically what the "AI ethics" people demand, is to tweak the algorithms to make them pretend the prevalence of criminality is not higher in certain populations.

"But when such "bugs" aren't prioritized because people don't think they are bugs, you have to debate whether or not they are bugs at all!"

Nobody says they are not bugs. You are creating an imaginary problem here. You really think, say, researchers at Amazon said "let's make it so that women are ranked down by the algorithm"? Likewise I don't think anybody says "the algorithm should rank black people worse for crime".

It is also not a novel idea to look out for bias in the algorithms, delivered to us by the woke crowd. The whole field is about treating bias - a machine learning algorithm is all about training some bias.


> Nobody says they are not bugs. You are creating an imaginary problem here. You really think, say, researchers at Amazon said "let's make it so that women are ranked down by the algorithm"? Likewise I don't think anybody says "the algorithm should rank black people worse for crime".

They did though, at least until Gebru and those like her came along and forced the issue.

It's really sad to see people say that this was never a concern, as though bias and ethics were taken seriously by the field as a whole more than, say, 5 years ago. They weren't. Idk if you're new to the field or weren't paying attention, but it just wasn't a thing. Most of the foundational papers on racial misclassification and the like are from 2017 and 2018.[2] That's more recent than... GANs or AlphaZero. Not to mention that there are attempts to publish garbage like this[1] every year!

> You really think, say, researchers at Amazon said "let's make it so that women are ranked down by the algorithm"? Likewise I don't think anybody says "the algorithm should rank black people worse for crime".

No, I already said this. Someone failing to notice a bug isn't malice. But the issue is that no one even considered that these kinds of things were bugs, so they didn't get noticed or researched.

> The whole field is about treating bias - a machine learning algorithm is all about training some bias.

Yes, but thinking about race as a particular category where we should avoid unintended bias (and indeed prefer generalization across categories) was a novel idea when proposed by those ethicists!

> But it wouldn't be the fault of the algorithm. What is insanity, but basically what the "AI ethics" people demand, is to tweak the algorithms to make them pretend the prevalence of criminality is not higher in certain populations.

But...you're making the algorithm. If your goal is to build a model that tries to detect "racial criminality", I'm going to suggest that you probably are doing something racist, because there isn't really a useful, non-racist, reason to train a model that incorrectly classifies people as criminal based on their skin color.

On the other hand, if you're having to do additional interpretation of the model output, why aren't you integrating that additional interpretation into the model? And if you can't, then is the model even adding any value? Probably not.

And that's not even getting into questions like what "prevalence of criminality" means. I think you mean "are arrested more often". We often think that correlates with criminality, and for some crimes it may, but for e.g. drug crimes we know that it doesn't. The point is, if you don't have at least thoughtful answers to all of those questions and more, you have no business trying to do "criminality" prediction, because your algorithm is not doing whatever you think it's doing.

[1]: https://www.bbc.com/news/technology-53165286

[2]: Seriously, Gender Shades is 2018, Debiasing word embeddings is 2016 which I think is the earliest you could argue people were taking this stuff seriously, and it cites "Unequal Representation and Gender Stereotypes in Image Search Results for Occupations" from 2015, which is kind of it.


Isbell's Neurips keynote is fantastic! Definitely recommend.


> Lecun got into more heat when he posted a long set of tweets to Gebru which to many seemed like he was lecturing her on her subject of expertise aka 'mansplaining'.

And yet seems he was right.


Not to mention Obama is 50% white.

(picture of his parents) https://static.politico.com/dims4/default/553152c/2147483647...


> LeCun wasn't necessarily in the right here

And yet, he was.

Gebru couldn't know that, because despite all her claims, she's not technical.


Gebru is not a Woz-level wizard like LeCun but someone who worked at Apple as an engineer and did a PhD with Fei-Fei Li cannot be dismissed as “not technical.”


I had a phone screen shortly before the pandemic where I emphasized that I liked understanding and solving problems for people and didn't care what specific technology I used.

I got feedback from the recruiter that the company passed on me because I was not technical enough. They literally had asked me zero technical questions.

Not too long after, the company was in the news for a massive data breach.


Well, unless you were interviewing for a CTO type job, they were probably looking for an indication of what technologies you’re most proficient in. If you’re equally proficient in 20 programming languages, that proficiency level is, with high probability, pretty low.


>If you’re equally proficient in 20 programming languages, that proficiency level is, with high probability, pretty low.

I'm way too old to put down 20 programming languages on my resume. Overcoming the compulsion to do that is how I got my first real programming job.


Surely they were able to sort it out with Tesla (some other comment here mentions Tesla refunded it without a fuss), so why is it even news?


Article: here is the darkest dark-pattern you will ever see IRL.

HN readers: this is our beloved Tesla, so how can we downplay it?


Doesn't make sense as a dark pattern, people will just reverse the sale and cause increased workload for Tesla. Why make such a bad faith assumption?


I don't think it's intentional, but I think it's still possible for it to be a net positive. All it takes is one person, who was on the fence or else too rich to bother, deciding "oh well" to make it worthwhile for Tesla.


This is every Apple and Tesla thread. Goes to show how much people love their products.

