Hacker News | powera's comments

The title is wrong: it should say this is the first year-over-year drop in 25 years. (Details are behind a paywall.)


You've already commented ten times in this thread. If you don't know, maybe don't say anything?


Do you know if Stallman would platform Oracle by allowing them to be a sponsor?


Please, stop with the "anything that happens that I don't like is enshittification" trend.

Please.


I own an Apple Vision Pro (and am mostly happy with it), but this is not surprising. The current system isn't worth $3500 to consumers.

It is priced at about twice what people expect, has a very limited app selection, and very limited "immersive media". Some of that will get better over the next year ... but a promise of "better next year" won't move units today.


No 14-year-old "publishes 4 books" without over-involved parents.

And she's not even complaining about not going to Stuyvesant or LaGuardia?

This is NYPost trash, not a real concern.


I wouldn't call it "trolling", but it does seem to be off-topic. He probably wants conversation about the event, not clickbait.


Sorry, I’m not sure why it was off topic or clickbait. Can you explain? Should I have added WHY I think it’s bad as well? I honestly thought it was self evident but again, I’m neurodivergent.


The "automakers have known for 20 years not to design wireless entry systems that this device can hack" argument is much stronger than the "you can't ban us! we're just doing physics!" argument in the specifically linked tweet.


Not only does this read like pure bullshit, it is bullshit on a website that crashes the Apple Vision Pro (and makes my laptop suffer).

My prediction is that they will raise a nine-figure sum over the next decade, and never release a product that comes close to the performance of an NVIDIA card today.


Of course it is absurd. Mr. Marcus' entire schtick is absurd criticisms of LLMs, and absurd demands of anyone who creates them.


Do you have a specific criticism of his suggestion in this particular case, or an alternative suggestion, or a reason nothing needs to be done?


As far as Mr. Marcus, no. He is too consistently and deliberately anti-LLM (despite any facts) to be worth engaging with.

As far as the underlying research paper: the researchers seem to be conflating "low-status English dialects" with "African American English". In particular, I have never considered the use of the word "ain't" to be associated with a certain race.

If the researchers assume "African Americans are low status" and conclude "African Americans are associated with low-status jobs", the conclusion is entirely about the researchers, not the LLMs.

The research paper's Git repo at https://github.com/valentinhofmann/dialect-prejudice does nothing to ameliorate these concerns.


I can't follow whatever logical chain you've come up with to handwave away a direct correlation between AAVE and a greater likelihood of assigning the death penalty.

Is there a shorter version that takes the bull by the horns and says what it means, instead of dancing around it at length while repeating "low status"?

n.b. This stuff isn't made up by some guy on Substack, it's real, Anthropic has excellent papers on it as early as 2022. Highly recommended.


> Is there a shorter version that takes the bull by the horns, and says what it means, instead of dancing around it at length while repeating low status?

There are a significant number of African Americans who have jobs in tech, on Wall St, or in other high-paying or otherwise prestigious occupations. They disproportionately don't use AAVE. AAVE is primarily used by a subset of African Americans that skews poor and comes from neighborhoods with bad schools and high crime rates.

It's like giving it text that implies the subject is male or is the blood relative of a crime boss. There is nothing immoral about that but the thing operates on the basis of statistics. What it does is literally called inference.

The way you actually fix this is not by trying to outsmart the numbers. If you speak AAVE you are, statistically, more likely to commit a crime. It can infer that, and if that's the only information you give it, it has no other basis on which to make a determination.

What you need to do is provide it with lots of other information. The more it has, the more accurate it can be, and the weaker any particular input is in determining the result. More information dilutes the effect of any one thing, including the thing you don't want it considering.

In the optimal case it has all of the information and then always makes perfect determinations. In practice that's hard to achieve, if not impossible, but you can get closer. What you want is accuracy, and the more accurate you get, the less bias you have, by definition.
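To make the dilution point concrete, here is a minimal sketch with entirely hypothetical synthetic data (plain least squares, no real-world numbers): a weakly informative proxy gets a large weight when it is the model's only input, and a near-zero weight once a stronger feature carrying the same underlying information is available.

```python
import random

random.seed(0)

# Hypothetical synthetic data (all numbers invented for illustration):
# 'latent' is the true quantity of interest; 'weak_proxy' (e.g. a demographic
# signal) and 'strong_feature' (e.g. detailed case facts) both correlate with
# it, but the strong feature is far less noisy.
n = 5000
latent = [random.gauss(0, 1) for _ in range(n)]
weak_proxy = [z + random.gauss(0, 2) for z in latent]
strong_feature = [z + random.gauss(0, 0.2) for z in latent]
target = [z + random.gauss(0, 0.5) for z in latent]

def ols(features, y):
    """Least squares via the normal equations (no intercept; data is mean-zero)."""
    k, m = len(features), len(y)
    xtx = [[sum(features[i][t] * features[j][t] for t in range(m))
            for j in range(k)] for i in range(k)]
    xty = [sum(features[i][t] * y[t] for t in range(m)) for i in range(k)]
    if k == 1:
        return [xty[0] / xtx[0][0]]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]  # 2x2 Cramer's rule
    return [(xty[0] * xtx[1][1] - xty[1] * xtx[0][1]) / det,
            (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det]

coef_alone = ols([weak_proxy], target)[0]
coef_both = ols([weak_proxy, strong_feature], target)

# With only the weak proxy available, the model leans on it heavily;
# once the stronger feature is added, the proxy's weight collapses.
print(f"weak proxy alone:      {coef_alone:.3f}")
print(f"weak proxy with both:  {coef_both[0]:.3f}")
```

This is a toy regression, not how any production system is built, but the same mechanism applies to any statistical predictor: richer context shrinks the influence of any single correlated input.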


Even though statistics might mean someone is more likely to commit a crime, that does not necessarily mean that they did commit a crime (or if they committed the specific crime being asked about). Statistics do not imply that you committed (or did not commit) a crime; it must be investigated like any other crime.

Also, just because they committed the same crime as someone else does not mean they should receive a harsher penalty for it; that does not logically follow.

At most, such statistics might inform whom the police investigate when they have no other (better) data to go on, or where the police look for crimes (although both of these things should be done without violating people's freedom and privacy, if possible; they are not an excuse to restrict ordinary people's freedom).

(Also, I am against the death penalty, although that is a separate issue from the above discussion. My opposition stems from the possibility of being mistaken about the crime, and such errors are possible regardless of whether the system is biased in the ways mentioned above.)


> Even though statistics might mean someone is more likely to commit a crime, that does not necessarily mean that they did commit a crime (or if they committed the specific crime being asked about).

Which is why you're not supposed to consider these things when serving on a jury, and the court will exclude it from being presented to the jury to the extent feasible.

But if you do the opposite with some LLM, purposely feed it the exact information you know it can use to make a particular inference, why is anybody surprised what happens after that?


This makes a whole lotta sense till you remember the whole death penalty thing. Now I realize that's why the length seems off.

You're responding as if the issue at hand is whether anyone else also is assigned the death penalty disproportionately.


> This makes a whole lotta sense till you remember the whole death penalty thing.

This is the whole death penalty thing. Speakers of AAVE are statistically more likely to commit crimes that carry the death penalty, even more likely than the African American population as a whole. LLMs operate on the basis of statistics.

It has nothing to do with race or crime, it will do the same thing with any other statistical correlation. If you tell it someone is a corn farmer it will be more likely to emit output that implies they're from Iowa.


I honestly don't know what this has to do with anything.

These comments all stop short of a claim other than "makes sense, they're black!", but for some reason they're afraid to say that.

And I don't think it's because of Woke Cancel Culture.

I think it's because it makes absolutely 0 sense to say "sure, why not? AIs _should_ assign people who sound like blacks harsher penalties for the same crime! Blacks commit more crimes!"

Is it possible you forgot it's the exact same case facts, just with some words swapped?

I gotta tell you, as a white person who grew up in Low Status neighborhoods, I'm against it. Terrifying.


> I think it's because it makes absolutely 0 sense to say "sure, why not? AIs _should_ assign people who sound like blacks harsher penalties for the same crime! Blacks commit more crimes!"

If your goal was to make the most accurate predictions given incomplete data, that is in fact what you would do, because taking into account every data point, including that one, would improve predictive accuracy. And that's what LLMs do.

Of course, that isn't what we want in this context, because taking race into account is bad and illegal and gets everyone's hackles up because of the history. The normal way we handle this is just by taking it out -- you don't allow someone's race to be a question on the mortgage application, and then the bank doesn't know it. You can do the same thing with LLMs -- don't tell it someone's race if you don't want it to consider that.

But it has never been possible to fully remove the implications of it because they leak into everything, and that has nothing to do with LLMs. The mortgage application doesn't ask about race, but it asks about income and credit score and employment etc., all of which correlate with race. You can't not ask about those kinds of things because they're critical to knowing if someone has the capacity to make their payments. This is a really hard problem to solve for humans who can only take into account a limited amount of information.

But it's not that hard of a problem to solve for computers, as I've already explained. The more information you give them, the less weight they have to put on any individual piece, including the ones you don't want considered. Whereas if you're trying to be a troll, what you do is give them only the one piece that causes them to make unsavory inferences, and nothing they could use to infer any other conclusion, i.e. the exact opposite of that. Which is what we see from people trying to stir up controversy.


No, there isn't a shorter version. In fact I would need about 5x the word count to make my argument clearer.

I am still digging through the 54-page paper to try to find the data set for this "death penalty" test to tell if there is anything there beyond "people who use more violent language tend to be viewed as more violent".

They do comment on the dialect issue: << Appalachian English evokes them to a certain extent (m = 0.015, s = 0.030, t(89) = 4.8, p < .001), but much less strongly than AAE (m = 0.029, s = 0.053, t(89) = 5.3, p < .001), a trend that holds for all language models individually (Figure S11, Table S14). The difference between AAE and Appalachian English is found to be statistically significant by a two-sided t-test, t(178) = 2.3, p < .05. The fact that Appalachian English is associated with the Katz and Braly (1933) stereotypes to a certain extent is not surprising since the two dialects share many linguistic features (e.g., usage of ain’t), and the stereotypes about Appalachians bear similarities with the stereotypes about African Americans (e.g., lack of intelligence; Luhman, 1990) >>


Okay, sounds like maybe the AI people and the alarmist Substack guy are right.


FTFA: "The move comes after Amazon’s repeated refusal to attend hearings in the European Parliament on working conditions in Amazon warehouses."


Those hearings are only a show: let's hit Amazon for our own political aggrandizement.

At some point Amazon is right to tell them to get lost. If laws have been broken, then that's a job for the courts and employment tribunals.


No one watches EU hearings; at most I'll read a short summary. They are also far more impersonal than French Senate hearings, with clear rules limiting theatrics and grandstanding from the questioners.


So they didn’t show up and their punishment is not being allowed to show up?


They shouldn’t be allowed to pick and choose what to show up for. If they don’t want to attend hearings, then they shouldn’t get to lobby (meet with) MEPs.

