Of course it does, but this proposal entails banning search engines, right? I can imagine definitions of "advertising" that don't encompass search, but this author doesn't intend them; he explicitly states that he is not just classifying "paid" advertising of products as advertising but all "third-party" advertising, "full stop", and acknowledges this would make Google "cease to exist" in its "current form". He clearly intends his proposal to include banning search engines, entirely.
Obvious thought that I haven't tested: can you literally achieve this by getting it to answer in Dutch, or training an AI on Dutch text? Plausibly* Dutch-language training data will reflect this cultural difference by virtue of being written primarily by Dutch people.
* (though not necessarily, since the Internet is its own country with its own culture, and much training data comes from the Internet)
That hardly works. Though from my limited experiments, Claude's models are better at this than OpenAI's. OpenAI's models will, quite often, come up with suggestions that are literal translations of "anglicist" phrases.
Such as "Ik hoop dat deze email u gezond vindt" (I hope this email finds you well), which is so wrong that not even "simple" translation tools would suggest this.
Seeing that OpenAI's models can (could? This is from a large test we did months ago) not even use proper localized phrases but fall back on American ones, I highly doubt they can or will respond by refusing to answer when the training data gives them nothing to answer with.
Some of the examples are still wrong. Subtle, but a Dutch native will still frown at them.
But more important is that you limited the context a lot. As in: the scope, the prompt, is very narrow.
In our case, we were generating emails. Lines like greetings are but one of 20+ details in that mail, and not even the most important ones. The prompts grew ever larger, the multishot examples ever more tuned. And even then, one in a few hundred would turn up with these "horrible" translations.
We've now moved to a chain of models, where we generate emails in American English (the creative part) and then use another model to translate them to Dutch (the non-creative but culturally aware part). This works much better, as we can pick models that are good at one thing or tuned to do this one thing better (either by the LLMaaS provider, or by parameters such as temperature).
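(For illustration, a minimal sketch of what such a two-stage chain can look like, here using the OpenAI Python client; the model names, prompts, and temperatures are placeholders, not our actual setup.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stage 1: creative generation in English, higher temperature.
draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    temperature=0.9,
    messages=[
        {"role": "system", "content": "You write friendly customer emails in English."},
        {"role": "user", "content": "Draft a short follow-up email about an open invoice."},
    ],
).choices[0].message.content

# Stage 2: non-creative, culturally aware translation, low temperature.
dutch = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; could be a different provider or a tuned model
    temperature=0.2,
    messages=[
        {"role": "system", "content": "Translate into natural, idiomatic Dutch. "
                                      "Use Dutch conventions, not literal translations of English phrases."},
        {"role": "user", "content": draft},
    ],
).choices[0].message.content

print(dutch)
```

The point of splitting it up is that each stage can be swapped or tuned independently, without one giant prompt trying to do everything at once.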
I've tried Dutch answers and it is more than happy to hallucinate and give me answers that are very "American". It doesn't help that our culture has been heavily inspired by US pop culture ever since the internet.
Haven't tried prompt engineering with the Dutch stereotype, though.
I wonder (if this works at all) if the effect might be stronger if you also prompted in Dutch, preferably written by a fluent speaker rather than machine-translated.
No, but theoretically, blunt responses might be more common in Dutch-language training data. A well-fit model would be expected to replicate that. (Basically similar to straight up asking it to be more blunt, except it has probably trained a lot more on _Dutch_ than on _someone just told me to be blunt_ so the effect might be more natural and more subtle.)
In my experience, blame for this basically never lies on grunt-level devs; it's EMs and CTOs/CIOs who insist on using third-party products for everything out of some misguided belief that it will save dev time and it's foolish to reinvent the wheel. (Of course, often figuring out how to integrate a third-party wheel, and maintain the integration, is predictably far more work for a worse result than making your own wheel in the first place, but I have often found it difficult to convince managers of this. In fairness, occasionally they're right and I'm wrong!)
This doesn't feel like a "reasoning" challenge. The mental skill required to solve most of these seems to be the ability to loop over all known members of a category like "popular brand names" or "well-known actors" and see if they fit the clue.
As a human, you'd expect to fail either because you didn't know a category member (e.g. as a non-American I have no idea WTF "Citgo" is; I could never get the answer to the first question because I have never seen that name before in my life) or because you weren't able to bring it to mind; the mental act of looping over all members of a category is quite challenging for a human.
Admittedly this is something an AI system could in principle be REALLY good at, and it's interesting to test and see that current ones are not! But it seems weird to me to call what's being tested "reasoning" when it's so heavily focused on memory recall (and evaluating whether a candidate answer works or not is trivial once you've brought it to mind and doesn't really require any intelligent thought).
(If the questions were multiple-choice, eliminating the challenge of bringing candidate answers to mind that is the main challenge for a human, then I'd agree it was a "reasoning" test.)
I had the same thought. It reminds me of solving Project Euler problems, where there is often an obvious naive approach which is guaranteed to produce the correct answer but would consume prohibitive memory/compute resources to execute to completion. I suspect the models would perform much better if prompted to formulate a strategy for efficiently solving these challenges rather than solving them directly… which indicates a direction for potential improvement I suppose.
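(To make that concrete with a classic Project Euler-style example rather than one of the benchmark questions: the naive approach below is guaranteed correct but scales with N, while the closed-form version answers instantly for any N.)

```python
# Project Euler problem 1: sum of all multiples of 3 or 5 below N.

def naive(n):
    # Brute force: check every number below n. Correct, but slow for huge n.
    return sum(k for k in range(n) if k % 3 == 0 or k % 5 == 0)

def triangle(m):
    # 1 + 2 + ... + m
    return m * (m + 1) // 2

def efficient(n):
    # Inclusion-exclusion with closed-form sums: constant time regardless of n.
    return (3 * triangle((n - 1) // 3)
            + 5 * triangle((n - 1) // 5)
            - 15 * triangle((n - 1) // 15))

assert naive(1000) == efficient(1000) == 233168
```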
I agree that recall seems to play an important role in solving these problems. Similar to how the ARC-AGI problems seem to depend on visual perception of shapes and colors. When I come up with the correct answers to such puzzles, I feel subjectively that the answers flashed into my mind, not that I reasoned my way to them.
But I do think this is reasoning. It requires recall, but anything other than a pure logic puzzle does. For example, on a competition math problem or a programming problem, no person or LLM is inventing well-known lemmas and algorithms from first principles.
I think what you mean is that once you've managed to recall, checking constraints is easy. Remarkably, a few people are much better at this than others. They are able to think fast and execute an explicit mental search over a very small number of plausible candidates. Other people take forever. Seems to be the case for models too.
I think what you said is the same as what your comment said? "Requires no non-trivial thought besides recall" seems remarkably similar to "once you have recalled an item, checking that it fits the constraints is trivial"
Or are you pointing to a nuanced difference between "easy" and "trivial" that I'm not understanding? Or do you think it requires non-trivial thought before the recall step?
The problem, of course, is that due to "disparate impact" doctrine, this (and colourblind hiring in general) is de facto illegal, and DEI scale-tipping is de facto mandatory (even though it's almost always de jure illegal).
Large American employers basically all face the same double bind: if they do not discriminate in hiring, they almost certainly will not get the demographic ratios the EEOC wants, and will get sued successfully for disparate impact (and because EVERYTHING has disparate impact, and you cannot carry out a validation study on every one of the infinite attributes of your HR processes, everyone who hires people is unavoidably guilty all the time). But if they DO discriminate, and get caught, then that's even more straightforwardly illegal and they get sued too.
There is only one strategy that has a chance of not landing you on the losing end of a lawsuit: deliberately and illegally discriminate to achieve the demographic percentages that will make the EEOC happy, but keep the details of how you're doing so secret, so that nobody can piece together the story to directly prove illegal discrimination in a lawsuit. (It'll be kinda obvious it must've happened from the resulting demographics of your workforce, but that's not enough evidence.) The FAA here clearly failed horribly at the "keep the details secret" part of this standard plan.
Curious to see if the "disparate impact" criteria get softened, i.e. by imposing a requirement to find "intentional bias" (cf. the status quo).
What I think is weird is how many firms have this reason but do it for other stated reasons, without simply stating this compliance nuance. I figure more people would accept your "paragraph three strategy" as an acceptable means to a required end. Maybe this threat is more of a "what if" with a low probability of enforcement, so in practice getting hunted for this is not that likely.
He concedes no such thing. Reserving jobs for members of a "black coalition" that any black person can join is obviously DEI, not cronyism. It's a de facto race-based filter, not one based on favour-trading or past links to the applicant.
Why not both? Near as I can tell, cronyism goes hand in hand with it. Someone has to gatekeep who counts in what bracket, someone has to represent the bracket, etc.
And the beauty is, the more brackets, the more true this is, and the more can be extracted from the system.
You're asking the wrong person there. "Both" concedes that it was "DEI".
But to actually answer the question: while it can absolutely be both, you need to provide proof of the additional claim. "People cheated for DEI reasons" and "People cheated for cronyism reasons" are two separate claims. The article provides plenty of evidence for the former and not much for the latter.
"Cronyism (noun, derogatory): the appointment of friends and associates to positions of authority, without proper regard to their qualifications."
Cronyism is advancing the interests of your personal connections. Friends and family. If you want an explicit cutoff, the Dunbar Number suggests this group should have 100, maybe 150 people in it.
Conversely, there are 40 million black people in the US, and I really doubt anyone is even associated with all of them, much less counting them all as friends.
You can change who you're friends with a lot easier than you can change your skin color, so the two result in different problems. They're both bad, of course. Similar to how "wage theft" and "shoplifting" are different crimes, even though both of them involve taking money from someone else.
> Associates. You know like people who literally belong (aka associate) to the same organization?
First, the FAA and the NBCFAE are different organizations.
Second, "Associate" does not mean "employed at the same massive organization". It means someone you actually know, on a personal level. You and I are not "associates" just because we both post on Hacker News.
Third, the question is whether you're associated with the individual, not the organization that they're a part of.
> Only hiring people who belong to the same fraternity is also cronyism
If you only hire from Harvard or some other prestigious university is that also cronyism?
Are all internal promotions cronyism?
If you only hire people who live in your city, is that also cronyism? Keep in mind that there are plenty of rural towns with fewer people than a big fraternity. Does this change if all the qualified workers in the town are black, so you're only hiring black workers?
You presumably have to draw the line somewhere, otherwise "only hiring US citizens" is also cronyism. Where, exactly, are you suggesting that line should be?
Totally agreed on this being racist, illegal, and just absurdly unethical. I just think the way you're understanding the word "cronyism" is going to lead to a lot of confusion, because it's not the way most people use it.
I'll offer up the Wikipedia definition, since it is perhaps slightly clearer: https://en.wikipedia.org/wiki/Cronyism defines it as "friends or trusted colleagues".
Indeed, it didn't have race-based questions, which I don't think anyone claimed. Rather it had totally arbitrary questions, not related to merit in any plausible way, and a score cutoff that made it highly likely you'd fail if you hadn't been tipped off with the correct answers.
For instance, there is a 15-point question for which you have to answer that your worst grade in high school was in Science, and a separate 15-point question where you have to answer that your worst grade in college was in History/Political Science; picking any of the other options (each question has 5 possible answers) means 0 marks for that question. Collectively, these two questions alone account for one eighth of all the available points. (Many questions were red herrings that were actually worth nothing.)
But then the same blacks-only group that had lobbied internally to get the questionnaire instituted (the National Black Coalition of Federal Aviation Employees) leaked the "correct" answers to the arbitrary questions to its members, allowing them to get full marks. Effectively this was a race-based hiring cartel. Non-blacks couldn't pass; blacks unwilling to join segregated racial affinity groups or unwilling to cheat the test couldn't pass; but corrupt blacks just needed to cheat when invited to and they would pass easily, entering the merit-based stage of hiring with the competition already eliminated by the biographical questionnaire.
(A sad injustice is that blacks who wouldn't join the NBCFAE or cheat the test, and so suffered the same unfair disadvantage as whites, are excluded from the class in the class-action lawsuit over this whole mess. Since the legal argument is that it was discrimination against non-blacks, blacks don't get to sue - they lost out because of their integrity, not their race, and they have no recourse at law for that.)
The test wasn't explicitly discriminatory, just the way that it was used. And the way that it was used is disputed. The facts are still being litigated. All the coverage is from opinion pieces or right-of-center newspapers.
It would also be damning of recruitment patterns across American institutions. Getting hold of prestigious or lucrative opportunities often requires pressing unfair advantages that other applicants don't even know exist, by design. My personal anecdote: my SAT score was higher than yours (statistically speaking, this statement is correct 9 times out of 10, maybe slightly less considering the audience); my alma mater's ranking is lower than yours. I was not privy to the means required to capitalize on my performance. No one bats an eye at this; if the guidance of the adults in my life and my own ambition didn't drive me to a better school, that's tough luck. Never mind that the incidence of this sort of situation has a likewise racially biased bent.
Perhaps if these sorts of tactics, or even just circumstances, weren't so prevalent, then they wouldn't seem like such a good idea to purposely replicate.
No one is gullible enough to believe that, if the alleged is true, it would be the first time that unscrupulous methods were used to advantage a particular group in recruitment for jobs or education, right? Or that, when it has happened, it has been primarily used to advantage black applicants?
I don't think the WSJ was the only place reporting on it, and I don't think the fact that you consider them to be "right of center" means their coverage is anything but factual here.
It could equally likely be a misrepresentation of the facts given the current climate. Judicial courts are important because you have to prove your claims in a rigorous fact-finding and disputing process, rather than in the court of public opinion.
Huh? When has Facebook ever implemented political censorship on behalf of Trump? I am not aware of a single case of such a thing even being requested, let alone granted. The scandals about government-directed social media censorship were under Biden's admin, not under Trump's.
> While users who type "#Democrat" or "#Democrats" see no results, the hashtag "Republican" returns 3.3 million posts on the social media platform.
> By manually searching Instagram for "Democrats", rather than clicking on a hashtag, users are greeted by a screen reading "we've hidden these results".
> "Results for the term you searched for may contain sensitive content," it says.
While I agree about Trump, Facebook has censored left-wing causes such as Palestinians. Zuckerberg's embrace of Trump, including possibly getting approval for Facebook's recent changes, raises many concerns.
No, it is true and has been covered in all major newspapers many times over. The Hunter Biden laptop story was censored due to warnings from the FBI, and Zuckerberg has repeatedly said that Facebook was pressured by the Biden administration to censor Covid-19-related content.
The FBI did not say anything about the Hunter laptop story to Facebook; they warned all social media companies that they had detected suspicious activity and that the companies should be aware of foreign disinformation ops.
The laptop story was always nonsense anyway, because the chain of custody of the laptop was compromised, and forensic analysis of the hard drive showed that the contents had been modified after it was retrieved from the repair shop, so the content could not be trusted.
Oh, come on. No, per the reporting, they didn't specifically mention the laptop, but they DID tell SM and tech companies to expect an imminent disinformation dump the week before the NYP published their story (which they'd been sitting on while working on verifying it), and gave enough of a characterisation of what that specific impending dump would contain that employees of the warned companies sent each other messages affirming that, yes, this was clearly the dump the FBI had warned them about. Then when they asked the FBI if this was indeed the Russian misinformation they'd just warned about, the FBI didn't deny it.
(Right? I have no insider information, here, but as far as I understand it none of the above is controversial.)
I don't know the exact content of the messages the FBI sent (I don't think they've been published), but on its face it seems perverse to me to characterise that sequence of events as the FBI not saying "anything about the Hunter laptop story" to Facebook. They presumably were referring to the Hunter laptop story, and Facebook correctly recognised that this was the case when the story broke, so in what sense was the warning not "about" that story?
They didn't tell them to censor anything; they warned them that misinformation was coming, and FB on their own decided to temporarily suppress it from "Trending" before changing their mind. You're free to read all of the evidence. Zuck lied and tried to conflate the Instagram bug with the laptop story and/or the COVID stuff.
Both of those claims are false, and Facebook's own internal communications showed that they did not censor any covid-related content. SCOTUS ruled that the govt asked them to filter out misinformation without putting any undue pressure on them, and Facebook declined to do so
I already posted the link refuting this nonsense, with sources.
Building a professional reputation? Letting people contact you with feedback and improvement suggestions? Pure personal pride? Plenty of reasons to want your work to be attributed to you regardless of whether you're directly monetising people reading it.
And who is going to find or even care about these websites except for people going to them specifically because of a link to your profile on social media sites, through public talks or otherwise through word of mouth?
I don't understand what you're getting at. This thread concerns how we used to be able to find good information with these contraptions called search engines, so that word of mouth was not the only way information was found.
What I’m getting at is simple: no one is going to find a random person’s obscure blog where they are trying to build a “brand” or be a “thought leader” if it is not on the first page of search results.
I subscribe to Ben Thompson’s writing and make it habit to go to a few other websites because they have earned my trust.
The only method that most people have ever had of gaining traction is via word of mouth and not through search engines.
No one owes you traffic or discoverability, any more than they owed HuffPost or the other clickbait, SEO-optimized websites before the algorithm changes.
I don't know how old you are, or whether you ever really knew the web in the prior era that we're talking about. Forgive me if I'm making flawed guesses about where you're coming from.
Back in the day, if I wanted the answer to some specific question about, say, restaurants in Chicago, I'd search for it on Google. Even if I didn't know enough about the topic to recognize the highest quality sites, it was okay, because the sorts of people who spent time writing websites about the Chicago restaurant scene did know enough, and they mostly linked to the high-quality sites, and that was the basis of how Google formed its rankings. Word of mouth only had to spread among deeply-invested experts (which happens quite naturally), and that was enough to allow search engines to send the broader public to the best resources. So yeah, once upon a time, search engines were pretty darn good at pointing people to high quality sites, and a lot of those quality sites became well-known in exactly that way.
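(To make the idea concrete, here's a toy sketch of link-based ranking in the spirit of PageRank; the sites and links are invented, and this is obviously not Google's actual algorithm.)

```python
# Toy link-based ranking via power iteration. Pages that experts link to
# accumulate rank; pages nobody links to sink toward the baseline.
links = {
    "chicago-food-blog": ["best-deep-dish-guide", "local-critic"],
    "local-critic": ["best-deep-dish-guide"],
    "best-deep-dish-guide": ["chicago-food-blog"],
    "seo-spam-listicle": ["best-deep-dish-guide"],  # nobody links back to it
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Everyone gets a small baseline, plus a share of the rank
        # of every page that links to them.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```

Running it, the guide that the enthusiast sites all link to comes out on top, and the page nobody links to ends up at the bottom, which is the whole point the comment above is making about expert word of mouth feeding the rankings.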
I’m old enough that my first paid project was making modifications to a home grown Gopher server built using XCMDs for HyperCard.
My first post was on Usenet in 1994 using the “nn” newsreader
The web has gotten much larger; it didn’t even exist when I started.
But web rings on GeoCities weren’t exactly places to do “high quality research”. You still had to go to trusted sites you knew about or start at Wikipedia and go to citations.
Until two years ago I would go to Yelp. Now I use the paid version of ChatGPT that searches the internet and returns sources with links.