> If you care then you'll fact check before publishing.
Doing a proper fact check is as much work as doing the entire research by hand, and therefore, this system is useless to anyone who cares about the result being correct.
> I don't see why this changes.
And because of the above this system should not exist.
> The employer sets the terms of the interview. If you don’t like them, don’t apply.
What you're missing here is that this is an individual's answer to a systemic problem. You don't apply when it's _one_ obnoxious employer.
When it's standard practice across the entire industry, we have a problem.
> submitting a fraudulent resume because you disagree with the required qualifications.
This is already worryingly common practice because employers lie about the required qualifications.
Honesty gets your resume shredded before a human even looks at it. And employers refusing to address that situation just makes everything worse and worse.
You make a valid point that while the rules of the game are known ahead of time, it’s strange that the entire industry is stuck in this local maximum of LeetCode interviews. Big companies are comfortable with the status quo, and small companies just don’t have the budget to experiment with anything else (maybe with some one-offs).
Sadly, it’s not just the interview loops—the way candidates are screened for roles also sucks.
I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.
>I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.
I don't want to be too crass, but I'm not surprised: people who can start up a business are precisely the ones who hyper-fixate on efficiency when hiring and try to find the best coders instead of the best engineers. When you need to put your money where your mouth is, many will squirm back to "what works".
I'm sorry but what the fuck is this product pitch?
Anyone who's done any kind of substantial document research knows that it's a NIGHTMARE of chasing loose ends & citogenesis.
Trusting an LLM to critically evaluate every source and to be deeply suspect of any unproven claim is a ridiculous thing to do. These are not hard reasoning systems, they are probabilistic language models.
o1 and o3 are definitely not your run of the mill LLM. I've had o1 correct my logic, and it had correct math to back up why I was wrong. I'm very skeptical, but I do think at some point AI is going to be able to do this sort of thing.
Yet. They have clearly voiced their desire for this.
> it just won't be state funded.
This isn't just "The government is not funding research into this", this is the government maintaining a list of thoughtcrime and banning researchers from using words.
No it’s not. I am free to publish whatever my crazy mind can cook up. The government’s policy on what an acceptable bar for “take it seriously” is doesn’t change that.
As someone who has been censored, and who has watched my friends be censored for being trans people and talking about it, we are most certainly not "free to publish whatever".
> It feels like they always anthropomorphize AI as some sort of "God".
It's not like that. It is that. They're playing Pascal's Wager against an imaginary future god.
The most maddening part is that the obvious problem with that has been well identified by those circles, dubbed "Pascal's Mugging", but they're still rambling on about "extinction risk" whilst disregarding the very material ongoing issues AI causes.
They're all clowns whose opinions are to be immediately discarded.
Which material ongoing issues are we ignoring? The paper is mainly talking about how the mundane problems we're already starting to have could lead to an irrecoverable catastrophe, even without any sudden betrayal or omnipotent AGI.
So I think we might be on the same side on this one.
The "Mugging" going on is that "AI safety" folks proclaim that AI might have an "extinction risk" or infinite-negative outcome.
And they proclaim that therefore, we should be devoting considerable resources (i.e. on the scale of billions) to avoiding that even if the actual chance of this scenario is minimal to astronomically small. "ChatGPT won't kill us now, but in 1000 years it might" kinda shit. For some this ends with "and therefore you need to approve my research funding application", for others (including Altman) it has mutated into "We must build AGI first because we're the only people who can do it without destroying the world".
The problem is that this is absurd. They're focussing on a niche scenario whilst ignoring horrific problems caused in the here and now.
"Skynet might happen in Y3K" is no excuse to flood the current internet with AI slop, create a sizeable economic bubble, seek to replace entire economic sectors with outsourced "Virtual" employees, and perhaps most ethically concerning of all: create horrific CSAM torment nexuses where even near-destitute gig economy workers in Kenya walk out of the job.
The people who say it's absurd tend to be the least informed, while the people saying it's a major risk include the guy who got a Nobel prize for inventing the current stuff, along with leading researchers. Here are some names in the field; 15/19 think the risk is significant: https://x.com/AISafetyMemes/status/1884562099612889106/photo...
> The same argument applies to essentially all technology, like a computer.
Why yes, it does.
Even setting aside that most AI hype: Yes, automation is in fact quite sinister if you do not go out of your way to deal with the downsides. Putting people out of a job is bad, actually.
Yes. The industrial revolution was a great boon to humanity that drastically improved quality of living and wealth. It also created horrific torment nexuses like mechanical looms into which we sent small children to get maimed.
And we absolutely could've had the former without the latter; Child labour laws handily proved it was possible, and should have been implemented far sooner.
In addition, the Industrial Revolution led to societal upheaval that took more than a century to sort out, if you agree it's ever been sorted out at all.
So, if it is true we’re on the cusp of an AI Revolution, AGI, the Singularity, or anything like that, then there’s precedent to worry. It could destroy our lives and livelihoods on a timescale of decades, even if the whole world really would be over all improved in a century or two.
I'm not suggesting child labor laws are bad, I'm saying automation is good and not sinister. Automation inherently reduces labor, which can inherently lead to someone not needing to work a job that is now automated. That we want to protect people from suffering doesn't mean we should be suspicious of all new technology because we can imagine a way someone might lose a job.
This is a hilarious claim given that none of the current action is going through the legislative path, and the tech billionaires freely bend the knee to Trump even before the inauguration.
What's even the material point here? That "the left" pierced the taboo on speech censorship? Trump's currently wiping his ass with the separation of powers enshrined in the constitution. He does not care about taboo.
Crypto is different from things like housing (where the "bubble" is merely artificially restricted supply driving the price up, so it's a real price increase) or the stock market (where the fundamentals are real enough that a crash in e.g. AI stocks will hurt, but not be systemically destructive; Nvidia, Microsoft, etc. are all still going to exist as very profitable companies).
> Sure you can say that crypto is only bubble, but that doesn't really add any information. The bubble can't be "inflating" since there's no ratio to a non-bubble version.
The core point is that there is (essentially^[1]) nothing keeping the price up besides speculator interest. That means the price can collapse catastrophically. Saying crypto is a bubble is useful because it has the defining traits and dangers of a bubble.
[1]: The big exception here is that a lot of the price is also kept afloat by the absolutely ridiculous amount of financial fraud in this ecosystem. Most of the "dollars" chasing Bitcoin are fake, and it's still unclear how insolvent the big stablecoins are.
The little exception is that there is a minimum floor; Cryptocurrencies have some utility as a payment system, which would give them some value. But the market rate for consumer payments is effectively if not literally zero, well below the amounts required to operate the mining/staking systems these currencies require.
>Why should somebody researching e.g. fusion for the Department of Energy also need to create a Promoting Inclusive and Equitable Research (PIER) plan, to even apply?
Why?
Because a homogenous culture of researchers is less effective.
Because you are not just doing research on a topic, you are also training the next generation of scientists and field experts.
And the implication that the old boys club of white dudes is intrinsically the best "meritocratic" outcome is ridiculous. The history of science is full of people who had to fight that norm and succeed despite it.
> This should greatly reduce the overall bureaucratic nonsense in science and help get back to science simply being science without imposing ideological conformity tests.
Sure, sure. Except for the part where they're also censoring any science topics deemed "woke", where all funding now has to meet the president's ideological conformity test on subject and staff as well.
Your stereotypes belie a lack of familiarity with researchers. Here [1] are the demographics of PhD researchers.
White individuals are significantly under represented (and even more so in STEM) though it's not for any nefarious reason. Science has traditionally been merit > all. And lots of highly skilled individuals from China, India, and so on are pursuing education and work in the US, which makes the competition for these spots very different than a random sampling of Americans.
You either don't know what you're talking about or are arguing in bad faith.
The demographic of PhD researchers is, in fact, the problem, because the ratio of women to men is very high at the beginning of the career but declines as the career progresses, becoming embarrassing at the professorial stage. This is the whole reason DEI became essential: it aimed at removing the anti-meritocratic barriers that promoted male careers at the expense of female ones.
Men raise children too. And parental leave exists to make having kids compatible with having a career. Not clear to me why we should just accept women dropping out after becoming parents as a given.
I think if/when you have children you'll see that these sort of statements are not really realistic.
In a sustainable society, women need to have a bit more than 2 children each on average. The 9 months themselves are not so bad, though they come with a significant number of side effects and a substantial number of hospital visits, both planned and unplanned.
After birth the real fun begins. At first you'll be nursing every 2 hours, constantly. Over time that changes to every 3-4 hours, but that is now the new normal; 8 hours of sleep is a thing of the past. And nursing isn't just "whip out a boob and a few minutes later you're done": depending on your baby, his mood, teething, growth spurts, and a zillion other factors, it could last an hour, all the while his hunger for the next feeding window also grows!
And, probably as an evolutionary warning mechanism, the mother will frequently experience vivid nightmares playing out every horrific scenario that could happen, experience anxiety (probably as a result of the former), and so on. That fades over time but never really goes away.
And back to breastfeeding, if a mother doesn't constantly drain her breasts it can lead to infection/mastitis/blocked ducts and all other sorts of fun stuff.
And that's just a sampling of some of the issues (completely ignoring baby himself here) for one child, at which point it's time to get ready for number 2, let alone 3.
This is a multiple years long process that completely consumes your life. Now imagine pairing all this with a 9-5 which in reality is rarely just a 9-5.
There's a reason the West's fertility has fallen well below replacement. This fantasy of doing a great job both as a mother and as a corporate drone just isn't realistic at all.
I think that researchers from China, India, and so on should also have plans to effectively manage the diverse set of students and staff they are likely to work with while in the U.S.
> Your stereotypes belie a lack of familiarity with researchers
I was referencing what the current Trump administration deems "meritocratic" and seeks to "return" to, their policy changes are in direct response and opposition to the demographics you describe.