Hacker News | andyfilms1's comments

I've said this before, but the only thing that will convince me that these companies are actually laying people off due to AI will be when they start replacing their leadership staff with it.

Until that happens--IMO it's just an excuse to reduce the overhired workforce without tanking their stock.


They are firing leadership.

There's a ton of middle managers and high-ranking roles whose jobs are to basically supervise other people.

If the people go, they do too.


Many companies I’m familiar with were overstaffed with middle managers. Managers with 3 reports. I even know of one with a single report.

Layoffs are generally good for stock prices, at least in the very short term

Why would leadership replace themselves?

Would they replace themselves in countries (e.g. Germany, France) where executives have more of a civic responsibility, as opposed to countries like the USA where the responsibility is to shareholders?

- corporate social responsibility (CSR)

Does this mandate self-termination when financially reasonable?


This is a good biological explanation. The physical explanation is, if the sensitivities didn't overlap, our spectral sensitivity would not be continuous. There would be valleys of zero sensitivity between the cones, and a continuous wavelength sweep would result in us seeing black bands between colors.


Gray bands, or more realistically just desaturated bands. There'd still be sensitivity to light through rods (black and white), and even if the peaks of wavelength sensitivity were highly separated there would still be some cone response to wavelengths that didn't stimulate them strongly.
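The overlap argument can be sketched numerically. This toy model (illustrative Gaussian curves and peak wavelengths, not real cone spectra) shows that with broad, overlapping sensitivities the weakest response between the S and L peaks stays well above zero, while artificially narrowed curves open up near-dead valleys:

```python
import math

def cone_response(wavelength_nm, peak_nm, width_nm=50.0):
    """Gaussian stand-in for a cone's spectral sensitivity curve."""
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

# Rough peak wavelengths for S, M, and L cones (illustrative values).
peaks = [420, 534, 564]

# Sweep the region between the outermost peaks and record, at each
# wavelength, the strongest response from any cone.
min_response = min(
    max(cone_response(wl, p) for p in peaks) for wl in range(420, 565)
)
print(f"weakest spot with broad curves:  {min_response:.3f}")

# Narrow the curves so the peaks barely overlap: valleys of near-zero
# sensitivity appear between them.
min_narrow = min(
    max(cone_response(wl, p, width_nm=10.0) for p in peaks)
    for wl in range(420, 565)
)
print(f"weakest spot with narrow curves: {min_narrow:.2e}")
```

With broad curves the weakest point (around 477 nm, midway between the S and M peaks) still gets roughly a quarter of peak response; with narrow curves it drops by many orders of magnitude, which is the "bands between colors" scenario.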


Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life with the wealth they've accrued.

I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.


Some employees can be replaced by AI. That part is true. It's not revolutionary (at least not yet) — it's pretty much the same as other post-industrial technologies that have automated some types of work in the past. It also takes time for industries to adapt to these changes. Replacing workers couldn't possibly happen in one year, even if our AI models were far more capable than they are in practice.

I'm afraid that what we're seeing instead are layoffs that are purely oriented at the stock market. As long as layoffs and talk about AI are seen as a positive signal for investors and as long as corporate leadership is judged by the direction the stock price goes, we will see layoffs (as well as separate hiring sprees for "AI Engineers").

It's a telltale sign that we're seeing a large number of layoffs in the tech sector. It is true that tech companies are poised to adopt AI more quickly than others, but that doesn't seem to be what's happening. What seems to be happening is that tech companies overhired throughout the decade leading up to the end of COVID-19. At that time hiring was a positive signal — now firing is.

I don't think these massive layoffs are good for tech companies in the long term, but since they mostly affect things that don't touch direct revenue-generating operations, they won't hurt in the near term, and by the time a company starts feeling the pain, the cause will be too far in the past to be remembered.


> Some employees can be replaced by AI.

Yes, but let's not pretend that there isn't a lot of middle and even upper management that could also be replaced by AI.

Of course they won't be because they are the ones making the decisions.


> Of course they won't be because they are the ones making the decisions.

That's not accurate at all

https://www.businessinsider.com/microsoft-amazon-google-embr...


I stand corrected.


Yeah, I wonder how asking an AI chatbot for a raise will go. Probably about as well as AT&T tech support.


I don't think anyone is being laid off because of AI. People are being laid off because the market is bad for a myriad of reasons, and companies are blaming AI because it helps them deflect worry that might lower their stock price.

Companies say "we've laid people off because we're using AI," but they mean "we had to lay people off; we're hoping we can make up for them with AI."


> I don't think anyone is being laid off because of AI.

I think that's demonstrably false. While many business leaders may be overstating it, there are some pretty clear-cut cases of people losing their jobs to AI. Here are 2 articles from the Washington Post from 2 years ago:

https://archive.vn/C5syl "ChatGPT took their jobs. Now they walk dogs and fix air conditioners."

https://archive.vn/cFWmX "ChatGPT provided better customer service than his staff. He fired them."


The wave of layoffs started a couple of years before the AI craze (ChatGPT).


> Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life

One thing that doesn't get mentioned is AI's capacity for being held accountable. AI is fundamentally unaccountable. Like the genie from the lamp, it will grant you the 3 wishes, but you bear the consequences.

So what can we do when the tasks are critically important, like deciding on an investment or spending much time and resources on a pursuit? We still need the managers. We need humans for all tasks of consequence where risks are taken. Not because humans are smarter, but because we have skin in the game.

Even on the other side, that of goals, desires, choosing problems to be solved - AI has nothing to say. It has no desires of its own. It needs humans to expose the problem space inside which AI could generate value. It generates no value of its own.

This second observation means AI value will not concentrate in the hands of a few, but instead will be widespread. It's no different from Linux: yes, it has a high initial development cost, but then it generates value in the application layer, which is as distributed as it gets. Each human using Linux exposes their own problems to the software to get help, and value is distributed across all problem contexts.

I have come to think that generating the opportunity for AI to provide value, and then incurring the outcomes, good or bad, of that work, are fundamentally human and distributed across society.


> Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs.

I don't think the "implying the AI is as good as or better than humans" part is correct. While they may not be saying it loudly, I think most folks making these decisions around AI and staffing are quite clear that AI is not as good as human workers.

They do, however, think that in many cases it is "good enough". Just look at like 90%+ of the physical goods we buy these days. Most of them are almost designed to fall apart after a few years. I think it's almost exactly analogous to the situation with the Luddites (which is often falsely remembered as the Luddites being "anti-technology", when in reality they were just "pro-not-starving-to-death"). In that case, new mechanized looms greatly threatened the livelihood of skilled weavers. The quality of the fabric from these looms tended to be much worse than those of the skilled weavers. But it was still "good enough" for most people such that most consumers preferred the worse but much cheaper cloth.

It's the same thing with AI. It's not that execs think it's "as good as humans"; it's that if AI costs X to do something, and the human costs 50X (which is a fair differential, I think), execs think people will be willing to put up with a lot shittier quality if it can be delivered much more cheaply.

One final note - in some cases people clearly do prefer the quality of AI. There was an article on HN recently discussing that folks preferred Waymo taxis, even though they're more expensive.


Not surprising people like Waymos even though they are a bit more expensive. For a few more dollars you get:

- arguably a very nice, clean car

- the same, ahem, driver and driving style

With the basic UberX it’s a crapshoot. Good drivers, wild drivers, open windows, no air-con. UberX Comfort is better but there’s still a range.


Every few weeks I give LLMs a chance to code something for me.

Friday I laid out a problem very cleanly: take this data structure and transform it into this other data structure in Terraform, with examples of the data in both formats.

After the seventh round of back and forth, where it would give me code that wouldn't compile or code that produced a totally different data structure, no matter how many more examples and clarifications I gave it, I gave up. I gave the problem to a junior and they came back with the answer in about an hour.

Next time an AI bro tells you that AI can 'replace your juniors' tell him to go to hell.


Interesting, her videos have never struck me as contrarian for the sake of it; she seems genuinely frustrated at a lack of substantial progress in physics and the plethora of garbage papers. I imagine it must be annoying to be a physicist and have someone constantly telling you you're not good enough, but that itself is kind of part of the scientific process too.


My biggest complaint is sometimes it seems like she will take some low quality paper and just dunk on it. This feels a bit click-baity/strawman-y if nobody was being convinced by the paper in the first place

[I am not a physicist so probably can't really evaluate the whole thing neutrally]


What's the downside of that? Shouldn't a bit of public criticism help raise publication standards?


Attacking your opponent's weakest argument is easy. Attacking their strong arguments is what takes skill.

If the paper is getting a lot of press that is one thing, but if it's languishing in obscurity, it just feels a bit self-indulgent.


It sounds like you're saying that her opponents are the entirety of physics researchers in academia. But isn't it that her opponents are those particular researchers that are publishing poor work, and that she's attacking the strongest arguments of those? Or am I missing something?

And I don't accept the "languishing in obscurity" argument - if a published work is poor, we should still critique it (by publishing a letter to the editor, or in any other manner), rather than just let it pollute the space. There have been many cases of obscure works being picked up decades later, and especially now with AI "deep research", it's easier and easier for bad work to slip in - so I believe that science communicators should do what they can to add the appropriate commentary to such works. And if it seems like "easy" work, then all the better.


The issue is that many of her videos argue that funding for particle physics should instead go into foundations and interpretations of quantum mechanics, specifically research completely identical to what she works on.

This is not helped by the fact that she pushes an interpretation of quantum mechanics viewed as fringe at best. Her takes on modern physics typically seem disingenuous or biased.


Could she be correct in her assertion? Are we spending more on areas of physics which don’t require it?


She's correct. If you want theory-of-everything energies, you need an accelerator the size of our Solar System. Source: Stephen Hawking's The Universe in a Nutshell.


Evernote will do this: you can feed it a bunch of PDFs and other documents, and it will OCR them and make them all searchable. It's not perfect, but you can also add manual tags for things you know are important.


At some point I'd love to further train an LLM on all my PDFs and be able to ask it questions.


For text-based PDFs, doc files, etc., this is super easy to do and requires very little technical expertise or configuration. Download the gpt4all client, pick an LLM (the new Meta Llama 3 model works really well), then configure the local documents plugin.

I wrote a scraper to download all of the California EdCode from the government's site and convert it all to txt docs, and now I can ask questions about California EdCode in plain English.

I work in a shared governance capacity that requires us to refer to the Ed code for contractual negotiations and it’s been extremely helpful.
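The retrieval half of that setup is simple enough to sketch without any model at all. This stdlib-only toy (the file names, contents, and scoring here are made up, and it is far cruder than what the gpt4all local-documents plugin actually does) just ranks txt files by shared vocabulary with a question:

```python
import re
import tempfile
from pathlib import Path

def rank_documents(question, doc_dir, top_n=3):
    """Rank .txt files by how many of the question's words they contain.

    A crude stand-in for the retrieval step a local-documents plugin
    performs before handing excerpts to the LLM.
    """
    terms = set(re.findall(r"[a-z]+", question.lower()))
    scored = []
    for path in Path(doc_dir).glob("*.txt"):
        words = set(re.findall(r"[a-z]+", path.read_text(errors="ignore").lower()))
        scored.append((len(terms & words), path.name))
    scored.sort(reverse=True)
    return scored[:top_n]

# Demo with two throwaway documents (contents are invented).
with tempfile.TemporaryDirectory() as d:
    Path(d, "salaries.txt").write_text("teacher salary schedules and negotiation")
    Path(d, "calendar.txt").write_text("school year calendar and holidays")
    results = rank_documents("what are the salary negotiation rules", d)
    print(results)
```

Real setups replace the word-overlap score with embeddings and feed the top chunks into the model's context, but the shape of the pipeline is the same.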


Lots of people swear by PlatformIO on VSCode, and that's totally fine. But if you're the kind of programmer that just wants "notepad, but with an 'upload' button" the Arduino IDE is hard to beat.


Why would anyone want that though? Masochistic challenge? "If you're the kind of person who likes to wash their clothes by hand..."


I like Arduino IDE as kind of a living reminder of "how the other half lives". I'm sitting here with Tinygo and my J-Link pushing several revisions of my code per minute (not to mention fucking goroutines and channels on an ARM M0 core), and other people are fiddling for hours trying to get Arduino IDE to compile code for 45 minutes only to be told COM69 is no longer connected (because Windows decided that it's now COM123).

I really think that professional software engineers are trolling "normal people" with stuff like this. Oh, you want to be a programmer? Jump through these 8 million flaming hoops suspended over man-eating sharks, and you'll see just how hard it is to make an LED blink. After failing, you will dedicate a new national holiday to our sacrifices; we do this for a living and you should be grateful that literal human garbage like you are even allowed to THINK about monitoring the water level of your potted plants. But behind the scenes, we don't do this for a living. When we press TAB, our IDE knows exactly which level to indent the text to. When we want to reflash our microcontroller, it does a diff of the last binary we pushed and updates only the 3 flash cells that actually changed and updates it in microseconds. When we want the documentation for some library function, it pops up in a web browser hovering above our cursor. Our life is nothing like this. We just use our skills to give you tools to make you think programming sucks.

It's all a scam.


I definitely agree. Arduino was good back in 2009 when professional alternatives were expensive and complicated, but it basically didn't change at all until now. And they've only really fixed the IDE. The rest of it is still terrible. Awful API, badly written libraries, everything done over an (emulated!) serial port.

It's like the legal system where emails are converted to fax when sent and then back to email when received.

That said, I think Mbed should have eaten Arduino's lunch but they kept making loads of different terrible build systems, and there are still a load of old boards available that need firmware updates to work properly. I have a couple of K22F's that have been bricked just by plugging them into a Windows machine. Never happened with Arduino.

So yeah, Arduino have sat still at the "it's rubbish but it works pretty reliably" stage while Mbed has failed to get beyond "it's great... when it works".


So yes a masochistic challenge then?


Because having thirty billion integrations, automations, toggles, etc. that a new user may mess up is overwhelming for a newcomer. Arduino IDE serves great as a plug-and-play introduction to microcontrollers, allowing people to learn what they're doing before figuring out what the billions of automation tools are and how to tame them. It's the same reason python still has IDLE, it's just notepad with a run button, because it's all you need when you first start getting to grips with everything.


I don't think syntax highlighting and autocomplete and live error squiggles and tooltip help make things harder for beginners. The optimum beginner experience definitely isn't Eclipse, but it also definitely isn't Notepad!


VSCode is kind of bloated, and really bad if you like to exclusively use keyboard control rather than "mouse sometimes": things lose focus easily when notifications and other elements get focus, and then it doesn't drop focus back where you were.

Not every tool is for everyone, and that's fine :)


There is a lot more to image quality than just resolution, and often increasing resolution will hurt sensor performance in other areas like dynamic range, QE, and noise.

The Arri Alexa's Alev III sensor, which has shot basically every Best Cinematography Oscar winner of the last decade, captures at about 3K resolution (~7MP) yet holds up perfectly fine on huge cinema screens.


Anything at 1080p holds up fine on screens already, and that's only 2 megapixels, and it's been around for 15 years.

The majority of the benefit of shooting at a higher resolution is the ability to crop/zoom during processing. Sometimes, it's the ability to actually reduce the resolution to get rid of noise/grain.
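The numbers behind that are simple arithmetic (using the standard 1080p and UHD frame dimensions):

```python
# 1080p is only about 2 megapixels.
mp_1080p = 1920 * 1080 / 1e6
print(f"1080p: {mp_1080p:.2f} MP")

# Shooting UHD (3840x2160, ~8.3 MP) buys crop headroom: a 2x punch-in
# during processing still yields a full-resolution 1080p frame.
crop_w, crop_h = 3840 // 2, 2160 // 2
print(f"2x crop of UHD: {crop_w}x{crop_h}")
```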


If there were salary parity between teachers and industry workers, I think that sentiment would vanish. With the current disparity, the implication is that teaching is a "final resort" for people who "couldn't cut it" in their industry.


With 3D printers, "cheap" is inversely correlated with "reliable." Something like an Ender 3 will fulfill your requirements, but will require maintenance once it's been "broken in."

If you can find some more cash, a Prusa MK3S or the newer Bambu Lab X1 or AnkerMake will provide more consistent, reliable quality.


Oculus sells their hardware at a loss and makes up for it by selling your data.

Valve doesn't care about being competitive. They're not market driven. They hire the best engineers possible, and let them work on what they want.

Sometimes that comes together in something beautiful (Alyx, Index), sometimes it results in something out of touch (Steam Machines), and sometimes it results in unending development hell.


> Oculus sells their hardware at a loss, and make up for it by selling your data.

Source?


What Facebook does is worse than selling your data, but they rely on people saying they sell your data, which they can deny, to hide what they actually do: analyze your data to sell behavior-modification products.

