Hacker News | drob518's comments

He was a brain nerd, for sure.

RIP. I was a big user of Pagemaker back in the early 1990s. Great product for the time.

So, it wasn’t my imagination.

I concur. A lot of “flakey” issues can be traced to poor quality power supplies. That’s a component that doesn’t get any attention in spec sheets other than a max power rating and I think a lot of manufacturers skimp there. As long as the system boots up and runs for a few minutes, they ship it.

Heck, even dirty power from the wall can contribute. I've seen improvements in stability from putting things behind power conditioners.

Definitely that too, particularly in second-world countries. I remember having a difficult time with dirty power for some hardware products I was responsible for in the 1990s, where the customers were in the Middle East and Africa. We ended up having the PS manufacturer do a redesign to help compensate for the dirty power. It can be done, but it costs a bit more.

That works for a LOT of people. Not me, but everybody else in my family.

I agree with the sentiment, but in general it's not worth my time to try to purge. I used to do that back in 2005. Heck, in the 1990s, I'd buy a new hard drive every year. But these days, I find that a hard drive lasts me for 5 years if I plan well.

Have you used local AI models on a 32 GB MBP? I ask because I'm looking to finally upgrade my M1 Air, which I love, but which only has 16 GB RAM. I'm trying to figure out if I just want to bump to 32 GB with the M5 MBAir or make the jump all the way to 64 GB with the low-end M5 MBP. I love my M1 Air and I don't typically tax the CPU much, but I'm starting to look at running local models and for that I'd like faster and bigger. But that said, I don't want to overpay. Memory is my main issue right now. Anyway, if you have experience, I'd love to hear it. Which MBP, stats of the system, which AI model, how fast did it go, etc?

For local models are you wanting to do:

A) Embeddings.

B) Things like classification, structured outputs, image labelling etc.

C) Image generation.

D) LLM chatbot for answering questions, improving email drafts etc.

E) Agentic coding.

?

I have an MBP with an M1 Max and 32GB RAM. I can run a 20GB mlx_vlm model like mlx-community/Qwen3.5-35B-A3B-4bit. But:

- it's not very fast

- the context window is small

- it's not useful for agentic coding

I asked "What was mary j blige's first album?" and it output 332 tokens (mostly reasoning) and the correct answer.

mlx_vlm reported:

  Prompt: 20 tokens @ 28.5 t/s | Generation: 332 tokens @ 56.0 t/s | Peak memory: 21.67 GB
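To put those throughput numbers in wall-clock terms, here's a quick back-of-the-envelope calculation (just arithmetic on the stats line above, nothing more):

```python
# Back-of-the-envelope timing from the mlx_vlm stats above.
prompt_tokens, prompt_tps = 20, 28.5    # prompt processing
gen_tokens, gen_tps = 332, 56.0         # token generation

prompt_time = prompt_tokens / prompt_tps   # time to ingest the prompt
gen_time = gen_tokens / gen_tps            # time to produce the answer
total = prompt_time + gen_time

print(f"prompt: {prompt_time:.1f}s, generation: {gen_time:.1f}s, total: {total:.1f}s")
# roughly 0.7s + 5.9s, call it ~6.6s for that one question
```

So "not very fast" here means a short factual question with reasoning takes several seconds end to end; a long prompt (big context) would stretch the prompt-processing term considerably, since 28.5 t/s on the prompt side is the bottleneck for large inputs.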

Thanks for the info.

I’d like to do agentic coding first, but then chatbot and classification as lower priorities. I don’t really care about image gen.

Also, if you’re only able to run 35B models in 32GB, it seems like I’d definitely want at least 64GB for the newer, larger models (qwen has a 122B model, right?). My theory there is that models are only getting larger, though perhaps also more efficient.
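For sizing, a rough rule of thumb (a sketch, not a measured formula; the overhead factor is a guess covering KV cache, activations, and runtime) is that quantized weights take about params × bits/8 bytes, plus headroom:

```python
def model_ram_gb(params_billion: float, bits: int = 4, overhead: float = 1.25) -> float:
    """Rough RAM estimate for running a quantized model.

    params_billion: parameter count in billions
    bits: quantization width (4-bit, 8-bit, ...)
    overhead: fudge factor for KV cache, activations, runtime (assumed, not measured)
    """
    weights_gb = params_billion * bits / 8  # 1B params at 8 bits ~= 1 GB of weights
    return weights_gb * overhead

print(model_ram_gb(35))    # ~21.9 GB, close to the 21.67 GB peak reported above
print(model_ram_gb(122))   # ~76 GB at 4-bit, tight even on a 64 GB machine
```

By this estimate a 122B model at 4-bit would press hard against even 64GB unless it's quantized more aggressively, which supports erring toward the larger configuration if bigger models are the goal.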


Wise guy. That said, upvoted for cleverness.

The original link has nothing to do with any recent tariffs. The study period ran from 2018 through 2022, and it specifically looked at tariffs on wine driven by the Airbus/Boeing kerfuffle happening at the time.

Thank the founders for the Commerce Clause, at least when it's applied correctly and isn't being abused.

