My Lindy alarm has gone off!


It looks like MLX is not a supported backend in Ollama, so the numbers for the Mac could be significantly higher in some cases.

It would be interesting to swap out Ollama for LM Studio and use their built-in MLX support and see the difference.
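
For anyone wanting to sanity-check this without LM Studio, here's a minimal sketch using Apple's mlx-lm Python package directly. The model name and prompt are placeholder assumptions, not a benchmark spec; any MLX-format model should work:

```python
# Rough throughput check with mlx-lm (pip install mlx-lm).
# Model and prompt below are illustrative assumptions only.
from mlx_lm import load, generate

# Hypothetical model choice; swap in whatever MLX-format model
# matches the one being benchmarked in Ollama.
model, tokenizer = load("mlx-community/Llama-3.1-8B-Instruct-4bit")

# verbose=True prints prompt-processing and generation tokens/sec,
# which can be compared against Ollama's reported eval rates.
generate(
    model,
    tokenizer,
    prompt="Summarize the Lindy effect in one paragraph.",
    max_tokens=256,
    verbose=True,
)
```

Comparing the reported generation tokens-per-second against `ollama run --verbose` on the same prompt and quantization should show whether the MLX backend makes a meaningful difference on a given machine.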


For any stamp collectors here, the Isle of Man Post Office [1] has just issued an official set of 6 Roger Dean and Rick Wakeman stamps [2]:

[1] https://iomstamps.com/collections/wakeman [2] https://www.bbc.co.uk/news/articles/clyqe679gqno


Theory building is the secret sauce, and all variants of "this is how to use AI effectively" I've seen are inferior to the epistemologically sound theory Naur outlines in his paper.


Feynman said, "The first principle is that you must not fool yourself - and you are the easiest person to fool" when talking about science, but it also applies to the properties of LLM output.


Not cosmological, but yesterday Apple released an interesting protein-folding model with a 3B-parameter transformer-based architecture that runs on M-series hardware and is competitive with state-of-the-art models. [1] Code [2]

[1] https://arxiv.org/pdf/2509.18480 [2] https://github.com/apple/ml-simplefold


Yes. Juniors lack the knowledge needed to build coherent mental models of problems whose solutions will ultimately be implemented in code, whereas seasoned engineers have it.

Seniors can make this explicit to models and use them to automate "the code they would have written," whereas a junior doesn’t know what they would have written, nor how they would have solved it absent an LLM.

Same applies to all fields: LLMs can be either huge leverage on top of existing knowledge or a crutch for a lack of understanding.


For anyone interested in the history of Sellafield and its role in reprocessing, "Britain's Nuclear Secrets: Inside Sellafield" on BBC Four at the moment is worth a watch. Presented by Jim Al-Khalili.

https://www.bbc.co.uk/programmes/b065x080


Reminds me of Bret Victor's demo of projected AR turbulence around a toy car at Dynamicland. Only a short clip, but you get the idea: https://youtu.be/5Q9r-AEzRMA?t=47


I know and like Bret Victor's work. Definitely a source of inspiration!


> “There's kind of like two different ways you could describe what's happening in the model business right now. So, let's say in 2023, you train a model that costs 100 million dollars.
>
> And then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs a billion dollars. And then in 2025, you get $2 billion of revenue from that $1 billion, and you spend $10 billion to train the model.
>
> So, if you look in a conventional way at the profit and loss of the company, you've lost $100 million the first year, you've lost $800 million the second year, and you've lost $8 billion in the third year. So, it looks like it's getting worse and worse. If you consider each model to be a company, the model that was trained in 2023 was profitable.”
>
> ...
>
> “So, if every model was a company, the model is actually, in this example, is actually profitable. What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's like much more expensive and requires much more upfront R&D investment. And so, the way that it's going to shake out is this will keep going up until the numbers go very large, the models can't get larger, and then it will be a large, very profitable business, or at some point, the models will stop getting better.
>
> The march to AGI will be halted for some reason, and then perhaps it will be some overhang, so there will be a one-time, oh man, we spent a lot of money and we didn't get anything for it, and then the business returns to whatever scale it was at.”
>
> ...
>
> “The only relevant questions are, at how large a scale do we reach equilibrium, and is there ever an overshoot?”

From Dario’s interview on Cheeky Pint: https://podcasts.apple.com/gb/podcast/cheeky-pint/id18210553...
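
The per-cohort vs. company-level arithmetic is easy to lose track of, so here's a toy sketch restating it. All figures are taken directly from the quote (in $M); nothing here is measured or new:

```python
# Each model "cohort" earns 2x its training cost the following year,
# yet the annual P&L worsens because the next, 10x-larger training
# run lands in the same period.
cohorts = [
    # (train_year, training_cost, next_year_revenue)
    (2023, 100, 200),
    (2024, 1_000, 2_000),
    (2025, 10_000, None),  # revenue not yet realized in the example
]

for year in (2023, 2024, 2025):
    spend = sum(c for y, c, _ in cohorts if y == year)
    revenue = sum(r for y, _, r in cohorts if y == year - 1 and r is not None)
    print(f"{year}: spend={spend}, revenue={revenue}, P&L={revenue - spend}")

# 2023: spend=100,   revenue=0,    P&L=-100
# 2024: spend=1000,  revenue=200,  P&L=-800
# 2025: spend=10000, revenue=2000, P&L=-8000
# Per cohort, though: 200 - 100 = +100 and 2000 - 1000 = +1000.
```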

