
It’s supported by the latest revision of GL.iNet’s Comet KVM-over-IP hardware, which was a cool upgrade imho.

https://www.gl-inet.com/products/gl-rm1/


That's an interesting comment, because "locating the correct file to edit" was the very first thing LLMs did that was valuable to me as a developer.

I've got a prompt I've been using, adapted from someone here (thanks to whoever they are, it's been incredibly useful), that explicitly tells it to stop praising me. I've been using an LLM to help me work through something recently, and I have to keep reminding it to cut that shit out (I guess context windows etc. mean it forgets).

    Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested. Sharpen follow-up questions for precision, surfacing hidden assumptions, trade-offs, and failure modes early. Default to terse, logically structured, information-dense responses unless detailed exploration is required. Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable. Always propose at least one alternative framing. Accept critical debate as normal and preferred. Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate. Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain. When citing, please tell me in-situ, including reference links. Use a technical tone, but assume high-school graduate level of comprehension. In situations where the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.

Well, that's what LLM-based AI has always been. It can be incredibly convincing, but the bottom line is that it's just remixing past text patterns, billions of them it's been "trained" on, which is more accurately described as compressing them efficiently into a latent space. It's like someone who has lived for 10,000 years engaging in small talk at the bar, has heard it all, and just mindlessly and intuitively replies with something that sounds plausible for every situation.

Sam Altman is the real sycophant in this situation. GPT is patronizing. Listening to Sam go off on tangents about science fiction scenarios that are supposedly just around the corner... I don't know how more people don't see through it.

I kind of get the feeling the people who have to work with him every day got sick of his nonsense and just did what he asked for. Target the self-help crowd, drive engagement, flatter users, "create the next paradigm of emotionally-enabled humans of perfect agency" or whatever the fuck he was popping off about to try to motivate the team to compete better with Anthropic.

He clearly isn't very smart. He clearly is a product of nepotism. And clearly, LLM "AI" is an overhyped, overwrought version of 20-questions artificial intelligence, enabled by mass data scale and Nvidia video game graphics. It's been 4 years of this now, and AI still tells me the most obviously wrong nonsense every day.

"Are you sure about that?"

"You're absolutely correct to be skeptical of ..."


> Since the register size is fixed there is no way to scale the ISA to new levels of hardware parallelism without adding new instructions and registers.

I look at SIMD as the same idea as any other aspect of the x86 instruction set. If you are directly interacting with it, you should probably have a good reason to be.

I primarily interact with these primitives via types like Vector<T> in .NET's System.Numerics namespace. With the appropriate level of abstraction, you no longer have to worry about how wide the underlying architecture is, or if it even supports SIMD at all.

I'd prefer to let someone who is paid a very fat salary by an F100 company spend their full-time job worrying about how to emit SIMD instructions for my program source.
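
To illustrate the point, here's a minimal sketch of that abstraction at work (the Add helper is my own illustration; it uses only the public System.Numerics API). The same loop runs at whatever width the hardware offers, because Vector<float>.Count is fixed by the JIT for the current machine:

    using System;
    using System.Numerics;

    class VectorDemo
    {
        // Width-agnostic SIMD add: Vector<float>.Count resolves at JIT time
        // (e.g. 8 floats on AVX2, 4 on SSE2/NEON), so this loop uses full
        // hardware width without ever naming a specific ISA.
        static void Add(float[] a, float[] b, float[] result)
        {
            int i = 0;
            int width = Vector<float>.Count;
            for (; i <= a.Length - width; i += width)
            {
                var va = new Vector<float>(a, i);
                var vb = new Vector<float>(b, i);
                (va + vb).CopyTo(result, i);
            }
            // Scalar tail for lengths that aren't a multiple of the width.
            for (; i < a.Length; i++)
                result[i] = a[i] + b[i];
        }

        static void Main()
        {
            var a = new float[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
            var b = new float[] { 9, 8, 7, 6, 5, 4, 3, 2, 1 };
            var r = new float[a.Length];
            Add(a, b, r);
            Console.WriteLine(string.Join(", ", r)); // 10, 10, ..., 10
        }
    }

If the hardware has no SIMD support at all, Vector<float>.Count is still valid and the code degrades gracefully to scalar operations, which is exactly the "don't worry about the underlying architecture" property being described.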


In Peter Watts' novel Blindsight, human protagonists enter a completely alien world - not Star Trek "people with rubber ears", but different biology/consciousness-patterns/etc entirely.

As one of the plot points, 'aliens' (again, not the Star Trek humanoid kind) eventually 'hack' the human nervous/visual systems through various means (electromagnetic fields, visual patterns, movement types, etc) to hide things in plain sight.

My internal vision of scenes from that book is eerily similar to the videos in the article.

(As an aside, I would highly recommend Peter Watts to the Hacker News audience :)


As an Electron maintainer, I'll reiterate a warning I've given many people before: Your auto-updater and the underlying code-signing and notarization mechanisms are sacred. The recovery mechanisms for the entire system are extremely painful and often require embarrassing emails to customers. A compromised code-signing certificate is close to the top of my personal nightmares.

Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often make an argument against too much abstraction and long dependency chains in those processes.

If you're an Electron developer (like the developers of the apps mentioned), I recommend:

* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (A minimal config sketch follows this list.)

* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.

* You probably want to rotate your certificates if you ever gave anyone else access.

* Lastly, you should probably be the only one with the keys to your update server.
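
To make the Forge recommendation concrete, here's a minimal forge.config.js sketch. The identity string, team ID, and environment variable names are placeholders for your own values; the osxSign and osxNotarize options are handed straight to @electron/osx-sign and @electron/notarize, with no extra layer in between:

    // forge.config.js -- minimal sketch; identity and env var names are placeholders.
    module.exports = {
      packagerConfig: {
        // Passed directly to @electron/osx-sign. No magic in between.
        osxSign: {
          identity: 'Developer ID Application: Example Corp (TEAMID1234)',
        },
        // Passed to @electron/notarize; credentials come from the environment,
        // never from the repo.
        osxNotarize: {
          appleId: process.env.APPLE_ID,
          appleIdPassword: process.env.APPLE_APP_SPECIFIC_PASSWORD,
          teamId: process.env.APPLE_TEAM_ID,
        },
      },
      makers: [{ name: '@electron-forge/maker-zip' }],
    };

Keeping the whole signing path this short is the point: every option above maps one-to-one onto a tool Electron itself maintains, so there's nothing in the chain you can't audit.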


Really random question, but what is used to create the images in this blog post? I see this style quite often but have never been able to track down what is used.

It’s not my favorite of his books. Try Cryptonomicon (if you like history/math) or Anathem (if you like sci fi/math) or The Diamond Age (if you like sci fi).
