Hacker News | tatjam's comments

Wellll you could technically jam their uplink channels, but doing so may get the US on your doorstep quite quickly

This is a great plot for a B movie or a trashy military action book. “The bad guys are jamming GPS uplink and we only have two weeks until the almanacs are out of date and the whole system breaks down. Millions of innocent Americans will drive into rivers by accident.”

More to the point, to do that to this many satellites over this big an area you'd need nuclear-power-plant levels of power, and it would only degrade GPS a bit (the satellites' clocks slowly desync when the uplink is blocked)

My understanding was that each satellite broadcasts a coarse ephemeris for the whole network, and that that “almanac” isn’t accurate for very long (on the order of weeks). Without uploads to the satellites, those almanacs will go stale.

I don’t think the almanacs are necessary for the system to work, in theory. But I believe they’re commonly used by receivers to narrow down the range of possibilities when trying to find a PRN match for a signal they’re getting.

(I’ve dealt with GPS and similar navigation signals for work but am not an expert, this is just the impression I’ve gotten over a few years)


Confusingly, because it stops the "particle spawning" but not the animation! At first I thought it just changed the background to orange.

It helps to reload, as the setting is sticky.

Exactly: for the things that have been done on GitHub 10,000 times over, LLMs are pretty awesome and they speed up your job significantly (though it's arguable whether you'd be better off using some existing abstraction in that case).

But try to do something novel and... they become nearly useless. It doesn't have to be anything particularly difficult, just something niche enough that it's never been done before. The LLM will most likely hallucinate some methods and call it a day.

As a personal anecdote, I was doing some LTSpice simulations and tried to get Claude Sonnet to write a plot expression to convert reactance to apparent capacitance in an AC sweep. It hallucinated pretty much the entire thing and got the equation wrong (it assumed the source supplied unit current, while LTSpice models AC circuits with a unit-voltage source). The formula is surely on the internet somewhere, but apparently it has never been written down alongside the need to convert an impedance to capacitance!
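For reference, the conversion itself is short. A sketch of the math, assuming the impedance is obtained as Z = V/I from the sweep and the device behaves capacitively:

```latex
% For an ideal capacitor, Z = 1/(j\omega C) = -j/(\omega C), so the
% imaginary part (the reactance) determines the apparent capacitance:
\operatorname{Im}(Z) = -\frac{1}{\omega C}
\quad\Longrightarrow\quad
C_{\text{apparent}} = -\frac{1}{2\pi f \,\operatorname{Im}(Z)}
```

Note the sign: a capacitive load has negative reactance, which is exactly the kind of detail the model fumbled.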


This kind of stuff could trigger the next revolution in computing, as the theoretical minimum energy cost of computation is insignificant compared to what chips dissipate today. Imagine if we could make computers with near-zero energy dissipation! A "solid 3D" computer would then become possible, and Moore's law may keep going until we exhaust the new dimension ;)
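For scale, the Landauer limit puts the minimum energy to erase one bit at k_B T ln 2. At room temperature:

```latex
E_{\min} = k_B T \ln 2
\approx (1.38\times10^{-23}\,\mathrm{J/K})\,(300\,\mathrm{K})\,(0.693)
\approx 2.9\times10^{-21}\,\mathrm{J\ per\ bit\ erased}
```

That's many orders of magnitude below what current logic dissipates per operation, and reversible computing can in principle avoid even that cost by never erasing bits.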


Lean 4 seems to be pretty AI-usable, and you get insane guarantees (though LLMs do seem to make very heavy use of "sorry")
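For those unfamiliar: `sorry` is Lean's escape hatch. It closes any goal without a proof, so the file compiles (with a warning), but the guarantee is gone. A minimal illustration:

```lean
-- `sorry` accepts any goal, so even this false "theorem"
-- compiles with nothing more than a warning:
theorem every_nat_is_even (n : Nat) : ∃ k, n = 2 * k := by
  sorry
```

So when checking LLM-generated Lean, the first thing to grep for is `sorry` (and its cousin `admit`).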


As a quick example, compare doing embedded work with a C static uint8_t[MAX_BUFFER_SIZE] alongside a FreeRTOS semaphore and a counter for the number of bytes written, vs using Rust's heapless::Vec<u8, MAX_BUFFER_SIZE> behind an embassy Mutex.

The first will be a real pain, as you now have 3 global variables to keep consistent. The second will look pretty much like multi-threaded Rust running on a normal OS, with just some extra logic to handle the buffer filling up.

You can probably squeeze more performance out of the C code, especially if you know your system in depth, but (from experience) it's very easy to lose track of the program's state and end up shooting yourself in the foot.
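A sketch of the C side, using a pthread mutex as a stand-in for the FreeRTOS semaphore (all names here are made up for illustration):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_BUFFER_SIZE 64

/* Three globals whose invariants only hold if every access takes the lock. */
static uint8_t buffer[MAX_BUFFER_SIZE];
static size_t bytes_written = 0;
static pthread_mutex_t buffer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Append data; returns false if it would overflow the fixed-size buffer. */
static bool buffer_push(const uint8_t *data, size_t len) {
    bool ok = false;
    pthread_mutex_lock(&buffer_lock);
    if (bytes_written + len <= MAX_BUFFER_SIZE) {
        memcpy(&buffer[bytes_written], data, len);
        bytes_written += len;
        ok = true;
    }
    pthread_mutex_unlock(&buffer_lock);
    return ok;
}
```

The Rust version collapses all three globals into one value, roughly `Mutex<heapless::Vec<u8, MAX_BUFFER_SIZE>>`, so the length and the lock travel with the data and can't be accessed out of sync.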


Okay, fair enough.

So it's mostly about the absence of abstraction, in the C example? C++ would offer the same convenience (with std::mutex and std::array globals), but in C it's more of a hassle. Gotcha.

One more question because I'm curious - where would you anticipate C would be able to squeeze out more performance in the above example?


I think the key is that LLMs have no trouble mapping from one "embedding" of language to another (the task they perform best at!), and that appears extremely intelligent to us humans, but it certainly is not all there is to intelligence.

But just take a look at how LLMs struggle to handle dynamical, complex systems, such as the one in the "vending machine" paper published some time ago. Those kinds of tasks, which we humans tend to think of as "less intelligent" than, say, converting human language to a C++ implementation, seem to have some kind of higher (or at least different) complexity than the embedding mapping done by LLMs. Maybe that's what we typically refer to as creativity? And if so, modern LLMs certainly struggle with it!

Quite sci-fi that we have created a "mind" so alien we struggle to even agree on the word to define what it's doing :)


I'm looking at it as a general development machine. Sometimes I do stuff that requires GPUs, so if they can hit a competitive price vs. building a custom PC, I'm all in for the convenient form factor!


The Steam Machine as they're marketing it doesn't have enough RAM to be a proper development box once you start running IDEs, language servers and debuggers. They'd need to release a 64 or 128 GB SKU, unless the RAM is easily upgradable.


> [...] proper development box [...]

"Proper" is very subjective here. My entire workflow for developing 3D engines is covered a couple of times over by the announced Steam Machine specs. In fact, even when I was working in web backend development it would've covered it as a dedicated development machine, and that was with some pretty pathological dev setups, language servers that ate up way too much memory, etc.


RAM is upgradable according to Linus of LTT; standard modules fit. But you can't upgrade the graphics/CPU AFAIR.


> But you can't upgrade graphics/cpu AFAIR

It's a laptop APU with the graphics moved to a daughterboard, hence "semi-custom".


I think most people would agree that AlphaEvolve is not AGI, but any AGI system must be a bit like AlphaEvolve, in the sense that it must be able to iteratively interact with an external system towards some sort of goal stated both abstractly and using some metrics.

I like to think that the fundamental difference between AlphaEvolve and your typical genetic / optimization algorithms is the ability to work with the context of its goal in an abstract manner, instead of just the derivatives of the cost function with respect to the inputs, and thus to tackle problems with mind-boggling dimensionality.
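To make the contrast concrete, here's the skeleton of a classic optimization loop in C (everything here is a made-up toy). The AlphaEvolve twist is that `mutate` is replaced by an LLM proposing edits to a candidate program, guided by a goal stated in natural language, rather than a blind numeric perturbation:

```c
#include <stdlib.h>

/* Toy cost function, minimized at x = 3. A real system would
 * run a candidate program and score it against a metric. */
static double cost(double x) { return (x - 3.0) * (x - 3.0); }

/* Blind numeric mutation: a random nudge in [-0.5, 0.5].
 * AlphaEvolve would instead ask an LLM for a code edit. */
static double mutate(double x) {
    return x + ((double)rand() / RAND_MAX - 0.5);
}

/* Hill-climbing loop: keep only candidates the metric says improved. */
static double evolve(double start, int iters) {
    double best = start;
    for (int i = 0; i < iters; i++) {
        double candidate = mutate(best);
        if (cost(candidate) < cost(best))
            best = candidate;
    }
    return best;
}
```

The loop structure (propose, score, keep the best) is the same in both; what changes is how rich the proposal step is.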


You can just use the GPL; then it's free, but big corps can't profit from your labour so easily.


But not-so-big corps can still profit from it, so I'm still working for free.

Also, I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes they went as far as asking for a permissive licence, because they did not want my copyleft one. They never offered to pay for anything, though.

