
My example is lighting technology:

Wood fires were the only option for something like a few hundred thousand years.

Oil lamps for millennia.

Tallow or beeswax candles are modern technology, appearing after the fall of the Roman Empire.

Gas lighting was widespread for less than a century.

Incandescent lightbulbs for another century, though fluorescent tubes started displacing them just a few decades after their introduction.

Cold cathode fluorescents saw mainstream use for about two decades.

LEDs completely displaced almost all previous forms of lighting in less than a decade.

I recently read about a new form of lighting developed and commercialised in just a few years: https://www.science.org/doi/10.1126/sciadv.adf3737


This is an excellent example to illustrate an S-curve. A photon of a given wavelength carries a fixed amount of energy; it cannot be emitted with less. There is a 100% efficiency barrier that cannot be surpassed no matter how smart you are.
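
To put rough numbers on how close we already are to that ceiling, here's a back-of-envelope sketch (the efficacy figures are approximate, and the ~300 lm/W "white light" limit is an assumption that depends on spectrum and CRI; the hard theoretical ceiling is 683 lm/W for monochromatic 555 nm green):

    # Back-of-envelope: how much headroom is left in lighting efficiency?
    # All figures are rough assumptions for illustration.
    WHITE_LIGHT_CEILING = 300.0  # lm/W, assumed practical limit for good white light

    sources = {
        "candle":             0.3,
        "incandescent":      15.0,
        "fluorescent tube":  90.0,
        "white LED (today)": 180.0,
    }

    for name, lm_per_watt in sources.items():
        share = lm_per_watt / WHITE_LIGHT_CEILING
        print(f"{name:18} {lm_per_watt:6.1f} lm/W  ~{share:.0%} of the ceiling")

The last doubling or so is all that physics allows, which is exactly where an S-curve flattens out.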

I think we can stop building new streetlights at the moment we have full daylight illumination on the visible spectrum 24x7 in urban areas. We’ll probably settle for much less and be happy with that.

If we need more light, we can deploy more power generators.


Sure, but the technology lifetimes and adoption timelines have compressed exponentially despite that.

Efficiency is not the only relevant metric; there's also cost, flexibility, lifetime/durability, CRI, etc...

For example, OLEDs are (literally) flexible, but burn out faster than LEDs and are less efficient.

As another example, the light sources for televisions have undergone nearly annual changes! They started with CCFL backlights, then edge-lit white LEDs, then blue LEDs with quantum dots, then OLED panels, then backlights made of controllable grids of LEDs, mini-LED, micro-LED, RGB micro-LED, etc...

We're up to something like 10K dimming zones with the latest TCL panels and 100K is just around the corner.


Meanwhile, here in Australia, I spoke with small business owners (cafes, gyms, etc...) about their preparedness for COVID lockdowns shortly before our first one. All of them just gave me a wide-eyed look and a mumbled "Lockdowns? Really? Here? You think so?"

More than half of them went bankrupt.

One guy kept dumping money into a new gym buildout mere weeks before the months-long lockdowns commenced.


I've seen Sam Altman make similar claims in interviews, and I now interpret every statement from an OpenAI employee (and especially Sam) as if an Aes Sedai had said it.

I.e.: "keep API model behavior constant" says nothing about the consumer ChatGPT web app, mobile apps, third-party integrations, etc.

Similarly, it might mean very specifically that a certain timestamped model snapshot remains constant, while the generic "-latest" (or whatever) model name auto-updates "for your convenience" to faster performance achieved through quantisation or reduced thinking time.
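
As a concrete illustration of the pinned-snapshot vs. floating-alias distinction (a minimal sketch using the OpenAI Python SDK; the model names are just illustrative examples and may not match whatever is current):

    # Minimal sketch: a dated snapshot should stay fixed, while an alias
    # silently tracks whatever is currently served under that name.
    # Model names are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PINNED = "gpt-4o-2024-08-06"    # dated snapshot
    FLOATING = "chatgpt-4o-latest"  # auto-updating alias

    def ask(model: str, prompt: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # "Constant behaviour" may only be promised for the pinned name.
    print(ask(PINNED, "One sentence on the history of gas lighting."))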

You might be telling the full, unvarnished truth, but after many similar claims from OpenAI that turned out to be only technically true, I remain sceptical.


That's a fair suspicion - I'll freely acknowledge that I am biased towards saying things that are simple and known, and I steer away from topics that feel too proprietary, messy, etc.

ChatGPT model behavior can definitely change over time. We share release notes here (https://help.openai.com/en/articles/6825453-chatgpt-release-...), and we also make changes or run A/B tests that aren't reported there. Plus, ChatGPT has memory, so as you use it, its behavior can technically change even with no changes on our end.

That said, I do my best to be honest and communicate the way that I would want someone to communicate with me.


There are two very distinct kinds of AI workloads that go into data centres:

    1. Inference
    2. Training
Inference just might be doable in space because it is "embarrassingly parallel" and can be deployed as a swarm of thousands of satellites, each carrying the equivalent of a single compute node with 8x GPUs. The inputs and outputs are just text, which is low bandwidth. The model parameters only need to be uploaded a few times a year, if that. Not much storage is required, just a bit of flash for the model, caching, logging, and the like. This is very similar to a Starlink satellite, just with bigger solar panels and some additional radiative cooling. Realistically, a spacecraft like this would use inference-optimised chips, not power-hungry general-purpose NVIDIA GPUs, LPDDR5 instead of HBM, etc...
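
To put rough numbers on the "low bandwidth" claim (every figure here is an assumption, just to show the orders of magnitude):

    # Order-of-magnitude check: text I/O of one 8-GPU inference node
    # vs. a Starlink-class link. All numbers are assumptions.
    tokens_per_second = 10_000   # aggregate generation rate of the node
    bytes_per_token   = 4        # rough average for plain text
    overhead_factor   = 10       # framing, retries, telemetry, etc.

    io_bps   = tokens_per_second * bytes_per_token * 8 * overhead_factor
    link_bps = 10e9              # ~10 Gbit/s class satellite link

    print(f"inference I/O : {io_bps / 1e6:8.1f} Mbit/s")
    print(f"link capacity : {link_bps / 1e6:8.1f} Mbit/s")
    print(f"utilisation   : {io_bps / link_bps:.3%}")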

Training is a whole other ballgame. It is parallelisable, sure, but only through heroic efforts involving fantastically expensive network switches with petabits of aggregated bandwidth. It also needs more general-purpose GPUs, access to petabytes of data, etc. The name of the game here is to bring a hundred thousand or more GPUs into close proximity and connect them with a terabit or more per GPU to exchange data. This cannot be put into orbit with any near-future technologies! It would be a giant satellite with square kilometers of solar and cooling panels. It would certainly get hit sooner or later by space debris, not to mention the hazard it poses to other satellites.

The problem with putting inference-only workloads into space is that training still needs to go somewhere, and current AI data centres pull double duty: they're usable for both training and inference, or any mix of the two. The greatest challenge is that training a bleeding-edge model needs the biggest possible clusters (approaching a million GPUs!) in one place, and that is the problem -- few places in the world can provide the ~gigawatt of power to light up something that big. Again, the problem here is that training workloads can't be spread out.
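
Back-of-envelope on why that scale doesn't fit in orbit (again, all assumed figures):

    # Why a frontier training cluster doesn't fit on a satellite:
    # total power draw and the solar array needed to feed it. Rough assumptions.
    gpus            = 1_000_000
    watts_per_gpu   = 1_000   # GPU + networking + cooling overhead
    usable_w_per_m2 = 300     # usable output of space solar panels, roughly

    total_power_w = gpus * watts_per_gpu
    array_area_m2 = total_power_w / usable_w_per_m2

    print(f"cluster power : {total_power_w / 1e9:.1f} GW")
    print(f"solar array   : {array_area_m2 / 1e6:.1f} km^2 (plus comparable radiator area)")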

Space solves the "wrong" problem! We can distribute inference across thousands of datacentre locations here on Earth, each needing just hundreds of kilowatts. That's no problem.

It's the giaaaant clusters everyone is trying to build that are the problem.


Arguably the dotnet SDK now works better on Linux than on Windows. For example, Windows Containers are "supported" only in the marketing-checkbox sense.

This kind of thing always reminds me of G. H. Hardy, a mathematician who famously took pride in working on pure number theory, which he described as having "no practical use" and therefore being morally superior to applied mathematics connected to war or industry.

That "useless" number theory ended up as the theoretical bedrock upon which modern encryption algorithms are built, enabling trillions of dollars of economic activity (as well as spying and other nefarious or ethically questionable activities).

I noticed (as have others) that even the purest of pure fundamental research has this oddly persistent pattern of becoming applied to everyday problems sooner or later.

The x-ray telescope mirror design used for Chandra -- motivated only by pure intellectual curiosity -- ended up being a key stepping stone towards ASML's TWINSCAN tools that use focused x-rays for chip lithography. Arguably this is more important to the global economy now than even oil is!

Similarly, particle accelerators like those at CERN might provide the next chip-lithography beam sources. The technologies being developed for research physicists, such as laser-driven "desktop" accelerators, might be just the ticket to replace tin-droplet x-ray light sources.

Who knows?

We certainly won't if we don't build these things for pure research first to find out!


A recent development is High Bandwidth Flash (HBF), which stacks flash with an HBM-style interface but uses less power and has far higher density.

If Intel made an equivalent high bandwidth Optane product, they’d immediately corner the GPU memory market.


An interview with a Silicon Valley big shot finally explained this.

The concept is that you can convince smart people to work extra hard for you by selling them the story that no-no-no, they’re not merely tweaking some dystopian algorithm to sell Chinese plastic crap to people, they’re saving the world.

Once you recognise this pattern, you’ll see it everywhere: Zuckerberg, Sam Altman, and Elon all do it.

Hence the “AI safety” rhetoric. Those CEOs will gladly take the safeties off and make an army of Terminators to sell to the highest bidder! The safety talk is for their employees, to convince them to work like slaves to “save the world”.


It has everything to do with DRM. It’s not “dual use” technology. It has one use, and this is it.

We're gonna be using this to validate someone didn't move your login to another device. Which will protect you from session hijacking. Your work stuff will start requiring it. Your media accounts will too. Or else Linux will simply be locked out from major services. DRM is already in your browser. And literally has no connection to identity attestation.

Who is “we”? So we can know who to avoid.

All corporate SSO providers.

Another common issue on corporate-issued workstation laptops is that they don’t install the proper GPU drivers. The basic ones that ship with the OS are awful, but work just well enough that people don’t notice that they’re missing something important.
