
It’s hard not to wonder whether better technology could someday help stop tragedies like this.

No. Better technology only makes the killing more efficient. We need better humanity, better morals, better policing of criminals in power.

That's misguided. Technology is a tool. Tools can be used for good or bad. The hammer that builds a hospital can also crack a skull open.

No, we need better people controlling the tools.


The Israeli army is famous for its tech?

The Holocaust was built on IBM, the genocide in Gaza is built on Azure. Technology won't be on the side of stopping these tragedies.


Well, right now the "better technology" is Israel's use of the "Lavender" AI to designate people to kill because they are "likely" to be Hamas supporters.

And yes, they probably could have used better technology to realize that the people in the car were not a danger to them. But that would imply they actually want to avoid killing civilians instead of looking for any excuse to shoot them.


"I don't give a shit about dead Palestinian kids" is... quite a flex. Just chipping more pieces from a damaged soul.

Short answer: several operators already do. The barrier isn’t technical, it’s proximity to a municipal wastewater source and willingness to invest in on-site treatment (pre-filtration, ultrafiltration, partial RO, ongoing biocide dosing). Recycled water typically costs 30-50% less than potable once the treatment infrastructure is in place.

It’s already happening at scale.

AWS expanded recycled water use to 120+ facilities by late 2025, Google’s Douglas County GA site has used 100% recycled municipal wastewater for cooling since 2008, and Microsoft built a $31M water reuse utility in Quincy WA that cut their potable water use by 97%.

The main technical challenges are higher mineral loads causing scaling on heat exchange surfaces and increased Legionella risk from biofilm formation, but these are well-understood treatment problems with roughly a 6-year payback on the additional infrastructure.
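Back-of-envelope on that payback, with illustrative numbers (every input below is assumed for the sketch, not taken from any operator):

    # Simple payback sketch for on-site reuse treatment.
    # All inputs are hypothetical placeholders.
    capex = 5_000_000          # treatment build-out, USD (assumed)
    annual_use_m3 = 3_000_000  # cooling water per year, m^3 (assumed)
    potable_price = 1.00       # USD per m^3 of potable supply (assumed)
    discount = 0.40            # recycled ~40% cheaper, midpoint of 30-50%
    extra_opex = 300_000       # added O&M: filtration, RO, biocide (assumed)

    annual_savings = annual_use_m3 * potable_price * discount - extra_opex
    print(f"simple payback: {capex / annual_savings:.1f} years")  # ~5.6

With these toy numbers the payback is volume-dominated: halve the water use and it more than doubles, since the fixed opex doesn't shrink with it.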


Indirect evaporative systems average around COP 17.5 and dew-point systems can hit ~30, so your numbers check out for the best cases. Worth noting that direct liquid cooling with dry heat rejection is now achieving PUE 1.03-1.06 with near-zero water, which narrows the effective gap considerably for the high-density AI racks that are driving most new builds.
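To make the COP-to-PUE link explicit: if cooling were the only overhead, its contribution to PUE is roughly 1 + 1/COP (a simplification that ignores power distribution and other losses):

    # Cooling-only PUE implied by a given COP; other overheads ignored.
    for label, cop in [("indirect evaporative", 17.5), ("dew-point", 30.0)]:
        print(f"{label}: COP {cop} -> cooling-only PUE ~{1 + 1 / cop:.3f}")
    # -> ~1.057 and ~1.033, the same 1.03-1.06 band quoted above for
    #    direct liquid cooling with dry heat rejection.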

Vertiv published a 10-year TCO analysis for a 3 MW facility that found waterless systems actually achieved lower overall cost despite higher energy draw, because water treatment, legionella testing, RO filtration, and cooling tower maintenance add up fast.

The PUE penalty is typically +0.1 to +0.4, so roughly 9-36% more total energy for a hyperscaler currently at PUE ~1.1.
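Spelled out, holding the IT load constant:

    # Extra total energy implied by a PUE penalty, IT load fixed.
    base_pue = 1.1
    for penalty in (0.1, 0.4):
        print(f"+{penalty} PUE -> {penalty / base_pue:.0%} more total energy")
    # -> ~9% and ~36%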

Microsoft announced all new builds from late 2027 onward will use zero water for cooling via closed-loop liquid cooling, which suggests the economics have already tipped for new construction.


The data centers in WA cluster in Quincy and Moses Lake in the Columbia Basin, which gets 7-9 inches of rain per year. The town of Quincy (pop ~8,200) uses groundwater at rates equivalent to a city of 30,000, and during the 2021 drought the irrigation district cut off data center pumps entirely.

You’re right that WA is a reasonable place relative to alternatives, and data center water use is a rounding error next to agriculture, but the strain is real at the municipal infrastructure level in the specific towns hosting these facilities.


75% of Japan’s oil, 60% of India’s oil, 40% of China’s oil is what I’ve heard.

I think what you're describing is vertical integration rather than the walled garden specifically. The walled garden is the App Store restrictions, iMessage lock-in, that kind of thing. What made the Neo possible is that Apple controls the silicon, the OS, the firmware, and the industrial design as a single unit. They could put a phone chip in a laptop form factor and have it feel coherent because there's no seam between the hardware and software teams.

The distinction matters because it changes what the lesson is for the rest of the industry. You don't need a walled garden to compete here. You need to own enough of the stack that you can make aggressive tradeoffs (like shipping 8GB and an A18 Pro) without everything falling apart at the integration boundaries. Microsoft can't do that because they don't make the hardware. Dell and Lenovo can't do that because they don't make the OS. Qualcomm can't do that because they don't control the software ecosystem.

The one company that could theoretically pull this off is Google with ChromeOS on their own Tensor chips, and the fact that they haven't is probably the more interesting question than why Asus is shocked.


>The one company that could theoretically pull this off is Google with ChromeOS on their own Tensor chips, and the fact that they haven't is probably the more interesting question than why Asus is shocked.

Successful Chromebooks have always been the throwaway $200 models. Higher-end ones like the Pixelbook served more as flagship devices to prove the platform could do more, but were never really marketed.

I don’t think Google’s gonna make a souped-up Chromebook because they know their place. Chromebooks are entirely internet-dependent devices with little brand recognition and no serious software. The Neo sits somewhere in between: Apple has the brand recognition and macOS.


> no serious software.

What software do you want to be considered serious? With the addition of Linux/Crostini, there's 3D modeling, CAD, NLE video editing, compilers, and everything else.


All the professional software that’s capable of running on the MacBook Neo, you know, Final Cut etc.

What's the etc? DaVinci Resolve is available on Linux and is an industry standard for video editing. Blender's no slouch either these days. I'll give you Ableton though.

This is the most interesting point in the thread to me. Tolerance stack-up is the reason tight per-part tolerances matter at all. A single brick being precise is table stakes for injection molding. The hard problem is what happens when you compose hundreds of them. The decoupling strategy you're describing is really similar to how you handle error accumulation in any large composed system. You can't make individual components perfect enough to avoid drift at scale, so you introduce boundaries where the accumulated error gets absorbed rather than propagated. In Lego's case that means designing joints between sections that are forgiving enough to accommodate the stack-up from each chunk independently.

It's also why knockoff bricks can feel fine for small builds and then fall apart (sometimes literally) on larger ones. If your per-part tolerance is 3x worse, it doesn't matter much for a 20-piece build, but for a 2000-piece build your cumulative error budget is blown long before you're done. The failure mode isn't that any individual brick is bad, it's that the composition doesn't hold.
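A toy model of the stack-up, assuming independent zero-mean per-part errors (the tolerances here are illustrative, not Lego's actual numbers): worst case grows linearly with part count, the statistical expectation only as sqrt(N), but 3x the per-part tolerance is 3x the drift either way.

    # Toy tolerance stack-up: n bricks in a row, each contributing an
    # independent dimensional error of +/- tol_mm.
    import math

    def stackup(n, tol_mm):
        worst = n * tol_mm           # every error in the same direction
        rss = math.sqrt(n) * tol_mm  # root-sum-square for independent errors
        return worst, rss

    for n in (20, 2000):
        for label, tol in [("tight", 0.01), ("3x worse", 0.03)]:
            worst, rss = stackup(n, tol)
            print(f"{n:>4} parts, {label:8}: worst {worst:5.1f} mm, RSS {rss:.2f} mm")

At 20 parts even the sloppy tolerance accumulates only ~0.13 mm RSS; at 2000 parts it's ~1.3 mm of expected drift, which is why the failure only shows up on big builds.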

I'd be curious whether Lego publishes or talks about those chunk size design rules anywhere. That seems like the actually interesting engineering story, more so than the per-part tolerance numbers that get repeated in every article about them.


The quality cliff question is the right one to be asking. There's a pattern in systems work where something that scales cleanly in theory hits emergent failure modes at production scale that weren't visible in smaller tests. The loss landscape concern is exactly that kind of thing, and nobody has actually run the experiment.

That said, I think the comparison to improving GGUF quantization isn't quite apples to apples. Post-training quantization is compressing a model that already learned its representations in high precision. Native ternary training is making an architectural bet that the model can learn equally expressive representations under a much tighter constraint from the start. Those are different propositions with different scaling characteristics. The BitNet papers suggest the native approach wins at small scale, but that could easily be because the quantization baselines they compared against (Llama 3 at 1.58 bits) were just bad. A full-precision model wasn't designed to survive that level of compression.
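For concreteness, the quantizer at the heart of the b1.58 recipe is roughly absmean scaling plus round-and-clip; native training wraps it in a straight-through estimator, which this sketch omits:

    # Absmean ternary quantization, as sketched in the BitNet b1.58 paper.
    import numpy as np

    def ternarize(w, eps=1e-8):
        scale = np.abs(w).mean() + eps             # per-tensor absmean scale
        w_q = np.clip(np.round(w / scale), -1, 1)  # entries in {-1, 0, +1}
        return w_q, scale                          # forward uses w_q * scale

    w = 0.02 * np.random.randn(4, 4)
    w_q, scale = ternarize(w)
    print(w_q)

Post-training quantization applies something like this to weights that were learned in full precision; the native bet is that the model can learn under the constraint from step one.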

The real tell will be whether anyone with serious compute (not Microsoft, apparently) decides the potential inference cost savings justify a full training run. The framework existing lowers one barrier, but the more important barrier is that a failed 100B training run is extremely expensive, and right now there's not enough evidence to derisk it. Two years of framework polish without a flagship model is a notable absence.

