Part of the reason you don't see them more is that commercial satellite mega-constellations (like Starlink) work against long exposure times by literally crowding and brightening our view of space. (1)

1) https://www.nature.com/articles/s41586-025-09759-5


Not really sure how this has anything to do with space-based platforms like Chandra (which is X-ray) and Hubble, which is well above Starlink. Also, Starlink has only been problematic for a couple of years, while the ground-based observatories had clean skies for decades before.

This just really feels like someone trying to interject a pet peeve. Whether the peeve is valid or not, it's not the problem here.


It's relevant because ground-based telescopes add observational capacity. If a ground-based telescope can't get a good view, that's when you queue up Chandra or James Webb (Hubble is not the same type of telescope, and its workload is not interchangeable).

Astronomers have thousands of interesting things they would like to point their telescopes at. There are thousands of capable ground stations that could take the easy targets, and only two X-ray satellites, which should be reserved for the highest-value targets where absolute clarity and resolution are required. But if you start obstructing those ground stations, the workload must be taken over by just two satellites.

Ground stations are valued because they help solve the capacity planning problem. More usable telescopes === more observation time. Having more ground stations frees up the two satellite telescopes for truly stunning shots.


> Chandra or James Webb (Hubble is not the same type of telescope

Chandra and James Webb are not the same type of telescope either. How is this relevant?


Hubble is actually at about the same altitude as Starlink, 340 mi. There have been proposals to boost Hubble to a higher altitude so it doesn't reenter next decade.

But since Hubble doesn't look towards the Earth, it won't see as many satellites as observers on the ground do.


One of the main concerns of astronomers, and one of the benefits of Chandra and James Webb operating in orbit, is the prevalence of commercial satellite constellations ruining the view of the cosmos. (1)

1) https://www.nature.com/articles/s41586-025-09759-5


I worked for the Chandra Operations Control Center in Burlington, MA for a while. The team was a fascinating collaboration between Northrop Grumman, the Smithsonian, NASA, and Harvard.

The telescope was launched into orbit in 1999 and has been of tremendous value to astrophysics. Although it is showing signs of its age, and it is not as capable or cost-effective to operate as the James Webb telescope, it still offers scientists much-needed capacity and the logistical flexibility that comes with having two telescopes in orbit instead of just one.

One of the fascinating parts about the telescope is its resilience and the dedication of the staff who control it. For example, to maximize the usable lifespan of the anti-radiation shielding, and to prevent radiation damage to sensitive instruments, the position of the craft is constantly being planned and adjusted relative to the sun to balance radiation exposure and maximize observation time at various targets. Much like telling a small child "don't stare directly at the sun" as they take in as much information about their surroundings as possible.


Could you please name one non-food product in America that a typical consumer could buy that doesn't subsidize corporate shareholders? Name one thing the average American can buy that contributes to gainful employment at the expense of corporate profit.

From where I'm sitting, the only manufacturing that exists in the USA is for subassemblies or components that are purchased by larger companies on a contract basis and manufactured by lower-middle-class citizens. Boeing and GE are examples. And the only reason Boeing buys domestically is to limit its liability, reduce labor costs, and protect its IP. If the America you're looking for existed, Boeing would happily pay twice as much for labor to make components in house. If that America existed, your television would be made here too, by people who weren't themselves being subsidized by Food Stamps.

There are no consumer commodity manufacturers in the USA who provide gainful employment without significant consideration towards corporate profit. There is basically nothing at Wal-Mart that you can buy that is made in the USA by people who are living in financial comfort. That's just not how late stage capitalism works.


This, 100%.

I'd like to add my explanation for a similar failure of an HP ProLiant server I encountered.

Sometimes hardware can fail during a long uptime and not become a problem until the next reboot. Consider a piece of hardware with 100 features. During typical use, the hardware may only use 50 of those features. Imagine one of the unused features has failed. This would not cause a catastrophic failure during typical use, but on startup (which rarely occurs) that feature is necessary and the system will not boot without it. If it could boot, it could still perform its task, because the damaged feature is not needed during normal operation. But it can't get past the boot phase, where the feature is required.

Tl;dr the system actually failed months ago and the user didn't notice because the missing feature was not needed again until the next reboot.


Is there a good reason why upgrades need to stress-test the whole system? Can't they go slowly, throttling resource usage to background levels?

They involve heavy CPU use and stress the whole system completely unnecessarily; the system easily sees the highest temperatures the device has ever seen during these stress tests. If something fails or gets corrupted under that strain, it's a system-level corruption...

Incidentally, Linux kernel upgrades are no better. During DKMS updates the CPU load skyrockets, and then a reboot is always sketchy. There's no guarantee that something won't go wrong; a secure boot issue after a kernel upgrade in particular can be a nightmare.


To answer your question, it helps to explain what the upgrade process entails.

In the case of Linux DKMS updates: DKMS is re-compiling your installed kernel modules to match the new kernel. Sometimes a kernel update will also update the system compiler. In that instance it can be beneficial for performance or stability to have all your existing modules recompiled with the new version of the compiler. The new kernel comes with a new build environment, which DKMS uses to recompile existing kernel modules to ensure stability and consistency with that new kernel and build system.

Also, kernel modules and drivers may have many code paths that should only be run on specific kernel versions. This is called 'conditional compilation', and it is a technique programmers use to develop cross-platform software. Think of it as one set of source code files that generates wildly different binaries depending on the machine that compiled it. By recompiling the source code after the new kernel is installed, the resulting binary may be drastically different from the one compiled under the previous kernel. Source code compiled on a 10-year-old kernel might contain different code paths and routines than the same source code compiled on the latest kernel.
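For illustration, here's a minimal sketch of what that looks like in an out-of-tree module. The module itself and its log messages are hypothetical, but LINUX_VERSION_CODE and KERNEL_VERSION are the real macros from <linux/version.h>:

    #include <linux/version.h>
    #include <linux/module.h>
    #include <linux/init.h>

    static int __init example_init(void)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 0, 0)
            /* Headers from a 6.x kernel: compile the newer code path. */
            pr_info("example: built against the post-6.0 API\n");
    #else
            /* Older kernel headers: compile the legacy code path. */
            pr_info("example: built against the legacy API\n");
    #endif
            return 0;
    }

    static void __exit example_exit(void)
    {
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

The preprocessor discards one branch entirely at compile time, which is why DKMS has to recompile the module against each new kernel's headers rather than reuse the old binary.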

Compiling source code is incredibly taxing on the CPU and takes significantly longer when CPU usage is throttled. Compiling large modules on extremely slow systems can take hours. Managing hardware health and temperatures is mostly a hardware-level decision controlled by firmware on the hardware itself. That is usually abstracted away from software developers, who need to be certain that the machine running their code is functional and stable enough to run it. This is why we have "minimum hardware requirements."

Imagine if every piece of software contained code to monitor and manage CPU cooling. You would have programs fighting each other over hardware priorities. You would have different systems of control, some more effective and secure than others. Instead the hardware is designed to do this job intrinsically, and developers are free to focus on the output of their code on a healthy, stable system. If a particular system is not stable, that falls on the administrator of that system. By separating the responsibility between software, hardware, and implementation we have clear boundaries between who cares about what, and a cohesive operating environment.


The default could be that a background upgrade should not be a foreground stress test.

Imagine you are driving a car and from time to time, without any warning, it suddenly starts accelerating and decelerating aggressively. Your powertrain, engine, and brakes are getting wear and tear, oh and at random that car also spins out and rolls, killing everyone inside (data loss).

This is roughly how current unattended upgrades work.


My concern is the computerization of vehicles in general. The issue is not entirely with the telemetry itself, as you frame it. My issue is "what happens when the telemetry is not available?" You, and perhaps your friend, seem to be framing the problem as though the concern is that the car is filled with "spyware." My issue is that the car is filled with "DRM" from the manufacturer. When I buy a car, I expect to own that car entirely, forever. If I wanted to rent the right to someone else's car... I'd lease a car.

Musk touts the CyberTruck as "the perfect armageddon vehicle" but if you have no cell phone service how do you charge the truck? What if Tesla disappears, or GCP is down, or WW3 actually happens and the datacenters go dark? Can you operate the vehicle? What if the power goes out because... Armageddon. How do you fuel the vehicle?

What if Musk sees what I said about him on social media and accuses me of violating the TOS? Will he disable my vehicle remotely? I've seen this happen in the real world when a machine shop missed its payment to Haas.

In a real armageddon, my 1997 shitbox would still function. My 2013 F150 would function right up until the EMP hit. A 2025 EV probably would not make it to a fueling source within 24 hours after the power goes out.


Have you tried to fuel an ICE vehicle in a power outage? I waited a week in the Congo.

Getting fuel out of underground tanks and paying for it are nontrivial with no power.

I can charge an EV off my solar panels.


And solar is just one of many ways of generating electricity. Wind turbines continue to work, as do hydroelectric plants (and, on a smaller scale, water wheels). Worst case scenario, you can burn more readily accessible carbon-based fuels like wood to make steam to generate power. Vehicles that depend on the extraction and refinement of petroleum are actually quite limited in comparison.

On a side note, you should look up "wood gas". There are YouTube videos of 110v generators running off wood gas, and while it takes a bit of setup, it's within the realm of what a country person could do in a weekend or two. By weight, I think I remember that it takes 4x as much wood as gasoline to get the same energy. So while a generator takes 6 lbs of gasoline (a gallon) to give you ~5 kWh, it takes 25 lbs of wood. Sounds bad til you realize how much a tree weighs.
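Rough sanity check on those figures (back-of-envelope, using typical heating values; the exact numbers depend on the wood and the generator):

    gasoline: ~46 MJ/kg ≈ 5.8 kWh/lb thermal
    air-dried wood: ~16 MJ/kg ≈ 2.0 kWh/lb thermal
    6.3 lb gasoline ≈ 37 kWh thermal; at ~15% generator efficiency ≈ 5.5 kWh electric

The straight weight ratio is about 3x; gasifier and conversion losses are what push it toward 4x.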

The battery in a Model X is 57,000 watt-hours. To charge that with an 800-watt consumer-grade wind turbine would take 71 hours. Try again.
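That's assuming the turbine delivers its full 800 W rating the entire time and ignoring charging losses:

    57,000 Wh / 800 W ≈ 71.3 hours

Real-world average wind output is a fraction of nameplate, so the practical number is worse.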

71 hours is waaaay shorter than the week I waited in the gas lineups in Congo, Sudan, Ethiopia, and more.

And even after that week of waiting I was only allowed to buy 20 litres max.


I have a diesel car and in theory you could fuel it with used cooking oil if you needed to.

I don't think I'm likely to do that, as I think it would gum up the engine, but I know people who have told me they've done it with filtration.

For the record I would like to switch to an electric car next but my current diesel seems to have a lot of life in it yet.


You find another car and use a center punch on the gas tank. Drain it into a container and fuel whatever you want with it.

You'll be charging a 57 kWh Model X for AT LEAST an entire day using solar panels that would barely fit in the trunk.


And, ah, during a time of gas shortages, exactly which cars have gas in them just sitting around not being used, and who is going to let you do that to their car?

I can tell you’ve never actually lived this reality, you’re just making stuff up.

I had gas pumps not work due to lack of electricity (and lack of gas in the tanks) on half a dozen occasions in different countries.


You forgot to bring your pump, spotter, and nail-tipped bat. If you had those I'm sure you would have been able to fight your way through and have your go juice in no more than 15 minutes.

Not in a line of hundreds of people waiting days.

In armageddon, you would run out of fuel around day 3 and then that's it.

A CyberTruck will charge just fine from anything that can generate the proper AC or DC, no phoning home needed. Many home solar installations can work off grid and charge your car.


To charge a 57,000 watt-hour Tesla with an 800 W consumer solar panel would take 71 hours of sunlight. There are not 24 hours of sunlight in a single day. This also doesn't account for battery management, which consumes power to keep the battery conditioned. So it would take a week to charge a Tesla with equipment that you could carry inside the car.

Strange that you assume a single solar panel? If you’re installing solar at home, wouldn’t you install more than one panel?

I have a 15 kW setup at home (kinda large, but adjust the numbers to what's reasonable for you); it should charge the Tesla in less than 4 hours.
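The arithmetic, using the 57 kWh pack figure from upthread and ignoring charger and inverter limits (in practice the car's onboard AC charger will cap the rate below the array's peak):

    57 kWh / 15 kW = 3.8 hours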


Watching these two behemoths wrestle over the future of a space we all share, and wondering if they will need to loop in regulators on one side or another, convinces me that we shouldn't have gifted all of our digital infrastructure to just 2 companies. Including our economy, healthcare, government, and civil infrastructure. We've put all our eggs into only a couple of very greedy, impossible-to-audit baskets. We've really done this all to ourselves. We've raced ourselves all the way to the bottom.


There is nothing stopping other CDN/DNS providers from implementing similar services and tools to what Cloudflare offers. Part of the reason CF has become so popular is because so many of their competitors don't offer nearly the same convenience for routine tasks & protection.


> we shouldn't have gifted all of our digital infrastructure to just 2 companies

We didn't. Just as we didn't gift all our chocolate-making infrastructure to Hershey's and Cadbury's.


Hey, did you forget the market is consumer-driven?


It used to be just Ma Bell


After 50 years of effectively zero activity, we had some glimmers of anti-trust enforcement under the Biden admin. But then eggs were expensive in the summer of 2024, so we decided it was actually no problem for these half-dozen companies to control our speech and economy, and here we are.


I understand the idea behind it and am still kinda chewing on the scope of it all. It will probably break some enterprise applications and cause some help desk or group policy/profile headaches for some.

It would be nice to know when a site is probing the local network. But by the same token, here is Google once again putting barriers on self-sufficiency and using them to promote their PaaS goals.

They'll gladly narc on your self-hosted application doing what it's supposed to do, but what about the 23 separate calls to Google CDN, ads, fonts, etc. that every website has your browser make?

I tend to believe that this particular functionality is no longer of any use to Google, which is why they want to deprecate it to raise the barrier of entry for others.


Idk, I like the idea of my browser warning me when a random website I visit tries to talk to my network. If there's a legitimate reason I can still click yes. This is orthogonal to any ads and data collection.


I have this today from macOS. To me it feels more appropriate to have the OS attempt to secure running applications.


No you don't - you get a single permission prompt for the entire browser. You definitely don't get any per-site permission options from the OS


Ah I misunderstood, thank you


I agree that any newly proposed standards for the web coming from Google should be met with a skeptical eye — they aren’t good stewards IMO and are usually self-serving.

I’d be interested in hearing what the folks at Ladybird think of this proposal.


Just looked this up to see an example of the behavior described. https://www.youtube.com/watch?v=0SlUlENAggE

As a side note, BeamNG.Drive has the most accurate throttle response and audio response of any driving game I've ever tried. You can almost feel the car pull vacuum (or build boost).


Considering how terrible Android, ChromeOS, and GCP are in every conceivable way, I'm surprised Google even has time to quantify the quality of Microsoft products.


This is normal propaganda: make the other one look worse.

TBH, the quality of Windows and Android is about the same.

