
A service outage, meaning Starlink is down globally.

Anecdotally, my Starlink is currently using more power than usual. And when I first looked at the app, it mentioned having gotten a new public IP address.


> Anecdotally my starlink is using more power currently

That is interesting info - don't phones typically use a lot of power when they're trying to find a tower?


Yea. Mine is down and listed as heating. And it's getting a public IP, not the CGNATed 100.x.x.x address I usually get. Weird...


I get new public IPs fairly frequently (maybe once every two weeks or so?)


Indeed, but in this case it coincided with Starlink going down. Could be a coincidence though!


Yeah, that's interesting actually. One could speculate that the new IP means they rolled out or restarted something at the application layer, and things went south from there.


So in AWS-speak, this is a yellow/orange status outage?


No they were already doing that, the global withdrawal of the legitimate route just exposed it.


How is there absolutely no further comment about that in their RCA? That seems like a pretty major thing...


For nuclear reactor incidents there are reinsurance pools: collectives of insurance companies. For example, the nuclear reactor in Borssele, The Netherlands, is backed by 26 such collectives (one national and 25 foreign), which together consist of over 300 individual insurance companies.

This particular (small) reactor is insured for liability up to 1.2 billion euro.


The problem is, the followup costs for a nuclear reactor disaster can easily reach multiple trillions of euros - and no plant is adequately insured. In Germany, they did the maths [1] in the wake of the 2011 Fukushima disaster and found out that decent insurance would cost 72 billion euros a year in premiums - a move that would raise electricity prices from 20 ct/kWh to 4 euros/kWh.

We have been ignoring that massive risk in favor of "cheap" electricity for decades, and every time something happened, in the end the taxpayers had to cover the bill.

Nuclear is simply financially unviable once externalities are properly accounted for. Add to that the cost and time overruns in projects, the still unsolved question of where to put the waste, the risk of weapons fuel proliferation... and just forget about fission-based nuclear projects, please.

[1] https://www.manager-magazin.de/finanzen/versicherungen/a-761...


Yes, but you could say the same of fossil fuel power plants. The world has been getting "cheap" electricity from fossil fuels for many decades while ignoring the environmental (and other, e.g. radioactive waste from coal ash) externalities that taxpayers ultimately face one way or another. You'd need a comparative analysis of the externalities of both to make sense of the trade-offs and pick the one that's "better".

Renewables like solar and wind are great of course, but they're intermittent and cause weird issues like the Duck Curve. You still need base load from either nuclear or fossil fuels for a stable, reliable grid.


> the still unsolved questions of where to dump the waste

This isn't an unsolved problem, just an uneconomical one - there isn't enough waste for any of the possible solutions to be economical. The solution to getting rid of nuclear waste is to produce enough of it to make R&D worthwhile.

> of nuclear weapon fuel proliferation

But how does this relate to power plants? The military will get its nukes, whether or not the plants are built.


> But how does this relate to power plants? The military will get its nukes, whether or not the plants are built.

The problem is that the plutonium will be created in the first place and needs to be taken extra care of to make sure the stuff doesn't get stolen or "misplaced" and then diverted off to terrorists.


Given that Chernobyl cost around 235 billion USD to clean up - what constitutes "decent insurance"? How much are they expecting to pay out?


It's not just the direct cleanup costs - the number climbs dramatically if you also attempt to account for externalized costs that are rarely grouped together:

- loss of life and the economic productivity of people who died early (not just the liquidators, but also everyone who got radiation damage!)

- loss of sellability of goods - to this day, Bavarian mushroom foragers and hunters have to check fungi and wild animals for radiation, and in particularly bad years up to 70% of wild boars have to be discarded due to contamination [1]. That's an economic loss for the hunters and for all subsequent economic activity (butchers, restaurants).

- healthcare cost associated with dealing with the fallout - cancers are the most expensive illnesses known to mankind

- loss of quality of life in those who were displaced by the Chernobyl and Fukushima disasters, additionally also the cost of relocating and the loss of opportunities, social networks etc. caused by the relocating

- loss of economic potential that could have been realized in the area around Chernobyl, had it not been contaminated

[1] https://www.welt.de/wissenschaft/article230648425/35-Jahre-n...


But are these the kinds of costs that are accounted for by an insurer? Have local hunters/butchers/restaurants managed to get compensation for loss of revenue?

Also, these costs seem specific to a melt-down. Fukushima wasn't just poorly insured, it failed to heed studies warning about its location.


Defense against rainbow tables is obtained via salt, not via slow hashes. A rainbow table is a space-time tradeoff (you spend space to save time), so a slow hash, if anything, makes rainbow tables more attractive: the expensive hashing only has to be done once, up front.

Adding long salts, on the other hand, requires the attacker to create an infeasible number of rainbow tables (one for each possible value of the salt).
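A minimal Python sketch of the idea (illustrative only - a real system should use a dedicated password hash like bcrypt, not bare SHA-256): because each user gets a random salt, a table precomputed for unsalted hashes matches nothing, and two users with the same password still end up with different digests.

```python
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); the salt is random per user."""
    salt = os.urandom(16)  # 128-bit salt: an attacker would need 2**128 tables
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

# Two users with the *same* password get different digests,
# so a single precomputed table can't cover both.
salt_a, digest_a = hash_password("hunter2")
salt_b, digest_b = hash_password("hunter2")
assert digest_a != digest_b
```

Verification just re-hashes the candidate password with the stored salt and compares digests.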


Technically this class of trades also incurs a lot more hashing (so the expense of the hash function still matters); the trade is valuable because you do all the work once, up front, and then spend almost no time on any particular password hash once you have the table.

So e.g. computing a rainbow table for all possible Windows 2000 passwords (up to 14 characters, but effectively only 56-bit inputs) in the LANMAN scheme took ages and produced a fairly large file, but having done so, that's it: you, or anyone you give the file to, can reverse LANMAN hashes into working passwords almost instantly.

(Microsoft's LANMAN hash lacked salt and is stupid in various other ways; MD5 would actually have been a better choice than LANMAN, because it allows arbitrary input passwords, so good passwords become stronger instead of impossible - even though MD5 is much too fast for a good password hash.)


> Defense against rainbow tables is obtained via salt, not via slow hashes.

And bcrypt stores a unique salt next to the hash for every password, making rainbow tables completely useless.

https://en.wikipedia.org/wiki/Bcrypt
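For illustration, the salt's position inside a bcrypt output can be picked apart with plain string slicing. The string below is synthetic (not a real hash of any password), just shaped like bcrypt's modular-crypt format of `$<version>$<cost>$<22-char salt><31-char hash>`:

```python
# Synthetic bcrypt-style string, only to show where the salt lives.
stored = "$2b$12$" + "A" * 22 + "B" * 31

_, version, cost, rest = stored.split("$")
salt, digest = rest[:22], rest[22:]

assert version == "2b"
assert int(cost) == 12    # work factor: 2**12 key-setup rounds
assert len(salt) == 22    # 128-bit salt in bcrypt's base64 alphabet
assert len(digest) == 31  # 184-bit hash
```

Because the salt travels inside the stored string, a verifier needs nothing beyond the password attempt and the stored value itself.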


Nobody is saying bcrypt isn't a good choice here, they're saying that rainbow tables (and all time-space trades) are made infeasible by salt regardless of whether you're using a good password hash like bcrypt.


But both you & GGP are talking about bcrypt as though it was only a password hash. If someone says, "I'm using bcrypt", then they are using both a password hash and unique per-password salts, or they're not using bcrypt. What makes bcrypt and other such systems nice (and makes this kind of mistake basically inexcusable in 2022) is that if you're using a library or package which implements it (which you should), you don't need to think about salts; you just call GenerateFromPassword(<new_password>) and CompareHashAndPassword(<entered_password>, <stored_hash>) and forget about it.

EDIT: I mean, I understand maybe why in 2006 you used md5 without salt. But a few years ago, when I tossed together my first webapp (a scheduling system for the community I'm involved in), I just googled "password hash" and bcrypt immediately came up as a recommendation; there was a package in my target programming language, so after half an hour of research and 5 minutes of programming I was done. I don't understand how the opensubtitles team, after having had their password database compromised, came up with "use sha256 without salt" instead of "use bcrypt" or one of the many libraries which take care of all of that for you.


I mean, sure, but notice that you're still way behind the state of the art for the end of the 20th century. I blame Tim (Berners-Lee, the man whose toy hypermedia system we're all stuck with because it became popular).

You note that when you googled "password hash" you got pointed to a decent password hash. But knowing what questions to ask is half the battle. Too many people figured, hey, I should use a cryptographic hash, and got pointed to MD5 (or SHA1, or even SHA256) because that's what those are.

The words you should have googled weren't "password hash" but maybe "web authentication" and then the answer is clearly you shouldn't use passwords or any sort of shared secret.

You can have a copy of the authentication database for the toys system I maintain, but it wouldn't help you sign into it because all the information in it is public, a trick we've known how to do in principle for decades, and which works today, on the public Web, with readily available devices (e.g. my phone) and yet, here we are on Hacker News discussing which password hash is best like it's still 1985.


> with readily available devices (e.g. my phone) and yet

I'm pretty sure it wasn't anywhere near ready to use five years ago when I wrote v1 of my webapp. And googling again, it's still not clear whether it would work on whatever random software setup some of my zealous-for-software-freedom colleagues use. With passwords I don't need to worry: if they can render the HTML+CSS (even the JS is optional, because some of my colleagues prefer NoScript), they can log in.


This, _and_ bcrypt is slow. Many wrapping libraries' hashing functions accept an optional parameter to specify just how slow it should be.


True, I might be mixing up names. What I mean is that with fast hashes it's now feasible to take each user, compute a table of the 1,000,000 most common passwords hashed with that user's specific salt, and repeat for all the users. With modern homemade GPU rigs making billions [1] of computations per second, you are testing thousands of users per second against a 1M-word dictionary, so you are going to find matches.

[1] https://hackaday.com/2012/12/06/25-gpus-brute-force-348-bill...
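A back-of-envelope check of that claim, assuming the 348 billion hashes/second figure from the linked Hackaday article applies (it was measured against fast hashes like NTLM; a slow hash like bcrypt would cut this by many orders of magnitude):

```python
hashes_per_second = 348e9    # GPU rig rate from the linked article (fast hash)
dictionary_size = 1_000_000  # common passwords tried per user, each with that user's salt

users_per_second = hashes_per_second / dictionary_size
print(f"{users_per_second:,.0f} users/second")
```

So even with per-user salts, a fast hash lets the attacker sweep a 1M-word dictionary across hundreds of thousands of accounts per second - which is exactly why the slowness of the hash still matters alongside salting.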


Where do you store the salt?


The salt can be stored directly in front of the hash, in the same string in the DB (a lot of password hashing functions output this format). It can be plaintext, since the goal is just to add a random component: rainbow tables become useless because there's always more to the string being hashed. That's where it becomes a time problem.

Yeah, you could rebuild a rainbow table yourself until you find the collision, but you have to search every bit of the potential hash space.


BCP 38 would stop SYN flooding if it uses source address spoofing. It won't stop any attack not based on IP spoofing, though.


We've seen 40 degrees in the Netherlands in recent years. As well as below -15.


Have you seen it for practically a month straight? 40 degrees is average high in Las Vegas from June to September, and it’s only getting below 30 degrees from November to March. In Houston, Atlanta, Miami or New Orleans, it’s above 30 from May to October, and it’s even worse than in Vegas, because these places are much more humid, making the heat less bearable, as sweating doesn’t really cool you off.

Seriously, the climate in the Netherlands is in no way comparable to most of the US.


Honestly there is no small country in the world that is comparable to most of the US.


Microsoft recently confirmed that they have always lost money on their consoles (during the Apple vs Epic proceedings). In contrast, Nintendo usually makes a profit on selling their hardware.


I had an Irma card for a while, with a supporting Android app, must've been about 7 or 8 years ago now. Seemed to work ok'ish then (as a proof of concept). You could use it at some places in the university, although I can't remember what the use case was exactly (it was definitely the same thing though).

Anyway, since they already had that 8 years ago, I don't really understand what the roadmap for this project is. I see it popping up every now and again.


Hello, IRMA lead developer here.

Since then we have mainly moved from smart cards to a mobile app [0], and focused on getting the IRMA server and frontend software production-ready and developer-friendly. Additionally, we have worked on connecting existing institutions and companies to IRMA; you need parties that issue and verify IRMA attributes before the project can be of use to end users.

As to development, we have a public roadmap here [1].

[0]: https://irma.app/

[1]: https://irma.app/roadmap.html


Some Dutch healthcare institutions are using it, but as far as I know that's about it.

There's some development effort and every now and then a new demo pops up, but I don't think the project advertises itself enough when it comes to uptake.

I can see massive benefits to all kinds of businesses, especially now that restricted items such as alcohol and cigarettes are being ordered online because of COVID measures, but sadly the uptake has been minimal so far.


Here [0] is a list of projects that use IRMA. Indeed there are several healthcare institutions, but there is also https://irma-meet.nl which features authenticated video calling, and there are several municipalities that are planning to allow their citizens to log in using IRMA.

In addition, we are cooperating with several institutions and governments, including the Dutch national government, on future IRMA projects. Given the differences of IRMA from other mainstream authentication mechanisms, however, getting parties and their end users ready to use IRMA can be complicated and takes time.

[0]: https://privacybydesign.foundation/usage/


Is IRMA still going to be relevant for healthcare in the Netherlands? The Wet Digitale Overheid law makes authentication by means of the government-issued DigiD credentials mandatory, and all but makes access via the government's ToegangsVerleningsService (TVS) authentication platform a requirement for any citizen accessing their medical data.

It seems only a matter of time before municipalities will face the same requirement for authentication. Where does that leave IRMA?


Although developments have slowed down due to the pandemic and now the collapse of the Dutch cabinet, the Privacy by Design Foundation and SIDN, who jointly develop and run IRMA, have been in talks and developing pilots with the Dutch Ministry of the Interior to become an accepted party ("toegelaten partij") under the WDO. That would mean that when the WDO comes into effect, the ministry issues basic personal data to IRMA apps, and IRMA becomes one of the ways in which citizens can authenticate to services, alongside DigiD.


Good to hear. Availability via the TVS is a must for vendors though, but you are probably aware of this.

I also wonder how something like IRMA will work when not only DigiD is used via the TVS, but also DigiD Machtigen to grant others permission to act on your behalf.


It - or at least Idemix - is being actively investigated for use in COVID vaccine passports by a number of countries and by the open source community.

