
> If your business invests in physical servers anticipating strong growth next year then later finds out actually we're going into a recession and those servers are no longer needed, then that's a sunk cost.

Cloud vendors also mostly sell minimum use packages for discounts in the range of 20 to 80% (called e.g. "committed use discount" or "compute savings plan"). Lots of businesses use those, because double-digit discounts are real money, but they might find themselves in the same spot as with physical hardware they don't need...


Yup, and you are paying the premium of cloud forever, which over some vanilla compute & storage can be a lot.

And cloud proponents pretend data center / rack space / server leasing doesn't exist either, for those trying to avoid large up front costs.


I'm a cloud proponent because it means not having to sit through hours of meetings to deploy a $5/mo virtual machine.

It also means some poor fuck at AWS gets woken up in the middle of the night instead of me when things go to shit.

It absolutely comes at a cost, and might not be the right fit for an organisation that's absolutely on top of its hardware requirements and can afford to divert resources from new development work. For the rest of us it saves a lot of dev hours that would have otherwise been spent in pointless meetings or debating the best implementation of whatever half-baked stack has oozed its way out of the organisation in an attempt to replicate what's handed to you with a cloud solution.


> I'm a cloud proponent because it means not having to sit through hours of meetings to deploy a $5/mo virtual machine.

And endless orgies of "call for pricing" with hardware vendors and hosting. Shitty websites where you can buy preconfigured servers somewhat cheaply, or vendor websites where you can configure everything but overpay. Useless sales-droids trying to "value-add" stuff on top.

Cloud buys are a lot friendlier, because you only have the one cloud vendor to worry about. Entry level you just pay list price by clicking a button. If you buy a lot, you are big enough to have your own business people to hammer out a rebate on list price, still very easy, still very simple. But overall still more expensive unfortunately.


> I'm a cloud proponent because it means not having to sit through hours of meetings to deploy a $5/mo virtual machine.

I'd hope there aren't actually hours of meetings for a single $5/mo VM?

But I would hope there are reviews and meetings when deploying enough of these to amount to real money. Companies that don't do that soon enough find themselves with a million dollar AWS bill without understanding what's going on.

Spend is spend, it's vital to understand what is being spent on what and why.


> I'd hope there aren't actually hours of meetings for a single $5/mo VM?

Slightly exaggerated in the case of the $5 machine: probably 2-3 man-hours total, but it took 4 days for it to be deployed instead of ~5 minutes. We did spend tens of hours justifying why the business should spend ~$100 more per month on a production system where the metrics clearly indicated that it was resource constrained.

The same IT department that demanded we justify every penny spent did not apply any of that rigour to their own spending. Control over the deployment of resources was used as a political tool to increase their headcount.

> I would hope there are reviews and meetings when deploying enough of these to amount to real money. Companies that don't do that soon enough find themselves with a million dollar AWS bill without understanding what's going on.

I consider the judicious use of resources to be part of my job as a software engineer. A development team that isn't considering how they can reduce spend, tidy up, or right-size their resources is a massive red flag to me. Organisations frequently shoot themselves in the foot by shifting that responsibility away from the development team. The result is usually factional infighting and more meetings.


It's not really the same spot in that you're paying monthly rather than upfront. Devs tend to think about total $; the business/accountants do care about Opex vs Capex.

Also it's going to be simpler to provision your base (committed use) on the cloud and then handle bursts on the cloud, than it is to have your base on prem and burst to the cloud.


> It's not really the same spot in that you're paying monthly rather than upfront. Devs tend to think about total $; the business/accountants do care about Opex vs Capex.

You can lease physical servers, turning the cost into opex.

You can also rent them for a little extra via managed dedicated servers from vendors like OVH.


I think this point isn’t made often enough.

Not going with a big cloud provider def doesn’t mean that you need to buy physical servers and build an on-prem data center.


Another point I've also seen used to lie about cloud cost is saying you save so much on engineers.

...while forgetting that to have a sane on-call rotation for cloud you also need at least 3 people on that rotation who are clued in enough on cloud operations. Sure, they can be "developers", but if your app architecture requires so little maintenance and flea removal that they are not doing ops jobs much, chances are it would too in either a rented or dedicated server environment.


That is not really a difference: you may as well lease your server farm in the basement, at practically the same cost as buying it, just as a monthly payment with the supposed "advantages" the business people might care about.


Amnesty International is primarily an advertisement and donation collection business. Most of your donations go straight into advertisements and collecting more donations. Collectors aren't volunteers but hired professional collection agencies/con-artists trained to guilt people out of their money.

AI using AI for misrepresentations fits my very low opinion of them.


What percentage of its budget does Amnesty International put into advertising, and is it unusually high for charities?

I mean, I'm all for effective altruism, for giving money to the charities that help the most people with it with the lowest overhead, and for doing your research before deciding who you donate to (I recommend GiveWell as a starting point); but "hiring PR agents to solicit donations" is hardly unique to Amnesty.


From https://www.amnesty.org/en/2021-global-financial-report/

> 2021 was a record-breaking year with a total fundraising income of €357m.

And

> The largest proportion of the movement’s programme expenditure €26.4m (35%) was spent towards Goal One.

And

> 45% of total income spent on Human rights research, advocacy, campaigning, raising awareness and education

So ~20% (75m of 357m) was actually spent on real work, assuming everything reached end beneficiaries.

Interesting way of showing 45% of expenditure by clubbing multiple heads together, while showing 2% expenditure on "Maintaining our democratic system of governance".


I'm not sure your math holds up, though.

Among other things, it sounds like their income was higher than their expenditures, e.g. that they had some money left at the end of the year; so your "75m of 357m" ratio isn't quite right.

Maybe I missed something.


~ for approximately. Idea was not to audit records but provide an approximation of how much is spent on the stated objectives.

And even 20% is probably very high, because I would guess people working on those would be paid some salary/ honorarium, and that also would be included in this expense.


Yikes, only 20% going to actual end beneficiaries is pretty low.


Yeah. Except if the usual Qualcomm spy chips (also available in Google Pixel) phone home all your biometrics...


Those passkeys are either insecure or unreliable. Let me explain:

Those passkeys are asymmetric cryptographic keypairs where the private key is securely stored on a device, unlockable (for use, not reading) only by convincing your device's security processor to do so by pin/fingerprint/pattern. Which in itself can be secure, given you do trust that magic security processor (which you shouldn't, see yesterday's news for example). However, if that key cannot be read, you cannot make a backup of it, so it will be unreliable and easy to lose. The recovery process will either be insecure and prone to social engineering, or unreliable because proving your identity will be nigh impossible without that passkey. Now one could allow backups of a passkey, but then that passkey would be as insecure as a password. One could allow multiple instances of authorized passkeys, but those would be even more insecure than passwords, because malicious software on your device could create evil new key instances.

In all a bad and dangerous idea.


It would be a bad and dangerous idea, if what you said was true; but it isn't.

Passkeys are just asymmetric key-pairs. There will be a variety of client-side implementations. Some may make export and backup difficult or impossible. Others, such as 1Password's already extant implementation, advertise backup and synchronization as a feature! There is nothing about the passkey standard which prescribes the reality you fear.

> Now one could allow backups of a passkey, but then that passkey would be as insecure as a password.

Wrong, absolutely and entirely. It's still more secure, because it's an asymmetric keypair, and you're forgetting about the far more common attack vector against password disclosure: service breaches. That's how attackers learn about passwords by and large. And this is not just some nice-to-have side-benefit of passkeys: it's a core motivation of this standard. With passwords, a service breach compromises not only the accounts of every user on that service, but potentially every other account every user has, globally, because of password reuse. With passkeys, all of that is resolved.

Even if we end up with a system that has the same level of effective client-side security (an assumption that is also extremely wrong), the net security of the system will be vastly improved because service providers aren't storing the secret used to authenticate user accounts.

But the client-side security is also substantially improved, because passkeys have much higher phishing resistance.


> Now one could allow backups of a passkey

That's literally part of what makes a passkey a passkey (vs. just a WebAuthn credential), so that's a given.

> as insecure as a password

No. Passkeys can't be phished, passwords can. Passkeys can't be cracked after a data breach. Passwords can. Passkeys can't be set to something easily guessable. Passwords can. Passkeys can't be written on a post-it note and taped to your monitor. Passwords can. Passkeys can't be reused across multiple sites. Passwords can.

There are so many ways passkeys are superior to user-memorized passwords from a security perspective, it's laughable to call them "as insecure as a password".

> One could allow multiple instances of authorized passkeys, but those would be even more insecure than passwords, because malicious software on your device could create evil new key instances.

What? Malware stealing your password is "more secure" than malware registering its own malicious key to each individual site it wants access to?


> No. Passkeys can't be phished, passwords can. Passkeys can't be cracked after a data breach. Passwords can. Passkeys can't be set to something easily guessable. Passwords can. Passkeys can't be written on a post-it note and taped to your monitor. Passwords can. Passkeys can't be reused across multiple sites. Passwords can.

Passkeys don't need to be cracked after a data breach of your backup provider, they are just usable, right there.

> There are so many ways passkeys are superior to user-memorized passwords from a security perspective, it's laughable to call them "as insecure as a password".

Passkeys are accessible permanently on some devices, unencrypted or decryptable in the filesystem, e.g. if part of a backup. Whereas passwords are usually only accessible temporarily. That makes the attack surface to copy over some passkey far larger than for sniffing a password.


> Passkeys are accessible permanently on some devices unencrypted or decryptable in the filesystem, if part of e.g. a backup. Whereas passwords are usually only accessible temporarily.

I think you're mixing up server-side and client/sync-backend-side compromises here.

For the former (i.e. a compromise of hashed passwords and their corresponding salts), you'll need to rotate all passwords since the hashes can be brute-forced. For passkeys, all an attacker gets when compromising a service's database are public keys that can't be brute-forced and key handles that don't give an attacker anything without the corresponding authenticators.

For the latter, the situation is exactly the same for passkeys and passwords in a password manager, i.e. both are as secure as their on-device storage and encryption in transit and rest at a synchronization provider (if any).


You seem to be under the false impression that passkey databases are stored completely unencrypted and unprotected on disk or in the cloud. Obviously those details are implementation-dependent, but I don't know of any passkey implementation that works that way.

Let's take Apple's implementation as an example (since that was the one I could most easily find information on). Their implementation stores passkeys in the iCloud keychain[1], which is end-to-end encrypted[2].

[1]: https://support.apple.com/guide/iphone/sign-in-with-passkeys...

[2]: https://support.apple.com/guide/security/secure-keychain-syn...


As an administrator, I hear you, but we have to adapt. Passwords are awful. On the whole, the effort and energy spent training people on passwords, battling phishing, dealing with password managers, cleaning up from breaches, and more… passwords can't die soon enough.

FWIW, asymmetric PKI is technically mature and relatively easy to implement in most applications (without "vendor lock-in", I might add to comments upthread), and there are ways to address most of your concerns about key loss and recovery beyond what you describe, such as the ring-of-trust scheme Apple uses, for example.

The only way through this is forward. I'm confident it really will get better once passwords become a smelly indicator of bad security practice.


I'm looking forward to such glory days. Right now, however, none of the solutions available are ones that I could live with if I had to use them for everything. For one or two very sensitive things, sure, but for everything? It's less of a pain to use long, random passwords.


This is just like using a long random password, except that it's cryptographically verifiable without ever leaving your device.

If passwords are like playing poker with your cards facing out, Passkeys are like playing with your cards facing in. Your secrets remain under your full control at all times. Nothing sensitive is sent over the wire.

Yes, for everything. Those who've implemented it so far have done a great job at making it /easier/ than handling passwords.

If you've ever used ssh with keys instead of passwords, it's the same thing, and it's so much easier while being more secure. A rare convergence.


> you cannot make a backup of it

The way this typically works is that the keys are stored in an encrypted file, which can be backed up securely as-is. It can also be copied around and sync'd to other devices.

Of course, this means the authenticator app/service that needs to use the private keys to respond to challenges has to be able to decrypt that file, which means logging in to it. Authenticators balance convenience with security in terms of how often you need to fully log in to it. They are also often configured to require a light-weight authentication on each use (fingerprint, face, pin).

With authenticator apps handling the private keys, secure backups should be easy and automatic. Things should improve, since people who use passwords now without a secure automatic backup mechanism and who switch to passkeys will probably end up with an authenticator that does it automatically.

(Recovery processes will still exist and can still be an issue.)


Maybe for browsers on Windows it'll default to storing the key purely on-device, but especially with iCloud Keychain the key is not encrypted by the on-device processor.

This does not make it as "insecure as a password". It does mean you can use root/OS access to exfiltrate keys, but it closes the following security holes that affect passwords:

- keyboard sound-based exfiltration[0]

- visual exfiltration (someone recording you enter your password, or looking over your shoulder and memorizing it)

- credential stuffing, where people who reuse passwords get pwned when the same leaked password is used on other websites

0: https://www.independent.co.uk/tech/cyber-security-passwords-...


The point of passkeys isn't to be perfect — the point is to replace passwords, which are already far more imperfect than passkeys. The added problem with passwords is that every site that uses them has to secure them properly, and theft of passwords (in plain-text, hashed, etc. form) is common.


For an end-user, reliability and ease-of-use trump security. Passkeys are imperfect in the wrong places imho.


Imagine for a moment that instead of all the time wasted on this, we just implemented a protocol amongst the browser makers which allowed a secure password prompt to be requested, and required strong-hashing before sending anything over the wire?

Which would be easier to use and more effective.


If you hash before sending anything over the wire then the hash of your password is now your password, meaning that if it leaks it amounts to basically the same as your password leaking. Granted, applications may choose different hashing algorithms, provide their own clientside salts, etc. which would be really nice. To be fair, I believe more systems should be doing this nowadays, it's really weird to have to send your actual password to the website if a hash would suffice. Then the website could store the salted hash of the salted hash of your password in their database.

Programs such as Bitwarden already do this, where you send the hash of your password to the server instead of the password itself, because from the password you derive the decryption key and you never want that reaching the server. You then use that hashed password as the authorization password, but the client uses the actual password to decrypt the delivered password vault.
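A minimal sketch of that split derivation (the KDF choice, salts and iteration counts here are illustrative assumptions, not Bitwarden's exact parameters):

```python
import hashlib

def derive(master_password: str, email: str):
    # Stretch the master password into an encryption key; this never
    # leaves the client and is used to decrypt the vault locally.
    enc_key = hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), email.encode(), 600_000)
    # Hash that key once more; only this value is sent to the server
    # for authentication, so the server never sees the encryption key.
    auth_hash = hashlib.pbkdf2_hmac(
        "sha256", enc_key, master_password.encode(), 1)
    return enc_key, auth_hash

enc_key, auth_hash = derive("correct horse battery staple", "user@example.com")
assert enc_key != auth_hash  # a server-side leak of auth_hash doesn't expose enc_key
```

The server can salt-and-hash `auth_hash` again before storing it, so even its database never holds anything that decrypts a vault.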


If a common browser protocol required the password to be salted with an application supplied value and then rehashed with the domain name it's served on, there'd be no way to phish a password.

The value the user's browser sends back can't be reversed, so any website prompting from the wrong domain would only ever see an incorrect hash, rather than the cleartext as it does now.
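A sketch of such a browser-side step (function name, salt handling and iteration count are all invented for illustration; the point is only that the serving domain goes into the hash):

```python
import hashlib

def browser_auth_response(password: str, app_salt: bytes, serving_domain: str) -> str:
    # Stretch the password with the application-supplied salt, then
    # bind the result to the domain the page was actually served from.
    stretched = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), app_salt, 100_000)
    return hashlib.sha256(stretched + serving_domain.encode()).hexdigest()

real = browser_auth_response("hunter2", b"app-salt", "example.com")
phish = browser_auth_response("hunter2", b"app-salt", "examp1e.com")
assert real != phish  # the lookalike domain only ever sees a useless hash
```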


The idea seems to be that you will either trust a provider like Apple or Google to keep your private key safe and let them sync it around, or you will create a passkey for each device that you use. If you lose the device, deauthorize the passkey. If you somehow lose the passkey itself, create another one, either by using an older form of authentication, or by using a different device to authenticate. There is no need for passkey recovery or backup.


I find this to be a regression in terms of usability and security as well.

On top of what you mentioned, it also fails really hard when someone has access to you and your trusted device (which will be the smartphone in most cases). It's already an issue allowing easy access to smartphone content, it will extend it to any account using that method of authentication.


What would be a better idea, then?


Actually I'd see a future where some of those password killers might replace passwords, even for some of the under-funded, under-manned applications out there.

What is necessary is a robust, simple-to-integrate standard for authentication, authorization and sessions built into HTTP. Such that all the "hard work" is integrated into common HTTP server software or load balancers, transparently. From an application perspective it should just look like your request getting HTTP_USER=someone HTTP_PERMISSIONS="stuff,foo,bar" HTTP_SESSION="0xdeadbeef", similar to what you get from HTTP basic or negotiate auth, but with a few more necessary features such as session, login/out and a permission model. Browsers would have to provide some proper UI for that, not utter crap like they currently do for HTTP basic or negotiate auth.

Then your centralized auth application can just talk to any old application in a very simple way, no need to deal with huge integration headaches like OAuth or stuff. And the centralized auth application can do all the fancy password killer, 2FA, magic or whatever special auth you need.
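Hypothetically, the application side of such a protocol could be as dumb as reading a few injected variables (all the header names here are invented, following the sketch above):

```python
def handle_request(environ: dict) -> str:
    # The load balancer / auth frontend has already done the hard work;
    # the application only inspects the variables it injected.
    user = environ.get("HTTP_USER")
    perms = set(filter(None, environ.get("HTTP_PERMISSIONS", "").split(",")))
    if user is None:
        return "401 Unauthorized"
    if "foo" not in perms:
        return "403 Forbidden"
    return f"200 OK: hello {user}, session {environ.get('HTTP_SESSION')}"

print(handle_request({"HTTP_USER": "someone",
                      "HTTP_PERMISSIONS": "stuff,foo,bar",
                      "HTTP_SESSION": "0xdeadbeef"}))
# → 200 OK: hello someone, session 0xdeadbeef
```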


Actually, there is no "folding in good faith" in such like cases.

A company is bankrupt if a point in time can be foreseen where the company can no longer pay its creditors (employees here are creditors, sometimes even a higher-priority class of such). And while one can argue about the "foreseen" part (e.g. by saying "I did reasonably expect that incoming payment to arrive, which didn't"), as soon as you were unable to pay a debt once, or like here even twice, you are actually too late in declaring your bankruptcy. Which, in most jurisdictions, is a crime. And it certainly is in bad faith. If the company isn't actually bankrupt, could pay but won't, then it is also an act of bad faith.


This is assuming the LNT (linear no threshold) model of radiation damage is true. Which it probably isn't. Lower than predicted deaths from accidents such as Windscale/Sellafield are sometimes taken as arguments against LNT.

However, in all such discussions, one needs to be aware of the strong interest of the British government in keeping all things around its nuclear weapons program secret. So maybe the public figures are fake.


Few things in medicine are linear. DNA has error-correcting codes, much like a hard drive or network connection.

In hard drives or networks, errors are tolerable below a certain threshold. It's a different story once unrecoverable errors start to occur (data loss, system crashes, perhaps errors getting caught by higher-level things like exceptions).

I'd expect DNA, cells and biological systems are probably similar. The error correcting mechanisms can probably handle some errors up to a threshold (which probably itself varies between individuals).


No written contract (over here) is sometimes a very nice thing in those kinds of situations, because then some very employee-friendly default terms do apply (unlimited contract time, long notice period for dismissal, no reassignment to a field of work you might not like, etc).

Get a lawyer and good luck!


There's two in the family luckily! And thank you!


If you are at this level of required complexity as in those examples, you should use a proper programming language, not shell. Half those snippets fail with spaces in the wrong place, newlines, control characters, etc.

I think all such shell "magic" should come with a huge disclaimer pointing the user at better alternatives such as perl or python. And all snippets should have the necessary caveats on horribly broken and dangerous stuff such as "if this variable contains anything other than letters, your 'eval' will summon demons" in eval "hello_$var=value" and stuff...
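For comparison, the Python equivalent of that eval needs no code evaluation at all: a plain dict swallows hostile input as inert data (a sketch, with `var` standing in for whatever untrusted value the shell version would interpolate):

```python
# shell:  eval "hello_$var=value"   -- summons demons if $var is hostile.
# python: a dict entry; nothing is ever evaluated as code.
values = {}
var = 'world"; rm -rf /tmp; echo "'   # hostile input stays an inert string
values[f"hello_{var}"] = "value"

assert len(values) == 1               # exactly one entry, no side effects
```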


I enjoy using bash, and throw together little scripts now and then. It is convenient to be able to wrap up some bash commands and turn them into a script, when I realize I’ve been using them repeatedly.

But, every time I see examples of how to write sh scripts properly, it makes me wonder if this is just the wrong way to look at the world. Maybe it would be easier to extend Python down to make it better for command line use. Xonsh or something.


Ansible is like Python but for scripting (orchestration). They used a YAML format and, with all the curly braces and quoting, made it just as bad as shell.


What would make Python better for command line use? Better alternatives to argparse in the standard library?


Easier, lightweight syntax for shell-like pipes, command execution and catching stdin/stdout/stderr.

Something like Perl's IPC::Run.

Also, more shell-relevant stuff in the default distribution, so that one doesn't need to care about any modules (which is the primary reason for using bash or even sh; those are installed practically everywhere along with at least coreutils and stuff). Edit: examples of things a standard python doesn't really do well would be quick and easy recursive directory traversal and doing stuff to files found there (like the unix 'find' tool), archive (de)compression, and file attribute operations (not only simple permissions but also ACLs, xattrs, mknod, etc).

But the sister comment clarified it in another way, so this may be irrelevant.


Sorry, I was sloppy. I meant using it as the system shell. So, processing arguments, I guess, would be less of a big deal.

Convenience features, like ls and other classic shell commands being run without parentheses would have to be handled… I’m not breaking any new ground here, actually this has gotten me to look into xonsh and it looks pretty decent.


Yeah, I'm a diehard bash scripter but even I know to switch to Python once the script gets above a certain size/complexity.

I should really just start with Python, but old habits and all that.


"perl or python". Don't forget Ruby.


Yes, but actually no: I was tempted to include it but didn't. The one big argument for bash and sh is ubiquity and compatibility. Perl also has those. Python is somewhat lacking in those. Ruby is very lacking on both.


Using a more advanced language just because you find shell syntax to be wacky is like using a car to get groceries because you find panniers or a backpack to be wacky. It's the use case that matters; if you're 60 meters from the store, just use your bike, or walk.

There are plenty of cases in which Perl or Python will make things much more complicated than 5 lines of spooky-looking shell script. Sometimes a little mystical sorcery is what the doctor ordered.


Shell is full of ridiculous footguns though. It's like saying if the store is only 60m away (across an Indiana Jones-tier trap gauntlet) then just walk there.

Remember that time bumblebee accidentally deleted everyone's /usr? Or that time steam deleted everyone's homedir? Both because of the easiest to avoid bash footguns - spaces and empty variables.

My hard and fast rule that I've never regretted is - if you use a bash if statement, consider python. If you're going to do a for loop, you must instead use python.

Typically as a side effect once the programmer is in python at my prompting, they find having such easy access to libraries suddenly lets them make the whole script more robust - proper argparsing is an immediate one that comes to mind.

Frequently the reticence I see from people, especially juniors, is that they're worried about people thinking "haha they have to pull an entire python into their image just to run this script" or "wow they're so newbie/young that they can't even write some shell". I reassure them: don't worry, there's a reason we used Perl even back then too.


The shell is not meant to implement the X Window System in it. The shell is a command interpreter.

If you want to do more advanced things there is always the right tool for the job although with an increased attack surface.


Have you not used Python or Perl much? Because both are full of footguns. And I don't see any advantage to for loops in a HLL. These are identical:

  for i in `seq 1 10` ; do
    echo i $i
  done

  for i in range(1,10):
    print("i %i\n")
You might find problems with the shell code, and I'll find problems with the Python. But both will print 1 to 10.


Personally I've found Python to have significantly fewer foot-guns than bash.

The biggest reason why I don't use it all of the time is that calling / piping commands takes a lot more typing, so it's easier to use bash for very simple shell scripts. And while there are libraries that simplify shell scripting, that adds external dependencies to your scripts.
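To make that verbosity concrete, here is the stdlib-only equivalent of a one-liner like `seq 1 5 | wc -l` (using `sys.executable` in place of the external tools so the sketch stays self-contained):

```python
import subprocess
import sys

# Stage 1: emit five numbered lines (the "seq 1 5" half of the pipe).
p1 = subprocess.run(
    [sys.executable, "-c", "print('\\n'.join(map(str, range(1, 6))))"],
    capture_output=True, text=True, check=True)

# Stage 2: count them (the "wc -l" half), fed from stage 1's stdout.
p2 = subprocess.run(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.read().splitlines()))"],
    input=p1.stdout, capture_output=True, text=True, check=True)

print(p2.stdout.strip())  # → 5
```

Roughly ten lines for what the shell does in one, which is exactly the trade-off: more typing, but arguments are passed as lists, so no word-splitting or glob surprises.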

> for i in range(1,10):

> print("i %i\n")

This outputs "i %i\n\n" 9 times.


> This outputs "i %i\n\n" 9 times.

damn, ya got me there. to be fair, I was drunk when I wrote that xD


Come on, they aren't full of footguns in basic things like string comparison or variable assignment.

Most of the things on this list couldn't happen in Python:

https://mywiki.wooledge.org/BashPitfalls


> Remember that time bumblebee accidentally deleted everyone's /usr? Or that time steam deleted everyone's homedir? Both because of the easiest to avoid bash footguns - spaces and empty variables.

https://www-uxsup.csx.cam.ac.uk/misc/horror.txt

Your examples were bugs which were not caught during development because "testing is hard and expensive" and "if it compiles, ship it".


And yet Rust gains traction over C. "Best practice" can't fix a dangerous tool.


Shell syntax isn't "wacky", it's extremely error prone.

Using shell script instead of a sane language is like opening beer with your teeth because you can't be bothered to get a bottle opener.


I have written my large share of bash scripts at this point in my life. However, I recently started a new project. I opened up a file and started noodling out a sh script. I stopped exactly because of what you are saying. I then installed powershell. I have not decided if I am going to use powershell, python or ansible yet for this. But as it is gluing a bunch of commands together with some string manipulation and some very simple math calculations powershell seems better for the job in this case.

Bash is also good at these things but it feels like you are using weird archaic tools to get things done. They work very well, but the syntax of some of the commands you end up having to use fills entire O'Reilly books by itself. It has its own odd way of doing things. Bash is nice for when you know you can not really manipulate the environment, as it is fully complete and usually 'table stakes' for what is installed. It is just kind of odd the way it works. In this case I decided to start with something that is more akin to what I am used to writing.


Oh, right, because it's not easy to cause errors in Python.


Python stops on errors.

Bash may or may not. It depends on how your script was called. If your script is sourced you need to remember to set and restore the flags.

In bash your data can accidentally become code. "rm $fn" usually deletes one file, but it might one day delete a few (spaces), or wildcard expansion makes it delete many. With Python, calling the function to delete one file will always delete one file. Your function will never run with a "continue on errors" mode.
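The difference is easy to demonstrate (a throwaway sketch in a temp directory):

```python
import os
import tempfile

d = tempfile.mkdtemp()
for name in ("a", "b.txt", "a b.txt"):
    open(os.path.join(d, name), "w").close()

# shell:  fn='a b.txt'; rm $fn   -- unquoted, this word-splits and
# deletes 'a' and 'b.txt' instead of the intended file.
# python: the call below can only ever touch the one named file.
os.remove(os.path.join(d, "a b.txt"))

print(sorted(os.listdir(d)))  # → ['a', 'b.txt']
```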


oh come on, whatever is feeding files to the function I'll just trick into using some other data with different files. you don't need to "execute data" to have substitution bugs.

and it's easy to add status checks to your shell script just like you can for Python. exceptions are not the only way to stop on error. but it's sure a hell of a lot easier to have a non-working program in Python, whereas it's a lot easier for a shell script to keep working.


Correct. Python code is unlikely to contain the kind of trivial variable manipulation errors that plague Bash scripts.


Anti-nuclear-people have mostly been a vocal minority. However, due to mostly anti-science reporting and irresponsible media-spread panic after Chernobyl and Fukushima, there have been temporary majorities for a shutdown, which populist politicians used to get (re)elected.


> Anti-nuclear-people have mostly been a vocal minority.

Not in Germany

