> Fear of failure is a stumbling block for science. That's why many universities declare in their charter that research doesn't have to be practical.
No, universities do that because it's limiting to only focus on practical science, not because scientists are afraid to fail. Theoretical breakthroughs often find their use in practice with time.
Fear of failure exists because we only put money on success, so researchers' livelihood, dignity, and prestige depend on their research bearing fruit.
Isn't that just capitalism? The rule is for companies to keep pushing for higher margins and profit, so given enough time, any company will default to shady tactics and product enshittification.
It can, if your refactor needs to deal with interface changes: moving methods around, changing argument order, etc. All of these need to propagate to the tests.
Your tests are an assertion that 'no matter what this will never change'. If your interface can change then you are testing implementation details instead of the behavior users care about.
The above is really hard. A lot of TDD 'experts' don't understand this, and teach fragile tests that are not worth having.
Your implementation is your interface. It's a bit naive, or hating-your-users, to assume your tests are what your users care about. They're dealing with everything, regardless of what you've tested or not.
> Your tests are an assertion that 'no matter what this will never change'.
That's a strange definition. A lot of software should change in order to adapt to emerging requirements. Refactorings are often needed to make those changes easier, or to improve the codebase in ways that are transparent to users. This doesn't mean that the interfaces remain static.
> If your interface can change then you are testing implementation details instead of the behavior users care about.
Your APIs also have users. If you're only testing end-user interfaces, you're disregarding the users of your libraries and modules, e.g. your teammates and yourself.
Implementation details are contextual. To end-users, everything behind the external UI is an implementation detail. To other programmers, the implementation of a library, module, or even a single function can be a detail. That doesn't mean that its functionality shouldn't be tested. And, yes, sometimes that entails updating tests, but tests are code like any other, and also require maintenance and care.
> A lot of software should change in order to adapt to emerging requirements.
True, but your tests should still aim to test the kind of thing that will only change when a customer requirement changes, not because you want to refactor something.
This is of course impossible, but it should still be your goal.
>Your APIs also have users
Exactly - so test those APIs that have users, not the internal implementation details. If an API has users, it quickly becomes an Augean Stables problem to change it, so you won't touch that API if you can at all help it (you may add a new/better way and slowly convert everyone, but it will be a decade before you can get rid of the old one).
> To other programmers, the implementation of a library, module, or even a single function can be a detail
Other programmers are sometimes customers/users. If you are writing a logging system (which I happen to be working on today), your end users may never be allowed to see anything related to your system, but you expect to quickly have so many people calling log() that you can't change the interface. By contrast, you may be able to log to a file or a network socket: test those two backends by calling log() like your end users would, not by calling whatever the interface between the frontend (which selects the backend to use) and the backend is.
Again, the goal is to never update tests once written. I'm under no illusions you will (or even should) achieve this, but it is the goal.
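A minimal sketch of the logging example above (all names here are hypothetical, not from any real logging system): the backend is exercised only through the public log() call that users actually make, so the frontend-to-backend interface can be reshuffled later without the test ever needing to change.

```python
import io

class Logger:
    """Hypothetical logging frontend. log() is the stable public
    interface; the backend wiring behind it is an implementation
    detail, free to change."""

    def __init__(self, backend):
        self._backend = backend  # internal detail, do not test directly

    def log(self, message: str) -> None:
        self._backend.write(message + "\n")

def test_backend_receives_messages():
    # Drive the file-like backend through log(), the call real users
    # make, instead of poking at the backend interface directly.
    sink = io.StringIO()
    logger = Logger(sink)
    logger.log("hello")
    assert sink.getvalue() == "hello\n"

test_backend_receives_messages()
```

If the split between frontend and backend is later redesigned, this test keeps passing as long as log() still delivers messages, which is the behavior users depend on.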
Sure, if you are changing your interfaces a lot, you are either leaking abstractions or not designing your interfaces well.
But things evolve with time. Not only is your software required to do things it wasn't originally designed to do, but your understanding of the domain evolves, and what once was fine becomes obsolete or insufficient.
This falls into the domain of the ethics of care. Sure, change needs to start some place, but it doesn't need to be done recklessly. Nobody does anybody any favours by putting themselves in dangerous situations. To care for other people, to give them the attention they need, you need to prepare yourself for it first.
Also, can you do this and still be able to access the original OS? It'd be nice to run Linux on it, but I'd also need access to my PS5 library, so do I need two machines for this?
I don't know if it's related, but a few days ago I read about an update to the mast1c0re exploit that permits running native code (userland) on the latest PS5 FW without a kernel exploit:
According to some hearsay, there is allegedly at least one other unpatched hypervisor exploit in recent firmware. Considering the OP's track record, I'm not surprised he's sitting on a full-chain jailbreak.
Porting the Linux kernel to the PS5 is also an impressive flex; it's quite funny to think that ten years ago, in 2016, fail0verflow did the same for the PS4...
The problem isn't the granularity of the backup. Since the worm silently nukes pages, it's virtually impossible to reconcile the state before the attack with the current state, so you have to forfeit any changes made since then and ask the contributors to do the legwork of reapplying the correct changes.
No: from what I can tell, they're being conservative, which is appropriate here. Once you've pushed the "stop bad things happening" button, there's no need to rush.
> We need to increase reliability in the kernel, so the kernel team should fire the top 5 bug-introducers, to reduce the amount of bugs being introduced (https://pebblebed.com/blog/kernel-bugs-part2/05_author_analy...). Linus has got to go.
You've cut bugs being introduced while also reducing development costs by slashing team size. You deserve a promotion and an increase in equity.
Does the Pixel support alternate OSes, or does it just not get in the way of custom firmware developers?
And on the gaming side, there is a huge market for mobile gaming, especially in Asia. Having a manufacturer like Motorola adopt GrapheneOS as a first-class citizen improves the chances that high-performance applications will run well on such OSes, which is a big win.
The Google Pixel has first-class support for alternate OSes (not custom firmware like a Chromebook). The OEM has to go out of their way to support avb_custom_key as mentioned in https://android.googlesource.com/platform/external/avb/+/mas... and I believe the GrapheneOS founder strcat was heavily involved in helping Google design this feature and flow for Android Verified Boot.
If you conceive a device to be shipped with a specific OS, that's a completely different relationship with the developer than just handing over the keys to the kingdom and wishing them good luck, so I hardly think this is subjective.
Also, a gentle reminder that backups without periodic drills are just binary blobs. I had an instance where, for some reason, my Borg backups were corrupted. I only caught it through periodic drills.
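A minimal sketch of such a drill, tool-agnostic since I don't know the commenter's exact setup: restore the backup into a scratch directory via whatever your tool provides (e.g. a wrapper around a restore command, hypothetical here) and compare file hashes against the live data.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def drill(source: Path, restore) -> None:
    """Restore drill: extract the backup into a scratch dir and verify
    every live file exists there with an identical hash. `restore` is a
    callable wrapping your backup tool's extract step (hypothetical)."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch)
        restore(restored)
        for f in source.rglob("*"):
            if f.is_file():
                twin = restored / f.relative_to(source)
                assert twin.is_file(), f"missing from backup: {f}"
                assert sha256_of(twin) == sha256_of(f), f"corrupted: {f}"

# Demo: a plain directory copy stands in for a real backup tool.
src = Path(tempfile.mkdtemp())
(src / "notes.txt").write_text("data")
backup = Path(tempfile.mkdtemp()) / "snap"
shutil.copytree(src, backup)
drill(src, lambda dest: shutil.copytree(backup, dest, dirs_exist_ok=True))
print("drill passed")
```

The point is that the drill exercises the full restore path, so silent corruption in the stored blobs surfaces as a failed hash comparison instead of going unnoticed until you actually need the backup.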