I imagine they're willing to accept them a different way. That email doesn't even specifically ask for the forms to be emailed. It just says to "send them over".
Also, I'd argue there are ways to make it reasonably secure over email. An encrypted attachment with a securely pre-shared key doesn't seem too risky IMO.
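To sketch what that might look like: the usual move is password-based symmetric encryption, where a key is derived from the pre-shared passphrase and used with an authenticated cipher. A minimal illustration using only the Java standard library (the passphrase, file contents, and parameters here are all made up; in practice you'd likely just use a password-protected 7z or GPG file):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class EncryptedAttachment {
    // Derive an AES key from the pre-shared passphrase (PBKDF2, random salt).
    static SecretKeySpec deriveKey(char[] passphrase, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 210_000, 256);
        byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        return new SecretKeySpec(key, "AES");
    }

    // Encrypt: output is salt || nonce || ciphertext (the GCM auth tag is
    // appended to the ciphertext by the cipher itself).
    static byte[] encrypt(byte[] plaintext, char[] passphrase) throws Exception {
        SecureRandom rng = new SecureRandom();
        byte[] salt = new byte[16], nonce = new byte[12];
        rng.nextBytes(salt);
        rng.nextBytes(nonce);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, deriveKey(passphrase, salt),
               new GCMParameterSpec(128, nonce));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[28 + ct.length];
        System.arraycopy(salt, 0, out, 0, 16);
        System.arraycopy(nonce, 0, out, 16, 12);
        System.arraycopy(ct, 0, out, 28, ct.length);
        return out;
    }

    static byte[] decrypt(byte[] blob, char[] passphrase) throws Exception {
        byte[] salt = Arrays.copyOfRange(blob, 0, 16);
        byte[] nonce = Arrays.copyOfRange(blob, 16, 28);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, deriveKey(passphrase, salt),
               new GCMParameterSpec(128, nonce));
        return c.doFinal(Arrays.copyOfRange(blob, 28, blob.length));
    }

    public static void main(String[] args) throws Exception {
        char[] key = "pre-shared-over-the-phone".toCharArray(); // illustrative passphrase
        byte[] blob = encrypt("tax form contents".getBytes("UTF-8"), key);
        System.out.println(new String(decrypt(blob, key), "UTF-8"));
    }
}
```

The point being: as long as the passphrase was shared out-of-band, someone intercepting the email gets only the opaque blob.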
> FLOSS Fund refused to follow the regulatory requirements to continue funding projects through Github, and Github dropped them as a funding source.
The email they sent to Pocketbase (posted elsewhere in the thread) makes it sound like the regulatory issue with GitHub funding is still being worked on. It also doesn't sound like they ruled out waiting for the GitHub situation to potentially get sorted out; they simply recommended a wire transfer to get things moving in the meantime.
Meanwhile, it's probably A-OK for the app to run on a phone that hasn't received security updates for 5 years.
I don't get it. If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
I'm guessing it's because there are a lot of phones floating around that aren't updated (probably far more than are rooted), and they're willing to pretend a phone is secure when blocking it would impact many users (out-of-date phones) but not when blocking it would only impact a few (rooted phones).
Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years. For example, a malicious OS maker could add their own certificate to the root store, essentially allowing them to MitM all the traffic you send to the bank.
Liability works on the principle that "if it's good enough for Google, it's good enough for me." A bank cannot realistically vet every vendor, so they rely on the OS maker to do the heavy lifting.
Even if they wanted to trust a third-party OS, they would need to review them on a case-by-case basis. A hobbyist OS compiled by a random volunteer would almost certainly be rejected.
I can add certificates on my unrooted Android. That's how HTTPToolkit [0] works; it only requires adb, which (thankfully) doesn't trip banking apps. Banking apps can (and do, IIRC) pin certificates, so a rooted phone adds no risk whatsoever.
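For context on why pinning defeats a rogue root certificate: the app ships with a hash of the bank's expected public key, so a MitM certificate chaining to any other key fails the check regardless of what's in the device's trust store. A rough stdlib sketch of the idea (keys are generated on the fly purely for illustration; real apps pin the bank's actual key, e.g. via OkHttp's CertificatePinner):

```java
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.util.Base64;

public class PinCheck {
    // A pin is just a hash of the expected SubjectPublicKeyInfo, baked into
    // the app at build time. An attacker's cert carries a different key, so
    // adding their CA to the device's root store doesn't help them.
    static String pinOf(PublicKey key) throws Exception {
        byte[] spki = key.getEncoded(); // DER-encoded SubjectPublicKeyInfo
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
        return "sha256/" + Base64.getEncoder().encodeToString(digest);
    }

    static boolean matchesPin(PublicKey presented, String expectedPin) throws Exception {
        return pinOf(presented).equals(expectedPin);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        PublicKey bankKey = gen.generateKeyPair().getPublic(); // stands in for the bank's key
        PublicKey mitmKey = gen.generateKeyPair().getPublic(); // stands in for an attacker's key
        String pin = pinOf(bankKey);
        System.out.println(matchesPin(bankKey, pin)); // true
        System.out.println(matchesPin(mitmKey, pin)); // false
    }
}
```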
Also, in my experience a rooted phone is by far more secure than OEM Android builds. Security is supposed to assess risk objectively, yet "running on a Xiaomi phone with 3rd party apps that cannot be uninstalled and have system access" is somehow more secure than "running on a signed LineageOS where the user can edit the hosts file".
>Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
That's just straight-up false; the phone without security updates has known exploits its user knows nothing about (and certainly doesn't know how to avoid). The phone with an unknown OS has a user capable of installing said OS, at the very least.
> Because a phone running an unknown OS is significantly more dangerous than a phone that hasn't received security updates for years.
I'm not convinced this is generally true, at least as can be detected by an app. Back when I had my phone rooted, it was configured so that it would pass all the Google checks and look like the stock OS. That configuration was probably dangerous, but apps were happy with it. Now that I run an OS that doesn't lie about what it is, I'm flagged as untrustworthy. What's the point in being honest?
Overall, I don't think they really have any idea what's a threat based on the checks they're doing, so I don't think they can say at all what's more or less trustworthy. But a phone that reports being years out of date shouldn't reasonably be expected to be secure, yet they mark it as secure anyway. Many of those devices can be rooted in a way that still passes their checks. If nothing else, I'd think that would be reason to block them, since they're interested in blocking rooted devices.
> If they're worried about liability, why not check the security patch level and refuse to run on phones that aren't up to date?
Google doesn't provide an API or data set to figure out what the current security patch level is for any particular device. Officially, OEMs can now be 4 months out-of-date, and user updates lag behind that.
Your guess is good, but misses the point. Banks are worried about a couple things with mobile clients: credential stealing and application spoofing. As a consequence, the banks want to ensure that the thing connecting to their client API is an unmodified first-party application. The only way to accomplish this with any sort of confidence is to use hardware attestation, which requires a secure chain-of-trust from the hardware TEE/TPM, to the bootloader, to the system OS, and finally to your application.
So you need a way for security people working for banks to feel confident that it's the bank's code which is operating on the user's behalf to do things like transfer money. They care less about exploits for unsupported devices, and it's inconvenient to users if they can't make payments from their five-year-old device.
And this is why Web Environment Integrity and friends should never be allowed to exist. Android is the perfect cautionary tale of what banks will do with trusted-computing features: the laziest possible thing that technically works and keeps their support phone lines open.
I'm not an Android developer, but I was thinking they could use something like the android.os.Build.VERSION.SECURITY_PATCH call to get the security patch level. Maybe that's not sufficient for that purpose, though.
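For reference, that field (`android.os.Build.VERSION.SECURITY_PATCH`, available since API 23) is a plain ISO date string, so the client-side check itself would be trivial. A sketch of the idea outside Android, with the patch level passed in as a parameter and a 90-day cutoff that's purely an illustrative policy:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class PatchLevelCheck {
    // On-device you'd read android.os.Build.VERSION.SECURITY_PATCH, which is
    // a date string like "2023-10-05". Here we take it as a parameter so the
    // logic can run anywhere. maxAgeDays is whatever policy the bank picks.
    static boolean isAcceptablyPatched(String securityPatch, LocalDate today, long maxAgeDays) {
        LocalDate patch = LocalDate.parse(securityPatch); // ISO yyyy-MM-dd
        return ChronoUnit.DAYS.between(patch, today) <= maxAgeDays;
    }

    public static void main(String[] args) {
        LocalDate today = LocalDate.of(2024, 1, 1);
        System.out.println(isAcceptablyPatched("2023-12-05", today, 90)); // true
        System.out.println(isAcceptablyPatched("2019-06-01", today, 90)); // false: years stale
    }
}
```

Of course, as the replies point out, the hard part isn't reading the value; it's getting the bank to trust what the app reports in the first place.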
Sure, there is enough information available to the app to determine what OS version and patch level it is running under. The issue is, the app would need to communicate this to the bank via an API, and the bank wants to trust the app in the first place in order to rely on this information.
Even then, two things turn out to be true:
- Banks don't actually want to put in the effort and deal with angry customers with slightly-out-of-date devices.
- All the credential-stealing malware on Android works perfectly fine on stock, unmodified, non-rooted OS images anyway. They just need to socially-engineer the user to grant accessibility permissions to the malicious app.
It's more frustrating because my partner's Pixel 4a cannot use Google Pay or the bank apps because it's an "invalid OS" - I'm guessing due to lack of updates? So, perfectly fine hardware, but crippled in functionality due to the lack of software updates.
You don't necessarily need a lib, though. Especially if you're interested in a use case where you can store data in a go bag, safe deposit box, etc., it seems like having individual tapes would be preferable.
Individual used drives aren't too expensive (or at least didn't used to be). Libraries, in contrast, do tend to be more expensive (and also a lot more trouble to ship).
My understanding is that the thing that makes M-Disc DVDs special is that they don't use organic dyes in the recording. Blu-ray discs, with the exception of the weird LTH ones, don't use organic dyes either. Consequently, the main magic of the DVD M-Disc is just the default with BD-R.
For a long time the vast majority of DVD-R discs have been light-to-dark (i.e., the laser writing to a spot makes that spot darker, not lighter). Dark-to-light discs were rare, the cheapest, and fell out of production pretty fast.
Another thing to keep in mind is that, for many unethical people, there's a limit to their unethical approaches. A lot of them might be willing to lie to get a promotion but wouldn't be willing to, e.g., lie to put someone to death. I'm not convinced that an unethical AI would have this nuance. Basically, on some level, you can still trust a lot of unethical people. That may not be true with AIs.
I'm not convinced that the AIs do fail the same way people do.
One of the things that really confounds me about Discord is that so many groups will just keep coming back, even after they're booted, like they're in an abusive relationship.
I used to follow some of the console homebrew / piracy Discord servers from a distance. (For some reason, this was where you had to go to get some of the homebrew, even if it wasn't related to piracy.) They would always complain about servers getting shut down and people getting banned, but for some reason the idea of just moving to a different service was unacceptable. They need to be on Discord to get their 7th account banned and set the server back up for the 10th time. They could just host it on some service that won't shut them down (or self host), but they'd rather just keep getting banned on Discord. Why?
We don't know that none of the names are real. And even if they aren't, the article is still showcasing his failed attempt at doxing the owner of archive.today and providing a starting point for anyone else wanting to try.
> they were all already posted publicly previously
Doxing very often consists of nothing more than collecting data from a bunch of public sources
> Doxing very often consists of nothing more than collecting data from a bunch of public sources
I simply don't agree that this looks like doxing. No addresses or even any private information were reported. It's just a Google search using WhoIs data, and, in one case, the person said, in a public forum, that archive.is is "my website." Why would they have said that if they were worried about people finding out who it belongs to?
If they'd stumbled upon an address to a private residence and reported that, sure, that would look like doxing. I just don't see it here.
I simply don't agree with that, either. It just seems like journalism to me. No details were reported that would reasonably be expected to compromise anyone's safety. Why should it be disallowed to investigate the ownership of a website? People used to do this all the time when they were going to order products from a web store they'd never used before, to try to deduce if it was trustworthy. They'd look up the owner, verify that the store has a physical address, etc. Were they not supposed to be doing that? They're just supposed to never Google any of that and just pray instead, because, if they learn any of that information, they've done something morally reprehensible? That's absurd.
And, to that point, archive.is isn't so different from a store. They accept donations, so it seems perfectly reasonable to ask and answer questions about where the donations go IMO. Is it unreasonable to look at and report on Archive.org's nonprofit details?
What does that even mean? Are you trying to suggest that journalism is inherently okay? A piece of despicable journalism simply cannot exist?
>No details were reported that would reasonably be expected to compromise anyone's safety.
So it's okay because he failed at what he set out to do? I'd counter that regardless of whether or not the doxing was successful, publishing this information serves no other purpose but to aid future attempts.
>Why should it be disallowed to investigate the ownership of a website?
You have to be kidding. I feel like anyone with even just the most basic social skills would be able to understand that absolutely nobody gives a shit about what you do as long as it doesn't affect other people.
> And, to that point, archive.is isn't so different from a store. They accept donations, so it seems perfectly reasonable to ask and answer questions about where the donations go IMO.
Obviously it is very different from a store.
Besides, why would you spend time trying to identify the owner of a store who is obviously not interested in identifying themselves? Surely the right choice is to pass in approximately 100% of such cases.
Yeah, looks like someone is either a hyper Tailscale fan or had an extremely bad experience with it. I also run several dozen machines (and tablets and phones) on it and have never had a single moment of downtime since I started.