It is impossible to run anything like Twitter in Germany without being sued into oblivion for defamation and hit with criminal charges for something a user posted. Germany's equivalent of cease-and-desist letters (Abmahnungen) makes running a big site with user-generated content very unattractive.
Or accidentally trigger a three-strike rule or infringe some odd sentence in the 300k pages of regulations that concern your business... Europe is not going to have a big tech company (or even mid-sized for that matter) until this legal approach changes in an extreme way.
Agreed, and bring it on. If this becomes a thing then I'll run a platform you can outsource your busy time in a VM to. This sounds like a ton of fun TBH and makes me think of game automation. It could evolve into a game of cat and mouse, giving me an excuse to spend more time messing w/ OpenAI.
Or just play a full screen slide show of various spreadsheets and documents. Will they really study the screenshots that closely to see the repetition?
In my experience employers only tend to look back at things like this once they have a “reason” and are trying to document/justify performance issues. Although I agree it doesn’t seem too far off to have a system that proactively analyzes the screenshots and alerts a manager when Facebook or gmail seems to be open a disproportionate amount of time.
The phone was released with iOS 13, which prominently featured enhancements to USB restricted mode[1] that were supposed to defend against GrayKey/Cellebrite attacks. Seems like GrayKey can easily bypass that feature. Does not really inspire much trust in Apple's security team, as USB restricted mode was already a band-aid itself.
I think people don't fully realize the magnitude of such a task. We're not talking about something like consoles where the attack has to meet a higher bar to be viable (be persistent across reboots and upgrades, work over the internet against the provider's infrastructure, etc.). And as usual, the attacker only needs to get it right once and they can afford to wait for months or years to find the exploit on the particular device they have seized.
It doesn't need to be perfect. The main requirement is that a chip holds on to a secret key, releases it upon getting the correct pin, has a limit on attempts, and is resilient to voltage and timing attacks. That's difficult but not exceptionally difficult.
Apple has chosen to run a ton of code inside the secure enclave, and bugs from that are on them.
Is it? Have nonvolatile storage inside the chip, and increment+verify the attempt counter before checking if the supplied PIN is correct. What do you need beyond that?
The difficulty (in my view) comes from ensuring that I can't just clone/replicate the state of the device from when I had more tries left and then try again.
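The increment-before-verify idea described above can be sketched in a few lines. This is a toy Python model with hypothetical names, not real firmware; in actual hardware the counter and key live in tamper-resistant on-chip nonvolatile storage, which is exactly the part the cloning objection targets:

```python
import hmac
import hashlib

class ToySecureElement:
    """Toy model of a PIN-gated key store. In real hardware the
    attempt counter and secret key sit in on-chip NV storage."""
    MAX_ATTEMPTS = 10

    def __init__(self, pin, secret_key):
        self._pin_digest = hashlib.sha256(pin.encode()).digest()
        self._secret_key = secret_key
        self._attempts = 0  # would be persisted to NV storage

    def unlock(self, pin_guess):
        if self._attempts >= self.MAX_ATTEMPTS:
            return None  # locked out; real hardware might wipe the key
        # Increment (and persist) the counter *before* comparing, so
        # cutting power mid-check cannot win back an attempt.
        self._attempts += 1
        guess_digest = hashlib.sha256(pin_guess.encode()).digest()
        if hmac.compare_digest(guess_digest, self._pin_digest):
            self._attempts = 0  # correct PIN resets the counter
            return self._secret_key
        return None
```

The constant-time `hmac.compare_digest` stands in for the timing-attack resistance mentioned above; the cloning attack corresponds to copying `_attempts` back to an earlier value, which is why the counter must live inside the tamper-resistant boundary.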
> the near impossible task of defending a physical device in the hands of an attacker.
If you assume the device is off and the user chose a strong password, it's pretty easy to defend. You simply encrypt the data with a key which is encrypted with the user's password.
If you want to protect devices that are on, or want to protect devices with less than stellar passwords, then it becomes harder.
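The password-wrapped-key scheme above can be sketched with stdlib primitives. This is a toy illustration (a real system would use an authenticated cipher such as AES-GCM for both layers); the XOR wrap is acceptable here only because each derived key-encryption key is used exactly once:

```python
import os
import hashlib

def wrap_key(data_key, password, salt):
    # Stretch the password into a key-encryption key (KEK).
    kek = hashlib.scrypt(password.encode(), salt=salt,
                         n=2**14, r=8, p=1,
                         maxmem=64 * 1024 * 1024, dklen=len(data_key))
    # XOR-wrap the data key: a one-time pad, since the KEK is used once.
    return bytes(a ^ b for a, b in zip(data_key, kek))

unwrap_key = wrap_key  # XOR is its own inverse

salt = os.urandom(16)
data_key = os.urandom(32)   # random key that encrypts the actual data
blob = wrap_key(data_key, "correct horse battery staple", salt)
assert unwrap_key(blob, "correct horse battery staple", salt) == data_key
assert unwrap_key(blob, "wrong password", salt) != data_key
```

With a strong password, an attacker holding `blob` and `salt` still has to pay the full scrypt cost per guess, which is the "pretty easy to defend" case above.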
If you assume a strong password you don't need to worry about dictionary attacks.
There are two ways to slow down such attacks: key stretching and secure storage. Key stretching is a good idea.
I recommend not relying fully on secure storage, because I've heard of tons of hardware vulnerabilities (side-channel attacks, undervoltage, electron microscopes, buggy implementations). I trust math more than a physical object. In fact it seems impossible to me to build fully secure storage: if someone has a delicate enough measurement tool to measure the atoms inside the storage, the data can be extracted. If you store the password (or hashed password) as well as the key in the secure storage, and have it return the key only when the input password is correct, you run the risk of someone finding a bug in the storage that extracts the key without the password. Then you're compromised.
But you build a system so that the secure storage is no worse than regular crypto. You do the encryption using a combination of the user's password and the output of the secure storage. That way even if the secure storage is fully compromised, the password is still needed.
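One way to realize "no worse than regular crypto" is to hash the stretched password together with the secret the secure storage releases, so that neither input alone recovers the data key. A minimal sketch (toy code, all names hypothetical):

```python
import hashlib

def derive_data_key(password, salt, enclave_secret):
    # Key stretching on the password (slow and memory-hard).
    stretched = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1,
                               maxmem=64 * 1024 * 1024, dklen=32)
    # Mix in the secure storage's secret: compromising the hardware
    # alone yields enclave_secret but not the password, and vice versa.
    return hashlib.sha256(stretched + enclave_secret).digest()

salt = b"\x00" * 16
secret = b"\x11" * 32
k1 = derive_data_key("hunter2", salt, secret)
# A fully compromised secure element still leaves the scrypt cost per guess.
assert k1 != derive_data_key("hunter2", salt, b"\x22" * 32)
assert k1 != derive_data_key("wrong", salt, secret)
```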
You can't really assume a strong password, because if you have to type in 12 characters, letters and punctuation marks every time you want to look at your phone, you're going to give up on the whole thing pretty quickly.
To be usable, phones need to allow relatively weak passwords.
I've had a password like that on my (Android) phone for ~7 years and haven't given up. I don't use punctuation though, it's not worth the extra taps to get to the punctuation keyboard for the entropy you gain. I've never had fingerprint or face ID enabled either.
12 characters drawn from lowercase letters and digits give about 62 bits of entropy. That's plenty if proper key stretching is in place.
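The arithmetic, assuming a 36-symbol alphabet (lowercase letters plus digits, matching the no-punctuation password described above):

```python
import math

alphabet = 26 + 10  # lowercase letters + digits
length = 12
entropy_bits = length * math.log2(alphabet)
print(round(entropy_bits, 1))  # prints 62.0
```

Adding uppercase letters (a 62-symbol alphabet) would raise this to roughly 71 bits for the same length.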
Linus Sebastian says that when his phone got slower to open up, he got happier, because it caused him to use his phone less, cutting out the useless stuff. https://youtu.be/WGZh-xP-q7A?t=305
When was the last time a regular person turned their phone off? Not counting reboots or out of battery incidents I'm going to guess not since it was purchased.
Do you really think it's impossible to "defend a physical device" – prevent an attacker from accessing and decrypting the data stored on it? I believe it is possible, that's the promise of hardware security modules. The article is about a mostly secure physical system that Apple undermines by encouraging the use of easily-cracked numeric PINs. I am not sure if the implementations are there yet, but biometric authentication looks like a promising solution to this problem.
You need to think about this a bit more: biometrics are bad if you pass them over the network, where an attacker can replay them, but it's different in a local context where they never leave the device. You get a high-entropy key, and an attacker who can obtain both your device and a sufficiently high-quality biometric scan can also simply lock you in a room until you unlock the device. That seems like a reasonable compromise.
The handling of biometric data is designed to be secure between the sensors and the secure enclave. For example, data from the fingerprint sensor is encrypted when it is sent over the wires inside the device. The secure enclave does not store images of the fingerprint, but a representation of it which is not enough to reverse back into a fingerprint.
This is covered in the Apple Platform Security Guide.
What makes you think that's relevant to the discussion here? The person I replied to was under the incorrect assumption that someone in possession of a phone could extract stored fingerprint images, which is not true of any well-designed biometric system.
If you do a little bit of reading about the topic, too, note how well-designed biometric systems require more than a simple fingerprint or photograph — e.g. Apple's FaceID has liveness checks for eye motion and uses a 3D scan. None of these are impossible for a well-resourced attacker but that's true of the alternatives as well. This is why you need to think in terms of threat models — e.g. the attacker who can get a high-resolution 3d scan of your face can also watch you type your passcode in so the latter isn't more secure in practice.
Apple specifically uses biometric authentication (Face ID and Touch ID) except when the device is first powered on. This is (at least partly) because of US legal rulings that allow LEOs to compel you to provide a fingerprint and similar biometric ID but cannot compel you to provide a password.
I think it’s important to stress that we have essentially no solid information as to whether these attacks are real; if they are, what methods they use or how long they take; nor what measures can be used against them.
Let’s not sing the requiem for their security team just yet.
The new scheme is called “Capital Allowances for Intangible Assets”, and in practice it’s the same as “Double Irish with a Dutch Sandwich”. They were both very intentionally put in place by the Irish government, to allow (mostly) giant US firms to pay nearly zero tax on non-US profits, as long as they open Irish offices and employ a decent number of Irish people.
The closing of the Double Irish tax evasion scheme is indeed largely useless, as Ireland ensured that there’s a good replacement. They don’t want to lose the jobs that come with these tax evasion schemes.
Or rather, if they can't find a legal loophole they'll use billions of dollars they have in their pockets because of previous tax avoidance efforts to lobby for new legal loopholes.
And even Ireland still remains one of those jurisdictions: while they were successfully pressured into passing this new general legislation, Ireland has refused to accept Apple's unpaid taxes and is spending millions fighting the EU ruling in court.[0]
Yes, but that takes time, and it involves the company trusting their money to a third party that is trustworthy in inverse proportion to how much tax leniency they are willing to grant.
"You can't stop us from dodging taxes so don't even try" is what they want us to believe, because if we believe it then they can dodge taxes every year without inconvenience.
Do you think that the accountants at Google, Apple, et al. have failed to foresee this sort of pushback? I personally don't doubt they've been planning alternative tax-avoidance scenarios for years. That would include lobbying other jurisdictions for favorable tax treatment in any rational scenario.
To date, they have suffered no serious financial or reputational consequences for tax evasion, so there's no reason to think they won't continue to avoid taxes as part of their overall financial strategy.
"A loophole is an ambiguity or inadequacy in a system, such as a law or security, which can be used to circumvent or otherwise avoid the purpose, implied or explicitly stated, of the system." [1]
A loophole is by definition legal, otherwise it would be fraud.
That's why it's called a 'loophole' - when the letter of the law is followed, but the intent is subverted (or what people imagine what the intent should have been).
I am pretty sure the lawmakers intended it exactly as it was used. I'd give them benefit of doubt if they closed the hole within a year after discovery, but this is the most well known tax loophole of which even children know the name, used for several decades.
I guess nothing is illegal if you are not prosecuted/convicted for the said activities. Not even if you are a war criminal (see the recent Trump saga).
The tax rules are confusing and complex enough that only the wealthy can benefit (i.e., use loopholes).
Maybe they are starting to see the writing on the wall: if they keep hiding their profit away, governments are eventually going to tax them on their revenues, regardless of profit. Or based on worldwide profit rather than local profit. That's what France is starting to do, and what the EU is looking very closely at.
I could imagine the US following through: the idea that rich megacorps should pay taxes is not particularly unpopular. And if you can't tax profit, then you have to tax something else...
Google actually does this a lot already. It's a very minor thing, but pay attention to the text when signing up for Google services in various countries (cloud, ads, play, etc...), and who you are actually paying your money to. Much of the time, you aren't paying to Google Inc.
Which is a reason why in some locations it's illegal to photograph or video record voting. Where I vote, you do it in a curtained area so it would be easy to get away with but with setups like where Trump and family voted in New York, you're mostly out in the open with a short partition.