If the idea was for Australia to lead by example on all the ways to NOT regulate an increasingly digital world, we're doing a bloody good job.
I worked for Atlassian in Sydney. I know there's a pretty decent tech culture thriving in Sydney + Melbourne (and elsewhere, I'm sure). I really hope more of my colleagues find a passion for politics and get to a position where they can steer our clearly incompetent government on governance issues pertaining to technology.
In the meantime, does anyone have a clear idea of how this Act will even work? Let's talk in specific terms for something like Signal. I'm assuming Signal has no legal footprint in Australia. How (if at all) can Australia compel Signal to allow Australian enforcement agencies to snoop on conversations?
If they can't, won't one of the worst outcomes of this legislation be that any technology company that needs to deal with encryption (which these days should be basically 100% of them) is forced to move overseas? How could a single Australian-based tech company have even the slightest scrap of credibility for data security when a law like this exists?
Final note - is anyone from Fastmail around here? I'm a Fastmail customer and this has me extremely concerned.
Signal: The employee is required to intentionally alter the app to no longer provide security, such that some communication may be intercepted. They are not permitted to publicly disclose that they have done this.
Unfortunately, this goes deeper. The law may be capable of compelling the Australian arms of Google or Apple, via a "technical assistance notice", to force-deploy a malicious version of the Signal app to your phone.
Don't be fooled by the narrative on the amendment.
The truth is that such interception is normal, both in Australia and in other countries.
The AA is two things. First, it is the government legalizing an already existing practice to cover its own liability in the event that the grey-area practice is eventually exposed, especially amid the current public awareness of privacy.
Second, and more importantly, the AA exists to get around recent security patches that rendered previously relied-upon vectors impossible to use; these collections were often done covertly. It's law compensating for a much-relied-on covert method that no longer works. Thus, the 11th-hour urgency.
The cover story that this is some huge privacy catastrophe is just more noise, to distract from the big story: how interceptions like this have been going on for more than a decade.
Will your phone accept an app upgrade which has been signed by Google or Apple instead of by Signal?
If not, is the law capable of compelling your phone vendor to ship you an update that weakens its upgrade verification enough that Apple/Google can then ship you such an upgrade?
Apple controls the root CA on iOS devices. I guess that Google controls the root CA on Android too. Therefore it is within their technical ability to issue a certificate that bears the name of Signal and is trusted by almost all devices. They wouldn’t need to ship any OS upgrades to forge the signature of Signal, as they are already the ultimate authority of who is Signal. I won’t speculate on whether they or their Australian employees will actually do so in the future.
AFAIK, that's not how Android works. Each apk is signed by a standalone certificate (which does not have to be signed by any CA), and the operating system will only allow an upgrade if the same certificate is used. Which means a developer must carefully guard the certificate's private key; if it's lost, the application can no longer be updated, but it must instead be released as a new application with a separate name. And since AFAIK this mechanism is part of the operating system (not the constantly-updated Google Play store), to bypass it would require a full OS update.
(This has other consequences: if a developer releases the same apk to several stores, but it's signed by different certificates on each store, a user who installed the apk from one store will not be able to upgrade it using the other store.)
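For the curious, here's roughly what that "same certificate" rule amounts to, as a minimal Python sketch. This is only an approximation of what Android actually does (real verification uses the full APK Signature Scheme, not just the v1 META-INF block), and the file names are hypothetical:

```python
# Sketch of Android's upgrade rule: an update is accepted only if it is
# signed by the same certificate as the installed app. Approximation
# only -- real Android verifies the full APK Signature Scheme.
import hashlib
import zipfile

def signing_cert_sha256(apk_path: str) -> str:
    """Hash the (v1) signature block shipped in META-INF as a stand-in
    for the app's signing certificate identity."""
    with zipfile.ZipFile(apk_path) as apk:
        blocks = [n for n in apk.namelist()
                  if n.startswith("META-INF/") and n.endswith((".RSA", ".DSA", ".EC"))]
        if not blocks:
            raise ValueError("no signature block found")
        return hashlib.sha256(apk.read(blocks[0])).hexdigest()

def upgrade_allowed(installed_apk: str, update_apk: str) -> bool:
    # Same certificate -> upgrade permitted. No CA is consulted, so a
    # cert that merely *names* Signal, signed by Google's root, fails.
    return signing_cert_sha256(installed_apk) == signing_cert_sha256(update_apk)
```

Which is why, as noted above, merely controlling a root CA doesn't help here: the check is an exact certificate match, not a chain of trust.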
My understanding is that it would not due to the different app signing certificate. This would be a new application unless Apple or Google signs the app using certificate forgery or similar.
The Australian government could just force Google or Apple to update their OS to not enforce signatures for some apps, or to put in vulnerabilities that could be used to bypass signature checking entirely.
I'm not a lawyer, but from what I hear any Australian employees can be compelled to change code and be threatened with prison if they tell anyone. Any companies with any presence in Aus can be given demands and gag orders to ensure they can't talk about what is happening.
And if this article is trustworthy, this isn't hypothetical, it's already happening right now. Right now people are being served with orders to do things like this and if they tell anyone (including the company they work for and are in essence "attacking"), they can kiss their life goodbye.
That's what makes it so scary. A programmer that is living in Aus that works for Google or Apple could one day get a notice that they are now mandated to modify code for an unknown reason with the threat of prison if they don't or if they tell anyone. Technically even programmers that don't work for those companies can be compelled to make contributions to open source software to introduce vulnerabilities or exploits, and again there is literally nothing the person can do except follow orders or go to jail forever.
If they are ever forced to do that and it becomes public knowledge, I think it will finally be enough for a critical mass of security-conscious people to buy phones with user-controlled platforms where it becomes impossible.
Figuring out people's easy passwords and password recovery methods isn't a weakness in iCloud, and shouldn't be counted as hacking iCloud. And if you were concerned with security, and had to buy a smartphone, what device is better than an iPhone?
Analogies are useful for illustrating a thought, not for supporting arguments. And identity theft (where the victim can do nothing to protect themselves) is not analogous in the first place.
Use multi-factor authentication, use long, difficult passwords, and don't let your security questions be obvious. If someone knocks me out and uses my finger to TouchID into my bank's app and transfer money, that's the price I pay for the convenience of not wanting to log in with my password. Same with using weak passwords and questions.
You can't seriously say that a multi-million dollar company can't enforce those security features by default.
This is the same as having a car recall, having people die before the letters reach their homes, and then saying "they should have known this company's cars could explode".
I don't understand the purpose of using analogies (valid or not) in this discussion.
Apple could have forced people to use multi-factor authentication, and whether or not they should have forced it is a separate discussion that can be had. But I was claiming that your original comment, that iCloud was "hacked", is incorrect, since it implies there was some weakness in Apple's technical backend.
Not forcing secure-by-default practices in your secure devices is a weakness in Apple's technical backend.
Maybe they should take a couple of notes for their broken cloud implementation from another phone manufacturer in the space that takes security seriously:
The celebgate phishing attacks involved more Google accounts than Apple accounts.
>According to court filings, Collins stole photos, videos and sometimes entire iPhone backups from at least 50 iCloud accounts and 72 Gmail accounts, “mostly belonging to celebrities,” between November 2012 and September 2014, when the photos were posted online.
> Signal: The employee is required to intentionally alter the app to no longer provide security, such that some communication may be intercepted. They are not permitted to publicly disclose that they have done this.
Replace "Signal" with "OpenBSD" and watch the freaking fireworks.
(Why OpenBSD? De Raadt hasn't promised to be "nicer" recently. Linus has.)
Seems like a fair summary would be that since they don't offer any truly secure services (e.g. e2e encryption), there's nothing that this law could require them to subvert. Turning over an individual's account data pursuant to an Australian issued warrant was something they were already doing, and nothing about that has changed under the new law.
There's no reason to be concerned about Fastmail as if there's any kind of uncertainty: your account is compromised, and you were given warning it was going to happen.
Meta-question: Obviously, the whole thing is rotten because of the secrecy. But is it better to do what Australia is doing, by making it a law which can at least be talked about, or to do it anyway, but covertly, like certain other five eyes countries (and beyond)? I don't really have an opinion or answer, just curious.
What Australia is doing is worse, because the actions can’t be talked about.
What the five eyes do is normal espionage. They often get away with it, but when they are uncovered their illegal actions aren’t covered by gag orders.
Here's a brief rundown of companies that might be impacted. I don't agree that server location is sufficient protection. Correct me if I'm wrong, but can't the authorities compel the Aussie company's directors to hand over foreign server credentials?
But we're not just talking about headquartered companies.
Cloudflare has datacenters in Brisbane, Melbourne, Perth, and Sydney, and an office in Sydney. Could they be compelled to hand over your website's certificate, which they hold because they're the front-end load-balancing proxy for your website? That way the police can man-in-the-middle your website, and Cloudflare would be gagged from telling you they gave away the private TLS certificate you entrusted them with.
By this logic, there is no need to compel Cloudflare to hand over the private TLS key. They can just compel any CA based in Australia to sign a fake cert, or directly compel OS vendors (Google, Apple, Microsoft, etc.) to add a government cert as a root CA.
A fraudulently issued cert compelled by a government might be detected by CA monitoring (e.g. Certificate Transparency logs); such monitoring was already set up to watch for government-controlled CAs issuing bad certs.
Stealing your own existing cert is less likely to be detected.
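To make the difference concrete: here's a minimal Python sketch of certificate pinning, the client-side analogue of that monitoring. A forged cert has different bytes and trips the pin; a MITM holding the *stolen* key presents the genuine cert and passes. The host and the pinned hash are hypothetical placeholders:

```python
# Compare the server's certificate fingerprint against a known-good pin.
import hashlib
import socket
import ssl

HOST = "example.com"                                    # hypothetical site
PINNED_SHA256 = "replace-with-known-good-fingerprint"   # hypothetical pin

def cert_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # cert as raw DER bytes
    return hashlib.sha256(der).hexdigest()

fp = cert_fingerprint(HOST)
if fp != PINNED_SHA256:
    print("cert changed: a forged cert is detectable this way (and via CT logs)")
else:
    print("pin matches -- note a MITM using the stolen key also passes this check")
```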
Obviously half the employees, including directors, are in Australia, so that opens them up to pressure anyway, just like Japan could in theory compel a director of a foreign company by threatening to jail him on a convenient tax accusation. The directors would never be so principled as to not comply, right?
Yes, while they were founded in Australia, the parent company Atlassian Corporation Plc is located in the UK (presumably for stock market reasons?).
But they still have subsidiaries in Australia, which matters given that at least a third of their employees are there and so would be subject to local laws. Those employees could be compelled to bypass things and not be allowed to talk about it to anybody, including the parent company.
Hotel rooms usually provide a small safe for which guests can pick the combination. There is also a secret backdoor combination or master key that lets hotel staff open the safe.
This creates an obvious security hole: if the backdoor combination is easy to guess, or a copy of the master key falls into the hands of unscrupulous employees or ex-employees, then the contents of the safe can be stolen. As a guest, there is nothing you can do to reduce this risk.
Now, imagine that a hotel found a solution where locked safes are destroyed and replaced at almost no cost to them, and could give up on the backdoor. Burglaries involving the backdoor would vanish, and although this slightly increases the risk of losing your belongings by forgetting your combination (since the hotel can no longer open the safes of forgetful guests), it's a net improvement in security.
Ten years later, the hotel community has reached a consensus that safes-without-backdoors are the Right Thing to Do. The state then mandates that all hotels should be able to give access to the contents of those safes to the police. But they're not saying that hotels have to use a backdoor combination or master key, so they're not really asking anyone to reduce the security of their safes...
“The legislation in no way compromises the security of any Australians’ digital communications.”
Reminds me of the time someone tried to legislate pi=3. There's absolutely no way to give police a back door into encryption without giving criminals the same back door.
That's working under the assumption that even the tiny scraps of privacy we have left shouldn't be safe from the government. In light of stuff like police helping blacklist union organizers (https://www.bbc.com/news/uk-43507728) or sharing data on protesters with the company they're protesting against (https://en.wikipedia.org/wiki/Victorian_Desalination_Plant#S...), that's a very poor assumption to make.
> There's absolutely no way to give police a back door into encryption without giving criminals the same back door.
This feels disingenuous to me. It would be fairly trivial, for example, to store a copy of all keys, encrypted with the government's public key. Of course, there are a million ways for it to go wrong, but that's different from "mathematically impossible."
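For concreteness, a minimal sketch of that escrow scheme using the third-party `cryptography` package (all names illustrative); the escrow copy is exactly the new failure point in question:

```python
# Key escrow sketch: every message key gets a second copy, encrypted to
# an escrow (government) public key. Illustrative only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

message_key = os.urandom(32)  # the symmetric key protecting a conversation

# The escrow copy: whoever holds escrow_private (or steals it) can now
# recover message_key -- a single point of failure for every user.
escrow_copy = escrow_public.encrypt(
    message_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
```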
But the million ways to go wrong IS the problem. I may be appealing to authority here, but is it disingenuous when an overwhelming majority of encryption and security experts agree?
Note that he was replying to a comment that was saying that a back door that is not wide open to criminals is comparable to thinking pi = 3.
As is pointed out in the Schneier article, the problems with a key escrow scheme are on the law enforcement side of things. They could lose access to their keys, especially if a lot of different agencies have keys.
Those are difficulties that can in theory be overcome, although it may not be practical to do so. That's a far cry from a pi = 3 issue.
The original argument was “The legislation in no way compromises the security of any Australians’ digital communications.”
This is approaching a pi = 3 level falsehood because of the “in no way compromises” clause. There are many schemes that are outright illegal (in my not a lawyer interpretation of this law), and it nakedly makes the other schemes harder with state actors as additional points of failure.
Well, that does actually make some schemes impossible (in a pi = 3 kind of impossible) because it means the private key has to leave someone’s device and be sent over the wire- and many schemes don’t do that.
https://en.m.wikipedia.org/wiki/Three-pass_protocol
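A toy sketch of that protocol (Shamir's version, commutative exponentiation modulo a prime), just to show that neither party's key ever leaves their device. Toy parameters, no authentication, not for real use:

```python
# Shamir three-pass: the message crosses the wire three times, each
# time under at least one lock; no key is ever transmitted.
import math
import random

P = 2**127 - 1  # a Mersenne prime; real use needs a vetted large prime

def keypair():
    while True:
        e = random.randrange(3, P - 1)
        if math.gcd(e, P - 1) == 1:          # e must be invertible mod P-1
            return e, pow(e, -1, P - 1)      # (lock, unlock) exponents

m = 424242                                   # the secret, an integer < P
ea, da = keypair()                           # Alice's exponents, kept local
eb, db = keypair()                           # Bob's exponents, kept local

c1 = pow(m, ea, P)          # pass 1: Alice -> Bob, locked by Alice
c2 = pow(c1, eb, P)         # pass 2: Bob -> Alice, both locks applied
c3 = pow(c2, da, P)         # pass 3: Alice -> Bob, Alice's lock removed
assert pow(c3, db, P) == m  # Bob removes his lock and reads the secret
```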
I’m not a cryptographer but I assume there are other schemes that are at least weakened by the requirement of a third party holding a key, much like the TSA master lock program was broken by statistical analysis of locks that were mastered this way.
But the mathematical impossibility of this aside, there is a very real practical impossibility in trusting an organization as large as the US government to keep such a database secure. There are better ways to help law enforcement than blowing such a large gaping hole in the web.
> Reminds me of the time someone tried to legislate pi=3. There's absolutely no way to give police a back door into encryption without giving criminals the same back door.
Most secure N party communication systems can be made to be secure N+1 party communication systems. If that +1 is the police, then arguably you have in fact given police a back door without giving criminals the same back door.
Criminals who want in would have to do so the same way they would before the back door--compromise one of the parties to the communication--except now there are N+1 parties to try and compromise instead of N so the attack surface is larger. How much this lowers security depends largely on the competence of the +1 party.
The popular model of a back door seems to be some wide open spying interface protected only by running on an undocumented port or something like that, and that all the bad guys have to do is get a copy of one client, reverse engineer it to find the access info, and then they are in.
For some reason, people tend to overlook that a back door is really just another communication channel, and the mechanisms modern cryptography provides for securing communication channels apply to back doors as much as they do to any other channel.
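Concretely, the N+1 construction is just multi-recipient hybrid encryption: encrypt the message once under a random session key, then wrap that key separately for each party, with the +1 wrap going to the police key. A minimal sketch using the third-party `cryptography` package, with illustrative names (real systems like PGP wrap session keys per recipient the same way):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Two legitimate parties plus the mandated extra recipient.
parties = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ("alice", "bob", "police")}

session_key = os.urandom(32)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"meet at noon", None)

# One wrapped session key per party. Compromising ANY one private key
# decrypts the message -- the attack surface grows from N to N+1.
wrapped = {name: key.public_key().encrypt(session_key, OAEP)
           for name, key in parties.items()}
```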
Never mind that increasing the number of parties to attack almost always makes the attacker's job easier, not harder. See the TSA master key debacle for an example of how adding a third-party master weakened the security and allowed statistical analysis to break the lock.
A company based in Iceland might have Australian customers, but if they have no representation in Australia, there's precious little the authorities can do. They are of course free to pursue their inquiries through the country where the company is resident (Iceland), but that country has no obligation to adhere to Aussie laws, and most likely isn't going to.
> 1. The law applies to all tech companies who have users in Australia, regardless of where the company is incorporated.
Yes, but as someone not living in, working in, or traveling to Australia I don't really have to care. I don't own a company that makes crypto, but if I did, my reaction to this would be along the lines of "don't open a Sydney office but otherwise business as usual".
You would also need to distrust any SSL cert that a multinational company's Australian employees can access, as the law can compel and gag them into stealing your SSL cert. All Australian residents and citizens, potentially including those abroad, are now legislated to be untrustworthy when it comes to holding any cryptographic secrets or access to your customers' systems.
In light of what happened to Symantec, WoSign, and StartCom, I would expect smart CAs to pull out of Australia, or at least spin off their Australian operations into a separate entity.
The devil is in the details: see s 317C, defining "communications provider".
> 5. the person provides a service that facilitates, or is ancillary or incidental to, the provision of an electronic service that has one or more end‑users in Australia
> 6. the person develops, supplies or updates software used, for use, or likely to be used, in connection with:
> (a) a listed carriage service; or
> (b) an electronic service that has one or more end‑users in Australia
> 8. the person manufactures or supplies components for use, or likely to be used, in the manufacture of a facility for use, or likely to be used, in Australia
There's several more.
A communications provider, under the given definitions is not bound to be on Australian soil, but rather interacting with Australia as a nation.
Applying this law to those of a different nationality is difficult and unlikely to succeed; however, those with dual citizenship can be held accountable.
This opinion I have, that the law does apply to those internationally, is one I have seen supported by several law firms I have occasional contact with.
Probably aided by:
> 317F. This Part extends to every external Territory.
> 317ZC.4 Part 4 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:
> (a) every external Territory; and
> (b) acts, omissions, matters and things outside Australia.
> 317ZD (Enforceable Undertakings).
> Part 6 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:
> (a) every external Territory; and
> (b) acts, omissions, matters and things outside Australia.
There are a few more, but since the Act states that it extends both to external Territories and to acts, omissions, matters and things outside Australia, I do think the most likely reading is that it can be enforced upon Australians living outside the borders.
"External Territories" here means Christmas Island, the Australian Antarctic Territory etc etc. When I say "extra-territoriality" I mean the application of law outside of Australia's borders.
The "acts, ommissions, matters and things" appears to give extra-territoriality to subject matter but not to legal personalities (ie companies and people).
The "communications provider" part is very broad, and while in Australia I am definitely covered by it. But the courts will not generally interpret legislation as having extra-territorial effect unless it explicitly says so. Otherwise every Act would need a stuff like "ps. the Fisheries Amendments (Rex Hunt Is A Wanker) Act is non-territorial".
My question is not about whether a legal personality (ie, a company) is affected if they have a physical-legal presence in Australia, because of course they are affected. My question is whether someone like me, who is outside Australia's boundaries, can be served a notice while I am out of the country. On my reading it's still a "no".
I'm not sure there are Australian cert providers... at least the government (gov.au) uses common ones such as Entrust and Digicert, and companies like Atlassian use Digicert.
I feel like I must be misreading, because my first-pass interpretation is that companies would terminate all of their Australian citizen employees, and add terms to the remaining contracts saying that employees must notify them if they become an Australian citizen.
Australian tech companies need to release a program called UltraDecrypt that simply brute-force decrypts any message on their platforms given billions of years and sell it for $10M per license.
Then when law enforcement claims they are not being cooperative, they can say they have a tool that meets their needs if they're patient.
You'd probably be charged with something and punished.
Courts don't take too kindly to people trying to be cute with their demands. It's not like they are going to say "well, they are technically right" and give up; they are going to just escalate the consequences or clarify the request until you comply in the way that everyone knows they want you to.
This. The biggest weakness in encryption systems isn't the math, it's the guy in the uniform who has tied you to a chair and keeps beating your feet with a rubber hose.
Well, don't be too excited about it, because laws like the one this article is about make that kind of response impossible (as in you WILL be thrown in jail for the rest of your life if you attempt it).
And even in the US it still isn't a sure thing that Lavabit's response would work again if someone else tried it. I don't know the details, but I believe there is still some uncertainty around whether the FBI just kind of "allowed" them to close down by not pursuing it any further, or whether they got what they needed, or whether secret laws were changed because of this instance.
In Lavabit's case, there was a lot of FBI involvement, a lot of secret court orders and gag orders, and a lot of accusations from the owner of Lavabit that he was brought to secret courts without legal representation and no chance to appeal, and even he says that there are things he still can't talk about.
A colleague and I published a related idea [1] last year: Weaken the encryption just enough so that a government can (barely) afford to do the brute force if they really do care about it that much. (Hint: They almost certainly don't.)
Please note that we're not seriously suggesting that encryption providers should adopt this -- not as long as there are other options. But if you're legally obligated to do something, this is the "f*ck off and leave me alone" approach to compliance.
I've often thought a good solution would be zero-knowledge weak encryption with an additional strong encryption layered on top. When the government comes to ask for data you decrypt with the strong key, but then they still have to do the work to break the weak key.
Thinking like an economist, you want to align the incentives to make it possible but not free to access user data. A weak key (per user) that's breakable with $10k compute cost seems about right to me, but the actual optimal cost may be higher or lower.
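Back-of-envelope version of that tuning, with every number here an assumption rather than a measurement: pick the weak layer's key size so the expected brute-force spend lands near the target.

```python
# Expected brute-force cost for a k-bit key at an assumed compute price.
TRIES_PER_DOLLAR = 5e10          # assumed price point; adjust to taste

def expected_cost(key_bits: int) -> float:
    return 2 ** (key_bits - 1) / TRIES_PER_DOLLAR  # expect half the space

for bits in (40, 45, 50, 55):
    print(f"{bits}-bit weak key: ~${expected_cost(bits):,.0f} per account")
# Each extra bit doubles the cost, so ~50 bits lands near $10k at this
# price point, and the dial can be re-turned as compute gets cheaper.
```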
I am not a cryptologist, not a lawyer, and only a marginally capable software engineer.
But I think we've had the option to send personally encrypted end-to-end messages for a while now. (Open)PGP anyone?
So instead of using Signal, or WhatsApp, or whatever, and depending on their client-side encryption (and possible server-side decryption) of private messages, how about plain email with standalone user-side encryption?
Two things may come of this: Google will stop "interpreting" my email messages, and laws like these stop mattering very much.
Of course, 5th amendment (and its siblings in other countries) still apply...
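A minimal sketch of that workflow using the third-party `python-gnupg` wrapper, assuming GnuPG is installed and the recipient's public key is already in your keyring (the address is a made-up example). The mail provider only ever carries the armored blob:

```python
# Encrypt locally with OpenPGP before the message ever touches a mail
# provider. Requires GnuPG installed plus the python-gnupg package.
import gnupg

gpg = gnupg.GPG()  # uses your default GnuPG home directory

result = gpg.encrypt(
    "meet at noon",
    ["friend@example.com"],  # hypothetical recipient key in your keyring
)
assert result.ok, result.status

# This ASCII-armored ciphertext is all Gmail/Fastmail/etc. ever sees:
print(str(result))
```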
This is where a political problem needs to be fought where it hurts the government most: threaten to get the hell out and let them watch their corporate tax income dwindle.