Notices have already been issued under Australia’s new encryption laws (innovationaus.com)
140 points by DyslexicAtheist on Feb 6, 2019 | hide | past | favorite | 101 comments



If the idea was for Australia to lead by example on all the ways to NOT regulate an increasingly digital world, we're doing a bloody good job.

I worked for Atlassian in Sydney. I know there's a pretty decent tech culture thriving in Sydney + Melbourne (and elsewhere, I'm sure). I really hope more of my colleagues find a passion for politics and get to a position where they can steer our clearly incompetent government on governance issues pertaining to technology.

In the meantime, does anyone have a clear idea of how this Act will even work? Let's talk in specific terms about something like Signal. I'm assuming Signal has no legal footprint in Australia. How, if at all, can Australia compel Signal to allow Australian enforcement agencies to snoop on conversations?

If they can't, won't one of the worst outcomes of this legislation be that any technology company that needs to deal with encryption (which these days should be basically 100% of them) is forced to move overseas? How could a single Australian-based tech company have even the slightest scrap of credibility for data security when a law like this exists?

Final note - is anyone from Fastmail around here? I'm a Fastmail customer and this has me extremely concerned.


Signal: The employee is required to intentionally alter the app to no longer provide security, such that some communication may be intercepted. They are not permitted to publicly disclose that they have done this.

Unfortunately, this goes deeper. The law may be capable of compelling Google's or Apple's Australian operations to force-deploy a malicious version of the Signal app to your phone under a "technical assistance notice".


Don't be fooled by the narrative on the amendment.

The truth is that such interception is normal, in Australia and in other countries.

AA is two things. First, it is the government legalizing an already existing practice to cover its own liability in the event that the grey practice is eventually exposed, especially amid the current public awareness of privacy.

Second, and more importantly, AA exists to get around recent security patches that rendered previously relied-upon vectors impossible to use, since these collections were often done covertly. It's law compensating for a much-relied-on covert method that no longer works. Thus, the 11th-hour urgency.

The cover story that it is this huge privacy catastrophe is just more noise, to distract from the big story: how interceptions like this have been going on for more than a decade.


Will your phone accept an app upgrade which has been signed by Google or Apple instead of by Signal?

If not, is the law capable of compelling your telephone vendor to ship you an upgrade that weakens its upgrade testing enough that Apple/Google can ship you such an upgrade?


Apple controls the root CA on iOS devices. I guess that Google controls the root CA on Android too. Therefore it is within their technical ability to issue a certificate that bears the name of Signal and is trusted by almost all devices. They wouldn’t need to ship any OS upgrades to forge the signature of Signal, as they are already the ultimate authority of who is Signal. I won’t speculate on whether they or their Australian employees will actually do so in the future.


AFAIK, that's not how Android works. Each apk is signed by a standalone certificate (which does not have to be signed by any CA), and the operating system will only allow an upgrade if the same certificate is used. Which means a developer must carefully guard the certificate's private key; if it's lost, the application can no longer be updated, but it must instead be released as a new application with a separate name. And since AFAIK this mechanism is part of the operating system (not the constantly-updated Google Play store), to bypass it would require a full OS update.

(This has other consequences: if a developer releases the same apk to several stores, but it's signed by different certificates on each store, a user who installed the apk from one store will not be able to upgrade it using the other store.)
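The same-certificate rule can be sketched roughly like this (toy byte strings stand in for real DER certificates, and `upgrade_allowed` is a hypothetical helper, not an actual Android API):

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a (stand-in) DER-encoded signing certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def upgrade_allowed(installed_cert: bytes, update_cert: bytes) -> bool:
    """Android only installs an update over an existing app if the update
    is signed by the same certificate as the installed version."""
    return cert_fingerprint(installed_cert) == cert_fingerprint(update_cert)

# Toy certificates standing in for real DER blobs.
signal_cert = b"signal-release-cert"
forged_cert = b"someone-else-cert"

print(upgrade_allowed(signal_cert, signal_cert))  # True: same signer
print(upgrade_allowed(signal_cert, forged_cert))  # False: update rejected
```

This is also why the multi-store caveat above holds: the OS compares fingerprints, not store identities, so differently signed copies of the same app can't upgrade each other.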


I don't know, but I presume Google cross-signs APKs that are approved through the Play Store?


No.

Easily checked: run jarsigner -verify -verbose -certs some.apk on an APK of your choice. I ran it on 31 APKs just now; no cross-signing visible anywhere.


My understanding is that it would not due to the different app signing certificate. This would be a new application unless Apple or Google signs the app using certificate forgery or similar.


The Australian government could just force Google or Apple to make updates to their OS to not enforce signatures for some apps, or put in vulnerabilities that could be used by them to bypass signature checking at all.


Good luck with that. How do you have any idea it's running in Australia or belongs to an Australian?


I'm not a lawyer, but from what I hear any Australian employees can be compelled to change code and be threatened with prison if they tell anyone. Any companies with any presence in Aus can be given demands and gag orders to ensure they can't talk about what is happening.

And if this article is trustworthy, this isn't hypothetical, it's already happening right now. Right now people are being served with orders to do things like this and if they tell anyone (including the company they work for and are in essence "attacking"), they can kiss their life goodbye.

That's what makes it so scary. A programmer that is living in Aus that works for Google or Apple could one day get a notice that they are now mandated to modify code for an unknown reason with the threat of prison if they don't or if they tell anyone. Technically even programmers that don't work for those companies can be compelled to make contributions to open source software to introduce vulnerabilities or exploits, and again there is literally nothing the person can do except follow orders or go to jail forever.


and if the change is caught in a code-review, then what?


I don't know, and I'm not sure if anyone outside of the person who gets the order can know, because these laws are so vague anything can be required.


Maybe Australia would even force Samsung to issue OS updates for as long as an Australian uses the device in question?

Put that way, it sounds like a feature, doesn't it? But perhaps a little implausible?


Good luck with that.


> This would be a new application unless Apple or Google signs the app using certificate forgery or similar.

What if Apple or Google changes the OS security mechanism itself?


If they are ever forced to do that and it becomes public knowledge, I think it will finally be enough for a critical mass of security-conscious people to buy phones with user-controlled platforms where it becomes impossible.

In a way this might be a blessing.


No, they won't.

They'll buy the newest Apple phone because it's 10 nanometers thinner and it's a status symbol.

Apple Cloud was hacked and celebrity pictures were stolen and no one batted an eye.


> Apple Cloud was hacked and celebrity pictures were stolen and no one batted an eye.

iCloud was not hacked; the celebrities were spear phished: https://en.wikipedia.org/wiki/ICloud_leaks_of_celebrity_phot...


> We didn't give random people loans in your name, your identity was stolen.

When you shift responsibility from the provider to the customer, you play right into the security blame game.


So, if I put a post-it with my PIN and SSN on my credit card and give that to a random stranger, it's the bank's fault?


So, the celebrities put their password on a post-it on their iPhone?


Figuring out people's easy passwords and password recovery methods isn't a weakness in iCloud, and shouldn't be counted as hacking iCloud. And if you were concerned with security, and had to buy a smartphone, what device is better than an iPhone?


> We didn't give random people loans in your name, your identity was stolen.

When you shift responsibility from the provider to the customer, you play right into the security blame game.


Analogies are useful for illustrating a thought, not for supporting arguments. And identity theft (where the victim can do nothing to protect themselves) is not analogous in the first place.

Use multi-factor authentication, use long, difficult passwords, and don't let your security questions be obvious. If someone knocks me out and uses my finger to TouchID into my bank's app and transfers money, that's the price I pay for the convenience of not wanting to log in with my password. Same with using weak passwords and questions.


You can't seriously say that a multi-million dollar company can't enforce those security features by default.

This is the same as a car recall where people die before the letters reach their homes, and then saying 'they should have known this company's cars could explode'.


I don't understand the purpose of using analogies (valid or not) in this discussion.

Apple could have forced people to use multi-factor authentication, and whether or not they should have forced it is a separate discussion that can be had. But my claim was that your original statement, that iCloud was "hacked", is incorrect, since it implies there was some weakness in Apple's technical backend.


Not forcing secure-by-default practices in your secure devices is a weakness in Apple's technical backend.

Maybe they should take a couple of notes for their broken cloud implementation from another phone manufacturer in the space that takes security seriously:

https://www.theguardian.com/commentisfree/2014/sep/30/iphone...

https://www.cso.com.au/article/577197/apple-tells-ios-9-deve...


> Apple Cloud was hacked and celebrity pictures were stolen and no one batted an eye.

Phishing. I think your comment is pretty disingenuous, as Apple has generally shown a strong commitment to privacy and security.


> We didn't give random people loans in your name, your identity was stolen.

When you shift responsibility from the provider to the customer, you play right into the security blame game.


The celebgate phishing attacks involved more Google accounts than Apple accounts.

>According to court filings, Collins stole photos, videos and sometimes entire iPhone backups from at least 50 iCloud accounts and 72 Gmail accounts, “mostly belonging to celebrities,” between November 2012 and September 2014, when the photos were posted online.

https://www.washingtonpost.com/news/the-intersect/wp/2016/03...


> Signal: The employee is required to intentionally alter the app to no longer provide security, such that some communication may be intercepted. They are not permitted to publicly disclose that they have done this.

Replace "Signal" with "OpenBSD" and watch the freaking fireworks.

(Why OpenBSD? De Raadt hasn't promised to be "nicer" recently. Linus has.)


Fastmail discussed the bill on their blog at https://fastmail.blog/2018/09/10/access-and-assistance-bill/


Seems like a fair summary would be that since they don't offer any truly secure services (e.g. e2e encryption), there's nothing that this law could require them to subvert. Turning over an individual's account data pursuant to an Australian issued warrant was something they were already doing, and nothing about that has changed under the new law.


There's no reason to be concerned about Fastmail as if there's any kind of uncertainty: your account is compromised, and you were given warning it was going to happen.


Meta-question: obviously, the whole thing is rotten because of the secrecy. But is it better to do what Australia is doing, by making it a law which can at least be talked about, or to do it anyway, but covertly, like certain other Five Eyes countries (and beyond)? I don't really have an opinion or answer, just curious.


What Australia is doing is worse, because the actions can’t be talked about.

What the Five Eyes do is normal espionage. They often get away with it, but when they are uncovered, their illegal actions aren't covered by gag orders.


>I'm a Fastmail customer and this has me extremely concerned.

IIRC Jeremy said it's a Swiss company with Swiss servers, so, no problem.


Are you thinking of Protonmail? Fastmail is an Australian company with US based servers.


Nope I didn't say that - and I haven't been involved with fm for years.


> IIRC Jeremy said it's a Swiss company with Swiss servers, so, no problem.

Do you have a link for that statement?


Doesn't matter, if they have an Australian employee they're compromised.


Here's a brief list of companies that could/might be impacted. I don't agree that server location is sufficient protection. Correct me if I'm wrong, but can't the authorities compel the Aussie company's directors to hand over foreign server credentials?

Xero

Atlassian

Canva

Fastmail

99 Designs


Good angle.

But we're not just talking about headquartered companies.

Cloudflare has datacenters in Brisbane, Melbourne, Perth, and Sydney, and an office in Sydney. Could they be compelled to hand over your website's certificate, which they hold because they're the front-end load-balancing proxy for your website? That way the police can man-in-the-middle your website. Cloudflare would be gagged from telling you they gave away the private TLS certificate you entrusted them with.


Spin off Cloudflare Australia as an independent subcontractor of Cloudflare International.

Give customers a choice of whether to enable Australian proxies for their site.

Use distinct certificates for the proxies in each region, with short expiration times to limit the damage if a certificate falls into the wrong hands.
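The short-expiry idea might look something like this as policy code (the seven-day lifetime and two-day margin are invented numbers, not anything Cloudflare actually uses):

```python
from datetime import datetime, timedelta

# Hypothetical policy: rotate a regional proxy certificate well before it
# expires, so a stolen key is only useful for a short window.
CERT_LIFETIME = timedelta(days=7)
ROTATE_MARGIN = timedelta(days=2)

def should_rotate(issued_at: datetime, now: datetime) -> bool:
    """True once the cert is within ROTATE_MARGIN of its expiry."""
    return now >= issued_at + CERT_LIFETIME - ROTATE_MARGIN

issued = datetime(2019, 2, 1)
print(should_rotate(issued, datetime(2019, 2, 3)))  # False: still fresh
print(should_rotate(issued, datetime(2019, 2, 6)))  # True: time to rotate
```

The shorter the lifetime, the smaller the window in which a compelled or stolen key is useful, at the cost of more frequent issuance.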


By this logic, there is no need to compel Cloudflare to hand over the private TLS key. They can just compel any CA based in Australia to sign a fake cert, or directly compel OS vendors (Google, Apple, Microsoft, etc.) to install a government cert as a root CA.


A fraudulently issued cert compelled by a government might be detected by CA reporting (e.g. Certificate Transparency logs); this is already done to watch for directly government-controlled CAs issuing bad certs.

Stealing an existing cert is less likely to be detected.


My understanding is an Australian cloudflare employee could be compelled to, yeah, hand over certificates, without telling their boss they did so.


Atlassian is a UK company.

Obviously half the employees, including directors, are in Australia, so that opens them up to pressure anyway, just like Japan could in theory compel a director of a foreign company by threatening to jail him on a fortuitous tax accusation. The directors would never be so square as to not comply, right?


Yes, while they were founded in Australia, the parent company Atlassian Corporation Plc is located in the UK (presumably for stock market reasons?).

But they still have subsidiaries in Australia, which arguably does a lot of work given that at least a third of their employees are in Australia, and so would be subject to the local laws. The employees could be compelled to bypass things and not be allowed to talk about it to anybody, including the parent company.


The hotel room analogy is almost correct.

Hotel rooms usually provide a small safe for which guests can pick the combination. There is also a secret backdoor combination or master key that lets hotel staff open the safe.

This creates an obvious security hole: if the backdoor combination is easy to guess, or a copy of the master key falls into the hands of unscrupulous employees or ex-employees, then the contents of the safe can be stolen. As a guest, there is nothing you can do to reduce this risk.

Now, imagine that a hotel found a solution where locked safes are destroyed and replaced at almost no cost to them, and could give up on the backdoor. Burglaries involving the backdoor would vanish, and although this slightly increases the risk of losing your belongings by forgetting your combination (since the hotel can no longer open the safes of forgetful guests), it's a net improvement in security.

Ten years later, the hotel community has reached a consensus that safes-without-backdoors are the Right Thing to Do. The state then mandates that all hotels should be able to give access to the contents of those safes to the police. But they're not saying that hotels have to use a backdoor combination or master key, so they're not really asking anyone to reduce the security of their safes...


“The legislation in no way compromises the security of any Australians’ digital communications.”

Reminds me of the time someone tried to legislate pi=3. There's absolutely no way to give police a back door into encryption without giving criminals the same back door.


That's working under the assumption that even the tiny scraps of privacy we have left shouldn't be safe from the government. In light of stuff like police helping blacklist union organizers (https://www.bbc.com/news/uk-43507728) or sharing data on protesters with the company they're protesting against (https://en.wikipedia.org/wiki/Victorian_Desalination_Plant#S...), that's a very poor assumption to make.


> There's absolutely no way to give police a back door into encryption without giving criminals the same back door.

This feels disingenuous to me. It would be fairly trivial, for example, to store a copy of all keys, encrypted with the government's public key. Of course, there are a million ways to go wrong, but that's different from "mathematically impossible."
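A toy sketch of that escrow idea: each session key is additionally stored encrypted under an escrow ("government") RSA public key. This is textbook RSA with tiny primes, wildly insecure and purely to show the mechanism, not a real design:

```python
# Hypothetical "government" RSA keypair: n = p*q, e public, d private.
p, q = 61, 53
n = p * q        # 3233
e = 17           # public exponent
d = 2753         # private exponent: modular inverse of e mod (p-1)*(q-1)

def escrow_encrypt(session_key: int) -> int:
    # Anyone can encrypt to the escrow authority using (e, n).
    return pow(session_key, e, n)

def escrow_decrypt(ciphertext: int) -> int:
    # Only the holder of d can recover the session key.
    return pow(ciphertext, d, n)

session_key = 42
stored = escrow_encrypt(session_key)
print(stored != session_key)                  # True: key not stored in the clear
print(escrow_decrypt(stored) == session_key)  # True: escrow holder recovers it
```

The "million ways to go wrong" live outside the math: protecting d, deciding who may call `escrow_decrypt`, and keeping the escrow database from being stolen wholesale.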


But the million ways to go wrong IS the problem. I may be appealing to authority here, but is it disingenuous when an overwhelming majority of encryption and security experts agree?

https://www.washingtonpost.com/news/powerpost/paloma/the-cyb...

https://www.schneier.com/blog/archives/2018/05/ray_ozzies_en...

https://www.justsecurity.org/53316/criminalize-security-crim...


Note that he was replying to a comment that was saying that a back door that is not wide open to criminals is comparable to thinking pi = 3.

As is pointed out in the Schneier article, the problems with a key escrow scheme are on the law enforcement side of things. They could lose access to their keys, especially if a lot of different agencies have keys.

Those are difficulties that can in theory be overcome, although it may not be practical to do so. That's a far cry from a pi = 3 issue.


The original argument was “The legislation in no way compromises the security of any Australians’ digital communications.”

This is approaching a pi = 3 level falsehood because of the “in no way compromises” clause. There are many schemes that are outright illegal (in my not a lawyer interpretation of this law), and it nakedly makes the other schemes harder with state actors as additional points of failure.


Appealing to authority is only a bad argument if the authority is irrelevant to the topic.

Appealing to Schneier on the topic of encryption is not an irrelevant appeal.


Well, that does actually make some schemes impossible (in a pi = 3 kind of impossible), because it means the private key has to leave someone's device and be sent over the wire, and many schemes don't do that. https://en.m.wikipedia.org/wiki/Three-pass_protocol

I’m not a cryptographer but I assume there are other schemes that are at least weakened by the requirement of a third party holding a key, much like the TSA master lock program was broken by statistical analysis of locks that were mastered this way.

But the mathematical impossibility of this aside, there is a very real practical impossibility in trusting an organization as large as the US government to keep such a database secure. There are better ways to help law enforcement than blowing such a large gaping hole in the web.
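For reference, the linked three-pass idea can be sketched with commutative modular exponentiation, where no key ever crosses the wire (a toy prime; the exponents are illustrative values, not a real parameter choice):

```python
# Shamir three-pass sketch: the message crosses the wire three times,
# but neither party ever transmits a key. Toy prime -- not secure.
p = 101                       # public prime; real use needs a large prime

a = 3                         # Alice's secret exponent, coprime to p-1
b = 7                         # Bob's secret exponent, coprime to p-1
a_inv = pow(a, -1, p - 1)     # Alice's unlocking exponent
b_inv = pow(b, -1, p - 1)     # Bob's unlocking exponent

m = 42                        # the message, 1 <= m < p
pass1 = pow(m, a, p)          # Alice -> Bob:   m^a
pass2 = pow(pass1, b, p)      # Bob -> Alice:   m^(ab)
pass3 = pow(pass2, a_inv, p)  # Alice -> Bob:   m^b  (Alice removed her lock)
recovered = pow(pass3, b_inv, p)
print(recovered)              # 42
```

A mandatory key-escrow regime has nothing to escrow here, which is the poster's point: such schemes would have to be outlawed rather than backdoored.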


> Reminds me of the time someone tried to legislate pi=3. There's absolutely no way to give police a back door into encryption without giving criminals the same back door.

Most secure N party communication systems can be made to be secure N+1 party communication systems. If that +1 is the police, then arguably you have in fact given police a back door without giving criminals the same back door.

Criminals who want in would have to do so the same way they would before the back door--compromise one of the parties to the communication--except now there are N+1 parties to try and compromise instead of N so the attack surface is larger. How much this lowers security depends largely on the competence of the +1 party.

The popular model of a back door seems to be some wide open spying interface protected only by running on an undocumented port or something like that, and that all the bad guys have to do is get a copy of one client, reverse engineer it to find the access info, and then they are in.

For some reason, people tend to overlook that a back door is really just another communication channel, and the mechanisms modern cryptography provides for securing communication channels apply to back doors as much as they do to any other channel.
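The N+1-party framing can be sketched as multi-recipient key wrapping: one session key, wrapped separately for each party, with the +1 party as just another recipient. A toy XOR key-wrap stands in for a real cipher here, and all the names and secrets are made up:

```python
import hashlib
from secrets import token_bytes

def xor(data: bytes, key: bytes) -> bytes:
    # One-time-pad-style XOR; a stand-in for a real key-wrapping cipher.
    return bytes(d ^ k for d, k in zip(data, key))

def wrap_key(session_key: bytes, recipient_secret: bytes) -> bytes:
    # Derive a wrapping key from the recipient's secret, then XOR-wrap.
    kek = hashlib.sha256(recipient_secret).digest()[:len(session_key)]
    return xor(session_key, kek)

unwrap_key = wrap_key  # XOR wrapping is its own inverse

session_key = token_bytes(16)
recipients = {"alice": b"alice-secret", "bob": b"bob-secret",
              "police": b"escrow-secret"}   # the +1 party
wrapped = {name: wrap_key(session_key, s) for name, s in recipients.items()}

# Only a party holding one of the recipient secrets recovers the key.
print(all(unwrap_key(w, recipients[n]) == session_key
          for n, w in wrapped.items()))  # True
```

The channel itself isn't weakened; what changes is that there is now one more secret ("escrow-secret" above) whose compromise exposes the key, which is exactly the attack-surface trade-off the comment describes.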


There exist some secure schemes where keys are not exchanged, and those would be effectively outlawed. https://en.m.wikipedia.org/wiki/Three-pass_protocol

Never mind that increasing the number of parties being attacked almost always makes the attacker's job easier, not harder. See the TSA master key debacle for an example of how adding a third-party master weakened the security and allowed statistical analysis to break the lock.


> you have in fact given police a back door without giving criminals the same back door

As long as one corrupt cop exists in the access pool, the premise doesn't hold up.


There's some serious doublethink going on in the head of whoever said that.


We're already in the process of ending our Atlassian contracts, and are currently moving all our data out.


What about your Amazon and Google contracts? (If your company doesn’t use them, then consider this a general, hypothetical question.)


They are used for a very specific set of features to reduce costs. Google is trusted a LOT less than Amazon.


This is an excellent move. Very unfortunate for Atlassian, but if the Australian government is to see sense, it is by ‘sanctions’ like this.


This is short sighted.

1. The law applies to all tech companies who have users in Australia, regardless of where the company is incorporated.

2. Atlassian offers primarily self-hosted products, to which this law does not apply.


How can Australia exert legal authority over a company that has no legal presence in the country and one Australian user?


I'm not informed on the subject, but narrow-mindedly, GDPR and serving EU customers?


Yeah, that has exactly the same problem with respect to jurisdiction.

The only difference is that it's easier to write off Australia than the entire EU.


> 1.

A company based in Iceland might have Australian customers, but if they have no representation in Australia, there's precious little the authorities can do. They are of course free to pursue their inquiries through the country where the company is resident (Iceland), but that country has no obligation to adhere to Aussie laws, and most likely isn't going to.


> 1. The law applies to all tech companies who have users in Australia, regardless of where the company is incorporated.

Yes, but as someone not living in, working in, or traveling to Australia I don't really have to care. I don't own a company that makes crypto, but if I did, my reaction to this would be along the lines of "don't open a Sydney office but otherwise business as usual".


Also you can't employ any Australians, because they are all covert government spies now.


So, is it time to distrust every ssl cert issued by Australian cert provider? Is there a list?


You would also need to distrust any ssl cert a multinational company's australians can access, as the law can compel and gag them to steal your ssl cert. All Australian residents and citizens, including potentially abroad, are now legislated to be untrustworthy when it comes to holding any cryptographic secrets or access to systems of your customers.


In light of what happened to Symantec, WoSign, and StartCom, I would expect smart CAs to pull out of Australia, or at least spin off their Australian operations to a separate entity.


> including potentially abroad

On my reading of the A&A bill there were no extra-territoriality clauses, so I think it only applies within Australia.

However, I am not a lawyer and this is not legal advice.


It also appears to apply to Australian citizens living abroad, potentially making dual-citizens into covert actors.


Do you know which clause gives it extra-territoriality? I didn't see one in my pass over it.


The devil is in the details. 317c, defining "communications provider".

> 5. the person provides a service that facilitates, or is ancillary or incidental to, the provision of an electronic service that has one or more end‑users in Australia

> 6. the person develops, supplies or updates software used, for use, or likely to be used, in connection with:

> (a) a listed carriage service; or

> (b) an electronic service that has one or more end‑users in Australia

> 8. the person manufactures or supplies components for use, or likely to be used, in the manufacture of a facility for use, or likely to be used, in Australia

There's several more.

A communications provider, under the given definitions, is not bound to be on Australian soil, but rather to be interacting with Australia as a nation.

Applying this law to those of different nationality is difficult, and unlikely to succeed, however those of dual-citizenship can be held accountable.

This opinion I have, that the law does apply to those internationally, is one I have seen supported by several law firms I have occasional contact with.

Probably aided by:

> 317F. This Part extends to every external Territory.

> 317ZC.4 Part 4 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:

> (a) every external Territory; and

> (b) acts, omissions, matters and things outside Australia.

> 317ZD (Enforceable Undertakings).

> Part 6 of the Regulatory Powers (Standard Provisions) Act 2014, as it applies in relation to section 317ZB of this Act, extends to:

> (a) every external Territory; and

> (b) acts, omissions, matters and things outside Australia.

There's a few more - but as the Act is stating it is enforceable to both external Territories and acts, omissions, matters and things outside Australia, I do think the most likely reading is that 'acts' can be enforced upon Australians living outside the borders.


"External Territories" here means Christmas Island, the Australian Antarctic Territory, etc. When I say "extra-territoriality" I mean the application of law outside of Australia's borders.

The "acts, omissions, matters and things" appears to give extra-territoriality to subject matter but not to legal personalities (ie companies and people).

The "communications provider" part is very broad, and while in Australia I am definitely covered by it. But the courts will not generally interpret legislation as having extra-territorial effect unless it explicitly says so. Otherwise every Act would need a clause like "ps. the Fisheries Amendments (Rex Hunt Is A Wanker) Act is non-territorial".

My question is not about whether a legal personality (ie, a company) is affected if they have a physical-legal presence in Australia, because of course they are affected. My question is whether someone like me, who is outside Australia's boundaries, can be served a notice while I am out of the country. On my reading it's still a "no".

But I am still not a lawyer.


I'm not sure there are Australian cert providers... at least the government (gov.au) uses common ones such as Entrust and Digicert, and companies like Atlassian use Digicert.


I feel like I must be misreading, because my first-pass interpretation is that companies would terminate all of their Australian citizen employees, and add terms to the remaining contracts saying that employees must notify them if they become an Australian citizen.


As an Australian, I dearly hope all international companies actually do this.

We are a testing ground for this insanity, the time to fight it is now.

Australian citizens did not ask for this. All the "consultation" letters were ignored in favour of creating a police state.


Australian tech companies need to release a program called UltraDecrypt that simply brute-force decrypts any message on their platforms given billions of years and sell it for $10M per license.

Then when law enforcement claims they are not being cooperative, they can say they have a tool that meets their needs if they're patient.


You'd probably be charged with something and punished.

Courts don't take too kindly to people trying to be cute with their demands. It's not like they are going to say "well, they are technically right" and give up; they are going to just up the consequences or clarify the request until you comply in the way that everyone knows they want you to.


This. The biggest weakness to encryption systems isn't the math, it's the guy in the uniform who has tied you to a chair and keeps strapping your feet with a rubber hose.


Thus once again revealing -- to those who believed otherwise -- that the police do not exist to "protect" the citizens.


Did Lavabit get additional court punishment for sending SSL keys in 4 pt font?


They were threatened with further charges unless they complied with them in electronic form, which is when the company was famously shuttered.

Edit: And according to the wikipedia page [1] for Lavabit, he was successfully held in contempt of court for the printout move.

[1] https://en.wikipedia.org/wiki/Lavabit


That is incredible. Props to Lavabit. We need more people like that in the tech community.

We absolutely do not need the "not my problem, I don't care, engage in mass surveillance all you want!" engineers.


Well, don't be too excited about it, because laws like the one this article is about make that kind of response impossible (as in, you WILL be thrown in jail for the rest of your life if you attempt to do that).

And even in the US it still isn't a sure thing that Lavabit's response would work again if someone else tried it. I don't know the details, but I believe there is still some uncertainty around whether the FBI just kind of "allowed" them to close down by not pursuing it any further, or whether they got what they needed, or whether secret laws were changed because of this instance.

In Lavabit's case, there was a lot of FBI involvement, a lot of secret court orders and gag orders, and a lot of accusations from the owner of Lavabit that he was brought to secret courts without legal representation and no chance to appeal, and even he says that there are things he still can't talk about.


I didn't know the nuances of that timeline. Thanks so much for filling me in!


A colleague and I published a related idea [1] last year: Weaken the encryption just enough so that a government can (barely) afford to do the brute force if they really do care about it that much. (Hint: They almost certainly don't.)

Please note that we're not seriously suggesting that encryption providers should adopt this -- not as long as there are other options. But if you're legally obligated to do something, this is the "f*ck off and leave me alone" approach to compliance.

[1] C.V. Wright and M. Varia. Crypto Crumple Zones: Enabling Limited Access without Mass Surveillance. In Proceedings of IEEE European Symposium on Security & Privacy, 2018. https://www.ieee-security.org/TC/EuroSP2018/program.php#euro... http://web.cecs.pdx.edu/~cvwright/papers/crumplezones.pdf


I've often thought a good solution would be zero-knowledge weak encryption with an additional strong encryption layered on top. When the government comes to ask for data you decrypt with the strong key, but then they still have to do the work to break the weak key.

Thinking like an economist, you want to align the incentives to make it possible but not free to access user data. A weak key (per user) that's breakable with $10k compute cost seems about right to me, but the actual optimal cost may be higher or lower.
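A sketch of that "breakable at a bounded cost" layer, assuming a deliberately tiny keyspace (the 20-bit figure and the `derive_tag` helper are illustrative choices, not from the comment or the paper):

```python
import hashlib

# Deliberately weakened layer: the per-user key has only ~20 bits of
# entropy, so recovering it costs real but bounded compute.
KEYSPACE_BITS = 20

def derive_tag(key_int: int) -> bytes:
    # A verifier for a guessed key (stand-in for a trial decryption).
    return hashlib.sha256(key_int.to_bytes(4, "big")).digest()

secret_key = 654321                      # somewhere in the 2^20 space
target = derive_tag(secret_key)

# The "government" brute-forces the weak layer: ~2^20 hashes, per user.
recovered = next(k for k in range(2 ** KEYSPACE_BITS)
                 if derive_tag(k) == target)
print(recovered == secret_key)           # True
```

Scaling the keyspace up tunes the per-account cost, and because the cost is paid per user, it prices out dragnet surveillance while leaving targeted access affordable.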


I am not a cryptologist, not a lawyer, and only a marginally capable software engineer.

But I think we've had the option to send personally encrypted end-to-end messages for a while now. (Open)PGP anyone?

So instead of using Signal, or Whatsapp, or whatever and depending on their client-side-encryption (and possible server-side-decryption) of private messages, how about plain email using standalone user-encryption.

Two things may come of this: Google will stop "interpreting" my email messages, and laws like these stop mattering very much.

Of course, 5th amendment (and its siblings in other countries) still apply...


This is where a political problem needs to be fought back where it hurts the government the most: threaten to get the hell out and let them see their corporate tax income dwindle.


The general point that I disagree with is the premise that there can be no such thing as a private conversation.

Do you want a police state? Because secret surveillance of all citizens is how you end up with a police state.


Would letsencrypt be compelled to issue certs for your domain to the Australian Gman?



