Fun article; the phenomenon is interesting to see in practice. I've seen it regularly with newer instance types, as it can take time for people to add them to their configurations.
We're heavy users of spot here at Intercom. I spot-checked our biggest workload, and this week we could have paid around 10% less if we had been able to get the cheapest suitable spot host in us-east-1 (all 16xlarge Gravitons). However, that would come at the cost of fleet stability. To run relatively large, real-time production services on spot, I think you need to prioritise fleet stability, which means choosing the "capacity-optimized" allocation strategy. We've seen incessant fleet churn when trying out cost-optimised strategies.
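To make the trade-off concrete, here is a hypothetical sketch of the kind of autoscaling group policy this implies. The template name, instance types, and percentages are made-up illustrations; in practice a dict like this would be passed as `MixedInstancesPolicy` to boto3's `create_auto_scaling_group`.

```python
# Illustrative sketch only: an ASG mixed-instances policy that prioritises
# fleet stability via the "capacity-optimized" spot allocation strategy,
# rather than chasing the cheapest pool ("lowest-price").
policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "web-workload",  # assumed template name
            "Version": "$Latest",
        },
        # Offer several Graviton pools so the allocator has capacity to choose from.
        "Overrides": [
            {"InstanceType": "m6g.16xlarge"},
            {"InstanceType": "c6g.16xlarge"},
            {"InstanceType": "r6g.16xlarge"},
        ],
    },
    "InstancesDistribution": {
        # Pick the spot pools with the deepest capacity, not the cheapest ones,
        # to minimise interruption-driven churn.
        "SpotAllocationStrategy": "capacity-optimized",
        "OnDemandPercentageAboveBaseCapacity": 0,  # all spot above the baseline
    },
}
```

The only change needed to chase price instead of stability is swapping the `SpotAllocationStrategy` value, which is what makes it easy to experiment with (and then back out of) cost-optimised strategies.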
> Well I can only guess so much of the underlying egress internet routing of AWS.
> At worst, if no explicit region is specified, it will reach the global AWS endpoint over the internet, which is likely in a completely different part of the world than where you are, redirect to the local endpoint, and back.
"When using public IP addresses, all communication between instances and services hosted in AWS use AWS's private network. Packets that originate from the AWS network with a destination on the AWS network stay on the AWS global network, except traffic to or from AWS China Regions."
In practice there is not much risk in accessing AWS services via public endpoints; you just need to take AWS at their word.
Intercom is pretty similar. We use EC2 hosts and no containers (other than for development/test environments and some niche third-party software that is distributed as Docker containers). Autoscaling groups are our unit of scalability, pretty much one per workload, and we treat the EC2 hosts as immutable cattle. We do a scheduled AMI build every week and replace every host. We use an internally developed software tool to deploy buildpacks to hosts - buildpacks are a pre-Docker technology from Heroku that solves most of the problems containers do.
I wouldn't necessarily recommend building this from scratch today, it was largely put in place around 8 years ago, and there are few compelling reasons for us to switch.
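For anyone unfamiliar with buildpacks, the classic (pre-Cloud-Native-Buildpacks) Heroku contract is just three executables: `bin/detect`, `bin/compile`, and `bin/release`. Here is a toy sketch of that contract modelled as Python functions; the file names and process command are illustrative, not any real buildpack.

```python
# Illustrative sketch only: the classic Heroku buildpack interface, which in
# reality is three shell-executable scripts invoked with directory arguments.
import os
import tempfile
from typing import Optional

def detect(build_dir: str) -> Optional[str]:
    """bin/detect: return a framework name if this buildpack applies, else None."""
    if os.path.exists(os.path.join(build_dir, "requirements.txt")):
        return "Python"
    return None

def compile_slug(build_dir: str, cache_dir: str) -> None:
    """bin/compile: turn the app source into a runnable slug (e.g. install deps)."""
    os.makedirs(cache_dir, exist_ok=True)
    # e.g. run `pip install -r requirements.txt` using cache_dir as the pip cache

def release() -> dict:
    """bin/release: launch metadata such as default process types (YAML in reality)."""
    return {"default_process_types": {"web": "gunicorn app:server"}}

# Demo against a scratch app directory.
app = tempfile.mkdtemp()
open(os.path.join(app, "requirements.txt"), "w").close()
print(detect(app))  # -> Python
```

The appeal is that the contract is small enough to reimplement in-house, which is roughly what a tool that "deploys buildpacks to hosts" needs to do.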
> Under the EU Internet Forum, the Commission has launched an expert process with industry to map and preliminarily assess, by the end of 2020, possible technical solutions to detect and report child sexual abuse in end-to-end encrypted electronic communications, and to address regulatory and operational challenges and opportunities in the fight against these crimes.
It is a spectacular overreaction to equate this to "EU wants to ban encryption". This will never happen.
The bit about Facebook's planned end-to-end encryption ends with:
> One of the specific initiatives under the EU Internet Forum in 2020 is the creation of a technical expert process to map and assess possible solutions which could allow companies to detect and report child sexual abuse in end-to-end encrypted electronic communications, in full respect of fundamental rights and without creating new vulnerabilities criminals could exploit. Technical experts from academia, industry, public authorities and civil society organisations will examine possible solutions focused on the device, the server and the encryption protocol that could ensure the privacy and security of electronic communications and the protection of children from sexual abuse and sexual exploitation.
I read this as "Okay, fine, we can't ban end-to-end encryption and we cannot backdoor it. What can we do?" If that is what they mean, it seems a reasonable enough question to ask.
> possible solutions focused on the device, the server and the encryption protocol
Looks like they're going to find ways to read our messages before they are encrypted and sent. Why would anyone continue to use a communications application that's known to do this?
My guess: Client-side scan for certain keywords to identify grooming and some kind of signature-based identification of known child-porn media. Basically what I assume Messenger does today, but on the local devices instead.
The general public won't care until we're halfway down a slippery slope, and then people will just switch to whatever platform is perceived as more secure/popular at that particular moment in time.
What if hashes of known-bad content are stored locally on the device, and sending content that matches those hashes is not allowed? The user could appeal if they think there's a false positive. This could be used for CP but also for known-bad fake news or inflammatory content. Clearly, the content hash DB needs to be scoped down, what goes in there should be chosen according to democratic principles, and it should stand up to scrutiny in the courts. If done thoughtfully, it seems like a feasible solution.
Changing a hash is incredibly easy: you could just change some metadata and the hash would change. And any perceptual hashing algorithm would naturally lead to false positives.
Also this would likely be quickly commandeered for copyrighted work (honestly pretty surprised it hasn't happened already).
Yes, it would have to be a perceptual hash. False positives will occur, so there needs to be a way to appeal or remediate the algorithmic decision. We already apply this approach in a bunch of places. I believe the major personal cloud storage providers (OneDrive, etc) already do such scanning.
>This can be used for CP but also for known-bad fake news or inflammatory content.
It worries me that anyone thinks it would be a good idea to have "fake news" and "inflammatory content" blocked at the device level. Obviously cloud providers can do whatever they want (though I doubt it catches any more than the lowest hanging fruit, encrypting then uploading would be uncatchable), but the idea that my device will have a list of disapproved content, and I'll have to appeal to the government to be allowed to view it in case of false positives? The day that becomes a reality freedom will truly be dead.
I didn't say it would be at the system level. I'd expect this to happen per app. It's similar to how photo manipulation software can detect currency. I doubt every such app complies, and certainly the system screenshot tool does not.
They are not against it; they say it makes preventing child porn dissemination more difficult, which seems like a rather obvious truth. They say industry and government need to work together and try to see what can be done about it without breaking privacy.
> what can be done about it without breaking privacy
Well, the answer is: nothing. Let's take an analogy:
> Industry and government need to work together to try and see what can be done about me talking to my wife without breaking privacy.
If I want to speak to my wife in private, that's between me and my wife only. If industry and/or government want to have a say in that, they're going to need to control or monitor anything I say to my wife. The very act of desiring to control requires subverting privacy.
Of course, there are fruitful discussions to be had about the extent of privacy itself, the extent of private communication, and the extent of control that might be admissible. But to pretend that there is a perfect solution that doesn't affect privacy is either a foolish or deliberately malicious position to take.
"The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia" - Australian ex-Prime Minister Malcolm Turnbull (while he was Prime Minister).
While the EU is at it, they should do one for free energy.
Get just the right panel of experts together, and hopefully they can handwave all those troublesome laws of physics away as well.
> Last year, Facebook announced plans to implement end-to-end encryption by default in its instant messaging service. In the absence of accompanying measures, it is estimated that this could reduce the number of total reports of child sexual abuse in the EU (and globally) by more than half and as much as two-thirds, since the detection tools as currently used do not work on end-to-end encrypted communications.
Do you imagine that this "expert process" will come up with a way to preserve message privacy while also flagging which messages are illegal? What else could the purpose of it be than to recommend requiring providers to MITM their customers' messages?
(Although I agree that the title should be changed to "ban end-to-end encryption"; certainly the suggestion that the EU would try to ban encryption generally is an exaggeration)
I can think of a very obvious one: identify and refuse to send messages that the client app decides are child porn. No intrusion.
Or perhaps add a counter to the account when it's detected. Minimal intrusion, single flag defining the message.
You don't need to mitm things to implement _some_ mitigations.
Before the inevitable - a method that is not 100% reliable in stopping something is not useless. Otherwise we may as well make it as easy as possible to share child porn because it wouldn't make a difference.
FB also already proposed one where users can report encrypted messages and send an unencrypted log to them from their client device.
Since most existing child abuse imagery is reported by users that see it somehow - this seems like a reasonably pro-privacy way to keep the same amount of reporting.
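Facebook's actual proposal ("message franking") is more elaborate, but the core idea can be sketched with a toy HMAC commitment. This is a simplified illustration under stated assumptions, not their protocol: the sender commits to the plaintext, the platform stores only the commitment, and a recipient who reports the message reveals the opening.

```python
import hashlib
import hmac
import os

def send(plaintext: bytes):
    """Sender side: commit to the plaintext with a fresh franking key.

    The franking key travels inside the end-to-end encrypted payload, so
    only the recipient learns it; the commitment alone reveals nothing
    about the message to the platform.
    """
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return franking_key, commitment

def verify_report(plaintext: bytes, franking_key: bytes, commitment: bytes) -> bool:
    """Platform side: check that a reported plaintext matches what was actually sent."""
    expected = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

key, com = send(b"hello")
print(verify_report(b"hello", key, com))   # True: genuine report verifies
print(verify_report(b"forged", key, com))  # False: fabricated report is rejected
```

The pro-privacy property is that the platform learns nothing unless a participant in the conversation chooses to report, which matches the observation that most existing reports already originate from users who saw the content.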
Banning encryption, completely neutralizing and circumventing the encryption... The effect is the same: the government will be able to read the messages.
On one hand, this type of service merely bridges business intent and unfriendly or un-automatable interfaces. Why should this exist and be so successful? On the other, I guess it's like Segment or Tray.io, gluing together various services to improve business outcomes and taking their slice of the efficiency improvements they make possible. I guess the most commonly integrated services should wake up and see the potential revenue they're losing to their crummy interfaces, and in the meantime UiPath will provide a good path for efficient IT services. Fair play to them for executing so well in this space.
I don't understand why they're using the Somalian TLD when they also own .com
How reliable are these emerging-country TLDs like .ly (Libya), .co (Colombia) or .so (Somalia)? Could you just get shut down overnight?
Any domain is entirely at the mercy of the registrar, and for country domains that's entirely in the country itself. It's amusing to me that some of the most "desirable" country domains are also coincidentally some of the most unstable countries.
This is not entirely true. For example, the .se domain is not controlled by the Swedish government but is administered by The Swedish Internet Foundation, which is a private foundation (source: I work there). It is (now) somewhat regulated by law.
Perhaps, but a private foundation in Sweden is subject to the laws thereof - if Sweden wanted to regain complete control there's not much your foundation could do.
If I understand correctly, the .so TLD is effectively a national resource of Somalia, and as such they have total control over it and could in fact shut it down if they wanted.
> It is an uncommon status that is usually enacted during legal disputes, non-payment, or when your domain is subject to deletion.
Serious question, if this is an extended legal dispute or the domain is actually subject to deletion, that would be hugely damaging to the Notion brand right?
I ask because I can't think of another company that's had a similar issue lately, so I find the domain dispute an interesting issue that we rarely see get to this stage.
I never understood why notion continued to use notion.so when they own notion.com and you don’t have to worry about countries (ccTLD) doing things like this.
By the sounds of it, you need to take drastic action. It sounds like you will not be able to get more runway just by optimising your AWS spend, though you should definitely do some bill optimisation. You will need to optimise your product itself and maybe even get rid of unprofitable customers.
If you are not sure exactly who or what is driving the AWS cost, take a look at Honeycomb to get the ability to dive deep into what is eating up resources.