I don't think we can assume it's within 1 hour of being reported. The press release also says:
> In addition to referrals, internet companies should implement proactive measures, including automated detection, to effectively and swiftly remove or disable terrorist content and stop it from reappearing once it has been removed.
To me, that sounds a lot like they're willing to mandate AI detection in order to "solve" the "team is asleep" problem. Their suggestion also reads like yet another rent-seeking opportunity for the big internet platforms:
> To assist smaller platforms, companies should share and optimise appropriate technological tools and put in place working arrangements for better cooperation with the relevant authorities, including Europol.
No doubt, some "helpful" authorities will provide an "appropriate technological tool" to help. No need for the government to break Perfect Forward Secrecy if every site is giving it clear-text access to all uploaded content.
I wonder if part of the problem might be a natural tendency to generalise bad behaviour of a small number onto the whole group?
For example, I might see several instances of individual cyclists running red lights and generalise that to "all cyclists run red lights". Or see several instances of individual motorists "dooring" cyclists and generalise that to "all motorists are dangerously inconsiderate".
It's probably easy to go from that generalisation to an overt dislike of the other group. Therefore, I try to force myself to attribute bad behaviour to the individual rather than any groups they might be a member of.
Part of it is that it's a necessary mindset. When cycling, you have to assume that every driver is trying to kill you and every parked car is waiting to throw a door open and knock you off (and possibly kill you). Even where malicious intent is lacking, distracted driving is such a pervasive problem that you have to adopt a mindset of "everyone else is awful"... because statistically speaking, enough of them really are.
I think there's a big difference between "everyone else is awful", "there are statistically enough awful individuals", and "every driver is trying to kill me". Only the last of those is going to lead me to incorrectly blame a whole group.
I agree that a statistically significant subset of drivers cause problems (and a statistically significant subset of cyclists, too), but the vast majority of individual drivers (and cyclists) are safe and do not deserve to be grouped in with the problem-causers.
They have to be grouped in, as there's no way to differentiate them.
Also, you're proposing a false equivalence. When drivers cause problems, other people (pedestrians, cyclists, other drivers) die. When cyclists cause problems, they're usually the only ones that get hurt. The urgency of the two problems is dramatically different.
I really wish "a bike running a red light" wasn't seen as such a problem. In many places, pedestrians also cross on red lights. In other places, cars can turn right on red. In at least one state (Idaho), bikes can treat red lights as stop signs, and stop signs as yield signs.
It is easy to run a red light safely on a bike (or as a pedestrian), but it often isn't in a car.
In each round, I reckon the optimal strategy for the "cut" will be much the same as partisan gerrymandering is now: densely pack the opposition's supporters into a small number of districts, while slicing the rest up so that one's own party has a small but stable majority in each.
This gives the opposition the chance to choose/freeze either a district they know they'll win, or a district they're pretty sure they won't win. I would choose the one I know my party would win, and redraw to pack all my opposition's voters into as few districts as possible.
If this is repeated a few times, the map ends up with many safe districts for each party and a small number of "left-overs" which will be contested. This means parties will have an incentive to concentrate on the small number of voters in the few contestable districts, at the expense of probably the majority of voters. It doesn't seem like a recipe for every vote counting and voters' voices being listened to.
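To make the packing intuition concrete, here's a toy Python sketch (all the vote shares below are made-up numbers, not real election data): even with statewide support split exactly 50/50, the party drawing the map can take 8 of 10 seats.

```python
# Toy illustration with hypothetical numbers: statewide support is
# exactly 50/50, yet the map-drawing party wins 8 of 10 districts.

def seats_won(district_shares):
    """Count districts where the map-drawing party's vote share exceeds 50%."""
    return sum(1 for share in district_shares if share > 0.5)

# Pack the opposition into 2 districts (we keep only 10% there) and
# hold a small but stable 60% majority in the remaining 8.
packed = [0.10, 0.10] + [0.60] * 8

statewide = sum(packed) / len(packed)
print(f"statewide share: {statewide:.0%}")             # -> 50%
print(f"seats: {seats_won(packed)} of {len(packed)}")  # -> 8 of 10
```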
If we break up monopoly businesses when the lack of choice increases the cost to consumers, should we also break up monopoly parties when the lack of choice increases the "cost" in time and effort that voters must "spend" to get their issues dealt with?
Do you think it might work to tell Facebook that in a year's time they'll be broken up into (for example) 5 separate companies, and their users will be assigned at random to one of those parts but must also be allowed to move between them at will?
That gives Facebook's engineers a year to design and implement a federated API that would work seamlessly. If reasonable and non-discriminatory licensing were required on the API standards, then others could potentially interoperate with it too. Hopefully the mini-Facebooks could start to differentiate themselves through features, price, privacy, etc.
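As a rough sketch of what I mean by "federated" (everything here is hypothetical; these are not real Facebook APIs or an existing protocol), each mini-Facebook would push posts to its peers over a shared standard, something like:

```python
# Hypothetical sketch of a minimal federation model; all names are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str   # globally addressable, e.g. "alice@mini-fb-1.example"
    body: str

class Instance:
    """One hypothetical mini-Facebook speaking a shared, RAND-licensed protocol."""

    def __init__(self, domain: str):
        self.domain = domain
        self.posts = []   # type: List[Post]
        self.peers = []   # type: List[Instance]

    def federate_with(self, other: "Instance") -> None:
        self.peers.append(other)

    def publish(self, post: Post) -> None:
        self.posts.append(post)
        for peer in self.peers:             # push the post to every peer instance
            peer.receive(post, source=self.domain)

    def receive(self, post: Post, source: str) -> None:
        # A real implementation would authenticate `source` and verify signatures.
        self.posts.append(post)

# A user assigned to one instance is still visible from the others:
a, b = Instance("mini-fb-1.example"), Instance("mini-fb-2.example")
a.federate_with(b)
a.publish(Post("alice@mini-fb-1.example", "Hello from instance 1"))
print(len(b.posts))  # -> 1
```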
I'm not completely convinced a federated protocol in and of itself will solve the entire problem. I think we need some blockchain and smart contracts; people need to actually control their own data on these services. For instance, if Facebook created an open-source wallet, your data could sit behind that on your own device.
Needless to say, I think that even if Congress ordered Facebook to break up, it would be cheaper and easier for Zuckerberg to just move everything offshore and thumb his nose at a relatively impotent body. Zuck could also just lobby his ass off and probably end up writing the laws for the congressmen. Unfortunately, business writing the law for itself is pretty common in the USA, and I'm sure Zuck knows that.
Mapzen, a now sadly defunct mapping startup, also had an awesome (if I say so myself - I used to work there, but not on that team) vector tile renderer for browser and mobile. Check it out at https://github.com/tangrams
Any chance Mapzen would open source the code used to make the metro extracts? The formats your team had to offer were so much nicer than working with raw OSM data.
I'm glad you found them useful! The code which made all the metro extracts was embedded as a Chef recipe, although I'm sure you could just extract the bits which do stuff from the Chef wrappers: https://github.com/mapzen/chef-metroextractor
Mapzen, a now sadly defunct mapping startup, released all of its software open-source, including WebGL mobile and browser SDKs, map tile rendering, search and routing. Take a look at:
One method for making a copy or crawl very difficult to tamper with is to publish a hash somewhere difficult to forge (e.g. in a national newspaper, or via OpenTimestamps). That won't prove the copy wasn't manipulated before it was archived, though. For that, we would need multiple, independent archives.
This is effectively what libraries have been doing for many years with their archives of newspapers.
The hashing needs to be done by a trusted third party. Such a service would be cheaper to operate than the Wayback Machine, but would still let you check individual pieces of content for manipulation.
You'd have to incentivize people to run the service and store the hashes.
You could put it in the Bitcoin blockchain. Or, if you don't need that level of complexity and cost, you could put it on Twitter, which doesn't allow editing tweets (but does allow deleting them).
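As a rough sketch of how the fingerprinting side might work (the file path below is a placeholder, not a real archive), computing the hash is the easy part; publishing it somewhere hard to forge is where the trust comes from:

```python
import hashlib

def sha256_file(path):
    """Fingerprint an archived file; any later tampering changes the digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder path for an archived crawl snapshot.
digest = sha256_file("crawl/example.com-2019-04-01.warc")
print(digest)

# Publish `digest` somewhere difficult to forge: a newspaper ad, a tweet,
# or an OpenTimestamps proof anchored in the Bitcoin blockchain. Anyone
# holding a copy of the file can recompute the hash and compare.
```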
> for me the solution was to find social circles beyond work. For me I have my church community and also meetup groups with other developers in a similar field.
I think you are exactly right. Additionally, I think having a variety of social circles beyond work can help broaden one's support network, which can reduce the disruption of changing jobs or being let go. And it's fun to have a wider variety of friends with common interests.