There are two aspects to the current debate:
- The 'child porn keyword' web search filter mandated on all UK ISPs with no opt-in or opt-out
- The 'opt-out porn block' which will be applied to all internet connections, from which people can opt out in order to receive unfiltered results.
The first part hasn't received as much attention because it's harder to write a punchy article about the malicious nature of a government-supplied, permanent search-filter blacklist, and it isn't as easy to attack as the blocking of legal content such as pornography, but this is where the real danger lies.
Once the government add all their 'illegal search terms' to the blacklist and have the apparatus for such wide-ranging censorship set up, what is to stop them from adding further terms, unchecked and unguided, to filter any "unwanted" material from web searches? If this had existed in the US, for example, when the NSA Verizon/PRISM stories were leaked, how easy would it have been for them to simply add "Edward Snowden" or "The Guardian" or "PRISM" or even "NSA" to the search term blacklist? They could easily justify it on the grounds that the material leaked was classified or damaging to national security.
At this stage a majority of people would, in hindsight, agree that this leak was hugely important and in the public interest, but if the government had blocked these terms, then what?
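To make the point concrete, here is a minimal, purely hypothetical sketch in Python (the terms, names and structure are my own invention, not anything an ISP or government actually runs) of how little work "just add another term" becomes once a keyword blacklist filter is in place:

```python
# Hypothetical illustration only: a keyword blacklist applied to search queries.

BLACKLIST = {"illegal term one", "illegal term two"}  # the initial mandated terms

def is_blocked(query: str) -> bool:
    """Return True if the query contains any blacklisted term."""
    q = query.lower()
    return any(term in q for term in BLACKLIST)

# The worrying part: once the apparatus exists, extending it is a one-liner
# with no technical barrier to scope creep.
BLACKLIST.update({"edward snowden", "prism"})

print(is_blocked("latest PRISM coverage"))  # True once the term has been added
```

The technical cost of adding a term is effectively zero; the only safeguard is whatever oversight sits around the list, which is exactly the point being argued above.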
Why would installing them only on new devices make any sense? It doesn't achieve the stated goal of "protecting the children" if the children are using any one of the millions of devices already in use.
More to the point, why default to blocked rather than unblocked and let people opt in?
Ah but then we'd be back in the rational and current position of giving people the ability to install blocking software or filters if they so desire...
I'm extremely against censorship; however, I can see the benefit of having a default filter that can be turned off in private. Conservatively safe defaults are never a bad idea.
Sloppy effort on the part of the developers. Andreas (the author of the blog post) was right to deny it permission to use his Twitter account the first few times, but gave in eventually because of the nagging.
If there were a way to see the expanded permissions before allowing a program to update, perhaps he wouldn't have updated at all?
So why do you think we shouldn't talk about this? Don't you think that what you've just described, being more prevalent in the US, warrants the discussion? Don't you think it's irrelevant and destructive to play the quote at the end of your article telling us just to get back to work?
Tweets in a timeline on the Twitter web client all have the retweet/quote/reply/favourite actions per tweet, so I would expect that they mean each and every tweet displayed anywhere.
I like www.mind42.com - a decent web-based mind-mapping tool that lets you share the mind map with other people. It's also free, which is a plus. I like mind maps for any kind of stream-of-consciousness note-taking and idea documenting.
That's what I recalled from when I read through it before signing (evidently not in close enough detail...)
However, the phrasing is: "any invention wholly or partially made by the Employee at any time during the course of his employment with the Company (whether or not during working hours or using Company premises or resources, and whether or not recorded in material form)."
"during the course of his employment" restricts it to works made as part of your work duties. Obviously this can fall into vague territories though if your day job is in anyway related to your side projects, so I'd still recommend you get a specific exclusion for any side projects.
Yes, hindsight is a wonderful thing. I signed this 3.5 years ago, fresh out of university, and didn't pay enough attention to the small print; I realise that now. I'll contact someone I know in the legal profession for a more qualified opinion.
As to it being normal, most of my friends who went to work for medium-to-large enterprises say they've got the same type of clause in their contracts. From what I've read, employers are typically advised to make their phrasing as wide-ranging as is reasonable.