I get the impression that Medium is pretty low effort, brings in readers, and is fairly popular (mindshare is a thing). I considered it before settling on GitHub Pages (which now handles SSL easily), which I'd recommend, except that managing even a simple blog with git isn't for everyone.
I had this scenario with my Vizio, too (I mentioned it above). I got the new remote, and they had replaced the Chromecast firmware with their own smart TV junk that required accepting a TOS permitting monitoring. I had to ban my own TV from ever using the network again.
I assume Google is reporting on my Chromecast usage, but Vizio pushing updates I never accepted that essentially brick my TV unless I agree to be monitored? That's a step or two too far.
My Vizio replaced its built-in, vanilla Chromecast capability with a conventional "smart TV" interface that I didn't want. And to use that interface at all, they force you to accept a TOS that includes monitoring.
Worse, I couldn’t tell it to stop connecting to my WiFi after all this happened. I had to ban the TV from my LAN and change the WiFi credentials.
I spent a lot of money on that TV, only to have it go rogue on me.
That sounds like a reasonable concern. I generally worry more about alternative options being bought by someone else, or languishing / deprecation.
In either case, the subject of a decent article might be: setting up automated exports from service features like Takeout, to mitigate the risk of personal service account closures and disruptions.
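A minimal sketch of what that article might cover (all paths and names here are made up, and it assumes Takeout's scheduled exports already drop archives into a synced folder): a small script, run from cron, that copies the newest export archive to a dated backup location outside the account's control.

```typescript
// Hypothetical sketch: assumes scheduled Takeout exports land in a synced folder.
// The script copies the newest archive to a dated backup directory so a sudden
// account closure doesn't take the data with it.
import { copyFileSync, mkdirSync, readdirSync, statSync } from "fs";
import { join } from "path";

const exportDir = "/home/me/Drive/Takeout"; // where the scheduled export appears
const backupDir = "/mnt/backup/takeout";    // storage you control directly

// Find the most recently modified archive in the export folder.
const archives = readdirSync(exportDir)
  .filter((name) => name.endsWith(".zip") || name.endsWith(".tgz"))
  .map((name) => ({ name, mtime: statSync(join(exportDir, name)).mtimeMs }))
  .sort((a, b) => b.mtime - a.mtime);

if (archives.length > 0) {
  const newest = archives[0].name;
  const stamp = new Date().toISOString().slice(0, 10); // e.g. "2024-01-31"
  const dest = join(backupDir, stamp);
  mkdirSync(dest, { recursive: true });
  copyFileSync(join(exportDir, newest), join(dest, newest));
  console.log(`backed up ${newest} to ${dest}`);
}
```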
If we’re only talking about operations that are fast, nearly bulletproof, and trivial, then I’m even less sold on the idea.
The article describes asynchronously reverting the component's state if the action fails. IMO that's much worse than briefly showing a progress state for a few ms.
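A minimal sketch of the contrast (hypothetical names, not the article's code): the first version flips the state immediately and undoes it later if the request fails; the second just shows a pending flag for the few milliseconds the call takes and only commits once the server confirms.

```typescript
// Toy state for a quick action like "toggle favorite".
type State = { favorited: boolean; pending: boolean };

// Optimistic update with rollback: flip the state immediately, then undo it
// later if the request fails (the approach the article describes).
async function toggleOptimistic(
  state: State,
  save: (v: boolean) => Promise<void>
): Promise<State> {
  const next = { ...state, favorited: !state.favorited };
  try {
    await save(next.favorited);
    return next;
  } catch {
    return state; // reverted asynchronously, possibly long after the user moved on
  }
}

// Pending state: show progress briefly, commit only on confirmation.
async function toggleWithPending(
  state: State,
  save: (v: boolean) => Promise<void>,
  render: (s: State) => void
): Promise<State> {
  render({ ...state, pending: true });
  try {
    await save(!state.favorited);
    return { favorited: !state.favorited, pending: false };
  } catch {
    return { ...state, pending: false }; // the UI never lied, nothing to roll back
  }
}
```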
It sounds like they're suggesting that, assuming those rights exist in these situations (they claim there's ambiguity there), there are big questions around what qualifies as an acceptable explanation.
Yes, but reading the official guidelines should clarify this ([1], p. 14 ff.):
> The controller should find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm. The information provided should, however, be meaningful to the data subject.
There's also a detailed example in the document that should make it clearer what kind of explanation is required (and what is not). Again, I personally grant that it's not entirely clear what exactly will be required here, but titling an article "Will GDPR Make Machine Learning Illegal?" is just an attempt to garner attention by instilling (unfounded) fear.
>>> The information provided should, however, be meaningful to the data subject.
Although non-binding, this makes the intent very clear. "AI: You have been refused insurance. Me: Why? Insurer: Because our AI reached that conclusion based on these data: X, Y, Z." Looks perfectly fine to me, because with enough explanations like this, we can form an opinion about how the AI is working, which in turn will help rebalance the power between me and the insurer. That seems good and balanced to me (note that this argument doesn't consider the cost of implementing GDPR, just how the interests of the parties are better balanced).
"Articles 13-15 provide rights to 'meaningful information about the logic involved' in automated decisions."
Your scenario doesn't explain the logic. Saying "that's the AI's choice and we're going with it because it's 99.9% accurate" isn't the logic involved in the decision.
You need an interpretable model to ensure that the AI isn't discriminating based on a protected class (race/gender/etc). "You were denied a loan because the AI determined that you're Polish, and we don't like Polish people" is partly what this law wants to prevent.
Forcing models to be explainable ensures that we aren't illegally discriminating, so we need to be able to tell why the AI made its choice, not just what the choice was.
100% agree; that's why I wrote "that conclusion based on these data: X, Y, Z". The important word is "data". I said data because, with AI, the decision process can be quite a black box: the only thing you know is what data you put in. So to me, the input data is part of the answer.
In my job, we make decisions about whether or not to grant people assistance. We could use some kind of AI to give, for example, a "pre-decision". That AI would be trained on our current data, but in the end it would interpret the person's profile. So basically, it'd say "based on the profile of X, we've decided that ...". Now, if nationality, for example, were among the data in the profile, I'm 100% sure we'd (rightfully) have a lawyer at our door.
My point is that just saying "we have data X,Y,Z" for a person doesn't explain the logic. It allows you to check that the input data is correct, but you don't understand the decision from it. What you need is an explanation saying something like "X is too low, and we think that Y in the presence of Z is a significant risk factor."
The explanation is needed because an AI can learn to discriminate against protected classes even if they aren't explicitly part of the dataset. You might not have included race in the inputs, but you did include the applicant's name, and it figures out that people named "Jakub" should be declined for a loan. The AI can't say that it's because they are Polish, but it learned to discriminate against Polish-sounding names because of all the racism in the training data. We could uncover that if the AI were able to explain that it denied the loan mostly because of the name, and that the other pieces Y and Z didn't factor into the decision as heavily. Just saying X, Y, and Z doesn't help us figure out which of those pieces were the important parts for denying the loan.
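A toy illustration of the difference (made-up feature names and weights, not any specific library): for a simple linear scoring model, each feature's contribution is just weight times value, so the system can report which input actually drove the decision instead of only listing the inputs that were used.

```typescript
// Hypothetical linear loan-scoring model. The "nameLooksPolish" weight stands in
// for a proxy feature the model should never have learned from biased training data.
const weights: Record<string, number> = {
  income: 0.4,
  existingDebt: -0.3,
  nameLooksPolish: -2.5,
};

function explain(applicant: Record<string, number>): void {
  // Per-feature contribution = weight * value, sorted by how much it moved the score.
  const contributions = Object.entries(weights)
    .map(([feature, w]) => ({ feature, contribution: w * (applicant[feature] ?? 0) }))
    .sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution));

  const score = contributions.reduce((sum, c) => sum + c.contribution, 0);
  console.log(`score: ${score.toFixed(2)} (declined if below 0)`);
  for (const c of contributions) {
    console.log(`  ${c.feature}: ${c.contribution.toFixed(2)}`);
  }
}

// "We used X, Y, Z" tells you nothing; the breakdown shows the name proxy
// dominating the decision, which is exactly what an auditor would need to see.
explain({ income: 1.2, existingDebt: 0.8, nameLooksPolish: 1 });
```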