Thank you: this conversation was beginning to feel surreal, between the "let's pull the plug on these Frenchies!" and the "how dare a sovereign state want to legislate on the activities of an American company on its own territory"...
I do enjoy the irony of some Americans blaming big data companies when the president of their choice does not get elected, all the while advocating that French news agencies should have less power (e.g. death by asphyxiation from Google News, with no recourse).
I'm pretty sure it's different people in each case. Certainly I don't blame any big tech company for any election. All I want is the Internet as I recall it: free voluntary association. My user agent is powerful enough to ensure this and I'm happy with it.
If I wanted to use only directories and webrings I could do so today. I'm fine with that. If I want to use Google, I have to accept Google's terms. That's okay. If I don't, I can easily just not.
I don't think it'll last, though. The guys who blame tech companies for elections are the same guys saying they need to be regulated. And so eventually we'll have all these highly-regulated services instead of the thing I've enjoyed. But that's the tyranny of the majority, I suppose. I can find a way to work with it.
Or maybe Netflix could say, "no, this is a service we are providing to our users", and then the big "Hollywood exec" would be like, "ok, maybe we will not sit on the millions of dollars we get from our agreement", and that would be it.
Content is king; Netflix, as long as it doesn't have exclusive rights, is just a middleman trying to extract rents. Now that technically competent competitors have popped up (Amazon etc.), Netflix's margin on licensed content is liable to be squeezed.
That's why Netflix borrows so much to spend on creating its own content. It's not viable otherwise.
Not exactly sure how to parse your wording about the Hollywood exec, but I assume you're saying Netflix has the leverage and the Hollywood person has to shut up and take the money?
That doesn't seem to be the case in reality. Netflix streaming doesn't really have a lot of big Hollywood movies.
How much content are you going to sacrifice to preserve a feature? Which draws in more customers, having good content to watch or the ability to do it in a party format?
Ideally you have both, but if forced to choose, you choose whatever makes sense for your business.
You are missing the point, which is that if you can blindly follow the compiler's type hints and still get a non-stupid function even when you're joking around like the author is, then in real-life coding situations the type hints will be all the more relevant and reliable.
By the way, this is literally the most common foldr implementation in functional languages, so I have no idea why you would want to do a code review of it, or talk about performance in such a trivial setting, or pretend that it confuses you so much that you have to refer to a library...
> then in real-life coding situations the type hints will be all the more relevant and reliable
That was exactly my point about the foldr example: in real-world code, you may need a function that perfectly matches foldr's type signature. But following the author's methodology, you'll write out a new implementation of foldr, when what you should actually be doing is calling foldr in that type hole.
This ties back into my comment about code review. Think about what a patch from someone who applied the author's method would look like: it would contain a lot of in-place implementations that they sort of stumbled into, when in fact they should have reused well-known functions. Someone would then have to review that code to see whether some of those implementations are not actually well-known functions.
Even ghc suggested at some point that he should use maxBound or minBound instead of writing his own thing.
Edit: I should also add that there is no reason to think that foldr is the right implementation of zoop without more context. For example, zoop could be a function just like foldr, except that it works on the reversed list of `as`. Or one that skips every second element of `as`. If we're just guessing based on the type, and don't know what the code is supposed to do, it's easy to mess up.
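To make that concrete, here is a minimal sketch (`zoop1`/`zoop2`/`zoop3` and `skip` are hypothetical names of mine, not the author's): all three functions below share foldr's list type, so the type alone cannot tell you which one `zoop` is supposed to be.

    -- All three inhabit (a -> b -> b) -> b -> [a] -> b,
    -- yet only the first behaves like foldr.
    zoop1, zoop2, zoop3 :: (a -> b -> b) -> b -> [a] -> b
    zoop1 f z as = foldr f z as            -- plain foldr
    zoop2 f z as = foldr f z (reverse as)  -- folds over the reversed list
    zoop3 f z as = foldr f z (skip as)     -- skips every second element
      where
        skip (x:_:rest) = x : skip rest
        skip xs         = xs

For instance, `zoop1 (:) [] [1,2,3]` gives `[1,2,3]`, `zoop2 (:) [] [1,2,3]` gives `[3,2,1]`, and `zoop3 (:) [] [1,2,3,4]` gives `[1,3]`: all of them type-check, and only one of them is the foldr you should simply have called.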
You're coming full circle here. He's not the one advocating for "modern evidence-based research and training", so those are not his standards. And I'm not blaming him. Not everything is measurable, or at least not measurable in ways that would enable researchers to extract meaningful answers or techniques. You said it yourself: productivity is a fuzzy end-goal.
If you do not believe it is possible to empirically derive a way to interview properly, then rationalism is your best choice, i.e. Intuition/Deduction. And contrary to common belief, rationalism is not a prison. It's just a decision process, in which you're perfectly allowed to question yourself using available empirical data.
Yes, and it is well known that Tesla's self-driving program reproduces Elon Musk's reckless driving habits... /s
I'm just chiming in to point out that what you're saying is not what the paper you linked implies. Its title is "Semantics derived automatically from language corpora contain human-like biases", and the general conclusion you could draw from it is that AI programs reflect the stereotypes ... of the data they are trained with. This is why they use the right word, stereotype, instead of the charged word you used, prejudice.
I'm turning your argument around. Imagine you are driving your car on the highway, but you suddenly have a heart attack and become unable to remain conscious. Do you: (a) Die? (b) Die?
Manned vehicles are not coming anytime soon. There's a whole slew of problems, etc. -- you get my point.
I think focusing too much on pesky details is very much a fallacy in this case - you do not want an "AI" to react like a human in all situations; you only want it to drive in a way that is conservative enough not to endanger people too much.
And we clearly aren't that far from this goal right now.
"Imagine your are driving your car on the highway, but you suddenly have a heart attack and become unable to remain conscious"
Humans who drive cautiously may go for a million miles without an accident. The best self-driving cars (i.e. Waymo) disengage on average every 11,000 miles.[1] It seems to me that a disengagement is equivalent to becoming unconscious without warning, and presumably we both agree that a given human does not have a heart attack while driving every year.
Humans, even with all the people who drive drunk, text, or fall asleep at the wheel, average about 80 million miles between fatalities. Going 11,000 miles between events of total loss of control is nearly four orders of magnitude worse.
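For a rough sense of that gap, using the figures above:

    80,000,000 miles per fatality / 11,000 miles per disengagement ≈ 7,300 ≈ 10^3.9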
It is an interesting metric, thank you for pointing it out. Until now I had only kept in mind the number of miles between accidents, but surely both should be considered.
However, I think this metric could be irrelevant in the case of a home/work commute. The 11,000-mile average appears to have been obtained basically by randomly driving Waymo cars on Californian roads. But a usual commute is much shorter than 11,000 miles, and if your self-driving car can do it by itself once, then it can probably do it twice. As the article puts it:
"The value of the data is limited, however, as the figures don’t factor in the complexity of environments in which vehicles are tested–dense urban settings, versus low-speed suburbs or less complex highway driving–nor do they show conditions including weather, light or speed."
Nevertheless, you seem to have missed my point: I was arguing that coming up with a specific use-case example that may (or may not, actually) go wrong is an argument that goes both ways.
This does not deserve such a sensationalist depiction. Everyone in the space industry, in astronomy, and even among the general public (now, thanks to that terrible movie Gravity) knows about the Kessler effect.
Some thoughts:
- Putting 12,000 big freezers (900 pounds each at most) in orbit is never going to "crowd" the place; imagine those objects spread over the Earth's surface, and the claim that, on a sphere of even larger radius, they could be seen from any point seems ridiculous (see the back-of-envelope sketch after this list).
(I am not talking about speed or dangerousness, which do change the numbers a bit, or about powerful light-emitting projects.)
- In that sense, the video displayed is annoyingly misleading.
- The danger of the Kessler effect is a long-term one, and as such it seems purely economic to me. Increasing the number of space debris will gradually increase the probability of collisions and raise the costs of the space industry, in an upwards trend that may at some point represent a real financial burden. This is the only question: at which point does it become economically worthwhile to tackle this problem, and are we not underestimating the future costs at this point?
- My take is: not yet, and SpaceX will do fine managing their 12,000 space fridges.
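As a back-of-envelope sketch of the "crowding" point above (assuming, purely for illustration, a single shell at roughly 550 km altitude; the planned Starlink shells actually vary):

    Shell radius:        6,371 km + 550 km ≈ 6,921 km
    Shell surface area:  4 * pi * (6,921 km)^2 ≈ 6.0 x 10^8 km^2
    Area per satellite:  6.0 x 10^8 km^2 / 12,000 ≈ 50,000 km^2

That leaves each satellite, on average, an area roughly the size of Slovakia to itself; the real issues are the relative speeds and the long-term accumulation of debris mentioned above, not visual crowding.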