It seems like Uber put in this "reaction delay" to prevent the cars from driving/maneuvering erratically (think excessive braking and avoidance turning). This, along with allowing the cars to drive on public roads at all before handling obvious concerns like pedestrians outside of crosswalks, is supposed to be balanced out by having a human ready to intervene and handle these situations.
I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle. Yes, this particular driver was actively negligent, apparently watching videos when they should have been monitoring the road. But even a more earnest driver could easily space out after long hours of uneventful driving with no "events" or inputs, and that delay in taking over could be enough to cause the worst.
Certainly not defending the safety driver here - or Uber. But I think there's a bit of a paradox in that the better an AV system performs, and the more the human driver trusts it, the easier it is for that human to mentally disengage. Even if only subconsciously. This seems like a difficult problem to overcome, especially if AV development is counting on tracking driver interventions to further train the models for new, unexpected, or fringe driving situations.
We will never know whether an average attentive human would have correctly parsed this situation, or would also have hit the unexpected pedestrian in the middle of the street at night. But it's worth remembering that making broad assessments of self-driving technology from this one accident is reasoning from a single data point.
One advantage the self-driving cars have over a human driver is that NTSB and Uber can yank the memory and replay the logs to see what went wrong, correct the problem, and push the correction to the next generation of vehicles. That's not a trick you can pull off with our current fleet of human drivers, unfortunately(1).
(1) This is not a universal problem with human operators, per se... The airline industry has a strong culture of studying air accidents and learning from them as a responsibility of individual pilots. We don't have a similar process for individual drivers, and there are far, far more car crashes than air crashes, so reviewing 100% of accidents would be an impractical time commitment.
>> I think one of the biggest lessons here is about the difficulty of relying on humans to maintain attention while covering an autonomous vehicle.
Why not run these systems in shadow mode to collect data, rather than active? Have the human completely in control and compare system's proposed response to human's. At my last job running a new algorithm in shadow mode against the current one was a common way to approach (financially) risky changes.
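The shadow-mode idea can be sketched in a few lines. This is a minimal illustration, not any real AV or trading system's API; `incumbent`, `candidate`, and the event stream are all hypothetical placeholders:

```python
# Minimal shadow-mode sketch: the incumbent's decision is the one acted on;
# the candidate's decision is computed, compared, and logged, but never executed.
# All names here are hypothetical, for illustration only.

def run_with_shadow(events, incumbent, candidate, log):
    """Act on the incumbent's output; record wherever the candidate disagrees."""
    disagreements = 0
    for event in events:
        action = incumbent(event)          # this is what actually happens
        shadow_action = candidate(event)   # evaluated in shadow only
        if shadow_action != action:
            disagreements += 1
            log.append((event, action, shadow_action))
    return disagreements
```

The logged disagreements are exactly the "interesting" cases: places where the new system would have behaved differently, which can be reviewed offline without the candidate ever having had control.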
It must feel odd and incredibly difficult to control an exoskeleton (or any motion restoring device) without any proprioceptive or tactile feedback. I imagine it's like having someone else move your limbs, or for a paraplegic, like using your arms to move your legs.
If we find a way to trigger those sensations, perhaps with separate brain implants, it would be a huge breakthrough and make learning to control the device much faster.
It's not so different from an electric wheelchair; the major difference is that your eye-line isn't at crotch height. Plus, there's no reason they can't add some type of haptic feedback. It doesn't even need to be natural or realistic, as the body/brain will adjust.
I'm a disabled person myself (I prefer to use crutches as the world isn't built for wheelchairs, but that's another topic). I've been injured since 2012, and my crutches now feel like an extension of myself (a little like having really long arms; I use them to push buttons/switches and grab stuff). Trying to walk without them now feels VERY alien (and I often fall over if I put too much weight on my injured leg). With crutches, I walk faster than most people (they have suspension!) and can climb stairs more easily than somebody who is obese, even on days when I can't put my foot on the ground, let alone bear weight. The biggest limitation is the lack of free hands to carry stuff.
The point I was making is that while aids/devices may seem primitive and a poor substitute for what they're trying to replace, the difference they can make to an individual can be huge!
Repairing nerve damage is one of the last frontiers of modern medicine, but it is advancing, slowly...
I love Hugh Herr. I first encountered him in the 90s when he was doing custom prosthetics for rock climbing and then I lost track of his work. Next thing I know he's doing stuff like the talk you linked to. He's truly inspirational, and humbling in what he achieves.
I remember when playing the original PS3, my brain was imagining vibrations in the controller when the screen rumbled (back when the original controllers had no haptic feedback)
Obviously it would be better to provide more feedback but I am bullish on humans being able to adapt and the brain finding means of "faking" feedback or finding second order proxies
> Obviously it would be better to provide more feedback but I am bullish on humans being able to adapt and the brain finding means of "faking" feedback or finding second order proxies
I couldn't see that happening in the same way as your anecdote. We use touch sensation for more than just an accompaniment to visual cues:
* It's used as pressure feedback, e.g. knowing how tightly to grip an object so that we don't crush it nor let it slip from our fingers
* It's used to identify dangers, like sharp objects or extreme temperatures
* Plus we use it an awful lot for feedback on stuff we're not even looking at (e.g. touch typing, using in-car controls while driving, getting in and out of bed when tired, blowing our nose, etc.)
I'm sure some of that last point could be resolved if we learned to rely on muscle memory after the loss of tactile feedback, but the former two points would be harder to work around without said feedback.
Also, if you can excuse the nitpick (fellow retro gamer here) but...
> I remember when playing the original PS3...(back when the original controllers had no haptic feedback)
I assume you mean "original PlayStation"? The PS3's DualShock 3 controller definitely had rumble built in.
If you still have touch sense on a patch of skin, maybe not. An actuator matrix on that skin should work, taking advantage of neuroplasticity. (There was a similar device that "projected" a camera image onto the tongue with electrical stimulation, if I remember right.)
I suspect the main reason this type of device tends to be more symmetrical in thickness is to carry an additional battery, needed to power the second display without shortening the total battery life of the device.
The article makes repeated mention of the lack of persistence (rebooting the phone removes the exploit), suggesting it poses little security threat.
However, most people reboot their phone very rarely: the occasional software update a couple times a year; if the battery runs out (which people usually go to pains to avoid); or for some people, to try to fix a misbehaving phone.
The exploit does require physical access to the phone for a few minutes. But in situations where that can happen, and the owner doesn't have the suspicion or knowledge to reboot, I think an exploit could easily run for one or several months.
Paired with enough clever software modifications made possible by the jailbreak (like a lock screen that collects passcode input), a malicious instance of this could do plenty of damage.
I think more practical concerns are cases of forced seizure by the government. The easier it is to access private data against someone’s will, the more often it will happen.
If your device tells you that you are required to enter your passcode (instead of having biometric authentication available) at a time when you have not just rebooted the device yourself, that would be your clue that something unusual is going on.
At which time you simply need to reboot the device yourself to clear anything made possible by this particular boot ROM bug.
Check out the podcast "Business Wars" by Wondery. They have an 8-episode miniseries on Netflix vs. Blockbuster that I really enjoyed and highly recommend. It includes this story, as well as some of the other competitive developments that, as crazy as it sounds now, at the time almost looked like they would bring a victory to Blockbuster.
Same here. I use a 12" MacBook "retina" (not air), which is an ultraportable with a weak CPU. Firefox is borderline unusable, even with uBlock Origin, unless you stick to only 1 or 2 tabs. High def video in YouTube really struggles, Google Maps struggles, and other Google properties like Flights have a habit of freezing up.
Plus the non-native feel you mentioned. No pinch-to-zoom, moving tabs between windows feels clunky, and scrolling feels different than almost every other app.
I try Firefox for Android from time to time, and I always hit enough polish issues that I stop using it. Tab management is extremely clunky, and the address bar is inconsistent with virtually every other Android text input (e.g. there's a big X that I'd expect to clear the address bar, but in Firefox it kicks me out of input mode). Even getting the toolbars to reappear by scrolling is troublesome - infuriating.
It sucks, because I care about privacy and web diversity, and have been a Firefox user since back in the Phoenix days.
Regardless of one's opinion of Apple products - Firefox does have big problems with macOS, with quite a few issues in their bugtracker that are being worked on.
I've always thought that the main reason for Starbucks to push their card and reward program is to save on per-transaction credit card fees.
By my understanding, in the US, typical merchant fees to accept a card are a flat $0.20-$0.30 transaction fee, plus 2-3% of the total dollar amount.
The article mentions the interchange fee (the 2-3%), but for small purchases the transaction fee is more significant.
I assume the majority of purchases are individuals buying a single beverage. If an average drink costs $4, a $0.30 transaction fee alone is eating 7.5% of your gross revenue. That's an absolutely huge amount.
Even if Starbucks cards are bought/topped up with a credit card, that $0.30 fee is being amortized over $20+ worth of product instead of $4 worth.
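That amortization can be sketched with rough numbers. The flat fee and percentage below are the illustrative rates from this thread, not Starbucks' actual contract terms:

```python
# Back-of-envelope fee math (illustrative rates, not any merchant's real terms).
def card_fee(amount, flat=0.30, pct=0.025):
    """Total card fee on a single transaction: flat fee plus percentage."""
    return flat + amount * pct

drink = 4.00
topup = 20.00
drinks_per_topup = topup / drink  # a $20 top-up covers five $4 drinks

# Charging each drink to a card: $0.40 fee on $4.00 -> 10% of revenue.
fee_per_drink_direct = card_fee(drink)

# One $20 top-up: $0.80 fee spread over five drinks -> $0.16 each, 4% of revenue.
fee_per_drink_topup = card_fee(topup) / drinks_per_topup
```

The flat fee dominates at small ticket sizes, which is why moving the card swipe from the $4 drink to the $20 top-up cuts the effective fee rate by more than half in this sketch.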
I also figured this tied in with Starbucks' contract with Square some years back - as one of Square's value props is eliminating the per-transaction fee for credit cards. Not sure why that was cancelled, though.
> By my understanding, in the US, typical merchant fees to accept a card are a flat $0.20-$0.30 transaction fee, plus 2-3% of the total dollar amount.
That's correct, generally. But those can always be negotiated. And Starbucks probably has the volume to have some negotiating power. I'd be surprised if they didn't have a lower rate.
This is a great example of a HUGE competitive advantage Starbucks has over regular merchants that's very unfair. Smaller merchants don't get the scale advantages here...
Although, having tried Starbucks coffee, I have to say it's terrible!
Maybe it's the way it's made with an automatic machine and a disinterested barista, or perhaps the beans are over-roasted and mass produced in a factory. Just about any small coffee shop makes better espresso based coffee than them, especially in Sydney, Australia, where Starbucks continues to struggle to gain a foothold.
To be honest, some of those smaller merchants can also make better coffee than Starbucks, and so they can charge slightly higher prices for said better coffee without their customers complaining.
No one goes to Starbucks for the high quality of their coffee. They go for convenience, maybe atmosphere (a safe comfortable place to hang for awhile), and such. But if you want great coffee, that is a trade-off that doesn't favor Starbucks (like a burger connoisseur at McDonald's).
>maybe atmosphere (a safe comfortable place to hang for awhile),
So I mean, coffee shops in general, yes. But at least in urban areas, Starbucks seems really well-designed to maximize throughput and profit; they usually have fewer seats per cup of coffee sold than the smaller coffee places, and generally seem set up for 'to go'.
It's one of those things where if I look at it as an investor, I really like it, but if I look at it as a consumer, the opposite.
What urban areas? Seattle? No, the downtown stores have seats, and there are very few to-go stores; heck, the one near the library is a pure plush-seat forest (the one in Paris near the grand opera is like that too). You can't operate to-go stores at all in China, where your sales volume is strictly limited by the amount of seating you have (unlike in the USA). Tokyo has the best seating I've ever seen in a Starbucks; maybe NYC?
Mm. In Santa Clara, Redwood City, San Francisco, and Los Angeles, Starbucks usually have few places to sit (compared to local coffee shops), are usually way busier (compared to local coffee shops), and usually don't offer mugs; they just have to-go cups (or bring your own mug), while most of the nicer local places give you a mug by default if you say it's 'for here'.
I mean, sure, they aren't 'to go only' - but they are impractical places to meet or hang out compared to most smaller coffee shops, just because they seem to have a much worse (or, I guess, as an investor, better) ratio of customers to seats; a lot of the time there are more people in line for coffee than there are tables.
I'm not saying they don't have seating at all, just that they seem to be optimized in a thoughtful way to sell as much coffee as possible, making them superior (if you are an investor) but inferior if you want a place to meet up with other people or hang out, when compared to the locally run and less-optimized coffee shops.
> we are seeing early indications in research of some amazing additional benefits – improved milk production, increased immunity, improved food conversion ratio (meaning you can feed cattle less and have them pack on more protein)
These "additional benefits" are all things that would increase profit - reduced loss of output from sickness; increased production; reduced feed consumption.
If the seaweed is cheap enough, it could be worth it for farmers to use without additional incentives. Though, they would also get green cred "for free."