> but then the challenge is reconciling disagreements with calibrated, and probabilistic fusion
I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule-based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver.
If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance.
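To make the "just another input" point concrete, here is a minimal sketch in plain numpy. All names (the feature sizes, the single dense layer) are hypothetical stand-ins: in a real system each sensor would feed a learned encoder, but the fusion step really can be as simple as concatenation, with training deciding how much weight each sensor's dimensions end up getting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sensor feature vectors (in practice, outputs of per-sensor encoders).
camera_feat = rng.standard_normal(128)  # camera embedding
lidar_feat = rng.standard_normal(64)    # LIDAR embedding
adrenaline = np.array([0.3])            # scalar driver-state reading

# Early fusion: another sensor is literally just more input dimensions.
fused = np.concatenate([camera_feat, lidar_feat, adrenaline])  # shape (193,)

# One dense layer standing in for the rest of the network; gradient descent
# over enough training data assigns each input dimension its weight, and a
# sensor that contributes nothing tends toward near-zero weights.
W = rng.standard_normal((10, fused.size)) * 0.01
logits = W @ fused
print(logits.shape)  # (10,)
```

Inspecting the learned weight magnitudes per sensor block is also one crude way to check whether a sensor "contributes meaningfully", as the comment suggests.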
I suspect high risk extreme sports are in fact worse, as they seem to require a constant ramp-up of the risk, and there does not seem to be any detox mechanism (other than old age) that would allow one to reset the required risk.
There is an appalling number of people in these sports who die young, e.g. Ueli Steck (https://en.wikipedia.org/wiki/Ueli_Steck), recently Felix Baumgartner (https://en.wikipedia.org/wiki/Felix_Baumgartner). Many of them had already seen friends and family die in these sports (as had Natalia Nagovitsyna), so it's not like they were not aware of the dangers.
I agree, some of the most self-centered, callous people I've met came out of the New Age / Anthroposophy scene. A family member who is anything but right-wing sued her siblings to get her way, introduced or fueled massive feuds among other family members, and is now complaining that she does not understand how people could dislike her.
On the other hand, as the career arc of RFK, Jr. shows, horseshoe theory is real.
Empirically, every Apple product you're using today was designed without Woz' involvement, and nearly every one of them still shows traces of Jobs' involvement.
Conversely, Woz started numerous companies after parting ways with Jobs, and I can't think of a single one that had a lasting impact.
It's not really a level playing field to compare Jobs running an established company with a devoted fan base, to Woz starting companies from nothing. One is much easier than the other.
When Jobs was fired by Apple, he started NeXT (the platform on which the web was developed) and Pixar. The Apple desktop platform, one of the existing products referenced, still has a lot of heritage from NeXT. I think Jobs was an asshole too, but he did start outside companies that did well and still have a major lasting contribution today.
so Jupiter is 317.8 M⊕, this thing is around 80-150, but ... Saturn is right there at ~95 ... so unlikely to have a solid surface, but likely has a rocky core, and wild winds at this temperature. (Saturn's average temp is -178C, -138C at the "surface", and this candidate seems to be at -48C.)
It seems that all of this is based on just 2 data points; they only provide some examples that are consistent with those, and the models are also very low-confidence (as we don't have a lot of data about cold, small orbiting bodies, since they are hard to detect).
Offtopic, but such an interesting civilization where the keepers of knowledge seem to relate to this statement so much, innit?
Very Zen or is it just the overwork? Maybe it's a thing installed in our childhoods so that we would not struggle for power. (I certainly remember acquiring this manner of speaking based on fundamental self-deprecation around 5th grade, some other kids not acquiring it, and then 10y later we'd have mutually incomprehensible life scenarios.)
While kinds of dark humor other than "the falsity and futility of my own existence, amirite?" don't quite resonate with people as much, for whatever reason.
I propose main character syndrome as explanation. Reading too many blogs, thinking we are one of the cognoscenti, projecting ourselves a bit too close to the big polymath plasma screen in the sky, and eventually just ending up as ash in the divertor at the bottom of the big social tokamak. We think we know better, because we likely do, but what good does that do us?
I sort of understand the motivation to get the exact flavor of the cocktail recipe (especially with an ingredient like Kina Lillet), but for the most part, I find ingredients specified by brand name somewhat annoying, especially for home bars.
It rather depends how specific the flavor of the ingredient is, though. Not everything has generic versions. "Maker's Mark" instead of "bourbon" probably doesn't add anything to a recipe -- exploring other bourbons is likely to give good results, although it may be valuable if the bourbon specified has particularly unusual characteristics that the drink is balanced around. And even Cointreau instead of triple sec is silly (in my opinion), although Grand Marnier is different enough to call out (and to expect some work needed when subbing). But for something like Green Chartreuse... it's a brand name, sure, but it's also what it is; there's nothing else like it, nothing else that you can use to build the same drink. And Kina Lillet is in that category. There are plenty of "brand name" drinks that were never popular enough to generate near-exact clones, and have gone to the giant Long Island Iced Tea in the sky.