What do you believe the frame rate and resolution of Tesla cameras are? If a human can tell the difference between two virtual reality displays, one with a frame rate of 36 Hz and a per-eye resolution of 1448x1876, and another display with numerically greater values, then the cameras that Tesla uses for self-driving are inferior to human eyes. The human eye typically has a resolution of 5 to 15 megapixels in the fovea, and the current highest-definition automotive cameras that Tesla uses just about clear 5 megapixels across the entire field of view. By your criterion, the cameras that Tesla uses today are never high definition. I can physically saccade my eyes by a millimeter here or there and see something that their cameras would never be able to resolve.
I can't figure out your position, then. You were saying that human eyes suck and are inferior to sensors because human eyes require interpretation by a human brain. You're also saying that if self-driving isn't possible with only camera sensors, then no amount of extra sensors will make up for the deficiency.
This came from a side conversation with other parties: one noted that driving is possible with only human eyes, another person said that human eyes are superior to cameras, and you disagreed. Then, when you were told that the only company approaching self-driving with cameras alone has cameras with worse visual resolution and worse temporal resolution than human eyes, you said you respect the grind because the cameras require processing by a computer.
If I understand correctly, you believe:
1. Driving should be possible with vision alone, because human eyes can do it, and human eyes are inferior to camera sensors and require post-processing, so obviously with superior sensors it must be possible.
2. Even if one knows that current automotive camera sensors are not actually superior to human eyes and also require post-processing, that just means camera-only approaches are the only way forward, and you "respect the grind" of a single company trying to make it work.
Is that correct? Okay, maybe that's understandable, but it makes me confused because 1 and 2 contradict each other. Help me out here.
My position is: sensors aren't the blocker, AI is the blocker.
Tesla put together a sensor suite that's amenable to AI techniques and gives them good enough performance. Then they moved on to getting better FSD hardware and rolling out newer versions of AI models.
Tesla gets it. They located the hard problem and put themselves on the hard problem. LIDAR wankers don't get it. They point at the easy problem and say "THIS IS WHY TESLA IS BAD, SEE?"
Outperforming humans in the sensing department hasn't been "hard" for over a decade now. You can play with sensors all day long and watch real-world driving performance vary within measurement error. Because "sensors" was never where the issue was.
It did always strike me as funny that Cronenberg made a movie about "what if TV was evil and made people murderous and the studio execs had to pay," and a movie about "what if video games were evil and made people murderous and their creators had to pay," but never a movie about "what if movies were evil and made people murderous and film directors had to pay." Obvious bias aside, I wonder if it would work as a story - movies don't seem as hypnotic in the public consciousness, I think.
The thing about a sledgehammer is that if you're asleep in your house, you, your dog, your SO, or your neighbors might be startled awake by the sound of metal splitting and cracking open. Your security system might be designed to alert on something like a window being smashed. The person attempting to enter the house may be trying to enter undetected, because they know that a broken lock and/or a replaced lock will alert the people they're trying to ambush or steal from. Imagine something like industrial espionage, where a person breaks in undetected, steals an item, and then leaves. The occupant only realizes the item is gone a week later, and wonders if they could've misplaced it. In your scenario, they'd see the sledgehammered lock and immediately call the cops.
I see comments like these all the time on Reddit and Hacker News. Hackers are like, "locks aren't security, a sledgehammer breaks them," and it appears to betray a mental threat model of "what if the cops want my thing" and never "what if someone wishes to do me harm while I am in my house" or "what if a criminal wants to not get caught taking my things" or "what if someone wants to lie in wait in my house," which are not risks to these commenters. They are to a lot of people, though.
People don't buy locks so that they can lose their keys and require the lock to be picked. They buy locks to secure access to items or places. The parent I was replying to is saying that locks aren't security because a sledgehammer breaks them. I argue that a sledgehammer is only important for certain threat models. I am quite aware that most lock picking is for lost keys. However, I am describing threat models for which locks are important security. Do you understand?
The parent you were replying to mentioned at least three things:
- lockpicking hobbyist
- snap gun
- sledgehammer
And you simplified their comment to "locks aren't security because a sledgehammer breaks them," then proceeded to describe, in detail, threat models where a sledgehammer doesn't work. It's not a very constructive discussion.
Even without the sledgehammer, your locks probably aren't good enough to stop a thief with a set of picks. A robot that brute-forces the lock is more expensive and slower than any of the existing tools, so it shouldn't change your threat model.
Locks and keys are usually more an inconvenience to prevent casual abuse of your boundaries. People who want access, nefarious or otherwise, will gain access, whether it's cops, ninja assassins, or junkies looking to strip your house of copper.
Ninja assassins are low on the list of possible threats, but never zero.
The biggest risk to me personally is the junkies and porch pirates, so signs and very visible, out-of-reach cameras have gone up to make them uncomfortable and feel too paranoid to mess with the locks.
They keep honest people honest and give a few moments' more work to those that are dishonest. It's a promise to society that you'll act decent. Needless to say, they mean nothing to those that break promises.
In almost all cases, with a lock or not, by the time you figure out the lock is broken (10 minutes or 10 days later) your shit is long gone, and you'd better have your security onion set up with multiple layers if you want the foggiest idea of what happened.
If you have an above-average risk of having your shit stolen or coming under attack, you'd better have a whole shitload more layers in your defense or you're screwed.
Locks raise the cost of bad behavior, which makes it less likely. They can still be quite meaningful to someone who breaks those promises, if that person doesn't have the tools or time to defeat the lock, or is just plain lazy.
I live in a pretty low-crime area. From time to time, residents complain about things being stolen from their cars. Every single time that I've seen, the cars have been unlocked. A thief certainly could smash a window to steal from a locked car, but the thieves around here seem to be opportunistic and won't go that far.
And a larger lockpicking tool does pretty much nothing in the case you listed, because that isn't opportunistic. Those are pretty much the open-up-and-steal-when-they-see-an-unlocked-car kind of people.
It also does nothing for the type of criminals that work in groups and steal the tires off 50 cars at once, or whatever soup du jour of automobile parts they want at that moment.
My point is, locks do more than just keep honest people honest, and they are meaningful to some people who are up to no good.
I wasn't addressing picks at all. My opinion there is that it's the lock maker's and the lock owner's responsibility to resist picking, and the rest of us have no obligation to keep picking difficult by not making tools.
It's a lot like turn signals - social communication that goes beyond the practical benefit. If you're using your turn signals, you're saying "I'm aware of the environment and a good participant in the game we're playing because I'm following the rules". If you don't use signals, you're telling people that you're not following the rules, and that makes you suspect in all the other social games. Kinda funny to do some people watching with that perspective, and to start to see how many assumptions are based on society being high trust - the exploitable vulnerabilities are endless, and people communicate a lot about themselves in the rules they choose to follow or break.
100%, especially while driving, as you say. When teaching my daughter to drive, I tell her to watch for other people breaking the law or otherwise driving badly and to distance herself from them. The probability of them doing something else stupid in the next few minutes while you're in their vicinity approaches unity, and keeping your distance reduces your chances of being what they hit.
Is it completely insane and incoherent to imagine a situation where ice cream has two equilibrium prices, one higher and one lower, and the market just settles on the higher one? Like, imagine a case where Jeni's would start losing money on every pint if they reduced the price by a dollar, but they'd make the same amount of money overall if they reduced it by three dollars. But they're in a local optimum, the "reduced by three dollars" price is identical for revenue purposes, and they choose their current local optimum. Then ice cream could still be priced too high and be "appropriately priced." Is this impossible?
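To make the arithmetic concrete with made-up numbers: if demand were Q(p) = 10 - p (pints sold at price p), then revenue R(p) = p(10 - p) comes out to 24 at both p = 4 and p = 6, so a seller who only cares about revenue is indifferent between the two prices and could just as well sit at the higher one. Whether that counts as two genuine market equilibria, rather than one seller choosing between two points on the same demand curve, is the part I'm less sure about.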
> Is it completely insane and incoherent to imagine a situation where ice cream has two equilibrium prices, one higher and one lower, and the market just settles on the higher one?
Is it completely insane? No. But draw a set of supply and demand curves that supports it, and then try to come up with a narrative that explains them. In the static, same-time, all-other-things-being-equal case, it is hard to do.
At what point does a demand for evidence come back around to making the requestor seem less like a prudent, rational truth-seeker and more like someone with a naive lack of personal, lived experience? Like, not a single soul will say "got evidence for that assertion?" when it's a news story about EA or Oracle or Adobe acquiring a company and people are predicting that the acquired product will be destroyed; isolated demands for rigor will be laughed out of the comment section. Why is that - and when does it flip over to "oh, so I guess it's okay to just nakedly assert that food companies will seek profit by reformulating their recipes, even though there isn't a shred of evidence to support that, therefore we're now allowed to predict anything!"
The complement of the claim is essentially "food manufacturers will never again attempt to modify their recipes to make them more hyperpalatable, now that GLP-1 exists." Does that need evidence? It's the null hypothesis, but it certainly sounds a lot more unrealistic than the opposite.
Destroying a product is a well understood process, and we've witnessed many big companies do it. That's evidence!
Designing a food to be more appealing is also a relatively well understood process that is already carried out, but Ozempic seems to blunt the effectiveness of it.
Food companies will surely try to make food that is appealing for Ozempic users, and will do so if they can. But it is a massive assumption that they will be able to, given that they're already doing as much as possible to make food appealing to people.
So there is significant uncertainty that the food companies can do what the parent suggested they would do.
It needs evidence that there's a general phenomenon of "hyperpalatability" that food companies can search for, not just a latent property of how certain macronutrients balance in a food. Otherwise, it's like proposing that public transit is pointless because car companies will somehow defeat it by making up more reasons to drive.
But that's what happened. I mean, it doesn't mean that proposing public transit is pointless, but if someone in 1930 heard about a trolley track being run in town and another person said "it's only a matter of time before the car companies try to sabotage mass transit", they would've been right. That's what actually happened.
Okay, people say this. Could you please explain, and this is not a rhetorical device, it's a sincere question: how do you keep the browser updated without updating the operating system? Or if you are updating the OS, doesn't that change the user interface? And if the user interface is changing, doesn't that confuse your grandmother? I installed Ubuntu for my mom, and after four years Firefox was out of date, and the banking website she used had checks where logging in was only possible if the user agent was recent enough. One can fake that, but I didn't want to. But updating Firefox meant updating Ubuntu, which meant that every single icon and every single menu position changed, and I didn't want to have to teach her where everything was again. How do you avoid this?
I haven't dealt with this for her in a few years, but basically:
Pin all their apps in favorites and they will persist through updates. Updates don't overwrite desktop shortcuts either (although, like on other OSes, a couple might be added that need to be removed). This might be more difficult in GNOME; I wouldn't know, since I am firmly in the KDE camp.
To stay as up to date as possible, use the Mozilla apt repo:
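Roughly, from memory (double-check Mozilla's current instructions for the exact key and repo lines, since these may have drifted):

    sudo install -d -m 0755 /etc/apt/keyrings
    wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
    echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee /etc/apt/sources.list.d/mozilla.list
    sudo apt-get update && sudo apt-get install firefox

That pulls Firefox stable straight from Mozilla rather than waiting on the distro's packaging, and on Ubuntu you may also want an apt preferences pin so the packages.mozilla.org build wins over the snap transition package.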
In my experience, the Mozilla apt repo would still have dependencies on system libraries that can only be installed by updating the operating system to another LTS. Like, the Mozilla Firefox package depends on libssl, which depends on another package, and that other package can only be updated by upgrading the operating system, which typically drastically changes the look and feel of system menus and other things that are not easily gleaned by looking at a screenshot of an empty desktop. Maybe this isn't true of KDE and the interface remains stable across update cycles. Thank you for the suggestion.
There actually is a stopping point, and the line between ultra-processed food and processed food is often drawn at whether you could expect someone in their home kitchen to do the processing. So the question kind of becomes whether or not you would expect someone to be able to make cheese or wine at home. I think there you would find it natural to conclude that there's a difference between a Cheeto, which can only be created in a factory with a secret extrusion process, and cottage cheese, which can be created inside of a cottage. And you would probably also note that there is a difference between American cheese, which requires a process that results in a Nile Red upload, and cheddar cheese, which could still be made at home over the course of months, the way people make soap at home. You can tell that wine can be made at home because people make it in jails. I have found that a lot of people on Hacker News have a tendency to flatten distinctions into a binary, and then attack the binary as if distinctions don't matter. This is another such example.
There actually is no agreed-upon definition of "ultra-processed foods", and it's much murkier than you make it out to be. Not to mention that "can't be made at home" and "is bad for you" are entirely orthogonal qualities.
Would you consider all food in existence to be "processed", because ultimately all food is chopped up by your teeth or broken down by your saliva and stomach acid? If some descriptor applies to every single member of a set, why use the descriptor at all? It carries no semantic value.
Is it fair to recognize that there is a category difference between the processing that happens by default on every cell phone camera today and the time- and labor-intensive processing performed by professionals in the film era? What's happening today is as if you took your film to a developer and the negatives came back with someone having airbrushed out the wrinkles and evened out the skin tones. I think that photographers back in the day would have made a point of saying, "hey, I didn't take my film to a lab where an artist goes in and changes stuff."
It’s fair to recognize. Personally I do not like the aesthetic decisions that Apple makes, so if I’m taking pictures on my phone I use camera apps that give me more control (Halide, Leica Lux). I also have reservations about cloning away power lines or using AI in-painting. But to your example: if you got your film scanned or printed, in all likelihood someone did go in and change some stuff. Color correction, touching up the contrast, and the like are routine at development labs. There is no tenable purist stance because there is no “traditional” amount of processing.
Some things are just so far outside the bounds of normal, and yet are still world-class photography. Just look at someone like Antoine d’Agata who shot an entire book using an iPhone accessory FLIR camera.
I would argue that there's a qualitative difference between processing that aims to get the image closer to how the human eye would have perceived the subject (the stuff described in TFA) vs. processing that explicitly tries to take the image further from the in-person experience (removing power lines, removing people from the background, etc.).