That's certainly true in the sense that flying from NYC to LA is 750x safer than making the same trip by road, on a fatalities-per-km basis. But on a per-trip basis, boarding that flight is about as safe as taking a 5 km trip by car to the hardware store, and above-average defensive driving can certainly stretch that radius considerably, maybe to 50 km.
Some would argue the per-trip comparison is invalid, but often the travel distance is not fixed, for example when you are weighing vacation options like flying to NYC vs. camping at a local campsite.
On a danger-per-hour-in-vehicle basis, airplanes of course still come out ahead, although not quite as overwhelmingly. NYC to LA is about a 5.5 hour flight; an equivalent drive would be about 350 km, and it will be very hard to match the safety of that flight even with defensive driving. You'd need to drive 70x better than average, even with the fatigue of a 5.5 hour drive.
> In 2007, the National Transportation Safety Board estimated a total of nearly 24 million flight hours. Of these 24 million hours, 6.84 of every 100,000 flight hours yielded an airplane crash, and 1.19 of every 100,000 yielded a fatal crash. https://www.psbr.law/aviation_accident_statistics.html
So we have 330M people in the US, of which let's say 100M drive regularly. How regularly? Let's assume 2 hours a day for 52x5 = 260 working days a year. Given 43K traffic fatalities per year, let's compute hours of driving per fatality: 100M * 2 * 260 / 43K ≈ 1.2M. So we have 1 fatality per 1.2M hours of driving. At the same time we have roughly 1 fatality per 100K hours of flying. Oops!
Of course one should consider that:
(a) it's 2007 data, it's probably lower now (10 times lower?),
(b) we definitely cover longer distances per hour of flying (by the way not that much, 60 mph vs 600 mph is within 10x difference),
(c) it's probably all flying, including private, but I'm not considering just public buses either.
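The main arithmetic above checks out; a quick sketch in plain Rust using the same assumed inputs (the function name is made up for illustration, and the inputs are the comment's own assumptions, not measured data):

```rust
// Back-of-envelope check of the driving-hours-per-fatality estimate.
fn driving_hours_per_fatality() -> f64 {
    let drivers = 100e6; // assumed regular drivers in the US
    let hours_per_day = 2.0;
    let days_per_year = 52.0 * 5.0; // 260 working days
    let traffic_deaths_per_year = 43e3;
    drivers * hours_per_day * days_per_year / traffic_deaths_per_year
}

fn main() {
    let h = driving_hours_per_fatality();
    println!("~{:.1}M driving hours per fatality", h / 1e6); // ≈ 1.2M
    // vs. roughly 1 fatal crash per ~100K flight hours in the 2007 GA data
}
```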
Add defensive driving though, and it's not that obvious which is safer.
The report you seem to be citing is this one, which summarizes the data on General Aviation flights. Those are small private planes. Commercial air transport is not part of General Aviation.
Yep, I'm not actually claiming that driving is safer per se, but it's apples vs oranges. I'm also not sure about the 24M hours: total commercial airline hours (i.e. aircraft hours, not passenger hours) were around 14M/year in 2018 (link in my other comment), so we need to multiply by the average number of passengers. Which gives >1B passenger-hours/year for commercial airlines alone.
If that door had hit the horizontal stabilizer, though, we would have had completely different statistics even with 1B hours. Fortunately it didn't happen, but with the current trend the idea that flying is always safer may become less obvious, and the "orders of magnitude" margin may disappear pretty fast.
IMHO the comparison is to inform the decision point of whether to fly or drive somewhere, so the inputs should be limited accordingly: exclude drives that couldn't reasonably be flown.
Is it safer on average to do a long road trip, or fly? Historical crash data on long road trips (excluding commutes, local errands, etc.) probably doesn't exist, but if it did, that would be very preferable. Perhaps people crash more when driving unfamiliar roads, with additional fatigue of long durations, with additional distraction of kids, etc. Or perhaps routine drives are worse because one lets their guard down!
Statistics is a tricky thing. There are 43K traffic fatalities in the US per year and 53K deaths from colorectal cancer. Which means the chance of dying from colorectal cancer is higher than of dying in a traffic accident. Well, over a lifetime, and the distribution over age can be different, etc. In the same way, the 43K fatalities are not evenly distributed over region, type of driving, destination, age, etc.
Of course I have to admit that flying commercial airlines is safer by average numbers, in the US and for now. But if we estimate total flying hours as 1.3B/year (http://web.mit.edu/airlinedata/www/2018%2012%20Month%20Docum... times 100 passengers per aircraft) it only takes 1300 deaths per year to make it even with average traffic fatalities. If that flight had been unlucky enough to go down we would have had 177 deaths, already not "orders of magnitude safer" than driving. And the trend is not good.
But again, we are comparing apples to oranges. Driving is a very different experience, both for long and short trips. Nobody chooses to drive from Boston to LA just out of fear of flying (well, maybe there are exceptions, but "nobody" is still a very accurate word). As for short trips, the chance of getting into an accident driving through an urban area to the airport is probably higher than driving in the other direction towards your destination. Again, it depends.
This is roughly accurate for general aviation (people taking a Cessna out for a ride on a weekend, etc.) - it is about 10x deadlier than driving and the rates have been pretty stable for decades.
If you look at just airlines, they're in turn 10x _safer_ than driving, if I remember correctly. There's this anecdote that after 9/11 people were afraid to fly and died on the highways in much higher numbers. There's also the fact that there was a very small number of passenger deaths involving airliners in the US over more than a decade (meaning no major crashes). Compared to thousands and thousands of traffic deaths a year, that should drive the point home, even when you have to adjust for base rates.
> It hates any form of order and will actively attempt to destroy it.
If you are talking about ever increasing entropy, it applies to any environment. But keep in mind that the Moon itself is a manifestation of order. If it hadn't been the case we would have been observing a cloud of dust and gas where our Solar system is.
Yes, entropy increases in the salad dressing, but only when it's insulated (in reality we can't consider salad dressing outside of the Earth's gravitational field, but let's say the Earth is insulated too). Now imagine that the extra energy (i.e. generated heat) has dissipated (either out of a window or, if we consider the Earth too, into space). Is it still an increase of entropy? Our Solar system is not a closed system; the extra heat that was generated by the formation of planets has dissipated (and continues to do so). So in the end the entropy of the Solar system is lower, i.e. we have more order, at least in our vicinity. Possibly in the whole universe, since it's (presumably) expanding. Anyway, I wouldn't apply the 2nd law of thermodynamics to the whole universe; we have no idea what happens at that scale.
Most modern cars will happily start at -20°C and many will start at -30°C without any special additions (don't ask me how I know, you just need to know what you are doing). Of course it's not -200°C, but one thing to remember is that there is no temperature in a vacuum. The temperature of the lunar surface is not it. An object without heating can easily reach lower temperatures there, yet it may be no harder to keep that object warm than, say, somewhere in the Arctic, since there is no conduction, only radiative heat transfer.
> Most modern cars will happily start at -20°C and many will start at -30°C without any special additions
A couple of years ago temps here went below -25° and there were a lot of cars that wouldn't go anywhere. Sure, some of them just had a battery run down by the usual minimal urban driving, but I heard enough rants from and about people who were forced to abandon their car and use public transport or a taxi. Diesels without an engine heater were among them.
> don't ask me how I know, you just need to know what you are doing
Yes, have an 'offline' (lol) charger for your car battery or a 'kick-starter' kit. The thing is that the cars sold here are prepared for winter conditions, yet many of them failed at slightly lower temps, which again shows how a mere 5° difference can be way too much even for things that otherwise work just fine.
People just don't care about their vehicles, that's why. There are multiple examples of even diesels starting at -25°C without heating. All my cars have been gas, and while starting them at -30°C required some magical actions (like briefly turning on the headlights to warm the battery up, or depressing the clutch if you have a manual transmission), they all started most of the time. No extra tools or devices were necessary.
> or depressing the clutch if you have a manual transmission
What exactly does depressing the clutch do as compared to not doing it and starting the vehicle in neutral? Is it to reduce the engine load further? Or something else?
Standard gearbox oil is 80W90, which at -30°C (even at -20°C) turns into a thick jelly. Even in neutral the starter has to move gears through that jelly. So normally you don't want to release the clutch until the engine is warm enough and stable, and even when you do release it (in neutral) you do it slowly, sometimes in multiple attempts, to avoid stalling the engine, like starting on a steep hill. I haven't tried an automatic at those temperatures, but ATF is much less viscous, so it should be much easier on the starter at low RPM.
Then you block the runtime? _aha you got me, threads and pre-emptive concurrency is better_.
This is where you have a reasonable trade-off. I have accepted that async gives me more control over my code. In exchange I accept that blocking can slow down the app. After running async Rust in production for over 2 years now, I've not seen any blocking task block the executor. Maybe I'm just good, but my experience is that my colleagues who come from C# generally don't make these mistakes either.
You wait on a shared channel, so both the read thread and the sleep thread queue a message when ready (whichever comes first). Not sure about channels specifically, but in other languages it would be a concurrent queue.
Not sure about Rust, but in other languages that don't have async: create a queue, spawn a thread with your task, thread with sleep and wait for a message from any of those two. Kill the still running thread when you get the message. Can't say it's incredibly hard (unless it's Javascript or you work in a single-threaded model in general).
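That pattern can actually be sketched in plain std Rust (the `run_with_timeout` name and `Outcome` type are made up for illustration); the one caveat, as discussed below, is that Rust can't kill the losing thread, so it is simply left to finish and its late send is ignored:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Either the task finished first, or the timer did.
enum Outcome {
    Done(u64),
    TimedOut,
}

fn run_with_timeout(task_ms: u64, timeout_ms: u64) -> Outcome {
    let (tx, rx) = mpsc::channel();

    // Worker thread: stands in for the real task (here just a sleep).
    let task_tx = tx.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(task_ms));
        let _ = task_tx.send(Outcome::Done(task_ms)); // ignored if it loses
    });

    // Timer thread: sends the timeout marker.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(timeout_ms));
        let _ = tx.send(Outcome::TimedOut);
    });

    // Whichever message arrives first wins; the loser keeps running
    // in the background and its send is simply dropped.
    rx.recv().unwrap()
}

fn main() {
    match run_with_timeout(10, 1000) {
        Outcome::Done(ms) => println!("task finished in {} ms", ms),
        Outcome::TimedOut => println!("timed out"),
    }
}
```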
> Kill the still running thread when you get the message
This is extremely difficult. I mentioned elsewhere that the only way to kill a thread is through the pthread_cancel API, which is very dangerous.
Languages with larger runtimes can get around this because they have all sorts of things like global locks or implicit cooperative yielding. So they don't ever have to "kill" a thread, they actually just join it when it yields.
As I understand it, when you are blocked on I/O and a signal is sent to the waiting thread, the system call will simply be interrupted and return an error. Ruby (Java, etc.) makes this simple because of GC, so I don't need to worry about file descriptor leaks etc. But in Rust, shouldn't that be part of thread management? Basically, if an error happens during a normal blocking system call, it goes through the same sequence, no? E.g. you have to release any thread-local allocations no matter how the system call was terminated. Rust threads are supposed to be memory safe; not sure about file descriptors. I don't quite understand what you mean by "yielding" though.
In some sense it is. Async is a glorified future, and future is a glorified thread management, and threads are a way to facilitate asynchronous execution. You can also create a threadless runtime, but then you are relying on OS threads (e.g. I/O or XHR), otherwise you are simply combining function calls (for which we already have language syntax).
That's pretty simple. The primary goal of every software engineer is (or at least should be) ... no, not to learn a new cool technology, but to get the shit done. There are cases where async might be beneficial, but those cases are few and far between. In all other cases a simple thread model, or even a single thread, works just fine without incurring extra mental overhead. As professionals we need to think not only about whether some technology is fun, but about how much it actually costs our employer, and about those who are going to maintain our "cool" code when we leave for greener pastures. I know, I know, I sound like a grandpa (and I actually am one).
You can cancel socket operations using signals. You can e.g. have one or more background threads running timers that will interrupt the blocking IO if it doesn't return in a timely manner. A lot of very important frameworks and services that handle billions of transactions per day use this model.
Of course you can. It does mean that you need cooperation between the child and parent thread (to set up the signal handler so that resources are cleaned up) though. That's easy in a framework, kind of a pain in the ass if you're just trying to get some opaque client you were passed to do something in <10 seconds.
And that's just for IO. I mentioned elsewhere that you may want to cancel pure compute work.
You can see my point, I assume, that when your userspace program can cancel tasks natively it's much easier to work with?
Can you cancel a tight computing loop (i.e. without system calls and without yielding of any sort) with async? I wonder how? Also if you can inject a cleanup code in your async task what prevents you from doing it with threads? Such things existed long before async/await and system calls didn't change for async/await. Also, what's the difference between "framework" and async/await runtime, isn't the latter a kind of a framework?
Without any yielding? Seems hard. You could park the thread idk.
> what's the difference between "framework" and async/await runtime,
Sure, in that in both cases you have the threads managed for you. But there's a difference between spawning a raw pthread, which will have no signal handlers/ cleanup hooks, and one managed by a framework where it can add all of those things and more.
Interesting, in the Java world Thread.stop is deprecated too: https://docs.oracle.com/javase/7/docs/technotes/guides/concu... Which means there is no good way to stop a thread involuntarily. Of course in most simple apps it's not a big deal, but I would not do it in long-running apps.
OTOH, in Rust the async model is based on polling. Which means that poll may never block; instead it has to set a wake callback if no data is available. So there is no way to interrupt a rogue task, and all async functions have to rely on callbacks to wake them (welcome to Windows 3.1, only inside out!). The thread model is much more lax in this sense; e.g. even though my web server (akka-http) is based on futures, nothing prevents me from blocking inside my future, and in most cases I can get away with it. As I understand it, that's not possible in the Rust async model: I can only use non-blocking async functions inside an async function. So in reality you don't interrupt or clean up anything in Rust when a timeout happens, you simply abandon execution (i.e. stop polling). I wonder what happens to resources if they were allocated.
> As I understand it's not possible in Rust async model,
You can block, you're just going to block all other futures that are executed on that same underlying thread. But all sorts of things block, for loops block.
This is the same as Java, I believe. Akka also has special actors called IO Workers that are designed for blocking work - Rust has the same thing with `spawn_blocking`, which will place the work onto a dedicated threadpool.
> So in reality you don't interrupt or clean up anything in Rust when a timeout happens
You don't interrupt, you are yielded to. It's cooperative.
> I wonder what happens with resources if there were allocated.
When a Future is dropped its state is dropped as well, so they are all freed.
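This can be demonstrated with a hand-rolled future in plain std Rust (the `NeverDone`/`Resource` names and the no-op waker are purely illustrative): "cancellation" is just dropping the future, and the destructors of its state run at that point.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

static CLEANED_UP: AtomicBool = AtomicBool::new(false);

// A guard standing in for any resource owned by the future's state.
struct Resource;
impl Drop for Resource {
    fn drop(&mut self) {
        CLEANED_UP.store(true, Ordering::SeqCst);
    }
}

// A future that never completes: poll() must not block, it just
// reports Pending (a real future would also register cx.waker()).
struct NeverDone {
    _res: Resource,
}
impl Future for NeverDone {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> std::task::Poll<()> {
        std::task::Poll::Pending
    }
}

// Minimal waker that does nothing, just enough to call poll() by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Returns (cleaned up while pending, cleaned up after the future is dropped).
fn demo() -> (bool, bool) {
    CLEANED_UP.store(false, Ordering::SeqCst);
    let mut fut = Box::pin(NeverDone { _res: Resource });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert!(fut.as_mut().poll(&mut cx).is_pending());
    let before = CLEANED_UP.load(Ordering::SeqCst);
    drop(fut); // "cancellation": stop polling and drop the state
    let after = CLEANED_UP.load(Ordering::SeqCst);
    (before, after)
}

fn main() {
    let (before, after) = demo();
    println!("cleaned up while pending: {before}, after drop: {after}");
}
```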
If you block, your timer goes out the window, right? Because the poll will never get there until the blocking call is done. So yeah, you can block, but it will disrupt the whole chain, including tasks above yours up to the await. Similar to the Erlang VM, where the language itself yields (e.g. there are no loops and every recursive call is effectively a yield), but if you add a C module and are careless enough to block, the whole VM blocks. So no, if you want to use async you shouldn't block. For loops? Nope, not if they take a long time, for the same reason; you may want to break them down into smaller chunks ("long" depends on other tasks and expected latency).
Having said that, Erlang exists and is doing well, so async is as good as any model designed for special cases. But this discussion basically answers the question
> Why don’t people like async?
Because not everybody (which means the majority of developers) needs this complexity. And the upward poisoning means that I can't block in my function if my web server is based on async, which affects everybody who uses it.
> If you block your timer goes out the window, right?
This is the case in every language.
> So no, if you want to use async you shouldn't block.
Everything blocks. The dosage makes the poison.
> For loops? Nope, not if they take long time for the same reason
You would want to add a yield in your loop, yes. Async loops `while let Some(msg) = stream.next().await` will work well for this.
> And the upward poisoning means that I can't block in my function if my web server is based on async, which affects everybody who is using it.
To be clear, you can definitely block as much as you want in those frameworks, you just need to understand that you'll block the entire thread that's running your various futures. That's not that big of a deal, you'd have the exact same issue with a synchronous framework. Blocking in an OS thread still blocks all work on that thread, of course.
The last comment is actually pretty interesting and spot on. In the Java/JDK world, which you can think of as a "framework", you can cancel blocking IO via the Thread.interrupt() mechanism. And that works because it's deeply integrated into the framework, similar to how async Rust runtimes provide support for cancellation.
> Show me how to cancel a network requests using only threads, with no access to the underlying socket APIs?
It’s been a long time since I did this in Rust. But why do you not have access to the sockets or at least a set_timeout method? Is it a higher level lib that omits such crucial features?
In Go, the super common net.Conn interface has deadline methods. Not everyone knows their importance but generally you have something like it piped through to the higher layers.
EDIT: Oh I see you replied to my other comment. Please disregard.
Total rust newb here, but does that need the full async story, or is it a limitation of an API somewhere? From the point of view of the code using the request's response could you use a channel with recv_timeout? Is the problem there that the thread with the socket connection is still going and there's no way to stop it?
The ability to cancel an operation without talking to the operating system requires that your program has yield points. That yielding is what allows another part of the program to take control and say "OK, I'm done with you now, no need to finish".
Yes, the problem is that your thread would continue to perform work even if you stopped waiting on it.
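A small std-Rust sketch of exactly that situation (the `demo` helper and the timings are invented for illustration): `recv_timeout` lets the caller give up waiting, but the worker thread it abandoned keeps running to completion on its own.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::mpsc::{self, RecvTimeoutError};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Returns (did we time out, did the worker finish anyway).
fn demo() -> (bool, bool) {
    let (tx, rx) = mpsc::channel();
    let finished = Arc::new(AtomicBool::new(false));
    let flag = finished.clone();

    // A slow "request": the caller will stop waiting, but the thread runs on.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(200));
        flag.store(true, Ordering::SeqCst);
        let _ = tx.send("response"); // nobody may be listening anymore
    });

    // Give up after 50 ms; the worker needs 200 ms.
    let timed_out = matches!(
        rx.recv_timeout(Duration::from_millis(50)),
        Err(RecvTimeoutError::Timeout)
    );

    // Wait long enough for the abandoned worker to prove it is still alive.
    thread::sleep(Duration::from_millis(300));
    (timed_out, finished.load(Ordering::SeqCst))
}

fn main() {
    let (timed_out, finished) = demo();
    println!("timed out: {timed_out}, worker finished anyway: {finished}");
}
```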
Maybe I don't understand the complexity, but in good old Ruby I can easily stop a thread if I don't need result anymore. No async needed and no yield points necessary. Doesn't it apply to Rust too?
I assume that Ruby does in fact have yield points in some form, such as a global lock. Killing a thread is only possible (for a pthread) via the `pthread_cancel` API. That API is very dangerous and is generally not something you'd ever want to use manually - the thread will not clean up any memory or other resources, any shared memory is left in a tricky state.
To gracefully shut a thread down you need yielding of some kind.
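For example, a cooperative-cancellation sketch in plain std Rust (the `spawn_cancellable` name and the chunk sizes are invented): the worker checks a shared flag between chunks of work, which is effectively a userspace yield point, and then returns normally so all its destructors run.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Cooperative cancellation: the worker polls a shared flag at safe
// points and shuts itself down cleanly, instead of being killed.
fn spawn_cancellable() -> (Arc<AtomicBool>, thread::JoinHandle<u64>) {
    let cancel = Arc::new(AtomicBool::new(false));
    let flag = cancel.clone();
    let handle = thread::spawn(move || {
        let mut chunks = 0u64;
        loop {
            // one "chunk" of work between cancellation checks
            chunks += 1;
            thread::sleep(Duration::from_millis(1));
            if flag.load(Ordering::Relaxed) {
                break; // clean shutdown: destructors run, resources released
            }
        }
        chunks
    });
    (cancel, handle)
}

fn main() {
    let (cancel, handle) = spawn_cancellable();
    thread::sleep(Duration::from_millis(50));
    cancel.store(true, Ordering::Relaxed); // request shutdown...
    let done = handle.join().unwrap();     // ...and join; no pthread_cancel
    println!("worker stopped cleanly after {} chunks", done);
}
```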
Most commercial code is running an almost entirely IO workload, acting as a gatekeeper to a database or processing user interactions - places where async shines.
Async isn't a lark, it's a workhorse. The goal is not to write sexy code, it's to achieve better utilization (which is to say, save money).
Depends on the nature of the commercial code and whether it has another level of parallelism (think of web servers and read my comment below). As for DB queries, here's the thing: most commercial code uses DB transactions, and there is no way to run a transaction across multiple connections, so you are either single-threaded and do things in sequence anyway (why use async then?), or you are multi-threaded, and then forget about transactions. Besides that, even if you can get away with multiple transactions, there are those pesky questions like "what to do with a partially failed state?". Not all transactions are idempotent, and not all are reversible; it's hard enough when you run them sequentially, and running them in parallel and dealing with a failure might be an absolute nightmare.
Most web applications (every one I've ever worked on) use connection pooling to run multiple transactions in parallel. I suppose you could think of that as a sort of network level parallelism, but it's not multithreading.
Connection pooling is of course not without its hazards; scaling databases can be very difficult, and almost all of the production incidents I've dealt with involve a database running out of a resource (often connections). But for your garden variety web app, it certainly isn't a dichotomy between serializing all concurrent updates or losing atomicity.
But async Python is single-threaded. I'd prefer async over multithreading in Python nowadays. Otherwise code can be slow as piss if it's doing a lot of I/O. Then async is almost table stakes for almost any level of reasonable performance (GIL and all).
Not exactly sure how async in Python works, but if its runtime is non-preemptive and single-threaded (i.e. based on yield), then congratulations, you reinvented Windows 3.1! Those who are old enough to have been "lucky" to use it remember that the damn thing could hang the whole OS if your application was careless enough to block and not yield. Also "slow" is relative: if you create a thread to do a DB query, thread creation is way faster than any DB request, so not sure why it's slow. Never had problems with Ruby threads, even though Ruby doesn't have a mechanism to create a thread pool (didn't have? it's been some time since I worked with Ruby). Java & Scala, OTOH, use thread pools, even multiple variations of them, so the thread startup time doesn't matter. In any case you are talking about I/O, in which case neither thread startup nor context switching matters.
Another reason to is that it lets you handle bursty input with bursty CPU usage. Sounds great, right? Round peg, round hole.
But nobody will sell you just a CPU cycle. They come in bundles of varying size.
I recently heard a successful argument that we should take the pod that's 99% unutilized and double its CPU capacity so it can be 99.9% unutilized, that way we don't get paged when the data size spikes.
When I proposed we flatten those spikes, since they're only 100ms wide, it was shot down because "implementing a queueing architecture" wasn't worth the developer time.
I suppose you could call it a queueing architecture. I'd call it a for loop.
Your answer boils down to: "I know this technique, I don't want to learn blub technique. My job is to get stuff done, not learn new techniques." In which case, good for you; enjoy your sync code (seriously), and please stop telling the rest of us that have learnt the new blub technique that we shouldn't use it.
Honestly, your dismissal of its value sounds very much like you don't know how to use it. The whole argument can be turned around and the same said about threads, which are not "simple" as you suggest if you don't already know how to use them. You might as well say "simple async".
If you read my message carefully, I said there are cases where async is beneficial. Most of the time I don't think even threads are necessary. E.g. the most common application nowadays (arguably) is a web server. Of course those who write the web server itself may use whatever technology fits, but for us mortals who simply want to receive a request, query the DB and respond with data, even threads have very limited use. Why? Because web servers are highly parallel already: you try to make your request processing parallel and you starve another request (the DB is a limited resource, and most web apps don't require computational power). So simple sync processing works just fine -- no headaches, no mental overhead, and you can focus on the business logic, which is what your employer values the most. The exception is when your company name is Twitter or X, whatever (which most web apps are not). Other cases? Depends, but the same approach applies: we usually have a bottleneck somewhere else, so by trying to be smart you starve that. And by introducing a sophisticated approach where it's not necessary, you shift the focus away from the business logic (see above).
Also, parallel execution is never simple; there are multiple problems no matter what technology you use, be it async or threads. Meanwhile there are different kinds of threads too, you know: green, system, etc. There is Erlang, for example, which existed long before async was invented. Async is just the current hype, which always starts with "we solved this specific problem, let's do it everywhere!", then ... yeah, we did, but only for this special case, but then it creates tons of problems elsewhere, but we are not going to look there, and if you are looking there we will declare you simply unable to learn our new shiny thing. Been there, seen that. Even had this mentality.
> 1. Screening mammography has been proven to reduce cancer specific mortality in many studies.
So let's assume that an annual X-ray caused another cancer in some women who would never have developed breast cancer (i.e. the 87%). You are saying "we don't know", but the authors of that paper are trying to answer exactly that. We may have saved lives in the 13% group (that would be < 2.5% of those dying from breast cancer), but may have lost some lives in the 87% group. According to the paper the net outcome is around 0.
> 99.9% of people won't ever get colon cancer, and therefore won't ever benefit from a colonoscopy.
But they may have complications from a colonoscopy, that's the idea. No test is completely harmless, not even blood work. You save some lives but may lose others; that's the point of the paper. And of course you waste resources that could be used to find a cure.