Most tests have been performed underground. And for an above-ground test to cause anything close to a nuclear winter, you need a lot more material than the bomb itself and the soil beneath it. Tons and tons of debris would need to be shot up into the atmosphere.
So, the theory of nuclear winter depends on bombs detonating over cities and forests, causing massive amounts of debris, and also inducing firestorms that would act as chimneys for even more smoke and particles to be funneled up into the troposphere and stratosphere.
Very common in the web context -- you perform some form of relational persistence while also inserting a job to schedule background work (like sending an email). Having those both in the same transaction gets rid of a lot of tricky failure cases.
A transaction in a database does not help you here.
Let's say these are your steps:
1) Open a transaction
2) Claim an email to send
3) Send the email
4) Mark email as sent
5) Close transaction
Suppose your web client crashes between 3 and 4. The email will not get marked as sent, and the transaction will roll back, so you have no choice but to resend the email.
You could have done this same exact thing with RabbitMQ and an external worker (Celery etc. etc.). You either ack just BEFORE you do the work (between 2 and 3) -- you will never double send, but you risk dropping the email -- or you ack just AFTER the work is done (between 3 and 4), guaranteeing the work always happens, but at the risk of a double send.
If your task is idempotent this is super easy: just ack after the work is complete and you will be good. If your task is not idempotent (like sending an email), it takes a bit more work... but I think you have that same exact work in the database transaction example (see above)
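The two ack strategies can be sketched with an in-memory list standing in for the broker's queue (a toy model, not real RabbitMQ code; `crash="mid"` simulates the worker dying between its two steps):

```python
def worker(pending, sent, ack_timing, crash=None):
    """pending: the broker's queue; sent: emails actually sent.
    ack_timing is 'before' or 'after' the work; crash='mid' kills
    the worker between the ack and the send (or vice versa)."""
    job = pending[0]
    if ack_timing == "before":
        pending.pop(0)          # ack first: the message is gone for good
        if crash == "mid":
            return              # died before sending: email dropped
        sent.append(job)        # do the work: "send the email"
    else:  # "after"
        sent.append(job)        # do the work first
        if crash == "mid":
            return              # died before acking: broker will redeliver
        pending.pop(0)          # ack only after the work succeeded

# ack-before + crash: at-most-once, the email is silently dropped
pending, sent = ["welcome"], []
worker(pending, sent, "before", crash="mid")
print(pending, sent)            # [] []

# ack-after + crash, then a restart: at-least-once, a double send
pending, sent = ["welcome"], []
worker(pending, sent, "after", crash="mid")
worker(pending, sent, "after")  # redelivery on restart
print(pending, sent)            # [] ['welcome', 'welcome']
```

Either way the crash window exists; the only choice is which failure mode you prefer, which is exactly the trade-off described above.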
Email is just one example, but maybe a better example is talking to an API that supports some form of idempotency. You don't want to talk to the API during a transaction that involves other related data persistence, but you can transactionally store the intent to talk to that API (i.e. you insert a job into a queue, and that job will eventually do the API interaction). But even in the email case, you can benefit from a transaction:
1) Open a transaction
2) Persist some business/app data
3) Persist a job into your queue that will send an email relating to #2
4) Close transaction
So you've at least transactionally stored the data changes and the intent to send an email. When actually sending the email you probably want something that guarantees at-most-once delivery with some form of cleanup in the failure states (it's a bit more work, as you said).
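The four steps above can be demonstrated with stdlib sqlite3 standing in for the real database (the table and column names here are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute("CREATE TABLE email_jobs (id INTEGER PRIMARY KEY,"
             " order_id INTEGER, status TEXT)")

def place_order(item, fail=False):
    """Steps 1-4: open a transaction, persist the business row, persist
    the email job, close the transaction. Any failure rolls back both."""
    try:
        with conn:  # commit on success, rollback on exception
            cur = conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
            if fail:
                raise RuntimeError("crash before commit")
            conn.execute(
                "INSERT INTO email_jobs (order_id, status) VALUES (?, 'pending')",
                (cur.lastrowid,))
    except RuntimeError:
        pass  # the crash: neither row was committed

place_order("book")
place_order("lamp", fail=True)  # rolled back: no orphan order, no orphan job
orders = conn.execute("SELECT item FROM orders").fetchall()
jobs = conn.execute("SELECT status FROM email_jobs").fetchall()
print(orders, jobs)  # [('book',)] [('pending',)]
```

A separate worker would then poll `email_jobs` for pending rows; the point is just that the data change and the intent to email commit or roll back together.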
you can still have the notion of a transaction while integrating rabbitmq like this
you do your mutations in a transaction, and also in that transaction you execute a NOTIFY command. If the transaction is successful, the notify will go through at the end of the transaction. The notify events can be "bridged" to a messaging server like rabbitmq (see my other comment)
Nice -- pg-amqp-bridge seems like a clever solution for coordinating the two. It still puts some amount of the queuing load on the DB, but I'd be comfortable with that trade-off if I really wanted to use rabbitmq as the primary queue.
Question though -- Does it guarantee at-least-once or at-most-once on the NOTIFY? (Like, if there is a network blip, will it retry the NOTIFY?) And if it is at-least-once, I assume that consumer apps will have to handle deduplication/idempotency.
Short answer: no, it doesn't (at least not yet -- it was released a week ago :) ), but dismissing it only for that would be a superficial look. Most cases don't need those guarantees, i.e. you can tolerate a few lost messages.
For example, you are implementing real time updates in your app using this.
What's the probability of a network glitch happening at the exact moment when two users are logged into the same system and an event produced by one needs to be synced to the other? And even if that event is lost, is it really critical, considering the user will soon move to another screen and reload the data entirely?
From rabbitmq's point of view, the db & bridge are the producer. What you are really asking is: does the "producer" guarantee delivery?
To do that, the producer would itself need to become a "queue" system for the case where it fails to communicate with rabbit.
Considering we are talking web here, the producers are usually scripts invoked by an http call, so there is no such guarantee in any system (when communication to rabbitmq fails).
However, I think the network (in the datacenter) is quite reliable, so there is no point in overengineering for that case.
If the system can tolerate a few seconds of downtime, it's easy enough to implement a heartbeat that restarts this tool when needed. You can also run 2-3 instances to make it redundant, then use a correlationId to dedup the messages.
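With 2-3 redundant bridges, consumers can see the same event more than once, so deduplication by correlationId is worth sketching (an in-memory toy; a real consumer would bound or expire the seen-set):

```python
seen = set()

def handle(message, deliver):
    """Drop duplicates by correlationId before invoking the real handler."""
    cid = message["correlationId"]
    if cid in seen:
        return  # duplicate copy from a redundant bridge: ignore it
    seen.add(cid)
    deliver(message)

delivered = []
for msg in [{"correlationId": "a1", "body": "row updated"},
            {"correlationId": "a1", "body": "row updated"},  # second bridge's copy
            {"correlationId": "b2", "body": "row deleted"}]:
    handle(msg, delivered.append)
print([m["correlationId"] for m in delivered])  # ['a1', 'b2']
```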
A more robust tool would be https://github.com/confluentinc/bottledwater-pg but it's for kafka, and the major downside for me is that it can't be used with RDS since it's installed as a plugin to postgresql
Notify is very lightweight in postgres; it takes up very little cpu, if that's your worry.
anyway, this is not meant for log storing/streaming (same as postgresql). My use case for it is "real time updates" (a reasonable number of events that need to be routed in complex ways to multiple consumers)
Secondly, the best way to avoid making painful design decisions is... to make them enough times that you instinctively remember them. There aren't many shortcuts to developing that instinct, so take this opportunity to try and generalize your learnings -- what do you recognize now that you didn't see when you started the project? A healthy amount of retrospection will make you a better engineer, and you'll also get better at identifying uncertainties and risks before you start coding.
Lastly, up-front planning and design documents can only get you so far. They're important, sure, but at some point you'll be down in the nitty gritty details, and you'll need to make unanticipated course corrections. As you gain experience you'll start being able to fill in more of those gaps on your own, but until then, you'll want to loop in other engineers more frequently.
One of the qualities of a senior engineer is that they act as a force multiplier for their team. So as a relatively junior engineer, don't be afraid to make use of your senior engineers! They should be there to support you and make you better, beyond just helping you plan out a project. If possible, pair on some of the trickier parts with them, so that you can see how they'd approach the problem and start picking up on things that might not seem so obvious to you right now.
There is a much bigger barrier to entry when it comes to monetizing via product placement and sponsorships. For smaller entities, developing dedicated advertising partnerships is just not on the table.
So if the ad networks themselves truly are the problem (and as a result adblocking is ever on the rise), it sounds like there is an opportunity for a better, more "organic" solution accessible to the long tail of smaller sites looking for monetization options. Sounds like a tricky nut to crack, as I'm sure anyone who works in the industry would tell me.
Adblock Plus has an "Acceptable Ads" program that allows non-animated, non-sound ads to remain on the page. People who think supporting the sites they visit is the right thing to do, can enable acceptable ads while blocking annoying ones.
The book How Not to Be Wrong: The Power of Mathematical Thinking[1] by Jordan Ellenberg has a segment on this with similar examples (minus the programming bits), tying it in to human psychology and, with surprising insight, the behavior of slime molds. Would definitely recommend reading if you find these kinds of topics interesting.
Hospital-acquired infections have become incredibly common. My grandmother caught a C diff infection after what had already been a very arduous treatment and recovery, and she received basically no support from the hospital that had effectively given her the infection.
While she likely got it in the hospital, it should be noted that a fair portion of the population carries C. difficile in their guts, and can develop a symptomatic infection purely as the result of a disruption of the less hardy elements of their gut flora.
Curious about the first one -- was there some kind of conflict of interest? As far as I can tell all you were doing was taking public information already available on the page and disclosing it with a much more visible banner. Was your employer heavily involved in Native Advertising campaigns or something?
Imo it'd actually help Google if anything. As far as I know, they're not in the sponsored content game. If, hypothetically, OP's thingo blew up and sponsored content articles became useless, that's more advertising budget freed up to go into Google's pocket.
No. In C++, for instance, you can share data either by a const reference (or pointer), or by a non-const reference (or pointer). The latter allows mutation; the former does not.
Now, that means that the language is not forcing you to share without mutability, because you can leave off "const". That is, you have to be disciplined. But if you put "const" on the reference, the compiler won't permit you to modify the data through that reference.
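C++ aside, the same share-without-mutation idea can be sketched in Python, where a read-only view enforces it at runtime rather than at compile time; `types.MappingProxyType` is the stdlib example:

```python
from types import MappingProxyType

config = {"retries": 3}
view = MappingProxyType(config)   # read-only view of the same dict

try:
    view["retries"] = 5           # analogous to writing through a const ref
except TypeError as e:
    print("blocked:", e)          # mappingproxy rejects item assignment

config["retries"] = 5             # the owner can still mutate...
print(view["retries"])            # ...and the view sees the change: 5
```

As with `const`, the restriction is on the reference, not the data: whoever holds the original can still mutate, and readers through the view observe those changes.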
M must always be the highest point within the range. A and Z are at the same height and are the lowest points within the range.
Imagine that the hikers are somehow connected (quantum entanglement, etc) such that one cannot physically move up or down if other cannot, and as one moves up and down the other must follow at the same height.
Perhaps a better metaphor would be if the paths are cut into a wall, and you have inserted two pegs that are connected by a horizontal backing bar behind the wall. The bar may move up and down but will always remain horizontal. The pegs can slide left and right along the bar, and up and down along the paths, but must always remain horizontally level.
Now, imagine that whenever a peg reaches a local max or min (peak or valley), a change in vertical direction may also cause a traversal along the opposite side of the peak/valley, thus allowing for forward progression.
While one peg is pinned at a peak or valley and progresses horizontally, the other peg simply moves up and down along its own segment.
This exercise is obviously not a mathematical proof, but does serve to make the proof feel a bit more intuitive. I'd love to construct such a "puzzle" myself and try it out on a bunch of different contours/tracks.
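One way to construct such a puzzle, assuming both profiles move in unit steps (consecutive vertex heights differ by exactly 1), is a breadth-first search over pairs of positions at equal height. The hypothetical `meet` function below checks whether the connected hikers can travel from their shared start to their shared end:

```python
from collections import deque

def meet(h1, h2):
    """h1, h2: heights at each profile's vertices (unit steps).
    BFS over states (i, j) with h1[i] == h2[j], so the two hikers
    (pegs on the horizontal bar) always stay level."""
    start, goal = (0, 0), (len(h1) - 1, len(h2) - 1)
    seen, todo = {start}, deque([start])
    while todo:
        i, j = todo.popleft()
        if (i, j) == goal:
            return True
        for di in (-1, 1):          # with unit steps, both must move
            for dj in (-1, 1):      # each turn to stay at equal height
                ni, nj = i + di, j + dj
                if (0 <= ni < len(h1) and 0 <= nj < len(h2)
                        and h1[ni] == h2[nj] and (ni, nj) not in seen):
                    seen.add((ni, nj))
                    todo.append((ni, nj))
    return False

# single peak vs. a profile with a false summit: still solvable
print(meet([0, 1, 2, 3, 2, 1, 0], [0, 1, 2, 1, 2, 3, 2, 1, 0]))  # True
```

The search naturally produces the backtracking described above: in the example, the peg on the single-peak profile retreats down its slope while the other peg crosses the false summit.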
Basic income doesn't always solve this for someone who is already homeless, but it might still act as a preventative measure, giving someone enough of a financial buffer to avoid homelessness entirely.
If you heard some of these stories, you would find that it's not just a lack of money that causes homelessness. To be brutally honest, it's a mixture of stupidity and often, an addiction. These people need other people actively intervening in their lives to keep them on track. The preventative measure should be the social worker, not the pay cheque.
I think the operative word is "some," not all. We know that UBI will sometimes be misspent. Most if not all aid programs have this problem. That doesn't mean it isn't worth it for those who would put the money to good use.
It's a problem if the math behind UBI presumes savings from eliminating a ton of social programs and we later find that there are substantial numbers of people that still need those programs even with UBI for non-financial reasons.
If we find that non-trivial numbers of parents blow all their UBI on XYZ, are we really going to let their kids starve? Or will we see that programs like SNAP need to remain in existence?
I think a good UBI program will eliminate the crappy social programs, and will keep some around to help transition people to the new society and also to help people who aren't helped just by having money to spend. For instance, child abuse: we have CPS for a reason, but giving everyone a monthly check isn't going to magically eliminate child abuse, so obviously we still need CPS for that. There's plenty of other places where we'll still need social workers to help people. But the savings we'll realize by not having armies of government workers making sure someone isn't "cheating" by getting a welfare check and then trying to supplement that with another source of income, along with various other benefits, will help pay for UBI. And eventually, by eliminating poverty and changing the culture (with both UBI and social workers), we won't need so many social workers.
Is over-hiring of social workers in the welfare sector a problem in this country? It seems that applications for something like SNAP are handled online, distributions are done directly to debit cards, and a bunch of enforcement and fraud prevention work is offloaded to agencies like SSA and IRS.
I don't have any numbers handy, but I think it should be fairly obvious that even with some stuff handled online, there are still a lot of federal workers behind the scenes. Anyone who's worked in the federal government knows there's a ton of federal workers who really don't do much all day long, and get paid a lot for it.
If the SSA and IRS don't have to do so much enforcement work for entitlements, that's a bunch of people there who can be laid off to save taxpayer money.
For SNAP they did the analysis of various policy steps that could result in administrative savings - https://www.cbo.gov/sites/default/files/cbofiles/images/pubs... (it's a very wide image, so zoom in and scroll to the right until you hit "Policy Options").
Eliminating the asset test (the area where undoubtedly some government employees work full-time) would actually increase the program cost. Most of the savings are actually derived from juggling the numbers behind "the cost of a nutritious diet".
I am personally not opposed to going full-throttle on something like SNAP - just send a free card to anyone with a SSN who requests one (I am sure agricultural and retail lobby would concur), but I think the potential savings are overblown - Medicare, SS and SNAP are generally tightly administered.