> That if someone really wants to understand where we are with all those Libya, Syria, Iraq, ... Ukraine.
I appreciate Mearsheimer's perspective, but for a self-professed "realist", I find it strange that he talks a lot about "ideology" and very little about "oil".
At the end of the day all of those conflicts are conflicts over the control of fossil fuels. I have a hard time taking any analysis seriously that doesn't focus on this primary issue.
Putin and the Russians aren't insane; they're looking after oil/NG interests. That's the entire reason they captured Crimea in the first place. The Ukrainian government then stopped the flow of water to Crimea to make it basically unusable.
NATO-backed oil/NG companies are also annoyed because, despite investing a lot in gas fields in the East and the West, they haven't been able to produce much of anything, since the region is under constant military conflict funded by both the US and Russia.
I agree with the parent comment. If Russia can finally and absolutely control gas fields in the East and have assured access to not only Crimea but water to run the region they'll be getting much more out of this situation than they had prior. They will be interested in stability as much as NATO backed energy companies. Given the conflict in the region, and the recent (2014) overthrow of a government friendly to them (coincidentally right after Shell and Chevron signed contracts to develop gas fields), Russia is, rightfully, as wary of promises from the West as we are of them.
For purely selfish, greedy reasons, Russia would want peace in the region as much as NATO-backed oil/gas companies have been hoping for it. There are still gas fields in the West that are waiting to be exploited and already have their resources under contract. Once these resources can be split up in a way everyone can deal with, there won't be conflict until the gas dries up, because both parties make tremendously more money with stability after that.
I wouldn't underestimate the value of the Ukrainian NPPs and fertile land either, as well as access to the Black Sea. Maybe even the human capital should not be underestimated.
I second this question. In the last few days I've heard pretty heavy reference to "people cheering on nuclear war" but haven't seen it first hand anywhere. A link to an opinion piece that explicitly states this, or even a few high-profile twitter accounts espousing this view would at least convince me it's happening.
Heck, even browsing reddit comments and hacker news comments I've seen no view encouraging nuclear war. Even 4chan doesn't seem to have any posts proclaiming this view.
In realpolitik terms, nuclear war is the most likely outcome of any further escalation w/ NATO including the US / West refusing to lose face / back off. It's just that most folks do not think in such terms and are crafting delusions where that doesn't take place: Revolution in Russia! Putin will be overthrown! Minor nuclear escalation then everyone will back off! Let's have NATO get involved in Ukraine then it'll be on the Russians if they pull the trigger! They would never dare!
People are talking about the Russians being brainwashed, but read most of the comments in this thread and you'll realize they're far from being the only ones.
> further escalation w/ NATO including the US / West refusing to lose face / back off.
Here's a sovereign nation that tries to save its independence, pleading with the West / EU / NATO for protection. Then you have a ruler who has been saying for years that that nation should not exist and should be part of his empire, and who then proceeds to invade.
The framing of this as a NATO vs Russia issue, where NATO should "back down" to prevent further escalation, making Ukraine merely a pawn in some greater game, is absolutely disgusting.
> In an original and persuasive analysis, Mulder shows how isolating aggressors from global commerce and finance was seen as an alternative to war that worked precisely because of the pain it imposed on the target society. From the very beginning, it was civilians who suffered the most. Nevertheless, the League of Nations embraced sanctions and established an elaborate legal and bureaucratic apparatus to enforce them. Mulder argues that instead of keeping the peace, this form of economic warfare aggravated the tensions of the 1930s, encouraging austerity and autarky and restraining smaller states but backfiring against the larger authoritarian ones, such as Italy.
I think this is an interesting point to bring up because, especially in the West, we hold a belief that "economic" violence is not as bad as "physical" violence. It's well worth at least questioning this assumption.
I think that, in a world of nuclear powers, it is trivial to see that physical violence at the scale of war between nuclear powers is essentially infinitely bad.
Famine among millions of people and general economic depression have a massive human toll. But they don't compare to full-on nuclear war, followed by nuclear winter and global radiation from fallout.
So it seems obvious to me that economic violence is the only form of violence the West can bring to bear against Russia. There is a discussion to be had on whether it should be brought to bear: punishing the people for the actions of the elite, and all that. Perhaps also a 'what does this system lead to in peacetime' question, but that seems to argue 'economic sanctions are a tragedy of the commons', which means it is hard to address. At least the solution 'we just won't sanction' does not actually resolve the tragedy of the commons.
Here's the problem though: if you obliterate a country's financial system to the point that people start starving and fundamentally lack their basic needs, what incentive are you giving someone like Putin to not drop a bomb?
If people start starving en masse the way they did back during Soviet Collectivization, which saw 4-7M dead[0], what's stopping Putin from ending it all in retaliation?
No, this is the second volume of "Probabilistic Machine Learning", the first volume of which was just published this week. The 2-volume set can be seen as a complete rewrite/replacement for "Machine Learning: A Probabilistic Perspective".
For clarification, Murphy's first book is just Machine Learning: A Probabilistic Perspective. This is his newest, 2-volume book, Probabilistic Machine Learning, which is broken down into two parts: an Introduction (published March 1, 2022) and Advanced Topics (expected to be published in 2023, but a draft preview is available now).
To answer your question: this book is even more complete and a bit improved over the first book. I don't believe there's anything in Machine Learning that isn't well covered in, or correctly omitted from, Probabilistic Machine Learning. This one also has the benefit of a few more years of rethinking these topics. So between the existing Murphy books, Probabilistic Machine Learning: An Introduction is probably the one you should have.
Why this over Bishop (if that's the question)? While on the surface they are very similar (very mathematical overviews of ML from a very probability-focused perspective), they function as very different books. Murphy is much more of a reference to contemporary ML. If you want to understand how most leading researchers think about and understand ML, and want something covering the mathematical underpinnings, this is a book you really need as a reference.
Bishop is a much more opinionated book, in that Bishop isn't just listing out all possible ways of thinking about a problem but really building out a specific view of how probability relates to machine learning. If I'm going to sit down and read a book, it's going to be Bishop, because he has a much stronger voice as an author and thinker. However, Bishop's book is now more than 10 years old and misses out on nearly all of the major progress we've seen in deep learning. That's a lot to be missing, and it won't be rectified in Bishop's perpetual WIP book [0].
A better comparison is not Murphy to Murphy or Murphy to Bishop, but Murphy to Hastie et al. The Elements of Statistical Learning was for many years the standard reference for advanced ML stuff, especially during the brief time when GBDT and Random Forests were the hot thing (which they still are to an extent in some communities). I really enjoy EoSL, but it does have a very "Stanford Statistics" feel to the intuitions (which I feel is even more aggressively Frequentist than your average Frequentist). Murphy is really the contemporary computer science/Bayesian understanding of ML that has dominated the top research teams for the last few years. It feels much more modern and should be the replacement reference text for most people.
I read TESL during my Master's and I remember being very confused by the way it described decision tree learning. I remember being pleased with myself that I had a strong grip on decision tree learning before reading TESL, and then being thoroughly confused after reading about it there.
Eyeballing the relevant chapter again (9.2), I think that may have been because it introduces decision tree learning with CART (the algorithm), whereas I was more familiar with ID3 and C4.5. Perhaps it's simpler to describe CART as TESL does, but decision trees are a propositional logic "model" (in truth, a theory), and for me the natural way to describe them is in those terms. I also get the feeling that Quinlan's work is sidelined a little, perhaps because he was coming from a more classical AI background and that's pooh-poohed in statistical learning circles. If so, that's a bit of a shame and a bit of an omission. Machine learning is not just statistics and it's not just AI; it's a little bit of both, and one needs at least some background in both subjects to understand what's really going on. But perhaps it's the data mining/data science angle that I find a bit one-sided.
Sorry to digress. I'm so excited when people discuss actual textbooks on HN.
I’m in agreement with much of your post.
The Elements of Statistical Learning played its role quite well years ago but a fresher take is needed.
Thanks for the response.
Echoing others, thank you for writing this (as someone doing an applied math masters and digging into ML - I have used ESL for a class but not the others you mention)
Kevin Murphy has done an incredible service to the ML (and Stats) community by producing such an encyclopedic work of contemporary views on ML. These books are really a much-needed update of the now outdated-feeling "The Elements of Statistical Learning" and the logical continuation of Bishop's nearly perfect "Pattern Recognition and Machine Learning".
One thing I do find a bit surprising is that in the nearly 2000 pages covered between these two books there is almost no mention of understanding parameter variance. I get that in machine learning we typically don't care, but this is such an essential part of basic statistics I'm surprised it's not covered at all.
The closest we get is in the Inference section, which is mostly interested in prediction variance. It's also surprising that in neither the section on the Laplace Approximation nor the one on Fisher information does anyone call out the Cramér–Rao lower bound, which seems like a vital piece of information regarding uncertainty estimates.
This is of course a minor critique, since virtually no ML books touch on these topics; it's just unfortunate that in a volume this massive we still see ML ignoring what is arguably the most useful part of what statistics has to offer machine learning.
Do you really expect this situation to ever change? The communities are vastly different in their goals, despite some minor overlap in their theoretical foundations. Suppose you take an rnorm(100) sample and find its variance, then ask the crowd for the mean and variance of that sample variance. If your crowd is 100 professional statisticians with a degree in Statistics, you should get the right answer at least 90% of the time. If instead you have 100 ML professionals with some sort of a degree in cs/vision/nlp, fewer than 10% would know how to go about computing the variance of the sample variance, let alone what distribution it belongs to. The worst case is 100 self-taught Valley bros: not only will you get the wrong answer 100% of the time, they'll pile on you for gatekeeping and for computing useless statistical quantities by hand when you should be focused on the latest and greatest libraries in numpy that will magically do all these sorts of things if you invoke the right API. As a statistician, I feel quite sad. But classical stats has no place in what passes for ML these days.
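For anyone curious, the parent's challenge has a closed-form answer you can check numerically: for a normal sample of size n, (n-1)S²/σ² follows a χ² distribution with n-1 degrees of freedom, so E[S²] = σ² and Var(S²) = 2σ⁴/(n-1). A quick simulation sketch (Python standing in for R's rnorm; the seed and trial count are arbitrary choices):

```python
import random
import statistics

# For X_1..X_n iid N(0, sigma^2), the sample variance S^2 satisfies
# (n-1) * S^2 / sigma^2 ~ chi-squared with n-1 degrees of freedom,
# so E[S^2] = sigma^2 and Var(S^2) = 2 * sigma^4 / (n - 1).
n = 100          # sample size, matching rnorm(100) in the comment
sigma = 1.0      # standard normal, as rnorm defaults to
trials = 20_000  # number of simulated samples

random.seed(42)
sample_vars = []
for _ in range(trials):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    sample_vars.append(statistics.variance(xs))  # unbiased S^2

mean_s2 = statistics.fmean(sample_vars)
var_s2 = statistics.pvariance(sample_vars)

print(f"mean of S^2: {mean_s2:.4f}  (theory: {sigma**2:.4f})")
print(f"var of S^2:  {var_s2:.5f}  (theory: {2 * sigma**4 / (n - 1):.5f})")
```

With n = 100 the theoretical variance of the sample variance is 2/99 ≈ 0.0202, and the Monte Carlo estimate lands right on it.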
Folks can’t Rao–Blackwellize for shit; how can you expect a Fisher information matrix from them?
I think Bishop et al.'s WIP book Model-Based Machine Learning [0] is a nice step in the right direction. Honestly, the most important thing missing from ML that stats has is the idea that your model is a model of something: how you construct a problem mathematically says something about how you believe the world works. Then we can ask all sorts of detailed questions like "how good is this model and what does it tell me?"
I'm not sure this will ever dominate. As much as I love Bayesian approaches, I sort of feel there is a push to make them ever more byzantine, recreating all of the original critiques of where frequentist stats had gone wrong. So essentially we're just seeing a different orthodoxy dominate thinking, with all of the same trappings of the previous orthodoxy.
Wait, what’s the problem with people not knowing things that they don’t need to know? This just comes across as being bitter that self taught people exist, or that other people are somehow encroaching on your field.
I think your comment does what the OP complains about, regarding gatekeeping etc.
I don't know about OP, whose comment I find a little harsh, but personally I'm always a bit frustrated and a bit despairing when I realise how poor the background is of the average machine learning researcher today, i.e. of my generation. Sometimes it's like nothing matters other than the chance that Google or Facebook will want to hire someone with a certain skillset, and any knowledge that isn't absolutely essential to getting that skillset is irrelevant.
Who said "Those who do not know their history are doomed to repeat it"? In research that means being oblivious of the trials and tribulations of previous generations of researchers and then falling down the same pits that they did. See for example how deep learning models today are criticised for being "brittle", a criticism that was last levelled against expert systems, and for similar, although superficially different, reasons. Why can't we ever learn?
> I think your comment does what the OP complains about, regarding gatekeeping etc.
Oh absolutely, that's how I intended it. I don't think that preemptively calling out people's reaction gives the parent comment a pass on gatekeeping.
Your concern about poor background... it's only a problem for people who are jumping into things without the prerequisite background and they aren't learning fast enough. But modern deep learning is much more empirical - there are a few building blocks and people are trying out different things to see how they perform. I don't get why we need to look down on people for not knowing things that they don't need to know. If there was some magic that comes from knowing much more statistics, then the researchers who do would be outperforming the rest of the field by a lot but I don't think that's the case.
That certainly is the case. Not for statistics specifically, but all the people at the top of the field, Bengio, LeCun, Schmidhuber, Hinton, and so on, have deep backgrounds in computer science, maths, psychology, statistics, physics, AI, etc. You don't get to make progress in a field as saturated as deep learning when all you know how to do is throw stuff at the wall to see what sticks.
I never said anything about needing to look down on anyone. Where did that come from?
My concern is that without a solid background in AI, no innovation can happen, because innovation means doing something entirely new and one cannot figure out what "entirely new" means, without knowing what has been done before. The people who "are trying out different things to see how they perform" as you say, are forced to do that because that's all you can do when you don't understand what you're doing.
To get the prediction variance in a Bayesian treatment, you integrate over the posterior of the parameters - surely computing or approximating the posterior counts as considering parameter variance?
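The integration the parent describes can be made concrete with the simplest conjugate toy model: unknown mean, known noise variance, Gaussian prior. All the numbers below are illustrative assumptions, not taken from any of the books; the point is just that the predictive variance splits into noise plus parameter (posterior) variance.

```python
import random

# Conjugate sketch: mu ~ N(m0, t0^2) prior, data y_i ~ N(mu, s^2) with
# known noise s^2.  The posterior over mu is Gaussian, and integrating
# mu out of the predictive gives
#   Var(y_new | data) = s^2 + t_n^2   (noise + parameter uncertainty)
def posterior(m0, t0, s, ys):
    n = len(ys)
    prec = 1 / t0**2 + n / s**2              # posterior precision
    tn2 = 1 / prec                           # posterior variance of mu
    mn = tn2 * (m0 / t0**2 + sum(ys) / s**2) # posterior mean of mu
    return mn, tn2

random.seed(0)
s = 2.0                                       # known noise std dev
ys = [random.gauss(5.0, s) for _ in range(50)]  # simulated data
mn, tn2 = posterior(m0=0.0, t0=10.0, s=s, ys=ys)

pred_var = s**2 + tn2  # predictive variance for a new observation
print(f"posterior mean of mu: {mn:.3f}, posterior var: {tn2:.4f}")
print(f"predictive variance:  {pred_var:.4f} (noise alone: {s**2:.4f})")
```

The posterior variance term shrinks as 1/n, so with lots of data the predictive variance is dominated by the noise, which is exactly the "parameter variance" quantity being discussed.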
Of course it does. You can put hyperpriors on the priors, and hyper hyperpriors on the hyperpriors, but the regress has to stop somewhere. What is your point?
I'm not sure I entirely follow your comment, however I was merely pointing out that reckoning with parameter uncertainty by "computing or approximating the posterior...", as you said, is not always applicable in probabilistic ML.
Yes, but that's true of all statistics. You have to make some assumptions to get off the ground. If you estimate parameter variance the frequentist way, you also make assumptions about the parameter distribution.
No, this is expressly untrue. In the frequentist paradigm parameters are fixed but unknown, they are not random variables, and have no implicit probability distribution associated with them.
An estimator (of a parameter) is a random variable, as it is a function of random variables, however this depends only on the data distribution, there is no other implicit distribution on which it depends.
For instance, the distribution for the maximum likelihood estimator of the mean of a normal distribution is normally distributed, however this does not imply that the mean parameter has a normal prior, it has no prior, as it is a fixed quantity.
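That distinction is easy to check numerically: in the sketch below (all values illustrative), mu is a fixed constant with no prior; what varies across repeated samples is the estimator, whose variance sigma²/n is exactly the Cramér–Rao lower bound for this problem (Fisher information n/sigma²).

```python
import random
import statistics

# Frequentist picture: mu is fixed and unknown, not a random variable.
# Across repeated samples, the MLE of the mean (the sample mean) is
# itself normally distributed with variance sigma^2 / n, which attains
# the Cramer-Rao lower bound.
mu, sigma, n, trials = 3.0, 2.0, 25, 20_000

random.seed(1)
mles = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
        for _ in range(trials)]

emp_var = statistics.pvariance(mles)  # spread of the estimator, not of mu
crlb = sigma**2 / n                   # Cramer-Rao lower bound
print(f"empirical var of MLE: {emp_var:.4f}")
print(f"CRLB sigma^2/n:       {crlb:.4f}")
```

The empirical variance of the estimator matches sigma²/n without any distribution ever being placed on mu itself.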
Do you think this book is useful for someone just looking to get more into statistics and probability, sans machine learning? How would I go about that?
Currently I have lined up: Math for Programmers (No Starch Press), Practical Statistics for Data Scientists (O'Reilly, the crab book), and Discovering Statistics Using R.
Basically I'm trying to follow the theory from "Statistical Consequences of Fat Tails" by NNT.
I've had tinnitus for about 20 years now; I don't notice it most of the time, and when I do, I don't really mind it too much. The only thing I really miss is perfect silence.
One of the things I've noticed people struggle with regarding any chronic illness is constantly feeling like they are "supposed" to be feeling fine, and having more distress over some perceived gap between their actual life and how they imagine life should be than over the ailment itself (this obviously carries over to things other than chronic illness).
Letting go of this imagined alternative life is the biggest first step towards accepting and dealing with tinnitus. I have tinnitus the same way I have all my other features, positive and negative. That's just the life I have and there is no alternate life out there.
I do still get a kick out of the look of horror on people's faces when I tell them I've had constant, changing-frequency ringing in my ear for two decades.
I haven't done or even really touched front-end work in a decade, and I believe this would have been an interesting interview.
Given his perfect answers I probably would have been able to come pretty close to passing.
If you know anything at all about the details of HTML, all of these can be easily understood by looking at what they're claiming to do. None of these are esoteric, even if your front-end knowledge predates the widespread use of mobile devices.
Take line 5, for example. I have never heard of Open Graph, but I know that og: is declaring a namespace for a pretty self-describing tag. It's clear from the tag alone, with no other information, that it is providing a site name for a specific service.
I don't think it's a big ask at all for a senior front-end dev, who is fresh in the field, to know what the first 10 lines of one of the most popular websites out there is doing.
I'm surprised to see so many moral narratives, like this one, surrounding war start popping up. I thought it was pretty well understood and established that the narratives we tell about war are mostly propaganda/myth making after the fact.
War is about resources and control. That's it. There are no "good/bad" guys (or, more specifically, they're all "bad" guys). We're not participating in some grand moral struggle; we're taking calculated risks to maximize gains when the opportunity appears and minimize losses when we have no other options.
For the last 70 years world power has been arranged around primarily one global hegemon (the US), so there was not much movement on the world stage among powerful nations. Economic strategies remained the safest way to control resources among world powers, but military action still dominated in cases of extreme power asymmetry. The US has been invading and bombing countries and regions virtually continuously for the last few decades.
The resource situation is starting to change, so risks that made no sense suddenly start to seem more reasonable. But absolutely none of what we're seeing now has anything to do with some silly narrative about NATO powers and Russia. Russia has seized the opportunity for low-risk acquisition of resources; NATO powers have worked to minimize their own losses, given that the risk of any major action is high.
As resources grow more scarce we'll see increased military conflict, the severity of which will be proportional to the scarcity. None of this will "haunt" anybody; it's just a game to survive, which has been fairly easy for the last few decades, and that is changing.
People on the internet, especially on Twitter, Reddit, and here, are too innocent and/or naive. War is what you described, and that will never change.
It doesn't matter what system or ideology people invent; war will always appear as a way to get power or resources. Or just because people are bored, as Dostoevsky said.
How? If power and resources are not scarce, then people will start doing despicable things just because they can and want to. As I said, most of you here are naive and will not see it.
Don't you think conflicts and wars also have an ideological/cultural component? Freedom vs dictatorship, communism vs capitalism, democracy vs authoritarianism?
No, these are always the tools used to convince people to go to war, and later to justify the results.
They are part of the ways power maintains power. Every first king is just a warlord, a thug more brutal than the rest, and after defeating their enemies they suddenly declare the role of king is ordained by the god(s) and to challenge it is heresy. They set up the notion that rulers are more noble than common men, which is why they should rule over them and control the land. In later years they will use these tools to again convince their people to fight for them in the name of "country" or some other such notion.
And what peoples are against "freedom"? Do you really believe terrorists attacked the US because they "hate our freedoms"? No, it's because the US systematically destroyed their homelands, denied them their own freedoms, and forcibly took their resources. But when you fill your gas tank you aren't comfortable recognizing that a hidden part of the price of the gas is the civilians murdered so you could have it.
I'm endlessly surprised to see educated adults buying into the stories we tell children about how the world works.