Technically, updating priors wouldn't necessarily be warranted. Consider a statement "X implies Y", e.g. "the government is corrupt" implies "SBF won't go to jail". Just because X implies Y does not mean ~Y implies ~X. E.g., SBF going to jail does not imply the government is not corrupt.
Sure but it is a simple result in Bayesian statistics that if event X increases your confidence in fact Y then ~X should decrease your confidence in Y.
For example, if SBF evading jail would increase your confidence in the statement "The US justice system is wholly corrupt" then SBF being sentenced should decrease your confidence in it.
In your example, according to logic, if X implies Y, then if you don't have Y, you necessarily don't have X. If this were a logic exercise, then "SBF goes to jail" (not Y) would necessarily imply "the government is not corrupt" (not X).
However, in real life there's no connection between the two.
¬X == 'the government is not corrupt' is not an option for those people who argue that the government is corrupt.
In conclusion: naysayers say he won't be convicted, which would imply, and thus prove, that the government is corrupt. The top comment says he may get a sentence, meaning the government is not necessarily corrupt. Yaysayers say the government is not corrupt and he will get a conviction iff he is guilty.
This is trivial, but difficult to formalize. Thanks for your correction.
I would formalize it as "(corruption ∨ ¬guilty) <-> ¬jail".
- If the government is corrupted, it does not matter whether SBF is guilty; he will not go to jail.
- If the government is not corrupted and SBF is not guilty, he will not go to jail.
- Only if the government is not corrupted and SBF is guilty will he go to jail.
The problem is: there are more factors in life than just a corrupted government and guilt. There are juries, capable lawyers, incapable DAs, loopholes, you name it.
So in truth we have "(corruption ∨ ¬guilty ∨ X) <-> ¬jail", with X being the unknown. Thus, if SBF does not go to jail, it could be true that the government is not corrupted, that SBF is guilty, but any of the other factors were at work.
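That biconditional can be checked mechanically. Below is a throwaway truth-table sketch (the variable names are my own) showing that observing "SBF does not go to jail" is consistent with a non-corrupt government and a guilty SBF, as long as some unknown factor X did the work:

```python
from itertools import product

def no_jail(corruption, guilty, x):
    # (corruption ∨ ¬guilty ∨ X) <-> ¬jail: jail is avoided exactly
    # when at least one jail-avoiding factor holds.
    return corruption or (not guilty) or x

# Enumerate all worlds in which SBF does not go to jail.
worlds = [
    (c, g, x)
    for c, g, x in product([True, False], repeat=3)
    if no_jail(c, g, x)
]

# Among those worlds, is there one with a non-corrupt government
# and a guilty SBF? Yes: the unknown factor X accounts for it.
print(any(not c and g for c, g, x in worlds))  # True
```

So "no jail" alone pins down nothing about corruption, which is the point being made here.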
I think this is what people are really arguing about: what will be causally relevant for the outcome. Mind you, even a conviction would not convince (ha!) people that the government is not corrupt. They'd rather say that somebody did not pay enough, other interests were at work, aliens, and so on.
The truth is that you cannot infer much from a singular outcome if you do not have extremely good insight into the mechanics behind the outcome. Which is precisely why people would rather update priors as a way to build up an evaluation based on statistics over a longer time frame. Quite ingenious, if you ask me.
> Just because X implies Y does not mean ~Y implies ~X.
As others have mentioned, X implies Y does in fact require ~Y implies ~X. I think your example is confusing because "the government is corrupt" means many different things, but you're using it in a rather specific way ("the government is protecting SBF"). The equivalence of `X implies Y` and `~Y implies ~X` is more manifest through the following example:
"The government is protecting SBF, so SBF won't go to jail"
and
"SBF went to jail, so the government wasn't protecting him."
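The equivalence itself can also be verified exhaustively; here is a minimal sketch checking all four truth assignments:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p holds and q fails.
    return (not p) or q

# X -> Y and ¬Y -> ¬X agree in every one of the four assignments,
# so the two statements are logically equivalent (contraposition).
print(all(
    implies(x, y) == implies(not y, not x)
    for x, y in product([True, False], repeat=2)
))  # True
```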
I get what you mean, but I think you've formulated this incorrectly. Let P(Y) be your prior about government corruption. Let X be the event that SBF is arrested. You want to compute P(Y|X) using the Bayesian update formula and then set P(Y) = P(Y|X). That is what is meant by re-evaluating your priors.
You're modelling X and Y as propositions and you're correct about the inference of ~X and ~Y, but Bayesian updating is about degree of belief in those propositions, which your inference is not a claim about.
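A minimal numerical sketch of that update, with probabilities invented purely for illustration, shows the asymmetry the parent comment describes: if observing X raises P(Y) above the prior, then observing ¬X must lower it below the prior.

```python
# Y: "the government is corrupt"; X: "SBF is not jailed".
# All numbers below are made up solely to illustrate the mechanics.
p_y = 0.2              # prior P(Y)
p_x_given_y = 0.9      # P(X | Y): a corrupt government rarely jails him
p_x_given_not_y = 0.3  # P(X | ¬Y)

# Law of total probability, then Bayes' rule for each observation.
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
p_y_given_x = p_x_given_y * p_y / p_x
p_y_given_not_x = (1 - p_x_given_y) * p_y / (1 - p_x)

# X pushes belief in Y up; ¬X necessarily pushes it down.
assert p_y_given_x > p_y > p_y_given_not_x
print(round(p_y_given_x, 3), round(p_y_given_not_x, 3))
```

The new posterior then becomes the prior for the next observation, which is what "re-evaluating your priors" amounts to in practice.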
This is a straw-man argument. Emerging economies get money and jobs out of the globalization arrangement. That's more than enough to sell emerging economies on the idea. Globalization needed to be sold to western economies. Why, after all, should the western economies sacrifice money and jobs? Western economies, they were told, would benefit due to more efficient economic operations, or so the neo-liberal economic orthodoxy went.
Now globalization is unpopular in the west because the west realized neo-liberalism is bullshit, for lack of a better term.
Western economies benefited immensely from neo-liberalism. The problem is that by nature, the most significant gains went to the rich. The rest of the population got some cheaper gadgets to play with and a lot of bullshit service jobs.
According to During, DePalma was supposed to be an author on the paper. According to DePalma, they had brief discussions about collaborating but ultimately decided not to.
It doesn't matter, because if the article is correct it's on him to produce the raw data; there is no need to speculate. As it is, the story is fishy in every sense of the word and definitely warrants an investigation.
Apparently he doesn't have the data anymore. According to the article, independent third parties did see the data and didn't think it was suspicious. The implication of what you are saying is that if you lose data for any reason then you're automatically guilty of data forgery. Obviously that's not a great precedent to set.
It's a recently published paper, barely a few months old. Conveniently, when questioned he claimed the analysis had been done years ago by someone who's now dead and the raw data is lost. I don't know about you, but I have copies of all the raw data for everything I ever published, simply because that's the sort of stuff a researcher would naturally keep. Of course, the story could theoretically have happened as he says, but it's quite the coincidence that this specific data set is found to be irrecoverably lost only after he's accused of fraud.
Simply using Occam's razor in the absence of better evidence, which it is on those making the claim to produce. You can't just publish papers and then go "I've done the isotopic analyses, trust me bro." Researchers could make up anything then; there needs to be accountability. At the very least, his team needs to retract the paper.
You've conveniently omitted the independent third parties that had seen the data. It's a messy world. Unexpected things happen. How do these third parties factor into Occam's razor? You need to add more parameters to your explanation as to why the third parties would verify the existence of non-existent data.
Here's another possibility: During goes to DePalma to collaborate and asks if DePalma still has the data. DePalma says he doesn't. During sees this as an opportunity to claim credit for the work.
It's impossible to tell which scenario is more likely. Are you really willing to ruin someone's career over purely circumstantial evidence provided by a biased witness?
You mean the sort of third parties that let a paper slip through the cracks which peers contacted by Science.org claim should have never been published because of the glaring errors it contains?
What evidence do you have that anyone has actually looked into those data? In what context? You're simply repeating his claim but there isn't evidence the data was ever scrutinized. In research you're shown other people's data all the time - that doesn't mean that people actually look into it.
I suspect that you either did not read the article posted here or that you are ignoring relevant parts of it when you reply in this thread for reasons known only to yourself. Perhaps you should state your relationship to During and DePalma so others can judge whether there is a conflict of interest here.
In a reply further down you pose this scenario, though the evidence against it is covered in the story that it appears you didn't read.
>Here's another possibility: During goes to DePalma to collaborate and asks if DePalma still has the data. DePalma says he doesn't. During sees this as an opportunity to claim credit for the work.
From the article it is clear that she used her own data, from samples that she collected (she is credited in DePalma's acknowledgements) and that he shipped to her so that she could study them. She attempted to get him to be a co-author on her paper, but he declined and instead put together a paper that looked to reviewers to be a bit haphazardly assembled.
From the article - DePalma's paper:
>Several independent scientists consulted about the case by Science agreed the Scientific Reports paper contains suspicious irregularities, and most were surprised that the paper—which they note contains typos, unresolved proofreader’s notes, and several basic notation errors—was published in the first place. Although they stopped short of saying the irregularities clearly point to fraud, most—but not all—said they are so concerning that DePalma’s team must come up with the raw data behind its analyses if team members want to clear themselves.
From the article - During's work:
>After she returned to Amsterdam, During asked DePalma to send her the samples she had dug up, mostly sturgeon fossils. He did so, and later also sent a partial paddlefish fossil he had excavated himself. During obtained extremely high-resolution x-ray images of the fossils at the European Synchrotron Radiation Facility in Grenoble, France. The x-rays revealed tiny bits of glass called spherules—remnants of the shower of molten rock that would have been thrown from the impact site and rained down around the world.
It is reasonable to conclude from the text that During collected samples, had them independently analyzed, and that her analyses resulted in the discovery of the spherules lodged in the gills that helped cement this site as contemporaneous with the Chicxulub impact. The 2019 paper that DePalma co-authored with others, including Melanie During's mentor Jan Smit, became the basis for the movie narrated by David Attenborough. It put him in the spotlight, which, judging from the various posts available on Twitter and Reddit, appears to be something that feeds his needs.
As to those wondering how, as a grad student, he could come to control access to the site, it is worth noting that one of the people he thanks in his acknowledgements is his dad, who is an orthodontist and who, it may be inferred, could be the source of the funding needed to secure access to the Tanis site.
If it was my kid and they were needing to buy something to help them complete their studies then my wallet would open. I expect his Dad feels the same way.
I do not know either of these people and strongly suspect that in the end, we will all know more about them than we ever wanted to know.
I am glad that they were able to gather the samples, complete the analyses, compile the supporting data from multiple lines of inquiry to bolster the case for Tanis being contemporaneous with Chicxulub. I'm a geophysicist who likes geosciences.
Well for one, I don't want to add another language to my tool chain. Many languages can compile C directly. For instance, in Swift or Go you can add C source files directly to your project and have them compile as part of your Swift or Go build. You can't do that with Rust or C++.
C is the lingua franca of the software development world.
From the industrial revolution to some time in the early-to-mid 20th century, materials were more expensive than labor. It made total sense to ornately decorate things as the cost was not much more than the material itself.
Now labor is vastly more expensive than materials. Making things easy to build makes them way cheaper.
That's quite interesting. Who enforces the parking limits? Surely the city has a very advantageous negotiating position if the city is responsible for enforcement.
What excites me about ChatGPT is the fact that you can take a lot of data and a huge model and make it do something cool. Right now, "making it do something cool" costs tens of millions of dollars. If that cost can be brought down to the tens of thousands of dollars, I think we'd start to see really mind-blowing applications.
The point is that iOS has always used a separate process for computing the GPU state. Android does this on the main thread, which is why Android feels like garbage. And I say this as a former Android developer. Animations on Android feel like peanut butter. They always will until they are moved off the main thread.
No, it has not. If you talked to the GPU through OpenGL, you did it on the main thread. You pass all the state that you computed in that thread. Chances are you computed that state in that thread too.
There are some optimizations regarding e.g. scrolling being computed in a different thread/process (still on the CPU) before compositing (on the GPU) but that is not a requirement for smooth animation.
That's not technically correct (which is always the best kind of correct :) ), HWUI has been multithreaded for quite a long while now. That doesn't prevent apps from doing bad things, but it's been possible to do smooth animation on Android since the days of Project Butter.
Android is the opposite of barebones. It comes loaded with crap, much of it is using "stop the world" garbage collection while also producing lots of garbage. Building on Java is Android's original sin. You could still build an NDK app, talk to OpenGL, and hit 60Hz on low-end devices, years ago, but that's not how most stuff is developed. There are layers upon layers of crap between applications and the hardware, to the point where apparently people have come to believe that you need special OS interfaces to do smooth animation.