Actual answer is your chat window. The Steam app has a friends chat function, and you can send "stickers", which are often game-specific images or GIFs, e.g. [1] this bouncing sheep from Mabinogi:
If 1/6 of Americans are potential repeat federal felons based on just one activity, I find it highly dubious that the other 5/6 can't also be, given the hundreds of other activities we undertake each day. Using your parents' Netflix/Disney+/etc. password can technically be prosecuted under the CFAA [1], for example. That's probably another 1/6 at least. Now it's 1/3 of the country.
>A few months after leaving Korn/Ferry, Nosal solicited three Korn/Ferry employees to help him start a competing executive search business. Before leaving the company, the employees downloaded a large volume of "highly confidential and proprietary" data from Korn/Ferry's computers, including source lists, names, and contact information for executives.
Extending that ruling to Netflix password sharing is a stretch.
Moreover, you can't say "I can think of one activity that many Americans do which is a felony" and then apply induction to claim that the other activities Americans do surely contain felonies as well.
>That's probably another 1/6 at least. Now it's 1/3 of the country.
That's only true if you assume the populations of weed smokers and Netflix watchers don't intersect, which is... doubtful.
At any rate, possession of marijuana or other controlled substances does not mean that one uses them, so lots of people are theoretically in possession because they give someone a ride who has drugs on them.
> repeatedly refused actual begging to accept payment for things
I don't claim to know Mozilla's internal workings, but my wife works for an education-space 501c3, and there are very strict rules about how they can fundraise, how they can spend money that's been donated, etc. I'm sure Mozilla Foundation is large enough to manage this stuff, but things like per-project bank accounts and tax records are still overhead they would have to deal with. I know one of their (my wife's org's) thorniest areas is around what money can be spent on non-"core mission" expenses.
There are strict rules for businesses soliciting donations. Mozilla Corporation would have to be more careful than most because of confusion between the nonprofit Mozilla Foundation and the for-profit Mozilla Corporation. And many people who claim they would donate to Mozilla Corporation demand per-project accounting, if they don't demand elimination of all projects they dislike.
If we're limiting 'corruption' to just be about bribes, then sure. Of course, in reality it also encompasses racism, nepotism, etc (i.e. anything that is a "corruption" of the impartial execution of their jobs).
I suspect many Black people would prefer paying a bribe to being killed by police at an outsize ratio, or paying a bribe to being charged more aggressively and sentenced more harshly.
Police brutality and incarceration are worse than bribes, my dude.
Chattel slavery, meaning direct, constant, and complete control over one's life and death and the reduction of the person to mere property, is essentially the most authoritarian institution there can be.
Not a huge Chomsky fan. He calls himself an anarchist, but if you pin him down on specifics he turns into a minarchist rhetorically, and a Social Democrat in practical matters.
He's similar to Lenin, imo, in that he advocates using the State to prepare to dismantle the State, all while gassing up the things that the State provides (e.g. social protections). There's never anything more than a vague promise to move on from that in the future, which is exactly the same as the single-party-State USSR.
People mistake Capitalism as the driver for authoritarianism, but Capitalism is just the means to gain power/wealth in our current society, with hierarchical government being the framework within which Capitalism operates. Greed is the driver, and greed is intrinsic to humans. But greed without a framework to amass power (like a State) can only operate on an individual level.
> As far as I can tell, there's no real explanation of how it supposedly came about.
Critical awards are a huge boon in getting your book onto the very limited and dwindling shelf space (and especially onto endcaps and displays, where it's going to be noticed), or onto the front page of online book stores.
In a field that is hurting for readers, that's a huge deal. It will sell books for a *very tiny* number of titles, but it won't create a stable fanbase. So everyone is competing for those critical awards, causing the genre to spiral.
This is a really good analysis article; thank you for posting it. The critic vs. consumer decoupling rings true to me, and this is obviously the worst economy to exist in as a "struggling" anything.
There are a lot of industries that are struggling right now to figure out how to re-monetize independent of large corporations (like magazines/publishers/movies/etc.), because those corporations are cutting out anyone not already hugely profitable.
I feel like whatever solution we eventually land on to 'democratize' media funding will also be a good solution to our FOSS funding problem.
> When researchers measure an individual particle, the outcome is random, but the properties of the pair are more correlated than classical physics allows, enabling researchers to verify the randomness.
Is this not possibly just random-seeming to us, because we do not know or cannot measure all the variables?
> The process starts by generating a pair of entangled photons inside a special nonlinear crystal. The photons travel via optical fiber to separate labs at opposite ends of the hall.
> Once the photons reach the labs, their polarizations are measured. The outcomes of these measurements are truly random.
I understand that obviously for our purposes (e.g. for encryption), this is safely random, but from a pure science perspective, have we actually proven that the waveform collapsing during measurement is "truly random"?
How could we possibly assert that we've accounted for all variables that could be affecting this? There could be variables at play that we don't even know exist, when it comes to quantum mechanics, no?
A coin toss is completely deterministic if you can account for wind, air resistance, momentum, starting state, mass, etc. But if you don't know that air resistance or wind exists, you could easily conclude it's random.
I ask this as a layman, and I'm really interested if anyone has insight into this.
Bell's Theorem (1964) describes an inequality that should hold if quantum mechanics' randomness can be explained by certain types of hidden variables. In the time since, we've repeatedly observed that inequality violated in labs, leading most to presume that the normal types of hidden variables you would intuit don't exist. There are some esoteric loopholes that remain possibilities, but for now the position that matches our data the best is that there are not hidden variables and quantum mechanics is fundamentally probabilistic.
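To make the "inequality" part concrete, here is a minimal Python sketch of my own (the detector angles are the standard CHSH choices, not anything from the article) showing the quantum prediction exceeding the classical bound:

    import math

    def qm_correlation(a, b):
        # Singlet-state prediction: E(a, b) = -cos(a - b) for detector angles a, b
        return -math.cos(a - b)

    # Standard CHSH angle choices (radians)
    a1, a2 = 0.0, math.pi / 2
    b1, b2 = math.pi / 4, 3 * math.pi / 4

    S = (qm_correlation(a1, b1) - qm_correlation(a1, b2)
         + qm_correlation(a2, b1) + qm_correlation(a2, b2))

    print(abs(S))  # ~2.83, above the local-hidden-variable bound of 2

Any theory with the "normal" kind of hidden variables is stuck with |S| <= 2; the lab experiments keep measuring values near 2*sqrt(2).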
So to make sure I am understanding correctly, the normal distribution of the outcomes is itself evidence that other hidden factors aren't at play, because those factors would produce a less normal distribution?
I.e. if coin toss results skew towards heads, you can conclude some factor is biasing it that way, therefore if the results are (over the course of many tests) 'even', you can conclude the absence of biasing factors?
Basically, they get to measure a superposition particle twice by using an entangled pair. Two detectors each measure one of the particle's three possible spin directions, which are known to be identical (normally you only get to make one measurement, but now we can essentially measure two directions). We then compare, in a chart, how the different spin directions agree or disagree with each other.
15% of the time they get combination result A, and 15% of the time they get combination result B. Logically we would expect a result of A or B 30% of the time, and combination result C 70% of the time (there are only three combinatorial output possibilities: A, B, C).
But when we set the detectors to rule out result C (so they must be either A or B), we get a result of 50%.
So it seems like the particle is able to change its result based on how you deduce it. A local hidden variable almost certainly would be static regardless of how you determine it.
This is simplified and dumbified because I am no expert, but that is the gist of it.
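If it helps, here is a rough enumeration of my own (following Mermin's popular version of the argument, so the numbers differ from my percentages above) showing why pre-written answers can't reproduce the quantum statistics:

    from itertools import product

    # Each particle carries a pre-set answer (+1/-1) for each of three detector
    # settings (a local hidden-variable "plan"). When Alice and Bob happen to
    # pick different settings, how often do their results match?
    worst = 1.0
    for plan in product([+1, -1], repeat=3):
        pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
        match_rate = sum(plan[i] == plan[j] for i, j in pairs) / len(pairs)
        worst = min(worst, match_rate)

    print(worst)  # 1/3: no plan can match less often than a third of the time
    # Quantum mechanics (detectors 120 degrees apart) predicts matches only
    # 1/4 of the time, which no set of pre-written answers can produce.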
Not really. The shape of the distribution of whatever random numbers you are getting is just a result of the physical situation and nothing to do with the question posed by Bell.
Let me take a crack at this. Quantum mechanics works like this: we write down an expression for the energy of a system using position and momentum (the precise nature of what constitutes a momentum is a little abstract, but the physics 101 intuition of "something that characterizes how a position is changing" is ok). From this definition we develop both a way of describing a wave function and a way of time-evolving this object. The wave function encodes everything we could learn about the physical system if we were to make a measurement, and thus is necessarily associated with a probability distribution from which the universe appears to sample when we make a measurement.
It is totally reasonable to ask: maybe that probability distribution just indicates that we don't know everything about the system in question, and if that were the case and we had the extra theory and extra information, we could predict the outcome of measurements, not just their distribution.
Totally reasonable idea. But quantum mechanics has certain features that are surprising if we assume that is true (that there are the so-called hidden variables). In quantum mechanical systems (and in reality), when we make a measurement, all subsequent measurements of the system agree with the initial measurement (this is wave function collapse: before measurement we do not know what the outcome will be, but after measurement the wave function just indicates one state, which subsequent measurements necessarily produce). However, measurements are local (they happen at one point in spacetime), but in quantum mechanics this update of the wave function from the pre-measurement to the post-measurement state happens all at once for the entire quantum mechanical system, no matter its physical extent.
In the Bell experiment we contrive to produce a system which is extended in space (two particles separated by a large distance) but for which the results of measurement on the two particles will be correlated. So if Alice measures spin up, then the theory predicts (and we see), that Bob will measure spin down.
The question is: if Alice measures spin up at 10:00am on Earth and then Bob measures his particle at 10:01am Earth time on Pluto, do they still get results that agree, even though the wave function would have to collapse faster than the speed of light to make the two measurements agree (since it takes much longer than one minute for light to travel from Earth to Pluto)?
This turns out to be a measurable fact of reality: Alice and Bob always get concordant measurements no matter when the measurements occur or who does them first (in fact, because of special relativity, there really appears to be no state of affairs whatsoever about who measures first in this situation: whose measurement comes first depends on how fast you are moving).
Ok, so we love special relativity and we want to "fix" this problem. We wish to eliminate the idea that the wave function collapse happens faster than the speed of light (indeed, we'd actually just like to have an account of reality where the wave function collapse can be totally dispensed with, because of the issue above) so we instead imagine that when particle B goes flying off to Pluto and A goes flying off to earth for measurement they each carry a little bit of hidden information to the effect of "when you are measured, give this result."
That is to say that we want to resolve the measurement problem by eliminating the measurement's causal role and just pre-determine locally which result will occur for both particles.
This would work for a simple classical system like a coin. Imagine I am on Mars and I flip a coin, then neatly cut the coin in half along its thin edge. I mail one side to Earth and the other to Pluto. Whether Bob or Alice opens their envelope first, and in fact no matter when they do, if Alice gets the heads side, Bob will get the tails side.
This simple case fails to capture the quantum mechanical system because Alice and Bob have a choice of not just when to measure, but how (which orientation to use on their detector). So here is the rub: the correlation between Alice and Bob's measurements depends on the relative orientation of their detectors, and even though each detector measures a random result, that correlation is correct even if Alice and Bob, for example, just randomly choose orientations for their measurements. That means quantum mechanics describes the system correctly even when the results would have had to be totally determined, for all possible pairs of measurements, ahead of time at the point the particles were separated.
Assuming that Alice and Bob are actually free to choose a random measuring orientation, there is no way to pre-decide the results of all pairs of measurements ahead of time without knowing, at the time the particles are created, which way Alice and Bob will orient their detectors. That shows up in the Bell Inequality, which basically shows that certain correlations between Alice and Bob's detectors are impossible in a purely classical universe.
Note that in any given single experiment, both Alice's and Bob's results are totally random; QM only governs the correlation between the measurements, so Alice and Bob cannot use this to communicate any information to each other.
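If you want to see the gap numerically, here is a small Monte Carlo sketch of my own (it uses one simple hidden-variable model purely as an illustration, not as a statement about all possible models):

    import math, random

    def lhv_trial(angle):
        # Both particles carry the same hidden direction lam; each detector
        # just reports the sign of the projection onto its own axis.
        lam = random.uniform(0, 2 * math.pi)
        alice = 1 if math.cos(lam) >= 0 else -1
        bob = -1 if math.cos(lam - angle) >= 0 else 1  # anti-correlated source
        return alice * bob

    for deg in (0, 45, 90, 135, 180):
        theta = math.radians(deg)
        lhv = sum(lhv_trial(theta) for _ in range(20000)) / 20000
        qm = -math.cos(theta)  # quantum singlet prediction
        print(f"{deg:3d} deg   LHV ~ {lhv:+.2f}   QM = {qm:+.2f}")

The hidden-variable model agrees with QM at 0, 90, and 180 degrees but misses at the in-between angles, and it is exactly those intermediate-angle correlations that Bell's inequality turns into a testable bound.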
>I ask this as a layman, and I'm really interested if anyone has insight into this.
Another comment basically answered this, but you are touching on hidden variable theories in QM: the idea that there could be missing variables we can't currently measure that explain the seeming randomness of QM. Various tests have shown, and most physicists agree, that hidden variables are very unlikely at this point.
Local hidden variables are impossible. Non-local hidden variables are perfectly possible. Aesthetically displeasing, since it requires giving up on locality, but not logically impossible. Non-local interpretations of quantum mechanics give up on locality instead of giving up on hidden variables. You can't have both, but either one alone is possible.
What's funny is that everyone is answering me saying, "you must be thinking of local hidden variables, which have been disproven", and I'm like, 'I didn't even know enough to differentiate between local vs non-local.'
I was never assuming the hidden variables had to do with quantum mechanics violating 'normal' physics, I was looking at this purely from what seemed like a logical standpoint of, "you can't prove or disprove what you don't know exists", like aliens or gods.
If the article was only talking about disproving local hidden variables bringing about non-random outcomes (which having read a bit now, I assume is the case as it references Bell Tests?), that wasn't clear to me.
I read it as claiming to be disproving the existence of any hidden variables in quantum mechanics that could affect a deterministic outcome, which seemed (to my lay, uninformed knowledge) to imply an essentially perfect or near-perfect understanding of quantum physics to make that claim; unless we believe our understanding to be perfect, how can we assert the non-existence of the unknown?
They even give a statistic of 99.7% random outcomes... which means that the .3% were (potentially) non-random?
> In its first 40 days of operation, the protocol produced random numbers 7,434 times out of 7,454 attempts, a 99.7% success rate.
It could still be a pseudo random number generator behind the scenes. For example, a typical quantum circuit simulator would implement measurements by computing a probability then asking a pseudo random number generator for the outcome and then updating the state to be consistent with this outcome. Bell's theorem proves those state updates can't be local in a certain technical sense, but the program has arbitrary control over all amplitudes of the wavefunction so that's not a problem when writing the simulator code.
If the PRNG were weak, then the quantum circuit being simulated could be a series of operations that solve for the seed being used by the simulator, at which point collapses would be predictable. It would also become possible to do limited FTL communication. An analogy: some people built a redstone computer in Minecraft that would detonate TNT repeatedly, record the random directions objects were thrown, and solve for the PRNG's seed [1]. By solving at two times, you can determine how many calls to the PRNG had occurred, and so get a global count of various actions (like breaking a block) regardless of where they happened in the world.
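For what it's worth, here is roughly what a measurement looks like inside a toy state-vector simulator (a sketch of my own, not any particular simulator's code), which is where the PRNG would sit:

    import random

    def measure_qubit(amplitudes, rng=random):
        # amplitudes: [amp0, amp1] for one qubit, assumed normalized
        p0 = abs(amplitudes[0]) ** 2
        outcome = 0 if rng.random() < p0 else 1   # the only "randomness" here
        # Collapse: zero out the other branch and renormalize
        collapsed = [0j, 0j]
        collapsed[outcome] = amplitudes[outcome] / abs(amplitudes[outcome])
        return outcome, collapsed

    # Equal superposition: outcomes look random to a user but are fully
    # determined by the seed.
    rng = random.Random(1234)
    state = [2 ** -0.5, 2 ** -0.5]
    print([measure_qubit(state, rng)[0] for _ in range(20)])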
This is a difference between the ontological (as-is) and the epistemological (as-modeled). I asked pretty much the same thing; you might find some of the responses I got illuminating. [0]
I don’t think I’ll ever be convinced that there’s some kind of fundamental “randomness” (as in one that isn’t a measure of ignorance) in the world. Claiming its existence sounds like claiming to know what we don’t know.
This is what I tell my boss when I miss standups.