Hacker News | seventhtiger's comments

I'm directing an indie game at the moment. I think programming and art skills can both be valuable, depending on the game and genre and so on, but I strongly agree about artists having a better perspective on game production. They understand that games are content and that most of the work goes into content creation. Sometimes you discuss adding two characters and the programmer is thinking "I can just share the movement code," while the artist knows it's at least weeks of work to create all the art. The same goes for sound, writing, and game design.

The automating, efficiency-focused mindset of programmers is great, but even small games are big pieces of art that require a lot of diligent labor, which you must expect and respect.

And of course, your point is that no one sees programming in a screenshot or trailer.


Totally fine to paint in broad strokes, and I would probably lean towards that perspective myself. However there are some games that show off the programming more than others, at least in motion. Movement based games, like technical platformers or similar, have a lot of expression through the systems that govern them. Whether or not a game feels good is fairly easy to figure out after spending some time with it, but understanding what would make it feel better and how to achieve it is a much harder skillset to develop. Obviously non-programmers can work out in high level terms what might improve game feel, but I think you can get much better results when you have deep understanding of how things work, what's possible, and what side effects may fall out of a given change.

That said, some of the most fun mechanics are born out of happy accidents involving interactions that weren't fully considered, so it's definitely not a constant.


I am being broad, but to be more specific: if we classify games along two dimensions, tall (aka systems) versus wide (aka content), then I can't think of a single game that is more tall than wide. At best you get games that are squares, like Tetris and Pac-Man, where a small amount of systems and a small amount of content go together.

The vast majority of games are very wide. That includes technical platformers, which have very fine-tuned movement systems, but those must be accompanied by a lot of equally fine-tuned levels. Another way of seeing it is that content is the "space" in which your systems are expressed, and more expressive systems require more space. A complex combat system will demand more enemy characters for its complexity to be relevant.

But I was only speaking in terms of programmers understanding the nature of game production, rather than their actual contribution to the game. Of course there are very programming-forward games, and entire genres driven first and foremost by innovative gameplay code. But even in those games and genres, the programmer must understand that on top of the unique features being programmed, most of the important work will still be content creation. It's the nature of the beast. I'm a programmer who had to learn this the hard way. It's nothing like a software startup; it's more like a movie production with a software project inside.


I used to be president of a Toastmasters club and a public speaking competitor.

First of all is practice. The clubs are great for this because they provide constant practice, but practicing will help a lot even without anyone else. Keep presenting it over and over and over: while driving, at home, to friends.

I asked a world champion public speaker if the nerves ever go away. He said no.

Second, I think talking to the people I will be presenting to before I present helps a lot. It makes me feel like I'm presenting to friends.

A third thing was thinking about public speaking like swimming. The depth of the water doesn't change how you swim, the same way the size of the audience doesn't actually affect what you have to do. So focus on saying and doing the things you gotta do and forget about how deep the water is.

Fourth: definitely write your speech, write your answers to possible questions, and write as much as you can. After you write it, don't try to memorize it. Reciting uses a different part of your brain than speaking, and while you're delivering memorized lines, if your flow breaks and you forget a word, you will be lost. Write it to nail the structure and to make sure your ideas are expressible, but in the moment, improvise their expression.

If you're interested, join a Toastmasters club. If you're focused and put the time toward advancing your progress, you can finish a good chunk of the program in six months and reap a lot of benefits. There's still a lot to do after that, but it's advanced material with diminishing returns.


Agreed on all of these. It's a skill more like juggling than writing. You'll end up relying more on reflexes than something you thought of, and the practice helps to build the reflexes.


Game development is intensely iterative. One way to think of games is as software whose only value is its usability. It has no process workflows or business goals that it ever sacrifices usability for, because the user experience is the value proposition itself. It only sacrifices the user experience for other aspects of the user experience.

Failed and un-fun games do happen, but in general game developers put far more weight on this sort of process. There are no other goals you can claim were accomplished if the users think the game sucks. So you get these ruthless production cycles, and there's an appetite to cut things that don't work.


This seems like a very nice product.

I wanted to look into the developer guide to customizing a template. It's to add RTL support for Arabic content.

This gives me an error though: https://blot.im/developers


Unless the refugees are guaranteed a right to return, then you're just asking for Jordan and Egypt to facilitate ethnic cleansing and finishing the job.


Right of return is a non starter for the winning power. Which is why this conflict is intractable.


You cannot convince a people to give up on generational existential struggle with "might is right".


Maybe the universe is digital after all.


Quantization does not imply discreteness: https://physics.stackexchange.com/questions/206790/differenc...


And neither alone implies computability (or being 'digital').

You'd need determinism.

Some reply was (improperly?) flagged, but computability requires determinism.

All computable functions are functions from the integers to the integers.


One of the most famous problems in computer science, P vs NP, is about non-deterministic computation. So no, computation does not require determinism - there are in fact plenty of models of non-deterministic computation. There are even models of computation where the halting problem is solvable (typically called hyper-computation).

Now, it is true that computation does require some amount of determinism - if the universe were entirely non-deterministic, i.e. if there was no kind of causality and events were completely unrelated to each other, there could be no notion of computation. But no one believes in that type of universe. Adding some source of rare non-deterministic events to an otherwise deterministic universe does not hurt computation.


I think you picked a bad example, because the "computation" in NP is not really a computation to most of us.

Most programmers think that computation means "something that can be done on a Turing machine or equivalent". It isn't hard to extend this idea to things like a true random number generator, or a quantum computer. This shows your basic point that computation need not be deterministic.

But the "nondeterministic" in NP doesn't speak to an actual computation that programmers think can be done. It speaks to a computation that we'd like to be able to do, but most of us think can't be done. (There is a prize for proving that impossibility.) And while there might be models of computation where said computation can be done, few programmers would think of them as modeling an actual computation.

Here alert readers might jump up and say, "Quantum computers might be able to solve NP complete problems!" True, we don't have a proof that it is impossible. But at the present time, there is no reason to believe that it is possible either. See, for instance, https://www.scottaaronson.com/papers/npcomplete.pdf. And so it appears that for actual computers that can be built, there is no computation matching how we'd like to solve NP complete problems.


I was mainly trying to point out that the mathematical term "computation" is not limited to deterministic computers. Since the GP was mentioning computable functions as something requiring determinism, I believe they were also talking more about the mathematical notion of computation rather than physical computers.

I should also note that our computers can very much solve NP-complete problems. They can't run the nondeterministic algorithms that define NP, but all NP problems can be solved by a deterministic computer or a quantum computer; it just takes [much] more compute time (assuming P != NP; otherwise it may even take the same time).
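To illustrate that point, here is a minimal sketch (a made-up instance, in Python) of solving subset-sum, an NP-complete problem, on a deterministic machine: a nondeterministic machine could "guess" the answer in one step, while a deterministic one just tries every subset in exponential time.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Solve subset-sum (NP-complete) deterministically by brute force.

    A nondeterministic machine could 'guess' a certificate in one step;
    deterministically we try all 2^n subsets, which is exponential
    but perfectly computable.
    """
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None  # no subset sums to target

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # → [8, 7]
print(subset_sum([2, 4], 7))               # → None
```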

This is very relevant to this discussion, because it is in fact well known and proven that the non-determinism in the NP model does not give any extra computational power beyond a Turing machine. That is, a non-deterministic Turing machine can solve exactly the same set of problems as a deterministic Turing machine (though, as far as is known today, faster). The same is true of quantum computers.

Hyper-computation refers to even more fanciful mathematical models which are actually able to solve problems that a Turing machine can't solve, even with infinite time. They involve things like performing an infinite amount of Turing machine steps in one Hyper-Turing machine step, or having access to an oracle which tells you if a computation halts etc.


Yes, you can have fanciful "models of computation" with oracles of various kinds.

Few people really think of that as computation.

But I agree that there are real examples of physically possible computation which are not deterministic, the most widely used being true random number generators. And so theoretical computability is not necessarily the right model for the real world.


Note that analyzing the complexity of problems in relation to various oracles is probably half of computer science. Algorithms like those in NP are not exotic concepts, they are the bread and butter of many computer science courses and careers.

Hyper-computation is much more exotic, though, I'll give you that.


Random number generation, if physically non-deterministic, isn't computation.

https://en.wikipedia.org/wiki/Computable_function


The "nondeterministic" in NP ("Nondeterministic polynomial") means that problems in NP can be solved in polynomial time by a nondeterministic Turing machine. Such a machine would be like what people incorrectly think quantum computers would be, in the sense that it would explore all the possible paths at once. It's the same meaning as in a nondeterministic finite automaton (NFA) vs a deterministic finite automaton (DFA).
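The NFA/DFA equivalence can be sketched concretely (a toy example, not from the thread): a deterministic simulator just tracks the set of states the NFA could be in, which is exactly the subset construction that builds the equivalent DFA.

```python
def nfa_accepts(transitions, start, accepting, word):
    """Simulate a nondeterministic finite automaton deterministically.

    transitions: dict mapping (state, symbol) -> set of next states.
    We track the set of all states the NFA could currently be in,
    effectively running the DFA produced by the subset construction.
    """
    current = {start}
    for symbol in word:
        current = {nxt
                   for state in current
                   for nxt in transitions.get((state, symbol), set())}
    return bool(current & accepting)

# Hypothetical NFA over {a, b} accepting strings that end in "ab"
t = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}
print(nfa_accepts(t, "q0", {"q2"}, "abab"))  # True
print(nfa_accepts(t, "q0", {"q2"}, "abba"))  # False
```

The same trick is why an NFA recognizes nothing a DFA can't: the deterministic simulation may need exponentially many subset-states, but it always exists.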

The question of P vs NP is not whether we haven't determined if we can do a particular computation (either at all or in polynomial time). It's whether a deterministic Turing machine could solve in polynomial time the same class of problems that we _know_ a nondeterministic Turing machine could solve in polynomial time.

Of course, the nondeterministic Turing machine is not a physically realistic model of computation.


I'm trying to figure out whether you're just trying to echo what I said in different words, or whether you thought that you were correcting what I said.

If the latter, please be specific about what misunderstanding you think I might have.


> There are even models of computation where the halting problem is solvable (typically called hyper-computation).

BTW, it's not only hyper-computation that can solve the halting problem.

The halting problem is decidable in models of computation that have finite state (although the decider machine does need more state than the machine being analyzed).


The term "the halting problem" typically refers to the problem "given a Turing machine and a starting state of the tape, determine if the Turing machine will halt".

There are other versions of the halting problem for systems that are more limited than a Turing machine, such as finite automata ("given a finite automaton and an initial state, determine if the automaton will halt"). Some of these other versions are indeed solvable, such as the one you mention. But these are different problems, not THE halting problem. As far as is known today, all such systems are strictly less powerful than Turing machines; that is, for any system where it is decidable whether a computation in that system halts, there are problems it can't solve that a Turing machine can. (This is related to the Church-Turing thesis, which says that anything effectively computable is computable by a Turing machine.)

Hyper-computation refers to models of computation where THE halting problem (does an arbitrary Turing machine halt) is solvable.


I meant "the halting problem" as in the informal description of the problem, e.g. as in how it is described in Wikipedia: "the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever".

If you take that description at face value and consider a model of computation with finite state (like real-world computers have), then it is decidable.

If you take the usual formal description of the halting problem, which like you said, is specifically defined over Turing machines (i.e. a theoretical model which assumes you can have a machine with literally infinite state, which is impossible to construct in our universe), then yes, you'd need hyper-computation to solve that.


I initially thought you were referring to things like deterministic finite automata (DFAs) or total languages (Idris) when you talked about finite state.

If instead you are simply referring to the observation that physical computers have a finite amount of memory, and thus we can solve the halting problem in finite time by simply iterating over all possible configurations, that is a somewhat uninteresting observation, since if we are already talking about real physical constraints, that algorithm is entirely useless for even the simplest computers from the 50s and 60s. It's basically equivalent to saying "any program will halt, because the sun will destroy all computers on Earth when it eventually dies."

More interestingly, it turns out that there are some finitist versions of the halting problem for finite-tape Turing machines, and they act as a similar kind of limit. That is, it turns out that the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states (and this also requires a finite-state Turing machine with a larger tape than the one under analysis).

This result can actually be used in a very similar way to the infinite-tape halting problem: it guarantees that, if your system is equivalent to a Turing machine with tape length N and M possible symbols, checking whether an arbitrary program halts can take on the order of M^N computational steps. This can be used to argue that it is effectively impossible to check whether a program halts, much as the "true" halting problem is used to prove that it is actually impossible.

For example, an arbitrary program for a computer with as much memory as the infamous "640K ought to be enough for anybody" quote could require on the order of 256^640,000 (~10^1,541,274) computational steps to check whether it halts. So we can just as easily say it is impossible to check, and we wouldn't be far off.
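As a rough sanity check on the order of magnitude (assuming 640,000 byte-sized memory cells, and counting only memory contents, not head position or internal state, so it's a lower bound):

```python
from math import log10

# Order-of-magnitude count of distinct memory configurations for a
# machine with 640,000 byte-sized cells (a stand-in for the 640KB
# example). Each cell holds one of 256 values, so there are
# 256^640000 configurations.
cells = 640_000
symbols = 256  # values a byte can take
digits = cells * log10(symbols)
print(f"about 10^{digits:.0f} configurations")
```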

This is very different from something like a total language (e.g. Idris) or a DFA, where it is actually possible to relatively quickly verify whether a program halts.


> thus we can solve the halting problem in finite time by simply iterating over all possible configurations, that is a somewhat uninteresting observation

It is, but that's not the observation I was making. You only mentioned one way of solving the problem, but that's not the only way.

> That is, it turns out that the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states

What do you mean by all possible states? If you mean literally all possible states, that's not true. I mean, yes, you could iterate over all possible states to solve that problem, but that's probably the least efficient way to solve it.

There are already-known algorithms which always solve the Halting problem for machines with finite state, and they don't need to iterate over all possible states. They do, however, need to iterate over all state transitions that the machine actually goes through (multiple times, even). However, these algorithms that I'm mentioning (i.e. cycle detection algorithms) are also quite dumb. They don't exploit any knowledge about the state transitions in order to analyze whether the machines halt or not, they just simulate the machine step by step (this is due to the definition of the cycle detection problem itself, which does not allow inspecting the program).
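A minimal sketch of the cycle-detection idea mentioned above (Floyd's tortoise-and-hare, applied to hypothetical toy machines): it decides halting for any finite-state step function without storing or enumerating all possible states.

```python
def halts(step, start):
    """Decide halting for a machine with finitely many states.

    step(s) returns the machine's next state, or None once it halts.
    Floyd's tortoise-and-hare cycle detection: in a finite state space
    a run either reaches None or eventually revisits a state, so
    finding a cycle proves the machine loops forever. Uses O(1) extra
    memory and only visits states the run actually reaches.
    """
    slow = fast = start
    while True:
        fast = step(fast)
        if fast is None:
            return True   # halted
        fast = step(fast)
        if fast is None:
            return True   # halted
        slow = step(slow)
        if slow == fast:
            return False  # cycle detected: runs forever

# Hypothetical toy machines over small integer states:
print(halts(lambda s: None if s >= 5 else s + 1, 0))  # True: counts up, halts
print(halts(lambda s: (s + 3) % 10, 0))               # False: cycles forever
```

Note this still simulates the machine step by step, as described above; it exploits no structural knowledge of the program.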

In principle, and even in practice, it's possible to make those algorithms significantly more efficient, at least for many of the machines (i.e. programs) that we care about.

I suspect it is not possible to make such a (fully automatic) algorithm significantly more efficient for all possible programs (even the nonsensical ones), although I don't think such a proof exists (if it does, I would like to see it). The closest I've been pointed to is a paper possibly implying that such an algorithm would have to be EXPTIME-complete, although even the person that pointed me to that paper had some difficulty interpreting it -- and even if that were true, that says nothing about its real-world efficiency.

> That is, it turns out that the only way to verify whether an arbitrary finite-state Turing machine will halt on a specific input is to check all possible states

Can you point me to a source that proves this claim? Not only I'm doubting it, but even if you are right, I'd be really interested in reading such a proof.

> For an example, an arbitrary program for a computer with as much memory as the infamous "640KB is enough for anyone" quote would require at least 640,000^255 (~10^1480) computational steps to check if it halts.

Again, I'm wondering why you are claiming that the only possible algorithms which can check whether arbitrary finite-state programs halt have to iterate over all possible states (or even all the actual state transitions).


By the standards of our time, our computers are expected to halt, so they're arguably deterministic as implemented today.


Halting has little to do with determinism. Our physical computers are also decidedly non-deterministic, at least for practical purposes: TLS, for example, relies heavily on an RNG.


To give a concrete example, a free particle can have any energy it likes; it's only bound states that have discrete spectra.

Mathematically, this corresponds to solutions of a particular differential equation existing only for particular values of the energy (which appears as a constant in the equation). To use a simpler DE as an example: dx/dt = kx has solutions Ce^kt for all k, but a more complicated DE might only have solutions for some k.
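The textbook instance of this (the standard infinite square well, sketched here from the usual derivation) makes the correspondence explicit:

```latex
% Time-independent Schrödinger equation for a particle in a box of width L:
-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} = E\,\psi ,
\qquad \psi(0) = \psi(L) = 0 .
% General solution \psi(x) = A\sin(kx) + B\cos(kx) with E = \hbar^2 k^2 / 2m;
% the boundary conditions force B = 0 and kL = n\pi, so only
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots
% survive. Drop the boundary conditions (a free particle) and every k,
% hence every E \ge 0, is allowed: a continuous spectrum.
```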


The current mainstream says that it is really not.

However, they might be wrong.

Hilbert space in quantum field theory is infinite-dimensional (otherwise the theory just doesn't work). Some physicists argue that Hilbert space is not infinite-dimensional: the holographic principle might imply a finite amount of information in a given region of space, potentially implying a finite-dimensional Hilbert space.

And that might be some kind of pixelation.


Does having inherently unpredictable phenomena have philosophical implications?


Certainly. One of the most fundamental philosophical questions is about the nature of time, precisely because all of our "laws" of physics are time-reversible, so from the perspective of physics time doesn't really exist, or is just a space-like dimension (no arrow). In that case the Universe should be perfectly deterministic: with enough computing power and precise enough knowledge of the initial conditions, everything should be perfectly predictable, and the future should be "fixed". In some way or another, probably most of philosophy is arguing about whether or not this is true and what it means.

This paper claims that 5% of 3-body systems in the Universe can't be predicted even in principle, because you would need to measure initial conditions to greater precision than the Planck length, which is impossible. And of course N-body systems where N > 3 are even more unpredictable, and the whole Universe is an N-body system, so if correct this would mean the end of determinism.

For a good treatment of this topic for a lay audience see Lee Smolin's recent book "Time Reborn".


> And of course N-body systems are even more unpredictable, and the whole of the Universe is an N-body system, so if correct it would mean the end of determinism.

I read this as the systems are, for all intents and purposes, practically unpredictable, not fundamentally unpredictable.

Just because we can't measure beyond the Planck length doesn't mean there aren't deterministic rules down there.

So my take is that the universe could still be deterministic, but that can't be knowable to us, since we'd have to be able to peer below the Planck length.

Maybe I'm reading this wrong.


Just to go back to the philosophy thread, this is one of the things that made Kripke famous (relatively speaking, philosophically speaking): a priori "knowability" or predictability is not the same as determinism.

You can have a system that is perfectly deterministic, but with outcomes that are a priori unknowable, in the sense of being unpredictable. Kripke didn't use this language, but if the information required to compute the prediction becomes unattainable, either because of capacity or input requirements, you can't make the prediction.

I think there's some very abstract computability theorem in computer science that's more recent that came to a similar conclusion. I think that came at it more from the sense of the amount of change in inputs that would pass during the time it took to simulate a system makes it impossible to perfectly simulate things past some point.

These aren't examples in physics, but they speak to how determinism per se isn't necessarily the same thing as predictability.


Indeed. The evolution of the universal wave function as described by the Schrödinger equation is perfectly deterministic. If you believe in that as the fundamental description of reality (which also means not believing in wave-function collapse), the result is the many-worlds interpretation of QM. Similarly, hidden-variable theories are deterministic.


If hidden variable theories exist, they're non-local. Which is a whole other philosophical problem.

As for many worlds - when everything is possible, nothing is explained.


Many-worlds doesn’t mean that everything is possible, it rather means that everything that is possible becomes actual.

It is arguably strongest in explanation, in the sense that it relies on the least amount of assumptions (just the Schrödinger equation).


> It is arguably strongest in explanation, in the sense that it relies on the least amount of assumptions (just the Schrödinger equation).

It really seems like physicists say this because they want it to be true, not because it's actually true.

I completely understand that they don't like the idea of non-local hidden variables (programmers don't love them much either), but the claim that an infinitely dividing universe, breaking into infinite copies and exploring all possible paths at all possible times, relies on fewer assumptions or is simpler than a non-local variable is just laughable to me. Maybe I'm just not getting it, but it really seems like a way of redefining the rules of a game until the preferred party wins.

"Hey we have this compression scheme that's incredibly fast. It's only a constant time lookup." "Really? How do you do that?" "Well we store and index all possible inputs." "Won't some outputs be longer due to the pigeonhole principle?" "Well actually in one single encoding it would, but we store an infinite number of different encodings of all possible inputs, meaning in at least one of the encodings it's smaller, and we just use that one." "So you store infinite variations of infinitely sized data and as a result claim your compression scheme is simpler?" "Yes, because when we go to decompress we spawn an infinite number of threads and each thread decompresses by following one of the encodings, and in that thread's view it's just a constant time lookup to store the index, which is clearly smaller (so compressed), and constant time to reverse and decompress." "And what about the complexity of the infinite threads with infinite copies of infinite storage?" "Ah, we don't count that; we only count the world line of the successful thread."

The incredible "simplicity" of Many-Worlds.


I think you're confusing simplicity with computational cost.

Bubble sort is simpler than quicksort. It is also more computationally expensive.

Universal wave function theory is simple in this sense. (IMHO the term many-worlds is doing the theory a disservice because it's fundamentally misleading. There is only one world, we just can't perceive most of it. Which is as it has been for all of humanity's existence.)


The many-worlds interpretation isn't deducible from just the Schrödinger equation.

The Schrödinger equation roughly predicts that if you put two detectors at the two possible positions where a light beam can go after a beam splitter, and fire a single photon at the beam splitter, both detectors will detect some "amount" of the photon*. What you actually see in experiments, however, is that one of the detectors detects a single photon, and the other detects 0. You also notice that if you repeat the experiment many times, the probability of detection is exactly equal to the square of the modulus of the amplitude of the wavefunction for that state.

To explain this observation, the MWI uses an extra assertion: that each "world" contains a single result, but that the number of "worlds" where the state is X corresponds to the square of the modulus of the amplitude of the Schrödinger function for that state. So, if doing simple frequentist probabilities over these "worlds", your chance as an observer to be in a "world" where the state is X is equal to that value, as the experiments observe.

Note that this assumption is exactly the same assumption as the wave function collapse, known as the Born rule.

* the Schrödinger equation actually predicts something more esoteric than even that: for any two complementary solutions X and Y, there is an infinity of additional solutions of the form aX + bY, with a and b real numbers with certain properties. So, in fact, it is actually impossible to use the Schrödinger equation to predict any particular state. You have to use the Schrödinger equation and a chosen basis of measurement, and only look at the solutions in that basis. The simple idea of "counting worlds" from above mostly breaks down at this stage - you need an additional third assumption of assuming a pre-existing "background" and using Decoherence to explain why only certain solutions normally manifest.


No, it really is.

Consider a sealed room with an experimenter looking at a box with Schrödinger's famous cat in it. The question we usually ask is whether the cat is alive, dead, or in a superposition before the box is opened.

Schrödinger's famous equation predicts that if the cat itself can be described by quantum mechanics, then it must be in a superposition. If the experimenter can be described by quantum mechanics as well, then the experimenter must also go into a superposition upon opening the cat's box. And, thanks to thermodynamics, there is no experiment that is doable by the experimenter from which the existence of collapse can be demonstrated.

Therefore the claim that there is a collapse at all is an entirely unnecessary hypothesis. All other interpretations of QM have to invent explanations for an event (collapse) for which no experimental evidence exists.


That's all well and good, but it then predicts that all possible outcomes have equal probability, and this is measurably false.

Say you design the experiment such that the observer will see the cat alive if 2 particles both have spin up, and dead if either particle has spin down. Say the Schrödinger equation assigns equal amplitudes to the 4 possible states (up-up, up-down, down-up, down-down), and let's ignore the composite states (e.g. 1/sqrt(2) up-up + 1/sqrt(2) up-down). So the total probability of the cat-dead states (up-down + down-up + down-down) is three times that of the cat-alive (up-up) state.

In a naive interpretation of MWI that only used the Schrodinger equation, there are two versions of the observer, so the probability that the observer sees one outcome versus the other is obviously 1/2: you either happen to be the version that sees the cat alive, or you happen to be the one that sees it dead. This reasoning will work if we repeat the experiment many times: since the repetitions are independent, if I repeat it 10 times, I expect that I will happen to be one of the observers who sees the cat alive about 5 times, and dead about 5 times as well. In your interpretation, the amplitude of the Schrodinger equation is irrelevant, as long as it is greater than 0: all possible events happen.

If I actually do the experiment, though, I will see the cat alive only about 2.5 times out of 10, since the total squared amplitude of the wavefunction over all states where the cat is alive is much lower than over all states where the cat is dead.
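Those frequencies can be checked with a toy Monte Carlo sketch (assuming the equal-amplitude four-state setup from the example, with the Born rule applied by hand):

```python
import random

random.seed(0)  # reproducible toy run

# Four equal-amplitude spin states; the cat is alive only for "up-up".
states = ["up-up", "up-down", "down-up", "down-down"]
amplitudes = {s: 0.5 for s in states}  # 4 * |0.5|^2 = 1, normalized

# Born rule: outcome probability is the squared modulus of its amplitude.
probs = [abs(amplitudes[s]) ** 2 for s in states]

trials = 100_000
alive = sum(
    random.choices(states, weights=probs)[0] == "up-up"
    for _ in range(trials)
)
print(alive / trials)  # close to 0.25: the cat is seen alive about 1/4 of the time
```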

So, the actual MWI says that, while there are two kinds of worlds, they are not equally likely. In the multitude of all worlds, the prevalence of worlds where the cat is alive is proportional to the squared wavefunction amplitude of the cat-alive state (about 1/4), and the ones where the cat is dead follow the same logic (about 3/4). So, given that I am one observer in one of the many worlds, the chance that I am the observer in a world where the cat is alive is only 1/4.

But this connection between the number of worlds and the amplitude of the wavefunction is an additional assumption atop the Schrodinger equation. Sure, the wavefunction doesn't collapse, but it splits according to the exact same formula as the collapse in the CI (Born's rule).

And I again want to mention that even this is not enough. If |cat-alive> and |cat-dead> are solutions to the Schrödinger equation, then so is x|cat-alive> + y|cat-dead> for an infinite number of real numbers x and y. The MWI has to explain why no observer ever actually perceives such a state (what such a state would even look like to an observer is not definable). Decoherence solves this, and it was an extremely important contribution, but it also adds an additional assumption (one that CI also needs): some pre-existing classical-like background.


Yes, you can construct a naive version of the MWI that produces answers in disagreement with experiment. But that naive version of the MWI also doesn't match the predictions of attempting to model both cat and experimenter with QM.

This is the essence of a straw man argument. OK, your ridiculous version of the theory doesn't work. Now what happens if you look at the actual theory under discussion?


My point is that the actual version of the MWI requires the Born rule (which can't be derived from the Schrodinger equation, and which is also known as the measurement postulate) just as much as any other interpretation.

I wasn't building a strawman, I was trying to explain why the simple explanation you had given in the previous post (which is the commonly presented explanation of the MWI in many popular channels) doesn't actually work. You were the one claiming that the MWI simply says that the observer is in a superposition itself, which is indeed what the Schrodinger equation predicts. But this entirely leaves out the other half (why the observer in fact observes a single outcome, with some probability X), and I was explaining how the same math as the oh-so-hated collapse sneaks back in there.

Basically, the MWI and the CI agree that, from the point of view of the observer, something happens when they open the box which they can't predict deterministically. The collapse versions of the CI say that this event actually changes the wavefunction (it collapses it) and that all other possible results don't happen. The MWI says that this is just how it looks to one observer, and that all other results happen to other observers, in a very precise proportion. The "shut up and calculate" version of the CI says that it's unscientific to even discuss this distinction: since a single observer only ever observes a single thing, talking about observations that didn't happen, and about how real they are, is unscientific speculation.


You've said nothing suggesting that the explanation doesn't work.

Whether you believe in collapse now, collapse later, or collapse never but the Born rule works, you get the same exact predictions. Therefore no experiment done to date represents evidence that there actually is a collapse. And evidence that the experimenter is modeled by QM is evidence against a collapse. This is all true, and is all verifiable from QM.

And whether collapse happens later or never, the math behind the MWI explains why we'd think we'd observed what we observed in the absence of a collapse.


Fundamental randomness also explains nothing, but worse.


> Maybe I'm reading this wrong.

No, you're reading this correctly. GP is misinterpreting the results.


"Fundamentally" as in "QM forbids this". If our universe were different, the simpler kind that Laplace dealt with, it could be completely predictable, as 18th-century theories held.


I think an appeal to quantum mechanics is a different argument from the one under discussion, which is based on discrete physics.


I'm not sure it is a different argument, given its dependence on the Planck length.


It's a different view of the same problem.


>'One of the most fundamental philosophical questions is about the nature of time, precisely because all of our "laws" of physics are time reversible'

Many of our theories are, but we have several direct observations of time-reversal symmetry violation (below), independent of the experimental demonstrations of CP violation, which also imply T-symmetry violation.

https://arxiv.org/abs/1409.5998

https://pubs.aip.org/physicstoday/article-abstract/52/2/19/4...


Yes! This is important. It's a finding from 1964, nearly 60 years ago, and it is included in the Standard Model of physics.

One interesting thing is that unlike the other discrete symmetries, we haven't found a system that has a large Time-reversal symmetry violation. Not having a large enough source of T-violation is actually one of the major problems with the SM!

Related supplemental reading on the Strong CP problem: https://www.forbes.com/sites/startswithabang/2019/11/19/the-...


The universe is not an N-body problem, because we have things behaving like waves, which are much harder to simulate. We can't even simulate a 2-electron collision precisely.


So this would be a totally separate source of indeterminism from quantum mechanics. You could have a Schrödinger's rocket fired at an indeterministic 3-body system, meaning multiple sources of indeterminism interact.

Entropy could be viewed as the simple addition of information due to indeterminism.

How can causality survive in a universe like that?


Why does causality require determinism? Surely effect can still follow cause, even if it is not predictable in advance?


The scientific method tests its understanding of causation using prediction.

If something is not predictable even in principle, then it is impossible to show that it has a cause.


That only matters if you assume infinite measurement accuracy, and the scientific method has never assumed that.

Even if we assume the world is 100% fully deterministic, as long as our measurements are not 100% accurate we will have some amount of measurement noise which is completely indistinguishable, even in principle, from true fundamental randomness.

In fact, fundamental randomness is much easier to work around than measurement noise, since there is no risk of it being correlated with your experimental design. In contrast, measurement errors are often correlated to the measurement method, which makes them much harder to eliminate statistically.


Doesn't the Turing halting problem also imply that (some) things are not predictable (the halting of certain algorithms)? But I don't think that interferes with causality.


Causality in physics survived the discovery of quantum uncertainty, though it was changed by it. What parts of current physics would have to be abandoned if our current inability to predict outcomes with complete precision turned out to be fundamental?


A non-deterministic universe can have two kinds of events: caused events and random events. That is, an event can simply happen, but it can also be caused by another event. For example, a ball on a pool table could start moving all on its own (random), while a ball that is hit by another ball starts moving because of the impact (caused).

If the fully random events are rare enough, you can even still determine causality using statistical tests, just like we do today. Of course, you can never be 100% certain, but that is to be expected. This is how experimental science worked anyway, even when the world was assumed to be 100% deterministic: to experimental science, true randomness is not really different from measurement noise.
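The pool-table point can be shown with a toy simulation: even in the presence of rare uncaused events, comparing event rates with and without the putative cause still exposes the causal link. All the rates and sample sizes below are invented for illustration:

```python
import random

random.seed(1)

# Toy model: struck balls (almost) always move, while idle balls
# occasionally start moving "for no reason".
P_CAUSED = 0.95        # a struck ball starts moving
P_SPONTANEOUS = 0.01   # rare uncaused motion
N = 10_000

struck = sum(random.random() < P_CAUSED for _ in range(N)) / N
idle = sum(random.random() < P_SPONTANEOUS for _ in range(N)) / N

print(f"moved when struck: {struck:.3f}")  # ~0.95
print(f"moved when idle:   {idle:.3f}")    # ~0.01
```

The gap between the two rates is exactly what a significance test would pick up, which is why a sprinkling of true randomness doesn't break causal inference any more than ordinary noise does.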


The variation of philosophical implications are possibly unpredictable by themselves...


Ha! Gödel would be so proud of this conjecture


See Norton's Dome as an example of nondeterministic behavior in classical mechanics. No quantum or chaos etc required. https://en.m.wikipedia.org/wiki/Norton%27s_dome
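For reference, the non-uniqueness can be checked directly: in suitable units the dome's radial equation of motion is r'' = sqrt(r), and the well-known nontrivial solution r(t) = (t - T)^4 / 144 satisfies it alongside the trivial solution r = 0, both with the same initial conditions r(T) = r'(T) = 0. A quick numerical sanity check:

```python
# Nontrivial solution to Norton's dome equation r''(t) = sqrt(r(t)),
# with the ball sitting at rest until an arbitrary time T.

def r(t, T=0.0):
    return 0.0 if t < T else (t - T) ** 4 / 144

def r_dd(t, T=0.0):
    # Second derivative of the nontrivial solution: 12 (t - T)^2 / 144
    return 0.0 if t < T else (t - T) ** 2 / 12

# Check r'' == sqrt(r) at a few points past T
for t in (0.5, 1.0, 2.0):
    assert abs(r_dd(t) - r(t) ** 0.5) < 1e-12

print("r'' = sqrt(r) holds for both the trivial and the nontrivial solution")
```

Since T is arbitrary, classical mechanics alone does not determine when (or whether) the ball starts rolling, which is the nondeterminism being referred to.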


Note that this is not an actual physical experiment - it only works with an infinitely accurate and smooth shape of the dome, which contradicts all of the models of how matter exists at least since the ancient Greeks. Any dome made up of atoms, even if arranged perfectly accurately up to the position of each individual atom, does not exhibit any kind of non-deterministic behavior in classical mechanics.


I think Diablo 4 is really well made but I keep getting surprised by some goofs in what I would consider the most important parts of the game.

Like even when I make a small-scale RPG, my damage formulas and scaling and so on are my bread and butter. I obsess over them. I would accept being wrong about the resulting experience and having to adjust, but to me it's really unacceptable to have oversights and mistakes there.

So something like how broken resistance was, or different types of scaling implemented wrong. That's not disagreeing about the outcome. That's clear oversight in what I consider the heart of the game.
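As a purely hypothetical illustration (none of this is Blizzard's actual code; the function names, numbers, and the `level_penalty` factor are all invented), here is how a subtle wiring mistake in a resistance formula can silently undercut the stat a player sees on the character sheet:

```python
# Hypothetical damage-mitigation formulas, invented for illustration.

def mitigated_intended(damage, resist):
    # What the character sheet implies: 70% resist -> 70% less damage.
    return damage * (1 - resist)

def mitigated_buggy(damage, resist, level_penalty=0.5):
    # An easy-to-miss oversight: a hidden scaling factor quietly halves
    # the effective resistance, so "70% resist" mitigates only 35%.
    return damage * (1 - resist * level_penalty)

print(round(mitigated_intended(1000, 0.70)))  # 300
print(round(mitigated_buggy(1000, 0.70)))     # 650
```

Both functions run without errors and look plausible in isolation, which is exactly why this class of oversight can survive playtesting until hardcore players do the math.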

Or for example how the mount's mouse controls were just remapped from the joystick, so you have to put your mouse at the edge of the screen to go fast.

I don't understand how to reconcile such a big and well made game with many baffling oversights in the game's core systems. Feels competent and amateurish at the same time.


What makes you say that the game is well made if many of its core systems are defective?


Usually games with deep development issues have those issues show up everywhere. Not so in Diablo 4.

The art direction is great, they produced a massive amount of assets at a very high quality, the story is great by Diablo standards, the marketing was good, the tech is pretty good.

The player experience is still very good and very high quality. I usually find the only people who complain are the hardcore players. It makes sense because once you've run out of campaign content, you really have to rely on those core mechanics to keep the game going.

A lot of the people involved are competent and have a good track record. I just don't understand why the foundations are shaky. They're not even innovating so much on what an ARPG is supposed to be.


You hit it quite well on what I was getting at.

I think many of us did enjoy Diablo 4, but the systems that make an ARPG shine fell through the cracks, which is surprising given that Blizzard defined the category.

I am very much aware they aren't the same people from Blizzard North running the show now, and they had some major team shake-ups in D4's dev cycle, so that was part of the problem, but it's hard not to have high expectations.

It's just those pieces and connections in between the excellent departmental work that fell apart, along with the flawed ARPG foundation, that a gaming generalist could have solved to bring it all together.

I imagine things would look quite different now if that generalist (or generalists) on D4 had played ARPGs for decades, had Diablo DNA infused in them with equal experience in game design and systems, and had some authority. They could have made the best ARPG since D2, given how good D4 is in so many ways. The team also learned a lot from D3 and its shortcomings, but that knowledge all felt lost in D4.


If the implementation of a poor algorithm is robust and bug-free, that would be something that is well-made, despite being bad.

Consider a chair that is sturdy, load-bearing, and lightweight, but wildly uncomfortable to sit on.


But they're describing a poor and buggy implementation.


The problems they are describing are bad design, not bad implementation.


Apps did not just fail to help me meet someone. They really harmed my self esteem. Even when a match on an app becomes a real date, the lack of any context other than the app makes those dates really weird. "Could I have kids with this suspicious stranger from the Internet?" I could never be myself. Never had a second date through an app.

Met my SO after I had stopped looking for romance at a hobby club. In social and hobby contexts people can glimpse the breadth and depth of your personality. Others can vouch for you and speak well of you. You actually have something to talk about. It feels much more organic and not forced at all.

I challenge you to make 200-300 friends and acquaintances and not find a romantic relationship. Looking back, that's actually what I was doing, and somehow romance fell into my lap. We got married last year.


How does an adult working a full time job even meet 200-300 people let alone become acquaintances/friends with them?

I find it a struggle to even become casual acquaintances with more than a handful of people a year.


If your social life is this small, then the pool of candidates is simply too small. You need to greatly expand how many people you meet. Only then can you start thinking about how to attract romance, how to approach people, and so on, all of which is a lot easier if you just meet a lot of people.


I played adult rec league sports for many years. Ended up dating 5 people from that, 3 seriously, and married one of them. I’m sure I met many hundreds of people over that time, all while having a full-time job and while doing an activity I inherently enjoyed.


I get this is how people meet but I never liked this dynamic. Most relationships end and it can make it weird for everyone else. Wish fewer people comingled their hobbies and dating life like this.


How do you expect people to partner??


Online dating, bars, or any other social activity that doesn't involve the same group over an extended period of time.


Over 2 years, I attended once or twice weekly public speaking meetings. Became president of a club, VP of another, member of another (most are biweekly). Mentored and worked with dozens, met hundreds, spoke in front of thousands.

There are some very low resistance social graphs. Just travel them consistently.


It's not an obligation in the sense that it's not strictly mandatory.

But it is beyond encouraged. Any faithful Muslim believes that marriage is half of faith.

“When a person gets married he has completed half of his religion, so let him fear Allah with regard to the other half.”

https://islamqa.info/en/answers/11586/is-marriage-half-of-re...


It's strongly encouraged, yes. But not mandatory and not obligatory, which was the erroneous claim to which I was responding.

Are you Muslim? You realize people don't take "half of faith" literally, right?


Yes I'm Muslim. They do where I am. Calling marriage "completing half my faith" is a very common expression. Why wouldn't they take it literally?


I used that at my speech in my brother’s wedding, and the aunties loved it.

