Hacker News

Here's my high-level understanding.

You're going to tile (say) the plane with a set of tiles you've chosen beforehand. (In this example, it looks like the tiles are 3D.) There are rules for which tiles can go next to each other: the edges must match.

As you go, there are two regions: a region in which you've settled on which tiles to use, and a boundary region in which you haven't. In the boundary region, you keep track of a probability distribution over the tiles. This distribution is influenced by the neighboring tiles (exactly how is probably the most interesting bit of the algorithm; I don't know).

The algorithm proceeds by repeatedly (i) picking a tile on the boundary that has a very skewed probability distribution, and fixing it to be the most likely tile; and (ii) updating the probability distributions of the neighboring tiles in the boundary.

It's possible that you end up making poor tile placements at the beginning that get you stuck later, so the algorithm must sometimes backtrack to undo past decisions. You want to pick a set of tiles such that the algorithm doesn't need to backtrack too often, or it will take too long.

Note that the distribution is an ordinary probability distribution, not a sum of complex amplitudes like the name "Wave Function Collapse" would make you think.
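To make the loop above concrete, here's a minimal sketch in Python. The tile names, weights, and adjacency rules are all made up for illustration, propagation only touches immediate neighbours, and there's no backtracking (a contradiction just raises), so treat it as the skeleton of the idea rather than a real implementation.

```python
import math
import random

TILES = ["grass", "road", "water"]                  # hypothetical tile set
WEIGHTS = {"grass": 0.6, "road": 0.3, "water": 0.1}  # hypothetical weights
# ALLOWED[t] = tiles that may sit next to t (symmetric, made up here)
ALLOWED = {
    "grass": {"grass", "road", "water"},
    "road": {"grass", "road"},
    "water": {"grass", "water"},
}
W, H = 8, 8

def entropy(options):
    """Shannon entropy of the renormalized weight distribution."""
    total = sum(WEIGHTS[t] for t in options)
    return -sum((WEIGHTS[t] / total) * math.log(WEIGHTS[t] / total)
                for t in options)

def collapse(grid):
    while True:
        # (i) pick the undecided cell with the most skewed
        #     distribution, i.e. the lowest entropy, and fix it
        open_cells = [c for c, opts in grid.items() if len(opts) > 1]
        if not open_cells:
            return grid
        cell = min(open_cells, key=lambda c: entropy(grid[c]))
        opts = list(grid[cell])
        choice = random.choices(opts, [WEIGHTS[t] for t in opts])[0]
        grid[cell] = {choice}
        # (ii) update the neighbours' distributions
        x, y = cell
        for nxy in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if nxy in grid:
                grid[nxy] &= ALLOWED[choice]
                if not grid[nxy]:
                    # a full implementation would backtrack here
                    raise RuntimeError("contradiction: backtracking needed")

grid = {(x, y): set(TILES) for x in range(W) for y in range(H)}
result = collapse(grid)
```

With this particular toy tile set every adjacency set contains "grass", so a contradiction can't actually occur; less forgiving rule sets are exactly where the backtracking discussed below comes in.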




Two additions to this:

- AFAIK, the probability distribution is based on how common any particular adjacency is in the input. If the input is just rules rather than a small sample map, they'd all be equal, but if the input is an image that shows (for example) that only one in ten roads dead-ends, that would carry over to generated stuff.

- It can also do an "overlapping" model, where it sets one pixel at a time but considers overlapping tiles of 2x2 or 3x3 pixels. This breaks away from the grid structure, but requires a full sample image, not just a list of tiles.
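A sketch of where the weights come from in that overlapping model: every NxN window of the sample counts as a "pattern", and its frequency in the sample becomes its weight. The one-character-per-pixel sample below is a toy stand-in for a real sample image.

```python
from collections import Counter

# Toy "image": each character is a pixel.
SAMPLE = [
    "..#..",
    "..#..",
    "#####",
    "..#..",
    "..#..",
]
N = 2  # pattern size (the comment above mentions 2x2 or 3x3)

def patterns(sample, n):
    """Count every n x n window of the sample."""
    counts = Counter()
    h, w = len(sample), len(sample[0])
    for y in range(h - n + 1):
        for x in range(w - n + 1):
            counts[tuple(row[x:x + n] for row in sample[y:y + n])] += 1
    return counts

counts = patterns(SAMPLE, N)
total = sum(counts.values())
weights = {p: c / total for p, c in counts.items()}
```

A pattern that occurs more often in the sample (here the all-empty 2x2 window) gets proportionally more weight in the generated output.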


In my case, the probabilities for each block are supplied manually.


This sounds like standard procedures for sampling from a Markov random field.

https://en.wikipedia.org/wiki/Markov_random_field
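One of those standard procedures is Gibbs sampling: repeatedly resample each cell from its conditional distribution given its neighbours. Here's a toy version on an Ising-style grid MRF (the grid size and coupling strength BETA are arbitrary choices for illustration).

```python
import math
import random

W = H = 6
BETA = 0.8  # neighbour-agreement strength (arbitrary)

def gibbs_sweep(grid):
    """One sweep: resample every +/-1 cell given its neighbours."""
    for y in range(H):
        for x in range(W):
            s = sum(grid[ny][nx]
                    for nx, ny in ((x - 1, y), (x + 1, y),
                                   (x, y - 1), (x, y + 1))
                    if 0 <= nx < W and 0 <= ny < H)
            # conditional P(cell = +1 | neighbours) under the Ising model
            p_up = 1.0 / (1.0 + math.exp(-2 * BETA * s))
            grid[y][x] = 1 if random.random() < p_up else -1

grid = [[random.choice([-1, 1]) for _ in range(W)] for _ in range(H)]
for _ in range(100):
    gibbs_sweep(grid)
```

The parallel to the tile algorithm: each cell's distribution depends only on its neighbours, though Gibbs keeps resampling everything rather than fixing cells one by one.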


Oh my... perhaps this backtracking to fix past inconsistencies could allow for a game environment in which "Mandela effect" events happen naturally from time to time. Could make for an interesting game mechanic.


You're right. There is backtracking in the demo and when it happens it looks quite eerie. The world around you just decides it wants to look different and changes.


That sounds like it could be mechanically cool. Or at least lead to a cool setting.


It seems like it would work very well in a horror game. Though random generation doesn’t lend itself too well to horror.


Maybe this is a dumb comparison, but it sounds like an algorithm for solving minesweeper. The big difference is there's no "right" answer; the puzzle emerges as the probabilities play out.


Basically.


I wonder if machine learning could be used to model the probability distributions with greater capacity, and thus reduce backtracking.

One might also consider placing the algorithm in the context of a generative adversarial network (GAN) to adapt the tile probability modeling ML component towards a pattern that is less distinguishable from a real city.



A while back I hacked together a crossword generator that works almost exactly like this, but using a wordlist to provide probabilities.
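A hypothetical miniature of that idea: rank the candidate words for a slot by wordlist frequency, restricted to words matching the letters already fixed by crossing entries. The tiny wordlist and its frequencies are made up.

```python
WORDLIST = {"cat": 50, "car": 30, "cab": 5, "dog": 40}  # word -> frequency

def candidates(pattern, wordlist):
    """pattern like ['c', None, None] (None = undecided cell);
    returns (word, probability) pairs sorted most-likely first."""
    fits = {w: f for w, f in wordlist.items()
            if len(w) == len(pattern)
            and all(p is None or p == ch for p, ch in zip(pattern, w))}
    total = sum(fits.values())
    return sorted(((w, f / total) for w, f in fits.items()),
                  key=lambda wf: -wf[1])

# A slot whose first letter is fixed to 'c' by a crossing word:
ranked = candidates(["c", None, None], WORDLIST)  # "cat" ranks first
```

The analogue of "collapsing" a tile is committing to the top word, then re-deriving the distributions of the slots that cross it.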



