Hacker News
Using Simulated Annealing to Solve Logic Puzzles (pluszero.ca)
171 points by djoldman on July 18, 2016 | 31 comments


Another fun way to solve this is using logic programming, e.g. http://baptiste-wicht.com/posts/2010/09/solve-einsteins-ridd...


Indeed, anything with constraints is usually a great use case for Prolog. I particularly like this version in ECLiPSe, which specializes in constraint programming with Prolog: https://gist.github.com/JuanitoFatas/2227711

I haven't benchmarked it, but it is pretty intuitive to read, which is very important for these types of problems.


This is the best Prolog solution I've seen for this kind of puzzle. Other solutions all had to define a bunch of primitives for neighbourhood constraints.


Very concise and easy to read, I like it :)


The article mentions the problem of "local minimums". The author suggests occasionally accepting worse solutions, in order to escape from such. "The hope here is that by following this worse solution, we can eventually get to the global minimum." This is good but one may wish to take it further.

I once messed with something called "metropolis coupling", where you run multiple threads in parallel, with different thresholds for how often they accept worse solutions, i.e. one thread is "cold", one is slightly "hotter", and so on. If a hot thread's current solution is better than the solution of the neighbouring colder thread, the solutions are swapped. In this way the coldest thread (which is the one we're paying attention to) gets pulled out of the local minimum.

As I say, I messed with this once and it did seem to help. It seems to have been developed for inference in evolutionary phylogenetics, and is perhaps a bit obscure?
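
Roughly, the scheme looks like this (a sketch with hypothetical cost and neighbour functions, where neighbour returns a new state rather than mutating; real parallel tempering swaps probabilistically, but this is the greedy rule described above):

  import math
  import random

  def tempered_anneal(initial, cost, neighbour,
                      temps=(0.1, 0.5, 1.0, 2.0), steps=10000):
      # One chain per temperature, coldest first.
      chains = [initial] * len(temps)
      costs = [cost(initial)] * len(temps)
      for step in range(steps):
          # Standard Metropolis update within each chain: always accept
          # improvements, accept worse moves with probability exp(-delta/T).
          for i, t in enumerate(temps):
              candidate = neighbour(chains[i])
              delta = cost(candidate) - costs[i]
              if delta <= 0 or random.random() < math.exp(-delta / t):
                  chains[i], costs[i] = candidate, costs[i] + delta
          # Every so often, let a hotter chain hand a better solution
          # down to its colder neighbour by swapping states.
          if step % 100 == 0:
              for i in range(len(temps) - 1):
                  if costs[i + 1] < costs[i]:
                      chains[i], chains[i + 1] = chains[i + 1], chains[i]
                      costs[i], costs[i + 1] = costs[i + 1], costs[i]
      return chains[0]  # the coldest chain is the one we pay attention to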


Hi, author here:

Yep, this is the sort of thing we study in our cooperative and adaptive algorithms class. Techniques similar to what you suggest are shown here: https://books.google.ca/books?id=G5ML5EYch94C&pg=PA88&lpg=PA...


Looks neat, but I gave up on it converging on my new laptop. I'm going to port it to Rebol 2. This bit confuses me:

  random.choice(range(0, i) + range(i+1, 4))
How is that different from

  random.choice(range(0, 4))? 
Obviously no Pythonista, but playing around in the REPL I see no diff.


Duh. Where is the button that reverses the passage of time?
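
For anyone else who was puzzled: the article's code is Python 2, where range returns a list, so the concatenation builds a list that skips index i:

  >>> i = 2
  >>> range(0, i) + range(i + 1, 4)
  [0, 1, 3]
  >>> range(0, 4)
  [0, 1, 2, 3]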


This has the advantage of working as a sampling strategy, not just an optimization strategy. It is useful when the mass of your distribution is concentrated in a region that may be hard to find with a random walk.

https://en.wikipedia.org/wiki/Parallel_tempering

Markov chain Monte Carlo sampling gives a theoretical underpinning that explains why simulated annealing works in optimization.
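
Concretely, the Metropolis rule at a fixed temperature t draws (correlated) samples from p(s) proportional to exp(-cost(s)/t); simulated annealing is the same rule with t lowered over time. A sketch, again with hypothetical cost and neighbour functions:

  import math
  import random

  def metropolis_samples(initial, cost, neighbour, t, n):
      state, c = initial, cost(initial)
      for _ in range(n):
          candidate = neighbour(state)
          delta = cost(candidate) - c
          # Accept with probability min(1, exp(-delta/t)).
          if delta <= 0 or random.random() < math.exp(-delta / t):
              state, c = candidate, c + delta
          yield state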


That's actually pretty brilliant. I haven't ever heard of metropolis coupling before, but I can see how it would break the algo out of a local minimum. Also, klaatu barada nikto.


Nice. I've used simulated annealing to achieve good results on problems where the solution space was too large for constraint programming, but for this specific sort of problem I've had good experience using constraint programming frameworks like Gecode or Choco.


A lot of people here mention Prolog as another tool to solve this problem. I wonder if there's a natural way to combine constraint solving with simulated annealing.


A very well-written post on simulated annealing, and a fun puzzle to solve (even if it is old!)


If you like this sort of thing, I wrote some code a couple of years ago to generate and solve logical deduction problems, such as the "Cheryl's Birthday" brainteaser:

https://github.com/shaungallagher/cheryls-murder/blob/master...

The code isn't very clean (it was written hastily as part of a hack days project at work) but the notebook walks you through it, so even if you're not familiar with Python, you'll get a sense of the technique.


Bonus points for including code in Python to solve arbitrary logic puzzles.


I've used simulated annealing to make constraint solvers more efficient. The nice thing about constraint solvers and simulated annealing as demonstrated is that it's fairly straightforward to adapt the algorithm to your specific problem and data in order to get the best performance.


So, what does it mean exactly when they say that these kinds of problems are in the same class as Sudoku?


Simulated annealing and similar techniques such as Tabu Search are good for problems that have large search spaces due to combinatorial explosion (https://en.wikipedia.org/wiki/Combinatorial_explosion) and can be expressed as a state and an associated cost/fitness function.
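
For a puzzle like this one, the pairing might look like the following sketch (the categories and the single clue are made up for illustration):

  import random

  CATEGORIES = {
      'colour': ['red', 'green', 'blue', 'white', 'yellow'],
      'pet':    ['dog', 'cat', 'fish', 'horse', 'zebra'],
  }

  def random_state():
      # House i has colour state['colour'][i], pet state['pet'][i], etc.
      return {cat: random.sample(vals, len(vals))
              for cat, vals in CATEGORIES.items()}

  def cost(state):
      # Count violated clues; 0 means solved.
      violations = 0
      # Hypothetical clue: the dog lives in the red house.
      if state['colour'].index('red') != state['pet'].index('dog'):
          violations += 1
      return violations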


This is true provided your cost function is "continuous" w.r.t. distance. Roughly what this means is that if you make a small change - e.g. one edit in this example - then your cost function shouldn't change a lot.

If you don't have some form of continuity condition you are actually just doing random search.
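
To make that concrete: counting violated clues gives annealing a slope to follow, while a pass/fail cost does not (an illustrative sketch, with clues as predicates over the state):

  def smooth_cost(state, clues):
      # One edit flips at most a few clues, so the cost changes gradually.
      return sum(0 if clue(state) else 1 for clue in clues)

  def flat_cost(state, clues):
      # 1 until everything is satisfied: no slope, so annealing on
      # this degenerates into random search.
      return 0 if all(clue(state) for clue in clues) else 1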



Both problems have constraints, and you solve the puzzle by finding the right configuration of data that meets the constraints.


They're both examples of an exact cover problem: https://en.wikipedia.org/wiki/Exact_cover
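
Exact cover asks for a collection of subsets that together cover every element exactly once. The small instance below is the one from the Wikipedia article, solved by naive backtracking (a sketch, not Knuth's dancing links):

  def solve_exact_cover(sets, universe, partial=None):
      # Return a list of set names covering the universe exactly once, or None.
      if partial is None:
          partial = []
      if not universe:
          return partial
      element = min(universe)  # branch on one uncovered element
      for name, members in sets.items():
          if element in members and members <= universe:
              result = solve_exact_cover(
                  {n: m for n, m in sets.items() if not (m & members)},
                  universe - members,
                  partial + [name])
              if result is not None:
                  return result
      return None

  sets = {
      'A': {1, 4, 7}, 'B': {1, 4}, 'C': {4, 5, 7},
      'D': {3, 5, 6}, 'E': {2, 3, 6, 7}, 'F': {2, 7},
  }
  print(solve_exact_cover(sets, {1, 2, 3, 4, 5, 6, 7}))  # ['B', 'F', 'D']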


Constraint satisfaction problems can be modeled as Hopfield-type neural networks, such as Boltzmann machines (the RBMs of deep learning are a variant), which can be tuned using simulated annealing.

There was plenty of research on this around 1985-1995 (the "second coming" of neural networks), but it died out with the hype because it was never an actually practical way to solve CSPs. Given the recent innovations in deep learning, it should eventually pick up again, to enable deep learners to solve constraint satisfaction subproblems within the same integrated architecture.


This sounds exactly like this game: http://www.kaser.com/sherwin.html

There are relational properties, combinations, and maybes all over.


Really interesting. I wrote a logic puzzle solver in Prolog a long time ago while I was at uni, and I've just created a repo for anyone who wants to check it out. Unfortunately, the documentation and comments are all in Spanish...

https://github.com/Anilm3/Logic-Puzzle-Solver


At least you could have kept "head-tail"; it's more intuitive.

Thanks anyway ;)


Tweaking 't' here seems to be similar to tweaking the learning rate in gradient descent. But in gradient descent you assume the problem is convex.

Is there any way to tell whether this logic problem (or any other) is convex? Is it possible that the correct solution's 5-away neighbours all have very high costs (like 9+)? What prevents you from doing worse than brute force?


What if we change the cost function to something that returns a number that correlates directly with the maximum number of attributes that matched? Would solving the problem be any easier, or would we still be left with the possibility of getting stuck in local optima, just like the one described in the post?


What do you mean by the 'maximum number of attributes that matched'? How is it different from the number of conditions that are satisfied?


I wonder if Quantum Annealing would also be applicable.


I think in theory it is possible, by unary-encoding the attributes of the five houses and then defining the appropriate weights according to the constraints.

In practice it might not be possible to map the resulting graph onto the D-Wave's architecture; I can't find any good documentation on the D-Wave, though.
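
For what it's worth, the "exactly one value per house" part of that unary encoding expands into QUBO penalty terms like this (illustrative only; the weights are untuned, and embedding onto real hardware is a separate problem):

  from itertools import combinations

  HOUSES, VALUES, PENALTY = range(5), range(5), 2.0
  Q = {}  # QUBO weights: energy is the sum of Q[(u, v)] * x_u * x_v, x binary

  for h in HOUSES:
      # (sum_v x[h,v] - 1)^2 = 1 - sum_v x[h,v] + 2*sum_{v1<v2} x[h,v1]*x[h,v2]
      # (using x^2 = x for binary x): diagonal -P, off-diagonal +2P.
      for v in VALUES:
          Q[((h, v), (h, v))] = -PENALTY
      for v1, v2 in combinations(VALUES, 2):
          Q[((h, v1), (h, v2))] = 2 * PENALTY
  # Clue constraints and "each value used by exactly one house" add further
  # penalty terms in the same way.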



