Hacker News

Due to the no free lunch theorem [0], any search algorithm that makes some problems faster will necessarily make other problems slower. What does the worst case for an algorithm like this look like?

I think that part of the appeal of A* to me is that I can readily visualize why the algorithm fails on certain pathological inputs.

[0] https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_op...
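To make "pathological input" concrete, here's my own toy sketch (not from the article): a minimal A* on a grid where a near-complete wall defeats the Manhattan heuristic, which keeps pulling the search into the dead-end pocket, so the search expands essentially every free cell before committing to the detour.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid (0 = free, 1 = wall).
    Returns (path_cost, nodes_expanded)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    heap = [(h(start), 0, start)]  # entries: (f, g, node)
    g = {start: 0}
    expanded = 0
    while heap:
        f, cost, node = heapq.heappop(heap)
        if cost > g.get(node, float("inf")):
            continue  # stale heap entry
        expanded += 1
        if node == goal:
            return cost, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if cost + 1 < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = cost + 1
                    heapq.heappush(heap, (cost + 1 + h((nr, nc)), cost + 1, (nr, nc)))
    return None, expanded

# Pathological input: a vertical wall in column 5 with a gap only at the
# bottom row. The heuristic says the goal is 9 steps away, but the true
# shortest path is a 27-step detour through (9, 5).
grid = [[0] * 10 for _ in range(10)]
for r in range(9):
    grid[r][5] = 1
cost, expanded = astar(grid, (0, 0), (0, 9))
```

On this input the search ends up expanding roughly all ~91 free cells; drop the wall and the same heuristic expands only about the ten cells along the top row.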




Well, it's still using A* and state exploration to validate the plans generated. But it's generating plans nondeterministically, so there is no termination guarantee: the worst-case running time is unbounded.

Why didn't they give more details about the Sokoban puzzles being solved? The standard benchmark is the 90 puzzles in XSokoban, and there's a Large Test Suite with a few thousand more, many of which haven't been solved by any search or planning system. Even the 90 puzzles in XSokoban were only fully solved a few years ago (2020?) by Festival, using FESS, a feature-space search layered on top of the usual state-space search. Before that, roughly 20 years passed with only about 57 levels solved via search.

I see that they measure trace lengths, but I would have really liked to see the number of XSokoban levels solved and how many states were examined, to line up with existing research.


Well, you have to run the AI, which is significantly more costly than just running A*.

Also, the AI has a finite size, which means it has to be scaled up for bigger graphs. I doubt it would work at all for non-Euclidean 2D graphs.

The exciting thing is that they made an AI do a toy problem, not that it was efficient.

But people are dumb and tech companies have brain control over them. "AI good and solve all problems"

It reminds me of the "slime mold solves mazes" thing. It was interesting and there were even things to learn from it, but it wasn't magic and couldn't give us better general tools.


The "no free lunch" theorem does have some prerequisites for it to actually apply:

> There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions.
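That invariance condition can be checked exhaustively on a toy case (my own demonstration, not from the parent): take every objective function on a 3-point domain with binary values, a set trivially closed under permutation, and every deterministic probe order achieves exactly the same average time to first hit the optimum.

```python
from itertools import product, permutations

X = range(3)                                 # the search space
functions = list(product((0, 1), repeat=3))  # all 8 objectives f: X -> {0, 1};
                                             # this set is permutation-invariant

def probes_to_optimum(order, f):
    """Evaluations until the global maximum of f is first seen."""
    best = max(f)
    for steps, x in enumerate(order, start=1):
        if f[x] == best:
            return steps

# Average cost of each fixed probe order over the whole function set.
averages = {order: sum(probes_to_optimum(order, f) for f in functions) / len(functions)
            for order in permutations(X)}
# All six probe orders come out identical: over a permutation-closed
# distribution of objectives, no search order beats any other.
```

Break the invariance (say, drop the all-ones function from the set) and the averages diverge, which is exactly the "structure we can exploit" case.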


The NFL theorem is frankly silly.

It boils down to "you can't predict random numbers."

I'm sure it's useful to have it formalized, but it has little impact on real-world situations, because most problems we care about have some kind of structure we can exploit.


Well, when I asked "What does the worst case for an algorithm like this look like?" I fully expected the answer to be "in practice this never happens" (that is, there are bad cases, but since we aren't drawing inputs at random, in practice we never hit them).

But how likely is that?


Yes, significantly. But take, for example, the stock market, which is obviously of interest (to OAI in particular) but which cannot, over the medium to long term, be extrapolated accurately from historical data.




