Hacker News

There are recent diffusion-based papers that perform quite well; here's a recent example: https://arxiv.org/pdf/2406.01661. I am also working on ML-based combinatorial optimization (CO). My approach has an optimality gap close to 1% on hard instances with 800-1200 nodes, and under 0.1% on 200-300 node instances, for the Maximum Cut, Maximum Independent Set, and Maximum Clique problems. I think these are very promising times for neural-network-based discrete optimization.
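For reference, the "gap" figures above are relative optimality gaps. A minimal sketch of that metric (a hypothetical helper, not the commenter's code, using the maximization convention that fits Max-Cut / MIS / Max Clique):

```python
def optimality_gap(best_known, found):
    """Relative gap of a heuristic objective value vs. the best known
    value, for a maximization problem (Max-Cut, MIS, Max Clique)."""
    return (best_known - found) / best_known

# e.g. a cut of 990 against a best known cut of 1000 is a 1% gap
```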


Thanks, I'll try to give it a read this weekend. Would you say diffusion is the architectural change that opened up CO for neural nets? I haven't followed this particular niche in a while.


I believe it helps, but it's not the sole reason; there are also autoregressive models that perform only slightly worse. Unsupervised learning + diffusion + neural search is the way to go, in my opinion. However, the current literature lacks efficient neural search-space exploration. The diffusion process is a good starting point for it, especially when it is used not just to create a solution from scratch but also as a local search method. Still, there is no clear exploration/exploitation control in current papers. We need to incorporate more ideas from heuristic search paradigms into neural CO pipelines to take them to the next step.
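The "diffusion as local search" idea above can be caricatured without any neural net: perturb the current solution with noise, then "denoise" it back to a good solution, with the noise level acting as the exploration knob. A toy numpy sketch for Max-Cut, where a greedy 1-flip local search stands in for the learned denoiser (an assumption for illustration, not any paper's actual pipeline):

```python
import numpy as np

def cut_value(adj, x):
    """Number of edges crossing the 0/1 partition x (adj: symmetric 0/1 matrix)."""
    return int(adj[np.ix_(x == 1, x == 0)].sum())

def greedy_denoise(adj, x):
    """Greedy 1-flip local search; stands in for a learned denoiser."""
    x = x.copy()
    while True:
        # gain of flipping node i = same-side neighbours minus cross-side ones
        eq = (x[:, None] == x[None, :])
        gains = (adj * (2 * eq - 1)).sum(axis=1)
        i = int(np.argmax(gains))
        if gains[i] <= 0:
            return x
        x[i] ^= 1

def noise_then_denoise(adj, steps=100, noise=0.3, seed=0):
    """Toy 'diffusion as local search': re-randomize a noise-controlled
    fraction of nodes, re-denoise, keep the best solution seen."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    best = greedy_denoise(adj, rng.integers(0, 2, n))
    for _ in range(steps):
        y = best.copy()
        mask = rng.random(n) < noise                # exploration strength
        y[mask] = rng.integers(0, 2, int(mask.sum()))
        y = greedy_denoise(adj, y)
        if cut_value(adj, y) > cut_value(adj, best):
            best = y
    return best

# 4-cycle: greedy alone can stall at a cut of 2, but the noise steps
# escape that local optimum and reach the optimal cut of 4
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
best = noise_then_denoise(adj)
```

The missing piece the comment points at is exactly the `noise` parameter: in current papers there is no principled schedule for it, which is where ideas from heuristic search (annealing schedules, restarts, tabu-style memory) could plug in.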



