
Another nice quote,

> The next logical step is to invent a way to scale linearly with the number of constraints. “That is the North Star for all this research,” she said. But it would require a completely new strategy. “We are not at risk of achieving this anytime soon.”



My bet on this would be to abandon moving between vertices, as simplex does, and move along facets instead.

However, this requires solving a quadratic 'best direction' problem at each step, which, IIRC, reduces to a linear complementarity problem (LCP) (https://en.wikipedia.org/wiki/Linear_complementarity_problem). The LCP scales with the number of active constraints, which is always smaller than the dimensionality N of the problem. So if the total number of constraints P >> N, you are golden.
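For the curious, here is a minimal sketch of one of the simpler LCP solvers, projected Gauss-Seidel (Lemke's pivoting method is the classic alternative). It assumes a small dense matrix M and only converges under conditions such as M being symmetric positive definite; it is just to show what "solving an LCP" looks like concretely, not the method hinted at above.

    import numpy as np

    def lcp_pgs(M, q, iters=1000, tol=1e-10):
        """Projected Gauss-Seidel for the LCP: find z >= 0 with
        w = M z + q >= 0 and z . w = 0.  Toy sketch for small dense M."""
        n = len(q)
        z = np.zeros(n)
        for _ in range(iters):
            z_old = z.copy()
            for i in range(n):
                # residual of row i with z[i] excluded
                r = q[i] + M[i] @ z - M[i, i] * z[i]
                z[i] = max(0.0, -r / M[i, i])
            if np.linalg.norm(z - z_old, np.inf) < tol:
                break
        return z

    # Example: M = [[2, 1], [1, 2]], q = [-1, -1]  ->  z = [1/3, 1/3],
    # which gives w = M z + q = 0, so complementarity z . w = 0 holds.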

Note that Dantzig also contributed to the LCP.

Obviously any breakthrough in these basic methods is directly translatable to more efficient learning algorithms for training single-layer neural nets (perceptrons). Extending to multi-layer NNs is not far off from there...
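To make the perceptron connection concrete: for linearly separable data, finding a separating hyperplane can be phrased as an LP feasibility problem (zero objective, one linear constraint per sample), so faster LP directly means faster training in that setting. A toy sketch using scipy's linprog, assuming the data are in fact separable:

    import numpy as np
    from scipy.optimize import linprog

    def separating_hyperplane(X, y):
        """Find (w, b) with y_i (w . x_i + b) >= 1 for all i, posed as an
        LP feasibility problem.  y must be in {-1, +1}."""
        n, d = X.shape
        c = np.zeros(d + 1)                                  # zero objective
        # y_i (w . x_i + b) >= 1   <=>   (-y_i [x_i, 1]) . [w, b] <= -1
        A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
        b_ub = -np.ones(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1), method="highs")
        return (res.x[:d], res.x[d]) if res.success else None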


> “We are not at risk of achieving this anytime soon.”

Here "risk" seems odd (or it's a translation/language-nuance mistake).


It is not a mistake; it is just being cheeky.



