
>For comparison, neural nets cannot update their models - when the world changes, a neural net can only train its model all over again, from scratch

I mean, sure they can. Training a neural network is literally nothing but the network's model being updated one batch of training examples at a time. You can stop, restart, extend or change the data at any point in the process. There are whole fields of transfer learning and online learning that extend this to updating an already-trained model with new data.

edit: Reinforcement learning is also relevant in a way, since there the model controls the future data it sees and updates itself on.
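A minimal sketch of incremental updating, assuming scikit-learn's MLPClassifier (the task, sizes and hyperparameters here are purely illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Illustrative task: label a 2-D point by the sign of its first coordinate.
X1 = rng.normal(size=(100, 2))
y1 = (X1[:, 0] > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="sgd",
                    learning_rate_init=0.05, random_state=0)
clf.partial_fit(X1, y1, classes=[0, 1])  # first batch: classes declared up front

# Later: resume training the very same model on freshly collected data.
X2 = rng.normal(size=(100, 2))
y2 = (X2[:, 0] > 0).astype(int)
for _ in range(200):                     # incremental updates, one batch at a time
    clf.partial_fit(X2, y2)
```

Each `partial_fit` call performs one gradient update on that batch, so training can be paused and resumed with new data at any point.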



The problem I'm describing is formally known as "catastrophic forgetting". Quoting from Wikipedia:

>Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information.

https://en.wikipedia.org/wiki/Catastrophic_interference

Of course neural nets can update their weights as they are trained, but the problem is that weight updates are destructive: the new weights replace the old weights and the old state of the network cannot be recalled.
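The destructive nature of weight updates is easy to demonstrate on even the tiniest model. A toy sketch (plain logistic regression standing in for a network; the task and hyperparameters are made up for illustration): train on task A, keep training on a conflicting task B, and performance on A collapses.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y_a = (X[:, 0] > 0).astype(float)  # task A: sign of the first coordinate
y_b = 1.0 - y_a                    # task B: the same inputs, labels flipped

w, b = np.zeros(2), 0.0

def predict(X):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def sgd_epoch(X, y, lr=0.5):
    # One full-batch gradient step on the logistic loss.
    global w, b
    p = predict(X)
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

for _ in range(300):
    sgd_epoch(X, y_a)
acc_a_before = np.mean((predict(X) > 0.5) == y_a)

for _ in range(300):               # keep training, now on task B only
    sgd_epoch(X, y_b)
acc_a_after = np.mean((predict(X) > 0.5) == y_a)

# acc_a_before is high; acc_a_after is near zero: the weights that
# solved task A have been overwritten by the updates for task B.
```

Nothing in the update rule preserves the old solution; the gradients for task B simply drive the weights away from it.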

Transfer learning, online learning and (deep) reinforcement learning are as susceptible to this problem as any other neural network technique.

This is a widely recognised limitation of neural network systems, old and new, and overcoming it is an active area of research. Many approaches have been proposed over the years but it remains an open problem.



