
As defined by the lecturer herself, "h(t) = h(t-1)", the very definition of hysteresis.

My point is that the lecturer missed a golden opportunity to give her students a natural intuition of "h" that they can see, feel and touch and that will serve them well for their entire careers.

The only thing "hidden" about "h" is that hysteresis is hiding in plain sight in her lecture; perhaps the lecturer did not realize it herself.

Neural networks have an undeserved reputation for being mysterious, and maybe that is partly due to a lack of basic physics knowledge.



> As defined by the lecturer herself, "h(t) = h(t-1)", the very definition of hysteresis.

How is that a definition of hysteresis?

Hysteresis is when state is a function of previous state, not identical to previous state.


It's just simplified pseudocode using the lecturer's own notation from her slides to make my point.

The following is the lecturer's full TeX form if that helps:

h(t) = \tanh \left(h(t-1) W_{\text{hh}}^T+x(t) W_{\text{hx}}^T\right)

However, I don't want our readers to get distracted by line noise; h(t) = h(t - 1) makes my point.
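For anyone who wants to see the dependence on previous state concretely, here is a minimal NumPy sketch of the lecturer's update rule. The dimensions, weights, and inputs are made up for illustration; only the formula itself comes from the slides:

```python
import numpy as np

# Illustrative sizes (not from the lecture)
rng = np.random.default_rng(0)
hidden_dim, input_dim = 4, 3
W_hh = rng.standard_normal((hidden_dim, hidden_dim))
W_hx = rng.standard_normal((hidden_dim, input_dim))

def step(h_prev, x_t):
    # h(t) = tanh(h(t-1) W_hh^T + x(t) W_hx^T)
    # The new state is a function of the previous state:
    # that dependence is what the hysteresis argument is about.
    return np.tanh(h_prev @ W_hh.T + x_t @ W_hx.T)

h = np.zeros(hidden_dim)
for x_t in rng.standard_normal((5, input_dim)):
    h = step(h, x_t)

print(h.shape)  # (4,)
```

Note that tanh keeps every component of h in (-1, 1), so the state carries a bounded, decaying trace of past inputs rather than a verbatim copy of h(t-1).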


Back in the day, having taken some kind of statistical signal processing course would have been common before getting into neural networks. That would likely have covered a lot of intuitions.



