
I wonder how AI models follow this law.


When I look at the image with the dots, I feel like my eyes are using the ratio of black to white as a low-resolution "feature" that my brain then acts on. So is it that my eyes are mainly providing already-scaled information, or is it that my brain is choosing that information as the most important thing to act on? When it comes to fast decision-making (which herd to target for hunting, which side of the hill to run down to escape a predator), I'd kind of expect the brain to be hardwired to act on low-resolution information quickly and process more detailed information at leisure.
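To make that "one coarse number" idea concrete, here's a toy sketch in Python; the random image and the 0.5 threshold are my own illustration, not anything from the article:

    import numpy as np

    # Toy version of "ratio of black to white as a low-resolution feature":
    # threshold a grayscale image and summarize it with a single number.
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))          # grayscale intensities in [0, 1]
    dark_fraction = (image < 0.5).mean()  # one coarse number describing the image
    print(f"fraction of dark pixels: {dark_fraction:.2f}")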

I think the same goes for neural nets. Numerical features are often provided in some scaled form anyway: not "how many cents has this stock price fallen in the last second?" but "what is the ratio between the price now and the price a second ago?", or even "how big is the last second's price change, expressed as a number of standard deviations from the mean over some background window?".
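In Python, the difference between those three encodings might look something like this (a rough sketch; the function name and the fixed window are my own invention):

    import numpy as np

    def encode_price_change(prices, window=100):
        # prices: 1-D array of recent prices, most recent last.
        # Three ways to present the same tick to a model.
        raw_change = prices[-1] - prices[-2]   # "cents fallen in the last second"
        ratio = prices[-1] / prices[-2]        # scale-free: same for a $5 or a $500 stock
        changes = np.diff(prices[-window:])    # background distribution of recent changes
        z_score = (raw_change - changes.mean()) / changes.std()
        return raw_change, ratio, z_score

    prices = np.array([100.00, 100.02, 99.98, 100.05, 100.01])
    print(encode_price_change(prices, window=5))

The raw change depends on the stock's absolute price level; the ratio and the z-score don't, which is roughly the "relative to background, not absolute" idea being discussed here.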

And then there’s the fact that a (useful) neural network isn’t linear to begin with (if it were, it’d just collapse to a single matrix transformation). Each layer has an activation function. None of the ones I’m familiar with are even remotely logarithmic, but e.g. tanh and sigmoid are more sensitive to small changes at values near 0 than at values far from 0. Perhaps over many layers the composition ends up resembling something roughly logarithmic? IDK
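A quick way to see that sensitivity difference numerically (throwaway Python, not tied to any framework):

    import numpy as np

    # Same small input step, applied near zero and far from zero:
    eps = 0.1
    for x in [0.0, 1.0, 3.0]:
        delta = np.tanh(x + eps) - np.tanh(x)
        print(f"x={x}: tanh moves by {delta:.4f} for a step of {eps}")
    # Prints roughly 0.0997, 0.0389, 0.0009: the derivative 1 - tanh(x)^2
    # peaks at x = 0, so the function compresses large inputs relative to
    # small ones. Log-like in spirit, but not actually logarithmic.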



