Weber–Fechner Law (wikipedia.org)
71 points by the-mitr on April 8, 2023 | hide | past | favorite | 25 comments


A practical application of this for computer people: performance improvements are imperceptible unless they come as a large factor. Even "laypeople" will notice a 2-3x difference, but a series of 1.1x improvements adding up to 3x over time generally goes unnoticed, even by tech professionals.

So, I tend not to release performance improvements one-at-a-time. That's a career-limiting move in the passive sense. Instead, I batch performance tuning changes and release several at the same time, which provides a much bigger "bang" for the users. The next morning, they'll notice, write emails to managers with words like "Wow!" in them, and then you get promoted.
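Weber–Fechner makes this easy to sketch: model perceived change as the log of the speedup ratio and compare it to a just-noticeable-difference threshold. The 1.3x threshold below is a made-up figure for illustration, not an empirical constant:

```python
import math

# Assumed just-noticeable difference for perceived speed, as a log-ratio.
# The 1.3x figure is an illustrative assumption, not measured data.
JND = math.log(1.3)

def perceived(speedup_ratio):
    """Weber-Fechner: perceived magnitude grows with the log of the ratio."""
    return math.log(speedup_ratio)

steps = [1.1] * 11                             # eleven incremental 1.1x releases
print(all(perceived(s) < JND for s in steps))  # True: no single step is noticed

total = math.prod(steps)                       # ~2.85x when batched together
print(perceived(total) > JND)                  # True: the batched release is obvious
```

The asymmetry is the whole point: the same ~2.85x total speedup either clears the threshold once or never clears it at all.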


The same is true of features.

The other side of this is that batched changes are delayed, can cause conflicts with other development, and are riskier.


My work tends to be on infrastructure, not code, so luckily I don't have coordination issues with branching, merging, or feature releases.


Simply put: human senses work logarithmically (as a professor put it in an introductory signal processing course, decades ago, during my undergrad). This is a fascinating law that, if you assume it applies to all cognitive processes, can perhaps explain things like why one gets used to nice things and why happiness goes down as we accumulate more stuff.

I’ve always thought that it’s also a powerful argument against the concepts of Heaven and Hell as described in many religions, i.e. it’s impossible to continuously torture people or keep them ecstatic.


Yeah, unfortunately, I believe that one side effect of the human (sensory) nervous system having a logarithmic response curve is that humans are bad at understanding exponential processes. If you look at an exponential function on a log scale, you get a graph that looks linear. But that's what fools people into thinking exponential processes are tame and easy to control.
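A quick numerical check of the "linear on a log scale" point: for y = a * b**t, log(y) = log(a) + t*log(b), so the log values step by a constant.

```python
import math

# For y = a * b**t, log(y) = log(a) + t*log(b): a straight line in t.
a, b = 2.0, 1.5
logs = [math.log(a * b**t) for t in range(10)]
diffs = [logs[i + 1] - logs[i] for i in range(len(logs) - 1)]

# Every step is the same constant, log(b) - i.e. the plot is linear.
print(all(abs(d - math.log(b)) < 1e-9 for d in diffs))  # True
```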


Well, that, and it’s hard to distinguish whether you’re early in exponential growth or it’s just linear. Also exponential growth does hit some kind of ceiling relatively quickly.

I think that’s an easier explanation for why we’re biased against it, one that isn’t necessarily tied to why the brain uses a logarithmic response curve. The latter could be because it provides better damping of random excitations (i.e. the brain doesn’t have to expend as much energy dealing with them, at the cost of missing signal at lower energy levels). Of course, it could be that this is why we’re bad with exponentials, but that seems like a larger leap to make.


> Well that and it’s hard to distinguish if you’re early in exponential growth or linear.

That's true, but not really saying very much. Any differentiable function is locally linear around a neighborhood of any point where the derivative exists.
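To put a number on the local-linearity point with the exponential itself: early on, e^(rt) and 1 + rt are numerically hard to tell apart.

```python
import math

# Compare exponential growth at rate 0.1 with its linear approximation
# over the first few steps; the largest gap stays under 0.1.
exp_growth = [math.exp(0.1 * t) for t in range(5)]
lin_growth = [1 + 0.1 * t for t in range(5)]
print(max(abs(e - l) for e, l in zip(exp_growth, lin_growth)))  # ~0.092
```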

> Also exponential growth does hit some kind of ceiling relatively quickly.

Well... that depends. Much like how markets can remain irrational longer than you can remain solvent, exponential growth can often remain exponential for much longer than it takes to create a problem. Conversely, sometimes it can't remain exponential long enough to prevent a problem. Exponential growth is a hard beast to tame.


> That's true, but not really saying very much. Any differentiable function is locally linear around a neighborhood of any point where the derivative exists.

What I’m saying is that it’s hard to know if you are dealing with an exponential scenario. You’d respond differently, and more quickly, but doing so for a linear function may be the wrong response. There are certain failure modes when you do encounter an exponential, but most things we encounter are more linear / damped, so the bias humans have against exponentials is rational despite the failures we have dealing with exponential problems (climate change being a notable counterexample). I’m saying it’s a rational trade-off to evolve when dealing with the world.


To be fair to (major) religions, people don't exist in their physical ("corrupt") forms in hell / heaven.


A wild example of this is from a paper published ~10 years ago: researchers wrote a program that would display an image, then slowly change the color of a significant object in the image. It was an unmistakable change, like turning a big object red in a grayscale image. When the change was slow enough (IIRC ~20-30 seconds), it would become undetectable to human perception if the person stared at it—even when explicitly told to look for a change. They published a demo video and I remember shocking several people with it.

Can’t remember the link. I might have found it through HN.


Sounds similar to (unproven) boiling frog: https://en.wikipedia.org/wiki/Boiling_frog


  > They were first published in 1860
  > Perceived loudness/brightness is proportional to logarithm of the actual intensity
Impressively forward-thinking for the time, considering we went on to use dB for all sorts of amplitudes, including sound.
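The dB convention is exactly a log-ratio scale: 10·log10 for power ratios, 20·log10 for amplitude ratios (since power goes as amplitude squared). A minimal sketch:

```python
import math

def db_from_power(p, p_ref):
    # Decibels for a power ratio: 10 * log10(P / P_ref).
    return 10 * math.log10(p / p_ref)

def db_from_amplitude(a, a_ref):
    # Amplitudes square into power, hence the factor of 20.
    return 20 * math.log10(a / a_ref)

print(db_from_power(100, 1))     # 20.0 dB
print(db_from_amplitude(10, 1))  # 20.0 dB: same value, since 10^2 = 100
```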


Is this the reasoning behind the Mel scale?
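For reference, a common formula for the mel scale is indeed logarithmic in frequency (this is the O'Shaughnessy variant; other parameterizations exist):

```python
import math

def hz_to_mel(f):
    # One common mel-scale formula; constants vary between conventions.
    return 2595 * math.log10(1 + f / 700)

print(hz_to_mel(1000))  # ~1000 mel: the scale is anchored near 1 kHz

# The log compresses high frequencies: the 4-8 kHz octave spans far
# fewer mels than the 0-4 kHz range below it.
print(hz_to_mel(8000) - hz_to_mel(4000))  # much smaller than hz_to_mel(4000)
```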


163 years ago!


Reminds me of a talk arguing that drawing the number line in equal increments is counterintuitive from the perspective of the natural world.

The example: knowing whether there’s one or two lions in the bushes is a lot more important than distinguishing eight lions from nine!
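In Weber–Fechner terms, the perceived difference tracks the ratio of the counts, not the gap:

```python
import math

# Perceived change modeled as the log of the count ratio:
print(math.log(2 / 1))  # 1 -> 2 lions: ~0.69, a big jump
print(math.log(9 / 8))  # 8 -> 9 lions: ~0.12, barely registers
```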


Super timely. I was just wondering today if something like this might be the case for perception of light levels. I was putting up blackout curtains, and when I blocked most of the light but not all, it still seemed pretty bright.


Likewise I've noticed that a laser pointer that works well in a normally lit room will be invisible outdoors.
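A back-of-the-envelope way to see why, with made-up luminance numbers: what you perceive is the ratio of spot-plus-background to background, and outdoors the background term swamps the spot.

```python
import math

# Illustrative, invented luminance values (cd/m^2) - not measured figures.
laser_spot = 50.0
indoor_wall = 100.0
sunlit_wall = 10_000.0

def perceived_contrast(background, spot):
    # Weber-Fechner: perceived difference tracks the log of the ratio.
    return math.log((background + spot) / background)

print(perceived_contrast(indoor_wall, laser_spot))  # ~0.405: easy to see
print(perceived_contrast(sunlit_wall, laser_spot))  # ~0.005: washed out
```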


If you've never run before, a 1K run may be your limit as a beginner. Running 2K can be quite challenging. However, after a year of running, 9K and 10K runs may feel no different from each other.


I used to run 10k every other day. Once I decided to try a half-marathon (21k) and found that it wasn't particularly hard. It was like running 10k twice.

But I've heard that between a half-marathon and a full marathon there's 'the wall': a qualitative change in difficulty that makes a marathon much harder than running a half-marathon twice.


So a ~10% difference is harder to perceive than a 100% difference?


Right. If it were linear, then a 1km difference would feel exactly the same regardless.


Related:

Weber–Fechner law - https://news.ycombinator.com/item?id=17301360 - June 2018 (6 comments)

Weber–Fechner law - https://news.ycombinator.com/item?id=5240806 - Feb 2013 (1 comment)


Everyone notices a minimal user interface. They also notice when the minimal UI adds an incremental feature.

Case in point: Early Google search box and subsequent changes to the Google home page.

When features get tacked on to an app, users may just give up due to cognitive overload. Case in point: Microsoft Word's features, or the current Google apps tucked under a link.


I wonder whether AI models follow this law.


When I look at the image with the dots, I feel that my eyes are using the ratio of black to white as a low resolution “feature” that my brain is acting upon. So is it that my eyes are mainly providing already scaled information, or is it that my brain is choosing that information as the most important thing to act on? When it comes to fast decision making (which herd to target for hunting, which side of the hill do I run down to escape this predator) I’d kind of expect that the brain is hardwired to act on low resolution information quickly and process more detailed information at leisure.

I think the same goes for neural nets. Numerical features are often provided in some scaled manner anyway (e.g. not “how many cents has this stock price fallen in the last second” but “what is the ratio between the price now and the price a second ago?”, or even “what is the ratio of the stock’s price change in the last second, represented as a number of standard deviations from the mean in this background data?”).

And then there’s the fact that a (useful) neural network isn’t linear to begin with (if it were, it’d just be a simple matrix transformation). Each layer has an activation function. None of the ones I’m familiar with are even remotely logarithmic, but e.g. tanh and sigmoid functions are more sensitive to small changes at values near 0 than at values far from zero. Perhaps over many layers it kind of resembles something kind of logarithmic? IDK
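The near-zero sensitivity of tanh is easy to check: its slope is largest at 0 and collapses further out, which is at least reminiscent of diminishing sensitivity (though not actually logarithmic).

```python
import math

def tanh_slope(x):
    # Derivative of tanh: 1 - tanh(x)^2, maximal at x = 0.
    return 1 - math.tanh(x) ** 2

print(tanh_slope(0.0))  # 1.0: small inputs get the steepest response
print(tanh_slope(3.0))  # ~0.0099: the same nudge far from zero barely registers
```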



