> 1. This is a familiar problem and it is where I spend the majority of my time. If a bank provided a unique transaction ID then it would be a simple matter of code; however, none of mine do.
This is an annoying problem I've had to deal with also (banks have some unique ID somewhere, why not provide it?).
I create my own unique ID to the extent that it's possible. Some transactions have the exact same info but are actually two separate transactions, which requires a bit of messy logic.
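A rough sketch of one way to build such an ID in Python (the 'date', 'amount', and 'description' fields are placeholders for whatever the export actually provides; the occurrence counter is where the messy logic lives, since it only stays stable if the bank exports rows in a consistent order):

```python
import hashlib
from collections import defaultdict

def synthetic_txn_ids(txns):
    """Assign a synthetic ID to each transaction dict."""
    seen = defaultdict(int)
    ids = []
    for t in txns:
        # Key on the fields the bank does give us.
        key = f"{t['date']}|{t['amount']}|{t['description']}"
        # Count repeats so two identical-looking but separate
        # transactions still get distinct IDs.
        seen[key] += 1
        ids.append(hashlib.sha256(f"{key}|{seen[key]}".encode()).hexdigest()[:16])
    return ids
```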
> You don't need tons of complexity. You can make any system infinitely complex, but this is a choice. A billing system with -- and I quote -- "160000 combinations" is too complicated for customer support to understand, too complicated to explain to consumers, and will inevitably result in people getting billed incorrectly.
I think the author is just referring to a configurable promos-and-deals type of engine, which is pretty common.
It just gives the business the ability to configure rather than hard-code. For example, maybe the business wants to run a buy-one-get-one-free type of promotion; this can just be configured in a generalized engine that allows quantity hurdles (or amount hurdles) that trigger some discount on some set of items.
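A minimal sketch of such a rule (all names and fields here are illustrative, not from the article's actual system):

```python
from dataclasses import dataclass

@dataclass
class PromoRule:
    eligible_skus: set       # items the promotion applies to
    qty_hurdle: int          # e.g. 2 means "buy one, get one"
    discount_pct: float      # 1.0 = free, 0.5 = half off

def promo_discount(rule, line_items):
    """line_items: list of (sku, unit_price). Returns the total discount."""
    eligible = sorted(price for sku, price in line_items if sku in rule.eligible_skus)
    groups = len(eligible) // rule.qty_hurdle
    # Discount the cheapest item in each qualifying group.
    return sum(eligible[:groups]) * rule.discount_pct

# Buy one, get one free across two hypothetical SKUs:
bogo = PromoRule(eligible_skus={"SKU-A", "SKU-B"}, qty_hurdle=2, discount_pct=1.0)
print(promo_discount(bogo, [("SKU-A", 5.0), ("SKU-B", 5.0), ("SKU-A", 5.0)]))  # 5.0
```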
This seems to make sense given that Purkinje cells in the brain have been shown to do this same type of thing in isolation (detect and respond to patterns of input).
That implied there was some low-level mechanism lurking inside at least those cells, so it's not too surprising that it's more general.
> But if you squint then sensory actions and reactions are also sequential tokens
I'm not sure you could model it that way.
Animal brains don't necessarily just react to sensory input; they have frequently already predicted the next state based on previous state and learning/experience, and not just in a simple sequential manner but at many different levels of patterning simultaneously (a local immediate action vs. actions that are part of a larger structure of behavior), etc.
Sensory input is compared to the predicted state, and the differences are incorporated into the flow (a toy sketch of this compare-and-update loop appears below).
The key thing is that our brains are modeling and simulating the world around us and its future state (the physical world as well as the abstract world of what other animals are thinking). It's not clear that LLMs are doing that (my assumption is that they are not doing any of it, and until we build systems that do, we won't be moving towards the kind of flexible and adaptable control our brains have).
Edit: I just read the rest of the parent post, which says basically the same thing; I was skimming so I missed it.
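For what it's worth, the predict/compare/update cycle is easy to caricature in code; here is a toy, single-level sketch (real predictive processing is hierarchical and runs over rich world models, not a scalar):

```python
def predictive_loop(sensory_stream, learning_rate=0.1):
    """Toy sketch: predict the next input, compare it to what arrives,
    and fold the difference (prediction error) back into the model."""
    state = 0.0
    for observation in sensory_stream:
        prediction = state                  # the model's guess at the next input
        error = observation - prediction    # mismatch with actual sensory input
        state += learning_rate * error      # incorporate the difference into the flow
        yield prediction, error
```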
The problem with salt is that it's difficult to get rid of, and it's non-specific. For driveway cracks it might be OK, but boiling water is still much safer and harder to fuck up.
Boiling water is really useful too, since the boiling damages the plant in a way that lets pathogens enter, and those then eat/destroy the roots that would otherwise cause regrowth.
> From there, learning that red lights correlate with the large, fast, dangerous object stopping is just a matter of observation
I think "just a matter of observation" understates the many levels of abstraction and generalization that animal brains have evolved to effectively deal with the environment.
Here's something I just read the other day about this:
"After experiencing enough sequences, the mice did something remarkable—they guessed a part of the sequence they had never experienced before. When reaching D in a new location for the first time, they knew to go straight back to A. This action couldn't have been remembered, since it was never experienced in the first place! Instead, it's evidence that mice know the general structure of the task and can track their 'position' in behavioral coordinates"
AGI = lim(x->0) AIHype(x)
where x = length of winter