I used to look at all TensorFlow questions when I was on the TensorFlow team (https://stackoverflow.com/tags/tensorflow/info). Unclear where teams go to interact with their users now... Reddit? But the tone on Reddit is kind of negative/complainy.
Stricter (but not looser) standards can be imposed at the state level. Canada has no binding national drinking water law; it leaves it to the territories/provinces to decide how to implement the guidelines.
What if instead we could all collectively agree that access to some amount of fresh, running water is a fundamental human need? We figure out a number, and the first N units are free. Additional units cost money, and perhaps you have two or three usage tiers where heavy users are disincentivized through additional cost.
You calculate the figures so that the higher usage tiers subsidize the costs of the basic-needs users.
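The tiered scheme described above is easy to sketch. This is a minimal illustration with made-up numbers (the free allotment, tier boundaries, and rates are all hypothetical, not figures from the comment):

```python
# Hypothetical tiered water billing: the first FREE_UNITS each month are
# free; usage beyond that is billed at rising rates, so heavy users
# subsidize basic-needs users. All numbers are illustrative assumptions.
FREE_UNITS = 10.0                          # assumed free monthly allotment
TIERS = [(40.0, 2.0), (float("inf"), 5.0)]  # (tier upper bound, price/unit)

def water_bill(units: float) -> float:
    """Bill for one month's usage under the tiered scheme."""
    bill = 0.0
    remaining = max(0.0, units - FREE_UNITS)  # free allotment comes off first
    prev_bound = FREE_UNITS
    for upper, rate in TIERS:
        in_tier = min(remaining, upper - prev_bound)  # units billed at this rate
        bill += in_tier * rate
        remaining -= in_tier
        prev_bound = upper
        if remaining <= 0:
            break
    return bill
```

A household using 5 units pays nothing; one using 50 units pays for 30 units at the first paid rate plus 10 at the higher rate, which is where the cross-subsidy comes from.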
I've used this feature before to make my chats discoverable through search engines. I had to click it manually each time I shared; it didn't toggle on by itself.
The difference is that for some assets you can calculate value based on fundamentals. E.g., humans need shelter, so you can estimate the future value of shelter (real estate) based on migration trends and other factors. How do you estimate the future value of bitcoin? That lack of predictability is probably why serious investment funds don't go into crypto.
Realistic simulation of neurons is expensive. Back in my grad school days we ran GENESIS and could afford at most 10k neurons -- each neuron takes a lot of work to model via the corresponding differential equations. However, it's unclear how to translate this into requirements for artificial neural networks -- the type of computation is too different.
A different metric is a more relevant goalpost -- the number of synapses. If each of the ~125 trillion synapses in the brain can adjust its strength independently of the others, each one loosely corresponds to a parameter in a neural network. So if we get 100-trillion-parameter networks training and still no human intelligence, we'll know conclusively that the bottleneck is something else. Currently, training 1T-parameter networks seems feasible.
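A quick back-of-envelope on the synapse-as-parameter analogy (my own arithmetic, with an assumed fp16 storage format, which the comment doesn't specify):

```python
# Rough memory footprint of a network with one parameter per synapse.
# Assumption: weights stored as fp16 (2 bytes), ignoring optimizer state,
# which would multiply this several-fold during training.
SYNAPSES = 125e12        # ~125 trillion synapses, the count cited above
BYTES_PER_PARAM = 2      # fp16

weights_tb = SYNAPSES * BYTES_PER_PARAM / 1e12   # terabytes for weights alone
one_t_tb = 1e12 * BYTES_PER_PARAM / 1e12         # same figure for a 1T model

print(f"125T params: {weights_tb:.0f} TB of weights")   # 250 TB
print(f"  1T params: {one_t_tb:.0f} TB of weights")     # 2 TB
```

So the gap between today's feasible 1T-parameter models and the synapse count is roughly two orders of magnitude in raw weight storage, before counting activations or optimizer state.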
If you collapse things to just synapses, you've lost a lot of the complexity of dendritic arbors. The article doesn't mention gap junctions, but there are networks of those too, with different properties.
It seems to me that mean field models, which could be deep networks internally, are a much more parsimonious computational approach.