Of course you could train a NN to do arithmetic, but this is much more impressive.
Training a NN to solve problems with the available tools means more abstraction, and is closer to AGI than essentially just learning a LUT.
Yeah, I only trained addition. Exploring the impact of training a net to perform a range of operations with the minimum plausible neuron count would actually be quite interesting.
I don’t see any reason why it would be significantly harder to do, though.
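For reference, the addition setup is basically this kind of thing (a rough PyTorch sketch; the layer sizes and training loop here are illustrative placeholders, not my exact configuration):

```python
import torch
import torch.nn as nn

# Tiny MLP mapping (a, b) -> a + b. Sizes are placeholders,
# not the architecture from the original toy run.
model = nn.Sequential(
    nn.Linear(2, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5_000):
    x = torch.rand(256, 2) * 100      # random pairs in [0, 100)
    y = x.sum(dim=1, keepdim=True)    # ground-truth sums
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Worth noting that addition is linear, so a single `Linear(2, 1)` layer can represent it exactly; the hidden layer only starts to matter once you mix in non-linear operations like multiplication, which is where the minimum-neuron-count question gets interesting.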
You’re right about accuracy. I didn’t let the model train long enough to push the error low enough to guarantee exact results over the input range. But then again, this was designed as a toy experiment, not something people should rely on.
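Concretely, “exact results” here would mean the rounded prediction matches the true sum for every pair in the trained range, e.g. (continuing from the sketch above, which defines `model`):

```python
# Exhaustively check integer pairs in the trained range: round each
# prediction and compare against the true sum.
with torch.no_grad():
    a = torch.arange(0, 100, dtype=torch.float32)
    grid = torch.cartesian_prod(a, a)   # every (a, b) pair, shape (10000, 2)
    pred = model(grid).squeeze(1).round()
    exact = (pred == grid.sum(dim=1)).float().mean()
    print(f"exact-match rate: {exact.item():.4f}")
```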