Well, that's a bit like asking "what's with the obsessive emphasis on programming computers, after all they're just objects that compute?"
Yes, neural networks are objects that compute; there's even the "universal approximation" theorem, which says a basic, albeit sufficiently large, neural network can approximate any function (from a broad class of functions) to arbitrary precision. However, the theorem says nothing about whether you'll ever actually _find_ the neural network that corresponds to that function. That's what training is for: it lets us find (the parameters of) the NN that does the computation we want.
In other words, training is how we program NNs, but in general it can be really hard to arrive at the "program" you're looking for.
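To make that concrete, here's a minimal sketch (plain NumPy, toy example of my own, not from the article) of what "finding the parameters" means: a tiny one-hidden-layer network is nudged by gradient descent until it approximates sin(x). The weights W1, b1, W2, b2 _are_ the "program".

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)  # the function we want the network to "become"

# Parameters we're trying to find via training.
W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 1, (32, 1)); b2 = np.zeros(1)

lr = 1e-2
for step in range(5000):
    # Forward pass: x -> tanh hidden layer -> linear output.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y  # proportional to the mean-squared-error gradient

    # Backward pass: chain rule through the two layers.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)

    # Gradient-descent update: nudge the parameters toward the target function.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```

The loop above is the expensive part people "obsess" over; once it's done, running the forward pass alone is cheap.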
Your point is that training is computationally intensive, but your question is "why do people obsess over training?" Sounds like you've answered your own question, then. It's currently hard to train networks, so people "obsess" over improving the methods (see, for example, the article we're commenting on) so that training doesn't have to take as long.
Also note that it's not necessarily true that NNs spend most of their time training. Maybe you have to spend a week on a huge GPU cluster training an autonomous-driving model, but it then runs in inference ("compute") mode for hours a day across tens of thousands of cars.