You don't train it for every image; in that sense, a neural network is often a "closed-form solution": it gives you an equation, admittedly a very convoluted one, that can be used to obtain a solution, admittedly usually an approximation, in a finite amount of time. The usual approach to this problem (according to the paper) is an iterative technique "to solve an optimization problem of matching the Gram matrices of deep features extracted from the content and style photos", whereas this one is simply two passes: stylization and smoothing.
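To make "matching the Gram matrices of deep features" concrete, here's a minimal PyTorch-style sketch of the Gram matrix those iterative methods try to match; the shapes and normalisation here are my own assumptions, not the paper's exact formulation:

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of deep features: (B, C, H, W) -> (B, C, C).

    Entry (i, j) is the inner product of channel i and channel j, i.e. the
    second-order "style" statistics the iterative methods try to match.
    """
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)       # flatten the spatial dimensions
    gram = flat @ flat.transpose(1, 2)      # batched (C x HW) @ (HW x C)
    return gram / (c * h * w)               # normalise by feature size

# A Gatys-style iterative method repeatedly takes gradient steps on the output
# image to shrink || gram_matrix(f(output)) - gram_matrix(f(style)) ||^2 for
# features f from a pretrained CNN; the two-pass approach replaces that
# per-image loop with one stylisation pass and one smoothing pass.
```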
Previous stylisation was slow because it needed SGD optimisation for each image being stylised. This uses a NN trained once. Once you've trained a NN, it is precisely a closed-form solution, in the style of y = max(0, 3x + 4). However, they are normally a little longer to write down :P
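To spell out the "y = max(0, 3x + 4)" point: once the weights are frozen, the whole network is just a fixed expression you evaluate per input. A toy sketch with made-up weights:

```python
import numpy as np

# Made-up "trained" weights: after training, these are just constants.
W1, b1 = np.array([[3.0]]), np.array([4.0])
W2, b2 = np.array([[2.0]]), np.array([-1.0])

def trained_net(x):
    h = np.maximum(0.0, x @ W1 + b1)   # the y = max(0, 3x + 4) layer
    return h @ W2 + b2                 # one more fixed linear layer

# Applying it is a single forward pass, no per-input optimisation:
print(trained_net(np.array([[1.0]])))  # [[13.]]
```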
Ah okay, right, this is the answer. Previous approaches [1] are deep generative models that you have to optimize for each input, whereas here you just run a forward evaluation on a model that you've trained beforehand.
I would still argue the term closed-form is misleading here, because:
- Even during training, at any given time you can read off a "closed-form expression" for a neural network of this type, so closed-form in this broad sense really doesn't mean much. Furthermore, any result of any numerical computation ever is also a closed-form solution by this standard, on the grounds that it results from a computation that completed in a finite number of steps. So whenever you ask a grad student to run some numerical simulation, expect them to come back saying "Hey, I found a closed-form expression!"
- The reason the above is absurd is that these trained NNs aren't really solutions to the optimization problem, but approximations. So this is really saying: I have a problem, I don't know how to solve it, but I can produce an infinite sequence of approximations. Now I'm gonna truncate this sequence of approximations and call that a closed-form solution.
The analogy in high-school math would be computing an infinite sum that has no nice closed form, but instead just adding up to some large N and calling that a closed-form solution.
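To put a concrete (and entirely hypothetical, my own, not the paper's) number on that analogy: the sum of 1/n^3 converges, but no elementary closed form for its value is known, so numerically all you can do is truncate:

```python
# Sum 1/n^3 up to a large N; the limit has no known elementary closed form.
partial = sum(1.0 / n**3 for n in range(1, 1_000_001))
print(partial)  # ~1.2020569..., a fine approximation, but calling the
                # truncated sum a "closed-form solution" would be a stretch.
```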
Actually, I agree with you. Initially you seemed to object to the term "closed form"; this now highlights the more pertinent point: these models are 100% closed form, but 0% "solution" in the formal sense.
Someone correct me if I'm wrong, but I believe this refers to the fact that it can be expressed in terms of certain simple mathematical operations like addition, subtraction, multiplication, powers, roots, etc., and as a consequence the execution is very efficient. My understanding is that a "closed-form" solution is essentially something that resembles a polynomial (again, accepting corrections!).
Closed form just means you can do it in a finite number of operations. So it's just "run X", rather than the previous versions of this kind of thing, which are "repeat X until measure Y is lower than the limit I care about". (My basic understanding.)
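In code, that distinction looks roughly like this; the function names are made up for illustration, not from the paper or any library:

```python
# "Repeat X until measure Y is low enough": per-image optimisation loop.
def stylise_iteratively(content, style, loss_fn, step_fn, tol=1e-3):
    output = content.copy()
    while loss_fn(output, content, style) > tol:  # unknown number of iterations
        output = step_fn(output, content, style)  # one gradient step on the image
    return output

# "Just run X": a single forward pass through a network trained once, up front.
def stylise_feedforward(content, style, trained_net):
    return trained_net(content, style)            # fixed, finite number of ops
```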
I checked the Wikipedia article, and the sorts of operations involved do appear to be part of the definition: https://en.wikipedia.org/wiki/Closed-form_expression (though it sounds like it's a somewhat loosely defined term).
I really don't think it's fair to call neural networks closed-form solutions. The term immediately makes me assume that you can bypass the training stage altogether.