I wonder what Google would have to say, since it's built on the Deep Dream experience. Also, it was recently improved with newer techniques, which may well be non-infringing.
Eh, most computers now have some decent muscle. Also, it's an iterative process, so you can show the user a meaningful progress bar; even if it takes an hour or so, people won't complain, and for many it would take only minutes. One just needs to set expectations straight.
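Rough sketch of what I mean (hypothetical names, not the actual neural-style code): the number of iterations is fixed up front, so reporting progress is trivial.

  def stylize(img, total_iters=1000, on_progress=None):
      for i in range(total_iters):
          img = gradient_step(img)                # one optimization step (assumed helper)
          if on_progress and (i + 1) % 10 == 0:
              on_progress((i + 1) / total_iters)  # drives a progress bar / ETA estimate
      return img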
"By increasing the server memory to 27GB I manage to create 1024px images on CPU - seven hours per image" - http://spiceprogram.org/artcorn/
And the GPU version requires amounts of video memory that you generally won't find outside research or GPGPU cards. With 4GB of VRAM you might get a 920x690 image processed in under an hour, using the recent optimizations added to the neural-style repo.
So a standalone iOS version seems pretty useless. A standalone Windows version might appeal to people with gaming rigs but without the technical knowledge or determination to install neural-style in its current form. It certainly has potential, just not as huge as the cloud version.
If you were a serious graphic design professional, wouldn't you already have an i7 or Xeon rig with SLI set up, and be accustomed to leaving a render running overnight?
The smallest suitable AWS instance comes to about $450/mo (roughly $300/mo paid upfront); the one he suggests comes to $200/mo, but the GPU appears to be a cheaper card. Still, one would need some sort of payout to keep it running; it's not exactly cheap, and you'd need to shell out ~$2k upfront.
But they could terminate your instance at any time...
Usually the training step is the one that takes the longest, and it can take a whole day. The evaluation step can usually be done on a CPU almost instantly. I'm not sure what the utility of spot instances is in deep learning, not to mention the code complexity and dev-ops investment needed to run on very transient hardware.
Finally, you can't haggle with AWS, but you can basically name your price with private dedicated server providers like that.
The net comes pretrained. I haven't dug into the code much, but on the surface it seems to work by using the net as a scoring system for feature similarity and then running gradient descent on that (in very broad terms; I saw the style layer being used multiple times at multiple scales, probably to capture high- and low-level patterns independently).
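That broadly matches the Gatys et al. paper the repo implements: keep the pretrained net frozen, define content and style losses on its activations (the style one via Gram matrices at several layers), and run gradient descent on the image pixels. A hedged, PyTorch-flavored sketch of the idea (the actual neural-style repo is Torch/Lua; the layer indices, weights, and LBFGS choice here are my assumptions, not lifted from its code):

  import torch
  import torchvision.models as models

  # frozen pretrained net used purely as a feature scorer
  vgg = models.vgg19(pretrained=True).features.eval()
  for p in vgg.parameters():
      p.requires_grad_(False)

  def features(x, layers):
      feats = []
      for i, layer in enumerate(vgg):
          x = layer(x)
          if i in layers:
              feats.append(x)
      return feats

  def gram(f):
      # Gram matrix = feature co-occurrences, i.e. "style" at one scale
      b, c, h, w = f.shape
      f = f.reshape(c, h * w)
      return f @ f.t() / (c * h * w)

  # content_img, style_img: 1x3xHxW tensors, already resized/normalized (assumed given)
  content_layers = [21]               # one deepish layer for content
  style_layers = [0, 5, 10, 19, 28]   # several depths, i.e. multiple scales, for style

  content_targets = [f.detach() for f in features(content_img, content_layers)]
  style_targets = [gram(f).detach() for f in features(style_img, style_layers)]

  # optimize the image itself, not the network weights
  img = content_img.clone().requires_grad_(True)
  opt = torch.optim.LBFGS([img])

  def closure():
      opt.zero_grad()
      c_loss = sum((f - t).pow(2).mean()
                   for f, t in zip(features(img, content_layers), content_targets))
      s_loss = sum((gram(f) - t).pow(2).sum()
                   for f, t in zip(features(img, style_layers), style_targets))
      loss = c_loss + 1e4 * s_loss    # style weight is an arbitrary guess
      loss.backward()
      return loss

  for _ in range(50):                 # each LBFGS step runs several inner iterations
      opt.step(closure)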
This guy had the right idea: http://ostagram.ru/static_pages/lenta?last_days=30
But the service costs an arm and a leg to run on current CUDA clouds.
First to get to a standalone Python or iOS version takes it all :D