What technology would you bet your business on then?
Today, you can write numpy code, and that runs on pretty much all CPUs from all vendors, with different levels of quality.
A one-line change lets you run all the numpy code you write on nvidia GPUs, which, at least today, are probably the only GPUs you'd want to buy anyway.
In practice, you would probably also be running your whole software stack on CPUs, at least for debugging purposes. So if you change your mind about using nvidia hardware at some point, you can just revert that one-line change and go back to exclusively targeting CPUs. Or who knows, maybe some other GPU vendors will provide their own numpy implementation by then, and you can just go from CuPy to ROCmPy or similar.
Either way, if you are building a numpy stack today, I don't see what you lose by using CuPy when running your products on hardware where it's available.
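To make the "one-line change" concrete, here's a minimal sketch. The only assumption is that CuPy is installed on machines with an NVIDIA GPU; everywhere else the same code falls back to plain NumPy, since the two expose the same array API for common operations:

```python
# The "one line" in question is the import. Alias whichever backend is
# available as `xp` and the rest of the code is identical on CPU or GPU.
try:
    import cupy as xp   # runs on NVIDIA GPUs via CUDA
except ImportError:
    import numpy as xp  # CPU fallback, same API

a = xp.arange(1_000_000, dtype=xp.float64)
b = xp.sqrt(a)              # identical call on either backend
print(round(float(b[-1]), 4))
```

Reverting to CPU-only is exactly the reversal described above: replace the `try`/`except` with a plain `import numpy as xp`.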
shrug I'll bet my business on waiting an extra 15 minutes for analytics code to run.
Seriously. There's little most businesses really need that I couldn't do on a nice 486 running at 33MHz. Now, if a $5000 workstation gives even a 5% improvement to employee productivity, that's an obvious business decision. That doesn't mean it's necessary for the business to work. So dropping $1000 on an NVidia graphics card, if things ran faster and there were no additional costs, would be a no-brainer.
There are additional costs, though.
And no, you can't just go back from faster to slower. Try running Ubuntu 20.04 on the 486 -- it won't go. Over time, code fills up the resources available. If I could take a 2x performance hit, it'd be fine. But GPUs are orders of magnitude faster.
Please, show us how to train Alexa or BERT on a 486. That'll definitely win you the Turing Award and the Gordon Bell Prize, and probably the Nobel Peace Prize for all those power savings!
Please show me a business (aside from Amazon, obviously) that needs Alexa.
Most businesses need a word processor, a spreadsheet, and some kind of database for managing employees and inventory. A 486 does that just fine.
Most businesses derive additional value from having more, but that's always an ROI calculation. ROI has two pieces: return and investment. Basic business analytics (regressions, hard-coded rules, and similar) have high return on low investment. Successively complex models typically cost exponentially more for diminishing returns. At some point there's a breakpoint, but that breakpoint varies for each business.
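As a toy illustration of the low-investment end of that curve: the kind of "basic analytics" mentioned above is often a one-call regression. The numbers here are made up for the example:

```python
import numpy as np

# Hypothetical monthly sales figures (invented for illustration).
months = np.arange(6, dtype=float)
sales = np.array([100.0, 104.0, 111.0, 115.0, 119.0, 126.0])

# One call gives a least-squares linear trend: slope and intercept.
slope, intercept = np.polyfit(months, sales, 1)
print(round(slope, 2))  # units of sales gained per month
```

This level of modeling runs instantly on any CPU, which is the point: the expensive hardware only starts paying off well past this breakpoint.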
If the goal is to limit GPGPU to businesses whose core value-add is ML (the ones building things like Alexa), NVidia has done an amazing job. If the goal is to have GPGPU as common as x86, NVidia has failed.
> Please show me a business (aside from Amazon, obviously) that needs Alexa.
I'll bite.
Have you ever been getting a haircut, and the hairdresser had to stop to answer the phone and book an appointment?
Have you ever gone to pick up a pizza at a small pizzeria and noticed that, of 4 employees, 3 are making pizzas and one spends 99% of their time on the phone?
Every single business that you've ever used in your life would be better off with an Alexa that can handle the 99% most common user interactions.
In fact, even small pizzerias and hair salons nowadays are using third-party online booking systems with chat bots. Larger companies are able to turn a 200-person call center into a 20-person operation just by using an Alexa to at least identify customers and resolve the most common questions.
If CuPy supported NVidia and AMD, and was folded into Numpy, I'd buy the biggest, beefiest GPU I could find overnight.