Because conda (and by extension, Anaconda) harms the Python ecosystem with its non-standard package format. If Continuum Analytics cared about making scientific packages like numpy and scipy more accessible, they would build binary wheels for Linux/Windows/OSX with the MKL. There is no technical reason preventing them from doing this. Instead they lock you into their package/environment manager and create confusion when you have pip and conda packages installed at the same time.
Conda is a package manager for more than Python. It manages libraries that are shared between Python and R and other tools. This isn't possible with a purely Python-based solution.
It also does envs better than most other solutions, since it prefers hardlinks, which gives you environment isolation at very little cost in disk space.
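To illustrate just the mechanism (made-up paths, not conda's actual layout): a hardlinked file in a second env is another name for the same inode, so the data isn't stored twice.

    # Toy illustration of hardlinking. The "copy" in the second environment
    # points at the same inode, so the file's data exists on disk only once.
    import os, tempfile

    with tempfile.TemporaryDirectory() as d:
        cached = os.path.join(d, "pkgs", "libfoo.so")
        linked = os.path.join(d, "envs", "myenv", "libfoo.so")
        os.makedirs(os.path.dirname(cached))
        os.makedirs(os.path.dirname(linked))
        with open(cached, "wb") as f:
            f.write(b"\0" * 1000000)          # stand-in for a large library

        os.link(cached, linked)               # hardlink into the "environment"

        print(os.stat(cached).st_ino == os.stat(linked).st_ino)  # True: same inode
        print(os.stat(linked).st_nlink)                          # 2: two names, one copy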
Full disclosure: I work for Anaconda Inc. where I am trying my best to make package management and scientific computing easier.
Shared libraries were invented at a time when disk space was very scarce. In my opinion, the fact that we still use dynamic linking today is largely an accident of history. Literally the only advantage over static linking (or distributing any shared libraries along with your application) is that you save some space, at the cost of requiring a complex package management system. Also, a sufficiently smart filesystem can deduplicate this transparently.
And of course, it makes distribution of any application that uses shared libraries a whole lot more difficult.
Disk space is the cheapest and most abundant computing resource in 2017. If you want to make these packages more widely available, just create wheels/whatever the R equivalent is. Good, well understood, and interoperable tooling for these already exists.
Dynamic linking is hugely helpful when, for example, you want to update to the latest openssl without updating half the binaries on your system. The fact that packages statically include openssl in their wheels, and that those wheel versions then get explicitly pinned in projects, introduces some juicy attack vectors.
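For example, you can check which OpenSSL your interpreter's ssl module is actually linked against and compare it to what `openssl version` reports on the host; a copy bundled inside a pinned wheel can silently lag behind both.

    # Reports the OpenSSL that the ssl module was built/linked against.
    # A statically bundled copy inside a pinned wheel won't pick up
    # system security updates the way a dynamically linked one does.
    import ssl
    print(ssl.OPENSSL_VERSION)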
MKL is developed, owned, and licensed by Intel, not Anaconda, Inc. Anaconda distributes it with permission from Intel. If there were to be wheels built and uploaded to PyPI, it would be by Intel. Instead, Intel has chosen conda as the tool they themselves use to deliver their Intel Distribution for Python.
Could you explain in a bit more detail how you would integrate an SVM layer into a DNN? The kernel matrix depends on all samples, while at training time you would only have access to those in the minibatch.
The simplest approach is to pop it on top. Run your DNN to reduce your input down to a nicer, cleaner, lower-dimensional output, then plop an SVM on top of that for classification.
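Rough sketch of that setup (shapes, training details, and the synthetic data are just placeholders; PyTorch for the feature extractor, scikit-learn for the SVM):

    # Pre-train a small network as a feature extractor with a throwaway
    # softmax head, then fit an SVM on the frozen penultimate-layer features.
    import torch
    import torch.nn as nn
    from sklearn.svm import SVC

    torch.manual_seed(0)
    X = torch.randn(512, 20)                 # toy inputs
    y = torch.randint(0, 2, (512,))          # toy binary labels

    features = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                             nn.Linear(64, 8), nn.ReLU())
    head = nn.Linear(8, 2)                   # used only to train the features
    opt = torch.optim.Adam(list(features.parameters()) + list(head.parameters()))
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(200):                     # pre-train the feature extractor
        opt.zero_grad()
        loss_fn(head(features(X)), y).backward()
        opt.step()

    with torch.no_grad():                    # freeze and export the features
        Z = features(X).numpy()

    svm = SVC(kernel="rbf").fit(Z, y.numpy())  # the SVM sits "on top"
    print("train accuracy:", svm.score(Z, y.numpy()))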
Seems like in that case you would train both models separately on different cost functions. By phrasing it as a layer I was expecting both the SVM and the DNN could be trained simultaneously.
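One way I could imagine getting that, at least for a linear SVM (which would also sidestep the kernel-matrix problem), is to keep the final layer linear and train the whole network under a multiclass hinge (margin) loss instead of softmax cross-entropy, so one backward pass updates both. Rough sketch with made-up sizes and data:

    # End-to-end training under a margin loss: the last linear layer acts
    # like a linear SVM on the learned features, and the whole stack is
    # updated together on a single objective.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(512, 20)
    y = torch.randint(0, 3, (512,))

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                          nn.Linear(64, 8), nn.ReLU(),
                          nn.Linear(8, 3))       # linear "SVM" head
    opt = torch.optim.Adam(model.parameters())
    margin_loss = nn.MultiMarginLoss()           # multiclass hinge loss

    for _ in range(200):                         # one loss, one backward pass
        opt.zero_grad()
        margin_loss(model(X), y).backward()
        opt.step()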
The premise of the optimizations in this article doesn't always hold, unfortunately. If x is floating point, sum(reverse(x)) is unlikely to equal sum(x) for long enough x, and sorting x beforehand will make this effect even more apparent.
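A minimal demonstration: the same three values, summed in two orders.

    # Float addition isn't associative: intermediate rounding depends on the
    # order of the terms, so reordering the summands changes the result.
    x = [1.0, 1e16, -1e16]
    print(sum(x))            # 0.0 -- the 1.0 is absorbed into 1e16, then cancelled
    print(sum(reversed(x)))  # 1.0 -- the large terms cancel first, so the 1.0 survives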