This course appears to be above average in terms of terminology and code quality for a free online course (although the DL part focuses on MXNet, which is not as ubiquitous as TensorFlow/PyTorch).
1. We have just translated the contents from the Chinese book https://github.com/d2l-ai/d2l-zh. However, the translation quality is not good enough yet; thus, we are still editing. As @EForEndeavour put it, you are more than welcome to become a contributor to the book if you spot any issue: https://github.com/d2l-ai/d2l-en Your help is valuable for making the book better for everyone.
2. Indeed, recently we were also asked by a few instructors why we use MXNet in 'Dive into Deep Learning' (D2L). Here is what we think:
a) Traditional deep learning (DL) textbooks often illustrate algorithms without implementations.
b) In view of this, D2L features both algorithms and implementations for DL. Doing so does not require features exclusive to any particular DL framework.
c) Thus, even when re-implementing the algorithms in the book with other DL frameworks, the code won't look too different. We use MXNet because we are familiar with it. No matter which DL framework one starts with, it should be easy to switch to another.
As a concrete example, in the case of applying RNNs to language models, the implementation includes data preprocessing, model construction, and training loops. D2L guides you through how to transform text data to allow efficient mini-batch iteration, how to implement an RNN (with or without the built-in RNN API), and how to efficiently and effectively train a language model. On one hand, even if a DL novice memorizes the algorithms in a traditional textbook, it is still hard to apply them to a real project without knowing the implementation details. On the other hand, such implementations are general: the code will be similar even when re-implemented with another framework.
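To illustrate the framework-agnostic point, here is a minimal sketch (not the book's actual code) of the data-preprocessing step described above: mapping characters to integer indices and slicing the sequence into (input, target) mini-batches, where each target is the input shifted by one time step. The function name and its parameters are hypothetical; the same logic carries over unchanged whether the batches are later fed to MXNet, TensorFlow, or PyTorch.

```python
def sequential_batches(text, batch_size, num_steps):
    """Turn raw text into mini-batches for RNN language modeling.

    Returns the character vocabulary and a list of (X, Y) batches,
    each a batch_size x num_steps list of token indices, with Y
    being X shifted one position forward in the original text.
    """
    # Build a vocabulary: character -> integer index.
    vocab = {ch: i for i, ch in enumerate(sorted(set(text)))}
    indices = [vocab[ch] for ch in text]

    # Split the index stream into batch_size parallel rows; drop the
    # tail tokens that don't fit. Targets need one extra token, hence -1.
    steps_per_row = (len(indices) - 1) // batch_size
    xs = [indices[r * steps_per_row:(r + 1) * steps_per_row]
          for r in range(batch_size)]
    ys = [indices[r * steps_per_row + 1:(r + 1) * steps_per_row + 1]
          for r in range(batch_size)]

    # Cut each row into non-overlapping windows of num_steps tokens.
    batches = []
    for t in range(0, steps_per_row - num_steps + 1, num_steps):
        X = [row[t:t + num_steps] for row in xs]
        Y = [row[t:t + num_steps] for row in ys]
        batches.append((X, Y))
    return vocab, batches
```

The design choice here (consecutive rows taken from contiguous chunks of the text) is what lets an RNN carry hidden state across adjacent batches; a random-sampling variant would instead reset state every batch.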
3. We thank institutions for adopting or recommending D2L in their courses, such as UCLA CS 269 Foundations of Deep Learning, University of Science and Technology of China Deep Learning, UIUC CS 498 Introduction to Deep Learning, and UW CSE 599W Systems for ML. When we wrote the book in Chinese, it benefited from a lot of feedback at https://discuss.gluon.ai/latest?order=views and pull requests from 120+ contributors. It would be very helpful to get feedback and help from more readers while we are editing.
Even something niche would be welcome. I want a great course in computability theory. HN is almost 100% ML these days, as if ML is the only exciting thing happening in the CS world. Very disappointing.
The senior author, Alex Smola, is a director of machine learning at AWS (and is a big deal: https://alex.smola.org/), which helps explain the choice of Apache MXNet. That said, the authors write that "even if you use other deep learning frameworks in your research or work, we hope that these codes will help you better understand deep learning algorithms."
Keras is quite popular as well. My point is, the cognitive load of these courses is quite high even without having to deal with an unfamiliar framework.
Don't get me wrong, it seems like a great course, and anyone interested in the field will certainly get a lot of mileage out of it. But by choosing a rather obscure framework it kind of shoots itself in the foot compared to, e.g., the Fast.ai course, which is jam-packed with practical advice and uses PyTorch. Like it or not, there are two dominant frameworks right now: TF and PyTorch; the latter is my personal favorite by far. A more practical (and fairly low-effort) approach, therefore, would be to duplicate the code samples in PyTorch.
That said, Fast.ai shoots itself in the foot a little too, by requiring Python 3.6 and up, which a lot of people don't have out of the box. I understand why they do it (type annotations), but still. They also hide PyTorch behind a rather large ball of Python with cognitive load of its own.
Your point is understood, and reiterated in other posts here. I just wanted to add context to the course as being presented by an author of MXNet, who is free to present it in his own framework.
Anyway, there's no dearth of courses in the popular frameworks already. It doesn't really help enrich the community if all educational resources teach only the current "market winner"; we might be missing other opportunities or ways of thinking. Imagine if all schools only taught JavaScript or Java!
I use Keras, TensorFlow, and PyTorch (mostly Keras), but one advantage of MXNet is that it has good Scala support and pretty good (and improving) Clojure support, so if you like to use those languages then MXNet could fit your workflow.