IMO, no language without a Jupyter kernel can ever be a serious contender in the machine learning research space.
I was pretty skeptical of Jupyter until recently (because of accessibility concerns), but now I can't imagine my life without it. Incidentally, this gave me a much deeper appreciation and understanding of why people loved Lisp so much. An overpowered REPL is a useful tool indeed.
Fast compilation times are great and all, but the ability to modify a part of your code while keeping variable values intact is invaluable. This is particularly true if you have large datasets that are somewhat slow to load or models that are somewhat slow to train. When you're experimenting, you don't want to deal with two different scripts, one for training the model and one for loading and experimenting with it, particularly when both of them need to do the same dataset processing operations. Doing all of this in Jupyter is just so much easier.
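To make that workflow concrete, here's a minimal sketch of the pattern (the file path, column names, and model choice are all hypothetical): the first cell does the slow load exactly once, and the second cell can be re-run endlessly against the data already sitting in kernel memory.

    # Cell 1: run once; the loaded data stays alive in the kernel.
    import pandas as pd

    df = pd.read_csv("data/events.csv")    # hypothetical path; assume this takes minutes
    features = df.drop(columns=["label"])  # hypothetical column names
    labels = df["label"]

    # Cell 2: re-run freely while iterating on the model;
    # `features` and `labels` are never reloaded.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    model = RandomForestClassifier(n_estimators=200)
    print(cross_val_score(model, features, labels, cv=5).mean())

In a plain script, every tweak to Cell 2 would mean paying the Cell 1 cost again, or maintaining a separate load-and-cache script; the notebook makes that split unnecessary.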
With that said, this might be a great framework for deep learning on the edge. I can imagine it, coupled with a nice desktop GUI framework, being used in desktop apps that run such models: things like LLM Studio, Stable Diffusion, voice changers built on RVC (as virtual sound cards and/or VST plugins), or even internal, proprietary models used by company employees. These are use cases where the model is already trained and the architecture is known, but you want a binary that can be distributed easily.
Jupyter notebooks are indeed very important. They mainly provide data scientists with two things: a literate programming environment (mixing text, code, and outputs) and a way to hold the state of data in memory (so that you can perform computations interactively).
For holding state, a Nim REPL (which is on the roadmap as a secondary priority, after completing incremental compilation) is definitely an option.
Another option could be to create a library framework for caching large data and objects (or serializing and deserializing them quickly). One way to see it could be to build something similar to Streamlit's cache (Streamlit indeed provides great interactivity).
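As a rough illustration of what such a framework could look like, here's a toy Python sketch of a disk-backed cache decorator in the spirit of Streamlit's caching (the name `disk_cache` and the pickle-based storage scheme are my own invention, not any real API):

    import functools
    import hashlib
    import pickle
    from pathlib import Path

    CACHE_DIR = Path(".cache")

    def disk_cache(fn):
        """Memoize fn to disk, keyed on its name and arguments,
        so expensive loads survive process restarts."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            key = hashlib.sha256(
                pickle.dumps((fn.__name__, args, sorted(kwargs.items())))
            ).hexdigest()
            path = CACHE_DIR / f"{key}.pkl"
            if path.exists():
                return pickle.loads(path.read_bytes())
            result = fn(*args, **kwargs)
            CACHE_DIR.mkdir(exist_ok=True)
            path.write_bytes(pickle.dumps(result))
            return result
        return wrapper

    @disk_cache
    def load_dataset(path):
        ...  # expensive parsing / preprocessing goes here

A real version would also need cache invalidation when the function body or its inputs change (Streamlit hashes the source for this), but even this crude form gets you the "load once, iterate freely" experience outside a notebook.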
Elixir beating Python in the machine learning wars, or at least becoming a competitive option, is something I dream of.
Is anybody using Elixir for ML who could comment on the state of it? How usable is it now?
Last I heard, it was great for new projects/models/etc., but so much existing work (that you want to reuse or expand on) depends on Python, which makes it hard unless you're starting from scratch.