Notebooks are a diluted form of the Lisp/Smalltalk REPL-based development experience, with some features from the reactive-spreadsheet world. Especially for those of us in the business of producing numbers, 'real code' with a fixed set of tests isn't a better way but a necessary evil, a black box that we can't really trust. Building a calculation piece by piece, in a notebook, trying out variations along the way? That's how you get a feel for the calculation, connect with it at a spiritual level.
I wonder why CL and Smalltalk haven't beaten Python. Is it the languages, or just lack of awareness? The workflow just makes more sense there, with better update propagation and state saving.
The Clojure/JVM statistics/scientific computing/now tensor math packages just never got as good as Python, and in Smalltalk they were a non-starter. R is an awkward language, but it's REPL-first, has a lot of Lisp's metaprogramming (done in very ad-hoc ways), and Smalltalk's serializable image model -- so a lot of exploratory/experimental statistical methods research happens there, and then gradually makes its way to Python when it needs to be stabilized for production.
F#-based .NET would've made a fine math-centric environment, but it's a language with a high initial barrier, and though the .NET community has been making a decent effort at porting over a lot of NumPy/SciPy, it hasn't fully caught up after many years.
We have Julia now, it's jitted, multicore/GPU friendly, and has interesting REPL innovations, and yes, serializable state (though, ugh, ligatures). But every time I reach for it, it's like, oh no, yet another thing I want to call is in the Python ecosystem. Especially all the modern deep learning/tensor stuff, where the assumption is Python's speed and the GIL don't matter because you're just gluing together GPU calls.
> The Clojure/JVM statistics/scientific computing/now tensor math packages just never got as good as Python, and in Smalltalk they were a non-starter.
There are ongoing efforts to improve numerics performance in Smalltalk. Pharo, for example, has a JIT and already beats pure Python. But numerics in Python is mostly outsourced to fast modules written in C/Fortran, so people are in the process of making Pharo's counterpart to NumPy/SciPy (PolyMath) leverage BLAS/LAPACK integration as well. See:
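The "outsourced to C/Fortran" point is easy to demonstrate. A minimal sketch (timings and the matrix size are illustrative, not from the original comment): the same matrix product written as a pure-Python triple loop versus NumPy's `@`, which dispatches to a compiled BLAS routine.

```python
import time
import numpy as np

n = 200
a = [[1.0] * n for _ in range(n)]
b = [[1.0] * n for _ in range(n)]

# Pure-Python triple loop: every multiply-add goes through the interpreter.
t0 = time.perf_counter()
c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
py_time = time.perf_counter() - t0

# NumPy: the same product is dispatched to a compiled BLAS GEMM kernel.
na, nb = np.array(a), np.array(b)
t0 = time.perf_counter()
nc = na @ nb
np_time = time.perf_counter() - t0

print(f"pure Python: {py_time:.3f}s, NumPy/BLAS: {np_time:.4f}s")
```

The gap is typically two to three orders of magnitude, which is why the language's own speed (and the GIL) matters so little for this style of work.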
In terms of a NumPy port, for more involved implementations you might be better off using the built-in numeric primitives together with the upcoming Tensor&lt;T&gt; (kind of like ndarray on steroids) and TensorPrimitives (BLAS).