Hacker News

Notebooks are a diluted form of the Lisp/Smalltalk REPL-based development experience, with some features from the reactive-spreadsheet world. Especially for those of us in the business of producing numbers, 'real code' with a fixed set of tests isn't a better way but a necessary evil, a black box that we can't really trust. Building a calculation piece by piece, in a notebook, trying out variations along the way? That's how you get a feel for the calculation, connect with it at a spiritual level.
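That piece-by-piece style can be sketched in plain Python with toy numbers (all values here are hypothetical): each cell-sized step is run and eyeballed before the next one is written, and variations are tried inline rather than hidden behind a test suite.

```python
# Notebook-style exploration: build the calculation one step at a time,
# looking at intermediate results as you go. Toy data, purely illustrative.
import statistics

returns = [0.012, -0.004, 0.031, -0.018, 0.007]

# Step 1: look at the raw numbers first.
mean_r = statistics.mean(returns)

# Step 2: try a variation inline -- does trimming the extremes change the story?
trimmed = sorted(returns)[1:-1]
mean_trimmed = statistics.mean(trimmed)

print(mean_r, mean_trimmed)
```

The point isn't the arithmetic, it's the loop: evaluate, inspect, tweak, re-evaluate, which is exactly what a fixed test suite doesn't give you while you're still forming the calculation.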


This is what I have to teach people working with numbers. Often with a risk of looking like a fool to the SWE crowd.

That working with numbers is a craft in itself. And the primary driver for the craft is the numbers, not SWE practices.

You can't "feel" the numbers just by coming up with a huge testsuite.

And pretty often, the feel goes missing when you translate your prototype notebook into "real code".


I wonder why CL and Smalltalk haven't beaten Python. Is it the languages, or just lack of awareness? The workflow just makes more sense there, with better update propagation and state saving.


The Clojure/JVM statistics/scientific computing/now tensor math packages just never got as good as Python, and in Smalltalk they were a non-starter. R is an awkward language, but it's repl-first, has a lot of Lisp's metaprogramming (done in very ad-hoc ways), and Smalltalk's serializable image model -- so a lot of exploratory/experimental statistical methods research happens there, and then gradually makes its way to Python when it needs to be stabilized for production.

An F#-based .NET stack would've made a fine math-centric environment, but it's a language with a high initial barrier, and though the .NET community has made a decent effort at porting over a lot of NumPy/SciPy, it still hasn't fully caught up after many years.

We have Julia now, it's jitted, multicore/GPU friendly, and has interesting REPL innovations, and yes, serializable state (though, ugh, ligatures). But every time I reach for it, it's like, oh no, yet another thing I want to call is in the Python ecosystem. Especially all the modern deep learning/tensor stuff, where the assumption is Python's speed and the GIL don't matter because you're just gluing together GPU calls.


> The Clojure/JVM statistics/scientific computing/now tensor math packages just never got as good as Python, and in Smalltalk they were a non-starter.

There are ongoing efforts to improve numerics performance in Smalltalk. Pharo, for example, has a JIT and already beats pure Python. But numerics in Python is mostly outsourced to fast modules written in C/Fortran, so people are now in the process of making Pharo's counterpart to NumPy/SciPy (PolyMath) leverage BLAS/LAPACK integration as well. See:

https://www.youtube.com/watch?v=R5jJBUMLxq8

https://hal.science/hal-03768601v1/file/main.pdf
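The "outsourced to C/Fortran" point is easy to see from the Python side. A minimal sketch (assuming NumPy is installed): the same dot product written as an interpreter-level loop versus a single call into compiled BLAS code.

```python
# Why interpreter speed rarely matters for Python numerics: the heavy
# lifting is delegated to compiled code, as in this dot product.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000)

# Pure-Python path: a million interpreter-level iterations.
slow = sum(x * y for x, y in zip(a.tolist(), b.tolist()))

# NumPy path: one call that dispatches to compiled code (BLAS on most builds).
fast = a @ b

print(slow, float(fast))
```

Timing either version shows an orders-of-magnitude gap, which is the same gap PolyMath is trying to close for Pharo by binding BLAS/LAPACK instead of looping in Smalltalk.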


In their defense, F# has https://diffsharp.github.io and .NET has https://ilgpu.net

As for a NumPy port: for more involved implementations you might be better off using the built-in numeric primitives together with the upcoming Tensor&lt;T&gt; (kind of like ndarray on steroids) and TensorPrimitives (BLAS-style operations).



