Python versions 3.11, 3.12, and now 3.13 have contained far fewer additions to the language than earlier 3.x versions. Instead, the newest releases have focused on implementation improvements - and in 3.13, the new REPL, experimental JIT, and GIL-free options all sound great!
The language itself is (more than) complex enough already - I hope this focus on implementation quality continues.
"
Python now uses a new interactive shell by default, based on code from the PyPy project. When the user starts the REPL from an interactive terminal, the following new features are now supported:
Multiline editing with history preservation.
Direct support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.
Prompts and tracebacks with color enabled by default.
Interactive help browsing using F1 with a separate command history.
History browsing using F2 that skips output as well as the >>> and ... prompts.
“Paste mode” with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt).
"
Sounds cool. Definitely need the history feature, for the few times I can't run IPython.
Presumably this also means readline (GPL) is no longer required to have any line editing beyond what a canonical-mode terminal does by itself. It seems like there is code to support libedit (BSD), but I've never managed to make Python's build system detect it.
I have managed to build Python with libedit instead of readline, but it was a custom build.
If your assumption is correct, then I'm eagerly waiting for the default Python executable in Ubuntu, for example, to be licensed under a non-copyleft license. Then one would be able to build proprietary-licensed executables via PyInstaller much more easily.
I'd love to see a revamp of the import system. It is a continuous source of pain points when I write Python. Circular imports all over unless I structure my program explicitly with this in mind. Using Python path hacks with `sys` etc. to go up a directory too.
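For concreteness, the kind of path hack being complained about looks roughly like this (a sketch; `mypackage` is a hypothetical package living one directory up):

    # Make a module in the parent directory importable by mutating sys.path.
    import os
    import sys

    sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

    import mypackage  # hypothetical package one directory up  # noqa: E402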
The biggest problem with Python imports is that the resolution of non-relative module names always prioritizes local files, even when the import happens in stdlib. This means that, for any `foo` that is a module name in stdlib, having foo.py in your code can break arbitrary modules in stdlib. For example, this breaks:
# bisect.py
...
# main.py
import random
with:
Traceback (most recent call last):
  File ".../main.py", line 1, in <module>
    import random
  File "/usr/lib/python3.12/random.py", line 62, in <module>
    from bisect import bisect as _bisect
ImportError: cannot import name 'bisect' from 'bisect'
This is very frustrating because Python stdlib is still very large and so many meaningful names are effectively reserved. People are aware of things like "sys" or "json", but e.g. did you know that "wave", "cmd", and "grp" are also standard modules?
Worse yet is that these errors are not consistent. You might be inadvertently reusing an stdlib module name without even realizing it just because none of the stdlib (or third-party) modules that you import have it in their import graphs. Then you move on to a new version of Python or some of your dependencies, and suddenly it breaks because they have added an import somewhere.
But even if you are careful about checking every single module name against the list of standard modules, a new Python version can still break you by introducing a new stdlib module that happens to clash with one of yours. For example, Python 3.9 added "graphlib", which is a fairly generic name.
I agree, it's unreasonable to expect devs to know the whole standard library. The VSCode extension Pylance does give a warning when this happens. I thought linters might also check this. The one I use doesn't; maybe the issue[0] I just created will lead to it being implemented.
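For a quick local check against the interpreter you are running, something like this works (a sketch relying on sys.stdlib_module_names, available since Python 3.10; it only knows about the current version's stdlib):

    import sys
    from pathlib import Path

    def find_stdlib_shadows(root="."):
        """Print local .py files whose names shadow a standard-library module."""
        stdlib = set(sys.stdlib_module_names)
        for path in Path(root).rglob("*.py"):
            if path.stem in stdlib:
                print(f"{path} shadows the stdlib module {path.stem!r}")

    if __name__ == "__main__":
        find_stdlib_shadows()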
But the problem remains, because these warnings - whether they come from linters or Python itself - can only warn you about existing stdlib modules. I'm not aware of any way to guard against conflicts with any future new stdlib modules being added.
It is a problem because stdlib does not use relative imports for other stdlib modules, and neither do most third-party packages, which then breaks you regardless of what you do in your code.
It was ultimately rejected due to issues with how it would need to change the dict object.
IMO all the rejection reasons could be overcome with a more focused approach and implementation, but I don't know if there is anyone wishing to give it another go.
Getting a cyclic import error is not a bug, it's a feature alerting you that your code structure is like spaghetti and you should refactor it to break the cycles.
The last couple of years have also seen a stringent approach to deprecations: if something is marked as deprecated, it WILL be removed in a minor release sooner rather than later.
Yep. They’ve primarily (entirely?) involved removing ancient libraries from stdlib, usually with links to maintained 3rd party libraries. People who can’t/won’t upgrade to newer Pythons, perhaps because their old system that uses those old modules can’t run a newer one, aren’t affected. People using newer Pythons can replace those modules.
There may be a person in the world panicking that they need to be on Python 3.13 and also need to parse Amiga IFF files, but it seems unlikely.
I mean the stdlib is open source too, so you could always vendor deprecated stdlib modules. Most of them haven't changed in eons either so the lack of official support probably doesn't change much.
Using Python on and off since version 1.6, I always like to point out that the language + standard library is quite complex, even more when taking into account all the variations across versions.
Agreed. I still haven’t really started using the ‘match’ statement and structural pattern matching (which I would love to use) since I still have to support Python 3.8 and 3.9. I was getting tired of thinking, “gee this new feature will be nice to use in 4 years, if I remember to…”
But they've worked very hard at shielding most users from that complexity. And the end result - making multithreading a truly viable alternative to multiprocessing for typical use cases - will open up many opportunities for Python users to simplify their software designs.
I suppose only time will tell if that effort succeeds. But the intent is promising.
No. Python is orders of magnitude slower than even C# or Java. It’s doing hash table lookups per variable access. I would write a separate program to do the number crunching.
Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.
> It’s doing hash table lookups per variable access.
That hasn't been true for many variable accesses for a very long time. LOAD_FAST, LOAD_CONST, and (sometimes) LOAD_DEREF provide references to variables via pointer offset + chasing, often with caches in front to reduce struct instantiations as well. No hashing is performed. Those access mechanisms account for the vast majority (in my experience; feel free to check by "dis"ing code yourself) of Python code that isn't using locals()/globals()/eval()/exec() tricks. The remaining small minority I've seen is doing weird rebinding/shadowing stuff with e.g. closures and prebound exception captures.
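A quick way to see this for yourself (a sketch; the exact bytecode varies across CPython versions):

    import dis

    def add(a, b):
        total = a + b   # plain local variables
        return total

    dis.dis(add)
    # On recent CPython versions the output shows LOAD_FAST for a, b, and total -
    # index-based access into the frame's locals array, with no name hashing.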
So too for object field accesses; slotted classes significantly improve field lookup cost, though unlike LOAD_FAST users have to explicitly opt into slotting.
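A minimal example of opting in (a sketch; with __slots__ declared, instances get a fixed attribute layout instead of a per-instance __dict__):

    class Point:
        __slots__ = ("x", "y")   # fixed layout, no per-instance __dict__

        def __init__(self, x, y):
            self.x = x
            self.y = y

    p = Point(1.0, 2.0)
    # p.z = 3.0  # would raise AttributeError: no __dict__ to spill into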
Don't get me wrong, there are some pretty regrettably ordinary behaviors that Python makes much slower than they need to be (per-binding method refcounting comes to mind, though I hear that's going to be improved). But the old saw of "everything is a dict in python, even variable lookups use hashing!" has been incorrect for years.
Thanks for the correction and technical detail. I’m not saying this is bad, it’s just the nature of this kind of dynamic language. Productivity over performance.
> Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.
I'm assuming that by "everyone" you mean everyone who works on the Python implementation's C code? Because I don't see how that makes sense if you mean Python programmers in general. As far as I know, things will stay the same if your program is single-threaded or uses multiprocessing/asyncio. The changes only affect programs that start threads, in which case you need to take care of synchronization anyway.
Python doesn't do hash table lookups for local variable access. This only applies to globals and attributes of Python classes that don't use __slots__.
The mental cost of multithreading is there regardless because GIL is usually at the wrong granularity for data consistency. That is, it ensures that e.g. adding or deleting a single element to a dict happens atomically, but more often than not, you have a sequence of operations like that which need to be locked. In practice, in any scenario where your data is shared across threads, the only sane thing is to use explicit locks already.
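A toy illustration of that granularity point (hypothetical names, not from any real codebase):

    import threading

    counts = {}
    lock = threading.Lock()

    def record(key):
        # counts[key] = counts.get(key, 0) + 1 is a read-modify-write sequence;
        # even with a GIL another thread can interleave between the read and
        # the write, so a lock at this granularity is needed regardless.
        with lock:
            counts[key] = counts.get(key, 0) + 1

    threads = [threading.Thread(target=record, args=("hits",)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counts)  # {'hits': 8}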
1. The whole system is dedicated to running my one program,
2. I want to use multi threading to share large amounts of state between workers because that's appropriate to my specific use case, and
3. A 2-8x speedup without having to re-write parts of the code in another language would be fan-freaking-tastic.
In other words, I know what I'm doing, I've been doing this since the 90s, and I can imagine this improvement unlocking a whole lot of use cases that have previously been unviable.
Sounds like a lot of speculation on your end because we don't have lots of evidence about how much this will affect anything, because until just now it's not been possible to get that information.
> ditto. So that’s not relevant.
Then I'm genuinely surprised you've never once stumbled across one of the many, many use cases where multithreaded CPU-intensive code would be a nice, obvious solution to a problem. You seem to think these are hypothetical and my experience has been that these are very real.
This issue is discussed extensively in “the art of Unix programming” if we want to play the authority and experience game.
> multithreaded CPU-intensive code would be a nice, obvious solution to a problem
Processes are well supported in python. But if you’re maxing your CPU core with the right algorithm then python was probably the wrong tool.
> my experience has been that these are very real.
When you’re used to working one way it may seem impossible to frame the problem differently. Just to remind you, this is a NEW feature in Python. JavaScript, Perl, and Bash also do not support multithreading, for similar reasons.
One school of design says if you can think of a use case, add that feature. Another tries to maintain invariants of a system.
If you’re in a primarily python coding house, your argument won’t mean anything when you bring up you’ll have to rewrite millions of lines of code in C# or Java, you might as well ask them to liquidate the company and start fresh.
“Make things as simple as possible, but no simpler.” I, for one, am glad they’ll be letting us use modern CPUs much more easily instead of the language being designed around 1998 CPUs.
Big fan of typing improvements in Python. Any chance you can elaborate on the "if let" pattern in Rust and how it would look in Python now? Not sure I follow how it translates.
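Not speaking for the original commenter, but one way the comparison might look on the Python side is structural pattern matching, where a match arm both tests for the interesting case and binds the value, much like Rust's `if let Some(x) = value` (illustrative sketch only):

    from typing import Optional

    def describe(value: Optional[int]) -> str:
        match value:
            case int(x):      # matches an int and binds it to x
                return f"got {x}"
            case None:
                return "got nothing"
            case _:
                return "got something unexpected"

    print(describe(42))    # got 42
    print(describe(None))  # got nothing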
I don't get the point of a runtime type checker.
It adds a lot of noise with those decorators everywhere and you need to call each section of the code to get full coverage, meaning 100% test coverage.
At that point just use rust, or am I missing something?
It looks like you call a function near the beginning of your Python program / application that does all the type checking at startup time. IDK for sure, I haven't used the library.
Someone using Python doesn't "just use Rust", there are very clear pros and cons and people already using Python are doing so for a reason. It is sometimes helpful to have type checks in Python though.
> Use beartype to assure the quality of Python code beyond what tests alone can assure. If you have yet to test, do that first with a pytest-based test suite, tox configuration, and continuous integration (CI). If you have any time, money, or motivation left, annotate callables and classes with PEP-compliant type hints and decorate those callables and classes with the @beartype.beartype decorator.
Don't get me wrong, I think static type checking is great. Now if you need to add a decorator on top of each class and function AND maintain 100% code coverage, well that does not sound like "zero-cost" to me. I can hardly think of a greater cost just to continue dynamically typing your code and maintain guarantees about external dependencies with no type hints.
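For reference, the decorator usage being debated looks roughly like this (a sketch; beartype validates the hints when the call actually happens and raises its own violation exception otherwise):

    # pip install beartype
    from beartype import beartype

    @beartype
    def scale(values: list[float], factor: float) -> list[float]:
        return [v * factor for v in values]

    print(scale([1.0, 2.0], 2.0))   # fine: [2.0, 4.0]

    try:
        scale("oops", 2.0)           # wrong argument type
    except Exception as exc:         # beartype raises a hint-violation error here
        print(f"caught at call time: {type(exc).__name__}")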
I prefer mypy, but sometimes pyright supports new PEPs before mypy, so if you like experimenting with cutting-edge Python, you may have to switch from time to time.
Python versions from 3.10 on have had a very annoying bug with SSLContext (something related only to glibc) where memory leaks when opening new connections to new hosts eventually cause any service (dockerized in my case) to crash due to OOM. I can still see that the issue has not been resolved in this release, which basically makes it very difficult to deploy any production-grade service.
I've been tracking this one: https://github.com/python/cpython/issues/109534, but there are multiple others raised in the cpython repo over on Github. Searching for asyncio or sslcontext shows multiple issues raised over the years with no fix in place.
> Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multi-core hardware.
Would be nice to see performance improvements for libraries like FastAPI, NetworkX etc in future.
What I've been surprised about is the number of Python packages that require specific Python versions (e.g., works on 3.10, but not 3.11). Package versioning is already touchy enough without the language itself causing it in minor upgrades.
And will python 3.14 be named pi-thon 3.14. I will see myself out.
Today someone's pipeline broke because they were using python:3 from Dockerhub and got an unexpected upgrade ;-)
Specifically, pendulum hasn't released a wheel yet for 3.13 so it tried to build from source but it uses Rust and the Python docker image obviously doesn't have Rust installed.
Good to get advance notice, if I read all the way down, that they will silently and completely change the behavior of multiprocessing in 3.14 (only on Unix/Linux, in case other people wonder what’s going on), which is going to break a bunch of programs I work with.
I really like using Python, but I can’t keep using it when they just keep breaking things like this. Most people don’t read all the release notes.
Not defending their specific course of action here, but you should probably try to wade into the linked discussion (https://github.com/python/cpython/issues/84559). Looks like the push to disable warnings (in 3.13) is mostly coming from one guy.
While it’s not perfect, I know a few other people who do “set up lots of data structures, including in libraries, then make use of the fact multiprocessing uses fork to duplicate them”. While fork always has sharp edges, it’s also long been clearly documented that that’s the behavior on Linux.
I'm pretty sure that significantly more people were burned by fork being the default with no actual benefit to their code, whether because of the deadlocks etc that it triggers in multithreaded non-fork-aware code, or because their code wouldn't work correctly on other platform. Keeping it there as an option that one can explicitly enable for those few cases where it's actually useful and with full understanding of consequences is surely the better choice for something as high-level as Python.
However, changing the default silently just means people's code is going to change behaviour between versions, or silently break if someone with an older version runs their code. At this point, it's probably better to just require people give an explicit choice (they can even make one of the choice names be 'default' or something, to make life easy for people who don't really care).
I'm with you on undesirability of silent change of behavior. But requiring people to make an explicit choice would immediately break a lot more code, because now all the (far more numerous) instances of code that genuinely doesn't care one way or another won't run at all without changes - and note that for packages, this also breaks anyone depending on them, requiring a fix that is not even in their code. So it's downsides either way, and which one is more disruptive to the ecosystem depends on the proportion of code affected in different ways. I assume that they did look at existing Python code out in the wild to get at least an eyeball estimate of that when making the decision.
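For code that does care, the choice can already be stated explicitly, which sidesteps any change of default (a small sketch):

    import multiprocessing as mp

    if __name__ == "__main__":
        # Pick the start method once at startup instead of relying on the
        # platform default (which is what 3.14 changes on Linux).
        mp.set_start_method("spawn")   # or "fork" / "forkserver" where available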
> posix_spawn() now accepts None for the env argument, which makes the newly spawned process use the current process environment
That is the thing about fork(), spawn(), and even system(): they are essentially wrappers around clone() in glibc and musl.
You can duplicate the behavior of fork() without making the default painful for everyone else.
In musl, system() calls posix_spawn(), which calls clone().
All that changes is replacing fork(), which is little more than a legacy convenience alias with real issues and footguns when multiple threads are involved.
Both fork() and spawn() are just wrappers around clone() in most libc implementations anyway.
spawn() was introduced to POSIX in the last century to address some of the problems with fork(), especially those related to multithreading, so I am curious how your code is so dependent on UTM, yet multithreading.
My code isn't dependent on multithreading at all.
It uses fork in Python multiprocessing, because many packages can't be "pickled" (the standard way of copying data structures between processes), so instead my code looks like this (see the sketch after the list):
* Set up big complicated data-structures.
* Use fork to make a bunch of copies of my running program, and all my datastructures
* Use multiprocessing to make all those python programs talk to each other and share work, thereby using all my CPU cores.
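For what it's worth, that pattern can be kept working by requesting fork explicitly rather than relying on the default, since "fork" remains available on Linux even after the default changes (a sketch with hypothetical names):

    import multiprocessing as mp

    # Built once in the parent; forked children inherit it without pickling.
    BIG_TABLE = {i: i * i for i in range(1_000_000)}

    def work(key):
        return BIG_TABLE[key]

    if __name__ == "__main__":
        ctx = mp.get_context("fork")       # request fork explicitly (Unix only)
        with ctx.Pool(processes=4) as pool:
            print(pool.map(work, [1, 2, 3, 4]))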
'Threading' is an overloaded term. And while I didn't know, I was wondering if, at the library level, the fact that posix_spawn() pauses the parent while fork() doesn't was what you were leveraging.
The python multiprocessing module has been problematic for a while, as the platform abstractions are leaky and to be honest the POSIX version of spawn() was poorly implemented and mostly copied the limits of Windows.
I am sure that some of the recent deadlocks are due to this pull request, which is an example that calls out how risky this is.
Personally knowing the pain of fork() in the way you are using it, I have moved on.
But I would strongly encourage you to look into how clone() and the CLONE_VM and CLONE_VFORK options interact, document your use case and file an actionable issue against the multiprocessing module.
Go moved away from fork in 1.9 which may explain the issues with it better than the previous linked python discussion.
But looking at the git blame, all the 'fixes' have been about people trading known problems and focusing on the happy path.
My reply was intended for someone to address that tech debt and move forward with an intentional designed refactoring.
As I just focus on modern Linux, I avoid the internal submodule and just call clone() in a custom module or use python as glue to languages that have better concurrency.
My guess is that threads in CPython are an end goal. While setting the execution context will get you past this release, fork() has to be removed if the core interpreter is threaded.
The delta between threads and fork/exec has narrowed.
While I don't know if that is even an option for you, I am not seeing any real credible use cases documented to ensure that model is supported.
Note, I fully admit this is my own limits of imagination. I am 100% sure there are valid reasons to use fork() styles.
Someone just needs to document them and convince someone to refactor the module.
But as it is not compatible with threads and has a ton of undefined behavior and security issues, fork() will be removed unless there are credible, documented use cases that people can weigh when considering the tradeoffs.
> I really like using Python, but I can’t keep using it when they just keep breaking things like this.
So much perl clutching. Just curious, since I guess you've made up your mind, what's your plan to migrate away? Or are you hoping maintainers see your comment and reconsider the road-map?
It was similar last year when 3.12 came out and 3.11 still wasn't supported. I'm really curious what makes Azure Functions so slow to upgrade available runtimes, or if it's just that they figure demand for the latest Python version isn't there.
We have switched to exclusively using Docker Images in Lambda on AWS cause their runtime team constantly breaks things and is behind with a bunch of releases.
We follow this rule (about two dozen services with in total ~100k loc of Python):
By default, use the release one version below the latest.
I.e. we currently run 3.11 and will now schedule work to upgrade to 3.12, which is expected to be more or less trivial for most services.
The rationale is that some of the (direct and transitive) dependencies will take a while to be compatible with the latest release. And waiting roughly a year is both fast enough to not get too much behind, and slow enough to expect that most dependencies have caught up with the latest release.
Yup. At my last gig, upgrading to a new version meant setting the Docker tag to the new one and running `make test`. If that passed, we were 99% certain it was safe for prod. The other 1% was covered by running in pre-prod for a couple days.
>Any rule of thumb when it comes to adopting Python releases?
No, because it varies widely depending on your use case and your motivations.
>Is it usually best to wait for the first patch version before using in production?
This makes it sound like you're primarily worried about a situation where you host an application and you're worried about Python itself breaking. On the one hand, historically Python has been pretty good about this sort of thing. The bugfixes in patches are usually quite minor, throughout the life cycle of a minor version (despite how many of them there are these days - a lot of that is just because of how big the standard library is). 3.13 has already been through alpha, beta and multiple RCs - they know what they're doing by now. The much greater concern is your dependencies - they aren't likely to have tested on pre-release versions of 3.13, and if they have any non-Python components then either you or they will have to rebuild everything and pray for no major hiccups. And, of course, that applies transitively.
On the other hand, unless you're on 3.8 (dropping out of support), you might not have any good reason to update at all yet. The new no-GIL stuff seems a lot more exciting for new development (since anyone for whom the GIL caused a bottleneck before will have already developed an acceptable workaround), and I haven't heard a lot about other performance improvements - certainly that hasn't been talked up as much as it was for 3.11 and 3.12. There are a lot of quality-of-implementation improvements this time around, but (at least from what I've paid attention to so far) they seem more oriented towards onboarding newer programmers.
And again, it will be completely different if that isn't your situation. Hobbyists writing new code will have a completely different set of considerations; so will people who primarily maintain mature libraries (for whom "using in production" is someone else's problem); etc.
Rule 1 is wait until there are built wheels for that version of python for all the libraries that you need. In most cases that can take a month or two, depending on exactly what libraries you use and how obscure they are.
Scream into a pillow because even PHP manages to have fewer breaking releases. Python is a dumpster fire that no one wants to admit to or have an honest conversation about. If Python 2 is still around, my advice is don't upgrade unless you have a clear reason for the new features.
When I'm in a docker container using the Python 3 version that comes with Debian - is there an easy way to swap it out for this version so I can test how my software behaves under 3.13?
This[0] is the Docker Python using Debian Bookworm, so as soon as 3.13.0 (not the release candidate I've linked to) is released, there will be an image.
Otherwise, there's always the excellent `pyenv` to use, including this person's docker-pyenv project [1]
So 1 runs it under 3.11 (which came with Debian) and 2 runs it under 3.13.
I don't need to preserve 3.11. some_magic_command can wreak havoc in the container as much as it wants. As soon as I exit it, it will be gone anyhow.
In a sense, the question is not related to Docker at all. I just mentioned that I would do it inside a container to emphasize that I don't need to preserve anything.
You can use pyenv to create multiple virtual environments with different Python versions, so you'd run your script with (eg) venv311/bin/python and venv313/bin/python
Yep. I only have Homebrew's Python installed because some other things in Homebrew depend on it. I use pyenv+virtualenv exclusively when developing my own code.
(Technically, I use uv now, but to the same ends.)
It's unlikely that the OS's version of Python, and the Python packages available through the OS, are going to be the ones you'd install of your own volition. And on your workstation, it's likely you'll have multiple projects with different requirements.
You almost always want to develop in a virtualenv so you can install the exact versions of things you need without conflicting with the ones the OS itself requires. If you're abstracting out the site-packages directory anyway, why not take one more step and abstract out Python, too? Things like pyenv and uv make that trivially easy.
For instance, this creates a new project using Python 3.13.
I did not have Python 3.13 installed before I ran those commands. Now I do. It's so trivially easy to have per-project versions that this is my default way of using Python.
You can get 95% of the same functionality by installing pyenv and using it to install the various versions you might want. It's also an excellent tool. Python's own built-in venv module (https://docs.python.org/3/library/venv.html) makes it easy to create virtualenvs anytime you want to use them. I like using uv to combine that and more into one single tool, but that's just my preference. There are many tools that support this workflow and I highly recommend you find one you like and use it. (But not pipenv. Don't pick that one.)
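If you ever want to do it from code rather than the command line, the stdlib equivalent of `python -m venv .venv` is just (a small sketch):

    import venv

    venv.create(".venv", with_pip=True)
    # The environment's interpreter then lives at .venv/bin/python on POSIX
    # (or .venv\Scripts\python.exe on Windows).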
This is the conventional wisdom these days, and a real thing, but unless you are admin challenged, running your local scripts with the system Python is fine. Been doing it two decades plus now.
Use either one. `pyenv install 3.x` is slower than `uv python install 3.x`, but that's not the most common operation I use either of those tools for. Uv is also comparatively brand new, and while I like and use it, I'm sure plenty of shops aren't racing to switch to it.
If you already have pyenv, use it. If you don't have pyenv or uv, install uv and use that. Either one is a huge upgrade over using the default Python from your OS.
This makes sense for a desktop environment, but for a disposable testing container there is no way that building and compiling each version of Python like that is a sensible use of time/resources.
For that kind of thing I'd always either used the tagged Python images in Docker Hub or put the build step in an early layer that didn't have to re-run each time.
One other advantage is that you know the provenance of the Python executable when you build it yourself. Uv downloads a prebuilt executable from https://gregoryszorc.com/docs/python-build-standalone/main/, which is a very reputable, trusted source, but it's not the official build from python.org. If you have very strict security requirements, that may be something to consider. If you use an OS other than Linux/Mac/Windows on x86/ARM, you'll have to build your own version. If you want to use readline instead of libedit, you'll have to build your own version.
I am personally fine with those limitations. All of the OSes I regularly use are covered. I'm satisfied with the indygreg source security. The libedit version works fine for me. I like that I can have a new Python version a couple of seconds after asking uv for it. There are still plenty of reasons why you might want to use pyenv to build it yourself.