
Wow, this is depressing. Python's always been tricky to distribute, but everything here just sounds like backwards steps from conventions which work, rather than trying to standardize the methods we already have.


Which things here sound like steps backwards?

IIUC there are three "problems" raised here:

* With PEP 517 you can only get the files as a wheel or a tarball (I assume the latter refers to the sdist). The author dislikes having the compression done just to decompress again immediately. Fair enough, but this sounds pretty minor to me. Would there really be much time spent in this compress/decompress? Especially relative to everything else a gentoo update entails? Isn't this just a feature that can be added later, bringing a minor speed-up?

* PEP 517 doesn't support data_files. This is a surprising problem for a distribution maintainer to raise. I thought the whole objection to data_files was that it allows python package installations to stick files in arbitrary places, i.e. do stuff that should be the sole preserve of the system package manager. Why does the author want it to be allowed?

* distutils and "setup.py install" deprecation. I don't see the alternative here. Yes, it requires downstream changes, but proliferation of mechanisms without standardisation is the big problem with python packaging. Either it continues to be supported or it gets deprecated and removed. My vote would certainly be for the latter.
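On the first point, it's worth remembering that a wheel is just a zip archive with a dist-info directory inside, so the "compress then immediately decompress" cost is plain zipfile overhead. A minimal sketch, using a hypothetical package name `demo`:

```python
import io
import zipfile

# A wheel is just a zip archive containing the package files plus a
# dist-info directory. Building one and unpacking it right away, as a
# PEP 517 frontend does, is a compress/decompress round-trip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("demo/__init__.py", "VERSION = '1.0'\n")
    zf.writestr("demo-1.0.dist-info/METADATA",
                "Metadata-Version: 2.1\nName: demo\n")

# The installer's side of the round-trip: decompress it right back out.
with zipfile.ZipFile(buf) as zf:
    names = sorted(zf.namelist())
print(names)
```

For pure-Python packages the deflate step is cheap relative to dependency resolution, which supports the view that this is a minor cost.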

Nothing in the article even claims the changes aren't progress. The objections are about particular hassles that progress is generating for the author as a distro maintainer. And maybe they are valid but as far as I can see they are either quite minor or unavoidable if the state of the world is to improve.

I've been using python for ~20 years and it finally seems to be on a path towards some sort of sanity as far as packaging is concerned. Not there yet, by a long, long way, but heading in the right direction at least.

Edit: Oh, and one other problem: The lack of a standard library toml module when PEP 517 mandates the use of TOML. That is indeed mad. I don't understand how the PEP was approved without such a module being added.


> I thought the whole objection to data_files was that it allows python package installations to stick files in arbitrary places, i.e. do stuff that should be the sole preserve of the system package manager.

But those "arbitrary places" include things like the standard places where man pages and other documentation go, the standard places where shared data (and things like example programs) go, etc. Without data_files the Python installation tools give you no way to provide any of these things with your Python library.
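For reference, this is roughly what the (now discouraged) data_files mechanism looked like in a setup.py; the package name and file paths here are hypothetical. Each entry pairs a target directory, relative to the install prefix, with a list of source files:

```python
from setuptools import setup

# Legacy data_files sketch: each tuple is (target directory relative
# to the install prefix, list of source files). An installer would
# copy docs/foo.1 to <prefix>/share/man/man1/foo.1, and so on.
setup(
    name="foo",
    version="1.0",
    packages=["foo"],
    data_files=[
        ("share/man/man1", ["docs/foo.1"]),
        ("share/doc/foo", ["README", "examples/demo.py"]),
    ],
)
```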


Man-pages is a Linux thing, while python packages are cross-platform and can be installed on Windows.

It seems out of scope for a python packager to include such files. As a user I would also not be very happy to see a pip install start dropping files all over my system. How would that behave in a virtualenv anyway? Sandboxing should be a key feature of a package manager.


> Man-pages is a Linux thing, while python packages are cross-platform and can be installed on Windows.

Windows has help files, which are its version of man pages. So an installer that installed man pages on Linux would be expected to install the corresponding help files on Windows.

> dropping files all over my system

I said no such thing. I said there are certain designated places in a filesystem where certain common items like man pages (or Windows help files) live. Not to mention system-wide configuration files (which on Linux go in /etc), and I'm sure there are others I've missed. An installer that is not allowed to access those places does not seem to me to be a complete installer. Linux package manager installers certainly put files in such places. Windows installers do it too (although the specific items and places are different).

> How would that behave in a virtualenv anyway?

A complete virtualenv would have its own copies of the above types of files, in the appropriate places relative to the root of the virtualenv.
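That is in fact how the install scheme is defined: `sysconfig` exposes a "data" path that data_files targets like "share/man/man1" are resolved against, and inside a virtualenv it points at the venv root rather than the system prefix. A quick check:

```python
import sysconfig

# The 'data' entry of the install scheme is the prefix that data_files
# targets like "share/man/man1" are joined to. Inside a virtualenv it
# is the venv root, so such files would land under <venv>/share/man/man1
# rather than the system /usr.
paths = sysconfig.get_paths()
print(paths["data"])
```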


Haskell is the worst. If you make something in Haskell, you hate your end users. That being said, I do a bunch in Haskell!


What's wrong with Haskell and distribution? I thought GHC produced native executables, maybe with some runtime libraries.


On Arch Linux, you can install packages from the user repository. Basically these are scripts to install things that aren't in the normal package repository, and the default is to install from source.

Whenever I install the Haskell package "aura", I need to be careful to get "aura-bin". The install-from-source version will install a crazy number of Haskell packages.

That said, not a problem with Haskell per se. I'm very happy with aura-bin.


i bet it may be better on other OSes and distributions, but on Arch, Haskell is a friggin nightmare. at least the Arch wiki page for it does more complaining about Haskell than giving guidance: https://wiki.archlinux.org/title/Haskell


I see so many updates for Haskell-related packages all the damn time.


Well, if they ask for trouble, trouble will come. Haskell packaging is no different from Rust packaging: native executables statically linked with many small packages; both rely critically on cross-module inlining for performance; no stable ABI.

Arch Linux wants to provide an _up-to-date_ compatible set of Haskell packages, instead of relying on the established Stackage snapshots like NixOS does. This certainly causes frustration, as they must carry a lot of patches.

End users don't benefit from this decision; they complain about a large number of tiny packages. Haskell devs don't benefit either, as they don't use packages provided by the OS -- just like any other Python/Node/Java/etc dev. Rust programmers use rustup+cargo, Python programmers use pip; likewise Haskell programmers use ghcup+cabal.

That being said, I respect the packagers (Felix Yan, et al.); they put great effort into updating a large number of old packages to be compatible with GHC 9.x.


Haskell packaging/building is a lot more pleasant with Nix. Cabal and Stack are too fragile IMHO, and user settings can far too easily break packages. With Nix you can still do customisation, but I find Nix at least provides reasonable assurance that your configuration isn't going to make the entire build explode.



