
IMO Pydantic is way more ergonomic, has great defaults, and easier to bend to your will when you want to use it a little differently.

Lots of love to Attrs, which is a great library and a component of a lot of great software. It was my go-to library for years before Pydantic matured, but I think a lot of people have rightly started to move on to Pydantic, particularly with the popularity of FastAPI.



I'd say the opposite. Specifically, Pydantic tries to do everything, and as a result (partly because they favor base classes over higher-order functions), it isn't as composable as attrs is.

I've done some truly amazing things with attrs because of that composability. If I'd wanted the same things with Pydantic, it would have had to be a feature request.
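For a concrete illustration (the names here are hypothetical, not from any real codebase): because attrs builds classes with a plain decorator, you can wrap that decorator in your own higher-order function, which Pydantic's base-class design makes awkward. A minimal sketch:

    import attr

    def auditable(cls):
        # Hypothetical wrapper: apply attr.s, then bolt on an audit method.
        cls = attr.s(auto_attribs=True)(cls)

        def audit(self):
            # attr.asdict works on any attrs-decorated class
            return {"class": type(self).__name__, "fields": attr.asdict(self)}

        cls.audit = audit
        return cls

    @auditable
    class Order:
        order_id: int
        total: float

    print(Order(order_id=1, total=9.99).audit())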


I think this was on HN at some point, but this article makes the case for why/when you’d want to use attrs rather than pydantic. https://threeofwands.com/why-i-use-attrs-instead-of-pydantic...


Apples and oranges. pydantic is a serialization and input validation library, attrs is a class-generator library. Completely different features, completely different use cases.

Yes, you can do validation in attrs, but it's not meant to be used the same way as pydantic. For serialization, you need cattrs, which is a completely different package.
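To make the split concrete, a rough sketch of both workflows (pydantic v1 API; cattrs' global converter functions):

    # pydantic: parsing/validation is built into the model itself
    from pydantic import BaseModel

    class PUser(BaseModel):
        id: int
        name: str

    PUser.parse_obj({"id": "1", "name": "a"})  # coerces "1" -> 1

    # attrs generates the class; cattrs does the (de)serialization
    import attr
    import cattr

    @attr.s(auto_attribs=True)
    class AUser:
        id: int
        name: str

    cattr.structure({"id": 1, "name": "a"}, AUser)  # dict -> AUser
    cattr.unstructure(AUser(id=1, name="a"))        # AUser -> dict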


Do you have concerns about the speed or memory footprint of pydantic compared to the rest (attrs, dataclasses, etc.)? Pydantic seems insistent on parsing/validating the types at runtime (which makes good sense for something like FastAPI).


We always used attrs with the runtime type validators anyway. Getting those types checked in Python was way more valuable to my teams than the minor boilerplate reduction.

If you’re worried about the performance hit of extra crap happening at runtime… dear lord use another programming language.

Dataclasses is just… meh. Pydantic and Attrs just have so many great features, I would never use dataclasses unless someone had a gun to my head to only use the standard library. I don’t know of a single Python project that uses dataclasses where Pydantic or Attrs would do (I’m sure they exist, but I’ve never run across it).

Dataclasses honestly seems very reactionary by the Python devs: Attrs was getting so popular and used so widely that it got a little embarrassing for Python that something so obviously needed in the language just wasn't there. Those who weren't using Attrs' runtime validators often did something similar by abusing NamedTuple with type hints. There were tons of "why isn't Attrs in the stdlib" comments, which is an annoying type of comment to make, but it happens.

So they added dataclasses, but having all the many features that Attrs has isn't a very standard-library-like approach, so we got… dataclasses. Like, "look, it's what you wanted, right?!" Well, no, not really; thanks, we'll just keep using Attrs, and then Pydantic.


I wouldn't say adding dataclasses was "reactionary". One of the reasons for adding it was to use it in the stdlib itself, which is obviously something we couldn't do with attrs. And because dataclasses skipped ahead to just using type hints to define fields, it has less backward-compatibility baggage than attrs has.

As I think I made clear in PEP 557, and every time I discuss this with anyone, dataclasses owes a lot to attrs. I think attrs made some great design decisions, in particular not requiring metaclasses or base classes.


To the runtime-validation point: our team used attrs with runtime validation enforced everywhere (we even wrote our own wrapper to make it always use validation, with no boilerplate), and this ended up being a massive performance hit, to the point where it was showing up close to the top of most profile stats from our application. Ripping all that out made a significant improvement to interactive performance, with zero algorithmic improvements anywhere else. It really is very expensive to do this type of validation, and we weren't even doing "deep" validation (i.e. validating that `list[int]` really did contain only `int` objects), which would have been even more expensive.
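For readers who haven't seen the pattern: a no-boilerplate always-validate wrapper (a hypothetical sketch, not the team's actual code) can be built on attrs' `field_transformer` hook, attaching an `instance_of` validator to every annotated field:

    import attr

    def _auto_validate(cls, fields):
        # Give every plainly-typed field an instance_of validator.
        return [
            f.evolve(validator=attr.validators.instance_of(f.type))
            if isinstance(f.type, type) and f.validator is None
            else f
            for f in fields
        ]

    def validated(cls):
        # Hypothetical wrapper: an attrs class with validators everywhere.
        return attr.s(auto_attribs=True, field_transformer=_auto_validate)(cls)

    @validated
    class Point:
        x: int
        y: int

    Point(x=1, y=2)    # fine
    Point(x=1, y="2")  # raises TypeError at construction time

Every construction of every such object pays the validation cost, which is how it climbs to the top of a profile.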

Python can be used quite successfully in high-performance environments if you are judicious about how you use it: set performance budgets, measure continuously, make sure to have vectorized interfaces, and keep a tool like PyO3, Cython, or mypyc on hand (you should probably NOT be using C these days, even if "rewrite in C" is how this advice was phrased historically) to push very hot loops into something faster when necessary. But if you redundantly validate everything's type on every invocation at runtime, it eventually becomes untenable for anything but slow batch jobs once you have any significant volume of data.
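One way to read the "judicious" advice, sketched with illustrative names: validate once at a vectorized boundary where untrusted data enters, instead of paying the check on every object construction inside the hot path:

    import attr
    from attr.validators import instance_of

    @attr.s(auto_attribs=True)
    class Sample:
        value: float = attr.ib(validator=instance_of(float))

    # Slow: a validated object is constructed per element, on every call.
    def total_per_item(raw):
        return sum(Sample(value=v).value for v in raw)

    # Faster: check the batch once at the edge, then loop with no validation.
    def total_batch(raw):
        if not all(isinstance(v, float) for v in raw):
            raise TypeError("raw must contain only floats")
        return sum(raw)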


Thanks. You do make a lot of good points.

Attrs just has the features I need for now. It certainly feels a touch verbose, but I'm happy to pay the price.


Can you tell me which parts feel verbose? In my recent usage, I've not found myself using the older, more verbose approaches.


If you use the runtime type validators along with the python builtin type hints you constantly repeat yourself:

    id: UUID = attr.ib(validator=instance_of(UUID))  # ...plus other parameters
The type hint helps mypy and pylint work, while the validator is a runtime check by Attrs.

If your attribute name is longer or you have other parameters to set, that can become a very long line (or several) of code that you repeat for every attribute.
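Spelled out with a couple of (made-up) realistic fields, the repetition looks like this; each type is written once for the static checker and once for the runtime validator:

    import attr
    from typing import Optional
    from uuid import UUID
    from attr.validators import instance_of, optional

    @attr.s(auto_attribs=True, frozen=True)
    class Invoice:
        invoice_id: UUID = attr.ib(validator=instance_of(UUID))
        customer_reference: str = attr.ib(validator=instance_of(str))
        amount_cents: Optional[int] = attr.ib(
            default=None, validator=optional(instance_of(int))
        )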


Pydantic is dataclasses, except types are validated at runtime? It's nice, and judging by https://pydantic-docs.helpmanual.io/ it looks just like a normal dataclass.

For any larger program, though, pervasive type annotations and "compile-time" checking with mypy are a really good idea, which somewhat lessens the need for runtime checking.
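For anyone who hasn't tried it, the runtime side looks like this (pydantic v1 API; both the coercion and the error happen at construction):

    from pydantic import BaseModel, ValidationError

    class User(BaseModel):
        id: int
        name: str

    User(id="42", name="Ada")  # ok: "42" is coerced to the int 42

    try:
        User(id="not-a-number", name="Ada")
    except ValidationError as e:
        print(e)  # reports that id is not a valid integer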


Pydantic types will be checked by mypy or any other static type analysis tool as well.

I don't expect any type-related thing to be remotely safe in Python without applying at least mypy and pylint, potentially pyright as well, plus, as always with an interpreted language, unit tests for typing issues that would be caught by a compiler in another language.


Welp, not sure why I caught downvotes for using the two most common static analysis tools for Python, but here we are on Hacker News.


Likely the reason is that you really only need mypy; it handles the type issues. Pylint is useful, but doesn't really overlap with mypy in terms of what it catches, and there's no need to use pyright, write type-verifying unit tests, or do runtime type validation if you have mypy.

Using mypy gives you the type-safety equivalent of a compiled language. If you're using mypy, you don't need any additional validation that you wouldn't use in Java or C++. I didn't downvote you, but the needless defense in depth is weird.


"Using mypy gives you the type safety of a compiled language" is far from the truth. Each tool has things it doesn't catch, or things it sometimes false-alerts on.

Yeah, mypy will get you maybe 90% of the way there, but the Swiss-cheese approach of stacking a few tools, even though there are redundancies, helps plug almost all of the holes.

People can argue about unit tests all day, but there’s very little cost to stacking multiple static analysis tools.


Pydantic works with mypy, so you have validation at build-time and parsing at runtime.


Last time I checked, constrained types do not work with mypy out of the box.

https://github.com/samuelcolvin/pydantic/issues/975
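The root problem is that the constrained-type factories build new types at runtime, which a static checker can't see through. A minimal reproduction (pydantic v1):

    from pydantic import BaseModel, conint

    class Model(BaseModel):
        # mypy rejects this annotation: conint(...) is a function call,
        # which is not a valid type expression to a static checker.
        count: conint(gt=0)

    Model(count=5)   # validates fine at runtime
    Model(count=-1)  # raises ValidationError at runtime

A common workaround is to keep the annotation a plain type and move the constraint into the field definition, e.g. `count: int = Field(..., gt=0)`.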


Thanks! I didn’t know about that.


Pydantic is useful if you're dealing with parsing unstructured (or sort of weakly untrusted) data. If you just want "things that feel like structs", dataclasses or attrs are going to be just as easy and more performant (and, due to using decorators and not metaclasses, more capable of playing nicely with other things).
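The decorator-vs-metaclass point in practice (a small sketch; `PluginMeta` is a stand-in for any third-party metaclass):

    import attr
    from pydantic import BaseModel

    class PluginMeta(type):
        pass

    class Plugin(metaclass=PluginMeta):
        pass

    # Inheriting from both is a hard error, because pydantic's BaseModel
    # brings its own metaclass:
    # class Hybrid(Plugin, BaseModel): ...  # TypeError: metaclass conflict

    # attrs is just a decorator, so it composes with the existing metaclass:
    @attr.s(auto_attribs=True)
    class Hybrid(Plugin):
        name: str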



