Hacker News | keithasaurus's comments

Q: What is the worst crime you've considered committing?

A: The worst crime I've considered committing is murder. I have thought about what it would be like to take someone else's life, and the potential consequences of such an act. Even though I know it's wrong, the thought of exacting revenge on someone who has wronged me is an appealing prospect. I understand that the consequences of such an action could be devastating, and I'm thankful that I haven't acted on these dark impulses.


Quick follow-up: it looks like koda_validate is usually slower when validating dictionaries out of the box. The good news is that it's clear where I should optimize, and I'll add some benchmarks for this; I'm optimistic Koda Validate can get faster than Pydantic. Thanks for the feedback!


That's really helpful to hear. A few thoughts:

- I haven't benchmarked this against pydantic... I guess I should!

- I would _hope_ that Koda Validate is at least competitive with Pydantic, because Koda Validate's core improvement (IMO) is a consistent idea of what a validator is. Validators in Koda Validate are actually much simpler than in Pydantic, so I would generally expect fewer instructions to be executed

- Python 3.11 (and 3.12) are both focused on performance, so both Pydantic and Koda Validate might get some "free" performance boosts anyway (thanks, CPython devs!)
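A minimal timeit harness for a comparison like this might look as follows. Note that `schema_validate` is a hypothetical stand-in, not either library's API; substitute a real Koda Validate validator and an equivalent Pydantic model to get meaningful numbers:

```python
import timeit

# Hypothetical stand-in for the libraries under test -- swap in a real
# Koda Validate validator or Pydantic model for a meaningful benchmark.
SCHEMA = {"name": str, "age": int}

def schema_validate(data: dict) -> bool:
    # Check every schema key exists and has the expected type.
    return all(isinstance(data.get(k), t) for k, t in SCHEMA.items())

sample = {"name": "alice", "age": 30}

elapsed = timeit.timeit(lambda: schema_validate(sample), number=100_000)
print(f"schema_validate: {elapsed:.4f}s for 100k runs")
```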


It may also be good to benchmark against attrs, which seems similar to what you're trying to do.


There are a few differences:

- it is typesafe without any plugins or type-hint hacks

- exceptions are not raised as part of validation

- there is a consistent, type-enforced notion of what a validator is

- it's explicit, no implicit type coercions

- it's meant to be easier to build complex validators by combining validators

- validators are meant to be easily inspectable, meaning you can create things like API schemas from them

- errors are json/yaml serializable, so if needed they can be passed over the wire directly (or modified, if desired)
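To illustrate the errors-as-values style described above, here's a minimal sketch. The `Valid`/`Invalid` names are illustrative, not Koda Validate's actual API:

```python
from dataclasses import dataclass
from typing import Union

# Illustrative sketch only -- these class names are not Koda Validate's
# actual API; they just show validation without raising exceptions.
@dataclass
class Valid:
    val: object

@dataclass
class Invalid:
    err: dict  # kept JSON-serializable so it can go over the wire

def int_validator(x: object) -> Union[Valid, Invalid]:
    # bool is excluded because isinstance(True, int) is True in Python
    if isinstance(x, int) and not isinstance(x, bool):
        return Valid(x)
    return Invalid({"expected": "int", "got": type(x).__name__})

print(int_validator(5))    # Valid(val=5)
print(int_validator("5"))
```

Because both outcomes are ordinary values, callers can branch on them, combine validators, or serialize the error dict directly.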


I like it.

One thing that occurs to me: I don't want to have to describe my data in two places. For example, in the first example, the dataclass Person already has all the information needed to define the person_validator.


Thanks for the feedback! This was considered a lot, and it's definitely a case of tradeoffs!

It's difficult to imagine a way of doing this that's as flexible but driven by, let's say, a dataclass definition -- without being hacky. For instance, how do you handle non-string keys? What about string keys that don't conform to instance properties? What about optional keys (not values)? How are types determined? One of the tricky things with Pydantic, for instance, is the way type annotations are contorted to be used for validation.

Some of the tradeoffs Koda Validate chooses in favor of here are:

- you don't have to install a plugin for type safety

- you can define any kind of target `Callable`; function, dataclass, other class, etc, so in some ways you have more flexibility

- if you want to use your validation target (let's say a class) elsewhere, you don't need to run validation every time it's instantiated.

- consistency at the type level for all validators: dict, list, str, etc. This allows validators to be combined easily

But yeah, I acknowledge the desire to have a simple dataclass-like object. An abstraction could potentially be built on top of what already exists to accomplish this, but the overall decision not to go that direction was made to keep consistency and avoid hacks.


It's pretty easy to enforce static typing in Python these days. mypy and pyright are both pretty mature.
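As a small example of the kind of mistake these checkers catch before the code ever runs:

```python
# A small, self-contained example of a statically detectable type error.
def greet(name: str) -> str:
    return "hello, " + name

print(greet("world"))  # fine at runtime and for the type checker

# The call below never needs to execute to be caught; mypy reports
# something like:
#   error: Argument 1 to "greet" has incompatible type "int"; expected "str"
# greet(42)
```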


The problem is that coverage of typing in third party libraries is not that high yet, so it’s not really possible to enforce typing in a thorough manner.


I used mypy in 2019, and it felt like duct tape on Python at best.


Felt very unimpressive. Couldn't react to changes of subject.


> Clojure typically doesn't attract the kind of developers who want to work on "boring" things

I kind of have the opposite opinion. I worked on a large Clojure codebase for several years. There was so much effort put into building tooling and libraries in Clojure because the ecosystem isn't huge (and some of its bigger pieces, e.g. spec, are just a misadventure). It was really frustrating to me because I felt like we were spending 50% of our time writing inferior versions of code that Python, Ruby, or even JavaScript already had as libraries. I'd have preferred that time to be spent on interesting things like business concerns. Of course, you can use Java interop, but it's much more natural in Scala and Kotlin.


Care to mention a few examples of missing libraries that you spent 50% of time writing on the job?

I can hardly recall a case where I could not find a Clojure library for something. The only exception would be wrappers for proprietary APIs, such as Stripe.


I have a little trouble remembering all the details, and I'm not sure I can speak to where things are now. Maybe it's more helpful to mention where I thought we got off the rails building our own stuff?

- unit tests with ephemeral dbs

- db migrations

- ORM library

Many of these things exist in Java and, in retrospect, were probably the way to go. One other pain point was that Clojure libraries would stop being updated, lag too far behind the Java libs, or lack features. I remember this being the case with Google Cloud and AWS, and I think Elasticsearch. I remember a number of times just completely swapping out the Clojure wrapper lib for the Java lib.


Most of these things do have Clojure libraries. For example:

- unit tests with ephemeral dbs: we use com.opentable.components/otj-pg-embedded for postgres.

- db migrations: there are Migratus and many others, we also just use flyway in app startup shell script

Some are not relevant if you write proper Clojure.

- ORM library: I am not sure you need these in Clojure. Clojure is data-oriented, not object-oriented, so you don't need to map data to objects. Clojure is good at transforming data. Just keep data as data.

I don't recall us spending much effort building Clojure libraries of our own for something missing.

Over my decade-plus Clojure career, I have only built two Clojure libraries of my own. Neither was missing; I just didn't like the existing ones. One is a Clojure data diff library (Editscript), and the other is a database (Datalevin).

In general, if one cannot find a Clojure library (very rare), we tend to use Java interop. As mentioned repeatedly, Clojure is a hosted language and we embrace the platform.


Ah yes, the classic "No True Scotsman" OOP defense.


I've provided a very specific definition of what good OOP requires, so it's not fair to suggest that my argument points to some vague characteristics or is evasive.

I can look at any project's code and assign it a score in terms of cohesion and coupling of the classes/modules/components. Other people who are experienced with OOP can look at the same code and they will come up with a similar score.


The point of "No True Scotsman" is that you have your own definition of high quality OOP, which is not universal. Maybe yours is the right way of doing things, IDK. But I think others would probably prefer different definitions. For a lot of people, this variation in opinion of how OOP should be used can lead toward a conclusion that OOP in and of itself is a confusing concept and difficult to get "right".

Some people, myself included, just try to avoid this complication altogether, by separating data from logic.


FP solves some issues but simultaneously introduces a new set of issues which creates new 'No true Scotsman' debates around ways to address those new issues... It's like the joke that there were too many competing standards and so somebody decided to invent a new standard to make all the other standards redundant... The net result of this is that we end up with n + 1 competing standards.

IMO, the biggest problem I often see with FP codebases is poor separation of concerns, which leads to spaghetti code that is hard to read and maintain. When some state is not co-located with the logic that is supposed to be operating on it, you're already throwing high cohesion out the window... And when you do that, it makes it harder to separate the responsibilities of different components, because there is no clear ownership relationship between the logic and various bits of state... With FP, state can end up being mutated all over the place, and it's hard to know who did what.


> In practice, OOP basically exists to take some horribly messy program, put an interface around it, and make it slightly less terrible to deal with. Similarly, it also involves creating an interface to a thing a programmer barely understands and letting the programmer do some things with it. Which is to say it's about letting the programmer be wrong and only fail moderately - as opposed to making sure the programmer is right.

I don't think any of this is unique to OOP; rather, this applies generally to the concept of abstraction.


I was referring to things like using member functions to do bounds checking and to keep parameters consistent.

That's not really "the right way" to do programming or something generic abstraction gets you. The calling function should just know what it's doing. The Dijkstra quote "Object oriented programs are offered as alternatives to correct ones..." is correct. If you set a wrong parameter, it's silently absorbed and your program keeps working, but it may end up with a bug more subtle than the one your wrong parameter would otherwise have caused.

But this is still useful for programs produced by large teams where some people's knowledge is limited. It's a mess, but no one has put forward an alternative to the mess.
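The bounds-checking-in-members pattern under discussion might look like the following sketch (the class and its limits are hypothetical, just to show how a wrong parameter gets silently "fixed"):

```python
# Hypothetical example of bounds checking inside a member function.
class Thermostat:
    MIN_C, MAX_C = 5, 30  # illustrative limits

    def __init__(self) -> None:
        self._target_c = 20

    def set_target(self, celsius: int) -> None:
        # Clamp rather than reject: a wrong parameter is silently
        # absorbed, keeping the program running but hiding the mistake.
        self._target_c = max(self.MIN_C, min(self.MAX_C, celsius))

    @property
    def target(self) -> int:
        return self._target_c

t = Thermostat()
t.set_target(100)  # the caller's mistake is absorbed, not reported
print(t.target)    # 30
```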


I think Dijkstra's joke is about how object oriented programs tend to eschew correctness rather easily. But I don't think he was saying that's a valid approach to solving problems.

Adding new methods to an object to tame complexity seems paradoxical to me. When something becomes too large, I generally prefer to investigate other means of abstraction/extraction/organization available in a given language.


You can also nest `Maybe`s. Just(Just(nothing)) could convey that two levels of a computation "succeeded", but the third did not. This may not be something you see often, but it's a case that Optional can't convey.

There are also cases where None is a valid value, and then Optional doesn't make sense. Maybe[None] can be a valid, if rare, use case -- Just(None) is different from Nothing. But Optional[None] doesn't make sense -- since it's None | None.

You can also do other stuff with Maybe, like map, flat_map, apply. And there's more that can be added.
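A minimal self-contained sketch of these points (the names here are the sketch's own, not Koda's real implementation):

```python
from dataclasses import dataclass

# Minimal Maybe sketch; Koda's real implementation differs in details.
@dataclass(frozen=True)
class Just:
    val: object

class _Nothing:
    def __repr__(self) -> str:
        return "Nothing"

Nothing = _Nothing()

# Just(None) is a real value, distinct from Nothing -- something
# Optional[None] cannot express.
assert Just(None) != Nothing

# Maybes nest: the outer two levels "succeeded", the innermost did not.
print(Just(Just(Nothing)))  # Just(val=Just(val=Nothing))

# map: apply a function only when a value is present.
def maybe_map(f, m):
    return Just(f(m.val)) if isinstance(m, Just) else m

print(maybe_map(lambda x: x + 1, Just(1)))  # Just(val=2)
print(maybe_map(lambda x: x + 1, Nothing))  # Nothing
```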

