
> Every non-trivial long-lived application will require tweaks to individual queries at the string level.

It's possible we could have LLMs tweak queries to perfection given ORM-style input. Obviously ORMs of the past don't do this, but it's not impossible that we could do it in the future.

For example

    filter(lambda user: user.username.startswith("foo"), db.users)
is an anti-pattern because it fetches all of db.users and then does the filtering in Python, which is horrible. But an LLM could optimize this code to an SQL query like

    SELECT * FROM users WHERE username LIKE 'foo%'
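A minimal sketch of the difference, using Python's built-in sqlite3 and a hypothetical users table (names and data are made up for illustration):

```python
import sqlite3

# Hypothetical in-memory stand-in for db.users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("foobar",), ("foobaz",), ("alice",)])

# Anti-pattern: fetch every row into Python, then filter there.
all_rows = conn.execute("SELECT username FROM users").fetchall()
in_python = sorted(u for (u,) in all_rows if u.startswith("foo"))

# Pushed down: the database does the filtering, and only
# matching rows ever leave the database.
in_sql = [u for (u,) in conn.execute(
    "SELECT username FROM users WHERE username LIKE 'foo%' ORDER BY username")]

assert in_python == in_sql == ["foobar", "foobaz"]
```

Both produce the same rows, but the pushed-down version can use an index and avoids shipping the whole table to the application.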


You don't need an LLM to be able to write SQL though?! Aside from the attempt to shove LLMs into every conversation, I'm failing to see their relevance here.

You still have to learn the ORM's syntax and quirks as well, since they may be neither intuitive nor consistent. Why not focus learning efforts on the more broadly applicable skill of writing SQL queries, instead of on the leaky abstraction that is an ORM generating SQL queries?


> You don't need an LLM to be able to write SQL though?

For one-liners, sure, but I've seen some horrendously long SQL queries with multiple joins that are prone to bugs of all sorts.


Save time?


I don't think this is a good example of where ORMs fall apart. If you have devs writing the first thing, you have bigger issues.

ORMs fall apart when they either do unexpected things, or "expected" but often terrible things, like lazy-loading data in a way that issues a bazillion queries.
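To make the lazy-loading failure mode concrete, here's a sketch with a hypothetical authors/posts schema in plain sqlite3, counting how many queries each approach issues (this is the classic N+1 pattern that lazy-loading ORMs can silently produce):

```python
import sqlite3

# Hypothetical schema: authors and their posts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ann'), (2, 'bob'), (3, 'cam');
    INSERT INTO posts VALUES (1, 'a1'), (2, 'b1'), (2, 'b2'), (3, 'c1');
""")

queries = 0
def run(sql, params=()):
    """Execute a statement while counting how many queries were issued."""
    global queries
    queries += 1
    return conn.execute(sql, params).fetchall()

# Lazy-loading style: one query for the authors, then one more
# per author to fetch that author's posts -- N+1 queries total.
queries = 0
for (author_id, _name) in run("SELECT id, name FROM authors"):
    run("SELECT title FROM posts WHERE author_id = ?", (author_id,))
lazy_count = queries  # 1 + 3 = 4 queries for 3 authors

# Eager style: a single JOIN fetches everything at once.
queries = 0
rows = run("""SELECT a.name, p.title
              FROM authors a JOIN posts p ON p.author_id = a.id""")
eager_count = queries  # 1 query

assert lazy_count == 4 and eager_count == 1
```

The danger is that both versions look equally innocent at the ORM level; the query count only explodes at runtime, and it grows with the size of the parent table.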


Both of these things are only a matter of knowing the tool.


It's true. I use ORMs in my projects.




