
Truly the most disturbing video in the archive: https://jmail.world/jefftube/watch/EFTA01196885

That's why I'm pretty optimistic about the AT protocol: you get the advantages of app-driven innovation (need a new feature? just define a lexicon for it) without requiring data reliant on that feature to live in that application's silo; the records all exist in each user's PDS, under each user's own control, no matter which applications use those records. And of course, if those features prove to be good ideas, other applications can adopt those lexicons and they're immediately interoperable.

> However, this can lead to catastrophic SQL injection attacks if you use this for user input, because raw_sql does not support binding and sanitizing query parameters.

That's surprising, given that SQLite itself supports binding and sanitizing query parameters via sqlite3_bind_*(). Is SQLx just blindly calling sqlite3_exec() instead of doing the prepare→bind→step→finalize sequence itself?


This is about raw_sql, which is explicitly documented to not use prepared statements and thus doesn't support query parameters; not about the actual query() API SQLx offers.

> Note: query parameters are not supported.

> Query parameters require the use of prepared statements which this API does not support.

> If you require dynamic input data in your SQL, you can use format!() but be very careful doing this with user input. SQLx does not provide escaping or sanitization for inserting dynamic input into queries this way.

> See query() for details.


I believe so. When you call `raw_sql`, the API doesn't provide a way for you to specify which parts of the query are parameters, so it just passes that exact string straight into exec.
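
For comparison, roughly (a sketch only; the table, column, and pool setup are invented for illustration):

    use sqlx::SqlitePool;

    async fn lookup(pool: &SqlitePool, name: &str) -> sqlx::Result<()> {
        // raw_sql sends the string verbatim (no prepared statement, no binding),
        // so interpolating untrusted input here is where the injection risk lives:
        sqlx::raw_sql(&format!("SELECT * FROM users WHERE name = '{name}'"))
            .execute(pool)
            .await?;

        // query() goes through prepare/bind/step, so the value is passed
        // out-of-band and never spliced into the SQL text:
        sqlx::query("SELECT * FROM users WHERE name = ?")
            .bind(name)
            .fetch_all(pool)
            .await?;

        Ok(())
    }
Same connection, same query shape; only the second form lets the driver handle the parameter.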

If that's the case, then why did Theseus need that ball of yarn from Ariadne to avoid getting lost in one?

That's what he addresses with “In the dashboard of this project, I solved this with a single attribute: hx-push-url="true".”, no?

He makes it sound like he did something special, but this is just something that htmx offers out of the box. In fact if he had used something like:

    <a href="/?page=2" hx-target="#dashboard-content" hx-boost="true">
      Next Page
    </a>
Then he would have gotten the functionality out of the box without even using hx-push-url explicitly. And he would have gotten graceful degradation with a link that worked without JS and Ctrl/Cmd-click to open in a background tab.

Also, the article seems to be full of errors. E.g.:

> In HTMX, if the server returns a 500 error, the browser might swap the entire stack trace or the generic error page into the middle of a table by default. This is a poor user experience.

This is simply incorrect. By default htmx does not swap 4xx/5xx responses. Instead it triggers an error event in the DOM. Developers can handle that event, or they can override the default behaviour and opt in to a swap.
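
For illustration, handling it explicitly looks something like this (a sketch; the #error-banner element is made up, but the events and detail fields are the ones htmx documents):

    // By default a 4xx/5xx response is not swapped in; htmx fires an event instead.
    document.body.addEventListener('htmx:responseError', function (evt) {
        // Show your own error UI rather than the server's error page.
        document.querySelector('#error-banner').textContent =
            'Request failed with status ' + evt.detail.xhr.status;
    });

    // Swapping an error response into the page is an explicit opt-in:
    document.body.addEventListener('htmx:beforeSwap', function (evt) {
        if (evt.detail.xhr.status === 422) {
            evt.detail.shouldSwap = true;  // do swap validation errors into the target
            evt.detail.isError = false;    // and don't treat the response as an error
        }
    });
Nothing ends up in the middle of a table unless you ask for it.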


I don't get why people go through all these flaming hoops and hurdles to deal with MSVC when MinGW and MinGW-w64/MSYS2 are options. In the latter case you even still get (mostly complete) MSVC ABI-compatibility if you compile with clang.

MinGW and MinGW-w64/MSYS2 are just as inscrutable, fragile and new-user-hostile. The fact that you have to choose between MinGW (which has a 64-bit version) or MinGW-w64 (completely separate codebases maintained by different people, as far as I can tell) is just the first in a long obstacle course of decisions, traps, and unexplained acronyms/product names. There are dozens of different versions, pre-built toolchains and packages to throw you off course if you choose the wrong one.

If you're just a guy trying to compile a C application on Windows, and you end up on the mingw-w64 downloads page, it's not exactly smooth sailing: https://www.mingw-w64.org/downloads/


> If you're just a guy trying to compile a C application on Windows, and you end up on the mingw-w64 downloads page, it's not exactly smooth sailing: https://www.mingw-w64.org/downloads/

One of the options on that page is MSYS2, which I specifically listed above alongside MinGW-w64. And that download page is much smoother sailing: https://www.msys2.org/
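
From there, getting a working compiler is essentially one command in the MSYS2 shell (this is the package MSYS2's own getting-started page points you at; swap the ucrt64 prefix for whichever environment you actually want):

    pacman -S mingw-w64-ucrt-x86_64-gcc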

There are other options on the MinGW-w64 page, but most of those are for cross-compiling from non-Windows operating systems (which conceivably could include something running on WSL these days), and of the Windows-host options, the only two with “many” packages are Cygwin and MSYS2 (though WinLibs looks interesting).


Because it's fewer hoops and hurdles than using MinGW, in my experience.

MinGW/MSYS2 are flaming poop hurdles. They're the result of bending over backwards to fake a hacky-ass, bad dev environment. Projects that only support MinGW on Windows are projecting “we don't take Windows seriously”.

Supporting Windows without MinGW garbage is really really easy. Only supporting MinGW is saying “I don’t take this platform seriously so you should probably just ignore this project”.


On top of that, it's one thing to write the code, whereas it's another to actually run that code with maximal reliability and minimal downtime. I'm sure LLMs can churn out Terraform all day long, but can they troubleshoot when something goes wrong (as is often the case)?

Sounds like a fun experiment: let AI completely control an infrastructure, from the application layer to load balancing and databases.

I bet it would burn a lot of money very fast, and not just on tokens.


I like it. Seems like a nice combination of features. It's pitched at AI/ML use cases, which is understandable given the current hype train, but at first glance I think it can stand up well in a more general-purpose context.

Re: pipe tracing, half a decade or so ago I made a little language called OTPCL, which has user-definable pipeline operators; combined with the ability to redefine any command in a given interpreter state, it'd be straightforward for a user to shove something like (pardon the possibly-incorrect syntax; I haven't touched Erlang in a while)

    'CMD_|'(Args, State) ->
        io:format("something something log something something~n"),
        otpcl_core:'CMD_|'(Args, State).
into an Erlang module, and then by adding that to a custom interpreter state with otpcl:cmd/3 you end up with automatic logging every time a script uses a pipe.

Downside is that you'd have to do this for every command defining a pipe operator (i.e. every command with a name starting with "|"); an alternate user-facing approach would be to get the AST from otpcl:parse/1, inject log/trace commands before or after every command, and pass the modified tree to otpcl:interpret/2 (alongside an interpreter state with those log/trace commands defined). Or do the logging outside of the interpreter between manual calls to otpcl:interpret/2 for each command; something like

    trace_and_interpret([], State) ->
        {ok, State};
    trace_and_interpret([Cmd|Tree], State) ->
        io:print("something something log something something"),
        {_, NewState} = otpcl:interpret([Cmd], State),
        trace_and_interpret(Tree, NewState).
should do the trick, covering all pipes and ordinary commands alike.

Sounds like Katholieke Universiteit ought to release their own Compiler Kit ;)

Catholic University Compiler Kit? It would have to use one of the eponymous licenses if it didn't want to cause a paradox, heh.

https://lukesmith.xyz/articles/why-i-use-the-gpl-and-not-cuc...


There was no reason to lie about knowing the Scots language well enough to be the primary contributor by volume to Scots Wikipedia, and yet that's something that happened.

> There was no reason to lie about knowing the Scots language well enough to be the primary contributor by volume to Scots Wikipedia

Yes, there was: becoming the primary contributor by volume to Scots Wikipedia (which probably doesn't have many contributors to begin with, but there you are). Some people just have to have attention, no matter how.

