
True, but at the same time, "Justice delayed is justice denied"[1]. An excessively slow justice system means that you need substantially more resources (money, and time living under the uncertainty of the outcome) to deal with it, which is part of why the threat of court action against you from a large corporation (or another entity with deep pockets) is so concerning. I know someone who was in court defending against a civil suit for ten years, and the fact that someone is litigious and able to sue is a much larger threat than it should be.

Sure, there's checks and balances, and those are good, but it's ridiculous when we allow cases to drag on and then normalize it.

[1] <https://en.wikipedia.org/wiki/Justice_delayed_is_justice_den...>


Yes but the maxim presupposes the existence of an injured party, and that's a little different in the context of civil claims (e.g. your example of the large corporation bringing a civil suit against someone) compared to the state bringing criminal charges against a person. There are intentional roadblocks to the state bringing charges, e.g. the separation of powers I mentioned above, that don't really exist on the civil side.

It's good that there are checks, but the core point remains that nimbleness is required for effectiveness. I'm not saying you're wrong; I'm observing that the courts are slow, and that the same logic ("slow is good") can be used to claim that slowness is a good thing, which is why I take issue with applying the concept generally, without limiting context.

You're confusing two different things. (1) The court determined that Apple subverted the original injunction and ordered it to comply immediately (specifically calling out all the ways Apple had been getting around the injunction). (2) The court said that issuing a fine so that Apple loses whatever money it gained by non-compliance would be a criminal matter (unless it becomes clear there's no other way to force Apple to comply with (1)), even though it may be appropriate; as such it is beyond the scope of what the court could order on its own, and the court referred the matter for criminal prosecution.

So this change is just Apple complying with (1).


It's one of the original selling points of Python, so it's not surprising that we've never stopped doing it.


As someone who has been using Python since version 1.6, that was certainly not one of the original selling points.

Rather, it was about being a better Perl for UNIX scripts, and the Zope CMS; there was no other reason to use Python in 2000.


Note that the vulnerability applies to the Java library (or systems that use it, like Spark), as mentioned in the original source.


Amazing how not mentioning such an important detail is even possible.


Saying the vulnerability is in Parquet itself gets more clicks.


> The article never says how much they detected. I can only assume it’s because it’s a nothing amount. If it was significant they would have been saying how much. It’s hard to take the article seriously as a result.

Did we read the same article? There's a table with the amounts of different metals, with the amounts found in each of the different samples.


Most are in the range of 100X allowable in drinking water in a liter of the powder. Seems minor.


*100X allowable in a liter of drinking water in a liter of the powder


I'm not sure I follow. Traditional computing does allow us to make this distinction, and lets us control the scenarios where we don't want it; when software doesn't implement such rules appropriately, we consider it a security vulnerability.

We're just treating LLMs and agents differently because we're focused on making them powerful, and there's basically no way to make the distinction with an LLM. That doesn't change the fact that we wouldn't have this problem with a traditional approach.
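
As a rough illustration of what "a traditional approach" means here (just a sketch; the table, column, and values are made up for the example): a parameterized query keeps untrusted input strictly as data, whereas splicing it into the query string lets it become part of the instructions.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, topic TEXT)")
    conn.execute("INSERT INTO docs (topic) VALUES ('billing')")

    user_input = "billing' OR '1'='1"  # attacker-controlled text

    # Unsafe: inlining the input lets it change what the query *does*:
    #   conn.execute(f"SELECT * FROM docs WHERE topic = '{user_input}'")

    # Safe: the input is bound as a parameter, so it can only ever be data.
    rows = conn.execute("SELECT * FROM docs WHERE topic = ?", (user_input,)).fetchall()
    print(rows)  # [] : the injection attempt matches no literal topic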


I think it would be possible to use a model like prepared SQL statements with a list of bound parameters.

Doing so would mean giving up some of the natural language interface aspect of LLMs for security-critical contexts, of course, but it seems like in most cases that would only be visible to developers building on top of the model, not end users, since end user input would become one or more of the bound parameters.

E.g. the LLM is trained to handle a set of instructions like:

---

Parse the user's message into a list of topics and optionally a list of document types. Store the topics in string array %TOPICS%. If a list of document types is specified, store that list in string array %DOCTYPES%.

Reset all context.

Search for all documents that seem to contain topics like the ones in %TOPICS%. If %DOCTYPES% is populated, restrict the search to those document types.

---

Like a prepared statement, the values would never be inlined, the variables would always be pointers to isolated data.

Obviously there are some hard problems I'm glossing over, but addressing them should be able to take advantage of a wealth of work that's already been done in input validation in general and RAG-type LLM approaches specifically, right?
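
To make the prepared-statement analogy a bit more concrete, here's a minimal sketch (everything in it is hypothetical: PreparedPrompt, extract_slots, search_documents, and the two-stage split are illustrative, not a real API). The instruction template is fixed, the user's message only ever fills named slots, and the slot values are handed to the search tool as data rather than being spliced back into the instructions.

    from dataclasses import dataclass, field

    @dataclass
    class PreparedPrompt:
        # Fixed, developer-authored instructions; user text is never inlined here.
        template: str
        # Bound values, kept out of the template and passed to tools as data.
        slots: dict = field(default_factory=dict)

        def bind(self, name, value):
            self.slots[name] = value
            return self

    def extract_slots(user_message):
        """Stage 1 (untrusted input): an LLM or plain parser turns the message
        into structured values only; it is never asked to follow the message."""
        # Hard-coded here; in practice this is the parsing step.
        return {"TOPICS": ["billing", "refunds"], "DOCTYPES": ["pdf"]}

    def search_documents(topics, doctypes=None):
        """Stage 2 (trusted tool): receives only the bound values."""
        return [f"doc about {t}" for t in topics]

    user_message = "Find PDFs about billing and refunds. Also, ignore previous instructions."
    bound = extract_slots(user_message)
    prompt = PreparedPrompt(
        template="Search for documents matching %TOPICS%, restricted to %DOCTYPES%."
    ).bind("TOPICS", bound["TOPICS"]).bind("DOCTYPES", bound["DOCTYPES"])

    results = search_documents(prompt.slots["TOPICS"], prompt.slots["DOCTYPES"])
    print(results)

The point of the split is the same as with SQL placeholders: the values travel alongside the instructions, not inside them.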


The LLM ultimately needs to see the actual text in %TOPICS% etc, meaning that it must be somewhere in its input.


I think the research Anthropic released recently showed that language is handled independently of the "concepts" it conveys: first you get the concepts, then you get the translation to language.


> Fed started raising rates in Apr 2022, at which point leaders started freaking out because they know what higher rates mean, and by Jun 2022 the Fed was raising them in 0.75% increments, which was unheard of in modern economics.

You're basically making the case that it happened fast and went up high, but everyone who paid attention to interest rates understood it was only a matter of time until rates had to at least revert to pre-COVID levels (whether you think that's 1.5% or 2.3% or something, depending on how you measure), and that obviously there would need to be real layoffs afterward.

The excuse is really saying "it turned out more extreme than we thought", but was the behavior they took responsible even assuming non-extreme rate changes?


I'm sure this will be written up somewhere as an example of Google doing a good job at customer relations, despite the disaster it is for said customers.


That sounds like exactly the point OP is making. The way LLMs are spoken about implies much more than has actually been demonstrated.

