The number of people in this thread who are confusing fairness with consistency, and who seem to think LLMs achieve the former when they achieve the latter, seems a bit high. Worse, the fact that some people believe justice systems should actually prize consistency over fairness is … frightening.
“Fair” is a complex moral question that LLMs are not qualified to answer, since they have no morals or empathy, and they aren’t answering it here.
Instead they are being “consistent,” and the humans are not. Consistency has no moral component, and LLMs are at least theoretically well suited to being consistent (model temperature choices aside).
Fairness and consistency are two different things, and you definitely want your justice system to target fairness above consistency.
This view seems to miss the goal of the justice system in the first place. The goals are societal. Any consistency is a means and not an end.
(IE being consistent at all is simply one thing that helps achieve some of the societal goals. It is not a goal itself. A totally consistent system that did not achieve the societal goals would be pointless)
As one of those authors (3 books in this case) I'll just point out:
Most authors don't own any interesting rights to their books because they are works for hire.
Maybe I would have gotten something, maybe not. Depends on the contract. One of my books that was used is from 1996. That contract did not say a lot about the internet, and I was also 16 at the time ;)
In practice they stole from a relatively small number of publishers. The rest is PR.
The settlement goes to authors in part because anything else would generate immensely bad PR.
I spend a lot of time in EPLAN, 3D CAD programs, and CAM programs (when working on machines, furniture, and CNC of furniture or machines, respectively :P)
However, the vast majority of the time, when I'm searching GrabCAD or friends, it's for specific part numbers or parts.
I cannot remember the last time I wanted to try to find some object by generic description. Even as a tool to find possible parts that might fit my task, I don't see how it would make sense, because the stuff that matters is mostly not visual.
IE let's say I'm trying to find ballscrew nuts that might be a replacement for one I can't get anymore.
I'd want to search for ballscrew nuts that have a specific dimension, which maybe it can do (doesn't look like it so far), but the properties that matter are things like "is it preloaded", etc., which wouldn't be part of the description it generates (because they're often not even visualized in the CAD model).
This is for mechanical stuff that might be interchangeable. Lots of CAD models are of electronics and such that are very much not.
Even when I'm doing 3D printing, most of the time I'd search for the part I'm looking for, not a generic description of what it might look like - IE I'm searching for dividers for sidiocrate crates.
Giving me thousands of possible things that might be a divider would be pretty useless.
To the degree I search for generic descriptions, people already provide them, and it's not obvious this is a meaningful friction point for them (IE that being able to generate the labels automatically is really valuable).
So while I think this is overall cool, I struggle to think of a truly practical use.
It's also rarely worth being optimal in scalar code anymore, particularly at a compilation speed cost. The exception here is memory accesses and branches that will miss. So the writing of useless zeros is egregious, but other stuff just isn't usually worth caring about these days. It's "good enough" in an age where, even in embedded land, I can run a 48 MHz Cortex-M0 for 10 years on a battery and not worry about a few extra ANDs. I'm much more likely to hit size than speed limitations.
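To make the "useless zeros" point concrete, here's a hypothetical sketch (my own example, not one from the article) of the kind of dead zero-initialization in question:

    /* The zero-initialization is dead because every field is overwritten
       immediately, yet a naive code generator still emits the zero stores -
       wasted memory traffic, which is the kind of cost that still matters. */
    struct point { int x, y; };

    struct point make_point(int x, int y) {
        struct point p = {0};  /* stores zeros... */
        p.x = x;               /* ...then overwrites them right away */
        p.y = y;
        return p;
    }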
Not to mention, for anything not super battery limited, you can get an M55 running at 800 MHz with a separate 1 GHz NPU, hardware video encoders, etc.
This is before you move into the Rockchip/etc. space.
We really just aren't scalar-compute limited in tons of places these days. There are certainly still places where it matters, but 10-15 years ago missing little scalar optimizations could make very noticeable differences in the performance of lots of apps, and now it just doesn't anymore.
Unlocking/forced unlocking is not a 4th amendment issue, but a 5th amendment one.
The 4th amendment would protect you from them seizing your phone in the first place for no good reason, but would not protect you from them seizing your phone if they believe it has evidence of a crime.
Regardless, it is not the thing that protects you (or doesn't, depending) from having to give or otherwise type in your passcode/pin/fingerprint/etc.
This is an area that seems to confuse a lot of people because of what the 5th amendment says and doesn't say.
The reason they can't force you to unlock your phone is not because your phone contains evidence of stuff. They have a warrant to get that evidence. You do not have a right to prevent them from getting it just because it's yours. Most evidence is self-incriminating in this way - if you have a murder weapon in your pocket with blood on it, and the police lawfully stop you and take it, you really are incriminating yourself in one sense by giving it to them, but not in the 5th amendment sense.
The right against self-incrimination is mostly about being forced to give testimonial evidence against yourself. That is, it's mostly about being forced to testify against yourself under oath, or otherwise give evidence that is testimonial in nature against yourself. In the case of passwords, courts now often view unlocking as you being forced to disclose the contents of your mind (IE live testimony against yourself), and, equally important, even where it is not live testimony, it testimonially proves that you have access to the phone (more on this in a second). Biometrics are in a weird state, with some courts treating them like passwords/PINs, and some finding them just a physical fact with no testimonial component at all beyond proving your ability to access.
The foregone conclusion part comes into play because, excluding being forced to disclose the contents of your mind for a second, the testimonial evidence you are being forced to give when you unlock a phone is that you have access to the phone. If they can already prove it's your phone or that you have access to it, then unlocking it does not matter from a testimonial standpoint, and courts will often require you to do so in the jurisdictions that don't consider any other part of unlocking to be testimonial.
(Similarly, if they can't prove you have access to the phone, and whether you have access to the phone or not matters to the case in a material way, they generally will not be able to force you to unlock it or try to unlock it, because it would be a 5th amendment violation.)
> excluding being forced to disclose the contents of your mind for a second
This seems like a key point though. What's the legal distinction between compelling someone to unlock a phone using information in their mind, and compelling them to speak what's in their mind?
If I had incriminating info on my phone at one point, and I memorized it and then deleted it from the phone, now that information is legally protected from being accessed. So it just matters whether the information itself is in your mind, vs. the ability to access it?
There are practical differences - phones store a lot more information than you will keep in your mind at once.
You can actually eliminate phones entirely from your second example.
If you had incriminating info on paper at one point, and memorized it and deleted it, it would now be legally protected from being accessed.
One reason society is okay with this is because most people can't memorize vast troves of information.
Otherwise, the view here would probably change.
These rules exist to serve various goals as best they can. If they no longer serve those goals well, because of technology or whatever else, the rules will change. Being completely logical and self-consistent is not one of these goals, nor would it make sense as a primary goal for rules meant to try to balance societal vs personal rights.
This is, for various reasons, often frustrating to the average HN'er :)
> This is, for various reasons, often frustrating to the average HN'er :)
With that in mind...
> Being completely logical and self-consistent is not one of these goals, nor would it make sense as a primary goal for rules meant to try to balance societal vs personal rights.
Do we really know that it wouldn't make sense, or is that just an assumption because the existing system doesn't do it? (Alternatively, perhaps a consistent logical theory simply hasn't been identified and articulated.)
This reminds me of how "sovereign citizens" argue their position. Their logic isn't consistent; it's built around rhetorical escape hatches. They'll claim that their vehicle is registered with the federal DOT, which is a commercial registration, but then they'll also claim to be a non-commercial "traveler". They're optimizing for coverage of objections, not global consistency.
What you seem to be telling me is that the prevailing legal system is the same, just perhaps with more of the obvious rough edges smoothed out over the centuries.
I will say first that C libc implementations do this - the functions are defined inline in header files - but this is mainly a pre-LTO artifact.
Otherwise it has no particular advantage other than disk space; it's the equivalent of just catting all your source files together and compiling that.
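A minimal sketch of both patterns (file names are made up):

    /* fastmath.h - pre-LTO style: define hot functions in the header
       so every includer can inline them without cross-TU optimization */
    static inline int clamp(int v, int lo, int hi) {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* unity.c - the "cat all your source files together" equivalent:
       one big translation unit gives the optimizer whole-program
       visibility, at a large memory and compile-time cost */
    #include "lexer.c"
    #include "parser.c"
    #include "codegen.c"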
If you think it's better to do it in the frontend, cool: you could make all the code visible to the frontend by fake-compiling everything, writing the original source into a special section of each object file, and then making the linker actually call the frontend with all those special sections.
You can even do it without the linker if you want.
Now you have all the code in the frontend if that's what you want (I have no idea why you'd want this).
It has the disadvantage that it's the equivalent of the cat-everything-together approach, with no way to opt out.
If you look far enough back, lots of C/C++ projects used to do this kind of thing when they needed performance in the days before LTO - or they just shoved the function definitions into header files - but they stopped because it has a huge forced memory and compilation-speed footprint.
Then we moved to precompiled headers to fix the latter, then LTO to fix the former and the latter.
Everything old is new again.
In the end, you are also much better off improving the ability to take lots of random object files with IR and make it optimize well than trying to ensure that all possible source code will be present to the frontend for a single compile. Lots of languages and compilers went down this path and it just doesn't work in practice for real users.
So doing stuff in the linker (and it's not really the linker - the linker is just calling the compiler with the code, whether that compiler is a library or a separate executable) is not a hack; it's the best compilation strategy you can realistically use, because the frontend-sees-everything approach is essentially a dreamland where nobody has third-party libraries they link, or subprojects that are libraries, or multiple compilation processes, and ....
Zig always seems to do this thing in blog posts and elsewhere where they add remarks implying there is only one true way of doing it right, and that they are the ones doing it.
It often comes off as immature, and honestly it's a turnoff from wanting to use it for real.
I may be able to resolve this, having hacked a bunch on m1n1 and such - the DFU port goes through a microcontroller with its own firmware.
That is why, for example, it can properly process USB-PD messages that contain vendor-defined message codes, even prior to any form of boot, as long as it has any source of power.
The firmware on the USB controller is processing that.
This is how VDMTool is able to mux in debug (and do other things) even with the machine otherwise off.
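For the curious, a hedged sketch of what such a message looks like on the wire. The bit layout follows the USB-PD spec's structured VDM header; 0x05AC is Apple's well-known USB vendor ID, but the specific command values VDMTool sends are assumptions I'm not reproducing here:

    #include <stdint.h>

    /* Build a USB-PD structured VDM header per the USB-PD spec's layout. */
    static uint32_t vdm_header(uint16_t svid, uint8_t command) {
        return ((uint32_t)svid << 16) /* bits 31:16: Standard/Vendor ID */
             | (1u << 15)             /* bit 15: VDM type = structured */
             | (command & 0x1Fu);     /* bits 4:0: command; values 16-31
                                          are SVID-specific (vendor-defined) */
    }

    /* e.g. vdm_header(0x05AC, apple_cmd) - it's the port controller's
       firmware that parses and acts on this, even with the SoC off */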