> The fact that I can unlock and relock the bootloader is not a security issue or a risk. People who don't know what that means cannot possibly do it by mistake.
The second sentence is false. Lots of people blindly follow things and don't understand consequences until they brick their devices. Those who don’t break something won’t notice if they’ve silently backdoored themselves.
It's super common to see people asking for support after getting themselves into some weird hole they never should have been in, because some friend or online article said so.
> The second sentence is false. Lots of people blindly follow things and don't understand consequences until they brick their devices. Those who don’t break something won’t notice if they’ve silently backdoored themselves.
"Lots of people", how many though? Can that number be reduced? What number would be acceptable?
I feel like it _has_ to be possible to devise an unlocking procedure that dissuades most people from self-harm.
The problem is often treated as intractable, but intuitively that seems really unlikely to me. I don't think more than a tiny percentage of Xiaomi owners, for example, would go through the bootloader unlock process, which often has a mandatory wait period attached, without a reason more compelling than an impulse to blindly follow instructions from the internet.
I would like to see user studies with good methodology before other people decide to barter long-term freedoms away for insufficient benefit.
Why do I so rarely see people who are concerned about the security issues of bootloader unlocking call for hassle and warnings to be designed into the process? Instead, it's more common to hear that, in the name of the average user, all escape hatches must be removed.
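To make "designing hassle in" concrete, here's a rough sketch of what a deliberately slow unlock flow could look like. Everything in it (the wait period, the confirmation phrase, the function names) is hypothetical for illustration, not any vendor's actual implementation:

```python
# Hypothetical device-side unlock flow with designed-in friction,
# loosely inspired by Xiaomi-style mandatory wait periods.
import time

WAIT_PERIOD_SECONDS = 7 * 24 * 60 * 60  # e.g. a one-week cooldown (made up)
CONFIRMATION_PHRASE = "I understand this disables verified boot"

def request_unlock(state: dict) -> bool:
    """Returns True only after a cooldown AND a typed confirmation."""
    now = time.time()
    if "requested_at" not in state:
        # First attempt only starts the clock; nothing is unlocked yet.
        state["requested_at"] = now
        print("Unlock requested. Come back after the wait period.")
        return False
    elapsed = now - state["requested_at"]
    if elapsed < WAIT_PERIOD_SECONDS:
        remaining_days = (WAIT_PERIOD_SECONDS - elapsed) / 86400
        print(f"Still {remaining_days:.1f} days to go.")
        return False
    # Require typing a specific phrase, not just tapping "yes",
    # so blindly following a tutorial takes sustained effort.
    typed = input(f'Type exactly: "{CONFIRMATION_PHRASE}"\n> ')
    return typed == CONFIRMATION_PHRASE
```

The point isn't this exact mechanism; it's that a week of waiting plus a typed acknowledgment filters out almost everyone acting on impulse while leaving the escape hatch open.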
After a certain point, someone else's insistence on self-harm ceases to be a good excuse to infringe on my freedom. We don't ban hammers because some people accidentally damage their property/body, and it's a lot easier to do that with a hammer than an unlocked bootloader.
If you need world knowledge, then bigger models. If you need problem-solving, then more reasoning.
But the specific nuance of picking nano/mini/main and minimal/low/medium/high comes down to experimentation and what your cost/latency constraints are.
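The experimentation itself is cheap to script, for what it's worth. A minimal sketch of a tier × effort sweep, assuming the OpenAI Python SDK; the model names and the `reasoning` parameter shape are assumptions you'd swap for whatever your provider actually exposes:

```python
# Sketch: sweep model tier x reasoning effort on one prompt and eyeball
# the quality/cost/latency tradeoff. Model names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Summarize the tradeoffs of GC vs manual memory management."

for model in ["gpt-5-nano", "gpt-5-mini", "gpt-5"]:
    for effort in ["minimal", "low", "medium", "high"]:
        resp = client.responses.create(
            model=model,
            reasoning={"effort": effort},
            input=prompt,
        )
        # Print a short preview of each combination's answer.
        print(f"{model} / {effort}: {resp.output_text[:80]!r}")
```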
Trying to get an accurate answer (best correlated with objective truth) on a topic I don't already know the answer to (or why would I ask?). This is, to me, the challenge with the "it depends, tune it" answers that always come up in how to use these tools: to be able to tune, you need a case where the tool isn't actually useful to you, because you already have the solution.
If cost is no concern (as in infrequent one-off tasks) then you can always go with the biggest model with the most reasoning. Maybe compare it with the biggest model with no/less reasoning, since sometimes reasoning can hurt (just as with humans overthinking something).
If you have a task you do frequently, you need some kind of benchmark. That might just be comparing how well the output of the smaller models holds up against the output of the bigger model, if you don't know the ground truth.
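Concretely, that comparison can be a simple judge loop that treats the big model's answer as the reference. A rough sketch, again assuming the OpenAI SDK; the judge prompt, scale, and model names are arbitrary choices, not a recommendation:

```python
# Sketch: score a cheaper model against a bigger model's output when no
# ground truth exists, using the big model itself as a rough judge.
from openai import OpenAI

client = OpenAI()

def answer(model: str, prompt: str) -> str:
    return client.responses.create(model=model, input=prompt).output_text

def agreement(reference: str, candidate: str) -> int:
    """1-5 consistency score; the rubric here is entirely made up."""
    judge_prompt = (
        "On a scale of 1-5, how consistent is ANSWER B with ANSWER A?\n"
        f"ANSWER A: {reference}\nANSWER B: {candidate}\n"
        "Reply with a single digit."
    )
    return int(answer("gpt-5", judge_prompt).strip()[0])

prompts = ["(your frequent task prompts go here)"]
for p in prompts:
    ref = answer("gpt-5", p)        # proxy for ground truth
    cand = answer("gpt-5-mini", p)  # the model you hope is good enough
    print(agreement(ref, cand), p[:40])
```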
I agree. Public benchmarks aren't very useful for a bunch of reasons. Any company relying on LLMs for a critical function should have its own internal benchmark system. I maintain such a system for my job. If you are able, use the same prompts every time so results stay comparable across models. It's fun to be able to include models like the original Bard on our leaderboard.
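A minimal version of such a harness is just a frozen prompt set plus accumulated per-model scores; this is only a sketch of the idea, not our actual system:

```python
# Sketch of an internal leaderboard: a score per model kept in a JSON
# file, so runs from old models (even Bard-era ones) stay comparable.
# How you produce the per-prompt scores is up to you.
import json
import pathlib

RESULTS = pathlib.Path("leaderboard.json")

def record(model: str, scores: list[float]) -> None:
    board = json.loads(RESULTS.read_text()) if RESULTS.exists() else {}
    board[model] = sum(scores) / len(scores)
    RESULTS.write_text(json.dumps(board, indent=2, sort_keys=True))
    # Print the current standings, best average first.
    for name, avg in sorted(board.items(), key=lambda kv: -kv[1]):
        print(f"{avg:5.2f}  {name}")

# e.g. record("gpt-5-mini", [4.0, 5.0, 3.0]) after scoring the frozen prompts
```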
Kotlin/Native's choice to go with a GC over native memory management is my biggest issue with it, and it really limits its use for memory- and performance-sensitive use cases.