makeramen's comments | Hacker News

> The fact that I can unlock and relock the bootloader is not a security issue or a risk. People who don't know what that means cannot possibly do it by mistake.

The second sentence is false. Lots of people blindly follow instructions and don't understand the consequences until they brick their devices. Those who don't break anything won't notice if they've silently backdoored themselves.

It's extremely common to see people asking for support after getting themselves into some weird hole they never should have been in, just because a friend or an online article told them to.


> The second sentence is false. Lots of people blindly follow instructions and don't understand the consequences until they brick their devices. Those who don't break anything won't notice if they've silently backdoored themselves.

"Lots of people", how many though? Can that number be reduced? What number would be acceptable?

I feel like it _has_ to be possible to devise an unlocking procedure that dissuades most people from self-harm.

The problem is often treated as intractable, but intuitively that seems really unlikely to me. I don't think more than a tiny percentage of Xiaomi owners, for example, would go through the bootloader unlock process, which often has a mandatory waiting period attached, without a reason more compelling than an impulse to blindly follow instructions from the internet.

I would like to see user studies with good methodology before other people decide to barter long-term freedoms away for insufficient benefit.

Why do I so rarely see people who are concerned about the security issues of bootloader unlocking calling for hassle and warnings to be designed into the process? Instead, it's more common to hear that, in the name of the average user, all escape hatches must be removed.


After a certain point, someone else's insistence on self-harm ceases to be a good excuse to infringe on my freedom. We don't ban hammers because some people accidentally damage their property or body, and it's a lot easier to do that with a hammer than with an unlocked bootloader.

When you reboot into fastboot mode and enter the commands that break your phone, I think you're responsible.

If you take a hammer and destroy your phone, I think you're responsible.


Tailscale serve
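
If it helps, a minimal usage sketch; the exact subcommands and flags depend on your Tailscale version, and port 3000 is just a hypothetical local service:

    # Expose localhost:3000 over HTTPS, reachable from your tailnet only
    tailscale serve 3000

    # See what's currently being served
    tailscale serve status

(And tailscale funnel is the public-internet equivalent, if that's what's needed here.)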

That seems like a typo or incorrect info: the M5 MBP can definitely be configured with up to 32 GB, and Apple's page mentions 32 GB explicitly as well.


You can probably infer some from their Ory case study: https://www.ory.sh/case-studies/openai


But given the option, do you choose bigger models or more reasoning? Or a medium amount of both?


If you need world knowledge, then bigger models. If you need problem-solving, then more reasoning.

But the specific nuance of picking nano/mini/main and minimal/low/medium/high comes down to experimentation and what your cost/latency constraints are.
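
As a rough sketch of experimenting along both axes (this assumes the OpenAI Python SDK; the model names and effort levels are illustrative, not a recommendation):

    # Two knobs for the same prompt: model size and reasoning effort.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt, model="gpt-5-mini", effort="low"):
        # Bigger model: more world knowledge.
        # Higher effort: more problem-solving, but more latency and cost.
        response = client.responses.create(
            model=model,
            reasoning={"effort": effort},
            input=prompt,
        )
        return response.output_text

    # e.g. compare ask(q, "gpt-5-nano", "high") against ask(q, "gpt-5", "minimal")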


I would have to get experience with them. I mostly use Mistral, so I have only the choice of thinking or not thinking.


Mistral also has Small, Medium, and Large, with both Small and Medium having a thinking variant, plus Devstral, Codestral, etc.

Not really that much simpler.


Ah, but I never route to these manually. I only use LLMs a little bit, mostly to try to see what they can't do.


Depends on what you're doing.


> Depends on what you're doing.

Trying to get an accurate answer (best correlated with objective truth) on a topic I don't already know the answer to (or why would I ask?). This, to me, is the challenge with the "it depends, tune it" answers that always come up around these tools: tuning requires questions you already know the answer to, and those are exactly the cases where the tool isn't useful to you.


If cost is no concern (as with infrequent one-off tasks), then you can always go with the biggest model and the most reasoning. Maybe compare it with the biggest model with no/less reasoning, since reasoning can sometimes hurt (just as humans can overthink something).

If you have a task you do frequently, you need some kind of benchmark, which might just be comparing how well the output of the smaller models holds up against the output of the bigger model if you don't know the ground truth.
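
A rough sketch of that comparison, assuming the OpenAI Python SDK (model names illustrative): treat the biggest model's answer as the reference and have it grade the smaller model's answer against it.

    from openai import OpenAI

    client = OpenAI()

    def answer(model, prompt):
        return client.responses.create(model=model, input=prompt).output_text

    def score_small_vs_big(prompt, small="gpt-5-mini", big="gpt-5"):
        reference = answer(big, prompt)
        candidate = answer(small, prompt)
        # Crude LLM-as-judge grading; a real harness would average over
        # many prompts and spot-check the judge on a few hand-labeled cases.
        return answer(big,
            "Grade CANDIDATE against REFERENCE on a 0-10 scale. "
            "Reply with just the number.\n"
            f"Question: {prompt}\nREFERENCE: {reference}\nCANDIDATE: {candidate}")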


I agree. Public benchmarks aren't very useful for a bunch of reasons. Any company relying on LLMs for a critical function should have its own internal benchmark system. I maintain such a system for my job. If you are able, use the same prompt every time. It's fun to be able to include models like the original Bard on our leaderboard.


You don't do it manually. Once you've guided Claude back on track, you have it write the reminder itself so it doesn't do it next time.
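
For example, after steering it back you can have it append a note to its own CLAUDE.md; the rule below is hypothetical, just to show the shape:

    # CLAUDE.md
    - Never edit files under build/ directly; they are generated. Change the templates instead.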


Seems like that was a preview model; it's unknown whether this released version is different.


I think it's only pulling the older model: I can see it's using the LiteRT models from May.


Kotlin/Native's choice of a GC over manual memory management is my biggest issue with it, and it really limits its use in memory- and performance-sensitive use cases.


So basically [engineering] design is more important than implementation details.

I would say the "engineering" part of the design is also optional, since product design is yet another lever with more influence than code optimization.


That's a recent development; it used to be something else altogether.

