
"model available" would be my preferred term.

Is Photoshop.exe "interpretable" by anybody with a copy (of Windows)? How about a binary that's been heavily decompiled, like a Mario game?


Photoshop doesn't claim to be open source like Llama does, though, so I'm not sure of the connection you're making.

Don't get me wrong, Llama is at least more open than OpenAI's models, and that may be meaningful.


License aside, the question is what can be done with a carefully arranged blob of binary. Without additional software (Windows), I can't really do anything with Photoshop.exe. Similarly, Llama.gguf is useful with Ollama.app and not much use standing alone. So, looking past the difference in license: would you consider Photoshop.exe a binary blob that's useless by itself, or a useful collection of bytes, and why is (or isn't) an ML model available on Hugging Face the same?
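
To make the comparison concrete: a GGUF file is at least a self-describing container, which is more than can be said for Photoshop.exe. Here's a minimal sketch of what you can learn from one with nothing but the Python standard library; the header layout follows the published GGUF spec, and the file path is just a placeholder.

    import struct

    def read_gguf_header(path):
        # Per the GGUF spec, the file starts with the ASCII magic
        # "GGUF", then three little-endian integers: version (u32),
        # tensor count (u64), and metadata key/value count (u64).
        with open(path, "rb") as f:
            magic = f.read(4)
            if magic != b"GGUF":
                raise ValueError("not a GGUF file")
            (version,) = struct.unpack("<I", f.read(4))
            (n_tensors,) = struct.unpack("<Q", f.read(8))
            (n_kv,) = struct.unpack("<Q", f.read(8))
        return version, n_tensors, n_kv

    # Placeholder path; any GGUF model from Hugging Face would do.
    print(read_gguf_header("llama.gguf"))

So the blob is inspectable, which cuts both ways for the analogy: it's more transparent than a compiled .exe, but knowing the bytes still isn't knowing how they were produced.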


The license used isn't important in my opinion; when talking about open source, the question is whether the source code is available to be modified and reviewed/interpreted.

Photoshop, or any compiled binary, isn't meant to be open source, and the code isn't meant to be reviewable. Llama is called open source, though the most important part isn't publicly available for review. If Llama didn't claim to be open source, I don't think it would matter that the model itself and all the weights aren't really reviewable.

If your argument is just that most software is shipped as compiled and/or obfuscated code, sure, that's how it's usually done. That isn't considered open source, though, and the line with LLMs seems very gray: a model can be called "open source" if the source code for the training logic is available, even though the actual model being run can't be reviewed or interpreted.


The source data used for training needs to be public and freely licensed too; otherwise it's not, IMO, an open source model.


Is that really necessary if the resulting model were actually available and comprehensible?

Personally, I can't say I care as much about what the training set is; I want to know what's actually in the model and used at runtime.
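
For what it's worth, "what's actually in the model" is already inspectable at the structural level. A sketch using the gguf helper package published from the llama.cpp repo (pip install gguf); GGUFReader and its fields/tensors attributes exist in that package, though attribute names may differ between versions, and the path is a placeholder:

    from gguf import GGUFReader

    reader = GGUFReader("llama.gguf")  # placeholder path

    # Metadata: architecture, context length, tokenizer details, etc.
    for key in list(reader.fields)[:10]:
        print(key)

    # Tensor directory: name, shape, and quantization type of every weight.
    for t in reader.tensors[:10]:
        print(t.name, t.shape, t.tensor_type)

That gets you every number used at runtime; the catch, as others note here, is that seeing the weights isn't the same as being able to review or reproduce them.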


Yes. You can't know what kind of poisoning was done to the initial training data set, you can't review the data or any human inputs, and you can't retrain from scratch. All of those are things the model author can do; downstream folks/companies/governments should be able to do them too. Otherwise it isn't open source.


I think this discussion is silly in the context of a modern LLM. Nobody really understands how an LLM works, and you absolutely do not actually want to retrain Llama from scratch.

When I said "it's not really open source", I was referring to the fact that there are restrictions on who can use Llama.


Well, that's a much deeper rabbit hole: we shouldn't be using such massive systems or throwing so many resources at them when no one even knows how they work.