
Most of the parameters you would include in ollama's Modelfile are things you would pass to llama.cpp as command-line flags:

https://github.com/ggml-org/llama.cpp/blob/master/examples/m...
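For example, a Modelfile along these lines (a rough sketch; the model filename and values are made up):

  FROM ./mistral-7b-instruct-q4_k_m.gguf
  PARAMETER temperature 0.7
  PARAMETER top_p 0.9
  PARAMETER num_ctx 4096

maps more or less onto a llama.cpp invocation like:

  ./llama-cli -m ./mistral-7b-instruct-q4_k_m.gguf --temp 0.7 --top-p 0.9 -c 4096

(llama-cli is the current binary name; flag spellings have changed between llama.cpp versions, so treat the exact flags as illustrative.)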

If you only ever have one set of configuration parameters per model (same temp, top_p, system prompt...), then I guess you can put them in a gguf file (as the format is extensible).

But what if you want two different sets? You still need to keep them somewhere. That could be a shell script for llama.cpp, or a Modelfile for ollama.

(Assuming you don't want to create a new (massive) gguf file for each permutation of parameters.)
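Concretely, the shell-script option can just be two tiny wrappers around the same gguf (a sketch; the paths and values are hypothetical, and flag names may differ across llama.cpp versions):

  # creative.sh - high-temperature preset
  ./llama-cli -m ./model.gguf --temp 1.0 --top-p 0.95 "$@"

  # precise.sh - low-temperature preset, same gguf
  ./llama-cli -m ./model.gguf --temp 0.2 --top-p 0.9 "$@"

Both scripts point at the same model file; only the sampling flags differ. The ollama equivalent is two Modelfiles that both FROM the same base model.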




This is why we use xdelta3, rdiff, and git: you can keep one full gguf plus small binary deltas instead of a complete copy per parameter set.
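A minimal sketch of the xdelta3 route (assuming two gguf files that differ only in their embedded metadata; the filenames are made up):

  # encode: store just the binary difference between the two files
  xdelta3 -e -s base.gguf variant.gguf variant.vcdiff

  # decode: rebuild the variant from the base plus the (small) delta
  xdelta3 -d -s base.gguf variant.vcdiff variant.gguf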



