If that's the case then I'll try the platform out :) I want to fine-tune Codestral or Qwen2.5-coder on a custom codebase. Thanks for the response! Are there any docs or info about the compatibility of the downloaded models, i.e. will they work right away with llama.cpp?
We don't support Codestral or Qwen2.5-coder right out of the box for now, but depending on your use-case we certainly could add it.
We use LoRA for smaller models and QLoRA (quantized LoRA) for 70B+ models to speed up training, so when you download the model weights, you get the base weights plus the adapter_config.json. They should work with llama.cpp!
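For anyone wondering what that workflow looks like in practice, here's a rough sketch of getting the downloaded weights + LoRA adapter into llama.cpp. This is just an illustration, not official docs: the script names come from recent llama.cpp checkouts, and all paths/prompts are placeholders.

```shell
# Sketch only — paths and directory names are placeholders.
# Assumes a recent llama.cpp checkout with its Python requirements installed.

# 1) Convert the downloaded base model (HF format) to GGUF
python convert_hf_to_gguf.py ./base-model --outfile base.gguf

# 2) Convert the LoRA adapter (the folder containing adapter_config.json)
#    to GGUF as well, pointing it at the base model for tensor shapes
python convert_lora_to_gguf.py ./my-adapter --base ./base-model --outfile adapter.gguf

# 3) Run inference with the adapter applied at load time
./llama-cli -m base.gguf --lora adapter.gguf -p "Write a hello world in C"
```

Alternatively, you can merge the adapter into the base weights first (e.g. with PEFT's merge utilities) and convert the merged model as a single GGUF file, which avoids the `--lora` flag at runtime.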