
The lack of a corresponding announcement on their blog makes me worry about a Twitter account compromise and a malicious model. Any way to verify it’s really from them?



Their https://twitter.com/MistralAI account has 5 tweets since the account opened, three of which were model release magnet links.

https://twitter.com/MistralAILabs is their other Twitter account, which is very slightly more useful though still very low traffic.


You must be new to Mistral releases. They invented the magnet-first, blog-later meta.


At 3:30am France local time? Alrighty. I'll still wait a little bit ;)


What could a malicious model do, though? Curse at you?



Not .safetensors though


Exploit a memory safety issue in the tokenizer or other parts of your LLM infra written in a native language.


??? With weights?


There was a buffer overflow, or some similar exploit, in llama.cpp's GGUF parser. It has been fixed now, but it's definitely possible. Weights distributed as Python pickles can also run arbitrary code when loaded.
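To make the pickle risk concrete, here's a minimal sketch (the filename and payload are hypothetical) of how a pickled object can execute code the moment it is loaded:

  import os
  import pickle

  class MaliciousWeights:
      # __reduce__ tells pickle how to reconstruct the object on load;
      # a hostile file can point it at any callable, e.g. os.system.
      def __reduce__(self):
          return (os.system, ("echo pwned",))

  # Attacker serializes the payload...
  with open("weights.pkl", "wb") as f:
      pickle.dump(MaliciousWeights(), f)

  # ...and the victim runs the shell command just by loading the file.
  with open("weights.pkl", "rb") as f:
      pickle.load(f)  # prints "pwned" -- torch.load on a .bin/.pt file
                      # hits the same code path unless weights_only=True

This is why blindly calling torch.load on a downloaded checkpoint is equivalent to running the publisher's code.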


Distributing anything as Python pickles seems utterly batshit to me.


Completely agree.


There are plenty of exploits where the payload is just "data" read by some vulnerable program (PDF readers, image viewers, browsers, compression tools, messaging apps, etc.).


Yes, there's a reason weights are now distributed as "safetensors" files. Malicious weights files in the old formats are possible, and while I haven't seen evidence of the new format being exploitable, I wouldn't be surprised if someone figures out how to do it eventually.
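For contrast, here's a minimal sketch of the safer path, assuming the safetensors and torch packages are installed (the filename and tensor names are hypothetical). A safetensors file is just a JSON header plus raw tensor bytes, so loading it never deserializes arbitrary Python objects:

  import torch
  from safetensors.torch import load_file, save_file

  # Saving: only tensor data and a JSON header describing shapes/dtypes
  # are written -- no executable objects can be embedded in the file.
  tensors = {"embedding.weight": torch.randn(10, 4)}
  save_file(tensors, "model.safetensors")

  # Loading: the parser reads the header and copies the raw bytes into
  # tensors; there is no pickle step, hence no code execution on load.
  loaded = load_file("model.safetensors")
  print(loaded["embedding.weight"].shape)  # torch.Size([10, 4])

Any remaining attack surface is in the header/bounds parsing itself, which is exactly the kind of bug the parent comment is speculating about.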


This is how they've released every model so far.



