It's a 7B "unified model" LLM/VLM (not a diffusion model!) that out-benchmarks DALL-E 3 and Stable Diffusion Medium. It's released under the DeepSeek License, a pretty open license that allows commercial use but restricts military use, along with a few other content-based restrictions.
Who in their right mind is going to blindly take the code output by a large language model and toss it on a cruise missile? Sleeper agents are trivially defeated by even a modicum of human oversight.
The weights and data pipeline are open-sourced and described explicitly in the paper they published. The non-reasoning data isn't nearly as interesting as the reasoning data, though.
Lawsuits, but it's mainly just CYA for DeepSeek; I doubt they'll actually attempt to enforce much. I only mentioned it because it's technically not FOSS due to the content restrictions (though it's one of the most open licenses in the industry; i.e., more open than the Llama license, which bars Meta's largest competitors from using Llama at all).