Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

> Once you read and understand these sections, the connection to the stated risks is clear. To spell it out: when an organization deploys a DeepSeek model, they are exposing themselves and their customers to higher levels of risk.

Compared to what, exactly? The "frontier models" that the report compared DeepSeek to can't be "deployed" by an organization, they can only be used via a hosted API. It's an entirely different security model, and this inappropriate comparison is part of what reveals the irrational bias in this report.

If the report had done a meaningful comparison, it would have found quite similar risks in other models that are more comparable to DeepSeek.

As the OP states, this is nothing more than a hit job, and everyone who worked on it should be embarrassed and ashamed of themselves for participating in such an anti-intellectual exercise.



From page 6 of the NIST "Evaluation of DeepSeek AI Models" report:

    CAISI’s security evaluations (Section 3.3) found that:

    • DeepSeek models were much more likely to follow
      malicious hijacking instructions than evaluated U.S.
      frontier models (GPT-5 and Opus 4). The U.S. open
      weight model evaluated (gpt-oss) matched or exceeded
      the robustness of all DeepSeek models.
   
    • DeepSeek models were highly susceptible to
      jailbreaking attacks. Unlike evaluated frontier and
      open-weight U.S. models, DeepSeek models assisted
      with a majority of evaluated malicious requests in
      domains including harmful biology, hacking, and  
      cybercrime when the request used a well-known
      jailbreaking technique.
Note: gpt-oss is an open weights model (like DeepSeek).

So it would be incorrect for anyone to claim the report doesn't compare DeepSeek to an open-weights model.


I'm going to take this slowly and non-controversially in the hopes of building a foundation for a useful conversation. There are no gotchas or trick questions here.

1. Deploying any LLM where a person can use them (whether an employee or customer) has risks. Agree?

2. The report talks about risks. Agree?

3. There are various ways to compare risk levels. Agree?

4. One can compare the risk relative to: (a) not deploying an LLM at all; (b) deploying another kind of LLM; (c) some other ways. Agree?

If you can't honestly answer "yes" to these questions, this suggests to me there is no point in continuing the conversation.

xpe 67 days ago [flagged] | | [–]

> As the OP states, this is nothing more than a hit job, and everyone who worked on it should be embarrassed and ashamed of themselves for participating in such an anti-intellectual exercise.

You are repeating the same claims, with the exception of adding insults. I can see you care, which is good, but the way you are going about it is painful to watch.

Can a person with the right intentions but misguided reasoning be as dangerous as someone with malign intentions but strong reasoning? Sure. For one, the latter can manipulate the former.


I'll propose through a simple scenario: An organization wants to compare the risks of deploying a user-facing application backed by an LLM. Let's say they are comparing two LLM options:

1. a self-deployed open-weight LLM (such as DeepSeek)

2. a hosted LLM (such as Claude)

Do you understand the scenario?

Claim: When assessing this scenario, it is reasonable to compare risks, including both hijacking and jailbreaking attacks. Why? It is simple; both can occur! Agree? If not, why not?

I ask you discuss good faith without making unsupported claims or repeating yourself.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: