GLM-4.7 (specifically this version) repeats the guardrail prompt injections from 3.0 Pro word for word, and never follows them, which is consistent with training on a reward-hacked CoT. Gemini 3.0 only discusses snippets from this injection in its native CoT (hidden by default, but trivial to uncover), yet GLM-4.7 was able to reconstruct it in full during training. The only plausible explanation is direct training on a large number of examples of Gemini's CoT. GLM's CoT structure and many of its replies were identical as well.

Gemini 2.0 Exp 1206 was reported to have been indirectly trained on Claude's outputs, with humans in between [1], which was fairly consistent with its outputs at the time. Apart from two experimental versions, no other Gemini release resembled Claude.

[1] https://techcrunch.com/2024/12/24/google-is-using-anthropics...
