
>You cannot fingerprint models like this

A GAN can absolutely be trained to discriminate between text generated by this model and text generated by another model.
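
To make that concrete, here is a minimal sketch of the discriminator half of the idea: a standalone classifier (not a full GAN) fit on text sampled from the target model versus text from anywhere else. The two corpora are placeholders; you would need real samples from each source.

  # Minimal discriminator sketch (a plain classifier, not a full GAN).
  # The two corpora below are placeholders for real collected samples.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  target_model_text = ["placeholder output sampled from the model in question"] * 50
  other_text = ["placeholder text written by humans or by other models"] * 50

  X = target_model_text + other_text
  y = [1] * len(target_model_text) + [0] * len(other_text)

  clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
  clf.fit(X, y)

  # Estimated probability that a new passage came from the target model
  print(clf.predict_proba(["some new text to attribute"])[:, 1])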

>that's hilarious

What's hilarious about it?



That would be interesting if it were true, but I don't think it can be: an LLM's main advantage is that it memorizes text in its weights, so your discriminator model would need to be the same size as the LLM.

That said, the smaller GPT-3 models break down quite often, so they're probably detectable.


In the same way that we can train models to identify people from their choice of words, phrasing, grammar, etc., we can train models that identify other models.
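
For what it's worth, the classic stylometric version of this is easy to sketch: profile each author by the relative frequency of common function words and attribute an unknown text to the nearest profile. The writing samples here are placeholders, and real systems use much richer features.

  # Toy stylometric attribution: function-word frequency profiles
  # compared by cosine similarity. The writing samples are placeholders.
  import math
  from collections import Counter

  FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

  def profile(text):
      counts = Counter(text.lower().split())
      total = sum(counts.values()) or 1
      return [counts[w] / total for w in FUNCTION_WORDS]

  def cosine(u, v):
      dot = sum(a * b for a, b in zip(u, v))
      norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
      return dot / norm if norm else 0.0

  authors = {
      "author_a": "placeholder writing sample for the first author",
      "author_b": "placeholder writing sample of the second author",
  }
  unknown = "placeholder text for the unknown author"
  print(max(authors, key=lambda a: cosine(profile(authors[a]), profile(unknown))))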


That's anthropomorphizing them. A large language model doesn't have a bottleneck the way a human does (in terms of being able to express things); it can get onto a path where it just outputs memorized text directly, and that output won't be consistent with what it usually seems to know at all.

Also, you could break a discriminator model by running a filter over the output that changes a few words around or misspells things, etc. Basically an adversarial attack.
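
A toy version of that kind of filter, for illustration; the synonym table is a stand-in for a thesaurus or a paraphrasing model:

  # Toy evasion filter: swap in synonyms and introduce occasional typos so
  # surface statistics drift away from the discriminator's training data.
  import random

  SYNONYMS = {"large": "big", "quickly": "fast", "difficult": "hard"}  # placeholder table

  def perturb(text, typo_rate=0.05, seed=0):
      rng = random.Random(seed)
      words = []
      for w in text.split():
          w = SYNONYMS.get(w.lower(), w)
          if rng.random() < typo_rate and len(w) > 3:
              i = rng.randrange(len(w) - 1)
              w = w[:i] + w[i + 1] + w[i] + w[i + 2:]  # swap two adjacent characters
          words.append(w)
      return " ".join(words)

  print(perturb("It is difficult to detect text that has been quickly rewritten"))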


I agree it is not exactly the same as a human, but the content it produces is based on its specific training data, how it was fed the training data, how long it was trained, the size and shape of the network, etc. These are unique characteristics of a model that directly impact what it produces. A model could have a unique proclivity for using specific groups of words, for example.
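
One crude way to measure that kind of proclivity is to compare the unigram distributions of two models' outputs; a large divergence on common words suggests the sources are statistically distinguishable. The corpora here are placeholders.

  # Compare word-frequency fingerprints of two text sources via KL divergence.
  # The two corpora are placeholders for real sampled outputs.
  import math
  from collections import Counter

  def unigram_dist(texts):
      counts = Counter(w for t in texts for w in t.lower().split())
      total = sum(counts.values())
      return {w: c / total for w, c in counts.items()}

  def kl_divergence(p, q, eps=1e-9):
      return sum(pw * math.log(pw / q.get(w, eps)) for w, pw in p.items())

  model_a_out = ["placeholder output text from model a"] * 10
  model_b_out = ["placeholder output text from model b"] * 10
  print(kl_divergence(unigram_dist(model_a_out), unigram_dist(model_b_out)))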

But yes, you could break the discriminator model, in the same way people disguise their own writing patterns by using synonyms, making different grammar/syntax choices, etc. Building a better evader and building a better detector is an eternal cat-and-mouse game, but that doesn't reduce the need to play it.


A well-trained GAN leaves the discriminator with only a 50% chance of telling whether a generated image is fake or not. But you can't make imperceptible changes to text the way you can with images.
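
For context on that 50% figure: in the standard GAN setup, the optimal discriminator against a fixed generator depends only on the data and generator densities, so at the equilibrium where the generator matches the data distribution it can do no better than a coin flip:

  D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_g(x)},
  \qquad p_g = p_{\mathrm{data}} \;\Longrightarrow\; D^*(x) = \tfrac{1}{2}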


> A GAN can absolutely be trained to discriminate between text generated by this model and text generated by another model.

Nope. I dare you to do it. Or at least intelligently articulate the model architectures for doing so.

> What's hilarious about it?

It's a bullshit term, first off, and calling yourself that is the height of ego. Might as well throw in rockstar, ninja, etc. too.


So in the entire field of machine learning, we can't train a model that can identify another model from its output? Just can't be done? And there's absolutely no value in having tools that can identify deep fakes, or content produced by specific open models?

>It's a bullshit term, first off, and calling yourself that is the height of ego

I am a 10x engineer though, so I'm sorry if that rubs you the wrong way. Also, you're reading my personal website, so of course I'm going to speak highly of myself :)


> in the entire field of machine learning

... we can't train a model to be 100% correct. There will always be false matches. Another super hard task is confidence estimation - models tend to be super sure of many bad predictions.
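
The over-confidence point is at least measurable: expected calibration error compares a classifier's stated confidence with its actual accuracy, bucketed by confidence. The probabilities and labels below are placeholder values.

  # Expected calibration error (ECE) for a binary detector.
  # `probs` are predicted P(class 1); `labels` are the true classes.
  import numpy as np

  def expected_calibration_error(probs, labels, n_bins=10):
      probs, labels = np.asarray(probs), np.asarray(labels)
      confidences = np.where(probs >= 0.5, probs, 1 - probs)  # confidence in predicted class
      predictions = (probs >= 0.5).astype(int)
      bins = np.linspace(0.5, 1.0, n_bins + 1)
      ece = 0.0
      for lo, hi in zip(bins[:-1], bins[1:]):
          mask = (confidences >= lo) & (confidences < hi)
          if mask.any():
              acc = (predictions[mask] == labels[mask]).mean()
              conf = confidences[mask].mean()
              ece += mask.mean() * abs(acc - conf)
      return ece

  print(expected_calibration_error([0.99, 0.95, 0.9, 0.6], [0, 1, 1, 1]))  # placeholder data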

In this particular case you're talking about distinguishing human-written text from stochastic text generation. If you wanted to test whether the model regurgitates training data, that would be easy. But the other way around, checking whether its output can be told apart from text that hasn't been written yet, is a hard, open-ended problem, especially if you take into consideration the prompts and the additional information they could contain.

It's like testing whether my keys are in the house vs. testing whether my keys are nowhere outside the house (you can't prove an open-ended negative). On top of this, the prompts would be like letting unsupervised random strangers into the house.


That is an interesting idea. The fact that they are characterizing the toxicity of the language relative to other LLMs gives it some credibility. That being said, I just don’t see where the ROI would be in something like that. Seems like a lot of expense for no payoff.

My (unasked for) advice would be to take the 10x engineer stuff off your page. It may be true, but it signals the opposite. Much better to just let your resume / accomplishments speak for themselves.


>That being said, I just don’t see where the ROI would be in something like that. Seems like a lot of expense for no payoff.

I consider these types of models to be information weapons, so I wouldn't be surprised if they have some contract/agreement with the US government under which they can only release these things to the internet if they have sufficient confidence in their ability to detect them when they inevitably get used to attack the interests of the US and our allies. I don't know how (or even if) that translates to a financial ROI for Meta.


> Nope. I dare you to do it. Or at least intelligently articulate the model architectures for doing so.

It is obvious that we can at least try to detect this in principle. People are already attempting to do so [1][2]. I would be very surprised if Facebook and the other tech giants are not trying, because they already have a huge problem on their hands from this type of technology.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8049133/
[2] https://github.com/openai/gpt-2-output-dataset/tree/master/d...
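
For what it's worth, reference [2] pairs with a RoBERTa-based detector that OpenAI fine-tuned on GPT-2 outputs. A hedged sketch of running it, assuming the model is still hosted on the Hugging Face hub under the id below (it may have been renamed or moved):

  # Sketch of running OpenAI's GPT-2 output detector from [2].
  # The model id is an assumption and may no longer resolve.
  from transformers import pipeline

  detector = pipeline("text-classification", model="roberta-base-openai-detector")
  print(detector("Some passage whose provenance you want to check."))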





