That would be very surprising, given that every widely used encryption algorithm has been EXTENSIVELY cryptanalyzed.
ML models are essentially trained to recognize patterns. Encryption algorithms are explicitly designed to resist that kind of analysis. LLMs are not magic.
All of what you said is true, for us. I know LLMs aren’t magic (lord knows I actually kind of understand the principles of how they operate), but they have a much greater computational and relational bandwidth than we’ve ever had access to before. So I’m curious if that can break down what otherwise appears to be complete obfuscation. Otherwise, we’re saying that encryption is somehow magic in a way that LLMs cannot possibly be.
> So I’m curious if that can break down what otherwise appears to be complete obfuscation.
This seems to be a complete misunderstanding of what encryption is.
Obfuscation generally means muddling things around in ways that can be reconstructed without any extra secret. It's entirely possible that a custom LLM (custom because you'd need custom tokenization) could deobfuscate things.
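As a minimal sketch of that point (the rot13/base64 stack here is just a stand-in for whatever obfuscation scheme you have in mind):

```python
import base64
import codecs

msg = "meet at the usual place"

# "Obfuscation": the transform itself is the only secret. Anyone who
# recognizes the pattern can invert it without any extra information.
obfuscated = base64.b64encode(codecs.encode(msg, "rot13").encode()).decode()

# Reversing it needs no key at all, just recognizing the layers.
recovered = codecs.decode(base64.b64decode(obfuscated).decode(), "rot13")
assert recovered == msg
```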
Encryption, OTOH, means using a piece of information that isn't present in the message itself: the key. Weak encryption gets broken because that missing information can be guessed or recovered easily.
But this isn't the case for correctly implemented strong encryption. The missing information cannot be recovered by any non-quantum process in a reasonable timeframe.
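To make the weak/strong contrast concrete, here's a rough sketch: a toy single-byte XOR cipher, whose missing information can be found by trying all 256 possibilities, next to a real authenticated scheme (Fernet from the third-party `cryptography` package, AES plus HMAC), where the missing information is a freshly generated random key with no practical way to search for it.

```python
import os
from cryptography.fernet import Fernet

msg = b"meet at the usual place"

# Weak "encryption": the missing information is a single byte, so it can
# simply be guessed -- 256 candidates is nothing.
key_byte = os.urandom(1)[0]
ciphertext = bytes(b ^ key_byte for b in msg)
guesses = [bytes(b ^ k for b in ciphertext) for k in range(256)]
assert msg in guesses  # the key falls out of a trivial exhaustive search

# Strong encryption: the missing information is a random key that the
# ciphertext does not contain, and there is no feasible search over it.
key = Fernet.generate_key()
token = Fernet(key).encrypt(msg)
print(Fernet(key).decrypt(token) == msg)  # only works because we hold the key
```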
There are exceptions - newly developed mathematical techniques can sometimes make recovering that information quicker.
But in general math is the weakest point of LLMs, so it seems an unlikely place for them to excel.