Incredible. I had this exact idea rolling around in my head. Could something be trained to decompile binaries into readable source? We have vast amounts of source code available, and we can build the binaries ourselves.
Ghidra may be pretty good for some binaries (or for some audiences?), but my experience trying to get it to reverse both golang and rust-lang binaries has been abysmal. It fails to correctly identify string literals (which are my #1 go-to for finding "points of interest"), and the decompilation output is ... well, maybe it's helpful to someone, but not to me. I regret letting my Binary Ninja license lapse, because now I can't see what it would have to say about the same binaries, and I've never had an IDA license to know what that's like.
As a point of comparison, I fed Ghidra 10.2.2 a copy of gojq 0.12.11 that I had lying around, and this is pretty representative of its output.
I somehow thought that IDA Free was missing the decompiler, but I just downloaded 8.2.221216 macOS x86_64, and while it did a much better job of identifying the symbols in the rust binary, it regrettably then consumed 100% of the CPU and effectively locked up. So ... better, I guess? :-/
Myeah, that sounds like a bug in this variant. File a bug report with them; they'll probably release a better free version, and then you can properly test your theory. Good luck!
They are "pretty good" in the sense that they describe the sequence of assembly instructions in terms of some C-like code that might have produced it. This is a tool that essentially tells you, "this function might be doing MD5".
I haven't tried it yet, but I can see it being useful if it's somewhat accurate (that's a big if), and quite different from what Ghidra gives you in pseudo code.
Wow, I wouldn't have expected Tenable to shell out to curl, especially when the curl invocation only adds two headers and omits the "--fail" flag that would cause non-200 responses to return a non-zero exit code :-(
Fair point. It was a quick and dirty workaround, in the absence of `requests` in Ghidra's Jython distribution, but it turns out that `httplib` is available, and the latest commit uses that to do the HTTP request instead.
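For the curious, a minimal sketch of what an httplib-based request can look like from Jython/Python 2 follows; the endpoint is OpenAI's completions API, but the model name, token limit, and function name here are illustrative rather than lifted from the plugin.

import httplib
import json

def query_openai(prompt, api_key):
    # Build the completion request by hand, since `requests` isn't available in Ghidra's Jython.
    body = json.dumps({
        "model": "text-davinci-003",   # model name is an assumption, not necessarily the plugin's setting
        "prompt": prompt,
        "max_tokens": 512,
    })
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    conn = httplib.HTTPSConnection("api.openai.com")
    conn.request("POST", "/v1/completions", body, headers)
    resp = conn.getresponse()
    data = json.loads(resp.read())
    conn.close()
    return data["choices"][0]["text"]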
You can actually try adding "and indicate what security vulnerabilities are present in the code, if any" or something to that effect to the prompt, by tweaking the `EXTRA` global variable defined near the head of the script. My experience with this so far is that it tends to spew out infosec truisms that aren't closely connected with the code, and that most interesting vulnerabilities require a bit more contextual awareness to notice than this tool has available to it, but ymmv, and it's definitely worth taking a bit of time to see if you can massage the prompt to finagle useful bughunting output from the tool.
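Concretely, that tweak is just a one-line change near the top of the script; the wording below is only an example, not the script's default value.

# Extra instruction added to the prompt that accompanies each decompiled function.
EXTRA = "Also indicate what security vulnerabilities, if any, are present in this code."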
I'm partial to Gepetto for IDA, which includes an especially hilarious trick in which it instructs ChatGPT to phrase its responses in JSON, and then uses this JSON directly to name variables in the decompilation. If the JSON is incorrect, it politely asks ChatGPT to please fix its JSON output, which usually works.
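The retry trick is roughly the pattern below; this is a paraphrase of the idea rather than Gepetto's actual code, and ask_model stands in for whatever function sends a prompt to the chat API.

import json

def get_variable_names(decompiled_code, ask_model, max_retries=3):
    prompt = ("Suggest better names for the variables in this code. "
              "Reply with a JSON object mapping old names to new names.\n\n"
              + decompiled_code)
    reply = ask_model(prompt)
    for _ in range(max_retries):
        try:
            return json.loads(reply)   # use the JSON directly to drive the renaming
        except ValueError:
            # Politely ask the model to repair its own output.
            reply = ask_model("The following was supposed to be valid JSON but is not. "
                              "Please fix it and reply with only the corrected JSON:\n" + reply)
    return {}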
The real question is how a human should merge these results with their own reversing, honestly. I can't really trust GPT-3 to be accurate like I would actually trust the decompiler (and, as any reverser knows, you don't trust the decompiler). I think I would treat the output of this as I might a suggestion from a friend who I let glance over the code: "hmm, that might be a SHA-1?" and then I go confirm the results for myself.
I think this plugin is not meant to decompile, but rather to explain decompiled code.
They mention that the AI's decompilation is about as good as Ghidra's (though of course less trustworthy).
The benefit is in explaining the decompiled code; they give an example where the prompt to the AI is something like "here is some code decompiled with ghidra, explain it in detail".
From the article:
"the paraphrase of disassembled or decompiled code into high-level commentary, can be assisted by automated tooling as well.
And this is just what the G-3PO Ghidra script does.
The output of such a tool, of course, would have to be carefully checked. Taking its soundness for granted would be a mistake, just as it would be a mistake to put too much faith in the decompiler. We should trust such a tool, backed as it is by an opaque LLM, far less than we trust decompilers, in fact. Fortunately reverse engineering is the sort of domain where we don’t need to trust much at all. It’s an essentially skeptical craft."
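For a rough sense of what that looks like from the Ghidra scripting side, the prompt can be assembled from the decompiler's output along these lines; the decompiler calls are Ghidra's documented DecompInterface API, but the prompt wording, timeout, and structure here are illustrative, not necessarily what G-3PO actually uses.

from ghidra.app.decompiler import DecompInterface
from ghidra.util.task import ConsoleTaskMonitor

decompiler = DecompInterface()
decompiler.openProgram(currentProgram)

# Decompile the function under the cursor and wrap its C listing in a prompt.
func = getFunctionContaining(currentAddress)
result = decompiler.decompileFunction(func, 60, ConsoleTaskMonitor())
c_code = result.getDecompiledFunction().getC()

prompt = ("Here is some code decompiled with Ghidra. "
          "Explain in detail what it does:\n\n" + c_code)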
At least most decompiler logic comes from formal methods, which reduces possible edge cases compared to a statistical approach. There's room for AI in reversing, but it should be a specifically trained model with carefully extracted features from the binary, not just the disassembly output: graphs, debug and demangling info, types, IL analysis results, etc.
All these GPT-based plugins are just toys. There's more serious research like this[1][2][3]
I've been waiting to see something like this. There's certainly room to fine-tune an LLM for this task; in that vein, I wonder whether Ghidra's pcode would produce better results? It's a bit better suited to this task in that the model wouldn't need to be tuned for each possible instruction set. Training on code compiled at different optimization levels might also produce interesting results.
You could probably also take the explanations from the LLM, convert those into embeddings, and then do semantic search over all functions in a binary. For example, searching for "get process handle and inject dll" and getting a list of prospects. It's less useful in an obfuscated binary, but for things like modding games or extending end-of-life software it could be very useful.
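A minimal sketch of that search idea, with embed() standing in for whatever embedding model you choose (it is not part of any of the plugins discussed here):

import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_functions(query, summaries, embed, top_k=10):
    # summaries: {function_name: LLM-generated explanation}
    # In practice you'd embed each summary once and cache the vectors.
    query_vec = embed(query)
    scored = [(cosine(query_vec, embed(text)), name) for name, text in summaries.items()]
    return sorted(scored, reverse=True)[:top_k]

# e.g. search_functions("get process handle and inject dll", summaries, embed)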
I know there's some active work on this (using LLMs, not traditional methods), not on the binary side but on the source analysis side. See https://grit.io/, which tries to detect bugs (and maybe vulnerabilities?) and automatically submits PRs to patch them for you. I think morgante is their contact on HN.
It feels like it'd be difficult to acquire a large corpus of vulnerabilities to train on.
There are already systems for that in GitHub Enterprise. They work decently if you can massage your build to fit into their system, but they're still more or less experimental for some languages.
My PhD research is about classifying functions in obfuscated binaries, and when I saw this I immediately wondered if it will make my work obsolete. I suspect obfuscation will give LLMs a hard time for a little while, at least until they start training on obfuscated code.
But there is still the issue of whether companies doing this kind of RE want to send their code to OpenAI's servers. If you're reverse engineering in order to determine whether you should sue another company for copyright infringement, you are probably cautious about sharing code in the first place.
For this use case there will always be a need for some kind of alternative that you can self-host without sharing your data with another party.
Very neat! I also worked on something that uses GPT-3 for reverse engineering last week. The basic idea is that right now GPT-3 is limited in how much context it can see at once. So instead, to summarize a function in context, I use the call graph to find all of its dependencies, and summarize them one by one, providing the summaries of the callees when summarizing the caller:
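In pseudocode terms the idea is roughly the following; ask_llm, the call-graph dict, and the prompt wording are placeholders for illustration, not the actual implementation.

def summarize(func, call_graph, code_of, ask_llm, cache=None, seen=None):
    """Bottom-up summarization: summarize callees first, then feed those
    summaries into the prompt for the caller."""
    if cache is None:
        cache = {}
    if seen is None:
        seen = set()
    if func in cache:
        return cache[func]
    seen.add(func)  # guard against cycles in the call graph

    callee_notes = ""
    for callee in call_graph.get(func, []):
        if callee in seen and callee not in cache:
            continue  # back-edge: skip a callee we are still in the middle of summarizing
        callee_notes += "%s: %s\n" % (callee, summarize(callee, call_graph, code_of, ask_llm, cache, seen))

    prompt = ("Summarize what this function does.\n"
              "Summaries of the functions it calls:\n" + callee_notes +
              "\nCode:\n" + code_of[func])
    cache[func] = ask_llm(prompt)
    return cache[func]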
Hey guys, I'm the one who wrote the post and the Ghidra plugin. Really delighted to see it get so much traction here! I just merged a couple of PRs which should improve the tool somewhat -- one from eShuttleworth, which uses GPT-3's feedback to automatically rename the function and global variables, and another from me, which does the same for what Ghidra internally refers to as HighVariables (variables visible in the decompiled code listing, as opposed to just the assembly listing). Turns out these two things only look like similar operations from the perspective of the GUI.
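To give a sense of what the renaming half involves, here is a heavily simplified sketch against Ghidra's scripting API; it only covers the function name and its parameters, assumes the model's suggestions arrive as a dict, and leaves out the separate HighVariable code path mentioned above.

from ghidra.program.model.symbol import SourceType

def apply_suggestions(func, suggestions):
    # suggestions: dict from the model, e.g. {"function": "parse_header", "params": {"param_1": "buf"}}
    if "function" in suggestions:
        func.setName(suggestions["function"], SourceType.USER_DEFINED)
    for param in func.getParameters():
        new_name = suggestions.get("params", {}).get(param.getName())
        if new_name:
            param.setName(new_name, SourceType.USER_DEFINED)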
Here's the public github repo: <https://github.com/tenable/ghidra_tools> Please feel free to fork it and send me PRs with any features or fixes you think might be useful.
Explain the following C++ code:
int sum(int n) {
    int result = 0;
    for (int i = 0; i < n; i++) {
        if (true || false) {
            return 42;
        }
    }
    for (int i = 0; i < n * 2; i++) {
        i++;
    }
    return n;
}
Then ask it what is the result of that function when n is 5, and then when n is -1.
Note that the function looks trivial, but it includes certain tricks to confuse the model; e.g., "result" is never returned, and instead either 42 or n is returned.
int sum(int n) {
    int result = 0;
    for (int i = 0; i < n; i++) {
        if (true || false) {
            return 42;
        }
    }
    for (int i = 0; i < n * 2; i++) {
        i++;
    }
    return n;
}
This C++ function is called sum and it takes an integer argument n. It has a local variable result which is initialized to 0 but is not used in the function.
The function has two for loops. The first for loop iterates over the range 0 to n-1. On each iteration, it checks if true || false is true. Since true || false is always true, it immediately returns the value 42. This means that the first for loop will only execute once.
The second for loop iterates over the range 0 to 2n-1. On each iteration, it increments i by 2 (since i++ increments i by 1 and i is incremented again on the next line). This means that the second for loop will only execute n times.
After the second for loop, the function returns the value of n.
What is the result when n=5?
If n is 5, the function will immediately return 42 when the first for loop is executed, so the function will not reach the second for loop or the return statement at the end. Therefore, the result of calling sum(5) would be 42.
What is the result when n=-1?
If n is negative, the first for loop will not be executed at all because the loop condition i < n will be false from the start. The second for loop will also not be executed because the loop variable i will never be less than 2n, which is a negative number in this case.
Therefore, the result of calling sum(-1) would be -1, which is the value of n that was returned at the end of the function.
When I tried it with the same prompt, it explained the second for loop as:
The second for loop iterates over the values of i starting at 0 and going up to, but not including, n * 2. Within this loop is a statement that increments i by 1. This statement has no effect on the loop because i is already being incremented by the loop itself.
That's awesome! I also wanted to combine Ghidra and GPT-3, but I was unsure about its ability to explain code. I made a plugin where you highlight part of the listing view and it explains what is going on with the help of ChatGPT. Link for the curious: https://github.com/SourceDiver42/Ghidra-ChatGPT
Just because you don't understand the knob doesn't mean it's voodoo or proprietary. LMs of every kind can incorporate temperature into their next-token inference, and it's a common term when describing models like these.
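For the record, temperature is just a scaling factor applied to the logits before sampling the next token; a minimal illustration:

import numpy as np

def sample_next_token(logits, temperature=0.7, rng=None):
    # Lower temperature sharpens the distribution (more deterministic output);
    # higher temperature flattens it (more varied output).
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))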
That’s a script for the reverse-engineering tool Ghidra that uses GPT-3 to decompile machine code and to write plain-English explanations of what a piece of code does.
The article is quite detailed and describes both its capabilities and its limitations. That G-3PO script is open source, MIT license: https://github.com/tenable/ghidra_tools/tree/main/g3po
There was also another HN story about what at first sight looks like an alternative implementation of the same idea: “GptHidra – Ghidra plugin that asks OpenAI Chat GPT to explain functions”
https://news.ycombinator.com/item?id=34165291
This one is more recent but lacks the good write-up mentioned above. The script is smaller, and it seems to have fewer features.
I suggest checking both of them.