Is the decompiled code guaranteed to be equivalent to the compiled code? While this might be cool, it doesn’t seem that useful if you can’t reason about the correctness of the output. I skimmed the README and didn’t manage to figure it out.
I can’t speak for this specific approach/system, but no. LLMs never really guarantee anything, and for translation tasks like this it’s hard to say how much help they provide. The main issue is that you now have to understand what the LLM generated before you can start fixing and/or debugging it.
From my understanding, it tries to tackle the hardest part: getting from assembly back to something human-readable, not necessarily compilable out of the gate.
A large part of the tedious, labor-intensive process of decompilation is just figuring out which chunks of ASM do what and working them up into named functions and variables.
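To make that concrete, here’s a toy sketch of the renaming half of the job; every identifier in it is hypothetical, not output from this repo:

```python
import re

# Toy illustration only: raw decompiler-style pseudo-C with auto-generated
# names, plus the renames a human (or a model) would apply after working
# out what the code actually does.
raw = """
undefined4 FUN_00401560(int param_1, int param_2) {
    int local_8 = 0;
    for (int local_c = 0; local_c < param_2; local_c = local_c + 1)
        local_8 = local_8 + *(int *)(param_1 + local_c * 4);
    return local_8;
}
"""

# The "hard work": recognizing this walks an int array and sums it.
renames = {
    "FUN_00401560": "sum_array",
    "param_1": "values",
    "param_2": "count",
    "local_8": "total",
    "local_c": "i",
}

readable = raw
for old, new in renames.items():
    readable = re.sub(rf"\b{old}\b", new, readable)
print(readable)
```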
Why LLM? What was wrong with training a model specifically for decompiling?
“LLM” is being used in a colloquial way here; it’s just how the algorithm is arranged: tokenize the input, then generate output by repeatedly stacking the most likely next token, roughly like the toy loop sketched below.
The term still differentiates it from other neural networks and more basic forms of machine “learning” (god, what an anthropomorphized term from the start…).
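Something like this toy bigram version of that loop; a real LLM just replaces the frequency table with a neural network that scores every token in its vocabulary against the whole context:

```python
from collections import Counter, defaultdict

# Crudest possible "language model": count which token follows which.
corpus = "the cat sat on the mat the cat ate the rat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Generation loop: append the most likely next token, repeat.
token, output = "the", ["the"]
for _ in range(6):
    if not following[token]:
        break
    token = following[token].most_common(1)[0][0]  # greedy pick
    output.append(token)

print(" ".join(output))  # "the cat sat on the cat sat"
```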
They did train a model specifically for decompiling.
I don’t get it, how is it better than Ghidra? Or does it try to name functions, vars, and types too? That’s hard work.
Or does it try to name functions, vars, and types too?
It tries to do exactly that; it actually uses Ghidra for the initial decompilation.
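Roughly, the pipeline looks something like this sketch; the paths and the ExportPseudoC.py post-script are placeholders of mine, not the repo’s actual setup (only the analyzeHeadless flags are real Ghidra options):

```python
import subprocess

# Stage 1: Ghidra headless analysis, with a post-script that dumps the
# decompiler's pseudo-C to a file. "ExportPseudoC.py" is a placeholder.
BINARY = "target.bin"
subprocess.run([
    "ghidra/support/analyzeHeadless", "/tmp/proj", "demo",
    "-import", BINARY,
    "-postScript", "ExportPseudoC.py",
    "-deleteProject",
], check=True)

# Stage 2: feed the pseudo-C to the model for renaming/cleanup.
with open("/tmp/pseudo.c") as f:
    pseudo_c = f.read()

prompt = (
    "Rewrite this decompiled pseudo-C as readable C with meaningful "
    "function and variable names:\n\n" + pseudo_c
)
# ...send `prompt` to whichever model you use (omitted here).
```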
Mmm, exciting. Will it guess unknown global array variables, where god knows where they start and end? From the examples in the repo it seems to just work on specific functions, not globally across the whole code with its global variable space.
I truly do not understand “AI” (LLMs) as they stand.
I asked Copilot (Microsoft) to suggest a way to log from PowerShell (MS) to Application Insights (MS).
It straight-up made up a PowerShell module and method call. Completely made up, non-existent.
And somehow people are using it for useful decompilation???
Edit:
I’m not even sure I understand what this repo’s point is. It shows various LLMs performing decompilation… but does it show any level of accuracy? I must be missing something.
Does this repo show useful real-world decompilation, or am I missing something?
People who use chatbots usually don’t know how they work.
Machine-learning models are “predictive”: they train on previous data and then predict next week’s stock prices or weather, etc.
LLM chatbots are trained on Google and WhatsApp messages. When you type “hi”, they “predict” what the reply on WhatsApp would be.
When you ask how to do something, they predict what the top result on Google would look like. Whether it’s correct or not doesn’t matter.
Yet people using chatbots assume it googles each of your questions like some kind of scraper and reasons about them with its “language code.”
people using chatbots assume it googles each of your questions like some kind of scraper
Most do this in their “thinking” step now, and the results are far better for it, even on current events.
It straight-up made up a PowerShell module and method call. Completely made up, non-existent.
It was just imagining the best way to accomplish the task: instead of complaining, you should have just asked it to give you the source code of that new module.
Your lack of faith in AI is hindering your coding ability.
(do I need to add the /s? no, right?)
With how many faithful AI users there are on Lemmy, you in fact do, lmao.
Hello Faith Based Code.
We’re going to be going with Faith Based Security next.
Remember to perform 1 Hail Mary and 3 Canticles of the Omnissiah to ensure the AI is cooperative!
Not sure if… Blood for the Blood God… or all hail the God-Emperor… would be better.
If I understand the results tables in the repo correctly, their largest model achieves a ~68% re-executability rate on code compiled with the O0 optimization flag. I’m unsure whether that just tests if the decompiled code can be recompiled and executed, or whether the programs also need to produce the same results on some test cases. If the model is used to refine Ghidra outputs (I’m guessing this is some well-known decompilation framework), it can reach a ~80% re-executability rate, which is better than Ghidra’s baseline of ~34%.
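Under the stricter reading, a check would look something like this sketch of mine (the function name and gcc invocation are my own illustration, not the repo’s actual harness):

```python
import os
import subprocess
import tempfile

def re_executable(decompiled_c, test_inputs, expected_outputs):
    """Return True if the decompiled C recompiles AND matches reference output."""
    with tempfile.TemporaryDirectory() as d:
        src, exe = os.path.join(d, "out.c"), os.path.join(d, "out")
        with open(src, "w") as f:
            f.write(decompiled_c)
        # Weaker reading: does it recompile at all?
        if subprocess.run(["gcc", src, "-o", exe]).returncode != 0:
            return False
        # Stricter reading: does it behave like the original on test cases?
        for stdin_data, want in zip(test_inputs, expected_outputs):
            got = subprocess.run([exe], input=stdin_data,
                                 capture_output=True, text=True)
            if got.returncode != 0 or got.stdout != want:
                return False
    return True
```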
It straight-up made up a PowerShell module and method call. Completely made up, non-existent.
Is it wrong, or is it just ahead of its time?
Maybe it gave you perfectly functioning code that will work flawlessly, on Windows 18.
It straight-up made up a PowerShell module and method call. Completely made up, non-existent.
Counterpoint 1:
I gave Copilot a couple of XML files that described a map and a route, and told it to make a program in C# that could create artificial maps and routes using those as a guideline.
After about 20 minutes of back and forth, mainly me describing what I wanted in the map (e.g. walls that were ±3 m from the routes, route points 1 m apart, etc.), it spat out a program that could successfully build XML files that worked on the real-world device that needed them.
Counterpoint 2: I gave Copilot a Python program that I’d written about 8 years ago that connected to a Mikrotik router using its vendor-specific API and compiled some data to push out to connecting websocket clients. I told it to make a C# equivalent that could be installed and run as a Windows service, and it created something that worked on the very first pass, using third-party .NET libraries for Mikrotik API access.
Counterpoint 3: I had a SQL query in a PowerShell script that took some reporting data from a database and mangled it heavily to produce shift-by-shift reports. Again, I asked it to take the query and business logic from the script and create a command-line C# application that could populate a new table with the shift-report data. It created something that worked immediately, and it fixed a corner case in the query that had been causing me some grumbles as well.
These were things that I’ve done in the past month. Each one would have taken a week for me to do myself; with some general discussion with this particular LLM, each took about an hour instead, with it giving me a complete zipped-up project folder with multiple source files that I could just open in Visual Studio and press “build” to get what I wanted.
In all these cases, however, I was well versed in the area it was working in, and I knew how to phrase things precisely enough that it could generate something useful. It did try to tack on a lot of not-particularly-useful things, particularly options for the command-line reporting program.
And I HATE the oh-so-agreeable tone it takes with everything. I’m not “absolutely right” when I correct it or steer it along a different path. I don’t really want all this extra stuff that it’s so happy to tack on, “it won’t take a minute”.
I want the LLM to tell me that’s an awful idea, or that it can’t do it. A constant yes-man agreeing with everything I say doesn’t help me get shit done.
I’m with ya. I find it a super useful tool all day, every day. But that’s because I’m an SME on the stuff I’m working on.
As for your last points, play with the system prompt. “You are a useful machine, not a human. Don’t get emotional like humans. No greeting or salutations. If something can’t be done, say so. Your job isn’t to please me it is to accomplish tasks without prejudice.” Something like that. It really does help.
LLMs are good at language-processing tasks. Ask them to write code or solve complex maths and they will make things up. Plus, it takes large amounts of energy to run them, not to mention the data needed to train them.
Code written by them always has security holes. Use them to find facts, correct grammar, or maybe generate a small paragraph or essay. But don’t use them to generate code, medical-device software, etc.
As a recent example, ChatGPT cannot answer whether there is a seahorse emoji. It will get infinitely stuck trying to be funny while finding an answer, changing the answer mid-token.
ChatGPT cannot answer whether there is a seahorse emoji
Llama says there is, but displays a seashell or a fish, lol. Then a horse, and then it admits there is none.
Now this is a great use of LLMs. Love it. So many old apps and games exist only in compiled form.
If it actually works.
I’d guess training a model on nothing but C source and the resulting ASM would be much better.
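For what it’s worth, building that kind of paired dataset is easy to sketch (the corpus/ layout here is made up for illustration):

```python
import pathlib
import subprocess

# Compile each C file at several optimization levels and keep (ASM, C) pairs.
pairs = []
for src in pathlib.Path("corpus").glob("*.c"):
    for opt in ("-O0", "-O1", "-O2", "-O3"):
        asm = src.with_name(src.stem + f".{opt[1:]}.s")
        # gcc -S emits assembly instead of an object file.
        if subprocess.run(["gcc", "-S", opt, str(src), "-o", str(asm)]).returncode == 0:
            pairs.append((asm.read_text(), src.read_text()))

print(f"{len(pairs)} (ASM -> C) training pairs")
```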
It doesn’t look like it works very well. If I’m reading their results section correctly, it works less than 20% of the time on real-world problems.
lol