Why LLM? What was wrong with training a model specifically for decompiling?
LLM is being used in a colloquial way here. It’s just how the algorithm is arranged: tokenize the input, then generate output by stacking the most likely subsequent tokens, etc.
The term still differentiates it from plain neural networks and other more basic forms of machine “learning” (god, what an anthropomorphized term from the start…).
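To make “stacking the most likely subsequent tokens” concrete, here’s a rough sketch of a greedy decoding loop in Python. The gpt2 checkpoint is just a stand-in for illustration, not whatever model the paper actually trained:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Stand-in model/tokenizer; any causal LM would do for this sketch.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Tokenize the input prompt into integer ids.
    ids = tok("int main(", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits        # scores for every vocab token at each position
            next_id = logits[0, -1].argmax()  # greedily pick the single most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and repeat

    print(tok.decode(ids[0]))

That loop is the whole trick: each new token is chosen conditioned on everything generated so far, which is what separates this setup from a classifier that emits one label per input.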
They did train a model specifically for decompiling.