What is SmaliLLM used for?

SmaliLLM is a large language model designed to decompile Smali code (the human-readable assembly format for Android's Dalvik bytecode) into Java code. Reconstructing Smali into a high-level language such as Java has significant practical engineering value: it lowers the technical barrier for reverse engineering and provides the semantic foundation for downstream tasks such as static analysis and vulnerability detection.
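To make the task concrete, below is a small hand-written illustration (our own, not taken from the model card or its training data) of the kind of input/output pair involved: the Smali disassembly of a trivial static method, followed by the Java a decompiler should recover.

# Smali input (hypothetical example): Dalvik disassembly of a static method
.method public static add(II)I
    .registers 3
    add-int v0, p0, p1    # v0 = p0 + p1
    return v0             # return the int in v0
.end method

// Java output the model should reconstruct
public static int add(int a, int b) {
    return a + b;
}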

SmaliLLM Highlights

SmaliLLM is a series of models fine-tuned on nearly 1,000 "Smali2Java" data pairs, based on Qwen3, Qwen2.5-Coder, and Gemma3, with the following features:

  • High Compilation Success Rate: After our fine-tuning, the models' compilation success rate increased by an average of 20%. The improvement is particularly significant for smaller models: the success rate for Gemma3-1B-it increased from 25% to 65%, and for Qwen2.5-Coder-0.5B it rose from 15% to 45%.
  • High Quality of the Generated Java Code: After fine-tuning, the models' average CodeBLEU score improved by 0.08, with especially notable gains for smaller models. On the base models Gemma3-4B-it, Qwen2.5-Coder-0.5B-Instruct, Qwen3-0.6B, and Qwen3-4B, CodeBLEU increased by 0.17, 0.14, 0.10, and 0.14 respectively (a sketch of how such a score can be computed follows this list).
  • Capabilities Close to Large Commercial Models: Our fine-tuned Qwen3-14B model achieves compilation success rates and CodeBLEU scores that are close to, and in some cases surpass, those of proprietary models such as DeepSeek-Chat, step-1-32k, step-1-256k, and step-2-mini. This holds despite the model being undertrained: our batch size was only 2048, which forced us to discard nearly half of the data.
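As a rough sketch of how a CodeBLEU score like those above can be computed, the snippet below uses the open-source codebleu package; that this is the exact scorer behind the reported numbers is an assumption on our part.

# Hedged sketch: scoring generated Java against a reference with the
# `codebleu` PyPI package (pip install codebleu). Which scorer produced
# the numbers above is not stated in this card; this is one option.
from codebleu import calc_codebleu

reference = "public static int add(int a, int b) { return a + b; }"
prediction = "public static int add(int x, int y) { return x + y; }"

result = calc_codebleu(
    [reference],   # one reference per prediction
    [prediction],  # model outputs
    lang="java",
    weights=(0.25, 0.25, 0.25, 0.25),  # n-gram, weighted n-gram, syntax, dataflow
)
print(result["codebleu"])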

Quickstart

The following code snippet illustrates how to use the model to generate content from given inputs.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MoxStone/SmaliLLM-Qwen2.5-Coder-0.5B-Instruct-Finetuned"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Smali Code You Want to Decompile"
messages = [
    {"role": "system", "content": "Decompile following smali code to java code."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
# strip the prompt tokens, keeping only the newly generated ones
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")

print("Java code:", content)