# Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities

Paper: 2410.18469
ADV-LLM is an iteratively self-tuned adversarial language model that generates jailbreak suffixes capable of bypassing safety alignment in open-source and proprietary models.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cesun/advllm_llama2")
model = AutoModelForCausalLM.from_pretrained("cesun/advllm_llama2")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
To generate an adversarial suffix, prompt the model with the target query directly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("cesun/advllm_llama2")
tokenizer = AutoTokenizer.from_pretrained("cesun/advllm_llama2")

inputs = tokenizer("How to make a bomb", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=90)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
ADV-LLM achieves near-perfect jailbreak attack success rates (ASR) under group beam search with 50 candidates (GBS-50) across a range of victim models, as measured by three safety checks: Template matching (TP), LlamaGuard (LG), and GPT-4 evaluation.
| Victim Model | GBS-50 ASR (TP / LG / GPT-4) |
|---|---|
| Vicuna-7B-v1.5 | 100.00% / 100.00% / 99.81% |
| Guanaco-7B | 100.00% / 100.00% / 99.81% |
| Mistral-7B-Instruct-v0.2 | 100.00% / 100.00% / 100.00% |
| LLaMA-2-7B-chat | 100.00% / 100.00% / 93.85% |
| LLaMA-3-8B-Instruct | 100.00% / 98.84% / 98.27% |
Legend: ASR = attack success rate; TP = Template matching; LG = LlamaGuard; GPT-4 = GPT-4-based evaluation.
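In the `transformers` library, group beam search corresponds to diverse beam search, configured through `generate` arguments. A minimal sketch of GBS-50 generation settings; the group split and diversity penalty below are assumptions for illustration, not values taken from the paper:

```python
# Hedged sketch: generation kwargs approximating a GBS-50 evaluation run
# via transformers' diverse (group) beam search.
gbs50_kwargs = dict(
    num_beams=50,             # 50 total beams ("GBS-50")
    num_beam_groups=10,       # assumption: 10 groups of 5 beams each
    diversity_penalty=1.0,    # assumption: penalize token overlap across groups
    num_return_sequences=50,  # return all candidate suffixes
    max_new_tokens=90,
)
# Usage, with model and inputs prepared as in the snippets above:
# outputs = model.generate(**inputs, **gbs50_kwargs)
```

Note that `num_beams` must be divisible by `num_beam_groups` for diverse beam search to run.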
If you use ADV-LLM in your research or evaluation, please cite:
```bibtex
@inproceedings{sun2025advllm,
  title={Iterative Self-Tuning LLMs for Enhanced Jailbreaking Capabilities},
  author={Sun, Chung-En and Liu, Xiaodong and Yang, Weiwei and Weng, Tsui-Wei and Cheng, Hao and San, Aidan and Galley, Michel and Gao, Jianfeng},
  booktitle={NAACL},
  year={2025}
}
```
Alternatively, use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="cesun/advllm_llama2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```