
Attacker-v0.1

Model Description

Attacker-v0.1 is a specialized model designed to generate adversarial prompts capable of bypassing the safety mechanisms of various Large Language Models (LLMs). Its primary objective is to assist researchers in identifying and understanding vulnerabilities within LLMs, thereby contributing to the development of more robust and secure AI systems.

Intended Uses & Limitations

Intended Uses:

  • Research and Development: To study and analyze the vulnerabilities of LLMs by generating prompts that can potentially bypass their safety constraints.
  • Security Testing: To evaluate the effectiveness of safety mechanisms implemented in LLMs and assist in enhancing their robustness (see the harness sketch at the end of this section).

Limitations:

  • Ethical Considerations: The use of Attacker-v0.1 should be confined to ethical research purposes. It is not intended for malicious activities or to cause harm.
  • Controlled Access: Due to potential misuse, access to this model is restricted. Interested parties must contact the author for usage permissions beyond academic research.
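
As a rough illustration of the security-testing use case, the sketch below loops over a list of red-teaming goals, asks an attacker model to rewrite each goal into an adversarial prompt, sends that prompt to a target LLM, and flags responses that do not look like refusals. The callables `generate_adversarial_prompt` and `query_target_model` are hypothetical placeholders for wrappers around Attacker-v0.1 and the model under test (this card does not document the model's input/output format; see the GitHub repository), and the keyword-based refusal check is a crude stand-in for a proper safety classifier or human review.

```python
# Hypothetical red-teaming harness sketch. The two callables are placeholders:
# `generate_adversarial_prompt` should wrap Attacker-v0.1 and
# `query_target_model` should wrap the LLM under test. Neither is shipped with
# this model card; see the upstream GitHub repository for the actual usage.
from typing import Callable, Dict, List, Tuple

REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't", "as an ai")


def evaluate_safety(
    goals: List[str],
    generate_adversarial_prompt: Callable[[str], str],
    query_target_model: Callable[[str], str],
) -> Tuple[float, List[Dict[str, object]]]:
    """Return (fraction of goals the target did NOT refuse, per-goal records)."""
    records = []
    bypassed = 0
    for goal in goals:
        adv_prompt = generate_adversarial_prompt(goal)   # Attacker-v0.1 rewrite
        response = query_target_model(adv_prompt)        # target LLM answer
        refused = any(m in response.lower() for m in REFUSAL_MARKERS)
        bypassed += int(not refused)
        records.append(
            {"goal": goal, "prompt": adv_prompt, "response": response, "refused": refused}
        )
    return bypassed / max(len(goals), 1), records
```

Keyword matching will over-count refusals phrased in other ways and miss partial compliance, so classifier-based or manual review of the recorded responses is the more reliable way to score a run.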

Usage

Please refer to https://github.com/leileqiTHU/Attacker for usage instructions. 🥺🥺🥺 Please 🌟STAR🌟 the repo if you think the model or the repo is helpful; it is very important to me! Thanks!! 🥺🥺🥺
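
The repository documents the exact input format and recommended decoding settings. As a minimal loading sketch, assuming the checkpoint behaves like a standard Qwen2.5-style chat model served through transformers (the user message below is a placeholder, not the prompt template used by the authors), generation might look like this:

```python
# Minimal loading sketch, assuming a standard Qwen2.5-style causal LM checkpoint.
# The user message is a placeholder; see the GitHub repository for the real
# input format. Access to the repo is gated, so authenticate with your Hugging
# Face token first if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thu-coai/Attacker-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "<red-teaming goal to rewrite as an adversarial prompt>"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The sampling parameters above are illustrative only; use the settings from the repository when reproducing results.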

Risks

Potential Risks:

  • Misuse Potential: There is a risk that the model could be used for unethical purposes, such as generating harmful content or exploiting AI systems maliciously.

Mitigation Strategies:

  • Access Control: Implement strict access controls to ensure the model is used solely for legitimate research purposes.

πŸ“ License

  • License: Research Use Only
  • Usage Restrictions: This model is intended for research purposes only. For commercial use or other inquiries, please contact the model author.

Model Card Authors

Leqi Lei

Citation

If you use this model in your research or work, please cite it as follows:

@misc{leqi2025Attacker,
  author = {Leqi Lei},
  title = {The Safety Mirage: Reinforcement-Learned Jailbreaks Bypass Even the Most Aligned LLMs},
  year = {2025},
  howpublished = {\url{https://github.com/leileqiTHU/Attacker}},
  note = {Accessed: 2025-04-13}
}

Model size: 14.8B params, tensor type F32 (safetensors).

Model tree for thu-coai/Attacker-v0.1

  • Base model: Qwen/Qwen2.5-14B (Attacker-v0.1 is a finetune of this base model)