Eraser: Jailbreaking defense in large language models via unlearning harmful knowledge

W Lu, Z Zeng, J Wang, Z Lu, Z Chen, H Zhuang, C Chen
arXiv preprint arXiv:2404.05880, 2024 - arxiv.org
Jailbreaking attacks can enable Large Language Models (LLMs) to bypass their safeguards and generate harmful content. Existing jailbreaking defense methods fail to address the fundamental issue that harmful knowledge resides within the model, leaving LLMs exposed to jailbreak risks. In this paper, we propose a novel defense method called Eraser, which pursues three goals: unlearning harmful knowledge, retaining general knowledge, and maintaining safety alignment. The intuition is that if an LLM forgets the specific knowledge required to answer a harmful question, it no longer has the ability to do so. Training Eraser does not actually require the model's own harmful knowledge, and it can benefit from unlearning general answers related to harmful queries, which means it does not need assistance from a red team. Experimental results show that Eraser significantly reduces the jailbreaking success rate under various attacks without compromising the general capabilities of the model.
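
The three goals can be read as a composite training objective. The following is a minimal sketch, not the authors' exact formulation: it assumes a Hugging Face-style causal LM whose forward pass returns a token-level cross-entropy in .loss when labels are supplied, and the function name eraser_style_loss, the weighting factors, and the batch layout are illustrative assumptions. The harmful-completion term is negated (gradient ascent) to unlearn, while standard losses on general data and on refusal responses aim to preserve utility and safety alignment.

    # Hedged sketch of a three-term unlearning objective in the spirit of Eraser.
    # Assumes a Hugging Face-style causal LM that returns .loss (token-level
    # cross-entropy) when `labels` are included in each batch dict.
    # Names, default weights, and batch structure are illustrative assumptions,
    # not the paper's exact recipe.

    def eraser_style_loss(model, forget_batch, retain_batch, refusal_batch,
                          lambda_retain=1.0, lambda_safety=1.0):
        # 1) Unlearn harmful knowledge: gradient ascent on the likelihood of
        #    harmful completions, implemented by negating the usual LM loss.
        loss_forget = -model(**forget_batch).loss

        # 2) Retain general knowledge: standard LM loss on ordinary helpful data.
        loss_retain = model(**retain_batch).loss

        # 3) Maintain safety alignment: LM loss on (harmful prompt, refusal) pairs,
        #    so the model still declines rather than emitting degenerate text.
        loss_safety = model(**refusal_batch).loss

        return loss_forget + lambda_retain * loss_retain + lambda_safety * loss_safety

In an actual training loop, this combined loss would be backpropagated per step over mixed mini-batches drawn from the forget, retain, and refusal sets; the abstract notes that the forget data can consist of general answers related to harmful queries rather than the model's own harmful outputs.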