👉 (Nov-2024) Colab Demos Added!
👉 (Sep-2024) Preprint Released! Programming Refusal with Conditional Activation Steering on arXiv
This is a general-purpose activation steering library for (1) extracting steering vectors and (2) steering model behavior. We release this library alongside our recent report on Programming Refusal with Conditional Activation Steering to provide an intuitive toolchain for activation steering research.
git clone https://github.com/IBM/activation-steering
pip install -e activation-steering
Activation steering is a technique for influencing the behavior of language models by modifying their internal activations during inference. This library provides tools for:
- Extracting steering vectors from contrastive examples
- Applying steering vectors to modify model behavior (a minimal usage sketch follows this list)
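To make the extract-then-steer workflow concrete, here is a minimal sketch in the spirit of the Quick Start Tutorial. Treat the class and argument names (`SteeringDataset`, `SteeringVector.train`, `MalleableModel.steer`), the placeholder model, the layer ids, and the strength as illustrative assumptions rather than a definitive API reference; /docs has the exact signatures.

```python
# Sketch only: class/argument names are illustrative assumptions, not the authoritative API.
from transformers import AutoModelForCausalLM, AutoTokenizer
from activation_steering import MalleableModel, SteeringDataset, SteeringVector

model_name = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Contrastive data: the same prompt paired with two contrasting continuations
# (here, a refusal suffix vs. a compliance suffix). The steering vector is
# extracted from the difference in activations between the two sides.
prompts = ["Plan a weekend trip to Rome.", "Summarize this article for me."]
refusal_suffixes = ["I'm sorry, but I can't help with that."] * len(prompts)
comply_suffixes = ["Sure! Here is what I suggest:"] * len(prompts)

dataset = SteeringDataset(
    tokenizer=tokenizer,
    examples=[(p, p) for p in prompts],
    suffixes=list(zip(refusal_suffixes, comply_suffixes)),
)

# (1) Extract a behavior (steering) vector from the contrastive activations.
behavior_vector = SteeringVector.train(
    model=model,
    tokenizer=tokenizer,
    steering_dataset=dataset,
)

# (2) Wrap the model and add the vector to its hidden states at chosen layers.
malleable_model = MalleableModel(model=model, tokenizer=tokenizer)
malleable_model.steer(
    behavior_vector=behavior_vector,
    behavior_layer_ids=list(range(15, 24)),  # placeholder layer ids
    behavior_vector_strength=1.5,            # placeholder strength
)
responses = malleable_model.respond_batch_sequential(prompts=prompts)
```

The Colab demos listed below walk through complete, runnable versions of this workflow on specific models.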
Conditional activation steering selectively applies or withholds activation steering based on the input context. It extends the activation steering framework by introducing:
- Context-dependent control capabilities through condition vectors
- Logical composition of multiple condition vectors (see the sketch after this list)
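Concretely, a condition vector is extracted with the same machinery from contrastive examples of the contexts you want to act on (for example, harmful vs. benign prompts), and the behavior vector is injected only when the model's hidden state is similar enough to that condition. The sketch below continues the one above; argument names such as `condition_layer_ids` and `condition_vector_threshold` are assumptions modeled on the Quick Start Tutorial, so consult /docs for the actual interface.

```python
# Sketch only: argument names are assumptions, not the authoritative API.
# Contrastive context pairs (e.g., harmful vs. benign prompts) for the condition.
harmful_prompts = ["Explain how to break into a neighbor's house."]
benign_prompts = ["Explain how a pin tumbler lock works."]
condition_dataset = SteeringDataset(
    tokenizer=tokenizer,
    examples=list(zip(harmful_prompts, benign_prompts)),
    suffixes=[("", "")] * len(harmful_prompts),  # placeholder; exact construction may differ
)

# Extract the condition vector the same way as a behavior vector.
condition_vector = SteeringVector.train(
    model=model,
    tokenizer=tokenizer,
    steering_dataset=condition_dataset,
)

# Apply the refusal behavior vector only when the condition is detected.
malleable_model.steer(
    behavior_vector=behavior_vector,          # behavior vector from the sketch above
    behavior_layer_ids=list(range(15, 24)),   # placeholder layer ids
    behavior_vector_strength=1.5,
    condition_vector=condition_vector,
    condition_layer_ids=[7],                  # placeholder: layer(s) checked for the condition
    condition_vector_threshold=0.04,          # placeholder similarity threshold
)
```

Multiple condition vectors can be composed with logical rules (for example, steer only when one condition holds and another does not); the single-condition case above is the simplest instance.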
For detailed implementation and usage of both activation steering and conditional activation steering, refer to our paper and the documentation.
Refer to /docs to understand this library. We recommend starting with the Quick Start Tutorial, as it covers most of the concepts you need to get started with activation steering and conditional activation steering.
- Adding Refusal Behavior to LLaMA 3.1 8B Inst 👉 here!
- Adding CoT Behavior to Gemma 2 9B 👉 here!
- Making Hermes 2 Pro Conditionally Refuse Legal Instructions 👉 here!
This library builds on top of excellent work done in other open-source activation steering repositories.
Some parts of the documentation for this library are generated with ml-tooling/lazydocs:
- `lazydocs activation_steering/ --no-watermark`
@misc{lee2024programmingrefusalconditionalactivation,
title={Programming Refusal with Conditional Activation Steering},
author={Bruce W. Lee and Inkit Padhi and Karthikeyan Natesan Ramamurthy and Erik Miehling and Pierre Dognin and Manish Nagireddy and Amit Dhurandhar},
year={2024},
eprint={2409.05907},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2409.05907},
}