This repository provides the implementation for "Preserving Generalization of Language Models in Few-shot Continual Relation Extraction" (EMNLP 2024).
To run the code, please install the following dependencies:
transformers==4.20.0
wordninja
wandb
scikit-learn
tqdm
numpy==1.23.0
peft
accelerate
sentencepiece
protobuf
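For convenience, all of the above can be installed in one step, for example:

pip install transformers==4.20.0 wordninja wandb scikit-learn tqdm numpy==1.23.0 peft accelerate sentencepiece protobuf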
We perform experiments using two publicly available relation extraction datasets:
- FewRel: A large-scale few-shot relation extraction dataset.
- TACRED: A widely used dataset for relation classification.
To train BERT models on TACRED in the 5-shot setting, follow these steps:
- Navigate to the CPL scripts directory:
cd CPL/bash
- Run the 5-shot training script:
bash tacred_5shot.sh
Alternatively, you can run the training scripts for the individual components directly:
- For SCKD with Mutual Information Maximization (MMI):
  cd SCKD
  python main-mmi.py --task tacred --shot 5
- For ConPL:
  cd ConPL
  python main.py --task tacred --shot 5
To train BERT models on FewRel in the 5-shot setting:
- Run the 5-shot script from the CPL directory:
  cd CPL/bash
  bash fewrel_5shot.sh
Alternatively, run the training commands directly:
- SCKD with MMI:
  cd SCKD
  python main-mmi.py --task fewrel --shot 5
- ConPL:
  cd ConPL
  python main.py --task fewrel --shot 5
To train using LLAMA2, ensure that you have set up your Hugging Face token (hf_token) in the required scripts:
- For ConPL, add the token in main-llm.py and dataprocess.py.
- For SCKD, add the token in sampler.py, main-llm.py, and main-llm-mmi.py.
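A minimal sketch of what adding the token can look like inside those scripts; reading it from an environment variable is an assumption made here for illustration, and the scripts may simply expect the token string pasted in place:

    import os

    # Illustrative sketch only: the scripts above expect a Hugging Face token
    # in a variable named hf_token; pulling it from the environment avoids
    # committing the secret to the repository.
    hf_token = os.environ.get("HF_TOKEN")  # e.g. export HF_TOKEN=hf_xxxxxxxx before running
    assert hf_token, "Set HF_TOKEN or paste your Hugging Face token here."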
To run the LLAMA2 experiments on TACRED:
- To run SCKD with MMI:
  cd SCKD
  python main-llm-mmi.py --task tacred --shot 5
- To run ConPL:
  cd ConPL
  python main-llm.py --task tacred --shot 5
To run the LLAMA2 experiments on FewRel:
- To run SCKD with MMI:
  cd SCKD
  python main-llm-mmi.py --task fewrel --shot 5
- To run ConPL:
  cd ConPL
  python main-llm.py --task fewrel --shot 5