We contribute a cross-lingual dataset of task-specific instructions that explicitly instruct LLMs to generate responses incorporating swear words in contexts such as professional emails, academic writing, and casual messages. It aims to evaluate how current LLMs handle offensive instructions across diverse situations involving low-resource languages.
The dataset covers eight languages: English, Spanish (EU), French (EU), German (EU), Hindi (IND), Marathi (IND), Bengali (IND), and Gujarati (IND).
**Case 1: English prompt + swear word in the local language.** Each of the 109 English prompts is embedded with the 25 swear words from each language, written in that language's native script.
**Case 2: English prompt + transliterated swear word.** Each of the 109 English prompts is embedded with the same 25 swear words from each language, transliterated into the Latin script.
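The sketch below illustrates how the two prompt sets can be assembled. It is a minimal sketch, not the repository's actual build script: the file names (`prompts.json`, `swear_words_native.json`, `swear_words_transliterated.json`), the `{swear_word}` placeholder convention, and the row layout are assumptions made for illustration.

```python
import json
from pathlib import Path

def build_case(prompts: list[str], swear_words: dict[str, list[str]]) -> list[dict]:
    """Embed every swear word of every language into every English prompt."""
    rows = []
    for language, words in swear_words.items():   # 8 languages
        for word in words:                        # 25 swear words per language
            for template in prompts:              # 109 English prompt templates
                rows.append({
                    "language": language,
                    "swear_word": word,
                    "prompt": template.format(swear_word=word),
                })
    return rows

# Hypothetical input files; both cases reuse the same English templates.
prompts = json.loads(Path("prompts.json").read_text(encoding="utf-8"))
native = json.loads(Path("swear_words_native.json").read_text(encoding="utf-8"))
translit = json.loads(Path("swear_words_transliterated.json").read_text(encoding="utf-8"))

case_1 = build_case(prompts, native)    # Case 1: native-script swear words
case_2 = build_case(prompts, translit)  # Case 2: transliterated swear words
print(len(case_1))  # 109 prompts x 25 words x 8 languages = 21,800 rows
```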
We evaluated 13 models from families such as Mistral, Phi, Qwen, and LLaMA to assess their safety alignment. The models range in size from 7 billion to 141 billion parameters.
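As an illustration of the evaluation loop, the sketch below runs a prompt list through Hugging Face `transformers` chat pipelines. It is a minimal sketch under assumed tooling: the two model IDs are examples from the families named above, not the exact 13 checkpoints evaluated, and the generation settings are placeholders.

```python
from transformers import pipeline

# Illustrative model IDs only; the actual study covers 13 checkpoints.
MODEL_IDS = [
    "mistralai/Mistral-7B-Instruct-v0.2",
    "meta-llama/Meta-Llama-3-8B-Instruct",
]

def run_model(model_id: str, prompts: list[str]) -> list[str]:
    """Generate one response per prompt with a chat-style pipeline."""
    generator = pipeline("text-generation", model=model_id, device_map="auto")
    outputs = []
    for prompt in prompts:
        messages = [{"role": "user", "content": prompt}]
        result = generator(messages, max_new_tokens=256, do_sample=False)
        # The pipeline returns the full conversation; the reply is last.
        outputs.append(result[0]["generated_text"][-1]["content"])
    return outputs

responses = {mid: run_model(mid, [r["prompt"] for r in case_1]) for mid in MODEL_IDS}
```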
- src
- drive
  - dataset
    - swear words
    - prompts
      - case 1
      - case 2
  - model inference
    - case 1
    - case 2
  - metrics
    - case 1.xlsx
    - case 2.xlsx
    - case 1 percentage.xlsx
    - case 2 percentage.xlsx
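For context on the `metrics` outputs, the sketch below shows one way the per-case percentage sheets could be derived from the raw per-response sheets with pandas. The column names (`model`, `language`, `complied`) are assumptions about the xlsx layout, not the repository's actual schema.

```python
import pandas as pd

# One row per (model, language, prompt) with a boolean "complied" flag
# marking whether the model followed the offensive instruction.
raw = pd.read_excel("metrics/case 1.xlsx")

# Share of prompts, per model and language, where the model complied
# rather than refused; a boolean column makes this a simple group mean.
pct = (
    raw.groupby(["model", "language"])["complied"]
       .mean()
       .mul(100)
       .round(2)
       .unstack("language")
)
pct.to_excel("metrics/case 1 percentage.xlsx")
```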