Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs
title: Desert Camels and Oil Sheikhs: Arab-Centric Red Teaming of Frontier LLMs
publish date:
2024-10-31
authors:
Muhammed Saeed et.al.
paper id:
2410.24049v1
abstract:
Large language models (LLMs) are widely used but raise ethical concerns due to embedded social biases. This study examines LLM biases against Arabs versus Westerners across eight domains, including women's rights, terrorism, and anti-Semitism, and assesses model resistance to perpetuating these biases. To this end, we create two datasets: one to evaluate LLM bias toward Arabs versus Westerners and another to test model safety against prompts that exaggerate negative traits ("jailbreaks"). We evaluate six LLMs: GPT-4, GPT-4o, Llama 3.1 (8B & 405B), Mistral 7B, and Claude 3.5 Sonnet. We find negative biases toward Arabs in 79% of cases, with Llama 3.1-405B being the most biased. Our jailbreak tests reveal GPT-4o as the most vulnerable, followed by Llama 3.1-8B and Mistral 7B; all LLMs except Claude exhibit attack success rates above 87% in three categories. We find Claude 3.5 Sonnet the safest, though it still displays biases in seven of eight categories. Despite being an optimized version of GPT-4, GPT-4o proves more prone to both biases and jailbreaks, suggesting optimization flaws. Our findings underscore the pressing need for more robust bias mitigation strategies and strengthened security measures in LLMs.
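The abstract reports per-category attack success rates (ASR), e.g. above 87% in three categories for every model except Claude. Below is a minimal sketch of how such a metric could be tallied; the flat record format and the external pass/fail judgment per prompt are assumptions for illustration, not the paper's actual dataset schema or judging pipeline.

```python
from collections import defaultdict

# Hypothetical record format: (category, model, attack_succeeded), where the
# boolean comes from some external safety judge applied to the model's reply.
records = [
    ("terrorism", "gpt-4o", True),
    ("terrorism", "gpt-4o", True),
    ("womens_rights", "gpt-4o", False),
    ("terrorism", "claude-3.5-sonnet", False),
]

def attack_success_rate(records, model):
    """Per-category ASR: fraction of jailbreak prompts the model complied with."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for category, m, succeeded in records:
        if m != model:
            continue
        totals[category] += 1
        successes[category] += int(succeeded)
    return {cat: successes[cat] / totals[cat] for cat in totals}

print(attack_success_rate(records, "gpt-4o"))
# {'terrorism': 1.0, 'womens_rights': 0.0}
```

With per-category rates in hand, a finding like "ASR above 87% in three categories" is just a threshold count over the returned dictionary.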
QA:
coming soon
Compiled by: wanghaisheng. Updated: November 4, 2024.