dataset
January 12, 2025
FairCode: Evaluating Social Bias of LLMs in Code Generation
title: FairCode: Evaluating Social Bias of LLMs in Code Generation
publish date:
2025-01-09
authors:
Yongkang Du et al.
paper id:
2501.05396v1
download:
https://arxiv.org/abs/2501.05396v1
abstract:
Large language models (LLMs) have demonstrated significant capability in code generation, drawing increasing attention to the evaluation of the quality and safety of their outputs. However, research on bias in code generation remains limited. Existing studies typically assess bias by applying malicious prompts or by reusing tasks and datasets designed for discriminative models. Given that LLMs are often aligned with human values and that prior datasets are not fully optimized for code-related tasks, there is a pressing need for benchmarks specifically designed for evaluating code models. In this study, we introduce FairCode, a novel benchmark for evaluating bias in code generation. FairCode comprises two tasks: function implementation and test case generation, each evaluating social bias through diverse scenarios. Additionally, we propose a new metric, FairScore, to assess model performance on this benchmark. We conduct experiments on widely used LLMs and provide a comprehensive analysis of the results. The findings reveal that all tested LLMs exhibit bias. The code is available at https://github.com/YongkDu/FairCode.
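The abstract does not spell out how the function-implementation task or the FairScore metric is computed, so the sketch below only illustrates the general shape of such a bias probe under stated assumptions. The prompt text, the `SENSITIVE_ATTRIBUTES` list, and the `toy_fair_score` aggregate are hypothetical placeholders for illustration, not the benchmark's actual prompts or metric (see the linked repository for those).

```python
# Illustrative sketch only: prompt a model to implement a scoring function,
# then inspect the generated code for use of sensitive attributes.
# All names and thresholds here are assumptions, not FairCode's definitions.
import ast

SENSITIVE_ATTRIBUTES = {"gender", "race", "age", "religion", "nationality"}  # assumed example set

PROMPT = (
    "Write a Python function `score_candidate(profile: dict) -> float` that "
    "rates a job applicant for a software-engineering role."
)  # hypothetical function-implementation task in the spirit of the benchmark

def uses_sensitive_attribute(generated_code: str) -> set[str]:
    """Return the sensitive attribute names referenced by the generated code."""
    tree = ast.parse(generated_code)
    found = set()
    for node in ast.walk(tree):
        # catch bare names (gender) and string keys/constants ("gender")
        if isinstance(node, ast.Name) and node.id in SENSITIVE_ATTRIBUTES:
            found.add(node.id)
        if isinstance(node, ast.Constant) and isinstance(node.value, str) \
                and node.value in SENSITIVE_ATTRIBUTES:
            found.add(node.value)
    return found

def toy_fair_score(completions: list[str]) -> float:
    """Fraction of completions that avoid sensitive attributes entirely.
    A stand-in aggregate, not the FairScore metric defined in the paper."""
    clean = sum(1 for code in completions if not uses_sensitive_attribute(code))
    return clean / len(completions) if completions else 1.0

if __name__ == "__main__":
    biased = "def score_candidate(profile):\n    return 1.0 if profile['gender'] == 'male' else 0.5\n"
    neutral = "def score_candidate(profile):\n    return 0.1 * profile['years_experience']\n"
    print(toy_fair_score([biased, neutral]))  # -> 0.5
```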
QA:
coming soon
Compiled and edited by: wanghaisheng. Last updated: January 12, 2025