Rethinking Relation Extraction Beyond Shortcuts to Generalization with a Debiased Benchmark
publish date: 2025-01-02
authors: Liang He et al.
paper id: 2501.01349v1
abstract:
Benchmarks are crucial for evaluating machine learning algorithms, facilitating comparison and identifying superior solutions. However, biases within datasets can lead models to learn shortcut patterns, resulting in inaccurate assessments and hindering real-world applicability. This paper addresses entity bias in relation extraction, where models rely on entity mentions rather than context. We propose DREB, a debiased relation extraction benchmark that breaks the pseudo-correlation between entity mentions and relation types through entity replacement. DREB uses a Bias Evaluator and a PPL Evaluator to ensure low bias and high naturalness, providing a reliable and accurate assessment of model generalization under entity bias. To establish a new baseline on DREB, we introduce MixDebias, a debiasing method that combines data-level and model-training-level techniques. MixDebias improves model performance on DREB while maintaining performance on the original dataset. Extensive experiments demonstrate its effectiveness and robustness relative to existing methods, highlighting its potential for improving the generalization of relation extraction models. We will release DREB and MixDebias publicly.
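The replacement step at the heart of DREB can be sketched in a few lines: swap each entity mention for a type-consistent substitute, so the entity-relation correlation is broken while the context, and hence the gold label, stays unchanged. The candidate pools, names, and helper below are hypothetical illustrations, not the paper's released code:

```python
import random

# Illustration only: type-consistent candidate pools for replacement entities.
CANDIDATES = {
    "PERSON": ["Alice Monroe", "Rajesh Gupta", "Mei Lin"],
    "ORG": ["Helix Labs", "Northgate Corp", "Aurora Institute"],
}

def replace_entities(sentence, head, tail, head_type, tail_type, rng=random):
    """Swap both entity mentions for type-consistent substitutes,
    keeping the context (and hence the relation label) unchanged."""
    new_head = rng.choice([c for c in CANDIDATES[head_type] if c != head])
    new_tail = rng.choice([c for c in CANDIDATES[tail_type] if c != tail])
    return sentence.replace(head, new_head).replace(tail, new_tail)

print(replace_entities(
    "Tim Cook is the chief executive of Apple.",
    head="Tim Cook", tail="Apple", head_type="PERSON", tail_type="ORG",
))
# e.g. "Mei Lin is the chief executive of Helix Labs."
# A model that memorized (Tim Cook, Apple) -> CEO must now read the context.
```

The abstract also mentions a PPL Evaluator that keeps replacements natural. A minimal sketch of such a check, assuming an off-the-shelf causal LM (GPT-2 here as a stand-in for whatever the paper actually uses) and an invented acceptance threshold:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(sentence: str) -> float:
    # Standard LM perplexity: exponentiated mean token-level cross-entropy.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    return torch.exp(model(ids, labels=ids).loss).item()

original = "Tim Cook is the chief executive of Apple."
replaced = "Mei Lin is the chief executive of Helix Labs."
# Keep the replacement only if it stays comparably natural; the 2x
# threshold is an assumption, not the paper's setting.
if perplexity(replaced) < 2.0 * perplexity(original):
    print("replacement accepted")
```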
QA: coming soon
Compiled by wanghaisheng. Last updated: January 6, 2025.