Why only Micro-F1? Class Weighting of Metrics for Relation Classification

David Harbecke; Yuxuan Chen; Leonhard Hennig; Christoph Alt
In: Proceedings of the 1st Workshop on Efficient Benchmarking in NLP, Annual Meeting of the Association for Computational Linguistics (ACL 2022), May 22-27, 2022, Dublin, Ireland. Association for Computational Linguistics.


Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weighting schemes in which existing schemes are extremes, and propose two new intermediate schemes. We show that reporting results under different weighting schemes better highlights a model's strengths and weaknesses.
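To illustrate why the choice of weighting scheme matters on imbalanced data, here is a minimal sketch (not from the paper) computing micro- and macro-averaged F1 by hand. On a toy dataset where a majority class dominates and a classifier ignores the rare class, micro-F1 stays high while macro-F1 collapses:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Compute micro- and macro-averaged F1 for multi-class labels."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # true class t was missed
    # Micro: pool counts over all classes, then compute F1 once.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if (2 * TP + FP + FN) else 0.0
    # Macro: compute F1 per class, then average with equal class weight.
    per_class = []
    for c in labels:
        denom = 2 * tp[c] + fp[c] + fn[c]
        per_class.append(2 * tp[c] / denom if denom else 0.0)
    macro = sum(per_class) / len(per_class)
    return micro, macro

# Imbalanced toy data: class "a" dominates; the model never predicts "b".
y_true = ["a"] * 9 + ["b"]
y_pred = ["a"] * 10
micro, macro = f1_scores(y_true, y_pred)
# micro ≈ 0.90 (dominated by the frequent class), macro ≈ 0.47
```

Micro weights every instance equally, so the rare class barely affects the score; macro weights every class equally, exposing the failure on the rare class. The intermediate schemes proposed in the paper sit between these two extremes.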