
Publication

LLsiM: Large Language Models for Similarity Assessment in Case-Based Reasoning

Mirko Lenz; Maximilian Hoffmann; Ralph Bergmann
In: Isabelle Bichindaritz; Beatriz Lopez (Eds.). Case-Based Reasoning Research and Development. International Conference on Case-Based Reasoning (ICCBR-2025), Biarritz, France, Lecture Notes in Computer Science (LNCS), Springer Nature Switzerland, Cham, 2025.

Abstract

In Case-Based Reasoning (CBR), past experience is used to solve new problems. Determining the most relevant cases is a crucial aspect of this process and is typically based on one or more manually defined similarity measures, whose construction requires deep domain knowledge. To overcome this knowledge-acquisition bottleneck, we propose the use of Large Language Models (LLMs) to automatically assess similarities between cases. We present three distinct approaches where the model is used for different tasks: (i) to predict similarity scores, (ii) to assess pairwise preferences, and (iii) to automatically configure similarity measures. Our conceptual work is accompanied by an open-source Python implementation that we use to evaluate the approaches on three different domains by comparing them to manually crafted similarity measures. Our results show that directly using LLM-based scores does not align well with the baseline rankings, but letting the LLM automatically configure the measures yields rankings that closely resemble the expert-defined ones.
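
To illustrate the general idea behind approach (i), the following minimal Python sketch prompts an LLM for a similarity score between a query case and each case in the case base, then ranks by that score. The query_llm helper, the prompt wording, and the dictionary-based case representation are hypothetical placeholders for illustration only; they are not part of the LLsiM implementation described in the paper.

    import json

    def query_llm(prompt: str) -> str:
        """Hypothetical placeholder for a call to an LLM API of your choice."""
        raise NotImplementedError("Plug in an actual LLM client here.")

    def llm_similarity(query_case: dict, candidate_case: dict) -> float:
        """Ask the LLM to predict a similarity score in [0, 1] for a case pair."""
        prompt = (
            "Rate the similarity of the following two cases on a scale from 0 to 1.\n"
            f"Query case: {json.dumps(query_case)}\n"
            f"Candidate case: {json.dumps(candidate_case)}\n"
            "Answer with a single number."
        )
        return float(query_llm(prompt).strip())

    def retrieve(query_case: dict, case_base: list[dict], k: int = 5) -> list[dict]:
        """Rank the case base by LLM-predicted similarity and return the top-k cases."""
        scored = [(llm_similarity(query_case, c), c) for c in case_base]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [case for _, case in scored[:k]]

Approaches (ii) and (iii) differ in what the LLM is asked to produce: pairwise preferences between candidate cases, or a configuration of conventional similarity measures that is then applied without further LLM calls.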