Interactive Summarization of Large Document Collections
Benjamin Hättasch; Christian M. Meyer; Carsten Binnig
In: Proceedings of the Workshop on Human-In-the-Loop Data Analytics, HILDA@SIGMOD 2019. Workshop on Human-In-the-Loop Data Analytics (HILDA-2019), located at SIGMOD 2019, July 5, Amsterdam, Netherlands, Pages 9:1-9:4, ACM, 2019.
We present a new system for producing custom summaries of large text corpora at interactive speed. Producing textual summaries is an important step toward understanding large collections of topic-related documents and has real-world applications in journalism, medicine, and many other domains. Key to our system is that the summarization model is refined by user feedback and invoked multiple times to improve summary quality iteratively. To that end, the human is brought into the loop: in every iteration, we gather feedback about which aspects of the intermediate summary satisfy the user's individual information needs. Our system consists of a sampling component and a learned model that produces a textual summary. As our evaluation shows, the system achieves a quality level similar to existing summarization models that operate on the full corpus and hence cannot provide interactive speeds.
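The loop described above (sample a subset of the corpus, summarize it with a learned model, collect user feedback, refine the model, repeat) can be sketched in a minimal, purely illustrative form. The keyword-weight scoring, the function names, and the toy corpus below are all assumptions for demonstration; they are not the authors' actual model or implementation.

```python
# Illustrative sketch of the interactive summarization loop.
# Assumptions (not from the paper): the "learned model" is a bag of
# keyword weights, and "feedback" is a set of liked/disliked terms.
import random


def score(sentence, weights):
    """Score a sentence as the sum of the weights of its words."""
    return sum(weights.get(w, 0.0) for w in sentence.lower().split())


def summarize(sample, weights, k=2):
    """Return the k highest-scoring sentences from the sampled subset."""
    return sorted(sample, key=lambda s: score(s, weights), reverse=True)[:k]


def apply_feedback(weights, liked, disliked, lr=1.0):
    """Shift keyword weights toward accepted and away from rejected terms."""
    for w in liked:
        weights[w] = weights.get(w, 0.0) + lr
    for w in disliked:
        weights[w] = weights.get(w, 0.0) - lr
    return weights


corpus = [
    "doctors report shorter waiting times",
    "funding for medicine research was increased",
    "the stadium opened last weekend",
    "the new policy affects hospitals nationwide",
]

random.seed(0)
weights = {}
sample = random.sample(corpus, 3)       # sampling component: work on a subset
for _ in range(2):                      # two interactive feedback iterations
    summary = summarize(sample, weights)
    # Simulated user feedback: the user cares about medical content.
    weights = apply_feedback(
        weights, liked=["medicine", "doctors"], disliked=["stadium"]
    )
```

Because only a sample is scored in each round, each iteration stays cheap, which is what makes interactive response times plausible; the feedback then steers subsequent summaries toward the user's information need.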