Utilizing Out-Domain Datasets to Enhance Multi-Task Citation Analysis
Dominique Mercier; Syed Tahseen Raza Rizvi; Vikas Rajashekar; Sheraz Ahmed; Andreas Dengel
In: Agents and Artificial Intelligence. International Conference on Agents and Artificial Intelligence (ICAART), Pages 113-134, LNCS, Vol. 13251, ISBN 978-3-031-10161-8, Springer, Cham, 7/2022.
Citations are generally analyzed using only quantitative measures while excluding qualitative aspects such as sentiment and intent. However, qualitative aspects provide deeper insights into the impact of a scientific research artifact and make it possible to focus on relevant literature free from the bias associated with quantitative aspects. Therefore, it is possible to rank and categorize papers based on their sentiment and intent. For this purpose, larger citation sentiment datasets are required. However, from a time and cost perspective, curating a large citation sentiment dataset is a challenging task. In particular, citation sentiment analysis suffers from both data scarcity and the tremendous cost of dataset annotation. To overcome the bottleneck of data scarcity in the citation analysis domain, we explore the impact of out-domain data during training on model performance. Our results emphasize the use of different scheduling methods depending on the use case. We empirically found that a model trained using sequential data scheduling is more suitable for domain-specific use cases. Conversely, shuffled data feeding achieves better performance on cross-domain tasks. Based on our findings, we propose an end-to-end trainable multi-task model that covers both sentiment and intent analysis and utilizes out-domain datasets to overcome data scarcity.
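The two data-feeding strategies contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, toy examples, and labels are hypothetical, and a real training pipeline would operate on batches rather than raw example lists.

```python
import random

def sequential_schedule(out_domain, in_domain):
    """Sequential scheduling (hypothetical sketch): feed all out-domain
    examples first, followed by the in-domain (target) examples, so the
    model sees target-domain data last. Per the abstract, this suits
    domain-specific use cases."""
    return list(out_domain) + list(in_domain)

def shuffled_schedule(out_domain, in_domain, seed=0):
    """Shuffled feeding (hypothetical sketch): randomly interleave
    examples from both domains. Per the abstract, this performs better
    on cross-domain tasks."""
    mixed = list(out_domain) + list(in_domain)
    random.Random(seed).shuffle(mixed)
    return mixed

# Hypothetical toy data: (citation context, sentiment label) pairs
out_domain = [("great product, works well", "positive"),
              ("terrible support experience", "negative")]
in_domain = [("Smith et al. substantially improve on prior work", "positive")]

print(sequential_schedule(out_domain, in_domain))
print(shuffled_schedule(out_domain, in_domain))
```

Both schedules present the same examples overall; only the order in which the model encounters the out-domain and in-domain data differs.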