This workshop will take place on Monday, 20 August 2018, from 9:30 to 17:00.
| Time | Session |
| --- | --- |
| 09:30 - 09:45 | Welcome and Opening Remarks |
| 09:45 - 10:30 | Invited Talk by Steven Schockaert: Knowledge Representation with Conceptual Spaces |
| 10:30 - 11:00 | Coffee Break |
| 11:00 - 11:30 | Pankaj Gupta, Bernt Andrassy and Hinrich Schütze: Replicated Siamese LSTM in Ticketing System for Similarity Learning and Retrieval in Asymmetric Texts |
| 11:30 - 12:00 | Sreekavitha Parupalli, Vijjini Anvesh Rao and Radhika Mamidi: BCSAT: A Benchmark Corpus for Sentiment Analysis in Telugu Using Word-level Annotations |
| 12:00 - 12:30 | Su-Youn Yoon, Anastassia Loukina, Chong Min Lee, Matthew Mulholland, Xinhao Wang and Ikkyu Choi: Word-Embedding based Content Features for Automated Oral Proficiency Scoring |
| 12:30 - 14:00 | Lunch |
| 14:00 - 14:45 | Invited Talk by Christos Christodoulopoulos: Knowledge Representation and Extraction at Scale |
| 14:45 - 15:15 | Luis Nieto Piña and Richard Johansson: Automatically Linking Lexical Resources with Word Sense Embedding Models |
| 15:15 - 15:30 | Ignatius Ezeani, Ikechukwu Onyenwe and Mark Hepple: Transferred Embeddings for Igbo Similarity, Analogy, and Diacritic Restoration Tasks |
| 15:30 - 16:20 | Poster Session and Coffee Break |
| 16:20 - 17:00 | Wrap-up, Q&A and Open Discussion |
Steven Schockaert is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, serves on the editorial board of Artificial Intelligence, and is an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.
Title: Knowledge Representation with Conceptual Spaces
Abstract: Entity embeddings are vector space representations of a given domain of interest. They are typically learned from text corpora (possibly in combination with any available structured knowledge), based on the intuition that similar entities should be represented by similar vectors. The usefulness of such entity embeddings largely stems from the fact that they implicitly encode a rich amount of knowledge about the considered domain, beyond mere similarity. In an embedding of movies, for instance, we may expect all movies from a given genre to be located in some low-dimensional manifold. This is particularly useful in supervised learning settings, where it may, for example, allow a neural movie recommender to base its predictions on the genre of a movie without that genre having to be specified explicitly for each movie, or even without specifying that genre is a property with predictive value for the task at hand. In unsupervised settings, however, such implicitly encoded knowledge cannot be leveraged.
Conceptual spaces, as proposed by Gärdenfors, are similar to entity embeddings but provide more structure. In a conceptual space, among other things, dimensions are interpretable and grouped into facets, and properties and concepts are explicitly modelled as (vague) regions. Thanks to this additional structure, conceptual spaces can be used as a knowledge representation framework that can also be effectively exploited in unsupervised settings. Given a conceptual space of movies, for instance, we are able to answer queries that ask about similarity w.r.t. a particular facet (e.g. movies which are cinematographically similar to Jurassic Park), that refer to a given feature (e.g. movies which are scarier than Jurassic Park but otherwise similar), or that refer to particular properties or concepts (e.g. thrillers from the 1990s with a dinosaur theme). Compared to standard entity embeddings, however, conceptual spaces are more challenging to learn in a purely data-driven fashion. In this talk, I will give an overview of some approaches for learning such representations that have recently been developed within the context of the FLEXILOG project.
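To make the contrast with plain embeddings concrete, here is a minimal Python sketch of how such facet-restricted queries could work. The facet layout, movie vectors, and distance threshold below are all invented for illustration; they are not the representations learned in FLEXILOG.

```python
import numpy as np

# Toy conceptual space: each movie vector is partitioned into named facets.
# Facet names, dimensions, and all vectors are invented for illustration.
FACETS = {"cinematography": slice(0, 4), "scariness": slice(4, 5), "theme": slice(5, 9)}

MOVIES = {
    "Jurassic Park":  np.array([0.8, 0.1, 0.5, 0.3, 0.6, 0.9, 0.2, 0.1, 0.4]),
    "Jaws":           np.array([0.7, 0.2, 0.4, 0.3, 0.8, 0.5, 0.6, 0.1, 0.3]),
    "The Lost World": np.array([0.8, 0.1, 0.6, 0.2, 0.5, 0.9, 0.3, 0.1, 0.4]),
}

def facet_similarity(a, b, facet):
    """Cosine similarity restricted to the dimensions of a single facet."""
    u, v = MOVIES[a][FACETS[facet]], MOVIES[b][FACETS[facet]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def scarier_but_similar(query, candidates, max_dist=1.0):
    """Feature-based query: scarier than `query`, yet close to it overall."""
    q = MOVIES[query]
    return [m for m in candidates
            if MOVIES[m][FACETS["scariness"]].item() > q[FACETS["scariness"]].item()
            and np.linalg.norm(MOVIES[m] - q) < max_dist]

print(facet_similarity("Jurassic Park", "The Lost World", "cinematography"))  # ~0.99
print(scarier_but_similar("Jurassic Park", ["Jaws", "The Lost World"]))       # ['Jaws']
```

In a plain entity embedding, only whole-vector similarity would be available; the interpretable facets and dimensions are what make the two query types above expressible.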
Christos Christodoulopoulos is a Research Scientist at Amazon Research Cambridge (UK), working on knowledge extraction and verification. He received his PhD from the University of Edinburgh, where he studied the underlying structure of syntactic categories across languages. Before joining Amazon, he was a postdoctoral researcher at the University of Illinois, working on semantic role labeling and psycholinguistic models of language acquisition. He has experience in science communication, including giving public talks and producing a science podcast.
Title: Knowledge Representation and Extraction at Scale
Abstract: These days, most general knowledge question-answering systems rely on large-scale knowledge bases comprising billions of facts about millions of entities. Having a structured source of semantic knowledge means that we can answer questions involving single static facts (e.g. "Who was the 8th president of the US?") or dynamically generated ones (e.g. "How old is Donald Trump?"). More importantly, we can answer questions involving multiple inference steps (e.g. "Is the Queen older than the president of the US?").
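As a rough illustration of the difference between these three question types, consider the following toy sketch. The triple store, dates, and helper functions are invented for this page; this is not Alexa's knowledge base or code.

```python
from datetime import date

# Toy triple store, keyed by (subject, relation); contents are illustrative.
FACTS = {
    ("Martin Van Buren", "position_held"): "8th President of the US",
    ("Donald Trump", "date_of_birth"): date(1946, 6, 14),
    ("Elizabeth II", "date_of_birth"): date(1926, 4, 21),
}

def lookup(entity, relation):
    """Single static fact: one KB lookup."""
    return FACTS[(entity, relation)]

def age(entity, today=date(2018, 8, 20)):
    """Dynamically generated fact: derived on the fly from a stored birth date."""
    born = lookup(entity, "date_of_birth")
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def is_older(a, b):
    """Multi-step inference: compare two dynamically derived facts."""
    return age(a) > age(b)

print(is_older("Elizabeth II", "Donald Trump"))  # True
```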
In this talk, I'm going to discuss some of the unique challenges involved in building and maintaining a consistent knowledge base for Alexa, extending it with new facts, and using it to serve answers in multiple languages. I will focus on three recent projects from our group. First, a way of measuring the completeness of a knowledge base based on usage patterns. Usage of the KB is defined in terms of the relation distribution of entities seen in question-answer logs. Instead of directly estimating the relation distribution of individual entities, it is generalized to the "class signature" of each entity. For example, users ask for baseball players' height, age, and batting average, so a knowledge base is complete (with respect to baseball players) if every such entity has facts for those three relations.
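A minimal sketch of this idea follows, with invented relation names, logs, and KB contents; the actual estimator used in the project may differ.

```python
from collections import Counter

# Invented question-answer log: (class of queried entity, relation asked about).
QUERY_LOG = [
    ("baseball_player", "height"), ("baseball_player", "age"),
    ("baseball_player", "batting_average"), ("baseball_player", "height"),
]

# Invented KB: entity -> relations for which a fact is stored.
KB = {
    "Babe Ruth": {"height", "age", "batting_average"},
    "Hank Aaron": {"height", "age"},
}
ENTITY_CLASS = {"Babe Ruth": "baseball_player", "Hank Aaron": "baseball_player"}

def class_signature(cls, log, top_k=3):
    """Relations users actually ask about for a class, by frequency."""
    counts = Counter(rel for c, rel in log if c == cls)
    return {rel for rel, _ in counts.most_common(top_k)}

def completeness(cls):
    """Fraction of entities of `cls` with a fact for every signature relation."""
    sig = class_signature(cls, QUERY_LOG)
    members = [e for e, c in ENTITY_CLASS.items() if c == cls]
    return sum(sig <= KB[e] for e in members) / len(members)

print(completeness("baseball_player"))  # 0.5: Hank Aaron lacks batting_average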
Second, an investigation into fact extraction from unstructured text. I will present a method for creating distant (weak) supervision labels for training a large-scale relation extraction system. I will also discuss the effectiveness of neural network approaches by decoupling the model architecture from the feature design of a state-of-the-art neural network system. Surprisingly, a much simpler classifier trained on similar features performs on par with the highly complex neural network system (with a 75x reduction in training time), suggesting that the features are a bigger contributor to the final performance.
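The distant supervision heuristic itself is simple enough to sketch: a sentence mentioning both arguments of a known KB fact is labelled with that fact's relation. The KB facts and sentences below are invented.

```python
# Invented KB facts and sentences for illustration.
KB_FACTS = {("Barack Obama", "Hawaii"): "born_in",
            ("Steve Jobs", "Apple"): "founder_of"}

SENTENCES = [
    "Barack Obama was born in Hawaii .",
    "Steve Jobs returned to Apple in 1997 .",
    "Barack Obama visited Apple headquarters .",
]

def distant_labels(sentences, kb_facts):
    """Yield (sentence, subject, object, relation) weakly labelled examples."""
    for sent in sentences:
        for (subj, obj), rel in kb_facts.items():
            if subj in sent and obj in sent:
                yield sent, subj, obj, rel

for example in distant_labels(SENTENCES, KB_FACTS):
    print(example)
# Note the inherent label noise: "Steve Jobs returned to Apple in 1997 ."
# is labelled founder_of even though the sentence does not express founding.
```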
Finally, I will present the Fact Extraction and VERification (FEVER) dataset and challenge. The dataset comprises more than 185,000 human-generated claims extracted from Wikipedia pages. False claims were generated by mutating true claims in a variety of ways, some of which were meaning-altering. During the verification step, annotators were required to label a claim for its validity and to supply full-sentence textual evidence from (potentially multiple) Wikipedia articles for the label. With FEVER, we aim to help create a new generation of transparent and interpretable knowledge extraction systems.
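For orientation, a FEVER record is distributed as JSON Lines and has roughly the following shape. The claim text and evidence identifiers here are invented, and the exact schema should be checked against the dataset release at fever.ai.

```python
import json

# Illustrative FEVER-style record; values invented, schema approximate.
record = {
    "id": 12345,
    "claim": "Jurassic Park was directed by Steven Spielberg.",
    "label": "SUPPORTS",   # SUPPORTS / REFUTES / NOT ENOUGH INFO
    "evidence": [          # one or more evidence sets; a single set may
        [                  # span sentences from multiple Wikipedia pages
            [100001, 200001, "Jurassic_Park_(film)", 0],
        ]
    ],
}

print(json.dumps(record, indent=2))
```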