The invited talks will take place in virtual room "A" at the times indicated in the general program:
Thursday, 1 July, at 9:00 a.m.
Title: Combining symbolic knowledge and neural representations
Abstract: Current Natural Language Processing (NLP) systems typically rely on vectors for representing entities, and on vector manipulations for reasoning about these entities. The knowledge that is captured by these systems is then encoded implicitly in the parameters of some neural network. While it is not always clear what kinds of knowledge such systems capture, and what types of reasoning they are capable of, in practice they tend to perform very well across a wide range of tasks. Nonetheless, symbolic knowledge, e.g. encoded as rules, taxonomies or knowledge graphs, still has an important role to play for NLP. For instance, symbolic knowledge is needed to "inject" knowledge that is rarely expressed in text, such as highly specialised domain knowledge or, at the other end of the spectrum, general commonsense properties that are too obvious for humans to state explicitly. Symbolic knowledge is also bound to play a central role in applications where interpretability is important. This raises the question of how the advantages of neural representations and symbolic knowledge can best be combined.
In the first part of the talk, I will discuss a number of strategies in which symbolic knowledge is encoded using geometric constructs. For instance, one popular strategy is to view concepts as convex regions in a vector space, rather than vectors. Symbolic knowledge about concepts can then be modelled in terms of the qualitative spatial relationships that hold between these regions. Different strategies for embedding symbolic knowledge (i.e. for modelling such knowledge in terms of geometric constructs) have different strengths and weaknesses, and a better understanding is needed of the different trade-offs involved. Whereas the first part of the talk is essentially about using symbolic knowledge to improve the usefulness of vector representations, in the second part of the talk I will discuss the opposite direction: using vector representations to improve the usefulness of symbolic knowledge. In particular, vector representations of concepts capture aspects of conceptual knowledge that are difficult to encode symbolically, including fine-grained knowledge about different facets of similarity. Such knowledge can be exploited to design strategies for plausible reasoning with symbolic knowledge, allowing us for instance to automatically extend existing ontologies.
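To make the region-based view concrete, here is a minimal illustrative sketch (not taken from the talk itself): concepts are modelled as axis-aligned boxes, a simple kind of convex region, and symbolic knowledge such as subsumption ("every dog is an animal") or incompatibility corresponds to qualitative spatial relations between the regions. The `Box` class, the example concepts, and their coordinates are all hypothetical.

```python
# Illustrative sketch: concepts as convex regions (axis-aligned boxes).
# Containment between regions models subsumption (is-a); disjointness
# models mutually exclusive concepts. All names and values are made up.
from dataclasses import dataclass
from typing import List


@dataclass
class Box:
    """A concept as an axis-aligned box: per-dimension [low, high] bounds."""
    low: List[float]
    high: List[float]

    def contains(self, other: "Box") -> bool:
        """self contains other in every dimension: other's concept entails self's."""
        return all(a <= c and d <= b
                   for a, b, c, d in zip(self.low, self.high, other.low, other.high))

    def disjoint(self, other: "Box") -> bool:
        """The regions do not overlap in some dimension: incompatible concepts."""
        return any(b < c or d < a
                   for a, b, c, d in zip(self.low, self.high, other.low, other.high))


# Hypothetical 2-dimensional concept regions.
animal = Box(low=[0.0, 0.0], high=[10.0, 10.0])
dog = Box(low=[1.0, 2.0], high=[4.0, 5.0])
rock = Box(low=[20.0, 20.0], high=[25.0, 25.0])

print(animal.contains(dog))   # True: "dog is an animal" as region containment
print(animal.disjoint(rock))  # True: "animal" and "rock" are incompatible
```

In a real embedding method the region parameters would be learned from data, and richer convex regions (e.g. general polytopes or Gaussian-shaped regions) can be used; the trade-offs between such choices are part of what the talk discusses.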
Biography: Steven Schockaert is a professor at Cardiff University. His current research interests include commonsense reasoning, natural language processing and representation learning. He is Editor-in-Chief of AI Communications: the European Journal on Artificial Intelligence, Associate Editor of the Artificial Intelligence Journal and Area Editor of Fuzzy Sets and Systems. He serves on the board of the European Association for Artificial Intelligence (EurAI) in the role of treasurer. His research has been funded by various sources, including the European Research Council, EPSRC, the Leverhulme Trust and the Research Foundation Flanders. He was the recipient of the ECCAI Artificial Intelligence Dissertation Award and the IBM Belgium Prize for Computer Science.