Invited speakers

Horizon talks: Even more than regular invited talks, Horizon talks are meant to inspire. We invited researchers whom the organizers find inspiring and asked them to address not only their biggest past achievements but also, if not mostly, their future (even bigger) achievements. In what direction do they think our field should evolve? What are the greatest challenges in getting there? How do they think some of those challenges could be met? In other words, the question we asked them was: which regular invited talk would you like to give in ten years' time?

Marko Robnik-Šikonja

University of Ljubljana
How can a house mouse crack the Turing test? Knowledge representation view.

Recently, we have seen impressive progress in the area of artificial intelligence. Deep neural networks have solved previously unsolvable problems in computer vision (e.g., face recognition that rivals human performance), game playing (a program has beaten human champions at the game of Go), and natural language processing (useful automatic speech recognition and machine translation). As a result, the media and the prophets of AI are proclaiming the imminent emergence of artificial general intelligence and the demise of the human race.

Most AI scientists disagree. Facetiously put: with deep learning we can handle vision and sound, but with less accuracy and robustness than a house mouse. Yet we never say that a mouse is intelligent in the human sense. What do mice lack on the way towards human intelligence? How will the field of artificial intelligence move beyond narrow problems that require huge datasets? The key seems to lie in knowledge representation and manipulation.

Several approaches are emerging that may form the building blocks of next-generation artificial intelligence. Embeddings transform symbolic knowledge into numeric vectors in which relations between objects are expressed as distances. Using large quantities of unlabelled data, we can embed texts, images, electronic health records, graphs, relations, and other entities. Transfer learning uses related tasks for better generalization and knowledge transfer. The power of first-order logic and neural networks can be merged in relational neural networks. Perturbation approaches can explain the decisions of black-box models. Still, some components are missing, e.g., a better representation of meaning and abstract concepts, or better integration and consolidation of knowledge sources. If the history of AI optimism is any guide, progress will be slower than anticipated.
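As a concrete illustration of the first of these building blocks, the sketch below trains toy word embeddings and reads off relations between objects as similarities in vector space. It is a hypothetical example, not material from the talk: it assumes the gensim library, and the miniature corpus is far too small for meaningful geometry; real embeddings are trained on large unlabelled corpora.

    # Sketch: embeddings map symbols to vectors; relations appear as distances.
    # Assumes gensim (pip install gensim); corpus and settings are hypothetical.
    from gensim.models import Word2Vec

    corpus = [
        ["mouse", "eats", "cheese"],
        ["cat", "chases", "mouse"],
        ["dog", "chases", "cat"],
        ["mouse", "likes", "cheese"],
        ["cat", "eats", "fish"],
    ]

    # Train small word vectors on the toy corpus.
    model = Word2Vec(corpus, vector_size=16, window=2, min_count=1, epochs=500)

    # Relatedness between objects is now a similarity in the vector space.
    print(model.wv.similarity("cat", "dog"))
    print(model.wv.most_similar("mouse", topn=2))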


Ulrik Brandes

ETH Zurich
Network Data Science.

Complete this sentence: Network science is (…).
Did you come up with something like the study of some kind, property, function, or evolution of networks? Then it is likely that popular accounts, with their catchy metaphors, stunning phenomenological similarities, and big promises, have tricked you into seeing the network before the phenomenon.

I will argue that networks should instead be conceived of as constructs. The future of network science therefore lies in treating it as a data science distinguished by the format of its data, not by an object of study. Centered on the notion of network position, this view allows me to propose a methodological framework that suggests mathematical and computational problems worth addressing independently of application domains, precisely because it narrows the gap between methods and theorizing.
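To make the notion of network position concrete, here is a small hypothetical sketch, not taken from the abstract: it assumes the networkx library and summarizes each node's position with standard centrality indices, the same computation whatever phenomenon the data describe.

    # Hypothetical illustration of "network position" via centrality indices.
    # Assumes networkx (pip install networkx).
    import networkx as nx

    # Zachary's karate club: a classic small social network.
    G = nx.karate_club_graph()

    degree = nx.degree_centrality(G)            # how connected a node is
    betweenness = nx.betweenness_centrality(G)  # brokerage between others
    eigen = nx.eigenvector_centrality(G)        # ties to well-connected nodes

    # Nodes with similar scores occupy similar positions, regardless of
    # whether the vertices are people, proteins, or web pages: the method
    # depends on the format of the data, not on the object of study.
    for v in sorted(G, key=betweenness.get, reverse=True)[:3]:
        print(v, round(degree[v], 3), round(betweenness[v], 3), round(eigen[v], 3))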


Peter Flach

University of Bristol

Abstract

Will follow soon …