Invited speakers

Marko Robnik Šikonja

University of Ljubljana
How can a house mouse crack the Turing test? A knowledge representation view.

Recently, we have seen impressive progress in the area of artificial intelligence. Deep neural networks have solved previously unsolvable problems in computer vision (e.g., face recognition rivals human performance), game playing (a program has beaten human champions in the game of Go), and natural language processing (useful automatic speech recognition and machine translation). As a result, the media and prophets of AI are proclaiming the imminent emergence of artificial general intelligence and the demise of the human race.

Most AI scientists disagree. Facetiously, with deep learning we can handle vision and sound, but with less accuracy and robustness than a house mouse. Yet we never say that a mouse is intelligent in a human sense. What do mice lack on the path towards human intelligence? How will the artificial intelligence field move beyond narrow problems requiring huge datasets? The key seems to lie in knowledge representation and manipulation.

Several approaches are emerging that may form the building blocks of the next-generation artificial intelligence. Embeddings transform symbolic knowledge into numeric vectors in which relations between objects are expressed as distances. Using large quantities of unlabeled data, we can embed texts, images, electronic health records, graphs, relations, and other entities. Transfer learning uses related tasks for better generalization and knowledge transfer. The power of first-order logic and neural networks can be merged in relational neural networks. Perturbation approaches can explain decisions of black-box models. Still, some components are missing, e.g., better representation of meaning and abstract concepts, or better integration and consolidation of knowledge sources. If we can extrapolate from the history of AI optimism, the progress will be slower than anticipated.
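The embedding idea above, i.e. relations between objects expressed as distances between vectors, can be sketched with a toy example. The vocabulary and vectors below are invented purely for illustration; real embeddings are learned from large corpora and have hundreds of dimensions.

```python
from math import sqrt

# Toy 3-dimensional "embeddings" (invented for illustration only).
embeddings = {
    "mouse": [0.9, 0.1, 0.0],
    "rat":   [0.8, 0.2, 0.1],
    "logic": [0.0, 0.1, 0.9],
}

def cosine_distance(u, v):
    """Distance in embedding space: 1 minus cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1 - dot / norm

# Semantically related concepts end up closer than unrelated ones.
d_related = cosine_distance(embeddings["mouse"], embeddings["rat"])
d_unrelated = cosine_distance(embeddings["mouse"], embeddings["logic"])
print(d_related < d_unrelated)  # prints True
```

The same distance-based view carries over to embeddings of sentences, graphs, or patient records: once entities live in a common vector space, nearness serves as a proxy for semantic relatedness.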

Ulrik Brandes

ETH Zurich


Will follow soon …