Bernhard Aichernig
TU Graz, Austria (website)
Talk: Learn to Test - Automata Learning for Formal Testing
In this talk, we will provide an overview of our research in automata learning for the testing of black-box systems, i.e., systems for which no source code is available.
Automata learning, also known as grammar inference or model mining, infers finite-state models from observations at the interface of a system.
Our tools are capable of learning deterministic, stochastic, and timed models in the form of Mealy machines, MDPs, and Timed Automata.
These learned models can be used to automatically test system components externally.
Recent application domains include IoT protocols (Bluetooth LE, MQTT), cars (driver models, autonomous driving), and reinforcement learning.
Most recently, we have been investigating the synergies between machine learning and automata learning in both directions, i.e., machine learning to assist automata learning and automata learning to assist machine learning.
For easier integration with machine-learning frameworks, we have developed AALpy, an open-source Active Automata Learning library for Python:
https://github.com/DES-Lab/AALpy
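To make the workflow concrete, here is a minimal sketch of active automata learning with AALpy on a made-up black-box "coffee machine": the system is wrapped behind a reset/step interface, membership queries drive L*, and equivalence queries are approximated by random walks. The example system, its alphabet, and the oracle parameters are purely illustrative, and the class and function names follow AALpy's documented API as recalled here, so treat the exact signatures as assumptions.

```python
# Illustrative sketch only; example system and parameters are invented.
from aalpy.base import SUL
from aalpy.oracles import RandomWalkEqOracle
from aalpy.learning_algs import run_Lstar


class CoffeeMachineSUL(SUL):
    """Wraps a hypothetical black-box coffee machine behind the
    reset/step interface that active automata learning observes."""

    def pre(self):
        # Reset the system before each query.
        self.coins = 0

    def post(self):
        # Tear down after each query (nothing to do for this toy system).
        pass

    def step(self, letter):
        # Send one input symbol, observe one output symbol.
        if letter == 'coin':
            self.coins += 1
            return 'ok'
        if letter == 'button':
            if self.coins >= 2:
                self.coins = 0
                return 'coffee'
            return 'error'
        return 'invalid'


alphabet = ['coin', 'button']
sul = CoffeeMachineSUL()
# Equivalence queries are approximated by random walks over the black box.
eq_oracle = RandomWalkEqOracle(alphabet, sul, num_steps=2000, reset_prob=0.1)
learned_model = run_Lstar(alphabet, sul, eq_oracle, automaton_type='mealy')
print(learned_model)
```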
Martin Berger
University of Sussex, United Kingdom (website)
Talk: Dumb but fast >> smart and slow: fast grammar inference on GPUs
GPUs are the workhorses of high-performance computing.
For applications that fit their programming paradigm, they can deliver speed-ups of several orders of magnitude over CPUs, as the advances in deep learning vividly demonstrate.
A significant spectrum of applications, including grammar inference, has yet to reap the benefits of GPU acceleration.
In this talk I will give an overview of recent work on GPU acceleration of grammar induction that has led to state-of-the-art results, with orders-of-magnitude improvements in the inference of regular expressions and of LTL (linear temporal logic, a subsystem of regular expressions widely used in industrial verification).
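To give a flavour of the underlying task (an illustration, not the speaker's system): regular-expression inference asks for a small pattern that accepts a given set of positive strings and rejects a given set of negative ones. The deliberately naive, CPU-bound sketch below enumerates candidate patterns by brute force, the kind of "dumb but fast" search that stands to gain from massive parallelism; the example strings and the candidate grammar are made up.

```python
# Illustrative brute-force regex inference; examples and candidates are invented.
import itertools
import re

positives = ["ab", "aab", "aaab"]   # strings the target language must accept
negatives = ["b", "ba", "abb"]      # strings it must reject

# Enumerate small patterns built by concatenating pieces from a tiny grammar.
atoms = ["a", "b", "a*", "b*", "(ab)", "(ab)*"]
candidates = (
    "".join(parts)
    for n in range(1, 4)
    for parts in itertools.product(atoms, repeat=n)
)

def consistent(pattern):
    # A pattern is consistent if it fully matches every positive string
    # and no negative string.
    return (all(re.fullmatch(pattern, s) for s in positives)
            and not any(re.fullmatch(pattern, s) for s in negatives))

# Prints the first consistent pattern found, e.g. 'a*(ab)'.
print(next(p for p in candidates if consistent(p)))
```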
Ryan Cotterell
ETH Zürich, Switzerland (website)
Talk: A formal perspective on language modeling
Language models—especially the large ones—are all the rage.
And, for what will surely be one of only a few times in history, my field, natural language processing, is the center of world attention.
Indeed, there is nearly a daily stream of articles in the popular press on the most recent advances in language modeling technology.
In contrast to most of these articles (and most other talks on the topic), this tutorial-style presentation is not about forward progress in the area.
Instead, I am going to take a step back and ask simple questions about the nature of language modeling itself.
We will start with the most basic of questions: From a mathematical perspective, what is a language model? Next, the talk will turn philosophical.
With all the talk of artificial general intelligence, what can the theory of computation bring to bear on the computational power of language models?
The talk will conclude with a statement of several recent theorems proven by my research group, the highlight of which is that no Transformer-based language model is Turing complete and, thus, we should be careful about labeling such language models, e.g., GPT-4, as general-purpose reasoners.
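As a concrete anchor for the first question, one standard formalization (an assumption here, not necessarily the definition adopted in the talk) treats a language model as a probability distribution over finite strings:

```latex
% A common formalization (assumed, not taken from the talk): a language model
% over a finite alphabet \Sigma is a probability distribution over the set
% \Sigma^* of all finite strings.
\[
  p \colon \Sigma^* \to [0,1],
  \qquad
  \sum_{w \in \Sigma^*} p(w) = 1 .
\]
% Autoregressive models realize p token by token, terminating with a special
% end-of-string symbol EOS:
\[
  p(w_1 \cdots w_n)
  = p(\mathrm{EOS} \mid w_1 \cdots w_n)\,
    \prod_{t=1}^{n} p(w_t \mid w_1 \cdots w_{t-1}) .
\]
% Such a model is called tight when the mass on finite strings sums to 1,
% i.e., no probability leaks to infinite sequences.
```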