It is possible to join this workshop online. This is free of charge, but you need to register via this form. After registration, we will send you a link to the Zoom call before Sunday morning. (If you have not received the Zoom link by then, please contact Joshua Moerman.)
Note that the time zone is EEST (Eastern European Summer Time, UTC+3).
9:00  Registration 
9:10  Opening remarks 
9:15 
Invited lecture:
Learn to Test: Automata Learning for Formal Testing
In this talk, we will provide an overview of our research in automata learning for the testing of black-box systems, i.e., systems without access to source code.
Automata learning, also known as grammar inference or model mining, infers finite-state models from observations at the interface of a system.
Our tools are capable of learning deterministic, stochastic, and timed models in the form of Mealy machines, MDPs, and Timed Automata.
These learned models can be used to automatically test system components externally.
Recent application domains include IoT protocols (Bluetooth LE, MQTT), cars (driver models, autonomous driving), and reinforcement learning.
Most recently, we are investigating the synergies between machine learning and automata learning in both directions, i.e., machine learning to assist automata learning and automata learning to assist machine learning.
For easier integration with machine-learning frameworks, we have developed AALpy, an open-source Active Automata Learning library for Python:
https://github.com/DES-Lab/AALpy
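As a flavour of the models such tools infer, here is a minimal Python sketch of a Mealy machine answering an output query. The two-state "toggle" machine, its state names, and its inputs are invented for illustration; this is not AALpy's API.

```python
# Minimal sketch of a Mealy machine: transitions map (state, input)
# to (next_state, output), and an output query replays an input word.

class MealyMachine:
    def __init__(self, initial, transitions):
        self.initial = initial
        self.transitions = transitions  # (state, input) -> (next_state, output)

    def run(self, inputs):
        """Answer an output query: feed an input word, collect the outputs."""
        state, outputs = self.initial, []
        for i in inputs:
            state, out = self.transitions[(state, i)]
            outputs.append(out)
        return outputs

# Hypothetical two-state machine: 'a' toggles the state, 'b' reports it.
m = MealyMachine("q0", {
    ("q0", "a"): ("q1", "ok"),
    ("q1", "a"): ("q0", "ok"),
    ("q0", "b"): ("q0", "even"),
    ("q1", "b"): ("q1", "odd"),
})
print(m.run(["a", "b", "a", "b"]))  # ['ok', 'odd', 'ok', 'even']
```

An active learner builds such a machine purely from queries like `m.run(...)`, without ever inspecting the transition table itself.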

10:00  Coffee break ☕️ 
10:30 
Small Test Suites for Active Automata Learning (link to paper)
(Loes Kruger, Sebastian Junges and Jurriaan Rot)
A bottleneck in modern active automata learning is to test whether a hypothesized Mealy machine correctly describes the system under learning.
The search space for possible counterexamples is given by so-called test suites, consisting of input sequences that have to be checked to decide whether a counterexample exists.
We show that significantly smaller test suites suffice under reasonable assumptions on the structure of the black box.
These smaller test suites help to refute false hypotheses during active automata learning, even when the assumptions do not hold.
We combine multiple test suites using a multi-armed bandit setup that adaptively selects a test suite.
An extensive empirical evaluation shows the efficacy of our approach. For small to medium-sized models, the performance gain is limited.
However, the approach allows learning models from large, industrial case studies that were beyond the reach of known methods.
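The bandit idea can be illustrated with a generic epsilon-greedy sketch (hypothetical; the paper's actual selection strategy may differ), where a suite is rewarded whenever one of its tests uncovers a counterexample:

```python
import random

def choose_suite(counts, rewards, epsilon=0.1):
    """Pick a test-suite index: explore with probability epsilon,
    otherwise exploit the suite with the best average reward."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    averages = [r / c if c else float("inf") for r, c in zip(rewards, counts)]
    return max(range(len(counts)), key=lambda i: averages[i])

def update(counts, rewards, arm, found_counterexample):
    """Record the outcome of running a test from the chosen suite."""
    counts[arm] += 1
    rewards[arm] += 1.0 if found_counterexample else 0.0
```

Untried suites get an infinite optimistic average, so each suite is sampled at least once before exploitation kicks in.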

10:55 
Output-decomposed Learning of Mealy Machines (link to paper)
(Rick Koenders and Joshua Moerman)
We present an active automata learning algorithm which learns a decomposition of a finite state machine, based on projecting onto individual outputs.
This is dual to a recent compositional learning algorithm by Labbaf et al.
When projecting the outputs to a smaller set, the model itself is reduced in size.
By having several such projections, we do not lose any information and the full system can be reconstructed.
Depending on the structure of the system, this reduces the number of queries drastically, as shown by a preliminary evaluation of the algorithm.
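The projection idea can be sketched in a few lines of Python (a hypothetical toy, not the paper's algorithm): a Mealy machine whose outputs are tuples is split into one component per output coordinate, and zipping the component outputs reconstructs the original behaviour.

```python
def project(transitions, coord):
    """Keep only output coordinate `coord`. States and inputs are untouched
    here; a real algorithm would then minimise each component, which is
    where the size reduction comes from."""
    return {k: (s, out[coord]) for k, (s, out) in transitions.items()}

def run(transitions, initial, inputs):
    """Replay an input word and collect the outputs."""
    state, outs = initial, []
    for i in inputs:
        state, o = transitions[(state, i)]
        outs.append(o)
    return outs

# Hypothetical machine with pair-valued outputs.
full = {
    ("q0", "a"): ("q1", ("x", 0)),
    ("q1", "a"): ("q0", ("y", 1)),
}
left, right = project(full, 0), project(full, 1)
word = ["a", "a", "a"]
rebuilt = list(zip(run(left, "q0", word), run(right, "q0", word)))
```

Here `rebuilt` equals `run(full, "q0", word)`: no information is lost by learning the projections separately.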

11:20 
Analyzing constrained LLM through PDFA-learning (link to paper)
(Matías Carrasco, Franz Mayr, Sergio Yovine, Johny Kidd, Martín Iturbide, Juan da Silva and Alejo Garat)
We define a congruence that copes with null next-symbol probabilities that arise when the output of a language model is constrained by some means during text generation.
We develop an algorithm for efficiently learning the quotient with respect to this congruence and evaluate it on case studies for analyzing statistical properties of LLMs.
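How constrained generation produces null next-symbol probabilities can be illustrated with a small sketch (not the paper's construction): symbols outside the allowed set are masked to probability zero and the remainder renormalised.

```python
def constrain(dist, allowed):
    """Mask a next-symbol distribution to an allowed set and renormalise.
    Disallowed symbols end up with probability exactly 0 (a 'null'
    next-symbol probability)."""
    masked = {s: (p if s in allowed else 0.0) for s, p in dist.items()}
    total = sum(masked.values())
    return {s: p / total for s, p in masked.items()} if total else masked

# Hypothetical next-token distribution, constrained to {'a', 'c'}.
dist = {"a": 0.5, "b": 0.3, "c": 0.2}
print(constrain(dist, {"a", "c"}))
```

Standard PDFA-learning congruences compare full next-symbol distributions, so zeros like these need special treatment, which is what the congruence above is designed for.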

11:45 
Database-assisted automata learning (link to paper)
(Hielke Walinga, Robert Baumgartner and Sicco Verwer)
This paper presents DAALder (Database-Assisted Automata Learning, with Dutch suffix from "leerder"), a new algorithm for learning state machines, or automata, specifically deterministic finite-state automata (DFA).
When learning state machines from log data originating from software systems, the large amount of log data can pose a challenge.
Conventional state merging algorithms cannot efficiently deal with this, as they require a large amount of memory.
To solve this, we utilized database technologies to efficiently query a large trace data set and construct a state machine from it, as databases allow large amounts of data to be saved on disk while remaining efficiently queryable.
Building on research in both active learning and passive learning, the proposed algorithm is a combination of the two.
It can quickly find a characteristic set of traces from a database using heuristics from a state merging algorithm.
Experiments show that our algorithm has similar performance to conventional state merging algorithms on large datasets, but requires far less memory.

12:10 
DFAMiner: Mining minimal separating DFAs from labelled samples (link to paper)
(Daniele Dell'Erba, Yong Li and Sven Schewe)
We propose DFAMiner, a passive learning tool for learning minimal separating deterministic finite automata (DFA) from a set of labelled samples.
Separating automata are an interesting class of automata that occurs generally in regular model checking and has raised interest in foundational questions of parity game solving.
We first propose a simple and linear-time algorithm that incrementally constructs a three-valued DFA (3DFA) from a set of labelled samples given in the usual lexicographical order.
This 3DFA has accepting and rejecting states as well as don't-care states, so that it can exactly recognise the labelled examples.
We then apply our tool to mining a minimal separating DFA for the labelled samples by minimising the constructed automata via a reduction to solving SAT problems.
Empirical evaluation shows that our tool outperforms current state-of-the-art tools significantly on standard benchmarks for learning minimal separating DFAs from samples.
Progress in the efficient construction of separating DFAs can also lead to finding the lower bound of parity game solving, where we show that DFAMiner can create optimal separating automata for simple languages with up to 7 colours.
Future improvements might offer inroads to better data structures.
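The first step described above can be sketched as a trie-shaped 3DFA built from labelled samples (an illustrative toy, not DFAMiner itself): each sample's end state is marked accepting or rejecting, and every other state stays don't-care.

```python
ACCEPT, REJECT, DONT_CARE = "+", "-", "?"

def build_3dfa(samples):
    """samples: iterable of (word, label), label True = accept, False = reject.
    Returns a trie-shaped 3DFA as (transitions, state labels)."""
    trans, labels = {}, {0: DONT_CARE}
    fresh = 1  # next unused state id
    for word, positive in samples:
        state = 0
        for sym in word:
            if (state, sym) not in trans:
                trans[(state, sym)] = fresh
                labels[fresh] = DONT_CARE
                fresh += 1
            state = trans[(state, sym)]
        labels[state] = ACCEPT if positive else REJECT
    return trans, labels

trans, labels = build_3dfa([("ab", True), ("a", False)])
print(labels)  # {0: '?', 1: '-', 2: '+'}
```

The don't-care states are exactly the freedom a SAT-based minimiser can exploit when searching for the smallest separating DFA.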

12:35  Lunch break 🍽️ 
14:00 
Invited lecture:
Dumb but fast >> smart and slow: fast grammar inference on GPUs
GPUs are the workhorses of highperformance computing.
The acceleration they provide to applications compatible with their programming paradigm can surpass CPU performance by several orders of magnitude, as notably evidenced by the advancements in deep learning.
A significant spectrum of applications, including grammar inference, has yet to reap the benefits of GPU acceleration.
In this talk I will give an overview of recent work on GPU acceleration of grammar induction that has led to state-of-the-art results, with orders-of-magnitude improvements for the inference of regular expressions and LTL (linear temporal logic, a subsystem of regular expressions widely used in industrial verification). 
14:45 
Learning EFSM Models with Registers in Guards (link to paper)
(Germán Vega, Michael Foster, Roland Groz, Neil Walkinshaw, Catherine Oriat and Adenilso Simão)
This paper presents an active inference method for Extended Finite State Machines, where inputs and outputs are parametrized, and transitions can be conditioned by guards involving input parameters and internal variables called registers.
The method applies to (software) systems that cannot be reset, so it learns an EFSM model of the system on a single trace.

15:10 
PDFA Distillation via String Probability Queries (link to paper)
(Robert Baumgartner and Sicco Verwer)
Probabilistic deterministic finite automata (PDFA) are discrete event systems modeling conditional probabilities over languages:
given an already observed sequence of tokens, they return the probability of each token of interest appearing next.
These types of models have gained interest in the domain of explainable machine learning, where they are used as surrogate models for neural networks trained as language models.
In this work we present an algorithm to distill PDFA from neural networks.
Our algorithm is a derivative of the L# algorithm and is capable of learning PDFA from a new type of query, in which the algorithm infers conditional probabilities from the probability that the queried string occurs.
We show its effectiveness on a recent public dataset by distilling PDFA from a set of trained neural networks.
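The inference step can be sketched as follows (hedged: the paper's exact query semantics may differ); assuming the query returns the probability that generation starts with a given prefix, the conditional next-symbol probability is a ratio of two such queries.

```python
def conditional(prefix_prob, w, sigma):
    """Conditional probability of symbol sigma after prefix w,
    derived from two string-probability queries: P(w + sigma) / P(w)."""
    pw = prefix_prob(w)
    return prefix_prob(w + sigma) / pw if pw > 0 else 0.0

# Hypothetical black-box model: each symbol is independently
# 'a' with probability 0.7, anything else with probability 0.3.
def prefix_prob(w):
    p = 1.0
    for s in w:
        p *= 0.7 if s == "a" else 0.3
    return p

print(conditional(prefix_prob, "ab", "a"))
```

A learner only ever calls `prefix_prob`, so the same scheme applies when the black box is a trained neural language model rather than a toy formula.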

15:35  Coffee break ☕️ 
16:00 
A Theoretical Analysis of the Incremental Counting Ability of LSTM in Finite Precision (link to paper)
(Volodimir Mitarchuk and Rémi Eyraud)
Since the introduction of the LSTM (Long Short-Term Memory), this type of recurrent neural network has been the subject of extensive studies to determine its expressiveness.
This work is an attempt to fill the gap between the observations made during empirical studies and the theory.
We propose in this article to study theoretically the functioning and the limits of the empirically observed counting mechanism of LSTMs.
Our results show that, in finite precision, LSTMs have a limited ability to retrieve the counting information they have deposited in their carry.
This reveals an asymmetry in the information storage and retrieval mechanisms in LSTMs.

16:25 
Learning Closed Signal Flow Graphs (link to paper)
(Ekaterina Piotrovskaya, Leo Lobski and Fabio Zanasi)
We develop a learning algorithm for closed signal flow graphs, a graphical model of signal transducers.
The algorithm relies on the translation between closed signal flow graphs and weighted finite automata on a singleton alphabet.
We demonstrate that this procedure results in a genuine reduction of complexity:
our algorithm fares better than existing learning algorithms for weighted automata restricted to the case of a singleton alphabet.
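The correspondence being exploited can be sketched concretely (a hypothetical example, not the paper's algorithm): a weighted automaton over a one-letter alphabet is just an initial vector, a single transition matrix, and a final vector, so the word a^n is mapped to alpha · M^n · beta.

```python
def mat_vec(M, v):
    """Multiply a square matrix (list of rows) by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def wfa_value(alpha, M, beta, n):
    """Value of the unary word a^n under the WFA (alpha, M, beta)."""
    v = beta
    for _ in range(n):
        v = mat_vec(M, v)
    return sum(a * x for a, x in zip(alpha, v))

# Hypothetical 2-state unary WFA computing Fibonacci numbers.
alpha = [1, 0]          # initial weights
M = [[1, 1], [1, 0]]    # transition matrix of the single letter
beta = [1, 0]           # final weights

print([wfa_value(alpha, M, beta, n) for n in range(7)])  # [1, 1, 2, 3, 5, 8, 13]
```

Over a singleton alphabet the recognised function is just a linearly recurrent sequence, which is the structure the specialised algorithm can take advantage of.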

16:50 
Invited lecture:
A formal perspective on language modeling
Language models—especially the large ones—are all the rage.
And, for what will surely be one of only a few times in history, my field, natural language processing, is the center of world attention.
Indeed, there is nearly a daily stream of articles in the popular press on the most recent advances in language modeling technology.
In contrast to most of these articles (and most other talks on the topic), this tutorial-style presentation is not about forward progress in the area.
Instead, I am going to take a step back and ask simple questions about the nature of language modeling itself.
We will start with the most basic of questions: From a mathematical perspective, what is a language model? Next, the talk will turn philosophical.
With all the talk of artificial general intelligence, what can theory of computation bring to bear on the computational power of language models?
The talk will conclude with a statement of several recent theorems proven by my research group, the highlight of which is that no Transformer-based language model is Turing complete and, thus, we should be careful about labeling such language models, e.g., GPT-4, as general-purpose reasoners.

17:35  Closing remarks 
17:40  End 🍕🍻 