This is part one of a two-part series on automata learning and satisfiability modulo theories. Part two will appear soon.

Deterministic finite automata (DFAs) are useful in a variety of applications. However, the problem of learning a DFA of minimal size from positive (accepted) and negative (rejected) strings can be very hard. In fact, it is the optimization variant of the problem of finding a consistent DFA of a fixed size, which has been shown to be NP-complete. In 2010, Marijn Heule and Sicco Verwer presented an algorithm that encodes the problem of learning a DFA from labeled strings as a satisfiability (SAT) problem. Their algorithm has since won the StaMinA competition, and has led to the creation of the dfasat tool (for which Chris has created an excellent tutorial).
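To make the underlying search problem concrete, here is a minimal brute-force sketch in Python. It is only an illustration of the problem itself, not the Heule-Verwer SAT encoding (which solves it far more cleverly); the function names are my own:

```python
from itertools import product

def accepts(delta, accepting, s):
    # Run the DFA from the initial state 0 over string s.
    q = 0
    for ch in s:
        q = delta[(q, ch)]
    return q in accepting

def min_consistent_dfa(positive, negative, alphabet, max_states=4):
    # Naively enumerate DFAs by increasing number of states and return
    # the first one that accepts every positive string and rejects every
    # negative one -- the optimization problem described above.
    for n in range(1, max_states + 1):
        keys = [(q, a) for q in range(n) for a in alphabet]
        for targets in product(range(n), repeat=len(keys)):
            delta = dict(zip(keys, targets))
            for acc_bits in product([False, True], repeat=n):
                accepting = {q for q in range(n) if acc_bits[q]}
                if all(accepts(delta, accepting, s) for s in positive) and \
                   not any(accepts(delta, accepting, s) for s in negative):
                    return n, delta, accepting
    return None  # no consistent DFA within max_states

# Example: strings over {a} with an odd number of a's need two states.
result = min_consistent_dfa(["a", "aaa"], ["", "aa"], ["a"])
```

The enumeration is exponential in the number of states, which is exactly why the SAT formulation, with its clever symmetry breaking and clause learning, is such an improvement in practice.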

In this post, I present an encoding that takes a satisfiability modulo theories (SMT) perspective. This encoding is faster than the one used in dfasat, and benefits from the continuous efforts by fellow researchers on making SMT solvers more powerful. Moreover, I find it more natural, because it makes a distinction between the logic that is required to solve the problem, and the logic imposed by the background theories.

Continue Reading "Automata learning as a satisfiability modulo theories problem"

This is our team’s first attempt at applying automata learning theory to problems in autonomous driving. Our starting point, learning patterns of car-following behavior, is the most fundamental task in daily driving scenarios. It has been widely recognized that the first milestone towards autonomous or semi-autonomous driving is a good cruise controller for safe car-following. The goal is to succinctly describe how a car follows the car in front of it, often called the lead car. The research presented here was accepted in a paper titled “Car-following Behavior…Continue Reading “Finite State Automata for Autonomous Driving”

Communications of the ACM published an article on automata learning in software engineering last February. The techniques described in the article are used to obtain models of the (input/output) behaviour of software. Even without access to source code, one can now use model checking or other bug-finding tools on these models. The article shows many successful applications. Why is this possible at all?

Continue Reading "Why is learning so effective in software testing?"

In linguistic applications, typical tasks are translating a sentence, or deciding whether a given string belongs to a specific language. In the past, popular models for learning rules were finite state machines, pushdown automata, and hidden Markov models. We understand these models fairly well, and they each describe a class in the Chomsky hierarchy. This makes them very apt for modeling formal systems. But when it comes to describing natural language and solving problems in NLP, the rules imposed by formal grammars are often too…Continue Reading “Looking beyond Automata Models: Transducing and Grammar Learning with Neural Machines”

The precision, speed and deterministic, algorithmic problem-solving strategies of computers are often idealized. Consequently, computers are often seen as unbiased and objective. This view is also transferred to automated decision making using machine-learned models. But this is dangerous for multiple reasons: Between false positives and false negatives, models can be wrong in more than one way. The effect of the human component in the data can be severe and is often ignored or grossly underestimated (see for example this paper here): The data we collect in…Continue Reading “4 properties making automata particularly interpretable”