Alexandra Silva
Programming Principles, Logic and Verification Group, University College London, UK
Alexandra Silva is a theoretical computer scientist whose main research focuses on semantics of programming languages and modular development of algorithms for computational models. Much of her work uses the unifying perspective offered by coalgebra, a mathematical framework developed over the last few decades.
Alexandra is currently a senior lecturer at University College London. Previously, she was an assistant professor in Nijmegen and a post-doc at Cornell University, with Prof. Dexter Kozen, and a PhD student at the Dutch national research center for Mathematics and Computer Science (CWI), under the supervision of Prof. Jan Rutten and Dr. Marcello Bonsangue.
She was the recipient of the Presburger Award 2017, a Leverhulme Prize in 2016, and an ERC Starting Grant in 2015.
Keynote topic:
CALF: Categorical Automata Learning Framework
Automata learning is a technique that has been applied successfully in verification, with the type of automaton varying with the application domain. Adaptations of automata learning algorithms for increasingly complex types of automata have had to be developed from scratch, because no abstract theory offered guidelines. This makes such algorithms hard to devise, and it obscures their correctness proofs. We introduce a simple category-theoretic formalism that provides an appropriately abstract foundation for studying automata learning. Furthermore, our framework establishes formal relations between algorithms for learning, testing, and minimization. We illustrate its generality with two examples: deterministic and weighted automata.
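To make the setting concrete: the classical instance that CALF generalizes is Angluin-style L* learning of a deterministic automaton from membership and equivalence queries. The sketch below is not the categorical framework of the talk, just a minimal self-contained L* learner (with the Maler-Pnueli treatment of counterexamples, which adds their suffixes as tests); all names are invented for the example.

```python
from itertools import product

def lstar(alphabet, member, equiv):
    """Learn a DFA (as an acceptance function) from membership/equivalence queries."""
    S, E = [""], [""]                            # access prefixes and test suffixes
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        # Closure: every one-letter extension of a prefix must match some row.
        changed = True
        while changed:
            changed = False
            for s, a in product(list(S), alphabet):
                if row(s + a) not in {row(t) for t in S}:
                    S.append(s + a)
                    changed = True
        # Hypothesis automaton: states are distinct rows, named by a representative prefix.
        rep = {row(s): s for s in S}
        def accepts(word):
            state = ""
            for a in word:
                state = rep[row(state + a)]
            return member(state)
        cex = equiv(accepts)
        if cex is None:
            return accepts
        # Counterexample handling: add all its suffixes as new tests.
        E.extend(cex[i:] for i in range(len(cex)) if cex[i:] not in E)

# Toy target language over {a, b}: all words containing "ab".
member = lambda w: "ab" in w

def equiv(hyp):                                  # brute-force equivalence oracle
    for n in range(7):
        for w in map("".join, product("ab", repeat=n)):
            if hyp(w) != member(w):
                return w
    return None

dfa = lstar("ab", member, equiv)
```

The first hypothesis here is the one-state automaton rejecting everything; the counterexample "ab" refines the table until the correct three-state automaton is found.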
This is joint work with Gerco van Heerdt and Matteo Sammartino.
Kim Guldstrand Larsen
Center for Embedded Software Systems, Aalborg University, Denmark
Kim G. Larsen is a professor in the Department of Computer Science at Aalborg University, within the Distributed, Embedded and Intelligent Systems Unit, and director of the ICT competence center CISS, the Center for Embedded Software Systems. He is also director of the Sino-Danish Basic Research Center IDEA4CPS, the Danish Innovation Network InfinIT, as well as the newly founded innovation research center DiCyPS: Data Intensive Cyber Physical Systems. In 2015 he won an ERC Advanced Grant with the project LASSO, on learning, analysis, synthesis and optimization of cyber-physical systems. His research interests include modeling, verification and performance analysis of real-time and embedded systems, with applications and contributions to concurrency theory and model checking. In particular, he is a prime investigator of the tool UPPAAL and co-founder of the company UP4ALL International. In 2013 he was co-recipient of the CAV Award for the work on UPPAAL as "the foremost model checker for real-time systems". Kim G. Larsen became Honorary Doctor (honoris causa) at Uppsala University, Sweden, in 1999, and at ENS Cachan, France, in 2007. Also in 2007 he became a Knight of the Order of the Dannebrog. In 2012 he became an Honorary Member of Academia Europaea. He is a life-long member of the Royal Danish Academy of Sciences and Letters, Copenhagen, and a member of the Danish Academy of Technical Sciences, for which he served as vice-chairman of the group on Electro- and Information Systems. In 2016 he was appointed INRIA International Chair for a five-year period, and he also won the prestigious industrial Grundfos Award 2016.
In this talk we will describe recent advances in, and applications of, the branch UPPAAL Stratego. This tool allows safe and near-optimal strategies to be synthesized for (1½-player) stochastic hybrid games, by combining symbolic methods for timed games with reinforcement learning methods, as well as abstraction techniques for hybrid games.
When the synthesized strategies are to be used as control programs executed on small embedded platforms, an important issue is how to encode them compactly. For this purpose, algorithmic methods have been devised that take into account both compositionality and partial observability. On-line methods for strategy synthesis/learning have also been applied successfully in diverse domains such as heating systems and intelligent traffic control. The on-line method has the distinct advantage of not needing to store any strategy (as it is constantly computed during operation). However, some systems require responses from the controller at too high a frequency (e.g. on the order of milliseconds for switched controllers in power electronics) for on-line synthesis to be feasible. We are therefore investigating off-line training of a neural network encoding the near-optimal strategy.
Finally, we will address the so-far open problem of verifying safety properties of strategies obtained from pure learning.
Mehryar Mohri
Google & Courant Institute of Mathematical Sciences, New York University, USA
Mehryar Mohri is a professor of Computer Science and Mathematics at the Courant Institute of Mathematical Sciences, and a Research Consultant at Google Research. Prior to these positions, he served as a Department Head and a Technology Leader at AT&T Labs-Research (from 1994), formerly AT&T Bell Labs. He graduated from Ecole Polytechnique, later from Ecole Normale Superieure (ENS Ulm), and received his Master's and Ph.D. in applied mathematics and computer science from the University of Paris 7 (1993).
His research interests cover virtually all aspects of machine learning, as well as automata theory, and algorithms and theory in general. His work on algorithms has served as the foundation for several applications in language processing, including most spoken-dialog and speech recognition systems used in the United States. He is the principal author of the machine learning textbook Foundations of Machine Learning, used in graduate courses on machine learning at several universities and corporate research laboratories.
The problem of learning automata or weighted automata dates back to the origins of computer science. This talk presents a series of recent fundamental data-dependent learning guarantees for this problem, based on the notion of Rademacher complexity. These learning-theoretic results can guide the design of new algorithms that benefit from strong theoretical justification.
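For context, the standard notion underlying such guarantees is the following (this is the textbook definition and bound, not the talk's automata-specific results). For a sample $S=(x_1,\dots,x_m)$ and a hypothesis set $H$, the empirical Rademacher complexity is

```latex
\widehat{\mathfrak{R}}_S(H)
  \;=\;
  \operatorname*{\mathbb{E}}_{\boldsymbol{\sigma}}
  \Big[\sup_{h\in H}\frac{1}{m}\sum_{i=1}^{m}\sigma_i\, h(x_i)\Big],
  \qquad \sigma_i \text{ i.i.d.\ uniform over } \{-1,+1\},
```

and, for losses in $[0,1]$, with probability at least $1-\delta$ over the draw of $S$, every $h\in H$ satisfies the data-dependent generalization bound

```latex
R(h)
  \;\le\;
  \widehat{R}_S(h)
  \;+\; 2\,\widehat{\mathfrak{R}}_S(H)
  \;+\; 3\sqrt{\frac{\log(2/\delta)}{2m}} .
```

The talk's guarantees specialize bounds of this form to classes of automata and weighted automata.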