We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging logic problems end-to-end, and can neural networks learn the semantics of logics? In this work we focus on linear-time temporal logic (LTL), as it is widely used in verification. We train a Transformer to directly predict a solution, i.e., a trace, for a given LTL formula. The training data is generated with classical solvers, which, however, provide only one of many possible solutions to each formula. We demonstrate that training on these particular solutions suffices, and that Transformers can predict solutions even for formulas from benchmarks in the literature on which the classical solver timed out. T...
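The abstract above concerns predicting a trace that satisfies a given LTL formula. As background on what "a trace satisfies a formula" means (this is the standard LTL semantics, not the paper's learning method), satisfaction can be checked directly on ultimately periodic "lasso" traces of the form prefix·loop^ω, the shape classical solvers emit. The sketch below is illustrative; the helper names `canon` and `holds` and the tuple-based formula encoding are this sketch's own conventions, not taken from any cited work.

```python
def canon(i, P, L):
    """Map position i of the infinite trace prefix . loop^omega to one of
    the P + L distinct positions (loop positions repeat with period L)."""
    return i if i < P else P + (i - P) % L

def holds(f, i, prefix, loop):
    """Check whether LTL formula f holds at position i of the lasso trace.
    Formulas are nested tuples: ('ap', 'a'), ('true',), ('not', f),
    ('and', f, g), ('or', f, g), ('X', f), ('F', f), ('G', f), ('U', f, g).
    prefix and loop are sequences of sets of atomic propositions."""
    P, L = len(prefix), len(loop)
    i = canon(i, P, L)
    state = prefix[i] if i < P else loop[i - P]
    op = f[0]
    if op == 'true':
        return True
    if op == 'ap':
        return f[1] in state
    if op == 'not':
        return not holds(f[1], i, prefix, loop)
    if op == 'and':
        return holds(f[1], i, prefix, loop) and holds(f[2], i, prefix, loop)
    if op == 'or':
        return holds(f[1], i, prefix, loop) or holds(f[2], i, prefix, loop)
    if op == 'X':
        return holds(f[1], i + 1, prefix, loop)
    if op == 'F':  # F f  ==  true U f
        return holds(('U', ('true',), f[1]), i, prefix, loop)
    if op == 'G':  # G f  ==  not F not f
        return not holds(('F', ('not', f[1])), i, prefix, loop)
    if op == 'U':
        lhs, rhs = f[1], f[2]
        j, seen = i, set()
        # Walking successor positions visits every distinct future position
        # at most once, so this loop terminates after at most P + L steps.
        while j not in seen:
            seen.add(j)
            if holds(rhs, j, prefix, loop):
                return True
            if not holds(lhs, j, prefix, loop):
                return False
            j = canon(j + 1, P, L)
        return False  # rhs never becomes true at any reachable position
    raise ValueError(f"unknown operator: {op}")

# Trace a; b; b; b; ...  (prefix "a", then loop "b" repeated forever)
prefix, loop = ({'a'},), ({'b'},)
print(holds(('F', ('ap', 'b')), 0, prefix, loop))  # True:  b eventually holds
print(holds(('G', ('ap', 'a')), 0, prefix, loop))  # False: a fails from step 1 on
```

A checker like this is also how one validates a predicted trace: the Transformer's output need not match the solver's particular solution, only satisfy the formula.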
Linear temporal logic (LTL) is a specification language for finite se...
Recent work on neuro-symbolic inductive logic programming has led to promising approaches that can l...
Characterizing the implicit structure of the computation within neural networks is a foundational pr...
Temporal logics are a well established formal specification paradigm to specify the behavior of syst...
We present two novel algorithms for learning formulas in Linear Temporal Logic (LTL) from examples. ...
In this thesis, we study logical and deep learning methods for the temporal reasoning of reactive sy...
In recent years, there has been a great interest in applying machine learning-based techniques to th...
Teaching a deep reinforcement learning (RL) agent to follow instructions in multi-task environments ...
Learning linear temporal logic on finite traces (LTLf) formulae aims to learn a target formula that ...
The effective integration of knowledge representation, reasoning and learning into a robust computat...
We demonstrate how a reinforcement learning agent can use compositional recurrent neural networks ...
Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which ha...
The importance of the efforts towards integrating the symbolic and connectionist paradigms of artif...
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of hig...
This chapter illustrates two aspects of automata theory related to linear-time...