Neurosymbolic AI
Overview
Neurosymbolic AI combines the pattern-recognition power of neural networks with the rigour of symbolic reasoning. My work in this area focuses on formally verifying correctness and robustness properties of neural networks and other machine learning systems using the Rocq proof assistant.
A central goal is to give machine learning systems the same level of trustworthiness that we expect from safety-critical software, by providing machine-checked proofs of their behaviour.
Differentiable Logics in Rocq
Differentiable logics are a family of logical frameworks that translate logical specifications into differentiable loss functions, so that neural networks can be trained towards satisfying those specifications by gradient descent. I am working on:
- Formalizing the metatheory of differentiable logics in Rocq, proving that the interpretations of the logical connectives satisfy the properties one expects of them, such as soundness with respect to the underlying logic.
- Extending the formalization with quantifiers, enabling richer specifications for neural network behaviour.
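To make the idea concrete outside Rocq, a differentiable logic interprets formulas as values in [0, 1] and connectives as differentiable operations on those values; the sketch below uses the product t-norm, with the universal quantifier over a finite sample approximated by the minimum. All names (`dl_and`, `dl_forall`, …) are illustrative and not taken from the formalization.

```python
# Minimal sketch of a differentiable logic over truth values in [0, 1],
# using the product t-norm (illustrative; not the Rocq formalization).

def dl_and(a, b):      # conjunction: product t-norm
    return a * b

def dl_or(a, b):       # disjunction: probabilistic sum (the dual co-norm)
    return a + b - a * b

def dl_not(a):         # negation
    return 1.0 - a

def dl_implies(a, b):  # material implication, via not/or
    return dl_or(dl_not(a), b)

def dl_forall(pred, samples):
    # Universal quantifier over a finite sample, approximated by min:
    # the formula is fully true only if pred holds on every sample.
    return min(pred(x) for x in samples)

# Example: "every sampled input gets confidence at least conf(x)",
# where conf is a stand-in [0, 1]-valued score for a network output.
conf = lambda x: x / (1.0 + x)
truth = dl_forall(lambda x: dl_implies(1.0, conf(x)), [1.0, 2.0, 3.0])
loss = 1.0 - truth  # training minimizes 1 - truth value
```

Each operation is differentiable (almost everywhere, in the case of `min`), which is what lets a gradient-based trainer push the network towards satisfying the specification.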
One tool I use and contribute to is the Vehicle language, a domain-specific language for writing neural-network specifications that can be compiled both to training objectives and to queries for automated verifiers.
Applications
Current verification targets include:
- Network intrusion detection — verifying that classifiers for network traffic correctly identify malicious patterns while remaining robust to adversarial perturbations.
- Autonomous driving controllers — verifying safety and liveness properties of controllers trained by reinforcement learning.
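The robustness property mentioned above is typically stated as: for every perturbation within an ℓ∞-ball of radius ε around an input, the classifier's decision is unchanged. The sketch below encodes that statement for a toy linear classifier; the classifier, the radius, and all names are hypothetical. For a linear model the check over the corners of the ball is exhaustive, but a real verification tool must prove the property for all perturbations of a nonlinear network, which is exactly what the verifiers targeted by specification languages like Vehicle do.

```python
# Sketch of an l-infinity robustness check for a toy LINEAR classifier.
# Hypothetical example; not the verification pipeline itself.
import itertools

EPSILON = 0.05  # assumed perturbation radius

def classify(x, w, b):
    # Toy linear classifier: 1 if w . x + b >= 0, else 0.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

def robust_at(x, w, b, eps=EPSILON):
    # For a linear classifier the decision over an l-infinity ball is
    # extremal at the corners, so checking all 2^n corners is exhaustive
    # here. This does NOT generalise to nonlinear networks, which need
    # a dedicated verifier.
    label = classify(x, w, b)
    for signs in itertools.product((-eps, eps), repeat=len(x)):
        x_pert = [xi + si for xi, si in zip(x, signs)]
        if classify(x_pert, w, b) != label:
            return False
    return True

# An input far from the decision boundary is robust; one near it is not.
robust_far = robust_at([1.0, 0.0], [1.0, -1.0], 0.0)    # True
robust_near = robust_at([0.03, 0.0], [1.0, -1.0], 0.0)  # False
```

The same property, written as a universal quantifier over the ε-ball, is what a differentiable logic turns into a training loss and what a verifier discharges as a proof obligation.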
Related Publications
Jairo Miguel Marulanda-Giraldo, Ekaterina Komendantskaya, Alessandro Bruni, Reynald Affeldt, Matteo Capucci, Enrico Marchioni: Our Ongoing Work: Quantifiers for Differentiable Logics in Rocq. SAIV 2025 PDF
Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Slusarz, Kathrin Stark: Taming Differentiable Logics with Coq Formalisation. ITP 2024 arXiv PDF
Funding
This project is supported in part through collaborations within the EuroProofNet COST Action network.