Alessandro Bruni

Neurosymbolic AI

Overview

Neurosymbolic AI combines the pattern-recognition power of neural networks with the rigour of symbolic reasoning. My work in this area focuses on formally verifying correctness and robustness properties of neural networks and other machine learning systems using the Rocq proof assistant.

A central goal is to give machine learning systems the same level of trustworthiness that we expect from safety-critical software, by providing machine-checked proofs of their behaviour.

Differentiable Logics in Rocq

Differentiable logics are a family of logical frameworks that allow neural networks to be trained to satisfy logical specifications by differentiating through the logic. Together with collaborators, I am formalising these logics, including their quantifiers, in the Rocq proof assistant.
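To illustrate the idea, here is a minimal sketch (my own illustration, not the definitions from the papers below): formulas are interpreted as truth values in [0, 1] using a product t-norm, and comparisons are replaced by a smooth sigmoid surrogate, so the whole interpretation is differentiable in the network's outputs and can be minimised as a training loss.

```python
import math

def t_and(a, b):
    # conjunction interpreted by the product t-norm
    return a * b

def t_or(a, b):
    # disjunction: probabilistic sum, the dual of the product t-norm
    return a + b - a * b

def t_not(a):
    # negation: standard involutive complement on [0, 1]
    return 1.0 - a

def geq(x, y, k=10.0):
    # soft surrogate for "x >= y": a sigmoid with sharpness k,
    # differentiable everywhere, approaching a step function as k grows
    return 1.0 / (1.0 + math.exp(-k * (x - y)))

def property_truth(in_region, confidence):
    # example specification: "if the input lies in the region of class A,
    # then the classifier's confidence is at least 0.9",
    # encoded as  not(in_region) OR (confidence >= 0.9)
    return t_or(t_not(in_region), geq(confidence, 0.9))

def logic_loss(in_region, confidence):
    # loss = 1 - truth value: minimising it pushes the network
    # towards satisfying the specification
    return 1.0 - property_truth(in_region, confidence)
```

Different choices of t-norm (Gödel, Łukasiewicz, product) and of the comparison surrogate give different differentiable logics with different gradient behaviour; making those choices precise and proving their properties is exactly what a mechanised formalisation supports.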

One tool I use and contribute to is the Vehicle language, a domain-specific language for writing neural-network specifications that can be checked formally.
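For a flavour of what such specifications look like, here is a sketch in the style of Vehicle's specification language, written from memory of its tutorial; the exact syntax and standard-library names may differ from the current release. It states a monotonicity property of a hypothetical one-output network `score`:

```
@network
score : Vector Rat 2 -> Vector Rat 1

@property
monotonic : Bool
monotonic = forall x y .
  x ! 0 <= y ! 0 and x ! 1 == y ! 1 =>
    score x ! 0 <= score y ! 0
```

The point of such a DSL is that the same declarative property can be dispatched both to a verifier and to a training procedure, rather than being re-encoded by hand for each backend.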

Publications

  1. Jairo Miguel Marulanda-Giraldo, Ekaterina Komendantskaya, Alessandro Bruni, Reynald Affeldt, Matteo Capucci, Enrico Marchioni: Our Ongoing Work: Quantifiers for Differentiable Logics in Rocq. SAIV 2025 PDF

  2. Reynald Affeldt, Alessandro Bruni, Ekaterina Komendantskaya, Natalia Slusarz, Kathrin Stark: Taming Differentiable Logics with Coq Formalisation. ITP 2024 arXiv PDF

Funding

This project is supported in part through collaborations within the EuroProofNet COST Action network.