Dagstuhl Seminar 26421
Advancing Testability and Verifiability of CPS with Neurosymbolic and Large Language Models
(Oct 11 – Oct 16, 2026)
Organizers
- Simin Nadjm-Tehrani (Linköping University, SE)
- Ruzica Piskac (Yale University - New Haven, US)
- Armando Solar-Lezama (MIT - Cambridge, US)
- Xi (James) Zheng (Macquarie University - Sydney, AU)
Contact
- Marsha Kleinbauer (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
The integration of machine learning into Cyber-Physical Systems (CPS) introduces unprecedented challenges for ensuring safety, reliability, and trustworthiness. As these systems become more autonomous—operating in domains such as transportation, healthcare, and robotics—the limitations of traditional testing and formal verification methods become increasingly evident. The probabilistic nature of learning components, lack of explicit specifications, and unpredictable behavior in out-of-distribution conditions demand new paradigms.
This Dagstuhl Seminar will explore how neurosymbolic techniques and Large Language Models (LLMs) can be jointly leveraged to tackle the pressing challenges in testing and verifying learning-enabled CPS. The goal is to establish a unified framework for building robust, explainable, and formally verifiable CPS. The seminar will focus on integrating neurosymbolic reasoning into system design for deterministic guarantees, while employing LLMs for automated test generation, formal specification extraction, and system interpretability. Topics include LLM-guided specification mining, test generation for edge cases, runtime verification with neurosymbolic models, and end-to-end validation pipelines. By bridging data-driven learning with formal methods, the seminar aims to chart a path toward safe, trustworthy, and certifiable AI-integrated CPS.
Key Topics to Be Addressed
- Formal Specification Mining and Alignment: Explore how LLMs can extract candidate specifications from natural language and sensor data, while neurosymbolic distillation aligns them with system-level requirements through differentiable reasoning.
- Specification-Guided Design and Synthesis: Investigate symbolic and neurosymbolic methods for top-down synthesis of perception, planning, and control modules using domain-specific languages and formally verified specifications.
- LLM-Driven Test Generation: Leverage LLMs to generate diverse test scenarios—including rare and out-of-distribution cases—from natural language, logs, and multi-modal data to improve CPS test coverage.
- Runtime Monitoring, Verification, and Cybersecurity Incident Detection and Treatment: Develop neurosymbolic techniques for runtime monitoring, anomaly detection, and verification to enable adaptive CPS behavior that complies with formal specifications and facilitates cybersecurity incident detection and treatment.
- Integrated LLM–Neurosymbolic Pipelines: Design end-to-end workflows that combine LLM-based specification inference and test generation with interpretable neurosymbolic components to enable scalable, explainable, and formally verifiable CPS.

The seminar will bring together experts in formal methods, AI, robotics, software engineering, systems engineering, and control theory to tackle these urgent challenges. Through a mix of presentations, discussions, and collaborative group work, participants will define a shared research roadmap and initiate new efforts that span theory to practice.
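To make the runtime-monitoring topic above concrete, here is a minimal illustrative sketch (not from the seminar materials — the function, trace format, and property are hypothetical assumptions): a discrete-time monitor checking a bounded-response property over a logged CPS trace, of the kind "whenever an obstacle is detected, braking must begin within `deadline` steps."

```python
def monitor_bounded_response(trace, deadline):
    """Check a bounded-response property over a discrete trace.

    `trace` is a list of (obstacle: bool, brake: bool) pairs, one per
    time step. Returns the indices of obstacle events for which no
    braking occurred within `deadline` subsequent steps (i.e. the
    specification violations).
    """
    violations = []
    for i, (obstacle, _) in enumerate(trace):
        if obstacle:
            # Look at the current step plus the next `deadline` steps.
            window = trace[i : i + deadline + 1]
            if not any(brake for _, brake in window):
                violations.append(i)
    return violations

# Example: obstacle at step 1 is handled in time; obstacle at step 4 is not.
trace = [
    (False, False),
    (True,  False),
    (False, True),   # brake 1 step after detection -> within deadline
    (False, False),
    (True,  False),
    (False, False),
    (False, False),  # no brake within 2 steps -> violation at index 4
]
print(monitor_bounded_response(trace, deadline=2))  # -> [4]
```

A neurosymbolic monitor would replace the boolean `obstacle` input with the output of a learned perception component while keeping the symbolic temporal check; the same violation indices can then drive adaptive behavior or incident reporting.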

Classification
- Artificial Intelligence
- Formal Languages and Automata Theory
- Software Engineering
Keywords
- Learning-enabled Cyber-Physical Systems
- Neurosymbolic AI
- Large language models
- Testability and verifiability