Dagstuhl Seminar 26192

Evaluation of AI Models in Software Engineering

(May 3 – May 8, 2026)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/26192

Organizers
  • Satish Chandra (Google - Mountain View, US)
  • Maliheh Izadi (TU Delft, NL)
  • Michael Pradel (Universität Stuttgart, DE & CISPA - Saarbrücken, DE)

Contact

Motivation

Large Language Models (LLMs) are rapidly reshaping software engineering—powering code generation, debugging, and documentation. Yet while adoption is high, trust is low: only 3% of developers “highly trust” AI coding tools. A key reason is the lack of rigorous, standardized evaluation. Current benchmarks capture basic correctness but overlook qualities critical in real projects, such as readability, maintainability, security, and efficiency.

This Dagstuhl Seminar brings together researchers and practitioners to confront this evaluation gap. Our goal is to define what to measure, and how, when assessing LLM-based tools, and to develop shared benchmarks and guidelines. By building a stronger foundation for evaluation, we aim to foster reliable comparisons, drive progress in tool development, and strengthen confidence in AI for software engineering.

Objectives and Topics

The seminar aims to build a community-driven roadmap for evaluating AI in software engineering. We will:

  • Benchmark AI coding tools: Define standardized tasks and benchmarks that better reflect real-world coding, from multi-file projects to collaborative scenarios.
  • Improve evaluation frameworks: Develop reproducible, community-adopted frameworks and harnesses that support continuous and human-in-the-loop evaluation.
  • Expand metrics: Go beyond accuracy to capture readability, maintainability, security, efficiency, and other critical dimensions of software quality.
  • Address open future challenges: Anticipate emerging issues, such as evaluating coding agents, human–AI collaboration, and responsible benchmarking practices.

Together, these efforts will establish shared standards for rigorous and practical evaluation of AI in software engineering.
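
As a point of reference for the correctness-only baseline these objectives aim to move beyond: functional correctness of code-generating models is commonly reported as pass@k, using the unbiased estimator of Chen et al. (2021). The minimal Python sketch below is illustrative only and not part of the seminar material; the broader quality dimensions listed above (readability, maintainability, security, efficiency) have no comparably established metric yet.

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn from n generations of which c pass the tests,
    is functionally correct (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations per task, 3 of which pass the hidden tests.
print(pass_at_k(n=10, c=3, k=1))  # 0.30
print(pass_at_k(n=10, c=3, k=5))  # ~0.92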

Copyright Satish Chandra, Maliheh Izadi, and Michael Pradel

LZI Junior Researchers

This seminar qualifies for Dagstuhl's LZI Junior Researchers program. Schloss Dagstuhl wishes to enable the participation of junior scientists whose specialisation fits this Dagstuhl Seminar, even if they are not yet on the radar of the organizers. Applications by outstanding junior scientists are possible until December 5, 2025.


Classification
  • Artificial Intelligence
  • Software Engineering

Keywords
  • large language models
  • evaluation
  • benchmarking
  • metrics
  • frameworks