DAI-25 Tutorials

See Workshop & Tutorial Schedule

    Workshop 1: AI Agent and Embodied Intelligence

    09:00–14:30, Nov. 21, 2025.
    Jing Huo (Nanjing University), Tianpei Yang (Nanjing University), Jieqi Shi (Nanjing University)
Abstract
This workshop explores the intersection of AI agents and embodied intelligence. With the rapid development of LLMs, the capabilities of AI agents and embodied AI have advanced as well. Although both areas focus on how intelligent systems perceive, reason, and act in environments, their current technical focuses are quite different. This workshop will therefore examine the technical distinctions between the two fields: what is truly unique to UI/game/other virtual agents (tool usage, memory, screen I/O, etc.) versus embodied agents (sensor noise, reasoning in 3D environments, action generalization, safety, etc.). We will also compare the technical pathways of AI agents and embodied AI, addressing several pressing questions: How can UI agents borrow embodied strategies for 3D spatial reasoning in desktop environments? Can game agents transfer hierarchical planning skills to robotic manipulation? What common benchmarks or foundation models (e.g., multimodal LLMs, diffusion policies) could unite these fields?
Details

    Workshop 2: LLM-based Multi-Agent Systems: Towards Responsible, Reliable, and Scalable Agentic Systems (LaMAS)

    09:00–15:30, Nov. 21, 2025.
    Muning Wen (Shanghai Jiao Tong University), Stefano V. Albrecht (DeepFlow), Weinan Zhang (Shanghai Jiao Tong University)
Abstract
This workshop focuses on the emerging field of multi-agent systems powered by Large Language Models (LLMs), addressing the critical challenges and opportunities that arise when multiple LLM agents interact, collaborate, and coordinate to solve complex tasks. While recent progress has focused on enhancing the capabilities of agents, there is a clear gap in systematically addressing failure modes, alignment challenges, and responsible behavior in multi-step, real-world agent interactions. As LLMs become increasingly capable and accessible, there is growing interest in leveraging multiple agents to tackle problems that exceed the capabilities of individual models, with a focus on making these systems powerful, transparent, verifiable, and aligned with human intent.
Details

    Workshop 3: LLM-Based Agents with Reinforcement Learning

    09:00–15:30, Nov. 22, 2025.
    Haifeng Zhang (Chinese Academy of Sciences), Xue Yan (Chinese Academy of Sciences), Jiajun Chai (Meituan), Yan Song (University College London)
Abstract
This workshop, "LLM-Based Agents with Reinforcement Learning," explores the potential of integrating Large Language Models (LLMs) with Reinforcement Learning (RL) to address complex, real-world decision-making challenges. Leveraging rich prior knowledge and powerful reasoning abilities, LLMs have shown impressive performance on sophisticated decision-making tasks. In turn, RL algorithms can further enhance the reasoning capabilities of LLMs through experience-driven learning. The workshop will focus on strategies for fusing the extensive knowledge of LLMs with the strong experience-summarization capacity of RL algorithms, aiming to push the boundaries of what is possible in complex decision-making.
Details

    Workshop 4: Bridging Disciplines in Distributed AI (BDDAI)

    09:00–15:30, Nov. 22, 2025.
    Dr. Asieh Salehi Fathabadi (University of Southampton, UK), Prof. Pauline Leonard (University of Southampton, UK), Dr. Yali Du (King's College London, UK), Dr. Teresa Scassa (University of Ottawa, Canada), Prof. Thomas Irvine (University of Southampton, UK).
Abstract
BDDAI brings together researchers from AI, multi-agent systems, sociology, economics, cognitive science, policy, and law to rethink the design of distributed intelligence. It will feature talks, panels, and collaborative sessions to explore how cross-disciplinary models can inform new architectures and approaches to DAI.
Details

    Workshop 5: LLMs in Games: Reasoning, Strategy, and Distributed Intelligence

    09:00–12:30, Nov. 22, 2025.
    Yuanheng Zhu (Chinese Academy of Sciences), Kun Shao (Huawei London Research Centre), Simon Lucas (Queen Mary University of London), Dongbin Zhao (Chinese Academy of Sciences)
Abstract
This workshop aims to bring together researchers from both LLMs and games to explore how games can serve as a scalable testbed to study the reasoning, strategy, and distributed intelligence capabilities of LLMs and LLM-based agents, as well as how LLMs can, in turn, transform the design of intelligent game AI and complex simulations. The workshop will cover a wide range of research themes, including but not limited to:
  • LLMs as game-playing agents in board games, card games, video games, and simulation environments.
  • Reasoning and inference in games: logical puzzles, deductive reasoning, and multi-step problem-solving.
  • Strategy and planning in dynamic and long-horizon environments.
  • Multi-agent interactions: cooperation, competition, negotiation, and communication mediated by LLMs.
Details

    Workshop 6: Human-Centric Agentic Web (HAW)

    13:00–17:30, Nov. 21, 2025.
    Panayiotis Danassis (University of Southampton), Naman Goel (University of Oxford), Jesse Wright (University of Oxford), An Zhang (USTC)
Abstract
This workshop aims to explore the emerging infrastructure required to support safe, user-centric, and decentralised AI agents. Unlike classical multi-agent systems that often operated in controlled, closed environments, LLM-based agents are poised to operate openly and widely across the internet, potentially interacting with other agents and humans across jurisdictions, platforms, and use cases. This introduces new challenges in identity management, communication protocols, access control, privacy, auditability, availability and quality of inference data, interoperability, and alignment with user intent. These are not merely engineering problems; they require careful rethinking of how we design agent systems that are robust, accountable, privacy-preserving, and work for diverse stakeholders. This workshop will convene discussion around novel architectures, system design patterns, protocol development, data interoperability, data quality, decentralised governance models, human-in-the-loop safety mechanisms, and standards for inter-agent communication. By bringing together researchers from multi-agent systems, systems engineering, security, HCI, and AI ethics, the workshop seeks to chart a path toward responsible infrastructure for next-generation AI agents.
Details

    Workshop 7: Multi-Agent Security: Limits, Evals, Applications (MASEC)

    16:00–18:00, Nov. 22, 2025.
    Christian Schroeder de Witt (University of Oxford), Klaudia Krawiecka, Chandler Smith
Abstract
Decentralised AI is shifting from isolated agents to networks of interacting agents operating across shared platforms and protocols. This creates security challenges beyond traditional cybersecurity and single-agent safety, where free-form communication and tool use are essential for task generalisation yet open new system-level failure modes. These security vulnerabilities complicate attribution and oversight, and network effects can turn local issues into persistent, systemic risks (e.g., privacy leaks, jailbreak propagation, distributed attacks, or secret collusion). The workshop will address open challenges in multi-agent security [1, MASEC] as a discipline dedicated to securing interactions among agents, human–AI teams, and institutions—emphasising security–performance–coordination trade-offs, secure interaction protocols and environments, and monitoring/containment that remain effective under emergent behaviour. The main focus will lie on threat model discovery through community interaction.
Details