Abstract
This workshop aims to explore the emerging infrastructure required to support safe, user-centric, and decentralised AI agents. Unlike classical multi-agent systems, which typically operated in controlled, closed environments, LLM-based agents are poised to operate openly and widely across the internet, potentially interacting with other agents and humans across jurisdictions, platforms, and use cases. This introduces new challenges in identity management, communication protocols, access control, privacy, auditability, availability and quality of inference data, interoperability, and alignment with user intent. These are not merely engineering problems; they require a careful rethinking of how we design agent systems that are robust, accountable, privacy-preserving, and that work for diverse stakeholders.
This workshop will convene discussion around novel architectures, system design patterns, protocol development, data interoperability, data quality, decentralised governance models, human-in-the-loop safety mechanisms, and standards for inter-agent communication. By bringing together researchers from multi-agent systems, systems engineering, security, HCI, and AI ethics, the workshop seeks to chart a path toward responsible infrastructure for next-generation AI agents.
Details