Conversational AI has fundamentally reshaped how we interact with technology. While one-on-one interactions with large language models (LLMs) have seen significant advances, they rarely capture the full complexity of human communication. Many real-world dialogues, including team meetings, family dinners, or classroom lessons, are inherently multi-party. These interactions involve fluid turn-taking, shifting roles, and dynamic interruptions.
For designers and developers, simulating natural and engaging multi-party conversations has historically required a trade-off: settle for the rigidity of scripted interaction or accept the unpredictability of purely generative models. To bridge this gap, we need tools that blend the structural predictability of a script with the spontaneous, improvisational nature of human conversation.
To address this need, we introduce DialogLab, presented at ACM UIST 2025, an open-source prototyping framework for authoring, simulating, and testing dynamic human-AI group conversations. DialogLab provides a unified interface for managing multi-party dialogue complexity, handling everything from defining agent personas to orchestrating turn-taking dynamics. By integrating real-time improvisation with structured scripting, the framework lets developers test conversations ranging from a structured Q&A session to a free-flowing creative brainstorm. Our evaluations with 14 participants, including end users and domain experts, confirm that DialogLab supports efficient iteration and realistic, adaptable multi-party design for training and research.
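To make the scripted-plus-improvised blend concrete, here is a minimal sketch of one way such an orchestrator could interleave scripted beats with free-form turns. All names (`SCRIPT`, `run_dialogue`, `improvise`) are illustrative assumptions for this post, not DialogLab's actual API:

```python
import random

# Hypothetical sketch (not DialogLab's real interface): a turn manager that
# follows a script but inserts improvised turns between scripted beats.

SCRIPT = [
    ("Moderator", "Welcome everyone. Let's begin with introductions."),
    ("Moderator", "Any questions before we move on?"),
]

AGENTS = ["Alice", "Bob", "Moderator"]

def improvise(agent):
    # Placeholder for an LLM call that would generate an in-character reply.
    return f"[{agent} improvises a reply]"

def run_dialogue(script, agents, improv_turns=2, seed=0):
    """Interleave each scripted line with a few improvised turns."""
    rng = random.Random(seed)
    transcript = []
    for speaker, line in script:
        transcript.append((speaker, line))  # scripted beat
        for _ in range(improv_turns):       # free-form interlude
            # Pick any agent other than the last scripted speaker.
            agent = rng.choice([a for a in agents if a != speaker])
            transcript.append((agent, improvise(agent)))
    return transcript

if __name__ == "__main__":
    for speaker, line in run_dialogue(SCRIPT, AGENTS):
        print(f"{speaker}: {line}")
```

The design point this sketch illustrates is the trade-off described above: the script guarantees structural predictability, while the improvised interludes leave room for spontaneity between scripted beats.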

