
ConferencePulse: Building a Live AI-Powered Conference Assistant with .NET's Composable AI Stack

Published 2026-05-04 12:33:58 · Data Science

Welcome to an interactive exploration of ConferencePulse—a Blazor Server application that transforms live conference sessions with real-time AI. Built entirely on .NET's new composable AI stack, this app demonstrates how you can integrate polling, Q&A, insights, and summarization without juggling mismatched libraries. Below, we answer the most common questions about how it all works.

What is ConferencePulse and what does it do?

ConferencePulse is a live conference assistant designed for presenters and attendees. It runs on Blazor Server and lets attendees join a session by scanning a QR code. Once connected, they can vote in AI-generated polls, ask questions that the AI answers using a retrieval-augmented generation (RAG) pipeline, and watch real-time insights appear as data flows in. When the session ends, the app automatically produces a comprehensive summary by merging analyses from multiple AI agents. The entire system is built on .NET 10 and uses Aspire to orchestrate services like Qdrant for vector storage, PostgreSQL for relational data, and Azure OpenAI for language models.
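The Aspire orchestration described above can be sketched in an AppHost project. This is a minimal illustration, not the app's actual code: the resource names, project name, and database name are all assumptions (the Qdrant resource requires the Aspire.Hosting.Qdrant package).

```csharp
// AppHost/Program.cs — illustrative Aspire wiring for the services mentioned
// above. Resource and project names here are invented for the example.
var builder = DistributedApplication.CreateBuilder(args);

var postgres = builder.AddPostgres("postgres")
                      .AddDatabase("conferencepulse");     // relational data
var qdrant   = builder.AddQdrant("qdrant");                // vector storage
var openai   = builder.AddConnectionString("azureOpenAi"); // Azure OpenAI endpoint

builder.AddProject<Projects.ConferencePulse_Web>("web")
       .WithReference(postgres)
       .WithReference(qdrant)
       .WithReference(openai);

builder.Build().Run();
```

Because the Web project only receives references, the same code runs locally against containerized Postgres/Qdrant and in the cloud against managed equivalents.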

Source: devblogs.microsoft.com

How does ConferencePulse leverage the composable AI stack from .NET?

The app is divided into five projects that mirror the composable stack: Web (Blazor UI), Core (models and interfaces), Ingestion (data pipelines and vector search), Agents (AI workflows), and Mcp (Model Context Protocol server and client). Instead of stitching together incompatible libraries, each building block exposes stable, unified abstractions. For example, IChatClient from Microsoft.Extensions.AI works with OpenAI, Azure OpenAI, Ollama, and more—so switching providers requires only configuration changes. The Microsoft.Extensions.VectorData library provides a consistent API for vector databases like Qdrant, while the Microsoft.Extensions.DataIngestion pipeline handles document chunking, embedding, and indexing. This coherent stack eliminates the typical friction of mixing multiple ecosystems, letting the team focus on features.
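As a hedged illustration of that provider neutrality, the same call site can sit behind a local Ollama model or Azure OpenAI, with only the construction differing. Member names here (`AsIChatClient`, `GetResponseAsync`) track recent Microsoft.Extensions.AI previews and may differ between versions; `useLocalModel` and `endpoint` are placeholder values.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;

bool useLocalModel = false;                       // placeholder configuration flag
string endpoint = "https://example.openai.azure.com"; // placeholder endpoint

// Pick a provider at startup; everything downstream sees only IChatClient.
IChatClient chat = useLocalModel
    ? new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")
    : new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential())
          .GetChatClient("gpt-4o")
          .AsIChatClient();

// The same call works regardless of which provider was registered.
ChatResponse response = await chat.GetResponseAsync("One-line welcome for attendees.");
Console.WriteLine(response.Text);
```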

How does ConferencePulse generate live polls and derive insights?

Polls are dynamically created by an AI agent that analyzes the session's knowledge base—markdown from a GitHub repo—and extracts key talking points. The agent uses IChatClient to prompt a language model to propose multiple-choice questions relevant to the session content. Attendees vote through the Blazor interface, and results stream to the presenter's dashboard in real time. Simultaneously, a separate insights agent monitors both poll results and incoming audience questions. It identifies emerging themes, sentiment shifts, and frequently asked topics, then updates a live feed of auto-generated insights. This combination of pre-session knowledge grounding and real-time data analysis keeps the conversation fresh and data-driven.
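The poll-generation step might look like the following sketch, using the structured-output helper from Microsoft.Extensions.AI. The `Poll` record and the prompt are invented for illustration; `GetResponseAsync<T>` follows recent preview naming and may vary by version.

```csharp
using Microsoft.Extensions.AI;

public record Poll(string Question, string[] Options);

public static class PollAgent
{
    // Ask the model for one multiple-choice question grounded in a chunk of
    // the session's knowledge base. GetResponseAsync<T> requests JSON output
    // matching the Poll shape and deserializes it for us.
    public static async Task<Poll?> GeneratePollAsync(IChatClient chat, string sessionNotes)
    {
        var response = await chat.GetResponseAsync<Poll>(
            $"""
            From the session notes below, propose one multiple-choice poll
            question with three or four short answer options.

            {sessionNotes}
            """);
        return response.Result;
    }
}
```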

How does ConferencePulse answer audience questions in real time using RAG?

When an attendee submits a question, the app runs a retrieval-augmented generation (RAG) pipeline. First, the question is converted into an embedding vector via the configured embedding model. That vector is searched against the vector database (Qdrant), which contains chunked and embedded content from the session's knowledge base, Microsoft Learn docs, and GitHub wiki. The top matching chunks are retrieved and passed—together with the original question—to a language model through IChatClient. The model formulates a concise, grounded answer and streams it back to the attendee. This approach keeps responses accurate, grounded in authoritative material, and far less prone to hallucination. The whole process, from question submission to answer display, takes only a couple of seconds, making the Q&A feel interactive and natural.
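The three stages of that pipeline (embed, retrieve, generate) can be sketched as below. All type and member names (`SessionChunk`, the `VectorStore*` attributes, `SearchAsync`, `GenerateAsync`) follow the Microsoft.Extensions.VectorData and Microsoft.Extensions.AI previews and are assumptions, not the app's actual code; the 1536-dimension vector assumes a typical OpenAI embedding model.

```csharp
using System.Text;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;

public class SessionChunk
{
    [VectorStoreKey] public Guid Id { get; set; }
    [VectorStoreData] public string Text { get; set; } = "";
    [VectorStoreVector(1536)] public ReadOnlyMemory<float> Vector { get; set; }
}

public static class QaAgent
{
    // 1) Embed the question, 2) retrieve the nearest chunks from Qdrant,
    // 3) pass chunks + question to the model for a grounded answer.
    public static async Task<string> AnswerAsync(
        IEmbeddingGenerator<string, Embedding<float>> embedder,
        VectorStoreCollection<Guid, SessionChunk> chunks,
        IChatClient chat,
        string question)
    {
        var vector = (await embedder.GenerateAsync(question)).Vector;

        var context = new StringBuilder();
        await foreach (var hit in chunks.SearchAsync(vector, top: 3))
            context.AppendLine(hit.Record.Text);

        var response = await chat.GetResponseAsync(
            $"Using only this context, answer concisely.\nContext:\n{context}\nQuestion: {question}");
        return response.Text;
    }
}
```

In the real app the answer is streamed token-by-token to the Blazor UI rather than returned as one string.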


How does session summarization work, and what role do multiple AI agents play?

When the presenter ends a session, ConferencePulse launches a multi‑agent summarization workflow. Three specialized agents run concurrently: one analyzes poll outcomes and trends, another reviews all questions and answers for common themes, and a third inspects the insight feed for notable patterns. Each agent produces a partial summary in a structured format using its own IChatClient instance. Once all agents finish, a merger agent collects these partial reports and synthesizes a cohesive final summary. This parallel approach not only speeds up the process but also ensures that different aspects of the session—poll data, audience concerns, and dynamic insights—are thoroughly considered. The resulting summary can be saved to the conference database or displayed on the presenter’s dashboard for follow‑up discussions.
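The fan-out/fan-in shape of that workflow can be sketched with `Task.WhenAll`. The prompts are invented, and for brevity this sketch shares one `IChatClient` where the article describes each agent holding its own instance.

```csharp
using System.Linq;
using Microsoft.Extensions.AI;

public static class SummaryWorkflow
{
    // Fan-out: three specialized prompts run concurrently.
    // Fan-in: a merger call synthesizes one cohesive summary.
    public static async Task<string> SummarizeAsync(
        IChatClient chat, string pollData, string qaLog, string insightFeed)
    {
        var partials = await Task.WhenAll(
            chat.GetResponseAsync($"Summarize poll outcomes and trends:\n{pollData}"),
            chat.GetResponseAsync($"Extract common themes from this Q&A log:\n{qaLog}"),
            chat.GetResponseAsync($"List notable patterns in these insights:\n{insightFeed}"));

        var merged = await chat.GetResponseAsync(
            "Merge these partial reports into one cohesive session summary:\n---\n" +
            string.Join("\n---\n", partials.Select(p => p.Text)));
        return merged.Text;
    }
}
```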

How does the data ingestion pipeline prepare content for the AI features?

Before the session starts, ConferencePulse must ingest the relevant content. A presenter points the app to a GitHub repository, and the Microsoft.Extensions.DataIngestion pipeline automatically downloads all markdown files. The pipeline then splits each document into manageable chunks, generates embeddings using the same embedding model used at query time, and indexes those chunks into the Qdrant vector database. This establishes the session knowledge base that grounds polls, Q&A answers, and insights. The pipeline also handles refresh—if the repo updates, it re-ingests only changed files. By automating this preparation, presenters can focus on their talk without manual content curation, and the AI always has up‑to‑date source material to draw from.
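A rough equivalent of those pipeline stages (chunk, embed, index), written against the lower-level VectorData/AI abstractions rather than the Microsoft.Extensions.DataIngestion pipeline itself. Everything here is illustrative: the naive fixed-size chunker stands in for the pipeline's real splitter, the `SessionChunk` shape is assumed, and member names track preview APIs.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;

public class SessionChunk
{
    [VectorStoreKey] public Guid Id { get; set; }
    [VectorStoreData] public string Text { get; set; } = "";
    [VectorStoreVector(1536)] public ReadOnlyMemory<float> Vector { get; set; }
}

public static class Ingestor
{
    // Chunk → embed → index, spelled out step by step.
    public static async Task IngestAsync(
        IEmbeddingGenerator<string, Embedding<float>> embedder,
        VectorStoreCollection<Guid, SessionChunk> collection,
        IEnumerable<string> markdownPaths)
    {
        await collection.EnsureCollectionExistsAsync();

        foreach (var path in markdownPaths)
        foreach (var chunk in Chunk(await File.ReadAllTextAsync(path), maxChars: 1000))
        {
            // Same embedding model as query time, so vectors are comparable.
            var embedding = await embedder.GenerateAsync(chunk);
            await collection.UpsertAsync(new SessionChunk
            {
                Id = Guid.NewGuid(),
                Text = chunk,
                Vector = embedding.Vector,
            });
        }
    }

    // Naive fixed-size splitter, a stand-in for the pipeline's real chunker.
    private static IEnumerable<string> Chunk(string text, int maxChars)
    {
        for (var i = 0; i < text.Length; i += maxChars)
            yield return text.Substring(i, Math.Min(maxChars, text.Length - i));
    }
}
```

The DataIngestion pipeline also tracks change detection for incremental refresh, which this sketch omits.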