Build an AI Voice Agent with Flutter

Learn to create a Flutter-based AI Voice Agent using VideoSDK. Follow our step-by-step guide to build and test your agent efficiently.

Introduction to AI Voice Agents in Flutter

In recent years, AI Voice Agents have become an integral part of many applications, providing users with an interactive and hands-free experience. These agents are designed to understand and respond to human speech, making them invaluable in various industries, including customer service, healthcare, and smart home automation.

What is an AI Voice Agent?

An AI Voice Agent is a software application that uses artificial intelligence to process and respond to voice commands. It typically involves speech-to-text (STT) to convert spoken language into text, a language model to understand and generate responses, and text-to-speech (TTS) to convert the response back into voice.

Why are they important for Flutter developers?

In the context of Flutter, a popular UI toolkit for building natively compiled applications, AI Voice Agents can enhance user experience by providing voice-driven navigation and assistance within apps. This can greatly benefit accessibility, user engagement, and overall app functionality.

Core Components of a Voice Agent

  1. Speech-to-Text (STT): Converts spoken language into text.
  2. Language Model (LLM): Processes the text and generates a response.
  3. Text-to-Speech (TTS): Converts the response text back into speech.

For a comprehensive understanding, refer to the AI Voice Agent core components overview.
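
To make the loop concrete, here is a minimal, illustrative Python sketch of how these three stages hand data to each other. The transcribe, generate_reply, and synthesize functions are hypothetical placeholders rather than part of any SDK; the VideoSDK pipeline used later in this guide wires up real implementations of each stage for you.

# Illustrative only: the three core stages of a voice agent, with hypothetical
# placeholder functions standing in for real STT, LLM, and TTS services.

def transcribe(audio_bytes: bytes) -> str:
    """Speech-to-Text: turn captured audio into text (placeholder)."""
    return "What is a StatelessWidget?"

def generate_reply(user_text: str) -> str:
    """Language Model: produce a response to the transcribed text (placeholder)."""
    return f"You asked: {user_text} A StatelessWidget is an immutable Flutter widget."

def synthesize(reply_text: str) -> bytes:
    """Text-to-Speech: turn the response text back into audio (placeholder)."""
    return reply_text.encode("utf-8")

def handle_utterance(audio_bytes: bytes) -> bytes:
    # STT -> LLM -> TTS: the same cascade the VideoSDK pipeline automates.
    text = transcribe(audio_bytes)
    reply = generate_reply(text)
    return synthesize(reply)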

What You'll Build in This Tutorial

In this guide, you will learn how to build an AI Voice Agent using Flutter and VideoSDK. We will walk you through setting up your development environment, creating a custom agent, and testing it in the AI Agent Playground environment.

Architecture and Core Concepts

High-Level Architecture Overview

The architecture of an AI Voice Agent involves several components working together to process voice commands and generate responses. The data flow typically follows this sequence:
  1. User Speech: Captured by the microphone.
  2. Voice Activity Detection (VAD): Identifies when the user is speaking.
  3. Speech-to-Text (STT): Transcribes the spoken words into text.
  4. Language Model (LLM): Analyzes the text and formulates a response.
  5. Text-to-Speech (TTS): Converts the response text into audible speech.
  6. Agent Response: Delivered back to the user.

Understanding Key Concepts in the VideoSDK Framework

  • Agent: The core class representing your bot, responsible for handling interactions.
  • Cascading Pipeline: Manages the flow of audio processing, integrating STT, LLM, and TTS.
  • VAD & TurnDetector: These components help the agent determine when to listen and when to speak, ensuring smooth interaction.

Setting Up the Development Environment

Prerequisites

Before you begin, ensure you have the following:
  • Python 3.11+: Required for running the VideoSDK framework.
  • VideoSDK Account: Sign up at app.videosdk.live to obtain API keys.

Step 1: Create a Virtual Environment

To keep dependencies organized, create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

Step 2: Install Required Packages

Install the necessary Python packages using pip:

pip install videosdk

Note that the pipeline built later in this guide also relies on the VideoSDK AI Agents framework and its Deepgram, OpenAI, ElevenLabs, Silero, and turn-detector plugins; these may be distributed as separate packages, so check the VideoSDK AI Agents documentation for the exact install commands for your setup.

Step 3: Configure API Keys in a .env file

Create a .env file in your project directory and add your VideoSDK API key:

VIDEOSDK_API_KEY=your_api_key_here

The cascading pipeline also calls Deepgram, OpenAI, and ElevenLabs, so those providers' API keys must be available to the process as well, typically as environment variables; check each plugin's documentation for the exact variable names it expects.
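
If you want to confirm the keys are visible to your script before launching the agent, a small check like the following can help. It assumes the python-dotenv package is installed, and the provider variable names (DEEPGRAM_API_KEY, OPENAI_API_KEY, ELEVENLABS_API_KEY) are common conventions rather than names guaranteed by the plugins, so verify them against each plugin's documentation.

# Optional check: load .env and confirm the expected keys are present before
# starting the agent. The provider variable names below are assumptions based
# on common conventions; verify them in each plugin's documentation.
import os
from dotenv import load_dotenv  # requires: pip install python-dotenv

load_dotenv()

for key in ("VIDEOSDK_API_KEY", "DEEPGRAM_API_KEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY"):
    if not os.getenv(key):
        print(f"Warning: {key} is not set")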

Building the AI Voice Agent: A Step-by-Step Guide

To build your AI Voice Agent, we'll start by presenting the complete code and then break it down into sections for detailed explanations.
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from typing import AsyncIterator

# Pre-downloading the Turn Detector model
pre_download_model()

agent_instructions = "You are an AI Voice Agent developed using Flutter, designed to assist users in navigating and utilizing Flutter applications efficiently. Your persona is that of a friendly and knowledgeable tech assistant. Your primary capabilities include answering questions about Flutter development, providing guidance on using Flutter widgets, and offering tips on optimizing Flutter app performance. You can also help troubleshoot common issues developers face when working with Flutter. However, you are not a substitute for professional development support and must remind users to consult official Flutter documentation or seek expert advice for complex issues. Additionally, you should not provide code snippets that could lead to security vulnerabilities or violate Flutter's best practices."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

To have your AI Voice Agent join a specific room, you need a meeting ID. (If you leave room_id unset, as in the code above, the playground auto-creates a room for you.) You can generate a meeting ID using the VideoSDK API. Here's a sample curl command:

curl -X POST "https://api.videosdk.live/v1/meetings" \
-H "Authorization: YOUR_API_KEY" \
-H "Content-Type: application/json"
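
If you prefer to stay in Python, the same request can be issued with the requests library. This is simply a transliteration of the curl command above; the exact shape of the JSON response is not shown here, so print it and copy the returned meeting ID into RoomOptions.

# Python equivalent of the curl command above (requires: pip install requests).
import os
import requests

response = requests.post(
    "https://api.videosdk.live/v1/meetings",
    headers={
        "Authorization": os.environ["VIDEOSDK_API_KEY"],  # your VideoSDK credential
        "Content-Type": "application/json",
    },
)
response.raise_for_status()
print(response.json())  # inspect the payload for the returned meeting ID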

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is where you define the behavior of your voice agent. It inherits from the Agent class and uses the agent_instructions string to guide interactions. The on_enter and on_exit methods define what the agent says when a session starts and ends.
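As a variation, the greeting and farewell can be tailored without touching anything else. The sketch below reuses only the APIs already shown in the full example (the Agent base class and session.say); the class name FlutterHelperAgent is just an illustrative choice.

# A variation of the custom agent with Flutter-specific greetings.
# Uses only the Agent base class and session.say, as in the full example above.
class FlutterHelperAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        # Spoken when the agent joins the session.
        await self.session.say("Hi! Ask me anything about Flutter widgets or performance.")

    async def on_exit(self):
        # Spoken when the session ends.
        await self.session.say("Thanks for stopping by. Happy building!")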

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is crucial as it integrates all the components that process the user's speech. It includes:
  • STT (DeepgramSTT): Converts speech to text.
  • LLM (OpenAILLM): Uses a language model to generate responses.
  • TTS (ElevenLabsTTS): Converts the text response back to speech.
  • VAD (SileroVAD): Detects when the user is speaking.
  • TurnDetector: Helps manage conversation turns.
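
If you want to experiment with how eagerly the agent detects speech or takes its turn, you can adjust the same constructor arguments used in the full example. The threshold values below are illustrative only; consult each plugin's documentation for their exact semantics and valid ranges.

# Same pipeline as above with different tuning values (illustrative only).
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.5),              # VAD sensitivity (see plugin docs)
    turn_detector=TurnDetector(threshold=0.7)  # turn-end confidence (see plugin docs)
)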

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the session, creating a conversation flow and pipeline. The make_context function sets up the room options, and the main block starts the job.
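
To join a specific pre-created meeting (for example, one created in Step 4.1) instead of letting the playground auto-create a room, uncomment and set room_id in make_context, as sketched below.

# make_context configured to join a pre-created room.
# YOUR_MEETING_ID is the ID returned by the API call in Step 4.1.
def make_context() -> JobContext:
    room_options = RoomOptions(
        room_id="YOUR_MEETING_ID",
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)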

Running and Testing the Agent

Step 5.1: Running the Python Script

To run your AI Voice Agent, execute the following command in your terminal:
python main.py

Step 5.2: Interacting with the Agent in the Playground

Once the script is running, you will see a playground link in the console. Open this link in a browser to join the session and interact with your agent.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can enhance your agent by adding custom tools using the function_tool concept. This allows you to integrate additional capabilities tailored to your application's needs.
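
A rough sketch of what a custom tool might look like is shown below. The function_tool decorator is the mechanism referenced above, but the exact import path and the way tools are registered on the agent (the tools= argument here) are assumptions and should be verified against the VideoSDK AI Agents documentation before relying on them.

# Hedged sketch: exposing a custom tool to the agent. Verify the import path
# and the tools= registration against the VideoSDK AI Agents docs.
from videosdk.agents import Agent, function_tool

@function_tool
async def recommended_flutter_channel() -> str:
    """Return the Flutter release channel this assistant recommends."""
    return "stable"

class MyVoiceAgentWithTools(Agent):
    def __init__(self):
        super().__init__(
            instructions=agent_instructions,
            tools=[recommended_flutter_channel],  # assumed registration mechanism
        )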

Exploring Other Plugins

While this tutorial uses specific plugins, VideoSDK supports various STT, LLM, and TTS options. Consider exploring alternatives to find the best fit for your use case.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API keys are correctly configured in the .env file. Double-check for typos or missing keys.

Audio Input/Output Problems

Verify that your microphone and speakers are working correctly. Check system settings and permissions.

Dependency and Version Conflicts

Ensure all dependencies are installed with compatible versions. Use a virtual environment to manage them effectively.

Conclusion

Summary of What You've Built

In this tutorial, you've built a fully functional AI Voice Agent with VideoSDK that can power voice interactions in your Flutter applications. You've learned about the architecture, set up your environment, and tested your agent in the playground.

Next Steps and Further Learning

To further enhance your skills, explore additional plugins and customize your agent with new features. Consider diving deeper into the VideoSDK documentation for more advanced capabilities.
