Introduction to AI Voice Agents for Open Source AI Voice Assistants
What is an AI Voice Agent?
AI Voice Agents are software systems designed to interact with users through voice commands. They process spoken language using technologies like Speech-to-Text (STT), large language models (LLMs), and Text-to-Speech (TTS). These agents can perform tasks, answer questions, and provide information, making them valuable tools in various industries.
Why are they important for the Open Source AI Voice Assistant industry?
In the open source domain, AI Voice Agents can facilitate user interaction with open source tools and projects. They help users navigate complex software, provide installation guidance, and answer questions about open source licensing and community practices. This enhances accessibility and user-friendliness, encouraging broader adoption of open source solutions.
Core Components of a Voice Agent
- STT (Speech-to-Text): Converts spoken language into text.
- LLM (Large Language Model): Processes the transcribed text and generates an appropriate response.
- TTS (Text-to-Speech): Converts text responses back into spoken language.
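Conceptually, a cascaded voice agent is just a loop over these three stages. The sketch below uses stand-in functions (a real agent streams audio and calls hosted STT/LLM/TTS services, often concurrently), purely to make the data flow concrete:

```python
# Stand-in stages; a real agent would call STT/LLM/TTS services here.
def transcribe(audio: bytes) -> str:      # STT stage
    return audio.decode("utf-8")          # pretend the audio bytes are the transcript

def generate_reply(text: str) -> str:     # LLM stage
    return f"You said: {text}"

def synthesize(text: str) -> bytes:       # TTS stage
    return text.encode("utf-8")           # pretend the reply text is audio

def handle_turn(audio_in: bytes) -> bytes:
    """One user turn through the STT -> LLM -> TTS cascade."""
    text = transcribe(audio_in)
    reply = generate_reply(text)
    return synthesize(reply)

print(handle_turn(b"hello"))  # → b'You said: hello'
```

Each stage consumes the previous stage's output, which is exactly the "cascading" structure the VideoSDK pipeline automates.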
What You'll Build in This Tutorial
In this tutorial, we will guide you through building an open source AI voice assistant using the VideoSDK framework. You will learn how to set up the environment, create a custom voice agent, and test it in a playground environment.
Architecture and Core Concepts
High-Level Architecture Overview
The architecture of an AI Voice Agent involves a seamless flow of data from user input to agent response. When a user speaks, the audio is processed by the STT engine to convert it into text. This text is then fed into the LLM, which generates a response. Finally, the TTS engine converts this response back into speech, completing the interaction loop.
```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant STT
    participant LLM
    participant TTS
    User->>Agent: Speak
    Agent->>STT: Forward audio
    STT->>LLM: Send transcribed text
    LLM->>TTS: Send response text
    TTS->>Agent: Return synthesized speech
    Agent->>User: Respond
```
Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class that represents your voice bot, handling interactions and logic.
- CascadingPipeline: Manages the flow of audio processing from STT to LLM to TTS. For more details, refer to the Cascading pipeline in AI voice Agents guide.
- VAD & TurnDetector: Voice Activity Detection (VAD) spots when the user is speaking, and the turn detector decides when they have finished, so the agent knows when to listen and when to respond, ensuring smooth interaction.
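To make the VAD idea concrete, here is a toy energy-threshold detector. This is illustrative only: SileroVAD is a trained neural model that scores frames, not a simple amplitude check, and the sample values below are made up.

```python
def is_speech(frame: list[float], threshold: float = 0.35) -> bool:
    """Classify one audio frame as speech if its mean absolute amplitude
    exceeds the threshold -- conceptually the same knob that SileroVAD's
    threshold parameter tunes, though the real model uses a neural net."""
    energy = sum(abs(s) for s in frame) / len(frame)
    return energy > threshold

silence = [0.01, -0.02, 0.015, -0.01]   # low-energy frame
speech = [0.5, -0.6, 0.55, -0.4]        # high-energy frame
print(is_speech(silence), is_speech(speech))  # → False True
```

Lowering the threshold makes the agent more sensitive (it interrupts less but may treat noise as speech); raising it does the opposite.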
Setting Up the Development Environment
Prerequisites
To start building your AI Voice Agent, ensure you have Python 3.11+ installed and a VideoSDK account. You can sign up at app.videosdk.live.
Step 1: Create a Virtual Environment
Create a virtual environment to manage your project dependencies separately from your system packages. Run the following command:
```shell
python -m venv my_voice_agent_env
```
Activate the virtual environment:
- On Windows: my_voice_agent_env\Scripts\activate
- On macOS/Linux: source my_voice_agent_env/bin/activate
Step 2: Install Required Packages
Install the necessary packages using pip:
```shell
pip install videosdk-agents videosdk-plugins
```
Step 3: Configure API Keys in a .env file
Create a .env file in your project directory and add your API keys:
```
VIDEOSDK_API_KEY=your_api_key_here
```
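The agent reads these keys from the environment at runtime. The framework or the python-dotenv library can load the file for you; for illustration, here is a minimal hand-rolled loader (it assumes plain KEY=VALUE lines with no quoting or export syntax). Note that the Deepgram, OpenAI, and ElevenLabs plugins used later will each expect their own API key as well; check each plugin's documentation for the exact variable names.

```python
import os

def load_env(path: str = ".env") -> None:
    """Read simple KEY=VALUE lines into os.environ, skipping blanks and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over the file
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
    print("VIDEOSDK_API_KEY set:", "VIDEOSDK_API_KEY" in os.environ)
```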
Building the AI Voice Agent: A Step-by-Step Guide
Below is the complete, runnable code for our AI Voice Agent. We will break it down and explain each part in detail.
```python
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model so the first session starts without delay
pre_download_model()

agent_instructions = "You are an open source AI voice assistant designed to assist users with a variety of tasks. Your persona is that of a friendly and knowledgeable guide who is eager to help users navigate through open source software and tools. Your primary capabilities include providing information about open source projects, assisting with installation and setup of open source software, and answering general questions about open source licensing and community practices. You can also guide users on how to contribute to open source projects. However, you must adhere to the following constraints: you are not a legal advisor, so you must include a disclaimer when discussing licensing issues, and you cannot provide personalized technical support beyond general guidance. Always encourage users to consult official documentation or community forums for specific technical issues."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your agent, you need a meeting ID. Use the following curl command to generate one:
```shell
curl -X POST https://api.videosdk.live/v1/meetings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"region": "us-west"}'
```
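If you prefer Python, the same request can be constructed with the standard library. This sketch only builds the request object (endpoint, headers, and payload copied from the curl command above); the commented-out lines show how you would send it, and the exact shape of the JSON response is an assumption to verify against the VideoSDK docs.

```python
import json
import urllib.request

def build_meeting_request(api_key: str, region: str = "us-west") -> urllib.request.Request:
    """Construct (but do not send) the meeting-creation request."""
    return urllib.request.Request(
        "https://api.videosdk.live/v1/meetings",
        data=json.dumps({"region": region}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_meeting_request("YOUR_API_KEY")
# To actually create the meeting:
# with urllib.request.urlopen(req) as resp:
#     meeting = json.load(resp)  # assumed to contain the meeting ID
```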
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class is where you define the behavior of your voice assistant. It inherits from the Agent class and uses the agent_instructions to guide interactions.
```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The CascadingPipeline orchestrates the flow of data through the STT, LLM, and TTS components. Each plugin plays a critical role in processing the audio and generating responses. For a detailed overview, check the AI voice Agent core components overview.
```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The start_session function initializes the agent session and manages the lifecycle of the interaction. The make_context function sets up the room options for the VideoSDK environment.
```python
async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()
```
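The await asyncio.Event().wait() line blocks forever, which is why the process keeps running until you terminate it manually. If you want an explicit shutdown path, one common pattern (plain asyncio, independent of the VideoSDK API) is to set that event from a signal handler. The sketch below demonstrates the idea with a simulated signal:

```python
import asyncio
import signal

async def run_until_stopped(stop: asyncio.Event, on_shutdown) -> None:
    """Block until `stop` is set (e.g. by SIGINT/SIGTERM), then run cleanup."""
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        try:
            loop.add_signal_handler(sig, stop.set)  # Ctrl+C / kill set the event
        except NotImplementedError:
            pass  # not supported on some platforms (e.g. Windows event loops)
    try:
        await stop.wait()
    finally:
        for sig in (signal.SIGINT, signal.SIGTERM):
            try:
                loop.remove_signal_handler(sig)
            except NotImplementedError:
                pass
        await on_shutdown()

async def demo() -> list:
    stop = asyncio.Event()
    log = []

    async def cleanup():
        log.append("cleaned up")

    # Simulate a shutdown signal arriving 10 ms from now.
    asyncio.get_running_loop().call_later(0.01, stop.set)
    await run_until_stopped(stop, cleanup)
    return log

print(asyncio.run(demo()))  # → ['cleaned up']
```

In the agent, the cleanup coroutine would close the session and shut down the context, mirroring the finally block above.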
Running and Testing the Agent
Step 5.1: Running the Python Script
Save the complete script from the previous section as main.py, then start your AI Voice Agent by running the following command in your terminal:
```shell
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
After starting the agent, look for the playground link in your console. Open it in a browser to interact with your voice assistant. Speak into your microphone to issue commands and listen to the agent's responses.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend the capabilities of your voice agent by integrating custom tools. This flexibility enables you to tailor the agent's functionality to specific use cases. For a quick setup, refer to the Voice Agent Quick Start Guide.
Exploring Other Plugins
While we used specific plugins for STT, LLM, and TTS, the VideoSDK framework supports various alternatives. Explore these options to find the best fit for your project's needs. For instance, consider the Deepgram STT Plugin for voice agent, the OpenAI LLM Plugin for voice agent, and the ElevenLabs TTS Plugin for voice agent.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API keys are correctly set in the .env file and that they have the necessary permissions.
Audio Input/Output Problems
Check your microphone and speaker settings. Ensure they are correctly configured and not muted.
Dependency and Version Conflicts
Make sure all dependencies are compatible with your Python version. Use pip list to check installed packages and their versions.
Conclusion
Summary of What You've Built
In this tutorial, you've built a fully functional open source AI voice assistant using the VideoSDK framework. You've learned about the core components, set up the environment, and tested the agent in a playground.
Next Steps and Further Learning
To further enhance your AI voice assistant, explore additional plugins and customization options. Consider contributing to open source projects or developing new features for your agent. For more advanced topics, explore AI voice Agent Sessions.