How to Trigger Actions from a Voice Agent: An Introduction to AI Voice Agents
AI Voice Agents are intelligent systems that understand and process human speech to perform tasks. They are becoming integral to industries that automate voice-driven work: in smart homes, customer service, and healthcare, they set reminders, send messages, control devices, and provide information on demand.
The core components of a voice agent include:
- Speech-to-Text (STT): Converts spoken language into text.
- Large Language Model (LLM): Processes the text to understand and generate responses.
- Text-to-Speech (TTS): Converts the text response back into speech.
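The three components above form a loop: audio in, text through the model, audio back out. The sketch below shows one conversational turn with each stage stubbed out; real agents swap in providers such as Deepgram (STT), OpenAI (LLM), and ElevenLabs (TTS), as covered later in this tutorial.

```python
# One conversational turn in a cascading voice pipeline.
# All three stages are stand-ins for real provider calls.

def speech_to_text(audio: bytes) -> str:
    """STT stage: transcribe raw audio (stubbed here)."""
    return "turn on the lights"

def generate_response(transcript: str) -> str:
    """LLM stage: understand the intent and decide the reply (stubbed here)."""
    return f"Okay, handling: {transcript}"

def text_to_speech(text: str) -> bytes:
    """TTS stage: synthesize audio from the reply text (stubbed here)."""
    return text.encode("utf-8")

def run_turn(audio: bytes) -> bytes:
    transcript = speech_to_text(audio)
    reply = generate_response(transcript)
    return text_to_speech(reply)
```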
In this tutorial, you will learn how to build an AI Voice Agent using the VideoSDK framework to trigger actions based on voice commands.
Architecture and Core Concepts
High-Level Architecture Overview
The architecture of an AI Voice Agent involves a seamless flow of data from user speech to agent response. The process begins with capturing the user's voice, converting it to text, processing the text to understand the intent, generating a response, and finally converting the response back to speech.

Understanding Key Concepts in the VideoSDK Framework
- Agent: Represents the core class of your bot, managing interactions.
- CascadingPipeline: Manages the flow of audio processing, connecting the STT, LLM, and TTS components.
- VAD & TurnDetector: Determine when the agent should listen and when it should speak, ensuring smooth interaction.
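To make the VAD's role concrete, here is a toy energy-based detector: a frame counts as speech when its mean absolute amplitude exceeds a threshold. Production VADs such as Silero are trained models, but the thresholding idea (and the `threshold` parameter you will configure later) is the same.

```python
# Toy voice-activity check: not a real VAD, just the thresholding concept.
def is_speech(frame: list[float], threshold: float = 0.35) -> bool:
    """Return True when the frame's mean absolute amplitude exceeds the threshold."""
    return sum(abs(s) for s in frame) / len(frame) > threshold
```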
Setting Up the Development Environment
Prerequisites
To get started, ensure you have Python 3.11+ installed and a VideoSDK account. You'll need to access the VideoSDK dashboard at app.videosdk.live to obtain your API keys.
Step 1: Create a Virtual Environment
Create a virtual environment to manage your project dependencies:
```shell
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```
Step 2: Install Required Packages
Install the necessary packages using pip:
```shell
pip install videosdk-agents videosdk-plugins
```
Step 3: Configure API Keys in a .env File
Create a `.env` file in your project directory and add your VideoSDK API keys:

```shell
VIDEOSDK_API_KEY=your_api_key_here
```
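The framework reads this key from the process environment, so you need to load the `.env` file at startup. Many projects use the `python-dotenv` package for this; as a stdlib-only alternative, a minimal loader looks like:

```python
# Minimal .env loader (stdlib only): one KEY=VALUE per line, '#' comments ignored.
# The python-dotenv package's load_dotenv() does the same job with more features.
import os
from pathlib import Path

def load_env_file(path: str = ".env") -> None:
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Do not overwrite variables already set in the environment
        os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env_file()` before constructing the pipeline so the plugins can find their credentials.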
Building the AI Voice Agent: A Step-by-Step Guide
Here's the complete code for the AI Voice Agent:
```python
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from typing import AsyncIterator

# Pre-download the Turn Detector model
pre_download_model()

agent_instructions = "You are a proactive and efficient AI Voice Agent designed to assist users in triggering various actions through voice commands. Your primary role is to understand user requests and execute corresponding actions seamlessly. You can perform tasks such as setting reminders, sending messages, controlling smart home devices, and providing information on demand. However, you must always ensure user privacy and data security, and you are not authorized to perform any financial transactions or access sensitive personal information. Always confirm actions with the user before execution and provide clear feedback on the status of their requests. Remember, you are not a human and should not attempt to provide personal opinions or advice beyond your programmed capabilities."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your voice agent, you need a meeting ID. You can generate one using the VideoSDK API:
```shell
curl -X POST \
  https://api.videosdk.live/v1/meetings \
  -H "Authorization: Bearer your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{"region": "us"}'
```
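If you prefer to create the meeting from Python rather than curl, the same request can be built with the stdlib `urllib`. The endpoint and payload below simply mirror the curl example above; substitute your real API key for the placeholder.

```python
# Build the same POST request as the curl example, stdlib only.
import json
import urllib.request

def build_meeting_request(api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.videosdk.live/v1/meetings",
        data=json.dumps({"region": "us"}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send it and read the JSON response:
# with urllib.request.urlopen(build_meeting_request(api_key)) as resp:
#     meeting = json.loads(resp.read())
```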
Step 4.2: Creating the Custom Agent Class
The `MyVoiceAgent` class extends the `Agent` class, providing custom behavior for entering and exiting sessions. This class is where you define how the agent interacts with users.

```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The `CascadingPipeline` is crucial for processing audio. It connects the STT, LLM, and TTS components, ensuring that user speech is accurately converted, processed, and responded to.

```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The `start_session` function initializes the agent, sets up the conversation flow, and manages the session lifecycle. The `make_context` function configures the room options for the session.

```python
def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Running and Testing the Agent
Step 5.1: Running the Python Script
To start the agent, run the Python script:
```shell
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
Once the agent is running, you can interact with it using the playground link provided in the console. This allows you to test the agent's capabilities in a controlled environment.
Advanced Features and Customizations
Extending Functionality with Custom Tools
You can extend the agent's functionality by adding custom tools. This involves creating new functions that the agent can call to perform specific tasks.
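The registration mechanism for tools varies by framework version, so consult the VideoSDK function-tools documentation for the exact decorator and signature. Independent of that API, a tool is just a typed function the LLM can call with arguments extracted from the user's words; the sketch below is a hypothetical example with made-up names.

```python
# Hypothetical tool the agent could invoke; the name and return shape are
# illustrative assumptions, not part of the VideoSDK API.
def set_reminder(text: str, minutes_from_now: int) -> dict:
    """Schedule a reminder; the LLM fills both arguments from the user's request."""
    if minutes_from_now <= 0:
        return {"ok": False, "error": "time must be in the future"}
    # A real implementation would persist the reminder or call a scheduler here.
    return {"ok": True, "reminder": text, "in_minutes": minutes_from_now}
```

Returning a structured result lets the agent confirm the action back to the user in speech, matching the instruction to "provide clear feedback on the status of their requests."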
Exploring Other Plugins
The VideoSDK framework supports various plugins for STT, LLM, and TTS. Explore options like Cartesia for STT or Google Gemini for LLM to enhance your agent's capabilities.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API keys are correctly configured in the `.env` file. Double-check the VideoSDK dashboard for accurate credentials.
Audio Input/Output Problems
Verify that your microphone and speakers are properly connected and configured. Check system settings and permissions.
Dependency and Version Conflicts
Use a virtual environment to avoid conflicts. Ensure all dependencies are compatible with Python 3.11+.
Conclusion
Summary of What You've Built
You've successfully built an AI Voice Agent capable of triggering actions based on voice commands using the VideoSDK framework.
Next Steps and Further Learning
Explore additional plugins and customizations to enhance your agent's functionality. Consider integrating with other APIs for expanded capabilities.
For more detailed guidance, refer to the Voice Agent Quick Start Guide.