Introduction to AI Voice Agents for Small Businesses
AI Voice Agents are automated systems designed to interact with users through voice. They combine Speech-to-Text (STT), Large Language Models (LLMs), and Text-to-Speech (TTS) to understand and respond to user queries. For small businesses, these agents can handle customer inquiries, schedule appointments, and provide information about products or services, enhancing customer service and operational efficiency.
In this tutorial, you will learn how to build a basic AI Voice Agent using the VideoSDK framework. We will cover the setup, implementation, and testing of a voice agent tailored for small business needs.
Architecture and Core Concepts
The architecture of an AI Voice Agent involves several components that work together to process user inputs and generate responses. At a high level, the data flows in one direction: the user's speech is transcribed by the STT component, the transcript is passed to the LLM to compose a reply, and the reply text is synthesized back into audio by the TTS component.
Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot. It handles the interaction logic and manages the session lifecycle.
- CascadingPipeline: Defines the flow of audio processing from STT to LLM to TTS. Each component plays a specific role in the processing chain.
- VAD & TurnDetector: These components help the agent determine when to listen and when to respond, ensuring smooth interactions.
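The cascading flow described above can be sketched with stand-in stub functions. This is a conceptual sketch only: the stubs and their canned strings are illustrative, and the real VideoSDK plugins replace each stage in the actual agent.

```python
# Conceptual sketch of a cascading pipeline. Each stage is a stub standing
# in for a real plugin (Deepgram STT, OpenAI LLM, ElevenLabs TTS).

def speech_to_text(audio_chunk: bytes) -> str:
    # A real STT plugin would transcribe the audio here.
    return "what are your opening hours"

def generate_reply(transcript: str) -> str:
    # A real LLM plugin would generate the answer here.
    return f"You asked: '{transcript}'. We are open 9am-5pm, Mon-Fri."

def text_to_speech(reply: str) -> bytes:
    # A real TTS plugin would synthesize audio here.
    return reply.encode("utf-8")

def cascade(audio_chunk: bytes) -> bytes:
    # The cascading pipeline chains STT -> LLM -> TTS in order.
    transcript = speech_to_text(audio_chunk)
    reply = generate_reply(transcript)
    return text_to_speech(reply)

print(cascade(b"\x00\x01").decode("utf-8"))
```

In the real framework, VAD and the TurnDetector sit in front of this chain, deciding when a complete user utterance is ready to enter the pipeline.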
Setting Up the Development Environment
Before we start building our AI Voice Agent, we need to set up the development environment.
Prerequisites
- Python 3.11+: Ensure you have Python 3.11 or higher installed on your machine.
- VideoSDK Account: Sign up for an account at app.videosdk.live to access the necessary APIs.
Step 1: Create a Virtual Environment
Create a virtual environment to manage dependencies:
```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```
Step 2: Install Required Packages
Install the VideoSDK and necessary plugins:
```bash
pip install videosdk
```
Step 3: Configure API Keys in a .env file
Create a `.env` file in your project directory and add your VideoSDK API key:

```
VIDEOSDK_API_KEY=your_api_key_here
```
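In practice you would load this file with a library such as python-dotenv. As a rough stdlib-only sketch of what such a loader does under the hood (the file path and key name match this tutorial; the parser here is deliberately minimal):

```python
# Minimal sketch of a .env loader: read KEY=value lines into os.environ.
# Note: python-dotenv itself skips keys that are already set by default;
# this simplified version just assigns them.
import os

def load_env_file(path: str = ".env") -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, malformed lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Example: write a sample .env, load it, and read the key back.
with open(".env", "w") as f:
    f.write("VIDEOSDK_API_KEY=your_api_key_here\n")
load_env_file()
print(os.environ["VIDEOSDK_API_KEY"])
```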
Building the AI Voice Agent: A Step-by-Step Guide
Let's dive into building the AI Voice Agent. Below is the complete code for the agent:
```python
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from typing import AsyncIterator

# Pre-downloading the Turn Detector model
pre_download_model()

agent_instructions = "You are a helpful AI Voice Agent designed specifically for small businesses. Your primary role is to assist business owners and their customers by providing information and support related to the business's services and operations. You can answer frequently asked questions, provide details about products or services, and assist with scheduling appointments or reservations. However, you are not a human and should always remind users to verify critical information with a human representative. You must not handle sensitive personal data or financial transactions. Always maintain a friendly and professional tone, and ensure that your responses are concise and relevant to the user's query."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your agent, you need a meeting ID. You can generate it using the VideoSDK API. Here is an example using `curl`:

```bash
curl -X POST \
  https://api.videosdk.live/v1/meetings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
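If you prefer Python over curl, the same request can be built with the standard library. The endpoint and headers below are taken directly from the curl example; the response shape is not assumed, so the raw body would simply be read as text.

```python
# Build the meeting-creation request in Python, mirroring the curl command.
import urllib.request

def build_meeting_request(api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.videosdk.live/v1/meetings",
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually create the meeting (requires a valid key and network access):
# with urllib.request.urlopen(build_meeting_request("YOUR_API_KEY")) as resp:
#     print(resp.read().decode("utf-8"))
```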
```
Step 4.2: Creating the Custom Agent Class
The `MyVoiceAgent` class is where you define the behavior of your voice agent. It inherits from the `Agent` class and implements methods like `on_enter` and `on_exit` to handle session start and end events.

```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The `CascadingPipeline` is crucial as it defines how the agent processes audio input and generates responses. It includes components like STT, LLM, TTS, VAD, and TurnDetector. For more detailed guidance, refer to the Voice Agent Quick Start Guide.

```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The `start_session` function initializes the agent session and starts the pipeline. The `make_context` function sets up the room options, and the `if __name__ == "__main__":` block runs the agent.

```python
def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Running and Testing the Agent
Step 5.1: Running the Python Script
To start the agent, run the Python script:

```bash
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
Once the agent is running, you will receive a playground link in the console. Open this link in your browser to interact with the agent. You can speak to the agent and receive responses in real-time.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend the agent's functionality using custom tools. This can include additional processing or integration with other services.
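As an illustration, here is a hedged sketch of the kind of business-specific logic you might expose as a custom tool, such as an opening-hours check. The `OPENING_HOURS` data and `is_open` helper are hypothetical, and the mechanism for registering a function as a tool (decorator, constructor argument, etc.) depends on your VideoSDK version, so only the tool logic itself is shown.

```python
# Hypothetical tool logic for a small business: check opening hours.
# Only the business logic is shown; tool registration is SDK-specific.
from datetime import time

# Hypothetical opening hours for the example business.
OPENING_HOURS = {"mon-fri": (time(9, 0), time(17, 0))}

def is_open(day: str, hour: int) -> bool:
    """Return True if the business is open at the given hour on that day."""
    if day.lower() not in ("mon", "tue", "wed", "thu", "fri"):
        return False  # closed on weekends
    start, end = OPENING_HOURS["mon-fri"]
    return start.hour <= hour < end.hour

print(is_open("tue", 10))  # True: within Mon-Fri 9:00-17:00
print(is_open("sun", 10))  # False: closed on weekends
```

A tool like this lets the LLM answer "are you open right now?" with real data instead of guessing.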
Exploring Other Plugins
While this tutorial uses specific plugins, the VideoSDK framework supports various STT, LLM, and TTS plugins. Explore these options to find the best fit for your needs. For instance, you can use the Deepgram STT plugin, the OpenAI LLM plugin, and the ElevenLabs TTS plugin to enhance your agent's capabilities.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API key is correctly set in the `.env` file and that you have the necessary permissions.
Audio Input/Output Problems
Check your microphone and speaker settings. Ensure they are correctly configured and accessible by the application.
Dependency and Version Conflicts
Ensure all dependencies are installed with compatible versions. Use a virtual environment to manage these dependencies effectively.
Conclusion
In this tutorial, you built a functional AI Voice Agent tailored for small businesses using the VideoSDK framework. This agent can handle customer interactions, provide information, and assist with scheduling. As next steps, consider exploring advanced features and customizations to enhance your agent's capabilities. Additionally, leverage tools like Silero Voice Activity Detection and the Turn Detector to improve interaction quality, and manage your AI Voice Agent sessions effectively.