Introduction to AI Voice Agents for Lead Qualification
AI Voice Agents are sophisticated systems that can understand and respond to human speech, making them invaluable in various industries, including lead qualification. These agents can engage potential leads, ask qualifying questions, and even schedule follow-up meetings, all while ensuring compliance with data protection regulations.
What is an AI Voice Agent?
An AI Voice Agent is a software application that uses artificial intelligence to interact with users through voice commands. It processes spoken language, understands the intent, and provides appropriate responses, often using natural language processing (NLP) techniques.
Why are they important for the Lead Qualification Industry?
In the lead qualification industry, AI Voice Agents streamline the process of gathering and analyzing potential customer information. They can efficiently handle initial interactions, freeing up human agents to focus on more complex tasks. This automation leads to faster response times and improved customer satisfaction.
Core Components of a Voice Agent
- Speech-to-Text (STT): Converts spoken language into text. For this, you might explore the Deepgram STT Plugin for voice agents.
- Large Language Model (LLM): Processes the text to understand intent and generate responses.
- Text-to-Speech (TTS): Converts the generated text back into speech, utilizing tools like the ElevenLabs TTS Plugin for voice agents.
What You'll Build in This Tutorial
In this tutorial, you'll build a fully functional AI Voice Agent for lead qualification using the VideoSDK framework. You'll learn how to set up the environment, create a custom agent, and test it in a real-world scenario. For a step-by-step setup, refer to the Voice Agent Quick Start Guide.
Architecture and Core Concepts
High-Level Architecture Overview
The architecture of an AI Voice Agent involves several components working together to process and respond to user input. The flow typically starts with capturing user speech, converting it to text, processing the text to determine the appropriate response, and finally converting the response back to speech.
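As a mental model, that flow can be sketched with plain functions standing in for the real STT, LLM, and TTS services. This is a toy cascade for intuition only; none of these function names are part of the VideoSDK API:

```python
def speech_to_text(audio: bytes) -> str:
    # A real STT service (e.g. Deepgram) would transcribe audio here.
    return audio.decode("utf-8")  # pretend the "audio" is already text

def generate_reply(transcript: str) -> str:
    # A real LLM would reason about intent; here we key off a single word.
    if "budget" in transcript.lower():
        return "Great, what budget range are you considering?"
    return "Could you tell me about your project and timeline?"

def text_to_speech(reply: str) -> bytes:
    # A real TTS service (e.g. ElevenLabs) would synthesize audio here.
    return reply.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    transcript = speech_to_text(audio)   # 1. capture speech -> text
    reply = generate_reply(transcript)   # 2. text -> response
    return text_to_speech(reply)         # 3. response -> speech

print(handle_turn(b"We have a budget approved").decode())
```

Each real plugin in the sections below slots into one of these three stages.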

Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot, responsible for managing the interaction flow. For a detailed understanding, see the AI voice Agent core components overview.
- CascadingPipeline: Defines the sequence of processing steps (STT -> LLM -> TTS) to handle user input and generate responses. Learn more about this in the Cascading pipeline in AI voice Agents guide.
- VAD & TurnDetector: Voice Activity Detection (VAD) and Turn Detection ensure the agent listens and responds at the right times. Consider using the Turn detector for AI voice Agents for optimal performance.
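To build intuition for what a VAD threshold does, here is a toy energy-based detector over frame amplitudes. The real SileroVAD plugin uses a neural model, so this is purely illustrative of the threshold idea:

```python
def is_speech(frame: list[float], threshold: float = 0.35) -> bool:
    """Toy VAD: a frame counts as speech if its mean absolute amplitude exceeds the threshold."""
    energy = sum(abs(sample) for sample in frame) / len(frame)
    return energy > threshold

quiet_frame = [0.01, -0.02, 0.015, -0.01]   # background noise
loud_frame = [0.6, -0.7, 0.55, -0.65]       # someone speaking

print(is_speech(quiet_frame))  # -> False
print(is_speech(loud_frame))   # -> True
```

Lowering the threshold makes the agent more sensitive (it interrupts less speech but picks up more noise); raising it does the opposite. The same trade-off applies when tuning the real plugin.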
Setting Up the Development Environment
Prerequisites
Before you begin, ensure you have Python 3.11+ installed and a VideoSDK account. You can sign up at app.videosdk.live.
Step 1: Create a Virtual Environment
Create a virtual environment to manage your project dependencies:
```shell
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```
Step 2: Install Required Packages
Install the necessary Python packages using pip:
```shell
pip install "videosdk-agents[deepgram,openai,elevenlabs,silero,turn_detector]"
```
Step 3: Configure API Keys in a .env File
Create a .env file in your project directory and add your VideoSDK API key:
```
VIDEOSDK_API_KEY=your_api_key_here
```
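At runtime the agent reads this key from the process environment. Libraries like python-dotenv handle this for you, but a minimal stdlib-only loader looks like the sketch below (the `parse_env` helper is my own illustration, not part of VideoSDK):

```python
import os

def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# In a real project you would read this text from the .env file on disk.
env_text = "VIDEOSDK_API_KEY=your_api_key_here\n# a comment\n"
config = parse_env(env_text)
os.environ.setdefault("VIDEOSDK_API_KEY", config["VIDEOSDK_API_KEY"])
```

Keep the .env file out of version control (add it to .gitignore) so the key is never committed.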
Building the AI Voice Agent: A Step-by-Step Guide
Complete Code Block
Here is the complete code for the AI Voice Agent:
```python
import asyncio
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model
pre_download_model()

agent_instructions = """{
  "persona": "Efficient Lead Qualification Specialist",
  "capabilities": [
    "Engage potential leads in conversation to gather essential information",
    "Ask predefined questions to qualify leads based on criteria such as budget, timeline, and decision-making authority",
    "Provide basic information about products or services",
    "Schedule follow-up calls or meetings with a human sales representative",
    "Record and summarize lead responses for sales team review"
  ],
  "constraints": [
    "You are not authorized to make final sales decisions or offer discounts",
    "You must not collect sensitive personal information such as credit card numbers or social security numbers",
    "Always inform the lead that they will be contacted by a human representative for further discussion",
    "Ensure compliance with data protection regulations such as GDPR or CCPA"
  ]
}"""

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with the agent, you'll need a meeting ID. You can generate one using the VideoSDK API:
```shell
curl -X POST "https://api.videosdk.live/v2/rooms" \
  -H "Authorization: YOUR_VIDEOSDK_AUTH_TOKEN" \
  -H "Content-Type: application/json"
```
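If you prefer doing this from Python, the same request can be built with the standard library. The sketch below only constructs the request object; actually sending it requires a valid VideoSDK auth token, and the `build_room_request` helper and token placeholder are illustrative:

```python
import json
import urllib.request

def build_room_request(token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that creates a VideoSDK room."""
    return urllib.request.Request(
        url="https://api.videosdk.live/v2/rooms",
        method="POST",
        headers={"Authorization": token, "Content-Type": "application/json"},
        data=json.dumps({}).encode("utf-8"),
    )

req = build_room_request("YOUR_JWT_TOKEN")
print(req.get_method(), req.full_url)

# To actually create the room, send it and read the ID from the JSON response:
# with urllib.request.urlopen(req) as resp:
#     room_id = json.loads(resp.read())["roomId"]
```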
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class extends the Agent class. It defines the agent's behavior when entering and exiting a session:
```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The CascadingPipeline is crucial for processing user input and generating responses. It integrates STT, LLM, TTS, VAD, and Turn Detection:
```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The start_session function manages the agent's lifecycle, connecting to the VideoSDK room and starting the session. For more details on managing sessions, refer to AI voice Agent Sessions.
```python
async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()
```
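The `await asyncio.Event().wait()` line simply parks the coroutine forever, and the try/finally guarantees cleanup runs even when that wait is cancelled at shutdown. The same pattern is shown in isolation below, with a log list and stub event standing in for the session and context (nothing here is VideoSDK-specific):

```python
import asyncio

async def run_until_stopped(stop: asyncio.Event, log: list[str]) -> None:
    try:
        log.append("connected")   # stands in for context.connect()/session.start()
        await stop.wait()         # parks here, like asyncio.Event().wait()
    finally:
        log.append("closed")      # cleanup always runs, like session.close()

async def main() -> list[str]:
    log: list[str] = []
    stop = asyncio.Event()
    task = asyncio.create_task(run_until_stopped(stop, log))
    await asyncio.sleep(0)        # let the task start and reach the wait
    stop.set()                    # simulate a shutdown request
    await task
    return log

print(asyncio.run(main()))  # -> ['connected', 'closed']
```

In production you would typically set such an event from a SIGINT/SIGTERM handler so Ctrl+C tears the session down cleanly.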
Running and Testing the Agent
Step 5.1: Running the Python Script
To start the agent, run the following command in your terminal:
```shell
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
Once the script is running, you'll receive a playground link in the console. Use this link to join the session and interact with your AI Voice Agent.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend your agent's functionality by integrating custom tools. This can include additional data sources or specialized processing capabilities.
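As a concrete example, a follow-up scheduling tool can be written as a plain function first; in VideoSDK you would then expose it to the LLM through the framework's function-tool mechanism. The scheduling logic below is framework-free and illustrative (the fixed base date keeps the output reproducible):

```python
from datetime import datetime, timedelta

def schedule_follow_up(lead_name: str, days_from_now: int) -> dict:
    """Toy tool: propose a follow-up slot with a human sales rep."""
    base = datetime(2024, 1, 1, 9, 0)  # fixed base date for reproducibility
    slot = base + timedelta(days=days_from_now)
    return {
        "lead": lead_name,
        "when": slot.isoformat(),
        "note": "A human representative will contact you.",
    }

print(schedule_follow_up("Acme Corp", 2))
```

Keeping tool logic in plain functions like this makes it easy to test independently and to swap in a real calendar or CRM API later.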
Exploring Other Plugins
While this tutorial uses specific plugins, VideoSDK supports various STT, LLM, and TTS options. Explore these to enhance your agent's performance.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API key is correctly set in the .env file. Double-check for typos or missing keys.
Audio Input/Output Problems
Verify your microphone and speaker settings. Ensure the correct devices are selected in your system settings.
Dependency and Version Conflicts
Use a virtual environment to manage dependencies. Check for version conflicts in your requirements.txt file.
Conclusion
Summary of What You've Built
In this tutorial, you built an AI Voice Agent capable of qualifying leads using the VideoSDK framework. You learned how to set up the environment, create a custom agent, and test it.
Next Steps and Further Learning
Explore additional features and plugins in the VideoSDK framework to enhance your agent. Consider integrating with CRM systems for a complete lead management solution.
FAQ