Introduction to AI Voice Agents in Call Barging
In today's fast-paced world, AI Voice Agents are revolutionizing the call center industry, particularly in the realm of call barging. But what exactly is an AI Voice Agent, and why is it so crucial for call barging?
What is an AI Voice Agent?
An AI Voice Agent is a software application that can understand and respond to human speech. It combines technologies like Speech-to-Text (STT), Text-to-Speech (TTS), and Large Language Models (LLMs) to process speech and generate human-like responses. These agents can answer queries, provide information, and even intervene in calls to assist human agents.
Why are they important for the call barging industry?
In the call center industry, call barging refers to the ability of a supervisor to join a call between an agent and a customer. AI Voice Agents enhance this process by monitoring calls for quality assurance, providing real-time feedback, and intervening when necessary. This not only improves the efficiency and effectiveness of call centers but also enhances customer satisfaction.
Core Components of a Voice Agent
To build a robust AI Voice Agent, you need to understand its core components:
- Speech-to-Text (STT): Converts spoken language into text.
- Language Model (LLM): Processes the text to understand context and intent.
- Text-to-Speech (TTS): Converts text responses back into spoken language.
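Conceptually, these three components form a chain: audio in, text through the model, audio out. The sketch below is purely illustrative (it is not the VideoSDK API; the stub functions and canned strings are placeholders) and shows how each stage hands its output to the next:

```python
def speech_to_text(audio: bytes) -> str:
    # Placeholder STT: a real plugin would transcribe the audio here.
    return "what are your opening hours"

def language_model(text: str) -> str:
    # Placeholder LLM: a real model would generate a contextual reply.
    return f"You asked: '{text}'. We are open 9am to 5pm."

def text_to_speech(text: str) -> bytes:
    # Placeholder TTS: a real plugin would synthesize speech here.
    return text.encode("utf-8")

def run_pipeline(audio: bytes) -> bytes:
    # The cascading flow: STT -> LLM -> TTS.
    return text_to_speech(language_model(speech_to_text(audio)))
```

In the real agent, each of these stubs is replaced by a pipeline plugin, as you will see below.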
What You'll Build in This Tutorial
In this tutorial, we'll guide you through building an AI Voice Agent specifically designed for call barging using the VideoSDK framework. You'll learn how to set up the development environment, create a custom agent class, and manage AI Voice Agent sessions effectively.
Architecture and Core Concepts
Before diving into the code, let's explore the architecture and core concepts of our AI Voice Agent.
High-Level Architecture Overview
The AI Voice Agent operates by processing audio input from a user, converting it to text, generating a response using a language model, and then converting that response back to audio. This flow is managed by a cascading pipeline that integrates various plugins for different tasks.
Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot. It handles interactions and decision-making.
- CascadingPipeline: Manages the flow of audio processing through STT, LLM, and TTS.
- VAD & TurnDetector: These components help the agent determine when to listen and when to speak, using Silero Voice Activity Detection to detect voice activity and a turn detector to detect conversational turns.
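To make those two roles concrete, here is a toy sketch (not Silero or the VideoSDK TurnDetector; the thresholds and frame probabilities are illustrative) of how a probability threshold separates speech frames from silence, and how a run of trailing silence can signal the end of a speaker's turn:

```python
def detect_speech(frame_probs, threshold=0.35):
    # A frame counts as speech when its voiced-probability exceeds the threshold
    # (mirroring the threshold=0.35 passed to SileroVAD later in this tutorial).
    return [p > threshold for p in frame_probs]

def turn_ended(frame_probs, threshold=0.35, silence_frames=3):
    # Toy end-of-turn rule: the speaker is done once the stream
    # ends with `silence_frames` consecutive non-speech frames.
    run = 0
    for is_speech in detect_speech(frame_probs, threshold):
        run = 0 if is_speech else run + 1
    return run >= silence_frames
```

Real turn detectors are model-based and more robust than a fixed silence count, but the listen/speak decision they inform is the same.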
Setting Up the Development Environment
Let's get started by setting up the necessary tools and environment for developing our AI Voice Agent.
Prerequisites
To follow this tutorial, you'll need:
- Python 3.11+
- A VideoSDK account, which you can create at app.videosdk.live.
Step 1: Create a Virtual Environment
Creating a virtual environment helps manage dependencies and avoid conflicts. Run the following commands:
python -m venv myenv
source myenv/bin/activate  # On Windows use `myenv\Scripts\activate`
Step 2: Install Required Packages
Install the necessary packages using pip:
pip install videosdk
pip install python-dotenv
Step 3: Configure API Keys in a .env File
Create a .env file in your project directory and add your VideoSDK API key:
VIDEOSDK_API_KEY=your_api_key_here
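At runtime the application reads this key from the environment, typically via python-dotenv's load_dotenv() (installed in Step 2). As an illustration of what that call does, here is a minimal stdlib-only stand-in:

```python
import os

def load_env(path=".env"):
    # Minimal stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE
    # lines into os.environ without overwriting variables already set.
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In your own project, prefer the real load_dotenv(); this sketch only shows the mechanism.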
Building the AI Voice Agent: A Step-by-Step Guide
In this section, we'll present the complete, runnable code and then break it down to explain each part.
import asyncio
from dotenv import load_dotenv
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Load API keys from the .env file created in Step 3
load_dotenv()

# Pre-download the Turn Detector model so the first run doesn't stall
pre_download_model()

agent_instructions = "{\n \"persona\": \"efficient call center supervisor\",\n \"capabilities\": [\n \"monitor ongoing calls for quality assurance\",\n \"intervene in calls when necessary to assist agents\",\n \"provide real-time feedback to call center agents\",\n \"log call details and interventions for training purposes\"\n ],\n \"constraints\": [\n \"you must not disclose sensitive customer information\",\n \"you cannot make decisions on behalf of the company\",\n \"you must always inform the customer when you are joining the call\",\n \"you are not authorized to handle escalations beyond your level\"\n ]\n}"

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
Step 4.1: Generating a VideoSDK Meeting ID
To test our agent, we need a meeting ID. You can generate one using the following curl command:
curl -X POST \
  https://api.videosdk.live/v1/meetings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"region": "us-west"}'
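If you'd rather stay in Python, the same request can be built with the standard library. This sketch mirrors the curl command above; it constructs the request but does not send it (substitute your real API key before sending):

```python
import json
import urllib.request

def build_create_meeting_request(api_key: str) -> urllib.request.Request:
    # Mirrors the curl command: POST /v1/meetings with a region payload.
    body = json.dumps({"region": "us-west"}).encode("utf-8")
    return urllib.request.Request(
        "https://api.videosdk.live/v1/meetings",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it:
#   urllib.request.urlopen(build_create_meeting_request("YOUR_API_KEY"))
```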
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class extends the Agent class from the VideoSDK framework. It defines the agent's behavior when entering and exiting a session.
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")
This class uses the agent_instructions string to define the agent's persona, capabilities, and constraints.
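A hand-escaped JSON string like agent_instructions is easy to break with a stray quote. One option is to keep the persona as a Python dict and serialize it once at startup, which produces the same JSON content:

```python
import json

persona = {
    "persona": "efficient call center supervisor",
    "capabilities": [
        "monitor ongoing calls for quality assurance",
        "intervene in calls when necessary to assist agents",
        "provide real-time feedback to call center agents",
        "log call details and interventions for training purposes",
    ],
    "constraints": [
        "you must not disclose sensitive customer information",
        "you cannot make decisions on behalf of the company",
        "you must always inform the customer when you are joining the call",
        "you are not authorized to handle escalations beyond your level",
    ],
}

# Equivalent to the escaped string above, but easier to edit and validate.
agent_instructions = json.dumps(persona, indent=2)
```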
The CascadingPipeline is responsible for processing audio input and generating responses. It integrates plugins for STT, LLM, TTS, VAD, and turn detection.
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
Each component of the pipeline plays a crucial role in ensuring smooth and accurate communication.
Step 4.4: Managing the Session and Startup Logic
The start_session function manages the lifecycle of the agent session, including connection, execution, and cleanup.
async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()
The make_context function sets up the JobContext with room options, enabling the creation or joining of a meeting room.
def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)
Finally, the script's entry point ensures that the job is started correctly.
if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
Running and Testing the Agent
Step 5.1: Running the Python Script
To run the agent, execute the script using Python:
python main.py
Step 5.2: Interacting with the Agent in the Playground
Once the script is running, you'll find a link to the playground in the console output. Use this link to join the session and interact with your AI Voice Agent.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows for extending the agent's functionality with custom tools. This can include additional processing logic or integrations with other services.
Exploring Other Plugins
While this tutorial uses specific plugins, VideoSDK supports various STT, LLM, and TTS options that can be explored for different use cases.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API key is correctly set in the .env file and that it has the necessary permissions.
Audio Input/Output Problems
Check your microphone and speaker settings to ensure they are configured correctly.
Dependency and Version Conflicts
Make sure all dependencies are installed with compatible versions as specified in the tutorial.
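One way to avoid version drift is to snapshot the exact versions that work for you, so any new environment can reproduce them:

```shell
# Record the exact package versions from your working environment.
pip freeze > requirements.txt
# Later, reproduce them in a fresh virtual environment with:
#   pip install -r requirements.txt
```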
Conclusion
Summary of What You've Built
In this tutorial, you've built a fully functional AI Voice Agent for call barging using the VideoSDK framework. You've learned how to set up the environment, create an agent, and manage sessions.
Next Steps and Further Learning
To further enhance your AI Voice Agent, consider exploring additional plugins and custom tools. Continue learning by experimenting with different configurations and use cases.