Build AI Voice Agents for Support

Step-by-step guide to building AI voice agents for customer support with VideoSDK.

Introduction to AI Voice Agents in Customer Support

AI Voice Agents are revolutionizing the way customer support is delivered. These agents use artificial intelligence to understand and respond to customer queries, providing a seamless and efficient support experience. In this tutorial, we'll explore how to build an AI Voice Agent specifically tailored for customer support using the VideoSDK framework.

What is an AI Voice Agent?

An AI Voice Agent is a software application that uses speech recognition and natural language processing to interact with users via voice commands. It listens to user input, processes the information using AI algorithms, and responds in a human-like manner.

Why are they important for the Customer Support Industry?

In the customer support industry, AI Voice Agents can handle a variety of tasks such as answering FAQs, providing order status updates, and assisting with troubleshooting. They help reduce wait times and improve customer satisfaction by providing immediate assistance.

Core Components of a Voice Agent

The main components of a voice agent include:
  • Speech-to-Text (STT): Converts spoken language into text.
  • Large Language Model (LLM): Processes the text to understand and generate responses.
  • Text-to-Speech (TTS): Converts text responses back into spoken language.
For a detailed understanding of these components, refer to the AI voice Agent core components overview.
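Conceptually, these three components form a simple cascade: audio in, text through, audio out. The sketch below illustrates that data flow with stand-in functions; the function names and canned outputs are illustrative only, and in this tutorial the real work is done by plugin-backed STT, LLM, and TTS engines.

```python
# Illustrative stand-ins for the three stages of a voice agent.
# A real agent replaces each with a plugin-backed engine.
def speech_to_text(audio: bytes) -> str:
    # A real STT engine (e.g. Deepgram) would transcribe the audio here.
    return "where is my order"

def generate_response(text: str) -> str:
    # A real LLM would reason over the transcript and conversation history.
    return f"Let me check that for you. You asked: '{text}'."

def text_to_speech(text: str) -> bytes:
    # A real TTS engine would synthesize audio here.
    return text.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One user turn: audio in, audio out, via the STT -> LLM -> TTS cascade."""
    transcript = speech_to_text(audio)
    reply = generate_response(transcript)
    return text_to_speech(reply)
```

The pipeline you build later in this tutorial automates exactly this loop, with voice activity detection and turn detection deciding when each turn starts and ends.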

What You'll Build in This Tutorial

In this guide, you will learn to build an AI Voice Agent using Python and the VideoSDK framework. We'll cover everything from setting up your development environment to deploying and testing your agent. To get started quickly, you can follow the Voice Agent Quick Start Guide.

Architecture and Core Concepts

High-Level Architecture Overview

The architecture of an AI Voice Agent involves several key components working together to process user input and generate responses. The process begins with capturing user speech, which is then converted to text using STT. The text is processed by an LLM to generate a suitable response, which is then converted back to speech using TTS.
sequenceDiagram
    participant User
    participant Agent
    participant STT
    participant LLM
    participant TTS
    User->>Agent: Speak
    Agent->>STT: Convert Speech to Text
    STT->>Agent: Text
    Agent->>LLM: Process Text
    LLM->>Agent: Response Text
    Agent->>TTS: Convert Text to Speech
    TTS->>Agent: Speech
    Agent->>User: Respond

Understanding Key Concepts in the VideoSDK Framework

  • Agent: Represents the core class of your AI Voice Agent, handling interactions with users.
  • CascadingPipeline: Manages the flow of audio processing from STT to LLM to TTS. Learn more about the Cascading pipeline in AI voice Agents.
  • VAD & TurnDetector: Voice Activity Detection (VAD) detects when the user is speaking; the TurnDetector decides when the user has finished their turn so the agent knows when to respond.
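To build intuition for what these components do, here is a toy energy-threshold detector. This heuristic is purely illustrative; the SileroVAD and TurnDetector plugins used later are trained models, not simple amplitude checks.

```python
def is_speech(frame: list[float], threshold: float = 0.35) -> bool:
    """Toy VAD: flag a frame as speech when its mean absolute amplitude
    exceeds a threshold. Real VADs (e.g. Silero) use trained neural models."""
    if not frame:
        return False
    energy = sum(abs(sample) for sample in frame) / len(frame)
    return energy > threshold

def detect_end_of_turn(frames: list[list[float]], silence_frames: int = 3) -> bool:
    """Toy turn detector: treat the user's turn as finished after N
    consecutive non-speech frames at the end of the buffer."""
    tail = frames[-silence_frames:]
    return len(tail) == silence_frames and not any(is_speech(f) for f in tail)
```

Real turn detectors also weigh semantic cues (does the sentence sound complete?), which is why the framework pairs a VAD with a separate TurnDetector model.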

Setting Up the Development Environment

Prerequisites

Before you begin, ensure you have Python 3.11+ installed and an account on the VideoSDK platform (app.videosdk.live).

Step 1: Create a Virtual Environment

Create a virtual environment to manage your project dependencies:
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

Step 2: Install Required Packages

Install the VideoSDK Agents SDK along with the plugin packages used in this tutorial (plugin packages follow the videosdk-plugins-* naming convention; check the VideoSDK documentation for the exact names in your SDK version):
pip install videosdk-agents
pip install videosdk-plugins-silero videosdk-plugins-turn-detector videosdk-plugins-deepgram videosdk-plugins-openai videosdk-plugins-elevenlabs

Step 3: Configure API Keys in a .env file

Create a .env file in your project directory with your VideoSDK API key and the keys for the STT, LLM, and TTS providers used in this tutorial (each plugin reads its provider's standard environment variable):
VIDEOSDK_API_KEY=your_videosdk_api_key
DEEPGRAM_API_KEY=your_deepgram_api_key
OPENAI_API_KEY=your_openai_api_key
ELEVENLABS_API_KEY=your_elevenlabs_api_key
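If you prefer not to add a dependency such as python-dotenv, a minimal stdlib-only loader is enough for simple KEY=VALUE files. This is a sketch: it skips comments and blank lines but does not handle quoting or multi-line values, which python-dotenv does.

```python
import os

def load_env(path: str = ".env") -> None:
    """Load simple KEY=VALUE lines from a .env file into os.environ.

    Skips blank lines and comments; existing environment variables
    are not overwritten.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call load_env() once at the top of your script, before any plugin reads its API key.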

Building the AI Voice Agent: A Step-by-Step Guide

Here is the complete code for the AI Voice Agent. We'll break it down into sections to explain each part.
import asyncio
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model so the first session starts quickly
pre_download_model()

agent_instructions = "You are a helpful AI Voice Agent specialized in customer support. Your primary role is to assist customers by answering their queries, providing information about products and services, and guiding them through troubleshooting processes. You can handle a wide range of customer service tasks, including providing order status updates, processing returns, and offering basic technical support. However, you must adhere to the following constraints: you cannot process payments or access sensitive customer information such as credit card details. Always maintain a polite and professional tone, and if a query is beyond your capabilities, direct the customer to a human representative for further assistance. Remember, you are not a human, and you must clarify this to the customers when necessary."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the STT -> LLM -> TTS pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

To join your voice agent in a pre-created room, you'll need a meeting ID. You can create one with the VideoSDK REST API (the v2 rooms endpoint), authenticating with your VideoSDK auth token:
curl -X POST https://api.videosdk.live/v2/rooms \
  -H "Authorization: YOUR_AUTH_TOKEN" \
  -H "Content-Type: application/json"
Note that when room_id is left unset, as in the code above, a room is created for you automatically, so this step is optional.

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class extends the Agent class from the VideoSDK framework. It defines the behavior of your voice agent, including how it greets users and ends conversations.
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is crucial as it defines how audio is processed. It involves the following components:
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)

Step 4.4: Managing the Session and Startup Logic

The start_session function is responsible for managing the agent's lifecycle, including starting and stopping the session. The make_context function sets up the job context, and the main block runs the agent.
async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )
    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )
    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Running and Testing the Agent

Step 5.1: Running the Python Script

To start your AI Voice Agent, run the following command in your terminal:
python main.py

Step 5.2: Interacting with the Agent in the Playground

Once the agent is running, you'll see a playground link in the console. Open this link in your browser to interact with your AI Voice Agent. You can speak to the agent, and it will respond based on the instructions provided. For a hands-on experience, visit the AI Agent playground.

Advanced Features and Customizations

Extending Functionality with Custom Tools

The VideoSDK framework allows you to extend your agent's capabilities by integrating custom tools. This can include additional APIs or databases to enhance the agent's functionality.
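The exact tool-registration API depends on the SDK version you use (consult the VideoSDK docs), but the underlying pattern is a registry mapping tool names to plain Python functions that the LLM can invoke with structured arguments. Here is a framework-agnostic sketch; the tool name, the decorator, and the fake order database are all hypothetical.

```python
from typing import Callable

# Registry of tools the agent can call by name.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function under a tool name."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("order_status")
def order_status(order_id: str) -> str:
    # In production this would query your order-management API or database.
    fake_db = {"A1001": "shipped", "A1002": "processing"}
    return fake_db.get(order_id, "not found")

def dispatch(name: str, **kwargs) -> str:
    """Invoke a registered tool; in a real agent, the LLM layer
    chooses the tool name and arguments from the conversation."""
    return TOOLS[name](**kwargs)
```

Keeping tools as small, side-effect-scoped functions makes them easy to test independently of the voice pipeline.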

Exploring Other Plugins

While this tutorial uses specific plugins for STT, LLM, and TTS, the VideoSDK framework supports various other options. Explore these to tailor the agent to your needs.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API key is correctly set in the .env file. Double-check the key's validity and permissions.
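A quick sanity check at startup catches unset or empty keys before the agent tries to connect. The variable name below matches the .env example earlier; extend the list with your STT, LLM, and TTS provider keys as needed.

```python
import os

def missing_keys(required: list[str]) -> list[str]:
    """Return the names of required environment variables
    that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

# Example: fail fast before starting the session.
# required = ["VIDEOSDK_API_KEY", "OPENAI_API_KEY"]
# if missing_keys(required):
#     raise SystemExit(f"Missing environment variables: {missing_keys(required)}")
```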

Audio Input/Output Problems

Verify your microphone and speaker settings. Ensure the correct devices are selected in your system's audio settings.

Dependency and Version Conflicts

Use a virtual environment to manage dependencies. Check for any version conflicts and resolve them by updating or downgrading packages as needed.

Conclusion

Summary of What You've Built

You've successfully built an AI Voice Agent capable of handling customer support tasks. This agent can understand and respond to user queries in real-time.

Next Steps and Further Learning

Explore additional features and plugins offered by the VideoSDK framework to enhance your agent's capabilities. Consider integrating more complex functionalities to further improve the customer support experience. For deployment, refer to the AI voice Agent deployment guide.
