Introduction to AI Voice Agents for Public Services
In recent years, AI voice assistants have become invaluable tools in various industries, including public services. These intelligent agents are designed to interact with users through natural language, providing information and assistance seamlessly. In this tutorial, we will explore how to build an AI voice assistant tailored for public services using the VideoSDK framework.
What is an AI Voice Agent?
An AI Voice Agent is a software application that uses artificial intelligence to interpret and respond to human speech. These agents combine Speech-to-Text (STT), a large language model (LLM), and Text-to-Speech (TTS) to understand users and communicate with them effectively.
Why Are They Important for Public Services?
AI voice assistants can significantly enhance public service delivery by providing quick access to information, reducing wait times, and improving user experience. They can assist citizens with inquiries related to healthcare, transportation, and government services, making public services more accessible and efficient.
Core Components of a Voice Agent
- Speech-to-Text (STT): Converts spoken language into text, using tools like the Deepgram STT Plugin for voice agents.
- Language Model (LLM): Processes the text to understand the user's intent, often powered by the OpenAI LLM Plugin for voice agents.
- Text-to-Speech (TTS): Converts the processed text back into spoken language.
What You'll Build in This Tutorial
In this tutorial, we will guide you through building a fully functional AI voice assistant for public services using the VideoSDK framework. You will learn how to set up the development environment, create a custom agent, manage AI voice agent sessions, and test your agent in the AI Agent Playground environment.
Architecture and Core Concepts
High-Level Architecture Overview
The AI voice assistant's architecture involves several components working together to process user input and generate responses. The data flow begins with the user's speech, which is captured and converted into text by the STT module. The text is then processed by the LLM to determine the appropriate response, which is finally converted back into speech by the TTS module.
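The cascading data flow described above can be sketched in plain Python. The stage functions below are illustrative stubs standing in for the real plugins (Deepgram, OpenAI, ElevenLabs), not the VideoSDK APIs; the point is only that each stage's output feeds the next:

```python
# Minimal sketch of a cascading voice pipeline: STT -> LLM -> TTS.
# Each stage is a stub standing in for a real plugin.

def stt(audio: bytes) -> str:
    """Stand-in for speech-to-text: pretend the audio decodes to a question."""
    return "when is the health clinic open"

def llm(text: str) -> str:
    """Stand-in for the language model: map intent to a canned answer."""
    if "clinic" in text:
        return "The clinic is open 9am to 5pm, Monday to Friday."
    return "Sorry, I don't have that information."

def tts(text: str) -> bytes:
    """Stand-in for text-to-speech: encode the reply as bytes."""
    return text.encode("utf-8")

def run_pipeline(audio: bytes) -> bytes:
    # The output of each stage feeds the next -- hence "cascading".
    return tts(llm(stt(audio)))

reply_audio = run_pipeline(b"<user speech>")
print(reply_audio.decode("utf-8"))
```

In the real framework these stages run asynchronously on streaming audio, but the dependency order is the same.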

Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot, responsible for managing interactions.
- Cascading Pipeline: Defines the flow of audio processing through STT, LLM, and TTS.
- VAD & Turn Detector: Tools that help the agent know when to listen and when to speak, ensuring smooth interactions.
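To make VAD and turn detection concrete, here is a toy energy-based detector. Real plugins such as SileroVAD use trained neural models; the thresholds and frame logic below are arbitrary illustrations of the concept:

```python
# Toy voice-activity detector: a frame counts as "speech" if its mean absolute
# amplitude exceeds a threshold; a turn ends after N consecutive silent frames.

def is_speech(frame: list[float], threshold: float = 0.35) -> bool:
    energy = sum(abs(s) for s in frame) / len(frame)
    return energy > threshold

def detect_turn_end(frames: list[list[float]], silence_frames: int = 2) -> bool:
    """True if the utterance ends with enough silence to hand the turn over."""
    tail = frames[-silence_frames:]
    return len(tail) == silence_frames and not any(is_speech(f) for f in tail)

speech = [0.8, -0.9, 0.7, -0.6]     # loud frame
silence = [0.01, -0.02, 0.0, 0.01]  # quiet frame

assert is_speech(speech) and not is_speech(silence)
print(detect_turn_end([speech, speech, silence, silence]))  # True: user finished speaking
```

This is why the pipeline later passes a `threshold` to both `SileroVAD` and `TurnDetector`: lower values make the agent more eager to treat sound as speech, higher values make it more conservative.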
Setting Up the Development Environment
Prerequisites
Before we start, ensure you have Python 3.11+ installed and create an account on the VideoSDK platform at app.videosdk.live.
Step 1: Create a Virtual Environment
To avoid conflicts with other Python projects, create a virtual environment:
```shell
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```
Step 2: Install Required Packages
Install the core package using pip. Depending on your setup, the plugin integrations used later (Deepgram, OpenAI, ElevenLabs, Silero, turn detector) may ship as separate packages; check the VideoSDK documentation for the exact package names:
```shell
pip install videosdk
```
Step 3: Configure API Keys in a .env File
Create a .env file in your project directory and add your VideoSDK API key. The Deepgram, OpenAI, and ElevenLabs plugins used later each require their own API keys as well; consult each provider's documentation for the expected variable names:
```shell
VIDEOSDK_API_KEY=your_api_key_here
```
Building the AI Voice Agent: A Step-by-Step Guide
Let's dive into building our AI voice assistant. Below is the complete code block that we'll break down in the following sections:
```python
import asyncio
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model
pre_download_model()

agent_instructions = "You are an AI Voice Assistant designed to support public services. Your primary role is to assist citizens by providing information and guidance on various public services such as healthcare, transportation, and government facilities. You should be friendly, informative, and efficient in your responses.\n\nCapabilities:\n1. Provide information about public healthcare services, including locations, operating hours, and contact details.\n2. Assist with public transportation inquiries, such as schedules, routes, and fare information.\n3. Offer guidance on accessing government services, including how to apply for permits, licenses, and other official documents.\n4. Answer frequently asked questions related to public services and direct users to appropriate resources for more detailed information.\n\nConstraints and Limitations:\n1. You are not a legal or medical professional, and you must include a disclaimer advising users to consult with qualified professionals for legal or medical advice.\n2. You cannot process personal data or handle transactions; direct users to official websites or contact centers for these services.\n3. Ensure that all information provided is up-to-date and sourced from reliable public service databases.\n4. Maintain user privacy and confidentiality at all times, adhering to data protection regulations."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the cascading pipeline: STT -> LLM -> TTS, with VAD and turn detection
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your agent, you'll need a meeting ID. You can generate one using the VideoSDK API:
```shell
curl -X POST "https://api.videosdk.live/v1/rooms" \
  -H "Authorization: Bearer your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{}'
```
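If you prefer to create the room from Python, the same call can be made with the standard library. The endpoint, auth scheme, and response field names below simply mirror the curl command above and are assumptions; verify them against the current VideoSDK REST documentation before relying on them:

```python
import json
import urllib.request

# Mirrors the curl command above; verify against current VideoSDK docs.
API_URL = "https://api.videosdk.live/v1/rooms"

def build_room_request(api_key: str) -> urllib.request.Request:
    """Construct (but do not send) the room-creation request."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def create_room(api_key: str) -> str:
    """Send the request and return the meeting/room ID from the JSON response.
    The 'roomId' field name is an assumption about the response shape."""
    with urllib.request.urlopen(build_room_request(api_key)) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body.get("roomId", "")
```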
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class is where we define the behavior of our AI voice assistant. It extends the Agent class from the VideoSDK framework and includes methods to handle entering and exiting a session:

```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The Cascading Pipeline is crucial as it defines the flow of data through the system, specifying how audio is processed from input to output. The vad threshold controls how readily incoming sound is treated as speech, and the turn_detector threshold sets how confident the model must be that the user has finished speaking (the exact semantics depend on the plugin versions):

```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The functions start_session and make_context manage the session lifecycle and setup:

```python
async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the cascading pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Running and Testing the Agent
Step 5.1: Running the Python Script
To run your AI voice assistant, execute the Python script:
```shell
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
Once the script is running, you will find a playground link in the console output. Use this link to join the session and interact with your AI voice assistant.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend your agent's functionality with custom tools, enabling you to integrate additional features and services.
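As a plain-Python illustration of the custom-tool idea (this is not the VideoSDK tool API; consult the framework documentation for the actual registration mechanism), a tool is essentially a named function the LLM can ask the agent to invoke when a query needs external data:

```python
# Illustrative tool registry: maps tool names to callables the agent can
# dispatch to when the LLM decides a query needs external data.
# Hypothetical example only -- not the VideoSDK API.

TOOLS = {}

def tool(name: str):
    """Decorator registering a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("bus_schedule")
def bus_schedule(route: str) -> str:
    # A real tool would query a transit API; this returns canned data.
    schedules = {"42": "every 15 minutes from 6am to 11pm"}
    return schedules.get(route, "unknown route")

def dispatch(name: str, **kwargs) -> str:
    if name not in TOOLS:
        return f"no tool named {name!r}"
    return TOOLS[name](**kwargs)

print(dispatch("bus_schedule", route="42"))
```

The same pattern generalizes: register a tool per public-service data source, and let the model's tool-calling mechanism choose which one to invoke.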
Exploring Other Plugins
While this tutorial uses specific plugins for STT, LLM, and TTS, you can explore other options available in the VideoSDK framework to suit your needs.
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API key is correctly configured in the .env file and that you have the necessary permissions.
Audio Input/Output Problems
Check your microphone and speaker settings to ensure they are correctly configured and functioning.
Dependency and Version Conflicts
Make sure all dependencies are installed with compatible versions as specified in the VideoSDK documentation.
Conclusion
Summary of What You've Built
In this tutorial, you've built a functional AI voice assistant for public services using the VideoSDK framework. You learned how to set up your development environment, create a custom agent, manage sessions, and test your agent.
Next Steps and Further Learning
Consider exploring advanced features of the VideoSDK framework and integrating additional plugins to enhance your AI voice assistant's capabilities. For a comprehensive understanding, refer to the AI voice agent core components overview.