Introduction to AI Voice Agents in the Banking Industry
In today's fast-paced world, the banking industry is constantly seeking innovative ways to enhance customer service and streamline operations. One such innovation is the AI Voice Agent, a technology that allows customers to interact with banking services using natural language. In this tutorial, we will guide you through the process of building an AI Voice Assistant tailored for the banking sector.

What is an AI Voice Agent?
An AI Voice Agent is a software application that uses artificial intelligence to process and respond to spoken language. It combines technologies such as Speech-to-Text (STT), Natural Language Processing (NLP), and Text-to-Speech (TTS) to understand and interact with users.

Why are they important for the Banking Industry?
AI Voice Agents offer numerous benefits to the banking industry. They can handle routine inquiries, provide information on account balances, and assist with online banking setup, thereby freeing up human resources for more complex tasks. This not only improves efficiency but also enhances customer satisfaction by providing 24/7 support.
Core Components of a Voice Agent
- Speech-to-Text (STT): Converts spoken language into text.
- Large Language Model (LLM): Processes the text to understand and generate responses.
- Text-to-Speech (TTS): Converts the generated text back into spoken language.
To understand how these components work together, you can explore the AI Voice Agent core components overview.

What You'll Build in This Tutorial
In this tutorial, you'll learn to build a fully functional AI Voice Assistant for the banking industry using the VideoSDK framework. We'll guide you through setting up the development environment, implementing the voice agent, and testing it in a real-world scenario.

Architecture and Core Concepts
High-Level Architecture Overview
The AI Voice Agent architecture involves several components working together to provide seamless interaction. Here's a high-level overview of the data flow:
- User Speech: The user speaks into the microphone.
- Voice Activity Detection (VAD): Detects when the user starts and stops speaking.
- Speech-to-Text (STT): Converts the spoken words into text.
- Large Language Model (LLM): Processes the text to generate a response.
- Text-to-Speech (TTS): Converts the response text back into speech.
- Agent Response: The agent speaks back to the user.
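The steps above can be sketched as a single conversational turn. The stub functions below are illustrative placeholders only, not the real streaming components the VideoSDK framework provides:

```python
def transcribe(audio: bytes) -> str:
    """STT stub: a real component would call a speech-to-text service."""
    return "what's my account balance?"

def generate_reply(transcript: str) -> str:
    """LLM stub: a real component would prompt a language model."""
    return f"I heard: {transcript} Please check your banking app for details."

def synthesize(text: str) -> bytes:
    """TTS stub: a real component would return synthesized audio."""
    return text.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One turn of the cascade: audio in -> STT -> LLM -> TTS -> audio out."""
    transcript = transcribe(audio)
    reply = generate_reply(transcript)
    return synthesize(reply)
```

In the real pipeline each stage streams data asynchronously, but the ordering is exactly this: the transcript feeds the model, and the model's reply feeds the synthesizer.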

Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot, responsible for handling interactions.
- CascadingPipeline: Manages the flow of audio processing from STT to TTS. For more details, you can refer to the Cascading pipeline in AI Voice Agents.
- VAD & TurnDetector: Components that help the agent know when to listen and when to respond.
Setting Up the Development Environment
Prerequisites
Before you begin, ensure you have Python 3.11+ installed on your system. You'll also need a VideoSDK account, which you can create at app.videosdk.live.
Step 1: Create a Virtual Environment
To keep your project dependencies organized, create a virtual environment:
```shell
python -m venv banking-voice-agent
source banking-voice-agent/bin/activate  # On Windows use `banking-voice-agent\Scripts\activate`
```

Step 2: Install Required Packages
Install the necessary packages using pip:

```shell
pip install videosdk-agents videosdk-plugins
```

Step 3: Configure API Keys in a .env File
Create a .env file in your project directory to store your VideoSDK API keys and other sensitive information:

```
VIDEOSDK_API_KEY=your_api_key_here
```
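The SDK reads keys like this from environment variables, so the .env file has to be loaded into the process environment before the agent starts. The python-dotenv package is the usual choice; as a minimal stdlib-only sketch, a loader could look like this:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: exports KEY=value lines into the environment.

    Blank lines, '#' comments, and lines without '=' are skipped.
    Variables already set in the environment are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

If you install python-dotenv instead, calling `load_dotenv()` at the top of your script achieves the same thing.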
Building the AI Voice Agent: A Step-by-Step Guide
Here is the complete, runnable code for our AI Voice Agent:
```python
import asyncio
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model so it's cached before sessions start
pre_download_model()

agent_instructions = "You are a knowledgeable and friendly AI Voice Assistant specialized in the banking industry. Your primary role is to assist customers with their banking needs by providing information on account balances, recent transactions, and general banking inquiries. You can also guide users through the process of setting up online banking, explain different banking products, and offer tips on financial management. However, you are not authorized to perform any transactions, access personal account details, or provide financial advice. Always remind users to contact their bank directly for any sensitive or transaction-related queries. Your responses should be clear, concise, and ensure the user's privacy and security are prioritized."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your agent, you'll need a meeting ID. You can generate one using the VideoSDK API. Here's a sample curl command:

```shell
curl -X POST 'https://api.videosdk.live/v1/meetings' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-H 'Content-Type: application/json'
```
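If you prefer staying in Python, the same request can be built with the standard library. The endpoint and headers mirror the curl command above; the exact shape of the JSON response (including the field that holds the meeting ID) should be checked against the VideoSDK API reference:

```python
import urllib.request

def build_meeting_request(token: str) -> urllib.request.Request:
    # Mirrors the curl call above. Send it with urllib.request.urlopen()
    # and parse the JSON body to read the meeting ID; the response field
    # name is not shown here, so verify it in the VideoSDK API docs.
    return urllib.request.Request(
        "https://api.videosdk.live/v1/meetings",
        data=b"{}",
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```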
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class is where you define the behavior of your AI Voice Assistant. It inherits from the Agent class and implements the on_enter and on_exit methods to greet users and say goodbye, respectively.

```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```
Step 4.3: Defining the Core Pipeline
The CascadingPipeline is a crucial part of the agent, defining how audio data flows through the system. It chains together the STT, LLM, TTS, VAD, and TurnDetector components.

```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The start_session function initializes the agent session and starts the conversation flow. The make_context function sets up the room options, and the main block runs the agent.

```python
async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Running and Testing the Agent
Step 5.1: Running the Python Script
To start your AI Voice Agent, run the Python script using the command:
```shell
python main.py
```

Step 5.2: Interacting with the Agent in the Playground
Once the script is running, you'll receive a playground link in the console. Open this link in your browser to interact with your agent. Speak into your microphone, and the agent will respond based on the instructions you've provided. You can explore the AI Agent playground for more interactive testing.

Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend the functionality of your agent by integrating custom tools. This can include additional plugins or custom logic to handle specific tasks.
Exploring Other Plugins
While this tutorial uses specific plugins for STT, LLM, and TTS, the VideoSDK framework supports various other options. Explore plugins like Cartesia for STT or Google Gemini for LLM to enhance your agent's capabilities.
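Swapping providers is just a matter of passing different components to the CascadingPipeline. The sketch below shows the idea; the plugin module and class names used here are assumptions, so check the VideoSDK plugin documentation for the exact imports and constructor parameters before using them:

```python
# Hypothetical alternative pipeline -- CartesiaSTT, GoogleLLM, and the
# model name below are assumed names for illustration, not verified APIs.
from videosdk.plugins.cartesia import CartesiaSTT   # assumed import path
from videosdk.plugins.google import GoogleLLM       # assumed import path

pipeline = CascadingPipeline(
    stt=CartesiaSTT(),
    llm=GoogleLLM(model="gemini-2.0-flash"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```

Because the rest of the agent only interacts with the pipeline object, no other code needs to change when you swap providers.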
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API keys are correctly configured in the .env file. Double-check the VideoSDK documentation for any changes in authentication methods.

Audio Input/Output Problems
If you encounter issues with audio, verify your microphone and speaker settings. Ensure they are correctly configured and accessible by the application.
Dependency and Version Conflicts
Use a virtual environment to manage dependencies and avoid version conflicts. Check the compatibility of installed packages with your Python version.
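Once your setup works, one simple way to keep it reproducible is to pin the exact package versions:

```shell
# Inside the activated virtual environment: record exact working versions
python -m pip freeze > requirements.txt

# Recreate the same environment later (or on another machine)
python -m pip install -r requirements.txt
```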
Conclusion
Summary of What You've Built
Congratulations! You've built a fully functional AI Voice Assistant for the banking industry using the VideoSDK framework. This agent can handle customer inquiries and provide information efficiently.
Next Steps and Further Learning
To further enhance your AI Voice Agent, consider exploring additional plugins and custom tools. Stay updated with the latest developments in the VideoSDK framework to leverage new features and improvements.