Intent Recognition with AI Voice Agents

A step-by-step guide to building an AI Voice Agent for intent recognition using VideoSDK, complete with code examples.

Introduction to AI Voice Agents in Intent Recognition

AI Voice Agents are sophisticated systems designed to interact with users through natural language. They understand, process, and respond to human speech, making them invaluable in industries such as healthcare and customer service. In the context of intent recognition, these agents play a crucial role in deciphering user intentions from spoken language, allowing businesses to provide more personalized and efficient services.

What is an AI Voice Agent?

An AI Voice Agent is a software program that uses artificial intelligence to understand and respond to human speech. It leverages technologies like Speech-to-Text (STT), Text-to-Speech (TTS), and Large Language Models (LLMs) to process and generate natural language responses.

Why are they important for intent recognition?

Intent recognition turns free-form speech into actionable requests. In industries like healthcare, AI Voice Agents can use it to schedule appointments, answer health-related queries, and provide general advice. This capability not only enhances user experience but also improves operational efficiency by automating routine tasks.

Core Components of a Voice Agent

  • STT (Speech-to-Text): Converts spoken language into text.
  • LLM (Large Language Model): Processes the text to understand the intent and generate responses.
  • TTS (Text-to-Speech): Converts text responses back into spoken language.
For a comprehensive understanding of these elements, refer to the AI voice Agent core components overview.

What You’ll Build in This Tutorial

In this tutorial, you will build an AI Voice Agent using the VideoSDK framework. This agent will specialize in intent recognition, particularly in the healthcare domain, providing users with empathetic and informative interactions.

Architecture and Core Concepts

High-Level Architecture Overview

The AI Voice Agent architecture involves several key components working together to process user speech and generate responses. The flow begins with capturing user speech, converting it to text, processing the text to understand the intent, generating a response, and finally converting the response back to speech:

User speech → STT (transcription) → LLM (intent recognition + response) → TTS (speech synthesis) → Agent reply

Understanding Key Concepts in the VideoSDK Framework

  • Agent: The core class that represents your voice bot.
  • CascadingPipeline: Manages the flow of audio processing from STT to LLM to TTS.
  • VAD & TurnDetector: These components help the agent determine when to listen and when to speak, ensuring smooth interactions. For more details, explore the

    Silero Voice Activity Detection

    and

    Turn detector for AI voice Agents

    .

Setting Up the Development Environment

Prerequisites

To get started, ensure you have Python 3.11+ installed and a VideoSDK account, which you can create at app.videosdk.live. Because the pipeline in this tutorial uses Deepgram, OpenAI, and ElevenLabs, you will also need API keys for those services.

Step 1: Create a Virtual Environment

Create a virtual environment to manage your project dependencies:
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

Step 2: Install Required Packages

Install the necessary packages using pip:
pip install videosdk
pip install python-dotenv
# Note: the agents framework and the Silero, Turn Detector, Deepgram, OpenAI,
# and ElevenLabs plugins imported in the script below may be packaged
# separately; check the VideoSDK docs for the exact package names for your
# SDK version.

Step 3: Configure API Keys in a .env file

Create a .env file in your project directory and add your VideoSDK API keys:
VIDEOSDK_API_KEY=your_api_key_here
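The pipeline in this tutorial also calls Deepgram, OpenAI, and ElevenLabs, and each plugin typically reads its key from the environment. The variable names below are the conventional ones for these providers, not something this tutorial guarantees, so treat them as assumptions and check each plugin's documentation if your version expects something different:

DEEPGRAM_API_KEY=your_deepgram_key_here
OPENAI_API_KEY=your_openai_key_here
ELEVENLABS_API_KEY=your_elevenlabs_key_here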

Building the AI Voice Agent: A Step-by-Step Guide

Here is the complete, runnable code for our AI Voice Agent:
import asyncio
from dotenv import load_dotenv
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Load the API keys from the .env file created in Step 3
load_dotenv()

# Pre-download the Turn Detector model so the first session starts quickly
pre_download_model()

agent_instructions = "You are an AI Voice Agent specializing in intent recognition, acting as a helpful healthcare assistant. Your primary role is to understand and interpret user intents related to healthcare inquiries. You can answer questions about symptoms, provide general health advice, and assist in scheduling appointments. However, you are not a medical professional, and you must always include a disclaimer advising users to consult a doctor for medical advice. Your responses should be clear, concise, and empathetic, ensuring users feel heard and understood. You must prioritize user privacy and data security, ensuring that all interactions comply with relevant regulations and guidelines. Your capabilities include recognizing and processing various intents such as symptom inquiry, appointment scheduling, and general health advice, while your limitations include not providing specific medical diagnoses or treatment plans."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the STT -> LLM -> TTS pipeline with VAD and turn detection
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

If you want the agent to join a pre-created room, you need a meeting ID (see the commented-out room_id in make_context; omit it to let the SDK auto-create a room). You can generate one using the VideoSDK API:
curl -X POST "https://api.videosdk.live/v1/meetings" \
-H "Authorization: YOUR_API_KEY" \
-H "Content-Type: application/json"
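To use the returned meeting ID, pass it as room_id in make_context. The value below is a placeholder standing in for whatever your API call returns:

room_options = RoomOptions(
    room_id="abcd-efgh-ijkl",  # placeholder: the meeting ID returned above
    name="VideoSDK Cascaded Agent",
    playground=True
)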

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is where you define the behavior of your AI Voice Agent. It inherits from the Agent class and uses the agent_instructions to guide its interactions.
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        # Spoken greeting when the agent joins the session
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        # Spoken farewell when the session ends
        await self.session.say("Goodbye!")

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is crucial for processing audio data through various stages:
  • STT (DeepgramSTT): Converts speech to text using the Nova-2 model.
  • LLM (OpenAILLM): Processes the text to understand the intent and generate a response using GPT-4o.
  • TTS (ElevenLabsTTS): Converts the text response back to speech.
  • VAD (SileroVAD): Detects when the user is speaking.
  • TurnDetector: Ensures smooth conversation flow by detecting turns.
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the session, connects to the context, and starts the agent. The make_context function sets up the room options, and the main block starts the job.
async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )
    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )
    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Running and Testing the Agent

Step 5.1: Running the Python Script

Run your Python script to start the agent:
python main.py

Step 5.2: Interacting with the Agent in the Playground

Once the agent is running, you will see a playground link in the console. Use this link to join the session and interact with your AI Voice Agent. You can test the intents defined in the agent instructions by speaking into your microphone, for example "I have a headache and a sore throat" (symptom inquiry) or "I'd like to book an appointment for Friday" (appointment scheduling). For a hands-on experience, visit the AI Agent playground.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can extend the agent's functionality by adding custom tools using the function_tool concept in VideoSDK, as sketched below.
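Here is a minimal sketch of a custom tool for the healthcare scenario, assuming a function_tool decorator exported by videosdk.agents; the tool name, parameters, and backend call are hypothetical, so adapt them to your SDK version:

from videosdk.agents import Agent, function_tool  # assumption: decorator lives here

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    @function_tool
    async def schedule_appointment(self, date: str, time: str) -> dict:
        """Book a healthcare appointment for the given date and time."""
        # Hypothetical placeholder: call your real scheduling backend here.
        return {"status": "confirmed", "date": date, "time": time}

With a tool like this registered, the LLM can call schedule_appointment when it recognizes an appointment-scheduling intent instead of only answering in prose.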

Exploring Other Plugins

While this tutorial uses specific plugins, VideoSDK supports a range of STT, LLM, and TTS options, so you can swap components without changing the rest of the pipeline; see the sketch below.
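For example, you can tune the pipeline by changing only the constructor arguments already shown in this tutorial. The alternative model name below is an assumption for illustration; use any model your LLM plugin supports:

pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o-mini"),  # assumption: a smaller model for lower latency and cost
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)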

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API keys are correctly set in the .env file and that you have the necessary permissions. A quick way to confirm the keys are loading is the snippet below.
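This small standalone check prints which of the keys from Step 3 are visible to Python. The provider variable names match the ones suggested earlier in this tutorial and are assumptions if your plugins read different variables:

from dotenv import load_dotenv
import os

load_dotenv()
for key in ("VIDEOSDK_API_KEY", "DEEPGRAM_API_KEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY"):
    # Report "set" or "MISSING" without revealing the secret value
    print(key, "set" if os.getenv(key) else "MISSING")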

Audio Input/Output Problems

Check your microphone and speaker settings to ensure they are properly configured.

Dependency and Version Conflicts

If you hit import errors or version conflicts, recreate the virtual environment from Step 1 and reinstall the packages so the SDK and its plugins stay on matching versions.

Conclusion

Summary of What You’ve Built

In this tutorial, you built an AI Voice Agent capable of recognizing intents in the healthcare domain using the VideoSDK framework.

Next Steps and Further Learning

Explore additional plugins and customization options in VideoSDK to enhance your agent's capabilities and apply it to other domains.
