Build a Function Calling Voice Agent

Step-by-step guide to build a function calling voice agent using VideoSDK, complete with code examples and testing instructions.

Introduction to AI Voice Agents for Function Calling

AI Voice Agents are sophisticated systems designed to interpret and respond to human voice commands. These agents leverage advanced technologies like Speech-to-Text (STT), Text-to-Speech (TTS), and Large Language Models (LLM) to process and understand spoken language, execute tasks, and provide feedback to users. In the context of function calling, these agents can perform predefined functions based on user instructions, making them invaluable in various industries such as customer service, smart home automation, and personal assistance.

What is an AI Voice Agent?

An AI Voice Agent is a digital assistant capable of understanding and processing human speech to perform tasks. These agents utilize STT to convert speech into text, LLMs to comprehend and generate responses, and TTS to convert text back into speech, creating a seamless interaction with users. For a detailed setup, refer to the Voice Agent Quick Start Guide.

Why are they important for the function calling voice agent industry?

In industries requiring automation and efficiency, function calling voice agents streamline operations by executing specific tasks through voice commands. They enhance user experience by providing quick responses and reducing the need for manual input, proving beneficial in sectors like customer support, healthcare, and smart home devices.

What You'll Build in This Tutorial

In this tutorial, you will learn to build a function calling voice agent using the VideoSDK framework. We will guide you through setting up the environment, implementing the agent, and testing it in a simulated environment.

Architecture and Core Concepts

High-Level Architecture Overview

The architecture of our voice agent involves a seamless flow of data from user speech to agent response. When a user speaks, the audio is processed through a series of components: STT transcribes the speech, LLM interprets the text, and TTS generates a spoken response. This process is managed by a cascading pipeline that ensures efficient handling of each step.

Understanding Key Concepts in the VideoSDK Framework

  • Agent: The core class representing your bot, responsible for handling interactions. For an in-depth understanding, see the AI Voice Agent core components overview.
  • CascadingPipeline: Manages the flow of audio processing from STT to LLM to TTS.
  • VAD & TurnDetector: These components help the agent determine when to listen and when to speak, ensuring smooth and natural interactions. The Silero Voice Activity Detection plugin can be particularly useful here.

Setting Up the Development Environment

Prerequisites

To get started, ensure you have Python 3.11+ installed and a VideoSDK account, which you can create at app.videosdk.live.

Step 1: Create a Virtual Environment

Begin by creating a virtual environment to manage your project dependencies:
python -m venv voice-agent-env
source voice-agent-env/bin/activate  # On Windows use `voice-agent-env\Scripts\activate`

Step 2: Install Required Packages

Install the necessary packages using pip:
pip install videosdk
pip install python-dotenv
# The pipeline below also relies on plugin packages (Silero VAD, turn
# detector, Deepgram, OpenAI, ElevenLabs); install these per the VideoSDK
# plugin docs, as exact package names can vary by SDK version.

Step 3: Configure API Keys in a .env file

Create a .env file in your project directory and add your API keys:
VIDEOSDK_API_KEY=your_api_key_here
VIDEOSDK_API_SECRET=your_api_secret_here
# The Deepgram, OpenAI, and ElevenLabs plugins also expect their own keys
# (typically DEEPGRAM_API_KEY, OPENAI_API_KEY, and ELEVENLABS_API_KEY);
# check each plugin's docs for the exact variable names.
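Since python-dotenv is installed, load these keys at startup before any plugin reads them. As an illustration, here is a stdlib-only sketch of what `load_dotenv()` amounts to (`load_env_file` is a hypothetical helper written for this example, not part of any SDK):

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Minimal stand-in for python-dotenv's load_dotenv():
    read KEY=VALUE lines and export them via os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and malformed entries
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

# Demonstrate with a throwaway .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("VIDEOSDK_API_KEY=demo_key\nVIDEOSDK_API_SECRET=demo_secret\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["VIDEOSDK_API_KEY"])  # demo_key
```

In the real project, simply call `load_dotenv()` from python-dotenv at the top of your script instead.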

Building the AI Voice Agent: A Step-by-Step Guide

Here is the complete code for building your voice agent:
import asyncio

from dotenv import load_dotenv
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Load API keys from the .env file before any plugin is constructed
load_dotenv()

# Pre-download the Turn Detector model so the first session starts quickly
pre_download_model()

agent_instructions = (
    "You are a 'function calling voice agent' designed to assist users by "
    "executing specific functions based on voice commands. Your persona is "
    "that of a friendly and efficient digital assistant. Your primary "
    "capabilities include:\n\n"
    "1. Understanding and processing voice commands to call predefined functions.\n"
    "2. Providing users with feedback on the success or failure of the function execution.\n"
    "3. Offering suggestions for alternative commands if the initial request cannot be fulfilled.\n\n"
    "Constraints and limitations:\n\n"
    "1. You cannot perform any actions outside the predefined functions.\n"
    "2. You must inform users if a requested function is unavailable or if there are any limitations in executing it.\n"
    "3. You are not capable of making decisions or providing advice beyond executing the specified functions.\n"
    "4. Always maintain user privacy and do not store any personal data.\n\n"
    "Your goal is to enhance user experience by efficiently executing "
    "functions and providing clear, concise feedback."
)

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the STT -> LLM -> TTS pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

With playground=True and no room_id set, the agent auto-creates a room, so this step is optional. If you want the agent to join a pre-created room instead, create one via the rooms API and set the returned ID as room_id (YOUR_JWT_TOKEN is the auth token generated from your API key and secret):
curl -X POST "https://api.videosdk.live/v2/rooms" \
-H "Authorization: YOUR_JWT_TOKEN" \
-H "Content-Type: application/json"
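The authorization token in the curl command above is a JWT signed with your API secret. Assuming the commonly documented payload fields (apikey, permissions, exp — verify these against the current VideoSDK authentication docs), here is a stdlib-only sketch of generating one:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(api_key: str, secret: str, ttl_seconds: int = 3600) -> str:
    """Build an HS256 JWT. The payload fields here are assumptions based
    on VideoSDK's documented auth token format; confirm before use."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "apikey": api_key,
        "permissions": ["allow_join"],  # grant join-only access
        "exp": int(time.time()) + ttl_seconds,
    }
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(signature)}"

token = make_token("your_api_key_here", "your_api_secret_here")
print(token.count("."))  # 2 -> header.payload.signature
```

In production, a library such as PyJWT does the same thing in one call; the point here is only to show what the token contains.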

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is where you define your agent's behavior. It inherits from the Agent class and uses predefined instructions to guide its interactions. The on_enter and on_exit methods are used to greet users and say goodbye, respectively.

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is a crucial component that processes audio data. It consists of:
  • STT (DeepgramSTT): Transcribes spoken words into text.
  • LLM (OpenAILLM): Interprets the text to understand user intent.
  • TTS (ElevenLabsTTS): Converts the response text back into speech.
  • VAD (SileroVAD) & TurnDetector: Manage when the agent listens and responds, ensuring smooth interactions.

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the agent session, connecting the agent with its conversation flow and pipeline. The make_context function sets up the room options, enabling a playground mode for testing. The main block runs the agent, keeping it active until manually terminated.
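The keep-alive and cleanup logic in start_session follows a standard asyncio pattern: block on an Event, and release resources in a finally block so cleanup runs even on cancellation. A minimal self-contained sketch of that pattern (run_session and the stop event are illustrative, not SDK APIs):

```python
import asyncio

async def run_session(stop: asyncio.Event, log: list) -> None:
    """Mimics start_session: 'connect', wait until asked to stop,
    then always clean up in the finally block."""
    try:
        log.append("connected")
        await stop.wait()  # parks here, like `await asyncio.Event().wait()`
    finally:
        log.append("closed")  # cleanup runs even on cancellation/shutdown

async def main() -> list:
    stop = asyncio.Event()
    log = []
    task = asyncio.create_task(run_session(stop, log))
    await asyncio.sleep(0)  # let the session task start running
    stop.set()              # simulate manual termination
    await task
    return log

log = asyncio.run(main())
print(log)  # ['connected', 'closed']
```

In the tutorial code the Event is never set, so the session runs until the process is terminated; the finally block still guarantees session.close() and context.shutdown() execute.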

Running and Testing the Agent

Step 5.1: Running the Python Script

To start your agent, run the following command in your terminal:
python main.py

Step 5.2: Interacting with the Agent in the Playground

Once the agent is running, look for the playground link in your console. Open it in a browser to interact with your agent. Speak commands and observe how the agent processes and responds to them. For more detailed session management, refer to AI Voice Agent Sessions.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can enhance your agent's capabilities by integrating custom tools. This allows the agent to perform more complex functions beyond the predefined ones.
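As a sketch of what a custom tool can look like: agent frameworks of this kind typically expose a decorator for registering plain Python functions as callable tools. The decorator name and import path below are assumptions — check the VideoSDK function-tools docs for the real API. The guarded import lets the snippet run even without the SDK installed:

```python
try:
    # Assumed import path; verify against the VideoSDK agents docs
    from videosdk.agents import function_tool
except ImportError:
    # Fallback no-op decorator so this sketch runs standalone
    def function_tool(fn):
        return fn

@function_tool
def get_order_status(order_id: str) -> dict:
    """Look up an order's status. In a real agent this would query
    your backend; here it returns canned data for illustration."""
    fake_db = {"1001": "shipped", "1002": "processing"}
    status = fake_db.get(order_id, "not found")
    return {"order_id": order_id, "status": status}

print(get_order_status("1001"))
```

Registered tools are then made available to the agent (typically via the Agent constructor or a registration method); consult the SDK reference for the exact mechanism, and keep each tool's docstring descriptive, since the LLM uses it to decide when to call the function.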

Exploring Other Plugins

The VideoSDK framework supports various plugins for STT, LLM, and TTS. Experiment with different options to optimize your agent's performance and tailor it to specific needs.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API keys are correctly configured in the .env file. Double-check for typos or missing entries.

Audio Input/Output Problems

Verify your microphone and speaker settings. Ensure the correct devices are selected and functioning properly.

Dependency and Version Conflicts

Use a virtual environment to manage dependencies and avoid conflicts. Ensure all required packages are installed with compatible versions.

Conclusion

Summary of What You've Built

In this tutorial, you've built a function calling voice agent using the VideoSDK framework. You learned how to set up the environment, implement the agent, and test it in a playground.

Next Steps and Further Learning

Explore additional features and plugins to enhance your agent's capabilities. Continue learning by experimenting with different configurations and customizations to suit your specific use cases.
