Build a Restaurant AI Voice Agent

Step-by-step guide to building an AI voice agent for restaurants using VideoSDK. Includes code examples and testing instructions.

Introduction to AI Voice Agents in Restaurants

AI Voice Agents are transforming the way businesses interact with their customers, especially in the restaurant industry. These agents are designed to handle customer inquiries, manage reservations, and provide information about menu items, all through voice interaction. By automating these tasks, restaurants can enhance customer service, reduce wait times, and ensure a seamless dining experience.

What is an AI Voice Agent?

An AI Voice Agent is a software application that uses artificial intelligence to understand and respond to human speech. It typically involves components such as Speech-to-Text (STT) for converting spoken language into text, a Large Language Model (LLM) for processing the text and generating responses, and Text-to-Speech (TTS) for converting those responses back into spoken language.

Why are they important for the restaurant industry?

In the restaurant industry, AI Voice Agents can streamline operations by handling routine inquiries, managing bookings, and even upselling menu items. This not only improves efficiency but also enhances the customer experience by providing quick and accurate responses.

Core Components of a Voice Agent

  • STT (Speech-to-Text): Converts spoken language into text.
  • LLM (Large Language Model): Processes the text and generates appropriate responses.
  • TTS (Text-to-Speech): Converts text responses back into spoken language.
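To make the division of labor concrete, here is a toy version of that loop with stand-in functions. All three components below are hypothetical stubs for illustration, not VideoSDK APIs:

```python
# Toy sketch of the cascading STT -> LLM -> TTS loop.
# Each function is a stand-in stub, not a real plugin.

def stt(audio: bytes) -> str:
    """Stub: pretend we transcribed the caller's audio."""
    return "table for two at seven"

def llm(text: str) -> str:
    """Stub: pretend a language model produced a reply."""
    return f"Sure, I can book a {text}."

def tts(text: str) -> bytes:
    """Stub: pretend we synthesized speech for playback."""
    return text.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    transcript = stt(audio)   # 1. speech -> text
    reply = llm(transcript)   # 2. text -> response
    return tts(reply)         # 3. response -> speech
```

A real pipeline streams audio through these stages continuously; the structure of one conversational turn is the same.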

What You'll Build in This Tutorial

In this tutorial, you will build a voice agent for restaurants using the VideoSDK framework. The agent will be capable of handling multiple customer interactions simultaneously, providing information about the menu, taking reservations, and answering general inquiries.

Architecture and Core Concepts

High-Level Architecture Overview

The architecture of a voice agent involves several key components working together to process user input and generate responses. The data flow starts with the user speaking into a microphone. The audio is processed by the STT component, which converts it into text. This text is fed into the LLM, which generates a response. Finally, the TTS component converts the response back into audio, which is played to the user.

Understanding Key Concepts in the VideoSDK Framework

  • Agent: The core class representing your bot. It handles the logic for interacting with users.
  • CascadingPipeline: Manages the flow of audio through the STT, LLM, and TTS components in sequence.
  • VAD & TurnDetector: These components detect when the user is speaking and when their turn has ended, so the agent knows when to listen and when to respond.
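To build intuition for what a VAD threshold controls, here is a deliberately simplified energy-based detector. Real VADs such as Silero use trained neural models rather than RMS energy, so treat this purely as an illustration of thresholding:

```python
# Toy VAD: classify an audio frame as speech when its RMS energy exceeds a
# threshold. Illustrative only; SileroVAD uses a trained model, not energy.

def rms(frame):
    """Root-mean-square energy of a frame of samples in [-1.0, 1.0]."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def is_speech(frame, threshold=0.35):
    # A lower threshold makes the detector more sensitive (more frames
    # counted as speech); a higher one makes it stricter.
    return rms(frame) > threshold
```

Tuning the real plugin's threshold (0.35 in this tutorial's pipeline) trades off the same way: sensitivity to quiet speech versus robustness to background noise.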

Setting Up the Development Environment

Prerequisites

Before you begin, ensure you have Python 3.11+ installed and a VideoSDK account. You can sign up at app.videosdk.live.

Step 1: Create a Virtual Environment

To keep dependencies organized, create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`

Step 2: Install Required Packages

Install the agents SDK together with the plugins used in this tutorial (Silero VAD, the turn detector, Deepgram, OpenAI, and ElevenLabs):
pip install "videosdk-agents[silero,turn_detector,deepgram,openai,elevenlabs]"

Step 3: Configure API Keys in a .env file

Create a .env file in your project's root directory and add your VideoSDK auth token along with the API keys for the STT, LLM, and TTS providers used in the pipeline:
VIDEOSDK_AUTH_TOKEN=your_videosdk_auth_token_here
DEEPGRAM_API_KEY=your_deepgram_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here
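If you launch the script from a shell that does not export these variables, something must load the .env file at startup. python-dotenv's load_dotenv() is the standard tool; the hand-rolled sketch below exists only to make the file format explicit:

```python
import os

# Minimal .env loader sketch. In practice, prefer python-dotenv's
# load_dotenv(); this version just shows the KEY=value format.

def parse_env(text: str) -> dict:
    """Parse KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Load the file and export its keys to the process environment:
# os.environ.update(parse_env(open(".env").read()))
```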

Building the AI Voice Agent: A Step-by-Step Guide

Here is the complete code to build your voice agent:
import asyncio
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS

# Pre-download the Turn Detector model so the first session starts quickly
pre_download_model()

agent_instructions = "You are a friendly and efficient voice agent for restaurants. Your primary role is to assist customers with their dining needs. You can provide information about the restaurant's menu, take reservations, inform customers about special offers, and answer general inquiries about the restaurant's services. You are designed to handle multiple customer interactions simultaneously, ensuring a smooth and pleasant experience for all users. However, you are not capable of processing payments or handling complex dietary restrictions. Always remind customers to contact the restaurant directly for specific dietary needs or payment-related queries. Your goal is to enhance the dining experience by providing quick and accurate information while maintaining a courteous and professional demeanor."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the STT -> LLM -> TTS pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

To interact with your agent, you'll need a meeting ID. Use the following curl command to generate one:
curl -X POST https://api.videosdk.live/v2/rooms \
  -H "Authorization: YOUR_VIDEOSDK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'
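The same call can be made from Python's standard library. This sketch follows VideoSDK's REST documentation for the endpoint and auth header (a raw JWT, not a Bearer prefix); the assumed response shape is {"roomId": "..."}:

```python
import json
import urllib.request

# Build (but do not send) the room-creation request, mirroring the curl
# command above. The token is your VideoSDK JWT.

def build_room_request(token: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.videosdk.live/v2/rooms",
        data=json.dumps({}).encode("utf-8"),
        headers={"Authorization": token, "Content-Type": "application/json"},
        method="POST",
    )

# To actually create a room (requires a valid token and network access):
# with urllib.request.urlopen(build_room_request("YOUR_VIDEOSDK_TOKEN")) as resp:
#     room_id = json.loads(resp.read())["roomId"]
```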

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is where you define the behavior of your voice agent. It inherits from the Agent class and uses the agent_instructions to guide its interactions. The on_enter and on_exit methods define what the agent says when a session starts and ends.

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is crucial for processing user input and generating responses. It integrates various plugins:
  • DeepgramSTT: Converts user speech into text.
  • OpenAILLM: Processes the text and generates a response using GPT-4o.
  • ElevenLabsTTS: Converts the response text back into speech.
  • SileroVAD & TurnDetector: Manage when the agent listens and speaks.

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the agent, conversation flow, and pipeline. It connects to the VideoSDK service and starts the session. The make_context function sets up the room options, and the script's main block (if __name__ == "__main__":) starts the agent.

Running and Testing the Agent

Step 5.1: Running the Python Script

Run your script using:
python main.py

Step 5.2: Interacting with the Agent in the Playground

After running the script, you'll see a playground link in the console. Open it in a browser to interact with your agent. Speak into your microphone and listen to the agent's responses.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can extend your agent's functionality by integrating custom tools. This involves creating additional plugins or modifying the existing pipeline to handle specific tasks.
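For example, you might give the agent a reservation tool it can invoke mid-conversation. VideoSDK's agents package documents a function_tool decorator for registering such tools; here the tool body is shown as a plain, testable function, and the names (book_table, RESERVATIONS) are illustrative:

```python
# Illustrative reservation "tool" the agent could call. In the real agent
# you would register this via the SDK's tool mechanism; here it is a plain
# function so the logic stands on its own.

RESERVATIONS = []

def book_table(name: str, party_size: int, time: str) -> str:
    """Record a reservation and return a confirmation the LLM can speak."""
    if party_size < 1 or party_size > 12:
        return "Sorry, we can only seat parties of 1 to 12. Please call us directly."
    RESERVATIONS.append({"name": name, "party_size": party_size, "time": time})
    return f"Booked a table for {party_size} under {name} at {time}."
```

Returning a short natural-language string keeps the tool's result easy for the LLM to relay back to the caller verbatim.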

Exploring Other Plugins

Consider experimenting with other STT, LLM, and TTS plugins to optimize your agent's performance and capabilities.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API keys are correctly set in the .env file and that you're using the correct endpoint URLs.
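A quick preflight check catches missing keys before the pipeline fails with an opaque auth error. Adjust REQUIRED_KEYS to whatever names your .env file actually uses:

```python
import os

# Preflight sketch: report any keys the pipeline needs that are absent or
# empty. Key names here are examples; match them to your .env file.

REQUIRED_KEYS = ("VIDEOSDK_AUTH_TOKEN", "DEEPGRAM_API_KEY",
                 "OPENAI_API_KEY", "ELEVENLABS_API_KEY")

def missing_keys(env=None):
    env = os.environ if env is None else env
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example: missing_keys({"OPENAI_API_KEY": "sk-..."}) reports the other three.
```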

Audio Input/Output Problems

Check your microphone and speaker settings to ensure they're configured correctly for use with the agent.

Dependency and Version Conflicts

Ensure all dependencies are compatible with Python 3.11+ and that you've installed the correct versions of each package.
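Two stdlib checks settle most version questions quickly: whether the interpreter meets the 3.11 floor, and which version of a package is actually installed in the active environment:

```python
import sys
from importlib.metadata import version, PackageNotFoundError

# Sketch: verify the interpreter floor and look up installed package versions.

def python_ok(v=None) -> bool:
    """True if the interpreter is Python 3.11 or newer."""
    v = sys.version_info if v is None else v
    return (v[0], v[1]) >= (3, 11)

def installed_version(pkg: str):
    """Installed version string for pkg, or None if it is not installed."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None
```

Running installed_version("videosdk-agents") inside your virtual environment confirms whether pip installed the package where your script is actually running.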

Conclusion

Summary of What You've Built

In this tutorial, you've built a fully functional AI voice agent for restaurants using the VideoSDK framework. Your agent can handle multiple customer interactions, provide information, and manage reservations.

Next Steps and Further Learning

Explore additional features and plugins to further enhance your agent's capabilities. Consider integrating with other APIs or services to expand its functionality.
