Build an AI Voice Agent for Real Estate

Step-by-step guide to building an AI voice agent for real estate using VideoSDK. Includes complete code and testing instructions.

Introduction to AI Voice Agents in the Real Estate Industry

What is an AI Voice Agent?

An AI Voice Agent is a software application that uses artificial intelligence to interact with users through voice commands. These agents can understand spoken language, process the information, and respond accordingly. They are commonly used across industries to automate customer service, provide information, and perform tasks based on user requests.

Why are they important for the real estate industry?

In the real estate industry, AI Voice Agents can revolutionize how clients interact with property listings and real estate services. They can provide instant information about properties, schedule viewings, and even connect potential buyers with real estate agents. This automation not only enhances user experience but also increases efficiency and accessibility.

Core Components of a Voice Agent

  • Speech-to-Text (STT): Converts spoken language into text.
  • Large Language Model (LLM): Processes the text to understand and generate responses.
  • Text-to-Speech (TTS): Converts text responses back into spoken language.
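
Conceptually, these three components form a loop: audio in, text through the model, audio out. The sketch below illustrates that flow with stand-in stub functions; they are not the VideoSDK plugins used later in this guide, and their return values are made up for illustration.

```python
# Conceptual sketch of the STT -> LLM -> TTS loop. The three stubs below
# are placeholders, not the VideoSDK plugins used later in this tutorial.

def speech_to_text(audio: bytes) -> str:
    # A real STT engine (e.g. Deepgram) would transcribe the audio here.
    return "what homes are listed downtown"

def generate_response(transcript: str) -> str:
    # A real LLM (e.g. GPT-4o) would produce a contextual answer here.
    return f"You asked: '{transcript}'. Here are some matching listings."

def text_to_speech(text: str) -> bytes:
    # A real TTS engine (e.g. ElevenLabs) would synthesize audio here.
    return text.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    transcript = speech_to_text(audio)     # 1. STT
    reply = generate_response(transcript)  # 2. LLM
    return text_to_speech(reply)           # 3. TTS

audio_out = handle_turn(b"...user speech...")
print(audio_out.decode("utf-8"))
```

In the real agent, the CascadingPipeline orchestrates this same loop for you, streaming audio between the plugins instead of passing complete buffers.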

What You'll Build in This Tutorial

In this tutorial, we will build a fully functional AI Voice Agent tailored for the real estate industry using the VideoSDK framework. We will guide you through setting up the environment, writing the code, and testing the agent.

Architecture and Core Concepts

High-Level Architecture Overview

The AI Voice Agent architecture involves several components that work together to process user input and generate responses. The process begins with capturing user speech, which is then converted to text using STT. The text is processed by an LLM to generate a response, which is then converted back to speech using TTS.

Understanding Key Concepts in the VideoSDK Framework

  • Agent: The core class representing your bot. It handles interactions and manages the conversation flow.
  • Cascading Pipeline: Manages the flow of audio processing through the STT, LLM, and TTS plugins.
  • VAD & Turn Detector: These components help the agent determine when to listen and when to speak, ensuring smooth interaction.
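
To build intuition for what a VAD does, here is a deliberately naive energy-threshold detector: frames whose average signal energy exceeds a cutoff are treated as speech. This is purely illustrative; the SileroVAD plugin used later is a trained neural model, not a threshold check, and the sample frames below are synthetic.

```python
# A deliberately naive energy-threshold VAD, for intuition only --
# SileroVAD (used later in this tutorial) is a trained neural model.

def frame_energy(samples: list[float]) -> float:
    # Mean squared amplitude of one audio frame.
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples: list[float], threshold: float = 0.01) -> bool:
    # Frames with energy above the threshold are treated as speech.
    return frame_energy(samples) > threshold

silence = [0.0] * 160        # a quiet 10 ms frame at 16 kHz
voiced = [0.5, -0.5] * 80    # a loud synthetic frame

print(is_speech(silence))  # False
print(is_speech(voiced))   # True
```

Real VADs must also cope with background noise, breaths, and keyboard clicks, which is why a learned model outperforms a fixed threshold in practice.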

Setting Up the Development Environment

Prerequisites

To get started, ensure you have Python 3.11+ installed and a VideoSDK account. You can sign up at app.videosdk.live.

Step 1: Create a Virtual Environment

To avoid conflicts with other projects, create a virtual environment:
python -m venv real-estate-agent-env
source real-estate-agent-env/bin/activate  # On Windows use `real-estate-agent-env\Scripts\activate`

Step 2: Install Required Packages

Install the necessary packages using pip:
pip install videosdk-agents videosdk-plugins-silero videosdk-plugins-turn-detector videosdk-plugins-deepgram videosdk-plugins-openai videosdk-plugins-elevenlabs

Step 3: Configure API Keys in a .env File

Create a .env file in your project directory and add your API keys:
VIDEOSDK_API_KEY=your_videosdk_api_key
DEEPGRAM_API_KEY=your_deepgram_api_key
OPENAI_API_KEY=your_openai_api_key
ELEVENLABS_API_KEY=your_elevenlabs_api_key

Building the AI Voice Agent: A Step-by-Step Guide

Complete Code

Here's the complete code for our AI Voice Agent:
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from typing import AsyncIterator

# Pre-download the Turn Detector model
pre_download_model()

agent_instructions = "You are a knowledgeable real estate assistant AI Voice Agent designed to assist users in the real estate industry. Your primary role is to provide accurate and helpful information about real estate properties, market trends, and investment opportunities. You can answer questions related to property listings, pricing, and neighborhood insights. Additionally, you can assist users in scheduling property viewings and connecting them with real estate agents.\n\nCapabilities:\n1. Provide detailed information about property listings, including location, price, and features.\n2. Offer insights into real estate market trends and investment opportunities.\n3. Assist in scheduling property viewings and connecting users with real estate agents.\n4. Answer frequently asked questions about buying, selling, and renting properties.\n\nConstraints and Limitations:\n1. You are not a licensed real estate agent and cannot provide legal or financial advice.\n2. Always include a disclaimer advising users to consult with a licensed real estate professional for specific advice.\n3. You cannot access personal user data or make transactions on behalf of users.\n4. Ensure user privacy and data protection by not storing any personal information."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Create the cascading pipeline
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until manually terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()

Step 4.1: Generating a VideoSDK Meeting ID

To interact with your agent, you need a meeting ID. You can generate one using the VideoSDK API. Here's how you can do it using a curl command:
curl -X POST "https://api.videosdk.live/v1/rooms" \
  -H "Authorization: Bearer your_videosdk_api_key" \
  -H "Content-Type: application/json" \
  -d '{"name":"Real Estate Agent Room"}'
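
If you prefer to create the room from Python, the same request can be built with the standard library. The endpoint, headers, and payload below simply mirror the curl command above; substitute your real VideoSDK API key before sending.

```python
# Python equivalent of the curl command above, using only the standard
# library. Endpoint and payload mirror the curl example; substitute your
# real VideoSDK API key before sending.
import json
import urllib.request

def build_room_request(api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        "https://api.videosdk.live/v1/rooms",
        data=json.dumps({"name": "Real Estate Agent Room"}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_room_request("your_videosdk_api_key")
# To actually create the room (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```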

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is where we define the behavior of our AI Voice Agent. It inherits from the Agent class provided by VideoSDK. The on_enter and on_exit methods define what the agent says when a session starts and ends.
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

Step 4.3: Defining the Core Pipeline

The CascadingPipeline is crucial as it defines how the agent processes audio. It uses several plugins:
  • DeepgramSTT: Converts speech to text.
  • OpenAILLM: Processes the text to generate responses.
  • ElevenLabsTTS: Converts text responses back to speech.
  • SileroVAD: Detects voice activity to manage when the agent listens.
  • TurnDetector: Helps manage conversation flow by detecting when it's the agent's turn to speak.
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the agent session, connects to the context, and starts the conversation flow. The make_context function sets up the room options for the session.
async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )
    return JobContext(room_options=room_options)

Running and Testing the Agent

Step 5.1: Running the Python Script

To run your AI Voice Agent, execute the Python script:
python main.py

Step 5.2: Interacting with the Agent in the Playground

Once the script is running, you will see a playground link in the console. Use this link to join the session and interact with your AI Voice Agent. You can test its capabilities by asking questions related to real estate.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can extend your AI Voice Agent's functionality by integrating custom tools using the function_tool concept. This allows you to add specific features tailored to your needs.
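
As a sketch of what such a tool might contain, here is the logic of a hypothetical listing-search tool written as a plain Python function. The listings data and the function name are invented for illustration; how the function is registered with the agent (e.g. via VideoSDK's function_tool mechanism) is covered in the VideoSDK documentation.

```python
# Sketch of a custom tool's logic as a plain Python function. The listings
# data is made up for illustration; registering it with the agent (e.g. via
# VideoSDK's function_tool mechanism) is covered in the VideoSDK docs.

SAMPLE_LISTINGS = [
    {"id": "A101", "city": "Austin", "price": 450_000, "beds": 3},
    {"id": "A102", "city": "Austin", "price": 650_000, "beds": 4},
    {"id": "D201", "city": "Dallas", "price": 380_000, "beds": 2},
]

def search_listings(city: str, max_price: int) -> list[dict]:
    """Return listings in `city` priced at or under `max_price`."""
    return [
        listing for listing in SAMPLE_LISTINGS
        if listing["city"].lower() == city.lower()
        and listing["price"] <= max_price
    ]

print(search_listings("Austin", 500_000))  # matches only listing A101
```

Once registered as a tool, the LLM can decide to call it whenever a caller asks about available properties, grounding the spoken answer in your data instead of the model's general knowledge.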

Exploring Other Plugins

While we used specific plugins in this tutorial, VideoSDK supports various other STT, LLM, and TTS options that you can explore to enhance your agent's capabilities.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API keys are correctly set in the .env file. Double-check for any typos or missing keys.
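
A quick startup check can surface missing keys before the agent tries to connect. The sketch below assumes the .env variables have already been loaded into the process environment (for example by python-dotenv); the helper name is our own.

```python
# A small startup check for the keys listed in the .env file. Assumes the
# variables are already loaded into the environment (e.g. by python-dotenv).
import os

REQUIRED_KEYS = [
    "VIDEOSDK_API_KEY",
    "DEEPGRAM_API_KEY",
    "OPENAI_API_KEY",
    "ELEVENLABS_API_KEY",
]

def missing_keys(env=os.environ) -> list[str]:
    # Absent or empty values both count as missing.
    return [k for k in REQUIRED_KEYS if not env.get(k)]

missing = missing_keys()
if missing:
    print("Missing API keys:", ", ".join(missing))
```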

Audio Input/Output Problems

Verify that your microphone and speakers are working correctly. Check the permissions if you're using a web-based interface.

Dependency and Version Conflicts

Make sure all dependencies are installed and up-to-date. Use a virtual environment to manage package versions.

Conclusion

Summary of What You've Built

In this tutorial, you've built a functional AI Voice Agent for the real estate industry using VideoSDK. You've learned how to set up the environment, write the code, and test the agent.

Next Steps and Further Learning

Consider exploring additional features and plugins to enhance your agent. You can also look into deploying your agent in a production environment for real-world applications.
