Introduction to AI Voice Agents in Real Estate
In today's rapidly evolving technological landscape, AI voice agents are becoming increasingly integral to various industries, including real estate. But what exactly is an AI voice agent? At its core, an AI voice agent is a software application capable of understanding and responding to human speech. It combines technologies such as Speech-to-Text (STT), Large Language Models (LLM), and Text-to-Speech (TTS) to facilitate seamless spoken interactions.
Why Are They Important for the Real Estate Industry?
In the real estate sector, AI voice agents can revolutionize customer service by providing instant responses to inquiries about property listings, scheduling viewings, and offering market insights. With 24/7 availability and personalized interactions, they are invaluable tools for real estate professionals.
Core Components of a Voice Agent
- Speech-to-Text (STT): Converts spoken language into text.
- Large Language Model (LLM): Processes the transcribed text and generates an appropriate response.
- Text-to-Speech (TTS): Converts the generated text back into spoken language.
For a comprehensive understanding, refer to the AI Voice Agent core components overview. A minimal sketch of how these components chain together follows below.
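To make the cascade concrete, here is a short, framework-agnostic sketch of a single conversational turn. The class and function names are purely illustrative placeholders, not part of the VideoSDK API:

```python
# Illustrative only: stand-in components showing the STT -> LLM -> TTS cascade.
class FakeSTT:
    def transcribe(self, audio_chunk: bytes) -> str:
        return "Do you have any two-bedroom listings downtown?"

class FakeLLM:
    def respond(self, text: str) -> str:
        return f"Here is what I found for: {text}"

class FakeTTS:
    def synthesize(self, text: str) -> bytes:
        return text.encode("utf-8")  # pretend this is synthesized audio

def handle_turn(audio_chunk: bytes, stt: FakeSTT, llm: FakeLLM, tts: FakeTTS) -> bytes:
    transcript = stt.transcribe(audio_chunk)  # 1. speech -> text
    reply_text = llm.respond(transcript)      # 2. text -> response text
    return tts.synthesize(reply_text)         # 3. response text -> speech

print(handle_turn(b"raw-audio", FakeSTT(), FakeLLM(), FakeTTS()))
```

In the VideoSDK framework, each of these stages is handled by a real plugin and wired together by a pipeline, as you will see later in the tutorial.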
What You'll Build in This Tutorial
In this tutorial, you will learn how to build a real estate AI voice agent using the VideoSDK framework. We'll guide you through setting up the environment, building the agent, and testing it in a real-world scenario.
Architecture and Core Concepts
High-Level Architecture Overview
The architecture of an AI voice agent involves several components working in tandem. The user's speech is first captured and converted into text using STT. This text is then processed by an LLM to generate a suitable response, which is subsequently converted back to speech using TTS. This entire process is orchestrated by the VideoSDK framework.
Understanding Key Concepts in the VideoSDK Framework
- Agent: The core class representing your bot.
- CascadingPipeline: Manages the flow of audio processing from STT to LLM to TTS. Learn more about the cascading pipeline in AI Voice Agents.
- VAD & TurnDetector: These components help the agent determine when to listen and when to speak, ensuring smooth interactions. For more details, explore the turn detector for AI Voice Agents.
Setting Up the Development Environment
Prerequisites
Before you begin, ensure you have Python 3.11+ installed and a VideoSDK account, which you can create at app.videosdk.live. These are essential for developing and testing your AI voice agent.
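If you want to verify the interpreter version before continuing, a quick check like the one below will do; this is just a convenience snippet and not part of the agent code:

```python
# check_python.py -- confirm the interpreter meets the Python 3.11+ requirement
import sys

if sys.version_info < (3, 11):
    raise SystemExit(f"Python 3.11+ is required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```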
Step 1: Create a Virtual Environment
Creating a virtual environment is crucial to manage dependencies effectively. Run the following commands:
```bash
python -m venv myenv
source myenv/bin/activate  # On Windows use `myenv\Scripts\activate`
```
Step 2: Install Required Packages
With the virtual environment activated, install the necessary packages using pip:
```bash
pip install videosdk-agents videosdk-plugins
```
If the provider plugins used later in this tutorial are not bundled with these packages, you may also need to install them individually (for example, videosdk-plugins-deepgram, videosdk-plugins-openai, videosdk-plugins-elevenlabs, videosdk-plugins-silero, and videosdk-plugins-turn-detector); check the VideoSDK documentation for the exact package names.
Step 3: Configure API Keys in a .env File
Create a .env file in your project directory and add your VideoSDK API credentials:
```
VIDEOSDK_API_KEY=your_api_key_here
```
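The pipeline built later in this tutorial also relies on Deepgram, OpenAI, and ElevenLabs, so their credentials must be available to the process as well. A plausible layout is shown below; the variable names follow each provider's usual convention, but verify the exact names the VideoSDK plugins read in their documentation:

```
# Assumed variable names -- verify against the VideoSDK plugin docs
DEEPGRAM_API_KEY=your_deepgram_key_here
OPENAI_API_KEY=your_openai_key_here
ELEVENLABS_API_KEY=your_elevenlabs_key_here
```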
Building the AI Voice Agent: A Step-by-Step Guide
To build your AI voice agent, we'll start by presenting the complete code block, then break it down into smaller parts for detailed explanations.
```python
import asyncio, os
from videosdk.agents import Agent, AgentSession, CascadingPipeline, JobContext, RoomOptions, WorkerJob, ConversationFlow
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from typing import AsyncIterator

# Download the turn-detection model once, before any session starts
pre_download_model()

agent_instructions = "You are a knowledgeable and friendly voice agent specialized in real estate. Your primary role is to assist users with inquiries related to buying, selling, and renting properties. You can provide information about property listings, market trends, and general real estate advice. You are capable of scheduling property viewings and connecting users with real estate agents for further assistance. However, you are not a licensed real estate agent and must inform users to consult with a professional for legal or financial advice. You should also respect user privacy and ensure that any personal data shared is handled securely and confidentially."

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # STT -> LLM -> TTS cascade, with VAD and turn detection for natural turn-taking
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session alive until the process is interrupted
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True  # enables the browser playground for quick testing
    )

    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Step 4.1: Generating a VideoSDK Meeting ID
To interact with your agent, you'll need a meeting ID. Use the following curl command to generate one:
```bash
curl -X POST \
  https://api.videosdk.live/v1/meetings \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
```
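If you prefer to stay in Python, the same request can be made with the requests library. The endpoint and headers simply mirror the curl command above; the name of the identifier field in the JSON response can vary by API version, so inspect the payload before relying on it:

```python
# Python equivalent of the curl command above (requires: pip install requests)
import os
import requests

response = requests.post(
    "https://api.videosdk.live/v1/meetings",  # endpoint taken from the curl example
    headers={
        "Authorization": f"Bearer {os.environ['VIDEOSDK_API_KEY']}",
        "Content-Type": "application/json",
    },
)
response.raise_for_status()
print(response.json())  # the returned JSON contains the meeting identifier
```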
Step 4.2: Creating the Custom Agent Class
The MyVoiceAgent class is where you define the personality and behavior of your agent. It inherits from the Agent class and includes methods like on_enter and on_exit to handle initial greetings and farewells.
```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)
    async def on_enter(self): await self.session.say("Hello! How can I help?")
    async def on_exit(self): await self.session.say("Goodbye!")
```
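For a real estate deployment you will usually want a domain-specific greeting rather than the generic one. A small variation, using only the same session.say calls shown above, might look like this:

```python
class RealEstateVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        # Domain-specific greeting instead of the generic "Hello!"
        await self.session.say(
            "Hi, I'm your real estate assistant. "
            "Ask me about listings, viewings, or market trends."
        )

    async def on_exit(self):
        await self.session.say("Thanks for stopping by. Goodbye!")
```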
Step 4.3: Defining the Core Pipeline
The CascadingPipeline is crucial as it defines the flow of audio processing. Here, we use DeepgramSTT for speech-to-text, OpenAILLM for language processing, and ElevenLabsTTS for text-to-speech, with SileroVAD and TurnDetector handling voice activity detection and turn-taking.
```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```
Step 4.4: Managing the Session and Startup Logic
The start_session function is responsible for initializing and managing the agent's session. It sets up the AI Voice Agent session with the defined pipeline and conversation flow.
```python
async def start_session(context: JobContext):
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8)
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow
    )

    try:
        await context.connect()
        await session.start()
        await asyncio.Event().wait()
    finally:
        await session.close()
        await context.shutdown()
```
The make_context function configures the room options, enabling the playground mode for testing.
```python
def make_context() -> JobContext:
    room_options = RoomOptions(
        name="VideoSDK Cascaded Agent",
        playground=True
    )

    return JobContext(room_options=room_options)
```
Finally, the script is executed via the if __name__ == "__main__": block, which starts the agent.
```python
if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```
Running and Testing the Agent
Step 5.1: Running the Python Script
To run your agent, execute the script using Python:
```bash
python main.py
```
Step 5.2: Interacting with the Agent in the Playground
Upon running the script, you'll see a playground link in the console. Use this link to join the session and interact with your agent. The agent will greet you and respond to your queries about real estate.
Advanced Features and Customizations
Extending Functionality with Custom Tools
The VideoSDK framework allows you to extend your agent's capabilities using custom tools. This can include integrating additional APIs or functionalities tailored to specific needs.
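Custom tools are typically exposed to the LLM as callable functions on the agent class. The sketch below assumes the framework provides a function_tool decorator in videosdk.agents (confirm the exact import and calling convention in the VideoSDK docs); the property lookup itself is a hypothetical placeholder you would replace with a real listings API:

```python
# Sketch only: the function_tool import is an assumption -- confirm it in the VideoSDK docs.
from videosdk.agents import Agent, function_tool

class RealEstateAgentWithTools(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    @function_tool
    async def search_listings(self, city: str, bedrooms: int) -> str:
        """Return a short summary of matching property listings."""
        # Hypothetical placeholder: swap in a call to your real listings API or database.
        return f"I found 3 {bedrooms}-bedroom listings in {city}. Would you like to schedule a viewing?"
```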
Exploring Other Plugins
While this tutorial uses specific plugins, VideoSDK supports various STT, LLM, and TTS plugins. Explore options like Cartesia for STT or Google Gemini for LLM to suit your project's requirements.
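Swapping providers is mostly a matter of constructing a different plugin and passing it to the CascadingPipeline. The module and class names below are illustrative guesses rather than verified API, so treat this as a pattern and check the VideoSDK plugin documentation for the real imports:

```python
# Pattern sketch: plugin module/class names here are assumptions, not verified API.
from videosdk.plugins.cartesia import CartesiaSTT   # hypothetical Cartesia STT plugin
from videosdk.plugins.google import GeminiLLM       # hypothetical Google Gemini LLM plugin

pipeline = CascadingPipeline(
    stt=CartesiaSTT(),                               # replaces DeepgramSTT
    llm=GeminiLLM(model="gemini-1.5-flash"),         # replaces OpenAILLM
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),    # unchanged from the tutorial
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8)
)
```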
Troubleshooting Common Issues
API Key and Authentication Errors
Ensure your API keys are correctly configured in the .env file. Double-check for any typos or incorrect values.
Audio Input/Output Problems
Verify that your microphone and speakers are properly connected and configured. Check system settings if audio issues persist.
Dependency and Version Conflicts
Ensure all dependencies are installed with compatible versions. Use a virtual environment to manage packages and avoid conflicts.
Conclusion
Summary of What You've Built
Congratulations! You've built a fully functional AI voice agent for the real estate industry using VideoSDK. This agent can assist users with property inquiries and more.
Next Steps and Further Learning
To further enhance your agent, consider exploring advanced features of the VideoSDK framework or integrating additional APIs for more complex interactions. Happy coding!