Build AI Voice Assistant for Transportation

Step-by-step guide to building an AI voice assistant for the transportation industry using VideoSDK.

Introduction to AI Voice Agents in the Transportation Industry

AI Voice Agents are revolutionizing various industries by providing seamless interaction between humans and machines. In the transportation industry, these agents can enhance user experiences by offering real-time traffic updates, optimal route suggestions, public transportation schedules, and more.

What is an AI Voice Agent?

An AI Voice Agent is a software application that uses speech recognition, natural language processing, and speech synthesis to interact with users through voice commands. These agents listen to user inputs, process the information, and respond with relevant information or actions.

Why are they important for the transportation industry?

In the transportation sector, AI Voice Agents can significantly improve efficiency and user satisfaction. They can assist with route planning, provide traffic updates, and facilitate bookings, making travel more convenient and informed.

Core Components of a Voice Agent

  • Speech-to-Text (STT): Converts spoken language into text.
  • Large Language Model (LLM): Processes the text to understand and generate responses.
  • Text-to-Speech (TTS): Converts text responses back into spoken language.
For a comprehensive understanding, refer to the AI Voice Agent core components overview.
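Conceptually, these three components form a simple loop. The sketch below wires stub implementations together to show how data flows through the cascade; the `fake_*` functions are placeholders for illustration only, not VideoSDK APIs:

```python
# Conceptual STT -> LLM -> TTS cascade with placeholder components.
# None of these functions are VideoSDK APIs; they only illustrate the data flow.

def fake_stt(audio: bytes) -> str:
    # A real STT engine (e.g. Deepgram) would transcribe the audio here.
    return "what is the next bus to downtown"

def fake_llm(text: str) -> str:
    # A real LLM (e.g. GPT-4o) would generate a grounded answer here.
    return f"You asked: '{text}'. The next bus departs in 10 minutes."

def fake_tts(text: str) -> bytes:
    # A real TTS engine (e.g. ElevenLabs) would synthesize speech here.
    return text.encode("utf-8")

def voice_agent_turn(audio_in: bytes) -> bytes:
    transcript = fake_stt(audio_in)   # Speech-to-Text
    reply = fake_llm(transcript)      # Large Language Model
    return fake_tts(reply)            # Text-to-Speech

audio_out = voice_agent_turn(b"<raw pcm audio>")
print(audio_out.decode("utf-8"))
```

The VideoSDK `CascadingPipeline` used later in this tutorial implements exactly this chain, with voice activity detection and turn detection added in front of it.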

What You'll Build in This Tutorial

In this tutorial, you will learn to build an AI Voice Assistant tailored for the transportation industry using the VideoSDK framework. This agent will be capable of handling transportation-related inquiries and tasks.

Architecture and Core Concepts

High-Level Architecture Overview

The AI Voice Agent's architecture is a sequence of processes, starting from capturing user speech and ending with a spoken response: capture audio, convert it to text, process the text with a language model, and convert the response back to speech.

Understanding Key Concepts in the VideoSDK Framework

Before diving into the code, it helps to know the building blocks you will use: the Agent class (defines the assistant's persona and instructions), the CascadingPipeline (chains STT, LLM, and TTS together with voice activity detection and turn detection), the ConversationFlow (orchestrates turns between the user and the agent), the AgentSession (binds the agent, pipeline, and flow into one running session), and the JobContext/WorkerJob pair (manage the room connection and the worker process).

Setting Up the Development Environment

Prerequisites

To get started, ensure you have Python 3.11+ installed and a VideoSDK account, which you can create at app.videosdk.live.

Step 1: Create a Virtual Environment

Create a virtual environment to manage dependencies:
```bash
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```

Step 2: Install Required Packages

Install the VideoSDK Agents framework along with the plugins the code below imports:

```bash
pip install "videosdk-agents[deepgram,openai,elevenlabs,silero,turn_detector]"
```

Step 3: Configure API Keys in a .env file

Create a .env file in your project directory and add your VideoSDK API key, plus the keys for the STT, LLM, and TTS providers used in the pipeline:

```bash
VIDEOSDK_API_KEY=your_api_key_here
DEEPGRAM_API_KEY=your_deepgram_key_here
OPENAI_API_KEY=your_openai_key_here
ELEVENLABS_API_KEY=your_elevenlabs_key_here
```
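The VideoSDK plugins read these keys from the environment. If you prefer not to pull in a dependency such as `python-dotenv` (which real projects should use, since it handles quoting and edge cases), a minimal loader for this simple `KEY=value` format can be sketched with the standard library alone:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=value lines, '#' comments, no quoting rules."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # setdefault: values already in the environment win
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # No .env file; rely on the existing environment.

load_env()
api_key = os.getenv("VIDEOSDK_API_KEY")
```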

Building the AI Voice Agent: A Step-by-Step Guide

Below is the complete, runnable code for the AI Voice Agent. We'll break it down in the following sections.

```python
import asyncio

from videosdk.agents import (
    Agent,
    AgentSession,
    CascadingPipeline,
    ConversationFlow,
    JobContext,
    RoomOptions,
    WorkerJob,
)
from videosdk.plugins.deepgram import DeepgramSTT
from videosdk.plugins.elevenlabs import ElevenLabsTTS
from videosdk.plugins.openai import OpenAILLM
from videosdk.plugins.silero import SileroVAD
from videosdk.plugins.turn_detector import TurnDetector, pre_download_model

# Pre-download the turn-detector model so the first session starts quickly
pre_download_model()

agent_instructions = (
    "You are a knowledgeable and efficient AI Voice Assistant designed "
    "specifically for the transportation industry. Your primary role is to "
    "assist users with transportation-related inquiries and tasks. You can "
    "provide real-time traffic updates, suggest optimal routes, offer "
    "information on public transportation schedules, and assist with booking "
    "transportation services. Additionally, you can answer frequently asked "
    "questions about transportation policies and safety guidelines. However, "
    "you must clearly state that you are not a human expert and that users "
    "should verify critical information through official transportation "
    "channels. You are not equipped to handle emergency situations and should "
    "advise users to contact emergency services if needed. Your responses "
    "should be concise, informative, and user-friendly, ensuring a seamless "
    "interaction experience."
)

class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")

async def start_session(context: JobContext):
    # Create the agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Wire up the STT -> LLM -> TTS pipeline with VAD and turn detection
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8),
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow,
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until the process is terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()

def make_context() -> JobContext:
    room_options = RoomOptions(
        # room_id="YOUR_MEETING_ID",  # Set to join a pre-created room; omit to auto-create
        name="VideoSDK Cascaded Agent",
        playground=True,
    )
    return JobContext(room_options=room_options)

if __name__ == "__main__":
    job = WorkerJob(entrypoint=start_session, jobctx=make_context)
    job.start()
```

Step 4.1: Generating a VideoSDK Meeting ID

To generate a meeting ID, call the rooms endpoint of the VideoSDK REST API (the Authorization header expects a VideoSDK-generated JWT token, not the raw API key):

```bash
curl -X POST "https://api.videosdk.live/v2/rooms" \
  -H "Authorization: YOUR_VIDEOSDK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{}'
```
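The same request can be issued from Python. The sketch below only builds the request object with the standard library (the endpoint path and headers are assumptions to verify against the current VideoSDK REST API docs); actually sending it requires a valid token and network access:

```python
import json
import urllib.request

def build_create_room_request(token: str) -> urllib.request.Request:
    """Build (but do not send) a room-creation request for the VideoSDK API."""
    return urllib.request.Request(
        "https://api.videosdk.live/v2/rooms",
        data=json.dumps({}).encode("utf-8"),  # empty JSON body, as in the curl example
        headers={
            "Authorization": token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_room_request("YOUR_TOKEN")
# To actually create the room: response = urllib.request.urlopen(req)
```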

Step 4.2: Creating the Custom Agent Class

The MyVoiceAgent class is a custom implementation of the Agent class. It provides specific instructions for the agent, ensuring it responds appropriately to transportation-related queries.
```python
class MyVoiceAgent(Agent):
    def __init__(self):
        super().__init__(instructions=agent_instructions)

    async def on_enter(self):
        await self.session.say("Hello! How can I help?")

    async def on_exit(self):
        await self.session.say("Goodbye!")
```

Step 4.3: Defining the Core Pipeline

The CascadingPipeline defines the data flow through various plugins, including STT, LLM, TTS, VAD, and Turn Detector.
```python
pipeline = CascadingPipeline(
    stt=DeepgramSTT(model="nova-2", language="en"),
    llm=OpenAILLM(model="gpt-4o"),
    tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
    vad=SileroVAD(threshold=0.35),
    turn_detector=TurnDetector(threshold=0.8),
)
```

Step 4.4: Managing the Session and Startup Logic

The start_session function initializes the AI Voice Agent session and manages the lifecycle of the voice agent.
```python
async def start_session(context: JobContext):
    # Create the agent and conversation flow
    agent = MyVoiceAgent()
    conversation_flow = ConversationFlow(agent)

    # Wire up the STT -> LLM -> TTS pipeline with VAD and turn detection
    pipeline = CascadingPipeline(
        stt=DeepgramSTT(model="nova-2", language="en"),
        llm=OpenAILLM(model="gpt-4o"),
        tts=ElevenLabsTTS(model="eleven_flash_v2_5"),
        vad=SileroVAD(threshold=0.35),
        turn_detector=TurnDetector(threshold=0.8),
    )

    session = AgentSession(
        agent=agent,
        pipeline=pipeline,
        conversation_flow=conversation_flow,
    )

    try:
        await context.connect()
        await session.start()
        # Keep the session running until the process is terminated
        await asyncio.Event().wait()
    finally:
        # Clean up resources when done
        await session.close()
        await context.shutdown()
```
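Note that `asyncio.Event().wait()` blocks forever, so the `finally` block only runs when the task is cancelled from outside. One common refinement (a sketch under stated assumptions, not part of the VideoSDK API) is to tie a shutdown event to SIGINT/SIGTERM so that Ctrl+C triggers a clean teardown:

```python
import asyncio
import signal

async def run_until_interrupted() -> str:
    """Wait on an event that OS signals (or a timer, here) can set."""
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        try:
            loop.add_signal_handler(sig, stop.set)
        except (NotImplementedError, RuntimeError):
            pass  # Signal handlers are unavailable on some platforms/threads
    # In start_session you would `await stop.wait()` instead of asyncio.Event().wait()
    loop.call_later(0.01, stop.set)  # simulate an incoming signal for this demo
    await stop.wait()
    return "shutting down cleanly"

print(asyncio.run(run_until_interrupted()))
```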

Running and Testing the Agent

Step 5.1: Running the Python Script

To run the agent, execute the Python script:
```bash
python main.py
```

Step 5.2: Interacting with the Agent in the Playground

Once the agent is running, you can interact with it via the VideoSDK playground link printed in the console. This lets you test the agent's functionality and its responses to transportation-related queries.

Advanced Features and Customizations

Extending Functionality with Custom Tools

You can extend the agent's functionality by integrating custom tools using the function_tool concept in VideoSDK.
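As a sketch, a custom tool is essentially a well-described Python function the LLM can call. The `get_next_departure` helper and its timetable below are made-up examples, and the decorator line is left as a comment because the exact `function_tool` usage should be checked against the VideoSDK docs:

```python
# Hypothetical custom tool for the transportation agent.
# In VideoSDK you would expose it to the LLM with the function_tool decorator:
#   from videosdk.agents import function_tool
#   @function_tool
def get_next_departure(route: str) -> str:
    """Return the next departure time for a public transport route."""
    # Made-up timetable; a real tool would query a transit API here.
    timetable = {"42": "14:05", "A1": "14:12"}
    time = timetable.get(route)
    if time is None:
        return f"No schedule found for route {route}."
    return f"Route {route} departs next at {time}."

print(get_next_departure("42"))
```

The docstring matters: it is what the LLM reads to decide when (and with which arguments) to call the tool.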

Exploring Other Plugins

Explore other plugins for STT, LLM, and TTS to customize the agent's capabilities further.

Troubleshooting Common Issues

API Key and Authentication Errors

Ensure your API key is correctly configured in the .env file and is valid.

Audio Input/Output Problems

Check your audio device settings and ensure the correct input/output devices are selected.

Dependency and Version Conflicts

Ensure all dependencies are installed with compatible versions. Use pip freeze to check installed packages.
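Complementing `pip freeze`, the standard library's `importlib.metadata` lets you check programmatically what is installed. The helper below returns `None` instead of raising when a package is missing (the VideoSDK package names in the example are assumptions to match your actual install):

```python
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Example: check the packages your imports rely on before debugging further.
for pkg in ("videosdk-agents", "videosdk-plugins-deepgram"):
    print(pkg, "->", installed_version(pkg))
```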

Conclusion

Summary of What You've Built

In this tutorial, you've built a functional AI Voice Assistant for the transportation industry, capable of handling various queries and tasks.

Next Steps and Further Learning

Explore more advanced features and plugins in the VideoSDK framework to enhance your AI Voice Agent further. For deployment details, refer to the AI Voice Agent deployment guide.
