What is Interactive Live Streaming?

Interactive Live Streaming is gaining popularity among younger audiences because it creates a sense of community and offers immersive experiences, diverse content, and monetization opportunities. Content creators, influencers, brands, celebrities, event organizers, and educators use it to produce engaging, meaningful content. As a software developer in the video space, it's essential to understand the difference between traditional and Interactive Live Streaming: the latter allows real-time engagement between the streamer and the audience, offering a personalized, innovative experience and giving creators and businesses a unique opportunity to build a loyal community and monetize their streams.

React Interactive Live Video Streaming (Demo)

Several platforms in the market offer Interactive Live Streaming; among the most popular are Twitch, YouTube Live, Facebook Live, and Instagram Live.

Can we build an Interactive Live Streaming app ourselves?

To successfully host an Interactive Live Streaming platform, a robust infrastructure is essential to serve thousands of participants while maintaining reliability and stability. Building this type of highly complex application is indeed possible, and Video SDK already provides a well-tested infrastructure that can handle this kind of workload.

Therefore, we will be creating our Interactive Live Streaming App using VideoSDK.live, which will ensure a reliable and stable platform for our users. Let's get started!

4 Steps to Build React.js Interactive Live Streaming App using Video SDK

Tools for building an Interactive Live Streaming App

  • VideoSDK.Live's React SDK
  • VideoSDK.Live's HLS Composition
  • VideoSDK.Live's HLS Streaming

Step 1: Understanding App Functionalities and Project Structure

I will be creating this app for two types of users: Speaker and Viewer.

  • Speakers will have all media controls, i.e. they can toggle their webcam and mic to share content with the viewers. A speaker can also start the HLS stream so that viewers can consume the content.
  • Viewers will not have any media controls; they will simply watch the VideoSDK HLS stream started by a speaker.

Pre-requisites before starting to write code:

After our coding environment is set up, we can start writing code. First, I will create a new React app using create-react-app and install a few useful dependencies.

npx create-react-app videosdk-interactive-live-streaming-app

cd videosdk-interactive-live-streaming-app

npm install @videosdk.live/react-sdk react-player hls.js

Project Structure

I will create 3 screens:

  1. Welcome Screen
  2. Speaker Screen
  3. Viewer Screen
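How appData drives which of these screens renders can be sketched as a pure function. This is a minimal illustration with hypothetical names (`screenFor` is not part of the app's code); it mirrors the conditional render in App.js:

```javascript
// Illustrative sketch: which screen renders for a given appData state.
// screenFor is a hypothetical helper, not part of the actual app.
function screenFor(appData) {
  if (!appData.meetingId) return "WelcomeScreen"; // no meeting joined yet
  // "CONFERENCE" mode means speaker; anything else is a viewer
  return appData.mode === "CONFERENCE" ? "SpeakerScreen" : "ViewerScreen";
}

console.log(screenFor({ meetingId: null, mode: null })); // WelcomeScreen
console.log(screenFor({ meetingId: "abcd-1234", mode: "CONFERENCE" })); // SpeakerScreen
console.log(screenFor({ meetingId: "abcd-1234", mode: "VIEWER" })); // ViewerScreen
```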

Below is the folder structure of our app.


App Container

I will prepare a basic App.js. This file will contain all the screens and render each screen conditionally according to changes in the appData state.


import React, { useState } from "react";
import SpeakerScreenContainer from "./screens/speakerScreen/SpeakerScreenContainer";
import ViewerScreenContainer from "./screens/ViewerScreenContainer";
import WelcomeScreenContainer from "./screens/WelcomeScreenContainer";

const App = () => {
  const [appData, setAppData] = useState({ meetingId: null, mode: null });

  return appData.meetingId ? (
    appData.mode === "CONFERENCE" ? (
      <SpeakerScreenContainer meetingId={appData.meetingId} />
    ) : (
      <ViewerScreenContainer meetingId={appData.meetingId} />
    )
  ) : (
    <WelcomeScreenContainer setAppData={setAppData} />
  );
};

export default App;

Step 2: Welcome Screen

Creating a new meeting requires an API call, so let's write a helper for that first.

A temporary auth-token can be fetched from our user dashboard, but in production we recommend using an authToken generated by your own servers.

Follow this guide to get temporary auth-token from user Dashboard.
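For context, a VideoSDK auth token is a JWT whose payload carries your API key and the permissions you grant. The sketch below builds a fake, unsigned token just to show that shape; the field names (`apikey`, `permissions`) are assumptions based on VideoSDK's docs, and real tokens must be signed server-side with your secret key:

```javascript
// Fake token for illustration only: real tokens are HS256-signed JWTs
// generated on your server with your API secret. The payload fields
// (apikey, permissions) are assumptions based on VideoSDK's docs.
const b64url = (obj) => Buffer.from(JSON.stringify(obj)).toString("base64url");

const header = { alg: "HS256", typ: "JWT" };
const payload = { apikey: "YOUR_API_KEY", permissions: ["allow_join"] };
const fakeToken = [b64url(header), b64url(payload), "signature"].join(".");

// Decoding the middle segment recovers the payload.
const decoded = JSON.parse(
  Buffer.from(fakeToken.split(".")[1], "base64url").toString()
);
console.log(decoded.permissions); // [ 'allow_join' ]
```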


export const authToken = "temporary-generated-auth-token-goes-here";

export const createNewRoom = async () => {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
  });

  const { roomId } = await res.json();
  return roomId;
};
WelcomeScreenContainer allows speakers to create a new meeting. It also lets users enter an existing meetingId to join a session that is already in progress.


import React, { useState } from "react";
import { createNewRoom } from "../api";

const WelcomeScreenContainer = ({ setAppData }) => {
  const [meetingId, setMeetingId] = useState("");

  const createClick = async () => {
    const meetingId = await createNewRoom();

    setAppData({ mode: "CONFERENCE", meetingId });
  };

  const hostClick = () => setAppData({ mode: "CONFERENCE", meetingId });
  const viewerClick = () => setAppData({ mode: "VIEWER", meetingId });

  return (
    <div>
      <button onClick={createClick}>Create new Meeting</button>
      <input
        value={meetingId}
        placeholder="Enter meetingId"
        onChange={(e) => setMeetingId(e.target.value)}
      />
      <button onClick={hostClick}>Join As Host</button>
      <button onClick={viewerClick}>Join As Viewer</button>
    </div>
  );
};

export default WelcomeScreenContainer;

Step 3: Speaker Screen

This screen will contain all media controls and the participants grid. First, I will set up the meeting for the participant who will be joining.


import { MeetingProvider } from "@videosdk.live/react-sdk";
import React from "react";
import MediaControlsContainer from "./MediaControlsContainer";
import ParticipantsGridContainer from "./ParticipantsGridContainer";

import { authToken } from "../../api";

const SpeakerScreenContainer = ({ meetingId }) => {
  return (
    <MeetingProvider
      token={authToken}
      joinWithoutUserInteraction
      config={{
        meetingId,
        mode: "CONFERENCE",
        name: "C.V. Raman",
        micEnabled: true,
        webcamEnabled: true,
      }}
    >
      <MediaControlsContainer meetingId={meetingId} />
      <ParticipantsGridContainer />
    </MeetingProvider>
  );
};

export default SpeakerScreenContainer;


This container will be used for toggling the mic and webcam. We will also add code for starting and stopping the HLS stream.


import { useMeeting, Constants } from "@videosdk.live/react-sdk";
import React, { useMemo } from "react";

const MediaControlsContainer = () => {
  const { toggleMic, toggleWebcam, startHls, stopHls, hlsState, meetingId } =
    useMeeting();

  const { isHlsStarted, isHlsStopped, isHlsPlayable } = useMemo(
    () => ({
      isHlsStarted: hlsState === Constants.hlsEvents.HLS_STARTED,
      isHlsStopped: hlsState === Constants.hlsEvents.HLS_STOPPED,
      isHlsPlayable: hlsState === Constants.hlsEvents.HLS_PLAYABLE,
    }),
    [hlsState]
  );

  const _handleToggleHls = () => {
    if (isHlsStarted) {
      stopHls();
    } else if (isHlsStopped) {
      startHls({ quality: "high" });
    }
  };

  return (
    <div>
      <p>MeetingId: {meetingId}</p>
      <p>HLS state: {hlsState}</p>
      {isHlsPlayable && <p>Viewers will now be able to watch the stream.</p>}
      <button onClick={toggleMic}>Toggle Mic</button>
      <button onClick={toggleWebcam}>Toggle Webcam</button>
      <button onClick={_handleToggleHls}>
        {isHlsStarted ? "Stop Hls" : "Start Hls"}
      </button>
    </div>
  );
};

export default MediaControlsContainer;
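The toggle logic in _handleToggleHls can be summarized as a small decision function over hlsState. A sketch (`nextHlsAction` is an illustrative name; the state strings mirror Constants.hlsEvents, which also includes transitional states such as HLS_STARTING and HLS_STOPPING where the handler does nothing):

```javascript
// Sketch of the _handleToggleHls decision as a pure function.
// State names mirror Constants.hlsEvents; transitional states are no-ops.
function nextHlsAction(hlsState) {
  if (hlsState === "HLS_STARTED") return "STOP";
  if (hlsState === "HLS_STOPPED") return "START";
  return "NONE"; // e.g. HLS_STARTING / HLS_STOPPING
}

console.log(nextHlsAction("HLS_STOPPED")); // START
console.log(nextHlsAction("HLS_STARTED")); // STOP
console.log(nextHlsAction("HLS_STARTING")); // NONE
```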


This component gets all joined participants from the useMeeting hook and renders them individually, using SingleParticipantContainer to render each participant's webcam stream.


import { useMeeting } from "@videosdk.live/react-sdk";
import React, { useMemo } from "react";
import SingleParticipantContainer from "./SingleParticipantContainer";

const ParticipantsGridContainer = () => {
  const { participants } = useMeeting();

  const participantIds = useMemo(
    () => [...participants.keys()],
    [participants]
  );

  return (
    <div>
      {participantIds.map((participantId) => (
        <SingleParticipantContainer
          {...{ participantId, key: participantId }}
        />
      ))}
    </div>
  );
};

export default ParticipantsGridContainer;
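A note on the participants value: useMeeting exposes it as a Map keyed by participantId, which is why the ids are extracted by spreading .keys(). A standalone illustration (the participant objects here are simplified stand-ins):

```javascript
// participants from useMeeting() is a Map keyed by participantId;
// the values below are simplified stand-ins for real participant objects.
const participants = new Map([
  ["participant-1", { displayName: "Alice" }],
  ["participant-2", { displayName: "Bob" }],
]);

const participantIds = [...participants.keys()];
console.log(participantIds); // [ 'participant-1', 'participant-2' ]
```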


This container receives a participantId from props and gets the participant's webcam stream and other information from the useParticipant hook.

It renders both the audio and video streams of the participant whose participantId is provided.


import { useParticipant } from "@videosdk.live/react-sdk";
import React, { useEffect, useMemo, useRef } from "react";
import ReactPlayer from "react-player";

const SingleParticipantContainer = ({ participantId }) => {
  const { micOn, micStream, isLocal, displayName, webcamStream, webcamOn } =
    useParticipant(participantId);

  const audioPlayer = useRef();

  const videoStream = useMemo(() => {
    if (webcamOn && webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  useEffect(() => {
    if (!isLocal && audioPlayer.current && micOn && micStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(micStream.track);

      audioPlayer.current.srcObject = mediaStream;
      audioPlayer.current.play().catch((err) => {
        if (
          err.message ===
          "play() failed because the user didn't interact with the document first. https://goo.gl/xX8pDD"
        ) {
          console.error("audio" + err.message);
        }
      });
    } else if (audioPlayer.current) {
      audioPlayer.current.srcObject = null;
    }
  }, [micStream, micOn, isLocal, participantId]);

  return (
    <div style={{ height: 200, width: 360, position: "relative" }}>
      <audio autoPlay playsInline controls={false} ref={audioPlayer} />
      <div
        style={{ position: "absolute", background: "#ffffffb3", padding: 8 }}
      >
        <p>Name: {displayName}</p>
        <p>Webcam: {webcamOn ? "on" : "off"}</p>
        <p>Mic: {micOn ? "on" : "off"}</p>
      </div>
      {webcamOn && (
        <ReactPlayer
          playsinline // very very imp prop
          playing
          muted
          controls={false}
          url={videoStream}
          height="100%"
          width="100%"
          onError={(err) => {
            console.log(err, "participant video error");
          }}
        />
      )}
    </div>
  );
};

export default SingleParticipantContainer;

Our speaker screen is complete; now we can start coding the ViewerScreenContainer.

Step 4: Viewer Screen

The viewer screen will be used by viewer participants; they will watch the HLS stream once a speaker starts streaming.

As with the speaker screen, this screen also has an initialization process.


import {
  MeetingProvider,
  MeetingConsumer,
  useMeeting,
  Constants,
} from "@videosdk.live/react-sdk";
import React, { useEffect, useMemo, useRef } from "react";
import Hls from "hls.js";
import { authToken } from "../api";

const HLSPlayer = () => {
  const { hlsUrls, hlsState } = useMeeting();

  const playerRef = useRef(null);

  const hlsPlaybackHlsUrl = useMemo(() => hlsUrls.playbackHlsUrl, [hlsUrls]);

  useEffect(() => {
    if (Hls.isSupported()) {
      const hls = new Hls({
        capLevelToPlayerSize: true,
        maxLoadingDelay: 4,
        minAutoBitrate: 0,
        autoStartLoad: true,
        defaultAudioCodec: "mp4a.40.2",
      });

      let player = document.querySelector("#hlsPlayer");

      hls.loadSource(hlsPlaybackHlsUrl);
      hls.attachMedia(player);
    } else {
      if (typeof playerRef.current?.play === "function") {
        playerRef.current.src = hlsPlaybackHlsUrl;
        playerRef.current.play();
      }
    }
  }, [hlsPlaybackHlsUrl, hlsState]);

  return (
    <video
      ref={playerRef}
      id="hlsPlayer"
      autoPlay
      controls
      playsInline
      style={{ width: "70%", height: "70%" }}
      onError={(err) => console.log(err, "hls video error")}
    />
  );
};

const ViewerScreenContainer = ({ meetingId }) => {
  return (
    <MeetingProvider
      token={authToken}
      joinWithoutUserInteraction
      config={{ meetingId, name: "C.V. Raman", mode: "VIEWER" }}
    >
      <MeetingConsumer>
        {({ hlsState }) =>
          hlsState === Constants.hlsEvents.HLS_PLAYABLE ? (
            <HLSPlayer />
          ) : (
            <p>Waiting for host to start stream...</p>
          )
        }
      </MeetingConsumer>
    </MeetingProvider>
  );
};

export default ViewerScreenContainer;

Our ViewerScreen is completed, now we can test our application.

npm run start

Output of Interactive Live Streaming App


The source code of this app is available in this GitHub Repo.

What Next?

This was a very basic example of an Interactive Live Streaming app using Video SDK; you can customize it in your own way.

  • Add more CSS to make the UI more interactive
  • Add Chat using PubSub
  • Implement Change Mode, which lets you switch any participant from Viewer to Speaker, or vice versa.
  • You can also take reference from our Prebuilt App, which is built using VideoSDK's React package. Here is the GitHub Repo.

More React Resources