Integrating an HLS (HTTP Live Streaming) player into your React JS video calling app can enhance the streaming experience for your users. HLS adapts playback to varying devices and network conditions, and VideoSDK's HLS playback feature makes it straightforward to deliver smooth, reliable streaming in your video calling app.

Advantages of Using HLS Player:

  • Seamless Streaming: HLS ensures smooth video playback by adapting to network fluctuations, delivering uninterrupted communication during video calls.
  • Cross-Platform Compatibility: HLS is supported across different devices and platforms, enabling users to engage in video calls seamlessly regardless of their device type.
  • Improved User Experience: With HLS, users experience minimal buffering and faster load times, enhancing their overall satisfaction with the video calling app.
  • Scalability: HLS supports scalable streaming, allowing your app to accommodate a growing user base without compromising on performance.
  • Adaptive Bitrate Streaming: HLS automatically adjusts the video quality based on the user's network speed, ensuring optimal viewing experience under varying conditions.
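
Under the hood, adaptive bitrate works because the HLS server publishes a master playlist advertising several renditions of the same stream, and the player switches between them as bandwidth changes. A simplified, hypothetical master playlist might look like this (the bitrates and paths are illustrative only):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
medium/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
high/index.m3u8
```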

Practical Applications of HLS Player:

  • Virtual Meetings: HLS integration enables businesses to conduct virtual meetings with remote teams or clients, fostering collaboration regardless of geographical barriers.
  • Online Education: Educational platforms can leverage HLS to deliver high-quality video lectures and interactive sessions, facilitating remote learning for students worldwide.
  • Telehealth Services: Healthcare providers can use HLS-enabled video calling apps to offer remote consultations and medical advice, improving access to healthcare services.
  • Live Events: Event organizers can live stream conferences, concerts, or sports events using HLS, reaching a wider audience and enhancing attendee engagement.
  • Customer Support: Companies can integrate HLS into their customer support systems to provide real-time video assistance and troubleshooting, enhancing customer satisfaction and loyalty.

Let's build a React HLS player integrated with VideoSDK. With VideoSDK's robust APIs and SDKs, you can incorporate HLS streaming capabilities into your React application, ensuring smooth, high-quality video playback for all users.

Getting Started with VideoSDK and HLS

To take advantage of the React HLS player functionality, you will use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure that you have completed the necessary prerequisites.

[a] Create a VideoSDK Account

Go to your VideoSDK dashboard and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.

[b] Generate your Auth Token

Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.

For a more visual understanding of the account creation and token generation process, consider referring to the provided tutorial.

[c] Prerequisites and Setup

Before proceeding, ensure that your development environment meets the following requirements:

  • A VideoSDK developer account (if you don't have one, sign up via the VideoSDK Dashboard).
  • The VideoSDK React SDK.
  • Node.js and NPM installed on your device.
  • A basic understanding of React Hooks (useState, useRef, useEffect).
  • Familiarity with the React Context API (optional).

Follow the steps to create the environment necessary to add video calls to your app. You can also find the code sample for Quickstart here.

Creating and Configuring Your React App

$ npx create-react-app videosdk-rtc-react-app

Install VideoSDK

It is necessary to set up VideoSDK within your project before going into the details of integrating the HLS feature. Installing VideoSDK using NPM or Yarn will depend on the needs of your project.

  • For NPM
$ npm install "@videosdk.live/react-sdk"

//For the Participants Video
$ npm install "react-player"
  • For Yarn
$ yarn add "@videosdk.live/react-sdk"

//For the Participants Video
$ yarn add "react-player"

You are going to use functional components to leverage React's reusable component architecture. There will be components for users, videos and controls (mic, camera, leave) over the video.

Structure of the project

Your project structure should look like this.

   ├── node_modules
   ├── public
   ├── src
   │    ├── API.js
   │    ├── App.js
   │    ├── index.js
   ├── package.json
   .    .

App Architecture

The app will contain a container component, which includes a user component with videos. Each video component will have control buttons for the mic, camera, leaving the meeting, and HLS.

You will be working on these files:

  • API.js: Responsible for handling API calls, such as generating a unique meetingId and an auth token.
  • App.js: Responsible for rendering the container and joining the meeting.

Architecture for Speaker

Video SDK Image

Architecture for Viewer

Video SDK Image

Implementing HLS Player in React JS

To add HLS streaming capability to your React application, follow the sequence of steps below.

Step 1: Get started with API.js

Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the videosdk-rtc-api-server-examples or directly from the VideoSDK Dashboard for developers.

//This is the Auth token, you will use it to generate a meeting and connect to it
export const authToken = "<Generated-from-dashboard>";

// API call to create a meeting
export const createMeeting = async ({ token }) => {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  //Destructuring the roomId from the response
  const { roomId } = await res.json();
  return roomId;
};

Step 2: Wireframe App.js with all the components

To build up a wireframe of App.js, you need to use VideoSDK Hooks and Context Providers. VideoSDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks.

First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.

  • MeetingProvider: This is the Context Provider. It accepts value config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
  • MeetingConsumer: This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider’s value prop changes.
  • useMeeting: This is the meeting hook API. It includes all the information related to meetings such as join/leave, enable/disable the mic or webcam, etc.
  • useParticipant: This is the participant hook API. It is responsible for handling all the events and props related to one particular participant such as name, webcamStream, micStream, etc.

The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.

Begin by making a few changes to the code in the App.js file.

import "./App.css";
import React, { useEffect, useMemo, useRef, useState } from "react";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  Constants,
} from "@videosdk.live/react-sdk";
import { authToken, createMeeting } from "./API";
import ReactPlayer from "react-player";

function JoinScreen({ getMeetingAndToken, setMode }) {
  return null;
}

function ParticipantView(props) {
  return null;
}

function Controls(props) {
  return null;
}

function MeetingView(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);
  //State to keep track of the mode the participant joins with
  const [mode, setMode] = useState("CONFERENCE");

  //Getting the meeting id by calling the api we just wrote
  const getMeetingAndToken = async (id) => {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };

  //This will set Meeting Id to null when meeting is left or ended
  const onMeetingLeave = () => {
    setMeetingId(null);
  };

  return authToken && meetingId ? (
    <MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
        mode,
      }}
      token={authToken}
    >
      <MeetingView meetingId={meetingId} onMeetingLeave={onMeetingLeave} />
    </MeetingProvider>
  ) : (
    <JoinScreen getMeetingAndToken={getMeetingAndToken} setMode={setMode} />
  );
}

export default App;

Step 3: Implement Join Screen

The join screen will serve as a medium to either schedule a new meeting or join an existing one as a host or a viewer.

This functionality will have 3 buttons:

  1. Join as Host: When this button is clicked, the person will join the meeting with the entered meetingId as HOST.
  2. Join as Viewer: When this button is clicked, the person will join the meeting with the entered meetingId as VIEWER.
  3. Create Meeting: When this button is clicked, the person will join a new meeting as HOST.
function JoinScreen({ getMeetingAndToken, setMode }) {
  const [meetingId, setMeetingId] = useState(null);
  //Set the mode of joining participant and set the meeting id or generate new one
  const onClick = async (mode) => {
    setMode(mode);
    await getMeetingAndToken(meetingId);
  };
  return (
    <div className="container">
      <button onClick={() => onClick("CONFERENCE")}>Create Meeting</button>
      <br />
      <br />
      {" or "}
      <br />
      <br />
      <input
        type="text"
        placeholder="Enter Meeting Id"
        onChange={(e) => {
          setMeetingId(e.target.value);
        }}
      />
      <br />
      <br />
      <button onClick={() => onClick("CONFERENCE")}>Join as Host</button>
      {" | "}
      <button onClick={() => onClick("VIEWER")}>Join as Viewer</button>
    </div>
  );
}


Video SDK Image

Step 4: Implement MeetingView and Controls

The next step is to create a container to manage features such as join, leave, mute, unmute, start, and stop HLS for the HOST and to display an HLS Player for the viewer.

You need to determine the mode of the localParticipant: if it's CONFERENCE, display the SpeakerView component; otherwise, show the ViewerView component.

function Container(props) {
  const [joined, setJoined] = useState(null);
  //Get the method which will be used to join the meeting.
  const { join } = useMeeting();
  const mMeeting = useMeeting({
    //callback for when a meeting is joined successfully
    onMeetingJoined: () => {
      setJoined("JOINED");
    },
    //callback for when a meeting is left
    onMeetingLeft: () => {
      props.onMeetingLeave();
    },
    //callback for when there is an error in a meeting
    onError: (error) => {
      alert(error.message);
    },
  });
  const joinMeeting = () => {
    setJoined("JOINING");
    join();
  };

  return (
    <div className="container">
      <h3>Meeting Id: {props.meetingId}</h3>
      {joined && joined == "JOINED" ? (
        mMeeting.localParticipant.mode == Constants.modes.CONFERENCE ? (
          <SpeakerView />
        ) : mMeeting.localParticipant.mode == Constants.modes.VIEWER ? (
          <ViewerView />
        ) : null
      ) : joined && joined == "JOINING" ? (
        <p>Joining the meeting...</p>
      ) : (
        <button onClick={joinMeeting}>Join</button>
      )}
    </div>
  );
}

Step 5: Implement SpeakerView

The next step is to create SpeakerView and Controls components to manage features such as join, leave, mute, and unmute.

  • You have to retrieve all the participants using the useMeeting hook and filter them by the mode set to CONFERENCE, ensuring that only speakers are displayed on the screen.
function SpeakerView() {
  //Get the participants and HLS State from useMeeting
  const { participants, hlsState } = useMeeting();

  //Filtering the host/speakers from all the participants
  const speakers = useMemo(() => {
    const speakerParticipants = [...participants.values()].filter(
      (participant) => {
        return participant.mode == Constants.modes.CONFERENCE;
      }
    );
    return speakerParticipants;
  }, [participants]);
  return (
    <div>
      <p>Current HLS State: {hlsState}</p>
      {/* Controls for the meeting */}
      <Controls />

      {/* Rendering all the HOST participants */}
      {speakers.map((participant) => (
        <ParticipantView participantId={participant.id} key={participant.id} />
      ))}
    </div>
  );
}

function Container() {
  const mMeeting = useMeeting({
    onMeetingJoined: () => {
      //Pin the local participant if they join in CONFERENCE mode
      if (mMeetingRef.current.localParticipant.mode == "CONFERENCE") {
        mMeetingRef.current.localParticipant.pin();
      }
      setJoined("JOINED");
    },
    //...other callbacks
  });

  //Create a ref to the meeting object so that when it is used inside
  //callback functions, the meeting state is maintained
  const mMeetingRef = useRef(mMeeting);
  useEffect(() => {
    mMeetingRef.current = mMeeting;
  }, [mMeeting]);

  return <>...</>;
}
  • You have to add the Controls component, which will allow the participant to toggle their media.
function Controls() {
  const { leave, toggleMic, toggleWebcam, startHls, stopHls } = useMeeting();
  return (
    <div>
      <button onClick={() => leave()}>Leave</button>
      <button onClick={() => toggleMic()}>toggleMic</button>
      <button onClick={() => toggleWebcam()}>toggleWebcam</button>
      <button
        onClick={() => {
          //Start the HLS in SPOTLIGHT mode with PIN as
          //priority so only speakers are visible in the HLS stream
          startHls({
            layout: {
              type: "SPOTLIGHT",
              priority: "PIN",
              gridSize: "20",
            },
            theme: "LIGHT",
            mode: "video-and-audio",
            quality: "high",
            orientation: "landscape",
          });
        }}
      >
        Start HLS
      </button>
      <button onClick={() => stopHls()}>Stop HLS</button>
    </div>
  );
}
Controls Component
  • Next, create the ParticipantView to display the participant's name and media. To play the media, use the webcamStream and micStream from the useParticipant hook.
function ParticipantView(props) {
  const micRef = useRef(null);
  const { webcamStream, micStream, webcamOn, micOn, isLocal, displayName } =
    useParticipant(props.participantId);

  const videoStream = useMemo(() => {
    if (webcamOn && webcamStream) {
      const mediaStream = new MediaStream();
      mediaStream.addTrack(webcamStream.track);
      return mediaStream;
    }
  }, [webcamStream, webcamOn]);

  //Playing the audio in the <audio> element
  useEffect(() => {
    if (micRef.current) {
      if (micOn && micStream) {
        const mediaStream = new MediaStream();
        mediaStream.addTrack(micStream.track);

        micRef.current.srcObject = mediaStream;
        micRef.current
          .play()
          .catch((error) =>
            console.error("audio play() failed", error)
          );
      } else {
        micRef.current.srcObject = null;
      }
    }
  }, [micStream, micOn]);

  return (
    <div>
      <p>
        Participant: {displayName} | Webcam: {webcamOn ? "ON" : "OFF"} | Mic:{" "}
        {micOn ? "ON" : "OFF"}
      </p>
      <audio ref={micRef} autoPlay playsInline muted={isLocal} />
      {webcamOn && (
        <ReactPlayer
          playsinline // extremely crucial prop
          pip={false}
          light={false}
          controls={false}
          muted={true}
          playing={true}
          url={videoStream}
          height={"300px"}
          width={"300px"}
          onError={(err) => {
            console.log(err, "participant video error");
          }}
        />
      )}
    </div>
  );
}
Output Of SpeakerView Component

Video SDK Image

Step 6: Implement ViewerView with HLS Player

When the host starts the live stream, viewers will be able to watch it. To implement the viewer's player, you will use hls.js, which handles playback of the HLS stream.

Begin by adding this package.

  • For NPM
$ npm install hls.js
  • For Yarn
$ yarn add hls.js

With hls.js installed, you can now get the hlsUrls from the useMeeting hook which will be used to play the HLS in the player.

//importing hls.js
import Hls from "hls.js";

function ViewerView() {
  // States to store downstream url and current HLS state
  const playerRef = useRef(null);
  //Getting the hlsUrls
  const { hlsUrls, hlsState } = useMeeting();

  //Playing the HLS stream when the playbackHlsUrl is present and it is playable
  useEffect(() => {
    if (hlsUrls.playbackHlsUrl && hlsState == "HLS_PLAYABLE") {
      if (Hls.isSupported()) {
        const hls = new Hls({
          maxLoadingDelay: 1, // max video loading delay used in automatic start level selection
          defaultAudioCodec: "mp4a.40.2", // default audio codec
          maxBufferLength: 0, // If buffer length is/becomes less than this value, a new fragment will be loaded
          maxMaxBufferLength: 1, // Hls.js will never exceed this value
          startLevel: 0, // Start playback at the lowest quality level
          startPosition: -1, // set to -1 so playback starts from initial time = 0
          maxBufferHole: 0.001, // 'Maximum' inter-fragment buffer hole tolerance that hls.js can cope with when searching for the next fragment to load.
          highBufferWatchdogPeriod: 0, // if media element is expected to play and if currentTime has not moved for more than highBufferWatchdogPeriod and if there are more than maxBufferHole seconds buffered upfront, hls.js will jump buffer gaps, or try to nudge playhead to recover playback.
          nudgeOffset: 0.05, // In case playback continues to stall after first playhead nudging, currentTime will be nudged evenmore following nudgeOffset to try to restore playback. media.currentTime += (nb nudge retry -1)*nudgeOffset
          nudgeMaxRetry: 1, // Max nb of nudge retries before hls.js raise a fatal BUFFER_STALLED_ERROR
          maxFragLookUpTolerance: .1, // This tolerance factor is used during fragment lookup.
          liveSyncDurationCount: 1, // if set to 3, playback will start from fragment N-3, N being the last fragment of the live playlist
          abrEwmaFastLive: 1, // Fast bitrate Exponential moving average half-life, used to compute average bitrate for Live streams.
          abrEwmaSlowLive: 3, // Slow bitrate Exponential moving average half-life, used to compute average bitrate for Live streams.
          abrEwmaFastVoD: 1, // Fast bitrate Exponential moving average half-life, used to compute average bitrate for VoD streams
          abrEwmaSlowVoD: 3, // Slow bitrate Exponential moving average half-life, used to compute average bitrate for VoD streams
          maxStarvationDelay: 1, // ABR algorithm will always try to choose a quality level that should avoid rebuffering
        });

        let player = document.querySelector("#hlsPlayer");

        hls.loadSource(hlsUrls.playbackHlsUrl);
        hls.attachMedia(player);
      } else {
        //Fallback for browsers (e.g. Safari) that can play HLS natively
        if (typeof playerRef.current?.play === "function") {
          playerRef.current.src = hlsUrls.playbackHlsUrl;
          playerRef.current.play();
        }
      }
    }
  }, [hlsUrls, hlsState, playerRef.current]);

  return (
    <div>
      {/* Showing message if HLS is not started or is stopped by HOST */}
      {hlsState != "HLS_PLAYABLE" ? (
        <div>
          <p>HLS has not started yet or is stopped</p>
        </div>
      ) : (
        hlsState == "HLS_PLAYABLE" && (
          <div>
            <video
              ref={playerRef}
              id="hlsPlayer"
              autoPlay
              controls
              style={{ width: "100%", height: "100%" }}
              playsInline
              muted={true}
              onError={(err) => {
                console.log(err, "hls video error");
              }}
            ></video>
          </div>
        )
      )}
    </div>
  );
}

Output of ViewerView

Video SDK Image

Congrats! You have completed the implementation of a customized live-streaming app in ReactJS using VideoSDK. To explore more features, go through Basic and Advanced features.

For more reference, check our docs:

Interactive Livestream - Video SDK Docs | Video SDK
Interactive Livestream features quick integrate in Javascript, React JS, Android, IOS, React Native, Flutter with Video SDK to add live video & audio conferencing to your applications.
Video SDK Image


Integrating an HLS player into your React JS video calling app using VideoSDK offers a powerful solution for seamless streaming and an enhanced user experience. With VideoSDK's APIs and the hls.js library, you can build a feature-rich video calling application that delivers smooth, reliable playback and meets the demands of modern communication.

Ready to take your video calling app to the next level? Sign up with VideoSDK today and get 10,000 free minutes!