📌 TL;DR

Live streaming has become an increasingly popular way to share information and connect with audiences in real time. Whether you're a musician, a business owner, or a teacher, live streaming can be a powerful tool for engaging with your audience and sharing your message.

Choosing the right live streaming platform is an important decision, as it can impact the quality of your stream, the size of your audience, and the overall success of your broadcast. Here are a few factors to consider when choosing a platform:

  1. Features: Look for live stream platforms that offer the features you need to create high-quality streams. For example, you might need a platform that supports multiple camera angles, allows you to share your screen, or allows multiple broadcasters.
  2. Integration: The platform should be quick and simple to integrate into your existing app.
  3. Budget: Live streaming platforms can be expensive, so it is important to choose one that fits your budget.

🤔 Why choose VideoSDK?

VideoSDK is a perfect choice for those seeking a live streaming platform that offers the features needed to create high-quality streams. The platform supports screen sharing and real-time messaging, allows broadcasters to invite audience members to the stage, and supports 100K+ participants, ensuring that your live streams are interactive and engaging. With VideoSDK, you can also use your own custom-designed layout template for live streaming.

In terms of integration, VideoSDK offers a simple and quick integration process, allowing you to seamlessly integrate live streaming into your app. This ensures that you can enjoy the benefits of live streaming without any technical difficulties or lengthy implementation processes.

Furthermore, VideoSDK is budget-friendly, making it an affordable option for businesses of all sizes. You can enjoy the benefits of a feature-rich live-streaming platform without breaking the bank, making it an ideal choice for startups and small businesses.

📱 Follow These 5 Steps to Build a Live Streaming React Native App

The steps below will give you all the information you need to quickly build an interactive live streaming app. Please follow along carefully, and if you run into any trouble, let us know right away on Discord; we will be happy to help you.

๐Ÿ” Prerequisites

Before proceeding, ensure that your development environment meets the following requirements:

  • VideoSDK Developer Account (Don't have one? Sign up on the VideoSDK Dashboard)
  • Basic understanding of React Native
  • Node.js v12+
  • NPM v6+ (comes installed with newer Node versions)
  • Android Studio or Xcode installed

You will need a VideoSDK account to generate a token; visit the VideoSDK dashboard to create one.
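
If you would rather generate the token from your own server than copy a temporary one from the dashboard, the minimal Node.js sketch below shows the general shape. It assumes the jsonwebtoken package and the API key/secret pair from your dashboard (the environment variable names here are hypothetical); keep the secret on your server, never inside the app.

// Minimal server-side token sketch, assuming the `jsonwebtoken` npm package.
// VIDEOSDK_API_KEY / VIDEOSDK_API_SECRET are assumed environment variables
// holding the key/secret pair from the VideoSDK dashboard.
const jwt = require("jsonwebtoken");

const token = jwt.sign(
  { apikey: process.env.VIDEOSDK_API_KEY, permissions: ["allow_join"] },
  process.env.VIDEOSDK_API_SECRET,
  { algorithm: "HS256", expiresIn: "24h" }
);

console.log(token); // use this value as authToken while testing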

๐Ÿ—๏ธ App Architectureโ€‹


The app will contain three screens:

  1. Join Screen: This screen allows a SPEAKER to create a new studio or join an existing one, and a VIEWER to join an existing studio.
  2. Speaker Screen: This screen contains the speaker list and studio controls such as enable/disable mic & camera and leave studio.
  3. Viewer Screen: This screen contains the live stream player in which the viewer watches the stream.

🚀 STEP 1: Getting Started with the Code!


(a) 📋 Create App

Create a new React Native app using the command below.

npx react-native init AppName

For React Native setup, you can follow the Official Docs.


(b) 📹 Video SDK Installation

Install the Video SDK using the command below. Make sure you are in your project directory before running it.

For NPM:

npm install "@videosdk.live/react-native-sdk"

For Yarn:

yarn add "@videosdk.live/react-native-sdk"

(c) 📂 Project Structure

  root
   โ”œโ”€โ”€ node_modules
   โ”œโ”€โ”€ android
   โ”œโ”€โ”€ ios
   โ”œโ”€โ”€ App.js
   โ”œโ”€โ”€ api.js
   โ”œโ”€โ”€ index.js

🤖 STEP 2: Android Setup

(a) 🛠️ Add the required permissions to the AndroidManifest.xml file.

<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
>
    <!-- Give all the required permissions to app -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) -->
    <uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" />
    <uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" />

    <!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)-->
    <uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" />
    <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
    <uses-permission android:name="android.permission.WAKE_LOCK" />
  <application>
   <meta-data
      android:name="live.videosdk.rnfgservice.notification_channel_name"
      android:value="Meeting Notification"
     />
    <meta-data
    android:name="live.videosdk.rnfgservice.notification_channel_description"
    android:value="Whenever meeting started notification will appear."
    />
    <meta-data
    android:name="live.videosdk.rnfgservice.notification_color"
    android:resource="@color/red"
    />
    <service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"></service>
    <service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"></service>
  </application>
</manifest>

(b) 🔗 Link the VideoSDK dependencies in the android/app/build.gradle file. (Recent Gradle versions use implementation; the older compile configuration is deprecated.)

dependencies {
    implementation project(':rnfgservice')
    implementation project(':rnwebrtc')
    implementation project(':rnincallmanager')
}

Include the dependencies in android/settings.gradle:

include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnincallmanager'
project(':rnincallmanager').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-incallmanager/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')

Update MainApplication.java to use InCallManager and run some foreground services.

import com.facebook.react.PackageList;
import live.videosdk.rnfgservice.ForegroundServicePackage;
import live.videosdk.rnincallmanager.InCallManagerPackage;
import live.videosdk.rnwebrtc.WebRTCModulePackage;

public class MainApplication extends Application implements ReactApplication {
  private final ReactNativeHost mReactNativeHost = new ReactNativeHost(this) {
    @Override
    protected List<ReactPackage> getPackages() {
      List<ReactPackage> packages = new PackageList(this).getPackages();
      /* Initialise foreground service, incall manager and webrtc module */
      packages.add(new ForegroundServicePackage());
      packages.add(new InCallManagerPackage());
      packages.add(new WebRTCModulePackage());
      return packages;
    }
  };
}

Some devices might run into WebRTC problems; to solve that, update your android/gradle.properties file with the following:

# This one fixes a weird WebRTC runtime problem on some devices.
android.enableDexingArtifactTransform.desugaring=false

If you use ProGuard, add the rule shown below to the android/app/proguard-rules.pro file (this is optional):

-keep class org.webrtc.** { *; }

(c) 🎨 Update the colors.xml file with some new colors for internal dependencies.

<resources>
    <item name="red" type="color">#FC0303</item>
    <integer-array name="androidcolors">
    <item>@color/red</item>
    </integer-array>
</resources>

๐Ÿ STEP 3: ย iOS Setup

(a) 📦 Install react-native-incallmanager

$ yarn add @videosdk.live/react-native-incallmanager

(b) 💎 Install the gem again to update CocoaPods.

Make sure you are using CocoaPods 1.10 or higher. To update CocoaPods, you can simply install the gem again.

$ [sudo] gem install cocoapods

(c) 🔗 Link manually (if react-native-incall-manager is not linked automatically)

  • Drag node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager.xcodeproj under <your_xcode_project>/Libraries

  • Select <your_xcode_project> --> Build Phases --> Link Binary With Libraries

  • Drag Libraries/RNInCallManager.xcodeproj/Products/libRNInCallManager.a to Link Binary With Libraries

  • Select <your_xcode_project> --> Build Settings; in Header Search Paths, add $(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager

(d) 🔄 Change the path of react-native-webrtc

pod 'react-native-webrtc', :path => '../node_modules/@videosdk.live/react-native-webrtc'

(e) ⬆️ Change your platform version

  • You have to change the platform field of the Podfile to 11.0 or above, as react-native-webrtc doesn't support iOS versions below 11:

platform :ios, '11.0'

(f) 🧩 After updating the version, install the pods

pod install

(g) 🛡️ Now add the following permissions to Info.plist (project folder/ios/projectname/Info.plist):

<key>NSCameraUsageDescription</key>
<string>Camera permission description</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone permission description</string>

๐Ÿ“ STEP 4: ย Register Service

Register the Video SDK services in the root index.js file to initialize the service.

import { register } from '@videosdk.live/react-native-sdk';
import { AppRegistry } from 'react-native';
import { name as appName } from './app.json';
import App from './App.js';

// Register the service
register();
AppRegistry.registerComponent(appName, () => App);

โœ STEP 5: ย Start Writing Your Code Now

(a) 📄 Get started with api.js

Before jumping to anything else, we have to write the API call that generates a unique meetingId. You will require an auth token; you can generate one either by using videosdk-rtc-api-server-examples or from the VideoSDK Dashboard for developers.

// Auth token we will use to generate a meeting and connect to it
export const authToken = "<Generated-from-dashboard>";

// API call to create meeting
export const createMeeting = async ({ token }) => {
  const res = await fetch(`https://api.videosdk.live/v1/meetings`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ region: "sg001" }),
  });

  const { meetingId } = await res.json();
  return meetingId;
};
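
As a quick sanity check, the hypothetical snippet below (not part of the final app) shows how createMeeting can be exercised on its own:

// Hypothetical smoke test for api.js: create a meeting and log its id.
import { createMeeting, authToken } from "./api";

createMeeting({ token: authToken })
  .then((meetingId) => console.log("Created meeting:", meetingId))
  .catch((error) => console.error("Meeting creation failed:", error));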

(b) 🧩 Wireframe App.js with all the components

To build up a wireframe of App.js, we are going to use VideoSDK Hooks and Context Providers. Video SDK provides MeetingProvider, MeetingConsumer, useMeeting, and useParticipant hooks. Let's understand each of them.

First, we will explore Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.

  • MeetingProvider : It is a Context Provider. It accepts config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers, and Providers can be nested to override values deeper within the tree.
  • MeetingConsumer : It is a Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider's value prop changes (a minimal sketch follows below).
  • useMeeting : It is the React hook API for the meeting. It includes all the information related to the meeting, such as join, leave, enable/disable the mic or webcam, etc.
  • useParticipant : It is the participant hook API. The useParticipant hook is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.

The Meeting Context helps you listen for all the changes when a participant joins the meeting or toggles their mic or camera.
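
This tutorial relies on the hooks, but for completeness, here is a minimal sketch of the MeetingConsumer render-prop form. It assumes the consumer follows React's standard children-as-function Context.Consumer contract; ConsumerExample is a hypothetical component, not part of this app:

// Minimal MeetingConsumer sketch (this app uses the hooks instead).
// The children function re-renders whenever the Provider's value changes.
import React from "react";
import { Text } from "react-native";
import { MeetingProvider, MeetingConsumer } from "@videosdk.live/react-native-sdk";

function ConsumerExample({ meetingId, token }) {
  return (
    <MeetingProvider
      config={{ meetingId, micEnabled: true, webcamEnabled: true, name: "Guest" }}
      token={token}
    >
      <MeetingConsumer>{() => <Text>Meeting context is ready</Text>}</MeetingConsumer>
    </MeetingProvider>
  );
}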

Let's get started by changing a couple of lines of code in App.js.

import React, { useState, useMemo, useRef, useEffect } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
  Clipboard,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
  Constants,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, authToken } from "./api";

// Responsible for either scheduling a new meeting or joining an existing one as a host or a viewer.
function JoinScreen({ getMeetingAndToken, setMode }) {
  return null;
}

// Responsible for managing participant video stream
function ParticipantView(props) {
  return null;
}

// Responsible for managing meeting controls such as toggle mic / webcam and leave
function Controls() {
  return null;
}

// Responsible for the Speaker side view, which contains Meeting Controls (toggle mic/webcam & leave) and the Participant list
function SpeakerView() {
  return null;
}

// Responsible for Viewer side view, which contains video player for streaming HLS and managing HLS state (HLS_STARTED, HLS_STOPPING, HLS_STARTING, etc.)
function ViewerView() {
  return null;
}

// Responsible for switching between the two views (Speaker & Viewer) based on the provided mode (`CONFERENCE` & `VIEWER`)
function Container(props) {
  return null;
}

function App() {
  const [meetingId, setMeetingId] = useState(null);

  //State to handle the mode of the participant, i.e. CONFERENCE or VIEWER
  const [mode, setMode] = useState("CONFERENCE");

  //Getting MeetingId from the API we created earlier

  const getMeetingAndToken = async (id) => {
    const meetingId =
      id == null ? await createMeeting({ token: authToken }) : id;
    setMeetingId(meetingId);
  };


  return authToken && meetingId ? (
    <MeetingProvider
      config={{
        meetingId,
        micEnabled: true,
        webcamEnabled: true,
        name: "C.V. Raman",
        //This will be the mode of the participant: CONFERENCE or VIEWER
        mode: mode,
      }}
      token={authToken}
    >
      <Container />
    </MeetingProvider>
  ) : (
    <JoinScreen getMeetingAndToken={getMeetingAndToken} setMode={setMode} />
  );
}

export default App;

(c) 📲 Implement Join Screen

The join screen will work as a medium to either schedule a new meeting or to join an existing meeting as a host or as a viewer.

This screen will have 3 buttons:

  1. Join as Host: When this button is clicked, the person will join the entered 'meetingId' as 'HOST'.
  2. Join as Viewer: When this button is clicked, the person will join the entered 'meetingId' as 'VIEWER'.
  3. Create Studio Room: When this button is clicked, the person will join a new meeting as 'HOST'.

function JoinScreen({ getMeetingAndToken, setMode }) {
  const [meetingVal, setMeetingVal] = useState("");

  const JoinButton = ({ value, onPress }) => {
    return (
      <TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginVertical: 8,
          borderRadius: 6,
        }}
        onPress={onPress}
      >
        <Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
          {value}
        </Text>
      </TouchableOpacity>
    );
  };
  return (
    <SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "black",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    >
      <TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        placeholderTextColor={"grey"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderColor: "white",
          borderRadius: 6,
          color: "white",
          marginBottom: 16,
        }}
      />
      <JoinButton
        onPress={() => {
          getMeetingAndToken(meetingVal);
        }}
        value={"Join as Host"}
      />
      <JoinButton
        onPress={() => {
          setMode("VIEWER");
          getMeetingAndToken(meetingVal);
        }}
        value={"Join as Viewer"}
      />
      <Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      >
        ---------- OR ----------
      </Text>

      <JoinButton
        onPress={() => {
          getMeetingAndToken();
        }}
        value={"Create Studio Room"}
      />
    </SafeAreaView>
  );
}

(d) 📦 Implement Container Component

  • The next step is to create a container that will render the JoinScreen, SpeakerView, or ViewerView component based on the mode.
  • We will check the mode of the localParticipant; if it is CONFERENCE, we will show SpeakerView, otherwise ViewerView.

function Container() {
  const { join, changeWebcam, localParticipant } = useMeeting({
    onError: (error) => {
      console.log(error.message);
    },
  });

  return (
    <View style={{ flex: 1 }}>
      {localParticipant?.mode == Constants.modes.CONFERENCE ? (
        <SpeakerView />
      ) : localParticipant?.mode == Constants.modes.VIEWER ? (
        <ViewerView />
      ) : (
        <View
          style={{
            flex: 1,
            justifyContent: "center",
            alignItems: "center",
            backgroundColor: "black",
          }}
        >
          <Text style={{ fontSize: 20, color: "white" }}>
            Press Join button to enter studio.
          </Text>
          <Button
            btnStyle={{
              marginTop: 8,
              paddingHorizontal: 22,
              padding: 12,
              borderWidth: 1,
              borderColor: "white",
              borderRadius: 8,
            }}
            buttonText={"Join"}
            onPress={() => {
              join();
              setTimeout(() => {
                changeWebcam();
              }, 300);
            }}
          />
        </View>
      )}
    </View>
  );
}

// Common Component which will also be used in Controls Component
const Button = ({ onPress, buttonText, backgroundColor, btnStyle }) => {
  return (
    <TouchableOpacity
      onPress={onPress}
      style={{
        ...btnStyle,
        backgroundColor: backgroundColor,
        padding: 10,
        borderRadius: 8,
      }}
    >
      <Text style={{ color: "white", fontSize: 12 }}>{buttonText}</Text>
    </TouchableOpacity>
  );
};

(e) 🔊 Implement SpeakerView

The next step is to create SpeakerView and Controls components to manage features such as join, leave, mute, and unmute.

  1. We will get all the participants from the useMeeting hook and filter them for the mode set to CONFERENCE, so only speakers are shown on the screen.

function SpeakerView() {
  // Get the Participant Map and meetingId
  const { meetingId, participants } = useMeeting({});

  // To get the speaker participants, we filter for `CONFERENCE` mode participants
  const speakers = useMemo(() => {
    const speakerParticipants = [...participants.values()].filter(
      (participant) => {
        return participant.mode == Constants.modes.CONFERENCE;
      }
    );
    return speakerParticipants;
  }, [participants]);

  return (
    <SafeAreaView style={{ backgroundColor: "black", flex: 1 }}>
      {/* Render Header for copy meetingId and leave meeting*/}
      <HeaderView />

      {/* Render Participant List */}
      {speakers.length > 0 ? (
        <FlatList
          data={speakers}
          renderItem={({ item }) => {
            return <ParticipantView participantId={item.id} />;
          }}
        />
      ) : null}

      {/* Render Controls */}
      <Controls />
    </SafeAreaView>
  );
}

function HeaderView() {
  const { meetingId, leave } = useMeeting();
  return (
    <View
      style={{
        flexDirection: "row",
        padding: 16,
        justifyContent: "space-evenly",
        alignItems: "center",
      }}
    >
      <Text style={{ fontSize: 24, color: "white" }}>{meetingId}</Text>
      <Button
        btnStyle={{
          borderWidth: 1,
          borderColor: "white",
        }}
        onPress={() => {
          Clipboard.setString(meetingId);
          alert("MeetingId copied successfully");
        }}
        buttonText={"Copy MeetingId"}
        backgroundColor={"transparent"}
      />
      <Button
        onPress={() => {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      />
    </View>
  );
}

function Container(){
  ...

  const mMeeting = useMeeting({
    onMeetingJoined: () => {
      // Pin the local participant if they join in CONFERENCE mode
      if (mMeetingRef.current.localParticipant.mode == "CONFERENCE") {
        mMeetingRef.current.localParticipant.pin();
      }
    },
    ...
  });

  // We will create a ref to the meeting object so that the latest
  // meeting state is available inside the callback functions
  const mMeetingRef = useRef(mMeeting);
  useEffect(() => {
    mMeetingRef.current = mMeeting;
  }, [mMeeting]);

  return <>...</>;
}

2. We will create the ParticipantView to show a participant's media. For this, we will use the webcamStream from the useParticipant hook to play the participant's media.

function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);
  return webcamOn && webcamStream ? (
    <RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    />
  ) : (
    <View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    >
      <Text style={{ fontSize: 16 }}>NO MEDIA</Text>
    </View>
  );
}

3. We will add the Controls component which will allow the speaker to toggle media and start/stop HLS.

function Controls() {
  const { toggleWebcam, toggleMic, startHls, stopHls, hlsState } = useMeeting(
    {}
  );

  const _handleHLS = async () => {
    if (!hlsState || hlsState === "HLS_STOPPED") {
      startHls({
        layout: {
          type: "SPOTLIGHT",
          priority: "PIN",
          gridSize: 4,
        },
        theme: "DARK",
        orientation: "portrait",
      });
    } else if (hlsState === "HLS_STARTED" || hlsState === "HLS_PLAYABLE") {
      stopHls();
    }
  };

  return (
    <View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    >
      <Button
        onPress={() => {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      />
      <Button
        onPress={() => {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      />
      {hlsState === "HLS_STARTED" ||
      hlsState === "HLS_STOPPING" ||
      hlsState === "HLS_STARTING" ||
      hlsState === "HLS_PLAYABLE" ? (
        <Button
          onPress={() => {
            _handleHLS();
          }}
          buttonText={
            hlsState === "HLS_STARTED"
              ? `Live Starting`
              : hlsState === "HLS_STOPPING"
              ? `Live Stopping`
              : hlsState === "HLS_PLAYABLE"
              ? `Stop Live`
              : `Loading...`
          }
          backgroundColor={"#FF5D5D"}
        />
      ) : (
        <Button
          onPress={() => {
            _handleHLS();
          }}
          buttonText={`Go Live`}
          backgroundColor={"#1178F8"}
        />
      )}
    </View>
  );
}

(f) 👀 Implement ViewerView

When the HOST (a 'CONFERENCE' mode participant) starts the live stream, the viewer will be able to watch it.

To implement the player view, we are going to use react-native-video, which will play the HLS stream.

Let's first add this package.

For NPM:

npm install react-native-video

For Yarn:

yarn add react-native-video

With react-native-video installed, we will get the hlsUrls and hlsState from the useMeeting hook, which will be used to play the HLS stream in the player.

// Add the react-native-video import
import Video from "react-native-video";

function ViewerView({}) {
  const { hlsState, hlsUrls } = useMeeting();

  return (
    <SafeAreaView style={{ flex: 1, backgroundColor: "black" }}>
      {hlsState == "HLS_PLAYABLE" ? (
        <>
          {/* Render Header for copy meetingId and leave meeting*/}
          <HeaderView />

          {/* Render VideoPlayer that will play `downstreamUrl`*/}
          <Video
            controls={true}
            source={{
              uri: hlsUrls.downstreamUrl,
            }}
            resizeMode={"stretch"}
            style={{
              flex: 1,
              backgroundColor: "black",
            }}
            onError={(e) => console.log("error", e)}
          />
        </>
      ) : (
        <SafeAreaView
          style={{ flex: 1, justifyContent: "center", alignItems: "center" }}
        >
          <Text style={{ fontSize: 20, color: "white" }}>
            HLS is not started yet or is stopped
          </Text>
        </SafeAreaView>
      )}
    </SafeAreaView>
  );
}

๐Ÿƒโ€โ™‚๏ธ Run Your Code Now

# For Android
npx react-native run-android

# For iOS
npx react-native run-ios

Stuck anywhere? Check out this example code on GitHub.


🎯 Conclusion

With this, we have successfully built a live streaming React Native app using VideoSDK. If you wish to add functionalities like chat messaging and screen sharing, you can always check out our Documentation. If you face any difficulty with the implementation, you can connect with us on our Discord Community.

📚 Resources