
Simple React App

What you'll learn

This tutorial will guide you through creating your first React project that uses the Jellyfish Client. By the end of the tutorial, you'll have a working web application that connects to Jellyfish Media Server using WebRTC technology.

Finished app

You can check out the finished project here

What do you need

  • a little bit of experience in creating React apps
  • IDE of your choice (for example Visual Studio Code)
  • Node.js installed on your machine

Jellyfish architecture

info

You can learn more about Jellyfish architecture in the Jellyfish docs. This section provides a brief description aimed at front-end developers.

Let's introduce some concepts first:

  • Peer - A peer is a client-side entity that connects to the server to publish, subscribe or publish and subscribe to tracks published by components or other peers. You can think of it as a participant in a room. At the moment, there is only one type of peer - WebRTC.
  • Track - An object that represents an audio or video stream. A track can be associated with a local media source, such as a camera or microphone, or a remote media source received from another user. Tracks are used to capture, transmit, and receive audio and video data in WebRTC applications.
  • Room - In Jellyfish, a room serves as a container for peers and components, and its function varies based on the application. From a front-end perspective, a room will usually correspond to one meeting or broadcast.
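To make the relationships between these concepts concrete, here is a rough TypeScript sketch. These are illustrative types only, not the SDK's actual definitions:

```typescript
// Illustrative domain model only - NOT the SDK's real types.
type Track = {
  trackId: string;
  kind: "audio" | "video";
  metadata?: { type: "camera" | "screen" }; // used to tell tracks apart
};

type Peer = {
  peerId: string;
  tracks: Track[];
};

type Room = {
  roomId: string;
  peers: Peer[];
};

// A room with two peers; peer-1 publishes a camera track and a screencast track
const room: Room = {
  roomId: "room-1",
  peers: [
    {
      peerId: "peer-1",
      tracks: [
        { trackId: "t1", kind: "video", metadata: { type: "camera" } },
        { trackId: "t2", kind: "video", metadata: { type: "screen" } },
      ],
    },
    { peerId: "peer-2", tracks: [{ trackId: "t3", kind: "video", metadata: { type: "camera" } }] },
  ],
};
```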

For a better understanding of these concepts, here is an example of a room that holds a standard WebRTC conference from the user's perspective:

Room example

In this example, peers stream multiple video and audio tracks. Peer #1 even streams two video tracks (a camera track and a screencast track); you can differentiate between them using track metadata. The user gets info about peers and their tracks from the server via the Jellyfish Client, and is also informed in real time about peers joining/leaving and tracks being added/removed.

To keep this tutorial short we'll simplify things a little. Every peer will stream just one video track.
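Telling tracks apart by metadata boils down to a simple filter over the track list. A minimal sketch, assuming a simplified, hypothetical track shape:

```typescript
// Hypothetical, simplified track shape; real SDK tracks carry more fields.
type TrackInfo = { trackId: string; metadata: { type: "camera" | "screen" } };

// Pick only the screen-share tracks out of everything a peer publishes
const screenTracksOf = (tracks: TrackInfo[]): TrackInfo[] =>
  tracks.filter((t) => t.metadata.type === "screen");

const tracks: TrackInfo[] = [
  { trackId: "cam-1", metadata: { type: "camera" } },
  { trackId: "screen-1", metadata: { type: "screen" } },
];

console.log(screenTracksOf(tracks).map((t) => t.trackId)); // → ["screen-1"]
```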

Connecting and joining the room

The general flow of connecting to the server and joining the room in a standard WebRTC conference setup looks like this:

Connecting and joining the room

The parts that you need to implement are marked in blue and things handled by Jellyfish are marked in red.

Firstly, the user logs in. Then your backend authenticates the user and obtains a peer token. It allows the user to authenticate and join the room in Jellyfish Server. The backend passes the token to your front-end, and your front-end passes it to Jellyfish Client. The client establishes the connection with Jellyfish Server. Then Jellyfish Client sets up tracks (camera, microphone) to stream and joins the room on Jellyfish Server. Finally, your front-end can display the room for the user.

For this tutorial we simplified this process a bit - you don't have to implement a backend or authentication. Jellyfish Dashboard will do this for you. It's also a nice tool to test and play around with Jellyfish. The flow with Jellyfish Dashboard looks like this:

Connecting and joining the room with dashboard

You can see that the only things you need to implement are interactions with the user and Jellyfish Client. This tutorial will show you how to do it.
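For reference, in a production setup (without the dashboard) the token exchange on your backend could look roughly like the sketch below. The endpoint path, request body, and response shape here are assumptions for illustration; check the Jellyfish server API docs for the real ones:

```typescript
// Sketch of a backend helper that prepares a request asking Jellyfish
// to add a peer to a room. The URL path and body are ASSUMPTIONS for
// illustration, not the verified Jellyfish server API.
type AddPeerRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

function buildAddPeerRequest(jellyfishHost: string, serverApiToken: string, roomId: string): AddPeerRequest {
  return {
    url: `http://${jellyfishHost}/room/${roomId}/peer`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The server API token authenticates YOUR backend; never ship it to the browser.
        // The peer token returned by the server is what goes to the front-end.
        Authorization: `Bearer ${serverApiToken}`,
      },
      body: JSON.stringify({ type: "webrtc" }),
    },
  };
}

const req = buildAddPeerRequest("localhost:5002", "development", "room-1");
console.log(req.url); // → "http://localhost:5002/room/room-1/peer"
```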

Setup

Create React + Vite project

Firstly create a brand new project.

npm create vite@latest my-react-app -- --template react-ts

Add dependencies

For this module to work, you'll need to add our react-client-sdk package. It is necessary to create and connect the Jellyfish Client.

npm install https://github.com/jellyfish-dev/react-client-sdk#0.1.2

Start the Jellyfish backend

For testing, we'll run the Jellyfish Media Server locally using a Docker image:

docker run -p 50000-50050:50000-50050/udp \
-p 5002:5002/tcp \
-e JF_CHECK_ORIGIN=false \
-e JF_HOST=<your ip address>:5002 \
-e JF_PORT="5002" \
-e JF_WEBRTC_USED=true \
-e JF_WEBRTC_TURN_PORT_RANGE=50000-50050 \
-e JF_WEBRTC_TURN_IP=<your ip address> \
-e JF_WEBRTC_TURN_LISTEN_IP=0.0.0.0 \
-e JF_SERVER_API_TOKEN=development \
ghcr.io/jellyfish-dev/jellyfish:0.5.0-rc0

Make sure to set JF_WEBRTC_TURN_IP and JF_HOST to your local IP address. Without it, other devices won't be able to connect to Jellyfish.

tip

To check your local IP you can use this handy command (Linux/macOS):

ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}'

Start the dashboard web front-end

There are a couple of ways to start the dashboard. The current version is ready to use and available here. Ensure that it is compatible with your Jellyfish server! Note that the dashboard only supports secure connections (https/wss) or connections to localhost; insecure requests (http/ws) will be blocked by the browser.

(Optional) Add a bit of CSS styling

For this project, we prepared simple CSS classes. You are free to use them or create your own.

General project structure

Our app will consist of two parts:

  • a component that will connect to the server and join the room

  • a component that will display the video tracks from other participants

First step - prepare all the hooks and the context

To connect to the Jellyfish backend, we need to create a Jellyfish Client instance. We can do this using the create function from the @jellyfish-dev/react-client-sdk package. It takes two generic parameters:

  • PeerMetadata - the type of metadata sent to the server when connecting to the room (for example, a user name); it has to be serializable

  • TrackMetadata - the type of metadata sent to the server when sending a track (for example, a track name); it has to be serializable as well
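"Serializable" here means the metadata must survive a round trip through JSON, so plain objects are fine but functions or class instances are not. A quick sanity check, using a name-only PeerMetadata type as an example:

```typescript
type PeerMetadata = {
  name: string;
};

// Metadata must survive JSON serialization unchanged
const metadata: PeerMetadata = { name: "John Doe" };
const roundTripped: PeerMetadata = JSON.parse(JSON.stringify(metadata));

console.log(roundTripped.name); // → "John Doe"
```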

App.tsx
import React from "react";
import { create } from "@jellyfish-dev/react-client-sdk";

// Example metadata types for peer and track
// You can define your own metadata types; just make sure they are serializable
type PeerMetadata = {
  name: string;
};

type TrackMetadata = {
  type: "camera" | "screen";
};

// Create a Jellyfish client instance
// Since we will use this context outside of the component, we need to export it
export const {
  JellyfishContextProvider, // Context provider
} = create<PeerMetadata, TrackMetadata>();

export const App = () => {};

Now we need to wrap our app with the context provider

That's all we will need to do in this file. Simply import the JellyfishContextProvider along with the App component and wrap the App component with the JellyfishContextProvider:

main.tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { App, JellyfishContextProvider } from "./components/App";

ReactDOM.createRoot(document.getElementById("root") as HTMLElement).render(
  <React.StrictMode>
    <JellyfishContextProvider>
      <App />
    </JellyfishContextProvider>
  </React.StrictMode>,
);

UI component that will connect to the server and join the room

The UI of the component will be quite simple. It will consist of a simple text input field that will allow us to enter the peer token and a button that will connect to the server and join the room. We can also display the status of the connection.

App.tsx
import React, { useState } from "react";
//...
export const App = () => {
  // Create a state to store the peer token
  const [token, setToken] = useState("");

  return (
    <div style={{ display: "flex", flexDirection: "column", gap: "8px" }}>
      <input value={token} onChange={(e) => setToken(() => e?.target?.value)} placeholder="token" />
      <div style={{ display: "flex", flexDirection: "row", gap: "8px" }}>
        <button
          disabled={false /* TODO: disable when the token is empty or we are already connected */}
          onClick={() => {
            // TODO: connect to the server
          }}
        >
          Connect
        </button>
        <button
          disabled={false /* TODO: disable when we are not connected */}
          onClick={() => {
            // TODO: disconnect from the server
          }}
        >
          Disconnect
        </button>
        <span>Status: {/* TODO: connection status */}</span>
      </div>
    </div>
  );
};

Once the UI is ready, we need to implement the logic.

App.tsx
import { SignalingUrl } from "@jellyfish-dev/react-client-sdk/.";
//...
export const {
  useStatus, // Hook to check the status of the connection
  useConnect, // Hook to connect to the server
  useDisconnect, // Hook to disconnect from the server
  JellyfishContextProvider, // Context provider
} = create<PeerMetadata, TrackMetadata>();

export const App = () => {
  // Create a state to store the peer token
  const [token, setToken] = useState("");

  // Use the built-in hooks to manage the connection
  const status = useStatus();
  const connect = useConnect();
  const disconnect = useDisconnect();

  return (
    <div style={{ display: "flex", flexDirection: "column", gap: "8px" }}>
      <input
        className="input-field"
        value={token}
        onChange={(e) => setToken(() => e?.target?.value)}
        placeholder="token"
      />
      <div style={{ display: "flex", flexDirection: "row", gap: "8px" }}>
        <button
          className="button"
          disabled={token === "" || status === "joined" /* simple check to avoid errors */}
          onClick={() => {
            connect({
              peerMetadata: { name: "John Doe" }, // example metadata
              token: token,
            });
          }}
        >
          Connect
        </button>
        <button
          className="button"
          disabled={status !== "joined"}
          onClick={() => {
            disconnect();
          }}
        >
          Disconnect
        </button>
        <span className="span-status">Status: {status}</span>
      </div>
    </div>
  );
};

Great! Now we can connect to the server and join the room. But we still need to add some logic to send our tracks to the server and receive tracks from others.

Let's send our screen to the server

This uses Navigator.mediaDevices under the hood; take a look at how it works.

App.tsx
import React, { useEffect, useState } from "react";
import { create, SCREEN_SHARING_MEDIA_CONSTRAINTS } from "@jellyfish-dev/react-client-sdk";
import { SignalingUrl, Peer } from "@jellyfish-dev/react-client-sdk/.";
//...
export const {
  useStatus, // Hook to check the status of the connection
  useApi, // Hook to get the webrtcApi reference
  useConnect, // Hook to connect to the server
  useDisconnect, // Hook to disconnect from the server
  JellyfishContextProvider, // Context provider
} = create<PeerMetadata, TrackMetadata>();

export const App = () => {
  //...
  // Get the webrtcApi reference
  const webrtcApi = useApi();

  function startScreenSharing() {
    // Get a screen-sharing MediaStream
    navigator.mediaDevices.getDisplayMedia(SCREEN_SHARING_MEDIA_CONSTRAINTS).then((screenStream) => {
      // Add the local MediaStream to webrtc
      screenStream.getTracks().forEach((track) => webrtcApi.addTrack(track, screenStream, { type: "screen" }));
    });
  }

  return (
    //...
    <button
      className="button"
      disabled={status !== "joined"}
      onClick={() => {
        startScreenSharing();
      }}
    >
      Start screen share
    </button>
    <span>Status: {status}</span>
    //...
  );
};

You should now see your screen received for each connected client on the dashboard. You can add another participant to check this out!

The streaming part of the app is ready!

What about the receiving part?

This is where the second component comes in handy

For each track received, we will create a new video element and display it on the screen. For clarity, we will separate this component into another file:

Create a file named VideoPlayer.tsx in your directory:

VideoPlayer.tsx
type Props = {
  stream: MediaStream | null | undefined;
};

const VideoPlayer = ({ stream }: Props) => {
  return (
    <div className="video-container">
      <video autoPlay playsInline muted ref={undefined /* place for the video ref */} />
    </div>
  );
};

export default VideoPlayer;

Now the logic for the component

VideoPlayer.tsx
import { RefObject, useEffect, useRef } from "react";

type Props = {
  stream: MediaStream | null | undefined;
};

const VideoPlayer = ({ stream }: Props) => {
  const videoRef: RefObject<HTMLVideoElement> = useRef<HTMLVideoElement>(null);

  useEffect(() => {
    if (!videoRef.current) return;
    videoRef.current.srcObject = stream || null;
  }, [stream]);

  return (
    <div>
      <video autoPlay playsInline muted ref={videoRef} />
    </div>
  );
};

export default VideoPlayer;

Now we can use it in our main component

App.tsx
import React, { useEffect, useState } from "react";
import { create, SCREEN_SHARING_MEDIA_CONSTRAINTS } from "@jellyfish-dev/react-client-sdk";
import { SignalingUrl, Peer } from "@jellyfish-dev/react-client-sdk/.";
import VideoPlayer from "./VideoPlayer";
//...

export const {
  useStatus, // Hook to check the status of the connection
  useTracks, // Hook to get the tracks from the server
  useApi, // Hook to get the webrtcApi reference
  useConnect, // Hook to connect to the server
  useDisconnect, // Hook to disconnect from the server
  JellyfishContextProvider, // Context provider
} = create<PeerMetadata, TrackMetadata>();

export const App = () => {
  const tracks = useTracks();
  //...
  <div
    style={{
      display: "flex",
      flexWrap: "wrap",
      justifyContent: "center", // To align items in the center
      gap: "20px",
    }}
  >
    {Object.values(tracks).map(({ stream, trackId }) => (
      <VideoPlayer key={trackId} stream={stream} /> // pass the stream to the component
    ))}
  </div>
  //...
};

You should see all the tracks sent from the dashboard directly on your page. To test this, add a new client in the dashboard and add a track (for example, a rotating frog); it will show up in your app automatically:
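One design note on the mapping above: Object.values returns the tracks in record-key insertion order, so video tiles can jump around as peers join and leave. If you want a stable layout, sort before mapping. A small sketch with a simplified, hypothetical track shape:

```typescript
// Simplified stand-in for the record of remote tracks; in the real app
// `stream` is a MediaStream and the objects carry more fields.
type RemoteTrack = { trackId: string; stream: unknown };

// Sort by trackId so video tiles keep a deterministic order between renders
function toOrderedTracks(tracks: Record<string, RemoteTrack>): RemoteTrack[] {
  return Object.values(tracks).sort((a, b) => a.trackId.localeCompare(b.trackId));
}

const ordered = toOrderedTracks({
  "track-b": { trackId: "track-b", stream: null },
  "track-a": { trackId: "track-a", stream: null },
});
console.log(ordered.map((t) => t.trackId)); // → ["track-a", "track-b"]
```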

Summary

Congrats on finishing your first Jellyfish web application! In this tutorial, you've learned how to make a basic Jellyfish client application that streams your screen and receives video tracks using WebRTC technology.

But this was just the beginning. The Jellyfish Client supports much more than just screen sharing: it can also stream audio and your device's camera, configure camera and audio devices, detect voice activity, control simulcast, bandwidth, and encoding settings, show a camera preview, display WebRTC stats, and more to come. Check out our other tutorials to learn about those features.

You can also take a look at our fully featured Videoroom Demo example:

Videoroom Demo