How To Access Front and Rear Cameras with JavaScript's getUserMedia()

With HTML5 came the introduction of APIs that have access to device hardware, including the MediaDevices API. This API provides access to media input devices like audio and video.

With the help of this API, developers can access audio and video devices to stream and display a live video feed in the browser. In this tutorial, you’ll access the video feed from the user’s device and display it in the browser using the getUserMedia method.

The getUserMedia API

The getUserMedia API makes use of the media input devices to produce a MediaStream. This MediaStream contains the requested media types, whether audio or video. Using the stream returned from the API, video feeds can be displayed in the browser, which is useful for real-time communication. When used alongside the MediaRecorder API, you can record and store media data captured in the browser. Like other recently introduced APIs, getUserMedia only works on secure origins, although localhost and file URLs are also treated as secure.

Checking Device Support

First, we have to check whether the user’s browser supports the mediaDevices API. This API exists within the navigator interface, which holds the current state and identity of the user agent. This is how the check is performed:

if('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices){
  console.log("Let's get this party started")
}

First, we check whether the mediaDevices API exists within navigator, and then whether the getUserMedia method is available within mediaDevices. If both checks pass, we can get started.

Requesting User Permission

The next step, after confirming support for getUserMedia in the browser, is to request permission to make use of the media input devices on the user agent. Calling getUserMedia returns a Promise; when the user grants permission, this Promise resolves to a MediaStream. When the user denies permission, the Promise is rejected instead, blocking access to these devices.

if ('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices) {
  // Note: await is only valid inside an async function (or a module with top-level await).
  const stream = await navigator.mediaDevices.getUserMedia({ video: true })
}

The object provided as an argument to the getUserMedia method is called constraints. It determines which of the media input devices we are requesting permission for. For example, if the object contained audio: true, the user would also be asked to grant access to the audio input device.
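
For example, to request access to both the camera and the microphone in a single call, you can pass both media types. This is a minimal sketch; it must run inside an async function:

// Prompts for both devices; the resulting stream carries audio and video tracks.
const stream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});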

Configuring Media Constraints

The constraints object is a MediaStreamConstraints object that specifies the types of media to request and the requirements for each media type. Using the constraints object, you can specify requirements for the requested stream, like the resolution of the stream and which camera to use (front or back).

A media type, either video or audio, must be provided when requesting a stream; a NotFoundError is returned if the requested media type can’t be found on the user’s device.
If we intend to request a video stream with a 1280 x 720 resolution, we can update the constraints object to look like this:

{
  video: {
    width: 1280,
    height: 720,
  }
}

With this update, the browser tries to match the specified quality settings for the stream, but if the video device can’t deliver this resolution, the browser returns the closest resolution available. To ensure that the browser returns a resolution no lower than the one provided, we have to make use of the min property. Update the constraints object to include the min property:

{
  video: {
    width: { 
      min: 1280,
    },
    height: {
      min: 720,
    }
  }
}

This ensures that the returned stream resolution will be at least 1280 x 720. If this minimum requirement can’t be met, the Promise is rejected with an OverconstrainedError.
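
You can detect this case by catching the rejection; the error’s constraint property names the constraint that couldn’t be satisfied. A minimal sketch (run inside an async function):

try {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: { min: 1280 }, height: { min: 720 } },
  });
} catch (error) {
  if (error.name === 'OverconstrainedError') {
    // error.constraint holds the offending constraint name, e.g. "width".
    console.error(`Unsatisfiable constraint: ${error.constraint}`);
  }
}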

Sometimes you’re concerned about saving data and need the stream to stay within a set resolution. This can come in handy when the user is on a limited plan. To enable this functionality, update the constraints object to contain a max field:

{
  video: {
    width: { 
      min: 1280,
      max: 1920,
    },
    height: {
      min: 720,
      max: 1080
    }
  }
}

With these settings, the browser ensures that the returned stream doesn’t go below 1280 x 720 and doesn’t exceed 1920 x 1080. Other keywords that can be used include exact and ideal. The ideal setting is typically used alongside the min and max properties to find the best possible setting closest to the ideal values provided.

You can update the constraints to use the ideal keyword:

{
  video: {
    width: { 
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    }
  }
}

To tell the browser to make use of the front or back (on mobile) camera on devices, you can specify a facingMode property in the video object:

{
  video: {
    width: { 
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    },
    facingMode: 'user'
  }
}

This setting will make use of the front-facing camera at all times on all devices. To make use of the back camera on mobile devices, alter the facingMode property to environment:

{
  video: {
    ...
    facingMode: { 
      exact: 'environment'
    }
  }
}
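
If a stream is already running and you want to switch between the front and rear cameras, a common approach is to stop the current tracks before requesting a new stream with the other facingMode. A minimal sketch (flipCamera is a hypothetical helper, not part of the API):

async function flipCamera(videoElement, useRear) {
  // Release the current camera before re-requesting, since many devices
  // don't allow two simultaneous captures.
  const currentStream = videoElement.srcObject;
  if (currentStream) {
    currentStream.getTracks().forEach(track => track.stop());
  }
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: useRear ? { exact: 'environment' } : 'user' },
  });
  videoElement.srcObject = stream;
}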

Using the enumerateDevices Method

When the enumerateDevices method is called, it returns all the media input and output devices available on the user’s system.

With this method, you can give the user options for which media input device to use for streaming audio or video content. The method returns a Promise that resolves to a MediaDeviceInfo array containing information about each device.

An example of how to make use of this method is shown in the snippet below:

async function getDevices() {
  // Resolves to an array of MediaDeviceInfo objects, one per device.
  const devices = await navigator.mediaDevices.enumerateDevices();
  console.log(devices);
}

A sample response for each of the devices would look like:

{
  deviceId: "23e77f76e308d9b56cad920fe36883f30239491b8952ae36603c650fd5d8fbgj",
  groupId: "e0be8445bd846722962662d91c9eb04ia624aa42c2ca7c8e876187d1db3a3875",
  kind: "audiooutput",
  label: "",
}

Note: A label won’t be returned unless a stream is active or the user has granted permission to access the device.
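
Because each entry carries a kind property, you can group the results by device type. A minimal sketch (listDevicesByKind is a hypothetical helper name):

async function listDevicesByKind() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  // kind is one of 'videoinput', 'audioinput', or 'audiooutput'.
  const cameras = devices.filter(device => device.kind === 'videoinput');
  const microphones = devices.filter(device => device.kind === 'audioinput');
  const speakers = devices.filter(device => device.kind === 'audiooutput');
  console.log({ cameras, microphones, speakers });
}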

Displaying the Video Stream on the Browser

We’ve gone through the process of requesting and getting access to the media devices, configured constraints to include the required resolutions, and selected the camera we need to record video. After going through all these steps, we’ll at least want to see whether the stream is delivering based on the configured settings. To make sure of this, we’ll use the video element to display the video stream in the browser.

As mentioned earlier, the getUserMedia method returns a Promise that resolves to a stream. The returned stream should be assigned directly to the video element’s srcObject property. (Older examples converted the stream to an object URL with createObjectURL, but passing a MediaStream to that method is deprecated.)
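
In its simplest form, that looks like this (a minimal sketch, assuming a video element is already on the page):

const video = document.querySelector('video');
navigator.mediaDevices
  .getUserMedia({ video: true })
  .then((stream) => {
    // Attach the live stream directly to the video element.
    video.srcObject = stream;
  });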

We’ll create a short demo where we let the user choose from their available list of video devices, using the enumerateDevices method described above.

Create an index.html file and update the contents with the code below:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
    <link rel="stylesheet" href="style.css">
    <title>Document</title>
</head>
<body>
<div class="display-cover">
    <video autoplay></video>
    <canvas class="d-none"></canvas>

    <div class="video-options">
        <select name="" id="" class="custom-select">
            <option value="">Select camera</option>
        </select>
    </div>

    <img class="screenshot-image" alt="">

    <div class="controls">
        <button class="btn btn-danger play" title="Play"><i data-feather="play-circle"></i></button>
        <button class="btn btn-info pause d-none" title="Pause"><i data-feather="pause"></i></button>
        <button class="btn btn-outline-success screenshot d-none" title="Screenshot"><i data-feather="image"></i></button>
    </div>
</div>

<script src="https://unpkg.com/feather-icons"></script>
<script src="script.js"></script>
</body>
</html>

In the snippet above, we’ve set up the elements we’ll need and a couple of controls for the video. Also included is a button for taking screenshots of the current video feed. Now let’s style up these components a bit.

Create a style.css file and copy the following styles into it. Bootstrap was included to reduce the amount of CSS we need to write to get the components going.

/* style.css */
.screenshot-image {
    width: 150px;
    height: 90px;
    border-radius: 4px;
    border: 2px solid whitesmoke;
    box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.1);
    position: absolute;
    bottom: 5px;
    left: 10px;
    background: white;
}

.display-cover {
    display: flex;
    justify-content: center;
    align-items: center;
    width: 70%;
    margin: 5% auto;
    position: relative;
}

video {
    width: 100%;
    background: rgba(0, 0, 0, 0.2);
}

.video-options {
    position: absolute;
    left: 20px;
    top: 30px;
}

.controls {
    position: absolute;
    right: 20px;
    top: 20px;
    display: flex;
}

.controls > button {
    width: 45px;
    height: 45px;
    text-align: center;
    border-radius: 100%;
    margin: 0 6px;
    background: transparent;
}

.controls > button:hover svg {
    color: white !important;
}

@media (min-width: 300px) and (max-width: 400px) {
    .controls {
        flex-direction: column;
    }

    .controls button {
        margin: 5px 0 !important;
    }
}

.controls > button > svg {
    height: 20px;
    width: 18px;
    text-align: center;
    margin: 0 auto;
    padding: 0;
}

.controls button:nth-child(1) {
    border: 2px solid #D2002E;
}

.controls button:nth-child(1) svg {
    color: #D2002E;
}

.controls button:nth-child(2) {
    border: 2px solid #008496;
}

.controls button:nth-child(2) svg {
    color: #008496;
}

.controls button:nth-child(3) {
    border: 2px solid #00B541;
}

.controls button:nth-child(3) svg {
    color: #00B541;
}

The next step is to add functionality to the demo. Using the enumerateDevices method, we’ll get the available video devices and set them as the options within the select element. Create a file called script.js and update it with the following snippet:

feather.replace();

const controls = document.querySelector('.controls');
const cameraOptions = document.querySelector('.video-options>select');
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const screenshotImage = document.querySelector('img');
const buttons = [...controls.querySelectorAll('button')];
let streamStarted = false;

const [play, pause, screenshot] = buttons;

const constraints = {
  video: {
    width: {
      min: 1280,
      ideal: 1920,
      max: 2560,
    },
    height: {
      min: 720,
      ideal: 1080,
      max: 1440
    },
  }
};

const getCameraSelection = async () => {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const videoDevices = devices.filter(device => device.kind === 'videoinput');
  const options = videoDevices.map(videoDevice => {
    return `<option value="${videoDevice.deviceId}">${videoDevice.label}</option>`;
  });
  cameraOptions.innerHTML = options.join('');
};

play.onclick = () => {
  if (streamStarted) {
    video.play();
    play.classList.add('d-none');
    pause.classList.remove('d-none');
    return;
  }
  if ('mediaDevices' in navigator && navigator.mediaDevices.getUserMedia) {
    const updatedConstraints = {
      ...constraints,
      video: {
        ...constraints.video,
        // deviceId is a track-level constraint, so it belongs inside `video`.
        deviceId: {
          exact: cameraOptions.value
        }
      }
    };
    startStream(updatedConstraints);
  }
};

const startStream = async (constraints) => {
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  handleStream(stream);
};

const handleStream = (stream) => {
  video.srcObject = stream;
  play.classList.add('d-none');
  pause.classList.remove('d-none');
  screenshot.classList.remove('d-none');
  streamStarted = true;
};

getCameraSelection();

In the snippet above there are a couple of things going on; let’s break them down:

  1. feather.replace(): this method call instantiates Feather, a great icon set for web development.
  2. The constraints variable holds the initial configuration for the stream. This will be extended to include the media device the user chooses.
  3. getCameraSelection: this function calls the enumerateDevices method, filters the array from the resolved Promise down to video input devices, and creates options for the select element from the filtered results.
  4. Calling the getUserMedia method happens within the onclick listener of the play button. Here, we check if this method is supported by the user’s browser before starting the stream.
  5. Next, we call the startStream function that takes a constraints argument. It calls the getUserMedia method with the provided constraints. handleStream is called using the stream from the resolved Promise; this function sets the returned stream as the video element’s srcObject.
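
Note that getUserMedia rejects its Promise when the user denies permission or when the constraints can’t be satisfied, so a more defensive version of startStream might wrap the call in a try/catch. A minimal sketch (the error names are standard DOMException names):

const startStream = async (constraints) => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    handleStream(stream);
  } catch (error) {
    if (error.name === 'NotAllowedError') {
      console.error('The user blocked access to the camera.');
    } else {
      console.error(error);
    }
  }
};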

Next, we’ll add click listeners to the button controls on the page to pause the stream and take screenshots. We’ll also add a listener to the select element to update the stream constraints with the selected video device.

Update the script.js file with the code below:

...

const startStream = async (constraints) => {
  ...
};

const handleStream = (stream) => {
  ...
};

cameraOptions.onchange = () => {
  const updatedConstraints = {
    ...constraints,
    video: {
      ...constraints.video,
      // As above, deviceId must be nested inside the `video` constraints.
      deviceId: {
        exact: cameraOptions.value
      }
    }
  };
  startStream(updatedConstraints);
};

const pauseStream = () => {
  video.pause();
  play.classList.remove('d-none');
  pause.classList.add('d-none');
};

const doScreenshot = () => {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  screenshotImage.src = canvas.toDataURL('image/webp');
  screenshotImage.classList.remove('d-none');
};

pause.onclick = pauseStream;
screenshot.onclick = doScreenshot;
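
Note that pausing only halts playback; the camera itself stays active. If you also want to release the camera (and turn off its indicator light), you can stop the underlying tracks. A minimal sketch of such a stopStream helper (an addition, not part of the demo above):

const stopStream = () => {
  if (video.srcObject) {
    // Stopping every track releases the camera.
    video.srcObject.getTracks().forEach((track) => track.stop());
    video.srcObject = null;
  }
  streamStarted = false;
  play.classList.remove('d-none');
  pause.classList.add('d-none');
};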

Now, when you open the index.html file in the browser, clicking the Play button will start the stream.

Here is a complete demo:

https://codepen.io/chrisbeast/pen/ebYwpX

Conclusion

This tutorial introduced the getUserMedia API, an interesting addition to HTML5 that eases the process of capturing media on the web. The API takes a constraints parameter that can be used to configure access to audio and video input devices, and to specify the video resolution required by your application. You can extend the demo further by giving the user an option to save the screenshots taken, as well as by recording and storing video and audio data with the help of the MediaRecorder API.
