

# IVS Broadcast SDK | Real-Time Streaming
<a name="broadcast"></a>

The Amazon Interactive Video Services (IVS) Real-Time Streaming broadcast SDK is for developers who are building applications with Amazon IVS. This SDK is designed to leverage the Amazon IVS architecture and will see continual improvement and new features, alongside Amazon IVS. As a native broadcast SDK, it is designed to minimize the performance impact on your application and on the devices with which your users access your application.

Note that the broadcast SDK is used for both sending and receiving video; i.e., you use the same SDK for hosts and viewers. There is no separate player SDK.

Your application can leverage the key features of the Amazon IVS broadcast SDK:
+ **High-Quality Streaming** — The broadcast SDK captures video from your camera and encodes it at up to 720p.
+ **Automatic Bitrate Adjustments** — Smartphone users are mobile, so their network conditions can change throughout the course of a broadcast. The Amazon IVS broadcast SDK automatically adjusts the video bitrate to accommodate changing network conditions.
+ **Portrait and Landscape Support** — No matter how your users hold their devices, the image appears right-side up and properly scaled. The broadcast SDK supports both portrait and landscape canvas sizes. It automatically manages the aspect ratio when the users rotate their device away from the configured orientation.
+ **Secure Streaming** — Your users’ broadcasts are encrypted using TLS, so they can keep their streams secure.
+ **External Audio Devices** — The Amazon IVS broadcast SDK supports audio jack, USB, and Bluetooth SCO external microphones.

## Platform Requirements
<a name="broadcast-platform-requirements"></a>

### Native Platforms
<a name="broadcast-native-platforms"></a>


| Platform | Supported Versions | 
| --- | --- | 
| Android |  9.0 and later. Note: customers can build with versions 6.0 and later but will not be able to use real-time streaming functionality.  | 
| iOS |  14 and later  | 

IVS supports a minimum of 4 major iOS versions and 6 major Android versions. Our current version support may extend beyond these minimums. Customers will be notified via SDK release notes at least 3 months in advance of a major version no longer being supported.

### Desktop Browsers
<a name="browser-desktop"></a>


| Browser | Supported Platforms | Supported Versions | 
| --- | --- | --- | 
| Chrome | Windows, macOS | Two major versions (current and most recent prior version) | 
| Firefox | Windows, macOS | Two major versions (current and most recent prior version) | 
| Edge | Windows 8.1 and later | Two major versions (current and most recent prior version); excludes Edge Legacy | 
| Safari | macOS | Two major versions (current and most recent prior version) | 

### Mobile Browsers (iOS and Android)
<a name="browser-mobile"></a>


| Browser | Supported Platforms | Supported Versions | 
| --- | --- | --- | 
| Chrome | iOS, Android | Two major versions (current and most recent prior version) | 
| Firefox | Android | Two major versions (current and most recent prior version) | 
| Safari | iOS | Two major versions (current and most recent prior version) | 

#### Known Limitations
<a name="browser-mobile-limitations"></a>
+ On all mobile web browsers, we recommend publishing/subscribing with no more than three simultaneous publishers, due to performance constraints which cause video artifacts and black screens. If you require more publishers, configure [audio-only publish and subscribe](web-publish-subscribe.md#web-publish-subscribe-concepts-strategy-updates).
+ We do not recommend compositing a stage and broadcasting it to a channel on Android Mobile Web, due to performance considerations and potential crashes. If broadcast functionality is required, integrate the [IVS real-time streaming Android broadcast SDK](broadcast-android.md).
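For the audio-only configuration mentioned above, a minimal strategy sketch might look like the following. The `SubscribeType` values are stubbed here for illustration; a real application imports them from the SDK instead:

```
// Stub for illustration only; a real app imports SubscribeType from
// 'amazon-ivs-web-broadcast' rather than defining it.
const SubscribeType = { NONE: 'none', AUDIO_ONLY: 'audio_only', AUDIO_VIDEO: 'audio_video' };

// Audio-only strategy for constrained mobile browsers: skip video subscriptions
// so more publishers can participate without video artifacts.
const audioOnlyStrategy = {
  stageStreamsToPublish() {
    return []; // return your audio-only LocalStageStream objects here
  },
  shouldPublishParticipant() {
    return true;
  },
  shouldSubscribeToParticipant() {
    return SubscribeType.AUDIO_ONLY;
  },
};
```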

## Webviews
<a name="broadcast-webviews"></a>

The Web broadcast SDK does not provide support for webviews or web-like environments (TVs, consoles, etc.). For mobile implementations, see the Real-Time Streaming Broadcast SDK Guide for [Android](broadcast-android.md) and [iOS](broadcast-ios.md).

## Required Device Access
<a name="broadcast-device-access"></a>

The broadcast SDK requires access to the device's cameras and microphones, both those built into the device and those connected through Bluetooth, USB, or audio jack.

## Support
<a name="broadcast-support"></a>

The broadcast SDK is continually improved. See [Amazon IVS Release Notes](release-notes.md) for available versions and fixed issues. If appropriate, before contacting support, update your version of the broadcast SDK and see if that resolves your issue.

### Versioning
<a name="broadcast-support-versioning"></a>

The Amazon IVS broadcast SDKs use [semantic versioning](https://semver.org/).

For this discussion, suppose:
+ The latest release is 4.1.3.
+ The latest release of the prior major version is 3.2.4.
+ The latest release of version 1.x is 1.5.6.

Backward-compatible new features are added as minor releases of the latest version. In this case, the next set of new features will be added as version 4.2.0.

Backward-compatible, minor bug fixes are added as patch releases of the latest version. Here, the next set of minor bug fixes will be added as version 4.1.4.

Backward-compatible, major bug fixes are handled differently; these are added to several versions:
+ Patch release of the latest version. Here, this is version 4.1.4.
+ Patch release of the prior minor version. Here, this is version 3.2.5.
+ Patch release of the latest version 1.x release. Here, this is version 1.5.7.

Major bug fixes are defined by the Amazon IVS product team. Typical examples are critical security updates and selected other fixes necessary for customers.

**Note:** In the examples above, released versions increment without skipping any numbers (e.g., from 4.1.3 to 4.1.4). In reality, one or more patch numbers may remain internal and not be released, so the released version could increment from 4.1.3 to, say, 4.1.6.

# IVS Broadcast SDK: Web Guide | Real-Time Streaming
<a name="broadcast-web"></a>

The IVS real-time streaming Web broadcast SDK gives developers the tools to build interactive, real-time experiences on the web. This SDK is for developers who are building web applications with Amazon IVS.

The Web broadcast SDK enables participants to send and receive video. The SDK supports the following operations:
+ Join a stage
+ Publish media to other participants in the stage
+ Subscribe to media from other participants in the stage
+ Manage and monitor video and audio published to the stage
+ Get WebRTC statistics for each peer connection
+ All operations from the IVS low-latency streaming Web broadcast SDK

**Latest version of Web broadcast SDK:** 1.34.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/release-notes.html#apr09-26-broadcast-web-rt))

**Reference documentation:** For information on the most important methods available in the Amazon IVS Web Broadcast SDK, see [https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference). Make sure the most current version of the SDK is selected.

**Sample code**: The samples below are a good place to get started quickly with the SDK:
+ [Simple Playback](https://codepen.io/amazon-ivs/pen/RNwVBRK)
+ [Simple Publishing and Subscribing](https://codepen.io/amazon-ivs/pen/ZEqgrpo)
+ [Comprehensive React Real-Time Collaboration Demo](https://github.com/aws-samples/amazon-ivs-real-time-collaboration-web-demo/tree/main)

**Platform requirements**: See [Amazon IVS Broadcast SDK](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/broadcast.html) for a list of supported platforms.

**Note:** Publishing from a browser is convenient for end users because it does not require installing additional software. However, browser-based publishing is subject to the constraints and variability of browser environments. If you need to prioritize stability (for example, for event streaming), we generally recommend publishing from a non-browser source (e.g., OBS Studio or other dedicated encoders), which often have direct access to system resources and avoid browser limitations. For more on non-browser publishing options, see the [Stream Ingest](rt-stream-ingest.md) documentation.

# Getting Started with the IVS Web Broadcast SDK | Real-Time Streaming
<a name="broadcast-web-getting-started"></a>

This document takes you through the steps involved in getting started with the IVS real-time streaming Web broadcast SDK.

## Imports
<a name="broadcast-web-getting-started-imports"></a>

The building blocks for real-time are located in a different namespace than the root broadcasting modules.

### Using a Script Tag
<a name="broadcast-web-getting-started-imports-script"></a>

The Web broadcast SDK is distributed as a JavaScript library and can be retrieved at [https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js](https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js).

The classes and enums defined in the examples below can be found on the global object `IVSBroadcastClient`:

```
const { Stage, SubscribeType } = IVSBroadcastClient;
```

### Using npm
<a name="broadcast-web-getting-started-imports-npm"></a>

To install the `npm` package: 

```
npm install amazon-ivs-web-broadcast
```

The classes, enums, and types also can be imported from the package module:

```
import { Stage, SubscribeType, LocalStageStream } from 'amazon-ivs-web-broadcast'
```

### Server-Side Rendering Support
<a name="broadcast-web-getting-started-imports-server-side-rendering"></a>

The Web Broadcast SDK Stages library cannot be loaded in a server-side context, as it references browser primitives necessary to the functioning of the library when loaded. To work around this, load the library dynamically, as demonstrated in the [Web Broadcast Demo using Next and React](https://github.com/aws-samples/amazon-ivs-broadcast-web-demo/blob/main/hooks/useBroadcastSDK.js#L26-L31).
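A minimal sketch of that pattern guards the import behind a browser check, so the module is only fetched client-side (the caching variable and function name here are illustrative):

```
// Load the SDK lazily and only in the browser (e.g., from a React useEffect).
let IVSBroadcastClientModule;

async function loadBroadcastSDK() {
  if (typeof window === 'undefined') {
    return undefined; // server-side render: skip loading
  }
  if (!IVSBroadcastClientModule) {
    IVSBroadcastClientModule = await import('amazon-ivs-web-broadcast');
  }
  return IVSBroadcastClientModule;
}
```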

## Request Permissions
<a name="broadcast-web-request-permissions"></a>

Your app must request permission to access the user’s camera and microphone, and it must be served using HTTPS. (This is not specific to Amazon IVS; it is required for any website that needs access to cameras and microphones.)

Here's an example function showing how you can request and capture permissions for both audio and video devices:

```
async function handlePermissions() {
   let permissions = {
       audio: false,
       video: false,
   };
   try {
       const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
       for (const track of stream.getTracks()) {
           track.stop();
       }
       permissions = { video: true, audio: true };
   } catch (err) {
       permissions = { video: false, audio: false };
       console.error(err.message);
   }
   // If we still don't have permissions after requesting them, display the error message
   if (!permissions.video) {
       console.error('Failed to get video permissions.');
   } else if (!permissions.audio) {
       console.error('Failed to get audio permissions.');
   }
}
```

For additional information, see the [Permissions API](https://developer.mozilla.org/en-US/docs/Web/API/Permissions_API) and [MediaDevices.getUserMedia()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia).
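As an optional refinement, the Permissions API can be used to check the current state before prompting. Browser support for the `'camera'` and `'microphone'` permission names varies, so the sketch below treats a failed query as `'prompt'`. The helper name and the injected `permissionsApi` parameter are illustrative, used to keep the function testable:

```
// Query current camera/microphone permission state, falling back to 'prompt'
// where the permission name is unsupported by the browser.
async function queryMediaPermissions(permissionsApi = navigator.permissions) {
  const result = { camera: 'prompt', microphone: 'prompt' };
  for (const name of ['camera', 'microphone']) {
    try {
      const status = await permissionsApi.query({ name });
      result[name] = status.state; // 'granted' | 'denied' | 'prompt'
    } catch {
      // Permission name unsupported; keep the 'prompt' default.
    }
  }
  return result;
}
```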

## List Available Devices
<a name="broadcast-web-request-list-devices"></a>

To see what devices are available to capture, query the browser's [MediaDevices.enumerateDevices()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices) method:

```
const devices = await navigator.mediaDevices.enumerateDevices();
window.videoDevices = devices.filter((d) => d.kind === 'videoinput');
window.audioDevices = devices.filter((d) => d.kind === 'audioinput');
```

## Retrieve a MediaStream from a Device
<a name="broadcast-web-retrieve-mediastream"></a>

After acquiring the list of available devices, you can retrieve a stream from any number of devices. For example, you can use the `getUserMedia()` method to retrieve a stream from a camera.

If you'd like to specify which device to capture the stream from, you can explicitly set the `deviceId` in the `audio` or `video` section of the media constraints. Alternately, you can omit the `deviceId` and have users select their devices from the browser prompt.

You also can specify an ideal camera resolution using the `width` and `height` constraints. (Read more about these constraints [here](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints#properties_of_video_tracks).) The SDK automatically applies width and height constraints that correspond to your maximum broadcast resolution; however, it's a good idea to also apply these yourself to ensure that the source aspect ratio is not changed after you add the source to the SDK.

For real-time streaming, ensure that media is constrained to 720p resolution. Specifically, your `getUserMedia` and `getDisplayMedia` constraint values for width and height must not exceed 921600 (1280x720) when multiplied together. 

```
const videoConfiguration = {
  maxWidth: 1280,
  maxHeight: 720,
  maxFramerate: 30,
}

window.cameraStream = await navigator.mediaDevices.getUserMedia({
   video: {
       deviceId: window.videoDevices[0].deviceId,
       width: {
           ideal: videoConfiguration.maxWidth,
       },
       height: {
           ideal: videoConfiguration.maxHeight,
       },
   },
});
window.microphoneStream = await navigator.mediaDevices.getUserMedia({
   audio: { deviceId: window.audioDevices[0].deviceId },
});
```

# Publishing & Subscribing with the IVS Web Broadcast SDK | Real-Time Streaming
<a name="web-publish-subscribe"></a>

This document takes you through the steps involved in publishing and subscribing to a stage using the IVS real-time streaming Web broadcast SDK.

## Concepts
<a name="web-publish-subscribe-concepts"></a>

Three core concepts underlie real-time functionality: [stage](#web-publish-subscribe-concepts-stage), [strategy](#web-publish-subscribe-concepts-strategy), and [events](#web-publish-subscribe-concepts-events). The design goal is minimizing the amount of client-side logic necessary to build a working product.

### Stage
<a name="web-publish-subscribe-concepts-stage"></a>

The `Stage` class is the main point of interaction between the host application and the SDK. It represents the stage itself and is used to join and leave the stage. Creating and joining a stage requires a valid, unexpired token string from the control plane (represented as `token`). Joining and leaving a stage are simple:

```
const stage = new Stage(token, strategy)

try {
   await stage.join();
} catch (error) {
   // handle join exception
}

stage.leave();
```

### Strategy
<a name="web-publish-subscribe-concepts-strategy"></a>

The `StageStrategy` interface provides a way for the host application to communicate the desired state of the stage to the SDK. Three functions need to be implemented: `shouldSubscribeToParticipant`, `shouldPublishParticipant`, and `stageStreamsToPublish`. All are discussed below.

To use a defined strategy, pass it to the `Stage` constructor. The following is a complete example of an application using a strategy to publish a participant's webcam to the stage and subscribe to all participants. Each required strategy function's purpose is explained in detail in the subsequent sections.

```
const devices = await navigator.mediaDevices.getUserMedia({ 
   audio: true,
   video: {
        width: { max: 1280 },
        height: { max: 720 },
    } 
});
const myAudioTrack = new LocalStageStream(devices.getAudioTracks()[0]);
const myVideoTrack = new LocalStageStream(devices.getVideoTracks()[0]);

// Define the stage strategy, implementing required functions
const strategy = {
   audioTrack: myAudioTrack,
   videoTrack: myVideoTrack,

   // optional
   updateTracks(newAudioTrack, newVideoTrack) {
      this.audioTrack = newAudioTrack;
      this.videoTrack = newVideoTrack;
   },

   // required
   stageStreamsToPublish() {
      return [this.audioTrack, this.videoTrack];
   },

   // required
   shouldPublishParticipant(participant) {
      return true;
   },

   // required
   shouldSubscribeToParticipant(participant) {
      return SubscribeType.AUDIO_VIDEO;
   }
};

// Initialize the stage and start publishing
const stage = new Stage(token, strategy);
await stage.join();


// To update later (e.g. in an onClick event handler)
strategy.updateTracks(myNewAudioTrack, myNewVideoTrack);
stage.refreshStrategy();
```

#### Subscribing to Participants
<a name="web-publish-subscribe-concepts-strategy-participants"></a>

```
shouldSubscribeToParticipant(participant: StageParticipantInfo): SubscribeType
```

When a remote participant joins the stage, the SDK queries the host application about the desired subscription state for that participant. The options are `NONE`, `AUDIO_ONLY`, and `AUDIO_VIDEO`. When returning a value for this function, the host application does not need to worry about the publish state, current subscription state, or stage connection state. If `AUDIO_VIDEO` is returned, the SDK waits until the remote participant is publishing before it subscribes, and it updates the host application by emitting events throughout the process.

Here is a sample implementation:

```
const strategy = {
   
   shouldSubscribeToParticipant: (participant) => {
      return SubscribeType.AUDIO_VIDEO;
   }

   // ... other strategy functions
}
```

This is the complete implementation of this function for a host application that always wants all participants to see each other; e.g., a video chat application.

More advanced implementations also are possible. For example, assume the application provides a `role` attribute when creating the token with CreateParticipantToken. The application could use the `attributes` property on `StageParticipantInfo` to selectively subscribe to participants based on the server-provided attributes:

```
const strategy = {
   
   shouldSubscribeToParticipant(participant) {
      switch (participant.attributes.role) {
         case 'moderator':
            return SubscribeType.NONE;
         case 'guest':
            return SubscribeType.AUDIO_VIDEO;
         default:
            return SubscribeType.NONE;
      }
   }
   // ... other strategy properties
}
```

This can be used to create a stage where moderators can monitor all guests without being seen or heard themselves. The host application could use additional business logic to let moderators see each other but remain invisible to guests.

#### Configuration for Subscribing to Participants
<a name="web-publish-subscribe-concepts-strategy-participants-config"></a>

```
subscribeConfiguration(participant: StageParticipantInfo): SubscribeConfiguration
```

If a remote participant is being subscribed to (see [Subscribing to Participants](#web-publish-subscribe-concepts-strategy-participants)), the SDK queries the host application about a custom subscribe configuration for that participant. This configuration is optional and allows the host application to control certain aspects of subscriber behavior. For information on what can be configured, see [SubscribeConfiguration](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/SubscribeConfiguration) in the SDK reference documentation.

Here is a sample implementation:

```
const strategy = {

   subscribeConfiguration: (participant) => {
      return {
         jitterBuffer: {
            minDelay: JitterBufferMinDelay.MEDIUM
         }
      };
   },

   // ... other strategy functions
}
```

This implementation updates the jitter-buffer minimum delay for all subscribed participants to a preset of `MEDIUM`.

As with `shouldSubscribeToParticipant`, more advanced implementations are possible. The given `ParticipantInfo` can be used to selectively update the subscribe configuration for specific participants.

We recommend using the default behaviors. Specify custom configuration only if there is a particular behavior you want to change.

#### Publishing
<a name="web-publish-subscribe-concepts-strategy-publishing"></a>

```
shouldPublishParticipant(participant: StageParticipantInfo): boolean
```

Once connected to the stage, the SDK queries the host application to see if a particular participant should publish. This is invoked only on local participants that have permission to publish based on the provided token.

Here is a sample implementation:

```
const strategy = {
   
   shouldPublishParticipant: (participant) => {
      return true;
   }

   // ... other strategy properties
}
```

This is for a standard video chat application where users always want to publish. Users can mute and unmute their audio and video to be hidden or seen and heard instantly. (They also can publish and unpublish, but that is much slower. Mute/unmute is preferable for use cases where visibility changes often.)

#### Choosing Streams to Publish
<a name="web-publish-subscribe-concepts-strategy-streams"></a>

```
stageStreamsToPublish(): LocalStageStream[];
```

When publishing, this is used to determine what audio and video streams should be published. This is covered in more detail later in [Publish a Media Stream](#web-publish-subscribe-publish-stream).

#### Updating the Strategy
<a name="web-publish-subscribe-concepts-strategy-updates"></a>

The strategy is intended to be dynamic: the values returned from any of the above functions can be changed at any time. For example, if the host application does not want to publish until the end user taps a button, you could return a variable from `shouldPublishParticipant` (something like `hasUserTappedPublishButton`). When that variable changes based on an interaction by the end user, call `stage.refreshStrategy()` to signal to the SDK that it should query the strategy for the latest values, applying only things that have changed. If the SDK observes that the `shouldPublishParticipant` value has changed, it starts the publish process. If the SDK queries and all functions return the same value as before, the `refreshStrategy` call does not modify the stage.
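That pattern can be sketched as follows. The flag name and click handler are illustrative; `refreshStrategy()` is the real SDK call:

```
// Publish only after the end user taps the publish button.
let hasUserTappedPublishButton = false;

const strategy = {
  shouldPublishParticipant() {
    return hasUserTappedPublishButton;
  },
  // ... other required strategy functions
};

// Button click handler: flip the flag, then tell the SDK to re-query the strategy.
function onPublishButtonTap(stage) {
  hasUserTappedPublishButton = true;
  stage.refreshStrategy(); // SDK sees the changed value and starts publishing
}
```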

If the return value of `shouldSubscribeToParticipant` changes from `AUDIO_VIDEO` to `AUDIO_ONLY` for a participant, that participant's video stream is removed, if one existed previously.

Generally, the stage uses the strategy to most efficiently apply the difference between the previous and current strategies, without the host application needing to worry about all the state required to manage it properly. Because of this, think of calling `stage.refreshStrategy()` as a cheap operation, because it does nothing unless the strategy changes.

### Events
<a name="web-publish-subscribe-concepts-events"></a>

A `Stage` instance is an event emitter. Using `stage.on()`, the state of the stage is communicated to the host application. Updates to the host application’s UI usually can be supported entirely by the events. The events are as follows:

```
stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED, (participant, state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_SUBSCRIBE_STATE_CHANGED, (participant, state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_REMOVED, (participant, streams) => {})
stage.on(StageEvents.STAGE_STREAM_ADAPTION_CHANGED, (participant, stream, isAdapting) => {})
stage.on(StageEvents.STAGE_STREAM_LAYERS_CHANGED, (participant, stream, layers) => {})
stage.on(StageEvents.STAGE_STREAM_LAYER_SELECTED, (participant, stream, layer, reason) => {})
stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {})
stage.on(StageEvents.STAGE_STREAM_SEI_MESSAGE_RECEIVED, (participant, stream) => {})
```

For most of these events, the corresponding `ParticipantInfo` is provided.

It is not expected that the information provided by the events impacts the return values of the strategy. For example, the return value of `shouldSubscribeToParticipant` is not expected to change when `STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED` is called. If the host application wants to subscribe to a particular participant, it should return the desired subscription type regardless of that participant’s publish state. The SDK is responsible for ensuring that the desired state of the strategy is acted on at the correct time based on the state of the stage.

## Publish a Media Stream
<a name="web-publish-subscribe-publish-stream"></a>

Local devices like microphones and cameras are retrieved using the same steps as outlined above in [Retrieve a MediaStream from a Device](broadcast-web-getting-started.md#broadcast-web-retrieve-mediastream). In the example below, we use the `MediaStream` to create a list of `LocalStageStream` objects used for publishing by the SDK:

```
try {
    // Get stream using steps outlined in document above
    const stream = await getMediaStreamFromDevice();

    const streamsToPublish = stream.getTracks().map(track => new LocalStageStream(track));

    // Create stage with strategy, or update existing strategy
    const strategy = {
        stageStreamsToPublish: () => streamsToPublish
    };
} catch (error) {
    // handle the case where the media stream could not be retrieved
}
```

## Publish a Screenshare
<a name="web-publish-subscribe-publish-screenshare"></a>

Applications often need to publish a screenshare in addition to the user's web camera. Publishing a screenshare necessitates creating an additional token for the stage, specifically for publishing the screenshare's media. Use `getDisplayMedia` and constrain the resolution to a maximum of 720p. After that, the steps are similar to publishing a camera to the stage.

```
// Invoke the following lines to get the screenshare's tracks
const media = await navigator.mediaDevices.getDisplayMedia({
   video: {
      width: {
         max: 1280,
      },
      height: {
         max: 720,
      }
   }
});
const screenshare = { videoStream: new LocalStageStream(media.getVideoTracks()[0]) };
const screenshareStrategy = {
   stageStreamsToPublish: () => {
      return [screenshare.videoStream];
   },
   shouldPublishParticipant: (participant) => {
      return true;
   },
   shouldSubscribeToParticipant: (participant) => {
      return SubscribeType.AUDIO_VIDEO;
   }
}
const screenshareStage = new Stage(screenshareToken, screenshareStrategy);
await screenshareStage.join();
```

## Display and Remove Participants
<a name="web-publish-subscribe-participants"></a>

After subscribing is completed, you receive an array of `StageStream` objects through the `STAGE_PARTICIPANT_STREAMS_ADDED` event. The event also gives you participant info to help when displaying media streams:

```
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
    let streamsToDisplay = streams;

    if (participant.isLocal) {
        // Be sure to exclude local audio streams; otherwise echo will occur
        streamsToDisplay = streams.filter(stream => stream.streamType === StreamType.VIDEO)
    }

    // Create or find video element already available in your application
    const videoEl = getParticipantVideoElement(participant.id);

    // Attach the participant's streams
    videoEl.srcObject = new MediaStream();
    streamsToDisplay.forEach(stream => videoEl.srcObject.addTrack(stream.mediaStreamTrack));
})
```

When a participant stops publishing or is unsubscribed from a stream, the `STAGE_PARTICIPANT_STREAMS_REMOVED` event is emitted with the streams that were removed. Host applications should use this as a signal to remove the participant’s video stream from the DOM.

`STAGE_PARTICIPANT_STREAMS_REMOVED` is invoked for all scenarios in which a stream might be removed, including:
+ The remote participant stops publishing.
+ A local device unsubscribes or changes subscription from `AUDIO_VIDEO` to `AUDIO_ONLY`.
+ The remote participant leaves the stage.
+ The local participant leaves the stage.

Because `STAGE_PARTICIPANT_STREAMS_REMOVED` is invoked for all scenarios, no custom business logic is required around removing participants from the UI during remote or local leave operations.
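A sketch of that cleanup, reusing the illustrative `getParticipantVideoElement` helper from the display example (passed in here as a parameter so the handler stays testable):

```
// Detach removed tracks from a participant's video element, and drop the
// element entirely once no tracks remain.
function handleStreamsRemoved(participant, streams, getParticipantVideoElement) {
  const videoEl = getParticipantVideoElement(participant.id);
  if (!videoEl || !videoEl.srcObject) return;

  // Detach only the removed tracks; any remaining tracks keep playing.
  streams.forEach((stream) => videoEl.srcObject.removeTrack(stream.mediaStreamTrack));

  if (videoEl.srcObject.getTracks().length === 0) {
    videoEl.remove();
  }
}

// Wiring it up:
// stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_REMOVED, (participant, streams) =>
//    handleStreamsRemoved(participant, streams, getParticipantVideoElement));
```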

## Mute and Unmute Media Streams
<a name="web-publish-subscribe-mute-streams"></a>

`LocalStageStream` objects have a `setMuted` function that controls whether the stream is muted. This function can be called on the stream before or after it is returned from the `stageStreamsToPublish` strategy function.

**Important**: If a new `LocalStageStream` object instance is returned by `stageStreamsToPublish` after a call to `refreshStrategy`, the mute state of the new stream object is applied to the stage. Be careful when creating new `LocalStageStream` instances to make sure the expected mute state is maintained.
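For example, a mute toggle might look like this sketch (the toggle function and flag are illustrative; `setMuted` is the real SDK method):

```
// Toggle the mute state of a published audio stream.
let isAudioMuted = false;

function toggleAudioMute(audioStream) {
  isAudioMuted = !isAudioMuted;
  audioStream.setMuted(isAudioMuted); // can be called before or after publishing
  return isAudioMuted;
}
```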

## Monitor Remote Participant Media Mute State
<a name="web-publish-subscribe-mute-state"></a>

When participants change the mute state of their video or audio, the `STAGE_STREAM_MUTE_CHANGED` event is triggered with a list of streams that have changed. Use the `isMuted` property on `StageStream` to update your UI accordingly:

```
stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {
   if (stream.streamType === 'video' && stream.isMuted) {
       // handle UI changes for video track getting muted
   }
})
```

Also, you can look at [StageParticipantInfo](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference#stageparticipantinfo) for state information on whether audio or video is muted:

```
stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {
   if (participant.videoStopped || participant.audioMuted) {
       // handle UI changes for either video or audio
   }
})
```

## Get WebRTC Statistics
<a name="web-publish-subscribe-webrtc-stats"></a>

The `requestQualityStats()` method provides access to detailed WebRTC statistics for both local and remote streams. It is available on both `LocalStageStream` and `RemoteStageStream` objects and returns comprehensive quality metrics, including network quality, packet statistics, bitrate information, and frame-related metrics.

This is an asynchronous method; you can retrieve statistics either with `await` or by chaining a promise. It returns `undefined` when statistics are not available (e.g., the stream is not active or internal statistics are unavailable). If statistics are available, then depending on the stream (remote or local, video or audio), the method returns a [LocalVideoStats](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/LocalVideoStats), [LocalAudioStats](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/LocalAudioStats), [RemoteVideoStats](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/RemoteVideoStats), or [RemoteAudioStats](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/RemoteAudioStats) object.

Note that for video streams with simulcast, the array contains multiple stat objects (one per layer).

**Best Practices**
+ Polling frequency — Call `requestQualityStats()` at reasonable intervals (1-5 seconds) to avoid performance impact
+ Error handling — Always check if the returned value is `undefined` before processing
+ Memory management — Clear intervals/timeouts when streams are no longer needed
+ Network quality — Use `networkQuality` for user feedback regarding possible degradations caused by the network. For details, see [NetworkQuality](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/enumerations/NetworkQuality).

**Example Usage**

```
// For local streams
const localStats = await localVideoStream.requestQualityStats();
const audioStats = await localAudioStream.requestQualityStats();

// For remote streams
const remoteVideoStats = await remoteVideoStream.requestQualityStats();
const remoteAudioStats = await remoteAudioStream.requestQualityStats();

// Example: Monitor stats every 10 seconds
const statsInterval = setInterval(async () => {
   const stats = await localVideoStream.requestQualityStats();
   if (stats) {
      // Note: If simulcast is enabled, the array contains
      // one stats record per layer
      stats.forEach(layer => {
         const rid = layer.rid || 'default';
         console.log(`Layer ${rid}:`, {
            active: layer.active,
            networkQuality: layer.networkQuality,
            packetsSent: layer.packetsSent,
            bytesSent: layer.bytesSent,
            resolution: `${layer.frameWidth}x${layer.frameHeight}`,
            fps: layer.framesPerSecond
         });
      });
   }
}, 10000);
```

## Optimizing Media
<a name="web-publish-subscribe-optimizing-media"></a>

It's recommended to limit `getUserMedia` and `getDisplayMedia` calls to the following constraints for the best performance:

```
const CONSTRAINTS = {
    video: {
        width: { ideal: 1280 }, // Note: flip width and height values if portrait is desired
        height: { ideal: 720 },
        frameRate: { ideal: 30 },
    },
};
```

You can further constrain the media through additional options passed to the `LocalStageStream` constructor:

```
// Available options (TypeScript shape):
//   minBitrate?: number;
//   maxBitrate?: number;
//   maxFramerate?: number;
//   simulcast: { enabled: boolean };
const localStreamOptions = {
    maxBitrate: 2000,      // example values
    maxFramerate: 30,
    simulcast: { enabled: true }
};
const localStream = new LocalStageStream(track, localStreamOptions);
```

In the code above:
+ `minBitrate` sets a minimum bitrate that the browser should be expected to use. However, a low complexity video stream may push the encoder to go lower than this bitrate.
+ `maxBitrate` sets a maximum bitrate that the browser should be expected to not exceed for this stream.
+ `maxFramerate` sets a maximum frame rate that the browser should be expected to not exceed for this stream.
+ The `simulcast` option is usable only on Chromium-based browsers. It enables sending three rendition layers of the stream.
  + This allows the server to choose which rendition to send to other participants, based on their networking limitations.
  + When `simulcast` is specified along with a `maxBitrate` and/or `maxFramerate` value, it is expected that the highest rendition layer will be configured with these values in mind, provided the `maxBitrate` does not go below the internal SDK’s second highest layer’s default `maxBitrate` value of 900 kbps.
  + If `maxBitrate` is specified as too low compared to the second highest layer’s default value, `simulcast` will be disabled.
+ `simulcast` cannot be toggled on and off without republishing the media: have `shouldPublishParticipant` return `false`, call `refreshStrategy`, then have `shouldPublishParticipant` return `true`, and call `refreshStrategy` again.
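The republish cycle described in the last bullet can be sketched as follows. This is a sketch, not SDK-prescribed code: it assumes an existing `stage` object, a camera `track`, and the SDK's `LocalStageStream` export, and the helper name is hypothetical.

```javascript
// Sketch of the republish cycle (assumptions: an existing `stage` with
// refreshStrategy(), a camera `track`, and the SDK's LocalStageStream export).
let shouldPublish = true;
let myStream; // the currently published LocalStageStream

const strategy = {
    shouldPublishParticipant: () => shouldPublish,
    stageStreamsToPublish: () => (myStream ? [myStream] : []),
    // ... other strategy functions
};

async function republishWithSimulcast(stage, track, simulcastEnabled) {
    // 1. Stop publishing.
    shouldPublish = false;
    stage.refreshStrategy();

    // 2. Recreate the stream with the new simulcast setting.
    myStream = new LocalStageStream(track, {
        simulcast: { enabled: simulcastEnabled },
    });

    // 3. Publish again.
    shouldPublish = true;
    stage.refreshStrategy();
}
```

Calling `republishWithSimulcast(stage, track, false)` would tear down the publish, rebuild the stream without simulcast, and publish again.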

## Get Participant Attributes
<a name="web-publish-subscribe-participant-attributes"></a>

If you specify attributes in the `CreateParticipantToken` operation request, you can see the attributes in `StageParticipantInfo` properties:

```
stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
   console.log(`Participant ${participant.id} info:`, participant.attributes);
})
```

## Supplemental Enhancement Information (SEI)
<a name="web-publish-subscribe-sei-attributes"></a>

The Supplemental Enhancement Information (SEI) NAL unit is used to store frame-aligned metadata alongside the video. It can be used when publishing and subscribing to H.264 video streams. SEI payloads are not guaranteed to reach subscribers, especially in bad network conditions. Because the SEI payload stores data directly within the H.264 frame structure, this capability cannot be used for audio-only streams.

### Inserting SEI Payloads
<a name="sei-attributes-inserting-sei-payloads"></a>

Publishing clients can insert SEI payloads into a published stage stream by enabling `inBandMessaging` in their video's `LocalStageStream` configuration and then invoking the `insertSeiMessage` method. Note that enabling `inBandMessaging` increases SDK memory usage.

Payloads must be of the [ArrayBuffer](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) type. Each payload must be larger than 0 bytes and smaller than 1 KB, and the total size of SEI messages inserted must not exceed 10 KB per second.

```
const config = {
    inBandMessaging: { enabled: true }
};
const vidStream = new LocalStageStream(videoTrack, config);
const payload = new TextEncoder().encode('hello world').buffer;
vidStream.insertSeiMessage(payload);
```
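As an illustration of the limits above, a hypothetical guard (not part of the SDK) could validate payloads before calling `insertSeiMessage`. The 1 KB and 10 KB/s figures come from the text above; the helper itself is an assumption.

```javascript
// Hypothetical validation helper (not part of the SDK): checks that an SEI
// payload is an ArrayBuffer between 1 byte and 1 KB, and tracks a simple
// 10 KB-per-second budget across insertions.
const MAX_PAYLOAD_BYTES = 1024;         // payloads must be < 1 KB
const MAX_BYTES_PER_SECOND = 10 * 1024; // total inserted bytes per second

let windowStart = 0;
let bytesThisWindow = 0;

function canInsertSei(payload, now = Date.now()) {
    if (!(payload instanceof ArrayBuffer)) return false;
    if (payload.byteLength === 0 || payload.byteLength >= MAX_PAYLOAD_BYTES) return false;

    if (now - windowStart >= 1000) { // reset the 1-second window
        windowStart = now;
        bytesThisWindow = 0;
    }
    if (bytesThisWindow + payload.byteLength > MAX_BYTES_PER_SECOND) return false;

    bytesThisWindow += payload.byteLength;
    return true;
}
```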

#### Repeating SEI Payloads
<a name="sei-attributes-repeating-sei-payloads"></a>

Optionally, provide a `repeatCount` to repeat insertion of the SEI payload in the next N frames sent. This can help mitigate the loss inherent in the UDP transport used to send video. The value must be between 0 and 30. Receiving clients must have logic to de-duplicate repeated messages.

```
vidStream.insertSeiMessage(payload, { repeatCount: 5 }); // Optional config, repeatCount must be between 0 and 30
```
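Because repeated payloads arrive as duplicates on the receiving side, one hypothetical de-duplication scheme is to prepend a sequence number to each payload. This framing is an application-level convention, not something the SDK provides.

```javascript
// Hypothetical de-duplication (not part of the SDK): the publisher prepends a
// 4-byte sequence number to each payload; the subscriber skips repeats.
let nextSeq = 0;

function encodeSeiPayload(text) {
    const body = new TextEncoder().encode(text);
    const buf = new ArrayBuffer(4 + body.byteLength);
    new DataView(buf).setUint32(0, nextSeq++);
    new Uint8Array(buf, 4).set(body);
    return buf;
}

const seen = new Set();

function decodeSeiPayload(buffer) {
    const seq = new DataView(buffer).getUint32(0);
    if (seen.has(seq)) return null; // duplicate caused by repeatCount
    seen.add(seq);
    return new TextDecoder().decode(new Uint8Array(buffer, 4));
}
```

The publisher would pass `encodeSeiPayload('...')` to `insertSeiMessage`, and the subscriber would run `decodeSeiPayload` on each received `seiMessage.payload`, ignoring `null` results.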

### Reading SEI Payloads
<a name="sei-attributes-reading-sei-payloads"></a>

Subscribing clients can read SEI payloads, if present, from a publisher sending H.264 video by enabling `inBandMessaging` in the subscriber's `SubscribeConfiguration` and listening to the `StageEvents.STAGE_STREAM_SEI_MESSAGE_RECEIVED` event, as shown in the following example:

```
const strategy = {
    subscribeConfiguration: (participant) => {
        return {
            inBandMessaging: {
                enabled: true
            }
        }
    }
    // ... other strategy functions
}

stage.on(StageEvents.STAGE_STREAM_SEI_MESSAGE_RECEIVED, (participant, seiMessage) => {
    console.log(seiMessage.payload, seiMessage.uuid);
});
```

## Layered Encoding with Simulcast
<a name="web-publish-subscribe-layered-encoding-simulcast"></a>

Layered encoding with simulcast is an IVS real-time streaming feature that allows publishers to send multiple quality layers of video and subscribers to dynamically or manually switch between those layers. The feature is described in more detail in the [Streaming Optimizations](https://docs.aws.amazon.com//ivs/latest/RealTimeUserGuide/real-time-streaming-optimization.html) document.

### Configuring Layered Encoding (Publisher)
<a name="web-layered-encoding-simulcast-configure-publisher"></a>

As a publisher, to enable layered encoding with simulcast, add the following configuration to your `LocalStageStream` on instantiation:

```
// Enable Simulcast
let cameraStream = new LocalStageStream(cameraDevice, {
   simulcast: { enabled: true }
})
```

Depending on the input resolution of your camera device, a set number of layers will be encoded and sent as defined in the [Default Layers, Qualities, and Framerates](real-time-streaming-optimization.md#real-time-streaming-optimization-default-layers) section of *Streaming Optimizations*.

Also, you can optionally configure individual layers from within the simulcast configuration:

```
import { SimulcastLayerPresets } from 'amazon-ivs-web-broadcast'

// Enable Simulcast
let cameraStream = new LocalStageStream(cameraDevice, {
   simulcast: {
      enabled: true,
      layers: [
         SimulcastLayerPresets.DEFAULT_720,
         SimulcastLayerPresets.DEFAULT_360,
         SimulcastLayerPresets.DEFAULT_180,
      ]
   }
})
```

Alternatively, you can create your own custom layer configurations for up to three layers. If you provide an empty array or no value, the defaults described above are used. Layers are described with the following required properties:
+ `height: number;`
+ `width: number;`
+ `maxBitrateKbps: number;`
+ `maxFramerate: number;`

Starting from the presets, you can either override individual properties or create an entirely new configuration:

```
import { SimulcastLayerPresets } from 'amazon-ivs-web-broadcast'

const custom720pLayer = {
   ...SimulcastLayerPresets.DEFAULT_720,
   maxFramerate: 15,
}

const custom360pLayer = {
   maxBitrateKbps: 600,
   maxFramerate: 15,
   width: 640,
   height: 360,
}

// Enable Simulcast
let cameraStream = new LocalStageStream(cameraDevice, {
   simulcast: {
      enabled: true,
      layers: [
         custom720pLayer,
         custom360pLayer,
      ]
   }
})
```

For maximum values, limits, and errors which can be triggered when configuring individual layers, see the SDK reference documentation.

### Configuring Layered Encoding (Subscriber)
<a name="web-layered-encoding-simulcast-configure-subscriber"></a>

As a subscriber, no action is needed to enable layered encoding. If a publisher is sending simulcast layers, then by default the server dynamically adapts between layers, choosing the optimal quality based on the subscriber's device and network conditions.

Alternatively, to pick explicit layers that the publisher is sending, there are several options, described below.

### Option 1: Initial Layer Quality Preference
<a name="web-layered-encoding-simulcast-layer-quality-preference"></a>

Using the `subscribeConfiguration` strategy, it is possible to choose what initial layer you want to receive as a subscriber:

```
const strategy = {
    subscribeConfiguration: (participant) => {
        return {
            simulcast: {
                initialLayerPreference: InitialLayerPreference.LOWEST_QUALITY
            }
        }
    }
    // ... other strategy functions
}
```

By default, subscribers are always sent the lowest quality layer first; delivery then ramps up to the highest quality layer. This optimizes end-user bandwidth consumption and time to video, reducing initial video freezes for users on weaker networks.

These options are available for `InitialLayerPreference`:
+ `LOWEST_QUALITY` — The server delivers the lowest quality layer of video first. This optimizes bandwidth consumption, as well as time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 720p video is lower quality than 1080p video.
+ `HIGHEST_QUALITY` — The server delivers the highest quality layer of video first. This optimizes quality but may increase the time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 1080p video is higher quality than 720p video.

**Note:** For initial layer preferences (the `initialLayerPreference` call) to take effect, a re-subscribe is necessary as these updates do not apply to the active subscription.



### Option 2: Preferred Layer for Stream
<a name="web-layered-encoding-simulcast-preferred-layer"></a>

Once a stream has started, you can use the `preferredLayerForStream` strategy method, which exposes the participant and stream information.

The strategy method can return any of the following:
+ The layer object directly, based on what `RemoteStageStream.getLayers` returns
+ The layer object's label string, based on `StageStreamLayer.label`
+ `undefined` or `null`, which indicates that no specific layer should be selected and dynamic adaptation is preferred

For example, the following strategy always selects the lowest quality layer of video available:

```
const strategy = {
    preferredLayerForStream: (participant, stream) => {
        return stream.getLowestQualityLayer();
    }
    // ... other strategy functions
}
```

To reset the layer selection and return to dynamic adaptation, return `null` or `undefined` from the strategy. In the following example, `appState` is a placeholder variable representing possible application state.

```
const strategy = {
    preferredLayerForStream: (participant, stream) => {
        if (appState.isAutoMode) {
            return null;
        } else {
            return appState.layerChoice
        }
    }
    // ... other strategy functions
}
```

### Option 3: RemoteStageStream Layer Helpers
<a name="web-layered-encoding-simulcast-remotestagestream-helpers"></a>

`RemoteStageStream` has several helpers which can be used to make decisions about layer selection and display the corresponding selections to end users:
+ **Layer Events** — Alongside `StageEvents`, the `RemoteStageStream` object itself emits events that communicate layer and simulcast adaptation changes:
  + `stream.on(RemoteStageStreamEvents.ADAPTION_CHANGED, (isAdapting) => {})`
  + `stream.on(RemoteStageStreamEvents.LAYERS_CHANGED, (layers) => {})`
  + `stream.on(RemoteStageStreamEvents.LAYER_SELECTED, (layer, reason) => {})`
+ **Layer Methods** — `RemoteStageStream` has several helper methods for getting information about the stream and the layers being presented. These methods are available on the remote stream provided in the `preferredLayerForStream` strategy, as well as on remote streams exposed via `StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED`.
  + `stream.getLayers`
  + `stream.getSelectedLayer`
  + `stream.getLowestQualityLayer`
  + `stream.getHighestQualityLayer`

For details, see the `RemoteStageStream` class in the [SDK reference documentation](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference). For the `LAYER_SELECTED` reason, if `UNAVAILABLE` is returned, this indicates that the requested layer could not be selected. A best-effort selection is made in its place, which typically is a lower quality layer to maintain stream stability.
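For example, the layer objects returned by `stream.getLayers` can be ranked to drive a quality-selector UI. A sketch, assuming each layer exposes the `width`, `height`, and `maxBitrateKbps` properties described in the custom-layer configuration earlier:

```javascript
// Sketch (assumption: layer objects expose width/height/maxBitrateKbps as in
// the custom-layer configuration above). Sorts layers from highest to lowest
// quality for display in a selector UI, without mutating the input array.
function rankLayers(layers) {
    return [...layers].sort(
        (a, b) =>
            b.width * b.height - a.width * a.height ||
            b.maxBitrateKbps - a.maxBitrateKbps
    );
}
```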

## Handling Network Issues
<a name="web-publish-subscribe-network-issues"></a>

When the local device’s network connection is lost, the SDK internally tries to reconnect without any user action. In some cases, the SDK is not successful and user action is needed.

Broadly the state of the stage can be handled via the `STAGE_CONNECTION_STATE_CHANGED` event:

```
stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
   switch (state) {
      case StageConnectionState.DISCONNECTED:
         // handle disconnected UI
         return;
      case StageConnectionState.CONNECTING:
         // handle establishing connection UI
         return;
      case StageConnectionState.CONNECTED:
         // SDK is connected to the Stage
         return;
      case StageConnectionState.ERRORED:
         // SDK encountered an error and lost its connection to the stage. Wait for CONNECTED.
         return;
    }
})
```

In general, you can ignore an `ERRORED` state encountered after successfully joining a stage, as the SDK tries to recover internally. If the SDK reports an `ERRORED` state and the stage remains in the `CONNECTING` state for an extended period (e.g., 30 seconds or longer), the device probably has lost its network connection.
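One way to surface this to users is a watchdog that flags a likely disconnect when the stage has been stuck connecting past a threshold. A sketch (the 30-second threshold mirrors the guidance above; here the state is compared as a plain string, whereas an application would compare against `StageConnectionState.CONNECTING`):

```javascript
// Sketch: flag a likely network disconnect when the stage has been stuck in
// the connecting state past a threshold.
const STUCK_THRESHOLD_MS = 30000;

let connectingSince = null;

function onConnectionStateChanged(state, now = Date.now()) {
    // Record when we entered the connecting state; clear on any other state.
    connectingSince = state === 'connecting' ? now : null;
}

function isLikelyDisconnected(now = Date.now()) {
    return connectingSince !== null && now - connectingSince >= STUCK_THRESHOLD_MS;
}
```

The application would call `onConnectionStateChanged` from its `STAGE_CONNECTION_STATE_CHANGED` handler and poll `isLikelyDisconnected` to decide when to show a "check your network" message.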

## Broadcast the Stage to an IVS Channel
<a name="web-publish-subscribe-broadcast-stage"></a>

To broadcast a stage, create a separate `IVSBroadcastClient` session and then follow the usual instructions for broadcasting with the SDK, described above. The list of `StageStream` exposed via `STAGE_PARTICIPANT_STREAMS_ADDED` can be used to retrieve the participant media streams which can be applied to the broadcast stream composition, as follows:

```
// Setup client with preferred settings
const broadcastClient = getIvsBroadcastClient();

stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
    streams.forEach(stream => {
        const inputStream = new MediaStream([stream.mediaStreamTrack]);
        switch (stream.streamType) {
            case StreamType.VIDEO:
                broadcastClient.addVideoInputDevice(inputStream, `video-${participant.id}`, {
                    index: DESIRED_LAYER,
                    width: MAX_WIDTH,
                    height: MAX_HEIGHT
                });
                break;
            case StreamType.AUDIO:
                broadcastClient.addAudioInputDevice(inputStream, `audio-${participant.id}`);
                break;
        }
    })
})
```

Optionally, you can composite a stage and broadcast it to an IVS low-latency channel, to reach a larger audience. See [Enabling Multiple Hosts on an Amazon IVS Stream](https://docs.aws.amazon.com//ivs/latest/LowLatencyUserGuide/multiple-hosts.html) in the IVS Low-Latency Streaming User Guide.

# Known Issues & Workarounds in the IVS Web Broadcast SDK | Real-Time Streaming
<a name="broadcast-web-known-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS real-time streaming Web broadcast SDK and suggests potential workarounds.
+ When closing browser tabs or exiting browsers without calling `stage.leave()`, users can still appear in the session with a frozen frame or black screen for up to 10 seconds.

  **Workaround:** None.
+ Safari sessions intermittently appear with a black screen to users joining after a session has begun.

  **Workaround:** Refresh the browser and reconnect the session.
+ Safari does not recover gracefully from switching networks.

  **Workaround:** Refresh the browser and reconnect the session.
+ The developer console repeats an `Error: UnintentionalError at StageSocket.onClose` error.

  **Workaround:** Only one stage can be created per participant token. This error occurs when more than one `Stage` instance is created with the same participant token, regardless of whether the instance is on one device or multiple devices.
+ You may have trouble maintaining a `StageParticipantPublishState.PUBLISHED` state and may receive repeated `StageParticipantPublishState.ATTEMPTING_PUBLISH` states when listening to the `StageEvents.STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED` event.

  **Workaround:** Constrain video resolution to 720p when invoking `getUserMedia` or `getDisplayMedia`. Specifically, your `getUserMedia` and `getDisplayMedia` constraint values for width and height must not exceed 921600 (1280x720) when multiplied together.
+ When `stage.leave()` is invoked or a remote participant leaves, a 404 DELETE error appears in the browser's debug console.

  **Workaround:** None. This is a harmless error.
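One of the workarounds above constrains capture to a 921,600-pixel (1280x720) budget. A small hypothetical helper (not part of the SDK) can verify constraints before calling `getUserMedia`:

```javascript
// Sketch: check that width x height in the requested capture constraints
// stays within the 921,600-pixel (1280x720) budget described above.
const MAX_PIXELS = 1280 * 720;

function withinPixelBudget(videoConstraints) {
    return videoConstraints.width.ideal * videoConstraints.height.ideal <= MAX_PIXELS;
}
```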

## Safari Limitations
<a name="broadcast-web-safari-limitations"></a>
+ Denying a permissions prompt requires resetting the permission in Safari website settings at the OS level.
+ Safari does not natively detect all devices as effectively as Firefox or Chrome does. For example, OBS Virtual Camera is not detected.

## Firefox Limitations
<a name="broadcast-web-firefox-limitations"></a>
+ System permissions need to be enabled for Firefox to screen share. After enabling them, the user must restart Firefox for it to work correctly; otherwise, if permissions are perceived as blocked, the browser will throw a [NotFoundError](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia#exceptions) exception.
+ The `getCapabilities` method is missing. This means users cannot get the media track's resolution or aspect ratio. See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1179084).
+ Several `AudioContext` properties are missing; e.g., latency and channel count. This could pose a problem for advanced users who want to manipulate the audio tracks.
+ Camera feeds from `getUserMedia` are restricted to a 4:3 aspect ratio on macOS. See [bugzilla thread 1](https://bugzilla.mozilla.org/show_bug.cgi?id=1193640) and [bugzilla thread 2](https://bugzilla.mozilla.org/show_bug.cgi?id=1306034).
+ Audio capture is not supported with `getDisplayMedia`. See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1541425).
+ Framerate in screen capture is suboptimal (approximately 15 fps). See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1703522).

## Mobile Web Limitations
<a name="broadcast-web-mobile-web-limitations"></a>
+ [getDisplayMedia](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia#browser_compatibility) screen sharing is unsupported on mobile devices.

  **Workaround**: None.
+ Participant takes 15-30 seconds to leave when closing a browser without calling `leave()`.

  **Workaround**: Add a UI that encourages users to properly disconnect.
+ Backgrounding app causes publishing video to stop.

  **Workaround**: Display a UI slate when the publisher is paused.
+ Video framerate drops for approximately 5 seconds after unmuting a camera on Android devices.

  **Workaround**: None.
+ The video feed is stretched on rotation for iOS 16.0.

  **Workaround**: Display a UI outlining this known OS issue.
+ Switching the audio-input device automatically switches the audio-output device.

  **Workaround**: None.
+ Backgrounding the browser causes the publishing stream to go black and produce only audio.

  **Workaround**: None. This is for security reasons.

# Error Handling in the IVS Web Broadcast SDK | Real-Time Streaming
<a name="broadcast-web-error-handling"></a>

This section is an overview of error conditions, how the Web broadcast SDK reports them to the application, and what an application should do when those errors are encountered. Errors are reported by the SDK to listeners of the `StageEvents.ERROR` event:

```
stage.on(StageEvents.ERROR, (error: StageError) => {
    // log or handle errors here
    console.log(`${error.code}, ${error.category}, ${error.message}`);
});
```

## Stage Errors
<a name="web-error-handling-stage-errors"></a>

A `StageError` is reported when the SDK encounters a problem from which it cannot recover; generally, app intervention and/or network reconnection is required to recover.

Each reported `StageError` has a code (a `StageErrorCode`), a message (string), and a category (`StageErrorCategory`) that identifies the underlying operation.

The operation category of the error is determined based on whether it is related to the connection to the stage (`JOIN_ERROR`), sending media to the stage (`PUBLISH_ERROR`), or receiving an incoming media stream from the stage (`SUBSCRIBE_ERROR`).
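A hypothetical dispatcher over these categories might map them to user-facing messages. The category strings below stand in for the SDK's `StageErrorCategory` values:

```javascript
// Hypothetical category dispatch (the strings here stand in for the SDK's
// StageErrorCategory values; this helper is not part of the SDK).
function describeCategory(category) {
    switch (category) {
        case 'JOIN_ERROR':      return 'Problem connecting to the stage';
        case 'PUBLISH_ERROR':   return 'Problem sending media to the stage';
        case 'SUBSCRIBE_ERROR': return 'Problem receiving media from the stage';
        default:                return 'Unknown stage error';
    }
}
```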

The code property of a `StageError` reports the specific problem:


| Name | Code | Recommended Action | 
| --- | --- | --- | 
| TOKEN\_MALFORMED | 1 | Create a valid token and retry instantiating the stage. | 
| TOKEN\_EXPIRED | 2 | Create an unexpired token and retry instantiating the stage. | 
| TIMEOUT | 3 | The operation timed out. If the stage exists and the token is valid, this failure likely is a network issue. In that case, wait for the device’s connectivity to recover. | 
| FAILED | 4 | A fatal condition was encountered when attempting an operation. Check error details. If the stage exists and the token is valid, this failure likely is a network issue. In that case, wait for the device’s connectivity to recover. For most failures related to network stability, the SDK will retry internally for a period of up to 30 seconds before emitting a FAILED error.  | 
| CANCELED | 5 | Check application code and ensure there are no repeated `join`, `refreshStrategy`, or `replaceStrategy` invocations, which may cause repeated operations to be started and canceled before completion. | 
| STAGE\_AT\_CAPACITY | 6 | This error indicates that the stage or your account is at capacity. If the stage has reached its participant limit, try the operation again when the stage is no longer at capacity, by refreshing the strategy. If your account has reached its concurrent subscriptions or concurrent publishers quota, reduce usage or request a quota increase through the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/).  | 
| CODEC\_MISMATCH | 7 | The codec is not supported by the stage. Check the browser and platform for codec support. For IVS real-time streaming, browsers must support the H.264 codec for video and the Opus codec for audio. | 
| TOKEN\_NOT\_ALLOWED | 8 | The token does not have permission for the operation. Recreate the token with the correct permission(s) and try again. | 
| STAGE\_DELETED | 9 | None; attempting to join a deleted stage triggers this error. | 
| PARTICIPANT\_DISCONNECTED | 10 | None; attempting to join with a token of a disconnected participant triggers this error. | 

### Handling StageError Example
<a name="web-error-handling-stage-errors-example"></a>

Use the StageError code to determine if the error is due to an expired token:

```
stage.on(StageEvents.ERROR, (error: StageError) => {
    if (error.code === StageError.TOKEN_EXPIRED) {
        // recreate the token and stage instance and re-join
    }
});
```

### Network Errors when Already Joined
<a name="web-error-handling-stage-errors-network"></a>

If the device’s network connection goes down, the SDK may lose its connection to stage servers. You may see errors in the console because the SDK can no longer reach backend services. POSTs to https://broadcast.stats.live-video.net will fail.

If you are publishing and/or subscribing, you will see errors in the console related to attempts to publish/subscribe.

Internally the SDK will try to reconnect with an exponential backoff strategy.

**Action**: Wait for the device’s connectivity to recover.
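For illustration, an exponential backoff schedule similar in spirit to what the SDK does internally can be sketched as follows; the base delay and cap are illustrative assumptions, not the SDK's actual values:

```javascript
// Illustrative exponential backoff (base delay and cap are assumptions; the
// SDK's internal reconnect values are not documented).
function backoffDelayMs(attempt, baseMs = 500, capMs = 15000) {
    // Double the delay with each attempt, capped at capMs.
    return Math.min(capMs, baseMs * 2 ** attempt);
}
```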

## Errored States
<a name="web-error-handling-errored-states"></a>

We recommend you use these states for application logging and to display messaging to users that alerts them of connectivity issues to the stage for a particular participant.

### Publish
<a name="errored-states-publish"></a>

The SDK reports `ERRORED` when a publish fails.

```
stage.on(StageEvents.STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED, (participantInfo, state) => {
  if (state === StageParticipantPublishState.ERRORED) {
      // Log and/or display message to user
  }
});
```

### Subscribe
<a name="errored-states-subscribe"></a>

The SDK reports `ERRORED` when a subscribe fails. This can occur due to network conditions or if a stage is at capacity for subscribers.

```
stage.on(StageEvents.STAGE_PARTICIPANT_SUBSCRIBE_STATE_CHANGED, (participantInfo, state) => {
  if (state === StageParticipantSubscribeState.ERRORED) {
    // Log and/or display message to user
  }
});
```

# IVS Broadcast SDK: Android Guide | Real-Time Streaming
<a name="broadcast-android"></a>

The IVS real-time streaming Android broadcast SDK enables participants to send and receive video on Android.

The `com.amazonaws.ivs.broadcast` package implements the interface described in this document. The SDK supports the following operations:
+ Join a stage 
+ Publish media to other participants in the stage
+ Subscribe to media from other participants in the stage
+ Manage and monitor video and audio published to the stage
+ Get WebRTC statistics for each peer connection
+ All operations from the IVS low-latency streaming Android broadcast SDK

**Latest version of Android broadcast SDK:** 1.41.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/release-notes.html#apr09-26-broadcast-mobile-rt)) 

**Reference documentation:** For information on the most important methods available in the Amazon IVS Android broadcast SDK, see the reference documentation at [https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/android/](https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/android/).

**Sample code:** See the Android sample repository on GitHub: [https://github.com/aws-samples/amazon-ivs-real-time-streaming-android-samples](https://github.com/aws-samples/amazon-ivs-real-time-streaming-android-samples).

**Platform requirements:** Android 9.0+

# Getting Started with the IVS Android Broadcast SDK | Real-Time Streaming
<a name="broadcast-android-getting-started"></a>

This document takes you through the steps involved in getting started with the IVS real-time streaming Android broadcast SDK.

## Install the Library
<a name="broadcast-android-install"></a>

There are several ways to add the Amazon IVS Android broadcast library to your Android development environment: use Gradle directly, use Gradle version catalogs, or install the SDK manually.

**Use Gradle directly**: Add the library to your module’s `build.gradle` file, as shown here (for the latest version of the IVS broadcast SDK):

```
repositories {
    mavenCentral()
}
 
dependencies {
     implementation 'com.amazonaws:ivs-broadcast:1.41.0:stages@aar'
}
```

**Use Gradle version catalogs**: First include this in your module’s `build.gradle` file:

```
implementation(libs.ivs){
   artifact {
      classifier = "stages"
      type = "aar"
   }
}
```

Then include the following in the `libs.versions.toml` file (for the latest version of the IVS broadcast SDK):

```
[versions]
ivs="1.41.0"

[libraries]
ivs = {module = "com.amazonaws:ivs-broadcast", version.ref = "ivs"}
```

**Install the SDK manually**: Download the latest version from this location:

[https://search.maven.org/artifact/com.amazonaws/ivs-broadcast](https://search.maven.org/artifact/com.amazonaws/ivs-broadcast)

Be sure to download the `aar` with `-stages` appended.

**Also allow SDK control over the speakerphone**: Regardless of which installation method you choose, also add the following permission to your manifest, to allow the SDK to enable and disable the speakerphone:

```
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
```

## Using the SDK with Debug Symbols
<a name="broadcast-android-using-debug-symbols-rt"></a>

We also publish a version of the Android broadcast SDK which includes debug symbols. You can use this version to improve the quality of debug reports (stack traces) in Firebase Crashlytics, if you run into crashes in the IVS broadcast SDK; i.e., `libbroadcastcore.so`. When you report these crashes to the IVS SDK team, the higher quality stack traces make it easier to fix the issues.

To use this version of the SDK, put the following in your Gradle build files:

```
implementation "com.amazonaws:ivs-broadcast:$version:stages-unstripped@aar"
```

Use the above line instead of this:

```
implementation "com.amazonaws:ivs-broadcast:$version:stages@aar"
```

### Uploading Symbols to Firebase Crashlytics
<a name="android-debug-symbols-rt-firebase-crashlytics"></a>

Ensure that your Gradle build files are set up for Firebase Crashlytics. Follow Google’s instructions here:

[https://firebase.google.com/docs/crashlytics/ndk-reports](https://firebase.google.com/docs/crashlytics/ndk-reports)

Be sure to include `com.google.firebase:firebase-crashlytics-ndk` as a dependency.

When building your app for release, the Firebase Crashlytics plugin should upload symbols automatically. To upload symbols manually, run either of the following:

```
gradle uploadCrashlyticsSymbolFileRelease
```

```
./gradlew uploadCrashlyticsSymbolFileRelease
```

(It will not hurt if symbols are uploaded twice, both automatically and manually.)

### Preventing your Release .apk from Becoming Larger
<a name="android-debug-symbols-rt-sizing-apk"></a>

Before packaging the release `.apk` file, the Android Gradle Plugin automatically tries to strip debug information from shared libraries (including the IVS broadcast SDK's `libbroadcastcore.so` library). However, sometimes this does not happen. As a result, your `.apk` file could become larger and you could get a warning message from the Android Gradle Plugin that it’s unable to strip debug symbols and is packaging `.so` files as is. If this happens, do the following:
+ Install an Android NDK. Any recent version will work.
+ Add `ndkVersion <your_installed_ndk_version_number>` to your application’s `build.gradle` file. Do this even if your application itself does not contain native code.

For more information, see this [issue report](https://issuetracker.google.com/issues/353554169).

## Request Permissions
<a name="broadcast-android-permissions"></a>

Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)

Here, we check whether the user has already granted permissions and, if not, ask for them:

```
final String[] requiredPermissions =
         { Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO };

for (String permission : requiredPermissions) {
    if (ContextCompat.checkSelfPermission(this, permission) 
                != PackageManager.PERMISSION_GRANTED) {
        // If any permissions are missing we want to just request them all.
        ActivityCompat.requestPermissions(this, requiredPermissions, 0x100);
        break;
    }
}
```

Here, we get the user’s response:

```
@Override
public void onRequestPermissionsResult(int requestCode, 
                                      @NonNull String[] permissions,
                                      @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode,
               permissions, grantResults);
    if (requestCode == 0x100) {
        for (int result : grantResults) {
            if (result == PackageManager.PERMISSION_DENIED) {
                return;
            }
        }
        setupBroadcastSession();
    }
}
```

# Publishing & Subscribing with the IVS Android Broadcast SDK | Real-Time Streaming
<a name="android-publish-subscribe"></a>

This document takes you through the steps involved in publishing and subscribing to a stage using the IVS real-time streaming Android broadcast SDK.

## Concepts
<a name="android-publish-subscribe-concepts"></a>

Three core concepts underlie real-time functionality: [stage](#android-publish-subscribe-concepts-stage), [strategy](#android-publish-subscribe-concepts-strategy), and [renderer](#android-publish-subscribe-concepts-renderer). The design goal is minimizing the amount of client-side logic necessary to build a working product.

### Stage
<a name="android-publish-subscribe-concepts-stage"></a>

The `Stage` class is the main point of interaction between the host application and the SDK. It represents the stage itself and is used to join and leave the stage. Creating and joining a stage requires a valid, unexpired token string from the control plane (represented as `token`). Joining and leaving a stage are simple. 

```
Stage stage = new Stage(context, token, strategy);

try {
	stage.join();
} catch (BroadcastException exception) {
	// handle join exception
}

stage.leave();
```

The `Stage` class is also where the `StageRenderer` can be attached:

```
stage.addRenderer(renderer); // multiple renderers can be added
```

### Strategy
<a name="android-publish-subscribe-concepts-strategy"></a>

The `Stage.Strategy` interface provides a way for the host application to communicate the desired state of the stage to the SDK. Three functions need to be implemented: `shouldSubscribeToParticipant`, `shouldPublishFromParticipant`, and `stageStreamsToPublishForParticipant`. All are discussed below.

#### Subscribing to Participants
<a name="android-publish-subscribe-concepts-strategy-participants"></a>

```
Stage.SubscribeType shouldSubscribeToParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);
```

When a remote participant joins the stage, the SDK queries the host application about the desired subscription state for that participant. The options are `NONE`, `AUDIO_ONLY`, and `AUDIO_VIDEO`. When returning a value for this function, the host application does not need to worry about the publish state, current subscription state, or stage connection state. If `AUDIO_VIDEO` is returned, the SDK waits until the remote participant is publishing before subscribing, and it updates the host application through the renderer throughout the process.

Here is a sample implementation:

```
@Override
Stage.SubscribeType shouldSubscribeToParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return Stage.SubscribeType.AUDIO_VIDEO;
}
```

This is the complete implementation of this function for a host application that always wants all participants to see each other; e.g., a video chat application.

More advanced implementations also are possible. Use the `userInfo` property on `ParticipantInfo` to selectively subscribe to participants based on server-provided attributes:

```
@Override
Stage.SubscribeType shouldSubscribeToParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	switch (participantInfo.userInfo.get("role")) {
		case "moderator":
			return Stage.SubscribeType.NONE;
		case "guest":
			return Stage.SubscribeType.AUDIO_VIDEO;
		default:
			return Stage.SubscribeType.NONE;
	}
}
```

This can be used to create a stage where moderators can monitor all guests without being seen or heard themselves. The host application could use additional business logic to let moderators see each other but remain invisible to guests.

#### Configuration for Subscribing to Participants
<a name="android-publish-subscribe-concepts-strategy-participants-config"></a>

```
SubscribeConfiguration subscribeConfigurationForParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);
```

If a remote participant is being subscribed to (see [Subscribing to Participants](#android-publish-subscribe-concepts-strategy-participants)), the SDK queries the host application about a custom subscribe configuration for that participant. This configuration is optional and allows the host application to control certain aspects of subscriber behavior. For information on what can be configured, see [SubscribeConfiguration](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/SubscribeConfiguration) in the SDK reference documentation.

Here is a sample implementation:

```
@Override
public SubscribeConfiguration subscribeConfigurationForParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
    SubscribeConfiguration config = new SubscribeConfiguration();

    config.jitterBuffer.setMinDelay(JitterBufferConfiguration.JitterBufferDelay.MEDIUM());

    return config;
}
```

This implementation updates the jitter-buffer minimum delay for all subscribed participants to a preset of `MEDIUM`.

As with `shouldSubscribeToParticipant`, more advanced implementations are possible. The given `ParticipantInfo` can be used to selectively update the subscribe configuration for specific participants.

We recommend using the default behaviors. Specify custom configuration only if there is a particular behavior you want to change.

#### Publishing
<a name="android-publish-subscribe-concepts-strategy-publishing"></a>

```
boolean shouldPublishFromParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);
```

Once connected to the stage, the SDK queries the host application to see if a particular participant should publish. This is invoked only on local participants that have permission to publish based on the provided token.

Here is a sample implementation:

```
@Override
boolean shouldPublishFromParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return true;
}
```

This is for a standard video chat application where users always want to publish. They can mute and unmute their audio and video, to instantly be hidden or seen/heard. (They also can use publish/unpublish, but that is much slower. Mute/unmute is preferable for use cases where changing visibility often is desirable.)

#### Choosing Streams to Publish
<a name="android-publish-subscribe-concepts-strategy-streams"></a>

```
List<LocalStageStream> stageStreamsToPublishForParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);
```

When publishing, this is used to determine what audio and video streams should be published. This is covered in more detail later in [Publish a Media Stream](#android-publish-subscribe-publish-stream).

#### Updating the Strategy
<a name="android-publish-subscribe-concepts-strategy-updates"></a>

The strategy is intended to be dynamic: the values returned from any of the above functions can be changed at any time. For example, if the host application does not want to publish until the end user taps a button, you could return a variable from `shouldPublishFromParticipant` (something like `hasUserTappedPublishButton`). When that variable changes based on an interaction by the end user, call `stage.refreshStrategy()` to signal to the SDK that it should query the strategy for the latest values, applying only things that have changed. If the SDK observes that the `shouldPublishFromParticipant` value has changed, it will start the publish process. If the SDK queries and all functions return the same value as before, the `refreshStrategy` call will not perform any modifications to the stage.

If the return value of `shouldSubscribeToParticipant` changes from `AUDIO_VIDEO` to `AUDIO_ONLY`, the video stream is removed for every participant whose returned value changed (if a video stream existed previously).

Generally, the stage uses the strategy to most efficiently apply the difference between the previous and current strategies, without the host application needing to worry about all the state required to manage it properly. Because of this, think of calling `stage.refreshStrategy()` as a cheap operation, because it does nothing unless the strategy changes.
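
As a sketch, the pattern described above might look like this (the `hasUserTappedPublishButton` flag and the button wiring are illustrative, not part of the SDK):

```
// Stage.Strategy implementation backed by mutable state
boolean hasUserTappedPublishButton = false;

@Override
boolean shouldPublishFromParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return hasUserTappedPublishButton;
}

// When the end user taps the publish button, update the state and ask the
// SDK to re-query the strategy. Only values that changed are applied.
publishButton.setOnClickListener(v -> {
	hasUserTappedPublishButton = true;
	stage.refreshStrategy();
});
```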

### Renderer
<a name="android-publish-subscribe-concepts-renderer"></a>

The `StageRenderer` interface communicates the state of the stage to the host application. Updates to the host application’s UI usually can be powered entirely by the events provided by the renderer. The renderer provides the following functions:

```
void onParticipantJoined(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);

void onParticipantLeft(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo);

void onParticipantPublishStateChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull Stage.PublishState publishState);

void onParticipantSubscribeStateChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull Stage.SubscribeState subscribeState);

void onStreamsAdded(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull List<StageStream> streams);

void onStreamsRemoved(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull List<StageStream> streams);

void onStreamsMutedChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull List<StageStream> streams);

void onError(@NonNull BroadcastException exception);

void onConnectionStateChanged(@NonNull Stage stage, @NonNull Stage.ConnectionState state, @Nullable BroadcastException exception);
                
void onStreamAdaptionChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull RemoteStageStream stream, boolean adaption);

void onStreamLayersChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull RemoteStageStream stream, @NonNull List<RemoteStageStream.Layer> layers);

void onStreamLayerSelected(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull RemoteStageStream stream, @Nullable RemoteStageStream.Layer layer, @NonNull RemoteStageStream.LayerSelectedReason reason);
```

For most of these methods, the corresponding `Stage` and `ParticipantInfo` are provided.

It is not expected that the information provided by the renderer impacts the return values of the strategy. For example, the return value of `shouldSubscribeToParticipant` is not expected to change when `onParticipantPublishStateChanged` is called. If the host application wants to subscribe to a particular participant, it should return the desired subscription type regardless of that participant’s publish state. The SDK is responsible for ensuring that the desired state of the strategy is acted on at the correct time based on the state of the stage.

The `StageRenderer` can be attached to the stage class:

```
stage.addRenderer(renderer); // multiple renderers can be added
```

Note that only publishing participants trigger `onParticipantJoined`, and whenever a participant stops publishing or leaves the stage session, `onParticipantLeft` is triggered.

## Publish a Media Stream
<a name="android-publish-subscribe-publish-stream"></a>

Local devices such as built-in microphones and cameras are discovered via `DeviceDiscovery`. Here is an example of selecting the front-facing camera and default microphone, then returning them as `LocalStageStream` objects to be published by the SDK:

```
DeviceDiscovery deviceDiscovery = new DeviceDiscovery(context);

List<Device> devices = deviceDiscovery.listLocalDevices();
List<LocalStageStream> publishStreams = new ArrayList<LocalStageStream>();

Device frontCamera = null;
Device microphone = null;

// Create streams using the front camera, first microphone
for (Device device : devices) {
	Device.Descriptor descriptor = device.getDescriptor();
	if (frontCamera == null && descriptor.type == Device.Descriptor.DeviceType.CAMERA && descriptor.position == Device.Descriptor.Position.FRONT) {
		frontCamera = device;
	}
	if (microphone == null && descriptor.type == Device.Descriptor.DeviceType.MICROPHONE) {
		microphone = device;
	}
}

ImageLocalStageStream cameraStream = new ImageLocalStageStream(frontCamera);
AudioLocalStageStream microphoneStream = new AudioLocalStageStream(microphone);

publishStreams.add(cameraStream);
publishStreams.add(microphoneStream);

// Provide the streams in Stage.Strategy
@Override
@NonNull List<LocalStageStream> stageStreamsToPublishForParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return publishStreams;
}
```

## Display and Remove Participants
<a name="android-publish-subscribe-participants"></a>

After subscribing is completed, you will receive a list of `StageStream` objects through the renderer’s `onStreamsAdded` function. You can retrieve the preview from an `ImageStageStream`:

```
ImagePreviewView preview = ((ImageStageStream)stream).getPreview();

// Add the view to your view hierarchy
LinearLayout previewHolder = findViewById(R.id.previewHolder);
preview.setLayoutParams(new LinearLayout.LayoutParams(
		LinearLayout.LayoutParams.MATCH_PARENT,
		LinearLayout.LayoutParams.MATCH_PARENT));
previewHolder.addView(preview);
```

You can retrieve the audio-level stats from an `AudioStageStream`:

```
((AudioStageStream)stream).setStatsCallback((peak, rms) -> {
	// handle statistics
});
```

When a participant stops publishing or is unsubscribed from, the `onStreamsRemoved` function is called with the streams that were removed. Host applications should use this as a signal to remove the participant’s video stream from the view hierarchy.

`onStreamsRemoved` is invoked for all scenarios in which a stream might be removed, including: 
+ The remote participant stops publishing.
+ The local participant unsubscribes from the remote participant or changes the subscription from `AUDIO_VIDEO` to `AUDIO_ONLY`.
+ The remote participant leaves the stage.
+ The local participant leaves the stage.

Because `onStreamsRemoved` is invoked for all scenarios, no custom business logic is required around removing participants from the UI during remote or local leave operations.
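
For example, if the host application keeps each participant’s preview in the view hierarchy, `onStreamsRemoved` is a natural place to take it down. This sketch assumes the application stored each `ImagePreviewView` in a map keyed by participant ID when it was added in `onStreamsAdded` (the map and layout names are illustrative):

```
private final Map<String, ImagePreviewView> previews = new HashMap<>();

@Override
void onStreamsRemoved(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull List<StageStream> streams) {
	// Remove the preview view that was stored for this participant
	ImagePreviewView preview = previews.remove(participantInfo.participantId);
	if (preview != null) {
		previewHolder.removeView(preview);
	}
}
```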

## Mute and Unmute Media Streams
<a name="android-publish-subscribe-mute-streams"></a>

`LocalStageStream` objects have a `setMuted` function that controls whether the stream is muted. This function can be called on the stream before or after it is returned from the `streamsToPublishForParticipant` strategy function.

**Important**: If a new `LocalStageStream` object instance is returned by `streamsToPublishForParticipant` after a call to `refreshStrategy`, the mute state of the new stream object is applied to the stage. Be careful when creating new `LocalStageStream` instances to make sure the expected mute state is maintained.
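
One way to avoid surprises is to keep long-lived stream instances and toggle mute on them in place, rather than recreating them on every strategy refresh. A sketch (assuming `getMuted` is available on the local stream, as it is on `StageStream`):

```
// Toggle mute on the existing instance; its state carries over on refresh.
microphoneStream.setMuted(true);
stage.refreshStrategy();

// If you must create a new instance, re-apply the expected mute state first.
AudioLocalStageStream newMicrophoneStream = new AudioLocalStageStream(microphone);
newMicrophoneStream.setMuted(microphoneStream.getMuted());
```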

## Monitor Remote Participant Media Mute State
<a name="android-publish-subscribe-mute-state"></a>

When a participant changes the mute state of their video or audio stream, the renderer’s `onStreamsMutedChanged` function is invoked with a list of streams that have changed. Use the `getMuted` method on `StageStream` to update your UI accordingly.

```
@Override
void onStreamsMutedChanged(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull List<StageStream> streams) {
	for (StageStream stream : streams) {
		boolean muted = stream.getMuted();
		// handle UI changes
	}
}
```

## Get WebRTC Statistics
<a name="android-publish-subscribe-webrtc-stats"></a>

To get the latest WebRTC statistics for a publishing stream or a subscribing stream, use `requestRTCStats` on `StageStream`. When collection is complete, you will receive statistics through the `StageStream.Listener`, which can be set on `StageStream`.

```
stream.requestRTCStats();

@Override
void onRTCStats(Map<String, Map<String, String>> statsMap) {
	for (Map.Entry<String, Map<String, String>> stat : statsMap.entrySet()) {
		for (Map.Entry<String, String> member : stat.getValue().entrySet()) {
			Log.i(TAG, stat.getKey() + " has member " + member.getKey() + " with value " + member.getValue());
		}
	}
}
```

## Get Participant Attributes
<a name="android-publish-subscribe-participant-attributes"></a>

If you specify attributes in the `CreateParticipantToken` operation request, you can see the attributes in `ParticipantInfo` properties:

```
@Override
void onParticipantJoined(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	for (Map.Entry<String, String> entry : participantInfo.userInfo.entrySet()) {
		Log.i(TAG, "attribute: " + entry.getKey() + " = " + entry.getValue());
	}
}
```

## Embed Messages
<a name="android-publish-subscribe-embed-messages"></a>

The `embedMessage` method on `ImageDevice` allows you to insert metadata payloads directly into video frames during publishing. This enables frame-synchronized messaging for real-time applications. Message embedding is available only when using the SDK for real-time publishing (not low-latency publishing).

Embedded messages are not guaranteed to arrive to subscribers because they are embedded directly within video frames and transmitted over UDP, which does not guarantee packet delivery. Packet loss during transmission can result in lost messages, especially in poor network conditions. To mitigate this, the `embedMessage` method includes a `repeatCount` parameter that duplicates the message across multiple consecutive frames, increasing delivery reliability. This capability is available only for video streams.

### Using embedMessage
<a name="android-embed-messages-using-embedmessage"></a>

Publishing clients can embed message payloads into their video stream using the `embedMessage` method on `ImageDevice`. The payload size must be greater than 0 bytes and less than 1 KB, and the total size of messages embedded must not exceed 10 KB per second. 

```
val surfaceSource: SurfaceSource = imageStream.device as SurfaceSource
val message = "hello world"
val messageBytes = message.toByteArray(StandardCharsets.UTF_8)

try {
    surfaceSource.embedMessage(messageBytes, 0)
} catch (e: BroadcastException) {
    Log.e("EmbedMessage", "Failed to embed message: ${e.message}")
}
```
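
Since out-of-range payloads cause `embedMessage` to throw, you may want to validate sizes client-side before calling it. A minimal, SDK-independent sketch of the limit described above (the helper name is ours, not part of the SDK):

```java
import java.nio.charset.StandardCharsets;

public class EmbedMessagePayloads {
    // The SDK requires payloads larger than 0 bytes and smaller than 1 KB.
    static final int MAX_PAYLOAD_BYTES = 1024;

    public static boolean isValidPayload(byte[] payload) {
        return payload != null
                && payload.length > 0
                && payload.length < MAX_PAYLOAD_BYTES;
    }

    public static void main(String[] args) {
        byte[] ok = "hello world".getBytes(StandardCharsets.UTF_8);
        System.out.println(isValidPayload(ok));             // true
        System.out.println(isValidPayload(new byte[0]));    // false
        System.out.println(isValidPayload(new byte[2048])); // false
    }
}
```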

### Repeating Message Payloads
<a name="android-embed-messages-repeat-payloads"></a>

Use `repeatCount` to duplicate the message across multiple frames for improved reliability. This value must be between 0 and 30. Receiving clients must have logic to de-duplicate the message.

```
try {
    surfaceSource.embedMessage(messageBytes, 5)
    // repeatCount: 0-30, receiving clients should handle duplicates
} catch (e: BroadcastException) {
    Log.e("EmbedMessage", "Failed to embed message: ${e.message}")
}
```
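
On the receiving side, repeated payloads must be de-duplicated. A minimal, SDK-independent approach is to remember recently seen payloads; this sketch keys on the payload content, though a real application might key on an ID embedded in the payload (all names here are illustrative):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class MessageDeduplicator {
    private final Set<String> seen = new LinkedHashSet<>();
    private final int capacity;

    public MessageDeduplicator(int capacity) {
        this.capacity = capacity;
    }

    // Returns true the first time a payload is observed, false for repeats.
    public boolean isNew(String payload) {
        if (seen.contains(payload)) {
            return false;
        }
        if (seen.size() >= capacity) {
            // Evict the oldest entry to bound memory use.
            seen.remove(seen.iterator().next());
        }
        seen.add(payload);
        return true;
    }
}
```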

### Reading Embedded Messages
<a name="android-embed-messages-read-messages"></a>

See "Get Supplemental Enhancement Information (SEI)" below for how to read embedded messages from incoming streams.

## Get Supplemental Enhancement Information (SEI)
<a name="android-publish-subscribe-sei-attributes"></a>

The Supplemental Enhancement Information (SEI) NAL unit is used to store frame-aligned metadata alongside the video. Subscribing clients can read SEI payloads from a publisher who is publishing H.264 video by inspecting the `embeddedMessages` property on the `ImageDeviceFrame` objects coming out of the publisher’s `ImageDevice`. To do this, acquire a publisher’s `ImageDevice`, then observe each frame via a callback provided to `setOnFrameCallback`, as shown in the following example:

```
// in a StageRenderer’s onStreamsAdded function, after acquiring the new ImageStream

val imageDevice = imageStream.device as ImageDevice
imageDevice.setOnFrameCallback(object : ImageDevice.FrameCallback {
    override fun onFrame(frame: ImageDeviceFrame) {
        for (message in frame.embeddedMessages) {
            if (message is UserDataUnregisteredSeiMessage) {
                val seiMessageBytes = message.data
                val seiMessageUUID = message.uuid

                // interpret the message's data based on the UUID
            }
        }
    }
})
```

## Continue Session in the Background
<a name="android-publish-subscribe-background-session"></a>

When the app enters the background, you may want to stop publishing or subscribe only to other remote participants’ audio. To accomplish this, update your `Strategy` implementation to stop publishing, and subscribe to `AUDIO_ONLY` (or `NONE`, if applicable).

```
// Local variables before going into the background
boolean shouldPublish = true;
Stage.SubscribeType subscribeType = Stage.SubscribeType.AUDIO_VIDEO;

// Stage.Strategy implementation
@Override
boolean shouldPublishFromParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return shouldPublish;
}

@Override
Stage.SubscribeType shouldSubscribeToParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
	return subscribeType;
}

// In our Activity, modify desired publish/subscribe when we go to background, then call refreshStrategy to update the stage
@Override
void onStop() {
	super.onStop();
	shouldPublish = false;
	subscribeType = Stage.SubscribeType.AUDIO_ONLY;
	stage.refreshStrategy();
}
```

## Layered Encoding with Simulcast
<a name="android-publish-subscribe-layered-encoding-simulcast"></a>

Layered encoding with simulcast is an IVS real-time streaming feature that allows publishers to send multiple different quality layers of video, and subscribers to dynamically or manually configure those layers. The feature is described more in the [Streaming Optimizations](real-time-streaming-optimization.md) document.

### Configuring Layered Encoding (Publisher)
<a name="android-layered-encoding-simulcast-configure-publisher"></a>

As a publisher, to enable layered encoding with simulcast, add the following configuration to your `LocalStageStream` on instantiation:

```
// Enable Simulcast
StageVideoConfiguration config = new StageVideoConfiguration();
config.simulcast.setEnabled(true);

ImageLocalStageStream cameraStream = new ImageLocalStageStream(frontCamera, config);

// Other Stage implementation code
```

Depending on the resolution you set on video configuration, a set number of layers will be encoded and sent as defined in the [Default Layers, Qualities, and Framerates](real-time-streaming-optimization.md#real-time-streaming-optimization-default-layers) section of *Streaming Optimizations*.

Also, you can optionally configure individual layers from within the simulcast configuration: 

```
// Enable Simulcast
StageVideoConfiguration config = new StageVideoConfiguration();
config.simulcast.setEnabled(true);

List<StageVideoConfiguration.Simulcast.Layer> simulcastLayers = new ArrayList<>();
simulcastLayers.add(StagePresets.SimulcastLocalLayer.DEFAULT_720);
simulcastLayers.add(StagePresets.SimulcastLocalLayer.DEFAULT_180);

config.simulcast.setLayers(simulcastLayers);

ImageLocalStageStream cameraStream = new ImageLocalStageStream(frontCamera, config);

// Other Stage implementation code
```

Alternatively, you can create your own custom layer configurations for up to three layers. If you provide an empty array or no value, the defaults described above are used. Layers are described with the following required property setters:
+ `setSize` — takes a `BroadcastConfiguration.Vec2`
+ `setMaxBitrate` — takes an integer
+ `setMinBitrate` — takes an integer
+ `setTargetFramerate` — takes an integer

Starting from the presets, you can either override individual properties or create an entirely new configuration:

```
// Enable Simulcast
StageVideoConfiguration config = new StageVideoConfiguration();
config.simulcast.setEnabled(true);

List<StageVideoConfiguration.Simulcast.Layer> simulcastLayers = new ArrayList<>();

// Configure high quality layer with custom framerate
StageVideoConfiguration.Simulcast.Layer customHiLayer = StagePresets.SimulcastLocalLayer.DEFAULT_720;
customHiLayer.setTargetFramerate(15);

// Add layers to the list
simulcastLayers.add(customHiLayer);
simulcastLayers.add(StagePresets.SimulcastLocalLayer.DEFAULT_180);

config.simulcast.setLayers(simulcastLayers);

ImageLocalStageStream cameraStream = new ImageLocalStageStream(frontCamera, config);

// Other Stage implementation code
```

For maximum values, limits, and errors which can be triggered when configuring individual layers, see the SDK reference documentation.

### Configuring Layered Encoding (Subscriber)
<a name="android-layered-encoding-simulcast-configure-subscriber"></a>

As a subscriber, no action is needed to enable layered encoding. If a publisher is sending simulcast layers, then by default the server dynamically adapts between the layers to choose the optimal quality based on the subscriber's device and network conditions.

Alternatively, to pick explicit layers that the publisher is sending, there are several options, described below.

### Option 1: Initial Layer Quality Preference
<a name="android-layered-encoding-simulcast-layer-quality-preference"></a>

Using the `subscribeConfigurationForParticipant` strategy, it is possible to choose what initial layer you want to receive as a subscriber:

```
@Override
public SubscribeConfiguration subscribeConfigurationForParticipant(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo) {
    SubscribeConfiguration config = new SubscribeConfiguration();

    config.simulcast.setInitialLayerPreference(SubscribeSimulcastConfiguration.InitialLayerPreference.LOWEST_QUALITY);

    return config;
}
```

By default, subscribers always are sent the lowest quality layer first; this slowly ramps up to the highest quality layer. This optimizes end-user bandwidth consumption and provides the best time to video, reducing initial video freezes for users on weaker networks.

These options are available for `InitialLayerPreference`:
+ `LOWEST_QUALITY` — The server delivers the lowest quality layer of video first. This optimizes bandwidth consumption, as well as time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 720p video is lower quality than 1080p video.
+ `HIGHEST_QUALITY` — The server delivers the highest quality layer of video first. This optimizes quality but may increase the time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 1080p video is higher quality than 720p video.

**Note:** For initial layer preferences (the `setInitialLayerPreference` call) to take effect, a re-subscribe is necessary as these updates do not apply to the active subscription.

### Option 2: Preferred Layer for Stream
<a name="android-layered-encoding-simulcast-preferred-layer"></a>

The `preferredLayerForStream` strategy method lets you select a layer after the stream has started. This strategy method receives the participant and the stream information, so you can select a layer on a participant-by-participant basis. The SDK calls this method in response to specific events, such as when stream layers change, the participant state changes, or the host application refreshes the strategy.

The strategy method returns a `RemoteStageStream.Layer` object, which can be one of the following:
+ A layer object, such as one returned by `RemoteStageStream.getLayers`.
+ `null`, which indicates that no layer should be selected and dynamic adaption is preferred.

For example, the following strategy always selects the lowest quality layer of video available:

```
@Nullable
@Override
public RemoteStageStream.Layer preferredLayerForStream(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull RemoteStageStream stream) {
    return stream.getLowestQualityLayer();
}
```

To reset the layer selection and return to dynamic adaption, return `null` from the strategy. In this example, `appState` is a placeholder variable that represents the host application’s state.

```
@Nullable
@Override
public RemoteStageStream.Layer preferredLayerForStream(@NonNull Stage stage, @NonNull ParticipantInfo participantInfo, @NonNull RemoteStageStream stream) {
    if (appState.isAutoMode) {
        return null;
    } else {
        return appState.layerChoice;
    }
}
```

### Option 3: RemoteStageStream Layer Helpers
<a name="android-layered-encoding-simulcast-remotestagestream-helpers"></a>

`RemoteStageStream` has several helpers which can be used to make decisions about layer selection and display the corresponding selections to end users:
+ **Layer Events** — Alongside `StageRenderer`, the `RemoteStageStream.Listener` has events which communicate layer and simulcast adaption changes:
  + `void onAdaptionChanged(boolean adaption)`
  + `void onLayersChanged(@NonNull List<Layer> layers)`
  + `void onLayerSelected(@Nullable Layer layer, @NonNull LayerSelectedReason reason)`
+ **Layer Methods** — `RemoteStageStream` has several helper methods which can be used to get information about the stream and the layers being presented. These methods are available on the remote stream provided in the `preferredLayerForStream` strategy, as well as remote streams exposed via `StageRenderer.onStreamsAdded`.
  + `stream.getLayers`
  + `stream.getSelectedLayer`
  + `stream.getLowestQualityLayer`
  + `stream.getHighestQualityLayer`
  + `stream.getLayersWithConstraints`

For details, see the `RemoteStageStream` class in the [SDK reference documentation](https://aws.github.io/amazon-ivs-broadcast-docs/latest/android/). For the `LayerSelected` reason, if `UNAVAILABLE` is returned, this indicates that the requested layer could not be selected. A best-effort selection is made in its place, which typically is a lower quality layer to maintain stream stability.
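
For example, a host application might use these helpers to populate a quality-selection menu and highlight the layer currently being received. A sketch (the menu method is illustrative, not part of the SDK):

```
// Build a quality menu from the layers the publisher is sending
for (RemoteStageStream.Layer layer : stream.getLayers()) {
	addQualityMenuEntry(layer);
}

// Highlight the layer currently being received
RemoteStageStream.Layer selected = stream.getSelectedLayer();
```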

## Video-Configuration Limitations
<a name="android-publish-subscribe-video-limits"></a>

The SDK does not support forcing portrait mode or landscape mode using `StageVideoConfiguration.setSize(BroadcastConfiguration.Vec2 size)`. In portrait orientation, the smaller dimension is used as the width; in landscape orientation, the height. This means that the following two calls to `setSize` have the same effect on the video configuration:

```
StageVideoConfiguration config = new StageVideoConfiguration();

config.setSize(new BroadcastConfiguration.Vec2(720f, 1280f));
config.setSize(new BroadcastConfiguration.Vec2(1280f, 720f));
```

## Handling Network Issues
<a name="android-publish-subscribe-network-issues"></a>

When the local device’s network connection is lost, the SDK internally tries to reconnect without any user action. In some cases, the SDK is not successful and user action is needed. There are two main errors related to losing the network connection:
+ Error code 1400, message: "PeerConnection is lost due to unknown network error"
+ Error code 1300, message: "Retry attempts are exhausted"

If the first error is received but the second is not, the SDK is still connected to the stage and will try to reestablish its connections automatically. As a safeguard, you can call `refreshStrategy` without any changes to the strategy method’s return values, to trigger a manual reconnect attempt.

If the second error is received, the SDK’s reconnect attempts have failed and the local device is no longer connected to the stage. In this case, try to rejoin the stage by calling `join` after your network connection has been reestablished.

In general, encountering errors after joining a stage successfully indicates that the SDK was unsuccessful in reestablishing a connection. Create a new `Stage` object and try to join when network conditions improve.

## Using Bluetooth Microphones
<a name="android-publish-subscribe-bluetooth-microphones"></a>

To publish using Bluetooth microphone devices, you must start a Bluetooth SCO connection:

```
Bluetooth.startBluetoothSco(context);
// Now bluetooth microphones can be used
…
// Must also stop bluetooth SCO
Bluetooth.stopBluetoothSco(context);
```

# Known Issues & Workarounds in the IVS Android Broadcast SDK (Real-Time Streaming)
<a name="broadcast-android-known-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS real-time streaming Android broadcast SDK and suggests potential workarounds.
+ When an Android device goes to sleep and wakes up, it is possible for the preview to be in a frozen state.

  **Workaround:** Create and use a new `Stage`.
+ When a participant joins with a token that is being used by another participant, the first connection is disconnected without a specific error.

  **Workaround:** None. 
+ There is a rare issue where the publisher is publishing but the publish state that subscribers receive is `inactive`.

  **Workaround:** Try leaving and then joining the session. If the issue remains, create a new token for the publisher.
+ A rare audio-distortion issue may occur intermittently during a stage session, typically on calls of longer durations.

  **Workaround:** The participant with distorted audio can either leave and rejoin the session, or unpublish and republish their audio to fix the issue.
+ External microphones are not supported when publishing to a stage.

  **Workaround:** Do not use an external microphone connected via USB for publishing to a stage.
+ Publishing to a stage with screen share using `createSystemCaptureSources` is not supported.

  **Workaround:** Manage the system capture manually, using custom image-input sources and custom audio-input sources.
+ When an `ImagePreviewView` is removed from a parent (e.g., `removeView()` is called at the parent), the `ImagePreviewView` is released immediately. The `ImagePreviewView` does not show any frames when it is added to another parent view.

  **Workaround:** Request another preview using `getPreview`.
+ When joining a stage with a Samsung Galaxy S22-series device running Android 12, you may encounter a 1401 error, and the local device fails to join the stage or joins but has no audio.

  **Workaround:** Upgrade to Android 13.
+ When joining a stage with a Nokia X20 on Android 13, the camera may fail to open and an exception is thrown.

  **Workaround:** None.
+ Devices with the MediaTek Helio chipset may not render video of remote participants properly.

  **Workaround:** None.
+ On a few devices, the device OS may choose a different microphone than what’s selected through the SDK. This is because the Amazon IVS Broadcast SDK cannot control how the `VOICE_COMMUNICATION` audio route is defined, as it varies according to different device manufacturers.

  **Workaround:** None.
+ Some Android video encoders cannot be configured with a video size less than 176x176. Configuring a smaller size causes an error and prevents streaming.

  **Workaround:** Do not configure the video size to be less than 176x176.

# Error Handling in the IVS Android Broadcast SDK (Real-Time Streaming)
<a name="broadcast-android-error-handling"></a>

This section is an overview of error conditions, how the IVS real-time streaming Android broadcast SDK reports them to the application, and what an application should do when those errors are encountered.

## Fatal vs. Non-Fatal Errors
<a name="broadcast-android-fatal-vs-nonfatal-errors"></a>

The error object, a `BroadcastException`, has an `isFatal` boolean field.

In general, fatal errors are related to connection to the Stages server (either a connection cannot be established or is lost and cannot be recovered). The application should re-create the stage and re-join, possibly with a new token or when the device’s connectivity recovers.

Non-fatal errors generally are related to the publish/subscribe state and are handled by the SDK, which retries the publish/subscribe operation.

You can check this property:

```
try {
  stage.join(...)
} catch (e: BroadcastException) {
  if (e.isFatal) {
    // The error is fatal
  }
}
```

## Join Errors
<a name="broadcast-android-stage-join-errors"></a>

### Malformed Token
<a name="broadcast-android-stage-join-errors-malformed-token"></a>

This happens when the stage token is malformed.

The SDK throws a Java exception from a call to `stage.join`, with error code = 1000 and fatal = true.

**Action**: Create a valid token and retry joining.

### Expired Token
<a name="broadcast-android-stage-join-errors-expired-token"></a>

This happens when the stage token is expired.

The SDK throws a Java exception from a call to `stage.join`, with error code = 1001 and fatal = true.

**Action**: Create a new token and retry joining.

### Invalid or Revoked Token
<a name="broadcast-android-stage-join-errors-invalid-token"></a>

This happens when the stage token is not malformed but is rejected by the Stages server. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `onConnectionStateChanged` with an exception, with error code = 1026 and fatal = true.

**Action**: Create a valid token and retry joining.

### Network Errors for Initial Join
<a name="broadcast-android-stage-join-errors-network-initial-join"></a>

This happens when the SDK cannot contact the Stages server to establish a connection. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `onConnectionStateChanged` with an exception, with error code = 1300 and fatal = true.

**Action**: Wait for the device’s connectivity to recover and retry joining.

### Network Errors when Already Joined
<a name="broadcast-android-stage-join-errors-network-already-joined"></a>

If the device’s network connection goes down, the SDK may lose its connection to Stage servers. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `onConnectionStateChanged` with an exception, with error code = 1300 and fatal = true.

**Action**: Wait for the device’s connectivity to recover and retry joining.

## Publish/Subscribe Errors
<a name="broadcast-android-publish-subscribe-errors"></a>

### Initial
<a name="broadcast-android-publish-subscribe-errors-initial"></a>

There are several errors:
+ MultihostSessionOfferCreationFailPublish (1020)
+ MultihostSessionOfferCreationFailSubscribe (1021)
+ MultihostSessionNoIceCandidates (1022)
+ MultihostSessionStageAtCapacity (1024)
+ SignallingSessionCannotRead (1201)
+ SignallingSessionCannotSend (1202)
+ SignallingSessionBadResponse (1203)

These are reported asynchronously through the application-supplied stage renderer.

The SDK retries the operation for a limited number of times. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `onError` with the relevant error code and fatal = false.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries.

### Already Established, Then Fail
<a name="broadcast-android-publish-subscribe-errors-established"></a>

A publish or subscribe can fail after it is established, most likely due to a network error. The error code for a "peer connection lost due to network error" is 1400.

This is reported asynchronously through the application-supplied stage renderer.

The SDK retries the publish/subscribe operation. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `onError` with the error code = 1400 and fatal = false.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries. In the event of total connectivity loss, it’s likely that the connection to Stages will fail too.

# IVS Broadcast SDK: iOS Guide (Real-Time Streaming)
<a name="broadcast-ios"></a>

The IVS real-time streaming iOS broadcast SDK enables participants to send and receive video on iOS.

The `AmazonIVSBroadcast` module implements the interface described in this document. The following operations are supported:
+ Join a stage 
+ Publish media to other participants in the stage
+ Subscribe to media from other participants in the stage
+ Manage and monitor video and audio published to the stage
+ Get WebRTC statistics for each peer connection
+ All operations from the IVS low-latency streaming iOS broadcast SDK

**Latest version of iOS broadcast SDK:** 1.41.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/release-notes.html#apr09-26-broadcast-mobile-rt))

**Reference documentation:** For information on the most important methods available in the Amazon IVS iOS broadcast SDK, see the reference documentation at [https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/](https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/).

**Sample code:** See the iOS sample repository on GitHub: [https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples](https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples).

**Platform requirements:** iOS 14 and later

# Getting Started with the IVS iOS Broadcast SDK (Real-Time Streaming)
<a name="broadcast-ios-getting-started"></a>

This document takes you through the steps involved in getting started with the IVS real-time streaming iOS broadcast SDK.

## Install the Library
<a name="broadcast-ios-install"></a>

We recommend that you integrate the broadcast SDK via Swift Package Manager. (Alternatively, you can manually add the framework to your project.)

### Recommended: Integrate the Broadcast SDK (Swift Package Manager)
<a name="broadcast-ios-install-swift"></a>

1. Download the Package.swift file from [https://broadcast.live-video.net/1.41.0/Package.swift](https://broadcast.live-video.net/1.41.0/Package.swift).

1. In your project, create a new directory named AmazonIVSBroadcast and add it to version control.

1. Place the downloaded Package.swift file in the new directory.

1. In Xcode, go to **File > Add Package Dependencies** and select **Add Local...**

1. Navigate to and select the AmazonIVSBroadcast directory that you created, and select **Add Package**.

1. When prompted to **Choose Package Products for AmazonIVSBroadcast**, select **AmazonIVSBroadcastStages** as your **Package Product** by setting your application target in the **Add to Target** section.

1. Select **Add Package**.

**Important**: The IVS real-time streaming broadcast SDK includes all features of the IVS low-latency streaming broadcast SDK. It is not possible to integrate both SDKs in the same project.

### Alternate Approach: Install the Framework Manually
<a name="broadcast-ios-install-manual"></a>

1. Download the latest version from [https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast-Stages.xcframework.zip](https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast-Stages.xcframework.zip).

1. Extract the contents of the archive. `AmazonIVSBroadcast.xcframework` contains the SDK for both device and simulator.

1. Embed `AmazonIVSBroadcast.xcframework` by dragging it into the **Frameworks, Libraries, and Embedded Content** section of the **General** tab for your application target.  
![\[The Frameworks, Libraries, and Embedded Content section of the General tab for your application target.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/iOS_Broadcast_SDK_Guide_xcframework.png)

## Request Permissions
<a name="broadcast-ios-permissions"></a>

Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)

Here, we check whether the user has already granted permissions and, if not, we ask for them:

```
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    break // Permission already granted
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // Handle the result based on the granted bool
    }
case .denied, .restricted:
    break // Permission denied
@unknown default:
    break // Permission state unknown
}
```

You need to do this for both `.video` and `.audio` media types, if you want access to cameras and microphones, respectively.

You also need to add entries for `NSCameraUsageDescription` and `NSMicrophoneUsageDescription` to your `Info.plist`. Otherwise, your app will crash when trying to request permissions.
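For example, the `Info.plist` entries look like the following (the description strings below are placeholders; use text that explains your app's actual usage, since it is shown to the user in the permission prompt):

```
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture video for your broadcast.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to capture audio for your broadcast.</string>
```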

## Disable the Application Idle Timer
<a name="broadcast-ios-disable-idle-timer"></a>

This is optional but recommended. It prevents your device from going to sleep while using the broadcast SDK, which would interrupt the broadcast.

```
override func viewDidAppear(_ animated: Bool) {
   super.viewDidAppear(animated)
   UIApplication.shared.isIdleTimerDisabled = true
}
override func viewDidDisappear(_ animated: Bool) {
   super.viewDidDisappear(animated)
   UIApplication.shared.isIdleTimerDisabled = false
}
```

# Publishing & Subscribing with the IVS iOS Broadcast SDK (Real-Time Streaming)
<a name="ios-publish-subscribe"></a>

This document takes you through the steps involved in publishing and subscribing to a stage using the IVS real-time streaming iOS broadcast SDK.

## Concepts
<a name="ios-publish-subscribe-concepts"></a>

Three core concepts underlie real-time functionality: [stage](#ios-publish-subscribe-concepts-stage), [strategy](#ios-publish-subscribe-concepts-strategy), and [renderer](#ios-publish-subscribe-concepts-renderer). The design goal is minimizing the amount of client-side logic necessary to build a working product.

### Stage
<a name="ios-publish-subscribe-concepts-stage"></a>

The `IVSStage` class is the main point of interaction between the host application and the SDK. The class represents the stage itself and is used to join and leave the stage. Creating or joining a stage requires a valid, unexpired token string from the control plane (represented as `token`). Joining and leaving a stage are simple.

```
let stage = try IVSStage(token: token, strategy: self)

try stage.join()

stage.leave()
```

The `IVSStage` class also is where the `IVSStageRenderer` and `IVSErrorDelegate` can be attached:

```
let stage = try IVSStage(token: token, strategy: self)
stage.errorDelegate = self
stage.addRenderer(self) // multiple renderers can be added
```

### Strategy
<a name="ios-publish-subscribe-concepts-strategy"></a>

The `IVSStageStrategy` protocol provides a way for the host application to communicate the desired state of the stage to the SDK. Three functions need to be implemented: `shouldSubscribeToParticipant`, `shouldPublishParticipant`, and `streamsToPublishForParticipant`. All are discussed below.

#### Subscribing to Participants
<a name="ios-publish-subscribe-concepts-strategy-participants"></a>

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType
```

When a remote participant joins a stage, the SDK queries the host application about the desired subscription state for that participant. The options are `.none`, `.audioOnly`, and `.audioVideo`. When returning a value for this function, the host application does not need to worry about the publish state, current subscription state, or stage connection state. If `.audioVideo` is returned, the SDK waits until the remote participant is publishing before subscribing, and it updates the host application through the renderer throughout the process.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    return .audioVideo
}
```

This is the complete implementation of this function for a host application that always wants all participants to see each other; e.g., a video-chat application.

More advanced implementations also are possible. Use the `attributes` property on `IVSParticipantInfo` to selectively subscribe to participants based on server-provided attributes:

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    switch participant.attributes["role"] {
    case "moderator": return .none
    case "guest": return .audioVideo
    default: return .none
    }
}
```

This can be used to create a stage where moderators can monitor all guests without being seen or heard themselves. The host application could use additional business logic to let moderators see each other but remain invisible to guests.

#### Configuration for Subscribing to Participants
<a name="ios-publish-subscribe-concepts-strategy-participants-config"></a>

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration
```

If a remote participant is being subscribed to (see [Subscribing to Participants](#ios-publish-subscribe-concepts-strategy-participants)), the SDK queries the host application about a custom subscribe configuration for that participant. This configuration is optional and allows the host application to control certain aspects of subscriber behavior. For information on what can be configured, see [SubscribeConfiguration](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/SubscribeConfiguration) in the SDK reference documentation.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration {
    let config = IVSSubscribeConfiguration()

    try! config.jitterBuffer.setMinDelay(.medium())

    return config
}
```

This implementation updates the jitter-buffer minimum delay for all subscribed participants to a preset of `MEDIUM`.

As with `shouldSubscribeToParticipant`, more advanced implementations are possible. The given `ParticipantInfo` can be used to selectively update the subscribe configuration for specific participants.

We recommend using the default behaviors. Specify custom configuration only if there is a particular behavior you want to change.

#### Publishing
<a name="ios-publish-subscribe-concepts-strategy-publishing"></a>

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool
```

Once connected to the stage, the SDK queries the host application to see if a particular participant should publish. This is invoked only on local participants that have permission to publish based on the provided token.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
    return true
}
```

This is for a standard video chat application where users always want to publish. They can mute and unmute their audio and video, to instantly be hidden or seen/heard. (They also can use publish/unpublish, but that is much slower. Mute/unmute is preferable for use cases where changing visibility often is desirable.)

#### Choosing Streams to Publish
<a name="ios-publish-subscribe-concepts-strategy-streams"></a>

```
func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream]
```

When publishing, this is used to determine what audio and video streams should be published. This is covered in more detail later in [Publish a Media Stream](#ios-publish-subscribe-publish-stream).

#### Updating the Strategy
<a name="ios-publish-subscribe-concepts-strategy-updates"></a>

The strategy is intended to be dynamic: the values returned from any of the above functions can be changed at any time. For example, if the host application does not want to publish until the end user taps a button, you could return a variable from `shouldPublishParticipant` (something like `hasUserTappedPublishButton`). When that variable changes based on an interaction by the end user, call `stage.refreshStrategy()` to signal to the SDK that it should query the strategy for the latest values, applying only things that have changed. If the SDK observes that the `shouldPublishParticipant` value has changed, it will start the publish process. If the SDK queries and all functions return the same value as before, the `refreshStrategy` call will not make any modifications to the stage.

If the return value of `shouldSubscribeToParticipant` changes from `.audioVideo` to `.audioOnly`, the video stream is removed for every participant whose return value changed, if a video stream existed previously.

Generally, the stage uses the strategy to most efficiently apply the difference between the previous and current strategies, without the host application needing to worry about all the state required to manage it properly. Because of this, think of calling `stage.refreshStrategy()` as a cheap operation, because it does nothing unless the strategy changes.
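As a mental model only (the types below are illustrative, not SDK API), a strategy refresh can be thought of as a diff between the previous and current desired states; an unchanged state produces no actions:

```
struct DesiredState: Equatable {
    var shouldPublish: Bool
    var subscribeToVideo: Bool
}

// Compute the actions a refresh would trigger; an unchanged state yields none.
func refreshActions(from old: DesiredState, to new: DesiredState) -> [String] {
    var actions: [String] = []
    if old.shouldPublish != new.shouldPublish {
        actions.append(new.shouldPublish ? "start publishing" : "stop publishing")
    }
    if old.subscribeToVideo != new.subscribeToVideo {
        actions.append(new.subscribeToVideo ? "add video stream" : "remove video stream")
    }
    return actions
}
```

This is why repeated `refreshStrategy()` calls are cheap: when every strategy function returns the same values as before, the diff is empty.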

### Renderer
<a name="ios-publish-subscribe-concepts-renderer"></a>

The `IVSStageRenderer` protocol communicates the state of the stage to the host application. Updates to the host application’s UI usually can be powered entirely by the events provided by the renderer. The renderer provides the following functions:

```
func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo)

func stage(_ stage: IVSStage, participantDidLeave participant: IVSParticipantInfo)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange publishState: IVSParticipantPublishState)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange subscribeState: IVSParticipantSubscribeState)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didRemove streams: [IVSStageStream])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChangeMutedStreams streams: [IVSStageStream])

func stage(_ stage: IVSStage, didChange connectionState: IVSStageConnectionState, withError error: Error?)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didChangeStreamAdaption adaption: Bool)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didChange layers: [IVSRemoteStageStreamLayer])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didSelect layer: IVSRemoteStageStreamLayer?, reason: IVSRemoteStageStream.LayerSelectedReason)
```

It is not expected that the information provided by the renderer impacts the return values of the strategy. For example, the return value of `shouldSubscribeToParticipant` is not expected to change when `participant:didChangePublishState` is called. If the host application wants to subscribe to a particular participant, it should return the desired subscription type regardless of that participant’s publish state. The SDK is responsible for ensuring that the desired state of the strategy is acted on at the correct time based on the state of the stage.

Note that only publishing participants trigger `participantDidJoin`, and whenever a participant stops publishing or leaves the stage session, `participantDidLeave` is triggered.

## Publish a Media Stream
<a name="ios-publish-subscribe-publish-stream"></a>

Local devices such as built-in microphones and cameras are discovered via `IVSDeviceDiscovery`. Here is an example of selecting the front-facing camera and default microphone, then returning them as `IVSLocalStageStreams` to be published by the SDK:

```
let devices = IVSDeviceDiscovery().listLocalDevices()

// Find the camera virtual device, choose the front source, and create a stream
let camera = devices.compactMap({ $0 as? IVSCamera }).first!
let frontSource = camera.listAvailableInputSources().first(where: { $0.position == .front })!
camera.setPreferredInputSource(frontSource)
let cameraStream = IVSLocalStageStream(device: camera)

// Find the microphone virtual device and create a stream
let microphone = devices.compactMap({ $0 as? IVSMicrophone }).first!
let microphoneStream = IVSLocalStageStream(device: microphone)

// Configure the audio manager to use the videoChat preset, which is optimized for bi-directional communication, including echo cancellation.
IVSStageAudioManager.sharedInstance().setPreset(.videoChat)

// This is a function on IVSStageStrategy
func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream] {
    return [cameraStream, microphoneStream]
}
```

## Display and Remove Participants
<a name="ios-publish-subscribe-participants"></a>

After subscribing is completed, you will receive an array of `IVSStageStream` objects through the renderer’s `didAddStreams` function. To preview or receive audio level stats about this participant, you can access the underlying `IVSDevice` object from the stream:

```
if let imageDevice = stream.device as? IVSImageDevice {
    let preview = imageDevice.previewView()
    /* attach this UIView subclass to your view */
} else if let audioDevice = stream.device as? IVSAudioDevice {
    audioDevice.setStatsCallback( { stats in
        /* process stats.peak and stats.rms */
    })
}
```

When a participant stops publishing or is unsubscribed from, the `didRemoveStreams` function is called with the streams that were removed. Host applications should use this as a signal to remove the participant’s video stream from the view hierarchy.

`didRemoveStreams` is invoked for all scenarios in which a stream might be removed, including:
+ The remote participant stops publishing.
+ A local device unsubscribes or changes subscription from `.audioVideo` to `.audioOnly`.
+ The remote participant leaves the stage.
+ The local participant leaves the stage.

Because `didRemoveStreams` is invoked for all scenarios, no custom business logic is required around removing participants from the UI during remote or local leave operations.

## Mute and Unmute Media Streams
<a name="ios-publish-subscribe-mute-streams"></a>

`IVSLocalStageStream` objects have a `setMuted` function that controls whether the stream is muted. This function can be called on the stream before or after it is returned from the `streamsToPublishForParticipant` strategy function.

**Important**: If a new `IVSLocalStageStream` object instance is returned by `streamsToPublishForParticipant` after a call to `refreshStrategy`, the mute state of the new stream object is applied to the stage. Be careful when creating new `IVSLocalStageStream` instances to make sure the expected mute state is maintained.

## Monitor Remote Participant Media Mute State
<a name="ios-publish-subscribe-mute-state"></a>

When a participant changes the mute state of its video or audio stream, the renderer `didChangeMutedStreams` function is invoked with an array of streams that have changed. Use the `isMuted` property on `IVSStageStream` to update your UI accordingly:

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChangeMutedStreams streams: [IVSStageStream]) {
    streams.forEach { stream in 
        /* stream.isMuted */
    }
}
```

## Create a Stage Configuration
<a name="ios-publish-subscribe-stage-config"></a>

To customize the values of a stage’s video configuration, use `IVSLocalStageStreamVideoConfiguration`:

```
let config = IVSLocalStageStreamVideoConfiguration()
try config.setMaxBitrate(900_000)
try config.setMinBitrate(100_000)
try config.setTargetFramerate(30)
try config.setSize(CGSize(width: 360, height: 640))
config.degradationPreference = .balanced
```

## Get WebRTC Statistics
<a name="ios-publish-subscribe-webrtc-stats"></a>

To get the latest WebRTC statistics for a publishing stream or a subscribing stream, use `requestRTCStats` on `IVSStageStream`. When a collection is completed, you will receive statistics through the `IVSStageStreamDelegate` which can be set on `IVSStageStream`. To continually collect WebRTC statistics, call this function on a `Timer`.

```
func stream(_ stream: IVSStageStream, didGenerateRTCStats stats: [String : [String : String]]) {
    for stat in stats {
      for member in stat.value {
         print("stat \(stat.key) has member \(member.key) with value \(member.value)")
      }
   }
}
```

## Get Participant Attributes
<a name="ios-publish-subscribe-participant-attributes"></a>

If you specify attributes in the `CreateParticipantToken` operation request, you can see the attributes in `IVSParticipantInfo` properties:

```
func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo) {
    print("ID: \(participant.participantId)")
    for attribute in participant.attributes {
        print("attribute: \(attribute.key)=\(attribute.value)")
    }
}
```

## Embed Messages
<a name="ios-publish-subscribe-embed-messages"></a>

The `embedMessage` method on `IVSImageDevice` allows you to insert metadata payloads directly into video frames during publishing. This enables frame-synchronized messaging for real-time applications. Message embedding is available only when using the SDK for real-time publishing (not low-latency publishing).

Embedded messages are not guaranteed to arrive to subscribers because they are embedded directly within video frames and transmitted over UDP, which does not guarantee packet delivery. Packet loss during transmission can result in lost messages, especially in poor network conditions. To mitigate this, the `embedMessage` method includes a `repeatCount` parameter that duplicates the message across multiple consecutive frames, increasing delivery reliability. This capability is available only for video streams.

### Using embedMessage
<a name="ios-embed-messages-using-embedmessage"></a>

Publishing clients can embed message payloads into their video stream using the `embedMessage` method on `IVSImageDevice`. Each payload must be larger than 0KB and smaller than 1KB, and the total size of the messages embedded per second must not exceed 10KB.

```
let imageDevice: IVSImageDevice = imageStream.device as! IVSImageDevice
let messageData = Data("hello world".utf8)

do {
    try imageDevice.embedMessage(messageData, withRepeatCount: 0)
} catch {
    print("Failed to embed message: \(error)")
}
```
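These limits are easy to exceed accidentally, so it can help to gate calls to `embedMessage` on the client side. The following sketch is an assumption-level helper (not part of the SDK) that rejects out-of-range payloads and enforces the 10KB-per-second budget over a one-second window:

```
import Foundation

// Gate embedMessage calls: payloads must be non-empty and under 1KB, and the
// total embedded per second must stay within a 10KB budget (1KB = 1024 bytes).
final class EmbedMessageGuard {
    private var windowStart = Date.distantPast
    private var bytesThisWindow = 0

    func canEmbed(_ payload: Data, now: Date = Date()) -> Bool {
        guard !payload.isEmpty && payload.count < 1024 else { return false }
        if now.timeIntervalSince(windowStart) >= 1 {
            windowStart = now       // start a new one-second window
            bytesThisWindow = 0
        }
        guard bytesThisWindow + payload.count <= 10_240 else { return false }
        bytesThisWindow += payload.count
        return true
    }
}
```

A publisher could consult `canEmbed` before each `embedMessage` call and drop or defer messages that would exceed the limits.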

### Repeating Message Payloads
<a name="ios-embed-messages-repeat-payloads"></a>

Use `repeatCount` to duplicate the message across multiple frames for improved reliability. This value must be between 0 and 30. Receiving clients must have logic to de-duplicate the message.

```
try imageDevice.embedMessage(messageData, withRepeatCount: 5)

// repeatCount: 0-30, receiving clients should handle duplicates
```
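On the receiving side, one simple de-duplication approach is to drop payloads already seen within a short interval; identical payloads arrive on several consecutive frames when `repeatCount` is greater than 0. This sketch is an assumption, and the window length is arbitrary, not an SDK requirement:

```
import Foundation

// Receiver-side de-duplication for messages embedded with repeatCount > 0.
final class EmbeddedMessageDeduplicator {
    private var lastSeen: [Data: Date] = [:]
    private let window: TimeInterval

    init(window: TimeInterval = 2.0) { self.window = window }

    // Returns true the first time a payload is seen inside the window.
    func isNew(_ payload: Data, now: Date = Date()) -> Bool {
        if let seen = lastSeen[payload], now.timeIntervalSince(seen) < window {
            return false
        }
        lastSeen[payload] = now
        return true
    }
}
```

A subscriber would call `isNew` for each payload read from a frame and process only those that return `true`.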

### Reading Embedded Messages
<a name="ios-embed-messages-read-messages"></a>

See "Get Supplemental Enhancement Information (SEI)" below for how to read embedded messages from incoming streams. 

## Get Supplemental Enhancement Information (SEI)
<a name="ios-publish-subscribe-sei-attributes"></a>

The Supplemental Enhancement Information (SEI) NAL unit is used to store frame-aligned metadata alongside the video. Subscribing clients can read SEI payloads from a publisher who is publishing H.264 video by inspecting the `embeddedMessages` property on the `IVSImageDeviceFrame` objects coming out of the publisher’s `IVSImageDevice`. To do this, acquire a publisher’s `IVSImageDevice`, then observe each frame via a callback provided to `setOnFrameCallback`, as shown in the following example:

```
// in an IVSStageRenderer’s stage:participant:didAddStreams: function, after acquiring the new IVSImageStream

let imageDevice: IVSImageDevice? = imageStream.device as? IVSImageDevice
imageDevice?.setOnFrameCallback { frame in
    for message in frame.embeddedMessages {
        if let seiMessage = message as? IVSUserDataUnregisteredSEIMessage {
            let seiMessageData = seiMessage.data
            let seiMessageUUID = seiMessage.UUID

            // interpret the message's data based on the UUID
        }
    }
}
```

## Continue Session in the Background
<a name="ios-publish-subscribe-background-session"></a>

When the app enters the background, you can remain in the stage and continue hearing remote audio, but you cannot continue sending your own image and audio. You will need to update your `IVSStrategy` implementation to stop publishing and subscribe to `.audioOnly` (or `.none`, if applicable):

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
    return false
}
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    return .audioOnly
}
```

Then make a call to `stage.refreshStrategy()`.
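
For example, the strategy callbacks above can be driven by app-lifecycle notifications; `appState` and its `isInBackground` flag are placeholder assumptions for host-application state:

```
import UIKit

// Sketch: flip a background flag on lifecycle notifications, then refresh the
// strategy so shouldPublish / shouldSubscribe return the new values.
let center = NotificationCenter.default
center.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: .main) { _ in
    appState.isInBackground = true   // appState is a placeholder for host-app state
    stage.refreshStrategy()
}
center.addObserver(forName: UIApplication.willEnterForegroundNotification, object: nil, queue: .main) { _ in
    appState.isInBackground = false
    stage.refreshStrategy()
}
```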

## Layered Encoding with Simulcast
<a name="ios-publish-subscribe-layered-encoding-simulcast"></a>

Layered encoding with simulcast is an IVS real-time streaming feature that allows publishers to send multiple different quality layers of video, and subscribers to dynamically or manually configure those layers. The feature is described more in the [Streaming Optimizations](real-time-streaming-optimization.md) document.

### Configuring Layered Encoding (Publisher)
<a name="ios-layered-encoding-simulcast-configure-publisher"></a>

As a publisher, to enable layered encoding with simulcast, add the following configuration to your `IVSLocalStageStream` on instantiation:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

Depending on the resolution you set on video configuration, a set number of layers will be encoded and sent as defined in the [Default Layers, Qualities, and Framerates](real-time-streaming-optimization.md#real-time-streaming-optimization-default-layers) section of *Streaming Optimizations*.

Also, you can optionally configure individual layers from within the simulcast configuration:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let layers = [
    IVSStagePresets.simulcastLocalLayer().default720(),
    IVSStagePresets.simulcastLocalLayer().default180()
]

try config.simulcast.setLayers(layers)

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

Alternatively, you can create your own custom layer configurations for up to three layers. If you provide an empty array or no value, the defaults described above are used. Layers are described with the following required property setters:
+ `setSize: CGSize;`
+ `setMaxBitrate: integer;`
+ `setMinBitrate: integer;`
+ `setTargetFramerate: float;`

Starting from the presets, you can either override individual properties or create an entirely new configuration:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let customHiLayer = IVSStagePresets.simulcastLocalLayer().default720()
try customHiLayer.setTargetFramerate(15)

let layers = [
    customHiLayer,
    IVSStagePresets.simulcastLocalLayer().default180()
]

try config.simulcast.setLayers(layers)

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

For maximum values, limits, and errors which can be triggered when configuring individual layers, see the SDK reference documentation.

### Configuring Layered Encoding (Subscriber)
<a name="ios-layered-encoding-simulcast-configure-subscriber"></a>

As a subscriber, no configuration is needed to enable layered encoding. If a publisher is sending simulcast layers, then by default the server dynamically adapts between the layers to choose the optimal quality based on the subscriber's device and network conditions.

Alternatively, to pick explicit layers that the publisher is sending, there are several options, described below.

### Option 1: Initial Layer Quality Preference
<a name="ios-layered-encoding-simulcast-layer-quality-preference"></a>

Using the `subscribeConfigurationForParticipant` strategy, it is possible to choose what initial layer you want to receive as a subscriber:

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration {
    let config = IVSSubscribeConfiguration()

    config.simulcast.initialLayerPreference = .lowestQuality

    return config
}
```

By default, subscribers are always sent the lowest quality layer first; the server then gradually ramps up to the highest quality layer. This optimizes end-user bandwidth consumption and provides the best time to video, reducing initial video freezes for users on weaker networks.

These options are available for `InitialLayerPreference`:
+ `lowestQuality` — The server delivers the lowest quality layer of video first. This optimizes bandwidth consumption, as well as time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 720p video is lower quality than 1080p video.
+ `highestQuality` — The server delivers the highest quality layer of video first. This optimizes quality but may increase the time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 1080p video is higher quality than 720p video.

**Note:** For initial layer preferences (the `initialLayerPreference` call) to take effect, a re-subscribe is necessary as these updates do not apply to the active subscription.
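
One way to force that re-subscribe is to unsubscribe and then subscribe again through two strategy refreshes; `appState` and its properties are placeholder assumptions for host-application state that back the strategy callbacks:

```
// Sketch: tear down the active subscription, then re-subscribe so the new
// initial layer preference is applied on the next subscribe.
appState.subscribeType = .none
stage.refreshStrategy()   // strategy now returns .none; subscription ends

appState.initialLayerPreference = .highestQuality
appState.subscribeType = .audioVideo
stage.refreshStrategy()   // re-subscribes with the new preference
```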

### Option 2: Preferred Layer for Stream
<a name="ios-layered-encoding-simulcast-preferred-layer"></a>

The `preferredLayerForStream` strategy method lets you select a layer after the stream has started. This strategy method receives the participant and the stream information, so you can select a layer on a participant-by-participant basis. The SDK calls this method in response to specific events, such as when stream layers change, the participant state changes, or the host application refreshes the strategy.

The strategy method returns an `IVSRemoteStageStreamLayer` object, which can be one of the following:
+ A layer object, such as one returned by `IVSRemoteStageStream.layers`.
+ `nil`, which indicates that no layer should be selected and dynamic adaptation is preferred.

For example, the following strategy always selects the lowest quality layer of video available:

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, preferredLayerFor stream: IVSRemoteStageStream) -> IVSRemoteStageStreamLayer? {
    return stream.lowestQualityLayer
}
```

To reset the layer selection and return to dynamic adaptation, return `nil` from the strategy. In this example, `appState` is a placeholder variable that represents the host application’s state.

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, preferredLayerFor stream: IVSRemoteStageStream) -> IVSRemoteStageStreamLayer? {
    if appState.isAutoMode {
        return nil
    } else {
        return appState.layerChoice
    }
}
```

### Option 3: RemoteStageStream Layer Helpers
<a name="ios-layered-encoding-simulcast-remotestagestream-helpers"></a>

`IVSRemoteStageStream` has several helpers which can be used to make decisions about layer selection and display the corresponding selections to end users:
+ **Layer Events** — Alongside `IVSStageRenderer`, the `IVSRemoteStageStreamDelegate` has events which communicate layer and simulcast adaptation changes:
  + `func stream(_ stream: IVSRemoteStageStream, didChangeAdaption adaption: Bool)`
  + `func stream(_ stream: IVSRemoteStageStream, didChange layers: [IVSRemoteStageStreamLayer])`
  + `func stream(_ stream: IVSRemoteStageStream, didSelect layer: IVSRemoteStageStreamLayer?, reason: IVSRemoteStageStream.LayerSelectedReason)`
+ **Layer Methods** — `IVSRemoteStageStream` has several helper methods which can be used to get information about the stream and the layers being presented. These methods are available on the remote stream provided in the `preferredLayerForStream` strategy, as well as remote streams exposed via `func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream])`.
  + `stream.layers`
  + `stream.selectedLayer`
  + `stream.lowestQualityLayer`
  + `stream.highestQualityLayer`
  + `stream.layers(with: IVSRemoteStageStreamLayerConstraints)`

For details, see the `IVSRemoteStageStream` class in the [SDK reference documentation](https://aws.github.io/amazon-ivs-broadcast-docs/latest/ios/). For the `LayerSelected` reason, if `UNAVAILABLE` is returned, this indicates that the requested layer could not be selected. A best-effort selection is made in its place, which typically is a lower quality layer to maintain stream stability.
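
The delegate events listed above can be adopted in a small observer class; `updateUI` is a placeholder for host-application code:

```
// Sketch: surface layer and adaptation changes to the UI via the
// IVSRemoteStageStreamDelegate methods listed above.
class LayerObserver: NSObject, IVSRemoteStageStreamDelegate {
    func stream(_ stream: IVSRemoteStageStream, didChangeAdaption adaption: Bool) {
        updateUI(isAutoMode: adaption)
    }

    func stream(_ stream: IVSRemoteStageStream, didChange layers: [IVSRemoteStageStreamLayer]) {
        updateUI(availableLayers: layers)
    }

    func stream(_ stream: IVSRemoteStageStream, didSelect layer: IVSRemoteStageStreamLayer?, reason: IVSRemoteStageStream.LayerSelectedReason) {
        updateUI(selectedLayer: layer, reason: reason)
    }
}
```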

## Broadcast the Stage to an IVS Channel
<a name="ios-publish-subscribe-broadcast-stage"></a>

To broadcast a stage, create a separate `IVSBroadcastSession` and then follow the usual instructions for broadcasting with the SDK, described above. The `device` property on `IVSStageStream` will be either an `IVSImageDevice` or `IVSAudioDevice` as shown in the snippet above; these can be connected to the `IVSBroadcastSession.mixer` to broadcast the entire stage in a customizable layout.

Optionally, you can composite a stage and broadcast it to an IVS low-latency channel, to reach a larger audience. See [Enabling Multiple Hosts on an Amazon IVS Stream](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multiple-hosts.html) in the IVS Low-Latency Streaming User Guide.
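
As a sketch, devices from newly added streams can be attached to the broadcast session's mixer slots. The slot name below is an assumption and must match a slot defined in your `IVSBroadcastConfiguration` mixer setup:

```
// Sketch: route each remote participant's devices into the broadcast mix.
// "participant-slot" is an illustrative slot name from the broadcast
// session's mixer configuration.
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream]) {
    for stream in streams {
        broadcastSession.attach(stream.device, toSlotWithName: "participant-slot")
    }
}
```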

# How iOS Chooses Camera Resolution and Frame Rate
<a name="ios-publish-subscribe-resolution-framerate"></a>

The camera managed by the broadcast SDK optimizes its resolution and frame rate (frames-per-second, or FPS) to minimize heat production and energy consumption. This section explains how the resolution and frame rate are selected to help host applications optimize for their use cases.

When creating an `IVSLocalStageStream` with an `IVSCamera`, the camera is optimized for a frame rate of `IVSLocalStageStreamVideoConfiguration.targetFramerate` and a resolution of `IVSLocalStageStreamVideoConfiguration.size`. Calling `IVSLocalStageStream.setConfiguration` updates the camera with newer values. 
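
For example, a publisher could step down its quality mid-session. This is a sketch; the throwing setter names follow the SDK's configuration convention seen elsewhere in this guide:

```
// Sketch: apply a lower resolution and frame rate; the SDK re-optimizes
// the camera for the new values.
let newConfig = IVSLocalStageStreamVideoConfiguration()
try newConfig.setSize(CGSize(width: 640, height: 360))
try newConfig.setTargetFramerate(15)
cameraStream.setConfiguration(newConfig)
```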

## Camera Preview
<a name="resolution-framerate-camera-preview"></a>

If you create a preview of an `IVSCamera` without attaching it to an `IVSBroadcastSession` or `IVSStage`, it defaults to a resolution of 1080p and a frame rate of 60 fps.

## Broadcasting a Stage
<a name="resolution-framerate-broadcast-stage"></a>

When using an `IVSBroadcastSession` to broadcast an `IVSStage`, the SDK tries to optimize the camera with a resolution and frame rate that meet the criteria of both sessions.

For example, if the broadcast configuration is set to have a frame rate of 15 FPS and a resolution of 1080p, while the Stage has a frame rate of 30 FPS and a resolution of 720p, the SDK will select a camera configuration with a frame rate of 30 FPS and a resolution of 1080p. The `IVSBroadcastSession` will drop every other frame from the camera, and the `IVSStage` will scale the 1080p image down to 720p.

If a host application plans on using both `IVSBroadcastSession` and `IVSStage` together, with a camera, we recommend that the `targetFramerate` and `size` properties of the respective configurations match. A mismatch could cause the camera to reconfigure itself while capturing video, which will cause a brief delay in video-sample delivery.

If having identical values does not meet the host application’s use case, creating the higher quality camera first will prevent the camera from reconfiguring itself when the lower quality session is added. For example, if you broadcast at 1080p and 30 FPS and then later join a Stage set to 720p and 30 FPS, the camera will not reconfigure itself and video will continue uninterrupted. This is because 720p is less than or equal to 1080p and 30 FPS is less than or equal to 30 FPS.
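
A sketch of the matching-configuration recommendation above, sharing one size and frame rate between the broadcast and stage configurations (setter names assumed per the SDK's throwing-setter convention):

```
// Sketch: use identical size and frame rate so the camera is configured once
// and does not reconfigure when the second session is added.
let sharedSize = CGSize(width: 1280, height: 720)
let sharedFramerate = 30

let broadcastConfig = IVSBroadcastConfiguration()
try broadcastConfig.video.setSize(sharedSize)
try broadcastConfig.video.setTargetFramerate(sharedFramerate)

let stageVideoConfig = IVSLocalStageStreamVideoConfiguration()
try stageVideoConfig.setSize(sharedSize)
try stageVideoConfig.setTargetFramerate(sharedFramerate)
```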

## Arbitrary Frame Rates, Resolutions, and Aspect Ratios
<a name="resolution-framerate-arbitrary"></a>

Most camera hardware can exactly match common formats, such as 720p at 30 FPS or 1080p at 60 FPS. However, it is not possible to exactly match all formats. The broadcast SDK chooses the camera configuration based on the following rules (in priority order):

1. The width and height of the resolution are greater than or equal to the desired resolution, but within this constraint, width and height are as small as possible.

1. The frame rate is greater than or equal to the desired frame rate, but within this constraint, frame rate is as low as possible.

1. The aspect ratio matches the desired aspect ratio.

1. If there are multiple matching formats, the format with the greatest field of view is used.

Here are two examples:
+ The host application is trying to broadcast in 4k at 120 FPS. The selected camera supports only 4k at 60 FPS or 1080p at 120 FPS. The selected format will be 4k at 60 FPS, because the resolution rule is higher priority than the frame-rate rule.
+ An irregular resolution is requested, 1910x1070. The camera will use 1920x1080. *Be careful: choosing a resolution like 1921x1080 will cause the camera to scale up to the next available resolution (such as 2592x1944), which incurs a CPU and memory-bandwidth penalty*.
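
The four rules can be restated as plain code. This is an illustrative model of the selection logic, not the SDK's implementation; `CameraFormat` is a hypothetical type:

```
struct CameraFormat {
    let width: Int, height: Int, fps: Int
    let fieldOfView: Double
}

// Apply the selection rules in priority order, narrowing the candidate pool.
func selectFormat(from formats: [CameraFormat], desired: CameraFormat) -> CameraFormat? {
    var pool = formats

    // Rule 1: resolution covers the request; among those, smallest area.
    let covering = pool.filter { $0.width >= desired.width && $0.height >= desired.height }
    if !covering.isEmpty { pool = covering }
    if let minArea = pool.map({ $0.width * $0.height }).min() {
        pool = pool.filter { $0.width * $0.height == minArea }
    }

    // Rule 2: frame rate covers the request; among those, lowest frame rate.
    let fastEnough = pool.filter { $0.fps >= desired.fps }
    if !fastEnough.isEmpty { pool = fastEnough }
    if let minFps = pool.map({ $0.fps }).min() {
        pool = pool.filter { $0.fps == minFps }
    }

    // Rule 3: matching aspect ratio, if available.
    let matching = pool.filter { $0.width * desired.height == $0.height * desired.width }
    if !matching.isEmpty { pool = matching }

    // Rule 4: greatest field of view wins.
    return pool.max { $0.fieldOfView < $1.fieldOfView }
}
```

In the first example above, 4k at 60 FPS survives rule 1 as the only covering resolution, so the 120 FPS request never reaches rule 2.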

## What about Android?
<a name="resolution-framerate-android"></a>

Android does not adjust its resolution or frame rate on the fly like iOS does, so this does not impact the Android broadcast SDK.

# Known Issues & Workarounds in the IVS iOS Broadcast SDK | Real-Time Streaming
<a name="broadcast-ios-known-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS real-time streaming iOS broadcast SDK and suggests potential workarounds.
+ Changing Bluetooth audio routes can be unpredictable. If you connect a new device mid-session, iOS may or may not automatically change the input route. Also, it is not possible to choose between multiple Bluetooth headsets that are connected at the same time. This happens in both regular broadcast and stage sessions.

  **Workaround:** If you plan to use a Bluetooth headset, connect it before starting the broadcast or stage and leave it connected throughout the session.
+ Participants using an iPhone 14, iPhone 14 Plus, iPhone 14 Pro, or iPhone 14 Pro Max may cause an audio echo issue for other participants.

  **Workaround:** Participants using the affected devices can use headphones to prevent the echo issue for other participants.
+ When a participant joins with a token that is being used by another participant, the first connection is disconnected without a specific error.

  **Workaround:** None.
+ There is a rare issue where the publisher is publishing but the publish state that subscribers receive is `inactive`.

  **Workaround:** Try leaving and then joining the session. If the issue remains, create a new token for the publisher.
+ When a participant is publishing or subscribing, it is possible to receive an error with code 1400 that indicates disconnection due to a network issue, even when the network is stable.

  **Workaround:** Try republishing / resubscribing.
+ A rare audio-distortion issue may occur intermittently during a stage session, typically on calls of longer durations.

  **Workaround:** The participant with distorted audio can either leave and rejoin the session, or unpublish and republish their audio to fix the issue.

# Error Handling in the IVS iOS Broadcast SDK | Real-Time Streaming
<a name="broadcast-ios-error-handling"></a>

This section is an overview of error conditions, how the IVS real-time streaming iOS broadcast SDK reports them to the application, and what an application should do when those errors are encountered.

## Fatal vs. Non-Fatal Errors
<a name="broadcast-ios-fatal-vs-nonfatal-errors"></a>

The error object carries an "is fatal" flag: a boolean stored in the error’s `userInfo` dictionary under the `IVSBroadcastErrorIsFatalKey` key.

In general, fatal errors are related to connection to the Stages server (either a connection cannot be established or is lost and cannot be recovered). The application should re-create the stage and re-join, possibly with a new token or when the device’s connectivity recovers.

Non-fatal errors generally are related to the publish/subscribe state and are handled by the SDK, which retries the publish/subscribe operation.

You can check this property:

```
let nsError = error as NSError
if nsError.userInfo[IVSBroadcastErrorIsFatalKey] as? Bool == true {
  // the error is fatal
}
```
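
For example, an error-delegate callback can branch on that flag; `rejoinStage()` is a placeholder for host-application recovery logic, and the delegate method signature should be checked against the SDK reference:

```
// Sketch: fatal errors require re-creating and re-joining the stage;
// non-fatal errors are retried by the SDK automatically.
func source(_ source: IVSErrorSource, didEmitError error: Error) {
    let nsError = error as NSError
    if nsError.userInfo[IVSBroadcastErrorIsFatalKey] as? Bool == true {
        rejoinStage()
    } else {
        // No action needed; optionally refresh the strategy to force retries.
    }
}
```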

## Join Errors
<a name="broadcast-ios-stage-join-errors"></a>

### Malformed Token
<a name="broadcast-ios-stage-join-errors-malformed-token"></a>

This happens when the stage token is malformed.

The SDK throws a Swift exception with error code = 1000 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a valid token and retry joining.

### Expired Token
<a name="broadcast-ios-stage-join-errors-expired-token"></a>

This happens when the stage token is expired.

The SDK throws a Swift exception with error code = 1001 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a new token and retry joining.

### Invalid or Revoked Token
<a name="broadcast-ios-stage-join-errors-invalid-token"></a>

This happens when the stage token is not malformed but is rejected by the Stages server. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1026 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a valid token and retry joining.

### Network Errors for Initial Join
<a name="broadcast-ios-stage-join-errors-network-initial-join"></a>

This happens when the SDK cannot contact the Stages server to establish a connection. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1300 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Wait for the device’s connectivity to recover and retry joining.

### Network Errors when Already Joined
<a name="broadcast-ios-stage-join-errors-network-already-joined"></a>

If the device’s network connection goes down, the SDK may lose its connection to Stage servers. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1300 and IVSBroadcastErrorIsFatalKey value = YES.

**Action**: Wait for the device’s connectivity to recover and retry joining.

## Publish/Subscribe Errors
<a name="broadcast-ios-publish-subscribe-errors"></a>

### Initial
<a name="broadcast-ios-publish-subscribe-errors-initial"></a>

There are several errors:
+ MultihostSessionOfferCreationFailPublish (1020)
+ MultihostSessionOfferCreationFailSubscribe (1021)
+ MultihostSessionNoIceCandidates (1022)
+ MultihostSessionStageAtCapacity (1024)
+ SignallingSessionCannotRead (1201)
+ SignallingSessionCannotSend (1202)
+ SignallingSessionBadResponse (1203)

These are reported asynchronously through the application-supplied stage renderer.

The SDK retries the operation for a limited number of times. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `IVSErrorDelegate:didEmitError` with the relevant error code and `IVSBroadcastErrorIsFatalKey == NO`.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries.

### Already Established, Then Fail
<a name="broadcast-ios-publish-subscribe-errors-established"></a>

A publish or subscribe can fail after it is established, most likely due to a network error. The error code for a "peer connection lost due to network error" is 1400.

This is reported asynchronously through the application-supplied stage renderer.

The SDK retries the publish/subscribe operation. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `didEmitError` with error code = 1400 and IVSBroadcastErrorIsFatalKey = NO.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries. In the event of total connectivity loss, it’s likely that the connection to Stages will fail too.

# IVS Broadcast SDK: Mixed Devices
<a name="broadcast-mixed-devices"></a>

Mixed devices are audio and video devices that take multiple input sources and generate a single output. Mixing devices is a powerful feature that lets you define and manage multiple on-screen (video) elements and audio tracks. You can combine video and audio from multiple sources such as cameras, microphones, screen captures, and audio and video generated by your app. You can use transitions to move these sources around the video that you stream to IVS, and add to and remove sources mid-stream.

Mixed devices come in image and audio flavors. To create a mixed image device, call:

`DeviceDiscovery.createMixedImageDevice()` on Android

`IVSDeviceDiscovery.createMixedImageDevice()` on iOS

The returned device can be attached to a `BroadcastSession` (low-latency streaming) or `Stage` (real-time streaming), like any other device.

## Terminology
<a name="broadcast-mixed-devices-terminology"></a>

![\[IVS broadcasting mixed devices terminology.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Broadcast_SDK_Mixer_Glossary.png)



| Term | Description | 
| --- | --- | 
| Device | A hardware or software component that produces audio or image input. Examples of devices are microphones, cameras, Bluetooth headsets, and virtual devices such as screen captures or custom-image inputs. | 
| Mixed Device | A `Device` that can be attached to a `BroadcastSession` like any other `Device`, but with additional APIs that allow `Source` objects to be added. Mixed devices have internal mixers that composite audio or images, producing a single output audio and image stream. Mixed devices come in either audio or image flavors.  | 
| Mixed device configuration | A configuration object for the mixed device. For mixed image devices, this configures properties like dimensions and framerate. For mixed audio devices, this configures the channel count. | 
|  Source | A container that defines a visual element’s position on screen and an audio track’s properties in the audio mix. A mixed device can be configured with zero or more sources. Sources are given a configuration that affects how the source’s media are used. The image above shows four image sources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/broadcast-mixed-devices.html)  | 
| Source Configuration |  A configuration object for the source going into a mixed device. The full configuration objects are described below.  | 
| Transition | To move a slot to a new position or change some of its properties, use `MixedDevice.transitionToConfiguration()`. This method takes: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/broadcast-mixed-devices.html) | 

## Mixed Audio Device
<a name="broadcast-mixed-audio-device"></a>

### Configuration
<a name="broadcast-mixed-audio-device-configuration"></a>

`MixedAudioDeviceConfiguration` on Android

`IVSMixedAudioDeviceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `channels` | Integer | Number of output channels from the audio mixer. Valid values: 1, 2. 1 is mono audio; 2, stereo audio. Default: 2. | 

### Source Configuration
<a name="broadcast-mixed-audio-device-source-configuration"></a>

`MixedAudioDeviceSourceConfiguration` on Android

`IVSMixedAudioDeviceSourceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `gain` | Float | Audio gain. This is a multiplier, so any value above 1 increases the gain and any value below 1 decreases it. Valid values: 0-2. Default: 1.  | 
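
Putting the two tables together, the iOS configuration objects could be set up as follows. This is a sketch using the type and property names from the tables above; treat the exact setter forms as assumptions to verify against the SDK reference:

```
// Sketch: stereo mixer output, with one source attenuated to half gain.
let audioConfig = IVSMixedAudioDeviceConfiguration()
audioConfig.channels = 2 // stereo output from the audio mixer

let micSourceConfig = IVSMixedAudioDeviceSourceConfiguration()
micSourceConfig.gain = 0.5 // halve this source's level in the mix
```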

## Mixed Image Device
<a name="broadcast-mixed-image-device"></a>

### Configuration
<a name="broadcast-mixed-image-device-configuration"></a>

`MixedImageDeviceConfiguration` on Android

`IVSMixedImageDeviceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `size` | Vec2 | Size of the video canvas. | 
| `targetFramerate` | Integer | Number of target frames per second for the mixed device. On average, this value should be met, but the system may drop frames under certain circumstances (e.g., high CPU or GPU load). | 
| `transparencyEnabled` | Boolean | This enables blending using the `alpha` property on image source configurations. Setting this to `true` increases memory and CPU consumption. Default: `false`. | 

### Source Configuration
<a name="broadcast-mixed-image-device-source-configuration"></a>

`MixedImageDeviceSourceConfiguration` on Android

`IVSMixedImageDeviceSourceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `alpha` | Float | Alpha of the slot. This is multiplicative with any alpha values in the image. Valid values: 0-1. 0 is fully transparent and 1 is fully opaque. Default: 1. | 
| `aspect` | AspectMode | Aspect-ratio mode for any image rendered in the slot. Valid values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/broadcast-mixed-devices.html) Default: `Fit`  | 
| `fillColor` | Vec4 | Fill color to be used with `aspect Fit` when the slot and image aspect ratios do not match. The format is (red, green, blue, alpha). Valid value (for each channel): 0-1. Default: (0, 0, 0, 0). | 
| `position` | Vec2 | Slot position (in pixels), relative to the top-left corner of the canvas. The origin of the slot also is top-left. | 
| `size` | Vec2 | Size of the slot, in pixels. Setting this value also sets `matchCanvasSize` to `false`. Default: (0, 0); however, because `matchCanvasSize` defaults to `true`, the rendered size of the slot is the canvas size, not (0, 0). | 
| `zIndex` | Float | Relative ordering of slots. Slots with higher `zIndex` values are drawn on top of slots with lower `zIndex` values. | 

## Creating and Configuring a Mixed Image Device
<a name="broadcast-mixed-image-device-creating-configuring"></a>

![\[Configuring a broadcast session for mixing.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Broadcast_SDK_Mixer_Configuring.png)


Here, we create a scene similar to the one at the beginning of this guide, with three on-screen elements:
+ Bottom-left slot for a camera.
+ Bottom-right slot for a logo overlay.
+ Top-right slot for a movie.

Note that the origin for the canvas is the top-left corner and this is the same for the slots. Hence, positioning a slot at (0, 0) puts it in the top-left corner with the entire slot visible.

### iOS
<a name="broadcast-mixed-image-device-creating-configuring-ios"></a>

```
let deviceDiscovery = IVSDeviceDiscovery()
let mixedImageConfig = IVSMixedImageDeviceConfiguration()
mixedImageConfig.size = CGSize(width: 1280, height: 720)
try mixedImageConfig.setTargetFramerate(60)
mixedImageConfig.isTransparencyEnabled = true
let mixedImageDevice = deviceDiscovery.createMixedImageDevice(with: mixedImageConfig)

// Bottom Left
let cameraConfig = IVSMixedImageDeviceSourceConfiguration()
cameraConfig.size = CGSize(width: 320, height: 180)
cameraConfig.position = CGPoint(x: 20, y: mixedImageConfig.size.height - cameraConfig.size.height - 20)
cameraConfig.zIndex = 2
let camera = deviceDiscovery.listLocalDevices().first(where: { $0 is IVSCamera }) as? IVSCamera
let cameraSource = IVSMixedImageDeviceSource(configuration: cameraConfig, device: camera)
mixedImageDevice.add(cameraSource)

// Top Right
let streamConfig = IVSMixedImageDeviceSourceConfiguration()
streamConfig.size = CGSize(width: 640, height: 320)
streamConfig.position = CGPoint(x: mixedImageConfig.size.width - streamConfig.size.width - 20, y: 20)
streamConfig.zIndex = 1
let streamDevice = deviceDiscovery.createImageSource(withName: "stream")
let streamSource = IVSMixedImageDeviceSource(configuration: streamConfig, device: streamDevice)
mixedImageDevice.add(streamSource)

// Bottom Right
let logoConfig = IVSMixedImageDeviceSourceConfiguration()
logoConfig.size = CGSize(width: 320, height: 180)
logoConfig.position = CGPoint(x: mixedImageConfig.size.width - logoConfig.size.width - 20,
                              y: mixedImageConfig.size.height - logoConfig.size.height - 20)
logoConfig.zIndex = 3
let logoDevice = deviceDiscovery.createImageSource(withName: "logo")
let logoSource = IVSMixedImageDeviceSource(configuration: logoConfig, device: logoDevice)
mixedImageDevice.add(logoSource)
```

### Android
<a name="broadcast-mixed-image-device-creating-configuring-android"></a>

```
val deviceDiscovery = DeviceDiscovery(this /* context */)
val mixedImageConfig = MixedImageDeviceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(1280f, 720f))
    setTargetFramerate(60)
    setEnableTransparency(true)
}
val mixedImageDevice = deviceDiscovery.createMixedImageDevice(mixedImageConfig)

// Bottom Left
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, 180f))
    setPosition(BroadcastConfiguration.Vec2(20f, mixedImageConfig.size.y - size.y - 20))
    setZIndex(2)
}
val camera = deviceDiscovery.listLocalDevices().firstNotNullOf { it as? CameraSource }
val cameraSource = MixedImageDeviceSource(cameraConfig, camera)
mixedImageDevice.addSource(cameraSource)

// Top Right
val streamConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(640f, 320f))
    setPosition(BroadcastConfiguration.Vec2(mixedImageConfig.size.x - size.x - 20, 20f))
    setZIndex(1)
}
val streamDevice = deviceDiscovery.createImageInputSource(streamConfig.size)
val streamSource = MixedImageDeviceSource(streamConfig, streamDevice)
mixedImageDevice.addSource(streamSource)

// Bottom Right
val logoConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, 180f))
    setPosition(BroadcastConfiguration.Vec2(mixedImageConfig.size.x - size.x - 20, mixedImageConfig.size.y - size.y - 20))
    setZIndex(3)
}
val logoDevice = deviceDiscovery.createImageInputSource(logoConfig.size)
val logoSource = MixedImageDeviceSource(logoConfig, logoDevice)
mixedImageDevice.addSource(logoSource)
```

## Removing Sources
<a name="broadcast-mixed-devices-removing-sources"></a>

To remove a source, call `MixedDevice.remove` with the `Source` object you want to remove.
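
For example, on iOS, removing the camera source added earlier looks like the following; the Swift-form method name mirrors the generic `MixedDevice.remove` API and should be checked in the SDK reference:

```
// Sketch: detach the camera source from the mix; its slot disappears from
// the composited output.
mixedImageDevice.remove(cameraSource)
```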

## Animations with Transitions
<a name="broadcast-mixed-devices-animations-transitions"></a>

The transition method replaces a source’s configuration with a new configuration. This replacement can be animated over time by setting a duration higher than 0, in seconds. 

### Which Properties Can Be Animated?
<a name="broadcast-mixed-devices-animations-properties"></a>

Not all properties in the slot structure can be animated. Any properties based on Float types can be animated; other properties take effect at either the start or end of the animation.


| Name | Can It Be Animated? | Impact Point | 
| --- | --- | --- | 
| `Audio.gain` | Yes | Interpolated | 
| `Image.alpha` | Yes | Interpolated | 
| `Image.aspect` | No | End | 
| `Image.fillColor` | Yes | Interpolated | 
| `Image.position` | Yes | Interpolated | 
| `Image.size` | Yes | Interpolated | 
| `Image.zIndex` Note: The `zIndex` moves 2D planes through 3D space, so the transition happens when the two planes cross at some point in the middle of the animation. This could be computed, but it depends on the starting and ending `zIndex` values. For a smoother transition, combine this with `alpha`.  | Yes | Unknown | 
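
For interpolated properties, you can think of the transition as a linear blend between the old and new values over the animation duration. This sketch (plain Java, not SDK code) shows the underlying math:

```
public class SlotInterpolation {
    // Linearly interpolate a float property between its start and end values.
    // t is the normalized animation progress, from 0 (start) to 1 (end).
    public static float lerp(float start, float end, float t) {
        return start + (end - start) * t;
    }
}
```

For example, animating `Image.position.x` from 20 to 0 passes through `lerp(20f, 0f, 0.5f)` = 10 at the halfway point.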

### Simple Examples
<a name="broadcast-mixed-devices-animations-examples"></a>

Below are examples of a full-screen camera takeover using the configuration defined above in [Creating and Configuring a Mixed Image Device](#broadcast-mixed-image-device-creating-configuring). This is animated over 0.5 seconds.

#### iOS
<a name="broadcast-mixed-devices-animations-examples-ios"></a>

```
// Continuing the example from above, modifying the existing cameraConfig object.
cameraConfig.size = CGSize(width: 1280, height: 720)
cameraConfig.position = CGPoint.zero
cameraSource.transition(to: cameraConfig, duration: 0.5) { completed in
    if completed {
        print("Animation completed")
    } else {
        print("Animation interrupted")
    }
}
```

#### Android
<a name="broadcast-mixed-devices-animations-examples-android"></a>

```
// Continuing the example from above, modifying the existing cameraConfig object.
cameraConfig.setSize(BroadcastConfiguration.Vec2(1280f, 720f))
cameraConfig.setPosition(BroadcastConfiguration.Vec2(0f, 0f))
cameraSource.transitionToConfiguration(cameraConfig, 500) { completed ->
    if (completed) {
        print("Animation completed")
    } else {
        print("Animation interrupted")
    }
}
```

## Mirroring the Broadcast
<a name="broadcast-mixed-devices-mirroring"></a>


| To mirror an attached image device in the broadcast in this direction … | Use a negative value for … | 
| --- | --- | 
| Horizontally | The width of the slot | 
| Vertically | The height of the slot | 
| Both horizontally and vertically | Both the width and height of the slot | 

The position will need to be adjusted by the same value, to put the slot in the correct position when mirrored.
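
The table and position adjustment above reduce to simple arithmetic: negate the mirrored dimension and shift the position by that same dimension. This illustrative helper (not part of the SDK) computes a mirrored size and position:

```
public class MirrorMath {
    // Returns {width, height, x, y} for a slot mirrored horizontally and/or
    // vertically. Negating a dimension flips the image, so the position must
    // be shifted by that dimension to keep the slot in the same place.
    public static float[] mirror(float width, float height, float x, float y,
                                 boolean horizontal, boolean vertical) {
        float w = horizontal ? -width : width;
        float h = vertical ? -height : height;
        float newX = horizontal ? x + width : x;
        float newY = vertical ? y + height : y;
        return new float[] { w, h, newX, newY };
    }
}
```

For a 320x180 slot at the origin, horizontal mirroring yields size (-320, 180) at position (320, 0), matching the examples below.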

Below are examples for mirroring the broadcast horizontally and vertically.

### iOS
<a name="broadcast-mixed-devices-mirroring-ios"></a>

Horizontal mirroring:

```
let cameraSource = IVSMixedImageDeviceSourceConfiguration()
cameraSource.size = CGSize(width: -320, height: 720)
// Add 320 to position x since our width is -320
cameraSource.position = CGPoint(x: 320, y: 0)
```

Vertical mirroring:

```
let cameraSource = IVSMixedImageDeviceSourceConfiguration()
cameraSource.size = CGSize(width: 320, height: -720)
// Add 720 to position y since our height is -720
cameraSource.position = CGPoint(x: 0, y: 720)
```

### Android
<a name="broadcast-mixed-devices-mirroring-android"></a>

Horizontal mirroring:

```
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(-320f, 180f))
   // Add 320f to position x since our width is -320f
    setPosition(BroadcastConfiguration.Vec2(320f, 0f))
}
```

Vertical mirroring:

```
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, -180f))
    // Add 180f to position y since our height is -180f
    setPosition(BroadcastConfiguration.Vec2(0f, 180f))
}
```

Note: This mirroring is different than the `setMirrored` method on `ImagePreviewView` (Android) and `IVSImagePreviewView` (iOS). That method affects only the local preview view on the device and does not impact the broadcast.

# IVS Broadcast SDK: Token Exchange | Real-Time Streaming
<a name="broadcast-mobile-token-exchange"></a>

Token exchange enables you to upgrade or downgrade participant-token capabilities and update token attributes within the broadcast SDK, without requiring participants to reconnect. This is useful for scenarios like co-hosting, where participants may start with subscribe-only capabilities and later need publish capabilities.

Token exchange is supported in both the mobile and web broadcast SDKs. When a participant exchanges a token, server-side composition detects the updated attributes in real time and automatically adjusts the layout — for example, reassigning the featured slot, reordering participants, or moving a participant into the picture-in-picture overlay — without requiring a reconnect. 

Limitation: Token exchange only works with tokens created on your server using a [key pair](https://docs.aws.amazon.com//ivs/latest/RealTimeUserGuide/getting-started-distribute-tokens.html#getting-started-distribute-tokens-self-signed). It does not work with tokens created via the [CreateParticipantToken API](https://docs.aws.amazon.com/ivs/latest/RealTimeAPIReference/API_CreateParticipantToken.html).

## Exchanging Tokens
<a name="broadcast-mobile-token-exchange-exchanging-tokens"></a>

Exchanging tokens is straightforward: call the `exchangeToken` API on the `Stage` / `IVSStage` object and provide the new token. If the `capabilities` of the new token are different than those of the previous token, the new token's capabilities are evaluated immediately. For example, if the previous token did not have the `publish` capability and the new token does, the stage strategy functions for publishing are invoked, allowing the host application to decide if they want to publish right away with the new capability, or wait. The same is true for removed capabilities: if the previous token had the `publish` capability and the new token does not, the participant immediately unpublishes without invoking the stage strategy functions for publishing.
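
The capability evaluation described above can be sketched as a simple diff of the old and new capability lists (a hypothetical helper for illustration; the SDK performs this internally):

```
import java.util.List;

public class CapabilityDiff {
    // Illustrates how a publish-capability change is evaluated on exchange:
    // gaining "publish" re-invokes the publish strategy functions; losing it
    // unpublishes immediately without consulting the strategy.
    public static String onTokenExchanged(List<String> oldCaps, List<String> newCaps) {
        boolean had = oldCaps.contains("publish");
        boolean has = newCaps.contains("publish");
        if (!had && has) return "invoke-publish-strategy";
        if (had && !has) return "unpublish-immediately";
        return "no-publish-change";
    }
}
```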

When exchanging a token, the previous and new token must have the same values for the following payload fields: 
+ `topic`
+ `resource`
+ `jti`
+ `whip_url`
+ `events_url`

These fields are immutable. Exchanging a token that modifies an immutable field results in the SDK immediately rejecting the exchange.

The remaining fields can be changed, including:
+ `attributes`
+ `capabilities`
+ `user`
+ `_id`
+ `iat`
+ `exp`
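
Because the SDK rejects an exchange that modifies an immutable field, your application can sanity-check the two tokens before calling `exchangeToken`. Assuming standard JWTs, a minimal non-verifying check might look like this; the regex-based claim extraction is for illustration only, and a real JSON parser should be used in production:

```
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenFieldCheck {
    private static final String[] IMMUTABLE =
        { "topic", "resource", "jti", "whip_url", "events_url" };

    // Decodes the payload (second segment) of a JWT without verifying it.
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
    }

    // Extracts a top-level string claim with a naive regex.
    static String claim(String payload, String name) {
        Matcher m = Pattern.compile("\"" + name + "\"\\s*:\\s*\"([^\"]*)\"").matcher(payload);
        return m.find() ? m.group(1) : null;
    }

    // True when every immutable field matches between the two tokens.
    public static boolean immutableFieldsMatch(String oldJwt, String newJwt) {
        String a = decodePayload(oldJwt);
        String b = decodePayload(newJwt);
        for (String field : IMMUTABLE) {
            String va = claim(a, field);
            String vb = claim(b, field);
            if (va == null ? vb != null : !va.equals(vb)) return false;
        }
        return true;
    }
}
```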

### iOS
<a name="broadcast-mobile-token-exchange-exchanging-tokens-ios"></a>



```
let stage = try IVSStage(token: originalToken, strategy: self)
stage.join()
stage.exchangeToken(newToken)
```

### Android
<a name="broadcast-mobile-token-exchange-exchanging-tokens-android"></a>



```
val stage = Stage(context, originalToken, strategy)
stage.join()
stage.exchangeToken(newToken)
```

### Web
<a name="broadcast-web-token-exchange-exchanging-tokens"></a>



```
const stage = new Stage(originalToken, strategy);
await stage.join();
await stage.exchangeToken(newToken);
```

## Receiving Updates
<a name="broadcast-mobile-token-exchange-receiving-updates"></a>

A function in `StageRenderer` / `IVSStageRenderer` receives updates about already-published, remote participants that exchange their tokens to update their `userId` or `attributes`. Remote participants that are not already publishing will have their updated `userId` and `attributes` exposed via the existing `onParticipantJoined` / `participantDidJoin` renderer functions if they eventually publish.

### iOS
<a name="broadcast-mobile-token-exchange-receiving-updates-ios"></a>



```
class MyStageRenderer: NSObject, IVSStageRenderer {
    func stage(_ stage: IVSStage, participantMetadataDidUpdate participant: IVSParticipantInfo) {
        // participant will be a new IVSParticipantInfo instance with updated properties.
    }
}
```

### Android
<a name="broadcast-mobile-token-exchange-receiving-updates-android"></a>



```
private val stageRenderer = object : StageRenderer {
    override fun onParticipantMetadataUpdated(stage: Stage, participantInfo: ParticipantInfo) {
        // participantInfo will be a new ParticipantInfo instance with updated properties.
    }
}
```

### Web
<a name="broadcast-web-token-exchange-receiving-updates"></a>



```
stage.on(StageEvents.STAGE_PARTICIPANT_METADATA_CHANGED, (participantInfo: StageParticipantInfo) => {
    // participantInfo properties will be updated with the changed properties
});
```

## Visibility of Updates
<a name="broadcast-mobile-token-exchange-visibility"></a>

When a participant exchanges a token to update their `userId` or `attributes`, the visibility of these changes depends on their current publishing state: 
+ **If the participant is *not* publishing:** The update is processed silently. If they eventually publish, all SDKs will receive the updated `userId` and `attributes` as part of the initial publish event.
+ **If the participant *is* already publishing:** The update is broadcast immediately for participants using mobile SDKs v1.37.0 and later, the web SDK, and server-side composition. Participants using older mobile SDKs do not see the change until the participant unpublishes and republishes.

This table clarifies the matrix of support:


| Participant State | Observer: Mobile SDK 1.37.0 or Later, Web SDK, Server-Side Composition  | Observer: Older Mobile SDKs | 
| --- | --- | --- | 
| Not publishing (then starts) | ✅ Visible (on publish through participant joined event) | ✅ Visible (on publish through participant joined event) | 
| Already publishing (never republishes) | ✅ Visible (immediately through participant metadata updated event) | ❌ Not Visible | 
| Already publishing (unpublishes and republishes) | ✅ Visible (immediately through participant metadata updated event) | ⚠️ Eventually Visible (on republish through participant joined event) | 

# IVS Broadcast SDK: Custom Image Sources | Real-Time Streaming
<a name="broadcast-custom-image-sources"></a>

Custom image-input sources allow an application to provide its own image input to the broadcast SDK, instead of being limited to the preset cameras. A custom image source can be as simple as a semi-transparent watermark or static “be right back” scene, or it can allow the app to do additional custom processing like adding beauty filters to the camera.

When you use a custom image-input source for custom control of the camera (such as using beauty-filter libraries that require camera access), the broadcast SDK is no longer responsible for managing the camera. Instead, the application is responsible for handling the camera’s lifecycle correctly. See official platform documentation on how your application should manage the camera.

## Android
<a name="custom-image-sources-android"></a>

After you create a `DeviceDiscovery` session, create an image-input source:

```
CustomImageSource imageSource = deviceDiscovery.createImageInputSource(new BroadcastConfiguration.Vec2(1280, 720));
```

This method returns a `CustomImageSource`, which is an image source backed by a standard Android [Surface](https://developer.android.com/reference/android/view/Surface). The subclass `SurfaceSource` can be resized and rotated. You can also create an `ImagePreviewView` to display a preview of its contents.

To retrieve the underlying `Surface`:

```
Surface surface = surfaceSource.getInputSurface();
```

This `Surface` can be used as the output buffer for image producers like Camera2, OpenGL ES, and other libraries. The simplest use case is directly drawing a static bitmap or color into the Surface’s Canvas. However, many libraries (such as beauty-filter libraries) provide a method that allows an application to specify an external `Surface` for rendering. You can use such a method to pass this `Surface` to the filter library, which allows the library to output processed frames for the broadcast session to stream.

This `CustomImageSource` can be wrapped in a `LocalStageStream` and returned by the `StageStrategy` to publish to a `Stage`.

## iOS
<a name="custom-image-sources-ios"></a>

After you create a `DeviceDiscovery` session, create an image-input source:

```
let customSource = broadcastSession.createImageSource(withName: "customSourceName")
```

This method returns an `IVSCustomImageSource`, which is an image source that allows the application to submit `CMSampleBuffers` manually. For supported pixel formats, see the iOS Broadcast SDK Reference; a link to the most current version is in the [Amazon IVS Release Notes](release-notes.md) for the latest broadcast SDK release.

Samples submitted to the custom source will be streamed to the Stage:

```
customSource.onSampleBuffer(sampleBuffer)
```

For streaming video, use this method in a callback. For example, if you’re using the camera, then every time a new sample buffer is received from an `AVCaptureSession`, the application can forward the sample buffer to the custom image source. If desired, the application can apply further processing (like a beauty filter) before submitting the sample to the custom image source.

The `IVSCustomImageSource` can be wrapped in an `IVSLocalStageStream` and returned by the `IVSStageStrategy` to publish to a `Stage`.

# IVS Broadcast SDK: Custom Audio Sources | Real-Time Streaming
<a name="broadcast-custom-audio-sources"></a>

**Note:** This guide only applies to the IVS real-time streaming Android broadcast SDK. Information for the iOS and web SDKs will be published in the future.

Custom audio-input sources allow an application to provide its own audio input to the broadcast SDK, instead of being limited to the device’s built-in microphone. A custom audio source enables applications to stream processed audio with effects, mix multiple audio streams, or integrate with third-party audio processing libraries.

When you use a custom audio-input source, the broadcast SDK is no longer responsible for managing the microphone directly. Instead, your application is responsible for capturing, processing, and submitting audio data to the custom source.

The custom-audio-source workflow follows these steps:

1. Audio input — Create a custom audio source with a specified audio format (sample rate, channel count, and sample format). 

1. Your processing — Capture or generate audio data from your audio processing pipeline.

1. Custom audio source — Submit audio buffers to the custom source using `appendBuffer()`.

1. Stage — Wrap in `LocalStageStream` and publish to the stage via your `StageStrategy`. 

1. Participants — Stage participants receive the processed audio in real time.
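
As a stand-in for step 2, the sketch below generates interleaved 16-bit stereo PCM (a sine tone). The generator itself is illustrative; what matters is the buffer layout, which is the same shape you later hand to `appendBuffer()`:

```
public class PcmGenerator {
    // Generates `frames` frames of interleaved 16-bit stereo PCM containing a
    // sine tone. One frame holds one sample per channel, so the output is
    // frames * 2 channels * 2 bytes long, little-endian.
    public static byte[] sineStereoInt16(int frames, int sampleRate, double freqHz) {
        byte[] out = new byte[frames * 2 * 2];
        for (int i = 0; i < frames; i++) {
            short s = (short) (Math.sin(2 * Math.PI * freqHz * i / sampleRate) * Short.MAX_VALUE);
            int base = i * 4;
            out[base]     = (byte) s;        // left, low byte
            out[base + 1] = (byte) (s >> 8); // left, high byte
            out[base + 2] = (byte) s;        // right, low byte
            out[base + 3] = (byte) (s >> 8); // right, high byte
        }
        return out;
    }
}
```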

## Android
<a name="custom-audio-sources-android"></a>

### Creating a Custom Audio Source
<a name="custom-audio-sources-android-creating-a-custom-audio-source"></a>

After you create a `DeviceDiscovery` session, create a custom audio-input source:

```
DeviceDiscovery deviceDiscovery = new DeviceDiscovery(context); 
 
// Create custom audio source with specific format 
CustomAudioSource customAudioSource = deviceDiscovery.createAudioInputSource( 
   2,  // Number of channels (1 = mono, 2 = stereo) 
   BroadcastConfiguration.AudioSampleRate.RATE_48000,  // Sample rate 
   AudioDevice.Format.INT16  // Audio format (16-bit PCM) 
);
```

This method returns a `CustomAudioSource`, which accepts raw PCM audio data. The custom audio source must be configured with the same audio format that your audio-processing pipeline produces.

#### Supported Audio Formats
<a name="custom-audio-sources-android-submitting-audio-data-supportedi-audio-formats"></a>


| Parameter | Options | Description | 
| --- | --- | --- | 
| Channels | 1 (mono), 2 (stereo) | Number of audio channels. | 
| Sample rate | RATE_16000, RATE_44100, RATE_48000 | Audio sample rate in Hz. 48 kHz is recommended for high quality. | 
| Format | INT16, FLOAT32 | Audio sample format. INT16 is 16-bit fixed-point PCM, FLOAT32 is 32-bit floating-point PCM. Both interleaved and planar formats are available. | 
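
Buffer sizes follow directly from the table above: bytes = frames x channels x bytes per sample. A small illustrative helper for sizing the `ByteBuffer` used in the next section:

```
public class AudioBufferSize {
    // Bytes needed to hold `durationMs` of audio in a given format.
    // bytesPerSample is 2 for INT16 and 4 for FLOAT32.
    public static int bytesFor(int durationMs, int sampleRate, int channels, int bytesPerSample) {
        int frames = sampleRate * durationMs / 1000;
        return frames * channels * bytesPerSample;
    }
}
```

For example, 10 ms of 48 kHz stereo INT16 audio needs `bytesFor(10, 48000, 2, 2)` = 1920 bytes.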

### Submitting Audio Data
<a name="custom-audio-sources-android-submitting-audio-data"></a>

To submit audio data to the custom source, use the `appendBuffer()` method:

```
// Prepare audio data in a ByteBuffer 
ByteBuffer audioBuffer = ByteBuffer.allocateDirect(bufferSize); 
audioBuffer.put(pcmAudioData);  // Your processed audio data 
 
// Calculate the number of bytes 
long byteCount = pcmAudioData.length; 
 
// Submit audio to the custom source 
// presentationTimeUs should be generated by and come from your audio source
int samplesProcessed = customAudioSource.appendBuffer( 
   audioBuffer, 
   byteCount, 
   presentationTimeUs 
); 
 
if (samplesProcessed > 0) { 
   Log.d(TAG, "Successfully submitted " + samplesProcessed + " samples"); 
} else { 
   Log.w(TAG, "Failed to submit audio samples"); 
} 
 
// Clear buffer for reuse 
audioBuffer.clear();
```

**Important considerations:**
+ Audio data must be in the format specified when creating the custom source.
+ Timestamps should be monotonically increasing and provided by your audio source for smooth audio playback.
+ Submit audio regularly to avoid gaps in the stream.
+ The method returns the number of samples processed (0 indicates failure). 
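
One way to honor the monotonic-timestamp requirement is a small guard that rejects out-of-order buffers before submission (illustrative only, not an SDK API):

```
public class TimestampGuard {
    private long lastUs = Long.MIN_VALUE;

    // Returns true when the buffer should be submitted. Timestamps that move
    // backwards (or repeat) are rejected, since they cause audible glitches.
    public boolean accept(long presentationTimeUs) {
        if (presentationTimeUs <= lastUs) {
            return false;
        }
        lastUs = presentationTimeUs;
        return true;
    }
}
```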

### Publishing to a Stage
<a name="custom-audio-sources-android-publishing-to-a-stage"></a>

Wrap the `CustomAudioSource` in an `AudioLocalStageStream` and return it from your `StageStrategy`:

```
// Create the audio stream from custom source 
AudioLocalStageStream audioStream = new AudioLocalStageStream(customAudioSource); 
 
// Define your stage strategy 
Strategy stageStrategy = new Strategy() { 
   @NonNull 
   @Override 
   public List<LocalStageStream> stageStreamsToPublishForParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      List<LocalStageStream> streams = new ArrayList<>(); 
      streams.add(audioStream);  // Publish custom audio 
      return streams; 
   } 
 
   @Override 
   public boolean shouldPublishFromParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return true;  // Control when to publish 
   } 
 
   @Override 
   public Stage.SubscribeType shouldSubscribeToParticipant( 
         @NonNull Stage stage, 
         @NonNull ParticipantInfo participantInfo) { 
      return Stage.SubscribeType.AUDIO_VIDEO; 
   } 
}; 
 
// Create and join the stage 
Stage stage = new Stage(context, stageToken, stageStrategy);
```

### Complete Example: Audio Processing Integration
<a name="custom-audio-sources-android-complete-example"></a>

Here’s a complete example showing integration with an audio-processing SDK:

```
public class AudioStreamingActivity extends AppCompatActivity { 
   private DeviceDiscovery deviceDiscovery; 
   private CustomAudioSource customAudioSource; 
   private AudioLocalStageStream audioStream; 
   private Stage stage; 
 
   @Override 
   protected void onCreate(Bundle savedInstanceState) { 
      super.onCreate(savedInstanceState); 
 
      // Configure audio manager 
      StageAudioManager.getInstance(this) 
         .setPreset(StageAudioManager.UseCasePreset.VIDEO_CHAT); 
 
      // Initialize IVS components 
      initializeIVSStage(); 
 
      // Initialize your audio processing SDK 
      initializeAudioProcessing(); 
   } 
 
   private void initializeIVSStage() { 
      deviceDiscovery = new DeviceDiscovery(this); 
 
      // Create custom audio source (48kHz stereo, 16-bit) 
      customAudioSource = deviceDiscovery.createAudioInputSource( 
         2,  // Stereo 
         BroadcastConfiguration.AudioSampleRate.RATE_48000, 
         AudioDevice.Format.INT16 
      ); 
 
      // Create audio stream 
      audioStream = new AudioLocalStageStream(customAudioSource); 
 
      // Create stage with strategy 
      Strategy strategy = new Strategy() { 
         @NonNull 
         @Override 
         public List<LocalStageStream> stageStreamsToPublishForParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Collections.singletonList(audioStream); 
         } 
 
         @Override 
         public boolean shouldPublishFromParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return true; 
         } 
 
         @Override 
         public Stage.SubscribeType shouldSubscribeToParticipant( 
               @NonNull Stage stage, 
               @NonNull ParticipantInfo participantInfo) { 
            return Stage.SubscribeType.AUDIO_VIDEO; 
         } 
      }; 
 
      stage = new Stage(this, getStageToken(), strategy); 
   } 
 
   private void initializeAudioProcessing() { 
      // Initialize your audio processing SDK 
      // Set up callback to receive processed audio 
      yourAudioSDK.setAudioCallback(new AudioCallback() { 
         @Override 
         public void onProcessedAudio(byte[] audioData, int sampleRate, 
                                     int channels, long timestamp) { 
            // Submit processed audio to IVS Stage 
            submitAudioToStage(audioData, timestamp); 
         } 
      }); 
   } 
 
   // The timestamp is required to come from your audio source and you  
   // should not be generating one on your own, unless your audio source 
   // does not provide one. If that is the case, create your own epoch  
   // timestamp and manually calculate the duration between each sample  
   // using the number of frames and frame size. 

   private void submitAudioToStage(byte[] audioData, long timestamp) { 
      try { 
         // Allocate direct buffer 
         ByteBuffer buffer = ByteBuffer.allocateDirect(audioData.length); 
         buffer.put(audioData); 
 
         // Submit to custom audio source 
         int samplesProcessed = customAudioSource.appendBuffer( 
            buffer, 
            audioData.length, 
            timestamp > 0 ? timestamp : System.nanoTime() / 1000 
         ); 
 
         if (samplesProcessed <= 0) { 
            Log.w(TAG, "Failed to submit audio samples"); 
         } 
 
         buffer.clear(); 
      } catch (Exception e) { 
         Log.e(TAG, "Error submitting audio: " + e.getMessage(), e); 
      } 
   } 
 
   @Override 
   protected void onDestroy() { 
      super.onDestroy(); 
      if (stage != null) { 
          stage.release(); 
      } 
   } 
}
```

### Best Practices
<a name="custom-audio-sources-android-best-practices"></a>

#### Audio Format Consistency
<a name="custom-audio-sources-android-best-practices-audio-format-consistency"></a>

Ensure the audio format you submit matches the format specified when creating the custom source:

```
// If you create with 48kHz stereo INT16 
customAudioSource = deviceDiscovery.createAudioInputSource( 
   2, RATE_48000, INT16 
); 
 
// Your audio data must be: 
// - 2 channels (stereo) 
// - 48000 Hz sample rate 
// - 16-bit interleaved PCM format
```

#### Buffer Management
<a name="custom-audio-sources-android-best-practices-buffer-managemetn"></a>

Use direct `ByteBuffers` and reuse them to minimize garbage collection: 

```
// Allocate once 
private ByteBuffer audioBuffer = ByteBuffer.allocateDirect(BUFFER_SIZE); 
 
// Reuse in callback 
public void onAudioData(byte[] data) { 
   audioBuffer.clear(); 
   audioBuffer.put(data); 
   customAudioSource.appendBuffer(audioBuffer, data.length, getTimestamp()); 
   audioBuffer.clear(); 
}
```

#### Timing and Synchronization
<a name="custom-audio-sources-android-best-practices-timing-and-synchronization"></a>

You must use timestamps provided by your audio source for smooth audio playback. If your audio source does not provide its own timestamp, create your own epoch timestamp and manually calculate the duration between each sample using the number of frames and frame size. 

```
// "audioFrameTimestamp" should be generated by your audio source
// Consult your audio source’s documentation for information on how to get this 
long timestamp = audioFrameTimestamp;
```
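
If your audio source does not supply timestamps and you must synthesize them as described above, one approach is to derive each buffer's timestamp from a fixed epoch plus the number of frames already submitted (a sketch; how you choose the epoch is up to you):

```
public class SyntheticClock {
    private final long epochUs;
    private final int sampleRate;
    private long framesSubmitted = 0;

    public SyntheticClock(long epochUs, int sampleRate) {
        this.epochUs = epochUs;
        this.sampleRate = sampleRate;
    }

    // Timestamp (microseconds) for the next buffer, then advance the clock by
    // the buffer's frame count.
    public long nextTimestampUs(int framesInBuffer) {
        long ts = epochUs + framesSubmitted * 1_000_000L / sampleRate;
        framesSubmitted += framesInBuffer;
        return ts;
    }
}
```

At 48 kHz, each 480-frame buffer advances the timestamp by exactly 10,000 microseconds.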

#### Error Handling
<a name="custom-audio-sources-android-best-practices-error-handling"></a>

Always check the return value of `appendBuffer()`: 

```
int samplesProcessed = customAudioSource.appendBuffer(buffer, count, timestamp); 
 
if (samplesProcessed <= 0) { 
   Log.w(TAG, "Audio submission failed - buffer may be full or format mismatch"); 
   // Handle error: check format, reduce submission rate, etc. 
}
```

# IVS Broadcast SDK: Third-Party Camera Filters | Real-Time Streaming
<a name="broadcast-3p-camera-filters"></a>

This guide assumes you are already familiar with [custom image](broadcast-custom-image-sources.md) sources as well as integrating the [IVS real-time streaming broadcast SDK](broadcast.md) into your application.

Camera filters enable live-stream creators to augment or alter their facial or background appearance. This potentially can increase viewer engagement, attract viewers, and enhance the live-streaming experience.

# Integrating Third-Party Camera Filters
<a name="broadcast-3p-camera-filters-integrating"></a>

You can integrate third-party camera filter SDKs with the IVS broadcast SDK by feeding the filter SDK’s output to a [custom image input source](broadcast-custom-image-sources.md). A custom image-input source allows an application to provide its own image input to the Broadcast SDK. A third-party filter provider’s SDK may manage the camera’s lifecycle to process images from the camera, apply a filter effect, and output it in a format that can be passed to a custom image source.

![\[Integrating third-party camera filter SDKs with the IVS broadcast SDK by feeding the filter SDK’s output to a custom image input source.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Integrating.png)


Consult your third-party filter provider’s documentation for built-in methods to convert a camera frame, with the filter effect applied, to a format that can be passed to a [custom image-input source](broadcast-custom-image-sources.md). The process varies depending on which version of the IVS broadcast SDK is used:
+ **Web** — The filter provider must be able to render its output to a canvas element. The [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method can then be used to return a MediaStream of the canvas’s contents. The MediaStream can then be converted to an instance of a [LocalStageStream](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/classes/LocalStageStream) and published to a Stage.
+ **Android** — The filter provider’s SDK can either render a frame to an Android `Surface` provided by the IVS broadcast SDK or convert the frame to a bitmap. If using a bitmap, it can then be rendered to the underlying `Surface` provided by the custom image source, by unlocking and writing to a canvas.
+ **iOS** — A third-party filter provider’s SDK must provide a camera frame with a filter effect applied as a `CMSampleBuffer`. Refer to your third-party filter vendor SDK’s documentation for information on how to get a `CMSampleBuffer` as the final output after a camera image is processed.

# Using BytePlus with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-byteplus"></a>

This document explains how to use the BytePlus Effects SDK with the IVS broadcast SDK.

## Android
<a name="integrating-byteplus-android"></a>

### Install and Set Up the BytePlus Effects SDK
<a name="integrating-byteplus-android-install-effects-sdk"></a>

See the BytePlus [Android Access Guide](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide) for details on how to install, initialize, and set up the BytePlus Effects SDK.

### Set Up the Custom Image Source
<a name="integrating-byteplus-android-setup-image-source"></a>

After initializing the SDK, feed processed camera frames with a filter effect applied to a custom image-input source. To do that, create an instance of a `DeviceDiscovery` object and create a custom image source. Note that when you use a custom image input source for custom control of the camera, the broadcast SDK is no longer responsible for managing the camera. Instead, the application is responsible for handling the camera’s lifecycle correctly.

#### Kotlin
<a name="integrating-byteplus-android-setup-image-source-code"></a>

```
val deviceDiscovery = DeviceDiscovery(applicationContext)
val customSource = deviceDiscovery.createImageInputSource(
    BroadcastConfiguration.Vec2(720F, 1280F)
)
val surface: Surface = customSource.inputSurface
val filterStream = ImageLocalStageStream(customSource)
```

### Convert Output to a Bitmap and Feed to Custom Image Input Source
<a name="integrating-byteplus-android-convert-to-bitmap"></a>

To forward camera frames with a filter effect applied from the BytePlus Effects SDK directly to the IVS broadcast SDK, convert the BytePlus Effects SDK’s texture output to a bitmap. When an image is processed, the SDK invokes the `onDrawFrame()` method, a public method of Android’s [GLSurfaceView.Renderer](https://developer.android.com/reference/android/opengl/GLSurfaceView.Renderer) interface. In the Android sample app provided by BytePlus, this method is called on every camera frame and outputs a texture. You can supplement `onDrawFrame()` with logic that converts this texture to a bitmap and feeds it to a custom image-input source. As shown in the following code sample, use the `transferTextureToBitmap` method for this conversion; it is provided by the [com.bytedance.labcv.core.util.ImageUtil](https://docs.byteplus.com/en/effects/docs/android-v4101-access-guide#Appendix:%20convert%20input%20texture%20to%202D%20texture%20with%20upright%20face) library from the BytePlus Effects SDK. You can then render the resulting bitmap to the underlying Android `Surface` of a `CustomImageSource` by writing it to the Surface’s canvas. Successive invocations of `onDrawFrame()` produce a sequence of bitmaps that, combined, create a stream of video.

#### Java
<a name="integrating-byteplus-android-convert-to-bitmap-code"></a>

```
import com.bytedance.labcv.core.util.ImageUtil;
...
protected ImageUtil imageUtility;
...


@Override
public void onDrawFrame(GL10 gl10) {
  ...
  // Convert BytePlus output to a Bitmap
  Bitmap outputBitmap = imageUtility.transferTextureToBitmap(
      output.getTexture(),
      ByteEffectConstants.TextureFormat.Texture2D,
      output.getWidth(),
      output.getHeight());

  // Draw the bitmap into the custom image source's Surface
  canvas = surface.lockCanvas(null);
  canvas.drawBitmap(outputBitmap, 0f, 0f, null);
  surface.unlockCanvasAndPost(canvas);
}
```

# Using DeepAR with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-deepar"></a>

This document explains how to use the DeepAR SDK with the IVS broadcast SDK.

## Android
<a name="integrating-deepar-android"></a>

See the [Android Integration Guide from DeepAR](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/android/) for details on how to integrate the DeepAR SDK with the Android IVS broadcast SDK.

## iOS
<a name="integrating-deepar-ios"></a>

See the [iOS Integration Guide from DeepAR](https://docs.deepar.ai/deepar-sdk/integrations/video-calling/amazon-ivs/ios/) for details on how to integrate the DeepAR SDK with the iOS IVS broadcast SDK.

# Using Snap with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-integrating-snap"></a>

This document explains how to use Snap’s Camera Kit SDK with the IVS broadcast SDK.

## Web
<a name="integrating-snap-web"></a>

This section assumes you are already familiar with [publishing and subscribing to video using the Web Broadcast SDK](getting-started-pub-sub-web.md).

To integrate Snap’s Camera Kit SDK with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install the Camera Kit SDK and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add setup elements.

1. Create `index.css`.

1. Display and set up participants.

1. Display connected cameras and microphones.

1. Create a Camera Kit session.

1. Fetch lenses and populate lens selector.

1. Render the output from a Camera Kit session to a canvas.

1. Create a function to populate the Lens dropdown.

1. Provide Camera Kit with a media source for rendering and publish a `LocalStageStream`.

1. Create `package.json`.

1. Create a Webpack config file.

1. Set up an HTTPS server and test.

Each of these steps is described below.
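When you have completed all the steps, your project directory will look something like the following sketch. (The `dist` folder and `bundle.js` are generated by Webpack, and the certificate files are created during the testing step; your layout may vary slightly.)

```
├── index.html
├── index.css
├── helpers.js
├── media-devices.js
├── stages.js
├── package.json
├── webpack.config.js
├── https_server.py
├── cert.pem
├── key.pem
└── dist/
    └── bundle.js
```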

### Install the Camera Kit SDK and Webpack
<a name="integrating-snap-web-install-camera-kit"></a>

In this example we use Webpack as our bundler; however, you can use any bundler.

```
npm i @snap/camera-kit webpack webpack-cli
```

### Create index.html
<a name="integrating-snap-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version that you are using.

#### HTML
<a name="integrating-snap-web-create-index-code"></a>

```
<!--
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */
-->
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <title>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</title>

  <!-- Fonts and Styling -->
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/milligram/1.4.1/milligram.css" />
  <link rel="stylesheet" href="./index.css" />

  <!-- Stages in Broadcast SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>
  <!-- Introduction -->
  <header>
    <h1>Amazon IVS Real-Time Streaming Web Sample (HTML and JavaScript)</h1>

    <p>This sample is used to demonstrate basic HTML / JS usage. <b><a href="https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/multiple-hosts.html">Use the AWS CLI</a></b> to create a <b>Stage</b> and a corresponding <b>ParticipantToken</b>. Multiple participants can load this page and put in their own tokens. You can <b><a href="https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#glossary" target="_blank">read more about stages in our public docs.</a></b></p>
  </header>
  <hr />
  
  <!-- Setup Controls -->
 
  <!-- Display Local Participants -->
  
  <!-- Lens Selector -->

  <!-- Display Remote Participants -->

  <!-- Load All Desired Scripts -->
```

### Add Setup Elements
<a name="integrating-snap-web-add-setup-elements"></a>

Create the HTML for selecting a camera, microphone, and lens and specifying a participant token:

#### HTML
<a name="integrating-snap-web-setup-controls-code"></a>

```
<!-- Setup Controls -->
  <div class="row">
    <div class="column">
      <label for="video-devices">Select Camera</label>
      <select disabled id="video-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="audio-devices">Select Microphone</label>
      <select disabled id="audio-devices">
        <option selected disabled>Choose Option</option>
      </select>
    </div>
    <div class="column">
      <label for="token">Participant Token</label>
      <input type="text" id="token" name="token" />
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="join-button">Join Stage</button>
    </div>
    <div class="column" style="display: flex; margin-top: 1.5rem">
      <button class="button" style="margin: auto; width: 100%" id="leave-button">Leave Stage</button>
    </div>
  </div>
```

Add additional HTML beneath that to display camera feeds from local and remote participants:

#### HTML
<a name="integrating-snap-web-local-remote-participants-code"></a>

```
 <!-- Local Participant -->
<div class="row local-container">
    <canvas id="canvas"></canvas>

    <div class="column" id="local-media"></div>
    <div class="static-controls hidden" id="local-controls">
      <button class="button" id="mic-control">Mute Mic</button>
      <button class="button" id="camera-control">Mute Camera</button>
    </div>
  </div>

  
  <hr style="margin-top: 5rem"/>
  
  <!-- Remote Participants -->
  <div class="row">
    <div id="remote-media"></div>
  </div>
```

Load additional logic, including helper methods for setting up the camera and the bundled JavaScript file. (Later in this section, you will create these JavaScript files and bundle them into a single file, so you can import Camera Kit as a module. The bundled JavaScript file will contain the logic for setting up Camera Kit, applying a Lens, and publishing the camera feed with a Lens applied to a stage.) Add closing tags for the `body` and `html` elements to complete the creation of `index.html`.

#### HTML
<a name="integrating-snap-web-load-all-scripts-code"></a>

```
<!-- Load all Desired Scripts -->
  <script src="./helpers.js"></script>
  <script src="./media-devices.js"></script>
  <script src="./dist/bundle.js"></script>
</body>
</html>
```

### Create index.css
<a name="integrating-snap-web-create-index-css"></a>

Create a CSS source file to style the page. We do not cover this code in detail, to keep the focus on the logic for managing a Stage and integrating with Snap’s Camera Kit SDK.

#### CSS
<a name="integrating-snap-web-create-index-css-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

html,
body {
  margin: 2rem;
  box-sizing: border-box;
  height: 100vh;
  max-height: 100vh;
  display: flex;
  flex-direction: column;
}

hr {
  margin: 1rem 0;
}

table {
  display: table;
}

canvas {
  margin-bottom: 1rem;
  background: green;
}

video {
  margin-bottom: 1rem;
  background: black;
  max-width: 100%;
  max-height: 150px;
}

.log {
  flex: none;
  height: 300px;
}

.content {
  flex: 1 0 auto;
}

.button {
  display: block;
  margin: 0 auto;
}

.local-container {
  position: relative;
}

.static-controls {
  position: absolute;
  margin-left: auto;
  margin-right: auto;
  left: 0;
  right: 0;
  bottom: -4rem;
  text-align: center;
}

.static-controls button {
  display: inline-block;
}

.hidden {
  display: none;
}

.participant-container {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: column;
  margin: 1rem;
}

video {
  border: 0.5rem solid #555;
  border-radius: 0.5rem;
}
.placeholder {
  background-color: #333333;
  display: flex;
  text-align: center;
  margin-bottom: 1rem;
}
.placeholder span {
  margin: auto;
  color: white;
}
#local-media {
  display: inline-block;
  width: 100vw;
}

#local-media video {
  max-height: 300px;
}

#remote-media {
  display: flex;
  justify-content: center;
  align-items: center;
  flex-direction: row;
  width: 100%;
}

#lens-selector {
  width: 100%;
  margin-bottom: 1rem;
}
```

### Display and Set Up Participants
<a name="integrating-snap-web-setup-participants"></a>

Next, create `helpers.js`, which contains helper methods that you will use to display and set up participants:

#### JavaScript
<a name="integrating-snap-web-setup-participants-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

function setupParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);

  const participantContainerId = isLocal ? 'local' : id;
  const participantContainer = createContainer(participantContainerId);
  const videoEl = createVideoEl(participantContainerId);

  participantContainer.appendChild(videoEl);
  groupContainer.appendChild(participantContainer);

  return videoEl;
}

function teardownParticipant({ isLocal, id }) {
  const groupId = isLocal ? 'local-media' : 'remote-media';
  const groupContainer = document.getElementById(groupId);
  const participantContainerId = isLocal ? 'local' : id;

  const participantDiv = document.getElementById(
    participantContainerId + '-container'
  );
  if (!participantDiv) {
    return;
  }
  groupContainer.removeChild(participantDiv);
}

function createVideoEl(id) {
  const videoEl = document.createElement('video');
  videoEl.id = id;
  videoEl.autoplay = true;
  videoEl.playsInline = true;
  videoEl.srcObject = new MediaStream();
  return videoEl;
}

function createContainer(id) {
  const participantContainer = document.createElement('div');
  participantContainer.classList = 'participant-container';
  participantContainer.id = id + '-container';

  return participantContainer;
}
```

### Display Connected Cameras and Microphones
<a name="integrating-snap-web-display-cameras-microphones"></a>

Next, create `media-devices.js`, which contains helper methods for displaying cameras and microphones connected to your device:

#### JavaScript
<a name="integrating-snap-web-display-cameras-microphones-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

/**
 * Returns an initial list of devices populated on the page selects
 */
async function initializeDeviceSelect() {
  const videoSelectEl = document.getElementById('video-devices');
  videoSelectEl.disabled = false;

  const { videoDevices, audioDevices } = await getDevices();
  videoDevices.forEach((device, index) => {
    videoSelectEl.options[index] = new Option(device.label, device.deviceId);
  });

  const audioSelectEl = document.getElementById('audio-devices');

  audioSelectEl.disabled = false;
  audioDevices.forEach((device, index) => {
    audioSelectEl.options[index] = new Option(device.label, device.deviceId);
  });
}

/**
 * Returns all devices available on the current device
 */
async function getDevices() {
  // Prevents issues on Safari/FF so devices are not blank
  await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const devices = await navigator.mediaDevices.enumerateDevices();
  // Get all video devices
  const videoDevices = devices.filter((d) => d.kind === 'videoinput');
  if (!videoDevices.length) {
    console.error('No video devices found.');
  }

  // Get all audio devices
  const audioDevices = devices.filter((d) => d.kind === 'audioinput');
  if (!audioDevices.length) {
    console.error('No audio devices found.');
  }

  return { videoDevices, audioDevices };
}

async function getCamera(deviceId) {
  // Use Max Width and Height
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
    audio: false,
  });
}

async function getMic(deviceId) {
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: {
      deviceId: deviceId ? { exact: deviceId } : null,
    },
  });
}
```

### Create a Camera Kit Session
<a name="integrating-snap-web-camera-kit-session"></a>

Create `stages.js`, which contains the logic for applying a Lens to the camera feed and publishing the feed to a stage. We recommend copying and pasting the following code block into `stages.js`. You can then review the code piece by piece to understand what’s going on in the following sections.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);

  const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};

async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);

  // Create StageStreams for Audio and Video
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
      lensSelector.disabled = false;
    } else {
      controls.classList.add('hidden');
      lensSelector.disabled = true;
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

In the first part of this file, we import the broadcast SDK and Camera Kit Web SDK and initialize the variables we will use with each SDK. We create a Camera Kit session by calling `createSession` after [bootstrapping the Camera Kit Web SDK](https://kit.snapchat.com/reference/CameraKit/web/0.7.0/index.html#bootstrapping-the-sdk). Note that a canvas element object is passed to a session; this tells Camera Kit to render into that canvas.

#### JavaScript
<a name="integrating-snap-web-camera-kit-session-code-2"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType,
} = IVSBroadcastClient;

import {
  bootstrapCameraKit,
  createMediaStreamSource,
  Transform2D,
} from '@snap/camera-kit';

let cameraButton = document.getElementById('camera-control');
let micButton = document.getElementById('mic-control');
let joinButton = document.getElementById('join-button');
let leaveButton = document.getElementById('leave-button');

let controls = document.getElementById('local-controls');
let videoDevicesList = document.getElementById('video-devices');
let audioDevicesList = document.getElementById('audio-devices');

let lensSelector = document.getElementById('lens-selector');
let session;
let availableLenses = [];

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;

const liveRenderTarget = document.getElementById('canvas');

const init = async () => {
  await initializeDeviceSelect();

  const cameraKit = await bootstrapCameraKit({
    apiToken: 'INSERT_YOUR_API_TOKEN_HERE',
  });

  session = await cameraKit.createSession({ liveRenderTarget });
```

### Fetch Lenses and Populate Lens Selector
<a name="integrating-snap-web-fetch-apply-lens"></a>

To fetch your Lenses, replace the Lens Group ID placeholder with your own, which can be found in the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/). Populate the Lens selection dropdown using the `populateLensSelector()` function, which we will create later.

#### JavaScript
<a name="integrating-snap-web-fetch-apply-lens-code"></a>

```
session = await cameraKit.createSession({ liveRenderTarget });
  const { lenses } = await cameraKit.lensRepository.loadLensGroups([
    'INSERT_YOUR_LENS_GROUP_ID_HERE',
  ]);

  availableLenses = lenses;
  populateLensSelector(lenses);
```

### Render the Output from a Camera Kit Session to a Canvas
<a name="integrating-snap-web-render-output-to-canvas"></a>

Use the [captureStream](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/captureStream) method to return a `MediaStream` of the canvas’s contents. The canvas will contain a video stream of the camera feed with a Lens applied. Also, add event listeners for buttons to mute the camera and microphone as well as event listeners for joining and leaving a stage. In the event listener for joining a stage, we pass in a Camera Kit session and the `MediaStream` from the canvas so it can be published to a stage.

#### JavaScript
<a name="integrating-snap-web-render-output-to-canvas-code"></a>

```
const snapStream = liveRenderTarget.captureStream();

  lensSelector.addEventListener('change', handleLensChange);
  lensSelector.disabled = true;
  cameraButton.addEventListener('click', () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? 'Show Camera' : 'Hide Camera';
  });

  micButton.addEventListener('click', () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? 'Unmute Mic' : 'Mute Mic';
  });

  joinButton.addEventListener('click', () => {
    joinStage(session, snapStream);
  });

  leaveButton.addEventListener('click', () => {
    leaveStage();
  });
};
```

### Create a Function to Populate the Lens Dropdown
<a name="integrating-snap-web-populate-lens-dropdown"></a>

Create the following function to populate the **Lens** selector with the lenses fetched earlier. The **Lens** selector is a UI element on the page that lets you select from a list of lenses to apply to the camera feed. Also, create the `handleLensChange` callback function to apply the specified lens when it is selected from the **Lens** dropdown.

#### JavaScript
<a name="integrating-snap-web-populate-lens-dropdown-code"></a>

```
const populateLensSelector = (lenses) => {
  lensSelector.innerHTML = '<option selected disabled>Choose Lens</option>';

  lenses.forEach((lens, index) => {
    const option = document.createElement('option');
    option.value = index;
    option.text = lens.name || `Lens ${index + 1}`;
    lensSelector.appendChild(option);
  });
};

const handleLensChange = (event) => {
  const selectedIndex = parseInt(event.target.value);
  if (session && availableLenses[selectedIndex]) {
    session.applyLens(availableLenses[selectedIndex]);
  }
};
```

### Provide Camera Kit with a Media Source for Rendering and Publish a LocalStageStream
<a name="integrating-snap-web-publish-localstagestream"></a>

To publish a video stream with a Lens applied, create a function called `setCameraKitSource` that gives Camera Kit a camera `MediaStream` to render. At this point, the `MediaStream` captured from the canvas isn’t doing anything, because we have not incorporated our local camera feed yet. To incorporate it, call the `getCamera` helper method and assign the result to `localCamera`. We can then pass our local camera feed (via `localCamera`) and the session object to `setCameraKitSource`. The `setCameraKitSource` function converts our local camera feed to a [source of media for CameraKit](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#creating-a-camerakitsource) by calling `createMediaStreamSource`. The media source for `CameraKit` is then [transformed](https://docs.snap.com/camera-kit/integrate-sdk/web/web-configuration#2d-transforms) to mirror the front-facing camera. Finally, calling `session.play()` applies the Lens effect to the media source and renders it to the output canvas.

With the Lens now applied to the `MediaStream` captured from the canvas, we can publish it to a stage. We do that by creating a `LocalStageStream` from the video track of the `MediaStream`. The `LocalStageStream` instance can then be returned from a `StageStrategy` to be published.

#### JavaScript
<a name="integrating-snap-web-publish-localstagestream-code"></a>

```
async function setCameraKitSource(session, mediaStream) {
  const source = createMediaStreamSource(mediaStream);
  await session.setSource(source);
  source.setTransform(Transform2D.MirrorX);
  session.play();
}

const joinStage = async (session, snapStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById('token').value;

  if (!token) {
    window.alert('Please enter a participant token');
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localCamera = await getCamera(videoDevicesList.value);
  localMic = await getMic(audioDevicesList.value);
  await setCameraKitSource(session, localCamera);
  // Create StageStreams for Audio and Video
  cameraStageStream = new LocalStageStream(snapStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };
```

The remaining code below is for creating and managing our stage:

#### JavaScript
<a name="integrating-snap-web-create-manage-stage-code"></a>

```
stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events

  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove('hidden');
    } else {
      controls.classList.add('hidden');
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log('Participant Joined:', participant);
  });

  stage.on(
    StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
    (participant, streams) => {
      console.log('Participant Media Added: ', participant, streams);

      let streamsToDisplay = streams;

      if (participant.isLocal) {
        // Ensure to exclude local audio streams, otherwise echo will occur
        streamsToDisplay = streams.filter(
          (stream) => stream.streamType === StreamType.VIDEO
        );
      }

      const videoEl = setupParticipant(participant);
      streamsToDisplay.forEach((stream) =>
        videoEl.srcObject.addTrack(stream.mediaStreamTrack)
      );
    }
  );

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log('Participant Left: ', participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = 'Hide Camera';
  micButton.innerText = 'Mute Mic';
  controls.classList.add('hidden');
};

init();
```

### Create package.json
<a name="integrating-snap-web-package-json"></a>

Create `package.json` and add the following JSON configuration. This file defines our dependencies and includes a script command for bundling our code.

#### JSON Configuration
<a name="integrating-snap-web-package-json-code"></a>

```
{
  "dependencies": {
    "@snap/camera-kit": "^0.10.0"
  },
  "name": "ivs-stages-with-snap-camerakit",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "build": "webpack"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "webpack": "^5.95.0",
    "webpack-cli": "^5.1.4"
  }
}
```

### Create a Webpack Config File
<a name="integrating-snap-web-webpack-config"></a>

Create `webpack.config.js` and add the following code. This bundles the code we created thus far so that we can use the import statement to use Camera Kit.

#### JavaScript
<a name="integrating-snap-web-webpack-config-code"></a>

```
const path = require('path');
module.exports = {
  entry: ['./stages.js'],
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};
```

Finally, run `npm run build` to bundle your JavaScript as defined in the Webpack config file. For testing, you can then serve the HTML and JavaScript from your local computer; in this example, we use Python’s `http.server` module.

### Set Up an HTTPS Server and Test
<a name="integrating-snap-web-https-server-test"></a>

To test our code, we need to set up an HTTPS server. Using an HTTPS server for local development and testing of your web app's integration with the Snap Camera Kit SDK will help avoid CORS (Cross-Origin Resource Sharing) issues.

Open a terminal and navigate to the directory where you created all the code up to this point. Run the following command to generate a self-signed SSL/TLS certificate and private key:

```
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

This creates two files: `key.pem` (the private key) and `cert.pem` (the self-signed certificate). Create a new Python file named `https_server.py` and add the following code:

#### Python
<a name="integrating-snap-web-https-server-test-code"></a>

```
import http.server
import ssl

# Set the directory to serve files from
DIRECTORY = '.'

# Create the HTTPS server
server_address = ('', 4443)
httpd = http.server.HTTPServer(
    server_address, http.server.SimpleHTTPRequestHandler)

# Wrap the socket with SSL/TLS
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain('cert.pem', 'key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

print(f'Starting HTTPS server on https://localhost:4443, serving {DIRECTORY}')
httpd.serve_forever()
```

Open a terminal, navigate to the directory where you created the `https_server.py` file, and run the following command:

```
python3 https_server.py
```

This starts the HTTPS server on https://localhost:4443, serving files from the current directory. Make sure that the `cert.pem` and `key.pem` files are in the same directory as the `https_server.py` file.

Open your browser and navigate to https://localhost:4443. Because the SSL/TLS certificate is self-signed, your browser will not trust it and will show a warning; since this is only for testing purposes, you can bypass the warning. You should then see the AR effect of the Snap Lens you specified earlier applied to your camera feed on screen.

Note that this setup, using Python’s built-in `http.server` and `ssl` modules, is suitable for local development and testing, but it is not recommended for a production environment. The self-signed SSL/TLS certificate used here is not trusted by web browsers and other clients, which means users will encounter security warnings when accessing the server. Also, although we use Python’s built-in `http.server` and `ssl` modules in this example, you may choose another HTTPS server solution.

## Android
<a name="integrating-snap-android"></a>

To integrate Snap’s Camera Kit SDK with the IVS Android broadcast SDK, you must install the Camera Kit SDK, initialize a Camera Kit session, apply a Lens and feed the Camera Kit session’s output to the custom-image input source.

To install the Camera Kit SDK, add the following to your module’s `build.gradle` file. Replace `$cameraKitVersion` with the [latest Camera Kit SDK version](https://docs.snap.com/camera-kit/integrate-sdk/mobile/changelog-mobile).

### Java
<a name="integrating-snap-android-install-camerakit-sdk-code"></a>

```
implementation "com.snap.camerakit:camerakit:$cameraKitVersion"
```

Initialize and obtain a `cameraKitSession`. Camera Kit also provides a convenient wrapper for Android’s [CameraX](https://developer.android.com/media/camera/camerax) APIs, so you don’t have to write complicated logic to use CameraX with Camera Kit. You can use the `CameraXImageProcessorSource` object as a [Source](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-source/index.html) for [ImageProcessor](https://snapchat.github.io/camera-kit-reference/api/android/latest/-camera-kit/com.snap.camerakit/-image-processor/index.html), which lets you start streaming camera-preview frames.

### Java
<a name="integrating-snap-android-initialize-camerakitsession-code"></a>

```
protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    setContentView(R.layout.activity_main);

    // Camera Kit provides an implementation of ImageProcessor backed by the
    // CameraX library: https://developer.android.com/training/camerax
    CameraXImageProcessorSource imageProcessorSource = new CameraXImageProcessorSource(
        this /*context*/, this /*lifecycleOwner*/
    );
    imageProcessorSource.startPreview(true /*cameraFacingFront*/);

    cameraKitSession = Sessions.newBuilder(this)
            .imageProcessorSource(imageProcessorSource)
            .attachTo(findViewById(R.id.camerakit_stub))
            .build();
}
```

### Fetch and Apply Lenses
<a name="integrating-snap-android-fetch-apply-lenses"></a>

You can configure Lenses and their ordering in the carousel on the [Camera Kit Developer Portal](https://camera-kit.snapchat.com/):

#### Java
<a name="integrating-snap-android-configure-lenses-code"></a>

```
// Fetch lenses from the repository and apply them.
// Replace LENS_GROUP_ID with a Lens Group ID from https://camera-kit.snapchat.com
cameraKitSession.getLenses().getRepository().get(new Available(LENS_GROUP_ID), available -> {
    Log.d(TAG, "Available lenses: " + available);
    Lenses.whenHasFirst(available, lens -> cameraKitSession.getLenses().getProcessor().apply(lens, result -> {
        Log.d(TAG, "Apply lens [" + lens + "] success: " + result);
    }));
});
```

To broadcast, send processed frames to the underlying `Surface` of a custom image source. Use a `DeviceDiscovery` object and create a `CustomImageSource` to return a `SurfaceSource`. You can then render the output from a `CameraKit` session to the underlying `Surface` provided by the `SurfaceSource`.

#### Kotlin
<a name="integrating-snap-android-broadcast-code"></a>

```
val publishStreams = ArrayList<LocalStageStream>()

val deviceDiscovery = DeviceDiscovery(applicationContext)
val surfaceSource = deviceDiscovery.createImageInputSource(BroadcastConfiguration.Vec2(720f, 1280f))

// Render the Camera Kit session's output to the underlying Surface of the
// custom image source
cameraKitSession.processor.connectOutput(outputFrom(surfaceSource.inputSurface))

// After rendering the output from a Camera Kit session to the Surface, you can
// then return it as a LocalStageStream to be published by the Broadcast SDK
val customStream: ImageLocalStageStream = ImageLocalStageStream(surfaceSource)
publishStreams.add(customStream)

override fun stageStreamsToPublishForParticipant(stage: Stage, participantInfo: ParticipantInfo): List<LocalStageStream> = publishStreams
```

# Using Background Replacement with the IVS Broadcast SDK
<a name="broadcast-3p-camera-filters-background-replacement"></a>

Background replacement is a type of camera filter that enables live-stream creators to change their backgrounds. As shown in the following diagram, replacing your background involves:

1. Getting a camera image from the live camera feed.

1. Segmenting it into foreground and background components using Google ML Kit.

1. Combining the resulting segmentation mask with a custom background image.

1. Passing it to a Custom Image Source for broadcast.

![\[Workflow for implementing background replacement.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/3P_Camera_Filters_Background_Replacement.png)


## Web
<a name="background-replacement-web"></a>

This section assumes you are already familiar with [publishing and subscribing to video using the Web Broadcast SDK](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/getting-started-pub-sub-web.html).

To replace the background of a live stream with a custom image, use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model) with [MediaPipe Image Segmenter](https://developers.google.com/mediapipe/solutions/vision/image_segmenter). This is a machine-learning model that identifies which pixels in the video frame are in the foreground or background. You can then use the results from the model to replace the background of a live stream, by copying foreground pixels from the video feed to a custom image representing the new background.
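The copy operation just described is simple enough to sketch in isolation. The following plain-JavaScript function is illustrative only (the function name and array conventions are ours, not part of MediaPipe or the broadcast SDK); it follows the same convention as the code later in this section, where lower mask values indicate the foreground:

```javascript
// Illustrative sketch: composite foreground pixels onto a background using a
// per-pixel segmentation mask. `mask` holds one float per pixel (lower values
// mean foreground, mirroring the convention used later in this section);
// `frame` and `background` are RGBA byte arrays (4 bytes per pixel).
function compositeWithMask(mask, frame, background) {
  const output = Uint8ClampedArray.from(background);
  for (let i = 0; i < mask.length; i++) {
    const j = i * 4; // 4 bytes (RGBA) per pixel
    // Copy camera pixels only where the mask marks the foreground
    if (Math.round(mask[i] * 255) < 255) {
      output[j] = frame[j];         // R
      output[j + 1] = frame[j + 1]; // G
      output[j + 2] = frame[j + 2]; // B
      output[j + 3] = frame[j + 3]; // A
    }
  }
  return output;
}
```

The real implementation below does the same thing against `ImageData` buffers obtained from the two canvas elements.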

To integrate background replacement with the IVS real-time streaming Web broadcast SDK, you need to:

1. Install MediaPipe and Webpack. (Our example uses Webpack as the bundler, but you can use any bundler of your choice.)

1. Create `index.html`.

1. Add media elements.

1. Add a script tag.

1. Create `app.js`.

1. Load a custom background image.

1. Create an instance of `ImageSegmenter`.

1. Render the video feed to a canvas.

1. Create background replacement logic.

1. Create a Webpack config file.

1. Bundle your JavaScript file.

### Install MediaPipe and Webpack
<a name="background-replacement-web-install-mediapipe-webpack"></a>

To start, install the `@mediapipe/tasks-vision` and `webpack` npm packages. The example below uses Webpack as a JavaScript bundler; you can use a different bundler if preferred.

#### JavaScript
<a name="background-replacement-web-install-mediapipe-webpack-code"></a>

```
npm i @mediapipe/tasks-vision webpack webpack-cli
```

Make sure to also update your `package.json` to specify `webpack` as your build script:

#### JavaScript
<a name="background-replacement-web-update-package-json-code"></a>

```
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "build": "webpack"
  },
```

### Create index.html
<a name="background-replacement-web-create-index"></a>

Next, create the HTML boilerplate and import the Web broadcast SDK as a script tag. In the following code, be sure to replace `<SDK version>` with the broadcast SDK version that you are using.

#### JavaScript
<a name="background-replacement-web-create-index-code"></a>

```
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <!-- Import the SDK -->
  <script src="https://web-broadcast.live-video.net/<SDK version>/amazon-ivs-web-broadcast.js"></script>
</head>

<body>

</body>
</html>
```

### Add Media Elements
<a name="background-replacement-web-add-media-elements"></a>

Next, add a video element and two canvas elements within the body tag. The video element contains your live camera feed and is used as input to the MediaPipe Image Segmenter. The first canvas element renders a preview of the feed that will be broadcast. The second canvas element renders the custom image to be used as a background; because it serves only as a source from which pixels are programmatically copied to the final canvas, it is hidden from view.

#### JavaScript
<a name="background-replacement-web-add-media-elements-code"></a>

```
<div class="row local-container">
      <video id="webcam" autoplay style="display: none"></video>
    </div>
    <div class="row local-container">
      <canvas id="canvas" width="640px" height="480px"></canvas>

      <div class="column" id="local-media"></div>
      <div class="static-controls hidden" id="local-controls">
        <button class="button" id="mic-control">Mute Mic</button>
        <button class="button" id="camera-control">Mute Camera</button>
      </div>
    </div>
    <div class="row local-container">
      <canvas id="background" width="640px" height="480px" style="display: none"></canvas>
    </div>
```

### Add a Script Tag
<a name="background-replacement-web-add-script-tag"></a>

Add a script tag to load a bundled JavaScript file that will contain the code to do the background replacement and publish it to a stage:

```
<script src="./dist/bundle.js"></script>
```

### Create app.js
<a name="background-replacement-web-create-appjs"></a>

Next, create a JavaScript file to get the element objects for the canvas and video elements that were created in the HTML page. Import the `ImageSegmenter` and `FilesetResolver` modules. The `ImageSegmenter` module will be used to perform the segmentation task.

#### JavaScript
<a name="create-appjs-import-imagesegmenter-fileresolver-code"></a>

```
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";
```

Next, create a function called `init()` to retrieve the MediaStream from the user’s camera and invoke a callback function each time a camera frame finishes loading. Add event listeners for the buttons to join and leave a stage.

Note that when joining a stage, we pass in a variable named `segmentationStream`. This is a video stream that is captured from a canvas element, containing a foreground image overlaid on the custom image representing the background. Later, this custom stream will be used to create an instance of a `LocalStageStream`, which can be published to a stage.

#### JavaScript
<a name="create-appjs-create-init-code"></a>

```
const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });
};
```

### Load a Custom Background Image
<a name="background-replacement-web-background-image"></a>

At the bottom of the `init` function, add code to call a function named `initBackgroundCanvas`, which loads a custom image from a local file and renders it onto a canvas. We will define this function in the next step. Assign the `MediaStream` retrieved from the user’s camera to the video object. Later, this video object will be passed to the Image Segmenter. Also, set a function named `renderVideoToCanvas` as the callback function to invoke whenever a video frame has finished loading. We will define this function in a later step.

#### JavaScript
<a name="background-replacement-web-load-background-image-code"></a>

```
initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
```

Let’s implement the `initBackgroundCanvas` function, which loads an image from a local file. In this example, we use an image of a beach as the custom background. The canvas containing the custom image will be hidden from display, as you will merge it with the foreground pixels from the canvas element containing the camera feed.

#### JavaScript
<a name="background-replacement-web-implement-initBackgroundCanvas-code"></a>

```
const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, background.width, background.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};
```

### Create an Instance of ImageSegmenter
<a name="background-replacement-web-imagesegmenter"></a>

Next, create an instance of `ImageSegmenter`, which will segment the image and return the result as a mask. When creating an instance of an `ImageSegmenter`, you will use the [selfie segmentation model](https://developers.google.com/mediapipe/solutions/vision/image_segmenter#selfie-model).

#### JavaScript
<a name="background-replacement-web-imagesegmenter-code"></a>

```
const createImageSegmenter = async () => {
  const vision = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};
```

### Render the Video Feed to a Canvas
<a name="background-replacement-web-render-video-to-canvas"></a>

Next, create the function that renders the video feed to the other canvas element. We need to render the video feed to a canvas so we can extract the foreground pixels from it using the Canvas 2D API. While doing this, we also pass a video frame to our instance of `ImageSegmenter`, using the [segmentForVideo](https://developers.google.com/mediapipe/api/solutions/js/tasks-vision.imagesegmenter#imagesegmentersegmentforvideo) method to segment the foreground from the background in the frame. When [segmentForVideo](https://developers.google.com/mediapipe/api/solutions/js/tasks-vision.imagesegmenter#imagesegmentersegmentforvideo) returns, it invokes our custom callback function, `replaceBackground`, which performs the background replacement.

#### JavaScript
<a name="background-replacement-web-render-video-to-canvas-code"></a>

```
const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};
```

### Create Background Replacement Logic
<a name="background-replacement-web-logic"></a>

Create the `replaceBackground` function, which merges the custom background image with the foreground from the camera feed. The function first retrieves the underlying pixel data of the custom background image and of the video feed from the two canvas elements created earlier. It then iterates through the mask provided by `ImageSegmenter`, which indicates which pixels are in the foreground; wherever the mask marks a foreground pixel, it copies the corresponding pixel from the camera feed into the background pixel data. Finally, it converts the combined pixel data to an `ImageData` object and draws it to the canvas.

#### JavaScript
<a name="background-replacement-web-logic-create-replacebackground-code"></a>

```
function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);
    const j = i * 4; // 4 bytes (RGBA) per pixel

    // Only copy pixels onto the background image if the mask indicates they are in the foreground
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
  }

 // Convert the pixel data to a format suitable to be drawn to a canvas
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}
```

For reference, here is the complete `app.js` file containing all the logic above:

#### JavaScript
<a name="background-replacement-web-logic-app-js-code"></a>

```
/*! Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: Apache-2.0 */

// All helpers are exposed in 'media-devices.js' and 'dom.js'
const { setupParticipant } = window;

const { Stage, LocalStageStream, SubscribeType, StageEvents, ConnectionState, StreamType } = IVSBroadcastClient;
const canvasElement = document.getElementById("canvas");
const background = document.getElementById("background");
const canvasCtx = canvasElement.getContext("2d");
const backgroundCtx = background.getContext("2d");
const video = document.getElementById("webcam");

import { ImageSegmenter, FilesetResolver } from "@mediapipe/tasks-vision";

let cameraButton = document.getElementById("camera-control");
let micButton = document.getElementById("mic-control");
let joinButton = document.getElementById("join-button");
let leaveButton = document.getElementById("leave-button");

let controls = document.getElementById("local-controls");
let audioDevicesList = document.getElementById("audio-devices");
let videoDevicesList = document.getElementById("video-devices");

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;
let imageSegmenter;
let lastWebcamTime = -1;

const init = async () => {
  await initializeDeviceSelect();

  cameraButton.addEventListener("click", () => {
    const isMuted = !cameraStageStream.isMuted;
    cameraStageStream.setMuted(isMuted);
    cameraButton.innerText = isMuted ? "Show Camera" : "Hide Camera";
  });

  micButton.addEventListener("click", () => {
    const isMuted = !micStageStream.isMuted;
    micStageStream.setMuted(isMuted);
    micButton.innerText = isMuted ? "Unmute Mic" : "Mute Mic";
  });

  localCamera = await getCamera(videoDevicesList.value);
  const segmentationStream = canvasElement.captureStream();

  joinButton.addEventListener("click", () => {
    joinStage(segmentationStream);
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
  });

  initBackgroundCanvas();

  video.srcObject = localCamera;
  video.addEventListener("loadeddata", renderVideoToCanvas);
};

const joinStage = async (segmentationStream) => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById("token").value;

  if (!token) {
    window.alert("Please enter a participant token");
    joining = false;
    return;
  }

  // Retrieve the User Media currently set on the page
  localMic = await getMic(audioDevicesList.value);

  cameraStageStream = new LocalStageStream(segmentationStream.getVideoTracks()[0]);
  micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);

  const strategy = {
    stageStreamsToPublish() {
      return [cameraStageStream, micStageStream];
    },
    shouldPublishParticipant() {
      return true;
    },
    shouldSubscribeToParticipant() {
      return SubscribeType.AUDIO_VIDEO;
    },
  };

  stage = new Stage(token, strategy);

  // Other available events:
  // https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-guides/stages#events
  stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
    connected = state === ConnectionState.CONNECTED;

    if (connected) {
      joining = false;
      controls.classList.remove("hidden");
    } else {
      controls.classList.add("hidden");
    }
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
    console.log("Participant Joined:", participant);
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
    console.log("Participant Media Added: ", participant, streams);

    let streamsToDisplay = streams;

    if (participant.isLocal) {
      // Ensure to exclude local audio streams, otherwise echo will occur
      streamsToDisplay = streams.filter((stream) => stream.streamType === StreamType.VIDEO);
    }

    const videoEl = setupParticipant(participant);
    streamsToDisplay.forEach((stream) => videoEl.srcObject.addTrack(stream.mediaStreamTrack));
  });

  stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
    console.log("Participant Left: ", participant);
    teardownParticipant(participant);
  });

  try {
    await stage.join();
  } catch (err) {
    joining = false;
    connected = false;
    console.error(err.message);
  }
};

const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;

  cameraButton.innerText = "Hide Camera";
  micButton.innerText = "Mute Mic";
  controls.classList.add("hidden");
};

function replaceBackground(result) {
  let imageData = canvasCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  let backgroundData = backgroundCtx.getImageData(0, 0, video.videoWidth, video.videoHeight).data;
  const mask = result.categoryMask.getAsFloat32Array();
  for (let i = 0; i < mask.length; ++i) {
    const maskVal = Math.round(mask[i] * 255.0);
    const j = i * 4; // 4 bytes (RGBA) per pixel

    // Only copy pixels onto the background image if the mask indicates they are in the foreground
    if (maskVal < 255) {
      backgroundData[j] = imageData[j];
      backgroundData[j + 1] = imageData[j + 1];
      backgroundData[j + 2] = imageData[j + 2];
      backgroundData[j + 3] = imageData[j + 3];
    }
  }
  const uint8Array = new Uint8ClampedArray(backgroundData.buffer);
  const dataNew = new ImageData(uint8Array, video.videoWidth, video.videoHeight);
  canvasCtx.putImageData(dataNew, 0, 0);
  window.requestAnimationFrame(renderVideoToCanvas);
}

const createImageSegmenter = async () => {
  const vision = await FilesetResolver.forVisionTasks("https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.2/wasm");

  imageSegmenter = await ImageSegmenter.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/image_segmenter/selfie_segmenter/float16/latest/selfie_segmenter.tflite",
      delegate: "GPU",
    },
    runningMode: "VIDEO",
    outputCategoryMask: true,
  });
};

const renderVideoToCanvas = async () => {
  if (video.currentTime === lastWebcamTime) {
    window.requestAnimationFrame(renderVideoToCanvas);
    return;
  }
  lastWebcamTime = video.currentTime;
  canvasCtx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);

  if (imageSegmenter === undefined) {
    return;
  }

  let startTimeMs = performance.now();

  imageSegmenter.segmentForVideo(video, startTimeMs, replaceBackground);
};

const initBackgroundCanvas = () => {
  let img = new Image();
  img.src = "beach.jpg";

  img.onload = () => {
    backgroundCtx.clearRect(0, 0, background.width, background.height);
    backgroundCtx.drawImage(img, 0, 0);
  };
};

createImageSegmenter();
init();
```

### Create a Webpack Config File
<a name="background-replacement-web-webpack-config"></a>

Add this configuration to your Webpack config file to bundle `app.js`, so the import calls will work:

#### JavaScript
<a name="background-replacement-web-webpack-config-code"></a>

```
const path = require("path");
module.exports = {
  entry: ["./app.js"],
  output: {
    filename: "bundle.js",
    path: path.resolve(__dirname, "dist"),
  },
};
```

### Bundle Your JavaScript File
<a name="background-replacement-web-bundle-javascript"></a>

Run the build script to bundle `app.js` and its imports into `dist/bundle.js`:
```
npm run build
```

Start a simple HTTP server from the directory containing `index.html` and open `http://localhost:8000` to see the result:

```
python3 -m http.server -d ./
```

## Android
<a name="background-replacement-android"></a>

To replace the background in your live stream, you can use the selfie segmentation API of [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation). The selfie segmentation API accepts a camera image as input and returns a mask that provides a confidence score for each pixel of the image, indicating whether it was in the foreground or the background. Based on the confidence score, you can then retrieve the corresponding pixel color from either the background image or the foreground image. This process continues until all confidence scores in the mask have been examined. The result is a new array of pixel colors containing foreground pixels combined with pixels from the background image.
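As an illustrative sketch of the per-pixel selection just described (written in plain JavaScript rather than Kotlin, with a hypothetical 0.5 cutoff; the actual threshold check appears in the `overlayForeground` code later in this section), each confidence score picks a pixel source:

```javascript
// Sketch of ML Kit-style mask processing: each entry in `confidence` is the
// likelihood (0.0-1.0) that a pixel is in the foreground. Pixels are plain
// integers here (e.g. packed ARGB colors), one per mask entry. The function
// name and 0.5 threshold are illustrative assumptions, not ML Kit API.
function buildOutputPixels(confidence, framePixels, backgroundPixels, threshold = 0.5) {
  return confidence.map((score, i) =>
    // High confidence: keep the camera (foreground) pixel;
    // otherwise fall back to the custom background pixel
    score >= threshold ? framePixels[i] : backgroundPixels[i]
  );
}
```

The result is the new array of pixel colors described above: foreground pixels combined with pixels from the background image.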

To integrate background replacement with the IVS real-time streaming Android broadcast SDK, you need to:

1. Install CameraX libraries and Google ML Kit.

1. Initialize boilerplate variables.

1. Create a custom image source.

1. Manage camera frames.

1. Pass camera frames to Google ML Kit.

1. Overlay camera frame foreground onto your custom background.

1. Feed the new image to a custom image source.

### Install CameraX Libraries and Google ML Kit
<a name="background-replacement-android-install-camerax-googleml"></a>

To extract images from the live camera feed, use Android’s CameraX library. To install the CameraX library and Google ML Kit, add the following to your module’s `build.gradle` file. Replace `${camerax_version}` and `${google_ml_kit_version}` with the latest version of the [CameraX](https://developer.android.com/jetpack/androidx/releases/camera) and [Google ML Kit](https://developers.google.com/ml-kit/vision/selfie-segmentation/android) libraries, respectively. 

#### Java
<a name="background-replacement-android-install-camerax-googleml-code"></a>

```
implementation "com.google.mlkit:segmentation-selfie:${google_ml_kit_version}"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
```

Import the following libraries:

#### Kotlin
<a name="background-replacement-android-import-libraries-code"></a>

```
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.camera.lifecycle.ProcessCameraProvider
import com.google.mlkit.vision.segmentation.selfie.SelfieSegmenterOptions
```

### Initialize Boilerplate Variables
<a name="background-replacement-android-initialize-variables"></a>

Initialize an instance of `ImageAnalysis` and an instance of an `ExecutorService`:

#### Kotlin
<a name="background-replacement-android-initialize-imageanalysis-executorservice-code"></a>

```
private lateinit var binding: ActivityMainBinding
private lateinit var cameraExecutor: ExecutorService
private var analysisUseCase: ImageAnalysis? = null
```

Initialize a `Segmenter` instance in [STREAM_MODE](https://developers.google.com/ml-kit/vision/selfie-segmentation/android#detector_mode):

#### Kotlin
<a name="background-replacement-android-initialize-segmenter-code"></a>

```
private val options =
        SelfieSegmenterOptions.Builder()
            .setDetectorMode(SelfieSegmenterOptions.STREAM_MODE)
            .build()

private val segmenter = Segmentation.getClient(options)
```

### Create a Custom Image Source
<a name="background-replacement-android-create-image-source"></a>

In the `onCreate` method of your activity, create a `DeviceDiscovery` object and a custom image source. The `Surface` provided by the custom image source will receive the final image, with the foreground overlaid on a custom background image. Then create an instance of `ImageLocalStageStream` (named `filterStream` in this example) using the custom image source; it can then be published to a stage. See the [IVS Android Broadcast SDK Guide](broadcast-android.md) for instructions on setting up a stage. Finally, also create a thread that will be used to manage the camera.

#### Kotlin
<a name="background-replacement-android-create-image-source-code"></a>

```
var deviceDiscovery = DeviceDiscovery(applicationContext)
var customSource = deviceDiscovery.createImageInputSource(
    BroadcastConfiguration.Vec2(720F, 1280F)
)
var surface: Surface = customSource.inputSurface
var filterStream = ImageLocalStageStream(customSource)

cameraExecutor = Executors.newSingleThreadExecutor()
```

### Manage Camera Frames
<a name="background-replacement-android-camera-frames"></a>

Next, create a function to initialize the camera. This function uses the CameraX library to extract images from the live camera feed. First, retrieve a `ListenableFuture` for a `ProcessCameraProvider` and name it `cameraProviderFuture`; it represents the future result of obtaining a camera provider. Then load an image from your project as a bitmap. This example uses an image of a beach as the background, but it can be any image you want.

You then add a listener to `cameraProviderFuture`. This listener is notified when the camera becomes available or if an error occurs during the process of obtaining a camera provider.

#### Kotlin
<a name="background-replacement-android-initialize-camera-code"></a>

```
private fun startCamera(surface: Surface) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    val imageResource = R.drawable.beach
    val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)
    var resultBitmap: Bitmap

    cameraProviderFuture.addListener({
        val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

        // Create the ImageAnalysis use case and its analyzer (detailed in
        // the following steps)
        val imageAnalyzer = ImageAnalysis.Builder()
        analysisUseCase = imageAnalyzer
            .setTargetResolution(Size(360, 640))
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()

        analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
            val mediaImage = imageProxy.image
            val tempBitmap = imageProxy.toBitmap()
            // rotate() is a Bitmap helper extension defined elsewhere in this example
            val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())

            if (mediaImage != null) {
                val inputImage =
                    InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

                segmenter.process(inputImage)
                    .addOnSuccessListener { segmentationMask ->
                        val mask = segmentationMask.buffer
                        val maskWidth = segmentationMask.width
                        val maskHeight = segmentationMask.height
                        val backgroundPixels = IntArray(maskWidth * maskHeight)
                        bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

                        // Overlay the camera-frame foreground onto the custom
                        // background and draw the result to the custom image
                        // source's Surface
                        resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
                        val canvas = surface.lockCanvas(null)
                        canvas.drawBitmap(resultBitmap, 0f, 0f, null)
                        surface.unlockCanvasAndPost(canvas)
                    }
                    .addOnFailureListener { exception ->
                        Log.d("App", exception.message!!)
                    }
                    .addOnCompleteListener {
                        imageProxy.close()
                    }
            }
        }

        val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

        try {
            // Unbind use cases before rebinding
            cameraProvider.unbindAll()

            // Bind use cases to camera
            cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)
        } catch (exc: Exception) {
            Log.e(TAG, "Use case binding failed", exc)
        }
    }, ContextCompat.getMainExecutor(this))
}
```

Within the listener, create `ImageAnalysis.Builder` to access each individual frame from the live camera feed. Set the back-pressure strategy to `STRATEGY_KEEP_ONLY_LATEST`. This guarantees that only one camera frame at a time is delivered for processing. Convert each individual camera frame to a bitmap, so you can extract its pixels to later combine it with the custom background image.

#### Kotlin
<a name="background-replacement-android-create-imageanalysisbuilder-code"></a>

```
val imageAnalyzer = ImageAnalysis.Builder()
analysisUseCase = imageAnalyzer
    .setTargetResolution(Size(360, 640))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
    val mediaImage = imageProxy.image
    val tempBitmap = imageProxy.toBitmap();
    val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())
```
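Conceptually, `STRATEGY_KEEP_ONLY_LATEST` behaves like a one-slot buffer in which each new frame overwrites any frame that has not yet been analyzed, so the analyzer always processes the newest available frame. A plain-JavaScript model of that behavior (illustrative only; this is not CameraX API):

```javascript
// Conceptual model of the KEEP_ONLY_LATEST back-pressure strategy: a
// one-slot buffer where each new frame overwrites any unprocessed one.
class LatestFrameBuffer {
  constructor() {
    this.frame = null; // the single slot
    this.dropped = 0;  // frames overwritten before analysis
  }
  // Producer side: the camera pushes frames as fast as it captures them
  push(frame) {
    if (this.frame !== null) this.dropped++; // previous frame never analyzed
    this.frame = frame;
  }
  // Consumer side: the analyzer takes whatever is newest, when it is ready
  take() {
    const f = this.frame;
    this.frame = null;
    return f;
  }
}
```

This is why a slow segmentation step cannot cause an ever-growing backlog of camera frames: intermediate frames are simply dropped.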

### Pass Camera Frames to Google ML Kit
<a name="background-replacement-android-frames-to-mlkit"></a>

Next, create an `InputImage` and pass it to the instance of Segmenter for processing. An `InputImage` can be created from an `ImageProxy` provided by the instance of `ImageAnalysis`. Once an `InputImage` is provided to Segmenter, it returns a mask with confidence scores indicating the likelihood of a pixel being in the foreground or background. This mask also provides width and height properties, which you will use to create a new array containing the background pixels from the custom background image loaded earlier.

#### Kotlin
<a name="background-replacement-android-frames-to-mlkit-code"></a>

```
if (mediaImage != null) {
    val inputImage =
        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

    segmenter.process(inputImage)
        .addOnSuccessListener { segmentationMask ->
            val mask = segmentationMask.buffer
            val maskWidth = segmentationMask.width
            val maskHeight = segmentationMask.height
            val backgroundPixels = IntArray(maskWidth * maskHeight)
            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)
```

### Overlay the Camera Frame Foreground onto Your Custom Background
<a name="background-replacement-android-overlay-frame-foreground"></a>

With the mask containing the confidence scores, the camera frame as a bitmap, and the color pixels from the custom background image, you have everything you need to overlay the foreground onto your custom background. The `overlayForeground` function is then called with the following parameters:

#### Kotlin
<a name="background-replacement-android-call-overlayforeground-code"></a>

```
resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
```

This function iterates through the mask and checks the confidence values to determine whether to get the corresponding pixel color from the background image or the camera frame. If the confidence value indicates that a pixel in the mask is most likely in the background, it will get the corresponding pixel color from the background image; otherwise, it will get the corresponding pixel color from the camera frame to build the foreground. Once the function finishes iterating through the mask, a new bitmap is created using the new array of color pixels and returned. This new bitmap contains the foreground overlaid on the custom background.

#### Kotlin
<a name="background-replacement-android-run-overlayforeground-code"></a>

```
private fun overlayForeground(
        byteBuffer: ByteBuffer,
        maskWidth: Int,
        maskHeight: Int,
        cameraBitmap: Bitmap,
        backgroundPixels: IntArray
    ): Bitmap {
        @ColorInt val colors = IntArray(maskWidth * maskHeight)
        val cameraPixels = IntArray(maskWidth * maskHeight)

        cameraBitmap.getPixels(cameraPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

        for (i in 0 until maskWidth * maskHeight) {
            val backgroundLikelihood: Float = 1 - byteBuffer.getFloat()

            // Apply the virtual background to the color if it's not part of the foreground
            if (backgroundLikelihood > 0.9) {
                // Get the corresponding pixel color from the background image
                // Set the color in the mask based on the background image pixel color
                colors[i] = backgroundPixels.get(i)
            } else {
                // Get the corresponding pixel color from the camera frame
                // Set the color in the mask based on the camera image pixel color
                colors[i] = cameraPixels.get(i)
            }
        }

        return Bitmap.createBitmap(
            colors, maskWidth, maskHeight, Bitmap.Config.ARGB_8888
        )
    }
```

### Feed the New Image to a Custom Image Source
<a name="background-replacement-android-custom-image-source"></a>

You can then write the new bitmap to the `Surface` provided by a custom image source. This will broadcast it to your stage.

#### Kotlin
<a name="background-replacement-android-custom-image-source-code"></a>

```
resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
canvas = surface.lockCanvas(null)
canvas.drawBitmap(resultBitmap, 0f, 0f, null)
surface.unlockCanvasAndPost(canvas)
```

Here is the complete function for getting the camera frames, passing them to Segmenter, and overlaying the foreground on the custom background:

#### Kotlin
<a name="background-replacement-android-custom-image-source-startcamera-code"></a>

```
@androidx.annotation.OptIn(androidx.camera.core.ExperimentalGetImage::class)
    private fun startCamera(surface: Surface) {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
        val imageResource = R.drawable.clouds
        val bgBitmap: Bitmap = BitmapFactory.decodeResource(resources, imageResource)
        var resultBitmap: Bitmap

        cameraProviderFuture.addListener({
            // Used to bind the lifecycle of cameras to the lifecycle owner
            val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

            val imageAnalyzer = ImageAnalysis.Builder()
            analysisUseCase = imageAnalyzer
                .setTargetResolution(Size(720, 1280))
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build()

            analysisUseCase!!.setAnalyzer(cameraExecutor) { imageProxy: ImageProxy ->
                val mediaImage = imageProxy.image
                val tempBitmap = imageProxy.toBitmap()
                val inputBitmap = tempBitmap.rotate(imageProxy.imageInfo.rotationDegrees.toFloat())

                if (mediaImage != null) {
                    val inputImage =
                        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

                    segmenter.process(inputImage)
                        .addOnSuccessListener { segmentationMask ->
                            val mask = segmentationMask.buffer
                            val maskWidth = segmentationMask.width
                            val maskHeight = segmentationMask.height
                            val backgroundPixels = IntArray(maskWidth * maskHeight)
                            bgBitmap.getPixels(backgroundPixels, 0, maskWidth, 0, 0, maskWidth, maskHeight)

                            resultBitmap = overlayForeground(mask, maskWidth, maskHeight, inputBitmap, backgroundPixels)
                            canvas = surface.lockCanvas(null)
                            canvas.drawBitmap(resultBitmap, 0f, 0f, null)

                            surface.unlockCanvasAndPost(canvas)

                        }
                        .addOnFailureListener { exception ->
                            Log.d("App", exception.message!!)
                        }
                        .addOnCompleteListener {
                            imageProxy.close()
                        }

                }
            }

            val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

            try {
                // Unbind use cases before rebinding
                cameraProvider.unbindAll()

                // Bind use cases to camera
                cameraProvider.bindToLifecycle(this, cameraSelector, analysisUseCase)

            } catch(exc: Exception) {
                Log.e(TAG, "Use case binding failed", exc)
            }

        }, ContextCompat.getMainExecutor(this))
    }
```

# IVS Broadcast SDK: Mobile Audio Modes | Real-Time Streaming
<a name="broadcast-mobile-audio-modes"></a>

Audio quality is an important part of any real-time media experience, and there isn’t a one-size-fits-all audio configuration that works best for every use case. To ensure that your users have the best experience when listening to an IVS real-time stream, our mobile SDKs provide several preset audio configurations, as well as more powerful customizations as needed.

## Introduction
<a name="broadcast-mobile-audio-modes-introduction"></a>

The IVS mobile broadcast SDKs provide a `StageAudioManager` class. This class is designed to be the single point of contact for controlling the underlying audio modes on both platforms. On Android, this controls the [AudioManager](https://developer.android.com/reference/android/media/AudioManager), including the audio mode, audio source, content type, usage, and communication devices. On iOS, it controls the application [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession), as well as whether [voiceProcessing](https://developer.apple.com/documentation/avfaudio/avaudioionode/3152101-voiceprocessingenabled?language=objc) is enabled.

**Important**: Do not interact with `AVAudioSession` or `AudioManager` directly while the IVS real-time broadcast SDK is active. Doing so could result in the loss of audio, or audio being recorded from or played back on the wrong device.

Before you create your first `DeviceDiscovery` or `Stage` object, the `StageAudioManager` class must be configured.

------
#### [ Android (Kotlin) ]

```
StageAudioManager.getInstance(context).setPreset(StageAudioManager.UseCasePreset.VIDEO_CHAT) // The default value

val deviceDiscovery = DeviceDiscovery(context)
val stage = Stage(context, token, this)

// Other Stage implementation code
```

------
#### [ iOS (Swift) ]

```
IVSStageAudioManager.sharedInstance().setPreset(.videoChat) // The default value

let deviceDiscovery = IVSDeviceDiscovery()
let stage = try? IVSStage(token: token, strategy: self)

// Other Stage implementation code
```

------

If nothing is set on the `StageAudioManager` before initialization of a `DeviceDiscovery` or `Stage` instance, the `VideoChat` preset is applied automatically.

## Audio Use Case Presets
<a name="broadcast-mobile-audio-modes-presets"></a>

The real-time broadcast SDK provides three presets, each tailored to common use cases, as described below. For each preset, we cover five key categories that differentiate the presets from each other.

The **Volume Rocker** category refers to the type of volume (media volume or call volume) that is used or changed via the physical volume rockers on the device. Note that this also matters when switching audio modes, because each volume type has its own level. For example, suppose the device volume is set to the maximum value while using the Video Chat preset; switching to the Subscribe Only preset switches the operating system to a different volume type, which could cause a significant volume change on the device.

### Video Chat
<a name="audio-modes-presets-video-chat"></a>

This is the default preset, designed for when the local device is going to have a real-time conversation with other participants.

**Known issue on iOS**: Using this preset and not attaching a microphone causes audio to play through the earpiece instead of the device speaker. Use this preset only in combination with a microphone.


| Category | Android | iOS | 
| --- | --- | --- | 
| Echo Cancellation | Enabled | Enabled | 
| Noise Suppression | Enabled | Enabled | 
| Volume Rocker | Call Volume | Call Volume | 
| Microphone Selection | Limited based on the OS. USB microphones may not be available. | Limited based on the OS. USB and Bluetooth microphones may not be available. Bluetooth headsets that handle both input and output together should work; e.g., AirPods. | 
| Audio Output | Any output device should work. | Limited based on the OS. Wired headsets may not be available. | 
| Audio Quality | Medium / Low. It will sound like a phone call, not like media playback. | Medium / Low. It will sound like a phone call, not like media playback. | 

### Subscribe Only
<a name="audio-modes-presets-subscribe-only"></a>

This preset is designed for when you plan to subscribe to other publishing participants but not publish yourself. It focuses on audio quality and supporting all available output devices.


| Category | Android | iOS | 
| --- | --- | --- | 
| Echo Cancellation | Disabled | Disabled | 
| Noise Suppression | Disabled | Disabled | 
| Volume Rocker | Media Volume | Media Volume | 
| Microphone Selection | N/A, this preset is not designed for publishing. | N/A, this preset is not designed for publishing. | 
| Audio Output | Any output device should work. | Any output device should work. | 
| Audio Quality | High. Any media type should come through clearly, including music. | High. Any media type should come through clearly, including music. | 

### Studio
<a name="audio-modes-presets-studio"></a>

This preset is designed for high quality subscribing while maintaining the ability to publish. It requires the recording and playback hardware to provide echo cancellation. A typical use case is a USB microphone combined with a wired headset. The SDK maintains the highest quality audio, relying on the physical separation of those devices to prevent echo.


| Category | Android | iOS | 
| --- | --- | --- | 
| Echo Cancellation | Disabled | Disabled | 
| Noise Suppression | Disabled | Disabled | 
| Volume Rocker | Media Volume in most cases. Call Volume when a Bluetooth microphone is connected.  | Media Volume | 
| Microphone Selection | Any microphone should work. | Any microphone should work. | 
| Audio Output | Any output device should work. | Any output device should work. | 
| Audio Quality | High. Both sides should be able to send music and hear it clearly on the other side. When a Bluetooth headset is connected, audio quality will drop due to Bluetooth SCO mode being enabled. | High. Both sides should be able to send music and hear it clearly on the other side. When a Bluetooth headset is connected, audio quality may drop due to Bluetooth SCO mode being enabled, depending on the headset.  | 

## Advanced Use Cases
<a name="broadcast-mobile-audio-modes-advanced-use-cases"></a>

Beyond the presets, both the iOS and Android real-time streaming broadcast SDKs allow configuring the underlying platform audio modes:
+ On Android, set the [AudioSource](https://developer.android.com/reference/android/media/MediaRecorder.AudioSource), [Usage](https://developer.android.com/reference/android/media/AudioAttributes#USAGE_ALARM), and [ContentType](https://developer.android.com/reference/android/media/AudioAttributes#CONTENT_TYPE_MOVIE).
+ On iOS, use [AVAudioSession.Category](https://developer.apple.com/documentation/avfaudio/avaudiosession/category), [AVAudioSession.CategoryOptions](https://developer.apple.com/documentation/avfaudio/avaudiosession/categoryoptions), [AVAudioSession.Mode](https://developer.apple.com/documentation/avfaudio/avaudiosession/mode), and the ability to toggle if [voice processing](https://developer.apple.com/documentation/avfaudio/avaudioionode/3152101-voiceprocessingenabled?language=objc) is enabled or not while publishing.

Note: When using these audio SDK methods, it is possible to incorrectly configure the underlying audio session. For example, using the `.allowBluetooth` option on iOS in combination with the `.playback` category creates an invalid audio configuration and the SDK cannot record or play back audio. These methods are designed to be used only when an application has specific audio-session requirements that have been validated.

------
#### [ Android (Kotlin) ]

```
// This would act similar to the Subscribe Only preset, but it uses a different ContentType.
StageAudioManager.getInstance(context)
    .setConfiguration(StageAudioManager.Source.GENERIC,
                      StageAudioManager.ContentType.MOVIE,
                      StageAudioManager.Usage.MEDIA);

val stage = Stage(context, token, this)

// Other Stage implementation code
```

------
#### [ iOS (Swift) ]

```
// This would act similar to the Subscribe Only preset, but it uses a different mode and options.
IVSStageAudioManager.sharedInstance()
    .setCategory(.playback,
                 options: [.duckOthers, .mixWithOthers],
                 mode: .default)

let stage = try? IVSStage(token: token, strategy: self)

// Other Stage implementation code
```

------

### iOS Echo Cancellation
<a name="advanced-use-cases-ios_echo_cancellation"></a>

Echo cancellation on iOS can also be controlled independently via `IVSStageAudioManager`, using its `echoCancellationEnabled` method. This method controls whether [voice processing](https://developer.apple.com/documentation/avfaudio/avaudioionode/3152101-voiceprocessingenabled?language=objc) is enabled on the input and output nodes of the underlying `AVAudioEngine` used by the SDK. It is important to understand the effect of changing this property manually:
+ The `AVAudioEngine` property is honored only if the SDK’s microphone is active; this is necessary because iOS requires voice processing to be enabled on both the input and output nodes simultaneously. Normally this is done by using the microphone returned by `IVSDeviceDiscovery` to create an `IVSLocalStageStream` to publish. Alternatively, the microphone can be enabled without being used to publish, by attaching an `IVSAudioDeviceStatsCallback` to the microphone itself. This alternate approach is useful if echo cancellation is needed while using a custom audio-source-based microphone instead of the IVS SDK’s microphone.
+ Enabling the `AVAudioEngine` property requires a mode of `.videoChat` or `.voiceChat`. Requesting a different mode causes iOS’s underlying audio framework to conflict with the SDK, resulting in audio loss.
+ Enabling the `AVAudioEngine` property automatically enables the `.allowBluetooth` option.

Behaviors can differ depending on the device and iOS version.

### iOS Custom Audio Sources
<a name="advanced-use-cases-ios_custom_audio_sources"></a>

Custom audio sources can be used with the SDK by using `IVSDeviceDiscovery.createAudioSource`. When connecting to a Stage, the IVS real-time streaming broadcast SDK still manages an internal `AVAudioEngine` instance for audio playback, even if the SDK’s microphone is not used. As a result, the values provided to `IVSStageAudioManager` must be compatible with the audio being provided by the custom audio source.

If the custom audio source being used to publish is recording from the microphone but is managed by the host application, the SDK’s echo cancellation described above will not work unless the SDK-managed microphone is also activated. To work around that requirement, see [iOS Echo Cancellation](#advanced-use-cases-ios_echo_cancellation).

### Publishing with Bluetooth on Android
<a name="advanced-use-cases-bluetooth-android"></a>

The SDK automatically reverts to the `VIDEO_CHAT` preset on Android when the following conditions are met:
+ The assigned configuration does not use the `VOICE_COMMUNICATION` usage value.
+ A Bluetooth microphone is connected to the device.
+ The local participant is publishing to a Stage.

This is a limitation of how the Android operating system uses Bluetooth headsets for recording audio.
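If you want to keep Bluetooth publishing behavior predictable rather than relying on the automatic revert, you can configure the SDK with a `VOICE_COMMUNICATION`-based usage up front. This is a hedged sketch only: the `Source` and `ContentType` values below are assumptions modeled on the `setConfiguration` API shown earlier in this section, so verify them against the SDK you are building with.

```
// Hypothetical sketch: pick a configuration whose Usage is VOICE_COMMUNICATION so
// the SDK does not need to fall back to the VIDEO_CHAT preset when a Bluetooth
// microphone is connected while publishing. Enum names are assumptions based on
// the setConfiguration example shown earlier in this document.
StageAudioManager.getInstance(context)
    .setConfiguration(StageAudioManager.Source.VOICE_COMMUNICATION,
                      StageAudioManager.ContentType.SPEECH,
                      StageAudioManager.Usage.VOICE_COMMUNICATION)

val stage = Stage(context, token, this)
```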

## Integrating with Other SDKs
<a name="broadcast-mobile-audio-modes-integrating-other-sdks"></a>

Because both iOS and Android support only one active audio mode per application, it is common to run into conflicts if your application uses multiple SDKs that require control of the audio mode. When you run into these conflicts, there are some common resolution strategies to try, explained below.

### Match Audio Mode Values
<a name="integrating-other-sdks-match-values"></a>

Using either the IVS SDK’s advanced audio-configuration options or the other SDK’s equivalent functionality, have the two SDKs agree on the same underlying audio-mode values, so neither one tears down the configuration the other depends on.
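For instance, if another SDK in your application has configured Android for media playback, you might align the IVS SDK with the same underlying values. This sketch reuses only the enum values already shown in the Advanced Use Cases section above; confirm that they match what the other SDK actually sets.

```
// Sketch: align the IVS SDK with another SDK that has configured the device
// for media playback. These values mirror the setConfiguration example in the
// Advanced Use Cases section; adjust them to match the other SDK's settings.
StageAudioManager.getInstance(context)
    .setConfiguration(StageAudioManager.Source.GENERIC,
                      StageAudioManager.ContentType.MOVIE,
                      StageAudioManager.Usage.MEDIA)
```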

### Agora
<a name="integrating-other-sdks-agora"></a>

#### iOS
<a name="integrating-other-sdks-agora-ios"></a>

On iOS, tell the Agora SDK to keep the `AVAudioSession` active; this prevents Agora from deactivating the session while the IVS real-time streaming broadcast SDK is using it.

```
myRtcEngine.SetParameters("{\"che.audio.keep.audiosession\":true}");
```

#### Android
<a name="integrating-other-sdks-agora-android"></a>

Avoid calling `setEnableSpeakerphone` on `RtcEngine`, and call `enableLocalAudio(false)` while publishing with the IVS real-time streaming broadcast SDK. You can call `enableLocalAudio(true)` again when the IVS SDK is not publishing.
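The toggling described above can be sketched as a single hook that tracks the IVS publish state. This is illustrative only: `rtcEngine` is assumed to be an already-initialized Agora `RtcEngine`, and the callback name is a hypothetical hook point in your application, not an IVS SDK API.

```
// Hedged sketch: release the microphone to the IVS broadcast SDK while it is
// publishing, and give it back to Agora afterward. `rtcEngine` is an initialized
// Agora RtcEngine; the callback below is an illustrative application-level hook.
fun onIvsPublishStateChanged(isPublishing: Boolean) {
    // enableLocalAudio(false) stops Agora from recording the microphone,
    // freeing the audio input for the IVS real-time streaming broadcast SDK.
    rtcEngine.enableLocalAudio(!isPublishing)
}
```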