

# IVS Broadcast SDK | Low-Latency Streaming
<a name="broadcast"></a>

The Amazon Interactive Video Service (IVS) Low-Latency Streaming broadcast SDK is for developers who are building applications with Amazon IVS. This SDK is designed to leverage the Amazon IVS architecture and will see continual improvement and new features alongside Amazon IVS. As a native broadcast SDK, it is designed to minimize the performance impact on your application and on the devices with which your users access your application.

Your application can leverage the key features of the Amazon IVS broadcast SDK:
+ **High quality streaming** — The broadcast SDK supports high quality streaming. Capture video from your camera and encode it at up to 1080p quality for a high quality viewing experience.
+ **Automatic Bitrate Adjustments** — Smartphone users are mobile, so their network conditions can change throughout the course of a broadcast. The Amazon IVS broadcast SDK automatically adjusts the video bitrate to accommodate changing network conditions.
+ **Portrait and Landscape Support** — No matter how your users hold their devices, the image appears right-side up and properly scaled. The broadcast SDK supports both portrait and landscape canvas sizes. It automatically manages the aspect ratio when the users rotate their device away from the configured orientation.
+ **Secure Streaming** — Your users’ broadcasts are encrypted using TLS, so their streams stay secure.
+ **External Audio Devices** — The Amazon IVS broadcast SDK supports audio jack, USB, and Bluetooth SCO external microphones.

## Platform Requirements
<a name="broadcast-platform-requirements"></a>

### Native Platforms
<a name="broadcast-native-platforms"></a>


| Platform | Supported Versions | 
| --- | --- | 
| Android |  6.0 and later  | 
| iOS |  14 and later. If broadcasting is essential to your application, specify Metal as a requirement for downloading your app from the Apple App Store, using [UIRequiredDeviceCapabilities](https://developer.apple.com/documentation/bundleresources/information_property_list/uirequireddevicecapabilities).   | 

IVS supports a minimum of 4 major iOS versions and 6 major Android versions. Our current version support may extend beyond these minimums. Customers will be notified via SDK release notes at least 3 months in advance of a major version no longer being supported.

### Desktop Browsers
<a name="browser-desktop"></a>


| Browser | Supported Platforms | Supported Versions | 
| --- | --- | --- | 
| Chrome | Windows, macOS | Two major versions (current and most recent prior version) | 
| Firefox | Windows, macOS | Two major versions (current and most recent prior version) | 
| Edge | Windows 8.1 and later | Two major versions (current and most recent prior version); excludes Edge Legacy | 
| Safari | macOS | Two major versions (current and most recent prior version) | 

### Mobile Browsers
<a name="browser-mobile"></a>


| Browser | Supported Versions | 
| --- | --- | 
| Chrome for iOS, Safari for iOS |  Two major versions (current and most recent prior version)  | 
| Chrome for iPadOS, Safari for iPadOS |  Two major versions (current and most recent prior version)  | 
| Chrome for Android | Two major versions (current and most recent prior version)  | 

## Webviews
<a name="broadcast-webviews"></a>

The Web broadcast SDK does not provide support for webviews or web-like environments (TVs, consoles, etc.). For mobile implementations, see the Low-Latency Streaming Broadcast SDK Guide for [Android](broadcast-android.md) and for [iOS](broadcast-ios.md).

## Required Device Access
<a name="broadcast-device-access"></a>

The broadcast SDK requires access to the device's cameras and microphones, both those built into the device and those connected through Bluetooth, USB, or audio jack.

## Support
<a name="broadcast-support"></a>

If you encounter a broadcast error or other issue with your stream, determine the unique broadcast session identifier via the broadcast API.


| For this Amazon IVS Broadcast SDK: | Use this: | 
| --- | --- | 
| Android | `getSessionId` function on `BroadcastSession`  | 
| iOS | `sessionId` property of `IVSBroadcastSession`  | 
| Web | `getSessionId` function | 

Share this broadcast session identifier with AWS support. With it, they can get information to help troubleshoot your issue.

**Note:** The broadcast SDK is continually improved. See [Amazon IVS Release Notes](release-notes.md) for available versions and fixed issues. If appropriate, before contacting support, update your version of the broadcast SDK and see if that resolves your issue.

### Versioning
<a name="broadcast-support-versioning"></a>

The Amazon IVS broadcast SDKs use [semantic versioning](https://semver.org/).

For this discussion, suppose:
+ The latest release is 4.1.3.
+ The latest release of the prior major version is 3.2.4.
+ The latest release of version 1.x is 1.5.6.

Backward-compatible new features are added as minor releases of the latest version. In this case, the next set of new features will be added as version 4.2.0.

Backward-compatible, minor bug fixes are added as patch releases of the latest version. Here, the next set of minor bug fixes will be added as version 4.1.4.

Backward-compatible, major bug fixes are handled differently; these are added to several versions:
+ Patch release of the latest version. Here, this is version 4.1.4.
+ Patch release of the prior minor version. Here, this is version 3.2.5.
+ Patch release of the latest version 1.x release. Here, this is version 1.5.7.

Major bug fixes are defined by the Amazon IVS product team. Typical examples are critical security updates and selected other fixes necessary for customers.

**Note:** In the examples above, released versions increment without skipping any numbers (e.g., from 4.1.3 to 4.1.4). In reality, one or more patch numbers may remain internal and not be released, so the released version could increment from 4.1.3 to, say, 4.1.6.
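The version-bump rules above can be illustrated with a short sketch. (The `bumpMinor` and `bumpPatch` helpers below are purely illustrative and are not part of any SDK.)

```javascript
// Illustrative semantic-versioning helpers (not part of the IVS SDKs).
// Given a released version string, compute where the next minor feature
// release and the next patch release would land.
function bumpMinor(version) {
  const [major, minor] = version.split('.').map(Number);
  return `${major}.${minor + 1}.0`;
}

function bumpPatch(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

console.log(bumpMinor('4.1.3')); // next feature release: 4.2.0
console.log(bumpPatch('4.1.3')); // next set of minor bug fixes: 4.1.4
// A major bug fix would also be patched into the older supported lines:
console.log(bumpPatch('3.2.4')); // 3.2.5
console.log(bumpPatch('1.5.6')); // 1.5.7
```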

# IVS Broadcast SDK: Web Guide | Low-Latency Streaming
<a name="broadcast-web"></a>

The IVS Low-Latency Streaming Web Broadcast SDK gives developers the tools to build interactive, real-time experiences on the web.

**Latest version of Web broadcast SDK:** 1.34.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/release-notes.html#apr09-25-broadcast-web-ll)) 

**Reference documentation:** For information on the most important methods available in the Amazon IVS Web Broadcast SDK, see [https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference). Make sure the most current version of the SDK is selected.

**Sample code**: The samples below are a good place to get started quickly with the SDK:
+ [Single broadcast to an IVS channel (HTML and JavaScript)](https://codepen.io/amazon-ivs/pen/poLRoPp)
+ [Single broadcast with screen share to an IVS channel](https://stream.ivs.rocks/) ([React Source Code](https://github.com/aws-samples/amazon-ivs-broadcast-web-demo))

**Platform requirements**: See [Amazon IVS Broadcast SDK](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/broadcast.html) for a list of supported platforms.

# Getting Started with the IVS Web Broadcast SDK | Low-Latency Streaming
<a name="broadcast-web-getting-started"></a>

This document takes you through the steps involved in getting started with the Amazon IVS low-latency streaming Web broadcast SDK.

## Install the Library
<a name="broadcast-web-install"></a>

Note that the IVSBroadcastClient leverages [reflect-metadata](https://www.npmjs.com/package/reflect-metadata), which extends the global Reflect object. Although this should not create any conflicts, there may be rare instances where this could cause unwanted behavior.

### Using a Script Tag
<a name="broadcast-web-how-to-install-script"></a>

The Web broadcast SDK is distributed as a JavaScript library and can be retrieved at [https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js](https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js).

When loaded via `<script>` tag, the library exposes a global variable in the window scope named `IVSBroadcastClient`.

### Using npm
<a name="broadcast-web-how-to-install-npm"></a>

To install the `npm` package:

```
npm install amazon-ivs-web-broadcast
```

You can now access the `IVSBroadcastClient` object and pull in other modules and consts such as `Errors`, `BASIC_LANDSCAPE`:

```
import IVSBroadcastClient, {
   Errors,
   BASIC_LANDSCAPE
} from 'amazon-ivs-web-broadcast';
```

## Samples
<a name="broadcast-web-samples"></a>

To get started quickly, see the examples below:
+ [Single broadcast to an IVS channel (HTML and JavaScript)](https://codepen.io/amazon-ivs/pen/poLRoPp)
+ [Single broadcast with screen share to an IVS channel](https://stream.ivs.rocks/) ([React Source Code](https://github.com/aws-samples/amazon-ivs-broadcast-web-demo))

## Create an Instance of the AmazonIVSBroadcastClient
<a name="broadcast-web-instance"></a>

To use the library, you must create an instance of the client. You can do that by calling the `create` method on `IVSBroadcastClient` with the `streamConfig` parameter (specifying constraints of your broadcast like resolution and framerate). You can specify the ingest endpoint when creating the client or you can set this when you start a stream.

The ingest endpoint can be found in the AWS Console or returned by the CreateChannel operation (e.g., `UNIQUE_ID.global-contribute.live-video.net`).

```
const client = IVSBroadcastClient.create({
   // Enter the desired stream configuration
   streamConfig: IVSBroadcastClient.BASIC_LANDSCAPE,
   // Enter the ingest endpoint from the AWS console or CreateChannel API
   ingestEndpoint: 'UNIQUE_ID.global-contribute.live-video.net',
});
```

These are the common supported stream configurations. The presets are `BASIC` (up to 480p and 1.5 Mbps bitrate), `BASIC_FULL_HD` (up to 1080p and 3.5 Mbps bitrate), and `STANDARD` (or `ADVANCED`; up to 1080p and 8.5 Mbps bitrate). You can customize the bitrate, frame rate, and resolution if desired. For more information, see [BroadcastClientConfig](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/BroadcastClientConfig).

```
IVSBroadcastClient.BASIC_LANDSCAPE;
IVSBroadcastClient.BASIC_FULL_HD_LANDSCAPE;
IVSBroadcastClient.STANDARD_LANDSCAPE;
IVSBroadcastClient.BASIC_PORTRAIT;
IVSBroadcastClient.BASIC_FULL_HD_PORTRAIT;
IVSBroadcastClient.STANDARD_PORTRAIT;
```

You can import these individually if using the `npm` package.

Note: Make sure that your client-side configuration aligns with the back-end channel type. For instance, if the channel type is `STANDARD`, `streamConfig` should be set to one of the `IVSBroadcastClient.STANDARD_*` values. If channel type is `ADVANCED`, you’ll need to set the configuration manually as shown below (using `ADVANCED_HD` as an example):

```
const client = IVSBroadcastClient.create({
   // Enter the custom stream configuration
   streamConfig: {
      maxResolution: {
         width: 1080,
         height: 1920,
     },
     maxFramerate: 30,
     /**
      * maxBitrate is measured in kbps
      */
     maxBitrate: 3500,
   },
   // Other configuration . . .
});
```

## Request Permissions
<a name="broadcast-web-request-permissions"></a>

Your app must request permission to access the user’s camera and microphone, and it must be served using HTTPS. (This is not specific to Amazon IVS; it is required for any website that needs access to cameras and microphones.)

Here's an example function showing how you can request and capture permissions for both audio and video devices:

```
async function handlePermissions() {
   let permissions = {
       audio: false,
       video: false,
   };
   try {
       const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
       for (const track of stream.getTracks()) {
           track.stop();
       }
       permissions = { video: true, audio: true };
   } catch (err) {
       permissions = { video: false, audio: false };
       console.error(err.message);
   }
    // If we still don't have permissions after requesting them, display the error message
   if (!permissions.video) {
       console.error('Failed to get video permissions.');
   } else if (!permissions.audio) {
       console.error('Failed to get audio permissions.');
   }
}
```

For additional information, see the [Permissions API](https://developer.mozilla.org/en-US/docs/Web/API/Permissions_API) and [MediaDevices.getUserMedia()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia).

## Set Up a Stream Preview
<a name="broadcast-web-request-set-up-stream"></a>

To preview what will be broadcast, provide the SDK with a `<canvas>` element.

```
// where #preview is an existing <canvas> DOM element on your page
const previewEl = document.getElementById('preview');
client.attachPreview(previewEl);
```

## List Available Devices
<a name="broadcast-web-request-list-devices"></a>

To see what devices are available to capture, query the browser's [MediaDevices.enumerateDevices()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices) method:

```
const devices = await navigator.mediaDevices.enumerateDevices();
window.videoDevices = devices.filter((d) => d.kind === 'videoinput');
window.audioDevices = devices.filter((d) => d.kind === 'audioinput');
```

## Retrieve a MediaStream from a Device
<a name="broadcast-web-retrieve-mediastream"></a>

After acquiring the list of available devices, you can retrieve a stream from any number of devices. For example, you can use the `getUserMedia()` method to retrieve a stream from a camera.

If you'd like to specify which device to capture the stream from, you can explicitly set the `deviceId` in the `audio` or `video` section of the media constraints. Alternately, you can omit the `deviceId` and have users select their devices from the browser prompt.

You also can specify an ideal camera resolution using the `width` and `height` constraints. (Read more about these constraints [here](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints#properties_of_video_tracks).) The SDK automatically applies width and height constraints that correspond to your maximum broadcast resolution; however, it's a good idea to also apply these yourself to ensure that the source aspect ratio is not changed after you add the source to the SDK.

```
const streamConfig = IVSBroadcastClient.BASIC_LANDSCAPE;
...
window.cameraStream = await navigator.mediaDevices.getUserMedia({
   video: {
       deviceId: window.videoDevices[0].deviceId,
       width: {
           ideal: streamConfig.maxResolution.width,
       },
       height: {
           ideal: streamConfig.maxResolution.height,
       },
   },
});
window.microphoneStream = await navigator.mediaDevices.getUserMedia({
   audio: { deviceId: window.audioDevices[0].deviceId },
});
```

## Add Device to a Stream
<a name="broadcast-web-add-device"></a>

After acquiring the stream, you may add devices to the layout by specifying a unique name (below, this is `camera1`) and composition position (for video). For example, by specifying your webcam device, you add your webcam video source to the broadcast stream.

When specifying the video-input device, you must specify the index, which represents the “layer” on which you want to broadcast. This is analogous to image editing or CSS, where a z-index represents the ordering of layers to render. Optionally, you can provide a position, which defines the x/y coordinates (as well as the size) of the stream source.

For details on parameters, see [VideoComposition](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/VideoComposition).

```
client.addVideoInputDevice(window.cameraStream, 'camera1', { index: 0 }); // only 'index' is required for the position parameter
client.addAudioInputDevice(window.microphoneStream, 'mic1');
```
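The position object is plain pixel coordinates on the broadcast canvas. As a sketch, a small helper can compute a bottom-right picture-in-picture position from the canvas dimensions. (The `bottomRightPosition` helper and the inset value below are our own illustration, not part of the SDK.)

```javascript
// Hypothetical helper (not part of the SDK): compute the composition
// position for an overlay anchored to the bottom-right corner of the
// broadcast canvas, with a fixed pixel inset from the edges.
function bottomRightPosition(canvasWidth, canvasHeight, overlayWidth, overlayHeight, inset = 20) {
  return {
    index: 1, // render above a full-screen layer at index 0
    width: overlayWidth,
    height: overlayHeight,
    x: canvasWidth - overlayWidth - inset,
    y: canvasHeight - overlayHeight - inset,
  };
}

// For a 1280x720 canvas and a 320x180 camera overlay:
const position = bottomRightPosition(1280, 720, 320, 180);
// position is { index: 1, width: 320, height: 180, x: 940, y: 520 }
```

You could then pass the computed object as the third argument of `addVideoInputDevice` when adding a second video source.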

## Start a Broadcast
<a name="broadcast-web-start-broadcast"></a>

To start a broadcast, provide the stream key for your Amazon IVS channel:

```
client
   .startBroadcast(streamKey)
   .then((result) => {
       console.log('I am successfully broadcasting!');
   })
   .catch((error) => {
       console.error('Something drastically failed while broadcasting!', error);
   });
```

## Stop a Broadcast
<a name="broadcast-web-stop-broadcast"></a>

```
client.stopBroadcast();
```

## Swap Video Positions
<a name="broadcast-web-swap-video-positions"></a>

The client supports swapping the composition positions of video devices:

```
client.exchangeVideoDevicePositions('camera1', 'camera2');
```

## Mute Audio
<a name="broadcast-web-muting-audio"></a>

To mute audio, either remove the audio device using `removeAudioInputDevice` or set the `enabled` property on the audio track:

```
let audioStream = client.getAudioInputDevice(AUDIO_DEVICE_NAME);
audioStream.getAudioTracks()[0].enabled = false;
```

Where `AUDIO_DEVICE_NAME` is the name given to the original audio device during the `addAudioInputDevice()` call.

To unmute:

```
let audioStream = client.getAudioInputDevice(AUDIO_DEVICE_NAME);
audioStream.getAudioTracks()[0].enabled = true;
```
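Because the same `enabled` flag controls both states, a small toggle helper is a common pattern. The following minimal sketch (the `toggleMute` helper is our own, not an SDK API) wraps the calls above:

```javascript
// Hypothetical convenience wrapper (not part of the SDK): flip the
// enabled flag of a named audio device and return the new mute state.
function toggleMute(client, deviceName) {
  const audioStream = client.getAudioInputDevice(deviceName);
  const track = audioStream.getAudioTracks()[0];
  track.enabled = !track.enabled;
  return !track.enabled; // true when the device is now muted
}
```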

## Hide Video
<a name="broadcast-web-hiding-video"></a>

To hide video, either remove the video device using `removeVideoInputDevice` or set the `enabled` property on the video track:

```
let videoStream = client.getVideoInputDevice(VIDEO_DEVICE_NAME).source;
videoStream.getVideoTracks()[0].enabled = false;
```

Where `VIDEO_DEVICE_NAME` is the name given to the video device during the original `addVideoInputDevice()` call.

To unhide:

```
let videoStream = client.getVideoInputDevice(VIDEO_DEVICE_NAME).source;
videoStream.getVideoTracks()[0].enabled = true;
```

# Known Issues & Workarounds in the IVS Web Broadcast SDK | Low-Latency Streaming
<a name="broadcast-web-known-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS low-latency streaming Web broadcast SDK and suggests potential workarounds.
+ Viewers may experience green artifacts or an irregular framerate when watching streams from broadcasters who are using Safari on Intel-based Mac devices.

  **Workaround:** Redirect broadcasters on Intel Mac devices to broadcast using Chrome.
+ The web broadcast SDK requires port 4443 to be open. VPNs and firewalls can block port 4443 and prevent you from streaming.

  **Workaround:** Disable VPNs and/or configure firewalls to ensure that port 4443 is not blocked. 
+ Switching from landscape to portrait mode is buggy.

  **Workaround:** None.
+ The resolution reported in the HLS manifest is incorrect. It is set to the initially received resolution, which usually is much lower than what is possible and does not reflect any upscaling that happens over the duration of the WebRTC connection.

  **Workaround:** None.
+ Subsequent client instances created after the initial page is loaded may not respond to `maxFramerate` settings that are different from the first client instance.

  **Workaround:** Set `StreamConfig` only once, through the `IVSBroadcastClient.create` function when the first client instance is created. 
+ On iOS, capturing multiple video device sources is not supported by WebKit.

  **Workaround:** Follow [this issue](https://bugs.webkit.org/show_bug.cgi?id=238492) to track development progress.
+ On iOS, calling `getUserMedia()` once you already have a video source will stop any other video source retrieved using `getUserMedia()`.

  **Workaround:** None.
+ WebRTC dynamically chooses the best bitrate and resolution for the resources that are available. Your stream will not be high quality if your hardware or network cannot support it. The quality of your stream may change during the broadcast as more or fewer resources are available.

  **Workaround:** Provide at least 200 kbps upload.
+ If Auto-Record to Amazon S3 is enabled for a channel and the Web Broadcast SDK is used, recording to the same S3 prefix may not work, as the Web Broadcast SDK dynamically changes bitrates and qualities.

  **Workaround:** None.
+ When using Next.js, an `Uncaught ReferenceError: self is not defined` error may be encountered, depending on how the SDK is imported.

  **Workaround:** [Dynamically import the library](https://nextjs.org/docs/pages/building-your-application/optimizing/lazy-loading) when using Next.js.
+ You may be unable to import the module using a script tag of type `module`; i.e., `<script type="module" src="...">`.

  **Workaround:** The library does not have an ES6 build. Remove the `type="module"` from the script tag.

## Safari Limitations
<a name="broadcast-web-safari-limitations"></a>
+ Denying a permissions prompt requires resetting the permission in Safari website settings at the OS level.
+ Safari does not natively detect all devices as effectively as Firefox or Chrome. For example, OBS Virtual Camera does not get detected.

## Firefox Limitations
<a name="broadcast-web-firefox-limitations"></a>
+ System permissions need to be enabled for Firefox to screen share. After enabling them, the user must restart Firefox for it to work correctly; otherwise, if permissions are perceived as blocked, the browser will throw a [NotFoundError](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia#exceptions) exception.
+ The `getCapabilities` method is missing. This means users cannot get the media track's resolution or aspect ratio. See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1179084).
+ Several `AudioContext` properties are missing; e.g., latency and channel count. This could pose a problem for advanced users who want to manipulate the audio tracks.
+ Camera feeds from `getUserMedia` are restricted to a 4:3 aspect ratio on macOS. See [bugzilla thread 1](https://bugzilla.mozilla.org/show_bug.cgi?id=1193640) and [bugzilla thread 2](https://bugzilla.mozilla.org/show_bug.cgi?id=1306034).
+ Audio capture is not supported with `getDisplayMedia`. See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1541425).
+ Framerate in screen capture is suboptimal (approximately 15 fps). See this [bugzilla thread](https://bugzilla.mozilla.org/show_bug.cgi?id=1703522).

# IVS Broadcast SDK: Android Guide | Low-Latency Streaming
<a name="broadcast-android"></a>

The IVS Low-Latency Streaming Android Broadcast SDK provides the interfaces required to broadcast to IVS on Android.

The `com.amazonaws.ivs.broadcast` package implements the interface described in this document. The following operations are supported: 
+ Set up (initialize) a broadcast session. 
+ Manage broadcasting.
+ Attach and detach input devices.
+ Manage a composition session. 
+ Receive events. 
+ Receive errors. 

**Latest version of Android broadcast SDK:** 1.41.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/release-notes.html#apr09-26-broadcast-mobile-ll)) 

**Reference documentation:** For information on the most important methods available in the Amazon IVS Android broadcast SDK, see the reference documentation at [https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/android/](https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/android/).

**Sample code:** See the Android sample repository on GitHub: [https://github.com/aws-samples/amazon-ivs-broadcast-android-sample](https://github.com/aws-samples/amazon-ivs-broadcast-android-sample).

**Platform requirements:** Android 9.0 and later.

# Getting Started with the IVS Android Broadcast SDK | Low-Latency Streaming
<a name="broadcast-android-getting-started"></a>

This document takes you through the steps involved in getting started with the Amazon IVS low-latency streaming Android broadcast SDK.

## Install the Library
<a name="broadcast-android-install"></a>

To add the Amazon IVS Android broadcast library to your Android development environment, add the library to your module’s `build.gradle` file, as shown here (for the latest version of the Amazon IVS broadcast SDK):

```
repositories {
    mavenCentral()
}
dependencies {
     implementation 'com.amazonaws:ivs-broadcast:1.41.0'
}
```

Alternately, to install the SDK manually, download the latest version from this location:
+ [https://search.maven.org/artifact/com.amazonaws/ivs-broadcast](https://search.maven.org/artifact/com.amazonaws/ivs-broadcast)

## Using the SDK with Debug Symbols
<a name="broadcast-android-using-debug-symbols-ll"></a>

We also publish a version of the Android broadcast SDK that includes debug symbols. If you run into crashes in the IVS broadcast SDK (i.e., in `libbroadcastcore.so`), you can use this version to improve the quality of debug reports (stack traces) in Firebase Crashlytics. When you report these crashes to the IVS SDK team, the higher-quality stack traces make it easier to fix the issues.

To use this version of the SDK, put the following in your Gradle build files:

```
implementation "com.amazonaws:ivs-broadcast:$version:unstripped@aar"
```

Use the above line instead of this:

```
implementation "com.amazonaws:ivs-broadcast:$version@aar"
```

### Uploading Symbols to Firebase Crashlytics
<a name="android-debug-symbols-ll-firebase-crashlytics"></a>

Ensure that your Gradle build files are set up for Firebase Crashlytics. Follow Google’s instructions here:

[https://firebase.google.com/docs/crashlytics/ndk-reports](https://firebase.google.com/docs/crashlytics/ndk-reports)

Be sure to include `com.google.firebase:firebase-crashlytics-ndk` as a dependency.

When building your app for release, the Firebase Crashlytics plugin should upload symbols automatically. To upload symbols manually, run either of the following:

```
gradle uploadCrashlyticsSymbolFileRelease
```

```
./gradlew uploadCrashlyticsSymbolFileRelease
```

(It will not hurt if symbols are uploaded twice, both automatically and manually.)

### Preventing your Release .apk from Becoming Larger
<a name="android-debug-symbols-ll-sizing-apk"></a>

Before packaging the release `.apk` file, the Android Gradle Plugin automatically tries to strip debug information from shared libraries (including the IVS broadcast SDK's `libbroadcastcore.so` library). However, sometimes this does not happen. As a result, your `.apk` file could become larger and you could get a warning message from the Android Gradle Plugin that it’s unable to strip debug symbols and is packaging `.so` files as is. If this happens, do the following:
+ Install an Android NDK. Any recent version will work.
+ Add `ndkVersion <your_installed_ndk_version_number>` to your application’s `build.gradle` file. Do this even if your application itself does not contain native code.

For more information, see this [issue report](https://issuetracker.google.com/issues/353554169).

## Create the Event Listener
<a name="broadcast-android-create-event-listener"></a>

Setting up an event listener allows you to receive state updates, device-change notifications, errors, and session-audio information.

```
BroadcastSession.Listener broadcastListener = 
          new BroadcastSession.Listener() {
    @Override
    public void onStateChanged(@NonNull BroadcastSession.State state) {
        Log.d(TAG, "State=" + state);
    }

    @Override
    public void onError(@NonNull BroadcastException exception) {
        Log.e(TAG, "Exception: " + exception);
    }
};
```

## Request Permissions
<a name="broadcast-android-permissions"></a>

Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)

Here, we check whether the user has already granted permissions and, if not, ask for them:

```
final String[] requiredPermissions =
         { Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO };

for (String permission : requiredPermissions) {
    if (ContextCompat.checkSelfPermission(this, permission) 
                != PackageManager.PERMISSION_GRANTED) {
        // If any permissions are missing we want to just request them all.
        ActivityCompat.requestPermissions(this, requiredPermissions, 0x100);
        break;
    }
}
```

Here, we get the user’s response:

```
@Override
public void onRequestPermissionsResult(int requestCode, 
                                      @NonNull String[] permissions,
                                      @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode,
               permissions, grantResults);
    if (requestCode == 0x100) {
        for (int result : grantResults) {
            if (result == PackageManager.PERMISSION_DENIED) {
                return;
            }
        }
        setupBroadcastSession();
    }
}
```

## Create the Broadcast Session
<a name="broadcast-android-create-session"></a>

The broadcast interface is `com.amazonaws.ivs.broadcast.BroadcastSession`. Initialize it with a preset, as shown below. If there are any errors during initialization (such as a failure to configure a codec) your `BroadcastListener` will get an error message and `broadcastSession.isReady` will be `false`.

**Important:** All calls to the Amazon IVS Broadcast SDK for Android *must* be made on the thread on which the SDK is instantiated. *A call from a different thread will cause the SDK to throw a fatal error and stop broadcasting*.

```
// Create a broadcast-session instance and sign up to receive broadcast
// events and errors.
Context ctx = getApplicationContext();
broadcastSession = new BroadcastSession(ctx,
                       broadcastListener,
                       Presets.Configuration.STANDARD_PORTRAIT,
                       Presets.Devices.FRONT_CAMERA(ctx));
```

Also see [Create the Broadcast Session (Advanced Version)](broadcast-android-use-cases.md#broadcast-android-create-session-advanced).

## Set the ImagePreviewView for Preview
<a name="broadcast-android-set-imagepreviewview"></a>

If you want to display a preview for an active camera device, add a preview `ImagePreviewView` for the device to your view hierarchy.

```
// awaitDeviceChanges will fire on the main thread after all pending devices 
// attachments have been completed
broadcastSession.awaitDeviceChanges(() -> {
    for(Device device: broadcastSession.listAttachedDevices()) {
        // Find the camera we attached earlier
        if(device.getDescriptor().type == Device.Descriptor.DeviceType.CAMERA) {
            LinearLayout previewHolder = findViewById(R.id.previewHolder);
            ImagePreviewView preview = ((ImageDevice)device).getPreviewView();
            preview.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.MATCH_PARENT,
                    LinearLayout.LayoutParams.MATCH_PARENT));
            previewHolder.addView(preview);
        }
    }
});
```

## Start a Broadcast
<a name="broadcast-android-start"></a>

The hostname that you receive in the `ingestEndpoint` response field of the `GetChannel` operation needs to have `rtmps://` prepended and `/app` appended. The complete URL should be in this format: `rtmps://{{ ingestEndpoint }}/app`

```
broadcastSession.start(IVS_RTMPS_URL, IVS_STREAMKEY);
```

The Android broadcast SDK supports only RTMPS ingest (not insecure RTMP ingest).

## Stop a Broadcast
<a name="broadcast-android-stop"></a>

```
broadcastSession.stop();
```

## Release the Broadcast Session
<a name="broadcast-android-release-session"></a>

You *must call* the `broadcastSession.release()` method when the broadcast session is no longer in use, to free the resources used by the library.

```
@Override
protected void onDestroy() {
    super.onDestroy();
    previewHolder.removeAllViews();
    broadcastSession.release();
}
```

# Advanced Use Cases for the IVS Android Broadcast SDK | Low-Latency Streaming
<a name="broadcast-android-use-cases"></a>

Here we present some advanced use cases. Start with the basic setup above and continue here. 

## Create a Broadcast Configuration
<a name="broadcast-android-create-configuration"></a>

Here we create a custom configuration with two mixer slots that allow us to bind two video sources to the mixer. One (`custom`) is full screen and laid out behind the other (`camera`), which is smaller and in the bottom-right corner. Note that for the `custom` slot we do not set a position, size, or aspect mode. Because we do not set these parameters, the slot will use the video settings for size and position.

```
BroadcastConfiguration config = BroadcastConfiguration.with($ -> {
    $.audio.setBitrate(128_000);
    $.video.setMaxBitrate(3_500_000);
    $.video.setMinBitrate(500_000);
    $.video.setInitialBitrate(1_500_000);
    $.video.setSize(1280, 720);
    $.mixer.slots = new BroadcastConfiguration.Mixer.Slot[] {
            BroadcastConfiguration.Mixer.Slot.with(slot -> {
                // Do not automatically bind to a source
                slot.setPreferredAudioInput(
                           Device.Descriptor.DeviceType.UNKNOWN);
                // Bind to user image if unbound
                slot.setPreferredVideoInput(
                           Device.Descriptor.DeviceType.USER_IMAGE);
                slot.setName("custom");
                return slot;
            }),
            BroadcastConfiguration.Mixer.Slot.with(slot -> {
                slot.setzIndex(1);
                slot.setAspect(BroadcastConfiguration.AspectMode.FILL);
                slot.setSize(300, 300);
                slot.setPosition($.video.getSize().x - 350,
                        $.video.getSize().y - 350);
                slot.setName("camera");
                return slot;
            })
    };
    return $;
});
```
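The bottom-right placement above is just arithmetic on the canvas size: a 300x300 slot positioned at `size - 350` leaves a 50-pixel margin from each edge. A standalone sketch of that calculation (the `SlotLayout` class and helper name are ours, for illustration only):

```java
public class SlotLayout {

    // Hypothetical helper: top-left position for a slot of the given size,
    // inset from the bottom-right corner of the canvas by a margin.
    static int[] bottomRightPosition(int canvasW, int canvasH,
                                     int slotW, int slotH, int margin) {
        return new int[] { canvasW - slotW - margin, canvasH - slotH - margin };
    }

    public static void main(String[] args) {
        // 1280x720 canvas, 300x300 slot, 50 px margin -> (930, 370),
        // matching the (size - 350) positions used in the configuration above.
        int[] pos = bottomRightPosition(1280, 720, 300, 300, 50);
        System.out.println(pos[0] + ", " + pos[1]); // 930, 370
    }
}
```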

## Create the Broadcast Session (Advanced Version)
<a name="broadcast-android-create-session-advanced"></a>

Create a `BroadcastSession` as you did in the [basic example](broadcast-android-getting-started.md#broadcast-android-create-session), but provide your custom configuration here. Also provide `null` for the device array, as we will add those manually.

```
// Create a broadcast-session instance and sign up to receive broadcast
// events and errors.
Context ctx = getApplicationContext();
broadcastSession = new BroadcastSession(ctx,
                       broadcastListener,
                       config, // The configuration we created above
                       null); // We’ll manually attach devices after
```

## Iterate and Attach a Camera Device
<a name="broadcast-android-attach-camera"></a>

Here we iterate through input devices that the SDK has detected. On Android 6 and 7, only the default microphone is returned, because the Amazon IVS Broadcast SDK does not support selecting non-default audio devices on those versions of Android.

Once we find a device that we want to use, we call `attachDevice` to attach it. A lambda function is called on the main thread when attachment of the input device has completed. If attachment fails, you receive an error through the Listener.

```
for(Device.Descriptor desc: BroadcastSession.listAvailableDevices(getApplicationContext())) {
    if(desc.type == Device.Descriptor.DeviceType.CAMERA &&
            desc.position == Device.Descriptor.Position.FRONT) {
        session.attachDevice(desc, device -> {
            LinearLayout previewHolder = findViewById(R.id.previewHolder);
            ImagePreviewView preview = ((ImageDevice)device).getPreviewView();
            preview.setLayoutParams(new LinearLayout.LayoutParams(
                    LinearLayout.LayoutParams.MATCH_PARENT,
                    LinearLayout.LayoutParams.MATCH_PARENT));
            previewHolder.addView(preview);
            // Bind the camera to the mixer slot we created above.
            session.getMixer().bind(device, "camera");
        });
        break;
    }
}
```

## Swap Cameras
<a name="broadcast-android-swap-cameras"></a>

```
// This assumes you’ve kept a reference called "currentCamera" that points to
// a front-facing camera
for(Device.Descriptor desc: BroadcastSession.listAvailableDevices(getApplicationContext())) {
   if(desc.type == Device.Descriptor.DeviceType.CAMERA &&
          desc.position != currentCamera.getDescriptor().position) {
        // Remove the preview view for the old device.
        // setImagePreviewView is an example function
        // that handles your view hierarchy.
        setImagePreviewView(null);
        session.exchangeDevices(currentCamera, desc, camera -> {
             // Set the preview view for the new device.
             setImagePreviewView(((ImageDevice) camera).getPreviewView());
             currentCamera = camera;
        });
        break;
   }
}
```

## Create an Input Surface
<a name="broadcast-android-create-input-surface"></a>

To input sound or image data that your app generates, use `createImageInputSource` or `createAudioInputSource`. Both these methods create and attach virtual devices that can be bound to the mixer like any other device.

The `SurfaceSource` returned by `createImageInputSource` has a `getInputSurface` method, which will give you a `Surface` that you can use with the Camera2 API, OpenGL, or Vulkan, or anything else that can write to a Surface.

The `AudioDevice` returned by `createAudioInputSource` can receive Linear PCM data generated by AudioRecorder or other means.

```
SurfaceSource source = session.createImageInputSource();
Surface surface = source.getInputSurface();
session.getMixer().bind(source, "custom");
```

## Detach a Device
<a name="broadcast-android-detach-device"></a>

If you want to detach a device without replacing it, detach it using its `Device` or `Device.Descriptor`.

```
session.detachDevice(currentCamera);
```

## Screen and System Audio Capture
<a name="broadcast-android-screen-audio-capture"></a>

The Amazon IVS Broadcast SDK for Android includes some helpers that simplify capturing the device’s screen (Android 6 and higher) and system audio (Android 10 and higher). If you want to manage these manually, you can create a custom image-input source and a custom audio-input source.

To create a screen and system audio-capture session, you must first create a permission-request intent:

```
public void startScreenCapture() {
    MediaProjectionManager manager =
                         (MediaProjectionManager) getApplicationContext()
                         .getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    if(manager != null) {
        Intent intent = manager.createScreenCaptureIntent();
        startActivityIfNeeded(intent, SCREEN_CAPTURE_REQUEST_ID);
    }
}
```

To use this feature, you must provide a class that extends `com.amazonaws.ivs.broadcast.SystemCaptureService`. You do not have to override any of its methods, but the class needs to be there to avoid any potential collisions between services.

You also must add a couple of elements to your Android manifest:

```
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
<application ...>
    <service android:name=".ExampleSystemCaptureService"
         android:foregroundServiceType="mediaProjection" 
         android:isolatedProcess="false" />
</application>
...
```

Your class that extends `SystemCaptureService` must be named in the `<service>` element. On Android 9 and later, the `foregroundServiceType` must be `mediaProjection`.

Once the permission-request intent has returned, you may proceed with creating the screen and system audio-capture session. On Android 8 and later, you must provide a notification to be displayed in your user’s Notification Panel. The Amazon IVS Broadcast SDK for Android provides the convenience method `createServiceNotificationBuilder`. Alternatively, you may provide your own notification. 

```
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if(requestCode != SCREEN_CAPTURE_REQUEST_ID
       || Activity.RESULT_OK != resultCode) {
        return;
    }
    Notification notification = null;
    if(Build.VERSION.SDK_INT >= 26) {
        Intent intent = new Intent(getApplicationContext(),
                                   NotificationActivity.class);
        notification = session
                         .createServiceNotificationBuilder("example",
                                            "example channel", intent)
                         .build();
    }
    session.createSystemCaptureSources(data,
                  ExampleSystemCaptureService.class,
                  notification,
                  devices -> {
        // This step is optional if the mixer slots have been given preferred
        // input device types SCREEN and SYSTEM_AUDIO
        for (Device device : devices) {
            session.getMixer().bind(device, "game");
        }
    });
}
```

## Get Recommended Broadcast Settings
<a name="broadcast-android-recommended-settings"></a>

To evaluate your user’s connection before starting a broadcast, use the `recommendedVideoSettings` method to run a brief test. As the test runs, you will receive several recommendations, ordered from most to least recommended. In this version of the SDK, it is not possible to reconfigure the current `BroadcastSession`, so you will need to `release()` it and then create a new one with the recommended settings. You will continue to receive `BroadcastSessionTest.Results` until the `Result.status` is `SUCCESS` or `ERROR`. You can check progress with `Result.progress`.

Amazon IVS supports a maximum bitrate of 8.5 Mbps (for channels whose `type` is `STANDARD` or `ADVANCED`), so the `maximumBitrate` returned by this method never exceeds 8.5 Mbps. To account for small fluctuations in network performance, the recommended `initialBitrate` returned by this method is slightly less than the true bitrate measured in the test. (Using 100% of the available bandwidth usually is inadvisable.)

```
void runBroadcastTest() {
    this.test = session.recommendedVideoSettings(RTMPS_ENDPOINT, RTMPS_STREAMKEY,
        result -> {
            if (result.status == BroadcastSessionTest.Status.SUCCESS) {
                this.recommendation = result.recommendations[0];
            }
        });
}
```
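The headroom and cap described above are simple arithmetic; a standalone sketch, where the 15% headroom figure and the `BitrateAdvisor` helper are illustrative assumptions, not actual SDK behavior:

```java
public class BitrateAdvisor {

    // IVS maximum bitrate for STANDARD and ADVANCED channel types.
    static final int MAX_BITRATE_CAP = 8_500_000;

    // Hypothetical sketch: derive a conservative initial bitrate from a
    // measured bitrate, leaving headroom for network fluctuations and
    // never exceeding the service cap.
    static int initialBitrateFor(int measuredBitrate) {
        int withHeadroom = (int) (measuredBitrate * 0.85); // illustrative 15% headroom
        return Math.min(withHeadroom, MAX_BITRATE_CAP);
    }

    public static void main(String[] args) {
        System.out.println(initialBitrateFor(4_000_000));  // 3400000
        System.out.println(initialBitrateFor(12_000_000)); // capped at 8500000
    }
}
```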

## Using Auto-Reconnect
<a name="broadcast-android-auto-reconnect"></a>

IVS supports automatic reconnection to a broadcast if the broadcast stops unexpectedly without calling the `stop` API; e.g., a temporary loss in network connectivity. To enable auto-reconnect, call `setEnabled(true)` on `BroadcastConfiguration.autoReconnect`.

When something causes the stream to unexpectedly stop, the SDK retries up to 5 times, following a linear backoff strategy. It notifies your application about the retry state through the `BroadcastSession.Listener.onRetryStateChanged` method.

Behind the scenes, auto-reconnect uses IVS [stream-takeover](streaming-config.md#streaming-config-stream-takeover) functionality by appending a priority number, starting with 1, to the end of the provided stream key. For the duration of the `BroadcastSession` instance, that number is incremented by 1 each time a reconnect is attempted. This means if the device’s connection is lost 4 times during a broadcast, and each loss requires 1-4 retry attempts, the priority of the last stream up could be anywhere between 5 and 17. Because of this, *we recommend that you do not use IVS stream takeover from another device while auto-reconnect is enabled in the SDK for the same channel*. There is no guarantee which priority the SDK is using at a given time, and the SDK will try to reconnect with a higher priority if another device takes over.
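The incrementing priority can be illustrated with plain string handling. This is a sketch of the documented behavior only: the exact key format is internal to the SDK, and the `ReconnectPriority` helper and the placeholder stream key are ours:

```java
public class ReconnectPriority {

    // Sketch: the SDK appends an incrementing priority number to the
    // stream key on each reconnect attempt within a session's lifetime.
    static String takeoverStreamKey(String streamKey, int priority) {
        return streamKey + priority;
    }

    public static void main(String[] args) {
        int priority = 1; // starts at 1 for the first attempt
        for (int attempt = 0; attempt < 3; attempt++) {
            System.out.println(takeoverStreamKey("sk_example_", priority));
            priority++; // incremented on every retry, never reset mid-session
        }
        // sk_example_1
        // sk_example_2
        // sk_example_3
    }
}
```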

## Using Bluetooth Microphones
<a name="broadcast-android-bluetooth-microphones"></a>

To broadcast using Bluetooth microphone devices, you must start a Bluetooth SCO connection:

```
Bluetooth.startBluetoothSco(context);
// Now Bluetooth microphones can be used
…
// You must also stop Bluetooth SCO when you are done with it
Bluetooth.stopBluetoothSco(context);
```

# Known Issues & Workarounds in the IVS Android Broadcast SDK | Low-Latency Streaming
<a name="broadcast-android-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS low-latency streaming Android broadcast SDK and suggests potential workarounds.
+ Using an external microphone connected through Bluetooth can be unstable. When a Bluetooth device is connected or disconnected during a broadcasting session, microphone input may stop working until the device is explicitly detached and reattached.

  **Workaround:** If you plan to use a Bluetooth headset, connect it before starting the broadcast and leave it connected throughout the broadcast.
+ The broadcast SDK does not support access to external cameras connected via USB.

  **Workaround:** Do not use external cameras connected via USB. 
+ Submitting audio data faster than realtime (using a custom audio source) results in audio drift.

  **Workaround:** Do not submit audio data faster than realtime. 
+ Android 6 and 7 devices cannot receive the broadcast SDK's `onDeviceAdded` and `onDeviceRemoved` callbacks for microphones, because these Android versions allow only the system’s default microphone.

  **Workaround:** For these devices, the broadcast SDK uses the system's default microphone.
+ When an `ImagePreviewView` is removed from a parent (e.g., `removeView()` is called on the parent), the `ImagePreviewView` is released immediately. The `ImagePreviewView` does not show any frames when it is added to another parent view.

  **Workaround:** Request another preview using `getPreview`.
+ Some Android video encoders cannot be configured with a video size less than 176x176. Configuring a smaller size causes an error and prevents streaming.

  **Workaround:** Do not configure the video size to be less than 176x176.
+ Enabling B-frames can improve compression quality; however some encoders provide less precise bitrate control when B-frames are enabled, which may cause issues during network fluctuations.

  **Workaround:** Consider disabling B-frames if consistent bitrate adherence is more important than compression efficiency for your use case.

# IVS Broadcast SDK: iOS Guide | Low-Latency Streaming
<a name="broadcast-ios"></a>

The IVS Low-Latency Streaming iOS Broadcast SDK provides the interfaces required to broadcast to Amazon IVS on iOS.

The `AmazonIVSBroadcast` module implements the interface described in this document. The following operations are supported:
+ Set up (initialize) a broadcast session. 
+ Manage broadcasting.
+ Attach and detach input devices.
+ Manage a composition session. 
+ Receive events. 
+ Receive errors. 

**Latest version of iOS broadcast SDK:** 1.41.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/release-notes.html#apr09-26-broadcast-mobile-ll))

**Reference documentation:** For information on the most important methods available in the Amazon IVS iOS broadcast SDK, see the reference documentation at [https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/](https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/).

**Sample code:** See the iOS sample repository on GitHub: [https://github.com/aws-samples/amazon-ivs-broadcast-ios-sample](https://github.com/aws-samples/amazon-ivs-broadcast-ios-sample).

**Platform requirements:** iOS 14 and later

## How iOS Chooses Camera Resolution and Frame Rate
<a name="ios-publish-subscribe-resolution-framerate"></a>

The camera managed by the broadcast SDK optimizes its resolution and frame rate (frames-per-second, or FPS) to minimize heat production and energy consumption. This section explains how the resolution and frame rate are selected to help host applications optimize for their use cases.

When attaching an `IVSCamera` to an `IVSBroadcastSession`, the camera is optimized for a frame rate of `IVSVideoConfiguration.targetFramerate` and a resolution of `IVSVideoConfiguration.size`. These values are provided to the `IVSBroadcastSession` on initialization. 

# Getting Started with the IVS iOS Broadcast SDK | Low-Latency Streaming
<a name="broadcast-ios-getting-started"></a>

This document takes you through the steps involved in getting started with the Amazon IVS low-latency streaming iOS broadcast SDK.

## Install the Library
<a name="broadcast-ios-install"></a>

We recommend that you integrate the broadcast SDK via Swift Package Manager. (Alternatively, you can manually add the framework to your project.)

### Recommended: Integrate the Broadcast SDK (Swift Package Manager)
<a name="broadcast-ios-install-swift"></a>

1. Download the Package.swift file from [https://broadcast.live-video.net/1.41.0/Package.swift](https://broadcast.live-video.net/1.41.0/Package.swift).

1. In your project, create a new directory named AmazonIVSBroadcast and add it to version control.

1. Place the downloaded Package.swift file in the new directory.

1. In Xcode, go to **File > Add Package Dependencies** and select **Add Local...**

1. Navigate to and select the AmazonIVSBroadcast directory that you created, and select **Add Package**.

1. When prompted to **Choose Package Products for AmazonIVSBroadcast**, select **AmazonIVSBroadcast** as the **Package Product** and set your application target in the **Add to Target** section.

1. Select **Add Package**.

### Alternate Approach: Install the Framework Manually
<a name="broadcast-ios-install-manual"></a>

1. Download the latest version from [https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast.xcframework.zip](https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast.xcframework.zip).

1. Extract the contents of the archive. `AmazonIVSBroadcast.xcframework` contains the SDK for both device and simulator.

1. Embed `AmazonIVSBroadcast.xcframework` by dragging it into the **Frameworks, Libraries, and Embedded Content** section of the **General** tab for your application target.  
![\[The Frameworks, Libraries, and Embedded Content section of the General tab for your application target.\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/images/iOS_Broadcast_SDK_Guide_xcframework.png)

## Implement IVSBroadcastSession.Delegate
<a name="broadcast-ios-implement-ivsbroadcastsessiondelegate"></a>

Implement `IVSBroadcastSession.Delegate`, which allows you to receive state updates and device-change notifications:

```
extension ViewController : IVSBroadcastSession.Delegate {
   func broadcastSession(_ session: IVSBroadcastSession,
                         didChange state: IVSBroadcastSession.State) {
      print("IVSBroadcastSession did change state \(state)")
   }

   func broadcastSession(_ session: IVSBroadcastSession,
                         didEmitError error: Error) {
      print("IVSBroadcastSession did emit error \(error)")
   }
}
```

## Request Permissions
<a name="broadcast-ios-permissions"></a>

Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)

Here, we check whether the user has already granted permissions and, if not, we ask for them:

```
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
   break // permission already granted.
case .notDetermined:
   AVCaptureDevice.requestAccess(for: .video) { granted in
       // permission granted based on the granted bool.
   }
case .denied, .restricted:
   break // permission denied.
@unknown default:
   break // permission status unknown.
}
```

You need to do this for both `.video` and `.audio` media types, if you want access to cameras and microphones, respectively.

You also need to add entries for `NSCameraUsageDescription` and `NSMicrophoneUsageDescription` to your `Info.plist`. Otherwise, your app will crash when trying to request permissions.

## Disable the Application Idle Timer
<a name="broadcast-ios-disable-idle-timer"></a>

This is optional but recommended. It prevents your device from going to sleep while using the broadcast SDK, which would interrupt the broadcast.

```
override func viewDidAppear(_ animated: Bool) {
   super.viewDidAppear(animated)
   UIApplication.shared.isIdleTimerDisabled = true
}
override func viewDidDisappear(_ animated: Bool) {
   super.viewDidDisappear(animated)
   UIApplication.shared.isIdleTimerDisabled = false
}
```

## (Optional) Set Up AVAudioSession
<a name="broadcast-ios-setup-avaudiosession"></a>

By default, the broadcast SDK will set up your application’s `AVAudioSession`. If you want to manage this yourself, set `IVSBroadcastSession.applicationAudioSessionStrategy` to `noAction`. Without control of the `AVAudioSession`, the broadcast SDK cannot manage microphones internally. To use microphones with the `noAction` option, you can create an `IVSCustomAudioSource` and provide your own samples via an `AVCaptureSession`, `AVAudioEngine` or another tool that provides PCM audio samples.

If you are setting up your `AVAudioSession` manually, at a minimum you need to set the category to `.record` or `.playbackAndRecord`, and make the session active. If you want to record audio from Bluetooth devices, you need to specify the `.allowBluetooth` option as well:

```
do {
   try AVAudioSession.sharedInstance().setCategory(.record, options: .allowBluetooth)
   try AVAudioSession.sharedInstance().setActive(true)
} catch {
   print("Error configuring AVAudioSession")
}
```

We recommend that you let the SDK handle this for you. Otherwise, if you want to choose between different audio devices, you will need to manually manage the ports.

## Create the Broadcast Session
<a name="broadcast-ios-create-session"></a>

The broadcast interface is `IVSBroadcastSession`. Initialize it as shown below:

```
let broadcastSession = try IVSBroadcastSession(
   configuration: IVSPresets.configurations().standardLandscape(),
   descriptors: IVSPresets.devices().frontCamera(),
   delegate: self)
```

Also see [Create the Broadcast Session (Advanced Version)](broadcast-ios-use-cases.md#broadcast-ios-create-session-advanced).

## Set the IVSImagePreviewView for Preview
<a name="broadcast-ios-set-imagepreviewview"></a>

If you want to display a preview for an active camera device, add the preview `IVSImagePreviewView` for the device to your view hierarchy:

```
// If the session was just created, execute the following 
// code in the callback of IVSBroadcastSession.awaitDeviceChanges 
// to ensure all devices have been attached.
if let devicePreview = try broadcastSession.listAttachedDevices()
   .compactMap({ $0 as? IVSImageDevice })
   .first?
   .previewView()
{
   previewView.addSubview(devicePreview)
}
```

## Start a Broadcast
<a name="broadcast-ios-start"></a>

The hostname that you receive in the `ingestEndpoint` response field of the `GetChannel` operation needs to have `rtmps://` prepended and `/app` appended. The complete URL should be in this format: `rtmps://{{ ingestEndpoint }}/app`

```
try broadcastSession.start(with: IVS_RTMPS_URL, streamKey: IVS_STREAMKEY)
```

The iOS broadcast SDK supports only RTMPS ingest (not insecure RTMP ingest).

## Stop a Broadcast
<a name="broadcast-ios-stop"></a>

```
broadcastSession.stop()
```

## Manage Lifecycle Events
<a name="broadcast-ios-lifecycle-events"></a>

### Audio Interruptions
<a name="broadcast-ios-audio-interruptions"></a>

There are several scenarios where the broadcast SDK will not have exclusive access to audio-input hardware. Some example scenarios that you need to handle are:
+ User receives a phone call or FaceTime call
+ User activates Siri

Apple makes it easy to respond to these events by subscribing to `AVAudioSession.interruptionNotification`:

```
NotificationCenter.default.addObserver(
   self,
   selector: #selector(audioSessionInterrupted(_:)),
   name: AVAudioSession.interruptionNotification,
   object: nil)
```

Then you can handle the event with something like this:

```
// This assumes you have a variable `isRunning` which tracks if the broadcast is currently live, and another variable `wasRunningBeforeInterruption` which tracks whether the broadcast was active before this interruption to determine if it should resume after the interruption has ended.

@objc
private func audioSessionInterrupted(_ notification: Notification) {
   guard let userInfo = notification.userInfo,
         let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
         let type = AVAudioSession.InterruptionType(rawValue: typeValue)
   else {
      return
   }
   switch type {
   case .began:
      wasRunningBeforeInterruption = isRunning
      if isRunning {
         broadcastSession.stop()
      }
   case .ended:
      defer {
         wasRunningBeforeInterruption = false
      }
      guard let optionsValue = userInfo[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
      let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
      if options.contains(.shouldResume) && wasRunningBeforeInterruption {
         try? broadcastSession.start(
            with: IVS_RTMPS_URL,
            streamKey: IVS_STREAMKEY)
      }
   @unknown default: break
   }
}
```

### App Going Into Background
<a name="broadcast-ios-app-to-background"></a>

Standard applications on iOS are not allowed to use cameras in the background. There also are restrictions on video encoding in the background: since hardware encoders are limited, only foreground applications have access. Because of this, the broadcast SDK automatically terminates its session and sets its `isReady` property to `false`. When your application is about to enter the foreground again, the broadcast SDK reattaches all the devices to their original `IVSMixerSlotConfiguration` entries.

The broadcast SDK does this by responding to `UIApplication.didEnterBackgroundNotification` and `UIApplication.willEnterForegroundNotification`.

If you are providing custom image sources, you should be prepared to handle these notifications. You may need to take extra steps to tear them down before the stream is terminated.

See [Use Background Video](broadcast-ios-use-cases.md#broadcast-ios-background-video) for a workaround that enables streaming while your application is in the background.

### Media Services Lost
<a name="broadcast-ios-media-services-lost"></a>

In very rare cases, the entire media subsystem on an iOS device will crash. In this scenario, we can no longer broadcast. It is up to your application to respond to these notifications appropriately. At a minimum, subscribe to these notifications:
+ [mediaServicesWereLostNotification](https://developer.apple.com/documentation/avfaudio/avaudiosession/1616457-mediaserviceswerelostnotificatio) — Respond by stopping your broadcast and completely deallocating your `IVSBroadcastSession`. All internal components used by the broadcast session will be invalidated.
+ [mediaServicesWereResetNotification](https://developer.apple.com/documentation/avfaudio/avaudiosession/1616540-mediaserviceswereresetnotificati) — Respond by notifying your users that they can broadcast again. Depending on your use case, you may be able to automatically start broadcasting again at this point.

# Advanced Use Cases for the IVS iOS Broadcast SDK | Low-Latency Streaming
<a name="broadcast-ios-use-cases"></a>

Here we present some advanced use cases. Start with the basic setup above and continue here.

## Create a Broadcast Configuration
<a name="broadcast-ios-create-configuration"></a>

Here we create a custom configuration with two mixer slots that allow us to bind two video sources to the mixer. One (`custom`) is full screen and laid out behind the other (`camera`), which is smaller and in the bottom-right corner. Note that for the `custom` slot we do not set a position, size, or aspect mode. Because we do not set these parameters, the slot uses the video settings for size and position.

```
let config = IVSBroadcastConfiguration()
try config.audio.setBitrate(128_000)
try config.video.setMaxBitrate(3_500_000)
try config.video.setMinBitrate(500_000)
try config.video.setInitialBitrate(1_500_000)
try config.video.setSize(CGSize(width: 1280, height: 720))
config.video.defaultAspectMode = .fit
config.mixer.slots = [
    try {
        let slot = IVSMixerSlotConfiguration()
        // Do not automatically bind to a source
        slot.preferredAudioInput = .unknown
        // Bind to user image if unbound
        slot.preferredVideoInput = .userImage
        try slot.setName("custom")
        return slot
    }(),
    try {
        let slot = IVSMixerSlotConfiguration()
        slot.zIndex = 1
        slot.aspect = .fill
        slot.size = CGSize(width: 300, height: 300)
        slot.position = CGPoint(x: config.video.size.width - 400, y: config.video.size.height - 400)
        try slot.setName("camera")
        return slot
    }()
]
```

## Create the Broadcast Session (Advanced Version)
<a name="broadcast-ios-create-session-advanced"></a>

Create an `IVSBroadcastSession` as you did in the [basic example](broadcast-ios-getting-started.md#broadcast-ios-create-session), but provide your custom configuration here. Also provide `nil` for the device array, as we will add those manually.

```
let broadcastSession = try IVSBroadcastSession(
   configuration: config, // The configuration we created above
   descriptors: nil, // We’ll manually attach devices after
   delegate: self)
```

## Iterate and Attach a Camera Device
<a name="broadcast-ios-attach-camera"></a>

Here we iterate through input devices that the SDK has detected. The SDK returns only built-in devices on iOS; even connected Bluetooth audio devices appear as a built-in device. For more information, see [Known Issues & Workarounds in the IVS iOS Broadcast SDK | Low-Latency Streaming](broadcast-ios-issues.md).

Once we find a device that we want to use, we call `attachDevice` to attach it:

```
let frontCamera = IVSBroadcastSession.listAvailableDevices()
    .filter { $0.type == .camera && $0.position == .front }
    .first
if let camera = frontCamera {
    broadcastSession.attach(camera, toSlotWithName: "camera") { device, error in
        // check error
    }
}
```

## Swap Cameras
<a name="broadcast-ios-swap-cameras"></a>

```
// This assumes you’ve kept a reference called `currentCamera` that points to the current camera.
let wants: IVSDevicePosition = (currentCamera.descriptor().position == .front) ? .back : .front
// Remove the current preview view since the device will be changing.
previewView.subviews.forEach { $0.removeFromSuperview() }
let foundCamera = IVSBroadcastSession
        .listAvailableDevices()
        .first { $0.type == .camera && $0.position == wants }
guard let newCamera = foundCamera else { return }
broadcastSession.exchangeOldDevice(currentCamera, withNewDevice: newCamera) { newDevice, _ in
    currentCamera = newDevice
    if let camera = newDevice as? IVSImageDevice {
        do {
            previewView.addSubview(try camera.previewView())
        } catch {
            print("Error creating preview view \(error)")
        }
    }
}
```

## Create a Custom Input Source
<a name="broadcast-ios-create-input-source"></a>

To input sound or image data that your app generates, use `createImageSource` or `createAudioSource`. Both these methods create virtual devices (`IVSCustomImageSource` and `IVSCustomAudioSource`) that can be bound to the mixer like any other device.

The devices returned by both these methods accept a `CMSampleBuffer` through their `onSampleBuffer` function:
+ For video sources, the pixel format must be `kCVPixelFormatType_32BGRA`, `420YpCbCr8BiPlanarFullRange`, or `420YpCbCr8BiPlanarVideoRange`.
+ For audio sources, the buffer must contain Linear PCM data.

You cannot use an `AVCaptureSession` with camera input to feed a custom image source while also using a camera device provided by the broadcast SDK. If you want to use multiple cameras simultaneously, use `AVCaptureMultiCamSession` and provide two custom image sources.

Custom image sources should be used primarily with static content such as images, or with video content:

```
let customImageSource = broadcastSession.createImageSource(withName: "video")
try broadcastSession.attach(customImageSource, toSlotWithName: "custom")
```
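To feed frames into the attached source, pass each `CMSampleBuffer` to `onSampleBuffer`. A minimal sketch, assuming your app already produces sample buffers in one of the supported pixel formats (the `submit` helper name is illustrative):

```
// Illustrative helper: forward an app-produced frame to the custom source.
// `sampleBuffer` must use one of the pixel formats listed above.
func submit(_ sampleBuffer: CMSampleBuffer) {
    customImageSource.onSampleBuffer(sampleBuffer)
}
```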

## Monitor Network Connectivity
<a name="broadcast-ios-network-connection"></a>

It is common for mobile devices to temporarily lose and regain network connectivity while on the go. Because of this, it is important to monitor your app’s network connectivity and respond appropriately when things change. 

When the broadcaster's connection is lost, the broadcast SDK's state will change to `error` and then `disconnected`. You will be notified of these changes through the `IVSBroadcastSessionDelegate`. When you receive these state changes:

1. Monitor your broadcast app’s connectivity state and call `start` with your endpoint and stream key, once your connection has been restored.

1. **Important:** Monitor the state delegate callback and ensure that the state changes to `connected` after calling `start` again.
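These steps can be sketched with the delegate callback, assuming a hypothetical `restartWhenNetworkAvailable()` helper in your app that waits for connectivity (for example, via `NWPathMonitor`) before calling `start` again:

```
extension MyViewController: IVSBroadcastSessionDelegate {
    func broadcastSession(_ session: IVSBroadcastSession,
                          didChange state: IVSBroadcastSession.State) {
        switch state {
        case .error, .disconnected:
            // Hypothetical app helper: wait for connectivity, then call
            // start(with:streamKey:) again and verify the state
            // transitions back to .connected.
            restartWhenNetworkAvailable()
        case .connected:
            print("Broadcast (re)connected")
        default:
            break
        }
    }
}
```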

## Detach a Device
<a name="broadcast-ios-detach-device"></a>

If you want to detach and not replace a device, detach it with `IVSDevice` or `IVSDeviceDescriptor`:

```
broadcastSession.detachDevice(currentCamera)
```

## ReplayKit Integration
<a name="broadcast-ios-replaykit"></a>

To stream the device’s screen and system audio on iOS, you must integrate with [ReplayKit](https://developer.apple.com/documentation/replaykit?language=objc). The Amazon IVS broadcast SDK makes it easy to integrate ReplayKit using `IVSReplayKitBroadcastSession`. In your `RPBroadcastSampleHandler` subclass, create an instance of `IVSReplayKitBroadcastSession`, then:
+ Start the session in `broadcastStarted`
+ Stop the session in `broadcastFinished`

The session object will have three custom sources for screen images, app audio, and microphone audio. Pass the `CMSampleBuffers` provided in `processSampleBuffer` to those custom sources.
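A sketch of that routing in `processSampleBuffer`; the `systemImageSource` property appears later in this guide, while the audio source property names used here are assumptions and may differ in your SDK version:

```
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                  with sampleBufferType: RPSampleBufferType) {
    switch sampleBufferType {
    case .video:
        session.systemImageSource.onSampleBuffer(sampleBuffer)
    case .audioApp:
        session.systemAudioSource.onSampleBuffer(sampleBuffer)  // assumed name
    case .audioMic:
        session.microphoneSource.onSampleBuffer(sampleBuffer)   // assumed name
    @unknown default:
        break
    }
}
```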

To handle device orientation, you need to extract ReplayKit-specific metadata from the sample buffer. Use the following code:

```
let imageSource = session.systemImageSource
if let orientationAttachment = CMGetAttachment(sampleBuffer, key: RPVideoSampleOrientationKey as CFString, attachmentModeOut: nil) as? NSNumber,
    let orientation = CGImagePropertyOrientation(rawValue: orientationAttachment.uint32Value) {
    switch orientation {
    case .up, .upMirrored:
        imageSource.setHandsetRotation(0)
    case .down, .downMirrored:
        imageSource.setHandsetRotation(Float.pi)
    case .right, .rightMirrored:
        imageSource.setHandsetRotation(-(Float.pi / 2))
    case .left, .leftMirrored:
        imageSource.setHandsetRotation((Float.pi / 2))
    }
}
```

It is possible to integrate ReplayKit using `IVSBroadcastSession` instead of `IVSReplayKitBroadcastSession`. However, the ReplayKit-specific variant has several modifications to reduce the internal memory footprint, to stay within Apple’s memory ceiling for broadcast extensions.

## Get Recommended Broadcast Settings
<a name="broadcast-ios-recommended-settings"></a>

To evaluate your user’s connection before starting a broadcast, use `IVSBroadcastSession.recommendedVideoSettings` to run a brief test. As the test runs, you will receive several recommendations, ordered from most to least recommended. In this version of the SDK, it is not possible to reconfigure the current `IVSBroadcastSession`, so you must deallocate it and then create a new one with the recommended settings. You will continue to receive `IVSBroadcastSessionTestResults` until the `result.status` is `Success` or `Error`. You can check progress with `result.progress`.

Amazon IVS supports a maximum bitrate of 8.5 Mbps (for channels whose `type` is `STANDARD` or `ADVANCED`), so the `maximumBitrate` returned by this method never exceeds 8.5 Mbps. To account for small fluctuations in network performance, the recommended `initialBitrate` returned by this method is slightly less than the true bitrate measured in the test. (Using 100% of the available bandwidth usually is inadvisable.)

```
func runBroadcastTest() {
    self.test = session.recommendedVideoSettings(with: IVS_RTMPS_URL, streamKey: IVS_STREAMKEY) { [weak self] result in
        if result.status == .success {
            self?.recommendation = result.recommendations[0]
        }
    }
}
```

## Using Auto-Reconnect
<a name="broadcast-ios-auto-reconnect"></a>

IVS supports automatic reconnection to a broadcast if the broadcast stops unexpectedly without calling the `stop` API; e.g., a temporary loss in network connectivity. To enable auto-reconnect, set the `enabled` property on `IVSBroadcastConfiguration.autoReconnect` to `true`.
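A minimal sketch of enabling this when creating the session (property spelling follows the description above and may differ slightly in your SDK version):

```
let config = IVSBroadcastConfiguration()
// Opt in to automatic reconnection before creating the session.
config.autoReconnect.enabled = true
let session = try IVSBroadcastSession(configuration: config,
                                      descriptors: nil,
                                      delegate: self)
```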

When something causes the stream to unexpectedly stop, the SDK retries up to 5 times, following a linear backoff strategy. It notifies your application about the retry state through the `IVSBroadcastSessionDelegate.didChangeRetryState` function.

Behind the scenes, auto-reconnect uses IVS [stream-takeover](streaming-config.md#streaming-config-stream-takeover) functionality by appending a priority number, starting with 1, to the end of the provided stream key. For the duration of the `IVSBroadcastSession` instance, that number is incremented by 1 each time a reconnect is attempted. This means if the device’s connection is lost 4 times during a broadcast, and each loss requires 1-4 retry attempts, the priority of the last stream up could be anywhere between 5 and 17. Because of this, *we recommend you do not use IVS stream takeover from another device while auto-reconnect is enabled in the SDK for the same channel*. There are no guarantees what priority the SDK is using at the time, and the SDK will try to reconnect with a higher priority if another device takes over.

## Use Background Video
<a name="broadcast-ios-background-video"></a>

You can continue a non-ReplayKit broadcast even while your application is in the background.

To save power and keep foreground applications responsive, iOS gives only one application at a time access to the GPU. The Amazon IVS Broadcast SDK uses the GPU at multiple stages of the video pipeline, including compositing multiple input sources, scaling the image, and encoding the image. While the broadcasting application is in the background, there is no guarantee that the SDK can perform any of these actions.

To address this, use the `createAppBackgroundImageSource` method. It enables the SDK to continue broadcasting both video and audio while in the background. It returns an `IVSBackgroundImageSource`, which is a normal `IVSCustomImageSource` with an additional `finish` function. Every `CMSampleBuffer` provided to the background image source is encoded at the frame rate provided by your original `IVSVideoConfiguration`. Timestamps on the `CMSampleBuffer` are ignored.

The SDK then scales and encodes those images and caches them, automatically looping that feed when your application goes into the background. When your application returns to the foreground, the attached image devices become active again and the pre-encoded stream stops looping.

To undo this process, use `removeImageSourceOnAppBackgrounded`. You do not have to call this unless you want to explicitly revert the SDK’s background behavior; otherwise, it is cleaned up automatically on deallocation of the `IVSBroadcastSession`.

**Note:** *We strongly recommend that you call `createAppBackgroundImageSource` as part of configuring the broadcast session, before the session goes live.* The method is expensive (it encodes video), so performance of a live broadcast while this method is running may be degraded.

### Example: Generating a Static Image for Background Video
<a name="background-video-example-static-image"></a>

Providing a single image to the background source generates a full GOP of that static image.

Here is an example using CIImage:

```
// Create the background image source
guard let source = session.createAppBackgroundImageSource(withAttemptTrim: true, onComplete: { error in
    print("Background Video Generation Done - Error: \(error.debugDescription)")
}) else {
    return
}

// Create a CIImage of the color red.
let ciImage = CIImage(color: .red)

// Convert the CIImage to a CVPixelBuffer
let attrs = [
    kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue,
] as CFDictionary

var pixelBuffer: CVPixelBuffer!
CVPixelBufferCreate(kCFAllocatorDefault,
                    videoConfig.width,
                    videoConfig.height,
                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                    attrs,
                    &pixelBuffer)

let context = CIContext()
context.render(ciImage, to: pixelBuffer)

// Submit the CVPixelBuffer and finish the source
source.add(pixelBuffer)
source.finish()
```

Alternatively, instead of creating a CIImage of a solid color, you can use bundled images. The only code shown here is how to convert a UIImage to a CIImage for use with the previous sample:

```
// Load the pre-bundled image and get its CGImage
guard let cgImage = UIImage(named: "image")?.cgImage else {
    return
}

// Create a CIImage from the CGImage
let ciImage = CIImage(cgImage: cgImage)
```

### Example: Video with AVAssetImageGenerator
<a name="background-video-example-avassetimagegenerator"></a>

You can use an `AVAssetImageGenerator` to generate `CMSampleBuffers` from an `AVAsset` (though not an HLS stream `AVAsset`):

```
// Create the background image source
guard let source = session.createAppBackgroundImageSource(withAttemptTrim: true, onComplete: { error in
    print("Background Video Generation Done - Error: \(error.debugDescription)")
}) else {
    return
}

// Find the URL for the pre-bundled MP4 file
guard let url = Bundle.main.url(forResource: "sample-clip", withExtension: "mp4") else {
    return
}
// Create an image generator from an asset created from the URL.
let generator = AVAssetImageGenerator(asset: AVAsset(url: url))
// It is important to specify a very small time tolerance.
generator.requestedTimeToleranceAfter = .zero
generator.requestedTimeToleranceBefore = .zero

// At 30 fps, this generates 4 seconds' worth of samples.
let times: [NSValue] = (0...120).map { NSValue(time: CMTime(value: $0, timescale: CMTimeScale(videoConfig.targetFramerate))) }
var completed = 0

let context = CIContext(options: [.workingColorSpace: NSNull()])

// Create a pixel buffer pool to efficiently feed the source
let attrs = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
    kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue,
    kCVPixelBufferWidthKey: videoConfig.width,
    kCVPixelBufferHeightKey: videoConfig.height,
] as CFDictionary
var pool: CVPixelBufferPool!
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs, &pool)

generator.generateCGImagesAsynchronously(forTimes: times) { requestTime, image, actualTime, result, error in
    if let image = image {
        // Convert to CIImage, then to CVPixelBuffer
        let ciImage = CIImage(cgImage: image)
        var pixelBuffer: CVPixelBuffer!
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
        context.render(ciImage, to: pixelBuffer)
        source.add(pixelBuffer)
    }
    completed += 1
    if completed == times.count {
        // Mark the source finished when all images have been processed
        source.finish()
    }
}
```

It is possible to generate `CVPixelBuffers` using an `AVPlayer` and `AVPlayerItemVideoOutput`. However, that requires using a `CADisplayLink` and executes closer to real-time, while `AVAssetImageGenerator` can process the frames much faster.

### Limitations
<a name="background-video-limitations"></a>

Your application needs the [background audio entitlement](https://developer.apple.com/documentation/xcode/configuring-background-execution-modes) to avoid getting suspended after going into the background.

`createAppBackgroundImageSource` can be called only while your application is in the foreground, since it needs access to the GPU to complete.

`createAppBackgroundImageSource` always encodes to a full GOP. For example, if you have a keyframe interval of 2 seconds (the default) and are running at 30 fps, it encodes a multiple of 60 frames.
+ If fewer than 60 frames are provided, the last frame is repeated until 60 frames are reached, regardless of the trim option’s value.
+ If more than 60 frames are provided and the trim option is `true`, the last N frames are dropped, where N is the remainder of the total number of submitted frames divided by 60.
+ If more than 60 frames are provided and the trim option is `false`, the last frame is repeated until the next multiple of 60 frames is reached.

# Known Issues & Workarounds in the IVS iOS Broadcast SDK | Low-Latency Streaming
<a name="broadcast-ios-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS low-latency streaming iOS broadcast SDK and suggests potential workarounds.
+ A bug in ReplayKit causes rapid memory growth when plugging in a wired headset during a stream.

  **Workaround:** Start the stream with the wired headset already plugged in, use a Bluetooth headset, or do not use an external microphone.
+ If at any point during a ReplayKit stream you enable the microphone and then interrupt the audio session (e.g., with a phone call or by activating Siri), system audio will stop working. This is a ReplayKit bug that we are working with Apple to resolve.

  **Workaround:** On an audio interruption, stop the broadcast and alert the user.
+ AirPods do not record any audio if the `AVAudioSession` category is set to `record`. By default, the SDK uses `playAndRecord`, so this issue manifests only if the category is changed to `record`.

  **Workaround:** If there is a chance that AirPods will be used to record audio, use `playAndRecord` even if your application is not playing back media. 
+ When AirPods are connected to an iOS 12 device, no other microphone can be used to record audio. Attempting to switch to an internal microphone immediately reverts back to the AirPods.

  **Workaround:** None. If AirPods are connected to iOS 12, they are the only device that can record audio.
+ Submitting audio data faster than real time (using a custom audio source) results in audio drift.

  **Workaround:** Do not submit audio data faster than real time.
+ Audio artifacts can appear at bitrates under 68 kbps when using a high sample rate (44100 Hz or greater) and two channels.

  **Workaround:** Increase the bitrate to 68 kbps or higher, decrease the sample rate to 24000 Hz or lower, or set channels to 1.
+ When echo cancellation is enabled on `IVSMicrophone` devices, only a single microphone source is returned by the `listAvailableInputSources` method. 

  **Workaround:** None. This behavior is controlled by iOS.
+ Changing Bluetooth audio routes can be unpredictable. If you connect a new device mid-session, iOS may or may not automatically change the input route. Also, it is not possible to choose between multiple Bluetooth headsets that are connected at the same time. This happens in both regular broadcast and stage sessions.

  **Workaround:** If you plan to use a Bluetooth headset, connect it before starting the broadcast or stage and leave it connected throughout the session.
+ iOS removes access to the camera when the AirPods popup appears after opening a paired AirPods case while leaving the AirPods themselves in the case. This results in the video for a broadcast or stage freezing.

  **Workaround:** None. iOS completely revokes camera access while the popup is being rendered and it is impossible for third-party applications to prevent the popup.
+ Enabling B-frames can improve compression quality; however, some encoders provide less precise bitrate control when B-frames are enabled, which may cause issues during network fluctuations.

  **Workaround:** Consider disabling B-frames if consistent bitrate adherence is more important than compression efficiency for your use case.

# IVS Broadcast SDK: Mixed Devices
<a name="broadcast-mixed-devices"></a>

Mixed devices are audio and video devices that take multiple input sources and generate a single output. Mixing devices is a powerful feature that lets you define and manage multiple on-screen (video) elements and audio tracks. You can combine video and audio from multiple sources such as cameras, microphones, screen captures, and audio and video generated by your app. You can use transitions to move these sources around the video that you stream to IVS, and add to and remove sources mid-stream.

Mixed devices come in image and audio flavors. To create a mixed image device, call:

`DeviceDiscovery.createMixedImageDevice()` on Android

`IVSDeviceDiscovery.createMixedImageDevice()` on iOS

The returned device can be attached to a `BroadcastSession` (low-latency streaming) or `Stage` (real-time streaming), like any other device.

## Terminology
<a name="broadcast-mixed-devices-terminology"></a>

![\[IVS broadcasting mixed devices terminology.\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/images/Broadcast_SDK_Mixer_Glossary.png)



| Term | Description | 
| --- | --- | 
| Device | A hardware or software component that produces audio or image input. Examples of devices are microphones, cameras, Bluetooth headsets, and virtual devices such as screen captures or custom-image inputs. | 
| Mixed Device | A `Device` that can be attached to a `BroadcastSession` like any other `Device`, but with additional APIs that allow `Source` objects to be added. Mixed devices have internal mixers that composite audio or images, producing a single output audio and image stream. Mixed devices come in either audio or image flavors.  | 
| Mixed device configuration | A configuration object for the mixed device. For mixed image devices, this configures properties like dimensions and framerate. For mixed audio devices, this configures the channel count. | 
|  Source | A container that defines a visual element’s position on screen and an audio track’s properties in the audio mix. A mixed device can be configured with zero or more sources. Sources are given a configuration that affects how the source’s media are used. The image above shows four image sources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/broadcast-mixed-devices.html)  | 
| Source Configuration |  A configuration object for the source going into a mixed device. The full configuration objects are described below.   | 
| Transition | To move a slot to a new position or change some of its properties, use `MixedDevice.transitionToConfiguration()`. This method takes: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/broadcast-mixed-devices.html) | 

## Mixed Audio Device
<a name="broadcast-mixed-audio-device"></a>

### Configuration
<a name="broadcast-mixed-audio-device-configuration"></a>

`MixedAudioDeviceConfiguration` on Android

`IVSMixedAudioDeviceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `channels` | Integer | Number of output channels from the audio mixer. Valid values: 1, 2. 1 is mono audio; 2, stereo audio. Default: 2. | 

### Source Configuration
<a name="broadcast-mixed-audio-device-source-configuration"></a>

`MixedAudioDeviceSourceConfiguration` on Android

`IVSMixedAudioDeviceSourceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `gain` | Float | Audio gain. This is a multiplier, so any value above 1 increases the gain; any value below 1, decreases it. Valid values: 0-2. Default: 1.  | 

## Mixed Image Device
<a name="broadcast-mixed-image-device"></a>

### Configuration
<a name="broadcast-mixed-image-device-configuration"></a>

`MixedImageDeviceConfiguration` on Android

`IVSMixedImageDeviceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `size` | Vec2 | Size of the video canvas. | 
| `targetFramerate` | Integer | Number of target frames per second for the mixed device. On average, this value should be met, but the system may drop frames under certain circumstances (e.g., high CPU or GPU load). | 
| `transparencyEnabled` | Boolean | This enables blending using the `alpha` property on image source configurations. Setting this to `true` increases memory and CPU consumption. Default: `false`. | 

### Source Configuration
<a name="broadcast-mixed-image-device-source-configuration"></a>

`MixedImageDeviceSourceConfiguration` on Android

`IVSMixedImageDeviceSourceConfiguration` on iOS


| Name | Type | Description | 
| --- | --- | --- | 
| `alpha` | Float | Alpha of the slot. This is multiplicative with any alpha values in the image. Valid values: 0-1. 0 is fully transparent and 1 is fully opaque. Default: 1. | 
| `aspect` | AspectMode | Aspect-ratio mode for any image rendered in the slot. Valid values: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/broadcast-mixed-devices.html) Default: `Fit`  | 
| `fillColor` | Vec4 | Fill color to be used with `aspect Fit` when the slot and image aspect ratios do not match. The format is (red, green, blue, alpha). Valid value (for each channel): 0-1. Default: (0, 0, 0, 0). | 
| `position` | Vec2 | Slot position (in pixels), relative to the top-left corner of the canvas. The origin of the slot also is top-left. | 
| `size` | Vec2 | Size of the slot, in pixels. Setting this value also sets `matchCanvasSize` to `false`. Default: (0, 0); however, because `matchCanvasSize` defaults to `true`, the rendered size of the slot is the canvas size, not (0, 0). | 
| `zIndex` | Float | Relative ordering of slots. Slots with higher `zIndex` values are drawn on top of slots with lower `zIndex` values. | 

## Creating and Configuring a Mixed Image Device
<a name="broadcast-mixed-image-device-creating-configuring"></a>

![\[Configuring a broadcast session for mixing.\]](http://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/images/Broadcast_SDK_Mixer_Configuring.png)


Here, we create a scene similar to the one at the beginning of this guide, with three on-screen elements:
+ Bottom-left slot for a camera.
+ Bottom-right slot for a logo overlay.
+ Top-right slot for a movie.

Note that the origin for the canvas is the top-left corner and this is the same for the slots. Hence, positioning a slot at (0, 0) puts it in the top-left corner with the entire slot visible.

### iOS
<a name="broadcast-mixed-image-device-creating-configuring-ios"></a>

```
let deviceDiscovery = IVSDeviceDiscovery()
let mixedImageConfig = IVSMixedImageDeviceConfiguration()
mixedImageConfig.size = CGSize(width: 1280, height: 720)
try mixedImageConfig.setTargetFramerate(60)
mixedImageConfig.isTransparencyEnabled = true
let mixedImageDevice = deviceDiscovery.createMixedImageDevice(with: mixedImageConfig)

// Bottom Left
let cameraConfig = IVSMixedImageDeviceSourceConfiguration()
cameraConfig.size = CGSize(width: 320, height: 180)
cameraConfig.position = CGPoint(x: 20, y: mixedImageConfig.size.height - cameraConfig.size.height - 20)
cameraConfig.zIndex = 2
let camera = deviceDiscovery.listLocalDevices().first(where: { $0 is IVSCamera }) as? IVSCamera
let cameraSource = IVSMixedImageDeviceSource(configuration: cameraConfig, device: camera)
mixedImageDevice.add(cameraSource)

// Top Right
let streamConfig = IVSMixedImageDeviceSourceConfiguration()
streamConfig.size = CGSize(width: 640, height: 320)
streamConfig.position = CGPoint(x: mixedImageConfig.size.width - streamConfig.size.width - 20, y: 20)
streamConfig.zIndex = 1
let streamDevice = deviceDiscovery.createImageSource(withName: "stream")
let streamSource = IVSMixedImageDeviceSource(configuration: streamConfig, device: streamDevice)
mixedImageDevice.add(streamSource)

// Bottom Right
let logoConfig = IVSMixedImageDeviceSourceConfiguration()
logoConfig.size = CGSize(width: 320, height: 180)
logoConfig.position = CGPoint(x: mixedImageConfig.size.width - logoConfig.size.width - 20,
                              y: mixedImageConfig.size.height - logoConfig.size.height - 20)
logoConfig.zIndex = 3
let logoDevice = deviceDiscovery.createImageSource(withName: "logo")
let logoSource = IVSMixedImageDeviceSource(configuration: logoConfig, device: logoDevice)
mixedImageDevice.add(logoSource)
```

### Android
<a name="broadcast-mixed-image-device-creating-configuring-android"></a>

```
val deviceDiscovery = DeviceDiscovery(this /* context */)
val mixedImageConfig = MixedImageDeviceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(1280f, 720f))
    setTargetFramerate(60)
    setEnableTransparency(true)
}
val mixedImageDevice = deviceDiscovery.createMixedImageDevice(mixedImageConfig)

// Bottom Left
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, 180f))
    setPosition(BroadcastConfiguration.Vec2(20f, mixedImageConfig.size.y - size.y - 20))
    setZIndex(2)
}
val camera = deviceDiscovery.listLocalDevices().firstNotNullOf { it as? CameraSource }
val cameraSource = MixedImageDeviceSource(cameraConfig, camera)
mixedImageDevice.addSource(cameraSource)

// Top Right
val streamConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(640f, 320f))
    setPosition(BroadcastConfiguration.Vec2(mixedImageConfig.size.x - size.x - 20, 20f))
    setZIndex(1)
}
val streamDevice = deviceDiscovery.createImageInputSource(streamConfig.size)
val streamSource = MixedImageDeviceSource(streamConfig, streamDevice)
mixedImageDevice.addSource(streamSource)

// Bottom Right
val logoConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, 180f))
    setPosition(BroadcastConfiguration.Vec2(mixedImageConfig.size.x - size.x - 20, mixedImageConfig.size.y - size.y - 20))
    setZIndex(3)
}
val logoDevice = deviceDiscovery.createImageInputSource(logoConfig.size)
val logoSource = MixedImageDeviceSource(logoConfig, logoDevice)
mixedImageDevice.addSource(logoSource)
```

## Removing Sources
<a name="broadcast-mixed-devices-removing-sources"></a>

To remove a source, call `MixedDevice.remove` with the `Source` object you want to remove.
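For example, continuing the iOS mixed-device example above (the Android call would mirror the `addSource` call used earlier):

```
// Remove the camera source that was added earlier.
mixedImageDevice.remove(cameraSource)
```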

## Animations with Transitions
<a name="broadcast-mixed-devices-animations-transitions"></a>

The transition method replaces a source’s configuration with a new configuration. This replacement can be animated over time by setting a duration greater than 0 (in seconds).

### Which Properties Can Be Animated?
<a name="broadcast-mixed-devices-animations-properties"></a>

Not all properties in the slot structure can be animated. Any properties based on Float types can be animated; other properties take effect at either the start or end of the animation.


| Name | Can It Be Animated? | Impact Point | 
| --- | --- | --- | 
| `Audio.gain` | Yes | Interpolated | 
| `Image.alpha` | Yes | Interpolated | 
| `Image.aspect` | No | End | 
| `Image.fillColor` | Yes | Interpolated | 
| `Image.position` | Yes | Interpolated | 
| `Image.size` | Yes | Interpolated | 
| `Image.zIndex` Note: The `zIndex` moves 2D planes through 3D space, so the transition happens when the two planes cross at some point in the middle of the animation. This could be computed, but it depends on the starting and ending `zIndex` values. For a smoother transition, combine this with `alpha`.  | Yes | Unknown | 

### Simple Examples
<a name="broadcast-mixed-devices-animations-examples"></a>

Below are examples of a full-screen camera takeover using the configuration defined above in [Creating and Configuring a Mixed Image Device](#broadcast-mixed-image-device-creating-configuring). This is animated over 0.5 seconds.

#### iOS
<a name="broadcast-mixed-devices-animations-examples-ios"></a>

```
// Continuing the example from above, modifying the existing cameraConfig object.
cameraConfig.size = CGSize(width: 1280, height: 720)
cameraConfig.position = CGPoint.zero
cameraSource.transition(to: cameraConfig, duration: 0.5) { completed in
    if completed {
        print("Animation completed")
    } else {
        print("Animation interrupted")
    }
}
```

#### Android
<a name="broadcast-mixed-devices-animations-examples-android"></a>

```
// Continuing the example from above, modifying the existing cameraConfig object.
cameraConfig.setSize(BroadcastConfiguration.Vec2(1280f, 720f))
cameraConfig.setPosition(BroadcastConfiguration.Vec2(0f, 0f))
cameraSource.transitionToConfiguration(cameraConfig, 500) { completed ->
    if (completed) {
        print("Animation completed")
    } else {
        print("Animation interrupted")
    }
}
```

## Mirroring the Broadcast
<a name="broadcast-mixed-devices-mirroring"></a>


| To mirror an attached image device in the broadcast in this direction … | Use a negative value for … | 
| --- | --- | 
| Horizontally | The width of the slot | 
| Vertically | The height of the slot | 
| Both horizontally and vertically | Both the width and height of the slot | 

The position will need to be adjusted by the same value, to put the slot in the correct position when mirrored.

Below are examples for mirroring the broadcast horizontally and vertically.

### iOS
<a name="broadcast-mixed-devices-mirroring-ios"></a>

Horizontal mirroring:

```
let cameraConfig = IVSMixedImageDeviceSourceConfiguration()
cameraConfig.size = CGSize(width: -320, height: 720)
// Add 320 to position x since our width is -320
cameraConfig.position = CGPoint(x: 320, y: 0)
```

Vertical mirroring:

```
let cameraConfig = IVSMixedImageDeviceSourceConfiguration()
cameraConfig.size = CGSize(width: 320, height: -720)
// Add 720 to position y since our height is -720
cameraConfig.position = CGPoint(x: 0, y: 720)
```

### Android
<a name="broadcast-mixed-devices-mirroring-android"></a>

Horizontal mirroring:

```
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(-320f, 180f))
    // Add 320f to position x since our width is -320f
    setPosition(BroadcastConfiguration.Vec2(320f, 0f))
}
```

Vertical mirroring:

```
val cameraConfig = MixedImageDeviceSourceConfiguration().apply {
    setSize(BroadcastConfiguration.Vec2(320f, -180f))
    // Add 180f to position y since our height is -180f
    setPosition(BroadcastConfiguration.Vec2(0f, 180f))
}
```

Note: This mirroring is different from the `setMirrored` method on `ImagePreviewView` (Android) and `IVSImagePreviewView` (iOS). That method affects only the local preview view on the device and does not impact the broadcast.

# IVS Broadcast SDK: Custom Image Sources | Low-Latency Streaming
<a name="broadcast-custom-image-sources"></a>

This guide assumes you are already familiar with how to set up a broadcast session ([Android](broadcast-android.md), [iOS](broadcast-ios.md)) and how to [use the mixed devices API](broadcast-mixed-devices.md).

Custom image-input sources allow an application to provide its own image input to the broadcast SDK, instead of being limited to the preset cameras or screen share. A custom image source can be as simple as a semi-transparent watermark or static "be right back" scene, or it can allow the app to do additional custom processing like adding beauty filters to the camera.

You can have multiple custom image sources, like a watermark plus a camera with beauty filters. When you use a custom image-input source for custom control of the camera (such as using beauty-filter libraries that require camera access), the broadcast SDK is no longer responsible for managing the camera. Instead, the application is responsible for handling the camera’s lifecycle correctly. See official platform documentation on how your application should manage the camera.

## Android
<a name="custom-image-sources-android"></a>

After you create a broadcast session, create an image-input source: 

```
SurfaceSource surfaceSource = broadcastSession.createImageInputSource();
```

This method returns a `SurfaceSource`, which is an image source backed by a standard Android [Surface](https://developer.android.com/reference/android/view/Surface). It is automatically attached to the broadcast session, so there is no need to call the `attachDevice(...)` method afterward. However, the `SurfaceSource` must be bound to a slot; this is covered below. The `SurfaceSource` can be resized and rotated, and you can create an `ImagePreviewView` to display a preview of its contents.
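For example, you can show a local preview of the source's contents. This is a minimal sketch; `previewHolder` is a hypothetical `FrameLayout` in your layout, and it assumes the `getPreviewView()` method described in the Android Broadcast SDK Reference:

```
// Sketch: display a local preview of the custom source's contents.
// `previewHolder` is a hypothetical FrameLayout in your app's layout.
ImagePreviewView preview = surfaceSource.getPreviewView();
preview.setLayoutParams(new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));
previewHolder.addView(preview);
```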

To retrieve the underlying `Surface`:

```
Surface surface = surfaceSource.getInputSurface();
```

This `Surface` can be used as the output buffer for image producers like Camera2, OpenGL ES, and other libraries. The simplest use case is directly drawing a static bitmap or color into the Surface’s Canvas. However, many libraries (such as beauty-filter libraries) provide a method that allows an application to specify an external `Surface` for rendering. You can use such a method to pass this `Surface` to the filter library, which allows the library to output processed frames for the broadcast session to stream.
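The simplest case described above can be sketched with the standard Android `Canvas` API. This assumes `surface` is the `Surface` obtained from `surfaceSource.getInputSurface()`, and `watermarkBitmap` is a hypothetical `Bitmap` your app has already loaded:

```
// Sketch: draw a solid color and a static bitmap into the custom source.
// lockCanvas/unlockCanvasAndPost are standard android.view.Surface methods.
Canvas canvas = surface.lockCanvas(null);
try {
    canvas.drawColor(Color.BLACK);
    canvas.drawBitmap(watermarkBitmap, 0f, 0f, null);
} finally {
    // Posting the canvas submits the frame to the broadcast session.
    surface.unlockCanvasAndPost(canvas);
}
```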

Finally, the `SurfaceSource` must be bound to a `Mixer.Slot` to be streamed by the broadcast session:

```
broadcastSession.getMixer().bind(surfaceSource, "customSlot");
```
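The slot name passed to `bind` must match a slot defined in the session's mixer configuration. A minimal sketch, assuming the `BroadcastConfiguration.Mixer.Slot.with` builder shown in the SDK samples:

```
// Sketch: define a mixer slot named "customSlot" in the broadcast
// configuration used to create the session, so the bind call can find it.
BroadcastConfiguration config = new BroadcastConfiguration();
config.mixer.slots = new BroadcastConfiguration.Mixer.Slot[] {
    BroadcastConfiguration.Mixer.Slot.with(slot -> {
        slot.setName("customSlot");
        return slot;
    })
};
```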

The [Android sample code](https://github.com/aws-samples/amazon-ivs-broadcast-android-sample) has several examples that use a custom image source in different ways:
+ A semi-transparent watermark is added in the `MixerActivity`.
+ An MP4 file is looped in the `MixerActivity`.
+ The [CameraManager](https://github.com/aws-samples/amazon-ivs-broadcast-android-sample/blob/main/app/src/main/java/com/amazonaws/ivs/basicbroadcast/common/CameraManager.kt) utility class does custom management of the device camera using the Camera2 method in the `CustomActivity`, which applies a simple sepia filter. This example is especially helpful since it shows how to manage the camera and pass the broadcast session’s custom `SurfaceSource` to the camera capture request. If you use other external libraries, follow their documentation on how to configure the library to output to the Android `Surface` provided by the broadcast session.

## iOS
<a name="custom-image-sources-ios"></a>

After you create the broadcast session, create an image-input source:

```
let customSource = broadcastSession.createImageSource(withName: "customSourceName")
```

This method returns an `IVSCustomImageSource`, which is an image source that allows the application to submit `CMSampleBuffers` manually. For supported pixel formats, see the iOS Broadcast SDK Reference; a link to the most current version is in the [Amazon IVS Release Notes](release-notes.md) for the latest broadcast SDK release. The source is not automatically attached to the broadcast session, so you must attach the image source to the session and bind it to a slot before the source will stream:

```
broadcastSession.attach(customSource, toSlotWithName: "customSourceSlot", onComplete: nil)
```

After the custom source is attached and bound, the application can submit `CMSampleBuffers` directly to the custom source. You may choose to use the `onComplete` callback to start doing so.

Samples submitted to the custom source will be streamed in the broadcast session:

```
customSource.onSampleBuffer(sampleBuffer)
```

For streaming video, use this method in a callback. For example, if you’re using the camera, then every time a new sample buffer is received from an `AVCaptureSession`, the application can forward the sample buffer to the custom image source. If desired, the application can apply further processing (like a beauty filter) before submitting the sample to the custom image source.
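The camera callback described above can be sketched as an `AVCaptureVideoDataOutputSampleBufferDelegate`. This is a sketch, assuming `customSource` was created and attached as shown earlier; the class name `CameraForwarder` is hypothetical:

```
import AVFoundation
import AmazonIVSBroadcast

// Sketch: forward each camera frame to the custom image source.
class CameraForwarder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let customSource: IVSCustomImageSource

    init(customSource: IVSCustomImageSource) {
        self.customSource = customSource
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Optionally apply further processing (like a beauty filter) here
        // before submitting the sample to the custom image source.
        customSource.onSampleBuffer(sampleBuffer)
    }
}
```

An instance of this delegate would be set on the `AVCaptureVideoDataOutput` of your `AVCaptureSession` via `setSampleBufferDelegate(_:queue:)`.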

For a static image, after submitting the first sample, the application needs to resubmit the sample if the custom image source's slot binding changes or the source is detached and reattached to the broadcast session. For example, if you remove the slot from the mixer and then add it back, you must resubmit the sample.

The [iOS sample app](https://github.com/aws-samples/amazon-ivs-broadcast-ios-sample) has several examples that use a custom image source in different ways:
+ A semi-transparent watermark is added in `MixerViewController`.
+ An MP4 file is looped in `MixerViewController`.
+ A CIFilter implementation with a device camera is added in `CustomSourcesViewController`. This allows an application to manage a device camera independently of the Amazon IVS Broadcast SDK. It uses `AVCaptureSession` to capture an image from the device camera, processes the image using a CIFilter implementation, and submits `CMSampleBuffers` to `customSource` for live streaming.