

# IVS Broadcast SDK: iOS Guide (Real-Time Streaming)
<a name="broadcast-ios"></a>

The IVS real-time streaming iOS broadcast SDK enables participants to send and receive video on iOS.

The `AmazonIVSBroadcast` module implements the interface described in this document. The following operations are supported:
+ Join a stage 
+ Publish media to other participants in the stage
+ Subscribe to media from other participants in the stage
+ Manage and monitor video and audio published to the stage
+ Get WebRTC statistics for each peer connection
+ All operations from the IVS low-latency streaming iOS broadcast SDK

**Latest version of iOS broadcast SDK:** 1.41.0 ([Release Notes](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/release-notes.html#apr09-26-broadcast-mobile-rt))

**Reference documentation:** For information on the most important methods available in the Amazon IVS iOS broadcast SDK, see the reference documentation at [https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/](https://aws.github.io/amazon-ivs-broadcast-docs/1.41.0/ios/).

**Sample code:** See the iOS sample repository on GitHub: [https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples](https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples).

**Platform requirements:** iOS 14 and later

# Getting Started with the IVS iOS Broadcast SDK (Real-Time Streaming)
<a name="broadcast-ios-getting-started"></a>

This document takes you through the steps involved in getting started with the IVS real-time streaming iOS broadcast SDK.

## Install the Library
<a name="broadcast-ios-install"></a>

We recommend that you integrate the broadcast SDK via Swift Package Manager. (Alternatively, you can manually add the framework to your project.)

### Recommended: Integrate the Broadcast SDK (Swift Package Manager)
<a name="broadcast-ios-install-swift"></a>

1. Download the Package.swift file from [https://broadcast.live-video.net/1.41.0/Package.swift](https://broadcast.live-video.net/1.41.0/Package.swift).

1. In your project, create a new directory named AmazonIVSBroadcast and add it to version control.

1. Place the downloaded Package.swift file in the new directory.

1. In Xcode, go to **File > Add Package Dependencies** and select **Add Local...**

1. Navigate to and select the AmazonIVSBroadcast directory that you created, and select **Add Package**.

1. When prompted to **Choose Package Products for AmazonIVSBroadcast**, select **AmazonIVSBroadcastStages** as the **Package Product** and set your application target in the **Add to Target** section.

1. Select **Add Package**.

**Important**: The IVS real-time streaming broadcast SDK includes all features of the IVS low-latency streaming broadcast SDK. It is not possible to integrate both SDKs in the same project.

### Alternate Approach: Install the Framework Manually
<a name="broadcast-ios-install-manual"></a>

1. Download the latest version from [https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast-Stages.xcframework.zip](https://broadcast.live-video.net/1.41.0/AmazonIVSBroadcast-Stages.xcframework.zip).

1. Extract the contents of the archive. `AmazonIVSBroadcast.xcframework` contains the SDK for both device and simulator.

1. Embed `AmazonIVSBroadcast.xcframework` by dragging it into the **Frameworks, Libraries, and Embedded Content** section of the **General** tab for your application target.  
![The Frameworks, Libraries, and Embedded Content section of the General tab for your application target.](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/iOS_Broadcast_SDK_Guide_xcframework.png)

## Request Permissions
<a name="broadcast-ios-permissions"></a>

Your app must request permission to access the user’s camera and mic. (This is not specific to Amazon IVS; it is required for any application that needs access to cameras and microphones.)

Here, we check whether the user has already granted permissions and, if not, we ask for them:

```
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    break // permission already granted
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        // respond based on the granted bool
    }
case .denied, .restricted:
    break // permission denied
@unknown default:
    break // permission state unknown
}
```

You need to do this for both the `.video` and `.audio` media types if you want access to cameras and microphones, respectively.

You also need to add entries for `NSCameraUsageDescription` and `NSMicrophoneUsageDescription` to your `Info.plist`. Otherwise, your app will crash when trying to request permissions.
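For reference, the corresponding `Info.plist` entries look like the following; the description strings are placeholders and should explain your app's actual usage:

```
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to publish your video to the stage.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to publish your audio to the stage.</string>
```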

## Disable the Application Idle Timer
<a name="broadcast-ios-disable-idle-timer"></a>

This is optional but recommended. It prevents your device from going to sleep while using the broadcast SDK, which would interrupt the broadcast.

```
override func viewDidAppear(_ animated: Bool) {
   super.viewDidAppear(animated)
   UIApplication.shared.isIdleTimerDisabled = true
}
override func viewDidDisappear(_ animated: Bool) {
   super.viewDidDisappear(animated)
   UIApplication.shared.isIdleTimerDisabled = false
}
```

# Publishing & Subscribing with the IVS iOS Broadcast SDK (Real-Time Streaming)
<a name="ios-publish-subscribe"></a>

This document takes you through the steps involved in publishing and subscribing to a stage using the IVS real-time streaming iOS broadcast SDK.

## Concepts
<a name="ios-publish-subscribe-concepts"></a>

Three core concepts underlie real-time functionality: [stage](#ios-publish-subscribe-concepts-stage), [strategy](#ios-publish-subscribe-concepts-strategy), and [renderer](#ios-publish-subscribe-concepts-renderer). The design goal is minimizing the amount of client-side logic necessary to build a working product.

### Stage
<a name="ios-publish-subscribe-concepts-stage"></a>

The `IVSStage` class is the main point of interaction between the host application and the SDK. The class represents the stage itself and is used to join and leave the stage. Creating or joining a stage requires a valid, unexpired token string from the control plane (represented as `token`). Joining and leaving a stage are simple.

```
let stage = try IVSStage(token: token, strategy: self)

try stage.join()

stage.leave()
```

The `IVSStage` class also is where the `IVSStageRenderer` and `IVSErrorDelegate` can be attached:

```
let stage = try IVSStage(token: token, strategy: self)
stage.errorDelegate = self
stage.addRenderer(self) // multiple renderers can be added
```

### Strategy
<a name="ios-publish-subscribe-concepts-strategy"></a>

The `IVSStageStrategy` protocol provides a way for the host application to communicate the desired state of the stage to the SDK. Three functions need to be implemented: `shouldSubscribeToParticipant`, `shouldPublishParticipant`, and `streamsToPublishForParticipant`. All are discussed below.

#### Subscribing to Participants
<a name="ios-publish-subscribe-concepts-strategy-participants"></a>

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType
```

When a remote participant joins a stage, the SDK queries the host application about the desired subscription state for that participant. The options are `.none`, `.audioOnly`, and `.audioVideo`. When returning a value for this function, the host application does not need to worry about the publish state, current subscription state, or stage connection state. If `.audioVideo` is returned, the SDK waits until the remote participant is publishing before subscribing, and it updates the host application through the renderer throughout the process.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    return .audioVideo
}
```

This is the complete implementation of this function for a host application that always wants all participants to see each other; e.g., a video-chat application.

More advanced implementations also are possible. Use the `attributes` property on `IVSParticipantInfo` to selectively subscribe to participants based on server-provided attributes:

```
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    switch participant.attributes["role"] {
    case "moderator": return .none
    case "guest": return .audioVideo
    default: return .none
    }
}
```

This can be used to create a stage where moderators can monitor all guests without being seen or heard themselves. The host application could use additional business logic to let moderators see each other but remain invisible to guests.

#### Configuration for Subscribing to Participants
<a name="ios-publish-subscribe-concepts-strategy-participants-config"></a>

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration
```

If a remote participant is being subscribed to (see [Subscribing to Participants](#ios-publish-subscribe-concepts-strategy-participants)), the SDK queries the host application about a custom subscribe configuration for that participant. This configuration is optional and allows the host application to control certain aspects of subscriber behavior. For information on what can be configured, see [SubscribeConfiguration](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference/interfaces/SubscribeConfiguration) in the SDK reference documentation.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration {
    let config = IVSSubscribeConfiguration()

    try! config.jitterBuffer.setMinDelay(.medium())

    return config
}
```

This implementation updates the jitter-buffer minimum delay for all subscribed participants to a preset of `MEDIUM`.

As with `shouldSubscribeToParticipant`, more advanced implementations are possible. The given `ParticipantInfo` can be used to selectively update the subscribe configuration for specific participants.

We recommend using the default behaviors. Specify custom configuration only if there is a particular behavior you want to change.

#### Publishing
<a name="ios-publish-subscribe-concepts-strategy-publishing"></a>

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool
```

Once connected to the stage, the SDK queries the host application to see if a particular participant should publish. This is invoked only on local participants that have permission to publish based on the provided token.

Here is a sample implementation:

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
    return true
}
```

This is the complete implementation for a standard video-chat application where users always want to publish. Users can mute and unmute their audio and video to instantly be hidden or seen/heard. (They also can use publish/unpublish, but that is much slower. Mute/unmute is preferable for use cases where frequently changing visibility is desired.)

#### Choosing Streams to Publish
<a name="ios-publish-subscribe-concepts-strategy-streams"></a>

```
func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream]
```

When publishing, this is used to determine what audio and video streams should be published. This is covered in more detail later in [Publish a Media Stream](#ios-publish-subscribe-publish-stream).

#### Updating the Strategy
<a name="ios-publish-subscribe-concepts-strategy-updates"></a>

The strategy is intended to be dynamic: the values returned from any of the above functions can be changed at any time. For example, if the host application does not want to publish until the end user taps a button, you could return a variable from `shouldPublishParticipant` (something like `hasUserTappedPublishButton`). When that variable changes based on an interaction by the end user, call `stage.refreshStrategy()` to signal to the SDK that it should query the strategy for the latest values, applying only things that have changed. If the SDK observes that the `shouldPublishParticipant` value has changed, it will start the publish process. If the SDK queries and all functions return the same value as before, the `refreshStrategy` call will not make any modifications to the stage.
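The publish-on-tap pattern can be illustrated with a small, SDK-free sketch. `PublishToggle` is a stand-in, not SDK API: `shouldPublish()` plays the role of `shouldPublishParticipant`, and `refresh()` plays the role of `stage.refreshStrategy()`:

```
// SDK-free sketch of the publish-on-tap pattern described above.
final class PublishToggle {
    var hasUserTappedPublishButton = false
    private var lastObservedValue = false

    // Analog of the shouldPublishParticipant strategy function.
    func shouldPublish() -> Bool {
        return hasUserTappedPublishButton
    }

    // Analog of stage.refreshStrategy(): re-query the strategy and
    // report whether anything changed. Cheap when nothing changed.
    func refresh() -> Bool {
        let newValue = shouldPublish()
        defer { lastObservedValue = newValue }
        return newValue != lastObservedValue
    }
}

let toggle = PublishToggle()
print(toggle.refresh()) // false: no change, nothing to do
toggle.hasUserTappedPublishButton = true
print(toggle.refresh()) // true: the SDK would start publishing
print(toggle.refresh()) // false: value unchanged since last refresh
```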

If the return value of `shouldSubscribeToParticipant` changes from `.audioVideo` to `.audioOnly` for a participant, that participant's video stream is removed, if a video stream existed previously.

Generally, the stage uses the strategy to apply the difference between the previous and current strategies as efficiently as possible, without the host application needing to track all the state required to manage this properly. Because of this, calling `stage.refreshStrategy()` is a cheap operation; it does nothing unless the strategy has changed.

### Renderer
<a name="ios-publish-subscribe-concepts-renderer"></a>

The `IVSStageRenderer` protocol communicates the state of the stage to the host application. Updates to the host application’s UI usually can be powered entirely by the events provided by the renderer. The renderer provides the following functions:

```
func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo)

func stage(_ stage: IVSStage, participantDidLeave participant: IVSParticipantInfo)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange publishState: IVSParticipantPublishState)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange subscribeState: IVSParticipantSubscribeState)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didRemove streams: [IVSStageStream])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChangeMutedStreams streams: [IVSStageStream])

func stage(_ stage: IVSStage, didChange connectionState: IVSStageConnectionState, withError error: Error?)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didChangeStreamAdaption adaption: Bool)

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didChange layers: [IVSRemoteStageStreamLayer])

func stage(_ stage: IVSStage, participant: IVSParticipantInfo, stream: IVSRemoteStageStream, didSelect layer: IVSRemoteStageStreamLayer?, reason: IVSRemoteStageStream.LayerSelectedReason)
```

It is not expected that the information provided by the renderer impacts the return values of the strategy. For example, the return value of `shouldSubscribeToParticipant` is not expected to change when `participant:didChangePublishState` is called. If the host application wants to subscribe to a particular participant, it should return the desired subscription type regardless of that participant’s publish state. The SDK is responsible for ensuring that the desired state of the strategy is acted on at the correct time based on the state of the stage.

Note that only publishing participants trigger `participantDidJoin`; `participantDidLeave` is triggered whenever a participant stops publishing or leaves the stage session.
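Renderer events typically drive a participant list in the UI. The following SDK-free sketch shows that bookkeeping; plain `String` identifiers stand in for `IVSParticipantInfo` and `IVSStageStream` objects:

```
// SDK-free sketch: renderer callbacks drive a participant/stream map.
final class ParticipantListModel {
    // participantId -> identifiers of that participant's active streams
    private(set) var streams: [String: Set<String>] = [:]

    func participantDidJoin(_ participantId: String) {
        streams[participantId] = []
    }

    func participantDidLeave(_ participantId: String) {
        streams[participantId] = nil
    }

    func didAddStreams(_ participantId: String, _ streamIds: [String]) {
        streams[participantId, default: []].formUnion(streamIds)
    }

    func didRemoveStreams(_ participantId: String, _ streamIds: [String]) {
        streams[participantId]?.subtract(streamIds)
    }
}
```

A view controller can diff this map after each callback to add or remove preview views.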

## Publish a Media Stream
<a name="ios-publish-subscribe-publish-stream"></a>

Local devices such as built-in microphones and cameras are discovered via `IVSDeviceDiscovery`. Here is an example of selecting the front-facing camera and default microphone, then returning them as `IVSLocalStageStreams` to be published by the SDK:

```
let devices = IVSDeviceDiscovery().listLocalDevices()

// Find the camera virtual device, choose the front source, and create a stream
let camera = devices.compactMap({ $0 as? IVSCamera }).first!
let frontSource = camera.listAvailableInputSources().first(where: { $0.position == .front })!
camera.setPreferredInputSource(frontSource)
let cameraStream = IVSLocalStageStream(device: camera)

// Find the microphone virtual device and create a stream
let microphone = devices.compactMap({ $0 as? IVSMicrophone }).first!
let microphoneStream = IVSLocalStageStream(device: microphone)

// Configure the audio manager to use the videoChat preset, which is optimized for bi-directional communication, including echo cancellation.
IVSStageAudioManager.sharedInstance().setPreset(.videoChat)

// This is a function on IVSStageStrategy
func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream] {
    return [cameraStream, microphoneStream]
}
```

## Display and Remove Participants
<a name="ios-publish-subscribe-participants"></a>

After subscribing is completed, you will receive an array of `IVSStageStream` objects through the renderer’s `didAddStreams` function. To preview or receive audio level stats about this participant, you can access the underlying `IVSDevice` object from the stream:

```
if let imageDevice = stream.device as? IVSImageDevice {
    let preview = imageDevice.previewView()
    /* attach this UIView subclass to your view */
} else if let audioDevice = stream.device as? IVSAudioDevice {
    audioDevice.setStatsCallback( { stats in
        /* process stats.peak and stats.rms */
    })
}
```

When a participant stops publishing or is unsubscribed from, the `didRemoveStreams` function is called with the streams that were removed. Host applications should use this as a signal to remove the participant’s video stream from the view hierarchy.

`didRemoveStreams` is invoked for all scenarios in which a stream might be removed, including:
+ The remote participant stops publishing.
+ A local device unsubscribes or changes subscription from `.audioVideo` to `.audioOnly`.
+ The remote participant leaves the stage.
+ The local participant leaves the stage.

Because `didRemoveStreams` is invoked for all scenarios, no custom business logic is required around removing participants from the UI during remote or local leave operations.

## Mute and Unmute Media Streams
<a name="ios-publish-subscribe-mute-streams"></a>

`IVSLocalStageStream` objects have a `setMuted` function that controls whether the stream is muted. This function can be called on the stream before or after it is returned from the `streamsToPublishForParticipant` strategy function.

**Important**: If a new `IVSLocalStageStream` object instance is returned by `streamsToPublishForParticipant` after a call to `refreshStrategy`, the mute state of the new stream object is applied to the stage. Be careful when creating new `IVSLocalStageStream` instances to make sure the expected mute state is maintained.

## Monitor Remote Participant Media Mute State
<a name="ios-publish-subscribe-mute-state"></a>

When a participant changes the mute state of its video or audio stream, the renderer `didChangeMutedStreams` function is invoked with an array of streams that have changed. Use the `isMuted` property on `IVSStageStream` to update your UI accordingly:

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChangeMutedStreams streams: [IVSStageStream]) {
    streams.forEach { stream in 
        /* stream.isMuted */
    }
}
```

## Create a Stage Configuration
<a name="ios-publish-subscribe-stage-config"></a>

To customize the values of a stage’s video configuration, use `IVSLocalStageStreamVideoConfiguration`:

```
let config = IVSLocalStageStreamVideoConfiguration()
try config.setMaxBitrate(900_000)
try config.setMinBitrate(100_000)
try config.setTargetFramerate(30)
try config.setSize(CGSize(width: 360, height: 640))
config.degradationPreference = .balanced
```

## Get WebRTC Statistics
<a name="ios-publish-subscribe-webrtc-stats"></a>

To get the latest WebRTC statistics for a publishing or subscribing stream, use `requestRTCStats` on `IVSStageStream`. When collection completes, you receive the statistics through the `IVSStageStreamDelegate`, which can be set on `IVSStageStream`. To collect WebRTC statistics continually, call this function on a repeating `Timer`.

```
func stream(_ stream: IVSStageStream, didGenerateRTCStats stats: [String : [String : String]]) {
    for stat in stats {
        for member in stat.value {
            print("stat \(stat.key) has member \(member.key) with value \(member.value)")
        }
    }
}
```
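Since the statistics arrive as nested `[String: [String: String]]` dictionaries, a small helper can collect one member across all entries. The stat names in this sketch (`outbound-rtp-video`, `bytesSent`) are illustrative, not guaranteed keys:

```
// Collect every (statKey, value) pair in a stats report that contains
// a given member, e.g. all "bytesSent" values across the report.
func values(forMember member: String,
            in stats: [String: [String: String]]) -> [String: String] {
    var result: [String: String] = [:]
    for (statKey, members) in stats {
        if let value = members[member] {
            result[statKey] = value
        }
    }
    return result
}

// Example with a hand-made report; real keys and members come from WebRTC.
let sample = [
    "outbound-rtp-video": ["bytesSent": "123456", "framesPerSecond": "30"],
    "outbound-rtp-audio": ["bytesSent": "7890"],
]
print(values(forMember: "bytesSent", in: sample))
```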

## Get Participant Attributes
<a name="ios-publish-subscribe-participant-attributes"></a>

If you specify attributes in the `CreateParticipantToken` operation request, you can see the attributes in `IVSParticipantInfo` properties:

```
func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo) {
    print("ID: \(participant.participantId)")
    for attribute in participant.attributes {
        print("attribute: \(attribute.key)=\(attribute.value)")
    }
}
```

## Embed Messages
<a name="ios-publish-subscribe-embed-messages"></a>

The `embedMessage` method on `IVSImageDevice` allows you to insert metadata payloads directly into video frames during publishing. This enables frame-synchronized messaging for real-time applications. Message embedding is available only when using the SDK for real-time publishing (not low-latency publishing).

Embedded messages are not guaranteed to arrive to subscribers because they are embedded directly within video frames and transmitted over UDP, which does not guarantee packet delivery. Packet loss during transmission can result in lost messages, especially in poor network conditions. To mitigate this, the `embedMessage` method includes a `repeatCount` parameter that duplicates the message across multiple consecutive frames, increasing delivery reliability. This capability is available only for video streams.

### Using embedMessage
<a name="ios-embed-messages-using-embedmessage"></a>

Publishing clients can embed message payloads into their video stream using the `embedMessage` method on `IVSImageDevice`. The payload size must be greater than 0 KB and less than 1 KB, and the total size of messages embedded per second must not exceed 10 KB.

```
let imageDevice: IVSImageDevice = imageStream.device as! IVSImageDevice
let messageData = Data("hello world".utf8)

do {
    try imageDevice.embedMessage(messageData, withRepeatCount: 0)
} catch {
    print("Failed to embed message: \(error)")
}
```

### Repeating Message Payloads
<a name="ios-embed-messages-repeat-payloads"></a>

Use `repeatCount` to duplicate the message across multiple frames for improved reliability. This value must be between 0 and 30. Receiving clients must have logic to de-duplicate the message.

```
try imageDevice.embedMessage(messageData, withRepeatCount: 5)

// repeatCount: 0-30, receiving clients should handle duplicates
```
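On the receiving side, one minimal de-duplication approach, sketched here without the SDK, is to drop a payload when it is byte-identical to the previous one (an explicit sequence number inside the payload is a more robust alternative):

```
import Foundation

// Minimal receiver-side de-duplication: ignore a payload when it is
// byte-identical to the previous one.
final class MessageDeduplicator {
    private var lastPayload: Data?

    // Returns true if this payload is new and should be processed.
    func accept(_ payload: Data) -> Bool {
        guard payload != lastPayload else { return false }
        lastPayload = payload
        return true
    }
}

let dedup = MessageDeduplicator()
print(dedup.accept(Data("hello".utf8))) // true: first sighting
print(dedup.accept(Data("hello".utf8))) // false: repeated frame, dropped
```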

### Reading Embedded Messages
<a name="ios-embed-messages-read-messages"></a>

See "Get Supplemental Enhancement Information (SEI)" below for how to read embedded messages from incoming streams. 

## Get Supplemental Enhancement Information (SEI)
<a name="ios-publish-subscribe-sei-attributes"></a>

The Supplemental Enhancement Information (SEI) NAL unit is used to store frame-aligned metadata alongside the video. Subscribing clients can read SEI payloads from a publisher who is publishing H.264 video by inspecting the `embeddedMessages` property on the `IVSImageDeviceFrame` objects coming out of the publisher’s `IVSImageDevice`. To do this, acquire a publisher’s `IVSImageDevice`, then observe each frame via a callback provided to `setOnFrameCallback`, as shown in the following example:

```
// In an IVSStageRenderer's stage:participant:didAddStreams: function, after acquiring the new IVSImageStream

let imageDevice: IVSImageDevice? = imageStream.device as? IVSImageDevice
imageDevice?.setOnFrameCallback { frame in
    for message in frame.embeddedMessages {
        if let seiMessage = message as? IVSUserDataUnregisteredSEIMessage {
            let seiMessageData = seiMessage.data
            let seiMessageUUID = seiMessage.UUID

            // interpret the message's data based on the UUID
        }
    }
}
```

## Continue Session in the Background
<a name="ios-publish-subscribe-background-session"></a>

When the app enters the background, you can remain in the stage and continue hearing remote audio, but you cannot continue sending your own video and audio. You will need to update your `IVSStageStrategy` implementation to stop publishing and subscribe to `.audioOnly` (or `.none`, if applicable):

```
func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
    return false
}
func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
    return .audioOnly
}
```

Then make a call to `stage.refreshStrategy()`.
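One way to wire this up is to flip a flag when the app backgrounds and then refresh the strategy. The sketch below is SDK-free and uses Foundation's `NotificationCenter` with a placeholder notification name; in a real app you would observe `UIApplication.didEnterBackgroundNotification` and call `stage.refreshStrategy()` inside the handler:

```
import Foundation

// Placeholder; on iOS, observe UIApplication.didEnterBackgroundNotification.
let didEnterBackground = Notification.Name("app.didEnterBackground")

final class BackgroundAwareStrategy {
    // Consulted by shouldPublishParticipant / shouldSubscribeToParticipant.
    private(set) var isInBackground = false
    private var token: NSObjectProtocol?

    init(center: NotificationCenter = .default) {
        token = center.addObserver(forName: didEnterBackground,
                                   object: nil, queue: nil) { [weak self] _ in
            self?.isInBackground = true
            // In a real app: stage.refreshStrategy()
        }
    }
}

let strategy = BackgroundAwareStrategy()
NotificationCenter.default.post(name: didEnterBackground, object: nil)
print(strategy.isInBackground) // true: the observer ran synchronously
```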

## Layered Encoding with Simulcast
<a name="ios-publish-subscribe-layered-encoding-simulcast"></a>

Layered encoding with simulcast is an IVS real-time streaming feature that allows publishers to send multiple different quality layers of video, and subscribers to dynamically or manually configure those layers. The feature is described more in the [Streaming Optimizations](real-time-streaming-optimization.md) document.

### Configuring Layered Encoding (Publisher)
<a name="ios-layered-encoding-simulcast-configure-publisher"></a>

As a publisher, to enable layered encoding with simulcast, add the following configuration to your `IVSLocalStageStream` on instantiation:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

Depending on the resolution you set on video configuration, a set number of layers will be encoded and sent as defined in the [Default Layers, Qualities, and Framerates](real-time-streaming-optimization.md#real-time-streaming-optimization-default-layers) section of *Streaming Optimizations*.

Also, you can optionally configure individual layers from within the simulcast configuration:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let layers = [
    IVSStagePresets.simulcastLocalLayer().default720(),
    IVSStagePresets.simulcastLocalLayer().default180()
]

try config.simulcast.setLayers(layers)

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

Alternatively, you can create your own custom layer configurations for up to three layers. If you provide an empty array or no value, the defaults described above are used. Each layer is described with the following required property setters:
+ `setSize` (a `CGSize`)
+ `setMaxBitrate` (an integer)
+ `setMinBitrate` (an integer)
+ `setTargetFramerate` (a float)

Starting from the presets, you can either override individual properties or create an entirely new configuration:

```
// Enable Simulcast
let config = IVSLocalStageStreamVideoConfiguration()
config.simulcast.enabled = true

let customHiLayer = IVSStagePresets.simulcastLocalLayer().default720()
try customHiLayer.setTargetFramerate(15)

let layers = [
    customHiLayer,
    IVSStagePresets.simulcastLocalLayer().default180()
]

try config.simulcast.setLayers(layers)

let cameraStream = IVSLocalStageStream(device: camera, configuration: config)

// Other Stage implementation code
```

For maximum values, limits, and errors which can be triggered when configuring individual layers, see the SDK reference documentation.

### Configuring Layered Encoding (Subscriber)
<a name="ios-layered-encoding-simulcast-configure-subscriber"></a>

As a subscriber, you do not need to do anything to enable layered encoding. If a publisher is sending simulcast layers, then by default the server dynamically adapts between the layers to choose the optimal quality based on the subscriber's device and network conditions.

Alternatively, to pick explicit layers that the publisher is sending, there are several options, described below.

### Option 1: Initial Layer Quality Preference
<a name="ios-layered-encoding-simulcast-layer-quality-preference"></a>

Using the `subscribeConfigurationForParticipant` strategy, it is possible to choose what initial layer you want to receive as a subscriber:

```
func stage(_ stage: IVSStage, subscribeConfigurationForParticipant participant: IVSParticipantInfo) -> IVSSubscribeConfiguration {
    let config = IVSSubscribeConfiguration()

    config.simulcast.initialLayerPreference = .lowestQuality

    return config
}
```

By default, subscribers are always sent the lowest quality layer first, and delivery slowly ramps up to the highest quality layer. This optimizes end-user bandwidth consumption and provides the best time to video, reducing initial video freezes for users on weaker networks.

These options are available for `InitialLayerPreference`:
+ `lowestQuality` — The server delivers the lowest quality layer of video first. This optimizes bandwidth consumption, as well as time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 720p video is lower quality than 1080p video.
+ `highestQuality` — The server delivers the highest quality layer of video first. This optimizes quality but may increase the time to media. Quality is defined as the combination of size, bitrate, and framerate of the video. For example, 1080p video is higher quality than 720p video.

**Note:** For initial layer preferences (the `initialLayerPreference` call) to take effect, a re-subscribe is necessary as these updates do not apply to the active subscription.

### Option 2: Preferred Layer for Stream
<a name="ios-layered-encoding-simulcast-preferred-layer"></a>

The `preferredLayerForStream` strategy method lets you select a layer after the stream has started. This strategy method receives the participant and the stream information, so you can select a layer on a participant-by-participant basis. The SDK calls this method in response to specific events, such as when stream layers change, the participant state changes, or the host application refreshes the strategy.

The strategy method returns an `IVSRemoteStageStreamLayer` object, which can be one of the following:
+ A layer object, such as one returned by `IVSRemoteStageStream.layers`.
+ `nil`, which indicates that no layer should be selected and dynamic adaption is preferred.

For example, the following strategy always selects the lowest quality layer of video available for every user:

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, preferredLayerFor stream: IVSRemoteStageStream) -> IVSRemoteStageStreamLayer? {
    return stream.lowestQualityLayer
}
```

To reset the layer selection and return to dynamic adaption, return `nil` from the strategy. In this example, `appState` is a placeholder variable that represents the host application’s state.

```
func stage(_ stage: IVSStage, participant: IVSParticipantInfo, preferredLayerFor stream: IVSRemoteStageStream) -> IVSRemoteStageStreamLayer? {
    if appState.isAutoMode {
        return nil
    } else {
        return appState.layerChoice
    }
}
```

### Option 3: RemoteStageStream Layer Helpers
<a name="ios-layered-encoding-simulcast-remotestagestream-helpers"></a>

`IVSRemoteStageStream` has several helpers which can be used to make decisions about layer selection and display the corresponding selections to end users:
+ **Layer Events** — Alongside `IVSStageRenderer`, the `IVSRemoteStageStreamDelegate` has events which communicate layer and simulcast adaption changes:
  + `func stream(_ stream: IVSRemoteStageStream, didChangeAdaption adaption: Bool)`
  + `func stream(_ stream: IVSRemoteStageStream, didChange layers: [IVSRemoteStageStreamLayer])`
  + `func stream(_ stream: IVSRemoteStageStream, didSelect layer: IVSRemoteStageStreamLayer?, reason: IVSRemoteStageStream.LayerSelectedReason)`
+ **Layer Methods** — `IVSRemoteStageStream` has several helper methods which can be used to get information about the stream and the layers being presented. These methods are available on the remote stream provided in the `preferredLayerForStream` strategy, as well as remote streams exposed via `func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream])`.
  + `stream.layers`
  + `stream.selectedLayer`
  + `stream.lowestQualityLayer`
  + `stream.highestQualityLayer`
  + `stream.layers(with: IVSRemoteStageStreamLayerConstraints)`

For details, see the `IVSRemoteStageStream` class in the [SDK reference documentation](https://aws.github.io/amazon-ivs-broadcast-docs/latest/ios/). If the `LayerSelectedReason` is `UNAVAILABLE`, the requested layer could not be selected; a best-effort selection was made in its place, typically a lower-quality layer, to maintain stream stability.
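
As an illustration of constraint-based layer selection, the sketch below picks the lowest-quality layer that still meets a minimum height, similar in spirit to what `stream.layers(with:)` enables. `Layer` and `lowestLayer` are hypothetical stand-ins for illustration, not SDK types:

```swift
// Hypothetical stand-in for IVSRemoteStageStreamLayer, which exposes
// similar quality attributes; not an SDK type.
struct Layer {
    let width: Int
    let height: Int
    let framerate: Int
}

// Pick the lowest-quality layer whose height still meets a minimum.
func lowestLayer(meeting minHeight: Int, from layers: [Layer]) -> Layer? {
    return layers
        .filter { $0.height >= minHeight }
        .min { $0.height < $1.height }
}

let layers = [
    Layer(width: 1280, height: 720, framerate: 30),
    Layer(width: 640, height: 360, framerate: 30),
    Layer(width: 320, height: 180, framerate: 15),
]
// Smallest layer that is at least 360 pixels tall.
let choice = lowestLayer(meeting: 360, from: layers)
```

In a real strategy, the equivalent choice would be returned from `preferredLayerForStream` using the layers exposed by `IVSRemoteStageStream`.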

## Broadcast the Stage to an IVS Channel
<a name="ios-publish-subscribe-broadcast-stage"></a>

To broadcast a stage, create a separate `IVSBroadcastSession` and then follow the usual instructions for broadcasting with the SDK, described above. The `device` property on `IVSStageStream` will be either an `IVSImageDevice` or `IVSAudioDevice` as shown in the snippet above; these can be connected to the `IVSBroadcastSession.mixer` to broadcast the entire stage in a customizable layout.

Optionally, you can composite a stage and broadcast it to an IVS low-latency channel, to reach a larger audience. See [Enabling Multiple Hosts on an Amazon IVS Stream](https://docs.aws.amazon.com//ivs/latest/LowLatencyUserGuide/multiple-hosts.html) in the IVS Low-Latency Streaming User Guide.

# How iOS Chooses Camera Resolution and Frame Rate
<a name="ios-publish-subscribe-resolution-framerate"></a>

The camera managed by the broadcast SDK optimizes its resolution and frame rate (frames-per-second, or FPS) to minimize heat production and energy consumption. This section explains how the resolution and frame rate are selected to help host applications optimize for their use cases.

When creating an `IVSLocalStageStream` with an `IVSCamera`, the camera is optimized for a frame rate of `IVSLocalStageStreamVideoConfiguration.targetFramerate` and a resolution of `IVSLocalStageStreamVideoConfiguration.size`. Calling `IVSLocalStageStream.setConfiguration` updates the camera with newer values. 

## Camera Preview
<a name="resolution-framerate-camera-preview"></a>

If you create a preview of an `IVSCamera` without attaching it to an `IVSBroadcastSession` or `IVSStage`, it defaults to a resolution of 1080p and a frame rate of 60 FPS.

## Broadcasting a Stage
<a name="resolution-framerate-broadcast-stage"></a>

When using an `IVSBroadcastSession` to broadcast an `IVSStage`, the SDK tries to optimize the camera with a resolution and frame rate that meet the criteria of both sessions.

For example, if the broadcast configuration is set to have a frame rate of 15 FPS and a resolution of 1080p, while the Stage has a frame rate of 30 FPS and a resolution of 720p, the SDK will select a camera configuration with a frame rate of 30 FPS and a resolution of 1080p. The `IVSBroadcastSession` will drop every other frame from the camera, and the `IVSStage` will scale the 1080p image down to 720p.

If a host application plans on using both `IVSBroadcastSession` and `IVSStage` together, with a camera, we recommend that the `targetFramerate` and `size` properties of the respective configurations match. A mismatch could cause the camera to reconfigure itself while capturing video, which will cause a brief delay in video-sample delivery.

If having identical values does not meet the host application’s use case, creating the higher quality camera first will prevent the camera from reconfiguring itself when the lower quality session is added. For example, if you broadcast at 1080p and 30 FPS and then later join a Stage set to 720p and 30 FPS, the camera will not reconfigure itself and video will continue uninterrupted. This is because 720p is less than or equal to 1080p and 30 FPS is less than or equal to 30 FPS.
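
The guidance above can be reduced to a simple check: the camera keeps its configuration as long as the newly added session needs no more resolution or frame rate than the camera already delivers. `VideoConfig` and `cameraMustReconfigure` below are illustrative stand-ins, not SDK API:

```swift
// Illustrative stand-in for the relevant parts of a video configuration;
// not an SDK type.
struct VideoConfig {
    let width: Int
    let height: Int
    let targetFramerate: Int
}

// True if adding `added` alongside `current` would force the camera to
// reconfigure (the new session needs more than the camera already delivers).
func cameraMustReconfigure(current: VideoConfig, added: VideoConfig) -> Bool {
    return added.width > current.width
        || added.height > current.height
        || added.targetFramerate > current.targetFramerate
}

// Broadcasting at 1080p30, then joining a 720p30 stage: no reconfiguration.
let broadcast = VideoConfig(width: 1920, height: 1080, targetFramerate: 30)
let stage = VideoConfig(width: 1280, height: 720, targetFramerate: 30)
```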

## Arbitrary Frame Rates, Resolutions, and Aspect Ratios
<a name="resolution-framerate-arbitrary"></a>

Most camera hardware can exactly match common formats, such as 720p at 30 FPS or 1080p at 60 FPS. However, it is not possible to exactly match all formats. The broadcast SDK chooses the camera configuration based on the following rules (in priority order):

1. The width and height of the resolution are greater than or equal to the desired resolution, but within this constraint, width and height are as small as possible.

1. The frame rate is greater than or equal to the desired frame rate, but within this constraint, frame rate is as low as possible.

1. The aspect ratio matches the desired aspect ratio.

1. If there are multiple matching formats, the format with the greatest field of view is used.

Here are two examples:
+ The host application is trying to broadcast in 4K at 120 FPS. The selected camera supports only 4K at 60 FPS or 1080p at 120 FPS. The selected format will be 4K at 60 FPS, because the resolution rule is higher priority than the frame-rate rule.
+ An irregular resolution is requested, 1910x1070. The camera will use 1920x1080. *Be careful: choosing a resolution like 1921x1080 will cause the camera to scale up to the next available resolution (such as 2592x1944), which incurs a CPU and memory-bandwidth penalty*.
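
The selection rules above can be sketched as a function over the camera's available formats. `Format` and `selectFormat` are hypothetical stand-ins that model the documented priority order, not the SDK's actual implementation:

```swift
// Hypothetical description of a camera capture format; not an SDK type.
struct Format {
    let width: Int
    let height: Int
    let fps: Int
    let fieldOfView: Double
}

// Applies the documented priority order: smallest resolution covering the
// request, then lowest sufficient frame rate, then matching aspect ratio,
// then greatest field of view.
func selectFormat(from formats: [Format],
                  width: Int, height: Int, fps: Int) -> Format? {
    guard !formats.isEmpty else { return nil }
    // Rule 1: prefer formats covering the requested resolution, smallest first.
    var pool = formats.filter { $0.width >= width && $0.height >= height }
    if pool.isEmpty { pool = formats } // nothing covers it: consider all
    let minArea = pool.map { $0.width * $0.height }.min()!
    pool = pool.filter { $0.width * $0.height == minArea }
    // Rule 2: prefer frame rates covering the request, lowest first.
    let sufficient = pool.filter { $0.fps >= fps }
    if sufficient.isEmpty {
        let maxFps = pool.map { $0.fps }.max()!
        pool = pool.filter { $0.fps == maxFps }
    } else {
        let minFps = sufficient.map { $0.fps }.min()!
        pool = sufficient.filter { $0.fps == minFps }
    }
    // Rule 3: prefer a matching aspect ratio.
    let matching = pool.filter { $0.width * height == $0.height * width }
    if !matching.isEmpty { pool = matching }
    // Rule 4: the greatest field of view wins.
    return pool.max { $0.fieldOfView < $1.fieldOfView }
}

// The 4K-at-120-FPS example from above: resolution outranks frame rate.
let formats = [
    Format(width: 3840, height: 2160, fps: 60, fieldOfView: 68),
    Format(width: 1920, height: 1080, fps: 120, fieldOfView: 68),
]
let pick = selectFormat(from: formats, width: 3840, height: 2160, fps: 120)
```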

## What about Android?
<a name="resolution-framerate-android"></a>

Android does not adjust its resolution or frame rate on the fly like iOS does, so this does not impact the Android broadcast SDK.

# Known Issues & Workarounds in the IVS iOS Broadcast SDK | Real-Time Streaming
<a name="broadcast-ios-known-issues"></a>

This document lists known issues that you might encounter when using the Amazon IVS real-time streaming iOS broadcast SDK and suggests potential workarounds.
+ Changing Bluetooth audio routes can be unpredictable. If you connect a new device mid-session, iOS may or may not automatically change the input route. Also, it is not possible to choose between multiple Bluetooth headsets that are connected at the same time. This happens in both regular broadcast and stage sessions.

  **Workaround:** If you plan to use a Bluetooth headset, connect it before starting the broadcast or stage and leave it connected throughout the session.
+ Participants using an iPhone 14, iPhone 14 Plus, iPhone 14 Pro, or iPhone 14 Pro Max may cause an audio echo issue for other participants.

  **Workaround:** Participants using the affected devices can use headphones to prevent the echo issue for other participants.
+ When a participant joins with a token that is being used by another participant, the first connection is disconnected without a specific error.

  **Workaround:** None.
+ There is a rare issue where the publisher is publishing but the publish state that subscribers receive is `inactive`.

  **Workaround:** Try leaving and then joining the session. If the issue remains, create a new token for the publisher.
+ When a participant is publishing or subscribing, it is possible to receive an error with code 1400 that indicates disconnection due to a network issue, even when the network is stable.

  **Workaround:** Try republishing / resubscribing.
+ A rare audio-distortion issue may occur intermittently during a stage session, typically on calls of longer durations.

  **Workaround:** The participant with distorted audio can either leave and rejoin the session, or unpublish and republish their audio to fix the issue.

# Error Handling in the IVS iOS Broadcast SDK | Real-Time Streaming
<a name="broadcast-ios-error-handling"></a>

This section is an overview of error conditions, how the IVS real-time streaming iOS broadcast SDK reports them to the application, and what an application should do when those errors are encountered.

## Fatal vs. Non-Fatal Errors
<a name="broadcast-ios-fatal-vs-nonfatal-errors"></a>

The error object carries an "is fatal" flag: a boolean stored in the error's `userInfo` dictionary under the `IVSBroadcastErrorIsFatalKey` key.

In general, fatal errors are related to connection to the Stages server (either a connection cannot be established or is lost and cannot be recovered). The application should re-create the stage and re-join, possibly with a new token or when the device’s connectivity recovers.

Non-fatal errors generally are related to the publish/subscribe state and are handled by the SDK, which retries the publish/subscribe operation.

You can check this property:

```
let nsError = error as NSError
if nsError.userInfo[IVSBroadcastErrorIsFatalKey] as? Bool == true {
  // the error is fatal
}
```

## Join Errors
<a name="broadcast-ios-stage-join-errors"></a>

### Malformed Token
<a name="broadcast-ios-stage-join-errors-malformed-token"></a>

This happens when the stage token is malformed.

The SDK throws a Swift error with error code = 1000 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a valid token and retry joining.

### Expired Token
<a name="broadcast-ios-stage-join-errors-expired-token"></a>

This happens when the stage token is expired.

The SDK throws a Swift error with error code = 1001 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a new token and retry joining.

### Invalid or Revoked Token
<a name="broadcast-ios-stage-join-errors-invalid-token"></a>

This happens when the stage token is not malformed but is rejected by the Stages server. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1026 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Create a valid token and retry joining.

### Network Errors for Initial Join
<a name="broadcast-ios-stage-join-errors-network-initial-join"></a>

This happens when the SDK cannot contact the Stages server to establish a connection. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1300 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Wait for the device’s connectivity to recover and retry joining.

### Network Errors when Already Joined
<a name="broadcast-ios-stage-join-errors-network-already-joined"></a>

If the device’s network connection goes down, the SDK may lose its connection to Stage servers. This is reported asynchronously through the application-supplied stage renderer.

The SDK calls `stage(didChange connectionState, withError error)` with error code = 1300 and IVSBroadcastErrorIsFatalKey = YES.

**Action**: Wait for the device’s connectivity to recover and retry joining.

## Publish/Subscribe Errors
<a name="broadcast-ios-publish-subscribe-errors"></a>

### Initial
<a name="broadcast-ios-publish-subscribe-errors-initial"></a>

There are several errors:
+ MultihostSessionOfferCreationFailPublish (1020)
+ MultihostSessionOfferCreationFailSubscribe (1021)
+ MultihostSessionNoIceCandidates (1022)
+ MultihostSessionStageAtCapacity (1024)
+ SignallingSessionCannotRead (1201)
+ SignallingSessionCannotSend (1202)
+ SignallingSessionBadResponse (1203)

These are reported asynchronously through the application-supplied stage renderer.

The SDK retries the operation for a limited number of times. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `IVSErrorDelegate:didEmitError` with the relevant error code and `IVSBroadcastErrorIsFatalKey == NO`.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries.

### Already Established, Then Fail
<a name="broadcast-ios-publish-subscribe-errors-established"></a>

A publish or subscribe can fail after it is established, most likely due to a network error. The error code for a "peer connection lost due to network error" is 1400.

This is reported asynchronously through the application-supplied stage renderer.

The SDK retries the publish/subscribe operation. During retries, the publish/subscribe state is `ATTEMPTING_PUBLISH` / `ATTEMPTING_SUBSCRIBE`. If the retry attempts succeed, the state changes to `PUBLISHED` / `SUBSCRIBED`.

The SDK calls `didEmitError` with error code = 1400 and IVSBroadcastErrorIsFatalKey = NO.

**Action**: No action is needed, as the SDK retries automatically. Optionally, the application can refresh the strategy to force more retries. In the event of total connectivity loss, it’s likely that the connection to Stages will fail too.
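
As a summary of this section, an application might map the error codes above to the suggested recovery actions. The enum and function below are illustrative, not part of the SDK:

```swift
// Illustrative mapping of the error codes documented in this section to
// the suggested recovery actions; not SDK API.
enum RecoveryAction {
    case createNewTokenAndRejoin   // token is malformed, expired, or revoked
    case waitForNetworkAndRejoin   // fatal network error
    case sdkRetries                // non-fatal: the SDK retries automatically
}

func recoveryAction(forCode code: Int, isFatal: Bool) -> RecoveryAction {
    switch code {
    case 1000, 1001, 1026:
        return .createNewTokenAndRejoin
    case 1300 where isFatal:
        return .waitForNetworkAndRejoin
    default:
        // Publish/subscribe errors (1020-1024, 1201-1203, 1400) are
        // retried by the SDK itself; no application action is required.
        return .sdkRetries
    }
}
```

The code would be read from the `NSError`'s `code` property and the fatal flag from `userInfo[IVSBroadcastErrorIsFatalKey]`, as shown earlier in this section.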