

# Getting Started with IVS Real-Time Streaming
<a name="getting-started"></a>

This document takes you through the steps involved in integrating Amazon IVS Real-Time Streaming into your app.

**Topics**
+ [Introduction to IVS Real-Time Streaming](getting-started-introduction.md)
+ [Step 1: Set Up IAM Permissions](getting-started-iam-permissions.md)
+ [Step 2: Create a Stage with Optional Participant Recording](getting-started-create-stage.md)
+ [Step 3: Distribute Participant Tokens](getting-started-distribute-tokens.md)
+ [Step 4: Integrate the IVS Broadcast SDK](getting-started-broadcast-sdk.md)
+ [Step 5: Publish and Subscribe to Video](getting-started-pub-sub.md)

# Introduction to IVS Real-Time Streaming
<a name="getting-started-introduction"></a>

This section lists prerequisites for using real-time streaming and introduces key terminology.

## Prerequisites
<a name="getting-started-introduction-prereq"></a>

Before you use Real-Time Streaming for the first time, complete the following tasks. For instructions, see [Getting Started with IVS Low-Latency Streaming](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/getting-started.html).
+ Create an AWS Account
+ Set Up Root and Administrative Users

## Other References
<a name="getting-started-introduction-extref"></a>
+ [IVS Web Broadcast SDK Reference](https://aws.github.io/amazon-ivs-web-broadcast/docs/sdk-reference)
+ [IVS Android Broadcast SDK Reference](https://aws.github.io/amazon-ivs-broadcast-docs/latest/android/)
+ [IVS iOS Broadcast SDK Reference](https://aws.github.io/amazon-ivs-broadcast-docs/latest/ios/)
+ [IVS Real-Time Streaming API Reference](https://docs.aws.amazon.com/ivs/latest/RealTimeAPIReference/Welcome.html)

## Real-Time Streaming Terminology
<a name="getting-started-introduction-terminology"></a>


| Term | Description | 
| --- | --- | 
| Stage | A virtual space where participants can exchange video in real time. | 
| Host | A participant that sends local video to the stage. | 
| Viewer | A participant that receives video of the hosts. | 
| Participant | A user connected to the stage as a host or viewer. | 
| Participant token | A token that authenticates a participant when they join a stage. | 
| Broadcast SDK | A client library that enables participants to send and receive video. | 

## Overview of Steps
<a name="getting-started-introduction-steps"></a>

1. [Set up IAM Permissions](getting-started-iam-permissions.md) — Create an AWS Identity and Access Management (IAM) policy that gives users a basic set of permissions and assign that policy to users.

1. [Create a stage](getting-started-create-stage.md) — Create a virtual space where participants can exchange video in real time.

1. [Distribute participant tokens](getting-started-distribute-tokens.md) — Send tokens to participants so they can join your stage.

1. [Integrate the IVS Broadcast SDK](getting-started-broadcast-sdk.md) — Add the broadcast SDK to your app to enable participants to send and receive video: [Web](getting-started-broadcast-sdk.md#getting-started-broadcast-sdk-web), [Android](getting-started-broadcast-sdk.md#getting-started-broadcast-sdk-android), and [iOS](getting-started-broadcast-sdk.md#getting-started-broadcast-sdk-ios).

1. [Publish and subscribe to video](getting-started-pub-sub.md) — Send your video to the stage and receive video from other hosts: [IVS console](getting-started-pub-sub.md#getting-started-pub-sub-console), [Publish & Subscribe with the IVS Web Broadcast SDK](getting-started-pub-sub-web.md), [Publish & Subscribe with the IVS Android Broadcast SDK](getting-started-pub-sub-android.md), and [Publish & Subscribe with the IVS iOS Broadcast SDK](getting-started-pub-sub-ios.md).

# Step 1: Set Up IAM Permissions
<a name="getting-started-iam-permissions"></a>

Next, you must create an AWS Identity and Access Management (IAM) policy that gives users a basic set of permissions (e.g., to create an Amazon IVS stage and create participant tokens) and assign that policy to users. You can either assign the permissions when creating a [new user](#iam-permissions-new-user) or add permissions to an [existing user](#iam-permissions-existing-user). Both procedures are given below.

For more information (for example, to learn about IAM users and policies, how to attach a policy to a user, and how to constrain what users can do with Amazon IVS), see:
+ [Creating an IAM User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#Using_CreateUser_console) in the *IAM User Guide*
+ The information in [Amazon IVS Security](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/security.html) on IAM and "Managed Policies for IVS"

You can either use an existing AWS managed policy for Amazon IVS or create a new policy that customizes the permissions you want to grant to a set of users, groups, or roles. Both approaches are described below.

## Use an Existing Policy for IVS Permissions
<a name="iam-permissions-existing-policy"></a>

In most cases, you will want to use an AWS managed policy for Amazon IVS. They are described fully in the [Managed Policies for IVS](https://docs.aws.amazon.com/ivs/latest/LowLatencyUserGuide/security-iam-awsmanpol.html) section of *IVS Security*.
+ Use the `IVSReadOnlyAccess` AWS managed policy to give your application developers access to all IVS Get and List API operations (for both low-latency and real-time streaming).
+ Use the `IVSFullAccess` AWS managed policy to give your application developers access to all IVS API operations (for both low-latency and real-time streaming).

## Optional: Create a Custom Policy for Amazon IVS Permissions
<a name="iam-permissions-new-policy"></a>

Follow these steps:

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Policies**, then choose **Create policy**. A **Specify permissions** window opens.

1. In the **Specify permissions** window, choose the **JSON** tab, then copy and paste the following IVS policy into the **Policy editor** text area. (The policy does not include all Amazon IVS actions; you can add or remove Allow/Deny permissions for operations as needed. See the [IVS Real-Time Streaming API Reference](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/Welcome.html) for details on IVS operations.)

------
#### [ JSON ]


   ```
   {
      "Version": "2012-10-17",
      "Statement": [
         {
            "Effect": "Allow",
            "Action": [
               "ivs:CreateStage",
               "ivs:CreateParticipantToken",
               "ivs:GetStage",
               "ivs:GetStageSession",
               "ivs:ListStages",
               "ivs:ListStageSessions",
               "ivs:CreateEncoderConfiguration",
               "ivs:GetEncoderConfiguration",
               "ivs:ListEncoderConfigurations",
               "ivs:GetComposition",
               "ivs:ListCompositions",
               "ivs:StartComposition",
               "ivs:StopComposition"
             ],
             "Resource": "*"
         },
         {
            "Effect": "Allow",
            "Action": [
               "cloudwatch:DescribeAlarms",
               "cloudwatch:GetMetricData",
               "s3:DeleteBucketPolicy",
               "s3:GetBucketLocation",
               "s3:GetBucketPolicy",
               "s3:PutBucketPolicy",
               "servicequotas:ListAWSDefaultServiceQuotas",
               "servicequotas:ListRequestedServiceQuotaChangeHistoryByQuota",
               "servicequotas:ListServiceQuotas",
               "servicequotas:ListServices",
               "servicequotas:ListTagsForResource"
            ],
            "Resource": "*"
         }
      ]
   }
   ```

------

1. Still in the **Specify permissions** window, choose **Next** (scroll to the bottom of the window to see this). A **Review and create** window opens. 

1. On the **Review and create** window, enter a **Policy name** and optionally add a **Description**. Make a note of the policy name, as you will need it when creating users (below). Choose **Create policy** (at the bottom of the window).

1. You are returned to the IAM console window, where you should see a banner confirming that your new policy was created.
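If you manage policies programmatically, the same policy document can be assembled in code. The sketch below builds a JSON policy document like the one above; the helper name and the (abbreviated) action list are illustrative, and actually creating the policy (for example, with the boto3 `iam` client's `create_policy` operation) requires AWS credentials.

```python
import json

def build_ivs_policy(extra_actions=None):
    """Illustrative helper: build an IVS policy document like the one above,
    adding or removing actions as needed for your users."""
    ivs_actions = [
        "ivs:CreateStage",
        "ivs:CreateParticipantToken",
        "ivs:GetStage",
        "ivs:ListStages",
    ] + list(extra_actions or [])
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ivs_actions, "Resource": "*"}
        ],
    })

policy_json = build_ivs_policy(extra_actions=["ivs:StartComposition"])
# Pass policy_json as the policy-document argument of IAM's CreatePolicy.
```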

## Create a New User and Add Permissions
<a name="iam-permissions-new-user"></a>

### IAM User Access Keys
<a name="iam-permissions-new-user-access-keys"></a>

IAM access keys consist of an access key ID and a secret access key. They are used to sign programmatic requests that you make to AWS. If you don't have access keys, you can create them from the AWS Management Console. As a best practice, do not create root-user access keys.

*The only time that you can view or download a secret access key is when you create access keys. You cannot recover them later.* However, you can create new access keys at any time, provided you have permissions to perform the required IAM actions.

Always store access keys securely. Never share them with third parties (even if an inquiry seems to come from Amazon). For more information, see [Managing access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) in the *IAM User Guide*.

### Procedure
<a name="iam-permissions-new-user-procedure"></a>

Follow these steps:

1. In the navigation pane, choose **Users**, then choose **Create user**. A **Specify user details** window opens. 

1. In the **Specify user details** window:

   1. Under **User details**, type the new **User name** to be created.

   1. Check **Provide user access to the AWS Management Console**.

   1. Under **Console password**, select **Autogenerated password**.

   1. Check **Users must create a new password at next sign-in**.

   1. Choose **Next**. A **Set permissions** window opens.

1. Under **Set permissions**, select **Attach policies directly**. A **Permissions policies** window opens.

1. In the search box, enter an IVS policy name (either an AWS managed policy or your previously created custom policy). When it is found, check the box to select the policy.

1. Choose **Next** (at the bottom of the window). A **Review and create** window opens.

1. On the **Review and create** window, confirm that all user details are correct, then choose **Create user** (at the bottom of the window).

1. The **Retrieve password** window opens, containing your **Console sign-in details**. *Save this information securely for future reference*. When you are done, choose **Return to users list**.

## Add Permissions to an Existing User
<a name="iam-permissions-existing-user"></a>

Follow these steps:

1. Sign in to the AWS Management Console and open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane, choose **Users**, then choose an existing user name to be updated. (Choose the name by clicking on it; do not check the selection box.)

1. On the **Summary** page, on the **Permissions** tab, choose **Add permissions**. An **Add permissions** window opens.

1. Select **Attach existing policies directly**. A **Permissions policies** window opens.

1. In the search box, enter an IVS policy name (either an AWS managed policy or your previously created custom policy). When the policy is found, check the box to select the policy.

1. Choose **Next** (at the bottom of the window). A **Review** window opens.

1. On the **Review** window, select **Add permissions** (at the bottom of the window).

1. On the **Summary** page, confirm that the IVS policy was added.

# Step 2: Create a Stage with Optional Participant Recording
<a name="getting-started-create-stage"></a>

A stage is a virtual space where participants can exchange video in real time. It is the foundational resource of the Real-Time Streaming API. You can create a stage using either the console or the [CreateStage](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/API_CreateStage.html) operation.

We recommend that where possible, you create a new stage for each logical session and delete it when done, rather than keeping old stages around for possible reuse. If stale stages are not cleaned up, you are likely to reach the limit on the maximum number of stages sooner.

You can create a stage — with or without individual participant recording — through the Amazon IVS console or the AWS CLI. Stage creation and recording are discussed below.

## Individual Participant Recording
<a name="getting-started-create-stage-ipr-overview"></a>

You have the option of enabling individual participant recording for a stage. If the individual participant recording to S3 feature is enabled, all individual participant broadcasts to the stage are recorded and saved to an Amazon S3 storage bucket that you own. Subsequently, the recording is available for on-demand playback.

*Setting this up is an advanced option.* By default, recording is disabled when a stage is created.

Before you can set up a stage for recording, you must create a *storage configuration*. This is a resource which specifies an Amazon S3 location where the recorded streams for the stage are stored. You can create and manage storage configurations using the console or CLI; both procedures are given below. After you create the storage configuration, you associate it with a stage either when you create the stage (as described below) or later, by updating an existing stage. (In the API, see [CreateStage](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/API_CreateStage.html) and [UpdateStage](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/API_UpdateStage.html).) You can associate multiple stages with the same storage configuration. You can delete a storage configuration that is no longer associated with any stages.

Keep in mind the following constraints:
+ You must own the S3 bucket. That is, the account that sets up a stage to be recorded must own the S3 bucket where recordings will be stored.
+ The stage, storage configuration, and S3 location must be in the same AWS region. If you create stages in other regions and want to record them, you must also set up storage configurations and S3 buckets in those regions.

Recording to your S3 bucket requires authorization with your AWS credentials. To give IVS the required access, an AWS IAM [Service-Linked Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) (SLR) is created automatically when the recording configuration is created. The SLR gives IVS write permission only on the specific bucket.

Note that network issues between the streaming location and AWS or within AWS could result in some data loss while recording your stream. In these cases, Amazon IVS prioritizes the live stream over the recording. For redundancy, record locally via your streaming tool.

For more information (including how to set up post-processing or VOD playback on your recorded files), see [Individual Participant Recording](rt-individual-participant-recording.md).

### How to Disable Recording
<a name="getting-started-disable-recording"></a>

To disable Amazon S3 recording on an existing stage:
+ Console — On the details page for the relevant stage, in the **Record individual participants** section, under **Auto-record to S3**, toggle off **Enable automatic recording**, then choose **Save changes**. This removes the storage configuration’s association with the stage; streams on that stage will no longer be recorded.
+ CLI — Run the `update-stage` command and pass in the recording-configuration ARN as an empty string:

  ```
  aws ivs-realtime update-stage --arn arn:aws:ivs:us-west-2:123456789012:stage/abcdABCDefgh --auto-participant-recording-configuration storageConfigurationArn=""
  ```

  This returns a stage object with an empty string for `storageConfigurationArn`, indicating that recording is disabled.

## Console Instructions for Creating an IVS Stage
<a name="getting-started-create-stage-console"></a>

1. Open the [Amazon IVS console](https://console.aws.amazon.com/ivs).

   (You also can access the Amazon IVS console through the [AWS Management Console](https://console.aws.amazon.com/).)

1. On the left navigation pane, select **Stages**, then select **Create stage**. The **Create stage** window appears.  
![\[Use the Create stage window to create a new stage and a participant token for it.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Create_Stage_Console_IPR.png)

1. Optionally enter a **Stage name**.

1. If you want to enable individual participant recording, complete the steps in [Set Up Automatic Individual Participant Recording to Amazon S3 (Optional)](#getting-started-create-stage-ipr) below.

1. Select **Create stage** to create the stage. The details page for the new stage appears.

### Set Up Automatic Individual Participant Recording to Amazon S3 (Optional)
<a name="getting-started-create-stage-ipr"></a>

Follow these steps to enable individual participant recording while creating a stage:

1. On the **Create stage** page, under **Record individual participants**, turn on **Enable automatic recording**. Additional fields appear, where you can choose **Recorded media types**, choose an existing **Storage configuration** or create a new one, and choose whether to record thumbnails at an interval.  
![\[Use the Record individual participants dialog to configure individual participant recording for a stage.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Create_Stage_Console_enable_IPR.png)

1. Choose which media types to record.

1. Choose **Create storage configuration**. A new window opens, with options for creating an Amazon S3 bucket and attaching it to the new recording configuration.  
![\[Use the Create storage configuration window to create a new storage configuration for a stage.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Create_Storage_Configuration_IPR.png)

1. Fill out the fields:

   1. Optionally enter a **Storage configuration name**.

   1. Enter a **Bucket name**.

1. Choose **Create storage configuration** to create a new storage-configuration resource with a unique ARN. Typically, creating the storage configuration takes a few seconds, but it can take up to 20 seconds. When the storage configuration is created, you are returned to the **Create stage** window. There, the **Record individual participants** area shows your new **Storage configuration** and the S3 bucket (**Storage**) that you created.  
![\[Create a stage using the IVS Console: New storage configuration created.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Create_Stage_Console_Storage_Configuration.png)

1. You can optionally enable other non-default options such as recording participant replicas, merging individual participant recordings, and thumbnail recording.  
![\[Create a stage using the IVS Console: enable advanced options like thumbnail recording and IPR stitching.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Create_Stage_Console_IPR_Stitching.png)

## CLI Instructions for Creating an IVS Stage
<a name="getting-started-create-stage-cli"></a>

To install the AWS CLI, see [Install or update to the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).

Now you can use the CLI to create and manage resources following one of the two procedures below, depending on whether you want to create a stage with or without individual participant recording enabled.

### Create a Stage without Individual Participant Recording
<a name="getting-started-create-stage-cli-without-ipr"></a>

The stage API is under the `ivs-realtime` namespace. For example, to create a stage:

```
aws ivs-realtime create-stage --name "test-stage"
```

The response is:

```
{
   "stage": {
      "arn": "arn:aws:ivs:us-west-2:376666121854:stage/VSWjvX5XOkU3",
      "name": "test-stage"
   }
}
```
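The same call can be made from application code with the AWS SDK for Python (boto3), whose `ivs-realtime` client exposes a `create_stage` operation. This is a sketch: the function accepts any client object with that operation, so in a real application you would pass `boto3.client("ivs-realtime")` (which requires AWS credentials).

```python
def create_stage(client, name: str) -> str:
    """Create an IVS stage and return its ARN.

    `client` is expected to expose the ivs-realtime CreateStage
    operation, e.g. boto3.client("ivs-realtime").
    """
    response = client.create_stage(name=name)
    return response["stage"]["arn"]
```

With real credentials, `create_stage(boto3.client("ivs-realtime"), "test-stage")` returns an ARN like the one in the response above.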

### Create a Stage with Individual Participant Recording
<a name="getting-started-create-stage-cli-with-ipr"></a>

To create a stage with individual participant recording enabled:

```
aws ivs-realtime create-stage --name "test-stage-participant-recording" --auto-participant-recording-configuration storageConfigurationArn=arn:aws:ivs:us-west-2:123456789012:storage-configuration/LKZ6QR7r55c2,mediaTypes=AUDIO_VIDEO
```

Optionally, pass the `thumbnailConfiguration` parameter to set the thumbnail storage mode, recording mode, and target interval in seconds:

```
aws ivs-realtime create-stage --name "test-stage-participant-recording" --auto-participant-recording-configuration storageConfigurationArn=arn:aws:ivs:us-west-2:123456789012:storage-configuration/LKZ6QR7r55c2,mediaTypes=AUDIO_VIDEO,thumbnailConfiguration="{targetIntervalSeconds=10,storage=[SEQUENTIAL,LATEST],recordingMode=INTERVAL}"
```

Optionally, pass the `recordingReconnectWindowSeconds` parameter to enable merging of fragmented individual participant recordings:

```
aws ivs-realtime create-stage --name "test-stage-participant-recording" --auto-participant-recording-configuration 'storageConfigurationArn=arn:aws:ivs:us-west-2:123456789012:storage-configuration/LKZ6QR7r55c2,mediaTypes=AUDIO_VIDEO,thumbnailConfiguration={targetIntervalSeconds=10,storage=[SEQUENTIAL,LATEST],recordingMode=INTERVAL},recordingReconnectWindowSeconds=60'
```

The response is:

```
{
   "stage": {
      "arn": "arn:aws:ivs:us-west-2:123456789012:stage/VSWjvX5XOkU3",
      "autoParticipantRecordingConfiguration": {
         "hlsConfiguration": {
             "targetSegmentDurationSeconds": 6
         },
         "mediaTypes": [
            "AUDIO_VIDEO"
         ],
         "recordingReconnectWindowSeconds": 60,
         "recordParticipantReplicas": true,
         "storageConfigurationArn": "arn:aws:ivs:us-west-2:123456789012:storage-configuration/LKZ6QR7r55c2",
         "thumbnailConfiguration": {
            "recordingMode": "INTERVAL",
            "storage": [
               "SEQUENTIAL",
               "LATEST"
            ],
            "targetIntervalSeconds": 10
         }
      },
      "endpoints": {
         "events": "<events-endpoint>",
         "rtmp": "<rtmp-endpoint>",
         "rtmps": "<rtmps-endpoint>",
         "whip": "<whip-endpoint>"
      },
      "name": "test-stage-participant-recording"
   }
}
```

# Step 3: Distribute Participant Tokens
<a name="getting-started-distribute-tokens"></a>

Now that you have a stage, you need to create tokens and distribute them to participants so they can join the stage and start sending and receiving video. There are two approaches to generating tokens:
+ [Create tokens with a key pair](#getting-started-distribute-tokens-self-signed).
+ [Create tokens with the IVS Real-Time Streaming API](#getting-started-distribute-tokens-api).

Both of these approaches are described below.

## Creating Tokens with a Key Pair
<a name="getting-started-distribute-tokens-self-signed"></a>

You can create tokens on your server application and distribute them to participants to join a stage. You need to generate an ECDSA public/private key pair to sign the JWTs and import the public key to IVS. Then IVS can verify the tokens at the time of stage join. 

IVS does not offer key expiry. If your private key is compromised, you must delete the old public key.

### Create a New Key Pair
<a name="getting-started-distribute-tokens-self-signed-create-key-pair"></a>

There are various ways to create a key pair. Below, we give two examples.

To create a new key pair in the console, follow these steps:

1. Open the [Amazon IVS console](https://console.aws.amazon.com/ivs). Choose your stage’s region if you are not already on it.

1. In the left navigation menu, choose **Real-time streaming > Public keys**.

1. Choose **Create public key**. A **Create public key** dialog appears.

1. Follow the prompts and choose **Create**.

1. Amazon IVS generates a new key pair. The public key is imported as a public key resource and the private key is immediately made available for download. The public key can also be downloaded later if necessary.

   Amazon IVS generates the key on the client side and does not store the private key. ***Be sure you save the key; you cannot retrieve it later.***

To create a new P384 EC key pair with OpenSSL (you may have to install [OpenSSL](https://www.openssl.org/source/) first), run the commands below. This process gives you access to both the private and public keys; you need the public key only if you want to test verification of your tokens.

```
openssl ecparam -name secp384r1 -genkey -noout -out priv.pem
openssl ec -in priv.pem -pubout -out public.pem
```

Now import your new public key, using the instructions below.

### Import the Public Key
<a name="getting-started-distribute-tokens-import-public-key"></a>

Once you have a key pair, you can import the public key into IVS. IVS does not need the private key; you use it to sign tokens.

To import an existing public key with the console:

1. Open the [Amazon IVS console](https://console.aws.amazon.com/ivs). Choose your stage’s region if you are not already on it.

1. In the left navigation menu, choose **Real-time streaming > Public keys**.

1. Choose **Import**. An **Import public key** dialog appears.

1. Follow the prompts and choose **Import**.

1. Amazon IVS imports your public key and generates a public key resource.

To import an existing public key with the CLI:

```
aws ivs-realtime import-public-key --public-key-material "`cat public.pem`" --region <aws-region>
```

You can omit `--region <aws-region>` if the region is in your local AWS configuration file.

Here is an example response:

```
{
    "publicKey": {
        "arn": "arn:aws:ivs:us-west-2:123456789012:public-key/f99cde61-c2b0-4df3-8941-ca7d38acca1a",
        "fingerprint": "98:0d:1a:a0:19:96:1e:ea:0a:0a:2c:9a:42:19:2b:e7",
        "publicKeyMaterial": "-----BEGIN PUBLIC KEY-----\nMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEVjYMV+P4ML6xemanCrtse/FDwsNnpYmS\nS6vRV9Wx37mjwi02hObKuCJqpj7x0lpz0bHm5v1JBvdZYAd/r2LR5aChK+/GM2Wj\nl8MG9NJIVFaw1u3bvjEjzTASSfS1BDX1\n-----END PUBLIC KEY-----\n",
        "tags": {}
    }
}
```

### API Request
<a name="getting-started-distribute-tokens-create-api"></a>

```
POST /ImportPublicKey HTTP/1.1
{
  "publicKeyMaterial": "<pem file contents>"
}
```

### Generate and Sign the Token
<a name="getting-started-distribute-tokens-self-signed-generate-sign"></a>

For details on working with JWTs and the supported libraries for signing tokens, visit [jwt.io](https://jwt.io/). On the jwt.io interface, you must enter your private key to sign tokens. The public key is needed only if you want to verify tokens.

All JWTs have three fields: header, payload, and signature.

The JSON schemas for the JWT’s header and payload are described below. Alternatively, you can copy sample JSON from the IVS console. To get the header and payload JSON from the IVS console:

1. Open the [Amazon IVS console](https://console.aws.amazon.com/ivs). Choose your stage’s region if you are not already on it.

1. In the left navigation menu, choose **Real-time streaming > Stages**.

1. Select the stage you want to use. Select **View details**.

1. In the **Participant tokens** section, select the drop-down next to **Create token**.

1. Select **Build token header and payload**.

1. Fill in the form and copy the JWT header and payload shown at the bottom of the popup.

#### Token Schema: Header
<a name="getting-started-distribute-tokens-self-signed-generate-sign-header"></a>

The header specifies:
+ `alg` is the signing algorithm. This is ES384, an ECDSA signature algorithm that uses the SHA-384 hash algorithm.
+ `typ` is the token type, JWT.
+ `kid` is the ARN of the public key used to sign the token. It must be the same ARN returned from the [GetPublicKey](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/API_GetPublicKey.html) API request.

```
{
  "alg": "ES384",
  "typ": "JWT",
  "kid": "arn:aws:ivs:us-east-1:123456789012:public-key/abcdefg12345"
}
```

#### Token Schema: Payload
<a name="getting-started-distribute-tokens-self-signed-generate-sign-payload"></a>

The payload contains data specific to IVS. All fields are mandatory except `user_id` and `attributes`.
+ `RegisteredClaims` in the JWT specification are reserved claims that must be provided for the stage token to be valid:
  + `exp` (expiration time) is a Unix UTC timestamp for when the token expires. (A Unix timestamp is a numeric value representing the number of seconds from 1970-01-01T00:00:00Z UTC until the specified UTC date/time, ignoring leap seconds.) The token is validated when the participant joins a stage. IVS provides tokens with a default 12-hour TTL, which we recommend; this can be extended to a maximum of 14 days from the issued-at time (`iat`). It must be an integer value.
  + `iat` (issued-at time) is a Unix UTC timestamp for when the JWT was issued. (See the note for `exp` about Unix timestamps.) It must be an integer value.
  + `jti` (JWT ID) is the participant ID used for tracking and referring to the participant to whom the token is granted. Every token must have a unique participant ID. It must be a case-sensitive string, up to 64 characters long, containing only alphanumeric, hyphen (-), and underscore (_) characters. No other special characters are allowed.
+ `user_id` is an optional, customer-assigned name to help identify the token; this can be used to link a participant to a user in the customer’s own systems. This should match the `userId` field in the [CreateParticipantToken](https://docs.aws.amazon.com/ivs/latest/RealTimeAPIReference/API_CreateParticipantToken.html) API request. It can be any UTF-8 encoded text and is a string of up to 128 characters. *This field is exposed to all stage participants and should not be used for personally identifying, confidential, or sensitive information.*
+ `resource` is the ARN of the stage; e.g., `arn:aws:ivs:us-east-1:123456789012:stage/oRmLNwuCeMlQ`.
+ `topic` is the ID of the stage, which can be extracted from the stage ARN. For example, if the stage ARN is `arn:aws:ivs:us-east-1:123456789012:stage/oRmLNwuCeMlQ`, the stage ID is `oRmLNwuCeMlQ`.
+ `events_url` must be the events endpoint returned from the CreateStage or GetStage operation. We recommend that you cache this value at stage-creation time; the value can be cached for up to 14 days. An example value is `wss://global.events.live-video.net`.
+ `whip_url` must be the WHIP endpoint returned from the CreateStage or GetStage operation. We recommend that you cache this value at stage-creation time; the value can be cached for up to 14 days. An example value is `https://453fdfd2ad24df.global-bm.whip.live-video.net`.
+ `capabilities` specifies the capabilities of the token; valid values are `allow_publish` and `allow_subscribe`. For subscribe-only tokens, set only `allow_subscribe` to `true`.
+ `attributes` is an optional field where you can specify application-provided attributes to encode into the token and attach to a stage. Map keys and values can contain UTF-8 encoded text. The maximum length of this field is 1 KB total. *This field is exposed to all stage participants and should not be used for personally identifying, confidential, or sensitive information.*
+ `version` must be `1.0`.

  ```
  {
    "exp": 1697322063,
    "iat": 1697149263,
    "jti": "Mx6clRRHODPy",
    "user_id": "<optional_customer_assigned_name>",
    "resource": "<stage_arn>",
    "topic": "<stage_id>",
    "events_url": "wss://global.events.live-video.net",
    "whip_url": "https://114ddfabadaf.global-bm.whip.live-video.net",
    "capabilities": {
      "allow_publish": true,
      "allow_subscribe": true
    },
    "attributes": {
      "optional_field_1": "abcd1234",
      "optional_field_2": "false"
    },
    "version": "1.0"
  }
  ```
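Since `topic` and `resource` are derived from the same stage ARN, a small helper can keep the two fields consistent. Below is a minimal sketch; the helper name is illustrative, not part of any IVS API:

```javascript
// Derive the "topic" (stage ID) from a stage ARN of the form
// arn:aws:ivs:<region>:<account-id>:stage/<stage-id>
function stageIdFromArn(stageArn) {
  const match = /:stage\/([A-Za-z0-9]+)$/.exec(stageArn);
  if (!match) {
    throw new Error("Not a stage ARN: " + stageArn);
  }
  return match[1];
}

// e.g. stageIdFromArn("arn:aws:ivs:us-east-1:123456789012:stage/oRmLNwuCeMlQ")
// returns "oRmLNwuCeMlQ"
```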

#### Token Schema: Signature
<a name="getting-started-distribute-tokens-self-signed-generate-sign-signature"></a>

To create the signature, use the private key with the algorithm specified in the header (ES384) to sign the encoded header and encoded payload.

```
ECDSASHA384(
  base64UrlEncode(header) + "." +
  base64UrlEncode(payload),
  <private-key>
)
```

#### Instructions
<a name="getting-started-distribute-tokens-self-signed-generate-sign-instructions"></a>

1. Generate the token’s signature with an ES384 signing algorithm and a private key that is associated with the public key provided to IVS.

1. Assemble the token.

   ```
   base64UrlEncode(header) + "." +
   base64UrlEncode(payload) + "." +
   base64UrlEncode(signature)
   ```

## Creating Tokens with the IVS Real-Time Streaming API
<a name="getting-started-distribute-tokens-api"></a>

![\[Distribute participant tokens: Stage token workflow\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Distribute_Participant_Tokens.png)


As shown above, a client application asks your server application for a token, and the server application calls CreateParticipantToken using an AWS SDK or SigV4 signed request. Since AWS credentials are used to call the API, the token should be generated in a secure server-side application, not the client-side application.

When creating a participant token, you can optionally specify attributes and/or capabilities:
+ You can specify application-provided attributes to encode into the token and attach to a stage. Map keys and values can contain UTF-8 encoded text. The maximum length of this field is 1 KB total. *This field is exposed to all stage participants and should not be used for personally identifying, confidential, or sensitive information.*
+ You can specify capabilities enabled by the token. The default is `PUBLISH` and `SUBSCRIBE`, which allows the participant to send and receive audio and video, but you could issue tokens with a subset of capabilities. For example, you could issue a token with only the `SUBSCRIBE` capability for moderators. In that case, the moderators could see the participants that are sending video but not send their own video.

For details, see [CreateParticipantToken](https://docs.aws.amazon.com//ivs/latest/RealTimeAPIReference/API_CreateParticipantToken.html).

You can create participant tokens via the console or CLI for testing and development, but most likely you will want to create them with the AWS SDK in your production environment.

You will need a way to distribute tokens from your server to each client (e.g., via an API request). We do not provide this functionality. For this guide, you can simply copy and paste the tokens into client code in the following steps.

**Important**: Treat tokens as opaque; i.e., do not build functionality based on token contents. The format of tokens could change in the future.

### Console Instructions
<a name="getting-started-distribute-tokens-console"></a>

1. Navigate to the stage you created in the prior step.

1. Select **Create token**. The **Create token** window appears.

1. Enter a user ID to be associated with the token. This can be any UTF-8 encoded text. 

1. Select **Create**.

1. Copy the token. *Important: Be sure to save the token; IVS does not store it and you cannot retrieve it later*.

### CLI Instructions
<a name="getting-started-distribute-tokens-cli"></a>

Creating a token with the AWS CLI requires that you first download and configure the CLI on your machine. For details, see the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html). Note that generating tokens with the AWS CLI is good for testing purposes, but for production use, we recommend that you generate tokens on the server side with the AWS SDK (see instructions below).

1. Run the `create-participant-token` command with the stage ARN. Include any or all of the following capabilities: `"PUBLISH"`, `"SUBSCRIBE"`.

   ```
   aws ivs-realtime create-participant-token --stage-arn arn:aws:ivs:us-west-2:376666121854:stage/VSWjvX5XOkU3 --capabilities '["PUBLISH", "SUBSCRIBE"]'
   ```

1. This returns a participant token:

   ```
   {
       "participantToken": {
           "capabilities": [
               "PUBLISH",
               "SUBSCRIBE"
           ],
           "expirationTime": "2023-06-03T07:04:31+00:00",
           "participantId": "tU06DT5jCJeb",
           "token": "eyJhbGciOiJLTVMiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE2NjE1NDE0MjAsImp0aSI6ImpGcFdtdmVFTm9sUyIsInJlc291cmNlIjoiYXJuOmF3czppdnM6dXMtd2VzdC0yOjM3NjY2NjEyMTg1NDpzdGFnZS9NbzhPUWJ0RGpSIiwiZXZlbnRzX3VybCI6IndzczovL3VzLXdlc3QtMi5ldmVudHMubGl2ZS12aWRlby5uZXQiLCJ3aGlwX3VybCI6Imh0dHBzOi8vNjZmNzY1YWM4Mzc3Lmdsb2JhbC53aGlwLmxpdmUtdmlkZW8ubmV0IiwiY2FwYWJpbGl0aWVzIjp7ImFsbG93X3B1Ymxpc2giOnRydWUsImFsbG93X3N1YnNjcmliZSI6dHJ1ZX19.MGQCMGm9affqE3B2MAb_DSpEm0XEv25hfNNhYn5Um4U37FTpmdc3QzQKTKGF90swHqVrDgIwcHHHIDY3c9eanHyQmcKskR1hobD0Q9QK_GQETMQS54S-TaKjllW9Qac6c5xBrdAk"
       }
   }
   ```

1. Save this token. You will need this to join the stage and send and receive video.

### AWS SDK Instructions
<a name="getting-started-distribute-tokens-sdk"></a>

You can use the AWS SDK to create tokens. Below are instructions for the AWS SDK for JavaScript.

**Important:** This code must be executed on the server side and its output passed to the client.

**Prerequisite:** To use the code sample below, you need to install the `@aws-sdk/client-ivs-realtime` package. For details, see [Getting started with the AWS SDK for JavaScript](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started.html).

```
import { IVSRealTimeClient, CreateParticipantTokenCommand } from "@aws-sdk/client-ivs-realtime";

const ivsRealtimeClient = new IVSRealTimeClient({ region: 'us-west-2' });
const stageArn = 'arn:aws:ivs:us-west-2:123456789012:stage/L210UYabcdef';
const createStageTokenRequest = new CreateParticipantTokenCommand({
  stageArn,
});
const response = await ivsRealtimeClient.send(createStageTokenRequest);
console.log('token', response.participantToken.token);
```

# Step 4: Integrate the IVS Broadcast SDK
<a name="getting-started-broadcast-sdk"></a>

IVS provides a broadcast SDK for web, Android, and iOS that you can integrate into your application. The broadcast SDK is used for both sending and receiving video. If you have [configured RTMP Ingest for your stage](https://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/rt-stream-ingest.html), you may use any encoder that can broadcast to an RTMP endpoint (e.g., OBS or ffmpeg).

In this section, we write a simple application that enables two or more participants to interact in real time. The steps below guide you through creating an app called BasicRealTime. The full app code is on CodePen and GitHub:
+ Web: [https://codepen.io/amazon-ivs/pen/ZEqgrpo](https://codepen.io/amazon-ivs/pen/ZEqgrpo)
+ Android: [https://github.com/aws-samples/amazon-ivs-real-time-streaming-android-samples](https://github.com/aws-samples/amazon-ivs-real-time-streaming-android-samples)
+ iOS: [https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples](https://github.com/aws-samples/amazon-ivs-real-time-streaming-ios-samples)

## Web
<a name="getting-started-broadcast-sdk-web"></a>

### Set Up Files
<a name="getting-started-broadcast-sdk-web-setup"></a>

To start, set up your files by creating a folder and an initial HTML and JS file:

```
mkdir realtime-web-example
cd realtime-web-example
touch index.html
touch app.js
```

You can install the broadcast SDK using a script tag or npm. Our example uses the script tag for simplicity, but it is easy to modify if you choose to use npm later.

### Using a Script Tag
<a name="getting-started-broadcast-sdk-web-script"></a>

The Web broadcast SDK is distributed as a JavaScript library and can be retrieved at [https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js](https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js).

When loaded via `<script>` tag, the library exposes a global variable in the window scope named `IVSBroadcastClient`.

### Using npm
<a name="getting-started-broadcast-sdk-web-npm"></a>

To install the npm package:

```
npm install amazon-ivs-web-broadcast
```

You can now import the library and access the `IVSBroadcastClient` object:

```
import IVSBroadcastClient from 'amazon-ivs-web-broadcast';

const { Stage } = IVSBroadcastClient;
```

## Android
<a name="getting-started-broadcast-sdk-android"></a>

### Create the Android Project
<a name="getting-started-broadcast-sdk-android-project"></a>

1. In Android Studio, create a **New Project**.

1. Choose **Empty Views Activity**.

   Note: In some older versions of Android Studio, the View-based activity is called **Empty Activity**. If your Android Studio window shows **Empty Activity** and does *not* show **Empty Views Activity**, select **Empty Activity**. Otherwise, don't select **Empty Activity**, since we'll be using View APIs (not Jetpack Compose).

1. Give your project a **Name**, then select **Finish**.

### Install the Broadcast SDK
<a name="getting-started-broadcast-sdk-android-install"></a>

To add the Amazon IVS Android broadcast library to your Android development environment, add the library to your module’s `build.gradle` file, as shown here (for the latest version of the Amazon IVS broadcast SDK). In newer projects, the `mavenCentral` repository may already be included in your `settings.gradle` file; if that is the case, you can omit the `repositories` block. For our sample, we’ll also need to enable data binding in the `android` block.

```
android {
    dataBinding.enabled true
}

repositories {
    mavenCentral()
}
 
dependencies {
     implementation 'com.amazonaws:ivs-broadcast:1.41.0:stages@aar'
}
```

Alternately, to install the SDK manually, download the latest version from this location:

[https://search.maven.org/artifact/com.amazonaws/ivs-broadcast](https://search.maven.org/artifact/com.amazonaws/ivs-broadcast)

## iOS
<a name="getting-started-broadcast-sdk-ios"></a>

### Create the iOS Project
<a name="getting-started-broadcast-sdk-ios-project"></a>

1. Create a new Xcode project.

1. For **Platform**, select **iOS**.

1. For **Application**, select **App**.

1. Enter the **Product Name** of your app, then select **Next**.

1. Choose (navigate to) a directory in which to save the project, then select **Create**.

Next you need to bring in the SDK. For instructions, see [Install the Library](broadcast-ios-getting-started.md#broadcast-ios-install) in the *iOS Broadcast SDK Guide*.

### Configure Permissions
<a name="getting-started-broadcast-sdk-ios-config"></a>

You need to update your project’s `Info.plist` to add two new entries for `NSCameraUsageDescription` and `NSMicrophoneUsageDescription`. For the values, provide user-facing explanations of why your app is asking for camera and microphone access.
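In source form, the two entries look like this (the description strings below are examples; use wording appropriate for your app):

```
<key>NSCameraUsageDescription</key>
<string>BasicRealTime uses the camera to publish your video to the stage.</string>
<key>NSMicrophoneUsageDescription</key>
<string>BasicRealTime uses the microphone to publish your audio to the stage.</string>
```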

![\[Configure iOS permissions.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/iOS_Configure.png)


# Step 5: Publish and Subscribe to Video
<a name="getting-started-pub-sub"></a>

You can publish and subscribe to IVS real-time streams with:
+ The native [IVS broadcast SDKs](https://docs.aws.amazon.com//ivs/latest/LowLatencyUserGuide/getting-started-set-up-streaming.html#broadcast-sdk), which support WebRTC and RTMPS. We recommend this, especially for production scenarios. See the details below for [Web](getting-started-pub-sub-web.md), [Android](getting-started-pub-sub-android.md), and [iOS](getting-started-pub-sub-ios.md).
+ The Amazon IVS console — This is suitable for testing streams. See below.
+ Other streaming software and hardware encoders — You can use any streaming encoder that supports the RTMP, RTMPS, or WHIP protocols. See [Stream Ingest](rt-stream-ingest.md) for more information.

## IVS Console
<a name="getting-started-pub-sub-console"></a>

1. Open the [Amazon IVS console](https://console.aws.amazon.com/ivs).

   (You can also access the Amazon IVS console through the [AWS Management Console](https://console.aws.amazon.com/).)

1. In the navigation pane, select **Stages**. (If the nav pane is collapsed, expand it by selecting the hamburger icon.)

1. Select the stage to which you want to subscribe or publish, to go to its details page.

1. To subscribe: If the stage has one or more publishers, you can subscribe to it by pressing the **Subscribe** button, under the **Subscribe** tab. (Tabs are below the **General Configuration** section.)

1. To publish:

   1. Select the **Publish** tab.

   1. You will be prompted to grant the IVS console access to your camera and microphone; **Allow** those permissions.

   1. Toward the bottom of the **Publish** tab, use the dropdown boxes to select input devices for the microphone and camera.

   1. To begin publishing, select **Start publishing**.

   1. To view your published content, go back to the **Subscribe** tab.

   1. To stop publishing, go to the **Publish** tab and press the **Stop publishing** button towards the bottom.

**Note**: Subscribing and publishing consume resources, and you will incur an hourly rate for the time you are connected to the stage. To learn more, see [Real-Time Streaming](https://aws.amazon.com/ivs/pricing/#Real-Time_Streaming) on the IVS Pricing page.

# Publish & Subscribe with the IVS Web Broadcast SDK
<a name="getting-started-pub-sub-web"></a>

This section takes you through the steps involved in publishing and subscribing to a stage using your web app.

## Create HTML Boilerplate
<a name="getting-started-pub-sub-web-html"></a>

First let's create the HTML boilerplate and import the library as a script tag:

```
<!DOCTYPE html>
<html lang="en">

<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />

  <!-- Import the SDK -->
  <script src="https://web-broadcast.live-video.net/1.34.0/amazon-ivs-web-broadcast.js"></script>
</head>

<body>

<!-- TODO - fill in with next sections -->
<script src="./app.js"></script>

</body>
</html>
```

## Accept Token Input and Add Join/Leave Buttons
<a name="getting-started-pub-sub-web-join"></a>

Here we fill in the body with our input controls. These take as input the token, and they set up **Join** and **Leave** buttons. Typically applications will request the token from your application's API, but for this example you'll copy and paste the token into the token input.

```
<h1>IVS Real-Time Streaming</h1>
<hr />

<label for="token">Token</label>
<input type="text" id="token" name="token" />
<button class="button" id="join-button">Join</button>
<button class="button" id="leave-button" style="display: none;">Leave</button>
<hr />
```

## Add Media Container Elements
<a name="getting-started-pub-sub-web-media"></a>

These elements will hold the media for our local and remote participants. We add a script tag to load our application's logic defined in `app.js`.

```
<!-- Local Participant -->
<div id="local-media"></div>

<!-- Remote Participants -->
<div id="remote-media"></div>

<!-- Load Script -->
<script src="./app.js"></script>
```

This completes the HTML page and you should see this when loading `index.html` in a browser:

![\[View Real-Time Streaming in a browser: HTML setup complete.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/RT_Browser_View.png)


## Create app.js
<a name="getting-started-pub-sub-web-appjs"></a>

Let's move to defining the contents of our `app.js` file. Begin by importing all the requisite properties from the SDK's global:

```
const {
  Stage,
  LocalStageStream,
  SubscribeType,
  StageEvents,
  ConnectionState,
  StreamType
} = IVSBroadcastClient;
```

## Create Application Variables
<a name="getting-started-pub-sub-web-vars"></a>

Establish variables to hold references to our **Join** and **Leave** button HTML elements and store state for the application:

```
let joinButton = document.getElementById("join-button");
let leaveButton = document.getElementById("leave-button");

// Stage management
let stage;
let joining = false;
let connected = false;
let localCamera;
let localMic;
let cameraStageStream;
let micStageStream;
```

## Create joinStage 1: Define the Function and Validate Input
<a name="getting-started-pub-sub-web-joinstage1"></a>

The `joinStage` function takes the input token, creates a connection to the stage, and begins to publish video and audio retrieved from `getUserMedia`.

To start, we define the function and validate the state and token input. We'll flesh out this function in the next few sections.

```
const joinStage = async () => {
  if (connected || joining) {
    return;
  }
  joining = true;

  const token = document.getElementById("token").value;

  if (!token) {
    window.alert("Please enter a participant token");
    joining = false;
    return;
  }

  // Fill in with the next sections
};
```

## Create joinStage 2: Get Media to Publish
<a name="getting-started-pub-sub-web-joinstage2"></a>

Next, retrieve the media that will be published to the stage:

```
async function getCamera() {
  // Request camera access with default constraints
  return navigator.mediaDevices.getUserMedia({
    video: true,
    audio: false
  });
}

async function getMic() {
  return navigator.mediaDevices.getUserMedia({
    video: false,
    audio: true
  });
}

// Retrieve the User Media currently set on the page
localCamera = await getCamera();
localMic = await getMic();

// Create StageStreams for Audio and Video
cameraStageStream = new LocalStageStream(localCamera.getVideoTracks()[0]);
micStageStream = new LocalStageStream(localMic.getAudioTracks()[0]);
```

## Create joinStage 3: Define the Stage Strategy and Create the Stage
<a name="getting-started-pub-sub-web-joinstage3"></a>

The stage strategy is the heart of the decision logic that the SDK uses to decide what to publish and which participants to subscribe to. For more information on the strategy's purpose, see [Strategy](web-publish-subscribe.md#web-publish-subscribe-concepts-strategy).

This strategy is simple. After joining the stage, publish the streams we just retrieved and subscribe to every remote participant's audio and video:

```
const strategy = {
  stageStreamsToPublish() {
    return [cameraStageStream, micStageStream];
  },
  shouldPublishParticipant() {
    return true;
  },
  shouldSubscribeToParticipant() {
    return SubscribeType.AUDIO_VIDEO;
  }
};

stage = new Stage(token, strategy);
```

## Create joinStage 4: Handle Stage Events and Render Media
<a name="getting-started-pub-sub-web-joinstage4"></a>

Stages emit many events. We'll need to listen for the `STAGE_PARTICIPANT_STREAMS_ADDED` and `STAGE_PARTICIPANT_LEFT` events to render and remove media on the page. A more exhaustive set of events is listed in [Events](web-publish-subscribe.md#web-publish-subscribe-concepts-events).

Note that we create four helper functions here to assist us in managing necessary DOM elements: `setupParticipant`, `teardownParticipant`, `createVideoEl`, and `createContainer`.

```
stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
  connected = state === ConnectionState.CONNECTED;

  if (connected) {
    joining = false;
    joinButton.style = "display: none";
    leaveButton.style = "display: inline-block";
  }
});

stage.on(
  StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED,
  (participant, streams) => {
    console.log("Participant Media Added: ", participant, streams);

    let streamsToDisplay = streams;

    if (participant.isLocal) {
      // Exclude local audio streams; otherwise echo will occur
      streamsToDisplay = streams.filter(
        (stream) => stream.streamType === StreamType.VIDEO
      );
    }

    const videoEl = setupParticipant(participant);
    streamsToDisplay.forEach((stream) =>
      videoEl.srcObject.addTrack(stream.mediaStreamTrack)
    );
  }
);

stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {
  console.log("Participant Left: ", participant);
  teardownParticipant(participant);
});


// Helper functions for managing DOM

function setupParticipant({ isLocal, id }) {
  const groupId = isLocal ? "local-media" : "remote-media";
  const groupContainer = document.getElementById(groupId);

  const participantContainerId = isLocal ? "local" : id;
  const participantContainer = createContainer(participantContainerId);
  const videoEl = createVideoEl(participantContainerId);

  participantContainer.appendChild(videoEl);
  groupContainer.appendChild(participantContainer);

  return videoEl;
}

function teardownParticipant({ isLocal, id }) {
  const groupId = isLocal ? "local-media" : "remote-media";
  const groupContainer = document.getElementById(groupId);
  const participantContainerId = isLocal ? "local" : id;

  const participantDiv = document.getElementById(
    participantContainerId + "-container"
  );
  if (!participantDiv) {
    return;
  }
  groupContainer.removeChild(participantDiv);
}

function createVideoEl(id) {
  const videoEl = document.createElement("video");
  videoEl.id = id;
  videoEl.autoplay = true;
  videoEl.playsInline = true;
  videoEl.srcObject = new MediaStream();
  return videoEl;
}

function createContainer(id) {
  const participantContainer = document.createElement("div");
  participantContainer.classList = "participant-container";
  participantContainer.id = id + "-container";

  return participantContainer;
}
```

## Create joinStage 5: Join the Stage
<a name="getting-started-pub-sub-web-joinstage5"></a>

Let's complete our `joinStage` function by finally joining the stage:

```
try {
  await stage.join();
} catch (err) {
  joining = false;
  connected = false;
  console.error(err.message);
}
```

## Create leaveStage
<a name="getting-started-pub-sub-web-leavestage"></a>

Define the `leaveStage` function, which the **Leave** button will invoke.

```
const leaveStage = async () => {
  stage.leave();

  joining = false;
  connected = false;
};
```

## Initialize Input-Event Handlers
<a name="getting-started-pub-sub-web-handlers"></a>

We'll add one last function to our `app.js` file. This function is invoked immediately when the page loads and establishes event handlers for joining and leaving the stage.

```
const init = async () => {
  try {
    // Prevents issues on Safari/FF so devices are not blank
    await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  } catch (e) {
    alert(
      "Problem retrieving media! Enable camera and microphone permissions."
    );
  }

  joinButton.addEventListener("click", () => {
    joinStage();
  });

  leaveButton.addEventListener("click", () => {
    leaveStage();
    joinButton.style = "display: inline-block";
    leaveButton.style = "display: none";
  });
};

init(); // call the function
```

## Run the Application and Provide a Token
<a name="getting-started-pub-sub-run-app"></a>

At this point you can share the web page locally or with others, [open the page](#getting-started-pub-sub-web-media), enter a participant token, and join the stage.

## What’s Next?
<a name="getting-started-pub-sub-next"></a>

For more detailed examples involving npm, React, and more, see the [IVS Broadcast SDK: Web Guide (Real-Time Streaming Guide)](broadcast-web.md).

# Publish & Subscribe with the IVS Android Broadcast SDK
<a name="getting-started-pub-sub-android"></a>

This section takes you through the steps involved in publishing and subscribing to a stage using your Android app.

## Create Views
<a name="getting-started-pub-sub-android-views"></a>

We start by creating a simple layout for our app using the auto-created `activity_main.xml` file. The layout contains an `EditText` to add a token, a Join `Button`, a `TextView` to show the stage state, and a `CheckBox` to toggle publishing.

![\[Set up the publishing layout for your Android app.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_Android_1.png)


Here is the XML behind the view:

```
<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools">

    <androidx.constraintlayout.widget.ConstraintLayout
        android:keepScreenOn="true"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:context=".BasicActivity">

        <androidx.constraintlayout.widget.ConstraintLayout
            android:id="@+id/main_controls_container"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:background="@color/cardview_dark_background"
            android:padding="12dp"
            app:layout_constraintTop_toTopOf="parent">

            <EditText
                android:id="@+id/main_token"
                android:layout_width="0dp"
                android:layout_height="wrap_content"
                android:autofillHints="@null"
                android:backgroundTint="@color/white"
                android:hint="@string/token"
                android:imeOptions="actionDone"
                android:inputType="text"
                android:textColor="@color/white"
                app:layout_constraintEnd_toStartOf="@id/main_join"
                app:layout_constraintStart_toStartOf="parent"
                app:layout_constraintTop_toTopOf="parent" />

            <Button
                android:id="@+id/main_join"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:backgroundTint="@color/black"
                android:text="@string/join"
                android:textAllCaps="true"
                android:textColor="@color/white"
                android:textSize="16sp"
                app:layout_constraintBottom_toBottomOf="@+id/main_token"
                app:layout_constraintEnd_toEndOf="parent"
                app:layout_constraintStart_toEndOf="@id/main_token" />

            <TextView
                android:id="@+id/main_state"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="@string/state"
                android:textColor="@color/white"
                android:textSize="18sp"
                app:layout_constraintBottom_toBottomOf="parent"
                app:layout_constraintStart_toStartOf="parent"
                app:layout_constraintTop_toBottomOf="@id/main_token" />

            <TextView
                android:id="@+id/main_publish_text"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:text="@string/publish"
                android:textColor="@color/white"
                android:textSize="18sp"
                app:layout_constraintBottom_toBottomOf="parent"
                app:layout_constraintEnd_toStartOf="@id/main_publish_checkbox"
                app:layout_constraintTop_toBottomOf="@id/main_token" />

            <CheckBox
                android:id="@+id/main_publish_checkbox"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:buttonTint="@color/white"
                android:checked="true"
                app:layout_constraintBottom_toBottomOf="@id/main_publish_text"
                app:layout_constraintEnd_toEndOf="parent"
                app:layout_constraintTop_toTopOf="@id/main_publish_text" />

        </androidx.constraintlayout.widget.ConstraintLayout>

        <androidx.recyclerview.widget.RecyclerView
            android:id="@+id/main_recycler_view"
            android:layout_width="match_parent"
            android:layout_height="0dp"
            app:layout_constraintTop_toBottomOf="@+id/main_controls_container"
            app:layout_constraintBottom_toBottomOf="parent" />

    </androidx.constraintlayout.widget.ConstraintLayout>
</layout>
```

We referenced a couple of string IDs here, so we’ll create our entire `strings.xml` file now:

```
<resources>
    <string name="app_name">BasicRealTime</string>
    <string name="join">Join</string>
    <string name="leave">Leave</string>
    <string name="token">Participant Token</string>
    <string name="publish">Publish</string>
    <string name="state">State: %1$s</string>
</resources>
```

Let’s link those views in the XML to our `MainActivity.kt`:

```
import android.widget.Button
import android.widget.CheckBox
import android.widget.EditText
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

private lateinit var checkboxPublish: CheckBox
private lateinit var recyclerView: RecyclerView
private lateinit var buttonJoin: Button
private lateinit var textViewState: TextView
private lateinit var editTextToken: EditText

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    checkboxPublish = findViewById(R.id.main_publish_checkbox)
    recyclerView = findViewById(R.id.main_recycler_view)
    buttonJoin = findViewById(R.id.main_join)
    textViewState = findViewById(R.id.main_state)
    editTextToken = findViewById(R.id.main_token)
}
```

Now we create an item view for our `RecyclerView`. To do this, right-click your `res/layout` directory and select **New > Layout Resource File**. Name this new file `item_stage_participant.xml`.

![\[Create an item view for your Android app RecyclerView.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_Android_2.png)


The layout for this item is simple: it contains a view for rendering a participant’s video stream and a list of labels for displaying information about the participant:

![\[Create an item view for your Android app RecyclerView - labels.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_Android_3.png)


Here is the XML:

```
<?xml version="1.0" encoding="utf-8"?>
<com.amazonaws.ivs.realtime.basicrealtime.ParticipantItem xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        android:id="@+id/participant_preview_container"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        tools:background="@android:color/darker_gray" />

    <LinearLayout
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginStart="8dp"
        android:layout_marginTop="8dp"
        android:background="#50000000"
        android:orientation="vertical"
        android:paddingLeft="4dp"
        android:paddingTop="2dp"
        android:paddingRight="4dp"
        android:paddingBottom="2dp"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <TextView
            android:id="@+id/participant_participant_id"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="You (Disconnected)" />

        <TextView
            android:id="@+id/participant_publishing"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="NOT_PUBLISHED" />

        <TextView
            android:id="@+id/participant_subscribed"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="NOT_SUBSCRIBED" />

        <TextView
            android:id="@+id/participant_video_muted"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="Video Muted: false" />

        <TextView
            android:id="@+id/participant_audio_muted"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="Audio Muted: false" />

        <TextView
            android:id="@+id/participant_audio_level"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/white"
            android:textSize="16sp"
            tools:text="Audio Level: -100 dB" />

    </LinearLayout>

</com.amazonaws.ivs.realtime.basicrealtime.ParticipantItem>
```

This XML file inflates a class we haven’t created yet, `ParticipantItem`. Because the XML references that class by its fully qualified name, be sure to update this XML file to match your app’s namespace. Let’s create this class and set up the views, but otherwise leave it blank for now.

Create a new Kotlin class, `ParticipantItem`:

```
package com.amazonaws.ivs.realtime.basicrealtime

import android.content.Context
import android.util.AttributeSet
import android.widget.FrameLayout
import android.widget.TextView
import kotlin.math.roundToInt

class ParticipantItem @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0,
    defStyleRes: Int = 0,
) : FrameLayout(context, attrs, defStyleAttr, defStyleRes) {

    private lateinit var previewContainer: FrameLayout
    private lateinit var textViewParticipantId: TextView
    private lateinit var textViewPublish: TextView
    private lateinit var textViewSubscribe: TextView
    private lateinit var textViewVideoMuted: TextView
    private lateinit var textViewAudioMuted: TextView
    private lateinit var textViewAudioLevel: TextView

    override fun onFinishInflate() {
        super.onFinishInflate()
        previewContainer = findViewById(R.id.participant_preview_container)
        textViewParticipantId = findViewById(R.id.participant_participant_id)
        textViewPublish = findViewById(R.id.participant_publishing)
        textViewSubscribe = findViewById(R.id.participant_subscribed)
        textViewVideoMuted = findViewById(R.id.participant_video_muted)
        textViewAudioMuted = findViewById(R.id.participant_audio_muted)
        textViewAudioLevel = findViewById(R.id.participant_audio_level)
    }
}
```

## Permissions
<a name="getting-started-pub-sub-android-perms"></a>

To use the camera and microphone, you need to request permissions from the user. We follow a standard permissions flow for this:

```
override fun onStart() {
    super.onStart()
    requestPermission()
}

private val requestPermissionLauncher =
    registerForActivityResult(ActivityResultContracts.RequestMultiplePermissions()) { permissions ->
        if (permissions[Manifest.permission.CAMERA] == true && permissions[Manifest.permission.RECORD_AUDIO] == true) {
            viewModel.permissionGranted() // we will add this later
        }
    }

private val permissions = listOf(
    Manifest.permission.CAMERA,
    Manifest.permission.RECORD_AUDIO,
)

private fun requestPermission() {
    when {
        this.hasPermissions(permissions) -> viewModel.permissionGranted() // we will add this later
        else -> requestPermissionLauncher.launch(permissions.toTypedArray())
    }
}

private fun Context.hasPermissions(permissions: List<String>): Boolean {
    return permissions.all {
        ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
    }
}
```

## App State
<a name="getting-started-pub-sub-android-app-state"></a>

Our application keeps track of participants locally in `MainViewModel.kt`; state is communicated back to `MainActivity` using Kotlin’s [StateFlow](https://kotlinlang.org/api/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines.flow/-state-flow/).

Create a new Kotlin class `MainViewModel`:

```
package com.amazonaws.ivs.realtime.basicrealtime

import android.app.Application
import androidx.lifecycle.AndroidViewModel
import com.amazonaws.ivs.broadcast.Stage
import com.amazonaws.ivs.broadcast.StageRenderer

class MainViewModel(application: Application) : AndroidViewModel(application), Stage.Strategy, StageRenderer {

}
```

In `MainActivity.kt` we manage our view model:

```
import androidx.activity.viewModels

private val viewModel: MainViewModel by viewModels()
```

To use `AndroidViewModel` and these Kotlin `ViewModel` extensions, you’ll need to add the following to the `dependencies` block of your module’s `build.gradle` file:

```
implementation 'androidx.core:core-ktx:1.10.1'
implementation "androidx.activity:activity-ktx:1.7.2"
implementation 'androidx.appcompat:appcompat:1.6.1'
implementation 'com.google.android.material:material:1.10.0'
implementation "androidx.lifecycle:lifecycle-extensions:2.2.0"

def lifecycle_version = "2.6.1"
implementation "androidx.lifecycle:lifecycle-livedata-ktx:$lifecycle_version"
implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:$lifecycle_version"
implementation 'androidx.constraintlayout:constraintlayout:2.1.4'
```

### RecyclerView Adapter
<a name="getting-started-pub-sub-android-app-state-recycler"></a>

We’ll create a simple `RecyclerView.Adapter` subclass to keep track of our participants and update our `RecyclerView` on stage events. But first, we need a class that represents a participant. Create a new Kotlin class `StageParticipant`:

```
package com.amazonaws.ivs.realtime.basicrealtime

import com.amazonaws.ivs.broadcast.Stage
import com.amazonaws.ivs.broadcast.StageStream

class StageParticipant(val isLocal: Boolean, var participantId: String?) {
    var publishState = Stage.PublishState.NOT_PUBLISHED
    var subscribeState = Stage.SubscribeState.NOT_SUBSCRIBED
    var streams = mutableListOf<StageStream>()

    val stableID: String
        get() {
            return if (isLocal) {
                "LocalUser"
            } else {
                requireNotNull(participantId)
            }
        }
}
```
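Note the `stableID` property: it gives the local participant a constant identity even though its `participantId` changes (null before joining, assigned after). A minimal SDK-free sketch of just that identity logic (`ParticipantIdentity` is a hypothetical name used here only for illustration):

```
// SDK-free sketch: only the identity logic from StageParticipant.
class ParticipantIdentity(val isLocal: Boolean, var participantId: String?) {
    val stableID: String
        get() = if (isLocal) "LocalUser" else requireNotNull(participantId)
}

fun main() {
    val local = ParticipantIdentity(isLocal = true, participantId = null)
    val before = local.stableID        // "LocalUser", even with a null ID
    local.participantId = "abc123"     // assigned once the stage is joined
    check(local.stableID == before)    // the item's identity stays stable
}
```

This is why the adapter’s `getItemId` hashes `stableID` rather than `participantId`: the local participant’s row keeps the same item ID across joins and leaves.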

We’ll use this class in the `ParticipantAdapter` class that we’ll create next. We start by defining the class and creating a variable to track the participants:

```
package com.amazonaws.ivs.realtime.basicrealtime

import android.view.LayoutInflater
import android.view.ViewGroup
import androidx.recyclerview.widget.RecyclerView

class ParticipantAdapter : RecyclerView.Adapter<ParticipantAdapter.ViewHolder>() {

    private val participants = mutableListOf<StageParticipant>()

    init {
        // RecyclerView consults getItemId (overridden below) only when stable IDs are enabled.
        setHasStableIds(true)
    }

We also have to define our `RecyclerView.ViewHolder` before implementing the rest of the overrides:

```
class ViewHolder(val participantItem: ParticipantItem) : RecyclerView.ViewHolder(participantItem)
```

Using this, we can implement the standard `RecyclerView.Adapter` overrides:

```
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
    val item = LayoutInflater.from(parent.context)
        .inflate(R.layout.item_stage_participant, parent, false) as ParticipantItem
    return ViewHolder(item)
}

override fun getItemCount(): Int {
    return participants.size
}

override fun getItemId(position: Int): Long =
    participants[position]
        .stableID
        .hashCode()
        .toLong()

override fun onBindViewHolder(holder: ViewHolder, position: Int) {
    return holder.participantItem.bind(participants[position])
}

override fun onBindViewHolder(holder: ViewHolder, position: Int, payloads: MutableList<Any>) {
    val updates = payloads.filterIsInstance<StageParticipant>()
    if (updates.isNotEmpty()) {
        // `bind` is implemented later in this walkthrough.
        updates.forEach { holder.participantItem.bind(it) }
    } else {
        super.onBindViewHolder(holder, position, payloads)
    }
}
```

Finally, we add new methods that we will call from our `MainViewModel` when changes to participants are made. These methods are standard CRUD operations on the adapter.

```
fun participantJoined(participant: StageParticipant) {
    participants.add(participant)
    notifyItemInserted(participants.size - 1)
}

fun participantLeft(participantId: String) {
    val index = participants.indexOfFirst { it.participantId == participantId }
    if (index != -1) {
        participants.removeAt(index)
        notifyItemRemoved(index)
    }
}

fun participantUpdated(participantId: String?, update: (participant: StageParticipant) -> Unit) {
    val index = participants.indexOfFirst { it.participantId == participantId }
    if (index != -1) {
        update(participants[index])
        notifyItemChanged(index, participants[index])
    }
}
```

Back in `MainViewModel` we need to create and hold a reference to this adapter:

```
internal val participantAdapter = ParticipantAdapter()
```

## Stage State
<a name="getting-started-pub-sub-android-views-stage-state"></a>

We also need to track some stage state within `MainViewModel`. Let’s define those properties now:

```
private val _connectionState = MutableStateFlow(Stage.ConnectionState.DISCONNECTED)
val connectionState = _connectionState.asStateFlow()

private var publishEnabled: Boolean = false
    set(value) {
        field = value
        // Because the strategy returns the value of `publishEnabled`, calling `refreshStrategy` is enough to notify the SDK of the change.
        stage?.refreshStrategy()
    }

private var deviceDiscovery: DeviceDiscovery? = null
private var stage: Stage? = null
private var streams = mutableListOf<LocalStageStream>()
```

To see your own preview before joining a stage, we create a local participant immediately:

```
init {
    deviceDiscovery = DeviceDiscovery(application)

    // Create a local participant immediately to render our camera preview and microphone stats
    val localParticipant = StageParticipant(true, null)
    participantAdapter.participantJoined(localParticipant)
}
```

We want to make sure these resources are released when our `ViewModel` is cleared, so we override `onCleared()` right away.

```
override fun onCleared() {
    stage?.release()
    deviceDiscovery?.release()
    deviceDiscovery = null
    super.onCleared()
}
```

Now we populate our local `streams` property as soon as permissions are granted, implementing the `permissionGranted` method that we called earlier:

```
internal fun permissionGranted() {
    val deviceDiscovery = deviceDiscovery ?: return
    streams.clear()
    val devices = deviceDiscovery.listLocalDevices()
    // Camera
    devices
        .filter { it.descriptor.type == Device.Descriptor.DeviceType.CAMERA }
        .maxByOrNull { it.descriptor.position == Device.Descriptor.Position.FRONT }
        ?.let { streams.add(ImageLocalStageStream(it)) }
    // Microphone
    devices
        .filter { it.descriptor.type == Device.Descriptor.DeviceType.MICROPHONE }
        .maxByOrNull { it.descriptor.isDefault }
        ?.let { streams.add(AudioLocalStageStream(it)) }

    stage?.refreshStrategy()

    // Update our local participant with these new streams
    participantAdapter.participantUpdated(null) {
        it.streams.clear()
        it.streams.addAll(streams)
    }
}
```

## Implementing the Stage SDK
<a name="getting-started-pub-sub-android-stage-sdk"></a>

Three core [concepts](android-publish-subscribe.md#android-publish-subscribe-concepts) underlie real-time functionality: stage, strategy, and renderer. The design goal is minimizing the amount of client-side logic necessary to build a working product.

### Stage.Strategy
<a name="getting-started-pub-sub-android-stage-sdk-strategy"></a>

Our `Stage.Strategy` implementation is simple:

```
override fun stageStreamsToPublishForParticipant(
    stage: Stage,
    participantInfo: ParticipantInfo
): MutableList<LocalStageStream> {
    // Return the camera and microphone to be published.
    // This is only called if `shouldPublishFromParticipant` returns true.
    return streams
}

override fun shouldPublishFromParticipant(stage: Stage, participantInfo: ParticipantInfo): Boolean {
    return publishEnabled
}

override fun shouldSubscribeToParticipant(stage: Stage, participantInfo: ParticipantInfo): Stage.SubscribeType {
    // Subscribe to both audio and video for all publishing participants.
    return Stage.SubscribeType.AUDIO_VIDEO
}
```

To summarize: we publish based on our internal `publishEnabled` state, and if we do publish, we publish the streams we collected earlier. Finally, for this sample we always subscribe to other participants, receiving both their audio and video.

### StageRenderer
<a name="getting-started-pub-sub-android-stage-sdk-renderer"></a>

The `StageRenderer` implementation is also fairly simple, though given the number of functions, it contains quite a bit more code. The general approach in this renderer is to update our `ParticipantAdapter` whenever the SDK notifies us of a change to a participant. There are certain scenarios where we handle the local participant differently, because we have decided to manage it ourselves so users can see their camera preview before joining.

```
override fun onError(exception: BroadcastException) {
    Toast.makeText(getApplication(), "onError ${exception.localizedMessage}", Toast.LENGTH_LONG).show()
    Log.e("BasicRealTime", "onError $exception")
}

override fun onConnectionStateChanged(
    stage: Stage,
    connectionState: Stage.ConnectionState,
    exception: BroadcastException?
) {
    _connectionState.value = connectionState
}

override fun onParticipantJoined(stage: Stage, participantInfo: ParticipantInfo) {
    if (participantInfo.isLocal) {
        // If this is the local participant joining the stage, update the participant with a null ID because we
        // manually added that participant when setting up our preview
        participantAdapter.participantUpdated(null) {
            it.participantId = participantInfo.participantId
        }
    } else {
        // If they are not local, add them normally
        participantAdapter.participantJoined(
            StageParticipant(
                participantInfo.isLocal,
                participantInfo.participantId
            )
        )
    }
}

override fun onParticipantLeft(stage: Stage, participantInfo: ParticipantInfo) {
    if (participantInfo.isLocal) {
        // If this is the local participant leaving the stage, update the ID but keep it around because
        // we want to keep the camera preview active
        participantAdapter.participantUpdated(participantInfo.participantId) {
            it.participantId = null
        }
    } else {
        // If they are not local, have them leave normally
        participantAdapter.participantLeft(participantInfo.participantId)
    }
}

override fun onParticipantPublishStateChanged(
    stage: Stage,
    participantInfo: ParticipantInfo,
    publishState: Stage.PublishState
) {
    // Update the publishing state of this participant
    participantAdapter.participantUpdated(participantInfo.participantId) {
        it.publishState = publishState
    }
}

override fun onParticipantSubscribeStateChanged(
    stage: Stage,
    participantInfo: ParticipantInfo,
    subscribeState: Stage.SubscribeState
) {
    // Update the subscribe state of this participant
    participantAdapter.participantUpdated(participantInfo.participantId) {
        it.subscribeState = subscribeState
    }
}

override fun onStreamsAdded(stage: Stage, participantInfo: ParticipantInfo, streams: MutableList<StageStream>) {
    // We don't want to take any action for the local participant because we track those streams locally
    if (participantInfo.isLocal) {
        return
    }
    // For remote participants, add these new streams to that participant's streams array.
    participantAdapter.participantUpdated(participantInfo.participantId) {
        it.streams.addAll(streams)
    }
}

override fun onStreamsRemoved(stage: Stage, participantInfo: ParticipantInfo, streams: MutableList<StageStream>) {
    // We don't want to take any action for the local participant because we track those streams locally
    if (participantInfo.isLocal) {
        return
    }
    // For remote participants, remove these streams from that participant's streams array.
    participantAdapter.participantUpdated(participantInfo.participantId) {
        it.streams.removeAll(streams)
    }
}

override fun onStreamsMutedChanged(
    stage: Stage,
    participantInfo: ParticipantInfo,
    streams: MutableList<StageStream>
) {
    // We don't want to take any action for the local participant because we track those streams locally
    if (participantInfo.isLocal) {
        return
    }
    // For remote participants, notify the adapter that the participant has been updated. There is no need to modify
    // the `streams` property on the `StageParticipant` because it is the same `StageStream` instance. Just
    // query the `isMuted` property again.
    participantAdapter.participantUpdated(participantInfo.participantId) {}
}
```

## Implementing a Custom RecyclerView LayoutManager
<a name="getting-started-pub-sub-android-layout"></a>

Laying out different numbers of participants can be complex. You want them to fill the entire parent view’s frame, but you don’t want to handle each participant configuration independently. To make this easy, we’ll walk through implementing a `RecyclerView.LayoutManager`.

Create another new class, `StageLayoutManager`, which should extend `GridLayoutManager`. This class is designed to calculate the layout for each participant based on the number of participants in a flow-based row/column layout. Each row is the same height as the others, but columns can be different widths per row. See the code comment above the `layouts` variable for a description of how to customize this behavior.

```
package com.amazonaws.ivs.realtime.basicrealtime

import android.content.Context
import androidx.recyclerview.widget.GridLayoutManager
import androidx.recyclerview.widget.RecyclerView

class StageLayoutManager(context: Context?) : GridLayoutManager(context, 6) {

    companion object {
        /**
         * This 2D array describes how the grid of participants should be rendered.
         * The index in the 1st dimension is the number of participants needed to activate that configuration.
         * That is, if there is 1 participant, index 0 is used; if there are 5 participants, index 4 is used.
         *
         * The 2nd dimension is a description of the layout. The length of the array is the number of rows that
         * will exist, and then each number within that array is the number of columns in each row.
         *
         * See the code comments next to each index for concrete examples.
         *
         * This can be customized to fit any layout configuration needed.
         */
        val layouts: List<List<Int>> = listOf(
            // 1 participant
            listOf(1), // 1 row, full width
            // 2 participants
            listOf(1, 1), // 2 rows, all columns are full width
            // 3 participants
            listOf(1, 2), // 2 rows, first row's column is full width then 2nd row's columns are 1/2 width
            // 4 participants
            listOf(2, 2), // 2 rows, all columns are 1/2 width
            // 5 participants
            listOf(1, 2, 2), // 3 rows, first row's column is full width, 2nd and 3rd row's columns are 1/2 width
            // 6 participants
            listOf(2, 2, 2), // 3 rows, all columns are 1/2 width
            // 7 participants
            listOf(2, 2, 3), // 3 rows, 1st and 2nd row's columns are 1/2 width, 3rd row's columns are 1/3rd width
            // 8 participants
            listOf(2, 3, 3),
            // 9 participants
            listOf(3, 3, 3),
            // 10 participants
            listOf(2, 3, 2, 3),
            // 11 participants
            listOf(2, 3, 3, 3),
            // 12 participants
            listOf(3, 3, 3, 3),
        )
    }

    init {
        spanSizeLookup = object : SpanSizeLookup() {
            override fun getSpanSize(position: Int): Int {
                if (itemCount <= 0) {
                    return 1
                }
                // Calculate the row we're in
                val config = layouts[itemCount - 1]
                var row = 0
                var curPosition = position
                while (curPosition - config[row] >= 0) {
                    curPosition -= config[row]
                    row++
                }
                // spanCount == max spans, config[row] = number of columns we want
                // So spanCount / config[row] would be something like 6 / 3 if we want 3 columns.
                // So this will take up 2 spans, with a max of 6 is 1/3rd of the view.
                return spanCount / config[row]
            }
        }
    }

    override fun onLayoutChildren(recycler: RecyclerView.Recycler?, state: RecyclerView.State?) {
        if (itemCount <= 0 || state?.isPreLayout == true) return

        val parentHeight = height
        val itemHeight = parentHeight / layouts[itemCount - 1].size // height divided by number of rows.

        // Set the height of each view based on how many rows exist for the current participant count.
        for (i in 0 until childCount) {
            val child = getChildAt(i) ?: continue
            val layoutParams = child.layoutParams as RecyclerView.LayoutParams
            if (layoutParams.height != itemHeight) {
                layoutParams.height = itemHeight
                child.layoutParams = layoutParams
            }
        }
        // After we set the height for all our views, call super.
        // This works because our RecyclerView cannot scroll and all views are always visible with stable IDs.
        super.onLayoutChildren(recycler, state)
    }

    override fun canScrollVertically(): Boolean = false
    override fun canScrollHorizontally(): Boolean = false
}
```
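To sanity-check the span arithmetic, here is a small SDK-free sketch (plain Kotlin; `spansFor` is a hypothetical helper used only for illustration) that reproduces the `getSpanSize` logic for the first few layouts:

```
// SDK-free sketch of StageLayoutManager's getSpanSize: for a given participant
// count, compute the span size of every position with a span count of 6.
val layouts: List<List<Int>> = listOf(
    listOf(1),       // 1 participant: 1 full-width row
    listOf(1, 1),    // 2 participants: 2 full-width rows
    listOf(1, 2),    // 3 participants
    listOf(2, 2),    // 4 participants
    listOf(1, 2, 2), // 5 participants
)

fun spansFor(itemCount: Int, spanCount: Int = 6): List<Int> {
    val config = layouts[itemCount - 1]
    return (0 until itemCount).map { position ->
        // Walk through rows until we find the row containing `position`.
        var row = 0
        var curPosition = position
        while (curPosition - config[row] >= 0) {
            curPosition -= config[row]
            row++
        }
        spanCount / config[row] // e.g., 6 / 2 = 3 spans for a half-width column
    }
}

fun main() {
    // 5 participants -> rows of 1, 2, 2 columns: one full-width item (6 spans),
    // then four half-width items (3 spans each).
    println(spansFor(5)) // [6, 3, 3, 3, 3]
}
```

With a span count of 6, a row of 1 column yields 6 spans (full width), 2 columns yield 3 spans each, and 3 columns yield 2 spans each, which is exactly what `StageLayoutManager` returns for the corresponding rows.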

Back in `MainActivity.kt` we need to set the adapter and layout manager for our `RecyclerView`:

```
// In onCreate after setting recyclerView.
recyclerView.layoutManager = StageLayoutManager(this)
recyclerView.adapter = viewModel.participantAdapter
```

## Hooking Up UI Actions
<a name="getting-started-pub-sub-android-actions"></a>

We are getting close; there are just a few UI actions that we need to hook up.

First we’ll have our `MainActivity` observe the `StateFlow` changes from `MainViewModel`:

```
// At the end of your onCreate method
lifecycleScope.launch {
    repeatOnLifecycle(Lifecycle.State.CREATED) {
        viewModel.connectionState.collect { state ->
            buttonJoin.setText(if (state == Stage.ConnectionState.DISCONNECTED) R.string.join else R.string.leave)
            textViewState.text = getString(R.string.state, state.name)
        }
    }
}
```

Next we add listeners to our Join button and Publish checkbox:

```
buttonJoin.setOnClickListener {
    viewModel.joinStage(editTextToken.text.toString())
}
checkboxPublish.setOnCheckedChangeListener { _, isChecked ->
    viewModel.setPublishEnabled(isChecked)
}
```

Both of the above call functionality in our `MainViewModel`, which we implement now:

```
internal fun joinStage(token: String) {
    if (_connectionState.value != Stage.ConnectionState.DISCONNECTED) {
        // If we're already connected to a stage, leave it.
        stage?.leave()
    } else {
        if (token.isEmpty()) {
            Toast.makeText(getApplication(), "Empty Token", Toast.LENGTH_SHORT).show()
            return
        }
        try {
            // Destroy the old stage first before creating a new one.
            stage?.release()
            val stage = Stage(getApplication(), token, this)
            stage.addRenderer(this)
            stage.join()
            this.stage = stage
        } catch (e: BroadcastException) {
            Toast.makeText(getApplication(), "Failed to join stage ${e.localizedMessage}", Toast.LENGTH_LONG).show()
            e.printStackTrace()
        }
    }
}

internal fun setPublishEnabled(enabled: Boolean) {
    publishEnabled = enabled
}
```

## Rendering the Participants
<a name="getting-started-pub-sub-android-participants"></a>

Finally, we need to render the data we receive from the SDK onto the participant item that we created earlier. We already have the `RecyclerView` logic finished, so we just need to implement the `bind` API in `ParticipantItem`.

We’ll start by adding the empty function and then walk through it step by step:

```
fun bind(participant: StageParticipant) {

}
```

First we’ll handle the easy state: the participant ID, publish state, and subscribe state. For these, we just update our `TextViews` directly:

```
val participantId = if (participant.isLocal) {
    "You (${participant.participantId ?: "Disconnected"})"
} else {
    participant.participantId
}
textViewParticipantId.text = participantId
textViewPublish.text = participant.publishState.name
textViewSubscribe.text = participant.subscribeState.name
```

Next we’ll update the audio and video muted states. To get the muted state, we need to find the `ImageDevice` and `AudioDevice` from the streams array. To optimize performance, we remember the last attached device IDs.

```
// This belongs outside the `bind` API.
private var imageDeviceUrn: String? = null
private var audioDeviceUrn: String? = null

// This belongs inside the `bind` API.
val newImageStream = participant
    .streams
    .firstOrNull { it.device is ImageDevice }
textViewVideoMuted.text = if (newImageStream != null) {
    if (newImageStream.muted) "Video muted" else "Video not muted"
} else {
    "No video stream"
}

val newAudioStream = participant
    .streams
    .firstOrNull { it.device is AudioDevice }
textViewAudioMuted.text = if (newAudioStream != null) {
    if (newAudioStream.muted) "Audio muted" else "Audio not muted"
} else {
    "No audio stream"
}
```

Finally, we want to render a preview for the `ImageDevice`:

```
if (newImageStream?.device?.descriptor?.urn != imageDeviceUrn) {
    // If the device has changed, remove all subviews from the preview container
    previewContainer.removeAllViews()
    (newImageStream?.device as? ImageDevice)?.let {
        val preview = it.getPreviewView(BroadcastConfiguration.AspectMode.FIT)
        previewContainer.addView(preview)
        preview.layoutParams = FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT,
            FrameLayout.LayoutParams.MATCH_PARENT
        )
    }
}
imageDeviceUrn = newImageStream?.device?.descriptor?.urn
```

And we display audio stats from the `AudioDevice`:

```
if (newAudioStream?.device?.descriptor?.urn != audioDeviceUrn) {
    (newAudioStream?.device as? AudioDevice)?.let {
        it.setStatsCallback { _, rms ->
            textViewAudioLevel.text = "Audio Level: ${rms.roundToInt()} dB"
        }
    }
}
audioDeviceUrn = newAudioStream?.device?.descriptor?.urn
```

# Publish & Subscribe with the IVS iOS Broadcast SDK
<a name="getting-started-pub-sub-ios"></a>

This section takes you through the steps involved in publishing and subscribing to a stage using your iOS app.

## Create Views
<a name="getting-started-pub-sub-ios-views"></a>

We start by using the auto-created `ViewController.swift` file to import `AmazonIVSBroadcast` and then add some `@IBOutlets` to link:

```
import AmazonIVSBroadcast

class ViewController: UIViewController {

    @IBOutlet private var textFieldToken: UITextField!
    @IBOutlet private var buttonJoin: UIButton!
    @IBOutlet private var labelState: UILabel!
    @IBOutlet private var switchPublish: UISwitch!
    @IBOutlet private var collectionViewParticipants: UICollectionView!
```

Now we create those views and link them up in `Main.storyboard`. Here is the view structure that we’ll use:

![\[Use Main.storyboard to create an iOS view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_1.png)


For AutoLayout configuration, we need to customize three views. The first view is **Collection View Participants** (a `UICollectionView`). Bind **Leading**, **Trailing**, and **Bottom** to **Safe Area**. Also bind **Top** to **Controls Container**.

![\[Customize iOS Collection View Participants view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_2.png)


The second view is **Controls Container**. Bind **Leading**, **Trailing**, and **Top** to **Safe Area**:

![\[Customize iOS Controls Container view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_3.png)


The third and last view is **Vertical Stack View**. Bind **Top**, **Leading**, **Trailing**, and **Bottom** to **Superview**. For styling, set the spacing to 8 instead of 0.

![\[Customize iOS Vertical Stack view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_4.png)


The **UIStackViews** will handle the layout of the remaining views. For all three **UIStackViews**, use **Fill** as the **Alignment** and **Distribution**.

![\[Customize remaining iOS views with UIStackViews.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_5.png)


Finally, let’s link these views to our `ViewController`. From above, map the following views:
+ **Text Field Join** binds to `textFieldToken`.
+ **Button Join** binds to `buttonJoin`.
+ **Label State** binds to `labelState`.
+ **Switch Publish** binds to `switchPublish`.
+ **Collection View Participants** binds to `collectionViewParticipants`.

Also use this time to set the `dataSource` of the **Collection View Participants** item to the owning `ViewController`:

![\[Set the dataSource of Collection View Participants for iOS app.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_6.png)


Now we create the `UICollectionViewCell` subclass in which to render the participants. Start by creating a new **Cocoa Touch Class** file:

![\[Create a UICollectionViewCell to render iOS real-time participants.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_7.png)


Name it `ParticipantCollectionViewCell` and make it a subclass of `UICollectionViewCell` in Swift. We start in the Swift file again, creating our `@IBOutlets` to link:

```
import AmazonIVSBroadcast

class ParticipantCollectionViewCell: UICollectionViewCell {

    @IBOutlet private var viewPreviewContainer: UIView!
    @IBOutlet private var labelParticipantId: UILabel!
    @IBOutlet private var labelSubscribeState: UILabel!
    @IBOutlet private var labelPublishState: UILabel!
    @IBOutlet private var labelVideoMuted: UILabel!
    @IBOutlet private var labelAudioMuted: UILabel!
    @IBOutlet private var labelAudioVolume: UILabel!
```

In the associated XIB file, create this view hierarchy:

![\[Create iOS view hierarchy in associated XIB file.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_8.png)


For AutoLayout, we’ll modify three views again. The first view is **View Preview Container**. Set **Trailing**, **Leading**, **Top**, and **Bottom** to **Participant Collection View Cell**.

![\[Customize iOS View Preview Container view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_9.png)


The second view is **View**. Set **Leading** and **Top** to **Participant Collection View Cell** and change the value to 4.

![\[Customize iOS View view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_10.png)


The third view is **Stack View**. Set **Trailing**, **Leading**, **Top**, and **Bottom** to **Superview** and change the value to 4.

![\[Customize iOS Stack View view.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_11.png)


## Permissions and Idle Timer
<a name="getting-started-pub-sub-ios-perms"></a>

Going back to our `ViewController`, we will disable the system idle timer to prevent the device from going to sleep while our application is being used:

```
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Prevent the screen from turning off during a call.
    UIApplication.shared.isIdleTimerDisabled = true
}

override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    UIApplication.shared.isIdleTimerDisabled = false
}
```

Next we request camera and microphone permissions from the system:

```
private func checkPermissions() {
    checkOrGetPermission(for: .video) { [weak self] granted in
        guard granted else {
            print("Video permission denied")
            return
        }
        self?.checkOrGetPermission(for: .audio) { [weak self] granted in
            guard granted else {
                print("Audio permission denied")
                return
            }
            self?.setupLocalUser() // we will cover this later
        }
    }
}

private func checkOrGetPermission(for mediaType: AVMediaType, _ result: @escaping (Bool) -> Void) {
    func mainThreadResult(_ success: Bool) {
        DispatchQueue.main.async {
            result(success)
        }
    }
    switch AVCaptureDevice.authorizationStatus(for: mediaType) {
    case .authorized: mainThreadResult(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: mediaType) { granted in
            mainThreadResult(granted)
        }
    case .denied, .restricted: mainThreadResult(false)
    @unknown default: mainThreadResult(false)
    }
}
```
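Note that requesting camera or microphone access on iOS also requires usage-description entries in the app’s `Info.plist`; without them, the system terminates the app when `requestAccess` is called. The description strings below are placeholders — use text appropriate for your app:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to publish your video to the stage.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to publish your audio to the stage.</string>
```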

## App State
<a name="getting-started-pub-sub-ios-app-state"></a>

We need to configure our `collectionViewParticipants`, disabling scrolling and registering the cell XIB file that we created earlier:

```
override func viewDidLoad() {
    super.viewDidLoad()
    // We render everything to exactly the frame, so don't allow scrolling.
    collectionViewParticipants.isScrollEnabled = false
    collectionViewParticipants.register(UINib(nibName: "ParticipantCollectionViewCell", bundle: .main), forCellWithReuseIdentifier: "ParticipantCollectionViewCell")
}
```

To represent each participant, we create a simple struct called `StageParticipant`. This can be included in the `ViewController.swift` file, or a new file can be created.

```
import Foundation
import AmazonIVSBroadcast

struct StageParticipant {
    let isLocal: Bool
    var participantId: String?
    var publishState: IVSParticipantPublishState = .notPublished
    var subscribeState: IVSParticipantSubscribeState = .notSubscribed
    var streams: [IVSStageStream] = []

    init(isLocal: Bool, participantId: String?) {
        self.isLocal = isLocal
        self.participantId = participantId
    }
}
```

To track those participants, we keep an array of them as a private property in our `ViewController`:

```
private var participants = [StageParticipant]()
```

This property will be used to power our `UICollectionViewDataSource` that was linked from the storyboard earlier:

```
extension ViewController: UICollectionViewDataSource {

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return participants.count
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "ParticipantCollectionViewCell", for: indexPath) as? ParticipantCollectionViewCell {
            cell.set(participant: participants[indexPath.row])
            return cell
        } else {
            fatalError("Couldn't load custom cell type 'ParticipantCollectionViewCell'")
        }
    }

}
```

To see your own preview before joining a stage, we create a local participant immediately:

```
override func viewDidLoad() {
    /* existing UICollectionView code */
    participants.append(StageParticipant(isLocal: true, participantId: nil))
}
```

This results in a participant cell being rendered immediately once the app is running, representing the local participant.

Users want to be able to see themselves before joining a stage, so next we implement the `setupLocalUser()` method that gets called from the permissions-handling code earlier. We store the camera and microphone references as `IVSLocalStageStream` objects.

```
private var streams = [IVSLocalStageStream]()
private let deviceDiscovery = IVSDeviceDiscovery()

private func setupLocalUser() {
    // Gather our camera and microphone once permissions have been granted
    let devices = deviceDiscovery.listLocalDevices()
    streams.removeAll()
    if let camera = devices.compactMap({ $0 as? IVSCamera }).first {
        streams.append(IVSLocalStageStream(device: camera))
        // Use a front camera if available.
        if let frontSource = camera.listAvailableInputSources().first(where: { $0.position == .front }) {
            camera.setPreferredInputSource(frontSource)
        }
    }
    if let mic = devices.compactMap({ $0 as? IVSMicrophone }).first {
        streams.append(IVSLocalStageStream(device: mic))
    }
    participants[0].streams = streams
    participantsChanged(index: 0, changeType: .updated)
}
```

Here we’ve found the device’s camera and microphone through the SDK, stored them in our local `streams` array, and then assigned that array to the first participant (the local participant we created earlier). Finally we call `participantsChanged` with an `index` of 0 and a `changeType` of `.updated`. That helper function updates our `UICollectionView` with nice animations. Here’s what it looks like:

```
private func participantsChanged(index: Int, changeType: ChangeType) {
    switch changeType {
    case .joined:
        collectionViewParticipants?.insertItems(at: [IndexPath(item: index, section: 0)])
    case .updated:
        // Instead of doing reloadItems, just grab the cell and update it ourselves. It saves a create/destroy of a cell
        // and more importantly fixes some UI flicker. We disable scrolling so the index path per cell
        // never changes.
        if let cell = collectionViewParticipants?.cellForItem(at: IndexPath(item: index, section: 0)) as? ParticipantCollectionViewCell {
            cell.set(participant: participants[index])
        }
    case .left:
        collectionViewParticipants?.deleteItems(at: [IndexPath(item: index, section: 0)])
    }
}
```

Don’t worry about `cell.set` yet; we’ll get to that later, but that’s where we will render the cell’s contents based on the participant.

`ChangeType` is a simple enum:

```
enum ChangeType {
    case joined, updated, left
}
```

Finally, we want to keep track of whether the stage is connected. We use a simple `Bool` property for that; its `didSet` observer updates the UI whenever the value changes.

```
private var connectingOrConnected = false {
    didSet {
        buttonJoin.setTitle(connectingOrConnected ? "Leave" : "Join", for: .normal)
        buttonJoin.tintColor = connectingOrConnected ? .systemRed : .systemBlue
    }
}
```

## Implement the Stage SDK
<a name="getting-started-pub-sub-ios-stage-sdk"></a>

Three core [concepts](ios-publish-subscribe.md#ios-publish-subscribe-concepts) underlie real-time functionality: stage, strategy, and renderer. The design goal is minimizing the amount of client-side logic necessary to build a working product.

### IVSStageStrategy
<a name="getting-started-pub-sub-ios-stage-sdk-strategy"></a>

Our `IVSStageStrategy` implementation is simple:

```
extension ViewController: IVSStageStrategy {
    func stage(_ stage: IVSStage, streamsToPublishForParticipant participant: IVSParticipantInfo) -> [IVSLocalStageStream] {
        // Return the camera and microphone to be published.
        // This is only called if `shouldPublishParticipant` returns true.
        return streams
    }

    func stage(_ stage: IVSStage, shouldPublishParticipant participant: IVSParticipantInfo) -> Bool {
        // Our publish status is based directly on the UISwitch view
        return switchPublish.isOn
    }

    func stage(_ stage: IVSStage, shouldSubscribeToParticipant participant: IVSParticipantInfo) -> IVSStageSubscribeType {
        // Subscribe to both audio and video for all publishing participants.
        return .audioVideo
    }
}
```

To summarize: we publish only when the publish switch is in the “on” position, and when publishing we use the streams that we collected earlier. Finally, for this sample, we always subscribe to other participants, receiving both their audio and video.

### IVSStageRenderer
<a name="getting-started-pub-sub-ios-stage-sdk-renderer"></a>

The `IVSStageRenderer` implementation is also fairly simple, though given the number of functions, it contains quite a bit more code. The general approach in this renderer is to update our `participants` array whenever the SDK notifies us of a change to a participant. There are certain scenarios where we handle the local participant differently, because we have decided to manage it ourselves so users can see their camera preview before joining.

```
extension ViewController: IVSStageRenderer {

    func stage(_ stage: IVSStage, didChange connectionState: IVSStageConnectionState, withError error: Error?) {
        labelState.text = connectionState.text
        connectingOrConnected = connectionState != .disconnected
    }

    func stage(_ stage: IVSStage, participantDidJoin participant: IVSParticipantInfo) {
        if participant.isLocal {
            // If this is the local participant joining the Stage, update the first participant in our array because we
            // manually added that participant when setting up our preview
            participants[0].participantId = participant.participantId
            participantsChanged(index: 0, changeType: .updated)
        } else {
            // If they are not local, add them to the array as a newly joined participant.
            participants.append(StageParticipant(isLocal: false, participantId: participant.participantId))
            participantsChanged(index: (participants.count - 1), changeType: .joined)
        }
    }

    func stage(_ stage: IVSStage, participantDidLeave participant: IVSParticipantInfo) {
        if participant.isLocal {
            // If this is the local participant leaving the Stage, update the first participant in our array because
            // we want to keep the camera preview active
            participants[0].participantId = nil
            participantsChanged(index: 0, changeType: .updated)
        } else {
            // If they are not local, find their index and remove them from the array.
            if let index = participants.firstIndex(where: { $0.participantId == participant.participantId }) {
                participants.remove(at: index)
                participantsChanged(index: index, changeType: .left)
            }
        }
    }

    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange publishState: IVSParticipantPublishState) {
        // Update the publishing state of this participant
        mutatingParticipant(participant.participantId) { data in
            data.publishState = publishState
        }
    }

    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChange subscribeState: IVSParticipantSubscribeState) {
        // Update the subscribe state of this participant
        mutatingParticipant(participant.participantId) { data in
            data.subscribeState = subscribeState
        }
    }

    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didChangeMutedStreams streams: [IVSStageStream]) {
        // We don't want to take any action for the local participant because we track those streams locally
        if participant.isLocal { return }
        // For remote participants, notify the UICollectionView that they have updated. There is no need to modify
        // the `streams` property on the `StageParticipant` because it is the same `IVSStageStream` instance. Just
        // query the `isMuted` property again.
        if let index = participants.firstIndex(where: { $0.participantId == participant.participantId }) {
            participantsChanged(index: index, changeType: .updated)
        }
    }

    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didAdd streams: [IVSStageStream]) {
        // We don't want to take any action for the local participant because we track those streams locally
        if participant.isLocal { return }
        // For remote participants, add these new streams to that participant's streams array.
        mutatingParticipant(participant.participantId) { data in
            data.streams.append(contentsOf: streams)
        }
    }

    func stage(_ stage: IVSStage, participant: IVSParticipantInfo, didRemove streams: [IVSStageStream]) {
        // We don't want to take any action for the local participant because we track those streams locally
        if participant.isLocal { return }
        // For remote participants, remove these streams from that participant's streams array.
        mutatingParticipant(participant.participantId) { data in
            let oldUrns = streams.map { $0.device.descriptor().urn }
            data.streams.removeAll(where: { stream in
                return oldUrns.contains(stream.device.descriptor().urn)
            })
        }
    }

    // A helper function to find a participant by its ID, mutate that participant, and then update the UICollectionView accordingly.
    private func mutatingParticipant(_ participantId: String?, modifier: (inout StageParticipant) -> Void) {
        guard let index = participants.firstIndex(where: { $0.participantId == participantId }) else {
            fatalError("Something is out of sync, investigate if this was a sample app or SDK issue.")
        }

        var participant = participants[index]
        modifier(&participant)
        participants[index] = participant
        participantsChanged(index: index, changeType: .updated)
    }
}
```

This code uses an extension to convert the connection state into human-friendly text:

```
extension IVSStageConnectionState {
    var text: String {
        switch self {
        case .disconnected: return "Disconnected"
        case .connecting: return "Connecting"
        case .connected: return "Connected"
        @unknown default: fatalError()
        }
    }
}
```

## Implementing a Custom UICollectionViewLayout
<a name="getting-started-pub-sub-ios-layout"></a>

Laying out different numbers of participants can be complex. You want them to take up the entire parent view’s frame but you don’t want to handle each participant configuration independently. To make this easy, we’ll walk through implementing a `UICollectionViewLayout`.

Create another new file, `ParticipantCollectionViewLayout.swift`, containing a subclass of `UICollectionViewLayout`. This class uses another class called `StageLayoutCalculator`, which we’ll cover soon; it receives the calculated frame for each participant and then generates the necessary `UICollectionViewLayoutAttributes` objects.

```
import Foundation
import UIKit

/**
 Code modified from https://developer.apple.com/documentation/uikit/views_and_controls/collection_views/layouts/customizing_collection_view_layouts?language=objc
 */
class ParticipantCollectionViewLayout: UICollectionViewLayout {

    private let layoutCalculator = StageLayoutCalculator()

    private var contentBounds = CGRect.zero
    private var cachedAttributes = [UICollectionViewLayoutAttributes]()

    override func prepare() {
        super.prepare()

        guard let collectionView = collectionView else { return }

        cachedAttributes.removeAll()
        contentBounds = CGRect(origin: .zero, size: collectionView.bounds.size)

        layoutCalculator.calculateFrames(participantCount: collectionView.numberOfItems(inSection: 0),
                                         width: collectionView.bounds.size.width,
                                         height: collectionView.bounds.size.height,
                                         padding: 4)
        .enumerated()
        .forEach { (index, frame) in
            let attributes = UICollectionViewLayoutAttributes(forCellWith: IndexPath(item: index, section: 0))
            attributes.frame = frame
            cachedAttributes.append(attributes)
            contentBounds = contentBounds.union(frame)
        }
    }

    override var collectionViewContentSize: CGSize {
        return contentBounds.size
    }

    override func shouldInvalidateLayout(forBoundsChange newBounds: CGRect) -> Bool {
        guard let collectionView = collectionView else { return false }
        return !newBounds.size.equalTo(collectionView.bounds.size)
    }

    override func layoutAttributesForItem(at indexPath: IndexPath) -> UICollectionViewLayoutAttributes? {
        return cachedAttributes[indexPath.item]
    }

    override func layoutAttributesForElements(in rect: CGRect) -> [UICollectionViewLayoutAttributes]? {
        var attributesArray = [UICollectionViewLayoutAttributes]()

        // Find any cell that sits within the query rect.
        guard let lastIndex = cachedAttributes.indices.last, let firstMatchIndex = binSearch(rect, start: 0, end: lastIndex) else {
            return attributesArray
        }

        // Starting from the match, loop up and down through the array until all the attributes
        // have been added within the query rect.
        for attributes in cachedAttributes[..<firstMatchIndex].reversed() {
            guard attributes.frame.maxY >= rect.minY else { break }
            attributesArray.append(attributes)
        }

        for attributes in cachedAttributes[firstMatchIndex...] {
            guard attributes.frame.minY <= rect.maxY else { break }
            attributesArray.append(attributes)
        }

        return attributesArray
    }

    // Perform a binary search on the cached attributes array.
    func binSearch(_ rect: CGRect, start: Int, end: Int) -> Int? {
        if end < start { return nil }

        let mid = (start + end) / 2
        let attr = cachedAttributes[mid]

        if attr.frame.intersects(rect) {
            return mid
        } else {
            if attr.frame.maxY < rect.minY {
                return binSearch(rect, start: (mid + 1), end: end)
            } else {
                return binSearch(rect, start: start, end: (mid - 1))
            }
        }
    }
}
```

The more interesting piece is the `StageLayoutCalculator.swift` class. It calculates the frames for each participant in a flow-based row/column layout, based on the number of participants. Each row is the same height as the others, but the columns can have different widths per row. See the code comment above the `layouts` variable for a description of how to customize this behavior.

```
import Foundation
import UIKit

class StageLayoutCalculator {

    /// This 2D array describes how the grid of participants should be rendered.
    /// The index of the 1st dimension is the number of participants needed to activate that configuration,
    /// meaning if there is 1 participant, index 0 will be used; if there are 5 participants, index 4 will be used.
    ///
    /// The 2nd dimension is a description of the layout. The length of the array is the number of rows that
    /// will exist, and then each number within that array is the number of columns in each row.
    ///
    /// See the code comments next to each index for concrete examples.
    ///
    /// This can be customized to fit any layout configuration needed.
    private let layouts: [[Int]] = [
        // 1 participant
        [ 1 ], // 1 row, full width
        // 2 participants
        [ 1, 1 ], // 2 rows, all columns are full width
        // 3 participants
        [ 1, 2 ], // 2 rows, first row's column is full width then 2nd row's columns are 1/2 width
        // 4 participants
        [ 2, 2 ], // 2 rows, all columns are 1/2 width
        // 5 participants
        [ 1, 2, 2 ], // 3 rows, first row's column is full width, 2nd and 3rd row's columns are 1/2 width
        // 6 participants
        [ 2, 2, 2 ], // 3 rows, all columns are 1/2 width
        // 7 participants
        [ 2, 2, 3 ], // 3 rows, 1st and 2nd row's columns are 1/2 width, 3rd row's columns are 1/3rd width
        // 8 participants
        [ 2, 3, 3 ],
        // 9 participants
        [ 3, 3, 3 ],
        // 10 participants
        [ 2, 3, 2, 3 ],
        // 11 participants
        [ 2, 3, 3, 3 ],
        // 12 participants
        [ 3, 3, 3, 3 ],
    ]

    // Given a frame (this could be for a UICollectionView, or a Broadcast Mixer's canvas), calculate the frames for each
    // participant, with optional padding.
    func calculateFrames(participantCount: Int, width: CGFloat, height: CGFloat, padding: CGFloat) -> [CGRect] {
        if participantCount > layouts.count {
            fatalError("Only \(layouts.count) participants are supported at this time")
        }
        if participantCount == 0 {
            return []
        }
        var currentIndex = 0
        var lastFrame: CGRect = .zero

        // If the height is less than the width, the rows and columns will be flipped.
        // Meaning for 6 participants, there will be 2 rows of 3 columns each.
        let isVertical = height > width

        let halfPadding = padding / 2.0

        let layout = layouts[participantCount - 1] // 1 participant is in index 0, so `-1`.
        let rowHeight = (isVertical ? height : width) / CGFloat(layout.count)

        var frames = [CGRect]()
        for row in 0 ..< layout.count {
            // layout[row] is the number of columns in a layout
            let itemWidth = (isVertical ? width : height) / CGFloat(layout[row])
            let segmentFrame = CGRect(x: (isVertical ? 0 : lastFrame.maxX) + halfPadding,
                                      y: (isVertical ? lastFrame.maxY : 0) + halfPadding,
                                      width: (isVertical ? itemWidth : rowHeight) - padding,
                                      height: (isVertical ? rowHeight : itemWidth) - padding)

            for column in 0 ..< layout[row] {
                var frame = segmentFrame
                if isVertical {
                    frame.origin.x = (itemWidth * CGFloat(column)) + halfPadding
                } else {
                    frame.origin.y = (itemWidth * CGFloat(column)) + halfPadding
                }
                frames.append(frame)
                currentIndex += 1
            }

            lastFrame = segmentFrame
            lastFrame.origin.x += halfPadding
            lastFrame.origin.y += halfPadding
        }
        return frames
    }

}
```
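As a quick sanity check on the math, here is how the calculator might be exercised (a sketch; the container size and padding are illustrative):

```swift
let calculator = StageLayoutCalculator()

// Landscape container (400x300): `isVertical` is false, so the row/column
// roles flip, and 3 participants render as one full-height cell on the
// left plus two stacked cells on the right.
let frames = calculator.calculateFrames(participantCount: 3,
                                        width: 400,
                                        height: 300,
                                        padding: 4)
// frames.count == 3; frames[0] spans the full height of the left half,
// while frames[1] and frames[2] split the right half vertically.
```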

Back in `Main.storyboard`, be sure to set the layout class for the `UICollectionView` to the class we just created:

![\[Xcode interface showing storyboard with UICollectionView and its layout settings.\]](http://docs.aws.amazon.com/ivs/latest/RealTimeUserGuide/images/Publish_iOS_12.png)


## Hooking Up UI Actions
<a name="getting-started-pub-sub-ios-actions"></a>

We are getting close; there are a few `IBAction`s that we need to create.

First we’ll handle the join button. It responds differently based on the value of `connectingOrConnected`. When it is already connected, it just leaves the stage. If it is disconnected, it reads the text from the token `UITextField` and creates a new `IVSStage` with that text. Then we add our `ViewController` as the `strategy`, `errorDelegate`, and renderer for the `IVSStage`, and finally we join the stage asynchronously.

```
@IBAction private func joinTapped(_ sender: UIButton) {
    if connectingOrConnected {
        // If we're already connected to a Stage, leave it.
        stage?.leave()
    } else {
        guard let token = textFieldToken.text else {
            print("No token")
            return
        }
        // Hide the keyboard after tapping Join
        textFieldToken.resignFirstResponder()
        do {
            // Destroy the old Stage first before creating a new one.
            self.stage = nil
            let stage = try IVSStage(token: token, strategy: self)
            stage.errorDelegate = self
            stage.addRenderer(self)
            try stage.join()
            self.stage = stage
        } catch {
            print("Failed to join stage - \(error)")
        }
    }
}
```
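The code above references a `stage` property and assigns `self` as the `errorDelegate`, neither of which appears elsewhere in this walkthrough. A minimal sketch of both follows; the delegate method signature here matches the SDK’s sample code, but verify it against the iOS Broadcast SDK reference for your SDK version:

```swift
// Stored alongside the other ViewController properties.
private var stage: IVSStage?

// Minimal error handling (sketch): log SDK errors as they are emitted.
extension ViewController: IVSErrorDelegate {
    func source(_ source: IVSErrorSource, didEmitError error: Error) {
        print("Stage error from \(source): \(error)")
    }
}
```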

The other UI action we need to hook up is the publish switch:

```
@IBAction private func publishToggled(_ sender: UISwitch) {
    // Because the strategy returns the value of `switchPublish.isOn`, just call `refreshStrategy`.
    stage?.refreshStrategy()
}
```

## Rendering the Participants
<a name="getting-started-pub-sub-ios-participants"></a>

Finally, we need to render the data we receive from the SDK onto the participant cell that we created earlier. We already have the `UICollectionView` logic finished, so we just need to implement the `set` API in `ParticipantCollectionViewCell.swift`.

We’ll start by adding an empty `set(participant:)` function and then walk through it step by step:

```
func set(participant: StageParticipant) {
   
}
```

First we handle the easy state: the participant ID, publish state, and subscribe state. For these, we just update our `UILabels` directly:

```
labelParticipantId.text = participant.isLocal ? "You (\(participant.participantId ?? "Disconnected"))" : participant.participantId
labelPublishState.text = participant.publishState.text
labelSubscribeState.text = participant.subscribeState.text
```

The text properties of the publish and subscribe enums come from local extensions:

```
extension IVSParticipantPublishState {
    var text: String {
        switch self {
        case .notPublished: return "Not Published"
        case .attemptingPublish: return "Attempting to Publish"
        case .published: return "Published"
        @unknown default: fatalError()
        }
    }
}

extension IVSParticipantSubscribeState {
    var text: String {
        switch self {
        case .notSubscribed: return "Not Subscribed"
        case .attemptingSubscribe: return "Attempting to Subscribe"
        case .subscribed: return "Subscribed"
        @unknown default: fatalError()
        }
    }
}
```

Next we update the audio and video muted states. To get the muted states we need to find the `IVSImageDevice` and `IVSAudioDevice` from the `streams` array. To optimize performance, we will remember the last devices attached.

```
// This belongs outside `set(participant:)`
private var registeredStreams: Set<IVSStageStream> = []
private var imageDevice: IVSImageDevice? {
    return registeredStreams.lazy.compactMap { $0.device as? IVSImageDevice }.first
}
private var audioDevice: IVSAudioDevice? {
    return registeredStreams.lazy.compactMap { $0.device as? IVSAudioDevice }.first
}

// This belongs inside `set(participant:)`
let existingAudioStream = registeredStreams.first { $0.device is IVSAudioDevice }
let existingImageStream = registeredStreams.first { $0.device is IVSImageDevice }

registeredStreams = Set(participant.streams)

let newAudioStream = participant.streams.first { $0.device is IVSAudioDevice }
let newImageStream = participant.streams.first { $0.device is IVSImageDevice }

// `isMuted != false` covers the stream not existing, as well as being muted.
labelVideoMuted.text = "Video Muted: \(newImageStream?.isMuted != false)"
labelAudioMuted.text = "Audio Muted: \(newAudioStream?.isMuted != false)"
```

Finally we want to render a preview for the `imageDevice` and display audio stats from the `audioDevice`:

```
if existingImageStream !== newImageStream {
    // The image stream has changed
    updatePreview() // We’ll cover this next
}

if existingAudioStream !== newAudioStream {
    (existingAudioStream?.device as? IVSAudioDevice)?.setStatsCallback(nil)
    audioDevice?.setStatsCallback( { [weak self] stats in
        self?.labelAudioVolume.text = String(format: "Audio Level: %.0f dB", stats.rms)
    })
    // When the audio stream changes, it will take some time to receive new stats. Reset the value temporarily.
    self.labelAudioVolume.text = "Audio Level: -100 dB"
}
```
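One optional hardening step (an assumption on our part, not in the original sample): because `UICollectionView` cells can be recycled, it is good practice to clear the stats callback when a cell is reused. With scrolling disabled, reuse is rare in this app, but this keeps the cell safe if the layout ever changes:

```swift
// Not in the original sample: stop a recycled cell from updating labels
// for a participant it no longer represents.
override func prepareForReuse() {
    super.prepareForReuse()
    audioDevice?.setStatsCallback(nil)
    registeredStreams.removeAll()
}
```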

The last function we need to create is `updatePreview()`, which adds a preview of the participant to our view:

```
private func updatePreview() {
    // Remove any old previews from the preview container
    viewPreviewContainer.subviews.forEach { $0.removeFromSuperview() }
    if let imageDevice = self.imageDevice {
        if let preview = try? imageDevice.previewView(with: .fit) {
            viewPreviewContainer.addSubviewMatchFrame(preview)
        }
    }
}
```

The above uses a helper function on `UIView` to make embedding subviews easier:

```
extension UIView {
    func addSubviewMatchFrame(_ view: UIView) {
        view.translatesAutoresizingMaskIntoConstraints = false
        self.addSubview(view)
        NSLayoutConstraint.activate([
            view.topAnchor.constraint(equalTo: self.topAnchor, constant: 0),
            view.bottomAnchor.constraint(equalTo: self.bottomAnchor, constant: 0),
            view.leadingAnchor.constraint(equalTo: self.leadingAnchor, constant: 0),
            view.trailingAnchor.constraint(equalTo: self.trailingAnchor, constant: 0),
        ])
    }
}
```