

For similar capabilities to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html).

# Working with other services
<a name="OtherServices"></a>

Amazon Timestream for LiveAnalytics integrates with a variety of AWS services and popular third-party tools. Currently, Timestream for LiveAnalytics supports integrations with the following: 

**Topics**
+ [Amazon DynamoDB](dynamodb.md)
+ [AWS Lambda](Lambda.md)
+ [AWS IoT Core](IOT-Core.md)
+ [Amazon Managed Service for Apache Flink](ApacheFlink.md)
+ [Amazon Kinesis](Kinesis.md)
+ [Amazon MQ](MQ.md)
+ [Amazon MSK](MSK.md)
+ [Amazon QuickSight](Quicksight.md)
+ [Amazon SageMaker AI](Sagemaker.md)
+ [Amazon SQS](SQS.md)
+ [Using DBeaver to work with Amazon Timestream](DBeaver.md)
+ [Grafana](Grafana.md)
+ [Using SquaredUp to work with Amazon Timestream](SquaredUp.md)
+ [Open source Telegraf](Telegraf.md)
+ [JDBC](JDBC.md)
+ [ODBC](ODBC.md)
+ [VPC endpoints (AWS PrivateLink)](vpc-interface-endpoints.md)

# Amazon DynamoDB
<a name="dynamodb"></a>

## Using EventBridge Pipes to send DynamoDB data to Timestream
<a name="DynamoDB-via-pipes"></a>

You can use EventBridge Pipes to send data from a DynamoDB stream to an Amazon Timestream for LiveAnalytics table.

Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data.

![\[A source sends events to an EventBridge pipe, which filters and routes matching events to the target.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/pipes-overview_shared_architecture.png)


For more information on EventBridge Pipes, see [EventBridge Pipes](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html) in the *EventBridge User Guide*. For information on configuring a pipe to deliver events to an Amazon Timestream for LiveAnalytics table, see [EventBridge Pipes target specifics](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-targets-specifics.html#pipes-targets-specifics-timestream).
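As a brief, hypothetical sketch, the Timestream target's `DimensionMappings` for a DynamoDB stream source would typically pull values from the stream record's `NewImage`, which exposes the new item state with typed attribute values (the `device_id` attribute and its path here are illustrative only):

```
"DimensionMappings": [
    {
        "DimensionName": "device_id",
        "DimensionValue": "$.dynamodb.NewImage.device_id.S",
        "DimensionValueType": "VARCHAR"
    }
]
```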

# AWS Lambda
<a name="Lambda"></a>

 You can create Lambda functions that interact with Timestream for LiveAnalytics. For example, you can create a Lambda function that runs at regular intervals to execute a query on Timestream and send an SNS notification based on the query results satisfying one or more criteria. To learn more about Lambda, see the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html).

**Topics**
+ [Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python](#Lambda.w-python)
+ [Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript](#Lambda.w-js)
+ [Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go](#Lambda.w-go)
+ [Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C\#](#Lambda.w-c-sharp)

## Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python
<a name="Lambda.w-python"></a>

 To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Python, follow the steps below.

1.  Create an IAM role for Lambda to assume that grants the required permissions to access Timestream, as outlined in [Provide Timestream for LiveAnalytics access](accessing.md#getting-started.prereqs.iam-user).

1. Edit the trust relationship of the IAM role to add the Lambda service. You can use the following commands to update an existing role so that AWS Lambda can assume it:

   1. Create the trust policy document:

      ```
      cat > Lambda-Role-Trust-Policy.json << EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": [
                "lambda.amazonaws.com"
              ]
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Update the role from the previous step with the trust policy document:

      ```
      aws iam update-assume-role-policy --role-name <name_of_the_role_from_step_1> --policy-document file://Lambda-Role-Trust-Policy.json
      ```

Related references are at [TimestreamWrite](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/timestream-write.html) and [TimestreamQuery](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/timestream-query.html).
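As a minimal, hypothetical sketch of the scheduled query-and-notify pattern described above (the database, table, threshold, and SNS topic ARN are placeholders; the result parsing follows the boto3 TimestreamQuery response shape):

```python
import json

# Placeholders -- substitute your own resources.
DATABASE = "sampleDB"
TABLE = "sensor_data"
THRESHOLD = 30.0
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-alerts"

def rows_above_threshold(rows, threshold):
    """Return scalar values from a Timestream query result that exceed threshold.

    Each row follows the Query API shape: {"Data": [{"ScalarValue": "..."}]}.
    """
    values = [float(row["Data"][0]["ScalarValue"]) for row in rows]
    return [v for v in values if v > threshold]

def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily here so the
    # helper above can be unit-tested without AWS dependencies.
    import boto3

    query_client = boto3.client("timestream-query")
    result = query_client.query(
        QueryString=f'SELECT measure_value::double FROM "{DATABASE}"."{TABLE}" WHERE time > ago(5m)'
    )
    breaches = rows_above_threshold(result["Rows"], THRESHOLD)
    if breaches:
        # Notify only when the criteria are met.
        boto3.client("sns").publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({"values_over_threshold": breaches}),
        )
    return {"breaches": len(breaches)}
```

Scheduling the function at regular intervals (for example, with an EventBridge rule) completes the pattern.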

## Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript
<a name="Lambda.w-js"></a>

 To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with JavaScript, follow the instructions outlined [here](https://docs.aws.amazon.com/lambda/latest/dg/nodejs-package.html#nodejs-package-dependencies).

Related references are at [Timestream Write Client - AWS SDK for JavaScript v3](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-timestream-write/index.html) and [Timestream Query Client - AWS SDK for JavaScript v3](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-timestream-query/index.html).

## Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go
<a name="Lambda.w-go"></a>

 To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with Go, follow the instructions outlined [here](https://docs.aws.amazon.com/lambda/latest/dg/golang-package.html).

Related references are at [timestreamwrite](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/timestreamwrite) and [timestreamquery](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/timestreamquery).

## Build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C\#
<a name="Lambda.w-c-sharp"></a>

 To build AWS Lambda functions using Amazon Timestream for LiveAnalytics with C\#, follow the instructions outlined [here](https://docs.aws.amazon.com/lambda/latest/dg/csharp-package.html).

Related references are at [Amazon.TimestreamWrite](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/TimestreamWrite/NTimestreamWrite.html) and [Amazon.TimestreamQuery](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/TimestreamQuery/NTimestreamQuery.html).

# AWS IoT Core
<a name="IOT-Core"></a>

 You can collect data from IoT devices using [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/iot-gs.html) and route the data to Amazon Timestream through IoT Core rule actions. AWS IoT rule actions specify what to do when a rule is triggered. You can define actions such as sending data to an Amazon Timestream table, sending data to an Amazon DynamoDB table, or invoking an AWS Lambda function.

 The Timestream action in IoT Rules is used to insert data from incoming messages directly into Timestream. The action parses the results of the [IoT Core SQL](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-reference.html) statement and stores data in Timestream. The names of the fields from the returned SQL result set are used as the measure::name, and the values of the fields as the measure::value.

 For example, consider the SQL statement and the sample message payload: 

```
SELECT temperature, humidity from 'iot/topic'
```

```
{
  "dataFormat": 5, 
  "rssi": -88,
  "temperature": 24.04,    
  "humidity": 43.605,    
  "pressure": 101082,    
  "accelerationX": 40,    
  "accelerationY": -20,    
  "accelerationZ": 1016,    
  "battery": 3007,    
  "txPower": 4,    
  "movementCounter": 219,    
  "device_id": 46216,
  "device_firmware_sku": 46216   
}
```

 If an IoT Core rule action for Timestream is created with the SQL statement above, two records will be added to Timestream with measure names `temperature` and `humidity` and measure values of 24.04 and 43.605, respectively.

 You can modify the measure name of a record being added to Timestream by using the AS operator in the SELECT statement. For example, the statement `SELECT temperature AS temp, humidity FROM 'iot/topic'` creates a record with the measure name `temp` instead of `temperature`.

 The data type of each measure is inferred from the data type of its value in the message payload. JSON data types such as integer, double, boolean, and string are mapped to the Timestream data types BIGINT, DOUBLE, BOOLEAN, and VARCHAR, respectively. Data can also be forced to specific data types using the [cast()](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html#iot-sql-function-cast) function. You can specify the timestamp of the measure; if the timestamp is left blank, the time that the entry was processed is used.
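The type inference can be pictured with a small Python sketch (the helper itself is illustrative; the mapping it encodes is the one documented for the rule action, and the string SKU in the payload is hypothetical):

```python
def infer_timestream_type(value):
    """Map a JSON payload value to the Timestream data type the
    IoT Core Timestream rule action infers for it."""
    if isinstance(value, bool):   # check bool first: bool is a subclass of int
        return "BOOLEAN"
    if isinstance(value, int):
        return "BIGINT"
    if isinstance(value, float):
        return "DOUBLE"
    return "VARCHAR"

payload = {"temperature": 24.04, "movementCounter": 219,
           "device_firmware_sku": "sku-46216", "charging": False}
print({k: infer_timestream_type(v) for k, v in payload.items()})
# {'temperature': 'DOUBLE', 'movementCounter': 'BIGINT',
#  'device_firmware_sku': 'VARCHAR', 'charging': 'BOOLEAN'}
```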

For additional details, see the [Timestream rule action documentation](https://docs.aws.amazon.com/iot/latest/developerguide/timestream-rule-action.html).

 To create an IoT Core rule action to store data in Timestream, follow the steps below: 

**Topics**
+ [Prerequisites](#prereqs)
+ [Using the console](#using-console)
+ [Using the CLI](#using-cli)
+ [Sample application](#sample-app)
+ [Video tutorial](#video-tutorial)

## Prerequisites
<a name="prereqs"></a>

1. Create a database in Amazon Timestream using the instructions described in [Create a database](console_timestream.md#console_timestream.db.using-console).

1. Create a table in Amazon Timestream using the instructions described in [Create a table](console_timestream.md#console_timestream.table.using-console).

## Using the console
<a name="using-console"></a>

1. Use the AWS Management Console for AWS IoT Core to create a rule by choosing **Manage** > **Message routing** > **Rules**, followed by **Create rule**.

1. Set the rule name to a name of your choice, and set the SQL statement to the following:

   ```
   SELECT temperature as temp, humidity from 'iot/topic' 
   ```

1. Select **Timestream** from the **Action** list.

1. Specify the Timestream database, table, and dimension names, along with the role used to write data into Timestream. If the role does not exist, you can create one by choosing **Create Roles**.

1. To test the rule, follow the instructions shown [here](https://docs.aws.amazon.com/iot/latest/developerguide/iot-ddb-rule.html#test-db-rule).

## Using the CLI
<a name="using-cli"></a>

 If you haven't installed the AWS Command Line Interface (AWS CLI), do so from [here](https://aws.amazon.com/cli/). 

1. Save the following rule payload in a JSON file called timestream\_rule.json. Replace *arn:aws:iam::123456789012:role/TimestreamRole* with the ARN of your role that grants AWS IoT access to store data in Amazon Timestream.

   ```
    {
        "actions": [
            {
                "timestream": {
                    "roleArn": "arn:aws:iam::123456789012:role/TimestreamRole",
                    "tableName": "devices_metrics",
                    "dimensions": [
                        {
                            "name": "device_id",
                            "value": "${clientId()}"
                        },
                        {
                            "name": "device_firmware_sku",
                            "value": "My Static Metadata"
                        }
                    ],
                    "databaseName": "record_devices"
                }
            }
        ],
        "sql": "select * from 'iot/topic'",
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": false
    }
   ```

1. Create a topic rule using the following command:

   ```
   aws iot create-topic-rule --rule-name timestream_test --topic-rule-payload file://<path/to/timestream_rule.json> --region us-east-1 
   ```

1. Retrieve the details of the topic rule using the following command:

   ```
   aws iot get-topic-rule --rule-name timestream_test 
   ```

1. Save the following message payload in a file called timestream\_msg.json:

   ```
   {
     "dataFormat": 5, 
     "rssi": -88,
     "temperature": 24.04,    
     "humidity": 43.605,    
     "pressure": 101082,    
     "accelerationX": 40,    
     "accelerationY": -20,    
     "accelerationZ": 1016,    
     "battery": 3007,    
     "txPower": 4,    
     "movementCounter": 219,    
     "device_id": 46216,
     "device_firmware_sku": 46216   
   }
   ```

1. Test the rule using the following command:

   ```
   aws iot-data publish --topic 'iot/topic' --payload file://<path/to/timestream_msg.json>
   ```

## Sample application
<a name="sample-app"></a>

 To help you get started with using Timestream with AWS IoT Core, we've created a fully functional sample application that creates the necessary artifacts in AWS IoT Core and Timestream for a topic rule, along with a sample application for publishing data to the topic.

1.  Clone the GitHub repository for the [sample application](https://github.com/awslabs/amazon-timestream-tools/blob/master/integrations/iot_core) for AWS IoT Core integration following the instructions from [GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1. Follow the instructions in the [README](https://github.com/awslabs/amazon-timestream-tools/blob/master/integrations/iot_core) to use an AWS CloudFormation template to create the necessary artifacts in Amazon Timestream and AWS IoT Core and to publish sample messages to the topic.

## Video tutorial
<a name="video-tutorial"></a>

This [video](https://youtu.be/00Wersoz2Q4) explains how IoT Core works with Timestream.

# Amazon Managed Service for Apache Flink
<a name="ApacheFlink"></a>

You can use Apache Flink to transfer your time series data from Amazon Managed Service for Apache Flink, Amazon MSK, Apache Kafka, and other streaming technologies directly into Amazon Timestream for LiveAnalytics. We've created an Apache Flink sample data connector for Timestream. We've also created a sample application for sending data to Amazon Kinesis so that the data can flow from Kinesis to Managed Service for Apache Flink, and finally on to Amazon Timestream. All of these artifacts are available to you in GitHub. This [video tutorial](https://youtu.be/64DSlBvN5lg) describes the setup.

**Note**  
 Java 11 is the recommended version for using the Managed Service for Apache Flink application. If you have multiple Java versions, ensure that you export Java 11 to your JAVA\_HOME environment variable.

**Topics**
+ [Sample application](#ApacheFlink.sample-app)
+ [Video tutorial](#ApacheFlink.video-tutorial)

## Sample application
<a name="ApacheFlink.sample-app"></a>

To get started, follow the procedure below:

1. Create a database in Timestream with the name `kdaflink` following the instructions described in [Create a database](console_timestream.md#console_timestream.db.using-console).

1. Create a table in Timestream with the name `kinesisdata1` following the instructions described in [Create a table](console_timestream.md#console_timestream.table.using-console).

1. Create an Amazon Kinesis Data Stream with the name `TimestreamTestStream` following the instructions described in [Creating a Stream](https://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-streams.html#how-do-i-create-a-stream).

1. Clone the GitHub repository for the [Apache Flink data connector for Timestream](https://github.com/awslabs/amazon-timestream-tools/blob/master/integrations/flink_connector) following the instructions from [GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1.  To compile, run and use the sample application, follow the instructions in the [ Apache Flink sample data connector README](https://github.com/awslabs/amazon-timestream-tools/blob/master/integrations/flink_connector/README.md). 

1. Compile the Managed Service for Apache Flink application following the instructions for [Compiling the Application Code](https://docs.aws.amazon.com/managed-flink/latest/java/get-started-exercise.html#get-started-exercise-5.5).

1. Upload the Managed Service for Apache Flink application binary following the instructions to [Upload the Apache Flink Streaming Code](https://docs.aws.amazon.com/managed-flink/latest/java/get-started-exercise.html#get-started-exercise-6).

   1. After clicking **Create application**, click the link for the application's IAM role.

   1. Attach the IAM policies for **AmazonKinesisReadOnlyAccess** and **AmazonTimestreamFullAccess**.
**Note**  
The above IAM policies are not restricted to specific resources and are unsuitable for production use. For a production system, consider using policies that restrict access to specific resources.

1. Clone the GitHub repository for the [ sample application writing data to Kinesis](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/python/kinesis_ingestor) following the instructions from [GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1. Follow the instructions in the [README](https://github.com/awslabs/amazon-timestream-tools/blob/mainline/tools/python/kinesis_ingestor/README.md) to run the sample application for writing data to Kinesis.

1. Run one or more queries in Timestream to verify that data is being sent from Kinesis through Managed Service for Apache Flink to Timestream. The table to query is the one created in [Create a table](console_timestream.md#console_timestream.table.using-console).
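As a sketch, a query along these lines (using the database and table names created in the earlier steps) confirms that recent records have arrived:

```
SELECT * FROM "kdaflink"."kinesisdata1" ORDER BY time DESC LIMIT 10
```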

## Video tutorial
<a name="ApacheFlink.video-tutorial"></a>

This [video](https://youtu.be/64DSlBvN5lg) explains how to use Timestream with Managed Service for Apache Flink.

# Amazon Kinesis
<a name="Kinesis"></a>

## Using Amazon Managed Service for Apache Flink
<a name="kinesis-via-flink"></a>

You can send data from Kinesis Data Streams to Timestream for LiveAnalytics using the sample Timestream data connector for Managed Service for Apache Flink. Refer to [Amazon Managed Service for Apache Flink](ApacheFlink.md) for more information.

## Using EventBridge Pipes to send Kinesis data to Timestream
<a name="Kinesis-via-pipes"></a>

You can use EventBridge Pipes to send data from a Kinesis stream to an Amazon Timestream for LiveAnalytics table.

Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data.

![\[A source sends events to an EventBridge pipe, which filters and routes matching events to the target.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/pipes-overview_shared_architecture.png)


This integration enables you to leverage the power of Timestream's time-series data analysis capabilities, while simplifying your data ingestion pipeline.

Using EventBridge Pipes with Timestream offers the following benefits:
+ Real-time Data Ingestion: Stream data from Kinesis directly to Timestream for LiveAnalytics, enabling real-time analytics and monitoring.
+ Seamless Integration: Utilize EventBridge Pipes to manage the flow of data without the need for complex custom integrations.
+ Enhanced Filtering and Transformation: Filter or transform Kinesis records before they are stored in Timestream to meet your specific data processing requirements.
+ Scalability: Handle high-throughput data streams and ensure efficient data processing with built-in parallelism and batching capabilities.

### Configuration
<a name="Kinesis-via-pipes-config"></a>

To set up an EventBridge Pipe to stream data from Kinesis to Timestream, follow these steps:

1. Create a Kinesis stream

   Ensure you have an active Kinesis data stream from which you want to ingest data.

1. Create a Timestream database and table

   Set up your Timestream database and table where the data will be stored.

1. Configure the EventBridge Pipe:
   + Source: Select your Kinesis stream as the source.
   + Target: Choose Timestream as the target.
   + Batching Settings: Define batching window and batch size to optimize data processing and reduce latency.

**Important**  
When setting up a pipe, we recommend testing the correctness of all configurations by ingesting a few records. Note that successful creation of a pipe does not guarantee that the pipeline is correct and that data will flow without errors. Runtime errors, such as an incorrect table name, an incorrect dynamic path parameter, or an invalid Timestream record after mapping is applied, are discovered only when actual data flows through the pipe.

The following configurations determine the rate at which data is ingested:
+ BatchSize: The maximum size of the batch that is sent to Timestream for LiveAnalytics. Range: 0 - 100. We recommend keeping this value at 100 to get maximum throughput.
+ MaximumBatchingWindowInSeconds: The maximum time to wait for the batch to fill to BatchSize before it is sent to the Timestream for LiveAnalytics target. Depending on the rate of incoming events, this setting determines the ingestion delay. We recommend keeping this value under 10 seconds so that data is sent to Timestream in near real time.
+ ParallelizationFactor: The number of batches to process concurrently from each shard. We recommend the maximum value of 10 to get maximum throughput and near-real-time ingestion.

  If your stream is read by multiple targets, use enhanced fan-out to provide a dedicated consumer to your pipe to achieve high throughput. For more information, see [Developing enhanced fan-out consumers with the Kinesis Data Streams API](https://docs.aws.amazon.com/streams/latest/dev/building-enhanced-consumers-api.html) in the *Kinesis Data Streams User Guide*.

**Note**  
The maximum throughput that can be achieved is bounded by [concurrent pipe executions](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-quota.html#eb-pipes-limits) per account.

The following configuration ensures prevention of data loss:
+ DeadLetterConfig: We recommend always configuring DeadLetterConfig to avoid data loss in cases where events could not be ingested into Timestream for LiveAnalytics due to user errors.

Optimize your pipe's performance with the following configuration settings, which help prevent records from causing slowdowns or blockages.
+ MaximumRecordAgeInSeconds: Records older than this are not processed and are moved directly to the DLQ. We recommend setting this value no higher than the configured memory store retention period of the target Timestream table.
+ MaximumRetryAttempts: The number of retry attempts for a record before it is sent to the dead-letter queue. We recommend configuring this at 10, which should address transient issues; for persistent issues, the record is moved to the dead-letter queue, unblocking the rest of the stream.
+ OnPartialBatchItemFailure: For sources that support partial batch processing, we recommend enabling this and configuring it as AUTOMATIC\_BISECT for additional retries of failed records before they are dropped or sent to the DLQ.

#### Configuration example
<a name="Kinesis-via-pipes-config-example"></a>

Here is an example of how to configure an EventBridge Pipe to stream data from a Kinesis stream to a Timestream table:

**Example IAM policy updates for Timestream**

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "timestream:WriteRecords"
            ],
            "Resource": [
                "arn:aws:timestream:us-east-1:123456789012:database/my-database/table/my-table"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "timestream:DescribeEndpoints"
            ],
            "Resource": "*"
        }
    ]
}
```

**Example Kinesis stream configuration**  <a name="kinesis-stream-config.example"></a>

```
{
  "Source": "arn:aws:kinesis:us-east-1:123456789012:stream/my-kinesis-stream",
  "SourceParameters": {
    "KinesisStreamParameters": {
      "BatchSize": 100,
      "DeadLetterConfig": {
        "Arn": "arn:aws:sqs:us-east-1:123456789012:my-sqs-queue"
      },
      "MaximumBatchingWindowInSeconds": 5,
      "MaximumRecordAgeInSeconds": 1800,
      "MaximumRetryAttempts": 10,
      "StartingPosition": "LATEST",
      "OnPartialBatchItemFailure": "AUTOMATIC_BISECT"
    }
  }
}
```

**Example Timestream target configuration**  <a name="timestream-target-config.example"></a>

```
{
    "Target": "arn:aws:timestream:us-east-1:123456789012:database/my-database/table/my-table",
    "TargetParameters": {
        "TimestreamParameters": {
            "DimensionMappings": [
                {
                    "DimensionName": "sensor_id",
                    "DimensionValue": "$.data.device_id",
                    "DimensionValueType": "VARCHAR"
                },
                {
                    "DimensionName": "sensor_type",
                    "DimensionValue": "$.data.sensor_type",
                    "DimensionValueType": "VARCHAR"
                },
                {
                    "DimensionName": "sensor_location",
                    "DimensionValue": "$.data.sensor_loc",
                    "DimensionValueType": "VARCHAR"
                }
            ],
            "MultiMeasureMappings": [
                {
                    "MultiMeasureName": "readings",
                    "MultiMeasureAttributeMappings": [
                        {
                            "MultiMeasureAttributeName": "temperature",
                            "MeasureValue": "$.data.temperature",
                            "MeasureValueType": "DOUBLE"
                        },
                        {
                            "MultiMeasureAttributeName": "humidity",
                            "MeasureValue": "$.data.humidity",
                            "MeasureValueType": "DOUBLE"
                        },
                        {
                            "MultiMeasureAttributeName": "pressure",
                            "MeasureValue": "$.data.pressure",
                            "MeasureValueType": "DOUBLE"
                        }
                    ]
                }
            ],
            "SingleMeasureMappings": [],
            "TimeFieldType": "TIMESTAMP_FORMAT",
            "TimestampFormat": "yyyy-MM-dd HH:mm:ss.SSS",
            "TimeValue": "$.data.time",
            "VersionValue": "$.approximateArrivalTimestamp"
        }
    }
}
```
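The source and target fragments above come together in a single CreatePipe request. The following sketch assembles them with boto3 (the pipe name and role ARN are placeholders, and the parameter dictionaries are abbreviated; fill them in as shown in the examples above):

```python
def build_pipe_request(name, role_arn, source_arn, target_arn,
                       source_params, target_params):
    """Assemble the request body for the EventBridge Pipes CreatePipe API."""
    return {
        "Name": name,
        "RoleArn": role_arn,
        "Source": source_arn,
        "SourceParameters": source_params,
        "Target": target_arn,
        "TargetParameters": target_params,
    }

request = build_pipe_request(
    name="kinesis-to-timestream",                        # placeholder
    role_arn="arn:aws:iam::123456789012:role/PipeRole",  # placeholder
    source_arn="arn:aws:kinesis:us-east-1:123456789012:stream/my-kinesis-stream",
    target_arn="arn:aws:timestream:us-east-1:123456789012:database/my-database/table/my-table",
    source_params={"KinesisStreamParameters": {"BatchSize": 100,
                                               "StartingPosition": "LATEST"}},
    target_params={"TimestreamParameters": {}},  # fill in as in the target example above
)

# In an AWS environment the request would be submitted like this:
# import boto3
# boto3.client("pipes").create_pipe(**request)
```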



### Event transformation
<a name="Kinesis-via-pipes-trans"></a>

EventBridge Pipes allow you to transform data before it reaches Timestream. You can define transformation rules to modify the incoming Kinesis records, such as changing field names.

Suppose your Kinesis stream contains temperature and humidity data. You can use an EventBridge transformation to rename these fields before inserting them into Timestream.
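As an illustrative sketch (field names follow the earlier sample payload; the exact template depends on your event shape), an input template along these lines renames `temperature` to `temp` before the record reaches the target:

```
{
  "temp": <$.data.temperature>,
  "humidity": <$.data.humidity>
}
```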

### Best practices
<a name="Kinesis-via-pipes-best"></a>

**Batching and Buffering**
+ Configure the batching window and size to balance between write latency and processing efficiency.
+ Use a batching window to accumulate enough data before processing, reducing the overhead of frequent small batches.

**Parallel Processing**

Utilize the **ParallelizationFactor** setting to increase concurrency, especially for high-throughput streams. This ensures that multiple batches from each shard can be processed simultaneously.

**Data Transformation**

Leverage the transformation capabilities of EventBridge Pipes to filter and enhance records before storing them in Timestream. This can help in aligning the data with your analytical requirements.

**Security**
+ Ensure that the IAM roles used for EventBridge Pipes have the necessary permissions to read from Kinesis and write to Timestream.
+ Use encryption and access control measures to secure data in transit and at rest.

### Debugging failures
<a name="Kinesis-via-pipes-debug"></a>
+ **Automatic Disabling of Pipes**

  Pipes will be automatically disabled in about 2 hours if the target does not exist or has permission issues.
+ **Throttles**

  Pipes automatically back off and retry until the throttling subsides.
+ **Enabling Logs**

  We recommend enabling logs at the ERROR level and including execution data to get more insight into failures. Upon any failure, these logs contain the request sent to and response received from Timestream, which helps you understand the associated error and, if needed, reprocess the records after fixing it.

### Monitoring
<a name="Kinesis-via-pipes-monitor"></a>

We recommend setting up alarms on the following metrics to detect any issues with data flow:
+ Maximum Age of the Record in Source
  + `GetRecords.IteratorAgeMilliseconds`
+ Failure metrics in Pipes
  + `ExecutionFailed`
  + `TargetStageFailed`
+ Timestream Write API errors
  + `UserErrors`

For additional monitoring metrics, see [Monitoring EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html#eb-metrics) in the *EventBridge User Guide*.

# Amazon MQ
<a name="MQ"></a>

## Using EventBridge Pipes to send Amazon MQ data to Timestream
<a name="MQ-via-pipes"></a>

You can use EventBridge Pipes to send data from an Amazon MQ broker to an Amazon Timestream for LiveAnalytics table.

Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data.

![\[A source sends events to an EventBridge pipe, which filters and routes matching events to the target.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/pipes-overview_shared_architecture.png)


For more information on EventBridge Pipes, see [EventBridge Pipes](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html) in the *EventBridge User Guide*. For information on configuring a pipe to deliver events to an Amazon Timestream for LiveAnalytics table, see [EventBridge Pipes target specifics](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-targets-specifics.html#pipes-targets-specifics-timestream).

# Amazon MSK
<a name="MSK"></a>

## Using Managed Service for Apache Flink to send Amazon MSK data to Timestream for LiveAnalytics
<a name="msk-aka"></a>

You can send data from Amazon MSK to Timestream by building a data connector similar to the sample Timestream data connector for Managed Service for Apache Flink. Refer to [Amazon Managed Service for Apache Flink](ApacheFlink.md) for more information.

## Using Kafka Connect to send Amazon MSK data to Timestream for LiveAnalytics
<a name="msk-kafka-connect"></a>

You can use Kafka Connect to ingest your time series data from Amazon MSK directly into Timestream for LiveAnalytics.

We've created a sample Kafka Sink Connector for Timestream. We've also created a sample Apache JMeter test plan for publishing data to a Kafka topic, so that the data can flow from the topic through the Timestream Kafka Sink Connector to a Timestream for LiveAnalytics table. All of these artifacts are available on GitHub.

**Note**  
Java 11 is the recommended version for using the Timestream Kafka Sink Connector. If you have multiple Java versions, ensure that your `JAVA_HOME` environment variable points to Java 11.
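
On Linux, that might look like the following sketch. The Amazon Corretto install path is an assumption; point `JAVA_HOME` at wherever Java 11 is installed on your machine.

```
# Point JAVA_HOME at a Java 11 installation before building or running the
# connector. The path below is an example for Amazon Corretto 11 on Linux.
export JAVA_HOME=/usr/lib/jvm/java-11-amazon-corretto
export PATH="$JAVA_HOME/bin:$PATH"

# Print the active Java version if java is installed.
command -v java >/dev/null && java -version
```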

### Creating a sample application
<a name="msk-kafka-connect-app"></a>

To get started, follow the procedure below.

1. In Timestream for LiveAnalytics, create a database with the name `kafkastream`. 

   See the procedure [Create a database](console_timestream.md#console_timestream.db.using-console) for detailed instructions.

1. In Timestream for LiveAnalytics, create a table with the name `purchase_history`.

   See the procedure [Create a table](console_timestream.md#console_timestream.table.using-console) for detailed instructions.
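
   If you prefer the AWS CLI to the console, the database and table above can be created as sketched below. This assumes credentials and a Region are already configured; the retention values are illustrative, not requirements of the sample.

   ```
   # Create the database and table used by the Kafka Sink Connector sample.
   DB_NAME=kafkastream
   TABLE_NAME=purchase_history
   if command -v aws >/dev/null; then
     aws timestream-write create-database --database-name "$DB_NAME" || true
     aws timestream-write create-table --database-name "$DB_NAME" \
       --table-name "$TABLE_NAME" \
       --retention-properties \
       "MemoryStoreRetentionPeriodInHours=24,MagneticStoreRetentionPeriodInDays=7" || true
   fi
   ```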

1. Create the following resources:
   + An Amazon MSK cluster
   + An Amazon EC2 instance that is configured as a Kafka producer client machine 
   + A Kafka topic

   See the [prerequisites](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/java/kafka_ingestor#prerequisites) of the kafka_ingestor project for detailed instructions.

1. Clone the [Timestream Kafka Sink Connector](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector) repository. 

   See [Cloning a repository](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository) on GitHub for detailed instructions.

1. Compile the plugin code.

    See [Connector - Build from source](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#connector---build-from-source) on GitHub for detailed instructions.

1. Upload the following files to an S3 bucket:
   + The jar file (`kafka-connector-timestream-<VERSION>-jar-with-dependencies.jar`) from the `/target` directory
   + The sample JSON schema file, `purchase_history.json`

   See [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the *Amazon S3 User Guide* for detailed instructions.

1. Create two VPC endpoints. These endpoints will be used by the MSK connector to access resources over AWS PrivateLink:
   + One to access the Amazon S3 bucket
   + One to access the Timestream for LiveAnalytics table

   See [VPC Endpoints](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#vpc-endpoints) for detailed instructions.

1. Create a custom plugin with the uploaded jar file.

   See [Plugins](https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-plugins.html) in the *Amazon MSK Developer Guide* for detailed instructions.

1. Create a custom worker configuration with the JSON content described in [Worker Configuration parameters](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#worker-configuration-parameters).

   See [Creating a custom worker configuration](https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-workers.html#msk-connect-create-custom-worker-config) in the *Amazon MSK Developer Guide* for detailed instructions.

1. Create a service execution IAM role.

   See [IAM Service Role](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#iam-service-role) for detailed instructions.

1. Create an Amazon MSK connector with the custom plugin, custom worker configuration, and service execution IAM role created in the previous steps and with the [Sample Connector Configuration](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#sample-connector-configuration).

   See [Creating a connector](https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-connectors.html#mkc-create-connector-intro) in the *Amazon MSK Developer Guide* for detailed instructions.

   Make sure to update the following configuration parameters with your own values. See [Connector Configuration parameters](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/kafka_connector#connector-configuration-parameters) for details.
   + `aws.region`
   + `timestream.schema.s3.bucket.name`
   + `timestream.ingestion.endpoint`

   The connector creation takes 5–10 minutes to complete. The pipeline is ready when its status changes to `Running`.
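
   As a hedged illustration, those three parameters might end up looking like the following in the connector configuration. The bucket name and ingestion endpoint are placeholders; you can retrieve your account's actual ingestion endpoint with `aws timestream-write describe-endpoints`.

   ```
   aws.region=us-east-1
   timestream.schema.s3.bucket.name=my-connector-schema-bucket
   timestream.ingestion.endpoint=https://ingest-cell1.timestream.us-east-1.amazonaws.com
   ```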

1. Publish a continuous stream of messages for writing data to the Kafka topic created.

   See [How to use it](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/java/kafka_ingestor#how-to-use-it) for detailed instructions.

1. Run one or more queries to ensure that data is flowing from Amazon MSK through MSK Connect to the Timestream for LiveAnalytics table.

   See the procedure [Run a query](console_timestream.md#console_timestream.queries.using-console) for detailed instructions.
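
   You can also verify from the CLI. A sketch using the database and table names from this procedure:

   ```
   # Count recently ingested rows; a non-zero count confirms the pipeline works.
   QUERY='SELECT COUNT(*) AS row_count FROM "kafkastream"."purchase_history" WHERE time > ago(15m)'
   if command -v aws >/dev/null; then
     aws timestream-query query --query-string "$QUERY" || true
   fi
   ```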

#### Additional resources
<a name="msk-kafka-connect-more-info"></a>

The blog post [Real-time serverless data ingestion from your Kafka clusters into Timestream for LiveAnalytics using Kafka Connect](https://aws.amazon.com/blogs/database/real-time-serverless-data-ingestion-from-your-kafka-clusters-into-amazon-timestream-using-kafka-connect/) walks through setting up an end-to-end pipeline with the Timestream for LiveAnalytics Kafka Sink Connector, from a Kafka producer client machine that uses the Apache JMeter test plan to publish thousands of sample messages to a Kafka topic, through verifying the ingested records in a Timestream for LiveAnalytics table.

# Amazon QuickSight
<a name="Quicksight"></a>

You can use Amazon QuickSight to analyze and publish data dashboards that contain your Amazon Timestream data. This section describes how you can create a new QuickSight data source connection, modify permissions, create new datasets, and perform an analysis. This [video tutorial](https://youtu.be/TzW4HWl-L8s) describes how to work with Timestream and QuickSight.

**Note**  
All datasets in QuickSight are read-only. Removing a data source, dataset, or field in QuickSight doesn't change your underlying data in Timestream.

**Topics**
+ [Accessing Amazon Timestream from QuickSight](#Quicksight.accessing)
+ [Create a new QuickSight data source connection for Timestream](#Quicksight.create-connection)
+ [Edit permissions for the QuickSight data source connection for Timestream](#Quicksight.permissions)
+ [Create a new QuickSight dataset for Timestream](#Quicksight.create-data)
+ [Create a new analysis for Timestream](#Quicksight.create-analysis)
+ [Video tutorial](#Quicksight.video-tutorial)

## Accessing Amazon Timestream from QuickSight
<a name="Quicksight.accessing"></a>

 Before you can proceed, Amazon QuickSight needs to be authorized to connect to Amazon Timestream. If connections are not enabled, you will receive an error when you try to connect. A QuickSight administrator can authorize connections to AWS resources. To authorize a connection from QuickSight to Timestream, follow the procedure at [Using Other AWS Services: Scoping Down Access](https://docs.aws.amazon.com/quicksight/latest/user/scoping-policies-for-access-to-aws-resources.html), choosing Amazon Timestream in step 5. 

## Create a new QuickSight data source connection for Timestream
<a name="Quicksight.create-connection"></a>

**Note**  
The connection between Amazon QuickSight and Amazon Timestream is encrypted in transit using SSL (TLS 1.2). You cannot create an unencrypted connection.

1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in [Accessing Amazon Timestream from QuickSight](#Quicksight.accessing).

1. Begin by creating a new dataset. Choose **Datasets** from the navigation pane, then choose **New Dataset**. 

1. Select the Timestream data source card.

1. For **Data source name**, enter a name for your Timestream data source connection, for example `US Timestream Data`. 
**Note**  
Because you can create many datasets from a connection to Timestream, it's best to keep the name simple.

1. Choose **Validate connection** to check that you can successfully connect to Timestream.
**Note**  
 **Validate connection** only validates that you can connect. However, it doesn't validate a specific table or query. 

1. Choose **Create data source** to proceed.

1. For **Database**, choose **Select...** to view the list of available options. Choose the one you want to use. 

1. Choose **Select** to continue. 

1. Choose one of the following:
   + To import your data into QuickSight's in-memory engine (called SPICE), choose **Import to SPICE for quicker analytics**. 
   + To allow QuickSight to run a query against your data each time you refresh the dataset or use the analysis or dashboard, choose **Directly query your data**. 

1. Choose **Edit/Preview** and then **Save** to save your dataset and close it.

## Edit permissions for the QuickSight data source connection for Timestream
<a name="Quicksight.permissions"></a>

The following procedure describes how to view, add, and revoke permissions for other QuickSight users so that they can access the same Timestream data source. Users must already be active in QuickSight before you can add them.

**Note**  
In QuickSight, data sources have two permissions levels: user and owner.  
Choose *user* to allow read access. 
Choose *owner* to allow that user to edit, share, or delete this QuickSight data source. 

1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in [Accessing Amazon Timestream from QuickSight](#Quicksight.accessing).

1. Choose **Datasets** at left, then scroll down to find the data source card for your Timestream connection, for example `US Timestream Data`.

1. Choose the **Timestream** data source card.

1. Choose **Share data source**. A list of current permissions displays.

1. (Optional) To edit permissions, you can choose *user* or *owner*.

1. (Optional) To revoke permissions, choose **Revoke access**. People you revoke can't create new datasets from this data source. However, their existing datasets will still have access to this data source.

1. To add permissions, choose **Invite users**, then follow these steps to add a user:

   1. Add people to allow them to use the same data source.

   1. For each, choose the **Permission** that you want to apply.

1. When you are finished, choose **Close**.

## Create a new QuickSight dataset for Timestream
<a name="Quicksight.create-data"></a>

1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in [Accessing Amazon Timestream from QuickSight](#Quicksight.accessing).

1. Choose **Datasets** at left, then scroll down to find the data source card for your Timestream connection. If you have many data sources, you can use the search bar at the top of the page to find it with a partial match on the name.

1. Choose the **Timestream** data source card. Then choose **Create data set**.

1. For **Database**, choose **Select** to view the list of available options. Choose the database that you want to use. 

1. For **Tables**, choose the table that you want to use.

1. Choose **Edit/Preview**.

1. (Optional) To add more data, choose **Add data** at top right. 

   1. Choose **Switch data source**, and choose a different data source. 

   1. Follow the UI prompts to finish adding data. 

   1. After adding new data to the same dataset, choose **Configure this join** (the two red dots). Set up a join for each additional table.

   1. If you want to add calculated fields, choose **Add calculated field**. 

   1. To use SageMaker, choose **Augment with SageMaker**. This option is only available in QuickSight Enterprise edition.

   1. Uncheck any fields you want to omit.

   1. Update any data types you want to change.

1. When you are done, choose **Save** to save and close the dataset. 

## Create a new analysis for Timestream
<a name="Quicksight.create-analysis"></a>

1. Ensure you have configured the appropriate permissions for Amazon QuickSight to access Amazon Timestream, as described in [Accessing Amazon Timestream from QuickSight](#Quicksight.accessing).

1. Choose **Analyses** at left.

1. Choose one of the following:
   + To create a new analysis, choose **New analysis** at right.
   + To add the Timestream dataset to an existing analysis, open the analysis you want to edit. Choose the pencil icon near the top left, then **Add data set**.

1. Start the first data visualization by choosing fields on the left. 

1. For more information, see [Working with Analyses](https://docs.aws.amazon.com/quicksight/latest/user/working-with-analyses.html) in the *Amazon QuickSight User Guide*.

## Video tutorial
<a name="Quicksight.video-tutorial"></a>

This [video](https://youtu.be/TzW4HWl-L8s) explains how QuickSight works with Timestream.

# Amazon SageMaker AI
<a name="Sagemaker"></a>

 You can use Amazon SageMaker Notebooks to integrate your machine learning models with Amazon Timestream. To help you get started, we have created a sample SageMaker Notebook that processes data from Timestream. The data is inserted into Timestream from a multi-threaded Python application continuously sending data. The source code for the sample SageMaker Notebook and the sample Python application are available in GitHub. 

1. Create a database and table following the instructions described in [Create a database](console_timestream.md#console_timestream.db.using-console) and [Create a table](console_timestream.md#console_timestream.table.using-console). 

1. Clone the GitHub repository for the [ multi-threaded Python sample application](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/python/continuous-ingestor) following the instructions from [ GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1. Clone the GitHub repository for the [sample Timestream SageMaker Notebook](https://github.com/awslabs/amazon-timestream-tools/blob/master/integrations/sagemaker) following the instructions from [ GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository). 

1. Run the application for continuously ingesting data into Timestream following the instructions in the [README](https://github.com/awslabs/amazon-timestream-tools/blob/mainline/tools/python/continuous-ingestor/README.md).

1. Follow the instructions to create an Amazon S3 bucket for Amazon SageMaker as described [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-config-permissions.html).

1. Create an Amazon SageMaker notebook instance with the latest boto3 installed. In addition to the instructions described [here](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-setup-working-env.html), follow the steps below:

   1. On the **Create notebook instance** page, choose **Additional configuration**.

   1. Choose **Lifecycle configuration - *optional***, and select **Create a new lifecycle configuration**.

   1. In the **Create lifecycle configuration** wizard, do the following:

      1. Enter a name for the configuration, for example `on-start`.

      1. In the Start notebook script, copy and paste the script content from [GitHub](https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-pip-package-single-environment/on-start.sh).

      1. Replace `PACKAGE=scipy` with `PACKAGE=boto3` in the pasted script.

1. Choose **Create configuration**.

1. Go to the IAM service in the AWS Management Console and find the newly created SageMaker execution role for the notebook instance.

1. Attach the IAM policy for `AmazonTimestreamFullAccess` to the execution role.
**Note**  
The `AmazonTimestreamFullAccess` IAM policy is not restricted to specific resources and is unsuitable for production use. For a production system, consider using policies that restrict access to specific resources.
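
   For example, a more narrowly scoped policy for a notebook that only runs queries might look like the following sketch. The Region, account ID, and database and table names are placeholders; `timestream:DescribeEndpoints` must remain on `"Resource": "*"`.

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "timestream:DescribeEndpoints",
         "Resource": "*"
       },
       {
         "Effect": "Allow",
         "Action": ["timestream:Select", "timestream:DescribeTable"],
         "Resource": "arn:aws:timestream:us-east-1:111122223333:database/sampleDB/table/sampleTable"
       }
     ]
   }
   ```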

1. When the status of the notebook instance is **InService**, choose **Open Jupyter** to launch a SageMaker notebook for the instance.

1. Upload the files `timestreamquery.py` and `Timestream_SageMaker_Demo.ipynb` into the notebook by choosing **Upload**.

1. Choose `Timestream_SageMaker_Demo.ipynb`.
**Note**  
If you see a pop-up with **Kernel not found**, choose **conda_python3**, and then choose **Set Kernel**.

1. Modify `DB_NAME`, `TABLE_NAME`, `bucket`, and `ENDPOINT` to match the database name, table name, S3 bucket name, and region for the training models.

1. Choose the **play** icon to run the individual cells.

1. When you get to the cell `Leverage Timestream to find hosts with average CPU utilization across the fleet`, ensure that the output returns at least two host names.
**Note**  
If there are fewer than two host names in the output, you may need to rerun the sample Python application ingesting data into Timestream with a larger number of threads and host-scale.

1. When you get to the cell `Train a Random Cut Forest (RCF) model using the CPU utilization history`, change the `train_instance_type` based on the resource requirements for your training job.

1. When you get to the cell `Deploy the model for inference`, change the `instance_type` based on the resource requirements for your inference job.
**Note**  
It may take a few minutes to train the model. When the training is complete, you will see the message **Completed - Training job completed** in the output of the cell.

1. Run the cell `Stop and delete the endpoint` to clean up resources. You can also stop and delete the instance from the SageMaker console.

# Amazon SQS
<a name="SQS"></a>

## Using EventBridge Pipes to send Amazon SQS data to Timestream
<a name="SQS-via-pipes"></a>

You can use EventBridge Pipes to send data from an Amazon SQS queue to an Amazon Timestream for LiveAnalytics table.

Pipes are intended for point-to-point integrations between supported sources and targets, with support for advanced transformations and enrichment. Pipes reduce the need for specialized knowledge and integration code when developing event-driven architectures. To set up a pipe, you choose the source, add optional filtering, define optional enrichment, and choose the target for the event data.

![\[A source sends events to an EventBridge pipe, which filters and routes matching events to the target.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/pipes-overview_shared_architecture.png)


For more information on EventBridge Pipes, see [EventBridge Pipes](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes.html) in the *EventBridge User Guide*. For information on configuring a pipe to deliver events to an Amazon Timestream for LiveAnalytics table, see [EventBridge Pipes target specifics](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-targets-specifics.html#pipes-targets-specifics-timestream).

# Using DBeaver to work with Amazon Timestream
<a name="DBeaver"></a>

[DBeaver](https://dbeaver.io/) is a free universal SQL client that can be used to manage any database that has a JDBC driver. It is widely used among developers and database administrators because of its robust data viewing, editing, and management capabilities.

Using DBeaver's cloud connectivity options, you can connect DBeaver to Amazon Timestream natively. DBeaver provides a comprehensive and intuitive interface to work with time series data directly from within the DBeaver application. With your credentials, it also gives you full access to any query that you could execute from another query interface. It even lets you create graphs for better understanding and visualization of query results.

## Setting up DBeaver to work with Timestream
<a name="DBeaver-setup"></a>

Take the following steps to set up DBeaver to work with Timestream:

1. [Download and install DBeaver](https://dbeaver.io/download/) on your local machine.

1. Launch DBeaver, navigate to the database selection area, choose **Timeseries** in the left pane, and then select the **Timestream** icon in the right pane:  
![\[DBeaver screenshot showing how to select Timestream in the database selection area.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/DBeaver-01.png)

1. In the **Timestream Connection Settings** window, enter all the information necessary to connect to your Amazon Timestream database. Ensure that the user keys you enter have the permissions necessary to access your Timestream database. Also, be sure to keep the information and keys you enter into DBeaver safe and private, as with any sensitive information.  
![\[DBeaver screenshot showing connection fields for Timestream.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/DBeaver-02.png)

1. Test the connection to ensure that everything is set up correctly:  
![\[DBeaver screenshot showing a successful connection test.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/DBeaver-03.png)

1. If the connection test is successful, you can now interact with your Amazon Timestream database just as you would with any other database in DBeaver. For example, you can navigate to the SQL editor or to the ER Diagram view to run queries:  
![\[DBeaver screenshot showing a Timestream query run from the SQL editor.\]](http://docs.aws.amazon.com/timestream/latest/developerguide/images/DBeaver-04.png)

1. DBeaver also provides powerful data visualization tools. To use them, run your query, then select the graph icon to visualize the result set. The graphing tool can help you better understand data trends over time.

Pairing Amazon Timestream with DBeaver creates an effective environment for managing time series data. You can integrate it seamlessly into your existing workflow to enhance productivity and efficiency.

# Grafana
<a name="Grafana"></a>

 You can visualize your time series data and create alerts using Grafana. To help you get started with data visualization, we have created a sample dashboard in Grafana that visualizes data sent to Timestream from a Python application and a [video tutorial ](https://youtu.be/pilkz645cs4) that describes the setup. 

**Topics**
+ [Sample application](#Grafana.sample-app)
+ [Video tutorial](#Grafana.video-tutorial)

## Sample application
<a name="Grafana.sample-app"></a>

1. Create a database and a table in Timestream following the instructions described in [Create a database](console_timestream.md#console_timestream.db.using-console).
**Note**  
The default database name and table name for the Grafana dashboard are set to `grafanaDB` and `grafanaTable`, respectively. Use these names to minimize setup.

1. Install [Python 3.7 ](https://www.python.org/downloads/)or higher.

1. [Install and configure the Timestream Python SDK](getting-started.python.md).

1. Clone the GitHub repository for the [multi-threaded Python application](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/tools/python/continuous-ingestor) continuously ingesting data into Timestream following the instructions from [GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1. Run the application for continuously ingesting data into Timestream following the instructions in the [README](https://github.com/awslabs/amazon-timestream-tools/blob/mainline/tools/python/continuous-ingestor/README.md). 

1. Complete [Learn how to create and use Amazon Managed Grafana resources](https://docs.aws.amazon.com/grafana/latest/userguide/getting-started-with-AMG.html) or complete [Install Grafana](https://grafana.com/docs/grafana/latest/installation/).

1. If installing Grafana instead of using Amazon Managed Grafana, complete [Installing Amazon Timestream on Grafana Cloud](https://grafana.com/grafana/plugins/grafana-timestream-datasource/?tab=installation/).

1. Open the Grafana dashboard using a browser of your choice. If you've locally installed Grafana, you can follow the instructions described in the Grafana documentation to [log in](https://grafana.com/docs/grafana/latest/getting-started/getting-started/#log-in-for-the-first-time).

1. After launching Grafana, go to Datasources, click on Add Datasource, search for Timestream, and select the Timestream datasource.

1. Configure the Auth Provider and the region and click Save and Test.

1. Set the default macros.

   1. Set `$__database` to the name of your Timestream database (for example, `grafanaDB`).

   1. Set `$__table` to the name of your Timestream table (for example, `grafanaTable`).

   1. Set `$__measure` to the most commonly used measure from the table.
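
   With the macros set, a panel query can reference them instead of hard-coded names. An illustrative query (adjust the time range and conditions to your data):

   ```
   SELECT * FROM $__database.$__table WHERE measure_name = '$__measure' AND time > ago(15m)
   ```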

1. Click Save and Test.

1. Click on the Dashboards tab.

1. Click on Import to import the dashboard.

1. Double click the Sample Application Dashboard.

1. Click on the dashboard settings.

1. Select Variables.

1. Change dbName and tableName to match the names of the Timestream database and table.

1. Click Save.

1. Refresh the dashboard.

1. To create alerts, follow the instructions described in the Grafana documentation to [Configure Grafana-managed alert rules](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-grafana-managed-rule/).

1. To troubleshoot alerts, follow the instructions described in the Grafana documentation for [Troubleshooting](https://grafana.com/docs/grafana/latest/troubleshooting/).

1. For additional information, see the [Grafana documentation](https://grafana.com/docs/).

## Video tutorial
<a name="Grafana.video-tutorial"></a>

This [video](https://youtu.be/pilkz645cs4) explains how Grafana works with Timestream.

# Using SquaredUp to work with Amazon Timestream
<a name="SquaredUp"></a>

[SquaredUp](https://SquaredUp.com/) is an observability platform that integrates with Amazon Timestream. You can use SquaredUp's intuitive dashboard designer to visualize, analyze, and monitor your time-series data. Dashboards can be shared publicly or privately, and notification channels can be created to alert you when the health state of a monitor changes.

## Using SquaredUp with Amazon Timestream
<a name="SquaredUp-using"></a>

1. [Sign up](https://app.squaredup.com/?signup=true) for [SquaredUp](https://squaredup.com/) and get started for free.

1. Add an [AWS data source](https://squaredup.com/cloud/pluginsetup-aws).

1. Create a dashboard tile that uses the [Timestream Query](https://squaredup.com/cloud/AWS-Timestream-Query) data stream.

1. Optionally, enable monitoring for the tile, create a notification channel, or share the dashboard publicly or privately.

1. Optionally, create other tiles to see your Timestream data alongside data from your other monitoring and observability tools.

# Open source Telegraf
<a name="Telegraf"></a>

 You can use the Timestream for LiveAnalytics output plugin for Telegraf to write metrics into Timestream for LiveAnalytics directly from open source Telegraf.

 This section provides an explanation of how to install Telegraf with the Timestream for LiveAnalytics output plugin, how to run Telegraf with the Timestream for LiveAnalytics output plugin, and how open source Telegraf works with Timestream for LiveAnalytics.

**Topics**
+ [Installing Telegraf with the Timestream for LiveAnalytics output plugin](Telegraf.installing-output-plugin.md)
+ [Running Telegraf with the Timestream for LiveAnalytics output plugin](Telegraf.running-output-plugin.title.md)
+ [Mapping Telegraf/InfluxDB metrics to the Timestream for LiveAnalytics model](Telegraf.how-it-works.md)

# Installing Telegraf with the Timestream for LiveAnalytics output plugin
<a name="Telegraf.installing-output-plugin"></a>

As of version 1.16, the Timestream for LiveAnalytics output plugin is available in the official Telegraf release. To install the output plugin on most major operating systems, follow the steps outlined in the [InfluxData Telegraf Documentation](https://docs.influxdata.com/telegraf/v1.16/introduction/installation/). To install on the Amazon Linux 2 OS, follow the instructions below.

## Installing Telegraf with the Timestream for LiveAnalytics output plugin on Amazon Linux 2
<a name="w2aab7c44c35b9b5"></a>

 To install Telegraf with the Timestream Output Plugin on Amazon Linux 2, perform the following steps. 

1. Install Telegraf using the `yum` package manager.

   ```
   cat <<EOF | sudo tee /etc/yum.repos.d/influxdb.repo
   [influxdb]
   name = InfluxDB Repository - RHEL \$releasever
   baseurl = https://repos.influxdata.com/rhel/\$releasever/\$basearch/stable
   enabled = 1
   gpgcheck = 1
   gpgkey = https://repos.influxdata.com/influxdb.key
   EOF
   ```

1. Run the following command.

   ```
   sudo sed -i "s/\$releasever/$(rpm -E %{rhel})/g" /etc/yum.repos.d/influxdb.repo
   ```

1. Install and start Telegraf.

   ```
   sudo yum install telegraf
   sudo service telegraf start
   ```

# Running Telegraf with the Timestream for LiveAnalytics output plugin
<a name="Telegraf.running-output-plugin.title"></a>

You can follow the instructions below to run Telegraf with the Timestream for LiveAnalytics plugin.

1. Generate an example configuration using Telegraf.

   ```
   telegraf --section-filter agent:inputs:outputs --input-filter cpu:mem --output-filter timestream config > example.config
   ```

1. Create a database in Timestream [using the management console](console_timestream.md#console_timestream.db.using-console), [CLI](https://docs.aws.amazon.com/cli/latest/reference/timestream-write/create-database.html), or [SDKs](getting-started-sdks.md).

1. In the `example.config` file, add your database name by editing the following key under the `[[outputs.timestream]]` section.

   ```
   database_name = "yourDatabaseNameHere"
   ```

1. By default, Telegraf will create a table. If you wish to create a table manually, set `create_table_if_not_exists` to `false` and follow the instructions to create a table [using the management console](console_timestream.md#console_timestream.table.using-console), [CLI](https://docs.aws.amazon.com/cli/latest/reference/timestream-write/create-table.html), or [SDKs](getting-started-sdks.md).
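
   For reference, the relevant keys sit together in the `[[outputs.timestream]]` section of the generated config. A sketch with illustrative values (consult the plugin's documentation in `example.config` for the full set of options):

   ```
   [[outputs.timestream]]
     region = "us-east-1"
     database_name = "yourDatabaseNameHere"
     mapping_mode = "multi-table"
     create_table_if_not_exists = false
     describe_database_on_start = false
   ```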

1. In the `example.config` file, configure credentials under the `[[outputs.timestream]]` section. The credentials should allow the following operations.

   ```
   timestream:DescribeEndpoints
   timestream:WriteRecords
   ```
**Note**  
If you leave `create_table_if_not_exists` set to `true`, include:  

   ```
   timestream:CreateTable
   ```
**Note**  
If you set `describe_database_on_start` to `true`, include the following.  

   ```
   timestream:DescribeDatabase
   ```

1. You can edit the rest of the configuration according to your preferences.

1. When you have finished editing the config file, run Telegraf with the following.

   ```
   ./telegraf --config example.config
   ```

1. Metrics should appear within a few seconds, depending on your agent configuration. You should also see the new tables, *cpu* and *mem*, in the Timestream console.

# Mapping Telegraf/InfluxDB metrics to the Timestream for LiveAnalytics model
<a name="Telegraf.how-it-works"></a>

 When writing data from Telegraf to Timestream for LiveAnalytics, the data is mapped as follows.
+ The timestamp is written as the time field.
+ Tags are written as dimensions.
+ Fields are written as measures.
+ Measurements are mostly written as table names (more on this below).

The Timestream for LiveAnalytics output plugin for Telegraf offers multiple options for organizing and storing data in Timestream for LiveAnalytics. These options are best described with an example that begins with data in line protocol format.

```
weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200
airquality,location=us-west no2=5,pm25=16 1465839830100400200
```

The following describes the data.
+ The measurement names are `weather` and `airquality`.
+ The tags are `location` and `season`.
+ The fields are `temperature`, `humidity`, `no2`, and `pm25`.
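To make this structure concrete, the following hypothetical Java sketch (not part of the plugin) splits one simple line-protocol entry into its measurement, tag set, field set, and timestamp. It handles only this whitespace-free happy path, with no escaping:

```java
public class LineProtocolExample {

    // Splits a simple line-protocol entry into
    // [measurement, tags, fields, timestamp].
    static String[] split(String line) {
        String[] parts = line.split(" ");              // name+tags, fields, timestamp
        String[] nameAndTags = parts[0].split(",", 2); // measurement, then tag pairs
        String tags = nameAndTags.length > 1 ? nameAndTags[1] : "";
        return new String[] { nameAndTags[0], tags, parts[1], parts[2] };
    }

    public static void main(String[] args) {
        String[] p = split(
            "weather,location=us-midwest,season=summer temperature=82,humidity=71 1465839830100400200");
        System.out.println(p[0]); // weather                           -> table name
        System.out.println(p[1]); // location=us-midwest,season=summer -> dimensions
        System.out.println(p[2]); // temperature=82,humidity=71        -> measures
        System.out.println(p[3]); // 1465839830100400200               -> time
    }
}
```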

**Topics**
+ [Storing the data in multiple tables](#Telegraf.how-it-works.multi-table-single-measure.title)
+ [Storing the data in a single table](#Telegraf.how-it-works.single-table-single-measure.title)

## Storing the data in multiple tables
<a name="Telegraf.how-it-works.multi-table-single-measure.title"></a>

You can choose to create a separate table per measurement and store each field in a separate row per table.

The configuration is `mapping_mode = "multi-table"`.
+ The Timestream for LiveAnalytics adapter will create two tables, namely, `weather` and `airquality`.
+ Each table row will contain a single field only.

The resulting Timestream for LiveAnalytics tables, `weather` and `airquality`, will look like this.


**`weather`**  

| time | location | season | measure\_name | measure\_value::bigint | 
| --- | --- | --- | --- | --- | 
|  2016-06-13 17:43:50  |  us-midwest  |  summer  |  temperature  |  82  | 
|  2016-06-13 17:43:50  |  us-midwest  |  summer  |  humidity  |  71  | 


**`airquality`**  

| time | location | measure\_name | measure\_value::bigint | 
| --- | --- | --- | --- | 
|  2016-06-13 17:43:50  |  us-west  |  no2  |  5  | 
|  2016-06-13 17:43:50  |  us-west  |  pm25  |  16  | 
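In `multi-table` mode each field expands into its own row, as the tables above show. A hypothetical Java sketch of that per-field expansion (the `toRows` helper is illustrative, not plugin code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MultiTableExpansion {

    // Expands a field set into one (measure_name, measure_value) row per field.
    static List<String> toRows(Map<String, Long> fields) {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<String, Long> field : fields.entrySet()) {
            rows.add("measure_name=" + field.getKey()
                    + ", measure_value::bigint=" + field.getValue());
        }
        return rows;
    }

    public static void main(String[] args) {
        // Fields parsed from: weather,... temperature=82,humidity=71 <timestamp>
        Map<String, Long> weatherFields = new LinkedHashMap<>();
        weatherFields.put("temperature", 82L);
        weatherFields.put("humidity", 71L);

        // Two fields -> two rows in the weather table.
        for (String row : toRows(weatherFields)) {
            System.out.println(row);
        }
        // measure_name=temperature, measure_value::bigint=82
        // measure_name=humidity, measure_value::bigint=71
    }
}
```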

## Storing the data in a single table
<a name="Telegraf.how-it-works.single-table-single-measure.title"></a>

You can choose to store all the measurements in a single table and store each field in a separate table row.

The configuration is `mapping_mode = "single-table"`. There are two additional configuration options when using `single-table`: `single_table_name` and `single_table_dimension_name_for_telegraf_measurement_name`.
+ The Timestream for LiveAnalytics output plugin will create a single table named *<single\_table\_name>*, which includes a *<single\_table\_dimension\_name\_for\_telegraf\_measurement\_name>* column.
+ Each table row contains a single field, as in `multi-table` mode.

The resulting Timestream for LiveAnalytics table will look like this.


**`<single\_table\_name>`**  

| time | location | season | *<single\_table\_dimension\_name\_for\_telegraf\_measurement\_name>* | measure\_name | measure\_value::bigint | 
| --- | --- | --- | --- | --- | --- | 
|  2016-06-13 17:43:50  |  us-midwest  |  summer  |  weather  |  temperature  |  82  | 
|  2016-06-13 17:43:50  |  us-midwest  |  summer  |  weather  |  humidity  |  71  | 
|  2016-06-13 17:43:50  |  us-west  |    |  airquality  |  no2  |  5  | 
|  2016-06-13 17:43:50  |  us-west  |    |  airquality  |  pm25  |  16  | 

# JDBC
<a name="JDBC"></a>

 You can use a Java Database Connectivity (JDBC) connection to connect Timestream for LiveAnalytics to your business intelligence tools and other applications, such as [SQL Workbench](https://www.sql-workbench.eu/). The Timestream for LiveAnalytics JDBC driver currently supports SSO with Okta and Microsoft Azure AD. 

**Topics**
+ [Configuring the JDBC driver for Timestream for LiveAnalytics](JDBC.configuring.md)
+ [Connection properties](JDBC.connection-properties.md)
+ [JDBC URL examples](JDBC.url-examples.md)
+ [Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Okta](JDBC.SSOwithOkta.md)
+ [Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD](JDBC.withAzureAD.md)

# Configuring the JDBC driver for Timestream for LiveAnalytics
<a name="JDBC.configuring"></a>

Follow the steps below to configure the JDBC driver. 

**Topics**
+ [Timestream for LiveAnalytics JDBC driver JARs](#w2aab7c44c37b7b7)
+ [Timestream for LiveAnalytics JDBC driver class and URL format](#w2aab7c44c37b7b9)
+ [Sample application](#w2aab7c44c37b7c11)

## Timestream for LiveAnalytics JDBC driver JARs
<a name="w2aab7c44c37b7b7"></a>

 You can obtain the Timestream for LiveAnalytics JDBC driver via direct download or by adding the driver as a Maven dependency. 
+  *As a direct download:* To download the Timestream for LiveAnalytics JDBC driver directly, complete the following steps:

  1. Navigate to [https://github.com/awslabs/amazon-timestream-driver-jdbc/releases](https://github.com/awslabs/amazon-timestream-driver-jdbc/releases).

  1. Download `amazon-timestream-jdbc-1.0.1-shaded.jar`. You can use it directly with your business intelligence tools and applications.

  1. Download `amazon-timestream-jdbc-1.0.1-javadoc.jar` to a directory of your choice.

  1. In the directory where you have downloaded `amazon-timestream-jdbc-1.0.1-javadoc.jar`, run the following command to extract the Javadoc HTML files: 

     ```
     jar -xvf amazon-timestream-jdbc-1.0.1-javadoc.jar
     ```
+  *As a Maven dependency:* To add the Timestream for LiveAnalytics JDBC driver as a Maven dependency, complete the following steps:

  1. Navigate to and open your application's `pom.xml` file in an editor of your choice.

  1. Add the JDBC driver as a dependency into your application's `pom.xml` file:

     ```
     <!-- https://mvnrepository.com/artifact/software.amazon.timestream/amazon-timestream-jdbc -->
     <dependency>
         <groupId>software.amazon.timestream</groupId>
         <artifactId>amazon-timestream-jdbc</artifactId>
         <version>1.0.1</version>
     </dependency>
     ```

## Timestream for LiveAnalytics JDBC driver class and URL format
<a name="w2aab7c44c37b7b9"></a>

 The driver class for Timestream for LiveAnalytics JDBC driver is: 

```
software.amazon.timestream.jdbc.TimestreamDriver
```

 The Timestream JDBC driver requires the following JDBC URL format: 

```
jdbc:timestream:
```

 To specify connection properties through the JDBC URL, use the following URL format: 

```
jdbc:timestream://PropertyName1=value1;PropertyName2=value2...
```

## Sample application
<a name="w2aab7c44c37b7c11"></a>

To help you get started with using Timestream for LiveAnalytics with JDBC, we've created a fully functional sample application in GitHub.

1. Create a database with sample data following the instructions described [here](getting-started.db-w-sample-data.md#getting-started.db-w-sample-data.using-console).

1. Clone the GitHub repository for the [sample application for JDBC](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/jdbc) following the instructions from [GitHub](https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/cloning-a-repository).

1. Follow the instructions in the [README](https://github.com/awslabs/amazon-timestream-tools/tree/mainline/integrations/jdbc/README.md) to get started with the sample application.

# Connection properties
<a name="JDBC.connection-properties"></a>

 The Timestream for LiveAnalytics JDBC driver supports the following options: 

**Topics**
+ [Basic authentication options](#JDBC.connection-properties.basic-auth)
+ [Standard client info option](#JDBC.connection-properties.standard-client)
+ [Driver configuration option](#JDBC.connection-properties.driver-config)
+ [SDK option](#JDBC.connection-properties.sdk-options)
+ [Endpoint configuration option](#JDBC.connection-properties.endpoint-config)
+ [Credential provider options](#JDBC.connection-properties.cred-providers)
+ [SAML-based authentication options for Okta](#JDBC.connection-properties.okta)
+ [SAML-based authentication options for Azure AD](#JDBC.connection-properties.azure-ad)

**Note**  
 If none of the properties are provided, the Timestream for LiveAnalytics JDBC driver will use the default credentials chain to load the credentials. 

**Note**  
 All property keys are case-sensitive. 

## Basic authentication options
<a name="JDBC.connection-properties.basic-auth"></a>

The following table describes the available Basic Authentication options.


| Option | Description | Default | 
| --- | --- | --- | 
|  AccessKeyId  |  The AWS user access key id.  |  NONE  | 
|  SecretAccessKey  |  The AWS user secret access key.  |  NONE  | 
|  SessionToken  |  The temporary session token required to access a database with multi-factor authentication (MFA) enabled.  |  NONE  | 

## Standard client info option
<a name="JDBC.connection-properties.standard-client"></a>

The following table describes the Standard Client Info Option.


| Option | Description | Default | 
| --- | --- | --- | 
|  ApplicationName  |  The name of the application currently utilizing the connection. `ApplicationName` is used for debugging purposes and will not be communicated to the Timestream for LiveAnalytics service.  |  The application name detected by the driver.  | 

## Driver configuration option
<a name="JDBC.connection-properties.driver-config"></a>

The following table describes the Driver Configuration Option.


| Option | Description | Default | 
| --- | --- | --- | 
|  EnableMetaDataPreparedStatement  |  Enables the Timestream for LiveAnalytics JDBC driver to return metadata for `PreparedStatement`s. This incurs an additional cost with Timestream for LiveAnalytics when retrieving the metadata.  |  FALSE  | 
|  Region  |  The database's region.  |  us-east-1  | 

## SDK option
<a name="JDBC.connection-properties.sdk-options"></a>

The following table describes the SDK Option.


| Option | Description | Default | 
| --- | --- | --- | 
|  RequestTimeout  |  The time in milliseconds the AWS SDK will wait for a query request before timing out. A non-positive value disables the request timeout.  |  0  | 
|  SocketTimeout  |  The time in milliseconds the AWS SDK will wait for data to be transferred over an open connection before timing out. Value must be non-negative. A value of `0` disables socket timeout.  |  50000  | 
|  MaxRetryCountClient  |  The maximum number of retry attempts for retryable errors with 5XX error codes in the SDK. The value must be non-negative.  |  NONE  | 
|  MaxConnections  |  The maximum number of allowed concurrently opened HTTP connections to the Timestream for LiveAnalytics service. The value must be positive.  |  50  | 

## Endpoint configuration option
<a name="JDBC.connection-properties.endpoint-config"></a>

The following table describes the Endpoint Configuration Option.


| Option | Description | Default | 
| --- | --- | --- | 
|  Endpoint  |  The endpoint for the Timestream for LiveAnalytics service.  |  NONE  | 

## Credential provider options
<a name="JDBC.connection-properties.cred-providers"></a>

The following table describes the available Credential Provider options.


| Option | Description | Default | 
| --- | --- | --- | 
|  AwsCredentialsProviderClass  |  One of `PropertiesFileCredentialsProvider` or `InstanceProfileCredentialsProvider` to use for authentication.  |  NONE  | 
|  CustomCredentialsFilePath  |  The path to a properties file containing AWS security credentials `accessKey` and `secretKey`. This is only required if `AwsCredentialsProviderClass` is specified as `PropertiesFileCredentialsProvider` .  |  NONE  | 

## SAML-based authentication options for Okta
<a name="JDBC.connection-properties.okta"></a>

The following table describes the available SAML-based authentication options for Okta.


| Option | Description | Default | 
| --- | --- | --- | 
|  IdpName  |  The Identity Provider (Idp) name to use for SAML-based authentication. One of `Okta` or `AzureAD`.  |  NONE  | 
|  IdpHost  |  The host name of the specified Idp.  |  NONE  | 
|  IdpUserName  |  The user name for the specified Idp account.  |  NONE  | 
|  IdpPassword  |  The password for the specified Idp account.  |  NONE  | 
|  OktaApplicationID  |  The unique Okta-provided ID associated with the Timestream for LiveAnalytics application. `AppId` can be found in the `entityID` field provided in the application metadata. Consider the following example: `entityID = http://www.okta.com//IdpAppID`  |  NONE  | 
|  RoleARN  |  The Amazon Resource Name (ARN) of the role that the caller is assuming.  |  NONE  | 
|  IdpARN  |  The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the Idp.  |  NONE  | 

## SAML-based authentication options for Azure AD
<a name="JDBC.connection-properties.azure-ad"></a>

The following table describes the available SAML-based authentication options for Azure AD.


| Option | Description | Default | 
| --- | --- | --- | 
|  IdpName  |  The Identity Provider (Idp) name to use for SAML-based authentication. One of `Okta` or `AzureAD` .  |  NONE  | 
|  IdpHost  |  The host name of the specified Idp.  |  NONE  | 
|  IdpUserName  |  The user name for the specified Idp account.  |  NONE  | 
|  IdpPassword  |  The password for the specified Idp account.  |  NONE  | 
|  AADApplicationID  |  The unique id of the registered application on Azure AD.  |  NONE  | 
|  AADClientSecret  |  The client secret associated with the registered application on Azure AD used to authorize fetching tokens.  |  NONE  | 
|  AADTenant  |  The Azure AD Tenant ID.  |  NONE  | 
|  IdpARN  |  The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the Idp.  |  NONE  | 

# JDBC URL examples
<a name="JDBC.url-examples"></a>

 This section describes how to create a JDBC connection URL, and provides examples. To specify the [optional connection properties](JDBC.connection-properties.md), use the following URL format:

```
jdbc:timestream://PropertyName1=value1;PropertyName2=value2... 
```

**Note**  
All connection properties are optional. All property keys are case-sensitive.

Below are some examples of JDBC connection URLs.

*Example with basic authentication options and region:*  

```
jdbc:timestream://AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken>;Region=us-east-1
```

*Example with client info, region and SDK options:*  

```
jdbc:timestream://ApplicationName=MyApp;Region=us-east-1;MaxRetryCountClient=10;MaxConnections=5000;RequestTimeout=20000
```

*Connect using the default credential provider chain with AWS credential set in environment variables:*  

```
jdbc:timestream
```

*Connect using the default credential provider chain with AWS credential set in the connection URL:*  

```
jdbc:timestream://AccessKeyId=<myAccessKeyId>;SecretAccessKey=<mySecretAccessKey>;SessionToken=<mySessionToken>
```

*Connect using the PropertiesFileCredentialsProvider as the authentication method:*  

```
jdbc:timestream://AwsCredentialsProviderClass=PropertiesFileCredentialsProvider;CustomCredentialsFilePath=<path to properties file>
```

*Connect using the InstanceProfileCredentialsProvider as the authentication method:*  

```
jdbc:timestream://AwsCredentialsProviderClass=InstanceProfileCredentialsProvider
```

*Connect using the Okta credentials as the authentication method:*  

```
jdbc:timestream://IdpName=Okta;IdpHost=<host>;IdpUserName=<name>;IdpPassword=<password>;OktaApplicationID=<id>;RoleARN=<roleARN>;IdpARN=<IdpARN>
```

*Connect using the Azure AD credentials as the authentication method:*  

```
jdbc:timestream://IdpName=AzureAD;IdpUserName=<name>;IdpPassword=<password>;AADApplicationID=<id>;AADClientSecret=<secret>;AADTenant=<tenantID>;IdpARN=<IdpARN>
```

*Connect with a specific endpoint:*  

```
jdbc:timestream://Endpoint=abc.us-east-1.amazonaws.com;Region=us-east-1
```
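Since the property-based URLs above are plain `Key=value` pairs joined by semicolons, they can be assembled mechanically. The following hypothetical Java helper (not part of the JDBC driver) sketches that assembly:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TimestreamJdbcUrl {

    // Builds a jdbc:timestream URL from ordered connection properties.
    static String build(Map<String, String> props) {
        if (props.isEmpty()) {
            return "jdbc:timestream"; // default credential provider chain
        }
        StringBuilder url = new StringBuilder("jdbc:timestream://");
        boolean first = true;
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (!first) url.append(';');
            url.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("ApplicationName", "MyApp");
        props.put("Region", "us-east-1");
        System.out.println(build(props));
        // jdbc:timestream://ApplicationName=MyApp;Region=us-east-1
    }
}
```

Remember that property keys are case-sensitive, so the map keys must match the option names in the tables above exactly.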

# Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Okta
<a name="JDBC.SSOwithOkta"></a>

 Timestream for LiveAnalytics supports Timestream for LiveAnalytics JDBC single sign-on authentication with Okta. To use Timestream for LiveAnalytics JDBC single sign-on authentication with Okta, complete each of the sections listed below. 

**Topics**
+ [Prerequisites](aws-sso-with-okta-prerequisites.md)
+ [AWS account federation in Okta](aws-account-federation-in-okta.md)
+ [Setting up Okta for SAML](aws-setting-up-okta-for-saml.md)

# Prerequisites
<a name="aws-sso-with-okta-prerequisites"></a>

Ensure that you have met the following prerequisites before using the Timestream for LiveAnalytics JDBC single sign-on authentication with Okta:
+ [Admin permissions in AWS to create the identity provider and the roles](security-iam.md).
+  An Okta account (Go to [https://www.okta.com/login/](https://www.okta.com/login/) to create an account).
+ [Access to Amazon Timestream for LiveAnalytics](accessing.md).

Now that you have completed the Prerequisites, you may proceed to [AWS account federation in Okta](aws-account-federation-in-okta.md).

# AWS account federation in Okta
<a name="aws-account-federation-in-okta"></a>

The Timestream for LiveAnalytics JDBC driver supports AWS Account Federation in Okta. To set up AWS Account Federation in Okta, complete the following steps:

1. Sign in to the Okta Admin dashboard using the following URL:

   ```
   https://<company-domain-name>-admin.okta.com/admin/apps/active 
   ```
**Note**  
 Replace **<company-domain-name>** with your domain name. 

1. Upon successful sign-in, choose **Add Application** and search for **AWS Account Federation**.

1. Choose **Add**

1. Change the Login URL to the appropriate URL.

1. Choose **Next**

1. Choose **SAML 2.0** as the **Sign-On** method

1. Choose **Identity Provider metadata** to open the metadata XML file. Save the file locally.

1. Leave all other configuration options blank.

1. Choose **Done**

Now that you have completed AWS Account Federation in Okta, you may proceed to [Setting up Okta for SAML](aws-setting-up-okta-for-saml.md).

# Setting up Okta for SAML
<a name="aws-setting-up-okta-for-saml"></a>

1. Choose the **Sign On** tab.

1. Choose the **View Setup Instructions** button in the **Settings** section.

**Finding the Okta metadata document**

1. To find the document, go to:

   ```
   https://<domain>-admin.okta.com/admin/apps/active
   ```
**Note**  
 <domain> is your unique domain name for your Okta account. 

1. Choose the **AWS Account Federation** application

1. Choose the **Sign On** tab

# Setting up Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD
<a name="JDBC.withAzureAD"></a>

 Timestream for LiveAnalytics supports Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD. To use Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete each of the sections listed below. 

**Topics**
+ [Prerequisites](JDBC.withAzureAD.prereqs.md)
+ [Setting up Azure AD](JDBC.withAzureAD.setUp.md)
+ [Setting up IAM Identity Provider and roles in AWS](JDBC.withAzureAD.IAM.md)

# Prerequisites
<a name="JDBC.withAzureAD.prereqs"></a>

Ensure that you have met the following prerequisites before using the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD:
+ [Admin permissions in AWS to create the identity provider and the roles](security-iam.md).
+ An Azure Active Directory account (Go to [ https://azure.microsoft.com/en-ca/services/active-directory/](https://azure.microsoft.com/en-ca/services/active-directory/) to create an account)
+ [Access to Amazon Timestream for LiveAnalytics](accessing.md).

# Setting up Azure AD
<a name="JDBC.withAzureAD.setUp"></a>

1. Sign in to Azure Portal

1. Choose **Azure Active Directory** in the list of Azure services. This will redirect to the Default Directory page.

1. Choose **Enterprise Applications** under the **Manage** section on the sidebar

1. Choose **+ New application**.

1. Find and select **Amazon Web Services**.

1. Choose **Single Sign-On** under the **Manage** section in the sidebar

1. Choose **SAML** as the single sign-on method

1. In the Basic SAML Configuration section, enter the following URL for both the Identifier and the Reply URL:

   ```
   https://signin.aws.amazon.com/saml
   ```

1. Choose **Save**

1. Download the Federation Metadata XML in the SAML Signing Certificate section. This will be used when creating the IAM Identity Provider later

1. Return to the Default Directory page and choose **App registrations** under **Manage**.

1. Choose **Timestream for LiveAnalytics** from the **All Applications** section. The page will be redirected to the application's Overview page
**Note**  
Note the Application (client) ID and the Directory (tenant) ID. These values are required for when creating a connection.

1. Choose **Certificates and Secrets**

1. Under **Client secrets**, create a new client secret with **+ New client secret**.
**Note**  
Note the generated client secret, as this is required when creating a connection to Timestream for LiveAnalytics.

1. On the sidebar under **Manage**, select **API permissions**

1. In the **Configured permissions**, use **Add a permission** to grant Azure AD permission to sign in to Timestream for LiveAnalytics. Choose **Microsoft Graph** on the Request API permissions page.

1. Choose **Delegated permissions** and select the **User.Read** permission

1. Choose **Add permissions**

1. Choose **Grant admin consent for Default Directory**

# Setting up IAM Identity Provider and roles in AWS
<a name="JDBC.withAzureAD.IAM"></a>

 Complete each section below to set up IAM for Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD: 

**Topics**
+ [Create a SAML Identity Provider](#JDBC.withAzureAD.IAM.SAML)
+ [Create an IAM role](#JDBC.withAzureAD.IAM.roleForIAM)
+ [Create an IAM policy](#JDBC.withAzureAD.IAM.policyForIAM)
+ [Provisioning](#JDBC.withAzureAD.IAM.provisioning)

## Create a SAML Identity Provider
<a name="JDBC.withAzureAD.IAM.SAML"></a>

To create a SAML Identity Provider for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps:

1. Sign in to the AWS Management Console

1. Choose **Services** and select **IAM** under Security, Identity, & Compliance

1. Choose **Identity providers** under Access management

1. Choose **Create Provider** and choose **SAML** as the provider type. Enter the **Provider Name**. This example will use AzureADProvider.

1. Upload the previously downloaded Federation Metadata XML file

1. Choose **Next**, then choose **Create**.

1. Upon completion, the page will be redirected back to the Identity providers page

## Create an IAM role
<a name="JDBC.withAzureAD.IAM.roleForIAM"></a>

To create an IAM role for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps:

1. On the sidebar select **Roles** under Access management

1. Choose Create role

1. Choose **SAML 2.0 federation** as the trusted entity

1. Choose the **Azure AD provider**

1. Choose **Allow programmatic and AWS Management Console access**

1. Choose **Next: Permissions**

1. Attach permissions policies, or continue to **Next: Tags**

1. Add optional tags, or continue to **Next: Review**

1. Enter a Role name. This example will use AzureSAMLRole

1. Provide a role description

1. Choose **Create Role** to complete

## Create an IAM policy
<a name="JDBC.withAzureAD.IAM.policyForIAM"></a>

To create an IAM policy for the Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD complete the following steps:

1. On the sidebar, choose **Policies** under Access management

1. Choose **Create policy** and select the **JSON** tab

1. Add the following policy

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "iam:ListRoles",
                   "iam:ListAccountAliases"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

1. Choose **Review policy**

1. Enter a policy name. This example will use TimestreamAccessPolicy.

1. Choose **Create Policy**

1. On the sidebar, choose **Roles** under Access management. 

1.  Choose the previously created **Azure AD role** and choose **Attach policies** under Permissions.

1. Select the previously created access policy.

## Provisioning
<a name="JDBC.withAzureAD.IAM.provisioning"></a>

To provision the identity provider for Timestream for LiveAnalytics JDBC single sign-on authentication with Microsoft Azure AD, complete the following steps:

1. Go back to Azure Portal

1. Choose **Azure Active Directory** in the list of Azure services. This will redirect to the Default Directory page

1. Choose **Enterprise Applications** under the Manage section on the sidebar

1. Choose **Provisioning**

1. Choose **Automatic mode** for the Provisioning Method

1. Under Admin Credentials, enter your AWS access key ID for **clientsecret** and your secret access key for **Secret Token**

1. Set the **Provisioning Status** to **On**

1. Choose **Save**. This allows Azure AD to load the necessary IAM Roles

1. Once the Current cycle status is completed, choose **Users and groups** on the sidebar

1. Choose **+ Add user**

1. Choose the Azure AD user to provide access to Timestream for LiveAnalytics

1. Choose the IAM Azure AD role and the corresponding Azure Identity Provider created in AWS

1. Choose **Assign**

# ODBC
<a name="ODBC"></a>

The open-source [ODBC driver](https://github.com/awslabs/amazon-timestream-odbc-driver/tree/main) for Amazon Timestream for LiveAnalytics provides an SQL-relational interface to Timestream for LiveAnalytics for developers and enables connectivity from business intelligence (BI) tools such as Power BI Desktop and Microsoft Excel. The Timestream for LiveAnalytics ODBC driver is currently available on [Windows, macOS and Linux](https://github.com/awslabs/amazon-timestream-odbc-driver/releases), and also supports SSO with Okta and Microsoft Azure Active Directory (AD).

For more information, see [Amazon Timestream for LiveAnalytics ODBC driver documentation on GitHub](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/index.md).

**Topics**
+ [Setting up the Timestream for LiveAnalytics ODBC driver](ODBC-setup.md)
+ [Connection string syntax and options for the ODBC driver](ODBC-connecting.md)
+ [Connection string examples for the Timestream for LiveAnalytics ODBC driver](ODBC-connecting-examples.md)
+ [Troubleshooting connection with the ODBC driver](ODBC-connecting-troubleshooting.md)

# Setting up the Timestream for LiveAnalytics ODBC driver
<a name="ODBC-setup"></a>

## Set up access to Timestream for LiveAnalytics in your AWS account
<a name="ODBC-setup-access"></a>

If you haven't already set up your AWS account to work with Timestream for LiveAnalytics, follow the instructions in [Accessing Timestream for LiveAnalytics](accessing.md).

## Install the ODBC driver on your system
<a name="ODBC-setup-download"></a>

Download the appropriate Timestream ODBC driver installer for your system from the [ODBC GitHub repository](https://github.com/awslabs/amazon-timestream-odbc-driver/releases), and follow the installation instructions that apply to your system:
+ [Windows installation guide](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/windows-installation-guide.md)
+ [MacOS installation guide](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/macOS-installation-guide.md)
+ [Linux installation guide](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/linux-installation-guide.md)

## Set up a data source name (DSN) for the ODBC driver
<a name="ODBC-setup-dsn"></a>

Follow the instructions in the DSN configuration guide for your system:
+ [Windows DSN configuration](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/windows-dsn-configuration.md)
+ [MacOS DSN configuration](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/macOS-dsn-configuration.md)
+ [Linux DSN configuration](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/linux-dsn-configuration.md)

## Set up your business intelligence (BI) application to work with the ODBC driver
<a name="ODBC-setup-bi-apps"></a>

Here are instructions for setting up several common BI applications to work with the ODBC driver:
+ [Setting up Microsoft Power BI](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/microsoft-power-bi.md)
+ [Setting up Microsoft Excel](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/microsoft-excel.md)
+ [Setting up Tableau](https://github.com/awslabs/amazon-timestream-odbc-driver/blob/main/docs/markdown/setup/tableau.md)

For other applications

# Connection string syntax and options for the ODBC driver
<a name="ODBC-connecting"></a>

The syntax for specifying connection-string options for the ODBC driver is as follows:

```
DRIVER={Amazon Timestream ODBC Driver};(option)=(value);
```
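For example, a minimal DSN-less connection string that uses the default credential chain and an explicit signing region (built only from options documented below) might look like:

```
DRIVER={Amazon Timestream ODBC Driver};Auth=AWS_PROFILE;Region=us-east-1;
```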

Available options are as follows:

**Driver connection options**
+ **`Driver`**   *(required)*   –   The driver being used with ODBC.

  The default is Amazon Timestream.
+ **`DSN`**   –   The data source name (DSN) to use for configuring the connection.

  The default is `NONE`.
+ **`Auth`**   –   The authentication mode. This must be one of the following:
  + `AWS_PROFILE` – Use the default credential chain.
  + `IAM` – Use AWS IAM credentials.
  + `AAD` – Use the Azure Active Directory (AD) identity provider.
  + `OKTA` – Use the Okta identity provider.

  The default is `AWS_PROFILE`.

**Endpoint configuration options**
+ **`EndpointOverride`**   –   The endpoint override for the Timestream for LiveAnalytics service. This is an advanced option that overrides the region. For example:

  ```
  query-cell2.timestream.us-east-1.amazonaws.com
  ```
+ **`Region`**   –   The signing region for the Timestream for LiveAnalytics service endpoint.

  The default is `us-east-1`.

**Credentials provider option**
+ **`ProfileName`**   –   The profile name in the AWS config file.

  The default is `NONE`.

**AWS IAM authentication options**
+ **`UID`** or **`AccessKeyId`**   –   The AWS user access key id. If both `UID` and `AccessKeyId` are provided in the connection string, the `UID` value will be used unless it is empty.

  The default is `NONE`.
+ **`PWD`** or **`SecretKey`**   –   The AWS user secret access key. If both `PWD` and `SecretKey` are provided in the connection string, the `PWD` value will be used unless it's empty.

  The default is `NONE`.
+ **`SessionToken`**   –   The temporary session token required to access a database with multi-factor authentication (MFA) enabled. Do not include a trailing `=` in the input.

  The default is `NONE`.

**SAML-based authentication options for Okta**
+ **`IdPHost`**   –   The hostname of the specified IdP.

  The default is `NONE`.
+ **`UID`** or **`IdPUserName`**   –   The user name for the specified IdP account. If both `UID` and `IdPUserName` are provided in the connection string, the `UID` value will be used unless it's empty.

  The default is `NONE`.
+ **`PWD`** or **`IdPPassword`**   –   The password for the specified IdP account. If both `PWD` and `IdPPassword` are provided in the connection string, the `PWD` value will be used unless it's empty.

  The default is `NONE`.
+ **`OktaApplicationID`**   –   The unique Okta-provided ID associated with the Timestream for LiveAnalytics application. You can find the application ID (AppId) in the `entityID` field of the application metadata. For example:

  ```
  entityID="http://www.okta.com/(IdPAppID)"
  ```

  The default is `NONE`.
+ **`RoleARN`**   –   The Amazon Resource Name (ARN) of the role that the caller is assuming.

  The default is `NONE`.
+ **`IdPARN`**   –   The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the IdP.

  The default is `NONE`.

**SAML-based authentication options for Azure Active Directory**
+ **`UID`** or **`IdPUserName`**   –   The user name for the specified IdP account.

  The default is `NONE`.
+ **`PWD`** or **`IdPPassword`**   –   The password for the specified IdP account.

  The default is `NONE`.
+ **`AADApplicationID`**   –   The unique ID of the registered application on Azure AD.

  The default is `NONE`.
+ **`AADClientSecret`**   –   The client secret associated with the registered application on Azure AD used to authorize fetching tokens.

  The default is `NONE`.
+ **`AADTenant`**   –   The Azure AD Tenant ID.

  The default is `NONE`.
+ **`RoleARN`**   –   The Amazon Resource Name (ARN) of the role that the caller is assuming.

  The default is `NONE`.
+ **`IdPARN`**   –   The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the IdP.

  The default is `NONE`.

**AWS SDK (advanced) options**
+ **`RequestTimeout`**   –   The time in milliseconds that the AWS SDK waits for a query request before timing out. Any non-positive value disables the request timeout.

  The default is `3000`.
+ **`ConnectionTimeout`**   –   The time in milliseconds that the AWS SDK waits for data to be transferred over an open connection before timing out. A value of 0 disables the connection timeout. This value must not be negative.

  The default is `1000`.
+ **`MaxRetryCountClient`**   –   The maximum number of retry attempts for retryable errors with 5xx error codes in the SDK. The value must not be negative.

  The default is `0`.
+ **`MaxConnections`**   –   The maximum number of allowed concurrently open HTTP connections to the Timestream service. The value must be positive.

  The default is `25`.
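As an illustration, the following hypothetical connection string combines several of these SDK options with IAM authentication (the timeout, retry, and connection values shown are placeholders, not recommendations):

```
Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your access key ID);SecretKey=(your secret key);Region=us-east-1;RequestTimeout=5000;MaxRetryCountClient=3;MaxConnections=50;
```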

**ODBC driver logging options**
+ **`LogLevel`**   –   The log level for driver logging. Must be one of:
  + **0**   (OFF).
  + **1**   (ERROR).
  + **2**   (WARNING).
  + **3**   (INFO).
  + **4**   (DEBUG).

  The default is `1` (ERROR).

  **Warning:** Personal information might be logged by the driver when using the DEBUG logging mode.
+ **`LogOutput`**   –   Folder in which to store the log file.

  The default is:
  + **Windows:** `%USERPROFILE%`, or if not available, `%HOMEDRIVE%%HOMEPATH%`.
  + **macOS and Linux:** `$HOME`, or if not available, the field `pw_dir` from the function `getpwuid(getuid())` return value.

**SDK logging options**

The AWS SDK log level is separate from the Timestream for LiveAnalytics ODBC driver log level. Setting one does not affect the other.

The SDK log level is set using the environment variable `TS_AWS_LOG_LEVEL`. Valid values are:
+ `OFF`
+ `ERROR`
+ `WARN`
+ `INFO`
+ `DEBUG`
+ `TRACE`
+ `FATAL`

If `TS_AWS_LOG_LEVEL` is not set, the SDK log level is set to the default, which is `WARN`.
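For example, you could raise the SDK log level from a shell before launching your ODBC application. This is a minimal sketch; it assumes the application inherits the shell's environment:

```shell
# Enable verbose AWS SDK logging for applications started from this shell.
# This affects only the SDK log level, not the driver's own LogLevel option.
export TS_AWS_LOG_LEVEL=DEBUG
```

To return to the default SDK log level (`WARN`), unset the variable with `unset TS_AWS_LOG_LEVEL`.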

## Connecting through a proxy
<a name="ODBC-connecting-proxy"></a>

The ODBC driver supports connecting to Amazon Timestream for LiveAnalytics through a proxy. To use this feature, configure the following environment variables based on your proxy setting:
+ **`TS_PROXY_HOST`**   –   The proxy host.
+ **`TS_PROXY_PORT`**   –   The proxy port number.
+ **`TS_PROXY_SCHEME`**   –   The proxy scheme, either `http` or `https`.
+ **`TS_PROXY_USER`**   –   The user name for proxy authentication.
+ **`TS_PROXY_PASSWORD`**   –   The user password for proxy authentication.
+ **`TS_PROXY_SSL_CERT_PATH`**   –   The SSL certificate file to use for connecting to an HTTPS proxy.
+ **`TS_PROXY_SSL_CERT_TYPE`**   –   The type of the proxy client SSL certificate.
+ **`TS_PROXY_SSL_KEY_PATH`**   –   The private key file to use for connecting to an HTTPS proxy.
+ **`TS_PROXY_SSL_KEY_TYPE`**   –   The type of the private key file used to connect to an HTTPS proxy.
+ **`TS_PROXY_SSL_KEY_PASSWORD`**   –   The passphrase to the private key file used to connect to an HTTPS proxy.
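For a basic HTTP proxy with user authentication, the configuration might look like the following. The host `proxy.example.com`, port `3128`, and user name are hypothetical placeholders:

```shell
# Route driver traffic through a hypothetical HTTP proxy
# at proxy.example.com:3128 with basic authentication.
export TS_PROXY_HOST=proxy.example.com
export TS_PROXY_PORT=3128
export TS_PROXY_SCHEME=http
export TS_PROXY_USER=proxyuser
export TS_PROXY_PASSWORD='(your proxy password)'
```

The SSL-related variables are needed only when the proxy requires a client certificate over HTTPS.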

# Connection string examples for the Timestream for LiveAnalytics ODBC driver
<a name="ODBC-connecting-examples"></a>

## Example of connecting to the ODBC driver with IAM credentials
<a name="ODBC-connecting-examples-iam"></a>

```
Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your access key ID);secretKey=(your secret key);SessionToken=(your session token);Region=us-east-2;
```

## Example of connecting to the ODBC driver with a profile
<a name="ODBC-connecting-examples-profile"></a>

```
Driver={Amazon Timestream ODBC Driver};ProfileName=(the profile name);region=us-west-2;
```

The driver attempts to connect using the credentials in `~/.aws/credentials`, or, if the environment variable `AWS_SHARED_CREDENTIALS_FILE` specifies a file, the credentials in that file.
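As a sketch, the following creates an example shared credentials file and points the driver at it. The profile name `my-timestream-profile` is hypothetical, and the key values are the placeholders used in AWS documentation:

```shell
# Write an example shared credentials file (key values are placeholders)
# and point the driver at it via AWS_SHARED_CREDENTIALS_FILE.
cat > ./credentials.example <<'EOF'
[my-timestream-profile]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
export AWS_SHARED_CREDENTIALS_FILE="$PWD/credentials.example"
```

You could then reference this profile with `ProfileName=my-timestream-profile` in the connection string.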

## Example of connecting to the ODBC driver with Okta
<a name="ODBC-connecting-examples-okta"></a>

```
driver={Amazon Timestream ODBC Driver};auth=okta;region=us-west-2;idPHost=(your host at Okta);idPUsername=(your user name);idPPassword=(your password);OktaApplicationID=(your Okta AppId);roleARN=(your role ARN);idPARN=(your Idp ARN);
```

## Example of connecting to the ODBC driver with Azure Active Directory (AAD)
<a name="ODBC-connecting-examples-aad"></a>

```
driver={Amazon Timestream ODBC Driver};auth=aad;region=us-west-2;idPUsername=(your user name);idPPassword=(your password);aadApplicationID=(your AAD AppId);aadClientSecret=(your AAD client secret);aadTenant=(your AAD tenant);roleARN=(your role ARN);idPARN=(your idP ARN);
```

## Example of connecting to the ODBC driver with a specified endpoint and a log level of 2 (WARNING)
<a name="ODBC-connecting-examples-endpoint"></a>

```
Driver={Amazon Timestream ODBC Driver};Auth=IAM;AccessKeyId=(your access key ID);secretKey=(your secret key);EndpointOverride=ingest.timestream.us-west-2.amazonaws.com;Region=us-east-2;LogLevel=2;
```

# Troubleshooting connection with the ODBC driver
<a name="ODBC-connecting-troubleshooting"></a>

**Note**  
When the username and password are already specified in the DSN, there is no need to specify them again when the ODBC driver manager asks for them.

An error code of `01S02` with the message `Re-writing (connection string option) (have you specified it several times?` occurs when a connection string option is passed more than once in the connection string. When connecting with both a DSN and a connection string, do not repeat in the connection string any option that the DSN already specifies.

# VPC endpoints (AWS PrivateLink)
<a name="vpc-interface-endpoints"></a>

You can establish a private connection between your VPC and Amazon Timestream for LiveAnalytics by creating an *interface VPC endpoint*. For more information, see [VPC endpoints (AWS PrivateLink)](VPCEndpoints.md). 