Accessing Amazon Timestream for LiveAnalytics using the AWS CLI



You can use the AWS Command Line Interface (AWS CLI) to control multiple AWS services from the command line and automate them through scripts. You can use the AWS CLI for ad hoc operations, and you can also use it to embed Amazon Timestream for LiveAnalytics operations within utility scripts.

Before you can use the AWS CLI with Timestream for LiveAnalytics, you must set up programmatic access. For more information, see Granting programmatic access.

For a complete list of all of the commands available for the Timestream for LiveAnalytics Query API in the AWS CLI, see the AWS CLI Command Reference.

For a complete list of all of the commands available for the Timestream for LiveAnalytics Write API in the AWS CLI, see the AWS CLI Command Reference.

Downloading and configuring the AWS CLI

The AWS CLI runs on Windows, macOS, or Linux. To download, install, and configure it, follow these steps:

  1. Download the AWS CLI at http://aws.amazon.com/cli.

  2. Follow the instructions for installing and configuring the AWS CLI in the AWS Command Line Interface User Guide. A minimal configuration sketch follows this list.
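
If you have not set up credentials yet, one way to configure them is the interactive aws configure command. The following is a minimal sketch; the access key, secret key, Region, and output format shown are placeholders that you must replace with your own values:

aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json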

Using the AWS CLI with Timestream for LiveAnalytics

The command line format consists of the name of an Amazon Timestream for LiveAnalytics operation, followed by the parameters for that operation. The AWS CLI also supports a shorthand syntax for parameter values, in addition to JSON.
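
As an illustration of the shorthand syntax, the following two invocations are intended to be equivalent. This is a sketch only; it reuses the metricsdb database, the metrics table, and the --magnetic-store-write-properties parameter from the table-creation example later in this topic:

aws timestream-write create-table --database-name metricsdb --table-name metrics \
    --magnetic-store-write-properties "{\"EnableMagneticStoreWrites\": true}"

aws timestream-write create-table --database-name metricsdb --table-name metrics \
    --magnetic-store-write-properties EnableMagneticStoreWrites=true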

Use help to list all of the available commands in Timestream for LiveAnalytics. For example:

aws timestream-write help
aws timestream-query help

You can also use help to describe a specific command and learn more about its usage:

aws timestream-write create-database help

For example, to create a database:

aws timestream-write create-database --database-name myFirstDatabase
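
To confirm that the database exists, you can describe it or list all of the databases in your account. This is a sketch that assumes the myFirstDatabase name from the example above:

aws timestream-write describe-database --database-name myFirstDatabase
aws timestream-write list-databases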

To create a table with magnetic store writes enabled:

aws timestream-write create-table \
    --database-name metricsdb \
    --table-name metrics \
    --magnetic-store-write-properties "{\"EnableMagneticStoreWrites\": true}"
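
To verify the table settings, including whether magnetic store writes are enabled, you can describe the table. This sketch reuses the metricsdb and metrics names from the example above:

aws timestream-write describe-table --database-name metricsdb --table-name metrics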

To write data using single-measure records:

aws timestream-write write-records \
    --database-name metricsdb \
    --table-name metrics \
    --common-attributes "{\"Dimensions\":[{\"Name\":\"asset_id\", \"Value\":\"100\"}], \"Time\":\"1631051324000\",\"TimeUnit\":\"MILLISECONDS\"}" \
    --records "[{\"MeasureName\":\"temperature\", \"MeasureValueType\":\"DOUBLE\",\"MeasureValue\":\"30\"},{\"MeasureName\":\"windspeed\", \"MeasureValueType\":\"DOUBLE\",\"MeasureValue\":\"7\"},{\"MeasureName\":\"humidity\", \"MeasureValueType\":\"DOUBLE\",\"MeasureValue\":\"15\"},{\"MeasureName\":\"brightness\", \"MeasureValueType\":\"DOUBLE\",\"MeasureValue\":\"17\"}]"

To write data using multi-measure records:

# wide model helper method to create multi-measure records
function ingest_multi_measure_records {
    epoch=`date +%s`
    epoch+=$i

    # multi-measure records
    aws timestream-write write-records \
        --database-name $src_db_wide \
        --table-name $src_tbl_wide \
        --common-attributes "{\"Dimensions\":[{\"Name\":\"device_id\", \"Value\":\"12345678\"}, \
            {\"Name\":\"device_type\", \"Value\":\"iPhone\"}, \
            {\"Name\":\"os_version\", \"Value\":\"14.8\"}, \
            {\"Name\":\"region\", \"Value\":\"us-east-1\"} ], \
            \"Time\":\"$epoch\",\"TimeUnit\":\"MILLISECONDS\"}" \
        --records "[{\"MeasureName\":\"video_metrics\", \"MeasureValueType\":\"MULTI\", \
            \"MeasureValues\": \
            [{\"Name\":\"video_startup_time\",\"Value\":\"0\",\"Type\":\"BIGINT\"}, \
            {\"Name\":\"rebuffering_ratio\",\"Value\":\"0.5\",\"Type\":\"DOUBLE\"}, \
            {\"Name\":\"video_playback_failures\",\"Value\":\"0\",\"Type\":\"BIGINT\"}, \
            {\"Name\":\"average_frame_rate\",\"Value\":\"0.5\",\"Type\":\"DOUBLE\"}]}]" \
        --endpoint-url $ingest_endpoint \
        --region $region
}

# create records
for i in {100..105}; do ingest_multi_measure_records $i; done

To query a table:

aws timestream-query query \
    --query-string "SELECT time, device_id, device_type, os_version, region, video_startup_time, rebuffering_ratio, video_playback_failures, \
        average_frame_rate \
        FROM metricsdb.metrics \
        where time >= ago (15m)"
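
For larger result sets, the query command can limit the number of rows returned per call and paginate with a token. The following is a sketch; the --max-rows value is arbitrary, and the next-token value is a placeholder for the NextToken returned by the previous call:

aws timestream-query query \
    --query-string "SELECT * FROM metricsdb.metrics where time >= ago(15m)" \
    --max-rows 100

# pass the NextToken from the previous response to fetch the next page
aws timestream-query query \
    --query-string "SELECT * FROM metricsdb.metrics where time >= ago(15m)" \
    --max-rows 100 \
    --next-token <NextToken-from-previous-response>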

To create a scheduled query:

aws timestream-query create-scheduled-query \
    --name scheduled_query_name \
    --query-string "select bin(time, 1m) as time, \
        avg(measure_value::double) as avg_cpu, min(measure_value::double) as min_cpu, region \
        from $src_db.$src_tbl where measure_name = 'cpu' \
        and time BETWEEN @scheduled_runtime - (interval '5' minute) AND @scheduled_runtime \
        group by region, bin(time, 1m)" \
    --schedule-configuration "{\"ScheduleExpression\":\"$cron_exp\"}" \
    --notification-configuration "{\"SnsConfiguration\":{\"TopicArn\":\"$sns_topic_arn\"}}" \
    --scheduled-query-execution-role-arn "arn:aws:iam::452360119086:role/TimestreamSQExecutionRole" \
    --target-configuration "{\"TimestreamConfiguration\":{\
        \"DatabaseName\": \"$dest_db\",\
        \"TableName\": \"$dest_tbl\",\
        \"TimeColumn\":\"time\",\
        \"DimensionMappings\":[{\
            \"Name\": \"region\", \"DimensionValueType\": \"VARCHAR\" }],\
        \"MultiMeasureMappings\":{\
            \"TargetMultiMeasureName\": \"mma_name\", \
            \"MultiMeasureAttributeMappings\":[{\
                \"SourceColumn\": \"avg_cpu\", \"MeasureValueType\": \"DOUBLE\", \"TargetMultiMeasureAttributeName\": \"target_avg_cpu\" },\
                { \"SourceColumn\": \"min_cpu\", \"MeasureValueType\": \"DOUBLE\", \"TargetMultiMeasureAttributeName\": \"target_min_cpu\" }] \
        }\
    }}" \
    --error-report-configuration "{\"S3Configuration\": {\
        \"BucketName\": \"$s3_err_bucket\",\
        \"ObjectKeyPrefix\": \"scherrors\",\
        \"EncryptionOption\": \"SSE_S3\"\
    }}"
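
After you create a scheduled query, you can list, inspect, or manually run it. The following is a sketch; the scheduled-query ARN is a placeholder for the ScheduledQueryArn returned by create-scheduled-query, and the invocation time is an example timestamp:

aws timestream-query list-scheduled-queries

aws timestream-query describe-scheduled-query \
    --scheduled-query-arn <scheduled-query-arn>

aws timestream-query execute-scheduled-query \
    --scheduled-query-arn <scheduled-query-arn> \
    --invocation-time 2021-09-08T00:00:00Z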