

For capabilities similar to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html).


# Sample queries
<a name="sample-queries"></a>

This section includes sample use cases for the Timestream for LiveAnalytics query language.

**Topics**
+ [Simple queries](sample-queries.basic-scenarios.md)
+ [Queries with time series functions](sample-queries.devops-scenarios.md)
+ [Queries with aggregate functions](sample-queries.iot-scenarios.md)

# Simple queries
<a name="sample-queries.basic-scenarios"></a>

The following gets the 10 most recently added data points for a table.

```
SELECT * FROM <database_name>.<table_name>
ORDER BY time DESC
LIMIT 10
```

The following gets the 5 oldest data points for a specific measure.

```
SELECT * FROM <database_name>.<table_name>
WHERE measure_name = '<measure_name>'
ORDER BY time ASC
LIMIT 5
```

The following works with timestamps at nanosecond granularity.

```
SELECT now() AS time_now
, now() - (INTERVAL '12' HOUR) AS twelve_hour_earlier -- Compatibility with ANSI SQL 
, now() - 12h AS also_twelve_hour_earlier -- Convenient time interval literals
, ago(12h) AS twelve_hours_ago -- More convenience with time functionality
, bin(now(), 10m) AS time_binned -- Convenient time binning support
, ago(50ns) AS fifty_ns_ago -- Nanosecond support
, now() + (1h + 50ns) AS hour_fifty_ns_future
```

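The binning and time-shifting behavior above can be sketched in ordinary code. The following Python snippet is an illustrative approximation of `bin()` and `ago()`, not Timestream's implementation, and uses a fixed reference time instead of the real clock (Python's `datetime` also lacks nanosecond precision):

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def bin_time(ts: datetime, width: timedelta) -> datetime:
    """Floor a timestamp to the start of its bucket, like bin(time, 10m)."""
    return EPOCH + ((ts - EPOCH) // width) * width

def ago(delta: timedelta, now: datetime) -> datetime:
    """A timestamp `delta` before `now`, like ago(12h)."""
    return now - delta

now = datetime(2019, 12, 4, 19, 0, 7, tzinfo=timezone.utc)
print(bin_time(now, timedelta(minutes=10)))  # 2019-12-04 19:00:00+00:00
print(ago(timedelta(hours=12), now))         # 2019-12-04 07:00:07+00:00
```

The key point is that `bin()` floors each timestamp onto a regular grid, which is what makes `GROUP BY BIN(time, ...)` in the later queries produce fixed-width buckets.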
Measure values for multi-measure records are identified by column name. Measure values for single-measure records are identified by `measure_value::<data_type>`, where `<data_type>` is one of `double`, `bigint`, `boolean`, or `varchar`, as described in [Supported data types](supported-data-types.md). For more information about how measure values are modeled, see [Single table vs. multiple tables](https://docs.aws.amazon.com/timestream/latest/developerguide/data-modeling.html#data-modeling-multiVsinglerecords).

The following retrieves the value of a measure named `speed` from multi-measure records with a `measure_name` of `IoTMulti-stats`.

```
SELECT speed FROM <database_name>.<table_name> where measure_name = 'IoTMulti-stats'
```

The following retrieves the `double` value from single-measure records with a `measure_name` of `load`.

```
SELECT measure_value::double FROM <database_name>.<table_name> WHERE measure_name = 'load'
```

# Queries with time series functions
<a name="sample-queries.devops-scenarios"></a>

**Topics**
+ [Example dataset and queries](#sample-queries.devops-scenarios.example)

## Example dataset and queries
<a name="sample-queries.devops-scenarios.example"></a>

You can use Timestream for LiveAnalytics to understand and improve the performance and availability of your services and applications. Below is a sample table and sample queries run against that table.

The `ec2_metrics` table stores telemetry data, such as CPU utilization and other metrics, from EC2 instances. You can see the table below.


| Time | region | az | hostname | measure\_name | measure\_value::double | measure\_value::bigint | 
| --- | --- | --- | --- | --- | --- | --- | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | cpu\_utilization | 35.1 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | memory\_utilization | 55.3 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_in | null | 1,500 | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_out | null | 6,700 | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | cpu\_utilization | 38.5 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | memory\_utilization | 58.4 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_in | null | 23,000 | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_out | null | 12,000 | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | cpu\_utilization | 45.0 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | memory\_utilization | 65.8 | null | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_in | null | 15,000 | 
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_out | null | 836,000 | 
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | cpu\_utilization | 55.2 | null | 
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | memory\_utilization | 75.0 | null | 
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_in | null | 1,245 | 
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_out | null | 68,432 | 
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | cpu\_utilization | 65.6 | null | 
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | memory\_utilization | 85.3 | null | 
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_in | null | 1,245 | 
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_out | null | 68,432 | 
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | cpu\_utilization | 12.1 | null | 
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | memory\_utilization | 32.0 | null | 
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_in | null | 1,400 | 
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_out | null | 345 | 
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | cpu\_utilization | 15.3 | null | 
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | memory\_utilization | 35.4 | null | 
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_in | null | 23 | 
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network\_bytes\_out | null | 0 | 
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | cpu\_utilization | 44.0 | null | 
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | memory\_utilization | 64.2 | null | 
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_in | null | 1,450 | 
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | network\_bytes\_out | null | 200 | 
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | cpu\_utilization | 66.4 | null | 
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | memory\_utilization | 86.3 | null | 
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_in | null | 300 | 
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network\_bytes\_out | null | 423 | 

Find the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours:

```
SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.9), 2) AS p90_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.95), 2) AS p95_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.99), 2) AS p99_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY region, hostname, az, BIN(time, 15s)
ORDER BY binned_timestamp ASC
```
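To make the percentile semantics concrete, here is a small Python sketch using hypothetical sample values. It computes an exact nearest-rank percentile; Timestream's `APPROX_PERCENTILE` instead trades exactness for speed on large datasets:

```python
import math

def percentile(values, p):
    """Exact nearest-rank percentile of a list of values, for 0 < p <= 1."""
    ordered = sorted(values)
    k = max(0, math.ceil(p * len(ordered)) - 1)
    return ordered[k]

# Hypothetical cpu_utilization samples falling into one time bin
samples = [35.1, 55.2, 15.3, 44.0, 66.4, 12.1, 38.5, 65.6, 45.0, 58.4]
avg = round(sum(samples) / len(samples), 2)
print(avg, percentile(samples, 0.9), percentile(samples, 0.99))  # 43.56 65.6 66.4
```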

Identify EC2 hosts with CPU utilization 10% or more above the average CPU utilization of the entire fleet over the past 2 hours:

```
WITH avg_fleet_utilization AS (
    SELECT COUNT(DISTINCT hostname) AS total_host_count, AVG(measure_value::double) AS fleet_avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
), avg_per_host_cpu AS (
    SELECT region, az, hostname, AVG(measure_value::double) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
    GROUP BY region, az, hostname
)
SELECT region, az, hostname, avg_cpu_utilization, fleet_avg_cpu_utilization
FROM avg_fleet_utilization, avg_per_host_cpu
WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization
ORDER BY avg_cpu_utilization DESC
```
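The two CTEs above compute a fleet-wide average and per-host averages, then compare them. The same two-step comparison can be sketched outside SQL; this Python snippet uses hypothetical in-memory rows standing in for the query results:

```python
# Hypothetical (hostname, cpu_utilization) samples from the last 2 hours
rows = [
    ("frontend01", 35.1), ("frontend01", 55.2), ("frontend01", 15.3),
    ("frontend02", 38.5), ("frontend02", 65.6), ("frontend02", 44.0),
    ("frontend03", 45.0), ("frontend03", 12.1), ("frontend03", 66.4),
]

# Step 1: fleet-wide average over all samples
fleet_avg = sum(v for _, v in rows) / len(rows)

# Step 2: per-host averages
per_host = {}
for host, v in rows:
    per_host.setdefault(host, []).append(v)

# Keep hosts whose average CPU is more than 10% above the fleet average
hot_hosts = {
    host: sum(vals) / len(vals)
    for host, vals in per_host.items()
    if sum(vals) / len(vals) > 1.1 * fleet_avg
}
print(sorted(hot_hosts))  # ['frontend02']
```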

Find the average CPU utilization, binned at 30-second intervals, for a specific EC2 host over the past 2 hours:

```
SELECT BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY hostname, BIN(time, 30s)
ORDER BY binned_timestamp ASC
```

Find the average CPU utilization, binned at 30-second intervals, for a specific EC2 host over the past 2 hours, filling in missing values using linear interpolation:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
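Conceptually, `CREATE_TIME_SERIES` plus `SEQUENCE` builds a regular timestamp grid, and `INTERPOLATE_LINEAR` fills the gaps by drawing straight lines between known points. A rough Python equivalent, using integer-second timestamps and hypothetical values rather than Timestream's actual types:

```python
def interpolate_linear(points, step):
    """points: sorted (t, value) pairs with integer-second timestamps.
    Returns values on a regular grid, linearly interpolating between points."""
    out = []
    i = 0
    t = points[0][0]
    while t <= points[-1][0]:
        while points[i + 1][0] < t:  # advance to the segment containing t
            i += 1
        (t0, v0), (t1, v1) = points[i], points[i + 1]
        v = v0 if t == t0 else v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        out.append((t, v))
        t += step
    return out

# 30-second bins resampled onto a 15-second grid, as in the query above
print(interpolate_linear([(0, 10.0), (30, 40.0), (60, 20.0)], 15))
# [(0, 10.0), (15, 25.0), (30, 40.0), (45, 30.0), (60, 20.0)]
```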

Find the average CPU utilization, binned at 30-second intervals, for a specific EC2 host over the past 2 hours, filling in missing values using last observation carried forward interpolation:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LOCF(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
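`INTERPOLATE_LOCF` instead carries the last observed value forward across the grid. A rough Python equivalent, under the same assumptions as before (integer-second timestamps, hypothetical values):

```python
def interpolate_locf(points, step):
    """points: sorted (t, value) pairs. Fills a regular grid by carrying the
    last observation forward until a newer point appears."""
    out = []
    i = 0
    t = points[0][0]
    while t <= points[-1][0]:
        while i + 1 < len(points) and points[i + 1][0] <= t:
            i += 1  # a newer observation has become the "last" one
        out.append((t, points[i][1]))
        t += step
    return out

print(interpolate_locf([(0, 10.0), (30, 40.0), (60, 20.0)], 15))
# [(0, 10.0), (15, 10.0), (30, 40.0), (45, 40.0), (60, 20.0)]
```

Note how the off-grid samples at 15 and 45 seconds repeat the previous value instead of averaging toward the next one, which is the practical difference from linear interpolation.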

# Queries with aggregate functions
<a name="sample-queries.iot-scenarios"></a>

Below is an example IoT scenario dataset used to illustrate queries with aggregate functions.

**Topics**
+ [Example data](#sample-queries.iot-scenarios.example-data)
+ [Example queries](#sample-queries.iot-scenarios.example-queries)

## Example data
<a name="sample-queries.iot-scenarios.example-data"></a>

Timestream enables you to store and analyze IoT sensor data such as the location, fuel consumption, speed, and load capacity of one or more fleets of trucks to enable effective fleet management. Below are the schema and some of the data of the iot\_trucks table, which stores telemetry such as the location, fuel consumption, speed, and load capacity of trucks.


| Time | truck\_id | make | model | fleet | fuel\_capacity | load\_capacity | measure\_name | measure\_value::double | measure\_value::varchar | 
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | 
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | fuel-reading | 65.2 | null | 
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | load | 400.0 | null | 
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | speed | 90.2 | null | 
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | location | null | 47.6062 N, 122.3321 W | 
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | fuel-reading | 10.1 | null | 
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | load | 950.3 | null | 
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | speed | 50.8 | null | 
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | location | null | 40.7128 degrees N, 74.0060 degrees W | 

## Example queries
<a name="sample-queries.iot-scenarios.example-queries"></a>

Get a list of all the sensor attributes and values being monitored for each truck in the fleet.

```
SELECT
    truck_id,
    fleet,
    fuel_capacity,
    model,
    load_capacity,
    make,
    measure_name
FROM "sampleDB".IoT
GROUP BY truck_id, fleet, fuel_capacity, model, load_capacity, make, measure_name
```

Get the most recent fuel reading of each truck in the fleet in the past 24 hours.

```
WITH latest_recorded_time AS (
    SELECT
        truck_id,
        max(time) as latest_time
    FROM "sampleDB".IoT
    WHERE measure_name = 'fuel-reading'
    AND time >= ago(24h)
    GROUP BY truck_id
)
SELECT
    b.truck_id,
    b.fleet,
    b.make,
    b.model,
    b.time,
    b.measure_value::double as last_reported_fuel_reading
FROM
latest_recorded_time a INNER JOIN "sampleDB".IoT b
ON a.truck_id = b.truck_id AND b.time = a.latest_time
WHERE b.measure_name = 'fuel-reading'
AND b.time > ago(24h)
ORDER BY b.truck_id
```
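The join-on-max-time pattern above is essentially a per-key argmax: find each truck's latest timestamp, then keep the reading at that timestamp. A small Python sketch of the same idea, with hypothetical readings:

```python
# Hypothetical (truck_id, time, fuel_reading) rows from the last 24 hours;
# ISO-style timestamp strings compare correctly as plain strings
readings = [
    ("123456781", "2019-12-04 19:00:00", 65.2),
    ("123456781", "2019-12-04 19:05:00", 64.8),
    ("123456782", "2019-12-04 19:00:00", 10.1),
]

# Keep, per truck, the reading with the latest timestamp
latest = {}
for truck_id, time, fuel in readings:
    if truck_id not in latest or time > latest[truck_id][0]:
        latest[truck_id] = (time, fuel)

for truck_id in sorted(latest):
    print(truck_id, *latest[truck_id])
# 123456781 2019-12-04 19:05:00 64.8
# 123456782 2019-12-04 19:00:00 10.1
```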

Identify trucks that have been running on low fuel (less than 10%) in the past 48 hours:

```
WITH low_fuel_trucks AS (
    SELECT time, truck_id, fleet, make, model, (measure_value::double/cast(fuel_capacity as double)*100) AS fuel_pct
    FROM "sampleDB".IoT
    WHERE time >= ago(48h)
    AND (measure_value::double/cast(fuel_capacity as double)*100) < 10
    AND measure_name = 'fuel-reading'
),
other_trucks AS (
SELECT time, truck_id, (measure_value::double/cast(fuel_capacity as double)*100) as remaining_fuel
    FROM "sampleDB".IoT
    WHERE time >= ago(48h)
    AND truck_id IN (SELECT truck_id FROM low_fuel_trucks)
    AND (measure_value::double/cast(fuel_capacity as double)*100) >= 10
    AND measure_name = 'fuel-reading'
),
trucks_that_refuelled AS (
    SELECT a.truck_id
    FROM low_fuel_trucks a JOIN other_trucks b
    ON a.truck_id = b.truck_id AND b.time >= a.time
)
SELECT DISTINCT truck_id, fleet, make, model, fuel_pct
FROM low_fuel_trucks
WHERE truck_id NOT IN (
    SELECT truck_id FROM trucks_that_refuelled
)
```

Find the average load and maximum speed for each truck over the past week:

```
SELECT
    bin(time, 1d) as binned_time,
    fleet,
    truck_id,
    make,
    model,
    AVG(
        CASE WHEN measure_name = 'load' THEN measure_value::double ELSE NULL END
    ) AS avg_load_tons,
    MAX(
        CASE WHEN measure_name = 'speed' THEN measure_value::double ELSE NULL END
    ) AS max_speed_mph
FROM "sampleDB".IoT
WHERE time >= ago(7d)
AND measure_name IN ('load', 'speed')
GROUP BY fleet, truck_id, make, model, bin(time, 1d)
ORDER BY truck_id
```
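The `CASE WHEN` expressions pivot two interleaved measures out of the single `measure_value::double` column into separate aggregates. The same pivot can be sketched in Python, with hypothetical rows standing in for the table:

```python
# Hypothetical (truck_id, measure_name, measure_value) rows from one day's bin
rows = [
    ("123456781", "load", 400.0),
    ("123456781", "speed", 90.2),
    ("123456781", "load", 380.0),
    ("123456782", "speed", 50.8),
]

stats = {}
for truck, measure, value in rows:
    s = stats.setdefault(truck, {"loads": [], "max_speed": None})
    if measure == "load":      # feeds the AVG(...) branch
        s["loads"].append(value)
    elif measure == "speed":   # feeds the MAX(...) branch
        s["max_speed"] = value if s["max_speed"] is None else max(s["max_speed"], value)

for truck in sorted(stats):
    s = stats[truck]
    avg_load = sum(s["loads"]) / len(s["loads"]) if s["loads"] else None
    print(truck, avg_load, s["max_speed"])
# 123456781 390.0 90.2
# 123456782 None 50.8
```

As in SQL, a truck with no rows for one measure gets a null (`None`) for that aggregate rather than being dropped.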

Get the load efficiency for each truck over the past week:

```
WITH average_load_per_truck AS (
    SELECT
        truck_id,
        avg(measure_value::double)  AS avg_load
    FROM "sampleDB".IoT
    WHERE measure_name = 'load'
    AND time >= ago(7d)
    GROUP BY truck_id, fleet, load_capacity, make, model
),
truck_load_efficiency AS (
    SELECT
        a.truck_id,
        fleet,
        load_capacity,
        make,
        model,
        avg_load,
        measure_value::double,
        time,
        (measure_value::double*100)/avg_load as load_efficiency -- , approx_percentile(avg_load_pct, DOUBLE '0.9')
    FROM "sampleDB".IoT a JOIN average_load_per_truck b
    ON a.truck_id = b.truck_id
    WHERE a.measure_name = 'load'
)
SELECT
    truck_id,
    time,
    load_efficiency
FROM truck_load_efficiency
ORDER BY truck_id, time
```