

For capabilities similar to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com/timestream/latest/developerguide/timestream-for-influxdb.html).


# Queries with time series functions
<a name="sample-queries.devops-scenarios"></a>

**Topics**
+ [Sample data set and queries](#sample-queries.devops-scenarios.example)

## Sample data set and queries
<a name="sample-queries.devops-scenarios.example"></a>

You can use Timestream for LiveAnalytics to understand and improve the performance and availability of your services and applications. Below is a sample table, followed by example queries that run against that table.

The `ec2_metrics` table stores telemetry data, such as CPU utilization and other metrics, from EC2 instances. The table is shown below.


| time | region | az | hostname | measure\_name | measure\_value::double | measure\_value::bigint | 
| --- | --- | --- | --- | --- | --- | --- | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  cpu\_utilization  |  35.1  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  memory\_utilization  |  55.3  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_in  |  null  |  1500  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_out  |  null  |  6700  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  cpu\_utilization  |  38.5  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  memory\_utilization  |  58.4  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_in  |  null  |  23000  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_out  |  null  |  12000  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  cpu\_utilization  |  45.0  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  memory\_utilization  |  65.8  |  null  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_in  |  null  |  15000  | 
|  2019-12-04 19:00:00.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_out  |  null  |  836000  | 
|  2019-12-04 19:00:05.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  cpu\_utilization  |  55.2  |  null  | 
|  2019-12-04 19:00:05.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  memory\_utilization  |  75.0  |  null  | 
|  2019-12-04 19:00:05.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_in  |  null  |  1245  | 
|  2019-12-04 19:00:05.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_out  |  null  |  68432  | 
|  2019-12-04 19:00:08.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  cpu\_utilization  |  65.6  |  null  | 
|  2019-12-04 19:00:08.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  memory\_utilization  |  85.3  |  null  | 
|  2019-12-04 19:00:08.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_in  |  null  |  1245  | 
|  2019-12-04 19:00:08.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_out  |  null  |  68432  | 
|  2019-12-04 19:00:20.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  cpu\_utilization  |  12.1  |  null  | 
|  2019-12-04 19:00:20.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  memory\_utilization  |  32.0  |  null  | 
|  2019-12-04 19:00:20.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_in  |  null  |  1400  | 
|  2019-12-04 19:00:20.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_out  |  null  |  345  | 
|  2019-12-04 19:00:10.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  cpu\_utilization  |  15.3  |  null  | 
|  2019-12-04 19:00:10.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  memory\_utilization  |  35.4  |  null  | 
|  2019-12-04 19:00:10.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_in  |  null  |  23  | 
|  2019-12-04 19:00:10.000000000  |  us-east-1  |  us-east-1a  |  frontend01  |  network\_bytes\_out  |  null  |  0  | 
|  2019-12-04 19:00:16.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  cpu\_utilization  |  44.0  |  null  | 
|  2019-12-04 19:00:16.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  memory\_utilization  |  64.2  |  null  | 
|  2019-12-04 19:00:16.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_in  |  null  |  1450  | 
|  2019-12-04 19:00:16.000000000  |  us-east-1  |  us-east-1b  |  frontend02  |  network\_bytes\_out  |  null  |  200  | 
|  2019-12-04 19:00:40.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  cpu\_utilization  |  66.4  |  null  | 
|  2019-12-04 19:00:40.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  memory\_utilization  |  86.3  |  null  | 
|  2019-12-04 19:00:40.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_in  |  null  |  300  | 
|  2019-12-04 19:00:40.000000000  |  us-east-1  |  us-east-1c  |  frontend03  |  network\_bytes\_out  |  null  |  423  | 
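Before running the analyses below, you may want to inspect the most recent raw rows for one host. A minimal sketch, assuming the same `"sampleDB".DevOps` table and the hypothetical host name `host-Hovjv` used by the queries that follow:

```
SELECT time, measure_name, measure_value::double, measure_value::bigint
FROM "sampleDB".DevOps
WHERE hostname = 'host-Hovjv'
    AND time > ago(2h)
ORDER BY time DESC
LIMIT 10
```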

Compute the average, p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours:

```
SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.9), 2) AS p90_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.95), 2) AS p95_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.99), 2) AS p99_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY region, hostname, az, BIN(time, 15s)
ORDER BY binned_timestamp ASC
```

Identify EC2 hosts whose CPU utilization is 10% or more above the average CPU utilization of the entire fleet over the past 2 hours:

```
WITH avg_fleet_utilization AS (
    SELECT COUNT(DISTINCT hostname) AS total_host_count, AVG(measure_value::double) AS fleet_avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
), avg_per_host_cpu AS (
    SELECT region, az, hostname, AVG(measure_value::double) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
    GROUP BY region, az, hostname
)
SELECT region, az, hostname, avg_cpu_utilization, fleet_avg_cpu_utilization
FROM avg_fleet_utilization, avg_per_host_cpu
WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization
ORDER BY avg_cpu_utilization DESC
```

Compute the average CPU utilization binned into 30-second intervals for a specific EC2 host over the past 2 hours:

```
SELECT BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY hostname, BIN(time, 30s)
ORDER BY binned_timestamp ASC
```

Compute the average CPU utilization binned into 30-second intervals for a specific EC2 host over the past 2 hours, filling in missing values using linear interpolation:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```

Compute the average CPU utilization binned into 30-second intervals for a specific EC2 host over the past 2 hours, filling in missing values using interpolation based on the last observation carried forward:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LOCF(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
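Besides linear and last-observation-carried-forward interpolation, Timestream for LiveAnalytics also supports filling gaps with a constant. A sketch of the same query shape, assuming `INTERPOLATE_FILL` accepts the fill constant (here `0.0`) as a third argument:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_FILL(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s), 0.0) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```

A constant fill of `0.0` is appropriate when a missing interval genuinely means no activity; for utilization metrics, the linear or LOCF variants above are usually the better default.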