

For capabilities similar to Amazon Timestream for LiveAnalytics, consider Amazon Timestream for InfluxDB. It offers simplified data ingestion and single-digit millisecond query response times for real-time analytics. Learn more [here](https://docs.aws.amazon.com//timestream/latest/developerguide/timestream-for-influxdb.html).


# Sample queries
<a name="sample-queries"></a>

This section includes sample use cases for the Timestream for LiveAnalytics query language.

**Topics**
+ [Simple queries](sample-queries.basic-scenarios.md)
+ [Queries with time series functions](sample-queries.devops-scenarios.md)
+ [Queries with aggregate functions](sample-queries.iot-scenarios.md)

# Simple queries
<a name="sample-queries.basic-scenarios"></a>

The following retrieves the 10 most recently added data points in a table.

```
SELECT * FROM <database_name>.<table_name>
ORDER BY time DESC
LIMIT 10
```

The following retrieves the 5 oldest data points for a specific measure.

```
SELECT * FROM <database_name>.<table_name>
WHERE measure_name = '<measure_name>'
ORDER BY time ASC
LIMIT 5
```
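The patterns above can also be combined with a time predicate. The following sketch (using the same placeholder database and table names) restricts the result to rows with timestamps in the past 15 minutes; `ago()` is the interval shorthand used throughout this section.

```
SELECT * FROM <database_name>.<table_name>
WHERE time BETWEEN ago(15m) AND now()
ORDER BY time DESC
LIMIT 10
```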

The following works with timestamps at nanosecond granularity.

```
SELECT now() AS time_now
, now() - (INTERVAL '12' HOUR) AS twelve_hour_earlier -- Compatibility with ANSI SQL 
, now() - 12h AS also_twelve_hour_earlier -- Convenient time interval literals
, ago(12h) AS twelve_hours_ago -- More convenience with time functionality
, bin(now(), 10m) AS time_binned -- Convenient time binning support
, ago(50ns) AS fifty_ns_ago -- Nanosecond support
, now() + (1h + 50ns) AS hour_fifty_ns_future
```

Measure values of multi-measure records are identified by column name. Measure values of single-measure records are identified with `measure_value::<data_type>`, where `<data_type>` can be `double`, `bigint`, `boolean`, or `varchar`, as described in [Supported data types](supported-data-types.md). For more information about how measure values are modeled, see [Single-table vs. multi-table](https://docs.aws.amazon.com/timestream/latest/developerguide/data-modeling.html#data-modeling-multiVsinglerecords).

The following retrieves values for a measure named `speed` from multi-measure records with a `measure_name` value of `IoTMulti-stats`.

```
SELECT speed FROM <database_name>.<table_name> WHERE measure_name = 'IoTMulti-stats'
```

The following retrieves `double` values from single-measure records with a `measure_name` of `load`.

```
SELECT measure_value::double FROM <database_name>.<table_name> WHERE measure_name = 'load'
```

# Queries with time series functions
<a name="sample-queries.devops-scenarios"></a>

**Topics**
+ [Sample dataset and queries](#sample-queries.devops-scenarios.example)

## Sample dataset and queries
<a name="sample-queries.devops-scenarios.example"></a>

You can use Timestream for LiveAnalytics to understand and improve the performance and availability of your services and applications. Below is a sample table and example queries that run against that table.

The `ec2_metrics` table stores telemetry data, such as CPU utilization and other metrics, from EC2 instances. The table is shown below.


| time | region | az | hostname | measure_name | measure_value::double | measure_value::bigint |
| --- | --- | --- | --- | --- | --- | --- |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 35.1 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 55.3 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 1500 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 6700 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 38.5 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 58.4 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 23000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_out | null | 12000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 45.0 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 65.8 | null |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 15000 |
| 2019-12-04 19:00:00.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 836000 |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 55.2 | null |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 75.0 | null |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 1245 |
| 2019-12-04 19:00:05.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 68432 |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 65.6 | null |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 85.3 | null |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 1245 |
| 2019-12-04 19:00:08.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_out | null | 68432 |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 12.1 | null |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 32.0 | null |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 1400 |
| 2019-12-04 19:00:20.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 345 |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | cpu_utilization | 15.3 | null |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | memory_utilization | 35.4 | null |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_in | null | 23 |
| 2019-12-04 19:00:10.000000000 | us-east-1 | us-east-1a | frontend01 | network_bytes_out | null | 0 |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | cpu_utilization | 44.0 | null |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | memory_utilization | 64.2 | null |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | frontend02 | network_bytes_in | null | 1450 |
| 2019-12-04 19:00:16.000000000 | us-east-1 | us-east-1b | front02 | network_bytes_out | null | 200 |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | cpu_utilization | 66.4 | null |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | memory_utilization | 86.3 | null |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_in | null | 300 |
| 2019-12-04 19:00:40.000000000 | us-east-1 | us-east-1c | frontend03 | network_bytes_out | null | 423 |

Compute the average and the p90, p95, and p99 CPU utilization for a specific EC2 host over the past 2 hours:

```
SELECT region, az, hostname, BIN(time, 15s) AS binned_timestamp,
    ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.9), 2) AS p90_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.95), 2) AS p95_cpu_utilization,
    ROUND(APPROX_PERCENTILE(measure_value::double, 0.99), 2) AS p99_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY region, hostname, az, BIN(time, 15s)
ORDER BY binned_timestamp ASC
```

Identify EC2 hosts whose CPU utilization is 10% or more above the average CPU utilization of the entire fleet over the past 2 hours:

```
WITH avg_fleet_utilization AS (
    SELECT COUNT(DISTINCT hostname) AS total_host_count, AVG(measure_value::double) AS fleet_avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
), avg_per_host_cpu AS (
    SELECT region, az, hostname, AVG(measure_value::double) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND time > ago(2h)
    GROUP BY region, az, hostname
)
SELECT region, az, hostname, avg_cpu_utilization, fleet_avg_cpu_utilization
FROM avg_fleet_utilization, avg_per_host_cpu
WHERE avg_cpu_utilization > 1.1 * fleet_avg_cpu_utilization
ORDER BY avg_cpu_utilization DESC
```

Compute the average CPU utilization binned at 30-second intervals for a specific EC2 host over the past 2 hours:

```
SELECT BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
FROM "sampleDB".DevOps
WHERE measure_name = 'cpu_utilization'
    AND hostname = 'host-Hovjv'
    AND time > ago(2h)
GROUP BY hostname, BIN(time, 30s)
ORDER BY binned_timestamp ASC
```

Compute the average CPU utilization binned at 30-second intervals for a specific EC2 host over the past 2 hours, filling in any missing values using linear interpolation:

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LINEAR(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```

Compute the average CPU utilization binned at 30-second intervals for a specific EC2 host over the past 2 hours, filling in any missing values by carrying forward the last observation (LOCF):

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_LOCF(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s)) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```
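Gaps can also be filled with a constant instead of an interpolated or carried-forward value. The following sketch swaps `INTERPOLATE_FILL` into the same query shape, filling each missing bin with a supplied default (here `0.0`); the three-argument form shown (time series, sequence, fill value) is an assumption you should verify against the interpolation function reference for your Timestream version.

```
WITH binned_timeseries AS (
    SELECT hostname, BIN(time, 30s) AS binned_timestamp, ROUND(AVG(measure_value::double), 2) AS avg_cpu_utilization
    FROM "sampleDB".DevOps
    WHERE measure_name = 'cpu_utilization'
        AND hostname = 'host-Hovjv'
        AND time > ago(2h)
    GROUP BY hostname, BIN(time, 30s)
), interpolated_timeseries AS (
    SELECT hostname,
        INTERPOLATE_FILL(
            CREATE_TIME_SERIES(binned_timestamp, avg_cpu_utilization),
                SEQUENCE(min(binned_timestamp), max(binned_timestamp), 15s), 0.0) AS interpolated_avg_cpu_utilization
    FROM binned_timeseries
    GROUP BY hostname
)
SELECT time, ROUND(value, 2) AS interpolated_cpu
FROM interpolated_timeseries
CROSS JOIN UNNEST(interpolated_avg_cpu_utilization)
```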

# Queries with aggregate functions
<a name="sample-queries.iot-scenarios"></a>

The following is a sample dataset for an IoT scenario, used to illustrate queries with aggregate functions.

**Topics**
+ [Sample data](#sample-queries.iot-scenarios.example-data)
+ [Sample queries](#sample-queries.iot-scenarios.example-queries)

## Sample data
<a name="sample-queries.iot-scenarios.example-data"></a>

Timestream enables you to store and analyze IoT sensor data, such as the location, fuel consumption, speed, and load capacity of one or more fleets of trucks, to support effective fleet management. The following is the schema and some of the data of a table `iot_trucks` that stores telemetry such as location, fuel consumption, speed, and load capacity of trucks.


| time | truck_id | make | model | fleet | fuel_capacity | load_capacity | measure_name | measure_value::double | measure_value::varchar |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | fuel-reading | 65.2 | null |
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | load | 400.0 | null |
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | speed | 90.2 | null |
| 2019-12-04 19:00:00.000000000 | 123456781 | GMC | Astro | Alpha | 100 | 500 | location | null | 47.6062 N, 122.3321 W |
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | fuel-reading | 10.1 | null |
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | load | 950.3 | null |
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | speed | 50.8 | null |
| 2019-12-04 19:00:00.000000000 | 123456782 | Kenworth | W900 | Alpha | 150 | 1000 | location | null | 40.7128 N, 74.0060 W |

## Sample queries
<a name="sample-queries.iot-scenarios.example-queries"></a>

Get a list of all the sensor attributes and values being monitored for each truck in the fleet.

```
SELECT
    truck_id,
    fleet,
    fuel_capacity,
    model,
    load_capacity,
    make,
    measure_name
FROM "sampleDB".IoT
GROUP BY truck_id, fleet, fuel_capacity, model, load_capacity, make, measure_name
```
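Because the query above performs no aggregation, the `GROUP BY` acts purely as deduplication. An equivalent sketch expresses the same intent with `SELECT DISTINCT`:

```
SELECT DISTINCT
    truck_id,
    fleet,
    fuel_capacity,
    model,
    load_capacity,
    make,
    measure_name
FROM "sampleDB".IoT
```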

Get the most recent fuel reading of each truck in the fleet over the past 24 hours.

```
WITH latest_recorded_time AS (
    SELECT
        truck_id,
        max(time) as latest_time
    FROM "sampleDB".IoT
    WHERE measure_name = 'fuel-reading'
    AND time >= ago(24h)
    GROUP BY truck_id
)
SELECT
    b.truck_id,
    b.fleet,
    b.make,
    b.model,
    b.time,
    b.measure_value::double as last_reported_fuel_reading
FROM
latest_recorded_time a INNER JOIN "sampleDB".IoT b
ON a.truck_id = b.truck_id AND b.time = a.latest_time
WHERE b.measure_name = 'fuel-reading'
AND b.time > ago(24h)
ORDER BY b.truck_id
```
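When only the latest reading per truck is needed (without the other row attributes), the self-join can often be avoided with the `max_by` aggregate, which returns the value of its first argument at the row where its second argument is maximal. This is a sketch under the assumption that `max_by` behaves as in Presto/Trino SQL, on which the Timestream for LiveAnalytics query language is based:

```
SELECT
    truck_id,
    max(time) AS latest_time,
    max_by(measure_value::double, time) AS last_reported_fuel_reading -- value at the latest timestamp
FROM "sampleDB".IoT
WHERE measure_name = 'fuel-reading'
    AND time >= ago(24h)
GROUP BY truck_id
ORDER BY truck_id
```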

Identify trucks that have been running low on fuel (less than 10%) in the past 48 hours:

```
WITH low_fuel_trucks AS (
    SELECT time, truck_id, fleet, make, model, (measure_value::double/cast(fuel_capacity as double)*100) AS fuel_pct
    FROM "sampleDB".IoT
    WHERE time >= ago(48h)
    AND (measure_value::double/cast(fuel_capacity as double)*100) < 10
    AND measure_name = 'fuel-reading'
),
other_trucks AS (
SELECT time, truck_id, (measure_value::double/cast(fuel_capacity as double)*100) as remaining_fuel
    FROM "sampleDB".IoT
    WHERE time >= ago(48h)
    AND truck_id IN (SELECT truck_id FROM low_fuel_trucks)
    AND (measure_value::double/cast(fuel_capacity as double)*100) >= 10
    AND measure_name = 'fuel-reading'
),
trucks_that_refuelled AS (
    SELECT a.truck_id
    FROM low_fuel_trucks a JOIN other_trucks b
    ON a.truck_id = b.truck_id AND b.time >= a.time
)
SELECT DISTINCT truck_id, fleet, make, model, fuel_pct
FROM low_fuel_trucks
WHERE truck_id NOT IN (
    SELECT truck_id FROM trucks_that_refuelled
)
```

Compute the average load and maximum speed for each truck over the past week:

```
SELECT
    bin(time, 1d) as binned_time,
    fleet,
    truck_id,
    make,
    model,
    AVG(
        CASE WHEN measure_name = 'load' THEN measure_value::double ELSE NULL END
    ) AS avg_load_tons,
    MAX(
        CASE WHEN measure_name = 'speed' THEN measure_value::double ELSE NULL END
    ) AS max_speed_mph
FROM "sampleDB".IoT
WHERE time >= ago(7d)
AND measure_name IN ('load', 'speed')
GROUP BY fleet, truck_id, make, model, bin(time, 1d)
ORDER BY truck_id
```

Get the load efficiency for each truck over the past week:

```
WITH average_load_per_truck AS (
    SELECT
        truck_id,
        avg(measure_value::double)  AS avg_load
    FROM "sampleDB".IoT
    WHERE measure_name = 'load'
    AND time >= ago(7d)
    GROUP BY truck_id, fleet, load_capacity, make, model
),
truck_load_efficiency AS (
    SELECT
        a.truck_id,
        fleet,
        load_capacity,
        make,
        model,
        avg_load,
        measure_value::double,
        time,
        (measure_value::double*100)/avg_load as load_efficiency -- , approx_percentile(avg_load_pct, DOUBLE '0.9')
    FROM "sampleDB".IoT a JOIN average_load_per_truck b
    ON a.truck_id = b.truck_id
    WHERE a.measure_name = 'load'
)
SELECT
    truck_id,
    time,
    load_efficiency
FROM truck_load_efficiency
ORDER BY truck_id, time
```