

This documentation is a machine translation. In the event of any discrepancy between this translation and the original English text, the English version prevails.

# Supported SQL commands and functions for OpenSearch
<a name="supported-directquery-sql"></a>

The following reference tables show which SQL commands are supported in OpenSearch Discover for querying data in Amazon S3, Security Lake, or CloudWatch Logs, and which SQL commands are supported in CloudWatch Logs Insights. The SQL syntax supported in CloudWatch Logs Insights is the same as the SQL syntax supported in OpenSearch Discover, and is referred to as CloudWatch Logs in the following tables.

**Note**  
OpenSearch also supports SQL for querying data that has been ingested into OpenSearch and stored in indexes. That SQL dialect differs from the SQL used in direct queries and is referred to as [SQL on OpenSearch indexes](https://opensearch.org/docs/latest/search-plugins/sql/sql/index/).

**Topics**
+ [Commands](#supported-sql-data-retrieval)
+ [Functions](#supported-sql-functions)
+ [General SQL restrictions](#general-sql-restrictions)
+ [Additional information for CloudWatch Logs Insights users using OpenSearch SQL](#supported-sql-for-multi-log-queries)

## Commands
<a name="supported-sql-data-retrieval"></a>

**Note**  
In the example command column, replace `<tableName/logGroup>` as needed, depending on the data source you are querying.  
Example command: `SELECT Body , Operation FROM <tableName/logGroup>`
If you are querying Amazon S3 or Security Lake, use: `SELECT Body , Operation FROM table_name`
If you are querying CloudWatch Logs, use: ``SELECT Body , Operation FROM `LogGroupA` ``


| Command | Description | CloudWatch Logs | Amazon S3 | Security Lake |  Example command | 
| --- | --- | --- | --- | --- | --- | 
|  [SELECT clause](#supported-sql-select)  |  Displays projected values.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    method,<br />    status <br />FROM <br />    <tableName/logGroup></pre>  | 
| [WHERE clause](#supported-sql-where) |  Filters log events based on the field conditions you provide.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    <tableName/logGroup><br />WHERE <br />    status = 100</pre>  | 
| [GROUP BY clause](#supported-sql-group-by) |  Groups log events into categories and computes aggregate statistics such as averages.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    method,<br />    status,<br />    COUNT(*) AS request_count,<br />    SUM(bytes) AS total_bytes <br />FROM <br />    <tableName/logGroup> <br />GROUP BY <br />    method, <br />    status</pre>  | 
| [HAVING clause](#supported-sql-having) |  Filters results based on group-level conditions.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    method,<br />    status,<br />    COUNT(*) AS request_count,<br />    SUM(bytes) AS total_bytes <br />FROM <br />    <tableName/logGroup> <br />GROUP BY <br />    method,<br />    status<br />HAVING <br />    COUNT(*) > 5</pre>  | 
| [ORDER BY clause](#supported-sql-order-by) |  Sorts results by the fields in the order clause. You can sort in descending or ascending order.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    <tableName/logGroup> <br />ORDER BY <br />    status DESC</pre>  | 
|  [JOIN clause](#supported-sql-join)  ( `INNER` \| `CROSS` \| `LEFT OUTER` )  |  Combines the results of two tables based on a common field.  |  Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) (joins must use the `Inner` and `Left Outer` keywords; only one JOIN operation is supported per SELECT statement)  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) (joins must use the Inner, Left Outer, and Cross keywords) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) (joins must use the Inner, Left Outer, and Cross keywords) |  <pre>SELECT <br />    A.Body,<br />    B.Timestamp<br />FROM <br />    <tableNameA/logGroupA> AS A <br />INNER JOIN <br />    <tableNameB/logGroupB> AS B <br />    ON A.`requestId` = B.`requestId`</pre>  | 
| [LIMIT clause](#supported-sql-limit) |  Restricts the results to the first N rows.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    <tableName/logGroup> <br />LIMIT <br />    10</pre>  | 
| [CASE clause](#supported-sql-case) | Evaluates conditions and returns a value when the first condition is met. | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT<br />    method,<br />    status,<br />    CASE<br />        WHEN status BETWEEN 100 AND 199 THEN 'Informational'<br />        WHEN status BETWEEN 200 AND 299 THEN 'Success'<br />        WHEN status BETWEEN 300 AND 399 THEN 'Redirection'<br />        WHEN status BETWEEN 400 AND 499 THEN 'Client Error'<br />        WHEN status BETWEEN 500 AND 599 THEN 'Server Error'<br />        ELSE 'Unknown Status'<br />    END AS status_category,<br />    CASE method<br />        WHEN 'GET' THEN 'Read Operation'<br />        WHEN 'POST' THEN 'Create Operation'<br />        WHEN 'PUT' THEN 'Update Operation'<br />        WHEN 'PATCH' THEN 'Partial Update Operation'<br />        WHEN 'DELETE' THEN 'Delete Operation'<br />        ELSE 'Other Operation'<br />    END AS operation_type,<br />    bytes,<br />    datetime<br />FROM <tableName/logGroup>                         </pre>  | 
| [Common table expressions](#supported-sql-cte) | Creates a named temporary result set within a SELECT, INSERT, UPDATE, DELETE, or MERGE statement. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>WITH RequestStats AS (<br />    SELECT <br />        method,<br />        status,<br />        bytes,<br />        COUNT(*) AS request_count<br />    FROM <br />        tableName<br />    GROUP BY <br />        method,<br />        status,<br />        bytes<br />)<br />SELECT <br />    method,<br />    status,<br />    bytes,<br />    request_count <br />FROM <br />    RequestStats <br />WHERE <br />    bytes > 1000</pre>  | 
| [EXPLAIN](#supported-sql-explain) | Shows the execution plan of a SQL statement without actually running it. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>EXPLAIN<br />SELECT <br />    k,<br />    SUM(v)<br />FROM <br />    VALUES <br />        (1, 2),<br />        (1, 3) AS t(k, v)<br />GROUP BY <br />    k</pre>  | 
| [LATERAL SUBQUERY clause](#supported-sql-lateral-subquery) | Allows a subquery in the FROM clause to reference columns from preceding items in the same FROM clause. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre> SELECT <br />    * <br />FROM <br />    tableName<br />LATERAL (<br />    SELECT <br />        * <br />    FROM <br />        t2 <br />    WHERE <br />        t1.c1 = t2.c1<br />)</pre>  | 
| [LATERAL VIEW clause](#supported-sql-lateral-view) | Generates a virtual table by applying a table-generating function to each row of the base table. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    tableName<br />LATERAL VIEW <br />    EXPLODE(ARRAY(30, 60)) tableName AS c_age<br />LATERAL VIEW <br />    EXPLODE(ARRAY(40, 80)) AS d_age</pre>  | 
| [LIKE predicate](#supported-sql-like-predicate) | Matches a string against a pattern using wildcards. | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre> SELECT <br />    method,<br />    status,<br />    request,<br />    host <br />FROM <br />    <tableName/logGroup> <br />WHERE <br />    method LIKE 'D%'</pre>  | 
| [OFFSET](#supported-sql-offset) | Specifies the number of rows to skip before starting to return rows from the query. | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) when used together with a LIMIT clause in the query. For example: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/supported-directquery-sql.html) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre> SELECT <br />    method,<br />    status,<br />    bytes,<br />    datetime <br />FROM <br />    <tableName/logGroup> <br />ORDER BY <br />    datetime<br />OFFSET <br />    10 </pre>  | 
| [PIVOT clause](#supported-sql-pivot) | Rotates rows into columns, transforming data from a row-based format to a column-based format. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    (<br />        SELECT <br />            method,<br />            status,<br />            bytes<br />        FROM <br />            <tableName/logGroup><br />    ) AS SourceTable <br />PIVOT <br />(<br />    SUM(bytes) <br />    FOR method IN ('GET', 'POST', 'PATCH', 'PUT', 'DELETE')<br />) AS PivotTable</pre>  | 
| [Set operators](#supported-sql-set) | Combines the results of two or more SELECT statements (for example, UNION, INTERSECT, EXCEPT). | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    method,<br />    status,<br />    bytes<br />FROM <br />    <tableName/logGroup><br />WHERE <br />    status = '416'<br /><br />UNION<br /><br />SELECT <br />    method,<br />    status,<br />    bytes<br />FROM <br />    <tableName/logGroup><br />WHERE <br />    bytes > 20000</pre>  | 
| [SORT BY clause](#supported-sql-sort-by) | Specifies the order in which query results are returned. | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    method,<br />    status,<br />    bytes<br />FROM <br />    <tableName/logGroup><br />SORT BY <br />    bytes DESC</pre>  | 
| [UNPIVOT](#supported-sql-unpivot) | Rotates columns into rows, transforming data from a column-based format to a row-based format. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre> SELECT <br />    status,<br />    REPLACE(method, '_bytes', '') AS request_method,<br />    bytes,<br />    datetime <br />FROM <br />    PivotedData <br />UNPIVOT <br />(<br />    bytes <br />    FOR method IN <br />    (<br />        GET_bytes,<br />        POST_bytes,<br />        PATCH_bytes,<br />        PUT_bytes,<br />        DELETE_bytes<br />    )<br />) AS UnpivotedData</pre>  | 

## Functions
<a name="supported-sql-functions"></a>

**Note**  
In the example command column, replace `<tableName/logGroup>` as needed, depending on the data source you are querying.  
Example command: `SELECT Body , Operation FROM <tableName/logGroup>`
If you are querying Amazon S3 or Security Lake, use: `SELECT Body , Operation FROM table_name`
If you are querying CloudWatch Logs, use: ``SELECT Body , Operation FROM `LogGroupA` ``


| Available SQL syntax | Description | CloudWatch Logs | Amazon S3 | Security Lake |  Example command | 
| --- | --- | --- | --- | --- | --- | 
| [String functions](#supported-sql-string) |  Built-in functions that can manipulate and transform string and text data within SQL queries. For example, converting case, combining strings, extracting parts, and cleaning text.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    UPPER(method) AS upper_method,<br />    LOWER(host) AS lower_host <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Date and time functions](#supported-sql-date-time) |  Built-in functions for handling and transforming date and timestamp data in queries. For example, **date\_add**, **date\_format**, **datediff**, and **current\_date**.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    TO_TIMESTAMP(datetime) AS timestamp,<br />    TIMESTAMP_SECONDS(UNIX_TIMESTAMP(datetime)) AS from_seconds,<br />    UNIX_TIMESTAMP(datetime) AS to_unix,<br />    FROM_UTC_TIMESTAMP(datetime, 'PST') AS to_pst,<br />    TO_UTC_TIMESTAMP(datetime, 'EST') AS from_est <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Aggregate functions](#supported-sql-aggregate) |  Built-in functions that perform calculations on multiple rows to produce a single summarized value. For example, **sum**, **count**, **avg**, **max**, and **min**.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg)  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    COUNT(*) AS total_records,<br />    COUNT(DISTINCT method) AS unique_methods,<br />    SUM(bytes) AS total_bytes,<br />    AVG(bytes) AS avg_bytes,<br />    MIN(bytes) AS min_bytes,<br />    MAX(bytes) AS max_bytes <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Conditional functions](#supported-sql-conditional) |  Built-in functions that perform an action based on specified conditions, or evaluate expressions under certain conditions. For example, **CASE** and **IF**.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    CASE <br />        WHEN method = 'GET' AND bytes < 1000 THEN 'Small Read'<br />        WHEN method = 'POST' AND bytes > 10000 THEN 'Large Write'<br />        WHEN status >= 400 OR bytes = 0 THEN 'Problem'<br />        ELSE 'Normal'<br />    END AS request_type <br />FROM <br />    <tableName/logGroup></pre>  | 
| [JSON functions](#supported-sql-json) |  Built-in functions for parsing, extracting, modifying, and querying JSON-formatted data within SQL queries (for example, from\_json, to\_json, get\_json\_object, json\_tuple), enabling operations on JSON structures in datasets.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    FROM_JSON(<br />        @message, <br />        'STRUCT<<br />            host: STRING,<br />            user-identifier: STRING,<br />            datetime: STRING,<br />            method: STRING,<br />            status: INT,<br />            bytes: INT<br />        >'<br />    ) AS parsed_json <br />FROM <br />    <tableName/logGroup> </pre>  | 
| [Array functions](#supported-sql-array) |  Built-in functions for working with array-type columns in SQL queries, enabling operations such as accessing, modifying, and analyzing array data (for example, size, explode, array\_contains).  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    scores,<br />    size(scores) AS length,<br />    array_contains(scores, 90) AS has_90 <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Window functions](#supported-sql-window) | Built-in functions that perform calculations across a specified set of rows related to the current row (the window), enabling operations such as ranking, running totals, and moving averages (for example, ROW\_NUMBER, RANK, LAG, LEAD). | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre> SELECT <br />    field1,<br />    field2,<br />    RANK() OVER (ORDER BY field2 DESC) AS field2Rank <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Conversion functions](#supported-sql-conversion) |  Built-in functions used in SQL queries to convert data from one type to another, enabling data type casting and format conversion (for example, CAST, TO\_DATE, TO\_TIMESTAMP, BINARY).  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    CAST('123' AS INT) AS converted_number,<br />    CAST(123 AS STRING) AS converted_string <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Predicate functions](#supported-sql-predicate) |  Built-in functions that evaluate conditions and return a boolean (true/false) value based on specified criteria or patterns (for example, IN, LIKE, BETWEEN, IS NULL, EXISTS).  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    * <br />FROM <br />    <tableName/logGroup> <br />WHERE <br />    id BETWEEN 50000 AND 75000</pre>  | 
| [Map functions](#supported-sql-map) | Applies the specified function to each element of a collection, transforming the data into a new set of values. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    MAP_FILTER(<br />        MAP(<br />            'method', method,<br />            'status', CAST(status AS STRING),<br />            'bytes', CAST(bytes AS STRING)<br />        ),<br />        (k, v) -> k IN ('method', 'status') AND v != 'null'<br />    ) AS filtered_map <br />FROM <br />    <tableName/logGroup> <br />WHERE <br />    status = 100</pre>  | 
| [Mathematical functions](#supported-sql-math) | Performs mathematical operations on numeric data, such as computing averages, sums, or trigonometric values. | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    bytes,<br />    bytes + 1000 AS added,<br />    bytes - 1000 AS subtracted,<br />    bytes * 2 AS doubled,<br />    bytes / 1024 AS kilobytes,<br />    bytes % 1000 AS remainder <br />FROM <br />    <tableName/logGroup></pre>  | 
| [Multiple log group function](#multi-log-queries) |  Allows users to specify multiple log groups in a SQL SELECT statement.  | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Not applicable | Not applicable |  <pre>SELECT <br />    lg1.Column1,<br />    lg1.Column2 <br />FROM <br />    `logGroups(logGroupIdentifier: ['LogGroup1', 'LogGroup2'])` AS lg1 <br />WHERE <br />    lg1.Column3 = "Success"<br /></pre>  | 
| [Generator functions](#supported-sql-generator) | Creates iterator objects that yield a sequence of values, enabling memory-efficient processing of large datasets. | Not supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/negative_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) | Supported ![\[alt text not found\]](http://docs.aws.amazon.com/zh_cn/opensearch-service/latest/developerguide/images/success_icon.svg) |  <pre>SELECT <br />    explode(array(10, 20)) </pre>  | 

## General SQL restrictions
<a name="general-sql-restrictions"></a>

The following restrictions apply when you use OpenSearch SQL with CloudWatch Logs, Amazon S3, and Security Lake.

1. Only one JOIN operation can be used in a SELECT statement.

1. Only one level of nested subquery is supported.

1. Multiple-statement queries separated by semicolons are not supported.

1. Queries that contain field names that are identical except for case (such as field1 and FIELD1) are not supported.

   For example, the following query is not supported:

   ```
   Select AWSAccountId, awsaccountid from LogGroup
   ```

   However, the following query is supported because the field name (@logStream) is identical in both log groups:

   ```
   Select a.`@logStream`, b.`@logStream` from Table A INNER Join Table B on a.id = b.id 
   ```

1. Functions and expressions must operate on field names and be part of a SELECT statement with a log group specified in the FROM clause.

   For example, this query is not supported:

   ```
   SELECT cos(10) FROM LogGroup
   ```

   This query is supported:

   ```
   SELECT cos(field1) FROM LogGroup
   ```
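As a sketch of the nesting restriction above (the field and log group names are illustrative, not from the original examples), a query with a single level of subquery such as the following is supported; wrapping it in a further subquery would exceed the one-level limit:

```
SELECT method, status
FROM LogGroup
WHERE bytes > (SELECT AVG(bytes) FROM LogGroup)
```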

## Additional information for CloudWatch Logs Insights users using OpenSearch SQL
<a name="supported-sql-for-multi-log-queries"></a>

CloudWatch Logs supports OpenSearch SQL queries in the CloudWatch Logs Insights console, API, and CLI. CloudWatch Logs supports most commands, including SELECT, FROM, WHERE, GROUP BY, HAVING, JOINs, and nested queries, as well as JSON, mathematical, string, and conditional functions. However, CloudWatch Logs supports read operations only, so DDL and DML statements are not allowed. For a complete list of supported commands and functions, see the tables in the preceding sections.

### Multiple log group function
<a name="multi-log-queries"></a>

CloudWatch Logs Insights supports querying multiple log groups. To address this use case in SQL, you can use the `logGroups` command. This command is specific to querying data in CloudWatch Logs Insights involving one or more log groups. Use this syntax to query multiple log groups by specifying them in the command, instead of writing a query for each log group and combining the queries with the `UNION` command.

Syntax:

```
`logGroups(
    logGroupIdentifier: ['LogGroup1','LogGroup2', ...'LogGroupn']
)`
```

In this syntax, you can specify up to 50 log groups in the `logGroupIdentifier` parameter. To reference log groups in a monitoring account, use ARNs instead of `LogGroup` names.

Example query:

```
SELECT LG1.Column1, LG1.Column2 from `logGroups(
    logGroupIdentifier: ['LogGroup1', 'LogGroup2']
)` as LG1 
WHERE LG1.Column1 = 'ABC'
```

The following syntax, which specifies multiple CloudWatch log groups after the `FROM` clause, is not supported when querying logs:

```
SELECT Column1, Column2 FROM 'LogGroup1', 'LogGroup2', ...'LogGroupn' 
WHERE Column1 = 'ABC'
```
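The same query can instead be expressed with the supported `logGroups` syntax described above (the log group and column names are illustrative placeholders):

```
SELECT Column1, Column2 FROM `logGroups(
    logGroupIdentifier: ['LogGroup1', 'LogGroup2']
)`
WHERE Column1 = 'ABC'
```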

### Restrictions
<a name="restrictions"></a>

When using SQL or PPL commands, enclose certain fields in backticks in order to query them. Fields that contain special characters (anything non-alphabetic and non-numeric) need backticks. For example, enclose `@message`, `Operation.Export`, and `Test::Field` in backticks. Columns with purely alphabetic names do not need backticks.

Example query with simple fields:

```
SELECT SessionToken, Operation, StartTime  FROM `LogGroup-A`
LIMIT 1000;
```

The same query with backticks added:

```
SELECT `SessionToken`, `Operation`, `StartTime`  FROM `LogGroup-A`
LIMIT 1000;
```
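As a further sketch (the `@timestamp` field is an assumed example here; `@message` is mentioned earlier in this section), fields containing special characters keep their backticks even when mixed with simple fields in the same query:

```
SELECT `@timestamp`, `@message`, Operation FROM `LogGroup-A`
LIMIT 1000;
```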

For additional general restrictions that are not specific to CloudWatch Logs, see [General SQL restrictions](#general-sql-restrictions).

### Sample queries and quotas
<a name="samples"></a>

**Note**  
The following applies both to CloudWatch Logs Insights users and to OpenSearch users who query CloudWatch data.

For sample SQL queries that you can use with CloudWatch Logs, see **Saved and sample queries** in the Amazon CloudWatch Logs Insights console.

For information about the limits that apply when querying CloudWatch Logs from OpenSearch Service, see [CloudWatch Logs quotas](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html) in the Amazon CloudWatch Logs User Guide. Limits include the number of CloudWatch log groups you can query, the maximum number of concurrent queries you can run, the maximum query execution time, and the maximum number of rows returned in results. The limits are the same regardless of which language you use to query CloudWatch Logs (that is, OpenSearch PPL, SQL, or Logs Insights).

### SQL commands
<a name="supported-sql-commands-details"></a>

**Topics**
+ [String functions](#supported-sql-string)
+ [Date and time functions](#supported-sql-date-time)
+ [Aggregate functions](#supported-sql-aggregate)
+ [Conditional functions](#supported-sql-conditional)
+ [JSON functions](#supported-sql-json)
+ [Array functions](#supported-sql-array)
+ [Window functions](#supported-sql-window)
+ [Conversion functions](#supported-sql-conversion)
+ [Predicate functions](#supported-sql-predicate)
+ [Map functions](#supported-sql-map)
+ [Mathematical functions](#supported-sql-math)
+ [Generator functions](#supported-sql-generator)
+ [SELECT clause](#supported-sql-select)
+ [WHERE clause](#supported-sql-where)
+ [GROUP BY clause](#supported-sql-group-by)
+ [HAVING clause](#supported-sql-having)
+ [ORDER BY clause](#supported-sql-order-by)
+ [JOIN clause](#supported-sql-join)
+ [LIMIT clause](#supported-sql-limit)
+ [CASE clause](#supported-sql-case)
+ [Common table expressions](#supported-sql-cte)
+ [EXPLAIN](#supported-sql-explain)
+ [LATERAL SUBQUERY clause](#supported-sql-lateral-subquery)
+ [LATERAL VIEW clause](#supported-sql-lateral-view)
+ [LIKE predicate](#supported-sql-like-predicate)
+ [OFFSET](#supported-sql-offset)
+ [PIVOT clause](#supported-sql-pivot)
+ [Set operators](#supported-sql-set)
+ [SORT BY clause](#supported-sql-sort-by)
+ [UNPIVOT](#supported-sql-unpivot)

#### String functions
<a name="supported-sql-string"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).



| Function | Description | 
| --- | --- | 
| ascii(str) | Returns the numeric value of the first character of str. | 
| base64(bin) | Converts the argument from binary bin to a base64 string. | 
| bit\_length(expr) | Returns the bit length of string data or the number of bits of binary data. | 
| btrim(str) | Removes the leading and trailing space characters from str. | 
| btrim(str, trimStr) | Removes the leading and trailing trimStr characters from str. | 
| char(expr) | Returns the ASCII character whose binary value is equivalent to expr. If n is larger than 256, the result is equivalent to chr(n % 256). | 
| char\_length(expr) | Returns the character length of string data or the number of bytes of binary data. The length of string data includes trailing spaces. The length of binary data includes binary zeros. | 
| character\_length(expr) | Returns the character length of string data or the number of bytes of binary data. The length of string data includes trailing spaces. The length of binary data includes binary zeros. | 
| chr(expr) | Returns the ASCII character whose binary value is equivalent to expr. If n is larger than 256, the result is equivalent to chr(n % 256). | 
| concat\_ws(sep[, str \| array(str)]+) | Returns the concatenation of the strings separated by sep, skipping null values. | 
| contains(left, right) | Returns a boolean value. The value is True if right is found inside left. Returns NULL if either input expression is NULL. Otherwise, returns False. Both left and right must be of STRING or BINARY type. | 
| decode(bin, charset) | Decodes the first argument using the character set given as the second argument. | 
| decode(expr, search, result [, search, result ] ... [, default]) | Compares expr to each search value in order. If expr is equal to a search value, decode returns the corresponding result. If no match is found, the default value is returned. If the default value is omitted, null is returned. | 
| elt(n, input1, input2, ...) | Returns the n-th input; for example, returns input2 when n is 2. | 
| encode(str, charset) | Encodes the first argument using the character set given as the second argument. | 
| endswith(left, right) | Returns a boolean value. The value is True if left ends with right. Returns NULL if either input expression is NULL. Otherwise, returns False. Both left and right must be of STRING or BINARY type. | 
| find\_in\_set(str, str\_array) | Returns the index (1-based) of the given string (str) in the comma-delimited list (str\_array). Returns 0 if the string was not found, or if the given string (str) contains a comma. | 
| format\_number(expr1, expr2) | Formats the number expr1 like '#,###,###.##', rounded to expr2 decimal places. If expr2 is 0, the result has no decimal point or fractional part. expr2 also accepts a user-specified format. This is supposed to function like MySQL's FORMAT. | 
| format\_string(strfmt, obj, ...) | Returns a formatted string from printf-style format strings. | 
| initcap(str) | Returns str with the first letter of each word in uppercase. All other letters are in lowercase. Words are delimited by white space. | 
| instr(str, substr) | Returns the (1-based) index of the first occurrence of substr in str. | 
| lcase(str) | Returns str with all characters changed to lowercase. | 
| left(str, len) | Returns the leftmost len (len can be string type) characters from the string str. If len is less than or equal to 0, the result is an empty string. | 
| len(expr) | Returns the character length of string data or the number of bytes of binary data. The length of string data includes trailing spaces. The length of binary data includes binary zeros. | 
| length(expr) | Returns the character length of string data or the number of bytes of binary data. The length of string data includes trailing spaces. The length of binary data includes binary zeros. | 
| levenshtein(str1, str2[, threshold]) | Returns the Levenshtein distance between the two given strings. If a threshold is set and the distance is greater than the threshold, returns -1. | 
| locate(substr, str[, pos]) | Returns the position of the first occurrence of substr in str after position pos. The given pos and the return value are 1-based. | 
| lower(str) | Returns str with all characters changed to lowercase. | 
| lpad(str, len[, pad]) | Returns str, left-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters or bytes. If pad is not specified, str is padded on the left with space characters if it is a character string, and with zeros if it is a byte sequence. | 
| ltrim(str) | Removes the leading space characters from str. | 
| luhn\_check(str) | Checks that a string of digits is valid according to the Luhn algorithm. This checksum function is widely applied to credit card numbers and government identification numbers to distinguish valid numbers from mistyped or otherwise incorrect numbers. | 
| mask(input[, upperChar, lowerChar, digitChar, otherChar]) | Masks the given string value. The function replaces characters with 'X' or 'x', and numbers with 'n'. This can be useful for creating copies of tables with sensitive information removed. | 
| octet\_length(expr) | Returns the byte length of string data or the number of bytes of binary data. | 
| overlay(input, replace, pos[, len]) | Replaces input with replace, starting at pos and extending for len. | 
| position(substr, str[, pos]) | Returns the position of the first occurrence of substr in str after position pos. The given pos and the return value are 1-based. | 
| printf(strfmt, obj, ...) | Returns a formatted string from printf-style format strings. | 
| regexp\_count(str, regexp) | Returns a count of the number of times that the regular expression pattern regexp is matched in the string str. | 
| regexp\_extract(str, regexp[, idx]) | Extracts the first string in str that matches the regexp expression and corresponds to the regex group index. | 
| regexp\_extract\_all(str, regexp[, idx]) | Extracts all strings in str that match the regexp expression and correspond to the regex group index. | 
| regexp\_instr(str, regexp) | Searches a string for a regular expression and returns an integer that indicates the beginning position of the matched substring. Positions are 1-based, not 0-based. Returns 0 if no match is found. | 
| regexp\_replace(str, regexp, rep[, position]) | Replaces all substrings of str that match regexp with rep. | 
| regexp\_substr(str, regexp) | Returns the substring of str that matches the regular expression regexp. The result is null if the regular expression is not found. | 
| repeat(str, n) | Returns the string that repeats the given string n times. | 
| replace(str, search[, replace]) | Replaces all occurrences of search with replace. | 
| right(str, len) | Returns the rightmost len (len can be string type) characters from the string str. If len is less than or equal to 0, the result is an empty string. | 
| rpad(str, len[, pad]) | Returns str, right-padded with pad to a length of len. If str is longer than len, the return value is shortened to len characters. If pad is not specified, str is padded on the right with space characters if it is a character string, and with zeros if it is a binary string. | 
| rtrim(str) | Removes the trailing space characters from str. | 
| sentences(str[, lang, country]) | Splits str into an array of arrays of words. | 
| soundex(str) | Returns the Soundex code of the string. | 
| space(n) | Returns a string consisting of n spaces. | 
| split(str, regex, limit) | Splits str around occurrences that match regex, and returns an array with a length of at most limit. | 
| split\_part(str, delimiter, partNum) | Splits str by delimiter and returns the requested part of the split (1-based). Returns null if any input is null. Returns an empty string if partNum is out of the range of split parts. Throws an error if partNum is 0. If partNum is negative, the parts are counted backward from the end of the string. If the delimiter is an empty string, str is not split. | 
| startswith(left, right) | Returns a boolean value. The value is True if left starts with right. Returns NULL if either input expression is NULL. Otherwise, returns False. Both left and right must be of STRING or BINARY type. | 
| substr(str, pos[, len]) | Returns the substring of str that starts at pos and is of length len, or the slice of the byte array that starts at pos and is of length len. | 
| substr(str FROM pos[ FOR len]) | Returns the substring of str that starts at pos and is of length len, or the slice of the byte array that starts at pos and is of length len. | 
| substring(str, pos[, len]) | Returns the substring of str that starts at pos and is of length len, or the slice of the byte array that starts at pos and is of length len. | 
| substring(str FROM pos[ FOR len]) | Returns the substring of str that starts at pos and is of length len, or the slice of the byte array that starts at pos and is of length len. | 
| substring\_index(str, delim, count) | Returns the substring from str before count occurrences of the delimiter delim. If count is positive, everything to the left of the final delimiter (counting from the left) is returned. If count is negative, everything to the right of the final delimiter (counting from the right) is returned. The function substring\_index performs a case-sensitive match when searching for delim. | 
| to\_binary(str[, fmt]) | Converts the input str to a binary value based on the supplied fmt. fmt can be a case-insensitive string literal of 'hex', 'utf-8', 'utf8', or 'base64'. By default, if fmt is omitted, the binary format for conversion is 'hex'. The function returns NULL if at least one of the input parameters is NULL. | 
| to\_char(numberExpr, formatExpr) | Converts numberExpr to a string based on formatExpr. Throws an exception if the conversion fails. The format can consist of the following characters, case insensitive: '0' or '9': Specifies an expected digit between 0 and 9. A sequence of 0 or 9 in the format string matches a sequence of digits in the input value, generating a result string of the same length as the corresponding sequence in the format string. The result string is left-padded with zeros if the 0/9 sequence comprises more digits than the matching part of the decimal value, starts with 0, and is before the decimal point. Otherwise, it is padded with spaces. '.' or 'D': Specifies the position of the decimal point (optional, only allowed once). ',' or 'G': Specifies the position of the grouping (thousands) separator (,). There must be a 0 or 9 to the left and right of each grouping separator. | 
| to\_number(expr, fmt) | Converts the string 'expr' to a number based on the string format 'fmt'. Throws an exception if the conversion fails. The format can consist of the following characters, case insensitive: '0' or '9': Specifies an expected digit between 0 and 9. A sequence of 0 or 9 in the format string matches a sequence of digits in the input string. If the 0/9 sequence starts with 0 and is before the decimal point, it can only match a digit sequence of the same size. Otherwise, if the sequence starts with 9 or is after the decimal point, it can match a digit sequence that has the same or smaller size. '.' or 'D': Specifies the position of the decimal point (optional, only allowed once). ',' or 'G': Specifies the position of the grouping (thousands) separator (,). There must be a 0 or 9 to the left and right of each grouping separator. 'expr' must match the grouping separator relevant to the size of the number. | 
| to\_varchar(numberExpr, formatExpr) | Converts numberExpr to a string based on formatExpr. Throws an exception if the conversion fails. The format follows the same rules as to\_char: '0' or '9' specifies an expected digit between 0 and 9; '.' or 'D' specifies the position of the decimal point (optional, only allowed once); ',' or 'G' specifies the position of the grouping (thousands) separator, which must have a 0 or 9 to its left and right. | 
| translate(input, from, to) | Translates the input string by replacing the characters present in the from string with the corresponding characters in the to string. | 
| trim(str) | Removes the leading and trailing space characters from str. | 
| trim(BOTH FROM str) | Removes the leading and trailing space characters from str. | 
| trim(LEADING FROM str) | 删除 str 中首空格字符。 | 
| trim(TRAILING FROM str) | 删除 str 中结尾的空格字符。 | 
| trim(trimStr FROM str) | 删除 str 中开头和结尾的 trimStr 字符。 | 
| trim(BOTH trimStr FROM str) | 删除 str 中开头和结尾的 trimStr 字符。 | 
| trim(LEADING trimStr FROM str) | 删除 str 中开头的 trimStr 字符。 | 
| trim(TRAILING trimStr FROM str) | 删除 str 中结尾的 trimStr 字符。 | 
| try\$1to\$1binary(str[, fmt]) | 这是 to\$1binary 的特殊版本，执行相同操作，但在转换无法完成时返回 NULL 值而非引发错误。 | 
| try\$1to\$1number(expr, fmt) | 将字符串“expr”转换为基于字符串格式 fmt 的数字。如果字符串“expr”与预期格式不匹配，则返回 NULL。该格式遵循与 to\$1number 函数相同的语义。 | 
| ucase(str) | 返回 str，其中所有字符均转换为大写。 | 
| unbase64(str) | 将参数从 base 64 字符串 str 转换为二进制。 | 
| upper(str) | 返回 str，其中所有字符均转换为大写。 | 
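
The functions above can be combined in a single query. The following is a hedged sketch, not taken from the examples below: `app_logs` and its `message` field are hypothetical placeholder names, so substitute your own `<tableName/logGroup>` and field names. It uses `regexp_count` to find log events containing card-number-like tokens, `regexp_extract` to pull the token out, and `mask` to redact it:

```
-- Hypothetical sketch: `app_logs` and `message` are placeholder names;
-- replace them with your own data source and fields.
SELECT
    mask(regexp_extract(message, '(\\d{4}-\\d{4}-\\d{4}-\\d{4})', 1)) AS masked_card,
    substring(message, 1, 50) AS message_preview
FROM
    app_logs
WHERE
    regexp_count(message, '\\d{4}-\\d{4}-\\d{4}-\\d{4}') > 0
```

Because `mask` replaces digits with 'n' by default, the extracted token would appear as `nnnn-nnnn-nnnn-nnnn` in the results.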



**Examples**

```
-- ascii
SELECT ascii('222');
+----------+
|ascii(222)|
+----------+
|        50|
+----------+
SELECT ascii(2);
+--------+
|ascii(2)|
+--------+
|      50|
+--------+
-- base64
SELECT base64('Feathers');
+-----------------+
|base64(Feathers)|
+-----------------+
|     RmVhdGhlcnM=|
+-----------------+
SELECT base64(x'537061726b2053514c');
+-----------------------------+
|base64(X'537061726B2053514C')|
+-----------------------------+
|                 U3BhcmsgU1FM|
+-----------------------------+
-- bit_length
SELECT bit_length('Feathers');
+---------------------+
|bit_length(Feathers)|
+---------------------+
|                   64|
+---------------------+
SELECT bit_length(x'537061726b2053514c');
+---------------------------------+
|bit_length(X'537061726B2053514C')|
+---------------------------------+
|                               72|
+---------------------------------+
-- btrim
SELECT btrim('    Feathers   ');
+----------------------+
|btrim(    Feathers   )|
+----------------------+
|              Feathers|
+----------------------+
SELECT btrim(encode('    Feathers   ', 'utf-8'));
+-------------------------------------+
|btrim(encode(    Feathers   , utf-8))|
+-------------------------------------+
|                             Feathers|
+-------------------------------------+
SELECT btrim('Feathers', 'Fe');
+-------------------+
|btrim(Feathers, Fe)|
+-------------------+
|             athers|
+-------------------+
SELECT btrim(encode('Feathers', 'utf-8'), encode('Fe', 'utf-8'));
+-------------------------------------------------+
|btrim(encode(Feathers, utf-8), encode(Fe, utf-8))|
+-------------------------------------------------+
|                                           athers|
+-------------------------------------------------+
-- char
SELECT char(65);
+--------+
|char(65)|
+--------+
|       A|
+--------+
-- char_length
SELECT char_length('Feathers ');
+----------------------+
|char_length(Feathers )|
+----------------------+
|                     9|
+----------------------+
SELECT char_length(x'537061726b2053514c');
+----------------------------------+
|char_length(X'537061726B2053514C')|
+----------------------------------+
|                                 9|
+----------------------------------+
SELECT CHAR_LENGTH('Feathers ');
+-----------------------+
|char_length(Feathers )|
+-----------------------+
|                     9|
+-----------------------+
SELECT CHARACTER_LENGTH('Feathers ');
+----------------------------+
|character_length(Feathers )|
+----------------------------+
|                          9|
+----------------------------+
-- character_length
SELECT character_length('Feathers ');
+----------------------------+
|character_length(Feathers )|
+----------------------------+
|                          9|
+----------------------------+
SELECT character_length(x'537061726b2053514c');
+---------------------------------------+
|character_length(X'537061726B2053514C')|
+---------------------------------------+
|                                      9|
+---------------------------------------+
SELECT CHAR_LENGTH('Feathers ');
+-----------------------+
|char_length(Feathers )|
+-----------------------+
|                     9|
+-----------------------+
SELECT CHARACTER_LENGTH('Feathers ');
+----------------------------+
|character_length(Feathers )|
+----------------------------+
|                          9|
+----------------------------+
-- chr
SELECT chr(65);
+-------+
|chr(65)|
+-------+
|      A|
+-------+
-- concat_ws
SELECT concat_ws(' ', 'Fea', 'thers');
+------------------------+
|concat_ws( , Fea, thers)|
+------------------------+
|               Fea thers|
+------------------------+
SELECT concat_ws('s');
+------------+
|concat_ws(s)|
+------------+
|            |
+------------+
SELECT concat_ws('/', 'foo', null, 'bar');
+----------------------------+
|concat_ws(/, foo, NULL, bar)|
+----------------------------+
|                     foo/bar|
+----------------------------+
SELECT concat_ws(null, 'Fea', 'thers');
+---------------------------+
|concat_ws(NULL, Fea, thers)|
+---------------------------+
|                       NULL|
+---------------------------+
-- contains
SELECT contains('Feathers', 'Fea');
+--------------------------+
|contains(Feathers, Fea)|
+--------------------------+
|                      true|
+--------------------------+
SELECT contains('Feathers', 'SQL');
+--------------------------+
|contains(Feathers, SQL)|
+--------------------------+
|                     false|
+--------------------------+
SELECT contains('Feathers', null);
+-------------------------+
|contains(Feathers, NULL)|
+-------------------------+
|                     NULL|
+-------------------------+
SELECT contains(x'537061726b2053514c', x'537061726b');
+----------------------------------------------+
|contains(X'537061726B2053514C', X'537061726B')|
+----------------------------------------------+
|                                          true|
+----------------------------------------------+
-- decode
SELECT decode(encode('abc', 'utf-8'), 'utf-8');
+---------------------------------+
|decode(encode(abc, utf-8), utf-8)|
+---------------------------------+
|                              abc|
+---------------------------------+
SELECT decode(2, 1, 'Southlake', 2, 'San Francisco', 3, 'New Jersey', 4, 'Seattle', 'Non domestic');
+----------------------------------------------------------------------------------+
|decode(2, 1, Southlake, 2, San Francisco, 3, New Jersey, 4, Seattle, Non domestic)|
+----------------------------------------------------------------------------------+
|                                                                     San Francisco|
+----------------------------------------------------------------------------------+
SELECT decode(6, 1, 'Southlake', 2, 'San Francisco', 3, 'New Jersey', 4, 'Seattle', 'Non domestic');
+----------------------------------------------------------------------------------+
|decode(6, 1, Southlake, 2, San Francisco, 3, New Jersey, 4, Seattle, Non domestic)|
+----------------------------------------------------------------------------------+
|                                                                      Non domestic|
+----------------------------------------------------------------------------------+
SELECT decode(6, 1, 'Southlake', 2, 'San Francisco', 3, 'New Jersey', 4, 'Seattle');
+--------------------------------------------------------------------+
|decode(6, 1, Southlake, 2, San Francisco, 3, New Jersey, 4, Seattle)|
+--------------------------------------------------------------------+
|                                                                NULL|
+--------------------------------------------------------------------+
SELECT decode(null, 6, 'Fea', NULL, 'thers', 4, 'rock');
+-------------------------------------------+
|decode(NULL, 6, Fea, NULL, thers, 4, rock)|
+-------------------------------------------+
|                                      thers|
+-------------------------------------------+
-- elt
SELECT elt(1, 'scala', 'java');
+-------------------+
|elt(1, scala, java)|
+-------------------+
|              scala|
+-------------------+
SELECT elt(2, 'a', 1);
+------------+
|elt(2, a, 1)|
+------------+
|           1|
+------------+
-- encode
SELECT encode('abc', 'utf-8');
+------------------+
|encode(abc, utf-8)|
+------------------+
|        [61 62 63]|
+------------------+
-- endswith
SELECT endswith('Feathers', 'ers');
+------------------------+
|endswith(Feathers, ers)|
+------------------------+
|                    true|
+------------------------+
SELECT endswith('Feathers', 'SQL');
+--------------------------+
|endswith(Feathers, SQL)|
+--------------------------+
|                     false|
+--------------------------+
SELECT endswith('Feathers', null);
+-------------------------+
|endswith(Feathers, NULL)|
+-------------------------+
|                     NULL|
+-------------------------+
SELECT endswith(x'537061726b2053514c', x'537061726b');
+----------------------------------------------+
|endswith(X'537061726B2053514C', X'537061726B')|
+----------------------------------------------+
|                                         false|
+----------------------------------------------+
SELECT endswith(x'537061726b2053514c', x'53514c');
+------------------------------------------+
|endswith(X'537061726B2053514C', X'53514C')|
+------------------------------------------+
|                                      true|
+------------------------------------------+
-- find_in_set
SELECT find_in_set('ab','abc,b,ab,c,def');
+-------------------------------+
|find_in_set(ab, abc,b,ab,c,def)|
+-------------------------------+
|                              3|
+-------------------------------+
-- format_number
SELECT format_number(12332.123456, 4);
+------------------------------+
|format_number(12332.123456, 4)|
+------------------------------+
|                   12,332.1235|
+------------------------------+
SELECT format_number(12332.123456, '##################.###');
+---------------------------------------------------+
|format_number(12332.123456, ##################.###)|
+---------------------------------------------------+
|                                          12332.123|
+---------------------------------------------------+
-- format_string
SELECT format_string("Hello World %d %s", 100, "days");
+-------------------------------------------+
|format_string(Hello World %d %s, 100, days)|
+-------------------------------------------+
|                       Hello World 100 days|
+-------------------------------------------+
-- initcap
SELECT initcap('Feathers');
+------------------+
|initcap(Feathers)|
+------------------+
|         Feathers|
+------------------+
-- instr
SELECT instr('Feathers', 'ers');
+--------------------+
|instr(Feathers, ers)|
+--------------------+
|                   6|
+--------------------+
-- lcase
SELECT lcase('Feathers');
+---------------+
|lcase(Feathers)|
+---------------+
|       feathers|
+---------------+
-- left
SELECT left('Feathers', 3);
+------------------+
|left(Feathers, 3)|
+------------------+
|               Fea|
+------------------+
SELECT left(encode('Feathers', 'utf-8'), 3);
+--------------------------------+
|left(encode(Feathers, utf-8), 3)|
+--------------------------------+
|                      [46 65 61]|
+--------------------------------+
-- len
SELECT len('Feathers ');
+---------------+
|len(Feathers )|
+---------------+
|             9|
+---------------+
SELECT len(x'537061726b2053514c');
+--------------------------+
|len(X'537061726B2053514C')|
+--------------------------+
|                         9|
+--------------------------+
SELECT CHAR_LENGTH('Feathers ');
+-----------------------+
|char_length(Feathers )|
+-----------------------+
|                     9|
+-----------------------+
SELECT CHARACTER_LENGTH('Feathers ');
+----------------------------+
|character_length(Feathers )|
+----------------------------+
|                          9|
+----------------------------+
-- length
SELECT length('Feathers ');
+------------------+
|length(Feathers )|
+------------------+
|                9|
+------------------+
SELECT length(x'537061726b2053514c');
+-----------------------------+
|length(X'537061726B2053514C')|
+-----------------------------+
|                            9|
+-----------------------------+
SELECT CHAR_LENGTH('Feathers ');
+-----------------------+
|char_length(Feathers )|
+-----------------------+
|                     9|
+-----------------------+
SELECT CHARACTER_LENGTH('Feathers ');
+----------------------------+
|character_length(Feathers )|
+----------------------------+
|                          9|
+----------------------------+
-- levenshtein
SELECT levenshtein('kitten', 'sitting');
+----------------------------+
|levenshtein(kitten, sitting)|
+----------------------------+
|                           3|
+----------------------------+
SELECT levenshtein('kitten', 'sitting', 2);
+-------------------------------+
|levenshtein(kitten, sitting, 2)|
+-------------------------------+
|                             -1|
+-------------------------------+
-- locate
SELECT locate('bar', 'foobarbar');
+-------------------------+
|locate(bar, foobarbar, 1)|
+-------------------------+
|                        4|
+-------------------------+
SELECT locate('bar', 'foobarbar', 5);
+-------------------------+
|locate(bar, foobarbar, 5)|
+-------------------------+
|                        7|
+-------------------------+
SELECT POSITION('bar' IN 'foobarbar');
+-------------------------+
|locate(bar, foobarbar, 1)|
+-------------------------+
|                        4|
+-------------------------+
-- lower
SELECT lower('Feathers');
+---------------+
|lower(Feathers)|
+---------------+
|       feathers|
+---------------+
-- lpad
SELECT lpad('hi', 5, '??');
+---------------+
|lpad(hi, 5, ??)|
+---------------+
|          ???hi|
+---------------+
SELECT lpad('hi', 1, '??');
+---------------+
|lpad(hi, 1, ??)|
+---------------+
|              h|
+---------------+
SELECT lpad('hi', 5);
+--------------+
|lpad(hi, 5,  )|
+--------------+
|            hi|
+--------------+
SELECT hex(lpad(unhex('aabb'), 5));
+--------------------------------+
|hex(lpad(unhex(aabb), 5, X'00'))|
+--------------------------------+
|                      000000AABB|
+--------------------------------+
SELECT hex(lpad(unhex('aabb'), 5, unhex('1122')));
+--------------------------------------+
|hex(lpad(unhex(aabb), 5, unhex(1122)))|
+--------------------------------------+
|                            112211AABB|
+--------------------------------------+
-- ltrim
SELECT ltrim('    Feathers   ');
+----------------------+
|ltrim(    Feathers   )|
+----------------------+
|           Feathers   |
+----------------------+
-- luhn_check
SELECT luhn_check('8112189876');
+----------------------+
|luhn_check(8112189876)|
+----------------------+
|                  true|
+----------------------+
SELECT luhn_check('79927398713');
+-----------------------+
|luhn_check(79927398713)|
+-----------------------+
|                   true|
+-----------------------+
SELECT luhn_check('79927398714');
+-----------------------+
|luhn_check(79927398714)|
+-----------------------+
|                  false|
+-----------------------+
-- mask
SELECT mask('abcd-EFGH-8765-4321');
+----------------------------------------+
|mask(abcd-EFGH-8765-4321, X, x, n, NULL)|
+----------------------------------------+
|                     xxxx-XXXX-nnnn-nnnn|
+----------------------------------------+
SELECT mask('abcd-EFGH-8765-4321', 'Q');
+----------------------------------------+
|mask(abcd-EFGH-8765-4321, Q, x, n, NULL)|
+----------------------------------------+
|                     xxxx-QQQQ-nnnn-nnnn|
+----------------------------------------+
SELECT mask('AbCD123-@$#', 'Q', 'q');
+--------------------------------+
|mask(AbCD123-@$#, Q, q, n, NULL)|
+--------------------------------+
|                     QqQQnnn-@$#|
+--------------------------------+
SELECT mask('AbCD123-@$#');
+--------------------------------+
|mask(AbCD123-@$#, X, x, n, NULL)|
+--------------------------------+
|                     XxXXnnn-@$#|
+--------------------------------+
SELECT mask('AbCD123-@$#', 'Q');
+--------------------------------+
|mask(AbCD123-@$#, Q, x, n, NULL)|
+--------------------------------+
|                     QxQQnnn-@$#|
+--------------------------------+
SELECT mask('AbCD123-@$#', 'Q', 'q');
+--------------------------------+
|mask(AbCD123-@$#, Q, q, n, NULL)|
+--------------------------------+
|                     QqQQnnn-@$#|
+--------------------------------+
SELECT mask('AbCD123-@$#', 'Q', 'q', 'd');
+--------------------------------+
|mask(AbCD123-@$#, Q, q, d, NULL)|
+--------------------------------+
|                     QqQQddd-@$#|
+--------------------------------+
SELECT mask('AbCD123-@$#', 'Q', 'q', 'd', 'o');
+-----------------------------+
|mask(AbCD123-@$#, Q, q, d, o)|
+-----------------------------+
|                  QqQQdddoooo|
+-----------------------------+
SELECT mask('AbCD123-@$#', NULL, 'q', 'd', 'o');
+--------------------------------+
|mask(AbCD123-@$#, NULL, q, d, o)|
+--------------------------------+
|                     AqCDdddoooo|
+--------------------------------+
SELECT mask('AbCD123-@$#', NULL, NULL, 'd', 'o');
+-----------------------------------+
|mask(AbCD123-@$#, NULL, NULL, d, o)|
+-----------------------------------+
|                        AbCDdddoooo|
+-----------------------------------+
SELECT mask('AbCD123-@$#', NULL, NULL, NULL, 'o');
+--------------------------------------+
|mask(AbCD123-@$#, NULL, NULL, NULL, o)|
+--------------------------------------+
|                           AbCD123oooo|
+--------------------------------------+
SELECT mask(NULL, NULL, NULL, NULL, 'o');
+-------------------------------+
|mask(NULL, NULL, NULL, NULL, o)|
+-------------------------------+
|                           NULL|
+-------------------------------+
SELECT mask(NULL);
+-------------------------+
|mask(NULL, X, x, n, NULL)|
+-------------------------+
|                     NULL|
+-------------------------+
SELECT mask('AbCD123-@$#', NULL, NULL, NULL, NULL);
+-----------------------------------------+
|mask(AbCD123-@$#, NULL, NULL, NULL, NULL)|
+-----------------------------------------+
|                              AbCD123-@$#|
+-----------------------------------------+
-- octet_length
SELECT octet_length('Feathers');
+-----------------------+
|octet_length(Feathers)|
+-----------------------+
|                      8|
+-----------------------+
SELECT octet_length(x'537061726b2053514c');
+-----------------------------------+
|octet_length(X'537061726B2053514C')|
+-----------------------------------+
|                                  9|
+-----------------------------------+
-- overlay
SELECT overlay('Feathers' PLACING '_' FROM 6);
+---------------------------+
|overlay(Feathers, _, 6, -1)|
+---------------------------+
|                   Feath_rs|
+---------------------------+
SELECT overlay('Feathers' PLACING 'ures' FROM 5);
+------------------------------+
|overlay(Feathers, ures, 5, -1)|
+------------------------------+
|                      Features|
+------------------------------+
-- position
SELECT position('bar', 'foobarbar');
+---------------------------+
|position(bar, foobarbar, 1)|
+---------------------------+
|                          4|
+---------------------------+
SELECT position('bar', 'foobarbar', 5);
+---------------------------+
|position(bar, foobarbar, 5)|
+---------------------------+
|                          7|
+---------------------------+
SELECT POSITION('bar' IN 'foobarbar');
+-------------------------+
|locate(bar, foobarbar, 1)|
+-------------------------+
|                        4|
+-------------------------+
-- printf
SELECT printf("Hello World %d %s", 100, "days");
+------------------------------------+
|printf(Hello World %d %s, 100, days)|
+------------------------------------+
|                Hello World 100 days|
+------------------------------------+
-- regexp_count
SELECT regexp_count('Steven Jones and Stephen Smith are the best players', 'Ste(v|ph)en');
+------------------------------------------------------------------------------+
|regexp_count(Steven Jones and Stephen Smith are the best players, Ste(v|ph)en)|
+------------------------------------------------------------------------------+
|                                                                             2|
+------------------------------------------------------------------------------+
SELECT regexp_count('abcdefghijklmnopqrstuvwxyz', '[a-z]{3}');
+--------------------------------------------------+
|regexp_count(abcdefghijklmnopqrstuvwxyz, [a-z]{3})|
+--------------------------------------------------+
|                                                 8|
+--------------------------------------------------+
-- regexp_extract
SELECT regexp_extract('100-200', '(\\d+)-(\\d+)', 1);
+---------------------------------------+
|regexp_extract(100-200, (\d+)-(\d+), 1)|
+---------------------------------------+
|                                    100|
+---------------------------------------+
-- regexp_extract_all
SELECT regexp_extract_all('100-200, 300-400', '(\\d+)-(\\d+)', 1);
+----------------------------------------------------+
|regexp_extract_all(100-200, 300-400, (\d+)-(\d+), 1)|
+----------------------------------------------------+
|                                          [100, 300]|
+----------------------------------------------------+
-- regexp_instr
SELECT regexp_instr('user@opensearch.org', '@[^.]*');
+----------------------------------------------+
|regexp_instr(user@opensearch.org, @[^.]*, 0)|
+----------------------------------------------+
|                                             5|
+----------------------------------------------+
-- regexp_replace
SELECT regexp_replace('100-200', '(\\d+)', 'num');
+--------------------------------------+
|regexp_replace(100-200, (\d+), num, 1)|
+--------------------------------------+
|                               num-num|
+--------------------------------------+
-- regexp_substr
SELECT regexp_substr('Steven Jones and Stephen Smith are the best players', 'Ste(v|ph)en');
+-------------------------------------------------------------------------------+
|regexp_substr(Steven Jones and Stephen Smith are the best players, Ste(v|ph)en)|
+-------------------------------------------------------------------------------+
|                                                                         Steven|
+-------------------------------------------------------------------------------+
SELECT regexp_substr('Steven Jones and Stephen Smith are the best players', 'Jeck');
+------------------------------------------------------------------------+
|regexp_substr(Steven Jones and Stephen Smith are the best players, Jeck)|
+------------------------------------------------------------------------+
|                                                                    NULL|
+------------------------------------------------------------------------+
-- repeat
SELECT repeat('123', 2);
+--------------+
|repeat(123, 2)|
+--------------+
|        123123|
+--------------+
-- replace
SELECT replace('ABCabc', 'abc', 'DEF');
+-------------------------+
|replace(ABCabc, abc, DEF)|
+-------------------------+
|                   ABCDEF|
+-------------------------+
-- right
SELECT right('Feathers', 3);
+-------------------+
|right(Feathers, 3)|
+-------------------+
|                ers|
+-------------------+
-- rpad
SELECT rpad('hi', 5, '??');
+---------------+
|rpad(hi, 5, ??)|
+---------------+
|          hi???|
+---------------+
SELECT rpad('hi', 1, '??');
+---------------+
|rpad(hi, 1, ??)|
+---------------+
|              h|
+---------------+
SELECT rpad('hi', 5);
+--------------+
|rpad(hi, 5,  )|
+--------------+
|         hi   |
+--------------+
SELECT hex(rpad(unhex('aabb'), 5));
+--------------------------------+
|hex(rpad(unhex(aabb), 5, X'00'))|
+--------------------------------+
|                      AABB000000|
+--------------------------------+
SELECT hex(rpad(unhex('aabb'), 5, unhex('1122')));
+--------------------------------------+
|hex(rpad(unhex(aabb), 5, unhex(1122)))|
+--------------------------------------+
|                            AABB112211|
+--------------------------------------+
-- rtrim
SELECT rtrim('    Feathers   ');
+----------------------+
|rtrim(    Feathers   )|
+----------------------+
|              Feathers|
+----------------------+
-- sentences
SELECT sentences('Hi there! Good morning.');
+--------------------------------------+
|sentences(Hi there! Good morning., , )|
+--------------------------------------+
|                  [[Hi, there], [Go...|
+--------------------------------------+
-- soundex
SELECT soundex('Miller');
+---------------+
|soundex(Miller)|
+---------------+
|           M460|
+---------------+
-- space
SELECT concat(space(2), '1');
+-------------------+
|concat(space(2), 1)|
+-------------------+
|                  1|
+-------------------+
-- split
SELECT split('oneAtwoBthreeC', '[ABC]');
+--------------------------------+
|split(oneAtwoBthreeC, [ABC], -1)|
+--------------------------------+
|             [one, two, three, ]|
+--------------------------------+
SELECT split('oneAtwoBthreeC', '[ABC]', -1);
+--------------------------------+
|split(oneAtwoBthreeC, [ABC], -1)|
+--------------------------------+
|             [one, two, three, ]|
+--------------------------------+
SELECT split('oneAtwoBthreeC', '[ABC]', 2);
+-------------------------------+
|split(oneAtwoBthreeC, [ABC], 2)|
+-------------------------------+
|              [one, twoBthreeC]|
+-------------------------------+
-- split_part
SELECT split_part('11.12.13', '.', 3);
+--------------------------+
|split_part(11.12.13, ., 3)|
+--------------------------+
|                        13|
+--------------------------+
-- startswith
SELECT startswith('Feathers', 'Fea');
+----------------------------+
|startswith(Feathers, Fea)|
+----------------------------+
|                        true|
+----------------------------+
SELECT startswith('Feathers', 'SQL');
+--------------------------+
|startswith(Feathers, SQL)|
+--------------------------+
|                     false|
+--------------------------+
SELECT startswith('Feathers', null);
+---------------------------+
|startswith(Feathers, NULL)|
+---------------------------+
|                       NULL|
+---------------------------+
SELECT startswith(x'537061726b2053514c', x'537061726b');
+------------------------------------------------+
|startswith(X'537061726B2053514C', X'537061726B')|
+------------------------------------------------+
|                                            true|
+------------------------------------------------+
SELECT startswith(x'537061726b2053514c', x'53514c');
+--------------------------------------------+
|startswith(X'537061726B2053514C', X'53514C')|
+--------------------------------------------+
|                                       false|
+--------------------------------------------+
-- substr
SELECT substr('Feathers', 5);
+-------------------------------+
|substr(Feathers, 5, 2147483647)|
+-------------------------------+
|                           hers|
+-------------------------------+
SELECT substr('Feathers', -3);
+---------------------------------+
|substr(Feathers, -3, 2147483647)|
+---------------------------------+
|                              ers|
+---------------------------------+
SELECT substr('Feathers', 5, 1);
+-----------------------+
|substr(Feathers, 5, 1)|
+-----------------------+
|                      h|
+-----------------------+
SELECT substr('Feathers' FROM 5);
+----------------------------------+
|substring(Feathers, 5, 2147483647)|
+----------------------------------+
|                              hers|
+----------------------------------+
SELECT substr('Feathers' FROM -3);
+------------------------------------+
|substring(Feathers, -3, 2147483647)|
+------------------------------------+
|                                 ers|
+------------------------------------+
SELECT substr('Feathers' FROM 5 FOR 1);
+--------------------------+
|substring(Feathers, 5, 1)|
+--------------------------+
|                         h|
+--------------------------+
-- substring
SELECT substring('Feathers', 5);
+----------------------------------+
|substring(Feathers, 5, 2147483647)|
+----------------------------------+
|                              hers|
+----------------------------------+
SELECT substring('Feathers', -3);
+------------------------------------+
|substring(Feathers, -3, 2147483647)|
+------------------------------------+
|                                 ers|
+------------------------------------+
SELECT substring('Feathers', 5, 1);
+--------------------------+
|substring(Feathers, 5, 1)|
+--------------------------+
|                         h|
+--------------------------+
SELECT substring('Feathers' FROM 5);
+----------------------------------+
|substring(Feathers, 5, 2147483647)|
+----------------------------------+
|                              hers|
+----------------------------------+
SELECT substring('Feathers' FROM -3);
+------------------------------------+
|substring(Feathers, -3, 2147483647)|
+------------------------------------+
|                                 ers|
+------------------------------------+
SELECT substring('Feathers' FROM 5 FOR 1);
+--------------------------+
|substring(Feathers, 5, 1)|
+--------------------------+
|                         h|
+--------------------------+
-- substring_index
SELECT substring_index('www.apache.org', '.', 2);
+-------------------------------------+
|substring_index(www.apache.org, ., 2)|
+-------------------------------------+
|                           www.apache|
+-------------------------------------+
-- to_binary
SELECT to_binary('abc', 'utf-8');
+---------------------+
|to_binary(abc, utf-8)|
+---------------------+
|           [61 62 63]|
+---------------------+
-- to_char
SELECT to_char(454, '999');
+-----------------+
|to_char(454, 999)|
+-----------------+
|              454|
+-----------------+
SELECT to_char(454.00, '000D00');
+-----------------------+
|to_char(454.00, 000D00)|
+-----------------------+
|                 454.00|
+-----------------------+
SELECT to_char(12454, '99G999');
+----------------------+
|to_char(12454, 99G999)|
+----------------------+
|                12,454|
+----------------------+
SELECT to_char(78.12, '$99.99');
+----------------------+
|to_char(78.12, $99.99)|
+----------------------+
|                $78.12|
+----------------------+
SELECT to_char(-12454.8, '99G999D9S');
+----------------------------+
|to_char(-12454.8, 99G999D9S)|
+----------------------------+
|                   12,454.8-|
+----------------------------+
-- to_number
SELECT to_number('454', '999');
+-------------------+
|to_number(454, 999)|
+-------------------+
|                454|
+-------------------+
SELECT to_number('454.00', '000.00');
+-------------------------+
|to_number(454.00, 000.00)|
+-------------------------+
|                   454.00|
+-------------------------+
SELECT to_number('12,454', '99,999');
+-------------------------+
|to_number(12,454, 99,999)|
+-------------------------+
|                    12454|
+-------------------------+
SELECT to_number('$78.12', '$99.99');
+-------------------------+
|to_number($78.12, $99.99)|
+-------------------------+
|                    78.12|
+-------------------------+
SELECT to_number('12,454.8-', '99,999.9S');
+-------------------------------+
|to_number(12,454.8-, 99,999.9S)|
+-------------------------------+
|                       -12454.8|
+-------------------------------+
-- to_varchar
SELECT to_varchar(454, '999');
+-----------------+
|to_char(454, 999)|
+-----------------+
|              454|
+-----------------+
SELECT to_varchar(454.00, '000D00');
+-----------------------+
|to_char(454.00, 000D00)|
+-----------------------+
|                 454.00|
+-----------------------+
SELECT to_varchar(12454, '99G999');
+----------------------+
|to_char(12454, 99G999)|
+----------------------+
|                12,454|
+----------------------+
SELECT to_varchar(78.12, '$99.99');
+----------------------+
|to_char(78.12, $99.99)|
+----------------------+
|                $78.12|
+----------------------+
SELECT to_varchar(-12454.8, '99G999D9S');
+----------------------------+
|to_char(-12454.8, 99G999D9S)|
+----------------------------+
|                   12,454.8-|
+----------------------------+
-- translate
SELECT translate('AaBbCc', 'abc', '123');
+---------------------------+
|translate(AaBbCc, abc, 123)|
+---------------------------+
|                     A1B2C3|
+---------------------------+
-- try_to_binary
SELECT try_to_binary('abc', 'utf-8');
+-------------------------+
|try_to_binary(abc, utf-8)|
+-------------------------+
|               [61 62 63]|
+-------------------------+
SELECT try_to_binary('a!', 'base64');
+-------------------------+
|try_to_binary(a!, base64)|
+-------------------------+
|                     NULL|
+-------------------------+
SELECT try_to_binary('abc', 'invalidFormat');
+---------------------------------+
|try_to_binary(abc, invalidFormat)|
+---------------------------------+
|                             NULL|
+---------------------------------+
-- try_to_number
SELECT try_to_number('454', '999');
+-----------------------+
|try_to_number(454, 999)|
+-----------------------+
|                    454|
+-----------------------+
SELECT try_to_number('454.00', '000.00');
+-----------------------------+
|try_to_number(454.00, 000.00)|
+-----------------------------+
|                       454.00|
+-----------------------------+
SELECT try_to_number('12,454', '99,999');
+-----------------------------+
|try_to_number(12,454, 99,999)|
+-----------------------------+
|                        12454|
+-----------------------------+
SELECT try_to_number('$78.12', '$99.99');
+-----------------------------+
|try_to_number($78.12, $99.99)|
+-----------------------------+
|                        78.12|
+-----------------------------+
SELECT try_to_number('12,454.8-', '99,999.9S');
+-----------------------------------+
|try_to_number(12,454.8-, 99,999.9S)|
+-----------------------------------+
|                           -12454.8|
+-----------------------------------+
-- ucase
SELECT ucase('Feathers');
+---------------+
|ucase(Feathers)|
+---------------+
|       FEATHERS|
+---------------+
-- unbase64
SELECT unbase64('U3BhcmsgU1FM');
+----------------------+
|unbase64(U3BhcmsgU1FM)|
+----------------------+
|  [53 70 61 72 6B 2...|
+----------------------+
-- upper
SELECT upper('Feathers');
+---------------+
|upper(Feathers)|
+---------------+
|       FEATHERS|
+---------------+
```
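The `try_to_binary` examples above illustrate the general contract of the `try_*` variants: invalid input yields NULL instead of an error. As a rough illustration only (not the query engine's implementation), a minimal Python sketch of that strict-versus-lenient contrast:

```python
import base64

def to_binary(s, fmt):
    """Strict conversion: raises on invalid input, like SQL to_binary."""
    if fmt == "utf-8":
        return s.encode("utf-8")
    if fmt == "base64":
        # validate=True rejects characters outside the base64 alphabet
        return base64.b64decode(s, validate=True)
    raise ValueError(f"unsupported format: {fmt}")

def try_to_binary(s, fmt):
    """Lenient variant: returns None (SQL NULL) instead of raising."""
    try:
        return to_binary(s, fmt)
    except Exception:
        return None

print(try_to_binary("abc", "utf-8"))          # b'abc' (bytes 61 62 63)
print(try_to_binary("a!", "base64"))          # None: '!' is not valid base64
print(try_to_binary("abc", "invalidFormat"))  # None: unknown format
```

The same pattern explains the `try_to_number` rows above: the parse logic is shared with the strict function, and only the error handling differs.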

#### Date and time functions
<a name="supported-sql-date-time"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).



| Function | Description | 
| --- | --- | 
| add\_months(start\_date, num\_months) | Returns the date that is num\_months after start\_date. | 
| convert\_timezone([sourceTz, ]targetTz, sourceTs) | Converts the timestamp without time zone sourceTs from the sourceTz time zone to targetTz. | 
| curdate() | Returns the current date at the start of query evaluation. All calls of curdate within the same query return the same value. | 
| current\_date() | Returns the current date at the start of query evaluation. All calls of current\_date within the same query return the same value. | 
| current\_date | Returns the current date at the start of query evaluation. | 
| current\_timestamp() | Returns the current timestamp at the start of query evaluation. All calls of current\_timestamp within the same query return the same value. | 
| current\_timestamp | Returns the current timestamp at the start of query evaluation. | 
| current\_timezone() | Returns the current session's local time zone. | 
| date\_add(start\_date, num\_days) | Returns the date that is num\_days after start\_date. | 
| date\_diff(endDate, startDate) | Returns the number of days from startDate to endDate. | 
| date\_format(timestamp, fmt) | Converts timestamp to a string value in the format specified by the date format fmt. | 
| date\_from\_unix\_date(days) | Creates a date from the number of days since 1970-01-01. | 
| date\_part(field, source) | Extracts a part of the date/timestamp or interval source. | 
| date\_sub(start\_date, num\_days) | Returns the date that is num\_days before start\_date. | 
| date\_trunc(fmt, ts) | Returns timestamp ts truncated to the unit specified by the format model fmt. | 
| dateadd(start\_date, num\_days) | Returns the date that is num\_days after start\_date. | 
| datediff(endDate, startDate) | Returns the number of days from startDate to endDate. | 
| datepart(field, source) | Extracts a part of the date/timestamp or interval source. | 
| day(date) | Returns the day of the month of the date/timestamp. | 
| dayofmonth(date) | Returns the day of the month of the date/timestamp. | 
| dayofweek(date) | Returns the day of the week of the date/timestamp (1 = Sunday, 2 = Monday, ..., 7 = Saturday). | 
| dayofyear(date) | Returns the day of the year of the date/timestamp. | 
| extract(field FROM source) | Extracts a part of the date/timestamp or interval source. | 
| from\_unixtime(unix\_time[, fmt]) | Returns unix\_time in the specified fmt. | 
| from\_utc\_timestamp(timestamp, timezone) | Given a timestamp such as '2017-07-14 02:40:00.0', interprets it as a time in UTC and renders it as a timestamp in the given time zone. For example, 'GMT+1' yields '2017-07-14 03:40:00.0'. | 
| hour(timestamp) | Returns the hour component of the string/timestamp. | 
| last\_day(date) | Returns the last day of the month that the date belongs to. | 
| localtimestamp() | Returns the current timestamp without time zone at the start of query evaluation. All calls of localtimestamp within the same query return the same value. | 
| localtimestamp | Returns the current local date-time in the session time zone at the start of query evaluation. | 
| make\_date(year, month, day) | Creates a date from the year, month, and day fields. | 
| make\_dt\_interval([days[, hours[, mins[, secs]]]]) | Makes a DayTimeIntervalType duration from days, hours, mins, and secs. | 
| make\_interval([years[, months[, weeks[, days[, hours[, mins[, secs]]]]]]]) | Makes an interval from years, months, weeks, days, hours, mins, and secs. | 
| make\_timestamp(year, month, day, hour, min, sec[, timezone]) | Creates a timestamp from the year, month, day, hour, min, sec, and timezone fields. | 
| make\_timestamp\_ltz(year, month, day, hour, min, sec[, timezone]) | Creates a timestamp with local time zone from the year, month, day, hour, min, sec, and timezone fields. | 
| make\_timestamp\_ntz(year, month, day, hour, min, sec) | Creates a local date-time from the year, month, day, hour, min, and sec fields. | 
| make\_ym\_interval([years[, months]]) | Makes a year-month interval from years and months. | 
| minute(timestamp) | Returns the minute component of the string/timestamp. | 
| month(date) | Returns the month component of the date/timestamp. | 
| months\_between(timestamp1, timestamp2[, roundOff]) | If timestamp1 is later than timestamp2, the result is positive. If timestamp1 and timestamp2 are on the same day of the month, or both are the last day of the month, the time of day is ignored. Otherwise, the difference is calculated based on 31 days per month and rounded to 8 digits, unless roundOff=false. | 
| next\_day(start\_date, day\_of\_week) | Returns the first date that is later than start\_date and named as indicated. Returns NULL if at least one of the input parameters is NULL. | 
| now() | Returns the current timestamp at the start of query evaluation. | 
| quarter(date) | Returns the quarter of the year for the date, in the range 1 to 4. | 
| second(timestamp) | Returns the second component of the string/timestamp. | 
| session\_window(time\_column, gap\_duration) | Generates a session window given a timestamp-specifying column and a gap duration. See "Types of time windows" in the Structured Streaming guide for a detailed explanation and examples. | 
| timestamp\_micros(microseconds) | Creates a timestamp from the number of microseconds since the UTC epoch. | 
| timestamp\_millis(milliseconds) | Creates a timestamp from the number of milliseconds since the UTC epoch. | 
| timestamp\_seconds(seconds) | Creates a timestamp from the number of seconds (which can be fractional) since the UTC epoch. | 
| to\_date(date\_str[, fmt]) | Parses the date\_str expression with the fmt expression to a date. Returns null with invalid input. By default, follows casting rules to a date if fmt is omitted. | 
| to\_timestamp(timestamp\_str[, fmt]) | Parses the timestamp\_str expression with the fmt expression to a timestamp. Returns null with invalid input. By default, follows casting rules to a timestamp if fmt is omitted. | 
| to\_timestamp\_ltz(timestamp\_str[, fmt]) | Parses the timestamp\_str expression with the fmt expression to a timestamp with local time zone. Returns null with invalid input. By default, follows casting rules to a timestamp if fmt is omitted. | 
| to\_timestamp\_ntz(timestamp\_str[, fmt]) | Parses the timestamp\_str expression with the fmt expression to a timestamp without time zone. Returns null with invalid input. By default, follows casting rules to a timestamp if fmt is omitted. | 
| to\_unix\_timestamp(timeExp[, fmt]) | Returns the UNIX timestamp of the given time. | 
| to\_utc\_timestamp(timestamp, timezone) | Given a timestamp such as '2017-07-14 02:40:00.0', interprets it as a time in the given time zone and renders it as a timestamp in UTC. For example, 'GMT+1' yields '2017-07-14 01:40:00.0'. | 
| trunc(date, fmt) | Returns date with the time portion of the day truncated to the unit specified by the format model fmt. | 
| try\_to\_timestamp(timestamp\_str[, fmt]) | Parses the timestamp\_str expression with the fmt expression to a timestamp. Returns NULL on invalid input instead of raising an error. | 
| unix\_date(date) | Returns the number of days since 1970-01-01. | 
| unix\_micros(timestamp) | Returns the number of microseconds since 1970-01-01 00:00:00 UTC. | 
| unix\_millis(timestamp) | Returns the number of milliseconds since 1970-01-01 00:00:00 UTC. Truncates higher levels of precision. | 
| unix\_seconds(timestamp) | Returns the number of seconds since 1970-01-01 00:00:00 UTC. Truncates higher levels of precision. | 
| unix\_timestamp([timeExp[, fmt]]) | Returns the UNIX timestamp of the current or specified time. | 
| weekday(date) | Returns the day of the week of the date/timestamp (0 = Monday, 1 = Tuesday, ..., 6 = Sunday). | 
| weekofyear(date) | Returns the week of the year of the given date. A week is considered to start on a Monday, and week 1 is the first week with more than 3 days. | 
| window(time\_column, window\_duration[, slide\_duration[, start\_time]]) | Buckets rows into one or more time windows given a timestamp-specifying column. Window starts are inclusive but window ends are exclusive; for example, 12:05 is in the window [12:05,12:10) but not in [12:00,12:05). Windows support microsecond precision; windows in the order of months are not supported. See "Window Operations on Event Time" in the Structured Streaming guide for a detailed explanation and examples. | 
| window\_time(window\_column) | Extracts the time value from a time/session window column, which can be used as the event-time value of the window. The extracted time is (window.end - 1), reflecting that aggregating windows have an exclusive upper bound: [start, end). See "Window Operations on Event Time" in the Structured Streaming guide for a detailed explanation and examples. | 
| year(date) | Returns the year component of the date/timestamp. | 
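Note that `dayofweek` and `weekday` use different numbering conventions (1 = Sunday versus 0 = Monday). A minimal Python sketch of the two conventions, using the standard `datetime` module (an illustration only, not the query engine's code):

```python
from datetime import date

def sql_dayofweek(d):
    """1 = Sunday, 2 = Monday, ..., 7 = Saturday (the dayofweek convention)."""
    # Python's isoweekday() is 1 = Monday ... 7 = Sunday, so remap.
    return d.isoweekday() % 7 + 1

def sql_weekday(d):
    """0 = Monday, 1 = Tuesday, ..., 6 = Sunday (the weekday convention)."""
    return d.weekday()

d = date(2009, 7, 30)    # a Thursday
print(sql_dayofweek(d))  # 5, matching SELECT dayofweek('2009-07-30')
print(sql_weekday(d))    # 3
```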

**Examples**

```
-- add_months
SELECT add_months('2016-08-31', 1);
+-------------------------+
|add_months(2016-08-31, 1)|
+-------------------------+
|               2016-09-30|
+-------------------------+
-- convert_timezone
SELECT convert_timezone('Europe/Brussels', 'America/Los_Angeles', timestamp_ntz'2021-12-06 00:00:00');
+-------------------------------------------------------------------------------------------+
|convert_timezone(Europe/Brussels, America/Los_Angeles, TIMESTAMP_NTZ '2021-12-06 00:00:00')|
+-------------------------------------------------------------------------------------------+
|                                                                        2021-12-05 15:00:00|
+-------------------------------------------------------------------------------------------+
SELECT convert_timezone('Europe/Brussels', timestamp_ntz'2021-12-05 15:00:00');
+------------------------------------------------------------------------------------------+
|convert_timezone(current_timezone(), Europe/Brussels, TIMESTAMP_NTZ '2021-12-05 15:00:00')|
+------------------------------------------------------------------------------------------+
|                                                                       2021-12-05 07:00:00|
+------------------------------------------------------------------------------------------+
-- curdate
SELECT curdate();
+--------------+
|current_date()|
+--------------+
|    2024-02-24|
+--------------+
-- current_date
SELECT current_date();
+--------------+
|current_date()|
+--------------+
|    2024-02-24|
+--------------+
SELECT current_date;
+--------------+
|current_date()|
+--------------+
|    2024-02-24|
+--------------+
-- current_timestamp
SELECT current_timestamp();
+--------------------+
| current_timestamp()|
+--------------------+
|2024-02-24 16:36:...|
+--------------------+
SELECT current_timestamp;
+--------------------+
| current_timestamp()|
+--------------------+
|2024-02-24 16:36:...|
+--------------------+
-- current_timezone
SELECT current_timezone();
+------------------+
|current_timezone()|
+------------------+
|        Asia/Seoul|
+------------------+
-- date_add
SELECT date_add('2016-07-30', 1);
+-----------------------+
|date_add(2016-07-30, 1)|
+-----------------------+
|             2016-07-31|
+-----------------------+
-- date_diff
SELECT date_diff('2009-07-31', '2009-07-30');
+---------------------------------+
|date_diff(2009-07-31, 2009-07-30)|
+---------------------------------+
|                                1|
+---------------------------------+
SELECT date_diff('2009-07-30', '2009-07-31');
+---------------------------------+
|date_diff(2009-07-30, 2009-07-31)|
+---------------------------------+
|                               -1|
+---------------------------------+
-- date_format
SELECT date_format('2016-04-08', 'y');
+--------------------------+
|date_format(2016-04-08, y)|
+--------------------------+
|                      2016|
+--------------------------+
-- date_from_unix_date
SELECT date_from_unix_date(1);
+----------------------+
|date_from_unix_date(1)|
+----------------------+
|            1970-01-02|
+----------------------+
-- date_part
SELECT date_part('YEAR', TIMESTAMP '2019-08-12 01:00:00.123456');
+-------------------------------------------------------+
|date_part(YEAR, TIMESTAMP '2019-08-12 01:00:00.123456')|
+-------------------------------------------------------+
|                                                   2019|
+-------------------------------------------------------+
SELECT date_part('week', timestamp'2019-08-12 01:00:00.123456');
+-------------------------------------------------------+
|date_part(week, TIMESTAMP '2019-08-12 01:00:00.123456')|
+-------------------------------------------------------+
|                                                     33|
+-------------------------------------------------------+
SELECT date_part('doy', DATE'2019-08-12');
+---------------------------------+
|date_part(doy, DATE '2019-08-12')|
+---------------------------------+
|                              224|
+---------------------------------+
SELECT date_part('SECONDS', timestamp'2019-10-01 00:00:01.000001');
+----------------------------------------------------------+
|date_part(SECONDS, TIMESTAMP '2019-10-01 00:00:01.000001')|
+----------------------------------------------------------+
|                                                  1.000001|
+----------------------------------------------------------+
SELECT date_part('days', interval 5 days 3 hours 7 minutes);
+-------------------------------------------------+
|date_part(days, INTERVAL '5 03:07' DAY TO MINUTE)|
+-------------------------------------------------+
|                                                5|
+-------------------------------------------------+
SELECT date_part('seconds', interval 5 hours 30 seconds 1 milliseconds 1 microseconds);
+-------------------------------------------------------------+
|date_part(seconds, INTERVAL '05:00:30.001001' HOUR TO SECOND)|
+-------------------------------------------------------------+
|                                                    30.001001|
+-------------------------------------------------------------+
SELECT date_part('MONTH', INTERVAL '2021-11' YEAR TO MONTH);
+--------------------------------------------------+
|date_part(MONTH, INTERVAL '2021-11' YEAR TO MONTH)|
+--------------------------------------------------+
|                                                11|
+--------------------------------------------------+
SELECT date_part('MINUTE', INTERVAL '123 23:55:59.002001' DAY TO SECOND);
+---------------------------------------------------------------+
|date_part(MINUTE, INTERVAL '123 23:55:59.002001' DAY TO SECOND)|
+---------------------------------------------------------------+
|                                                             55|
+---------------------------------------------------------------+
-- date_sub
SELECT date_sub('2016-07-30', 1);
+-----------------------+
|date_sub(2016-07-30, 1)|
+-----------------------+
|             2016-07-29|
+-----------------------+
-- date_trunc
SELECT date_trunc('YEAR', '2015-03-05T09:32:05.359');
+-----------------------------------------+
|date_trunc(YEAR, 2015-03-05T09:32:05.359)|
+-----------------------------------------+
|                      2015-01-01 00:00:00|
+-----------------------------------------+
SELECT date_trunc('MM', '2015-03-05T09:32:05.359');
+---------------------------------------+
|date_trunc(MM, 2015-03-05T09:32:05.359)|
+---------------------------------------+
|                    2015-03-01 00:00:00|
+---------------------------------------+
SELECT date_trunc('DD', '2015-03-05T09:32:05.359');
+---------------------------------------+
|date_trunc(DD, 2015-03-05T09:32:05.359)|
+---------------------------------------+
|                    2015-03-05 00:00:00|
+---------------------------------------+
SELECT date_trunc('HOUR', '2015-03-05T09:32:05.359');
+-----------------------------------------+
|date_trunc(HOUR, 2015-03-05T09:32:05.359)|
+-----------------------------------------+
|                      2015-03-05 09:00:00|
+-----------------------------------------+
SELECT date_trunc('MILLISECOND', '2015-03-05T09:32:05.123456');
+---------------------------------------------------+
|date_trunc(MILLISECOND, 2015-03-05T09:32:05.123456)|
+---------------------------------------------------+
|                               2015-03-05 09:32:...|
+---------------------------------------------------+
-- dateadd
SELECT dateadd('2016-07-30', 1);
+-----------------------+
|date_add(2016-07-30, 1)|
+-----------------------+
|             2016-07-31|
+-----------------------+
-- datediff
SELECT datediff('2009-07-31', '2009-07-30');
+--------------------------------+
|datediff(2009-07-31, 2009-07-30)|
+--------------------------------+
|                               1|
+--------------------------------+
SELECT datediff('2009-07-30', '2009-07-31');
+--------------------------------+
|datediff(2009-07-30, 2009-07-31)|
+--------------------------------+
|                              -1|
+--------------------------------+
-- datepart
SELECT datepart('YEAR', TIMESTAMP '2019-08-12 01:00:00.123456');
+----------------------------------------------------------+
|datepart(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456')|
+----------------------------------------------------------+
|                                                      2019|
+----------------------------------------------------------+
SELECT datepart('week', timestamp'2019-08-12 01:00:00.123456');
+----------------------------------------------------------+
|datepart(week FROM TIMESTAMP '2019-08-12 01:00:00.123456')|
+----------------------------------------------------------+
|                                                        33|
+----------------------------------------------------------+
SELECT datepart('doy', DATE'2019-08-12');
+------------------------------------+
|datepart(doy FROM DATE '2019-08-12')|
+------------------------------------+
|                                 224|
+------------------------------------+
SELECT datepart('SECONDS', timestamp'2019-10-01 00:00:01.000001');
+-------------------------------------------------------------+
|datepart(SECONDS FROM TIMESTAMP '2019-10-01 00:00:01.000001')|
+-------------------------------------------------------------+
|                                                     1.000001|
+-------------------------------------------------------------+
SELECT datepart('days', interval 5 days 3 hours 7 minutes);
+----------------------------------------------------+
|datepart(days FROM INTERVAL '5 03:07' DAY TO MINUTE)|
+----------------------------------------------------+
|                                                   5|
+----------------------------------------------------+
SELECT datepart('seconds', interval 5 hours 30 seconds 1 milliseconds 1 microseconds);
+----------------------------------------------------------------+
|datepart(seconds FROM INTERVAL '05:00:30.001001' HOUR TO SECOND)|
+----------------------------------------------------------------+
|                                                       30.001001|
+----------------------------------------------------------------+
SELECT datepart('MONTH', INTERVAL '2021-11' YEAR TO MONTH);
+-----------------------------------------------------+
|datepart(MONTH FROM INTERVAL '2021-11' YEAR TO MONTH)|
+-----------------------------------------------------+
|                                                   11|
+-----------------------------------------------------+
SELECT datepart('MINUTE', INTERVAL '123 23:55:59.002001' DAY TO SECOND);
+------------------------------------------------------------------+
|datepart(MINUTE FROM INTERVAL '123 23:55:59.002001' DAY TO SECOND)|
+------------------------------------------------------------------+
|                                                                55|
+------------------------------------------------------------------+
-- day
SELECT day('2009-07-30');
+---------------+
|day(2009-07-30)|
+---------------+
|             30|
+---------------+
-- dayofmonth
SELECT dayofmonth('2009-07-30');
+----------------------+
|dayofmonth(2009-07-30)|
+----------------------+
|                    30|
+----------------------+
-- dayofweek
SELECT dayofweek('2009-07-30');
+---------------------+
|dayofweek(2009-07-30)|
+---------------------+
|                    5|
+---------------------+
-- dayofyear
SELECT dayofyear('2016-04-09');
+---------------------+
|dayofyear(2016-04-09)|
+---------------------+
|                  100|
+---------------------+
-- extract
SELECT extract(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456');
+---------------------------------------------------------+
|extract(YEAR FROM TIMESTAMP '2019-08-12 01:00:00.123456')|
+---------------------------------------------------------+
|                                                     2019|
+---------------------------------------------------------+
SELECT extract(week FROM timestamp'2019-08-12 01:00:00.123456');
+---------------------------------------------------------+
|extract(week FROM TIMESTAMP '2019-08-12 01:00:00.123456')|
+---------------------------------------------------------+
|                                                       33|
+---------------------------------------------------------+
SELECT extract(doy FROM DATE'2019-08-12');
+-----------------------------------+
|extract(doy FROM DATE '2019-08-12')|
+-----------------------------------+
|                                224|
+-----------------------------------+
SELECT extract(SECONDS FROM timestamp'2019-10-01 00:00:01.000001');
+------------------------------------------------------------+
|extract(SECONDS FROM TIMESTAMP '2019-10-01 00:00:01.000001')|
+------------------------------------------------------------+
|                                                    1.000001|
+------------------------------------------------------------+
SELECT extract(days FROM interval 5 days 3 hours 7 minutes);
+---------------------------------------------------+
|extract(days FROM INTERVAL '5 03:07' DAY TO MINUTE)|
+---------------------------------------------------+
|                                                  5|
+---------------------------------------------------+
SELECT extract(seconds FROM interval 5 hours 30 seconds 1 milliseconds 1 microseconds);
+---------------------------------------------------------------+
|extract(seconds FROM INTERVAL '05:00:30.001001' HOUR TO SECOND)|
+---------------------------------------------------------------+
|                                                      30.001001|
+---------------------------------------------------------------+
SELECT extract(MONTH FROM INTERVAL '2021-11' YEAR TO MONTH);
+----------------------------------------------------+
|extract(MONTH FROM INTERVAL '2021-11' YEAR TO MONTH)|
+----------------------------------------------------+
|                                                  11|
+----------------------------------------------------+
SELECT extract(MINUTE FROM INTERVAL '123 23:55:59.002001' DAY TO SECOND);
+-----------------------------------------------------------------+
|extract(MINUTE FROM INTERVAL '123 23:55:59.002001' DAY TO SECOND)|
+-----------------------------------------------------------------+
|                                                               55|
+-----------------------------------------------------------------+
-- from_unixtime
SELECT from_unixtime(0, 'yyyy-MM-dd HH:mm:ss');
+-------------------------------------+
|from_unixtime(0, yyyy-MM-dd HH:mm:ss)|
+-------------------------------------+
|                  1970-01-01 09:00:00|
+-------------------------------------+
SELECT from_unixtime(0);
+-------------------------------------+
|from_unixtime(0, yyyy-MM-dd HH:mm:ss)|
+-------------------------------------+
|                  1970-01-01 09:00:00|
+-------------------------------------+
-- from_utc_timestamp
SELECT from_utc_timestamp('2016-08-31', 'Asia/Seoul');
+------------------------------------------+
|from_utc_timestamp(2016-08-31, Asia/Seoul)|
+------------------------------------------+
|                       2016-08-31 09:00:00|
+------------------------------------------+
-- hour
SELECT hour('2009-07-30 12:58:59');
+-------------------------+
|hour(2009-07-30 12:58:59)|
+-------------------------+
|                       12|
+-------------------------+
-- last_day
SELECT last_day('2009-01-12');
+--------------------+
|last_day(2009-01-12)|
+--------------------+
|          2009-01-31|
+--------------------+
-- localtimestamp
SELECT localtimestamp();
+--------------------+
|    localtimestamp()|
+--------------------+
|2024-02-24 16:36:...|
+--------------------+
-- make_date
SELECT make_date(2013, 7, 15);
+----------------------+
|make_date(2013, 7, 15)|
+----------------------+
|            2013-07-15|
+----------------------+
SELECT make_date(2019, 7, NULL);
+------------------------+
|make_date(2019, 7, NULL)|
+------------------------+
|                    NULL|
+------------------------+
-- make_dt_interval
SELECT make_dt_interval(1, 12, 30, 01.001001);
+-------------------------------------+
|make_dt_interval(1, 12, 30, 1.001001)|
+-------------------------------------+
|                 INTERVAL '1 12:30...|
+-------------------------------------+
SELECT make_dt_interval(2);
+-----------------------------------+
|make_dt_interval(2, 0, 0, 0.000000)|
+-----------------------------------+
|               INTERVAL '2 00:00...|
+-----------------------------------+
SELECT make_dt_interval(100, null, 3);
+----------------------------------------+
|make_dt_interval(100, NULL, 3, 0.000000)|
+----------------------------------------+
|                                    NULL|
+----------------------------------------+
-- make_interval
SELECT make_interval(100, 11, 1, 1, 12, 30, 01.001001);
+----------------------------------------------+
|make_interval(100, 11, 1, 1, 12, 30, 1.001001)|
+----------------------------------------------+
|                          100 years 11 mont...|
+----------------------------------------------+
SELECT make_interval(100, null, 3);
+----------------------------------------------+
|make_interval(100, NULL, 3, 0, 0, 0, 0.000000)|
+----------------------------------------------+
|                                          NULL|
+----------------------------------------------+
SELECT make_interval(0, 1, 0, 1, 0, 0, 100.000001);
+-------------------------------------------+
|make_interval(0, 1, 0, 1, 0, 0, 100.000001)|
+-------------------------------------------+
|                       1 months 1 days 1...|
+-------------------------------------------+
-- make_timestamp
SELECT make_timestamp(2014, 12, 28, 6, 30, 45.887);
+-------------------------------------------+
|make_timestamp(2014, 12, 28, 6, 30, 45.887)|
+-------------------------------------------+
|                       2014-12-28 06:30:...|
+-------------------------------------------+
SELECT make_timestamp(2014, 12, 28, 6, 30, 45.887, 'CET');
+------------------------------------------------+
|make_timestamp(2014, 12, 28, 6, 30, 45.887, CET)|
+------------------------------------------------+
|                            2014-12-28 14:30:...|
+------------------------------------------------+
SELECT make_timestamp(2019, 6, 30, 23, 59, 60);
+---------------------------------------+
|make_timestamp(2019, 6, 30, 23, 59, 60)|
+---------------------------------------+
|                    2019-07-01 00:00:00|
+---------------------------------------+
SELECT make_timestamp(2019, 6, 30, 23, 59, 1);
+--------------------------------------+
|make_timestamp(2019, 6, 30, 23, 59, 1)|
+--------------------------------------+
|                   2019-06-30 23:59:01|
+--------------------------------------+
SELECT make_timestamp(null, 7, 22, 15, 30, 0);
+--------------------------------------+
|make_timestamp(NULL, 7, 22, 15, 30, 0)|
+--------------------------------------+
|                                  NULL|
+--------------------------------------+
-- make_timestamp_ltz
SELECT make_timestamp_ltz(2014, 12, 28, 6, 30, 45.887);
+-----------------------------------------------+
|make_timestamp_ltz(2014, 12, 28, 6, 30, 45.887)|
+-----------------------------------------------+
|                           2014-12-28 06:30:...|
+-----------------------------------------------+
SELECT make_timestamp_ltz(2014, 12, 28, 6, 30, 45.887, 'CET');
+----------------------------------------------------+
|make_timestamp_ltz(2014, 12, 28, 6, 30, 45.887, CET)|
+----------------------------------------------------+
|                                2014-12-28 14:30:...|
+----------------------------------------------------+
SELECT make_timestamp_ltz(2019, 6, 30, 23, 59, 60);
+-------------------------------------------+
|make_timestamp_ltz(2019, 6, 30, 23, 59, 60)|
+-------------------------------------------+
|                        2019-07-01 00:00:00|
+-------------------------------------------+
SELECT make_timestamp_ltz(null, 7, 22, 15, 30, 0);
+------------------------------------------+
|make_timestamp_ltz(NULL, 7, 22, 15, 30, 0)|
+------------------------------------------+
|                                      NULL|
+------------------------------------------+
-- make_timestamp_ntz
SELECT make_timestamp_ntz(2014, 12, 28, 6, 30, 45.887);
+-----------------------------------------------+
|make_timestamp_ntz(2014, 12, 28, 6, 30, 45.887)|
+-----------------------------------------------+
|                           2014-12-28 06:30:...|
+-----------------------------------------------+
SELECT make_timestamp_ntz(2019, 6, 30, 23, 59, 60);
+-------------------------------------------+
|make_timestamp_ntz(2019, 6, 30, 23, 59, 60)|
+-------------------------------------------+
|                        2019-07-01 00:00:00|
+-------------------------------------------+
SELECT make_timestamp_ntz(null, 7, 22, 15, 30, 0);
+------------------------------------------+
|make_timestamp_ntz(NULL, 7, 22, 15, 30, 0)|
+------------------------------------------+
|                                      NULL|
+------------------------------------------+
-- make_ym_interval
SELECT make_ym_interval(1, 2);
+----------------------+
|make_ym_interval(1, 2)|
+----------------------+
|  INTERVAL '1-2' YE...|
+----------------------+
SELECT make_ym_interval(1, 0);
+----------------------+
|make_ym_interval(1, 0)|
+----------------------+
|  INTERVAL '1-0' YE...|
+----------------------+
SELECT make_ym_interval(-1, 1);
+-----------------------+
|make_ym_interval(-1, 1)|
+-----------------------+
|   INTERVAL '-0-11' ...|
+-----------------------+
SELECT make_ym_interval(2);
+----------------------+
|make_ym_interval(2, 0)|
+----------------------+
|  INTERVAL '2-0' YE...|
+----------------------+
-- minute
SELECT minute('2009-07-30 12:58:59');
+---------------------------+
|minute(2009-07-30 12:58:59)|
+---------------------------+
|                         58|
+---------------------------+
-- month
SELECT month('2016-07-30');
+-----------------+
|month(2016-07-30)|
+-----------------+
|                7|
+-----------------+
-- months_between
SELECT months_between('1997-02-28 10:30:00', '1996-10-30');
+-----------------------------------------------------+
|months_between(1997-02-28 10:30:00, 1996-10-30, true)|
+-----------------------------------------------------+
|                                           3.94959677|
+-----------------------------------------------------+
SELECT months_between('1997-02-28 10:30:00', '1996-10-30', false);
+------------------------------------------------------+
|months_between(1997-02-28 10:30:00, 1996-10-30, false)|
+------------------------------------------------------+
|                                    3.9495967741935485|
+------------------------------------------------------+
-- next_day
SELECT next_day('2015-01-14', 'TU');
+------------------------+
|next_day(2015-01-14, TU)|
+------------------------+
|              2015-01-20|
+------------------------+
-- now
SELECT now();
+--------------------+
|               now()|
+--------------------+
|2024-02-24 16:36:...|
+--------------------+
-- quarter
SELECT quarter('2016-08-31');
+-------------------+
|quarter(2016-08-31)|
+-------------------+
|                  3|
+-------------------+
-- second
SELECT second('2009-07-30 12:58:59');
+---------------------------+
|second(2009-07-30 12:58:59)|
+---------------------------+
|                         59|
+---------------------------+
-- session_window
SELECT a, session_window.start, session_window.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:10:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, session_window(b, '5 minutes') ORDER BY a, start;
+---+-------------------+-------------------+---+
|  a|              start|                end|cnt|
+---+-------------------+-------------------+---+
| A1|2021-01-01 00:00:00|2021-01-01 00:09:30|  2|
| A1|2021-01-01 00:10:00|2021-01-01 00:15:00|  1|
| A2|2021-01-01 00:01:00|2021-01-01 00:06:00|  1|
+---+-------------------+-------------------+---+
SELECT a, session_window.start, session_window.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:10:00'), ('A2', '2021-01-01 00:01:00'), ('A2', '2021-01-01 00:04:30') AS tab(a, b) GROUP by a, session_window(b, CASE WHEN a = 'A1' THEN '5 minutes' WHEN a = 'A2' THEN '1 minute' ELSE '10 minutes' END) ORDER BY a, start;
+---+-------------------+-------------------+---+
|  a|              start|                end|cnt|
+---+-------------------+-------------------+---+
| A1|2021-01-01 00:00:00|2021-01-01 00:09:30|  2|
| A1|2021-01-01 00:10:00|2021-01-01 00:15:00|  1|
| A2|2021-01-01 00:01:00|2021-01-01 00:02:00|  1|
| A2|2021-01-01 00:04:30|2021-01-01 00:05:30|  1|
+---+-------------------+-------------------+---+
-- timestamp_micros
SELECT timestamp_micros(1230219000123123);
+----------------------------------+
|timestamp_micros(1230219000123123)|
+----------------------------------+
|              2008-12-26 00:30:...|
+----------------------------------+
-- timestamp_millis
SELECT timestamp_millis(1230219000123);
+-------------------------------+
|timestamp_millis(1230219000123)|
+-------------------------------+
|           2008-12-26 00:30:...|
+-------------------------------+
-- timestamp_seconds
SELECT timestamp_seconds(1230219000);
+-----------------------------+
|timestamp_seconds(1230219000)|
+-----------------------------+
|          2008-12-26 00:30:00|
+-----------------------------+
SELECT timestamp_seconds(1230219000.123);
+---------------------------------+
|timestamp_seconds(1230219000.123)|
+---------------------------------+
|             2008-12-26 00:30:...|
+---------------------------------+
-- to_date
SELECT to_date('2009-07-30 04:17:52');
+----------------------------+
|to_date(2009-07-30 04:17:52)|
+----------------------------+
|                  2009-07-30|
+----------------------------+
SELECT to_date('2016-12-31', 'yyyy-MM-dd');
+-------------------------------+
|to_date(2016-12-31, yyyy-MM-dd)|
+-------------------------------+
|                     2016-12-31|
+-------------------------------+
-- to_timestamp
SELECT to_timestamp('2016-12-31 00:12:00');
+---------------------------------+
|to_timestamp(2016-12-31 00:12:00)|
+---------------------------------+
|              2016-12-31 00:12:00|
+---------------------------------+
SELECT to_timestamp('2016-12-31', 'yyyy-MM-dd');
+------------------------------------+
|to_timestamp(2016-12-31, yyyy-MM-dd)|
+------------------------------------+
|                 2016-12-31 00:00:00|
+------------------------------------+
-- to_timestamp_ltz
SELECT to_timestamp_ltz('2016-12-31 00:12:00');
+-------------------------------------+
|to_timestamp_ltz(2016-12-31 00:12:00)|
+-------------------------------------+
|                  2016-12-31 00:12:00|
+-------------------------------------+
SELECT to_timestamp_ltz('2016-12-31', 'yyyy-MM-dd');
+----------------------------------------+
|to_timestamp_ltz(2016-12-31, yyyy-MM-dd)|
+----------------------------------------+
|                     2016-12-31 00:00:00|
+----------------------------------------+
-- to_timestamp_ntz
SELECT to_timestamp_ntz('2016-12-31 00:12:00');
+-------------------------------------+
|to_timestamp_ntz(2016-12-31 00:12:00)|
+-------------------------------------+
|                  2016-12-31 00:12:00|
+-------------------------------------+
SELECT to_timestamp_ntz('2016-12-31', 'yyyy-MM-dd');
+----------------------------------------+
|to_timestamp_ntz(2016-12-31, yyyy-MM-dd)|
+----------------------------------------+
|                     2016-12-31 00:00:00|
+----------------------------------------+
-- to_unix_timestamp
SELECT to_unix_timestamp('2016-04-08', 'yyyy-MM-dd');
+-----------------------------------------+
|to_unix_timestamp(2016-04-08, yyyy-MM-dd)|
+-----------------------------------------+
|                               1460041200|
+-----------------------------------------+
-- to_utc_timestamp
SELECT to_utc_timestamp('2016-08-31', 'Asia/Seoul');
+----------------------------------------+
|to_utc_timestamp(2016-08-31, Asia/Seoul)|
+----------------------------------------+
|                     2016-08-30 15:00:00|
+----------------------------------------+
-- trunc
SELECT trunc('2019-08-04', 'week');
+-----------------------+
|trunc(2019-08-04, week)|
+-----------------------+
|             2019-07-29|
+-----------------------+
SELECT trunc('2019-08-04', 'quarter');
+--------------------------+
|trunc(2019-08-04, quarter)|
+--------------------------+
|                2019-07-01|
+--------------------------+
SELECT trunc('2009-02-12', 'MM');
+---------------------+
|trunc(2009-02-12, MM)|
+---------------------+
|           2009-02-01|
+---------------------+
SELECT trunc('2015-10-27', 'YEAR');
+-----------------------+
|trunc(2015-10-27, YEAR)|
+-----------------------+
|             2015-01-01|
+-----------------------+
-- try_to_timestamp
SELECT try_to_timestamp('2016-12-31 00:12:00');
+-------------------------------------+
|try_to_timestamp(2016-12-31 00:12:00)|
+-------------------------------------+
|                  2016-12-31 00:12:00|
+-------------------------------------+
SELECT try_to_timestamp('2016-12-31', 'yyyy-MM-dd');
+----------------------------------------+
|try_to_timestamp(2016-12-31, yyyy-MM-dd)|
+----------------------------------------+
|                     2016-12-31 00:00:00|
+----------------------------------------+
SELECT try_to_timestamp('foo', 'yyyy-MM-dd');
+---------------------------------+
|try_to_timestamp(foo, yyyy-MM-dd)|
+---------------------------------+
|                             NULL|
+---------------------------------+
-- unix_date
SELECT unix_date(DATE("1970-01-02"));
+---------------------+
|unix_date(1970-01-02)|
+---------------------+
|                    1|
+---------------------+
-- unix_micros
SELECT unix_micros(TIMESTAMP('1970-01-01 00:00:01Z'));
+---------------------------------+
|unix_micros(1970-01-01 00:00:01Z)|
+---------------------------------+
|                          1000000|
+---------------------------------+
-- unix_millis
SELECT unix_millis(TIMESTAMP('1970-01-01 00:00:01Z'));
+---------------------------------+
|unix_millis(1970-01-01 00:00:01Z)|
+---------------------------------+
|                             1000|
+---------------------------------+
-- unix_seconds
SELECT unix_seconds(TIMESTAMP('1970-01-01 00:00:01Z'));
+----------------------------------+
|unix_seconds(1970-01-01 00:00:01Z)|
+----------------------------------+
|                                 1|
+----------------------------------+
-- unix_timestamp
SELECT unix_timestamp();
+--------------------------------------------------------+
|unix_timestamp(current_timestamp(), yyyy-MM-dd HH:mm:ss)|
+--------------------------------------------------------+
|                                              1708760216|
+--------------------------------------------------------+
SELECT unix_timestamp('2016-04-08', 'yyyy-MM-dd');
+--------------------------------------+
|unix_timestamp(2016-04-08, yyyy-MM-dd)|
+--------------------------------------+
|                            1460041200|
+--------------------------------------+
-- weekday
SELECT weekday('2009-07-30');
+-------------------+
|weekday(2009-07-30)|
+-------------------+
|                  3|
+-------------------+
-- weekofyear
SELECT weekofyear('2008-02-20');
+----------------------+
|weekofyear(2008-02-20)|
+----------------------+
|                     8|
+----------------------+
-- window
SELECT a, window.start, window.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:06:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, window(b, '5 minutes') ORDER BY a, start;
+---+-------------------+-------------------+---+
|  a|              start|                end|cnt|
+---+-------------------+-------------------+---+
| A1|2021-01-01 00:00:00|2021-01-01 00:05:00|  2|
| A1|2021-01-01 00:05:00|2021-01-01 00:10:00|  1|
| A2|2021-01-01 00:00:00|2021-01-01 00:05:00|  1|
+---+-------------------+-------------------+---+
SELECT a, window.start, window.end, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:06:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, window(b, '10 minutes', '5 minutes') ORDER BY a, start;
+---+-------------------+-------------------+---+
|  a|              start|                end|cnt|
+---+-------------------+-------------------+---+
| A1|2020-12-31 23:55:00|2021-01-01 00:05:00|  2|
| A1|2021-01-01 00:00:00|2021-01-01 00:10:00|  3|
| A1|2021-01-01 00:05:00|2021-01-01 00:15:00|  1|
| A2|2020-12-31 23:55:00|2021-01-01 00:05:00|  1|
| A2|2021-01-01 00:00:00|2021-01-01 00:10:00|  1|
+---+-------------------+-------------------+---+
-- window_time
SELECT a, window.start as start, window.end as end, window_time(window), cnt FROM (SELECT a, window, count(*) as cnt FROM VALUES ('A1', '2021-01-01 00:00:00'), ('A1', '2021-01-01 00:04:30'), ('A1', '2021-01-01 00:06:00'), ('A2', '2021-01-01 00:01:00') AS tab(a, b) GROUP by a, window(b, '5 minutes') ORDER BY a, window.start);
+---+-------------------+-------------------+--------------------+---+
|  a|              start|                end| window_time(window)|cnt|
+---+-------------------+-------------------+--------------------+---+
| A1|2021-01-01 00:00:00|2021-01-01 00:05:00|2021-01-01 00:04:...|  2|
| A1|2021-01-01 00:05:00|2021-01-01 00:10:00|2021-01-01 00:09:...|  1|
| A2|2021-01-01 00:00:00|2021-01-01 00:05:00|2021-01-01 00:04:...|  1|
+---+-------------------+-------------------+--------------------+---+
-- year
SELECT year('2016-07-30');
+----------------+
|year(2016-07-30)|
+----------------+
|            2016|
+----------------+
```
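The epoch-conversion functions above (`timestamp_seconds`, `unix_millis`, and friends) all apply the same unit arithmetic. The following standalone Python sketch is illustrative only (it is not OpenSearch SQL); it assumes the sample outputs above were rendered in a UTC+9 session time zone, which is consistent with `timestamp_seconds(1230219000)` printing `2008-12-26 00:30:00`:

```python
from datetime import datetime, timezone, timedelta

# timestamp_seconds(n) interprets n as seconds since the Unix epoch and
# renders it in the session time zone. Assumption: the doc's sample
# session ran at UTC+9, which reproduces the output shown above.
session_tz = timezone(timedelta(hours=9))
ts = datetime.fromtimestamp(1230219000, tz=session_tz)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # 2008-12-26 00:30:00

# unix_seconds/unix_millis/unix_micros invert the conversion: a timestamp
# one second past the epoch yields 1, 1000, and 1000000 respectively.
one_sec = datetime(1970, 1, 1, 0, 0, 1, tzinfo=timezone.utc)
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
seconds = (one_sec - epoch).total_seconds()
print(int(seconds), int(seconds * 1_000), int(seconds * 1_000_000))  # 1 1000 1000000
```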

#### Aggregate functions
<a name="supported-sql-aggregate"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

Aggregate functions operate on values across rows to perform mathematical calculations such as sums, averages, counts, minimum/maximum values, standard deviations, and estimates, as well as some non-mathematical operations.

**Syntax**

```
aggregate_function(input1 [, input2, ...]) FILTER (WHERE boolean_expression) 
```

**Parameters**
+ `boolean_expression`: Specifies any expression that evaluates to a Boolean. Two or more expressions can be combined using the logical operators (AND, OR).

**Ordered-set aggregate functions**

These aggregate functions use a syntax that differs from the other aggregate functions, so that an expression (typically a column name) can be specified for ordering the values.

**Syntax**

```
{ PERCENTILE_CONT | PERCENTILE_DISC }(percentile) WITHIN GROUP (ORDER BY { order_by_expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [ , ... ] }) FILTER (WHERE boolean_expression) 
```

**Parameters**
+ `percentile`: The percentile of the value you want to find. The percentile must be a constant between 0.0 and 1.0.
+ `order_by_expression`: The expression (typically a column name) by which to order the values before aggregating them.
+ `boolean_expression`: Specifies any expression that evaluates to a Boolean. Two or more expressions can be combined using the logical operators (AND, OR).

**Examples**

```
CREATE OR REPLACE TEMPORARY VIEW basic_pays AS SELECT * FROM VALUES
('Jane Doe','Accounting',8435),
('Akua Mansa','Accounting',9998),
('John Doe','Accounting',8992),
('Juan Li','Accounting',8870),
('Carlos Salazar','Accounting',11472),
('Arnav Desai','Accounting',6627),
('Saanvi Sarkar','IT',8113),
('Shirley Rodriguez','IT',5186),
('Nikki Wolf','Sales',9181),
('Alejandro Rosalez','Sales',9441),
('Nikhil Jayashankar','Sales',6660),
('Richard Roe','Sales',10563),
('Pat Candella','SCM',10449),
('Gerard Hernandez','SCM',6949),
('Pamela Castillo','SCM',11303),
('Paulo Santos','SCM',11798),
('Jorge Souza','SCM',10586)
AS basic_pays(employee_name, department, salary);
SELECT * FROM basic_pays;
+-------------------+----------+------+
|    employee_name  |department|salary|
+-------------------+----------+------+
| Arnav Desai       |Accounting|  6627|
| Jorge Souza       |       SCM| 10586|
| Jane Doe          |Accounting|  8435|
| Nikhil Jayashankar|     Sales|  6660|
| Richard Roe       |     Sales| 10563|
| Carlos Salazar    |Accounting| 11472|
| Gerard Hernandez  |       SCM|  6949|
| John Doe          |Accounting|  8992|
| Nikki Wolf        |     Sales|  9181|
| Paulo Santos      |       SCM| 11798|
| Saanvi Sarkar     |        IT|  8113|
| Shirley Rodriguez |        IT|  5186|
| Pat Candella      |       SCM| 10449|
| Akua Mansa        |Accounting|  9998|
| Pamela Castillo   |       SCM| 11303|
| Alejandro Rosalez |     Sales|  9441|
| Juan Li           |Accounting|  8870|
+-------------------+----------+------+
SELECT
department,
percentile_cont(0.25) WITHIN GROUP (ORDER BY salary) AS pc1,
percentile_cont(0.25) WITHIN GROUP (ORDER BY salary) FILTER (WHERE employee_name LIKE '%Bo%') AS pc2,
percentile_cont(0.25) WITHIN GROUP (ORDER BY salary DESC) AS pc3,
percentile_cont(0.25) WITHIN GROUP (ORDER BY salary DESC) FILTER (WHERE employee_name LIKE '%Bo%') AS pc4,
percentile_disc(0.25) WITHIN GROUP (ORDER BY salary) AS pd1,
percentile_disc(0.25) WITHIN GROUP (ORDER BY salary) FILTER (WHERE employee_name LIKE '%Bo%') AS pd2,
percentile_disc(0.25) WITHIN GROUP (ORDER BY salary DESC) AS pd3,
percentile_disc(0.25) WITHIN GROUP (ORDER BY salary DESC) FILTER (WHERE employee_name LIKE '%Bo%') AS pd4
FROM basic_pays
GROUP BY department
ORDER BY department;
+----------+-------+--------+-------+--------+-----+-----+-----+-----+
|department|    pc1|     pc2|    pc3|     pc4|  pd1|  pd2|  pd3|  pd4|
+----------+-------+--------+-------+--------+-----+-----+-----+-----+
|Accounting|8543.75| 7838.25| 9746.5|10260.75| 8435| 6627| 9998|11472|
|        IT|5917.75|    NULL|7381.25|    NULL| 5186| NULL| 8113| NULL|
|     Sales|8550.75|    NULL| 9721.5|    NULL| 6660| NULL|10563| NULL|
|       SCM|10449.0|10786.25|11303.0|11460.75|10449|10449|11303|11798|
+----------+-------+--------+-------+--------+-----+-----+-----+-----+
```
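The key difference in the output above is that `PERCENTILE_CONT` interpolates between neighboring values, while `PERCENTILE_DISC` returns an actual value from the column. A minimal Python sketch of the two ranking rules (illustrative only, not OpenSearch SQL), checked against the Accounting salaries from the example:

```python
import math

def percentile_cont(values, p):
    """Continuous percentile: linear interpolation between closest ranks."""
    xs = sorted(values)
    rank = p * (len(xs) - 1)
    lo = math.floor(rank)
    frac = rank - lo
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + frac * (xs[hi] - xs[lo])

def percentile_disc(values, p):
    """Discrete percentile: the first actual value whose cumulative
    distribution reaches p (no interpolation)."""
    xs = sorted(values)
    for i, v in enumerate(xs, start=1):
        if i / len(xs) >= p:
            return v

accounting = [8435, 9998, 8992, 8870, 11472, 6627]
print(percentile_cont(accounting, 0.25))  # 8543.75 (matches pc1)
print(percentile_disc(accounting, 0.25))  # 8435 (matches pd1)
```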

#### Conditional functions
<a name="supported-sql-conditional"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


****  

| Function | Description | 
| --- | --- | 
| coalesce(expr1, expr2, ...) | Returns the first non-null argument if one exists. Otherwise, null. | 
| if(expr1, expr2, expr3) | If expr1 evaluates to true, returns expr2; otherwise returns expr3. | 
| ifnull(expr1, expr2) | Returns expr2 if expr1 is null, or expr1 otherwise. | 
| nanvl(expr1, expr2) | Returns expr1 if it's not NaN, or expr2 otherwise. | 
| nullif(expr1, expr2) | Returns null if expr1 equals expr2, or expr1 otherwise. | 
| nvl(expr1, expr2) | Returns expr2 if expr1 is null, or expr1 otherwise. | 
| nvl2(expr1, expr2, expr3) | Returns expr2 if expr1 is not null, or expr3 otherwise. | 
| CASE WHEN expr1 THEN expr2 [WHEN expr3 THEN expr4]* [ELSE expr5] END | When expr1 = true, returns expr2; when expr3 = true, returns expr4; else returns expr5. | 

**Examples**

```
-- coalesce
SELECT coalesce(NULL, 1, NULL);
+-----------------------+
|coalesce(NULL, 1, NULL)|
+-----------------------+
|                      1|
+-----------------------+
-- if
SELECT if(1 < 2, 'a', 'b');
+-------------------+
|(IF((1 < 2), a, b))|
+-------------------+
|                  a|
+-------------------+
-- ifnull
SELECT ifnull(NULL, array('2'));
+----------------------+
|ifnull(NULL, array(2))|
+----------------------+
|                   [2]|
+----------------------+
-- nanvl
SELECT nanvl(cast('NaN' as double), 123);
+-------------------------------+
|nanvl(CAST(NaN AS DOUBLE), 123)|
+-------------------------------+
|                          123.0|
+-------------------------------+
-- nullif
SELECT nullif(2, 2);
+------------+
|nullif(2, 2)|
+------------+
|        NULL|
+------------+
-- nvl
SELECT nvl(NULL, array('2'));
+-------------------+
|nvl(NULL, array(2))|
+-------------------+
|                [2]|
+-------------------+
-- nvl2
SELECT nvl2(NULL, 2, 1);
+----------------+
|nvl2(NULL, 2, 1)|
+----------------+
|               1|
+----------------+
-- when
SELECT CASE WHEN 1 > 0 THEN 1 WHEN 2 > 0 THEN 2.0 ELSE 1.2 END;
+-----------------------------------------------------------+
|CASE WHEN (1 > 0) THEN 1 WHEN (2 > 0) THEN 2.0 ELSE 1.2 END|
+-----------------------------------------------------------+
|                                                        1.0|
+-----------------------------------------------------------+
SELECT CASE WHEN 1 < 0 THEN 1 WHEN 2 > 0 THEN 2.0 ELSE 1.2 END;
+-----------------------------------------------------------+
|CASE WHEN (1 < 0) THEN 1 WHEN (2 > 0) THEN 2.0 ELSE 1.2 END|
+-----------------------------------------------------------+
|                                                        2.0|
+-----------------------------------------------------------+
SELECT CASE WHEN 1 < 0 THEN 1 WHEN 2 < 0 THEN 2.0 END;
+--------------------------------------------------+
|CASE WHEN (1 < 0) THEN 1 WHEN (2 < 0) THEN 2.0 END|
+--------------------------------------------------+
|                                              NULL|
+--------------------------------------------------+
```
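The null-handling rules in the table above can be paraphrased in a few lines of Python, treating `None` as SQL NULL. This is an illustrative analog only, not OpenSearch SQL; note it ignores SQL's three-valued logic for comparisons:

```python
def coalesce(*exprs):
    """Return the first non-None argument, like SQL coalesce."""
    return next((e for e in exprs if e is not None), None)

def nullif(a, b):
    """Return None when the arguments are equal, like SQL nullif."""
    return None if a == b else a

def nvl2(expr1, expr2, expr3):
    """Return expr2 when expr1 is non-null, expr3 otherwise."""
    return expr2 if expr1 is not None else expr3

print(coalesce(None, 1, None))  # 1
print(nullif(2, 2))             # None
print(nvl2(None, 2, 1))         # 1
```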

#### JSON functions
<a name="supported-sql-json"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).




****  

| Function | Description | 
| --- | --- | 
| from_json(jsonStr, schema[, options]) | Returns a struct value with the given `jsonStr` and `schema`. | 
| get_json_object(json_txt, path) | Extracts a JSON object from `path`. | 
| json_array_length(jsonArray) | Returns the number of elements in the outermost JSON array. | 
| json_object_keys(json_object) | Returns all the keys of the outermost JSON object as an array. | 
| json_tuple(jsonStr, p1, p2, ..., pn) | Returns a tuple like the function get_json_object, but it takes multiple names. All the input parameters and output column types are strings. | 
| schema_of_json(json[, options]) | Returns the schema of a JSON string in DDL format. | 
| to_json(expr[, options]) | Returns a JSON string with the given struct value. | 

**Examples**

```
-- from_json
SELECT from_json('{"a":1, "b":0.8}', 'a INT, b DOUBLE');
+---------------------------+
| from_json({"a":1, "b":0.8}) |
+---------------------------+
| {1, 0.8}                  |
+---------------------------+

SELECT from_json('{"time":"26/08/2015"}', 'time Timestamp', map('timestampFormat', 'dd/MM/yyyy'));
+--------------------------------+
| from_json({"time":"26/08/2015"}) |
+--------------------------------+
| {2015-08-26 00:00...           |
+--------------------------------+

SELECT from_json('{"teacher": "Alice", "student": [{"name": "Bob", "rank": 1}, {"name": "Charlie", "rank": 2}]}', 'STRUCT<teacher: STRING, student: ARRAY<STRUCT<name: STRING, rank: INT>>>');
+--------------------------------------------------------------------------------------------------------+
| from_json({"teacher": "Alice", "student": [{"name": "Bob", "rank": 1}, {"name": "Charlie", "rank": 2}]}) |
+--------------------------------------------------------------------------------------------------------+
| {Alice, [{Bob, 1}...                                                                                   |
+--------------------------------------------------------------------------------------------------------+

-- get_json_object
SELECT get_json_object('{"a":"b"}', '$.a');
+-------------------------------+
| get_json_object({"a":"b"}, $.a) |
+-------------------------------+
| b                             |
+-------------------------------+

-- json_array_length
SELECT json_array_length('[1,2,3,4]');
+----------------------------+
| json_array_length([1,2,3,4]) |
+----------------------------+
| 4                          |
+----------------------------+

SELECT json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]');
+------------------------------------------------+
| json_array_length([1,2,3,{"f1":1,"f2":[5,6]},4]) |
+------------------------------------------------+
| 5                                              |
+------------------------------------------------+

SELECT json_array_length('[1,2');
+-----------------------+
| json_array_length([1,2) |
+-----------------------+
| NULL                  |
+-----------------------+

-- json_object_keys
SELECT json_object_keys('{}');
+--------------------+
| json_object_keys({}) |
+--------------------+
| []                 |
+--------------------+

SELECT json_object_keys('{"key": "value"}');
+----------------------------------+
| json_object_keys({"key": "value"}) |
+----------------------------------+
| [key]                            |
+----------------------------------+

SELECT json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}');
+--------------------------------------------------------+
| json_object_keys({"f1":"abc","f2":{"f3":"a", "f4":"b"}}) |
+--------------------------------------------------------+
| [f1, f2]                                               |
+--------------------------------------------------------+

-- json_tuple
SELECT json_tuple('{"a":1, "b":2}', 'a', 'b');
+---+---+
| c0| c1|
+---+---+
|  1|  2|
+---+---+

-- schema_of_json
SELECT schema_of_json('[{"col":0}]');
+---------------------------+
| schema_of_json([{"col":0}]) |
+---------------------------+
| ARRAY<STRUCT<col:...      |
+---------------------------+

SELECT schema_of_json('[{"col":01}]', map('allowNumericLeadingZeros', 'true'));
+----------------------------+
| schema_of_json([{"col":01}]) |
+----------------------------+
| ARRAY<STRUCT<col:...       |
+----------------------------+

-- to_json
SELECT to_json(named_struct('a', 1, 'b', 2));
+---------------------------------+
| to_json(named_struct(a, 1, b, 2)) |
+---------------------------------+
| {"a":1,"b":2}                   |
+---------------------------------+

SELECT to_json(named_struct('time', to_timestamp('2015-08-26', 'yyyy-MM-dd')), map('timestampFormat', 'dd/MM/yyyy'));
+-----------------------------------------------------------------+
| to_json(named_struct(time, to_timestamp(2015-08-26, yyyy-MM-dd))) |
+-----------------------------------------------------------------+
| {"time":"26/08/20...                                            |
+-----------------------------------------------------------------+

SELECT to_json(array(named_struct('a', 1, 'b', 2)));
+----------------------------------------+
| to_json(array(named_struct(a, 1, b, 2))) |
+----------------------------------------+
| [{"a":1,"b":2}]                        |
+----------------------------------------+

SELECT to_json(map('a', named_struct('b', 1)));
+-----------------------------------+
| to_json(map(a, named_struct(b, 1))) |
+-----------------------------------+
| {"a":{"b":1}}                     |
+-----------------------------------+

SELECT to_json(map(named_struct('a', 1),named_struct('b', 2)));
+----------------------------------------------------+
| to_json(map(named_struct(a, 1), named_struct(b, 2))) |
+----------------------------------------------------+
| {"[1]":{"b":2}}                                    |
+----------------------------------------------------+

SELECT to_json(map('a', 1));
+------------------+
| to_json(map(a, 1)) |
+------------------+
| {"a":1}          |
+------------------+

SELECT to_json(array(map('a', 1)));
+-------------------------+
| to_json(array(map(a, 1))) |
+-------------------------+
| [{"a":1}]               |
+-------------------------+
```
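Two behaviors worth noting in the examples above: `json_array_length` returns NULL for malformed input rather than raising an error, and `json_object_keys` only inspects the outermost object. A Python sketch of these semantics using the standard `json` module (illustrative only, not OpenSearch SQL):

```python
import json

def json_array_length(s):
    """Length of the outermost JSON array, or None (SQL NULL) when the
    input is malformed or not an array."""
    try:
        arr = json.loads(s)
    except json.JSONDecodeError:
        return None
    return len(arr) if isinstance(arr, list) else None

def json_object_keys(s):
    """Keys of the outermost JSON object only; nested objects are not
    descended into."""
    obj = json.loads(s)
    return list(obj) if isinstance(obj, dict) else None

print(json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]'))          # 5
print(json_array_length('[1,2'))                                   # None
print(json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}'))  # ['f1', 'f2']
```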

#### Array functions
<a name="supported-sql-array"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


****  

| Function | Description | 
| --- | --- | 
| array(expr, ...) | Returns an array with the given elements. | 
| array_append(array, element) | Adds the element to the end of the array passed as the first argument. The type of the element should be similar to the type of the elements of the array. Null elements are also appended to the array. But if the array passed is NULL, the output is NULL. | 
| array_compact(array) | Removes null values from the array. | 
| array_contains(array, value) | Returns true if the array contains the value. | 
| array_distinct(array) | Removes duplicate values from the array. | 
| array_except(array1, array2) | Returns an array of the elements in array1 but not in array2, without duplicates. | 
| array_insert(x, pos, val) | Places val into index pos of array x. Array indices start at 1. The maximum negative index is -1, for which the function inserts the new element after the current last element. An index above the array size appends the array, or prepends the array if the index is negative, padding with 'null' elements. | 
| array_intersect(array1, array2) | Returns an array of the elements in the intersection of array1 and array2, without duplicates. | 
| array_join(array, delimiter[, nullReplacement]) | Concatenates the elements of the given array using the delimiter and an optional string to replace null values. If no value is set for nullReplacement, any null value is filtered out. | 
| array_max(array) | Returns the maximum value in the array. NaN is greater than any non-NaN elements for double/float types. NULL elements are skipped. | 
| array_min(array) | Returns the minimum value in the array. NaN is greater than any non-NaN elements for double/float types. NULL elements are skipped. | 
| array_position(array, element) | Returns the (1-based) index of the first matching element in the array as a long, or 0 if no match is found. | 
| array_prepend(array, element) | Adds the element to the beginning of the array passed as the first argument. The type of the element should be the same as the type of the elements of the array. Null elements are also prepended to the array. But if the array passed is NULL, the output is NULL. | 
| array_remove(array, element) | Removes all elements that equal the given element from the array. | 
| array_repeat(element, count) | Returns an array containing the element count times. | 
| array_union(array1, array2) | Returns an array of the elements in the union of array1 and array2, without duplicates. | 
| arrays_overlap(a1, a2) | Returns true if a1 contains at least one non-null element also present in a2. If the arrays have no common element, they are both non-empty, and either of them contains a null element, it returns null; otherwise it returns false. | 
| arrays_zip(a1, a2, ...) | Returns a merged array of structs in which the N-th struct contains all N-th values of the input arrays. | 
| flatten(arrayOfArrays) | Transforms an array of arrays into a single array. | 
| get(array, index) | Returns the element of the array at the given (0-based) index. The function returns NULL if the index exceeds the length of the array. | 
| sequence(start, stop, step) | Generates an array of elements from start to stop (inclusive), incrementing by step. The type of the returned elements is the same as the type of the argument expressions. Supported types are: byte, short, integer, long, date, timestamp. The start and stop expressions must resolve to the same type. If the start and stop expressions resolve to the 'date' or 'timestamp' type, then the step expression must resolve to the 'interval', 'year-month interval', or 'day-time interval' type; otherwise it must resolve to the same type as the start and stop expressions. | 
| shuffle(array) | Returns a random permutation of the given array. | 
| slice(x, start, length) | Subsets array x starting from index start (array indices start at 1, or from the end if start is negative) with the specified length. | 
| sort_array(array[, ascendingOrder]) | Sorts the input array in ascending or descending order according to the natural ordering of the array elements. NaN is greater than any non-NaN elements for double/float types. Null elements are placed at the beginning of the returned array in ascending order, or at the end of the returned array in descending order. | 

**Examples**

```
-- array
SELECT array(1, 2, 3);
+--------------+
|array(1, 2, 3)|
+--------------+
|     [1, 2, 3]|
+--------------+
-- array_append
SELECT array_append(array('b', 'd', 'c', 'a'), 'd');
+----------------------------------+
|array_append(array(b, d, c, a), d)|
+----------------------------------+
|                   [b, d, c, a, d]|
+----------------------------------+
SELECT array_append(array(1, 2, 3, null), null);
+----------------------------------------+
|array_append(array(1, 2, 3, NULL), NULL)|
+----------------------------------------+
|                    [1, 2, 3, NULL, N...|
+----------------------------------------+
SELECT array_append(CAST(null as Array<Int>), 2);
+---------------------+
|array_append(NULL, 2)|
+---------------------+
|                 NULL|
+---------------------+
-- array_compact
SELECT array_compact(array(1, 2, 3, null));
+-----------------------------------+
|array_compact(array(1, 2, 3, NULL))|
+-----------------------------------+
|                          [1, 2, 3]|
+-----------------------------------+
SELECT array_compact(array("a", "b", "c"));
+-----------------------------+
|array_compact(array(a, b, c))|
+-----------------------------+
|                    [a, b, c]|
+-----------------------------+
-- array_contains
SELECT array_contains(array(1, 2, 3), 2);
+---------------------------------+
|array_contains(array(1, 2, 3), 2)|
+---------------------------------+
|                             true|
+---------------------------------+
-- array_distinct
SELECT array_distinct(array(1, 2, 3, null, 3));
+---------------------------------------+
|array_distinct(array(1, 2, 3, NULL, 3))|
+---------------------------------------+
|                        [1, 2, 3, NULL]|
+---------------------------------------+
-- array_except
SELECT array_except(array(1, 2, 3), array(1, 3, 5));
+--------------------------------------------+
|array_except(array(1, 2, 3), array(1, 3, 5))|
+--------------------------------------------+
|                                         [2]|
+--------------------------------------------+
-- array_insert
SELECT array_insert(array(1, 2, 3, 4), 5, 5);
+-------------------------------------+
|array_insert(array(1, 2, 3, 4), 5, 5)|
+-------------------------------------+
|                      [1, 2, 3, 4, 5]|
+-------------------------------------+
SELECT array_insert(array(5, 4, 3, 2), -1, 1);
+--------------------------------------+
|array_insert(array(5, 4, 3, 2), -1, 1)|
+--------------------------------------+
|                       [5, 4, 3, 2, 1]|
+--------------------------------------+
SELECT array_insert(array(5, 3, 2, 1), -4, 4);
+--------------------------------------+
|array_insert(array(5, 3, 2, 1), -4, 4)|
+--------------------------------------+
|                       [5, 4, 3, 2, 1]|
+--------------------------------------+
-- array_intersect
SELECT array_intersect(array(1, 2, 3), array(1, 3, 5));
+-----------------------------------------------+
|array_intersect(array(1, 2, 3), array(1, 3, 5))|
+-----------------------------------------------+
|                                         [1, 3]|
+-----------------------------------------------+
-- array_join
SELECT array_join(array('hello', 'world'), ' ');
+----------------------------------+
|array_join(array(hello, world),  )|
+----------------------------------+
|                       hello world|
+----------------------------------+
SELECT array_join(array('hello', null ,'world'), ' ');
+----------------------------------------+
|array_join(array(hello, NULL, world),  )|
+----------------------------------------+
|                             hello world|
+----------------------------------------+
SELECT array_join(array('hello', null ,'world'), ' ', ',');
+-------------------------------------------+
|array_join(array(hello, NULL, world),  , ,)|
+-------------------------------------------+
|                              hello , world|
+-------------------------------------------+
-- array_max
SELECT array_max(array(1, 20, null, 3));
+--------------------------------+
|array_max(array(1, 20, NULL, 3))|
+--------------------------------+
|                              20|
+--------------------------------+
-- array_min
SELECT array_min(array(1, 20, null, 3));
+--------------------------------+
|array_min(array(1, 20, NULL, 3))|
+--------------------------------+
|                               1|
+--------------------------------+
-- array_position
SELECT array_position(array(312, 773, 708, 708), 708);
+----------------------------------------------+
|array_position(array(312, 773, 708, 708), 708)|
+----------------------------------------------+
|                                             3|
+----------------------------------------------+
SELECT array_position(array(312, 773, 708, 708), 414);
+----------------------------------------------+
|array_position(array(312, 773, 708, 708), 414)|
+----------------------------------------------+
|                                             0|
+----------------------------------------------+
-- array_prepend
SELECT array_prepend(array('b', 'd', 'c', 'a'), 'd');
+-----------------------------------+
|array_prepend(array(b, d, c, a), d)|
+-----------------------------------+
|                    [d, b, d, c, a]|
+-----------------------------------+
SELECT array_prepend(array(1, 2, 3, null), null);
+-----------------------------------------+
|array_prepend(array(1, 2, 3, NULL), NULL)|
+-----------------------------------------+
|                     [NULL, 1, 2, 3, N...|
+-----------------------------------------+
SELECT array_prepend(CAST(null as Array<Int>), 2);
+----------------------+
|array_prepend(NULL, 2)|
+----------------------+
|                  NULL|
+----------------------+
-- array_remove
SELECT array_remove(array(1, 2, 3, null, 3), 3);
+----------------------------------------+
|array_remove(array(1, 2, 3, NULL, 3), 3)|
+----------------------------------------+
|                            [1, 2, NULL]|
+----------------------------------------+
-- array_repeat
SELECT array_repeat('123', 2);
+--------------------+
|array_repeat(123, 2)|
+--------------------+
|          [123, 123]|
+--------------------+
-- array_union
SELECT array_union(array(1, 2, 3), array(1, 3, 5));
+-------------------------------------------+
|array_union(array(1, 2, 3), array(1, 3, 5))|
+-------------------------------------------+
|                               [1, 2, 3, 5]|
+-------------------------------------------+
-- arrays_overlap
SELECT arrays_overlap(array(1, 2, 3), array(3, 4, 5));
+----------------------------------------------+
|arrays_overlap(array(1, 2, 3), array(3, 4, 5))|
+----------------------------------------------+
|                                          true|
+----------------------------------------------+
-- arrays_zip
SELECT arrays_zip(array(1, 2, 3), array(2, 3, 4));
+------------------------------------------+
|arrays_zip(array(1, 2, 3), array(2, 3, 4))|
+------------------------------------------+
|                      [{1, 2}, {2, 3}, ...|
+------------------------------------------+
SELECT arrays_zip(array(1, 2), array(2, 3), array(3, 4));
+-------------------------------------------------+
|arrays_zip(array(1, 2), array(2, 3), array(3, 4))|
+-------------------------------------------------+
|                             [{1, 2, 3}, {2, 3...|
+-------------------------------------------------+
-- flatten
SELECT flatten(array(array(1, 2), array(3, 4)));
+----------------------------------------+
|flatten(array(array(1, 2), array(3, 4)))|
+----------------------------------------+
|                            [1, 2, 3, 4]|
+----------------------------------------+
-- get
SELECT get(array(1, 2, 3), 0);
+----------------------+
|get(array(1, 2, 3), 0)|
+----------------------+
|                     1|
+----------------------+
SELECT get(array(1, 2, 3), 3);
+----------------------+
|get(array(1, 2, 3), 3)|
+----------------------+
|                  NULL|
+----------------------+
SELECT get(array(1, 2, 3), -1);
+-----------------------+
|get(array(1, 2, 3), -1)|
+-----------------------+
|                   NULL|
+-----------------------+
-- sequence
SELECT sequence(1, 5);
+---------------+
| sequence(1, 5)|
+---------------+
|[1, 2, 3, 4, 5]|
+---------------+
SELECT sequence(5, 1);
+---------------+
| sequence(5, 1)|
+---------------+
|[5, 4, 3, 2, 1]|
+---------------+
SELECT sequence(to_date('2018-01-01'), to_date('2018-03-01'), interval 1 month);
+----------------------------------------------------------------------+
|sequence(to_date(2018-01-01), to_date(2018-03-01), INTERVAL '1' MONTH)|
+----------------------------------------------------------------------+
|                                                  [2018-01-01, 2018...|
+----------------------------------------------------------------------+
SELECT sequence(to_date('2018-01-01'), to_date('2018-03-01'), interval '0-1' year to month);
+--------------------------------------------------------------------------------+
|sequence(to_date(2018-01-01), to_date(2018-03-01), INTERVAL '0-1' YEAR TO MONTH)|
+--------------------------------------------------------------------------------+
|                                                            [2018-01-01, 2018...|
+--------------------------------------------------------------------------------+
-- shuffle
SELECT shuffle(array(1, 20, 3, 5));
+---------------------------+
|shuffle(array(1, 20, 3, 5))|
+---------------------------+
|              [5, 1, 20, 3]|
+---------------------------+
SELECT shuffle(array(1, 20, null, 3));
+------------------------------+
|shuffle(array(1, 20, NULL, 3))|
+------------------------------+
|              [1, NULL, 20, 3]|
+------------------------------+
-- slice
SELECT slice(array(1, 2, 3, 4), 2, 2);
+------------------------------+
|slice(array(1, 2, 3, 4), 2, 2)|
+------------------------------+
|                        [2, 3]|
+------------------------------+
SELECT slice(array(1, 2, 3, 4), -2, 2);
+-------------------------------+
|slice(array(1, 2, 3, 4), -2, 2)|
+-------------------------------+
|                         [3, 4]|
+-------------------------------+
-- sort_array
SELECT sort_array(array('b', 'd', null, 'c', 'a'), true);
+-----------------------------------------+
|sort_array(array(b, d, NULL, c, a), true)|
+-----------------------------------------+
|                       [NULL, a, b, c, d]|
+-----------------------------------------+
```

#### Window functions
<a name="supported-sql-window"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

Window functions operate on a group of rows, called a window, and compute a return value for each row based on that group of rows. Window functions are useful for tasks such as calculating moving averages, computing cumulative statistics, or accessing the value of rows given the relative position of the current row.
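As an illustration of the ranking behavior, the following is a minimal Python sketch of the `RANK` and `DENSE_RANK` semantics over a single partition (an illustration only, not the engine's implementation):

```python
# Minimal sketch of RANK and DENSE_RANK over one partition, assuming the
# rows are already in ORDER BY order. Illustration only.
def rank(ordered_values):
    """RANK: ties share a rank; the next distinct value skips ahead."""
    ranks, prev, r = [], object(), 0
    for i, v in enumerate(ordered_values, start=1):
        if v != prev:
            r, prev = i, v      # jump to the 1-based row position
        ranks.append(r)
    return ranks

def dense_rank(ordered_values):
    """DENSE_RANK: ties share a rank; no gaps afterwards."""
    ranks, prev, r = [], object(), 0
    for v in ordered_values:
        if v != prev:
            r, prev = r + 1, v
        ranks.append(r)
    return ranks

# Engineering salaries ordered by salary:
print(rank([21000, 23000, 23000, 29000]))        # [1, 2, 2, 4]
print(dense_rank([21000, 23000, 23000, 29000]))  # [1, 2, 2, 3]
```

Note how the tied salary 23000 produces rank 2 twice, after which `RANK` jumps to 4 while `DENSE_RANK` continues with 3.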

**Syntax**

```
window_function [ nulls_option ] OVER ( [ { PARTITION | DISTRIBUTE } BY partition_col_name [ , ... ] ] { ORDER | SORT } BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [ , ... ] [ window_frame ] )
```

**Parameters**
+ `window_function`

  Ranking functions

  Syntax: `RANK | DENSE_RANK | PERCENT_RANK | NTILE | ROW_NUMBER`

  Analytic functions

  Syntax: `CUME_DIST | LAG | LEAD | NTH_VALUE | FIRST_VALUE | LAST_VALUE`

  Aggregate functions

  Syntax: `MAX | MIN | COUNT | SUM | AVG | ...`
+ `nulls_option`: Specifies whether to skip null values when evaluating the window function. `RESPECT NULLS` means not to skip null values, while `IGNORE NULLS` means to skip them. If not specified, the default is `RESPECT NULLS`.

  Syntax: `{ IGNORE | RESPECT } NULLS`

  Note: Only `LAG`, `LEAD`, `NTH_VALUE`, `FIRST_VALUE`, and `LAST_VALUE` can be used with `IGNORE NULLS`.
+ `window_frame`: Specifies which row the window starts at and where it ends.

  Syntax: `{ RANGE | ROWS } { frame_start | BETWEEN frame_start AND frame_end }`

  `frame_start` and `frame_end` have the following syntax:

  Syntax: `UNBOUNDED PRECEDING | offset PRECEDING | CURRENT ROW | offset FOLLOWING | UNBOUNDED FOLLOWING`

  `offset`: Specifies an offset relative to the position of the current row.

  **Note:** If `frame_end` is omitted, it defaults to CURRENT ROW.
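The frame clause can be illustrated with a minimal Python sketch of `ROWS BETWEEN n PRECEDING AND CURRENT ROW` semantics (an illustration only, not the engine's implementation):

```python
# Sketch of a "ROWS BETWEEN n PRECEDING AND CURRENT ROW" frame:
# for each row, aggregate over the current row and up to n rows before it.
def rows_frame_min(ordered_values, preceding):
    out = []
    for i in range(len(ordered_values)):
        start = max(0, i - preceding)        # frame_start, clipped at row 0
        out.append(min(ordered_values[start:i + 1]))
    return out

print(rows_frame_min([10, 7, 9, 3, 8], preceding=1))  # [10, 7, 7, 3, 3]
```

With `UNBOUNDED PRECEDING` the frame start would instead stay fixed at row 0, turning the same loop into a cumulative aggregate.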

**Examples**

```
CREATE TABLE employees (name STRING, dept STRING, salary INT, age INT);
INSERT INTO employees VALUES ("Lisa", "Sales", 10000, 35);
INSERT INTO employees VALUES ("Evan", "Sales", 32000, 38);
INSERT INTO employees VALUES ("Fred", "Engineering", 21000, 28);
INSERT INTO employees VALUES ("Alex", "Sales", 30000, 33);
INSERT INTO employees VALUES ("Tom", "Engineering", 23000, 33);
INSERT INTO employees VALUES ("Jane", "Marketing", 29000, 28);
INSERT INTO employees VALUES ("Helen", "Marketing", 29000, 40);
INSERT INTO employees VALUES ("Jeff", "Marketing", 35000, 38);
INSERT INTO employees VALUES ("Paul", "Engineering", 29000, 23);
INSERT INTO employees VALUES ("Chloe", "Engineering", 23000, 25);
SELECT * FROM employees;
+-----+-----------+------+-----+
| name|       dept|salary|  age|
+-----+-----------+------+-----+
|Chloe|Engineering| 23000|   25|
| Fred|Engineering| 21000|   28|
| Paul|Engineering| 29000|   23|
|Helen|  Marketing| 29000|   40|
|  Tom|Engineering| 23000|   33|
| Jane|  Marketing| 29000|   28|
| Jeff|  Marketing| 35000|   38|
| Evan|      Sales| 32000|   38|
| Lisa|      Sales| 10000|   35|
| Alex|      Sales| 30000|   33|
+-----+-----------+------+-----+
SELECT name, dept, salary, RANK() OVER (PARTITION BY dept ORDER BY salary) AS rank FROM employees;
+-----+-----------+------+----+
| name|       dept|salary|rank|
+-----+-----------+------+----+
| Lisa|      Sales| 10000|   1|
| Alex|      Sales| 30000|   2|
| Evan|      Sales| 32000|   3|
| Fred|Engineering| 21000|   1|
|  Tom|Engineering| 23000|   2|
|Chloe|Engineering| 23000|   2|
| Paul|Engineering| 29000|   4|
|Helen|  Marketing| 29000|   1|
| Jane|  Marketing| 29000|   1|
| Jeff|  Marketing| 35000|   3|
+-----+-----------+------+----+
SELECT name, dept, salary, DENSE_RANK() OVER (PARTITION BY dept ORDER BY salary ROWS BETWEEN
UNBOUNDED PRECEDING AND CURRENT ROW) AS dense_rank FROM employees;
+-----+-----------+------+----------+
| name|       dept|salary|dense_rank|
+-----+-----------+------+----------+
| Lisa|      Sales| 10000|         1|
| Alex|      Sales| 30000|         2|
| Evan|      Sales| 32000|         3|
| Fred|Engineering| 21000|         1|
|  Tom|Engineering| 23000|         2|
|Chloe|Engineering| 23000|         2|
| Paul|Engineering| 29000|         3|
|Helen|  Marketing| 29000|         1|
| Jane|  Marketing| 29000|         1|
| Jeff|  Marketing| 35000|         2|
+-----+-----------+------+----------+
SELECT name, dept, age, CUME_DIST() OVER (PARTITION BY dept ORDER BY age
RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cume_dist FROM employees;
+-----+-----------+------+------------------+
| name|       dept|age   |         cume_dist|
+-----+-----------+------+------------------+
| Alex|      Sales|    33|0.3333333333333333|
| Lisa|      Sales|    35|0.6666666666666666|
| Evan|      Sales|    38|               1.0|
| Paul|Engineering|    23|              0.25|
|Chloe|Engineering|    25|               0.5|
| Fred|Engineering|    28|              0.75|
|  Tom|Engineering|    33|               1.0|
| Jane|  Marketing|    28|0.3333333333333333|
| Jeff|  Marketing|    38|0.6666666666666666|
|Helen|  Marketing|    40|               1.0|
+-----+-----------+------+------------------+
SELECT name, dept, salary, MIN(salary) OVER (PARTITION BY dept ORDER BY salary) AS min
FROM employees;
+-----+-----------+------+-----+
| name|       dept|salary|  min|
+-----+-----------+------+-----+
| Lisa|      Sales| 10000|10000|
| Alex|      Sales| 30000|10000|
| Evan|      Sales| 32000|10000|
|Helen|  Marketing| 29000|29000|
| Jane|  Marketing| 29000|29000|
| Jeff|  Marketing| 35000|29000|
| Fred|Engineering| 21000|21000|
|  Tom|Engineering| 23000|21000|
|Chloe|Engineering| 23000|21000|
| Paul|Engineering| 29000|21000|
+-----+-----------+------+-----+
SELECT name, dept, salary,
LAG(salary) OVER (PARTITION BY dept ORDER BY salary) AS lag,
LEAD(salary, 1, 0) OVER (PARTITION BY dept ORDER BY salary) AS lead
FROM employees;
+-----+-----------+------+-----+-----+
| name|       dept|salary|  lag| lead|
+-----+-----------+------+-----+-----+
| Lisa|      Sales| 10000| NULL|30000|
| Alex|      Sales| 30000|10000|32000|
| Evan|      Sales| 32000|30000|    0|
| Fred|Engineering| 21000| NULL|23000|
|Chloe|Engineering| 23000|21000|23000|
|  Tom|Engineering| 23000|23000|29000|
| Paul|Engineering| 29000|23000|    0|
|Helen|  Marketing| 29000| NULL|29000|
| Jane|  Marketing| 29000|29000|35000|
| Jeff|  Marketing| 35000|29000|    0|
+-----+-----------+------+-----+-----+
SELECT id, v,
LEAD(v, 0) IGNORE NULLS OVER w lead,
LAG(v, 0) IGNORE NULLS OVER w lag,
NTH_VALUE(v, 2) IGNORE NULLS OVER w nth_value,
FIRST_VALUE(v) IGNORE NULLS OVER w first_value,
LAST_VALUE(v) IGNORE NULLS OVER w last_value
FROM test_ignore_null
WINDOW w AS (ORDER BY id)
ORDER BY id;
+--+----+----+----+---------+-----------+----------+
|id|   v|lead| lag|nth_value|first_value|last_value|
+--+----+----+----+---------+-----------+----------+
| 0|NULL|NULL|NULL|     NULL|       NULL|      NULL|
| 1|   x|   x|   x|     NULL|          x|         x|
| 2|NULL|NULL|NULL|     NULL|          x|         x|
| 3|NULL|NULL|NULL|     NULL|          x|         x|
| 4|   y|   y|   y|        y|          x|         y|
| 5|NULL|NULL|NULL|        y|          x|         y|
| 6|   z|   z|   z|        y|          x|         z|
| 7|   v|   v|   v|        y|          x|         v|
| 8|NULL|NULL|NULL|        y|          x|         v|
+--+----+----+----+---------+-----------+----------+
```

#### Conversion functions
<a name="supported-sql-conversion"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


| Function | Description | 
| --- | --- | 
| bigint(expr) | Casts the value `expr` to the target data type `bigint`. | 
| binary(expr) | Casts the value `expr` to the target data type `binary`. | 
| boolean(expr) | Casts the value `expr` to the target data type `boolean`. | 
| cast(expr AS type) | Casts the value `expr` to the target data type `type`. | 
| date(expr) | Casts the value `expr` to the target data type `date`. | 
| decimal(expr) | Casts the value `expr` to the target data type `decimal`. | 
| double(expr) | Casts the value `expr` to the target data type `double`. | 
| float(expr) | Casts the value `expr` to the target data type `float`. | 
| int(expr) | Casts the value `expr` to the target data type `int`. | 
| smallint(expr) | Casts the value `expr` to the target data type `smallint`. | 
| string(expr) | Casts the value `expr` to the target data type `string`. | 
| timestamp(expr) | Casts the value `expr` to the target data type `timestamp`. | 
| tinyint(expr) | Casts the value `expr` to the target data type `tinyint`. | 

**Example**

```
-- cast
SELECT cast('10' as int);
+---------------+
|CAST(10 AS INT)|
+---------------+
|             10|
+---------------+
```

#### Predicate functions
<a name="supported-sql-predicate"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


| Function | Description | 
| --- | --- | 
| ! expr | Logical not. | 
| expr1 < expr2 | Returns true if `expr1` is less than `expr2`. | 
| expr1 <= expr2 | Returns true if `expr1` is less than or equal to `expr2`. | 
| expr1 <=> expr2 | Returns the same result as the EQUAL(=) operator for non-null operands, but returns true if both are null and false if one of them is null. | 
| expr1 = expr2 | Returns true if `expr1` equals `expr2`, or false otherwise. | 
| expr1 == expr2 | Returns true if `expr1` equals `expr2`, or false otherwise. | 
| expr1 > expr2 | Returns true if `expr1` is greater than `expr2`. | 
| expr1 >= expr2 | Returns true if `expr1` is greater than or equal to `expr2`. | 
| expr1 and expr2 | Logical AND. | 
| str ilike pattern[ ESCAPE escape] | Returns true if str matches `pattern` with `escape` case-insensitively, null if any argument is null, or false otherwise. | 
| expr1 in(expr2, expr3, ...) | Returns true if `expr1` equals any of the exprN values. | 
| isnan(expr) | Returns true if `expr` is NaN, or false otherwise. | 
| isnotnull(expr) | Returns true if `expr` is not null, or false otherwise. | 
| isnull(expr) | Returns true if `expr` is null, or false otherwise. | 
| str like pattern[ ESCAPE escape] | Returns true if str matches `pattern` with `escape`, null if any argument is null, or false otherwise. | 
| not expr | Logical not. | 
| expr1 or expr2 | Logical OR. | 
| regexp(str, regexp) | Returns true if `str` matches `regexp`, or false otherwise. | 
| regexp_like(str, regexp) | Returns true if `str` matches `regexp`, or false otherwise. | 
| rlike(str, regexp) | Returns true if `str` matches `regexp`, or false otherwise. | 

**Examples**

```
-- !
SELECT ! true;
+----------+
|(NOT true)|
+----------+
|     false|
+----------+
SELECT ! false;
+-----------+
|(NOT false)|
+-----------+
|       true|
+-----------+
SELECT ! NULL;
+----------+
|(NOT NULL)|
+----------+
|      NULL|
+----------+
-- <
SELECT to_date('2009-07-30 04:17:52') < to_date('2009-07-30 04:17:52');
+-------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) < to_date(2009-07-30 04:17:52))|
+-------------------------------------------------------------+
|                                                        false|
+-------------------------------------------------------------+
SELECT to_date('2009-07-30 04:17:52') < to_date('2009-08-01 04:17:52');
+-------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) < to_date(2009-08-01 04:17:52))|
+-------------------------------------------------------------+
|                                                         true|
+-------------------------------------------------------------+
SELECT 1 < NULL;
+----------+
|(1 < NULL)|
+----------+
|      NULL|
+----------+
-- <=
SELECT 2 <= 2;
+--------+
|(2 <= 2)|
+--------+
|    true|
+--------+
SELECT 1.0 <= '1';
+----------+
|(1.0 <= 1)|
+----------+
|      true|
+----------+
SELECT to_date('2009-07-30 04:17:52') <= to_date('2009-07-30 04:17:52');
+--------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) <= to_date(2009-07-30 04:17:52))|
+--------------------------------------------------------------+
|                                                          true|
+--------------------------------------------------------------+
SELECT to_date('2009-07-30 04:17:52') <= to_date('2009-08-01 04:17:52');
+--------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) <= to_date(2009-08-01 04:17:52))|
+--------------------------------------------------------------+
|                                                          true|
+--------------------------------------------------------------+
SELECT 1 <= NULL;
+-----------+
|(1 <= NULL)|
+-----------+
|       NULL|
+-----------+
-- <=>
SELECT 2 <=> 2;
+---------+
|(2 <=> 2)|
+---------+
|     true|
+---------+
SELECT 1 <=> '1';
+---------+
|(1 <=> 1)|
+---------+
|     true|
+---------+
SELECT true <=> NULL;
+---------------+
|(true <=> NULL)|
+---------------+
|          false|
+---------------+
SELECT NULL <=> NULL;
+---------------+
|(NULL <=> NULL)|
+---------------+
|           true|
+---------------+
-- =
SELECT 2 = 2;
+-------+
|(2 = 2)|
+-------+
|   true|
+-------+
SELECT 1 = '1';
+-------+
|(1 = 1)|
+-------+
|   true|
+-------+
SELECT true = NULL;
+-------------+
|(true = NULL)|
+-------------+
|         NULL|
+-------------+
SELECT NULL = NULL;
+-------------+
|(NULL = NULL)|
+-------------+
|         NULL|
+-------------+
-- ==
SELECT 2 == 2;
+-------+
|(2 = 2)|
+-------+
|   true|
+-------+
SELECT 1 == '1';
+-------+
|(1 = 1)|
+-------+
|   true|
+-------+
SELECT true == NULL;
+-------------+
|(true = NULL)|
+-------------+
|         NULL|
+-------------+
SELECT NULL == NULL;
+-------------+
|(NULL = NULL)|
+-------------+
|         NULL|
+-------------+
-- >
SELECT 2 > 1;
+-------+
|(2 > 1)|
+-------+
|   true|
+-------+
SELECT 2 > 1.1;
+-------+
|(2 > 1)|
+-------+
|   true|
+-------+
SELECT to_date('2009-07-30 04:17:52') > to_date('2009-07-30 04:17:52');
+-------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) > to_date(2009-07-30 04:17:52))|
+-------------------------------------------------------------+
|                                                        false|
+-------------------------------------------------------------+
SELECT to_date('2009-07-30 04:17:52') > to_date('2009-08-01 04:17:52');
+-------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) > to_date(2009-08-01 04:17:52))|
+-------------------------------------------------------------+
|                                                        false|
+-------------------------------------------------------------+
SELECT 1 > NULL;
+----------+
|(1 > NULL)|
+----------+
|      NULL|
+----------+
-- >=
SELECT 2 >= 1;
+--------+
|(2 >= 1)|
+--------+
|    true|
+--------+
SELECT 2.0 >= '2.1';
+------------+
|(2.0 >= 2.1)|
+------------+
|       false|
+------------+
SELECT to_date('2009-07-30 04:17:52') >= to_date('2009-07-30 04:17:52');
+--------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) >= to_date(2009-07-30 04:17:52))|
+--------------------------------------------------------------+
|                                                          true|
+--------------------------------------------------------------+
SELECT to_date('2009-07-30 04:17:52') >= to_date('2009-08-01 04:17:52');
+--------------------------------------------------------------+
|(to_date(2009-07-30 04:17:52) >= to_date(2009-08-01 04:17:52))|
+--------------------------------------------------------------+
|                                                         false|
+--------------------------------------------------------------+
SELECT 1 >= NULL;
+-----------+
|(1 >= NULL)|
+-----------+
|       NULL|
+-----------+
-- and
SELECT true and true;
+---------------+
|(true AND true)|
+---------------+
|           true|
+---------------+
SELECT true and false;
+----------------+
|(true AND false)|
+----------------+
|           false|
+----------------+
SELECT true and NULL;
+---------------+
|(true AND NULL)|
+---------------+
|           NULL|
+---------------+
SELECT false and NULL;
+----------------+
|(false AND NULL)|
+----------------+
|           false|
+----------------+
-- ilike
SELECT ilike('Wagon', '_Agon');
+-------------------+
|ilike(Wagon, _Agon)|
+-------------------+
|               true|
+-------------------+
SELECT '%SystemDrive%\Users\John' ilike '\%SystemDrive\%\\users%';
+--------------------------------------------------------+
|ilike(%SystemDrive%\Users\John, \%SystemDrive\%\\users%)|
+--------------------------------------------------------+
|                                                    true|
+--------------------------------------------------------+
SELECT '%SystemDrive%\\USERS\\John' ilike '\%SystemDrive\%\\\\Users%';
+--------------------------------------------------------+
|ilike(%SystemDrive%\USERS\John, \%SystemDrive\%\\Users%)|
+--------------------------------------------------------+
|                                                    true|
+--------------------------------------------------------+
SELECT '%SystemDrive%/Users/John' ilike '/%SYSTEMDrive/%//Users%' ESCAPE '/';
+--------------------------------------------------------+
|ilike(%SystemDrive%/Users/John, /%SYSTEMDrive/%//Users%)|
+--------------------------------------------------------+
|                                                    true|
+--------------------------------------------------------+
-- in
SELECT 1 in(1, 2, 3);
+----------------+
|(1 IN (1, 2, 3))|
+----------------+
|            true|
+----------------+
SELECT 1 in(2, 3, 4);
+----------------+
|(1 IN (2, 3, 4))|
+----------------+
|           false|
+----------------+
SELECT named_struct('a', 1, 'b', 2) in(named_struct('a', 1, 'b', 1), named_struct('a', 1, 'b', 3));
+----------------------------------------------------------------------------------+
|(named_struct(a, 1, b, 2) IN (named_struct(a, 1, b, 1), named_struct(a, 1, b, 3)))|
+----------------------------------------------------------------------------------+
|                                                                             false|
+----------------------------------------------------------------------------------+
SELECT named_struct('a', 1, 'b', 2) in(named_struct('a', 1, 'b', 2), named_struct('a', 1, 'b', 3));
+----------------------------------------------------------------------------------+
|(named_struct(a, 1, b, 2) IN (named_struct(a, 1, b, 2), named_struct(a, 1, b, 3)))|
+----------------------------------------------------------------------------------+
|                                                                              true|
+----------------------------------------------------------------------------------+
-- isnan
SELECT isnan(cast('NaN' as double));
+--------------------------+
|isnan(CAST(NaN AS DOUBLE))|
+--------------------------+
|                      true|
+--------------------------+
-- isnotnull
SELECT isnotnull(1);
+---------------+
|(1 IS NOT NULL)|
+---------------+
|           true|
+---------------+
-- isnull
SELECT isnull(1);
+-----------+
|(1 IS NULL)|
+-----------+
|      false|
+-----------+
-- like
SELECT like('Wagon', '_Agon');
+----------------+
|Wagon LIKE _Agon|
+----------------+
|            true|
+----------------+
-- not
SELECT not true;
+----------+
|(NOT true)|
+----------+
|     false|
+----------+
SELECT not false;
+-----------+
|(NOT false)|
+-----------+
|       true|
+-----------+
SELECT not NULL;
+----------+
|(NOT NULL)|
+----------+
|      NULL|
+----------+
-- or
SELECT true or false;
+---------------+
|(true OR false)|
+---------------+
|           true|
+---------------+
SELECT false or false;
+----------------+
|(false OR false)|
+----------------+
|           false|
+----------------+
SELECT true or NULL;
+--------------+
|(true OR NULL)|
+--------------+
|          true|
+--------------+
SELECT false or NULL;
+---------------+
|(false OR NULL)|
+---------------+
|           NULL|
+---------------+
```
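The difference between `=` and the null-safe `<=>` shown in the examples above can be sketched in Python, with `None` standing in for SQL NULL (an illustration of the three-valued logic, not the engine's code):

```python
# Sketch of SQL's three-valued comparison, with None standing in for NULL.
def sql_eq(a, b):
    """Plain '=': NULL if either operand is NULL."""
    if a is None or b is None:
        return None
    return a == b

def null_safe_eq(a, b):
    """'<=>': true if both operands are NULL, false if only one is."""
    if a is None or b is None:
        return a is None and b is None
    return a == b

print(sql_eq(None, None))        # None (SQL NULL)
print(null_safe_eq(None, None))  # True
print(null_safe_eq(True, None))  # False
```

This is why `WHERE col = NULL` never matches any row, while `col <=> NULL` matches exactly the rows where `col` is NULL.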

#### Map functions
<a name="supported-sql-map"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


| Function | Description | 
| --- | --- | 
| element_at(array, index) | Returns the element of the array at the given (1-based) index. | 
| element_at(map, key) | Returns the value for the given key. The function returns NULL if the key is not contained in the map. | 
| map(key0, value0, key1, value1, ...) | Creates a map with the given key/value pairs. | 
| map_concat(map, ...) | Returns the union of all the given maps. | 
| map_contains_key(map, key) | Returns true if the map contains the key. | 
| map_entries(map) | Returns an unordered array of all entries in the given map. | 
| map_from_arrays(keys, values) | Creates a map with a pair of the given key/value arrays. No element in keys may be null. | 
| map_from_entries(arrayOfEntries) | Returns a map created from the given array of entries. | 
| map_keys(map) | Returns an unordered array containing the keys of the map. | 
| map_values(map) | Returns an unordered array containing the values of the map. | 
| str_to_map(text[, pairDelim[, keyValueDelim]]) | Creates a map after splitting the text into key/value pairs using delimiters. The default delimiters are ',' for `pairDelim` and ':' for `keyValueDelim`. Both `pairDelim` and `keyValueDelim` are treated as regular expressions. | 
| try_element_at(array, index) | Returns the element of the array at the given (1-based) index. If the index is 0, the function throws an error. If the index < 0, it accesses elements from the last to the first. The function always returns NULL if the index exceeds the length of the array. | 
| try_element_at(map, key) | Returns the value for the given key. The function always returns NULL if the key is not contained in the map. | 

**Examples**

```
-- element_at
SELECT element_at(array(1, 2, 3), 2);
+-----------------------------+
|element_at(array(1, 2, 3), 2)|
+-----------------------------+
|                            2|
+-----------------------------+
SELECT element_at(map(1, 'a', 2, 'b'), 2);
+------------------------------+
|element_at(map(1, a, 2, b), 2)|
+------------------------------+
|                             b|
+------------------------------+
-- map
SELECT map(1.0, '2', 3.0, '4');
+--------------------+
| map(1.0, 2, 3.0, 4)|
+--------------------+
|{1.0 -> 2, 3.0 -> 4}|
+--------------------+
-- map_concat
SELECT map_concat(map(1, 'a', 2, 'b'), map(3, 'c'));
+--------------------------------------+
|map_concat(map(1, a, 2, b), map(3, c))|
+--------------------------------------+
|                  {1 -> a, 2 -> b, ...|
+--------------------------------------+
-- map_contains_key
SELECT map_contains_key(map(1, 'a', 2, 'b'), 1);
+------------------------------------+
|map_contains_key(map(1, a, 2, b), 1)|
+------------------------------------+
|                                true|
+------------------------------------+
SELECT map_contains_key(map(1, 'a', 2, 'b'), 3);
+------------------------------------+
|map_contains_key(map(1, a, 2, b), 3)|
+------------------------------------+
|                               false|
+------------------------------------+
-- map_entries
SELECT map_entries(map(1, 'a', 2, 'b'));
+----------------------------+
|map_entries(map(1, a, 2, b))|
+----------------------------+
|            [{1, a}, {2, b}]|
+----------------------------+
-- map_from_arrays
SELECT map_from_arrays(array(1.0, 3.0), array('2', '4'));
+---------------------------------------------+
|map_from_arrays(array(1.0, 3.0), array(2, 4))|
+---------------------------------------------+
|                         {1.0 -> 2, 3.0 -> 4}|
+---------------------------------------------+
-- map_from_entries
SELECT map_from_entries(array(struct(1, 'a'), struct(2, 'b')));
+---------------------------------------------------+
|map_from_entries(array(struct(1, a), struct(2, b)))|
+---------------------------------------------------+
|                                   {1 -> a, 2 -> b}|
+---------------------------------------------------+
-- map_keys
SELECT map_keys(map(1, 'a', 2, 'b'));
+-------------------------+
|map_keys(map(1, a, 2, b))|
+-------------------------+
|                   [1, 2]|
+-------------------------+
-- map_values
SELECT map_values(map(1, 'a', 2, 'b'));
+---------------------------+
|map_values(map(1, a, 2, b))|
+---------------------------+
|                     [a, b]|
+---------------------------+
-- str_to_map
SELECT str_to_map('a:1,b:2,c:3', ',', ':');
+-----------------------------+
|str_to_map(a:1,b:2,c:3, ,, :)|
+-----------------------------+
|         {a -> 1, b -> 2, ...|
+-----------------------------+
SELECT str_to_map('a');
+-------------------+
|str_to_map(a, ,, :)|
+-------------------+
|        {a -> NULL}|
+-------------------+
-- try_element_at
SELECT try_element_at(array(1, 2, 3), 2);
+---------------------------------+
|try_element_at(array(1, 2, 3), 2)|
+---------------------------------+
|                                2|
+---------------------------------+
SELECT try_element_at(map(1, 'a', 2, 'b'), 2);
+----------------------------------+
|try_element_at(map(1, a, 2, b), 2)|
+----------------------------------+
|                                 b|
+----------------------------------+
```
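The `str_to_map` splitting behavior can be sketched in Python (an illustration only; like the SQL function, it treats both delimiters as regular expressions, which `re.split` models here):

```python
import re

# Sketch of str_to_map: both delimiters act as regular expressions, and a
# key without a value maps to NULL (None here). Illustration only.
def str_to_map(text, pair_delim=",", kv_delim=":"):
    result = {}
    for pair in re.split(pair_delim, text):
        parts = re.split(kv_delim, pair, maxsplit=1)
        result[parts[0]] = parts[1] if len(parts) == 2 else None
    return result

print(str_to_map("a:1,b:2,c:3"))  # {'a': '1', 'b': '2', 'c': '3'}
print(str_to_map("a"))            # {'a': None}
```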

#### Math functions
<a name="supported-sql-math"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).


****  

| 函数 | 说明 | 
| --- | --- | 
| expr1 % expr2 | 返回 `expr1`/`expr2` 运算后的余数。 | 
| expr1 \$1 expr2 | 返回 `expr1`\$1`expr2`。 | 
| expr1 \$1 expr2 | 返回 `expr1`\$1`expr2`。 | 
| expr1 - expr2 | 返回 `expr1`-`expr2`。 | 
| expr1 / expr2 | 返回 `expr1`/`expr2`。总是使用浮点除法。 | 
| abs(expr) | 返回数值或区间值的绝对值。 | 
| acos(expr) | 返回 `expr` 的反余弦值，其计算方式与 `java.lang.Math.acos` 一致。 | 
| acosh(expr) | 返回 `expr` 的反双曲余弦值。 | 
| asin(expr) | 返回 `expr` 的反正弦值，其计算方式与 `java.lang.Math.asin` 一致。 | 
| asinh(expr) | 返回 `expr` 的反双曲正弦值。 | 
| atan(expr) | 返回 `expr` 的反正切值，其计算方式与 `java.lang.Math.atan` 一致 | 
| atan2(exprY, exprX) | 返回平面正 x 轴与坐标点 (`exprX`, `exprY`) 之间夹角的弧度值，其计算方式与 `java.lang.Math.atan2` 一致。 | 
| atanh(expr) | 返回 `expr` 的反双曲正切值。 | 
| bin(expr) | 返回长整型值 `expr` 的二进制字符串表示形式。 | 
| bround(expr, d) | 返回按 `d` 位小数精度四舍五入的 `expr`，采用 HALF\$1EVEN 舍入规则。 | 
| cbrt(expr) | 返回 `expr` 的立方根。 | 
| ceil(expr[, scale]) | 返回四舍五入后不小于 `expr` 的最小数值。可选的 `scale` 参数可用于控制舍入行为。 | 
| ceiling(expr[, scale]) | 返回四舍五入后不小于 `expr` 的最小数值。可选的 `scale` 参数可用于控制舍入行为。 | 
| conv(num, from\$1base, to\$1base) | 将 `num` 从 `from\$1base` 转换为 `to\$1base`。 | 
| cos(expr) | 返回 `expr` 的余弦值，其计算方式与 `java.lang.Math.cos` 一致。 | 
| cosh(expr) | 返回 `expr` 的双曲余弦值，其计算方式与 `java.lang.Math.cosh` 一致。 | 
| cot(expr) | 返回 `expr` 的余切值，其计算方式与 `1/java.lang.Math.tan` 一致。 | 
| csc(expr) | 返回 `expr` 的余割值，其计算方式与 `1/java.lang.Math.sin` 一致。 | 
| degrees(expr) | 将弧度转换为度。 | 
| expr1 div expr2 | `expr1` 除以 `expr2`。如果操作数为 NULL 或 `expr2` 为 0，则返回 NULL。结果将强制转换为长整型。 | 
| e() | 返回欧拉常数 e。 | 
| exp(expr) | 返回 e 的 `expr` 次幂。 | 
| expm1(expr)：返回 exp(`expr`) | 1 | 
| factorial(expr) | 返回 `expr` 的阶乘。`expr` 的取值范围为 [0..20]。否则为 null。 | 
| floor(expr[, scale]) | 返回向下取整后不大于 `expr` 的最大数值。可选的 `scale` 参数可用于控制舍入行为。 | 
| greatest(expr, ...) | 返回所有参数中的最大值，跳过 null 值。 | 
| hex(expr) | 将 `expr` 转换为十六进制。 | 
| hypot(expr1, expr2) | 返回 sqrt(`expr1`\$1\$12 \$1 `expr2`\$1\$12)。 | 
| least(expr, ...) | 返回所有参数中的最小值，跳过 null 值。 | 
| ln(expr) | 返回 `expr` 的自然对数（以 e 为底）。 | 
| log(base, expr) | 返回以 `base` 为底 `expr` 的对数。 | 
| log10(expr) | 返回以 10 为底 `expr` 的对数。 | 
| log1p(expr) | 返回 log(1 \$1 `expr`)。 | 
| log2(expr) | 返回以 2 为底 `expr` 的对数。 | 
| expr1 mod expr2 | 返回 `expr1`/`expr2` 运算后的余数。 | 
| negative(expr) | 返回 `expr` 的取反值。 | 
| pi() | 返回 pi。 | 
| pmod(expr1, expr2) | 返回 `expr1` 除以 `expr2` 的余数（取整数部分）。 | 
| positive(expr) | 返回 `expr` 的值。 | 
| pow(expr1, expr2) | 将 `expr1` 乘方为 `expr2` 次。 | 
| power(expr1, expr2) | 将 `expr1` 乘方为 `expr2` 次。 | 
| radians(expr) | 将度转换为弧度。 | 
| rand([seed]) | 返回随机值，该值在 [0, 1) 区间内呈独立同分布（i.i.d.）的均匀分布。 | 
| randn([seed]) | 返回随机值，该值从标准正态分布中抽取，具有独立同分布（i.i.d.）特性。 | 
| random([seed]) | 返回随机值，该值在 [0, 1) 区间内呈独立同分布（i.i.d.）的均匀分布。 | 
| rint(expr) | 返回与参数值最接近且等于数学整数的双精度值。 | 
| round(expr, d) | 返回按 `d` 位小数精度四舍五入的 `expr`，采用 HALF\$1UP 舍入规则。 | 
| sec(expr) | 返回 `expr` 的正割值，其计算方式与 `1/java.lang.Math.cos` 一致。 | 
| shiftleft(base, expr) | 按位左移。 | 
| sign(expr) | 返回 -1.0、0.0 或 1.0，取决于 `expr` 是负数、0 还是正数。 | 
| signum(expr) | 返回 -1.0、0.0 或 1.0，取决于 `expr` 是负数、0 还是正数。 | 
| sin(expr) | 返回 `expr` 的正弦值，其计算方式与 `java.lang.Math.sin` 一致。 | 
| sinh(expr) | 返回 `expr` 的双曲正弦值，其计算方式与 `java.lang.Math.sinh` 一致。 | 
| sqrt(expr) | 返回 `expr` 的平方根。 | 
| tan(expr) | 返回 `expr` 的正切值，其计算方式与 `java.lang.Math.tan` 一致。 | 
| tanh(expr) | 返回 `expr` 的双曲正切值，其计算方式与 `java.lang.Math.tanh` 一致。 | 
| try_add(expr1, expr2) | 返回 `expr1` 和 `expr2` 的和，当发生溢出时结果为 null。可接受的输入类型与 `+` 运算符相同。 | 
| try_divide(dividend, divisor) | 返回 `dividend`/`divisor`，始终使用浮点除法。如果 `divisor` 为 0，则结果为 null。`dividend` 必须是数值或区间类型，`divisor` 必须是数值。 | 
| try_multiply(expr1, expr2) | 返回 `expr1`*`expr2`，当发生溢出时结果为 null。可接受的输入类型与 `*` 运算符相同。 | 
| try_subtract(expr1, expr2) | 返回 `expr1`-`expr2`，当发生溢出时结果为 null。可接受的输入类型与 `-` 运算符相同。 | 
| unhex(expr) | 将十六进制 `expr` 转换为二进制。 | 
| width_bucket(value, min_value, max_value, num_bucket) | 返回在 `min_value` 到 `max_value` 范围内划分为 `num_bucket` 个等宽存储桶的直方图中，`value` 所属的存储桶编号。 | 

**示例**

```
-- %
SELECT 2 % 1.8;
+---------+
|(2 % 1.8)|
+---------+
|      0.2|
+---------+
SELECT MOD(2, 1.8);
+-----------+
|mod(2, 1.8)|
+-----------+
|        0.2|
+-----------+
-- *
SELECT 2 * 3;
+-------+
|(2 * 3)|
+-------+
|      6|
+-------+
-- +
SELECT 1 + 2;
+-------+
|(1 + 2)|
+-------+
|      3|
+-------+
-- -
SELECT 2 - 1;
+-------+
|(2 - 1)|
+-------+
|      1|
+-------+
-- /
SELECT 3 / 2;
+-------+
|(3 / 2)|
+-------+
|    1.5|
+-------+
SELECT 2L / 2L;
+-------+
|(2 / 2)|
+-------+
|    1.0|
+-------+
-- abs
SELECT abs(-1);
+-------+
|abs(-1)|
+-------+
|      1|
+-------+
SELECT abs(INTERVAL -'1-1' YEAR TO MONTH);
+----------------------------------+
|abs(INTERVAL '-1-1' YEAR TO MONTH)|
+----------------------------------+
|              INTERVAL '1-1' YE...|
+----------------------------------+
-- acos
SELECT acos(1);
+-------+
|ACOS(1)|
+-------+
|    0.0|
+-------+
SELECT acos(2);
+-------+
|ACOS(2)|
+-------+
|    NaN|
+-------+
-- acosh
SELECT acosh(1);
+--------+
|ACOSH(1)|
+--------+
|     0.0|
+--------+
SELECT acosh(0);
+--------+
|ACOSH(0)|
+--------+
|     NaN|
+--------+
-- asin
SELECT asin(0);
+-------+
|ASIN(0)|
+-------+
|    0.0|
+-------+
SELECT asin(2);
+-------+
|ASIN(2)|
+-------+
|    NaN|
+-------+
-- asinh
SELECT asinh(0);
+--------+
|ASINH(0)|
+--------+
|     0.0|
+--------+
-- atan
SELECT atan(0);
+-------+
|ATAN(0)|
+-------+
|    0.0|
+-------+
-- atan2
SELECT atan2(0, 0);
+-----------+
|ATAN2(0, 0)|
+-----------+
|        0.0|
+-----------+
-- atanh
SELECT atanh(0);
+--------+
|ATANH(0)|
+--------+
|     0.0|
+--------+
SELECT atanh(2);
+--------+
|ATANH(2)|
+--------+
|     NaN|
+--------+
-- bin
SELECT bin(13);
+-------+
|bin(13)|
+-------+
|   1101|
+-------+
SELECT bin(-13);
+--------------------+
|            bin(-13)|
+--------------------+
|11111111111111111...|
+--------------------+
SELECT bin(13.3);
+---------+
|bin(13.3)|
+---------+
|     1101|
+---------+
-- bround
SELECT bround(2.5, 0);
+--------------+
|bround(2.5, 0)|
+--------------+
|             2|
+--------------+
SELECT bround(25, -1);
+--------------+
|bround(25, -1)|
+--------------+
|            20|
+--------------+
-- cbrt
SELECT cbrt(27.0);
+----------+
|CBRT(27.0)|
+----------+
|       3.0|
+----------+
-- ceil
SELECT ceil(-0.1);
+----------+
|CEIL(-0.1)|
+----------+
|         0|
+----------+
SELECT ceil(5);
+-------+
|CEIL(5)|
+-------+
|      5|
+-------+
SELECT ceil(3.1411, 3);
+---------------+
|ceil(3.1411, 3)|
+---------------+
|          3.142|
+---------------+
SELECT ceil(3.1411, -3);
+----------------+
|ceil(3.1411, -3)|
+----------------+
|            1000|
+----------------+
-- ceiling
SELECT ceiling(-0.1);
+-------------+
|ceiling(-0.1)|
+-------------+
|            0|
+-------------+
SELECT ceiling(5);
+----------+
|ceiling(5)|
+----------+
|         5|
+----------+
SELECT ceiling(3.1411, 3);
+------------------+
|ceiling(3.1411, 3)|
+------------------+
|             3.142|
+------------------+
SELECT ceiling(3.1411, -3);
+-------------------+
|ceiling(3.1411, -3)|
+-------------------+
|               1000|
+-------------------+
-- conv
SELECT conv('100', 2, 10);
+----------------+
|conv(100, 2, 10)|
+----------------+
|               4|
+----------------+
SELECT conv(-10, 16, -10);
+------------------+
|conv(-10, 16, -10)|
+------------------+
|               -16|
+------------------+
-- cos
SELECT cos(0);
+------+
|COS(0)|
+------+
|   1.0|
+------+
-- cosh
SELECT cosh(0);
+-------+
|COSH(0)|
+-------+
|    1.0|
+-------+
-- cot
SELECT cot(1);
+------------------+
|            COT(1)|
+------------------+
|0.6420926159343306|
+------------------+
-- csc
SELECT csc(1);
+------------------+
|            CSC(1)|
+------------------+
|1.1883951057781212|
+------------------+
-- degrees
SELECT degrees(3.141592653589793);
+--------------------------+
|DEGREES(3.141592653589793)|
+--------------------------+
|                     180.0|
+--------------------------+
-- div
SELECT 3 div 2;
+---------+
|(3 div 2)|
+---------+
|        1|
+---------+
SELECT INTERVAL '1-1' YEAR TO MONTH div INTERVAL '-1' MONTH;
+------------------------------------------------------+
|(INTERVAL '1-1' YEAR TO MONTH div INTERVAL '-1' MONTH)|
+------------------------------------------------------+
|                                                   -13|
+------------------------------------------------------+
-- e
SELECT e();
+-----------------+
|              E()|
+-----------------+
|2.718281828459045|
+-----------------+
-- exp
SELECT exp(0);
+------+
|EXP(0)|
+------+
|   1.0|
+------+
-- expm1
SELECT expm1(0);
+--------+
|EXPM1(0)|
+--------+
|     0.0|
+--------+
-- factorial
SELECT factorial(5);
+------------+
|factorial(5)|
+------------+
|         120|
+------------+
-- floor
SELECT floor(-0.1);
+-----------+
|FLOOR(-0.1)|
+-----------+
|         -1|
+-----------+
SELECT floor(5);
+--------+
|FLOOR(5)|
+--------+
|       5|
+--------+
SELECT floor(3.1411, 3);
+----------------+
|floor(3.1411, 3)|
+----------------+
|           3.141|
+----------------+
SELECT floor(3.1411, -3);
+-----------------+
|floor(3.1411, -3)|
+-----------------+
|                0|
+-----------------+
-- greatest
SELECT greatest(10, 9, 2, 4, 3);
+------------------------+
|greatest(10, 9, 2, 4, 3)|
+------------------------+
|                      10|
+------------------------+
-- hex
SELECT hex(17);
+-------+
|hex(17)|
+-------+
|     11|
+-------+
SELECT hex('SQL');
+--------+
|hex(SQL)|
+--------+
|  53514C|
+--------+
-- hypot
SELECT hypot(3, 4);
+-----------+
|HYPOT(3, 4)|
+-----------+
|        5.0|
+-----------+
-- least
SELECT least(10, 9, 2, 4, 3);
+---------------------+
|least(10, 9, 2, 4, 3)|
+---------------------+
|                    2|
+---------------------+
-- ln
SELECT ln(1);
+-----+
|ln(1)|
+-----+
|  0.0|
+-----+
-- log
SELECT log(10, 100);
+------------+
|LOG(10, 100)|
+------------+
|         2.0|
+------------+
-- log10
SELECT log10(10);
+---------+
|LOG10(10)|
+---------+
|      1.0|
+---------+
-- log1p
SELECT log1p(0);
+--------+
|LOG1P(0)|
+--------+
|     0.0|
+--------+
-- log2
SELECT log2(2);
+-------+
|LOG2(2)|
+-------+
|    1.0|
+-------+
-- mod
SELECT 2 % 1.8;
+---------+
|(2 % 1.8)|
+---------+
|      0.2|
+---------+
SELECT MOD(2, 1.8);
+-----------+
|mod(2, 1.8)|
+-----------+
|        0.2|
+-----------+
-- negative
SELECT negative(1);
+-----------+
|negative(1)|
+-----------+
|         -1|
+-----------+
-- pi
SELECT pi();
+-----------------+
|             PI()|
+-----------------+
|3.141592653589793|
+-----------------+
-- pmod
SELECT pmod(10, 3);
+-----------+
|pmod(10, 3)|
+-----------+
|          1|
+-----------+
SELECT pmod(-10, 3);
+------------+
|pmod(-10, 3)|
+------------+
|           2|
+------------+
-- positive
SELECT positive(1);
+-----+
|(+ 1)|
+-----+
|    1|
+-----+
-- pow
SELECT pow(2, 3);
+---------+
|pow(2, 3)|
+---------+
|      8.0|
+---------+
-- power
SELECT power(2, 3);
+-----------+
|POWER(2, 3)|
+-----------+
|        8.0|
+-----------+
-- radians
SELECT radians(180);
+-----------------+
|     RADIANS(180)|
+-----------------+
|3.141592653589793|
+-----------------+
-- rand
SELECT rand();
+------------------+
|            rand()|
+------------------+
|0.7211420708112387|
+------------------+
SELECT rand(0);
+------------------+
|           rand(0)|
+------------------+
|0.7604953758285915|
+------------------+
SELECT rand(null);
+------------------+
|        rand(NULL)|
+------------------+
|0.7604953758285915|
+------------------+
-- randn
SELECT randn();
+-------------------+
|            randn()|
+-------------------+
|-0.8175603217732732|
+-------------------+
SELECT randn(0);
+------------------+
|          randn(0)|
+------------------+
|1.6034991609278433|
+------------------+
SELECT randn(null);
+------------------+
|       randn(NULL)|
+------------------+
|1.6034991609278433|
+------------------+
-- random
SELECT random();
+-----------------+
|           rand()|
+-----------------+
|0.394205008255365|
+-----------------+
SELECT random(0);
+------------------+
|           rand(0)|
+------------------+
|0.7604953758285915|
+------------------+
SELECT random(null);
+------------------+
|        rand(NULL)|
+------------------+
|0.7604953758285915|
+------------------+
-- rint
SELECT rint(12.3456);
+-------------+
|rint(12.3456)|
+-------------+
|         12.0|
+-------------+
-- round
SELECT round(2.5, 0);
+-------------+
|round(2.5, 0)|
+-------------+
|            3|
+-------------+
-- sec
SELECT sec(0);
+------+
|SEC(0)|
+------+
|   1.0|
+------+
-- shiftleft
SELECT shiftleft(2, 1);
+---------------+
|shiftleft(2, 1)|
+---------------+
|              4|
+---------------+
-- sign
SELECT sign(40);
+--------+
|sign(40)|
+--------+
|     1.0|
+--------+
SELECT sign(INTERVAL -'100' YEAR);
+--------------------------+
|sign(INTERVAL '-100' YEAR)|
+--------------------------+
|                      -1.0|
+--------------------------+
-- signum
SELECT signum(40);
+----------+
|SIGNUM(40)|
+----------+
|       1.0|
+----------+
SELECT signum(INTERVAL -'100' YEAR);
+----------------------------+
|SIGNUM(INTERVAL '-100' YEAR)|
+----------------------------+
|                        -1.0|
+----------------------------+
-- sin
SELECT sin(0);
+------+
|SIN(0)|
+------+
|   0.0|
+------+
-- sinh
SELECT sinh(0);
+-------+
|SINH(0)|
+-------+
|    0.0|
+-------+
-- sqrt
SELECT sqrt(4);
+-------+
|SQRT(4)|
+-------+
|    2.0|
+-------+
-- tan
SELECT tan(0);
+------+
|TAN(0)|
+------+
|   0.0|
+------+
-- tanh
SELECT tanh(0);
+-------+
|TANH(0)|
+-------+
|    0.0|
+-------+
-- try_add
SELECT try_add(1, 2);
+-------------+
|try_add(1, 2)|
+-------------+
|            3|
+-------------+
SELECT try_add(2147483647, 1);
+----------------------+
|try_add(2147483647, 1)|
+----------------------+
|                  NULL|
+----------------------+
SELECT try_add(date'2021-01-01', 1);
+-----------------------------+
|try_add(DATE '2021-01-01', 1)|
+-----------------------------+
|                   2021-01-02|
+-----------------------------+
SELECT try_add(date'2021-01-01', interval 1 year);
+---------------------------------------------+
|try_add(DATE '2021-01-01', INTERVAL '1' YEAR)|
+---------------------------------------------+
|                                   2022-01-01|
+---------------------------------------------+
SELECT try_add(timestamp'2021-01-01 00:00:00', interval 1 day);
+----------------------------------------------------------+
|try_add(TIMESTAMP '2021-01-01 00:00:00', INTERVAL '1' DAY)|
+----------------------------------------------------------+
|                                       2021-01-02 00:00:00|
+----------------------------------------------------------+
SELECT try_add(interval 1 year, interval 2 year);
+---------------------------------------------+
|try_add(INTERVAL '1' YEAR, INTERVAL '2' YEAR)|
+---------------------------------------------+
|                            INTERVAL '3' YEAR|
+---------------------------------------------+
-- try_divide
SELECT try_divide(3, 2);
+----------------+
|try_divide(3, 2)|
+----------------+
|             1.5|
+----------------+
SELECT try_divide(2L, 2L);
+----------------+
|try_divide(2, 2)|
+----------------+
|             1.0|
+----------------+
SELECT try_divide(1, 0);
+----------------+
|try_divide(1, 0)|
+----------------+
|            NULL|
+----------------+
SELECT try_divide(interval 2 month, 2);
+---------------------------------+
|try_divide(INTERVAL '2' MONTH, 2)|
+---------------------------------+
|             INTERVAL '0-1' YE...|
+---------------------------------+
SELECT try_divide(interval 2 month, 0);
+---------------------------------+
|try_divide(INTERVAL '2' MONTH, 0)|
+---------------------------------+
|                             NULL|
+---------------------------------+
-- try_multiply
SELECT try_multiply(2, 3);
+------------------+
|try_multiply(2, 3)|
+------------------+
|                 6|
+------------------+
SELECT try_multiply(-2147483648, 10);
+-----------------------------+
|try_multiply(-2147483648, 10)|
+-----------------------------+
|                         NULL|
+-----------------------------+
SELECT try_multiply(interval 2 year, 3);
+----------------------------------+
|try_multiply(INTERVAL '2' YEAR, 3)|
+----------------------------------+
|              INTERVAL '6-0' YE...|
+----------------------------------+
-- try_subtract
SELECT try_subtract(2, 1);
+------------------+
|try_subtract(2, 1)|
+------------------+
|                 1|
+------------------+
SELECT try_subtract(-2147483648, 1);
+----------------------------+
|try_subtract(-2147483648, 1)|
+----------------------------+
|                        NULL|
+----------------------------+
SELECT try_subtract(date'2021-01-02', 1);
+----------------------------------+
|try_subtract(DATE '2021-01-02', 1)|
+----------------------------------+
|                        2021-01-01|
+----------------------------------+
SELECT try_subtract(date'2021-01-01', interval 1 year);
+--------------------------------------------------+
|try_subtract(DATE '2021-01-01', INTERVAL '1' YEAR)|
+--------------------------------------------------+
|                                        2020-01-01|
+--------------------------------------------------+
SELECT try_subtract(timestamp'2021-01-02 00:00:00', interval 1 day);
+---------------------------------------------------------------+
|try_subtract(TIMESTAMP '2021-01-02 00:00:00', INTERVAL '1' DAY)|
+---------------------------------------------------------------+
|                                            2021-01-01 00:00:00|
+---------------------------------------------------------------+
SELECT try_subtract(interval 2 year, interval 1 year);
+--------------------------------------------------+
|try_subtract(INTERVAL '2' YEAR, INTERVAL '1' YEAR)|
+--------------------------------------------------+
|                                 INTERVAL '1' YEAR|
+--------------------------------------------------+
-- unhex
SELECT decode(unhex('53514C'), 'UTF-8');
+----------------------------+
|decode(unhex(53514C), UTF-8)|
+----------------------------+
|                         SQL|
+----------------------------+
-- width_bucket
SELECT width_bucket(5.3, 0.2, 10.6, 5);
+-------------------------------+
|width_bucket(5.3, 0.2, 10.6, 5)|
+-------------------------------+
|                              3|
+-------------------------------+
SELECT width_bucket(-2.1, 1.3, 3.4, 3);
+-------------------------------+
|width_bucket(-2.1, 1.3, 3.4, 3)|
+-------------------------------+
|                              0|
+-------------------------------+
SELECT width_bucket(8.1, 0.0, 5.7, 4);
+------------------------------+
|width_bucket(8.1, 0.0, 5.7, 4)|
+------------------------------+
|                             5|
+------------------------------+
SELECT width_bucket(-0.9, 5.2, 0.5, 2);
+-------------------------------+
|width_bucket(-0.9, 5.2, 0.5, 2)|
+-------------------------------+
|                              3|
+-------------------------------+
SELECT width_bucket(INTERVAL '0' YEAR, INTERVAL '0' YEAR, INTERVAL '10' YEAR, 10);
+--------------------------------------------------------------------------+
|width_bucket(INTERVAL '0' YEAR, INTERVAL '0' YEAR, INTERVAL '10' YEAR, 10)|
+--------------------------------------------------------------------------+
|                                                                         1|
+--------------------------------------------------------------------------+
SELECT width_bucket(INTERVAL '1' YEAR, INTERVAL '0' YEAR, INTERVAL '10' YEAR, 10);
+--------------------------------------------------------------------------+
|width_bucket(INTERVAL '1' YEAR, INTERVAL '0' YEAR, INTERVAL '10' YEAR, 10)|
+--------------------------------------------------------------------------+
|                                                                         2|
+--------------------------------------------------------------------------+
SELECT width_bucket(INTERVAL '0' DAY, INTERVAL '0' DAY, INTERVAL '10' DAY, 10);
+-----------------------------------------------------------------------+
|width_bucket(INTERVAL '0' DAY, INTERVAL '0' DAY, INTERVAL '10' DAY, 10)|
+-----------------------------------------------------------------------+
|                                                                      1|
+-----------------------------------------------------------------------+
SELECT width_bucket(INTERVAL '1' DAY, INTERVAL '0' DAY, INTERVAL '10' DAY, 10);
+-----------------------------------------------------------------------+
|width_bucket(INTERVAL '1' DAY, INTERVAL '0' DAY, INTERVAL '10' DAY, 10)|
+-----------------------------------------------------------------------+
|                                                                      2|
+-----------------------------------------------------------------------+
```
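
上表中 `pmod` 与 `width_bucket` 的取值规则可以用如下 Python 草图来说明。这只是按上述示例结果推演的示意实现，并非 OpenSearch 或 Spark 的实际代码；函数名与参数沿用了 SQL 版本。

```python
import math

def pmod(a, b):
    # 返回 a mod b 的正余数，对应上面的 pmod() 示例。
    return ((a % b) + b) % b

def width_bucket(value, min_value, max_value, num_bucket):
    # 等宽直方图的存储桶编号：0 与 num_bucket + 1 分别是
    # 下溢/上溢桶；边界可以升序也可以降序给出。
    if min_value < max_value:
        if value < min_value:
            return 0
        if value >= max_value:
            return num_bucket + 1
        ratio = (value - min_value) / (max_value - min_value)
    else:
        if value > min_value:
            return 0
        if value <= max_value:
            return num_bucket + 1
        ratio = (min_value - value) / (min_value - max_value)
    return int(math.floor(ratio * num_bucket)) + 1
```

例如 `width_bucket(5.3, 0.2, 10.6, 5)` 与上面的 SQL 示例一样返回 3。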

#### 生成器函数
<a name="supported-sql-generator"></a>

**注意**  
要查看哪些 AWS 数据源集成支持这些 SQL 函数，请参阅[支持 OpenSearch 的 SQL 命令和函数](#supported-directquery-sql)。



| 函数 | 说明 | 
| --- | --- | 
| explode(expr) | 将数组 `expr` 的元素拆分为多行，或将映射 `expr` 的元素拆分为多行多列。除非另有说明，否则数组元素默认使用列名 `col`，而映射元素默认使用 `key` 和 `value`。 | 
| explode_outer(expr) | 将数组 `expr` 的元素拆分为多行，或将映射 `expr` 的元素拆分为多行多列。除非另有说明，否则数组元素默认使用列名 `col`，而映射元素默认使用 `key` 和 `value`。 | 
| inline(expr) | 将结构数组展开为表。除非另有说明，否则默认使用列名 col1、col2 等。 | 
| inline_outer(expr) | 将结构数组展开为表。除非另有说明，否则默认使用列名 col1、col2 等。 | 
| posexplode(expr) | 将数组 `expr` 的元素按位置拆分为多行，或将映射 `expr` 的元素按位置拆分为多行多列。除非另有说明，否则使用列名 `pos` 表示位置，`col` 表示数组元素，`key` 和 `value` 表示映射元素。 | 
| posexplode_outer(expr) | 将数组 `expr` 的元素按位置拆分为多行，或将映射 `expr` 的元素按位置拆分为多行多列。除非另有说明，否则使用列名 `pos` 表示位置，`col` 表示数组元素，`key` 和 `value` 表示映射元素。 | 
| stack(n, expr1, ..., exprk) | 将 `expr1`、...、`exprk` 拆分为 `n` 行。除非另有说明，否则默认使用列名 col0、col1 等。 | 

**示例**

```
-- explode
SELECT explode(array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

SELECT explode(collection => array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

SELECT * FROM explode(collection => array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

-- explode_outer
SELECT explode_outer(array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

SELECT explode_outer(collection => array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

SELECT * FROM explode_outer(collection => array(10, 20));
+---+
|col|
+---+
| 10|
| 20|
+---+

-- inline
SELECT inline(array(struct(1, 'a'), struct(2, 'b')));
+----+----+
|col1|col2|
+----+----+
|   1|   a|
|   2|   b|
+----+----+

-- inline_outer
SELECT inline_outer(array(struct(1, 'a'), struct(2, 'b')));
+----+----+
|col1|col2|
+----+----+
|   1|   a|
|   2|   b|
+----+----+

-- posexplode
SELECT posexplode(array(10,20));
+---+---+
|pos|col|
+---+---+
|  0| 10|
|  1| 20|
+---+---+

SELECT * FROM posexplode(array(10,20));
+---+---+
|pos|col|
+---+---+
|  0| 10|
|  1| 20|
+---+---+

-- posexplode_outer
SELECT posexplode_outer(array(10,20));
+---+---+
|pos|col|
+---+---+
|  0| 10|
|  1| 20|
+---+---+

SELECT * FROM posexplode_outer(array(10,20));
+---+---+
|pos|col|
+---+---+
|  0| 10|
|  1| 20|
+---+---+

-- stack
SELECT stack(2, 1, 2, 3);
+----+----+
|col0|col1|
+----+----+
|   1|   2|
|   3|NULL|
+----+----+
```
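
生成器函数的行展开规则可以用如下 Python 草图来说明。这是一个示意实现，仅用于演示 `explode`、`posexplode` 和 `stack` 的输出形状，并非引擎实际代码。

```python
def explode(arr):
    # 每个数组元素生成一行 (col)。
    return [(v,) for v in arr]

def posexplode(arr):
    # 每个数组元素生成一行 (pos, col)，附带其位置。
    return list(enumerate(arr))

def stack(n, *exprs):
    # 将 k 个表达式按行优先填入 n 行，最后一行不足时用 None 补齐。
    k = len(exprs)
    cols = -(-k // n)  # 即 ceil(k / n)
    rows = []
    for i in range(n):
        row = exprs[i * cols:(i + 1) * cols]
        rows.append(tuple(row) + (None,) * (cols - len(row)))
    return rows
```

例如 `stack(2, 1, 2, 3)` 与上面的 SQL 示例一样得到两行：`(1, 2)` 和 `(3, None)`。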

#### SELECT 子句
<a name="supported-sql-select"></a>

**注意**  
要查看哪些 AWS 数据源集成支持此 SQL 命令，请参阅[支持 OpenSearch 的 SQL 命令和函数](#supported-directquery-sql)。

OpenSearch SQL 支持用于从一个或多个表中检索结果集的`SELECT`语句。以下部分描述查询的整体语法及其不同构造。

**语法** 

```
select_statement 
[ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select_statement, ... ]
[ ORDER BY 
    { expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] 
    [ , ... ] 
    } 
]
[ SORT BY 
    { expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] 
    [ , ... ] 
    } 
]
[ WINDOW { named_window [ , WINDOW named_window, ... ] } ]
[ LIMIT { ALL | expression } ]
```

当 `select_statement` 定义为：

```
SELECT [ ALL | DISTINCT ] { [ [ named_expression ] [ , ... ] ] }
FROM { from_item [ , ... ] }
[ PIVOT clause ]
[ UNPIVOT clause ]
[ LATERAL VIEW clause ] [ ... ]
[ WHERE boolean_expression ]
[ GROUP BY expression [ , ... ] ]
[ HAVING boolean_expression ]
```

 **参数** 
+ **ALL** 

  从关系中选择所有匹配的行，默认处于启用状态。
+ **DISTINCT** 

  移除结果中的重复项后，从关系中选择所有匹配的行。
+ **named_expression**

  具有指定名称的表达式。通常表示列表达式。

  语法：`expression [[AS] alias]`
+ **from_item**

  表关系

  关联关系

  转置关系

  逆透视关系

  表值函数

  内联表

  `[ LATERAL ] ( Subquery )`
+ **PIVOT** 

  `PIVOT` 子句用于数据透视。您可以根据特定列值获取聚合值。
+ **UNPIVOT** 

  `UNPIVOT` 子句将列转换为行。与 `PIVOT` 相反，但值的聚合除外。
+ **LATERAL VIEW**

  `LATERAL VIEW` 子句与生成器函数（例如 `EXPLODE`）结合使用，后者将生成包含一行或多行的虚拟表。

  `LATERAL VIEW` 会将这些行应用于每个原始输出行。
+ **WHERE** 

  根据提供的谓词筛选 `FROM` 子句的结果。
+ **GROUP BY**

  指定用于对行进行分组的表达式。

  这与聚合函数（`MIN`、`MAX`、`COUNT`、`SUM`、`AVG` 等）结合使用，根据分组表达式对行进行分组，并在每个组内计算聚合值。

  将 `FILTER` 子句附加到聚合函数时，仅将匹配的行传递给该函数。
+ **HAVING** 

  指定用于筛选由 `GROUP BY` 生成的行所依据的谓词。

  `HAVING` 子句用于在分组操作完成后筛选行。

  如果未指定不含 `GROUP BY` 的 `HAVING`，则表示不含分组表达式的 `GROUP BY`（全局聚合）。
+ **ORDER BY**

  指定查询完整结果集的行排序顺序。

  与 `SORT BY` 不同，此排序作用于完整的结果集，而不是在各分区内分别排序。

  此参数与 `SORT BY` 和 `DISTRIBUTE BY` 互斥，不可同时指定。
+ **SORT BY**

  指定每个分区中行的排序顺序。

  此参数与 `ORDER BY` 互斥，不可同时指定。
+ **LIMIT** 

  指定语句或子查询可返回的最大行数。

  此子句主要与 `ORDER BY` 结合使用，以产生确定性结果。
+ **boolean_expression**

  指定任何评估结果为布尔值的表达式。

  两个或多个表达式可使用逻辑运算符（`AND`、`OR`）进行组合。
+ **expression** 

  指定一个或多个值、运算符和计算结果为值的 SQL 函数的组合。
+ **named_window**

  指定一个或多个源窗口规格的别名。

  可以在查询的窗口定义中引用源窗口规格。
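
上文 `LATERAL VIEW` 与生成器函数（如 `EXPLODE`）结合的语义，可以用如下 Python 草图来说明：对每个原始行，按其数组列的每个元素复制一行。这是一个示意实现，`lateral_view_explode` 及示例中的列名均为假设，并非实际 API。

```python
def lateral_view_explode(rows, array_col):
    # 对每个原始行，按数组列的每个元素各生成一行，
    # 模拟 LATERAL VIEW EXPLODE；空数组的行不产生输出
    # （OUTER 变体会以 NULL 保留这些行）。
    out = []
    for row in rows:
        for v in row[array_col]:
            expanded = dict(row)
            expanded["col"] = v
            out.append(expanded)
    return out

rows = [{"id": 1, "vals": [10, 20]}, {"id": 2, "vals": []}]
# id=2 的空数组不产生任何行。
print(lateral_view_explode(rows, "vals"))
```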

#### WHERE 子句
<a name="supported-sql-where"></a>

**注意**  
要查看哪些 AWS 数据源集成支持此 SQL 命令，请参阅[支持 OpenSearch 的 SQL 命令和函数](#supported-directquery-sql)。

`WHERE` 子句用于根据指定条件限制查询或子查询中 `FROM` 子句的结果。

**语法** 

```
WHERE boolean_expression
```

**参数**
+ **boolean_expression** 

  指定任何评估结果为布尔值的表达式。

  两个或多个表达式可使用逻辑运算符（`AND`、`OR`）进行组合。

**示例**

```
CREATE TABLE person (id INT, name STRING, age INT);
INSERT INTO person VALUES
(100, 'John', 30),
(200, 'Mary', NULL),
(300, 'Mike', 80),
(400, 'Dan',  50);

-- Comparison operator in `WHERE` clause.
SELECT * FROM person WHERE id > 200 ORDER BY id;
+---+----+---+
| id|name|age|
+---+----+---+
|300|Mike| 80|
|400| Dan| 50|
+---+----+---+

-- Comparison and logical operators in `WHERE` clause.
SELECT * FROM person WHERE id = 200 OR id = 300 ORDER BY id;
+---+----+----+
| id|name| age|
+---+----+----+
|200|Mary|null|
|300|Mike|  80|
+---+----+----+

-- IS NULL expression in `WHERE` clause.
SELECT * FROM person WHERE id > 300 OR age IS NULL ORDER BY id;
+---+----+----+
| id|name| age|
+---+----+----+
|200|Mary|null|
|400| Dan|  50|
+---+----+----+

-- Function expression in `WHERE` clause.
SELECT * FROM person WHERE length(name) > 3 ORDER BY id;
+---+----+----+
| id|name| age|
+---+----+----+
|100|John|  30|
|200|Mary|null|
|300|Mike|  80|
+---+----+----+

-- `BETWEEN` expression in `WHERE` clause.
SELECT * FROM person WHERE id BETWEEN 200 AND 300 ORDER BY id;
+---+----+----+
| id|name| age|
+---+----+----+
|200|Mary|null|
|300|Mike|  80|
+---+----+----+

-- Scalar Subquery in `WHERE` clause.
SELECT * FROM person WHERE age > (SELECT avg(age) FROM person);
+---+----+---+
| id|name|age|
+---+----+---+
|300|Mike| 80|
+---+----+---+
 
-- Correlated Subquery in `WHERE` clause.
SELECT id FROM person
WHERE exists (SELECT id FROM person where id = 200);
+---+----+----+
|id |name|age |
+---+----+----+
|200|Mary|null|
+---+----+----+
```
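
上述标量子查询示例的语义也可以在本地用 Python 自带的 sqlite3 模块复现。注意：这里使用的是 SQLite，而不是 OpenSearch SQL，仅用于演示 `WHERE` 子句的行为。

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INT, name TEXT, age INT)")
conn.executemany(
    "INSERT INTO person VALUES (?, ?, ?)",
    [(100, "John", 30), (200, "Mary", None), (300, "Mike", 80), (400, "Dan", 50)],
)

# avg(age) 会忽略 NULL，因此阈值为 (30 + 80 + 50) / 3 ≈ 53.3。
rows = conn.execute(
    "SELECT * FROM person WHERE age > (SELECT avg(age) FROM person)"
).fetchall()
print(rows)  # [(300, 'Mike', 80)]
```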

#### GROUP BY 子句
<a name="supported-sql-group-by"></a>

**注意**  
要查看哪些 AWS 数据源集成支持此 SQL 命令，请参阅[支持 OpenSearch 的 SQL 命令和函数](#supported-directquery-sql)。

`GROUP BY` 子句用于根据一组指定的分组表达式对行进行分组，并基于一个或多个指定的聚合函数对分组后的行进行聚合计算。

系统还支持通过 `GROUPING SETS`、`CUBE`、`ROLLUP` 子句对同一输入记录集执行多重聚合操作。分组表达式和高级聚合可在 `GROUP BY` 子句中混合使用，也可以嵌套在 `GROUPING SETS` 子句中。有关更多详细信息，请参阅下文的“混合/嵌套分组分析”部分。

将 `FILTER` 子句附加到聚合函数时，仅将匹配的行传递给该函数。

**语法** 

```
GROUP BY group_expression [ , group_expression [ , ... ] ] [ WITH { ROLLUP | CUBE } ]
GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } (grouping_set [ , ...]) } [ , ... ]
```

聚合函数定义为：

```
aggregate_name ( [ DISTINCT ] expression [ , ... ] ) [ FILTER ( WHERE boolean_expression ) ]
```

**参数**
+ **group_expression**

  指定用于将行分组的标准。根据分组表达式的结果值对行进行分组。

  分组表达式可以是列名（如 `GROUP BY a`）、列位置（如 `GROUP BY 0`）或表达式（如 `GROUP BY a + b`）。
+ **grouping_set**

  分组集由零个或多个用逗号分隔的括号表达式指定。当分组集仅包含一个元素时，可以省略括号。

  例如，`GROUPING SETS ((a), (b))` 与 `GROUPING SETS (a, b)` 相同。

  语法：`{ ( [ expression [ , ... ] ] ) | expression }`
+ **GROUPING SETS**

  将行按 `GROUPING SETS` 之后指定的每个分组集进行分组。

  例如，`GROUP BY GROUPING SETS ((warehouse), (product))` 在语义上等同于 `GROUP BY warehouse` 和 `GROUP BY product` 的结果并集。此子句是 `UNION ALL` 的简写形式，其中 `UNION ALL` 运算符的每个分支对 `GROUPING SETS` 子句中指定的每个分组集进行聚合。

  同样，`GROUP BY GROUPING SETS ((warehouse, product), (product), ())` 在语义上等同于 `GROUP BY warehouse, product`、`GROUP BY product` 和全局聚合的结果并集。
+ **ROLLUP** 

  在单个语句中指定多级聚合操作。此子句用于计算基于多个分组集的聚合。`ROLLUP` 是 `GROUPING SETS` 的简写形式。

  例如，`GROUP BY warehouse, product WITH ROLLUP` 或 `GROUP BY ROLLUP(warehouse, product)` 等同于 `GROUP BY GROUPING SETS((warehouse, product), (warehouse), ())`。

  `GROUP BY ROLLUP(warehouse, product, (warehouse, location))` 等同于 `GROUP BY GROUPING SETS((warehouse, product, location), (warehouse, product), (warehouse), ())`。

  `ROLLUP` 规范的 N 个元素会生成 N+1 个分组集。
+ **CUBE** 

  CUBE 子句用于根据 GROUP BY 子句中指定的分组列组合执行聚合操作。CUBE 是 GROUPING SETS 的简写形式。

  例如，`GROUP BY warehouse, product WITH CUBE` 或 `GROUP BY CUBE(warehouse, product)` 等同于 `GROUP BY GROUPING SETS((warehouse, product), (warehouse), (product), ())`。

  `GROUP BY CUBE(warehouse, product, (warehouse, location))` 等同于 `GROUP BY GROUPING SETS((warehouse, product, location), (warehouse, product), (warehouse, location), (product, warehouse, location), (warehouse), (product), (warehouse, product), ())`。`CUBE` 规范的 N 个元素会生成 2^N 个 `GROUPING SETS`。
+ **混合/嵌套分组分析**

  `GROUP BY` 子句可以包含多个 `group_expression` 和多个 `CUBE|ROLLUP|GROUPING SETS`。`GROUPING SETS` 也可以包含嵌套的 `CUBE|ROLLUP|GROUPING SETS` 子句，例如 `GROUPING SETS(ROLLUP(warehouse, location), CUBE(warehouse, location))`、`GROUPING SETS(warehouse, GROUPING SETS(location, GROUPING SETS(ROLLUP(warehouse, location), CUBE(warehouse, location))))`。

  `CUBE|ROLLUP` 只是 `GROUPING SETS` 的语法糖。关于如何将 `CUBE|ROLLUP` 转换为 `GROUPING SETS`，请参阅上述章节。在此语境下，`group_expression` 可视为单一组 `GROUPING SETS`。

  对于 `GROUP BY` 子句中的多个 `GROUPING SETS`，我们通过对原始 `GROUPING SETS` 进行交叉运算生成单个 `GROUPING SETS`。对于 `GROUPING SETS` 子句中的嵌套 `GROUPING SETS`，我们只需提取其分组集并将其去除。

  例如，`GROUP BY warehouse, GROUPING SETS((product), ()), GROUPING SETS((location, size), (location), (size), ())` 和 `GROUP BY warehouse, ROLLUP(product), CUBE(location, size)` 均等同于 `GROUP BY GROUPING SETS( (warehouse, product, location, size), (warehouse, product, location), (warehouse, product, size), (warehouse, product), (warehouse, location, size), (warehouse, location), (warehouse, size), (warehouse))`。

  `GROUP BY GROUPING SETS(GROUPING SETS(warehouse), GROUPING SETS((warehouse, product)))` 等同于 `GROUP BY GROUPING SETS((warehouse), (warehouse, product))`。
+ **aggregate_name**

  指定聚合函数名称（`MIN`、`MAX`、`COUNT`、`SUM`、`AVG` 等）。
+ **DISTINCT** 

  在将输入行传递给聚合函数之前，移除其中的重复项。
+ **FILTER** 

  筛选输入行：仅当 `WHERE` 子句中的 `boolean_expression` 计算结果为 true 时，该行才会传递给聚合函数；其余行将被丢弃。
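
`ROLLUP` 与 `CUBE` 到 `GROUPING SETS` 的展开规则可以用如下 Python 草图来说明。这是一个示意实现，函数名为假设，仅用于演示分组集的生成方式。

```python
from itertools import combinations

def rollup(cols):
    # ROLLUP(c1, ..., cN) 生成 N + 1 个分组集：所有前缀加上空集 ()。
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube(cols):
    # CUBE(c1, ..., cN) 生成 2^N 个分组集：列的所有子集。
    return [s for r in range(len(cols), -1, -1) for s in combinations(cols, r)]
```

例如 `rollup(["warehouse", "product"])` 得到 `[("warehouse", "product"), ("warehouse",), ()]`，与上文 `GROUP BY ROLLUP(warehouse, product)` 的展开一致。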

**示例**

```
CREATE TABLE dealer (id INT, city STRING, car_model STRING, quantity INT);
INSERT INTO dealer VALUES
(100, 'Fremont', 'Honda Civic', 10),
(100, 'Fremont', 'Honda Accord', 15),
(100, 'Fremont', 'Honda CRV', 7),
(200, 'Dublin', 'Honda Civic', 20),
(200, 'Dublin', 'Honda Accord', 10),
(200, 'Dublin', 'Honda CRV', 3),
(300, 'San Jose', 'Honda Civic', 5),
(300, 'San Jose', 'Honda Accord', 8);

-- Sum of quantity per dealership. Group by `id`.
SELECT id, sum(quantity) FROM dealer GROUP BY id ORDER BY id;
+---+-------------+
| id|sum(quantity)|
+---+-------------+
|100|           32|
|200|           33|
|300|           13|
+---+-------------+

-- Use column position in GROUP by clause.
SELECT id, sum(quantity) FROM dealer GROUP BY 1 ORDER BY 1;
+---+-------------+
| id|sum(quantity)|
+---+-------------+
|100|           32|
|200|           33|
|300|           13|
+---+-------------+

-- Multiple aggregations.
-- 1. Sum of quantity per dealership.
-- 2. Max quantity per dealership.
SELECT id, sum(quantity) AS sum, max(quantity) AS max FROM dealer GROUP BY id ORDER BY id;
+---+---+---+
| id|sum|max|
+---+---+---+
|100| 32| 15|
|200| 33| 20|
|300| 13|  8|
+---+---+---+

-- Count the number of distinct dealer cities per car_model.
SELECT car_model, count(DISTINCT city) AS count FROM dealer GROUP BY car_model;
+------------+-----+
|   car_model|count|
+------------+-----+
| Honda Civic|    3|
|   Honda CRV|    2|
|Honda Accord|    3|
+------------+-----+

-- Sum of only 'Honda Civic' and 'Honda CRV' quantities per dealership.
SELECT id, sum(quantity) FILTER (
WHERE car_model IN ('Honda Civic', 'Honda CRV')
) AS `sum(quantity)` FROM dealer
GROUP BY id ORDER BY id;
+---+-------------+
| id|sum(quantity)|
+---+-------------+
|100|           17|
|200|           23|
|300|            5|
+---+-------------+

-- Aggregations using multiple sets of grouping columns in a single statement.
-- Following performs aggregations based on four sets of grouping columns.
-- 1. city, car_model
-- 2. city
-- 3. car_model
-- 4. Empty grouping set. Returns quantities for all city and car models.
SELECT city, car_model, sum(quantity) AS sum FROM dealer
GROUP BY GROUPING SETS ((city, car_model), (city), (car_model), ())
ORDER BY city;
+---------+------------+---+
|     city|   car_model|sum|
+---------+------------+---+
|     null|        null| 78|
|     null| HondaAccord| 33|
|     null|    HondaCRV| 10|
|     null|  HondaCivic| 35|
|   Dublin|        null| 33|
|   Dublin| HondaAccord| 10|
|   Dublin|    HondaCRV|  3|
|   Dublin|  HondaCivic| 20|
|  Fremont|        null| 32|
|  Fremont| HondaAccord| 15|
|  Fremont|    HondaCRV|  7|
|  Fremont|  HondaCivic| 10|
| San Jose|        null| 13|
| San Jose| HondaAccord|  8|
| San Jose|  HondaCivic|  5|
+---------+------------+---+

-- Group by processing with `ROLLUP` clause.
-- Equivalent GROUP BY GROUPING SETS ((city, car_model), (city), ())
SELECT city, car_model, sum(quantity) AS sum FROM dealer
GROUP BY city, car_model WITH ROLLUP
ORDER BY city, car_model;
+---------+------------+---+
|     city|   car_model|sum|
+---------+------------+---+
|     null|        null| 78|
|   Dublin|        null| 33|
|   Dublin| HondaAccord| 10|
|   Dublin|    HondaCRV|  3|
|   Dublin|  HondaCivic| 20|
|  Fremont|        null| 32|
|  Fremont| HondaAccord| 15|
|  Fremont|    HondaCRV|  7|
|  Fremont|  HondaCivic| 10|
| San Jose|        null| 13|
| San Jose| HondaAccord|  8|
| San Jose|  HondaCivic|  5|
+---------+------------+---+

-- Group by processing with `CUBE` clause.
-- Equivalent GROUP BY GROUPING SETS ((city, car_model), (city), (car_model), ())
SELECT city, car_model, sum(quantity) AS sum FROM dealer
GROUP BY city, car_model WITH CUBE
ORDER BY city, car_model;
+---------+------------+---+
|     city|   car_model|sum|
+---------+------------+---+
|     null|        null| 78|
|     null| HondaAccord| 33|
|     null|    HondaCRV| 10|
|     null|  HondaCivic| 35|
|   Dublin|        null| 33|
|   Dublin| HondaAccord| 10|
|   Dublin|    HondaCRV|  3|
|   Dublin|  HondaCivic| 20|
|  Fremont|        null| 32|
|  Fremont| HondaAccord| 15|
|  Fremont|    HondaCRV|  7|
|  Fremont|  HondaCivic| 10|
| San Jose|        null| 13|
| San Jose| HondaAccord|  8|
| San Jose|  HondaCivic|  5|
+---------+------------+---+

--Prepare data for ignore nulls example
CREATE TABLE person (id INT, name STRING, age INT);
INSERT INTO person VALUES
(100, 'Mary', NULL),
(200, 'John', 30),
(300, 'Mike', 80),
(400, 'Dan', 50);

--Select the first row in column age
SELECT FIRST(age) FROM person;
+--------------------+
| first(age, false)  |
+--------------------+
| NULL               |
+--------------------+

--Get the first row in column `age` ignoring nulls, the last row in column `id`, and the sum of column `id`.
SELECT FIRST(age IGNORE NULLS), LAST(id), SUM(id) FROM person;
+-------------------+------------------+----------+
| first(age, true)  | last(id, false)  | sum(id)  |
+-------------------+------------------+----------+
| 30                | 400              | 1000     |
+-------------------+------------------+----------+
```

#### HAVING clause
<a name="supported-sql-having"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `HAVING` clause filters the results produced by `GROUP BY` based on a specified condition. It is often used together with a `GROUP BY` clause.

**Syntax** 

```
HAVING boolean_expression
```

**Parameters**
+ **boolean_expression**

  Specifies any expression that evaluates to a Boolean. Two or more expressions can be combined using logical operators (`AND`, `OR`).

  **Note:** the expressions specified in the `HAVING` clause can only refer to:

  1. Constants 

  1. Expressions that appear in `GROUP BY` 

  1. Aggregate functions 

**Examples**

```
CREATE TABLE dealer (id INT, city STRING, car_model STRING, quantity INT);
INSERT INTO dealer VALUES
(100, 'Fremont', 'Honda Civic', 10),
(100, 'Fremont', 'Honda Accord', 15),
(100, 'Fremont', 'Honda CRV', 7),
(200, 'Dublin', 'Honda Civic', 20),
(200, 'Dublin', 'Honda Accord', 10),
(200, 'Dublin', 'Honda CRV', 3),
(300, 'San Jose', 'Honda Civic', 5),
(300, 'San Jose', 'Honda Accord', 8);

-- `HAVING` clause referring to column in `GROUP BY`.
SELECT city, sum(quantity) AS sum FROM dealer GROUP BY city HAVING city = 'Fremont';
+-------+---+
|   city|sum|
+-------+---+
|Fremont| 32|
+-------+---+

-- `HAVING` clause referring to aggregate function.
SELECT city, sum(quantity) AS sum FROM dealer GROUP BY city HAVING sum(quantity) > 15;
+-------+---+
|   city|sum|
+-------+---+
| Dublin| 33|
|Fremont| 32|
+-------+---+

-- `HAVING` clause referring to aggregate function by its alias.
SELECT city, sum(quantity) AS sum FROM dealer GROUP BY city HAVING sum > 15;
+-------+---+
|   city|sum|
+-------+---+
| Dublin| 33|
|Fremont| 32|
+-------+---+

-- `HAVING` clause referring to a different aggregate function than what is present in
-- `SELECT` list.
SELECT city, sum(quantity) AS sum FROM dealer GROUP BY city HAVING max(quantity) > 15;
+------+---+
|  city|sum|
+------+---+
|Dublin| 33|
+------+---+

-- `HAVING` clause referring to constant expression.
SELECT city, sum(quantity) AS sum FROM dealer GROUP BY city HAVING 1 > 0 ORDER BY city;
+--------+---+
|    city|sum|
+--------+---+
|  Dublin| 33|
| Fremont| 32|
|San Jose| 13|
+--------+---+

-- `HAVING` clause without a `GROUP BY` clause.
SELECT sum(quantity) AS sum FROM dealer HAVING sum(quantity) > 10;
+---+
|sum|
+---+
| 78|
+---+
```

#### ORDER BY clause
<a name="supported-sql-order-by"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `ORDER BY` clause returns the result rows sorted in a user-specified order. Unlike the SORT BY clause, this clause guarantees a total order in the output.

**Syntax** 

```
ORDER BY { expression [ sort_direction | nulls_sort_order ] [ , ... ] }
```

**Parameters**
+ **ORDER BY**

  Specifies a comma-separated list of expressions, along with the optional parameters `sort_direction` and `nulls_sort_order`, used to sort the rows.
+ **sort_direction**

  (Optional) Specifies whether to sort the rows in ascending or descending order.

  The valid values for the sort direction are `ASC` for ascending and `DESC` for descending.

  If the sort direction is not explicitly specified, rows are sorted in ascending order by default.

  Syntax: `[ ASC | DESC ]`
+ **nulls_sort_order**

  (Optional) Specifies whether `NULL` values are returned before or after non-NULL values.

  If `nulls_sort_order` is not specified, NULLs sort first if the sort order is `ASC` and last if the sort order is `DESC`.

  1. If `NULLS FIRST` is specified, NULL values are returned first regardless of the sort order.

  2. If `NULLS LAST` is specified, NULL values are returned last regardless of the sort order.

  Syntax: `[ NULLS { FIRST | LAST } ]`
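The null-ordering rules above can be sketched in Python. This is a minimal illustration only, not the OpenSearch implementation; `NULL` is modeled as `None`, and rows are `(name, age)` tuples:

```python
def order_by_age(rows, desc=False, nulls_first=None):
    """Sort (name, age) rows, emulating SQL ORDER BY null ordering on age."""
    if nulls_first is None:
        # Default rule: NULLS FIRST for ASC, NULLS LAST for DESC.
        nulls_first = not desc
    nulls = [r for r in rows if r[1] is None]
    non_nulls = sorted((r for r in rows if r[1] is not None),
                       key=lambda r: r[1], reverse=desc)
    # Nulls go as a block before or after all non-null values.
    return nulls + non_nulls if nulls_first else non_nulls + nulls

rows = [("John", 30), ("Mary", None), ("Mike", 80), ("Jerry", None), ("Dan", 50)]
print(order_by_age(rows))             # ascending, NULLs first (the default)
print(order_by_age(rows, desc=True))  # descending, NULLs last (the default)
```

Note that `NULLS FIRST`/`NULLS LAST` override the default placement regardless of the sort direction, which is why the null block is handled separately from the value sort.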

**Examples**

```
CREATE TABLE person (id INT, name STRING, age INT);
INSERT INTO person VALUES
(100, 'John', 30),
(200, 'Mary', NULL),
(300, 'Mike', 80),
(400, 'Jerry', NULL),
(500, 'Dan',  50);

-- Sort rows by age. By default rows are sorted in ascending manner with NULLS FIRST.
SELECT name, age FROM person ORDER BY age;
+-----+----+
| name| age|
+-----+----+
|Jerry|null|
| Mary|null|
| John|  30|
|  Dan|  50|
| Mike|  80|
+-----+----+

-- Sort rows in ascending manner keeping null values to be last.
SELECT name, age FROM person ORDER BY age NULLS LAST;
+-----+----+
| name| age|
+-----+----+
| John|  30|
|  Dan|  50|
| Mike|  80|
| Mary|null|
|Jerry|null|
+-----+----+

-- Sort rows by age in descending manner, which defaults to NULLS LAST.
SELECT name, age FROM person ORDER BY age DESC;
+-----+----+
| name| age|
+-----+----+
| Mike|  80|
|  Dan|  50|
| John|  30|
|Jerry|null|
| Mary|null|
+-----+----+

-- Sort rows in descending manner keeping null values to be first.
SELECT name, age FROM person ORDER BY age DESC NULLS FIRST;
+-----+----+
| name| age|
+-----+----+
|Jerry|null|
| Mary|null|
| Mike|  80|
|  Dan|  50|
| John|  30|
+-----+----+

-- Sort rows based on more than one column with each column having different
-- sort direction.
SELECT * FROM person ORDER BY name ASC, age DESC;
+---+-----+----+
| id| name| age|
+---+-----+----+
|500|  Dan|  50|
|400|Jerry|null|
|100| John|  30|
|200| Mary|null|
|300| Mike|  80|
+---+-----+----+
```

#### JOIN clause
<a name="supported-sql-join"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

A SQL join is used to combine rows from two relations based on join criteria. The following section describes the overall join syntax, the different types of joins, and examples.

**Syntax** 

```
relation [ join_type ] JOIN relation [ join_criteria ]
```

**Parameters**
+ **relation**

  Specifies the relation to be joined.
+ **join_type**

  Specifies the join type.

  Syntax: `INNER | CROSS | LEFT OUTER`
+ **join_criteria**

  Specifies how the rows from one relation are combined with the rows of the other relation.

  Syntax: `ON boolean_expression | USING ( column_name [ , ... ] )`
+ **boolean_expression**

  Specifies an expression with a return type of Boolean.

**Join types**
+ **Inner join**

  The inner join must be explicitly specified. It selects rows that have matching values in both relations.

  Syntax: `relation INNER JOIN relation [ join_criteria ]`
+ **Left join**

  A left join returns all values from the left relation and the matched values from the right relation, appending NULL where there is no match. It is also referred to as a left outer join.

  Syntax: `relation LEFT OUTER JOIN relation [ join_criteria ]`
+ **Cross join**

  A cross join returns the Cartesian product of the two relations.

  Syntax: `relation CROSS JOIN relation [ join_criteria ]`

**Examples**

```
-- Use employee and department tables to demonstrate different type of joins.
SELECT * FROM employee;
+---+-----+------+
| id| name|deptno|
+---+-----+------+
|105|Chloe|     5|
|103| Paul|     3|
|101| John|     1|
|102| Lisa|     2|
|104| Evan|     4|
|106|  Amy|     6|
+---+-----+------+
SELECT * FROM department;
+------+-----------+
|deptno|   deptname|
+------+-----------+
|     3|Engineering|
|     2|      Sales|
|     1|  Marketing|
+------+-----------+

-- Use employee and department tables to demonstrate inner join.
SELECT id, name, employee.deptno, deptname
FROM employee INNER JOIN department ON employee.deptno = department.deptno;
+---+-----+------+-----------+
| id| name|deptno|   deptname|
+---+-----+------+-----------+
|103| Paul|     3|Engineering|
|101| John|     1|  Marketing|
|102| Lisa|     2|      Sales|
+---+-----+------+-----------+

-- Use employee and department tables to demonstrate left join.
SELECT id, name, employee.deptno, deptname
FROM employee LEFT JOIN department ON employee.deptno = department.deptno;
+---+-----+------+-----------+
| id| name|deptno|   deptname|
+---+-----+------+-----------+
|105|Chloe|     5|       NULL|
|103| Paul|     3|Engineering|
|101| John|     1|  Marketing|
|102| Lisa|     2|      Sales|
|104| Evan|     4|       NULL|
|106|  Amy|     6|       NULL|
+---+-----+------+-----------+

-- Use employee and department tables to demonstrate cross join.
SELECT id, name, employee.deptno, deptname FROM employee CROSS JOIN department;
+---+-----+------+-----------+
| id| name|deptno|   deptname|
+---+-----+------+-----------+
|105|Chloe|     5|Engineering|
|105|Chloe|     5|  Marketing|
|105|Chloe|     5|      Sales|
|103| Paul|     3|Engineering|
|103| Paul|     3|  Marketing|
|103| Paul|     3|      Sales|
|101| John|     1|Engineering|
|101| John|     1|  Marketing|
|101| John|     1|      Sales|
|102| Lisa|     2|Engineering|
|102| Lisa|     2|  Marketing|
|102| Lisa|     2|      Sales|
|104| Evan|     4|Engineering|
|104| Evan|     4|  Marketing|
|104| Evan|     4|      Sales|
|106|  Amy|     6|Engineering|
|106|  Amy|     6|  Marketing|
|106|  Amy|     6|      Sales|
+---+-----+------+-----------+
```

#### LIMIT clause
<a name="supported-sql-limit"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `LIMIT` clause constrains the number of rows returned by a `SELECT` statement. In general, this clause is used with `ORDER BY` to ensure that the results are deterministic.

**Syntax** 

```
LIMIT { ALL | integer_expression }
```

**Parameters**
+ **ALL**

  If specified, the query returns all the rows. In other words, no limit is applied when this option is specified.
+ **integer_expression**

  Specifies a foldable expression that returns an integer.

**Examples**

```
CREATE TABLE person (name STRING, age INT);
INSERT INTO person VALUES
('Jane Doe', 25),
('Pat C', 18),
('Nikki W', 16),
('John D', 25),
('Juan L', 18),
('Jorge S', 16);

-- Select the first two rows.
SELECT name, age FROM person ORDER BY name LIMIT 2;
+--------+---+
|    name|age|
+--------+---+
|Jane Doe| 25|
|  John D| 25|
+--------+---+

-- Specifying ALL option on LIMIT returns all the rows.
SELECT name, age FROM person ORDER BY name LIMIT ALL;
+--------+---+
|    name|age|
+--------+---+
|Jane Doe| 25|
|  John D| 25|
| Jorge S| 16|
|  Juan L| 18|
| Nikki W| 16|
|   Pat C| 18|
+--------+---+

-- A function expression as an input to LIMIT.
SELECT name, age FROM person ORDER BY name LIMIT length('OPENSEARCH');
+--------+---+
|    name|age|
+--------+---+
|Jane Doe| 25|
|  John D| 25|
| Jorge S| 16|
|  Juan L| 18|
| Nikki W| 16|
|   Pat C| 18|
+--------+---+
```

#### CASE clause
<a name="supported-sql-case"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `CASE` clause uses a rule to return a specific result based on a specified condition, similar to if/else statements in other programming languages.

**Syntax** 

```
CASE [ expression ] { WHEN boolean_expression THEN then_expression } [ ... ]
[ ELSE else_expression ]
END
```

**Parameters**
+ **boolean_expression**

  Specifies any expression that evaluates to a Boolean.

  Two or more expressions can be combined using logical operators (`AND`, `OR`).
+ **then_expression**

  Specifies the then expression based on the boolean_expression condition.

  `then_expression` and `else_expression` should be of the same type or coercible to a common type.
+ **else_expression**

  Specifies the default expression.

  `then_expression` and `else_expression` should be of the same type or coercible to a common type.

**Examples**

```
CREATE TABLE person (id INT, name STRING, age INT);
INSERT INTO person VALUES
(100, 'John', 30),
(200, 'Mary', NULL),
(300, 'Mike', 80),
(400, 'Dan', 50);
SELECT id, CASE WHEN id > 200 THEN 'bigger' ELSE 'small' END FROM person;
+------+--------------------------------------------------+
|  id  | CASE WHEN (id > 200) THEN bigger ELSE small END  |
+------+--------------------------------------------------+
| 100  | small                                            |
| 200  | small                                            |
| 300  | bigger                                           |
| 400  | bigger                                           |
+------+--------------------------------------------------+
SELECT id, CASE id WHEN 100 then 'bigger' WHEN  id > 300 THEN '300' ELSE 'small' END FROM person;
+------+-----------------------------------------------------------------------------------------------+
|  id  | CASE WHEN (id = 100) THEN bigger WHEN (id = CAST((id > 300) AS INT)) THEN 300 ELSE small END  |
+------+-----------------------------------------------------------------------------------------------+
| 100  | bigger                                                                                        |
| 200  | small                                                                                         |
| 300  | small                                                                                         |
| 400  | small                                                                                         |
+------+-----------------------------------------------------------------------------------------------+
```

#### Common table expression
<a name="supported-sql-cte"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

A common table expression (CTE) defines a temporary result set that a user can reference multiple times within the scope of a SQL statement. A CTE is used mainly in a `SELECT` statement.

**Syntax** 

```
WITH common_table_expression [ , ... ]
```

where `common_table_expression` is defined as:

```
expression_name [ ( column_name [ , ... ] ) ] [ AS ] ( query )
```

**Parameters** 
+ **expression_name**

  Specifies a name for the common table expression.
+ **query** 

  A `SELECT` statement.

**Examples**

```
-- CTE with multiple column aliases
WITH t(x, y) AS (SELECT 1, 2)
SELECT * FROM t WHERE x = 1 AND y = 2;
+---+---+
|  x|  y|
+---+---+
|  1|  2|
+---+---+

-- CTE in CTE definition
WITH t AS (
WITH t2 AS (SELECT 1)
SELECT * FROM t2
)
SELECT * FROM t;
+---+
|  1|
+---+
|  1|
+---+

-- CTE in subquery
SELECT max(c) FROM (
WITH t(c) AS (SELECT 1)
SELECT * FROM t
);
+------+
|max(c)|
+------+
|     1|
+------+

-- CTE in subquery expression
SELECT (
WITH t AS (SELECT 1)
SELECT * FROM t
);
+----------------+
|scalarsubquery()|
+----------------+
|               1|
+----------------+

-- CTE in CREATE VIEW statement
CREATE VIEW v AS
WITH t(a, b, c, d) AS (SELECT 1, 2, 3, 4)
SELECT * FROM t;
SELECT * FROM v;
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
|  1|  2|  3|  4|
+---+---+---+---+
```

#### EXPLAIN
<a name="supported-sql-explain"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `EXPLAIN` statement provides the logical or physical plans for an input statement. By default, this clause provides information about the physical plan only.

**Syntax** 

```
EXPLAIN [ EXTENDED | CODEGEN | COST | FORMATTED ] statement
```

**Parameters**
+ **EXTENDED** 

  Generates the parsed logical plan, the analyzed logical plan, the optimized logical plan, and the physical plan.

  The parsed logical plan is the unresolved plan extracted from the query.

  The analyzed logical plan transforms `unresolvedAttribute` and `unresolvedRelation` into fully typed objects.

  The optimized logical plan is transformed through a set of optimization rules, resulting in the physical plan.
+ **CODEGEN** 

  Generates the code for the statement, if any, and a physical plan.
+ **COST** 

  If plan node statistics are available, generates a logical plan and the statistics.
+ **FORMATTED** 

  Generates two sections: a physical plan outline and node details.
+ **statement** 

  Specifies the SQL statement to be explained.

**Examples**

```
-- Default Output
EXPLAIN select k, sum(v) from values (1, 2), (1, 3) t(k, v) group by k;
+----------------------------------------------------+
|                                                plan|
+----------------------------------------------------+
| == Physical Plan ==
*(2) HashAggregate(keys=[k#33], functions=[sum(cast(v#34 as bigint))])
+- Exchange hashpartitioning(k#33, 200), true, [id=#59]
+- *(1) HashAggregate(keys=[k#33], functions=[partial_sum(cast(v#34 as bigint))])
+- *(1) LocalTableScan [k#33, v#34]
|
+----------------------------------------------------

-- Using Extended
EXPLAIN EXTENDED select k, sum(v) from values (1, 2), (1, 3) t(k, v) group by k;
+----------------------------------------------------+
|                                                plan|
+----------------------------------------------------+
| == Parsed Logical Plan ==
'Aggregate ['k], ['k, unresolvedalias('sum('v), None)]
 +- 'SubqueryAlias `t`
+- 'UnresolvedInlineTable [k, v], [List(1, 2), List(1, 3)]
   
 == Analyzed Logical Plan ==
 k: int, sum(v): bigint
 Aggregate [k#47], [k#47, sum(cast(v#48 as bigint)) AS sum(v)#50L]
 +- SubqueryAlias `t`
    +- LocalRelation [k#47, v#48]
   
 == Optimized Logical Plan ==
 Aggregate [k#47], [k#47, sum(cast(v#48 as bigint)) AS sum(v)#50L]
 +- LocalRelation [k#47, v#48]
   
 == Physical Plan ==
 *(2) HashAggregate(keys=[k#47], functions=[sum(cast(v#48 as bigint))], output=[k#47, sum(v)#50L])
+- Exchange hashpartitioning(k#47, 200), true, [id=#79]
   +- *(1) HashAggregate(keys=[k#47], functions=[partial_sum(cast(v#48 as bigint))], output=[k#47, sum#52L])
    +- *(1) LocalTableScan [k#47, v#48]
|
+----------------------------------------------------+

-- Using Formatted
EXPLAIN FORMATTED select k, sum(v) from values (1, 2), (1, 3) t(k, v) group by k;
+----------------------------------------------------+
|                                                plan|
+----------------------------------------------------+
| == Physical Plan ==
 * HashAggregate (4)
 +- Exchange (3)
    +- * HashAggregate (2)
       +- * LocalTableScan (1)
   
   
 (1) LocalTableScan [codegen id : 1]
 Output: [k#19, v#20]
        
 (2) HashAggregate [codegen id : 1]
 Input: [k#19, v#20]
        
 (3) Exchange
 Input: [k#19, sum#24L]
        
 (4) HashAggregate [codegen id : 2]
 Input: [k#19, sum#24L]
|
+----------------------------------------------------+
```

#### LATERAL SUBQUERY clause
<a name="supported-sql-lateral-subquery"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

`LATERAL SUBQUERY` is a subquery preceded by the keyword `LATERAL`. It provides a way to reference columns in the preceding `FROM` clause. Without the `LATERAL` keyword, a subquery can reference only columns in the outer query, not columns in the `FROM` clause. `LATERAL SUBQUERY` makes complicated queries simpler and more efficient.

**Syntax** 

```
[ LATERAL ] primary_relation [ join_relation ]
```

**Parameters**
+ **primary_relation**

  Specifies the primary relation. It can be one of the following:

  1. Table relation 

  1. Aliased query 

     Syntax: `( query ) [ [ AS ] alias ]` 

  1. Aliased relation 

     Syntax: `( relation ) [ [ AS ] alias ]` 

**Examples**

```
CREATE TABLE t1 (c1 INT, c2 INT);
INSERT INTO t1 VALUES (0, 1), (1, 2);
CREATE TABLE t2 (c1 INT, c2 INT);
INSERT INTO t2 VALUES (0, 2), (0, 3);
SELECT * FROM t1,
LATERAL (SELECT * FROM t2 WHERE t1.c1 = t2.c1);
+-----+-----+-----+-----+
|t1.c1|t1.c2|t2.c1|t2.c2|
+-----+-----+-----+-----+
|    0|    1|    0|    3|
|    0|    1|    0|    2|
+-----+-----+-----+-----+
SELECT a, b, c FROM t1,
LATERAL (SELECT c1 + c2 AS a),
LATERAL (SELECT c1 - c2 AS b),
LATERAL (SELECT a * b AS c);
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  3| -1| -3|
|  1| -1| -1|
+---+---+---+
```

#### LATERAL VIEW clause
<a name="supported-sql-lateral-view"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `LATERAL VIEW` clause is used with a generator function (such as `EXPLODE`), which generates a virtual table containing one or more rows. `LATERAL VIEW` applies those rows to each original output row.

**Syntax** 

```
LATERAL VIEW [ OUTER ] generator_function ( expression [ , ... ] ) [ table_alias ] AS column_alias [ , ... ]
```

**Parameters**
+ **OUTER**

  If `OUTER` is specified, null is returned when the input array/map is empty or null.
+ **generator_function**

  Specifies a generator function (`EXPLODE`, `INLINE`, and so on).
+ **table_alias**

  The alias for `generator_function`, which is optional.
+ **column_alias**

  Lists the column aliases of `generator_function`, which can be used in output rows.

  There can be multiple aliases if `generator_function` has multiple output columns.

**Examples**

```
CREATE TABLE person (id INT, name STRING, age INT, class INT, address STRING);
INSERT INTO person VALUES
(100, 'John', 30, 1, 'Street 1'),
(200, 'Mary', NULL, 1, 'Street 2'),
(300, 'Mike', 80, 3, 'Street 3'),
(400, 'Dan', 50, 4, 'Street 4');
SELECT * FROM person
LATERAL VIEW EXPLODE(ARRAY(30, 60)) tableName AS c_age
LATERAL VIEW EXPLODE(ARRAY(40, 80)) AS d_age;
+------+-------+-------+--------+-----------+--------+--------+
|  id  | name  |  age  | class  |  address  | c_age  | d_age  |
+------+-------+-------+--------+-----------+--------+--------+
| 100  | John  | 30    | 1      | Street 1  | 30     | 40     |
| 100  | John  | 30    | 1      | Street 1  | 30     | 80     |
| 100  | John  | 30    | 1      | Street 1  | 60     | 40     |
| 100  | John  | 30    | 1      | Street 1  | 60     | 80     |
| 200  | Mary  | NULL  | 1      | Street 2  | 30     | 40     |
| 200  | Mary  | NULL  | 1      | Street 2  | 30     | 80     |
| 200  | Mary  | NULL  | 1      | Street 2  | 60     | 40     |
| 200  | Mary  | NULL  | 1      | Street 2  | 60     | 80     |
| 300  | Mike  | 80    | 3      | Street 3  | 30     | 40     |
| 300  | Mike  | 80    | 3      | Street 3  | 30     | 80     |
| 300  | Mike  | 80    | 3      | Street 3  | 60     | 40     |
| 300  | Mike  | 80    | 3      | Street 3  | 60     | 80     |
| 400  | Dan   | 50    | 4      | Street 4  | 30     | 40     |
| 400  | Dan   | 50    | 4      | Street 4  | 30     | 80     |
| 400  | Dan   | 50    | 4      | Street 4  | 60     | 40     |
| 400  | Dan   | 50    | 4      | Street 4  | 60     | 80     |
+------+-------+-------+--------+-----------+--------+--------+
SELECT c_age, COUNT(1) FROM person
LATERAL VIEW EXPLODE(ARRAY(30, 60)) AS c_age
LATERAL VIEW EXPLODE(ARRAY(40, 80)) AS d_age
GROUP BY c_age;
+--------+-----------+
| c_age  | count(1)  |
+--------+-----------+
| 60     | 8         |
| 30     | 8         |
+--------+-----------+
SELECT * FROM person
LATERAL VIEW EXPLODE(ARRAY()) tableName AS c_age;
+-----+-------+------+--------+----------+--------+
| id  | name  | age  | class  | address  | c_age  |
+-----+-------+------+--------+----------+--------+
+-----+-------+------+--------+----------+--------+
SELECT * FROM person
LATERAL VIEW OUTER EXPLODE(ARRAY()) tableName AS c_age;
+------+-------+-------+--------+-----------+--------+
|  id  | name  |  age  | class  |  address  | c_age  |
+------+-------+-------+--------+-----------+--------+
| 100  | John  | 30    | 1      | Street 1  | NULL   |
| 200  | Mary  | NULL  | 1      | Street 2  | NULL   |
| 300  | Mike  | 80    | 3      | Street 3  | NULL   |
| 400  | Dan   | 50    | 4      | Street 4  | NULL   |
+------+-------+-------+--------+-----------+--------+
```

#### LIKE predicate
<a name="supported-sql-like-predicate"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `LIKE` predicate is used to search for a specific pattern. This predicate also supports multiple patterns with quantifiers that include `ANY`, `SOME`, and `ALL`.

**Syntax** 

```
[ NOT ] { LIKE search_pattern [ ESCAPE esc_char ] | [ RLIKE | REGEXP ] regex_pattern }
[ NOT ] { LIKE quantifiers ( search_pattern [ , ... ]) }
```

**Parameters**
+ **search_pattern**

  Specifies a string pattern to be searched by the LIKE clause. It can contain special pattern-matching characters:
  + `%` matches zero or more characters.
  + `_` matches exactly one character.
+ **esc_char**

  Specifies the escape character. The default escape character is `\`.
+ **regex_pattern**

  Specifies a regular expression search pattern to be searched by the `RLIKE` or `REGEXP` clause.
+ **quantifiers** 

  Specifies the predicate quantifiers, which include `ANY`, `SOME`, and `ALL`.

  `ANY` or `SOME` returns true if one of the patterns matches the input.

  `ALL` returns true if all of the patterns match the input.
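The quantifier semantics above can be sketched in Python by mapping SQL wildcards onto `fnmatch` wildcards. This is an illustration only, not the engine's matcher:

```python
from fnmatch import fnmatchcase

def sql_like(value, pattern):
    # Translate SQL wildcards to fnmatch wildcards: % -> *, _ -> ?
    return fnmatchcase(value, pattern.replace("%", "*").replace("_", "?"))

def like_any(value, patterns):
    # LIKE ANY / LIKE SOME: true if at least one pattern matches.
    return any(sql_like(value, p) for p in patterns)

def like_all(value, patterns):
    # LIKE ALL: true only if every pattern matches.
    return all(sql_like(value, p) for p in patterns)

print(like_all("Dan", ["%an%", "%an"]))     # True: both patterns match
print(like_any("Evan_w", ["%an%", "%an"]))  # True: the first pattern matches
```

`ANY` and `SOME` are interchangeable, which is why both map to the same `any()` reduction here.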

**Examples**

```
CREATE TABLE person (id INT, name STRING, age INT);
INSERT INTO person VALUES
(100, 'John', 30),
(200, 'Mary', NULL),
(300, 'Mike', 80),
(400, 'Dan',  50),
(500, 'Evan_w', 16);
SELECT * FROM person WHERE name LIKE 'M%';
+---+----+----+
| id|name| age|
+---+----+----+
|300|Mike|  80|
|200|Mary|null|
+---+----+----+
SELECT * FROM person WHERE name LIKE 'M_ry';
+---+----+----+
| id|name| age|
+---+----+----+
|200|Mary|null|
+---+----+----+
SELECT * FROM person WHERE name NOT LIKE 'M_ry';
+---+------+---+
| id|  name|age|
+---+------+---+
|500|Evan_W| 16|
|300|  Mike| 80|
|100|  John| 30|
|400|   Dan| 50|
+---+------+---+
SELECT * FROM person WHERE name RLIKE 'M+';
+---+----+----+
| id|name| age|
+---+----+----+
|300|Mike|  80|
|200|Mary|null|
+---+----+----+
SELECT * FROM person WHERE name REGEXP 'M+';
+---+----+----+
| id|name| age|
+---+----+----+
|300|Mike|  80|
|200|Mary|null|
+---+----+----+
SELECT * FROM person WHERE name LIKE '%\_%';
+---+------+---+
| id|  name|age|
+---+------+---+
|500|Evan_W| 16|
+---+------+---+
SELECT * FROM person WHERE name LIKE '%$_%' ESCAPE '$';
+---+------+---+
| id|  name|age|
+---+------+---+
|500|Evan_W| 16|
+---+------+---+
SELECT * FROM person WHERE name LIKE ALL ('%an%', '%an');
+---+----+----+
| id|name| age|
+---+----+----+
|400| Dan|  50|
+---+----+----+
SELECT * FROM person WHERE name LIKE ANY ('%an%', '%an');
+---+------+---+
| id|  name|age|
+---+------+---+
|400|   Dan| 50|
|500|Evan_W| 16|
+---+------+---+
SELECT * FROM person WHERE name LIKE SOME ('%an%', '%an');
+---+------+---+
| id|  name|age|
+---+------+---+
|400|   Dan| 50|
|500|Evan_W| 16|
+---+------+---+
SELECT * FROM person WHERE name NOT LIKE ALL ('%an%', '%an');
+---+----+----+
| id|name| age|
+---+----+----+
|100|John|  30|
|200|Mary|null|
|300|Mike|  80|
+---+----+----+
SELECT * FROM person WHERE name NOT LIKE ANY ('%an%', '%an');
+---+------+----+
| id|  name| age|
+---+------+----+
|100|  John|  30|
|200|  Mary|null|
|300|  Mike|  80|
|500|Evan_W|  16|
+---+------+----+
SELECT * FROM person WHERE name NOT LIKE SOME ('%an%', '%an');
+---+------+----+
| id|  name| age|
+---+------+----+
|100|  John|  30|
|200|  Mary|null|
|300|  Mike|  80|
|500|Evan_W|  16|
+---+------+----+
```

#### OFFSET
<a name="supported-sql-offset"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `OFFSET` clause specifies the number of rows to skip before beginning to return the rows produced by a `SELECT` statement. In general, this clause is used with `ORDER BY` to ensure that the results are deterministic.

**Syntax** 

```
OFFSET integer_expression
```

**Parameters**
+ **integer_expression**

  Specifies a foldable expression that returns an integer.

**Examples**

```
CREATE TABLE person (name STRING, age INT);
INSERT INTO person VALUES
('Jane Doe', 25),
('Pat C', 18),
('Nikki W', 16),
('Juan L', 25),
('John D', 18),
('Jorge S', 16);

-- Skip the first two rows.
SELECT name, age FROM person ORDER BY name OFFSET 2;
+--------+---+
|    name|age|
+--------+---+
| Jorge S| 16|
|  Juan L| 25|
| Nikki W| 16|
|   Pat C| 18|
+--------+---+

-- Skip the first two rows and returns the next three rows.
SELECT name, age FROM person ORDER BY name LIMIT 3 OFFSET 2;
+--------+---+
|    name|age|
+--------+---+
| Jorge S| 16|
|  Juan L| 25|
| Nikki W| 16|
+--------+---+

-- A function expression as an input to OFFSET.
SELECT name, age FROM person ORDER BY name OFFSET length('WAGON');
+--------+---+
|    name|age|
+--------+---+
|   Pat C| 18|
+--------+---+
```

#### PIVOT clause
<a name="supported-sql-pivot"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `PIVOT` clause is used for data perspective. You can get aggregated values based on specific column values, which are turned into multiple columns used in the `SELECT` clause. The `PIVOT` clause can be specified after the table name or subquery.

**Syntax** 

```
PIVOT ( { aggregate_expression [ AS aggregate_expression_alias ] } [ , ... ] FOR column_list IN ( expression_list ) ) 
```

**Parameters** 
+ **aggregate_expression**

  Specifies an aggregate expression (`SUM(a)`, `COUNT(DISTINCT b)`, and so on).
+ **aggregate_expression_alias**

  Specifies an alias for the aggregate expression.
+ **column_list**

  Contains columns from the `FROM` clause, which specifies the columns to be replaced with new columns. You can wrap the columns in parentheses, for example `(c1, c2)`.
+ **expression_list**

  Specifies new columns, which are used to match values in `column_list` as the aggregating condition. You can also add aliases for them.

**Examples**

```
CREATE TABLE person (id INT, name STRING, age INT, class INT, address STRING);
INSERT INTO person VALUES
(100, 'John', 30, 1, 'Street 1'),
(200, 'Mary', NULL, 1, 'Street 2'),
(300, 'Mike', 80, 3, 'Street 3'),
(400, 'Dan', 50, 4, 'Street 4');
SELECT * FROM person
PIVOT (
SUM(age) AS a, AVG(class) AS c
FOR name IN ('John' AS john, 'Mike' AS mike)
);
+------+-----------+---------+---------+---------+---------+
|  id  |  address  | john_a  | john_c  | mike_a  | mike_c  |
+------+-----------+---------+---------+---------+---------+
| 200  | Street 2  | NULL    | NULL    | NULL    | NULL    |
| 100  | Street 1  | 30      | 1.0     | NULL    | NULL    |
| 300  | Street 3  | NULL    | NULL    | 80      | 3.0     |
| 400  | Street 4  | NULL    | NULL    | NULL    | NULL    |
+------+-----------+---------+---------+---------+---------+
SELECT * FROM person
PIVOT (
SUM(age) AS a, AVG(class) AS c
FOR (name, age) IN (('John', 30) AS c1, ('Mike', 40) AS c2)
);
+------+-----------+-------+-------+-------+-------+
|  id  |  address  | c1_a  | c1_c  | c2_a  | c2_c  |
+------+-----------+-------+-------+-------+-------+
| 200  | Street 2  | NULL  | NULL  | NULL  | NULL  |
| 100  | Street 1  | 30    | 1.0   | NULL  | NULL  |
| 300  | Street 3  | NULL  | NULL  | NULL  | NULL  |
| 400  | Street 4  | NULL  | NULL  | NULL  | NULL  |
+------+-----------+-------+-------+-------+-------+
```

#### Set operators
<a name="supported-sql-set"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

Set operators are used to combine two input relations into a single one. OpenSearch SQL supports three types of set operators:
+ `EXCEPT` or `MINUS`
+ `INTERSECT` 
+ `UNION` 

The input relations must have the same number of columns and compatible data types for the respective columns.

**EXCEPT** 

`EXCEPT` and `EXCEPT ALL` return the rows that are found in one relation but not the other. `EXCEPT` (or `EXCEPT DISTINCT`) takes only distinct rows, while `EXCEPT ALL` does not remove duplicates from the result rows. Note that `MINUS` is an alias for `EXCEPT`.
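The distinct versus `ALL` behavior can be sketched in Python (an illustration only): `EXCEPT DISTINCT` behaves like set difference, while `EXCEPT ALL` behaves like multiset subtraction, keeping surviving duplicates:

```python
from collections import Counter

def except_distinct(left, right):
    # EXCEPT / EXCEPT DISTINCT: set difference over distinct rows.
    return sorted(set(left) - set(right))

def except_all(left, right):
    # EXCEPT ALL: multiset subtraction; duplicates that survive are kept.
    return sorted((Counter(left) - Counter(right)).elements())

table1 = [3, 1, 2, 2, 3, 4]  # the table1 column c used in the examples below
table2 = [5, 1, 2, 2]        # the table2 column c used in the examples below
print(except_distinct(table1, table2))  # [3, 4]
print(except_all(table1, table2))       # [3, 3, 4]
```

Here the value 3 appears twice in `table1` and never in `table2`, so `EXCEPT ALL` keeps both copies while `EXCEPT` collapses them to one.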

**Syntax** 

```
 [ ( ] relation [ ) ] EXCEPT | MINUS [ ALL | DISTINCT ] [ ( ] relation [ ) ] 
```

**Examples**

```
-- Use table1 and table2 tables to demonstrate set operators in this page.
SELECT * FROM table1;
+---+
|  c|
+---+
|  3|
|  1|
|  2|
|  2|
|  3|
|  4|
+---+
SELECT * FROM table2;
+---+
|  c|
+---+
|  5|
|  1|
|  2|
|  2|
+---+
SELECT c FROM table1 EXCEPT SELECT c FROM table2;
+---+
|  c|
+---+
|  3|
|  4|
+---+
SELECT c FROM table1 MINUS SELECT c FROM table2;
+---+
|  c|
+---+
|  3|
|  4|
+---+
SELECT c FROM table1 EXCEPT ALL (SELECT c FROM table2);
+---+
|  c|
+---+
|  3|
|  3|
|  4|
+---+
SELECT c FROM table1 MINUS ALL (SELECT c FROM table2);
+---+
|  c|
+---+
|  3|
|  3|
|  4|
+---+
```

**INTERSECT** 

`INTERSECT` and `INTERSECT ALL` return the rows that are found in both relations. `INTERSECT` (or `INTERSECT DISTINCT`) takes only distinct rows, while `INTERSECT ALL` does not remove duplicates from the result rows.

**Syntax** 

```
 [ ( ] relation [ ) ] INTERSECT [ ALL | DISTINCT ] [ ( ] relation [ ) ]
```

**Examples**

```
(SELECT c FROM table1) INTERSECT (SELECT c FROM table2);
+---+
|  c|
+---+
|  1|
|  2|
+---+
(SELECT c FROM table1) INTERSECT DISTINCT (SELECT c FROM table2);
+---+
|  c|
+---+
|  1|
|  2|
+---+
(SELECT c FROM table1) INTERSECT ALL (SELECT c FROM table2);
+---+
|  c|
+---+
|  1|
|  2|
|  2|
+---+
```

**UNION** 

`UNION` and `UNION ALL` return the rows that are found in either relation. `UNION` (or `UNION DISTINCT`) takes only distinct rows, while `UNION ALL` does not remove duplicates from the result rows.

**Syntax** 

```
 [ ( ] relation [ ) ] UNION [ ALL | DISTINCT ] [ ( ] relation [ ) ]
```

**Examples**

```
(SELECT c FROM table1) UNION (SELECT c FROM table2);
+---+
|  c|
+---+
|  1|
|  3|
|  5|
|  4|
|  2|
+---+
(SELECT c FROM table1) UNION DISTINCT (SELECT c FROM table2);
+---+
|  c|
+---+
|  1|
|  3|
|  5|
|  4|
|  2|
+---+
SELECT c FROM table1 UNION ALL (SELECT c FROM table2);
+---+
|  c|
+---+
|  3|
|  1|
|  2|
|  2|
|  3|
|  4|
|  5|
|  1|
|  2|
|  2|
+---+
```

#### SORT BY clause
<a name="supported-sql-sort-by"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `SORT BY` clause returns the result rows sorted within each partition in a user-specified order. When there is more than one partition, `SORT BY` may return results that are partially ordered. This is different from the `ORDER BY` clause, which guarantees a total order in the output.

**Syntax** 

```
SORT BY { expression [ sort_direction | nulls_sort_order ] [ , ... ] }
```

**Parameters**
+ **SORT BY**

  Specifies a comma-separated list of expressions, along with the optional parameters `sort_direction` and `nulls_sort_order`, used to sort the rows within each partition.
+ **sort_direction**

  (Optional) Specifies whether to sort the rows in ascending or descending order.

  The valid values for the sort direction are `ASC` for ascending and `DESC` for descending.

  If the sort direction is not explicitly specified, rows are sorted in ascending order by default.

  Syntax: `[ ASC | DESC ]`
+ **nulls_sort_order**

  (Optional) Specifies whether NULL values are returned before or after non-NULL values.

  If `nulls_sort_order` is not specified, NULLs sort first if the sort order is `ASC` and last if the sort order is `DESC`.

  1. If `NULLS FIRST` is specified, NULL values are returned first regardless of the sort order.

  2. If `NULLS LAST` is specified, NULL values are returned last regardless of the sort order.

  Syntax: `[ NULLS { FIRST | LAST } ] `

**Examples**

```
CREATE TABLE person (zip_code INT, name STRING, age INT);
INSERT INTO person VALUES
(94588, 'Shirley Rodriguez', 50),
(94588, 'Juan Li', 18),
(94588, 'Anil K', 27),
(94588, 'John D', NULL),
(94511, 'David K', 42),
(94511, 'Aryan B.', 18),
(94511, 'Lalit B.', NULL);
-- Sort rows by `name` within each partition in ascending manner
SELECT name, age, zip_code FROM person SORT BY name;
+------------------+----+--------+
|              name| age|zip_code|
+------------------+----+--------+
|            Anil K|  27|   94588|
|           Juan Li|  18|   94588|
|            John D|null|   94588|
| Shirley Rodriguez|  50|   94588|
|          Aryan B.|  18|   94511|
|           David K|  42|   94511|
|          Lalit B.|null|   94511|
+------------------+----+--------+
-- Sort rows within each partition using column position.
SELECT name, age, zip_code FROM person SORT BY 1;
+------------------+----+--------+
|              name| age|zip_code|
+------------------+----+--------+
|            Anil K|  27|   94588|
|           Juan Li|  18|   94588|
|            John D|null|   94588|
| Shirley Rodriguez|  50|   94588|
|          Aryan B.|  18|   94511|
|           David K|  42|   94511|
|          Lalit B.|null|   94511|
+------------------+----+--------+

-- Sort rows within partition in ascending manner keeping null values to be last.
SELECT age, name, zip_code FROM person SORT BY age NULLS LAST;
+----+------------------+--------+
| age|              name|zip_code|
+----+------------------+--------+
|  18|           Juan Li|   94588|
|  27|            Anil K|   94588|
|  50| Shirley Rodriguez|   94588|
|null|            John D|   94588|
|  18|          Aryan B.|   94511|
|  42|           David K|   94511|
|null|          Lalit B.|   94511|
+----+------------------+--------+

-- Sort rows by age within each partition in descending manner, which defaults to NULLS LAST.
SELECT age, name, zip_code FROM person SORT BY age DESC;
+----+------------------+--------+
| age|              name|zip_code|
+----+------------------+--------+
|  50| Shirley Rodriguez|   94588|
|  27|            Anil K|   94588|
|  18|           Juan Li|   94588|
|null|            John D|   94588|
|  42|           David K|   94511|
|  18|          Aryan B.|   94511|
|null|          Lalit B.|   94511|
+----+------------------+--------+

-- Sort rows by age within each partition in descending manner, keeping NULL values first.
SELECT age, name, zip_code FROM person SORT BY age DESC NULLS FIRST;
+----+------------------+--------+
| age|              name|zip_code|
+----+------------------+--------+
|null|            John D|   94588|
|  50| Shirley Rodriguez|   94588|
|  27|            Anil K|   94588|
|  18|           Juan Li|   94588|
|null|          Lalit B.|   94511|
|  42|           David K|   94511|
|  18|          Aryan B.|   94511|
+----+------------------+--------+

-- Sort rows within each partition based on more than one column with each column having
-- different sort direction.
SELECT name, age, zip_code FROM person
SORT BY name ASC, age DESC;
+------------------+----+--------+
|              name| age|zip_code|
+------------------+----+--------+
|            Anil K|  27|   94588|
|           Juan Li|  18|   94588|
|            John D|null|   94588|
| Shirley Rodriguez|  50|   94588|
|          Aryan B.|  18|   94511|
|           David K|  42|   94511|
|          Lalit B.|null|   94511|
+------------------+----+--------+
```

#### UNPIVOT
<a name="supported-sql-unpivot"></a>

**Note**  
To see which AWS data source integrations support this SQL command, see [Supported SQL commands and functions for OpenSearch](#supported-directquery-sql).

The `UNPIVOT` clause transforms multiple columns into multiple rows that can be used in the `SELECT` clause. The `UNPIVOT` clause can be specified after a table name or subquery.

**Syntax**

```
UNPIVOT [ { INCLUDE | EXCLUDE } NULLS ] (
    { single_value_column_unpivot | multi_value_column_unpivot }
) [[AS] alias]

single_value_column_unpivot:
    values_column
    FOR name_column
    IN (unpivot_column [[AS] alias] [, ...])

multi_value_column_unpivot:
    (values_column [, ...])
    FOR name_column
    IN ((unpivot_column [, ...]) [[AS] alias] [, ...])
```

**Parameters**
+ **unpivot\_column**

  Contains columns in the `FROM` clause that specify the columns to unpivot.
+ **name\_column**

  The name of the column that holds the names of the unpivoted columns.
+ **values\_column**

  The name of the column that holds the values of the unpivoted columns.
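The single-value-column form of `UNPIVOT` can be sketched in plain Python. This is a hypothetical illustration of the semantics using the parameter names above (with `id_cols` as an assumed name for the pass-through columns), not how the engine executes the clause:

```python
# Illustration of single-value-column UNPIVOT: each unpivot column in a row
# becomes its own output row; EXCLUDE NULLS (the default) drops NULL values.
def unpivot(rows, id_cols, unpivot_cols, name_col, values_col, include_nulls=False):
    out = []
    for row in rows:
        for col in unpivot_cols:
            value = row[col]
            if value is None and not include_nulls:
                continue  # EXCLUDE NULLS behavior (the default)
            rec = {c: row[c] for c in id_cols}
            rec[name_col] = col       # name_column holds the source column's name
            rec[values_col] = value   # values_column holds that column's value
            out.append(rec)
    return out

sales = [
    {"year": 2020, "q1": None, "q2": 1000, "q3": 2000, "q4": 2500},
    {"year": 2022, "q1": 4200, "q2": 3100, "q3": None, "q4": None},
]
for rec in unpivot(sales, ["year"], ["q1", "q2", "q3", "q4"], "quarter", "sales"):
    print(rec)
# 5 rows; the NULL quarters (2020 q1, 2022 q3, 2022 q4) are dropped
```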

**Examples**

```
CREATE TABLE sales_quarterly (year INT, q1 INT, q2 INT, q3 INT, q4 INT);
INSERT INTO sales_quarterly VALUES
(2020, null, 1000, 2000, 2500),
(2021, 2250, 3200, 4200, 5900),
(2022, 4200, 3100, null, null);
-- column names are used as unpivot columns
SELECT * FROM sales_quarterly
UNPIVOT (
sales FOR quarter IN (q1, q2, q3, q4)
);
+------+---------+-------+
| year | quarter | sales |
+------+---------+-------+
| 2020 | q2      | 1000  |
| 2020 | q3      | 2000  |
| 2020 | q4      | 2500  |
| 2021 | q1      | 2250  |
| 2021 | q2      | 3200  |
| 2021 | q3      | 4200  |
| 2021 | q4      | 5900  |
| 2022 | q1      | 4200  |
| 2022 | q2      | 3100  |
+------+---------+-------+
-- NULL values are excluded by default; INCLUDE NULLS keeps them
-- unpivot columns can be aliased
-- unpivot result can be referenced via its alias
SELECT up.* FROM sales_quarterly
UNPIVOT INCLUDE NULLS (
sales FOR quarter IN (q1 AS Q1, q2 AS Q2, q3 AS Q3, q4 AS Q4)
) AS up;
+------+---------+-------+
| year | quarter | sales |
+------+---------+-------+
| 2020 | Q1      | NULL  |
| 2020 | Q2      | 1000  |
| 2020 | Q3      | 2000  |
| 2020 | Q4      | 2500  |
| 2021 | Q1      | 2250  |
| 2021 | Q2      | 3200  |
| 2021 | Q3      | 4200  |
| 2021 | Q4      | 5900  |
| 2022 | Q1      | 4200  |
| 2022 | Q2      | 3100  |
| 2022 | Q3      | NULL  |
| 2022 | Q4      | NULL  |
+------+---------+-------+
-- multiple value columns can be unpivoted per row
SELECT * FROM sales_quarterly
UNPIVOT EXCLUDE NULLS (
(first_quarter, second_quarter)
FOR half_of_the_year IN (
(q1, q2) AS H1,
(q3, q4) AS H2
)
);
+------+------------------+---------------+----------------+
| year | half_of_the_year | first_quarter | second_quarter |
+------+------------------+---------------+----------------+
| 2020 | H1               | NULL          | 1000           |
| 2020 | H2               | 2000          | 2500           |
| 2021 | H1               | 2250          | 3200           |
| 2021 | H2               | 4200          | 5900           |
| 2022 | H1               | 4200          | 3100           |
+------+------------------+---------------+----------------+
```