Using the log_fdw extension to access the DB log using SQL
Your RDS for PostgreSQL DB instance supports the log_fdw extension, which you can use to access your database engine log using a SQL interface. The log_fdw extension provides two functions that make it easy to create foreign tables for database logs:
- list_postgres_log_files – Lists the files in the database log directory and their sizes in bytes.
- create_foreign_table_for_log_file(table_name text, server_name text, log_file_name text) – Builds a foreign table for the specified file in the current database.
All functions created by log_fdw are owned by rds_superuser. Members of the rds_superuser role can grant access to these functions to other database users.
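For example, a member of the rds_superuser role might grant execute permission on these functions to another user. The following is a minimal sketch; reporting_user is a placeholder for an existing database user, not something created by the extension.
postgres=> -- reporting_user is a placeholder; substitute an existing database user
postgres=> GRANT EXECUTE ON FUNCTION list_postgres_log_files() TO reporting_user;
GRANT
postgres=> GRANT EXECUTE ON FUNCTION create_foreign_table_for_log_file(text, text, text) TO reporting_user;
GRANT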
By default, Amazon RDS generates the log files in stderr (standard error) format, as specified in the log_destination parameter. There are only two options for this parameter, stderr and csvlog (comma-separated values, CSV). If you add the csvlog option to the parameter, Amazon RDS generates both stderr and csvlog logs. This can affect the storage capacity on your DB instance, so you need to be aware of the other parameters that affect log handling. For more information, see Setting the log destination (stderr, csvlog).
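You can confirm which formats your DB instance currently produces by checking the parameter from a psql session. The output shown here is illustrative and assumes that both formats are enabled.
postgres=> SHOW log_destination;
 log_destination
-----------------
 stderr,csvlog
(1 row)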
One benefit of generating csvlog logs is that the log_fdw extension lets you build foreign tables with the data neatly split into several columns. To do this, your instance needs to be associated with a custom DB parameter group so that you can change the setting for log_destination. For more information about how to do so, see Working with parameters on your RDS for PostgreSQL DB instance.
The following example assumes that the log_destination parameter includes csvlog.
To use the log_fdw extension
- Install the log_fdw extension.
postgres=> CREATE EXTENSION log_fdw;
CREATE EXTENSION
- Create the log server as a foreign data wrapper.
postgres=> CREATE SERVER log_server FOREIGN DATA WRAPPER log_fdw;
CREATE SERVER
- Select all from a list of log files.
postgres=> SELECT * FROM list_postgres_log_files() ORDER BY 1;
A sample response is as follows.
            file_name             | file_size_bytes
----------------------------------+-----------------
 postgresql.log.2023-08-09-22.csv |            1111
 postgresql.log.2023-08-09-23.csv |            1172
 postgresql.log.2023-08-10-00.csv |            1744
 postgresql.log.2023-08-10-01.csv |            1102
(4 rows)
- Create a foreign table for the selected file. Because this is a csvlog file, the data is split into several columns, including the log_time and message columns used in the next step.
postgres=> SELECT create_foreign_table_for_log_file('my_postgres_error_log', 'log_server', 'postgresql.log.2023-08-09-22.csv');
The response provides no detail other than that the table now exists.
-----------------------------------
(1 row)
- Select a sample of the log file. The following code retrieves the log time and error message description.
postgres=> SELECT log_time, message FROM my_postgres_error_log ORDER BY 1;
A sample response is as follows.
             log_time             |                                  message
----------------------------------+----------------------------------------------------------------------------
 Tue Aug 09 15:45:18.172 2023 PDT | ending log output to stderr
 Tue Aug 09 15:45:18.175 2023 PDT | database system was interrupted; last known up at 2023-08-09 22:43:34 UTC
 Tue Aug 09 15:45:18.223 2023 PDT | checkpoint record is at 0/90002E0
 Tue Aug 09 15:45:18.223 2023 PDT | redo record is at 0/90002A8; shutdown FALSE
 Tue Aug 09 15:45:18.223 2023 PDT | next transaction ID: 0/1879; next OID: 24578
 Tue Aug 09 15:45:18.223 2023 PDT | next MultiXactId: 1; next MultiXactOffset: 0
 Tue Aug 09 15:45:18.223 2023 PDT | oldest unfrozen transaction ID: 1822, in database 1
(7 rows)
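Because the csvlog format splits each log entry into columns, you can also filter the foreign table with ordinary SQL. The following sketch assumes the foreign table created earlier and the standard csvlog column error_severity; the results depend on the contents of your log file.
postgres=> -- error_severity is a standard csvlog column; adjust the filter as needed
postgres=> SELECT log_time, error_severity, message FROM my_postgres_error_log WHERE error_severity IN ('ERROR', 'FATAL') ORDER BY log_time;
When you no longer need the foreign table, you can drop it.
postgres=> DROP FOREIGN TABLE my_postgres_error_log;
DROP FOREIGN TABLE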