Automate application-consistent snapshots with Data Lifecycle Manager - Amazon EBS


Automate application-consistent snapshots with Data Lifecycle Manager

You can automate application-consistent snapshots with Amazon Data Lifecycle Manager by enabling pre and post scripts in snapshot lifecycle policies that target instances.

Amazon Data Lifecycle Manager integrates with AWS Systems Manager (Systems Manager) to support application-consistent snapshots. Amazon Data Lifecycle Manager uses Systems Manager (SSM) command documents that include pre and post scripts to automate the actions needed to complete application-consistent snapshots. Before initiating snapshot creation, Amazon Data Lifecycle Manager runs the commands in the pre script to freeze and flush I/O. After initiating snapshot creation, Amazon Data Lifecycle Manager runs the commands in the post script to thaw I/O.
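
For orientation only (complete sample SSM documents appear later in this section), the following minimal sketch shows the kind of commands a pre and post script pair typically runs on a Linux instance; the /data mount point is a hypothetical placeholder.

# Pre script (sketch): flush pending writes, then freeze the data volume's
# file system so the snapshot captures a consistent state.
sync
sudo fsfreeze -f /data

# Post script (sketch): thaw the file system so application I/O can resume.
sudo fsfreeze -u /data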

Using Amazon Data Lifecycle Manager, you can automate application-consistent snapshots of the following:

  • Windows applications, using Volume Shadow Copy Service (VSS)

  • SAP HANA, using an AWS managed SSM document. For more information, see Amazon EBS snapshots for SAP HANA.

  • Self-managed databases, such as MySQL, PostgreSQL, or InterSystems IRIS, using SSM document templates

Requirements for using pre and post scripts

The following outlines the requirements for using pre and post scripts to create application-consistent snapshots with Amazon Data Lifecycle Manager, by use case.

All use cases (VSS Backup, custom SSM documents, and other use cases):
  • SSM Agent is installed and running on the target instances
  • An IAM role is prepared that allows Amazon Data Lifecycle Manager to run pre and post scripts
  • A snapshot policy is created that targets the instances and is configured for pre and post scripts

Additional requirements for VSS Backup:
  • VSS system requirements are met on the target instances
  • A VSS-enabled instance profile is attached to the target instances
  • The VSS components are installed on the target instances

Additional requirement for custom SSM documents:
  • An SSM document is prepared with the pre and post script commands

Getting started with application-consistent snapshots

This section explains the steps that you need to follow to automate application-consistent snapshots using Amazon Data Lifecycle Manager.

You must prepare the targeted instances for application-consistent snapshots using Amazon Data Lifecycle Manager. Complete one of the following procedures, depending on your use case.

Prepare for VSS Backups
To prepare your target instances for VSS backups
  1. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

    For more information, see Manually installing SSM Agent on Amazon EC2 instances for Windows.

  2. Make sure that SSM Agent is running. For more information, see Checking SSM Agent status and starting the agent.

  3. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.

  4. Make sure that the system requirements for VSS backups are met.

  5. Attach a VSS-enabled instance profile to the target instances.

  6. Install the VSS components.
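
For example, one way to install the VSS components is with the AWS-ConfigureAWSPackage Run Command document and the AwsVssComponents package, as described in the Amazon EC2 documentation; the instance ID below is a placeholder.

$ aws ssm send-command \
    --document-name "AWS-ConfigureAWSPackage" \
    --instance-ids "i-1234567890abcdef0" \
    --parameters '{"action":["Install"],"name":["AwsVssComponents"]}'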

Prepare for SAP HANA backups
To prepare your target instances for SAP HANA backups
  1. Prepare the SAP HANA environment on your target instances.

    1. Set up your instance with SAP HANA. If you do not already have an existing SAP HANA environment, you can refer to SAP HANA Environment Setup on AWS.

    2. Log in to SystemDB as a suitable administrator user.

    3. Create a database backup user to be used with Amazon Data Lifecycle Manager.

      CREATE USER username PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;

      For example, the following command creates a user named dlm_user with password password.

      CREATE USER dlm_user PASSWORD password NO FORCE_FIRST_PASSWORD_CHANGE;
    4. Assign the BACKUP OPERATOR role to the database backup user that you created in the previous step.

      GRANT BACKUP OPERATOR TO username

      For example, the following command assigns the role to a user named dlm_user.

      GRANT BACKUP OPERATOR TO dlm_user
    5. Log in to the operating system as the administrator, for example sidadm.

    6. Create an hdbuserstore entry to store connection information so that the SAP HANA SSM document can connect to SAP HANA without users having to enter the information.

      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:3hana_instance_number13 username password

      For example:

      hdbuserstore set DLM_HANADB_SNAPSHOT_USER localhost:30013 dlm_user password
    7. Test the connection.

      hdbsql -U DLM_HANADB_SNAPSHOT_USER "select * from dummy"
  2. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

    For more information, see Manually installing SSM Agent on Amazon EC2 instances for Linux.

  3. Make sure that SSM Agent is running (an example check is shown after this procedure). For more information, see Checking SSM Agent status and starting the agent.

  4. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.
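
As an optional check (not part of the official procedure), the following commands can help confirm the preparation on a Linux instance; the amazon-ssm-agent service name assumes a systemd-based distribution.

# Verify that SSM Agent is running
$ sudo systemctl status amazon-ssm-agent

# As the SAP HANA OS administrator, verify the stored connection entry
$ hdbuserstore list DLM_HANADB_SNAPSHOT_USER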

Prepare for custom SSM documents
To prepare your target instances for custom SSM documents
  1. Install SSM Agent on your target instances, if it is not already installed. If SSM Agent is already installed on your target instances, skip this step.

  2. Make sure that SSM Agent is running. For more information, see Checking SSM Agent status and starting the agent.

  3. Set up Systems Manager for Amazon EC2 instances. For more information, see Setting up Systems Manager for Amazon EC2 instances in the AWS Systems Manager User Guide.
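
As an optional check (not part of the official procedure), you can confirm that the target instance is registered as a Systems Manager managed instance; the instance ID below is a placeholder.

$ aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-1234567890abcdef0"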

Note

This step is required only for custom SSM documents. It is not required for VSS Backup or SAP HANA. For VSS Backup and SAP HANA, Amazon Data Lifecycle Manager uses AWS managed SSM documents.

If you are automating application-consistent snapshots for a self-managed database, such as MySQL, PostgreSQL, or InterSystems IRIS, you must create an SSM command document that includes a pre script to freeze and flush I/O before snapshot creation is initiated, and a post script to thaw I/O after snapshot creation is initiated.

If your MySQL, PostgreSQL, or InterSystems IRIS database uses a standard configuration, you can create an SSM command document using the sample SSM document content below. If your MySQL, PostgreSQL, or InterSystems IRIS database uses a non-standard configuration, you can use the sample content below as a starting point for your SSM command document and then customize it to meet your requirements. Alternatively, if you want to create a new SSM document from scratch, you can use the empty SSM document template below and add your pre and post commands in the appropriate sections of the document.

Take note of the following:
  • You are responsible for ensuring that the SSM document performs the correct and required actions for your database configuration.

  • Snapshots are guaranteed to be application-consistent only if the pre and post scripts in your SSM document successfully freeze, flush, and thaw I/O.

  • The SSM document must include the required fields for allowedValues, including pre-script, post-script, and dry-run. Amazon Data Lifecycle Manager runs the commands on your instances based on the contents of those sections. If your SSM document does not have those sections, Amazon Data Lifecycle Manager treats it as a failed execution.

MySQL sample document content
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: Amazon Data Lifecycle Manager Pre/Post script for MySQL databases parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run MySQL Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###=================================================================### ### Global variables ###=================================================================### START=$(date +%s) # For testing this script locally, replace the below with OPERATION=$1. OPERATION={{ command }} FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy' FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument' FS_BUSY_ERROR='mount point is busy' # Auto thaw is a fail safe mechanism to automatically unfreeze the application after the # duration specified in the global variable below. 
Choose the duration based on your # database application's tolerance to freeze. export AUTO_THAW_DURATION_SECS="60" # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # Check if filesystem is already frozen. No error code indicates that filesystem # is not currently frozen and that the pre-script can proceed with freezing the filesystem. check_fs_freeze # Execute the DB commands to flush the DB in preparation for snapshot snap_db # Freeze the filesystem. No error code indicates that filesystem was succefully frozen freeze_fs echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds." $(nohup bash -c execute_schedule_auto_thaw >/dev/null 2>&1 &) } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen. unfreeze_fs thaw_db } # Execute Auto Thaw to automatically unfreeze the application after the duration configured # in the AUTO_THAW_DURATION_SECS global variable. execute_schedule_auto_thaw() { sleep ${AUTO_THAW_DURATION_SECS} execute_post_script } # Disable Auto Thaw if it is still enabled execute_disable_auto_thaw() { echo "INFO: Attempting to disable auto thaw if enabled" auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid) if [ -n "${auto_thaw_pgid}" ]; then echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}" sudo pkill -g ${auto_thaw_pgid} rc=$? if [ ${rc} != 0 ]; then echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}" else echo "INFO: Auto Thaw has been disabled" fi fi } # Iterate over all the mountpoints and check if filesystem is already in freeze state. # Return error code 204 if any of the mount points are already frozen. check_fs_freeze() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi error_message=$(sudo mount -o remount,noatime $target 2>&1) # Remount will be a no-op without a error message if the filesystem is unfrozen. # However, if filesystem is already frozen, remount will fail with busy error message. if [ $? -ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage" exit 201 fi done } # Iterate over all the mountpoints and freeze the filesystem. freeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze # operations for root and boot mountpoints. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Freezing $target" error_message=$(sudo fsfreeze -f $target 2>&1) if [ $? 
-ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" sudo mysql -e 'UNLOCK TABLES;' exit 204 fi # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage" thaw_db exit 201 fi echo "INFO: Freezing complete on $target" done } # Iterate over all the mountpoints and unfreeze the filesystem. unfreeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, will skip the root and boot mountpoints during unfreeze as well. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Thawing $target" error_message=$(sudo fsfreeze -u $target 2>&1) # Check if filesystem is already unfrozen (thawed). Return error code 204 if filesystem is already unfrozen. if [ $? -ne 0 ]; then if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205" exit 205 fi # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202 echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage" exit 202 fi echo "INFO: Thaw complete on $target" done } snap_db() { # Run the flush command only when MySQL DB service is up and running sudo systemctl is-active --quiet mysqld.service if [ $? -eq 0 ]; then echo "INFO: Execute MySQL Flush and Lock command." sudo mysql -e 'FLUSH TABLES WITH READ LOCK;' # If the MySQL Flush and Lock command did not succeed, return error code 201 to indicate pre-script failure if [ $? -ne 0 ]; then echo "ERROR: MySQL FLUSH TABLES WITH READ LOCK command failed." exit 201 fi sync else echo "INFO: MySQL service is inactive. Skipping execution of MySQL Flush and Lock command." fi } thaw_db() { # Run the unlock command only when MySQL DB service is up and running sudo systemctl is-active --quiet mysqld.service if [ $? -eq 0 ]; then echo "INFO: Execute MySQL Unlock" sudo mysql -e 'UNLOCK TABLES;' else echo "INFO: MySQL service is inactive. Skipping execution of MySQL Unlock command." fi } export -f execute_schedule_auto_thaw export -f execute_post_script export -f unfreeze_fs export -f thaw_db # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script execute_disable_auto_thaw ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
PostgreSQL sample document content
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: Amazon Data Lifecycle Manager Pre/Post script for PostgreSQL databases parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run PostgreSQL Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###===============================================================================### ### Global variables ###===============================================================================### START=$(date +%s) OPERATION={{ command }} FS_ALREADY_FROZEN_ERROR='freeze failed: Device or resource busy' FS_ALREADY_THAWED_ERROR='unfreeze failed: Invalid argument' FS_BUSY_ERROR='mount point is busy' # Auto thaw is a fail safe mechanism to automatically unfreeze the application after the # duration specified in the global variable below. 
Choose the duration based on your # database application's tolerance to freeze. export AUTO_THAW_DURATION_SECS="60" # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # Check if filesystem is already frozen. No error code indicates that filesystem # is not currently frozen and that the pre-script can proceed with freezing the filesystem. check_fs_freeze # Execute the DB commands to flush the DB in preparation for snapshot snap_db # Freeze the filesystem. No error code indicates that filesystem was succefully frozen freeze_fs echo "INFO: Schedule Auto Thaw to execute in ${AUTO_THAW_DURATION_SECS} seconds." $(nohup bash -c execute_schedule_auto_thaw >/dev/null 2>&1 &) } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # Unfreeze the filesystem. No error code indicates that filesystem was successfully unfrozen unfreeze_fs } # Execute Auto Thaw to automatically unfreeze the application after the duration configured # in the AUTO_THAW_DURATION_SECS global variable. execute_schedule_auto_thaw() { sleep ${AUTO_THAW_DURATION_SECS} execute_post_script } # Disable Auto Thaw if it is still enabled execute_disable_auto_thaw() { echo "INFO: Attempting to disable auto thaw if enabled" auto_thaw_pgid=$(pgrep -f execute_schedule_auto_thaw | xargs -i ps -hp {} -o pgid) if [ -n "${auto_thaw_pgid}" ]; then echo "INFO: execute_schedule_auto_thaw process found with pgid ${auto_thaw_pgid}" sudo pkill -g ${auto_thaw_pgid} rc=$? if [ ${rc} != 0 ]; then echo "ERROR: Unable to kill execute_schedule_auto_thaw process. retval=${rc}" else echo "INFO: Auto Thaw has been disabled" fi fi } # Iterate over all the mountpoints and check if filesystem is already in freeze state. # Return error code 204 if any of the mount points are already frozen. check_fs_freeze() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, we will skip the root and boot mountpoints while checking if filesystem is in freeze state. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi error_message=$(sudo mount -o remount,noatime $target 2>&1) # Remount will be a no-op without a error message if the filesystem is unfrozen. # However, if filesystem is already frozen, remount will fail with busy error message. if [ $? -ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_BUSY_ERROR"* ]];then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the check filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to check_fs_freeze on mountpoint $target due to error - $errormessage" exit 201 fi done } # Iterate over all the mountpoints and freeze the filesystem. freeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous. Hence, skip filesystem freeze # operations for root and boot mountpoints. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Freezing $target" error_message=$(sudo fsfreeze -f $target 2>&1) if [ $? 
-ne 0 ];then # If the filesystem is already in frozen, return error code 204 if [[ "$error_message" == *"$FS_ALREADY_FROZEN_ERROR"* ]]; then echo "ERROR: Filesystem ${target} already frozen. Return Error Code: 204" exit 204 fi # If the filesystem freeze failed due to any reason other than the filesystem already frozen, return 201 echo "ERROR: Failed to freeze mountpoint $targetdue due to error - $errormessage" exit 201 fi echo "INFO: Freezing complete on $target" done } # Iterate over all the mountpoints and unfreeze the filesystem. unfreeze_fs() { for target in $(lsblk -nlo MOUNTPOINTS) do # Freeze of the root and boot filesystems is dangerous and pre-script does not freeze these filesystems. # Hence, will skip the root and boot mountpoints during unfreeze as well. if [ $target == '/' ]; then continue; fi if [[ "$target" == *"/boot"* ]]; then continue; fi echo "INFO: Thawing $target" error_message=$(sudo fsfreeze -u $target 2>&1) # Check if filesystem is already unfrozen (thawed). Return error code 204 if filesystem is already unfrozen. if [ $? -ne 0 ]; then if [[ "$error_message" == *"$FS_ALREADY_THAWED_ERROR"* ]]; then echo "ERROR: Filesystem ${target} is already in thaw state. Return Error Code: 205" exit 205 fi # If the filesystem unfreeze failed due to any reason other than the filesystem already unfrozen, return 202 echo "ERROR: Failed to unfreeze mountpoint $targetdue due to error - $errormessage" exit 202 fi echo "INFO: Thaw complete on $target" done } snap_db() { # Run the flush command only when PostgreSQL DB service is up and running sudo systemctl is-active --quiet postgresql if [ $? -eq 0 ]; then echo "INFO: Execute Postgres CHECKPOINT" # PostgreSQL command to flush the transactions in memory to disk sudo -u postgres psql -c 'CHECKPOINT;' # If the PostgreSQL Command did not succeed, return error code 201 to indicate pre-script failure if [ $? -ne 0 ]; then echo "ERROR: Postgres CHECKPOINT command failed." exit 201 fi sync else echo "INFO: PostgreSQL service is inactive. Skipping execution of CHECKPOINT command." fi } export -f execute_schedule_auto_thaw export -f execute_post_script export -f unfreeze_fs # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script execute_disable_auto_thaw ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."
InterSystems IRIS sample document content
###===============================================================================### # MIT License # # Copyright (c) 2024 InterSystems # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature for InterSystems IRIS. parameters: executionId: type: String default: None description: Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: type: String # Data Lifecycle Manager will trigger the pre-script and post-script actions. You can also use this SSM document with 'dry-run' for manual testing purposes. default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. #The following allowedValues will allow Data Lifecycle Manager to successfully trigger pre and post script actions. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run InterSystems IRIS Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Global variables ###===============================================================================### DOCKER_NAME=iris LOGDIR=./ EXIT_CODE=0 OPERATION={{ command }} START=$(date +%s) # Check if Docker is installed # By default if Docker is present, script assumes that InterSystems IRIS is running in Docker # Leave only the else block DOCKER_EXEC line, if you run InterSystems IRIS non-containerised (and Docker is present). # Script assumes irissys user has OS auth enabled, change the OS user or supply login/password depending on your configuration. 
if command -v docker &> /dev/null then DOCKER_EXEC="docker exec $DOCKER_NAME" else DOCKER_EXEC="sudo -i -u irissys" fi # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" # find all iris running instances iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5- | awk '{print $1}') echo "`date`: Running iris instances $iris_instances" # Only for running instances for INST in $iris_instances; do echo "`date`: Attempting to freeze $INST" # Detailed instances specific log LOGFILE=$LOGDIR/$INST-pre_post.log #check Freeze status before starting $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()" freeze_status=$? if [ $freeze_status -eq 5 ]; then echo "`date`: ERROR: $INST IS already FROZEN" EXIT_CODE=204 else echo "`date`: $INST is not frozen" # Freeze # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).ExternalFreeze(\"$LOGFILE\",,,,,,600,,,300)" status=$? case $status in 5) echo "`date`: $INST IS FROZEN" ;; 3) echo "`date`: $INST FREEZE FAILED" EXIT_CODE=201 ;; *) echo "`date`: ERROR: Unknown status code: $status" EXIT_CODE=201 ;; esac echo "`date`: Completed freeze of $INST" fi done echo "`date`: Pre freeze script finished" } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" # find all iris running instances iris_instances=$($DOCKER_EXEC iris qall 2>/dev/null | tail -n +3 | grep '^up' | cut -c5- | awk '{print $1}') echo "`date`: Running iris instances $iris_instances" # Only for running instances for INST in $iris_instances; do echo "`date`: Attempting to thaw $INST" # Detailed instances specific log LOGFILE=$LOGDIR/$INST-pre_post.log #check Freeze status befor starting $DOCKER_EXEC irissession $INST -U '%SYS' "##Class(Backup.General).IsWDSuspendedExt()" freeze_status=$? if [ $freeze_status -eq 5 ]; then echo "`date`: $INST is in frozen state" # Thaw # Docs: https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=Backup.General#ExternalFreeze $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalThaw(\"$LOGFILE\")" status=$? case $status in 5) echo "`date`: $INST IS THAWED" $DOCKER_EXEC irissession $INST -U%SYS "##Class(Backup.General).ExternalSetHistory(\"$LOGFILE\")" ;; 3) echo "`date`: $INST THAW FAILED" EXIT_CODE=202 ;; *) echo "`date`: ERROR: Unknown status code: $status" EXIT_CODE=202 ;; esac echo "`date`: Completed thaw of $INST" else echo "`date`: ERROR: $INST IS already THAWED" EXIT_CODE=205 fi done echo "`date`: Post thaw script finished" } # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." # return failure EXIT_CODE=1 ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds." 
exit $EXIT_CODE

For more information, see the GitHub repository.

Empty document template
###===============================================================================### # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # Permission is hereby granted, free of charge, to any person obtaining a copy of this # software and associated documentation files (the "Software"), to deal in the Software # without restriction, including without limitation the rights to use, copy, modify, # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ###===============================================================================### schemaVersion: '2.2' description: SSM Document Template for Amazon Data Lifecycle Manager Pre/Post script feature parameters: executionId: type: String default: None description: (Required) Specifies the unique identifier associated with a pre and/or post execution allowedPattern: ^(None|[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12})$ command: # Data Lifecycle Manager will trigger the pre-script and post-script actions during policy execution. # 'dry-run' option is intended for validating the document execution without triggering any commands # on the instance. The following allowedValues will allow Data Lifecycle Manager to successfully # trigger pre and post script actions. type: String default: 'dry-run' description: (Required) Specifies whether pre-script and/or post-script should be executed. allowedValues: - pre-script - post-script - dry-run mainSteps: - action: aws:runShellScript description: Run Database freeze/thaw commands name: run_pre_post_scripts precondition: StringEquals: - platformType - Linux inputs: runCommand: - | #!/bin/bash ###===============================================================================### ### Error Codes ###===============================================================================### # The following Error codes will inform Data Lifecycle Manager of the type of error # and help guide handling of the error. # The Error code will also be emitted via AWS Eventbridge events in the 'cause' field. # 1 Pre-script failed during execution - 201 # 2 Post-script failed during execution - 202 # 3 Auto thaw occurred before post-script was initiated - 203 # 4 Pre-script initiated while post-script was expected - 204 # 5 Post-script initiated while pre-script was expected - 205 # 6 Application not ready for pre or post-script initiation - 206 ###===============================================================================### ### Global variables ###===============================================================================### START=$(date +%s) # For testing this script locally, replace the below with OPERATION=$1. 
OPERATION={{ command }} # Add all pre-script actions to be performed within the function below execute_pre_script() { echo "INFO: Start execution of pre-script" } # Add all post-script actions to be performed within the function below execute_post_script() { echo "INFO: Start execution of post-script" } # Debug logging for parameters passed to the SSM document echo "INFO: ${OPERATION} starting at $(date) with executionId: ${EXECUTION_ID}" # Based on the command parameter value execute the function that supports # pre-script/post-script operation case ${OPERATION} in pre-script) execute_pre_script ;; post-script) execute_post_script ;; dry-run) echo "INFO: dry-run option invoked - taking no action" ;; *) echo "ERROR: Invalid command parameter passed. Please use either pre-script, post-script, dry-run." exit 1 # return failure ;; esac END=$(date +%s) # Debug Log for profiling the script time echo "INFO: ${OPERATION} completed at $(date). Total runtime: $((${END} - ${START})) seconds."

After you have your SSM document content, use one of the following procedures to create the custom SSM document.

Console
To create the SSM command document
  1. Open the AWS Systems Manager console at https://console.aws.amazon.com/systems-manager/.

  2. In the navigation pane, choose Documents, and then choose Create document, Command or Session.

  3. For Name, enter a descriptive name for the document.

  4. For Target type, select /AWS::EC2::Instance.

  5. For Document type, select Command.

  6. In the Content field, select YAML and then paste the document content.

  7. In the Document tags section, add a tag with a tag key of DLMScriptsAccess, and a tag value of true.

    Important

    The DLMScriptsAccess:true tag is required by the AWSDataLifecycleManagerSSMFullAccess AWS managed policy used in Step 3: Prepare the Amazon Data Lifecycle Manager IAM role. The policy uses the aws:ResourceTag condition key to restrict access to SSM documents that have this tag.

  8. Choose Create document.

AWS CLI
To create the SSM command document

Use the create-document command. For --name, specify a descriptive name for the document. For --document-type, specify Command. For --content, specify the path to the .yaml file with the SSM document content. For --tags, specify "Key=DLMScriptsAccess,Value=true".

$ aws ssm create-document \
    --content file://path/to/file/documentContent.yaml \
    --name "document_name" \
    --document-type "Command" \
    --document-format YAML \
    --tags "Key=DLMScriptsAccess,Value=true"
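
As an optional sanity check after the document is created, you could invoke it manually on a test instance with the dry-run command value, which is designed to take no action; the document name and instance ID below are placeholders.

$ aws ssm send-command \
    --document-name "document_name" \
    --instance-ids "i-1234567890abcdef0" \
    --parameters '{"command":["dry-run"]}'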
Note

This step is required if:

  • You create or update a pre/post script-enabled snapshot policy that uses a custom IAM role.

  • You use the command line to create or update a pre/post script-enabled snapshot policy that uses the default role.

If you use the console to create or update a pre/post script-enabled snapshot policy that uses the default role for managing snapshots (AWSDataLifecycleManagerDefaultRole), skip this step. In this case, we automatically attach the AWSDataLifecycleManagerSSMFullAccess policy to that role.

You must ensure that the IAM role that you use for the policy grants Amazon Data Lifecycle Manager permission to perform the SSM actions required to run pre and post scripts on the instances targeted by the policy.

Amazon Data Lifecycle Manager provides a managed policy (AWSDataLifecycleManagerSSMFullAccess) that includes the required permissions. You can attach this policy to your IAM role for managing snapshots to ensure that the role includes the permissions.

Important

The AWSDataLifecycleManagerSSMFullAccess managed policy uses the aws:ResourceTag condition key to restrict access to specific SSM documents when using pre and post scripts. To allow Amazon Data Lifecycle Manager to access your SSM documents, you must ensure that your SSM documents are tagged with DLMScriptsAccess:true.

Alternatively, you can manually create a custom policy or assign the required permissions directly to the IAM role that you use. You can use the same permissions that are defined in the AWSDataLifecycleManagerSSMFullAccess managed policy; however, the aws:ResourceTag condition key is optional. If you decide not to include that condition key, then you do not need to tag your SSM documents with DLMScriptsAccess:true.

Use one of the following methods to add the AWSDataLifecycleManagerSSMFullAccess policy to your IAM role.

Console
To attach the managed policy to your custom role
  1. Open the IAM console at https://console.aws.amazon.com/iam/.

  2. In the navigation pane, choose Roles.

  3. Search for and select your custom role for managing snapshots.

  4. On the Permissions tab, choose Add permissions, Attach policies.

  5. Search for and select the AWSDataLifecycleManagerSSMFullAccess managed policy, and then choose Add permissions.

AWS CLI
To attach the managed policy to your custom role

Use the attach-role-policy command. For --role-name, specify the name of your custom role. For --policy-arn, specify arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess.

$ aws iam attach-role-policy \
    --policy-arn arn:aws:iam::aws:policy/AWSDataLifecycleManagerSSMFullAccess \
    --role-name your_role_name
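
To confirm that the policy is attached, you can, for example, list the role's attached policies.

$ aws iam list-attached-role-policies \
    --role-name your_role_name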

To automate application-consistent snapshots, you must create a snapshot lifecycle policy that targets instances, and configure pre and post scripts for that policy.

Console
To create the snapshot lifecycle policy
  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

  2. In the navigation pane, choose Elastic Block Store, Lifecycle Manager, and then choose Create lifecycle policy.

  3. On the Select policy type screen, choose EBS snapshot policy, and then choose Next.

  4. In the Target resources section, do the following:

    1. For Target resource types, choose Instance.

    2. For Target resource tags, specify the resource tags that identify the instances to back up. Only resources that have the specified tags are backed up.

  5. For IAM role, choose AWSDataLifecycleManagerDefaultRole (the default role for managing snapshots), or choose the custom role that you created and prepared for pre and post scripts.

  6. Configure the schedules and additional options as needed. We recommend that you schedule snapshot creation times for time periods that match your workload, such as during maintenance windows.

    For SAP HANA, we recommend that you enable fast snapshot restore.

    Note

    If you enable a schedule for VSS Backups, you cannot enable Exclude specific data volumes or Copy tags from source.

  7. In the Pre and post scripts section, select Enable pre and post scripts, and then do the following, depending on your workload:

    • To create application-consistent snapshots of your Windows applications, select VSS Backup.

    • To create application-consistent snapshots of your SAP HANA workloads, select SAP HANA.

    • To create application-consistent snapshots of all other databases and workloads, including your self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, using a custom SSM document, select Custom SSM document.

      1. For Automate option, choose Pre and post scripts.

      2. For SSM document, select the SSM document that you prepared.

  8. Depending on the option that you selected, configure the following additional options:

    • Script timeout - (Custom SSM document only) The timeout period after which Amazon Data Lifecycle Manager fails the script run attempt if it has not completed. If a script does not complete within its timeout period, Amazon Data Lifecycle Manager fails the attempt. The timeout period applies to the pre and post scripts individually. The minimum and default timeout period is 10 seconds. The maximum timeout period is 120 seconds.

    • Retry failed scripts - Select this option to retry scripts that do not complete within their timeout period. If the pre script fails, Amazon Data Lifecycle Manager retries the entire snapshot creation process, including running the pre and post scripts. If the post script fails, Amazon Data Lifecycle Manager retries the post script only; in this case, the pre script will have completed and the snapshot might have been created.

    • Default to crash-consistent snapshots - Select this option to default to crash-consistent snapshots if the pre script fails to run. This is the default snapshot creation behavior for Amazon Data Lifecycle Manager if pre and post scripts are not enabled. If you enabled retries, Amazon Data Lifecycle Manager defaults to crash-consistent snapshots only after all retry attempts have been exhausted. If the pre script fails and you do not default to crash-consistent snapshots, Amazon Data Lifecycle Manager does not create a snapshot for the instance during that schedule run.

      Note

      If you are creating snapshots for SAP HANA, then you might want to disable this option. Crash-consistent snapshots of SAP HANA workloads cannot be restored in the same manner.

  9. Choose Create default policy.

    Note

    If you get the error Role with name AWSDataLifecycleManagerDefaultRole already exists, see Troubleshoot Amazon Data Lifecycle Manager issues for more information.

AWS CLI
To create the snapshot lifecycle policy

Use the create-lifecycle-policy command, and include the Scripts parameter in CreateRule. For more information about the parameters, see the Amazon Data Lifecycle Manager API Reference.

$ aws dlm create-lifecycle-policy \
    --description "policy_description" \
    --state ENABLED \
    --execution-role-arn iam_role_arn \
    --policy-details file://policyDetails.json

Where policyDetails.json includes one of the following, depending on your use case:

  • VSS Backup

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "ExecutionHandler":"AWS_VSS_BACKUP", "ExecuteOperationOnScriptFailure":true|false, "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }
  • SAP HANA backup

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "Stages": ["PRE","POST"], "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER", "ExecutionHandler":"AWSSystemsManagerSAP-CreateDLMSnapshotForSAPHANA", "ExecuteOperationOnScriptFailure":true|false, "ExecutionTimeout":timeout_in_seconds (10-120), "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }
  • Custom SSM document

    { "PolicyType": "EBS_SNAPSHOT_MANAGEMENT", "ResourceTypes": [ "INSTANCE" ], "TargetTags": [{ "Key": "tag_key", "Value": "tag_value" }], "Schedules": [{ "Name": "schedule_name", "CreateRule": { "CronExpression": "cron_for_creation_frequency", "Scripts": [{ "Stages": ["PRE","POST"], "ExecutionHandlerService":"AWS_SYSTEMS_MANAGER", "ExecutionHandler":"ssm_document_name|arn", "ExecuteOperationOnScriptFailure":true|false, "ExecutionTimeout":timeout_in_seconds (10-120), "MaximumRetryCount":retries (0-3) }] }, "RetainRule": { "Count": retention_count } }] }

Considerations for VSS Backups with Amazon Data Lifecycle Manager

With Amazon Data Lifecycle Manager, you can back up and restore VSS (Volume Shadow Copy Service)-enabled Windows applications running on Amazon EC2 instances. If the application has a VSS writer registered with Windows VSS, then Amazon Data Lifecycle Manager creates a snapshot that will be application-consistent for that application.

Note

Amazon Data Lifecycle Manager currently supports application-consistent snapshots of resources running on Amazon EC2 only, specifically for backup scenarios where application data can be restored by replacing an existing instance with a new instance created from the backup. Not all instance types or applications are supported for VSS backups. For more information, see What is AWS VSS? in the Amazon EC2 User Guide.

Unsupported instance types

The following Amazon EC2 instance types are not supported for VSS backups. If your policy targets one of these instance types, Amazon Data Lifecycle Manager might still create VSS backups, but the snapshots might not be tagged with the required system tags. Without these tags, the snapshots will not be managed by Amazon Data Lifecycle Manager after creation. You might need to delete these snapshots manually.

  • T3: t3.nano | t3.micro

  • T3a: t3a.nano | t3a.micro

  • T2: t2.nano | t2.micro

Shared responsibility for application-consistent snapshots

You must ensure that:
  • SSM Agent is installed, up-to-date, and running on your target instances

  • Systems Manager has permissions to perform the required actions on the target instances

  • Amazon Data Lifecycle Manager has permissions to perform the Systems Manager actions required to run pre and post scripts on the target instances.

  • For custom workloads, such as self-managed MySQL, PostgreSQL, or InterSystems IRIS databases, the SSM document that you use includes the correct and required actions for freezing, flushing, and thawing I/O for your database configuration.

  • Snapshot creation times align with your workload schedules. For example, try to schedule snapshot creation during scheduled maintenance windows.

Amazon Data Lifecycle Manager ensures that:
  • Snapshot creation starts within 60 minutes of the scheduled snapshot creation time.

  • Pre scripts run before snapshot creation is initiated.

  • Post scripts run after the pre script has succeeded and snapshot creation has been initiated. Amazon Data Lifecycle Manager runs the post script only if the pre script succeeds. If the pre script fails, Amazon Data Lifecycle Manager will not run the post script.

  • Snapshots are tagged with the applicable tags on creation.

  • CloudWatch metrics and events are emitted when scripts are initiated, and when they fail or succeed.
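
As an illustrative example, you could list these metrics from the AWS CLI; the AWS/DataLifecycleManager namespace is an assumption about where Amazon Data Lifecycle Manager publishes its CloudWatch metrics.

$ aws cloudwatch list-metrics \
    --namespace AWS/DataLifecycleManager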