Cluster administration
You can manually migrate cluster resources from one node to another by running the following command as the root user:
[root@prihana ~]# pcs resource move SAPHana_HDB_00-master
Warning: Creating location constraint cli-ban-SAPHana_HDB_00-master-on-prihana with a score of -INFINITY for resource SAPHana_HDB_00-master on node prihana.
This will prevent SAPHana_HDB_00-master from running on prihana until the constraint is removed. This will be the case even if prihana is the last node in the cluster.
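With no destination argument, `pcs resource move` simply bans the resource from its current node, as the warning above shows. The command also accepts an explicit destination node. The following is a sketch using the node names from this example; verify the exact syntax for your pcs version with `pcs resource move --help`:

```shell
# Move the master role to a named node (sechana here) instead of
# only banning it from the current node.
[root@prihana ~]# pcs resource move SAPHana_HDB_00-master sechana
```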
Check the status of the cluster again to verify that the resource migration completed:
[root@prihana ~]# pcs status
Cluster name: rhelhanaha
Stack: corosync
Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
Last updated: Thu Nov 12 10:45:14 2020
Last change: Thu Nov 12 10:45:06 2020 by root via crm_attribute on sechana

2 nodes configured
6 resources configured

Online: [ prihana sechana ]

Full list of resources:

 clusterfence   (stonith:fence_aws):    Started prihana
 Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
     Started: [ prihana sechana ]
 Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
     Masters: [ sechana ]
     Stopped: [ prihana ]
 hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
Clean up the failed actions as shown in the next section. Each invocation of the pcs resource move command creates a location constraint that causes the resource to move. These constraints must be removed to allow automated failover in the future. To remove the constraints created by the move, run the following command:
[root@prihana ~]# pcs resource clear SAPHana_HDB_00-master
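Before rechecking the full cluster status, you can confirm that the temporary ban is gone by listing the location constraints. This is a sketch; the constraint name `cli-ban-SAPHana_HDB_00-master-on-prihana` is the default name generated by `pcs resource move`, and output formatting varies by pcs version:

```shell
# List location constraints; the cli-ban-* entry created by the move
# should no longer appear after `pcs resource clear`.
[root@prihana ~]# pcs constraint location
```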
Check the status of the cluster:
[root@prihana ~]# pcs status
Cluster name: rhelhanaha
Stack: corosync
Current DC: sechana (version 1.1.19-8.el7_6.5-c3c624ea3d) - partition with quorum
Last updated: Thu Nov 12 10:49:44 2020
Last change: Thu Nov 12 10:49:12 2020 by root via crm_attribute on sechana

2 nodes configured
6 resources configured

Online: [ prihana sechana ]

Full list of resources:

 clusterfence   (stonith:fence_aws):    Started prihana
 Clone Set: SAPHanaTopology_HDB_00-clone [SAPHanaTopology_HDB_00]
     Started: [ prihana sechana ]
 Master/Slave Set: SAPHana_HDB_00-master [SAPHana_HDB_00]
     Masters: [ sechana ]
     Slaves: [ prihana ]
 hana-oip       (ocf::heartbeat:aws-vpc-move-ip):       Started sechana

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled