Start and Shut Down a Node in a Nutanix Cluster

Start Node VMware

  1. Using the vSphere client, take the ESXi host out of maintenance mode
  2. Power on the CVM
  3. SSH into the CVM, and issue the following command:

    ncli cluster status | grep -A 15 cvm_ip_addr

  4. Validate that the datastores are available and connected to all hosts within the cluster
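
The check in step 3 can also be scripted. The snippet below is a hedged sketch: the field names and layout in the sample are an assumed, abridged shape of `ncli cluster status` output (echoing the `cvm_ip_addr` token the grep looks for), not capture from a real cluster.

```shell
# Hedged sketch of step 3's validation: pull the status field out of the
# CVM's block of `ncli cluster status` output. The sample output shape
# below is an assumption for illustration.
sample='    cvm_ip_addr                : 10.1.64.60
    status                     : Up'
state=$(printf '%s\n' "$sample" | awk -F': *' '/status/ {print $2}')
echo "CVM state: $state"
```

In practice you would pipe the live `ncli cluster status` output into the same `awk` instead of the saved sample.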

Stop Node VMware

GUI

  1. Using the vSphere client, place the ESXi host into maintenance mode
  2. SSH into the CVM and issue the following command:

    cvm_shutdown -P now

  3. Once the CVM is powered down, shut down the host

CLI

  1. SSH into the node being shut down
  2. From the command line, issue the following command:

    cvm_shutdown -P now

  3. Log in to another CVM and issue the following commands:

    ~/serviceability/bin/esx-enter-maintenance-mode -s

    ~/serviceability/bin/esx-shutdown -s

  4. Ping the hypervisor IP and confirm that it is powered down
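
Step 4 above amounts to polling until ping gets no reply. A minimal sketch of that wait loop follows; the probe command is passed in as a string so any reachability check can be substituted, and the function name, flags, and example IP are illustrative, not from the original procedure.

```shell
# Hedged helper: run a probe command repeatedly until it fails, which we
# take to mean the host is no longer reachable. Example for step 4
# (placeholder IP): wait_for_down "ping -c 1 -W 1 10.1.64.60" 60
wait_for_down() {
  probe="$1"; tries="$2"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if ! sh -c "$probe" >/dev/null 2>&1; then
      return 0            # probe failed: no reply, host appears down
    fi
    i=$((i + 1))
    sleep 1               # brief pause between probes
  done
  return 1                # host still answering after all tries
}
```

Returning nonzero after the retry budget lets a calling script abort instead of proceeding while the hypervisor is still up.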

Start Node AHV

  1. Log on to the AHV host with SSH.
  2. Find the name of the Controller VM.

    root@ahv# virsh list --all | grep CVM

  3. Make a note of the Controller VM name in the second column.
  4. Determine if the Controller VM is running.
    • If the Controller VM is off, a line similar to the following should be returned:

      NTNX-12AM2K470031-D-CVM shut off

    • If the Controller VM is on, a line similar to the following should be returned:

      NTNX-12AM2K470031-D-CVM running

    • If the Controller VM is shut off, start it.

      root@ahv# virsh start cvm_name

    • Replace cvm_name with the Controller VM name that you noted in step 3.
  5. If the node is in maintenance mode, log on to the Controller VM and take the node out of maintenance mode. Replace AHV-hypervisor-IP-address with the IP address of the AHV hypervisor.

    nutanix@cvm$ acli

    <acropolis> host.exit_maintenance_mode AHV-hypervisor-IP-address

    <acropolis> exit

  6. Log on to another Controller VM in the cluster with SSH.
  7. Verify that all services are up on all Controller VMs.

    nutanix@cvm$ cluster status

  8. If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
                              Zeus   UP       [5362, 5391, 5392, 10848, 10977, 10992]
                           Scavenger   UP       [6174, 6215, 6216, 6217]
                       SSLTerminator   UP       [7705, 7742, 7743, 7744]
                      SecureFileSync   UP       [7710, 7761, 7762, 7763]
                              Medusa   UP       [8029, 8073, 8074, 8176, 8221]
                  DynamicRingChanger   UP       [8324, 8366, 8367, 8426]
                              Pithos   UP       [8328, 8399, 8400, 8418]
                                Hera   UP       [8347, 8408, 8409, 8410]
                            Stargate   UP       [8742, 8771, 8772, 9037, 9045]
                          InsightsDB   UP       [8774, 8805, 8806, 8939]
                InsightsDataTransfer   UP       [8785, 8840, 8841, 8886, 8888, 8889, 8890]
                               Ergon   UP       [8814, 8862, 8863, 8864]
                             Cerebro   UP       [8850, 8914, 8915, 9288]
                             Chronos   UP       [8870, 8975, 8976, 9031]
                             Curator   UP       [8885, 8931, 8932, 9243]
                               Prism   UP       [3545, 3572, 3573, 3627, 4004, 4076]
                                 CIM   UP       [8990, 9042, 9043, 9084]
                        AlertManager   UP       [9017, 9081, 9082, 9324]
                            Arithmos   UP       [9055, 9217, 9218, 9353]
                             Catalog   UP       [9110, 9178, 9179, 9180]
                           Acropolis   UP       [9201, 9321, 9322, 9323]
                               Atlas   UP       [9221, 9316, 9317, 9318]
                               Uhura   UP       [9390, 9447, 9448, 9449]
                                Snmp   UP       [9418, 9513, 9514, 9516]
                    SysStatCollector   UP       [9451, 9510, 9511, 9518]
                              Tunnel   UP       [9480, 9543, 9544]
                       ClusterHealth   UP       [9521, 9619, 9620, 9947, 9976, 9977, 10301]
                               Janus   UP       [9532, 9624, 9625]
                   NutanixGuestTools   UP       [9572, 9650, 9651, 9674]
                          MinervaCVM   UP       [10174, 10200, 10201, 10202, 10371]
                       ClusterConfig   UP       [10205, 10233, 10234, 10236]
                         APLOSEngine   UP       [10231, 10261, 10262, 10263]
                               APLOS   UP       [10343, 10368, 10369, 10370, 10502, 10503]
                               Lazan   UP       [10377, 10402, 10403, 10404]
                               Orion   UP       [10409, 10449, 10450, 10474]
                              Delphi   UP       [10418, 10466, 10467, 10468]
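
Rather than eyeballing that list, the same check can be scripted: each service row has the service name followed by its state and a PID list, so any row whose state field is not UP needs attention. A hedged sketch against a saved, abridged sample (the sample rows are illustrative):

```shell
# Hedged check: flag any service in `cluster status` output whose state
# column is not UP. Service rows are the lines containing a PID list "[".
sample='CVM: 10.1.64.60 Up
       Zeus   UP       [5362, 5391, 5392]
  Scavenger   UP       [6174, 6215]
   Stargate   UP       [8742, 8771]'
down=$(printf '%s\n' "$sample" | awk '/\[/ && $2 != "UP" {print $1}')
if [ -z "$down" ]; then
  echo "all listed services are UP"
else
  echo "not UP: $down"
fi
```

Piping the live `cluster status` output through the same `awk` gives an empty result only when every service on every node reports UP.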

Stop Node AHV

Caution: Verify the data resiliency status of your cluster before proceeding. If the cluster has replication factor 2 (RF2), you can shut down only one node at a time. If more than one node in an RF2 cluster must be shut down, shut down the entire cluster instead.

  1. Shut down guest VMs that are running on the node, or move them to other nodes in the cluster.
  2. Log on to the Controller VM with SSH.
  3. List all the hosts in the cluster.

    nutanix@cvm$ acli host.list

    Note the value of Hypervisor address for the node you want to shut down.
  4. Put the node into maintenance mode. Specify wait=true to wait for the host evacuation attempt to finish.

    nutanix@cvm$ acli host.enter_maintenance_mode Hypervisor address [wait="{ true | false }"]

  5. Shut down the Controller VM.

    nutanix@cvm$ cvm_shutdown -P now

  6. Log on to the AHV host with SSH.
  7. Shut down the host.

    root@ahv# shutdown -h now
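
The "note the value of Hypervisor address" step can also be scripted. The column layout of `acli host.list` shown below is an assumption, and the host names and addresses are made up, so treat this as a sketch of the parse rather than a guaranteed output format.

```shell
# Hypothetical parse: pick the Hypervisor address for a named host out of
# `acli host.list` output. Both the sample rows and the columns are assumed.
sample='Hypervisor address  Hypervisor DNS Name  Host UUID
10.1.64.11          ahv-node-1           0f1a2b3c
10.1.64.12          ahv-node-2           4d5e6f70'
addr=$(printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "ahv-node-2" {print $1}')
echo "hypervisor address: $addr"
```

The extracted address is what you would pass to `acli host.enter_maintenance_mode` in the next step.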
