Migrate a VM from an ESXi cluster to an AHV cluster

Windows VM Migration Prerequisites

  1. On the source hypervisor, power off all the VMs that you want to migrate.
  2. Ensure that the source VMs do not have any hypervisor snapshots associated with them (for an ESXi source, a command-line check follows this list).
  3. (Optional) Clone any VMs that you want to preserve.
  4. (Optional) Create a storage container on the AHV cluster.
  5. (For ESXi source environments) Windows VMs using Unified Extensible Firmware Interface (UEFI) are not supported for migration in AOS or AHV.
  6. By default, CD-ROM drives are supported and configured only on the IDE bus.
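
For an ESXi source, one way to confirm that a VM has no snapshots is from the host shell. A minimal sketch; the VM ID (here 42) is a placeholder taken from the output of the first command:

~ # vim-cmd vmsvc/getallvms
~ # vim-cmd vmsvc/snapshot.get 42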

VirtIO

  1. Install Nutanix VirtIO on all the VMs that need to be migrated. You can download Nutanix VirtIO from the Nutanix Support Portal (see Nutanix VirtIO for Windows).
  2. Verify that you have completed all pre-migration tasks. See also Windows VM Migration Prerequisites.
  3. From the following scenarios, choose the scenario that applies to you and migrate the Windows VMs to the AHV cluster.

Scenario 1: The AHV Cluster Can Access the Source Virtual Disk Files over NFS or HTTP

  1. Provide the target AHV cluster with read access to the virtual disk files on the NFS or HTTP server. See Providing Read Access to the Nutanix Cluster.
  2. Import the virtual disks by using Image Service. See Configuring Images. Perform this step for each virtual disk on the source VM (a command-line alternative follows this list). Note the following when using Image Service:
    • For a source Nutanix ESXi or Hyper-V cluster, specify the IP address of any of the Controller VMs in the source cluster.
    • For a source non-Nutanix NFS server, specify the IP address of the NFS server.
  3. Use the Prism web console to create a VM from the imported image. Use the Clone from Image Service option. Also assign a vNIC to the VM. See Creating a Windows VM on AHV after Migration.
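
As an alternative to the Prism Image Service UI in step 2, the import can be scripted with aCLI from any Controller VM. A minimal sketch, assuming placeholder names (migrated-disk0, default-container, and the source NFS URL):

nutanix@cvm$ acli image.create migrated-disk0 \
  source_url=nfs://source_host/datastore/vm_name/vm_name-flat.vmdk \
  container=default-container image_type=kDiskImage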

Scenario 2: The Source Virtual Disk Files Are Not Accessible to the AHV Cluster

  1. On the target AHV cluster, provide access to the source hypervisor hosts by adding the host IP addresses to the AHV cluster’s filesystem whitelist. See Configuring a Filesystem Whitelist. Whitelisting a source hypervisor host’s IP address enables the host to mount the target cluster’s container as a temporary NFS datastore or SMB share.
  2. Mount the target AHV cluster’s container on the source hypervisor host as an NFS datastore or SMB share (an example follows this list). See Mounting an AHV Container on a Source Hypervisor Host.
  3. From the source hypervisor host, copy the virtual disk files from their original location to the temporary NFS datastore or SMB share mounted from the target AHV cluster. See Migrating VM Disks to AHV Storage.
  4. In the Prism web console, use Image Service to convert the virtual disk files to the raw format that AHV can use. See Configuring Images.
  5. Use the Prism web console to create a VM from the imported image. Use the Clone from Image Service option. Also assign a vNIC to the VM. See Creating a Windows VM on AHV after Migration.
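
On an ESXi source host, the container can be mounted from the host shell with esxcli. A minimal sketch; cvm_ip, container_name, and AHV-temp are placeholders for a Controller VM IP address, the target container name, and a datastore label of your choice:

~ # esxcli storage nfs add -H cvm_ip -s /container_name -v AHV-temp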

Scenario 3: The Source Virtual Disk Files Have Been Exported

  1. Use an SFTP client to connect to the target AHV cluster. Connect to port 2222 on any Controller VM IP address and log in as the Prism admin user (a connection example follows this list).
  2. Optionally, create a subdirectory in the target AHV container to use as a staging area for the virtual disk files. Make sure that the container you use is the container in which the virtual disk files will eventually reside.
  3. By using the SFTP client, copy the virtual disk files (*-flat.vmdk files) to the target AHV container.
  4. In the Prism web console, use Image Service to convert the virtual disk files to the raw format that AHV can use. See Configuring Images.
  5. Use the Prism web console to create a VM from the imported image. Use the Clone from Image Service option. Also assign a vNIC to the VM. See Creating a Windows VM on AHV after Migration.
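
A minimal sketch of steps 1 through 3 with a command-line SFTP client; cvm_ip, the staging subdirectory, and the file name are placeholders:

$ sftp -P 2222 admin@cvm_ip
sftp> mkdir /container_name/staging
sftp> put win7-vm-flat.vmdk /container_name/staging/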

Creating a Windows VM on AHV after Migration

See Use the Image Service to Deploy a VM.

Create a disk from the disk image by clicking Add New Disk and completing the indicated fields.

  1. TYPE: DISK
  2. OPERATION: CLONE FROM IMAGE
  3. BUS TYPE: SCSI
  4. CLONE FROM IMAGE SERVICE: Select the image you created previously from the drop-down menu.
  5. Click Add to add the disk drive.

The Path field is displayed when Clone from ADSF file is selected from the Operation field. For example, you can specify the image path to copy as nfs://127.0.0.1/container_name/vm_name/vm_name.vmdk or nfs://127.0.0.1/container_name/vm_name/vm_name-flat.vmdk.
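
If you prefer the command line, the same VM can also be assembled with aCLI. A minimal sketch, assuming a hypothetical VM name (win-vm), an image imported earlier (win-vm-image), and an existing network (vlan0):

nutanix@cvm$ acli vm.create win-vm num_vcpus=2 memory=4G
nutanix@cvm$ acli vm.disk_create win-vm bus=scsi clone_from_image=win-vm-image
nutanix@cvm$ acli vm.nic_create win-vm network=vlan0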

Post Migration Tasks, Windows

Complete the post-migration tasks in the Prism web console.

  1. After the VM is created, the Received operation to create VM dialog box appears. Click View the task details and then select the VM. The Summary line (middle of screen) displays the VM name with a set of relevant action links on the right.
  2. (For Generation 2 VMs migrated from a Hyper-V host) Before you power on a UEFI guest VM, configure the VM with the aCLI option uefi_boot=True. For example:

    acli vm.update vm_id uefi_boot=True

    • Note: Only Generation 2 (Gen 2) Windows VMs that use UEFI to start (boot) are supported for migration. However, support is limited.
  3. Click Power on to start the Windows VM.
  4. After the VM is started, the Received operation to power on the VM dialog box appears. Click View the task details and then select the VM. The Summary line (middle of screen) displays the VM name with a set of relevant action links on the right. Click Launch Console to log on to the VM through the console.
  5. Configure an IP address for the VM. Follow the prompts in the console to configure an IP address.
  6. (For VMs migrated from ESXi) Open the Control Panel in the Windows VM and remove VMware Tools and other VMware-related software.
  7. Restart the VM.

Linux VM Migration Prerequisites

  1. Prepare the VM for migration.
    1. Install Nutanix VM Mobility by enabling and mounting the Nutanix Guest Tools on the Linux VM. See Nutanix Guest Tools in the Prism Web Console Guide.
    2. Check that the VirtIO drivers are installed. See Checking VirtIO Module Status.
  2. In the Prism web console, add the source hypervisor host IP address to the target AHV cluster’s filesystem whitelist. See Configuring a Filesystem Whitelist.
  3. Mount the Acropolis storage container on the source host as a temporary NFS datastore or SMB share, and then use vSphere Storage vMotion to migrate the VM disk image to it.
  4. Create a VM and attach the imported disk image.
  5. Power on the VM and log in to the VM’s console. Optionally, you can uninstall VMware Tools, if installed.

Linux VM Migration Requirements

  • The SUSE or Ubuntu Linux kernel must include the appropriate VirtIO drivers to migrate Linux servers.
  • Ensure that the VirtIO modules are loaded on the VM to be migrated.
  • The vSphere files must reside on an Acropolis storage container mounted as a temporary NFS datastore or SMB share.

Minimum AOS version: AOS 4.6.1.1
Minimum AHV version: AHV-20160217.2
Minimum vSphere version: vSphere 5.0 U2
Minimum Ubuntu/SUSE version: For the SCSI bus: Ubuntu 12.04.3 and later, Ubuntu 14.04.x, and SUSE 2.6.1 and later. For the PCI bus: Ubuntu 12.04.2 and earlier.
Connectivity type between clusters: AHV network connected; Acropolis storage container mounted as a temporary NFS datastore or SMB share

Checking VirtIO Module Status

Verify that your kernel has the required VirtIO modules built in. For example, on a SUSE guest (replace 3.0.101-63-default with your kernel version, as reported by uname -r):

$ grep -i virtio /boot/config-3.0.101-63-default

Check the output to verify that the drivers are present.

CONFIG_NET_9P_VIRTIO=m
CONFIG_VIRTIO_BLK=m
CONFIG_SCSI_VIRTIO=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_VIRTIO=m
# Virtio drivers
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
# CONFIG_VIRTIO_MMIO is not set

  • For CONFIG_VIRTIO_PCI and CONFIG_SCSI_VIRTIO, the =m output means that the corresponding VirtIO driver is built as a loadable kernel module; =y would mean that the driver is built directly into the kernel.
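
When a driver is built as a module, you can also confirm that it is loaded on the running guest before migration. A quick check with standard Linux tools:

$ lsmod | grep virtio
$ sudo modprobe virtio_scsi   # load the SCSI module if it is not listed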

Prerequisites for Migrating Ubuntu

Check the Ubuntu version and confirm the installed VirtIO drivers on the Ubuntu VM.

  1. On vSphere, log in to the Ubuntu VM and open a terminal window.
  2. Verify that the Ubuntu version is 12.04 or later.

    $ cat /etc/lsb-release

    The output might look similar to the following.

    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=name
    DISTRIB_DESCRIPTION="Ubuntu 14.04 LTS"

  3. Check that the VirtIO drivers are installed.

    $ grep -i virtio /boot/config-`uname -r`

    Check the output to verify that the drivers are installed.
    • For CONFIG_VIRTIO_PCI, the =y output means the VirtIO PCI driver is built directly into the kernel.
    • For CONFIG_SCSI_VIRTIO, the =m output means the VirtIO SCSI driver is built as a loadable kernel module.
  4. Confirm that the virtio_scsi module is built into the initramfs image (if it is not listed, see the sketch following these steps).
    1. Copy the initramfs image to a temporary location.

      nutanix@ubuntu12045:~$ cp -p /boot/initrd.img-`uname -r` /tmp/initrd.img-`uname -r`.gz

    2. Check that the VirtIO SCSI module is built in.

      nutanix@ubuntu12045:~$ zcat /tmp/initrd.img-`uname -r`.gz | cpio -it | grep virtio
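
If the VirtIO modules do not appear in the listing, they can be added and the initramfs regenerated. A minimal sketch using Ubuntu's standard initramfs-tools mechanism:

$ echo virtio_scsi | sudo tee -a /etc/initramfs-tools/modules
$ sudo update-initramfs -u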

Migrating VM Disks to AHV Storage

Migrate the virtual disk files from the source hypervisor to the temporarily mounted container. It is recommended that you create one or more subdirectories in the container on the target AHV cluster, because copying the virtual disk files to subdirectories helps keep migrated files organized. The copied files are required only until Image Service converts them, and grouping them makes them easy to identify and delete when they are no longer needed.

  • If the source hypervisor is ESXi, do one of the following:
    • Use Storage vMotion to move the virtual disk files of running VMs from their original location to the temporarily mounted NFS datastore on the target AHV cluster. When specifying a datastore for Storage vMotion, you can enter the file path nfs://127.0.0.1/container_name/vm_name/vm_name.vmdk.
      • Replace container_name with the name of the storage container where the image is placed and replace vm_name with the name of the VM where the image is placed.
      • When Storage vMotion is complete, shut down the source VM, and then use Image Service to convert the virtual disk files to the raw format that AHV can use.
    • Use vmkfstools to copy the virtual disk files from the datastore on the source hypervisor host. The following commands create a subdirectory in the container on the target AHV cluster and then copy the virtual disk files to the subdirectory.

      ~ # mkdir /vmfs/volumes/container_name/subdirectory
      ~ # vmkfstools -i /vmfs/volumes/original_datastore/win7-vm/virt_disk_file.vmdk /vmfs/volumes/container_name/subdirectory/win7-vm.vmdk

    • Replace container_name with the name of the container on the target AHV cluster, subdirectory with a name for the subdirectory, and virt_disk_file with the name of the virtual disk file.

Creating a Linux VM on AHV after Migration

See Use the Image Service to Deploy a VM.

Create a disk from the disk image by clicking the + New Disk button and completing the indicated fields (a command-line alternative follows this list).

  1. TYPE: DISK
  2. OPERATION:
    • For Ubuntu VMs, select CLONE FROM ADSF FILE
    • For SUSE VMs, select CLONE FROM NDFS FILE
  3. BUS TYPE: SCSI
  4. PATH: Type a forward slash (/) and choose the path to the flat virtual disk file from the drop-down list: /container_name/vm_name/flat_vmdk_file.
  5. Replace container_name with the name of the storage container, vm_name with the name of the VM you migrated, and flat_vmdk_file with the name of the flat virtual disk file. For example, a file path might look similar to /default-container-32395/Ubuntu12345VMW/Ubuntu12345VMW-flat.vmdk.
  6. Click Add to add the disk drive.
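
The same disk can instead be attached with aCLI. A sketch, assuming a hypothetical VM named linux-vm and the example path above; verify the parameter name against the acli help for your AOS version:

nutanix@cvm$ acli vm.disk_create linux-vm bus=scsi \
  clone_from_adsf_file=/default-container-32395/Ubuntu12345VMW/Ubuntu12345VMW-flat.vmdk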

Post Migration Tasks, Linux

  1. Log in to the Prism Web Console and navigate to Home > VM. Choose the Table view.
  2. Select the VM you created in the table and click Power on from the action links.
  3. After the VM powers on, click Launch Console and log into the VM. This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or window. This option is available only when the VM is powered on.
  4. If it is installed, remove VMware Tools from the VM.
    • From a tar installation:

      $ sudo /usr/bin/vmware-uninstall-tools.pl

    • From an rpm installation:

      $ sudo rpm -e VMwareTools
