VM Recovery


Remote ESXi recovery using iSCSI and RDM mapped disk

Most of the real-life data recovery cases we encounter are handled on running production servers that cannot be shut down for recovery. Sometimes the servers are rented from service providers and hosted remotely, with no physical access at all. And VMware administrators are used to managing servers remotely anyway; no one wants to mess with screws to recover lost data.

An ESXi server offers a variety of methods for accessing datastores at the file level. These methods are efficient and deliver high transfer speeds, but none of them provides the low-level disk access required for data recovery.

With all of this in mind, we added SSH support to VMFS Recovery a couple of years ago; its speed, however, is frustratingly low. On test machines we get only about 30% of the available network bandwidth, and even less in real-life cases.

Our research into speeding up network data recovery led us to iSCSI. We managed to adapt it to our needs, and it is already at least twice as fast as SSH. Our developers and testers are still working out the optimal settings, so we expect even more gains from iSCSI.

The drawback is that we have not yet managed to set up a native iSCSI target on ESXi itself, so we have to use a Linux VM plus RDM mapping to access the datastore from within the VM.

We used Ubuntu, but any other Linux distribution should work. We also used Midnight Commander as a file editor, but of course you can use any other text editor you like.

OK, here is a step-by-step guide to gaining low-level disk access over the network:


1. Set up an iSCSI target on the Linux VM


  • a. Create a new VM and install Linux. We used Ubuntu mini, but any other Linux distribution should work too

  • b. Install an iSCSI server (target) on this Linux system

    >apt-get install iscsitarget iscsitarget-dkms


  • c. We used Midnight Commander to browse and edit files. You can use any other software you prefer.

    >apt-get install mc


  • d. Edit the configuration file /etc/default/iscsitarget and change the ISCSITARGET_ENABLE parameter to true (note the lowercase value; the init script's check is case-sensitive)

    ISCSITARGET_ENABLE=true

  • e. Mount the VMDK, HDD, or datastore you need access to using RDM technology, following this guide.

    Note: a data recovery application needs access to the physical HDD, which is why the RDM mapping is required.
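    If you only need the gist of the RDM step, a physical-compatibility mapping can be created from the ESXi shell with vmkfstools; the device identifier and datastore path below are placeholders, not values from our setup:

```shell
# Run in the ESXi shell: list the physical disks visible to the host
ls -l /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer file for the source disk
# (replace the naa.* identifier and the datastore path with your own)
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c3a1f0123456789abcdef \
    /vmfs/volumes/datastore1/recovery/rdm.vmdk
```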

  • f. Edit the configuration file: /etc/iet/ietd.conf

    Target iqn.2001-04.com.example:storage.lun1
        # CHAP users
        IncomingUser user pass1234567890
        Lun 0 Path=/dev/sdb,Type=fileio
        Alias LUN1

    Note: the password must be at least 12 characters long. /dev/sdb is the HDD mounted via RDM in the previous step.
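    Before pointing ietd.conf at a device, it is worth confirming which block device the RDM disk shows up as inside the Linux VM (its size should match the source disk); /dev/sdb here is just the value from our setup:

```shell
# List block devices and their sizes to identify the RDM-mapped disk
lsblk
# Or inspect the suspected device in more detail
fdisk -l /dev/sdb
```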

  • g. Start the iSCSI server by executing

    >/etc/init.d/iscsitarget start

    You may also find the following commands useful:

    >/etc/init.d/iscsitarget status
    >/etc/init.d/iscsitarget restart
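    Assuming the iscsitarget package in use is the iSCSI Enterprise Target (IET), the kernel module also exposes its state under /proc/net/iet, which is a quick way to confirm the LUN is actually exported:

```shell
# Exported LUNs (should list /dev/sdb under your target IQN)
cat /proc/net/iet/volume
# Initiator sessions currently logged in (empty until a client connects)
cat /proc/net/iet/session
```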


2. Connect the *rdm.vmdk to the VM


  • a. Run the vSphere Client and open the Properties of the new Linux VM

  • b. Add -> Hard Disk -> Use an existing virtual disk

  • c. Select the VMDK that is mapped via RDM to the ESXi HDD

3. Configure the client side


  • a. Launch the iSCSI Initiator on the Windows machine running VMFS Recovery

  • b. On the Targets tab, enter the IP address of the Linux VM running the iSCSI target into the Quick Connect -> Target field

  • c. Click “Connect”

  • d. In the new window, click Advanced

  • e. Check “CHAP” option

  • f. Enter the user and secret values; these must match the ones you entered in the configuration file in step 1.f. In this example they were user / pass1234567890

  • g. Once the new disk appears in Windows, run VMFS Recovery and scan it like a standard local disk
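If you prefer to script the Windows side, the built-in iscsicli tool can perform the same discovery and login from cmd.exe; the IP address below is a placeholder, and the IQN is the example target from step 1.f (for CHAP credentials, use the GUI or the full LoginTarget syntax):

```shell
:: Register the target portal (IP address of the Linux VM)
iscsicli QAddTargetPortal 192.168.1.10
:: List the IQNs discovered on that portal
iscsicli ListTargets
:: Quick login to the example target
iscsicli QLoginTarget iqn.2001-04.com.example:storage.lun1
```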

The configuration above gave us 7.5 MB/s on a 100 Mbit network, while SSH managed only 3.5 MB/s. We are still testing whether ESXi can act as an iSCSI target itself, without the Linux VM + RDM; if it can, we will update this guide.
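Reading those figures as megabytes per second, the comparison can also be put in terms of link utilization (a 100 Mbit/s link carries at most about 12.5 MB/s before protocol overhead):

```shell
# 100 Mbit/s = 12.5 MB/s theoretical maximum
awk 'BEGIN { printf "iSCSI: %.0f%% of link\n", 7.5 / 12.5 * 100 }'
awk 'BEGIN { printf "SSH:   %.0f%% of link\n",  3.5 / 12.5 * 100 }'
```

So iSCSI reaches roughly 60% of the link, versus under 30% for SSH, consistent with the bandwidth observation earlier in this article.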


Additional information:

Remote ESXi recovery via SSH

Mapping a VMFS disk to Guest OS as RDM disk

