Remote ESXi recovery using iSCSI and an RDM-mapped disk
Most of the real-life data recovery cases we encounter are processed on live production servers that cannot be shut down for recovery. Sometimes the servers are rented from service providers and hosted remotely, with no physical access at all. And VMware administrators are used to managing servers remotely anyway; nobody wants to mess with screws to recover lost data.
An ESXi server offers a variety of methods to access datastores at the file level. These methods are efficient and deliver high transfer speeds. However, none of them offers the low-level disk access required for data recovery.
Considering all of the above, we added SSH support to VMFS Recovery™ a couple of years ago; however, its speed is frustratingly low. On test machines we get only about 30% of the available network bandwidth, and even less in real-life cases.
Our research into faster network data recovery brought us to iSCSI technology. We managed to adapt it to our needs, and it turns out to be at least twice as fast as SSH. Our developers and testers are still working out the optimal settings, so we expect even more gains from iSCSI.
The drawback is that we have not yet managed to set up a native iSCSI target on ESXi itself, so we have to use a Linux VM plus RDM mapping to access the datastore from inside the VM.
We used Ubuntu, but any other Linux distribution should work too. We also used Midnight Commander as a file editor, but of course you can use any other text editor you like.
OK, here is a step-by-step guide to gaining low-level disk access over the network:
1. Set up iSCSI target on Linux VM
- a. Create a new VM and install Linux. We used Ubuntu mini, but any other Linux distribution should work too
- b. Install iSCSI server(target) on this Linux system
>apt-get install iscsitarget iscsitarget-dkms
- c. We’ve used Midnight Commander to browse and edit files. You can use any other software you prefer.
>apt-get install mc
- d. Edit the configuration file /etc/default/iscsitarget
Change the ISCSITARGET_ENABLE parameter to TRUE:
ISCSITARGET_ENABLE=TRUE
- e. Mount the VMDK, HDD, or datastore you need access to using RDM technology, following this guide.
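If you prefer the command line, a physical-mode RDM pointer can also be created on the ESXi host with vmkfstools; the device name and datastore paths below are placeholders for your environment:

```shell
# List the physical disks visible to the ESXi host (run in an ESXi SSH session)
ls -l /vmfs/devices/disks/

# Create a physical-compatibility (pass-through) RDM pointer for the chosen disk.
# -z = physical mode, which preserves low-level access to the device.
# naa.xxxxxxxx and the datastore/VM paths are placeholders for your setup.
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxx \
    /vmfs/volumes/datastore1/recovery-vm/rdm.vmdk
```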
Note: you need access to the physical HDD for any data recovery tool to work.
- f. Edit configuration file: /etc/iet/ietd.conf
Target iqn.2001-04.com.example:storage.lun1
        IncomingUser user pass1234567890
        Lun 0 Path=/dev/sdb,Type=fileio
        Alias LUN1
Note: the CHAP password (IncomingUser) should be at least 12 characters. /dev/sdb is the HDD mounted via RDM in the previous step.
- g. Start the iSCSI server by executing the command
>/etc/init.d/iscsitarget start
You may also find the following commands useful:
>/etc/init.d/iscsitarget status
>/etc/init.d/iscsitarget restart
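To confirm the target actually exported the LUN, the iscsitarget (IET) package exposes its state under /proc/net/iet; the paths below assume that package:

```shell
# Exported LUNs and their backing devices
cat /proc/net/iet/volume

# Active initiator sessions (empty until a client connects)
cat /proc/net/iet/session

# The target should be listening on the standard iSCSI port 3260
netstat -an | grep 3260
```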
2. Connect *rdm.vmdk to the VM
- a. Run the vSphere Client and open the Properties of the new Linux VM
- b. Add Hard Disk -> Use an existing virtual disk
- c. Select the VMDK that is mapped via RDM to the ESXi HDD
3. Configure the client side
- a. Launch iSCSI Initiator
- b. On the Targets tab, enter the IP address of the Linux VM running the iSCSI target into the Quick Connect -> Target field
- c. Click “Connect”
- d. Click “Advanced” in the new window
- e. Check “CHAP” option
- f. Enter the user and secret values; these must match the ones you entered in the configuration file in step 1-f. In this example: user / pass1234567890
- g. After the new disk appears in Windows, run VMFS Recovery™ and scan it like a standard local disk
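The same connection can also be scripted from an elevated Windows command prompt with iscsicli instead of the GUI; the IP address below is a placeholder, and the IQN and credentials are the example values from step 1:

```shell
rem Register the Linux VM as a target portal (replace with your VM's IP)
iscsicli QAddTargetPortal 192.168.1.50

rem List discovered targets; the IQN from ietd.conf should appear
iscsicli ListTargets

rem Log in with the CHAP credentials from step 1-f
iscsicli QLoginTarget iqn.2001-04.com.example:storage.lun1 user pass1234567890
```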
The configuration above yields about 7.5 MB/s on a 100 Mbit network, where SSH manages only 3.5 MB/s. We are still testing whether ESXi can act as an iSCSI target by itself, without the Linux VM + RDM layer; if it can, we will update this guide.
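As a sanity check on those figures, converting megabytes per second into link utilization (assuming 1 MB = 10^6 bytes and ignoring protocol overhead):

```shell
# 7.5 MB/s * 8 = 60 Mbit/s -> 60% of a 100 Mbit link (iSCSI)
# 3.5 MB/s * 8 = 28 Mbit/s -> 28% of the link (SSH), matching the ~30% quoted above
awk 'BEGIN { printf "iSCSI: %.0f%%  SSH: %.0f%%\n", 7.5*8, 3.5*8 }'
```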