A Linux Physical2Virtual how-to guide for HP ProLiant servers
Contents

Executive summary
Why virtualize
Virtualization overview
Profile of a system to be virtualized
Choosing a migration method
Utilizing the RHEL3 distribution
    Preparing the disk image file
    Transferring the physical server files to the disk image
        Rsync utility
        Netcat transfers
    Customize the virtual machine environment
    Configuring the boot loader for the new virtualized instance
    Clean up build environment
    Boot the virtual machine
    Additional maintenance of the VM environment
    Disable unnecessary services
    HP Insight Control Management Software
    Improving performance
Without RHEL3 distribution
    Disk image creation
    Partitioning the disk image file
    Transferring the physical server files to the disk image
    Adding a bootblock to the disk image file
    Create the VM domain file
    Completing the migration
Appendix A – SLES10 hosting of VM
Appendix B – Repairing the bootblock
Appendix C – Consolidated list of commands
For more information

Executive summary

Today's IT infrastructure is more dynamic than ever before. IT administrators are being asked to manage their environments effectively and efficiently with fewer resources and less money, and the desire to "go green" is driving IT toward new, more efficient environments. Virtualization technology gives IT administrators the choice and flexibility to meet these demands. HP has developed energy-efficient solutions encompassing power supplies, processors, server cooling, thermal logic, and datacenter smart cooling to meet your business needs. For solution details and white papers, visit www.hp.com/go/ProLiant-Energy-Efficient.

As Red Hat's lifecycle for Red Hat Enterprise Linux (RHEL) 3 enters Phase 3 support, customers face the dilemma of whether to migrate or virtualize. During the Production 3 lifecycle phase, Red Hat provides no new functionality and no new hardware enablement, no updated installation images are planned for release, and no minor releases are planned. For more information on the Red Hat lifecycle, contact your Red Hat representative or refer to Red Hat's lifecycle guideline at www.redhat.com/security/updates/errata/.

This how-to guide provides insight and a scenario for taking a physical Linux system environment and virtualizing it as a virtual machine on another host, with detailed instructions for hosting a Red Hat Enterprise Linux release 3 system on a Red Hat Enterprise Linux 5 host.
Appendix A provides instructions for a Novell SUSE Linux Enterprise Server 10 (SLES10) host.

Target audience: This document is intended for experienced Red Hat or Linux system administrators with an extensive depth of knowledge in system configuration, file systems, and application configuration requirements.

Why virtualize

In the lifecycle of a datacenter, new computer systems are acquired while older hardware continues to run in status quo mode to support production applications. However, the cost of maintaining a large datacenter continues to rise. A maintenance contract for hardware the vendor has deemed end-of-life can be exorbitant, and replacement parts may be non-existent. The environmental costs of cooling, power, and physical space often need to be reduced to control operational expenses. The vendor's lifecycle for a particular operating system version on a particular hardware platform is sometimes reduced to a critical-fix-only maintenance phase, or the version may reach end of life altogether. In reality, today's server technology is underutilized in terms of capacity (CPU, memory, storage, and networking), as datacenters tend to over-provision and dedicate a server to a particular project or function.
Virtualizing a physical server into a virtual machine (VM) on another host can deliver efficiencies in many of the areas mentioned above and can also:

• Allow an application to function more efficiently on updated hardware without the concern of driver support for new hardware in an older operating system version
• Allow an application to continue running on its current operating system version while projects are underway to port and/or migrate to the latest release, perhaps avoiding extended downtime and the issues associated with porting and/or migration
• Reduce operational expenses through more efficient hardware utilization and lower environmental overhead
• Provide additional test environments for resolving application or operating system upgrade issues
• Improve uptime availability by migrating the virtual machine to another physical server when hardware maintenance procedures are necessary
• Simplify replication of production environments without time-consuming installation and configuration steps
• Improve return on investment (ROI) for both hardware and software support contracts

Performance of an application in a virtual machine is very application dependent. If the memory available to the application needs to be increased, the adjustment can be made with a simple command line utility, provided the host system has free memory available to allocate to the virtual machine. However, intensive network and disk workloads challenge a virtual machine configured in a fully-virtualized (FV) environment, because every interaction with the hardware is emulated.

Virtualization overview

Xen is an open-source project that provides a server environment for hosting virtual machines, and it is currently included in the RHEL and SLES distributions. The hypervisor, or virtual machine monitor (VMM), is the software layer that is initially loaded to provide the virtual machine (VM) server functionality.
The VMM runs between the server hardware and the Linux operating system and is loaded first at boot. Once the VMM has loaded, the Xen VM server is loaded to create and control the other VMs and to communicate with the server hardware. This Xen VM server is referred to as Dom0 or domain0 and runs in privileged mode. A VM, also referred to as a guest domain (DomU), is an isolated environment running an operating system and applications. This guest domain runs unprivileged. A guest domain may or may not know it is running in a VM, depending on whether it is a para-virtualized or fully-virtualized VM.

A para-virtualized (PV) VM means that the virtual machine monitor (VMM) provides APIs to assist in accessing the hardware, and the guest operating system has been modified to know it is running in a VM. The VMM emulates the underlying hardware by presenting virtual devices to the guest operating system. A fully-virtualized VM requires no modifications to the guest operating system; the CPU traps all privileged instructions and sends them to the VMM to emulate. For fully-virtualized VMs the physical server must have processors that support virtualization technology. HP offers virtualization-technology-enabled hardware with both Intel® Xeon® and AMD Opteron™ processors. Fully-virtualized VMs tend to perform slower than para-virtualized VMs.

The operating system distribution vendors now provide PV drivers for network and disk functions while operating in an FV virtual machine instance. These virtualization-aware PV drivers improve performance for applications operating within the virtual machine. However, the disk that contains the master boot record (MBR) and the kernel and initrd images (the /boot directory) cannot use PV block device drivers due to an issue with the bootloader. Consult the Red Hat Virtualization Guide documentation for additional configuration information, support, and restrictions on PV drivers.
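As noted above, fully-virtualized guests require processor virtualization extensions. A quick way to check a candidate host is to look for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. The sketch below wraps the check in a small helper; `hv_support` is an illustrative name for this example, not a standard utility:

```shell
# hv_support: inspect cpuinfo text for hardware virtualization
# flags (vmx = Intel VT-x, svm = AMD-V). Xen needs one of these
# to run fully-virtualized (HVM) guests; without them, only
# para-virtualized guests are possible on the host.
hv_support() {
    if echo "$1" | grep -Eq '(vmx|svm)'; then
        echo "FV capable"
    else
        echo "PV only"
    fi
}

# On a real host, feed it the live cpuinfo:
#   hv_support "$(cat /proc/cpuinfo)"
```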
RHEL3 virtual machines need new devices created in /dev. The operating system distribution vendors also provide performance information for virtual machines compared with bare-metal physical server installations.

Profile of a system to be virtualized

In this scenario, the machine to be virtualized is an HP ProLiant DL360 G1 server with 1 GB of memory running Red Hat Enterprise Linux AS release 3 update 9 (RHEL3). This system will be fully virtualized and hosted on a Red Hat Enterprise Linux 5.2 (RHEL5) system running on an HP ProLiant BL460c server with 12 GB of memory.

On the RHEL3 system, the current amount of disk space in use and the partitioning of the devices, shown below, establish the base environment for our virtual machine. Additional disk image files, simulating additional disk spindles, can be added to the configuration when necessary.

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p2      32G  1.6G   29G   6% /
/dev/cciss/c0d0p1      97M   27M   65M  30% /boot

# fdisk -l /dev/cciss/c0d0

Disk /dev/cciss/c0d0: 36.4 GB, 36414750720 bytes
255 heads, 32 sectors/track, 8716 cylinders
Units = cylinders of 8160 * 512 = 4177920 bytes

           Device Boot      Start         End      Blocks   Id  System
/dev/cciss/c0d0p1   *           1          25      101984   83  Linux
/dev/cciss/c0d0p2              26        8215    33415200   83  Linux
/dev/cciss/c0d0p3            8216        8716     2044080   82  Linux swap

Note
With sufficient storage space available on the BL460c RHEL5 host system, the entire 36GB DL360 RHEL3 storage device /dev/cciss/c0d0 can be virtualized if required for the virtualization project. Some documented processes or live-CD environments use the "dd" utility to create a bit-for-bit copy of the DL360 RHEL3 storage device, which saves the process steps of creating partitions, transferring files, and installing the bootblock, but allocates the full 36GB of storage on the BL460c RHEL5 host.
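The sizing trade-off in the note above can be quantified: the image only needs to cover the space actually in use plus some headroom. The sketch below is an illustration, not an HP rule; the `suggest_image_gb` helper name and its 4 GB allowance for growth and swap are assumptions chosen for this example:

```shell
# suggest_image_gb: given the kilobytes in use on the source
# system (e.g. summed from the Used column of `df -P`), suggest a
# disk image size in whole GB, adding ~3 GB of growth headroom
# plus ~1 GB for swap.
suggest_image_gb() {
    used_kb=$1
    # Round up to whole GB (1 GB = 1048576 KB), then add headroom.
    gb_used=$(( (used_kb + 1048575) / 1048576 ))
    echo $(( gb_used + 4 ))
}

# DL360 example: roughly 1.6 GB on / plus 27 MB on /boot,
# about 1705000 KB in use:
suggest_image_gb 1705000    # prints 6
```

This lands on the same 6GB figure chosen for the scenario below.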
However, the direction of this document is to provide insight into a process that can be scripted and that efficiently utilizes resources on the BL460c RHEL5 host system by reducing the 36GB storage device down to what is effectively in use. This scenario uses a modest storage device capacity of 6GB, since the DL360 RHEL3 system storage (36GB) is not at capacity.

Choosing a migration method

Migration of the DL360 RHEL3 system can be accomplished by a variety of methods, based on the resources available. A fairly streamlined approach utilizes the RHEL3 distribution media to install a base RHEL3 system in the VM and then replaces the data within the disk image file with the files from the reference DL360 RHEL3 system being virtualized. This method reduces the technical complexity of configuring the VM environment. Completing a base system installation in the guest also ensures the disk partitions and master boot record bootblock are properly sized, configured, and populated by the operating system distribution. See the section entitled Utilizing the RHEL3 distribution.

An alternative method is to methodically create a disk image for the VM using system utilities that perform tasks normally executed by the operating system installation environment (for example, partitioning the disk file and embedding a bootable master boot record). The technical complexity of this method is much greater, since the command utilities in the RHEL5 hosting environment are often enhanced versions with options not supported in a RHEL3 environment. See the section entitled Without RHEL3 distribution.

Utilizing the RHEL3 distribution

On the BL460c RHEL5 host system, use the command line utility virt-install to automatically create the appropriate domain definition file, initialize the disk image file, and install a base RHEL3 environment.
The utility creates a virtual machine with the implied task of performing a full installation using the distribution noted in the location option. Using the host system virtualization utilities ensures better compatibility for the virtual machine in the host environment. View the man page for more information on virt-install. The IP address 192.168.10.90 is the network NFS server exporting the operating system distributions. The RAM size of 2GB is recommended by Red Hat in the virtualization guide for RHEL 5.2.

# virt-install --name rhel3p2v --hvm --ram 2048 \
  --file /var/lib/xen/images/rhel3p2v.img --file-size 6 \
  --os-variant=rhel3 --os-type=linux --vnc --noautoconsole \
  --location=nfs:192.168.10.90:/kits/rhel3

Starting install...
Creating storage file...  100% |=========================| 6.0 GB    00:00
Creating domain...                                           0 B     00:00
Domain installation still in progress. You can reconnect to the console
to complete the installation process.

Complete a base RHEL3 system installation with the appropriate disk partitioning for the profiled DL360 RHEL3 system. This scenario will use one swap partition and one additional partition for the "/" partition. At the completion of the installation it is not necessary to boot the base RHEL3 environment. Shut down the virtual machine instance using the xm command utility.

# xm list
Name                         ID Mem(MiB) VCPUs State   Time(s)
Domain-0                      0    14448     8 r-----    122.3
rhel3p2v                      1     1031     1 r-----     33.8
# xm shutdown rhel3p2v
# xm list
Name                         ID Mem(MiB) VCPUs State   Time(s)
Domain-0                      0    14448     8 r-----    123.4

View the virtual machine domain file located in the default location, the /etc/xen directory, and take note of the device mnemonic for the boot device located in the disk directive.
# cat /etc/xen/rhel3p2v
name = "rhel3p2v"
uuid = "b4aaa4e2-1c78-23bf-f5ac-58663673713a"
maxmem = 2048
memory = 2048
vcpus = 1
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "c"
pae = 1
acpi = 1
apic = 1
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
device_model = "/usr/lib/xen/bin/qemu-dm"
sdl = 0
vnc = 1
vncunused = 1
disk = [ "file:/var/lib/xen/images/rhel3p2v.img,hda,w", ",hdc:cdrom,r" ]
vif = [ "mac=00:16:3e:25:c9:1c,bridge=xenbr0" ]
serial = "pty"

Preparing the disk image file

Map the loopback device to the disk image to create partition maps and mount the filesystem(s) locally. Use the "losetup -f" command to print the name of the first unused loop device.

# losetup -f
/dev/loop0
# losetup /dev/loop0 /var/lib/xen/images/rhel3p2v.img

Use the kpartx utility to manage the partitions for device-mapper devices. Create device maps from the newly created partition tables for the disk image. For more information, see the kpartx man page.

# kpartx -av /dev/loop0
add map loop0p1 : 0 10474317 linear /dev/loop0 63
add map loop0p2 : 0 2104515 linear /dev/loop0 10474380

Mount the newly created filesystem(s) on a temporary mount point to facilitate copying the files from the physical server to be virtualized and customizing configuration files for the virtual environment.

# mkdir /mnt/rhel3p2v
# mount /dev/mapper/loop0p1 /mnt/rhel3p2v
# df /mnt/rhel3p2v
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/loop0p1    5154852    141440   4751556   3% /mnt/rhel3p2v

Transferring the physical server files to the disk image

The options available for transferring the DL360 RHEL3 server files to the disk image are quite numerous, and the choice is left to the system administrator. This document highlights two options that were tested in the development of this paper.
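Whichever transfer option is chosen, the first step is to preserve a handful of the freshly installed guest's configuration files before the transfer overwrites them. That step can be scripted; the sketch below uses this scenario's example paths, and the `save_ref_configs` helper and /root/p2v-ref destination are illustrative names, not part of the documented procedure:

```shell
# save_ref_configs: copy selected configuration files from a
# mounted guest image into a scratch directory for later
# reference. The file list follows this scenario's examples;
# adjust it for your own environment.
save_ref_configs() {
    src_root=$1
    dest=$2
    mkdir -p "$dest"
    for f in etc/modules.conf etc/X11/XF86Config \
             boot/grub/device.map etc/fstab; do
        if [ -f "$src_root/$f" ]; then
            cp -p "$src_root/$f" "$dest/"
        fi
    done
}

# Usage against the disk image mounted earlier in this scenario:
#   save_ref_configs /mnt/rhel3p2v /root/p2v-ref
```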
Before replacing the files in the disk image created by the base RHEL3 system install, review and/or save the information in the following files to a temporary area to assist in customizing the RHEL3 VM environment:

/etc/fstab
/etc/modules.conf
/boot/grub/device.map
/etc/X11/XF86Config

Rsync utility

The rsync utility allows the transfer of the information using various protocols and allows the command to be initiated on either system. The example below uses SSH to access the data on the DL360 RHEL3 host. You will need to ensure the Secure Shell sshd configuration (/etc/ssh/sshd_config) is configured with PermitRootLogin yes for rsync to work properly as in this example. The command options are chosen to ensure file protections and ownerships remain consistent. The rsync utility also allows recursive copies that bring over only the files that have changed; this makes it practical to set up a test machine initially and then update the virtual machine with all changes when the migration is finalized. For more information on rsync, view the man page.

Copy the DL360 RHEL3 source system files. Do not copy /mnt, /sys, /proc, or /tmp, as these are typically dynamic runtime directories that will be repopulated during boot of the virtual machine. Optionally, the /dev directory can also be excluded from the copy, since its contents primarily reflect the server hardware and will be repopulated during boot of the virtual machine. In this example 192.168.10.192 is the IP address of the DL360 RHEL3 system being virtualized. The rsync -x command option prevents the utility from crossing filesystem boundaries and therefore would not transfer the /boot partition of the DL360 RHEL3 system. The rsync utility is executed twice in this scenario to bring over the /boot partition that is separately mounted.
# rsync -avxH --numeric-ids --progress \
> --exclude '/proc' --exclude '/sys' --exclude '/mnt' --exclude '/tmp' \
> 192.168.10.192:/ /mnt/rhel3p2v/
root@192.168.10.192's password:
receiving file list ...
70641 files to consider
<snip>
sent 1089193 bytes  received 1580951628 bytes  8862973.79 bytes/sec
total size is 1576656284  speedup is 1.00

# rsync -avxH --numeric-ids --progress 192.168.10.192:/boot/ /mnt/rhel3p2v/boot/
root@192.168.10.192's password:
receiving file list ...
41 files to consider
boot/
boot/System.map -> System.map-2.4.21-50.ELsmp
boot/grub/
boot/grub/menu.lst -> ./grub.conf
boot/lost+found/
<snip>
sent 1033137 bytes  received 1471812733 bytes  10122652.03 bytes/sec
total size is 1600318021  speedup is 1.09

Netcat transfers

Alternatively, the combination of the tar and netcat utilities can be used to transfer the files to the new virtual machine disk. The netcat utility facilitates ad hoc connections between systems using the TCP and UDP protocols and hence must be initiated on both systems before the file transfer commences. For more information on the netcat utility, see the nc man page. In combination, the tar utility "packages" the data to be transferred with the appropriate file protections and ownerships and pipes the data package to the netcat process for transfer.

On the DL360 RHEL3 server (sending):

# cd /; tar -cf - bin home lib media sbin srv var boot etc initrd misc opt root selinux usr | nc -w 10 -l -p 5432

On the BL460c RHEL5 host server (receiving):

# cd /mnt/rhel3p2v; netcat -w 10 <physical host ip> 5432 | tar -xvf -

Customize the virtual machine environment

On the BL460c RHEL5 host, modify the RHEL3 system files as necessary for the new virtualized instance. Use the chroot utility to create a safer environment for editing the necessary files without affecting the BL460c RHEL5 Xen host system. Refer to the information saved from the initial RHEL3 base system install.
# chroot /mnt/rhel3p2v

Create the directories that were excluded during the transfer of files and set the sticky bit needed for /tmp. Additional directories may be needed depending on which source directories were excluded during the data transfer.

# mkdir /proc
# mkdir /sys
# mkdir /mnt
# mkdir /mnt/cdrom
# mkdir -m 1777 /tmp

Using the device mnemonic created in the Xen domain file for the disk directive, update /etc/fstab to reflect the proper swap partition. Since the DL360 RHEL3 physical server was using file system labels, it is not necessary to update the reference for the root partition. In this example /dev/cciss/c0d0p3 became /dev/hda2 for the swap partition.

# cat /etc/fstab
LABEL=/       /          ext3    defaults        1 1
none          /dev/pts   devpts  gid=5,mode=620  0 0
none          /proc      proc    defaults        0 0
none          /dev/shm   tmpfs   defaults        0 0
/dev/hda2     swap       swap    defaults        0 0

Note: You may wish to also edit the mount table file, /etc/mtab, to reflect the new boot device mnemonic; otherwise the output of commands such as df will list the original device mnemonic.

Edit, if necessary, the network startup scripts (/etc/sysconfig/network-scripts/ifcfg-eth*) to remove any dependency on a particular MAC address, or update the parameter to match the Xen MAC address defined by the vif directive in the Xen domain file.

Edit the loadable device module configuration file (/etc/modules.conf for RHEL3) to remove any references to scsi_hostadapter modules, since the fully virtualized guest will use IDE devices. Also remove any statements for USB controllers. Modify the device driver for the eth0 network device to be a RealTek 8139cp, since the FV VM emulates that network device.

# cat /etc/modules.conf
alias eth0 8139cp

Edit /boot/grub/device.map and set the correct hd information. In this example the specification for hd0 was modified from /dev/cciss/c0d0 to /dev/hda.
# this device map was generated by anaconda
(fd0)  /dev/fd0
(hd0)  /dev/hda

Consider editing the system inittab file (/etc/inittab) to change the default runlevel from level 5 to level 3 for the initial boot, since the video adapter will be changed to a Cirrus Logic GD 5446 and the X11 environment will need to be reconfigured using redhat-config-xfree86.

Configuring the boot loader for the new virtualized instance

If the DL360 RHEL3 system utilized GRUB (GRand Unified Bootloader), edit /boot/grub/menu.lst to reflect the proper partition and device information. In the following example, "root=/dev/hda1" and "boot=/dev/hda" were modified for consistency with the new boot device mnemonic, since our physical server utilized file system labels in the boot stanza. In this scenario, the DL360 RHEL3 system utilized a separate /boot partition to hold the kernel and initrd images. The VM is configured to utilize one root partition, which contains the /boot directory. The menu.lst file must be updated to reflect the explicit path for the kernel and initrd.

The menu.lst file on the DL360 RHEL3 system:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#   all kernel and initrd paths are relative to /boot/, eg.
#   root (hd0,0)
#   kernel /vmlinuz-version ro root=/dev/cciss/c0d0p2
#   initrd /initrd-version.img
#boot=/dev/cciss/c0d0
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Enterprise Linux AS (2.4.21-57.ELsmp)
        root (hd0,0)
        kernel /vmlinuz-2.4.21-57.ELsmp ro root=LABEL=/
        initrd /initrd-2.4.21-57.ELsmp.img

The menu.lst file for the RHEL3 VM, with updates reflecting the path for the kernel and initrd files:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You do not have a /boot partition. This means that
#   all kernel and initrd paths are relative to /, eg.
#   root (hd0,0)
#   kernel /boot/vmlinuz-version ro root=/dev/hda1
#   initrd /boot/initrd-version.img
#boot=/dev/hda
default=0
timeout=10
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
title Red Hat Enterprise Linux AS (2.4.21-57.ELsmp)
        root (hd0,0)
        kernel /boot/vmlinuz-2.4.21-57.ELsmp ro root=LABEL=/
        initrd /boot/initrd-2.4.21-57.ELsmp.img

If LILO was configured on the DL360 RHEL3 server, look at /etc/lilo.conf and edit the file as needed.

Exit the chroot environment to return to the BL460c RHEL5 host environment.

# exit

Clean up build environment

Instead of rebooting your BL460c RHEL5 host environment to clear out any configurations used to build the virtual disk image, execute the following commands:

# cd /
# sync
# umount /mnt/rhel3p2v
# rmdir /mnt/rhel3p2v
# rm /dev/loop
# kpartx -dv /dev/loop0
del devmap : loop0p1
del devmap : loop0p2
# losetup -d /dev/loop0
# losetup -d /dev/loop1

Boot the virtual machine

Since this scenario used the RHEL5 virt-install utility to create the virtual machine domain file, the setup and configuration of the RHEL3 system in the virtual environment is now complete, and the VM is ready to be booted and tested. This document assumes the administrator performing this task of building a virtualized system already has the skills to boot and monitor virtual machines on the RHEL5 host environment. See the Red Hat Virtualization Guide documentation for additional information.

# xm create rhel3p2v
# xm console rhel3p2v

Additional maintenance of the VM environment

During the initial boot of the VM, error messages will appear as the kernel attempts to load drivers (for example, cciss) specified in the initrd linuxrc script. Once the environment is functioning as expected, consider rebuilding the initrd file using the mkinitrd utility so that those drivers are no longer loaded, since the RHEL3 /etc/modules.conf file has been updated.
# cd /boot
# uname -r
# mv initrd-`uname -r`.img initrd-`uname -r`.img.save
# mkinitrd -v -f /boot/initrd-`uname -r`.img `uname -r`

Also during the initial boot of the VM, the hardware discovery utility, kudzu, will appear and provide an opportunity to update the hardware configuration for the VM. Accept the appropriate changes to match the VM environment.

Disable unnecessary services

Once the VM is booted and verified to be functioning as expected, review the system services that are no longer applicable for the environment. For example, the SMART disk monitoring service can be disabled or turned off within the VM, since the physical storage is monitored by the BL460c RHEL5 host.

# /sbin/service smartd stop
# /sbin/chkconfig smartd off

HP Insight Control Management Software

In a standard HP ProLiant environment, many servers have the Insight Control Management software installed on each physical server. The RHEL3 system is now virtualized on a RHEL5 host, and the BL460c RHEL5 host would typically have the Insight Control Management software installed at the physical server level. The health and wellness agents can therefore be removed from the RHEL3 virtualized environment.

Improving performance

This scenario created a fully virtualized RHEL3 system on a RHEL5 system environment. Fully virtualized systems tend to perform slower in a virtualized environment because all input/output (I/O) operations are emulated by the host hypervisor. The Linux operating system distribution vendors have created para-virtualized drivers suitable for installation in fully virtualized guests to improve I/O operations. Obtain and install the appropriate drivers from the distribution vendor.
For Red Hat, more information is available at:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Virtualization/chap-Virtualization-Introduction_to_Para_virtualized_Drivers.html

For SLES, more information is available at:
http://www.novell.com/documentation/sles10/xen_admin/index.html?page=/documentation/sles10/xen_admin/data/cha_xen_drivers.html

Without RHEL3 distribution

For the scenario when the RHEL3 distribution is not available to build a base RHEL3 VM, this section details the steps necessary to build the disk image file with the proper attributes and to embed a bootblock in the MBR. The technical complexity of this method is much greater, since the command utilities in the RHEL5 hosting environment are often enhanced versions with options not supported in a RHEL3 environment.

Disk image creation

Without using the operating system provided virtualization tools, the disk image file can be manually created for configuration and population of files, leaving the task of creating the domain file for a later time. Use the dd utility to create a 6GB sparse disk image file:

# dd if=/dev/zero of=rhel3p2v.img bs=1024k count=1 seek=6144

Partitioning the disk image file

Map the loopback device to the disk image to create partitions and mount the filesystem(s) locally. Use the "losetup -f" command to print the name of the first unused loop device.

# losetup -f
/dev/loop0
# losetup /dev/loop0 /var/lib/xen/images/rhel3p2v.img

Create the necessary partitions using the fdisk utility based on the system being virtualized. For this example, the /boot and / partitions from the DL360 RHEL3 physical server are being merged into one larger partition for the virtualized environment, and a single swap area is being created. Complete the partitioning task by setting the correct partition type on each partition (option t) and making the root partition bootable (option a). In this example p1 will be the root partition and p2 will be the swap partition.
# fdisk /dev/loop0
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/loop0: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-783, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-783, default 783): 652

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (653-783, default 653):
Using default value 653
Last cylinder or +size or +sizeM or +sizeK (653-783, default 783):
Using default value 783

Command (m for help): p

Disk /dev/loop0: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1               1         652     5237158   83  Linux
/dev/loop0p2             653         783     1052257+  83  Linux

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 82
Changed system type of partition 2 to 82 (Linux swap / Solaris)

Command (m for help): a
Partition number (1-4): 1

Command (m for help): p

Disk /dev/loop0: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1   *           1         652     5237158   83  Linux
/dev/loop0p2             653         783     1052257+  82  Linux swap / Solaris

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

Use the kpartx utility to manage partition creation for device-mapper devices. Create device maps from the newly created partition tables for the disk image. For more information, see the kpartx man page.

# kpartx -av /dev/loop0
add map loop0p1 : 0 10474317 linear /dev/loop0 63
add map loop0p2 : 0 2104515 linear /dev/loop0 10474380

Create the appropriate file systems on each partition using the newly created device maps; depending on the size of the file system, this step could take a few minutes. The feature option (-O) is necessary since the default feature set for the mke2fs utility on RHEL5 is incompatible with the version on RHEL3. In particular, a few RHEL5 features will prevent the RHEL3 VM from properly checking the root filesystem. The boot process would return the following error message:

fsck.ext3: Filesystem has unsupported feature(s) (/)
e2fsck: Get a newer version of e2fsck

# mke2fs -j -O none /dev/mapper/loop0p1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
26104 inodes, 104388 blocks
5219 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or 180 days,
whichever comes first. Use tune2fs -c or -i to override.

# mkswap /dev/mapper/loop0p2
Setting up swapspace version 1, size = 1077506 kB

Mount the newly created filesystem(s) on a temporary mount point to facilitate copying the files from the physical server to be virtualized and customizing configuration files for the virtual environment.
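Appendix C runs "dumpe2fs ... | grep -i features" as a sanity check on the resulting feature set. That check can be rehearsed up front, without root or loop devices, on a small scratch image; this is a sketch only — the path and 64 MiB size are illustrative, and the real target would be /dev/mapper/loop0p1:

```shell
# mke2fs accepts a regular file with -F, so no root or loop device is needed.
img=$(mktemp /tmp/fsdemo.XXXXXX)
dd if=/dev/zero of="$img" bs=1024k count=1 seek=63 2>/dev/null   # 64 MiB sparse

# -O none drops the hosting distribution's default feature set; -j adds
# back the ext3 journal, which RHEL3 does understand.
mke2fs -q -F -j -O none "$img"

# Anything much beyond has_journal in this line risks the
# "Get a newer version of e2fsck" boot failure quoted above.
dumpe2fs -h "$img" 2>/dev/null | grep -i 'features'
rm -f "$img"
```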
# mkdir /mnt/rhel3p2v
# mount /dev/mapper/loop0p1 /mnt/rhel3p2v
# df /mnt/rhel3p2v
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/loop0p1    5154852    141440   4751556   3% /mnt/rhel3p2v

If the filesystems were utilizing labels on the physical server being virtualized, label the newly created partitions appropriately. The use of labels will simplify any additional updates to the file system table and boot configuration files.

# e2label /dev/mapper/loop0p1 /

Transferring the physical server files to the disk image

The rsync utility allows the transfer to use various protocols and allows the command to be initiated on either system. The example below utilizes SSH to access the data on the DL360 RHEL3 host. Ensure the Secure Shell daemon configuration (/etc/ssh/sshd_config) is set to PermitRootLogin yes for rsync to work properly as in this example. The command options are chosen to ensure file protections and ownerships remain consistent. The rsync utility also performs recursive copies that bring over only the files that have changed; this makes it practical to set up a test machine initially and then update the virtual machine with all changes when the migration is finalized. For more information on rsync, view the man page.

Copy the DL360 RHEL3 source system files. Do not copy /mnt, /sys, /proc, or /tmp, as these are typically dynamic runtime directories that will be repopulated during boot of the virtual machine. Optionally, the /dev directory can also be excluded since its contents primarily reflect the server hardware and will be repopulated during boot of the virtual machine. In this example, 192.168.10.192 is the IP address of the DL360 RHEL3 system being virtualized. The rsync -x option prevents the utility from crossing filesystem boundaries and therefore would not transfer the separately mounted /boot partition of the DL360 RHEL3 system.
The rsync utility is executed twice in this scenario to bring over the /boot partition that is separately mounted.

# rsync -avxH --numeric-ids --progress \
> --exclude '/proc' --exclude '/sys' --exclude '/mnt' --exclude '/tmp' \
> 192.168.10.192:/ /mnt/rhel3p2v/
root@192.168.10.192's password:
receiving file list ...
70641 files to consider
<snip>
sent 1089193 bytes  received 1580951628 bytes  8862973.79 bytes/sec
total size is 1576656284  speedup is 1.00

# rsync -avxH --numeric-ids --progress 192.168.10.192:/boot/ /mnt/rhel3p2v/boot/
root@192.168.10.192's password:
receiving file list ...
41 files to consider
boot/
boot/System.map -> System.map-2.4.21-50.ELsmp
boot/grub/
boot/grub/menu.lst -> ./grub.conf
boot/lost+found/
<snip>
sent 1033137 bytes  received 1471812733 bytes  10122652.03 bytes/sec
total size is 1600318021  speedup is 1.09

Adding a bootblock to the disk image file

The disk image file must have a bootblock in order to boot the virtual machine instance properly. The process to install or restore a bootblock depends on whether LILO or GRUB was used on the physical server being virtualized. This scenario focuses on the process for GRUB.

GRUB expects a device name (/dev/name), for example /dev/hda, to denote the entire disk where the master boot record (MBR) and partition table reside. GRUB also expects a device name (/dev/name1), for example /dev/hda1, to represent the filesystem partition containing the /boot directory.

Use the fdisk utility in expert mode and print the partition table to determine the starting sector of the filesystem partition. The starting sector for partition Nr 1 is 63. The offset is calculated by multiplying the starting sector by a block size of 512 bytes per sector.

# fdisk rhel3p2v.img
last_lba(): I don't know how to handle files with mode 81ed
You must set cylinders.
You can do this from the extra functions menu.
Command (m for help): x

Expert command (m for help): p

Disk rhel3p2v.img: 255 heads, 63 sectors, 0 cylinders

Nr AF  Hd Sec  Cyl  Hd Sec  Cyl      Start       Size ID
 1 80   1   1    0 254  63  651         63   10474317 83
 2 00   0   1  652 254  63  782   10474380    2104515 82
 3 00   0   0    0   0   0    0          0          0 00
 4 00   0   0    0   0   0    0          0          0 00

Expert command (m for help): q

Utilize the losetup utility to map the filesystem partition with the calculated offset (63 * 512 = 32256) onto /dev/loop1.

# losetup -o 32256 /dev/loop1 rhel3p2v.img

Create the symbolic link /dev/loop to /dev/loop0, the loop device already mapped to the whole disk image; GRUB treats trailing digits in a device name as a partition number, so a digit-free name is needed to represent the entire disk.

# ln -s /dev/loop0 /dev/loop

Edit /mnt/rhel3p2v/boot/grub/device.map and set the correct hd information. In this example, the specification for hd0 was modified from /dev/cciss/c0d0 to /dev/hda.

# this device map was generated by anaconda
(fd0)     /dev/fd0
(hd0)     /dev/hda

With the appropriate device names (/dev/loop, /dev/loop1) available and mapped to the disk image file, invoke the GRUB shell to embed the bootblock information. The --device-map option signals the GRUB shell not to scan the system for BIOS devices and not to regenerate a device.map file. The --no-curses option prevents the terminal screen from being cleared when entering the GRUB shell.

# grub --no-curses --device-map=/mnt/rhel3p2v/boot/grub/device.map

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]

grub> device (hd0) /dev/loop
device (hd0) /dev/loop
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/boot/grub/stage2
/boot/grub/grub.conf"... succeeded
Done.
grub> quit

If, during the boot and testing phase, the new virtual machine instance fails to boot, boot the virtual machine with a Red Hat Linux rescue disk and reinstall the bootblock into the MBR. See Appendix B for more information on this process.

Create the VM domain file

Manually create the virtual machine domain file from scratch, or clone another working definition in the directory dictated by the operating system environment and edit the unique values for name, uuid, disk, and the vif MAC address.

Completing the migration

Return to the section entitled "Customize the virtual machine environment" for the remaining instructions on completing this migration.

Appendix A – SLES10 hosting of VM

Since the RHEL5 and SLES10 virtualization environments are based on the same underlying technology, the process to host the RHEL3 VM on a SLES10 SP2 host is identical with only the few exceptions noted below.

• The host system virtualization command line utility to create the VM is called vm-install.
• VM domain files are stored in the /etc/xen/vm directory.
• The netcat utility is invoked with the netcat command.

Appendix B – Repairing the bootblock

The GRUB boot loader may not function properly when booting the virtual machine. The following steps, using the RHEL3 scenario, outline the process to reinstall a bootblock into the master boot record of the disk image file using the operating system rescue environment.

• Obtain an ISO image of the operating system distribution rescue environment and store the ISO in the /var/lib/xen/images directory.
• Edit the virtual machine domain configuration file to include a bootable cdrom definition in the disk directive statement, and modify the boot directive from drive c to drive d.
disk = [ "file:/var/lib/xen/images/rhel3p2v.img,hda,w",
         "file:/var/lib/xen/images/rhel3rescue.iso,hdc:cdrom,r" ]
boot = "d"

• Boot the virtual machine with the appropriate operating system command and connect to the virtual machine console.
• Specify linux rescue at the boot prompt and follow the operating system prompts for information.
• From the shell environment of the booted virtual machine, type chroot /mnt/sysimage to work within the targeted virtual machine environment.
• Execute the command /sbin/grub-install /dev/hda, where /dev/hda is the boot device mnemonic.
• Exit the chroot environment.
• Shut down the virtual machine.
• Re-edit the virtual machine domain configuration file to restore the original cdrom definition in the disk directive statement.
• Reboot the virtual machine.

Appendix C – Consolidated list of commands

This section provides, for reference, a consolidated list of the commands used in the scenario in which the RHEL3 distribution was not available to perform a base system install in the VM.
# dd if=/dev/zero of=rhel3p2v.img bs=1024k count=1 seek=6144
# losetup -f
# losetup /dev/loop0 rhel3p2v.img
# fdisk /dev/loop0 <<EOT
p
n
p
1
1
652
n
p
2
653
783
p
t
2
82
a
1
p
w
EOT
# kpartx -av /dev/loop0
# mke2fs -j -O none /dev/mapper/loop0p1
# mkswap /dev/mapper/loop0p2
# e2label /dev/mapper/loop0p1 /
# mount /dev/mapper/loop0p1 /mnt/rhel3p2v
# df /mnt/rhel3p2v
# dumpe2fs /dev/mapper/loop0p1 | grep -i features
# rsync -avxH --numeric-ids --progress --exclude '/proc' --exclude '/sys' \
  --exclude '/mnt' --exclude '/tmp' 192.168.10.192:/ /mnt/rhel3p2v/
# rsync -avxH --numeric-ids --progress 192.168.10.192:/boot/ /mnt/rhel3p2v/boot/
# fdisk rhel3p2v.img <<EOT
x
p
q
EOT
# chroot /mnt/rhel3p2v <<EOT
# mkdir /proc
# mkdir /sys
# mkdir /mnt
# mkdir -m 1777 /tmp
# vi /etc/fstab
# vi /etc/mtab
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# vi /etc/modules.conf
# vi /etc/inittab
# vi /boot/grub/device.map
# vi /boot/grub/menu.lst
# exit
EOT
# losetup -o 32256 /dev/loop1 rhel3p2v.img
# ln -s /dev/loop0 /dev/loop
# grub --no-curses --device-map=/mnt/rhel3p2v/boot/grub/device.map --batch <<EOT
device (hd0) /dev/loop
root (hd0,0)
setup (hd0)
quit
EOT
# cd /var/lib/xen/images
# sync
# umount /mnt/rhel3p2v
# rmdir /mnt/rhel3p2v
# rm /dev/loop
# kpartx -dv /dev/loop0
# losetup -d /dev/loop0
# losetup -d /dev/loop1

For more information

Linux on ProLiant: http://www.hp.com/go/proliantlinux
HP and Red Hat Enterprise Linux: http://www.hp.com/go/redhat
Red Hat OS Lifecycle: http://www.redhat.com/security/updates/errata/
HP and SUSE Linux Enterprise: http://www.hp.com/go/sles
Open Source and Linux from HP: http://www.hp.com/go/linux
HP ProLiant Energy Efficient Solutions: http://www.hp.com/go/ProLiant-Energy-Efficient

To help us improve our documents, please provide feedback at www.hp.com/solutions/feedback.

© 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. AMD Opteron is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. 4AA2-1813ENW, August 2008