Wednesday, April 14, 2010

VMware ESX 4 can even virtualize itself

ESX on ESX

Running VMware ESX inside a virtual machine is a great way to experiment with different configurations and features without building out a whole lab full of hardware and storage. It is pretty common to do this on VMware Workstation nowadays — the first public documentation of this process that I know of was published by Xtravirt a couple of years ago.

But what if you prefer to run ESX on ESX instead of Workstation?

You may be pleased to know that the GA build of ESX 4 allows installing ESX 4 as a virtual machine as well as powering on nested virtual machines — VMs running on the virtual ESX host. You can even VMotion a running virtual machine from the physical ESX to a virtual ESX — on the same physical server!

The extra tweaks to make it all work are minimal, and I will show you how without even opening up a text editor.

After installing ESX 4 onto your real hardware, configure it as desired and enable promiscuous mode on the vSwitch that will carry the virtual ESX traffic. In the vSphere Client this is done from the vSwitch properties: on the Security tab, set Promiscuous Mode to Accept.

(Screenshot: vSwitch0 with promiscuous mode enabled)

Create a new VM with the following guidance (choose “Custom”):

  • Virtual Machine Version 7
  • Guest OS: Linux / Red Hat Enterprise Linux 5 (64-bit)
  • 2 VCPUs, 2GB RAM
  • 2 NICs – e1000
  • LSI Logic Parallel
  • New disk – reasonable size

After you have the VM ready, simply attach a VMware ESX 4 ISO image, power on, and install ESX as a guest OS.

(Screenshot: Virtual ESX 4)

After installation, add the new virtual ESX to vCenter 4 and create a new VM.

(Screenshot: vSphere Client with virtual ESX)

If you do not need to run VMs on your virtual ESX, you can stop there. However, if you try to power on that nested VM, you will see the following error:

You may not power on a virtual machine in a virtual machine.

To get past this error, just one tweak is needed:

  • Shut down the virtual ESX VM
  • Click Edit Settings
  • Click the Options tab
  • Click Advanced / General / Configuration Parameters…
  • Click Add Row
  • For the Name/Value enter: monitor_control.restrict_backdoor / TRUE

(Screenshot: editing the VM configuration parameters)

The above procedure is just an alternative to hand-editing the .vmx file — if you prefer to do it that way, feel free.
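For reference, if you do edit the .vmx file by hand, the GUI procedure above is equivalent to adding this single line anywhere in the file:

```
monitor_control.restrict_backdoor = "TRUE"
```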

Now you are ready to power your virtual ESX VM back on, as well as the nested VMs. This capability should come in handy as you start investigating the new features of vSphere 4.

You may be interested in this related post where a VM is migrated between the physical and virtual ESX hosts.


Wednesday, February 10, 2010

Three ways to kill a frozen vSphere ESXi host virtual machine

Posted by: Eric Siebert

At some point, you may need to know how to kill a stuck or frozen VMware vSphere 4.0 ESXi host virtual machine when the traditional power controls do not work. As with VMware ESX, there are several methods, which I covered in a previous post, killing a virtual machine (VM) on a VMware ESX host in vSphere.

The methods for ESXi are very similar to those for ESX, but the execution is different because ESXi does not have a service console the way ESX does. The methods below are listed in order of usage preference, beginning with normal VM commands and ending with a brute-force method.

Method 1: Use the vmware-cmd command in the vSphere command-line interface (CLI)

Note: The vSphere CLI was formerly known as the Remote CLI and should not be confused with the vSphere PowerCLI. The vSphere CLI is the command-line equivalent of the vSphere Client. Because ESXi does not have a service console like ESX does, you need to use the remote vSphere CLI to run the vmware-cmd command against ESXi hosts. The vSphere CLI can be downloaded and installed on any Linux or Windows system, can be used to run specific commands remotely against any ESX/ESXi host, and consists of a collection of Perl scripts, one for each ESX/ESXi command. To use this method, follow the steps below.

  1. Open a command prompt on the system where you installed the vSphere CLI. You'll need to switch to the \bin subdirectory where the Perl scripts are located to run the commands.
  2. The vmware-cmd command uses the configuration file name (.vmx) of the VM to specify the VM on which it will perform an operation. You can type vmware-cmd.pl -H <host> -l to get a list of all VMs on the host along with the path and name of each configuration file. The path uses the Universally Unique Identifier (UUID), or long name, of the datastore; alternatively, you can use the friendly datastore name instead. You'll be prompted to log in to the ESXi host before the command executes. You also have the option of specifying a vCenter Server with -H and using -T to specify the ESXi host that the vCenter Server manages. Note: You can avoid entering login information every time you run a command by using a configuration file or Windows authentication pass-through with the Security Support Provider Interface (SSPI). See the vSphere Command-Line Interface Installation and Reference Guide for more information.
  3. You can optionally check the power state of the VM by typing vmware-cmd.pl -H <host> <path_to_vmx> getstate.
  4. To forcibly shut down the VM, type vmware-cmd.pl -H <host> <path_to_vmx> stop hard.
  5. You can check the state again to see if it worked; if it did, the state should now be off.
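Putting the steps above together, a typical session might look like the following sketch. The hostname and the .vmx path are placeholders; take the actual path from the output of the -l listing on your own host.

```shell
# List all registered VMs and the paths to their .vmx files
# (you will be prompted to log in to the host)
vmware-cmd.pl -H esxi01.example.com -l

# Check the current power state of the stuck VM
vmware-cmd.pl -H esxi01.example.com \
    "/vmfs/volumes/datastore1/stuckvm/stuckvm.vmx" getstate

# Forcibly power it off
vmware-cmd.pl -H esxi01.example.com \
    "/vmfs/volumes/datastore1/stuckvm/stuckvm.vmx" stop hard
```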

Method 2: Use the vm-support command to shut down the VM

When you use the vm-support command to shut down a VM, you must first find the virtual machine ID (VMID) and then use the vm-support command to forcibly terminate it. This method does more than shut down the VM: it also produces debug information that you can use to troubleshoot an unresponsive VM. On ESXi hosts, the vm-support command can be run from the special Tech Support Mode, which provides access to the host's BusyBox, POSIX-based management console.

  1. On the ESXi console, press Alt-F1.
  2. Type the word unsupported (text will not be displayed while typing) and press Enter. A password prompt will appear; enter the root password for the ESXi host and you will be at a # prompt in the root partition.
  3. The vm-support command is a multi-purpose command that is mainly used to troubleshoot host and VM problems. You can use the -X parameter to forcibly shut down a VM and also produce a file with debug information. As with ESX hosts, running this command creates a .tgz file, but it will not be located in the directory where you run the command; instead it is created in the /var/tmp directory, which points to the 4 GB Virtual File Allocation Table (VFAT) system swap partition. You can also set a Virtual Machine File System (VMFS) volume as the working directory for the .tgz file. First, type vm-support -x to get a list of the VMIDs of your running VMs.
  4. To forcibly shut down the VM and generate core dumps and log files, type vm-support -X <vmid>. If you wish to specify an alternate directory for the .tgz file, also add the -w parameter. You will receive a prompt asking if you want to take a screenshot of the VM, which can be useful if you want to see whether there are any error messages. You will also be prompted about whether you wish to send a non-maskable interrupt (NMI) and an ABORT to the VM, which can further aid in debugging. You must say yes to the ABORT prompt for the VM to be forcibly stopped. Once the process completes, which can take 10-15 minutes, a .tgz file will be created in the /var/tmp directory that you can use for troubleshooting purposes.
  5. You can check the state of the VM again by typing vm-support -x; you should not see the VM listed at this point. Be sure to delete the .tgz file when you are done to avoid filling up your host disk.
  6. You can leave Tech Support Mode by typing exit and pressing Alt-F2 to return to the normal console mode.
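The sequence above, condensed into a sketch. The VMID 1234 is a placeholder taken from the -x listing, and the -w directory is a hypothetical example of redirecting the debug bundle to a VMFS volume.

```shell
# List the VMIDs of currently running VMs
vm-support -x

# Forcibly shut down the VM with ID 1234, writing the debug
# bundle to a VMFS volume instead of /var/tmp
vm-support -X 1234 -w /vmfs/volumes/datastore1

# Verify the VM is no longer listed as running
vm-support -x
```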

Method 3: Find the VM’s process identifier and forcibly terminate it

This method also relies on using the tech support mode console that is used in method 2 to run the commands.

  1. On the ESXi console, press Alt-F1.
  2. Type the word unsupported (text will not be displayed while typing) and press Enter. A password prompt will appear. Enter the root password for the ESXi host and you will be at a # prompt in the root partition.
  3. The process status (ps) command shows the currently running processes on a server, and the grep command finds the specified text in the output of the ps command. Type ps -g | grep <vmname>, which will return the WID (first column), CID (second column) and process group ID (PGID, fourth column) of the VM's running processes. Several entries will be returned; the number in the fourth column of each entry is the PGID of the VM.
  4. The kill command sends a signal to terminate a process using its ID number. The -9 parameter forces the process to quit immediately and cannot be ignored the way the more graceful -15 parameter sometimes can. Type kill -9 <pgid>, which will forcibly terminate the processes for the specified VM.
  5. You can check the state of the VM again by typing vm-support -x; you should no longer see the VM listed.
  6. You can leave Tech Support Mode by typing exit and pressing Alt-F2 to return to the normal console mode.
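As a sketch, steps 3 and 4 look like this. The VM name stuckvm and the PGID 5678 are placeholders; substitute the name of your VM and the value you read from the fourth column of the ps output.

```shell
# Find the processes belonging to the stuck VM
ps -g | grep stuckvm

# The fourth column of each matching line is the process group
# ID (PGID); kill that ID to terminate the VM immediately
kill -9 5678

# Confirm the VM is no longer running
vm-support -x
```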

All three of these methods work identically on ESXi hosts in both VMware Infrastructure 3 and vSphere.

Monday, January 11, 2010

ESX Server, IP Storage, and Jumbo Frames


With the release of VMware Infrastructure version 3.5, VMware added support for jumbo frames. Although the documentation states that jumbo frames “are not supported for NAS and iSCSI traffic”, jumbo frames for NFS and iSCSI do actually work. Here’s some information on getting this working.

HOW I TESTED

Keep in mind that this is not an “officially supported” configuration (see the section on the “Official” Support Statement below), so use at your own risk. I will not be held responsible if you blow up your production environment trying to make jumbo frames work.

Here’s how I tested the use of jumbo frames for software iSCSI and NFS datastores:

  • For the physical switch infrastructure, I used a Cisco Catalyst 3560G running Cisco IOS version 12.2(25)SEB4.
  • For the physical server hardware, I used a group of HP ProLiant DL385 G2 servers with dual-core AMD Opteron processors and a quad-port PCIe Intel Gigabit Ethernet NIC.
  • For the storage system, I used a NetApp FAS940 running Data ONTAP 7.2.4.

The exact commands and/or procedures may be different for you depending upon the hardware and/or software versions that you’re running in your environment. Keep that in mind.

CONFIGURING THE PHYSICAL SWITCH

Fortunately for me, the Cisco Catalyst 3560G does indeed support jumbo frames. (Naturally, you’ll want to ensure that your switch supports jumbo frames.) Jumbo frames are not, however, enabled by default; they must be enabled using the following command in global configuration mode:

system mtu jumbo 9000

Note that 9000 bytes seems to be the generally accepted size for jumbo frames, so that’s what I used.

After running this command, you must reboot the switch. The change doesn’t take effect until a reload. Fortunately, IOS reminds you of this after you enter the command. Once the switch has rebooted, you can verify the MTU setting with this command:

show system mtu

This should report that the system jumbo MTU size is 9000 bytes, confirming that the switch is ready for jumbo frames. Now we’re prepared to configure the storage system.

CONFIGURING THE STORAGE SYSTEM

Using FilerView, increasing the MTU on the appropriate network interfaces to 9000 bytes is as simple as going to Network > Manage Interfaces and then clicking the Modify link for the interface to be changed. Set the “MTU size” to 9000 (from the default of 1500), click Apply, and you’re ready to roll.

You can verify the settings in FilerView using Network > Manage Interfaces > Show All Interface Details, or by using the “ifconfig -a” command from a Data ONTAP command prompt.
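If you prefer the Data ONTAP command line to FilerView, the same change can be sketched with ifconfig. The interface name e0a is a placeholder for your storage-facing interface, and note that a change made this way does not persist across a reboot unless you also add it to /etc/rc.

```shell
# Set the MTU on the storage interface to 9000 bytes
ifconfig e0a mtusize 9000

# Verify the new MTU setting
ifconfig -a
```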

CONFIGURING ESX SERVER

There is no GUI in VirtualCenter for configuring jumbo frames; all of the configuration must be done from a command line on the ESX server itself. There are two basic steps:

  1. Configure the MTU on the vSwitch.
  2. Create a VMkernel interface with the correct MTU.

First, we need to set the MTU for the vSwitch. This is pretty easily accomplished using esxcfg-vswitch:

esxcfg-vswitch -m 9000 vSwitch1

A quick run of “esxcfg-vswitch -l” (that’s a lowercase L) will show the vSwitch’s MTU is now 9000; in addition, “esxcfg-nics -l” (again, a lowercase L) will show the MTU for the NICs linked to that vSwitch are now set to 9000 as well.

Second, we need to create a VMkernel interface. This step is a bit more complicated, because the VMkernel port must be attached to a port group, that port group needs to be on the vSwitch whose MTU we set previously, and it must exist before the VMkernel interface can be created:

esxcfg-vswitch -A IPStorage vSwitch1
esxcfg-vmknic -a -i 172.16.1.1 -n 255.255.0.0 -m 9000 IPStorage

The first command creates a port group called IPStorage on vSwitch1—the vSwitch whose MTU was previously set to 9000—and the second creates a VMkernel port with an MTU of 9000 on that port group. Be sure to use an IP address that is appropriate for your network when creating the VMkernel interface.

To test that everything is working so far, use the vmkping command:

vmkping -s 9000 172.16.1.200

Clearly, you’ll want to substitute the IP address of your storage system in that command.
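One caveat on the ping test, which is my addition rather than part of the original procedure: a 9000-byte ICMP payload plus the IP and ICMP headers exceeds a 9000-byte MTU, so the packet above may simply be fragmented and succeed even if jumbo frames are broken somewhere along the path. If your build of vmkping supports the -d (do not fragment) flag, a stricter end-to-end test is:

```shell
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -d forbids fragmentation
vmkping -d -s 8972 172.16.1.200
```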

That’s it! From here you should be able to easily add an NFS datastore or connect to an iSCSI LUN using jumbo frames from the ESX server.

“OFFICIAL” SUPPORT STATEMENT

Officially, jumbo frames are only supported by VMware for use by virtual machines. Technically, VMware does not support the use of jumbo frames for the software iSCSI initiator or for use with NFS datastores. At least, that’s my understanding.

So, feel free to tinker around with jumbo frames for IP-based storage, and when VMware adds official support for it in the future—I can’t imagine why they wouldn’t—then you’ll be able to hit the ground running with the configuration steps necessary to make it work.
