Thursday, 7 February 2013

VMware vSphere can virtualize itself + 64-bit nested guests

Running VMware ESXi inside a virtual machine is a great way to experiment with different configurations and features without building out a whole lab full of hardware and storage. Virtualization enthusiasts everywhere have benefited from the ability to run ESXi on ESXi, first introduced with the vSphere 4 release.
VMware vSphere 5 makes it easier than ever to virtualize hypervisor hosts. With new capabilities to run nested 64-bit guests and take snapshots of virtual ESXi VMs, the sky is the limit for your cloud infrastructure development lab. Heck, you can even run Hyper-V on top of vSphere 5 — not that you’d want to.

Physical Host Setup

The physical host running VMware ESXi 5 requires just a few configuration changes; here is a guide:
  • Install VMware ESXi 5 on a physical host and configure networking, storage, and other aspects as needed
  • Configure a vSwitch and/or Port Group to have Promiscuous Mode enabled
  • Create a second Port Group named “Trunk” with VLAN ID All (4095) if you want to use VLANs on the virtual hypervisors (both network settings are sketched as esxcli commands after this list)
  • Log in to Tech Support Mode (iLO or ssh) and make the following tweak to enable nested 64-bit guests:
    echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
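
If you prefer doing the networking part from the command line, something like this should work in Tech Support Mode (a sketch; vSwitch0 is an assumption, substitute the vSwitch that carries your lab traffic):

    # allow promiscuous mode on the vSwitch
    esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
    # create the "Trunk" port group and pass all VLANs (4095)
    esxcli network vswitch standard portgroup add --portgroup-name=Trunk --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name=Trunk --vlan-id=4095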

Virtual VMware ESXi Machine (vESXi) Creation

For various reasons, it’s not feasible to clone virtual ESXi VMs once ESXi has been installed. As an alternative, create a fully configured shell VM to use as a template; it can be cloned before ESXi is installed.
Create a new VM with the following guidance:
  • Guest OS: Linux / Red Hat Enterprise Linux 5 (64-bit)
  • 2 virtual sockets, 2+ GB RAM
  • 4 NICs; connect NIC 1 to the management network and the rest to the “Trunk” network
  • Thin provisioned virtual disks work fine
  • Finish creating the VM, then edit the following settings (the resulting .vmx entries are sketched after this list):
    • Options/General Options: change Guest Operating System to Other – VMware ESXi 5.x
    • CPU/MMU Virtualization: select the bottom radio button (Intel VT-x for instruction set and EPT for MMU virtualization)
  • Don’t power this VM on; keep it to act as a template
  • Clone and install VMware ESXi via ISO image or PXE boot
  • Add to vCenter and configure virtual ESXi hosts for action
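
If you peek into the shell VM's .vmx file afterwards, you should see entries along these lines (a sketch; exact keys can vary by build, so verify against your own file):

    guestOS = "vmkernel5"
    monitor.virtual_exec = "hardware"
    monitor.virtual_mmu = "hardware"

The guestOS line corresponds to the Other – VMware ESXi 5.x choice, and the two monitor lines correspond to the hardware CPU/MMU radio button.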

Nested 64-bit Guests

With the release of VMware vSphere 5, nested guests can be 64-bit operating systems. Just be sure to make the change to /etc/vmware/config on the physical host as indicated above.
Nested guests can be migrated with vMotion between virtual or physical VMware ESXi hosts; this requires a vMotion network and shared storage.

Nested Hyper-V Virtual Machines

It is possible to run other hypervisors as vSphere virtual machines, and even power on nested VMs. Here you can see Hyper-V running a CentOS virtual machine — all on VMware ESXi. Talk about disrupting the space-time continuum!

A couple of extra tweaks are needed to enable this, and performance is not great. Nevertheless, an amazing feat of engineering from VMware!
Do the following to enable Hyper-V on VMware ESXi (the resulting .vmx lines are shown after the list):
  • Add hypervisor.cpuid.v0 = FALSE to the VM configuration

  • Add ----:----:----:----:----:----:--h-:---- to the CPU mask for Level 1 ECX (Intel)
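
In .vmx form the two tweaks look roughly like this (the cpuid.1.ecx key name is my assumption based on how CPU identification masks are stored; double-check against a VM where you have set the mask in the GUI):

    hypervisor.cpuid.v0 = "FALSE"
    cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"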

Friday, 14 September 2012

Applying a default host profile in a vSphere 5.1 cluster fails

I was playing around with host profiles in my vSphere 5.1 home lab today. It was easy enough to create a baseline by selecting a given host in a cluster. But, without having changed anything, when I tried to check for compliance I received the following error:

"A general system error occurred: Failed to run Execute operation on esxi-hostname.domain.net: IP address '192.168.1.x' is used for multiple virtual NICs"



I was pretty sure that I had used that IP address only for the service console (the management interface) of one host.

The fix is to modify the profile, since it is trying to apply the same IP address to vmk0 (the management interface) of the other host(s) in the cluster.

Go to Network configuration -> Host virtual NIC -> dvSwitch -> IP address settings -> IPv4 address (assuming you are using a dvSwitch for vmk0) and change the option to:

'User specified IPv4 address to be used while applying the configuration', see screenshot below.

Then update the answer file for each host and rerun the compliance check.

Thursday, 13 September 2012

Enabling 64-bit VMs on nested ESXi 5.1

In my home lab, I have a 2-node cluster with two virtual ESXi 5.1 hosts. When I tried to boot a 64-bit VM on these hosts, I received the following error:

"Longmode is unsupported. It is required for 64-bit guest OS support. On Intel systems, longmode requires VT-x to be enabled in the BIOS. On nested virtual ESX hosts, longmode requires the "Virtualized Hardware Virtualization" flag to be enabled on the outer VM."

I seem to remember that in version 5.0 you had to set a parameter (vhv.allow in /etc/vmware/config) on the physical host. For ESXi 5.1 this has changed, according to this VMware KB.

It states the following:

"Virtualized HV is fully  supported for virtual hardware version 9 VMs on hosts that support  Intel VT-x and EPT or AMD-V and RVI. To enable virtualized HV, use the web client and navigate to the processor settings screen. Check the  box next to  "Expose hardware-assisted virtualization to the guest operating system."  This setting is not available under the traditional C# client."

So, access the web client, locate the VM, right click -> Edit settings, and check the box as mentioned. Note that this is done on the outer VM, i.e. the virtual ESXi host as it appears in the physical host's inventory, not from within the nested ESXi. It works like a charm, see screen dump below:
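
As far as I can tell, the checkbox simply writes one line to the virtual ESXi VM's .vmx file, so editing the file directly should be equivalent (my assumption; hardware version 9 is still required):

    vhv.enable = "TRUE"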


Monday, 5 December 2011

VMXNET 3: Supported Guest Operating Systems

VMXNET 3 is the newest NIC driver for VMs (requires VM HW v7). It should be chosen as the default for all supported guest operating systems. Windows 2000, however, is not supported. Here's a link to the VMware KB with more info. Remember that when you delete the old NIC and add a new one, all IP information is wiped and must be reconfigured (mostly relevant for static IPs).
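
If you look in the .vmx file, the adapter type shows up like this for the first NIC (a sketch; the ethernet number depends on how many adapters the VM has):

    ethernet0.virtualDev = "vmxnet3"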


Wednesday, 7 September 2011

Configuring iSCSI for vSphere 5

Configuring a software iSCSI initiator for ESXi 5.0 is a relatively simple operation. This quick guide assumes that you have already configured an iSCSI target and published it on the network.

For inspiration, have a look at this VMware KB.

Create a new vSwitch (Configuration -> Networking -> Add Networking) and add a VMkernel. Configure it with an IP address. 


Go to Storage adapters and click "Add" to add a software iSCSI adapter if it does not exist already.

Once added, right click the software initiator and choose "properties". 


Go to Network Configuration tab and click "Add".


Choose the vSwitch/VMkernel that you created above.


Go to the Dynamic Discovery tab and click "Add" to add an iSCSI target.


You will be prompted to enter the IP address of the iSCSI target; just leave port 3260 as the default unless you have configured it differently on your target.


Go to Configuration -> Storage and click "Add storage". Select Disk/LUN and click Next. If everything has been done correctly, you will be able to see your published iSCSI target and can then add and format it with the new VMFS5 file system, uh lala!
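
The same steps can also be scripted with esxcli; here is a rough equivalent (assumptions: the software adapter comes up as vmhba33, vmk1 is the VMkernel port created above, and 192.168.1.50 is the target; check the names on your own host):

    # enable the software iSCSI adapter
    esxcli iscsi software set --enabled=true
    # bind the VMkernel port to the adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    # add the target via dynamic discovery (default port 3260)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
    # rescan so the new LUN shows up
    esxcli storage core adapter rescan --adapter=vmhba33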

Upgrading vCenter v4.1 to v5.0

I just upgraded my home lab vCenter server the other day from v4.1 to v5 and took some screen dumps of the installation process. The steps look fairly familiar compared to earlier versions. At one point in the installation I had an error stating that:

"The Fully Qualified Domain Name cannot be resolved. If you continue the installation, some features might not work correctly"

The reason for this error was that I had not created a reverse lookup zone on the DNS server.
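
Before running the installer, you can check both lookups from the vCenter server (hypothetical names, substitute your own):

    nslookup vcenter.domain.local
    nslookup 192.168.1.10

The first should return the IP address and the second the FQDN; if the reverse lookup fails, there is no PTR record (or no reverse lookup zone).
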
Here are the screen dumps:

It was at this step that the DNS error occurred. The image below shows how a reverse lookup zone was created on the DNS server.

Sunday, 17 July 2011

Changing IP and VLAN on host - no VM downtime

It is possible to change the service console (COS) IP and VLAN ID for hosts in a cluster without VM downtime (see this post for changing the hostname). The trick is to change the COS IP on all hosts first and to leave the vMotion IPs alone until all COS IPs have been changed. This way you can put the hosts into maintenance mode one by one, and vMotion will still work on the old IPs even though the COS IPs differ in range and VLAN ID.

NB. It may be necessary to disable HA for the cluster before you begin, as the HA agent will not be able to configure itself on the hosts while the IPs don't match across all hosts.

  1. Enter maintenance mode
  2. Update the DNS entry on the DNS server
  3. Log on to the vCenter server and flush the DNS: ipconfig /flushdns
  4. Go to iLO, DRAC, or something similar for the host (you will lose the remote network connection when changing the IP) and change the IP (use this KB article for inspiration):
  5. [root@server root]# esxcfg-vswif -i a.b.c.d -n w.x.y.z vswif0 , where a.b.c.d is the IP address and w.x.y.z is the subnet mask
  6. Change the VLAN ID (in this case VLAN 12): esxcfg-vswitch -v 12 -p 'Service Console' vSwitch0
  7. Change the gateway: nano /etc/sysconfig/network (file contents are sketched after this list)
  8. Change the DNS servers: nano /etc/resolv.conf
  9. Restart network: service network restart
  10. Ensure that gateway can be pinged
  11. Update the NTP server from the vSphere client if needed.
  12. Continue the process with next host in the cluster
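
For steps 7 and 8, the relevant lines in the two files look something like this (example values; leave the rest of the file contents as they are):

    /etc/sysconfig/network:
        GATEWAY=192.168.12.1

    /etc/resolv.conf:
        nameserver 192.168.12.10
        nameserver 192.168.12.11
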
When all COS IPs have been changed, go to the vSphere client and change all vMotion IP addresses and VLAN IDs. This does not require any downtime. Then test that vMotion works.
Done.

Monday, 27 June 2011

Error during upgrade: The system call API checksum doesn’t match

Today, I got an error during an upgrade from vSphere 4.0 to 4.1 stating something like:

The system call API checksum doesn’t match

There were a lot of similar lines filling the console. I was a bit worried that the upgrade had gone wrong, even though I had done three similar upgrades before this one with no errors, and that I would have to reinstall instead.

Luckily, I found this error description in the 4.1 release notes stating that a reboot will fix the issue. So I waited for a while to be sure that the upgrade had finished, rebooted, and everything looks fine.

Link to release notes:

"ESX service console displays error messages when upgrading from ESX 4.0 or ESX 4.1 to ESX 4.1 Update 1
When you upgrade from ESX 4.0 or ESX 4.1 release to ESX 4.1 Update 1, the service console might display error messages similar to the following:
On the ESX 4.0 host: Error during version check: The system call API checksum doesn’t match"
On the ESX 4.1 host: Vmkctl & VMkernel Mismatch,Signature mismatch between Vmkctl & Vmkernel

You can ignore the messages.

Workaround: Reboot the ESX 4.1 Update 1 host. "