Hyper-V Enhancements

High Availability. Hyper-V includes support for host-to-host connectivity and enables you to cluster all virtual machines running on a host through Windows Clustering (up to 16 nodes). The Enterprise or Datacenter editions of Windows Server 2008 are required.

Quick migration. Hyper-V enables you to rapidly migrate a running virtual machine across Hyper-V hosts with minimal downtime, leveraging the familiar high-availability capabilities of Windows Server 2008 and System Center management tools.
       
Quick Migration isn’t the same as VMware vMotion or Citrix XenServer XenMotion. With Quick Migration there is a (small) period of downtime for the virtual machine, and it depends on the amount of memory the virtual machine is consuming. The Quick Migration process is: the VM state is saved, the VM is moved to another Hyper-V host, and the VM state is restored.

A quick migration of a VM using 1 GB of memory takes approximately 4 seconds. The VHD file needs to be stored on shared storage, and you need the same processor architecture across the nodes.
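Because the downtime comes from saving and restoring the VM state, it scales roughly with the VM's allocated memory. The snippet below is only a rough back-of-the-envelope sketch based on the ~4 seconds per GB observed above; actual figures depend on storage and network throughput between the cluster nodes.

```python
# Rough estimate of Quick Migration downtime, assuming downtime scales
# linearly with the VM's allocated memory. The ~4 seconds per GB figure
# comes from the observation above; real numbers will vary per environment.

SECONDS_PER_GB = 4.0  # observed for a 1 GB VM; an assumption for other sizes

def estimated_downtime_seconds(vm_memory_gb: float) -> float:
    """Estimate save-state + restore-state downtime for a quick migration."""
    return vm_memory_gb * SECONDS_PER_GB

if __name__ == "__main__":
    for mem in (1, 2, 4, 8):
        print(f"{mem} GB VM -> ~{estimated_downtime_seconds(mem):.0f} s downtime")
```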

Server Core role. Hyper-V is now available as a role in a Server Core installation of Windows Server 2008.
Integrated into Server Manager. Hyper-V is now integrated into Server Manager by default, and customers can enable the role from within Server Manager.

VHD tools. Hyper-V includes support for VHD tools to enable compaction, expansion and inspection of VHDs created with Hyper-V.
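These VHD operations are also exposed programmatically through the Hyper-V WMI provider. The sketch below is a minimal illustration, assuming the Windows Server 2008 provider in the root\virtualization namespace, its Msvm_ImageManagementService class, and the third-party Python wmi package; the VHD path and the shape of the returned data are assumptions.

```python
# Minimal sketch: inspect a VHD via the Hyper-V WMI provider.
# Assumes the Windows Server 2008 namespace root\virtualization, the
# Msvm_ImageManagementService class, and the "wmi" package (pip install wmi).
# The VHD path below is a hypothetical example.
import wmi

conn = wmi.WMI(namespace=r"root\virtualization")
image_svc = conn.Msvm_ImageManagementService()[0]

# GetVirtualHardDiskInfo returns information about the VHD (type, size,
# parent path, ...) plus a job reference and a return code; the exact
# ordering of the returned values may differ per provider version.
result = image_svc.GetVirtualHardDiskInfo(Path=r"C:\VMs\test.vhd")
print(result)
```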

Improved access control with AzMan. Hyper-V now includes support for Authorization Manager (AzMan) to enable Role-Based Access Control models for better administration of the Hyper-V environment with increased security.

Host characteristics. 16 logical processors, 2 TB of memory, and SAS/SATA disk and Fibre Channel support.

Guest characteristics. 32-bit (x86) and 64-bit (x64) child partitions, up to 64 GB of memory per VM, 4-core SMP VMs, and a maximum of 4 NICs.

Live backups with VSS. Volume Shadow Copy Service (VSS) support enables live backups of running virtual machines.

Resource management. CPU, disk, and network resources can be managed using Windows System Resource Manager (WSRM).

Hyper-V Architecture

Parent Partition – Manages machine-level functions such as device drivers, power management, and device hot addition/removal. The root (or parent) partition is the only partition that has direct access to physical memory and devices.

Child Partition – Partition that hosts a guest operating system - All access to physical memory and devices by a child partition is provided via the Virtual Machine Bus (VMBus) or the hypervisor.

VMBus – Channel-based communication mechanism used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is installed with Hyper-V Integration Services.

WMI – The Virtual Machine Management Service exposes a set of Windows Management Instrumentation (WMI)-based APIs for managing and controlling virtual machines.
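These WMI APIs can be scripted against from any WMI-capable language. The snippet below is a minimal sketch, assuming the Windows Server 2008 provider in the root\virtualization namespace, its Msvm_ComputerSystem class, and the third-party Python wmi package; the Caption value used to filter out the host entry is an assumption.

```python
# Minimal sketch: enumerate virtual machines through the Hyper-V WMI provider.
# Assumes the Windows Server 2008 namespace root\virtualization, the
# Msvm_ComputerSystem class, and the third-party "wmi" package.
import wmi

conn = wmi.WMI(namespace=r"root\virtualization")

# Msvm_ComputerSystem covers both the physical host and the VMs; filtering on
# Caption == "Virtual Machine" (assumed value) skips the host's own entry.
for system in conn.Msvm_ComputerSystem():
    if system.Caption == "Virtual Machine":
        print(system.ElementName, system.EnabledState)
```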

VSC – Virtualization Service Client – A synthetic device instance that resides in a child partition. VSCs utilize hardware resources that are provided by Virtualization Service Providers (VSPs) in the parent partition. They communicate with the corresponding VSPs in the parent partition over the VMBus to satisfy a child partition's device I/O requests.

VSP – Virtualization Service Provider – Resides in the root partition and provides synthetic device support to child partitions over the Virtual Machine Bus (VMBus).

Hypercall – Interface for communication with the hypervisor - The hypercall interface accommodates access to the optimizations provided by the hypervisor.

Hypervisor – A layer of software that sits between the hardware and one or more operating systems. Its primary job is to provide isolated execution environments called partitions. The hypervisor controls and arbitrates access to the underlying hardware.

System Requirements of Hyper-V

CPU: Hyper-V requires hardware-assisted virtualization support from either Intel (Intel VT) or AMD (AMD-V). Intel VT is integral to the Intel vPro range.

64-bit environment: Virtualization is a prime candidate for the expanded memory and processing facilities that 64-bit platforms offer. To ensure these expanded facilities are available, Hyper-V runs only on x64 editions of Windows Server 2008.

Approved hardware: Hyper-V requires hardware that is on the Windows Server catalog of tested hardware. Microsoft hardware approval is particularly important for Hyper-V because the Windows hypervisor layer interfaces directly between the hardware and the parent and child partitions. Rigorous testing of third-party device drivers also helps to enhance parent partition stability. Although Hyper-V runs fine on my laptop, that device isn't the most suitable candidate for the server virtualization role.

Physical memory on the host computer is the main limiting factor that determines how many virtual machines can run simultaneously. The virtual machines share this physical memory with the parent partition. Memory requirements are typically 512 MB for the parent partition, plus the allocated memory for each child partition, plus a further 32 MB of overhead for each child partition. Therefore, a child partition that has 256 MB of allocated virtual RAM requires a host that has at least 512 + (256 + 32) = 800 MB.
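A small helper makes this rule of thumb easy to apply to several child partitions. It is only a sketch of the sizing guideline above (512 MB for the parent partition, plus allocated memory and ~32 MB of overhead per child partition), not an exact sizing formula.

```python
# Rule-of-thumb host memory estimate from the text: 512 MB for the parent
# partition, plus each child partition's allocated memory, plus ~32 MB of
# overhead per child partition.

PARENT_PARTITION_MB = 512
PER_CHILD_OVERHEAD_MB = 32

def required_host_memory_mb(child_allocations_mb):
    """Return the minimum host memory (MB) for the given child partitions."""
    return PARENT_PARTITION_MB + sum(
        alloc + PER_CHILD_OVERHEAD_MB for alloc in child_allocations_mb
    )

# Example from the text: one child partition with 256 MB allocated -> 800 MB.
print(required_host_memory_mb([256]))        # 800
# Three VMs with 1 GB each -> 512 + 3 * (1024 + 32) = 3680 MB.
print(required_host_memory_mb([1024] * 3))   # 3680
```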

Types of Hypervisor Technology, Part II

Type II Hypervisors.

Type II Hypervisors run on an existing operating system, which provides the interface to the hardware. The guest operating systems then run on the Type II hypervisor at the third level above the hardware, as shown in the figure:

A Type II approach to Hypervisors
Once more, as in Type I, the three guest OSs are unaware that they are running on anything but the hardware directly.

Comparison of the two Types of Hypervisors.

For speed and efficiency, the Type I Hypervisors are definitely better. Because they run directly on the hardware and manage it without intermediaries, they can work without passing through the various layers that hamper the speed of the Type II Hypervisors.

Type I hypervisors provide higher performance efficiency, availability, and security than Type II hypervisors.

Type II Hypervisors typically run on client systems, where considerations of speed are less important. They can be installed directly on the operating system and are thus much easier to set up.

In addition, Type II Hypervisors support a much broader range of hardware, since the hardware resources are provided by the underlying operating system on which they run.

Examples of Type I Hypervisors are Microsoft’s Hyper-V and VMware ESX Server.
Examples of Type II Hypervisors are VMware GSX Server and Microsoft’s Virtual Server.