Guest Operating System Support by Hyper-V


Here I have listed the Microsoft-recommended guest operating systems that are supported by Hyper-V. A guest OS is the OS installed on a virtual machine (VM) created with Microsoft Windows Server 2008 Hyper-V.

The following guest operating systems are supported on Hyper-V:

Windows Server 2008 x64 (VMs configured with 1, 2, or 4 virtual processors, SMP)
  • Windows Server 2008 Standard/Enterprise/Datacenter x64
  • Windows Web Server 2008 x64
  • Windows Server 2008 Standard/Enterprise/Datacenter without Hyper-V x64

Windows Server 2008 x86 (VMs configured with 1, 2, or 4 virtual processors, SMP)
  • Windows Server 2008 Standard/Enterprise/Datacenter x86
  • Windows Web Server 2008 x86
  • Windows Server 2008 Standard/Enterprise/Datacenter without Hyper-V x86

Windows Server 2003 x86 (VMs configured with 1 or 2 virtual processors only, SMP)
  • Windows Server 2003 Standard/Enterprise/Datacenter x86 Edition with Service Pack 2
  • Windows Server 2003 Web x86 Edition with Service Pack 2

Windows Server 2003 x64 (VMs configured with 1 or 2 virtual processors only)
  • Windows Server 2003 Standard/Enterprise/Datacenter x64 Edition with Service Pack 2

Windows 2000 (VMs configured with 1 virtual processor only)
  • Windows 2000 Server with Service Pack 4
  • Windows 2000 Advanced Server with Service Pack 4

Other Operating Systems (VMs configured with 1, 2, or 4 virtual processors only)
  • Windows HPC Server 2008

Linux Distributions (VMs configured with 1 virtual processor only)
  • SUSE Linux Enterprise Server 10 with Service Pack 2 x86/x64 Edition
  • SUSE Linux Enterprise Server 10 with Service Pack 1 x86/x64 Edition

Apart from these, I was able to install and configure other guest OSes like CentOS, Fedora, and Ubuntu/Debian successfully. So the OSes listed above are the ones Microsoft recommends, but in practice you can often install other guest OSes on a VM as well.

When to Use Hyper-V Server 2008

Microsoft Hyper-V Server 2008 gives customers a basic, simplified virtualization solution for consolidating servers as well as for test environments. Hyper-V Server 2008 offers only the most basic virtualization features, making it ideal for:

    * Test and Development
    * Basic Server Consolidation
    * Branch Office Consolidation
    * Hosted Desktop Virtualization (VDI)
   
Customers who require richer virtualization features such as Quick Migration, multi-site clustering, large memory support (greater than 32 GB of RAM), or more than four processors on the host server should use Windows Server 2008.
The following table outlines which Hyper-V–enabled product would suit your needs:


Hyper-V Enhancements

High availability. Hyper-V includes support for host-to-host connectivity and lets you cluster all virtual machines running on a host through Windows Clustering (up to 16 nodes). The Enterprise or Datacenter editions of Windows Server 2008 are required.

Quick Migration. Hyper-V enables you to rapidly migrate a running virtual machine across Hyper-V hosts with minimal downtime, leveraging the familiar high-availability capabilities of Windows Server 2008 and System Center management tools.
       
Quick Migration isn’t the same as VMware vMotion or Citrix XenServer XenMotion. With Quick Migration there is a (small) period of downtime for the virtual machine, which depends on the amount of memory the virtual machine is consuming. The Quick Migration process is: the VM state is saved, the VM is moved to another Hyper-V host, and the VM state is restored.

A quick migration of a VM using 1 GB of memory takes approximately 4 seconds. The VHD file must be stored on shared storage, and you need the same processor architecture across the nodes.
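The downtime figures above can be turned into a rough back-of-the-envelope model. This is an illustrative sketch, not measured data: the ~4 seconds for a 1 GB VM quoted above implies an effective save-plus-restore rate of roughly 256 MB/s, which the function below takes as an assumed default.

```python
def quick_migration_downtime(vm_memory_mb, save_restore_rate_mb_s=256):
    """Estimate Quick Migration downtime (in seconds) for a VM.

    Quick Migration saves the VM state to shared storage, moves
    ownership to another node, and restores the state, so downtime
    grows with the amount of memory the VM is using.

    The default rate of 256 MB/s is an assumption inferred from the
    ~4 s per 1 GB figure above, not a published specification.
    """
    return vm_memory_mb / save_restore_rate_mb_s

# A 1 GB (1024 MB) VM: 1024 / 256 = 4.0 seconds of downtime.
print(quick_migration_downtime(1024))   # → 4.0
# A 4 GB VM at the same rate would be down for roughly 16 seconds.
print(quick_migration_downtime(4096))   # → 16.0
```

Real downtime also depends on storage throughput and cluster failover latency, so treat this only as a sizing aid when deciding whether Quick Migration's downtime is acceptable for a workload.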

Server Core role. Hyper-V is now available as a role in a Server Core installation of Windows Server 2008.
Integrated into Server Manager. Hyper-V is now integrated into Server Manager by default, and customers can enable the role from within Server Manager.

VHD tools. Hyper-V includes support for VHD tools that enable compaction, expansion, and inspection of VHDs created with Hyper-V.

Improved access control with AzMan. Hyper-V now includes support for Authorization Manager (AzMan), enabling role-based access control models for better administration of the Hyper-V environment with increased security.

Host characteristics. 16 logical processors, 2 TB of memory, and SAS/SATA disk and Fibre Channel support.

Guest characteristics. 32-bit (x86) and 64-bit (x64) child partitions, up to 64 GB of memory per VM, 4-core SMP VMs, and a maximum of 4 NICs.
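The guest limits above can be expressed as a simple configuration check. This is a hypothetical helper for illustration only; the function and field names are mine, and the limit values are taken from the characteristics just listed.

```python
# Guest limits as stated above: 64 GB memory, 4-core SMP, 4 NICs per VM.
GUEST_LIMITS = {
    "memory_gb": 64,   # maximum memory per VM
    "vcpus": 4,        # maximum virtual processors (4-core SMP)
    "nics": 4,         # maximum network adapters
}

def validate_guest_config(memory_gb, vcpus, nics):
    """Return a list of limit violations (an empty list means the
    proposed VM configuration fits within the guest limits)."""
    errors = []
    if memory_gb > GUEST_LIMITS["memory_gb"]:
        errors.append(f"{memory_gb} GB exceeds the {GUEST_LIMITS['memory_gb']} GB memory limit")
    if vcpus > GUEST_LIMITS["vcpus"]:
        errors.append(f"{vcpus} vCPUs exceeds the {GUEST_LIMITS['vcpus']} vCPU limit")
    if nics > GUEST_LIMITS["nics"]:
        errors.append(f"{nics} NICs exceeds the {GUEST_LIMITS['nics']} NIC limit")
    return errors

print(validate_guest_config(4, 2, 1))    # → [] (within limits)
print(validate_guest_config(128, 8, 2))  # reports two violations
```

Note that the per-OS processor limits from the guest OS list earlier (for example, 1 vCPU only for the supported Linux distributions) are stricter than this blanket 4-vCPU ceiling.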

Live backups with VSS. Volume Shadow Copy Service (VSS) enables live backups of running virtual machines.

Resource management. CPU, disk, and network usage can be managed using Windows System Resource Manager (WSRM).

Hyper-V Architecture

Parent Partition – Manages machine-level functions such as device drivers, power management, and device hot addition/removal. The root (or parent) partition is the only partition that has direct access to physical memory and devices.

Child Partition – Partition that hosts a guest operating system - All access to physical memory and devices by a child partition is provided via the Virtual Machine Bus (VMBus) or the hypervisor.

VMBus – Channel-based communication mechanism used for inter-partition communication and device enumeration on systems with multiple active virtualized partitions. The VMBus is installed with Hyper-V Integration Services.

WMI – The Virtual Machine Management Service exposes a set of Windows Management Instrumentation (WMI)-based APIs for managing and controlling virtual machines.

VSC – Virtualization Service Client – A synthetic device instance that resides in a child partition. VSCs utilize hardware resources that are provided by Virtualization Service Providers (VSPs) in the parent partition. They communicate with the corresponding VSPs in the parent partition over the VMBus to satisfy a child partition's device I/O requests.

VSP – Virtualization Service Provider – Resides in the root partition and provides synthetic device support to child partitions over the Virtual Machine Bus (VMBus).
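To make the VSC/VSP split concrete, here is a toy model of the flow: the child partition's VSC has no device access of its own, so it places I/O requests on a shared channel (standing in for the VMBus), and the VSP in the parent partition services them against the real device. This is not real Hyper-V code; all class and method names are illustrative.

```python
import queue

class VMBusChannel:
    """Channel-based inter-partition communication (greatly simplified)."""
    def __init__(self):
        self.requests = queue.Queue()   # child → parent
        self.responses = queue.Queue()  # parent → child

class VSP:
    """Runs in the parent partition, which owns the physical device."""
    def __init__(self, channel):
        self.channel = channel
        self.disk = {}  # stands in for the physical disk

    def service_one(self):
        # Take one request off the channel and touch the real device.
        op, block, data = self.channel.requests.get()
        if op == "write":
            self.disk[block] = data
            self.channel.responses.put("ok")
        elif op == "read":
            self.channel.responses.put(self.disk.get(block))

class VSC:
    """Runs in the child partition; has no direct hardware access,
    so every I/O goes over the channel to the parent's VSP."""
    def __init__(self, channel):
        self.channel = channel

    def write(self, block, data):
        self.channel.requests.put(("write", block, data))

    def read(self, block):
        self.channel.requests.put(("read", block, None))

channel = VMBusChannel()
parent, child = VSP(channel), VSC(channel)

child.write(0, b"guest data")   # child issues I/O via the channel
parent.service_one()            # parent's VSP performs the real access
print(channel.responses.get())  # → ok
child.read(0)
parent.service_one()
print(channel.responses.get())  # → b'guest data'
```

The point of the model is the division of labor: the child never touches `disk` directly, mirroring how all child-partition device access goes through the VMBus to the parent.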

Hypercall – Interface for communication with the hypervisor - The hypercall interface accommodates access to the optimizations provided by the hypervisor.

Hypervisor – A layer of software that sits between the hardware and one or more operating systems. Its primary job is to provide isolated execution environments called partitions. The hypervisor controls and arbitrates access to the underlying hardware.

System Requirements of Hyper-V

CPU: Hyper-V requires specific processor virtualization extensions from either Intel (Intel VT) or AMD (AMD-V). Intel VT is integral to the Intel vPro range.

64-bit environment: Virtualization is a prime candidate for the expanded memory and processing facilities that 64-bit platforms offer. To ensure these expanded facilities are available, Hyper-V only runs on x64 editions of Windows Server 2008.

Approved hardware: Hyper-V requires hardware that is on the Windows Server Catalog of tested hardware. Microsoft hardware approval is particularly important for Hyper-V because the Windows hypervisor layer interfaces directly between the hardware and the parent and child partitions. Rigorous testing of third-party device drivers also helps to enhance parent partition stability. Although Hyper-V runs fine on my laptop, such a device isn't the most suitable candidate for the server virtualization role.

Physical memory on the host computer is the main limiting factor that determines the number of virtual machines that can run simultaneously. The virtual machines share this physical memory with the parent partition. Memory requirements are typically 512 MB for the parent partition, plus the allocated memory for each child partition and a further 32 MB of overhead per child partition. Therefore, a child partition that has 256 MB of allocated virtual RAM requires a host that has at least 512 + (256 + 32) = 800 MB.
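The sizing rule above generalizes to any number of child partitions. The sketch below encodes it as a small helper; the function name is mine, and the 512 MB parent reservation and 32 MB per-VM overhead are the typical figures quoted above, not hard guarantees.

```python
def host_memory_required(child_partition_mb, parent_mb=512, overhead_mb=32):
    """Minimum host memory (in MB) for a set of child partitions.

    parent_mb   -- memory reserved for the parent partition (~512 MB typical)
    overhead_mb -- per-child-partition virtualization overhead (~32 MB each)
    """
    return parent_mb + sum(vm + overhead_mb for vm in child_partition_mb)

# One VM with 256 MB of RAM: 512 + (256 + 32) = 800 MB, as in the text.
print(host_memory_required([256]))        # → 800
# Three VMs with 1 GB (1024 MB) each: 512 + 3 * (1024 + 32) = 3680 MB.
print(host_memory_required([1024] * 3))   # → 3680
```

In practice you would also leave headroom beyond this minimum, since the parent partition's own workload and caching benefit from extra memory.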

Types of Hypervisor Technology Part-II

Type II Hypervisors.

Type II Hypervisors run on an existing operating system, which provides their interface to the hardware. The guest operating systems then run on the Type II hypervisor, at the third level above the hardware, as can be seen in the figure:

A Type II approach to Hypervisors
Once more, as with Type I, the three guest OSes are unaware that they are running on anything but the hardware directly.

Comparison of the two Types of Hypervisors.

For speed and efficiency, Type I hypervisors are definitely better. Since they run directly on the hardware and manage it firsthand, they avoid the extra layers that hamper the speed of Type II hypervisors.

Type I hypervisors provide higher performance efficiency, availability, and security than Type II hypervisors.

Type II hypervisors run on client systems, where considerations of speed are less important. They can be installed directly on the operating system and are thus much easier to set up.

In addition, Type II hypervisors support a much broader range of hardware, since the hardware resources are provided by the underlying operating system on which they run.

Examples of Type I Hypervisors are Microsoft’s Hyper-V and VMware ESX Server.
Examples of Type II Hypervisors are VMware GSX Server and Microsoft’s Virtual Server.

Types of Hypervisor Technology Part-I

Depending on their implementation, hypervisors work in one of two basic ways, called Type I and Type II. All products that implement hypervisor technology fall into one of these two types.

Type I Hypervisors.

Type I Hypervisors run directly on the hardware. Instead of the operating system, they are fully in charge of managing system resources. The operating systems run on top of the hypervisor, which intercepts their requests and manages them in such a way that they are completely independent of each other, as shown in the figure:
A Type I approach to Hypervisors

As can be seen in Figure 1, the three guest operating systems (OSes) are running on top of the hypervisor. The OSes see only those resources that the hypervisor presents to them, and they run at the second level above the hardware.