Hyper-V is server virtualization software developed by Microsoft that virtualizes a single hardware server into multiple virtual servers/machines. (Learn more about Hyper-V in our post, These are the 8 best use cases for Microsoft Hyper-V.) Each virtual machine runs as an isolated logical partition, and the underlying hardware resources (processor, hard drive, memory, etc.) of the host server are shared among the virtual machines.
Microsoft Hyper-V is available in the following three variants:
- Hyper-V for Windows Server: An optional add-on to the Windows Server family that also includes Windows Server OS licenses for guest VMs (see Servers Direct for details).
- Hyper-V Server: A stand-alone, stripped-down and command line interface (CLI)-based version of Hyper-V with no included licenses for guest VMs.
- Client Hyper-V: A streamlined version of server Hyper-V for individual desktops and client machines.
Hyper-V offers a number of benefits. Key use cases include maximizing the usage of hardware resources, minimizing downtime through live migration and failover clustering, and reducing the data center footprint. It also assists IT teams by increasing productivity, improving manageability, and enabling R&D and troubleshooting.
Let us now explore the optimal hardware for running Microsoft Hyper-V.
Minimum hardware requirements to run Hyper-V
Hyper-V lists the following minimum system requirements:
- A 64-bit processor with Second Level Address Translation (SLAT)
- Minimum 4 GB memory
- Virtualization support enabled in BIOS/UEFI
While hardware-enforced Data Execution Prevention (DEP) is also listed as a requirement, most modern CPUs include this feature by default.
The hardware requirements listed above are fairly basic and any system that meets these criteria can become the host server for Hyper-V installation. However, in order to get the most out of a Hyper-V enabled environment, there are further hardware considerations that need to be evaluated. Since the host server’s physical hardware resources like CPU, memory, and network limit the performance of guest VMs, it is important to design and scale these resources accordingly. Let’s explore this in further detail.
CPU
While the speed of the CPU is important, it is actually the number of cores that improves the performance of VMs. The more cores and threads available, the more vCPUs can be scheduled and the better the VMs will perform.
It's important to maintain a sensible correlation between the number of physical cores on the host machine and the vCPUs allocated to guest OSes. Physical compute power should neither be under-provisioned nor over-engineered. For most workloads, a physical-core-to-vCPU ratio of 1:8 or 1:12 will suffice. Processors with larger caches will also boost performance.
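To make the ratio concrete, here is a minimal Python sketch of the sizing math. It is illustrative only; the function name and the two-core host reservation are our own assumptions, not a Hyper-V formula.

```python
import math

def physical_cores_needed(total_vcpus: int, ratio: int, host_reserved: int = 2) -> int:
    """Cores required for `total_vcpus` at `ratio` vCPUs per physical core,
    plus cores reserved for the host OS (assumed value, tune as needed)."""
    return math.ceil(total_vcpus / ratio) + host_reserved

# Example: 20 VMs with 4 vCPUs each, at the 1:8 ratio for general workloads.
print(physical_cores_needed(20 * 4, ratio=8))   # -> 12 cores
# Lighter, non-intensive workloads can stretch to 1:12.
print(physical_cores_needed(20 * 4, ratio=12))  # -> 9 cores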
Memory
Because the guest virtual machines run in memory, the physical RAM of the host server directly impacts VM performance. Mission-critical VMs should be allocated more memory than VMs running non-intensive applications such as a mail server.
While sizing the memory, it is important to remember that Hyper-V itself carries a memory overhead of 2-3%, and the host OS should also have 2-4 GB of memory available. For example, if 5 VMs are deployed on a server and each VM requires 2 GB of memory, the host server should have at least 16 GB of RAM (5 x 2 GB for the VMs, plus the host OS reservation and hypervisor overhead, rounded up to the next standard module size). Overloading the server with more VMs than the memory supports, or fitting it with far more RAM than the workload needs, would be a poor choice in this case.
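The same rule of thumb can be expressed as a quick back-of-the-envelope calculation. This rough sketch assumes the upper ends of the ranges above (3% hypervisor overhead, 4 GB host reservation); the function name is ours.

```python
def host_ram_needed_gb(vm_count: int, gb_per_vm: float,
                       host_os_gb: float = 4.0,
                       hyperv_overhead: float = 0.03) -> float:
    """Total host RAM: VM allocations plus host OS, scaled by hypervisor overhead."""
    vm_total = vm_count * gb_per_vm
    return (vm_total + host_os_gb) * (1 + hyperv_overhead)

# The example from the text: five VMs at 2 GB each.
print(round(host_ram_needed_gb(5, 2), 2))  # -> 14.42 GB, so 16 GB of RAM fits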
Storage
While storage is not exactly a hardware bottleneck, it has to be sized correctly to ensure enough space is available for the files and data of both the host and guest OSes. The total disk space required depends on the expected cumulative size of all systems combined, and a data growth rate for each VM/application should also be factored into the sizing. RAID configurations, higher spindle counts, greater RPMs, and the option to go with SSDs will all improve performance.
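As a rough illustration, a simple projection like the following can be used to factor growth into the sizing. The 20% annual growth rate and three-year horizon here are illustrative assumptions, not recommendations.

```python
def storage_needed_tb(current_total_tb: float,
                      annual_growth: float = 0.20,
                      years: int = 3) -> float:
    """Cumulative size of all systems today, compounded by annual data growth."""
    return current_total_tb * (1 + annual_growth) ** years

# Example: 10 VMs averaging 2 TB each today, growing roughly 20% per year.
print(round(storage_needed_tb(10 * 2), 1))  # -> 34.6 TB after 3 years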
Network
Although a network adapter is not listed as a hardware requirement, all traffic to and from the host server and the guest VMs will transit the available physical network interfaces. It is generally good practice to have two network adapters: one dedicated to the host OS and the other shared among the VMs. If a traffic-intensive workload is expected for a particular VM, another dedicated network interface might be considered. 2 x 10 Gig Ethernet/SFP should be taken as a minimum, while a quad-port card can be considered for heavier network traffic.
Which whitebox servers are best for Hyper-V?
Whitebox servers have been a popular choice in the data center industry for a long time and have consistently held a solid share in both units shipped and overall server revenue. Cloud service providers tend to tilt more toward whitebox servers than enterprise clients or telcos do.
Ease of customization and configurability, a lower cost of ownership, and ease of maintenance/support by virtue of using standard hardware modules are the key reasons for adopting whitebox servers over branded systems. The rise of virtualization use cases (see Why White Box Servers Beat the Big Brands) has further increased the popularity of whitebox servers. We have shortlisted two server families from our Ultra Server series to explain why they are among the most suitable hardware for running virtualized workloads. Let's take a look at the hardware configuration options for the 1029 and 2029 Ultra Server series:
| Hardware Consideration | 1029 Family | 2029 Family |
| --- | --- | --- |
| CPU | Dual Socket P (LGA 3647); 2nd Gen Intel® Xeon® Scalable Processors and Intel® Xeon® Scalable Processors, dual UPI up to 10.4 GT/s | Dual Socket P (LGA 3647); 2nd Gen Intel® Xeon® Scalable Processors and Intel® Xeon® Scalable Processors, dual UPI up to 10.4 GT/s |
| Memory | Configurable from 8 GB DDR up to 6 TB | Configurable from 8 GB DDR up to 6 TB |
| Network | Configurable from 4x GbE up to 4x 10GBase-T | Configurable from 4x GbE up to 4x 10GBase-T |
| Storage | 10x 15 TB hot-swap drive bays for NVMe, SAS/SATA, or SSD drives in multiple configurations | 24x 15 TB hot-swap drive bays for NVMe, SAS/SATA, or SSD drives in multiple configurations |

Additional Considerations
- Both the 1029 and 2029 provide dual CPU sockets with multiple CPU options, giving up to 56 cores/112 threads overall. Considering a physical-core-to-vCPU ratio of 1:8, these servers can provide 416 vCPUs (assuming 4 physical cores are dedicated to the host OS). The available 416 vCPUs can be used to architect a great number of possible VM configurations, for example 104 VMs with 4 vCPUs each, 52 VMs with 8 vCPUs each, or any combination thereof. A ratio of 1:12 for non-intensive workloads can increase these numbers further (the sketch after this list works through this arithmetic).
- These server families can support up to 6 TB of physical memory, providing plenty of headroom to allocate memory to the VMs without hitting performance bottlenecks.
- The 1029 server family has 10 drive bays with a maximum drive size of 15 TB each, which amounts to 150 TB of total storage space. The 2029 increases the number of drive bays to 24, raising the total storage to 360 TB. This provides adequate storage distribution options for the guest VMs. Both the 1029 and 2029 series also offer SSDs for improved read/write speeds.
- Both the 1029 and 2029 series also provide the option to add NVIDIA GPUs for graphics-heavy or computational workloads.
- Both server families come with a standard three-year warranty, which can be upgraded to extended or on-site support.
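To tie together the vCPU arithmetic from the first bullet, here is a short worked example, a sketch using the same assumptions as above (56 physical cores with 4 reserved for the host OS):

```python
# Worked capacity check for the dual-socket configurations above
# (illustrative; core counts and the host reservation come from the bullets).
TOTAL_CORES = 56
HOST_RESERVED = 4
usable = TOTAL_CORES - HOST_RESERVED  # 52 cores left for guest VMs

for ratio in (8, 12):
    vcpus = usable * ratio
    print(f"1:{ratio} ratio -> {vcpus} vCPUs "
          f"({vcpus // 4} VMs @ 4 vCPUs or {vcpus // 8} VMs @ 8 vCPUs)")
# 1:8  -> 416 vCPUs (104 VMs @ 4 vCPUs or 52 VMs @ 8 vCPUs)
# 1:12 -> 624 vCPUs (156 VMs @ 4 vCPUs or 78 VMs @ 8 vCPUs)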
Conclusion
The 1029 and 2029 server families score well on all the hardware considerations mentioned earlier in the article, and the available configuration options cater to a very wide range of possible requirements. From hosting a few VMs to 100-plus VMs, these server families scale well to serve a diverse set of workloads. Considering the configuration options, extensibility and scalability, and value for money and support, the 1029 and 2029 Ultra Server series are among the best server hardware for running a virtualized environment. You can read more about these server families here.