Earlier today, Microsoft released Windows 10 Insider Preview build 20201 to the Dev channel. As usual, that comes alongside a bunch of other releases, such as a new SDK and a new Windows Server build. What's different about today's Windows Server build, however, is that Microsoft actually published a blog post detailing what's new.
Typically, new builds show up for download, and Microsoft doesn't even acknowledge their existence. Here's the full changelog:
CoreNet: Data Path and Transports
- MsQuic – an open source implementation of the IETF QUIC transport protocol powers both HTTP/3 web processing and SMB file transfers.
- UDP performance improvements — UDP is becoming a very popular protocol, carrying more and more networking traffic. With the QUIC protocol built on top of UDP, and the increasing popularity of RTP and custom (UDP) streaming and gaming protocols, it is time to bring the performance of UDP to a level on par with TCP. In Server vNext we include the game-changing UDP Segmentation Offload (USO). USO moves most of the work required to send UDP packets from the CPU to the NIC's specialized hardware. Complementing USO in Server vNext, we include UDP Receive Side Coalescing (UDP RSC), which coalesces packets and reduces CPU usage for UDP processing. To go along with these two new enhancements, we have made hundreds of improvements to the UDP data path, both transmit and receive.
- TCP performance improvements — Server vNext uses TCP HyStart++ to reduce packet loss during connection startup (especially in high-speed networks), and SendTracker + RACK to reduce Retransmit Timeouts (RTOs). These features are enabled in the transport stack by default and provide a smoother network data flow with better performance at high speeds.
- PktMon support in TCPIP — The cross-component network diagnostics tool for Windows now has TCPIP support providing visibility into the networking stack. PktMon can be used for packet capture, packet drop detection, packet filtering and counting for virtualization scenarios, like container networking and SDN.
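Most of these transport changes light up automatically, but they can be inspected from PowerShell. A minimal sketch, assuming an adapter named "Ethernet" and PktMon's default log location (both are placeholders, and pktmon's switches vary slightly between builds):

```powershell
# Check for UDP offload capabilities surfaced by the NIC driver
Get-NetAdapterAdvancedProperty -Name "Ethernet" |
    Where-Object DisplayName -like "*UDP*" |
    Select-Object DisplayName, DisplayValue

# Inspect the TCP transport templates (HyStart++ and RACK are on by default
# and are not exposed as separate toggles here)
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider, AutoTuningLevelLocal

# Capture SMB traffic with PktMon and dump it to text for a quick look
pktmon filter add SMB -p 445
pktmon start --capture   # older builds use: pktmon start --etw
# ... reproduce the traffic you care about ...
pktmon stop
pktmon format PktMon.etl -o smb-capture.txt
```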
(Improved) RSC in the vSwitch
RSC in the vSwitch has been improved for better performance. First released in Windows Server 2019, Receive Segment Coalescing (RSC) in the vSwitch enables packets to be coalesced and processed as one larger segment upon entry into the virtual switch. This greatly reduces the CPU cycles consumed processing each byte (Cycles/byte).
However, in its original form, once traffic exited the virtual switch, it would be re-segmented for travel across the VMBus. In Windows Server vNext, segments will remain coalesced across the entire data path until processed by the intended application. This improves two scenarios:
- Traffic from an external host, received by a virtual NIC
- Traffic from a virtual NIC to another virtual NIC on the same host
These improvements to RSC in the vSwitch will be enabled by default; there is no action required on your part.
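There is indeed nothing to configure, but if you want to confirm the vSwitch is coalescing, the switch object exposes a software-RSC flag. A rough sketch; the property and parameter names below come from the Windows Server 2019 implementation and are assumptions for this preview build:

```powershell
# Check whether software RSC is enabled on each virtual switch
Get-VMSwitch | Select-Object Name, SoftwareRscEnabled

# It should already be on; re-enable it explicitly if it was turned off
Set-VMSwitch -Name "External" -EnableSoftwareRsc $true
```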
Direct Server Return (DSR) load balancing support for Containers and Kubernetes
DSR is an implementation of asymmetric network load distribution in load-balanced systems, meaning that the request and response traffic use different network paths. Using different network paths helps avoid extra hops and reduces latency, which not only speeds up the response time between the client and the service but also removes some extra load from the load balancer.
Using DSR is a transparent way to achieve increased network performance for your applications with little to no infrastructure changes.
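For Kubernetes on Windows, DSR is normally switched on through kube-proxy. A hedged example of the flags involved (these are the upstream kube-proxy flag and feature-gate names; your distribution or CNI tooling may wrap them differently, and NODE1 is a placeholder):

```powershell
# DSR requires the WinDSR feature gate plus the explicit enable flag
kube-proxy.exe --proxy-mode=kernelspace `
    --feature-gates="WinDSR=true" `
    --enable-dsr=true `
    --hostname-override=NODE1
```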
Introducing Virtual Machine (Role) Affinity/AntiAffinity rules with Failover Clustering
In the past, we have relied on the group property AntiAffinityClassNames to keep roles apart, but there was no site-specific awareness. If there was a DC that needed to be in one site and a DC that needed to be in another site, it wasn't guaranteed. It was also important to remember to type the correct AntiAffinityClassNames string for each role.
The following PowerShell cmdlets are available:
- New-ClusterAffinityRule = This allows you to create a new Affinity or AntiAffinity rule. There are four different rule types (-RuleType):
  - DifferentFaultDomain = keep the groups on different fault domains
  - DifferentNode = keep the groups on different nodes (note: these could be on the same or different fault domains)
  - SameFaultDomain = keep the groups on the same fault domain
  - SameNode = keep the groups on the same node
- Set-ClusterAffinityRule = This allows you to enable (default) or disable a rule
- Add-ClusterGroupToAffinityRule = Add a group to an existing rule
- Get-ClusterAffinityRule = Display all or specific rules
- Add-ClusterSharedVolumeToAffinityRule = This is for storage Affinity/AntiAffinity where Cluster Shared Volumes can be added to current rules
- Remove-ClusterAffinityRule = Removes a specific rule
- Remove-ClusterGroupFromAffinityRule = Removes a group from a specific rule
- Remove-ClusterSharedVolumeFromAffinityRule = Removes a specific Cluster Shared Volume from a specific rule
- Move-ClusterGroup -IgnoreAffinityRule = This is not a new cmdlet, but it does allow you to forcibly move a group to a node or fault domain that would otherwise be prevented. In PowerShell, Cluster Manager, and Windows Admin Center, the group would be shown as in violation as a reminder.
Now you can keep things together or apart. When moving a role, the affinity object ensures that it can be moved. The object also looks for other objects and verifies those as well, including disks, so you can have storage affinity between virtual machines (or roles) and Cluster Shared Volumes if desired. You can add roles to multiple rules, such as domain controllers, for example. You can set an AntiAffinity rule so that the DCs remain in different fault domains. You can then set an Affinity rule for each of the DCs to their specific CSV drive so they stay together. If you have SQL Server VMs that need to be on each site with a specific DC, you can set an Affinity rule of same fault domain between each SQL VM and its respective DC. Because it is now a cluster object, if you were to try to move a SQL VM from one site to another, the cluster checks all objects associated with it. It sees there is a pairing with the DC in the same site. It then sees that the DC has a rule and verifies it. It sees that the DC cannot be in the same fault domain as the other DC, so the move is disallowed.
There are built-in overrides so that you can force a move when necessary. You can also easily disable or enable rules if desired, as compared to AntiAffinityClassNames with ClusterEnforcedAffinity, where you had to remove the property to get the role to move and come online. We have also added functionality to node drain: if a group must move to another fault domain and an AntiAffinity rule would prevent it, the rule is bypassed. Any rule violations are exposed in both Failover Cluster Manager and Windows Admin Center for your review.
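Putting the cmdlets together for the scenario described above (two DCs in different sites, each paired with a SQL Server VM and a CSV): only -RuleType and -IgnoreAffinityRule are named in the post, so the other parameter names used here (-Name, -Groups, -ClusterSharedVolumes) and the group names are illustrative guesses.

```powershell
# Keep the two domain controllers in different fault domains (sites)
New-ClusterAffinityRule -Name DCAntiAffinity -RuleType DifferentFaultDomain
Add-ClusterGroupToAffinityRule -Name DCAntiAffinity -Groups DC1, DC2

# Keep each SQL Server VM in the same fault domain as its local DC
New-ClusterAffinityRule -Name Site1Pair -RuleType SameFaultDomain
Add-ClusterGroupToAffinityRule -Name Site1Pair -Groups DC1, SQL1

# Storage affinity: keep DC1 together with its CSV
New-ClusterAffinityRule -Name DC1Storage -RuleType SameFaultDomain
Add-ClusterGroupToAffinityRule -Name DC1Storage -Groups DC1
Add-ClusterSharedVolumeToAffinityRule -Name DC1Storage -ClusterSharedVolumes "Cluster Disk 1"

# Review the rules, and force a move past them only when you have to
Get-ClusterAffinityRule
Move-ClusterGroup -Name SQL1 -Node NODE3 -IgnoreAffinityRule
```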
Flexible BitLocker Protector for Failover Clusters
BitLocker has been available for Failover Clustering for quite some time. The requirement was that the cluster nodes must all be in the same domain, as the BitLocker key is tied to the Cluster Name Object (CNO). However, for clusters at the edge, workgroup clusters, and multidomain clusters, Active Directory may not be present. With no Active Directory, there is no CNO. These cluster scenarios had no data-at-rest security. Starting with this Windows Server Insider build, we have introduced our own BitLocker key, stored locally (encrypted), for the cluster to use. This additional key will only be created when the clustered drives are BitLocker-protected after cluster creation.
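Because the new protector is created for you, day-to-day usage looks like standard BitLocker on a Cluster Shared Volume. A minimal sketch, assuming a CSV mounted at C:\ClusterStorage\Volume1 and a cluster disk resource named "Cluster Disk 1" (both placeholders); the volume goes into maintenance mode while it is being encrypted:

```powershell
# Put the CSV into maintenance mode so BitLocker can take the volume
Suspend-ClusterResource -Name "Cluster Disk 1"

# Encrypt with a recovery password protector; per the post above, the
# cluster's own local key protector is added automatically on AD-less clusters
Enable-BitLocker -MountPoint "C:\ClusterStorage\Volume1" `
    -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector

# Bring the volume back into the cluster
Resume-ClusterResource -Name "Cluster Disk 1"
```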
New Cluster Validation network tests
Networking configurations continue to get more and more complex. A new set of Cluster Validation tests has been added to help validate that the configurations are set properly. These tests include:
- List Network Metric Order (driver versioning)
- Validate Cluster Network Configuration (virtual switch configuration)
- Validate IP Configuration Warning
- Network Communication Success
- Switch Embedded Teaming Configurations (symmetry, vNIC, pNIC)
- Validate Windows Firewall Configuration Success
- QoS (PFC and ETS) have been configured
(Note regarding the QoS settings above: this does not imply that the settings are valid, simply that they are implemented. These settings must match your physical network configuration, and as such, we cannot validate that they are set to the appropriate values.)
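If you want to run just these checks rather than a full validation pass, Test-Cluster can scope the run; note that the display names of the new tests above may differ slightly in the shipping build.

```powershell
# List every test the build knows about, then run only the network category
Test-Cluster -List
Test-Cluster -Node NODE1, NODE2 -Include "Network"
```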
Server Core container images are 20 percent smaller
In what should be a significant win for any workflow that pulls Windows container images, the download size of the Windows Server Core container Insider image has been reduced by 20%. This has been achieved by optimizing the set of .NET pre-compiled native images included in the Server Core container image. If you are using .NET Framework with Windows containers, including Windows PowerShell, use a .NET Framework image, which will include additional .NET pre-compiled native images to maintain performance for those scenarios while also benefiting from the reduced size.
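To see the difference yourself, you can pull the Insider Server Core image from the existing mcr.microsoft.com Insider repository and compare sizes; `<build>` below is a placeholder for whichever Insider tag matches your host build.

```powershell
# Pull the Server Core Insider container image (replace <build> with the tag
# that matches your Insider host)
docker pull mcr.microsoft.com/windows/servercore/insider:<build>

# Compare the on-disk size against an earlier pull
docker images mcr.microsoft.com/windows/servercore/insider
```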
What’s new with the SMB protocol
Raising the security bar even further, SMB now supports AES-256 Encryption. There is also increased performance when using SMB encryption or signing with SMB Direct with RDMA enabled network cards. SMB now also has the ability to do compression to improve network performance.
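On the PowerShell side this maps to the SMB server and share cmdlets. A hedged sketch: -EncryptData is long-standing, while the cipher-list and compression parameters shown here are assumptions based on how later Windows releases expose these features, and the share and server names are placeholders.

```powershell
# Require encryption for all sessions, preferring AES-256 (cipher-list
# parameter is an assumption for this preview build)
Set-SmbServerConfiguration -EncryptData $true
Set-SmbServerConfiguration -EncryptionCiphers "AES_256_GCM, AES_128_GCM"

# Or require encryption per share
Set-SmbShare -Name "Data" -EncryptData $true

# Request SMB compression when mapping a share from a client
New-SmbMapping -LocalPath "Z:" -RemotePath "\\server\Data" -CompressNetworkTraffic $true
```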
According to Microsoft, this build of Windows Server vNext is actually for the next Long-Term Servicing Channel release, and it includes both the Desktop Experience and Server Core. If you want to download it, you can find it here.