Hyper-V on Windows Server 2019 using Failover Clustering with CSV and SCVMM 2019 using the GUI

Prereqs & design

Install required roles/features

Do on every node:

  1. Server Manager → Manage → Add Roles and Features: add the Hyper-V role and the Failover Clustering feature (plus Data Center Bridging if using RoCE).

  2. If RoCE: verify DCB/PFC. (You’ll typically configure PFC/DCB on the switches; host-side QoS can be set via WAC or PowerShell.)
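For reference, a common host-side DCB/PFC sketch for RoCE. Priority 3 and the 50% bandwidth reservation are assumptions and must match what is configured on your switches:

```powershell
# Tag SMB Direct (port 445) traffic with 802.1p priority 3 (priority value is an assumption)
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable PFC only for the SMB priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve bandwidth for SMB via ETS (percentage is illustrative)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
```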

Note: SET (Switch Embedded Teaming) is PowerShell-only. If you want a strictly GUI experience for virtual switch/teaming, use one of the options below.

Host networking

Option A – Classic NIC Teaming (pure GUI)
On each host: Server Manager → Local Server → NIC Teaming → Tasks → New Team

Option B – Recommended (SCVMM Logical Switch)
If you plan to manage through SCVMM, you can skip creating vSwitches now; VMM will push a Logical Switch (with Uplink Port Profile, etc.) to all hosts later (see Step 7).
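If you are willing to run one PowerShell command per host, SET is generally preferred over classic LBFO teaming for Hyper-V; a minimal sketch (the switch name and adapter names are placeholders — check Get-NetAdapter first):

```powershell
# Create a SET-based vSwitch from two physical NICs (names are placeholders)
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "pNIC1","pNIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
# Optionally add a host vNIC for management traffic
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "ConvergedSwitch"
```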

Validate and create the cluster

  1. On any node: Failover Cluster Manager → Validate Configuration.

  2. Create Cluster (wizard):

  3. Post-create:
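A typical post-create step is configuring a quorum witness. This can be done in Failover Cluster Manager (More Actions → Configure Cluster Quorum Settings) or in PowerShell; the share path below is a placeholder:

```powershell
# File share witness (a cloud witness via -CloudWitness -AccountName/-AccessKey is another option)
Set-ClusterQuorum -FileShareWitness "\\FS01\ClusterWitness"
```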

Enable Storage Spaces Direct via Windows Admin Center

While you can enable S2D with PowerShell, Windows Admin Center (WAC) gives a full GUI:

  1. Install Windows Admin Center on a management box.

  2. Add the cluster: WAC → Add → Windows Server cluster → select the cluster.

  3. In the cluster pane: Storage → Volumes → Enable Storage Spaces Direct (wizard).

If you must use PowerShell instead of WAC:
Enable-ClusterS2D -CimSession <ClusterName> -Confirm:$false

Create virtual disks & CSVs

In WAC (recommended):
Cluster → Storage → Pools → open S2D on <Cluster> → Create:

In Failover Cluster Manager (also possible):
Cluster → Storage → Pools → Virtual Disks → New Virtual Disk … then Disks → Add to Cluster Shared Volumes.
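The same volume can be created in one PowerShell line; a sketch assuming the default S2D pool name on a cluster called HV19-CL01 (the volume name and size are placeholders):

```powershell
# Three-way mirror ReFS volume; on an S2D cluster New-Volume adds it to CSV automatically
New-Volume -StoragePoolFriendlyName "S2D on HV19-CL01" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 2TB
```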

Tune cluster networks & Live Migration

  1. Failover Cluster Manager → Cluster → Networks:

  2. Live Migration settings:
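The Live Migration settings exposed in the GUI map to a few PowerShell knobs; the values and subnet below are illustrative, not recommendations for every environment:

```powershell
# SMB-based live migration, capped at 2 simultaneous moves (values are illustrative)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB -MaximumVirtualMachineMigrations 2
# Restrict live migration to a dedicated subnet (placeholder)
Add-VMMigrationNetwork "10.0.20.0/24"
```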

Stand up SCVMM 2019

  1. Prepare SQL (SQL Server 2016+; dedicated instance recommended).

  2. Install VMM Management Server + VMM Console (GUI wizard).

  3. Create a VMM Library share (NTFS share on a file server or VMM box) and add it in Fabric → Servers → Library Servers.

  4. RunAs Accounts:

Add hosts & discover the cluster

  1. In VMM Console → Fabric:

  2. After the hosts are added successfully, VMM discovers the failover cluster automatically and lists it under All Hosts.

Fabric networking in VMM

This replaces per-host manual vSwitch setup and keeps you GUI-centric.

  1. Logical Network: Fabric → Networking → Logical Networks → Create:

  2. Port Profiles & Classifications:

  3. Logical Switch:

  4. Apply Logical Switch to hosts:

Storage classification in VMM

VM networks & templates

  1. VM Network (if using NVGRE/VLAN isolation):

  2. Templates:

Test Live Migration, HA, and resiliency
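Beyond the GUI, a quick smoke test can be scripted from any cluster node; VM and node names below are placeholders:

```powershell
# Live-migrate a clustered VM between nodes
Move-ClusterVirtualMachineRole -Name "TestVM" -Node "Node2" -MigrationType Live
# Drain a node to simulate planned maintenance, then bring it back
Suspend-ClusterNode -Name "Node2" -Drain -Wait
Resume-ClusterNode -Name "Node2" -Failback Immediate
```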

Common things to know

Minimal PowerShell (only if you skip WAC)

If you don’t want to use WAC for enabling S2D:

# From any cluster node (the validation cmdlet is Test-Cluster, not Validate-Cluster)
Test-Cluster -Node Node1,Node2,Node3,Node4 -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name HV19-CL01 -Node Node1,Node2,Node3,Node4 -StaticAddress 10.0.10.50 -NoStorage
Enable-ClusterS2D -Confirm:$false
# Then create a ReFS mirror virtual disk and CSV via the Failover Cluster Manager GUI