Hardware: 2–16 nodes, identical or similar. If using RDMA, prefer RoCEv2 (needs DCB/PFC) or iWARP (no DCB).
Disks: Mix of NVMe/SSD/HDD is fine; each node needs local disks for S2D (no shared SAN).
Licensing/Edition: Windows Server 2019 Datacenter on all nodes.
Networking (typical):
Mgmt: 1× 1/10GbE
Storage (S2D+CSV/SMB): 2× 10/25/40GbE RDMA NICs
Optional: 1× Live Migration
AD/DNS:
Put the Hyper-V hosts in a dedicated OU.
Either pre-stage the Cluster Name Object (CNO), or grant Create Computer Objects to the hosts’ computer accounts on the OU (this avoids “logon type”/CNO creation errors).
Time/patching: Same patch level, BIOS/firmware consistent, time in sync.
Security: If using BitLocker on data volumes, plan for cluster-aware key management.
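A quick PowerShell pre-flight on each node can confirm these prerequisites; a minimal sketch (output columns vary by driver):
# Per-node sanity check
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed
Get-NetAdapterRdma | Format-Table Name, Enabled              # RDMA-capable NICs should show Enabled = True
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, MediaType, Size   # disks eligible for S2D
w32tm /query /status                                         # confirm time sync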
Do on every node:
Server Manager → Manage → Add Roles and Features:
Roles: Hyper-V, Failover Clustering
Features: Multipath I/O (optional), Data Center Bridging (if RoCE), Hyper-V Management Tools, Failover Cluster Management Tools
If RoCE: configure DCB/PFC. There is no built-in GUI for host-side DCB; you'll typically configure PFC/ETS on the switches, and set host-side QoS via WAC or PowerShell (see the sketch below).
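If you'd rather script the per-node setup, a minimal sketch follows; the feature list matches the wizard above, and the QoS values (priority 3, 50% bandwidth, NIC names RDMA1/RDMA2) are placeholders that must match your switch configuration:
# Roles/features in one shot (reboots the node)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Data-Center-Bridging, Multipath-IO -IncludeManagementTools -Restart
# Host-side PFC/ETS for RoCE: tag SMB Direct (port 445) with priority 3 and give it an ETS class
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "RDMA1","RDMA2"   # substitute your RDMA NIC names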
Note: SET (Switch Embedded Teaming) is PowerShell-only (a short SET sketch follows the two options below). If you want a strictly GUI experience for virtual switch/teaming, you can:
Use NIC Teaming in Server Manager (classic team) + vSwitch on top OR
Let SCVMM deploy a Logical Switch (recommended).
Option A – Classic NIC Teaming (pure GUI)
On each host: Server Manager → Local Server → NIC Teaming → Tasks → New Team
Create a team for Mgmt/VM traffic as desired. Leave the 2× RDMA storage NICs out of a classic team (LBFO teaming disables RDMA on the team interface); keep them standalone or use SET for them.
Then Hyper-V Manager → Virtual Switch Manager… → create an External vSwitch bound to the appropriate team or NIC.
Option B – Recommended (SCVMM Logical Switch)
If you plan to manage through SCVMM, you can skip creating vSwitches now; VMM will push a Logical Switch (with Uplink Port Profile, etc.) to all hosts later (see Step 7).
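If you do opt for SET despite the missing GUI, it is only a few cmdlets per host. A minimal sketch, where the switch name, NIC names, vNIC names, and VLAN ID are all examples:
# SET switch across two physical NICs, plus host vNICs for Mgmt and storage
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "Mgmt" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB1" -Access -VlanId 20
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"   # re-enable RDMA on the storage vNIC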
On any node: Failover Cluster Manager → Validate Configuration.
Add all nodes → run all tests. Fix reds before proceeding.
Create Cluster (wizard):
Select nodes → Access Point for Admin = Cluster Name (e.g., HV19-CL01); ensure DNS registration.
When the wizard asks to add all eligible storage to the cluster, leave it unchecked; S2D claims the local disks itself, and finding no shared disks is expected.
Post-create:
In Failover Cluster Manager → right-click the cluster → More Actions → Configure Cluster Quorum Settings. The default (node majority with dynamic quorum) is fine for odd node counts; add a file share or cloud witness for 2-node/even-node clusters and stretched designs (also scriptable; see below).
In AD Users & Computers: confirm the CNO got created, and it can create VCOs (computer objects) for things like Scale-Out File Server if you add it.
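The quorum witness can also be set from PowerShell; a sketch, assuming a file server share or an Azure storage account you already have (FS01, the share name, and the account name are placeholders):
# Pick one witness type
Set-ClusterQuorum -Cluster HV19-CL01 -FileShareWitness \\FS01\Witness
Set-ClusterQuorum -Cluster HV19-CL01 -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage account key>"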
While you can enable S2D with PowerShell, Windows Admin Center (WAC) gives a full GUI:
Install Windows Admin Center on a management box.
Add the cluster: WAC → Add → Windows Server cluster → select the cluster.
In the cluster pane: Storage → Volumes → Enable Storage Spaces Direct (wizard).
It detects eligible disks → Enable.
After a few minutes you’ll see a Storage Pool named S2D on <ClusterName>.
If you must use PowerShell instead of WAC:
Enable-ClusterS2D -CimSession <ClusterName> -Confirm:$false
In WAC (recommended):
Cluster → Storage → Pools → open S2D on <Cluster> → Create:
Virtual disk layout: Mirror (2-way on 2 nodes; 3-way recommended for 3+ nodes), or Mirror-accelerated parity (needs 4+ nodes) for archive/backup data.
Filesystem: ReFS (recommended for Hyper-V).
Make this volume a Cluster Shared Volume: Yes (CSV).
Give it a friendly name (e.g., CSV01).
In Failover Cluster Manager (also possible):
Cluster → Storage → Pools → Virtual Disks → New Virtual Disk … then Disks → Add to Cluster Shared Volumes.
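The same volume can be created in one PowerShell call if you prefer; a sketch where the size and names are examples:
# ReFS mirror CSV on the S2D pool (run from a management box or any node)
New-Volume -CimSession HV19-CL01 -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB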
Failover Cluster Manager → Cluster → Networks:
Rename networks (e.g., Mgmt, LM, S2D/CSV).
Right-click each → Properties:
Storage/CSV network: Allow cluster network communication on this network (checked); Allow clients to connect through this network (unchecked).
Mgmt: Allow clients to connect.
Live Migration settings:
In Failover Cluster Manager, select Networks and open Live Migration Settings in the Actions pane to choose and order the LM network. Then per host: Hyper-V Manager → host → Hyper-V Settings… → Live Migrations → set the maximum simultaneous migrations and the performance option (Compression or SMB); see the sketch below.
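Both the network roles and the per-host migration settings are scriptable; a sketch, assuming the network names used above:
# Cluster network roles: 1 = cluster only, 3 = cluster and client
(Get-ClusterNetwork "S2D/CSV").Role = 1
(Get-ClusterNetwork "Mgmt").Role = 3
# Per-host Live Migration settings (run on every node); SMB assumes an RDMA storage network
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2 -VirtualMachineMigrationPerformanceOption SMB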
Prepare SQL (SQL Server 2016+; dedicated instance recommended).
Install VMM Management Server + VMM Console (GUI wizard).
Create a VMM Library share (NTFS share on a file server or VMM box) and add it in Fabric → Servers → Library Servers.
RunAs Accounts:
VMM → Settings → Run As Accounts: add the domain/installer accounts needed for host provisioning (a least-privilege account is fine if you've already granted Create Computer Objects on the OU to the hosts/CNO).
In VMM Console → Fabric:
Add Resources → Hyper-V Hosts and Clusters → Windows Server computers in a trusted Active Directory domain.
Provide credentials (Run As) → discover the hostnames → Add.
After hosts add successfully, VMM will discover the Failover Cluster automatically and list it under All Hosts.
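The same onboarding can be scripted from the VMM PowerShell module if needed; a sketch where the Run As account name, host group, and cluster FQDN are examples:
# Run inside the VMM PowerShell session (virtualmachinemanager module)
$runAs = Get-SCRunAsAccount -Name "HostInstaller"
$group = Get-SCVMHostGroup -Name "All Hosts"
Add-SCVMHostCluster -Name "HV19-CL01.contoso.com" -Credential $runAs -VMHostGroup $group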
Deploying a VMM Logical Switch (this step) replaces per-host manual vSwitch setup and keeps you GUI-centric.
Logical Network: Fabric → Networking → Logical Networks → Create:
Example: Prod-Fabric.
Network sites per datacenter/switch domain with VLANs (e.g., Mgmt 10, Storage 20, LM 30, VM 100–199).
Create IP Pools for Mgmt / VM networks if VMM should hand out addresses.
Port Profiles & Classifications:
Uplink Port Profile (e.g., Uplink-2x25G): define trunk VLANs for the host uplinks.
Port Classifications: define traffic types (e.g., HostMgmt, CSV/Storage, LiveMigration, TenantVM).
Logical Switch:
Fabric → Networking → Logical Switches → Create:
Associate Uplink Port Profile and Port Classifications.
Enable SR-IOV if supported; set minimum bandwidth weight for Storage/LM classes.
Apply Logical Switch to hosts:
Fabric → Servers → All Hosts → select host → Properties → Virtual Switches → New Virtual Switch (choose your Logical Switch).
Pick the physical NICs (or team) that go to your top-of-rack uplinks.
VMM will create the vSwitch and host vNICs per classification (Mgmt/CSV/LM), tag VLANs, and set QoS. Repeat for all hosts (or apply to the Host Group).
Fabric → Storage:
Once the cluster is under VMM management, its Cluster Shared Volumes (backed by the S2D pool) show up under the cluster's storage.
Create Storage Classifications (e.g., Gold-Mirror, Silver-MAP) and assign to the CSVs for template policies.
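Creating the classifications themselves is also a one-liner each in VMM PowerShell (names are the examples above; assigning them to CSVs can stay in the GUI):
New-SCStorageClassification -Name "Gold-Mirror" -Description "3-way mirror CSVs"
New-SCStorageClassification -Name "Silver-MAP" -Description "Mirror-accelerated parity CSVs"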
VM Network (if using NVGRE/VLAN isolation):
Fabric → Networking → VM Networks → Create → bind to Logical Network and VLAN/IP Pool.
Templates:
Library → import a Sysprepped VHDX (Gen2, ReFS on CSV).
Create VM Template wizard: OS profile, hardware profile (vNIC classification, vCPU/RAM), placement on cluster/CSV.
In VMM or Failover Cluster Manager, Live Migrate a running VM between nodes.
Pause a node with Drain Roles to validate VM evacuation and CSV I/O continuity, then resume it.
Check Health Service in WAC for S2D health (drives, virtual disks, repair jobs).
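The same spot checks can be scripted; a sketch where the VM and node names are examples:
# Live migrate, drain a node, and check S2D health
Move-ClusterVirtualMachineRole -Cluster HV19-CL01 -Name "TestVM01" -Node NODE2 -MigrationType Live
Suspend-ClusterNode -Cluster HV19-CL01 -Name NODE1 -Drain    # later: Resume-ClusterNode -Name NODE1 -Failback Immediate
Get-StorageSubSystem Cluster* | Get-StorageHealthReport      # overall S2D health
Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus
Get-StorageJob                                               # repair/rebalance jobs in flight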
CNO/VCO permissions: The cluster computer object (CNO) or the host accounts need Create Computer Objects on the OU. Missing permissions cause cluster creation/storage role failures and “logon type” errors.
RoCE without PFC/DCB: You’ll see dropped SMB Direct traffic and flaky performance. Ensure switches and hosts have PFC for the storage class.
Mixing classic GUI NIC Teaming with a VMM Logical Switch: let VMM own host networking (preferred) or keep it fully manual; don't do half of each on the same hosts.
BitLocker on S2D volumes: Plan keys/guarded hosts; don’t enable ad-hoc per node.
Driver/firmware drift: Keep NIC/storage firmware aligned; S2D is sensitive to HBA/NIC driver versions.
ReFS recommendation: for best Hyper-V performance (block cloning speeds up checkpoint merges and fixed VHDX creation), format CSVs as ReFS, not NTFS.
If you don’t want to use WAC for enabling S2D:
# From any cluster node
Test-Cluster -Node Node1,Node2,Node3,Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
New-Cluster -Name HV19-CL01 -Node Node1,Node2,Node3,Node4 -StaticAddress 10.0.10.50 -NoStorage   # -NoStorage: let Enable-ClusterS2D claim the disks
Enable-ClusterS2D -Confirm:$false
# Then create a ReFS mirror virtual disk and CSV via Failover Cluster Manager GUI