Hosts: 4 × Windows Server 2019 Datacenter (recommended for S2D), domain-joined
NICs per host (minimum):
2 × 25/10 GbE for S2D/SMB Direct (RoCEv2 or iWARP)
2 × 10 GbE for Host Mgmt / VM / Live Migration (or team via SET)
Switch: DCB enabled if using RoCEv2 (ETS/PFC)
Storage: All-flash or hybrid (NVMe/SSD + HDD), HBA/JBOD pass-through. No RAID.
Fabric mgmt: SCVMM 2019 + SQL, with a Run As account that’s local admin on hosts + rights to create cluster objects in AD.
Witness: Azure Cloud Witness (recommended) or File Share witness.
IP plan (sample):
Mgmt: 10.10.0.0/24
Storage (SMB/S2D): 172.16.0.0/24 (dedicated)
Live Migration: 10.20.0.0/24
Cluster: uses above networks (Cluster-Only/Cluster-And-Client as appropriate)
Update BIOS, NIC, HBA, and NVMe/SSD firmware to the vendor's S2D-validated (HCL) baselines.
In storage controllers: Disable RAID, set drives to JBOD/HBA mode. Enable NVMe if applicable.
Install Windows Server 2019 Datacenter, latest cumulative updates.
Join all hosts to the same AD domain. Ensure consistent time/NTP.
Name resolution: hosts resolve each other by A records; reverse lookup zones present.
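A quick sanity check on each host (a sketch; "contoso.local" and host names are placeholders):
Resolve-DnsName Host1.contoso.local   # repeat for each peer
Test-ComputerSecureChannel            # domain trust; should return True
w32tm /query /status                  # confirm a healthy time source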
PowerShell on each host (elevated):
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging, RSAT-Clustering-PowerShell, RSAT-Hyper-V-Tools -IncludeManagementTools -Restart
If using BitLocker on data drives, plan to enable Auto-unlock for S2D volumes later (don’t encrypt before pool creation).
Enable DCB/ETS on switches and hosts. Configure PFC for SMB (priority 3 typically).
If using iWARP, DCB is not required.
Example host config (adjust NIC names):
# Identify NICs for Storage (RDMA)
Get-NetAdapter | Where-Object {$_.Name -like "Storage*"} | Enable-NetAdapterRdma
# If RoCEv2: enable DCB and set ETS/PFC (example; use vendor guidance for exact priorities)
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS # ETS reservation; 50% is a common starting point
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Set-NetQosDcbxSetting -Willing $false
Enable-NetAdapterQos -Name "Storage1","Storage2"
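Worth verifying the RDMA/QoS state before moving on (in-box cmdlets):
# Confirm RDMA is enabled on the storage NICs and visible to SMB
Get-NetAdapterRdma -Name "Storage1","Storage2" | ft Name, Enabled
Get-SmbClientNetworkInterface | ft FriendlyName, RdmaCapable
# Confirm the QoS policy and flow-control priorities applied
Get-NetQosPolicy
Get-NetQosFlowControl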
On each host, create a Hyper-V SET switch for VM/Live Migration/mgmt (or separate switches if you prefer):
New-VMSwitch -Name "vSwitch-Host" -NetAdapterName "Mgmt1","Mgmt2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
# Tag VLANs if needed
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "vSwitch-Host" -Access -VlanId 10 # Mgmt VLAN example
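If Live Migration rides a host vNIC on the SET switch rather than a dedicated physical NIC, add one (a sketch; VLAN 20 and the IP are placeholders from the sample plan):
Add-VMNetworkAdapter -ManagementOS -Name "LM" -SwitchName "vSwitch-Host"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LM" -Access -VlanId 20
New-NetIPAddress -InterfaceAlias "vEthernet (LM)" -IPAddress 10.20.0.11 -PrefixLength 24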
Assign dedicated IPs for Storage NICs (no default gateway on storage VLAN):
New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 172.16.0.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Storage2" -IPAddress 172.16.0.12 -PrefixLength 24
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP -VirtualMachineMigrationPerformanceOption SMB # CredSSP requires initiating moves from the source host; use Kerberos + constrained delegation for remote management
Add-VMMigrationNetwork 10.20.0.0/24
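Optionally cap concurrent migrations and confirm the settings (limits are illustrative):
Set-VMHost -MaximumVirtualMachineMigrations 2 -MaximumStorageMigrations 2
Get-VMHost | fl VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType, VirtualMachineMigrationPerformanceOption
Get-VMMigrationNetwork   # should list 10.20.0.0/24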
On one host, validate disks are visible as CanPool:
Get-PhysicalDisk | ft FriendlyName, CanPool, MediaType, Size, HealthStatus
All S2D-candidate disks must be CanPool = True (system OS disk won’t be). If false, clear metadata:
Get-PhysicalDisk -CanPool $False | Where MediaType -ne "Unspecified" # sanity check
# If needed (be careful):
# Reset-PhysicalDisk -FriendlyName "PhysicalDiskX"
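A fuller cleanup sketch (DESTRUCTIVE; wipes every non-OS disk on the host, so run only where that is intended):
Get-Disk | Where-Object { -not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne "RAW" } |
    ForEach-Object { Clear-Disk -Number $_.Number -RemoveData -RemoveOEM -Confirm:$false }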
From any host (or an admin workstation):
Test-Cluster -Node Host1,Host2,Host3,Host4 -Include "Storage Spaces Direct","Inventory","Network","System Configuration" -Verbose
Fix all errors; warnings about S2D disks are normal before Enable-ClusterS2D.
New-Cluster -Name HVCL01 -Node Host1,Host2,Host3,Host4 -StaticAddress 10.10.0.50 -NoStorage
Azure Cloud Witness (recommended):
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<Key>"
(or file share witness if preferred)
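For example (share path is a placeholder; grant the cluster computer object HVCL01$ full control on the share):
Set-ClusterQuorum -FileShareWitness "\\FS01\Witness-HVCL01"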
From one node:
Enable-ClusterS2D -Confirm:$false -CacheState Enabled
# or explicit:
# Enable-ClusterS2D -Autoconfig:$true -Confirm:$false
This creates a clustered storage pool named "S2D on HVCL01" (after the cluster name).
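Verify the pool and cache state before carving volumes:
Get-StoragePool -IsPrimordial $false | ft FriendlyName, OperationalStatus, HealthStatus
Get-ClusterStorageSpacesDirect | fl State, CacheState, CacheModeHDD, CacheModeSSD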
# Discover media types
Get-PhysicalDisk | Group MediaType, CanPool | ft Count, Name
# Create tiers (Enable-ClusterS2D usually creates default tiers automatically; create manually only if absent)
New-StorageTier -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "Capacity" -MediaType HDD
New-StorageTier -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "Performance" -MediaType SSD
Example: two multi-tier volumes (SSD performance tier + HDD capacity tier):
# Example sizes—adjust to capacity
New-Volume -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "CSV01" `
-FileSystem ReFS -StorageTierFriendlyNames Performance,Capacity `
-StorageTierSizes 500GB,2TB -AllocationUnitSize 64KB
New-Volume -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "CSV02" `
-FileSystem ReFS -StorageTierFriendlyNames Performance,Capacity `
-StorageTierSizes 500GB,2TB -AllocationUnitSize 64KB
These auto-mount as Cluster Shared Volumes under C:\ClusterStorage\CSV01, etc.
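Confirm the CSVs are online and where they mounted:
Get-ClusterSharedVolume | ft Name, State
(Get-ClusterSharedVolume).SharedVolumeInfo | ft FriendlyVolumeName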
(Optional) CSV cache for read-heavy Hyper-V:
(Get-Cluster).BlockCacheSize = 2048 # in MB
# Live migration: SMB over storage/LM network
(Get-Cluster).RouteHistoryLength = 30 # optional visibility tuning
Get-ClusterNetwork | ? {$_.Role -eq 1} | % {$_.Metric = 100} # example metric tuning; network Metric is set by direct property assignment, not Set-ClusterParameter
# Enable VMQ/VMMQ on mgmt NICs (vendor guidance), SR-IOV if supported
# Ensure Time Sync, Integration Services (2019 uses in-box components)
SCVMM 2019 installed + latest UR.
Library share created (ISO, scripts, templates).
Create a Run As Account with:
Local admin on all hosts
Permission to create the cluster AD computer object (or pre-stage it)
VMM console → Fabric → Servers → Add Hyper-V Hosts and Clusters:
Windows Server computers in a trusted Active Directory domain
Supply Run As account
Add Host1…Host4 → place into a Host Group (e.g., HG-Prod)
VMM will discover the existing Failover Cluster automatically.
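The same steps are scriptable via the VMM PowerShell module (a sketch; the Run As account name and cluster FQDN are placeholders):
$ra = Get-SCRunAsAccount -Name "RA-HostAdmin"
$hg = Get-SCVMHostGroup -Name "HG-Prod"
Add-SCVMHostCluster -Name "HVCL01.contoso.local" -VMHostGroup $hg -Credential $ra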
Fabric → Networking:
Logical Network: LN-Mgmt, LN-Storage, LN-LM, LN-VM (with associated sites/VLANs).
IP Pools for Mgmt and Live Migration (Storage often static/no gateway).
Fabric → Networking → Port Profiles / Logical Switches:
Uplink Port Profile (e.g., UPP-SET) with the correct uplink mode (Embedded Teaming/SET), map to Logical Networks (Mgmt, VM, LM).
Port Classification: e.g., HostMgmt, VMNetwork, LiveMig.
Logical Switch LS-Prod using UPP-SET, add classifications.
Apply the Logical Switch to each host NIC team (vSwitch-Host). VMM can standardize vNICs on the host (e.g., Mgmt, LM).
For each Logical Network, create VM Network(s) (and subnets/VLANs) for tenant workloads.
Optionally create Port Profiles for VM NIC QoS.
Fabric → Storage:
VMM discovers S2D storage via the cluster.
Create Storage Classifications (e.g., Gold for NVMe/SSD tier, Silver for hybrid).
Classify the CSVs accordingly so templates can target them.
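Classifications can also be created from PowerShell (a sketch; names match the examples above, and the CSV-to-classification mapping can be finished in the console):
New-SCStorageClassification -Name "Gold" -Description "NVMe/SSD tier"
New-SCStorageClassification -Name "Silver" -Description "Hybrid tier"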
Create a Guest OS Profile and Hardware Profile (Gen 2, Secure Boot on, vNIC Port Classification, vCPU/RAM).
Create a VM Template from an ISO/sysprep’d VHDX.
Deploy a test VM to CSV01 using placement rules.
In Failover Cluster Manager, Move the test VM between nodes (Live Migration).
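Or scripted (VM name is a placeholder):
Move-ClusterVirtualMachineRole -Name "TestVM01" -Node Host2 -MigrationType Live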
Pull a drive (in a maintenance window) or pause a node to validate S2D resiliency.
Check Storage Health:
Get-StoragePool -FriendlyName "S2D on HVCL01" | Get-PhysicalDisk | ft FriendlyName, HealthStatus
Get-StorageSubSystem Cluster* | Get-StorageHealthReport
Use Cluster Aware Updating (CAU) or VMM servicing windows.
Drain roles, pause node, patch, resume.
Keep driver/firmware + OS cumulative updates aligned across all hosts.
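A one-off CAU pass looks like this (a sketch; add the CAU clustered role for scheduled self-updating):
Invoke-CauRun -ClusterName HVCL01 -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 1 -RequireAllNodesOnline -Force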
Shielded VMs / HGS (if needed): plan Host Guardian Service before enabling Shielded VMs.
BitLocker on S2D volumes: enable after volumes exist:
$vol = Get-Volume -FileSystemLabel "CSV01"
Enable-BitLocker -MountPoint $vol.Path -UsedSpaceOnly -RecoveryPasswordProtector
# Add the cluster name object (CNO) as a protector so every node can unlock the CSV; TPM protectors apply only to the OS volume
Add-BitLockerKeyProtector -MountPoint $vol.Path -ADAccountOrGroupProtector -ADAccountOrGroup "<Domain>\HVCL01$"
Backup: Veeam/MABS/DPM with VSS-aware cluster backups targeting CSVs.
“Enable-ClusterS2D” fails: disks not JBOD/HBA, stale metadata, mixed sector sizes, unsupported media mix. Use Get-PhysicalDisk and vendor firmware baselines.
Run As account errors in VMM: ensure "Allow log on locally" and "Allow log on through Remote Desktop Services" rights on hosts, local admin membership, and AD rights to create the cluster computer object in the target OU.
RDMA not working: confirm Get-SmbClientNetworkInterface shows RDMA Capable: True; check PFC on both ends (RoCEv2), verify no mismatched VLAN/QoS.
Validation errors: re-run Test-Cluster after each fix; S2D warnings pre-enable are normal.
CSV performance: use ReFS with 64K allocation, enable CSV cache for read-heavy, keep firmware current.
# On each host
Install-WindowsFeature Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools
New-VMSwitch -Name "vSwitch-Host" -NetAdapterName "Mgmt1","Mgmt2" -EnableEmbeddedTeaming $true -AllowManagementOS $true
Enable-NetAdapterRdma -Name "Storage1","Storage2"
New-NetIPAddress -InterfaceAlias "Storage1" -IPAddress 172.16.0.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Storage2" -IPAddress 172.16.0.12 -PrefixLength 24
# From one admin node
Test-Cluster -Node Host1,Host2,Host3,Host4
New-Cluster -Name HVCL01 -Node Host1,Host2,Host3,Host4 -StaticAddress 10.10.0.50 -NoStorage
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccount>" -AccessKey "<Key>"
Enable-ClusterS2D -Confirm:$false
# Create tiers & volumes (examples)
New-StorageTier -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "Perf" -MediaType SSD
New-StorageTier -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "Cap" -MediaType HDD
New-Volume -StoragePoolFriendlyName "S2D on HVCL01" -FriendlyName "CSV01" -FileSystem ReFS `
-StorageTierFriendlyNames Perf,Cap -StorageTierSizes 500GB,2TB -AllocationUnitSize 64KB