A. Stretched S2D host cluster (recommended for HCI): One failover cluster spans two sites; storage is Storage Spaces Direct (no SAN/FC). Site-aware fault domains mirror data across sites; VMs live-migrate/fail over cross-site. Manage with VMM.
B. Traditional shared-SAN host cluster (FC/iSCSI): Hosts share the same FC/iSCSI LUNs across sites (requires a metro-capable SAN). Manage with VMM.
C. Guest clustering: Keep the hosts simple; give VMs shared storage via Virtual Fibre Channel, iSCSI, or Shared VHDX/VHD Set, and cluster inside the VMs. VMM manages placement; storage presentation is per-VM.
Note: S2D itself doesn't use Fibre Channel; FC is only relevant for SAN-based host clustering (B) or guest clustering (C). S2D uses SMB3/RDMA between nodes.
Windows Server 2019/2022/2025 with the Hyper-V and Failover Clustering roles on at least two nodes per site.
Qualified NVMe/SAS/SATA drives in each node; identical NIC layout; RDMA (RoCEv2 or iWARP) with DCB is recommended for east-west storage traffic.
Two L3 subnets (one per site). Define AD Sites & Services correctly; clustering uses those subnets for site awareness (see the sketch after this list).
Low, deterministic latency between sites for synchronous resilience (design guides typically target single-digit milliseconds).
Quorum: Cloud Witness, or a file share witness in a third site.
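A minimal PowerShell sketch of the site/subnet and RDMA groundwork. The site names (SiteA/SiteB), subnets, and node assumptions below are placeholders, not values from any specific environment:

```powershell
# Requires the ActiveDirectory RSAT module on a management host with rights to modify AD Sites & Services.
Import-Module ActiveDirectory

# One AD site per physical location; map each site's subnet to it so the cluster picks up site awareness.
New-ADReplicationSite -Name "SiteA"
New-ADReplicationSite -Name "SiteB"
New-ADReplicationSubnet -Name "10.10.1.0/24" -Site "SiteA"
New-ADReplicationSubnet -Name "10.20.1.0/24" -Site "SiteB"

# On each Hyper-V node, confirm the storage NICs expose RDMA and that SMB sees them as RDMA-capable.
Get-NetAdapterRdma | Where-Object Enabled
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```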
Add Hyper-V hosts (both sites) to VMM and place them into Host Groups like Region\SiteA and Region\SiteB.
Create Logical Networks, IP Pools, Uplink Port Profiles, and a Logical Switch; apply the same logical switch to all nodes.
(Optional) Create Storage Classifications for later volume tagging.
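If you want to script the VMM onboarding, a rough sketch using the VirtualMachineManager PowerShell module might look like the following; the Run As account name, host group names, and host FQDNs are illustrative only:

```powershell
# Run in the VMM PowerShell console, connected to the VMM management server.
$cred = Get-SCRunAsAccount -Name "HostOnboarding"     # an existing Run As account (assumed name)

# Host groups that mirror the two sites (Region\SiteA, Region\SiteB).
$region = New-SCVMHostGroup -Name "Region"
$siteA  = New-SCVMHostGroup -Name "SiteA" -ParentHostGroup $region
$siteB  = New-SCVMHostGroup -Name "SiteB" -ParentHostGroup $region

# Bring the Hyper-V hosts under VMM management, placed by site.
Add-SCVMHost "hv-a1.contoso.local" -VMHostGroup $siteA -Credential $cred
Add-SCVMHost "hv-b1.contoso.local" -VMHostGroup $siteB -Credential $cred
```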
You can do this from VMM (“Create Hyper-V Cluster”) or with PowerShell and then bring the cluster under VMM. Typical order (a hedged PowerShell sketch follows the list):
Create the cluster across all nodes (both sites) and set quorum (Cloud Witness).
Enable S2D on the cluster; Windows forms a single site-aware storage pool with fault domains per site.
Create CSV volumes with resiliency suited to the stretched layout (e.g., a two-way mirror within each site combined with a cross-site mirror), and label/tag them by site affinity.
Enable site awareness / fault domains so volumes have preferred sites and VM/storage affinity policies keep compute near its storage.
Add the cluster to VMM (if you created it with PowerShell/Server Manager).
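A hedged PowerShell sketch of the order above, intended to be run from one of the nodes; node names, addresses, the witness storage account, and the volume size are placeholders, and the resiliency choice should follow your own design:

```powershell
Import-Module FailoverClusters

# 1. One cluster across both sites, no storage auto-added, then a Cloud Witness for quorum.
New-Cluster -Name "HV-STRETCH" -Node "hv-a1","hv-a2","hv-b1","hv-b2" `
    -StaticAddress 10.10.1.50,10.20.1.50 -NoStorage
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"

# 2. Describe the sites as fault domains so the pool and placement become site-aware.
New-ClusterFaultDomain -Name "SiteA" -Type Site
New-ClusterFaultDomain -Name "SiteB" -Type Site
Set-ClusterFaultDomain -Name "hv-a1" -Parent "SiteA"
Set-ClusterFaultDomain -Name "hv-a2" -Parent "SiteA"
Set-ClusterFaultDomain -Name "hv-b1" -Parent "SiteB"
Set-ClusterFaultDomain -Name "hv-b2" -Parent "SiteB"

# 3. Enable S2D; the single pool picks up the site fault domains.
Enable-ClusterStorageSpacesDirect

# 4. Example CSV volume; pick the mirror layout that matches your cross-site resiliency design.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV-SiteA-01" `
    -FileSystem CSVFS_ReFS -Size 2TB -ResiliencySettingName Mirror

# 5. Give the cluster a preferred site; individual VM roles can override this later.
(Get-Cluster).PreferredSite = "SiteA"
```

After this, adding the cluster to VMM brings it under the same fabric/host-group management as a VMM-created cluster.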
References: S2D overview and stretched/campus capabilities; Windows Server 2025 stretch cluster demo; VMM S2D management overview & “What’s new” (support continues in current VMM).
Place each VM with a site-preferred host and keep its VHDX on a CSV with matching site affinity.
Use anti-affinity for multi-VM services.
Test cross-site Live Migration and CSV ownership moves.
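A sketch of those placement and failover checks, assuming a clustered VM role named "SQL-VM1" already exists on the stretched cluster above (group, class, node, and CSV names are examples):

```powershell
Import-Module FailoverClusters

# Pin the VM role to its home site.
$vm = Get-ClusterGroup -Name "SQL-VM1"
$vm.PreferredSite = "SiteA"

# Anti-affinity: give both nodes of a guest service the same class name to keep them on different hosts.
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("SQL-Tier") | Out-Null
$vm.AntiAffinityClassNames = $class

# Exercise cross-site mobility: live-migrate the VM role, then move CSV ownership.
Move-ClusterVirtualMachineRole -Name "SQL-VM1" -Node "hv-b1" -MigrationType Live
Move-ClusterSharedVolume -Name "Cluster Virtual Disk (CSV-SiteA-01)" -Node "hv-b1"
```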
If you’re on Azure Local (Azure Stack HCI) and managing with VMM: current docs note that stretched clusters aren’t supported for management in VMM (manage them via Windows Admin Center/PowerShell). Check your exact version.
This is the classic stretched “metro” cluster where the same LUNs are presented to all hosts in both sites by your array (array-level replication or true active/active metro volumes).
Fabric: In VMM, onboard the FC fabric (add providers/zoning) or define iSCSI targets; map the same LUNs to hosts in both sites.
Networking/host prep: Same as above (logical switch, networks, teams).
Cluster: Create a single Hyper-V cluster across both sites; use the shared LUNs as CSVs (see the sketch after this list).
Quorum: Cloud Witness or a File Share Witness in a neutral site.
Storage policies: If the array supports site-preferred paths/affinity, configure that; in VMM use classifications to hint placement.
When to use: You already own a metro-capable SAN and want to keep FC/iSCSI. (No S2D in this pattern.)
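For this SAN-backed pattern the clustering side is plain Failover Clustering; a minimal sketch, assuming the array team has already presented the same metro LUNs to every node (cluster, node, account, and disk names are placeholders):

```powershell
Import-Module FailoverClusters

# One cluster across both sites; the metro LUNs are already zoned/masked to all nodes.
New-Cluster -Name "HV-METRO" -Node "hv-a1","hv-a2","hv-b1","hv-b2" -NoStorage

# Add the shared disks the cluster can see, then promote the data LUNs to CSV.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"     # example resource name

# Witness in a neutral location (Cloud Witness shown; a file share witness also works).
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```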
Here the Hyper-V host/cluster can be simple (S2D or SAN); the application runs a Windows Failover Cluster inside the VMs.
Virtual Fibre Channel (vFC): Present FC LUNs directly to VMs via a virtual SAN (vHBA). Requires NPIV-capable HBAs/fabric.
iSCSI in-guest: VMs connect to an iSCSI target over the data network (see the sketch after this list).
Shared VHDX / VHD Set: Share a .vhdx/.vhds between VMs for the data disks/witness; simple and common for SQL Server FCIs and similar workloads.
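For the in-guest iSCSI option, the guest-side connection is only a few cmdlets; the target portal address is a placeholder:

```powershell
# Inside each guest VM that will join the guest cluster.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the target portal and connect persistently to the shared data LUN(s).
New-IscsiTargetPortal -TargetPortalAddress "10.10.5.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```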
On hosts: Ensure FC HBAs/zoning allow NPIV and mask the guest LUNs to the host ports.
In Hyper-V Manager/VMM: Create a Virtual SAN; add a Virtual Fibre Channel adapter to each VM; connect to the vSAN; assign WWNs (static or pools in VMM).
On the array: Map LUNs to the vWWNs.
In the VMs: Bring disks online/format; create the guest failover cluster and add the shared disks.
VMM: Manage VM placement/anti-affinity; storage is handled by the SAN team.
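A host-side sketch of the vFC steps above, assuming NPIV is already enabled on the HBAs and fabric; the virtual SAN and VM names are examples, and the initiator-port filter is an assumption you should adapt to your HBAs:

```powershell
# On each Hyper-V host (Hyper-V PowerShell module).
Import-Module Hyper-V

# Bind a virtual SAN to the host's FC initiator ports.
$fcPorts = Get-InitiatorPort | Where-Object { $_.ConnectionType -like "*Fibre*" }
New-VMSan -Name "FCSAN-A" -HostBusAdapter $fcPorts

# Give the guest a virtual HBA on that SAN (WWNs are generated; VMM can also assign them from pools).
Add-VMFibreChannelHba -VMName "SQL-VM1" -SanName "FCSAN-A"

# Read the generated WWPNs so the SAN team can zone/mask the guest LUNs to them.
Get-VMFibreChannelHba -VMName "SQL-VM1" |
    Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
```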
Place the shared .vhdx/.vhds on a CSV.
Attach the disk to both VMs as shared; use it only for data/witness (not OS).
Build the guest cluster.
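A sketch of the VHD Set route on a Windows Server 2016 or later host cluster; the CSV path, size, and VM names are placeholders:

```powershell
Import-Module Hyper-V

# Create a VHD Set (.vhds) on a CSV so every cluster node can reach it.
New-VHD -Path "C:\ClusterStorage\CSV-SiteA-01\GuestCluster\data01.vhds" -SizeBytes 500GB -Dynamic

# Attach the same file to both guest-cluster VMs as a shared disk (data/witness only, never the OS disk).
Add-VMHardDiskDrive -VMName "SQL-VM1" `
    -Path "C:\ClusterStorage\CSV-SiteA-01\GuestCluster\data01.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQL-VM2" `
    -Path "C:\ClusterStorage\CSV-SiteA-01\GuestCluster\data01.vhds" -SupportPersistentReservations
```

Inside the VMs, the disk is then brought online, formatted, and added to the guest cluster exactly like a physical shared disk.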
Fabric groundwork in VMM
Discover hosts into the right Host Groups (by site).
Define Logical Networks (e.g., Management, Storage, LiveMigration), IP Pools, Uplink Port Profiles, and a Logical Switch; deploy to all nodes.
Create the cluster (hosts from both sites)
In VMM: Create Hyper-V Cluster → select all nodes → assign networks, Live Migration settings, and witness (Cloud/File Share Witness).
Validate Cluster in wizard (or Test-Cluster beforehand).
Enable S2D & site awareness
From VMM (or PowerShell): enable S2D on the new cluster; Windows creates a site-aware pool with fault domains per site.
Create CSV volumes sized per workload; select resiliency suited for stretched (mirrors across sites).
Configure storage/VM affinity (preferred site) so compute follows storage.
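On Windows Server 2022 and later, this can be expressed as cluster affinity rules; a hedged sketch (the rule, VM role, and CSV names are examples):

```powershell
Import-Module FailoverClusters

# Keep the VM role and its CSV in the same site-level fault domain.
New-ClusterAffinityRule -Name "SQL-VM1-Storage" -RuleType SameFaultDomain
Add-ClusterGroupToAffinityRule -Name "SQL-VM1-Storage" -Groups "SQL-VM1"
Add-ClusterSharedVolumeToAffinityRule -Name "SQL-VM1-Storage" `
    -ClusterSharedVolumes "Cluster Virtual Disk (CSV-SiteA-01)"

# Review what the cluster will enforce.
Get-ClusterAffinityRule
```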
VM rollout
Create Clouds and Hardware Profiles in VMM; pin site preference with host groups.
Place VMs on CSVs that match the site; test cross-site Live Migration and planned failover.
Quorum & DR drills
Use Cloud Witness for asymmetric failures; validate dynamic quorum and node vote weights (site-weighted).
Run Test-SRTopology if you are evaluating a Storage Replica-based stretch with shared storage.
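A sketch of the drill-side checks; the server, volume, and result-path names are placeholders, and Test-SRTopology (StorageReplica module) only applies if you are actually evaluating an SR-based design:

```powershell
Import-Module FailoverClusters

# Confirm the witness and how votes are currently distributed across the two sites.
Get-ClusterQuorum
Get-ClusterNode | Select-Object Name, State, NodeWeight, DynamicWeight

# Optional: size the Storage Replica log/link before committing to an SR-based stretch.
Test-SRTopology -SourceComputerName "hv-a1" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "hv-b1" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -DurationInMinutes 30 -ResultPath "C:\Temp"
```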
Don’t mix S2D and FC on the same dataset. Either go HCI (S2D) or shared-SAN; FC makes sense for (B) or (C).
A witness is mandatory for two-site designs; prefer Cloud Witness.
Latency matters: stretched designs assume low-millisecond RTT; validate with performance tests, and with your array vendor if doing (B).
VMM versions: Current VMM (2022/2025) manages S2D clusters and keeps adding S2D quality-of-life features; check your exact build.
Azure Local nuance: Managing Azure Local stretched clusters is not supported in VMM per current docs; use Windows Admin Center/PowerShell there.
Guest clustering choice: For app teams that need LUN-level control, use vFC; for simpler operations, use Shared VHDX/VHD Set.