Configure a Hyper-V stretched cluster with SCVMM using S2D, iSCSI, and Fibre Channel

Pick your topology

  A) Stretched S2D host cluster managed by VMM (two sites)
  B) Stretched Hyper-V host cluster with a shared SAN (FC/iSCSI)
  C) Guest clustering inside the VMs (SCSI/iSCSI/Virtual Fibre Channel/Shared VHDX)

Note: S2D itself doesn’t use Fibre Channel; FC is only for SAN-based host clustering (B) or guest clustering (C). S2D uses SMB3/RDMA between nodes. (Microsoft Learn)
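
If you want to confirm the SMB3/RDMA path before enabling S2D, a quick check on each node might look like this (a minimal sketch using in-box NetAdapter and SmbShare cmdlets):

```powershell
# Run on each S2D node: confirm the storage NICs expose RDMA and that
# SMB Multichannel/SMB Direct connections are actually being used between nodes.
Get-NetAdapterRdma | Where-Object Enabled      # RDMA-capable adapters on this host
Get-SmbServerNetworkInterface                  # interfaces SMB will serve traffic on
Get-SmbMultichannelConnection                  # live SMB connections; check the RDMA-capable columns
```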


A) Stretched S2D host cluster with VMM (two sites)

Prereqs (per site)

VMM fabric setup

  1. Add Hyper-V hosts (both sites) to VMM and place them into Host Groups like Region\SiteA and Region\SiteB (see the VMM PowerShell sketch after this list).

  2. Create Logical Networks, IP Pools, Uplink Port Profiles and Logical Switch; apply the same logical switch to all nodes.

  3. (Optional) Create Storage Classifications for later volume tagging.
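
A minimal VMM PowerShell sketch of step 1, assuming the VirtualMachineManager module is loaded and connected to the VMM server; the parent host group, host FQDNs, and Run As account name are placeholders:

```powershell
# Create per-site host groups under an existing parent group (name assumed to be "Region").
$region = Get-SCVMHostGroup -Name "Region"
$siteA  = New-SCVMHostGroup -Name "SiteA" -ParentHostGroup $region
$siteB  = New-SCVMHostGroup -Name "SiteB" -ParentHostGroup $region

# Run As account with admin rights on the Hyper-V hosts (account name is hypothetical).
$runAs = Get-SCRunAsAccount -Name "HyperV-HostAdmin"

# Bring hosts from each site under VMM management, placed into their site host groups.
Add-SCVMHost -ComputerName "hv01.sitea.contoso.com" -VMHostGroup $siteA -Credential $runAs
Add-SCVMHost -ComputerName "hv01.siteb.contoso.com" -VMHostGroup $siteB -Credential $runAs
```

Logical networks, IP pools, uplink port profiles, and the logical switch (step 2) are usually easier to build once in the VMM console and then apply to every node.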

Create the stretched cluster (host level)

You can do this in VMM (“Create Hyper-V Cluster”), or with PowerShell and then bring the cluster under VMM management. Typical order (a PowerShell sketch follows the list):

  1. Create the cluster across all nodes (both sites) and set quorum (Cloud Witness).

  2. Enable S2D on the cluster; Windows forms a single site-aware storage pool with fault domains per site.

  3. Create CSV volumes with the proper resiliency (e.g., 2-way mirror per site with cross-site mirror), and label/tag by site affinity.

  4. Enable site awareness / fault domains so volumes have preferred sites and VM/storage affinity policies keep compute near its storage.

  5. Add the cluster to VMM (if you created it by PowerShell/Server Manager).
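
A hedged PowerShell sketch of steps 1–4, run from an elevated session on any node; the cluster name, node names, IP addresses, and witness storage account are placeholders, and resiliency should match your own design:

```powershell
$nodes = "HVA01","HVA02","HVB01","HVB02"    # two nodes per site in this example

# 1. Validate and create the cluster with no storage (S2D claims the disks later).
Test-Cluster -Node $nodes -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "HVSTRETCH" -Node $nodes -StaticAddress 10.10.1.50,10.20.1.50 -NoStorage

# Cloud Witness quorum (storage account name and key are placeholders).
Set-ClusterQuorum -CloudWitness -AccountName "mywitnessacct" -AccessKey "<storage-account-key>"

# 2. Define site fault domains and assign nodes so pool/volume placement is site aware.
New-ClusterFaultDomain -Name "SiteA" -FaultDomainType Site
New-ClusterFaultDomain -Name "SiteB" -FaultDomainType Site
"HVA01","HVA02" | ForEach-Object { Set-ClusterFaultDomain -Name $_ -Parent "SiteA" }
"HVB01","HVB02" | ForEach-Object { Set-ClusterFaultDomain -Name $_ -Parent "SiteB" }

# 3. Enable S2D and carve CSV volumes (two-way mirror shown; adjust to your design).
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV-SiteA-01" `
           -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB

# 4. Keep compute near its storage: set the preferred site cluster-wide (or per group).
(Get-Cluster).PreferredSite = "SiteA"
```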

References: S2D overview and stretched/campus capabilities; Windows Server 2025 stretch cluster demo; VMM S2D management overview & “What’s new” (support continues in current VMM). (Microsoft Learn)

VM placement & protection

Notes


B) Hyper-V stretched host cluster with a shared SAN (FC/iSCSI)

This is the classic stretched “metro” cluster where the same LUNs are presented to all hosts in both sites by your array (array-level replication or true active/active metro volumes).

High-level steps (a PowerShell sketch follows the list)

  1. Fabric: In VMM, onboard the FC fabric (add providers/zoning) or define iSCSI targets; map the same LUNs to hosts in both sites.

  2. Networking/host prep: Same as above (logical switch, networks, teams).

  3. Cluster: Create a single Hyper-V cluster across both sites; use shared LUNs as CSV.

  4. Quorum: Cloud/File Share Witness in neutral site.

  5. Storage policies: If the array supports site-preferred paths/affinity, configure that; in VMM use classifications to hint placement.
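
A hedged sketch of steps 3–4, assuming the storage team has already presented the same LUNs to every node; names, IPs, and the witness share are placeholders:

```powershell
$nodes = "HVA01","HVA02","HVB01","HVB02"
Test-Cluster -Node $nodes                      # validation should see the shared LUNs from all nodes
New-Cluster -Name "HVMETRO" -Node $nodes -StaticAddress 10.10.1.60,10.20.1.60

# Turn the shared LUN-backed cluster disks into CSVs for VM storage.
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

# Witness in a neutral third location (file share shown; Cloud Witness also works).
Set-ClusterQuorum -FileShareWitness "\\witness.contoso.com\HVMETRO-Witness"
```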

When to use: You already own a metro-capable SAN and want to keep FC/iSCSI. (No S2D in this pattern.)


C) Guest clustering: SCSI/iSCSI/Virtual FC/Shared VHDX

Here the Hyper-V host/cluster can be simple (S2D or SAN); the application runs a Windows Failover Cluster inside the VMs.

Options

Steps (Virtual Fibre Channel example)

  1. On hosts: Ensure the FC HBAs and fabric zoning support NPIV; the guest LUNs will be masked to the VMs’ virtual WWPNs (step 3).

  2. In Hyper-V Manager/VMM: Create a Virtual SAN; add a Virtual Fibre Channel adapter to each VM; connect it to the vSAN; assign WWNs (static, or WWN pools in VMM). (Microsoft Learn) A PowerShell sketch follows this list.

  3. On the array: Map LUNs to the vWWNs.

  4. In the VMs: Bring disks online/format; create the guest failover cluster and add the shared disks.

  5. VMM: Manage VM placement/anti-affinity; storage is handled by the SAN team.
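
A hedged Hyper-V PowerShell sketch of steps 2–3 prep (VMM can drive the same through its fabric and WWN pools); the vSAN name, VM names, and HBA selection are placeholders:

```powershell
# Create a virtual SAN bound to the host's physical FC HBA port(s); NPIV must be enabled on them.
$hba = Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel"
New-VMSan -Name "FC-Fabric-A" -HostBusAdapter $hba

# Give each guest-cluster VM a virtual FC adapter on that vSAN. WWNs are auto-generated
# here, but you can set them explicitly so the array team can zone/mask in advance.
Add-VMFibreChannelHba -VMName "SQLNODE1" -SanName "FC-Fabric-A"
Add-VMFibreChannelHba -VMName "SQLNODE2" -SanName "FC-Fabric-A"

# Review the generated WWPNs and hand them to the storage team for LUN masking (step 3).
Get-VMFibreChannelHba -VMName "SQLNODE1","SQLNODE2" |
    Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB
```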

Steps (Shared VHDX / VHD Set)

  1. Place the shared .vhdx/.vhds (VHD Set) on a CSV (see the sketch after this list).

  2. Attach the disk to both VMs as shared; use it only for data/witness (not OS).

  3. Build the guest cluster. (Microsoft Learn)
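
A minimal sketch using a VHD Set (.vhds); paths, VM names, and sizes are placeholders:

```powershell
# Create the VHD Set on a CSV, then attach the same file to both guest-cluster VMs
# with persistent reservations enabled so the guest failover cluster can arbitrate the disk.
New-VHD -Path "C:\ClusterStorage\Volume1\SQLGuest\data.vhds" -SizeBytes 500GB -Dynamic

Add-VMHardDiskDrive -VMName "SQLNODE1" -Path "C:\ClusterStorage\Volume1\SQLGuest\data.vhds" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "SQLNODE2" -Path "C:\ClusterStorage\Volume1\SQLGuest\data.vhds" -SupportPersistentReservations
```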


Detailed, opinionated end-to-end build order for (A)

  1. Fabric groundwork in VMM

  2. Create the cluster (hosts from both sites)

  3. Enable S2D & site awareness

  4. VM rollout

  5. Quorum & DR drills (see the drill sketch below)
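
A hedged sketch of a controlled DR drill for step 5: drain one site’s nodes, confirm VMs and CSVs land in the surviving site, then resume and fail back (node names are placeholders, and Site A is the site being taken down):

```powershell
Get-ClusterQuorum                                          # confirm the witness is healthy before you start
"HVA01","HVA02" | ForEach-Object { Suspend-ClusterNode -Name $_ -Drain -Wait }

Get-ClusterGroup | Select-Object Name, OwnerNode, State    # everything should now be running in Site B

"HVA01","HVA02" | ForEach-Object { Resume-ClusterNode -Name $_ -Failback Immediate }
```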


Best-practice tips & gotchas