A ready-to-run build runbook. It covers all three storage back-ends (iSCSI, Fibre Channel, SMB 3.0) with both approaches:
Build the Windows clusters first, then import into SCVMM 2012 R2
Build everything from inside SCVMM 2012 R2
# ===== Edit to your environment =====
# Host groups by storage type
$Hosts_iSCSI = @('HVISCSI01','HVISCSI02','HVISCSI03','HVISCSI04')
$Hosts_FC = @('HVFC01','HVFC02','HVFC03','HVFC04')
$Hosts_SMB = @('HVSMB01','HVSMB02','HVSMB03','HVSMB04')
# Cluster names & IPs
$CluName_iSCSI = 'HVCLU-ISCSI'; $CluIP_iSCSI = '10.10.10.51'
$CluName_FC = 'HVCLU-FC'; $CluIP_FC = '10.10.10.52'
$CluName_SMB = 'HVCLU-SMB'; $CluIP_SMB = '10.10.10.53'
# Networks (examples)
$VLAN_Mgmt = 10 # 10.10.10.0/24
$VLAN_LM = 20 # 10.10.20.0/24
$VLAN_Stor = 30 # 10.10.30.0/24
# Team & vSwitch names
$TeamName = 'Team0'
$ProdVSwitch = 'vSwitch-Prod'
# File share witness
$WitnessShare = '\\fs-witness\hvwitness$'
# iSCSI target portal (example)
$iSCSIPortal = '10.10.30.10'
# SMB 3.0 shares (Scale-Out File Server)
$SMB_Stores = @('\\SOFS01\HVShare1','\\SOFS01\HVShare2')
# VMM server & RunAs (for the VMM steps)
$VMMServer = 'VMM01'
$RunAsName = 'VMM-SVC' # created in VMM; should be domain acct with local admin on hosts
# Roles & tools (run on every host; Hyper-V requires a restart)
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools
# Optional: let MPIO automatically claim iSCSI devices (adjust as needed for your array / vendor DSM)
Enable-MSDSMAutomaticClaim -BusType iSCSI
# Create NIC Team (LBFO) for Mgmt/VM traffic
New-NetLbfoTeam -Name $TeamName -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Create external Hyper-V vSwitch on the team, allow host mgmt on same switch (optional)
New-VMSwitch -Name $ProdVSwitch -NetAdapterName $TeamName -AllowManagementOS $true
# Live Migration tuning (enable and prefer Compression; cap concurrent LM)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP -VirtualMachineMigrationPerformanceOption Compression -MaximumVirtualMachineMigrations 4
# iSCSI initiator: start the service, register the target portal, connect targets persistently with MPIO
Set-Service MSiSCSI -StartupType Automatic; Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress $iSCSIPortal
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
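Before adding disks to the cluster, bring the new LUNs online, initialize them, and format them on one node only. A minimal sketch, assuming the iSCSI LUNs are the only RAW disks on that node:
# On ONE node only: online, initialize (GPT), and format the new iSCSI LUNs before clustering
Get-Disk | ? { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } | % {
    Set-Disk -Number $_.Number -IsOffline $false        # SAN policy may present new LUNs offline
    Initialize-Disk -Number $_.Number -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'CSV01' -Confirm:$false
}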
Fibre Channel: mask the same LUNs to all four FC hosts from the array.
Verify in Disk Management (or with Get-Disk) that the LUNs appear on all nodes with identical LUN IDs; a quick remote check is sketched below.
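A minimal remote check, assuming PowerShell remoting (WinRM) is enabled on the FC hosts:
# Compare FC LUN visibility across all FC nodes
Invoke-Command -ComputerName $Hosts_FC -ScriptBlock {
    Get-Disk | ? BusType -eq 'Fibre Channel' |
        Select-Object @{n='Host';e={$env:COMPUTERNAME}}, Number, FriendlyName, Size
} | Sort-Object Number, Host | ft -AutoSize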
SMB 3.0: on the SOFS cluster, ensure each share is Continuously Available and grants Full Control to (a permissions sketch follows):
The Hyper-V host computer accounts (Domain Computers, or explicit host objects)
Your VMM RunAs account ($RunAsName)
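A minimal permissions sketch, run on the SOFS cluster; the domain (CONTOSO), share names, host accounts, and service account are examples, and matching NTFS permissions still have to be set (e.g., with icacls):
# Grant the Hyper-V host computer accounts and the VMM service account Full Control on each share
$accounts = 'CONTOSO\HVSMB01$','CONTOSO\HVSMB02$','CONTOSO\HVSMB03$','CONTOSO\HVSMB04$','CONTOSO\VMM-SVC'
foreach ($share in 'HVShare1','HVShare2') {
    Grant-SmbShareAccess -Name $share -AccountName $accounts -AccessRight Full -Force
}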
Do these blocks once per storage type (iSCSI, FC, SMB). Run from any admin node (adjust the $Hosts_*, names, and IPs per cluster).
# iSCSI cluster
Test-Cluster -Node $Hosts_iSCSI -Include 'Inventory','Network','Storage'
New-Cluster -Name $CluName_iSCSI -Node $Hosts_iSCSI -StaticAddress $CluIP_iSCSI -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare
# Add disks, then convert to CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
# Convert all cluster disks to CSV
Get-ClusterResource | ? {$_.ResourceType -eq 'Physical Disk'} | % { Add-ClusterSharedVolume -Name $_.Name }

# FC cluster
Test-Cluster -Node $Hosts_FC -Include 'Inventory','Network','Storage'
New-Cluster -Name $CluName_FC -Node $Hosts_FC -StaticAddress $CluIP_FC -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare
Get-ClusterAvailableDisk | Add-ClusterDisk
Get-ClusterResource | ? {$_.ResourceType -eq 'Physical Disk'} | % { Add-ClusterSharedVolume -Name $_.Name }

# SMB cluster (Storage tests don't apply to SMB shares)
Test-Cluster -Node $Hosts_SMB -Include 'Inventory','Network'
New-Cluster -Name $CluName_SMB -Node $Hosts_SMB -StaticAddress $CluIP_SMB -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare
# (Optional) Create a test VM that uses SMB storage:
# New-VM -Name 'TestVM-SMB' -Path '\\SOFS01\HVShare1\VMs' -MemoryStartupBytes 2GB -Generation 2 -SwitchName $ProdVSwitch

# Review networks
Get-ClusterNetwork | ft Name, Address, Role
# Example: set roles (1 = ClusterOnly, 3 = ClusterAndClient)
# Adjust names to your cluster network names:
(Get-ClusterNetwork 'StorageNet').Role = 1
(Get-ClusterNetwork 'LiveMigrationNet').Role = 3
(Get-ClusterNetwork 'ManagementNet').Role = 3

In the VMM console (the reliable path on 2012 R2):
Fabric → Servers → Add Resources → Hyper-V Hosts and Clusters
Choose Windows Server computers in a trusted AD domain
Pick the OU or type the hostnames; select RunAs = $RunAsName
VMM discovers the existing clusters; select the cluster, choose Host Group, Finish
(Optional VMM PowerShell skeleton — adjust names as needed)
Import-Module VirtualMachineManager
$vmm = Get-SCVMMServer -ComputerName $VMMServer
$runAs = Get-SCRunAsAccount -VMMServer $vmm | ? {$_.Name -eq $RunAsName}
# Add a single host (repeat or loop)
Add-SCVMHost -VMMServer $vmm -ComputerName 'HVISCSI01' -Credential $runAs
# When two or more nodes of an existing Windows cluster are in VMM,
# the cluster object is discovered automatically under the Host Group.

After import, configure the VMM fabric:
Logical Networks: MgmtNet, LiveMigNet, StorageNet, VMNet (with your VLAN/Subnet sites)
Port Classifications: Host Mgmt, VM Prod
Uplink Port Profile(s) (Team0 with associated logical networks)
Logical Switch LS-Prod → apply to each host (Console: Host → Properties → Virtual Switches)
Storage:
iSCSI/FC: add SMI-S provider (Fabric → Storage → Providers), classify tiers, Allocate LUNs to cluster → they’ll show as CSV in VMM.
SMB: add File Server and Shares to VMM (Fabric → Storage → File Servers) and assign to hosts/clusters.
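For reference, a minimal sketch of one logical network and network site via the VMM cmdlets; the site name, subnet/VLAN values, and the 'All Hosts' host group are assumptions to adjust:
# Create a logical network plus a network site (VLAN/subnet) scoped to a host group
$ln = New-SCLogicalNetwork -Name 'MgmtNet'
$sv = New-SCSubnetVLan -Subnet '10.10.10.0/24' -VLanID 10
New-SCLogicalNetworkDefinition -Name 'MgmtNet-Site' -LogicalNetwork $ln -SubnetVLan $sv -VMHostGroup (Get-SCVMHostGroup -Name 'All Hosts')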
Building from inside VMM (the second approach): do this once for each storage type (iSCSI, FC, SMB). Add the standalone hosts to VMM first, then run the Create Host Cluster wizard.
Console: Fabric → Servers → Add Resources → Hyper-V Hosts and Clusters → Windows Server computers → add the four hosts of the target cluster, RunAs = $RunAsName.
(PowerShell skeleton):
Import-Module VirtualMachineManager
$vmm = Get-SCVMMServer -ComputerName $VMMServer
$runAs = Get-SCRunAsAccount -VMMServer $vmm | ? {$_.Name -eq $RunAsName}
$all = @('HVISCSI01','HVISCSI02','HVISCSI03','HVISCSI04') # repeat per storage type
$all | % { Add-SCVMHost -VMMServer $vmm -ComputerName $_ -Credential $runAs }

Console: Fabric → Create Host Cluster (a PowerShell sketch follows this list)
Select the four hosts (e.g., iSCSI group)
Cluster name & static IP (e.g., $CluName_iSCSI / $CluIP_iSCSI)
Validation runs (must be clean)
Storage selection:
iSCSI/FC: choose the shared LUNs presented to all hosts (VMM will format/add as CSV)
SMB: choose File Shares (previously added under Fabric → Storage → File Servers)
Finish → VMM creates the cluster and registers it in the Host Group.
Repeat for FC and SMB with their respective 4-host sets.
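A rough PowerShell equivalent of the wizard, assuming the four hosts are already under VMM management; the cluster IP and storage selection are easier to drive from the console, so they are left to the wizard here:
# Create the failover cluster from VMM-managed hosts (iSCSI set shown; repeat per storage type)
$nodes = Get-SCVMHost | ? { $_.Name -match '^HVISCSI0[1-4]' }
Install-SCVMHostCluster -ClusterName $CluName_iSCSI -VMHost $nodes -Credential $runAs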
Logical Networks (MgmtNet, LMNet, StorageNet, VMNet) and Network Sites (VLAN/Subnet mappings)
Port Classifications (e.g., Host Mgmt, VM Prod)
Uplink Port Profile(s) (bind the right networks to Team0)
Logical Switch LS-Prod and apply to each host (Host → Properties → Virtual Switches → New Virtual Switch (logical switch))
Host vNICs (if you’re doing converged networking): create/assign vNICs (Mgmt, LM, etc.) with the right classification.
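If you want to see what the logical switch deployment produces (or to build a host vNIC by hand), the native converged-networking equivalent looks roughly like this; the vNIC name, VLAN, and IP are examples:
# Add a Live Migration vNIC on the management OS, tag its VLAN, and assign an IP
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName $ProdVSwitch
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId $VLAN_LM
New-NetIPAddress -InterfaceAlias 'vEthernet (LiveMigration)' -IPAddress '10.10.20.11' -PrefixLength 24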
Library: Import a generalized VHDX (2012 R2/2016/2019 guest as needed).
Create a Hardware Profile (2 vCPU, 4–8 GB RAM, connected to vSwitch-Prod).
Create a VM Template → place on CSV (iSCSI/FC clusters) or SMB share (SMB cluster).
Deploy VM to each cluster → verify it lands under C:\ClusterStorage\VolumeX\… (CSV) or the SMB path.
Live Migration test: right-click VM → Migrate (Live) to another node; confirm no downtime.
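The same live migration test from PowerShell on a cluster node, assuming the VM's cluster group carries the VM name:
# Live-migrate the test VM to another node and confirm it stays online
Move-ClusterVirtualMachineRole -Name 'TestVM-SMB' -Node 'HVSMB02' -MigrationType Live
Get-ClusterGroup -Name 'TestVM-SMB' | ft Name, OwnerNode, State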
AD permissions for cluster creation: Pre-stage the CNO (disabled computer object with your cluster name) or grant Create Computer Objects to the host computer objects (or your VMM RunAs) on the hosts’ OU.
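A minimal pre-staging sketch using the ActiveDirectory module; the OU path, domain, and trustee are examples:
# Pre-stage a disabled CNO; the account that creates the cluster needs Full Control on this object
New-ADComputer -Name 'HVCLU-ISCSI' -Enabled $false -Path 'OU=Hyper-V Clusters,DC=contoso,DC=com'
# Grant Full Control on the pre-staged object (dsacls shown; adjust DN and trustee)
dsacls.exe 'CN=HVCLU-ISCSI,OU=Hyper-V Clusters,DC=contoso,DC=com' /G 'CONTOSO\VMM-SVC:GA'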
# Remove a failed resource from a half-baked cluster build
Get-ClusterResource | ? Name -like '*Disk*' | Remove-ClusterResource -Force
# Clear disk reservations if a failed build left them "clustered" (pass the disk numbers from Get-Disk)
Clear-ClusterDiskReservation -Disk 4,5,6
# Destroy cluster cleanly (last resort)
Get-Cluster -Name 'HVCLU-ISCSI' | Stop-Cluster -Force
Get-Cluster -Name 'HVCLU-ISCSI' | Remove-Cluster -Force -CleanupAD