Hyper-V clustering with SMB, Fibre Channel and iSCSI - PowerShell Runbook

A ready-to-run runbook. It covers all three storage back-ends (iSCSI, Fibre Channel, SMB 3.0), with both approaches:

  1. Build the Windows clusters first, then import into SCVMM 2012 R2

  2. Build everything from inside SCVMM 2012 R2

Common variables

# ===== Edit to your environment =====
# Host groups by storage type
$Hosts_iSCSI = @('HVISCSI01','HVISCSI02','HVISCSI03','HVISCSI04')
$Hosts_FC    = @('HVFC01','HVFC02','HVFC03','HVFC04')
$Hosts_SMB   = @('HVSMB01','HVSMB02','HVSMB03','HVSMB04')

# Cluster names & IPs
$CluName_iSCSI = 'HVCLU-ISCSI'; $CluIP_iSCSI = '10.10.10.51'
$CluName_FC    = 'HVCLU-FC';    $CluIP_FC    = '10.10.10.52'
$CluName_SMB   = 'HVCLU-SMB';   $CluIP_SMB   = '10.10.10.53'

# Networks (examples)
$VLAN_Mgmt = 10        # 10.10.10.0/24
$VLAN_LM   = 20        # 10.10.20.0/24
$VLAN_Stor = 30        # 10.10.30.0/24

# Team & vSwitch names
$TeamName      = 'Team0'
$ProdVSwitch   = 'vSwitch-Prod'

# File share witness
$WitnessShare = '\\fs-witness\hvwitness$'

# iSCSI target portal (example)
$iSCSIPortal = '10.10.30.10'

# SMB 3.0 shares (Scale-Out File Server)
$SMB_Stores = @('\\SOFS01\HVShare1','\\SOFS01\HVShare2')

# VMM server & RunAs (for the VMM steps)
$VMMServer = 'VMM01'
$RunAsName = 'VMM-SVC'   # created in VMM; should be domain acct with local admin on hosts

Baseline on every host (12 hosts)

# Roles & tools
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools

# Optional: let MPIO auto-claim iSCSI devices (adjust for your array; may require a reboot)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Create NIC Team (LBFO) for Mgmt/VM traffic
New-NetLbfoTeam -Name $TeamName -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create external Hyper-V vSwitch on the team, allow host mgmt on same switch (optional)
New-VMSwitch -Name $ProdVSwitch -NetAdapterName $TeamName -AllowManagementOS $true

# Live Migration tuning (enable, prefer Compression, cap concurrent LMs)
# Note: CredSSP requires initiating LM from the source host; use Kerberos + constrained delegation to trigger it remotely
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP `
           -VirtualMachineMigrationPerformanceOption Compression `
           -MaximumVirtualMachineMigrations 4
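
To push the baseline to all 12 hosts in one pass, a sketch using PowerShell remoting (assumes WinRM is enabled and the NIC names match on every host; reboot after the role install before creating the team/vSwitch):

$AllHosts = $Hosts_iSCSI + $Hosts_FC + $Hosts_SMB
Invoke-Command -ComputerName $AllHosts -ScriptBlock {
    Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools
    # ...then the team / vSwitch / Live Migration steps above, after the reboot
}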

Storage-specific host prep

iSCSI hosts only (4)

Set-Service MSiSCSI -StartupType Automatic; Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress $iSCSIPortal
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
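
Before the LUNs can be clustered they must be online, initialized, and NTFS-formatted from one node only; a minimal sketch, assuming every RAW disk visible is a new iSCSI LUN:

# Run on ONE node only: bring raw LUNs online, initialize, and format
Get-Disk | ? PartitionStyle -eq 'RAW' | % {
    Set-Disk -Number $_.Number -IsOffline $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize |
        Format-Volume -FileSystem NTFS -Confirm:$false
}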

Fibre Channel hosts only (4)
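
Zoning and LUN masking happen on the fabric and array, so host-side prep is mostly MPIO plus collecting WWPNs for the SAN team. A sketch (the mpclaim device string is vendor-specific: 8-char vendor + 16-char product; check your array documentation):

# List FC HBA WWPNs to hand to the SAN team for zoning
Get-InitiatorPort | ? ConnectionType -eq 'Fibre Channel' | ft NodeAddress, PortAddress

# Let the Microsoft DSM claim the array's LUNs (or install the vendor DSM instead)
mpclaim.exe -r -i -d "VENDOR  PRODUCT         "

Once the LUNs appear, initialize and format them exactly as in the iSCSI block above.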

SMB 3.0 hosts only (4)
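
There is no block storage to prepare; the work is on the SOFS side, where each host's computer account (and later the cluster CNO) needs Full Control on the share and NTFS ACLs. From the hosts, a quick sanity check (the Grant-SmbShareAccess example runs on the SOFS, not here):

# On the SOFS, per host:  Grant-SmbShareAccess -Name 'HVShare1' -AccountName 'DOMAIN\HVSMB01$' -AccessRight Full
# On each Hyper-V host, verify the shares are reachable:
$SMB_Stores | % { "{0} reachable: {1}" -f $_, (Test-Path $_) }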

Build the clusters first, then import into VMM

Do these blocks once per storage type (iSCSI, FC, SMB).

Create & validate the cluster (iSCSI example)

Run from any admin node (adjust the $Hosts_*, names, and IPs per cluster).

# iSCSI cluster
Test-Cluster -Node $Hosts_iSCSI -Include 'Inventory','Network','Storage'
New-Cluster -Name $CluName_iSCSI -Node $Hosts_iSCSI -StaticAddress $CluIP_iSCSI -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare

# Add disks, then convert to CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
# Convert all cluster disks to CSV
Get-ClusterResource | ? {$_.ResourceType -eq 'Physical Disk'} | % { Add-ClusterSharedVolume -Name $_.Name }
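
A quick check that the CSVs landed correctly (applies to the FC cluster below as well):

# Each CSV should be Online and mounted under C:\ClusterStorage\
Get-ClusterSharedVolume |
    ft Name, State, @{n='Path';e={$_.SharedVolumeInfo.FriendlyVolumeName}}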

Fibre Channel cluster

Test-Cluster -Node $Hosts_FC -Include 'Inventory','Network','Storage'
New-Cluster -Name $CluName_FC -Node $Hosts_FC -StaticAddress $CluIP_FC -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare

Get-ClusterAvailableDisk | Add-ClusterDisk
Get-ClusterResource | ? {$_.ResourceType -eq 'Physical Disk'} | % { Add-ClusterSharedVolume -Name $_.Name }

SMB 3.0 cluster (no CSV; storage is SMB shares)

Test-Cluster -Node $Hosts_SMB -Include 'Inventory','Network'   # Storage test not applicable for SMB shares
New-Cluster -Name $CluName_SMB -Node $Hosts_SMB -StaticAddress $CluIP_SMB -NoStorage
Set-ClusterQuorum -FileShareWitness $WitnessShare

# (Optional) Create a test VM that uses SMB storage:
# New-VM -Name 'TestVM-SMB' -Path '\\SOFS01\HVShare1\VMs' -MemoryStartupBytes 2GB -Generation 2 -SwitchName $ProdVSwitch

Network roles inside each cluster (set LM/Cluster network intent)

# Review networks
Get-ClusterNetwork | ft Name, Address, Role

# Set roles (0 = None/no cluster use, 1 = ClusterOnly, 3 = ClusterAndClient)
# Typical intent: Mgmt = 3, Live Migration = 1, iSCSI/storage = 0 (keep cluster traffic off it)
# Adjust names to your cluster network names:
(Get-ClusterNetwork 'StorageNet').Role = 0
(Get-ClusterNetwork 'LiveMigrationNet').Role = 1
(Get-ClusterNetwork 'ManagementNet').Role = 3
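
To keep live migration off the other subnets, restrict it at the Hyper-V level as well; a sketch per node, assuming the 10.10.20.0/24 LM subnet from the variables above:

# Run on each node: only use the Live Migration subnet for LM traffic
Set-VMHost -UseAnyNetworkForMigration $false
Add-VMMigrationNetwork '10.10.20.0/24'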

Import all three clusters into SCVMM 2012 R2

In the VMM console (reliable path on 2012 R2):

  1. Fabric → Servers → Add Resources → Hyper-V Hosts and Clusters

  2. Choose Windows Server computers in a trusted AD domain

  3. Pick the OU or type the hostnames; select RunAs = $RunAsName

  4. VMM discovers the existing clusters; select the cluster, choose Host Group, Finish

(Optional VMM PowerShell skeleton; adjust names as needed)

Import-Module VirtualMachineManager
$vmm   = Get-SCVMMServer -ComputerName $VMMServer
$runAs = Get-SCRunAsAccount -VMMServer $vmm -Name $RunAsName

# Add a single host (repeat or loop)
Add-SCVMHost -VMMServer $vmm -ComputerName 'HVISCSI01' -Credential $runAs

# When two or more nodes of an existing Windows cluster are in VMM,
# the cluster object is discovered automatically under the Host Group.

Post-import fabric modeling (quick, safe checklist)

  1. Confirm all 12 hosts show Status OK and the three cluster objects sit under the intended Host Groups

  2. Open each cluster's properties: the CSVs (iSCSI/FC) should be listed, and the SMB cluster should list its shares under File Share Storage

  3. Map host NICs to the right Logical Networks so placement and Live Migration work (detailed in the networking section below)

Build directly from inside SCVMM 2012 R2

Do this once for each storage type (iSCSI, FC, SMB). Start by adding the standalone hosts to VMM, then run the Create Host Cluster wizard.

Add standalone hosts to VMM

Console: Fabric → Servers → Add Resources → Hyper-V Hosts and Clusters → Windows Server computers → add the four hosts of the target cluster, RunAs = $RunAsName.

(PowerShell skeleton):

Import-Module VirtualMachineManager
$vmm   = Get-SCVMMServer -ComputerName $VMMServer
$runAs = Get-SCRunAsAccount -VMMServer $vmm -Name $RunAsName

$all = $Hosts_iSCSI   # repeat per storage type ($Hosts_FC, $Hosts_SMB)
$all | % { Add-SCVMHost -VMMServer $vmm -ComputerName $_ -Credential $runAs }

Create Host Cluster (wizard is the reliable path on 2012 R2)

Console: Fabric → Create Host Cluster

Repeat for FC and SMB with their respective 4-host sets.
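
A PowerShell sketch of the same step (Install-SCVMHostCluster does the wizard's work; validate the parameters against your VMM build in a lab first):

# iSCSI example; repeat with the FC/SMB names, IPs, and node sets
$nodes = Get-SCVMHost -VMMServer $vmm | ? { $_.Name -match '^HVISCSI' }
Install-SCVMHostCluster -ClusterName $CluName_iSCSI -VMHost $nodes `
    -ClusterIPAddress $CluIP_iSCSI -Credential $runAs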

Apply networking via VMM

  1. Logical Networks (MgmtNet, LMNet, StorageNet, VMNet) and Network Sites (VLAN/subnet mappings); see the sketch after this list

  2. Port Classifications (e.g., Host Mgmt, VM Prod)

  3. Uplink Port Profile(s) (bind the right networks to Team0)

  4. Logical Switch LS-Prod and apply to each host (Host → Properties → Virtual Switches → New Virtual Switch (logical switch))

  5. Host vNICs (if you’re doing converged networking): create/assign vNICs (Mgmt, LM, etc.) with the right classification.
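
A minimal sketch of step 1 for one network (LMNet), assuming the names used here are free; the other networks follow the same pattern, and steps 2-5 are quickest in the console:

$ln  = New-SCLogicalNetwork -Name 'LMNet'
$sub = New-SCSubnetVLan -Subnet '10.10.20.0/24' -VLanID $VLAN_LM
New-SCLogicalNetworkDefinition -Name 'LMNet-Site1' -LogicalNetwork $ln `
    -VMHostGroup (Get-SCVMHostGroup -Name 'All Hosts') -SubnetVLan $sub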

Quick template + live migration test (from VMM)

  1. Library: Import a generalized VHDX (2012 R2/2016/2019 guest as needed).

  2. Create a Hardware Profile (2 vCPU, 4–8 GB RAM, connected to vSwitch-Prod).

  3. Create a VM Template → place on CSV (iSCSI/FC clusters) or SMB share (SMB cluster).

  4. Deploy VM to each cluster → verify it lands under C:\ClusterStorage\VolumeX\… (CSV) or the SMB path.

  5. Live Migration test: right-click VM → Migrate (Live) to another node; confirm no downtime.

AD permissions for cluster creation: Pre-stage the CNO (disabled computer object with your cluster name) or grant Create Computer Objects to the host computer objects (or your VMM RunAs) on the hosts’ OU.
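
A pre-stage sketch (the OU path is hypothetical; run once per cluster as a domain admin, then grant the nodes or the RunAs account Full Control on the object):

# Pre-create a disabled CNO so cluster creation only has to enable it
Import-Module ActiveDirectory
New-ADComputer -Name 'HVCLU-ISCSI' -Path 'OU=HyperV,DC=corp,DC=example,DC=com' -Enabled $false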

Rollback / recovery snippets

# Remove a failed resource from a half-baked cluster build
Get-ClusterResource | ? Name -like '*Disk*' | Remove-ClusterResource -Force

# Clear disk reservations if a failed build left disks "clustered"
Get-Disk | ? IsClustered | % { Clear-ClusterDiskReservation -Disk $_.Number -Force }

# Destroy cluster cleanly (last resort; -CleanupADAccount also deletes the CNO)
Stop-Cluster -Cluster 'HVCLU-ISCSI' -Force
Remove-Cluster -Cluster 'HVCLU-ISCSI' -Force -CleanupADAccount
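
If the hosts were already under VMM, a sketch for dropping the stale objects from VMM as well (the Windows-level cleanup above is separate):

# Remove the nodes from VMM management; the cluster object goes with them
Get-SCVMHost -VMMServer $vmm | ? { $_.Name -match '^HVISCSI' } |
    Remove-SCVMHost -Credential $runAs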