Azure Local is the new name for Azure Stack HCI; same core product, unified branding. Existing HCI clusters just continue under the new name.
Cloud / identity
Azure tenant + subscription with permissions to register and manage Azure Local via Arc (Owner on the target RG/sub is the safe default).
Active Directory (AD DS) is required for 23H2 clusters. Plan a dedicated OU for the deployment and a service account with delegated rights to that OU (Microsoft recommends that hosts are not domain-joined before deployment; the deployment process joins them).
Hardware
1–16 identical nodes (CPU family, NICs, drive types/count) from the Azure Stack HCI/Local validated catalog. SLAT-capable Intel/AMD server CPUs. (Most production clusters are 2–16 nodes; single-node is supported for specific scenarios.)
Networking (host/Top-of-Rack)
At least one NIC qualified for management/compute/storage traffic; for best results, use 2×10/25/40/100Gb RDMA NICs (RoCEv2 or iWARP) for east-west storage plus 1–2 NICs for management/live migration. Ensure your switches support DCB (PFC 802.1Qbb + ETS 802.1Qaz) if using RoCEv2; iWARP runs over TCP and does not require lossless Ethernet.
Storage
Each node: mix of NVMe/SSD/HDD as per your performance goals; identical layout across nodes for S2D. (NVMe/SSD for cache, SSD/HDD for capacity.)
Tools
A management workstation with Windows Admin Center (WAC) (latest) and PowerShell remoting to the hosts; Azure CLI/Azure PowerShell are helpful for Arc onboarding. (WAC is the simplest end-to-end path.)
Prep AD & Azure (OU, GPO scope, service account, subscription permissions).
Download the Azure Stack HCI 23H2 image from the Azure portal and install the OS on each node (bare-metal).
Initial host setup: IPs/VLANs, BIOS/firmware, NIC driver/firmware alignment, DCB/RDMA config on switches and hosts.
Use WAC “Create cluster” workflow: validate, domain-join via the deployment account, create Failover Cluster, set witness, and enable S2D.
Create volumes (ReFS) for VM storage; configure CSV ownership/placements.
Register the cluster with Azure (Arc) to light up Azure Local features, billing, Update Manager, monitoring.
Patch/Update policy via Azure Update Manager or Cluster-Aware Updating; set baselines.
Create a dedicated OU (e.g., OU=AzLocalHosts,DC=corp,DC=contoso,DC=com).
Block GPO inheritance or ensure clean GPOs for deployment; overly restrictive baseline GPOs can break the workflow.
Create a deployment/LCM account with full rights on that OU (reset/computer-join/modify DNS). Microsoft guidance explicitly calls out these items for Azure Local.
On ToR switches, enable DCB (PFC + ETS) and assign traffic classes; ensure a lossless class for storage (SMB Direct). Map priorities consistently (e.g., CoS=3 for SMB).
On hosts, install vendor NIC drivers/firmware; confirm RDMA is enabled (Get-NetAdapterRdma). (RoCEv2 or iWARP—both supported.)
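If you run RoCEv2, the host-side half of the DCB configuration above can be sketched roughly as follows. The adapter names SMB01/SMB02 and the 50% ETS reservation are assumptions; match them to your OEM's reference design:

```powershell
# Tag SMB Direct (TCP 445) traffic with priority 3, matching the CoS=3 switch mapping above
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Make only the SMB priority lossless; leave all other priorities lossy
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for storage via ETS (50% is a common starting point, not a mandate)
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB to the storage adapters and ignore switch-pushed DCBX settings
Enable-NetAdapterQos -Name "SMB01","SMB02"
Set-NetQosDcbxSetting -Willing $false
```

Verify with Get-NetQosPolicy and Get-NetQosFlowControl; iWARP deployments can skip the PFC steps.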
Download media from the Azure portal; install to bare-metal. Keep a strong local admin password ready (≥14 chars; complexity).
Set static IP on mgmt NIC(s), set hostnames, and add DNS pointing to your AD DS. (Do not manually join the domain if you plan to let the WAC workflow handle that, per guidance.)
Add servers (by IP/DNS).
Validate hardware/driver/network/storage (built-in validation).
Domain join the nodes using the prepared deployment account & OU.
Create cluster (name, mgmt IP), add cloud witness (or file share witness).
Enable S2D; WAC will pool disks and create a storage pool.
Create ReFS volumes (mirror/erasure coding as needed) and add as CSV.
These actions and their sequence are laid out in Microsoft’s deployment overview and install articles.
Quick PowerShell equivalents (if you prefer CLI for parts):
# On one node after domain join
Test-Cluster -Node node1,node2 -Include "Inventory","Network","System Configuration"
New-Cluster -Name AzLocal-CLS -Node node1,node2 -StaticAddress 10.0.0.50
Set-ClusterQuorum -CloudWitness -AccountName <storageacct> -AccessKey <key>
Enable-ClusterS2D
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName Volume01 -FileSystem CSVFS_ReFS -Size 1TB
In WAC (or PowerShell), register the cluster to Azure using your subscription/RG. This enables billing, Azure Update Manager, monitoring, and Azure-side lifecycle.
Turn on Azure Update Manager/CAU for orchestrated, cluster-aware patching.
Apply firmware/driver baselines that match your vendor’s Azure Local validated SKU. (Many OEM guides echo this along with RDMA/DCB specifics.)
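For the Cluster-Aware Updating path mentioned above, a minimal sketch (the schedule values are illustrative, not recommendations):

```powershell
# Add the CAU clustered role so the cluster can self-update on a schedule
Add-CauClusterRole -ClusterName AZHCI-CLS -DaysOfWeek Tuesday -WeeksOfMonth 2 -EnableFirewallRules -Force

# Or trigger an on-demand, cluster-aware updating run from a management host
Invoke-CauRun -ClusterName AZHCI-CLS -Force

# Review the outcome of the most recent run
Get-CauReport -ClusterName AZHCI-CLS -Last
```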
NIC layout: Common pattern is 2×RDMA (SMB Direct) for S2D + 1–2×1/10Gb for management/LM; or 2×25/100Gb for everything with intent-based QoS + DCB. Validate with your OEM’s HCI reference design.
Witness: Prefer cloud witness (tiny Azure Storage account) for quorum simplicity.
Volumes: Use ReFS with mirror-accelerated parity for capacity tiers where appropriate; balance resiliency vs. efficiency based on the number of nodes.
Pre-joined machines: For current Azure Local guidance, don’t pre-join; let the deployment process handle domain join into the prepped OU.
GPO conflicts: A baseline or security GPO that disables WinRM/PSRemoting or alters services can break deployment—hence the OU + blocked inheritance guidance.
Switch features: Missing PFC/ETS or mis-prioritized traffic will tank S2D performance; validate DCB and RDMA end-to-end.
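Creating the small storage account for the cloud witness preferred above can be sketched with Az PowerShell (resource-group, account, and region names are placeholders):

```powershell
# Requires the Az module and a prior Connect-AzAccount sign-in
New-AzResourceGroup -Name rg-witness -Location eastus
New-AzStorageAccount -ResourceGroupName rg-witness -Name azhciwitness01 `
    -Location eastus -SkuName Standard_LRS -Kind StorageV2

# Fetch an access key and point the cluster quorum at the account
$key = (Get-AzStorageAccountKey -ResourceGroupName rg-witness -Name azhciwitness01)[0].Value
Set-ClusterQuorum -Cluster AZHCI-CLS -CloudWitness -AccountName azhciwitness01 -AccessKey $key
```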
Deployment Overview & Sequence (download OS → install → permissions → deploy): Microsoft Learn
Install the OS (23H2) details: Microsoft Learn
AD preparation & OU/GPO guidance: Microsoft Learn
System requirements (23H2): Microsoft Learn
Host/physical network requirements (DCB/RDMA): Azure Documentation
Arc/Azure registration permissions: Microsoft Learn
Branding change (Azure Stack HCI → Azure Local): Microsoft Learn
| Item | Requirement | Notes |
|---|---|---|
| Nodes | 4 identical physical servers | Same CPU family, BIOS, firmware, NICs, drives |
| RAM | ≥128 GB per node | 256–512 GB recommended |
| NICs | 2 × RDMA (25/40/100 GbE) + 1 × mgmt | RoCEv2 + DCB or iWARP; enable RSS, VMQ |
| Drives | At least 2 × NVMe/SSD (cache) + 4 × SSD/HDD (capacity) | Identical layout across nodes |
| Switches | DCB enabled (PFC + ETS) | Needed for RoCE; VLANs for Mgmt / Cluster / SMB |
| OS media | Azure Stack HCI 23H2/24H2 image | Download from Azure portal |
Install Azure Stack HCI 23H2 on each node (bare-metal).
Assign temporary static IPs, rename hosts (AZHCI-01 … AZHCI-04).
Ensure DNS resolves each name.
Do not join domain yet (the wizard will handle this).
Verify latest vendor firmware/driver bundles.
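The name-resolution check in the list above can be scripted (hostnames assumed from this walkthrough):

```powershell
# Confirm forward DNS resolves for every node before starting the wizard
"AZHCI-01","AZHCI-02","AZHCI-03","AZHCI-04" | ForEach-Object {
    try {
        $ip = (Resolve-DnsName $_ -Type A -ErrorAction Stop).IPAddress
        "$_ -> $ip"
    } catch {
        Write-Warning "$_ does not resolve - fix DNS before deployment"
    }
}
```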
On each host (PowerShell as admin):
# Identify adapters
Get-NetAdapter
# Enable RDMA
Enable-NetAdapterRdma -Name "SMB01","SMB02"
# Disable RDMA on mgmt NIC
Disable-NetAdapterRdma -Name "Mgmt"
# (Optional) assign IPs
New-NetIPAddress -InterfaceAlias "Mgmt" -IPAddress 10.0.0.11 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "Mgmt" -ServerAddresses 10.0.0.10
Switch-side:
Configure DCB (PFC + ETS): Priority 3 = SMB Direct.
Create VLANs:
VLAN 10 = Mgmt
VLAN 20 = Cluster/Live Migration
VLAN 30 = S2D storage (RDMA)
Test RDMA:
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection   # after an SMB copy to <PeerNodeName>, confirms RDMA-capable connections are in use
Create an OU OU=AzureLocal,DC=corp,DC=contoso,DC=com.
Create a service account (e.g. svc_azlocaldeploy) with delegated rights to that OU.
Ensure the account can Create/Delete computer objects and Join to domain.
Verify DNS updates are dynamic.
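The AD preparation above can be sketched as follows (run on a machine with the RSAT ActiveDirectory and GroupPolicy modules; the dsacls grant is one way to delegate computer-object rights and should be checked against Microsoft's AD-prep guidance):

```powershell
Import-Module ActiveDirectory

# Dedicated OU for the cluster nodes
New-ADOrganizationalUnit -Name "AzureLocal" -Path "DC=corp,DC=contoso,DC=com"

# Deployment service account (set a strong password when prompted)
New-ADUser -Name "svc_azlocaldeploy" -SamAccountName "svc_azlocaldeploy" `
    -Path "OU=AzureLocal,DC=corp,DC=contoso,DC=com" `
    -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true

# Delegate create/delete of computer objects on the OU (CC/DC = create/delete child)
dsacls "OU=AzureLocal,DC=corp,DC=contoso,DC=com" /I:T /G "CORP\svc_azlocaldeploy:CCDC;computer"

# Block GPO inheritance so restrictive baseline policies don't break deployment
Set-GPInheritance -Target "OU=AzureLocal,DC=corp,DC=contoso,DC=com" -IsBlocked Yes
```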
In WAC → Azure Stack HCI → Create Cluster:
Add servers → AZHCI-01–AZHCI-04.
Validate hardware/network/storage.
Domain-join the nodes using svc_azlocaldeploy and OU above.
Create Failover Cluster:
Name = AZHCI-CLS
IP = 10.0.0.50 (Mgmt VLAN)
Configure quorum → Cloud Witness (Azure Storage).
Enable Storage Spaces Direct (S2D) when prompted.
If doing manually:
# On one node
Test-Cluster -Node AZHCI-01,AZHCI-02,AZHCI-03,AZHCI-04 -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name AZHCI-CLS -Node AZHCI-01,AZHCI-02,AZHCI-03,AZHCI-04 -StaticAddress 10.0.0.50
Set-ClusterQuorum -CloudWitness -AccountName mystorageacct -AccessKey <key>
Enable-ClusterS2D
Verify:
Get-StorageSubSystem Cluster* | Get-PhysicalDisk
Get-StoragePool S2D* | Get-VirtualDisk
Example – Mirror volume for VMs:
New-Volume -FriendlyName VMStore01 -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D on AZHCI-CLS" -Size 2TB -ResiliencySettingName Mirror
Example – Mirror-accelerated Parity (capacity tier):
New-Volume -FriendlyName Archive01 -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D on AZHCI-CLS" -Size 10TB -StorageTierFriendlyNames Performance,Capacity -StorageTierSizes 1TB,9TB
Check CSVs:
Get-ClusterSharedVolume
In WAC → Register with Azure → sign in → select Subscription + Resource Group.
This enables:
Azure Update Manager
Azure Monitor/Insights
Azure Arc management
Billing/usage tracking
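If you prefer to script the registration instead of using WAC, the Az.StackHCI module provides Register-AzStackHCI (placeholder IDs below; note that 23H2 systems are normally Arc-registered as part of deployment, so treat this as the 22H2-era path):

```powershell
# Requires the Az.StackHCI module and an interactive Azure sign-in
Register-AzStackHCI -SubscriptionId "<subscription-id>" `
    -ResourceGroupName "rg-azlocal" `
    -Region "eastus" `
    -ResourceName "AZHCI-CLS"
```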
| Task | PowerShell / WAC action |
|---|---|
| Enable Cluster-Aware Updating | Add-CauClusterRole (or WAC Updates tool) |
| Configure backups | Azure Backup Server or 3rd party |
| Monitor | Azure Arc → Insights |
| Patch hosts | Azure Update Manager or Invoke-CauRun |
| Test failover | Move-ClusterGroup, simulate node loss |
Test-Cluster -Cluster AZHCI-CLS
(Get-Cluster).ClusterFunctionalLevel
Get-StoragePool S2D* | Get-PhysicalDisk | Group-Object HealthStatus
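The failover test called out in the operations table can be sketched as:

```powershell
# Drain one node to simulate planned maintenance; roles should live-migrate off
Suspend-ClusterNode -Name AZHCI-01 -Drain
Get-ClusterNode            # AZHCI-01 should show as Paused
Resume-ClusterNode -Name AZHCI-01 -Failback Immediate

# Or move a specific group between nodes and confirm it comes online
Move-ClusterGroup -Name "Cluster Group" -Node AZHCI-02
Get-ClusterGroup "Cluster Group"
```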