Nodes: two or more Windows Server machines (same patch level), domain-joined.
Service accounts:
One Domain User for SQL Server Engine (no local admin).
One Domain User for SQL Agent (optional).
DNS & IPs: Reserve an IP for the Cluster Name and one for the SQL Network Name (per subnet).
Disks:
Shared SAN/iSCSI: One or more LUNs (basic disks, formatted NTFS/ReFS; use either drive letters or mount points, not a mix of both). Keep each disk online on only one node at a time.
S2D (Azure Stack HCI/physical): Identical local disks in each server (no partitions).
Roles/Features on each node:
Failover Clustering feature
.NET Framework required by your SQL version
Firewall: Allow SMB, RPC, Cluster traffic; later you’ll open SQL (TCP 1433 by default).
AD rights: The account creating the cluster needs rights to create computer objects (or pre-create the objects).
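If you want a quick scripted sanity check of these prerequisites, a rough PowerShell sketch like the one below can help (NODE1/NODE2 are placeholder names; adjust to your servers):

    # Pre-flight check across the intended nodes (names are placeholders)
    $nodes = 'NODE1','NODE2'
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        # Confirm OS build and domain membership match on every node
        Get-ComputerInfo -Property OsName, OsBuildNumber, CsDomain, CsPartOfDomain
        # Show whether Failover Clustering is already installed (installed in the next step)
        Get-WindowsFeature -Name Failover-Clustering | Select-Object Name, InstallState
    }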
Server Manager → Manage → Add Roles and Features.
Features → check Failover Clustering (+ Management Tools).
Do this on all nodes, then restart if prompted.
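If you prefer scripting this step, the same feature can be installed on all nodes with PowerShell (node names below are placeholders):

    # Install Failover Clustering plus management tools on every node
    $nodes = 'NODE1','NODE2'
    foreach ($node in $nodes) {
        Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
    }
    # Restart any node that reports a pending reboot before continuing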
On your SAN/iSCSI: present the same LUNs to both nodes.
On one node, open Disk Management → bring each new disk Online, Initialize (GPT), New Simple Volume, Format (NTFS/ReFS), assign a drive letter or mount point.
Do not bring these disks online on the second node (they’ll switch owners via the cluster later).
Skip this subsection if you’re using S2D (we’ll do that with Windows Admin Center below).
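For the SAN/iSCSI path just described, a rough PowerShell equivalent of those Disk Management steps might look like this (the disk number, drive letter, and label are examples only):

    # Run on one node only: bring the new LUN online, initialize it, and format it
    Set-Disk -Number 2 -IsOffline $false
    Initialize-Disk -Number 2 -PartitionStyle GPT
    New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter S |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SQLData' -AllocationUnitSize 65536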
On any node, open Failover Cluster Manager.
Validate Configuration…
Add all cluster nodes, run all tests (especially Storage if SAN/iSCSI).
Fix any errors before continuing (warnings are often OK, but read them).
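The validation run can also be kicked off from PowerShell if you prefer (node names are placeholders):

    # Run all cluster validation tests against the intended nodes
    Test-Cluster -Node NODE1, NODE2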
Failover Cluster Manager → Create Cluster…
Add nodes.
Enter a Cluster Name and its static IP.
Finish. The cluster object & core resources are created.
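Equivalently, the cluster can be created from PowerShell (the name and IP below are placeholders):

    # Create the cluster with its static management IP
    New-Cluster -Name 'SQLCLUSTER01' -Node NODE1, NODE2 -StaticAddress 10.0.0.50
    # If this cluster will use S2D, consider adding -NoStorage so local disks
    # are left unclaimed until Storage Spaces Direct is enabled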
In Failover Cluster Manager, right-click the cluster → More Actions → Configure Cluster Quorum Settings…
Choose the model:
Two-node SAN cluster: use a Disk Witness (a small dedicated shared disk) or a File Share Witness.
S2D: commonly File Share Witness (on a 3rd server/site).
Complete the wizard.
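The same quorum choices can be applied from PowerShell; the witness disk name and file share path below are placeholders:

    # Two-node SAN cluster: use a small shared disk as the witness
    Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'
    # Or, for S2D or when no spare LUN exists: use a file share witness on a third server
    Set-ClusterQuorum -FileShareWitness '\\FS01\ClusterWitness'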
If you did 1B, the formatted shared disks will already appear under Storage in Failover Cluster Manager:
Failover Cluster Manager → Storage → Disks:
If the wizard didn’t add them, use Add Disk…
Confirm disks are Clustered and Online.
(Optional) Create a Cluster Shared Volume (CSV) only if your SQL design calls for it. Traditional FCIs usually use dedicated disks per FCI (not CSV), with one disk for data and one for logs (and optionally one for TempDB).
Most classic SQL FCIs = non-CSV clustered disks dedicated to the role.
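If you want to script this check instead, a small PowerShell sketch (assuming only that the cluster already exists) looks like this:

    # Add any remaining cluster-capable disks the Create Cluster wizard did not pick up
    Get-ClusterAvailableDisk | Add-ClusterDisk
    # Confirm the disk resources are clustered and online
    Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'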
S2D enablement itself is PowerShell on Windows Server, but Windows Admin Center (WAC) provides a full GUI deployment and management path—perfect for Azure Stack HCI.
Install Windows Admin Center (WAC) on a management machine.
In WAC, add both nodes and confirm each server (Azure Stack HCI or Windows Server) shows as healthy.
In WAC, open Cluster Manager → Create (or Add) → HCI cluster wizard.
Provide the nodes, networking details, and network intent, then enable Storage Spaces Direct in the wizard.
In WAC Storage → Volumes → + Create:
Select Resiliency (Mirror/Parity), size, and create CSV volumes (these show as C:\ClusterStorage\VolumeX).
Back in Failover Cluster Manager → Storage → Disks/CSV: confirm CSVs are visible and Online.
For a SQL FCI on S2D, you can place databases on CSV paths (e.g., C:\ClusterStorage\Volume1\SQLData and C:\ClusterStorage\Volume2\SQLLogs). This is common on Azure Stack HCI.
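If you'd rather skip WAC for the storage part, the S2D enablement mentioned above is a short PowerShell sequence; the volume name, size, and resiliency below are examples only:

    # Enable Storage Spaces Direct on the existing cluster (run on one node)
    Enable-ClusterStorageSpacesDirect
    # Create a mirrored CSV volume in the S2D pool; it will appear under C:\ClusterStorage
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'SQLData' `
        -FileSystem CSVFS_ReFS -ResiliencySettingName 'Mirror' -Size 500GB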
You’ll run SQL setup on one node to create the FCI, then on the second node to add it.
Log on to Node 1 with an account that has local admin and adequate rights.
Mount SQL Server setup media → run setup.exe.
Installation page → select New SQL Server failover cluster installation.
Pass Setup Support Rules and Product Key/License.
Feature Selection: at minimum Database Engine Services, Client Tools.
Instance Configuration:
SQL Network Name (this is the client-facing name; different from Cluster Name).
Default or Named instance.
Cluster Resource Group: use the suggested new group for this FCI (e.g., SQL Server (MSSQLSERVER)).
Cluster Disk Selection:
SAN path: choose the clustered disks you prepared for data/logs (not the witness/quorum disk).
S2D path: choose CSV volumes (they appear as clustered disks or via CSV paths later).
Data Directories:
Point Data, Log, and Backup to the selected clustered disks or CSV folders.
TempDB: best practice is local SSDs, but for pure FCI portability you can place TempDB on clustered storage (CSV works well). Choose based on your policy.
Server Configuration: set the SQL Server Engine service to run as your domain service account; set SQL Agent account if used.
Collation: confirm your required collation.
Database Engine Configuration:
Authentication: Mixed or Windows; add SQL sysadmin users.
(Optional) TempDB files count/size.
IP Address: In the Cluster Network Configuration, assign the static IP for the SQL Network Name (per subnet as needed).
Install. When complete, the clustered instance exists with its disks and name.
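For reference, the first-node FCI installation can also be scripted with setup.exe's documented command-line parameters. Treat the following as a rough sketch in which every name, IP, path, account, and password is a placeholder, and confirm the parameter list against the setup documentation for your SQL Server version:

    # Unattended FCI install on the first node (all values are placeholders)
    .\setup.exe /QS /ACTION=InstallFailoverCluster /IACCEPTSQLSERVERLICENSETERMS `
        /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER `
        /FAILOVERCLUSTERNETWORKNAME="SQLNET1" `
        /FAILOVERCLUSTERIPADDRESSES="IPv4;10.0.0.60;Cluster Network 1;255.255.255.0" `
        /FAILOVERCLUSTERDISKS="Cluster Disk 2" `
        /INSTALLSQLDATADIR="S:\SQLData" /SQLUSERDBLOGDIR="L:\SQLLogs" `
        /SQLSVCACCOUNT="CONTOSO\sqlsvc" /SQLSVCPASSWORD="PlaceholderPassword" `
        /AGTSVCACCOUNT="CONTOSO\sqlagent" /AGTSVCPASSWORD="PlaceholderPassword" `
        /SQLSYSADMINACCOUNTS="CONTOSO\DBAdmins"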
On Node 2, run setup.exe again.
Installation → Add node to a SQL Server failover cluster.
The wizard discovers the existing FCI; proceed through rules and feature prerequisites.
Provide the passwords for the same service accounts (the account names are read from the existing cluster) and confirm IP/network settings.
Install. Now both nodes host the FCI.
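The add-node step can likewise be scripted; again, a rough sketch with placeholder values (the Agent password is only needed if you use a separate Agent account):

    # Run on Node 2 to join the existing FCI (values are placeholders)
    .\setup.exe /QS /ACTION=AddNode /IACCEPTSQLSERVERLICENSETERMS `
        /INSTANCENAME=MSSQLSERVER `
        /SQLSVCPASSWORD="PlaceholderPassword" /AGTSVCPASSWORD="PlaceholderPassword" `
        /CONFIRMIPDEPENDENCYCHANGE=0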
Failover Cluster Manager → Roles: select your SQL Server role.
Verify SQL Server, Agent, SQL Network Name, and disks/CSV are Online.
Move the role to the other node (right-click role → Move → Best possible node) to test failover.
From a client machine, connect to SQL-NETWORK-NAME in SSMS.
In SSMS, open the instance properties to confirm default paths and max memory, then add operators, jobs, etc.
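A scripted spot-check of the failover and the client-facing name might look like this (the role name, node name, and SQL network name are placeholders):

    # List cluster roles, then move the SQL role to the other node to test failover
    Get-ClusterGroup
    Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node NODE2
    # From a client machine, confirm the SQL network name answers on the SQL port
    Test-NetConnection -ComputerName 'SQLNET1' -Port 1433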
Firewall (on both nodes): allow TCP 1433 (or your chosen port), plus SQL Browser (UDP 1434) if you use named instances/dynamic ports.
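A minimal sketch for those firewall rules, assuming Windows Firewall and the default port (run on each node):

    # Open the SQL Server port
    New-NetFirewallRule -DisplayName 'SQL Server' -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
    # Only needed for named instances / dynamic ports
    New-NetFirewallRule -DisplayName 'SQL Browser' -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow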
Computer Objects in AD:
The Cluster Name Object (CNO) creates Virtual Computer Objects (VCOs) such as your SQL Network Name. If creation fails, pre-create the VCO and grant the CNO permissions on it.
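If you need to pre-create the VCO, a small sketch using the ActiveDirectory module is shown below (the name and OU are placeholders); granting the CNO rights on the object can then be done from the object's Security tab in Active Directory Users and Computers:

    # Pre-stage the VCO for the SQL network name, left disabled until the cluster claims it
    New-ADComputer -Name 'SQLNET1' -Path 'OU=Clusters,DC=contoso,DC=com' -Enabled $false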
Disks visible but not cluster-capable (SAN): Ensure they're basic disks (not dynamic), formatted NTFS/ReFS, free of OEM partitions, and online on only one node at a time before clustering.
Quorum: For 2 nodes, use a Witness (Disk or File Share) to avoid split-brain.
TempDB: Local SSDs give speed, but the same local path (with the right permissions) must exist on every node so the instance can start after failover. CSV keeps it portable. Pick one intentionally.
Backups: Point to a clustered path or a network share reachable from either node.
Anti-virus: Exclude SQL data/log/tempdb folders and cluster CSV paths from real-time scanning.
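If the nodes run Microsoft Defender, the exclusions can be added like this (the paths are placeholders; adapt the idea to whatever AV product you actually use):

    # Exclude SQL data/log paths and the CSV root from real-time scanning
    Add-MpPreference -ExclusionPath 'S:\SQLData', 'L:\SQLLogs', 'C:\ClusterStorage'
    # Common process exclusion for the SQL engine (binary path varies by version/instance)
    Add-MpPreference -ExclusionProcess 'sqlservr.exe'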
Always On Availability Groups (AGs) are not FCIs. AGs use separate instances and replicate at the database level—no shared storage.
SQL on Azure VMs can use Azure Shared Disks for FCIs; Azure Stack HCI/Azure Local commonly uses S2D + CSV.