Since VSAN is built into the ESXi 5.5 hypervisor, it does not require an installation, only an enablement. A VSAN-capable cluster must be created and appropriate disks must be claimed from each host in order to provide capacity and performance to the cluster.
For VSAN testing we need at least three ESXi hosts, each with an unused, unformatted SSD and HDD. VSAN supports a maximum of 1 SSD and 7 HDDs per disk group. If you installed ESXi locally on an HDD, that disk cannot be used for VSAN since it has already been formatted with VMFS.
VSAN, at the time of writing, allows up to eight ESXi hosts, whether "active" or "passive", within the same cluster. As explained in the previous article, not every host participating in a VSAN cluster must have a local HDD and SSD (I refer to those that do, for the sake of simplicity, as "active hosts"), but we need at least three of them in order for VSAN to work properly, since by default policy every VM has its vmdks backed by two hosts, with a third host acting as witness.
VSAN can also be tested in a virtual lab, with no hardware requirements other than a hypervisor (ESXi or Workstation) and enough disk space.
For the purpose of this article I created a vLab environment for VSAN testing, so let's start by creating three ESXi 5.5 hosts.
Each VM on which ESXi will be installed has been configured with:
- VMware ESXi 5.5
- 4GB of RAM
- A 2GB HDD for installing ESXi
- A 4GB *fake* SSD for VSAN
- An 8GB HDD for VSAN
Of course you can tune these values according to your needs.
SSDs in nested virtualization are simply virtual disks faked so that ESXi recognizes them as SSDs. This can be done by following this great article by William Lam: Emulating an SSD Virtual Disk in a VMware Environment.
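In short, the trick is to add a SATP claim rule that applies the `enable_ssd` option to the virtual disk, then reclaim the device. A minimal sketch from the ESXi shell, assuming the 4GB virtual disk shows up as `mpx.vmhba1:C0:T1:L0` (your device name will differ; check with the list command first):

```shell
# List local devices to find the identifier of the 4GB virtual disk
esxcli storage core device list

# Tag the device as SSD via a SATP claim rule
# (device name below is an example, replace with your own)
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
    --device mpx.vmhba1:C0:T1:L0 --option "enable_ssd"

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T1:L0
```

After the reclaim, the device should be reported as SSD in the device list and become eligible as the flash tier for VSAN.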
Another great resource provided by William Lam is a deployment template for a VSAN host. It basically creates a VM with the aforementioned specifications, so if you don't want to manually configure a VM, just download William's.
After the ESXi hosts have been installed, log in to vSphere Web Client and add them to a datacenter.
In order to work, VSAN requires a dedicated network for VSAN traffic. A VMkernel adapter is required; when you create or modify it, make sure to tick the Virtual SAN Traffic checkbox.
The resulting vSwitch will be similar to this one:
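The same tagging can also be done from the ESXi shell, which is handy for scripting the lab setup. A sketch, assuming the VSAN-dedicated VMkernel interface is `vmk1` (adjust to your own setup):

```shell
# Enable VSAN traffic on the dedicated VMkernel interface
esxcli vsan network ipv4 add -i vmk1

# Verify which interfaces carry VSAN traffic
esxcli vsan network list
```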
Now let's create a VSAN-enabled cluster. Cluster creation is the same as for any cluster you have created before, but in this case we need to tick the Virtual SAN checkbox. You can leave Automatic under Add disks to storage so that suitable VSAN disks are automatically claimed from each host.
DRS & HA can be enabled, since both are fully supported by VSAN.
Add your hosts to the newly created cluster.
VSAN can be managed under the cluster's Manage -> Settings -> Virtual SAN. The General tab reports VSAN status, such as used and usable capacity.
Assign a VSAN license under Configuration -> Virtual SAN Licensing.
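Cluster membership can also be verified from the shell of any host. A quick check, with no parameters to adjust:

```shell
# Report this host's VSAN cluster state (enabled, role, member UUIDs)
esxcli vsan cluster get
```

Each host should report itself as enabled and list the other cluster members; a host showing as disabled usually points to a VSAN network misconfiguration.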
Now let's assign disks to a disk group. A disk group can be seen as a logical container of both SSD and HDD resources, created by aggregating the local SSDs and HDDs of each host. SSDs provide performance and are not counted as usable space, because all writes, after being acknowledged, are destaged from SSD to HDD; HDDs, conversely, provide capacity.
Click on the Claim Disks button.
The Claim Disks popup window will appear, listing all unused HDDs and SSDs claimable by VSAN for each host.
Select them by clicking the Select all eligible disks button.
The disk group will be created.
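If you prefer to build the disk group by hand (for example when the cluster was created with Manual disk claiming), the same result can be achieved from the ESXi shell. A sketch with placeholder device names, to be replaced with the real identifiers from your hosts:

```shell
# Create a disk group pairing one SSD with one or more HDDs
# (device names are examples, substitute your own)
esxcli vsan storage add --ssd mpx.vmhba1:C0:T1:L0 \
    --disks mpx.vmhba1:C0:T2:L0

# Confirm the disk group layout on this host
esxcli vsan storage list
```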
The changes will be reflected under the General tab. As said before, only the space provided by HDDs is reported as the Total capacity of the VSAN datastore.
At this point the VSAN cluster is correctly set up; we now need to create a custom storage policy and assign it to our VMs residing on VSAN. This will be explained in Part3.
Other blog posts in VSAN Series:
VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies
VSAN Part4 - Automate VSAN using PowerCLI