A few months ago I wrote a blog post about the EMC VNX virtual storage appliance.
Today I would like to start a blog post series on HP StoreVirtual VSA. While the VNX appliance was intended as a simulator for practicing file storage provisioning with Unisphere, HP StoreVirtual VSA is a fully fledged virtual storage appliance (VSA), supported in production environments, that provides block-based storage via iSCSI.
A VSA is a virtual appliance deployed in a VMware environment which aggregates and abstracts the underlying physical storage into a common storage pool. This pool is presented to the hypervisor and can be used to store virtual machine disks and related files.
StoreVirtual VSA can use either existing VMFS datastores or RDMs (raw LUNs) to store data, and it can be configured to support sub-volume tiering to move data chunks across tiers. Like its "physical" HP StoreVirtual counterpart, StoreVirtual VSA is a scale-out solution: if you need to increase storage capacity, resilience, or performance, additional StoreVirtual VSA nodes (i.e. virtual appliances) can be deployed.
I will discuss scale-out capabilities in another article, since adding StoreVirtual VSA nodes requires proper configuration (cluster creation, FOM deployment, etc.).
The guest OS of a VM residing on StoreVirtual VSA issues I/O requests to its VM disks, which reside in a datastore presented via iSCSI to the ESXi host by StoreVirtual VSA. StoreVirtual VSA in turn issues I/Os to its own disks, which reside on datastores or RDMs located on the underlying physical storage. First of all, this abstracts the storage arrays, since StoreVirtual VSA disks can reside on different physical storage (DAS disks, or NFS, iSCSI, FCP, FCoE, etc. datastores); secondly, it introduces tiering capabilities by defining higher and lower disk tiers (I will explain how in the next article). These requests pass through the VMkernel again before hitting physical storage. Conversely, data returns from physical storage to the guest OS along the opposite path.
Due to the long path followed by each I/O, performance is not the strong point of VSAs. I/Os managed by the VMkernel have an average latency in the microsecond range (the KAVG metric), while hitting physical storage incurs millisecond latencies (the DAVG metric). Consider that using a VSA typically introduces several additional passes through the VMkernel. As an additional note, certain VSAs (AFAIK not HP's) use their RAM, which physically resides inside an ESXi host, as a caching layer into which frequently accessed data is prefetched from storage, exploiting the principle of locality of access. This saves I/Os from traversing the VMkernel down to physical storage when a block requested by a guest OS belongs to the prefetched and cached blocks. If the block is not there (a cache MISS), the VSA retrieves it from physical storage.
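To make the latency argument concrete, here is a back-of-the-envelope sketch comparing the direct I/O path with the VSA path. The KAVG and DAVG figures are illustrative assumptions (microsecond-range VMkernel overhead, millisecond-range physical storage latency), not measurements from a real host:

```shell
# Illustrative latency model: the VSA path crosses the VMkernel twice
# (guest -> VMkernel -> VSA -> VMkernel -> physical storage), the direct
# path only once. KAVG_MS and DAVG_MS are assumed example values.
KAVG_MS=0.05   # assumed VMkernel overhead per pass, in ms
DAVG_MS=5      # assumed physical storage latency, in ms

# Direct path: one VMkernel pass plus the physical storage access
direct=$(awk -v k="$KAVG_MS" -v d="$DAVG_MS" 'BEGIN { print k + d }')

# VSA path: two VMkernel passes plus the physical storage access
vsa=$(awk -v k="$KAVG_MS" -v d="$DAVG_MS" 'BEGIN { print 2 * k + d }')

echo "direct path: ${direct} ms"
echo "vsa path:    ${vsa} ms"
```

The point is not the exact numbers but the shape of the model: the dominant term is the physical storage access (DAVG), and the VSA adds VMkernel passes on top of it.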
Prerequisites for HP StoreVirtual VSA, according to the HP VSA documentation, are the following:
- 3 GB of RAM reserved for the VSA VM.
- One vCPU with 2 GHz reserved for the VSA VM.
- A minimum of 5 GB and a maximum of 2 TB for each virtual disk. Up to 10 TB of space is supported per VSA.
- A dedicated gigabit virtual switch.
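The per-disk and per-VSA limits above are easy to check before you start clicking through the installer. Here is a small sketch that validates a planned set of virtual disk sizes against them; the `disks` values are example figures, not a recommendation:

```shell
# Sanity-check planned VSA virtual disk sizes (in GB) against the
# documented limits: 5 GB minimum and 2 TB (2048 GB) maximum per disk,
# 10 TB (10240 GB) total per VSA. The sizes below are example values.
disks="100 500 2048"
total=0
for size in $disks; do
  if [ "$size" -lt 5 ] || [ "$size" -gt 2048 ]; then
    echo "disk of ${size} GB violates the 5 GB - 2 TB per-disk limit" >&2
    exit 1
  fi
  total=$((total + size))
done
if [ "$total" -gt 10240 ]; then
  echo "total of ${total} GB exceeds the 10 TB per-VSA limit" >&2
  exit 1
fi
echo "planned capacity: ${total} GB (within limits)"
```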
Before starting the VSA installation, a dedicated gigabit virtual switch needs to be created. Both the VM port group and the VMkernel port for iSCSI will reside on the same vSwitch, preventing iSCSI traffic from hitting the physical switch. If possible (i.e. with an Enterprise Plus license and a vDS), configuring LACP/EtherChannel on the virtual switch's physical uplinks will increase the available bandwidth. I also set the MTU to 9000 bytes on both the iSCSI VMkernel ports and the vSwitch.
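If you prefer the command line to the vSphere Client, the same vSwitch setup can be sketched from an ESXi shell with esxcli. The names used here (`vSwitch_iSCSI`, port group `iSCSI`, VMkernel interface `vmk1`) are examples; adapt them to your environment:

```shell
# Create a dedicated standard vSwitch for iSCSI and raise its MTU to 9000
esxcli network vswitch standard add --vswitch-name=vSwitch_iSCSI
esxcli network vswitch standard set --vswitch-name=vSwitch_iSCSI --mtu=9000

# Add the iSCSI port group on that vSwitch
esxcli network vswitch standard portgroup add \
  --portgroup-name=iSCSI --vswitch-name=vSwitch_iSCSI

# Create the VMkernel interface on the port group and set jumbo frames on it too
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

Remember that jumbo frames only help if every hop in the path (vSwitch, VMkernel ports, and any physical switch ports involved) is configured for MTU 9000.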
Let's now install HP StoreVirtual VSA. Once downloaded and uncompressed, run the setup. If you want to use the GUI installer, select option 2.
The Centralized Management Console (CMC) is the software used to manage HP StoreVirtual VSA (and the physical appliances), so if you don't already have it you need to install it.
Next, connect to a vCenter Server (or an ESXi host) in order to deploy HP StoreVirtual VSA.
Select the host on which to place the HP StoreVirtual VSA. The ESXi host's datastores and connected RDMs will be listed.
Select HP StoreVirtual VSA. I will return to the Failover Manager (FOM) installation in an upcoming blog post. If you have different datastores on different storage arrays, you can also enable VSA auto-tiering here.
Select the datastore in which the HP StoreVirtual VSA files will reside. This is the datastore where the VSA VM files and OS disk will be created. Data stored in HP StoreVirtual VSA will *NOT* reside here.
By default the VSA comes with two virtual network adapters. Only one of them will be used for LeftHand OS management traffic, iSCSI data transfer, and intra-cluster traffic (data exchanged between different VSAs). Edit the network settings according to your environment.
Select the VSA VM name.
Next you need to select how much space HP StoreVirtual VSA will provide and from which datastores it will take it. As you can see in the image below, I selected 10GB of space from the Datastore_VSA datastore and 40GB from the Datastore_DATA_VSA datastore. Up to 7 different datastores can be used to store VSA disks. The usable space presented by StoreVirtual VSA is the sum of these values; in my case StoreVirtual VSA will have 50GB of raw/usable space. Raw and usable space are the same because, by default, the VSA uses striped RAID, assuming that proper RAID configurations are already implemented on the underlying physical storage.
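The capacity math above can be sketched in one line: with the default striped RAID there is no redundancy overhead inside the VSA, so usable space is simply the sum of the per-datastore allocations. Using the values from this walkthrough:

```shell
# With striped RAID the VSA adds no redundancy overhead, so
# usable space == raw space == sum of per-datastore allocations.
datastore_vsa=10        # GB allocated from Datastore_VSA
datastore_data_vsa=40   # GB allocated from Datastore_DATA_VSA
usable=$((datastore_vsa + datastore_data_vsa))
echo "usable space: ${usable} GB"
```

With redundant layouts (e.g. Network RAID across multiple VSA nodes, covered later in this series) usable space would instead be a fraction of the raw total.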
Since in this article we are installing a single-node VSA, select "No, I'm done".
The summary screen will appear; if everything is correct, press Deploy.
After the deployment has completed, press the Finish button.
HP StoreVirtual VSA has now been installed and powered on on the selected ESXi host.
The VSA configuration will be explained in the next post.
Other blog posts in HP StoreVirtual VSA Series:
HP StoreVirtual VSA Part1 - Installation
HP StoreVirtual VSA Part2 - Initial Configuration
HP StoreVirtual VSA Part3 - Management Groups Clusters and Volumes
HP StoreVirtual VSA Part4 - Multi VSA Cluster