To address this question, let’s go back to one of the core tenets of VMware technology: the encapsulation of a virtual machine into a set of files.
As most folks know, the VM file structure looks something like this:
▪ The VMDK system drive
▪ .vmx config file
▪ .nvram file
▪ swap file
▪ Snapshot VMDKs (VMDK001 → VMDK0XX)
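To make that encapsulation concrete, here is a toy Python sketch that classifies the kinds of files you would typically find in a VM folder. The file names are invented for the example; the real layout is created by ESXi, not by hand:

```python
from pathlib import Path

# Hypothetical file names for illustration only
VM_FILES = [
    "myvm.vmdk",         # system drive descriptor
    "myvm-flat.vmdk",    # system drive data
    "myvm.vmx",          # config file
    "myvm.nvram",        # BIOS/EFI state
    "myvm.vswp",         # swap file
    "myvm-000001.vmdk",  # snapshot delta
]

def classify(name):
    """Rough classification of a VM file by its name/suffix."""
    if "-00000" in name:
        return "snapshot"
    suffix = Path(name).suffix
    return {".vmdk": "disk", ".vmx": "config",
            ".nvram": "nvram", ".vswp": "swap"}.get(suffix, "other")

for f in VM_FILES:
    print(f"{f:22s} -> {classify(f)}")
```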
All of these objects nominally ended up in the same folder as the VM, held within a datastore that lived on a LUN.
Now that LUNs are gone, what happens?
A VVol-based VM will start with a minimum of three objects, each one stored somewhere on the storage array according to rules satisfied via Storage Policy Based Management (SPBM).
At a minimum a VM needs the following VVols: a config VVol, a data VVol and a swap VVol (the swap VVol is created when the VM is powered on).
The first three VVol objects are stored in the same pool (in HDS).
In the HDS block storage implementation, subsequent VVols and snapshot VVols are created as follows:
- Data volumes added to the VM can be assigned to a different VM Storage policy.
- Placement will be decided based on matching of rules.
- Snapshot volumes will live in a defined location for each storage container. You designate this location at the container level.
- This means snapshots will no longer kill an entire datastore full of VMs.
- The config VVol is special.
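The rule matching described above can be sketched in a few lines of Python. Everything here is a simplified assumption for illustration – the pool names, the capability keys and the matching logic – and the real decision is made by vCenter and the vendor’s VASA Provider, not by user code:

```python
# Hypothetical pools and capabilities, invented for illustration
POOLS = {
    "gold-pool":   {"tier": "flash", "replication": True},
    "silver-pool": {"tier": "flash", "replication": False},
    "bronze-pool": {"tier": "sas",   "replication": False},
}

def place(policy_rules):
    """Return the first pool whose capabilities satisfy every rule
    in the VM storage policy, or None if no pool is compliant."""
    for pool, caps in POOLS.items():
        if all(caps.get(k) == v for k, v in policy_rules.items()):
            return pool
    return None  # no compliant pool: provisioning would fail

print(place({"tier": "flash", "replication": False}))  # silver-pool
```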
I mentioned in the last post that the VVol datastore is only a logical placeholder in vSphere; there is no filesystem. However, when you browse a VVol datastore you will still see the VMs, stored somewhere on the backend. Curious?
I know I was confused when I saw this! So a folder for each VM? You will not see the three VVols I just mentioned; instead you will see the historical structure of VMDK, .nvram and other files.
How could that be ?
In part 5 I’ll answer that question so you need to keep an eye out for the next blog 😉
Orchestration of VVol
Here are a few statements about VVol:
- VVols are orchestrated in the web client by standard tasks. There is nothing special going on, and it’s intuitive and simple to use.
- You do not create VVols directly on the storage using your storage Management suite.
- You cannot have a VM with mixed VMFS and VVol (I know I found this one a bit strange and am still not convinced)
- VVol orchestration is handled via communication between the ESXi kernel and the vendor’s VASA Provider. In the HDS implementation, the VASA Provider instructs the Hitachi Command Suite API where to place the object, and Command Suite then performs the request.
- The HDS VASA Provider is deployed as an OVA-based virtual machine, so it runs out-of-band from the storage.
- Other vendors run the VASA Provider on the controllers themselves.
- Check with your vendor how they implement their VASA Provider!
- There is NO concept of locking with VVol on HDS arrays. Ever.
- This is a huge operational consideration for any customers used to systems with locking, like you have had with regular LUNs on some systems.
- Creating a new VM or a VVol object completes in less than 10 seconds on HDS storage systems.
- In the HDS implementation, if you examine the pool you will see exactly one object for every VVol, including snapshots.
- In our parlance this means an LDEV (logical device) with a special designation – SLU (subsidiary logical unit). So there is still an object on the backend, with an ID assigned from the available pool of VVol addresses.
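To make the one-object-per-VVol idea concrete, here is a toy Python model of handing out backend LDEV (SLU) IDs from a finite address pool. The class, ID range and naming are all invented for illustration; on a real array this bookkeeping is done by the storage microcode and Command Suite:

```python
class LdevAllocator:
    """Toy one-to-one mapping of VVols to backend LDEV (SLU) IDs."""

    def __init__(self, first=0x1000, count=8):
        self._free = list(range(first, first + count))  # available addresses
        self.bound = {}  # vvol id -> ldev id

    def create_vvol(self, vvol_id):
        """Take the next free LDEV ID and bind it to this VVol."""
        if not self._free:
            raise RuntimeError("VVol address pool exhausted")
        ldev = self._free.pop(0)
        self.bound[vvol_id] = ldev
        return ldev

alloc = LdevAllocator()
print(hex(alloc.create_vvol("config-vvol")))  # 0x1000
print(hex(alloc.create_vvol("data-vvol")))    # 0x1001
```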
The placement of a VVol is decided by matching the rules defined in vCenter against the capability profiles surfaced by the array and combined into VM storage policies. Ultimately the object can be found on the storage inside a pool. As I mentioned last time, a VVol could land in any one of several HDS pools on the backend, and potentially even on multiple arrays.
Do you care ?
You might want to care about making sure it’s in the right location so you can monitor it. In the future, HDS will introduce a mechanism for much more granular control over placement.
Note that vROps does NOT support VVol as things stand, so using HDS or another vendor’s management packs for vRealize Operations Suite won’t work.
A word on Interoperability
Don’t get bamboozled into thinking that, just because a storage container can span an entire array (in the HDS implementation), you can configure a VVol of unlimited size.
Can you? The answer is no!
Just as NSX has a hard restriction on the number of interfaces an edge gateway can have (10), VVol size is constrained by the maximum VMDK size for a VM, and that is 62TB. So that is the largest VVol size.
Also, expanding a VVol from 1.5TB to 3TB won’t work online, just as with a regular VMDK file; the VM must be powered off to do this.
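Those two constraints can be expressed as a small validation sketch in Python. This is purely illustrative, with the function and parameter names invented here; the real checks live in vSphere and the VASA Provider:

```python
MAX_VVOL_TB = 62  # hard ceiling: the maximum VMDK size for a VM

def validate_resize(new_size_tb, vm_powered_on):
    """Sanity-check a VVol expansion request against the two
    constraints above. Illustrative only, not a real vSphere API."""
    if new_size_tb > MAX_VVOL_TB:
        raise ValueError(f"a VVol cannot exceed {MAX_VVOL_TB}TB")
    if vm_powered_on:
        raise RuntimeError("power off the VM before expanding the VVol")
    return True

print(validate_resize(3, vm_powered_on=False))  # True
```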
Current VMware Interoperability considerations
For VMware feature and product interoperability refer to this VMware KB Article:
I don’t consider any of this a problem. VVol is powerful, but like any transformative technology it will take time to achieve full interoperability. Use the time to plan!
In the next post we will look at VVol Storage Containers & Capability Profiles in more detail, and what these look like on HDS storage. Until then…