VVol and HDS Part 4: So how is a VVol instantiated?

VVol Series
Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated

To answer this question, let's go back to one of the core tenets of VMware technology: the encapsulation of a virtual machine into a set of files.

As most folks know, the VM file structure looks something like this:

▪    The VMDK system drive
▪    .vmx config file
▪    .nvram file
▪    swap file
▪    Snapshot delta VMDKs (-000001.vmdk → -0000XX.vmdk)
▪    etc.

All of these objects have traditionally ended up in the same folder as the VM, held within a datastore that lived on a LUN.

Now that LUNs are gone, what happens?

A VVol-based VM starts with a minimum of three objects, each stored somewhere on the storage array according to rules satisfied via Storage Policy Based Management (SPBM).

At a minimum, a VM needs the following VVols:

  • System
  • Config
  • Swap

In the HDS implementation, these first three VVol objects are stored in the same pool.
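
If you want to see these objects from the vSphere side, the VM's layoutEx view enumerates them. Here is a minimal pyVmomi sketch, assuming an existing connection and a VirtualMachine object `vm` (the names are illustrative); on a VVol datastore each entry also carries a backingObjectId identifying the backing VVol:

```python
# Minimal sketch, assuming an existing pyVmomi session and a
# vim.VirtualMachine object `vm` obtained from it.
# Each entry describes one file/object behind the VM: 'config', 'diskExtent',
# 'swap', 'nvram', and so on. On a VVol datastore, backingObjectId (added in
# the vSphere 6.0 APIs) identifies the VVol object backing the entry.
for f in vm.layoutEx.file:
    print(f.type, f.name, f.size, getattr(f, "backingObjectId", None))
```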

In the HDS block storage implementation, subsequent VVols and snapshot VVols are created as follows (see the sketch after this list):

  • Data volumes added to the VM can be assigned a different VM storage policy.
  • Placement is decided by matching the policy rules against the array's capability profiles.
  • Snapshot VVols live in a defined location for each storage container; you designate that location at container level.
  • This means snapshots will no longer kill an entire datastore full of VMs.
  • The config VVol is special.
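
As a hedged sketch of that first point: a new data disk gets its own VM storage policy through the standard reconfigure API. This is pyVmomi, and the controller key, unit number and profile GUID are hypothetical placeholders, not values from HDS documentation:

```python
from pyVmomi import vim

# Sketch only: assumes an existing pyVmomi session and a vim.VirtualMachine `vm`.
disk = vim.vm.device.VirtualDisk()
disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk.backing.diskMode = "persistent"
disk.backing.thinProvisioned = True
disk.capacityInKB = 500 * 1024 * 1024        # 500 GB data disk
disk.controllerKey = 1000                    # hypothetical SCSI controller key
disk.unitNumber = 1                          # hypothetical free unit number

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk
# Attach a different VM storage policy to just this disk (hypothetical GUID);
# SPBM then matches the policy rules against the array's capability profiles.
change.profile = [vim.vm.DefinedProfileSpec(profileId="aa6d5a82-1c88-45da-85d3-3d74b91a5bad")]

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```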

I mentioned in the last post that the VVol datastore is only a logical placeholder in vSphere; there is no filesystem. However, when you browse a VVol datastore you will still see the VMs that are stored somewhere on the backend. Curious?

I know I was confused when I saw this! So a folder for each VM? You will not see the three VVols I just mentioned – you will see the historical structure of VMDK, NVRAM and other files.

How could that be?

In Part 5 I'll answer that question, so keep an eye out for the next post 😉

Orchestration of VVol

Here are a few statements about VVol:

  • VVols are orchestrated in the Web Client by standard tasks. There is nothing special going on, and it's intuitive and simple to use (see the sketch after this list).
  • You do not create VVols directly on the storage using your storage management suite.
  • You cannot have a VM with mixed VMFS and VVol storage (I found this one a bit strange and am still not convinced).
  • VVol orchestration is handled via communication between the ESXi kernel and the vendor's VASA Provider. In the HDS implementation, the VASA Provider instructs the Hitachi Command Suite API where to place the object, and Command Suite then performs the request.
  • The HDS VASA Provider is deployed as a virtual machine from an OVA, so it runs out-of-band from the storage.
    • Other vendors run the VASA Provider on the controllers themselves.
    • Check with your vendor how they realise the VASA Provider!
  • There is NO concept of locking with VVol on HDS arrays. Ever.
    • This is a huge operational consideration for customers used to systems with locking, as with regular LUNs on some systems.
    • Create a VM or a VVol object and the operation completes in less than 10 seconds on HDS storage systems.
  • In the HDS implementation, if you examine the pool you will see exactly one object for every VVol, including snapshots.
    • In our parlance this means an LDEV (logical device) with a special designation: SLU (secondary logical unit). So there is still an object on the backend, with an ID assigned from the available pool of VVol addresses.
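
To illustrate the "standard tasks" point, here is a minimal sketch: creating a VM against a VVol datastore uses exactly the same API call as against VMFS. This assumes an existing pyVmomi session, and the datacenter, cluster and datastore names are hypothetical:

```python
from pyVmomi import vim

# Sketch only: `dc` (vim.Datacenter) and `cluster` (vim.ClusterComputeResource)
# are assumed to come from an existing pyVmomi session.
cfg = vim.vm.ConfigSpec(
    name="demo-vvol-vm",
    numCPUs=1,
    memoryMB=1024,
    guestId="otherGuest64",
    # "[vvol-datastore]" is a hypothetical VVol datastore name; no VMFS involved.
    files=vim.vm.FileInfo(vmPathName="[vvol-datastore]"),
)
# The same standard task as for a VMFS-backed VM; vSphere and the VASA
# Provider create the config, data and swap VVols behind the scenes.
task = dc.vmFolder.CreateVM_Task(config=cfg, pool=cluster.resourcePool)
```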


The placement of a VVol is decided by matching the rules defined in vCenter against the capability profiles surfaced by the storage and combined into VM storage policies. Ultimately the object can be found on the storage inside a pool. As I mentioned last time, a VVol could end up in any one of three, four or five HDS pools on the backend, and it could perhaps even be on multiple arrays.

Do you care?

You might care about making sure it's in the right location so you can monitor it. In future, HDS will introduce a mechanism for much more granular control over placement.

Note that vROps does NOT support VVol as things stand, so HDS or other vendors' management packs for the vRealize Operations Suite won't work with it.

A word on Interoperability

Don't be bamboozled into thinking that, just because a storage container can span an entire array (in the HDS implementation), you can configure a VVol of unlimited size.

Can you? The answer is no!

Just as NSX has a hard restriction on the number of interfaces an edge gateway can have (10), the maximum VVol size is constrained by the maximum VMDK size for a VM, which is 62TB. So that is the largest VVol size.

Also, expanding a VVol from 1.5TB to 3TB won't work online – just as with a regular VMDK file. The VM must be powered off first.
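
For completeness, a hedged sketch of that resize through the standard reconfigure API (pyVmomi; the disk label is hypothetical, and per the above the VM should be powered off first):

```python
from pyVmomi import vim

# Sketch only: assumes an existing session and a powered-off vim.VirtualMachine `vm`.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 2":
        dev.capacityInKB = 3 * 1024 * 1024 * 1024   # grow to 3 TB
        change = vim.vm.device.VirtualDeviceSpec(
            device=dev,
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        )
        task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```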

Current VMware Interoperability considerations

For VMware feature and product interoperability, refer to this VMware KB article:


I don't consider any of this a problem. VVol is powerful, but like any transformative technology it will take time to achieve full interoperability. Use the time to plan!

In the next post we will look at VVol storage containers and capability profiles in more detail, and what these look like on HDS storage. Until then…

VVol and HDS Part 3: New Storage Constructs

VVol Series
Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated

Before we can architect VM Storage Profiles, which are the point of consumption for VMs and applications, we need to understand how the north-of-hypervisor view maps downwards. I don't want to harp on about the storage; this is more about describing some new concepts to aid understanding of the entire architecture.

So let’s introduce three (new-ish) concepts and then we can expand on them later:
▪    Protocol Endpoint
▪    Storage Container
▪    VASA Provider

The Protocol Endpoint

Think of a protocol endpoint as a NAT gateway for VVols. Thanks to a colleague (VH) for the NAT analogy.

So the only "logical" connection between an ESXi host and the storage is to the PE; more on how that is configured later. Once that connection is established and you have some storage configured, you can start to create VVols. The protocol endpoint is perhaps badly named, as you can use any protocol across it.

So there is still an object "addressed" by the ESXi host, called the protocol endpoint. It looks like a LUN – in Hitachi block systems it is 46MB and is presented to the ESXi host at LUN ID 256 – but that's where the similarity ends. I already mentioned that LUNs are history! More on PE configuration later.
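
As a hedged sketch, the PE is visible through the vSphere API as a SCSI disk flagged as a protocol endpoint, so something like this pyVmomi fragment should pick it out (assuming an existing session and a vim.HostSystem `host`):

```python
from pyVmomi import vim

# Sketch only: `host` is a vim.HostSystem from an existing pyVmomi session.
storage = host.configManager.storageSystem
for lun in storage.storageDeviceInfo.scsiLun:
    # vSphere 6.0 flags PE devices on the ScsiDisk object itself.
    if isinstance(lun, vim.host.ScsiDisk) and getattr(lun, "protocolEndpoint", False):
        mb = lun.capacity.block * lun.capacity.blockSize / (1024 ** 2)
        print(f"PE: {lun.displayName} (~{mb:.0f} MB)")  # ~46MB on Hitachi block systems
```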

A PE is the portal (sci-fi reference) behind which VVols exist and, some would say, hide. That's not an entirely facetious statement.

Secondary LUN ID is an inherent feature of VVol: once the logical connection between an ESXi host and the PE is established, we are into pure object-creation paradise. No more zoning, masking or filesystem creation. The VVol objects end up in the correct place thanks to translation between VM storage profiles and capability profiles on the storage. In HDS' implementation, capability profiles are configured at pool level.

I will show screenshots and examples of all of this in subsequent blogs, but it’s important to build the picture up layer by layer.

Storage Container

A Storage Container is a logical grouping of capacity. It is mapped to a "virtual datastore" called a VVol datastore. That datastore is not real or persistent in the sense that there is no filesystem; it helps direct the I/O to the appropriate place, but there is no unit of administration or associated overhead.
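
From the vSphere side, the container simply surfaces as a datastore whose type is VVOL. A minimal pyVmomi sketch (the vCenter hostname and credentials are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical vCenter details; vSphere 6.x self-signed certs usually need
# an unverified SSL context.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # summary.type is 'VVOL' for a storage container, 'VMFS'/'NFS' for classic datastores.
    print(ds.name, ds.summary.type, ds.summary.capacity // 2**30, "GiB")
view.DestroyView()
```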

In HDS’ implementation, you can use multiple disk pools (thin pools or tiered pools) in a storage container, and each storage container can be the size of the array if required. We have already publicly stated elsewhere that ultimately containers will likely span arrays. The use of single or multiple pools is an interesting corner case I’ll discuss later.

At pool level, you configure capabilities such as performance, availability and replication. The HDS design provides for containers with multiple pools, meeting the requirements of applications through per-pool capability profiles, with little overhead. So you can still manage very simply, but when it comes to the placement of VVols, pools come into play.

So an example would be three pools in a storage container.

  • Pool 1 = database data files on a tiered pool (HDT)
  • Pool 2 = database log files on a thin pool (HDP)
  • Pool 3 = general system files and disks on a thin pool (HDP)

Are there any reasons to have multiple storage containers?

One use case is deployment of VVols in cloud environments. Thanks to the partitioning technology in HDS arrays, you could segregate each vDC (virtual datacenter) onto a separate array partition (a separate resource group), with a single storage container and one or more protocol endpoints.

That is an interesting use case which I will cover later.

The VASA Provider

The VASA Provider is not new. VASA is a set of APIs that stands for vStorage APIs for Storage Awareness. With v2.0 of the APIs we gain the ability, at hypervisor level, to orchestrate VVol objects directly on the storage, as well as to publish what the storage can do.

Once the connection to the PE is established, a VVol can be created (after a few simple steps) using the VASA Provider's interface to the storage infrastructure API.

HDS VASA Provider 3.1 has recently been released, and in the not-too-distant future VP 3.1f will follow. This will be a single vehicle for the VASA Provider for both file and block, presenting a consolidated user experience for both types of system in the same management plane.

In the next post I will talk about how VVol is realised. Until then!

VVol and HDS Part 2: With and Without VVols

VVol Series
Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated

Bye Bye VMFS

At its simplest, VVol removes the layer of abstraction that exists between a virtual machine and its data. As one of my colleagues has said, it's akin to providing a raw device mapped LUN for every VMDK file, without the operational implications of RDM.

It is important to understand that the layer being removed is the VMFS filesystem, which has been the middleman between the virtual machine and its data. Removing it has many implications.

For starters, it means there is no local filesystem instantiation in vSphere. This has an impact on snapshots, for example, which now cannot be completed without hardware offload.

Why? If you think about it, there is nowhere "local" where vSphere can store the snapshot. And there are some neat benefits to snapshots from a placement, fault isolation and containment perspective. I'll cover them in later posts… the concept of a datastore filling up and stopping all I/O to all resident VMs will become a thing of the past.
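
The API surface doesn't change, only where the work happens. A hedged sketch (pyVmomi, assuming an existing session and VM object): with VVols, the same call is satisfied by an array-offloaded snapshot rather than a local VMFS delta file:

```python
# Sketch only: assumes an existing pyVmomi session and a vim.VirtualMachine `vm`.
# The standard snapshot task; on a VVol datastore the hypervisor hands the work
# to the array via the VASA Provider instead of writing a local delta VMDK.
task = vm.CreateSnapshot_Task(name="pre-upgrade",
                              description="before application upgrade",
                              memory=False,   # no memory image
                              quiesce=True)   # filesystem quiesce via VMware Tools
```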

As I mentioned in my last post <VVol and HDS: An Introduction>, there have been operational consequences, such as the way mapped blocks (used space on disk) are managed separately by the hypervisor and the storage array. This is why the traditional LUN presentation approach to virtual infrastructure has caused operational problems.

Without VVols

Consider the diagram:

The impact:

This approach relies on either the predictive schema (a discrete datastore for individual apps, VMs or VMDKs) or the adaptive schema, where many VMs share the same datastore and, if a problem arises, you move individual VMDKs to another datastore. So most folks "suck it and see" with the adaptive schema.

That's not to say you can't empirically size datastores – you can, and a toy sizing model follows below – but most people don't know how, or don't do it. And many vendor engineers don't know how either.
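
For illustration only, here is a toy empirical sizing model in Python; every figure in it is a hypothetical assumption, not a recommendation:

```python
# Toy datastore sizing model (all inputs are hypothetical assumptions).
vms_per_datastore = 20
avg_vmdk_gb = 60          # average total VMDK capacity per VM
vm_memory_gb = 8          # swap file roughly equals configured memory when powered on
snapshot_headroom = 0.20  # reserve 20% of VMDK capacity for snapshot growth

per_vm_gb = avg_vmdk_gb * (1 + snapshot_headroom) + vm_memory_gb
datastore_gb = vms_per_datastore * per_vm_gb
print(f"size each datastore at ~{datastore_gb:.0f} GB")  # ~1600 GB with these inputs
```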

Some of the operational challenges today are:

  • If you want to use the fastest application-consistent backup method (hardware snapshot), the snapshot has to capture the entire datastore.
    • If the entire contents are being backed up, the window of exposure can be long while all the objects are captured.
    • Creating these snapshots involves coordinating the point in time at which the datastore and LUN are snapped, to ensure consistency.
    • This is not a simple engineering problem, and it can hurt when it stops working. It is also highly dependent on a proven interoperability configuration, which is tricky in itself.
  • People feeling they have to adopt a one-size-fits-all approach:
    • Set all datastores to the same size for simplicity.
    • Present datastores to all clusters (despite this being neither good practice nor a good idea).
  • The VMware consumer has no visibility of the capabilities of a datastore and is entirely reliant on the storage administrator to ensure the correct datastore is presented, and so on.
    • They also have no idea where the datastores physically live. Wouldn't it be great if this metadata were exposed?
  • Typically more datastores are pre-allocated than are required, normally to multiple clusters "just in case". This is not a good idea and leads to too large a fault domain.

With VVols 


The impact:

Now consider the removal of VMFS from the picture: every VM disk (including config and swap files) now maps to a separate object on the storage.

ASIDE: One of my NAS colleagues quite funnily says that block storage is finally able to replicate the simple features of NAS. It's not quite that simple, but he has a point. Later on I'll show how we still appear to have the same structure (folders) on the storage, due to the way the Web Client abstracts the virtual machine objects. This is quite interesting too!

Now we have an improved operational picture:

  • Data services are provided at virtual machine disk level.
  • No longer do we need to compromise by making everything the same for the sake of administrative overhead.
    • No more "let's make all datastores 1TB or 2TB…".
  • Back-end storage, pools and containers are abstracted from the virtual machine.
    • We just place a disk somewhere like "SQL Server Production" or "Dublin Business Critical". You no longer need to know where your data resides. Why would you care? Once your requirements are met, let VM storage profiles take care of data placement.
  • Use VM storage profiles as the logical container in which virtual machines are stored.
    • This is a paradigm shift. What happens now when you change where a VM disk is stored? We will address that question later, but it's an interesting one you should consider!

In the next post I will describe the VASA Provider and protocol endpoints in more detail. Until then… think about some of these ideas!




VMware VVol and HDS: An Introduction

VVol Series
Part 1: Introduction
Part 2: With and Without VVols
Part 3: New Storage Constructs
Part 4: How are VVols instantiated

Over the last few weeks I've been delivering presentations with colleagues as part of internal and external training sessions on VMware VVol technology and, more specifically, its application when using HDS storage and software.

On August 18th, VMware VVol became formally supported on HDS G1000 Storage, our first block storage array to support VVol.

This post is the first in a series sharing some of what I learned, as well as how VVol will be realised with HDS technology. I hope you will read on. This will not be a case of whether the HDS implementation is better or worse than others – I will try to keep the series neutral, but will describe HDS-specific considerations for our customers.

We at HDS expect phased adoption of VVol technology, at least in terms of planning and evaluation prior to migration. This is due to the ramifications VVol brings to storage and virtual architectures and, more importantly, to the operational processes that need to be put in place to support it. There are significant opportunities to enhance operations with VVol, and we are sure they will be realised.

Several layers of interoperability must be considered, notwithstanding HDS or VMware features, so we are advising customers to start this evaluation a number of months before deployment.

Start at the beginning

When will you deploy vSphere 6?

That is the first question that needs to be considered, and the one we put to customers first.

This is not a trivial concern, due to the fundamental changes vSphere 6 brings to the landscape. Now is the ideal time to test drive the technology and get hands-on experience, which will inform better decision-making. That depends on what equipment you have, but it is the only real way to "get your head around it".

The ultimate deliverable and unit of consumption of VVol will be Virtual Machine Storage Policies, consumed by applications and VMs. That must not be forgotten! A VM Storage Policy is therefore a fundamental construct in the VVol ecosystem.

Operational Benefits are key

No matter how you look at it, the operational benefits of VVol are clear, and even the most ardent debates ended with audiences agreeing on this point. That alone justifies adoption of the technology.

These benefits stem from the legacy constructs in the presentation of storage (LUNs/VMFS) to virtual infrastructure, which clearly needed a new framework. Zero page reclaim and SCSI UNMAP were highlighted by many on the VMware administration side of the house as causing significant administrative overhead.

These will be a thing of the past with the adoption of VVol.

Not a question of If

In the sessions so far, it has not been a question of if, but rather how and when VVol can be implemented.

So in this series I will cover many of the key topics within what I am calling a foundational technology for the future of the virtual datacenter (if you’re a VMware customer).

Benefits to Cloud & “Storage As A Service”

If you are planning to consume storage at scale by provisioning large sets of virtual machines using a Cloud Management Platform (e.g. for DevOps), VVol will make possible what is otherwise very difficult from an operational perspective.

Consider the creation of 400 VMs using lazy-zeroed thick block allocation onto a thin pool, and the difficulties this incurs. This is what makes traditional storage unsuited to the rigours of flexible cloud consumption.

Now consider how the VVol disks of a virtual machine can be created and destroyed on demand via the VASA Provider; control has moved to the hypervisor management layer. This is a KEY concept to grasp, and it will be a key benefit for cloud consumption models (see the sketch below).
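
As a hedged sketch of that on-demand model (pyVmomi; the template, datastore and cluster objects are assumed to exist): each clone's disks simply become new objects on the array, with no LUN or VMFS pre-provisioning step:

```python
from pyVmomi import vim

# Sketch only: `template` (vim.VirtualMachine), `vvol_ds` (vim.Datastore of
# type VVOL), `cluster` and `dc` all come from an existing pyVmomi session.
relocate = vim.vm.RelocateSpec(datastore=vvol_ds, pool=cluster.resourcePool)
clone_spec = vim.vm.CloneSpec(location=relocate, powerOn=False)

# Fan out 400 clones; each VM's config, data and swap VVols are created on
# demand by the array via the VASA Provider, with no datastore carving up front.
for i in range(400):
    template.CloneVM_Task(folder=dc.vmFolder, name=f"web-{i:03d}", spec=clone_spec)
```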

VVol allows consumption and capacity provisioning to be truly separate activities. This is a paradigm shift that allows a VM, including its storage, to be offered as a service via a service portal far more easily, without the risks that previously went along with that.

The starting point for our sessions is always the organisational considerations. We believe VVol can help heal fractures between the storage and virtual infrastructure teams, in the same way it can bring closely aligned teams even closer together.

In the next post I will start to describe the concepts from a storage and vCenter perspective. We will slowly build the picture up layer by layer, to prevent the confusion that can be common with VVol…

Thoughts from vExpert Session July 14th -> Scaling Converged vs Hyperconverged

First published on July 24, 2015 11:23:00 AM at HDS VMware Community

On July 14th we held a private webinar for the vExpert community. Thanks to the always-helpful Corey Romero, vExpert community manager, for hosting it and giving us the opportunity to share some of the coolest converged technology out there.

The session was all about the power of the Unified Compute Platform API, which allows orchestration of servers, storage and networking programmatically, the same way you can work directly within the vCenter UI.

As I mentioned on the WebEx: how about the following service catalog items in vRA!

Continue reading Thoughts from vExpert Session July 14th -> Scaling Converged vs Hyperconverged

HDS customers and fans: join our new VMware Community

HDS recently launched the new VMware Community site. There you will find many of our product managers and senior VMware consultants contributing deep-dive knowledge. The timing is perfect, with VMworld fast approaching.

This is about a lot more than just storage: you can find awesome resources about VMware VVol, EVO:RAIL and UCP Director for VMware, with lots of other resources being added… coming soon!

The purpose of the site is to share knowledge about putting HDS and VMware together, and it shows the incredibly strong relationship that exists today between Hitachi Data Systems and VMware.

For anyone who needs information, you now have a shortcut to the people who really know. There is also huge internal support for this initiative, so over time many more internal experts will get involved in directly helping customers… As usual, we are all busy, so sometimes we need to poke and prod them first :-)


Join up today!

My VMworld 2015 USA Session Schedule

This year is my first time attending VMworld USA. It will be great to get the chance to meet up with some great community comrades and also to see the bigger scale. This is also my first time doing presentations on behalf of the company, probably in our booth theatre, but let's see what happens.

For the times when I'm not in customer meetings or on booth duty, I wanted to share my current session list. For me there are a few main areas of focus:

  • Automation (agnostic)
  • vRealize Automation Suite
  • Service Catalog Design and Automation
  • Microservices / Docker / applications in the 21st century, and how to automate them
  • vSphere advanced topics – Performance, Certificate Management, VCSA, NSX deep dive
  • vCloud Air as part of a coherent DR strategy
  • Openstack (VIO)

Continue reading My VMworld 2015 USA Session Schedule

Attention vExperts: Join vExpert HDS session on July 14th 4pm GMT

To all my esteemed 2015 vExperts in the VMware community: I really hope you can join the HDS team for a vExpert briefing next week, to talk about why we believe the HDS super-converged offering… (I just made that term up)… is the best fit today for scaling your private cloud environment.


Continue reading Attention vExperts: Join vExpert HDS session on July 14th 4pm GMT