VVOL Demo by HDS: Worth a Watch (but let’s be careful out there)

Disclaimer: I work for Hitachi Data Systems. However, this post is not officially sanctioned by my company, nor does it speak on its behalf. These thoughts are my own, based on my experience. Use at your own discretion.

Pretty much the entire world knows that VMware announced vSphere 6.0 in the last week or so.

I haven’t written a post on it yet, but I will soon.

Suffice it to say, VMware has really focused and invested huge engineering resources to achieve this. For my money, vSphere 6 is the best release yet.

As someone said, “the hypervisor is not a commodity”, and when you see the features and improvements that have come with this release, I think that rings true.

So kudos to VMware engineering and Product Management!!

Virtual Volumes

I’ve had a few questions lately from colleagues asking me how this landmark feature works in detail.

I think the following demo by my countryman Paul Morrissey, HDS Global Product Manager for VMware, should really help aid learning.

Paul does a great walkthrough of the storage-side setup, the configuration through the vCenter Web Client, and provisioning to VMs.

It’s a nice demo – not too long but covers the typical questions you might ask.

HDS support for VVOL

It is expected that HDS will announce support initially with the Hitachi NAS (HNAS) product in March, at GA time for vSphere 6.0, followed swiftly by support for all HDS block storage within 2-3 months.

With the initially supported HNAS, HDS supports:

  • 100 MILLION VM snapshots
  • 400,000 VVols (with architectural support for up to 10 MILLION VVols)

A Word of Caution

Don’t forget, you heard it here first!!!!

In my view there has been no commentary or proper consideration of the potentially negative impact of VVols on the datacenter, both in terms of operational processes and in terms of matching business requirements to the correct SLA.

Of course, VVols provide an amazing level of granularity of control over workload placement and alignment, which is to be welcomed.

However, the onus is really on IT SIs and vendors to help customers understand the ACTUAL requirements of their applications. To get the best out of VVols, business and IT need to work more closely together to ensure the correct capabilities are coupled with the right use cases.

My belief is that this new technology needs to be properly managed and given the “VCDX treatment”. What I mean is that the SLA must match the application’s needs. Otherwise we will end up in a situation similar to the early days of virtualisation, with 8-vCPU VMs all over the virtual datacenter.
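To make the “SLA must match the application’s needs” idea concrete, here is a minimal sketch in Python. It is my own simplification of SPBM-style policy matching, and every pool name, tag, and capability key below is invented for illustration; it is not the actual VASA/SPBM schema or any HDS configuration.

```python
# Toy sketch of policy-to-pool matching, loosely in the spirit of SPBM.
# All pool names, tags, and capability keys are invented for illustration.
POOLS = [
    {"name": "array1-hdt-pool", "site": "dublin", "tier": "flash", "replicated": True},
    {"name": "array1-hdp-pool", "site": "dublin", "tier": "sas",   "replicated": False},
    {"name": "array2-hdt-pool", "site": "london", "tier": "flash", "replicated": True},
]

def compliant_pools(policy: dict, pools: list) -> list:
    """Return the names of pools whose advertised capabilities satisfy every rule in the policy."""
    return [p["name"] for p in pools
            if all(p.get(key) == want for key, want in policy.items())]

# A "replicated flash in Dublin" policy routes to exactly one candidate here:
policy = {"site": "dublin", "tier": "flash", "replicated": True}
print(compliant_pools(policy, POOLS))  # ['array1-hdt-pool']
```

The point of the sketch is that the policy only earns its keep when it encodes the application’s real requirements: an “everything on” Platinum-style policy will always match the most expensive pool, whether or not the workload needed it.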

This has been one of the downsides of virtualisation: consumers requesting double what they actually need because of the so-called virtualisation tax, which is now pretty much negligible and irrelevant compared against the upsides. This kind of thinking can (ironically) have a negative impact on performance, which can be notoriously difficult to identify. Were it not for the robustness of the vSphere hypervisor, I think we would not have gotten away with it.

The request for VVols could go like this:

  • I need a VM
  • What type of storage (VVOL) do you need?
  • What have you got?
  • I’ve got Platinum, Gold and Silver
  • What does Platinum have?
  • Everything
  • I’ll take 10 (John in Finance said to ask for more than you need – then if IT gives you Silver, that will be fine)

I admit that this is a slightly facetious scenario, but without internal showback/chargeback the danger is that the right SLA will not end up matched to the right use case or application requirement, and we will end up with an overprovisioned datacenter that, once deployed, cannot be unpicked.
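A tiny sketch of what showback adds to the conversation above. The tier names, capabilities, and per-GB prices are entirely made up for illustration; they are not HDS or VMware figures.

```python
# Hypothetical showback sketch: tiers, capabilities, and per-GB monthly
# prices are invented for illustration, not real HDS or VMware numbers.
TIERS = {
    "Platinum": {"replication": True,  "flash": True,  "eur_per_gb_month": 0.60},
    "Gold":     {"replication": True,  "flash": False, "eur_per_gb_month": 0.25},
    "Silver":   {"replication": False, "flash": False, "eur_per_gb_month": 0.10},
}

def monthly_cost(tier: str, capacity_gb: int) -> float:
    """Return the notional monthly showback figure for a storage request."""
    return capacity_gb * TIERS[tier]["eur_per_gb_month"]

# "I'll take 10" reads differently once a price lands on the request:
request_gb = 10 * 100  # ten 100 GB VMs
print(f"Platinum: {monthly_cost('Platinum', request_gb):.2f} EUR/month")
print(f"Silver:   {monthly_cost('Silver', request_gb):.2f} EUR/month")
```

Even a notional internal price per policy makes John in Finance part of the solution rather than the problem: the consumer, not just IT, now has a reason to pick the tier that matches the actual requirement.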

Paul covered in his demo how cost can be included. We need to accept human nature and understand that data is precious gold: if someone thinks they can have a second copy of their data in a separate data center, who wouldn’t want that?

It is also our default human nature to say yes; humans really don’t like to say no.

This has long been a concern of mine with VVols so we need to perform proper due diligence to ensure this technology does not get misused.

More on this next week.

7 thoughts on “VVOL Demo by HDS: Worth a Watch (but let’s be careful out there)”

  1. Great review, Paul! The caution is also well worth the time thinking through in each environment.

    Two comments though:
    1. As humans…default to say yes…don’t like saying no. When it pertains to oneself – definitely. As I found when I married, “no” can be very easy for some parties. As a parent, my own use of it is also quite popular (with me, anyway). I imagine that with consumption and chargeback (or forward) this will balance out to a team decision in the data center, as you speculate.

    2. 100 Million snapshots?!!! That kind of scale might really offset how much someone needs to ask for. John in Finance might not recommend filling the cup so much when you can recover your drink so many times if it spills.

    1. Haha, great comments, Shawn, and fair point about our friend John. You’ve hit the nail on the head: this is about letting our customers know that we’ve got their back and can always help them meet their challenges; that IT won’t always say no and does want to help. But conservatism is sensible, and once people understand that, they’ll see there’s no need to create waste.

  2. Hi,

    The VVOL datastores that you provide at the storage level: do you connect (map/mask) those to a VMware cluster?

    And how do you set up the connections between the VMware cluster and the HDS storage? And can multiple clusters use the same VVOL datastores?

    Do you have documentation on this?

    Thank you and regards,
    Ronald Blom

    1. Hi Brad,

      Thanks for the comment. I think I might write a blog on this!!

      Way back last year we ran an internal deep-dive session with Hitachi PS guys in one of the EU countries, and they said the same thing. In that country most customers were using HDT with Hitachi Accelerated Flash in a single pool in Hitachi arrays, sometimes with the relatively new Hitachi real-time tiering. And just two weeks ago at a workshop we were discussing how this has become so effective that manually creating RAID groups and volumes may no longer make sense in certain use cases.

      From a placement perspective, I agree that if you have a single array with one disk pool, the VVol will always end up landing in the same place. In that case the rule matching doesn’t do a whole lot, as the placement has only one possible outcome. So what’s the point?

      If you have multiple arrays with more than one HDP/HDT/HRT pool in each, then right away you get some benefit: placement will route the VVol to the appropriate tier and array, and you can use tags to ensure it gets routed to “dublin” or “production” or whatever.

      The real benefits are operational, and on the resilience and data protection side:
      1. On-demand creation and destruction of data on the backend, directly by ESXi
      2. Per-VVol backups via hardware snapshots
      3. Per-VVol replication
      4. Per-VVol performance visibility end-to-end (inside the Hitachi tool and in vROps)

      So (1) means no more thin versus thick versus eagerzeroed versus lazyzeroed: everything is thin by default, and no SCSI UNMAP is needed, as blocks are initialised on write and destroyed when the object is deleted. So there is no disparity between vCenter capacity usage and array capacity usage. I fell foul of this in the past, with serious consequences.
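      That capacity-disparity point can be shown with a toy model. This is my own illustration with made-up numbers, not a model of any real array behaviour:

```python
# Toy model of the vCenter-vs-array usage disparity described above.
# Numbers and class names are illustrative only.
class ThinVmfsNoUnmap:
    """Without SCSI UNMAP, deleting a VM frees space in vCenter but not on the array."""
    def __init__(self):
        self.vcenter_used_gb = 0
        self.array_used_gb = 0
    def create_vm(self, gb):
        self.vcenter_used_gb += gb
        self.array_used_gb += gb
    def delete_vm(self, gb):
        self.vcenter_used_gb -= gb  # array blocks stay allocated

class VvolBackend(ThinVmfsNoUnmap):
    """A VVol object is destroyed on the backend when deleted, so usage stays in step."""
    def delete_vm(self, gb):
        self.vcenter_used_gb -= gb
        self.array_used_gb -= gb

for pool in (ThinVmfsNoUnmap(), VvolBackend()):
    pool.create_vm(100)
    pool.delete_vm(100)
    print(f"{type(pool).__name__}: vCenter {pool.vcenter_used_gb} GB, array {pool.array_used_gb} GB")
# ThinVmfsNoUnmap: vCenter 0 GB, array 100 GB
# VvolBackend: vCenter 0 GB, array 0 GB
```

      The first case is the disparity I fell foul of: vCenter reports the space as free while the array still holds the blocks.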

      Second, the backup scenario has the clear benefit of per-VVol snapshots, and these are isolated inside a separate pool, so there is no chance of filling up a datastore with snapshots. That pool can also be sized in its own right. Much easier to manage, IMO, and selectable when deploying the VM (that’s on our roadmap, coming soon).

      Re DR: Hitachi supports this today (with HDS-documented processes), but native VMware support comes later this year with VASA 3.0. Then you will be able to create an object and select replication in vCenter SPBM, which will have more awareness of the underlying storage layer and will offload the provisioning operation to it.

      I think that by year end a lot of the features enterprises will demand will be in place. At the end of the session my colleagues all agreed that while the single pool on its own may not provide a benefit, some of the other capabilities I listed above will simply become expected in the future.

      Thanks again,
      Paul
