VMware Support Summit #vTechSummit Cork Feb 24-25

On Tuesday and Wednesday I finally made it to the VMware EMEA Tech Support Summit in Cork, sponsored by GSS. I’d never been before and wasn’t sure what to expect. Would it be a waste of time?

Not quite. In fact it was the opposite.

I met up with my good buddy and fellow VCAP Ross Wynne (@rosswynne); both of us were looking for good-quality tech content.

To save you reading through this whole post: I can confirm the complete absence of product marketing. There was one borderline presentation which I really didn’t get, but that was the exception. Everything else was informative, interactive, technical and suited to the audience.

The Presenters

With maybe one exception, the presenters had a command of their subject matter that was right up there with any I’ve seen. When you think about it, that’s to be expected of support and product experts (escalation engineers).

But what I didn’t expect was that, although some of these guys might not present every day, most of them were very accomplished presenters and explainers of deep technical concepts.

And because the slides will be shared with the attendees, you didn’t need to worry about writing stuff down. You couldn’t anyway: each one-hour session had so much packed in. This is one to sit back and listen to without bothering with note-taking. I even decided on a self-imposed tweeting ban.

Design Do’s & Don’ts

What I also didn’t expect was the focus on sharing examples from the field of what architectures worked, what didn’t, and what best practices you should follow in your designs. So this was best practice based on reality: real examples of upper limits you might hit, and upper limits you shouldn’t attempt to hit.

How to get out of jail

And then the most obvious part.

  • What are the worst problems?
  • What is the root cause?
  • How can you fix them?
    • What logs can you check?
    • Should you use tcpdump, vmkping or some other utility?
    • Which ones are well known to support?
    • In each case, whether or not there was a fix, the relevant KBs were listed.

I particularly enjoyed the top-10 issues session, just for the range of items that arose.

In many cases there were no fixes. The nature of this session meant it was a case of full disclosure and sharing of information. I like that a lot!

vCloud Director

Take the case of the vCloud preso, which was really awesome. Some of the others were excellent too, but this one had a really nice mix and flow.

This one focused on designing and scaling a cloud and included a walkthrough of the product, how to go about building and scaling a cloud, and the different allocation models – reservation, pooled with overcommitment, and pay-as-you-go – walking through the practical impacts of these decisions and how they might end up surfacing in an SR.
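The difference between those allocation models boils down to how much of the requested capacity the provider must back with real reservations. This is a rough sketch of that idea, not vCloud Director's actual admission-control logic, and the figures are hypothetical:

```python
# Illustrative sketch (NOT vCD's real algorithm): how much provider CPU
# each allocation model forces you to reserve up front.

def backed_cpu_ghz(model: str, allocated_ghz: float,
                   guarantee_pct: float = 100.0) -> float:
    """Return the CPU (GHz) the provider must reserve for an org VDC."""
    if model == "reservation":
        # Reservation pool: everything allocated is fully reserved.
        return allocated_ghz
    if model == "allocation":
        # Allocation pool: only the guaranteed percentage is reserved;
        # the remainder is overcommitted.
        return allocated_ghz * guarantee_pct / 100.0
    if model == "payg":
        # Pay-as-you-go: nothing is reserved until VMs power on.
        return 0.0
    raise ValueError(f"unknown model: {model}")

for model in ("reservation", "allocation", "payg"):
    print(model, backed_cpu_ghz(model, 100.0, guarantee_pct=50.0))
```

The same 100 GHz request costs the provider very different amounts of guaranteed capacity depending on the model chosen, which is exactly the kind of decision that surfaces later in an SR.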

There was a case study to look at:

  • What are the Small, Medium and Large options for building a 30,000-VM cloud, based on 10, 20 or 30 hosts in a cluster?
  • What does each solution look like when you push it through the configuration maximums for VSAN, vCD, vSphere, vCenter, cluster limits etc.?
  • Where will most such designs fall down in the real world?
  • What are the top issues GSS sees on a day-to-day basis, and how can you resolve them?
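The "push it through the configuration maximums" exercise is mechanical enough to sketch in code. The limit values below are illustrative placeholders only (always check the current VMware configuration-maximums documentation), but the shape of the check is the point:

```python
# Sketch of checking a cloud design against configuration maximums.
# LIMITS values are illustrative -- verify against current VMware docs.

LIMITS = {
    "hosts_per_cluster": 32,
    "vms_per_host": 512,       # illustrative
    "vms_per_vcenter": 10000,  # illustrative
}

def check_design(clusters: int, hosts_per_cluster: int, vms: int) -> list:
    problems = []
    if hosts_per_cluster > LIMITS["hosts_per_cluster"]:
        problems.append("cluster too large")
    total_hosts = clusters * hosts_per_cluster
    if vms / total_hosts > LIMITS["vms_per_host"]:
        problems.append("too many VMs per host")
    if vms > LIMITS["vms_per_vcenter"]:
        problems.append("exceeds vCenter VM maximum")
    return problems

# A 30,000-VM cloud on ten 30-host clusters:
print(check_design(clusters=10, hosts_per_cluster=30, vms=30000))
```

Even this toy version shows where a single-vCenter design for 30,000 VMs falls down first.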

One thing I took from it that I don’t come up against too often is the GSS view that you should always stay comfortably within the maximums, specifically when sizing. So don’t ever deploy a 32-host vSphere cluster; limit it to a maximum of 30 hosts, just to leave headroom and stay within acceptable tolerance, even though 32 is supported.

Other highlights/points of interest

Things that left an imprint in my head:

  • VSAN is mission-critical workload-ready – in my opinion.
    • I still have question marks regarding its interoperability with other VMware and third-party products, but that’s gonna happen. It’s a matter of time.
    • As I’ve always said … it’s baked into the kernel. You just tick a box to turn it on, which offers almost zero barrier to adoption. That’s also what appeals about PernixData.
    • For VSAN we’re talking about a queue depth of 256 due to higher performance capability.
  • For vCloud Air, I think it’s got all the features an enterprise will need – HA, good storage performance – but failback (for now) is still not an option. So moving an entire enterprise workload to the cloud with DRaaS sounds good, but like SRM used to be, failing back won’t be trivial.
    • So Development/Test or more tactical use seems more likely.
    • One interesting piece of information: on an Exchange system, because reading an email generates a database entry, 30% of the data changes daily, which won’t help any attempt to keep Exchange synced up with the cloud.
  • NSX for me is still the only game in town. A great solution, but from speaking to people, cost is still an issue. Considering what it can do it doesn’t seem prohibitive, but it is what it is.
  • The biggest potential screwup with vCD is deleting inventory objects in vCenter before deleting them in vCloud. Always delete from vCloud first … otherwise you’re potentially looking at a DB restore for both vCloud and vCenter as the recommended remediation.
  • For VSAN in SSD/SAS (hybrid) configurations we’re looking at a 70/30 read/write split for cache.
  • For all-flash VSAN we’re talking about a 100% write cache (potentially).
    • Suggested figures are 20,000–40,000 IOPS per host in a hybrid config, and potentially 100,000 per host in all-flash VSAN.
  • Everyone still loves vCloud Director – the audience echoed that sentiment – and I don’t think that will change.
  • The new VMware View Cloud Pod architecture will support up to 20,000 desktops, and in v6.1 there will now be a UI to configure it. Up to now it’s been configurable via the CLI only.
  • Some stuff many will know already about vROps, but worth mentioning again.
    • There is now a single combined GUI for vROps 6.0 and a single virtual machine inside the vApp.
    • You can have replica nodes as part of a master-slave type arrangement for redundancy.
    • The Custom UI is now included with the regular UI inside the same interface.
  • For VVols, the presentation made me think about the fact that it doesn’t mean the end for the storage admin …
    • As was mentioned, it means the storage admin can spend his time properly managing “data”, understanding what’s going on and making more informed decisions.
  • The Certificate Automation Tool is awesome.
    • The new version pretty much does everything for you, which is very important considering the major architecture changes with the PSC and VMCA and how these can impact the use of certificates from CAs.
    • You can make the VMCA a subordinate of your internal CA to keep things simple.
    • The Machine SSL certificate is the key object … it’s what you talk to on port 443 on vCenter.
    • There are four further component certificates that may need to be changed. I’m not going into that here; Julian Wood over at wooditwork.com has a great writeup on certificates.
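If you want to eyeball the Machine SSL certificate vCenter presents on port 443, the Python standard library can fetch it without any VMware tooling. The host name below is a hypothetical placeholder:

```python
# Fetch the certificate a server presents on port 443 (the "Machine SSL"
# certificate in vCenter terms), as PEM text. Chain validation is NOT
# performed here -- this is purely for inspection.

import ssl

def get_machine_ssl_cert(host: str, port: int = 443) -> str:
    """Return the server's presented certificate in PEM format."""
    return ssl.get_server_certificate((host, port))

# Example (replace with your own vCenter; name is hypothetical):
# pem = get_machine_ssl_cert("vcenter.example.local")
# print(pem.splitlines()[0])   # first line of the PEM block
```

Handy for confirming a certificate swap actually took effect before services start complaining.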

I could go on and on and on and on ……

Anyway, there’s so much more, but I hope this shows how much I enjoyed it. A lot of people feel VMworld is more of a networking event now and that sessions don’t always go deep enough. This is an opportunity to redress the balance, for free.

Until the next one.

Application Dependency Mapping with vCenter Infrastructure Navigator (VIN)

If you need to perform Application Dependency Mapping (ADM) – also called Application Dependency Planning (ADP) – there are a lot of different ways to do so: some automatic, using software; some manual, using software; and some using pen, paper and workshops. You will likely use a combination of some or all three approaches to achieve a successful outcome.

First, the why.

Why would you want to do this, and why does it matter?

For optimal cluster design it is very important to understand mission-critical application dependencies. This helps you make smart decisions regarding workload placement and whether you should either co-locate certain VMs together on the same host, or keep them apart, to ensure best possible performance.

This is of course what DRS Affinity and Anti-Affinity rules are all about, but what if you have a large estate with 5,000 VMs? You can’t manually document dependencies for each of them, unless you’re Superman ……
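To see why tooling matters at this scale, consider what a discovery tool like VIN is really producing: observed flows between VMs, from which a dependency map falls out. The flow data here is made up purely for illustration:

```python
# Build a simple dependency map from observed VM-to-VM flows
# (the kind of data a discovery tool would collect; values are made up).

from collections import defaultdict

flows = [
    ("web01", "app01"), ("web02", "app01"),
    ("app01", "sql01"), ("vcenter", "sql01"),
]

deps = defaultdict(set)
for src, dst in flows:
    deps[src].add(dst)

for vm in sorted(deps):
    print(vm, "->", ", ".join(sorted(deps[vm])))
```

Four flows are trivial; 5,000 VMs' worth of flows is exactly why this has to be automated.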


And as part of your Virtual Machine design you should always consider workload placement to ensure noisy neighbours do not interfere with critical workloads. You might also have to use Resource Pools, Reservations, Storage I/O Control or Storage DRS to ensure critical workloads get priority and are not “interfered” with by less critical workloads.

It is often the case that this has not been carried out at all, and things are left to move around – that’s my experience. When you spend significant time studying vSphere you really understand all the inputs required to design it optimally. Most are pretty common sense, but it’s always been my opinion that this is why vSphere is so great: despite workloads being thrown at it and it being grown in the most horrible organic ways, it continues to work and deliver for businesses due to robust design.


The VMware VCAP-DCD certification includes the methodology you need to learn in order to understand how to design your virtual infrastructure in the best way possible. The blueprint and exam contain Objectives and Tasks, respectively, covering application dependency mapping. This methodology is what VCDX is built on; in fact, VCDX is the practical implementation and demonstration of competency in this approach. Here is a grab of one of the pages of the VCAP-DCD blueprint. I’ve seen this asked A LOT by people contemplating or preparing for VCAP-DCD.

Screen Shot 2015-02-18 at 21.07.46

Example 1: Data Locality

Let’s say you build a vSphere cluster and pay zero attention to where any of your workloads are running. It’s to be expected that the situation in this figure can arise:

ADM Pic 1

So you have a situation where your applications (in this case CommVault CommServe and vCenter) are accessing a SQL Server instance by traversing the network. This really doesn’t make sense from an availability or manageability perspective; it makes more sense to keep application-database traffic local. This is a simple example, but it can be more difficult when the dependencies are not obvious.

Example 2: Exchange

In this case you’re planning for Exchange 2013. By using DRS rules you can keep CAS/HUB servers and Mailbox servers together to ensure load balancing and better availability for the application. This is the practice recommended by VMware and shows how DRS (and vSphere HA) can always provide a benefit when you virtualise an application.

ADM Pic 2

Example 3: DR with SRM

For SRM it is critical to understand application dependencies in order to design protection groups. This also feeds into datastore design when using array-based replication. The bottom line is that there’s no point in failing over your mission-critical three-tier application consisting of Web, App and DB if you can’t meet the underlying dependencies on AD and DNS.
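Grouping VMs that must fail over together is essentially a connected-components problem over the dependency map: anything linked directly or indirectly belongs in the same protection-group candidate. A small union-find sketch (dependency data invented for illustration):

```python
# Sketch: derive SRM protection-group candidates from dependency edges.
# VMs connected directly or indirectly should fail over together.

def protection_groups(edges):
    """Union-find over dependency edges; returns sets of related VMs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)

    groups = {}
    for vm in parent:
        groups.setdefault(find(vm), set()).add(vm)
    return sorted(groups.values(), key=lambda g: sorted(g))

edges = [("web", "app"), ("app", "db"), ("crm-web", "crm-db")]
for group in protection_groups(edges):
    print(sorted(group))
```

Note that shared dependencies like AD and DNS would pull everything into one group, which is precisely why they need their own availability plan at the recovery site.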

Tip: for anyone going for VCAP-DCD, this is an example of how to create an entity-relationship diagram. Pretty simple, isn’t it?

ADM Pic 3

The Tools

VMware offers two tools to help you:

  • vCenter Infrastructure Navigator (VIN)
  • Application Dependency Planner (Partner-only tool)

It’s important to say that VIN, or vRealize Infrastructure Navigator, only works with virtualised workloads on vSphere. (I can’t figure out if it’s still part of vCenter or not.)

ADP is a deep, complex product that can capture workloads via SPAN-port traffic-sniffing in non-VMware environments. So you could get a partner to use it prior to a large P2V, but the design considerations are not trivial – it’s an entire exercise in itself. I’m not suggesting you don’t need to spend time on VIN, but it’s certainly simpler to deploy. The output still needs to be interpreted and taken into account, though.

VIN is easily deployed from an OVA template after downloading it from myvmware.com.

Screen Shot 2015-02-19 at 11.41.46

You deploy it using vCenter, and it creates a plugin within the Web Client only. Don’t even think about using this with the VI Client.

VIN icon

Before you can use it you must assign it a license. So you need the correct privileges to do this:


Once you’ve done that, you need to enable Virtual Machine monitoring. Just click the vCenter Infrastructure Navigator icon on the Home page, or get to it via the Explorer view. In the view below I already have it turned on.

vin turn on vm access

Once you’ve done that, there are two places where you can see the results. You can go to an individual VM and select Manage; you will then see the following map under the Application Dependencies tab:



Pretty neat, right? And that’s within a few minutes. You can do a lot more, but that’s for another post.

You can also go to the vCenter-specific view of the entire estate, which looks like this and lets you save a CSV version. As a community comrade said yesterday, it would be nice to get a more finely formatted report.

vin vcenter view
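Until a nicer report exists, the CSV export can be post-processed yourself. The column names below are hypothetical (adjust them to match the actual VIN export); the csv-module pattern is the point:

```python
# Post-process a VIN-style CSV export into a quick "most depended-upon
# VMs" summary. Column names and sample data are hypothetical.

import csv
import io
from collections import Counter

sample = """\
source,target,service
vcenter01,sql01,tcp/1433
commserve01,sql01,tcp/1433
web01,app01,tcp/8080
"""

targets = Counter()
for row in csv.DictReader(io.StringIO(sample)):
    targets[row["target"]] += 1

# VMs with the most inbound dependencies first:
for vm, count in targets.most_common():
    print(f"{vm}: {count} dependent(s)")
```

Swap `io.StringIO(sample)` for `open("vin-export.csv")` against a real export.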

That’s about it until the next post. Thanks to Sunny Dua, who has the best set of vCOps/vROps posts ever over at http://vxpresss.blogspot.ie/ and has provided the usual level of help and feedback for questions I had related to vCOps/vROps. I was aware of it before, but Sunny has confirmed that to plug VIN into vROps and monitor a full dependent stack of applications you can use this adapter:

Screen Shot 2015-02-19 at 12.13.47

That’s for next time…..


Hitachi CB2500 Blade Chassis – Power & Flexibility for SAP HANA

Disclaimer: I work for Hitachi Data Systems. However this post is not officially sanctioned by or speaks on behalf of my company. These thoughts are my own based on my experience. Use at your own discretion.

Did you know that the world’s largest SAP HANA implementation runs on Hitachi servers and was installed by HDS? It’s true, and the whole area of in-memory database systems is growing rapidly.

Before I begin I have to give a shoutout to my colleague Sander who works in my team as an SAP SME. I’ve learned a lot about HANA from him – otherwise I would know just the basics. He doesn’t blog so I thought I’d put some info out there.

One of the biggest areas of focus for HDS is SAP HANA, and the partnership with SAP is of strategic importance, just as it is with VMware. And it’s true to say this is an area where Hitachi has technical leadership.

Just as I’m writing this, another massive deal has been signed based on Hitachi (SMP) blades and our Global Active Device (GAD) technology. You can read more about GAD here.

For any customers looking to deploy HANA, I suggest you contact HDS, as in this area HDS has many advantages that are black and white. There are many design criteria to consider when running HANA to guarantee performance, and HDS qualified limits are higher than competitors’ from both a compute and a storage perspective. In some cases our competitors don’t have a competing compute offering, which makes it even more compelling.

I should mention that at the moment the ability to use vSphere for HANA is somewhat limited, mainly due to SAP restrictions on what is and is not supported. This will undoubtedly change, especially thanks to vSphere 6 and the increased VM and host memory limits of 4TB and 12TB respectively.

What is HANA?

In case you don’t know, SAP HANA is SAP’s relational database that runs in memory. It’s a compelling technology for a number of reasons, mostly speed of query, massive parallelism and a smaller footprint. The figure of queries running 1,800 times faster is widely mentioned.

It stores data in columns, so it is called a columnar database. This has the effect of massively reducing the required storage capacity, by reducing duplication of stored data.


HANA has some particularly important pre-requisites for performance both in terms of memory and back-end storage, and very careful qualification is required.

In the Hitachi UCP stack, the CB500 blades are currently supported in our UCP Director, which is the fully orchestrated converged platform. This is a 6U, 8-blade chassis with all blades half-width in form. But that’s not really the subject of this post – here I want to talk about the CB2500 chassis and how you might use it.

Let me refer you to my colleague Nick Winkworth’s blog announcing the release of the Hitachi CB2500 blades here for a little more info on use cases etc.

Here’s a picture of the chassis:

Screen Shot 2015-02-06 at 11.24.16

How can you use the CB2500 chassis?

The CB2500 chassis is 12U in height which might seem quite large until you understand what you can do with it.

On the chassis, you can have:

  • Up to 14 half-width CB520Hv3 blades with a maximum of 768GB DDR4 RAM each. These have Intel Xeon E5-2600 v3 series processors inside.
  • Up to 8 full-width blades that can hold 1.5TB each when configured with 32GB DIMMs, or 3TB each when configured with 64GB DIMMs. These can have Intel Xeon E7-8800 v2 series processors inside.
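The memory figures above are just DIMM arithmetic. The 48-slot count below is inferred from those figures (1.5TB / 32GB = 48) rather than taken from a spec sheet, so treat it as an assumption and check the official Hitachi documentation:

```python
# Quick DIMM arithmetic for the blade configurations above.
# The 48-slot count per full-width blade is an inference, not a spec.

def blade_memory_tb(dimm_count: int, dimm_gb: int) -> float:
    return dimm_count * dimm_gb / 1024

print(blade_memory_tb(48, 32))      # 1.5 TB with 32 GB DIMMs
print(blade_memory_tb(48, 64))      # 3.0 TB with 64 GB DIMMs
print(8 * blade_memory_tb(48, 64))  # 24.0 TB across 8 blades per chassis
```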

Data Locality and SMP

When you start using in-memory databases and other in-memory constructs this is where SMP and LPARs can become important.

  • SMP = A single logical server instance spanning multiple smaller servers
  • LPAR = A logical hardware partition of a server on which a single OS can run

There is an obsession with data living close to the CPU these days, and some of it is just marketing. I say this because if you use a suitably sized Fibre Channel array, the latency can be so negligible as to be irrelevant. For typical business-as-usual use cases it’s usually not a consideration anyway.

However, for applications like HANA it’s of particular importance, and this is where scale-out systems can trip up. How do you create a single 6 or 12TB scale-up HANA instance on a hyper-converged architecture and protect it with HA?

The answer is you can’t right now.

Subdividing SMP

With SMP technology, when you create a larger SMP instance you can still subdivide it into LPARs to maintain further flexibility.

Take 8 full-width dual-socket blades, for example, and subdivide them into:

  • 4 x 4-socket blades
  • 2 x 8-socket blades

So if you plump for larger blades you can create logical blades from them. Furthermore, in a single SMP instance comprising 4 CB520X blades you can have 192 DIMMs of 64GB in size, totalling 6TB of RAM.

You can protect this inside the same chassis with Hitachi N+M failover.

Multiple chassis can be connected together to create larger instances. We currently have a customer running a 12TB+ in-memory database instance spanning multiple chassis.


Running in memory requires us to go beyond the limits of one box if we want to address the amounts of memory needed for the largest use cases, where right now vSphere cannot meet the requirements. This is not entirely down to vSphere limits, but more to what SAP does and doesn’t support.

This is where the ability to link multiple blades together using Hitachi SMP is a powerful tool in the next generation of systems which will use memory as the primary storage tier.


VVOL Demo by HDS: Worth a Watch (but let’s be careful out there)

Disclaimer: I work for Hitachi Data Systems. However this post is not officially sanctioned by or speaks on behalf of my company. These thoughts are my own based on my experience. Use at your own discretion.

Pretty much the entire world knows that in the last week or so VMware announced vSphere 6.0.

I haven’t written a post on it yet but will do soon.

Suffice it to say VMware has really focused and invested huge engineering resources to achieve this. For my money vSphere 6 is the best ever release.

As someone said … the hypervisor is not a commodity, and when you see the features and improvements that have come with this release, I think that rings true.

So kudos to VMware engineering and Product Management!

Virtual Volumes

I’ve had a few questions lately from colleagues asking me how this landmark feature works in detail.

I think the following demo by my countryman Paul Morrissey, HDS global Product Manager for VMware, should really help aid learning.

Paul does a great walkthrough of the setup on the storage side and configuration through the vCenter Web Client, as well as provisioning to VMs.

It’s a nice demo – not too long but covers the typical questions you might ask.

HDS support for VVOL

It is expected that HDS will announce support initially with the Hitachi NAS (HNAS) product in March, at GA time for vSphere 6.0, followed swiftly by support for all HDS block storage within 2-3 months.

With the initially supported HNAS, HDS supports:

  • 100 million VM snapshots
  • 400,000 VVols (with architectural support for up to 10 million VVols)

A Word of Caution

Don’t forget you heard it here first!

There has been no commentary or proper consideration, in my view, of the potentially negative impact of VVols on the datacenter, both in terms of operational processes and in matching business requirements to the correct SLA.

Of course, VVols provide an amazing level of granularity of control over workload placement and alignment. This is to be welcomed.

However, the onus is really on IT SIs and vendors to help customers understand the ACTUAL requirements of their applications. To get the best out of VVols, business and IT need to work more closely together to ensure the correct capabilities are coupled with the right use cases.

My belief is that this new technology needs to be properly managed and given the “VCDX treatment” – by which I mean the SLA matches the application’s needs. Otherwise we will end up in a situation similar to the early days of virtualisation, with 8-vCPU VMs all over the virtual datacenter.

This has been one of the downsides of virtualisation: consumers requesting double what they actually need because of the so-called virtualisation tax, which is now pretty much negligible and irrelevant compared with the upsides. This kind of thinking can (ironically) have a negative impact on performance which can be notoriously difficult to identify. I think were it not for the robustness of the vSphere hypervisor we would not have gotten away with it.

The request for VVols could go like this:

  • I need a VM
  • What type of storage (VVOL) do you need?
  • What have you got?
  • I’ve got Platinum, Gold and Silver
  • What does Platinum have?
  • Everything
  • I’ll take 10 (John in Finance said to ask for more than you need – then if IT give you Silver that will be fine)

I admit that this is a slightly facetious scenario, but without internal showback/chargeback the danger is that the right SLA will not be matched to the right use case or application requirement, and we will end up with an overprovisioned datacenter that, once deployed, cannot be unpicked.

Paul covered in his demo how cost can be included. We need to accept human nature and understand that data is precious: if someone thinks they can have a second copy of their data in a separate data center, who wouldn’t want that?

It is also our default human nature to say yes; humans really don’t like to say no.

This has long been a concern of mine with VVols so we need to perform proper due diligence to ensure this technology does not get misused.

More on this next week.

vSphere Storage Interoperability Series Part 2: Storage I/O Control (SIOC) 1 of 2

Disclaimer: I work for Hitachi Data Systems. However this post is not officially sanctioned by or speaks on behalf of my company. These thoughts are my own based on my experience. Use at your own discretion.

This is the second part of my storage interoperability series designed to bring it all together.

Here are the other parts:

Part 1: vSphere Storage Interoperability: Introduction

This series started from a desire to be crystal clear about what these settings are and when you should and shouldn’t use them. Many vSphere experts may be aware of these settings, but I’m hoping to bring them to the masses in a single set for easy access. All feedback gratefully accepted!

In this post I will cover some topics around Storage I/O Control. This is the first post related to SIOC; I will have at least one more, and possibly two, depending on how easy each is to digest.

SIOC Overview

Storage I/O Control is an I/O queue-throttling mechanism enabled at the datastore level in vSphere. It takes action in real time when I/O congestion occurs on any datastore for which it is enabled.

When I say “takes action”, I mean it reduces the queue depth – the pipeline of I/Os queued up for a datastore – in real time, leading to an immediate effect. It’s like driving with the handbrake on or in a lower gear.

It is important to note that device latency and device queuing relate to the individual queue associated with the datastore itself, not to the HBA or VMkernel queues. The term “device” in vSphere always refers to a datastore when used in connection with storage.

You may have seen this image before, which shows the different queues at play in vSphere – the work the kernel is doing is quite amazing really:

Screen Shot 2014-12-11 at 10.39.08

SIOC does not take any action during normal operational conditions.

It only intervenes when one of two thresholds set by the administrator is breached:

  • An explicit congestion latency threshold is breached for a datastore. This is an observed latency, or response time, in milliseconds (ms).
  • A percentage of peak performance is breached for a datastore. (The peak performance of the datastore is calculated automatically by vSphere.)
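The two trigger conditions can be expressed as a simple either/or predicate. This is a sketch of the logic only, not VMware's implementation, and the 90% default below is an assumption you should verify against your vSphere version:

```python
# Sketch of SIOC's trigger logic: intervene when EITHER the latency
# threshold OR a percentage of the datastore's peak throughput is breached.
# The 90% default is an assumption for illustration.

def is_congested(observed_latency_ms: float,
                 latency_threshold_ms: float,
                 observed_throughput: float,
                 peak_throughput: float,
                 peak_pct_threshold: float = 90.0) -> bool:
    latency_breached = observed_latency_ms > latency_threshold_ms
    peak_breached = (observed_throughput >
                     peak_throughput * peak_pct_threshold / 100.0)
    return latency_breached or peak_breached

print(is_congested(35.0, 30.0, 5000, 10000))  # True  (latency breached)
print(is_congested(10.0, 30.0, 9500, 10000))  # True  (past 90% of peak)
print(is_congested(10.0, 30.0, 5000, 10000))  # False
```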

When you enable Storage I/O Control on a datastore, ESXi begins to monitor the device latency observed by connected hosts via a file called IORMSTATS.SF, stored directly on the datastore. All connected hosts can read from and write to this file. When device latency exceeds the defined threshold, the datastore is considered congested, and each virtual machine accessing that datastore is allocated I/O resources in proportion to its shares.

If all virtual machines accessing the datastore have the same shares (the default is Normal for all), each virtual machine is allowed equal access, regardless of its size or whether the VMs are running on the same host. So, by default, critical virtual machines have exactly the same priority, or shares, as the most unimportant virtual machines.


SIOC is not just a cluster-wide setting; it applies to any host connected to the datastore. Connected hosts write to the IORMSTATS.SF file held on the datastore, regardless of cluster membership. So you could have standalone ESXi hosts managed by vCenter sharing datastores with a cluster. Not a great idea IMHO, but it’s possible.


In an ideal world, do not share datastores between clusters. It is possible, but it makes performance (or other) problems much more difficult to isolate. This might not be possible when using VSAN or similar SDS/hyper-converged systems.

In the past I recommended using “swing LUNs” to move datastore files between clusters. This is less of an issue now, given the flexibility supported within vCenter for cross-vCenter vMotion etc.


Unless a storage vendor or VMware has issued guidance to the contrary, enable Storage I/O Control on your datastores. It is still a good idea to check before making wholesale changes affecting hundreds or thousands of virtual machines.

Without SIOC, hosts have equal device queue depths, and a single VM could max out a datastore even though that datastore is being accessed by multiple VMs on other hosts. When congestion occurs, this situation is maintained, and it is possible that more critical workloads will be throttled to the benefit of less important ones. It is always a good idea to enable SIOC regardless of the back-end disk configuration, with a threshold high enough not to inhibit performance.

SIOC characteristics

Storage I/O Control can very simply apply powerful controls against runaway workloads and prevent the so-called “noisy neighbour” syndrome within a vSphere environment.

When configuring SIOC the following should be taken into account:

  • The feature is enabled once per datastore, and the setting is inherited across all hosts. Note the previous point that this can span multiple clusters or any hosts that share the datastore.
  • It takes action when the target latency has been exceeded OR the percentage of the datastore’s performance capability has been reached.
    • Both of these settings are customizable for each datastore.
  • Enabling virtual machine shares also allows you to set a ceiling of maximum IOPS per VM if you choose, as well as the relative virtual machine shares.
    • This allows relative priority and a level of Quality of Service (QoS) to be defined and enforced during periods of contention.
  • With SIOC disabled, all hosts accessing a datastore get an equal portion of that datastore’s available queue. This means it is possible for a single virtual machine to become a “noisy neighbour” and drive more than its fair share of the available IOPS. In this scenario, with SIOC disabled, no intervention will occur to correct the situation.

By default, all virtual machine shares are set to Normal (1000) with unlimited IOPS. If you adjust this setting to prioritize performance, the following values are available:

  • Low: 500
  • Normal: 1000
  • High: 2000
  • Custom: Set a custom value for the VM

So the ratio of Low:Normal:High is 1:2:4. This can be adjusted if required or custom values can be set.
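Here is how that 1:2:4 ratio plays out during contention. This is a sketch of proportional-share division, not VMware's scheduler; integer division means a slot or two of remainder can go unassigned:

```python
# Divide a congested datastore's queue slots among VMs in proportion
# to their shares (illustrative sketch, not VMware's scheduler).

SHARES = {"low": 500, "normal": 1000, "high": 2000}

def allocate_queue_slots(vm_shares, total_slots):
    """Map each VM to its proportional slice of the queue."""
    total = sum(vm_shares.values())
    return {vm: total_slots * s // total for vm, s in vm_shares.items()}

vms = {"batch-vm": SHARES["low"],
       "web-vm": SHARES["normal"],
       "sql-vm": SHARES["high"]}
print(allocate_queue_slots(vms, total_slots=64))
# {'batch-vm': 9, 'web-vm': 18, 'sql-vm': 36} -- the 1:2:4 ratio
```

Note the allocation only matters under congestion; at all other times every VM can use the full queue.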

Storage I/O requirements and limitations

The following conditions must be adhered to when considering SIOC.

Datastores that are enabled for SIOC must be managed by a single vCenter Server system. As mentioned earlier, vCenter is not part of the data plane and is only used for management and configuration of the service.

  • Storage I/O Control is supported on Fibre Channel-, iSCSI- and NFS-connected storage.
  • Raw Device Mappings (RDMs) are not supported with SIOC.
  • Storage I/O Control does not support datastores with multiple extents. (Extents are equivalent to traditional elements of a concatenated volume that are not striped.)


It’s best not to allow vSphere clusters or hosts managed by multiple vCenter instances concurrent access to a datastore backed by a common disk set. Always ensure back-end spindles are dedicated to clusters or individual hosts managed by the same vCenter instance, if at all possible.


Many people wonder whether you can use tiering with SIOC. I will cover this in the next post, but it’s safe to use, with some caveats – and this is one you should check with your vendor to ensure they have qualified it!

Set a threshold that is “impossibly” high so that it is not “in play” in the normal operational state. For an SSD-backed tiered pool, for example, make sure the threshold is at least 10-20ms; SIOC should then never intervene unless serious problems occur.
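One hedged way to codify "set the threshold comfortably above normal latency for the backing media". Only the SSD figure comes from the guidance above; the SAS and NL-SAS values are invented placeholders, so confirm all of them with your storage vendor:

```python
# Suggested SIOC latency thresholds per backing media.
# Only the SSD figure reflects the 10-20 ms guidance above;
# the others are placeholders -- confirm with your vendor.

SUGGESTED_THRESHOLD_MS = {
    "ssd": 15,
    "sas": 25,     # placeholder
    "nl-sas": 35,  # placeholder
}

def sioc_threshold(tier: str) -> int:
    return SUGGESTED_THRESHOLD_MS[tier]

print(sioc_threshold("ssd"))  # 15
```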

It should make sense now that you should not allow an external physical host to use the same backend disk pool (HDP or HDT) as a vSphere host or cluster on which SIOC is enabled. This is just bad design and an accident waiting to happen.

NOTE: Don’t forget the impact of VADP backup proxy servers accessing datastores/LUNs directly for SAN-based backup, and the IOPS impact this could have on your back-end storage.

Until the next time …….

vSphere Storage Interoperability: Part 1 Introduction


Disclaimer: I work for Hitachi Data Systems. However, this post is not officially sanctioned by my company and does not speak on its behalf. These thoughts are my own, based on my experience. Use at your own discretion.

VMware vSphere has many features within its storage stack that enhance the operation of a virtual datacenter. When used correctly these can lead to optimal performance for tenants of the virtual environment. When used incorrectly they can conflict with array-based technology such as dynamic tiering, dynamic (thin) pools and other features. This can obviously have a detrimental effect on performance and increase the operational overhead of managing an environment.

It is quite common to observe difficulty among server administrators as well as storage consultants in understanding the VMware technology stack and how it interacts with storage subsystems in servers and storage arrays. It is technically complex and the behavior of certain features changes frequently, as newer versions of vSphere and vCenter are released.


Don’t ever just enable a feature like Storage I/O Control, Storage DRS or any other feature without thoroughly understanding what it does. Always abide by the maxim “Just because you can doesn’t necessarily mean you should.” Be conservative when introducing features to your environment and ensure you understand the end-to-end impact of any such change. In my experience some vSphere engineers haven’t a clue how these features really work, so don’t always take what they say at face value. Always do your homework.


This is the first in a series of posts to introduce the reader to some of the features that must be considered when designing and managing VMware vSphere solutions in conjunction with Hitachi or other storage systems. Many of the design and operational considerations apply across different vendors’ technologies and can be considered generic recommendations unless stated otherwise.

To understand how vSphere storage technology works, I strongly recommend all vSphere or storage architects read the clustering deep-dive book by Duncan Epping and Frank Denneman. It is the definitive source for a deep dive into how vSphere works under the covers. Also check out Cormac Hogan’s posts, which are the source of much clarification on these matters.




Why write this series ?

Whenever questions came up about whether I should or shouldn’t use certain features, I always ended up slightly confused. Thanks to Duncan, Cormac Hogan, Frank and others on Twitter, who are always available to answer these questions.

This is an attempt to pull together black and white recommendations regarding whether you should use a certain feature or not in conjunction with storage features, and bring this all into a single series.


The first series focuses on Storage I/O Control, Adaptive Queuing, Storage DRS (Storage Distributed Resource Scheduler) and HDLM/multi-pathing in VMware environments, and how these features interoperate with storage. I also plan to cover thick vs thin provisioning (covered in a previous post), VAAI, VASA, and HBA queue depth and queuing considerations in general. Basically anything that seems relevant.

Should I use or enforce limits within vSphere?

Virtualization has relied on oversubscription of CPU, memory, network and storage to provide better utilization of hardware resources. It was common to see average peak CPU and memory utilization of less than 5-10% across a server estate.

While vSphere uses oversubscription as a key resource scheduling strategy, this is designed to take advantage of the idle cycles available. The intention should always be to monitor an environment and ensure an out-of-resources situation does not occur. An administrator could over-subscribe resources on a server leading to contention and degradation in performance. This is the danger of not adopting a conservative approach to design of a vSphere cluster. Many different types of limits can be applied to ensure that this situation does not arise.

In some environments close to 100% server virtualization has been achieved, so gambling with a company’s full workload running on one or more clusters can impact all of its line-of-business applications. That’s why risk mitigation is, in my opinion, always the most critical vSphere design strategy.


If at all possible, please be conservative with vSphere design. If you’re putting all your eggs in one basket, use common sense. Don’t oversubscribe your infrastructure to death. Always plan for disaster and assume it will happen, because it probably will. And don’t just use tactics such as virtual-CPU-to-physical-CPU consolidation ratios as your design strategy. If a customer doesn’t want to pay, explain that the cost of bad design is business risk, which has a serious $$$ impact.

More on Reservations

Reservations not only take active resources on a server away from other tenants, but also require other servers in an HA cluster to hold back resources so the reservation can be respected in the event of a failure. This vSphere feature is called High Availability (HA) Admission Control, and it ensures a cluster always maintains spare resources in preparation for a host failure.
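To make the “hold back resources” idea concrete, here is a crude sketch of an admission-control-style check. It is deliberately simplified (the real slot-based and percentage-based policies are more involved) and every name below is invented for the example:

```python
# Simplified HA admission-control sketch: set aside the capacity of the
# largest N hosts for failover, then check whether remaining capacity
# still covers all VM reservations plus the new VM's reservation.

def can_power_on(host_capacities_mhz: list[int],
                 vm_reservations_mhz: list[int],
                 new_vm_reservation_mhz: int,
                 host_failures_to_tolerate: int = 1) -> bool:
    """Admit the new VM only if reservations fit after holding back
    enough capacity to survive the configured number of host failures."""
    reserved_for_failover = sum(
        sorted(host_capacities_mhz, reverse=True)[:host_failures_to_tolerate])
    usable = sum(host_capacities_mhz) - reserved_for_failover
    return sum(vm_reservations_mhz) + new_vm_reservation_mhz <= usable

# Three 10 GHz hosts, tolerating one failure: only ~20 GHz is usable
print(can_power_on([10000, 10000, 10000], [5000, 5000], 5000))   # fits
print(can_power_on([10000, 10000, 10000], [5000, 5000], 12000))  # rejected
```

The point to take away is that every reservation you add shrinks the headroom the whole cluster can admit, not just the host the VM runs on.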


Implementing limits is a double-edged sword in vSphere. Do not introduce Reservations, Resource Pools or any other limits unless absolutely necessary. These decisions should be driven by specific business requirements and informed by monitoring existing performance to achieve the best possible outcome.

In certain cases, like Microsoft Exchange, it makes complete sense to use reservations, as Exchange is a CPU-sensitive application that should never be oversubscribed! But that is an application/business requirement driving the decision, and it is a VMware/Microsoft recommendation.


The following text has been taken from the vSphere Resource Management guide for vSphere 5.5 and provides some important guidance regarding enforcing limits.

Limit specifies an upper bound for CPU, memory, or storage I/O resources that can be allocated to a virtual machine. A server can allocate more than the reservation to a virtual machine, but never allocates more than the limit, even if there are unused resources on the system. The limit is expressed in concrete units (megahertz, megabytes, or I/O operations per second). CPU, memory, and storage I/O resource limits default to unlimited. When the memory limit is unlimited, the amount of memory configured for the virtual machine when it was created becomes its effective limit.

 In most cases, it is not necessary to specify a limit. There are benefits and drawbacks:

  • Benefits — Assigning a limit is useful if you start with a small number of virtual machines and want to manage user expectations. Performance deteriorates as you add more virtual machines. You can simulate having fewer resources available by specifying a limit.
  • Drawbacks — You might waste idle resources if you specify a limit. The system does not allow virtual machines to use more resources than the limit, even when the system is underutilized and idle resources are available. Specify the limit only if you have good reasons for doing so.

In the next part I cover Storage I/O Control … coming soon.

Opinion Piece: The truth about Converged Systems

Disclaimer: I work for Hitachi Data Systems. This post is not authorised or approved by my company and is based on my personal opinions. I hope you read on.

A couple of weeks back Pure Storage announced their converged stack, FlashStack. So now Hitachi has UCP Director, VCE has Vblock, NetApp has FlexPod, and Pure is in the club.

A Personal Experience of a converged deployment

Last year I worked on two converged system implementations as an independent consultant. This post is written from that perspective and is based on frustrations about over-engineered solutions for the customers in question. I was a sub-contractor, only brought in to build vSphere environments once the solutions had been scoped/sold/delivered.

Both instances involved CCIEs working for the system integrator on the network/fabric/compute design and build. Core network management was the customer’s responsibility. So this was Layer-2 design only, yet it still required that level of expertise on the networking side.

In one of these projects, there were 100+ days of PS time for the storage system for snapshot integration, and just 10 days to design and deploy SRM.

I had a hunch that the customer didn’t have the skillsets to manage this environment and would ultimately have to outsource the management, which is what happened. This was for an environment of 50-100 VMs (I know!).


When I started this post I was going to talk about how Hitachi converged the management stack using UCP Director inside vCenter and Hyper-V. That sounds like FUD, which I don’t want to get involved in, so I decided instead to raise the question of whether a converged system is better, and what “converged” actually means.

Question: What is a converged system?

Answer: It is a pre-qualified combination of hardware, with a reference architecture and a single point of support (in some cases).


Question: So does that make manageability easier ?

Answer: In many cases you still manage the hypervisor and server image deployment, as well as the storage and network, separately. So you still need to provision LUNs, zone FC switches, drop images on blades, etc. And each of these activities requires a separate element management system (to clarify: this is not the case with HDS).

Question: So how is this better ?

Answer: If you look at some of the features of the blade systems in question, they are definitely an improvement, e.g. the ability of blades to inherit a “persona”. The approach of oversubscribing normally under-utilised uplinks for storage and network traffic is also a good idea. However, you have to ask at what scale customers actually require this functionality: how many customers deploy new blades every day of the week? And with modern hardware, server failures are relatively uncommon, so will they really benefit from the more modern architecture?

Question: So wouldn’t it maybe be simpler to just use rack servers ?

Answer: It depends on your requirements, what you’re comfortable with, and in most cases whether the vendor you are most comfortable with has a solution that suits your needs. Also, it might make sense to spread a fault domain across multiple racks, which can be a challenge with a single blade chassis, as was the case for the customers I worked with.

Question: So when should you deploy a converged platform?

Answer: A good use case could be consolidating from many legacy server models onto a single infrastructure reference architecture/combination, which can reduce support overhead via standardisation.

I don’t know about other vendors, but Hitachi just released a much smaller converged solution, the 4000E, which comes in a form factor of 2-16 blades. So at the very least make sure you have a system that is not too big for your needs, or over-engineered.


A bridge too far

FCoE has not taken off, has it ?

I think this is one of the great failures of the whole converged story, i.e. the ability to take advantage of the undoubted benefits of converging storage and Ethernet traffic onto a single medium. For the most part this has not happened. I think it’s widely accepted that Cisco was championing FCoE as part of its Data Center 3.0 initiative.

On FCoE, my observation is that most customers use native FC on the array and only “convert” to Ethernet inside the converged stack, thereby removing one of the main value propositions of the whole concept, i.e. eliminating a separate Fibre Channel fabric and reducing HBA, cabling and FC switch costs. I’d welcome feedback on that point.

Bottom Line

Only go down this route if your requirements and scale justify it, and if you have the skills to understand the technology underpinning such a solution.

As has been said many times: KISS, Keep It Simple, Stupid. In my humble opinion, converged is sometimes not the simplest route to take. That said, some hyper-converged solutions such as VMware EVO:RAIL have simplified management and are a great solution for many customers.

What do you think ?

Comment or ping me on Twitter @paulpmeehan

Hitachi HDLM Multipathing for vSphere Latest Docs


HDLM stands for Hitachi Dynamic Link Manager, our multi-pathing software for regular OSes such as Windows and Linux, but also for vSphere. You install a VIB and use it to optimise storage performance in Hitachi storage environments.

I came across these documents yesterday as part of something I was doing. I thought I’d share them here as sometimes people tell me it can be hard to find stuff related to HDS.

For those customers and partners using HDLM, attached you’ll find the latest documentation, from October 2014, for HDLM for vSphere. First, the User Guide. This is Revision 9.



Then the Release Notes from October 2014. This is Revision 10.



Regarding the default PSPs for different array types:

  • Active-Active: Fixed (although Round Robin can be used in many cases)
  • Active-Passive: MRU (although some arrays need to use Fixed)

For more info start at this KB article:


If you want more balanced performance and awareness of path failures/failovers on Hitachi arrays, you can use HDLM. Please consult the user guide, but to summarise, the main benefits are:

  • Load balancing (Not based on thresholds such as hitting 1000 IOPS)
  • Path failure detection and enhanced path failover/failback
  • Enhanced path selection policies
    • Extended Round Robin
    • Extended Least I/Os
    • Extended Least Blocks

Extended Least I/Os is the default.

Why would you use it: A quick Recap

If you are using Hitachi storage and are entitled to use HDLM, then you should probably use it to get optimal performance and multi-pathing between host and target. I would suggest that for Production at the very least; you can make a call on whether to use it for Dev/Test.

The main benefit is a more optimal, balanced use of paths. I’d equate it to the way DRS smooths out performance imbalances across a cluster: HDLM does the same thing with the I/O directed to a datastore/LUN, in real time, based on its enhanced path selection policies.

If you don’t use HDLM or EMC PowerPath, the native ESXi multipathing (NMP) will in most cases be more than sufficient for your needs. There is, however, a common misunderstanding about the native NMP Round Robin (RR) Path Selection Policy (PSP).

To clarify:

  • For a given datastore only a single path will be used at a single point in time.
    • The important thing to note is that this will be the case until a given amount of data is transferred. Consider this the threshold.
  • This is a bit like using LACP on a vSphere host for NFS. You cannot split an Ethernet frame across two paths and reassemble it at the other end to transmit a single frame across multiple uplinks simultaneously. The same applies to storage, doesn’t it?
  • The switching threshold can be based either on IOPS or on bytes. The default is 1000 IOPS.
    • I suggest leaving this alone. Read on.
  • When the threshold set for the path is reached the host will switch to the next path.
  • The host will not use the previous path for the particular datastore at that point in time until it is selected again.

From experience it is entirely acceptable that a particular datastore is accessed using one path at a time.

I have seen extremely high I/O driven from a single VM with a very large block size, where even the infamous IOPS=1 setting was not enough to get the I/O to the required level. Then you need to look at queue depth and the number of outstanding I/Os (DSNRO).

Once you start playing with these parameters the impact can be system/cluster-wide which is why it’s probably not a good idea.

Queuing in vSphere

Also consider the different queues involved in the I/O path.


Consider the job the hypervisor is doing for a second: it is co-ordinating I/O from multiple VMs, living on different datastores, across multiple hosts. Think of the number of moving parts here.

Imagine a cluster with:

  • 20 hosts
  • 2 HBAs per host
  • 180 datastores
    • That’s a total of 360 active paths each host is managing, with multiple queues on top and underneath.
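The arithmetic, spelled out (assuming one path per HBA per datastore, which is the simplest case):

```python
# Back-of-the-envelope path count for the cluster described above.
hosts = 20
hbas_per_host = 2
datastores = 180

paths_per_host = datastores * hbas_per_host  # each datastore visible via each HBA
cluster_paths = paths_per_host * hosts       # total active paths cluster-wide

print(paths_per_host)   # 360 paths managed by every single host
print(cluster_paths)    # 7200 paths across the cluster
```

Multi-pathing software and any queue-depth tuning you do operate across all of these at once, which is exactly why casual changes can have cluster-wide impact.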

That’s not trivial so before you let the handbrake off, take the time to consider the decision you’re making to ensure it will not have an unforeseen impact somewhere down the line.








Thoughts for Today, Ideas for Tomorrow