Come say hello at the Frankfurt VMUG!


On Wednesday this week I will be travelling to the Frankfurt VMware User Group (VMUG) meeting. The event takes place at the Marriott in downtown Frankfurt.

I’m filling in for one of my colleagues on the desk at the conference and have taken the opportunity to come visit Germany.

I am bringing my laptop, so anyone who wants to come along and hear why I believe Hitachi converged platforms are the best out there can catch a glimpse of the Unified Compute Platform (UCP) Director software running inside vCenter, orchestrating servers, networking and storage. No parlour tricks!

UCP Director has also been designed from day one to offer feature parity across the Web Client, CLI and API, so you can consume it however suits you best.

Here is the current range:

[Image: the current UCP range]

There are a lot of options with simple naming and positioning for each tier.

  • Starting with the UCP1000, aka EVO:RAIL, an awesome VMware hyper-converged solution built on Hitachi servers.
  • Hitachi AND Cisco servers.
  • Brocade AND Cisco networking.
  • vSphere/vCenter AND HyperV/SystemCenter.
  • Many different storage systems are supported.

If you come for a chat I will explain why I am 100% sure UCP Director is the perfect endpoint for the vRealize Automation suite (or Windows Azure Pack, for that matter).

vSphere 6.0 General Availability

Just this week we announced UCP Director 4, which provides support for vSphere 6. This is big news for many of our customers.

It also brings support for the G200-G600 storage range, which runs the same operating system (Storage Virtualisation Operating System, or SVOS) as the industry-leading G1000, an array that has achieved 2,000,000 IOPS on the industry-standard SPC-1 benchmark. I wish more vendors would submit to the standard benchmarks and show they can walk the walk as well as talk the talk.

You can run an active-active vSphere Metro Storage Cluster on these "mid-range" arrays with the reliability characteristics of the G1000, the only array in the industry to offer a 100% availability guarantee, in writing.

It's important to note that a converged platform like UCP Director gives you much more control over scale than competing hyper-converged offerings. You can tune the knobs for storage, compute and networking independently, something some hyper-converged vendors are only now starting to do, having realised (based on customer demand) that one size doesn't fit all. With UCP we don't need to worry about "compute dense" or "storage dense" nodes; that concept simply doesn't exist.

Without the addition of a much simpler management framework inside vCenter, are converged systems really worth it?

I can even show you some pretty cool stuff such as REST API calls against a fully converged stack from inside Chrome.

  • So how about standing up a multi-node cluster, complete with host profiles, server profiles and an ESXi image, with the first HTTP REST call?
  • Then attach a bunch of LUNs with a second REST call (a rough sketch of both calls follows below).
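To make that concrete, here is a minimal sketch of what those two calls could look like from a script instead of the Chrome REST console. The base URL, endpoint paths, payload fields and profile names are illustrative assumptions, not the documented UCP Director API, so treat it as the shape of the workflow rather than copy-paste code.

```python
# Illustrative only: the base URL, endpoints, fields and profile names below
# are assumptions for this sketch, not the documented UCP Director API.
import requests

UCP = "https://ucp-director.example.local/api"   # hypothetical base URL
AUTH = ("admin", "changeme")                     # use real credentials/tokens

# Call 1: stand up a multi-node cluster with host profiles, server profiles
# and an ESXi image in a single request.
cluster_spec = {
    "name": "prod-cluster-01",
    "nodeCount": 4,
    "serverProfile": "2-socket-256GB",      # assumed server profile name
    "hostProfile": "esxi-gold-profile",     # assumed vSphere host profile name
    "esxiImage": "ESXi-6.0-GA",
}
r = requests.post(f"{UCP}/clusters", json=cluster_spec, auth=AUTH, verify=False)
r.raise_for_status()
cluster_id = r.json()["id"]

# Call 2: attach a bunch of LUNs to the new cluster.
lun_spec = {"clusterId": cluster_id, "luns": [{"sizeGB": 512, "tier": "gold"}] * 8}
requests.post(f"{UCP}/storage/luns", json=lun_spec, auth=AUTH, verify=False).raise_for_status()
```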

Come over and talk to me and I’ll explain what it’s all about :-)

Until then.

SDDC, I am your father, and I am called SDI

[Image: Luke Skywalker and Darth Vader in The Empire Strikes Back]

Listening to one of my favourite albums, by one of my favourite bands, Alt-J, I happened to glance across at the song title "Choice Kingdom". It reminded me of what is going on in the ICT industry today. We are in a world where some might say it's all or nothing. That's the disruptors' job: to disrupt, destroy and create doubt in the minds of buyers so that they question everything they ever knew; to say their product is cheaper, faster, better, and that nothing can ever go wrong with it. Right?

This is not new; the concept was covered in the book Accidental Empires by Robert X. Cringely, describing Silicon Valley in the days of Jobs and others, where he likens the serial entrepreneurs to commandos with a singular purpose: to create as much mayhem as possible in the enemy's camps and ranks. It's all about disruption.

This is described more succinctly in the following blog post: http://blog.codinghorror.com/commandos-infantry-and-police/

This is clearly what we are seeing now with the world of Hyper-Converged-Infrastructure. Of course the architecture is different and in some cases has clear advantages over traditional architectures. And this is great for customers – further choice. But does that mean we throw out the baby with the bath water?

There are certainly parallels with the early days of server virtualisation, where a watershed was reached when a single workload could no longer fully exercise a server's silicon. What is slightly different this time is that one of the main drivers of this evolution (not revolution) is the arrival of flash as a viable acceleration tier used in conjunction with software.

It's Standard. Wait, No It's Not

One or two years ago, a standardised form factor was touted as one of the key value propositions of hyper-converged solutions, ensuring a simple, scalable deployment model. Already we see divergence from this model towards "specialist" nodes designed for dense compute, dense storage, or other special requirements.

So even in this new paradigm, catering for both "pets" and "cattle" workloads is still an important consideration.

[Image: pets vs. cattle]

My colleague Paula Phipps wrote an excellent post over at community.hds.com where she provided a gentle reminder that webscale as a concept does not always map to the application inventory of many customers today.

https://community.hds.com/people/pphipps/blog/2015/04/16/software-defined-infrastructure-something-is-brewing-at-hitachi-time-to-pour-a-fresh-cup

If a webscale vendor runs only four major enterprise applications, as analysts suggest, yet most enterprises have hundreds, is that a wake-up call that it's time the commandos left the theatre? The beachhead is established and there is a new player in town. That's a good thing. Now it's time for reality to kick back in and peace to prevail.

The Move to SDDC

Our friends at VMware have championed the Software Defined Data Center, or SDDC. It's an acronym that has caught on, and rightly so!

The SDDC concept is clearly about the power of software to enable faster go-to-market and to turn IT from being the poo on the shoe into a driver for growth. You heard that phrase here first, by the way.

At Hitachi we like to think most people would agree that we can use three A’s to sum it up.

[Image: the three A's]

  1. Automation: As VMware PowerCLI gurus have been known to say, if you need to do it more than once, script it (see the sketch after this list). Reduce labour, increase productivity. Simple.
  2. Access: What's the point of all this data, whether it lives in a cloud or on a 4GB memory key, if it isn't available on demand from anywhere?
  3. Abstraction: VMware VVol is a good example: a more complex technical solution delivered to the end user via a set of policies. So it's the consumption of policies, not technology.
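As a tiny illustration of the "script it" mantra, here is a minimal sketch using pyVmomi, the vSphere Python SDK, rather than PowerCLI; the hostname and credentials are placeholders. It reports the vCPU and memory allocation of every powered-on VM, the kind of repetitive check you would never want to do by hand twice.

```python
# Minimal "script it" sketch using pyVmomi instead of PowerCLI.
# Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab convenience; use proper certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    # One pass over the inventory instead of clicking through the Web Client
    for vm in view.view:
        if vm.runtime.powerState == "poweredOn":
            print(f"{vm.name}: {vm.config.hardware.numCPU} vCPU, "
                  f"{vm.config.hardware.memoryMB} MB RAM")
    view.DestroyView()
finally:
    Disconnect(si)
```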

Software Defined Infrastructure

If Luke Skywalker is SDDC, then Darth Vader may just be SDI. And that goodness is also about to make a real appearance.

Most analysts categorise SDDC as a datacenter-focused paradigm, really down in the nuts and bolts of IT and applications. I think that's true.

Its focus isn't so much on vertical industries, smart cities or the like.


From here on you will hear a lot more about Software Defined Infrastructure (SDI), especially from Hitachi, a company focused on a singular vision: IT becoming the enabler for societal change.

We need to take a giant step back and look at the overall requirements of society and how IT can become a driver for change and innovation in Healthcare, Telecoms, Smart Cities and many other areas that will become the next wave of growth in our industry. And this is what Software Defined Infrastructure is all about. Seeing the bigger picture.

In summary, we can already see that a converged approach may require a bespoke or customer-specific solution. This is proof that one size cannot fit all cases, and I expect that as the commandos get bored and look to invade the next territory (maybe after a number of Initial Public Offerings), the helium balloon will begin to lose its burgeoning shape and become a normal balloon again.

Get Ready for HDSConnect

In the meantime you will hear lots from Hitachi about SDI. Keep an ear out next week at #HDSConnect in Las Vegas for some mind-blowing software-defined announcements.

Follow the #HDSConnect hashtag on Twitter for lots more info.

Until then, use the force and keep it real.

HDS SRA and VMware SRM 6.0 Certification Complete

HDS has now completed certification and qualification requirements for VMware Site Recovery Manager (SRM) 6.0 with SRA 2.1.4.

As of now, all Hitachi block storage arrays that were certified with SRM 5.5 are also certified with SRM 6.0.

The storage array models supported include VSP G1000, HUS VM, VSP, HUS, and USP V/VM.

VMware VVol Series: Part 3 Implementation Considerations and Availability

The whole area of VVols requires quite a bit of consideration, and people are wondering how it is implemented, i.e. what it really looks like in practice.

In the latest post in this three-part series, my colleague Paul Morrissey discusses what a deployment looks like, as well as support across HDS products. The post takes the form of FAQs that have arisen both internally and externally.

The post is HERE.

He also talks about the HDS V2I (Virtual Infrastructure Integrator) product, which provides direct snapshot integration into vCenter when using HDS arrays. It's much more scalable than competitive offerings because it runs on HDS tin, which is proven and validated to be more scalable (and more performant) than all competitive offerings in the marketplace. This will shortly be available for block as well as file storage, directly inside vCenter. I'll blog in future to provide more background on that product.

I have previously published a full list of FAQs on the website. You can find that resource HERE.

Hope this helps increase your knowledge level a bit more.

Virtualisation Poll Result 1: CPU and Memory Utilisation

Recently I wrote a post looking for assistance with some questions regarding real-world Virtualisation deployments. It was partly for curiosity and personal reasons and partly for work-related reasons.

I'm wishing now that I had made it more extensive. I plan to run the same poll later in the year and try to make it an annual thing – a kind of barometer, if you like – to get a real view of what's happening in the world of virtualisation, as opposed to analyst reports or other sources of information.

That post was HERE and the polls are now closed.

Note: don’t confuse this with the vBlog voting Eric Siebert is running.

The first question asked what level of CPU utilisation people are seeing in their environments.

The results were as follows:
[Image: poll results for CPU utilisation]
Before the analysis, let's take a look at the next question, which relates to memory consumption:

Here are the results of that question:
[Image: poll results for memory utilisation]

Analysis

There is always a question as to whether workloads are CPU-bound or memory-bound. I've worked at sites where CPU and processing were what it was all about; alternatively, when running a cloud platform it is likely that memory oversubscription becomes the bottleneck.
For me, I'm kind of surprised the disparity was so great: over 87% of sites are running above 40% memory utilisation, versus 38.4% running above 40% CPU utilisation.
So taken as a whole:
  • CPU utilisation is less than 40% in over 60% of responses.
  • Memory utilisation is less than 40% in only about 12% of responses.

So we can clearly see the disparity. There are a couple of other poll questions which, for me, are linked to this one. I will publish posts on those over the coming days, but in the meantime you might ask yourself what effect these results have on CPU and memory reservations, as well as on vCPU-to-pCPU oversubscription ratios.

Clearly we could oversubscribe CPU cores further, but eventually we reach a point where availability considerations and failure domains become real concerns. If we push CPU consolidation ratios too hard we could end up with 100-200 VMs falling over when a single host dies. That is not a good place to be, but it is where we are at.
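To put some rough numbers on that, here is a back-of-the-envelope sketch; the host size, average VM size and ratios are assumptions for illustration, not figures taken from the poll.

```python
# Back-of-the-envelope failure-domain maths. All inputs are illustrative
# assumptions, not figures taken from the poll.
def vms_on_host(physical_cores: int, vcpu_per_pcpu: float, avg_vcpu_per_vm: float) -> int:
    """How many VMs typically land on one host at a given vCPU:pCPU ratio."""
    total_vcpus = physical_cores * vcpu_per_pcpu
    return int(total_vcpus // avg_vcpu_per_vm)

host_cores = 36       # e.g. a 2-socket host with 18 cores per socket
avg_vm_vcpus = 2      # assumed average vCPUs per VM

for ratio in (3, 5, 10, 20):
    at_risk = vms_on_host(host_cores, ratio, avg_vm_vcpus)
    print(f"{ratio}:1 oversubscription -> ~{at_risk} VMs lost if that host dies")

# At 10:1 this is already ~180 VMs behind a single host failure, which is why
# pushing consolidation ratios towards 10-30:1 makes the failure domain so large.
```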

My personal view has always been that if you make vSphere or Hyper-V the "mainframe" on which your entire workload runs, it is penny-wise and pound-foolish to under-provision hosts for the sake of a marginal cost saving, when weighed against the downsides. That will always be my starting position, but it obviously depends on whether it's a production, QA, or test/dev environment.

Is this a problem?

So let's extrapolate slightly. If CPUs become more and more powerful and ever more cores are exposed per physical server, what might the impact be, given the availability considerations of a single host?

For me, this is where application architecture must change. Right now, many applications cannot be deployed in scale-out (cattle) architectures and are not well placed to leverage the further, inevitable increases in CPU core counts and per-core performance. The reason I say that is that most existing applications have no inherent resilience to withstand individual node failures.

This is going to cause a big problem for enterprises: balancing the line-of-business resilience implications of packing too many VMs per host against the quest to drive better consolidation ratios from ever more powerful hardware.

That's just a personal view, but one informed by regularly seeing customers and partners trying to achieve CPU consolidation ratios of 10, 20 or even 30 vCPUs per pCPU, which is a dangerous place to be in my opinion.

VMware VVol Series Part 2: Deeper Dive

In the next post in his series over at community.hds.com my colleague Paul Morrissey, HDS Global Product Manager for Virtualisation, delves a little deeper into the mapping between workloads and backend capabilities.

This post focuses on Storage Policy-Based Management (SPBM) and how it works in practice: the practical side of protocol endpoints, scalability and other questions that might arise.

SPBM is a key concept in VVol and one you need to understand before planning your implementation.

You can find the latest post HERE.

I previously wrote about the need to correctly map application requirements to VVol capabilities. You can find that post HERE.
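As a small illustration of what that mapping exercise looks like, here is a sketch; the application tiers and capability names below are invented for the example and are not HDS or VMware-defined terms.

```python
# Purely illustrative mapping of application requirements to the storage
# capabilities a VVol container would need to advertise. Tier and capability
# names are invented for this sketch, not HDS or VMware terms.
application_requirements = {
    "tier1-oltp-database": {
        "performance": "flash-accelerated",
        "protection": "synchronous-replication",
        "snapshots": "hourly",
        "encryption": "required",
    },
    "test-dev": {
        "performance": "capacity-tier",
        "protection": "none",
        "snapshots": "daily",
        "encryption": "optional",
    },
}

def required_capabilities(app: str) -> set:
    """Flatten an application's requirements into capability strings that can
    be compared against what the array's storage containers expose."""
    return {f"{key}={value}" for key, value in application_requirements[app].items()}

print(sorted(required_capabilities("tier1-oltp-database")))
```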

I am VERY SURPRISED that I have not read a single other post from anyone in the VMware community talking about this. Has it been forgotten, or do people assume that everything will just "be fine"?

My advice is this: before you adopt VVol, perform proper planning and due diligence. Work with your storage vendor, your application teams and your business continuity people to ensure the correct capabilities are created before surfacing them for consumption. Prove the concept with a smaller set of capabilities (maybe just one) that you can test and educate people on, then expand and roll out. Don't forget about business continuity and operational recovery – prove these use cases in your implementation.

I am confident that HDS advises customers to be conservative and to mitigate risk where possible – I know this from speaking to colleagues all over EMEA – and customer relationships and protecting those customers are fundamental to everything they do.

You can also see this mindset at work in the design of the HDS VVol implementation, where ensuring minimum disruption is given priority. From Paul's post:

Question: Would it be possible to migrate a current customer's datastores, with VMs, to VVol storage containers?

Paul: "That was in fact part of the original design specs and core requirements. As part of the firmware upgrade for our arrays, our VASA Providers and VVol implementations will allow for hardware-accelerated non-disruptive migration between source VMFS/NFS datastores and destination VVol storage container, via storage vMotion."

What does this mean? Hardware acceleration inside the arrays speeds up the copying of data, which ultimately reduces exposure to risk during migrations. This is a version 1 attribute, not a bolt-on added later.
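On the vSphere side, such a migration is ultimately just a storage vMotion, which can also be driven programmatically. The sketch below uses pyVmomi with placeholder VM and datastore names; the array-side hardware acceleration Paul describes is transparent to this call.

```python
# Sketch of driving a storage vMotion (relocating a VM to another datastore,
# e.g. a VVol storage container) with pyVmomi. VM and datastore names are
# placeholders; any array-side offload happens transparently below this call.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Look up a managed object (VM, datastore, ...) by its inventory name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")         # placeholder VM
    target = find_by_name(content, vim.Datastore, "vvol-container-01")  # placeholder VVol datastore

    spec = vim.vm.RelocateSpec(datastore=target)   # storage-only relocation
    task = vm.RelocateVM_Task(spec=spec)
    print(f"Storage vMotion submitted, task: {task.info.key}")
finally:
    Disconnect(si)
```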

Further Resources: 

Here are some other useful VVol resources created by HDS:

More to come…

 

 

VVol Series: Part 1 Introduction

I'm delighted to write this note linking to my colleague Paul Morrissey's post on VVol technology, over at the community.hds.com portal, which is a public resource for customers, partners and anyone interested in our technology.

Paul is the HDS Global Product Manager for VMware Solutions, so he is best placed to dive deep into the integration between HDS technology and VMware.

His series builds on some of the excellent VVol resources HDS has released so far.

First: check out this great video/demo by Paul at this location:

Also, please check out the awesome FAQ document on the HDS VVol homepage HERE, covering 63 of the most typical questions that might arise.

Back to this series

Paul’s post is the first in a three-part series which will go deeper into the technology and some of the more practical considerations that might arise.

You can find Part 1 HERE.

HDS support for VVol is here today, initially with HNAS (our high-performance NAS platform), to be followed in the next month or two by our block arrays.

Virtualisation Poll: Please Help by sharing


Hi,

I'm looking to get some information regarding the average utilisation of virtualisation environments across the industry. This is not just for vSphere, but for Hyper-V and KVM too.

There are only a few questions. The answers will help me with design considerations and with guiding good practice based on what is common in the industry. I will publish the results and share them on this site.

If you could help I would appreciate it. I am particularly interested in real workloads in customer environments – not what people do in labs.

Many thanks!