This week at Virtualization Field Day 3 we visited the guys at CloudPhysics to hear all about their product.
By way of introduction, CloudPhysics is a cloud-based Software-as-a-Service (SaaS) solution designed to provide intelligent infrastructure analysis of enterprise vSphere platforms. They are also targeting OpenStack and Hyper-V to provide heterogeneous hypervisor support.
They have a solid pedigree. Three of the guys driving the company forward are ex-VMware: John Blumenthal, Irfan Ahmad and Krishna Raj Raja. Given that background, it's no surprise that their first supported platform is vSphere.
John was Director of Product Management for the ESX Storage Stack.
Irfan has a history of leaving a trail of success behind him, such as vscsiStats (a tool for monitoring and diagnosing virtual machine storage performance), and more importantly Storage DRS and Storage I/O Control, which were very welcome additions to vSphere 4 and 5. Storage DRS in particular is a high-value feature, in my opinion, as you scale your vSphere environment.
Rather than tackle things in the traditional way with onsite management or monitoring infrastructure, CloudPhysics have taken a different approach.
The process looks like this:
Step 1: Deploy a virtual appliance (OVF) to your platform and configure it according to the setup instructions. This appliance is called the Observer.
Step 2: Give it some credentials with read access for your vCenter instance.
Step 3: Make sure your firewall allows the appliance outbound access (more on this later!).
Step 4: Allow a few minutes, then connect to the CloudPhysics Portal.
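Before deploying, it's worth confirming that Step 3 will actually succeed. A minimal sketch of an outbound connectivity check follows; the hostname here is purely hypothetical, so check the setup instructions for the real endpoints and ports the Observer needs to reach.

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoint for illustration -- substitute the hostnames
# from the CloudPhysics setup instructions.
# can_reach("api.cloudphysics.example.com", 443)
```

Run this from the network segment where the appliance will live, since that is where the firewall rules will apply.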
Once you connect to the portal you land in The Deck. From there you can start to configure Cards in your deck.
Cards are a way of targeting specific metrics or states of different objects that you are interested in. Remember that in vSphere everything is an object with a managed object reference ID.
Here are some examples. You will note that community cards can be created and shared between customers which is a very nice feature.
You add Cards to your Deck to activate them and use the embedded analytical capabilities.
Security of Data
When delegates quizzed Irfan Ahmad, CTO, about how customers feel about their site-specific data living in the cloud, he explained the following points:
- In their relatively early engagements with customers, because it is telemetry metadata that is travelling to the cloud, and that data can be fully anonymised, this has not proven to be a sticking point so far. I can’t validate that view as I have not yet discussed it with any of the customers I deal with.
- I spoke to some of the VFD3 delegates about this and some feel this could be a problem. I would just point out that products like VMware Capacity Planner do a similar thing and send metadata (and even actual customer data) to the cloud for analysis and modelling.
- My view is that this might well be an issue for some customers, as it often is with remote access or anything else that needs a firewall port opened. However, those obstacles can be overcome, and this is no different.
- In order to apply analytical intelligence against a larger dataset and achieve more meaningful results, having all the data in one place is of definite value. That makes a lot of sense to me: it’s by combining datasets and performing cross-correlation that patterns are identified and mapped.
- I seem to remember Irfan recently tweeting and publishing on LinkedIn that 37% of vSphere clusters have HA disabled. Trends like this are of great value to many customers and VMware partners who can help them address these issues.
- I also asked whether a channel model fits the CloudPhysics approach. They confirmed that they see this as a good match.
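The anonymisation point above can be sketched in a few lines. This is purely an illustrative assumption of how such a pipeline might work, not CloudPhysics' actual implementation: identifying fields are replaced with salted one-way hashes before the telemetry leaves the site, while the numeric metrics travel as-is.

```python
import hashlib

# Assumption for illustration: a per-customer secret salt, so pseudonyms
# are stable within one site but not linkable across customers.
SALT = "per-customer-secret"

def anonymise(record: dict, fields=("vm_name", "host", "ip")) -> dict:
    """Replace identifying fields with short salted-hash pseudonyms."""
    out = dict(record)
    for f in fields:
        if f in out:
            digest = hashlib.sha256((SALT + str(out[f])).encode()).hexdigest()
            out[f] = digest[:12]  # short, stable pseudonym
    return out

sample = {"vm_name": "finance-db01", "cpu_ready_ms": 184, "ip": "10.0.0.5"}
print(anonymise(sample))  # metrics intact, identifiers hashed
```

The metrics remain useful for cross-customer analytics while the names and addresses never leave the site in the clear.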
For all of us involved in Virtualization Field Day 3, we have heard the term “on-premise” mentioned many times over the last few days. It’s funny the way on-prem is now something we’re asking about, as if it’s a new feature we’re all looking for.
It’s definitely not “On-Prem” (It’s that word again)
In the case of CloudPhysics, the current service is a SaaS offering. There is NO on-prem offering.
It’s a true case of a cloud-based solution where data is stored in, and consumed from, the cloud.
The data is stored in AWS. Irfan confirmed that if data locality is an issue for customers, he expects that down the road geographically disparate instances could be “copied and pasted”. I like that concept when we are talking about large amounts of data. In essence, he’s saying the architecture is designed to facilitate this without nasty, costly upgrades or roadblocks later.
In terms of scale, CloudPhysics are designing the architecture for very large scale; they are already collecting up to a billion data points per day from customer environments.
Two final points that are important
- CloudPhysics do not roll up data as time progresses, unlike vCenter itself. This means that at the end of week 1, 2, 3 and so on, data is not being averaged out, which would reduce the granularity of reports and charts.
- Data is kept forever rather than being thrown away, which is a nice feature.
These two points are pretty important in my view.
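A quick sketch shows why the no-roll-up point matters. The numbers here are invented for illustration: a single 5-minute latency spike is plainly visible in the raw samples, but vanishes once the hour is averaged into one bucket, which is what roll-up does to older data.

```python
# Twelve 5-minute latency samples (ms) covering one hour, with one spike.
# Invented data, for illustration only.
raw = [2, 2, 2, 95, 2, 2, 2, 2, 2, 2, 2, 2]

hourly_avg = sum(raw) / len(raw)

print(max(raw))     # the raw samples still show the 95 ms spike
print(hourly_avg)   # the hourly roll-up smooths it away to under 10 ms
```

With full-granularity history you can still find that spike months later; with rolled-up data it is gone for good.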
Pricing is based on a subscription model, per hypervisor host CPU.
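For a rough feel of how per-CPU subscription pricing scales, here is a trivial worked example; the per-CPU price is entirely made up, not CloudPhysics' actual list price.

```python
# Hypothetical figures for illustration only.
hosts = 10
cpus_per_host = 2        # physical CPU sockets per hypervisor host
price_per_cpu = 49.0     # assumed monthly price per CPU (not a real quote)

monthly_cost = hosts * cpus_per_host * price_per_cpu
print(monthly_cost)  # → 980.0
```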
Advice to the Community
Try it out. You could test it first on a lab system; it should even work with VMware Workstation, as long as your lab can talk to the CloudPhysics cloud. This is definitely a case where suck-it-and-see is the best approach. I suggest you seek out Irfan, Krishna or the other guys on Twitter (@virtualirfan, @esxtopguru) or via their websites if you have further questions.
I hope to complete another post in the next two weeks to further demonstrate Card functionality.