Virtualisation Poll Result 1: CPU and Memory Utilisation

Recently I wrote a post looking for assistance with some questions regarding real-world Virtualisation deployments. It was partly for curiosity and personal reasons and partly for work-related reasons.

I’m wishing now I had expanded it to make it more extensive. I plan to run the same poll later in the year and make this an annual thing – a kind of barometer, if you like – to get a real view of what’s happening in the world of virtualisation, as opposed to relying on analyst reports or other sources.

That post was HERE and the polls are now closed.

Note: don’t confuse this with the vBlog voting Eric Siebert is running.

The first question was:

The results were as follows:
[Screenshot: CPU utilisation poll results]
Before analysis, let’s take a look at the next question which relates to memory consumption:

 Here are the results of that question:
[Screenshot: memory utilisation poll results]


There is always a question over whether workloads are CPU-bound or memory-bound. I’ve worked in sites where CPU and processing was what it was all about. Alternatively, when running a cloud platform it is likely that memory oversubscription will become the bottleneck.
For me, the surprise is how great the disparity was: over 87% of sites are running above 40% memory utilisation, versus only 38.4% running above 40% CPU utilisation.
So taken as a whole:
  • CPU utilisation is less than 40% in over 60% of responses.
  • Memory utilisation is less than 40% in only about 12% of responses.

So we can clearly see the disparity. There are a couple of questions which for me are linked to this one. I will publish posts over the coming days on those but you might ask yourself what effect these results have on CPU and Memory reservations as well as the vCPU to pCPU oversubscription ratios.

Clearly we could oversubscribe CPU cores further, but eventually we reach a position where availability considerations and failure domains become real concerns. If we push CPU consolidation ratios too hard, a single host failure could take down 100-200 VMs. That is not a good place to be, but that is where we are heading.
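To make that trade-off concrete, here’s a rough back-of-the-envelope sketch in Python. All the numbers are illustrative examples of my own, not recommendations:

```python
# Illustrative sketch: vCPU:pCPU oversubscription vs. failure-domain size.
# All numbers below are hypothetical examples, not sizing guidance.

def consolidation(vcpus_per_vm, vms_per_host, pcores_per_host):
    """Effective vCPU-to-pCPU ratio for one host."""
    return (vcpus_per_vm * vms_per_host) / pcores_per_host

def blast_radius(vms_per_host, hosts_lost=1):
    """How many VMs go down when `hosts_lost` hosts fail."""
    return vms_per_host * hosts_lost

# A dense host: 36 physical cores, 150 VMs with 4 vCPUs each
ratio = consolidation(vcpus_per_vm=4, vms_per_host=150, pcores_per_host=36)
print(f"vCPU:pCPU ratio ~ {ratio:.1f}:1")                 # ~16.7:1
print(f"VMs lost if one host dies: {blast_radius(150)}")  # 150
```

The point of the sketch: the same knob that improves the consolidation ratio also grows the failure domain, so the two numbers need to be weighed together.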

My personal view has always been that if you make vSphere or Hyper-V your “Mainframe” on which your entire workload runs, then it is penny-wise and pound-foolish to underallocate hosts for the sake of marginal cost implications, when weighed up against the downsides. That will always be a personal starting position but it obviously depends if it’s a production, QA, or Test/Dev environment.

Is this a problem?

So let’s extrapolate slightly forward. If CPUs become more and more powerful and even more cores are exposed per physical server, what might the impact be, given the availability considerations of a single host?

For me, this is where the application architecture must change. Right now, many applications cannot be deployed in scale-out (cattle) architectures, and are not ideally placed to leverage further inevitable increases in CPU core numbers and respective core performance. The reason I say that is there is no inherent resilience within most existing applications to withstand individual node failures.

This is going to cause a big problem for enterprises, balancing too much oversubscription in terms of VMs per-host and the implications in terms of line-of-business resilience versus the quest to drive better consolidation ratios from ever more powerful hardware resources.

That’s just a personal view, but one I’m seeing regularly where customers and partners are trying to achieve CPU consolidation ratios of 10:1, 20:1 or even 30:1 vCPU to pCPU – which is a dangerous place to be, in my opinion.

VMware VVol Series Part 2: Deeper Dive

In the next post in his series, my colleague Paul Morrissey, HDS Global Product Manager for Virtualisation, delves a little deeper into the mapping between workloads and backend capabilities.

This post focuses more on Storage Policy-Based Management (SPBM) and how this works in practice. So a bit more on the practical side of protocol endpoints, scalability and other questions that might arise.

SPBM is a key concept in VVol and one you need to understand before planning your implementation.

You can find the latest post HERE

I previously wrote about the need to correctly map the requirements of applications to VVol capabilities. You can find that post HERE.

I am VERY SURPRISED that I have not read a single other post from anyone in the VMware community talking about this. Has this been forgotten, or do people assume that everything will just “be fine”?

My advice is this: before you consider VVol, perform proper planning and due diligence. Work with your storage vendor, your application teams and your business continuity people to ensure the correct capabilities are created before surfacing them for consumption. Prove the concept with a smaller set of capabilities (maybe just one) which you can test and educate people on. Then expand and roll out. Don’t forget about business continuity and operational recovery – prove these use cases in your implementations.

I am confident HDS advises customers to be conservative and mitigate risk where possible – I know this from speaking to colleagues all over EMEA – customer relationships and protecting their customers are fundamental to everything they do.

You can also see this mindset at work in the design of the HDS VVol implementation, where ensuring minimum disruption is given priority. From Paul’s post:

Question: Would it be possible to migrate a current customer’s datastores with VMs to storage containers with VVols?

Paul : “That was in fact part of the original design specs and core requirements. As part of the firmware upgrade for our arrays, our VASA Providers and VVol implementations will allow for hardware-accelerated non-disruptive migration between source VMFS/NFS datastores and destination VVol storage container, via storage vMotion.”

What does this mean? Use of hardware acceleration inside the arrays speeds up the copying of data, which ultimately reduces exposure to risk during migrations. And this is a version 1 attribute – not a bolt-on later.

Further Resources: 

Here are some other useful VVol resources created by HDS:

More to Come ……



VVol Series: Part 1 Introduction

I’m delighted to write this note linking to my colleague Paul Morrissey’s post on VVol technology, over at the portal, which is a public service for customers, partners and anyone interested in our technology.

Paul is HDS Global Product Manager for VMware Solutions so he is best placed to dive deep on the integration between HDS technology and VMware.

This is building on some of the excellent resources related to VVols released by HDS so far.

First: check out this great video/demo by Paul at this location:

Also please check out this awesome FAQ Document on the HDS VVol homepage HERE with 63 of the most typical questions that might arise.

Back to this series

Paul’s post is the first in a three-part series which will go deeper into the technology and some of the more practical considerations that might arise.

You can find Part 1 HERE.

HDS support for VVol is here today, initially with HNAS (High-performance NAS) but soon to be followed in the next month or two by our block arrays.

Virtualisation Poll: Please Help by sharing



I’m looking to get some information regarding the average utilisation of virtualisation environments across the industry. This is not just for vSphere but Hyper-V and KVM too.

There are only a few questions. This is to help me with design considerations and to help guide good practice based on what is common in the industry. I will be publishing the results and sharing them on this site.

If you could help I would appreciate it. I am particularly interested in real workloads in customer environments – not what people do in labs.

Many Thanks !!!


VMware Support Summit #vTechSummit Cork Feb 24-25

On Tuesday and Wednesday I finally made it to the VMware EMEA Tech Support Summit in Cork, sponsored by GSS. I’d never been before and wasn’t sure what to expect. Would it be a waste of time?

Not quite. In fact it was the opposite.

I met up with my good buddy and fellow VCAP Ross Wynne @rosswynne. So both of us were looking for good quality tech content.

To save you reading through this whole post, I can confirm the complete absence of product marketing. There was one borderline presentation which I really didn’t get, but that was the exception. It was all informative, interactive, technical and suited to the audience.

The Presenters

With maybe one exception, the presenters had a command of their subject matter that was right up there with any I’ve seen. When you think about it, that’s to be expected of support and product experts (escalation engineers).

But what I didn’t expect was that, although these guys might not present every day (some of them anyway), most of them were very accomplished presenters and explainers of deep tech concepts.

And because the slides will be shared with attendees, you didn’t need to worry about writing stuff down. You couldn’t anyway, as each one-hour session had so much packed in. This is one to sit back and listen to without bothering with note-taking. I even imposed a tweeting ban on myself.

Design Do’s & Don’ts

What I also didn’t expect was the focus on sharing examples from the field of what architectures worked, what didn’t, and which best practices you should follow in your designs. So this was best practice based on reality: real examples of upper limits you might hit, and upper limits you shouldn’t attempt to hit.

How to get out of jail

And then the most obvious part.

  • What are the worst problems?
  • What is the root cause?
  • How can you fix them?
    • What logs can you check?
    • Should you use TCPDUMP, vmkping or some other utility?
    • Which ones are well-known to support?
    • In each case, whether or not there was a fix, the relevant KBs were listed

I particularly enjoyed the top-10 issues session, just for the range of items that arose.

In many cases there were no fixes. The nature of this session meant it was a case of full disclosure and sharing of information. I like that a lot!!

vCloud Director

Take the case of the vCloud preso, which was really awesome. Some of the others were excellent too, but this one had a really nice mix and flow.

This one focused on designing and scaling a cloud, and included a walkthrough of the product, how to go about building and scaling a cloud, and the allocation models – reservation pool, allocation pool with overcommitment, and pay-as-you-go. It walked through the practical impact of these decisions and how they might end up surfacing in an SR.

There was a case study to look at:

  • What are the Small, Medium and Large options for building a 30,000-VM cloud, based on 10, 20 or 30 hosts in a cluster?
  • What does each solution look like when you push it through the configuration maximums for VSAN, vCD, vSphere, vCenter, cluster limits etc.?
  • Where will most such designs fall down in the real world?
  • What are the top issues GSS see on a day-to-day basis, and how can you resolve them?

One thing I took from it that I don’t come across too often: the GSS view that you should always stay comfortably within the configuration maximums, specifically when sizing. So don’t ever deploy a 32-host vSphere cluster; limit it to a maximum of 30 hosts, just to leave headroom and stay within acceptable tolerance. Even though 32 is supported.
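That headroom advice is easy to encode as a rule. Here’s a minimal sketch, assuming a flat ~6% margin below any published maximum – my number, chosen only so it matches the 30-of-32 example above:

```python
# Sketch: size below published configuration maximums with some headroom.
# The 6% margin is my own illustrative figure, picked to reproduce the
# "deploy 30 hosts, not 32" advice; pick your own tolerance.

def sized_limit(config_maximum, headroom=0.06):
    """Largest size to deploy while keeping `headroom` spare capacity."""
    return int(config_maximum * (1 - headroom))

print(sized_limit(32))   # 30 hosts, not 32
print(sized_limit(64))   # 60
```

The same idea applies to any maximum you size against – VMs per cluster, datastores per host, and so on.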

Other highlights/points of interest

Things that left an imprint in my head:

  • VSAN is mission-critical workload-ready – in my opinion.
    • I still have question marks regarding its interoperability with other VMware and third-party products, but that’s going to come. It’s a matter of time.
    • As I’ve always said, it’s baked into the kernel. You just tick a box to turn it on, which presents almost zero barrier to adoption – which is also what appeals about PernixData.
    • For VSAN we’re talking about a queue depth of 256, due to its higher performance capability.
  • For vCloud Air, I think it has all the features an enterprise will need – HA, good storage performance – but failback (for now) is still not an option. So for the moment, moving an entire enterprise workload to the cloud with DRaaS sounds good, but as with SRM in the past, failing back won’t be trivial.
    • So seems like Development/Test or more tactical use is more likely.
    • The interesting piece of information provided was that on an Exchange system, because reading an email generates a database entry, around 30% of data changes daily – which will not help any attempt to keep Exchange synced up with the cloud.
  • NSX for me is still the only game in town. A great solution but from speaking to people cost is still an issue. Considering what it can do it doesn’t seem prohibitive but it is what it is.
  • The biggest potential screwup with vCD is deleting inventory objects in vCenter before deleting them in vCloud. Always delete from vCloud first … otherwise you’re potentially looking at a DB restore for both vCloud and vCenter as the recommended remediation.
  • For VSAN in SSD/SAS (hybrid) configurations, we’re looking at a 70/30 read/write split for the cache.
  • For all-flash VSAN, we’re talking about the cache tier being (potentially) 100% write cache.
    • Suggested figures are 20-40,000 IOPS per host in a hybrid config, and potentially 100,000 per host in all-flash VSAN.
  • Everyone still loves vCloud Director and I don’t think that will change, and the audience echoed that sentiment.
  • The new VMware View Cloud Pod architecture will support up to 20,000 desktops, and in v6.1 there will still not be a UI to configure it – to date it’s been configurable via the CLI only.
  • Some stuff many will already know about vROps, but it’s worth mentioning again.
    • Now there is a single combined GUI for vROps 6.0 and a single virtual machine inside the vApp.
    • You can have replica nodes as part of a master-slave type arrangement for redundancy
    • The Custom UI is now included with the regular UI inside the same interface.
  • For VVols, the presentation made me think about the fact that for the storage admin it doesn’t mean the end …
    • As was mentioned, it can mean the storage admin gets to spend his time properly managing “data”, understanding what’s going on and making more informed decisions.
  • The Certificate Automation Tool is awesome.
    • The new version pretty much does everything for you, which is very important considering the major architecture changes with the PSC and VMCA and how these can impact the use of certificates from CAs.
    • You can make the VMCA a subordinate of your internal CA to keep things simple.
    • The Machine SSL certificate is the key object, used to talk to port 443 on vCenter.
    • There are four further component certificates that may need to be changed. I’m not going into that here; Julian Wood normally has a great writeup on certificates.
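On the VSAN cache figures above, here’s a quick sketch of hybrid cache-tier sizing using the commonly cited rule of thumb that the cache tier should be roughly 10% of anticipated consumed capacity, split 70/30 between read cache and write buffer. Treat the numbers as illustrative and check the current sizing guidance for your version:

```python
# Sketch of hybrid VSAN cache-tier sizing. Assumptions (mine, from the
# commonly cited rule of thumb): cache = 10% of anticipated consumed
# capacity, split 70% read cache / 30% write buffer in hybrid configs.

def hybrid_cache_gb(consumed_capacity_gb, cache_fraction=0.10):
    """Rough read-cache / write-buffer sizes for a hybrid VSAN cluster."""
    cache = consumed_capacity_gb * cache_fraction
    return {"read_cache_gb": cache * 0.7, "write_buffer_gb": cache * 0.3}

# 10 TB of anticipated consumed capacity -> roughly 700 GB read / 300 GB write
print(hybrid_cache_gb(10_000))
```
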

I could go on and on and on and on ……

Anyway, there’s so much more, but I hope this shows how much I enjoyed it. A lot of people feel VMworld is more of a networking event now, and that its sessions don’t always go deep enough. This is an opportunity to redress the balance for free.

Until the next one.

Application Dependency Mapping with vCenter Infrastructure Navigator (VIN)

If you need to perform Application Dependency Mapping (ADM) – also called Application Dependency Planning (ADP) – there are a lot of different ways to do so: some automatic using software, some manual using software, and some using pen, paper and workshops. You will likely combine some or all of the three approaches to achieve a successful outcome.

First the why?

Why would you want to do that, and why does it matter?

For optimal cluster design it is very important to understand mission-critical application dependencies. This helps you make smart decisions regarding workload placement and whether you should either co-locate certain VMs together on the same host, or keep them apart, to ensure best possible performance.

This is of course what DRS affinity and anti-affinity rules are all about, but what if you have a large estate with 5,000 VMs? You can’t manually document dependencies for each of them, unless you’re Superman …


And as part of your virtual machine design you should always consider workload placement to ensure noisy neighbours do not interfere with critical workloads. You might also use Resource Pools, reservations, Storage I/O Control or Storage DRS as additional tools to ensure critical workloads get priority and are not “interfered” with by less critical workloads.

It is often the case that this has not been carried out at all, and things are left to move around – that’s my experience. When you spend significant time studying vSphere you really understand all the inputs required to design it optimally. Most are pretty common sense, but it’s always been my opinion that this is why vSphere is so great: despite workloads being thrown at it, and despite being grown in the most horribly organic ways, it continues to work and deliver for businesses thanks to robust design.


The VMware VCAP-DCD certification includes the methodology you need to learn in order to design your virtual infrastructure in the best way possible. Both the blueprint and the exam contain objectives and tasks, respectively, covering application dependency mapping. This methodology is what VCDX is built on – in fact, VCDX is the practical implementation and demonstration of competency in this approach. Here is a grab of one of the pages of the VCAP-DCD blueprint. I’ve seen this asked about A LOT by people contemplating or preparing for VCAP-DCD.

[Screenshot: VCAP-DCD blueprint page]

Example 1: Data Locality

Let’s say you build a vSphere cluster and pay zero attention to where any of your workloads are running. It’s to be expected that the situation in this figure can arise:

ADM Pic 1

So you have a situation where your applications (in this case Commvault Commserve and vCenter) are accessing a SQL Server instance by traversing the network. This really doesn’t make sense from an availability or manageability perspective. It makes more sense to keep application-database traffic local. This is a simple example but it can be more difficult when the dependencies are not obvious.
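You can express that check programmatically. Here’s a minimal Python sketch (the VM and host names are made up for illustration) that flags dependent pairs whose traffic crosses hosts – candidates for a DRS keep-together rule:

```python
# Sketch: given a VM -> host placement map and app -> DB dependencies,
# flag pairs that talk to each other across hosts. All names are
# hypothetical examples, not real inventory.

placement = {"commserve01": "esx01", "vcenter01": "esx01", "sql01": "esx02"}
dependencies = [("commserve01", "sql01"), ("vcenter01", "sql01")]

def cross_host_pairs(placement, dependencies):
    """Return dependent pairs that are not co-located on one host."""
    return [(a, b) for a, b in dependencies if placement[a] != placement[b]]

for a, b in cross_host_pairs(placement, dependencies):
    print(f"{a} -> {b}: traffic crosses hosts, consider a DRS affinity rule")
```

In a real estate the placement map would come from your inventory and the dependency list from a tool like VIN, rather than being typed in by hand.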

Example 2: Exchange

In this case you’re planning for Exchange 2013. By using DRS you can keep CAS/HUB servers and Mailbox servers together to ensure load balancing and better availability for the application. This is the practice recommended by VMware, and it shows how DRS (and vSphere HA) can always provide a benefit when you virtualise an application.

ADM Pic 2

Example 3: DR with SRM

For SRM, it is critical to understand application dependencies in order to design protection groups. This also feeds into datastore design when using array-based replication. The bottom line: there’s no point failing over your mission-critical three-tier application consisting of Web, App and DB tiers if you can’t meet its underlying dependencies on AD and DNS.

Tip: for anyone going for VCAP-DCD, this is an example of how to create an entity-relationship diagram. Pretty simple, isn’t it!!

ADM Pic 3
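The AD/DNS → DB → App → Web chain above is really a dependency graph, and a recovery plan’s start-up order falls out of a topological sort. Here’s a small Python sketch using Kahn’s algorithm, with illustrative service names of my own:

```python
# Sketch: derive a failover start-up order from a dependency map.
# Kahn's algorithm: start whatever depends on nothing, then whatever
# those services unblock, and so on. Service names are illustrative.

from collections import deque

deps = {            # service -> the services it depends on
    "web": ["app"],
    "app": ["db"],
    "db": ["ad_dns"],
    "ad_dns": [],
}

def startup_order(deps):
    remaining = {s: set(d) for s, d in deps.items()}
    ready = deque(s for s, d in remaining.items() if not d)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for t, d in remaining.items():   # s is up; unblock its dependants
            if s in d:
                d.remove(s)
                if not d:
                    ready.append(t)
    return order

print(startup_order(deps))  # ['ad_dns', 'db', 'app', 'web']
```

The same ordering logic is what an SRM recovery plan’s priority groups encode by hand.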

The Tools

VMware offers two tools to help you:

  • vCenter Infrastructure Navigator (VIN)
  • Application Dependency Planner (Partner-only tool)

It’s important to say that VIN, or vRealize Infrastructure Navigator, only works with vSphere for virtualised workloads. (I can’t figure out if it’s still bundled with vCenter or not.)

ADP is a deep, complex product that can capture workloads via SPAN-port traffic sniffing for non-VMware environments. So you could get a partner to use it prior to a large P2V, but the design considerations are not trivial – it is an entire exercise in itself. I’m not suggesting you don’t still need to spend time on VIN, but it’s certainly simpler to deploy. The output still needs to be interpreted and taken into account, though.

With VIN, you can easily deploy the appliance from an OVA template after downloading it:

[Screenshot: VIN download]

You deploy it using vCenter. It creates a plugin within the Web Client only. Don’t even think about using this with the VI Client.

VIN icon

Before you can use it you must assign it a license, so you need the correct privileges to do this:


Once you’ve done that you need to enable virtual machine monitoring. Just click the vCenter Infrastructure Navigator icon on the Home page, or get to it via the Explorer view. In the view below I already have it turned on.

vin turn on vm access

Once you’ve done that, there are two places to see the results. You can go to an individual VM and select Manage; you will then see the following map under the Application Dependencies tab:



Pretty neat, right!! And that’s within a few minutes. You can do a lot more, but that’s for another post.

You can also go to the vCenter-wide view of the entire estate, which looks like this and lets you save a CSV version. As a community comrade said yesterday, it would be nice to get a more finely formatted report.

vin vcenter view
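While we wait for a more finely formatted report, the CSV export can be post-processed easily enough. Here’s a sketch with hypothetical column names (I haven’t matched them to the real export – adjust to whatever columns VIN actually emits):

```python
# Sketch: summarise a VIN-style CSV export as "who depends on what".
# The column names (vm_name, service, target_vm) are my own invention
# for illustration; map them to the real export's headers.

import csv
import io
from collections import defaultdict

sample = """vm_name,service,target_vm
vcenter01,MSSQL,sql01
commserve01,MSSQL,sql01
app01,HTTP,web01
"""

def dependants_by_target(csv_text):
    """Group dependent VMs under the VM they rely on."""
    grouped = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        grouped[row["target_vm"]].append(row["vm_name"])
    return dict(grouped)

print(dependants_by_target(sample))
# {'sql01': ['vcenter01', 'commserve01'], 'web01': ['app01']}
```

From there it’s a short step to a nicer HTML or spreadsheet report.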

That’s about it until the next post. Thanks to Sunny Dua, who has the best set of vCOps/vROps posts ever and has provided the usual level of help and feedback on the questions I had about vCOps/vROps. I was aware of it before, but Sunny has confirmed that to plug VIN into vROps and monitor a full dependent stack of applications you can use this adapter:

[Screenshot: VIN adapter for vROps]

That’s for next time…..


Hitachi CB2500 Blade Chassis – Power & Flexibility for SAP HANA

Disclaimer: I work for Hitachi Data Systems. However this post is not officially sanctioned by or speaks on behalf of my company. These thoughts are my own based on my experience. Use at your own discretion.

Did you know that the world’s largest SAP HANA implementation runs on Hitachi servers and was installed by HDS? It’s true, and the whole area of in-memory database systems is growing rapidly.

Before I begin I have to give a shoutout to my colleague Sander, who works in my team as an SAP SME. I’ve learned a lot about HANA from him – otherwise I would know just the basics. He doesn’t blog, so I thought I’d put some info out there.

One of the biggest areas of focus for HDS is SAP HANA, and the partnership with SAP is of strategic importance, just as it is with VMware. It’s also fair to say this is an area where Hitachi has technical leadership.

Just as I’m writing this, another massive deal has been signed based on Hitachi SMP blades and our Global Active Device (GAD) technology. You can read more about GAD here.

For any customers looking to deploy HANA, I suggest you contact HDS, as in this area HDS has many advantages that are black and white. There are many design criteria to consider when running HANA in order to guarantee performance, and HDS qualified limits are higher than competitors’ from both a compute and a storage perspective. In some cases our competitors don’t have a competing compute offering at all, which makes it even more compelling.

I should mention that at the moment the ability to use vSphere for HANA is somewhat limited, mainly due to SAP restrictions on what is and is not supported. This will undoubtedly change, especially given vSphere 6 and the increased VM and host memory limits of 4TB and 12TB respectively.

What is HANA?

In case you don’t know, SAP HANA is SAP’s relational database that runs in memory. It’s a compelling technology for a number of reasons, mostly speed of query, massive parallelism and a smaller footprint. A figure of queries running 1,800 times faster is widely mentioned.

It stores data in columns, so it is called a columnar database. This has the effect of massively reducing the required storage capacity, thanks to less duplication of stored data.
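To illustrate why column storage compresses so well, here’s a tiny Python sketch – my own toy example, not how HANA actually encodes data. Repeated values within a column run-length encode very compactly, whereas the same values interleaved row by row would not:

```python
# Toy illustration of columnar compression: values within a single
# column tend to repeat, so they run-length encode well. This is a
# simplified stand-in, not HANA's actual encoding.

def run_length_encode(values):
    """Collapse consecutive repeats into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

country_column = ["DE", "DE", "DE", "DE", "IE", "IE", "US"]
print(run_length_encode(country_column))  # [['DE', 4], ['IE', 2], ['US', 1]]
```

Seven stored values collapse to three runs; at table scale, that kind of redundancy is where the capacity savings come from.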


HANA has some particularly important prerequisites for performance, both in terms of memory and back-end storage, and very careful qualification is required.

In the Hitachi UCP stack, the CB500 blades are currently supported in UCP Director, our fully orchestrated converged platform. This is a 6U, 8-blade chassis with all blades half-width in form. But that’s not really the subject of this post – here I want to talk about the CB2500 chassis and how you might use it.

Let me refer you to my colleague Nick Winkworth’s blog announcing the release of the Hitachi CB2500 blades here for a little more info on use cases etc.

Here’s a picture of the chassis:

[Screenshot: Hitachi CB2500 chassis]

How can you use the CB2500 chassis?

The CB2500 chassis is 12U in height which might seem quite large until you understand what you can do with it.

On the chassis, you can have:

  • Up to 14 half-width CB520Hv3 blades with a maximum of 768GB DDR4 RAM each. These have Intel Xeon E5-2600 v3 series processors inside.
  • Up to 8 full-width blades that can hold up to 1.5TB each when configured with 32GB DIMMs, or 3TB each when configured with 64GB DIMMs. These can have Intel Xeon E7-8800 v2 series processors inside.

Data Locality and SMP

When you start using in-memory databases and other in-memory constructs this is where SMP and LPARs can become important.

  • SMP = A single logical server instance spanning multiple smaller servers
  • LPAR = A logical hardware partition of a server on which a single OS can run

There is an obsession these days with data living close to the CPU. Some of this is just marketing: if you use a suitably sized Fibre Channel array, the latency can be so negligible as to be irrelevant, and for typical business-as-usual use cases it’s not a consideration anyway.

However, for applications like HANA it is of particular importance, and this is where scale-out systems can trip up. How do you create a single 6 or 12TB scale-up HANA instance on a hyper-converged architecture and protect it with HA?

The answer is you can’t right now.

Subdividing SMP

In relation to SMP technology, when you create a larger SMP instance you can still subdivide this into LPARs to maintain further flexibility.

Take 8 full-width dual-socket blades, for example, and subdivide them into:

  • 4 x 4-socket logical blades
  • 2 x 8-socket logical blades

So if you plump for larger blades, you can create logical blades from them. Furthermore, in a single SMP instance comprising 4 CB520X blades you can have 192 DIMMs of 32GB each, totalling 6TB of RAM.
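As a sanity check on that arithmetic, here’s a short sketch. The 48-DIMM-slots-per-blade figure is my own inference, chosen because it matches the 1.5TB-per-blade (32GB DIMMs) and 3TB-per-blade (64GB DIMMs) capacities listed earlier:

```python
# Sanity check of the SMP memory and socket arithmetic.
# Assumption (mine): 48 DIMM slots per full-width blade, which matches
# the 1.5TB (32GB DIMMs) / 3TB (64GB DIMMs) per-blade figures above.

def smp_memory_tb(blades, slots_per_blade=48, dimm_gb=32):
    """Total RAM in TB for an SMP instance built from `blades` blades."""
    return blades * slots_per_blade * dimm_gb / 1024

def smp_sockets(blades, sockets_per_blade=2):
    """Total CPU sockets in the combined SMP instance."""
    return blades * sockets_per_blade

print(smp_memory_tb(4))              # 4 blades, 32GB DIMMs -> 6.0 TB
print(smp_memory_tb(4, dimm_gb=64))  # 4 blades, 64GB DIMMs -> 12.0 TB
print(smp_sockets(8))                # 8 dual-socket blades -> 16 sockets
```
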

You can protect this inside the same chassis with Hitachi N+M failover.

Multiple chassis can be connected together to create larger instances. We currently have a customer running a 12TB+ in-memory database instance spanning multiple chassis.


Running in memory requires us to go beyond the limits of one box if we want to address the amounts of memory required for the largest use cases – requirements which, right now, vSphere cannot meet. This is not entirely down to vSphere limits; it is more about what SAP does and doesn’t support.

This is where the ability to link multiple blades together using Hitachi SMP is a powerful tool in the next generation of systems which will use memory as the primary storage tier.


VVOL Demo by HDS: Worth a Watch (but let’s be careful out there)

Disclaimer: I work for Hitachi Data Systems. However this post is not officially sanctioned by or speaks on behalf of my company. These thoughts are my own based on my experience. Use at your own discretion.

Pretty much the entire world knows that in the last week or so VMware announced vSphere 6.0.

I haven’t written a post on it yet but will do soon.

Suffice it to say VMware has really focused and invested huge engineering resources to achieve this. For my money vSphere 6 is the best ever release.

As someone said, the hypervisor is not a commodity – and when you see the features and improvements that have come with this release, I think that rings true.

So kudos to VMware engineering and Product Management !!

Virtual Volumes

I’ve had a few questions lately from colleagues asking me how this landmark feature works in detail.

I think the following demo by my countryman Paul Morrissey, HDS Global Product Manager for VMware, should really help aid learning.

Paul does a great walkthrough of the setup on the storage side, configuration through the vCenter Web Client, and provisioning to VMs.

It’s a nice demo – not too long but covers the typical questions you might ask.

HDS support for VVOL

It is expected that HDS will announce support initially for the Hitachi NAS (HNAS) product in March, at GA time for vSphere 6.0, followed swiftly by support for all HDS block storage within 2-3 months.

With the initially supported HNAS, HDS supports:

  • 100 MILLION VM snapshots
  • 400,000 VVols (with architectural support for up to 10 MILLION VVols)

A Word of Caution

Don’t forget you heard it here first !!!!

There has been no commentary or proper consideration, in my view, of the potentially negative impact of VVols on the datacenter, both in terms of operational processes and in matching business requirements to the correct SLA.

Of course, VVols provide an amazing level of granular control over workload placement and alignment, and this is to be welcomed.

However, the onus is really on IT systems integrators and vendors to help customers understand the ACTUAL requirements of their applications. To get the best out of VVols, business and IT need to work more closely together to ensure the correct capabilities are coupled with the right use cases.

My belief is that this new technology needs to be properly managed and given the “VCDX treatment” – by which I mean that the SLA matches the application’s needs. Otherwise we will end up in a situation similar to the early days of virtualisation, with 8-vCPU VMs all over the virtual datacenter.

This has been one of the downsides of virtualisation: consumers requesting double what they actually need because of the so-called virtualisation tax, which is now pretty much negligible and irrelevant compared with the upsides. This kind of thinking can (ironically) have a negative impact on performance which is notoriously difficult to identify. I think were it not for the robustness of the vSphere hypervisor, we would not have gotten away with it.

The request for VVols could go like this:

  • I need a VM
  • What type of storage (VVOL) do you need?
  • What have you got?
  • I’ve got Platinum, Gold and Silver
  • What does Platinum have?
  • Everything
  • I’ll take 10 (John in Finance said to ask for more than you need – then if IT give you Silver that will be fine)

I admit that this is a slightly facetious scenario, but without internal showback/chargeback the danger is that the right SLA will not be matched to the right use case or application requirement, and we will end up with an overprovisioned datacenter that, once deployed, cannot be unpicked.

Paul covered in his demo how cost can be included. We need to accept human nature and understand that data is precious gold: if someone thinks they can have a second copy of their data in a separate data center, who wouldn’t want that?

It is also our default human nature to say yes; humans really don’t like to say no.

This has long been a concern of mine with VVols so we need to perform proper due diligence to ensure this technology does not get misused.

More on this next week.

Thoughts for Today, Ideas for Tomorrow