What I Learned at VMworld 2012
At the end of August, I was fortunate to attend VMworld. A record crowd of around 20,000 flooded the Moscone Center in San Francisco. Park Place partners extensively with VMware across all our service lines, so we were well represented by all our disciplines – technology consulting, cloud services, and technology integration. At a macro level, I was struck by the impact this company and its technology have made on the industry. Several years ago at VMworld 2008, well under 8,000 gathered in the same venue, primarily to argue whether VMware was ready for primetime and where it best fit in the enterprise. This year the conversations were about how to make disparate public and private clouds interoperable; how to advance management, monitoring, and security tools; and how to broadly virtualize not only desktops but other computing endpoints, including the smartphone. vSphere as a production environment was no longer a discussion – it has become, in many quarters, an assumption.

While I am not really qualified to judge whether that’s a good thing, I will say this – VMware and the growing set of tools around it arrived just in time to revive a fairly boring technical infrastructure industry that had been reduced to arguing about the speeds and feeds of servers, storage, and networking gear that mainly originates in four ginormous factories in China. “Designed in Palo Alto, Cost Reduced to the Detriment of the US Economy in Asia.” (Please don’t infer my vote from that statement – I am attempting to telegraph my frustration with the “WalMart Mentality” prevalent in our population, which is destroying service and jobs in America and contributing to repressive dictatorships worldwide. We New Englanders appreciate frugality more than most, but saving money at human, macroeconomic, and cultural cost – is it really still saving?) Ahhh, but I digress.
VMware, and virtualization in general, has been proven over and over again to cost a little more in the short term and then save a LOT of money for IT shops in the long term – but here’s what I really love about it: it empowers systems designers to match compute resources to workloads far more deterministically. It is the best IT resource management tool in our toolkit to date. Virtualization in storage arrays unhitches us from a precise number of spindles per MEDITECH server and allows us to create and apply IOPS as needed. Virtualization in servers lets us balance and protect the performance and availability of workloads, applying not too little, not too much, but just the right amount of cores, memory, networking, and storage. Goldilocks would approve. The furtive conversations about whether virtualization technology can really be trusted to run production workloads stem mainly from the fact that many early adopters took their “mad science projects” live and forgot to maintain them. The hypervisor isn’t smart enough to create additional CPUs and SSDs on demand – yet. The single biggest failure in VM administration we see in the field is the failure to maintain headroom. Virtual machines aren’t free, friends. When N+2 becomes N+0 because you spawned too many guests, it’s time to roll in the hardware before the calls about poor system performance light up your night. Just sayin’… until next time, peace.
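As a rough illustration of the headroom point above, here is a minimal sketch of an N+2 style failover-capacity check. All numbers, names, and the simple CPU-only model are hypothetical assumptions for illustration – this is not any VMware API or admission-control algorithm, just the back-of-the-napkin arithmetic an administrator might run.

```python
# Hypothetical N+2 headroom check for a virtualization cluster.
# Assumes identical hosts and a CPU-only demand model; names and
# numbers are illustrative, not drawn from any vendor tooling.

def failover_headroom(num_hosts, host_capacity_ghz, vm_demand_ghz):
    """Return how many host failures the cluster can absorb while
    still satisfying the current guest CPU demand."""
    tolerable = 0
    while tolerable < num_hosts:
        # Capacity left if one more host were to fail.
        surviving_capacity = (num_hosts - tolerable - 1) * host_capacity_ghz
        if surviving_capacity < vm_demand_ghz:
            break
        tolerable += 1
    return tolerable

# A healthy cluster: 6 hosts of 40 GHz each, 150 GHz of guest demand.
print(failover_headroom(6, 40.0, 150.0))  # prints 2 -> N+2

# The same cluster after spawning too many guests: 230 GHz of demand.
print(failover_headroom(6, 40.0, 230.0))  # prints 0 -> N+0
```

The point of the sketch is the second call: nothing about the cluster hardware changed, only the guest count, yet the failure tolerance quietly dropped from N+2 to N+0.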
Jim Fitzgerald is Executive Vice President and CTO of Park Place International, where he is responsible for technology solutions strategy, development, and quality spanning the entire Park Place portfolio of Technology Integration, Technical Consulting, and Cloud Services. In his 28-year career, Jim has enjoyed the opportunity to observe and participate in the evolution of network computing platforms and their application to business and healthcare workflows. His current passion is helping hospitals develop the right mixture of local and cloud-delivered services to achieve operational sustainability.