Healthcare IT Blog

Hardware Wars: The Return of the Mainframe

Published on 12/13/2012 by Jim Fitzgerald
Category: Virtualization

The mainframe created virtualization. Now virtualization is recreating the mainframe. Please humor me as we trace the path. In the beginning, there was the mainframe. The mainframe was big, heavy, and drank power like a zit-laden teenager swilling 44 oz. Cokes. You had to pass a test to work directly with it in any way. You had to learn some level of programming even to be a data entry person. To work in the data center you had to wear thick glasses, smoke Winstons or Marlboros, and hold a generally dismissive view of the rest of humanity. The mainframe might come as one, two, or three huge “frames” connected by highly shielded custom data cables that could barely bend 30 degrees and that weighed enough to hurt the operators if mishandled. It functioned as a single unit that incorporated processor, memory, storage, and access. Several years after broad commercial deployment of the mainframe, a better way to allocate this decidedly expensive fixed asset was developed: the Virtual Machine. That’s right, the VM was born in the 1970s, not in the late 2000s when it became a standard fixture in Windows-based client/server hospital data systems.

Deeper into the 1970s, the minicomputer was born. The mini was essentially a mainframe squeezed into less space and generally used in more focused roles in IT. The mini used a lot less power, space, and cooling, and made compute available to smaller organizations of all kinds. I had my first experience with computer systems on a DEC PDP-8 acquired by my tiny little high school, Roxbury Latin, in 1977. Minicomputer admins in commerce still smoked Winstons and Marlboros, but minicomputer admins in research and academia tended to smoke something else entirely, especially if they used the UNIX OS. Makers of minicomputers, trying to band together against the mainframe, adopted standards that we take for granted today, like the ASCII character set, asynchronous serial communications, SCSI disk buses, and eventually Ethernet as a device access medium. Our beloved MEDITECH HCIS emerged first on minicomputers from Digital Equipment Corporation and Data General Corporation. MAGIC was groundbreaking in a lot of ways, not least of which was taking full advantage of distributed printers and terminals and managing them at high speed over Ethernet. MEDITECH even wrote code that loaded onto industry-standard terminal servers and turned them into intelligent access nodes for the MAGIC Color Terminal.

Not long after the mini, Apple and IBM PCs appeared on the scene and changed the compute landscape. As the market and technology matured, organizations and individuals shelling out $3000 and more for an early PC were not satisfied with the idea that they had to hook up to the corporate mainframe or minicomputer over a slow serial connection using a program that turned their $3000 PC into a $500 dumb terminal. Refugees from the dying minicomputer industry began developing operating systems and code that would function over networks, allow the sharing of expensive resources like disk arrays and high-speed printers, and ultimately allow the computing workload to be distributed across multiple computers on a network. Thus client/server computing was born, and it was a painful birth. Operating system technology at the time was reasonably good at managing a single machine (as long as the user didn’t make any radical unplanned configuration changes) and could even begin to access shared resources over Ethernet with the advent of “Network OSes” from Banyan, Novell, Ungermann-Bass, and others.

One problem with the client/server model was that there was very little real-world experience with linking a process on a client to a process on a server over a network. Programs that had previously had the full attention of a mainframe or minicomputer during the user’s time slice now had to send requests over a network and wait for answers from the server. Architects began to worry about many of the essential problems that still bedevil badly architected HCIS infrastructure environments today: network latency, server utilization, storage queue length, storage latency, and locking mechanisms. As the software began to catch up with the problem, the hardware was not standing still. Processor and memory densities, network speed, and disk bus speed all grew at exponential rates, generating an ongoing demand for retuning the network operating systems and the client/server software that used them.
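
To make that request-and-wait pattern concrete, here is a minimal sketch of a client process calling a server process over a TCP socket and timing the round trip. It is written in Python purely for brevity (a language that post-dates the era described here), and the port number and the toy “lookup” request are illustrative assumptions, not anything from a real HCIS or network OS.

    # Minimal client/server round trip: the client sends a request over the
    # network and blocks until the server answers, just as early client/server
    # programs had to do. Port 5050 and the request text are arbitrary choices.
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5050

    def server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)           # wait for the client's request
                conn.sendall(b"answer:" + request)  # do the "server side" work and reply

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.1)  # give the listener a moment to start

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        start = time.perf_counter()
        cli.sendall(b"lookup patient 42")   # the request leaves the client...
        reply = cli.recv(1024)              # ...and the client waits on the network
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"reply={reply!r}  round-trip={latency_ms:.2f} ms")

Every one of those waits is a place where network latency, server utilization, or storage queueing can stretch the user’s response time, which is exactly what the early architects were worrying about.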

Into this pulsating glob of barely manageable network computing, the VM re-entered the picture. Depending on your perspective, VM meant either “virtualization monster” or “virtual machine”. VMs for the PC space have, on the downside, been yet another technology that was essentially beta tested on the entire end-user community. On the positive side, VMs have created a toolset that lets us manage our increasingly powerful microcomputer server resources more efficiently, have given us tools for enhancing availability, and have forced more discipline in the way the interface to the physical hardware is managed. There is an “embarrassment of riches” heading our way from technology innovators. Solid-state drives (SSDs) that can generate 200,000 IOPS (input/output operations per second), compared to roughly 150 IOPS for a state-of-the-art mechanical SAS drive; superscalar memory technologies that can aggregate memory arrays and offload memory management from core CPUs; 40 and 100 Gb/sec networking; short-distance network technologies that mimic direct memory access: these are just a foretaste of things to come.
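
To put that IOPS gap in perspective, here is a back-of-the-envelope sketch using only the figures quoted above; it ignores queueing, caching, and RAID behavior entirely, so treat it as order-of-magnitude arithmetic rather than a benchmark.

    # Rough arithmetic with the IOPS figures from the paragraph above.
    reads = 1_000_000                      # one million small random reads
    for name, iops in (("mechanical SAS drive", 150), ("SSD", 200_000)):
        seconds = reads / iops
        print(f"{name}: about {seconds:,.0f} seconds for {reads:,} random reads")

    # Prints roughly:
    #   mechanical SAS drive: about 6,667 seconds for 1,000,000 random reads
    #   SSD: about 5 seconds for 1,000,000 random reads

That works out to a ratio of more than 1,300 to 1 on random I/O, which is the kind of jump the rest of the stack has to be retuned to exploit.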

It turns out that virtualization technology will happily adopt these new, fast resources with little adaptation required, and so the mainframe, a different mainframe, will be reborn. You won’t have to be a smoker to operate it, although a Venti Latte may be in order. This mainframe will not only be a compute and storage powerhouse on its own; it will also play nicely with local, wide-area, and cloud networks, and could potentially level the capacity playing field between large, well-funded IT shops and smaller, more budget-constrained facilities. Stay tuned.

Jim Fitzgerald is EVP and CTO of Park Place International. He is young enough to be enthusiastic and old enough to be cautious.


