Healthcare IT Blog

The Quest for Data Center Availability

Published on 01/08/2013 by Mark Middleton
Category: Healthcare IT

The fusion of health care reform, exponential data growth, and increasing computing density brings a number of demands, and resulting questions, to health care systems and providers, no matter their size or stature.  Reform measures require health care providers to digitize their care process in a meaningful way and make the data readily available at all times to the appropriate people who need it.  When the provider successfully implements the nirvana app to transform their health care process, availability demands skyrocket.  To keep data available, providers have to consider all aspects of the architecture stack: application, processing, storage, connectivity, and presentation.  But at the root of all those components lies the most fundamental building block, and that’s the data center that shelters, protects, powers, and cools all the "cool" stuff.  If you are looking to house your own nirvana health care IT operation, then reading this should help you think about your own production data center.  If you are looking to partially outsource that aspect in the form of a colocation site with your own systems, or to completely outsource your operation via cloud services, application service providers, and the like, then it should help you qualify your prospective vendors.

The data center provides six important functions: 1) shelter or accommodation, 2) power, 3) cooling, 4) security, 5) other environmental controls, and 6) connectivity.  First, shelter and accommodation include protection from both natural and manmade disasters.  This is affected not only by the general region of the country, but also by the specific site selection and the mitigating efforts that go into site preparation.  Avoiding tornado alleys, hurricane paths, earthquake-prone areas, and locations close to airports, nuclear reactors, and major highways is a start, but hardening facilities to handle extreme wind, minimizing roof penetrations to avoid leaks, and providing multiple barriers to protect against angry vehicles, flying trees, or inbound comets are also important.  Second, the facility must provide adequate volume, quality, and availability of power, which may be accomplished with a myriad of options.  Third, the facility has to cool what it powers.  As a rough rule of thumb, for every watt of power consumed by IT gear, another 0.8 to 1 watt has to be provided for cooling.  Fourth, security systems must be in place to ensure only the right people have access to the appropriate parts of the building and systems.  These controls include biometric access, badge readers, video surveillance and recording, and in some cases, a big man with a gun in his hand.  The fifth component includes all other environmental controls, such as early detection of heat, smoke, or fire (much faster and with greater sensitivity than a home alarm), detection of water on the floor, and other monitoring systems.  Finally, robust and diverse communication paths should be available to provide ample bandwidth, minimal latency, and redundancy in the event of the inevitable backhoe that wanders through the neighborhood.
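To make the cooling rule of thumb concrete, here is a minimal sketch in Python; the function name, bounds check, and the 500 kW example load are illustrative assumptions, not figures from the article:

```python
# Rough facility power budget using the article's rule of thumb:
# every watt of IT load needs roughly 0.8 to 1.0 additional watts of cooling.

def total_power_kw(it_load_kw, cooling_factor=0.9):
    """Estimate total facility power (IT load plus cooling) in kW.

    cooling_factor: watts of cooling per watt of IT load,
    0.8-1.0 per the rule of thumb above.
    """
    if not 0.8 <= cooling_factor <= 1.0:
        raise ValueError("rule of thumb covers 0.8-1.0 only")
    return it_load_kw * (1 + cooling_factor)

# A hypothetical 500 kW IT load:
print(total_power_kw(500, 0.8))  # 900.0 kW at the low end
print(total_power_kw(500, 1.0))  # 1000.0 kW at the high end
```

In other words, a facility must be provisioned for nearly double the raw IT load once cooling is accounted for.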

Fortunately, the Uptime Institute, a consortium of companies that has developed standards and practices for the data center industry as well as Tier certifications, offers general guidelines for thinking through design requirements and specifications.  The Uptime Institute defines four tiers of data centers, described below.  It should be noted that the price per square foot climbs steeply as you climb the availability ladder.  The guidelines are general, and data centers may fit along a spectrum from the bottom of Tier I to the best of Tier IV.  Sometimes a data center will contain elements of two different Tier ratings, but it will generally be classified under one.

Tier I – This is the vanilla version with little to no redundancy: a single power path (a path comprises multiple components, from the power provider all the way to the plug on the data center floor), a single UPS, and one (or no) generator.  Cooling is sufficient only as long as all components are working.  Tier I is generally not acceptable for mission-critical services or applications, and planned maintenance nearly always results in data center downtime.  According to benchmarks, a Tier I facility would be at most 99.67% available, which works out to roughly 2.4 hours of downtime per month, averaged over a year or multi-year period.

Tier II – A Tier II data center begins introducing redundant components, such as power and cooling equipment, although all are delivered through single paths.  Said another way, there are still many single points of failure in the distribution of both power and cooling.  The redundant components may be taken offline for maintenance with minimal impact to operations; however, that applies only to the redundant systems.  A simple example would be a site with two UPS units that share a single power path, breakers, and switches to a single plug for the equipment.  A Tier II facility provides an estimated 99.75% availability, or an average of about 1.8 hours of downtime per month.

Tier III – A Tier III facility has redundant components, designated “N+1,” where “N” represents the number of components required for capacity or function and “1” represents the spare.  This standard provides multiple delivery paths as well as redundant components, and so offers appreciably greater redundancy and reliability.  In a Tier III facility, only one path is active at a time; the other is passive and may be pressed into service during unplanned outages with no impact on operations.  A Tier III facility, depending on configuration, can provide up to 99.98% availability, or an average of about 10 minutes of downtime per month.
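The availability percentages quoted for the tiers convert to downtime with simple arithmetic. The sketch below reproduces the figures above; the function name and the use of an average 730-hour month are assumptions made for illustration:

```python
# Convert an availability percentage into average downtime per month.
# Uses an average month of 365 * 24 / 12 = 730 hours.

HOURS_PER_MONTH = 365 * 24 / 12  # 730.0

def downtime_minutes_per_month(availability_pct):
    """Average monthly downtime implied by an availability percentage."""
    return (100 - availability_pct) / 100 * HOURS_PER_MONTH * 60

for tier, pct in [("Tier I", 99.67), ("Tier II", 99.75), ("Tier III", 99.98)]:
    print(f"{tier}: {downtime_minutes_per_month(pct):.0f} min/month")
```

Running this yields roughly 2.4 hours for Tier I, 1.8 hours for Tier II, and about 9-10 minutes for Tier III, matching the benchmarks cited above.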

Tier IV – This tier provides the ultimate in fault tolerance and reliability by offering multiple redundancies for each component and generally operating in an active/active configuration.  The facilities are designed to “not go down” and may approach 100% uptime.

We hope this helps you in your quest for system availability nirvana.  My next blog article will explore some capacity elements of today’s and future data centers.

Mark Middleton is the Director of Cloud Services at Park Place International. In the past, Mark worked at Christus Health as the System Director for IT Architecture. Mark has been a finalist in the Data Center Executive Excellence Awards and holds degrees in Biomedical Technology and Business Administration, as well as the highest-level ITIL Expert certification.