Many data centers are up against the maximum electric power available to them from their utility. Others are facing management challenges: the time it takes to deploy new capacity, and to manage existing capacity and systems. And gains made by virtualizing and consolidating servers are often lost again as more gear is added in.
The demand for more CPU cycles and petabytes of storage won't go away. Nor will budget concerns, or the cost of power, cooling and space.
Here's a look at how vendors, industry groups, and savvy IT and facilities planners are meeting those challenges -- plus a few ideas that may still be a little blue-sky.
Location, location, location
Data centers need power. Lots of it, and at a favorable price.
Data centers also need cooling, since all that electricity going to and through IT gear eventually turns into heat. Typically, this cooling requires yet more electrical power. One measure of a data center's power efficiency is its PUE -- Power Usage Effectiveness -- the total power consumed by the facility (IT, cooling, lighting, etc.) divided by the power consumed by IT gear alone. The best PUE is as close as possible to 1.0; PUE ratings of 2.0 are, sadly, all too typical.
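The arithmetic behind PUE is simple enough to sketch in a few lines. (The numbers below are hypothetical, for illustration only -- they aren't measurements from any facility mentioned in this article.)

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

# A hypothetical facility drawing 1,500 kW overall to run 1,000 kW of IT load:
print(pue(1500.0, 1000.0))  # 1.5 -- every watt of compute costs half a watt of overhead
```

A PUE of 2.0, by the same arithmetic, means the facility burns a full watt of cooling, power distribution and lighting overhead for every watt that reaches a server.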
"You want to be in a cool dry geography with cheap power, like parts of the Pacific Northwest. For example, Facebook's data center in Prineville, Oregon. Or in a very dry place, where you can get very efficient evaporative cooling," says Rich Fichera, VP and Principal Analyst, Infrastructure and Operations Group, Forrester Research.
[ See also: Facebook shares its data-center secrets ]
Companies like Apple, Google and Microsoft, along with data center hosting companies, have been sussing out sites that meet affordable power and cooling criteria (along with low risk of earthquakes and dangerous weather extremes, available and affordable real estate, good network connectivity, and good places to eat lunch).
Google, with an estimated 900,000 servers, dedicates considerable attention to data center efficiency and other best practices, such as using evaporative cooling, where and when possible, to minimize how often energy-hogging "chillers" run (when in use, chillers "can consume many times more power than the rest of the cooling system combined"). Evaporative cooling still requires power -- but much less. And Google's new facility in Hamina, Finland, "utilizes sea water to provide chiller-less cooling." (See Google's video.) According to the company, "Google-designed data centers use about half the energy of a typical data center."
[ See also: Visit a Google Data Center ... if you dare! ]
Renewable, carbon-neutral power
In addition to looking for affordability, many data center planners are looking at power sources that don't consume fuel, or otherwise have a low carbon footprint.
For example, Verne Global is cranking up a "carbon-neutral data center" in Iceland -- currently scheduled to go live November 2011 -- powered entirely by a combination of hydro-electric sources and geothermal sources, according to Lisa Rhodes, VP Marketing and Sales, Verne Global. (About 80% of the power will come from hydro-electric.)
Power in Iceland is also abundant, Rhodes points out: "The current power grid in Iceland offers approximately 2,900 megawatts (MW) of power capacity and the population of Iceland is roughly 320,000 people. Their utilization of the total available power is thought to be in the range of 300 MW. Aluminum smelters are currently the most power-intensive industry in Iceland, leaving more than sufficient capacity for the data center industry."
Iceland's year-round low ambient temperatures permit free cooling, says Rhodes. "Chiller plants are not required, resulting in a significant reduction in power cost. If a wholesale client should decide they want cooling at the server, there is a natural cold-water aquifer on the campus that can be used to accommodate their needs."
Depending on where the customer is, the trade-off for locating data centers based on power, cooling or other factors can, of course, be incrementally more network latency -- the delay caused by signals traveling through hundreds or thousands of miles of fiber, plus, possibly, another network device or two. From Verne Global's Iceland facility, for example, one-way transit to London or mainland Europe adds about 18 milliseconds; to the United States, about 40 milliseconds.
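The propagation component of that delay is easy to estimate: light travels through fiber at roughly two-thirds the speed of light in vacuum, or about 200 km per millisecond. A rough back-of-the-envelope sketch (the distance below is an assumed great-circle figure for illustration; real cable routes run longer, and routers and repeaters add more delay):

```python
SPEED_IN_FIBER_KM_PER_MS = 200.0  # approx. 200,000 km/s, ~2/3 of c

def one_way_latency_ms(route_km: float) -> float:
    """Propagation delay only; actual paths add routing detours and equipment hops."""
    return route_km / SPEED_IN_FIBER_KM_PER_MS

# Iceland to London is very roughly 1,900 km great-circle:
print(round(one_way_latency_ms(1900.0), 1))  # ~9.5 ms from propagation alone
```

That propagation-only figure is well under the quoted 18 ms, which is consistent with real routes being substantially longer than the straight-line distance.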
It's not just the heat, it's the humidity
"Dry places" aren't necessarily in cool locations. i/o Data Centers' Phoenix facility, which according to the company is one of the world's largest data centers, is located, as the facility's name suggests, in Phoenix, Arizona.
"One of the benefits of the desert is it's very dry," says Anthony Wanger, i/o President. "It's easier to remove heat in a dry environment, which makes Arizona an ideal location."
According to the company, the Phoenix data center employs a number of techniques and technologies to reduce energy consumption and improve energy efficiency.
"We are doing everything possible to be energy efficient at all of our data centers," says Wanger. "We separate cold air supply and warm air return." To get the heat away, says Wanger, "There is still no more-efficient means of moving energy than through water. Air, as a fluid, is much less dense and less efficient. Once we get that hot air, we dump it into a closed-loop system and exchange it into an open-loop system, where we can remove the heat. We also use thermal storage. We can consume energy at night when it's sitting in the utility's inventory."
Also, says Wanger, "We have to put humidity into the air. The old way was to have a big metal halide lamp baking water. The humidification solution was to fire up a heat lamp and phase-transition it to humidity. Now we use a process called ultrasonic humidification, which uses high-frequency electronic signals to break water surface tension and dissipate cool water vapor into the air -- this takes about 1/50th the amount of energy."
The mod pod
For several years now, a growing number of vendors, like HP and Microsoft, have been offering ready-to-compute data center modules that include not only compute and storage, but also cooling gear -- just plop (well, put gently) into place, and connect up power, connectivity, and whatever cooling is needed.
[ See also: Make mine modular: The rise of prefab data centers ]
Some don't even need a proper data center to house them in.
And it's not just vendors, either; hosting providers like i/o Data Centers not only use their own modules, but also offer them directly to customers who might not be availing themselves of i/o's facilities.
For example, HP offers its Performance Optimized Datacenter 240a, a.k.a. "the HP EcoPOD." Amazon has its own Perdix container, and Microsoft offers its Data Center ITPAC (IT Pre-Assembled Components).
HP's EcoPOD uses free-air and DX (direct-expansion) cooling, without needing any chilled water. "Just add power and networking -- in any environment," says John Gromala, director of product marketing, Modular Systems, Industry Standard Servers and Software, HP. According to Gromala, "the EcoPOD optimizes efficiency, achieving near-perfect Power Usage Effectiveness (PUE) between 1.05 and 1.30 (depending on ambient conditions)." And, says Gromala, "because EcoPODs are freestanding, they can be deployed in as little as three months. Customers are putting EcoPODs behind their existing facilities, inside warehouses or on roofs."
Switching from AC to DC
IT gear runs on DC (direct current), but utilities provide electricity as AC (alternating current).
Normally, "A UPS converts the 3-phase 480vAC coming from the power utility to DC, to charge its batteries, and then reconverts back to 3-phase 480vAC to send it through the data center. The PDU (Power Distribution Unit) for each rack or row of racks converts the 3-phase 480vAC to 3-phase 208vAC, which is what normally goes into IT gear like servers and storage arrays. And the power supplies in the IT gear convert that 208vAC into 380vDC," says Dennis Symanski, Senior Project Manager, Electric Power Research Institute, and chairman of the EMerge Alliance's committee writing the 380vDC standard.
Various initiatives are underway exploring going, ahem, directly from utility power. "We've done a lot of demos worldwide about running data centers at 380vDC (volts of Direct Current) instead of 208vAC," says Symanski.
Moving to a direct-current infrastructure, says Symanski, "gets rid of three conversion steps in the electrical system, and also reduces the load on the air conditioning by the reduced amount of heat being created."
What's that mean in terms of dollar savings? "We've found in most of our demonstrations that we get about a 15% reduction in the power used to run IT equipment. Plus the savings from needing less air conditioning, which are probably comparable, but harder to measure."
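The intuition behind that 15% is that every conversion stage wastes a few percent as heat, and the losses compound multiplicatively. The stage efficiencies below are invented round numbers for illustration -- they are not EPRI's figures -- but they show how eliminating conversion steps helps:

```python
def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a power path is the product of each stage's efficiency."""
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return overall

# Hypothetical AC path: UPS rectify, UPS invert, PDU transformer, server power supply
ac_path = chain_efficiency([0.96, 0.96, 0.98, 0.94])
# Hypothetical DC path: one facility-level rectifier, then the server's DC-DC stage
dc_path = chain_efficiency([0.97, 0.96])

print(f"AC path: {ac_path:.1%}, DC path: {dc_path:.1%}")
```

With these assumed numbers the AC chain delivers roughly 85% of utility power to the IT load and the DC chain roughly 93% -- the same order of improvement Symanski describes, with the avoided losses also meaning less heat for the air conditioning to remove.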
Since a DC infrastructure means DC UPSs, DC circuit breakers, DC interconnect cables, etc., data centers are unlikely to convert existing AC set-ups, other than as testbeds, says Symanski. "This is for when you are expanding in your data center, like adding a new row of racks, or building a new data center."
Switch to power-saving components
There are many opportunities to reduce power consumption simply by replacing some of the components in existing power and cooling systems.
i/o Data Centers, for example, "uses variable frequency chillers, pumps, cooling towers and air handlers to reduce energy consumption. By using only the power necessary to keep equipment running at optimal levels, i/o is able to operate energy-efficient data centers."
"You don't change the fan or the motor; you put a VSD (variable-speed drive) on the motor. What used to be a single-speed fan you can now slow down," notes EPRI's Symanski. "And by reducing the speed of a fan by 50%, you use only one-eighth of the power." However, Symanski cautions, "You have to make sure you don't get condensation and that the refrigerant doesn't freeze by slowing down too much."
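Symanski's one-eighth figure comes from the fan affinity laws: airflow scales linearly with fan speed, but power draw scales with the cube of speed (in the idealized case). A minimal sketch:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law (idealized): power scales with the cube of fan speed."""
    return speed_fraction ** 3

print(fan_power_fraction(0.5))  # 0.125 -- half speed draws one-eighth the power
```

The same cube law is why variable-speed drives pay off so quickly on pumps and air handlers, too: even a modest slowdown during off-peak hours cuts power draw disproportionately.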
There's even one easy component upgrade that can be done with some existing IT gear, Symanski points out: replacing an older power supply with one of the new energy-efficient models carrying certifications like 80 PLUS and Energy Star.
"New power supplies may come in different versions -- Bronze, Silver, Gold and Platinum -- with correspondingly better efficiencies," Symanski notes. "Replacing an older power supply with a Platinum-level one can yield ten to fifteen percent energy savings -- and the power supply is an inexpensive part."
Crazy like a fox, or just crazy?
So far, everything you've read is available and being done, or at least being explored in test conditions. But why stop there when there's still room for further improvement? Here are a few blue-sky ideas...
I take full credit and/or blame for this idea. Why not put servers inside turbine wheels, and drop them -- tethered by fiber -- into the water? The water's motion on the turbine supplies power, and the water movement keeps the server cool. For maximum heat exchange (and to avoid buoyancy problems), use liquid-immersion cooling on the servers, like from Hardcore Computer. For extra credit -- being careful to put wire mesh screens around the servers -- farm salmon, clams and/or tilapia, since the water may be warmer than otherwise.
Speaking of location, with air-based power generation being developed, how about airborne data center modules, generating power and getting air-cooled without consuming ground footprint? (Granted, an easy target for air pirates armed with six-foot bolt cutters.) Or even larger ones in lighter-than-air dirigible housings?
My favorite suggestion comes from Perry R. Szarka, Solution Consultant III at system integrator MCPc, Inc.: "How about a combination micro-brewery and datacenter? The idea would be for the beer to participate in the datacenter cooling process somehow as it goes through the distillation/fermentation process. The microbrewery side could perhaps feature a bar where patrons could view the beer through large windows and clear glass tubing as it moves along through the system. I guess this could be the ultimate datacenter liquid cooling concept, at least for the datacenter administrators who I have met!"
What innovative data center practices are you seeing? Or do you have some blue-sky ideas of your own?