The Uptime Institute Symposium 2012: Digital Infrastructure Convergence was a 3+ day conference dedicated to data center topics. The event drew a mix of vendors, buyers, consultants, and analysts. This year the main themes for most sessions revolved around DCIM, modular data centers, and greening the data center. The event had an amazing number of concurrent sessions and I was only able to attend some of them, but here are some tidbits from those informative sessions.
DCIM and Data Center Operations
Among the presenters was our very own Dhesikan Ananchaperumal, CTO for our CA ecoSoftware solution. He was a panelist on the topic of tracking and analyzing data center operational data and shared his thoughts on the keys to successfully running a data center. One of his major points was that the integrated relationships between the data of different systems are what allow managers to easily troubleshoot issues when they arise. This often includes integration between IT and facility systems.
Andrew Stokes gave a great presentation on the Deutsche Bank Eco Data Center. Their data center, located in NYC, is a very impressive project with 2N power and 2N network infrastructure. They maintain a very low PUE (<1.4) and their IT load varies by less than 5%. Their tested acceptable temperature range was 58-85 degrees Fahrenheit, and their acceptable static pressure and humidity were 0.1-0.4 WC and 20-80% respectively. Those are surprisingly large ranges compared to historical data center operations guidelines and recommendations. The data from the Deutsche Bank Eco Data Center provides quantitative support for widening the temperature and humidity ranges for certain data center equipment, and it supports the recommendations in the ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) TC 9.9 2011 Thermal Guidelines for Data Processing Environments. In addition, Andrew recommended a holistic and systemic view of the data center as well as the enterprise: managers should look at total cost savings across IT load, fans, etc., and not just PUE.
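For readers less familiar with the metric, PUE is simply the ratio of total facility power to IT equipment power. The sketch below shows the arithmetic behind a sub-1.4 figure; the kilowatt numbers are illustrative assumptions of mine, not Deutsche Bank's actual loads.

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# The kilowatt figures below are illustrative assumptions, not measured data.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Return the PUE ratio; 1.0 is the theoretical ideal."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0    # assumed IT load (servers, storage, network)
overhead_kw = 350.0    # assumed cooling, fans, lighting, power losses
ratio = pue(it_load_kw + overhead_kw, it_load_kw)
print(f"PUE = {ratio:.2f}")  # 1.35, under the <1.4 figure cited in the talk
```

The takeaway from Andrew's "don't just chase PUE" point: shrinking the numerator (overhead) improves PUE, but shrinking the denominator (wasted IT load) can save even more money while making PUE look worse.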
Modular Data Centers
There were many presentations regarding modular data centers; unfortunately I was only able to catch a few. One that stood out was Dan Golding of RagingWire speaking about modularity in data centers. He suggested that modularity can be very beneficial for smaller data centers (under 5 MW), while a data center over 10 MW can lose economies of scale in the prefab modular model. He also warned that too much modularization can inhibit innovation.
Green Data Centers and Efficiency
Greening the data center was also a major theme, with organizations like Facebook, eBay, the Environmental Defense Fund, and Greenpeace speaking on aspects of their efforts to create and encourage energy efficiency in data center design and operations.
Jonathan Koomey from Stanford University and Ken Brill, the founder of the Uptime Institute, spoke about what's next in data center efficiency. They claim that no new technology is required for data centers to become more efficient: savings of up to 50% are already available if organizations do a few simple things.
- Organizations should focus on total costs and not allow departments to be in silos
- Organizations should make an effort to get rid of unutilized servers
- Organizations should monitor their servers' CPU utilization and take advantage of power saving settings in hardware systems
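On the third bullet, monitoring CPU utilization doesn't require specialized tooling to get started. As a minimal sketch (assuming a Linux host, since the `/proc/stat` interface is Linux-specific), you can sample the kernel's aggregate CPU counters twice and compute the busy percentage:

```python
# Minimal sketch of sampling CPU utilization on Linux via /proc/stat.
# Illustrative only -- no specific monitoring tool was named in the talk.
import time

def cpu_times() -> tuple[int, int]:
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait
    return idle, sum(fields)

def utilization(interval: float = 1.0) -> float:
    """Percent of CPU time spent busy over the sampling interval."""
    idle0, total0 = cpu_times()
    time.sleep(interval)
    idle1, total1 = cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return 100.0 * busy / (total1 - total0)

print(f"CPU utilization: {utilization(0.5):.1f}%")
```

Collected over weeks across a fleet, numbers like these are what expose the gap between assumed and actual utilization that Jonathan and Ken describe below.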
To expand on the second bullet, the easiest way to improve data center efficiency is to get rid of as many "comatose" servers as possible. Comatose servers are servers that are on but not running applications or providing any value for the business.
Benefits of removing unused servers include:
- Reduced kilowatts of IT capacity
- Reduced electricity operational expenses
- Reduced software license fees
- Reduced maintenance operational expenses
One example of the benefits of auditing server utility is AOL, which was able to get rid of 93,000 servers, 26% of its total, that weren't doing any useful work at all! The expectation is that 30% or more of all servers in any given organization are "comatose." Even more striking, according to The 451 Group's numbers, a server's power draw does not change significantly whether it is running applications or simply sitting idle. This means that removing unused servers would provide a large energy savings for the organization. According to 2010 IDC data, there were 32 million servers in the world; a 30% reduction in servers worldwide would be a huge reduction in power consumption.
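The scale of that worldwide figure can be sketched with back-of-the-envelope arithmetic. The server count and comatose share come from the numbers above; the 250 W average idle draw per server is my own illustrative assumption, not a figure from the talk.

```python
# Back-of-the-envelope estimate of power freed by decommissioning comatose
# servers. Server count (IDC 2010) and 30% comatose share come from the talk;
# the 250 W average idle draw per server is an illustrative assumption.

WORLD_SERVERS = 32_000_000   # IDC, 2010
COMATOSE_SHARE = 0.30        # estimated share of comatose servers
IDLE_WATTS = 250             # assumed average idle draw per server

comatose = WORLD_SERVERS * COMATOSE_SHARE
saved_megawatts = comatose * IDLE_WATTS / 1_000_000
print(f"{comatose:,.0f} comatose servers ~ {saved_megawatts:,.0f} MW of idle load")
```

Under those assumptions the result is on the order of a few thousand megawatts of continuously wasted load, before even counting the cooling overhead needed to remove that heat.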
In addition to removing unutilized servers, companies should also look at their CPU utilization numbers for efficiency gains. Total CPU utilization across all servers is typically estimated at 20-30%; Jonathan and Ken believe actual utilization is more like 1-2%. If organizations focus on distributing capacity and managing their servers effectively, there may be significant gains in server utilization efficiency, along with reduced power consumption and server growth needs. Another thing to keep in mind is that data storage will become a more pressing concern, as simply storing data will consume 10-30% of total power.
The speakers warned that if internal IT functions do not keep up with efficiency gains in the industry, mounting pressure from cloud providers, service providers, and others will start to compete away internal IT jobs. The CIO will transform from the "keeper" of systems to the "broker" of information services.
That's it for my wrap-up. For those who were not able to attend the Uptime Institute Symposium, I hope you find this wrap-up helpful, and for those who did attend, I hope you found the event as informational as I did.