The demands placed on the world's data centers are accelerating as information volumes grow and more of that data moves into cloud storage. Most operators realize they need a comprehensive strategy that accounts for the next several years when deciding on new technology. Effectively reducing total cost of ownership means more than simply buying the most affordable hardware: It means making strategic choices that will lower costs in the long run.
One of the primary contributors to data center TCO is power usage, and technology in recent years has become increasingly energy-hungry. An August report from Lawrence Berkeley National Laboratory noted that peak data center power usage reached close to 500 megawatts in 2011, but found that hardware owners could bring down energy consumption dramatically by changing specific IT practices and implementing efficient technology. The study identified several load balancing and management strategies that have minimal impact on day-to-day operations and could reduce data center energy demand by as much as 25 percent.
Researchers identified a set of demand response (DR) action items, which are intended to reduce overall power usage in response to the supply, demand and cost of electricity. One of the key components of an effective DR plan is automating the response based on a set of contingencies. For example, if electricity costs spike in an area, an operator may elect to shift hardware to DR mode.
Automated load balancing proved an effective way to reduce the power draw of both storage and computing systems. For computing systems, the team established predefined thresholds that would trigger a DR event and prevent some new jobs from starting for its duration. This way, critical operations continued uninterrupted, and new jobs resumed once the DR event ended.
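The triggering-and-gating approach described above can be sketched in a few lines of Python. The report does not publish its actual thresholds or event durations, so the price trigger, the one-hour window, and all class and method names here are illustrative assumptions, not the researchers' implementation:

```python
import time
from collections import deque

# Hypothetical values -- the report does not specify real thresholds.
PRICE_THRESHOLD = 0.15   # $/kWh: a price spike above this triggers a DR event
DR_EVENT_SECONDS = 3600  # assumed one-hour DR event window


class DemandResponseScheduler:
    """Sketch of the strategy: while a DR event is active, hold new
    batch jobs in a queue but let critical work keep running."""

    def __init__(self):
        self.dr_active_until = 0.0
        self.deferred_jobs = deque()

    def check_price(self, price_per_kwh, now=None):
        """Enter DR mode when the electricity price crosses the threshold."""
        now = time.time() if now is None else now
        if price_per_kwh > PRICE_THRESHOLD:
            self.dr_active_until = now + DR_EVENT_SECONDS

    def dr_active(self, now=None):
        now = time.time() if now is None else now
        return now < self.dr_active_until

    def submit(self, job, critical=False, now=None):
        """Critical jobs always start; non-critical jobs are deferred
        for the duration of the DR event."""
        if critical or not self.dr_active(now):
            return f"started {job}"
        self.deferred_jobs.append(job)
        return f"deferred {job}"

    def resume_deferred(self, now=None):
        """Once the event window closes, release the held jobs."""
        started = []
        while not self.dr_active(now) and self.deferred_jobs:
            started.append(self.deferred_jobs.popleft())
        return started
```

In practice the price check would be driven by a utility's pricing feed and the scheduler would sit in front of the batch queue; the sketch only shows the gating logic itself.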
"Load on the storage clusters can be reduced by rescheduling tape and data backup jobs to a time outside the DR event window," the researchers explained, expanding on their strategy for reducing storage energy consumption. "This is a classic application of the load shift strategy. The revised job schedule will free storage hard drives, filer heads, and other resources, which can be idled to reduce the system demand. By gracefully turning off the storage shelves and filer heads, significant energy can be reduced."
Load balancing and power consumption
Load balancing already plays a critical role in ensuring uptime, but it can also be used to improve efficiency. In a recent blog post, Thomas Parent of Rackspace emphasized the importance of further energy efficiency innovation to reduce the large amounts of power data centers consume – about 1.3 percent of all energy used worldwide. He noted that virtualization and load balancing have allowed for greater hardware utilization, but that more needs to be done on the non-technical side.
Parent suggested that collaboration across the technology industry could produce initiatives similar to Facebook's Open Compute Project, which simply puts the open source philosophy to work at the hardware level. By opening up the design and deployment frameworks of its data center, Facebook encouraged collaboration and efficiency improvements. Moving forward, Parent predicted, creating this kind of collaborative environment may yield greater technological innovations and further reduce data center TCO.
"Of course, improvements in server technology and facilities management will continue at a blistering pace," Parent wrote. "One newer development is the use of modular systems which allow providers to more efficiently size the facility to meet current demand. Virtualization has only just begun and will generate even more savings in the coming years."