Data center design is evolving as the cloud becomes less of a technological novelty and more of a specific way to scale and enhance business. At the same time, companies are integrating software-defined networking and storage technologies that enable greater flexibility in hardware, with the Open Compute Project (OCP) serving as the blueprint for a scalable yet interoperable cloud infrastructure.
Writing for ZDNet, Tim Lohman examined the different imperatives driving these changes in data center design. For example, some organizations may seek to lower energy costs or to cultivate partnerships. Handing off the data center to a managed services provider is one way to lessen a business's IT burden and let it focus on serving customers. On the other hand, hardware and software customization may be the better option in the long term, since it gives companies granular control over data center setup while likely lowering costs through the use of industry-standard hardware.
Using appliances built in accordance with OCP specifications, companies such as Facebook have been able to forgo proprietary tools and in turn create servers that require less cooling and maintenance. Speaking to Business Cloud News, Rackspace's Nigel Beighton focused on the benefits of OCP beyond cost effectiveness alone.
"People always jump to the conclusion that OCP is all about cheap hardware – it's not," argued Beighton. "This is about allowing people to scale quickly. And it's about sourcing. To be able to multisource as much as you can is very critical so your supply chain is not bottlenecked. To open source that means you have a much better way to multisource. And you've got a standard that people will build to which will allow people to connect clouds together."
On the software side, Beighton expressed optimism about the possibilities created by the OpenStack community. However, he asserted that storage hardware still needs to catch up to the advances made in software.