Data center planning frequently involves a delicate balancing act between accounting for evolving demands and avoiding excess spending. Cloud storage companies in particular face a unique challenge because their infrastructures are expected to be both efficient and highly scalable. Given the wave of adoption and the technology's continued momentum, it may be tempting to invest in significantly more resources than a company actually needs in order to be prepared for the next several years.
There are a few problems with this cloud hardware strategy, as David Appelbaum, vice president of marketing at performance management provider Sentilla, recently explained in a Data Center Knowledge article. The first challenge is gaining visibility into the current state of the data center, which he said is difficult given how quickly technology trends and customer demands can reshape operations.
“Today’s data center is more like the jumbo jet in the fog than the small plane,” Appelbaum wrote. “There are many moving parts. New technologies like virtualization add layers of abstraction to the data center. Applications – and especially data – grow exponentially. Power, space, and storage are expensive and finite, yet you need to keep all the business-critical applications running.”
As a result of this rapid expansion and growing complexity, cloud builders need a set of core metrics to guide data center planning. To yield the most benefit, these metrics must not only track performance but also reveal how all the moving parts within a cloud data center relate to one another. Appelbaum outlined several questions that can facilitate the creation of truly valuable metrics, including:
- What is the ongoing cost of a given application when compared to another?
- What is the data center’s total power capacity?
- How much of that capacity is utilized?
- Which applications are underutilizing their allocated resources?
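The questions above map naturally onto simple utilization calculations. As a minimal sketch of how an operator might track them, here is one way to express those metrics in Python (all capacity figures, application names, and thresholds below are hypothetical illustrations, not data from the article):

```python
# Minimal sketch of the utilization metrics described above.
# All figures are hypothetical examples, not survey data.

def utilization(used, capacity):
    """Return used capacity as a percentage of total capacity."""
    return 100.0 * used / capacity

# Hypothetical data center figures
total_power_kw = 500.0   # total power capacity (the "what is total capacity?" question)
used_power_kw = 380.0    # power currently drawn
total_racks = 120
occupied_racks = 90

power_pct = utilization(used_power_kw, total_power_kw)
rack_pct = utilization(occupied_racks, total_racks)

print(f"Power capacity utilized: {power_pct:.1f}%")  # 76.0%
print(f"Rack space utilized: {rack_pct:.1f}%")       # 75.0%

# Flag applications underutilizing their allocated resources,
# using an assumed 50% threshold: {app: (used GB, allocated GB)}
apps = {"billing": (32, 64), "search": (10, 64)}
underutilized = [name for name, (used, alloc) in apps.items()
                 if used / alloc < 0.5]
print("Underutilized:", underutilized)  # ['search']
```

Comparing the ongoing cost of one application against another would extend this same pattern, dividing each application's share of power, space, and storage spend by a common unit of work.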
Many businesses have already run into common roadblocks created by overprovisioning. Appelbaum highlighted a data center operator survey that found approximately 33 percent of respondents were already using the majority (70 percent or more) of their total rack space. Similar problems emerged within the storage ecosystem specifically, with 40 percent of operators using more than half of their total capacity. Many businesses have turned to the cloud to address this problem, placing the burden on cloud storage companies to handle this explosion of data. However, just as their customers should carefully plan to avoid overprovisioning, providers can benefit from taking stock of their cloud hardware and using current metrics as a framework for future investments.
Stronger SLAs in 2013
One thing cloud providers can expect is that customers will be looking for more robust service-level agreements over the course of the next 12 months. Analysts from cloud backup provider Symform suggested that outages from major cloud vendors have put issues such as reliability front of mind for many business decision makers. In addition, the average cost of downtime for all businesses was $5,600 per minute in 2012. In response, it is likely that customers will ask for clear promises regarding both reliability and transparency from cloud storage companies.
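That per-minute figure extrapolates quickly, which is why SLA guarantees matter. A small sketch of the arithmetic (the $5,600-per-minute rate is the article's 2012 average; the outage durations are hypothetical examples):

```python
# Downtime cost at the article's 2012 average rate of $5,600 per minute.
COST_PER_MINUTE = 5600

def downtime_cost(minutes):
    """Return total downtime cost in dollars for an outage of the given length."""
    return COST_PER_MINUTE * minutes

print(downtime_cost(1))    # 5600   -> one minute of downtime
print(downtime_cost(60))   # 336000 -> an hour-long outage
```

At that average rate, a single hour-long outage costs well over $300,000, making even fractions of a percentage point of promised uptime commercially significant.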
“As we enter 2013, the massive growth of digital data and the challenges around how to secure, manage and store that data is only increasing – in spite of the many technologies and cloud solutions aimed at alleviating these issues,” said Matthew J. Schiltz, CEO of Symform. “This year, we will see even greater awareness of the wasted, unused capacity in our existing infrastructure investments and the need to embrace distributed, decentralized systems. This will enable IT’s efforts to gain back control of rogue devices and cloud applications.”