The study highlights that nearly half (44 percent) of data on virtual systems is not regularly backed up, and only one in five respondents use replication and failover technologies to protect virtual environments. Respondents also indicated that 60 percent of virtualized servers are not covered in their current disaster recovery (DR) plans. This is up significantly from the 45 percent reported by respondents in 2009.
Inadequate Tools, Security, and Control
Using multiple tools to manage and protect applications and data in virtual environments causes major difficulties for data center managers. Nearly six in 10 respondents (58 percent) who encountered problems protecting mission-critical applications in virtual and physical environments reported this as a large challenge for their organization.
In terms of cloud computing, respondents reported that their organization runs approximately 50 percent of mission-critical applications in the cloud. Two-thirds of respondents (66 percent) report that security is their main concern about putting applications in the cloud. However, the biggest challenge respondents face when implementing cloud computing and storage is controlling failovers and making resources highly available (55 percent).
Resource and Storage Constraints Hamper Backup
Respondents state that 82 percent of backups occur only weekly or less frequently, rather than daily. Resource constraints, lack of storage capacity, and incomplete adoption of advanced and more efficient protection methods hamper rapid deployment of virtual environments. In particular:
- 59 percent of respondents identified resource constraints (people, budget, and space) as the top challenge when backing up virtual machines.
- Respondents state that a lack of available primary storage (57 percent) and backup storage (60 percent) hampers the protection of mission-critical data.
- Only 50 percent of respondents use advanced, clientless methods to reduce the impact of virtual machine backups.
The study showed that recovering from an outage takes more than twice as long as respondents expect. When asked what would happen if a significant disaster destroyed their organization's main data center, respondents indicated that:
- They expected it would take two hours to be up and running after an outage.
- This is an improvement from 2009, when they reported it would take four hours to be up and running after an outage.
- The median downtime per outage in the last 12 months was five hours, more than double the two-hour expectation.
- Organizations experienced on average four downtime incidents in the past 12 months.
When asked what caused their organization to experience downtime over the past five years, respondents reported that their outages stemmed mainly from system upgrades, power outages and failures, and cyberattacks. Specifically:
- 72 percent experienced an outage from system upgrades, resulting in 50.9 hours of downtime.
- 70 percent experienced an outage from power outages and failures, resulting in 11.3 hours of downtime.
- 63 percent experienced an outage from cyberattacks over the past 12 months, resulting in 52.7 hours of downtime.
“While organizations are adopting new technologies such as virtualization and the cloud to reduce costs and enhance disaster recovery efforts, they are currently adding more complexity to their environments and leaving mission critical applications and data unprotected,” said Dan Lamorena, director, storage and availability management group, Symantec. “We expect to see organizations adopt tools that provide a holistic solution with a consistent set of policies across all environments. Data center managers should simplify and standardize so they can focus on fundamental best practices that help reduce downtime.”