The Standard Performance Evaluation Corp. (SPEC) has released SPEC Cloud IaaS 2016, new benchmark software that measures the performance of infrastructure-as-a-service (IaaS) cloud implementations.
The SPEC Cloud IaaS 2016 benchmark measures scalability, elasticity, and provisioning capabilities of IaaS cloud implementations, both public and private. Cloud providers, users of cloud services, and hardware, software and management platform vendors can use the benchmark to evaluate how different implementations affect cloud performance.
Realistic, Flexible Workloads
The SPEC Cloud IaaS framework includes two multi-instance workloads. The first, based on YCSB (Yahoo! Cloud Serving Benchmark), uses the Cassandra NoSQL database to store and retrieve data in a pattern representative of social media applications; the second represents big data analytics, running a K-Means clustering workload on Hadoop. Together, the workloads are designed to stress the compute, network, storage, and API performance of an IaaS cloud.
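To give a feel for the kind of computation the analytics workload exercises, here is a minimal, self-contained K-Means sketch in Python. It is purely illustrative: the benchmark's actual workload runs K-Means at scale on Hadoop, and the data, seeds, and parameters below are hypothetical.

```python
import random

def kmeans(points, k, iters=10, seed=42):
    """Minimal Lloyd's-algorithm K-Means over 2-D points (illustrative only)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: move each centroid to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids

# Two well-separated synthetic blobs; centroids should converge near each blob.
random.seed(0)
pts = ([(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
       + [(random.gauss(10, 0.5), random.gauss(10, 0.5)) for _ in range(50)])
centers = sorted(kmeans(pts, 2))
```

In the benchmark, the same assignment/update iteration runs as a distributed Hadoop job, which is what makes it a good stressor of compute and network at once.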
The SPEC Cloud IaaS 2016 benchmark reports performance based on three primary metrics:
- Scalability — the total amount of work performed by application instances running in a cloud.
- Elasticity — how closely the work performed by application instances scales linearly as instances are added.
- Mean instance provisioning time — the average time from the initial provisioning request until an instance is ready to accept connections.
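SPEC defines these metrics precisely in its run rules; as a rough illustration only, the sketch below computes a mean provisioning time and a simple linearity-based elasticity ratio from hypothetical data. All timestamps, throughput numbers, and instance counts are made up for the example.

```python
from datetime import datetime

# Hypothetical provisioning log: (request time, ready-to-accept-connections time).
runs = [
    (datetime(2016, 5, 3, 10, 0, 0), datetime(2016, 5, 3, 10, 1, 30)),
    (datetime(2016, 5, 3, 10, 0, 5), datetime(2016, 5, 3, 10, 2, 0)),
    (datetime(2016, 5, 3, 10, 0, 10), datetime(2016, 5, 3, 10, 1, 55)),
]

# Mean instance provisioning time: average of (ready - request) in seconds.
mean_prov = sum((ready - req).total_seconds() for req, ready in runs) / len(runs)

# Simplified elasticity check: compare measured throughput at N application
# instances against linear extrapolation from the single-instance baseline.
baseline_ops = 1000.0                           # ops/s with one instance (hypothetical)
measured = {1: 1000.0, 2: 1940.0, 4: 3650.0}    # hypothetical measured ops/s
elasticity = {n: ops / (n * baseline_ops) for n, ops in measured.items()}
```

An elasticity ratio near 1.0 at every instance count would indicate near-linear scaling; the published metric aggregates this kind of comparison across both workloads.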
The SPEC Cloud IaaS 2016 benchmark gives users the flexibility to configure the IaaS cloud under test using various combinations of physical nodes, virtual machines, and/or containers with appropriate storage and networking. The benchmark also supports multi-tenancy.
Fulfilling A Need
Global spending on IaaS was expected to reach nearly US$16.5 billion in 2015, a 32.8% increase over 2014, with a compound annual growth rate (CAGR) of 29.1% forecast from 2014 through 2019, according to Gartner.
"The rapidly growing market calls for a benchmark that can be used to perform meaningful, repeatable and comparable measurement of cloud performance as seen by a consumer," says Salman A. Baset, chair of SPEC's cloud subcommittee. "The benchmark is developed by a SPEC consortium, which means that it benefits from a wide range of input and doesn't favor one vendor's interests over another's."
SPEC members participating in the cloud subcommittee include Dell, Digital Ocean, IBM, Intel, Red Hat and VMware. Long-time SPEC benchmark developer Yun-seng Chao is a supporting contributor. Additional contributions were made by Amazon, AMD, Google, Lenovo, and Oracle.