Optimizing Your Cloud for Maximum Success Part 2: Performance

May 30, 2013
Many people believe that cloud platform performance can’t match that of dedicated hardware. But in addition to benefits like mobility and cost savings, an optimized cloud can provide excellent speed, scalability and reliability. As applications compete for shared system and network resources, performance becomes as critical an element as security in managing a successful virtual infrastructure.
Intersection with Security and Compliance
Security and compliance go hand-in-hand in virtual infrastructures. Transmitting, storing or processing data in the cloud, be it sensitive healthcare data or financial transactions, means that complying with HIPAA and PCI DSS is critical – both to avoid audit headaches and to keep the data secure. Yet both rely on a high-performing infrastructure to ensure that sensitive data stays protected while remaining immediately accessible.
Critical security devices and logging mechanisms are dependent on the performance of the underlying infrastructure. These services need to maintain real-time operation to be effective in protecting and monitoring sensitive data.
Optimal Performance: What It Takes
All businesses prioritize specific needs from their cloud environments. Growing businesses require scaling technologies so they can pay for as much or as little usage as they need, while businesses with high continuity demands will require uptime assurance. Some companies want to accelerate response times for data management challenges and high-demand workloads – and everyone wants to protect critical digital assets like website properties and SaaS applications.
- For processing power that stands up to dedicated and other IaaS methods like private, hybrid or commodity public cloud, best-in-class technology is critical to improve the overall performance of the cloud ecosystem. Use hardware specifically designed for high-performance virtualization workloads along with a robust high performance hypervisor.
- At the host level, use the latest processors with hardware-assisted virtualization to reduce hypervisor overhead when virtualizing CPU instructions and memory pages; hardware-assisted memory management and high-speed memory (around 1600 MHz) speed up guest workloads significantly. Hyper-threading is also important and can provide a significant performance increase for hypervisor workloads by improving thread concurrency.
- For storage, it’s important to understand that virtualization creates blended I/O workloads, and storage is often the bottleneck to virtualization performance. Many workloads are sensitive to the latency of I/O operations, so build the back-end storage configuration to satisfy random I/O from multiple high-demand workloads.
- Fibre Channel is purpose-built to deliver low-latency storage connectivity and is more efficient than storage over TCP/IP, which incurs significant protocol overhead; 8 Gb or 16 Gb FC connectivity is recommended. For the fastest response times, SSDs offer the best performance in a virtualized environment. Leveraging SSD caching within the storage array delivers high IOPS and processes blended I/O requests more efficiently than spinning disks.
- Finally, utilize scalable lock management to reduce locking-related overhead within the hypervisor’s clustered file system. ATS (“atomic test-and-set”) is recommended to alleviate performance issues related to disk contention among multiple virtual machines.
- Be sure to measure server, memory, processor and storage performance with a cloud monitoring tool.
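On Linux hosts, the hardware-assist features mentioned above show up as CPU feature flags. A minimal Python sketch that parses `/proc/cpuinfo`-style text (the feature map here is an illustrative subset, not an exhaustive list):

```python
# Hypothetical helper: given the text of /proc/cpuinfo (Linux), report which
# hardware-assisted virtualization features the host CPU advertises.
# "vmx" = Intel VT-x, "svm" = AMD-V, "ept" = Intel extended page tables
# (hardware-assisted memory management).
FEATURES = {"vmx": "Intel VT-x", "svm": "AMD-V", "ept": "Intel EPT"}

def virt_features(cpuinfo_text: str) -> dict:
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {name: flag in flags for flag, name in FEATURES.items()}

# Example with a trimmed flags line from an Intel host:
sample = "flags\t\t: fpu vme msr vmx ept ssse3 sse4_1"
print(virt_features(sample))  # VT-x and EPT present, AMD-V absent
```

On a real host you would feed it `open("/proc/cpuinfo").read()` instead of the sample string.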
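The latency sensitivity of random I/O noted above can be illustrated with a toy micro-benchmark. This is only a sketch against a scratch file in the page cache; a real storage benchmark would use direct I/O and a purpose-built tool such as fio:

```python
import os
import random
import statistics
import tempfile
import time

# Minimal sketch: time small random reads against a scratch file to get a
# feel for per-operation latency, the metric that blended VM workloads
# are most sensitive to.
BLOCK = 4096          # 4 KiB reads, typical of random VM I/O
FILE_SIZE = 4 << 20   # 4 MiB scratch file

def random_read_latencies(path: str, samples: int = 100) -> list:
    latencies = []
    blocks = FILE_SIZE // BLOCK
    with open(path, "rb") as f:
        for _ in range(samples):
            offset = random.randrange(blocks) * BLOCK
            start = time.perf_counter()
            f.seek(offset)
            f.read(BLOCK)
            latencies.append(time.perf_counter() - start)
    return latencies

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(FILE_SIZE))
lat = random_read_latencies(tmp.name)
print(f"median read latency: {statistics.median(lat) * 1e6:.1f} µs")
os.unlink(tmp.name)
```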
Scalability and Reliability
- To maintain quality of service, compartmentalize your storage and compute resources rather than taking a monolithic approach. Not only does this increase your control over performance, it also creates more fault domains – “blast radii” – that limit the impact of equipment failure.
- Leverage real-time dynamic resource scheduling to balance resource utilization, scale workloads and eliminate hot spots in the cloud infrastructure.
- Maintain spare resource capacity. Instead of overcommitting resources, provide a buffer for additional scaling or failover. Most hypervisors have built-in countermeasures that take action when a host is oversubscribed, and those countermeasures can themselves lead to erratic performance.
- Also important: always use technology that is redundant or fault-tolerant for protection from hardware and software related failures.
- Finally, take full advantage of high availability features within the hypervisor to minimize downtime and protect critical services.
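The spare-capacity guidance above can be expressed as a simple admission check. This is a hypothetical sketch, and the 20% headroom figure is an illustrative assumption, not a recommendation from this article:

```python
# Hypothetical admission check: only place a new VM on a host if doing so
# keeps a headroom buffer (here 20% of capacity) free for scaling/failover.
def can_place(host_used_gb: float, host_capacity_gb: float,
              vm_gb: float, headroom: float = 0.20) -> bool:
    usable = host_capacity_gb * (1.0 - headroom)
    return host_used_gb + vm_gb <= usable

# A 256 GB host with 20% headroom has 204.8 GB usable:
print(can_place(180, 256, 16))  # True:  180 + 16 = 196 <= 204.8
print(can_place(180, 256, 32))  # False: 180 + 32 = 212 >  204.8
```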
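Dynamic resource scheduling can be sketched as a greedy loop that migrates workloads off the busiest host until utilization evens out. This toy model ignores what production schedulers must weigh – migration cost, affinity rules, failover constraints – and the names and threshold are assumptions for illustration:

```python
# Toy sketch of dynamic resource scheduling: repeatedly move the smallest
# workload from the busiest host to the least-busy host until the spread
# in total load falls under a threshold (arbitrary units).
def rebalance(hosts: dict, max_spread: float = 10.0) -> dict:
    """hosts maps host name -> list of workload loads."""
    def load(h):
        return sum(hosts[h])
    while True:
        hot = max(hosts, key=load)
        cold = min(hosts, key=load)
        if load(hot) - load(cold) <= max_spread or not hosts[hot]:
            return hosts
        vm = min(hosts[hot])              # cheapest workload to move
        if load(cold) + vm >= load(hot):  # migration wouldn't help
            return hosts
        hosts[hot].remove(vm)
        hosts[cold].append(vm)

balanced = rebalance({"h1": [30, 20, 10], "h2": [5]})
print({h: sum(v) for h, v in balanced.items()})  # {'h1': 30, 'h2': 35}
```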
Attaining Peak Performance in the Cloud
A high-performing cloud can deliver cost savings, flexibility and convenience. Optimizing your cloud with technologies that improve performance and increase capacity can improve user experience while underscoring security and compliance.
Todd Gleason, Director of Innovation
As the director of innovation at FireHost, Todd Gleason is responsible for driving key new offerings from FireHost’s secure cloud hosting, and ensuring the company always stays one step ahead in the market. He oversees research and development of new initiatives that will keep customers safe, compliant, and at top performance all the time.