
Using newer VMs can reduce cloud costs

Better, faster, and more efficient chips drive down cloud operating costs and, in turn, prices, according to a study by IT infrastructure advisory group the Uptime Institute.

Cloud prices tend to drop with each new processor generation, with one notable exception, explained Owen Rogers, research director for cloud computing at the Uptime Institute, in a post this week.

The research tracked Amazon Web Services (AWS) prices across six generations of AMD and Intel processors and three generations of Nvidia GPUs, using data obtained from the cloud provider's Price List API. Rogers acknowledged AWS's Arm-compatible Graviton processors but did not include them in the analysis.

All testing was performed against AWS's US-East-1 region; however, Rogers notes that the results should be similar across all AWS regions.
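For readers who want to poke at the same data, here is a minimal sketch of pulling on-demand EC2 prices from the Price List API with boto3. The instance types and filter values are illustrative assumptions, not the Uptime Institute's actual methodology.

```python
# Sketch: fetch the on-demand Linux price for an EC2 instance type from the
# AWS Price List API via boto3. Instance types and filters are illustrative.
import json
import boto3

# The Price List API is only served from a handful of regions, e.g. us-east-1.
pricing = boto3.client("pricing", region_name="us-east-1")

def on_demand_price(instance_type: str) -> float:
    """Return the hourly on-demand USD price for a Linux, shared-tenancy instance."""
    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    product = json.loads(resp["PriceList"][0])
    on_demand = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(on_demand["priceDimensions"].values()))
    return float(dimension["pricePerUnit"]["USD"])

# Compare successive generations of a general-purpose family, for example:
for itype in ["m4.large", "m5.large", "m6i.large"]:
    print(itype, on_demand_price(itype))
```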

Of the eight AWS instance families tracked by Rogers, the majority saw a steady decline in customer prices with each subsequent processor generation. Prices for AWS's M family of general-purpose instances, for example, have dropped 50 percent from the first generation to today.

Some instances, especially AWS’s storage-optimized instances, saw even steeper price drops, which he attributed to other factors, including memory and storage.

It’s no surprise that processor performance in these instances tends to improve with each generation, Rogers noted, citing the various performance and efficiency benefits of architectural and process improvements.

For example, AMD's third-generation Epyc Milan processor family and Intel's Ice Lake Xeon Scalable processor family claim a 19-20 percent performance advantage over previous-generation chips. Both families are now available in a variety of AWS instances, including a storage-optimized instance announced last week.

“Users can expect greater processing speed with newer generations compared with older versions while paying less. The efficiency gap is even more substantial than pricing alone suggests,” he wrote, adding that this is clearly reflected in AWS's pricing.

In other words, while you might intuitively expect instances based on older processor technology to be cheaper, newer, more power-efficient instances are often priced lower to incentivize adoption.

“However, how much of the cost savings AWS passes on to its customers, and how much it retains as increased gross margin, remains hidden,” he wrote.

Some of this can be attributed to customer buying habits, particularly among customers who prioritize cost over performance. “Because of this pricing pressure, prices for virtual cloud instances are falling,” he wrote.

The GPU pricing anomaly

The exception to this rule is GPU instances, which have actually become more expensive with each generation, Rogers found.

His research tracked AWS g-series and p-series GPU-accelerated instances over three and four generations, respectively, and found that the rapid growth in total performance alongside the increase in demanding AI/ML workloads enabled cloud providers – and Nvidia – to raise prices.

“Customers are willing to pay more for new GPU instances if they provide value by being able to solve complex problems faster,” he wrote.

This can be attributed in part to the fact that, until recently, customers looking to deploy workloads to these instances had to do so on dedicated GPUs, rather than renting smaller virtual processing units. And while Rogers notes that customers, for the most part, prefer to run their workloads this way, that could change.

In recent years, Nvidia – which dominates the cloud GPU market – has, for its part, introduced features allowing customers to split GPUs into multiple independent virtual processing units using a technology called Multi-Instance GPU, or MIG for short. Launched alongside Nvidia's Ampere architecture in 2020, the technology allows customers to split each physical GPU into up to seven individually addressable instances.

And with the chipmaker's Hopper architecture and H100 GPUs, announced at GTC this spring, MIG gained per-instance isolation, I/O virtualization, and multitenancy, which opens the door to their use in confidential computing environments.
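To give a rough sense of the mechanism, MIG partitioning is driven from the host with nvidia-smi. The sketch below, which wraps the CLI from Python, assumes a MIG-capable card such as an A100 at GPU index 0 and uses the 1g.5gb profile as an example; it is an illustrative sketch under those assumptions, not a procedure taken from the article.

```python
# Sketch: partition an Nvidia GPU into MIG instances by driving nvidia-smi
# from Python. Assumes a MIG-capable card (e.g. A100) at index 0 and root
# privileges; the profile names are illustrative.
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return its stdout."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# 1. Enable MIG mode on GPU 0 (requires a GPU reset to take effect).
run("nvidia-smi -i 0 -mig 1")

# 2. List the GPU instance profiles the card supports (e.g. 1g.5gb ... 7g.40gb).
print(run("nvidia-smi mig -lgip"))

# 3. Create two 1g.5gb GPU instances plus their default compute instances (-C).
run("nvidia-smi mig -i 0 -cgi 1g.5gb,1g.5gb -C")

# 4. Each MIG slice now appears as its own addressable device with a MIG UUID,
#    which schedulers and CUDA_VISIBLE_DEVICES can target individually.
print(run("nvidia-smi -L"))
```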

Migratory migraines persist

Unfortunately for customers, taking advantage of these performance and cost benefits is not without risk. In most cases, workloads are not automatically migrated to newer, cheaper infrastructure, Rogers noted. Cloud subscribers should test their applications on newer virtual machine types before embarking on a mass migration.

“There may be unexpected interoperability issues or downtime during migration,” Rogers wrote, adding, “Just as users schedule server refreshes, they should incorporate virtual instance refreshes into their ongoing maintenance.”
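As a concrete illustration of such a virtual instance refresh, here is a minimal boto3 sketch that moves a stopped, EBS-backed instance onto a newer instance type. The instance ID and target type are placeholders, and in practice the move would follow the testing Rogers recommends.

```python
# Sketch: move an existing EBS-backed EC2 instance to a newer-generation
# instance type with boto3. Instance ID and target type are placeholders;
# test the workload on the new type before doing this in production.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
NEW_TYPE = "m6i.large"                # newer generation of the same family

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# Swap the instance type, then bring the instance back up.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID,
                              InstanceType={"Value": NEW_TYPE})
ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```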

By continuing to support older generations, cloud providers let customers upgrade at their own pace, Rogers said. “The vendor does not want to appear to force the user to migrate applications that may not be compatible with new server platforms.” ®