As cloud data-center infrastructure becomes increasingly complex, public cloud providers are partnering with chipset providers to effectively manage and optimize the workloads that are entering their data centers. Infrastructure Processing Units (IPUs) and Data Processing Units (DPUs) shift network, storage, and data virtualization workloads from the Central Processing Unit (CPU), freeing up the CPU for core application workloads. IPUs, DPUs, and SmartNICs will feature prominently in the modern data center as cloud service providers seek new ways of managing their data centers more efficiently.
Google Cloud Announces Its C3 Virtual Machine Powered by Intel Hardware and Software
At Google Cloud Next ’22, Google Cloud announced that its new C3 Virtual Machine (VM)—powered by Intel’s 4th Gen Xeon Scalable processors and a custom IPU—is in private preview and leverages the combined benefits of Intel’s hardware and software offerings. Intel and Google Cloud codesigned the E2000 IPU, a programmable networking chip that reduces networking overhead, freeing up the CPU to handle application processes. The E2000 IPU builds on lessons from Intel’s field-programmable gate array SmartNICs and houses a programmable packet processing engine as well as an open-source software design.
While Intel’s IPU offering is similar to the DPUs currently available in the market, Intel seems to be taking a different path, preferring to work with public cloud providers such as Google Cloud, while companies such as AMD, Nvidia, and Marvell look to offer application-specific or general-purpose DPUs to the broader market. This is evidenced by the E2000 IPU, which is tailored to Google Cloud’s hardware and environment specifications, whereas AMD’s Pensando DPU and Nvidia’s BlueField-3 DPU are readily available for most public cloud data centers.
Partnerships Between Chipset and Public Cloud Providers Important to the Evolution of Data Centers
The partnership between Intel and Google Cloud advances the conversation around IPU integration in data centers. Public cloud providers such as AWS, Google Cloud, and Microsoft Azure are increasingly looking at ways to optimize the flow of workloads into data centers, improve data-center utilization, and provide microservices. Early C3 VM users have reported notable gains: the social media company Snap saw an increase of up to 20% in workload performance over the previous-generation C2 VMs, while Ansys, an engineering software provider, saw up to a three-times performance gain due to higher memory bandwidth and lower network latency.
Another interesting partnership is the collaboration between AMD and VMware, with AMD’s Pensando Distributed Services Card being one of the first DPU solutions to support VMware vSphere 8. The vSphere Distributed Services engine and the Pensando DPU aim to unify workloads, improve performance by freeing up CPU resources, and provide a layer of security by isolating infrastructure services from server workloads. AMD and VMware are also working with infrastructure vendors such as Dell Technologies and HPE to provide integrated data-center solutions across networking, security, and storage environments.
These partnerships underline the growing demand for specialized chips in data centers. Data from modern and industry-specific applications are increasingly flowing through data-center infrastructure, forcing data-center providers to provide better performance and higher efficiency. Intelligent accelerators such as IPUs and DPUs provide better processing capabilities for network, storage, and security tasks. The IPU and DPU offerings also represent further diversification by Intel, AMD, and Nvidia from CPUs and graphics processing units, an area that has been affected by recent market and economic challenges.
Taking the Ecosystem Approach
Intel has placed big bets on its IPU road map, starting with its partnership with Google Cloud. Other chipset players such as Marvell, Nvidia, and AMD have introduced DPUs to help off-load data processes for more efficient data storage. Chipset players have realized that a CPU alone is no longer enough to handle the demanding and increasingly specialized workloads running through public cloud data centers. Public cloud providers, in turn, are looking for ways to better optimize customer workloads and manage their infrastructure platforms efficiently.
Running a data center is anything but simple, with various moving parts and stakeholders involved in the day-to-day operations. However, it will be crucial for public cloud providers to take an ecosystem approach toward building a modern data center and establishing relationships with chipset providers, server vendors, networking infrastructure players, and software application vendors. The ability to integrate these crucial parts together will be a competitive advantage, especially when it comes to customers that have specific requirements in terms of application workloads, network latency, and bandwidth.
Partnerships between chipset and public cloud providers will continue beyond the IPU and DPU conversation. As more data- and processing-intensive workloads flow through public data-center infrastructure, public cloud providers will have to look at optimization. Because chipsets are the key components that provide computing power to the software and hardware within a data center, chipset providers will need to be at the center of this conversation. The future of public data centers will be to provide not only high uptime but also speed and quality in data processing for highly specialized workloads.