Dell PowerEdge R760xa Server with Red Hat AI Drives Generative AI Workloads

Dell Technologies and IBM subsidiary Red Hat have teamed up to deliver a cutting-edge solution for businesses embracing generative AI. The Dell PowerEdge R760xa server, now packaged with Red Hat Enterprise Linux AI (RHEL AI), is designed to handle the computational demands of running large language models (LLMs).

Key Features of the Dell PowerEdge R760xa Server

The 2U PowerEdge R760xa is equipped with up to two 5th Gen Intel Xeon Scalable processors (up to 64 cores each), PCIe Gen 5 interconnects, and support for a wide range of Nvidia GPUs, including the A10, A30, A40, L4, L40, L40S, A100, and H100. The server offers substantial storage capacity with 8 x NVMe or 6 x SATA drive bays, ensuring high-performance data access and processing. The R760xa is also validated for Nvidia's Omniverse OVX 3.0 platform, strengthening its AI credentials as part of Dell's AI Factory solution.

RHEL AI builds upon the standard Red Hat Enterprise Linux platform, incorporating advanced AI features that optimize the PowerEdge server for generative AI workloads across hybrid cloud environments.

A Collaborative Approach to GenAI with Red Hat and Dell

By combining their expertise, Red Hat and Dell Technologies are bringing a validated, trusted solution to market. Joe Fernandes, VP and GM of Red Hat’s Generative AI Foundation Platforms, emphasized the value of the collaboration:

“By collaborating with Dell Technologies to validate and empower RHEL AI on Dell PowerEdge servers, we are enabling customers with greater confidence and flexibility to harness the power of GenAI workloads across hybrid cloud environments and propel their business into the future.”

The validation of RHEL AI on Dell PowerEdge servers ensures that customers can trust the performance of these AI-optimized platforms. Arun Narayanan, Dell Technologies’ Senior VP, reinforced this sentiment:

“Validating RHEL AI for AI workloads on Dell PowerEdge servers provides customers with greater confidence that the servers, GPUs, and foundational platforms are tested and validated on an ongoing basis. This simplifies the GenAI user experience and accelerates the process to build and deploy critical AI workloads on a trusted software stack.”

Key AI Features in RHEL AI and OpenShift AI

RHEL AI, integrated with Red Hat OpenShift AI, offers two standout features for AI deployments:

  1. IBM Research’s Granite LLM Family: This open-source family of large language models forms the foundation of RHEL AI’s generative AI capabilities.

  2. InstructLab Tools: Based on the LAB (Large-scale Alignment for chatBots) methodology, these tools streamline model alignment, allowing LLMs to be improved with less human effort and compute. The InstructLab project, a community-driven initiative from IBM and Red Hat, supports continuous improvement of LLMs through upstream contributions.

These features are central to Red Hat OpenShift AI, a machine learning operations (MLOps) platform designed for running models and scaling AI deployments across distributed Kubernetes clusters.

Conclusion: Simplifying AI Deployment for Enterprises

The partnership between Dell and Red Hat brings together advanced hardware and optimized AI software to simplify AI deployment for enterprises. Paired with RHEL AI, the Dell PowerEdge R760xa provides the performance, scalability, and flexibility required to run generative AI models at scale, making it a strong fit for businesses looking to harness the potential of AI without compromising reliability or cost-efficiency.
