Top 10 customer FAQ about Red Hat OpenShift AI

Introduction

This FAQ answers the 10 most common questions about Red Hat® OpenShift® AI.

What is Red Hat OpenShift AI?

Answer: Red Hat OpenShift AI is a platform for building, training, tuning, deploying, and monitoring AI-enabled applications, along with predictive and foundation models, at scale across hybrid cloud environments. OpenShift AI helps accelerate AI innovation projects, promote operational consistency, and optimize access to resources when integrating trusted AI solutions.

OpenShift AI builds on top of Red Hat OpenShift to deliver a consistent, streamlined, and automated experience when handling the workload and performance demands of enterprise AI projects. Its machine learning operations (MLOps) capabilities can help organizations control and automate their AI workloads to deliver AI-enabled applications into production more quickly.

How is OpenShift AI related to Red Hat AI?

Answer: Red Hat AI is a portfolio of products that includes Red Hat AI Inference Server, Red Hat Enterprise Linux AI, and OpenShift AI. Together, Red Hat AI functions as a platform that accelerates AI innovation and reduces the operational cost of developing and delivering AI solutions across hybrid cloud environments. It helps reduce costs with optimized models and efficient inference, simplifies integration with private data, and accelerates delivery of agentic AI workflows with a scalable, flexible platform.

What advantages does Red Hat OpenShift AI offer?

Answer: OpenShift AI offers 3 key benefits: optimized efficiency at scale, reduced operational complexity, and added hybrid cloud flexibility.

  • Optimized efficiency at scale.
    OpenShift AI handles the most demanding AI workloads while reducing costs. It does this by providing access to smaller, preoptimized models that cost less to train, tune, and run. Additionally, OpenShift AI helps manage the costs of model inferencing by providing optimized serving engines, such as vLLM, and scaling the underlying infrastructure as the workload demands.
  • Reduced operational complexity. 
    OpenShift AI simplifies the fine-tuning of models with enterprise data to provide efficient, high-performance models. To assist in putting these models into operation, OpenShift AI provides advanced AI tooling to automate deployments and manage the lifecycle. Gain efficiency and reduce operational complexity by more effectively managing AI accelerators, graphics processing units (GPUs), and workload resources across a scalable clustered environment. Allow your data practitioners to self-serve and scale their model training and serving environments as needed by their gen AI or predictive AI workloads—helping customers run models, model tooling, and model applications all on the same platform.
  • Added hybrid cloud flexibility. 
    OpenShift AI provides the ability to train, deploy, and monitor AI/ML workloads in a cloud environment, in on-premises data centers, or at the network edge close to where data is generated or located. This flexibility allows AI strategies to evolve, moving operations to the cloud or to the edge as the business requires. Organizations can train and deploy models and AI-enabled applications wherever they need to meet relevant regulatory, security, and data requirements, including air-gapped and disconnected environments.

Does OpenShift AI require a Red Hat OpenShift license and implementation underneath it?

Answer:  Yes, OpenShift AI is a software product or service layered on top of Red Hat OpenShift. It is offered as a traditional software product add-on to Red Hat OpenShift Container Platform or as a managed cloud service add-on to Red Hat OpenShift Service on AWS or Red Hat OpenShift Dedicated. 

What is the cost of Red Hat OpenShift AI?

Answer:  Pricing for OpenShift AI follows the OpenShift Container Platform model, with core-based and bare-metal SKUs available for Standard and Premium support. 

You must purchase Red Hat OpenShift separately.  

Self-managed version of OpenShift AI: Red Hat patterns the pricing after the OpenShift Container Platform pricing. You must have either OpenShift Container Platform or Red Hat OpenShift Platform Plus before installing OpenShift AI. Options for Standard support and Premium support are available on both core-based and bare-metal SKUs. These can be purchased in yearly increments (at a marginal discount) or smaller increments for Hybrid Committed Spend deals. 

Self-managed customers only pay for the cluster units used by OpenShift AI.

  • For example, if a customer has an existing OpenShift Container Platform cluster and wants to add 2 bare-metal nodes for OpenShift AI, the customer would only need to purchase an OpenShift AI subscription to cover the 2 additional nodes for OpenShift AI use.
  • If a customer's usage exceeds the subscription capacity, the customer is responsible for self-reporting the additional use and purchasing additional SKUs or units to cover it.

For a large worker node, only the portion used by OpenShift AI needs to be covered by a subscription.

  • For example, if a worker node with 64 vCPUs uses only 32 for OpenShift AI, only those 32 vCPUs need a subscription.
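As a rough illustration of the subscription arithmetic above, the following Python sketch (illustrative only, not Red Hat's official pricing tooling; the helper name is ours) totals the vCPUs that would need an OpenShift AI subscription when only part of each worker node runs OpenShift AI workloads:

```python
# Illustrative only: not Red Hat's official pricing tooling.
# Each node is described by (total_vcpus, vcpus_used_for_openshift_ai);
# only the portion actually used by OpenShift AI needs a subscription.

def vcpus_needing_subscription(nodes):
    total = 0
    for total_vcpus, used_for_ai in nodes:
        # A node cannot use more vCPUs for OpenShift AI than it has.
        total += min(used_for_ai, total_vcpus)
    return total

# The worked example from the text: a 64-vCPU worker node where only
# 32 vCPUs are used for OpenShift AI.
print(vcpus_needing_subscription([(64, 32)]))  # -> 32
```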

Managed version of OpenShift AI: Red Hat offers this version as an add-on service on top of Red Hat OpenShift Service on AWS and OpenShift Dedicated on AWS and Google Cloud Platform (GCP). Red Hat OpenShift Service on AWS, OpenShift Dedicated, and the underlying AWS infrastructure must be purchased separately.

Managed OpenShift AI yearly SKU pricing is based on total vCPUs for all worker nodes in the cluster and is priced based on $/vCPU/year. Consumption-based pricing of $/vCPU/hour (vCPU-hours for all cluster worker nodes) is also offered as a SKU. 
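To make the two managed pricing models concrete, here is a minimal sketch comparing a yearly $/vCPU/year subscription with consumption-based $/vCPU/hour billing. The rates are placeholders chosen for illustration, not actual Red Hat prices:

```python
# Placeholder rates for illustration only -- not actual Red Hat prices.
YEARLY_RATE_PER_VCPU = 1000.00  # hypothetical $/vCPU/year
HOURLY_RATE_PER_VCPU = 0.25     # hypothetical $/vCPU/hour

def yearly_cost(total_worker_vcpus):
    """Yearly SKU: priced on total vCPUs across all worker nodes."""
    return total_worker_vcpus * YEARLY_RATE_PER_VCPU

def consumption_cost(vcpu_hours):
    """Consumption SKU: priced on vCPU-hours across all worker nodes."""
    return vcpu_hours * HOURLY_RATE_PER_VCPU

# A 32-vCPU cluster running around the clock for a year (8,760 hours):
print(yearly_cost(32))              # -> 32000.0
print(consumption_cost(32 * 8760))  # -> 70080.0
```

With these made-up rates, a cluster that runs continuously favors the yearly SKU, while a cluster that runs only part of the time may favor consumption billing; the crossover depends entirely on the actual rates on the SKU.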

How is OpenShift AI differentiated from other AI platforms?

Answer: Built on open source AI community projects, OpenShift AI offers transparency and reliable support. Red Hat has shown its commitment to open source AI through its history of contributing to projects and technologies like Jupyter, PyTorch, Kubeflow, KServe, vLLM, and TrustyAI.

OpenShift AI provides a complete open source AI platform on which customers can build their AI models and application services. Built on top of Red Hat OpenShift, OpenShift AI allows customers to use the reliable and trusted application platform services that Red Hat OpenShift has become well-known for. This allows OpenShift AI to be deployed wherever customers need it—closer to their data or extended to the edge—for a fully supported hybrid AI platform.

Is OpenShift AI only for gen AI?

Answer:  No. OpenShift AI covers the entire range of possible AI/ML projects, including both traditional and gen AI. 

How can customers try Red Hat OpenShift AI?

Answer:  Customers can try OpenShift AI at no cost in the Developer Sandbox. A dedicated 60-day trial is also available for customers to try in their own cluster.

Does OpenShift AI include any gen AI models with it?

Answer: OpenShift AI provides a model catalog with prevalidated and optimized models as a tech preview feature. The model catalog includes a subset of the third-party validated and optimized AI models published on the Red Hat AI repository on Hugging Face. The model catalog is a read-only version of these common gen AI models, but users can modify these models and manage them through the integrated model registry.

What are the core components of Red Hat OpenShift AI?

Answer: The core components work together on the underlying Red Hat OpenShift platform to provide a complete and integrated environment for building, deploying, and managing AI-powered applications. Some of the core components include:

  • OpenShift AI dashboard: A user-friendly interface that provides a clear view of applications, available resources, and administrative functionalities.
  • Data science workbench and JupyterLab: An interactive environment for data scientists to develop and experiment with models.
  • Popular AI/ML frameworks and libraries: OpenShift AI supports widely used frameworks such as TensorFlow, PyTorch, and scikit-learn, along with other frameworks.
  • Kubeflow components: OpenShift AI integrates key components from Kubeflow, an open source framework for simplifying AI/ML workflow deployment at scale. These include the Notebook controller, model serving (KServe), and data science pipelines.
  • Hardware acceleration integration: OpenShift AI is designed to work with specialized hardware for accelerating AI/ML workloads, including NVIDIA GPUs, Intel XPUs (e.g., Intel Gaudi AI accelerators), and AMD GPUs.
  • Model serving: OpenShift AI integrates multiple model serving engines and runtimes including the AI Inference Server (powered by an optimized vLLM), OpenVINO, and NVIDIA NIM (validated).
  • Model fine-tuning and RAG: Distributed InstructLab fine-tuning capabilities (tech preview), vector database embeddings via integrated partners, and LoRA/QLoRA are provided.
  • Model monitoring and evaluation: OpenShift AI provides tools for centralized monitoring, drift detection, bias detection, and AI guardrails. LMEval is an evaluation framework that helps determine whether your LLMs are performing correctly.
  • Distributed workloads components (CodeFlare, Ray, Kueue): Allow data scientists to use multiple nodes in parallel to train or serve ML models.
  • Model registry, model catalog, and feature store (all tech preview): Provides access to prevalidated and optimized models and helps manage and govern customized models and data features.
  • Third-party technology partner integration: The dashboard provides access to complementary technologies from partners such as NVIDIA, AMD, Intel, Starburst, Hewlett Packard Enterprise (formerly Pachyderm), Elastic, and EDB.
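As a small illustration of how an application might talk to a model served on the platform, the sketch below builds a request body for an OpenAI-compatible chat completions endpoint (the API style vLLM exposes). The route and model name are hypothetical placeholders, not values OpenShift AI provides by default, and the actual HTTP POST is left out to keep the example self-contained:

```python
import json

# The endpoint route and model name below are hypothetical placeholders;
# substitute the route and model name of your own deployment.
ENDPOINT = "https://my-model-route.example.com/v1/chat/completions"

payload = {
    "model": "my-deployed-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Summarize what Red Hat OpenShift AI does."}
    ],
    "max_tokens": 128,
}

# Serialize the request body; in a real client this JSON would be POSTed
# to ENDPOINT with an appropriate Authorization header.
body = json.dumps(payload)
print(body)
```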


Learn more

Discover more about the capabilities and benefits of AI, and explore our AI partners, by visiting Red Hat OpenShift AI.

Ready to get started? Contact sales to talk with a Red Hatter about OpenShift AI.