
Many organizations possess a wealth of unique internal knowledge: customized operational runbooks, environment-specific configurations, internal best practices, and stringent compliance protocols. This information may be critical to day-to-day operations, but it sits outside the public knowledge bases on which large language models (LLMs) are trained. There is a clear need to bridge this gap and enable an AI assistant to understand that proprietary context and provide specific, actionable guidance. In response, we introduced the "bring your own knowledge" (BYO knowledge) capability in Red Hat OpenShift Lightspeed. BYO knowledge lets you augment Lightspeed's intelligence with your organization's private documentation and internal expertise, transforming OpenShift Lightspeed from a generally knowledgeable OpenShift expert into a highly specialized, context-aware partner. It's not just data, it's your data.

The benefit of bringing your own data to AI is immediate and impactful. OpenShift Lightspeed avoids generic solutions that you'd need to adjust for your setup, potentially facing access issues. Instead, it delivers tailored, policy-compliant answers to address your specific needs effectively. 

The ability to customize your AI knowledge base is particularly transformative for industries operating under strict regulatory frameworks, or for those with highly customized OpenShift policies and procedures. Financial services institutions, for instance, can ingest internal security policies and compliance checklists, ensuring Lightspeed's advice adheres to specific governance. Similarly, telecommunications companies with bespoke network configurations or government agencies with unique procedural requirements can equip Lightspeed with the necessary insights to provide highly relevant and accurate support. Ultimately, the BYO knowledge feature helps make the power of generative AI in OpenShift Lightspeed not just intelligent but intelligently tailored to you.

How does it work? Follow along and see. The following steps assume you have already installed and configured OpenShift Lightspeed in your environment.

Note: Bring your own knowledge is a Technology Preview feature and is not in its final state. Because the feature is still under active development, the specific process for bringing your knowledge into OpenShift Lightspeed will change before it becomes generally available.

1. Start with documentation

The first step in the BYO knowledge process is to gather your documentation in one place. Today, that means a directory of markdown files. It's fine if your directory has many subdirectories of content in it. Take a look at this simple example:

$ ls -lG
total 20
-rw-r--r--. 1 tux 1917 Apr 29 12:23 apex-certificates.md
-rw-r--r--. 1 tux  706 Apr 10 14:01 autoscaling-rules.md
-rw-r--r--. 1 tux 3911 Apr 10 14:03 gpu-node.md
-rw-r--r--. 1 tux 2031 May 15 09:10 mode-select-app.md
-rw-r--r--. 1 tux 3499 Apr 10 14:08 serverless-prereq.md

Once you have gathered all of your documentation, you can now prepare to use the BYO knowledge tool.
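Before running the tool, it can be worth confirming exactly which files it will pick up. A quick sanity check (the content path is illustrative, matching the placeholder used later in this article):

```shell
# List every markdown file the tool will ingest, including subdirectories
find /path/to/content -type f -name '*.md'

# Count them as a quick completeness check
find /path/to/content -type f -name '*.md' | wc -l
```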

2. Use the BYO knowledge tool

At this time, we provide a container image containing the tool that builds your knowledge into a container image (yes, you're building containers with containers!). After you build the image, you push it to an image registry accessible from your OpenShift cluster.

For this article, I expose the OpenShift image registry and use that, but you can use any image registry as long as the cluster with OpenShift Lightspeed can access it.
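Exposing the integrated registry is a standard, documented OpenShift step. A sketch of the commands involved (your route hostname will differ from the example used in this article):

```shell
# Expose the integrated image registry through its default route
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type=merge --patch '{"spec":{"defaultRoute":true}}'

# Retrieve the route hostname to use in image pullspecs
REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}')

# Log in with your OpenShift token so podman can push to the registry later
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY_HOST"
```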

First, prepare a new folder that's separate from your content. This is used by the tool for temporary output.

Next, run the tool. On a Linux system that already has Podman installed, execute the following command:

$ podman run -it --rm --device=/dev/fuse \
  -v $XDG_RUNTIME_DIR/containers/auth.json:/run/user/0/containers/auth.json:Z \
  -v /path/to/content:/markdown:Z \
  -v /my/output/path:/output:Z \
  registry.redhat.io/openshift-lightspeed-tech-preview/lightspeed-rag-tool-rhel9:latest

Make sure you enter your paths correctly. At the end of this process, you will have a TAR file in the output directory. In this example, I end up with /my/output/path/byok-image.tar.

The next step is to import the TAR file into the local Podman image store:

$ podman load -i /my/output/path/byok-image.tar
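You can confirm the load succeeded before re-tagging:

```shell
# The loaded image should appear in the local store
podman images --filter reference=localhost/byok-image
```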

This results in a local image called localhost/byok-image:latest, which you must re-tag before pushing. The specific pullspec to use when re-tagging depends on the registry you are pushing to. If you plan to push images to the OpenShift cluster's integrated registry, you must expose the registry first.

Assume your registry URL looks like this:

default.openshift-image-registry.apps.example.com

In that case, you would tag your image like this:

$ podman tag localhost/byok-image:latest \
default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest

Then you can push the image (assuming you have already run a Podman login against your registry):

$ podman push default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest

3. Configure OpenShift Lightspeed for your knowledge

Once you have built and pushed the image with your knowledge, you must configure OpenShift Lightspeed to use it. Edit your OLSConfig, and modify the ols section as demonstrated:

apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
spec:
  llm:
    providers:
      - name: myOpenai
        type: openai
        credentialsSecretRef:
          name: openai-api-keys
        url: 'https://api.openai.com/v1'
        models:
          - name: gpt-4o
  ols:
    defaultModel: gpt-4o
    defaultProvider: myOpenai
    rag:
      - image: image-registry.openshift-image-registry.svc:5000/openshift-lightspeed/acme-byok:latest
        indexID: vector_db_index
        indexPath: /rag/vector_db

The rag stanza is located at .spec.ols.rag. The indentation is very important. The rag keyword must be at the same indentation level as defaultModel.

After you save the change, the OpenShift Lightspeed operator restarts the OpenShift Lightspeed API server pods in the openshift-lightspeed namespace.
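You can watch that rollout complete before trying your questions:

```shell
# Watch the API server pods restart after the OLSConfig change
oc -n openshift-lightspeed get pods -w
```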

Bring your own knowledge

Now that OpenShift Lightspeed is using your knowledge image, try asking a question that's covered by your own documentation. You'll get an answer informed by your internal knowledge rather than by generic public knowledge.

There are still many improvements to be made to the BYO knowledge feature. In the meantime, you can see how useful this feature already is, and how it can help ensure that OpenShift Lightspeed's answers are specific to your organization's context.


About the author

Ben has been at Red Hat since 2019, where he has focused on edge computing with Red Hat OpenShift as well as private clouds based on Red Hat OpenStack Platform. Before this, he spent a decade doing a mix of sales and product marketing across telecommunications, enterprise storage, and hyperconverged infrastructure.
