
Many organizations possess a wealth of unique internal knowledge: customized operational runbooks, environment-specific configurations, internal best practices, and stringent compliance protocols. This information may be critical to day-to-day operations, but it sits outside the public knowledge bases on which large language models (LLMs) are trained. There is a clear need to bridge this gap and enable an AI assistant to understand and leverage proprietary context to provide specific, actionable guidance. In response to this need, we introduced the "bring your own knowledge" (BYO knowledge) capability in Red Hat OpenShift Lightspeed. BYO knowledge empowers you to augment Lightspeed's intelligence with your organization's private documentation and internal expertise, transforming OpenShift Lightspeed from a generally knowledgeable OpenShift expert into a highly specialized, context-aware partner. It's not just data, it's your data.

The benefit of bringing your own data to AI is immediate and impactful. Instead of generic solutions that you would need to adapt to your setup, OpenShift Lightspeed delivers tailored, policy-compliant answers that address your specific needs effectively.

The ability to customize your AI knowledge base is particularly transformative for industries operating under strict regulatory frameworks, or for those with highly customized OpenShift policies and procedures. Financial services institutions, for instance, can ingest internal security policies and compliance checklists, ensuring Lightspeed's advice adheres to specific governance. Similarly, telecommunications companies with bespoke network configurations or government agencies with unique procedural requirements can equip Lightspeed with the necessary insights to provide highly relevant and accurate support. Ultimately, the BYO knowledge feature helps make the power of generative AI in OpenShift Lightspeed not just intelligent but intelligently tailored to you.

How does it work? Follow along and see. The following steps assume you have already installed and configured OpenShift Lightspeed in your environment.

Note: Bring Your Own Knowledge is a technology preview feature that is not in its final state. Because the feature is still under active development and maturing, the specific process for bringing your knowledge into OpenShift Lightspeed will change when it becomes generally available.

1. Start with documentation

The first step in the BYO knowledge process is to gather your documentation in one place. Today, that means a directory of markdown files. It's fine if your directory has many subdirectories of content in it. Take a look at this simple example:

$ ls -lG
total 20
-rw-r--r--. 1 tux 1917 Apr 29 12:23 apex-certificates.md
-rw-r--r--. 1 tux  706 Apr 10 14:01 autoscaling-rules.md
-rw-r--r--. 1 tux 3911 Apr 10 14:03 gpu-node.md
-rw-r--r--. 1 tux 2031 May 15 09:10 mode-select-app.md
-rw-r--r--. 1 tux 3499 Apr 10 14:08 serverless-prereq.md
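Each file is ordinary markdown. As a purely hypothetical illustration (the file contents and policies below are invented for this example), autoscaling-rules.md might capture an internal runbook like this:

```shell
# Create a hypothetical internal doc; the directory, file name, and
# policy text are invented placeholders for illustration only.
mkdir -p ./docs
cat > ./docs/autoscaling-rules.md <<'EOF'
# Autoscaling rules

Production cluster autoscaling must follow internal policy:

- Maximum nodes per machine pool: 12
- Scale-down is disabled during the end-of-quarter freeze
- New node groups require approval from the platform team
EOF
```

The more specific and well-structured your markdown is, the more precisely Lightspeed can ground its answers in it.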

Once you have gathered all of your documentation, you can now prepare to use the BYO knowledge tool.

2. Use the BYO knowledge tool

At this time, we provide a container image with the tool to build your knowledge into a container image (yes, you're building containers with containers)! After you build the image, you put it into an image registry accessible to your OpenShift cluster. 

For this article, I expose the OpenShift image registry and use that, but you can use any image registry as long as the cluster with OpenShift Lightspeed can access it.

First, prepare a new folder that's separate from your content. This is used by the tool for temporary output.
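For example (the path is just a placeholder; use any empty directory outside your content tree):

```shell
# Create an empty working directory for the tool's output (placeholder path)
mkdir -p /tmp/byok-output
```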

Next, run the tool. On a Linux system that already has Podman installed, execute the following command:

$ podman run -it --rm --device=/dev/fuse \
  -v $XDG_RUNTIME_DIR/containers/auth.json:/run/user/0/containers/auth.json:Z \
  -v /path/to/content:/markdown:Z \
  -v /my/output/path:/output:Z \
registry.redhat.io/openshift-lightspeed-tech-preview/lightspeed-rag-tool-rhel9:latest

Make sure you have your paths properly entered. At the end of this process, you have a TAR file in the output directory. In this example, I end up with:
/my/output/path/byok-image.tar.

The next step is to import the TAR file into the local Podman image store:

$ podman load -i /my/output/path/byok-image.tar

This results in a local image called localhost/byok-image:latest, which you must re-tag before pushing. The specific pullspec to use when re-tagging depends on the registry you want to push to. If you plan to push images to the OpenShift cluster's internal registry, you must expose the registry first.
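If you go the internal-registry route and have not yet exposed it, the standard approach from the OpenShift documentation is to enable the registry operator's default route (cluster-admin access assumed):

```shell
# Enable the default route on the internal image registry (requires cluster-admin)
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"defaultRoute":true}}'

# Print the resulting registry hostname
oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}'
```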

Assume your registry URL looks like this:

default.openshift-image-registry.apps.example.com

In that case, you would tag your image like this:

$ podman tag localhost/byok-image:latest \
default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest

Then you can push that image (assuming you have first done a Podman login to access your registry):

$ podman push default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest
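If you haven't authenticated yet, one common pattern for the internal registry is to log in with your current OpenShift session token (the hostname here is the example one from above):

```shell
# Log in to the exposed registry using the current OpenShift session token
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" \
  default.openshift-image-registry.apps.example.com
```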

3. Configure OpenShift Lightspeed for your knowledge

Once you have built and pushed the image with your knowledge, you must configure OpenShift Lightspeed to use it. Edit your OLSConfig, and modify the ols section as demonstrated:

apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
spec:
  llm:
    providers:
      - name: myOpenai
        type: openai
        credentialsSecretRef:
          name: openai-api-keys
        url: 'https://api.openai.com/v1'
        models:
          - name: gpt-4o
  ols:
    defaultModel: gpt-4o
    defaultProvider: myOpenai
    rag:
      - image: image-registry.openshift-image-registry.svc:5000/openshift-lightspeed/acme-byok:latest
        indexID: vector_db_index
        indexPath: /rag/vector_db

The rag stanza is located at .spec.ols.rag. The indentation is very important. The rag keyword must be at the same indentation level as defaultModel.
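Assuming the OLSConfig resource uses the default name cluster (it may differ in your environment), you can open it for in-place editing with:

```shell
# Edit the OLSConfig; "cluster" is the usual default resource name,
# but verify it with: oc get olsconfig
oc edit olsconfig cluster
```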

The OpenShift Lightspeed operator restarts the OpenShift Lightspeed API server pods in the openshift-lightspeed namespace.
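You can watch the rollout to confirm the restart:

```shell
# Watch the OpenShift Lightspeed pods restart after the config change
oc get pods -n openshift-lightspeed -w
```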

Bring your own knowledge

Now that OpenShift Lightspeed is using your knowledge image, try asking a question that's covered by your own documentation. You should get an answer informed by your internal knowledge rather than by generic public knowledge.

There are still many improvements to be made to the BYO knowledge feature. In the meantime, you can see how useful this feature already is, and how it can help ensure that OpenShift Lightspeed's answers are specific to your organization's context.


About the author

Ben has been at Red Hat since 2019, where he has focused on edge computing with Red Hat OpenShift as well as private clouds based on Red Hat OpenStack Platform. Before this, he spent a decade doing a mix of sales and product marketing across telecommunications, enterprise storage, and hyperconverged infrastructure.
