
Many organizations possess a wealth of unique internal knowledge: customized operational runbooks, environment-specific configurations, internal best practices, and stringent compliance protocols. This information may be critical to the organization's day-to-day operations, but it sits outside the public knowledge bases on which large language models (LLMs) are trained. There's a clear need to bridge this gap: to enable an AI assistant to understand and leverage proprietary context and provide specific, actionable guidance. In response to this need, we introduced the "bring your own knowledge" (BYO knowledge) capability to Red Hat OpenShift Lightspeed. BYO knowledge empowers you to augment Lightspeed's intelligence with your organization's private documentation and internal expertise, transforming OpenShift Lightspeed from a generally knowledgeable OpenShift expert into a highly specialized, context-aware partner. It's not just data, it's your data.

The benefit of bringing your own data to AI is immediate and impactful. Instead of generic solutions that you would need to adjust for your setup, potentially running into access issues along the way, OpenShift Lightspeed delivers tailored, policy-compliant answers that address your specific needs.

The ability to customize your AI knowledge base is particularly transformative for industries operating under strict regulatory frameworks, or for those with highly customized OpenShift policies and procedures. Financial services institutions, for instance, can ingest internal security policies and compliance checklists, ensuring Lightspeed's advice adheres to specific governance. Similarly, telecommunications companies with bespoke network configurations or government agencies with unique procedural requirements can equip Lightspeed with the necessary insights to provide highly relevant and accurate support. Ultimately, the BYO knowledge feature makes the generative AI in OpenShift Lightspeed not just intelligent, but intelligently tailored to you.

How does it work? Follow along and see. The following steps assume you have already installed and configured OpenShift Lightspeed in your environment.

Note: Bring your own knowledge is a technology preview feature and is not in its final state. Because the feature is still under development and maturing, the specific process for bringing your knowledge into OpenShift Lightspeed will change when it becomes generally available.

1. Start with documentation

The first step in the BYO knowledge process is to gather your documentation in one place. Today, that means a directory of markdown files. It's fine if your directory contains many subdirectories of content. Take a look at this simple example:

$ ls -lG
total 20
-rw-r--r--. 1 tux 1917 Apr 29 12:23 apex-certificates.md
-rw-r--r--. 1 tux  706 Apr 10 14:01 autoscaling-rules.md
-rw-r--r--. 1 tux 3911 Apr 10 14:03 gpu-node.md
-rw-r--r--. 1 tux 2031 May 15 09:10 mode-select-app.md
-rw-r--r--. 1 tux 3499 Apr 10 14:08 serverless-prereq.md
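If you're assembling such a directory from scratch, a minimal sketch looks like this (the paths, file names, and file contents below are hypothetical, echoing the listing above):

```shell
# Hypothetical content directory; subdirectories are fine.
mkdir -p /tmp/byok/content/networking

cat > /tmp/byok/content/autoscaling-rules.md <<'EOF'
# Autoscaling rules
Worker nodes scale between 3 and 12 replicas during business hours.
EOF

cat > /tmp/byok/content/networking/apex-certificates.md <<'EOF'
# Apex certificates
Renew the apex wildcard certificate every 90 days via the internal CA.
EOF

# Confirm that everything the tool will ingest is markdown:
find /tmp/byok/content -type f -name '*.md' | sort
```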

Once you have gathered all of your documentation, you can now prepare to use the BYO knowledge tool.

2. Use the BYO knowledge tool

At this time, we provide a container image containing a tool that builds your knowledge into another container image (yes, you're building containers with containers!). After you build the image, push it to an image registry accessible to your OpenShift cluster.

For this article, I expose the OpenShift image registry and use that, but you can use any image registry as long as the cluster running OpenShift Lightspeed can access it.

First, prepare a new, empty folder that's separate from your content. The tool uses it for temporary output.
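For example (the paths here are hypothetical stand-ins for your own), keep the output directory alongside but separate from the content directory:

```shell
# Hypothetical paths; the output directory must be separate from your content.
mkdir -p /tmp/byok/content   # your markdown documentation
mkdir -p /tmp/byok/output    # empty scratch space for the tool's output
ls -d /tmp/byok/content /tmp/byok/output
```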

Next, run the tool. On a Linux system that already has Podman installed, execute the following command:

$ podman run -it --rm --device=/dev/fuse \
  -v $XDG_RUNTIME_DIR/containers/auth.json:/run/user/0/containers/auth.json:Z \
  -v /path/to/content:/markdown:Z \
  -v /my/output/path:/output:Z \
  registry.redhat.io/openshift-lightspeed-tech-preview/lightspeed-rag-tool-rhel9:latest

Make sure your paths are entered correctly. At the end of this process, you have a TAR file in the output directory. In this example, I end up with /my/output/path/byok-image.tar.

The next step is to import the TAR file into the local Podman image store:

$ podman load -i /my/output/path/byok-image.tar

This results in a local image called localhost/byok-image:latest, which you must re-tag before pushing. The specific pullspec to use when re-tagging depends on the registry you want to push to. If you plan to push images to the OpenShift cluster's internal registry, you must have already exposed the registry.

Assume your registry URL looks like this:

default.openshift-image-registry.apps.example.com

In that case, you would tag your image like this:

$ podman tag localhost/byok-image:latest \
  default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest

Then you can push that image (assuming you have first run a podman login against your registry):

$ podman push default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest
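In general, a pullspec for an exposed registry is just registry host, namespace, and image:tag joined together. As a sketch, here is one composed from example values (the host and namespace are the article's illustrative examples, not something to copy verbatim):

```shell
# Example values; substitute your own registry host, namespace, and image name.
REGISTRY_HOST=default.openshift-image-registry.apps.example.com
NAMESPACE=openshift-lightspeed
IMAGE=acme-byok
TAG=latest

PULLSPEC="${REGISTRY_HOST}/${NAMESPACE}/${IMAGE}:${TAG}"
echo "${PULLSPEC}"
# → default.openshift-image-registry.apps.example.com/openshift-lightspeed/acme-byok:latest
```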

3. Configure OpenShift Lightspeed for your knowledge

Once you have built and pushed the image with your knowledge, you must configure OpenShift Lightspeed to use it. Edit your OLSConfig resource and modify the ols section as shown:

apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
spec:
  llm:
    providers:
      - name: myOpenai
        type: openai
        credentialsSecretRef:
          name: openai-api-keys
        url: 'https://api.openai.com/v1'
        models:
          - name: gpt-4o
  ols:
    defaultModel: gpt-4o
    defaultProvider: myOpenai
    rag:
      - image: image-registry.openshift-image-registry.svc:5000/openshift-lightspeed/acme-byok:latest
        indexID: vector_db_index
        indexPath: /rag/vector_db

The rag stanza is located at .spec.ols.rag. The indentation is very important. The rag keyword must be at the same indentation level as defaultModel.

After you save the change, the OpenShift Lightspeed operator restarts the OpenShift Lightspeed API server pods in the openshift-lightspeed namespace to pick up the new configuration.

Bring your own knowledge

Now that OpenShift Lightspeed is using your knowledge image, try asking a question that's covered by your own documentation. You should get an answer informed by your internal knowledge, not just generic knowledge.

There are still many improvements to be made to the BYO knowledge feature. In the meantime, you can see how useful this feature already is, and how it can help ensure that OpenShift Lightspeed's answers are specific to your organization's context.


About the author

Ben has been at Red Hat since 2019, where he has focused on edge computing with Red Hat OpenShift as well as private clouds based on Red Hat OpenStack Platform. Before this he spent a decade doing a mix of sales and product marketing across telecommunications, enterprise storage and hyperconverged infrastructure.
