
The engineering teams of Red Hat Trusted Profile Analyzer (TPA) and Trustify decided to experiment with Model Context Protocol (MCP). This article takes you through the challenges we faced along the way, in the hope that our journey can help others attempting something similar.

To give you some context, Red Hat Trusted Profile Analyzer (TPA) is a Red Hat product for software bill of materials (SBOM) management—it stores SBOMs and correlates the packages within the SBOMs with known public vulnerabilities. It is based on the upstream project Trustify.

At a high level, its architecture is fairly "traditional":

  • The frontend is developed using React and PatternFly components (trustify-ui)
  • The backend is developed in Rust; it connects to a database instance and stores the SBOMs in S3-compatible storage

The high-level steps we took:

  1. Designing the MCP server integration with TPA/Trustify
  2. Defining the MCP server’s tool descriptions
  3. Designing the MCP server’s tool parameters

Here we talk about the considerations in each phase; the final result is the MCP server available on GitHub.

Designing the MCP server integration with TPA/Trustify

Before getting into how we defined the integration between the MCP server and Trustify, we faced a typical dilemma in the life of a software engineer: Which library am I going to adopt in this project? Should I start from scratch and develop everything on my own?

As true believers in open source, we took a look at the current landscape of Rust libraries (Trustify is mainly developed in Rust so a Rust library was our preferred option) for implementing an MCP server.

Our search didn't take very long: it turns out that the MCP project provides official libraries in its GitHub organization, and among those is one developed in Rust.

This library, besides including the code needed to support the development of an MCP server, also provides a great set of examples to start with.
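To give a feel for it, below is a minimal sketch of a server modeled on the SDK's own examples. The struct and tool are placeholders, and the exact attribute macros, module paths, and types vary between versions of the rmcp crate, so treat this as illustrative rather than as the actual Trustify MCP server:

```rust
use rmcp::{
    handler::server::tool::ToolRouter,
    model::{ServerCapabilities, ServerInfo},
    tool, tool_handler, tool_router,
    transport::stdio,
    ServerHandler, ServiceExt,
};

#[derive(Clone)]
struct TrustifyMcp {
    tool_router: ToolRouter<Self>,
}

#[tool_router]
impl TrustifyMcp {
    fn new() -> Self {
        Self {
            tool_router: Self::tool_router(),
        }
    }

    // Placeholder tool; the real tools call Trustify's REST endpoints.
    #[tool(description = "Ping the Trustify MCP server")]
    async fn ping(&self) -> String {
        "pong".to_string()
    }
}

#[tool_handler]
impl ServerHandler for TrustifyMcp {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            capabilities: ServerCapabilities::builder().enable_tools().build(),
            ..Default::default()
        }
    }
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Serve over the stdio transport, e.g. for an agent running locally.
    let service = TrustifyMcp::new().serve(stdio()).await?;
    service.waiting().await?;
    Ok(())
}
```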

It was immediately clear that, besides the library-specific details for running the MCP server and defining the tools available with their parameters, we had to decide how we wanted the MCP server to get access to the “backend” data.

We evaluated two different options. The MCP server could retrieve the data either by:

  • Directly connecting to the database (DB) where Trustify’s backend stores the data, or
  • Calling the REST endpoints provided by the Trustify backend

As you can imagine, both have pros and cons, which triggered a lively discussion that I’ll summarize here.

Pros of directly connecting to the DB:

  1. Performant access to the data
  2. Opportunity to have a text-to-SQL approach in place

Cons:

  1. MCP server must be at the same architectural level as the backend
  2. Data access code from the backend would need to be duplicated
  3. The output format of the MCP tools’ calls would have to be defined and managed separately

Pros of calling the REST endpoints:

  1. Calls adhere to the authentication and authorization already in place on the backend APIs
  2. The data available from the MCP server will be fully consistent with what's available in the UI since they're using the same data source
  3. JSON output comes for free, simply by forwarding the output returned from the backend APIs

Cons:

  1. Slower performance due to having to go through more architectural tiers

In the end, we decided to call the REST endpoints from the MCP server’s tools: the drawback of having to co-locate the MCP server beside the backend, “close enough” to the DB, was a potential blocker, especially for an MCP server using the stdio transport, which runs locally on developers’ hosts.

Having all the data formatted “for free” into JSON responses was another benefit in this initial development phase.
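To sketch what this looks like in code, a tool body essentially reduces to an HTTP call that forwards the backend's JSON response untouched. The helper name, endpoint path, and omitted authentication below are assumptions for illustration, not the actual repository code:

```rust
// Hypothetical helper an MCP tool could delegate to; names and paths are
// illustrative only.
async fn list_sboms(
    client: &reqwest::Client,
    base_url: &str,
) -> Result<String, reqwest::Error> {
    client
        .get(format!("{base_url}/api/v2/sbom"))
        // Auth headers (e.g. a bearer token) would be attached here, so the
        // backend's existing authentication and authorization still apply.
        .send()
        .await?
        .error_for_status()?
        // Return the backend's JSON body as-is: the formatting "for free".
        .text()
        .await
}
```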

Defining the MCP server’s tool descriptions

Once we decided that the MCP server’s tools would call the backend APIs, we had to decide how to describe the different tools. In this first iteration, we wanted each MCP tool to call a single backend API endpoint.

Since Trustify documents the available endpoints in an OpenAPI file (openapi.yaml), we decided to use the OpenAPI endpoint descriptions and definitions as the MCP tools’ descriptions, so we could evaluate how good that endpoint documentation is for our users. This effectively made our agentic AI the "customer zero" of our own APIs.

All of this has been done in the spirit of continuous improvement: if the descriptions of Trustify’s APIs are good enough for an LLM to work with, then our users should be able to understand that documentation as well.

Following this approach is helping us improve each endpoint, and it brought us to our next design decision.

Designing the MCP server’s tool parameters

At this point, we faced an issue related to the input parameters for the tools’ invocations, and to understand it we need to take a step back. Trustify’s endpoints for retrieving lists of entities accept a q query parameter. This parameter allows users to specify a query based on a grammar that is defined in the OpenAPI specifications.
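To make the idea concrete, here is a hedged sketch of what such a call might look like. The query string, field names, and base URL are illustrative only; the authoritative grammar lives in the OpenAPI specification:

```rust
fn main() {
    // Hypothetical `q` value: field comparisons combined into one expression.
    // The real grammar (queryable fields, operators, conjunctions) is defined
    // in Trustify's OpenAPI spec; this string only illustrates the shape.
    let q = "average_severity=critical&published>2024-01-01";
    let base_url = "http://localhost:8080"; // assumed local Trustify instance

    // The value must be URL-encoded before being sent as a query parameter.
    println!("{base_url}/api/v2/vulnerability?q={}", urlencoding::encode(q));
}
```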

The options we had were:

  1. Directly expose the endpoint’s q query parameter as the MCP tool’s input parameter
  2. Expose the individual fields used to build the q query value as the input parameters of the MCP tool

We tested both of these approaches.

The first approach requires a strong and detailed description of the query parameter that, at the moment, the OpenAPI specification doesn’t provide. We believe that a comprehensive list of queryable fields should be a mandatory part of the documentation, not an optional one. It would be useful for all users to have access to this information.

The second approach simplifies the process for the AI agent. By explicitly listing the parameters to query—such as vulnerability severity, publish date, or description—it makes the information more consumable for the LLM. This removes the need for the LLM to first interpret a query's grammar, which can be a complex step in the first approach.
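As a sketch, with hypothetical field names, the second approach would surface each queryable field as an explicit, documented tool parameter, assuming the schemars-based schema derivation the Rust SDK relies on:

```rust
use schemars::JsonSchema;
use serde::Deserialize;

// Hypothetical input type for the second approach: every queryable field
// becomes an explicit, self-documenting parameter instead of one `q` string.
#[derive(Debug, Deserialize, JsonSchema)]
struct VulnerabilityQuery {
    /// Minimum severity to include, e.g. "low", "medium", "high", "critical".
    severity: Option<String>,
    /// Only include vulnerabilities published after this RFC 3339 date.
    published_after: Option<String>,
    /// Free-text match against the vulnerability description.
    description: Option<String>,
}
```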

A further consideration is that listing all the available parameters explicitly on the MCP tool requires ongoing work to maintain consistency with the actual backend endpoint implementation. On the other hand, exposing only a subset of the parameters available has the effect of reducing the versatility of the tool with no guarantee of reducing the maintenance overhead.

We decided to move forward with using a q query parameter for the MCP tool, and we'll enhance its description within the OpenAPI definition so all users can benefit.
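In sketch form, again with illustrative names, the adopted design keeps a single parameter and concentrates the documentation effort in its description, which is where the enhanced OpenAPI text will land:

```rust
use schemars::JsonSchema;
use serde::Deserialize;

// Hypothetical input type for the adopted design: one `q` parameter whose
// doc comment, surfaced to the LLM through the generated JSON schema,
// carries the query grammar description from the OpenAPI definition.
#[derive(Debug, Deserialize, JsonSchema)]
struct ListQuery {
    /// Trustify query expression. The grammar (queryable fields, operators,
    /// conjunctions) is documented in the OpenAPI spec, which we are
    /// enhancing so LLMs and human users read the same description.
    q: Option<String>,
}
```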

Final thoughts

In designing an MCP server we adopted the following approach:

  • MCP server leverages the existing APIs
  • MCP server leverages the existing OpenAPI documentation
  • MCP server tools expose the same parameters that the remote API endpoints expect

As we mentioned earlier, the final result is available on GitHub.
