The engineering teams of Red Hat Trusted Profile Analyzer (TPA) and Trustify decided to experiment with Model Context Protocol (MCP). This article walks you through the challenges we faced along the way, in the hope that our journey can help others attempting something similar.
To give you some context, Red Hat Trusted Profile Analyzer (TPA) is a Red Hat product for software bill of materials (SBOM) management—it stores SBOMs and correlates the packages within the SBOMs with known public vulnerabilities. It is based on the upstream project Trustify.
At a high level, its architecture is fairly "traditional":
- The frontend is developed using React and PatternFly components (trustify-ui)
- The backend is developed in Rust; it connects to a database instance and stores the SBOMs in S3-compatible storage
The high-level steps we took were:
- Designing the MCP server integration with TPA/Trustify
- Defining the MCP server’s tool descriptions
- Designing the MCP server’s tool parameters
Below, we walk through the considerations in each phase; the final result is the MCP server available on GitHub.
Designing MCP server integration with TPA/Trustify
Before getting into how we defined the integration between the MCP server and Trustify, we faced a typical dilemma in the life of a software engineer: Which library am I going to adopt in this project? Should I start from scratch and develop everything on my own?
As true believers in open source, we took a look at the current landscape of Rust libraries (Trustify is mainly developed in Rust so a Rust library was our preferred option) for implementing an MCP server.
Our search didn't take very long: it turns out the MCP project provides official libraries in its GitHub organization, and among those is one written in Rust.
This library, besides including the code needed to support the development of an MCP server, also provides a great set of examples to start with.
It was immediately clear that, besides the library-specific details of running the MCP server and defining the available tools and their parameters, we had to decide how the MCP server would get access to the “backend” data.
We evaluated two different options. The MCP server could retrieve the data either by:
- Directly connecting to the database (DB) where Trustify’s backend stores the data, or
- Calling the REST endpoints provided by the Trustify backend
As you can imagine, both have pros and cons, which triggered a lively discussion that I’ll summarize here.
Pros of directly connecting to the DB:
- Performant access to the data
- Opportunity to have a text-to-SQL approach in place
Cons:
- The MCP server must sit at the same architectural tier as the backend, with direct access to the DB
- Data-access code already present in the backend would have to be duplicated
- Output formats for the MCP tools’ responses would have to be defined and maintained
Pros of calling the REST endpoints:
- Calls adhere to the authentication and authorization already in place on the backend APIs
- The data available from the MCP server will be fully consistent with what's available in the UI since they're using the same data source
- JSON output comes for free, simply by forwarding the responses returned by the backend APIs
Cons:
- Slower performance due to having to go through more architectural tiers
In the end, we decided to call the REST endpoints from the MCP server’s tools, because having to co-locate the MCP server beside the backend, “close enough” to the DB, was a potential blocker, especially for an MCP server using the stdio transport and running locally on developers’ machines.
Having all the data formatted “for free” into JSON responses was another benefit in this initial development phase.
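As a sketch of what this looks like in practice, a tool handler only has to build the backend URL and forward the JSON response. The endpoint path `/api/v2/sbom`, the parameter names, and the helper functions below are illustrative assumptions kept to the standard library; they stand in for the actual MCP SDK and HTTP-client wiring:

```rust
// Sketch: an MCP tool that proxies a Trustify REST endpoint.
// The endpoint path and parameter names are illustrative, not the
// exact Trustify API surface.

/// Build the request URL for a hypothetical "list SBOMs" endpoint.
fn list_sboms_url(base: &str, q: &str, limit: u32) -> String {
    // Minimal percent-encoding of the query value (illustrative only;
    // real code would use a proper URL-encoding crate).
    let encoded: String = q
        .bytes()
        .map(|b| match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                (b as char).to_string()
            }
            _ => format!("%{:02X}", b),
        })
        .collect();
    format!("{base}/api/v2/sbom?q={encoded}&limit={limit}")
}

/// The tool body performs an HTTP GET against that URL and returns the
/// backend's JSON body verbatim: no extra data-format mapping needed.
fn tool_output(backend_json: &str) -> String {
    backend_json.to_string() // pass-through: JSON "for free"
}

fn main() {
    let url = list_sboms_url("http://localhost:8080", "name=openssl", 10);
    println!("GET {url}");
    println!("{}", tool_output("{\"items\":[]}"));
}
```

Because the call goes through the normal API layer, it also inherits the backend's existing authentication and authorization checks, which was one of the pros listed above.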
Defining the MCP server’s tool descriptions
Once we had decided that the MCP server’s tools would call the backend APIs, we had to decide how to describe the tools themselves. In this first iteration, we wanted each MCP tool to call a single backend API endpoint.
Since Trustify documents its available endpoints in an OpenAPI openapi.yaml file, we decided to reuse each endpoint’s OpenAPI description and definitions as the corresponding MCP tool’s description, so we could evaluate how good that documentation really is for our users. This effectively made our agentic AI the "customer zero" of our own APIs.
All of this follows a continuous-improvement approach: if the descriptions of Trustify’s APIs are good enough for an LLM to work with, then our users should be able to understand that documentation as well.
Following this approach is helping us improve each endpoint, and it led us to our next design decision.
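The idea can be sketched as follows: pull an endpoint’s description out of the OpenAPI document and hand that exact string to the MCP tool registration. A real implementation would parse openapi.yaml with a YAML crate; this std-only sketch scans a hard-coded snippet, and the path and wording shown are illustrative assumptions:

```rust
// Sketch: reuse an endpoint's OpenAPI description as the MCP tool
// description. The snippet below is illustrative, not Trustify's
// actual openapi.yaml content.

const OPENAPI_SNIPPET: &str = "\
paths:
  /api/v2/sbom:
    get:
      summary: List SBOMs
      description: Returns a paginated list of SBOMs matching the query.
";

/// Extract the `description:` value from the snippet. This string is
/// what the MCP tool advertises, so the LLM sees exactly what a human
/// reader of the API docs would see.
fn tool_description(openapi: &str) -> Option<String> {
    openapi
        .lines()
        .find_map(|l| l.trim_start().strip_prefix("description: "))
        .map(|s| s.to_string())
}

fn main() {
    println!("{}", tool_description(OPENAPI_SNIPPET).unwrap());
}
```

The benefit of this indirection is that any improvement made to the OpenAPI documentation automatically improves the MCP tool descriptions, and vice versa.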
Designing the MCP server’s tools' parameters
At this point we faced the question of the tools’ input parameters, and to understand it we need to take a step back. Trustify's endpoints for retrieving lists of entities accept a q query parameter. This parameter lets users specify a query based on a grammar that is defined in the OpenAPI specification.
The options we had were:
- Directly expose the endpoint’s q query parameter as the MCP tool’s input parameter
- Expose the individual fields used to build the q query value as separate input parameters of the MCP tool
We tested both of these approaches.
The first approach requires a rich, detailed description of the q parameter that, at the moment, the OpenAPI specification doesn’t provide. We believe that a comprehensive list of queryable fields should be a mandatory part of the documentation, not an optional one. It would be useful for all users to have access to this information.
The second approach simplifies the process for the AI agent. By explicitly listing the parameters to query—such as vulnerability severity, publish date, or description—it makes the information more consumable for the LLM. This removes the need for the LLM to first interpret a query's grammar, which can be a complex step in the first approach.
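The difference between the two designs can be sketched with two small helpers. The field names and the query grammar (field comparisons joined with `&`) are illustrative assumptions, not Trustify's exact grammar:

```rust
// Sketch contrasting the two parameter designs; the grammar shown is
// an illustrative assumption, not Trustify's exact query grammar.

/// Option 1: the tool takes the raw `q` string and forwards it as-is.
/// The LLM must already know the query grammar to call this correctly.
fn option_one(q: &str) -> String {
    q.to_string()
}

/// Option 2: the tool exposes explicit, named fields and assembles `q`
/// itself, so the LLM never has to learn the grammar.
fn option_two(severity: Option<&str>, published_after: Option<&str>) -> String {
    let mut parts = Vec::new();
    if let Some(s) = severity {
        parts.push(format!("severity={s}"));
    }
    if let Some(d) = published_after {
        parts.push(format!("published>{d}"));
    }
    parts.join("&")
}

fn main() {
    // Both designs ultimately produce the same backend query string:
    assert_eq!(
        option_one("severity=critical&published>2024-01-01"),
        option_two(Some("critical"), Some("2024-01-01"))
    );
}
```

Note how option 2 bakes the grammar into the tool implementation: every new queryable field means another explicit parameter to add and keep in sync with the backend, which is exactly the maintenance trade-off discussed next.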
A further consideration is that listing all the available parameters explicitly on the MCP tool requires ongoing work to maintain consistency with the actual backend endpoint implementation. On the other hand, exposing only a subset of the parameters available has the effect of reducing the versatility of the tool with no guarantee of reducing the maintenance overhead.
We decided to move forward with using a q query parameter for the MCP tool, and we'll enhance its description within the OpenAPI definition so all users can benefit.
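To make the q parameter usable by LLMs and humans alike, its OpenAPI description needs to spell out the grammar and the queryable fields. A sketch of what such an enhanced entry in openapi.yaml could look like (the grammar details and field names shown are illustrative assumptions, not Trustify's actual documentation):

```yaml
# Illustrative sketch of an enhanced `q` parameter entry; the grammar
# and field names are examples, not the full or exact set.
parameters:
  - name: q
    in: query
    required: false
    schema:
      type: string
    description: >
      Filter expression. Supports field comparisons joined with `&` (AND)
      and `|` (OR), for example `severity=critical&published>2024-01-01`.
      Queryable fields include: severity, published, description.
```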
Final thoughts
In designing an MCP server we adopted the following approach:
- MCP server leverages the existing APIs
- MCP server leverages the existing OpenAPI documentation
- MCP server tools expose the same parameters that the remote API endpoints expect
As we mentioned earlier, the final result is available on GitHub.