
We’ve shared how it started (“Blog #1: The Hope”), where it broke (“Blog #2: The Crash”), and what we’re doing differently (“Blog #3: Iterate with Intention”). This last piece is for anyone trying to lead through the messy middle.

This isn’t a teardown or a tutorial. It’s a field note, a reflection on what it actually takes to manage AI projects inside complex organizations, especially when the systems are messy, content is nuanced, and the use case isn’t clear.

You won’t find a perfect framework here, because most of the time, one doesn't exist. You will, however, find lessons we learned along the way—lessons that might help you spot patterns faster, build better partnerships, and make progress even when the roadmap keeps shifting.

Whether you’re a formal program manager, a product lead, or simply the person who got handed the bot project because someone trusted you to figure it out, this article’s for you.

AI challenge: Construct a chatbot that can leverage constantly changing, unstructured go-to-market (GTM) content to reduce sales friction by providing brief and accurate answers to seller questions as well as links to more detailed information.

The build: We built this assistant on Red Hat OpenShift Platform Plus and Red Hat OpenShift AI, with Granite as the core model, giving us enterprise-grade model serving and deployment. LangChain orchestrated the retrieval flow, and PGVector (a vector-storage extension to the popular PostgreSQL database) handled vector storage. We used MongoDB for logging interactions with the AI. To preserve context from long-form documents, we used structure-aware tools like Docling and experimented with Unstructured’s Python libraries to pull speaker notes from slides. While that code didn’t make it into production, the experiment revealed just how crucial structure and formatting are to successful retrieval—lessons that now guide our preprocessing efforts.
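To make the retrieval flow concrete, here is a toy, plain-Python sketch of the retrieve-then-answer pattern described above. It is not our production LangChain code: the bag-of-words "embedding" and in-memory search are stand-ins for the real model embeddings and PGVector similarity search, and the sample chunks are invented for illustration.

```python
import math

# Toy "embedding": a bag-of-words vector over whatever words appear.
# In production, a real embedding model produces dense vectors and
# PGVector runs the similarity search in PostgreSQL; this stand-in
# only illustrates the flow: embed the query, rank chunks, return top-k.
def embed(text: str) -> dict[str, float]:
    counts: dict[str, float] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0.0) + 1.0
    return counts

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    qv = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical GTM content chunks, purely for illustration.
chunks = [
    "Pricing tiers for the enterprise subscription are listed here.",
    "Competitive battlecard: how we position against vendor X.",
    "Holiday party planning notes for the sales kickoff.",
]
top = retrieve("what are the enterprise pricing tiers", chunks, k=1)
```

The retrieved chunk (plus a link back to its source document) is what grounds the model’s answer—which is why, as the rest of this series argues, the quality and structure of the underlying content dominates everything downstream.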

Note to fellow PMs (and accidental knowledge strategists)

You don’t have to be an AI expert. But you do have to stay close to the work—close enough to spot what’s not working, advocate for what is, and connect patterns no one else is tracking.

You’ll likely end up doing more than managing timelines. You’ll translate between engineering and business. You'll map out signal paths across disconnected tools. You'll capture moments where things break down and explain why. Sometimes you’ll be the only one holding the full picture, or the closest thing to it.

When AI is the buzzword, everyone has different expectations. You’ll need to push for alignment on purpose, success metrics, and what the assistant is actually supposed to do. Don’t assume that’s already been agreed upon.

Also, avoid the trap of big plans and long roadmaps. AI projects move fast. Assumptions go stale quickly. Overplanning adds overhead and delays feedback. A waterfall approach doesn’t hold up here. Instead, scope just enough to move, then adjust. That’s minimum viable planning.

When things get fuzzy or stuck:

  • Refocus on the smallest testable version of success.
  • Keep version one usable, useful, and grounded in real need.

When you hit a wall, don’t give up. Connect with product teams. Suggest enhancements. Do your own research. Bring in market shifts and competitor moves. But don’t chase every shiny thing. If a new tool or setup could actually shift the outcome, open that conversation, but be critical of the developer time it costs. The engineering team knows the tech. You understand how the data source and user needs shape the retrieval strategy. When the tech gets fuzzy, that’s the bridge you bring—the full context, the right questions, and the clarity to test what matters.

And if you’re lucky, like I was, you’ll have a great team, a knowledgeable subject matter expert (SME) who’s just as invested as you are, and a leadership team that supports your growth, not just your output. That combination creates space for creativity, honesty, and iteration that actually moves the needle.

If your SME partner and leadership team will test the bot on a weekend to trace signals and build out reporting, you’ve got something special. If you find that kind of partnership and support, hold onto it.

The path isn’t straight. It rarely is. But at least it’s never boring.


PM checklist – Field notes for AI PMs (or anyone asked to make it work)

  • Work with content creators: Start here. Partner with the people who know the material best. Consolidate, review, and refresh. Redundant assets confuse both users and the bot. Fewer, stronger resources perform better for AI and for people.
  • Do the audit: Yes, the content audit. Even if you have to do it yourself. It's not glamorous, but it is your data preparation for AI. Fix your tags. Find your duplicates. See what's actually being used. You can't improve retrieval if you don't know what you've got.
  • Stay close, observe deeply: Don't manage from a distance. Be close enough to notice when something is off. Talk to your users. Track what's breaking and investigate.
  • Bridge technology and business: Learn just enough technology to translate, challenge, and connect the dots. Align on outcomes early and make sure your assistant solves real problems, not just technical ones.
  • Design for movement: It's going to get messy. That's normal. Skip rigid roadmaps. Scope just enough to move, then adjust. Build in ways to pause, course-correct, and keep going without starting over.
  • Validate what matters: Test and report retrieval and summarization separately. The summary might look fine, but if it pulled the wrong content, it's still off. Reviewing output is manual and slow, but essential if you want to track what's actually improving.
  • Balance vision with reality: Don't chase cool features without proof. Compare with the market, but stay focused on your users.
  • Partner up: Your SMEs and content creators aren't just reviewers—they're co-pilots. They bring the nuance that AI misses and help you pressure-test what's being built. You can't validate outputs without them.
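The "validate what matters" point above—scoring retrieval and summarization separately—can be sketched as a small scoring pass over SME-labeled test questions. The record shape and file names here are hypothetical, not our production logging schema; the idea is simply that a fluent summary of the wrong chunk still counts as a retrieval failure.

```python
# Hypothetical evaluation records: for each test question, the chunk an
# SME says SHOULD be retrieved, the chunk the bot actually retrieved,
# and whether the SME judged the summary faithful. Illustrative data only.
evals = [
    {"gold": "pricing.md", "retrieved": "pricing.md", "summary_ok": True},
    {"gold": "battlecard.md", "retrieved": "pricing.md", "summary_ok": True},
    {"gold": "roadmap.md", "retrieved": "roadmap.md", "summary_ok": False},
]

def score(records: list[dict]) -> dict[str, float]:
    n = len(records)
    retrieval_hits = sum(r["gold"] == r["retrieved"] for r in records)
    # Only judge summary quality where retrieval was correct: a good-looking
    # summary built on the wrong content is a retrieval problem, not a win.
    grounded = [r for r in records if r["gold"] == r["retrieved"]]
    summary_hits = sum(r["summary_ok"] for r in grounded)
    return {
        "retrieval_accuracy": retrieval_hits / n,
        "summary_accuracy": summary_hits / len(grounded) if grounded else 0.0,
    }

metrics = score(evals)
```

Splitting the metrics this way tells you where to spend effort: low retrieval accuracy points back at content structure and tagging, while low summary accuracy with good retrieval points at prompting or the model.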


About the author

Andrea Hudson is a program manager focused on making AI tools useful in the messy world of enterprise content. At Red Hat since 2022, she helps teams untangle complexity, connect the dots, and turn good intentions into working systems. Most recently, she helped reshape an AI chatbot project that aimed to surface the right go-to-market content—but ran into the chaos of unstructured data.
Her background spans product launches, enablement, product testing, data prep, and evolving content for the new AI era. As a systems thinker with early training in the U.S. Navy, she relies on what works in practice, not just in theory. She’s focused on building things that scale, reduce rework, and make people’s lives easier.
Andrea writes with honesty, sharing lessons from the projects that don’t go as planned. She believes transparency is key to growth and wants others to have a starting point by sharing the messy middle and not just the polished end.
When she’s not wrangling AI or metadata, you’ll find her tinkering with dashboards, learning low/no-code tools, playing on Red Hat’s charity eSports teams, recruiting teammates, or enjoying time with her family.
