Boston Children's Hospital is at the forefront of medical imaging innovation, piloting AI-powered analysis with Red Hat OpenShift to revolutionize fetal and pediatric care. Dr. Rudolph Pienaar, a computational scientist and technical director of the Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC) at Boston Children’s Hospital, answers our questions on how open source platforms and advanced computing are enhancing image quality, automating complex analyses and facilitating critical data sharing. He explains how this is transforming healthcare to be more accessible and efficient for researchers and clinicians and to better serve patients.
How is AI transforming early diagnosis in fetal and pediatric care?
You might be surprised to learn that currently AI is used very little in any clinical imaging workflows. I feel I should qualify that. When people hear “AI,” they usually think of what is called generative AI, or gen AI. The reality is that for image processing, gen AI is not necessarily the right tool to use. Mostly, we use other imaging techniques, including predictive AI, to find tumours, anatomical landmarks, disease states and more. Also, at times, AI is used to improve the contrast or even the resolution of images. At these tasks, predictive AI really excels.
As one of the top-rated hospitals in the U.S., Boston Children’s Hospital often sees the most difficult pediatric cases. In terms of imaging, anything we can do computing-wise to improve images obviously helps improve diagnoses. We are pioneering some of the first examples of how AI can make a difference in improving the quality of images and even provide insights into images themselves.
What do I mean by this? Imagine I walk through my house with my phone and take a picture every second or so. Now imagine each time I take the picture I twist my phone this way, that way, to the ceiling, the wall, or use a strange angle. Now imagine your task is to determine if my house is structurally sound or if the windows in the bedrooms are up to specifications. My stack of topsy-turvy photos just makes your job that much more difficult. So, we can use imaging techniques and predictive AI to make better sense of all these pictures and rearrange them so that it becomes quite easy to see what the house looks like. We don’t use the AI to determine if your house is sound or the windows are in code. We use it to improve the pictures so the human expert can more quickly come to a better and more correct decision.
Currently, we use these techniques to get better, sharper pictures of a fetus' brain in utero so a neuroradiologist can more accurately assess if the brain is developing normally or, if there is something abnormal, better visualize abnormalities. In other cases, we use this to automatically find the lengths and ages of bones from X-ray images.
How does reducing image analysis time enable quicker, more precise intervention?
If you are a radiologist making a diagnosis, you rely on imaging. If those images are “bad,” you might miss important clues. If there were a way to drastically improve the quality of those images, that changes things. However, if the time to get these better images is too long, or if the process to get these improved images back to you is prohibitively complex, the benefit is lost.
Let me give an example. The way that a baby kicks in utero can be an indicator of neurological health or “dis-health.” How can you figure out the kicking pattern by looking at a stack of images? Well, you can play them like a movie and perhaps get a sense of the kicking. Still, this seems rather imprecise.
We can apply AI techniques, however, to determine exactly where the knees and leg bones are on each image and measure the angle and frequency of kicking. Now there is no guesswork, but a real measurement. This considerably simplifies determining whether the kicks are healthy or unhealthy. We’re also able to deliver this analysis to the radiologist via tools they are familiar with. If the baby’s kicking is deemed unhealthy, the radiologist can more confidently assess the need for surgical intervention. In cases such as this, timely surgical intervention in utero can impact the baby’s wellbeing throughout its life.
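To make the landmark-based measurement concrete, here is a minimal sketch of how a knee angle could be computed per frame once a detector has produced 2D hip, knee and ankle coordinates. The `joint_angle` function and its coordinate format are illustrative assumptions, not part of ChRIS or the FNNDSC pipeline.

```python
import math

def joint_angle(hip, knee, ankle):
    """Knee angle in degrees: the angle at the knee between the
    knee->hip and knee->ankle vectors (180 = fully extended leg).
    Inputs are (x, y) landmark coordinates from one image frame.
    Hypothetical helper for illustration only."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    cos_a = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_a))
```

Repeating this on every frame turns the image stack into an angle-over-time series, from which kick frequency can be read off (for example, by counting peaks) rather than eyeballed from a movie.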
How are open source platforms like ChRIS providing transparency and control over AI tools?
The ChRIS Research Integration Service is an open source, web-based medical computing platform. Most of the tools, AI and otherwise, that run on the ChRIS platform are open source. In the case of an AI tool, having full access to the source code and data it would use affords a strong measure of trust. It also allows us to independently test the algorithm. Moreover, researchers utilizing public funding are commonly required to use open source in software development. The best way to understand any computing tool, including AI tools, is to assess its source code. Equally important is the platform that is running the tool. Having transparency in both the platform (that controls the tool) and the tool itself covers the entire meaningful spectrum.
How does Red Hat OpenShift help power these innovations?
ChRIS was designed for seamless integration with Red Hat infrastructure and to scale effectively with Red Hat OpenShift. This ensures consistent performance for platforms and tools, whether running on laptops with Podman Desktop or across large computing clusters. With Red Hat providing this consistent foundation, researchers don’t have to try and finagle this themselves. Instead, they can focus on what they know how to do and immediately benefit from the power of this infrastructure.
Furthermore, public clouds also offer OpenShift, which enables us to deploy ChRIS on the cloud as well. This means that you can have an isolated ChRIS instance deployed fully inside a hospital that analyzes sensitive, private information (e.g., patient data). Then, if needed, this internal ChRIS can also speak to a different ChRIS deployed on a cloud and pass along anonymized information to be analyzed safely at scale using more powerful computers. So a small hospital without powerful on-premise resources can have the best of all worlds: private data can still be processed internally and, where scale is needed, anonymized data can be analyzed on the cloud. Critically, in the ChRIS platform, the exact same ChRIS application used on a laptop can be run without modification on a large OpenShift cluster.
Additionally, running the ChRIS platform on Red Hat OpenShift on the cloud means any other ChRIS in the world can easily download applications that the public ChRIS hosts. It’s a bit like an app store for medical analysis. Right now you can download ChRIS to your laptop, launch it, connect to our global ChRIS, and with a single click, install any of our available computing and analysis applications, all at no cost. Innovations in medical computing can spread freely to wherever they are needed.
What skills have you developed for your role?
Well, that assumes I know what my role even is! At times I feel more like a jack of all trades and demonstrable master of none. But more seriously, some of the most important skills I’ve had to develop are the intangible people skills. Technical and scientific skills can be complex and hard to master. But without the ability to connect meaningfully to people in a multiplicity of contexts, any fruits from the scientific labour are simply lost, or worse yet, abused. I find it ironic that these skills are often the ones not taught formally when they are so important.
What challenges do you face and how do you manage them?
Back at the lab (or office), my challenges as a leader are probably the same as most people’s. How do you inspire your team? How do you juggle that with your own work? How do you stay awake in yet another Zoom meeting? I have to say that I probably got lucky in that regard by having wonderful people. My management strategy is to point someone in a direction and get out of their way.
Other challenges, especially in computing in healthcare, really have very little to do with computing. For example, as we all know, computer programs are fragile little beasts. Oftentimes the smallest hurdle and they fall over, legs in the air. Imagine you have an AI image application that finds a tumour in a mammogram. Well, often in a hospital there are many different image “types” that a technologist can apply when told to do a mammogram. A radiologist might order a mammogram, but at the machine itself, there is often a range of image scans to choose from. This is a bit like different zoom levels or filters on your phone camera. While a human radiologist is robust enough to understand the image no matter how it was “filtered,” an AI application is not. It is trained on one specific type of image. If the technologist selects a broadly similar but slightly different scan, the AI application will probably fail to analyze it. These operational challenges are quite endemic.
What does the future hold for the healthcare sector?
Healthcare is moving profoundly and irrevocably from the 20th century concept of medicine-is-pharmacology to the 21st century concept of medicine-is-computing. Of course pharmacology will remain, but it will be informed and shaped by computing. I think that right now, in the mid-2020s, we are at an inflection point that will shape the nature of computing in medicine. To stretch an analogy, pharma is, and has been, “closed source.” If computing takes prominence, we are now in a position to shift that to becoming profoundly “open source.” I feel that is the best way forward, but it won’t just happen by default. If left alone, computing in healthcare will probably devolve into the equivalent of the “streaming wars.” If you want a mammogram AI application, you need to use the platform of company XYZ. If you want the brain tumour application, you need the platform of company ABC (see how I’m cleverly not naming actual healthcare imaging companies). All the apps and platforms in this world will be closed source. XYZ’s app won’t be available on ABC’s platform, just like you won’t find some Disney+ shows on Netflix. While okay for entertainment, it is not okay for healthcare computing.
This is one of the reasons I believe that companies like Red Hat can actually have a hand in transforming this space. For a start, their infrastructure is better than a healthcare company’s computing offering. Labs like us can innovate on top of this infrastructure and deliver solutions quickly in our space. I know that Red Hat has focused on making AI open source, but I feel the impact of this can spread vertically through healthcare, making healthcare computing open source, too.
To be clear, I’m not suggesting healthcare companies should not make money, but that they should compete on the merits of providing the best care and the best apps, using open and universal underlying platforms. The world is a better place if all mobile phones use one charging interface. Similarly, the evolving healthcare sector would be better if apps could run on all platforms, if yet-to-be-built AI apps were transparent, or, conversely, if the choice of some underlying platform did not restrict the care you can provide to your patients.
I am an optimist. I choose to believe that such a world is possible and hopefully in our small way our team at BCH can help bring some of this about.
Learn more about Red Hat’s AI solutions and the role of open source and AI in healthcare.
About the authors
Dr. Rudolph Pienaar completed a Bachelor's and Master's in Electrical, Electronic, and Computer Engineering at the University of Pretoria in South Africa. He also holds a doctorate in Biomedical Engineering from Cleveland State University/Cleveland Clinic Foundation, where he conducted research in reinforcement learning applied to musculoskeletal biomechanical control systems. He completed postdoctoral work at Massachusetts General Hospital, where he was an assistant in Medical Imaging. He is currently faculty in Radiology at Boston Children's Hospital and an assistant professor in Radiology at Harvard Medical School.
Dr. Pienaar's research interests include brain surface feature analysis, tractography from an informatics perspective, cloud computing, image visualization, and system design. At the Fetal Neonatal Neuroimaging and Developmental Science Center at Boston Children's Hospital, he leads the Advanced Computing Group, responsible for developing new informatics infrastructure solutions to clinical problems. He is the main technical lead on ChRIS.
Amy works in EMEA PR and communications at Red Hat, driving awareness of Red Hat’s portfolio, the value of enterprise open source for customers and its vision for the future of IT. She is also co-chair of the UK chapter of Red Hat's Women's Leadership Community.