Red Hatters Atul Deshpande, Principal Chief Architect, and Rob McManus, Principal Product Marketing Manager, have just returned from Digital Transformation World Ignite (DTW) Copenhagen, the annual gathering for discussing operations support systems/business support systems (OSS/BSS), TM Forum’s Open Digital Architecture (ODA) and IT architectures. This year’s focus was on autonomous intelligent networks. Customers and partners were eager to learn why artificial intelligence (AI) is crucial to autonomous intelligent networks and how Red Hat is collaborating with partners to create effective solutions for service providers.
Defining an autonomous intelligent network
An autonomous intelligent network is a fully automated, zero-touch deployment and operations infrastructure consisting of compute, storage and networking for information and communication technology (ICT) services that is self-configuring, self-healing, self-optimizing and self-evolving. An autonomous intelligent network must embed hyperautomation, in which everything is automated: data analytics and AI models provide deep learning for advanced decision making and autonomy, while governance provides the privacy and usage policies that enforce compliant deployment and operational decisions and actions.
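To make this definition more concrete, here is a minimal sketch of one closed control loop, the building block of a self-healing network: an AI-scored anomaly drives an automated action only when a governance policy permits it. All names, fields and thresholds (Anomaly, GovernancePolicy, decide_action) are hypothetical illustrations, not part of any Red Hat or TM Forum API.

```python
# Minimal closed-loop sketch: detect -> decide -> act, gated by governance.
# All identifiers below are illustrative assumptions, not a real interface.
from dataclasses import dataclass

@dataclass
class Anomaly:
    site: str        # where the problem was detected
    kind: str        # e.g., "packet_loss", "cpu_saturation"
    severity: float  # 0.0 .. 1.0, as scored by an AI model

@dataclass
class GovernancePolicy:
    max_auto_severity: float  # above this, a human must approve
    allowed_actions: set      # actions pre-approved for zero-touch execution

    def permits(self, action: str, anomaly: Anomaly) -> bool:
        # Governance gate: only compliant, pre-approved actions run automatically.
        return action in self.allowed_actions and anomaly.severity <= self.max_auto_severity

def decide_action(anomaly: Anomaly) -> str:
    # Stand-in for the AI/analytics decision step.
    return "restart_cnf" if anomaly.kind == "packet_loss" else "scale_out"

def closed_loop(anomalies, policy: GovernancePolicy):
    for anomaly in anomalies:
        action = decide_action(anomaly)
        if policy.permits(action, anomaly):
            print(f"[auto] {action} at {anomaly.site}")       # self-healing, zero-touch
        else:
            print(f"[escalate] {action} at {anomaly.site} needs human approval")

if __name__ == "__main__":
    policy = GovernancePolicy(max_auto_severity=0.7,
                              allowed_actions={"restart_cnf", "scale_out"})
    closed_loop([Anomaly("cell-042", "packet_loss", 0.4),
                 Anomaly("core-eu1", "cpu_saturation", 0.9)], policy)
```

In a real deployment, each step would be backed by network telemetry, trained AI models and an orchestrator, and the governance policy would encode the privacy and compliance rules described above.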

How to embark on an automation transformation journey
Atul and Rob noted a growing trend: autonomous intelligent networks are increasingly being built on public clouds rather than on premises.
Consider the following: you are a leading telecommunication (telco) service provider operating nationwide LTE and 5G networks, with a legacy network stack (fixed broadband, 2G and so on) still in operation. To improve efficiency and operations, you have engaged a few partners to select use cases for automating end-to-end operations using AI and an application platform. Moreover, almost all of your network runs on premises.

What choices are available?
There are several deployment options available:
- On-premises: Building the network entirely within the service provider’s own infrastructure
- On the public cloud: Deploying the network entirely using a public cloud provider
- A mix of both: Combining on-premises and public cloud deployments for the network
There are challenges you’ll face with each of these options. In this blog post, we’ll delve deeper into deploying an autonomous intelligent network on the public cloud.
Deployment using public cloud
The solution architecture of this scenario could look as follows:

There are 4 key challenges to consider when using a public cloud:
- The service provider sends and stores almost all of its network data in the public cloud. Over time, data storage and transfer costs will increase significantly (a rough cost sketch follows this list).
- Depending on regulation and country-specific laws, the service provider must abide by the relevant, mandated data sovereignty requirements, and any breach of those laws carries consequences.
- The data and AI engineering application platform may lead to vendor lock-in, which creates complexity if the service provider wants to move a few elements of the stack to a different cloud or on premises. Doing so would require an enormous effort to port all workloads and applications.
- Even with benefits at the outset, operating expenses (OPEX) will eventually hit an upper limit.
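To illustrate the first of these challenges, the following back-of-envelope sketch shows how storage and egress costs can compound when all network telemetry lands in a public cloud. The per-GB prices, daily volumes and retention assumptions are purely illustrative figures, not any provider’s actual rates.

```python
# Back-of-envelope view of challenge 1: telemetry storage and egress costs over time.
# All prices and volumes below are assumed for illustration only.
STORAGE_PER_GB_MONTH = 0.02   # USD, assumed object-storage price
EGRESS_PER_GB = 0.09          # USD, assumed data-transfer-out price
DAILY_TELEMETRY_GB = 5_000    # assumed network telemetry ingested per day
EGRESS_SHARE = 0.10           # assumed fraction pulled back out (reports, on-prem tools)

for month in (1, 12, 24, 36):
    retained_gb = DAILY_TELEMETRY_GB * 30 * month                     # naive full retention
    storage_cost = retained_gb * STORAGE_PER_GB_MONTH                 # monthly storage bill
    egress_cost = DAILY_TELEMETRY_GB * 30 * EGRESS_SHARE * EGRESS_PER_GB  # monthly egress bill
    print(f"month {month:>2}: ~{retained_gb / 1e6:.1f} PB retained, "
          f"storage ≈ ${storage_cost:,.0f}/mo, egress ≈ ${egress_cost:,.0f}/mo")
```

Even under these modest assumptions, the monthly storage bill grows roughly linearly with retained data, which is why the cost curve matters more at year three than at month one.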
The image below shows the framework for a service provider porting some of the network functions (e.g., 5G core) to the public cloud:

This could be considered phase 2: the service provider has ported a few network functions to the public cloud and has started to build an autonomous intelligent network. However, the above 4 challenges persist.
Many leading network equipment providers (NEPs) now offer 5G container-as-a-service (CaaS) on the public cloud. To complicate matters further, public cloud providers also offer autonomous operations as a service (AOaaS), combining their data, AI and gen AI stacks with network functions.
The image below shows the framework for a service provider adopting an autonomous intelligent network entirely on the public cloud:

The 4 challenges persist within this framework, indicating that the benefits of an end-to-end autonomous intelligent network will eventually diminish due to increasing cloud OPEX and platform engineering costs.
The autonomous network framework defined by TM Forum doesn't explain how to build automation use cases; it leaves the deployment options, and their challenges, entirely to stakeholders. Service providers would benefit greatly from guiding principles that lay out the available choices and their respective pitfalls.
Wrap up
Service providers need to integrate automation methodologies within a robust framework to deliver consistent, secure and autonomous intelligent networks that can evolve at the speed of innovation.
They also need choice when building autonomous intelligent networks, so they can optimize their OPEX and capital expenditure (CAPEX) and realize the benefits of adoption. Sometimes the obvious choices are not optimal when networks scale.
This blog post lays the foundation for building autonomous intelligent networks with a service provider cloud. In the next blog post in the series, we will delve deeper into an optimized approach to build an autonomous intelligent network across a distributed hybrid cloud using Red Hat's framework.
About the author
Rob McManus is a Principal Product Marketing Manager at Red Hat. McManus is an adept member of complex, matrix-style teams tasked with defining and positioning telecommunication service provider and partner solutions, with a focus on network transformation that includes 5G, vRAN and the evolution to cloud-native network functions (CNFs).