There are many reasons enterprises want to adopt Kubernetes to run web services, mobile applications, Internet of Things (IoT) edge streaming, artificial intelligence and machine learning (AI/ML), and other business applications. One of the biggest benefits of Kubernetes is the ability to autoscale applications on demand, which reduces the time teams spend handling capacity incidents. It also makes the cloud platform more reliable and stable, so business services run seamlessly.
By default, Kubernetes autoscaling is driven by hardware resource utilization (CPU, memory) through the Horizontal Pod Autoscaler (HPA). This creates a challenge for event-driven architectures. In an event-driven architecture, you probably have multiple event sources, such as Apache Kafka and message queue brokers, producing message streams for your applications to consume. Metrics from those sources, such as consumer lag or queue depth, are often more relevant than a pod's CPU usage for deciding when applications need to scale out and in.
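To illustrate the default behavior, here is a minimal HPA manifest that scales a hypothetical Deployment named `event-consumer` on CPU utilization alone. The Deployment name and thresholds are placeholders for illustration; note that nothing here reflects how far behind the consumer is on its message stream:

```yaml
# Sketch: standard HPA scaling a consumer Deployment on CPU only.
# "event-consumer" and the thresholds are illustrative values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: event-consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: event-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # scale out when average CPU exceeds 80%
```

If the consumer is I/O-bound, CPU can stay low even while a large backlog of messages accumulates, so this policy may never trigger a scale-out when it is actually needed.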
Kubernetes Event-Driven Autoscaling (KEDA) is designed to solve this challenge by autoscaling existing deployed applications based on event metrics. Knative Serving can also scale serverless applications on Kubernetes using its own Knative autoscaler. But what if you need to manage autoscaling for both standard applications and serverless functions based on event sources?
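For the KEDA side of this picture, a `ScaledObject` can scale the same kind of Deployment on Kafka consumer lag instead of CPU. The Deployment name, topic, and broker address below are hypothetical, and the manifest is a sketch of the general pattern rather than a complete production setup:

```yaml
# Sketch: KEDA ScaledObject scaling a consumer Deployment on Kafka lag.
# Broker address, topic, and consumer group are illustrative values.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: event-consumer-scaler
spec:
  scaleTargetRef:
    name: event-consumer        # the Deployment to scale
  minReplicaCount: 0            # KEDA can scale all the way down to zero
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap:9092
        consumerGroup: event-consumer-group
        topic: orders
        lagThreshold: "50"      # target lag per replica before scaling out
```

Here the scaling signal is the consumer group's lag on the `orders` topic, so replicas are added when messages pile up and removed (down to zero) when the stream is idle, regardless of CPU usage.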
Fortunately, there is a way to redesign an event-driven autoscaling architecture utilizing Knative and KEDA infrastructure. I'll be discussing this at Red Hat's Event-Driven Architecture event on April 19, 2022. In my presentation, Event-driven autoscaling through KEDA and Knative Integration, I'll also explain how to deploy serverless applications (Quarkus) using Knative Serving and KEDA to autoscale Knative Eventing components (KafkaSource) based on event consumption rather than standard resource metrics (CPU, memory).
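To give a flavor of combining the two, the sketch below shows a `KafkaSource` that delivers events to a Knative Service, annotated so that a KEDA-based autoscaler (such as the Knative eventing-autoscaler-keda extension) scales the source on consumer lag. The broker, topic, service name, and annotation values are assumptions for illustration, and the exact annotations depend on the extension version you deploy:

```yaml
# Sketch: KafkaSource feeding a Knative Service, with annotations for
# KEDA-driven autoscaling of the source. All names are illustrative.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source
  annotations:
    autoscaling.knative.dev/class: keda.autoscaling.knative.dev
    autoscaling.knative.dev/minScale: "0"
    autoscaling.knative.dev/maxScale: "5"
    keda.autoscaling.knative.dev/lagThreshold: "10"
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap:9092
  topics:
    - orders
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: orders-processor    # a Quarkus app deployed as a Knative Service
```

With this pattern, Knative Serving scales the Quarkus service itself, while KEDA scales the KafkaSource adapter that feeds it, both driven by event traffic rather than CPU or memory.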
You can access the slides or watch the video from the event below.