KEDA 2.0 is now generally available and ready to scale all your workloads
Over the past few years, we’ve seen more and more customers moving their applications to Kubernetes so they can run them on an open platform. However, using Kubernetes is not a walk in the park, especially when you are new to it. One of those challenging aspects, certainly in the early days, is autoscaling Kubernetes workloads.
Back in May 2019, Microsoft & Red Hat announced Kubernetes Event-Driven Autoscaling (KEDA), which allows you to easily scale Kubernetes applications based on metrics coming from a variety of cloud providers and other technologies that are not supported out of the box. Since autoscaling is one of my passions and KEDA closes a huge gap in Kubernetes, I joined them to make application autoscaling dead simple.
In January 2020, we successfully donated KEDA to the CNCF as a sandbox project, and in March we had the honor of hosting a Codit webinar with Jeff Hollan (Group Product Manager for Microsoft Azure Functions) and myself to talk about KEDA.
Announcing KEDA 2.0
Yesterday was an exciting day as we released KEDA 2.0, which takes application autoscaling to the next level! It is now generally available and ready to scale all your workloads! 🎊
Here are just a few highlights of this release:
- Introducing scalers for Azure Log Analytics, Metrics API, CPU, Memory & External Push
- Support for multiple triggers in a single ScaledObject (see the sketch after this list)
- Support for Managed Identity for Azure Monitor scaler
- Improved operational experience with probing, Prometheus metrics, and more
- …
You can read more about it in our blog post, which gives a full overview of what’s new and shows how Alibaba Cloud is using KEDA to offer autoscaling to their customers! Want to get started? Use our .NET Core sample to learn how to scale your workloads!
Already using KEDA? Thanks! Here is a migration guide to help you move to KEDA 2.0.
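To give a flavor of what migrating involves: KEDA 2.0 moved the API group from keda.k8s.io to keda.sh and renamed `scaleTargetRef.deploymentName` to `scaleTargetRef.name`, since 2.0 can scale more than just Deployments. A minimal before/after sketch with a hypothetical app name (triggers omitted for brevity):

```yaml
# KEDA 1.x
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler        # hypothetical name
spec:
  scaleTargetRef:
    deploymentName: my-app   # 1.x: Deployments only
---
# KEDA 2.0
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app             # 2.0: any scalable workload
```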
A big thank you to our community and users, a few of whom are listed here:
If you are using KEDA in production and we may list you as a user, please let us know!