Scaling .NET Core apps with Kubernetes-based event-driven autoscaling (KEDA)
TL;DR - Learn how you can autoscale a .NET Core 3.0 worker based on Service Bus queue size in Kubernetes with KEDA.
In our previous article we learned what Kubernetes-based event-driven autoscaling (KEDA) is and how it can help us scale our applications. As part of that effort, we built a small .NET Core 3.0 Worker that processes messages from a Service Bus queue, and used a ScaledObject to scale our order processor via KEDA.
Today, we are happy to contribute this sample to the KEDA organization! Let's have a closer look and run it ourselves!
For the sake of keeping this walkthrough short, I will not go in depth on building the .NET Core Worker itself. If you are interested in learning about that, you can find all the sources on GitHub.
Before we start...
This article requires you to have the following tools & services:
- Azure CLI
- Azure Subscription
- .NET Core 3.0 Preview 5
- Kubernetes cluster with KEDA installed
Creating an Azure Service Bus Queue
We will start by creating a new Azure Service Bus namespace:
⚡ tkerkhove@tomkerkhove C:\keda
❯ az servicebus namespace create --name <namespace-name> --resource-group <resource-group-name> --sku Basic
After that, we create an orders queue in our namespace:
⚡ tkerkhove@tomkerkhove C:\keda
❯ az servicebus queue create --namespace-name <namespace-name> --name orders --resource-group <resource-group-name>
We need to be able to connect to our queue, so we create a new authorization rule. In this case, we will assign Manage permissions, given that KEDA requires Manage rights to read the queue's message count.
⚡ tkerkhove@tomkerkhove C:\keda
❯ az servicebus queue authorization-rule create --resource-group <resource-group-name> --namespace-name <namespace-name> --queue-name orders --name order-consumer --rights Manage Send Listen
Once the authorization rule is created, we can list the connection string as follows:
⚡ tkerkhove@tomkerkhove C:\keda
❯ az servicebus queue authorization-rule keys list --resource-group <resource-group-name> --namespace-name <namespace-name> --queue-name orders --name order-consumer
{
  "aliasPrimaryConnectionString": null,
  "aliasSecondaryConnectionString": null,
  "keyName": "order-consumer",
  "primaryConnectionString": "Endpoint=sb://keda.servicebus.windows.net/;SharedAccessKeyName=order-consumer;SharedAccessKey=<redacted>;EntityPath=orders",
  "primaryKey": "<redacted>",
  "secondaryConnectionString": "Endpoint=sb://keda.servicebus.windows.net/;SharedAccessKeyName=order-consumer;SharedAccessKey=<redacted>;EntityPath=orders",
  "secondaryKey": "<redacted>"
}
Create a base64 representation of the primaryConnectionString:
⚡ tkerkhove@tomkerkhove C:\keda
❯ echo -n "<connection string>" | base64
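Be careful with how the value is encoded: a plain `echo` appends a trailing newline, which ends up inside the decoded secret and produces a broken connection string. A quick sketch of the difference, using a placeholder value instead of a real key:

```shell
# Placeholder standing in for the real connection string.
CONNECTION_STRING="Endpoint=sb://example.servicebus.windows.net/;SharedAccessKeyName=order-consumer;SharedAccessKey=secret;EntityPath=orders"

# echo adds a trailing '\n'; echo -n does not, so the two encodings differ.
with_newline=$(echo "$CONNECTION_STRING" | base64)
clean=$(echo -n "$CONNECTION_STRING" | base64)

# Only the clean variant decodes back to the exact original value.
echo "$clean" | base64 --decode
```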
Create a secret to deploy in Kubernetes that contains our connection string:
apiVersion: v1
kind: Secret
metadata:
  name: order-secrets
  labels:
    app: order-processor
data:
  SERVICEBUS_QUEUE_CONNECTIONSTRING: <base-64-connection-string>
This secret will be used by our order processor and KEDA to connect to the queue.
Save the secret declaration in deploy/deploy-secret.yaml.
Deploying our Service Bus secret in Kubernetes
We will start by creating a new Kubernetes namespace to run our order processor in:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl create namespace keda-dotnet-sample
namespace "keda-dotnet-sample" created
Before we can connect to our queue, we need to deploy the secret containing the Service Bus connection string for the queue.
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl apply -f deploy/deploy-secret.yaml --namespace keda-dotnet-sample
secret "order-secrets" created
Once created, you should be able to retrieve the secret:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl get secrets --namespace keda-dotnet-sample
NAME            TYPE      DATA      AGE
order-secrets   Opaque    1         24s
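To double-check that the stored value is intact, you can read the secret back and decode it; the jsonpath expression below assumes the key name we used in the manifest:

```shell
# Extract the base64 value stored under our key and decode it;
# the output should be the original, raw connection string.
kubectl get secret order-secrets --namespace keda-dotnet-sample \
  --output jsonpath='{.data.SERVICEBUS_QUEUE_CONNECTIONSTRING}' \
  | base64 --decode
```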
Deploying our order processor pod
We are ready to go! We will start by creating a Kubernetes deployment.
The deployment will schedule a pod running our order processor based on the tomkerkhove/keda-sample-dotnet-worker-servicebus-queue Docker image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-processor
  labels:
    app: order-processor
spec:
  selector:
    matchLabels:
      app: order-processor
  template:
    metadata:
      labels:
        app: order-processor
    spec:
      containers:
      - name: order-processor
        image: tomkerkhove/keda-sample-dotnet-worker-servicebus-queue
        env:
        - name: KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING
          valueFrom:
            secretKeyRef:
              name: order-secrets
              key: SERVICEBUS_QUEUE_CONNECTIONSTRING
As you can see, it passes a KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING environment variable containing the value of the secret we've just deployed. Kubernetes automatically decodes the base64 value and passes the raw connection string to the container.
Save the deployment declaration and deploy it:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl apply -f deploy/deploy-queue-processor.yaml --namespace keda-dotnet-sample
deployment.apps "order-processor" created
Once created, you will see that our deployment shows up with one pod:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl get deployments --namespace keda-dotnet-sample -o wide
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                                                   SELECTOR
order-processor   1         1         1            1           49s   order-processor   tomkerkhove/keda-sample-dotnet-worker-servicebus-queue   app=order-processor
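To confirm the secret actually reached the container, you can print the environment variable from inside the pod. The pod name is looked up dynamically here, since the hash suffix differs per deployment:

```shell
# Look up the running pod by its app label...
POD=$(kubectl get pods --namespace keda-dotnet-sample \
  --selector app=order-processor \
  --output jsonpath='{.items[0].metadata.name}')

# ...and print the injected connection string from inside the container.
kubectl exec "$POD" --namespace keda-dotnet-sample -- \
  printenv KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING
```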
Defining how we want to autoscale with a ScaledObject
Now that our app is running we can start automatically scaling it!
By deploying a ScaledObject you tell KEDA what deployment you want to scale and how:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    app: order-processor
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  # minReplicaCount: 0   # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      connection: KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING
      queueLength: '5'
In our case we define that we want to use the azure-servicebus scale trigger and what our criteria are. For our scenario we'd like to scale out when there are 5 or more messages in the orders queue, with a maximum of 10 concurrent replicas as defined via maxReplicaCount.
KEDA will use the KEDA_SERVICEBUS_QUEUE_CONNECTIONSTRING environment variable on our order-processor Kubernetes Deployment to connect to Azure Service Bus. This allows us to avoid duplicating configuration.
Note - If we were to use a sidecar, we would need to define containerName to specify which container holds this environment variable.
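Since KEDA scales on the live message count, it can be handy to query that same metric yourself. The snippet below assumes the messageCount property exposed by az servicebus queue show:

```shell
# Number of messages currently sitting in the orders queue;
# KEDA compares this against queueLength (5) when deciding how to scale.
az servicebus queue show \
  --resource-group <resource-group-name> \
  --namespace-name <namespace-name> \
  --name orders \
  --query messageCount
```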
Save the ScaledObject declaration and deploy it:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl apply -f deploy/deploy-queue-scaler.yaml --namespace keda-dotnet-sample
scaledobject.keda.k8s.io "order-processor-scaler" created
Once the ScaledObject is deployed you'll notice that we don't have any pods running anymore:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl get deployments --namespace keda-dotnet-sample -o wide
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                                                   SELECTOR
order-processor   0         0         0            0           3m    order-processor   tomkerkhove/keda-sample-dotnet-worker-servicebus-queue   app=order-processor
This is because our queue is empty and KEDA scaled our deployment down to zero until there is work to do.
In that case, let's generate some!
Publishing messages to the queue
The following job will send messages to the orders queue that the order processor is listening on. As the queue builds up, KEDA will help the horizontal pod autoscaler add more and more pods until the queue is drained. The order generator lets you specify how many messages you want to queue.
First you should clone the project:
❯ git clone https://github.com/kedacore/sample-dotnet-worker-servicebus-queue
❯ cd sample-dotnet-worker-servicebus-queue
Configure the connection string in the tool via your favorite text editor, in this case via Visual Studio Code:
⚡ tkerkhove@tomkerkhove C:\keda
❯ code .\src\Keda.Samples.Dotnet.OrderGenerator\Program.cs
Next, you can run the order generator via the CLI:
⚡ tkerkhove@tomkerkhove C:\keda
❯ dotnet run --project .\src\Keda.Samples.Dotnet.OrderGenerator\Keda.Samples.Dotnet.OrderGenerator.csproj
Let's queue some orders, how many do you want?
300
Queuing order 719a7b19-f1f7-4f46-a543-8da9bfaf843d - A Hat for Reilly Davis
Queuing order 5c3a954c-c356-4cc9-b1d8-e31cd2c04a5a - A Salad for Savanna Rowe
[...]
That's it, see you later!
Now that the messages are generated, you'll see that KEDA starts automatically scaling out your deployment:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl get deployments --namespace keda-dotnet-sample -o wide
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS        IMAGES                                                   SELECTOR
order-processor   8         8         8            4           7m    order-processor   tomkerkhove/keda-sample-dotnet-worker-servicebus-queue   app=order-processor
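To follow the scale-out as it happens, you can watch the deployment and also inspect the horizontal pod autoscaler that KEDA provisions behind the scenes (the HPA name is generated by KEDA, so list it rather than guessing it):

```shell
# Stream replica changes as KEDA reacts to the queue; Ctrl+C to stop.
kubectl get deployments --namespace keda-dotnet-sample --watch

# KEDA uses an HPA for 1..N scaling; listing it shows the current
# metric value and replica targets.
kubectl get hpa --namespace keda-dotnet-sample
```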
Eventually we will have 10 pods running, processing messages in parallel:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl get pods --namespace keda-dotnet-sample
NAME                              READY     STATUS    RESTARTS   AGE
order-processor-65d5dd564-9wbph   1/1       Running   0          54s
order-processor-65d5dd564-czlqb   1/1       Running   0          39s
order-processor-65d5dd564-h2l5l   1/1       Running   0          54s
order-processor-65d5dd564-h6fcl   1/1       Running   0          24s
order-processor-65d5dd564-httnf   1/1       Running   0          1m
order-processor-65d5dd564-j64wq   1/1       Running   0          54s
order-processor-65d5dd564-ncwfd   1/1       Running   0          39s
order-processor-65d5dd564-q7tkt   1/1       Running   0          39s
order-processor-65d5dd564-t2g6x   1/1       Running   0          24s
order-processor-65d5dd564-v79x6   1/1       Running   0          39s
You can look at the logs for a given processor as follows:
⚡ tkerkhove@tomkerkhove C:\keda
❯ kubectl logs order-processor-65d5dd564-httnf --namespace keda-dotnet-sample
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Starting message pump at: 06/03/2019 12:32:14 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Message pump started at: 06/03/2019 12:32:14 +00:00
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Received message 513b896fbe3b4085ad274d9c23e01842 with body {"Id":"7ff54254-a370-4697-8115-134e55ebdc65","Amount":1741776525,"ArticleNumber":"Chicken","Customer":{"FirstName":"Myrtis","LastName":"Balistreri"}}
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Processing order 7ff54254-a370-4697-8115-134e55ebdc65 for 1741776525 units of Chicken bought by Myrtis Balistreri at: 06/03/2019 12:32:15 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Order 7ff54254-a370-4697-8115-134e55ebdc65 processed at: 06/03/2019 12:32:17 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Message 513b896fbe3b4085ad274d9c23e01842 processed at: 06/03/2019 12:32:17 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Received message 9d24f13cd5ec44e884efdc9ed4a8842d with body {"Id":"cd9fe9e4-f421-432d-9b19-b94dbf9090f5","Amount":-186606051,"ArticleNumber":"Shoes","Customer":{"FirstName":"Valerie","LastName":"Schaefer"}}
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Processing order cd9fe9e4-f421-432d-9b19-b94dbf9090f5 for -186606051 units of Shoes bought by Valerie Schaefer at: 06/03/2019 12:32:17 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Order cd9fe9e4-f421-432d-9b19-b94dbf9090f5 processed at: 06/03/2019 12:32:19 +00:00
info: Keda.Samples.Dotnet.OrderProcessor.OrdersQueueProcessor[0]
      Message 9d24f13cd5ec44e884efdc9ed4a8842d processed at: 06/03/2019 12:32:19 +00:00
Once all the messages have been processed, KEDA will scale the deployment back to 0 pod instances.
Time to clean up
Don't forget to clean up your resources!
❯ kubectl delete -f deploy/deploy-queue-processor.yaml --namespace keda-dotnet-sample
❯ kubectl delete -f deploy/deploy-secret.yaml --namespace keda-dotnet-sample
❯ kubectl delete namespace keda-dotnet-sample
❯ az servicebus namespace delete --name <namespace-name> --resource-group <resource-group-name>
❯ helm delete --purge keda
❯ kubectl delete customresourcedefinition scaledobjects.keda.k8s.io
❯ kubectl delete namespace keda
Conclusion
We have easily deployed a .NET Core 3.0 Worker on Kubernetes that processes messages from Azure Service Bus. Once we deployed a ScaledObject for our Kubernetes Deployment, KEDA started scaling the pods out and in according to the queue depth.
We could very easily plug in autoscaling for our existing application without making any changes!
Thanks for reading,
Tom.