Revolgy blog

Advanced API Routing in EKS with Traefik, AWS LoadBalancer Controller and External-DNS

Written by Jan Sokol | January 14, 2021

Have you been thinking about how to distribute incoming HTTP(S) (and TCP) requests in EKS? This problem might sound complicated at first, since in most cases you need to combine multiple applications to make it work. But it’s not as hard as it looks at first glance!

I will show you a solution that is really versatile and gives you the option of using advanced routing rules — such as query parameter-based routing, or matching a HTTP header with a regular expression.

As it is with Kubernetes and other open-source software, there are multiple ways to solve this problem:

Kubernetes Service with type: LoadBalancer
This is the native option for exposing services in EKS, although it does not use an Ingress resource at all. Every time you create a Service resource with type LoadBalancer, it spawns a separate ELB, which incurs unnecessary cost. You also cannot route requests based on URIs or request headers.
 
AWS LoadBalancer Controller alone
Another option is to use the AWS LoadBalancer Controller on its own; its drawbacks are described below.
 
A special solution!
I will show you a different solution consisting of three main parts: Traefik, AWS LoadBalancer Controller, and External-DNS.

The diagram above shows how the Kubernetes services interoperate with each other. The client communicates with the ALB over TLS only, which keeps the connection secure. The ALB terminates TLS and then talks to Traefik over plain HTTP.

Traefik is used to redistribute requests to corresponding Kubernetes services, based on the rules described below.

We will use AWS LoadBalancer Controller to dynamically provision the ALB for Traefik, along with its configuration. To relieve us from configuring DNS records manually, an External-DNS deployment will be used; it watches for annotations in Kubernetes and creates DNS records in Route 53 (and many other DNS providers).

I will show you how to set up this combination of services in AWS. The great thing is that all of them are open-source, so you can tinker with the setup and swap out any part you miss or dislike.

Prerequisites aka what you'll need

To create the setup described above, you will need a few things:

  • An AWS account,
  • An EKS cluster created in that account,
  • kubectl and Helm 3 installed on your local machine,
  • A custom public DNS domain hosted in Route 53,
  • Kube2iam deployed in the cluster so that Kubernetes Services can assume IAM roles.

After that you are ready to go!
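Before moving on, it can help to verify cluster access and add the Helm chart repositories used later in this post (the repository URLs below are the commonly used upstream ones; double-check them against each project's documentation):

```
# Verify cluster access and tooling
kubectl get nodes
helm version

# Chart repositories used in the rest of this post
helm repo add traefik https://helm.traefik.io/traefik
helm repo add eks https://aws.github.io/eks-charts
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```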


Traefik API Gateway

We will use Traefik as an API gateway to route client requests to corresponding Kubernetes services.
Traefik was born in the cloud, giving it an advantage over conventional reverse proxy tools such as NGINX and HAProxy.
What I like the most about Traefik is the extensibility shown in the diagram below.

As the diagram shows, a request first arrives at one of Traefik's entry points (HTTP, HTTPS with a Let's Encrypt, self-signed, or imported certificate, or plain TCP), passes through one or more middlewares (you can use one of the many existing ones or create your own), and is then forwarded to the corresponding Kubernetes Service.
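To give a feel for the middleware step, Traefik's Kubernetes CRDs let you declare a middleware and reference it from a route; a minimal sketch (the name and prefix are made up for illustration):

```
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-api-prefix
spec:
  stripPrefix:
    prefixes:
      - "/api"
```

Such a middleware is then referenced from an IngressRoute (shown later in this post) through its middlewares list.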

How to deploy Traefik with Helm?

Helm is used to deploy Traefik as well as the other services. To deploy Traefik with Helm, you will need a few things. First, deploy Traefik from the official Helm chart. In addition to the standard configuration, create a values.yaml file containing:

```
service:
  annotations: {}
  type: NodePort
```

This way Kubernetes allocates a static port from a range and each worker node will proxy that same port to the Traefik Service. We will bind an ALB to this static port later.
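With the values file in place, Traefik can be installed from the official chart; a minimal sketch, assuming the traefik repository added earlier and a kube-system deployment (adjust the namespace and release name to taste):

```
helm install traefik traefik/traefik --values=values.yaml -n kube-system
```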

Besides the usual Traefik Helm chart, add an Ingress resource containing:

```
# Source: traefik/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: <cert_arn1>,<cert_arn2>
    alb.ingress.kubernetes.io/healthcheck-path: /ping
    alb.ingress.kubernetes.io/healthcheck-port: "<traefik_health_check_port>"
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: <sg_group_id_where_alb_will_reside>
    kubernetes.io/ingress.class: alb
    external-dns.alpha.kubernetes.io/hostname: "domain1.net,domain2.com"
  labels:
    app: traefik
spec:
  rules:
    - http:
        paths:
          # HTTP to HTTPS redirect entry
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: traefik
              servicePort: 443
```

You can see multiple kinds of annotations: the ones starting with alb.ingress.kubernetes.io, together with kubernetes.io/ingress.class, configure AWS LoadBalancer Controller, while external-dns.alpha.kubernetes.io is for External-DNS; I'll describe both in more detail below. The annotation names should be fairly self-explanatory, but to understand them fully, consult the documentation for AWS LoadBalancer Controller and External-DNS.

You can either add the Ingress definition to the Helm chart or deploy it using kubectl apply.
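If you go the kubectl route, that boils down to something like this (the file name is just an example):

```
kubectl apply -f traefik-ingress.yaml -n kube-system
```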

Assigning ALB to Traefik

As the docs say, AWS Load Balancer Controller is a controller that helps manage Elastic Load Balancers for a Kubernetes cluster. The controller watches for Ingress events and, when an Ingress definition satisfies its requirements, creates the corresponding AWS resources (an ELB and whatever else is needed). We will use it to deploy an ALB for the Traefik deployment, as the diagram in the introduction shows.

AWS LoadBalancer Controller is also deployed using Helm, from the EKS Helm repository. Create a values.yaml file containing:

```
# Select the region where the EKS cluster will reside
awsRegion: "eu-central-1"

# Select role which AWS LoadBalancer Controller will assume
podAnnotations:
  iam.amazonaws.com/role: "kube2iam_prod-cluster/aws-loadbalancer-controller"

clusterName: prod-cluster

autoDiscoverAwsVpcID: true
```

The role `kube2iam_prod-cluster/aws-loadbalancer-controller` needs permissions to provision and update AWS load balancers, plus a few related permissions; the full IAM policy is published in the controller's documentation.

You can create the role in the AWS Console, with the AWS CLI, or in Terraform.
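As a rough sketch of the CLI route: the controller project publishes a ready-made IAM policy document, so the flow can look roughly like the commands below (the policy name and role name are assumptions, and the kube2iam trust policy for the role is omitted here):

```
# Download the IAM policy document published by the controller project
curl -o iam-policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

# Create the policy and attach it to the role that kube2iam will hand to the controller pod
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json

aws iam attach-role-policy \
  --role-name aws-loadbalancer-controller \
  --policy-arn arn:aws:iam::<account_id>:policy/AWSLoadBalancerControllerIAMPolicy
```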

Finally, let’s deploy AWS Load Balancer Controller:

```
helm install aws-loadbalancer-controller eks/aws-load-balancer-controller \
  --values=values.yaml -n kube-system
```
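Once the controller has reconciled the Ingress created earlier, the provisioned ALB's DNS name shows up in the Ingress ADDRESS column; a quick way to check (the label selector matches the chart's defaults, and the second command assumes the Traefik Ingress lives in kube-system):

```
kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller
kubectl get ingress traefik -n kube-system
```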

Dynamically creating DNS records

It doesn’t make sense to manage DNS records manually, since we register and de-register Kubernetes Services quite often. External-DNS will help us with that. It allows us to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.

Now let’s deploy External-DNS from the Bitnami repository. First, create a values.yaml file containing:

```
aws:
  # Select the region where the EKS cluster will reside
  region: "eu-central-1"
  preferCNAME: false

## ref: https://github.com/kubernetes-sigs/external-dns/blob/master/docs/proposal/registry.md
## The TXT registry stores ownership info in a TXT record, but a zone cannot contain
## a CNAME and another record with the same name, so the registry is disabled here.
registry: "noop"

podAnnotations:
  iam.amazonaws.com/role: "kube2iam_prod-cluster/external-dns"
```

The role `kube2iam_prod-cluster/external-dns` needs to be created with permissions to create and update records in Route 53. External-DNS is then deployed using the command

```
helm install external-dns bitnami/external-dns --values=values.yaml -n kube-system
```
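To confirm External-DNS sees your hosted zone and picks up the annotations, its logs are the quickest place to look; a minimal check, assuming the Bitnami chart's default labels:

```
kubectl -n kube-system logs -l app.kubernetes.io/name=external-dns --tail=50
```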

Deploying a custom service to be routed via the setup above

Every new service you deploy to Kubernetes should contain the following annotations and resources (in addition to your traditional resources):

An IngressRoute resource, which tells Traefik which requests to route to which Kubernetes Service. In this example, we will route all requests heading to www.example.net to the example-app Kubernetes Service.

```
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: example-app-prod-api-tls
  annotations:
    helm.sh/hook: "post-install, post-upgrade"
  labels:
    helm.sh/chart: example-app-0.1.0
    app.kubernetes.io/name: example-app
    app.kubernetes.io/instance: example-app-prod
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  entryPoints:
    - websecure
  routes:
    - match: "Host(`www.example.net`)"
      kind: Rule
      services:
        - name: "example-app-prod"
          port: 80
```
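This is also where the advanced routing from the introduction comes in: Traefik matchers can be combined with && and ||. A hedged sketch (the header name, query parameter, and canary service are invented for illustration, and the Query matcher needs a reasonably recent Traefik 2.x):

```
routes:
  - match: "Host(`www.example.net`) && HeadersRegexp(`X-Canary`, `^(true|1)$`) && Query(`beta=true`)"
    kind: Rule
    services:
      - name: "example-app-canary"
        port: 80
```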

An empty Ingress resource, whose annotations the External-DNS service watches in order to make changes in Route 53:

```
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-app-prod
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "www.example.net"
    external-dns.alpha.kubernetes.io/target: "example.net"
spec:
  rules:
    - host: "example.net"
```

This will tell External-DNS to create a CNAME record (www.example.net pointing to example.net) in the example.net Route53 zone.
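Once the record has propagated, you can verify it from your machine, for example with dig:

```
dig +short www.example.net CNAME
```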

Advantages over using AWS LoadBalancer Controller the conventional way

Up until recently, you had to provision a new load balancer (ALB) for every Ingress resource you defined. So when you have more than a few Ingress rules in your cluster, this can end up quite costly (one ALB costs approximately 16 USD per month). The setup above aggregates all incoming connections into a single load balancer.

Another advantage is that the whole stack is built on open-source software, so moving to another Kubernetes distribution on a different cloud provider will not be as time-consuming. The only thing that would have to change is the load balancer provisioning; everything else can stay the same.

If you've read this far, congrats! I hope I've shown that routing, even with advanced functionality, is not as hard as it looks. The solution described here has a big advantage over other options: it's open-source, so moving will be quite easy if you decide to shift to a different Kubernetes distribution. I highly encourage you to give it a try and share your feedback and questions with us at ask@revolgy.com :)