SNI is enabled by default in most modern browsers and HTTP clients. It allows us to serve multiple different SSL certs on the same IP address and TCP port, which is incredibly useful for multi-tenancy in EKS when used alongside the Nginx Ingress Controller and cert-manager. Unsurprisingly, there are still some legacy clients that do not support this feature. When such a client makes a request to an endpoint that uses SNI to route requests to the respective services (think multi-tenancy), it gets an SSL connection error.
$ java -Djsse.enableSNIExtension=true SSLPoke example.com 443 # works
$ java -Djsse.enableSNIExtension=false SSLPoke example.com 443 # SSL Error
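If you don't have SSLPoke handy, openssl can demonstrate the same difference. Note this is just an illustration: the -noservername flag only exists in OpenSSL 1.1.1+, where s_client sends SNI by default; older versions behave differently.
$ openssl s_client -connect example.com:443 -servername example.com # SNI sent, matching cert returned
$ openssl s_client -connect example.com:443 -noservername # no SNI, default cert or SSL error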
To support such clients, we just need to give the endpoint a dedicated IP address (or make it the default) and avoid anything that relies on SNI, like the Nginx Ingress Controller. There are a couple of options to achieve this in AWS.
Service-level Classic ELB
The most straightforward option is to simply create a Service-level ELB with type: LoadBalancer. You can use the external-dns annotation to link the ELB to a domain in Route53, or you can manually create a CNAME record in Route53 and map a subdomain (eg non-sni.example.com) to the ELB that gets provisioned (eg elb-url.us-west-2.elb.amazonaws.com). The following is all the YAML we need for this method to work.
apiVersion: v1
kind: Service
metadata:
  annotations:
    # external-dns.alpha.kubernetes.io/hostname: non-sni.example.com
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:0000:certificate/XXXX
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: frontend
  name: frontend-proxy-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      name: http
      protocol: TCP
    - port: 443
      targetPort: 8080
      name: https
      protocol: TCP
  selector:
    app: frontend
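Once applied, the hostname of the provisioned ELB can be read back from the Service status; this is the value the manual CNAME record needs to point at (frontend-proxy-svc.yaml is just an assumed file name for the manifest above).
$ kubectl apply -f frontend-proxy-svc.yaml
$ kubectl get svc frontend-proxy-svc -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# elb-url.us-west-2.elb.amazonaws.com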
Note that we are completely bypassing the Nginx Ingress Controller and cert-manager here; keeping them in the path would require a dedicated Kubernetes Ingress-level ELB instead (see the options below).
www.example.com -> ELB -> ingress-nginx + cert-manager -> service
non-sni.example.com -> ELB -> service
- Requires a Service-level ELB
- Needs a dedicated ACM cert; we can't use Let's Encrypt
- Requires a manual CNAME record update every time the ELB changes, though a small bash script can automate it (see the sketch after this list)
- Incurs an additional ELB cost
- Terminates SSL at the ELB
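A minimal sketch of such a sync script, assuming the AWS CLI is configured and Z0000000000 stands in for your real hosted zone ID:
#!/usr/bin/env bash
# Sketch: point non-sni.example.com at whatever ELB hostname the Service currently has.
# Z0000000000 is a placeholder hosted zone ID; replace with your own.
set -euo pipefail

ELB_HOSTNAME=$(kubectl get svc frontend-proxy-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "non-sni.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "'"${ELB_HOSTNAME}"'"}]
      }
    }]
  }'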
Ingress-level ALB
This option has a high maintenance cost if you're not already using ALBs, since it requires installing an additional operator, the AWS ALB Ingress Controller.
- Still needs a dedicated SSL cert:
Although the AWS Application Load Balancer (ALB) is a modern load balancer offered by AWS that can be provisioned from within EKS, at the time of writing, the alb-ingress-controller is only capable of serving sites using certificates stored in AWS Certificate Manager (ACM). source
- Requires manual CNAME record updates (according to the cert-manager docs on ALB), though this could probably be handled by external-dns
- Incurs the additional cost of an ALB
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: frontend
  name: frontend-proxy-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:0000:certificate/XXXX
spec:
  tls:
    - hosts:
        - "non-sni.example.com"
      secretName: ca-star-example-com-key-pair
  rules:
    - host: "non-sni.example.com"
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-proxy-svc
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: frontend
  name: frontend-proxy-svc
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      name: http
      protocol: TCP
  selector:
    app: frontend
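As with the Service-level ELB, once the ALB Ingress Controller reconciles this Ingress, the ALB's DNS name should appear in the Ingress status and can be used for the CNAME record (assuming the controller is installed and working):
$ kubectl get ingress frontend-proxy-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'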
Ingress-level Classic ELB
Another potential solution, but I didn't spend time investigating it.
How we could have done it in GCP
I can't help but compare how this whole ordeal would have been so much easier in GCP. Simply create a reserved global static IP named frontend-proxy-static-ip:
$ gcloud compute addresses create frontend-proxy-static-ip --global
$ gcloud compute addresses describe frontend-proxy-static-ip --global --format 'value(address)'
# 35.186.228.000
Attach the static IP to the Ingress by adding a single annotation, kubernetes.io/ingress.global-static-ip-name. The Service just needs the usual type: NodePort. Then we create an A record pointing from non-sni.example.com to the reserved static IP.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: frontend
  name: frontend-proxy-ingress
  annotations:
    certmanager.k8s.io/cluster-issuer: ca-issuer-ent-frontend-com
    kubernetes.io/ingress.global-static-ip-name: frontend-proxy-static-ip
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - "non-sni.example.com"
      secretName: ent-default-ssl-certificate
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontend-svc
              servicePort: 80
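If the zone lives in Cloud DNS, the A record can be created from the CLI as well; example-zone here is an assumed managed zone name:
$ gcloud dns record-sets transaction start --zone=example-zone
$ gcloud dns record-sets transaction add 35.186.228.000 --name=non-sni.example.com. --ttl=300 --type=A --zone=example-zone
$ gcloud dns record-sets transaction execute --zone=example-zone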
- No load balancers are involved
- We can still use Let's Encrypt certs via cert-manager; no dedicated cert needed
- No additional cost, in-use static IPs are free
- No need to create an additional Service; note that we're using frontend-svc instead of frontend-proxy-svc