Learning Istio | Ingress
In the previous post, we deployed the Bookinfo application on a k3s cluster with Istio enabled. In this post, we will explore Istio's ingress features.
## Kubernetes Ingress
Istio handles Kubernetes Ingress resources just fine, as documented here. Below, we create a Kubernetes Ingress to access the Bookinfo application. Note the additional annotation `kubernetes.io/ingress.class: istio`:
```shell
kubectl -n bookinfo apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: productpage-k8s-ingress
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        pathType: Exact
        backend:
          service:
            name: productpage
            port:
              number: 9080
      - path: /static
        pathType: Prefix
        backend:
          service:
            name: productpage
            port:
              number: 9080
      - path: /login
        pathType: Exact
        backend:
          service:
            name: productpage
            port:
              number: 9080
      - path: /logout
        pathType: Exact
        backend:
          service:
            name: productpage
            port:
              number: 9080
      - path: /api/v1/products
        pathType: Prefix
        backend:
          service:
            name: productpage
            port:
              number: 9080
EOF
```
The application is now exposed through the Istio ingress gateway on port 80, which in turn is exposed on port 8080 via the k3d configuration. Verify that it is working by browsing to http://localhost:8080/productpage
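Instead of the browser, you can also spot-check the routed paths from the command line. This is a sketch assuming the same k3d port mapping exposing the gateway on `localhost:8080`:

```shell
# Print the HTTP status for each path routed by the Ingress; a 200 means the
# request made it through the ingress gateway to the productpage service.
for path in /productpage /api/v1/products; do
  printf '%s -> ' "$path"
  curl -s -o /dev/null -w '%{http_code}\n' "http://localhost:8080${path}"
done
```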
To verify that the route has been configured on the Istio ingress gateway, first get the name of the route created by the Ingress:
```shell
$ ISTIO_INGRESS_GW_POD=$(kubectl -n istio-system get pods -l app=istio-ingressgateway -o jsonpath='{.items[*].metadata.name}')
$ istioctl proxy-config routes -n istio-system $ISTIO_INGRESS_GW_POD
NAME        DOMAINS     MATCH                  VIRTUAL SERVICE
http.80     *           /productpage           -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
            *           /healthz/ready*
            *           /stats/prometheus*
```
In the example above, we see that `http.80` is the route created for our Ingress matching `/productpage`. We can then print out the details of the route by name:
```shell
$ istioctl proxy-config routes -n istio-system $ISTIO_INGRESS_GW_POD --name http.80 -o yaml
- name: http.80
  validateClusters: false
  virtualHosts:
  - domains:
    - '*'
    includeRequestAttemptCount: true
    name: '*:80'
    routes:
    - decorator:
        operation: productpage.bookinfo.svc.cluster.local:9080/productpage
      match:
        caseSensitive: true
        path: /productpage
      metadata:
        filterMetadata:
          istio:
            config: /apis/networking.istio.io/v1alpha3/namespaces/bookinfo/virtual-service/-productpage-k8s-ingress-istio-autogenerated-k8s-ingress
      route:
        cluster: outbound|9080||productpage.bookinfo.svc.cluster.local
        maxGrpcTimeout: 0s
        retryPolicy:
          hostSelectionRetryMaxAttempts: "5"
          numRetries: 2
          retriableStatusCodes:
          - 503
          retryHostPredicate:
          - name: envoy.retry_host_predicates.previous_hosts
          retryOn: connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes
        timeout: 0s
<output truncated>
```
Alternatively, you can go all out and get a full configuration dump from the ingress gateway (which is essentially Envoy) by exposing the Envoy admin port 15000:

```shell
kubectl -n istio-system port-forward $ISTIO_INGRESS_GW_POD 15000
```

and calling the `config_dump` API in a separate terminal:

```shell
curl localhost:15000/config_dump
```
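The full dump is quite noisy. Recent Envoy versions accept a `resource` query parameter on `/config_dump` to narrow it down; the sketch below (assuming `jq` is installed) pulls out just the names of the dynamic route configurations:

```shell
# Narrow the config dump to route configuration only, then list route names.
# The `resource` query parameter is an Envoy admin API feature; the jq filter
# walks the DynamicRouteConfig entries in the response.
curl -s 'localhost:15000/config_dump?resource=dynamic_route_configs' \
  | jq -r '.configs[].route_config.name'
```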
## Istio Gateway and Virtual Service
Istio offers the `Gateway` and `VirtualService` CRDs for finer-grained control of ingress traffic. We start by deploying the `Gateway` and `VirtualService` resources:
```shell
kubectl -n bookinfo apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
EOF
```
Verify that the appropriate routes are created:
```shell
$ istioctl proxy-config routes -n istio-system $ISTIO_INGRESS_GW_POD
NAME        DOMAINS     MATCH                  VIRTUAL SERVICE
http.80     *           /productpage           -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
http.80     *           /static/*              -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
http.80     *           /login                 -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
http.80     *           /logout                -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
http.80     *           /api/v1/products/*     -productpage-k8s-ingress-istio-autogenerated-k8s-ingress.bookinfo
http.80     *           /productpage           bookinfo.bookinfo
http.80     *           /static*               bookinfo.bookinfo
http.80     *           /login                 bookinfo.bookinfo
http.80     *           /logout                bookinfo.bookinfo
http.80     *           /api/v1/products*      bookinfo.bookinfo
            *           /healthz/ready*
            *           /stats/prometheus*
```
Notice that we now have two sets of routes: one from the Kubernetes Ingress and another from the Istio VirtualService. According to Envoy’s route matching rules, routes are evaluated in order and the first match wins, so the routes introduced by the Ingress resource still take priority.
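To see the exact order Envoy will evaluate, we can list the matchers of `http.80` in sequence. This is a sketch assuming `jq` is available; the filter prints whichever match field (`path` or `prefix`) each route sets:

```shell
# Print the http.80 route matchers in evaluation order. Envoy tries them
# top to bottom and uses the first one that matches the request.
istioctl proxy-config routes -n istio-system $ISTIO_INGRESS_GW_POD \
  --name http.80 -o json \
  | jq -r '.[0].virtualHosts[0].routes[].match | (.path // .prefix // "?")'
```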
Let’s verify this by changing the target port for `/productpage` in the `productpage-k8s-ingress` Ingress to something else:
```shell
kubectl -n bookinfo patch ingress productpage-k8s-ingress -p '{
  "spec": {
    "rules": [
      {
        "http": {
          "paths": [
            {
              "path": "/productpage",
              "pathType": "Exact",
              "backend": {
                "service": {
                  "name": "productpage",
                  "port": {
                    "number": 10080
                  }
                }
              }
            }
          ]
        }
      }
    ]
  }
}'
```
Attempting to browse to http://localhost:8080/productpage will now fail as expected, since the `productpage-k8s-ingress` route that matches first now points at the wrong port.

Now that we have observed this Envoy route matching behaviour, we can remove the `productpage-k8s-ingress` Ingress and let Istio’s Gateway and VirtualService take effect:

```shell
kubectl -n bookinfo delete ingress productpage-k8s-ingress
```
The productpage should now work again.
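As a quick sanity check (again assuming the k3d port mapping on `localhost:8080`), confirm the page is served once more:

```shell
# Expect HTTP 200 now that the VirtualService route is the first match.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/productpage
```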
## Learnings

- Istio can handle Kubernetes Ingress once the `kubernetes.io/ingress.class: istio` annotation has been added
- Ingress gateway routes can be inspected with `istioctl proxy-config routes` or via the Envoy admin API
- Ingress gateway route matching behaviour: routes are evaluated in order, and the first match wins