Jeff Notes

Deploying a Highly Available Ingress NGINX Controller with Helm

The Ingress NGINX Controller is one of the most widely used Ingress controllers in the Kubernetes community. It manages and routes external traffic to Services inside the cluster, supporting load balancing, host- and path-based routing, and SSL/TLS termination, which makes it a key component for exposing workloads running in Kubernetes. For a highly available deployment, the Ingress NGINX Controller can be combined with multiple replicas and an external load-balancing solution to keep the service reliable.

ingress-nginx GitHub Repository

https://github.com/kubernetes/ingress-nginx

Prerequisites

  1. Make sure the network can reach sites outside mainland China so that the required images can be pulled
  2. Optionally deploy OpenELB in advance to assign an IP that is reachable from outside the cluster to the Ingress, or provide such an IP by other means (see the sketch after this list)
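
If OpenELB is used, the external IP that appears later in this article (192.168.1.200) comes from an OpenELB layer2 address pool. Below is a minimal sketch of such a pool, assuming OpenELB is already installed; the pool name eip-pool, the address range, and the NIC name eth0 are placeholders for this environment, so adjust them to your network:

apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  # Addresses OpenELB may hand out to LoadBalancer Services
  address: 192.168.1.200-192.168.1.210
  # Layer 2 (ARP) mode: a node NIC answers ARP for these addresses
  protocol: layer2
  interface: eth0

To have OpenELB serve the ingress-nginx controller Service, the usual OpenELB annotations (lb.kubesphere.io/v1alpha1: openelb, protocol.openelb.kubesphere.io/v1alpha1: layer2 and eip.openelb.kubesphere.io/v1alpha2: eip-pool) can be added under controller.service.annotations in the values.yaml prepared in section 1.1.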

Kubernetes cluster node information

Kubernetes version: v1.28.2

Role                    Hostname       IP
Kubernetes Master       k8s-master01   192.168.1.61
Kubernetes Master       k8s-master02   192.168.1.62
Kubernetes Master       k8s-master03   192.168.1.63
Kubernetes Worker       k8s-node01     192.168.1.71
Kubernetes Worker       k8s-node02     192.168.1.72
Kubernetes Worker       k8s-node03     192.168.1.73
API Server HA Primary   k8s-nginx01    192.168.1.58
API Server HA Backup    k8s-nginx02    192.168.1.59

1 Deploy the Ingress NGINX Controller with Helm

ingress-nginx Helm Chart

https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx


Add the official ingress-nginx Helm repository

[root@k8s-master01 17:49 ~]$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories


[root@k8s-master01 17:49 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈Happy Helming!⎈


[root@k8s-master01 17:49 ~]$ helm repo list
NAME            URL                                            
ingress-nginx   https://kubernetes.github.io/ingress-nginx

1.1 Prepare the values.yaml configuration file

To deploy a highly available Ingress NGINX Controller, create a custom values.yaml file

[root@k8s-master01 17:56 ~]$ mkdir ingress-nginx
[root@k8s-master01 17:56 ~]$ cd ingress-nginx/

[root@k8s-master01 17:56 ~/ingress-nginx]$ vim values.yaml
controller:
  kind: Deployment
  replicaCount: 3

  nodeSelector:
    kubernetes.io/os: linux

  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/name
                operator: In
                values:
                  - ingress-nginx
          topologyKey: "kubernetes.io/hostname"

  service:
    type: LoadBalancer

  admissionWebhooks:
    enabled: true
    patch:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists

values.yaml configuration breakdown

1. controller: run the controller as a Deployment with replicaCount: 3, so a single Pod or node failure does not take ingress traffic down

2. affinity: a required podAntiAffinity rule keyed on app.kubernetes.io/name=ingress-nginx with topologyKey kubernetes.io/hostname, which spreads the 3 controller replicas across different nodes

3. service: expose the controller through a Service of type LoadBalancer; in this environment the external IP is provided by OpenELB (in a cloud environment it would come from the cloud load balancer)

4. admissionWebhooks: keep the validating admission webhook enabled, and schedule its certificate patch Job onto the control-plane nodes by tolerating the node-role.kubernetes.io/control-plane taint and requiring node affinity to those nodes


This configuration is aimed at keeping the Ingress NGINX Controller highly available and stable through multiple replicas, an anti-affinity scheduling policy, and a load balancer in front of the Service. The admission webhook configuration keeps validation of Ingress resources automated, which further improves the safety and automation of the cluster.
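
Before installing, the rendered manifests can be previewed to confirm that the replica count and the anti-affinity rule really end up in the controller Deployment; an optional check (the grep pattern is only an example):

# Render the chart locally with the custom values and inspect the controller Deployment
helm template ingress-nginx ingress-nginx/ingress-nginx --version 4.8.2 \
  -f values.yaml -n ingress-nginx | grep -E -A 3 'replicas:|podAntiAffinity'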

1.2 Deploy the Ingress NGINX Controller with the custom values.yaml

Install version 4.8.2 of the ingress-nginx Helm Chart;
the corresponding Ingress NGINX Controller version is 1.9.3

[root@k8s-master01 18:22 ~/ingress-nginx]$ helm show chart ingress-nginx/ingress-nginx --version 4.8.2
annotations:
  artifacthub.io/changes: |-
    - "update nginx base, httpbun, e2e, helm webhook cert gen (#10506)"
    - "Update Ingress-Nginx version controller-v1.9.3"
  artifacthub.io/prerelease: "false"
apiVersion: v2
appVersion: 1.9.3
description: Ingress controller for Kubernetes using NGINX as a reverse proxy and
  load balancer
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
- ingress
- nginx
kubeVersion: '>=1.20.0-0'
maintainers:
- name: rikatz
- name: strongjz
- name: tao12345666333
name: ingress-nginx
sources:
- https://github.com/kubernetes/ingress-nginx
version: 4.8.2


[root@k8s-master01 18:35 ~/ingress-nginx]$ helm install ingress-nginx ingress-nginx/ingress-nginx --version 4.8.2 -f values.yaml -n ingress-nginx --create-namespace
NAME: ingress-nginx
LAST DEPLOYED: Mon Oct 19 18:35:35 2023
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls


[root@k8s-master01 18:40 ~]$ helm list -n ingress-nginx
NAME            NAMESPACE       REVISION    UPDATED                                 STATUS      CHART               APP VERSION
ingress-nginx   ingress-nginx   1           2023-10-19 18:35:35.541738493 +0800 CST deployed    ingress-nginx-4.8.2 1.9.3  


[root@k8s-master01 18:40 ~]$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-7d67b8964c-849mm   1/1     Running   0          4m43s
pod/ingress-nginx-controller-7d67b8964c-wbnkz   1/1     Running   0          4m43s
pod/ingress-nginx-controller-7d67b8964c-xcxj7   1/1     Running   0          4m43s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.105.95.6     192.168.1.200   80:30711/TCP,443:32333/TCP   4m43s
service/ingress-nginx-controller-admission   ClusterIP      10.97.225.236   <none>          443/TCP                      4m43s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   3/3     3            3           4m43s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-7d67b8964c   3         3         3       4m43s

With the helm get values command you can view all the values of the installed ingress-nginx Helm release

[root@k8s-master01 18:42 ~]$ helm get values ingress-nginx -a -n ingress-nginx

COMPUTED VALUES:
commonLabels: {}
controller:
  addHeaders: {}
  admissionWebhooks:
    annotations: {}
    certManager:
      admissionCert:
        duration: ""
      enabled: false
      rootCert:
        duration: ""
    certificate: /usr/local/certificates/cert
    createSecretJob:
      resources: {}
      securityContext:
        allowPrivilegeEscalation: false
    enabled: true
    existingPsp: ""
    extraEnvs: []
    failurePolicy: Fail
    key: /usr/local/certificates/key
    labels: {}
    namespaceSelector: {}
    objectSelector: {}
    patch:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      enabled: true
      image:
        digest: sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
        image: ingress-nginx/kube-webhook-certgen
        pullPolicy: IfNotPresent
        registry: registry.k8s.io
        tag: v20231011-8b53cabe0
......
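
Note that the -a (--all) flag prints the computed values, i.e. the custom values merged with the chart defaults. To see only the values supplied in values.yaml, run the same command without the flag:

helm get values ingress-nginx -n ingress-nginx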

2 Deploy demoapp with a Deployment for testing

Create a demoapp Deployment object that runs two demoapp Pod replicas.
demoapp is a simple web application used for testing; thanks to MaGe (马哥) for providing it!

[root@k8s-master01 18:53 ~]$ kubectl create namespace demo
namespace/demo created


[root@k8s-master01 19:17 ~]$ cd ingress-nginx/

[root@k8s-master01 19:20 ~/ingress-nginx]$ kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=2 -n demo --dry-run=client -o yaml > demoapp-deployment.yaml
[root@k8s-master01 19:21 ~/ingress-nginx]$ vim demoapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demoapp
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp
        resources: {}
status: {}
[root@k8s-master01 19:24 ~/ingress-nginx]$ kubectl apply -f demoapp-deployment.yaml 
deployment.apps/demoapp created


[root@k8s-master01 19:24 ~/ingress-nginx]$ kubectl get pods --show-labels -n demo 
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
demoapp-7c58cd6bb-2b78d   1/1     Running   0          20s     app=demoapp,pod-template-hash=7c58cd6bb
demoapp-7c58cd6bb-9h89h   1/1     Running   0          20s     app=demoapp,pod-template-hash=7c58cd6bb


[root@k8s-master01 19:24 ~/ingress-nginx]$ kubectl get pods -n demo -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-2b78d   1/1     Running   0          22s   10.244.5.158   k8s-node03.jeffnotes.net   <none>           <none>
demoapp-7c58cd6bb-9h89h   1/1     Running   0          22s   10.244.4.253   k8s-node02.jeffnotes.net   <none>           <none>

Create a Service object for the demoapp Pods

[root@k8s-master01 19:30 ~/ingress-nginx]$ kubectl create service clusterip demoapp-svc --tcp=80:80 -n demo --dry-run=client -o yaml > demoapp-service.yaml
[root@k8s-master01 19:31 ~/ingress-nginx]$ vim demoapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demoapp-svc
  name: demoapp-svc
  namespace: demo
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    # This label has to match the demoapp Pods, so change it from the generated value
    app: demoapp
  type: ClusterIP
status:
  loadBalancer: {}
[root@k8s-master01 19:34 ~/ingress-nginx]$ kubectl apply -f demoapp-service.yaml 
service/demoapp-svc created


[root@k8s-master01 19:36 ~/ingress-nginx]$ cd

[root@k8s-master01 19:36 ~]$ kubectl get svc -n demo
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
demoapp-svc   ClusterIP   10.96.84.219   <none>        80/TCP    27s


# demoapp is reachable through the Service IP, which shows the Service is configured correctly

[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
[root@k8s-master01 19:36 ~]$ curl 10.96.84.219
iKubernetes demoapp v1.0 !! ClientIP: 192.168.1.61, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!

3 Create an Ingress for the demoapp Service

Create an Ingress object that serves the virtual host demoapp.jeffnotes.net

[root@k8s-master01 19:46 ~]$ cd ingress-nginx/

[root@k8s-master01 19:46 ~/ingress-nginx]$ kubectl create ingress demoapp --rule='demoapp.jeffnotes.net/*'=demoapp-svc:80 --class=nginx -n demo --dry-run=client -o yaml > demoapp-ingress.yaml
[root@k8s-master01 19:46 ~/ingress-nginx]$ vim demoapp-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: null
  name: demoapp
  namespace: demo
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.jeffnotes.net
    http:
      paths:
      - backend:
          service:
            name: demoapp-svc
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
[root@k8s-master01 19:47 ~/ingress-nginx]$ kubectl apply -f demoapp-ingress.yaml
ingress.networking.k8s.io/demoapp created


[root@k8s-master01 19:47 ~/ingress-nginx]$ cd

# After a few seconds, the Ingress automatically picks up an IP from OpenELB

[root@k8s-master01 19:50 ~]$ kubectl get ingress -n demo 
NAME      CLASS   HOSTS                   ADDRESS         PORTS   AGE
demoapp   nginx   demoapp.jeffnotes.net   192.168.1.200   80      2m56s
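
As an optional check, kubectl describe shows whether the Ingress rule points at demoapp-svc and its Pod endpoints:

# The Backends column should list demoapp-svc:80 together with the demoapp Pod IPs
kubectl describe ingress demoapp -n demo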

4 Test the Ingress NGINX Controller

Functional tests of the Ingress NGINX Controller

4.1 Access test via demoapp

Run the test from the API Server HA Primary node, which sits outside the Kubernetes cluster.
First add a hosts entry for name resolution.

[root@k8s-nginx01 19:52 ~]$ vim /etc/hosts
192.168.1.200     demoapp.jeffnotes.net

Both backend demoapp Pods receive requests

[root@k8s-nginx01 19:55 ~]$ while true; do curl http://demoapp.jeffnotes.net; sleep .3; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.103, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.103, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.103, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
......

In addition, the ClientIP field shows that requests are load balanced across the 3 Ingress NGINX Controller instances

[root@k8s-master01 19:56 ~]$ kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE                    NOMINATED NODE   READINESS GATES
ingress-nginx-controller-7d67b8964c-849mm   1/1     Running   0          87m   10.244.3.103   k8s-node01.jeffnotes.net   <none>           <none>
ingress-nginx-controller-7d67b8964c-wbnkz   1/1     Running   0          87m   10.244.4.251   k8s-node02.jeffnotes.net   <none>           <none>
ingress-nginx-controller-7d67b8964c-xcxj7   1/1     Running   0          87m   10.244.5.156   k8s-node03.jeffnotes.net   <none>           <none>

4.2 High availability test of the Ingress NGINX Controller

Simulate a failure by taking the Node01 node offline

[root@k8s-node01 20:12 ~]$ poweroff

[root@k8s-master01 20:12 ~]$ kubectl get nodes
NAME                            STATUS     ROLES           AGE    VERSION
k8s-master01.jeffnotes.net      Ready      control-plane   182d   v1.28.2
k8s-master02.jeffnotes.net      Ready      control-plane   182d   v1.28.2
k8s-master03.jeffnotes.net      Ready      control-plane   182d   v1.28.2
k8s-node01.jeffnotes.net        NotReady   <none>          182d   v1.28.2
k8s-node02.jeffnotes.net        Ready      <none>          182d   v1.28.2
k8s-node03.jeffnotes.net        Ready      <none>          182d   v1.28.2

At this point some requests fail; after roughly 1 minute, new requests succeed again

[root@k8s-nginx01 20:13 ~]$ while true; do curl http://demoapp.jeffnotes.net; sleep .3; done
......
curl: (52) Empty reply from server
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
curl: (7) Failed to connect to demoapp.jeffnotes.net port 80 after 0 ms: Connection refused
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
curl: (7) Failed to connect to demoapp.jeffnotes.net port 80 after 0 ms: Connection refused
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
curl: (7) Failed to connect to demoapp.jeffnotes.net port 80 after 0 ms: Connection refused
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
curl: (7) Failed to connect to demoapp.jeffnotes.net port 80 after 0 ms: Connection refused
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
curl: (7) Failed to connect to demoapp.jeffnotes.net port 80 after 0 ms: Connection refused
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!


[root@k8s-nginx01 20:15 ~]$ while true; do curl http://demoapp.jeffnotes.net; sleep .3; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.5.156, ServerName: demoapp-7c58cd6bb-2b78d, ServerIP: 10.244.5.158!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
......
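
The roughly 1-minute gap seen above most likely corresponds to the time it takes for Node01 to be marked NotReady and for its controller Pod to be removed from the Service endpoints. This can be observed from another terminal while the test runs (a diagnostic sketch):

# Watch the node conditions change and the controller endpoints shrink from 3 to 2 addresses
kubectl get nodes -w
kubectl get endpoints ingress-nginx-controller -n ingress-nginx -w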

Shut down one more node (Node03)

[root@k8s-node03 20:18 ~]$ poweroff

[root@k8s-master01 20:19 ~]$ kubectl get pods -n demo -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-2b78d   1/1     Running   0          54m   10.244.5.158   k8s-node03.jeffnotes.net   <none>           <none>
demoapp-7c58cd6bb-9h89h   1/1     Running   0          54m   10.244.4.253   k8s-node02.jeffnotes.net   <none>           <none>


[root@k8s-master01 20:19 ~]$ kubectl get deployment -n demo 
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   1/2     2            1           55m

Because one of the demoapp Pods was also running on the Node03 node, some requests fail at this point;

demoapp is a Deployment with 2 replicas, so a replacement Pod is automatically scheduled onto the only remaining worker, Node02;

From the old Pod failing to the new Pod running took about 6 minutes. How long this takes depends on the node health check cycle: the node failure detection window --node-monitor-grace-period, plus the Pod eviction delay. In recent Kubernetes versions (including v1.28) the legacy --pod-eviction-timeout flag has been superseded by taint-based eviction, so the Pods' default tolerationSeconds of 300 for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints determines when they are evicted; the detection window plus that toleration roughly matches the 6 minutes observed here.
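
If a shorter failover window is needed, the eviction delay can be tuned per workload by adding explicit tolerations with a small tolerationSeconds to the Pod template. This is a sketch only; 30 seconds is an arbitrary example value:

# Add to spec.template.spec of the demoapp Deployment to evict its Pods from a
# failed node after 30 seconds instead of the default 300
tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 30
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 30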

[root@k8s-master01 20:25 ~]$ kubectl get deployment -n demo
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   2/2     2            2           60m


[root@k8s-master01 20:25 ~]$ kubectl get pods -n demo -o wide 
NAME                      READY   STATUS        RESTARTS   AGE   IP             NODE                       NOMINATED NODE   READINESS GATES
demoapp-7c58cd6bb-2b78d   1/1     Terminating   0          60m   10.244.5.158   k8s-node03.jeffnotes.net   <none>           <none>
demoapp-7c58cd6bb-69psb   1/1     Running       0          43s   10.244.4.2     k8s-node02.jeffnotes.net   <none>           <none>
demoapp-7c58cd6bb-9h89h   1/1     Running       0          60m   10.244.4.253   k8s-node02.jeffnotes.net   <none>           <none>

However, as before, the failed requests recover after roughly 1 minute;
until the new demoapp Pod is scheduled and running, only the Ingress NGINX Controller instance and the demoapp Pod on the remaining Node02 node serve traffic

[root@k8s-nginx01 20:20 ~]$ while true; do curl http://demoapp.jeffnotes.net; sleep .3; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.4.251, ServerName: demoapp-7c58cd6bb-9h89h, ServerIP: 10.244.4.253!
......

These results show that the high availability of the 3-replica Ingress NGINX Controller works as intended: each of the two simulated node failures caused only around 1 minute of failed or slow requests.
