I set up Kubernetes on bare-metal servers. Because we are in the middle of our K8s migration, we have some Nginx nodes acting as load balancers; they proxy requests to the Kubernetes nodes on port 80.
We use an Ingress exposed via NodePort on ports 80 and 443, and Ingress controllers that proxy requests to our application pods.
I proxy 1% of our production load to the Kubernetes nodes, but after three days it looks like there is a memory leak in allocated TCP sockets. How can I troubleshoot this issue? Why did it happen, and how can I fix it?
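A minimal way to watch the raw counters on an affected node (a sketch, assuming Python 3 on the node and the standard Linux /proc filesystem; `alloc` is presumably the counter shown in the metrics graph below, and the field names are those printed by /proc/net/sockstat on a 5.4 kernel):

```python
#!/usr/bin/env python3
"""Sample the kernel's TCP socket counters over time (sketch)."""
import time

SOCKSTAT = "/proc/net/sockstat"

def tcp_counters() -> dict:
    """Parse the TCP line, which looks like:
    TCP: inuse 15 orphan 0 tw 12 alloc 4321 mem 300
    """
    with open(SOCKSTAT) as f:
        for line in f:
            if line.startswith("TCP:"):
                fields = line.split()[1:]  # drop the "TCP:" label
                return {k: int(v) for k, v in zip(fields[::2], fields[1::2])}
    return {}

if __name__ == "__main__":
    # A steadily rising `alloc` while `inuse` stays flat suggests sockets
    # are being allocated but never freed back to the kernel.
    while True:
        c = tcp_counters()
        print(time.strftime("%H:%M:%S"),
              " ".join(f"{k}={v}" for k, v in c.items()))
        time.sleep(60)
```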
I tried the following, but it had no effect (a sketch for checking which processes actually own the sockets follows this list):
- Restarting the application pods
- Restarting the ingress-controller pods
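Since the restarts changed nothing, one thing worth checking is whether the sockets belong to a pod process at all. A hypothetical helper for that (assuming root access on the node; it counts socket file descriptors per process under /proc): if these per-process counts stay flat while the kernel's `alloc` counter keeps climbing, the leaked sockets are probably held by the kernel itself (orphaned or half-closed connections) rather than by a pod process, which would explain why restarting pods does nothing.

```python
#!/usr/bin/env python3
"""Count open socket file descriptors per process (sketch, run as root)."""
import os
from collections import Counter

def socket_fds_per_process() -> Counter:
    counts = Counter()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            for fd in os.listdir(f"/proc/{pid}/fd"):
                # Socket fds are symlinks of the form "socket:[12345]".
                if os.readlink(f"/proc/{pid}/fd/{fd}").startswith("socket:"):
                    counts[pid] += 1
        except OSError:
            continue  # process exited or access denied; skip it
    return counts

if __name__ == "__main__":
    for pid, n in socket_fds_per_process().most_common(10):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            name = "?"
        print(f"{pid:>7} {name:<20} {n} sockets")
```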
[Figure: TCP socket metrics on one of the K8s nodes over 7 days]
UPDATE: adding more information:
- K8s version: v1.19.3
- Node Linux info: Ubuntu 20.04.1 LTS - 5.4.0-58-generic
- Docker version: v19.3.13
Question from: https://stackoverflow.com/questions/65857374/memory-leak-tcp-alloc-in-kubernetes-node