====== The Kubernetes System ======

  * [[https://habr.com/ru/companies/vk/articles/645985/|Why Kubernetes is the new Linux: 4 arguments]]

  * [[https://kubernetes.io/ru/docs/home/|Kubernetes documentation (in Russian)]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|The differences between Docker, containerd, CRI-O and runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]
  * [[https://habr.com/ru/companies/slurm/articles/464987/|Storage in Kubernetes: OpenEBS vs Rook (Ceph) vs Rancher Longhorn vs StorageOS vs Robin vs Portworx vs Linstor]]
  * [[https://parshinpn.ru/ru/blog/external-connectivity-kubernetes-calico|Setting up network connectivity between an external node and a Kubernetes cluster (route reflector)]]

  * [[https://habr.com/ru/company/vk/blog/542730/|11 PRO-level screw-ups when adopting Kubernetes and how to avoid them]]

<code>
...
# mv kubectl /usr/local/bin/
</code>
== Debian 13 ==
<code>
# apt install kubectl
</code>
  

<code>
kubectl version

kubectl get all -o wide --all-namespaces
kubectl get all -o wide -A
...
</code>
  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]] (see the sketch below)
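
Two common approaches from the thread linked above, as a hedged sketch (the image names are placeholders): load a locally built image into the minikube node, or build straight against minikube's Docker daemon.

<code>
$ minikube image load server.corpX.un:5000/student/webd:latest

$ eval $(minikube docker-env)
$ docker build -t webd:dev .
$ eval $(minikube docker-env --unset)
</code>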

<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    41m8.320s
...
</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>

or

<code>
# cp -v /home/gitlab-runner/.minikube/cache/linux/amd64/v*/kubectl /usr/local/bin/
</code>
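
Alternatively, minikube bundles a matching kubectl that can be invoked through the minikube binary itself; a minimal sketch:

<code>
gitlab-runner@server:~$ minikube kubectl -- get pods -A
</code>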
  
<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# ###containerd config default > /etc/containerd/config.toml

root@node1:~# cat /etc/containerd/config.toml
...
root@nodeN:~# containerd config dump | less
</code>

== containerd v3 ==

  * [[https://stackoverflow.com/questions/79305194/unable-to-pull-image-from-insecure-registry-http-server-gave-http-response-to/79308521#79308521]]

<code>
# mkdir -p /etc/containerd/certs.d/server.corpX.un:5000/

# cat /etc/containerd/certs.d/server.corpX.un:5000/hosts.toml
</code><code>
[host."http://server.corpX.un:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
</code><code>
# systemctl restart containerd.service
</code>
  
<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd

root@kubeN:~# crictl pull server.corpX.un:5000/student/pywebd2
</code>
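
To verify that the pulled images actually landed in containerd's image store (a hedged sketch; the grep pattern is just an example, and the rmi line is left commented out in the page's usual style):

<code>
root@nodeN:~# crictl images | grep server.corpX.un

root@nodeN:~# ###crictl rmi server.corpX.un:5000/student/gowebd
</code>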
==== Deployment via Kubespray ====
  
<code>
...
ingress_nginx_host_network: true
...
</code>

==== Certificate renewal ====
  * [[https://weng-albert.medium.com/updating-kubernetes-certificates-easy-peasy-en-139fc07f26c8|Updating Kubernetes Certificates: Easy Peasy!(En)]]
  * [[https://medium.com/@reza.sadriniaa/automatic-kubernetes-certificate-renewal-a-step-by-step-guide-c4320192a74d|Automatic Kubernetes Certificate Renewal: A Step-by-Step Guide]]
<code>
kubeM:~# kubeadm certs check-expiration

kubeM:~# cp -rp /etc/kubernetes /root/old_k8s_config

kubeM:~# kubeadm certs renew all
...
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

kubeM:~# cp /etc/kubernetes/admin.conf /root/.kube/config
</code>
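
One way to perform the restart that kubeadm asks for, assuming a kubeadm layout where these components run as static pods under /etc/kubernetes/manifests (the manifests.off path is just a scratch directory; the pause gives kubelet time to stop the static pods before the manifests are put back):

<code>
kubeM:~# mkdir -p /etc/kubernetes/manifests.off

kubeM:~# mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.off/

kubeM:~# sleep 30

kubeM:~# mv /etc/kubernetes/manifests.off/*.yaml /etc/kubernetes/manifests/
</code>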
===== Basic k8s objects =====
<code>
...
$ kubectl delete deployment my-debian
</code>

==== Manifest ====

  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
...
        app: my-debian
    spec:
      #serviceAccountName: admin-user
      containers:
      - name: my-debian
...
</code>
<code>
$ ### kubectl delete deployment my-webd -n my-ns

mkdir ??webd-k8s/; cd $_

$ cat my-webd-deployment.yaml
</code><code>
...
#        image: server.corpX.un:5000/student/webd:ver1.N
#        image: httpd
#        args: ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker"]

#        imagePullPolicy: "Always"
...
</code>
<code>
...
$ kubectl get endpoints -n my-ns
  or
$ kubectl get endpointslice -n my-ns
</code>
=== NodePort ===

  * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]]
  * [[#Работа с готовыми Charts|Working with ready-made Charts]]

=== Minikube ingress-nginx-controller ===

<code>
kube1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns

kube1# cat my-ingress.yaml
</code><code>
apiVersion: networking.k8s.io/v1
...
        pathType: Prefix
</code><code>
kube1# kubectl apply -f my-ingress.yaml -n my-ns

kube1# kubectl get ingress -n my-ns
NAME      CLASS   HOSTS                           ADDRESS                       PORTS   AGE
my-webd   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
...
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

kube1# ### kubectl delete ingress my-ingress -n my-ns
</code>
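
To check the published ingress without creating DNS records, the Host header can be set explicitly; a hedged sketch reusing the node address from the sample output above:

<code>
$ curl -H "Host: webd.corpX.un" http://192.168.X.202/
</code>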
  
<code>
...
$ ###kubectl delete secret/gowebd-tls -n my-ns
</code>
=== cert-manager ===

  * [[Letsencrypt Certbot]]
  * [[https://cert-manager.io/docs/tutorials/acme/nginx-ingress/|cert-manager Securing NGINX-ingress]]
  * [[Сервис Keepalived|Keepalived service]] for port 443
  * [[Решение HAProxy|HAProxy solution]] for port 80 (cert-manager verifies the link from inside the cluster)
<code>
to see the link:
student@debian:~/gowebd-k8s$ kubectl -n my-ns get ingress -o yaml | less

to see the solver pod:
student@debian:~/gowebd-k8s$ kubectl -n my-ns get pods
NAME                        READY   STATUS    RESTARTS   AGE
cm-acme-http-solver-5j2pr   1/1     Running   0          28s
my-webd-78ffd6cc5f-4qplt    1/1     Running   0          4d14h
my-webd-78ffd6cc5f-zpcsh    1/1     Running   0          4d14h
</code>
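
A minimal sketch of the ClusterIssuer used in the cert-manager tutorial linked above (the issuer name and e-mail are placeholders; the Ingress then selects it via the cert-manager.io/cluster-issuer annotation plus a tls: section):

<code>
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  # referenced from the Ingress annotation cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: student@corpX.un
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
</code>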
==== Volumes ====
  
  * Take a snapshot
  * Break something (delete a user)

== Stopping the service ==

<code>
...
</code>
<code>
...
#    use-forwarded-headers: true
#    allow-snippet-annotations: true
#  service:
#    type: LoadBalancer
#    loadBalancerIP: "192.168.X.64"
</code><code>
$ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml
...
</code>

===== Kubernetes Dashboard =====

  * https://www.bytebase.com/blog/top-open-source-kubernetes-dashboard/

  * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/]]
  * [[https://www.jwt.io/|JSON Web Token (JWT) Debugger]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

$ cat dashboard-sa-admin-user.yaml
</code><code>
---
...
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
...
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
</code><code>
$ kubectl apply -f dashboard-sa-admin-user.yaml

$ kubectl auth can-i get pods --as=system:serviceaccount:kubernetes-dashboard:admin-user

$ kubectl create token admin-user -n kubernetes-dashboard #--duration=1h

$ ###ps aux | grep kube-apiserver | grep service-account-key-file
$ ###cat /etc/kubernetes/ssl/sa.pub
$ ###echo ... | jq -R 'split(".") | .[1] | @base64d | fromjson'
$ ###echo ... | awk -F'.' '{print $2}' | base64 -d | jq -r '.exp | todate'
</code>

==== Access via proxy ====

<code>
cmder$ kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

==== Access via port-forward ====
<code>
$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
</code>

  * https://localhost:8443

==== Creating a long-lived token ====
<code>
$ cat dashboard-secret-for-token.yaml
</code><code>
apiVersion: v1
kind: Secret
...
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
</code><code>
$ kubectl apply -f dashboard-secret-for-token.yaml

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d ; echo
</code>
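
The extracted token can also be used directly against the API server; a hedged sketch (the API server host and port are assumptions, adjust to your control-plane node):

<code>
$ TOKEN=$(kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d)

$ curl -sk -H "Authorization: Bearer $TOKEN" https://kube1:6443/api/v1/namespaces
</code>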
===== Monitoring =====
  
<code>
...
kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>
  
==== kube-state-metrics ====

  * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]]
  * ... alerts with info on failed pods ... (see the rule sketch below)

<code>
kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kube1# helm repo update
kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace

kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics
</code>
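
The failed-pod alerts mentioned above can be expressed over kube-state-metrics series; a minimal sketch in Prometheus/vmalert rule format (the group name, alert name and 5m threshold are placeholders):

<code>
groups:
- name: pods
  rules:
  - alert: PodNotHealthy
    # kube_pod_status_phase exposes one 0/1 series per pod and phase
    expr: kube_pod_status_phase{phase=~"Failed|Pending|Unknown"} > 0
    for: 5m
    annotations:
      description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} is {{ $labels.phase }}'
</code>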
===== Debugging, troubleshooting =====
  