====== The Kubernetes System ======

  * [[https://habr.com/ru/companies/vk/articles/645985/|Почему Kubernetes — это новый Linux: 4 аргумента]]

  * [[https://habr.com/ru/companies/vk/articles/821021/|Зачем делать прожорливый софт: принципы reconciliation loop (Привет, K8s!)]]

  * [[https://kubernetes.io/ru/docs/home/|Документация по Kubernetes (на русском)]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|Различия между Docker, containerd, CRI-O и runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]
  * [[https://habr.com/ru/companies/slurm/articles/464987/|Хранилища в Kubernetes: OpenEBS vs Rook (Ceph) vs Rancher Longhorn vs StorageOS vs Robin vs Portworx vs Linstor]]
  * [[https://parshinpn.ru/ru/blog/external-connectivity-kubernetes-calico|Настраиваем сетевую связность внешнего узла с кластером Kubernetes (route reflector)]]

  * [[https://habr.com/ru/company/vk/blog/542730/|11 факапов PRO-уровня при внедрении Kubernetes и как их избежать]]

<code>
# mv kubectl /usr/local/bin/
</code>
== Debian 13 ==
<code>
# apt install kubectl
</code>
  
<code>
...
</code><code>
kubectl version

kubectl get all -o wide --all-namespaces #-A

kubectl get nodes
</code>
==== Configuring autocompletion ====
<code>
kube1:~# less /etc/bash_completion.d/kubectl.sh
</code>
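
A sketch of generating that completion file in the first place, assuming the bash-completion package is already installed (the path matches the file inspected above):

<code>
kube1:~# kubectl completion bash > /etc/bash_completion.d/kubectl.sh
</code>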
  
==== Creating a kubectl configuration file ====

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/]]

<code>
user1@client1:~$ ###export KUBECONFIG=~/.kube/config_test
user1@client1:~$ ###rm -rf .kube/

user1@client1:~$ kubectl config set-cluster cluster.local --server=https://192.168.13.221:6443 --insecure-skip-tls-verify=true
kubeN# ###cat /etc/kubernetes/ssl/ca.crt
  OR
root@my-debian:~# kubectl config set-cluster cluster.local --server=https://192.168.13.221:6443 --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt #--embed-certs=true

user1@client1:~$ cat .kube/config

user1@client1:~$ kubectl config set-credentials user1 --client-certificate=user1.crt --client-key=user1.key #--embed-certs=true
  OR
user1@client1:~$ kubectl config set-credentials user1 --token=...................................
  OR
root@my-debian:~# kubectl config set-credentials user1 --token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)

user1@client1:~$ kubectl config get-users

user1@client1:~$ kubectl config set-context default-context --cluster=cluster.local --user=user1

user1@client1:~$ kubectl config use-context default-context

user1@client1:~$ kubectl auth whoami

user1@client1:~$ kubectl auth can-i get pods #-n my-ns

user1@client1:~$ kubectl get pods #-A
Error from server (Forbidden) or ...
</code>
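
For reference, the ~/.kube/config produced by the commands above has roughly this structure (a sketch with illustrative values, not a file to copy verbatim):

<code>
apiVersion: v1
kind: Config
clusters:
- name: cluster.local
  cluster:
    server: https://192.168.13.221:6443
    insecure-skip-tls-verify: true
users:
- name: user1
  user:
    client-certificate: user1.crt
    client-key: user1.key
contexts:
- name: default-context
  context:
    cluster: cluster.local
    user: user1
current-context: default-context
</code>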
  

  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]]

<code>
root@server:~# apt install -y wget

root@server:~# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
...
</code>

<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    3m9.625s ... 41m8.320s
...

</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>

or

<code>
# cp -v /home/gitlab-runner/.minikube/cache/linux/amd64/v*/kubectl /usr/local/bin/
</code>
  
<code>
gitlab-runner@server:~$ ###minikube start
</code>

==== "Inside" minikube ====
<code>
gitlab-runner@server:~/webd-k8s$ kubectl -n my-ns get service
my-webd   ClusterIP   10.109.239.180   <none>        80/TCP ...

gitlab-runner@server:~/webd-k8s$ kubectl get pods -o wide -A | grep dns
kube-system     coredns ... 10.244.0.2

gitlab-runner@server:~/webd-k8s$ minikube ssh

docker@minikube:~$ host my-webd.my-ns.svc.cluster.local 10.244.0.2
...
my-webd.my-ns.svc.cluster.local has address 10.109.239.180

docker@minikube:~$ curl 10.109.239.180
...
</code>

==== Using your own Docker images in minikube ====

  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]]

  * [[Технология Docker#Приложение golang gowebd]]

<code>
gitlab-runner@server:~/gowebd$ eval $(minikube docker-env)

gitlab-runner@server:~/gowebd$ docker build -t gowebd .
</code>
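
A sketch of running the locally built gowebd image from above: the pull policy must be Never (or IfNotPresent) so Kubernetes does not try to fetch the image from a registry (this matches the commented imagePullPolicy option in the deployment example later on):

<code>
gitlab-runner@server:~/gowebd$ kubectl run my-gowebd --image=gowebd --image-pull-policy=Never

gitlab-runner@server:~/gowebd$ kubectl get pods
</code>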


===== Kubernetes Cluster =====
  
==== Deploying with kubeadm ====

<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# ###containerd config default > /etc/containerd/config.toml

root@node1:~# cat /etc/containerd/config.toml
...
root@nodeN:~# containerd config dump | less
</code>

== containerd v3 ==

  * [[https://stackoverflow.com/questions/79305194/unable-to-pull-image-from-insecure-registry-http-server-gave-http-response-to/79308521#79308521]]

<code>
# mkdir -p /etc/containerd/certs.d/server.corpX.un:5000/

# cat /etc/containerd/certs.d/server.corpX.un:5000/hosts.toml
</code><code>
[host."http://server.corpX.un:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
</code><code>
# systemctl restart containerd.service
</code>
  
<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd

root@kubeN:~# crictl pull server.corpX.un:5000/student/pywebd2
</code>
==== Deploying with Kubespray ====
  

=== Preparing for Kubespray deployment ===

<code>
server# ssh-keygen    ### -t rsa

server# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4;
</code>
  
 +=== Вариант 1 (ansible) ===
 +
 +  * [[https://​github.com/​kubernetes-sigs/​kubespray/​blob/​v2.26.0/​README.md]]
 +  * [[Язык программирования Python#​Виртуальная среда Python]]
 +
 +<​code>​
 (venv1) server# git clone https://​github.com/​kubernetes-sigs/​kubespray (venv1) server# git clone https://​github.com/​kubernetes-sigs/​kubespray
  
Line 507: Line 601:
 ingress_nginx_host_network:​ true ingress_nginx_host_network:​ true
 ... ...
</code>

=== Option 2 (docker) ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/v2.29.0/README.md]]

<code>
server:~# mkdir -p inventory/sample

server:~# cat inventory/sample/inventory.ini
</code><code>
#[all]
#kube1 ansible_host=192.168.X.221
#kube2 ansible_host=192.168.X.222
#kube3 ansible_host=192.168.X.223
##kube4 ansible_host=192.168.X.224

[kube_control_plane]
kube[1:3]

[etcd:children]
kube_control_plane

[kube_node]
kube[1:3]
#kube[1:4]
</code><code>
server:~# docker run --userns=host --rm -it -v /root/inventory/sample:/inventory -v /root/.ssh/:/root/.ssh/ quay.io/kubespray/kubespray:v2.29.0 bash

root@cf764ca3b291:/kubespray# time ansible-playbook -i /inventory/inventory.ini cluster.yml
...
real    12m18.679s
...
</code>

==== Managing images ====
<code>
kubeN#
crictl pull server.corpX.un:5000/student/gowebd
crictl images
crictl rmi server.corpX.un:5000/student/gowebd
</code>

==== Renewing certificates ====
  * [[https://weng-albert.medium.com/updating-kubernetes-certificates-easy-peasy-en-139fc07f26c8|Updating Kubernetes Certificates: Easy Peasy!(En)]]
  * [[https://medium.com/@reza.sadriniaa/automatic-kubernetes-certificate-renewal-a-step-by-step-guide-c4320192a74d|Automatic Kubernetes Certificate Renewal: A Step-by-Step Guide]]
<code>
kubeM:~# kubeadm certs check-expiration

kubeM:~# cp -rp /etc/kubernetes /root/old_k8s_config

kubeM:~# kubeadm certs renew all
...
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

kubeM:~# cp /etc/kubernetes/admin.conf /root/.kube/config
</code>
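
A sketch of one common way to restart those control-plane components on a kubeadm node (an assumption based on general kubeadm practice, not a step from this course: kubelet stops a static pod when its manifest disappears and recreates it when the manifest returns):

<code>
kubeM:~# mkdir -p /tmp/manifests
kubeM:~# mv /etc/kubernetes/manifests/*.yaml /tmp/manifests/
kubeM:~# sleep 20
kubeM:~# mv /tmp/manifests/*.yaml /etc/kubernetes/manifests/
</code>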
===== Basic k8s objects =====
<code>
$ kubectl delete deployment my-debian
</code>

==== Manifest ====

  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
...
        app: my-debian
    spec:
      #serviceAccountName: admin-user
      containers:
      - name: my-debian
...
$ ### kubectl delete deployment my-webd -n my-ns

mkdir ??webd-k8s/; cd $_

$ cat my-webd-deployment.yaml
...
#        image: server.corpX.un:5000/student/webd:ver1.N
#        image: httpd
#        args: ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker"]

#        image: gowebd
#        imagePullPolicy: "IfNotPresent"

#        lifecycle:
...
(options -f, --tail=2000, --previous are available)

$ kubectl scale deployment my-webd --replicas=3 -n my-ns   # 0 stops the application

$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
...
</code>
  
=== Finding and deleting pods in a non-running state ===

  * [[https://stackoverflow.com/questions/55072235/how-to-delete-completed-kubernetes-pod|How to delete completed kubernetes pod?]]

<code>
kube1:~# kubectl get pods --field-selector=status.phase!=Running -A -o wide

kube1:~# kubectl delete pod --field-selector=status.phase==Succeeded -A

kube1:~# kubectl delete pod --field-selector=status.phase==Failed -A
</code>
==== Service ====
  
<code>
$ kubectl get endpoints -n my-ns
  or
$ kubectl get endpointslice -n my-ns
</code>
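
For reference, a minimal ClusterIP Service matching the my-webd deployment might look roughly like this (a sketch; the manifest actually used in the course may differ, e.g. in targetPort):

<code>
apiVersion: v1
kind: Service
metadata:
  name: my-webd
spec:
  selector:
    app: my-webd
  ports:
  - port: 80
    targetPort: 80
</code>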
=== NodePort ===
  
<code>
$ kubectl -n metallb-system get all

$ mkdir metallb-system; cd $_

$ cat first-pool.yaml
...
</code>
  
== port-forward ==
  * [[https://www.golinuxcloud.com/kubectl-port-forward/|kubectl port-forward examples in Kubernetes]]
  * [[#Инструмент командной строки kubectl]]
  
  * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]]
  * [[#Работа с готовыми Charts]]

=== Minikube ingress-nginx-controller ===

  * [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]]

<code>
$ minikube addons list

$ minikube addons enable ingress
</code>
  

<code>
kube1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns

kube1# cat my-ingress.yaml
</code><code>
apiVersion: networking.k8s.io/v1
...
#    nginx.ingress.kubernetes.io/canary: "true"
#    nginx.ingress.kubernetes.io/canary-weight: "30"
#    cert-manager.io/issuer: "...-issuer"
#    cert-manager.io/cluster-issuer: "...-issuer"
spec:
  ingressClassName: nginx
  rules:
  - host: webd.corpX.un
...
        path: /
        pathType: Prefix
#  tls:
#  - hosts:
#    - gowebd.corpX.un
#    - "*.corpX.un"
#    secretName: gowebd-tls
#  - hosts:
#    - webd.corpX.un
#    secretName: webd-tls
</code><code>
kube1# kubectl apply -f my-ingress.yaml -n my-ns

kube1# kubectl get ingress -n my-ns
NAME      CLASS   HOSTS                           ADDRESS                       PORTS   AGE
my-webd   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
...
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

kube1# ### kubectl delete ingress my-ingress -n my-ns
</code>
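
The commented tls section above references secrets such as gowebd-tls; a sketch of creating one manually from existing key and certificate files (paths are illustrative; with the cert-manager annotations above the secret is created automatically):

<code>
kube1# kubectl create secret tls gowebd-tls --cert=gowebd.crt --key=gowebd.key -n my-ns
</code>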
  
<code>
ssh root@kube2 'chmod 777 /opt/local-path-provisioner'
ssh root@kube3 'chmod 777 /opt/local-path-provisioner'
ssh root@kube4 'mkdir /opt/local-path-provisioner'
ssh root@kube4 'chmod 777 /opt/local-path-provisioner'

$ ###kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

(venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml

root@a7818cd3f7c7:/kubespray# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /inventory/inventory.ini
</code>
  * [[https://github.com/longhorn/longhorn]]

Connecting via [[#kubectl proxy]]

  * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]]
  * Take a snapshot
  * Break something (delete a user)

== Stopping the service ==

<code>
...
</code>

  * Volume -> Attach to Host (any one) in Maintenance mode, Revert to the snapshot, Detach
  * Start the service

  * Also, if the volume is not in Maintenance mode, you can mount it directly:
<code>
kube1:~# mount /dev/longhorn/pvc-2057c044-ac3f-4052-92ce-d7f57453a704 /mnt
</code>
  
<code>
# ###wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz
# wget https://get.helm.sh/helm-v4.0.4-linux-amd64.tar.gz

# tar -zxvf helm-*-linux-amd64.tar.gz
#    use-forwarded-headers: true
#    allow-snippet-annotations: true
#  service:
#    type: LoadBalancer
#    loadBalancerIP: "192.168.X.64"
</code><code>
$ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml

$ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace

$ kubectl get all -n ingress-nginx

$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers
</code>
  
===== Authentication and Authorization =====

==== Using certificates ====

  * [[Пакет OpenSSL#Создание приватного ключа пользователя]]
  * [[Пакет OpenSSL#Создание запроса на сертификат]]

<code>
user1@client1:~$ cat user1.req | base64 -w0
</code>
  * [[https://stackoverflow.com/questions/75735249/what-do-the-values-in-certificatesigningrequest-spec-usages-mean|What do the values in CertificateSigningRequest.spec.usages mean?]]
<code>
kube1:~/users# kubectl explain csr.spec.usages

kube1:~/users# cat user1.req.yaml
</code><code>
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: user1
spec:
  request: LS0t...S0tCg==
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 8640000  # 100 * one day
  usages:
#  - digital signature
#  - key encipherment
  - client auth
</code><code>
kube1:~/users# kubectl apply -f user1.req.yaml

kube1:~/users# kubectl describe csr/user1

kube1:~/users# kubectl certificate approve user1

kube1:~/users# kubectl get csr

kube1:~/users# kubectl get csr/user1 -o yaml

kube1:~/users# kubectl get csr/user1 -o jsonpath="{.status.certificate}" | base64 -d | tee user1.crt

user1@client1:~$ scp root@kube1:users/user1.crt .

kube1:~/users# ###kubectl delete csr user1
</code>

==== Using a ServiceAccount ====

  * [[Система Kubernetes#Kubernetes Dashboard]]

==== Using Role and RoleBinding ====

=== Granting access to services/proxy in a Namespace ===

  * Cloud native distributed block storage for Kubernetes [[Система Kubernetes#longhorn]]

<code>
kube1:~# kubectl api-resources -o wide | less
APIVERSION = <group> + "/" + <version of the API>

kube1:~/users# cat lh-svc-proxy-role.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: longhorn-system
  name: lh-svc-proxy-role
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  verbs: ["get"]
</code><code>
kube1:~/users# cat user1-lh-svc-proxy-rolebinding.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user1-lh-svc-proxy-rolebinding
  namespace: longhorn-system
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: lh-svc-proxy-role
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml

student@client1:~$ kubectl proxy

student@client1:~$ curl http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/

student@client1:~$ curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

kube1:~/users# kubectl delete -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml
</code>
=== Granting full access to a Namespace ===

<code>
kube1:~/users# cat ns-full-access.yaml
</code><code>
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ns-full-access
  namespace: my-ns
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ns-full-access-rolebinding
  namespace: my-ns
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: cko
  #kind: User
  #name: user1
roleRef:
  kind: Role
  name: ns-full-access
  apiGroup: rbac.authorization.k8s.io
#roleRef:
  #apiGroup: rbac.authorization.k8s.io
  #kind: ClusterRole
  #name: admin
</code><code>
kube1:~/users# kubectl apply -f ns-full-access.yaml
</code>
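
A quick way to verify what the binding actually grants is kubectl impersonation (a sketch; the user and group match the examples above):

<code>
kube1:~/users# kubectl auth can-i --list -n my-ns --as=user1 --as-group=cko

kube1:~/users# kubectl auth can-i create pods -n my-ns --as=user1 --as-group=cko
</code>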

=== Finding the roles granted to an account ===
<code>
kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "user1")'

kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "cko")'

kube1:~/users# kubectl delete -f ns-full-access.yaml
  OR
kube1:~/users# kubectl -n my-ns delete rolebindings ns-full-access-rolebinding
kube1:~/users# kubectl -n my-ns delete role ns-full-access
</code>
==== Using ClusterRole and ClusterRoleBinding ====

=== Granting access to services/port-forward in the Cluster ===

<code>
kube1:~/users# cat svc-pfw-role.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
#kind: Role
metadata:
  name: svc-pfw-role
#  namespace: my-pgcluster-ns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
</code><code>
kube1:~/users# cat user1-svc-pfw-rolebinding.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
#kind: RoleBinding
metadata:
  name: user1-svc-pfw-rolebinding
#  namespace: my-pgcluster-ns
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
#  kind: Role
  name: svc-pfw-role
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml

student@client1:~$ kubectl port-forward -n my-pgcluster-ns services/my-pgcluster-rw 5432:5432

student@client1:~$ psql postgres://keycloak:strongpassword@127.0.0.1:5432/keycloak
</code>

  * Access via proxy to the [[Система Kubernetes#Kubernetes Dashboard]]

<code>
kube1:~/users# kubectl delete -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml
</code>
=== Granting full access to the Kubernetes Cluster ===

<code>
kube1:~/users# kubectl get clusterroles | less

kube1:~/users# kubectl get clusterrole cluster-admin -o yaml

kube1:~/users# kubectl get clusterrolebindings | less

kube1:~/users# kubectl get clusterrolebindings kubeadm:cluster-admins -o yaml

kube1:~/users# kubectl get clusterrolebindings cluster-admin -o yaml

kube1:~/users# cat user1-cluster-admin.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user1-cluster-admin
subjects:
- kind: User
  name: user1
#  name: user1@corp13.un
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f user1-cluster-admin.yaml

student@client1:~$ kubectl get nodes
</code>

=== Finding the cluster roles granted to an account ===
<code>
kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "kubeadm:cluster-admins")'

kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "user1")'

kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "default")'

kube1:~/users# kubectl delete -f user1-cluster-admin.yaml
  OR
kube1:~/users# kubectl delete clusterrolebindings user1-cluster-admin
</code>

===== cert-manager =====

  * [[Letsencrypt Certbot]]
  * [[https://cert-manager.io/docs/installation/|cert-manager Installation]]
  * [[https://cert-manager.io/docs/tutorials/acme/nginx-ingress/|cert-manager Securing NGINX-ingress]]

  * [[https://debuntu.ru/manuals/kubernetes/tls-kerberos-in-kubernetes/cert-manager_and_all_about_it/installing-configuring-cert-manager/|debuntu.ru Установка и настройка cert-manager]]
  * [[https://habr.com/ru/companies/nubes/articles/808035/|Автоматический выпуск SSL-сертификатов. Используем Kubernetes и FreeIPA]]
  * [[https://cert-manager.io/docs/configuration/acme/#private-acme-servers|Private ACME Servers]]

<code>
student@vps:~$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml

student@vps:~$ kubectl -n cert-manager get all

student@vps:~$ #kubectl create secret generic cert-manager-tsig-secret --from-literal=tsig-secret-key="NNN...NNN" -n cert-manager

student@vps:~$ cat ...issuer.yaml
</code><code>
apiVersion: cert-manager.io/v1
#kind: Issuer
kind: ClusterIssuer
metadata:
  #name: letsencrypt-staging-clusterissuer
  #name: letsencrypt-prod-clusterissuer
  #name: freeipa-clusterissuer
  #name: freeipa-dns-clusterissuer
spec:
  acme:
    #server: https://acme-staging-v02.api.letsencrypt.org/directory
    #server: https://acme-v02.api.letsencrypt.org/directory
    #profile: tlsserver

    #server: https://server.corpX.un/acme/directory
    #caBundle: # cat /etc/ipa/ca.crt | base64 -w0

    email: student@corpX.un
    privateKeySecretRef:
      name: ...issuer-secret
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
    #- dns01:
        #rfc2136:
          #nameserver: 192.168.X.10
          #tsigKeyName: cert-manager
          #tsigAlgorithm: HMACSHA256
          #tsigSecretSecretRef:
            #name: cert-manager-tsig-secret
            #key: tsig-secret-key

</code><code>
student@vps:~$ kubectl apply -f ...issuer.yaml #-n my-ns

student@vps:~$ kubectl get secret -n cert-manager #-n my-ns

student@vps:~$ kubectl get clusterissuers.cert-manager.io
student@vps:~$ kubectl get issuers.cert-manager.io #-n my-ns
NAME                    READY   AGE
...issuer               True    42s
</code>

  * There are two ways to trigger certificate issuance:

Method 1: annotations in the [[#ingress example]]

Method 2 (used when the site has no ingress, so there is nowhere to put the annotations, or for rfc2136):
<code>
student@vps:~/webd-k8s$ cat my-certificate.yaml
</code><code>
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webd-cert
spec:
  secretName: webd-tls
  dnsNames:
    #- siteN.mgtu.ru
    #- keycloak.corpX.un
    #- gitlab.corpX.un
  issuerRef:
    name: ...issuer
    #kind: ClusterIssuer
    #kind: Issuer
</code>

<code>
student@vps:~/webd-k8s$ kubectl apply -f my-certificate.yaml -n my-ns

student@vps:~$ kubectl get certificate,secrets -n my-ns

student@vps:~$ kubectl events -n my-ns
...
Certificate fetched from issuer successfully

student@vps:~$ kubectl get secret webd-tls -o yaml -n my-ns
</code>
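
Once the Ingress serves the certificate, it can be checked from any client (a sketch; host name and port are illustrative):

<code>
student@vps:~$ echo | openssl s_client -connect webd.corpX.un:443 -servername webd.corpX.un 2>/dev/null | openssl x509 -noout -issuer -dates
</code>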

===== Dashboards =====

==== k9s ====

  * [[https://habr.com/ru/companies/flant/articles/524196/|Обзор k9s — продвинутого терминального интерфейса для Kubernetes]]
  * [[https://notes.kodekloud.com/docs/Kubernetes-Troubleshooting-for-Application-Developers/Prerequisites/k9s-Walkthrough|k9s Walkthrough]]

<code>
kube1# wget https://github.com/derailed/k9s/releases/download/v0.50.16/k9s_linux_amd64.deb

kube1# dpkg -i k9s_linux_amd64.deb
</code>

==== Kubernetes Dashboard ====

  * https://www.bytebase.com/blog/top-open-source-kubernetes-dashboard/
  
  * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/]]
  * [[https://www.jwt.io/|JSON Web Token (JWT) Debugger]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

$ cat dashboard-sa-admin-user.yaml
</code><code>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
</code><code>
$ kubectl apply -f dashboard-sa-admin-user.yaml

$ kubectl auth can-i get pods --as=system:serviceaccount:kubernetes-dashboard:admin-user

$ kubectl create token admin-user -n kubernetes-dashboard #--duration=1h

$ ###ps aux | grep kube-apiserver | grep service-account-key-file
$ ###cat /etc/kubernetes/ssl/sa.pub
$ ###echo ... | jq -R 'split(".") | .[1] | @base64d | fromjson'
$ ###echo ... | awk -F'.' '{print $2}' | base64 -d | jq -r '.exp | todate'
</code>

=== Access via proxy ===

<code>
cmder$ kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

=== Access via port-forward ===
<code>
$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
</code>

  * https://localhost:8443

=== Creating a long-lived token ===
<code>
$ cat dashboard-secret-for-token.yaml
</code><code>
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
</code><code>
$ kubectl apply -f dashboard-secret-for-token.yaml

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d ; echo
</code>
- 
-  * http://​localhost:​8001/​api/​v1/​namespaces/​kubernetes-dashboard/​services/​https:​kubernetes-dashboard:/​proxy/​ 
- 
 ===== Мониторинг ===== ===== Мониторинг =====
  
<code>
kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>
  
==== kube-state-metrics ====

  * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]]
  * ... alerts with details on failed pods ...

<code>
kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kube1# helm repo update
kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace

kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics
</code>
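
With these metrics scraped, an alert for stuck pods can be expressed roughly like this (a PromQL sketch; kube_pod_status_phase is a standard kube-state-metrics series, the phase list is illustrative):

<code>
sum by (namespace, pod) (kube_pod_status_phase{phase=~"Failed|Pending"}) > 0
</code>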
===== Debugging, troubleshooting =====

==== kompose ====

  * https://kompose.io/
  * [[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes|What's the difference between Docker Compose and Kubernetes?]]
  * [[https://loft.sh/blog/docker-compose-to-kubernetes-step-by-step-migration/|Docker Compose to Kubernetes: Step-by-Step Migration]]

<code>
kube1:~/gitlab# curl -L https://github.com/kubernetes/kompose/releases/download/v1.37.0/kompose-linux-amd64 -o /usr/local/bin/kompose

kube1:~/gitlab# chmod +x /usr/local/bin/kompose

root@gate:~# curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
root@gate:~# chmod +x kompose
</code>
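
Typical usage, assuming a docker-compose.yml in the current directory (kompose convert writes one manifest file per compose service, which can then be applied as usual):

<code>
kube1:~/gitlab# kompose convert -f docker-compose.yml

kube1:~/gitlab# kubectl apply -f .
</code>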