====== The Kubernetes System ======

  * [[https://habr.com/ru/companies/vk/articles/645985/|Why Kubernetes is the new Linux: 4 arguments]]

  * [[https://habr.com/ru/companies/vk/articles/821021/|Why make resource-hungry software: reconciliation loop principles (Hello, K8s!)]]
  
  * [[https://kubernetes.io/ru/docs/home/|Kubernetes documentation (in Russian)]]

  * [[https://habr.com/ru/companies/domclick/articles/566224/|The differences between Docker, containerd, CRI-O and runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]
  * [[https://habr.com/ru/companies/slurm/articles/464987/|Storage in Kubernetes: OpenEBS vs Rook (Ceph) vs Rancher Longhorn vs StorageOS vs Robin vs Portworx vs Linstor]]
  * [[https://parshinpn.ru/ru/blog/external-connectivity-kubernetes-calico|Setting up network connectivity between an external node and a Kubernetes cluster (route reflector)]]

  * [[https://habr.com/ru/company/vk/blog/542730/|11 PRO-level fails when adopting Kubernetes and how to avoid them]]

<code>
# mv kubectl /usr/local/bin/
</code>
== Debian 13 ==
<code>
# apt install kubectl
</code>
  
<code>
...
</code><code>
kubectl version

kubectl get all -o wide --all-namespaces #-A

kubectl get nodes
</code>
==== Setting up shell autocompletion ====
<code>
kube1:~# less /etc/bash_completion.d/kubectl.sh
...
</code>
  
==== Creating a kubectl configuration file ====

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/]]

<code>
user1@client1:~$ ###export KUBECONFIG=~/.kube/config_test
user1@client1:~$ ###rm -rf .kube/

user1@client1:~$ kubectl config set-cluster cluster.local --server=https://192.168.X.221:6443 --certificate-authority=ca.crt #--insecure-skip-tls-verify=true
kubeN# ###cat /etc/kubernetes/ssl/ca.crt
  OR
root@my-debian:~# kubectl config set-cluster cluster.local --server=https://192.168.X.221:6443 --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt #--embed-certs=true

user1@client1:~$ cat .kube/config

user1@client1:~$ kubectl config set-credentials user1 --client-certificate=user1.crt --client-key=user1.key #--embed-certs=true
  OR
user1@client1:~$ kubectl config set-credentials user1 --token=...................................
  OR
root@my-debian:~# kubectl config set-credentials user1 --token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)

user1@client1:~$ kubectl config get-users

user1@client1:~$ kubectl config set-context default-context --cluster=cluster.local --user=user1

user1@client1:~$ kubectl config use-context default-context

user1@client1:~$ kubectl auth whoami

user1@client1:~$ kubectl auth can-i get pods #-n my-ns

user1@client1:~$ kubectl get pods #-A
Error from server (Forbidden) or ...
</code>
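
For reference, a minimal sketch of the ~/.kube/config that the commands above produce (all names, addresses and file paths are the assumptions from this example):

<code>
apiVersion: v1
kind: Config
clusters:
- name: cluster.local
  cluster:
    certificate-authority: ca.crt    # becomes certificate-authority-data with --embed-certs=true
    server: https://192.168.X.221:6443
users:
- name: user1
  user:
    client-certificate: user1.crt
    client-key: user1.key
contexts:
- name: default-context
  context:
    cluster: cluster.local
    user: user1
current-context: default-context
</code>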
  
===== Installing minikube =====
  
  * [[https://github.com/kubernetes/minikube/tags]]
  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]]
  
<code>
root@server:~# apt install -y wget

root@server:~# wget https://storage.googleapis.com/minikube/releases/v1.37.0/minikube-linux-amd64

root@server:~# mv minikube-linux-amd64 /usr/local/bin/minikube
...
</code>

<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000" #--registry-mirror="https://mirror.gcr.io"
real    3m9.625s ... 41m8.320s
...
</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>

or

<code>
# cp -v /home/gitlab-runner/.minikube/cache/linux/amd64/v*/kubectl /usr/local/bin/
</code>
  
<code>
...
gitlab-runner@server:~$ ###minikube start
</code>

==== "Inside" minikube ====
<code>
gitlab-runner@server:~/webd-k8s$ kubectl -n my-ns get service
my-webd   ClusterIP   10.109.239.180   <none>        80/TCP ...

gitlab-runner@server:~/webd-k8s$ kubectl get pods -o wide -A | grep dns
kube-system     coredns ... 10.244.0.2

gitlab-runner@server:~/apwebd-k8s$ minikube ssh

docker@minikube:~$ host my-webd.my-ns.svc.cluster.local 10.244.0.2
...
my-webd.my-ns.svc.cluster.local has address 10.109.239.180

docker@minikube:~$ curl 10.109.239.180
...
</code>

==== Using your own Docker images in minikube ====

  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]]

  * [[Технология Docker#Приложение golang gowebd]]

<code>
gitlab-runner@server:~/gowebd$ eval $(minikube docker-env)

gitlab-runner@server:~/gowebd$ docker build -t gowebd .
</code>
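
A quick check (a sketch) that the freshly built image landed in minikube's Docker daemon, so a Deployment can reference it with image: gowebd and imagePullPolicy: IfNotPresent:

<code>
gitlab-runner@server:~/gowebd$ docker images | grep gowebd

gitlab-runner@server:~/gowebd$ ###minikube image ls | grep gowebd
</code>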

===== Kubernetes cluster =====

==== Deploying with kubeadm ====
<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# ###containerd config default > /etc/containerd/config.toml

root@node1:~# cat /etc/containerd/config.toml
...

root@nodeN:~# containerd config dump | less
</code>

== containerd v3 ==

  * [[https://stackoverflow.com/questions/79305194/unable-to-pull-image-from-insecure-registry-http-server-gave-http-response-to/79308521#79308521]]

<code>
# mkdir -p /etc/containerd/certs.d/server.corpX.un:5000/

# cat /etc/containerd/certs.d/server.corpX.un:5000/hosts.toml
</code><code>
[host."http://server.corpX.un:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
</code><code>
# systemctl restart containerd.service
</code>

<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd

root@kubeN:~# crictl pull server.corpX.un:5000/student/pywebd2
</code>
==== Deploying with Kubespray ====

  
=== Preparing for a Kubespray deployment ===

<code>
server# ssh-keygen    ### -t rsa

server# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4;
</code>

=== Option 1 (ansible) ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/v2.26.0/README.md]]
  * [[Язык программирования Python#Виртуальная среда Python]]

<code>
(venv1) server# git clone https://github.com/kubernetes-sigs/kubespray
...
</code>

  * [[Сервис Ansible#Использование модулей]]: Ansible modules for disabling swap (done automatically)
  * [[Сервис Ansible#Использование ролей]]: Ansible roles for configuring the network (optional)
  * [[#Настройка registry-mirrors для Kubespray]] may be required

=== Deploying the cluster with Kubespray ===

<code>
...
</code>
  
=== Option 2 (docker) ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/v2.29.0/README.md]]

<code>
server:~# mkdir -p inventory/sample

server:~# cat inventory/sample/inventory.ini
</code><code>
#[all]
#kube1 ansible_host=192.168.X.221
#kube2 ansible_host=192.168.X.222
#kube3 ansible_host=192.168.X.223
##kube4 ansible_host=192.168.X.224

[kube_control_plane]
kube[1:3]

[etcd:children]
kube_control_plane

[kube_node]
kube[1:3]
#kube[1:4]
</code><code>
server:~# docker run --userns=host --rm -it -v /root/inventory/sample:/inventory -v /root/.ssh/:/root/.ssh/ quay.io/kubespray/kubespray:v2.29.0 bash

root@cf764ca3b291:/kubespray# ansible all -m ping -i /inventory/inventory.ini
</code>
<code>
root@cf764ca3b291:/kubespray# cp -rv inventory/sample/group_vars/ /inventory/
</code>
  * [[#Настройка registry-mirrors для Kubespray]] and [[#Добавление insecure_registries через Kubespray]] may be required
<code>
root@cf764ca3b291:/kubespray# time ansible-playbook -i /inventory/inventory.ini cluster.yml
...
real    12m18.679s
...
</code>

==== Managing images ====

=== Testing ===
<code>
kubeN#
crictl pull server.corpX.un:5000/student/gowebd
crictl images
crictl rmi server.corpX.un:5000/student/gowebd
</code>

=== Changing the registry for containerd ===
<code>
server:~# cat hosts.toml
</code><code>
server = "https://docker.io"
[host."https://mirror.gcr.io"]
  capabilities = ["pull","resolve"]
  skip_verify = false
  override_path = false
</code><code>
server:~# scp hosts.toml kubeN:/etc/containerd/certs.d/docker.io/hosts.toml

server:~# ssh kubeN service containerd restart
</code>
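
A hedged way to verify the mirror (the image name is just an example; the pull should now be served via mirror.gcr.io):

<code>
server:~# ssh kubeN crictl pull docker.io/library/alpine:latest
</code>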
  
=== Using a proxy in containerd ===
<code>
systemctl edit containerd
</code><code>
...
[Service]
Environment="HTTP_PROXY=http://openproxy2.bmstu.ru:3128"
Environment="HTTPS_PROXY=http://openproxy2.bmstu.ru:3128"
Environment="NO_PROXY=localhost,127.0.0.1,::1,10.0.0.0/8,192.168.0.0/16,.svc,.cluster.local"
...
</code>
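
After saving the drop-in, restart the service and check (a sketch) that the variables were picked up:

<code>
systemctl restart containerd

systemctl show containerd --property=Environment
</code>
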
==== Renewing certificates ====
  * [[https://weng-albert.medium.com/updating-kubernetes-certificates-easy-peasy-en-139fc07f26c8|Updating Kubernetes Certificates: Easy Peasy! (En)]]
  * [[https://medium.com/@reza.sadriniaa/automatic-kubernetes-certificate-renewal-a-step-by-step-guide-c4320192a74d|Automatic Kubernetes Certificate Renewal: A Step-by-Step Guide]]
<code>
kubeM:~# kubeadm certs check-expiration

kubeM:~# cp -rp /etc/kubernetes /root/old_k8s_config

kubeM:~# kubeadm certs renew all
...
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

kubeM:~# cp /etc/kubernetes/admin.conf /root/.kube/config
</code>
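
Note that kubectl delete pod does not restart static pods; one commonly used approach (a sketch, the pause length is an assumption) is to briefly move their manifests out of the kubelet manifest directory so the kubelet stops and then recreates them:

<code>
kubeM:~# mkdir -p /root/manifests-off

kubeM:~# mv /etc/kubernetes/manifests/*.yaml /root/manifests-off/ && sleep 30 && mv /root/manifests-off/*.yaml /etc/kubernetes/manifests/
</code>
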
===== Basic k8s objects =====

<code>
...
$ kubectl delete deployment my-debian
</code>

==== Manifest ====

  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
...
        app: my-debian
    spec:
      #serviceAccountName: admin-user
      containers:
      - name: my-debian
...
        command: ["/bin/sh"]
        args: ["-c", "while :;do echo -n random-value:;od -A n -t d -N 1 /dev/urandom;sleep 5; done"]
        resources:
          requests:
...
</code><code>
$ kubectl apply -f my-debian-deployment.yaml #--dry-run=client #-o yaml

$ kubectl logs -l app=my-debian -f
...
$ kubectl delete -f my-debian-deployment.yaml
...
$ ### kubectl delete deployment my-webd -n my-ns

mkdir ??webd-k8s/; cd $_

$ cat my-webd-deployment.yaml
...
#        image: server.corpX.un:5000/student/webd:ver1.N
#        image: httpd
#        image: brndnmtthws/nginx-echo-headers
#        args: ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker"]

#        image: gowebd
#        imagePullPolicy: "IfNotPresent"

#        lifecycle:
...
$ kubectl logs -l app=my-webd -n my-ns
(the -f, --tail=2000 and --previous options are available)
</code>
  
=== Managing the replica count and stopping the application ===
<code>
$ kubectl -n my-ns scale deployment my-webd --replicas=3   # 0 stops the application

$ kubectl -n my-ns delete pod/my-webd-NNNNNNNNNN-NNNNN
</code>
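
A related trick: kubectl rollout recreates all pods of the Deployment without touching the replica count (useful, for example, after changing a ConfigMap or Secret):

<code>
$ kubectl -n my-ns rollout restart deployment my-webd

$ kubectl -n my-ns rollout status deployment my-webd
</code>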
  
<code>
...
</code>

=== Finding and deleting pods in a non-running state ===

  * [[https://stackoverflow.com/questions/55072235/how-to-delete-completed-kubernetes-pod|How to delete completed kubernetes pod?]]

<code>
kube1:~# kubectl get pods --field-selector=status.phase!=Running -A -o wide

kube1:~# kubectl delete pod --field-selector=status.phase==Succeeded -A

kube1:~# kubectl delete pod --field-selector=status.phase==Failed -A
</code>
==== Service ====

<code>
...
$ kubectl get endpoints -n my-ns
  or
$ kubectl get endpointslice -n my-ns
</code>
=== NodePort ===
...

<code>
$ kubectl -n metallb-system get all

$ mkdir metallb-system; cd $_

$ cat first-pool.yaml
</code><code>
...
  addresses:
  - 192.168.X.64/28
#  - 192.168.X.60-192.168.X.69
  autoAssign: false
#  autoAssign: true
...
</code><code>
$ kubectl apply -f first-pool.yaml

$ kubectl -n metallb-system get ipaddresspools.metallb.io

$ kubectl get services -A | grep LoadBalancer

...
  
== port-forward ==
  * [[https://www.golinuxcloud.com/kubectl-port-forward/|kubectl port-forward examples in Kubernetes]]
  * [[#Инструмент командной строки kubectl]]

  
  * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]]
  * [[#Работа с готовыми Charts]]

=== Minikube ingress-nginx-controller ===

  * [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]]
  
<code>
$ minikube addons list

$ minikube addons enable ingress
</code>
  
<code>
...
</code>

=== ingress-traefik-controller ===

  * [[#Traefik]] (providers.kubernetesGateway.enabled: false and ingressRoute.dashboard.enabled: false (it requires the IngressRoute CRD))

=== ingress example ===
  
<code>
kube1# kubectl get ingressclasses

kube1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns

kube1# cat my-ingress.yaml
</code><code>
apiVersion: networking.k8s.io/v1
...
#    nginx.ingress.kubernetes.io/canary: "true"
#    nginx.ingress.kubernetes.io/canary-weight: "30"
#    cert-manager.io/issuer: "...-issuer"
#    cert-manager.io/cluster-issuer: "...-issuer"
spec:
  ingressClassName: nginx
#  ingressClassName: traefik
  rules:
  - host: webd.corpX.un
...
        path: /
        pathType: Prefix
#  tls:
#  - hosts:
#    - gowebd.corpX.un
#    - "*.corpX.un"
#    secretName: gowebd-tls
#  - hosts:
#    - webd.corpX.un
#    secretName: webd-tls
</code><code>
kube1# kubectl apply -f my-ingress.yaml -n my-ns

kube1# kubectl get ingress -n my-ns
NAME      CLASS   HOSTS                           ADDRESS                       PORTS   AGE
my-webd   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
...

$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

kube1# ### kubectl delete ingress my-ingress -n my-ns
</code>

<code>
...
</code>
  
==== IngressRoute ====

  * [[#Traefik]]
<code>
kube1:~/traefik# kubectl get ingressclasses

kube1:~/webd-k8s# ###cat my-ingressroute.yaml
</code><code>
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: my-ingressroute
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`htwebd.corpX.un`)
      kind: Rule
      services:
        - name: my-webd
          port: 80
</code>
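
A possible apply-and-check sequence for the manifest above (the my-ns namespace is an assumption, matching the my-webd Service):

<code>
kube1:~/webd-k8s# ###kubectl -n my-ns apply -f my-ingressroute.yaml

kube1:~/webd-k8s# ###kubectl -n my-ns get ingressroutes.traefik.io
</code>
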
==== Gateway API ====

  * https://gateway-api.sigs.k8s.io/guides/getting-started/

<code>
kube1:~# kubectl get gatewayclasses

kube1:~# kubectl get customresourcedefinitions | grep gate
</code>
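
If those queries come back empty, the standard-channel CRDs can be installed straight from the Gateway API releases (the version below is an assumption; check the getting-started guide):

<code>
kube1:~# kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
</code>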

=== Traefik ===

  * https://doc.traefik.io/traefik/getting-started/quick-start-with-kubernetes/

<code>
kube1:~/traefik# helm show values traefik --repo https://traefik.github.io/charts --version 39.0.1 | tee values.yaml.orig

kube1:~/traefik# cat values.yaml
</code><code>
service:
  spec:
    loadBalancerIP: "192.168.X.66"
ingressRoute:
  dashboard:
    enabled: true
    matchRule: Host(`dash-tr.corpX.un`)
    entryPoints:
      - web
providers:
  kubernetesGateway:
    enabled: true
#gateway:
#  listeners:
#    web:
#      namespacePolicy:
#        from: All
</code><code>
kube1:~/traefik# helm template traefik -f values.yaml --repo https://traefik.github.io/charts -n traefik --version 39.0.1

kube1:~/traefik# helm install traefik traefik -f values.yaml --repo https://traefik.github.io/charts -n traefik --version 39.0.1 --create-namespace

kube1:~/traefik# kubectl -n traefik get endpointslices
NAME            ADDRESSTYPE   PORTS       ENDPOINTS     AGE
traefik-j6bwt   IPv4          8000,8443   10.233.87.8   36m
</code>

=== Envoy Gateway ===

  * [[https://gateway.envoyproxy.io/latest/install/install-helm/]]
  * [[https://hub.docker.com/r/envoyproxy/gateway-helm/tags]]

<code>
kube1:~/envoygateway# helm show values oci://docker.io/envoyproxy/gateway-helm --version v1.6.4

kube1:~/envoygateway# helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.6.4 -n envoy-gateway-system --create-namespace

kube1:~/envoygateway# cat envoyproxy.yaml
</code><code>
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-envoy-proxy
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: LoadBalancer
        annotations:
          metallb.universe.tf/loadBalancerIPs: "192.168.X.67"
</code><code>
kube1:~/envoygateway# kubectl -n envoy-gateway-system apply -f envoyproxy.yaml

kube1:~/envoygateway# cat gatewayclass.yaml
</code><code>
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-envoy-proxy
    namespace: envoy-gateway-system
</code><code>
kube1:~/envoygateway# kubectl apply -f gatewayclass.yaml
</code>

=== Gateway ===
<code>
kube1:~/webd-k8s# cat my-gateway.yaml
</code><code>
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
#  gatewayClassName: traefik
#  gatewayClassName: eg
  listeners:
  - name: http
#    port: 8000
#    port: 80
    protocol: HTTP
  - name: https
    hostname: "webd.corpX.un"
    protocol: HTTPS
#    port: 8443
#    port: 443
    tls:
      mode: Terminate
      certificateRefs:
        - kind: Secret
          name: webd-tls
</code>
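
A sketch of applying the Gateway and checking that the chosen controller assigns it an address (the namespace is an assumption):

<code>
kube1:~/webd-k8s# kubectl -n my-ns apply -f my-gateway.yaml

kube1:~/webd-k8s# kubectl -n my-ns get gateway
</code>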

=== HTTPRoute ===
<code>
kube1:~/webd-k8s# cat my-httproute.yaml
</code><code>
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-httproute
spec:
  hostnames:
  - webd.corpX.un
  parentRefs:
#  - name: my-gateway
#  - name: traefik-gateway
#    namespace: traefik
  rules:
  - matches:
    - path:
        type: Exact
        value: /
#    filters:
#    - type: RequestHeaderModifier
#      requestHeaderModifier:
#        add:
#        - name: X-Gateway-ID
#          value: "external-gw-prod"
    backendRefs:
    - name: my-webd
      port: 80
#      weight: 70
#    - name: my-webd2
#      port: 80
#      weight: 30
</code>
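
An end-to-end check sketch: apply the route and query the listener address with the matching Host header (the 192.168.X.66 address and port 8000 come from the Traefik example above):

<code>
kube1:~/webd-k8s# kubectl -n my-ns apply -f my-httproute.yaml

kube1:~/webd-k8s# kubectl -n my-ns get httproute

kube1:~/webd-k8s# curl -H "Host: webd.corpX.un" http://192.168.X.66:8000/
</code>
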
==== Volumes ====

<code>
...
ssh root@kube2 'chmod 777 /opt/local-path-provisioner'
ssh root@kube3 'chmod 777 /opt/local-path-provisioner'
ssh root@kube4 'mkdir /opt/local-path-provisioner'
ssh root@kube4 'chmod 777 /opt/local-path-provisioner'
  
$ ###kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
...
  
(venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml

root@a7818cd3f7c7:/kubespray# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /inventory/inventory.ini
</code>
  * [[https://github.com/longhorn/longhorn]]

<code>
...
</code>

Connecting via [[#kubectl proxy]]

  * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]]

...

  * Take a snapshot
  * Break something (delete a user)

== Stopping the service ==

<code>
...
</code>

  * Volume -> Attach to Host (any) in Maintenance mode, Revert to the snapshot, Detach
  * Start the service

  * Also, if the volume is not in Maintenance mode, you can:
<code>
kube1:~# mount /dev/longhorn/pvc-2057c044-ac3f-4052-92ce-d7f57453a704 /mnt
</code>
  
<code>
...
</code>

==== ConfigMap, Secret ====

=== ConfigMap for environment variables ===

<code>
kube1:~/gowebd-k8s# cat env-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
    name: env-config
data:
    SECRET: strongpassword
</code><code>
kube1:~/gowebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        image: ...
        envFrom:
        - configMapRef:
            name: env-config
...
</code>
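
To confirm (a sketch) that the pods actually received the variable:

<code>
kube1:~/gowebd-k8s# kubectl -n my-ns apply -f env-config.yaml,my-webd-deployment.yaml

kube1:~/gowebd-k8s# kubectl -n my-ns exec deploy/my-webd -- env | grep SECRET
</code>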

=== ConfigMap for a configuration file ===
<code>
server# scp /etc/pywebd/* kube1:/tmp/
...

kube1:~/pywebd-k8s# kubectl -n my-ns get configmaps
</code>
=== Secret for a key and certificate ===
<code>
kube1:~/pywebd-k8s# kubectl create secret tls pywebd-tls --key /tmp/pywebd.key --cert /tmp/pywebd.crt --dry-run=client -o yaml | tee my-webd-secret-tls.yaml
...

kube1:~/pywebd-k8s# kubectl -n my-ns get secrets
</code>
  
=== Secret for a Docker registry ===

<code>
kube1:~/pywebd-k8s#  kubectl create secret docker-registry regcred --docker-server=server.corpX.un:5000 --docker-username=student --docker-password='strongpassword' -n my-ns
...
</code>
  
==== Multi-container pod example ====

  
<code>
# ###wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz
# wget https://get.helm.sh/helm-v4.0.4-linux-amd64.tar.gz

# tar -zxvf helm-*-linux-amd64.tar.gz
...
#    use-forwarded-headers: true
#    allow-snippet-annotations: true
#  service:
#    type: LoadBalancer
#    loadBalancerIP: "192.168.X.64"
</code><code>
$ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml

$ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace

$ kubectl get all -n ingress-nginx

$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers
...
</code>

==== Deploying your own application ====

  * [[Универсальный Helm-чарт|Helm: from the basics to a universal chart]]

  * [[https://helm.sh/docs/chart_template_guide/getting_started/|chart_template_guide getting_started]]
...
#        - gowebd.corpX.un
...
#env:
  #APWEBD_HOSTNAME: "apwebd.corpX.un"
  #KEYCLOAK_HOSTNAME: "keycloak.corpX.un"
  #REALM_NAME: "corpX"
  #SECRET: strongpassword
</code><code>
$ less webd-chart/templates/deployment.yaml
</code><code>
...
          imagePullPolicy: {{ .Values.image.pullPolicy }}
#          {{- with .Values.env }}
#          env:
#          {{- range $key, $val := . }}
#          - name: {{$key}}
#            value: {{$val|quote}}
#          {{- end}}
#          {{- end}}
...
</code><code>
...
</code>
  
==== Working with sensitive data (secrets) ====

  * [[https://habr.com/ru/companies/ru_mts/articles/656351/|Hiding secrets in the repository using helm-secrets, sops, vault and envsubst]]
  * [[https://github.com/jkroepke/helm-secrets]]

  * [[Mozilla Sops]]

<code>
kube1#
helm plugin install https://github.com/jkroepke/helm-secrets/releases/download/v4.7.4/secrets-4.7.4.tgz --verify=false
helm plugin install https://github.com/jkroepke/helm-secrets/releases/download/v4.7.4/secrets-getter-4.7.4.tgz --verify=false

kube1:~/keycloak# helm template my-keycloak -f secrets://values.yaml oci://registry-1.docker.io/bitnamicharts/keycloak -n my-keycloak-ns --version $KC_HC_VER | grep password

kube1:~/keycloak# helm upgrade my-keycloak -i -f secrets://values.yaml oci://registry-1.docker.io/bitnamicharts/keycloak -n my-keycloak-ns --version $KC_HC_VER
</code>
==== Working with your own repository ====
  
<code>
...
~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz

~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push #--verify=false

~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd
...

kube1:~/gitlab-runner# helm show values gitlab/gitlab-runner --version 0.70.5 | tee values.yaml

kube1:~/gitlab-runner# ###curl https://val.bmstu.ru/unix/Git/gitlab-runner-values.yaml | tee values.yaml

kube1:~/gitlab-runner# cat values.yaml
...

kube1:~/gitlab-runner# kubectl get all -n gitlab-runner

kube1:~/gitlab-runner# ### kubectl -n gitlab-runner get serviceaccounts
kube1:~/gitlab-runner# ### kubectl -n gitlab-runner get role gitlab-runner -o yaml

kube1:~/gitlab-runner# ### helm -n gitlab-runner uninstall gitlab-runner
</code>
  
===== Authentication and authorization =====

==== Using certificates ====

  * [[Пакет OpenSSL#Создание приватного ключа пользователя]] and a certificate signing request (O=cko)
<code>
user1@client1:~$ cat user1.req | base64 -w0
</code>
  * [[https://stackoverflow.com/questions/75735249/what-do-the-values-in-certificatesigningrequest-spec-usages-mean|What do the values in CertificateSigningRequest.spec.usages mean?]]
<code>
kube1:~/users# kubectl explain csr.spec.usages

kube1:~/users# cat user1.req.yaml
</code><code>
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: user1
spec:
  request: LS0t...S0tCg==
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 8640000  # 100 * one day
  usages:
#  - digital signature
#  - key encipherment
  - client auth
</code><code>
kube1:~/users# kubectl apply -f user1.req.yaml

kube1:~/users# kubectl describe csr/user1

kube1:~/users# kubectl certificate approve user1

kube1:~/users# kubectl get csr

kube1:~/users# kubectl get csr/user1 -o yaml

kube1:~/users# kubectl get csr/user1 -o jsonpath="{.status.certificate}" | base64 -d | tee user1.crt

kube1:~/users# scp user1.crt user1@client1:
kube1:~/users# scp /etc/kubernetes/ssl/ca.crt user1@client1:

kube1:~/users# ###kubectl delete csr user1
</code>

==== Using a ServiceAccount ====

  * [[Система Kubernetes#Kubernetes Dashboard]]

==== An overview of cluster resources and the operations on them ====

<code>
kube1:~# kubectl api-resources -o wide | less
APIVERSION = <group> + "/" + <version of the API>
</code>

==== Using Role and RoleBinding ====

=== Granting access to services/proxy in a Namespace ===

  * Cloud native distributed block storage for Kubernetes: [[Система Kubernetes#longhorn]]

<code>
kube1:~/users# cat lh-svc-proxy-role.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: longhorn-system
  name: lh-svc-proxy-role
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  verbs: ["get"]
</code><code>
kube1:~/users# cat user1-lh-svc-proxy-rolebinding.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user1-lh-svc-proxy-rolebinding
  namespace: longhorn-system
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: lh-svc-proxy-role
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml

student@client1:~$ kubectl proxy

student@client1:~$ curl http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/

student@client1:~$ curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

kube1:~/users# kubectl delete -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml
</code>
=== Granting full access to a Namespace ===

<code>
kube1:~/users# cat ns-full-access.yaml
</code><code>
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ns-full-access
  namespace: my-ns
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ns-full-access-rolebinding
  namespace: my-ns
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: cko
  #kind: User
  #name: user1
roleRef:
  kind: Role
  name: ns-full-access
  apiGroup: rbac.authorization.k8s.io
#roleRef:
  #apiGroup: rbac.authorization.k8s.io
  #kind: ClusterRole
  #name: admin
</code><code>
kube1:~/users# kubectl apply -f ns-full-access.yaml
</code>
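
Impersonation gives a quick check of what the binding actually allows:

<code>
kube1:~/users# kubectl auth can-i get pods --as=user1 --as-group=cko -n my-ns

kube1:~/users# kubectl auth can-i get pods --as=user1 --as-group=cko -n default
</code>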

=== Finding the roles granted to an account ===
<code>
kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "user1")'

kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "cko")'

kube1:~/users# kubectl delete -f ns-full-access.yaml
  OR
kube1:~/users# kubectl -n my-ns delete rolebindings ns-full-access-rolebinding
kube1:~/users# kubectl -n my-ns delete role ns-full-access
</code>
  
==== Using ClusterRole and ClusterRoleBinding ====

=== Granting access to services/port-forward in the cluster ===

<code>
kube1:~/users# cat svc-pfw-role.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
#kind: Role
metadata:
  name: svc-pfw-role
#  namespace: my-pgcluster-ns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
</code><code>
kube1:~/users# cat user1-svc-pfw-rolebinding.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
#kind: RoleBinding
metadata:
  name: user1-svc-pfw-rolebinding
#  namespace: my-pgcluster-ns
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
#  kind: Role
  name: svc-pfw-role
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml

student@client1:~$ kubectl port-forward -n my-pgcluster-ns services/my-pgcluster-rw 5432:5432

student@client1:~$ psql postgres://keycloak:strongpassword@127.0.0.1:5432/keycloak
</code>

  * Access via proxy to the [[Система Kubernetes#Kubernetes Dashboard]]

<code>
kube1:~/users# kubectl delete -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml
</code>
=== Granting full access to the Kubernetes cluster ===

<code>
kube1:~/users# kubectl get clusterroles | less

kube1:~/users# kubectl get clusterrole cluster-admin -o yaml

kube1:~/users# kubectl get clusterrolebindings | less

kube1:~/users# kubectl get clusterrolebindings kubeadm:cluster-admins -o yaml

kube1:~/users# kubectl get clusterrolebindings cluster-admin -o yaml

kube1:~/users# cat user1-cluster-admin.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user1-cluster-admin
subjects:
- kind: User
  name: user1
#  name: user1@corp13.un
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code><code>
kube1:~/users# kubectl apply -f user1-cluster-admin.yaml

kube1:~/users# cat freeipa-kube-admin.yaml
</code><code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: freeipa-kube-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: /freeipa-kube-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
</code><code>
student@client1:~$ kubectl get nodes
</code>

=== Finding the cluster roles granted to an account or ServiceAccount ===
<code>
kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "kubeadm:cluster-admins")'

kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "user1")'

kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "default")'

kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "admin-user")'

kube1:~/users# kubectl delete -f user1-cluster-admin.yaml
  OR
kube1:~/users# kubectl delete clusterrolebindings user1-cluster-admin
</code>

===== Horizontal Pod Autoscaler =====

  * [[#Metrics Server]]

<code>
kube1:~/webd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
</code><code>
kube1:~/webd-k8s# cat my-webd-hpa.yaml
</code><code>
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-webd-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-webd
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
</code><code>
kube1:~/webd-k8s# kubectl -n my-ns get hpa
</code>
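
To watch the autoscaler react, one can generate load against the service, as in the official HPA walkthrough (the service name and namespace are the assumptions of this page):

<code>
kube1:~/webd-k8s# kubectl -n my-ns run load-generator --rm -it --image=busybox:1.28 -- /bin/sh -c "while true; do wget -q -O- http://my-webd; done"

kube1:~/webd-k8s# kubectl -n my-ns get hpa my-webd-hpa --watch
</code>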

===== cert-manager =====

  * [[Letsencrypt Certbot]]
  * [[https://cert-manager.io/docs/installation/|cert-manager Installation]]
  * [[https://cert-manager.io/docs/tutorials/acme/nginx-ingress/|cert-manager Securing NGINX-ingress]]

  * [[https://debuntu.ru/manuals/kubernetes/tls-kerberos-in-kubernetes/cert-manager_and_all_about_it/installing-configuring-cert-manager/|debuntu.ru: Installing and configuring cert-manager]]
  * [[https://habr.com/ru/companies/nubes/articles/808035/|Automatic SSL certificate issuance using Kubernetes and FreeIPA]]
  * [[https://cert-manager.io/docs/configuration/acme/#private-acme-servers|Private ACME Servers]]

  * FreeIPA: [[Решение FreeIPA#Поддержка ACME]]
  * FreeIPA: [[Решение FreeIPA#Динамический DNS]]

<code>
kube1:~# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml

kube1:~# kubectl -n cert-manager get all

kube1:~/cert-manager# kubectl create secret generic cert-manager-tsig-secret --from-literal=tsig-secret-key="s751+e/OkNNNNNN=" -n cert-manager

kube1:~/cert-manager# cat freeipa-dns-clusterissuer.yaml
</code><code>
apiVersion: cert-manager.io/v1
#kind: Issuer
kind: ClusterIssuer
metadata:
  #name: letsencrypt-staging-clusterissuer
  #name: letsencrypt-prod-clusterissuer
  #name: freeipa-clusterissuer
  name: freeipa-dns-clusterissuer
spec:
  acme:
    #server: https://acme-staging-v02.api.letsencrypt.org/directory
    #server: https://acme-v02.api.letsencrypt.org/directory
    #profile: tlsserver

    server: https://server.corpX.un/acme/directory
    caBundle: # cat /etc/ipa/ca.crt | base64 -w0

    email: student@corpX.un
    privateKeySecretRef:
      name: freeipa-dns-clusterissuer-secret
    solvers:
#    - http01:
#        ingress:
#          ingressClassName: nginx
    - dns01:
        rfc2136:
          nameserver: 192.168.X.10
          tsigKeyName: cert-manager
          tsigAlgorithm: HMACSHA256
          tsigSecretSecretRef:
            name: cert-manager-tsig-secret
            key: tsig-secret-key
</code><code>
kube1:~/cert-manager# kubectl apply -f freeipa-dns-clusterissuer.yaml #-n my-...

kube1:~/cert-manager# kubectl get secret -n cert-manager #-n my-...

kube1:~/cert-manager# kubectl get clusterissuers.cert-manager.io
kube1:~/cert-manager# #kubectl get issuers.cert-manager.io #-n my-...
NAME                    READY   AGE
...issuer               True    42s
</code>

  * Certificate issuance can be triggered in two ways:

Method 1: annotations in [[#ingress example]]

Method 2 (used when the site has no ingress and thus nowhere to set the annotations, or for rfc2136):
<code>
kube1:~/gitlab# cat my-certificate.yaml
</code><code>
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: gitlab-cert
spec:
  secretName: gitlab-tls
  dnsNames:
    #- siteN.mgtu.ru
    #- keycloak.corpX.un
    - gitlab.corpX.un
  issuerRef:
    name: freeipa-dns-clusterissuer
    kind: ClusterIssuer
    #kind: Issuer
  privateKey:
    rotationPolicy: Always
</code><code>
kube1:~/gitlab# kubectl apply -f my-certificate.yaml -n my-gitlab-ns

kube1:~/gitlab# kubectl get certificate,secrets -n my-gitlab-ns

kube1:~/gitlab# kubectl events -n my-gitlab-ns
...
Certificate fetched from issuer successfully

kube1:~/gitlab# kubectl get secret gitlab-tls -o yaml -n my-gitlab-ns
</code>
==== Adding a corporate root certificate to the cluster ====
<code>
server#

bash -c '
scp /opt/freeipa-data/etc/ipa/ca.crt kube1:/usr/local/share/ca-certificates/
ssh kube1 update-ca-certificates
ssh kube1 systemctl restart containerd
scp /opt/freeipa-data/etc/ipa/ca.crt kube2:/usr/local/share/ca-certificates/
ssh kube2 update-ca-certificates
ssh kube2 systemctl restart containerd
scp /opt/freeipa-data/etc/ipa/ca.crt kube3:/usr/local/share/ca-certificates/
ssh kube3 update-ca-certificates
ssh kube3 systemctl restart containerd
scp /opt/freeipa-data/etc/ipa/ca.crt kube4:/usr/local/share/ca-certificates/
ssh kube4 update-ca-certificates
ssh kube4 systemctl restart containerd
'
</code>
 + 
 +==== k9s ==== 
 + 
 +  * [[https://​habr.com/​ru/​companies/​flant/​articles/​524196/​|Обзор k9s — продвинутого терминального интерфейса для ​Kubernetes]] 
 +  * [[https://​notes.kodekloud.com/​docs/​Kubernetes-Troubleshooting-for-Application-Developers/​Prerequisites/​k9s-Walkthrough|k9s Walkthrough]] 
 + 
 +<​code>​ 
 +kube1# wget https://​github.com/​derailed/​k9s/​releases/​download/​v0.50.16/​k9s_linux_amd64.deb 
 + 
 +kube1# dpkg -i k9s_linux_amd64.deb 
 +</​code>​ 
 + 
 +==== Kubernetes Dashboard ​==== 
 + 
 +  * https://​www.bytebase.com/​blog/​top-open-source-kubernetes-dashboard/​
  
   * https://​kubernetes.io/​docs/​tasks/​access-application-cluster/​web-ui-dashboard/​   * https://​kubernetes.io/​docs/​tasks/​access-application-cluster/​web-ui-dashboard/​
  * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

=== Installing the dashboard ===

  * For resource-consumption information you can install the [[#Metrics Server]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
</code>

=== Access via proxy ===

<code>
cmder$ kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

=== Access via port-forward ===
<code>
$ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
</code>

  * https://localhost:8443

=== Creating a ServiceAccount and binding it to a ClusterRole ===
<code>
$ cat dashboard-sa-admin-user.yaml
 --- ---
Line 2069: Line 2828:
   name: admin-user   name: admin-user
   namespace: kubernetes-dashboard   namespace: kubernetes-dashboard
 +  #namespace: default
 --- ---
 apiVersion: rbac.authorization.k8s.io/​v1 apiVersion: rbac.authorization.k8s.io/​v1
Line 2082: Line 2842:
   name: admin-user   name: admin-user
   namespace: kubernetes-dashboard   namespace: kubernetes-dashboard
----+  #namespace: default 
 +</​code><​code>​ 
 +$ kubectl apply -f dashboard-sa-admin-user.yaml 
 + 
 +$ kubectl auth can-i get pods --as=system:​serviceaccount:​kubernetes-dashboard:​admin-user 
 +</​code>​ 
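
To list everything the new account is allowed to do, rather than checking a single verb/resource pair:
<code>
$ kubectl auth can-i --list --as=system:serviceaccount:kubernetes-dashboard:admin-user
</code>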

=== Creating a temporary token ===

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/]]
  * [[https://www.jwt.io/|JSON Web Token (JWT) Debugger]]

<code>
$ kubectl create token admin-user -n kubernetes-dashboard #--duration=1h

$ ###ps aux | grep kube-apiserver | grep service-account-key-file
$ ###cat /etc/kubernetes/ssl/sa.pub
$ ###echo ... | jq -R 'split(".") | .[1] | @base64d | fromjson'
$ ###echo ... | awk -F'.' '{print $2}' | base64 -d | jq -r '.exp | todate'
</code>
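
The token can also be checked against the API server directly, outside the dashboard; a sketch assuming the API server address used earlier on this page:
<code>
$ TOKEN=$(kubectl create token admin-user -n kubernetes-dashboard)
$ curl -k -H "Authorization: Bearer $TOKEN" https://192.168.X.221:6443/api/v1/namespaces
</code>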

=== Creating a long-lived token ===
<code>
$ cat dashboard-secret-for-token.yaml
</code><code>
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  #namespace: default
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
</code><code>
$ kubectl apply -f dashboard-secret-for-token.yaml

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d ; echo
</code>
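
The long-lived token stays valid while its Secret exists; deleting the Secret revokes it:
<code>
$ kubectl delete secret admin-user -n kubernetes-dashboard
</code>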
===== Monitoring =====

==== Metrics Server ====

  * [[https://github.com/kubernetes-sigs/metrics-server/releases]]
  * [[https://medium.com/@cloudspinx/fix-error-metrics-api-not-available-in-kubernetes-aa10766e1c2f|Fix “error: Metrics API not available” in Kubernetes]]

<code>
kube1# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.8.1/components.yaml

kube1# kubectl patch deployment metrics-server -n kube-system --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
</code>
or
<code>
kube1:~/metrics-server# curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml | tee metrics-server-components.yaml
</code><code>
kube1:~/metrics-server# kubectl apply -f metrics-server-components.yaml
</code>
Checks
<code>
kube1# kubectl get pods -A | grep metrics-server

kube1# kubectl logs -n kube-system -l k8s-app=metrics-server

kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>
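
With the Metrics API available, HorizontalPodAutoscaler can consume it; a minimal sketch (the deployment name my-webd and namespace my-ns are placeholders):
<code>
kube1# kubectl autoscale deployment my-webd --cpu-percent=50 --min=1 --max=3 -n my-ns
kube1# kubectl get hpa -n my-ns
</code>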
  
==== kube-state-metrics ====

  * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]]
  * ... alerts with info on failed pods ...

<code>
kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kube1# helm repo update
kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace

kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics
</code>
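
Alerts about failed pods are typically built on the pod phase metrics this exporter provides; to eyeball them:
<code>
kube1# curl -s kube-state-metrics.vm.svc.cluster.local:8080/metrics | grep kube_pod_status_phase
</code>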
===== Debugging, troubleshooting =====

===== Additional materials =====

==== Additional materials on Kubespray ====

=== Configuring registry-mirrors for Kubespray ===
<code>
~# cat inventory/sample/group_vars/all/docker.yml

~/kubespray# cat inventory/mycluster/group_vars/all/docker.yml
</code><code>
...
</code><code>
~# cat inventory/sample/group_vars/all/containerd.yml

~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
</code>
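
For reference, the mirror variables in these files take roughly the following shape; a sketch based on the Kubespray sample group_vars (the mirror URL is an assumption, substitute your own):
<code>
docker_registry_mirrors:
  - https://mirror.gcr.io             # example mirror

containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror.gcr.io   # example mirror
        capabilities: ["pull", "resolve"]
        skip_verify: false
</code>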

=== Adding insecure_registries via Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
containerd_insecure_registries:
  "server.corpX.un:5000": "http://server.corpX.un:5000"
containerd_registry_auth:
  - registry: server.corpX.un:5000
    username: student
    password: Pa$$w0rd
...
</code><code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
user    46m37.151s

# less /etc/containerd/config.toml
</code>

=== Managing addons via Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/k8s_cluster/addons.yml
</code><code>
...
helm_enabled: true
...
ingress_nginx_enabled: true
ingress_nginx_host_network: true
...
</code>
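
Changed addon flags take effect only after the playbook is re-run against the inventory (same command as in the previous section):
<code>
~/kubespray# ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
</code>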

==== Kustomize ====

  * [[https://kustomize.io/]]
  * [[https://github.com/kubernetes-sigs/kustomize/]]

<code>
# curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
# mv kustomize /usr/local/bin/

kube1:~/my-pgcluster# cat kustomization.yaml
</code><code>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - my-pgcluster.yaml
  - keycloak-db-secret.yaml
#generators:
#  - keycloak-db-secret-generator.yaml
</code><code>
kube1:~/my-pgcluster# kustomize build . | kubectl apply -f - -n my-pgcluster-ns
</code>
  * [[Mozilla Sops]]
  * [[https://github.com/viaduct-ai/kustomize-sops]]
<code>
kube1:~/my-pgcluster# sops -e -i keycloak-db-secret.yaml

# source <(curl -s https://raw.githubusercontent.com/viaduct-ai/kustomize-sops/master/scripts/install-ksops-archive.sh)
# which ksops

kube1:~/my-pgcluster# cat keycloak-db-secret-generator.yaml
</code><code>
apiVersion: viaduct.ai/v1
kind: ksops
metadata:
  name: keycloak-db-secret-generator
  annotations:
    config.kubernetes.io/function: |
        exec:
          path: ksops
files:
  - keycloak-db-secret.yaml
</code><code>
kube1:~/my-pgcluster# kustomize build --enable-alpha-plugins --enable-exec . | kubectl apply -f - -n my-pgcluster-ns
</code>
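
To review the decrypted Secret before applying (assumes the sops key used for encryption is available):
<code>
kube1:~/my-pgcluster# sops -d keycloak-db-secret.yaml
</code>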

==== NetworkPolicy ====

  * [[https://gitlab.com/k11s-os/k8s-lessons/-/tree/main/NetworkPolicy]]
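
For reference, a minimal default-deny ingress policy (the namespace my-ns is a placeholder):
<code>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-ns
spec:
  podSelector: {}      # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress          # no ingress rules listed, so all ingress is denied
</code>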

==== SecurityContext ====

  * [[https://gitlab.com/k11s-os/k8s-lessons/-/tree/main/SecurityContext]]
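
For reference, a minimal pod with hardened security contexts (names and values are illustrative):
<code>
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:            # pod-level: applies to all containers
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      securityContext:        # container-level: extends the pod-level settings
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
</code>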
  
==== Installing kubelet kubeadm kubectl on ubuntu20 ====

==== kompose ====

  * https://kompose.io/
  * [[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes|What's the difference between Docker Compose and Kubernetes?]]
  * [[https://loft.sh/blog/docker-compose-to-kubernetes-step-by-step-migration/|Docker Compose to Kubernetes: Step-by-Step Migration]]

<code>
# curl -L https://github.com/kubernetes/kompose/releases/download/v1.37.0/kompose-linux-amd64 -o /usr/local/bin/kompose

# chmod +x /usr/local/bin/kompose
</code>
  
  * [[Технология Docker#docker-compose]]

<code>
~/webd$ kompose convert

~/webd$ ls *yaml

~/webd$ kubectl apply -f sftp-deployment.yaml,vol1-persistentvolumeclaim.yaml,webd-service.yaml,sftp-service.yaml,webd-deployment.yaml -n my-ns
</code>
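
kompose can also read an explicitly named compose file and write the manifests into a separate directory (-f and -o are standard kompose flags):
<code>
~/webd$ kompose convert -f docker-compose.yml -o k8s/
</code>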
  