====== The Kubernetes System ======

  * [[https://kubernetes.io/ru/docs/home/|Kubernetes documentation (in Russian)]]

  * [[https://youtu.be/sLQefhPfwWE|YouTube: an introduction to Kubernetes using Minikube]]
  * [[https://habr.com/ru/company/domclick/blog/577964/|The ultimate guide to building CI/CD in GitLab with auto-deploy to Kubernetes on bare metal for just $514 a year ( ͡° ͜ʖ ͡°)]]
  * [[https://habr.com/ru/company/flant/blog/513908/|Full-fledged Kubernetes from scratch on a Raspberry Pi]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|The differences between Docker, containerd, CRI-O and runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]

  * [[https://habr.com/ru/company/vk/blog/542730/|11 PRO-level fails when adopting Kubernetes and how to avoid them]]

  * [[https://github.com/dgkanatsios/CKAD-exercises|A set of exercises that helped me prepare for the Certified Kubernetes Application Developer exam]]

  * [[https://www.youtube.com/watch?v=XZQ7-7vej6w|Our experience with Kubernetes in small projects / Dmitry Stolyarov (Flant)]]

  * [[https://habr.com/ru/companies/aenix/articles/541118/|Breaking and fixing Kubernetes]]
  
===== The kubectl command-line tool =====
  
  * [[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands]]
  * [[https://kubernetes.io/ru/docs/reference/kubectl/cheatsheet/|kubectl cheat sheet (in Russian)]]
  
==== Installation ====

=== Linux ===
<code>
# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

# chmod +x kubectl

# mv kubectl /usr/local/bin/
</code>
  
=== Windows ===

  * [[https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/|Install and Set Up kubectl on Windows]]

<code>
cmder> curl -LO "https://dl.k8s.io/release/v1.29.0/bin/windows/amd64/kubectl.exe"

cmder> mv kubectl.exe /usr/bin
</code>
  
==== Connecting to the cluster ====

<code>
mkdir ~/.kube/

scp root@192.168.X.2N1:.kube/config ~/.kube/

cat ~/.kube/config
</code><code>
...
    server: https://192.168.X.2N1:6443
...
</code><code>
kubectl get all -o wide --all-namespaces
kubectl get all -o wide -A
</code>
=== Setting up shell completion ===
<code>
kube1:~# less /etc/bash_completion.d/kubectl.sh

  or

$ cat ~/.profile
</code><code>
#...
source <(kubectl completion bash)

alias k=kubectl
complete -F __start_kubectl k
#...
</code>

=== Connecting to another cluster ===

<code>
gitlab-runner@server:~$ scp root@kube1:.kube/config .kube/config_kube1

gitlab-runner@server:~$ cat .kube/config_kube1
</code><code>
...
    .kube/config_kube1
...
</code><code>
gitlab-runner@server:~$ export KUBECONFIG=~/.kube/config_kube1

gitlab-runner@server:~$ kubectl get nodes
</code>
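As an alternative to switching KUBECONFIG back and forth, kubectl can merge several config files and switch contexts; a minimal sketch (the context name below is an assumption, take the real one from the get-contexts output):

<code>
gitlab-runner@server:~$ export KUBECONFIG=~/.kube/config:~/.kube/config_kube1

gitlab-runner@server:~$ kubectl config get-contexts

gitlab-runner@server:~$ kubectl config use-context kubernetes-admin@cluster.local
</code>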

===== Installing minikube =====
  
  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
  
<code>
root@server:~# apt install -y curl wget apt-transport-https

root@server:~# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

root@server:~# mv minikube-linux-amd64 /usr/local/bin/minikube

root@server:~# chmod +x /usr/local/bin/minikube
</code>

  * Granting Docker rights to unprivileged users: [[Технология Docker#Предоставление прав непривилегированным пользователям]]

<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    29m8.320s
...

gitlab-runner@server:~$ minikube status

gitlab-runner@server:~$ minikube ip
</code>

==== minikube kubectl ====
<code>
gitlab-runner@server:~$ minikube kubectl -- get pods -A

gitlab-runner@server:~$ cat ~/.profile
</code><code>
#...
# does not work in gitlab-ci
alias kubectl='minikube kubectl --'
#...
</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>
  
or

  * [[#The kubectl command-line tool]]

==== minikube addons list ====
<code>
gitlab-runner@server:~$ minikube addons list
  
gitlab-runner@server:~$ minikube addons configure registry-creds
...
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: http://server.corpX.un:5000
-- Enter docker registry username: student
-- Enter docker registry password:
...

gitlab-runner@server:~$ minikube addons enable registry-creds
</code>
  

==== minikube start stop delete ====
<code>
gitlab-runner@server:~$ ###minikube stop

gitlab-runner@server:~$ ### minikube delete
gitlab-runner@server:~$ ### rm -rv .minikube/

gitlab-runner@server:~$ ###minikube start
</code>
===== A Kubernetes cluster =====
  
==== Deployment with kubeadm ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/|Installing kubeadm]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/|kubernetes.io Creating a cluster with kubeadm]]

  * [[https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/|How to Install Kubernetes Cluster on Ubuntu 22.04]]
  * [[https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/|How to Install Kubernetes Cluster on Debian 12/11]]

  * [[https://www.baeldung.com/ops/kubernetes-cluster-components|Kubernetes Cluster Components]]

=== Preparing the nodes ===
  
<code>
node1# ssh-keygen

node1# ssh-copy-id node2
node1# ssh-copy-id node3

node1# bash -c '
swapoff -a
ssh node2 swapoff -a
ssh node3 swapoff -a
'

node1# bash -c '
sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node2 sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node3 sed -i"" -e "/swap/s/^/#/" /etc/fstab
'
</code>

=== Installing the software ===

=== !!! Check with the instructor !!! ===

== Installing and configuring the CRI ==
<code>
node1_2_3# apt-get install -y docker.io

Check: if the output of
node1# containerd config dump | grep SystemdCgroup
is not
           SystemdCgroup = true
then run the following four commands:

bash -c 'mkdir -p /etc/containerd/
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
'
bash -c 'containerd config default > /etc/containerd/config.toml
ssh node2 "containerd config default > /etc/containerd/config.toml"
ssh node3 "containerd config default > /etc/containerd/config.toml"
'
bash -c 'sed -i "s/SystemdCgroup \= false/SystemdCgroup \= true/g" /etc/containerd/config.toml
ssh node2 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
ssh node3 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
'
bash -c 'service containerd restart
ssh node2 service containerd restart
ssh node3 service containerd restart
'
</code>
== Adding the repository and installing the packages ==
<code>
bash -c 'mkdir -p /etc/apt/keyrings
ssh node2 mkdir -p /etc/apt/keyrings
ssh node3 mkdir -p /etc/apt/keyrings
'

bash -c 'curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
ssh node2 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
ssh node3 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
'

bash -c 'echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
ssh node2 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
ssh node3 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
'

bash -c 'apt-get update && apt-get install -y kubelet kubeadm kubectl
ssh node2 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
ssh node3 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
'
Execution time: about 2 minutes
</code>
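The upstream kubeadm installation guide additionally recommends pinning these packages so that an unattended upgrade does not break the cluster:

<code>
bash -c 'apt-mark hold kubelet kubeadm kubectl
ssh node2 apt-mark hold kubelet kubeadm kubectl
ssh node3 apt-mark hold kubelet kubeadm kubectl
'
</code>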

=== Initializing the master ===

  * [[https://stackoverflow.com/questions/70416935/create-same-master-and-working-node-in-kubenetes|Create same master and working node in kubenetes]]

<code>
root@node1:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.X.201
Execution time: about 3 minutes

root@node1:~# mkdir -p $HOME/.kube

root@node1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</code>
=== Configuring the network ===
<code>
root@nodeN:~# lsmod | grep br_netfilter
</code>
  * Kernel modules: [[Управление ядром и модулями в Linux#Модули ядра]]
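If the module is missing, a minimal sketch of the standard kubeadm prerequisites (load br_netfilter and enable the bridged/forwarded traffic sysctls on every node):

<code>
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sysctl --system
</code>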
<code>
root@node1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code>
=== Health check ===
<code>
root@node1:~# kubectl get pod --all-namespaces -o wide

root@node1:~# kubectl get --raw='/readyz?verbose'
</code>

=== Joining workers ===

<code>
root@node2_3:~# curl -k https://node1:6443/livez?verbose

root@node2_3:~# kubeadm join 192.168.X.201:6443 --token NNNNNNNNNNNNNNNNNNNN \
        --discovery-token-ca-cert-hash sha256:NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

root@node2_3:~# curl -sSL http://127.0.0.1:10248/healthz

root@node1:~# kubeadm token list

root@node1:~# kubeadm token create --print-join-command
</code>
=== Checking cluster state ===
<code>
root@node1:~# kubectl cluster-info

root@node1:~# kubectl get nodes -o wide

root@node1:~# kubectl describe node node2
</code>

=== Removing a node ===

  * [[https://stackoverflow.com/questions/56064537/how-to-remove-broken-nodes-in-kubernetes|How to remove broken nodes in Kubernetes]]

<code>
$ kubectl cordon kube3

$ time kubectl drain kube3 #--ignore-daemonsets --delete-emptydir-data --force

$ kubectl delete node kube3
</code>

=== Removing the cluster ===

  * [[https://stackoverflow.com/questions/44698283/how-to-completely-uninstall-kubernetes|How to completely uninstall kubernetes]]

<code>
node1# bash -c '
kubeadm reset
ssh node2 kubeadm reset
ssh node3 kubeadm reset
'
</code>

=== Configuring access to an insecure private registry ===
=== !!! Check with the instructor !!! ===

  * [[https://github.com/containerd/containerd/issues/4938|Unable to pull image from insecure registry, http: server gave HTTP response to HTTPS client #4938]]
  * [[https://github.com/containerd/containerd/issues/3847|Containerd cannot pull image from insecure registry #3847]]

  * [[https://mrzik.medium.com/how-to-configure-private-registry-for-kubernetes-cluster-running-with-containerd-cf74697fa382|How to Configure Private Registry for Kubernetes cluster running with containerd]]
  * [[https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header|containerd/docs/PLUGINS.md migrate config v1 to v2]]

== containerd ==

<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# cat /etc/containerd/config.toml
</code><code>
...
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."server.corpX.un:5000"]
          endpoint = ["http://server.corpX.un:5000"]
...
</code><code>
node1# bash -c '
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
scp /etc/containerd/config.toml node2:/etc/containerd/config.toml
scp /etc/containerd/config.toml node3:/etc/containerd/config.toml
systemctl restart containerd
ssh node2 systemctl restart containerd
ssh node3 systemctl restart containerd
'

root@nodeN:~# containerd config dump | less
</code>
  
Verification:

<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd
</code>

==== Deployment with Kubespray ====

  * [[https://github.com/kubernetes-sigs/kubespray]]
  * [[https://habr.com/ru/companies/domclick/articles/682364/|The most detailed guide to installing a highly available (almost ಠ ͜ʖ ಠ ) Kubernetes cluster]]
  * [[https://habr.com/ru/companies/X5Tech/articles/645651/|A bare-metal Kubernetes cluster on your local computer]]
  * [[https://internet-lab.ru/k8s_kubespray|Kubernetes: installation with Kubespray]]
  * [[https://www.mshowto.org/en/ubuntu-sunucusuna-kubespray-ile-kubernetes-kurulumu.html|Installing Kubernetes on Ubuntu Server with Kubespray]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubespray/|Installing Kubernetes with Kubespray]]

=== Preparing for a Kubespray deployment ===

  * Python virtual environment: [[Язык программирования Python#Виртуальная среда Python]]
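A minimal sketch of creating and activating the virtual environment used in the prompts below (details are on the linked page):

<code>
root@server:~# apt install -y python3-venv git

root@server:~# python3 -m venv venv1

root@server:~# source venv1/bin/activate
</code>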
  
<code>
(venv1) server# ssh-keygen

(venv1) server# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4;

(venv1) server# git clone https://github.com/kubernetes-sigs/kubespray

(venv1) server# cd kubespray/

(venv1) server:~/kubespray# git tag -l

(venv1) server:~/kubespray# git checkout tags/v2.26.0
  or
(venv1) server:~/kubespray# git checkout tags/v2.27.0

(venv1) server:~/kubespray# time pip3 install -r requirements.txt

(venv1) server:~/kubespray# cp -rvfpT inventory/sample inventory/mycluster

(venv1) server:~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
    kube1:
    kube2:
    kube3:
    kube4:
  children:
    kube_control_plane:
      hosts:
        kube1:
        kube2:
    kube_node:
      hosts:
        kube1:
        kube2:
        kube3:
    etcd:
      hosts:
        kube1:
        kube2:
        kube3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
</code><code>
(venv1) server:~/kubespray# ansible all -m ping -i inventory/mycluster/hosts.yaml
</code>
  
  * [[Сервис Ansible#Использование модулей]]: Ansible modules for disabling swap
  * [[Сервис Ansible#Использование ролей]]: Ansible roles for configuring the network

=== Deploying the cluster with Kubespray ===
<code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    45m31.796s

kube1# less ~/.kube/config

~/kubespray# ###time ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml
real    7m31.796s
</code>

=== Adding a node with Kubespray ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/nodes.md|Adding/replacing a node (github.com/kubernetes-sigs/kubespray)]]
  * [[https://nixhub.ru/posts/k8s-nodes-scale/|K8s: adding nodes with kubespray]]
  * [[https://blog.unetresgrossebite.com/?p=934|Redeploy Kubernetes Nodes with KubeSpray]]

<code>
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
...
    kube4:
...
    kube_node:
      hosts:
...
        kube4:
...
</code><code>
(venv1) server:~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    6m31.562s

~/kubespray# ###time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml
real    17m37.459s

$ kubectl get nodes -o wide
</code>
  
-Проверка+=== Добавление insecure_registries через Kubespray === 
 +<​code>​ 
 +~/​kubespray#​ cat inventory/​mycluster/​group_vars/​all/​containerd.yml 
 +</​code><​code>​ 
 +... 
 +containerd_insecure_registries:​ 
 +  "​server.corpX.un:​5000":​ "​http://​server.corpX.un:​5000"​ 
 +containerd_registry_auth:​ 
 +  - registry: server.corpX.un:​5000 
 +    username: student 
 +    password: Pa$$w0rd 
 +... 
 +</​code><​code>​ 
 +~/​kubespray#​ time ansible-playbook -i inventory/​mycluster/​hosts.yaml cluster.yml 
 +user    46m37.151s
  
 +# less /​etc/​containerd/​config.toml
 +</​code>​
 +
=== Managing addons with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/k8s_cluster/addons.yml
</code><code>
...
helm_enabled: true
...
ingress_nginx_enabled: true
ingress_nginx_host_network: true
...
</code>
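The addon changes presumably take effect after re-running the playbook, as in the previous steps:

<code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
</code>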
===== Basic k8s objects =====
  * [[https://kubernetes.io/ru/docs/reference/kubectl/docker-cli-to-kubectl/|kubectl for Docker users (in Russian)]]
  * [[https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/|Run a Stateless Application Using a Deployment]]
  
<code>
$ kubectl api-resources

$ ###kubectl run -ti --rm my-debian --image=debian --overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}'

$ kubectl run my-debian --image=debian -- "sleep" "60"

$ kubectl get pods

kubeN# crictl ps | grep debi
kubeN# crictl images
nodeN# ctr ns ls
nodeN# ctr -n=k8s.io image ls | grep debi

$ kubectl delete pod my-debian
$ ###kubectl delete pod my-debian --grace-period=0 --force

$ kubectl create deployment my-debian --image=debian -- "sleep" "infinity"

$ kubectl get all
$ kubectl get deployments
$ kubectl get replicasets
</code>
  * [[#Setting up shell completion]]
<code>
$ kubectl attach my-debian-NNNNNNNNN-NNNNN

$ kubectl exec -ti my-debian-NNNNNNNNN-NNNNN -- bash
Ctrl-D
</code>
  * Inspecting a running container's parameters from the inside: [[Технология Docker#Анализ параметров запущенного контейнера изнутри]]
<code>
$ kubectl get deployment my-debian -o yaml
</code>
  * The EDITOR environment variable: [[Переменные окружения]]
<code>
$ kubectl edit deployment my-debian

$ kubectl get pods -o wide
  
$ kubectl delete deployment my-debian
</code>
  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
$ cat my-debian-deployment.yaml
</code><code>
apiVersion: apps/v1
kind: ReplicaSet
#kind: Deployment
metadata:
  name: my-debian
spec:
  selector:
    matchLabels:
      app: my-debian
  replicas: 2
  template:
    metadata:
      labels:
        app: my-debian
    spec:
      containers:
      - name: my-debian
        image: debian
        command: ["/bin/sh"]
        args: ["-c", "while :;do echo -n random-value:;od -A n -t d -N 1 /dev/urandom;sleep 5; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      restartPolicy: Always
</code><code>
$ kubectl apply -f my-debian-deployment.yaml #--dry-run=client #-o yaml

$ kubectl logs -l app=my-debian -f
...
$ kubectl delete -f my-debian-deployment.yaml
</code>

==== A namespace for your application ====

==== Deployment ====

  * [[https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes|What is an 'endpoint' in Kubernetes?]]
  * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]]
  * [[https://www.kryukov.biz/kubernetes/lokalnye-volumes/emptydir/|emptyDir]]
  * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]]
  * [[https://habr.com/ru/companies/oleg-bunin/articles/761662/|Additional containers in Kubernetes and where they live: from patterns to automated management]]
  * [[https://stackoverflow.com/questions/39436845/multiple-command-in-poststart-hook-of-a-container|multiple command in postStart hook of a container]]
  * [[https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes|How to set multiple commands in one yaml file with Kubernetes?]]
<code>
$ kubectl create namespace my-ns

$ kubectl get namespaces

$ ### kubectl create deployment my-webd --image=server.corpX.un:5000/student/webd:latest --replicas=2 -n my-ns

$ ### kubectl delete deployment my-webd -n my-ns

$ cd webd/
  
$ cat my-webd-deployment.yaml
</code><code>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webd
#  annotations:
#    kubernetes.io/change-cause: "update to ver1.2"
spec:
  selector:
    matchLabels:
      app: my-webd
  replicas: 2
  template:
    metadata:
      labels:
        app: my-webd
    spec:
      containers:
      - name: my-webd

#        image: server.corpX.un:5000/student/webd
#        image: server.corpX.un:5000/student/webd:ver1.N
#        image: httpd

#        imagePullPolicy: "Always"

#        lifecycle:
#          postStart:
#            exec:
#              command:
#              - /bin/sh
#              - -c
#              - |
#                #test -f /usr/local/apache2/htdocs/index.html && exit 0
#                mkdir -p /usr/local/apache2/htdocs/
#                cd /usr/local/apache2/htdocs/
#                echo "<h1>Hello from apache2 on $(hostname) at $(date)</h1>" > index.html
#                echo "<img src=img/logo.gif>" >> index.html

#        env:
#        - name: PYWEBD_DOC_ROOT
#          value: "/usr/local/apache2/htdocs/"
#        - name: PYWEBD_PORT
#          value: "4080"
#        - name: APWEBD_HOSTNAME
#          value: "apwebd.corpX.un"
#        - name: KEYCLOAK_HOSTNAME
#          value: "keycloak.corpX.un"
#        - name: REALM_NAME
#          value: "corpX"

#        livenessProbe:
#          httpGet:
#            port: 80
#            #scheme: HTTPS

#        volumeMounts:
#        - name: htdocs-volume
#          mountPath: /usr/local/apache2/htdocs

#        volumeMounts:
#        - name: nfs-volume
#          mountPath: /var/www

#      volumes:
#      - name: htdocs-volume
#        emptyDir: {}

#      volumes:
#      - name: nfs-volume
#        nfs:
#          server: server.corpX.un
#          path: /var/www

#      initContainers:
#      - name: load-htdocs-files
#        image: curlimages/curl
##        command: ['sh', '-c', 'mkdir /mnt/img; curl http://val.bmstu.ru/unix/Media/logo.gif > /mnt/img/logo.gif']
#        command: ["/bin/sh", "-c"]
#        args:
#        - |
#          test -d /mnt/img/ && exit 0
#          mkdir /mnt/img; cd /mnt/img
#          curl http://val.bmstu.ru/unix/Media/logo.gif > logo.gif
#          ls -lR /mnt/
#        volumeMounts:
#        - mountPath: /mnt
#          name: htdocs-volume
</code><code>
$ kubectl apply -f my-webd-deployment.yaml -n my-ns #--dry-run=client #-o yaml

$ kubectl get all -n my-ns -o wide

$ kubectl describe -n my-ns pod/my-webd-NNNNNNNNNN-NNNNN

$ kubectl -n my-ns logs pod/my-webd-NNNNNNNNNN-NNNNN #-c load-htdocs-files

$ kubectl logs -l app=my-webd -n my-ns
(the -f, --tail=2000 and --previous options are also available)

$ kubectl scale deployment my-webd --replicas=3 -n my-ns

$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
</code>

=== Deployment revisions ===

  * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]]
  * [[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment|Updating a Deployment]]

<code>
$ ###kubectl rollout pause deployment my-webd-dep -n my-ns
$ ###kubectl set image deployment/my-webd-dep my-webd-con=server.corpX.un:5000/student/gowebd:ver1.2 -n my-ns
$ ###kubectl rollout resume deployment my-webd-dep -n my-ns

$ ###kubectl rollout status deployment/my-webd-dep -n my-ns

$ kubectl rollout history deployment/my-webd -n my-ns
</code><code>
REVISION  CHANGE-CAUSE
1         <none>
...
N         update to ver1.2
</code><code>
$ kubectl rollout history deployment/my-webd --revision=1 -n my-ns
</code><code>
...
    Image:      server.corpX.un:5000/student/webd:ver1.1
...
</code><code>
$ kubectl rollout undo deployment/my-webd --to-revision=1 -n my-ns

$ kubectl annotate deployment/my-webd kubernetes.io/change-cause="revert to ver1.1" -n my-ns

$ kubectl rollout history deployment/my-webd -n my-ns
</code><code>
REVISION  CHANGE-CAUSE
2         update to ver1.2
...
N+1       revert to ver1.1
</code>
  
==== Service ====
  
  * [[https://kubernetes.io/docs/concepts/services-networking/service/|Kubernetes Documentation Concepts Services, Load Balancing, and Networking Service]]
  * [[https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/|Accessing Kubernetes Pods From Outside of the Cluster]]
  * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]]
  
<code>
$ ### kubectl expose deployment my-webd --type=NodePort --port=80 -n my-ns

$ ### kubectl delete svc my-webd -n my-ns

$ cat my-webd-service.yaml
</code><code>
apiVersion: v1
kind: Service
metadata:
  name: my-webd
spec:
#  type: NodePort
#  type: LoadBalancer
#  loadBalancerIP: 192.168.X.64
  selector:
    app: my-webd
  ports:
  - protocol: TCP
    port: 80
#    targetPort: 4080
#    nodePort: 30111
</code><code>
$ kubectl apply -f my-webd-service.yaml -n my-ns

$ kubectl describe svc my-webd -n my-ns

$ kubectl get endpoints -n my-ns
</code>
=== NodePort ===

  * [[https://www.baeldung.com/ops/kubernetes-nodeport-range|Why Kubernetes NodePort Services Range From 30000 – 32767]]

<code>
$ kubectl get svc my-webd -n my-ns
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
my-webd-svc   NodePort   10.102.135.146   <none>        80:NNNNN/TCP   18h

curl http://kube1,2,3:NNNNN
</code>
== NodePort Minikube ==
<code>
$ minikube service list

$ minikube service my-webd --url -n my-ns
http://192.168.49.2:NNNNN

$ curl http://192.168.49.2:NNNNN
</code>

=== LoadBalancer ===

== MetalLB ==

  * [[https://www.adaltas.com/en/2022/09/08/kubernetes-metallb-nginx/|Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress]]

  * [[https://metallb.universe.tf/installation/|Installation]]
  * [[https://metallb.universe.tf/configuration/_advanced_ipaddresspool_configuration/|Advanced AddressPool configuration]]
  * [[https://metallb.universe.tf/configuration/_advanced_l2_configuration/|Advanced L2 configuration]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml

$ kubectl -n metallb-system get all

$ cat first-pool.yaml
</code><code>
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.X.64/28
  autoAssign: false
#  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  interfaces:
  - eth0
</code><code>
$ kubectl apply -f first-pool.yaml

...
$ kubectl get svc my-webd -n my-ns
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
my-webd   LoadBalancer   10.233.23.29   192.168.X.64   80:NNNNN/TCP   50s

$ #kubectl delete -f first-pool.yaml && rm first-pool.yaml

$ #kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
</code>

=== ClusterIP ===
<code>
kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10

kube1# curl my-webd.my-ns.svc.cluster.local
</code>
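The service name can also be checked from inside the cluster with a disposable pod; a small sketch (any image that ships nslookup will do):

<code>
$ kubectl run -ti --rm dnstest --image=busybox --restart=Never -- nslookup my-webd.my-ns.svc.cluster.local
</code>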

== port-forward ==

  * [[#The kubectl command-line tool]]

<code>
node1/kube1# kubectl port-forward -n my-ns --address 0.0.0.0 services/my-webd 1234:80

cmder> kubectl port-forward -n my-ns services/my-webd 1234:80
</code>

  * http://192.168.X.2N1:1234
  * http://localhost:1234

<code>
node1/kube1# kubectl -n my-ns delete pod/my-webd...
</code>

== kubectl proxy ==

  * [[#The kubectl command-line tool]]

<code>
kube1:~# kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'

cmder> kubectl proxy
</code>

  * http://192.168.X.2N1:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/
  * http://localhost:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/
==== Deleting objects ====
<code>
$ kubectl get all -n my-ns

$ kubectl delete -n my-ns -f my-webd-deployment.yaml,my-webd-service.yaml

or

$ kubectl delete namespace my-ns
</code>
  
==== Ingress ====

  * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]]

=== Minikube ingress-nginx-controller ===

  * [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]]
  * [[https://www.golinuxcloud.com/kubectl-port-forward/|kubectl port-forward examples in Kubernetes]]

<code>
server# cat /etc/bind/corpX.un
</code><code>
...
webd A 192.168.49.2
</code><code>
gitlab-runner@server:~$ minikube addons enable ingress
</code>
  
=== Baremetal ingress-nginx-controller ===

  * [[https://github.com/kubernetes/ingress-nginx/tags]] (versions)
  * [[https://stackoverflow.com/questions/61616203/nginx-ingress-controller-failed-calling-webhook|Nginx Ingress Controller - Failed Calling Webhook]]
  * [[https://stackoverflow.com/questions/51511547/empty-address-kubernetes-ingress|Empty ADDRESS kubernetes ingress]]
  * [[https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters|ingress-nginx/deploy/bare-metal-clusters]]

<code>
server# cat /etc/bind/corpX.un
</code><code>
...
webd            A       192.168.X.202
                A       192.168.X.203
gowebd          CNAME   webd
</code><code>
node1# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.3.1.baremetal.yaml

node1# cat ingress-nginx.controller-v1.3.1.baremetal.yaml
</code><code>
...
kind: Deployment
...
spec:
...
  replicas: 3    ### insert this (equal to the number of worker nodes)
  template:
...
      terminationGracePeriodSeconds: 300
      hostNetwork: true                    ### insert this
      volumes:
...
</code><code>
node1# kubectl apply -f ingress-nginx.controller-v1.3.1.baremetal.yaml

node1# kubectl get all -n ingress-nginx

node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>

=== Ingress baremetal DaemonSet ===
<code>
kube1:~# mkdir -p ingress-nginx; cd $_

kube1:~/ingress-nginx# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.12.0.baremetal.yaml

kube1:~/ingress-nginx# cat ingress-nginx.controller-v1.12.0.baremetal.yaml
</code><code>
...
apiVersion: v1
#data: null
data:
  allow-snippet-annotations: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
...
#kind: Deployment
kind: DaemonSet
...
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 1
#    type: RollingUpdate
...
      hostNetwork: true                    ### insert this
      terminationGracePeriodSeconds: 300
      volumes:
...
</code><code>
kube1:~/ingress-nginx# kubectl apply -f ingress-nginx.controller-v1.12.0.baremetal.yaml

kube1:~/ingress-nginx# kubectl -n ingress-nginx get pods -o wide

kube1:~/ingress-nginx# kubectl -n ingress-nginx describe service/ingress-nginx-controller
</code><code>
...
Endpoints:                192.168.X.221:80,192.168.X.222:80,192.168.X.223:80
...
</code><code>
kube1:~/ingress-nginx# ###kubectl delete -f ingress-nginx.controller-v1.12.0.baremetal.yaml
</code>

=== Managing the ingress-nginx-controller configuration ===
<code>
master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf

master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller
</code><code>
...
data:
  use-forwarded-headers: "true"
...
</code>

=== ingress example ===

  * [[https://stackoverflow.com/questions/49829452/why-ingress-serviceport-can-be-port-and-targetport-of-service|!!! The NGINX ingress controller does not use Services to route traffic to the pods]]
  * [[https://stackoverflow.com/questions/54459015/how-to-configure-ingress-to-direct-traffic-to-an-https-backend-using-https|how to configure ingress to direct traffic to an https backend using https]]

<code>
node1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns

node1# cat my-ingress.yaml
</code><code>
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
#  annotations:
#    nginx.ingress.kubernetes.io/canary: "true"
#    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
#  tls:
#  - hosts:
#    - gowebd.corpX.un
#    secretName: gowebd-tls
  rules:
  - host: webd.corpX.un
    http:
      paths:
      - backend:
          service:
            name: my-webd
            port:
              number: 4080
        path: /
        pathType: Prefix
  - host: gowebd.corpX.un
    http:
      paths:
      - backend:
          service:
            name: my-gowebd
            port:
              number: 80
        path: /
        pathType: Prefix
</code><code>
node1# kubectl apply -f my-ingress.yaml -n my-ns

node1# kubectl get ingress -n my-ns
NAME      CLASS   HOSTS                           ADDRESS                       PORTS   AGE
my-webd   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
</code>
  * [[Утилита curl|The curl utility]]
<code>
$ curl webd.corpX.un
$ curl gowebd.corpX.un
$ curl https://gowebd.corpX.un #-kv

$ curl http://nodeN/ -H "Host: webd.corpX.un"
$ curl --connect-to "":"":kubeN:443 https://gowebd.corpX.un #-vk

kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

node1# ### kubectl delete ingress my-ingress -n my-ns
</code>

=== secrets tls ===

  * [[https://devopscube.com/configure-ingress-tls-kubernetes/|How To Configure Ingress TLS/SSL Certificates in Kubernetes]]
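The gowebd.key/gowebd.crt pair is assumed to exist already; a minimal sketch for producing a self-signed one (the subject name is an assumption):

<code>
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout gowebd.key -out gowebd.crt -subj "/CN=gowebd.corpX.un"
</code>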
<code>
$ kubectl create secret tls gowebd-tls --key gowebd.key --cert gowebd.crt -n my-ns

$ kubectl get secrets -n my-ns

$ kubectl get secret/gowebd-tls -o yaml -n my-ns

###kubectl delete secret/gowebd-tls -n my-ns
</code>
  
==== Volumes ====

=== hostPath and nodeSelector ===

  * Shell web-server resources on kube3: [[Средства программирования shell#Ресурсы Web сервера на shell]]
  
<code>
kube1# kubectl label nodes kube3 htdocs-node=yes

kube1# kubectl get nodes --show-labels

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        volumeMounts:
        - name: htdocs-volume
          mountPath: /usr/local/apache2/htdocs

#        lifecycle:
#        ...

      volumes:
      - name: htdocs-volume
        hostPath:
          path: /var/www/

      nodeSelector:
        htdocs-node: "yes"

#      initContainers:
#      ...
</code>

=== PersistentVolume and PersistentVolumeClaim ===

  * [[https://qna.habr.com/q/629022|Several Claims for a single Persistent Volume?]]
  * [[https://serveradmin.ru/hranilishha-dannyh-persistent-volumes-v-kubernetes/|Data storage (Persistent Volumes) in Kubernetes]]
  * [[https://stackoverflow.com/questions/59915899/limit-persistent-volume-claim-content-folder-size-using-hostpath|Limit persistent volume claim content folder size using hostPath]]
  * [[https://stackoverflow.com/questions/63490278/kubernetes-persistent-volume-hostpath-vs-local-and-data-persistence|Kubernetes persistent volume: hostpath vs local and data persistence]]
  * [[https://www.alibabacloud.com/blog/kubernetes-volume-basics-emptydir-and-persistentvolume_594834|Kubernetes Volume Basics: emptyDir and PersistentVolume]]

<code>
kube1:~/pv# cat my-ha-pv.yaml
</code><code>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-kube3-sz-128m-num-001
#  name: my-pv-kube3-keycloak
#  labels:
#    type: local
spec:
## comment out storageClassName for keycloak
  storageClassName: my-ha-sc
  capacity:
    storage: 128Mi
#    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
#    path: /disk2
    path: /disk2/dir1
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube3
---
#...
</code><code>
kube1:~/pv# kubectl apply -f my-ha-pv.yaml

kube1# kubectl get pv

kube1# kubectl delete pv my-pv-kube3-sz-128m-num-001

kube3# mkdir -p /disk2/dir{0..3}

kube3# chmod 777 -R /disk2/

kube3# find /disk2/

kube3# ###rm -rf /disk2/
</code>

  * [[https://stackoverflow.com/questions/55639436/create-multiple-persistent-volumes-in-one-yaml]]
  * Getting acquainted with [[#Helm]]

<code>
kube1:~/pv# cat my-ha-pv-chart/Chart.yaml
</code><code>
apiVersion: v2
name: my-ha-pv-chart
version: 0.1.0
</code><code>
kube1:~/pv# cat my-ha-pv-chart/values.yaml
</code><code>
volume_names:
- "dir1"
- "dir2"
- "dir3"
numVolumes: "3"
</code><code>
kube1:~/pv# cat my-ha-pv-chart/templates/my-ha-pv.yaml
</code><code>
{{ range .Values.volume_names }}
{{/* range $k, $v := until (atoi .Values.numVolumes) */}}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-sz-128m-num-{{ . }}
spec:
  storageClassName: my-ha-sc
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /disk2/{{ . }}/
{{/*    path: /disk2/dir{{ $v }}/ */}}
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube3
{{ end }}
</code><code>
kube1:~/pv# helm template my-ha-pv-chart my-ha-pv-chart/

kube1:~/pv# helm install my-ha-pv-chart my-ha-pv-chart/

kube1# kubectl get pv

kube1:~/pv# ###helm uninstall my-ha-pv-chart
</code><code>
kube1:~/pywebd-k8s# cat my-webd-pvc.yaml
</code><code>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-webd-pvc
spec:
  storageClassName: my-ha-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
</code><code>
kube1:~/pywebd-k8s# kubectl apply -f my-webd-pvc.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl get pvc -n my-ns

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        volumeMounts:
        - name: htdocs-volume
          mountPath: /usr/local/apache2/htdocs

        lifecycle:
        ...

      volumes:
      - name: htdocs-volume
        persistentVolumeClaim:
          claimName: my-webd-pvc

      initContainers:
      ...
</code><code>
kube3# find /disk2
</code>

=== Dynamic Volume Provisioning ===

  * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]]
  * [[https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/|Changing the default StorageClass]]
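Once a default StorageClass is installed (see the local-path provisioner below), a claim may omit storageClassName entirely; a minimal sketch (the claim name is arbitrary):

<code>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dyn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
</code>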

=== rancher local-path-provisioner ===

  * [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]]
  * [[https://artifacthub.io/packages/helm/ebrianne/local-path-provisioner|This chart bootstraps a deployment on a cluster using the package manager]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml

$ kubectl get sc

$ kubectl -n local-path-storage get all

$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less
/DEFAULT_PATH_FOR_NON_LISTED_NODES

ssh root@kube1 'mkdir /opt/local-path-provisioner'
ssh root@kube2 'mkdir /opt/local-path-provisioner'
ssh root@kube3 'mkdir /opt/local-path-provisioner'
ssh root@kube1 'chmod 777 /opt/local-path-provisioner'
ssh root@kube2 'chmod 777 /opt/local-path-provisioner'
ssh root@kube3 'chmod 777 /opt/local-path-provisioner'

$ ###kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code>

  * The Keycloak service in [[Сервис Keycloak#Kubernetes]]

<code>
$ kubectl get pvc -n my-keycloak-ns

$ kubectl get pv

$ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0
</code>
=== longhorn ===

<code>
kubeN:~# apt install open-iscsi

(venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml
</code>
  * [[https://github.com/longhorn/longhorn]]
<code>
$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

$ kubectl -n longhorn-system get pods -o wide --watch

Setting -> General

Pod Deletion Policy When Node is Down: delete-statefulset-pod
</code>

Access via kubectl proxy:

  * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]]

<code>
cmder> kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/

Access via ingress:

!!! TODO: add an example with authentication (see the sketch after the listing) !!!
<code>
student@server:~/longhorn$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
spec:
  ingressClassName: nginx
  rules:
  - host: lh.corp13.un
    http:
      paths:
      - backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
        path: /
        pathType: Prefix
</code>
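A minimal sketch of the missing authentication example, using the standard ingress-nginx basic-auth annotations (the user name, secret name and realm text are assumptions):

<code>
student@server:~/longhorn$ htpasswd -c auth admin

student@server:~/longhorn$ kubectl -n longhorn-system create secret generic longhorn-basic-auth --from-file=auth
</code>

and in the Ingress metadata:

<code>
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: longhorn-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
</code>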

== Using snapshots ==

  * [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn?]]

  * Take a snapshot
  * Break something (for example, delete a user)
  * Stop the service

<code>
kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak

kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak-postgresql
</code>

  * Volume -> Attach to Host (any) in Maintenance mode, Revert to the snapshot, Detach
  * Start the service

<code>
kube1:~# kubectl -n my-keycloak-ns scale --replicas 1 statefulset my-keycloak-postgresql

kube1:~# kubectl -n my-keycloak-ns scale --replicas 2 statefulset my-keycloak
</code>

== Using backups ==

  * Deploy the [[Сервис NFS|NFS service]] on server

<code>
Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (no Linux NFS client is needed)
</code>

  * Volume -> Create Backup; delete the NS; restore the Volume from the backup; re-create the NS and the Volume's PV/PVC as they were; install the chart, and it picks up the existing PV/PVC
 + 
==== ConfigMap, Secret ====

<code>
server# scp /etc/pywebd/* kube1:/tmp/

kube1:~/pywebd-k8s# kubectl create configmap pywebd-conf --from-file=/tmp/pywebd.conf --dry-run=client -o yaml | tee my-webd-configmap.yaml

kube1:~/pywebd-k8s# cat my-webd-configmap.yaml
</code><code>
apiVersion: v1
data:
  pywebd.conf: |
    [default]
    DocumentRoot = /usr/local/apache2/htdocs
    Listen = 4443
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: pywebd-conf
</code><code>
kube1:~/pywebd-k8s# kubectl apply -f my-webd-configmap.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get configmaps

kube1:~/pywebd-k8s# kubectl create secret tls pywebd-tls --key /tmp/pywebd.key --cert /tmp/pywebd.crt --dry-run=client -o yaml | tee my-webd-secret-tls.yaml

kube1:~/pywebd-k8s# less my-webd-secret-tls.yaml
</code><code>
apiVersion: v1
data:
  tls.crt: ...
  tls.key: ...
kind: Secret
metadata:
  creationTimestamp: null
  name: pywebd-tls
type: kubernetes.io/tls
</code><code>
kube1:~/pywebd-k8s# rm -rv /tmp/pywebd.*

kube1:~/pywebd-k8s# kubectl apply -f my-webd-secret-tls.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get secrets

kube1:~/pywebd-k8s# kubectl create secret docker-registry regcred --docker-server=server.corpX.un:5000 --docker-username=student --docker-password='strongpassword' -n my-ns

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
      imagePullSecrets:
      - name: regcred
      containers:
      - name: my-webd
        image: server.corpX.un:5000/student/pywebd:ver1.2
        imagePullPolicy: "Always"

#        env:
#          ...
...
        livenessProbe:
          httpGet:
            port: 4443
            scheme: HTTPS
...
        volumeMounts:
...
        - name: conf-volume
          subPath: pywebd.conf
          mountPath: /etc/pywebd/pywebd.conf
        - name: secret-tls-volume
          subPath: tls.crt
          mountPath: /etc/pywebd/pywebd.crt
        - name: secret-tls-volume
          subPath: tls.key
          mountPath: /etc/pywebd/pywebd.key
...
      volumes:
...
      - name: conf-volume
        configMap:
          name: pywebd-conf
      - name: secret-tls-volume
        secret:
          secretName: pywebd-tls
...
</code><code>
kubeN$ curl --connect-to "":"":<POD_IP>:4443 https://pywebd.corpX.un
</code>
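  * A quick way to check that the ConfigMap and the TLS Secret were actually mounted into the container (deploy/my-webd resolves to the first pod of the Deployment):

<code>
kube1:~# kubectl -n my-ns exec deploy/my-webd -- ls -l /etc/pywebd

kube1:~# kubectl -n my-ns describe pod my-webd-<TAB> | less
</code>
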
  
==== ConfigMap ====

  * [[https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-configmap/|Kubernetes ConfigMap: Creating, Viewing, Consuming & Managing]]
  * [[https://blog.lapw.at/how-to-enable-ssh-into-a-kubernetes-pod/|How to enable SSH connections into a Kubernetes pod]]

<code>
root@node1:~# cat sshd_config
</code><code>
PermitRootLogin yes
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
</code><code>
root@node1:~# kubectl create configmap ssh-config --from-file=sshd_config --dry-run=client -o yaml
...

server:~# cat .ssh/id_rsa.pub
...

root@node1:~# cat my-openssh-server-deployment.yaml
</code><code>
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-config
data:
  sshd_config: |
    PermitRootLogin yes
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    UsePAM no
  authorized_keys: |
    ssh-rsa AAAAB.....C0zOcZ68= root@server.corpX.un
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-openssh-server
spec:
  selector:
    matchLabels:
      app: my-openssh-server
  template:
    metadata:
      labels:
        app: my-openssh-server
    spec:
      containers:
      - name: my-openssh-server
        image: linuxserver/openssh-server
        command: ["/bin/sh"]
        args: ["-c", "/usr/bin/ssh-keygen -A; usermod -p '*' root; /usr/sbin/sshd.pam -D"]
        ports:
        - containerPort: 22
        volumeMounts:
        - name: ssh-volume
          subPath: sshd_config
          mountPath: /etc/ssh/sshd_config
        - name: ssh-volume
          subPath: authorized_keys
          mountPath: /root/.ssh/authorized_keys
      volumes:
      - name: ssh-volume
        configMap:
          name: ssh-config
---
apiVersion: v1
kind: Service
metadata:
  name: my-openssh-server
spec:
  type: NodePort
  ports:
  - port: 22
    nodePort: 32222
  selector:
    app: my-openssh-server
</code><code>
root@node1:~# kubectl apply -f my-openssh-server-deployment.yaml

root@node1:~# iptables-save | grep 32222

root@node1:~# ###kubectl exec -ti my-openssh-server-NNNNNNNN-NNNNN -- bash

server:~# ssh -p 32222 nodeN
Welcome to OpenSSH Server
my-openssh-server-NNNNNNNN-NNNNN:~# nslookup my-openssh-server.default.svc.cluster.local
</code>
==== Multi-container pod example ====

  Only this fragment of the example is preserved in this revision:

      containers:
      - name: my-webd
        image: server.corpX.un:5000/student/webd:latest
        volumeMounts:
        - name: html
  
  
==== Installing Helm ====
  
  * [[https://helm.sh/docs/intro/install/|Installing Helm]]
  * [[https://github.com/helm/helm/releases|helm releases]]
  
<code>
$ wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz

$ tar -zxvf helm-*-linux-amd64.tar.gz

$ mv linux-amd64/helm /usr/local/bin/helm

$ cat ~/.profile
</code><code>
...
source <(helm completion bash)
</code>
  
==== Working with ready-made charts ====

  * The Keycloak service: [[Сервис Keycloak#Kubernetes]]

=== ingress-nginx ===

  * [[https://kubernetes.github.io/ingress-nginx/deploy/|NGINX Ingress Controller Installation Guide]]
  * [[https://stackoverflow.com/questions/56915354/how-to-install-nginx-ingress-with-hostnetwork-on-bare-metal|stackoverflow How to install nginx-ingress with hostNetwork on bare-metal?]]
  * [[https://devpress.csdn.net/cloud/62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?]]
  * [[https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml]]

  * [[https://github.com/kubernetes/ingress-nginx]] --version 4.7.3

<code>
$ helm upgrade ingress-nginx --install ingress-nginx \
--set controller.hostNetwork=true,controller.publishService.enabled=false,controller.kind=DaemonSet,controller.config.use-forwarded-headers=true \
--repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

$ helm list --namespace ingress-nginx
$ helm list -A

$ kubectl get all -n ingress-nginx -o wide

$ helm delete ingress-nginx --namespace ingress-nginx


$ mkdir -p ingress-nginx; cd ingress-nginx

$ helm template ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx | tee t1.yaml

$ helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx | tee values.yaml.orig

$ cat values.yaml
</code><code>
controller:
  hostNetwork: true
  publishService:
    enabled: false
  kind: DaemonSet
#  config:
#    use-forwarded-headers: true
#    allow-snippet-annotations: true
</code><code>
$ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml

$ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace

$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers

$ kubectl -n ingress-nginx describe service/ingress-nginx-controller
...
Endpoints:                192.168.X.221:80,192.168.X.222:80,192.168.X.223:80
...

# kubectl get clusterrole -A | grep -i ingress
# kubectl get clusterrolebindings -A | grep -i ingress
# kubectl get validatingwebhookconfigurations -A | grep -i ingress

# ###helm uninstall ingress-nginx -n ingress-nginx
</code>
==== Deploying your own application ====

  * [[https://helm.sh/docs/chart_template_guide/getting_started/|chart_template_guide getting_started]]
  * [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]]
  * [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]]
  * [[https://stackoverflow.com/questions/69817305/how-set-field-app-version-in-helm3-chart|how set field app-version in helm3 chart?]]
  
<code>
~/gowebd-k8s$ helm create webd-chart

$ less webd-chart/templates/deployment.yaml

$ cat webd-chart/Chart.yaml
</code><code>
...
version: 0.1.1
icon: https://val.bmstu.ru/unix/Media/logo.gif
...
appVersion: "latest"
#appVersion: ver1.7   #for vanilla argocd
</code><code>
$ cat webd-chart/values.yaml
</code><code>
...
replicaCount: 2

image:
  repository: server.corpX.un:5000/student/webd
  pullPolicy: Always
...
service:
#  type: NodePort
...
ingress:
  enabled: true
  className: "nginx"
...
  hosts:
    - host: webd.corpX.un
...
#  tls: []
#  tls:
#    - secretName: gowebd-tls
#      hosts:
#        - gowebd.corpX.un
...
#APWEBD_HOSTNAME: "apwebd.corp13.un"
#KEYCLOAK_HOSTNAME: "keycloak.corp13.un"
#REALM_NAME: "corp13"
</code><code>
$ less webd-chart/templates/deployment.yaml
</code><code>
...
          imagePullPolicy: {{ .Values.image.pullPolicy }}
#          env:
#          - name: APWEBD_HOSTNAME
#            value: "{{ .Values.APWEBD_HOSTNAME }}"
#          - name: KEYCLOAK_HOSTNAME
#            value: "{{ .Values.KEYCLOAK_HOSTNAME }}"
#          - name: REALM_NAME
#            value: "{{ .Values.REALM_NAME }}"
...
</code><code>
$ helm lint webd-chart/

$ helm template my-webd webd-chart/ | less

$ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait

$ curl kubeN -H "Host: gowebd.corpX.un"

$ kubectl describe events -n my-ns | less

$ export HELM_NAMESPACE=my-ns

$ helm list

### helm upgrade my-webd webd-chart/ --set=image.tag=ver1.10

$ helm history my-webd
</code>
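  * helm history pairs naturally with a rollback; a quick sketch (revision 1 below is just an example):

<code>
$ helm rollback my-webd 1

$ helm history my-webd

$ helm status my-webd
</code>
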
  * [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]]
  * [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]]
  * [[Инструмент GitLab#Подключение через API]] - Role: Maintainer, scope: api (read_registry and write_registry are not needed)
  
=== Adding the application to your own repository ===
<code>
~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable
"webd" has been added to your repositories

~/gowebd-k8s$ helm repo list

~/gowebd-k8s$ helm package webd-chart

~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz

~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push

~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd

~/gowebd-k8s$ rm webd-chart-0.1.1.tgz

~/gowebd-k8s$ ### helm repo remove webd

~/gowebd-k8s$ ### helm plugin uninstall cm-push
</code>
=== Installing the application by adding the repository ===
<code>
kube1:~# helm repo add webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable

kube1:~# helm repo update

kube1:~# helm search repo webd

kube1:~# helm repo update webd

kube1:~# helm show values webd/webd-chart | tee values.yaml.orig

kube1:~# ###helm pull webd/webd-chart

kube1:~# helm install my-webd webd/webd-chart

kube1:~# ###helm uninstall my-webd

kube1:~# ###helm repo remove webd
</code>
=== Installing the application without adding the repository ===
<code>
kube1:~# mkdir gowebd; cd gowebd

kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable

kube1:~/gowebd# helm show values webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable | tee values.yaml.orig

kube1:~/gowebd# cat values.yaml
</code><code>
replicaCount: 3
image:
  tag: "ver1.1"
#REALM_NAME: "corp"
</code><code>
kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable

$ curl http://kubeN -H "Host: gowebd.corpX.un"

kube1:~/gowebd# ###helm uninstall my-webd -n my-ns
</code>
  
==== Working with public repositories ====

=== gitlab-runner kubernetes ===

<code>
kube1:~/gitlab-runner# kubectl create ns gitlab-runner

kube1:~/gitlab-runner# kubectl -n gitlab-runner create configmap ca-crt --from-file=/usr/local/share/ca-certificates/ca.crt

kube1:~/gitlab-runner# helm repo add gitlab https://charts.gitlab.io

kube1:~/gitlab-runner# helm repo list

kube1:~/gitlab-runner# helm search repo -l gitlab

kube1:~/gitlab-runner# helm search repo -l gitlab/gitlab-runner

kube1:~/gitlab-runner# helm show values gitlab/gitlab-runner --version 0.70.5 | tee values.yaml

kube1:~/gitlab-runner# cat values.yaml
</code><code>
...
gitlabUrl: https://server.corpX.un
...
runnerToken: "NNNNNNNNNNNNNNNNNNNNN"
...
rbac:
...
  create: true                      #change this
...
serviceAccount:
...
  create: true                      #change this
...
runners:
...
  config: |
    [[runners]]
      tls-ca-file = "/mnt/ca.crt"   #insert this
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "alpine"
        privileged = true           #insert this
...
securityContext:
  allowPrivilegeEscalation: true    #change this
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  privileged: true                  #change this
...
#volumeMounts: []                   #comment this
volumeMounts:
  - name: ca-crt
    subPath: ca.crt
    mountPath: /mnt/ca.crt
...
#volumes: []                        #comment this
volumes:
  - name: ca-crt
    configMap:
      name: ca-crt
...
</code><code>
kube1:~/gitlab-runner# helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --version 0.70.5

kube1:~/gitlab-runner# kubectl get all -n gitlab-runner

kube1:~/gitlab-runner# ### helm -n gitlab-runner uninstall gitlab-runner
</code>
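  * A minimal pipeline job to verify that builds actually land on the Kubernetes executor (a sketch; it assumes the runner is assigned to the project and accepts untagged jobs):

<code>
# .gitlab-ci.yml
check-runner:
  image: alpine
  script:
    - uname -a
    - nslookup kubernetes.default.svc.cluster.local
</code>
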
  
== Older version ==
<code>
gitlab-runner@server:~$ helm repo add gitlab https://charts.gitlab.io

gitlab-runner@server:~$ helm repo list

gitlab-runner@server:~$ helm search repo -l gitlab

gitlab-runner@server:~$ helm search repo -l gitlab/gitlab-runner

gitlab-runner@server:~$ helm show values gitlab/gitlab-runner --version 0.56.0 | tee values.yaml

gitlab-runner@server:~$ diff values.yaml values.yaml.orig
</code><code>
...
gitlabUrl: http://server.corpX.un/
...
runnerRegistrationToken: "NNNNNNNNNNNNNNNNNNNNNNNN"
...
148,149c142
<   create: true
---
>   create: false
325d317
<         privileged = true
432c424
<   allowPrivilegeEscalation: true
---
>   allowPrivilegeEscalation: false
435c427
<   privileged: true
---
>   privileged: false
</code><code>
gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace --version 0.56.0

gitlab-runner@server:~$ kubectl get all -n gitlab-runner
</code>

== SSL/TLS ==

<code>
# kubectl -n gitlab-runner create configmap wild-crt --from-file=wild.crt

# cat values.yaml
</code><code>
...
gitlabUrl: https://server.corpX.un/
...
  config: |
    [[runners]]
      tls-ca-file = "/mnt/wild.crt"
      [runners.kubernetes]
...
#volumeMounts: []
volumeMounts:
  - name: wild-crt
    subPath: wild.crt
    mountPath: /mnt/wild.crt

#volumes: []
volumes:
  - name: wild-crt
    configMap:
      name: wild-crt
</code>

===== Kubernetes Dashboard =====

  * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

<code>
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

$ cat dashboard-user-role.yaml
</code><code>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
</code><code>
$ kubectl apply -f dashboard-user-role.yaml

$ kubectl -n kubernetes-dashboard create token admin-user

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d ; echo

cmder$ kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

===== Monitoring =====

==== Metrics Server ====

  * [[https://kubernetes-sigs.github.io/metrics-server/|Kubernetes Metrics Server]]
  * [[https://medium.com/@cloudspinx/fix-error-metrics-api-not-available-in-kubernetes-aa10766e1c2f|Fix “error: Metrics API not available” in Kubernetes]]

<code>
kube1:~/metrics-server# curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml | tee metrics-server-components.yaml

kube1:~/metrics-server# cat metrics-server-components.yaml
</code><code>
...
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls   # add this
...
</code><code>
kube1:~/metrics-server# kubectl apply -f metrics-server-components.yaml

kube1# kubectl get pods -A | grep metrics-server

kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>

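  * With the Metrics API available, kubectl autoscale can create a resource-based HPA (a sketch; the Deployment must have resources.requests.cpu set, and the thresholds below are arbitrary):

<code>
kube1# kubectl -n my-ns autoscale deployment my-webd --cpu-percent=50 --min=1 --max=3

kube1# kubectl -n my-ns get hpa

kube1# ###kubectl -n my-ns delete hpa my-webd
</code>
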
==== kube-state-metrics ====

  * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]]
  * ... alerts with details about failed pods ...

<code>
kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kube1# helm repo update
kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace

kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics
</code>
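
  * To actually collect these metrics, point Prometheus (or vmagent) at the service; a minimal scrape_configs sketch using the in-cluster DNS name from above:

<code>
scrape_configs:
  - job_name: kube-state-metrics
    static_configs:
      - targets:
          - kube-state-metrics.vm.svc.cluster.local:8080
</code>
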
===== Debugging, troubleshooting =====

==== Debugging etcd ====

  * [[https://sysdig.com/blog/monitor-etcd/|How to monitor etcd]]

<code>
kubeN:~# more /etc/kubernetes/manifests/kube-apiserver.yaml

kubeN:~# etcdctl member list -w table \
  --endpoints=https://kube1:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/node-kube1.pem \
  --key=/etc/ssl/etcd/ssl/node-kube1-key.pem

kubeN:~# etcdctl endpoint status -w table \
  --endpoints=https://kube1:2379,https://kube2:2379,https://kube3:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/node-kube1.pem \
  --key=/etc/ssl/etcd/ssl/node-kube1-key.pem
</code>
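
  * The same certificate flags work for a quick health check of all members:

<code>
kubeN:~# etcdctl endpoint health -w table \
  --endpoints=https://kube1:2379,https://kube2:2379,https://kube3:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/node-kube1.pem \
  --key=/etc/ssl/etcd/ssl/node-kube1-key.pem
</code>
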
===== Additional materials =====

==== Configuring registry-mirrors for Kubespray ====
<code>
~/kubespray# cat inventory/mycluster/group_vars/all/docker.yml
</code><code>
...
docker_registry_mirrors:
  - https://mirror.gcr.io
...
</code><code>
~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
    - host: https://mirror.gcr.io
      capabilities: ["pull", "resolve"]
      skip_verify: false
...
</code>

==== Installing kubelet, kubeadm, kubectl on Ubuntu 20 ====

  * This needs to be done on every node

<code>
mkdir /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt update && apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
</code>

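  * Since the versions are pinned, it is also worth holding the packages so a routine apt upgrade does not move them:

<code>
apt-mark hold kubelet kubeadm kubectl
</code>
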
==== Use .kube/config Client certs in curl ====
  * [[https://serverfault.com/questions/1094361/use-kube-config-client-certs-in-curl|Use .kube/config Client certs in curl]]
<code>
cat ~/.kube/config | yq -r '.clusters[0].cluster."certificate-authority-data"' | base64 -d - > ~/.kube/ca.pem
cat ~/.kube/config | yq -r '.users[0].user."client-certificate-data"' | base64 -d - > ~/.kube/user.pem
cat ~/.kube/config | yq -r '.users[0].user."client-key-data"' | base64 -d - > ~/.kube/user-key.pem

SERVER_URL=$(cat ~/.kube/config | yq -r .clusters[0].cluster.server)

curl --cacert ~/.kube/ca.pem --cert ~/.kube/user.pem --key ~/.kube/user-key.pem -X GET ${SERVER_URL}/api/v1/namespaces/default/pods/
</code>

==== bare-metal minikube ====

<code>
student@node2:~$ sudo apt install conntrack

https://computingforgeeks.com/install-mirantis-cri-dockerd-as-docker-engine-shim-for-kubernetes/
...

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
...

student@node2:~$ minikube start --driver=none --insecure-registry "server.corpX.un:5000"
</code>

==== minikube dashboard ====
<code>
student@node1:~$ minikube dashboard &
...
Opening http://127.0.0.1:NNNNN/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser
...
/home/mobaxterm> ssh -L NNNNN:localhost:NNNNN student@192.168.X.10
Now the same link works on the Windows host system
</code>

==== Connecting to minikube from another system ====

  * If it is not minikube, a copy of .kube/config alone is enough
  * [[https://habr.com/ru/company/flant/blog/345580/|see the GitLab Runner setup]]

<code>
student@node1:~$ tar -cvzf kube-config.tar.gz .kube/config .minikube/ca.crt .minikube/profiles/minikube

gitlab-runner@server:~$ scp student@node1:kube-config.tar.gz .

gitlab-runner@server:~$ tar -xvf kube-config.tar.gz

gitlab-runner@server:~$ cat .kube/config
</code><code>
...
    certificate-authority: /home/gitlab-runner/.minikube/ca.crt
...
    client-certificate: /home/gitlab-runner/.minikube/profiles/minikube/client.crt
    client-key: /home/gitlab-runner/.minikube/profiles/minikube/client.key
...
</code>
==== kompose ====

  * [[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes|What's the difference between Docker Compose and Kubernetes?]]
  * [[https://loft.sh/blog/docker-compose-to-kubernetes-step-by-step-migration/|Docker Compose to Kubernetes: Step-by-Step Migration]]
  * [[https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/|Translate a Docker Compose File to Kubernetes Resources]]

<code>
root@gate:~# curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
root@gate:~# chmod +x kompose
root@gate:~# sudo mv ./kompose /usr/local/bin/kompose
</code>
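
  * Typical usage next to an existing compose file (a sketch; docker-compose.yml is whatever project is being migrated):

<code>
root@gate:~# kompose convert -f docker-compose.yml -o k8s/

root@gate:~# kubectl apply -f k8s/
</code>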
  