система_kubernetes [2022/09/27 04:26]
val [Развертывание]
система_kubernetes [2024/03/26 13:20]
val [Volumes]
====== Система Kubernetes ======

  * [[https://kubernetes.io/ru/docs/home/|Kubernetes documentation (in Russian)]]

  * [[https://youtu.be/sLQefhPfwWE|youtube: An introduction to Kubernetes using Minikube]]

  * [[https://habr.com/ru/company/domclick/blog/577964/|The ultimate guide to building CI/CD in GitLab with auto-deploy to Kubernetes on bare metal for just $514 a year ( ͡° ͜ʖ ͡°)]]
  * [[https://habr.com/ru/company/flant/blog/513908/|A full-fledged Kubernetes from scratch on a Raspberry Pi]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|The differences between Docker, containerd, CRI-O and runc]]

  * [[https://habr.com/ru/company/vk/blog/542730/|11 PRO-level failures when adopting Kubernetes and how to avoid them]]

  * [[https://github.com/dgkanatsios/CKAD-exercises|A set of exercises that helped me prepare for the Certified Kubernetes Application Developer exam]]

  * [[https://www.youtube.com/watch?v=XZQ7-7vej6w|Our experience with Kubernetes in small projects / Dmitry Stolyarov (Flant)]]

  * [[https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/|Accessing Kubernetes Pods From Outside of the Cluster]]

===== Инструмент командной строки kubectl =====

==== Installation ====

=== Linux ===
<code>
# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
...
# mv kubectl /usr/local/bin/
</code>

=== Windows ===

  * [[https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/|Install and Set Up kubectl on Windows]]

<code>
cmder$ curl -LO "https://dl.k8s.io/release/v1.29.0/bin/windows/amd64/kubectl.exe"

cmder$ mv kubectl.exe /usr/bin
</code>
  
 ==== Подключение к кластеру ==== ==== Подключение к кластеру ====
  
-  * [[https://medium.com/​@jacobtomlinson/how-to-merge-kubernetes-kubectl-config-files-737b61bd517d|How to merge Kubernetes kubectl config files]]+<​code>​ 
 +mkdir ~/.kube/
  
 +scp root@192.168.X.2N1:​.kube/​config ~/.kube/
 +
 +cat ~/​.kube/​config
 +</​code><​code>​
 +...
 +    server: https://​192.168.X.2N1:​6443
 +...
 +</​code><​code>​
 +kubectl get all -o wide --all-namespaces
 +kubectl get all -o wide -A
 +</​code>​
 +=== Настройка автодополнения ===
 <​code>​ <​code>​
-$ kubectl ​config get-contexts+gitlab-runner@server:​~source <(kubectl ​completion bash) 
 +</​code>​
  
-$ kubectl config use-context kubernetes-admin@kubernetes+=== Подключение к другому кластеру ===
  
-gitlab-runner@server:​~$ ​mkdir .kube+<​code>​ 
 +gitlab-runner@server:​~$ ​scp root@kube1:​.kube/​config ​.kube/​config_kube1
  
-gitlab-runner@server:​~$ ​scp root@node1:.kube/config ​.kube/config+gitlab-runner@server:​~$ ​cat .kube/config_kube1 
 +</​code><​code>​ 
 +... 
 +    .kube/​config_kube1 
 +... 
 +</​code><​code>​ 
 +gitlab-runner@server:​~$ export KUBECONFIG=~/​.kube/config_kube1
  
-gitlab-runner@server:​~$ kubectl get all -o wide --all-namespaces+gitlab-runner@server:​~$ kubectl get nodes
 </​code>​ </​code>​
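Instead of switching files, kubectl can merge every file listed in the colon-separated KUBECONFIG variable, so both clusters stay visible at once; a minimal sketch (paths as copied above, the kubectl call guarded in case it is not installed here):

```shell
# Point KUBECONFIG at both files; kubectl merges them at read time
export KUBECONFIG=~/.kube/config:~/.kube/config_kube1

# List the contexts of the merged view (skipped if kubectl is absent)
if command -v kubectl >/dev/null; then kubectl config get-contexts; fi
```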
  
===== Minikube =====

<code>
...
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    29m8.320s
...

gitlab-runner@server:~$ minikube ip

gitlab-runner@server:~$ minikube addons list

gitlab-runner@server:~$ minikube addons configure registry-creds   # not needed for registries of public projects
...
Do you want to enable Docker Registry? [y/n]: y
...
gitlab-runner@server:~$ minikube addons enable registry-creds

gitlab-runner@server:~$ minikube kubectl -- get pods -A

gitlab-runner@server:~$ alias kubectl='minikube kubectl --'

gitlab-runner@server:~$ kubectl get pods -A
</code>

or

  * [[#Инструмент командной строки kubectl]]

<code>
gitlab-runner@server:~$ ###minikube stop

gitlab-runner@server:~$ ###minikube start
</code>
===== The Kubernetes cluster =====

==== Deployment with kubeadm ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/|kubernetes.io Creating a cluster with kubeadm]]
  * [[https://infoit.com.ua/linux/kak-ustanovit-kubernetes-na-ubuntu-20-04-lts/|How to install Kubernetes on Ubuntu 20.04 LTS]]
  * [[https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/|How to Install Kubernetes Cluster on Ubuntu 22.04]]
  * [[https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/|How to Install Kubernetes Cluster on Debian]]
  * [[https://www.cloud4y.ru/blog/installation-kubernetes/|Installing Kubernetes]]

=== Preparing the nodes ===

<code>
node1# ssh-keygen

node1# ssh-copy-id node2
node1# ssh-copy-id node3

node1# bash -c '
swapoff -a
ssh node2 swapoff -a
ssh node3 swapoff -a
'

node1# bash -c '
sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node2 sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node3 sed -i"" -e "/swap/s/^/#/" /etc/fstab
'
</code>
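The sed expression above (/swap/s/^/#/) comments out every /etc/fstab line that mentions swap; a self-contained sketch of its effect on a sample file (paths and entries are illustrative):

```shell
# Sample fstab with one swap entry
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swap.img none swap sw 0 0
EOF

# Same in-place edit as on the nodes
sed -i"" -e "/swap/s/^/#/" /tmp/fstab.sample

cat /tmp/fstab.sample
# The swap line is now prefixed with '#'; the root filesystem line is untouched
```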

=== Installing the packages ===
<code>
node1# bash -c '
http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
ssh node2 http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
ssh node3 http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
'

node1# bash -c '
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
ssh node2 "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add"
ssh node3 "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add"
'

node1# bash -c '
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
ssh node2 apt-add-repository \"deb http://apt.kubernetes.io/ kubernetes-xenial main\"
ssh node3 apt-add-repository \"deb http://apt.kubernetes.io/ kubernetes-xenial main\"
'

node1# bash -c '
http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
ssh node2 http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
ssh node3 http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
'

# https://forum.linuxfoundation.org/discussion/864693/the-repository-http-apt-kubernetes-io-kubernetes-xenial-release-does-not-have-a-release-file
# !!!! Attention: the old apt.kubernetes.io repository no longer ships a Release file,
# !!!! so on EVERY node you now have to do the following: !!!!

# remove the kubernetes line from /etc/apt/sources.list

mkdir /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt update

apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
</code>
  
=== Initializing the master ===

<code>
...
root@node1:~# kubectl get --raw='/readyz?verbose'
</code>
  * May be needed if you run into the error [[https://github.com/containerd/containerd/issues/4581|[ERROR CRI]: container runtime is not running]]
<code>
node1# bash -c '
rm /etc/containerd/config.toml
systemctl restart containerd
ssh node2 rm /etc/containerd/config.toml
ssh node2 systemctl restart containerd
ssh node3 rm /etc/containerd/config.toml
ssh node3 systemctl restart containerd
'
</code>
  

<code>
...
root@node1:~# kubectl get nodes -o wide
</code>

=== Removing a node ===
<code>
$ kubectl cordon kube3

$ kubectl drain kube3 --ignore-daemonsets --delete-emptydir-data

$ kubectl delete node kube3
</code>

=== Removing the cluster ===

  * [[https://stackoverflow.com/questions/44698283/how-to-completely-uninstall-kubernetes|How to completely uninstall kubernetes]]

<code>
node1# bash -c '
kubeadm reset
ssh node2 kubeadm reset
ssh node3 kubeadm reset
'
</code>
  

  * [[https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header|containerd/docs/PLUGINS.md migrate config v1 to v2]]

== containerd ==

<code>
root@node1:~# mkdir /etc/containerd/

root@node1:~# cat /etc/containerd/config.toml
</code><code>
...
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."server.corpX.un:5000"]
      endpoint = ["http://server.corpX.un:5000"]

# no need
#  [plugins."io.containerd.grpc.v1.cri".registry.configs]
   [plugins."io.containerd.grpc.v1.cri".registry.configs."server.corpX.un:5000".tls]
     insecure_skip_verify = true

# doesn't work with cri-tools 1.25, the project must be public
#[plugins."io.containerd.grpc.v1.cri".registry.configs."server.corpX.un:5000".auth]
#      auth = "c3R1ZGVudDpwYXNzd29yZA=="
</code><code>
node1# bash -c '
ssh node2 mkdir /etc/containerd/
ssh node3 mkdir /etc/containerd/
scp /etc/containerd/config.toml node2:/etc/containerd/config.toml
scp /etc/containerd/config.toml node3:/etc/containerd/config.toml
systemctl restart containerd
ssh node2 systemctl restart containerd
ssh node3 systemctl restart containerd
'

root@nodeN:~# containerd config dump | less
</code>
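The commented-out auth value in config.toml above is nothing more than base64 of user:password (here student:password, matching the credentials used elsewhere on this page):

```shell
# containerd's registry auth field is base64("user:password")
printf '%s' 'student:password' | base64
# → c3R1ZGVudDpwYXNzd29yZA==
```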
  

<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd
</code>

==== Deployment with Kubespray ====

  * [[https://github.com/kubernetes-sigs/kubespray]]
  * [[https://habr.com/ru/companies/domclick/articles/682364/|The most detailed guide to installing a highly available (almost ಠ ͜ʖ ಠ ) Kubernetes cluster]]
  * [[https://habr.com/ru/companies/X5Tech/articles/645651/|A bare-metal kubernetes cluster on your own local computer]]
  * [[https://internet-lab.ru/k8s_kubespray|Kubernetes — installing with Kubespray]]
  * [[https://www.mshowto.org/en/ubuntu-sunucusuna-kubespray-ile-kubernetes-kurulumu.html|Installing Kubernetes on Ubuntu Server with Kubespray]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubespray/|Installing Kubernetes with Kubespray]]

  * [[https://stackoverflow.com/questions/29882263/browse-list-of-tagged-releases-in-a-repo]]

<code>
kube1# ssh-keygen

kube1# ssh-copy-id kube1; ssh-copy-id kube2; ssh-copy-id kube3; ssh-copy-id kube4

kube1# apt update

kube1# apt install python3-pip -y

kube1# git clone https://github.com/kubernetes-sigs/kubespray

kube1# cd kubespray/

~/kubespray# grep -r containerd_insecure_registries .
~/kubespray# git log

~/kubespray# git branch -r
~/kubespray# ### git checkout origin/release-2.22   # debian11 - 2.24

~/kubespray# git tag -l
~/kubespray# ### git checkout tags/v2.22.1

~/kubespray# git checkout 4c37399c7582ea2bfb5202c3dde3223f9c43bf59

~/kubespray# ### git checkout master
</code>

  * [[Язык программирования Python#Виртуальная среда Python]] may be required
  * You may run into [[https://github.com/kubernetes-sigs/kubespray/issues/10688|"The conditional check 'groups.get('kube_control_plane')' failed. The error was: Conditional is marked as unsafe, and cannot be evaluated." #10688]]

<code>
~/kubespray# time pip3 install -r requirements.txt
real    1m48.202s

~/kubespray# cp -rfp inventory/sample inventory/mycluster

~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223)

~/kubespray# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

~/kubespray# less inventory/mycluster/hosts.yaml

~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    45m31.796s

kube1# less ~/.kube/config

~/kubespray# ### time ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml
real    7m31.796s
</code>
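Each element of the IPS array above is a host,address pair accepted by inventory.py, and the ${IPS[@]} expansion simply passes the pairs as separate arguments; as a quick sketch:

```shell
# The same array as in the inventory step; each item is "host,ip"
declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223)
echo "${IPS[@]}"
# → kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223
```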
 + 

=== Adding a node with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
...
    kube4:
      ansible_host: 192.168.X.224
      ip: 192.168.X.224
      access_ip: 192.168.X.224
...
    kube_node:
...
        kube4:
...
</code><code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml
real    17m37.459s

$ kubectl get nodes -o wide
</code>

=== Adding insecure_registries with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
containerd_insecure_registries:
  "server.corpX.un:5000": "http://server.corpX.un:5000"
containerd_registry_auth:
  - registry: server.corpX.un:5000
    username: student
    password: Pa$$w0rd
...
</code><code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    46m37.151s

# less /etc/containerd/config.toml
</code>

=== Managing addons with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/k8s_cluster/addons.yml
</code><code>
...
helm_enabled: true
...
ingress_nginx_enabled: true
ingress_nginx_host_network: true
...
</code>
===== Basic k8s objects =====

<code>
$ kubectl api-resources

$ kubectl run my-debian --image=debian -- "sleep" "3600"

$ kubectl get all

kubeN# crictl ps | grep debi
kubeN# crictl images
nodeN# ctr ns ls
nodeN# ctr -n=k8s.io image ls | grep debi

$ kubectl delete pod my-debian

$ kubectl create deployment my-debian --image=debian -- "sleep" "3600"

$ kubectl get deployments
</code>
  * [[#Настройка автодополнения]]
<code>
$ kubectl attach my-debian-NNNNNNNNN-NNNNN
...
$ kubectl get deployment my-debian -o yaml
</code>
  * [[Переменные окружения|Environment variable]] EDITOR
<code>
$ kubectl edit deployment my-debian

$ kubectl get pods -o wide

$ kubectl delete deployment my-debian
...
</code><code>
apiVersion: apps/v1
kind: ReplicaSet
#kind: Deployment
metadata:
  name: my-debian
spec:
  selector:
    matchLabels:
      app: my-debian
  replicas: 2
  template:
    metadata:
...
      restartPolicy: Always
</code><code>
$ kubectl apply -f my-debian-deployment.yaml
...
$ kubectl delete -f my-debian-deployment.yaml
</code>

==== Volumes ====

  * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]]
  * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]]
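The -n my-ns flags used below assume the namespace already exists (kubectl create namespace my-ns); a declarative sketch of the same object:

```yaml
# my-ns.yaml — the namespace assumed by the -n my-ns commands in this section
apiVersion: v1
kind: Namespace
metadata:
  name: my-ns
```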
  
<code>
...
metadata:
  name: my-webd
spec:
  selector:
...
#        image: server.corpX.un:5000/student/webd:ver1.N

#        imagePullPolicy: "Always"

#        image: httpd
#        lifecycle:
#          postStart:
#            exec:
#              command: ["/bin/sh", "-c", "echo Hello from apache2 on $(hostname) > /usr/local/apache2/htdocs/index.html"]

#        env:
#        - name: APWEBD_HOSTNAME
#          value: "apwebd.corpX.un"
#        - name: KEYCLOAK_HOSTNAME
#          value: "keycloak.corpX.un"
#        - name: REALM_NAME
#          value: "corpX"

#        livenessProbe:
#          httpGet:
#            port: 80

#        volumeMounts:
...
#          path: /var/www
</code><code>
$ kubectl apply -f my-webd-deployment.yaml -n my-ns

$ kubectl get all -n my-ns -o wide
...
</code><code>
...
metadata:
  name: my-webd
spec:
#  type: NodePort
#  type: LoadBalancer
#  loadBalancerIP: 192.168.X.64
  selector:
    app: my-webd
...
#    nodePort: 30111
</code><code>
$ kubectl apply -f my-webd-service.yaml -n my-ns

$ kubectl logs -l app=my-webd -n my-ns
(the -f, --tail=2000, --previous options are available)
</code>
=== NodePort ===
<code>
$ kubectl get svc my-webd -n my-ns
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
...
$ curl http://node1,2,3:NNNNN
works unreliably on a hand-rolled kubeadm cluster
</code>
== NodePort Minikube ==
<code>
$ minikube service list
...
$ curl $(minikube service my-webd -n my-ns --url)
</code>
  

=== LoadBalancer ===

== MetalLB ==

  * [[https://www.adaltas.com/en/2022/09/08/kubernetes-metallb-nginx/|Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress]]

  * [[https://metallb.universe.tf/installation/|Installation]]
  * [[https://metallb.universe.tf/configuration/_advanced_ipaddresspool_configuration/|Advanced AddressPool configuration]]
  * [[https://metallb.universe.tf/configuration/_advanced_l2_configuration/|Advanced L2 configuration]]

<code>
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml

$ kubectl -n metallb-system get all

$ cat first-pool.yaml
</code><code>
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.13.64/28
  autoAssign: false
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
  interfaces:
  - eth0
</code><code>
$ kubectl apply -f first-pool.yaml

$ ### kubectl delete -f first-pool.yaml && rm first-pool.yaml

$ ### kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
</code>
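The first-pool range 192.168.13.64/28 hands MetalLB 16 addresses (.64 through .79); the arithmetic, as a quick sketch:

```shell
# A /28 prefix leaves 32-28 = 4 host bits, i.e. 2^4 = 16 addresses
prefix=28
count=$(( 1 << (32 - prefix) ))
echo "$count addresses"                        # → 16 addresses
echo "last: 192.168.13.$(( 64 + count - 1 ))"  # → last: 192.168.13.79
```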

=== ClusterIP ===
<code>
kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10
...10.102.135.146...

server# ssh -p 32222 nodeN

my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd.my-ns.svc.cluster.local
  OR
my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd-webd-chart.my-ns.svc.cluster.local
</code>
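The names resolved above follow the standard Service DNS pattern <service>.<namespace>.svc.<cluster-domain>, with cluster.local as the default cluster domain; a tiny sketch:

```shell
# Compose the in-cluster DNS name of a Service (default cluster domain assumed)
svc_fqdn() {
  echo "$1.$2.svc.cluster.local"
}

svc_fqdn my-webd my-ns
# → my-webd.my-ns.svc.cluster.local
```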
 +
== port-forward ==

  * [[#Инструмент командной строки kubectl]]

<code>
node1/kube1# kubectl port-forward -n my-ns --address 0.0.0.0 services/my-webd 1234:80

cmder> kubectl port-forward -n my-ns services/my-webd 1234:80
</code>

  * http://192.168.X.2N1:1234
  * http://localhost:1234

<code>
node1/kube1# kubectl -n my-ns delete pod/my-webd...
</code>

== kubectl proxy ==

  * [[#Инструмент командной строки kubectl]]

<code>
kube1:~# kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'

cmder> kubectl proxy
</code>

  * http://192.168.X.2N1:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/
  * http://localhost:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/
  
==== Deleting objects ====
<code>
$ kubectl get all -n my-ns

$ kubectl delete -n my-ns -f my-webd-deployment.yaml,my-webd-service.yaml
...
</code>

==== Ingress ====

  * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]]

=== Minikube ingress-nginx-controller ===

  * [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]]

<code>
server# cat /etc/bind/corpX.un
</code><code>
...
webd 192.168.49.2
</code><code>
gitlab-runner@server:~$ minikube addons enable ingress
</code>

=== Baremetal ingress-nginx-controller ===

  * [[https://github.com/kubernetes/ingress-nginx/tags]] — versions
  * [[https://stackoverflow.com/questions/61616203/nginx-ingress-controller-failed-calling-webhook|Nginx Ingress Controller - Failed Calling Webhook]]
  * [[https://stackoverflow.com/questions/51511547/empty-address-kubernetes-ingress|Empty ADDRESS kubernetes ingress]]
  * [[https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters|ingress-nginx/deploy/bare-metal-clusters]]

<code>
server# cat /etc/bind/corpX.un
</code><code>
...
webd            A       192.168.X.202
                A       192.168.X.203
gowebd          CNAME   webd
</code><code>
node1# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.3.1.baremetal.yaml

node1# cat ingress-nginx.controller-v1.3.1.baremetal.yaml
</code><code>
...
kind: Deployment
...
spec:
...
  replicas: 3    ### insert this (equal to the number of worker nodes)
  template:
...
      terminationGracePeriodSeconds: 300
      hostNetwork: true                    ### insert this
      volumes:
...
</code><code>
node1# kubectl apply -f ingress-nginx.controller-v1.3.1.baremetal.yaml

node1# kubectl get all -n ingress-nginx

node1# ### kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>

=== Managing the ingress-nginx-controller configuration ===
<code>
master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf

master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller
</code><code>
...
data:
  use-forwarded-headers: "true"
...
</code>

=== The final variant, with a DaemonSet ===
<code>
node1# diff ingress-nginx.controller-v1.8.2.baremetal.yaml.orig ingress-nginx.controller-v1.8.2.baremetal.yaml
</code><code>
323a324
>   use-forwarded-headers: "true"
391c392,393
< kind: Deployment
---
> #kind: Deployment
> kind: DaemonSet
409,412c411,414
<   strategy:
<     rollingUpdate:
<       maxUnavailable: 1
<     type: RollingUpdate
---
> #  strategy:
> #    rollingUpdate:
> #      maxUnavailable: 1
> #    type: RollingUpdate
501a504
>       hostNetwork: true
</code><code>
node1# kubectl -n ingress-nginx describe service/ingress-nginx-controller
...
Endpoints:                192.168.X.221:80,192.168.X.222:80,192.168.X.223:80
...
</code>
 + 
 +=== ingress example === 
 + 
 +<​code>​ 
 +node1# ​### kubectl create ingress my-ingress ​--class=nginx --rule="​webd.corpX.un/​*=my-webd:​80"​ -n my-ns
  
-gitlab-runner@server:​~/​webd$ ​cat my-webd-ingress.yaml+node1# ​cat my-ingress.yaml
 </​code><​code>​ </​code><​code>​
 apiVersion: networking.k8s.io/​v1 apiVersion: networking.k8s.io/​v1
 kind: Ingress kind: Ingress
 metadata: metadata:
-  name: my-webd +  name: my-ingress
-  namespace: my-ns+
 spec: spec:
   ingressClassName:​ nginx   ingressClassName:​ nginx
 +#  tls:
 +#  - hosts:
 +#    - gowebd.corpX.un
 +#    secretName: gowebd-tls
   rules:   rules:
   - host: webd.corpX.un   - host: webd.corpX.un
Line 466: Line 875:
         path: /         path: /
         pathType: Prefix         pathType: Prefix
 -status:
 +  - host: gowebd.corpX.un
 -  loadBalancer: {}
 +    http:
 +      paths: 
 +      - backend: 
 +          service: 
 +            name: my-gowebd 
 +            port: 
 +              number: 80 
 +        path: / 
 +        pathType: Prefix
 </​code><​code>​ </​code><​code>​
-kubectl apply -f my-webd-ingress.yaml+node1# ​kubectl apply -f my-ingress.yaml ​-n my-ns
  
  
-kubectl get ingress -n my-ns +node1# ​kubectl get ingress -n my-ns 
 -NAME      CLASS   HOSTS           ADDRESS   PORTS   AGE
 -my-webd   nginx   webd.corpX.un             80      11s
 +NAME         CLASS   HOSTS                           ADDRESS                       PORTS   AGE
 +my-ingress   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
  
 $ curl webd.corpX.un $ curl webd.corpX.un
 +$ curl gowebd.corpX.un
 +$ curl https://​gowebd.corpX.un #-kv
  
 -$ ### kubectl delete ingress my-webd -n my-ns
 +$ curl http://nodeN/ -H "Host: webd.corpX.un"
 +$ curl --connect-to "":"":​kubeN:​443 https://​gowebd.corpX.un #-vk 
 + 
 +$ kubectl logs -n ingress-nginx -l app.kubernetes.io/​name=ingress-nginx -f 
 + 
 +node1# ​### kubectl delete ingress my-ingress ​-n my-ns
 </​code>​ </​code>​
  
 +=== secrets tls ===
 +
 +  * [[https://​devopscube.com/​configure-ingress-tls-kubernetes/​|How To Configure Ingress TLS/SSL Certificates in Kubernetes]]
 +
 +<​code>​
 +$ kubectl create secret tls gowebd-tls --key gowebd.key --cert gowebd.crt -n my-ns
 +    ​
 +$ kubectl get secrets -n my-ns
 +
 +$ kubectl get secret/​gowebd-tls -o yaml -n my-ns
 +
 +$ ###kubectl delete secret/​gowebd-tls -n my-ns
 +</​code>​
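
For lab use, the `gowebd.key`/`gowebd.crt` pair referenced above can be produced as a self-signed certificate; a sketch assuming the hostname from the Ingress example (`-addext` needs OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway self-signed key/certificate pair for the TLS secret;
# browsers will warn about it, so this is only for testing.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout gowebd.key -out gowebd.crt \
  -subj "/CN=gowebd.corpX.un" \
  -addext "subjectAltName=DNS:gowebd.corpX.un"
```

Both files are then fed to `kubectl create secret tls` as shown above.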
 +
 +==== Volumes ====
 +
 +=== PersistentVolume и PersistentVolumeVolumeClaim ===
 +<​code>​
 +root@node1:​~#​ ssh node2 mkdir /disk2
 +
 +root@node1:​~#​ ### ssh kube3 chmod 777 /disk2/
 +
 +root@node1:​~#​ ssh node2 touch /​disk2/​disk2_node2
 +
 +root@node1:​~#​ kubectl label nodes node2 disk2=yes
 +
 +root@node1:​~#​ kubectl get nodes --show-labels
 +
 +root@node1:​~#​ ###kubectl label nodes node2 disk2-
 +
 +root@node1:​~#​ cat my-debian-deployment.yaml
 +</​code><​code>​
 +...
 +        args: ["​-c",​ "while true; do echo hello; sleep 3;​done"​]
 +
 +        volumeMounts:​
 +          - name: my-disk2-volume
 +            mountPath: /data
 +
 +#        volumeMounts:​
 +#          - name: data
 +#            mountPath: /data
 +
 +      volumes:
 +        - name: my-disk2-volume
 +          hostPath:
 +            path: /disk2/
 +      nodeSelector:​
 +        disk2: "​yes"​
 +
 +#      volumes:
 +#      - name: data
 +#        persistentVolumeClaim:​
 +#          claimName: my-ha-pvc-sz64m
 +
 +      restartPolicy:​ Always
 +</​code><​code>​
 +root@node1:​~#​ kubectl apply -f my-debian-deployment.yaml
 +
 +root@node1:​~#​ kubectl get all -o wide
 +</​code>​
 +
 +  * [[https://​qna.habr.com/​q/​629022|Несколько Claim на один Persistent Volumes?]]
 +  * [[https://​serveradmin.ru/​hranilishha-dannyh-persistent-volumes-v-kubernetes/​|Хранилища данных (Persistent Volumes) в Kubernetes]]
 +  * [[https://​stackoverflow.com/​questions/​59915899/​limit-persistent-volume-claim-content-folder-size-using-hostpath|Limit persistent volume claim content folder size using hostPath]]
 +  * [[https://​stackoverflow.com/​questions/​63490278/​kubernetes-persistent-volume-hostpath-vs-local-and-data-persistence|Kubernetes persistent volume: hostpath vs local and data persistence]]
 +  * [[https://​www.alibabacloud.com/​blog/​kubernetes-volume-basics-emptydir-and-persistentvolume_594834|Kubernetes Volume Basics: emptyDir and PersistentVolume]]
 +
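
For contrast with the hostPath-backed volumes below: the simplest volume type from the last link above, emptyDir, needs no PV/PVC at all; the directory is created when the pod starts and removed with the pod. A minimal sketch:

```yaml
# emptyDir: per-pod scratch space, allocated on pod start and
# deleted when the pod is removed -- no PersistentVolume involved
apiVersion: v1
kind: Pod
metadata:
  name: my-emptydir-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date > /scratch/started; sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}
```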
 +<​code>​
 +root@node1:​~#​ cat my-ha-pv.yaml
 +</​code><​code>​
 +apiVersion: v1
 +kind: PersistentVolume
 +metadata:
 +  name: my-pv-node2-sz-128m-num-001
 +  labels:
 +    type: local
 +spec:
 +## comment storageClassName for keycloak
 +  storageClassName:​ my-ha-sc
 +  capacity:
 +    storage: 128Mi
 +#    storage: 8Gi
 +#  volumeMode: Filesystem
 +  accessModes:​
 +    - ReadWriteMany
 +#    - ReadWriteOnce
 +  hostPath:
 +    path: /disk2
 +  persistentVolumeReclaimPolicy:​ Retain
 +  nodeAffinity:​
 +    required:
 +      nodeSelectorTerms:​
 +      - matchExpressions:​
 +        - key: kubernetes.io/​hostname
 +          operator: In
 +          values:
 +          - node2
 +#          - kube3
 +</​code><​code>​
 +root@node1:​~#​ kubectl apply -f my-ha-pv.yaml
 +
 +root@node1:​~#​ kubectl get persistentvolume
 +  or
 +root@node1:​~#​ kubectl get pv
 +...
 +root@node1:​~#​ ###kubectl delete pv my-pv-node2-sz-128m-num-001
 +
 +root@node1:​~#​ cat my-ha-pvc.yaml
 +</​code><​code>​
 +apiVersion: v1
 +kind: PersistentVolumeClaim
 +metadata:
 +  name: my-ha-pvc-sz64m
 +spec:
 +  storageClassName:​ my-ha-sc
 +#  storageClassName:​ local-path
 +  accessModes:​
 +    - ReadWriteMany
 +  resources:
 +    requests:
 +      storage: 64Mi
 +</​code><​code>​
 +root@node1:​~#​ kubectl apply -f my-ha-pvc.yaml
 +
 +root@node1:​~#​ kubectl get persistentvolumeclaims
 +  or
 +root@node1:​~#​ kubectl get pvc
 +...
 +
 +root@node1:​~#​ ### kubectl delete pvc my-ha-pvc-sz64m
 +</​code>​
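
The claim is consumed from a pod through a `persistentVolumeClaim` volume (the commented-out block in my-debian-deployment.yaml above does the same inside a Deployment); a standalone sketch for checking that the claim binds and mounts:

```yaml
# Test pod: writes into the volume bound to my-ha-pvc-sz64m; with the
# hostPath PV above, the file should appear in /disk2 on node2.
apiVersion: v1
kind: Pod
metadata:
  name: my-pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello; sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-ha-pvc-sz64m
```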
 +
 +=== Dynamic Volume Provisioning ===
 +
 +  * [[https://​kubernetes.io/​docs/​concepts/​storage/​dynamic-provisioning/​|Dynamic Volume Provisioning]]
 +
 +== rancher local-path-provisioner ==
 +
 +  * [[https://​github.com/​rancher/​local-path-provisioner|rancher local-path-provisioner]]
 +  * [[https://artifacthub.io/packages/helm/ebrianne/local-path-provisioner|This chart bootstraps a local-path-provisioner deployment on a Kubernetes cluster using the Helm package manager]]
 +
 +<​code>​
 +$ kubectl apply -f https://​raw.githubusercontent.com/​rancher/​local-path-provisioner/​v0.0.26/​deploy/​local-path-storage.yaml
 +
 +$ kubectl get sc
 +
 +$ curl https://​raw.githubusercontent.com/​rancher/​local-path-provisioner/​v0.0.26/​deploy/​local-path-storage.yaml | less
 +/​DEFAULT_PATH_FOR_NON_LISTED_NODES
 +
 +ssh root@kube1 'mkdir /​opt/​local-path-provisioner'​
 +ssh root@kube2 'mkdir /​opt/​local-path-provisioner'​
 +ssh root@kube3 'mkdir /​opt/​local-path-provisioner'​
 +ssh root@kube1 'chmod 777 /​opt/​local-path-provisioner'​
 +ssh root@kube2 'chmod 777 /​opt/​local-path-provisioner'​
 +ssh root@kube3 'chmod 777 /​opt/​local-path-provisioner'​
 +</​code>​
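
A claim against the provisioner only has to name its storage class; local-path then creates the backing directory under /opt/local-path-provisioner and the PV on demand, once a pod actually uses the claim (the provisioner's binding mode is WaitForFirstConsumer). A sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-path-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce   # local-path does not support shared (RWX) access
  resources:
    requests:
      storage: 64Mi
```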
 +
 +  * The Keycloak service in [[Сервис Keycloak#​Kubernetes]]
 +
 +<​code>​
 +$ kubectl get pvc -n my-keycloak-ns
 +
 +$ kubectl get pv
 +
 +$ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0
 +</​code>​
 +== longhorn ==
 +
 +<​code>​
 +kubeN:~# apt install open-iscsi
 +</​code>​
 +  * [[https://​github.com/​longhorn/​longhorn]]
 +<​code>​
 +Setting->​General
 +
 +Pod Deletion Policy When Node is Down: delete-statefulset-pod
 +</​code>​
 +
 +Connecting via kubectl proxy
 +
 +  * [[https://​stackoverflow.com/​questions/​45172008/​how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]]
 +
 +<​code>​
 +cmder> kubectl proxy
 +
 +http://​localhost:​8001/​api/​v1/​namespaces/​longhorn-system/​services/​longhorn-frontend:​80/​proxy/​
 +</​code>​
 +
 +Connecting via ingress
 +
 +!!! Add an example with authentication !!!
 +<​code>​
 +student@server:​~/​longhorn$ cat ingress.yaml
 +apiVersion: networking.k8s.io/​v1
 +kind: Ingress
 +metadata:
 +  name: longhorn-ingress
 +  namespace: longhorn-system
 +spec:
 +  ingressClassName:​ nginx
 +  rules:
 +  - host: lh.corp13.un
 +    http:
 +      paths:
 +      - backend:
 +          service:
 +            name: longhorn-frontend
 +            port:
 +              number: 80
 +        path: /
 +        pathType: Prefix
 +</​code>​
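
One way to close the authentication gap noted above is HTTP basic auth handled by ingress-nginx annotations. A sketch: the `basic-auth` secret name is an assumption and must be created first, e.g. `htpasswd -c auth admin` followed by `kubectl -n longhorn-system create secret generic basic-auth --from-file=auth`:

```yaml
# The same Longhorn Ingress, protected with basic auth enforced by
# the ingress-nginx controller before traffic reaches the UI.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: lh.corp13.un
    http:
      paths:
      - backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
        path: /
        pathType: Prefix
```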
 +
 +Using snapshots
 +
 +  * [[https://​github.com/​longhorn/​longhorn/​issues/​63?​ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/​longhorn ?]]
 +
 +<​code>​
 +kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak
 +
 +kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak-postgresql
 +
 +kube1:~# kubectl -n my-keycloak-ns scale --replicas 1 statefulset my-keycloak-postgresql
 +
 +kube1:~# kubectl -n my-keycloak-ns scale --replicas 1 statefulset my-keycloak
 +</​code>​
 +
 +Using backups
 +<​code>​
 +Setting -> General -> Backup Target -> nfs://​server.corp13.un:/​disk2
 +</​code>​
 +  * take a backup, delete the chart, delete the pv/pvc, restore the Volume from the backup, create a PV/PVC for it as before, install the chart; it picks up the existing pv/pvc
 +
 +==== ConfigMap ====
 +
 +  * [[https://​www.aquasec.com/​cloud-native-academy/​kubernetes-101/​kubernetes-configmap/​|Kubernetes ConfigMap: Creating, Viewing, Consuming & Managing]]
 +  * [[https://​blog.lapw.at/​how-to-enable-ssh-into-a-kubernetes-pod/​|How to enable SSH connections into a Kubernetes pod]]
 +
 +<​code>​
 +root@node1:​~#​ cat sshd_config
 +</​code><​code>​
 +PermitRootLogin yes
 +PasswordAuthentication no
 +ChallengeResponseAuthentication no
 +UsePAM no
 +</​code><​code>​
 +root@node1:​~#​ kubectl create configmap ssh-config --from-file=sshd_config --dry-run=client -o yaml
 +...
 +
 +server:~# cat .ssh/​id_rsa.pub
 +...
 +
 +root@node1:​~#​ cat my-openssh-server-deployment.yaml
 +</​code><​code>​
 +apiVersion: v1
 +kind: ConfigMap
 +metadata:
 +  name: ssh-config
 +data:
 +  sshd_config:​ |
 +    PermitRootLogin yes
 +    PasswordAuthentication no
 +    ChallengeResponseAuthentication no
 +    UsePAM no
 +  authorized_keys:​ |
 +    ssh-rsa AAAAB.....C0zOcZ68= root@server.corpX.un
 +---
 +apiVersion: apps/v1
 +kind: Deployment
 +metadata:
 +  name: my-openssh-server
 +spec:
 +  selector:
 +    matchLabels:​
 +      app: my-openssh-server
 +  template:
 +    metadata:
 +      labels:
 +        app: my-openssh-server
 +    spec:
 +      containers:
 +      - name: my-openssh-server
 +        image: linuxserver/​openssh-server
 +        command: ["/​bin/​sh"​]
 +        args: ["​-c",​ "/​usr/​bin/​ssh-keygen -A; usermod -p '​*'​ root; /​usr/​sbin/​sshd.pam -D"]
 +        ports:
 +        - containerPort:​ 22
 +        volumeMounts:​
 +        - name: ssh-volume
 +          subPath: sshd_config
 +          mountPath: /​etc/​ssh/​sshd_config
 +        - name: ssh-volume
 +          subPath: authorized_keys
 +          mountPath: /​root/​.ssh/​authorized_keys
 +      volumes:
 +      - name: ssh-volume
 +        configMap:
 +          name: ssh-config
 +---
 +apiVersion: v1
 +kind: Service
 +metadata:
 +  name: my-openssh-server
 +spec:
 +  type: NodePort
 +  ports:
 +  - port: 22
 +    nodePort: 32222
 +  selector:
 +    app: my-openssh-server
 +</​code><​code>​
 +root@node1:​~#​ kubectl apply -f my-openssh-server-deployment.yaml
 +
 +root@node1:​~#​ iptables-save | grep 32222
 +
 +root@node1:​~#​ ###kubectl exec -ti my-openssh-server-NNNNNNNN-NNNNN -- bash
 +
 +server:~# ssh -p 32222 nodeN
 +Welcome to OpenSSH Server
 +my-openssh-server-NNNNNNNN-NNNNN:​~#​ nslookup my-openssh-server.default.svc.cluster.local
 +</​code>​
 ==== Example with a multi container pod ==== ==== Example with a multi container pod ====
  
Line 508: Line 1255:
       containers:       containers:
       - name: my-webd       - name: my-webd
-        image: server.corp13.un:​5000/​student/​webd:​latest+        image: server.corpX.un:​5000/​student/​webd:​latest
         volumeMounts:​         volumeMounts:​
         - name: html         - name: html
Line 557: Line 1304:
  
   * [[https://​helm.sh/​docs/​intro/​install/​|Installing Helm]]   * [[https://​helm.sh/​docs/​intro/​install/​|Installing Helm]]
 +  * [[https://​github.com/​helm/​helm/​releases|helm releases]]
  
 <​code>​ <​code>​
-server# wget https://​get.helm.sh/​helm-v3.9.0-linux-amd64.tar.gz+# wget https://​get.helm.sh/​helm-v3.9.0-linux-amd64.tar.gz
  
 # tar -zxvf helm-v3.9.0-linux-amd64.tar.gz # tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
Line 567: Line 1315:
  
 ==== Working with ready-made Charts ==== ==== Working with ready-made Charts ====
 +
 +=== ingress-nginx ===
  
   * [[https://​kubernetes.github.io/​ingress-nginx/​deploy/​|NGINX Ingress Controller Installation Guide]]   * [[https://​kubernetes.github.io/​ingress-nginx/​deploy/​|NGINX Ingress Controller Installation Guide]]
 +  * [[https://​stackoverflow.com/​questions/​56915354/​how-to-install-nginx-ingress-with-hostnetwork-on-bare-metal|stackoverflow How to install nginx-ingress with hostNetwork on bare-metal?​]]
 +  * [[https://​devpress.csdn.net/​cloud/​62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?​]]
 +  * [[https://​github.com/​kubernetes/​ingress-nginx/​blob/​main/​charts/​ingress-nginx/​values.yaml]]
  
 <​code>​ <​code>​
-curl https://raw.githubusercontent.com/​kubernetes/​ingress-nginx/controller-v1.3.1/​deploy/​static/​provider/​cloud/​deploy.yaml+helm upgrade ingress-nginx --install ingress-nginx \ 
 +--set controller.hostNetwork=true,​controller.publishService.enabled=false,​controller.kind=DaemonSet,​controller.config.use-forwarded-headers=true \ 
 +--repo ​https://kubernetes.github.io/​ingress-nginx --namespace ingress-nginx --create-namespace
  
-kubectl apply -f https://​raw.githubusercontent.com/​kubernetes/​ingress-nginx/controller-v1.3.1/​deploy/​static/​provider/​cloud/​deploy.yaml+helm list --namespace ​ingress-nginx 
 +$ helm list -A
  
 -$ kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
 +$ kubectl get all -n ingress-nginx -o wide
  
-$ helm upgrade --install ingress-nginx ingress-nginx --repo https://​kubernetes.github.io/​ingress-nginx --namespace ingress-nginx ​--create-namespace+$ helm delete ​ingress-nginx --namespace ingress-nginx
  
-$ helm list --namespace ingress-nginx 
  
-$ ### helm delete ​ingress-nginx --namespace ingress-nginx+mkdir ingress-nginx;​ cd ingress-nginx 
 + 
 +$ helm template ingress-nginx --repo https://​kubernetes.github.io/​ingress-nginx --namespace ingress-nginx | tee t1.yaml 
 + 
 +$ helm show values ingress-nginx --repo https://​kubernetes.github.io/​ingress-nginx | tee values.yaml.orig 
 + 
 +$ cat values.yaml 
 +</​code><​code>​ 
 +controller:
 +  hostNetwork: true
 +  publishService:
 +    enabled: false
 +  kind: DaemonSet
 +  config:
 +    use-forwarded-headers: "true"
 +    allow-snippet-annotations: "true"
 +</​code><​code>​ 
 +helm template ​ingress-nginx -f values.yaml --repo https://​kubernetes.github.io/​ingress-nginx -n ingress-nginx | tee t2.yaml 
 + 
 +$ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://​kubernetes.github.io/​ingress-nginx -n ingress-nginx --create-namespace 
 + 
 +$ kubectl exec -n ingress-nginx ​pods/​ingress-nginx-controller-<​TAB>​ -- cat /​etc/​nginx/​nginx.conf | tee nginx.conf | grep use_forwarded_headers 
 + 
 +$ kubectl -n ingress-nginx describe service/​ingress-nginx-controller 
 +... 
 +Endpoints: ​               192.168.X.221:​80,​192.168.X.222:​80,​192.168.X.223:​80 
 +... 
 + 
 +# kubectl get clusterrole -A | grep -i ingress 
 +# kubectl get clusterrolebindings -A | grep -i ingress 
 +# kubectl get validatingwebhookconfigurations -A | grep -i ingress
 </​code>​ </​code>​
 ==== Deploying your own application ==== ==== Deploying your own application ====
Line 587: Line 1372:
   * [[https://​opensource.com/​article/​20/​5/​helm-charts|How to make a Helm chart in 10 minutes]]   * [[https://​opensource.com/​article/​20/​5/​helm-charts|How to make a Helm chart in 10 minutes]]
   * [[https://​stackoverflow.com/​questions/​49812830/​helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]]   * [[https://​stackoverflow.com/​questions/​49812830/​helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]]
 +  * [[https://​stackoverflow.com/​questions/​69817305/​how-set-field-app-version-in-helm3-chart|how set field app-version in helm3 chart?]]
  
 <​code>​ <​code>​
-$ helm create webd-chart+gitlab-runner@server:​~/​gowebd-k8s$ helm create webd-chart 
 + 
 +$ less webd-chart/​templates/​deployment.yaml
  
 $ cat webd-chart/​Chart.yaml $ cat webd-chart/​Chart.yaml
Line 599: Line 1387:
 ... ...
 appVersion: "​latest"​ appVersion: "​latest"​
 +#​appVersion:​ ver1.7 ​  #for vanilla argocd
 </​code><​code>​ </​code><​code>​
 $ cat webd-chart/​values.yaml $ cat webd-chart/​values.yaml
 </​code><​code>​ </​code><​code>​
 ... ...
 +replicaCount:​ 2
 +
 image: image:
   repository: server.corpX.un:​5000/​student/​webd   repository: server.corpX.un:​5000/​student/​webd
Line 611: Line 1402:
 ... ...
 service: service:
-  ​type: NodePort+#  ​type: NodePort
 ... ...
 ingress: ingress:
Line 618: Line 1409:
 ... ...
   hosts:   hosts:
-    - host: webd.corp13.un+    - host: webd.corpX.un
 ... ...
 +#  tls: []
 +#  tls:
 +#    - secretName: gowebd-tls
 +#      hosts:
 +#        - gowebd.corpX.un
 +...
 +#​APWEBD_HOSTNAME:​ "​apwebd.corp13.un"​
 +#​KEYCLOAK_HOSTNAME:​ "​keycloak.corp13.un"​
 +#​REALM_NAME:​ "​corp13"​
 </​code><​code>​ </​code><​code>​
 $ less webd-chart/​templates/​deployment.yaml $ less webd-chart/​templates/​deployment.yaml
Line 625: Line 1425:
 ... ...
           image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"           image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
 +#          env:
 +#          - name: APWEBD_HOSTNAME
 +#            value: "{{ .Values.APWEBD_HOSTNAME }}"
 +#          - name: KEYCLOAK_HOSTNAME
 +#            value: "{{ .Values.KEYCLOAK_HOSTNAME }}"
 +#          - name: REALM_NAME
 +#            value: "{{ .Values.REALM_NAME }}"
 +...
 +#          livenessProbe:​
 +#            httpGet:
 +#              path: /
 +#              port: http
 +#          readinessProbe:​
 +#            httpGet:
 +#              path: /
 +#              port: http
 ... ...
 </​code><​code>​ </​code><​code>​
 +$ helm template my-webd webd-chart/ | less
 +
 $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait
 +
 +$ kubectl describe events -n my-ns | less
  
 $ export HELM_NAMESPACE=my-ns $ export HELM_NAMESPACE=my-ns
Line 633: Line 1453:
 $ helm list $ helm list
  
-$ helm upgrade my-webd webd-chart/ --set=image.tag=ver1.10+### helm upgrade my-webd webd-chart/ --set=image.tag=ver1.10
  
 $ helm history my-webd $ helm history my-webd
Line 648: Line 1468:
   * [[https://​medium.com/​containerum/​how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]]   * [[https://​medium.com/​containerum/​how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]]
   * [[https://​docs.gitlab.com/​ee/​user/​profile/​personal_access_tokens.html|Gitlab Personal access tokens]]   * [[https://​docs.gitlab.com/​ee/​user/​profile/​personal_access_tokens.html|Gitlab Personal access tokens]]
 +  * [[Инструмент GitLab#​Подключение через API]] - Role: Maintainer, api, read_registry, write_registry
 <​code>​ <​code>​
-$ helm repo add --username student --password ​NNNNNN-NNNNNNNNNNNNN ​webd http://192.168.13.1/​api/​v4/​projects/​6/​packages/​helm/​stable+gitlab-runner@server:​~/​gowebd-k8s$ helm repo add --username student --password ​NNNNN-NNNNNNNNNNNNNNNNNNN ​webd http://server.corpX.un/​api/​v4/​projects/​N/​packages/​helm/​stable 
 +"​webd"​ has been added to your repositories
  
-$ helm repo list+gitlab-runner@server:​~/​gowebd-k8s### helm repo remove webd
  
-$ helm package webd-chart +gitlab-runner@server:​~/​gowebd-k8shelm repo list
-ls *tgz+
  
-$ helm plugin install https://​github.com/​chartmuseum/​helm-push +gitlab-runner@server:~/gowebd-k8s$ helm package ​webd-chart
-$ helm cm-push ​webd-chart-0.1.0.tgz webd+
  
-... С другого кластера подключаем (аналогично) наш репозиторий и ...+gitlab-runner@server:​~/​gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz
  
-$ helm search repo webd+gitlab-runner@server:​~/​gowebd-k8s$ helm plugin install https://​github.com/​chartmuseum/​helm-push
  
-$ helm repo update ​webd+gitlab-runner@server:​~/​gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz ​webd
  
-$ helm install my-webd webd/​webd-chart+gitlab-runner@server:​~/​gowebd-k8srm webd-chart-0.1.1.tgz 
 +</​code><​code>​ 
 +kube1:~# helm repo add webd http://​server.corpX.un/​api/​v4/​projects/​N/​packages/​helm/​stable 
 + 
 +kube1:~# helm repo update 
 + 
 +kube1:~# helm search repo webd 
 + 
 +kube1:~# helm repo update webd 
 + 
 +kube1:​~# ​helm install my-webd webd/​webd-chart 
 + 
 +kube1:~# ###helm uninstall my-webd 
 +</​code><​code>​ 
 +kube1:~# mkdir gowebd; cd gowebd 
 + 
 +kube1:​~/​gowebd#​ ###helm pull webd-chart --repo https://​server.corp13.un/​api/​v4/​projects/​1/​packages/​helm/​stable 
 + 
 +kube1:​~/​gowebd#​ helm show values webd-chart --repo https://​server.corp13.un/​api/​v4/​projects/​1/​packages/​helm/​stable | tee values.yaml.orig 
 + 
 +kube1:​~/​gowebd#​ cat values.yaml 
 +</​code><​code>​ 
 +replicaCount:​ 3 
 +image: 
 +  tag: "​ver1.1"​ 
 +#​REALM_NAME:​ "​corp"​ 
 +</​code><​code>​ 
 +kube1:​~/​gowebd#​ helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://​server.corp13.un/​api/​v4/​projects/​1/​packages/​helm/​stable 
 + 
 +$ curl http://​kubeN -H "Host: gowebd.corpX.un"​ 
 + 
 +kube1:​~/​gowebd#​ ###helm uninstall my-webd -n my-ns
 </​code>​ </​code>​
  
 ==== Working with public repositories ==== ==== Working with public repositories ====
 <​code>​ <​code>​
 +helm repo add gitlab https://​charts.gitlab.io
 +
 +helm search repo -l gitlab/​gitlab-runner
 +
 +helm show values gitlab/​gitlab-runner | tee values.yaml
 +
 +gitlab-runner@server:​~$ diff values.yaml values.yaml.orig
 +</​code><​code>​
 +...
 +gitlabUrl: http://​server.corpX.un/​
 +...
 +runnerRegistrationToken:​ "​NNNNNNNNNNNNNNNNNNNNNNNN"​
 +...
 +148,149c142
 +<   ​create:​ true
 +---
 +>   ​create:​ false
 +325d317
 +<         ​privileged = true
 +432c424
 +<   ​allowPrivilegeEscalation:​ true
 +---
 +>   ​allowPrivilegeEscalation:​ false
 +435c427
 +<   ​privileged:​ true
 +---
 +>   ​privileged:​ false
 +</​code><​code>​
 +gitlab-runner@server:​~$ helm upgrade -i gitlab-runner gitlab/​gitlab-runner -f values.yaml -n gitlab-runner --create-namespace
 +
 +gitlab-runner@server:​~$ kubectl get all -n gitlab-runner
 +</​code><​code>​
 $ helm search hub -o json wordpress | jq '​.'​ | less $ helm search hub -o json wordpress | jq '​.'​ | less
  
Line 677: Line 1559:
 $ helm show values bitnami/​wordpress $ helm show values bitnami/​wordpress
 </​code>​ </​code>​
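
A test install can start from a small values file; the keys below are assumptions taken from the chart's published defaults, so verify them against `helm show values bitnami/wordpress` for the chart version in use:

```yaml
# values.yaml sketch for bitnami/wordpress (lab use: no persistence,
# NodePort instead of LoadBalancer, fixed admin credentials)
wordpressUsername: admin
wordpressPassword: StrongPass123
service:
  type: NodePort
persistence:
  enabled: false
```

Then: `helm install my-wp bitnami/wordpress -f values.yaml -n my-wp-ns --create-namespace`.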
 +
 +===== Kubernetes Dashboard =====
 +
 +  * https://​kubernetes.io/​docs/​tasks/​access-application-cluster/​web-ui-dashboard/​
 +  * https://​github.com/​kubernetes/​dashboard/​blob/​master/​docs/​user/​access-control/​creating-sample-user.md
 +
 +<​code>​
 +$ kubectl apply -f https://​raw.githubusercontent.com/​kubernetes/​dashboard/​v2.7.0/​aio/​deploy/​recommended.yaml
 +
 +$ cat dashboard-user-role.yaml
 +</​code><​code>​
 +---
 +apiVersion: v1
 +kind: ServiceAccount
 +metadata:
 +  name: admin-user
 +  namespace: kubernetes-dashboard
 +---
 +apiVersion: rbac.authorization.k8s.io/​v1
 +kind: ClusterRoleBinding
 +metadata:
 +  name: admin-user
 +roleRef:
 +  apiGroup: rbac.authorization.k8s.io
 +  kind: ClusterRole
 +  name: cluster-admin
 +subjects:
 +- kind: ServiceAccount
 +  name: admin-user
 +  namespace: kubernetes-dashboard
 +---
 +apiVersion: v1
 +kind: Secret
 +metadata:
 +  name: admin-user
 +  namespace: kubernetes-dashboard
 +  annotations:​
 +    kubernetes.io/​service-account.name:​ "​admin-user"​
 +type: kubernetes.io/​service-account-token
 +</​code><​code>​
 +$ kubectl apply -f dashboard-user-role.yaml
 +
 +$ kubectl -n kubernetes-dashboard create token admin-user
 +
 +$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={"​.data.token"​} | base64 -d ; echo
 +
 +cmder$ kubectl proxy
 +</​code>​
 +
 +  * http://​localhost:​8001/​api/​v1/​namespaces/​kubernetes-dashboard/​services/​https:​kubernetes-dashboard:/​proxy/​
  
 ===== Additional materials ===== ===== Additional materials =====
Line 691: Line 1623:
 ... ...
  
-student@node2:​~$ minikube start --driver=none --insecure-registry "​server.corp13.un:​5000"​+student@node2:​~$ minikube start --driver=none --insecure-registry "​server.corpX.un:​5000"​
 </​code>​ </​code>​
  
Line 732: Line 1664:
  
 <​code>​ <​code>​
-root@gate.corp13.un:~# curl -L https://​github.com/​kubernetes/​kompose/​releases/​download/​v1.26.0/​kompose-linux-amd64 -o kompose +root@gate:​~#​ curl -L https://​github.com/​kubernetes/​kompose/​releases/​download/​v1.26.0/​kompose-linux-amd64 -o kompose 
-root@gate.corp13.un:~# chmod +x kompose +root@gate:​~#​ chmod +x kompose 
-root@gate.corp13.un:~# sudo mv ./kompose /​usr/​local/​bin/​kompose+root@gate:​~#​ sudo mv ./kompose /​usr/​local/​bin/​kompose
 </​code>​ </​code>​
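
With the binary installed, conversion starts from an ordinary Compose file; a minimal sketch (the service name and image are illustrative):

```yaml
# docker-compose.yml: input for "kompose convert", which generates a
# Kubernetes Deployment and Service manifest per Compose service
version: "3"
services:
  webd:
    image: server.corpX.un:5000/student/webd:latest
    ports:
      - "80:80"
```

`kompose convert -f docker-compose.yml` typically writes webd-deployment.yaml and webd-service.yaml, which can then be applied with `kubectl apply -f`.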
  
система_kubernetes.txt · Last modified: 2024/04/27 10:53 by val