  * [[https://habr.com/ru/company/flant/blog/513908/|Полноценный Kubernetes с нуля на Raspberry Pi]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|Различия между Docker, containerd, CRI-O и runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]
  * [[https://habr.com/ru/company/vk/blog/542730/|11 факапов PRO-уровня при внедрении Kubernetes и как их избежать]]
  * [[https://www.youtube.com/watch?v=XZQ7-7vej6w|Наш опыт с Kubernetes в небольших проектах / Дмитрий Столяров (Флант)]]
  * [[https://habr.com/ru/companies/aenix/articles/541118/|Ломаем и чиним Kubernetes]]

===== kubectl command-line tool =====

  * [[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands]]
  * [[https://kubernetes.io/ru/docs/reference/kubectl/cheatsheet/|Шпаргалка по kubectl]]

==== Installation ====

=== Setting up shell completion ===
<code>
kube1:~# less /etc/bash_completion.d/kubectl.sh

or

$ cat ~/.profile
</code><code>
#...
source <(kubectl completion bash)

alias k=kubectl
complete -F __start_kubectl k
#...
</code>

===== Installing minikube =====

  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]

<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    41m8.320s
...
gitlab-runner@server:~$ minikube ip
</code>

==== minikube kubectl ====
<code>
gitlab-runner@server:~$ minikube kubectl -- get pods -A

gitlab-runner@server:~$ cat ~/.profile
</code><code>
#...
# does not work in gitlab-ci
alias kubectl='minikube kubectl --'
#...
</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>

or

  * [[#kubectl command-line tool]]

==== minikube addons list ====
<code>
gitlab-runner@server:~$ minikube addons list
gitlab-runner@server:~$ minikube addons configure registry-creds
...
Do you want to enable Docker Registry? [y/n]: y
...
gitlab-runner@server:~$ minikube addons enable registry-creds
</code>

==== minikube start stop delete ====
<code>
gitlab-runner@server:~$ ###minikube stop

gitlab-runner@server:~$ ### minikube delete
gitlab-runner@server:~$ ### rm -rv .minikube/

gitlab-runner@server:~$ ###minikube start
</code>

==== Deploying with kubeadm ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/|Installing kubeadm]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/|kubernetes.io Creating a cluster with kubeadm]]
  * [[https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/|How to Install Kubernetes Cluster on Ubuntu 22.04]]
  * [[https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/|How to Install Kubernetes Cluster on Debian 12/11]]
  * [[https://www.baeldung.com/ops/kubernetes-cluster-components|Kubernetes Cluster Components]]

=== Preparing the nodes ===

=== Installing the software ===

=== !!! Check with your instructor !!! ===

== Installing and configuring the CRI ==
<code>
node1_2_3# apt-get install -y docker.io

Check: if the output of
node1# containerd config dump | grep SystemdCgroup
is not
SystemdCgroup = true
then run the following four commands:

bash -c 'mkdir -p /etc/containerd/
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
'
bash -c 'containerd config default > /etc/containerd/config.toml
ssh node2 "containerd config default > /etc/containerd/config.toml"
ssh node3 "containerd config default > /etc/containerd/config.toml"
'
bash -c 'sed -i "s/SystemdCgroup \= false/SystemdCgroup \= true/g" /etc/containerd/config.toml
ssh node2 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
ssh node3 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
'
bash -c 'service containerd restart
ssh node2 service containerd restart
ssh node3 service containerd restart
'
</code>
== Adding the repository and installing the packages ==
<code>
bash -c 'mkdir -p /etc/apt/keyrings
ssh node2 mkdir -p /etc/apt/keyrings
ssh node3 mkdir -p /etc/apt/keyrings
'
bash -c 'curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
ssh node2 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
ssh node3 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
'
bash -c 'echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
ssh node2 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
ssh node3 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
'
bash -c 'apt-get update && apt-get install -y kubelet kubeadm kubectl
ssh node2 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
ssh node3 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
'
Execution time: about 2 minutes
</code>
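
The Installing kubeadm guide linked above also recommends holding these packages so a routine ''apt upgrade'' does not move the cluster to an untested version; a minimal sketch in the same fan-out style:
<code>
bash -c 'apt-mark hold kubelet kubeadm kubectl
ssh node2 apt-mark hold kubelet kubeadm kubectl
ssh node3 apt-mark hold kubelet kubeadm kubectl
'
</code>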
=== Initializing the master ===

  * [[https://stackoverflow.com/questions/70416935/create-same-master-and-working-node-in-kubenetes|Create same master and working node in kubenetes]]
<code>
root@node1:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.X.201
Execution time: about 3 minutes
root@node1:~# mkdir -p $HOME/.kube
root@node1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</code>
=== Configuring the network ===
<code>
root@nodeN:~# lsmod | grep br_netfilter
</code>
  * [[Управление ядром и модулями в Linux#Модули ядра|Kernel modules in Linux]]
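If ''lsmod'' prints nothing, the ''br_netfilter'' module can be loaded by hand and made persistent (a minimal sketch; the ''k8s.conf'' file name is an arbitrary choice):
<code>
root@nodeN:~# modprobe br_netfilter
root@nodeN:~# echo br_netfilter > /etc/modules-load.d/k8s.conf
</code>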
<code>
root@node1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code>
=== Health check ===
<code>
root@node1:~# kubectl get pod --all-namespaces -o wide
root@node1:~# kubectl get --raw='/readyz?verbose'
</code>

<code>
root@node2_3:~# curl -k https://node1:6443/livez?verbose

root@node2_3:~# kubeadm join 192.168.X.201:6443 --token NNNNNNNNNNNNNNNNNNNN \
--discovery-token-ca-cert-hash sha256:NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

root@node2_3:~# curl -sSL http://127.0.0.1:10248/healthz

root@node1:~# kubeadm token list

root@node1:~# kubeadm token create --print-join-command
</code>
=== Checking cluster state ===
<code>
root@node1:~# kubectl get nodes -o wide

root@node1:~# kubectl describe node node2
</code>

<code>
$ kubectl cordon kube3
$ time kubectl drain kube3 #--ignore-daemonsets --delete-emptydir-data --force
$ kubectl delete node kube3
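
# (sketch) a node cordoned only for maintenance can be returned to scheduling instead of being deleted:
$ ###kubectl uncordon kube3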
</code>

=== Configuring access to an insecure private registry ===
=== !!! Check with your instructor !!! ===

  * [[https://github.com/containerd/containerd/issues/4938|Unable to pull image from insecure registry, http: server gave HTTP response to HTTPS client #4938]]

<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# ###containerd config default > /etc/containerd/config.toml

root@node1:~# cat /etc/containerd/config.toml
</code><code>
...
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."server.corpX.un:5000"]
          endpoint = ["http://server.corpX.un:5000"]
...
</code><code>
node1# bash -c '
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
scp /etc/containerd/config.toml node2:/etc/containerd/config.toml
scp /etc/containerd/config.toml node3:/etc/containerd/config.toml
...

root@nodeN:~# containerd config dump | less
</code>

== containerd v3 ==

  * [[https://stackoverflow.com/questions/79305194/unable-to-pull-image-from-insecure-registry-http-server-gave-http-response-to/79308521#79308521]]

<code>
# mkdir -p /etc/containerd/certs.d/server.corpX.un:5000/

# cat /etc/containerd/certs.d/server.corpX.un:5000/hosts.toml
</code><code>
[host."http://server.corpX.un:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
</code><code>
# systemctl restart containerd.service
</code>

<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd

root@kubeN:~# crictl pull server.corpX.un:5000/student/pywebd2
</code>

==== Deploying with Kubespray ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubespray/|Installing Kubernetes with Kubespray]]

=== Preparing for deployment with Kubespray ===

  * [[Язык программирования Python#Виртуальная среда Python|Python virtual environment]]
<code>
(venv1) server# ssh-keygen
(venv1) server# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4;
(venv1) server# git clone https://github.com/kubernetes-sigs/kubespray
(venv1) server# cd kubespray/
(venv1) server:~/kubespray# git tag -l
(venv1) server:~/kubespray# git checkout tags/v2.26.0
or
(venv1) server:~/kubespray# git checkout tags/v2.27.0

(venv1) server:~/kubespray# time pip3 install -r requirements.txt

(venv1) server:~/kubespray# cp -rvfpT inventory/sample inventory/mycluster

(venv1) server:~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
    kube1:
    kube2:
    kube3:
    kube4:
  children:
    kube_control_plane:
      hosts:
        kube1:
        kube2:
    kube_node:
      hosts:
        kube1:
        kube2:
        kube3:
    etcd:
      hosts:
        kube1:
        kube2:
        kube3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
</code><code>
(venv1) server:~/kubespray# ansible all -m ping -i inventory/mycluster/hosts.yaml
</code>
  * [[Сервис Ansible#Использование модулей|Ansible modules]] for disabling swap (a one-off ad-hoc sketch follows below)
  * [[Сервис Ansible#Использование ролей|Ansible roles]] for configuring the network
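
A one-off ad-hoc equivalent of the swap step (an illustration only; the pages above describe the module/role based setup):
<code>
(venv1) server:~/kubespray# ansible all -m shell -a 'swapoff -a; sed -i "/ swap /d" /etc/fstab' -i inventory/mycluster/hosts.yaml
</code>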

=== Deploying the cluster with Kubespray ===
<code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    45m31.796s
</code>

=== Adding a node with Kubespray ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/nodes.md|Adding/replacing a node (github.com/kubernetes-sigs/kubespray)]]
  * [[https://nixhub.ru/posts/k8s-nodes-scale/|K8s - добавление нод через kubespray]]
  * [[https://blog.unetresgrossebite.com/?p=934|Redeploy Kubernetes Nodes with KubeSpray]]

<code>
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
...
    kube4:
...
    kube_node:
      hosts:
...
        kube4:
...
</code><code>
(venv1) server:~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    6m31.562s

~/kubespray# ###time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml
real    17m37.459s
</code>

<code>
$ kubectl api-resources

$ ###kubectl run -ti --rm my-debian --image=debian --overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}'
$ kubectl run my-debian --image=debian -- "sleep" "60"

$ kubectl get pods

kubeN# crictl ps | grep debi
...
$ kubectl delete pod my-debian
$ ###kubectl delete pod my-debian --grace-period=0 --force

$ kubectl create deployment my-debian --image=debian -- "sleep" "infinity"
$ kubectl get all
$ kubectl get deployments
$ kubectl get replicasets
</code>
  * [[#Setting up shell completion]]

<code>
$ kubectl exec -ti my-debian-NNNNNNNNN-NNNNN -- bash
Ctrl-D
</code>
  * [[Технология Docker#Анализ параметров запущенного контейнера изнутри|Inspecting a running container from the inside]]
<code>
$ kubectl get deployment my-debian -o yaml
</code>

<code>
$ kubectl delete deployment my-debian
</code>

==== Manifest ====

  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
...
        image: debian
        command: ["/bin/sh"]
        args: ["-c", "while :;do echo -n random-value:;od -A n -t d -N 1 /dev/urandom;sleep 5; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      restartPolicy: Always
</code><code>
$ kubectl apply -f my-debian-deployment.yaml #--dry-run=client #-o yaml

$ kubectl logs -l app=my-debian -f
...
$ kubectl delete -f my-debian-deployment.yaml
</code>
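
To confirm that the requests and limits actually landed on the pod, the resources stanza can be read back with jsonpath (a sketch, assuming the ''app=my-debian'' label from the manifest):
<code>
$ kubectl get pod -l app=my-debian -o jsonpath='{.items[0].spec.containers[0].resources}'
</code>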

==== namespace for your application ====
==== Deployment ====

  * [[https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes|What is an 'endpoint' in Kubernetes?]]
  * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]]
  * [[https://www.kryukov.biz/kubernetes/lokalnye-volumes/emptydir/|emptyDir]]
  * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]]
  * [[https://habr.com/ru/companies/oleg-bunin/articles/761662/|Дополнительные контейнеры в Kubernetes и где они обитают: от паттернов к автоматизации управления]]
  * [[https://stackoverflow.com/questions/39436845/multiple-command-in-poststart-hook-of-a-container|multiple command in postStart hook of a container]]
  * [[https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes|How to set multiple commands in one yaml file with Kubernetes?]]

<code>
...
metadata:
  name: my-webd
#  annotations:
#    kubernetes.io/change-cause: "update to ver1.2"
spec:
  selector:
...
#        image: server.corpX.un:5000/student/webd
#        image: server.corpX.un:5000/student/webd:ver1.N
#        image: httpd
#        args: ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker"]
#        imagePullPolicy: "Always"
#        lifecycle:
#          postStart:
#            exec:
#              command:
#              - /bin/sh
#              - -c
#              - |
#                #test -f /usr/local/apache2/htdocs/index.html && exit 0
#                mkdir -p /usr/local/apache2/htdocs/
#                cd /usr/local/apache2/htdocs/
#                echo "<h1>Hello from apache2 on $(hostname) at $(date)</h1>" > index.html
#                echo "<img src=img/logo.gif>" >> index.html
#        env:
#        - name: PYWEBD_DOC_ROOT
#          value: "/usr/local/apache2/htdocs/"
#        - name: PYWEBD_PORT
#          value: "4080"
#        - name: APWEBD_HOSTNAME
#          value: "apwebd.corpX.un"
...
#          httpGet:
#            port: 80
#            #scheme: HTTPS

#        volumeMounts:
#        - name: htdocs-volume
#          mountPath: /usr/local/apache2/htdocs

#        volumeMounts:
#        - name: nfs-volume
#          mountPath: /var/www

#      volumes:
#      - name: htdocs-volume
#        emptyDir: {}

#      volumes:
#      - name: nfs-volume
#        nfs:
#          server: server.corpX.un
#          path: /var/www

#      initContainers:
#      - name: load-htdocs-files
#        image: curlimages/curl
##        command: ['sh', '-c', 'mkdir /mnt/img; curl http://val.bmstu.ru/unix/Media/logo.gif > /mnt/img/logo.gif']
#        command: ["/bin/sh", "-c"]
#        args:
#        - |
#          test -d /mnt/img/ && exit 0
#          mkdir /mnt/img; cd /mnt/img
#          curl http://val.bmstu.ru/unix/Media/logo.gif > logo.gif
#          ls -lR /mnt/
#        volumeMounts:
#        - mountPath: /mnt
#          name: htdocs-volume
</code><code>
$ kubectl apply -f my-webd-deployment.yaml -n my-ns #--dry-run=client #-o yaml
$ kubectl get all -n my-ns -o wide
$ kubectl describe -n my-ns pod/my-webd-NNNNNNNNNN-NNNNN

$ kubectl -n my-ns logs pod/my-webd-NNNNNNNNNN-NNNNN #-c load-htdocs-files

$ kubectl logs -l app=my-webd -n my-ns
(the -f, --tail=2000 and --previous options are available)
$ kubectl scale deployment my-webd --replicas=3 -n my-ns
$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
</code>

=== Deployment revisions ===

  * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]]
  * [[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment|Updating a Deployment]]

<code>
$ ###kubectl rollout pause deployment my-webd-dep -n my-ns
$ ###kubectl set image deployment/my-webd-dep my-webd-con=server.corpX.un:5000/student/gowebd:ver1.2 -n my-ns
$ ###kubectl rollout resume deployment my-webd-dep -n my-ns

$ ###kubectl rollout status deployment/my-webd-dep -n my-ns

$ kubectl rollout history deployment/my-webd -n my-ns
</code><code>
REVISION  CHANGE-CAUSE
1         <none>
...
N         update to ver1.2
</code><code>
$ kubectl rollout history deployment/my-webd --revision=1 -n my-ns
</code><code>
...
  Image:      server.corpX.un:5000/student/webd:ver1.1
...
</code><code>
$ kubectl rollout undo deployment/my-webd --to-revision=1 -n my-ns

$ kubectl annotate deployment/my-webd kubernetes.io/change-cause="revert to ver1.1" -n my-ns

$ kubectl rollout history deployment/my-webd -n my-ns
</code><code>
REVISION  CHANGE-CAUSE
2         update to ver1.2
...
N+1       revert to ver1.1
</code>

  * [[https://kubernetes.io/docs/concepts/services-networking/service/|Kubernetes Documentation Concepts Services, Load Balancing, and Networking Service]]
  * [[https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/|Accessing Kubernetes Pods From Outside of the Cluster]]
  * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]]

<code>
...
  - protocol: TCP
    port: 80
#    targetPort: 4080
#    nodePort: 30111
</code><code>
$ kubectl apply -f my-webd-service.yaml -n my-ns
$ kubectl describe svc my-webd -n my-ns

$ kubectl get endpoints -n my-ns
</code>
=== NodePort ===

  * [[https://www.baeldung.com/ops/kubernetes-nodeport-range|Why Kubernetes NodePort Services Range From 30000 – 32767]]

<code>
$ kubectl get svc my-webd -n my-ns
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
my-webd-svc   NodePort   10.102.135.146   <none>        80:NNNNN/TCP   18h

$ curl http://kube1,2,3:NNNNN
</code>
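
To read the assigned port programmatically instead of parsing the table output (a sketch):
<code>
$ kubectl get svc my-webd -n my-ns -o jsonpath='{.spec.ports[0].nodePort}'
</code>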
== NodePort Minikube ==
<code>
$ minikube service list

$ minikube service my-webd --url -n my-ns
http://192.168.49.2:NNNNN

$ curl http://192.168.49.2:NNNNN
</code>

<code>
...
spec:
  addresses:
  - 192.168.X.64/28
  autoAssign: false
#  autoAssign: true
---
apiVersion: metallb.io/v1beta1
...
</code><code>
$ kubectl apply -f first-pool.yaml
...
$ kubectl get svc my-webd -n my-ns
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
my-webd   LoadBalancer   10.233.23.29   192.168.X.64   80:NNNNN/TCP   50s

$ #kubectl delete -f first-pool.yaml && rm first-pool.yaml

$ #kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
</code>

<code>
kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10

kube1# curl my-webd.my-ns.svc.cluster.local
</code>

<code>
node1# kubectl get all -n ingress-nginx

node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>
=== Ingress baremetal DaemonSet ===
<code>
kube1:~# mkdir -p ingress-nginx; cd $_

kube1:~/ingress-nginx# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.12.0.baremetal.yaml

kube1:~/ingress-nginx# cat ingress-nginx.controller-v1.12.0.baremetal.yaml
</code><code>
...
apiVersion: v1
#data: null
data:
  allow-snippet-annotations: "true"
  use-forwarded-headers: "true"
kind: ConfigMap
...
#kind: Deployment
kind: DaemonSet
...
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 1
#    type: RollingUpdate
...
      hostNetwork: true ### insert this
      terminationGracePeriodSeconds: 300
      volumes:
...
</code><code>
kube1:~/ingress-nginx# kubectl apply -f ingress-nginx.controller-v1.12.0.baremetal.yaml

kube1:~/ingress-nginx# kubectl -n ingress-nginx get pods -o wide

kube1:~/ingress-nginx# kubectl -n ingress-nginx describe service/ingress-nginx-controller
</code><code>
...
Endpoints:                192.168.X.221:80,192.168.X.222:80,192.168.X.223:80
...
</code><code>
kube1:~/ingress-nginx# ###kubectl delete -f ingress-nginx.controller-v1.12.0.baremetal.yaml
</code>
=== Managing the ingress-nginx-controller configuration ===
<code>
master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf

master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller
</code><code>
...
data:
  use-forwarded-headers: "true"
...
</code>
=== ingress example ===

  * [[https://stackoverflow.com/questions/49829452/why-ingress-serviceport-can-be-port-and-targetport-of-service|!!! The NGINX ingress controller does not use Services to route traffic to the pods]]
  * [[https://stackoverflow.com/questions/54459015/how-to-configure-ingress-to-direct-traffic-to-an-https-backend-using-https|how to configure ingress to direct traffic to an https backend using https]]
<code>
kube1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns
kube1# cat my-ingress.yaml
</code><code>
apiVersion: networking.k8s.io/v1
...
metadata:
  name: my-ingress
#  annotations:
#    nginx.ingress.kubernetes.io/canary: "true"
#    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
...
            name: my-webd
            port:
              number: 4080
        path: /
        pathType: Prefix
...
        pathType: Prefix
</code><code>
kube1# kubectl apply -f my-ingress.yaml -n my-ns
kube1# kubectl get ingress -n my-ns
NAME      CLASS   HOSTS                           ADDRESS                       PORTS   AGE
my-webd   nginx   webd.corpX.un,gowebd.corpX.un   192.168.X.202,192.168.X.203   80      14m
</code>
  * [[Утилита curl|The curl utility]]
<code>
$ curl webd.corpX.un
$ curl gowebd.corpX.un
...
$ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f

kube1# ### kubectl delete ingress my-ingress -n my-ns
</code>

==== Volumes ====

=== hostPath and nodeSelector ===

  * [[Средства программирования shell#Ресурсы Web сервера на shell|Shell web server resources]] on kube3
<code>
kube1# kubectl label nodes kube3 htdocs-node=yes

kube1# kubectl get nodes --show-labels

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        volumeMounts:
        - name: htdocs-volume
          mountPath: /usr/local/apache2/htdocs

#        lifecycle:
#        ...

      volumes:
      - name: htdocs-volume
        hostPath:
          path: /var/www/
      nodeSelector:
        htdocs-node: "yes"

#      initContainers:
#      ...
</code>
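
To detach the deployment from kube3 later, the label can be removed with the minus suffix (a sketch):
<code>
kube1# ###kubectl label nodes kube3 htdocs-node-
</code>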

=== PersistentVolume and PersistentVolumeClaim ===

  * [[https://qna.habr.com/q/629022|Несколько Claim на один Persistent Volumes?]]

<code>
kube1:~/pv# cat my-ha-pv.yaml
</code><code>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-kube3-sz-128m-num-001
#  name: my-pv-kube3-keycloak
#  labels:
#    type: local
spec:
## comment storageClassName for keycloak
  storageClassName: my-ha-sc
  capacity:
    storage: 128Mi
#    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
#    path: /disk2
    path: /disk2/dir1
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube3
---
#...
</code><code>
kube1:~/pv# kubectl apply -f my-ha-pv.yaml

kube1# kubectl get pv

kube1# kubectl delete pv my-pv-kube3-sz-128m-num-001

kube3# mkdir -p /disk2/dir{0..3}

kube3# chmod 777 -R /disk2/

kube3# find /disk2/

kube3# ###rm -rf /disk2/
</code>

  * [[https://stackoverflow.com/questions/55639436/create-multiple-persistent-volumes-in-one-yaml]]
  * Getting started with [[#Helm]]

<code>
kube1:~/pv# cat my-ha-pv-chart/Chart.yaml
</code><code>
apiVersion: v2
name: my-ha-pv-chart
version: 0.1.0
</code><code>
kube1:~/pv# cat my-ha-pv-chart/values.yaml
</code><code>
volume_names:
  - "dir1"
  - "dir2"
  - "dir3"
numVolumes: "3"
</code><code>
kube1:~/pv# cat my-ha-pv-chart/templates/my-ha-pv.yaml
</code><code>
{{ range .Values.volume_names }}
{{/* range $k, $v := until (atoi .Values.numVolumes) */}}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-sz-128m-num-{{ . }}
spec:
  storageClassName: my-ha-sc
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /disk2/{{ . }}/
{{/*    path: /disk2/dir{{ $v }}/ */}}
  persistentVolumeReclaimPolicy: Retain
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube3
{{ end }}
</code><code>
kube1:~/pv# helm template my-ha-pv-chart my-ha-pv-chart/

kube1:~/pv# helm install my-ha-pv-chart my-ha-pv-chart/

kube1# kubectl get pv

kube1:~/pv# ###helm uninstall my-ha-pv-chart
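
# (sketch) confirm the release was installed:
kube1:~/pv# helm list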
</code><code>
kube1:~/pywebd-k8s# cat my-webd-pvc.yaml
</code><code>
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-webd-pvc
spec:
  storageClassName: my-ha-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
</code><code>
kube1:~/pywebd-k8s# kubectl apply -f my-webd-pvc.yaml -n my-ns
kube1:~/pywebd-k8s# kubectl get pvc -n my-ns

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
        volumeMounts:
        - name: htdocs-volume
          mountPath: /usr/local/apache2/htdocs

        lifecycle:
        ...

      volumes:
      - name: htdocs-volume
        persistentVolumeClaim:
          claimName: my-webd-pvc

      initContainers:
      ...
</code><code>
kube3# find /disk2
</code>

  * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]]
  * [[https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/|Changing the default StorageClass]]

=== rancher local-path-provisioner ===

<code>
ssh root@kube2 'chmod 777 /opt/local-path-provisioner'
ssh root@kube3 'chmod 777 /opt/local-path-provisioner'

$ ###kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
</code>

<code>
kubeN:~# apt install open-iscsi

(venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml
</code>
  * [[https://github.com/longhorn/longhorn]]
<code>
$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

$ kubectl -n longhorn-system get pods -o wide --watch

Setting->General
...
</code>

  * Take a snapshot
  * Break something (delete a user)

== Stopping the service ==

<code>
...
</code>

== Using backups ==

  * Deploy [[Сервис NFS|the NFS service]] on server

<code>
Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (the Linux NFS client is not needed)
</code>
  * Volume -> Create Backup, delete the NS, restore the Volume from the backup, recreate the NS and the Volume's PV/PVC as before, then install the chart, and it picks up the existing PV/PVC

==== ConfigMap, Secret ====

<code>
server# scp /etc/pywebd/* kube1:/tmp/

kube1:~/pywebd-k8s# kubectl create configmap pywebd-conf --from-file=/tmp/pywebd.conf --dry-run=client -o yaml | tee my-webd-configmap.yaml

kube1:~/pywebd-k8s# cat my-webd-configmap.yaml
</code><code>
apiVersion: v1
data:
  pywebd.conf: |
    [default]
    DocumentRoot = /usr/local/apache2/htdocs
    Listen = 4443
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: pywebd-conf
</code><code>
kube1:~/pywebd-k8s# kubectl apply -f my-webd-configmap.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get configmaps

kube1:~/pywebd-k8s# kubectl create secret tls pywebd-tls --key /tmp/pywebd.key --cert /tmp/pywebd.crt --dry-run=client -o yaml | tee my-webd-secret-tls.yaml

kube1:~/pywebd-k8s# less my-webd-secret-tls.yaml
</code><code>
apiVersion: v1
data:
  tls.crt: ...
  tls.key: ...
kind: Secret
metadata:
  creationTimestamp: null
  name: pywebd-tls
type: kubernetes.io/tls
</code><code>
kube1:~/pywebd-k8s# rm -rv /tmp/pywebd.*

kube1:~/pywebd-k8s# kubectl apply -f my-webd-secret-tls.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get secrets

kube1:~/pywebd-k8s# kubectl create secret docker-registry regcred --docker-server=server.corpX.un:5000 --docker-username=student --docker-password='strongpassword' -n my-ns

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
      imagePullSecrets:
      - name: regcred

      containers:
      - name: my-webd
        image: server.corpX.un:5000/student/pywebd:ver1.2
        imagePullPolicy: "Always"

#        env:
#        ...
...
        livenessProbe:
          httpGet:
            port: 4443
            scheme: HTTPS
...
        volumeMounts:
...
        - name: conf-volume
          subPath: pywebd.conf
          mountPath: /etc/pywebd/pywebd.conf
        - name: secret-tls-volume
          subPath: tls.crt
          mountPath: /etc/pywebd/pywebd.crt
        - name: secret-tls-volume
          subPath: tls.key
          mountPath: /etc/pywebd/pywebd.key
...
      volumes:
...
      - name: conf-volume
        configMap:
          name: pywebd-conf
      - name: secret-tls-volume
        secret:
          secretName: pywebd-tls
...
</code><code>
kubeN$ curl --connect-to "":"":<POD_IP>:4443 https://pywebd.corpX.un
</code>
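
To double-check what ended up in the Secret, the certificate can be decoded back out (a sketch; dots in jsonpath keys are escaped with a backslash):
<code>
kube1# kubectl -n my-ns get secret pywebd-tls -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates
</code>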
==== ConfigMap ====

===== Helm =====

<code>
# wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz
# tar -zxvf helm-*-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/local/bin/helm

$ cat ~/.profile
</code><code>
...
source <(helm completion bash)
</code>
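
A quick sanity check that the binary is on the ''PATH'' (a sketch):
<code>
$ helm version --short
</code>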
==== Working with ready-made Charts ====

  * Keycloak service: [[Сервис Keycloak#Kubernetes]]

=== ingress-nginx ===

  * [[https://devpress.csdn.net/cloud/62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?]]
  * [[https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml]]
  * [[https://github.com/kubernetes/ingress-nginx]] --version 4.7.3

<code>
$ mkdir -p ingress-nginx; cd ingress-nginx
Line 1386: | Line 1776: | ||
# kubectl get clusterrolebindings -A | grep -i ingress | # kubectl get clusterrolebindings -A | grep -i ingress | ||
# kubectl get validatingwebhookconfigurations -A | grep -i ingress | # kubectl get validatingwebhookconfigurations -A | grep -i ingress | ||
+ | |||
+ | # ###helm uninstall ingress-nginx -n ingress-nginx | ||
</code> | </code> | ||
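+ | A minimal Ingress to verify the controller (a sketch: the Service name my-webd and port 8000 are assumptions, adjust to your deployment): | ||
+ | <code> | ||
+ | kube1:~# cat my-webd-ingress.yaml | ||
+ | </code><code> | ||
+ | apiVersion: networking.k8s.io/v1 | ||
+ | kind: Ingress | ||
+ | metadata: | ||
+ |   name: my-webd | ||
+ |   namespace: my-ns | ||
+ | spec: | ||
+ |   ingressClassName: nginx | ||
+ |   rules: | ||
+ |   - host: gowebd.corpX.un | ||
+ |     http: | ||
+ |       paths: | ||
+ |       - path: / | ||
+ |         pathType: Prefix | ||
+ |         backend: | ||
+ |           service: | ||
+ |             name: my-webd | ||
+ |             port: | ||
+ |               number: 8000 | ||
+ | </code><code> | ||
+ | kube1:~# kubectl apply -f my-webd-ingress.yaml | ||
+ | kube1:~# curl http://kubeN -H "Host: gowebd.corpX.un" | ||
+ | </code> | ||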
==== Deploying your own application ==== | ==== Deploying your own application ==== | ||
+ | * [[https://helm.sh/docs/chart_template_guide/getting_started/|chart_template_guide getting_started]] | ||
* [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | * [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | ||
* [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | * [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | ||
Line 1394: | Line 1787: | ||
<code> | <code> | ||
- | gitlab-runner@server:~/gowebd-k8s$ helm create webd-chart | + | ~/gowebd-k8s$ helm create webd-chart |
$ less webd-chart/templates/deployment.yaml | $ less webd-chart/templates/deployment.yaml | ||
Line 1404: | Line 1797: | ||
... | ... | ||
version: 0.1.1 | version: 0.1.1 | ||
+ | icon: https://val.bmstu.ru/unix/Media/logo.gif | ||
... | ... | ||
appVersion: "latest" | appVersion: "latest" | ||
Line 1443: | Line 1837: | ||
</code><code> | </code><code> | ||
... | ... | ||
- | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" | + | imagePullPolicy: {{ .Values.image.pullPolicy }} |
# env: | # env: | ||
# - name: APWEBD_HOSTNAME | # - name: APWEBD_HOSTNAME | ||
Line 1451: | Line 1845: | ||
# - name: REALM_NAME | # - name: REALM_NAME | ||
#   value: "{{ .Values.REALM_NAME }}" | #   value: "{{ .Values.REALM_NAME }}" | ||
- | ... | ||
- | # livenessProbe: | ||
- | # httpGet: | ||
- | # path: / | ||
- | # port: http | ||
- | # readinessProbe: | ||
- | # httpGet: | ||
- | # path: / | ||
- | # port: http | ||
... | ... | ||
</code><code> | </code><code> | ||
+ | $ helm lint webd-chart/ | ||
+ | |||
$ helm template my-webd webd-chart/ | less | $ helm template my-webd webd-chart/ | less | ||
$ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait | $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait | ||
+ | |||
+ | $ curl kubeN -H "Host: gowebd.corpX.un" | ||
$ kubectl describe events -n my-ns | less | $ kubectl describe events -n my-ns | less | ||
Line 1487: | Line 1876: | ||
* [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | * [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | ||
* [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | * [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | ||
- | * [[Инструмент GitLab#Подключение через API]] - Role: Maintainer, api, read_registry, write_registry | + | * [[Инструмент GitLab#Подключение через API]] - Role: Maintainer, api (read_registry and write_registry are not needed) |
+ | |||
+ | === Adding the application to your own repository === | ||
<code> | <code> | ||
- | gitlab-runner@server:~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable | + | ~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
"webd" has been added to your repositories | "webd" has been added to your repositories | ||
- | gitlab-runner@server:~/gowebd-k8s$ ### helm repo remove webd | + | ~/gowebd-k8s$ helm repo list |
- | gitlab-runner@server:~/gowebd-k8s$ helm repo list | + | ~/gowebd-k8s$ helm package webd-chart |
- | gitlab-runner@server:~/gowebd-k8s$ helm package webd-chart | + | ~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz |
- | gitlab-runner@server:~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz | + | ~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push |
- | gitlab-runner@server:~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push | + | ~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd |
- | gitlab-runner@server:~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd | + | ~/gowebd-k8s$ rm webd-chart-0.1.1.tgz |
- | gitlab-runner@server:~/gowebd-k8s$ rm webd-chart-0.1.1.tgz | + | ~/gowebd-k8s$ ### helm repo remove webd |
- | </code><code> | + | |
- | kube1:~# helm repo add webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable | + | ~/gowebd-k8s$ ### helm plugin uninstall cm-push |
+ | </code> | ||
+ | === Installing the application by adding the repository === | ||
+ | <code> | ||
+ | kube1:~# helm repo add webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable | ||
kube1:~# helm repo update | kube1:~# helm repo update | ||
Line 1513: | Line 1908: | ||
kube1:~# helm repo update webd | kube1:~# helm repo update webd | ||
+ | |||
+ | kube1:~# helm show values webd/webd-chart | tee values.yaml.orig | ||
+ | |||
+ | kube1:~# ###helm pull webd/webd-chart | ||
kube1:~# helm install my-webd webd/webd-chart | kube1:~# helm install my-webd webd/webd-chart | ||
kube1:~# ###helm uninstall my-webd | kube1:~# ###helm uninstall my-webd | ||
- | </code><code> | + | |
+ | kube1:~# ###helm repo remove webd | ||
+ | </code> | ||
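+ | Standard commands to inspect the installed release: | ||
+ | <code> | ||
+ | kube1:~# helm list -A | ||
+ | kube1:~# helm status my-webd | ||
+ | kube1:~# helm history my-webd | ||
+ | </code> | ||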
+ | === Installing the application without adding the repository === | ||
+ | <code> | ||
kube1:~# mkdir gowebd; cd gowebd | kube1:~# mkdir gowebd; cd gowebd | ||
- | kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | + | kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
- | kube1:~/gowebd# helm show values webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | tee values.yaml.orig | + | kube1:~/gowebd# helm show values webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable | tee values.yaml.orig |
kube1:~/gowebd# cat values.yaml | kube1:~/gowebd# cat values.yaml | ||
Line 1531: | Line 1934: | ||
#REALM_NAME: "corp" | #REALM_NAME: "corp" | ||
</code><code> | </code><code> | ||
- | kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | + | kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
$ curl http://kubeN -H "Host: gowebd.corpX.un" | $ curl http://kubeN -H "Host: gowebd.corpX.un" | ||
Line 1539: | Line 1942: | ||
==== Working with public repositories ==== | ==== Working with public repositories ==== | ||
+ | |||
+ | === gitlab-runner kubernetes === | ||
+ | |||
<code> | <code> | ||
- | helm repo add gitlab https://charts.gitlab.io | + | kube1:~/gitlab-runner# kubectl create ns gitlab-runner |
+ | |||
+ | kube1:~/gitlab-runner# kubectl -n gitlab-runner create configmap ca-crt --from-file=/usr/local/share/ca-certificates/ca.crt | ||
+ | |||
+ | kube1:~/gitlab-runner# helm repo add gitlab https://charts.gitlab.io | ||
+ | |||
+ | kube1:~/gitlab-runner# helm repo list | ||
+ | |||
+ | kube1:~/gitlab-runner# helm search repo -l gitlab | ||
+ | |||
+ | kube1:~/gitlab-runner# helm search repo -l gitlab/gitlab-runner | ||
+ | |||
+ | kube1:~/gitlab-runner# helm show values gitlab/gitlab-runner --version 0.70.5 | tee values.yaml | ||
+ | |||
+ | kube1:~/gitlab-runner# cat values.yaml | ||
+ | </code><code> | ||
+ | ... | ||
+ | gitlabUrl: https://server.corpX.un | ||
+ | ... | ||
+ | runnerToken: "NNNNNNNNNNNNNNNNNNNNN" | ||
+ | ... | ||
+ | rbac: | ||
+ | ... | ||
+ |   create: true #change this | ||
+ | ... | ||
+ | serviceAccount: | ||
+ | ... | ||
+ |   create: true #change this | ||
+ | ... | ||
+ | runners: | ||
+ | ... | ||
+ |   config: | | ||
+ |     [[runners]] | ||
+ |       tls-ca-file = "/mnt/ca.crt" #insert this | ||
+ |       [runners.kubernetes] | ||
+ |         namespace = "{{.Release.Namespace}}" | ||
+ |         image = "alpine" | ||
+ |         privileged = true #insert this | ||
+ | ... | ||
+ | securityContext: | ||
+ |   allowPrivilegeEscalation: true #change this | ||
+ |   readOnlyRootFilesystem: false | ||
+ |   runAsNonRoot: true | ||
+ |   privileged: true #change this | ||
+ | ... | ||
+ | #volumeMounts: [] #comment this | ||
+ | volumeMounts: | ||
+ |   - name: ca-crt | ||
+ |     subPath: ca.crt | ||
+ |     mountPath: /mnt/ca.crt | ||
+ | ... | ||
+ | #volumes: [] #comment this | ||
+ | volumes: | ||
+ |   - name: ca-crt | ||
+ |     configMap: | ||
+ |       name: ca-crt | ||
+ | ... | ||
+ | </code><code> | ||
+ | kube1:~/gitlab-runner# helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --version 0.70.5 | ||
+ | |||
+ | kube1:~/gitlab-runner# kubectl get all -n gitlab-runner | ||
+ | |||
+ | kube1:~/gitlab-runner# ### helm -n gitlab-runner uninstall gitlab-runner | ||
+ | </code> | ||
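+ | A hypothetical job for smoke-testing the runner (add to .gitlab-ci.yml of a project this runner serves; assumes it accepts untagged jobs): | ||
+ | <code> | ||
+ | test-k8s-runner: | ||
+ |   image: alpine | ||
+ |   script: | ||
+ |     - uname -a | ||
+ |     - env | grep KUBERNETES_SERVICE | ||
+ | </code> | ||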
+ | |||
+ | == old version == | ||
+ | <code> | ||
+ | gitlab-runner@server:~$ helm repo add gitlab https://charts.gitlab.io | ||
+ | |||
+ | gitlab-runner@server:~$ helm repo list | ||
+ | |||
+ | gitlab-runner@server:~$ helm search repo -l gitlab | ||
- | helm search repo -l gitlab/gitlab-runner | + | gitlab-runner@server:~$ helm search repo -l gitlab/gitlab-runner |
- | helm show values gitlab/gitlab-runner | tee values.yaml | + | gitlab-runner@server:~$ helm show values gitlab/gitlab-runner --version 0.56.0 | tee values.yaml |
gitlab-runner@server:~$ diff values.yaml values.yaml.orig | gitlab-runner@server:~$ diff values.yaml values.yaml.orig | ||
Line 1568: | Line 2045: | ||
> privileged: false | > privileged: false | ||
</code><code> | </code><code> | ||
- | gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace | + | gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace --version 0.56.0 |
gitlab-runner@server:~$ kubectl get all -n gitlab-runner | gitlab-runner@server:~$ kubectl get all -n gitlab-runner | ||
- | </code><code> | + | </code> |
- | $ helm search hub -o json wordpress | jq '.' | less | + | |
- | $ helm repo add bitnami https://charts.bitnami.com/bitnami | + | == SSL/TLS == |
- | $ helm show values bitnami/wordpress | + | <code> |
+ | # kubectl -n gitlab-runner create configmap wild-crt --from-file=wild.crt | ||
+ | |||
+ | # cat values.yaml | ||
+ | </code><code> | ||
+ | ... | ||
+ | gitlabUrl: https://server.corpX.un/ | ||
+ | ... | ||
+ |   config: | | ||
+ |     [[runners]] | ||
+ |       tls-ca-file = "/mnt/wild.crt" | ||
+ |       [runners.kubernetes] | ||
+ | ... | ||
+ | #volumeMounts: [] | ||
+ | volumeMounts: | ||
+ |   - name: wild-crt | ||
+ |     subPath: wild.crt | ||
+ |     mountPath: /mnt/wild.crt | ||
+ | |||
+ | #volumes: [] | ||
+ | volumes: | ||
+ |   - name: wild-crt | ||
+ |     configMap: | ||
+ |       name: wild-crt | ||
</code> | </code> | ||
Line 1629: | Line 2128: | ||
* http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ | * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ | ||
+ | ===== Monitoring ===== | ||
+ | |||
+ | ==== Metrics Server ==== | ||
+ | |||
+ | * [[https://kubernetes-sigs.github.io/metrics-server/|Kubernetes Metrics Server]] | ||
+ | * [[https://medium.com/@cloudspinx/fix-error-metrics-api-not-available-in-kubernetes-aa10766e1c2f|Fix “error: Metrics API not available” in Kubernetes]] | ||
+ | |||
+ | <code> | ||
+ | kube1:~/metrics-server# curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml | tee metrics-server-components.yaml | ||
+ | |||
+ | kube1:~/metrics-server# cat metrics-server-components.yaml | ||
+ | </code><code> | ||
+ | ... | ||
+ |       containers: | ||
+ |       - args: | ||
+ |         - --cert-dir=/tmp | ||
+ |         - --kubelet-insecure-tls # add this | ||
+ | ... | ||
+ | </code><code> | ||
+ | kube1:~/metrics-server# kubectl apply -f metrics-server-components.yaml | ||
+ | |||
+ | kube1# kubectl get pods -A | grep metrics-server | ||
+ | |||
+ | kube1# kubectl top pod #-n kube-system | ||
+ | |||
+ | kube1# kubectl top pod -A --sort-by=memory | ||
+ | |||
+ | kube1# kubectl top node | ||
+ | </code> | ||
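+ | With the Metrics API available, kubectl autoscale (HPA) also starts working; a sketch, assuming the my-webd deployment has CPU requests set: | ||
+ | <code> | ||
+ | kube1# kubectl autoscale deployment my-webd -n my-ns --min=1 --max=3 --cpu-percent=80 | ||
+ | kube1# kubectl get hpa -n my-ns | ||
+ | kube1# ###kubectl delete hpa my-webd -n my-ns | ||
+ | </code> | ||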
+ | |||
+ | ==== kube-state-metrics ==== | ||
+ | |||
+ | * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]] | ||
+ | * ... alerts with details on failed pods ... | ||
+ | |||
+ | <code> | ||
+ | kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts | ||
+ | |||
+ | kube1# helm repo update | ||
+ | kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace | ||
+ | |||
+ | kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics | ||
+ | </code> | ||
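+ | The alerts on failed pods are typically built on metrics such as kube_pod_status_phase; a quick look: | ||
+ | <code> | ||
+ | kube1# curl -s kube-state-metrics.vm.svc.cluster.local:8080/metrics | grep 'phase="Failed"' | head | ||
+ | </code> | ||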
+ | ===== Debugging and troubleshooting ===== | ||
+ | |||
+ | ==== Debugging etcd ==== | ||
+ | |||
+ | * [[https://sysdig.com/blog/monitor-etcd/|How to monitor etcd]] | ||
+ | |||
+ | <code> | ||
+ | kubeN:~# more /etc/kubernetes/manifests/kube-apiserver.yaml | ||
+ | |||
+ | kubeN:~# etcdctl member list -w table \ | ||
+ | --endpoints=https://kube1:2379 \ | ||
+ | --cacert=/etc/ssl/etcd/ssl/ca.pem \ | ||
+ | --cert=/etc/ssl/etcd/ssl/node-kube1.pem \ | ||
+ | --key=/etc/ssl/etcd/ssl/node-kube1-key.pem | ||
+ | |||
+ | kubeN:~# etcdctl endpoint status -w table \ | ||
+ | --endpoints=https://kube1:2379,https://kube2:2379,https://kube3:2379 \ | ||
+ | --cacert=/etc/ssl/etcd/ssl/ca.pem \ | ||
+ | --cert=/etc/ssl/etcd/ssl/node-kube1.pem \ | ||
+ | --key=/etc/ssl/etcd/ssl/node-kube1-key.pem | ||
+ | </code> | ||
===== Additional materials ===== | ===== Additional materials ===== | ||
+ | |||
+ | ==== Configuring registry-mirrors for Kubespray ==== | ||
+ | <code> | ||
+ | ~/kubespray# cat inventory/mycluster/group_vars/all/docker.yml | ||
+ | </code><code> | ||
+ | ... | ||
+ | docker_registry_mirrors: | ||
+ |   - https://mirror.gcr.io | ||
+ | ... | ||
+ | </code><code> | ||
+ | ~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml | ||
+ | </code><code> | ||
+ | ... | ||
+ | containerd_registries_mirrors: | ||
+ |   - prefix: docker.io | ||
+ |     mirrors: | ||
+ |       - host: https://mirror.gcr.io | ||
+ |         capabilities: ["pull", "resolve"] | ||
+ |         skip_verify: false | ||
+ | ... | ||
+ | </code> | ||
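+ | After re-running the playbook, one way to confirm that image pulls still work through the mirror (assuming containerd and crictl on the node): | ||
+ | <code> | ||
+ | kubeN:~# crictl pull docker.io/library/alpine:latest | ||
+ | kubeN:~# crictl images | grep alpine | ||
+ | </code> | ||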
+ | |||
+ | ==== Installing kubelet, kubeadm, kubectl on ubuntu20 ==== | ||
+ | |||
+ | * Run the following on every node | ||
+ | |||
+ | <code> | ||
+ | mkdir -p /etc/apt/keyrings | ||
+ | |||
+ | curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg | ||
+ | |||
+ | echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list | ||
+ | |||
+ | apt update && apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1 | ||
+ | </code> | ||
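+ | It is common to additionally pin the packages so routine upgrades do not move them: | ||
+ | <code> | ||
+ | apt-mark hold kubelet kubeadm kubectl | ||
+ | </code> | ||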
+ | |||
+ | ==== Use .kube/config Client certs in curl ==== | ||
+ | * [[https://serverfault.com/questions/1094361/use-kube-config-client-certs-in-curl|Use .kube/config Client certs in curl]] | ||
+ | <code> | ||
+ | cat ~/.kube/config | yq -r '.clusters[0].cluster."certificate-authority-data"' | base64 -d - > ~/.kube/ca.pem | ||
+ | cat ~/.kube/config | yq -r '.users[0].user."client-certificate-data"' | base64 -d - > ~/.kube/user.pem | ||
+ | cat ~/.kube/config | yq -r '.users[0].user."client-key-data"' | base64 -d - > ~/.kube/user-key.pem | ||
+ | |||
+ | SERVER_URL=$(cat ~/.kube/config | yq -r .clusters[0].cluster.server) | ||
+ | |||
+ | curl --cacert ~/.kube/ca.pem --cert ~/.kube/user.pem --key ~/.kube/user-key.pem -X GET ${SERVER_URL}/api/v1/namespaces/default/pods/ | ||
+ | </code> | ||
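+ | The same certificates work for any other API path, e.g. the version endpoint: | ||
+ | <code> | ||
+ | curl --cacert ~/.kube/ca.pem --cert ~/.kube/user.pem --key ~/.kube/user-key.pem ${SERVER_URL}/version | ||
+ | </code> | ||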
==== bare-metal minikube ==== | ==== bare-metal minikube ==== |