система_kubernetes — revision 2022/07/14 09:45 (val), revision 2024/05/06 14:09 (val)

====== The Kubernetes System ======

  * [[https://kubernetes.io/ru/docs/home/|Документация по Kubernetes (на русском)]]
  * [[https://youtu.be/sLQefhPfwWE|youtube Введение в Kubernetes на примере Minikube]]

  * [[https://habr.com/ru/company/domclick/blog/577964/|Ультимативный гайд по созданию CI/CD в GitLab с автодеплоем в Kubernetes на голом железе всего за 514$ в год ( ͡° ͜ʖ ͡°)]]
  * [[https://habr.com/ru/company/flant/blog/513908/|Полноценный Kubernetes с нуля на Raspberry Pi]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|Различия между Docker, containerd, CRI-O и runc]]
  * [[https://habr.com/ru/company/vk/blog/542730/|11 факапов PRO-уровня при внедрении Kubernetes и как их избежать]]
  * [[https://github.com/dgkanatsios/CKAD-exercises|A set of exercises that helped me prepare for the Certified Kubernetes Application Developer exam]]

  * [[https://www.youtube.com/watch?v=XZQ7-7vej6w|Наш опыт с Kubernetes в небольших проектах / Дмитрий Столяров (Флант)]]

  * [[https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/|Accessing Kubernetes Pods From Outside of the Cluster]]
===== The kubectl command-line tool =====

==== Installation ====

=== Linux ===
<code>
# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

# chmod +x kubectl

# mv kubectl /usr/local/bin/
</code>

=== Windows ===
  * [[https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/|Install and Set Up kubectl on Windows]]
<code>
cmder$ curl -LO "https://dl.k8s.io/release/v1.29.0/bin/windows/amd64/kubectl.exe"
cmder$ mv kubectl.exe /usr/bin
</code>

==== Connecting to the cluster ====
<code>
mkdir ~/.kube/

scp root@192.168.X.2N1:.kube/config ~/.kube/

cat ~/.kube/config
</code><code>
...
    server: https://192.168.X.2N1:6443
...
</code><code>
kubectl get all -o wide --all-namespaces
kubectl get all -o wide -A
</code>
=== Configuring shell completion ===
<code>
gitlab-runner@server:~$ source <(kubectl completion bash)
</code>
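The `source` command above only affects the current shell session. A sketch of making completion permanent by appending it to ~/.bashrc (assumes bash; the alias lines matter only if you shorten `kubectl` to `k`, as many of these labs do):

```shell
# Persist kubectl completion for all future bash sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc

# Optional: a short alias, with completion wired to it as well
echo "alias k='kubectl'" >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
```

`__start_kubectl` is the entry point that `kubectl completion bash` registers, so binding it to the alias gives the alias the same completions.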

=== Connecting to another cluster ===

<code>
gitlab-runner@server:~$ scp root@kube1:.kube/config .kube/config_kube1

gitlab-runner@server:~$ cat .kube/config_kube1
</code><code>
...
</code><code>
gitlab-runner@server:~$ export KUBECONFIG=~/.kube/config_kube1

gitlab-runner@server:~$ kubectl get nodes
</code>

===== Installing minikube =====
  * [[https://www.linuxtechi.com/how-to-install-minikube-on-ubuntu/|How to Install Minikube on Ubuntu 20.04 LTS / 21.04]]
  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
<code>
root@server:~# apt install -y curl wget apt-transport-https

root@server:~# wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

root@server:~# mv minikube-linux-amd64 /usr/local/bin/minikube

root@server:~# chmod +x /usr/local/bin/minikube
</code>
  * Технология Docker [[Технология Docker#Предоставление прав непривилегированным пользователям]]

<code>
gitlab-runner@server:~$ ### minikube delete
gitlab-runner@server:~$ ### rm -rv .minikube/

gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real    29m8.320s
...
gitlab-runner@server:~$ minikube status

gitlab-runner@server:~$ minikube ip
gitlab-runner@server:~$ minikube addons list
gitlab-runner@server:~$ minikube addons configure registry-creds   # not needed for registries of public projects
...
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: http://server.corpX.un:5000
-- Enter docker registry username: student
-- Enter docker registry password:
...
gitlab-runner@server:~$ minikube addons enable registry-creds
gitlab-runner@server:~$ minikube kubectl -- get pods -A

gitlab-runner@server:~$ alias kubectl='minikube kubectl --'

gitlab-runner@server:~$ kubectl get pods -A
</code>

or

  * [[#The kubectl command-line tool]]

<code>
gitlab-runner@server:~$ ### minikube stop

gitlab-runner@server:~$ ### minikube start
</code>
===== Kubernetes cluster =====

==== Deploying with kubeadm ====
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/|kubernetes.io Creating a cluster with kubeadm]]
  * [[https://infoit.com.ua/linux/kak-ustanovit-kubernetes-na-ubuntu-20-04-lts/|Как установить Kubernetes на Ubuntu 20.04 LTS]]
  * [[https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/|How to Install Kubernetes Cluster on Ubuntu 22.04]]
  * [[https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/|How to Install Kubernetes Cluster on Debian]]
  * [[https://www.cloud4y.ru/blog/installation-kubernetes/|Установка Kubernetes]]

=== Preparing the nodes ===
<code>
node1# ssh-keygen

node1# ssh-copy-id node2
node1# ssh-copy-id node3

node1# bash -c '
swapoff -a
ssh node2 swapoff -a
ssh node3 swapoff -a
'

node1# bash -c '
sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node2 sed -i"" -e "/swap/s/^/#/" /etc/fstab
ssh node3 sed -i"" -e "/swap/s/^/#/" /etc/fstab
'
</code>
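A quick sanity check that swap really is off before running kubeadm — `swapon --show` prints nothing when no swap area is active (standard util-linux tools; repeat via ssh on node2 and node3):

```shell
# Empty output means no active swap areas
swapon --show

# The Swap line should report 0B used
free -h | grep -i swap
```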

=== Installing the software ===
=== !!! Check with your instructor !!! ===
<code>
node1# bash -c '
http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
ssh node2 http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
ssh node3 http_proxy=http://proxy.isp.un:3128/ apt -y install apt-transport-https curl
'

node1# bash -c '
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
ssh node2 "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add"
ssh node3 "curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add"
'

node1# bash -c '
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
ssh node2 apt-add-repository \"deb http://apt.kubernetes.io/ kubernetes-xenial main\"
ssh node3 apt-add-repository \"deb http://apt.kubernetes.io/ kubernetes-xenial main\"
'

node1# bash -c '
http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
ssh node2 http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
ssh node3 http_proxy=http://proxy.isp.un:3128/ apt -y install kubeadm kubelet kubectl kubernetes-cni
'
</code>

  * [[https://forum.linuxfoundation.org/discussion/864693/the-repository-http-apt-kubernetes-io-kubernetes-xenial-release-does-not-have-a-release-file|The repository http://apt.kubernetes.io kubernetes-xenial Release does not have a Release file]]

!!! The old apt.kubernetes.io repository no longer serves packages; on EVERY node remove the kubernetes line from /etc/apt/sources.list and switch to the pkgs.k8s.io repository: !!!

<code>
mkdir /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt update

apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
</code>

=== Initializing the master ===

<code>
root@node1:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.X.201

root@node1:~# mkdir -p $HOME/.kube

root@node1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@node1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

root@node1:~# kubectl get pod -o wide --all-namespaces

root@node1:~# kubectl get --raw='/readyz?verbose'
</code>
  * May be needed if the error [[https://github.com/containerd/containerd/issues/4581|[ERROR CRI]: container runtime is not running]] occurs
<code>
node1# bash -c '
rm /etc/containerd/config.toml
systemctl restart containerd
ssh node2 rm /etc/containerd/config.toml
ssh node2 systemctl restart containerd
ssh node3 rm /etc/containerd/config.toml
ssh node3 systemctl restart containerd
'
</code>
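Instead of deleting config.toml outright, an alternative sketch is to regenerate the stock containerd configuration (which leaves the CRI plugin enabled) and, as kubeadm recommends for distributions using systemd, switch runc to systemd cgroups — to be run on each node:

```shell
# Regenerate the default containerd configuration
containerd config default > /etc/containerd/config.toml

# Switch the runc runtime to systemd cgroup management
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

systemctl restart containerd
```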

=== Joining the worker nodes ===

<code>
root@node2_3:~# curl -k https://node1:6443/livez?verbose
</code>
  * [[https://github.com/containerd/containerd/issues/4581|[ERROR CRI]: container runtime is not running]]
<code>
root@node2_3:~# kubeadm join 192.168.X.201:6443 --token NNNNNNNNNNNNNNNNNNNN \
        --discovery-token-ca-cert-hash sha256:NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
</code>
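The token printed by `kubeadm init` expires after 24 hours by default. If it has expired, a fresh join command can be generated on the control-plane node (standard kubeadm subcommands):

```shell
# Print a complete, ready-to-paste join command (new token + CA cert hash)
kubeadm token create --print-join-command

# List existing bootstrap tokens and their TTLs
kubeadm token list
```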
=== Checking cluster state ===
<code>
root@node1:~# kubectl cluster-info

root@node1:~# kubectl get nodes -o wide
</code>

=== Removing a node ===

  * [[https://stackoverflow.com/questions/56064537/how-to-remove-broken-nodes-in-kubernetes|How to remove broken nodes in Kubernetes]]

<code>
$ kubectl cordon kube3

$ time kubectl drain kube3   #--ignore-daemonsets --delete-emptydir-data --force

$ kubectl delete node kube3
</code>
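If the node was only cordoned and drained for maintenance rather than decommissioned, it can be returned to service without rejoining the cluster:

```shell
# Allow the scheduler to place pods on the node again
kubectl uncordon kube3
```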

=== Tearing down the cluster ===

  * [[https://stackoverflow.com/questions/44698283/how-to-completely-uninstall-kubernetes|How to completely uninstall kubernetes]]

<code>
node1# bash -c '
kubeadm reset
ssh node2 kubeadm reset
ssh node3 kubeadm reset
'
</code>
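`kubeadm reset` deliberately leaves a few artifacts behind and says so in its warning output; a sketch of the manual cleanup on each node:

```shell
# CNI configuration is not removed by kubeadm reset
rm -rf /etc/cni/net.d

# Nor are copies of the admin kubeconfig
rm -rf $HOME/.kube
```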

=== Configuring access to an insecure private registry ===

  * [[https://github.com/containerd/containerd/issues/4938|Unable to pull image from insecure registry, http: server gave HTTP response to HTTPS client #4938]]
  * [[https://github.com/containerd/containerd/issues/3847|Containerd cannot pull image from insecure registry #3847]]

  * [[https://mrzik.medium.com/how-to-configure-private-registry-for-kubernetes-cluster-running-with-containerd-cf74697fa382|How to Configure Private Registry for Kubernetes cluster running with containerd]]
  * [[https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header|containerd/docs/PLUGINS.md migrate config v1 to v2]]

== containerd ==

<code>
root@node1:~# mkdir /etc/containerd/

root@node1:~# cat /etc/containerd/config.toml
</code><code>
version = 2

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."server.corpX.un:5000"]
      endpoint = ["http://server.corpX.un:5000"]

# not needed
#  [plugins."io.containerd.grpc.v1.cri".registry.configs]
#    [plugins."io.containerd.grpc.v1.cri".registry.configs."server.corpX.un:5000".tls]
#      insecure_skip_verify = true

# does not work in cri-tools 1.25; a public project is needed
#[plugins."io.containerd.grpc.v1.cri".registry.configs."server.corpX.un:5000".auth]
#  auth = "c3R1ZGVudDpwYXNzd29yZA=="
</code><code>
node1# bash -c '
ssh node2 mkdir /etc/containerd/
ssh node3 mkdir /etc/containerd/
scp /etc/containerd/config.toml node2:/etc/containerd/config.toml
scp /etc/containerd/config.toml node3:/etc/containerd/config.toml
systemctl restart containerd
ssh node2 systemctl restart containerd
ssh node3 systemctl restart containerd
'

root@nodeN:~# containerd config dump | less
</code>

Verification:

<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd
</code>

==== Deploying with Kubespray ====

=== !!! Check with your instructor !!! ===

  * [[https://github.com/kubernetes-sigs/kubespray]]
  * [[https://habr.com/ru/companies/domclick/articles/682364/|Самое подробное руководство по установке высокодоступного (почти ಠ ͜ʖ ಠ ) Kubernetes-кластера]]
  * [[https://habr.com/ru/companies/X5Tech/articles/645651/|Bare-metal kubernetes-кластер на своём локальном компьютере]]
  * [[https://internet-lab.ru/k8s_kubespray|Kubernetes — установка через Kubespray]]
  * [[https://www.mshowto.org/en/ubuntu-sunucusuna-kubespray-ile-kubernetes-kurulumu.html|Installing Kubernetes on Ubuntu Server with Kubespray]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubespray/|Installing Kubernetes with Kubespray]]

  * [[https://stackoverflow.com/questions/29882263/browse-list-of-tagged-releases-in-a-repo|Browse list of tagged releases in a repo]]

<code>
kube1# ssh-keygen

kube1# ssh-copy-id kube1; ssh-copy-id kube2; ssh-copy-id kube3; ssh-copy-id kube4

kube1# apt update

kube1# apt install python3-pip -y

kube1# git clone https://github.com/kubernetes-sigs/kubespray

kube1# cd kubespray/

~/kubespray# grep -r containerd_insecure_registries .
~/kubespray# git log

~/kubespray# git branch -r
~/kubespray# git checkout origin/release-2.22

~/kubespray# git tag -l
~/kubespray# ### git checkout tags/v2.22.1

~/kubespray# ### git checkout 4c37399c7582ea2bfb5202c3dde3223f9c43bf59

~/kubespray# ### git checkout master
</code>

  * May be required: [[Язык программирования Python#Виртуальная среда Python]]
  * May be required: [[https://github.com/kubernetes-sigs/kubespray/issues/10688|"The conditional check 'groups.get('kube_control_plane')' failed. The error was: Conditional is marked as unsafe, and cannot be evaluated." #10688]]

<code>
~/kubespray# time pip3 install -r requirements.txt
real    1m48.202s

~/kubespray# cp -rvfpT inventory/sample inventory/mycluster

~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223)

~/kubespray# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

~/kubespray# less inventory/mycluster/hosts.yaml

~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real    45m31.796s

kube1# less ~/.kube/config

~/kubespray# ### time ansible-playbook -i inventory/mycluster/hosts.yaml reset.yml
real    7m31.796s
</code>

=== Adding a node with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
...
    kube4:
      ansible_host: 192.168.X.204
      ip: 192.168.X.204
      access_ip: 192.168.X.204
...
    kube_node:
...
        kube4:
...
</code><code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml
real    17m37.459s

$ kubectl get nodes -o wide
</code>

=== Adding insecure_registries with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
containerd_insecure_registries:
  "server.corpX.un:5000": "http://server.corpX.un:5000"
containerd_registry_auth:
  - registry: server.corpX.un:5000
    username: student
    password: Pa$$w0rd
...
</code><code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
user    46m37.151s

# less /etc/containerd/config.toml
</code>

=== Managing add-ons with Kubespray ===
<code>
~/kubespray# cat inventory/mycluster/group_vars/k8s_cluster/addons.yml
</code><code>
...
helm_enabled: true
...
ingress_nginx_enabled: true
ingress_nginx_host_network: true
...
</code>
===== Basic k8s objects =====

  * [[https://kubernetes.io/ru/docs/reference/kubectl/docker-cli-to-kubectl/|kubectl для пользователей Docker]]
  * [[https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/|Run a Stateless Application Using a Deployment]]

<code>
$ kubectl api-resources

$ kubectl run my-debian --image=debian -- "sleep" "3600"

$ ### kubectl run -ti --rm my-debian --image=debian --overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}'

$ kubectl get all

kubeN# crictl ps | grep debi
kubeN# crictl images

nodeN# ctr ns ls
nodeN# ctr -n=k8s.io image ls | grep debi

$ kubectl delete pod my-debian
$ ### kubectl delete pod my-debian --grace-period=0 --force

$ kubectl create deployment my-debian --image=debian -- "sleep" "3600"

$ kubectl get deployments
</code>
  * [[#Configuring shell completion]]
<code>
$ kubectl attach my-debian-NNNNNNNNN-NNNNN

$ kubectl get deployment my-debian -o yaml
</code>
  * [[Переменные окружения]] EDITOR
<code>
$ kubectl edit deployment my-debian

$ kubectl get pods -o wide

$ kubectl delete deployment my-debian
</code>
  * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]]
<code>
$ cat my-debian-deployment.yaml
</code><code>
apiVersion: apps/v1
kind: ReplicaSet
#kind: Deployment
metadata:
  name: my-debian
spec:
  selector:
    matchLabels:
      app: my-debian
  replicas: 2
  template:
    metadata:
      labels:
        app: my-debian
    spec:
      containers:
      - name: my-debian
        image: debian
        command: ["sleep", "3600"]
      restartPolicy: Always
</code><code>
$ kubectl apply -f my-debian-deployment.yaml
...
$ kubectl delete -f my-debian-deployment.yaml
</code>
==== A namespace for your application ====

  * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]]
  * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]]

<code>
$ kubectl create namespace my-ns

$ kubectl get namespaces

$ ### kubectl create deployment my-webd --image=server.corpX.un:5000/student/webd:latest --replicas=2 -n my-ns

$ ### kubectl delete deployment my-webd -n my-ns

$ cd webd/

$ cat my-webd-deployment.yaml
</code><code>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webd
spec:
  selector:
    matchLabels:
      app: my-webd
  replicas: 2
  template:
    metadata:
      labels:
        app: my-webd
    spec:
      containers:
      - name: my-webd
#        image: server.corpX.un:5000/student/webd
#        image: server.corpX.un:5000/student/webd:ver1.N

#        imagePullPolicy: "Always"

#        image: httpd
#        lifecycle:
#          postStart:
#            exec:
#              command: ["/bin/sh", "-c", "echo Hello from apache2 on $(hostname) > /usr/local/apache2/htdocs/index.html"]

#        env:
#        - name: APWEBD_HOSTNAME
#          value: "apwebd.corpX.un"
#        - name: KEYCLOAK_HOSTNAME
#          value: "keycloak.corpX.un"
#        - name: REALM_NAME
#          value: "corpX"

#        livenessProbe:
#          httpGet:
#            port: 80

#        volumeMounts:
#        - name: nfs-volume
#          mountPath: /var/www
#      volumes:
#      - name: nfs-volume
#        nfs:
#          server: server.corpX.un
#          path: /var/www
</code><code>
$ kubectl apply -f my-webd-deployment.yaml -n my-ns

$ kubectl get all -n my-ns -o wide

$ kubectl describe -n my-ns pod/my-webd-NNNNNNNNNN-NNNNN

$ kubectl scale deployment my-webd --replicas=3 -n my-ns

$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
</code>

  * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]]

<code>
gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
1         <none>
...
N         <none>

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd --revision=1
...
    Image:      server.corpX.un:5000/student/webd:ver1.1
...

gitlab-runner@server:~$ kubectl -n my-ns rollout undo deployment/my-webd --to-revision=1

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
2         <none>
...
N+1       <none>
</code>
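The CHANGE-CAUSE column is `<none>` by default; it is populated from the standard `kubernetes.io/change-cause` annotation, which can be set after each deploy (the message text here is arbitrary):

```shell
# Record a human-readable reason for the current revision;
# it shows up in `kubectl rollout history`
kubectl -n my-ns annotate deployment/my-webd \
  kubernetes.io/change-cause="update image to ver1.2"
```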

  * [[https://kubernetes.io/docs/concepts/services-networking/service/|Kubernetes Documentation Concepts Services, Load Balancing, and Networking Service]]

  * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]]
<code>
$ ### kubectl expose deployment my-webd --type=NodePort --port=80 -n my-ns

$ ### kubectl delete svc my-webd -n my-ns

$ cat my-webd-service.yaml
</code><code>
apiVersion: v1
kind: Service
metadata:
  name: my-webd
spec:
#  type: NodePort
#  type: LoadBalancer
#  loadBalancerIP: 192.168.X.64
  selector:
    app: my-webd
  ports:
  - protocol: TCP
    port: 80
#    nodePort: 30111
</code><code>
$ kubectl apply -f my-webd-service.yaml -n my-ns

$ kubectl logs -l app=my-webd -n my-ns
(the -f, --tail=2000 and --previous options are also available)
</code>

=== NodePort ===
<code>
$ kubectl get svc my-webd -n my-ns
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
my-webd   NodePort   10.102.135.146   <none>        80:NNNNN/TCP   18h

$ kubectl describe svc my-webd -n my-ns

$ curl http://node1,2,3:NNNNN
(works unreliably on a home-grown kubeadm cluster)
</code>

== NodePort Minikube ==
<code>
$ minikube service list

$ minikube service my-webd -n my-ns --url
http://192.168.49.2:NNNNN

$ curl $(minikube service my-webd -n my-ns --url)
</code>
+ | |||
+ | === LoadBalancer === | ||
+ | |||
+ | == MetalLB == | ||
+ | |||
+ | * [[https://www.adaltas.com/en/2022/09/08/kubernetes-metallb-nginx/|Ingresses and Load Balancers in Kubernetes with MetalLB and nginx-ingress]] | ||
+ | |||
+ | * [[https://metallb.universe.tf/installation/|Installation]] | ||
+ | * [[https://metallb.universe.tf/configuration/_advanced_ipaddresspool_configuration/|Advanced AddressPool configuration]] | ||
+ | * [[https://metallb.universe.tf/configuration/_advanced_l2_configuration/|Advanced L2 configuration]] | ||
+ | |||
+ | <code> | ||
+ | $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml | ||
+ | |||
+ | $ kubectl -n metallb-system get all | ||
+ | |||
+ | $ cat first-pool.yaml | ||
+ | </code><code> | ||
+ | --- | ||
+ | apiVersion: metallb.io/v1beta1 | ||
+ | kind: IPAddressPool | ||
+ | metadata: | ||
+ | name: first-pool | ||
+ | namespace: metallb-system | ||
+ | spec: | ||
+ | addresses: | ||
+ | - 192.168.13.64/28 | ||
+ | autoAssign: false | ||
+ | --- | ||
+ | apiVersion: metallb.io/v1beta1 | ||
+ | kind: L2Advertisement | ||
+ | metadata: | ||
+ | name: first-pool-advertisement | ||
+ | namespace: metallb-system | ||
+ | spec: | ||
+ | ipAddressPools: | ||
+ | - first-pool | ||
+ | interfaces: | ||
+ | - eth0 | ||
+ | </code><code> | ||
+ | $ kubectl apply -f first-pool.yaml | ||
+ | |||
+ | $ ### kubectl delete -f first-pool.yaml && rm first-pool.yaml | ||
+ | |||
+ | $ ### kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml | ||
+ | </code> | ||
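Because the `first-pool` above is created with ''autoAssign: false'', MetalLB hands out one of its addresses only to a Service that requests it explicitly. A minimal sketch, assuming the annotation names from the MetalLB documentation (the pinned IP is an example and must fall inside the pool range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-webd
  annotations:
    # request an address from the pool defined above
    metallb.universe.tf/address-pool: first-pool
    # optionally pin a specific IP from that pool (example value)
    metallb.universe.tf/loadBalancerIPs: 192.168.13.65
spec:
  type: LoadBalancer
  selector:
    app: my-webd
  ports:
    - port: 80
```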
+ | |||
+ | === ClusterIP === | ||
+ | <code> | ||
+ | kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10 | ||
+ | ...10.102.135.146... | ||
+ | |||
+ | server# ssh -p 32222 nodeN | ||
+ | |||
+ | my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd.my-ns.svc.cluster.local | ||
+ | OR | ||
+ | my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd-webd-chart.my-ns.svc.cluster.local | ||
+ | </code> | ||
+ | |||
+ | == port-forward == | ||
+ | |||
+ | * [[#Инструмент командной строки kubectl]] | ||
+ | |||
+ | <code> | ||
+ | node1/kube1# kubectl port-forward -n my-ns --address 0.0.0.0 services/my-webd 1234:80 | ||
+ | |||
+ | cmder> kubectl port-forward -n my-ns services/my-webd 1234:80 | ||
+ | </code> | ||
+ | |||
+ | * http://192.168.X.2N1:1234 | ||
+ | * http://localhost:1234 | ||
+ | |||
+ | <code> | ||
+ | node1/kube1# kubectl -n my-ns delete pod/my-webd... | ||
+ | </code> | ||
+ | |||
+ | == kubectl proxy == | ||
+ | |||
+ | * [[#Инструмент командной строки kubectl]] | ||
+ | |||
+ | <code> | ||
+ | kube1:~# kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' | ||
+ | |||
+ | cmder> kubectl proxy | ||
+ | </code> | ||
+ | |||
+ | * http://192.168.X.2N1:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/ | ||
+ | * http://localhost:8001/api/v1/namespaces/my-ns/services/my-webd:80/proxy/ | ||
+ | |||
+ | |||
+ | ==== Удаление объектов ==== | ||
+ | <code> | ||
+ | $ kubectl get all -n my-ns | ||
+ | |||
+ | $ kubectl delete -n my-ns -f my-webd-deployment.yaml,my-webd-service.yaml | ||
+ | |||
+ | or | ||
+ | |||
+ | $ kubectl delete namespace my-ns | ||
</code> | </code> | ||
==== Ingress ==== | ==== Ingress ==== | ||
+ | |||
+ | * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]] | ||
+ | |||
+ | === Minikube ingress-nginx-controller === | ||
* [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]] | * [[https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/|Set up Ingress on Minikube with the NGINX Ingress Controller]] | ||
- | * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]] | + | * [[https://www.golinuxcloud.com/kubectl-port-forward/|kubectl port-forward examples in Kubernetes]] |
<code> | <code> | ||
- | student@node2:~$ minikube addons enable ingress | + | server# cat /etc/bind/corpX.un |
+ | </code><code> | ||
+ | ... | ||
+ | webd A 192.168.49.2 | ||
+ | </code><code> | ||
+ | gitlab-runner@server:~$ minikube addons enable ingress | ||
+ | </code> | ||
+ | |||
+ | === Baremetal ingress-nginx-controller === | ||
+ | |||
+ | * [[https://github.com/kubernetes/ingress-nginx/tags]] Версии | ||
+ | * [[https://stackoverflow.com/questions/61616203/nginx-ingress-controller-failed-calling-webhook|Nginx Ingress Controller - Failed Calling Webhook]] | ||
+ | * [[https://stackoverflow.com/questions/51511547/empty-address-kubernetes-ingress|Empty ADDRESS kubernetes ingress]] | ||
+ | * [[https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters|ingress-nginx/deploy/bare-metal-clusters]] | ||
+ | |||
+ | <code> | ||
+ | server# cat /etc/bind/corpX.un | ||
+ | </code><code> | ||
+ | ... | ||
+ | webd A 192.168.X.202 | ||
+ | A 192.168.X.203 | ||
+ | gowebd CNAME webd | ||
+ | </code><code> | ||
+ | node1# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
+ | |||
+ | node1# cat ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
+ | </code><code> | ||
+ | ... | ||
+ | kind: Deployment | ||
+ | ... | ||
+ | spec: | ||
+ | ... | ||
+ | replicas: 3 ### insert this (equal to the number of worker nodes) | ||
+ | template: | ||
+ | ... | ||
+ | terminationGracePeriodSeconds: 300 | ||
+ | hostNetwork: true ### insert this | ||
+ | volumes: | ||
+ | ... | ||
+ | </code><code> | ||
+ | node1# kubectl apply -f ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
+ | |||
+ | node1# kubectl get all -n ingress-nginx | ||
+ | |||
+ | node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission | ||
+ | |||
+ | node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
+ | </code> | ||
+ | |||
+ | === Управление конфигурацией ingress-nginx-controller === | ||
+ | <code> | ||
+ | master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | ||
+ | |||
+ | master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller | ||
+ | </code><code> | ||
+ | ... | ||
+ | data: | ||
+ | use-forwarded-headers: "true" | ||
+ | ... | ||
+ | </code> | ||
+ | |||
+ | === Итоговый вариант с DaemonSet === | ||
+ | <code> | ||
+ | node1# diff ingress-nginx.controller-v1.8.2.baremetal.yaml.orig ingress-nginx.controller-v1.8.2.baremetal.yaml | ||
+ | </code><code> | ||
+ | 323a324 | ||
+ | > use-forwarded-headers: "true" | ||
+ | 391c392,393 | ||
+ | < kind: Deployment | ||
+ | --- | ||
+ | > #kind: Deployment | ||
+ | > kind: DaemonSet | ||
+ | 409,412c411,414 | ||
+ | < strategy: | ||
+ | < rollingUpdate: | ||
+ | < maxUnavailable: 1 | ||
+ | < type: RollingUpdate | ||
+ | --- | ||
+ | > # strategy: | ||
+ | > # rollingUpdate: | ||
+ | > # maxUnavailable: 1 | ||
+ | > # type: RollingUpdate | ||
+ | 501a504 | ||
+ | > hostNetwork: true | ||
+ | </code><code> | ||
+ | node1# kubectl -n ingress-nginx describe service/ingress-nginx-controller | ||
+ | ... | ||
+ | Endpoints: 192.168.X.221:80,192.168.X.222:80,192.168.X.223:80 | ||
+ | ... | ||
+ | </code> | ||
+ | |||
+ | === ingress example === | ||
+ | |||
+ | <code> | ||
+ | node1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns | ||
- | gitlab-runner@gate:~/webd$ cat my-webd-ingress.yaml | + | node1# cat my-ingress.yaml |
</code><code> | </code><code> | ||
apiVersion: networking.k8s.io/v1 | apiVersion: networking.k8s.io/v1 | ||
kind: Ingress | kind: Ingress | ||
metadata: | metadata: | ||
- | name: my-webd | + | name: my-ingress |
- | namespace: my-ns | + | |
- | annotations: | + | |
- | nginx.ingress.kubernetes.io/rewrite-target: /$1 | + | |
spec: | spec: | ||
+ | ingressClassName: nginx | ||
+ | # tls: | ||
+ | # - hosts: | ||
+ | # - gowebd.corpX.un | ||
+ | # secretName: gowebd-tls | ||
rules: | rules: | ||
- | - host: webd.corp13.un | + | - host: webd.corpX.un |
- | http: | + | http: |
- | paths: | + | paths: |
- | - path: /(.*) | + | - backend: |
- | pathType: Prefix # Попробовать: ImplementationSpecific | + | service: |
- | backend: | + | name: my-webd |
- | service: | + | port: |
- | name: my-webd | + | number: 80 |
- | port: | + | path: / |
- | number: 80 | + | pathType: Prefix |
+ | - host: gowebd.corpX.un | ||
+ | http: | ||
+ | paths: | ||
+ | - backend: | ||
+ | service: | ||
+ | name: my-gowebd | ||
+ | port: | ||
+ | number: 80 | ||
+ | path: / | ||
+ | pathType: Prefix | ||
</code><code> | </code><code> | ||
- | $ kubectl apply -f my-webd-ingress.yaml | + | node1# kubectl apply -f my-ingress.yaml -n my-ns |
- | $ kubectl get ingress -n my-ns | ||
- | root@gate.corp13.un:~# host webd | + | node1# kubectl get ingress -n my-ns |
- | webd.corp13.un is an alias for node2.corp13.un. | + | NAME CLASS HOSTS ADDRESS PORTS AGE |
- | node2.corp13.un has address 192.168.13.220 | + | my-webd nginx webd.corpX.un,gowebd.corpX.un 192.168.X.202,192.168.X.203 80 14m |
- | $ curl webd.corp13.un | ||
- | $ kubectl logs -l app=my-webd -n my-ns | + | $ curl webd.corpX.un |
+ | $ curl gowebd.corpX.un | ||
+ | $ curl https://gowebd.corpX.un #-kv | ||
+ | |||
+ | $ curl http://nodeN/ -H "Host: webd.corpX.un" | ||
+ | $ curl --connect-to "":"":kubeN:443 https://gowebd.corpX.un #-vk | ||
+ | |||
+ | $ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f | ||
+ | |||
+ | node1# ### kubectl delete ingress my-ingress -n my-ns | ||
</code> | </code> | ||
- | ==== Удаление объектов ==== | + | |
+ | === secrets tls === | ||
+ | |||
+ | * [[https://devopscube.com/configure-ingress-tls-kubernetes/|How To Configure Ingress TLS/SSL Certificates in Kubernetes]] | ||
<code> | <code> | ||
- | $ kubectl delete -n my-ns -f my-webd-deployment.yaml,my-webd-service.yaml,my-webd-ingress.yaml | + | $ kubectl create secret tls gowebd-tls --key gowebd.key --cert gowebd.crt -n my-ns |
+ | |||
+ | $ kubectl get secrets -n my-ns | ||
- | или | + | $ kubectl get secret/gowebd-tls -o yaml -n my-ns |
- | $ kubectl delete namespace my-ns | + | $ ###kubectl delete secret/gowebd-tls -n my-ns |
</code> | </code> | ||
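The ''gowebd.key''/''gowebd.crt'' pair referenced above can be produced for testing with a self-signed certificate (the hostname follows the example domain used in this section; not suitable for production):

```shell
# self-signed certificate for the ingress TLS example (test use only)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout gowebd.key -out gowebd.crt \
  -subj "/CN=gowebd.corpX.un" \
  -addext "subjectAltName=DNS:gowebd.corpX.un"
```

Browsers will still warn about the untrusted issuer; for a trusted certificate use a real CA or cert-manager.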
- | ==== Пример с nfs volume ==== | + | ==== Volumes ==== |
- | * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]] | + | === PersistentVolume и PersistentVolumeVolumeClaim === |
+ | <code> | ||
+ | root@node1:~# ssh node2 mkdir /disk2 | ||
+ | |||
+ | root@node1:~# ssh node2 touch /disk2/disk2_node2 | ||
+ | |||
+ | root@node1:~# kubectl label nodes node2 disk2=yes | ||
+ | |||
+ | root@node1:~# kubectl get nodes --show-labels | ||
+ | |||
+ | root@node1:~# ###kubectl label nodes node2 disk2- | ||
+ | |||
+ | root@node1:~# cat my-debian-deployment.yaml | ||
+ | </code><code> | ||
+ | ... | ||
+ | args: ["-c", "while true; do echo hello; sleep 3;done"] | ||
+ | |||
+ | volumeMounts: | ||
+ | - name: my-disk2-volume | ||
+ | mountPath: /data | ||
+ | |||
+ | # volumeMounts: | ||
+ | # - name: data | ||
+ | # mountPath: /data | ||
+ | |||
+ | volumes: | ||
+ | - name: my-disk2-volume | ||
+ | hostPath: | ||
+ | path: /disk2/ | ||
+ | nodeSelector: | ||
+ | disk2: "yes" | ||
+ | |||
+ | # volumes: | ||
+ | # - name: data | ||
+ | # persistentVolumeClaim: | ||
+ | # claimName: my-ha-pvc-sz64m | ||
+ | |||
+ | restartPolicy: Always | ||
+ | </code><code> | ||
+ | root@node1:~# kubectl apply -f my-debian-deployment.yaml | ||
+ | |||
+ | root@node1:~# kubectl get all -o wide | ||
+ | </code> | ||
+ | |||
+ | * [[https://qna.habr.com/q/629022|Несколько Claim на один Persistent Volumes?]] | ||
+ | * [[https://serveradmin.ru/hranilishha-dannyh-persistent-volumes-v-kubernetes/|Хранилища данных (Persistent Volumes) в Kubernetes]] | ||
+ | * [[https://stackoverflow.com/questions/59915899/limit-persistent-volume-claim-content-folder-size-using-hostpath|Limit persistent volume claim content folder size using hostPath]] | ||
+ | * [[https://stackoverflow.com/questions/63490278/kubernetes-persistent-volume-hostpath-vs-local-and-data-persistence|Kubernetes persistent volume: hostpath vs local and data persistence]] | ||
+ | * [[https://www.alibabacloud.com/blog/kubernetes-volume-basics-emptydir-and-persistentvolume_594834|Kubernetes Volume Basics: emptyDir and PersistentVolume]] | ||
<code> | <code> | ||
- | $ cat my-webd-nfs-deployment.yaml | + | root@node1:~# cat my-ha-pv.yaml |
+ | </code><code> | ||
+ | apiVersion: v1 | ||
+ | kind: PersistentVolume | ||
+ | metadata: | ||
+ | name: my-pv-node2-sz-128m-num-001 | ||
+ | # name: my-pv-kube3-keycloak | ||
+ | labels: | ||
+ | type: local | ||
+ | spec: | ||
+ | ## comment storageClassName for keycloak | ||
+ | storageClassName: my-ha-sc | ||
+ | capacity: | ||
+ | storage: 128Mi | ||
+ | # storage: 8Gi | ||
+ | accessModes: | ||
+ | - ReadWriteMany | ||
+ | # - ReadWriteOnce | ||
+ | hostPath: | ||
+ | path: /disk2 | ||
+ | persistentVolumeReclaimPolicy: Retain | ||
+ | nodeAffinity: | ||
+ | required: | ||
+ | nodeSelectorTerms: | ||
+ | - matchExpressions: | ||
+ | - key: kubernetes.io/hostname | ||
+ | operator: In | ||
+ | values: | ||
+ | - node2 | ||
+ | # - kube3 | ||
+ | </code><code> | ||
+ | root@node1:~# kubectl apply -f my-ha-pv.yaml | ||
+ | |||
+ | root@node1:~# kubectl get persistentvolume | ||
+ | or | ||
+ | root@node1:~# kubectl get pv | ||
+ | |||
+ | root@kube1:~# ###ssh kube3 'mkdir /disk2/; chmod 777 /disk2/' | ||
... | ... | ||
+ | root@node1:~# ###kubectl delete pv my-pv-<TAB> | ||
+ | |||
+ | root@node1:~# cat my-ha-pvc.yaml | ||
+ | </code><code> | ||
+ | apiVersion: v1 | ||
+ | kind: PersistentVolumeClaim | ||
+ | metadata: | ||
+ | name: my-ha-pvc-sz64m | ||
+ | spec: | ||
+ | storageClassName: my-ha-sc | ||
+ | # storageClassName: local-path | ||
+ | accessModes: | ||
+ | - ReadWriteMany | ||
+ | resources: | ||
+ | requests: | ||
+ | storage: 64Mi | ||
+ | </code><code> | ||
+ | root@node1:~# kubectl apply -f my-ha-pvc.yaml | ||
+ | |||
+ | root@node1:~# kubectl get persistentvolumeclaims | ||
+ | or | ||
+ | root@node1:~# kubectl get pvc | ||
+ | ... | ||
+ | |||
+ | root@node1:~# ### kubectl delete pvc my-ha-pvc-sz64m | ||
+ | </code> | ||
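Once the claim is Bound, it can be mounted the same way as the commented-out fragment in the deployment above. A standalone sketch with a hypothetical test pod consuming the claim:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pvc-test   # hypothetical name
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "ls /data && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-ha-pvc-sz64m
```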
+ | |||
+ | === Dynamic Volume Provisioning === | ||
+ | |||
+ | * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]] | ||
+ | |||
+ | === rancher local-path-provisioner === | ||
+ | |||
+ | * [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]] | ||
+ | * [[https://artifacthub.io/packages/helm/ebrianne/local-path-provisioner|This chart bootstraps a deployment on a cluster using the package manager]] | ||
+ | |||
+ | <code> | ||
+ | $ kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | ||
+ | |||
+ | $ kubectl get sc | ||
+ | |||
+ | $ kubectl -n local-path-storage get all | ||
+ | |||
+ | $ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less | ||
+ | /DEFAULT_PATH_FOR_NON_LISTED_NODES | ||
+ | |||
+ | ssh root@kube1 'mkdir /opt/local-path-provisioner' | ||
+ | ssh root@kube2 'mkdir /opt/local-path-provisioner' | ||
+ | ssh root@kube3 'mkdir /opt/local-path-provisioner' | ||
+ | ssh root@kube1 'chmod 777 /opt/local-path-provisioner' | ||
+ | ssh root@kube2 'chmod 777 /opt/local-path-provisioner' | ||
+ | ssh root@kube3 'chmod 777 /opt/local-path-provisioner' | ||
+ | </code> | ||
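With the provisioner installed, a claim that names the ''local-path'' StorageClass gets its volume created on demand under ''/opt/local-path-provisioner'' on the scheduled node. A sketch (the claim name is an example; by default the class uses WaitForFirstConsumer, so the volume appears only after a pod uses the claim):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-path-pvc   # hypothetical name
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce         # local-path volumes are node-local
  resources:
    requests:
      storage: 64Mi
```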
+ | |||
+ | * Сервис Keycloak в [[Сервис Keycloak#Kubernetes]] | ||
+ | |||
+ | <code> | ||
+ | $ kubectl get pvc -n my-keycloak-ns | ||
+ | |||
+ | $ kubectl get pv | ||
+ | |||
+ | $ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0 | ||
+ | </code> | ||
+ | === longhorn === | ||
+ | |||
+ | <code> | ||
+ | kubeN:~# apt install open-iscsi | ||
+ | </code> | ||
+ | * [[https://github.com/longhorn/longhorn]] | ||
+ | <code> | ||
+ | $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml | ||
+ | |||
+ | $ kubectl -n longhorn-system get pods -o wide --watch | ||
+ | |||
+ | Setting->General | ||
+ | |||
+ | Pod Deletion Policy When Node is Down: delete-statefulset-pod | ||
+ | </code> | ||
+ | |||
+ | Access via kubectl proxy | ||
+ | |||
+ | * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]] | ||
+ | |||
+ | <code> | ||
+ | cmder> kubectl proxy | ||
+ | </code> | ||
+ | |||
+ | * http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/ | ||
+ | |||
+ | Access via ingress | ||
+ | |||
+ | !!! Add an example with authentication !!! | ||
+ | <code> | ||
+ | student@server:~/longhorn$ cat ingress.yaml | ||
+ | apiVersion: networking.k8s.io/v1 | ||
+ | kind: Ingress | ||
+ | metadata: | ||
+ | name: longhorn-ingress | ||
+ | namespace: longhorn-system | ||
+ | spec: | ||
+ | ingressClassName: nginx | ||
+ | rules: | ||
+ | - host: lh.corp13.un | ||
+ | http: | ||
+ | paths: | ||
+ | - backend: | ||
+ | service: | ||
+ | name: longhorn-frontend | ||
+ | port: | ||
+ | number: 80 | ||
+ | path: / | ||
+ | pathType: Prefix | ||
+ | </code> | ||
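The ingress above can be protected with HTTP basic authentication via ingress-nginx annotations. A sketch of the ''metadata'' fragment, assuming a ''longhorn-auth'' secret holding an htpasswd file (created e.g. with ''htpasswd -c auth admin'' and ''kubectl -n longhorn-system create secret generic longhorn-auth --from-file=auth''):

```yaml
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: longhorn-auth
    nginx.ingress.kubernetes.io/auth-realm: "Longhorn UI"
```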
+ | |||
+ | == Using snapshots == | ||
+ | |||
+ | * [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn ?]] | ||
+ | |||
+ | * Take a snapshot | ||
+ | * Break something (delete a user) | ||
+ | * Stop the service | ||
+ | |||
+ | <code> | ||
+ | kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak | ||
+ | |||
+ | kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak-postgresql | ||
+ | </code> | ||
+ | |||
+ | * Volume -> Attach to Host (any one) in Maintenance mode, Revert to the snapshot, Detach | ||
+ | * Start the service | ||
+ | |||
+ | <code> | ||
+ | kube1:~# kubectl -n my-keycloak-ns scale --replicas 1 statefulset my-keycloak-postgresql | ||
+ | |||
+ | kube1:~# kubectl -n my-keycloak-ns scale --replicas 2 statefulset my-keycloak | ||
+ | </code> | ||
+ | |||
+ | == Using backups == | ||
+ | |||
+ | * Deploy [[Сервис NFS]] on the server host | ||
+ | |||
+ | <code> | ||
+ | Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (the Linux NFS client is not required) | ||
+ | </code> | ||
+ | * Volume -> Create Backup; delete the NS, restore the Volume from the backup, recreate the NS and the PV/PVC for the Volume as before, then install the chart and it picks up the PV/PVC | ||
+ | |||
+ | ==== ConfigMap ==== | ||
+ | |||
+ | * [[https://www.aquasec.com/cloud-native-academy/kubernetes-101/kubernetes-configmap/|Kubernetes ConfigMap: Creating, Viewing, Consuming & Managing]] | ||
+ | * [[https://blog.lapw.at/how-to-enable-ssh-into-a-kubernetes-pod/|How to enable SSH connections into a Kubernetes pod]] | ||
+ | |||
+ | <code> | ||
+ | root@node1:~# cat sshd_config | ||
+ | </code><code> | ||
+ | PermitRootLogin yes | ||
+ | PasswordAuthentication no | ||
+ | ChallengeResponseAuthentication no | ||
+ | UsePAM no | ||
+ | </code><code> | ||
+ | root@node1:~# kubectl create configmap ssh-config --from-file=sshd_config --dry-run=client -o yaml | ||
+ | ... | ||
+ | |||
+ | server:~# cat .ssh/id_rsa.pub | ||
+ | ... | ||
+ | |||
+ | root@node1:~# cat my-openssh-server-deployment.yaml | ||
+ | </code><code> | ||
+ | apiVersion: v1 | ||
+ | kind: ConfigMap | ||
+ | metadata: | ||
+ | name: ssh-config | ||
+ | data: | ||
+ | sshd_config: | | ||
+ | PermitRootLogin yes | ||
+ | PasswordAuthentication no | ||
+ | ChallengeResponseAuthentication no | ||
+ | UsePAM no | ||
+ | authorized_keys: | | ||
+ | ssh-rsa AAAAB.....C0zOcZ68= root@server.corpX.un | ||
+ | --- | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: my-openssh-server | ||
+ | spec: | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: my-openssh-server | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: my-openssh-server | ||
spec: | spec: | ||
containers: | containers: | ||
- | - name: my-webd | + | - name: my-openssh-server |
- | image: server.corp13.un:5000/student/webd:latest | + | image: linuxserver/openssh-server |
+ | command: ["/bin/sh"] | ||
+ | args: ["-c", "/usr/bin/ssh-keygen -A; usermod -p '*' root; /usr/sbin/sshd.pam -D"] | ||
+ | ports: | ||
+ | - containerPort: 22 | ||
volumeMounts: | volumeMounts: | ||
- | - name: nfs-volume | + | - name: ssh-volume |
- | mountPath: /var/www | + | subPath: sshd_config |
+ | mountPath: /etc/ssh/sshd_config | ||
+ | - name: ssh-volume | ||
+ | subPath: authorized_keys | ||
+ | mountPath: /root/.ssh/authorized_keys | ||
volumes: | volumes: | ||
- | - name: nfs-volume | + | - name: ssh-volume |
- | nfs: | + | configMap: |
- | server: 192.168.13.1 | + | name: ssh-config |
- | path: /var/www | + | --- |
- | </code> | + | apiVersion: v1 |
+ | kind: Service | ||
+ | metadata: | ||
+ | name: my-openssh-server | ||
+ | spec: | ||
+ | type: NodePort | ||
+ | ports: | ||
+ | - port: 22 | ||
+ | nodePort: 32222 | ||
+ | selector: | ||
+ | app: my-openssh-server | ||
+ | </code><code> | ||
+ | root@node1:~# kubectl apply -f my-openssh-server-deployment.yaml | ||
+ | root@node1:~# iptables-save | grep 32222 | ||
+ | |||
+ | root@node1:~# ###kubectl exec -ti my-openssh-server-NNNNNNNN-NNNNN -- bash | ||
+ | |||
+ | server:~# ssh -p 32222 nodeN | ||
+ | Welcome to OpenSSH Server | ||
+ | my-openssh-server-NNNNNNNN-NNNNN:~# nslookup my-openssh-server.default.svc.cluster.local | ||
+ | </code> | ||
==== Пример с multi container pod ==== | ==== Пример с multi container pod ==== | ||
Line 336: | Line 1311: | ||
containers: | containers: | ||
- name: my-webd | - name: my-webd | ||
- | image: server.corp13.un:5000/student/webd:latest | + | image: server.corpX.un:5000/student/webd:latest |
volumeMounts: | volumeMounts: | ||
- name: html | - name: html | ||
Line 382: | Line 1357: | ||
- | ==== Установка ==== | + | ==== Установка Helm ==== |
* [[https://helm.sh/docs/intro/install/|Installing Helm]] | * [[https://helm.sh/docs/intro/install/|Installing Helm]] | ||
+ | * [[https://github.com/helm/helm/releases|helm releases]] | ||
<code> | <code> | ||
- | $ wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz | + | # wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz |
- | $ tar -zxvf helm-v3.9.0-linux-amd64.tar.gz | + | # tar -zxvf helm-v3.9.0-linux-amd64.tar.gz |
- | $ sudo mv linux-amd64/helm /usr/local/bin/helm | + | # mv linux-amd64/helm /usr/local/bin/helm |
</code> | </code> | ||
+ | ==== Работа с готовыми Charts ==== | ||
+ | |||
+ | === ingress-nginx === | ||
+ | |||
+ | * [[https://kubernetes.github.io/ingress-nginx/deploy/|NGINX Ingress Controller Installation Guide]] | ||
+ | * [[https://stackoverflow.com/questions/56915354/how-to-install-nginx-ingress-with-hostnetwork-on-bare-metal|stackoverflow How to install nginx-ingress with hostNetwork on bare-metal?]] | ||
+ | * [[https://devpress.csdn.net/cloud/62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?]] | ||
+ | * [[https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml]] | ||
+ | |||
+ | <code> | ||
+ | $ helm upgrade ingress-nginx --install ingress-nginx \ | ||
+ | --set controller.hostNetwork=true,controller.publishService.enabled=false,controller.kind=DaemonSet,controller.config.use-forwarded-headers=true \ | ||
+ | --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace | ||
+ | |||
+ | $ helm list --namespace ingress-nginx | ||
+ | $ helm list -A | ||
+ | |||
+ | $ kubectl get all -n ingress-nginx -o wide | ||
+ | |||
+ | $ helm delete ingress-nginx --namespace ingress-nginx | ||
+ | |||
+ | |||
+ | $ mkdir ingress-nginx; cd ingress-nginx | ||
+ | |||
+ | $ helm template ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx | tee t1.yaml | ||
+ | |||
+ | $ helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx | tee values.yaml.orig | ||
+ | |||
+ | $ cat values.yaml | ||
+ | </code><code> | ||
+ | controller: | ||
+ | hostNetwork: true | ||
+ | publishService: | ||
+ | enabled: false | ||
+ | kind: DaemonSet | ||
+ | # config: | ||
+ | # use-forwarded-headers: true | ||
+ | # allow-snippet-annotations: true | ||
+ | </code><code> | ||
+ | $ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml | ||
+ | |||
+ | $ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace | ||
+ | |||
+ | $ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers | ||
+ | |||
+ | $ kubectl -n ingress-nginx describe service/ingress-nginx-controller | ||
+ | ... | ||
+ | Endpoints: 192.168.X.221:80,192.168.X.222:80,192.168.X.223:80 | ||
+ | ... | ||
+ | |||
+ | # kubectl get clusterrole -A | grep -i ingress | ||
+ | # kubectl get clusterrolebindings -A | grep -i ingress | ||
+ | # kubectl get validatingwebhookconfigurations -A | grep -i ingress | ||
+ | </code> | ||
==== Развертывание своего приложения ==== | ==== Развертывание своего приложения ==== | ||
* [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | * [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | ||
* [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | * [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | ||
+ | * [[https://stackoverflow.com/questions/69817305/how-set-field-app-version-in-helm3-chart|how set field app-version in helm3 chart?]] | ||
<code> | <code> | ||
- | $ helm create webd-chart | + | gitlab-runner@server:~/gowebd-k8s$ helm create webd-chart |
+ | |||
+ | $ less webd-chart/templates/deployment.yaml | ||
$ cat webd-chart/Chart.yaml | $ cat webd-chart/Chart.yaml | ||
Line 410: | Line 1443: | ||
... | ... | ||
appVersion: "latest" | appVersion: "latest" | ||
+ | #appVersion: ver1.7 #for vanilla argocd | ||
</code><code> | </code><code> | ||
$ cat webd-chart/values.yaml | $ cat webd-chart/values.yaml | ||
</code><code> | </code><code> | ||
... | ... | ||
+ | replicaCount: 2 | ||
+ | |||
image: | image: | ||
- | repository: server.corp13.un:5000/student/webd | + | repository: server.corpX.un:5000/student/webd |
pullPolicy: Always | pullPolicy: Always | ||
... | ... | ||
Line 422: | Line 1458: | ||
... | ... | ||
service: | service: | ||
- | type: NodePort | + | # type: NodePort |
... | ... | ||
ingress: | ingress: | ||
enabled: true | enabled: true | ||
+ | className: "nginx" | ||
... | ... | ||
hosts: | hosts: | ||
- | - host: webd.corp13.un | + | - host: webd.corpX.un |
... | ... | ||
+ | # tls: [] | ||
+ | # tls: | ||
+ | # - secretName: gowebd-tls | ||
+ | # hosts: | ||
+ | # - gowebd.corpX.un | ||
+ | ... | ||
+ | #APWEBD_HOSTNAME: "apwebd.corp13.un" | ||
+ | #KEYCLOAK_HOSTNAME: "keycloak.corp13.un" | ||
+ | #REALM_NAME: "corp13" | ||
</code><code> | </code><code> | ||
$ less webd-chart/templates/deployment.yaml | $ less webd-chart/templates/deployment.yaml | ||
Line 435: | Line 1481: | ||
... | ... | ||
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" | ||
+ | # env: | ||
+ | # - name: APWEBD_HOSTNAME | ||
+ | # value: "{{ .Values.APWEBD_HOSTNAME }}" | ||
+ | # - name: KEYCLOAK_HOSTNAME | ||
+ | # value: "{{ .Values.KEYCLOAK_HOSTNAME }}" | ||
+ | # - name: REALM_NAME | ||
+ | # value: "{{ .Values.REALM_NAME }}" | ||
+ | ... | ||
+ | # livenessProbe: | ||
+ | # httpGet: | ||
+ | # path: / | ||
+ | # port: http | ||
+ | # readinessProbe: | ||
+ | # httpGet: | ||
+ | # path: / | ||
+ | # port: http | ||
... | ... | ||
</code><code> | </code><code> | ||
- | !!! Был замечен "глюк" DNS, из-за которого не загружался Docker образ, "лечился" предварительным созданием namespace | + | $ helm template my-webd webd-chart/ | less |
- | $ helm install my-webd webd-chart/ --n my-ns --create-namespace --wait | + | $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait |
+ | |||
+ | $ kubectl describe events -n my-ns | less | ||
$ export HELM_NAMESPACE=my-ns | $ export HELM_NAMESPACE=my-ns | ||
Line 445: | Line 1509: | ||
$ helm list | $ helm list | ||
- | $ helm upgrade my-webd webd-chart/ --set=image.tag=ver1.10 | + | $ ### helm upgrade my-webd webd-chart/ --set=image.tag=ver1.10 |
$ helm history my-webd | $ helm history my-webd | ||
Line 460: | Line 1524: | ||
* [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | * [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | ||
* [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | * [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | ||
+ | * [[Инструмент GitLab#Подключение через API]] - Role: Maintainer, api, read_registry, write_registry | ||
<code> | <code> | ||
- | $ helm repo add --username student --password NNNNNN-NNNNNNNNNNNNN webd http://192.168.13.1/api/v4/projects/6/packages/helm/stable | + | gitlab-runner@server:~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable |
+ | "webd" has been added to your repositories | ||
gitlab-runner@server:~/gowebd-k8s$ ### helm repo remove webd

gitlab-runner@server:~/gowebd-k8s$ helm repo list

gitlab-runner@server:~/gowebd-k8s$ helm package webd-chart

gitlab-runner@server:~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz

gitlab-runner@server:~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push

gitlab-runner@server:~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd

gitlab-runner@server:~/gowebd-k8s$ rm webd-chart-0.1.1.tgz
</code><code>
kube1:~# helm repo add webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable

kube1:~# helm repo update

kube1:~# helm search repo webd

kube1:~# helm repo update webd

kube1:~# helm install my-webd webd/webd-chart

kube1:~# ###helm uninstall my-webd
</code><code>
kube1:~# mkdir gowebd; cd gowebd

kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable

kube1:~/gowebd# helm show values webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | tee values.yaml.orig

kube1:~/gowebd# cat values.yaml
</code><code>
replicaCount: 3
image:
  tag: "ver1.1"
#REALM_NAME: "corp"
</code><code>
kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable

$ curl http://kubeN -H "Host: gowebd.corpX.un"

kube1:~/gowebd# ###helm uninstall my-webd -n my-ns
</code>
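`helm package` bundles the chart directory into a versioned tarball, which `tar -tf` then lists. As a purely local illustration of the layout that step produces, a chart skeleton can be built and archived with plain `tar` (the name `webd-chart` and version 0.1.1 mirror the listing above; helm itself is not needed for this sketch):

```shell
# Minimal chart skeleton; `helm package` emits the same <name>-<version>.tgz
# naming with every file under a single top-level <name>/ directory.
mkdir -p webd-chart/templates
cat > webd-chart/Chart.yaml <<'EOF'
apiVersion: v2
name: webd-chart
version: 0.1.1
EOF
echo 'replicaCount: 2' > webd-chart/values.yaml
tar -czf webd-chart-0.1.1.tgz webd-chart
tar -tf webd-chart-0.1.1.tgz
```

A real chart also carries manifests under `templates/`; `helm cm-push` then uploads exactly such a tarball to the repository.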
==== Working with public repositories ====
<code>
helm repo add gitlab https://charts.gitlab.io

helm search repo -l gitlab/gitlab-runner

helm show values gitlab/gitlab-runner | tee values.yaml

gitlab-runner@server:~$ diff values.yaml values.yaml.orig
</code><code>
...
gitlabUrl: http://server.corpX.un/
...
runnerRegistrationToken: "NNNNNNNNNNNNNNNNNNNNNNNN"
...
148,149c142
< create: true
---
> create: false
325d317
< privileged = true
432c424
< allowPrivilegeEscalation: true
---
> allowPrivilegeEscalation: false
435c427
< privileged: true
---
> privileged: false
</code><code>
gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace

gitlab-runner@server:~$ kubectl get all -n gitlab-runner
</code><code>
$ helm search hub -o json wordpress | jq '.' | less
$ helm show values bitnami/wordpress
</code>
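The `diff values.yaml values.yaml.orig` step above uses classic diff hunk notation: `148,149c142` names the changed line ranges, `<` lines come from the edited `values.yaml`, and `>` lines from the saved original. A self-contained illustration with fabricated file contents:

```shell
# Fabricated values files, only to reproduce the notation of the listing above.
printf 'gitlabUrl: http://server.corpX.un/\nrbac:\n  create: false\n' > values.yaml.orig
printf 'gitlabUrl: http://server.corpX.un/\nrbac:\n  create: true\n'  > values.yaml
diff values.yaml values.yaml.orig || true   # diff exits 1 when files differ
```

This prints `3c3`, then `<   create: true`, `---`, `>   create: false`: the same shape as the hunks in the listing.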

===== Kubernetes Dashboard =====

  * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
  * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

<code>
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

$ cat dashboard-user-role.yaml
</code><code>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
type: kubernetes.io/service-account-token
</code><code>
$ kubectl apply -f dashboard-user-role.yaml

$ kubectl -n kubernetes-dashboard create token admin-user

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath={".data.token"} | base64 -d ; echo

cmder$ kubectl proxy
</code>
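The token lives base64-encoded in the secret's `.data.token` field, which is why the `jsonpath` output above is piped through `base64 -d`. The decode step in isolation, with a made-up token value (a real service-account token is a long JWT):

```shell
token_plain='sample-sa-token'                           # hypothetical value
token_encoded=$(printf '%s' "$token_plain" | base64)    # as stored in .data.token
printf '%s' "$token_encoded" | base64 -d ; echo         # prints sample-sa-token
```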

  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
===== Additional materials =====
==== bare-metal minikube ====

<code>
student@node2:~$ sudo apt install conntrack

https://computingforgeeks.com/install-mirantis-cri-dockerd-as-docker-engine-shim-for-kubernetes/
...

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
...

student@node2:~$ minikube start --driver=none --insecure-registry "server.corpX.un:5000"
</code>

==== minikube dashboard ====
<code>
student@node1:~$ minikube dashboard &
...
Opening http://127.0.0.1:NNNNN/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser
...
/home/mobaxterm> ssh -L NNNNN:localhost:NNNNN student@192.168.X.10
Now the same link works on the Windows host system as well
</code>

==== Connecting to minikube from another system ====

  * If it is not minikube, a copy of .kube/config alone is sufficient
  * [[https://habr.com/ru/company/flant/blog/345580/|see: Setting up GitLab Runner]]

<code>
student@node1:~$ tar -cvzf kube-config.tar.gz .kube/config .minikube/ca.crt .minikube/profiles/minikube

gitlab-runner@server:~$ scp student@node1:kube-config.tar.gz .

gitlab-runner@server:~$ tar -xvf kube-config.tar.gz

gitlab-runner@server:~$ cat .kube/config
</code><code>
...
certificate-authority: /home/gitlab-runner/.minikube/ca.crt
...
client-certificate: /home/gitlab-runner/.minikube/profiles/minikube/client.crt
client-key: /home/gitlab-runner/.minikube/profiles/minikube/client.key
...
</code>
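The archive above stores relative paths (`.kube/config`, `.minikube/...`), so unpacking it in the runner's home directory recreates exactly the layout that the absolute paths inside `.kube/config` then resolve to. The pack-and-unpack round trip can be sketched locally with placeholder files (the `home1`/`home2` directory names are made up):

```shell
mkdir -p home1/.kube home1/.minikube/profiles/minikube
echo 'apiVersion: v1' > home1/.kube/config
echo 'placeholder'    > home1/.minikube/ca.crt
echo 'placeholder'    > home1/.minikube/profiles/minikube/client.crt
# -C packs/unpacks relative to the given directory, like running tar from $HOME
tar -C home1 -czf kube-config.tar.gz .kube/config .minikube/ca.crt .minikube/profiles/minikube
mkdir -p home2 && tar -C home2 -xzf kube-config.tar.gz
find home2 -type f
```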
==== kompose ====
  * [[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes|What's the difference between Docker Compose and Kubernetes?]]
  * [[https://loft.sh/blog/docker-compose-to-kubernetes-step-by-step-migration/|Docker Compose to Kubernetes: Step-by-Step Migration]]
  * [[https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/|Translate a Docker Compose File to Kubernetes Resources]]
<code>
root@gate:~# curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
root@gate:~# chmod +x kompose
root@gate:~# sudo mv ./kompose /usr/local/bin/kompose
</code>
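Once installed, `kompose convert` reads `docker-compose.yml` from the current directory and emits one Deployment per Compose service, plus a Service for each published port. A hypothetical minimal input (the image name is illustrative; the `kompose convert` line is left commented since it needs the binary installed as above):

```shell
cat > docker-compose.yml <<'EOF'
services:
  webd:
    image: server.corpX.un:5000/student/webd:latest
    ports:
      - "80:80"
EOF
# kompose convert
#   would write webd-deployment.yaml and webd-service.yaml,
#   with resources labeled io.kompose.service: webd
```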