====== Система Kubernetes ======

  * [[https://habr.com/ru/companies/vk/articles/645985/|Почему Kubernetes — это новый Linux: 4 аргумента]]
  * [[https://kubernetes.io/ru/docs/home/|Документация по Kubernetes (на русском)]]
  * [[https://habr.com/ru/company/flant/blog/513908/|Полноценный Kubernetes с нуля на Raspberry Pi]]
  * [[https://habr.com/ru/companies/domclick/articles/566224/|Различия между Docker, containerd, CRI-O и runc]]
  * [[https://daily.dev/blog/kubernetes-cni-comparison-flannel-vs-calico-vs-canal|Kubernetes CNI Comparison: Flannel vs Calico vs Canal]]
  * [[https://habr.com/ru/companies/slurm/articles/464987/|Хранилища в Kubernetes: OpenEBS vs Rook (Ceph) vs Rancher Longhorn vs StorageOS vs Robin vs Portworx vs Linstor]]
  * [[https://parshinpn.ru/ru/blog/external-connectivity-kubernetes-calico|Настраиваем сетевую связность внешнего узла с кластером Kubernetes (route reflector)]]
  * [[https://habr.com/ru/company/vk/blog/542730/|11 факапов PRO-уровня при внедрении Kubernetes и как их избежать]]
  * [[https://www.youtube.com/watch?v=XZQ7-7vej6w|Наш опыт с Kubernetes в небольших проектах / Дмитрий Столяров (Флант)]]
  * [[https://habr.com/ru/companies/aenix/articles/541118/|Ломаем и чиним Kubernetes]]

===== Инструмент командной строки kubectl =====

  * [[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands]]
  * [[https://kubernetes.io/ru/docs/reference/kubectl/cheatsheet/|Шпаргалка по kubectl]]

==== Установка ====
...
# mv kubectl /usr/local/bin/
</code>

== Debian 13 ==
<code>
# apt install kubectl
</code>
...
</code><code>
kubectl version

kubectl get all -o wide --all-namespaces #-A

kubectl get nodes
</code>
==== Настройка автодополнения ====
<code>
kube1:~# less /etc/bash_completion.d/kubectl.sh

or

$ cat ~/.profile
</code><code>
#...
source <(kubectl completion bash)

alias k=kubectl
complete -F __start_kubectl k
#...
</code>
==== Создание файла конфигурации kubectl ====

  * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_config/kubectl_config_set-credentials/]]
<code>
user1@client1:~$ ###export KUBECONFIG=~/.kube/config_test
user1@client1:~$ ###rm -rf .kube/

user1@client1:~$ kubectl config set-cluster cluster.local --server=https://192.168.13.221:6443 --insecure-skip-tls-verify=true
kubeN# ###cat /etc/kubernetes/ssl/ca.crt
OR
root@my-debian:~# kubectl config set-cluster cluster.local --server=https://192.168.13.221:6443 --certificate-authority=/run/secrets/kubernetes.io/serviceaccount/ca.crt #--embed-certs=true

user1@client1:~$ cat .kube/config

user1@client1:~$ kubectl config set-credentials user1 --client-certificate=user1.crt --client-key=user1.key #--embed-certs=true
OR
user1@client1:~$ kubectl config set-credentials user1 --token=...................................
OR
root@my-debian:~# kubectl config set-credentials user1 --token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)

user1@client1:~$ kubectl config get-users

user1@client1:~$ kubectl config set-context default-context --cluster=cluster.local --user=user1

user1@client1:~$ kubectl config use-context default-context

user1@client1:~$ kubectl auth whoami

user1@client1:~$ kubectl auth can-i get pods #-n my-ns

user1@client1:~$ kubectl get pods #-A
Error from server (Forbidden) or ...
</code>
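For reference, the sequence above assembles a kubeconfig roughly like the sketch below (the server address, user name and credential file names are the example values from this section; with ''--embed-certs=true'' the file would carry base64 certificate data instead of paths):
<code>
apiVersion: v1
kind: Config
clusters:
- name: cluster.local
  cluster:
    server: https://192.168.13.221:6443
    insecure-skip-tls-verify: true
users:
- name: user1
  user:
    client-certificate: user1.crt
    client-key: user1.key
contexts:
- name: default-context
  context:
    cluster: cluster.local
    user: user1
current-context: default-context
</code>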
===== Установка minikube =====

  * [[https://minikube.sigs.k8s.io/docs/start/|Documentation/Get Started/minikube start]]
  * [[https://stackoverflow.com/questions/42564058/how-can-i-use-local-docker-images-with-minikube|How can I use local Docker images with Minikube?]]
<code>
<code>
gitlab-runner@server:~$ time minikube start --driver=docker --insecure-registry "server.corpX.un:5000"
real 41m8.320s
...
gitlab-runner@server:~$ minikube ip
</code>

==== minikube kubectl ====
<code>
gitlab-runner@server:~$ minikube kubectl -- get pods -A

gitlab-runner@server:~$ cat ~/.profile
</code><code>
#...
# does not work in gitlab-ci
alias kubectl='minikube kubectl --'
#...
</code><code>
gitlab-runner@server:~$ kubectl get pods -A
</code>

or

<code>
# cp -v /home/gitlab-runner/.minikube/cache/linux/amd64/v*/kubectl /usr/local/bin/
</code>

or

  * [[#Инструмент командной строки kubectl]]

==== minikube addons list ====
<code>
gitlab-runner@server:~$ minikube addons list
gitlab-runner@server:~$ minikube addons configure registry-creds
...
Do you want to enable Docker Registry? [y/n]: y
...
gitlab-runner@server:~$ minikube addons enable registry-creds
</code>

==== minikube start stop delete ====
<code>
gitlab-runner@server:~$ ###minikube stop

gitlab-runner@server:~$ ### minikube delete
gitlab-runner@server:~$ ### rm -rv .minikube/

gitlab-runner@server:~$ ###minikube start
</code>
===== Кластер Kubernetes =====
==== Развертывание через kubeadm ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/|Installing kubeadm]]
  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/|kubernetes.io Creating a cluster with kubeadm]]
  * [[https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/|How to Install Kubernetes Cluster on Ubuntu 22.04]]
  * [[https://www.linuxtechi.com/install-kubernetes-cluster-on-debian/|How to Install Kubernetes Cluster on Debian 12/11]]
  * [[https://www.baeldung.com/ops/kubernetes-cluster-components|Kubernetes Cluster Components]]
=== Подготовка узлов ===

...
=== Установка ПО ===

=== !!! Обратитесь к преподавателю !!! ===

== Установка и настройка CRI ==
<code>
node1_2_3# apt-get install -y docker.io

Check: if the output of
node1# containerd config dump | grep SystemdCgroup
is not
SystemdCgroup = true
then run the following four commands:

bash -c 'mkdir -p /etc/containerd/
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
'
bash -c 'containerd config default > /etc/containerd/config.toml
ssh node2 "containerd config default > /etc/containerd/config.toml"
ssh node3 "containerd config default > /etc/containerd/config.toml"
'
bash -c 'sed -i "s/SystemdCgroup \= false/SystemdCgroup \= true/g" /etc/containerd/config.toml
ssh node2 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
ssh node3 sed -i \"s/SystemdCgroup \= false/SystemdCgroup \= true/g\" /etc/containerd/config.toml
'
bash -c 'service containerd restart
ssh node2 service containerd restart
ssh node3 service containerd restart
'
</code>
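The SystemdCgroup substitution can be rehearsed on a scratch copy before touching the nodes; a minimal local sketch (the file path and the sample fragment are made up):
<code>
# A sample fragment shaped like `containerd config default` output
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# The same substitution as on the nodes
sed -i "s/SystemdCgroup \= false/SystemdCgroup \= true/g" /tmp/config.toml

grep SystemdCgroup /tmp/config.toml
#   SystemdCgroup = true
</code>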
== Подключаем репозиторий и устанавливаем ПО ==
<code>
bash -c 'mkdir -p /etc/apt/keyrings
ssh node2 mkdir -p /etc/apt/keyrings
ssh node3 mkdir -p /etc/apt/keyrings
'
bash -c 'curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
ssh node2 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
ssh node3 "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg"
'
bash -c 'echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
ssh node2 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
ssh node3 echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \| tee /etc/apt/sources.list.d/kubernetes.list
'
bash -c 'apt-get update && apt-get install -y kubelet kubeadm kubectl
ssh node2 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
ssh node3 "apt-get update && apt-get install -y kubelet kubeadm kubectl"
'
Takes about 2 minutes
</code>
=== Инициализация master ===

  * [[https://stackoverflow.com/questions/70416935/create-same-master-and-working-node-in-kubenetes|Create same master and working node in kubenetes]]
<code>
root@node1:~# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.X.201
Takes about 3 minutes

root@node1:~# mkdir -p $HOME/.kube
root@node1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</code>
=== Настройка сети ===
<code>
root@nodeN:~# lsmod | grep br_netfilter
</code>
  * [[Управление ядром и модулями в Linux#Модули ядра]]
<code>
root@node1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code>
=== Проверка работоспособности ===
<code>
root@node1:~# kubectl get pod --all-namespaces -o wide

root@node1:~# kubectl get --raw='/readyz?verbose'
</code>
<code>
root@node2_3:~# curl -k https://node1:6443/livez?verbose

root@node2_3:~# kubeadm join 192.168.X.201:6443 --token NNNNNNNNNNNNNNNNNNNN \
--discovery-token-ca-cert-hash sha256:NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

root@node2_3:~# curl -sSL http://127.0.0.1:10248/healthz

root@node1:~# kubeadm token list

root@node1:~# kubeadm token create --print-join-command
</code>
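The join line printed by `kubeadm token create --print-join-command` can also be parsed in scripts; a sketch with a made-up token and hash:
<code>
# Sample output of --print-join-command (token and hash values are fake)
JOIN='kubeadm join 192.168.X.201:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:0123456789abcdef'

# Extract the values following --token and --discovery-token-ca-cert-hash
TOKEN=$(echo "$JOIN" | awk '{for(i=1;i<NF;i++) if($i=="--token") print $(i+1)}')
HASH=$(echo "$JOIN" | awk '{for(i=1;i<NF;i++) if($i=="--discovery-token-ca-cert-hash") print $(i+1)}')
echo "$TOKEN"    # abcdef.0123456789abcdef
echo "$HASH"     # sha256:0123456789abcdef
</code>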
=== Проверка состояния ===
<code>
...
root@node1:~# kubectl get nodes -o wide

root@node1:~# kubectl describe node node2
</code>
$ kubectl cordon kube3
$ time kubectl drain kube3 #--ignore-daemonsets --delete-emptydir-data --force
$ kubectl delete node kube3
=== Настройка доступа к Insecure Private Registry ===
=== !!! Обратитесь к преподавателю !!! ===

  * [[https://github.com/containerd/containerd/issues/4938|Unable to pull image from insecure registry, http: server gave HTTP response to HTTPS client #4938]]
<code>
root@node1:~# mkdir -p /etc/containerd/

root@node1:~# ###containerd config default > /etc/containerd/config.toml

root@node1:~# cat /etc/containerd/config.toml
</code><code>
...
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."server.corpX.un:5000"]
        endpoint = ["http://server.corpX.un:5000"]
...
</code><code>
node1# bash -c '
ssh node2 mkdir -p /etc/containerd/
ssh node3 mkdir -p /etc/containerd/
scp /etc/containerd/config.toml node2:/etc/containerd/config.toml
scp /etc/containerd/config.toml node3:/etc/containerd/config.toml
...
root@nodeN:~# containerd config dump | less
</code>

== containerd v3 ==

  * [[https://stackoverflow.com/questions/79305194/unable-to-pull-image-from-insecure-registry-http-server-gave-http-response-to/79308521#79308521]]

<code>
# mkdir -p /etc/containerd/certs.d/server.corpX.un:5000/

# cat /etc/containerd/certs.d/server.corpX.un:5000/hosts.toml
</code><code>
[host."http://server.corpX.un:5000"]
  capabilities = ["pull", "resolve", "push"]
  skip_verify = true
</code><code>
# systemctl restart containerd.service
</code>
<code>
root@nodeN:~# crictl -r unix:///run/containerd/containerd.sock pull server.corpX.un:5000/student/gowebd

root@kubeN:~# crictl pull server.corpX.un:5000/student/pywebd2
</code>
==== Развертывание через Kubespray ====

  * [[https://kubernetes.io/docs/setup/production-environment/tools/kubespray/|Installing Kubernetes with Kubespray]]

=== Подготовка к развертыванию через Kubespray ===
<code>
server# ssh-keygen ### -t rsa
server# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4;
</code>

=== Вариант 1 (ansible) ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/v2.26.0/README.md]]
  * [[Язык программирования Python#Виртуальная среда Python]]
<code>
(venv1) server# git clone https://github.com/kubernetes-sigs/kubespray
(venv1) server# cd kubespray/

(venv1) server:~/kubespray# git tag -l

(venv1) server:~/kubespray# git checkout tags/v2.26.0
or
(venv1) server:~/kubespray# git checkout tags/v2.27.0

(venv1) server:~/kubespray# time pip3 install -r requirements.txt

(venv1) server:~/kubespray# cp -rvfpT inventory/sample inventory/mycluster

(venv1) server:~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
    kube1:
    kube2:
    kube3:
    kube4:
  children:
    kube_control_plane:
      hosts:
        kube1:
        kube2:
    kube_node:
      hosts:
        kube1:
        kube2:
        kube3:
    etcd:
      hosts:
        kube1:
        kube2:
        kube3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
</code><code>
(venv1) server:~/kubespray# ansible all -m ping -i inventory/mycluster/hosts.yaml
</code>
  * [[Сервис Ansible#Использование модулей]]: Ansible modules for disabling swap
  * [[Сервис Ansible#Использование ролей]]: an Ansible role for network configuration

=== Развертывание кластера через Kubespray ===
<code>
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real 45m31.796s
...
</code>
=== Добавление узла через Kubespray ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/nodes.md|Adding/replacing a node (github.com/kubernetes-sigs/kubespray)]]
  * [[https://nixhub.ru/posts/k8s-nodes-scale/|K8s - добавление нод через kubespray]]
  * [[https://blog.unetresgrossebite.com/?p=934|Redeploy Kubernetes Nodes with KubeSpray]]

<code>
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
    ...
    kube4:
...
  children:
    kube_node:
      hosts:
        ...
        kube4:
...
</code><code>
(venv1) server:~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml
real 6m31.562s

~/kubespray# ###time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml
real 17m37.459s
...
ingress_nginx_host_network: true
...
</code>

=== Вариант 2 (docker) ===

  * [[https://github.com/kubernetes-sigs/kubespray/blob/v2.29.0/README.md]]

<code>
server:~# mkdir -p inventory/sample

server:~# cat inventory/sample/inventory.ini
</code><code>
#[all]
#kube1 ansible_host=192.168.X.221
#kube2 ansible_host=192.168.X.222
#kube3 ansible_host=192.168.X.223
##kube4 ansible_host=192.168.X.224

[kube_control_plane]
kube[1:3]

[etcd:children]
kube_control_plane

[kube_node]
kube[1:3]
#kube[1:4]
</code><code>
server:~# docker run --userns=host --rm -it -v /root/inventory/sample:/inventory -v /root/.ssh/:/root/.ssh/ quay.io/kubespray/kubespray:v2.29.0 bash

root@cf764ca3b291:/kubespray# time ansible-playbook -i /inventory/inventory.ini cluster.yml
...
real 12m18.679s
...
</code>

==== Управление образами ====
<code>
kubeN#
crictl pull server.corpX.un:5000/student/gowebd
crictl images
crictl rmi server.corpX.un:5000/student/gowebd
</code>

==== Обновление сертификатов ====

  * [[https://weng-albert.medium.com/updating-kubernetes-certificates-easy-peasy-en-139fc07f26c8|Updating Kubernetes Certificates: Easy Peasy!(En)]]
  * [[https://medium.com/@reza.sadriniaa/automatic-kubernetes-certificate-renewal-a-step-by-step-guide-c4320192a74d|Automatic Kubernetes Certificate Renewal: A Step-by-Step Guide]]
<code>
kubeM:~# kubeadm certs check-expiration

kubeM:~# cp -rp /etc/kubernetes /root/old_k8s_config

kubeM:~# kubeadm certs renew all
...
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

kubeM:~# cp /etc/kubernetes/admin.conf /root/.kube/config
</code>
===== Базовые объекты k8s =====
<code>
$ kubectl api-resources

$ ###kubectl run -ti --rm my-debian --image=debian --overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}'

$ kubectl run my-debian --image=debian -- "sleep" "60"

$ kubectl get pods

kubeN# crictl ps | grep debi
...
$ kubectl delete pod my-debian
$ ###kubectl delete pod my-debian --grace-period=0 --force

$ kubectl create deployment my-debian --image=debian -- "sleep" "infinity"

$ kubectl get all
$ kubectl get deployments
$ kubectl get replicasets
</code>
| * [[#Настройка автодополнения]] | * [[#Настройка автодополнения]] | ||
| Line 480: | Line 662: | ||
| $ kubectl exec -ti my-debian-NNNNNNNNN-NNNNN -- bash | $ kubectl exec -ti my-debian-NNNNNNNNN-NNNNN -- bash | ||
| Ctrl-D | Ctrl-D | ||
| + | </code> | ||
| + | * [[Технология Docker#Анализ параметров запущенного контейнера изнутри]] | ||
| + | <code> | ||
| $ kubectl get deployment my-debian -o yaml | $ kubectl get deployment my-debian -o yaml | ||
| </code> | </code> | ||
| Line 491: | Line 675: | ||
| $ kubectl delete deployment my-debian | $ kubectl delete deployment my-debian | ||
| </code> | </code> | ||
| + | |||
| + | ==== Manifest ==== | ||
| + | |||
| * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]] | * [[https://kubernetes.io/docs/reference/glossary/?all=true#term-manifest|Kubernetes Documentation Reference Glossary/Manifest]] | ||
| <code> | <code> | ||
| Line 510: | Line 697: | ||
| app: my-debian | app: my-debian | ||
| spec: | spec: | ||
| + | #serviceAccountName: admin-user | ||
| containers: | containers: | ||
| - name: my-debian | - name: my-debian | ||
| image: debian | image: debian | ||
| command: ["/bin/sh"] | command: ["/bin/sh"] | ||
| - | args: ["-c", "while true; do echo hello; sleep 3;done"] | + | args: ["-c", "while :;do echo -n random-value:;od -A n -t d -N 1 /dev/urandom;sleep 5; done"] |
| + | resources: | ||
| + | requests: | ||
| + | memory: "64Mi" | ||
| + | cpu: "250m" | ||
| + | limits: | ||
| + | memory: "128Mi" | ||
| + | cpu: "500m" | ||
| restartPolicy: Always | restartPolicy: Always | ||
| </code><code> | </code><code> | ||
| - | $ kubectl apply -f my-debian-deployment.yaml | + | $ kubectl apply -f my-debian-deployment.yaml #--dry-run=client #-o yaml |
| + | |||
| + | $ kubectl logs -l app=my-debian -f | ||
| ... | ... | ||
| $ kubectl delete -f my-debian-deployment.yaml | $ kubectl delete -f my-debian-deployment.yaml | ||
| </code> | </code> | ||
| + | |||
| ==== namespace for your application ==== | ==== namespace for your application ==== | ||
| + | ==== Deployment ==== | ||
| + | |||
| + | * [[https://stackoverflow.com/questions/52857825/what-is-an-endpoint-in-kubernetes|What is an 'endpoint' in Kubernetes?]] | ||
| * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]] | * [[https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html|How to use an NFS volume]] | ||
| + | * [[https://www.kryukov.biz/kubernetes/lokalnye-volumes/emptydir/|emptyDir]] | ||
| * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]] | * [[https://hub.docker.com/_/httpd|The Apache HTTP Server Project - httpd Docker Official Image]] | ||
| + | * [[https://habr.com/ru/companies/oleg-bunin/articles/761662/|Дополнительные контейнеры в Kubernetes и где они обитают: от паттернов к автоматизации управления]] | ||
| + | * [[https://stackoverflow.com/questions/39436845/multiple-command-in-poststart-hook-of-a-container|multiple command in postStart hook of a container]] | ||
| + | * [[https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes|How to set multiple commands in one yaml file with Kubernetes?]] | ||
| <code> | <code> | ||
| Line 535: | Line 740: | ||
| $ ### kubectl delete deployment my-webd -n my-ns | $ ### kubectl delete deployment my-webd -n my-ns | ||
| - | $ cd webd/ | + | $ mkdir ??webd-k8s/; cd $_ |
| $ cat my-webd-deployment.yaml | $ cat my-webd-deployment.yaml | ||
| Line 543: | Line 748: | ||
| metadata: | metadata: | ||
| name: my-webd | name: my-webd | ||
| + | # annotations: | ||
| + | # kubernetes.io/change-cause: "update to ver1.2" | ||
| spec: | spec: | ||
| selector: | selector: | ||
| Line 558: | Line 765: | ||
| # image: server.corpX.un:5000/student/webd | # image: server.corpX.un:5000/student/webd | ||
| # image: server.corpX.un:5000/student/webd:ver1.N | # image: server.corpX.un:5000/student/webd:ver1.N | ||
| + | # image: httpd | ||
| + | # args: ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker"] | ||
| # imagePullPolicy: "Always" | # imagePullPolicy: "Always" | ||
| - | # image: httpd | ||
| # lifecycle: | # lifecycle: | ||
| # postStart: | # postStart: | ||
| # exec: | # exec: | ||
| - | # command: ["/bin/sh", "-c", "echo Hello from apache2 on $(hostname) > /usr/local/apache2/htdocs/index.html"] | + | # command: |
| + | # - /bin/sh | ||
| + | # - -c | ||
| + | # - | | ||
| + | # #test -f /usr/local/apache2/htdocs/index.html && exit 0 | ||
| + | # mkdir -p /usr/local/apache2/htdocs/ | ||
| + | # cd /usr/local/apache2/htdocs/ | ||
| + | # echo "<h1>Hello from apache2 on $(hostname) at $(date)</h1>" > index.html | ||
| + | # echo "<img src=img/logo.gif>" >> index.html | ||
| # env: | # env: | ||
| + | # - name: PYWEBD_DOC_ROOT | ||
| + | # value: "/usr/local/apache2/htdocs/" | ||
| + | # - name: PYWEBD_PORT | ||
| + | # value: "4080" | ||
| # - name: APWEBD_HOSTNAME | # - name: APWEBD_HOSTNAME | ||
| # value: "apwebd.corpX.un" | # value: "apwebd.corpX.un" | ||
| Line 578: | Line 798: | ||
| # httpGet: | # httpGet: | ||
| # port: 80 | # port: 80 | ||
| + | # #scheme: HTTPS | ||
| + | |||
| + | # volumeMounts: | ||
| + | # - name: htdocs-volume | ||
| + | # mountPath: /usr/local/apache2/htdocs | ||
| + | |||
| # volumeMounts: | # volumeMounts: | ||
| # - name: nfs-volume | # - name: nfs-volume | ||
| # mountPath: /var/www | # mountPath: /var/www | ||
| + | |||
| + | # volumes: | ||
| + | # - name: htdocs-volume | ||
| + | # emptyDir: {} | ||
| + | |||
| + | |||
| # volumes: | # volumes: | ||
| # - name: nfs-volume | # - name: nfs-volume | ||
| Line 587: | Line 819: | ||
| # server: server.corpX.un | # server: server.corpX.un | ||
| # path: /var/www | # path: /var/www | ||
| + | |||
| + | # initContainers: | ||
| + | # - name: load-htdocs-files | ||
| + | # image: curlimages/curl | ||
| + | ## command: ['sh', '-c', 'mkdir /mnt/img; curl http://val.bmstu.ru/unix/Media/logo.gif > /mnt/img/logo.gif'] | ||
| + | # command: ["/bin/sh", "-c"] | ||
| + | # args: | ||
| + | # - | | ||
| + | # test -d /mnt/img/ && exit 0 | ||
| + | # mkdir /mnt/img; cd /mnt/img | ||
| + | # curl http://val.bmstu.ru/unix/Media/logo.gif > logo.gif | ||
| + | # ls -lR /mnt/ | ||
| + | # volumeMounts: | ||
| + | # - mountPath: /mnt | ||
| + | # name: htdocs-volume | ||
| + | |||
| </code><code> | </code><code> | ||
| - | $ kubectl apply -f my-webd-deployment.yaml -n my-ns | + | $ kubectl apply -f my-webd-deployment.yaml -n my-ns #--dry-run=client #-o yaml |
| - | $ kubectl get all -n my-ns -o wide | + | $ kubectl get all -n my-ns -o wide |
| $ kubectl describe -n my-ns pod/my-webd-NNNNNNNNNN-NNNNN | $ kubectl describe -n my-ns pod/my-webd-NNNNNNNNNN-NNNNN | ||
| + | |||
| + | $ kubectl -n my-ns logs pod/my-webd-NNNNNNNNNN-NNNNN #-c load-htdocs-files | ||
| + | |||
| + | $ kubectl logs -l app=my-webd -n my-ns | ||
| + | (options -f, --tail=2000, --previous are available) | ||
| $ kubectl scale deployment my-webd --replicas=3 -n my-ns | $ kubectl scale deployment my-webd --replicas=3 -n my-ns | ||
| Line 599: | Line 852: | ||
| </code> | </code> | ||
| + | === Deployment revisions === | ||
| + | |||
| + | * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]] | ||
| + | * [[https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment|Updating a Deployment]] | ||
| + | |||
| + | <code> | ||
| + | $ ###kubectl rollout pause deployment my-webd-dep -n my-ns | ||
| + | $ ###kubectl set image deployment/my-webd-dep my-webd-con=server.corpX.un:5000/student/gowebd:ver1.2 -n my-ns | ||
| + | $ ###kubectl rollout resume deployment my-webd-dep -n my-ns | ||
| + | |||
| + | $ ###kubectl rollout status deployment/my-webd-dep -n my-ns | ||
| + | |||
| + | $ kubectl rollout history deployment/my-webd -n my-ns | ||
| + | </code><code> | ||
| + | REVISION CHANGE-CAUSE | ||
| + | 1 <none> | ||
| + | ... | ||
| + | N update to ver1.2 | ||
| + | </code><code> | ||
| + | $ kubectl rollout history deployment/my-webd --revision=1 -n my-ns | ||
| + | </code><code> | ||
| + | ... | ||
| + | Image: server.corpX.un:5000/student/webd:ver1.1 | ||
| + | ... | ||
| + | </code><code> | ||
| + | $ kubectl rollout undo deployment/my-webd --to-revision=1 -n my-ns | ||
| + | |||
| + | $ kubectl annotate deployment/my-webd kubernetes.io/change-cause="revert to ver1.1" -n my-ns | ||
| + | |||
| + | $ kubectl rollout history deployment/my-webd -n my-ns | ||
| + | </code><code> | ||
| + | REVISION CHANGE-CAUSE | ||
| + | 2 update to ver1.2 | ||
| + | ... | ||
| + | N+1 revert to ver1.1 | ||
| + | </code> | ||
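The rollout commands above operate on fields of the Deployment itself. A sketch of the relevant `spec` fields (the values here are illustrative, not taken from the original page):

```yaml
# Deployment spec fields that control how `kubectl rollout` behaves
spec:
  revisionHistoryLimit: 10   # how many old ReplicaSets are kept for `rollout undo`
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod during an update
      maxUnavailable: 0      # never drop below the desired replica count
```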
| + | |||
| + | === Finding and deleting pods in a failed state === | ||
| + | |||
| + | * [[https://stackoverflow.com/questions/55072235/how-to-delete-completed-kubernetes-pod|How to delete completed kubernetes pod?]] | ||
| + | |||
| + | <code> | ||
| + | kube1:~# kubectl get pods --field-selector=status.phase!=Running -A -o wide | ||
| + | |||
| + | kube1:~# kubectl delete pod --field-selector=status.phase==Succeeded -A | ||
| + | |||
| + | kube1:~# kubectl delete pod --field-selector=status.phase==Failed -A | ||
| + | </code> | ||
| ==== Service ==== | ==== Service ==== | ||
| * [[https://kubernetes.io/docs/concepts/services-networking/service/|Kubernetes Documentation Concepts Services, Load Balancing, and Networking Service]] | * [[https://kubernetes.io/docs/concepts/services-networking/service/|Kubernetes Documentation Concepts Services, Load Balancing, and Networking Service]] | ||
| + | * [[https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/|Accessing Kubernetes Pods From Outside of the Cluster]] | ||
| * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]] | * [[https://stackoverflow.com/questions/33069736/how-do-i-get-logs-from-all-pods-of-a-kubernetes-replication-controller|How do I get logs from all pods of a Kubernetes replication controller?]] | ||
| + | |||
| <code> | <code> | ||
| Line 625: | Line 927: | ||
| - protocol: TCP | - protocol: TCP | ||
| port: 80 | port: 80 | ||
| + | # targetPort: 4080 | ||
| # nodePort: 30111 | # nodePort: 30111 | ||
| </code><code> | </code><code> | ||
| $ kubectl apply -f my-webd-service.yaml -n my-ns | $ kubectl apply -f my-webd-service.yaml -n my-ns | ||
| - | $ kubectl logs -l app=my-webd -n my-ns | + | $ kubectl describe svc my-webd -n my-ns |
| - | (options -f, --tail=2000, --previous are available) | + | |
| + | $ kubectl get endpoints -n my-ns | ||
| + | or | ||
| + | $ kubectl get endpointslice -n my-ns | ||
| </code> | </code> | ||
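Assembled from the fragments shown in the diff above, my-webd-service.yaml presumably has the following full shape; the `targetPort` and `nodePort` lines are the commented-out assumptions from the page, not mandatory fields.

```yaml
# Presumed full form of my-webd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-webd
spec:
  type: NodePort
  selector:
    app: my-webd         # must match the pod labels of the my-webd Deployment
  ports:
  - protocol: TCP
    port: 80             # ClusterIP port
#    targetPort: 4080    # container port, if it differs from `port`
#    nodePort: 30111     # optional fixed node port (30000-32767)
```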
| === NodePort === | === NodePort === | ||
| + | |||
| + | * [[https://www.baeldung.com/ops/kubernetes-nodeport-range|Why Kubernetes NodePort Services Range From 30000 – 32767]] | ||
| + | |||
| <code> | <code> | ||
| $ kubectl get svc my-webd -n my-ns | $ kubectl get svc my-webd -n my-ns | ||
| Line 638: | Line 947: | ||
| my-webd-svc NodePort 10.102.135.146 <none> 80:NNNNN/TCP 18h | my-webd-svc NodePort 10.102.135.146 <none> 80:NNNNN/TCP 18h | ||
| - | $ kubectl describe svc my-webd -n my-ns | + | $ curl http://kube1,2,3:NNNNN |
| - | + | ||
| - | $ curl http://node1,2,3:NNNNN | + | |
| - | works unstably on a "homemade kubeadm" cluster | + | |
| </code> | </code> | ||
| == NodePort Minikube == | == NodePort Minikube == | ||
| Line 647: | Line 953: | ||
| $ minikube service list | $ minikube service list | ||
| - | $ minikube service my-webd -n my-ns --url | + | $ minikube service my-webd --url -n my-ns |
| http://192.168.49.2:NNNNN | http://192.168.49.2:NNNNN | ||
| - | $ curl $(minikube service my-webd -n my-ns --url) | + | $ curl http://192.168.49.2:NNNNN |
| </code> | </code> | ||
| Line 667: | Line 973: | ||
| $ kubectl -n metallb-system get all | $ kubectl -n metallb-system get all | ||
| + | |||
| + | $ mkdir metallb-system; cd $_ | ||
| $ cat first-pool.yaml | $ cat first-pool.yaml | ||
| Line 678: | Line 986: | ||
| spec: | spec: | ||
| addresses: | addresses: | ||
| - | - 192.168.13.64/28 | + | - 192.168.X.64/28 |
| autoAssign: false | autoAssign: false | ||
| + | # autoAssign: true | ||
| --- | --- | ||
| apiVersion: metallb.io/v1beta1 | apiVersion: metallb.io/v1beta1 | ||
| Line 694: | Line 1003: | ||
| $ kubectl apply -f first-pool.yaml | $ kubectl apply -f first-pool.yaml | ||
| - | $ ### kubectl delete -f first-pool.yaml && rm first-pool.yaml | + | ... |
| + | $ kubectl get svc my-webd -n my-ns | ||
| + | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE | ||
| + | my-webd LoadBalancer 10.233.23.29 192.168.X.64 80:NNNNN/TCP 50s | ||
| + | |||
| + | |||
| + | $ #kubectl delete -f first-pool.yaml && rm first-pool.yaml | ||
| - | $ ### kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml | + | $ #kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml |
| </code> | </code> | ||
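Since `first-pool` above has `autoAssign: false`, a Service has to request an address from it explicitly. A hedged sketch of such a Service; the `metallb.io/loadBalancerIPs` annotation name follows current MetalLB documentation (older releases used the `metallb.universe.tf/` prefix), and the selector is assumed to match the my-webd pods.

```yaml
# Service requesting a specific MetalLB address from first-pool (sketch)
apiVersion: v1
kind: Service
metadata:
  name: my-webd
  annotations:
    metallb.io/loadBalancerIPs: 192.168.X.64
spec:
  type: LoadBalancer
  selector:
    app: my-webd
  ports:
  - protocol: TCP
    port: 80
```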
| Line 702: | Line 1017: | ||
| <code> | <code> | ||
| kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10 | kube1# host my-webd.my-ns.svc.cluster.local 169.254.25.10 | ||
| - | ...10.102.135.146... | ||
| - | server# ssh -p 32222 nodeN | + | kube1# curl my-webd.my-ns.svc.cluster.local |
| - | + | ||
| - | my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd.my-ns.svc.cluster.local | + | |
| - | ИЛИ | + | |
| - | my-openssh-server-NNNNNNNN-NNNNN:~# curl my-webd-webd-chart.my-ns.svc.cluster.local | + | |
| </code> | </code> | ||
| Line 756: | Line 1066: | ||
| * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]] | * [[https://kubernetes.github.io/ingress-nginx/deploy/#quick-start|NGINX ingress controller quick-start]] | ||
| + | * [[#Работа с готовыми Charts]] | ||
| === Minikube ingress-nginx-controller === | === Minikube ingress-nginx-controller === | ||
| Line 807: | Line 1118: | ||
| node1# kubectl get all -n ingress-nginx | node1# kubectl get all -n ingress-nginx | ||
| - | node1# ### kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml | + | node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission |
| + | |||
| + | node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
| </code> | </code> | ||
| - | === Managing the ingress-nginx-controller configuration === | + | === Ingress baremetal DaemonSet === |
| <code> | <code> | ||
| - | master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | + | kube1:~# mkdir -p ingress-nginx; cd $_ |
| - | master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller | + | kube1:~/ingress-nginx# curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/baremetal/deploy.yaml | tee ingress-nginx.controller-v1.12.0.baremetal.yaml |
| + | |||
| + | kube1:~/ingress-nginx# cat ingress-nginx.controller-v1.12.0.baremetal.yaml | ||
| </code><code> | </code><code> | ||
| ... | ... | ||
| + | apiVersion: v1 | ||
| + | #data: null | ||
| data: | data: | ||
| + | allow-snippet-annotations: "true" | ||
| use-forwarded-headers: "true" | use-forwarded-headers: "true" | ||
| + | kind: ConfigMap | ||
| ... | ... | ||
| + | #kind: Deployment | ||
| + | kind: DaemonSet | ||
| + | ... | ||
| + | # strategy: | ||
| + | # rollingUpdate: | ||
| + | # maxUnavailable: 1 | ||
| + | # type: RollingUpdate | ||
| + | ... | ||
| + | hostNetwork: true ### insert this | ||
| + | terminationGracePeriodSeconds: 300 | ||
| + | volumes: | ||
| + | ... | ||
| + | </code><code> | ||
| + | kube1:~/ingress-nginx# kubectl apply -f ingress-nginx.controller-v1.12.0.baremetal.yaml | ||
| + | |||
| + | kube1:~/ingress-nginx# kubectl -n ingress-nginx get pods -o wide | ||
| + | |||
| + | kube1:~/ingress-nginx# kubectl -n ingress-nginx describe service/ingress-nginx-controller | ||
| + | </code><code> | ||
| + | ... | ||
| + | Endpoints: 192.168.X.221:80,192.168.X.222:80,192.168.X.223:80 | ||
| + | ... | ||
| + | </code><code> | ||
| + | kube1:~/ingress-nginx# ###kubectl delete -f ingress-nginx.controller-v1.12.0.baremetal.yaml | ||
| </code> | </code> | ||
| - | === Final variant with DaemonSet === | + | === Managing the ingress-nginx-controller configuration === |
| <code> | <code> | ||
| - | node1# diff ingress-nginx.controller-v1.8.2.baremetal.yaml.orig ingress-nginx.controller-v1.8.2.baremetal.yaml | + | master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf |
| + | |||
| + | master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller | ||
| </code><code> | </code><code> | ||
| - | 323a324 | ||
| - | > use-forwarded-headers: "true" | ||
| - | 391c392,393 | ||
| - | < kind: Deployment | ||
| - | --- | ||
| - | > #kind: Deployment | ||
| - | > kind: DaemonSet | ||
| - | 409,412c411,414 | ||
| - | < strategy: | ||
| - | < rollingUpdate: | ||
| - | < maxUnavailable: 1 | ||
| - | < type: RollingUpdate | ||
| - | --- | ||
| - | > # strategy: | ||
| - | > # rollingUpdate: | ||
| - | > # maxUnavailable: 1 | ||
| - | > # type: RollingUpdate | ||
| - | 501a504 | ||
| - | > hostNetwork: true | ||
| - | </code><code> | ||
| - | node1# kubectl -n ingress-nginx describe service/ingress-nginx-controller | ||
| ... | ... | ||
| - | Endpoints: 192.168.X.221:80,192.168.X.222:80,192.168.X.223:80 | + | data: |
| + | use-forwarded-headers: "true" | ||
| ... | ... | ||
| </code> | </code> | ||
| === ingress example === | === ingress example === | ||
| + | |||
| + | * [[https://stackoverflow.com/questions/49829452/why-ingress-serviceport-can-be-port-and-targetport-of-service|!!! The NGINX ingress controller does not use Services to route traffic to the pods]] | ||
| + | * [[https://stackoverflow.com/questions/54459015/how-to-configure-ingress-to-direct-traffic-to-an-https-backend-using-https|how to configure ingress to direct traffic to an https backend using https]] | ||
| <code> | <code> | ||
| - | node1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns | + | kube1# ### kubectl create ingress my-ingress --class=nginx --rule="webd.corpX.un/*=my-webd:80" -n my-ns |
| - | node1# cat my-ingress.yaml | + | kube1# cat my-ingress.yaml |
| </code><code> | </code><code> | ||
| apiVersion: networking.k8s.io/v1 | apiVersion: networking.k8s.io/v1 | ||
| Line 863: | Line 1191: | ||
| metadata: | metadata: | ||
| name: my-ingress | name: my-ingress | ||
| + | # annotations: | ||
| + | # nginx.ingress.kubernetes.io/canary: "true" | ||
| + | # nginx.ingress.kubernetes.io/canary-weight: "30" | ||
| + | # cert-manager.io/issuer: "letsencrypt-staging" | ||
| + | # cert-manager.io/issuer: "letsencrypt-prod" | ||
| spec: | spec: | ||
| ingressClassName: nginx | ingressClassName: nginx | ||
| Line 877: | Line 1210: | ||
| name: my-webd | name: my-webd | ||
| port: | port: | ||
| - | number: 80 | + | number: 4080 |
| path: / | path: / | ||
| pathType: Prefix | pathType: Prefix | ||
| Line 891: | Line 1224: | ||
| pathType: Prefix | pathType: Prefix | ||
| </code><code> | </code><code> | ||
| - | node1# kubectl apply -f my-ingress.yaml -n my-ns | + | kube1# kubectl apply -f my-ingress.yaml -n my-ns |
| - | + | kube1# kubectl get ingress -n my-ns | |
| - | node1# kubectl get ingress -n my-ns | + | |
| NAME CLASS HOSTS ADDRESS PORTS AGE | NAME CLASS HOSTS ADDRESS PORTS AGE | ||
| my-webd nginx webd.corpX.un,gowebd.corpX.un 192.168.X.202,192.168.X.203 80 14m | my-webd nginx webd.corpX.un,gowebd.corpX.un 192.168.X.202,192.168.X.203 80 14m | ||
| - | + | </code> | |
| + | * [[Утилита curl|The curl utility]] | ||
| + | <code> | ||
| $ curl webd.corpX.un | $ curl webd.corpX.un | ||
| $ curl gowebd.corpX.un | $ curl gowebd.corpX.un | ||
| Line 908: | Line 1241: | ||
| $ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f | $ kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f | ||
| - | node1# ### kubectl delete ingress my-ingress -n my-ns | + | kube1# ### kubectl delete ingress my-ingress -n my-ns |
| </code> | </code> | ||
| Line 924: | Line 1257: | ||
| $ ###kubectl delete secret/gowebd-tls -n my-ns | $ ###kubectl delete secret/gowebd-tls -n my-ns | ||
| </code> | </code> | ||
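The `gowebd-tls` Secret created above is wired into the Ingress through a `tls` block. A sketch of the fields involved, with the host name taken from the ingress example earlier on the page:

```yaml
# tls section added to my-ingress (sketch)
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - gowebd.corpX.un
    secretName: gowebd-tls   # the kubernetes.io/tls Secret created above
```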
| + | === cert-manager === | ||
| - | ==== Volumes ==== | + | * [[Letsencrypt Certbot]] |
| + | * [[https://cert-manager.io/docs/installation/|cert-manager Installation]] | ||
| + | * [[https://cert-manager.io/docs/tutorials/acme/nginx-ingress/|cert-manager Securing NGINX-ingress]] | ||
| - | === PersistentVolume and PersistentVolumeClaim === | |
| <code> | <code> | ||
| - | root@node1:~# ssh node2 mkdir /disk2 | + | student@vps:~$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml |
| - | root@node1:~# ssh node2 touch /disk2/disk2_node2 | + | student@vps:~$ kubectl -n cert-manager get all |
| - | root@node1:~# kubectl label nodes node2 disk2=yes | + | student@vps:~/apwebd-k8s$ cat letsencrypt-staging-issuer.yaml |
| + | student@vps:~/apwebd-k8s$ cat letsencrypt-prod-issuer.yaml | ||
| + | </code><code> | ||
| + | apiVersion: cert-manager.io/v1 | ||
| + | kind: Issuer | ||
| + | metadata: | ||
| + | #name: letsencrypt-staging | ||
| + | #name: letsencrypt-prod | ||
| + | spec: | ||
| + | acme: | ||
| + | #server: https://acme-staging-v02.api.letsencrypt.org/directory | ||
| + | #server: https://acme-v02.api.letsencrypt.org/directory | ||
| + | email: val@bmstu.ru | ||
| + | profile: tlsserver | ||
| + | privateKeySecretRef: | ||
| + | #name: letsencrypt-staging | ||
| + | #name: letsencrypt-prod | ||
| + | solvers: | ||
| + | - http01: | ||
| + | ingress: | ||
| + | ingressClassName: nginx | ||
| + | </code><code> | ||
| + | student@vps:~/apwebd-k8s$ kubectl -n my-ns apply -f letsencrypt-staging-issuer.yaml | ||
| + | student@vps:~/apwebd-k8s$ kubectl -n my-ns apply -f letsencrypt-prod-issuer.yaml | ||
| - | root@node1:~# kubectl get nodes --show-labels | + | student@vps:~/apwebd-k8s$ kubectl -n my-ns get secret letsencrypt-staging -o yaml |
| - | root@node1:~# ###kubectl label nodes node2 disk2- | + | student@vps:~/apwebd-k8s$ kubectl -n my-ns get certificate |
| - | root@node1:~# cat my-debian-deployment.yaml | + | student@vps:~/apwebd-k8s$ kubectl -n my-ns events |
| - | </code><code> | + | |
| ... | ... | ||
| - | args: ["-c", "while true; do echo hello; sleep 3;done"] | + | Certificate fetched from issuer successfully |
| - | volumeMounts: | + | student@vps:~/apwebd-k8s$ kubectl -n my-ns get secret webd-tls -o yaml |
| - | - name: my-disk2-volume | + | </code> |
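cert-manager acts on an Ingress that carries the issuer annotation and a `tls` block; a sketch of the two pieces it reads (per the Securing NGINX-ingress tutorial linked above; the host and secret names follow this page's example):

```yaml
# Ingress fields that trigger cert-manager (sketch)
metadata:
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - webd.corpX.un
    secretName: webd-tls   # cert-manager creates and renews this Secret
```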
| - | mountPath: /data | + | ==== Volumes ==== |
| - | # volumeMounts: | + | === hostPath и nodeSelector === |
| - | # - name: data | + | |
| - | # mountPath: /data | + | |
| - | volumes: | + | * [[Средства программирования shell#Ресурсы Web сервера на shell]] на kube3 |
| - | - name: my-disk2-volume | + | |
| - | hostPath: | + | |
| - | path: /disk2/ | + | |
| - | nodeSelector: | + | |
| - | disk2: "yes" | + | |
| - | # volumes: | + | <code> |
| - | # - name: data | + | kube1# kubectl label nodes kube3 htdocs-node=yes |
| - | # persistentVolumeClaim: | + | |
| - | # claimName: my-ha-pvc-sz64m | + | |
| - | restartPolicy: Always | + | kube1# kubectl get nodes --show-labels |
| + | |||
| + | kube1:~/pywebd-k8s# cat my-webd-deployment.yaml | ||
| </code><code> | </code><code> | ||
| - | root@node1:~# kubectl apply -f my-debian-deployment.yaml | + | ... |
| + | volumeMounts: | ||
| + | - name: htdocs-volume | ||
| + | mountPath: /usr/local/apache2/htdocs | ||
| - | root@node1:~# kubectl get all -o wide | + | # lifecycle: |
| + | # ... | ||
| + | |||
| + | volumes: | ||
| + | - name: htdocs-volume | ||
| + | hostPath: | ||
| + | path: /var/www/ | ||
| + | |||
| + | nodeSelector: | ||
| + | htdocs-node: "yes" | ||
| + | |||
| + | # initContainers: | ||
| + | # ... | ||
| </code> | </code> | ||
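Put together, the fragments above amount to a pattern like the following minimal Pod. This is a self-contained sketch for illustration only; the pod name is made up, everything else follows the snippets shown.

```yaml
# Minimal illustration of the hostPath + nodeSelector pattern
apiVersion: v1
kind: Pod
metadata:
  name: htdocs-test
spec:
  nodeSelector:
    htdocs-node: "yes"          # matches the label put on kube3 above
  containers:
  - name: httpd
    image: httpd
    volumeMounts:
    - name: htdocs-volume
      mountPath: /usr/local/apache2/htdocs
  volumes:
  - name: htdocs-volume
    hostPath:
      path: /var/www/           # directory on the selected node
```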
| + | |||
| + | === PersistentVolume and PersistentVolumeClaim === | ||
| * [[https://qna.habr.com/q/629022|Несколько Claim на один Persistent Volumes?]] | * [[https://qna.habr.com/q/629022|Несколько Claim на один Persistent Volumes?]] | ||
| Line 978: | Line 1345: | ||
| <code> | <code> | ||
| - | root@node1:~# cat my-ha-pv.yaml | + | kube1:~/pv# cat my-ha-pv.yaml |
| </code><code> | </code><code> | ||
| apiVersion: v1 | apiVersion: v1 | ||
| kind: PersistentVolume | kind: PersistentVolume | ||
| metadata: | metadata: | ||
| - | name: my-pv-node2-sz-128m-num-001 | + | name: my-pv-kube3-sz-128m-num-001 |
| # name: my-pv-kube3-keycloak | # name: my-pv-kube3-keycloak | ||
| - | labels: | + | # labels: |
| - | type: local | + | # type: local |
| spec: | spec: | ||
| ## comment storageClassName for keycloak | ## comment storageClassName for keycloak | ||
| Line 993: | Line 1360: | ||
| storage: 128Mi | storage: 128Mi | ||
| # storage: 8Gi | # storage: 8Gi | ||
| - | # volumeMode: Filesystem | ||
| accessModes: | accessModes: | ||
| - | - ReadWriteMany | + | - ReadWriteOnce |
| - | # - ReadWriteOnce | + | |
| hostPath: | hostPath: | ||
| - | path: /disk2 | + | # path: /disk2 |
| + | path: /disk2/dir1 | ||
| persistentVolumeReclaimPolicy: Retain | persistentVolumeReclaimPolicy: Retain | ||
| nodeAffinity: | nodeAffinity: | ||
| Line 1007: | Line 1373: | ||
| operator: In | operator: In | ||
| values: | values: | ||
| - | - node2 | + | - kube3 |
| - | # - kube3 | + | --- |
| + | #... | ||
| </code><code> | </code><code> | ||
| - | root@node1:~# kubectl apply -f my-ha-pv.yaml | + | kube1:~/pv# kubectl apply -f my-ha-pv.yaml |
| - | root@node1:~# kubectl get persistentvolume | + | kube1# kubectl get pv |
| - | or | |
| - | root@node1:~# kubectl get pv | + | |
| - | root@kube1:~# ###ssh kube3 'mkdir /disk2/; chmod 777 /disk2/' | + | kube1# kubectl delete pv my-pv-kube3-sz-128m-num-001 |
| - | ... | + | |
| - | root@node1:~# ###kubectl delete pv my-pv-<TAB> | + | |
| - | root@node1:~# cat my-ha-pvc.yaml | + | kube3# mkdir -p /disk2/dir{0..3} |
| + | |||
| + | kube3# chmod 777 -R /disk2/ | ||
| + | |||
| + | kube3# find /disk2/ | ||
| + | |||
| + | kube3# ###rm -rf /disk2/ | ||
| + | </code> | ||
| + | |||
| + | * [[https://stackoverflow.com/questions/55639436/create-multiple-persistent-volumes-in-one-yaml]] | ||
| + | * Знакомимся с [[#Helm]] | ||
| + | |||
| + | <code> | ||
| + | kube1:~/pv# cat my-ha-pv-chart/Chart.yaml | ||
| + | </code><code> | ||
| + | apiVersion: v2 | ||
| + | name: my-ha-pv-chart | ||
| + | version: 0.1.0 | ||
| + | </code><code> | ||
| + | kube1:~/pv# cat my-ha-pv-chart/values.yaml | ||
| + | </code><code> | ||
| + | volume_names: | ||
| + | - "dir1" | ||
| + | - "dir2" | ||
| + | - "dir3" | ||
| + | numVolumes: "3" | ||
| + | </code><code> | ||
| + | kube1:~/pv# cat my-ha-pv-chart/templates/my-ha-pv.yaml | ||
| + | </code><code> | ||
| + | {{ range .Values.volume_names }} | ||
| + | {{/* range $k, $v := until (atoi .Values.numVolumes) */}} | ||
| + | --- | ||
| + | apiVersion: v1 | ||
| + | kind: PersistentVolume | ||
| + | metadata: | ||
| + | name: my-pv-sz-128m-num-{{ . }} | ||
| + | spec: | ||
| + | storageClassName: my-ha-sc | ||
| + | capacity: | ||
| + | storage: 128Mi | ||
| + | accessModes: | ||
| + | - ReadWriteOnce | ||
| + | hostPath: | ||
| + | path: /disk2/{{ . }}/ | ||
| + | {{/* path: /disk2/dir{{ $v }}/ */}} | ||
| + | persistentVolumeReclaimPolicy: Retain | ||
| + | nodeAffinity: | ||
| + | required: | ||
| + | nodeSelectorTerms: | ||
| + | - matchExpressions: | ||
| + | - key: kubernetes.io/hostname | ||
| + | operator: In | ||
| + | values: | ||
| + | - kube3 | ||
| + | {{ end }} | ||
| + | </code><code> | ||
| + | kube1:~/pv# helm template my-ha-pv-chart my-ha-pv-chart/ | ||
| + | |||
| + | kube1:~/pv# helm install my-ha-pv-chart my-ha-pv-chart/ | ||
| + | |||
| + | kube1# kubectl get pv | ||
| + | |||
| + | kube1:~/pv# ###helm uninstall my-ha-pv-chart | ||
| + | </code><code> | ||
| + | kube1:~/pywebd-k8s# cat my-webd-pvc.yaml | ||
| </code><code> | </code><code> | ||
| apiVersion: v1 | apiVersion: v1 | ||
| kind: PersistentVolumeClaim | kind: PersistentVolumeClaim | ||
| metadata: | metadata: | ||
| - | name: my-ha-pvc-sz64m | + | name: my-webd-pvc |
| spec: | spec: | ||
| storageClassName: my-ha-sc | storageClassName: my-ha-sc | ||
| - | # storageClassName: local-path | ||
| accessModes: | accessModes: | ||
| - | - ReadWriteMany | + | - ReadWriteOnce |
| resources: | resources: | ||
| requests: | requests: | ||
| storage: 64Mi | storage: 64Mi | ||
| </code><code> | </code><code> | ||
| - | root@node1:~# kubectl apply -f my-ha-pvc.yaml | + | kube1:~/pywebd-k8s# kubectl apply -f my-webd-pvc.yaml -n my-ns |
| - | root@node1:~# kubectl get persistentvolumeclaims | + | kube1:~/pywebd-k8s# kubectl get pvc -n my-ns |
| - | or | |
| - | root@node1:~# kubectl get pvc | + | kube1:~/pywebd-k8s# cat my-webd-deployment.yaml |
| + | </code><code> | ||
| ... | ... | ||
| + | volumeMounts: | ||
| + | - name: htdocs-volume | ||
| + | mountPath: /usr/local/apache2/htdocs | ||
| - | root@node1:~# ### kubectl delete pvc my-ha-pvc-sz64m | + | lifecycle: |
| + | ... | ||
| + | |||
| + | volumes: | ||
| + | - name: htdocs-volume | ||
| + | persistentVolumeClaim: | ||
| + | claimName: my-webd-pvc | ||
| + | |||
| + | initContainers: | ||
| + | ... | ||
| + | </code><code> | ||
| + | kube3# find /disk2 | ||
| </code> | </code> | ||
| Line 1048: | Line 1489: | ||
| * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]] | * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]] | ||
| + | * [[https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/|Changing the default StorageClass]] | ||
| === rancher local-path-provisioner === | === rancher local-path-provisioner === | ||
| Line 1070: | Line 1512: | ||
| ssh root@kube2 'chmod 777 /opt/local-path-provisioner' | ssh root@kube2 'chmod 777 /opt/local-path-provisioner' | ||
| ssh root@kube3 'chmod 777 /opt/local-path-provisioner' | ssh root@kube3 'chmod 777 /opt/local-path-provisioner' | ||
| + | ssh root@kube4 'mkdir /opt/local-path-provisioner' | ||
| + | ssh root@kube4 'chmod 777 /opt/local-path-provisioner' | ||
| + | |||
| + | $ ###kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' | ||
| </code> | </code> | ||
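A claim served by the local-path provisioner then only needs to name its StorageClass; a sketch (the claim name and size are made up):

```yaml
# PVC dynamically provisioned by rancher local-path (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-lp-pvc
spec:
  storageClassName: local-path   # may be omitted if local-path was patched to be the default class
  accessModes:
  - ReadWriteOnce                # local-path volumes are node-local
  resources:
    requests:
      storage: 64Mi
```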
| Line 1085: | Line 1531: | ||
| <code> | <code> | ||
| kubeN:~# apt install open-iscsi | kubeN:~# apt install open-iscsi | ||
| + | |||
| + | (venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml | ||
| + | |||
| + | root@a7818cd3f7c7:/kubespray# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /inventory/inventory.ini | ||
| </code> | </code> | ||
| * [[https://github.com/longhorn/longhorn]] | * [[https://github.com/longhorn/longhorn]] | ||
| <code> | <code> | ||
| $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml | $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml | ||
| + | |||
| + | $ kubectl -n longhorn-system get pods -o wide --watch | ||
| Setting->General | Setting->General | ||
| Line 1095: | Line 1547: | ||
| </code> | </code> | ||
| - | Connecting via kubectl proxy | + | Connecting via [[#kubectl proxy]] |
| * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]] | * [[https://stackoverflow.com/questions/45172008/how-do-i-access-this-kubernetes-service-via-kubectl-proxy|How do I access this Kubernetes service via kubectl proxy?]] | ||
| Line 1135: | Line 1587: | ||
| * Take a snapshot | * Take a snapshot | ||
- | * Что-то ломаем | + | * Break something (delete a user) |
| - | * Останавливаем сервис | + | |
+ | == Stopping the service == | ||
| <code> | <code> | ||
| Line 1154: | Line 1607: | ||
| == Using backups == | == Using backups == | ||
| + | |||
+ | * Deploy [[Сервис NFS|NFS service]] on server | ||
| + | |||
| <code> | <code> | ||
Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (a Linux NFS client is not needed) | Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (a Linux NFS client is not needed) | ||
| </code> | </code> | ||
* Volume -> Create Backup, delete the NS, restore the Volume from the backup, recreate the NS and the Volume's PV/PVC as before, install the chart, and it picks up the pv/pvc | * Volume -> Create Backup, delete the NS, restore the Volume from the backup, recreate the NS and the Volume's PV/PVC as before, install the chart, and it picks up the pv/pvc | ||
| + | |||
| + | ==== ConfigMap, Secret ==== | ||
| + | |||
| + | <code> | ||
| + | server# scp /etc/pywebd/* kube1:/tmp/ | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl create configmap pywebd-conf --from-file=/tmp/pywebd.conf --dry-run=client -o yaml | tee my-webd-configmap.yaml | ||
| + | |||
| + | kube1:~/pywebd-k8s# cat my-webd-configmap.yaml | ||
| + | </code><code> | ||
| + | apiVersion: v1 | ||
| + | data: | ||
| + | pywebd.conf: | | ||
| + | [default] | ||
| + | DocumentRoot = /usr/local/apache2/htdocs | ||
| + | Listen = 4443 | ||
| + | kind: ConfigMap | ||
| + | metadata: | ||
| + | creationTimestamp: null | ||
| + | name: pywebd-conf | ||
| + | </code><code> | ||
| + | kube1:~/pywebd-k8s# kubectl apply -f my-webd-configmap.yaml -n my-ns | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl -n my-ns get configmaps | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl create secret tls pywebd-tls --key /tmp/pywebd.key --cert /tmp/pywebd.crt --dry-run=client -o yaml | tee my-webd-secret-tls.yaml | ||
| + | |||
| + | kube1:~/pywebd-k8s# less my-webd-secret-tls.yaml | ||
| + | </code><code> | ||
| + | apiVersion: v1 | ||
| + | data: | ||
| + | tls.crt: ... | ||
| + | tls.key: ... | ||
| + | kind: Secret | ||
| + | metadata: | ||
| + | creationTimestamp: null | ||
| + | name: pywebd-tls | ||
| + | type: kubernetes.io/tls | ||
| + | </code><code> | ||
| + | kube1:~/pywebd-k8s# rm -rv /tmp/pywebd.* | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl apply -f my-webd-secret-tls.yaml -n my-ns | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl -n my-ns get secrets | ||
| + | |||
| + | kube1:~/pywebd-k8s# kubectl create secret docker-registry regcred --docker-server=server.corpX.un:5000 --docker-username=student --docker-password='strongpassword' -n my-ns | ||
| + | |||
| + | kube1:~/pywebd-k8s# cat my-webd-deployment.yaml | ||
| + | </code><code> | ||
| + | ... | ||
| + | imagePullSecrets: | ||
| + | - name: regcred | ||
| + | |||
| + | containers: | ||
| + | - name: my-webd | ||
| + | image: server.corpX.un:5000/student/pywebd:ver1.2 | ||
| + | imagePullPolicy: "Always" | ||
| + | |||
| + | # env: | ||
| + | # ... | ||
| + | ... | ||
| + | livenessProbe: | ||
| + | httpGet: | ||
| + | port: 4443 | ||
| + | scheme: HTTPS | ||
| + | ... | ||
| + | volumeMounts: | ||
| + | ... | ||
| + | - name: conf-volume | ||
| + | subPath: pywebd.conf | ||
| + | mountPath: /etc/pywebd/pywebd.conf | ||
| + | - name: secret-tls-volume | ||
| + | subPath: tls.crt | ||
| + | mountPath: /etc/pywebd/pywebd.crt | ||
| + | - name: secret-tls-volume | ||
| + | subPath: tls.key | ||
| + | mountPath: /etc/pywebd/pywebd.key | ||
| + | ... | ||
| + | volumes: | ||
| + | ... | ||
| + | - name: conf-volume | ||
| + | configMap: | ||
| + | name: pywebd-conf | ||
| + | - name: secret-tls-volume | ||
| + | secret: | ||
| + | secretName: pywebd-tls | ||
| + | ... | ||
| + | </code><code> | ||
| + | kubeN$ curl --connect-to "":"":<POD_IP>:4443 https://pywebd.corpX.un | ||
| + | </code> | ||
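Note that the `data:` values in the Secret above (`tls.crt`, `tls.key`) are plain base64, not encryption: anyone who can read the Secret can decode them. A minimal sketch of the encoding `kubectl create secret` performs (the sample value is made up):

```shell
# Values under a Secret's `data:` key are base64-encoded, not encrypted.
# kubectl does this encoding for you; round-trip by hand to see it:
encoded=$(printf 'strongpassword' | base64)
echo "$encoded"                         # c3Ryb25ncGFzc3dvcmQ=
printf '%s' "$encoded" | base64 -d; echo   # back to strongpassword
```

This is also why `kubectl get secret ... -o jsonpath=... | base64 -d` works for extracting tokens and certificates later on.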
| ==== ConfigMap ==== | ==== ConfigMap ==== | ||
| Line 1326: | Line 1872: | ||
| <code> | <code> | ||
| - | # wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz | + | # wget https://get.helm.sh/helm-v3.16.4-linux-amd64.tar.gz |
| - | # tar -zxvf helm-v3.9.0-linux-amd64.tar.gz | + | # tar -zxvf helm-*-linux-amd64.tar.gz |
| # mv linux-amd64/helm /usr/local/bin/helm | # mv linux-amd64/helm /usr/local/bin/helm | ||
| + | |||
| + | $ cat ~/.profile | ||
| + | </code><code> | ||
| + | ... | ||
| + | source <(helm completion bash) | ||
| </code> | </code> | ||
==== Working with ready-made Charts ==== | ==== Working with ready-made Charts ==== | ||
| + | |||
+ | * Keycloak service: [[Сервис Keycloak#Kubernetes]] | ||
| === ingress-nginx === | === ingress-nginx === | ||
| Line 1341: | Line 1894: | ||
| * [[https://devpress.csdn.net/cloud/62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?]] | * [[https://devpress.csdn.net/cloud/62fc8e7e7e66823466190055.html|devpress.csdn.net How to install nginx-ingress with hostNetwork on bare-metal?]] | ||
| * [[https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml]] | * [[https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml]] | ||
| + | |||
| + | * [[https://github.com/kubernetes/ingress-nginx]] --version 4.7.3 | ||
| <code> | <code> | ||
| Line 1355: | Line 1910: | ||
| - | $ mkdir ingress-nginx; cd ingress-nginx | + | $ mkdir -p ingress-nginx; cd ingress-nginx |
| $ helm template ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx | tee t1.yaml | $ helm template ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx | tee t1.yaml | ||
| Line 1371: | Line 1926: | ||
| # use-forwarded-headers: true | # use-forwarded-headers: true | ||
| # allow-snippet-annotations: true | # allow-snippet-annotations: true | ||
| + | # service: | ||
| + | # type: LoadBalancer | ||
| + | # loadBalancerIP: "192.168.X.64" | ||
| </code><code> | </code><code> | ||
| $ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml | $ helm template ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx | tee t2.yaml | ||
| $ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace | $ helm upgrade ingress-nginx -i ingress-nginx -f values.yaml --repo https://kubernetes.github.io/ingress-nginx -n ingress-nginx --create-namespace | ||
| + | |||
| + | $ kubectl get all -n ingress-nginx | ||
| $ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers | $ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf | grep use_forwarded_headers | ||
| Line 1386: | Line 1946: | ||
| # kubectl get clusterrolebindings -A | grep -i ingress | # kubectl get clusterrolebindings -A | grep -i ingress | ||
| # kubectl get validatingwebhookconfigurations -A | grep -i ingress | # kubectl get validatingwebhookconfigurations -A | grep -i ingress | ||
| + | |||
| + | # ###helm uninstall ingress-nginx -n ingress-nginx | ||
| </code> | </code> | ||
==== Deploying your own application ==== | ==== Deploying your own application ==== | ||
| + | * [[https://helm.sh/docs/chart_template_guide/getting_started/|chart_template_guide getting_started]] | ||
| * [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | * [[https://opensource.com/article/20/5/helm-charts|How to make a Helm chart in 10 minutes]] | ||
| * [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | * [[https://stackoverflow.com/questions/49812830/helm-upgrade-with-same-chart-version-but-different-docker-image-tag|Helm upgrade with same chart version, but different Docker image tag]] | ||
| Line 1394: | Line 1957: | ||
| <code> | <code> | ||
| - | gitlab-runner@server:~/gowebd-k8s$ helm create webd-chart | + | ~/gowebd-k8s$ helm create webd-chart |
| $ less webd-chart/templates/deployment.yaml | $ less webd-chart/templates/deployment.yaml | ||
| Line 1404: | Line 1967: | ||
| ... | ... | ||
| version: 0.1.1 | version: 0.1.1 | ||
| + | icon: https://val.bmstu.ru/unix/Media/logo.gif | ||
| ... | ... | ||
| appVersion: "latest" | appVersion: "latest" | ||
| Line 1443: | Line 2007: | ||
| </code><code> | </code><code> | ||
| ... | ... | ||
| - | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" | + | imagePullPolicy: {{ .Values.image.pullPolicy }} |
| # env: | # env: | ||
| # - name: APWEBD_HOSTNAME | # - name: APWEBD_HOSTNAME | ||
| Line 1451: | Line 2015: | ||
| # - name: REALM_NAME | # - name: REALM_NAME | ||
| # value: "{{ .Values.REALM_NAME }}" | # value: "{{ .Values.REALM_NAME }}" | ||
| - | ... | ||
| - | # livenessProbe: | ||
| - | # httpGet: | ||
| - | # path: / | ||
| - | # port: http | ||
| - | # readinessProbe: | ||
| - | # httpGet: | ||
| - | # path: / | ||
| - | # port: http | ||
| ... | ... | ||
| </code><code> | </code><code> | ||
| + | $ helm lint webd-chart/ | ||
| + | |||
| $ helm template my-webd webd-chart/ | less | $ helm template my-webd webd-chart/ | less | ||
| $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait | $ helm install my-webd webd-chart/ -n my-ns --create-namespace --wait | ||
| + | |||
| + | $ curl kubeN -H "Host: gowebd.corpX.un" | ||
| $ kubectl describe events -n my-ns | less | $ kubectl describe events -n my-ns | less | ||
| Line 1487: | Line 2046: | ||
| * [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | * [[https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221|How to make and share your own Helm package]] | ||
| * [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | * [[https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html|Gitlab Personal access tokens]] | ||
- | * [[Инструмент GitLab#Подключение через API]] - Role: Mainteiner, api, read_registry, write_registry | + | * [[Инструмент GitLab#Подключение через API]] - Role: Maintainer, api (read_registry, write_registry are not needed) |
| + | |||
+ | === Adding the application to your own repository === | ||
| <code> | <code> | ||
| - | gitlab-runner@server:~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable | + | ~/gowebd-k8s$ helm repo add --username student --password NNNNN-NNNNNNNNNNNNNNNNNNN webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
| "webd" has been added to your repositories | "webd" has been added to your repositories | ||
| - | gitlab-runner@server:~/gowebd-k8s$ ### helm repo remove webd | + | ~/gowebd-k8s$ helm repo list |
| - | gitlab-runner@server:~/gowebd-k8s$ helm repo list | + | ~/gowebd-k8s$ helm package webd-chart |
| - | gitlab-runner@server:~/gowebd-k8s$ helm package webd-chart | + | ~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz |
| - | gitlab-runner@server:~/gowebd-k8s$ tar -tf webd-chart-0.1.1.tgz | + | ~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push |
| - | gitlab-runner@server:~/gowebd-k8s$ helm plugin install https://github.com/chartmuseum/helm-push | + | ~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd |
| - | gitlab-runner@server:~/gowebd-k8s$ helm cm-push webd-chart-0.1.1.tgz webd | + | ~/gowebd-k8s$ rm webd-chart-0.1.1.tgz |
| - | gitlab-runner@server:~/gowebd-k8s$ rm webd-chart-0.1.1.tgz | + | ~/gowebd-k8s$ ### helm repo remove webd |
| - | </code><code> | + | |
| - | kube1:~# helm repo add webd http://server.corpX.un/api/v4/projects/N/packages/helm/stable | + | ~/gowebd-k8s$ ### helm plugin uninstall cm-push |
| + | </code> | ||
+ | === Installing the application by adding the repository === | ||
| + | <code> | ||
| + | kube1:~# helm repo add webd https://server.corpX.un/api/v4/projects/N/packages/helm/stable | ||
| kube1:~# helm repo update | kube1:~# helm repo update | ||
| Line 1513: | Line 2078: | ||
| kube1:~# helm repo update webd | kube1:~# helm repo update webd | ||
| + | |||
| + | kube1:~# helm show values webd/webd-chart | tee values.yaml.orig | ||
| + | |||
| + | kube1:~# ###helm pull webd/webd-chart | ||
| kube1:~# helm install my-webd webd/webd-chart | kube1:~# helm install my-webd webd/webd-chart | ||
| kube1:~# ###helm uninstall my-webd | kube1:~# ###helm uninstall my-webd | ||
| - | </code><code> | + | |
| + | kube1:~# ###helm repo remove webd | ||
| + | </code> | ||
+ | === Installing the application without adding the repository === | ||
| + | <code> | ||
| kube1:~# mkdir gowebd; cd gowebd | kube1:~# mkdir gowebd; cd gowebd | ||
| - | kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | + | kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
| - | kube1:~/gowebd# helm show values webd-chart --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | tee values.yaml.orig | + | kube1:~/gowebd# helm show values webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable | tee values.yaml.orig |
| kube1:~/gowebd# cat values.yaml | kube1:~/gowebd# cat values.yaml | ||
| Line 1531: | Line 2104: | ||
| #REALM_NAME: "corp" | #REALM_NAME: "corp" | ||
| </code><code> | </code><code> | ||
| - | kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corp13.un/api/v4/projects/1/packages/helm/stable | + | kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable |
| $ curl http://kubeN -H "Host: gowebd.corpX.un" | $ curl http://kubeN -H "Host: gowebd.corpX.un" | ||
| Line 1539: | Line 2112: | ||
==== Working with public repositories ==== | ==== Working with public repositories ==== | ||
| + | |||
| + | === gitlab-runner kubernetes === | ||
| + | |||
| <code> | <code> | ||
| - | helm repo add gitlab https://charts.gitlab.io | + | kube1:~/gitlab-runner# kubectl create ns gitlab-runner |
| + | |||
| + | kube1:~/gitlab-runner# kubectl -n gitlab-runner create configmap ca-crt --from-file=/usr/local/share/ca-certificates/ca.crt | ||
| + | |||
| + | kube1:~/gitlab-runner# helm repo add gitlab https://charts.gitlab.io | ||
| + | |||
| + | kube1:~/gitlab-runner# helm repo list | ||
| + | |||
| + | kube1:~/gitlab-runner# helm search repo -l gitlab | ||
| + | |||
| + | kube1:~/gitlab-runner# helm search repo -l gitlab/gitlab-runner | ||
| + | |||
| + | kube1:~/gitlab-runner# helm show values gitlab/gitlab-runner --version 0.70.5 | tee values.yaml | ||
| + | |||
| + | kube1:~/gitlab-runner# cat values.yaml | ||
| + | </code><code> | ||
| + | ... | ||
| + | gitlabUrl: https://server.corpX.un | ||
| + | ... | ||
| + | runnerToken: "NNNNNNNNNNNNNNNNNNNNN" | ||
| + | ... | ||
| + | rbac: | ||
| + | ... | ||
| + | create: true #change this | ||
| + | ... | ||
| + | serviceAccount: | ||
| + | ... | ||
| + | create: true #change this | ||
| + | ... | ||
| + | runners: | ||
| + | ... | ||
| + | config: | | ||
| + | [[runners]] | ||
| + | tls-ca-file = "/mnt/ca.crt" #insert this | ||
| + | [runners.kubernetes] | ||
| + | namespace = "{{.Release.Namespace}}" | ||
| + | image = "alpine" | ||
| + | privileged = true #insert this | ||
| + | ... | ||
| + | securityContext: | ||
| + | allowPrivilegeEscalation: true #change this | ||
| + | readOnlyRootFilesystem: false | ||
| + | runAsNonRoot: true | ||
| + | privileged: true #change this | ||
| + | ... | ||
| + | #volumeMounts: [] #comment this | ||
| + | volumeMounts: | ||
| + | - name: ca-crt | ||
| + | subPath: ca.crt | ||
| + | mountPath: /mnt/ca.crt | ||
| + | ... | ||
| + | #volumes: [] #comment this | ||
| + | volumes: | ||
| + | - name: ca-crt | ||
| + | configMap: | ||
| + | name: ca-crt | ||
| + | ... | ||
| + | </code><code> | ||
| + | kube1:~/gitlab-runner# helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --version 0.70.5 | ||
| + | |||
| + | kube1:~/gitlab-runner# kubectl get all -n gitlab-runner | ||
| + | |||
| + | kube1:~/gitlab-runner# ### helm -n gitlab-runner uninstall gitlab-runner | ||
| + | </code> | ||
| + | |||
+ | == old version == | ||
| + | <code> | ||
| + | gitlab-runner@server:~$ helm repo add gitlab https://charts.gitlab.io | ||
| + | |||
| + | gitlab-runner@server:~$ helm repo list | ||
| + | |||
| + | gitlab-runner@server:~$ helm search repo -l gitlab | ||
| - | helm search repo -l gitlab/gitlab-runner | + | gitlab-runner@server:~$ helm search repo -l gitlab/gitlab-runner |
| - | helm show values gitlab/gitlab-runner | tee values.yaml | + | gitlab-runner@server:~$ helm show values gitlab/gitlab-runner --version 0.56.0 | tee values.yaml |
| gitlab-runner@server:~$ diff values.yaml values.yaml.orig | gitlab-runner@server:~$ diff values.yaml values.yaml.orig | ||
| Line 1568: | Line 2215: | ||
| > privileged: false | > privileged: false | ||
| </code><code> | </code><code> | ||
| - | gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace | + | gitlab-runner@server:~$ helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --create-namespace --version 0.56.0 |
| gitlab-runner@server:~$ kubectl get all -n gitlab-runner | gitlab-runner@server:~$ kubectl get all -n gitlab-runner | ||
| + | </code> | ||
| + | |||
| + | == SSL/TLS == | ||
| + | |||
| + | <code> | ||
| + | # kubectl -n gitlab-runner create configmap wild-crt --from-file=wild.crt | ||
| + | |||
| + | # cat values.yaml | ||
| </code><code> | </code><code> | ||
| - | $ helm search hub -o json wordpress | jq '.' | less | + | ... |
| + | gitlabUrl: https://server.corpX.un/ | ||
| + | ... | ||
| + | config: | | ||
| + | [[runners]] | ||
| + | tls-ca-file = "/mnt/wild.crt" | ||
| + | [runners.kubernetes] | ||
| + | ... | ||
| + | #volumeMounts: [] | ||
| + | volumeMounts: | ||
| + | - name: wild-crt | ||
| + | subPath: wild.crt | ||
| + | mountPath: /mnt/wild.crt | ||
| + | |||
| + | #volumes: [] | ||
| + | volumes: | ||
| + | - name: wild-crt | ||
| + | configMap: | ||
| + | name: wild-crt | ||
| + | </code> | ||
- | $ helm repo add bitnami https://charts.bitnami.com/bitnami | + | ===== Authentication and authorization ===== |
- | $ helm show values bitnami/wordpress | + | ==== Using certificates ==== |
| + | |||
+ | * [[Пакет OpenSSL#Создание приватного ключа пользователя|Creating a user's private key]] | ||
+ | * [[Пакет OpenSSL#Создание запроса на сертификат|Creating a certificate signing request]] | ||
| + | |||
| + | <code> | ||
| + | user1@client1:~$ cat user1.req | base64 -w0 | ||
| </code> | </code> | ||
| + | * [[https://stackoverflow.com/questions/75735249/what-do-the-values-in-certificatesigningrequest-spec-usages-mean|What do the values in CertificateSigningRequest.spec.usages mean?]] | ||
| + | <code> | ||
| + | kube1:~/users# kubectl explain csr.spec.usages | ||
| + | kube1:~/users# cat user1.req.yaml | ||
| + | </code><code> | ||
| + | apiVersion: certificates.k8s.io/v1 | ||
| + | kind: CertificateSigningRequest | ||
| + | metadata: | ||
| + | name: user1 | ||
| + | spec: | ||
| + | request: LS0t...S0tCg== | ||
| + | signerName: kubernetes.io/kube-apiserver-client | ||
| + | expirationSeconds: 8640000 # 100 * one day | ||
| + | usages: | ||
| + | # - digital signature | ||
| + | # - key encipherment | ||
| + | - client auth | ||
| + | </code><code> | ||
| + | kube1:~/users# kubectl apply -f user1.req.yaml | ||
| + | |||
| + | kube1:~/users# kubectl describe csr/user1 | ||
| + | |||
| + | kube1:~/users# kubectl certificate approve user1 | ||
| + | |||
| + | kube1:~/users# kubectl get csr | ||
| + | |||
| + | kube1:~/users# kubectl get csr/user1 -o yaml | ||
| + | |||
| + | kube1:~/users# kubectl get csr/user1 -o jsonpath="{.status.certificate}" | base64 -d | tee user1.crt | ||
| + | |||
| + | user1@client1:~$ scp root@kube1:users/user1.crt . | ||
| + | |||
| + | kube1:~/users# ###kubectl delete csr user1 | ||
| + | </code> | ||
| + | |||
+ | ==== Using a ServiceAccount ==== | ||
| + | |||
| + | * [[Система Kubernetes#Kubernetes Dashboard]] | ||
| + | |||
+ | ==== Using Role and RoleBinding ==== | ||
| + | |||
+ | === Granting access to services/proxy in a Namespace === | ||
| + | |||
| + | * Cloud native distributed block storage for Kubernetes [[Система Kubernetes#longhorn]] | ||
| + | |||
| + | <code> | ||
| + | kube1:~# kubectl api-resources -o wide | less | ||
| + | APIVERSION = <group> + "/" + <version of the API> | ||
| + | |||
| + | kube1:~/users# cat lh-svc-proxy-role.yaml | ||
| + | </code><code> | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | kind: Role | ||
| + | metadata: | ||
| + | namespace: longhorn-system | ||
| + | name: lh-svc-proxy-role | ||
| + | rules: | ||
| + | - apiGroups: [""] | ||
| + | resources: ["services/proxy"] | ||
| + | verbs: ["get"] | ||
| + | </code><code> | ||
| + | kube1:~/users# cat user1-lh-svc-proxy-rolebinding.yaml | ||
| + | </code><code> | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | kind: RoleBinding | ||
| + | metadata: | ||
| + | name: user1-lh-svc-proxy-rolebinding | ||
| + | namespace: longhorn-system | ||
| + | subjects: | ||
| + | - kind: User | ||
| + | name: user1 | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | roleRef: | ||
| + | kind: Role | ||
| + | name: lh-svc-proxy-role | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | </code><code> | ||
| + | kube1:~/users# kubectl apply -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml | ||
| + | |||
| + | student@client1:~$ kubectl proxy | ||
| + | |||
| + | student@client1:~$ curl http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/ | ||
| + | |||
| + | student@client1:~$ curl http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ | ||
| + | |||
| + | kube1:~/users# kubectl delete -f lh-svc-proxy-role.yaml,user1-lh-svc-proxy-rolebinding.yaml | ||
| + | </code> | ||
+ | === Granting full access to a Namespace === | ||
| + | |||
| + | <code> | ||
| + | kube1:~/users# cat ns-full-access.yaml | ||
| + | </code><code> | ||
| + | --- | ||
| + | kind: Role | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | metadata: | ||
| + | name: ns-full-access | ||
| + | namespace: my-ns | ||
| + | rules: | ||
| + | - apiGroups: ["*"] | ||
| + | resources: ["*"] | ||
| + | verbs: ["*"] | ||
| + | --- | ||
| + | kind: RoleBinding | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | metadata: | ||
| + | name: ns-full-access-rolebinding | ||
| + | namespace: my-ns | ||
| + | subjects: | ||
| + | - apiGroup: rbac.authorization.k8s.io | ||
| + | kind: Group | ||
| + | name: cko | ||
| + | #kind: User | ||
| + | #name: user1 | ||
| + | roleRef: | ||
| + | kind: Role | ||
| + | name: ns-full-access | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | #roleRef: | ||
| + | #apiGroup: rbac.authorization.k8s.io | ||
| + | #kind: ClusterRole | ||
| + | #name: admin | ||
| + | </code><code> | ||
| + | kube1:~/users# kubectl apply -f ns-full-access.yaml | ||
| + | </code> | ||
| + | |||
+ | === Finding the roles granted to an account === | ||
| + | <code> | ||
| + | kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "user1")' | ||
| + | |||
| + | kube1:~/users# kubectl get rolebindings --all-namespaces -o=json | jq '.items[] | select(.subjects[]?.name == "cko")' | ||
| + | |||
| + | kube1:~/users# kubectl delete -f ns-full-access.yaml | ||
+ | OR | ||
| + | kube1:~/users# kubectl -n my-ns delete rolebindings ns-full-access-rolebinding | ||
| + | kube1:~/users# kubectl -n my-ns delete role ns-full-access | ||
| + | </code> | ||
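The jq filter used above can be tried offline; a minimal stand-in for `kubectl get rolebindings --all-namespaces -o=json` output, with the example names from this section:

```shell
# Sample shaped like the API's RoleBindingList (only the fields jq touches):
cat > rb.json <<'EOF'
{"items": [
  {"metadata": {"name": "ns-full-access-rolebinding", "namespace": "my-ns"},
   "subjects": [{"kind": "User", "name": "user1"}]},
  {"metadata": {"name": "other-rolebinding", "namespace": "my-ns"},
   "subjects": [{"kind": "Group", "name": "cko"}]}
]}
EOF
# `.subjects[]?` tolerates rolebindings that have no subjects list at all:
jq -r '.items[] | select(.subjects[]?.name == "user1") | .metadata.name' rb.json
```

Printing only `.metadata.name` (instead of the whole object, as in the commands above) is handy when feeding the result to `kubectl delete rolebinding`.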
| + | |||
+ | ==== Using ClusterRole and ClusterRoleBinding ==== | ||
| + | |||
+ | === Granting access to services/port-forward in the Cluster === | ||
| + | |||
| + | <code> | ||
| + | kube1:~/users# cat svc-pfw-role.yaml | ||
| + | </code><code> | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | kind: ClusterRole | ||
| + | #kind: Role | ||
| + | metadata: | ||
| + | name: svc-pfw-role | ||
| + | # namespace: my-pgcluster-ns | ||
| + | rules: | ||
| + | - apiGroups: [""] | ||
| + | resources: ["services"] | ||
| + | verbs: ["get"] | ||
| + | - apiGroups: [""] | ||
| + | resources: ["pods"] | ||
| + | verbs: ["get", "list"] | ||
| + | - apiGroups: [""] | ||
| + | resources: ["pods/portforward"] | ||
| + | verbs: ["create"] | ||
| + | </code><code> | ||
| + | kube1:~/users# cat user1-svc-pfw-rolebinding.yaml | ||
| + | </code><code> | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | kind: ClusterRoleBinding | ||
| + | #kind: RoleBinding | ||
| + | metadata: | ||
| + | name: user1-svc-pfw-rolebinding | ||
| + | # namespace: my-pgcluster-ns | ||
| + | subjects: | ||
| + | - kind: User | ||
| + | name: user1 | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | roleRef: | ||
| + | kind: ClusterRole | ||
| + | # kind: Role | ||
| + | name: svc-pfw-role | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | </code><code> | ||
| + | kube1:~/users# kubectl apply -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml | ||
| + | |||
| + | student@client1:~$ kubectl port-forward -n my-pgcluster-ns services/my-pgcluster-rw 5432:5432 | ||
| + | |||
| + | student@client1:~$ psql postgres://keycloak:strongpassword@127.0.0.1:5432/keycloak | ||
| + | </code> | ||
| + | |||
+ | * Access via proxy to the [[Система Kubernetes#Kubernetes Dashboard]] | ||
| + | |||
| + | <code> | ||
| + | kube1:~/users# kubectl delete -f svc-pfw-role.yaml,user1-svc-pfw-rolebinding.yaml | ||
| + | </code> | ||
+ | === Granting full access to the Kubernetes Cluster === | ||
| + | |||
| + | <code> | ||
| + | kube1:~/users# kubectl get clusterroles | less | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrole cluster-admin -o yaml | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrolebindings | less | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrolebindings kubeadm:cluster-admins -o yaml | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrolebindings cluster-admin -o yaml | ||
| + | |||
| + | kube1:~/users# cat user1-cluster-admin.yaml | ||
| + | </code><code> | ||
| + | apiVersion: rbac.authorization.k8s.io/v1 | ||
| + | kind: ClusterRoleBinding | ||
| + | metadata: | ||
| + | name: user1-cluster-admin | ||
| + | subjects: | ||
| + | - kind: User | ||
| + | name: user1 | ||
| + | # name: user1@corp13.un | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | roleRef: | ||
| + | kind: ClusterRole | ||
| + | name: cluster-admin | ||
| + | apiGroup: rbac.authorization.k8s.io | ||
| + | </code><code> | ||
| + | kube1:~/users# kubectl apply -f user1-cluster-admin.yaml | ||
| + | |||
| + | student@client1:~$ kubectl get nodes | ||
| + | </code> | ||
| + | |||
+ | === Finding the cluster roles granted to an account === | ||
| + | <code> | ||
| + | kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "kubeadm:cluster-admins")' | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "user1")' | ||
| + | |||
| + | kube1:~/users# kubectl get clusterrolebindings -o=json | jq '.items[] | select(.subjects[]?.name == "default")' | ||
| + | |||
| + | kube1:~/users# kubectl delete -f user1-cluster-admin.yaml | ||
+ | OR | ||
| + | kube1:~/users# kubectl delete clusterrolebindings user1-cluster-admin | ||
| + | </code> | ||
| ===== Kubernetes Dashboard ===== | ===== Kubernetes Dashboard ===== | ||
| + | |||
| + | * https://www.bytebase.com/blog/top-open-source-kubernetes-dashboard/ | ||
| * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ | * https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ | ||
| * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md | * https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md | ||
| + | |||
| + | * [[https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_token/]] | ||
| + | * [[https://www.jwt.io/|JSON Web Token (JWT) Debugger]] | ||
| <code> | <code> | ||
| $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml | $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml | ||
| - | $ cat dashboard-user-role.yaml | + | $ cat dashboard-sa-admin-user.yaml |
| </code><code> | </code><code> | ||
| --- | --- | ||
| Line 1595: | Line 2518: | ||
| name: admin-user | name: admin-user | ||
| namespace: kubernetes-dashboard | namespace: kubernetes-dashboard | ||
| + | #namespace: default | ||
| --- | --- | ||
| apiVersion: rbac.authorization.k8s.io/v1 | apiVersion: rbac.authorization.k8s.io/v1 | ||
| Line 1608: | Line 2532: | ||
| name: admin-user | name: admin-user | ||
| namespace: kubernetes-dashboard | namespace: kubernetes-dashboard | ||
| - | --- | + | #namespace: default |
| + | </code><code> | ||
| + | $ kubectl apply -f dashboard-sa-admin-user.yaml | ||
| + | |||
| + | $ kubectl auth can-i get pods --as=system:serviceaccount:kubernetes-dashboard:admin-user | ||
| + | |||
| + | $ kubectl create token admin-user -n kubernetes-dashboard #--duration=1h | ||
| + | |||
| + | $ ###ps aux | grep kube-apiserver | grep service-account-key-file | ||
| + | $ ###cat /etc/kubernetes/ssl/sa.pub | ||
| + | $ ###echo ... | jq -R 'split(".") | .[1] | @base64d | fromjson' | ||
| + | $ ###echo ... | awk -F'.' '{print $2}' | base64 -d | jq -r '.exp | todate' | ||
| + | </code> | ||
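The commented-out expiry checks above split the token on dots; a self-contained sketch with a toy token (the payload values here are made up, a real token comes from `kubectl create token`):

```shell
# A service-account token is a JWT: base64url(header).base64url(payload).signature
header=$(printf '{"alg":"RS256","typ":"JWT"}' | base64 | tr '+/' '-_' | tr -d '=\n')
payload=$(printf '{"sub":"system:serviceaccount:kubernetes-dashboard:admin-user","exp":1700000000}' \
          | base64 | tr '+/' '-_' | tr -d '=\n')
token="$header.$payload.dummysignature"
# Take the 2nd dot-separated field, restore the base64 alphabet and padding, decode:
p=$(printf '%s' "$token" | awk -F'.' '{print $2}')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="$p="; done
printf '%s' "$p" | tr '_-' '/+' | base64 -d; echo
```

The padding/alphabet fixup is why the shorter `awk ... | base64 -d` one-liner above can fail on some tokens: JWT uses base64url without `=` padding.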
| + | |||
+ | ==== Access via proxy ==== | ||
| + | |||
| + | <code> | ||
| + | cmder$ kubectl proxy | ||
| + | </code> | ||
| + | |||
| + | * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ | ||
| + | |||
| + | |||
+ | ==== Access via port-forward ==== | ||
| + | <code> | ||
| + | $ kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443 | ||
| + | </code> | ||
| + | |||
| + | * https://localhost:8443 | ||
| + | |||
+ | ==== Creating a long-lived token ==== | ||
| + | <code> | ||
| + | $ cat dashboard-secret-for-token.yaml | ||
| + | </code><code> | ||
| apiVersion: v1 | apiVersion: v1 | ||
| kind: Secret | kind: Secret | ||
| Line 1614: | Line 2571: | ||
| name: admin-user | name: admin-user | ||
| namespace: kubernetes-dashboard | namespace: kubernetes-dashboard | ||
| + | #namespace: default | ||
| annotations: | annotations: | ||
| kubernetes.io/service-account.name: "admin-user" | kubernetes.io/service-account.name: "admin-user" | ||
| type: kubernetes.io/service-account-token | type: kubernetes.io/service-account-token | ||
| </code><code> | </code><code> | ||
$ kubectl apply -f dashboard-secret-for-token.yaml

$ kubectl get secret admin-user -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 -d ; echo
</code>
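Secret values are stored base64-encoded, which is why the jsonpath output above has to be piped through ''base64 -d''. A local round-trip demonstration (the value here is a stand-in string, not a real credential):

```shell
RAW='eyJhbGciOiJSUzI1NiJ9.payload.sig'        # stand-in for a token value
ENCODED=$(printf '%s' "$RAW" | base64 -w0)    # what .data.token would hold
printf '%s' "$ENCODED" | base64 -d ; echo     # what `base64 -d` recovers
```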
===== Monitoring =====

==== Metrics Server ====

  * [[https://kubernetes-sigs.github.io/metrics-server/|Kubernetes Metrics Server]]
  * [[https://medium.com/@cloudspinx/fix-error-metrics-api-not-available-in-kubernetes-aa10766e1c2f|Fix “error: Metrics API not available” in Kubernetes]]

<code>
kube1:~/metrics-server# curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml | tee metrics-server-components.yaml

kube1:~/metrics-server# cat metrics-server-components.yaml
</code><code>
...
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls    # add this
...
</code><code>
kube1:~/metrics-server# kubectl apply -f metrics-server-components.yaml

kube1# kubectl get pods -A | grep metrics-server

kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>
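''kubectl top node'' lacks `--sort-by` in older clients; plain `sort` on the captured output does the job. A sketch over sample output (node names and usage numbers are made up):

```shell
# Sample `kubectl top node` output, captured as text.
kubectl_top_sample='NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kube2   120m         6%     900Mi           48%
kube1   250m         12%    1200Mi          65%'

# Skip the header line, then sort numerically (descending) on the CPU% column.
printf '%s\n' "$kubectl_top_sample" | tail -n +2 | sort -k3 -rn
```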
==== kube-state-metrics ====

  * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]]
  * ... alerts with details on failed pods ...

<code>
kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

kube1# helm repo update

kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace

kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics
</code>
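The ''/metrics'' endpoint above returns plain Prometheus text; `kube_pod_status_phase` gauges are what failed-pod alerts typically key off. A grep sketch over a captured sample (namespace and pod names invented):

```shell
# A pod's current phase carries value 1; stale phases remain listed with 0,
# so match both the Failed label and the trailing " 1".
metrics_sample='kube_pod_status_phase{namespace="default",pod="app-1",phase="Running"} 1
kube_pod_status_phase{namespace="default",pod="job-x",phase="Failed"} 1
kube_pod_status_phase{namespace="default",pod="job-x",phase="Running"} 0'

printf '%s\n' "$metrics_sample" | grep 'phase="Failed"' | grep ' 1$'
```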
===== Debugging, troubleshooting =====

==== Debugging etcd ====

  * [[https://sysdig.com/blog/monitor-etcd/|How to monitor etcd]]

<code>
kubeN:~# more /etc/kubernetes/manifests/kube-apiserver.yaml

kubeN:~# etcdctl member list -w table \
  --endpoints=https://kube1:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/node-kube1.pem \
  --key=/etc/ssl/etcd/ssl/node-kube1-key.pem

kubeN:~# etcdctl endpoint status -w table \
  --endpoints=https://kube1:2379,https://kube2:2379,https://kube3:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/node-kube1.pem \
  --key=/etc/ssl/etcd/ssl/node-kube1-key.pem
</code>
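''etcdctl'' also reads its connection settings from `ETCDCTL_*` environment variables, which saves repeating the four TLS flags on every call. The paths follow the same layout as above:

```shell
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://kube1:2379,https://kube2:2379,https://kube3:2379
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/node-kube1.pem
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/node-kube1-key.pem

# Subsequent calls need no flags, e.g.:
#   etcdctl endpoint health -w table
echo "$ETCDCTL_ENDPOINTS"
```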
===== Additional materials =====

==== Configuring registry-mirrors for Kubespray ====

<code>
~/kubespray# cat inventory/mycluster/group_vars/all/docker.yml
</code><code>
...
docker_registry_mirrors:
  - https://mirror.gcr.io
...
</code><code>
~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml
</code><code>
...
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror.gcr.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
...
</code>
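With `containerd_registries_mirrors` set, the mirror is expected to end up as a ''hosts.toml'' under containerd's registry host directory (''/etc/containerd/certs.d/docker.io/''). A sketch of that expected result, written to ''/tmp'' for illustration only (not a file Kubespray is guaranteed to produce verbatim):

```shell
# Sketch: what the rendered containerd mirror config is expected to look like.
mkdir -p /tmp/certs.d/docker.io
cat > /tmp/certs.d/docker.io/hosts.toml <<'EOF'
server = "https://registry-1.docker.io"

[host."https://mirror.gcr.io"]
  capabilities = ["pull", "resolve"]
  skip_verify = false
EOF

grep -q 'mirror.gcr.io' /tmp/certs.d/docker.io/hosts.toml && echo mirror-configured
```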

==== Installing kubelet, kubeadm, kubectl on Ubuntu 20 ====

  * Run on every node:

<code>
mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt update && apt install -y kubeadm=1.28.1-1.1 kubelet=1.28.1-1.1 kubectl=1.28.1-1.1
</code>
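The ''=1.28.1-1.1'' pin uses apt's `<upstream version>-<package revision>` syntax; after installing, ''apt-mark hold kubelet kubeadm kubectl'' keeps a routine `apt upgrade` from moving a node ahead of the control plane. Splitting the pin with shell parameter expansion:

```shell
# Separate the upstream Kubernetes version from the Debian package revision.
PIN=1.28.1-1.1
echo "upstream=${PIN%-*} revision=${PIN##*-}"
# → upstream=1.28.1 revision=1.1
```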

==== Use .kube/config Client certs in curl ====

  * [[https://serverfault.com/questions/1094361/use-kube-config-client-certs-in-curl|Use .kube/config Client certs in curl]]

<code>
cat ~/.kube/config | yq -r '.clusters[0].cluster."certificate-authority-data"' | base64 -d - > ~/.kube/ca.pem
cat ~/.kube/config | yq -r '.users[0].user."client-certificate-data"' | base64 -d - > ~/.kube/user.pem
cat ~/.kube/config | yq -r '.users[0].user."client-key-data"' | base64 -d - > ~/.kube/user-key.pem

SERVER_URL=$(cat ~/.kube/config | yq -r .clusters[0].cluster.server)

curl --cacert ~/.kube/ca.pem --cert ~/.kube/user.pem --key ~/.kube/user-key.pem -X GET ${SERVER_URL}/api/v1/namespaces/default/pods/
</code>
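A quick sanity check that the extracted ''user.pem''/''user-key.pem'' actually pair up is to compare their public keys with ''openssl''. Self-contained demo: a throwaway self-signed pair in ''/tmp'' stands in for the real kubeconfig-derived files:

```shell
# Generate a disposable cert/key pair in place of ~/.kube/user*.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-user" -days 1 \
  -keyout /tmp/user-key.pem -out /tmp/user.pem 2>/dev/null

# Identical SHA-256 digests of the two public keys => the cert matches the key.
A=$(openssl x509 -in /tmp/user.pem -pubkey -noout | sha256sum)
B=$(openssl pkey -in /tmp/user-key.pem -pubout | sha256sum)
[ "$A" = "$B" ] && echo cert-and-key-match
```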
==== bare-metal minikube ====
==== kompose ====

  * https://kompose.io/
  * [[https://stackoverflow.com/questions/47536536/whats-the-difference-between-docker-compose-and-kubernetes|What's the difference between Docker Compose and Kubernetes?]]
  * [[https://loft.sh/blog/docker-compose-to-kubernetes-step-by-step-migration/|Docker Compose to Kubernetes: Step-by-Step Migration]]

<code>
kube1:~/gitlab# curl -L https://github.com/kubernetes/kompose/releases/download/v1.37.0/kompose-linux-amd64 -o /usr/local/bin/kompose

kube1:~/gitlab# chmod +x /usr/local/bin/kompose

root@gate:~# curl -L https://github.com/kubernetes/kompose/releases/download/v1.26.0/kompose-linux-amd64 -o kompose
root@gate:~# chmod +x kompose