This shows you the differences between two versions of the page.
система_kubernetes [2024/04/03 10:18] val [Volumes] |
система_kubernetes [2024/05/31 06:19] val [Deployment via Kubespray] |
Line 171: | Line 171: | ||
=== Software installation === | === Software installation === | ||
+ | === !!! Consult your instructor !!! === | ||
<code> | <code> | ||
node1# bash -c ' | node1# bash -c ' | ||
Line 258: | Line 259: | ||
=== Removing a node === | === Removing a node === | ||
+ | |||
+ | * [[https://stackoverflow.com/questions/56064537/how-to-remove-broken-nodes-in-kubernetes|How to remove broken nodes in Kubernetes]] | ||
+ | |||
<code> | <code> | ||
$ kubectl cordon kube3 | $ kubectl cordon kube3 | ||
- | $ time kubectl drain kube3 --force --ignore-daemonsets --delete-emptydir-data | + | $ time kubectl drain kube3 #--ignore-daemonsets --delete-emptydir-data --force |
$ kubectl delete node kube3 | $ kubectl delete node kube3 | ||
Line 329: | Line 333: | ||
==== Deployment via Kubespray ==== | ==== Deployment via Kubespray ==== | ||
+ | |||
+ | === !!! Consult your instructor !!! === | ||
* [[https://github.com/kubernetes-sigs/kubespray]] | * [[https://github.com/kubernetes-sigs/kubespray]] | ||
Line 344: | Line 350: | ||
kube1# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4; | kube1# ssh-copy-id kube1;ssh-copy-id kube2;ssh-copy-id kube3;ssh-copy-id kube4; | ||
- | kube1# apt update | + | kube1# #apt update |
- | kube1# apt install python3-pip -y | + | kube1# #apt install python3-pip -y |
kube1# git clone https://github.com/kubernetes-sigs/kubespray | kube1# git clone https://github.com/kubernetes-sigs/kubespray | ||
Line 356: | Line 362: | ||
~/kubespray# git branch -r | ~/kubespray# git branch -r | ||
- | ~/kubespray# ### git checkout origin/release-2.22 # debian11 - 2.24 | + | ~/kubespray# git checkout origin/release-2.22 |
~/kubespray# git tag -l | ~/kubespray# git tag -l | ||
~/kubespray# ### git checkout tags/v2.22.1 | ~/kubespray# ### git checkout tags/v2.22.1 | ||
- | ~/kubespray# git checkout 4c37399c7582ea2bfb5202c3dde3223f9c43bf59 | + | ~/kubespray# ### git checkout 4c37399c7582ea2bfb5202c3dde3223f9c43bf59 |
~/kubespray# ### git checkout master | ~/kubespray# ### git checkout master | ||
Line 373: | Line 379: | ||
real 1m48.202s | real 1m48.202s | ||
- | ~/kubespray# cp -rfp inventory/sample inventory/mycluster | + | ~/kubespray# cp -rvfpT inventory/sample inventory/mycluster |
~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223) | ~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223) | ||
Line 380: | Line 386: | ||
~/kubespray# less inventory/mycluster/hosts.yaml | ~/kubespray# less inventory/mycluster/hosts.yaml | ||
+ | </code> | ||
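Each element of the `IPS` array above is a `name,ip` pair that Kubespray's inventory builder splits into a host entry. A minimal sketch of that split, using hypothetical stand-in addresses for the lab's `192.168.X.*` values:

```shell
# Sketch: how each name,ip element of the IPS array is split.
# Addresses are hypothetical stand-ins; replace 192.168.1.* with your 192.168.X.* values.
declare -a IPS=(kube1,192.168.1.221 kube2,192.168.1.222 kube3,192.168.1.223)
for pair in "${IPS[@]}"; do
  # ${pair%%,*} keeps the part before the first comma (hostname),
  # ${pair##*,} keeps the part after the last comma (IP address)
  echo "host=${pair%%,*} ip=${pair##*,}"
done
```

The generated `inventory/mycluster/hosts.yaml` can then be reviewed with `less`, as shown above.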
+ | * [[Сервис Ansible#Использование модулей|Using Ansible modules]] to disable swap | ||
+ | * Configuring registry-mirrors for installing and running the cluster | ||
+ | <code> | ||
+ | ~/kubespray# cat inventory/mycluster/group_vars/all/docker.yml | ||
+ | </code><code> | ||
+ | ... | ||
+ | docker_registry_mirrors: | ||
+ | - https://mirror.gcr.io | ||
+ | ... | ||
+ | </code><code> | ||
+ | ~/kubespray# cat inventory/mycluster/group_vars/all/containerd.yml | ||
+ | </code><code> | ||
+ | ... | ||
+ | containerd_registries_mirrors: | ||
+ | - prefix: docker.io | ||
+ | mirrors: | ||
+ | - host: https://mirror.gcr.io | ||
+ | capabilities: ["pull", "resolve"] | ||
+ | skip_verify: false | ||
+ | ... | ||
+ | </code> | ||
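Before running the playbook, it can be worth a quick sanity check that the mirror override is really present in the group vars. A trivial sketch (the file content mirrors the inventory fragment above; `/tmp` is used here just for illustration):

```shell
# Sketch: confirm the registry mirror override is present before deploying.
# The YAML below reproduces the containerd group-vars fragment shown above.
cat > /tmp/containerd.yml <<'EOF'
containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://mirror.gcr.io
        capabilities: ["pull", "resolve"]
        skip_verify: false
EOF
grep -q 'mirror.gcr.io' /tmp/containerd.yml && echo "mirror configured"
```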
+ | * Deploying the cluster | ||
+ | <code> | ||
~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml | ~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml | ||
real 45m31.796s | real 45m31.796s | ||
Line 466: | Line 496: | ||
$ kubectl delete pod my-debian | $ kubectl delete pod my-debian | ||
+ | $ ###kubectl delete pod my-debian --grace-period=0 --force | ||
$ kubectl create deployment my-debian --image=debian -- "sleep" "3600" | $ kubectl create deployment my-debian --image=debian -- "sleep" "3600" | ||
Line 594: | Line 625: | ||
$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns | $ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns | ||
+ | </code> | ||
+ | |||
+ | * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]] | ||
+ | |||
+ | <code> | ||
+ | gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd | ||
+ | deployment.apps/my-webd | ||
+ | REVISION CHANGE-CAUSE | ||
+ | 1 <none> | ||
+ | ... | ||
+ | N <none> | ||
+ | |||
+ | gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd --revision=1 | ||
+ | ... | ||
+ | Image: server.corpX.un:5000/student/webd:ver1.1 | ||
+ | ... | ||
+ | |||
+ | |||
+ | gitlab-runner@server:~$ kubectl -n my-ns rollout undo deployment/my-webd --to-revision=1 | ||
+ | |||
+ | gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd | ||
+ | deployment.apps/my-webd | ||
+ | REVISION CHANGE-CAUSE | ||
+ | 2 <none> | ||
+ | ... | ||
+ | N+1 <none> | ||
</code> | </code> | ||
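The CHANGE-CAUSE column above prints `<none>` because nothing records a cause; `kubectl` fills it from the `kubernetes.io/change-cause` annotation on the deployment. A dry-run sketch of setting it (deployment and namespace names follow the example above; the cause text is a hypothetical placeholder):

```shell
# Dry run: print the command that would record a change cause for the current revision.
# kubectl displays the kubernetes.io/change-cause annotation in the CHANGE-CAUSE column
# of `kubectl rollout history`.
ns=my-ns
deploy=my-webd
cause="upgrade to webd:ver1.2"   # hypothetical description
echo kubectl -n "$ns" annotate deployment/"$deploy" kubernetes.io/change-cause="$cause"
```

Drop the leading `echo` to actually apply the annotation against a running cluster.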
Line 644: | Line 702: | ||
$ minikube service list | $ minikube service list | ||
- | $ minikube service my-webd -n my-ns --url | + | $ minikube service my-webd --url -n my-ns |
http://192.168.49.2:NNNNN | http://192.168.49.2:NNNNN | ||
- | $ curl $(minikube service my-webd -n my-ns --url) | + | $ curl http://192.168.49.2:NNNNN |
</code> | </code> | ||
Line 804: | Line 862: | ||
node1# kubectl get all -n ingress-nginx | node1# kubectl get all -n ingress-nginx | ||
- | node1# ### kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml | + | node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission |
+ | |||
+ | node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml | ||
</code> | </code> | ||
Line 990: | Line 1050: | ||
storage: 128Mi | storage: 128Mi | ||
# storage: 8Gi | # storage: 8Gi | ||
- | # volumeMode: Filesystem | ||
accessModes: | accessModes: | ||
- ReadWriteMany | - ReadWriteMany | ||
Line 1013: | Line 1072: | ||
root@node1:~# kubectl get pv | root@node1:~# kubectl get pv | ||
- | root@kube1:~# ### ssh kube3 chmod 777 /disk2/ | + | root@kube1:~# ###ssh kube3 'mkdir /disk2/; chmod 777 /disk2/' |
... | ... | ||
- | root@node1:~# ###kubectl delete pv my-pv-node2-sz-128m-num-001 | + | root@node1:~# ###kubectl delete pv my-pv-<TAB> |
root@node1:~# cat my-ha-pvc.yaml | root@node1:~# cat my-ha-pvc.yaml | ||
Line 1046: | Line 1105: | ||
* [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]] | * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]] | ||
- | == rancher local-path-provisioner == | + | === rancher local-path-provisioner === |
* [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]] | * [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]] | ||
Line 1055: | Line 1114: | ||
$ kubectl get sc | $ kubectl get sc | ||
+ | |||
+ | $ kubectl -n local-path-storage get all | ||
$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less | $ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less | ||
Line 1076: | Line 1137: | ||
$ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0 | $ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0 | ||
</code> | </code> | ||
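With the provisioner installed, a claim only needs to reference its storage class and a volume is provisioned on demand. A minimal sketch of such a PVC (name and size are placeholders; `local-path` is the class created by the provisioner manifest above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-path-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce              # local-path volumes are node-local
  storageClassName: local-path   # class created by the provisioner manifest
  resources:
    requests:
      storage: 128Mi             # placeholder size
```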
- | == longhorn == | + | === longhorn === |
<code> | <code> | ||
Line 1083: | Line 1144: | ||
* [[https://github.com/longhorn/longhorn]] | * [[https://github.com/longhorn/longhorn]] | ||
<code> | <code> | ||
+ | $ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml | ||
+ | |||
+ | $ kubectl -n longhorn-system get pods -o wide --watch | ||
+ | |||
Setting->General | Setting->General | ||
Line 1094: | Line 1159: | ||
<code> | <code> | ||
cmder> kubectl proxy | cmder> kubectl proxy | ||
- | |||
- | http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/ | ||
</code> | </code> | ||
+ | |||
+ | * http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/ | ||
Connecting via ingress | Connecting via ingress | ||
Line 1123: | Line 1188: | ||
</code> | </code> | ||
- | Using snapshots | + | == Using snapshots == |
* [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn ?]] | * [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn ?]] | ||
* Take a snapshot | * Take a snapshot | ||
- | * Break something | + | * Break something (delete a user) |
* Stop the service | * Stop the service | ||
Line 1146: | Line 1211: | ||
</code> | </code> | ||
- | Using backups | + | == Using backups == |
+ | |||
+ | * Deploy [[Сервис NFS]] on server | ||
<code> | <code> | ||
- | Setting -> General -> Backup Target -> nfs://server.corp13.un:/disk2 (a Linux NFS client is not required) | + | Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (a Linux NFS client is not required) |
</code> | </code> | ||
- | * Volume -> Create Backup, delete the chart, delete the pv/pvc, restore the Volume from the backup, recreate its PV/PVC as before, install the chart, and it picks up the pv/pvc | + | * Volume -> Create Backup, delete the NS, restore the Volume from the backup, recreate the NS and the Volume's PV/PVC as before, install the chart, and it picks up the pv/pvc |
==== ConfigMap ==== | ==== ConfigMap ==== |