система_kubernetes
Revisions compared: 2024/04/03 07:27 (val, Volumes) -> 2024/04/27 06:47 (val, Deployment, Replica Sets, Pods)

=== Software installation ===
=== !!! Consult your instructor !!! ===
<code>
node1# bash -c '

=== Node removal ===

  * [[https://stackoverflow.com/questions/56064537/how-to-remove-broken-nodes-in-kubernetes|How to remove broken nodes in Kubernetes]]

<code>
$ kubectl cordon kube3

==== Deployment via Kubespray ====

=== !!! Consult your instructor !!! ===

  * [[https://github.com/kubernetes-sigs/kubespray]]

~/kubespray# git branch -r
~/kubespray# ### git checkout origin/release-2.22

~/kubespray# git tag -l
real    1m48.202s

~/kubespray# cp -rvfpT inventory/sample inventory/mycluster

~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223)

$ kubectl delete pod my-debian
$ ###kubectl delete pod my-debian --grace-period=0 --force

$ kubectl create deployment my-debian --image=debian -- "sleep" "3600"

$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
</code>

  * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]]

<code>
gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
1         <none>
...
N         <none>

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd --revision=1
...
    Image:      server.corpX.un:5000/student/webd:ver1.1
...

gitlab-runner@server:~$ kubectl -n my-ns rollout undo deployment/my-webd --to-revision=1

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
2         <none>
...
N+1       <none>
</code>
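The CHANGE-CAUSE column above shows `<none>` because nothing records a cause. One way to populate it is the standard `kubernetes.io/change-cause` annotation on the Deployment; the sketch below assumes the `my-webd`/`my-ns` names used on this page, and the annotation value itself is hypothetical:

```yaml
# Hedged sketch: recording a change cause so `kubectl rollout history` shows it.
# The annotation key is standard; the names and value are taken from/assumed
# for the examples on this page.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webd
  namespace: my-ns
  annotations:
    kubernetes.io/change-cause: "upgrade webd to ver1.2"   # hypothetical text
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-webd
  template:
    metadata:
      labels:
        app: my-webd
    spec:
      containers:
      - name: webd
        image: server.corpX.un:5000/student/webd:ver1.2
```

After `kubectl apply`, the new revision in `rollout history` carries the annotation value in its CHANGE-CAUSE column.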
  
node1# kubectl get all -n ingress-nginx

node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>
  
    storage: 128Mi
#    storage: 8Gi
  accessModes:
    - ReadWriteMany
root@node1:~# kubectl get pv

root@kube1:~# ###ssh kube3 'mkdir /disk2/; chmod 777 /disk2/'
...
root@node1:~# ###kubectl delete pv my-pv-<TAB>

root@node1:~# cat my-ha-pvc.yaml
  * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]]

=== rancher local-path-provisioner ===

  * [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]]

$ kubectl get sc

$ kubectl -n local-path-storage get all

$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less
$ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0
</code>
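The commands above install the provisioner and clean up its claims; a minimal PVC requesting the `local-path` StorageClass might look like this (a sketch — the claim name and size are hypothetical):

```yaml
# Hedged sketch: a PVC bound to the rancher local-path provisioner.
# local-path volumes are node-local, so ReadWriteOnce is the supported mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-path-pvc    # hypothetical name
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi
```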
=== longhorn ===
  
  * [[https://github.com/longhorn/longhorn]]
<code>
$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

$ kubectl -n longhorn-system get pods -o wide --watch

Setting->General
  
<code>
cmder> kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/
  
Connecting via ingress
</code>
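The ingress manifest itself is elided by the revision view; it might look roughly like this — a sketch assuming the stock `longhorn-frontend` service installed by `longhorn.yaml`, with a hypothetical hostname and ingress class:

```yaml
# Hedged sketch: exposing the Longhorn UI through an Ingress.
# The host and ingressClassName are assumptions; longhorn-frontend:80
# is the stock UI service in the longhorn-system namespace.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ui          # hypothetical name
  namespace: longhorn-system
spec:
  ingressClassName: nginx    # assumed ingress class
  rules:
  - host: longhorn.corpX.un  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
```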
  
== Using snapshots ==
  
  * [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn ?]]

  * Take a snapshot
  * Break something (e.g. delete a user)
  * Stop the service

<code>

kube1:~# kubectl -n my-keycloak-ns scale --replicas 0 statefulset my-keycloak-postgresql
</code>
  
  * Volume -> Attach to Host (any host) in Maintenance mode, Revert to the snapshot, Detach
  * Start the service

<code>
kube1:~# kubectl -n my-keycloak-ns scale --replicas 1 statefulset my-keycloak-postgresql
  
</code>

== Using backups ==

  * Deploy an [[Сервис NFS|NFS service]] on server
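On the NFS server side, the export backing the backup target could be declared like this (a sketch — the path matches the backup target used on this page, but the export options are typical assumptions, not taken from the original):

```
# Hedged sketch of /etc/exports on server; reload afterwards with `exportfs -ra`.
/var/www *(rw,sync,no_root_squash,no_subtree_check)
```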
<code>
Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (an NFS client on Linux is not needed)
</code>
  * Volume -> Create Backup, delete the NS, restore the Volume from the backup, recreate the NS and create the PV/PVC for the Volume as before, install the chart, and it picks up the pv/pvc
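Recreating the PV/PVC "as before" for a restored Longhorn volume might look roughly like this (a sketch: all names and the size are hypothetical, and the volumeHandle must match the name of the restored Longhorn volume):

```yaml
# Hedged sketch: PV pointing at a restored Longhorn volume, plus the matching PVC.
# driver.longhorn.io is Longhorn's CSI driver; everything else is a placeholder.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-my-keycloak-postgresql-0-pv   # hypothetical name
spec:
  capacity:
    storage: 8Gi                           # must match the restored volume size
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  csi:
    driver: driver.longhorn.io
    volumeHandle: restored-volume-name     # name of the restored Longhorn volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-keycloak-postgresql-0      # the name the chart expects
  namespace: my-keycloak-ns
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```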
  
==== ConfigMap ====
система_kubernetes.txt · Last modified: 2024/05/13 17:43 by val