система_kubernetes
Differences between revisions 2024/04/03 11:52 and 2024/04/22 07:26 (val, [Volumes])
Line 171:
  
=== Software installation ===
=== !!! Consult the instructor !!! ===
<code>
node1# bash -c '
Line 259:
  
=== Removing a node ===

  * [[https://stackoverflow.com/questions/56064537/how-to-remove-broken-nodes-in-kubernetes|How to remove broken nodes in Kubernetes]]

<code>
$ kubectl cordon kube3
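
# (sketch) the usual follow-up after cordoning: drain the node and remove
# it from the cluster; verify the flags against your kubectl version
$ kubectl drain kube3 --ignore-daemonsets --delete-emptydir-data
$ kubectl delete node kube3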
Line 333:
  
==== Deployment via Kubespray ====

=== !!! Consult the instructor !!! ===

  * [[https://github.com/kubernetes-sigs/kubespray]]
Line 362:
  
~/kubespray# git branch -r
~/kubespray# ### git checkout origin/release-2.22
  
~/kubespray# git tag -l
Line 379:
real    1m48.202s
  
~/kubespray# cp -rvfpT inventory/sample inventory/mycluster
  
~/kubespray# declare -a IPS=(kube1,192.168.X.221 kube2,192.168.X.222 kube3,192.168.X.223)
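
# (sketch) the canonical next step from the upstream Kubespray README for
# this release line: build the inventory from the IPS list declared above
~/kubespray# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}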
Line 600:
  
$ kubectl delete pod/my-webd-NNNNNNNNNN-NNNNN -n my-ns
</code>

  * [[https://learnk8s.io/kubernetes-rollbacks|How do you rollback deployments in Kubernetes?]]

<code>
gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
1         <none>
...
N         <none>

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd --revision=1
...
    Image:      server.corpX.un:5000/student/webd:ver1.1
...

gitlab-runner@server:~$ kubectl -n my-ns rollout undo deployment/my-webd --to-revision=1

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
2         <none>
...
N+1       <none>
</code>
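
The CHANGE-CAUSE column stays <none> until the kubernetes.io/change-cause annotation is set on the deployment. A minimal sketch of recording a cause (the annotation text is an assumption; deployment and namespace are taken from the example above):

<code>
gitlab-runner@server:~$ kubectl -n my-ns annotate deployment/my-webd kubernetes.io/change-cause="rollout to ver1.2"

gitlab-runner@server:~$ kubectl -n my-ns rollout history deployment/my-webd
deployment.apps/my-webd
REVISION  CHANGE-CAUSE
...
N         rollout to ver1.2
</code>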
  
Line 837:
node1# kubectl get all -n ingress-nginx
  
node1# ###kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>
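
The webhook is most likely deleted separately because, with the controller gone, a leftover ValidatingWebhookConfiguration would block later create/update operations on Ingress objects. It can be inspected first (standard kubectl, a sketch):

<code>
node1# kubectl get validatingwebhookconfigurations
</code>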
  
Line 1025:
    storage: 128Mi
#    storage: 8Gi
  accessModes:
    - ReadWriteMany
Line 1047:
root@node1:~# kubectl get pv
  
root@kube1:~# ###ssh kube3 'mkdir /disk2/; chmod 777 /disk2/'
...
root@node1:~# ###kubectl delete pv my-pv-<TAB>
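
# (sketch) a hypothetical minimal hostPath PV matching the /disk2 example
# above; the file name, PV name, and path are illustrative, not from the page
root@node1:~# cat my-pv-kube3.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv-kube3-sz-128m-num-001
spec:
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /disk2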
  
root@node1:~# cat my-ha-pvc.yaml
Line 1080:
  * [[https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/|Dynamic Volume Provisioning]]
  
=== rancher local-path-provisioner ===
  
  * [[https://github.com/rancher/local-path-provisioner|rancher local-path-provisioner]]
Line 1089:
  
$ kubectl get sc

$ kubectl -n local-path-storage get all
  
$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.26/deploy/local-path-storage.yaml | less
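
# (sketch) a hypothetical PVC using the local-path StorageClass installed
# above; the claim name is illustrative (local-path supports ReadWriteOnce)
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-lp-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 128Mi
EOF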
Line 1112:
$ ###kubectl -n my-keycloak-ns delete pvc data-my-keycloak-postgresql-0
</code>
=== longhorn ===
  
<code>
Line 1119:
  * [[https://github.com/longhorn/longhorn]]
<code>
$ kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

$ kubectl -n longhorn-system get pods -o wide --watch

Setting->General
  
Line 1134:
<code>
cmder> kubectl proxy
</code>

  * http://localhost:8001/api/v1/namespaces/longhorn-system/services/longhorn-frontend:80/proxy/
  
Connecting via ingress
Line 1163:
</code>
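
Purely as a hedged sketch, an Ingress for the longhorn-frontend service could look like the following; the service name and port come from the proxy URL above, while the host name and ingress class are assumptions:

<code>
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  ingressClassName: nginx
  rules:
  - host: longhorn.corpX.un
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
</code>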
  
== Using snapshots ==
  
  * [[https://github.com/longhorn/longhorn/issues/63?ref=https%3A%2F%2Fgiter.vip|What should be the best procedure to recover a snapshot or a backup in rancher2/longhorn?]]
  
  * Take a snapshot
  * Break something (e.g. delete a user)
  * Stop the service (see the sketch below)
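
A possible way to perform the "stop the service" step, assuming the Keycloak release from the PVC example above (the namespace comes from there; scaling all statefulsets in it is an assumption):

<code>
$ kubectl -n my-keycloak-ns scale statefulset --all --replicas=0
</code>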
  
Line 1186:
</code>
  
== Using backups ==

  * Deploy [[Сервис NFS]] on server
<code>
Setting -> General -> Backup Target -> nfs://server.corp13.un:/var/www (no Linux nfs client is needed)
</code>
  * Volume -> Create Backup, then delete the NS, restore the Volume from the backup, recreate the NS and the Volume's PV/PVC as they were, and install the chart; it picks up the existing pv/pvc