система_kubernetes [2025/03/17 09:18] val [Ingress]
система_kubernetes [2025/03/31 13:51] (current) val [Metrics Server]
  * [[https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands]]
  * [[https://kubernetes.io/ru/docs/reference/kubectl/cheatsheet/|kubectl cheat sheet]]

==== Installation ====
~/kubespray# cat inventory/mycluster/hosts.yaml
</code><code>
all:
  hosts:
...
    kube4:
      ansible_host: 192.168.X.204
      ip: 192.168.X.204
      access_ip: 192.168.X.204
...
  kube_node:
    hosts:
...
      kube4:
...
</code><code>
(venv1) server:~/kubespray# time ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml

real    6m31.562s

~/kubespray# ###time ansible-playbook -i inventory/mycluster/hosts.yaml --limit=kube4 scale.yml

real    17m37.459s
<code>
$ kubectl api-resources

$ ###kubectl run -ti --rm my-debian --image=debian --overrides='{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}'

$ kubectl run my-debian --image=debian -- "sleep" "60"

$ kubectl get pods
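The --overrides value passed to kubectl run above is raw JSON that gets merged into the generated pod spec, so a typo silently breaks scheduling. A quick local sanity check of such a snippet (a sketch; python3 -m json.tool is used here purely as a JSON validator and is not part of the original notes):

```shell
# Validate the overrides JSON before handing it to kubectl run;
# json.tool pretty-prints valid JSON and exits non-zero on malformed input.
echo '{"spec": { "nodeSelector": {"kubernetes.io/hostname": "kube4"}}}' \
  | python3 -m json.tool
```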
        image: debian
        command: ["/bin/sh"]
        args: ["-c", "while :;do echo -n random-value:;od -A n -t d -N 1 /dev/urandom;sleep 5; done"]
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      restartPolicy: Always
</code><code>
$ kubectl apply -f my-debian-deployment.yaml #--dry-run=client #-o yaml

$ kubectl logs -l app=my-debian -f
...
$ kubectl delete -f my-debian-deployment.yaml
# port: 80
# #scheme: HTTPS

# volumeMounts:
$ #kubectl delete -f first-pool.yaml && rm first-pool.yaml
$ #kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
</code>
node1# ###kubectl delete -f ingress-nginx.controller-v1.3.1.baremetal.yaml
</code>
</code><code>
kube1:~/ingress-nginx# ###kubectl delete -f ingress-nginx.controller-v1.12.0.baremetal.yaml
</code>

=== Managing the ingress-nginx-controller configuration ===
<code>
master-1:~$ kubectl exec -n ingress-nginx pods/ingress-nginx-controller-<TAB> -- cat /etc/nginx/nginx.conf | tee nginx.conf

master-1:~$ kubectl edit -n ingress-nginx configmaps ingress-nginx-controller
</code><code>
...
data:
  use-forwarded-headers: "true"
...
</code>
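What use-forwarded-headers: "true" changes: the controller starts trusting the X-Forwarded-For header supplied by an upstream proxy instead of overwriting it with the TCP peer address. A toy shell illustration of that decision (no nginx involved; the addresses and variable names are made up for the example):

```shell
# Pick the reported client IP the way a proxy-aware server would:
# trust the forwarded header only when configured to do so.
trust_forwarded=true           # models use-forwarded-headers: "true"
incoming_xff="203.0.113.7"     # hypothetical X-Forwarded-For value
peer_addr="10.0.0.1"           # hypothetical TCP peer (the proxy itself)
if [ "$trust_forwarded" = "true" ] && [ -n "$incoming_xff" ]; then
  echo "client-ip: $incoming_xff"
else
  echo "client-ip: $peer_addr"
fi
# prints: client-ip: 203.0.113.7
```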
</code>

=== PersistentVolume and PersistentVolumeClaim ===

  * [[https://qna.habr.com/q/629022|Multiple Claims on one PersistentVolume?]]
<code>
kubeN:~# apt install open-iscsi

(venv1) server:~# ansible all -f 4 -m apt -a 'pkg=open-iscsi state=present update_cache=true' -i /root/kubespray/inventory/mycluster/hosts.yaml
</code>
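The ansible ad-hoc command above installs the package on every node in one pass; without ansible the same idea is just a loop over the node names. A dry-run sketch (the host names are hypothetical, and echo keeps it from actually connecting anywhere):

```shell
# Print the per-node install command instead of executing it.
for host in kube1 kube2 kube3 kube4; do
  echo ssh "$host" apt-get -y install open-iscsi
done
```

Dropping the echo (and having key-based ssh to the nodes) would make the loop a crude serial equivalent of the `ansible all -f 4` call.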
  * [[https://github.com/longhorn/longhorn]]
</code>
  * Volume -> Create Backup, delete the NS, restore the Volume from the backup, re-create the NS and the PV/PVC for the Volume as before, install the chart, and it picks up the existing pv/pvc

==== ConfigMap, Secret ====

<code>
server# scp /etc/pywebd/* kube1:/tmp/

kube1:~/pywebd-k8s# kubectl create configmap pywebd-conf --from-file=/tmp/pywebd.conf --dry-run=client -o yaml | tee my-webd-configmap.yaml

kube1:~/pywebd-k8s# cat my-webd-configmap.yaml
</code><code>
apiVersion: v1
data:
  pywebd.conf: |
    [default]
    DocumentRoot = /usr/local/apache2/htdocs
    Listen = 4443
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: pywebd-conf
</code><code>
kube1:~/pywebd-k8s# kubectl apply -f my-webd-configmap.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get configmaps

kube1:~/pywebd-k8s# kubectl create secret tls pywebd-tls --key /tmp/pywebd.key --cert /tmp/pywebd.crt --dry-run=client -o yaml | tee my-webd-secret-tls.yaml

kube1:~/pywebd-k8s# less my-webd-secret-tls.yaml
</code><code>
apiVersion: v1
data:
  tls.crt: ...
  tls.key: ...
kind: Secret
metadata:
  creationTimestamp: null
  name: pywebd-tls
type: kubernetes.io/tls
</code><code>
kube1:~/pywebd-k8s# rm -rv /tmp/pywebd.*

kube1:~/pywebd-k8s# kubectl apply -f my-webd-secret-tls.yaml -n my-ns

kube1:~/pywebd-k8s# kubectl -n my-ns get secrets

kube1:~/pywebd-k8s# kubectl create secret docker-registry regcred --docker-server=server.corpX.un:5000 --docker-username=student --docker-password='strongpassword' -n my-ns

kube1:~/pywebd-k8s# cat my-webd-deployment.yaml
</code><code>
...
      imagePullSecrets:
      - name: regcred

      containers:
      - name: my-webd
        image: server.corpX.un:5000/student/pywebd:ver1.2
        imagePullPolicy: "Always"

# env:
# ...
...
        livenessProbe:
          httpGet:
            port: 4443
            scheme: HTTPS
...
        volumeMounts:
...
        - name: conf-volume
          subPath: pywebd.conf
          mountPath: /etc/pywebd/pywebd.conf
        - name: secret-tls-volume
          subPath: tls.crt
          mountPath: /etc/pywebd/pywebd.crt
        - name: secret-tls-volume
          subPath: tls.key
          mountPath: /etc/pywebd/pywebd.key
...
      volumes:
...
      - name: conf-volume
        configMap:
          name: pywebd-conf
      - name: secret-tls-volume
        secret:
          secretName: pywebd-tls
...
</code><code>
kubeN$ curl --connect-to "":"":<POD_IP>:4443 https://pywebd.corpX.un
</code>
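Note the difference between the two objects above: ConfigMap data: is stored as plain text, while Secret data: values (tls.crt, tls.key) are the base64 encoding of the raw file contents, which is why they can be decoded for inspection. A local sketch of that round trip (the string is a stand-in, not the real certificate):

```shell
# Round-trip a value through base64 the way a Secret stores it.
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
printf '%s\n' "$encoded" | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```

On a live cluster the analogous check would be something like `kubectl -n my-ns get secret pywebd-tls -o jsonpath='{.data.tls\.crt}' | base64 -d` (an assumption based on standard kubectl jsonpath syntax, not taken from the original notes).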
==== ConfigMap ====
kube1:~# mkdir gowebd; cd gowebd

kube1:~/gowebd# ###helm pull webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable
kube1:~/gowebd# helm show values webd-chart --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable | tee values.yaml.orig

kube1:~/gowebd# cat values.yaml
</code><code>
...
#REALM_NAME: "corp"
</code><code>
kube1:~/gowebd# helm upgrade my-webd -i webd-chart -f values.yaml -n my-ns --create-namespace --repo https://server.corpX.un/api/v4/projects/N/packages/helm/stable

$ curl http://kubeN -H "Host: gowebd.corpX.un"
=== gitlab-runner kubernetes ===

<code>
kube1:~/gitlab-runner# kubectl create ns gitlab-runner

kube1:~/gitlab-runner# kubectl -n gitlab-runner create configmap ca-crt --from-file=/usr/local/share/ca-certificates/ca.crt

kube1:~/gitlab-runner# helm repo add gitlab https://charts.gitlab.io

kube1:~/gitlab-runner# helm repo list

kube1:~/gitlab-runner# helm search repo -l gitlab

kube1:~/gitlab-runner# helm search repo -l gitlab/gitlab-runner

kube1:~/gitlab-runner# helm show values gitlab/gitlab-runner --version 0.70.5 | tee values.yaml

kube1:~/gitlab-runner# cat values.yaml
</code><code>
...
gitlabUrl: https://server.corpX.un
...
runnerToken: "NNNNNNNNNNNNNNNNNNNNN"
...
rbac:
...
  create: true #change this
...
serviceAccount:
...
  create: true #change this
...
runners:
...
  config: |
    [[runners]]
      tls-ca-file = "/mnt/ca.crt" #insert this
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "alpine"
        privileged = true #insert this
...
securityContext:
  allowPrivilegeEscalation: true #change this
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  privileged: true #change this
...
#volumeMounts: [] #comment this
volumeMounts:
- name: ca-crt
  subPath: ca.crt
  mountPath: /mnt/ca.crt
...
#volumes: [] #comment this
volumes:
- name: ca-crt
  configMap:
    name: ca-crt
...
</code><code>
kube1:~/gitlab-runner# helm upgrade -i gitlab-runner gitlab/gitlab-runner -f values.yaml -n gitlab-runner --version 0.70.5

kube1:~/gitlab-runner# kubectl get all -n gitlab-runner

kube1:~/gitlab-runner# ### helm -n gitlab-runner uninstall gitlab-runner
</code>

== old version ==
<code>
gitlab-runner@server:~$ helm repo add gitlab https://charts.gitlab.io
  * http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

===== Monitoring =====

==== Metrics Server ====

  * [[https://kubernetes-sigs.github.io/metrics-server/|Kubernetes Metrics Server]]
  * [[https://medium.com/@cloudspinx/fix-error-metrics-api-not-available-in-kubernetes-aa10766e1c2f|Fix "error: Metrics API not available" in Kubernetes]]

<code>
kube1:~/metrics-server# curl -L https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.2/components.yaml | tee metrics-server-components.yaml

kube1:~/metrics-server# cat metrics-server-components.yaml
</code><code>
...
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls # add this
...
</code><code>
kube1:~/metrics-server# kubectl apply -f metrics-server-components.yaml

kube1# kubectl get pods -A | grep metrics-server

kube1# kubectl top pod #-n kube-system

kube1# kubectl top pod -A --sort-by=memory

kube1# kubectl top node
</code>
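kubectl top pod -A --sort-by=memory sorts on the server side; the same ordering can be reproduced client-side with GNU sort -h, which understands the Mi/Gi-style suffixes in the MEMORY column. A sketch with made-up pod rows (NAME CPU MEMORY), not output from a live cluster:

```shell
# Sort hypothetical "NAME CPU MEMORY" rows by the memory column, descending.
printf 'pod-a 12m 64Mi\npod-b 5m 300Mi\npod-c 1m 8Mi\n' | sort -k3 -h -r
```

The same pipeline appended to a real `kubectl top pod -A --no-headers` call would give an equivalent client-side ordering.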
+ | |||
+ | ==== kube-state-metrics ==== | ||
+ | |||
+ | * [[https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics]] | ||
+ | |||
+ | <code> | ||
+ | kube1# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts | ||
+ | |||
+ | kube1# helm repo update | ||
+ | kube1# helm install kube-state-metrics prometheus-community/kube-state-metrics -n vm --create-namespace | ||
+ | |||
+ | kube1# curl kube-state-metrics.vm.svc.cluster.local:8080/metrics | ||
+ | </code> | ||
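The curl above returns plain-text Prometheus exposition format, one sample per line, so the payload is easy to narrow to a single metric family with grep. The sample lines below are illustrative, not taken from a live cluster:

```shell
# Keep only the kube_pod_info samples from a metrics payload;
# '# HELP' / other families are filtered out by the anchored pattern.
printf '# HELP kube_pod_info Pod info\nkube_pod_info{pod="a"} 1\nkube_node_info{node="k1"} 1\n' \
  | grep '^kube_pod_info'
# prints: kube_pod_info{pod="a"} 1
```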
===== Debugging, troubleshooting =====