====== решение_ceph ======

  
  * [[https://docs.ceph.com/en/latest/install/manual-deployment/|MANUAL DEPLOYMENT]]
  * [[https://www.server-world.info/en/note?os=Debian_11&p=ceph14&f=1|Debian 11 : Ceph Nautilus]]
  
==== Installation ====
  
==== MON ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-mon/|CEPH-MON – CEPH MONITOR DAEMON]]
  
=== Initial configuration ===
<code>
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node3,node4,node5
mon host = 192.168.X.3,192.168.X.4,192.168.X.5
public network = 192.168.X.0/24
auth cluster required = none
auth service required = none
auth client required = none
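# NOTE: cephx authentication is disabled above to keep this lab setup simple;
# the Ceph default for all three "auth ... required" settings is cephx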
</code><code>
scp /etc/ceph/ceph.conf node4:/etc/ceph/
scp /etc/ceph/ceph.conf node5:/etc/ceph/
</code><code>
monmaptool --create --add node3 192.168.X.3 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool --add node4 192.168.X.4 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool --add node5 192.168.X.5 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
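# the monmap records the cluster fsid and the initial monitor addresses;
# the fsid must match the one declared in /etc/ceph/ceph.conf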
</code><code>
monmaptool --print /tmp/monmap
scp /tmp/monmap node4:/tmp/
scp /tmp/monmap node5:/tmp/
</code><code>
node3# bash -c '
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node3
ssh node4 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node4
ssh node5 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node5
'
</code><code>
node3# bash -c '
sudo -u ceph ceph-mon --mkfs -i node3 --monmap /tmp/monmap
ssh node4 sudo -u ceph ceph-mon --mkfs -i node4 --monmap /tmp/monmap
ssh node5 sudo -u ceph ceph-mon --mkfs -i node5 --monmap /tmp/monmap
'
</code><code>
node3# ls -l /var/lib/ceph/mon/ceph-node3
</code><code>
node3# bash -c '
systemctl start ceph-mon@node3
systemctl enable ceph-mon@node3
ssh node4 systemctl start ceph-mon@node4
ssh node4 systemctl enable ceph-mon@node4
ssh node5 systemctl start ceph-mon@node5
ssh node5 systemctl enable ceph-mon@node5
'
</code>

  * [[https://www.suse.com/support/kb/doc/?id=000019960|Cluster status shows a "mons are allowing insecure global_id reclaim" health warning]]

<code>
debian11/ubuntu# ceph mon enable-msgr2
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim false
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
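# the two settings above only mute the health warning; the strict fix described
# in the KB article is to disallow insecure reclaim once all clients are updated:
# ceph config set mon auth_allow_insecure_global_id_reclaim false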
</code><code>
# ceph -s
</code><code>
MON_DOWN 1/3 mons down, quorum node4,node5
    mon.node3 (rank 0) addr 192.168.X.3:6789/0 is down (out of quorum)
</code>
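
If a monitor drops out of quorum as shown above, the quorum state can also be queried directly, for example:
<code>
node3# ceph mon stat
node3# ceph quorum_status --format json-pretty
</code>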
  
==== MGR ====

  * [[https://docs.ceph.com/en/latest/mgr/index.html|CEPH MANAGER DAEMON]]
  
<code>
node3# bash -c '
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-node3
systemctl start ceph-mgr@node3
'
</code>
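
To confirm that an active manager has registered with the cluster, its state and the enabled modules can be listed, for example:
<code>
node3# ceph mgr dump
node3# ceph mgr module ls
</code>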
  
==== OSD ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-osd/|CEPH-OSD – CEPH OBJECT STORAGE DAEMON]]
  
=== Initial configuration ===
<code>
# lsblk
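# check that the spare disk (here /dev/sdb) is present and unused on every node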
</code><code>
node3# bash -c '
ceph-volume lvm create --data /dev/sdb
ssh node4 ceph-volume lvm create --data /dev/sdb
ssh node5 ceph-volume lvm create --data /dev/sdb
'
</code>
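
Once the volumes are created, the new OSDs should come up and join the CRUSH map; a quick check, for example:
<code>
node3# ceph osd tree
node3# ceph osd stat
</code>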
  
<code>
node3# scp /etc/ceph/ceph.conf node6:/etc/ceph/
  
node3# ssh node6 ceph-volume lvm create --data /dev/sdb
  
node3# watch ceph -s
</code>
  
<code>
ceph osd crush reweight osd.2 0.00389

node3# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0   hdd       0  0.83000 4.00GiB 1.01GiB 2.99GiB 25.30 0.37   0
</code>
  
==== OSD POOL ====

  * [[https://docs.ceph.com/en/latest/rados/operations/pools/|POOLS]]
  
=== Calculating parameters and creating a POOL ===

  * [[https://docs.ceph.com/en/latest/rados/operations/placement-groups/|PLACEMENT GROUPS]]

<code>
Total PGs = OSDs * 100 / pool_size
  
5*100/3 = 166.6...
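in practice pg_num is rounded to a power of two (128 is used below),
otherwise the monitor warns that pg_num is not a power of two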
</code><code>
# ceph osd pool create test-pool1 180
  
debian11# ceph osd pool create test-pool1 128
or
debian11# ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
  
# ceph -s
</code><code>
...
    pgs:     180 active+clean
</code><code>
# ceph pg dump
  
# ceph osd lspools
</code>
=== Changing POOL parameters ===
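
Pool parameters are read and changed with the standard pool get/set commands; a minimal sketch using the pool created above (the size value here is only an example):
<code>
# ceph osd pool get test-pool1 size
# ceph osd pool set test-pool1 size 2
# ceph osd pool get test-pool1 pg_num
</code>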
==== RBD POOL ====
  
  * Reliable Autonomic Distributed Object Store = RADOS
  * [[https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/|BASIC BLOCK DEVICE COMMANDS]]
  * [[https://docs.ceph.com/en/latest/man/8/rbd/|rbd – manage rados block device (RBD) images]]
  
<code>
# ceph df
  
# rbd pool init test-pool1
  
# rbd create -p test-pool1 rbd1 --size 1G

# rbd list test-pool1

# rbd info test-pool1/rbd1
</code><code>
# rbd resize --size 3584M test-pool1/rbd1
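# after the resize, tgt on the gateway nodes is restarted (next block)
# so that iSCSI initiators see the new image size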
  
node3# bash -c '
systemctl restart tgt
ssh node4 systemctl restart tgt
ssh node5 systemctl restart tgt
'
</code>
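
For a quick local check the image can also be attached with the kernel RBD client, assuming the rbd kernel module is available on the node (the cluster below exports the image to clients over iSCSI via tgt):
<code>
node3# rbd map test-pool1/rbd1
node3# rbd showmapped
node3# rbd unmap test-pool1/rbd1
</code>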
  
<code>
scp /etc/tgt/conf.d/ceph.conf node4:/etc/tgt/conf.d/
scp /etc/tgt/conf.d/ceph.conf node5:/etc/tgt/conf.d/
</code><code>
node3# bash -c '
systemctl restart tgt
ssh node4 systemctl restart tgt
ssh node5 systemctl restart tgt
'
</code>
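
The /etc/tgt/conf.d/ceph.conf distributed above is assumed to define an iSCSI target backed by the RBD image; a minimal sketch (the IQN is illustrative, and tgt must be built with RBD support, e.g. the Debian tgt-rbd package):
<code>
<target iqn.2021-05.un.corp13:test-pool1.rbd1>
    driver iscsi
    bs-type rbd
    backing-store test-pool1/rbd1
</target>
</code>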