====== решение_ceph ======
  
  * [[https://docs.ceph.com/en/latest/install/manual-deployment/|MANUAL DEPLOYMENT]]
  * [[https://www.server-world.info/en/note?os=Debian_11&p=ceph14&f=1|Debian 11 : Ceph Nautilus]]
  
==== Installation ====
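
A minimal package installation sketch (an assumption based on the Debian 11 guide linked above; package names may differ by release):

<code>
# apt update
# apt install ceph
</code>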
  
==== MON ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-mon/|CEPH-MON – CEPH MONITOR DAEMON]]
  
=== Initial configuration ===
 fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
 mon initial members = node3,node4,node5
 mon host = 192.168.X.3,192.168.X.4,192.168.X.5
 public network = 192.168.X.0/24
 auth cluster required = none
 auth service required = none
scp /etc/ceph/ceph.conf node5:/etc/ceph/
</code><code>
monmaptool --create --add node3 192.168.X.3 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool --add node4 192.168.X.4 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
monmaptool --add node5 192.168.X.5 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
</code><code>
monmaptool --print /tmp/monmap
scp /tmp/monmap node5:/tmp/
</code><code>
node3# bash -c '
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node3
ssh node4 sudo -u ceph mkdir /var/lib/ceph/mon/ceph-node4
'
</code><code>
node3# bash -c '
sudo -u ceph ceph-mon --mkfs -i node3 --monmap /tmp/monmap
ssh node4 sudo -u ceph ceph-mon --mkfs -i node4 --monmap /tmp/monmap
'
</code><code>
node3# ls -l /var/lib/ceph/mon/ceph-node3
</code><code>
node3# bash -c '
systemctl start ceph-mon@node3
systemctl enable ceph-mon@node3
ssh node5 systemctl enable ceph-mon@node5
'
</code>

  * [[https://www.suse.com/support/kb/doc/?id=000019960|Cluster status shows a "mons are allowing insecure global_id reclaim" health warning]]

<code>
debian11/ubuntu# ceph mon enable-msgr2
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim false
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
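# the linked KB's proper fix, a sketch assuming all clients have been upgraded first:
debian11/ubuntu# ceph config set mon auth_allow_insecure_global_id_reclaim false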
</code><code>
# ceph -s
</code><code>
MON_DOWN 1/3 mons down, quorum node4,node5
    mon.node3 (rank 0) addr 192.168.X.3:6789/0 is down (out of quorum)
</code>
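
A quick sketch for verifying monitor quorum with standard ceph CLI commands:

<code>
# ceph mon stat
# ceph quorum_status --format json-pretty
</code>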
  
==== MGR ====

  * [[https://docs.ceph.com/en/latest/mgr/index.html|CEPH MANAGER DAEMON]]
  
<code>
node3# bash -c '
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-node3
systemctl start ceph-mgr@node3
  
==== OSD ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-osd/|CEPH-OSD – CEPH OBJECT STORAGE DAEMON]]
  
=== Initial configuration ===
<code>
# lsblk
</code><code>
node3# bash -c '
ceph-volume lvm create --data /dev/sdb
ssh node4 ceph-volume lvm create --data /dev/sdb
ceph osd crush reweight osd.2 0.00389
  
node3# ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
 0   hdd       0  0.83000 4.00GiB 1.01GiB 2.99GiB 25.30 0.37   0
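
A quick sketch for checking overall OSD state with standard ceph CLI commands:

<code>
# ceph osd tree
# ceph osd stat
</code>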
  
==== OSD POOL ====

  * [[https://docs.ceph.com/en/latest/rados/operations/pools/|POOLS]]
  
=== Calculating parameters and creating a POOL ===

  * [[https://docs.ceph.com/en/latest/rados/operations/placement-groups/|PLACEMENT GROUPS]]

<code>
Total PGs = OSDs * 100 / pool_size

5*100/3 = 166.6... (round to a power of two: 128 or 256)
</code><code>
# ceph osd pool create test-pool1 180

debian11/ubuntu# ceph osd pool create test-pool1 128
or
debian11/ubuntu# ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
  
-ceph -s+ceph -s 
 +</​code><​code>​
 ... ...
     pgs:     180 active+clean     pgs:     180 active+clean
 +</​code><​code>​
 +# ceph pg dump
  
-ceph pg dump +ceph osd lspools
- +
-ceph osd lspools+
 </​code>​ </​code>​
=== Changing POOL parameters ===
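
A minimal sketch of reading and changing pool parameters (assumes the test-pool1 pool created above):

<code>
# ceph osd pool get test-pool1 size
# ceph osd pool set test-pool1 size 2
# ceph osd pool get test-pool1 pg_num
</code>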
==== RBD POOL ====
  
  * Reliable Autonomic Distributed Object Store = RADOS
  * [[https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/|BASIC BLOCK DEVICE COMMANDS]]
  * [[https://docs.ceph.com/en/latest/man/8/rbd/|rbd – manage rados block device (RBD) images]]
  
<code>
# ceph df

# rbd pool init test-pool1

# rbd create -p test-pool1 rbd1 --size 1G

# rbd list test-pool1

# rbd info test-pool1/rbd1
</code><code>
# rbd resize --size 3584M test-pool1/rbd1

node3# bash -c '
systemctl restart tgt
ssh node4 systemctl restart tgt
ssh node5 systemctl restart tgt
'
</code>
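
A sketch of accessing the image through the kernel RBD client instead of tgt (assumes /dev/rbd0 as the first mapped device):

<code>
node3# rbd map test-pool1/rbd1
node3# mkfs.ext4 /dev/rbd0
node3# mount /dev/rbd0 /mnt
</code>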
  
scp /etc/tgt/conf.d/ceph.conf node5:/etc/tgt/conf.d/
</code><code>
node3# bash -c '
systemctl restart tgt
ssh node4 systemctl restart tgt