решение_ceph
  * [[https://docs.ceph.com/en/latest/install/manual-deployment/|MANUAL DEPLOYMENT]]
  * [[https://www.server-world.info/en/note?os=Debian_11&p=ceph14&f=1|Debian 11 : Ceph Nautilus]]

==== Installation ====
  
==== MON ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-mon/|CEPH-MON – CEPH MONITOR DAEMON]]

=== Initial configuration ===
'
</code><code>
node3# ls -l /var/lib/ceph/mon/ceph-node3
</code><code>
node3# bash -c '
ssh node5 systemctl enable ceph-mon@node5
'
</code>

  * [[https://www.suse.com/support/kb/doc/?id=000019960|Cluster status shows a "mons are allowing insecure global_id reclaim" health warning]]

<code>
debian11/ubuntu# ceph mon enable-msgr2
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim false
debian11/ubuntu# ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
</code><code>
# ceph -s

==== MGR ====

  * [[https://docs.ceph.com/en/latest/mgr/index.html|CEPH MANAGER DAEMON]]

<code>
  
==== OSD ====

  * [[https://docs.ceph.com/en/latest/man/8/ceph-osd/|CEPH-OSD – CEPH OBJECT STORAGE DAEMON]]

=== Initial configuration ===
  
==== OSD POOL ====

  * [[https://docs.ceph.com/en/latest/rados/operations/pools/|POOLS]]

=== Calculating parameters and creating a POOL ===

  * [[https://docs.ceph.com/en/latest/rados/operations/placement-groups/|PLACEMENT GROUPS]]
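The rule of thumb below (~100 PGs per OSD, divided by the pool's replica count, then rounded to a power of two to avoid the mon warning) can be sketched as a tiny script. `pick_pg_num` is a hypothetical helper for illustration, not a ceph command:

```shell
# pick_pg_num OSDS POOL_SIZE
# Hypothetical helper (not a ceph command): computes OSDs * 100 / pool_size,
# then rounds to the nearest power of two, as the pg_num guidance suggests.
pick_pg_num() {
  raw=$(( $1 * 100 / $2 ))
  lo=1
  # find the largest power of two not exceeding raw
  while [ $(( lo * 2 )) -le "$raw" ]; do lo=$(( lo * 2 )); done
  hi=$(( lo * 2 ))
  # pick whichever power of two is closer to raw
  if [ $(( raw - lo )) -le $(( hi - raw )) ]; then echo "$lo"; else echo "$hi"; fi
}

pick_pg_num 5 3   # 5 OSDs, replica size 3: 166 -> 128
```

This reproduces the `128` used for the Debian 11 pool below.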
<code>
Total PGs = OSDs * 100 / pool_size

5*100/3 = 166,6...
</code><code>
# ceph osd pool create test-pool1 180

debian11# ceph osd pool create test-pool1 128
or
debian11# ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false

# ceph -s
==== RBD POOL ====

  * Reliable Autonomic Distributed Object Store = RADOS
  * [[https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/|BASIC BLOCK DEVICE COMMANDS]]
  * [[https://docs.ceph.com/en/latest/man/8/rbd/|rbd – manage rados block device (RBD) images]]
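A note on the sizes used in the commands that follow: `rbd` treats a bare `--size` number as MiB, so `--size 1024` and `--size 1G` describe the same image. With the default 4 MiB object size, a 1G image is striped over 256 RADOS objects; a quick arithmetic sketch:

```shell
# A 1G image striped into the default 4 MiB RBD objects
image_mb=1024   # rbd --size with no unit means MiB, so 1024 == 1G
object_mb=4     # default object size (order 22, i.e. 2^22 bytes)
echo "$(( image_mb / object_mb )) objects"   # 256 objects
# likewise the resize target 3584M below is 3.5 * 1024M = 3.5G
```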
  
<code>
ceph df

# rbd pool init test-pool1

# rbd create -p test-pool1 rbd1 --size 1G

rbd list test-pool1

rbd info test-pool1/rbd1
</code><code>
rbd resize --size 3584M test-pool1/rbd1

node3# bash -c '
решение_ceph.txt · Last modified: 2022/05/23 09:02 by val