ceph health
HEALTH_WARN 1 near full osd(s)
Aha, time to try a small tweak to that OSD's weight. Balancing the load across OSDs looks simple, but things may well not go the way we expect…
Increasing an OSD's weight
Before doing anything, let's save the current PG map.
$ ceph pg dump > /tmp/pg_dump.1
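Keeping this dump around makes it easy to compare the up and acting sets before and after the reweight. If you want to prepare that comparison right away, a filter like the one below works (a rough sketch; the column numbers match the awk calls used later in this post and can differ between Ceph releases):
$ awk '/^[0-9]+\./ {print $1,$9,$14,$15}' /tmp/pg_dump.1 > /tmp/pg_map.1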
Let's take it slowly and start by increasing the weight of osd.13 by 0.05.
$ ceph osd tree | grep osd.13
13 3 osd.13 up 1
$ ceph osd crush reweight osd.13 3.05
reweighted item id 13 name 'osd.13' to 3.05 in crush map
$ ceph osd tree | grep osd.13
13 3.05 osd.13 up 1
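A quick note before going further: ceph osd crush reweight changes the CRUSH weight stored in the CRUSH map (usually sized to the disk capacity in TB), which is not the same thing as ceph osd reweight, a temporary override between 0 and 1 applied on top of it. Only the former is used in this post; the second line below is shown purely for comparison:
$ ceph osd crush reweight osd.13 3.05   # persistent CRUSH weight, stored in the crush map
$ ceph osd reweight 13 0.95             # temporary 0..1 override, not used in this post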
The new weight for osd.13 is now in effect in the CRUSH map. Let's see what happens in the cluster.
$ ceph health detail
HEALTH_WARN 2 pgs backfilling; 2 pgs stuck unclean; recovery 16884/9154554 degraded (0.184%)
pg 3.183 is stuck unclean for 434.029986, current state active remapped backfilling, last acting [1,13,5]
pg 3.83 is stuck unclean for 2479.504088, current state active remapped backfilling, last acting [5,13,12]
pg 3.183 is active remapped backfilling, acting [1,13,5]
pg 3.83 is active remapped backfilling, acting [5,13,12]
recovery 16884/9154554 degraded (0.184%)
As you can see, PGs 3.183 and 3.83 are now in the active remapped backfilling state:
$ ceph pg map 3.183
osdmap e4588 pg 3.183 (3.183) -> up [1,13] acting [1,13,5]
$ ceph pg map 3.83
osdmap e4588 pg 3.83 (3.83) -> up [13,5] acting [5,13,12]
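If you want more detail on what a single PG is doing, you can query it directly; the recovery_state section of the output describes the backfill in progress (optional, just a way to poke around):
$ ceph pg 3.183 query | less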
In this example, we can see that osd.13 has been added to both PGs, while PGs 3.183 and 3.83 are being moved off osd 5 and osd 12 respectively. Looking at the OSD bandwidth, we can see these migrations: osd.1 -> osd.13 and osd.5 -> osd.13.
OSDs 1 and 5 are the primaries for PGs 3.183 and 3.83 (as shown in the acting sets), and OSD 13 is the one writing the data.
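Rather than watching ceph -w by hand, you can also poll the health summary until the backfilling warnings shown above disappear (a minimal sketch):
$ while ceph health | grep -q backfilling; do sleep 10; done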
Once the cluster has finished these operations, run:
$ ceph pg dump > /tmp/pg_dump.3
Let's look at what changed.
# Old map
$ egrep '^(3.183|3.83)' /tmp/pg_dump.1 | awk '{print $1,$9,$14,$15}'
3.183 active clean [1,5] [1,5]
3.83 active clean [12,5] [12,5]
# New map
$ egrep '^(3.183|3.83)' /tmp/pg_dump.3 | awk '{print $1,$9,$14,$15}'
3.183 active clean [1,13] [1,13]
3.83 active clean [13,5] [13,5]
So for PGs 3.183 and 3.83, OSDs 5 and 12 have been replaced by osd.13.
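To list every PG that moved, not just these two, you can diff the two dumps using the same (release-dependent) columns:
$ awk '/^[0-9]+\./ {print $1,$14,$15}' /tmp/pg_dump.1 | sort > /tmp/before
$ awk '/^[0-9]+\./ {print $1,$14,$15}' /tmp/pg_dump.3 | sort > /tmp/after
$ diff /tmp/before /tmp/after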
Decreasing an OSD's weight
Same as before, but this time we decrease the weight of the OSD that hit the near full ratio.
$ ceph pg dump > /tmp/pg_dump.4
$ ceph osd tree | grep osd.7
7 2.65 osd.7 up 1
$ ceph osd crush reweight osd.7 2.6
reweighted item id 7 name 'osd.7' to 2.6 in crush map
$ ceph health detail
HEALTH_WARN 2 pgs backfilling; 2 pgs stuck unclean; recovery 17117/9160466 degraded (0.187%)
pg 3.ca is stuck unclean for 1097.132237, current state active remapped backfilling, last acting [4,6,7]
pg 3.143 is stuck unclean for 1097.456265, current state active remapped backfilling, last acting [12,6,7]
pg 3.143 is active remapped backfilling, acting [12,6,7]
pg 3.ca is active remapped backfilling, acting [4,6,7]
recovery 17117/9160466 degraded (0.187%)
Looking at the OSD bandwidth, we can see these migrations: osd.4 -> osd.6 and osd.12 -> osd.6.
OSDs 4 and 12 are the primaries for PGs 3.ca and 3.143 respectively (as shown in the acting sets), and OSD 6 is the one writing the data. A copy of each PG is placed on OSD 6, which relieves OSD 7. In my case, since osd.7 only held replica copies of these two PGs, there are no reads from osd.7.
# Before
$ egrep '^(3.ca|3.143)' /tmp/pg_dump.3 | awk '{print $1,$9,$14,$15}'
3.143 active clean [12,7] [12,7]
3.ca active clean [4,7] [4,7]
# After
$ ceph pg dump > /tmp/pg_dump.5
$ egrep '^(3.ca|3.143)' /tmp/pg_dump.5 | awk '{print $1,$9,$14,$15}'
3.143 active clean [12,6] [12,6]
3.ca active clean [4,6] [4,6]
Hmm, clearly the data still isn't sitting on the OSDs where it should be, and the OSD is still close to full. I guess reaching a balanced state will take some time.
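To get a rough idea of how unbalanced the placement still is, you can count how many PGs each OSD appears in; here is a quick sketch over the latest dump (again assuming the same column layout as the awk calls above):
$ awk '/^[0-9]+\./ {gsub(/\[|\]/,"",$14); n=split($14,a,","); for(i=1;i<=n;i++) c[a[i]]++} END {for(o in c) print "osd." o, c[o]}' /tmp/pg_dump.5 | sort -t. -k2 -n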
Using crushtool
We can check our assumptions with crushtool and its --show-utilization option.
First, grab the current CRUSH map:
$ ceph osd getcrushmap -o crushmap.bin
You can then check the utilization for a given pool, based on its CRUSH ruleset and replica count:
$ ceph osd dump | grep '^pool 0'
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
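The same two values can also be queried directly, which is handy in a script (on this release the variable should be called crush_ruleset; recent releases renamed it to crush_rule):
$ ceph osd pool get data size
$ ceph osd pool get data crush_ruleset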
$ crushtool --test -i crushmap.bin --show-utilization --rule 0 --num-rep=2
device 0: 123
device 1: 145
device 2: 125
device 3: 121
device 4: 139
device 5: 133
device 6: 129
device 7: 142
device 8: 146
device 9: 139
device 10: 146
device 11: 143
device 12: 129
device 13: 136
device 14: 152
After editing, test the new weights:
$ crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt
$ crushtool -c crushmap.txt -o crushmap-new.bin
$ crushtool --test -i crushmap-new.bin --show-utilization --rule 0 --num-rep=2
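To make the effect of the edit obvious, you can dump both utilizations to files and diff them:
$ crushtool --test -i crushmap.bin --show-utilization --rule 0 --num-rep=2 > /tmp/util.old
$ crushtool --test -i crushmap-new.bin --show-utilization --rule 0 --num-rep=2 > /tmp/util.new
$ diff /tmp/util.old /tmp/util.new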
If everything looks good, inject the new CRUSH map:
$ ceph osd setcrushmap -i crushmap-new.bin
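Then check that the new weights are in effect and watch the resulting data movement until the cluster is healthy again:
$ ceph osd tree
$ ceph -w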