Status:


Setting           Value
Collected at      2017-03-25 18:20:48
Collected at GMT  2017-03-25 16:20:48
Status            HEALTH_OK
PG count          104
Pool count        3
Used              799.3 M
Avail             71.2 G
Data              3.8 G
Free %            98
Mon count         1
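
Most of the headline numbers in this table come straight from the monitor. A minimal sketch of pulling them with the ceph CLI (assumes the admin keyring is readable; the JSON field names shown are Jewel-era and may differ in other releases):

    import json
    import subprocess

    def ceph_json(*args):
        # Run a ceph CLI command and decode its JSON output.
        out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
        return json.loads(out)

    status = ceph_json("status")
    df = ceph_json("df")

    pgmap = status.get("pgmap", {})
    print("Status:    ", status.get("health", {}).get("overall_status"))
    print("PG count:  ", pgmap.get("num_pgs"))
    print("Used:      ", pgmap.get("bytes_used"))
    print("Avail:     ", pgmap.get("bytes_avail"))
    print("Data:      ", pgmap.get("data_bytes"))
    print("Pool count:", len(df.get("pools", [])))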

OSD:


Setting              Value
Count                4
PG per OSD           26
Cluster net          192.168.122.0/24
Public net           192.168.122.0/24
Near full ratio      0.85
Full ratio           0.95
Backfill full ratio  0.85
Failsafe full ratio  0.97
Journal aio          true
Journal dio          true
Filestore sync       5s
Monmap version       1
Monmap modified in   546:58:58
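
The threshold and journal rows above map onto standard Ceph config options (mon_osd_nearfull_ratio, mon_osd_full_ratio, osd_backfill_full_ratio, osd_failsafe_full_ratio, journal_aio, journal_dio, filestore_max_sync_interval). A rough way to read them back from a running OSD over its admin socket, run on the OSD host (option names are the Jewel-era ones; verify against your release with `ceph daemon osd.N config show`):

    import json
    import subprocess

    OPTIONS = [
        "mon_osd_nearfull_ratio",       # Near full ratio
        "mon_osd_full_ratio",           # Full ratio
        "osd_backfill_full_ratio",      # Backfill full ratio
        "osd_failsafe_full_ratio",      # Failsafe full ratio
        "journal_aio",                  # Journal aio
        "journal_dio",                  # Journal dio
        "filestore_max_sync_interval",  # Filestore sync
    ]

    def osd_config_get(osd_id, option):
        # Query a single option via the OSD admin socket.
        out = subprocess.check_output(
            ["ceph", "daemon", "osd.%d" % osd_id, "config", "get", option])
        return json.loads(out)[option]

    for opt in OPTIONS:
        print(opt, "=", osd_config_get(0, opt))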

Activity:


Setting         Value
Client IO Bps   0
Client IO IOPS  0

Hosts info:


Name             Services       CPUs  RAM total  RAM free  Swap used  Load avg 5m  Conn (tcp/udp)
ceph-mon         mon(ceph-mon)  1     992.2 Mi   181.9 Mi  0          0.02         10/7
ceph-osd0        osd-0          1     2.0 Gi     1.1 Gi    0          0.02         3/7
ceph-osd1        osd-1          1     2.0 Gi     1.1 Gi    96 Ki      0.08         21/7
ceph-osd2        osd-3          1     2.0 Gi     1.1 Gi    0          0.02         1/7
ceph-osd3        osd-2          1     2.0 Gi     1.1 Gi    23.5 Mi    0.02         1/7
koder-precision  -              8     31.1 Gi    3.7 Gi    2.4 Gi     1.12         132/50

Monitors info:


Name      Status     Role  Disk free, B (%)
ceph-mon  HEALTH_OK  None  31.1 G (89)

OSD states:


Status  Count  IDs
up      4      -

OSD info:


OSD  node       status  version [hash]     daemon run  open files  ip conn  threads  weight default  weight extra  reweight  PG count  Storage used  Storage free  Storage free %  Journal colocated / on SSD / on file
0    ceph-osd0  up      10.2.5 [c461ee19]  yes         171         30       98       0.50            0.60          1.00      72        190.5 M       17.8 G        99              yes / no / no
1    ceph-osd1  up      10.2.5 [c461ee19]  yes         171         28       96       0.50            0.60          1.00      77        200.6 M       17.8 G        99              yes / no / no
2    ceph-osd3  up      10.2.5 [c461ee19]  yes         171         28       96       0.50            0.50          1.00      85        207.7 M       17.8 G        99              yes / no / no
3    ceph-osd2  up      10.2.5 [c461ee19]  yes         175         28       104      0.50            0.60          1.00      78        200.4 M       17.8 G        99              yes / no / no

OSD current load:


OSD  node       journal lat, ms  apply lat, ms  commit lat, ms  D dev  D read Bps/OPS  D write Bps/OPS  D lat, ms  D IO time %  J dev / J write / J lat / J IO time %
0    ceph-osd0  23               23             102             vdb    0 / 0           363.0 K / 54     11         61           <-- journal on data device
1    ceph-osd1  22               23             170             vdb    0 / 0           325.9 K / 49     11         57           <-- journal on data device
2    ceph-osd3  25               26             152             vdb    0 / 0           305.0 K / 44     11         50           <-- journal on data device
3    ceph-osd2  27               27             129             vdb    0 / 0           321.0 K / 48     12         57           <-- journal on data device

Pool stats:


Pool  Id  size  min_size  obj    data     free  read   write    ruleset  PG(PGP)  Bytes/PG  Objs/PG  PG per OSD ~ Dev %
rbd   0   3     2         1.0 K  3.8 G    ---   1.3 M  728.6 M  0        64       60.7 Mi   16       48.0 ~ 6%
test  1   3     2         7.9 K  841.0 K  ---   3 K    11.3 M   0        8        105.1 Ki  988      6.0 ~ 36%
rbd2  3   3     2         0      0        ---   0      0        0        32       0         0        24.0 ~ 12%
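
The last column is simply each pool's PG count multiplied by its replication size and spread over the OSDs, with the deviation showing how unevenly those copies actually landed. A quick check against the numbers above (the helper below is illustrative, not part of the report tool):

    # Expected PG copies per OSD for each pool: pg_num * size / osd_count.
    OSD_COUNT = 4
    POOLS = {            # pool: (pg_num, replication size)
        "rbd":  (64, 3),
        "test": (8, 3),
        "rbd2": (32, 3),
    }

    for name, (pg_num, size) in POOLS.items():
        per_osd = pg_num * size / float(OSD_COUNT)
        print("%-5s %5.1f PG copies per OSD" % (name, per_osd))
    # rbd: 64*3/4 = 48.0, test: 8*3/4 = 6.0, rbd2: 32*3/4 = 24.0,
    # matching the "PG per OSD" column above.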

PG status:


Status  Count  %
any     104    100.00
active  104    100.00
clean   104    100.00

PG copies per OSD:


OSD/pool  rbd  rbd2  test  sum
0         48   21    3     72
1         44   27    6     77
2         52   26    7     85
3         48   22    8     78
sum       192  96    24    312

Current disk IO load


host       dev  IOPS  Read IOPS  Write IOPS  Bps      Read Bps  Write Bps  Latency, ms  Avg QD  Active time %
ceph-osd0  vdb  54.6  0          54.6        363.0 K  0         363.0 K    11           0.6     61.0
ceph-osd1  vdb  49.7  0          49.7        325.9 K  0         325.9 K    11           0.6     57.0
ceph-osd2  vdb  48.7  0          48.7        321.0 K  0         321.0 K    12           0.6     57.0
ceph-osd3  vdb  44.3  0          44.3        305.0 K  0         305.0 K    11           0.5     50.0
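
Per-device IOPS, bandwidth, queue depth and active-time figures like these are typically derived by sampling /proc/diskstats twice and differencing the counters. A rough sketch of that calculation (field offsets follow the standard /proc/diskstats layout; the one-second interval and the device name are arbitrary):

    import time

    def read_diskstats(dev):
        # Relevant /proc/diskstats fields after (major, minor, name):
        #  [0] reads completed   [2] sectors read     [3] ms spent reading
        #  [4] writes completed  [6] sectors written  [7] ms spent writing
        #  [9] ms doing I/O      [10] weighted ms doing I/O
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == dev:
                    return [int(x) for x in parts[3:]]
        raise ValueError("device not found: %s" % dev)

    def sample(dev, interval=1.0):
        a = read_diskstats(dev)
        time.sleep(interval)
        b = read_diskstats(dev)
        d = [after - before for before, after in zip(a, b)]
        ios = d[0] + d[4]                                          # completed reads + writes
        print("IOPS           %.1f" % (ios / interval))
        print("Write Bps      %.1f" % (d[6] * 512 / interval))     # sectors are 512 bytes
        print("Latency, ms    %.1f" % ((d[3] + d[7]) / float(ios) if ios else 0.0))
        print("Avg QD         %.2f" % (d[10] / (interval * 1000)))  # like iostat avgqu-sz
        print("Active time %%  %.1f" % (d[9] / (interval * 1000) * 100))

    sample("vdb")   # device name taken from the table above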

Network load


Send (KiBps/Pps)


host             cluster  public   hw adapter         hw adapter
ceph-mon         3/4      3/4      -                  -
ceph-osd0        130/102  130/102  -                  -
ceph-osd1        95/81    95/81    -                  -
ceph-osd2        57/72    57/72    -                  -
ceph-osd3        123/89   123/89   -                  -
koder-precision  -        -        enp0s31f6: 24/341  wlp2s0: 0/0

Receive (KiBps/Pps)


host             cluster  public   hw adapter          hw adapter
ceph-mon         0/5      0/5      -                   -
ceph-osd0        146/103  146/103  -                   -
ceph-osd1        134/79   134/79   -                   -
ceph-osd2        133/64   133/64   -                   -
ceph-osd3        117/94   117/94   -                   -
koder-precision  -        -        enp0s31f6: 955/676  wlp2s0: 0/0
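
The per-interface KiBps/Pps pairs can be obtained the same way as the disk numbers, by differencing two samples of /proc/net/dev (a sketch under the same assumptions as above; the interface name and interval are arbitrary):

    import time

    def read_netdev(iface):
        # /proc/net/dev lists, per interface, 8 receive counters then 8 transmit counters.
        with open("/proc/net/dev") as f:
            for line in f:
                if ":" not in line:
                    continue
                name, rest = line.split(":", 1)
                if name.strip() == iface:
                    v = [int(x) for x in rest.split()]
                    return v[0], v[1], v[8], v[9]   # rx_bytes, rx_packets, tx_bytes, tx_packets
        raise ValueError("interface not found: %s" % iface)

    def sample(iface, interval=1.0):
        a = read_netdev(iface)
        time.sleep(interval)
        b = read_netdev(iface)
        rx_b, rx_p, tx_b, tx_p = [(after - before) / interval for before, after in zip(a, b)]
        print("Send     %.0f KiBps / %.0f Pps" % (tx_b / 1024, tx_p))
        print("Receive  %.0f KiBps / %.0f Pps" % (rx_b / 1024, rx_p))

    sample("enp0s31f6")   # interface name taken from the table above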

Per-OSD history charts (titles only, plots not reproduced in this text export): OSD used space (GiB); OSD PG; OSD data/journal write (MiBps); OSD data/journal write IOPS; OSD data/journal QD; Journal latency, ms; Apply latency, ms; Commit cycle latency, ms; Ceph OPS time.