Ceph: BlueStore BlueFS Spillover Internals
Issue
- Ceph - BlueStore BlueFS Spillover Internals
- RocksDB data spills over to the slow HDD device during compaction
- What do the counters slow_total_bytes and slow_used_bytes in the BlueStore perf dump mean?
From the "ceph daemon osd.x perf dump" command:

    "bluefs": {
        "gift_bytes": 0,
        "reclaim_bytes": 0,
        "db_total_bytes": 32212246528,
        "db_used_bytes": 15892217856,
        "wal_total_bytes": 0,
        "wal_used_bytes": 0,
        "slow_total_bytes": 160009551872,
        "slow_used_bytes": 137363456,
        "num_files": 261,
        "log_bytes": 8749056,
        "log_compactions": 1,
        "logged_bytes": 369836032,
        "files_written_wal": 2,
        "files_written_sst": 607,
        "bytes_written_wal": 12874543619,
        "bytes_written_sst": 36489876338
    },
From the "ceph osd metadata osd.x" command:

    "bluefs": "1",
    "bluefs_db_block_size": "4096",
    "bluefs_db_rotational": "0",
    "bluefs_db_size": "32212254720",
    "bluefs_db_type": "ssd",
    "bluefs_single_shared_device": "0",
    "bluefs_slow_block_size": "4096",
    "bluefs_slow_rotational": "1",
    "bluefs_slow_size": "4000220971008",
    "bluefs_slow_type": "hdd",
    "bluestore_bdev_access_mode": "blk",
    "bluestore_bdev_block_size": "4096",
    "bluestore_bdev_rotational": "1",
    "bluestore_bdev_size": "4000220971008",
    "bluestore_bdev_type": "hdd",
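In these counters, slow_total_bytes is the space BlueFS may use on the slow (main HDD) device, and slow_used_bytes is how much of it BlueFS currently occupies; a nonzero slow_used_bytes means RocksDB data has spilled over from the dedicated SSD DB device onto the HDD. As a minimal sketch of reading these fields, the following script (the helper name check_spillover is hypothetical, not part of Ceph) summarizes the perf dump shown above:

```python
import json

def check_spillover(perf_dump: dict) -> str:
    """Summarize BlueFS device usage from a 'ceph daemon osd.x perf dump'.

    Hypothetical helper for illustration: reports DB device utilization
    and flags any RocksDB data residing on the slow (HDD) device.
    """
    bluefs = perf_dump["bluefs"]
    db_pct = 100.0 * bluefs["db_used_bytes"] / bluefs["db_total_bytes"]
    slow_used = bluefs["slow_used_bytes"]
    if slow_used > 0:
        # Any bytes on the slow device mean spillover has occurred.
        return (f"SPILLOVER: {slow_used / 2**20:.1f} MiB of RocksDB data on "
                f"the slow device (DB device {db_pct:.1f}% used)")
    return f"OK: no spillover (DB device {db_pct:.1f}% used)"

# Values taken from the perf dump output shown above
sample = json.loads("""{"bluefs": {
    "db_total_bytes": 32212246528,
    "db_used_bytes": 15892217856,
    "slow_total_bytes": 160009551872,
    "slow_used_bytes": 137363456}}""")

print(check_spillover(sample))
# → SPILLOVER: 131.0 MiB of RocksDB data on the slow device (DB device 49.3% used)
```

For this OSD, 137363456 bytes (131 MiB) of BlueFS data already sit on the HDD even though the SSD DB partition is only about half full, which is the spillover behavior this article describes.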
Environment
Red Hat Ceph Storage (RHCS) 3.2
Red Hat Ceph Storage (RHCS) 3.3
Red Hat Ceph Storage (RHCS) 4.x