Poor read hit ratio #189
Comments
Can you paste the output of 'dmsetup table' and 'dmsetup status' here?
Here you go:

dmsetup table output:
2014-10-16 01:09:15.600807 osd.7 192.168.168.200:6836/32031 100 : [WRN] slow request 30.999641 seconds old, received at 2014-10-16 01:08:44.601040: osd_op(client.36035566.0:16626375 rbd_data.51da686763845e #################

dmsetup status output:
osd_0_CACHE: 0 5818588047 flashcache stats:
##################

Many thanks
I don't see anything obviously broken in the flashcache configuration. The vast majority of the IO requests coming into flashcache are 4K, so they should be cacheable. How large is the disk volume that you are caching? And what is your workload (do you expect it to be cacheable, and do you expect a healthy cache hit rate)? As an experiment, you could turn off sequential skipping completely and see whether that has any impact on the cache hit rate.
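For that experiment, flashcache's sequential-skip threshold can be changed at runtime through its per-cache sysctl; a value of 0 disables sequential skipping. A sketch, assuming the cache name `osd_0_CACHE` from the dmsetup status output in this thread:

```shell
# Disable sequential skipping so sequential IO is cached too.
# The sysctl is per cache device; substitute your own cache name.
sysctl -w dev.flashcache.osd_0_CACHE.skip_seq_thresh_kb=0
```

Re-check the hit counters after some hours of normal load before drawing conclusions, since the cumulative ratio moves slowly.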
My hdds are 3TB in size and each hdd gets between 120 and 160 GB of ssd space. The hdds are about 40% full. In terms of usage, it's really a mixed scenario with a combination of web/sql/os data. I've got around 50-60 vms, which are not heavily loaded. I will decrease the sequential threshold and see if it helps. My guess is that it will, as EnhanceIO doesn't have that capability and otherwise has a lot of similarities with flashcache, as far as I've read. The reason for enabling the sequential threshold is that ceph performs regular scrubbing on the hdds, which pushes the hot data out of the cache. The scrubbing itself looks like sequential activity, so I was hoping to minimise how much hot data it pushes out.
A hit ratio of only 20% doesn't improve performance much (think of Amdahl's law). I think neither flashcache nor EnhanceIO is the best solution for your workload.
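To see why, a quick Amdahl-style estimate with hypothetical numbers: a 20% hit ratio, with SSD reads taken as 10x faster than HDD reads (both figures are illustrative, not measured from this cluster):

```shell
# Effective read speedup = 1 / ((1 - h) + h / s), where h is the cache
# hit ratio and s the SSD-vs-HDD read speedup. Numbers are illustrative.
awk 'BEGIN { h = 0.20; s = 10; printf "effective speedup: %.2fx\n", 1 / ((1 - h) + h / s) }'
# prints "effective speedup: 1.22x"
```

Even with a very fast SSD, 80% of reads still hit the slow disk, so the overall gain stays close to 1x.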
What kind of workload does flashcache / enhanceio work well with? |
I just installed flashcache and I have a 6% read hit ratio so far on a very loaded box. How long does the cache take to build? What is a good read hit rate? |
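The cumulative read hit rate can be pulled out of `dmsetup status` directly; flashcache reports `reads(...)` and `read hits(...)` counters in its stats block. A rough sketch, assuming those counter names (they may vary by version) and the cache name `osd_0_CACHE` seen earlier in this thread:

```shell
# Compute the cumulative read hit ratio from flashcache's dmsetup
# status counters. Counter names may differ across versions.
dmsetup status osd_0_CACHE | tr ',' '\n' |
  awk -F'[()]' '/read hits/ {h=$2} /reads/ {r=$2} END { if (r) printf "read hit ratio: %.1f%%\n", 100 * h / r }'
```

Note these counters accumulate from cache creation, so the ratio on a freshly built cache will understate the steady-state rate.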
Thank you for reporting this issue and appreciate your patience. We've notified the core team for an update on this issue. We're looking for a response within the next 30 days or the issue may be closed. |
Hello guys,
I seem to be getting a very poor read hit percentage for my flashcache setup. According to dmsetup status I am getting between 5-8% read hits. As an example, with default EnhanceIO caching on the same cluster I was getting over a 20% hit ratio.
I've got a small ceph cluster with 16 hdds and 4 ssds that I use for flashcache. I am using the Write Around caching mode with the following additional sysctl tuning options:
reclaim_policy = 1
skip_seq_thresh_kb = 4096
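A sketch of how these tunables are typically applied at runtime, assuming flashcache's per-cache sysctl tree under /proc/sys/dev/flashcache/ and the cache name osd_0_CACHE from the dmsetup status output above:

```shell
# Apply the tunables to a running cache (paths are per cache device;
# osd_0_CACHE is this cluster's cache name, substitute your own).
echo 1    > /proc/sys/dev/flashcache/osd_0_CACHE/reclaim_policy      # 1 = LRU, 0 = FIFO
echo 4096 > /proc/sys/dev/flashcache/osd_0_CACHE/skip_seq_thresh_kb  # skip sequential runs over 4 MB
```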
My hdds are formatted using XFS with default ceph-deploy settings which are:
-f -i size=2048
The XFS file system is mounted like this:
rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M
What can I do to improve the read hit ratio?
Thanks
Andrei