$ sync ; time sh -c "dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync" ; rm /tmp/testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.326029 s, 322 MB/s
real 0m0.342s
user 0m0.004s
sys 0m0.332s
$ sync ; time sh -c "dd if=/tmp/testfile of=/dev/null bs=4k count=1k && sync" ; rm /tmp/testfile
1024+0 records in
1024+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00974453 s, 430 MB/s
real 0m0.024s
user 0m0.004s
sys 0m0.020s
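One caveat with the read test above: since /tmp/testfile was just written, the 4 MiB read is probably served from the page cache rather than the disk, which inflates the throughput figure. A minimal sketch of a variant that drops the page cache first so the read actually hits the disk (assumes root access for /proc/sys/vm/drop_caches; the file path and sizes are illustrative):

$ dd if=/dev/zero of=/tmp/testfile bs=100k count=1k && sync   # recreate the 100 MiB test file
$ echo 3 | sudo tee /proc/sys/vm/drop_caches                  # drop the page cache so the read hits the disk
$ time sh -c "dd if=/tmp/testfile of=/dev/null bs=100k count=1k"
$ rm /tmp/testfile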
The issue is that many cloud providers, when provisioning smaller disks, give you lower IOPS and throughput because those disks are small slivers of larger SANs running in the background. This can kill Neo4j performance. We need some testing and docs on this topic for guidance.
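dd only exercises sequential throughput, so it may also be worth documenting a random-I/O test: small cloud volumes are often capped on IOPS rather than bandwidth, and random reads are closer to Neo4j's access pattern when its page cache misses (bs=8k roughly matches Neo4j's 8 KiB store page size). A minimal sketch using fio; the file path, size, and runtime are illustrative, and --filename should point at the volume that will hold the Neo4j store:

$ fio --name=randread --filename=/mnt/neo4j-data/fio-testfile --rw=randread --bs=8k \
      --size=1G --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based --group_reporting
$ rm /mnt/neo4j-data/fio-testfile

The read IOPS figure fio reports can then be compared against what the provider advertises for that volume size.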