
Different Drive Sizes Not Balanced Fully #50

Open
srfn8kd opened this issue Feb 7, 2023 · 2 comments

Comments


srfn8kd commented Feb 7, 2023

Currently I am experiencing the following.

Using the -o option to send data to multiple destination partitions of different sizes, n2disk fills the first one up to the limit, then starts rotating data on all drives. Below is the state of the output drives after /data4 reached the 97% limit set by the execution options:

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sde1   12T   12T   411G   97%   /data4
/dev/sdf1   29T   12T   17T    42%   /data5
/dev/sdc1   53T   12T   41T    23%   /data2
/dev/sdd1   53T   12T   41T    23%   /data3

I found that /data2, /data3, and /data5 never grow beyond the sizes shown: all of them stay at the 12T ceiling imposed by /data4. I reached this conclusion because the output above was captured quite a while after /data4 hit 97%, and because repeated runs of df show bytes being deleted and then rewritten on the disks that are not yet full.
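One plausible explanation for this plateau (an assumption on my part, not confirmed from the n2disk source): if the writer rotates volumes strictly round-robin and starts recycling old data on the volume it is about to write to as soon as any volume hits the disk limit, then every volume's usage caps out at the smallest volume's ceiling, which matches the df output above. A minimal Python model of that hypothesis:

```python
# Hypothetical model of the suspected rotation behavior (NOT n2disk's
# actual algorithm): uniform round-robin writes, with oldest-data
# recycling triggered as soon as ANY volume reaches the disk limit.
def simulate(capacities_tb, limit=0.97, chunk_tb=0.1, steps=4000):
    used = [0.0] * len(capacities_tb)
    for step in range(steps):
        i = step % len(capacities_tb)  # uniform round-robin over volumes
        if any(u >= c * limit for u, c in zip(used, capacities_tb)):
            used[i] -= chunk_tb        # recycle an old chunk before writing
        used[i] += chunk_tb            # write the new chunk
    return used

# Capacities from the df output above: 12T, 29T, 53T, 53T.
print(simulate([12, 29, 53, 53]))
```

Under this model every volume converges to roughly 97% of 12T of used space, regardless of its own capacity, because the smallest volume trips the limit first and recycling then keeps pace with writing on all volumes equally.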

This is how I start it:

/usr/bin/stdbuf -oL /usr/bin/n2disk --syslog --daemon -i fbcard:0:b00 -P /var/run/n2disk.pid -o /data3/pcap/ -o /data2/pcap -o /data4/pcap -o /data5/pcap -A /data2/timeline --disk-limit 97% -b 16384 -p 2048 -C 16384 -q 1 -c 34 -w 36,38,44,46 --index -3 -Z -z 48,50,52,54
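To confirm whether the larger volumes ever grow past the plateau, usage can be sampled periodically instead of eyeballing df by hand. A small Python sketch (the mount points are the ones from the df output above; the function name and parameters are my own, not part of n2disk):

```python
import shutil
import time

# Poll per-volume used bytes so the plateau can be verified over time.
def sample_usage(paths, interval_s=300, samples=3):
    history = []
    for _ in range(samples):
        history.append({p: shutil.disk_usage(p).used for p in paths})
        time.sleep(interval_s)
    return history

# Example with "/" so it runs anywhere; substitute /data2 .. /data5:
print(sample_usage(["/"], interval_s=0, samples=1))
```

Comparing successive snapshots for /data2, /data3, and /data5 shows directly whether their used bytes are increasing or merely churning in place.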

n2disk v.3.6.230113 (r5273)


srfn8kd commented Feb 13, 2023

I let the 54T drives fill up to the 97% limit, then updated the configuration to write out to the two smaller drives (28T and 12T); balancing across all the drives now seems to be working as intended. The smaller drives are filling slowly while the large drives have maintained 97% capacity.


srfn8kd commented Feb 14, 2023

OK, now the large drives have been reduced to 23% usage after /data2 became 97% full, and we're back to where we were, with the large drives not being utilized fully. I will let it run overnight in this configuration and see whether utilization of the large drives increases at all.

Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sde1   12T   12T   289G   98%   /data4
/dev/sdf1   29T   12T   17T    42%   /data5
/dev/sdc1   53T   12T   41T    23%   /data2
/dev/sdd1   53T   13T   41T    23%   /data3
