# cmdline.ggo: gengetopt option definitions for mcperf (forked from leverich/mutilate)
package "mcperf"
version "0.4"
usage "mcperf -s server[@port] [options]"
description "\"High-performance\" memcached benchmarking tool"
args "-c cc --show-required -C --default-optional -l"
option "verbose" v "Verbosity. Repeat for more verbose." multiple
option "quiet" - "Disable log messages."
text "\nBasic options:"
option "server" s "Memcached server hostname[:port[-end_port]]. \
Repeat to specify multiple servers. " string multiple
option "binary" - "Use binary memcached protocol instead of ASCII."
option "qps" q "Target aggregate QPS. 0 = peak QPS." int default="0"
option "time" t "Maximum time to run (seconds)." int default="5"
option "profile" - "Select one of several predefined profiles." int
option "keysize" K "Length of memcached keys (distribution)."
string default="30"
option "keyorder" - "Selection of memcached keys to use (distribution)."
string default="none"
option "valuesize" V "Length of memcached values (distribution)."
string default="200"
option "records" r "Number of memcached records to use. \
If multiple memcached servers are given, this number is divided \
by the number of servers." int default="10000"
option "update" u "Ratio of set:get commands." float default="0.0"
text "\nAdvanced options:"
option "username" U "Username to use for SASL authentication." string
option "password" P "Password to use for SASL authentication." string
option "threads" T "Number of threads to spawn." int default="1"
option "affinity" - "Set CPU affinity for threads, round-robin"
option "connections" c "Connections to establish per server." int default="1"
option "depth" d "Maximum depth to pipeline requests." int default="1"
option "roundrobin" R "Assign threads to servers in round-robin fashion. \
By default, each thread connects to every server."
option "iadist" i "Inter-arrival distribution (distribution). Note: \
The distribution will automatically be adjusted to match the QPS given \
by --qps." string default="exponential"
option "skip" S "Skip transmissions if previous requests are late. This \
harms the long-term QPS average, but reduces spikes in QPS after \
long latency requests."
option "moderate" - "Enforce a minimum delay of ~1/lambda between requests."
option "noload" - "Skip database loading."
option "loadonly" - "Load database and then exit."
option "blocking" B "Use blocking epoll(). May increase latency."
option "no_nodelay" - "Don't use TCP_NODELAY."
option "warmup" w "Warmup time before starting measurement." int
option "wait" W "Time to wait after startup to start measurement." int
option "save" - "Record latency samples to given file." string
option "search" - "Search for the QPS where N-order statistic < Xus. \
(i.e. --search 95:1000 means find the QPS where 95% of requests are \
faster than 1000us)." string typestr="N:X"
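# Illustrative example (the hostname is an assumption): find the QPS at which
# 95% of requests complete in under 1000us, as described above.
#
#   mcperf -s 10.0.0.1 --search 95:1000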
option "scan" - "Scan latency across QPS rates from min to max."
string typestr="min:max:step"
option "trace" e "To enable server tracing based on client activity, \
will issue special start_trace/stop_trace commands. \
Requires memcached to support these commands."
option "getq_size" G "Size of queue for multiget requests." int default="100"
option "getq_freq" g "Frequency of multiget requests, 0 for no multi-get, 100 for only multi-get." float default="0.0"
option "keycache_capacity" - "Cached key capacity. (default 10000)" int default="10000"
option "keycache_reuse" - "Number of times to reuse key cache before generating new req sequence. (Default 100)" int default="100"
option "keycache_regen" - "When regenerating control number of requests to regenerate. (Default 1%)" int default="1"
option "plot_all" - "Create plot/csv of latency histogram at each step when using gnuplot and loghistogram sampler"
text "\nAgent-mode options:"
option "agentmode" A "Run client in agent mode."
option "agent" a "Enlist remote agent, with optional agent port to support multiple agents per host." string typestr="host[@agent_port]" multiple
option "agent_port" p "Agent port, ignored when it is part of the agent option" string default="5556"
option "lambda_mul" l "Lambda multiplier. Increases share of \
QPS for this client." int default="1"
option "measure_connections" C "Master client connections per server, \
overrides --connections." int
option "measure_qps" Q "Explicitly set master client QPS, \
spread across threads and connections." int
option "measure_depth" D "Set master client connection depth." int
option "poll_freq" m "Set frequency in seconds for agent protocol recv polling." int default="1"
option "poll_max" M "Set timeout for agent protocol recv polling. An agent not responding within time limit will be dropped." int default="120"
text "
The --measure_* options aid in taking latency measurements of the
memcached server without incurring significant client-side queuing
delay. --measure_connections allows the master to override the
--connections option. --measure_depth allows the master to operate as
an \"open-loop\" client while other agents continue as a regular
closed-loop clients. --measure_qps lets you modulate the QPS the
master queries at independent of other clients. This theoretically
normalizes the baseline queuing delay you expect to see across a wide
range of --qps values.
"
text "
Predefined profiles to approximate some use cases:
1. memcached for a web-serving benchmark: p95, 20ms, FB key/value/IA, >4000 connections to the device under test.
2. memcached for application backends: p99, 10ms, 32B key, 1000B value, uniform IA, >1000 connections.
3. memcached for low latency (e.g. stock trading): p99.9, 32B key, 200B value, uniform IA, QPS rate set to 100000.
4. p99.9, 1ms: 32B key; value size uniformly distributed from 100B to 1KB.
5. Update-heavy, large-queue memcached: limited connections, depth increased to 148, update ratio 0.623595.
6. Update-heavy, high-latency memcached: single connection, depth increased to 249, update ratio 0.558257.
7. Small-DB, low-connection memcached: 1000 records, depth 207, 6 connections per thread.
8. Small-DB, low-connection, large-queue memcached: 1000 records, depth 250, 7 connections per thread.
9. Small-DB, low-connection, large-queue memcached with updates: 1000 records, depth 250, 7 connections per thread, update ratio 0.001114.
"
text "
Some options take a 'distribution' as an argument.
Distributions are specified by <distribution>[:<param1>[,...]].
Parameters are not required. The following distributions are supported:
[fixed:]<value> Always generates <value>.
uniform:<max> Uniform distribution between 0 and <max>.
normal:<mean>,<sd> Normal distribution.
exponential:<lambda> Exponential distribution.
pareto:<loc>,<scale>,<shape> Generalized Pareto distribution.
gev:<loc>,<scale>,<shape> Generalized Extreme Value distribution.
To recreate the Facebook \"ETC\" request stream from [1], the
following hard-coded distributions are also provided:
fb_value = a hard-coded discrete and GPareto PDF of value sizes
fb_key = \"gev:30.7984,8.20449,0.078688\", key-size distribution
fb_ia = \"pareto:0.0,16.0292,0.154971\", inter-arrival time dist.
[1] Berk Atikoglu et al., Workload Analysis of a Large-Scale Key-Value Store,
SIGMETRICS 2012
"