This repository has been archived by the owner on Jul 30, 2018. It is now read-only.

twemproxy maintained and used at vipshop

deep edited this page Sep 1, 2015 · 6 revisions

New feature

new command: replace_server

description: Replace a server in the pool and update the configuration file.

usage: replace_server arg1 arg2

parameter: arg1 is the ip:port of the server to be replaced; arg2 is the new server's ip:port.

example: redis-cli -h 127.0.0.1 -p 1234 replace_server 10.101.1.10:22122 10.101.1.11:22122

limit: Currently redis only.

resolved: With redis master-slave replication, redis-sentinel, and the replace_server command, twemproxy+redis can be made highly available.
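The failover glue can be scripted. Below is a minimal sketch of a redis-sentinel client-reconfig-script that calls replace_server after a failover; the twemproxy address 127.0.0.1:1234 and the use of this hook are assumptions for illustration, not part of this fork:

```python
# Hypothetical glue for redis-sentinel's client-reconfig-script hook.
# The twemproxy proxy address (127.0.0.1:1234) is an assumption.

def build_replace_cmd(argv):
    # sentinel invokes the script with:
    #   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
    _name, _role, _state, from_ip, from_port, to_ip, to_port = argv
    return ["redis-cli", "-h", "127.0.0.1", "-p", "1234",
            "replace_server", f"{from_ip}:{from_port}", f"{to_ip}:{to_port}"]

# In the real script one would run the command, e.g.:
#   subprocess.run(build_replace_cmd(sys.argv[1:]), check=True)
```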

log rotate

description: Split twemproxy's log file by size and keep a fixed number of log files.

usage: Add three options for running: -R (--log-rotate), -M (--log-file-max-size), -C (--log-file-count).

parameter:

  • -R : Enable log rotation. Disabled by default.
  • -M : Set each log file's maximum size. The default unit is bytes, but a suffix can be given (currently B/M/G/MB/GB, case-insensitive). The default maximum size is 1GB.
  • -C : Set the number of log files to keep. The count can be -1, 0, or a positive integer: -1 keeps all log files; 0 keeps only the current log file; a positive count keeps that many rotated files in addition to the current log file. Rotated log files are named "current log file name" + "_" + "split time". The default is 10.
  • -M and -C take effect only when -R is enabled.

example: nutcracker -d -o nutcracker.log -R -M 20MB -C 2

limit: The maximum value of -C is 200.

resolved: Prevents log files from filling the disk.
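The size and count semantics above can be modeled in a few lines. This is a hedged sketch of the naming and pruning rules as documented, not the fork's actual code; the timestamp format for "split time" is an assumption:

```python
import time

def parse_size(text: str) -> int:
    # -M accepts a bare byte count or a case-insensitive suffix (B/M/G/MB/GB)
    units = {"B": 1, "M": 1 << 20, "MB": 1 << 20, "G": 1 << 30, "GB": 1 << 30}
    t = text.strip().upper()
    for suffix in ("MB", "GB", "B", "M", "G"):
        if t.endswith(suffix):
            return int(t[: -len(suffix)]) * units[suffix]
    return int(t)  # no suffix: plain bytes

def rotated_name(log_path: str, when: float) -> str:
    # Rotated files are named "<current log file name>_<split time>";
    # the exact timestamp format here is assumed
    return log_path + "_" + time.strftime("%Y%m%d%H%M%S", time.localtime(when))

def prune(rotated_files: list[str], count: int) -> list[str]:
    # -C semantics: -1 keeps all rotated files; 0 keeps none (only the
    # live file survives); count > 0 keeps the newest `count` files
    if count == -1:
        return rotated_files
    if count == 0:
        return []
    return sorted(rotated_files)[-count:]
```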

tcpkeepalive

description: Add TCP keepalive to detect and close dead connections.

usage: Add four parameters to the yml configuration file: tcpkeepalive, tcpkeepidle, tcpkeepcnt and tcpkeepintvl.

parameter:

  • tcpkeepalive: A boolean that controls whether TCP keepalive is enabled. Defaults to false.
  • tcpkeepidle: The idle time in msec after which twemproxy starts probing a connection to check whether it is dead.
  • tcpkeepcnt: The number of keepalive probes sent on an idle connection before it is considered dead when the peer never replies.
  • tcpkeepintvl: The interval in msec between keepalive probes when the peer never replies.
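At the socket level these parameters correspond to the standard TCP keepalive options. A minimal Python sketch for Linux follows; the msec-to-seconds rounding is an assumption about how the yml values map onto the kernel options, which take whole seconds:

```python
import socket

def apply_tcpkeepalive(sock: socket.socket, idle_ms: int = 800,
                       cnt: int = 3, intvl_ms: int = 60) -> None:
    # Enable keepalive probing on the connection
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-only option names; the kernel units are whole seconds,
    # so round the msec values from the yml down to at least 1 second
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, max(1, idle_ms // 1000))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, cnt)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, max(1, intvl_ms // 1000))
```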

example:

twem1:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: true
  timeout: 400
  redis: false
  tcpkeepalive: true
  tcpkeepidle: 800
  tcpkeepcnt: 3
  tcpkeepintvl: 60
  server_connections: 5
  servers:
  - 127.0.0.1:23401:1 server1
  - 127.0.0.1:23401:1 server2
  - 127.0.0.1:23401:1 server3

limit: none.

resolved: When LVS is used in front of twemproxy, enable tcpkeepalive. See: https://github.com/twitter/twemproxy/issues/329

administration

description: An interface that lets administrators manage twemproxy.

usage:

  • There is a dedicated port for administration. Add two options for running: -A (--proxy-adm-addr), -P (--proxy-adm-port).
  • Users can connect to this address with telnet.
  • New commands after login (type help to display them): show_conf, show_oconf, show_pools, show_pool, show_servers, find_key, find_keys, reload_conf, set_watch, del_watch, reset_watch, show_watch.

parameter:

  • -A : Set the administration listening ip, default: 0.0.0.0.
  • -P : Set the administration listening port, default: 0(means administration is disabled).

commands:

  • show_conf: Show the current configuration. No argument.
  • show_oconf: Show the previous configuration. No argument.
  • show_pools: Show all the pools' name. No argument.
  • show_pool: Show one pool's information. One argument: pool name.
  • show_servers: Show all the servers in a pool. One argument: pool name.
  • find_key: Show the server which a key is on. Two arguments: pool name and a key.
  • find_keys: Show the servers which keys are on. More than two arguments: pool name and one or more keys.
  • reload_conf: Reload the configuration file at runtime. No argument.
  • set_watch: Set a watch in the zookeeper. Two arguments: watch name and watch path.
  • del_watch: Delete a watch in the zookeeper. Two arguments: watch name and watch path.
  • reset_watch: Reset a watch in the zookeeper. Two arguments: watch name and watch path.
  • show_watch: Show a watch in the zookeeper. One argument: watch name.
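For intuition about what find_key computes, here is a sketch of the standard FNV-1a 64-bit hash that the fnv1a_64 setting in the example pools refers to, with a naive key-to-server mapping. This is an illustration only: twemproxy's own hash implementation may differ in detail, and its ketama distribution uses a hash ring rather than modulo placement:

```python
def fnv1a_64(key: bytes) -> int:
    # Standard FNV-1a, 64-bit variant (offset basis and prime below)
    h = 0xcbf29ce484222325
    for byte in key:
        h ^= byte
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def naive_find_key(key: bytes, servers: list[str]) -> str:
    # Modulo placement for illustration only; ketama distributes
    # keys over a ring of virtual nodes instead
    return servers[fnv1a_64(key) % len(servers)]
```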

example:

  • nutcracker -d -o nutcracker.log -A 127.0.0.1 -P 32045
  • telnet 127.0.0.1 32045

limit:

  • The keys passed to the find_key and find_keys commands cannot contain whitespace.
  • The watch name in the set_watch, del_watch, reset_watch and show_watch commands can currently only be conf.

resolved:

  • Users can review twemproxy's current configuration in real time.
  • Users can find which server a key lives on.
  • Users can keep the configuration in sync with zookeeper at runtime and review the watch state.

Zookeeper

description: Let twemproxy load its configuration from zookeeper at startup and keep its configuration in sync with zookeeper.

usage:

  • When building twemproxy, pass the --with-zookeeper option to configure.
  • Add four options for running: -S(--zk-start) -K(--zk-keep) -Z(--zk-path) -z(--zk-server).

parameter:

  • -S : Load the configuration from zookeeper. Disabled by default.
  • -K : Keep the configuration in sync with zookeeper. Disabled by default.
  • -Z : Set the zookeeper configuration path (default: /twemproxy).
  • -z : Set the zookeeper servers address (default: 127.0.0.1:2181).

example: nutcracker -d -o nutcracker.log -S -K -Z /twemproxy123 -z 192.168.0.1:2181,192.168.0.2:2181

limit: The maximum size of the configuration stored in zookeeper is 5000 bytes.

resolved: Keeps the configuration of many twemproxy instances in one project consistent through zookeeper.
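The 5000-byte cap is worth checking before uploading a configuration to zookeeper. A small hedged sketch; the limit comes from this fork, but the helper name and the pre-upload check are illustrative assumptions:

```python
def validate_zk_conf(conf_bytes: bytes, max_size: int = 5000) -> bytes:
    # This fork caps configurations stored in zookeeper at 5000 bytes;
    # reject anything larger before it reaches the znode
    if len(conf_bytes) > max_size:
        raise ValueError(
            f"configuration is {len(conf_bytes)} bytes, exceeds {max_size}")
    return conf_bytes
```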

Replication pool

description: Let one pool serve as another pool's replication pool. Writes are doubled to the master pool and the slave pool. If a get/gets misses in the master pool, the request penetrates to the slave pool, and if it hits there, the result is written back to the master pool.

usage: Add four parameters to the yml configuration file: replication_from, replication_mode, penetrate_mode and write_back_mode.

parameter:

  • replication_from : Set on the slave pool; its value is the name of another pool in the same configuration file. It marks the slave pool as that master pool's replication pool.
  • replication_mode : Set on the master pool; applies to the storage commands (set, add, replace, append, prepend, cas). It has three possible values: 0, 1 and 2. If 2, nutcracker sends the worse of the master pool and slave pool responses to the client. If 1, nutcracker sends the master pool response to the client, and records the case in a file when the slave pool response differs from the master pool's. If 0, nutcracker sends the master pool response to the client and ignores the slave pool response.
  • penetrate_mode : Set on the master pool; applies to the read commands (get, gets). It has four possible values: 0, 1, 2 and 3. If 0, the request goes to the slave pool only when the master pool misses; if 1, only when the master pool returns an error; if 2, on both miss and error; if 3, the request never goes to the slave pool.
  • write_back_mode : Set on the master pool; it has two possible values: 0 and 1. When a get/gets misses in the master pool, penetrates to the slave pool and hits there, 0 means nutcracker does not write the result back to the master pool, and 1 means it does.
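The read path described above can be modeled with two dict-backed pools. This is a hedged sketch of the documented miss behavior (penetrate_mode 0 or 2), not the fork's implementation; master errors (modes 1 and 2) are not modeled here:

```python
def replicated_get(key, master: dict, slave: dict, write_back_mode: int = 1):
    # Try the master pool first
    value = master.get(key)
    if value is not None:
        return value
    # Miss: penetrate to the slave (replication) pool
    value = slave.get(key)
    if value is not None and write_back_mode == 1:
        master[key] = value  # write the slave hit back to the master pool
    return value
```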

example:

twem1:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: true
  timeout: 400
  redis: false
  replication_mode: 1
  penetrate_mode: 1
  write_back_mode: 1
  server_connections: 5
  servers:
  - 127.0.0.1:23401:1 server1
  - 127.0.0.1:23401:1 server2
  - 127.0.0.1:23401:1 server3
twem2:
  listen: 127.0.0.1:22123
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: true
  timeout: 400
  redis: false
  server_connections: 5
  replication_from: twem1
  servers:
  - 127.0.0.1:23402:1 server1
  - 127.0.0.1:23402:1 server2
  - 127.0.0.1:23402:1 server3

The server pool twem2 is the server pool twem1's replication pool.

limit:

  • The master pool and slave pool must be in the same configuration file.
  • Currently only for redis: false (memcached pools).
  • Currently only one master pool and one slave pool are supported.

resolved: Avoids the memcached single-point problem, prevents requests from penetrating to the database, and supports expanding the memcached server count.