Split is a Rack-based A/B testing framework designed to work with Rails, Sinatra or any other Rack-based app.
Split is heavily inspired by the Abingo and Vanity Rails A/B testing plugins, and by Resque in its use of Redis.
Split is designed to be hacker friendly, allowing for maximum customisation and extensibility.
Split uses Redis as a datastore.
Split only supports Redis 2.0 or greater.
If you're on OS X, Homebrew is the simplest way to install Redis:
$ brew install redis
$ redis-server /usr/local/etc/redis.conf
You now have a Redis daemon running on port 6379.
If you are using bundler add split to your Gemfile:
gem 'split'
Then run:
$ bundle install
Otherwise install the gem:
$ gem install split
and require it in your project:
require 'split'
If you are using Redis on Ruby 1.8.x then you will likely also want to use the SystemTimer gem to make sure the Redis client does not hang.
Put the following in your Gemfile as well:
gem 'SystemTimer'
Split is autoloaded when Rails starts up. As long as you've configured Redis it will 'just work'.
To configure Sinatra with Split you need to enable sessions and mix in the helper methods. Add the following lines at the top of your Sinatra app:
require 'sinatra/base'
require 'split'

class MySinatraApp < Sinatra::Base
  enable :sessions
  helpers Split::Helper

  get '/' do
  ...
  end
end
To begin your A/B test use the ab_test method, naming your experiment with the first argument and then the different alternatives which you wish to test on as the other arguments.
ab_test returns one of the alternatives; if a user has already seen that test they will get the same alternative as before, which you can use to split your code on.
It can be used to render different templates, show different text or any other case-based logic.
finished is used to mark the completion of an experiment, also known as a conversion.
Example: View
<% ab_test("login_button", "/images/button1.jpg", "/images/button2.jpg") do |button_file| %>
  <%= image_tag(button_file, :alt => "Login!") %>
<% end %>
Example: Controller
def register_new_user
  # See what level of free points maximizes users' decision to buy replacement points.
  @starter_points = ab_test("new_user_free_points", '100', '200', '300')
end
Example: Conversion tracking (in a controller!)
def buy_new_points
  # some business logic
  finished("new_user_free_points")
end
Example: Conversion tracking (in a view)
Thanks for signing up, dude! <% finished("signup_page_redesign") %>
You can find more examples, tutorials and guides on the wiki.
Perhaps you only want to show an alternative to 10% of your visitors because it is very experimental or not yet fully load tested.
To do this you can pass a weight with each alternative in the following ways:
ab_test('homepage design', {'Old' => 20}, {'New' => 2})
ab_test('homepage design', 'Old', {'New' => 0.1})
ab_test('homepage design', {'Old' => 10}, 'New')
Note: If using Ruby 1.8.x and weighted alternatives you should always pass the control alternative through as the second argument, with any other alternatives as a third argument, because the order of the hash is not preserved in Ruby 1.8. Ruby 1.9.1+ users are not affected by this bug.
This will only show the new alternative to visitors 1 in 10 times; the default weight for an alternative is 1.
For development and testing, you may wish to force your app to always return an alternative. You can do this by passing it as a parameter in the url.
If you have an experiment called button_color with alternatives called red and blue used on your homepage, a URL such as:
http://myawesomesite.com?button_color=red
will always have red buttons. This won't be stored in your session or count towards the results.
When a user completes a test their session is reset so that they may start the test again in the future.
To stop this behaviour you can pass the following option to the finished
method:
finished('experiment_name', :reset => false)
The user will then always see the alternative they started with.
By default Split will avoid users participating in multiple experiments at once. This means you are less likely to skew results by adding in more variation to your tests.
To stop this behaviour and allow users to participate in multiple experiments at once enable the allow_multiple_experiments
config option like so:
Split.configure do |config|
  config.allow_multiple_experiments = true
end
Split comes with two built-in persistence adapters for storing users and the alternatives they've been given for each experiment.
By default Split will store the tests for each user in the session.
You can optionally configure Split to use a cookie or any custom adapter of your choosing.
Split.configure do |config|
  config.persistence = :cookie
end
Note: Using cookies depends on ActionDispatch::Cookies or any identical API.
Your custom adapter needs to implement the same API as the existing adapters.
See Split::Persistence::CookieAdapter or Split::Persistence::SessionAdapter for a starting point; a sketch of a custom adapter follows the configuration example below.
Split.configure do |config|
  config.persistence = YourCustomAdapterClass
end
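For illustration, here is a minimal sketch of what such an adapter could look like. The RedisHashAdapter class and the current_user_id helper are hypothetical; the sketch assumes the same interface as the built-in adapters, namely being constructed with the context of the current request and exposing [], []= and delete to read and write a user's assigned alternatives.

class RedisHashAdapter
  def initialize(context)
    # context is whatever includes Split::Helper (e.g. a controller);
    # current_user_id is a hypothetical method assumed to exist on it
    @key = "split_user:#{context.current_user_id}"
  end

  def [](experiment_name)
    Split.redis.hget(@key, experiment_name)
  end

  def []=(experiment_name, alternative_name)
    Split.redis.hset(@key, experiment_name, alternative_name)
  end

  def delete(experiment_name)
    Split.redis.hdel(@key, experiment_name)
  end
end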
Split comes with a Sinatra-based front end to get an overview of how your experiments are doing.
If you are running Rails 2: You can mount this inside your app using Rack::URLMap in your config.ru
require 'split/dashboard'
run Rack::URLMap.new \
  "/"      => Your::App.new,
  "/split" => Split::Dashboard.new
However, if you are using Rails 3: You can mount this inside your app routes by first adding this to the Gemfile:
gem 'split', :require => 'split/dashboard'
Then adding this to config/routes.rb
mount Split::Dashboard, :at => 'split'
You may want to password protect that page; you can do so with Rack::Auth::Basic (in your split initializer file):
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  username == 'admin' && password == 'p4s5w0rd'
end
You can override the default configuration options of Split like so:
Split.configure do |config|
  config.db_failover = true # handle redis errors gracefully
  config.db_failover_on_db_error = proc{|error| Rails.logger.error(error.message) }
  config.allow_multiple_experiments = true
  config.enabled = true
  config.persistence = Split::Persistence::SessionAdapter
end
In most scenarios you don't want to have A/B testing enabled for web spiders, robots or special groups of users. Split provides functionality to filter these out based on a predefined, extensible list of bots, a list of IP addresses, or custom exclude logic.
Split.configure do |config|
  # bot config
  config.robot_regex = /my_custom_robot_regex/ # or
  config.bots['newbot'] = "Description for bot with 'newbot' user agent, which will be added to config.robot_regex for exclusion"

  # IP config
  config.ignore_ip_addresses << '81.19.48.130' # or regex: /81\.19\.48\.[0-9]+/

  # or provide your own filter functionality, the default is proc{ |request| is_robot? || is_ignored_ip_address? }
  config.ignore_filter = proc{ |request| CustomExcludeLogic.excludes?(request) }
end
Instead of providing the experiment options inline, you can store them in a hash. This hash can control your experiment's alternatives, weights, algorithm and if the experiment resets once finished:
Split.configure do |config|
  config.experiments = {
    "my_first_experiment" => {
      :alternatives => ["a", "b"],
      :resettable => false
    },
    "my_second_experiment" => {
      :algorithm => 'Split::Algorithms::Whiplash',
      :alternatives => [
        { :name => "a", :percent => 67 },
        { :name => "b", :percent => 33 }
      ]
    }
  }
end
You can also store your experiments in a YAML file:
Split.configure do |config|
  config.experiments = YAML.load_file "config/experiments.yml"
end
You can then define the YAML file like:
my_first_experiment:
  alternatives:
    - a
    - b
my_second_experiment:
  alternatives:
    - name: a
      percent: 67
    - name: b
      percent: 33
  resettable: false
This simplifies the calls from your code:
ab_test("my_first_experiment")
and:
finished("my_first_experiment")
You might wish to track generic metrics, such as conversions, and use
those to complete multiple different experiments without adding more to
your code. You can use the configuration hash to do this, thanks to
the :metric
option.
Split.configure do |config|
  config.experiments = {
    "my_first_experiment" => {
      :alternatives => ["a", "b"],
      :metric => :my_metric,
    }
  }
end
Your code may then track a completion using the metric instead of the experiment name:
finished(:my_metric)
You can also create a new metric by instantiating and saving a new Metric object:
metric = Split::Metric.new(:my_metric)
metric.save
You might wish to allow an experiment to have multiple, distinguishable goals. The API to define goals for an experiment is this:
ab_test({"link_color" => ["purchase", "refund"]}, "red", "blue")
or you can define them in a configuration file:
Split.configure do |config|
  config.experiments = {
    "link_color" => {
      :alternatives => ["red", "blue"],
      :goals => ["purchase", "refund"]
    }
  }
end
To complete a goal conversion, call finished with the experiment name and the goal:
finished("link_color" => "purchase")
Because Redis has no automatic failover mechanism, you can switch on the db_failover config option so that ab_test and finished will not crash in case of a db failure. In that case ab_test always delivers alternative A (the first one).
It's also possible to set a db_failover_on_db_error callback (proc), for example to log these errors via Rails.logger.
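Taken together, a failover-friendly setup might look like this (a sketch that simply reuses the two configuration options described above):

Split.configure do |config|
  # fall back to the first alternative instead of raising when Redis is unreachable
  config.db_failover = true
  # and log the underlying error
  config.db_failover_on_db_error = proc { |error| Rails.logger.error(error.message) }
end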
You may want to change the Redis host and port Split connects to, or set various other options at startup.
Split has a redis
setter which can be given a string or a Redis
object. This means if you're already using Redis in your app, Split
can re-use the existing connection.
String: Split.redis = 'localhost:6379'
Redis: Split.redis = $redis
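If you don't already have a connection to hand, you can construct one and pass it in (a sketch; the host and port are placeholders for your own Redis server):

require 'redis'

Split.redis = Redis.new(:host => 'redis1.example.com', :port => 6379)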
For our Rails app we have a config/initializers/split.rb file where
we load config/split.yml by hand and set the Redis information
appropriately.
Here's our config/split.yml:
development: localhost:6379
test: localhost:6379
staging: redis1.example.com:6379
fi: localhost:6379
production: redis1.example.com:6379
And our initializer:
rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
split_config = YAML.load_file(rails_root + '/config/split.yml')
Split.redis = split_config[rails_env]
If you're running multiple, separate instances of Split you may want to namespace the keyspaces so they do not overlap. This is not unlike the approach taken by many memcached clients.
This feature is provided by the redis-namespace library, which Split uses by default to separate the keys it manages from other keys in your Redis server.
Simply use the Split.redis.namespace
accessor:
Split.redis.namespace = "split:blog"
We recommend sticking this in your initializer somewhere after Redis is configured.
Split provides the Helper module to facilitate running experiments inside web sessions.
Alternatively, you can access the underlying Metric, Trial, Experiment and Alternative objects to conduct experiments that are not tied to a web session.
# create a new experiment
experiment = Split::Experiment.find_or_create('color', 'red', 'blue')

# create a new trial
trial = Split::Trial.new(:experiment => experiment)

# run trial
trial.choose!

# get the result, returns either red or blue
trial.alternative.name

# if the goal has been achieved, increment the successful completions for this alternative
if goal_achieved?
  trial.complete!
end
By default, Split ships with an algorithm that randomly selects from possible alternatives for a traditional a/b test.
An implementation of a bandit algorithm is also provided.
Users may also write their own algorithms. The default algorithm may be specified globally in the configuration file, or on a per experiment basis using the experiments hash of the configuration file.
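For example, picking the provided bandit algorithm as the global default could look like this (a sketch; per-experiment selection via the :algorithm key was shown in the configuration hash above, and the global config.algorithm option is assumed to mirror it):

Split.configure do |config|
  config.algorithm = Split::Algorithms::Whiplash
end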
- Split::Export - easily export A/B test data out of Split
- Split::Analytics - push test data to Google Analytics
- Split::Mongoid - store data in Mongoid instead of Redis
Ryan Bates has produced an excellent 10 minute screencast about Split on the RailsCasts site: A/B Testing with Split
Special thanks to the following people for submitting patches:
- Lloyd Pick
- Jeffery Chupp
- Andrew Appleton
- Phil Nash
- Dave Goodchild
- Ian Young
- Nathan Woodhull
- Ville Lautanala
- Liu Jin
- Peter Schröder
Source hosted at GitHub. Report Issues/Feature requests on GitHub Issues. Discussion at Google Groups
Tests can be run with rake spec
- Fork the project.
- Make your feature addition or bug fix.
- Add tests for it. This is important so I don't break it in a future version unintentionally.
- Add documentation if necessary.
- Commit, but do not mess with the rakefile, version, or history. (If you want to have your own version, that is fine, but bump the version in a commit by itself so I can ignore it when I pull.)
- Send a pull request. Bonus points for topic branches.
Copyright (c) 2013 Andrew Nesbitt. See LICENSE for details.