diff --git a/index.html b/index.html
index 61c9f43b2..61ab7ab03 100644
--- a/index.html
+++ b/index.html
@@ -1,6 +1,6 @@
-Jepsen 0.3.5

Jepsen 0.3.5

Released under the Eclipse Public License

Distributed systems testing framework.

Installation

To install, add the following dependency to your project or build file:

[jepsen "0.3.5"]

Namespaces

jepsen.adya

Moved to jepsen.tests.adya.

+Jepsen 0.3.6

Jepsen 0.3.6

Released under the Eclipse Public License

Distributed systems testing framework.

Installation

To install, add the following dependency to your project or build file:

[jepsen "0.3.6"]
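
For example, in a Leiningen project.clj this goes under :dependencies; the deps.edn form below assumes the equivalent jepsen/jepsen Maven coordinate:

  ;; project.clj (Leiningen)
  (defproject example-jepsen-test "0.1.0-SNAPSHOT"
    :dependencies [[org.clojure/clojure "1.11.1"]
                   [jepsen "0.3.6"]])

  ;; deps.edn (tools.deps), assuming the jepsen/jepsen coordinate
  {:deps {org.clojure/clojure {:mvn/version "1.11.1"}
          jepsen/jepsen       {:mvn/version "0.3.6"}}}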

Namespaces

jepsen.adya

Moved to jepsen.tests.adya.

Public variables and functions:

    jepsen.checker.clock

    Helps analyze clock skew over time.

    Public variables and functions:

    jepsen.checker.perf

    Supporting functions for performance analysis.

    @@ -20,13 +20,13 @@

    jepsen.core

    Entry point for all Jepsen tests. Coordinates the setup of servers, running tests, creating and resolving failures, and interpreting results.

    jepsen.db

    Allows Jepsen to set up and tear down databases.

    -

    jepsen.faketime

    Libfaketime is useful for making clocks run at differing rates! This namespace provides utilities for stubbing out programs with faketime.

    +

    jepsen.faketime

    Libfaketime is useful for making clocks run at differing rates! This namespace provides utilities for stubbing out programs with faketime.

    Public variables and functions:

    jepsen.fs-cache

    Some systems Jepsen tests are expensive or time-consuming to set up. They might involve lengthy compilation processes, large packages which take a long time to download, or allocate large files on initial startup.

    jepsen.generator.context

    Generators work with an immutable context that tells them what time it is, what processes are available, what process is executing which thread and vice versa, and so on. We need an efficient, high-performance data structure to track this information. This namespace provides that data structure, and functions to alter it.

    +

    jepsen.generator.context

    Generators work with an immutable context that tells them what time it is, what processes are available, what process is executing which thread and vice versa, and so on. We need an efficient, high-performance data structure to track this information. This namespace provides that data structure, and functions to alter it.

    jepsen.generator.interpreter

    This namespace interprets operations from a pure generator, handling worker threads, spawning processes for interacting with clients and nemeses, and recording a history.

    jepsen.generator.test

    This namespace contains functions for testing generators. See the jepsen.generator-test namespace in the test/ directory for a concrete example of how these functions can be used.

    -

    jepsen.generator.translation-table

    We burn a lot of time in hashcode and map manipulation for thread names, which are mostly integers 0…n, but sometimes non-integer names like :nemesis. It’s nice to be able to represent thread state internally as purely integers. To do this, we compute a one-time translation table which lets us map those names to integers and vice-versa.

    +

    jepsen.generator.translation-table

    We burn a lot of time in hashcode and map manipulation for thread names, which are mostly integers 0…n, but sometimes non-integer names like :nemesis. It’s nice to be able to represent thread state internally as purely integers. To do this, we compute a one-time translation table which lets us map those names to integers and vice-versa.

    jepsen.independent

    Some tests are expensive to check–for instance, linearizability–which requires we verify only short histories. But if histories are short, we may not be able to sample often or long enough to reveal concurrency errors. This namespace supports splitting a test into independent components–for example taking a test of a single register and lifting it to a map of keys to registers.
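
A sketch of how a single-key workload might be lifted this way; single-register-gen is an assumed function from a key to that key's generator:

  (require '[jepsen.independent :as independent]
           '[jepsen.checker :as checker]
           '[knossos.model :as model])

  (defn independent-workload
    [single-register-gen]
    {:generator (independent/concurrent-generator
                  10        ; threads per key
                  (range)   ; keys 0, 1, 2, ...
                  single-register-gen)
     :checker   (independent/checker
                  (checker/linearizable {:model (model/cas-register)}))})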

    jepsen.lazyfs

    Lazyfs allows the injection of filesystem-level faults: specifically, losing data which was written to disk but not fsynced. This namespace lets you mount a specific directory as a lazyfs filesystem, and offers a DB which mounts/unmounts it, and downloads the lazyfs log file–this can be composed into your own database. You can then call lose-unfsynced-writes! as a part of your database’s db/kill! implementation, likely after killing your DB process itself.

    jepsen.nemesis.combined

    A nemesis which combines common operations on nodes and processes: clock skew, crashes, pauses, and partitions. So far, writing these sorts of nemeses has involved lots of special cases. I expect that the API for specifying these nemeses is going to fluctuate as we figure out how to integrate those special cases appropriately. Consider this API unstable.

    @@ -43,7 +43,8 @@

    Public variables and functions:

    jepsen.reconnect

    Stateful wrappers for automatically reconnecting network clients.

    jepsen.repl

    Helper functions for mucking around with tests!

    Public variables and functions:

    jepsen.report

    Prints out stuff.

    -

    Public variables and functions:

    jepsen.store

    Persistent storage for test runs and later analysis.

    +

    Public variables and functions:

    jepsen.role

    Supports tests where each node has a single, distinct role. For instance, one node might run ZooKeeper, and the remaining nodes might run Kafka.

    +

    jepsen.store.format

    Jepsen tests are logically a map. To save this map to disk, we originally wrote it as a single Fressian file. This approach works reasonably well, but has a few problems:

    Public variables and functions:

    jepsen.store.fressian

    Supports serialization of various Jepsen datatypes via Fressian.

    jepsen.tests

    Provide utilities for writing tests using jepsen.

    @@ -54,8 +55,8 @@

    Public variables and functions:

    jepsen.tests.cycle.append

Detects cycles in histories where operations are transactions over named lists, and operations are either appends or reads. See elle.list-append for docs.
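
A sketch of the operation representation this workload uses (the standard elle.list-append form; key 8 and the values here are illustrative):

  ;; One transaction over list 8: append 3, then read the whole list.
  {:type :invoke, :f :txn, :value [[:append 8 3] [:r 8 nil]]}
  ;; A matching completion, where the read observed the list [1 2 3]:
  {:type :ok,     :f :txn, :value [[:append 8 3] [:r 8 [1 2 3]]]}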

    Public variables and functions:

    jepsen.tests.cycle.wr

    A test which looks for cycles in write/read transactions. Writes are assumed to be unique, but this is the only constraint. See elle.rw-register for docs.

    Public variables and functions:

    jepsen.tests.linearizable-register

    Common generators and checkers for linearizability over a set of independent registers. Clients should understand three functions, for writing a value, reading a value, and compare-and-setting a value from v to v’. Reads receive nil, and replace it with the value actually read.

    +

    jepsen.tests.linearizable-register

    Common generators and checkers for linearizability over a set of independent registers. Clients should understand three functions, for writing a value, reading a value, and compare-and-setting a value from v to v’. Reads receive nil, and replace it with the value actually read.
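
A sketch of the three operation shapes such a client receives; the concrete values are illustrative:

  {:type :invoke, :f :read,  :value nil}    ; read; the client fills in the observed value
  {:type :invoke, :f :write, :value 3}      ; write the value 3
  {:type :invoke, :f :cas,   :value [1 3]}  ; compare-and-set from 1 to 3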

    Public variables and functions:

    jepsen.tests.long-fork

    Tests for an anomaly in parallel snapshot isolation (but which is prohibited in normal snapshot isolation). In long-fork, concurrent write transactions are observed in conflicting order. For example:

\ No newline at end of file
diff --git a/jepsen.adya.html b/jepsen.adya.html
index 400aefa4d..f54cf662b 100644
--- a/jepsen.adya.html
+++ b/jepsen.adya.html
@@ -1,4 +1,4 @@
-jepsen.adya documentation

    jepsen.adya

    Moved to jepsen.tests.adya.

    +jepsen.adya documentation

    jepsen.adya

    Moved to jepsen.tests.adya.

\ No newline at end of file
diff --git a/jepsen.checker.clock.html b/jepsen.checker.clock.html
index 667431f3c..095673062 100644
--- a/jepsen.checker.clock.html
+++ b/jepsen.checker.clock.html
@@ -1,7 +1,7 @@
-jepsen.checker.clock documentation

    jepsen.checker.clock

    Helps analyze clock skew over time.

    +jepsen.checker.clock documentation

    jepsen.checker.clock

    Helps analyze clock skew over time.

    history->datasets

    (history->datasets history)

    Takes a history and produces a map of nodes to sequences of t offset pairs, representing the changing clock offsets for that node over time.

    -

    plot!

    (plot! test history opts)

Plots clock offsets over time. Looks for any op with a :clock-offset field, which contains a (possibly incomplete) map of nodes to offsets, in seconds. Plots those offsets over time.

    -

    short-node-names

    (short-node-names nodes)

    Takes a collection of node names, and maps them to shorter names by removing common trailing strings (e.g. common domains).

    -
    \ No newline at end of file +

    plot!

    (plot! test history opts)

Plots clock offsets over time. Looks for any op with a :clock-offset field, which contains a (possibly incomplete) map of nodes to offsets, in seconds. Plots those offsets over time.

    +

    short-node-names

    (short-node-names nodes)

    Takes a collection of node names, and maps them to shorter names by removing common trailing strings (e.g. common domains).

    +
\ No newline at end of file
diff --git a/jepsen.checker.html b/jepsen.checker.html
index 76d08dba0..114e41db0 100644
--- a/jepsen.checker.html
+++ b/jepsen.checker.html
@@ -1,38 +1,38 @@
-jepsen.checker documentation

    jepsen.checker

    Validates that a history is correct with respect to some model.

    +jepsen.checker documentation

    jepsen.checker

    Validates that a history is correct with respect to some model.

    check-safe

    (check-safe checker test history)(check-safe checker test history opts)

    Like check, but wraps exceptions up and returns them as a map like

    -

    {:valid? :unknown :error “…”}

    -

    Checker

    protocol

    members

    check

    (check checker test history opts)

    Verify the history is correct. Returns a map like

    +

{:valid? :unknown :error {:via {:type clojure.lang.ExceptionInfo, …} …}}

    +

    Checker

    protocol

    members

    check

    (check checker test history opts)

    Verify the history is correct. Returns a map like

    {:valid? true}

    or

    {:valid? false :some-details … :failed-at details of specific operations}

    Opts is a map of options controlling checker execution. Keys include:

    :subdirectory - A directory within this test’s store directory where output files should be written. Defaults to nil.
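
A minimal sketch of a custom checker built on this protocol; the validity rule here (at least one :ok operation) is purely illustrative:

  (require '[jepsen.checker :as checker])

  (def at-least-one-ok
    "A toy checker which passes iff the history contains at least one :ok op."
    (reify checker/Checker
      (check [_this _test history _opts]
        (let [oks (count (filter #(= :ok (:type %)) history))]
          {:valid?   (pos? oks)
           :ok-count oks}))))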

    -

    clock-plot

    (clock-plot)

    Plots clock offsets on all nodes

    -

    compose

    (compose checker-map)

    Takes a map of names to checkers, and returns a checker which runs each check (possibly in parallel) and returns a map of names to results; plus a top-level :valid? key which is true iff every checker considered the history valid.

    -

    concurrency-limit

    (concurrency-limit limit checker)

    Takes positive integer limit and a checker. Puts an upper bound on the number of concurrent executions of this checker. Use this when a checker is particularly thread or memory intensive, to reduce context switching and memory cost.

    -

    counter

    (counter)

    A counter starts at zero; add operations should increment it by that much, and reads should return the present value. This checker validates that at each read, the value is greater than the sum of all :ok increments, and lower than the sum of all attempted increments.

    +

    clock-plot

    (clock-plot)

    Plots clock offsets on all nodes

    +

    compose

    (compose checker-map)

    Takes a map of names to checkers, and returns a checker which runs each check (possibly in parallel) and returns a map of names to results; plus a top-level :valid? key which is true iff every checker considered the history valid.
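
For example, several of the checkers documented on this page can be combined into one (a sketch; pick whichever checkers your test needs):

  (require '[jepsen.checker :as checker]
           '[jepsen.checker.timeline :as timeline])

  (def combined
    (checker/compose {:perf     (checker/perf)
                      :stats    (checker/stats)
                      :timeline (timeline/html)}))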

    +

    concurrency-limit

    (concurrency-limit limit checker)

    Takes positive integer limit and a checker. Puts an upper bound on the number of concurrent executions of this checker. Use this when a checker is particularly thread or memory intensive, to reduce context switching and memory cost.

    +

    counter

    (counter)

    A counter starts at zero; add operations should increment it by that much, and reads should return the present value. This checker validates that at each read, the value is greater than the sum of all :ok increments, and lower than the sum of all attempted increments.

    Note that this counter verifier assumes the value monotonically increases: decrements are not allowed.

    Returns a map:

    :valid? Whether the counter remained within bounds :reads [lower-bound read-value upper-bound …] :errors [lower-bound read-value upper-bound …]

    ; Not implemented, but might be nice:

    :max-absolute-error The lower read upper where read falls furthest outside :max-relative-error Same, but with error computed as a fraction of the mean}

    -

    expand-queue-drain-ops

    (expand-queue-drain-ops)

    A Tesser fold which looks for :drain operations with their value being a collection of queue elements, and expands them to a sequence of :dequeue invoke/complete pairs.

    -

    frequency-distribution

    (frequency-distribution points c)

Computes a map of percentiles (0–1, not 0–100, we’re not monsters) of a collection of numbers, taken at the percentiles given by points. If the collection is empty, returns nil.

    -

    latency-graph

    (latency-graph)(latency-graph opts)

    Spits out graphs of latencies. Checker options take precedence over those passed in with this constructor.

    -

    linearizable

    (linearizable {:keys [algorithm model]})

    Validates linearizability with Knossos. Defaults to the competition checker, but can be controlled by passing either :linear or :wgl.

    +

    expand-queue-drain-ops

    (expand-queue-drain-ops)

    A Tesser fold which looks for :drain operations with their value being a collection of queue elements, and expands them to a sequence of :dequeue invoke/complete pairs.

    +

    frequency-distribution

    (frequency-distribution points c)

Computes a map of percentiles (0–1, not 0–100, we’re not monsters) of a collection of numbers, taken at the percentiles given by points. If the collection is empty, returns nil.

    +

    latency-graph

    (latency-graph)(latency-graph opts)

    Spits out graphs of latencies. Checker options take precedence over those passed in with this constructor.

    +

    linearizable

    (linearizable {:keys [algorithm model]})

    Validates linearizability with Knossos. Defaults to the competition checker, but can be controlled by passing either :linear or :wgl.

    Takes an options map for arguments, ex. {:model (model/cas-register) :algorithm :wgl}

    -

    log-file-pattern

    (log-file-pattern pattern filename)

    Takes a PCRE regular expression pattern (as a Pattern or string) and a filename. Checks the store directory for this test, and in each node directory (e.g. n1), examines the given file to see if it contains instances of the pattern. Returns :valid? true if no instances are found, and :valid? false otherwise, along with a :count of the number of matches, and a :matches list of maps, each with the node and matching string from the file.

    +

    log-file-pattern

    (log-file-pattern pattern filename)

    Takes a PCRE regular expression pattern (as a Pattern or string) and a filename. Checks the store directory for this test, and in each node directory (e.g. n1), examines the given file to see if it contains instances of the pattern. Returns :valid? true if no instances are found, and :valid? false otherwise, along with a :count of the number of matches, and a :matches list of maps, each with the node and matching string from the file.

    (log-file-pattern-checker #“panic: (\w+)$” “db.log”)

{:valid? false :count 5 :matches [{:node “n1” :line “panic: invariant violation” :matches [“invariant violation”]} …]}

    -

    merge-valid

    (merge-valid valids)

    Merge n :valid values, yielding the one with the highest priority.

    -

    noop

    (noop)

    An empty checker that only returns nil.

    -

    perf

    (perf)(perf opts)

    Composes various performance statistics. Checker options take precedence over those passed in with this constructor.

    -

    queue

    (queue model)

    Every dequeue must come from somewhere. Validates queue operations by assuming every non-failing enqueue succeeded, and only OK dequeues succeeded, then reducing the model with that history. Every subhistory of every queue should obey this property. Should probably be used with an unordered queue model, because we don’t look for alternate orderings. O(n).

    -

    rate-graph

    (rate-graph)(rate-graph opts)

    Spits out graphs of throughput over time. Checker options take precedence over those passed in with this constructor.

    -

    set

    (set)

    Given a set of :add operations followed by a final :read, verifies that every successfully added element is present in the read, and that the read contains only elements for which an add was attempted.

    -

    set-full

    (set-full)(set-full checker-opts)

    A more rigorous set analysis. We allow :add operations which add a single element, and :reads which return all elements present at that time. For each element, we construct a timeline like so:

    +

    merge-valid

    (merge-valid valids)

    Merge n :valid values, yielding the one with the highest priority.

    +

    noop

    (noop)

    An empty checker that only returns nil.

    +

    perf

    (perf)(perf opts)

    Composes various performance statistics. Checker options take precedence over those passed in with this constructor.

    +

    queue

    (queue model)

    Every dequeue must come from somewhere. Validates queue operations by assuming every non-failing enqueue succeeded, and only OK dequeues succeeded, then reducing the model with that history. Every subhistory of every queue should obey this property. Should probably be used with an unordered queue model, because we don’t look for alternate orderings. O(n).

    +

    rate-graph

    (rate-graph)(rate-graph opts)

    Spits out graphs of throughput over time. Checker options take precedence over those passed in with this constructor.

    +

    set

    (set)

    Given a set of :add operations followed by a final :read, verifies that every successfully added element is present in the read, and that the read contains only elements for which an add was attempted.

    +

    set-full

    (set-full)(set-full checker-opts)

    A more rigorous set analysis. We allow :add operations which add a single element, and :reads which return all elements present at that time. For each element, we construct a timeline like so:

    [nonexistent] ... [created] ... [present] ... [absent] ... [present] ...
     

    For each element:

@@ -80,20 +80,20 @@

element was known to be inserted, but never observed.

-

    set-full-add

    (set-full-add element-state op)

    set-full-element

    (set-full-element op)

    Given an add invocation, constructs a new set element state record to track that element

    -

    set-full-element-results

    (set-full-element-results e)

    Takes a SetFullElement and computes a map of final results from it:

    +

    set-full-add

    (set-full-add element-state op)

    set-full-element

    (set-full-element op)

    Given an add invocation, constructs a new set element state record to track that element

    +

    set-full-element-results

    (set-full-element-results e)

    Takes a SetFullElement and computes a map of final results from it:

    :element The element itself :outcome :stable, :lost, :never-read :lost-latency :stable-latency

    -

    set-full-read-absent

    (set-full-read-absent element-state inv op)

    set-full-read-present

    (set-full-read-present element-state inv op)

    set-full-results

    (set-full-results opts elements)

Takes options from set-full, and a collection of SetFullElements. Computes aggregate results; see set-full for details.

    -

    stats

    (stats)

Computes basic statistics about success and failure rates, both overall and broken down by :f. Results are valid only if every :f has some :ok operations; otherwise they’re :unknown.

    -

    stats-fold

    Helper for computing stats over a history or filtered history.

    -

    total-queue

    (total-queue)

    What goes in must come out. Verifies that every successful enqueue has a successful dequeue. Queues only obey this property if the history includes draining them completely. O(n).

    -

    unbridled-optimism

    (unbridled-optimism)

    Everything is awesoooommmmme!

    -

    unhandled-exceptions

    (unhandled-exceptions)

    Returns information about unhandled exceptions: a sequence of maps sorted in descending frequency order, each with:

    +

    set-full-read-absent

    (set-full-read-absent element-state inv op)

    set-full-read-present

    (set-full-read-present element-state inv op)

    set-full-results

    (set-full-results opts elements)

Takes options from set-full, and a collection of SetFullElements. Computes aggregate results; see set-full for details.

    +

    stats

    (stats)

Computes basic statistics about success and failure rates, both overall and broken down by :f. Results are valid only if every :f has some :ok operations; otherwise they’re :unknown.

    +

    stats-fold

    Helper for computing stats over a history or filtered history.

    +

    total-queue

    (total-queue)

    What goes in must come out. Verifies that every successful enqueue has a successful dequeue. Queues only obey this property if the history includes draining them completely. O(n).

    +

    unbridled-optimism

    (unbridled-optimism)

    Everything is awesoooommmmme!

    +

    unhandled-exceptions

    (unhandled-exceptions)

    Returns information about unhandled exceptions: a sequence of maps sorted in descending frequency order, each with:

    :class    The class of the exception thrown
     :count    How many of this exception we observed
     :example  An example operation
     
    -

    unique-ids

    (unique-ids)

    Checks that a unique id generator actually emits unique IDs. Expects a history with :f :generate invocations matched by :ok responses with distinct IDs for their :values. IDs should be comparable. Returns

    +

    unique-ids

    (unique-ids)

    Checks that a unique id generator actually emits unique IDs. Expects a history with :f :generate invocations matched by :ok responses with distinct IDs for their :values. IDs should be comparable. Returns

    {:valid?              Were all IDs unique?
      :attempted-count     Number of attempted ID generation calls
      :acknowledged-count  Number of IDs actually returned successfully
    @@ -102,5 +102,5 @@
                           they appeared--not complete for perf reasons :D
      :range               [lowest-id highest-id]}
     
    -

    valid-priorities

A map of :valid? values to their importance. Larger numbers are considered more significant and dominate when checkers are composed.

    -
    \ No newline at end of file +

    valid-priorities

A map of :valid? values to their importance. Larger numbers are considered more significant and dominate when checkers are composed.

    +
\ No newline at end of file
diff --git a/jepsen.checker.perf.html b/jepsen.checker.perf.html
index 686aa5c0b..2722a5046 100644
--- a/jepsen.checker.perf.html
+++ b/jepsen.checker.perf.html
@@ -1,46 +1,46 @@
-jepsen.checker.perf documentation

    jepsen.checker.perf

    Supporting functions for performance analysis.

    +jepsen.checker.perf documentation

    jepsen.checker.perf

    Supporting functions for performance analysis.

    broaden-range

    (broaden-range [a b])

    Given a lower upper range for a plot, returns lower’ upper’, which covers the original range, but slightly expanded, to fall nicely on integral boundaries.

    -

    bucket-points

    (bucket-points dt points)

    Takes a time window dt and a sequence of time, _ points, and emits a seq of time, points-in-window buckets, ordered by time. Time is at the midpoint of the window.

    -

    bucket-scale

    (bucket-scale dt b)

    Given a bucket size dt, and a bucket number (e.g. 0, 1, …), returns the time at the midpoint of that bucket.

    -

    bucket-time

    (bucket-time dt t)

    Given a bucket size dt and a time t, computes the time at the midpoint of the bucket this time falls into.

    -

    buckets

    (buckets dt)(buckets dt tmax)

    Given a bucket size dt, emits a lazy sequence of times at the midpoints of each bucket.

    -

    completions-by-f-type

    (completions-by-f-type history)

Takes a history and returns a map of f -> type -> ops, for all completions in history.

    -

    default-nemesis-color

    first-time

    (first-time history)

    Takes a history and returns the first :time in it, in seconds, as a double.

    -

    fs->points

    (fs->points fs)

    Given a sequence of :f’s, yields a map of f -> gnuplot-point-type, so we can render each function in a different style.

    -

    has-data?

    (has-data? plot)

    Takes a plot and returns true iff it has at least one series with data points.

    -

    interval->times

    (interval->times [a b])

    Given an interval of two operations a b, returns the times time-a time-b covering the interval. If b is missing, yields time-a nil.

    -

    invokes-by-f

    (invokes-by-f)(invokes-by-f history)

Takes a history and returns a map of f -> ops, for all invocations. Either a tesser fold, or runs on a history.

    -

    invokes-by-f-type

    (invokes-by-f-type)(invokes-by-f-type history)

    A fold which returns a map of f -> type -> ops, for all invocations.

    -

    invokes-by-type

    (invokes-by-type)(invokes-by-type history)

    Splits up a sequence of invocations into ok, failed, and crashed ops by looking at their corresponding completions. Either a tesser fold, or runs on a history.

    -

    latencies->quantiles

    (latencies->quantiles dt qs points)

Takes a time window in seconds, a sequence of quantiles from 0 to 1, and a sequence of time, latency pairs. Groups pairs by their time window and emits a map of quantiles to sequences of time, latency-at-that-quantile pairs, one per time window.

    bucket-points

    (bucket-points dt points)

    Takes a time window dt and a sequence of time, _ points, and emits a seq of time, points-in-window buckets, ordered by time. Time is at the midpoint of the window.

    +

    bucket-scale

    (bucket-scale dt b)

    Given a bucket size dt, and a bucket number (e.g. 0, 1, …), returns the time at the midpoint of that bucket.

    +

    bucket-time

    (bucket-time dt t)

    Given a bucket size dt and a time t, computes the time at the midpoint of the bucket this time falls into.

    +

    buckets

    (buckets dt)(buckets dt tmax)

    Given a bucket size dt, emits a lazy sequence of times at the midpoints of each bucket.

    +

    completions-by-f-type

    (completions-by-f-type history)

Takes a history and returns a map of f -> type -> ops, for all completions in history.

    +

    default-nemesis-color

    first-time

    (first-time history)

    Takes a history and returns the first :time in it, in seconds, as a double.

    +

    fs->points

    (fs->points fs)

    Given a sequence of :f’s, yields a map of f -> gnuplot-point-type, so we can render each function in a different style.

    +

    has-data?

    (has-data? plot)

    Takes a plot and returns true iff it has at least one series with data points.

    +

    interval->times

    (interval->times [a b])

    Given an interval of two operations a b, returns the times time-a time-b covering the interval. If b is missing, yields time-a nil.

    +

    invokes-by-f

    (invokes-by-f)(invokes-by-f history)

Takes a history and returns a map of f -> ops, for all invocations. Either a tesser fold, or runs on a history.

    +

    invokes-by-f-type

    (invokes-by-f-type)(invokes-by-f-type history)

    A fold which returns a map of f -> type -> ops, for all invocations.

    +

    invokes-by-type

    (invokes-by-type)(invokes-by-type history)

    Splits up a sequence of invocations into ok, failed, and crashed ops by looking at their corresponding completions. Either a tesser fold, or runs on a history.

    +

    latencies->quantiles

    (latencies->quantiles dt qs points)

Takes a time window in seconds, a sequence of quantiles from 0 to 1, and a sequence of time, latency pairs. Groups pairs by their time window and emits a map of quantiles to sequences of time, latency-at-that-quantile pairs, one per time window.

    -

    latency-point

    (latency-point op)

    Given an operation, returns a time, latency pair: times in seconds, latencies in ms.

    -

    latency-preamble

    (latency-preamble test output-path)

    Gnuplot commands for setting up a latency plot.

    -

    legend-part

    (legend-part series)

    Takes a series map and returns the list of gnuplot commands to render that series.

    -

    nemesis-activity

    (nemesis-activity nemeses history)

    Given a nemesis specification and a history, partitions the set of nemesis operations in the history into different nemeses, as per the spec. Returns the spec, restricted to just those non-hidden nemeses taking part in this history, and with each spec augmented with two keys:

    +

    latency-point

    (latency-point op)

    Given an operation, returns a time, latency pair: times in seconds, latencies in ms.

    +

    latency-preamble

    (latency-preamble test output-path)

    Gnuplot commands for setting up a latency plot.

    +

    legend-part

    (legend-part series)

    Takes a series map and returns the list of gnuplot commands to render that series.

    +

    nemesis-activity

    (nemesis-activity nemeses history)

    Given a nemesis specification and a history, partitions the set of nemesis operations in the history into different nemeses, as per the spec. Returns the spec, restricted to just those non-hidden nemeses taking part in this history, and with each spec augmented with two keys:

    :ops All operations the nemeses performed :intervals A set of start end paired ops.

    -

    nemesis-alpha

    nemesis-lines

    (nemesis-lines plot nemeses)

    Given nemesis activity, emits a sequence of gnuplot commands rendering vertical lines where nemesis events occurred.

    -

    nemesis-ops

    (nemesis-ops nemeses history)

    Given a history and a nemeses specification, partitions the set of nemesis operations in the history into different nemeses, as per the spec. Returns the nemesis spec, restricted to just those nemeses taking part in this history, and with each spec augmented with an :ops key, which contains all operations that nemesis performed. Skips :hidden? nemeses.

    -

    nemesis-regions

    (nemesis-regions plot nemeses)

    Given nemesis activity, emits a sequence of gnuplot commands rendering shaded regions where each nemesis was active. We can render a maximum of 12 nemeses; this keeps size and spacing consistent.

    -

    nemesis-series

    (nemesis-series plot nemeses)

    Given nemesis activity, constructs the series required to show every present nemesis’ activity in the legend. We do this by constructing dummy data, and a key that will match the way that nemesis’s activity is rendered.

    -

    plot!

    (plot! opts)

    Renders a gnuplot plot. Takes an option map:

    +

    nemesis-alpha

    nemesis-lines

    (nemesis-lines plot nemeses)

    Given nemesis activity, emits a sequence of gnuplot commands rendering vertical lines where nemesis events occurred.

    +

    nemesis-ops

    (nemesis-ops nemeses history)

    Given a history and a nemeses specification, partitions the set of nemesis operations in the history into different nemeses, as per the spec. Returns the nemesis spec, restricted to just those nemeses taking part in this history, and with each spec augmented with an :ops key, which contains all operations that nemesis performed. Skips :hidden? nemeses.

    +

    nemesis-regions

    (nemesis-regions plot nemeses)

    Given nemesis activity, emits a sequence of gnuplot commands rendering shaded regions where each nemesis was active. We can render a maximum of 12 nemeses; this keeps size and spacing consistent.

    +

    nemesis-series

    (nemesis-series plot nemeses)

    Given nemesis activity, constructs the series required to show every present nemesis’ activity in the legend. We do this by constructing dummy data, and a key that will match the way that nemesis’s activity is rendered.

    +

    plot!

    (plot! opts)

    Renders a gnuplot plot. Takes an option map:

    :preamble Gnuplot commands to send first :series A vector of series maps :draw-fewer-on-top? If passed, renders series with fewer points on top :xrange A pair xmin xmax which controls the xrange :yrange Ditto, for the y axis :logscale e.g. :y

    A series map is a map with:

:data A sequence of data points to render, e.g. [0 0 1 2 2 4] :with How to draw this series, e.g. ’points :linetype What kind of line to use :pointtype What kind of point to use :title A string, or nil, to label this series map
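
A hedged sketch of such an option map; the data points and styling are made up, and :preamble (normally built by helpers such as rate-preamble or latency-preamble) is omitted:

  (plot! {:xrange [0 60]
          :yrange [0 100]
          :series [{:data      [[0 10] [30 12] [60 9]]   ; [time value] points
                    :with      'points
                    :pointtype 2
                    :title     "ok ops"}]})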

    -

    point-graph!

    (point-graph! test history {:keys [subdirectory nemeses], :as opts})

    Writes a plot of raw latency data points.

    -

    preamble

    (preamble output-path)

    Shared gnuplot preamble

    -

    qs->colors

    (qs->colors qs)

    Given a sequence of quantiles q, yields a map of q -> gnuplot-color, so we can render each latency curve in a different color.

    -

    quantiles

    (quantiles qs points)

    Takes a sequence of quantiles from 0 to 1 and a sequence of values, and returns a map of quantiles to values at those quantiles.

    -

    quantiles-graph!

    (quantiles-graph! test history {:keys [subdirectory nemeses]})

    Writes a plot of latency quantiles, by f, over time.

    -

    rate

    (rate history)

    Map breaking down the mean rate of completions by f and type, plus totals at each level.

    -

    rate-graph!

    (rate-graph! test history {:keys [subdirectory nemeses]})

    Writes a plot of operation rate by their completion times.

    -

    rate-preamble

    (rate-preamble test output-path)

    Gnuplot commands for setting up a rate plot.

    -

    type->color

    Takes a type of operation (e.g. :ok) and returns a gnuplot color.

    -

    types

    What types are we rendering?

    -

    with-nemeses

    (with-nemeses plot history nemeses)

    Augments a plot map to render nemesis activity. Takes a nemesis specification: a collection of nemesis spec maps, each of which has keys:

    +

    point-graph!

    (point-graph! test history {:keys [subdirectory nemeses], :as opts})

    Writes a plot of raw latency data points.

    +

    preamble

    (preamble output-path)

    Shared gnuplot preamble

    +

    qs->colors

    (qs->colors qs)

    Given a sequence of quantiles q, yields a map of q -> gnuplot-color, so we can render each latency curve in a different color.

    +

    quantiles

    (quantiles qs points)

    Takes a sequence of quantiles from 0 to 1 and a sequence of values, and returns a map of quantiles to values at those quantiles.

    +

    quantiles-graph!

    (quantiles-graph! test history {:keys [subdirectory nemeses]})

    Writes a plot of latency quantiles, by f, over time.

    +

    rate

    (rate history)

    Map breaking down the mean rate of completions by f and type, plus totals at each level.

    +

    rate-graph!

    (rate-graph! test history {:keys [subdirectory nemeses]})

    Writes a plot of operation rate by their completion times.

    +

    rate-preamble

    (rate-preamble test output-path)

    Gnuplot commands for setting up a rate plot.

    +

    type->color

    Takes a type of operation (e.g. :ok) and returns a gnuplot color.

    +

    types

    What types are we rendering?

    +

    with-nemeses

    (with-nemeses plot history nemeses)

    Augments a plot map to render nemesis activity. Takes a nemesis specification: a collection of nemesis spec maps, each of which has keys:

    :name A string uniquely naming this nemesis :color What color to use for drawing this nemesis (e.g. “#abcd01”) :start A set of :f’s which begin this nemesis’ activity :stop A set of :f’s which end this nemesis’ activity :fs A set of :f’s otherwise related to this nemesis :hidden? Skips rendering this nemesis.
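
A sketch of one such nemesis spec map; the :f names and color are hypothetical:

  [{:name  "partition"
    :color "#886600"
    :start #{:start-partition}
    :stop  #{:stop-partition}
    :fs    #{}}]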

    -

    with-range

    (with-range plot)

    Takes a plot object. Where xrange or yrange are not provided, fills them in by iterating over each series :data.

    -

    without-empty-series

    (without-empty-series plot)

    Takes a plot, and strips out empty series objects.

    -
    \ No newline at end of file +

    with-range

    (with-range plot)

    Takes a plot object. Where xrange or yrange are not provided, fills them in by iterating over each series :data.

    +

    without-empty-series

    (without-empty-series plot)

    Takes a plot, and strips out empty series objects.

    +
\ No newline at end of file
diff --git a/jepsen.checker.timeline.html b/jepsen.checker.timeline.html
index 2b9e32fe0..608edd864 100644
--- a/jepsen.checker.timeline.html
+++ b/jepsen.checker.timeline.html
@@ -1,18 +1,18 @@
-jepsen.checker.timeline documentation

    jepsen.checker.timeline

    Renders an HTML timeline of a history.

    -

    body

    (body op start stop)

    breadcrumbs

    (breadcrumbs test history-key)

    Renders a series of back links increasing in depth

    -

    col-width

    pixels

    -

    gutter-width

    pixels

    -

    height

    pixels

    -

    hiccup

    (hiccup test history opts)

    Renders the Hiccup structure for a history.

    -

    html

    (html)

    linkify-time

    (linkify-time t)

    Remove - and : chars from a time string

    -

    nemesis?

    (nemesis? op)

    op-limit

    Maximum number of operations to render. Helps make timeline usable on massive histories.

    -

    pair->div

    (pair->div history test process-index [start stop])

    Turns a pair of start/stop operations into a div.

    -

    pairs

    (pairs history)(pairs invocations [op & ops])

    Pairs up ops from each process in a history. Yields a lazy sequence of info or invoke, ok|fail|info pairs.

    -

    process-index

    (process-index history)

    Maps processes to columns

    -

    render-duration

    (render-duration start stop)

    render-error

    (render-error op)

    render-msg

    (render-msg op)

    render-op

    (render-op op)

    render-op-extra-keys

    (render-op-extra-keys op)

    Helper for render-op which renders keys we didn’t explicitly print

    -

    render-wall-time

    (render-wall-time test op)

    style

    (style m)

    Generate a CSS style fragment from a map.

    -

    stylesheet

    sub-index

    (sub-index history)

    Attaches a :sub-index key to each element of this timeline’s subhistory, identifying its relative position.

    -

    timescale

    Nanoseconds per pixel

    -

    title

    (title test op start stop)
\ No newline at end of file
+jepsen.checker.timeline documentation

    jepsen.checker.timeline

    Renders an HTML timeline of a history.

    +

    body

    (body op start stop)

    breadcrumbs

    (breadcrumbs test history-key)

    Renders a series of back links increasing in depth

    +

    col-width

    pixels

    +

    gutter-width

    pixels

    +

    height

    pixels

    +

    hiccup

    (hiccup test history opts)

    Renders the Hiccup structure for a history.

    +

    html

    (html)

    linkify-time

    (linkify-time t)

    Remove - and : chars from a time string

    +

    nemesis?

    (nemesis? op)

    op-limit

    Maximum number of operations to render. Helps make timeline usable on massive histories.

    +

    pair->div

    (pair->div history test process-index [start stop])

    Turns a pair of start/stop operations into a div.

    +

    pairs

    (pairs history)(pairs invocations [op & ops])

    Pairs up ops from each process in a history. Yields a lazy sequence of info or invoke, ok|fail|info pairs.

    +

    process-index

    (process-index history)

    Maps processes to columns

    +

    render-duration

    (render-duration start stop)

    render-error

    (render-error op)

    render-msg

    (render-msg op)

    render-op

    (render-op op)

    render-op-extra-keys

    (render-op-extra-keys op)

    Helper for render-op which renders keys we didn’t explicitly print

    +

    render-wall-time

    (render-wall-time test op)

    style

    (style m)

    Generate a CSS style fragment from a map.

    +

    stylesheet

    sub-index

    (sub-index history)

    Attaches a :sub-index key to each element of this timeline’s subhistory, identifying its relative position.

    +

    timescale

    Nanoseconds per pixel

    +

    title

    (title test op start stop)
\ No newline at end of file
diff --git a/jepsen.cli.html b/jepsen.cli.html
index a289e2b79..3f8d73930 100644
--- a/jepsen.cli.html
+++ b/jepsen.cli.html
@@ -1,36 +1,36 @@
-jepsen.cli documentation

    jepsen.cli

    Command line interface. Provides a default main method for common Jepsen functions (like the web interface), and utility functions for Jepsen tests to create their own test runners.

    -

    -main

    (-main & args)

    default-nodes

    help-opt

    merge-opt-specs

    (merge-opt-specs a b)

    Takes two option specifications and merges them together. Where both offer the same option name, prefers the latter.

    -

    one-of

    (one-of coll)

    Takes a collection and returns a string like “Must be one of …” and a list of names. For maps, uses keys.

    -

    package-opt

    (package-opt default)(package-opt option-name default)

    parse-concurrency

    (parse-concurrency parsed)(parse-concurrency parsed k)

Takes a parsed map. Parses :concurrency; if it is a string ending with n, e.g. 3n, sets it to 3 * the number of :nodes. Otherwise, parses as a plain integer. With an optional keyword k, parses that key in the parsed map–by default, the key is :concurrency.

    -

    parse-nodes

    (parse-nodes parsed)

    Takes a parsed map and merges all the various node specifications together. In particular:

    +jepsen.cli documentation

    jepsen.cli

    Command line interface. Provides a default main method for common Jepsen functions (like the web interface), and utility functions for Jepsen tests to create their own test runners.

    +

    -main

    (-main & args)

    default-nodes

    help-opt

    merge-opt-specs

    (merge-opt-specs a b)

    Takes two option specifications and merges them together. Where both offer the same option name, prefers the latter.

    +

    one-of

    (one-of coll)

    Takes a collection and returns a string like “Must be one of …” and a list of names. For maps, uses keys.

    +

    package-opt

    (package-opt default)(package-opt option-name default)

    parse-concurrency

    (parse-concurrency parsed)(parse-concurrency parsed k)

Takes a parsed map. Parses :concurrency; if it is a string ending with n, e.g. 3n, sets it to 3 * the number of :nodes. Otherwise, parses as a plain integer. With an optional keyword k, parses that key in the parsed map–by default, the key is :concurrency.

    +

    parse-nodes

    (parse-nodes parsed)

    Takes a parsed map and merges all the various node specifications together. In particular:

    • If :nodes-file and :nodes are blank, and :node is the default node list, uses the default node list.
    • Otherwise, merges together :nodes-file, :nodes, and :node into a single list.

    The new parsed map will have a merged nodes list in :nodes, and lose :nodes-file and :node options.

    -

    rename-keys

    (rename-keys m replacements)

    Given a map m, and a map of keys to replacement keys, yields m with keys renamed.

    -

    rename-options

    (rename-options parsed replacements)

    Like rename-keys, but takes a parsed map and updates keys in :options.

    -

    rename-ssh-options

    (rename-ssh-options parsed)

    Takes a parsed map and moves SSH options to a map under :ssh.

    -

    repeated-opt

    (repeated-opt short-opt long-opt docstring default)(repeated-opt short-opt long-opt docstring default parse-map)

    Helper for vector options where we want to replace the default vector (checking via identical?) if any options are passed, building a vector for multiple args. If parse-map is provided (a map of string cmdline options to parsed values), the special word “all” can be used to specify every value in the map.

    -

    run!

    (run! subcommands [command & arguments :as argv])

    Parses arguments and runs tests, etc. Takes a map of subcommand names to subcommand-specs, and a list of arguments. Each subcommand-spec is a map with the following keys:

    +

    rename-keys

    (rename-keys m replacements)

    Given a map m, and a map of keys to replacement keys, yields m with keys renamed.

    +

    rename-options

    (rename-options parsed replacements)

    Like rename-keys, but takes a parsed map and updates keys in :options.

    +

    rename-ssh-options

    (rename-ssh-options parsed)

    Takes a parsed map and moves SSH options to a map under :ssh.

    +

    repeated-opt

    (repeated-opt short-opt long-opt docstring default)(repeated-opt short-opt long-opt docstring default parse-map)

    Helper for vector options where we want to replace the default vector (checking via identical?) if any options are passed, building a vector for multiple args. If parse-map is provided (a map of string cmdline options to parsed values), the special word “all” can be used to specify every value in the map.

    +

    run!

    (run! subcommands [command & arguments :as argv])

    Parses arguments and runs tests, etc. Takes a map of subcommand names to subcommand-specs, and a list of arguments. Each subcommand-spec is a map with the following keys:

    :opt-spec - The option parsing spec to use. :opt-fn - A function to transform the tools.cli options map, e.g. {:options …, :arguments …, :summary …}. Default: identity :usage - A usage string (default: “Usage:”) :run - Function to execute with the transformed options (default: pprint)

    If an unrecognized (or no command) is given, prints out a general usage guide and exits.

    For a subcommand, if help or –help is given, prints out a help string with usage for the given subcommand and exits with status 0.

    If invalid arguments are given, prints those errors to the console, and exits with status 254.

    Finally, if everything looks good, calls the given subcommand’s run function with parsed options, and exits with status 0.

    Catches exceptions, logs them to the console, and exits with status 255.

    -

    serve-cmd

    (serve-cmd)

    A web server command.

    -

    single-test-cmd

    (single-test-cmd opts)

    A command which runs a single test with standard built-ins. Options:

    +

    serve-cmd

    (serve-cmd)

    A web server command.

    +

    single-test-cmd

    (single-test-cmd opts)

    A command which runs a single test with standard built-ins. Options:

    {:opt-spec A vector of additional options for tools.cli. Merge into test-opt-spec. Optional. :opt-fn A function which transforms parsed options. Composed after test-opt-fn. Optional. :opt-fn* Replaces test-opt-fn, in case you want to override it altogether. :tarball If present, adds a –tarball option to this command, defaulting to whatever URL is given here. :usage Defaults to jc/test-usage. Optional. :test-fn A function that receives the option map and constructs a test.}

    This comes with two commands: test, which runs a test and analyzes it, and analyze, which constructs a test map using the same arguments as run, but analyzes a history from disk instead.
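
A typical entry point built from these pieces might look like the following sketch; my-test here simply wraps jepsen.tests/noop-test, whereas a real test would construct its own test map:

  (ns my.test.runner
    (:require [jepsen.cli :as cli]
              [jepsen.tests :as tests]))

  (defn my-test
    "Builds a minimal test map from parsed CLI options."
    [opts]
    (merge tests/noop-test opts {:name "example"}))

  (defn -main
    [& args]
    (cli/run! (merge (cli/single-test-cmd {:test-fn my-test})
                     (cli/serve-cmd))
              args))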

    -

    tarball-opt

    (tarball-opt default)

    test-all-cmd

    (test-all-cmd opts)

    A command that runs a whole suite of tests in one go. Options:

    +

    tarball-opt

    (tarball-opt default)

    test-all-cmd

    (test-all-cmd opts)

    A command that runs a whole suite of tests in one go. Options:

    :opt-spec A vector of additional options for tools.cli. Appended to test-opt-spec. Optional. :opt-fn A function which transforms parsed options. Composed after test-opt-fn. Optional. :opt-fn* Replaces test-opt-fn, instead of composing with it. :usage Defaults to test-usage. Optional. :tests-fn A function that receives the transformed option map and constructs a sequence of tests to run.

    -

    test-all-exit!

    (test-all-exit! results)

    Takes a map of statuses and exits with an appropriate error code: 255 if any crashed, 2 if any were unknown, 1 if any were invalid, 0 if all passed.

    -

    test-all-print-summary!

    (test-all-print-summary! results)

    Prints a summary of test outcomes. Takes a map of statuses (e.g. :crashed, true, false, :unknown), to test files. Returns results.

    -

    test-all-run-tests!

    (test-all-run-tests! tests)

    Runs a sequence of tests and returns a map of outcomes (e.g. true, :unknown, :crashed, false) to collections of test folders with that outcome.

    -

    test-opt-fn

    (test-opt-fn parsed)

    An opt fn for running simple tests. Remaps ssh keys, remaps :node to :nodes, reads :nodes-file into :nodes, and parses :concurrency.

    -

    test-opt-spec

    Command line options for testing.

    -

    test-usage

    (test-usage)

    validate-tarball

    (validate-tarball parsed)

    Takes a parsed map and ensures a tarball is present.

    -
    \ No newline at end of file +

    test-all-exit!

    (test-all-exit! results)

    Takes a map of statuses and exits with an appropriate error code: 255 if any crashed, 2 if any were unknown, 1 if any were invalid, 0 if all passed.

    +

    test-all-print-summary!

    (test-all-print-summary! results)

    Prints a summary of test outcomes. Takes a map of statuses (e.g. :crashed, true, false, :unknown), to test files. Returns results.

    +

    test-all-run-tests!

    (test-all-run-tests! tests)

    Runs a sequence of tests and returns a map of outcomes (e.g. true, :unknown, :crashed, false) to collections of test folders with that outcome.

    +

    test-opt-fn

    (test-opt-fn parsed)

    An opt fn for running simple tests. Remaps ssh keys, remaps :node to :nodes, reads :nodes-file into :nodes, and parses :concurrency.

    +

    test-opt-spec

    Command line options for testing.

    +

    test-usage

    (test-usage)

    validate-tarball

    (validate-tarball parsed)

    Takes a parsed map and ensures a tarball is present.

    +
\ No newline at end of file
diff --git a/jepsen.client.html b/jepsen.client.html
index 8fd6cdcdf..5b4953622 100644
--- a/jepsen.client.html
+++ b/jepsen.client.html
@@ -1,17 +1,17 @@
-jepsen.client documentation

    jepsen.client

    Applies operations to a database.

    +jepsen.client documentation

    jepsen.client

    Applies operations to a database.

    Client

    protocol

    members

    close!

    (close! client test)

    Close the client connection when work is completed or an invocation crashes the client. Close should not affect the logical state of the test.

    invoke!

    (invoke! client test operation)

    Apply an operation to the client, returning an operation to be appended to the history. For multi-stage operations, the client may reach into the test and conj onto the history atom directly.

    open!

    (open! client test node)

    Set up the client to work with a particular node. Returns a client which is ready to accept operations via invoke! Open should not affect the logical state of the test; it should not, for instance, modify tables or insert records.

    setup!

    (setup! client test)

    Called to set up database state for testing.

    teardown!

    (teardown! client test)

    Tear down database state when work is complete.
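
A minimal sketch of a client implementing this protocol; the record and its behavior are illustrative, and a real client would open an actual connection in open! and handle more :f types in invoke!:

  (require '[jepsen.client :as client])

  (defrecord ExampleClient [conn]
    client/Client
    (open! [this test node]
      ;; A real client would connect to `node` here.
      (assoc this :conn {:node node}))
    (setup! [this test])      ; create tables, load fixtures, etc.
    (invoke! [this test op]
      (case (:f op)
        :read (assoc op :type :ok, :value nil)
        (assoc op :type :fail, :error :unsupported-f)))
    (teardown! [this test])
    (close! [this test]))     ; release the connection held in conn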

    -

    closable?

    (closable? client)

    Returns true if the given client implements method close!.

    -

    is-reusable?

    (is-reusable? client test)

    Wrapper around reusable?; returns false when not implemented.

    -

    noop

    Does nothing.

    -

    Reusable

    protocol

    members

    reusable?

    (reusable? client test)

    If true, this client can be re-used with a fresh process after a call to invoke throws or returns an info operation. If false (or if this protocol is not implemented), crashed clients will be closed and new ones opened to replace them.

    -

    timeout

    (timeout timeout-or-fn client)

    Sometimes a client library’s own timeouts don’t work reliably. This takes either a timeout as a number of ms, or a function (f op) => timeout-in-ms, and a client. Wraps that client in a new one which automatically times out operations that take longer than the given timeout. Timed out operations have :error :jepsen.client/timeout.

    -

    validate

    (validate client)

    Wraps a client, validating that its return types are what you’d expect.

    -

    with-client

    macro

    (with-client [client-sym client-expr] & body)

Analogous to with-open. Takes a binding of the form client-sym client-expr, and a body. Binds client-sym to client-expr (presumably, client-expr opens a new client), evaluates body with client-sym bound, and ensures client is closed before returning.

    closable?

    (closable? client)

    Returns true if the given client implements method close!.

    +

    is-reusable?

    (is-reusable? client test)

    Wrapper around reusable?; returns false when not implemented.

    +

    noop

    Does nothing.

    +

    Reusable

    protocol

    members

    reusable?

    (reusable? client test)

    If true, this client can be re-used with a fresh process after a call to invoke throws or returns an info operation. If false (or if this protocol is not implemented), crashed clients will be closed and new ones opened to replace them.

    +

    timeout

    (timeout timeout-or-fn client)

    Sometimes a client library’s own timeouts don’t work reliably. This takes either a timeout as a number of ms, or a function (f op) => timeout-in-ms, and a client. Wraps that client in a new one which automatically times out operations that take longer than the given timeout. Timed out operations have :error :jepsen.client/timeout.
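
For example, reusing the ExampleClient sketch above, a client can be wrapped with a fixed or a per-operation timeout:

  (def client-10s
    (client/timeout 10000 (->ExampleClient nil)))

  (def client-per-op
    (client/timeout (fn [op] (if (= :read (:f op)) 5000 30000))
                    (->ExampleClient nil)))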

    +

    validate

    (validate client)

    Wraps a client, validating that its return types are what you’d expect.

    +

    with-client

    macro

    (with-client [client-sym client-expr] & body)

    Analogous to with-open. Takes a binding of the form client-sym client-expr, and a body. Binds client-sym to client-expr (presumably, client-expr opens a new client), evaluates body with client-sym bound, and ensures client is closed before returning.

    -
    \ No newline at end of file +
\ No newline at end of file
diff --git a/jepsen.codec.html b/jepsen.codec.html
index 2560b6c09..6a1960b84 100644
--- a/jepsen.codec.html
+++ b/jepsen.codec.html
@@ -1,6 +1,6 @@
-jepsen.codec documentation

    jepsen.codec

    Serializes and deserializes objects to/from bytes.

    +jepsen.codec documentation

    jepsen.codec

    Serializes and deserializes objects to/from bytes.

    decode

    (decode bytes)

    Deserialize bytes to an object.

    -

    encode

    (encode o)

    Serialize an object to bytes.

    -
    \ No newline at end of file +

    encode

    (encode o)

    Serialize an object to bytes.

    +
\ No newline at end of file
diff --git a/jepsen.control.clj-ssh.html b/jepsen.control.clj-ssh.html
index 255318a4d..f4bc2a743 100644
--- a/jepsen.control.clj-ssh.html
+++ b/jepsen.control.clj-ssh.html
@@ -1,9 +1,9 @@
-jepsen.control.clj-ssh documentation

    jepsen.control.clj-ssh

    A CLJ-SSH powered implementation of the Remote protocol.

    +jepsen.control.clj-ssh documentation

    jepsen.control.clj-ssh

    A CLJ-SSH powered implementation of the Remote protocol.

    clj-ssh-agent

    Acquiring an SSH agent is expensive and involves a global lock; we save the agent and re-use it to speed things up.

    -

    clj-ssh-session

    (clj-ssh-session conn-spec)

    Opens a raw session to the given connection spec

    -

    concurrency-limit

    OpenSSH has a standard limit of 10 concurrent channels per connection. However, commands run in quick succession with 10 concurrent also seem to blow out the channel limit–perhaps there’s an asynchronous channel teardown process. We set the limit a bit lower here. This is experimentally determined for clj-ssh by running jepsen.control-test’s integration test…

    -

    remote

    (remote)

    A remote that does things via clj-ssh.

    -

    with-errors

    macro

    (with-errors conn context & body)

    Takes a conn spec, a context map, and a body. Evals body, remapping clj-ssh exceptions to :type :jepsen.control/ssh-failed.

    -
    \ No newline at end of file +

    clj-ssh-session

    (clj-ssh-session conn-spec)

    Opens a raw session to the given connection spec

    +

    concurrency-limit

OpenSSH has a standard limit of 10 concurrent channels per connection. However, commands run in quick succession with 10 concurrent channels also seem to blow out the channel limit–perhaps there’s an asynchronous channel teardown process. We set the limit a bit lower here. This is experimentally determined for clj-ssh by running jepsen.control-test’s integration test…

    +

    remote

    (remote)

    A remote that does things via clj-ssh.

    +

    with-errors

    macro

    (with-errors conn context & body)

    Takes a conn spec, a context map, and a body. Evals body, remapping clj-ssh exceptions to :type :jepsen.control/ssh-failed.

    +
    \ No newline at end of file diff --git a/jepsen.control.core.html b/jepsen.control.core.html index 41ffcfc6d..514ae7ba4 100644 --- a/jepsen.control.core.html +++ b/jepsen.control.core.html @@ -1,16 +1,16 @@ -jepsen.control.core documentation

    jepsen.control.core

    Provides the base protocol for running commands on remote nodes, as well as common functions for constructing and evaluating shell commands.

    +jepsen.control.core documentation

    jepsen.control.core

    Provides the base protocol for running commands on remote nodes, as well as common functions for constructing and evaluating shell commands.

    env

    (env env)

    We often want to construct env vars for a process. This function takes a map of environment variable names (any Named type, e.g. :HOME, “HOME”) to values (which are coerced using (str value)), and constructs a Literal string, suitable for passing to exec, which binds those environment variables.

    Callers of this function (especially indirectly, as with start-stop-daemon), may wish to construct env var strings themselves. Passing a string s to this function simply returns (lit s). Passing a Literal l to this function returns l. nil is passed through unchanged.
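A sketch of the map form (the exact rendering of the returned literal is an assumption; the point is that keys become environment variable names and values are stringified):

(require '[jepsen.control.core :as ccore])

(ccore/env {:HOME "/root", :LOG_LEVEL :debug})
;; => a Literal along the lines of HOME=/root LOG_LEVEL=debug,
;;    suitable for passing to exec

(ccore/env "FOO=bar") ;; a string passes through as (lit "FOO=bar")
(ccore/env nil)       ;; nil passes through unchanged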

    -

    escape

    (escape s)

    Escapes a thing for the shell.

    +

    escape

    (escape s)

    Escapes a thing for the shell.

    Nils are empty strings.

    Literal wrappers are passed through directly.

    The special keywords :>, :>>, and :< map to their corresponding shell I/O redirection operators.

    Named things like keywords and symbols use their name, escaped. Strings are escaped like normal.

    Sequential collections and sets have each element escaped and space-separated.
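A few illustrative calls following the rules above (return values paraphrased, not exact renderings):

(require '[jepsen.control.core :as ccore])

(ccore/escape nil)        ;; => ""   (nils are empty strings)
(ccore/escape :>)         ;; => ">"  (redirection keywords pass through)
(ccore/escape :echo)      ;; => "echo" (named things use their name)
(ccore/escape [:ls "-l"]) ;; => elements escaped and joined with spaces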

    -

    lit

    (lit s)

    A literal string to be passed, unescaped, to the shell.

    -

    Remote

    protocol

    Remotes allow jepsen.control to run shell commands, upload, and download files. They use a context map, which encodes the current user, directory, etc:

    +

    lit

    (lit s)

    A literal string to be passed, unescaped, to the shell.

    +

    Remote

    protocol

    Remotes allow jepsen.control to run shell commands, upload, and download files. They use a context map, which encodes the current user, directory, etc:

:dir - The directory to execute remote commands in
:sudo - The user we want to execute a command as
:password - The user’s password, for sudo, if necessary.

    members

    connect

    (connect this conn-spec)

    Set up the remote to work with a particular node. Returns a Remote which is ready to accept actions via execute! and upload! and download!. conn-spec is a map of:

{:host :port :username :password :private-key-path :strict-host-key-checking}

    @@ -24,6 +24,6 @@

:exit The command’s exit status.
:out The stdout string.
:err The stderr string.

    upload!

    (upload! this context local-paths remote-path opts)

    Copy the specified local-path to the remote-path on the connected host.

    Opts is an option map. There are no defined options right now, but later we might introduce some for e.g. recursive uploads, compression, etc. This is also a place for Remote implementations to offer custom semantics.

    -

    throw-on-nonzero-exit

    (throw-on-nonzero-exit {:keys [exit action], :as result})

    Throws when an SSH result has nonzero exit status.

    -

    wrap-sudo

    (wrap-sudo {:keys [sudo sudo-password]} cmd)

    Takes a context map and a command action, and returns the command action, modified to wrap it in a sudo command, if necessary. Uses the context map’s :sudo and :sudo-password fields.

    -
    \ No newline at end of file +

    throw-on-nonzero-exit

    (throw-on-nonzero-exit {:keys [exit action], :as result})

    Throws when an SSH result has nonzero exit status.

    +

    wrap-sudo

    (wrap-sudo {:keys [sudo sudo-password]} cmd)

    Takes a context map and a command action, and returns the command action, modified to wrap it in a sudo command, if necessary. Uses the context map’s :sudo and :sudo-password fields.

    +
    \ No newline at end of file diff --git a/jepsen.control.docker.html b/jepsen.control.docker.html index d473fd713..d02acc446 100644 --- a/jepsen.control.docker.html +++ b/jepsen.control.docker.html @@ -1,10 +1,10 @@ -jepsen.control.docker documentation

    jepsen.control.docker

The recommended way is to use SSH to set up and tear down databases. It’s however sometimes convenient to be able to set up and tear down the databases using docker exec and docker cp instead, which is what this namespace helps you do.

    +jepsen.control.docker documentation

    jepsen.control.docker

The recommended way is to use SSH to set up and tear down databases. It’s however sometimes convenient to be able to set up and tear down the databases using docker exec and docker cp instead, which is what this namespace helps you do.

Use at your own risk; this is an unsupported way of running Jepsen.

    cp-from

    (cp-from container-id remote-paths local-path)

    Copies files from a container filesystem to the host.

    -

    cp-to

    (cp-to container-id local-paths remote-path)

    Copies files from the host to a container filesystem.

    -

    docker

    A remote that does things via docker exec and docker cp.

    -

    exec

    (exec container-id {:keys [cmd], :as opts})

    Execute a shell command on a docker container.

    -

    resolve-container-id

    (resolve-container-id host)

    Takes a host, e.g. localhost:30404, and resolves the Docker container id exposing that port. Due to a bug in Docker (https://github.com/moby/moby/pull/40442) this is more difficult than it should be.

    -
    \ No newline at end of file +

    cp-to

    (cp-to container-id local-paths remote-path)

    Copies files from the host to a container filesystem.

    +

    docker

    A remote that does things via docker exec and docker cp.

    +

    exec

    (exec container-id {:keys [cmd], :as opts})

    Execute a shell command on a docker container.

    +

    resolve-container-id

    (resolve-container-id host)

    Takes a host, e.g. localhost:30404, and resolves the Docker container id exposing that port. Due to a bug in Docker (https://github.com/moby/moby/pull/40442) this is more difficult than it should be.

    +
    \ No newline at end of file diff --git a/jepsen.control.html b/jepsen.control.html index d48fcfa49..a94fea23b 100644 --- a/jepsen.control.html +++ b/jepsen.control.html @@ -1,62 +1,62 @@ -jepsen.control documentation

    jepsen.control

    Provides control over a remote node. There’s a lot of dynamically bound state in this namespace because we want to make it as simple as possible for scripts to open connections to various nodes.

    +jepsen.control documentation

    jepsen.control

    Provides control over a remote node. There’s a lot of dynamically bound state in this namespace because we want to make it as simple as possible for scripts to open connections to various nodes.

    Note that a whole bunch of this namespace refers to things as ‘ssh’, although they really can apply to any remote, not just SSH.

    &&

    A literal &&

    -

    *dir*

    dynamic

    Working directory

    -

    *dummy*

    dynamic

    When true, don’t actually use SSH

    -

    *host*

    dynamic

    Current hostname

    -

    *password*

    dynamic

    Password (for login)

    -

    *port*

    dynamic

    SSH listening port

    -

    *private-key-path*

    dynamic

    SSH identity file

    -

    *remote*

    dynamic

    The remote to use for remote control actions

    -

    *retries*

    dynamic

    How many times to retry conns

    -

    *session*

    dynamic

    Current control session wrapper

    -

    *strict-host-key-checking*

    dynamic

    Verify SSH host keys

    -

    *sudo*

    dynamic

    User to sudo to

    -

    *sudo-password*

    dynamic

    Password for sudo, if needed

    -

    *trace*

    dynamic

    Shall we trace commands?

    -

    *username*

    dynamic

    Username

    -

    cd

    macro

    (cd dir & body)

    Evaluates forms in the given directory.

    -

    clj-ssh

    The clj-ssh SSH remote. This used to be the default.

    -

    cmd-context

    (cmd-context)

    Constructs a context map for a command’s execution from dynamically bound vars.

    -

    conn-spec

    (conn-spec)

jepsen.control originally stored everything–host, port, etc.–in separate dynamic variables. Now, we store these things in a conn-spec map, which can be passed to remotes without creating cyclic dependencies. This function exists to support the transition from those variables to a conn-spec, and constructs a conn spec from current var bindings.

    -

    debug-data

    (debug-data)

    Construct a map of SSH data for debugging purposes.

    -

    disconnect

    (disconnect remote)

    Close a Remote session.

    -

    download

    (download remote-paths local-path)

    Copies remote paths to local node.

    -

    env

    (env env)

    We often want to construct env vars for a process. This function takes a map of environment variable names (any Named type, e.g. :HOME, “HOME”) to values (which are coerced using (str value)), and constructs a Literal string, suitable for passing to exec, which binds those environment variables.

    +

    *dir*

    dynamic

    Working directory

    +

    *dummy*

    dynamic

    When true, don’t actually use SSH

    +

    *host*

    dynamic

    Current hostname

    +

    *password*

    dynamic

    Password (for login)

    +

    *port*

    dynamic

    SSH listening port

    +

    *private-key-path*

    dynamic

    SSH identity file

    +

    *remote*

    dynamic

    The remote to use for remote control actions

    +

    *retries*

    dynamic

    How many times to retry conns

    +

    *session*

    dynamic

    Current control session wrapper

    +

    *strict-host-key-checking*

    dynamic

    Verify SSH host keys

    +

    *sudo*

    dynamic

    User to sudo to

    +

    *sudo-password*

    dynamic

    Password for sudo, if needed

    +

    *trace*

    dynamic

    Shall we trace commands?

    +

    *username*

    dynamic

    Username

    +

    cd

    macro

    (cd dir & body)

    Evaluates forms in the given directory.

    +

    clj-ssh

    The clj-ssh SSH remote. This used to be the default.

    +

    cmd-context

    (cmd-context)

    Constructs a context map for a command’s execution from dynamically bound vars.

    +

    conn-spec

    (conn-spec)

jepsen.control originally stored everything–host, port, etc.–in separate dynamic variables. Now, we store these things in a conn-spec map, which can be passed to remotes without creating cyclic dependencies. This function exists to support the transition from those variables to a conn-spec, and constructs a conn spec from current var bindings.

    +

    debug-data

    (debug-data)

    Construct a map of SSH data for debugging purposes.

    +

    disconnect

    (disconnect remote)

    Close a Remote session.

    +

    download

    (download remote-paths local-path)

    Copies remote paths to local node.

    +

    env

    (env env)

    We often want to construct env vars for a process. This function takes a map of environment variable names (any Named type, e.g. :HOME, “HOME”) to values (which are coerced using (str value)), and constructs a Literal string, suitable for passing to exec, which binds those environment variables.

    Callers of this function (especially indirectly, as with start-stop-daemon), may wish to construct env var strings themselves. Passing a string s to this function simply returns (lit s). Passing a Literal l to this function returns l. nil is passed through unchanged.

    -

    escape

    (escape s)

    Escapes a thing for the shell.

    +

    escape

    (escape s)

    Escapes a thing for the shell.

    Nils are empty strings.

    Literal wrappers are passed through directly.

    The special keywords :>, :>>, and :< map to their corresponding shell I/O redirection operators.

    Named things like keywords and symbols use their name, escaped. Strings are escaped like normal.

    Sequential collections and sets have each element escaped and space-separated.

    -

    exec

    (exec & commands)

    Takes a shell command and arguments, runs the command, and returns stdout, throwing if an error occurs. Escapes all arguments.

    -

    exec*

    (exec* & commands)

    Like exec, but does not escape.

    -

    expand-path

    (expand-path path)

    Expands path relative to the current directory.

    -

    file->path

    (file->path x)

Takes an object; if it’s an instance of java.io.File, returns its path; otherwise returns the object unchanged.

    -

    just-stdout

    (just-stdout result)

    Returns the stdout from an ssh result, trimming any newlines at the end.

    -

    lit

    (lit s)

    A literal string to be passed, unescaped, to the shell.

    -

    on

    macro

    (on host & body)

Opens a session to the given host, evaluates body there, and closes the session when body completes.

    -

    on-many

    macro

    (on-many hosts & body)

    Takes a list of hosts, executes body on each host in parallel, and returns a map of hosts to return values.

    -

    on-nodes

    (on-nodes test f)(on-nodes test nodes f)

    Given a test, evaluates (f test node) in parallel on each node, with that node’s SSH connection bound. If nodes is provided, evaluates only on those nodes in particular.

    -

    session

    (session host)

    Returns a Remote bound to the given host.

    -

    ssh

    The default (SSHJ-backed) remote.

    -

    ssh*

    (ssh* action)

    Evaluates an SSH action against the current host. Retries packet corrupt errors.

    -

    su

    macro

    (su & body)

    sudo root …

    -

    sudo

    macro

    (sudo user & body)

    Evaluates forms with a particular user.

    -

    throw-on-nonzero-exit

    (throw-on-nonzero-exit {:keys [exit action], :as result})

    Throws when an SSH result has nonzero exit status.

    -

    trace

    macro

    (trace & body)

    Evaluates forms with command tracing enabled.

    -

    upload

    (upload local-paths remote-path)

    Copies local path(s) to remote node and returns the remote path.

    -

    upload-resource!

    (upload-resource! resource-name remote-path)

    Uploads a local JVM resource (as a string) to the given remote path.

    -

    with-remote

    macro

    (with-remote remote & body)

    Takes a remote and evaluates body with that remote in that scope.

    -

    with-session

    macro

    (with-session host session & body)

    Binds a host and session and evaluates body. Does not open or close session; this is just for the namespace dynamic state.

    -

    with-ssh

    macro

    (with-ssh ssh & body)

    Takes a map of SSH configuration and evaluates body in that scope. Catches JSchExceptions and re-throws with all available debugging context. Options:

    +

    exec

    (exec & commands)

    Takes a shell command and arguments, runs the command, and returns stdout, throwing if an error occurs. Escapes all arguments.
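A sketch of typical usage inside an open session; keywords and strings are escaped, and lit and the :> redirection keyword behave as described elsewhere in this namespace:

(require '[jepsen.control :as c])

;; Run a command and capture stdout
(c/exec :uname :-a)

;; Keywords, strings, and redirection compose; this writes a remote file
(c/exec :echo "hello from jepsen" :> "/tmp/greeting")

;; Use lit to pass something through unescaped
(c/exec :bash :-c (c/lit "echo $HOME"))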

    +

    exec*

    (exec* & commands)

    Like exec, but does not escape.

    +

    expand-path

    (expand-path path)

    Expands path relative to the current directory.

    +

    file->path

    (file->path x)

Takes an object; if it’s an instance of java.io.File, returns its path; otherwise returns the object unchanged.

    +

    just-stdout

    (just-stdout result)

    Returns the stdout from an ssh result, trimming any newlines at the end.

    +

    lit

    (lit s)

    A literal string to be passed, unescaped, to the shell.

    +

    on

    macro

    (on host & body)

Opens a session to the given host, evaluates body there, and closes the session when body completes.

    +

    on-many

    macro

    (on-many hosts & body)

    Takes a list of hosts, executes body on each host in parallel, and returns a map of hosts to return values.

    +

    on-nodes

    (on-nodes test f)(on-nodes test nodes f)

    Given a test, evaluates (f test node) in parallel on each node, with that node’s SSH connection bound. If nodes is provided, evaluates only on those nodes in particular.
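For example (a sketch; test is a running Jepsen test map):

(c/on-nodes test
            (fn [test node]
              (c/exec :hostname)))
;; => a map of node names to each node's hostname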

    +

    session

    (session host)

    Returns a Remote bound to the given host.

    +

    ssh

    The default (SSHJ-backed) remote.

    +

    ssh*

    (ssh* action)

    Evaluates an SSH action against the current host. Retries packet corrupt errors.

    +

    su

    macro

    (su & body)

    sudo root …

    +

    sudo

    macro

    (sudo user & body)

    Evaluates forms with a particular user.

    +

    throw-on-nonzero-exit

    (throw-on-nonzero-exit {:keys [exit action], :as result})

    Throws when an SSH result has nonzero exit status.

    +

    trace

    macro

    (trace & body)

    Evaluates forms with command tracing enabled.

    +

    upload

    (upload local-paths remote-path)

    Copies local path(s) to remote node and returns the remote path.

    +

    upload-resource!

    (upload-resource! resource-name remote-path)

    Uploads a local JVM resource (as a string) to the given remote path.

    +

    with-remote

    macro

    (with-remote remote & body)

    Takes a remote and evaluates body with that remote in that scope.

    +

    with-session

    macro

    (with-session host session & body)

    Binds a host and session and evaluates body. Does not open or close session; this is just for the namespace dynamic state.

    +

    with-ssh

    macro

    (with-ssh ssh & body)

    Takes a map of SSH configuration and evaluates body in that scope. Catches JSchExceptions and re-throws with all available debugging context. Options:

    :dummy? :username :password :sudo-password :private-key-path :strict-host-key-checking
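A sketch of binding SSH settings around a block of remote work; the username, key path, and values here are placeholders:

(c/with-ssh {:username         "admin"
             :private-key-path "/home/admin/.ssh/id_rsa"}
  (c/on "n1"
    (c/su
      (c/exec :apt-get :install :-y :ntpdate))))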

    -

    with-test-nodes

    macro

    (with-test-nodes test & body)

    Given a test, evaluates body in parallel on each node, with that node’s SSH connection bound.

    -

    wrap-cd

    (wrap-cd cmd)

    Wraps command by changing to the current bound directory first.

    -

    wrap-sudo

    (wrap-sudo cmd)

    Wraps command in a sudo subshell.

    -

    wrap-trace

    (wrap-trace arg)

    Logs argument to console when tracing is enabled.

    -

    |

    A literal pipe character.

    -
    \ No newline at end of file +

    with-test-nodes

    macro

    (with-test-nodes test & body)

    Given a test, evaluates body in parallel on each node, with that node’s SSH connection bound.

    +

    wrap-cd

    (wrap-cd cmd)

    Wraps command by changing to the current bound directory first.

    +

    wrap-sudo

    (wrap-sudo cmd)

    Wraps command in a sudo subshell.

    +

    wrap-trace

    (wrap-trace arg)

    Logs argument to console when tracing is enabled.

    +

    |

    A literal pipe character.

    +
    \ No newline at end of file diff --git a/jepsen.control.k8s.html b/jepsen.control.k8s.html index 842b7e327..1f4144a3f 100644 --- a/jepsen.control.k8s.html +++ b/jepsen.control.k8s.html @@ -1,9 +1,9 @@ -jepsen.control.k8s documentation

    jepsen.control.k8s

The recommended way is to use SSH to set up and tear down databases. It’s however sometimes convenient to be able to set up and tear down the databases using kubectl instead, which is what this namespace helps you do. Use at your own risk; this is an unsupported way of running Jepsen.

    +jepsen.control.k8s documentation

    jepsen.control.k8s

The recommended way is to use SSH to set up and tear down databases. It’s however sometimes convenient to be able to set up and tear down the databases using kubectl instead, which is what this namespace helps you do. Use at your own risk; this is an unsupported way of running Jepsen.

    cp-from

    (cp-from context namespace pod-name remote-paths local-path)

    Copies files from a pod filesystem to the host.

    -

    cp-to

    (cp-to context namespace pod-name local-paths remote-path)

    Copies files from the host to a pod filesystem.

    -

    exec

    (exec context namespace pod-name {:keys [cmd], :as opts})

    Execute a shell command on a pod.

    -

    k8s

    (k8s)

Returns a remote that does things via kubectl exec and kubectl cp, in the default context and namespace.

    -

    list-pods

    (list-pods context namespace)

    A helper function to list all pods in a given context/namespace

    -
    \ No newline at end of file +

    cp-to

    (cp-to context namespace pod-name local-paths remote-path)

    Copies files from the host to a pod filesystem.

    +

    exec

    (exec context namespace pod-name {:keys [cmd], :as opts})

    Execute a shell command on a pod.

    +

    k8s

    (k8s)

Returns a remote that does things via kubectl exec and kubectl cp, in the default context and namespace.

    +

    list-pods

    (list-pods context namespace)

    A helper function to list all pods in a given context/namespace

    +
    \ No newline at end of file diff --git a/jepsen.control.net.html b/jepsen.control.net.html index 6477bfeff..a9e9c4cf3 100644 --- a/jepsen.control.net.html +++ b/jepsen.control.net.html @@ -1,9 +1,9 @@ -jepsen.control.net documentation

    jepsen.control.net

    Network control functions.

    +jepsen.control.net documentation

    jepsen.control.net

    Network control functions.

    control-ip

    (control-ip)

    Assuming you have a DB node bound in jepsen.client, returns the IP address of the control node, as perceived by that DB node. This is helpful when you want to, say, set up a tcpdump filter which snarfs traffic coming from the control node.

    -

    ip

    Look up an ip for a hostname. Memoized.

    -

    ip*

    (ip* host)

    Look up an ip for a hostname. Unmemoized.

    -

    local-ip

    (local-ip)

    The local node’s IP address

    -

    reachable?

    (reachable? node)

    Can the current node ping the given node?

    -
    \ No newline at end of file +

    ip

    Look up an ip for a hostname. Memoized.

    +

    ip*

    (ip* host)

    Look up an ip for a hostname. Unmemoized.

    +

    local-ip

    (local-ip)

    The local node’s IP address

    +

    reachable?

    (reachable? node)

    Can the current node ping the given node?

    +
    \ No newline at end of file diff --git a/jepsen.control.retry.html b/jepsen.control.retry.html index bb9bc76f7..3bab57335 100644 --- a/jepsen.control.retry.html +++ b/jepsen.control.retry.html @@ -1,8 +1,8 @@ -jepsen.control.retry documentation

    jepsen.control.retry

    SSH client libraries appear to be near universally-flaky. Maybe race conditions, maybe underlying network instability, maybe we’re just doing it wrong. For whatever reason, they tend to throw errors constantly. The good news is we can almost always retry their commands safely! This namespace provides a Remote which wraps an underlying Remote in a jepsen.reconnect wrapper, catching certain exception classes and ensuring they’re automatically retried.

    +jepsen.control.retry documentation

    jepsen.control.retry

    SSH client libraries appear to be near universally-flaky. Maybe race conditions, maybe underlying network instability, maybe we’re just doing it wrong. For whatever reason, they tend to throw errors constantly. The good news is we can almost always retry their commands safely! This namespace provides a Remote which wraps an underlying Remote in a jepsen.reconnect wrapper, catching certain exception classes and ensuring they’re automatically retried.

    backoff-time

    Roughly how long should we back off when retrying, in ms?

    -

    remote

    (remote remote)

    Constructs a new Remote by wrapping another Remote in one which automatically catches and retries any exception of the form {:type :jepsen.control/ssh-failed}.

    -

    retries

    How many times should we retry exceptions before giving up and throwing?

    -

    with-retry

    macro

    (with-retry & body)

    Takes a body. Evaluates body, retrying SSH exceptions.

    -
    \ No newline at end of file +

    remote

    (remote remote)

    Constructs a new Remote by wrapping another Remote in one which automatically catches and retries any exception of the form {:type :jepsen.control/ssh-failed}.

    +

    retries

    How many times should we retry exceptions before giving up and throwing?

    +

    with-retry

    macro

    (with-retry & body)

    Takes a body. Evaluates body, retrying SSH exceptions.
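For instance (a sketch; the service name is a placeholder), wrapping a flaky remote command so transient SSH failures are retried:

(require '[jepsen.control.retry :as retry]
         '[jepsen.control :as c])

(retry/with-retry
  (c/exec :systemctl :restart :my-db))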

    +
    \ No newline at end of file diff --git a/jepsen.control.scp.html b/jepsen.control.scp.html index 8d86bf020..95f856975 100644 --- a/jepsen.control.scp.html +++ b/jepsen.control.scp.html @@ -1,12 +1,12 @@ -jepsen.control.scp documentation

    jepsen.control.scp

    Built-in JDK SSH libraries can be orders of magnitude slower than plain old SCP for copying even medium-sized files of a few GB. This provides a faster implementation of a Remote which shells out to SCP.

    +jepsen.control.scp documentation

    jepsen.control.scp

    Built-in JDK SSH libraries can be orders of magnitude slower than plain old SCP for copying even medium-sized files of a few GB. This provides a faster implementation of a Remote which shells out to SCP.

    exec!

    (exec! remote ctx cmd-args)

    A super basic exec implementation for our own purposes. At some point we might want to pull some? all? of control/exec all the way down into control.remote, and get rid of this.

    -

    remote

    (remote cmd-remote)

    Takes a remote which can execute commands, and wraps it in a remote which overrides upload & download to use SCP.

    -

    remote-path

    (remote-path {:keys [username host]} path)

    Returns the string representation of a remote path using a conn spec; e.g. admin@n1:/foo/bar

    -

    scp!

    (scp! conn-spec sources dest)

    Runs an SCP command by shelling out. Takes a conn-spec (used for port, key, etc), a seq of sources, and a single destination, all as strings.

    -

    tmp-dir

    The remote directory we temporarily store files in while transferring up and down.

    -

    tmp-file

    (tmp-file)

    Returns a randomly generated tmpfile for use during uploads/downloads

    -

    with-tmp-dir

    macro

    (with-tmp-dir remote ctx & body)

    Evaluates body. If a nonzero exit status occurs, forces the tmp dir to exist, and re-evals body. We do this to avoid the overhead of checking for existence every time someone wants to upload/download a file.

    -

    with-tmp-file

    macro

    (with-tmp-file remote ctx [tmp-file-sym] & body)

    Evaluates body with tmp-file-sym bound to the remote path of a temporary file. Cleans up file at exit.

    -
    \ No newline at end of file +

    remote

    (remote cmd-remote)

    Takes a remote which can execute commands, and wraps it in a remote which overrides upload & download to use SCP.

    +

    remote-path

    (remote-path {:keys [username host]} path)

    Returns the string representation of a remote path using a conn spec; e.g. admin@n1:/foo/bar

    +

    scp!

    (scp! conn-spec sources dest)

    Runs an SCP command by shelling out. Takes a conn-spec (used for port, key, etc), a seq of sources, and a single destination, all as strings.

    +

    tmp-dir

    The remote directory we temporarily store files in while transferring up and down.

    +

    tmp-file

    (tmp-file)

    Returns a randomly generated tmpfile for use during uploads/downloads

    +

    with-tmp-dir

    macro

    (with-tmp-dir remote ctx & body)

    Evaluates body. If a nonzero exit status occurs, forces the tmp dir to exist, and re-evals body. We do this to avoid the overhead of checking for existence every time someone wants to upload/download a file.

    +

    with-tmp-file

    macro

    (with-tmp-file remote ctx [tmp-file-sym] & body)

    Evaluates body with tmp-file-sym bound to the remote path of a temporary file. Cleans up file at exit.

    +
    \ No newline at end of file diff --git a/jepsen.control.sshj.html b/jepsen.control.sshj.html index c3c39af02..0d2839314 100644 --- a/jepsen.control.sshj.html +++ b/jepsen.control.sshj.html @@ -1,11 +1,11 @@ -jepsen.control.sshj documentation

    jepsen.control.sshj

    An sshj-backed control Remote. Experimental; I’m considering replacing jepsen.control’s use of clj-ssh with this instead.

    -

    agent-proxy

    (agent-proxy)

    auth!

    (auth! c {:keys [username password private-key-path], :as conn-spec})

    Tries a bunch of ways to authenticate an SSHClient. We start with the given key file, if provided, then fall back to general public keys, then fall back to username/password.

    -

    auth-methods

    (auth-methods agent)

    Returns a list of AuthMethods we can use for logging in via an AgentProxy.

    -

    concurrency-limit

OpenSSH has a standard limit of 10 concurrent channels per connection. However, commands run in quick succession with 10 concurrent channels also seem to blow out the channel limit–perhaps there’s an asynchronous channel teardown process. We set the limit a bit lower here. This is experimentally determined by running jepsen.control-test’s integration test…

    -

    handle-error

    (handle-error conn context e)

Takes a connection, a context map, and an SSHJ exception. Throws if it was caused by an InterruptedException or InterruptedIOException. Otherwise, wraps it in a :ssh-failed exception map, and throws that.

    -

    remote

    (remote)

    Constructs an SSHJ remote.

    -

    send-eof!

    (send-eof! client session)

    There’s a bug in SSHJ where it doesn’t send an EOF when you close the session’s outputstream, which causes the remote command to hang indefinitely. To work around this, we send an EOF message ourselves. I’m not at all sure this is threadsafe; it might cause issues later.

    -

    with-errors

    macro

    (with-errors conn context & body)

    Takes a conn spec, a context map, and a body. Evals body, remapping SSHJ exceptions to :type :jepsen.control/ssh-failed.

    -
    \ No newline at end of file +jepsen.control.sshj documentation

    jepsen.control.sshj

    An sshj-backed control Remote. Experimental; I’m considering replacing jepsen.control’s use of clj-ssh with this instead.

    +

    agent-proxy

    (agent-proxy)

    auth!

    (auth! c {:keys [username password private-key-path], :as conn-spec})

    Tries a bunch of ways to authenticate an SSHClient. We start with the given key file, if provided, then fall back to general public keys, then fall back to username/password.

    +

    auth-methods

    (auth-methods agent)

    Returns a list of AuthMethods we can use for logging in via an AgentProxy.

    +

    concurrency-limit

OpenSSH has a standard limit of 10 concurrent channels per connection. However, commands run in quick succession with 10 concurrent channels also seem to blow out the channel limit–perhaps there’s an asynchronous channel teardown process. We set the limit a bit lower here. This is experimentally determined by running jepsen.control-test’s integration test…

    +

    handle-error

    (handle-error conn context e)

Takes a connection, a context map, and an SSHJ exception. Throws if it was caused by an InterruptedException or InterruptedIOException. Otherwise, wraps it in a :ssh-failed exception map, and throws that.

    +

    remote

    (remote)

    Constructs an SSHJ remote.

    +

    send-eof!

    (send-eof! client session)

    There’s a bug in SSHJ where it doesn’t send an EOF when you close the session’s outputstream, which causes the remote command to hang indefinitely. To work around this, we send an EOF message ourselves. I’m not at all sure this is threadsafe; it might cause issues later.

    +

    with-errors

    macro

    (with-errors conn context & body)

    Takes a conn spec, a context map, and a body. Evals body, remapping SSHJ exceptions to :type :jepsen.control/ssh-failed.

    +
    \ No newline at end of file diff --git a/jepsen.control.util.html b/jepsen.control.util.html index 2ae6947b7..af1fa17e6 100644 --- a/jepsen.control.util.html +++ b/jepsen.control.util.html @@ -1,38 +1,38 @@ -jepsen.control.util documentation

    jepsen.control.util

    Utility functions for scripting installations.

    +jepsen.control.util documentation

    jepsen.control.util

    Utility functions for scripting installations.

    await-tcp-port

    (await-tcp-port port)(await-tcp-port port opts)

    Blocks until a local TCP port is bound. Options:

:retry-interval How long between retries, in ms. Default 1s.
:log-interval How long between logging that we’re still waiting, in ms. Default retry-interval.
:timeout How long until giving up and throwing :type :timeout, in ms. Default 60 seconds.
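For example (a sketch; the port and timeouts are placeholders), waiting up to 30 seconds for a database to start listening:

(require '[jepsen.control.util :as cu])

(cu/await-tcp-port 5432 {:timeout 30000, :retry-interval 500})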

    -

    cached-wget!

    (cached-wget! url)(cached-wget! url opts)

    Downloads a string URL to the Jepsen wget cache directory, and returns the full local filename as a string. Skips if the file already exists. Local filenames are base64-encoded URLs, as opposed to the name of the file–this is helpful when you want to download a package like https://foo.com/v1.2/foo.tar; since the version is in the URL but not a part of the filename, downloading a new version could silently give you the old version instead.

    +

    cached-wget!

    (cached-wget! url)(cached-wget! url opts)

    Downloads a string URL to the Jepsen wget cache directory, and returns the full local filename as a string. Skips if the file already exists. Local filenames are base64-encoded URLs, as opposed to the name of the file–this is helpful when you want to download a package like https://foo.com/v1.2/foo.tar; since the version is in the URL but not a part of the filename, downloading a new version could silently give you the old version instead.

    Options:

:force? Even if we have this cached, download the tarball again anyway.
:user? User for wget authentication. If provided, a valid password must also be provided.
:pw? Password for wget authentication.

    -

    daemon-running?

    (daemon-running? pidfile)

    Given a pidfile, returns true if the pidfile is present and the process it contains is alive, nil if the pidfile is absent, false if it’s present and the process doesn’t exist.

    +

    daemon-running?

    (daemon-running? pidfile)

    Given a pidfile, returns true if the pidfile is present and the process it contains is alive, nil if the pidfile is absent, false if it’s present and the process doesn’t exist.

    Strictly this doesn’t mean the process is RUNNING; it could be asleep or a zombie, but you know what I mean. ;-)

    -

    encode

    (encode s)

    base64 encode a given string and return the encoded string in utf8

    -

    ensure-user!

    (ensure-user! username)

    Make sure a user exists.

    -

    exists?

    (exists? filename)

    Is a path present?

    -

    file?

    (file? filename)

    Is filename a regular file that exists?

    -

    grepkill!

    (grepkill! pattern)(grepkill! signal pattern)

    Kills processes by grepping for the given string. If a signal is given, sends that signal instead. Signals may be either numbers or names, e.g. :term, :hup, …

    -

    install-archive!

    (install-archive! url dest)(install-archive! url dest opts)

    Gets the given tarball URL, caching it in /tmp/jepsen/, and extracts its sole top-level directory to the given dest directory. Deletes current contents of dest. Supports both zip files and tarballs, compressed or raw. Returns dest.

    +

    encode

    (encode s)

    base64 encode a given string and return the encoded string in utf8

    +

    ensure-user!

    (ensure-user! username)

    Make sure a user exists.

    +

    exists?

    (exists? filename)

    Is a path present?

    +

    file?

    (file? filename)

    Is filename a regular file that exists?

    +

    grepkill!

    (grepkill! pattern)(grepkill! signal pattern)

    Kills processes by grepping for the given string. If a signal is given, sends that signal instead. Signals may be either numbers or names, e.g. :term, :hup, …

    +

    install-archive!

    (install-archive! url dest)(install-archive! url dest opts)

    Gets the given tarball URL, caching it in /tmp/jepsen/, and extracts its sole top-level directory to the given dest directory. Deletes current contents of dest. Supports both zip files and tarballs, compressed or raw. Returns dest.

    URLs can be HTTP, HTTPS, or file://, in which case they are interpreted as a file path on the remote node.

    Standard practice for release tarballs is to include a single directory, often named something like foolib-1.2.3-amd64, with files inside it. If only a single directory is present, its contents will be moved to dest, so foolib-1.2.3-amd64/my.file becomes dest/my.file. If the tarball includes multiple files, those files are moved to dest, so my.file becomes dest/my.file.

    Options:

:force? Even if we have this cached, download the tarball again anyway.
:user? User for wget authentication. If provided, a valid password must also be provided.
:pw? Password for wget authentication.
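A sketch (the URL and destination directory are placeholders):

(cu/install-archive! "https://example.com/foodb-1.2.3.tar.gz"
                     "/opt/foodb"
                     {:force? true})
;; => "/opt/foodb", with the archive's contents extracted inside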

    -

    ls

    (ls)(ls dir)

    A seq of directory entries (not including . and ..). TODO: escaping for control chars in filenames (if you do this, WHO ARE YOU???)

    -

    ls-full

    (ls-full dir)

    Like ls, but prepends dir to each entry.

    -

    signal!

    (signal! process-name signal)

    Sends a signal to a named process by signal number or name.

    -

    start-daemon!

    (start-daemon! opts bin & args)

    Starts a daemon process, logging stdout and stderr to the given file. Invokes bin with args. Options are:

    +

    ls

    (ls)(ls dir)

    A seq of directory entries (not including . and ..). TODO: escaping for control chars in filenames (if you do this, WHO ARE YOU???)

    +

    ls-full

    (ls-full dir)

    Like ls, but prepends dir to each entry.

    +

    signal!

    (signal! process-name signal)

    Sends a signal to a named process by signal number or name.

    +

    start-daemon!

    (start-daemon! opts bin & args)

    Starts a daemon process, logging stdout and stderr to the given file. Invokes bin with args. Options are:

:env Environment variables for the invocation of start-stop-daemon. Should be a Map of env var names to string values, like {:SEEDS “flax, cornflower”}. See jepsen.control/env for alternative forms.
:background?
:chdir
:exec Sets a custom executable to check for.
:logfile
:make-pidfile?
:match-executable? Helpful for cases where the daemon is a wrapper script that execs another process, so that pidfile management doesn’t work right. When this option is true, we ask start-stop-daemon to check for any process running the given executable program: either :exec or the bin argument.
:match-process-name? Helpful for cases where the daemon is a wrapper script that execs another process, so that pidfile management doesn’t work right. When this option is true, we ask start-stop-daemon to check for any process with a COMM field matching :process-name (or the name of the bin).
:pidfile Where should we write (and check for) the pidfile? If nil, doesn’t use the pidfile at all.
:process-name Overrides the process name for :match-process-name?

    Returns :started if the daemon was started, or :already-running if it was already running, or throws otherwise.
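A sketch of starting and later stopping a daemon; paths and arguments are placeholders, and the option keys come from the list above:

(cu/start-daemon! {:logfile "/opt/foodb/foodb.log"
                   :pidfile "/opt/foodb/foodb.pid"
                   :chdir   "/opt/foodb"}
                  "/opt/foodb/bin/foodb"
                  :--port 5000)

;; ...and on teardown, kill it via its pidfile:
(cu/stop-daemon! "/opt/foodb/foodb.pid")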

    -

    std-wget-opts

    A list of standard options we pass to wget

    -

    stop-daemon!

    (stop-daemon! pidfile)(stop-daemon! cmd pidfile)

    Kills a daemon process by pidfile, or, if given a command name, kills all processes with that command name, and cleans up pidfile. Pidfile may be nil in the two-argument case, in which case it is ignored.

    -

    tmp-dir!

    (tmp-dir!)

    Creates a temporary directory under /tmp/jepsen and returns its path.

    -

    tmp-dir-base

    Where should we put temporary files?

    -

    tmp-file!

    (tmp-file!)

    Creates a random, temporary file under tmp-dir-base, and returns its path.

    -

    wget!

    (wget! url)(wget! url opts)

    Downloads a string URL and returns the filename as a string. Skips if the file already exists.

    +

    std-wget-opts

    A list of standard options we pass to wget

    +

    stop-daemon!

    (stop-daemon! pidfile)(stop-daemon! cmd pidfile)

    Kills a daemon process by pidfile, or, if given a command name, kills all processes with that command name, and cleans up pidfile. Pidfile may be nil in the two-argument case, in which case it is ignored.

    +

    tmp-dir!

    (tmp-dir!)

    Creates a temporary directory under /tmp/jepsen and returns its path.

    +

    tmp-dir-base

    Where should we put temporary files?

    +

    tmp-file!

    (tmp-file!)

    Creates a random, temporary file under tmp-dir-base, and returns its path.

    +

    wget!

    (wget! url)(wget! url opts)

    Downloads a string URL and returns the filename as a string. Skips if the file already exists.

    Options:

:force? Even if we have this cached, download the tarball again anyway.
:user? User for wget authentication. If provided, a valid password must also be provided.
:pw? Password for wget authentication.

    -

    wget-cache-dir

    Directory for caching files from the web.

    -

    wget-helper!

    (wget-helper! & args)

    A helper for wget! and cached-wget!. Calls wget with options; catches name resolution and other network errors, and retries them. EC2 name resolution can be surprisingly flaky.

    -

    write-file!

    (write-file! string file)

    Writes a string to a filename.

    -
    \ No newline at end of file +

    wget-cache-dir

    Directory for caching files from the web.

    +

    wget-helper!

    (wget-helper! & args)

    A helper for wget! and cached-wget!. Calls wget with options; catches name resolution and other network errors, and retries them. EC2 name resolution can be surprisingly flaky.

    +

    write-file!

    (write-file! string file)

    Writes a string to a filename.

    +
    \ No newline at end of file diff --git a/jepsen.core.html b/jepsen.core.html index 41142bf4e..323bdd003 100644 --- a/jepsen.core.html +++ b/jepsen.core.html @@ -1,17 +1,17 @@ -jepsen.core documentation

    jepsen.core

    Entry point for all Jepsen tests. Coordinates the setup of servers, running tests, creating and resolving failures, and interpreting results.

    +jepsen.core documentation

    jepsen.core

    Entry point for all Jepsen tests. Coordinates the setup of servers, running tests, creating and resolving failures, and interpreting results.

    Jepsen tests a system by running a set of singlethreaded processes, each representing a single client in the system, and a special nemesis process, which induces failures across the cluster. Processes choose operations to perform based on a generator. Each process uses a client to apply the operation to the distributed system, and records the invocation and completion of that operation in the history for the test. When the test is complete, a checker analyzes the history to see if it made sense.

    Jepsen automates the setup and teardown of the environment and distributed system by using an OS and client respectively. See run! for details.

    analyze!

    (analyze! test)

    After running the test and obtaining a history, we perform some post-processing on the history, run the checker, and write the test to disk again. Takes a test map. Returns a new test with results.

    -

    conj-op!

    (conj-op! test op)

Adds an operation to a test’s history, and returns the operation.

    -

    log-results

    (log-results test)

    Logs info about the results of a test to stdout, and returns test.

    -

    log-test-start!

    (log-test-start! test)

    Logs some basic information at the start of a test: the Git version of the working directory, the lein arguments to re-run the test, etc.

    -

    maybe-snarf-logs!

    (maybe-snarf-logs! test)

Snarfs logs, swallows and logs all throwables. Why? Because we do this when we encounter an error and abort, and we don’t want an error here to supersede the root cause that made us abort.

    -

    prepare-test

    (prepare-test test)

    Takes a test and prepares it for running. Ensures it has a :start-time, :concurrency, and :barrier field. Wraps its generator in a forgettable reference, to prevent us from inadvertently retaining the head.

    +

    conj-op!

    (conj-op! test op)

Adds an operation to a test’s history, and returns the operation.

    +

    log-results

    (log-results test)

    Logs info about the results of a test to stdout, and returns test.

    +

    log-test-start!

    (log-test-start! test)

    Logs some basic information at the start of a test: the Git version of the working directory, the lein arguments to re-run the test, etc.

    +

    maybe-snarf-logs!

    (maybe-snarf-logs! test)

Snarfs logs, swallows and logs all throwables. Why? Because we do this when we encounter an error and abort, and we don’t want an error here to supersede the root cause that made us abort.

    +

    prepare-test

    (prepare-test test)

    Takes a test and prepares it for running. Ensures it has a :start-time, :concurrency, and :barrier field. Wraps its generator in a forgettable reference, to prevent us from inadvertently retaining the head.

    This operation always succeeds, and is necessary for accessing a test’s store directory, which depends on :start-time. You may call this yourself before calling run!, if you need access to the store directory outside the run! context.

    -

    primary

    (primary test)

    Given a test, returns the primary node.

    -

    run!

    (run! test)

    Runs a test. Tests are maps containing

    +

    primary

    (primary test)

    Given a test, returns the primary node.

    +

    run!

    (run! test)

    Runs a test. Tests are maps containing

:nodes A sequence of string node names involved in the test
:concurrency (optional) How many processes to run concurrently
:ssh SSH credential information: a map containing…
  :username The username to connect with (root)
  :password The password to use
  :sudo-password The password to use for sudo, if needed
  :port SSH listening port (22)
  :private-key-path A path to an SSH identity file (~/.ssh/id_rsa)
  :strict-host-key-checking Whether or not to verify host keys
:logging Logging options; see jepsen.store/start-logging!
:os The operating system; given by the OS protocol
:db The database to configure: given by the DB protocol
:remote The remote to use for control actions. Try, for example, (jepsen.control.sshj/remote).
:client A client for the database
:nemesis A client for failures
:generator A generator of operations to apply to the DB
:checker Verifies that the history is valid
:log-files A list of paths to logfiles/dirs which should be captured at the end of the test.
:nonserializable-keys A collection of top-level keys in the test which shouldn’t be serialized to disk.
:leave-db-running? Whether to leave the DB running at the end of the test.

    Jepsen automatically adds some additional keys during the run

:start-time When the test began
:history The operations the clients and nemesis performed
:results The results from the checker, once the test is completed
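A minimal sketch of invoking run!, assuming jepsen.tests/noop-test (a no-op test map shipped with Jepsen) as a starting point; the node names and SSH settings are placeholders:

(require '[jepsen [core :as jepsen] [tests :as tests]])

(jepsen/run!
  (merge tests/noop-test
         {:nodes ["n1" "n2" "n3"]
          :ssh   {:username         "root"
                  :private-key-path "/root/.ssh/id_rsa"}
          :concurrency 5}))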

    @@ -65,15 +65,15 @@
    • This generates the final report
    -

    run-case!

    (run-case! test)

    Takes a test with a store handle. Spawns nemesis and clients and runs the generator. Returns test with no :generator and a completed :history.

    -

    snarf-logs!

    (snarf-logs! test)

    Downloads logs for a test. Updates symlinks.

    -

    synchronize

    (synchronize test)(synchronize test timeout-s)

    A synchronization primitive for tests. When invoked, blocks until all nodes have arrived at the same point.

    +

    run-case!

    (run-case! test)

    Takes a test with a store handle. Spawns nemesis and clients and runs the generator. Returns test with no :generator and a completed :history.

    +

    snarf-logs!

    (snarf-logs! test)

    Downloads logs for a test. Updates symlinks.

    +

    synchronize

    (synchronize test)(synchronize test timeout-s)

    A synchronization primitive for tests. When invoked, blocks until all nodes have arrived at the same point.

    This is often used in IO-heavy DB setup code to ensure all nodes have completed some phase of execution before moving on to the next. However, if an exception is thrown by one of those threads, the call to synchronize will deadlock! To avoid this, we include a default timeout of 60 seconds, which can be overridden by passing an alternate timeout in seconds.

    -

    with-client+nemesis-setup-teardown

    macro

    (with-client+nemesis-setup-teardown [test-sym test] & body)

    Takes a binding vector of a test symbol and a test map. Sets up clients and nemesis, and rebinds (:nemesis test) to the set-up nemesis. Evaluates body. Afterwards, ensures clients and nemesis are torn down.

    -

    with-db

    macro

    (with-db test & body)

    Wraps body in DB setup and teardown.

    -

    with-log-snarfing

    macro

    (with-log-snarfing test & body)

    Evaluates body and ensures logs are snarfed afterwards. Will also download logs in the event of JVM shutdown, so you can ctrl-c a test and get something useful.

    -

    with-logging

    macro

    (with-logging test & body)

    Sets up logging for this test run, logs the start of the test, evaluates body, and stops logging at the end. Also logs test crashes, so they appear in the log files for this test run.

    -

    with-os

    macro

    (with-os test & body)

    Wraps body in OS setup and teardown.

    -

    with-resources

    macro

    (with-resources [sym start stop resources] & body)

    Takes a four-part binding vector: a symbol to bind resources to, a function to start a resource, a function to stop a resource, and a sequence of resources. Then takes a body. Starts resources in parallel, evaluates body, and ensures all resources are correctly closed in the event of an error.

    -

    with-sessions

    macro

    (with-sessions [test' test] & body)

Takes a [test' test] binding form and a body. Starts with the given test expression as the test, and sets up the jepsen.control state required to run it–the remote, SSH options, etc. Opens SSH sessions to each node. Saves those sessions in the :sessions map of the test, binds that to the test' symbol in the binding expression, and evaluates body.

    -
    \ No newline at end of file +

    with-client+nemesis-setup-teardown

    macro

    (with-client+nemesis-setup-teardown [test-sym test] & body)

    Takes a binding vector of a test symbol and a test map. Sets up clients and nemesis, and rebinds (:nemesis test) to the set-up nemesis. Evaluates body. Afterwards, ensures clients and nemesis are torn down.

    +

    with-db

    macro

    (with-db test & body)

    Wraps body in DB setup and teardown.

    +

    with-log-snarfing

    macro

    (with-log-snarfing test & body)

    Evaluates body and ensures logs are snarfed afterwards. Will also download logs in the event of JVM shutdown, so you can ctrl-c a test and get something useful.

    +

    with-logging

    macro

    (with-logging test & body)

    Sets up logging for this test run, logs the start of the test, evaluates body, and stops logging at the end. Also logs test crashes, so they appear in the log files for this test run.

    +

    with-os

    macro

    (with-os test & body)

    Wraps body in OS setup and teardown.

    +

    with-resources

    macro

    (with-resources [sym start stop resources] & body)

    Takes a four-part binding vector: a symbol to bind resources to, a function to start a resource, a function to stop a resource, and a sequence of resources. Then takes a body. Starts resources in parallel, evaluates body, and ensures all resources are correctly closed in the event of an error.

    +

    with-sessions

    macro

    (with-sessions [test' test] & body)

Takes a [test' test] binding form and a body. Starts with the given test expression as the test, and sets up the jepsen.control state required to run it–the remote, SSH options, etc. Opens SSH sessions to each node. Saves those sessions in the :sessions map of the test, binds that to the test' symbol in the binding expression, and evaluates body.

    +
    \ No newline at end of file diff --git a/jepsen.db.html b/jepsen.db.html index 90e051009..10041ba46 100644 --- a/jepsen.db.html +++ b/jepsen.db.html @@ -1,26 +1,27 @@ -jepsen.db documentation

    jepsen.db

    Allows Jepsen to set up and tear down databases.

    +jepsen.db documentation

    jepsen.db

    Allows Jepsen to set up and tear down databases.

    cycle!

    (cycle! test)

    Takes a test, and tears down, then sets up, the database on all nodes concurrently.

    If any call to setup! or setup-primary! throws :type ::setup-failed, we tear down and retry the whole process up to cycle-tries times.

    -

    cycle-tries

    How many tries do we get to set up a database?

    -

    DB

    protocol

    members

    setup!

    (setup! db test node)

    Set up the database on this particular node.

    +

    cycle-tries

    How many tries do we get to set up a database?

    +

    DB

    protocol

    members

    setup!

    (setup! db test node)

    Set up the database on this particular node.

    teardown!

    (teardown! db test node)

    Tear down the database on this particular node.
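A minimal sketch of implementing the protocol with reify; the install and remove commands are placeholders for a hypothetical foodb package:

(require '[jepsen.db :as db]
         '[jepsen.control :as c])

(def foodb
  (reify db/DB
    (setup!    [_ test node] (c/exec :echo "install and start foodb here"))
    (teardown! [_ test node] (c/exec :echo "stop foodb and wipe its data here"))))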

    -

    Kill

    protocol

    This optional protocol supports starting and killing a DB’s processes.

    +

    Kill

    protocol

    This optional protocol supports starting and killing a DB’s processes.

    members

    kill!

    (kill! db test node)

    Forcibly kills the process

    start!

    (start! db test node)

    Starts the process

    -

    log-files-map

    (log-files-map db test node)

    Takes a DB, a test, and a node. Returns a map of remote paths to local paths. Checks to make sure there are no duplicate local paths.

    +

    log-files-map

    (log-files-map db test node)

    Takes a DB, a test, and a node. Returns a map of remote paths to local paths. Checks to make sure there are no duplicate local paths.

    log-files used to return a sequence of remote paths, and some people are likely still assuming that form for composition. When they start e.g. concatenating maps into lists of strings, we’re going to get mixed representations. We try to make all this Just Work (TM).

    -

    LogFiles

    protocol

    members

    log-files

    (log-files db test node)

    Returns either a.) a map of fully-qualified remote paths (on this DB node) to short local paths (in store/), or b.) a sequence of fully-qualified remote paths.

    -

    noop

    Does nothing.

    -

    Pause

    protocol

    This optional protocol supports pausing and resuming a DB’s processes.

    +

    LogFiles

    protocol

    members

    log-files

    (log-files db test node)

    Returns either a.) a map of fully-qualified remote paths (on this DB node) to short local paths (in store/), or b.) a sequence of fully-qualified remote paths.

    +

    map-test

    (map-test f db)

    Wraps a DB in another DB which rewrites every test argument using (f test). Helpful for when you want to compose two DBs together that need different test parameters, like :version.
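For instance (a sketch; inner-db and :foodb-version are hypothetical), handing a wrapped DB a differently-named version key:

(db/map-test (fn [test] (assoc test :version (:foodb-version test)))
             inner-db)
;; inner-db now sees :version wherever the outer test carries :foodb-version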

    +

    noop

    Does nothing.

    +

    Pause

    protocol

    This optional protocol supports pausing and resuming a DB’s processes.

    members

    pause!

    (pause! db test node)

    Pauses the process

    resume!

    (resume! db test node)

    Resumes the process

    -

    Primary

    protocol

    This optional protocol supports databases which have a notion of one (or more) primary nodes.

    +

    Primary

    protocol

    This optional protocol supports databases which have a notion of one (or more) primary nodes.

    members

    primaries

    (primaries db test)

    Returns a collection of nodes which are currently primaries. Best-effort is OK; in practice, this usually devolves to ‘nodes that think they’re currently primaries’.

    setup-primary!

    (setup-primary! db test node)

    Performs one-time setup on a single node.

    -

    Process

    protocol

    tcpdump

    (tcpdump opts)

    A database which runs a tcpdump capture from setup! to teardown!, and yields a tcpdump logfile. Options:

    +

    Process

    protocol

    tcpdump

    (tcpdump opts)

    A database which runs a tcpdump capture from setup! to teardown!, and yields a tcpdump logfile. Options:

    :clients-only? If true, applies a filter string which yields only traffic from Jepsen clients, rather than capturing inter-DB-node traffic.

    :filter A filter string to apply (in addition to ports). e.g. “host 192.168.122.1”, which can be helpful for seeing just client traffic from the control node.

    :ports A collection of ports to grab traffic from.
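
For example (port numbers are illustrative):

    (db/tcpdump {:ports         [2379 2380]
                 :clients-only? true})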

    -
    \ No newline at end of file + \ No newline at end of file diff --git a/jepsen.faketime.html b/jepsen.faketime.html index 94b2b2ed1..4ec83da94 100644 --- a/jepsen.faketime.html +++ b/jepsen.faketime.html @@ -1,9 +1,9 @@ -jepsen.faketime documentation

    jepsen.faketime

    Libfaketime is useful for making clocks run at differing rates! This namespace provides utilities for stubbing out programs with faketime.

    +jepsen.faketime documentation

    jepsen.faketime

    Libfaketime is useful for making clocks run at differing rates! This namespace provides utilities for stubbing out programs with faketime.

    install-0.9.6-jepsen1!

    (install-0.9.6-jepsen1!)

    Installs our fork of 0.9.6 (the last version which worked with jemalloc), which includes a patch to support CLOCK_MONOTONIC_COARSE and CLOCK_REALTIME_COARSE. Gosh, this is SUCH a hack.

    -

    rand-factor

    (rand-factor factor)

    Helpful for choosing faketime rates. Takes a factor (e.g. 2.5) and produces a random number selected from a distribution around 1, with minimum and maximum constrained such that factor * min = max. Intuitively, the fastest clock can be no more than twice as fast as the slowest.

    -

    script

    (script cmd init-offset rate)

    A sh script which invokes cmd with a faketime wrapper. Takes an initial offset in seconds, and a clock rate to run at.

    -

    unwrap!

    (unwrap! cmd)

    If a wrapper is installed, remove it and replace it with the original .nofaketime version of the binary.

    -

    wrap!

    (wrap! cmd init-offset rate)

    Replaces an executable with a faketime wrapper, moving the original to x.no-faketime. Idempotent.

    -
    \ No newline at end of file +

    rand-factor

    (rand-factor factor)

    Helpful for choosing faketime rates. Takes a factor (e.g. 2.5) and produces a random number selected from a distribution around 1, with minimum and maximum constrained such that factor * min = max. Intuitively, the fastest clock can be no more than twice as fast as the slowest.

    +

    script

    (script cmd init-offset rate)

    A sh script which invokes cmd with a faketime wrapper. Takes an initial offset in seconds, and a clock rate to run at.

    +

    unwrap!

    (unwrap! cmd)

    If a wrapper is installed, remove it and replace it with the original .nofaketime version of the binary.

    +

    wrap!

    (wrap! cmd init-offset rate)

    Replaces an executable with a faketime wrapper, moving the original to x.no-faketime. Idempotent.

    +
    \ No newline at end of file diff --git a/jepsen.fs-cache.html b/jepsen.fs-cache.html index f15076562..e83d2a018 100644 --- a/jepsen.fs-cache.html +++ b/jepsen.fs-cache.html @@ -1,6 +1,6 @@ -jepsen.fs-cache documentation

    jepsen.fs-cache

    Some systems Jepsen tests are expensive or time-consuming to set up. They might involve lengthy compilation processes, large packages which take a long time to download, or allocate large files on initial startup.

    +jepsen.fs-cache documentation

    jepsen.fs-cache

    Some systems Jepsen tests are expensive or time-consuming to set up. They might involve lengthy compilation processes, large packages which take a long time to download, or allocate large files on initial startup.

    Other systems require state which persists from run to run–for instance, there might be an expensive initial cluster join process, and you might want to perform that process once, save the cluster’s state to disk before testing, and for subsequent tests redeploy that state to nodes to skip the cluster join.

    This namespace provides a persistent cache, stored on the control node’s filesystem, which is suitable for strings, data, or files. It also provides a basic locking mechanism.

    Cached values are referred to by logical paths: a vector of strings, keywords, numbers, booleans, etc; see the Encode protocol for details. For instance, a cache path could be any of

    @@ -16,26 +16,26 @@

    Writes to cache are atomic: a temporary file will be written to first, then renamed into its final cache location.

    You can acquire locks on any cache path, whether it exists or not, using (locking path ...).
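
A sketch of typical use from a DB’s setup! code; the cache path and file locations are hypothetical:

    (require '[jepsen.fs-cache :as cache])

    (let [path [:demo-db "1.2.3" :tarball]]
      (cache/locking path
        (when-not (cache/cached? path)
          ;; The expensive download/build happens once; its result is cached on the control node.
          (cache/save-remote! "/tmp/demo-db.tar.gz" path))
        (cache/deploy-remote! path "/opt/demo-db.tar.gz")))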

    atomic-move!

    (atomic-move! f1 f2)

    Attempts to move a file atomically, even across filesystems.

    -

    cached?

    (cached? path)

    Do we have the given path cached?

    -

    clear!

    (clear!)(clear! path)

    Clears the entire cache, or a specified path.

    -

    deploy-remote!

    (deploy-remote! cache-path remote-path)

    Deploys a cached path to the given remote path (a string). Deletes remote path first. Creates parents if necessary. Returns remote path.

    -

    dir

    Top-level cache directory.

    -

    dir-prefix

    What string do we prefix to directories in cache path filenames, to distinguish /foo from /foo/bar?

    -

    Encode

    protocol

    members

    encode-path-component

    (encode-path-component component)

    Encodes datatypes to strings which can be used in filenames. Use escape to escape slashes.

    -

    escape

    (escape string)

    Escapes slashes in filenames.

    -

    file

    (file path)

    The local File backing a given path, whether or not it exists.

    -

    file!

    (file! path)

    Like file, but ensures parents exist.

    -

    file-prefix

    What string do we prefix to files in cache path filenames, to distinguish /foo from /foo/bar?

    -

    fs-path

    (fs-path path)

    Takes a cache path, and returns a sequence of filenames for that path.

    -

    load-edn

    (load-edn path)

    Reads the given cache path as an EDN structure, returning data. Returns nil if file does not exist.

    -

    load-file

    (load-file path)

    The local File backing a given path. Returns nil if no file exists.

    -

    load-string

    (load-string path)

    Returns the cached value for a given path as a string, or nil if uncached.

    -

    locking

    macro

    (locking path & body)

    Acquires a lock for a particular cache path, and evaluates body. Helpful for reducing concurrent evaluation of expensive cache misses.

    -

    save-edn!

    (save-edn! data path)

    Writes the given data structure to an EDN file. Returns data.

    -

    save-file!

    (save-file! file path)

    Caches a File object to the given path. Returns file.

    -

    save-remote!

    (save-remote! remote-path cache-path)

    Caches a remote path (a string) to a cache path by SCPing it down. Returns remote-path.

    -

    save-string!

    (save-string! string path)

    Caches the given string to a cache path. Returns string.

    -

    write-atomic!

    macro

    (write-atomic! [tmp-sym final] & body)

    Writes a file atomically. Takes a binding form and a body, like so

    +

    cached?

    (cached? path)

    Do we have the given path cached?

    +

    clear!

    (clear!)(clear! path)

    Clears the entire cache, or a specified path.

    +

    deploy-remote!

    (deploy-remote! cache-path remote-path)

    Deploys a cached path to the given remote path (a string). Deletes remote path first. Creates parents if necessary. Returns remote path.

    +

    dir

    Top-level cache directory.

    +

    dir-prefix

    What string do we prefix to directories in cache path filenames, to distinguish /foo from /foo/bar?

    +

    Encode

    protocol

    members

    encode-path-component

    (encode-path-component component)

    Encodes datatypes to strings which can be used in filenames. Use escape to escape slashes.

    +

    escape

    (escape string)

    Escapes slashes in filenames.

    +

    file

    (file path)

    The local File backing a given path, whether or not it exists.

    +

    file!

    (file! path)

    Like file, but ensures parents exist.

    +

    file-prefix

    What string do we prefix to files in cache path filenames, to distinguish /foo from /foo/bar?

    +

    fs-path

    (fs-path path)

    Takes a cache path, and returns a sequence of filenames for that path.

    +

    load-edn

    (load-edn path)

    Reads the given cache path as an EDN structure, returning data. Returns nil if file does not exist.

    +

    load-file

    (load-file path)

    The local File backing a given path. Returns nil if no file exists.

    +

    load-string

    (load-string path)

    Returns the cached value for a given path as a string, or nil if uncached.

    +

    locking

    macro

    (locking path & body)

    Acquires a lock for a particular cache path, and evaluates body. Helpful for reducing concurrent evaluation of expensive cache misses.

    +

    save-edn!

    (save-edn! data path)

    Writes the given data structure to an EDN file. Returns data.

    +

    save-file!

    (save-file! file path)

    Caches a File object to the given path. Returns file.

    +

    save-remote!

    (save-remote! remote-path cache-path)

    Caches a remote path (a string) to a cache path by SCPing it down. Returns remote-path.

    +

    save-string!

    (save-string! string path)

    Caches the given string to a cache path. Returns string.

    +

    write-atomic!

    macro

    (write-atomic! [tmp-sym final] & body)

    Writes a file atomically. Takes a binding form and a body, like so

(write-atomic! [tmp-file (io/file "final.txt")]
  (write! tmp-file))

    Creates a temporary file, and binds it to tmp-file. Evals body, presumably modifying tmp-file in some way. If body terminates normally, renames tmp-file to final-file. Ensures temp file is cleaned up.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.generator.context.html b/jepsen.generator.context.html index fce1ccf38..e0dc634d6 100644 --- a/jepsen.generator.context.html +++ b/jepsen.generator.context.html @@ -1,24 +1,24 @@ -jepsen.generator.context documentation

    jepsen.generator.context

    Generators work with an immutable context that tells them what time it is, what processes are available, what process is executing which thread and vice versa, and so on. We need an efficient, high-performance data structure to track this information. This namespace provides that data structure, and functions to alter it.

    +jepsen.generator.context documentation

    jepsen.generator.context

    Generators work with an immutable context that tells them what time it is, what processes are available, what process is executing which thread and vice versa, and so on. We need an efficient, high-performance data structure to track this information. This namespace provides that data structure, and functions to alter it.

    Contexts are intended not only for managing generator-relevant state about active threads and so on; they also can store arbitrary contextual information for generators. For instance, generators may thread state between invocations or layers of the generator stack. To do this, contexts also behave like Clojure maps. They have a single special key, :time; all other keys are available for your use.

    all-but

    (all-but x)

    One thing we do often, and which is expensive, is stripping out the nemesis from the set of active threads using (complement #{:nemesis}). This type encapsulates that notion of “all but x”, and allows us to specialize some expensive functions for speed.

    -

    all-processes

    (all-processes ctx)

    Given a context, returns a Bifurcan ISet of all processes currently belonging to some thread.

    -

    all-thread-count

    (all-thread-count ctx)

    How many threads are in the given context, total?

    -

    all-threads

    (all-threads ctx)

    Given a context, returns a Bifurcan ISet of all threads in it.

    -

    busy-thread

    (busy-thread this time thread)

    Returns context with the given time, and the given thread no longer free.

    -

    context

    (context test)

Constructs a fresh Context for a test. Its initial time is 0. Its threads are the integers from 0 to (:concurrency test), plus a :nemesis. Every thread is free. Each initially runs itself as a process.

    -

    free-processes

    (free-processes ctx)

    Given a context, returns a collection of processes which are not actively processing an invocation.

    -

    free-thread

    (free-thread this time thread)

    Returns context with the given time, and the given thread free.

    -

    free-thread-count

    (free-thread-count ctx)

    How many threads are free in the given context?

    -

    free-threads

    (free-threads ctx)

    Given a context, returns a Bifurcan ISet of threads which are not actively processing an invocation.

    -

    intersect-bitsets

    (intersect-bitsets a b)

    Intersects one bitset with another, immutably.

    -

    make-thread-filter

    (make-thread-filter pred)(make-thread-filter pred ctx)

    We often want to restrict a context to a specific subset of threads matching some predicate. We want to do this a lot. To make this fast, we can pre-compute a function which does this restriction more efficiently than doing it at runtime.

    +

    all-processes

    (all-processes ctx)

    Given a context, returns a Bifurcan ISet of all processes currently belonging to some thread.

    +

    all-thread-count

    (all-thread-count ctx)

    How many threads are in the given context, total?

    +

    all-threads

    (all-threads ctx)

    Given a context, returns a Bifurcan ISet of all threads in it.

    +

    busy-thread

    (busy-thread this time thread)

    Returns context with the given time, and the given thread no longer free.

    +

    context

    (context test)

Constructs a fresh Context for a test. Its initial time is 0. Its threads are the integers from 0 to (:concurrency test), plus a :nemesis. Every thread is free. Each initially runs itself as a process.

    +

    free-processes

    (free-processes ctx)

    Given a context, returns a collection of processes which are not actively processing an invocation.

    +

    free-thread

    (free-thread this time thread)

    Returns context with the given time, and the given thread free.

    +

    free-thread-count

    (free-thread-count ctx)

    How many threads are free in the given context?

    +

    free-threads

    (free-threads ctx)

    Given a context, returns a Bifurcan ISet of threads which are not actively processing an invocation.

    +

    intersect-bitsets

    (intersect-bitsets a b)

    Intersects one bitset with another, immutably.

    +

    make-thread-filter

    (make-thread-filter pred)(make-thread-filter pred ctx)

    We often want to restrict a context to a specific subset of threads matching some predicate. We want to do this a lot. To make this fast, we can pre-compute a function which does this restriction more efficiently than doing it at runtime.

    Call this with a context and a predicate, and it’ll construct a function which restricts any version of that context (e.g. one with the same threads, but maybe a different time or busy state) to just threads matching the given predicate.

    Don’t have a context handy? Pass this just a predicate, and it’ll construct a filter which lazily compiles itself on first invocation, and is fast thereafter.
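
A sketch: restricting contexts to client threads (those with integer names), given some existing context ctx:

    (require '[jepsen.generator.context :as context])

    (def clients-only (context/make-thread-filter integer?))

    (clients-only ctx) ; => a version of ctx restricted to integer-named threads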

    -

    process->thread

    (process->thread ctx process)

    Given a process, looks up which thread is executing it.

    -

    some-free-process

    (some-free-process ctx)

    Given a context, returns a random free process, or nil if all are busy.

    -

    thread->process

    (thread->process ctx thread)

    Given a thread, looks up which process it’s executing.

    -

    thread-free?

    (thread-free? ctx thread)

    Is the given thread free?

    -

    with-next-process

    (with-next-process ctx thread)

    Replaces a thread’s process with a new one.

    -
    \ No newline at end of file +

    process->thread

    (process->thread ctx process)

    Given a process, looks up which thread is executing it.

    +

    some-free-process

    (some-free-process ctx)

    Given a context, returns a random free process, or nil if all are busy.

    +

    thread->process

    (thread->process ctx thread)

    Given a thread, looks up which process it’s executing.

    +

    thread-free?

    (thread-free? ctx thread)

    Is the given thread free?

    +

    with-next-process

    (with-next-process ctx thread)

    Replaces a thread’s process with a new one.

    +
    \ No newline at end of file diff --git a/jepsen.generator.html b/jepsen.generator.html index 8b0afd5ee..d6bd238ff 100644 --- a/jepsen.generator.html +++ b/jepsen.generator.html @@ -1,6 +1,6 @@ -jepsen.generator documentation

    jepsen.generator

    In a Nutshell

    +jepsen.generator documentation

    jepsen.generator

    In a Nutshell

    Generators tell Jepsen what to do during a test. Generators are purely functional objects which support two functions: op and update. op produces operations for Jepsen to perform: it takes a test and context object, and yields:

    • nil if the generator is exhausted
    • @@ -180,78 +180,80 @@

      Default {:f :read}">{:f :write, :value (rand-int 5)} {:f :read}))

      Promises and delays are generators which ignore updates, yield :pending until realized, then are replaced by whatever generator they contain. Delays are not evaluated until they could produce an op, so you can include them in sequences, phases, etc., and they’ll be evaluated only once prior ops have been consumed.

    all-processes

    (all-processes ctx)

    Given a context, returns a Bifurcan ISet of all processes currently belonging to some thread.

    -

    all-threads

    (all-threads ctx)

    Given a context, returns a Bifurcan ISet of all threads in it.

    -

    any

    (any & gens)

    Takes multiple generators and binds them together. Operations are taken from any generator. Updates are propagated to all generators.

    -

    clients

    (clients client-gen)(clients client-gen nemesis-gen)

    In the single-arity form, wraps a generator such that only clients request operations from it. In its two-arity form, combines a generator of client operations and a generator for nemesis operations into one. When the process requesting an operation is :nemesis, routes to the nemesis generator; otherwise to the client generator.

    -

    concat

    (concat & gens)

    Where your generators are sequences, you can use Clojure’s concat to make them a generator. This concat is useful when you’re trying to concatenate arbitrary generators. Right now, (concat a b c) is simply ’(a b c).

    -

    context

    (context test)

    Constructs a fresh Context for a test. Its initial time is 0. Its threads are the integers from 0 to (:concurrency test), plus a :nemesis). Every thread is free. Each initially runs itself as a process.

    -

    cycle

    (cycle gen)(cycle limit gen)

    Wraps a finite generator so that once it completes (e.g. emits nil), it begins again. With an optional integer limit, repeats the generator that many times. When the generator returns nil, it is reset to its original value and the cycle repeats. Updates are propagated to the current generator, but do not affect the original. Not sure if this is the right call–might change that later.

    -

    cycle-times

    (cycle-times & specs)

    Cycles between several generators on a rotating schedule. Takes a flat series of time, generator pairs, like so:

    +

    all-threads

    (all-threads ctx)

    Given a context, returns a Bifurcan ISet of all threads in it.

    +

    any

    (any & gens)

    Takes multiple generators and binds them together. Operations are taken from any generator. Updates are propagated to all generators.

    +

    clients

    (clients client-gen)(clients client-gen nemesis-gen)

    In the single-arity form, wraps a generator such that only clients request operations from it. In its two-arity form, combines a generator of client operations and a generator for nemesis operations into one. When the process requesting an operation is :nemesis, routes to the nemesis generator; otherwise to the client generator.
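
For example, where r and w stand for whatever client operation generators your test uses, and the nemesis :start/:stop ops are placeholders:

    (gen/clients
      (gen/mix [r w])
      (cycle [(gen/sleep 5) {:type :info, :f :start}
              (gen/sleep 5) {:type :info, :f :stop}]))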

    +

    concat

    (concat & gens)

    Where your generators are sequences, you can use Clojure’s concat to make them a generator. This concat is useful when you’re trying to concatenate arbitrary generators. Right now, (concat a b c) is simply ’(a b c).

    +

    context

    (context test)

    Constructs a fresh Context for a test. Its initial time is 0. Its threads are the integers from 0 to (:concurrency test), plus a :nemesis). Every thread is free. Each initially runs itself as a process.

    +

    cycle

    (cycle gen)(cycle limit gen)

    Wraps a finite generator so that once it completes (e.g. emits nil), it begins again. With an optional integer limit, repeats the generator that many times. When the generator returns nil, it is reset to its original value and the cycle repeats. Updates are propagated to the current generator, but do not affect the original. Not sure if this is the right call–might change that later.

    +

    cycle-times

    (cycle-times & specs)

    Cycles between several generators on a rotating schedule. Takes a flat series of time, generator pairs, like so:

    (cycle-times 5  {:f :write}
                  10 (gen/stagger 1 {:f :read}))
     

    This generator emits writes for five seconds, then staggered reads for ten seconds, then goes back to writes, and so on. Generator state is preserved from cycle to cycle, which makes this suitable for e.g. interleaving quiet periods into a nemesis generator which needs to perform a specific sequence of operations like :add-node, :remove-node, :add-node …

    Updates are propagated to all generators.

    -

    delay

    (delay dt gen)

    Given a time dt in seconds, and an underlying generator gen, constructs a generator which tries to emit operations exactly dt seconds apart. Emits operations more frequently if it falls behind. Like stagger, this should result in histories where operations happen roughly every dt seconds.

    +

    delay

    (delay dt gen)

    Given a time dt in seconds, and an underlying generator gen, constructs a generator which tries to emit operations exactly dt seconds apart. Emits operations more frequently if it falls behind. Like stagger, this should result in histories where operations happen roughly every dt seconds.

    Note that this definition of delay differs from its stateful cousin delay, which a.) introduced dt seconds of delay between completion and subsequent invocation, and b.) emitted 1/dt ops/sec per thread, rather than globally.

    -

    dissoc-vec

    (dissoc-vec v i)

    Cut a single index out of a vector, returning a vector one shorter, without the element at that index.

    -

    each-thread

    (each-thread gen)

    Takes a generator. Constructs a generator which maintains independent copies of that generator for every thread. Each generator sees exactly one thread in its free process list. Updates are propagated to the generator for the thread which emitted the operation.

    -

    each-thread-ensure-context-filters!

    (each-thread-ensure-context-filters! context-filters ctx)

    Ensures an EachThread has context filters for each thread.

    -

    extend-protocol-runtime

    macro

    (extend-protocol-runtime proto klass & specs)

    Extends a protocol to a runtime-defined class. Helpful because some Clojure constructs, like promises, use reify rather than classes, and have no distinct interface we can extend.

    -

    f-map

    (f-map f-map g)

    Takes a function f-map converting op functions (:f op) to other functions, and a generator g. Returns a generator like g, but where fs are replaced according to f-map. Useful for composing generators together for use with a composed nemesis.

    -

    fill-in-op

    (fill-in-op op ctx)

    Takes an operation as a map and fills in missing fields for :type, :process, and :time using context. Returns :pending if no process is free. Turns maps into history Ops.

    -

    filter

    (filter f gen)

    A generator which filters operations from an underlying generator, passing on only those which match (f op). Like map, :pending and nil operations bypass the filter.

    -

    flip-flop

    (flip-flop a b)

    Emits an operation from generator A, then B, then A again, then B again, etc. Stops as soon as any gen is exhausted. Updates are ignored.

    -

    fn-wrapper

    (fn-wrapper f)

    Wraps a function into a wrapper which makes it more efficient to invoke. We memoize the function’s arity, in particular, to reduce reflection.

    -

    free-processes

    (free-processes ctx)

    Given a context, returns a collection of processes which are not actively processing an invocation.

    -

    free-threads

    (free-threads ctx)

    Given a context, returns a Bifurcan ISet of threads which are not actively processing an invocation.

    -

    friendly-exceptions

    (friendly-exceptions gen)

    Wraps a generator, so that exceptions thrown from op and update are wrapped with a :type ::op-threw or ::update-threw Slingshot exception map, including the generator, context, and event which caused the exception.

    -

    Generator

    protocol

    members

    op

    (op gen test context)

Obtains the next operation from this generator. Returns a pair of [op gen’], or [:pending gen], or nil if this generator is exhausted.

    +

    dissoc-vec

    (dissoc-vec v i)

    Cut a single index out of a vector, returning a vector one shorter, without the element at that index.

    +

    each-process

    (each-process gen)

    Takes a generator. Constructs a generator which maintains independent copies of that generator for every process. Each generator sees exactly one thread & process in its free process list. Updates are propagated to the generator for the thread which emitted the operation.

    +

    each-thread

    (each-thread gen)

    Takes a generator. Constructs a generator which maintains independent copies of that generator for every thread. Each generator sees exactly one thread in its free process list. Updates are propagated to the generator for the thread which emitted the operation.

    +

    each-thread-ensure-context-filters!

    (each-thread-ensure-context-filters! context-filters ctx)

    Ensures an EachThread has context filters for each thread.

    +

    extend-protocol-runtime

    macro

    (extend-protocol-runtime proto klass & specs)

    Extends a protocol to a runtime-defined class. Helpful because some Clojure constructs, like promises, use reify rather than classes, and have no distinct interface we can extend.

    +

    f-map

    (f-map f-map g)

    Takes a function f-map converting op functions (:f op) to other functions, and a generator g. Returns a generator like g, but where fs are replaced according to f-map. Useful for composing generators together for use with a composed nemesis.
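
For instance, to retarget a hypothetical partition nemesis generator’s op names:

    (gen/f-map {:start :start-partition
                :stop  :stop-partition}
               partition-gen)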

    +

    fill-in-op

    (fill-in-op op ctx)

    Takes an operation as a map and fills in missing fields for :type, :process, and :time using context. Returns :pending if no process is free. Turns maps into history Ops.

    +

    filter

    (filter f gen)

    A generator which filters operations from an underlying generator, passing on only those which match (f op). Like map, :pending and nil operations bypass the filter.

    +

    flip-flop

    (flip-flop a b)

    Emits an operation from generator A, then B, then A again, then B again, etc. Stops as soon as any gen is exhausted. Updates are ignored.

    +

    fn-wrapper

    (fn-wrapper f)

    Wraps a function into a wrapper which makes it more efficient to invoke. We memoize the function’s arity, in particular, to reduce reflection.

    +

    free-processes

    (free-processes ctx)

    Given a context, returns a collection of processes which are not actively processing an invocation.

    +

    free-threads

    (free-threads ctx)

    Given a context, returns a Bifurcan ISet of threads which are not actively processing an invocation.

    +

    friendly-exceptions

    (friendly-exceptions gen)

    Wraps a generator, so that exceptions thrown from op and update are wrapped with a :type ::op-threw or ::update-threw Slingshot exception map, including the generator, context, and event which caused the exception.

    +

    Generator

    protocol

    members

    op

    (op gen test context)

Obtains the next operation from this generator. Returns a pair of [op gen’], or [:pending gen], or nil if this generator is exhausted.

    update

    (update gen test context event)

    Updates the generator to reflect an event having taken place. Returns a generator (presumably, gen, perhaps with some changes) resulting from the update.

    -

    init!

    (init!)

    We do some magic to extend the Generator protocol over promises etc, but it’s fragile and could break with… I think AOT compilation, but also apparently plain old dependencies? I’m not certain. It’s weird. Just to be safe, we move this into a function that gets called by jepsen.generator.interpreter, so that we observe the real version of the promise reify auto-generated class.

    -

    initialized?

    limit

    (limit remaining gen)

    Wraps a generator and ensures that it returns at most limit operations. Propagates every update to the underlying generator.

    -

    log

    (log msg)

    A generator which, when asked for an operation, logs a message and yields nil. Occurs only once; use repeat to repeat.

    -

    map

    (map f gen)

    A generator which wraps another generator g, transforming operations it generates with (f op). When the underlying generator yields :pending or nil, this generator does too, without calling f. Passes updates to underlying generator.

    -

    mix

    (mix gens)

    A random mixture of several generators. Takes a collection of generators and chooses between them uniformly. Ignores updates; some users create broad (hundreds of generators) mixes.

    +

    init!

    (init!)

    We do some magic to extend the Generator protocol over promises etc, but it’s fragile and could break with… I think AOT compilation, but also apparently plain old dependencies? I’m not certain. It’s weird. Just to be safe, we move this into a function that gets called by jepsen.generator.interpreter, so that we observe the real version of the promise reify auto-generated class.

    +

    initialized?

    limit

    (limit remaining gen)

    Wraps a generator and ensures that it returns at most limit operations. Propagates every update to the underlying generator.

    +

    log

    (log msg)

    A generator which, when asked for an operation, logs a message and yields nil. Occurs only once; use repeat to repeat.

    +

    map

    (map f gen)

    A generator which wraps another generator g, transforming operations it generates with (f op). When the underlying generator yields :pending or nil, this generator does too, without calling f. Passes updates to underlying generator.

    +

    mix

    (mix gens)

    A random mixture of several generators. Takes a collection of generators and chooses between them uniformly. Ignores updates; some users create broad (hundreds of generators) mixes.

To be precise, a mix behaves like a sequence of one-time, randomly selected generators from the given collection. This is efficient, and prevents multiple generators from competing for the next slot, which would make it hard to control the mixture of operations.

    TODO: This can interact badly with generators that return :pending; gen/mix won’t let other generators (which could help us get unstuck!) advance. We should probably cycle on :pending.
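
For example, with op functions like these (the shapes are typical; your test’s ops will differ):

    (defn r [_ _] {:type :invoke, :f :read, :value nil})
    (defn w [_ _] {:type :invoke, :f :write, :value (rand-int 5)})

    (gen/mix [r w])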

    -

    nemesis

    (nemesis nemesis-gen)(nemesis nemesis-gen client-gen)

    In the single-arity form, wraps a generator such that only the nemesis requests operations from it. In its two-arity form, combines a generator of client operations and a generator for nemesis operations into one. When the process requesting an operation is :nemesis, routes to the nemesis generator; otherwise to the client generator.

    -

    on

    For backwards compatibility

    -

    on-threads

    (on-threads f gen)

    Wraps a generator, restricting threads which can use it to only those threads which satisfy (f thread). Alters the context passed to the underlying generator: it will only include free threads and workers satisfying f. Updates are passed on only when the thread performing the update matches f.

    -

    on-threads-context

    (on-threads-context f context)

    For backwards compatibility; filters a context to just threads matching (f thread). Use context/make-thread-filter for performance.

    -

    on-update

    (on-update f gen)

    Wraps a generator with an update handler function. When an update occurs, calls (f this test ctx event), and returns whatever f does–presumably, a new generator. Can also be helpful for side effects–for instance, to update some shared mutable state when an update occurs.

    -

    once

    (once gen)

    Emits only a single item from the underlying generator.

    -

    phases

    (phases & generators)

    Takes several generators, and constructs a generator which evaluates everything from the first generator, then everything from the second, and so on.

    -

    process->thread

    (process->thread ctx process)

    Given a process, looks up which thread is executing it.

    -

    process-limit

    (process-limit n gen)

    Takes a generator and returns a generator with bounded concurrency–it emits operations for up to n distinct processes, but no more.

    +

    nemesis

    (nemesis nemesis-gen)(nemesis nemesis-gen client-gen)

    In the single-arity form, wraps a generator such that only the nemesis requests operations from it. In its two-arity form, combines a generator of client operations and a generator for nemesis operations into one. When the process requesting an operation is :nemesis, routes to the nemesis generator; otherwise to the client generator.

    +

    on

    For backwards compatibility

    +

    on-threads

    (on-threads f gen)

    Wraps a generator, restricting threads which can use it to only those threads which satisfy (f thread). Alters the context passed to the underlying generator: it will only include free threads and workers satisfying f. Updates are passed on only when the thread performing the update matches f.

    +

    on-threads-context

    (on-threads-context f context)

    For backwards compatibility; filters a context to just threads matching (f thread). Use context/make-thread-filter for performance.

    +

    on-update

    (on-update f gen)

    Wraps a generator with an update handler function. When an update occurs, calls (f this test ctx event), and returns whatever f does–presumably, a new generator. Can also be helpful for side effects–for instance, to update some shared mutable state when an update occurs.

    +

    once

    (once gen)

    Emits only a single item from the underlying generator.

    +

    phases

    (phases & generators)

    Takes several generators, and constructs a generator which evaluates everything from the first generator, then everything from the second, and so on.

    +

    process->thread

    (process->thread ctx process)

    Given a process, looks up which thread is executing it.

    +

    process-limit

    (process-limit n gen)

    Takes a generator and returns a generator with bounded concurrency–it emits operations for up to n distinct processes, but no more.

    Specifically, we track the set of all processes in a context’s workers map: the underlying generator can return operations only from contexts such that the union of all processes across all such contexts has cardinality at most n. Tracking the union of all possible processes, rather than just those processes actually performing operations, prevents the generator from “trickling” at the end of a test, i.e. letting only one or two processes continue to perform ops, rather than the full concurrency of the test.

    -

    rand-int-seq

    (rand-int-seq)(rand-int-seq seed)

Generates a reproducible sequence of random longs, given a random seed. If seed is not provided, it is taken from (rand-int).

    -

    repeat

    (repeat gen)(repeat limit gen)

    Wraps a generator so that it emits operations infinitely, or, with an initial limit, up to limit times. Think of this as the inverse of once: where once takes a generator that emits many things and makes it emit one, this takes a generator that emits (presumably) one thing, and makes it emit many.

    +

    rand-int-seq

    (rand-int-seq)(rand-int-seq seed)

Generates a reproducible sequence of random longs, given a random seed. If seed is not provided, it is taken from (rand-int).

    +

    rand-seq

    (rand-seq)(rand-seq seed)

Generates a reproducible sequence of random doubles, given a random seed. If seed is not provided, it is taken from (rand-int).

    +

    repeat

    (repeat gen)(repeat limit gen)

    Wraps a generator so that it emits operations infinitely, or, with an initial limit, up to limit times. Think of this as the inverse of once: where once takes a generator that emits many things and makes it emit one, this takes a generator that emits (presumably) one thing, and makes it emit many.

    The state of the underlying generator is unchanged as repeat yields operations, but repeat does not memoize its results; repeating a nondeterministic generator results in a sequence of different operations.

    -

    reserve

    (reserve & args)

    Takes a series of count, generator pairs, and a final default generator.

    +

    reserve

    (reserve & args)

    Takes a series of count, generator pairs, and a final default generator.

    (reserve 5 write 10 cas read)

    The first 5 threads will call the write generator, the next 10 will emit CAS operations, and the remaining threads will perform reads. This is particularly useful when you want to ensure that two classes of operations have a chance to proceed concurrently–for instance, if writes begin blocking, you might like reads to proceed concurrently without every thread getting tied up in a write.

    Each generator sees a context which only includes the worker threads which will execute that particular generator. Updates from a thread are propagated only to the generator which that thread executes.

    -

    sleep

    (sleep dt)

    Emits exactly one special operation which causes its receiving process to do nothing for dt seconds. Use (repeat (sleep 10)) to sleep repeatedly.

    -

    some-free-process

    (some-free-process ctx)

    Given a context, returns a random free process, or nil if all are busy.

    -

    soonest-op-map

    (soonest-op-map)(soonest-op-map m)(soonest-op-map m1 m2)

    Takes a pair of maps wrapping operations. Each map has the following structure:

    +

    sleep

    (sleep dt)

    Emits exactly one special operation which causes its receiving process to do nothing for dt seconds. Use (repeat (sleep 10)) to sleep repeatedly.

    +

    some-free-process

    (some-free-process ctx)

    Given a context, returns a random free process, or nil if all are busy.

    +

    soonest-op-map

    (soonest-op-map)(soonest-op-map m)(soonest-op-map m1 m2)

    Takes a pair of maps wrapping operations. Each map has the following structure:

:op      An operation
:weight  An optional integer weighting.

    Returns whichever map has an operation which occurs sooner. If one map is nil, the other happens sooner. If one map’s op is :pending, the other happens sooner. If one op has a lower :time, it happens sooner. If the two ops have equal :times, resolves the tie randomly proportional to the two maps’ respective :weights. With weights 2 and 3, returns the first map 2/5 of the time, and the second 3/5 of the time.

    The :weight of the returned map is the sum of both weights if their times are equal, which makes this function suitable for use in a reduction over many generators.

    Why is this nondeterministic? Because we use this function to decide between several alternative generators, and always biasing towards an earlier or later generator could lead to starving some threads or generators.

    -

    stagger

    (stagger dt gen)

Wraps a generator. Operations from that generator are scheduled at uniformly random intervals between 0 and 2 * dt seconds.

    +

    stagger

    (stagger dt gen)

Wraps a generator. Operations from that generator are scheduled at uniformly random intervals between 0 and 2 * dt seconds.

    Unlike Jepsen’s original version of stagger, this actually means ‘schedule at roughly every dt seconds’, rather than ‘introduce roughly dt seconds of latency between ops’, which makes this less sensitive to request latency variations.

    Also note that unlike Jepsen’s original version of stagger, this delay applies to all operations, not to each thread independently. If your old stagger dt is 10, and your concurrency is 5, your new stagger dt should be 2.

    -

    synchronize

    (synchronize gen)

    Takes a generator, and waits for all workers to be free before it begins.

    -

    then

    (then a b)

    Generator A, synchronize, then generator B. Note that this takes its arguments backwards: b comes before a. Why? Because it reads better in ->> composition. You can say:

    +

    synchronize

    (synchronize gen)

    Takes a generator, and waits for all workers to be free before it begins.

    +

    then

    (then a b)

    Generator A, synchronize, then generator B. Note that this takes its arguments backwards: b comes before a. Why? Because it reads better in ->> composition. You can say:

    (->> (fn [] {:f :write :value 2})
          (limit 3)
          (then (once {:f :read})))
     
    -

    thread->process

    (thread->process ctx thread)

    Given a thread, looks up which process it’s executing.

    -

    time-limit

    (time-limit dt gen)

Takes a time in seconds, and an underlying generator. Once this emits an operation (taken from that underlying generator), it will only emit operations for dt seconds.

    -

    trace

    (trace k gen)

    Wraps a generator, logging calls to op and update before passing them on to the underlying generator. Takes a key k, which is included in every log line.

    -

    tracking-get!

    (tracking-get! read-keys m k not-found)

    Takes an ArrayList, a map, a key, and a not-found value. Reads key from map, returning it or not-found. Adds the key to the list if it was in the map. Yourkit led me down this path.

    -

    until-ok

    (until-ok gen)

    Wraps a generator, yielding operations from it until one of those operations completes with :type :ok.

    -

    validate

    (validate gen)

    Validates the well-formedness of operations emitted from the underlying generator.

    -
    \ No newline at end of file +

    thread->process

    (thread->process ctx thread)

    Given a thread, looks up which process it’s executing.

    +

    time-limit

    (time-limit dt gen)

Takes a time in seconds, and an underlying generator. Once this emits an operation (taken from that underlying generator), it will only emit operations for dt seconds.
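
Putting several of these together, a typical test generator looks roughly like this sketch (r and w as under mix above; the nemesis :start/:stop ops are placeholders):

    (->> (gen/mix [r w])
         (gen/stagger 1/50)
         (gen/nemesis
           (cycle [(gen/sleep 5)
                   {:type :info, :f :start}
                   (gen/sleep 5)
                   {:type :info, :f :stop}]))
         (gen/time-limit 30))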

    +

    trace

    (trace k gen)

    Wraps a generator, logging calls to op and update before passing them on to the underlying generator. Takes a key k, which is included in every log line.

    +

    tracking-get!

    (tracking-get! read-keys m k not-found)

    Takes an ArrayList, a map, a key, and a not-found value. Reads key from map, returning it or not-found. Adds the key to the list if it was in the map. Yourkit led me down this path.

    +

    until-ok

    (until-ok gen)

    Wraps a generator, yielding operations from it until one of those operations completes with :type :ok.

    +

    validate

    (validate gen)

    Validates the well-formedness of operations emitted from the underlying generator.

    +
    \ No newline at end of file diff --git a/jepsen.generator.interpreter.html b/jepsen.generator.interpreter.html index d7f8e37fe..3480dd128 100644 --- a/jepsen.generator.interpreter.html +++ b/jepsen.generator.interpreter.html @@ -1,17 +1,17 @@ -jepsen.generator.interpreter documentation

    jepsen.generator.interpreter

    This namespace interprets operations from a pure generator, handling worker threads, spawning processes for interacting with clients and nemeses, and recording a history.

    +jepsen.generator.interpreter documentation

    jepsen.generator.interpreter

    This namespace interprets operations from a pure generator, handling worker threads, spawning processes for interacting with clients and nemeses, and recording a history.

    client-nemesis-worker

    (client-nemesis-worker)

    A Worker which can spawn both client and nemesis-specific workers based on the :client and :nemesis in a test.

    -

    goes-in-history?

    (goes-in-history? op)

    Should this operation be journaled to the history? We exclude :log and :sleep ops right now.

    -

    max-pending-interval

    When the generator is :pending, this controls the maximum interval before we’ll update the context and check the generator for an operation again. Measured in microseconds.

    -

    run!

    (run! test)

    Takes a test with a :store :handle open. Causes the test’s reference to the :generator to be forgotten, to avoid retaining the head of infinite seqs. Opens a writer for the test’s history using that handle. Creates an initial context from test and evaluates all ops from (:gen test). Spawns a thread for each worker, and hands those workers operations from gen; each thread applies the operation using (:client test) or (:nemesis test), as appropriate. Invocations and completions are journaled to a history on disk. Returns a new test with no :generator and a completed :history.

    +

    goes-in-history?

    (goes-in-history? op)

    Should this operation be journaled to the history? We exclude :log and :sleep ops right now.

    +

    max-pending-interval

    When the generator is :pending, this controls the maximum interval before we’ll update the context and check the generator for an operation again. Measured in microseconds.

    +

    run!

    (run! test)

    Takes a test with a :store :handle open. Causes the test’s reference to the :generator to be forgotten, to avoid retaining the head of infinite seqs. Opens a writer for the test’s history using that handle. Creates an initial context from test and evaluates all ops from (:gen test). Spawns a thread for each worker, and hands those workers operations from gen; each thread applies the operation using (:client test) or (:nemesis test), as appropriate. Invocations and completions are journaled to a history on disk. Returns a new test with no :generator and a completed :history.

Generators are automatically wrapped in friendly-exceptions and validate. Clients are wrapped in a validator as well.

    Automatically initializes the generator system, which, on first invocation, extends the Generator protocol over some dynamic classes like (promise).

    -

    spawn-worker

    (spawn-worker test out worker id)

    Creates communication channels and spawns a worker thread to evaluate the given worker. Takes a test, a Queue which should receive completion operations, a Worker object, and a worker id.

    +

    spawn-worker

    (spawn-worker test out worker id)

    Creates communication channels and spawns a worker thread to evaluate the given worker. Takes a test, a Queue which should receive completion operations, a Worker object, and a worker id.

    Returns a map with:

:id      The worker ID
:future  The future evaluating the worker code
:in      A Queue which delivers invocations to the worker

    -

    Worker

    protocol

    This protocol allows the interpreter to manage the lifecycle of stateful workers. All operations on a Worker are guaranteed to be executed by a single thread.

    +

    Worker

    protocol

    This protocol allows the interpreter to manage the lifecycle of stateful workers. All operations on a Worker are guaranteed to be executed by a single thread.

    members

    close!

    (close! this test)

    Closes this worker, releasing any resources it may hold.

    invoke!

    (invoke! this test op)

    Asks the worker to perform this operation, and returns a completed operation.

    open

    (open this test id)

    Spawns a new Worker process for the given worker ID.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.generator.test.html b/jepsen.generator.test.html index c5cc12d4e..03d6ef0b6 100644 --- a/jepsen.generator.test.html +++ b/jepsen.generator.test.html @@ -1,20 +1,20 @@ -jepsen.generator.test documentation

    jepsen.generator.test

    This namespace contains functions for testing generators. See the jepsen.generator-test namespace in the test/ directory for a concrete example of how these functions can be used.

    +jepsen.generator.test documentation

    jepsen.generator.test

    This namespace contains functions for testing generators. See the jepsen.generator-test namespace in the test/ directory for a concrete example of how these functions can be used.

    NOTE: While the simulate function is considered stable at this point, the others might still be subject to change – use with care and expect possible breakage in future releases.

    default-context

    A default initial context for running these tests. Two worker threads, one nemesis.

    -

    default-test

    A default test map.

    -

    imperfect

    (imperfect gen)(imperfect ctx gen)

    Simulates the series of ops obtained from a generator where threads alternately fail, info, then ok, and repeat, taking 10 ns each. Returns invocations and completions.

    -

    invocations

    (invocations history)

    Only invokes, not returns

    -

    n+nemesis-context

    (n+nemesis-context n)

    A context with n numeric worker threads and one nemesis.

    -

    perfect

    (perfect gen)(perfect ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation successfully in 10 nanoseconds. Returns only invocations.

    -

    perfect*

    (perfect* gen)(perfect* ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation successfully in 10 nanoseconds. Returns full history.

    -

    perfect-info

    (perfect-info gen)(perfect-info ctx gen)

    Simulates the series of ops obtained from a generator where every operation crashes with :info in 10 nanoseconds. Returns only invocations.

    -

    perfect-latency

    How long perfect operations take

    -

    quick

    (quick gen)(quick ctx gen)

    Like quick-ops, but returns just invocations.

    -

    quick-ops

    (quick-ops gen)(quick-ops ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation perfectly, immediately, and with zero latency.

    -

    rand-seed

    We need tests to be deterministic for reproducibility, but also pseudorandom. Changing this seed will force rewriting some tests, but it might be necessary for discovering edge cases.

    -

    simulate

    (simulate gen complete-fn)(simulate ctx gen complete-fn)

    Simulates the series of operations obtained from a generator, given a function that takes a context and op and returns the completion for that op.

    +

    default-test

    A default test map.

    +

    imperfect

    (imperfect gen)(imperfect ctx gen)

    Simulates the series of ops obtained from a generator where threads alternately fail, info, then ok, and repeat, taking 10 ns each. Returns invocations and completions.

    +

    invocations

    (invocations history)

    Only invokes, not returns

    +

    n+nemesis-context

    (n+nemesis-context n)

    A context with n numeric worker threads and one nemesis.

    +

    perfect

    (perfect gen)(perfect ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation successfully in 10 nanoseconds. Returns only invocations.

    +

    perfect*

    (perfect* gen)(perfect* ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation successfully in 10 nanoseconds. Returns full history.

    +

    perfect-info

    (perfect-info gen)(perfect-info ctx gen)

    Simulates the series of ops obtained from a generator where every operation crashes with :info in 10 nanoseconds. Returns only invocations.

    +

    perfect-latency

    How long perfect operations take

    +

    quick

    (quick gen)(quick ctx gen)

    Like quick-ops, but returns just invocations.

    +

    quick-ops

    (quick-ops gen)(quick-ops ctx gen)

    Simulates the series of ops obtained from a generator where the system executes every operation perfectly, immediately, and with zero latency.

    +

    rand-seed

    We need tests to be deterministic for reproducibility, but also pseudorandom. Changing this seed will force rewriting some tests, but it might be necessary for discovering edge cases.

    +

    simulate

    (simulate gen complete-fn)(simulate ctx gen complete-fn)

    Simulates the series of operations obtained from a generator, given a function that takes a context and op and returns the completion for that op.

Strips out op :index fields–they’re generally not as useful for testing.
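
For instance, a sketch in which every operation completes :ok:

    (require '[jepsen.generator :as gen]
             '[jepsen.generator.test :as gen.test])

    (gen.test/simulate (gen/limit 2 (gen/repeat {:f :read}))
                       (fn [ctx op] (assoc op :type :ok)))
    ;; => roughly, alternating :invoke and :ok read operations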

    -

    with-fixed-rand-int

    macro

    (with-fixed-rand-int seed & body)

    Rebinds rand-int to yield a deterministic series of random values. Definitely not threadsafe, but fine for tests I think.

    -
    \ No newline at end of file +

    with-fixed-rands

    macro

    (with-fixed-rands seed & body)

    Rebinds rand, rand-int, and rand-nth to yield a deterministic series of random values. Definitely not threadsafe, but fine for tests I think.

    +
    \ No newline at end of file diff --git a/jepsen.generator.translation-table.html b/jepsen.generator.translation-table.html index 37ab2c8b2..2f78a231b 100644 --- a/jepsen.generator.translation-table.html +++ b/jepsen.generator.translation-table.html @@ -1,11 +1,11 @@ -jepsen.generator.translation-table documentation

    jepsen.generator.translation-table

    We burn a lot of time in hashcode and map manipulation for thread names, which are mostly integers 0…n, but sometimes non-integer names like :nemesis. It’s nice to be able to represent thread state internally as purely integers. To do this, we compute a one-time translation table which lets us map those names to integers and vice-versa.

    +jepsen.generator.translation-table documentation

    jepsen.generator.translation-table

    We burn a lot of time in hashcode and map manipulation for thread names, which are mostly integers 0…n, but sometimes non-integer names like :nemesis. It’s nice to be able to represent thread state internally as purely integers. To do this, we compute a one-time translation table which lets us map those names to integers and vice-versa.

    all-names

    (all-names translation-table)

    A sequence of all names in the translation table, in the exact order of thread indices. Index 0’s name comes first, then 1, and so on.

    -

    index->name

    (index->name translation-table thread-index)

    Turns a thread index (an int) into a thread name (e.g. 0, 5, or :nemesis).

    -

    indices->names

    (indices->names translation-table indices)

    Takes a translation table and a BitSet of thread indices. Constructs a Bifurcan ISet out of those threads.

    -

    name->index

    (name->index translation-table thread-name)

    Turns a thread name (e.g. 0, 5, or :nemesis) into a primitive int.

    -

    names->indices

    (names->indices translation-table names)

    Takes a translation table and a collection of thread names. Constructs a BitSet of those thread indices.

    -

    thread-count

    (thread-count translation-table)

    How many threads in a translation table in all?

    -

    translation-table

    (translation-table int-thread-count named-threads)

    Takes a number of integer threads and a collection of named threads, and computes a translation table.

    -
    \ No newline at end of file +

    index->name

    (index->name translation-table thread-index)

    Turns a thread index (an int) into a thread name (e.g. 0, 5, or :nemesis).

    +

    indices->names

    (indices->names translation-table indices)

    Takes a translation table and a BitSet of thread indices. Constructs a Bifurcan ISet out of those threads.

    +

    name->index

    (name->index translation-table thread-name)

    Turns a thread name (e.g. 0, 5, or :nemesis) into a primitive int.

    +

    names->indices

    (names->indices translation-table names)

    Takes a translation table and a collection of thread names. Constructs a BitSet of those thread indices.

    +

    thread-count

    (thread-count translation-table)

    How many threads in a translation table in all?

    +

    translation-table

    (translation-table int-thread-count named-threads)

    Takes a number of integer threads and a collection of named threads, and computes a translation table.

    +
\ No newline at end of file
diff --git a/jepsen.independent.html b/jepsen.independent.html
index 8520e0cfb..074df1240 100644
--- a/jepsen.independent.html
+++ b/jepsen.independent.html
@@ -1,24 +1,24 @@
-jepsen.independent documentation

    jepsen.independent

    Some tests are expensive to check–for instance, linearizability–which requires we verify only short histories. But if histories are short, we may not be able to sample often or long enough to reveal concurrency errors. This namespace supports splitting a test into independent components–for example taking a test of a single register and lifting it to a map of keys to registers.

    +jepsen.independent documentation

    jepsen.independent

    Some tests are expensive to check–for instance, linearizability–which requires we verify only short histories. But if histories are short, we may not be able to sample often or long enough to reveal concurrency errors. This namespace supports splitting a test into independent components–for example taking a test of a single register and lifting it to a map of keys to registers.

    checker

    (checker checker)

    Takes a checker that operates on :values like v, and lifts it to a checker that operates on histories with values of [k v] tuples–like those generated by sequential-generator.

    We partition the history into (count (distinct keys)) subhistories. The subhistory for key k contains every element from the original history except those whose values are MapEntries with a different key. This means that every history sees, for example, un-keyed nemesis operations or informational logging.

    The checker we build is valid iff the given checker is valid for all subhistories. Under the :results key we store a map of keys to the results from the underlying checker on the subhistory for that key. :failures is the subset of that :results map which were not valid.

    -

    concurrent-generator

    (concurrent-generator n keys fgen)

Takes a positive integer n, a sequence of keys (k1 k2 …) and a function (fgen k) which, when called with a key, yields a generator. Returns a generator which splits up threads into groups of n threads per key, and has each group work on a key for some time. Once a key’s generator is exhausted, it obtains a new key, constructs a new generator from that key, and moves on.

    +

    concurrent-generator

    (concurrent-generator n keys fgen)

Takes a positive integer n, a sequence of keys (k1 k2 …) and a function (fgen k) which, when called with a key, yields a generator. Returns a generator which splits up threads into groups of n threads per key, and has each group work on a key for some time. Once a key’s generator is exhausted, it obtains a new key, constructs a new generator from that key, and moves on.

    Threads working with this generator are assumed to have contiguous IDs, starting at 0. Violating this assumption results in uneven allocation of threads to groups.

    Excludes the nemesis by design; only worker threads run here.

    Updates are routed to the generator which that thread is currently executing.
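An illustrative sketch of lifting a single-register workload across many keys; here r, w, and cas stand in for your own op-generating functions, and the knossos model with the linearizability checker is just one common pairing:

    (require '[jepsen.checker :as checker]
             '[jepsen.generator :as gen]
             '[jepsen.independent :as independent]
             '[knossos.model :as model])

    {:generator (independent/concurrent-generator
                  10                        ; ten worker threads per key
                  (range)                   ; an endless supply of keys
                  (fn [k]
                    (gen/mix [r w cas])))   ; r, w, cas: your op-generating fns
     :checker   (independent/checker
                  (checker/linearizable {:model (model/cas-register)}))}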

    -

    dir

    What directory should we write independent results to?

    -

    group-threads

    (group-threads n ctx)

Given a group size and pure generator context, returns a collection of collections of threads, one per group.

    -

    history-keys

    (history-keys history)

    Takes a history and returns the set of keys in it.

    -

    make-group->threads

    (make-group->threads n ctx)

    Given a group size and pure generator context, returns a vector where each element is the set of threads in the group corresponding to that index.

    -

    make-thread->group

    (make-thread->group n ctx)

    Given a group size and pure generator context, returns a map of threads to groups.

    -

    sequential-generator

    (sequential-generator keys fgen)

    Takes a sequence of keys k1 k2 …, and a function (fgen k) which, when called with a key, yields a generator. Returns a generator which starts with the first key k1 and constructs a generator gen1 via (fgen k1), returns elements from gen1 until it is exhausted, then moves to k2.

    +

    dir

    What directory should we write independent results to?

    +

    group-threads

    (group-threads n ctx)

Given a group size and pure generator context, returns a collection of collections of threads, one per group.

    +

    history-keys

    (history-keys history)

    Takes a history and returns the set of keys in it.

    +

    make-group->threads

    (make-group->threads n ctx)

    Given a group size and pure generator context, returns a vector where each element is the set of threads in the group corresponding to that index.

    +

    make-thread->group

    (make-thread->group n ctx)

    Given a group size and pure generator context, returns a map of threads to groups.

    +

    sequential-generator

    (sequential-generator keys fgen)

    Takes a sequence of keys k1 k2 …, and a function (fgen k) which, when called with a key, yields a generator. Returns a generator which starts with the first key k1 and constructs a generator gen1 via (fgen k1), returns elements from gen1 until it is exhausted, then moves to k2.

The generator wraps each :value in the operations it generates in a [k1 value] tuple.

    fgen must be pure.

    -

    subhistories

    (subhistories ks history)

    Takes a collection of keys and a history. Runs a concurrent fold over the history, breaking it into a map of keys to Histories for those keys. Unwraps tuples. Materializes everything in memory; later if we want to do ginormous histories we should spill to disk.

    -

    tuple

    (tuple k v)

    Constructs a kv tuple

    -

    tuple-gen

    (tuple-gen k gen)

Wraps a generator so that the :value of each :invoke op it generates is a [k v] tuple.

    -

    tuple?

    (tuple? value)

    Is the given value generated by an independent generator?

    -
    \ No newline at end of file +

    subhistories

    (subhistories ks history)

    Takes a collection of keys and a history. Runs a concurrent fold over the history, breaking it into a map of keys to Histories for those keys. Unwraps tuples. Materializes everything in memory; later if we want to do ginormous histories we should spill to disk.

    +

    tuple

    (tuple k v)

    Constructs a kv tuple

    +

    tuple-gen

    (tuple-gen k gen)

Wraps a generator so that the :value of each :invoke op it generates is a [k v] tuple.

    +

    tuple?

    (tuple? value)

    Is the given value generated by an independent generator?

    +
\ No newline at end of file
diff --git a/jepsen.lazyfs.html b/jepsen.lazyfs.html
index c49236908..a724d58e8 100644
--- a/jepsen.lazyfs.html
+++ b/jepsen.lazyfs.html
@@ -1,27 +1,27 @@
-jepsen.lazyfs documentation

    jepsen.lazyfs

Lazyfs allows the injection of filesystem-level faults: specifically, losing data which was written to disk but not fsynced. This namespace lets you mount a specific directory as a lazyfs filesystem, and offers a DB which mounts/unmounts it and downloads the lazyfs log file; this can be composed into your own database. You can then call lose-unfsynced-writes! as a part of your database’s db/kill! implementation, likely after killing your DB process itself.

    +jepsen.lazyfs documentation

    jepsen.lazyfs

Lazyfs allows the injection of filesystem-level faults: specifically, losing data which was written to disk but not fsynced. This namespace lets you mount a specific directory as a lazyfs filesystem, and offers a DB which mounts/unmounts it and downloads the lazyfs log file; this can be composed into your own database. You can then call lose-unfsynced-writes! as a part of your database’s db/kill! implementation, likely after killing your DB process itself.

    bin

    The lazyfs binary

    -

    checkpoint!

    (checkpoint! db-or-lazyfs-map)

    Forces the given lazyfs map or DB to flush writes to disk.

    -

    commit

    What version should we check out and build?

    -

    config

    (config {:keys [log-file fifo fifo-completed cache-size]})

    The lazyfs config file text. Takes a lazyfs map

    -

    db

    (db dir-or-lazyfs)

    Takes a directory or a lazyfs map and constructs a DB whose setup installs lazyfs and mounts the given lazyfs dir.

    -

    dir

    Where do we install lazyfs to on the remote node?

    -

    fifo!

    (fifo! {:keys [user fifo fifo-completed]} cmd)

    Sends a string to the fifo channel for the given lazyfs map.

    -

    fuse-dev

    The path to the fuse device.

    -

    install!

    (install!)

    Installs lazyfs on the currently-bound remote node.

    -

    lazyfs

    (lazyfs x)

    Takes a directory as a string, or a map of options, or a full lazyfs map, which is passed through unaltered. Constructs a lazyfs map of all the files we need to run a lazyfs for a directory. Map options are:

    +

    checkpoint!

    (checkpoint! db-or-lazyfs-map)

    Forces the given lazyfs map or DB to flush writes to disk.

    +

    commit

    What version should we check out and build?

    +

    config

    (config {:keys [log-file fifo fifo-completed cache-size]})

    The lazyfs config file text. Takes a lazyfs map

    +

    db

    (db dir-or-lazyfs)

    Takes a directory or a lazyfs map and constructs a DB whose setup installs lazyfs and mounts the given lazyfs dir.

    +

    dir

    Where do we install lazyfs to on the remote node?

    +

    fifo!

    (fifo! {:keys [user fifo fifo-completed]} cmd)

    Sends a string to the fifo channel for the given lazyfs map.

    +

    fuse-dev

    The path to the fuse device.

    +

    install!

    (install!)

    Installs lazyfs on the currently-bound remote node.

    +

    lazyfs

    (lazyfs x)

    Takes a directory as a string, or a map of options, or a full lazyfs map, which is passed through unaltered. Constructs a lazyfs map of all the files we need to run a lazyfs for a directory. Map options are:

:dir         The directory to mount
:user        Which user should run lazyfs? Default “root”.
:chown       Who to set as the owner of the directory. Defaults to “:”
:cache-size  The size of the lazyfs page cache. Should be a string like “0.5GB”

    -

    lose-unfsynced-writes!

    (lose-unfsynced-writes! db-or-lazyfs-map)

    Takes a lazyfs map or a lazyfs DB. Asks the local node to lose any writes to the given lazyfs map which have not been fsynced yet.

    -

    mount!

    (mount! {:keys [dir data-dir lazyfs-dir chown user config-file log-file], :as lazyfs})

    Takes a lazyfs map, creates directories and config files, and starts the lazyfs daemon. You likely want to call this before beginning database setup. Returns the lazyfs map.

    -

    nemesis

    (nemesis lazyfs)

A nemesis which injects faults into the given lazyfs map by writing to its fifo. Types of faults (:f) supported:

    +

    lose-unfsynced-writes!

    (lose-unfsynced-writes! db-or-lazyfs-map)

    Takes a lazyfs map or a lazyfs DB. Asks the local node to lose any writes to the given lazyfs map which have not been fsynced yet.
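A sketch of composing lazyfs into your own DB; my-install!, my-start!, and my-kill! are hypothetical stand-ins for your database logic:

    (require '[jepsen.db :as db]
             '[jepsen.lazyfs :as lazyfs])

    (def fs    (lazyfs/lazyfs "/opt/mydb/data"))
    (def fs-db (lazyfs/db fs))

    (def my-db
      (reify
        db/DB
        (setup!    [_ test node] (db/setup! fs-db test node) (my-install! test node))
        (teardown! [_ test node] (my-kill! test node) (db/teardown! fs-db test node))
        db/Kill
        (start! [_ test node] (my-start! test node))
        (kill!  [_ test node]
          (my-kill! test node)
          ;; once the process is dead, drop anything it never fsynced
          (lazyfs/lose-unfsynced-writes! fs))))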

    +

    mount!

    (mount! {:keys [dir data-dir lazyfs-dir chown user config-file log-file], :as lazyfs})

    Takes a lazyfs map, creates directories and config files, and starts the lazyfs daemon. You likely want to call this before beginning database setup. Returns the lazyfs map.

    +

    nemesis

    (nemesis lazyfs)

A nemesis which injects faults into the given lazyfs map by writing to its fifo. Types of faults (:f) supported:

    :lose-unfsynced-writes

    Forgets any writes which were not fsynced. The
     :value should be a list of nodes you'd like to lose un-fsynced writes on.
     

    You don’t necessarily need to use this–I haven’t figured out how to integrate it well into jepsen.nemesis combined. Once we start getting other classes of faults it will probably make sense for this nemesis to get more use and expand.

    -

    real-extension

    When we mount a lazyfs directory, it’s backed by a real directory on the underlying filesystem: e.g. ‘foo’ is backed by ‘foo.real’. We name this directory using this extension.

    -

    repo-url

    Where can we clone lazyfs from?

    -

    start-daemon!

    (start-daemon! opts)

    Starts the lazyfs daemon once preparation is complete. We daemonize ourselves so that we can get logs–also it looks like the built-in daemon might not work right now.

    -

    umount!

    (umount! {:keys [lazyfs-dir dir], :as lazyfs})

    Stops the given lazyfs map and destroys the lazyfs directory. You probably want to call this as a part of database teardown.

    -
    \ No newline at end of file +

    real-extension

    When we mount a lazyfs directory, it’s backed by a real directory on the underlying filesystem: e.g. ‘foo’ is backed by ‘foo.real’. We name this directory using this extension.

    +

    repo-url

    Where can we clone lazyfs from?

    +

    start-daemon!

    (start-daemon! opts)

    Starts the lazyfs daemon once preparation is complete. We daemonize ourselves so that we can get logs–also it looks like the built-in daemon might not work right now.

    +

    umount!

    (umount! {:keys [lazyfs-dir dir], :as lazyfs})

    Stops the given lazyfs map and destroys the lazyfs directory. You probably want to call this as a part of database teardown.

    +
\ No newline at end of file
diff --git a/jepsen.nemesis.combined.html b/jepsen.nemesis.combined.html
index 1bd9ba557..2440b4277 100644
--- a/jepsen.nemesis.combined.html
+++ b/jepsen.nemesis.combined.html
@@ -1,21 +1,21 @@
-jepsen.nemesis.combined documentation

    jepsen.nemesis.combined

    A nemesis which combines common operations on nodes and processes: clock skew, crashes, pauses, and partitions. So far, writing these sorts of nemeses has involved lots of special cases. I expect that the API for specifying these nemeses is going to fluctuate as we figure out how to integrate those special cases appropriately. Consider this API unstable.

    +jepsen.nemesis.combined documentation

    jepsen.nemesis.combined

    A nemesis which combines common operations on nodes and processes: clock skew, crashes, pauses, and partitions. So far, writing these sorts of nemeses has involved lots of special cases. I expect that the API for specifying these nemeses is going to fluctuate as we figure out how to integrate those special cases appropriately. Consider this API unstable.

    This namespace introduces a new abstraction. A nemesis+generator is a map with a nemesis and a generator for that nemesis. This enables us to write an algebra for composing both simultaneously. We call checkers+generators+clients a “workload”, but I don’t have a good word for this except “nemesis”. If you can think of a good word, please let me know.

    We also take advantage of the Process and Pause protocols in jepsen.db, which allow us to start, kill, pause, and resume processes.

    clock-package

    (clock-package opts)

    A nemesis and generator package for modifying clocks. Options as for nemesis-package.

    -

    compose-packages

    (compose-packages packages)

    Takes a collection of nemesis+generators packages and combines them into one. Generators are combined with gen/any. Final generators proceed sequentially.

    -

    db-generators

    (db-generators opts)

    A map with a :generator and a :final-generator for DB-related operations. Options are from nemesis-package.

    -

    db-nemesis

    (db-nemesis db)

    A nemesis which can perform various DB-specific operations on nodes. Takes a database to operate on. This nemesis responds to the following f’s:

    +

    compose-packages

    (compose-packages packages)

    Takes a collection of nemesis+generators packages and combines them into one. Generators are combined with gen/any. Final generators proceed sequentially.
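For example, a sketch in which opts is a shared option map as described under nemesis-package:

    (require '[jepsen.nemesis.combined :as nc])

    (def pkg
      (nc/compose-packages
        [(nc/partition-package opts)
         (nc/clock-package opts)]))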

    +

    db-generators

    (db-generators opts)

    A map with a :generator and a :final-generator for DB-related operations. Options are from nemesis-package.

    +

    db-nemesis

    (db-nemesis db)

    A nemesis which can perform various DB-specific operations on nodes. Takes a database to operate on. This nemesis responds to the following f’s:

    :start :kill :pause :resume

    In all cases, the :value is a node spec, as interpreted by db-nodes.

    -

    db-nodes

    (db-nodes test db node-spec)

    Takes a test, a DB, and a node specification. Returns a collection of nodes taken from that test. node-spec may be one of:

    +

    db-nodes

    (db-nodes test db node-spec)

    Takes a test, a DB, and a node specification. Returns a collection of nodes taken from that test. node-spec may be one of:

nil             - Chooses a random, non-empty subset of nodes
:one            - Chooses a single random node
:minority       - Chooses a random minority of nodes
:majority       - Chooses a random majority of nodes
:minority-third - Up to, but not including, 1/3rd of nodes
:primaries      - A random nonempty subset of nodes which we think are primaries
:all            - All nodes
“a”, …          - The specified nodes

    -

    db-package

    (db-package opts)

    A nemesis and generator package for acting on a single DB. Options are from nemesis-package.

    -

    default-interval

    The default interval, in seconds, between nemesis operations.

    -

    f-map

    (f-map lift pkg)

    Takes a function lift which (presumably injectively) transforms the :f values used in operations, and a nemesis package. Yields a new nemesis package which uses the lifted fs. See generator/f-map and nemesis/f-map.

    -

    f-map-perf

    (f-map-perf lift perf)

    Takes a perf map, and transforms the fs in it using lift.

    -

    file-corruption-nemesis

    (file-corruption-nemesis db)(file-corruption-nemesis db bitflip truncate)

    db-package

    (db-package opts)

    A nemesis and generator package for acting on a single DB. Options are from nemesis-package.

    +

    default-interval

    The default interval, in seconds, between nemesis operations.

    +

    f-map

    (f-map lift pkg)

    Takes a function lift which (presumably injectively) transforms the :f values used in operations, and a nemesis package. Yields a new nemesis package which uses the lifted fs. See generator/f-map and nemesis/f-map.
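A sketch of lifting an existing package pkg, so that two copies of the same kind of package could coexist in one test:

    (require '[jepsen.nemesis.combined :as nc])

    ;; pkg: an existing nemesis+generator package
    (def lifted (nc/f-map (fn [f] [:dc-1 f]) pkg))
    ;; an op that was, say, {:f :start-partition ...} now arrives as
    ;; {:f [:dc-1 :start-partition] ...}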

    +

    f-map-perf

    (f-map-perf lift perf)

    Takes a perf map, and transforms the fs in it using lift.

    +

    file-corruption-nemesis

    (file-corruption-nemesis db)(file-corruption-nemesis db bitflip truncate)

    Wraps jepsen.nemesis/bitflip and jepsen.nemesis/truncate-file to corrupt files.

    Responds to:

    {:f :bitflip  :value [:node-spec ... ; target nodes as interpreted by db-nodes
                           {:file "/path/to/file/or/dir" :probability 1e-5}]} 
    @@ -23,7 +23,7 @@
                           {:file "/path/to/file/or/dir" :drop {:distribution :geometric :p 1e-3}}]} 
     

    See jepsen.nemesis.combined/file-corruption-package.

    -

    file-corruption-package

    (file-corruption-package {:keys [faults db file-corruption interval], :as _opts})

    A nemesis and generator package that corrupts files.

    +

    file-corruption-package

    (file-corruption-package {:keys [faults db file-corruption interval], :as _opts})

    A nemesis and generator package that corrupts files.

    Opts:

    {:file-corruption
      {:targets     [...] ; A collection of node specs, e.g. [:one, ["n1", "n2"], :all]
    @@ -43,9 +43,9 @@
     

    :probability or :drop can be specified as a single value or a distribution-map. Use a distribution-map to generate a new random value for each operation using jepsen.util/rand-distribution.

    See jepsen.nemesis/bitflip and jepsen.nemesis/truncate-file.

    Additional options as for nemesis-package.

    -

    grudge

    (grudge test db part-spec)

    Computes a grudge from a partition spec. Spec may be one of:

    +

    grudge

    (grudge test db part-spec)

    Computes a grudge from a partition spec. Spec may be one of:

:one              Isolates a single node
:majority         A clean majority/minority split
:majorities-ring  Overlapping majorities in a ring
:minority-third   Cleanly splits away up to, but not including, 1/3rd of nodes
:primaries        Isolates a nonempty subset of primaries into single-node components

    -

    nemesis-package

    (nemesis-package opts)

    Takes an option map, and returns a map with a :nemesis, a :generator for its operations, a :final-generator to clean up any failure modes at the end of a test, and a :perf map that can be passed to checker/perf to render nice graphs.

    +

    nemesis-package

    (nemesis-package opts)

    Takes an option map, and returns a map with a :nemesis, a :generator for its operations, a :final-generator to clean up any failure modes at the end of a test, and a :perf map that can be passed to checker/perf to render nice graphs.

    This nemesis is intended for throwing a broad array of simple failures at the wall, and seeing “what sticks”. Once you’ve found a fault, you can restrict the failure modes to specific types of faults, and specific targets for those faults, to try and reproduce it faster.

This nemesis is not intended for complex sequences of faults, like partitioning away a leader, flipping some switch, adjusting the clock on an unrelated node, then crashing someone else. I don’t think I can devise a good declarative language for that in a way which is simpler than “generators” themselves. For those types of faults, you’ll write your own generator instead, but you may be able to use this nemesis to execute some or all of those operations.

    Mandatory options:

    @@ -62,19 +62,20 @@

    :targets A collection of node specs, e.g. :one, :all

    File corruption options:

:targets      A collection of node specs, e.g. :one, :all
:corruptions  A collection of file corruptions, e.g.
              {:type :bitflip, :file “/path/to/file” :probability 1e-3}
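Putting it together, a rough sketch of building a package; db and the particular fault and target choices are placeholders:

    (require '[jepsen.nemesis.combined :as nc])

    (def pkg
      (nc/nemesis-package
        {:db        db
         :interval  10
         :faults    #{:partition :kill :pause :clock}
         :partition {:targets [:one :majority :majorities-ring]}
         :kill      {:targets [:one :all]}}))

    ;; pkg holds :nemesis, :generator, :final-generator, and :perf,
    ;; ready to be wired into a test map and checker/perf.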

    -

    nemesis-packages

    (nemesis-packages opts)

    Just like nemesis-package, but returns a collection of packages, rather than the combined package, so you can manipulate it further before composition.

    -

    node-specs

    (node-specs db)

Returns all possible node specifications for the given DB. Helpful when you don’t know WHAT you want to test.

    -

    noop

    A package which does nothing.

    -

    packet-nemesis

    (packet-nemesis db)

    A nemesis to disrupt packets, e.g. delay, loss, corruption, etc. Takes a db to work with db-nodes.

    +

    nemesis-packages

    (nemesis-packages opts)

    Just like nemesis-package, but returns a collection of packages, rather than the combined package, so you can manipulate it further before composition.

    +

    node-specs

    (node-specs db)

Returns all possible node specifications for the given DB. Helpful when you don’t know WHAT you want to test.

    +

    noop

    A package which does nothing.

    +

    packet-nemesis

    (packet-nemesis db)

    A nemesis to disrupt packets, e.g. delay, loss, corruption, etc. Takes a db to work with db-nodes.

    The network behavior is applied to all traffic to and from the target nodes.

    This nemesis responds to:

    -
    {:f :start-packet :value [:node-spec   ; target nodes as interpreted by db-nodes
    -                          {:delay {},  ; behaviors that disrupt packets
    -                           :loss  {:percent :33%},...}]} 
    -{:f :stop-packet  :value nil}
    +
    {:f     :start-packet
    + :value [:node-spec   ; target nodes as interpreted by db-nodes
    +         {:delay {},  ; behaviors that disrupt packets
    +          :loss  {:percent :33%}, ...}]}
    +{:f :stop-packet, :value nil}
     

    See jepsen.net/all-packet-behaviors.

    -

    packet-package

    (packet-package opts)

    A nemesis and generator package that disrupts packets, e.g. delay, loss, corruption, etc.

    +

    packet-package

    (packet-package opts)

    A nemesis and generator package that disrupts packets, e.g. delay, loss, corruption, etc.

    Opts:

    {:packet
      {:targets      ; A collection of node specs, e.g. [:one, :all]
    @@ -88,7 +89,7 @@
     

    See jepsen.net/all-packet-behaviors.

    Additional options as for nemesis-package.

    -

    partition-nemesis

    (partition-nemesis db)(partition-nemesis db p)

    Wraps a partitioner nemesis with support for partition specs. Uses db to determine primaries.

    -

    partition-package

    (partition-package opts)

    A nemesis and generator package for network partitions. Options as for nemesis-package.

    -

    partition-specs

    (partition-specs db)

    All possible partition specs for a DB.

    -
    \ No newline at end of file +

    partition-nemesis

    (partition-nemesis db)(partition-nemesis db p)

    Wraps a partitioner nemesis with support for partition specs. Uses db to determine primaries.

    +

    partition-package

    (partition-package opts)

    A nemesis and generator package for network partitions. Options as for nemesis-package.

    +

    partition-specs

    (partition-specs db)

    All possible partition specs for a DB.

    +
\ No newline at end of file
diff --git a/jepsen.nemesis.html b/jepsen.nemesis.html
index 8f2e6e93f..ada249937 100644
--- a/jepsen.nemesis.html
+++ b/jepsen.nemesis.html
@@ -1,17 +1,17 @@
-jepsen.nemesis documentation

    jepsen.nemesis

    bisect

    (bisect coll)

    Given a sequence, cuts it in half; smaller half first.

    -

    bitflip

    (bitflip)

    A nemesis which introduces random bitflips in files. Takes operations like:

    +jepsen.nemesis documentation

    jepsen.nemesis

    bisect

    (bisect coll)

    Given a sequence, cuts it in half; smaller half first.

    +

    bitflip

    (bitflip)

    A nemesis which introduces random bitflips in files. Takes operations like:

    {:f     :bitflip
      :value {"some-node" {:file         "/path/to/file or /path/to/dir"
                           :probability  1e-3}}}
     

    This flips 1 x 10^-3 of the bits in /path/to/file, or a random file in /path/to/dir, on “some-node”.

    -

    bitflip-dir

    Where do we install the bitflip utility?

    -

    bridge

    (bridge nodes)

    A grudge which cuts the network in half, but preserves a node in the middle which has uninterrupted bidirectional connectivity to both components.

    -

    clock-scrambler

    (clock-scrambler dt)

    Randomizes the system clock of all nodes within a dt-second window.

    -

    complete-grudge

    (complete-grudge components)

    Takes a collection of components (collections of nodes), and computes a grudge such that no node can talk to any nodes outside its partition.

    -

    compose

    (compose nemeses)

Combines multiple Nemesis objects into one. If all, or all but one, of the nemeses support Reflection, compose can simply take a collection of nemeses, and use (fs nem) to figure out what ops to send to which nemesis. Otherwise…

    +

    bitflip-dir

    Where do we install the bitflip utility?

    +

    bridge

    (bridge nodes)

    A grudge which cuts the network in half, but preserves a node in the middle which has uninterrupted bidirectional connectivity to both components.

    +

    clock-scrambler

    (clock-scrambler dt)

    Randomizes the system clock of all nodes within a dt-second window.

    +

    complete-grudge

    (complete-grudge components)

    Takes a collection of components (collections of nodes), and computes a grudge such that no node can talk to any nodes outside its partition.

    +

    compose

    (compose nemeses)

Combines multiple Nemesis objects into one. If all, or all but one, of the nemeses support Reflection, compose can simply take a collection of nemeses, and use (fs nem) to figure out what ops to send to which nemesis. Otherwise…

    Takes a map of fs to nemeses and returns a single nemesis which, depending on (:f op), routes to the appropriate child nemesis. fs should be a function which takes (:f op) and returns either nil, if that nemesis should not handle that :f, or a new :f, which replaces the op’s :f, and the resulting op is passed to the given nemesis. For instance:

    (compose {#{:start :stop} (partition-random-halves)
               #{:kill}        (process-killer)})
    @@ -23,42 +23,42 @@
                :ring-stop2  :stop} (partition-majorities-ring)})
     

    We turn :split-start into :start, and pass that op to partition-random-halves.

    -

    f-map

    (f-map lift nem)

    Remaps the :f values that a nemesis accepts. Takes a function (presumably injective) which transforms :f values: (lift f) -> g, and a nemesis which accepts operations like {:f f}. The nemesis must support Reflection/fs. Returns a new nemesis which takes {:f g} instead. For example:

    +

    f-map

    (f-map lift nem)

    Remaps the :f values that a nemesis accepts. Takes a function (presumably injective) which transforms :f values: (lift f) -> g, and a nemesis which accepts operations like {:f f}. The nemesis must support Reflection/fs. Returns a new nemesis which takes {:f g} instead. For example:

(f-map (fn [f] [:foo f]) (partition-random-halves))

    … yields a nemesis which takes ops like {:f [:foo :start] ...} and calls the underlying partitioner nemesis with {:f :start ...}. This is designed for symmetry with generator/f-map, so you can say:

(gen/f-map lift gen) (nem/f-map lift nem)

    and get a generator and nemesis that work together. Particularly handy for building up complex nemesis packages using nemesis.combined!

    If you know all of your fs in advance, you can also do this with compose, but it turns out to be handy to have this as a separate function.

    -

    hammer-time

    (hammer-time process)(hammer-time targeter process)

    Responds to {:f :start} by pausing the given process name on a given node or nodes using SIGSTOP, and when {:f :stop} arrives, resumes it with SIGCONT. Picks the node(s) to pause using (targeter list-of-nodes), which defaults to rand-nth. Targeter may return either a single node or a collection of nodes.

    -

    invert-grudge

    (invert-grudge nodes conns)

    Takes a universe of nodes and a map of nodes to nodes they should be connected to, and returns a map of nodes to nodes they should NOT be connected to.

    -

    majorities-ring

    (majorities-ring nodes)

    A grudge in which every node can see a majority, but no node sees the same majority as any other. There are nice, exact solutions where the topology does look like a ring: these are possible for 4, 5, 6, 8, etc nodes. Seven, however, does not work so cleanly–some nodes must be connected to more than four others. We therefore offer two algorithms: one which provides an exact ring for 5-node clusters (generally common in Jepsen), and a stochastic one which doesn’t guarantee efficient ring structures, but works for larger clusters.

    +

    hammer-time

    (hammer-time process)(hammer-time targeter process)

    Responds to {:f :start} by pausing the given process name on a given node or nodes using SIGSTOP, and when {:f :stop} arrives, resumes it with SIGCONT. Picks the node(s) to pause using (targeter list-of-nodes), which defaults to rand-nth. Targeter may return either a single node or a collection of nodes.

    +

    invert-grudge

    (invert-grudge nodes conns)

    Takes a universe of nodes and a map of nodes to nodes they should be connected to, and returns a map of nodes to nodes they should NOT be connected to.

    +

    majorities-ring

    (majorities-ring nodes)

    A grudge in which every node can see a majority, but no node sees the same majority as any other. There are nice, exact solutions where the topology does look like a ring: these are possible for 4, 5, 6, 8, etc nodes. Seven, however, does not work so cleanly–some nodes must be connected to more than four others. We therefore offer two algorithms: one which provides an exact ring for 5-node clusters (generally common in Jepsen), and a stochastic one which doesn’t guarantee efficient ring structures, but works for larger clusters.

    Wow this actually is shockingly complicated. Wonder if there’s a better way?

    -

    majorities-ring-perfect

    (majorities-ring-perfect nodes)

    The perfect variant of majorities-ring, used for 5-node clusters.

    -

    majorities-ring-stochastic

    (majorities-ring-stochastic nodes)

    The stochastic variant of majorities-ring, used for larger clusters.

    -

    Nemesis

    protocol

    members

    invoke!

    (invoke! this test op)

    Apply an operation to the nemesis, which alters the cluster.

    +

    majorities-ring-perfect

    (majorities-ring-perfect nodes)

    The perfect variant of majorities-ring, used for 5-node clusters.

    +

    majorities-ring-stochastic

    (majorities-ring-stochastic nodes)

    The stochastic variant of majorities-ring, used for larger clusters.

    +

    Nemesis

    protocol

    members

    invoke!

    (invoke! this test op)

    Apply an operation to the nemesis, which alters the cluster.

    setup!

    (setup! this test)

    Set up the nemesis to work with the cluster. Returns the nemesis ready to be invoked

    teardown!

    (teardown! this test)

    Tear down the nemesis when work is complete

    -

    node-start-stopper

    (node-start-stopper targeter start! stop!)

    Takes a targeting function which, given a list of nodes, returns a single node or collection of nodes to affect, and two functions (start! test node) invoked on nemesis start, and (stop! test node) invoked on nemesis stop. Returns a nemesis which responds to :start and :stop by running the start! and stop! fns on each of the given nodes. During start! and stop!, binds the jepsen.control session to the given node, so you can just call (c/exec ...).

    +

    node-start-stopper

    (node-start-stopper targeter start! stop!)

    Takes a targeting function which, given a list of nodes, returns a single node or collection of nodes to affect, and two functions (start! test node) invoked on nemesis start, and (stop! test node) invoked on nemesis stop. Returns a nemesis which responds to :start and :stop by running the start! and stop! fns on each of the given nodes. During start! and stop!, binds the jepsen.control session to the given node, so you can just call (c/exec ...).

    The targeter can take either (targeter test nodes) or, if that fails, (targeter nodes).

    Re-selects a fresh node (or nodes) for each start–if targeter returns nil, skips the start. The return values from the start and stop fns will become the :values of the returned :info operations from the nemesis, e.g.:

    {:value {:n1 [:killed "java"]}}
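For example, a sketch of a pause/resume nemesis built this way, where "mydb" is a hypothetical process name:

    (require '[jepsen.control :as c]
             '[jepsen.nemesis :as nemesis])

    (def pause-mydb
      (nemesis/node-start-stopper
        rand-nth                          ; pick a fresh random node on :start
        (fn start! [test node]
          (c/su (c/exec :killall :-s "STOP" "mydb"))
          [:paused "mydb"])
        (fn stop!  [test node]
          (c/su (c/exec :killall :-s "CONT" "mydb"))
          [:resumed "mydb"])))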
     
    -

    noop

    Does nothing.

    -

    partition-halves

    (partition-halves)

Responds to a :start operation by cutting the network into two halves–with the first nodes grouped together in the smaller half–and a :stop operation by repairing the network.

    -

    partition-majorities-ring

    (partition-majorities-ring)

    Every node can see a majority, but no node sees the same majority as any other. Randomly orders nodes into a ring.

    -

    partition-random-halves

    (partition-random-halves)

    Cuts the network into randomly chosen halves.

    -

    partition-random-node

    (partition-random-node)

    Isolates a single node from the rest of the network.

    -

    partitioner

    (partitioner)(partitioner grudge)

    Responds to a :start operation by cutting network links as defined by (grudge nodes), and responds to :stop by healing the network. The grudge to apply is either taken from the :value of a :start op, or if that is nil, by calling (grudge (:nodes test))

    -

    Reflection

    protocol

    Optional protocol for reflecting on nemeses.

    +

    noop

    Does nothing.

    +

    partition-halves

    (partition-halves)

Responds to a :start operation by cutting the network into two halves–with the first nodes grouped together in the smaller half–and a :stop operation by repairing the network.

    +

    partition-majorities-ring

    (partition-majorities-ring)

    Every node can see a majority, but no node sees the same majority as any other. Randomly orders nodes into a ring.

    +

    partition-random-halves

    (partition-random-halves)

    Cuts the network into randomly chosen halves.

    +

    partition-random-node

    (partition-random-node)

    Isolates a single node from the rest of the network.

    +

    partitioner

    (partitioner)(partitioner grudge)

    Responds to a :start operation by cutting network links as defined by (grudge nodes), and responds to :stop by healing the network. The grudge to apply is either taken from the :value of a :start op, or if that is nil, by calling (grudge (:nodes test))
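For example, a sketch of a partitioner which always isolates node "n1", built from split-one and complete-grudge in this namespace:

    (require '[jepsen.nemesis :as nemesis])

    (def isolate-n1
      (nemesis/partitioner
        (fn [nodes]
          (nemesis/complete-grudge (nemesis/split-one "n1" nodes)))))
    ;; responds to {:f :start} by cutting "n1" off from everyone else,
    ;; and to {:f :stop} by healing the network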

    +

    Reflection

    protocol

    Optional protocol for reflecting on nemeses.

    members

    fs

    (fs this)

    What :f functions does this nemesis support? Returns a set. Helpful for composition.

    -

    set-time!

    (set-time! t)

    Set the local node time in POSIX seconds.

    -

    split-one

    (split-one coll)(split-one loner coll)

    Split one node off from the rest

    -

    timeout

    (timeout timeout-ms nemesis)

    Sometimes nemeses are unreliable. If you wrap them in this nemesis, it’ll time out their operations with the given timeout, in milliseconds. Timed out operations have :value :timeout.

    -

    truncate-file

    (truncate-file)

    A nemesis which responds to

    +

    set-time!

    (set-time! t)

    Set the local node time in POSIX seconds.

    +

    split-one

    (split-one coll)(split-one loner coll)

    Split one node off from the rest

    +

    timeout

    (timeout timeout-ms nemesis)

    Sometimes nemeses are unreliable. If you wrap them in this nemesis, it’ll time out their operations with the given timeout, in milliseconds. Timed out operations have :value :timeout.

    +

    truncate-file

    (truncate-file)

    A nemesis which responds to

    {:f     :truncate
      :value {"some-node" {:file "/path/to/file or /path/to/dir"
                           :drop 64}}}
     

where the value is a map of nodes to {:file, :drop} maps. On those nodes, it drops the last :drop bytes from the given file, or a random file from the given directory.

    -

    validate

    (validate nemesis)

    Wraps a nemesis, validating that it constructs responses to setup and invoke correctly.

    -
    \ No newline at end of file +

    validate

    (validate nemesis)

    Wraps a nemesis, validating that it constructs responses to setup and invoke correctly.

    +
\ No newline at end of file
diff --git a/jepsen.nemesis.membership.html b/jepsen.nemesis.membership.html
index 4fb4e3eae..d01d113f1 100644
--- a/jepsen.nemesis.membership.html
+++ b/jepsen.nemesis.membership.html
@@ -1,6 +1,6 @@
-jepsen.nemesis.membership documentation

    jepsen.nemesis.membership

    EXPERIMENTAL: provides standardized support for nemeses which add and remove nodes from a cluster.

    +jepsen.nemesis.membership documentation

    jepsen.nemesis.membership

    EXPERIMENTAL: provides standardized support for nemeses which add and remove nodes from a cluster.

    This is a tricky problem. Even the concept of cluster state is complicated: there is Jepsen’s knowledge of the state, and each individual node’s understanding of the current state. Depending on which node you ask, you may get more or less recent (or, frequently, divergent) views of cluster state. Cluster state representation is highly variable across databases, which means our standardized state machine must allow for that variability.

    We are guided by some principles that crop up repeatedly in writing these sorts of nemeses:

      @@ -26,15 +26,15 @@

      Our general approach is to define a sort of state machine where the state is our representation of the cluster state, how all nodes view the cluster, and the set of ongoing operations, plus any auxiliary material (e.g. after completing a node removal, we can delete its data files). This state is periodically updated by querying individual nodes, and also by performing operations–e.g. initiating a node removal.

      The generator constructs those operations by asking the nemesis what sorts of operations would be legal to perform at this time, and picking one of those. It then passes that operation back to the nemesis (via nemesis/invoke!), and the nemesis updates its local state and performs the operation.

    initial-state

    (initial-state test)

    Constructs an initial cluster state map for the given test.

    -

    node-view-future

    (node-view-future test state running? opts node)

    Spawns a future which keeps the given state atom updated with our view of this node.

    -

    node-view-interval

    How many seconds between updating node views.

    -

    package

    (package opts)

    Constructs a nemesis and generator for membership operations. Options are a map like

    +

    node-view-future

    (node-view-future test state running? opts node)

    Spawns a future which keeps the given state atom updated with our view of this node.

    +

    node-view-interval

    How many seconds between updating node views.

    +

    package

    (package opts)

    Constructs a nemesis and generator for membership operations. Options are a map like

    {:faults #{:membership …} :membership membership-opts}.

    Membership opts are:

{:state            A record satisfying the State protocol
 :log-resolve-op?  Whether to log the resolution of operations
 :log-resolve?     Whether to log each resolve step
 :log-node-views?  Whether to log changing node views
 :log-view?        Whether to log the entire cluster view}

    The package includes a :state field, which is an atom of the current cluster state. You can use this (for example) to have generators which inspect the current cluster state and use it to target faults.

    -

    resolve

    (resolve state test opts)

    Resolves a state towards its final form by calling resolve and resolve-ops until converged.

    -

    resolve-ops

    (resolve-ops state test opts)

    Try to resolve any pending ops we can. Returns state with those ops resolved.

    -

    State

    protocol

    For convenience, a copy of the membership State protocol. This lets users implement the protocol without requiring the state namespace themselves.

    -

    update-node-view!

    (update-node-view! state test node opts)

    Takes an atom wrapping a State, a test, and a node. Gets the current view from that node’s perspective, and updates the state atom to reflect it.

    -
    \ No newline at end of file +

    resolve

    (resolve state test opts)

    Resolves a state towards its final form by calling resolve and resolve-ops until converged.

    +

    resolve-ops

    (resolve-ops state test opts)

    Try to resolve any pending ops we can. Returns state with those ops resolved.

    +

    State

    protocol

    For convenience, a copy of the membership State protocol. This lets users implement the protocol without requiring the state namespace themselves.

    +

    update-node-view!

    (update-node-view! state test node opts)

    Takes an atom wrapping a State, a test, and a node. Gets the current view from that node’s perspective, and updates the state atom to reflect it.

    +
\ No newline at end of file
diff --git a/jepsen.nemesis.membership.state.html b/jepsen.nemesis.membership.state.html
index 07257320f..03d94edcb 100644
--- a/jepsen.nemesis.membership.state.html
+++ b/jepsen.nemesis.membership.state.html
@@ -1,6 +1,6 @@
-jepsen.nemesis.membership.state documentation

    jepsen.nemesis.membership.state

    This namespace defines the protocol for nemesis membership state machines—how to find the current view from a node, how to merge node views together, how to generate, apply, and complete operations, etc.

    +jepsen.nemesis.membership.state documentation

    jepsen.nemesis.membership.state

    This namespace defines the protocol for nemesis membership state machines—how to find the current view from a node, how to merge node views together, how to generate, apply, and complete operations, etc.

    States should be Clojure defrecords, and have several special keys:

    :node-views A map of nodes to the view of the cluster state from that particular node.

    :view The merged view of the cluster state.

    @@ -15,4 +15,4 @@

    resolve-op

    (resolve-op this test [op op'])

    Called with a particular pair of operations (both invocation and completion). If that operation has been resolved, returns a new version of the state. Otherwise, returns nil.

    setup!

    (setup! this test)

    Performs a one-time initialization of state. Should return a new state. This is a good place to open network connections or set up mutable resources.

    teardown!

    (teardown! this test)

    Called at the end of the test to dispose of this State. This is your opportunity to close network connections etc.

    -
\ No newline at end of file
+
\ No newline at end of file
diff --git a/jepsen.nemesis.time.html b/jepsen.nemesis.time.html
index 8319ee658..82b8e895c 100644
--- a/jepsen.nemesis.time.html
+++ b/jepsen.nemesis.time.html
@@ -1,11 +1,11 @@
-jepsen.nemesis.time documentation

    jepsen.nemesis.time

    Functions for messing with time and clocks.

    +jepsen.nemesis.time documentation

    jepsen.nemesis.time

    Functions for messing with time and clocks.

    bump-gen

Randomized clock bump generator targeting a random subset of nodes.

    -

    bump-gen-select

    (bump-gen-select select)

    A function which returns a clock bump generator that bumps the clock from -262 to +262 seconds, exponentially distributed. (select test) is used to select which subset of the test’s nodes to use as targets in the generator.

    -

    bump-time!

    (bump-time! delta)

    Adjusts the clock by delta milliseconds. Returns the time offset from the current local wall clock, in seconds.

    -

    clock-gen

    (clock-gen)

    Emits a random schedule of clock skew operations. Always starts by checking the clock offsets to establish an initial bound.

    -

    clock-nemesis

    (clock-nemesis)

    Generates a nemesis which manipulates clocks. Accepts four types of operations:

    +

    bump-gen-select

    (bump-gen-select select)

    A function which returns a clock bump generator that bumps the clock from -262 to +262 seconds, exponentially distributed. (select test) is used to select which subset of the test’s nodes to use as targets in the generator.

    +

    bump-time!

    (bump-time! delta)

    Adjusts the clock by delta milliseconds. Returns the time offset from the current local wall clock, in seconds.

    +

    clock-gen

    (clock-gen)

    Emits a random schedule of clock skew operations. Always starts by checking the clock offsets to establish an initial bound.

    +

    clock-nemesis

    (clock-nemesis)

    Generates a nemesis which manipulates clocks. Accepts four types of operations:

    {:f :reset, :value [node1 ...]}
     
     {:f :strobe, :value {node1 {:delta ms, :period ms, :duration s} ...}}
    @@ -14,17 +14,17 @@
     
     {:f :check-offsets}
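A sketch of a test fragment wiring the clock nemesis to its generator; in a real test you would combine this with your client generator:

    (require '[jepsen.generator :as gen]
             '[jepsen.nemesis.time :as nt])

    {:nemesis   (nt/clock-nemesis)
     :generator (gen/nemesis (nt/clock-gen))}  ; clock ops go to the nemesis thread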
     
    -

    clock-offset

    (clock-offset remote-time)

    Takes a time in seconds since the epoch, and subtracts the local node time, to obtain a relative offset in seconds.

    -

    compile!

    (compile! reader bin)

    Takes a Reader to C source code and spits out a binary to /opt/jepsen/, if it doesn’t already exist.

    -

    compile-resource!

    (compile-resource! resource bin)

    Given a resource name, spits out a binary to /opt/jepsen/.

    -

    compile-tools!

    (compile-tools!)

    current-offset

    (current-offset)

    Returns the clock offset of this node, in seconds.

    -

    dir

    Where do we install binaries to?

    -

    install!

    (install!)

    Uploads and compiles some C programs for messing with clocks.

    -

    parse-time

    (parse-time s)

    Parses a decimal time in unix seconds since the epoch, provided as a string, to a bigdecimal

    -

    reset-gen

    Randomized reset generator. Performs resets on random subsets of the test’s nodes.

    -

    reset-gen-select

    (reset-gen-select select)

    A function which returns a generator of reset operations. Takes a function (select test) which returns nodes from the test we’d like to target for that clock reset.

    -

    reset-time!

    (reset-time!)(reset-time! test)

    Resets the local node’s clock to NTP. If a test is given, resets time on all nodes across the test.

    -

    strobe-gen

Randomized clock strobe generator targeting a random subset of the test’s nodes.

    -

    strobe-gen-select

    (strobe-gen-select select)

    A function which returns a clock strobe generator that introduces clock strobes from 4 ms to 262 seconds, with a period of 1 ms to 1 second, for a duration of 0-32 seconds. (select test) is used to select which subset of the test’s nodes to use as targets in the generator.

    -

    strobe-time!

    (strobe-time! delta period duration)

    Strobes the time back and forth by delta milliseconds, every period milliseconds, for duration seconds.

    -
    \ No newline at end of file +

    clock-offset

    (clock-offset remote-time)

    Takes a time in seconds since the epoch, and subtracts the local node time, to obtain a relative offset in seconds.

    +

    compile!

    (compile! reader bin)

    Takes a Reader to C source code and spits out a binary to /opt/jepsen/, if it doesn’t already exist.

    +

    compile-resource!

    (compile-resource! resource bin)

    Given a resource name, spits out a binary to /opt/jepsen/.

    +

    compile-tools!

    (compile-tools!)

    current-offset

    (current-offset)

    Returns the clock offset of this node, in seconds.

    +

    dir

    Where do we install binaries to?

    +

    install!

    (install!)

    Uploads and compiles some C programs for messing with clocks.

    +

    parse-time

    (parse-time s)

    Parses a decimal time in unix seconds since the epoch, provided as a string, to a bigdecimal

    +

    reset-gen

    Randomized reset generator. Performs resets on random subsets of the test’s nodes.

    +

    reset-gen-select

    (reset-gen-select select)

    A function which returns a generator of reset operations. Takes a function (select test) which returns nodes from the test we’d like to target for that clock reset.

    +

    reset-time!

    (reset-time!)(reset-time! test)

    Resets the local node’s clock to NTP. If a test is given, resets time on all nodes across the test.

    +

    strobe-gen

    Randomized clock strobe generator targeting a random subsets of the test’s nodes.

    +

    strobe-gen-select

    (strobe-gen-select select)

    A function which returns a clock strobe generator that introduces clock strobes from 4 ms to 262 seconds, with a period of 1 ms to 1 second, for a duration of 0-32 seconds. (select test) is used to select which subset of the test’s nodes to use as targets in the generator.

    +

    strobe-time!

    (strobe-time! delta period duration)

    Strobes the time back and forth by delta milliseconds, every period milliseconds, for duration seconds.

    +
\ No newline at end of file
diff --git a/jepsen.net.html b/jepsen.net.html
index 719e8a23f..7d563282a 100644
--- a/jepsen.net.html
+++ b/jepsen.net.html
@@ -1,6 +1,6 @@
-jepsen.net documentation

    jepsen.net

    Controls network manipulation.

    +jepsen.net documentation

    jepsen.net

    Controls network manipulation.

    TODO: break this up into jepsen.net.proto (polymorphism) and jepsen.net (wrapper fns, default args, etc)

    all-packet-behaviors

    All of the available network packet behaviors, and their default option values.

    Caveats:

    @@ -10,20 +10,20 @@
  • :loss - When used locally (not on a bridge or router), the loss is reported to the upper level protocols. This may cause TCP to resend and behave as if there was no loss.
  • See tc-netem(8).

    -

    drop!

    (drop! net test src dest)

    Drop traffic from src to dest.

    -

    drop-all!

    (drop-all! test grudge)

    Takes a test and a grudge: a map of nodes to collections of nodes they should drop messages from, and makes those changes to the test’s network.

    -

    fast!

    (fast! net test)

    Removes packet loss and delays.

    -

    flaky!

    (flaky! net test)

    Introduces randomized packet loss

    -

    heal!

    (heal! net test)

Ends all traffic drops and restores the network to fast operation.

    -

    ipfilter

    IPFilter rules

    -

    iptables

    Default iptables (assumes we control everything).

    -

    net-dev

    (net-dev)

    Returns the network interface of the current host.

    -

    noop

    Does nothing.

    -

    qdisc-del

    (qdisc-del dev)

    Deletes root qdisc for given dev on current node.

    -

    shape!

    (shape! net test nodes behavior)

    Shapes network behavior, i.e. packet delay, loss, corruption, duplication, reordering, and rate for the given nodes.

    -

    slow!

    (slow! net test)(slow! net test opts)

    Delays network packets with options:

    +

    drop!

    (drop! net test src dest)

    Drop traffic from src to dest.

    +

    drop-all!

    (drop-all! test grudge)

    Takes a test and a grudge: a map of nodes to collections of nodes they should drop messages from, and makes those changes to the test’s network.

    +

    fast!

    (fast! net test)

    Removes packet loss and delays.

    +

    flaky!

    (flaky! net test)

    Introduces randomized packet loss

    +

    heal!

    (heal! net test)

Ends all traffic drops and restores the network to fast operation.

    +

    ipfilter

    IPFilter rules

    +

    iptables

    Default iptables (assumes we control everything).

    +

    net-dev

    (net-dev)

    Returns the network interface of the current host.

    +

    noop

    Does nothing.

    +

    qdisc-del

    (qdisc-del dev)

    Deletes root qdisc for given dev on current node.

    +

    shape!

    (shape! net test nodes behavior)

    Shapes network behavior, i.e. packet delay, loss, corruption, duplication, reordering, and rate for the given nodes.

    +

    slow!

    (slow! net test)(slow! net test opts)

    Delays network packets with options:

      {:mean          ; (in ms)
       :variance       ; (in ms)
       :distribution}  ; (e.g. :normal)
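
For instance, a sketch that adds roughly 50 ms of normally distributed delay on every node (values are illustrative):

  (require '[jepsen.net :as net])
  (net/slow! (:net test) test {:mean 50, :variance 10, :distribution :normal})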
     
    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.net.proto.html b/jepsen.net.proto.html index 488b4bf9c..2bc77b34c 100644 --- a/jepsen.net.proto.html +++ b/jepsen.net.proto.html @@ -1,6 +1,6 @@ -jepsen.net.proto documentation

    jepsen.net.proto

    Protocols for network manipulation. High-level functions live in jepsen.net.

    +jepsen.net.proto documentation

    jepsen.net.proto

    Protocols for network manipulation. High-level functions live in jepsen.net.

    Net

    protocol

    members

    drop!

    (drop! net test src dest)

    Drop traffic from src to dest.

    fast!

    (fast! net test)

    Removes packet loss and delays.

    flaky!

    (flaky! net test)

    Introduces randomized packet loss

    @@ -11,6 +11,6 @@ :variance ; (in ms) :distribution} ; (e.g. :normal) -

    PartitionAll

    protocol

    This optional protocol provides support for making multiple network changes in a single call. If you don’t support this protocol, we’ll use drop! instead.

    +

    PartitionAll

    protocol

    This optional protocol provides support for making multiple network changes in a single call. If you don’t support this protocol, we’ll use drop! instead.

    members

    drop-all!

    (drop-all! net test grudge)

    Takes a grudge: a map of nodes to collections of nodes they should drop messages from, and makes the appropriate changes to the network.

    -
    \ No newline at end of file + \ No newline at end of file diff --git a/jepsen.os.centos.html b/jepsen.os.centos.html index bf2c740d9..99fe2a9a6 100644 --- a/jepsen.os.centos.html +++ b/jepsen.os.centos.html @@ -1,16 +1,16 @@ -jepsen.os.centos documentation

    jepsen.os.centos

    Common tasks for CentOS boxes.

    +jepsen.os.centos documentation

    jepsen.os.centos

    Common tasks for CentOS boxes.

    install

    (install pkgs)

    Ensure the given packages are installed. Can take a flat collection of packages, passed as symbols, strings, or keywords, or, alternatively, a map of packages to version strings.

    -

    install-start-stop-daemon!

    (install-start-stop-daemon!)

Installs start-stop-daemon on CentOS.

    -

    installed

    (installed pkgs)

    Given a list of centos packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    -

    installed-start-stop-daemon?

    (installed-start-stop-daemon?)

Is start-stop-daemon installed?

    -

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    -

    installed?

    (installed? pkg-or-pkgs)

    Are the given packages, or singular package, installed on the current system?

    -

    maybe-update!

    (maybe-update!)

    Yum update if we haven’t done so recently.

    -

    os

    Support for CentOS.

    -

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    -

    time-since-last-update

    (time-since-last-update)

How long ago we last ran a yum update, in seconds.

    -

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    -

    update!

    (update!)

    Yum update.

    -
    \ No newline at end of file +

    install-start-stop-daemon!

    (install-start-stop-daemon!)

Installs start-stop-daemon on CentOS.

    +

    installed

    (installed pkgs)

    Given a list of centos packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    +

    installed-start-stop-daemon?

    (installed-start-stop-daemon?)

Is start-stop-daemon installed?

    +

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    +

    installed?

    (installed? pkg-or-pkgs)

    Are the given packages, or singular package, installed on the current system?

    +

    maybe-update!

    (maybe-update!)

    Yum update if we haven’t done so recently.

    +

    os

    Support for CentOS.

    +

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    +

    time-since-last-update

    (time-since-last-update)

How long ago we last ran a yum update, in seconds.

    +

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    +

    update!

    (update!)

    Yum update.

    +
    \ No newline at end of file diff --git a/jepsen.os.debian.html b/jepsen.os.debian.html index 8ac14d081..772dd46bc 100644 --- a/jepsen.os.debian.html +++ b/jepsen.os.debian.html @@ -1,18 +1,18 @@ -jepsen.os.debian documentation

    jepsen.os.debian

    Common tasks for Debian boxes.

    +jepsen.os.debian documentation

    jepsen.os.debian

    Common tasks for Debian boxes.

    add-key!

    (add-key! keyserver key)

    Receives an apt key from the given keyserver.

    -

    add-repo!

    (add-repo! repo-name apt-line)(add-repo! repo-name apt-line keyserver key)

    Adds an apt repo (and optionally a key from the given keyserver).

    -

    install

    (install pkgs)(install pkgs apt-opts)

    Ensure the given packages are installed. Can take a flat collection of packages, passed as symbols, strings, or keywords, or, alternatively, a map of packages to version strings. Can optionally take a collection of additional CLI options to be passed to apt-get.

    -

    install-jdk11!

    (install-jdk11!)

    Installs an openjdk jdk11 via stretch-backports.

    -

    installed

    (installed pkgs)

    Given a list of debian packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    -

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    -

    installed?

    (installed? pkg-or-pkgs)

    Are the given debian packages, or singular package, installed on the current system?

    -

    maybe-update!

    (maybe-update!)

    Apt-get update if we haven’t done so recently.

    -

    node-locks

    Prevents running apt operations concurrently on the same node.

    -

    os

    An implementation of the Debian OS.

    -

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    -

    time-since-last-update

    (time-since-last-update)

How long ago we last ran an apt-get update, in seconds.

    -

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    -

    update!

    (update!)

    Apt-get update.

    -
    \ No newline at end of file +

    add-repo!

    (add-repo! repo-name apt-line)(add-repo! repo-name apt-line keyserver key)

    Adds an apt repo (and optionally a key from the given keyserver).

    +

    install

    (install pkgs)(install pkgs apt-opts)

    Ensure the given packages are installed. Can take a flat collection of packages, passed as symbols, strings, or keywords, or, alternatively, a map of packages to version strings. Can optionally take a collection of additional CLI options to be passed to apt-get.
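
A brief sketch; the package names, version string, and apt option are illustrative:

  (require '[jepsen.os.debian :as debian])
  ;; A flat collection of packages:
  (debian/install [:curl :rsync])
  ;; A map of packages to pinned versions, plus extra apt-get CLI options:
  (debian/install {:postgresql "15+248"} ["--no-install-recommends"])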

    +

    install-jdk11!

    (install-jdk11!)

    Installs an openjdk jdk11 via stretch-backports.

    +

    installed

    (installed pkgs)

    Given a list of debian packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    +

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    +

    installed?

    (installed? pkg-or-pkgs)

    Are the given debian packages, or singular package, installed on the current system?

    +

    maybe-update!

    (maybe-update!)

    Apt-get update if we haven’t done so recently.

    +

    node-locks

    Prevents running apt operations concurrently on the same node.

    +

    os

    An implementation of the Debian OS.

    +

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    +

    time-since-last-update

    (time-since-last-update)

How long ago we last ran an apt-get update, in seconds.

    +

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    +

    update!

    (update!)

    Apt-get update.

    +
    \ No newline at end of file diff --git a/jepsen.os.html b/jepsen.os.html index f7fc89940..9302b0f2c 100644 --- a/jepsen.os.html +++ b/jepsen.os.html @@ -1,7 +1,7 @@ -jepsen.os documentation

    jepsen.os

    Controls operating system setup and teardown.

    +jepsen.os documentation

    jepsen.os

    Controls operating system setup and teardown.

    noop

    Does nothing

    -

    OS

    protocol

    members

    setup!

    (setup! os test node)

    Set up the operating system on this particular node.

    +

    OS

    protocol

    members

    setup!

    (setup! os test node)

    Set up the operating system on this particular node.

    teardown!

    (teardown! os test node)

    Tear down the operating system on this particular node.
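
A minimal sketch of a custom OS implementation; the body of setup! here is purely illustrative:

  (require '[jepsen.os :as os])
  (def barebones-os
    (reify os/OS
      (setup! [_ test node]
        ;; e.g. install packages, set up hostfiles, sync clocks...
        )
      (teardown! [_ test node]
        ;; usually undoes whatever setup! did; often a no-op
        )))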

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.os.smartos.html b/jepsen.os.smartos.html index df037b4f5..c374ffda3 100644 --- a/jepsen.os.smartos.html +++ b/jepsen.os.smartos.html @@ -1,13 +1,13 @@ -jepsen.os.smartos documentation

    jepsen.os.smartos

    Common tasks for SmartOS boxes.

    +jepsen.os.smartos documentation

    jepsen.os.smartos

    Common tasks for SmartOS boxes.

    install

    (install pkgs)

    Ensure the given packages are installed. Can take a flat collection of packages, passed as symbols, strings, or keywords, or, alternatively, a map of packages to version strings.

    -

    installed

    (installed pkgs)

    Given a list of pkgin packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    -

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    -

    installed?

    (installed? pkg-or-pkgs)

    Are the given packages, or singular package, installed on the current system?

    -

    maybe-update!

    (maybe-update!)

    Pkgin update if we haven’t done so recently.

    -

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    -

    time-since-last-update

    (time-since-last-update)

How long ago we last ran a pkgin update, in seconds.

    -

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    -

    update!

    (update!)

    Pkgin update.

    -
    \ No newline at end of file +

    installed

    (installed pkgs)

    Given a list of pkgin packages (strings, symbols, keywords, etc), returns the set of packages which are installed, as strings.

    +

    installed-version

    (installed-version pkg)

    Given a package name, determines the installed version of that package, or nil if it is not installed.

    +

    installed?

    (installed? pkg-or-pkgs)

    Are the given packages, or singular package, installed on the current system?

    +

    maybe-update!

    (maybe-update!)

    Pkgin update if we haven’t done so recently.

    +

    setup-hostfile!

    (setup-hostfile!)

    Makes sure the hostfile has a loopback entry for the local hostname

    +

    time-since-last-update

    (time-since-last-update)

How long ago we last ran a pkgin update, in seconds.

    +

    uninstall!

    (uninstall! pkg-or-pkgs)

    Removes a package or packages.

    +

    update!

    (update!)

    Pkgin update.

    +
    \ No newline at end of file diff --git a/jepsen.os.ubuntu.html b/jepsen.os.ubuntu.html index 794c37ba2..6ac05b502 100644 --- a/jepsen.os.ubuntu.html +++ b/jepsen.os.ubuntu.html @@ -1,5 +1,5 @@ -jepsen.os.ubuntu documentation

    jepsen.os.ubuntu

    Common tasks for Ubuntu boxes. Tested against Ubuntu 18.04.

    +jepsen.os.ubuntu documentation

    jepsen.os.ubuntu

    Common tasks for Ubuntu boxes. Tested against Ubuntu 18.04.

    os

    An implementation of the Ubuntu OS.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.reconnect.html b/jepsen.reconnect.html index 981475f23..e69079f87 100644 --- a/jepsen.reconnect.html +++ b/jepsen.reconnect.html @@ -1,13 +1,13 @@ -jepsen.reconnect documentation

    jepsen.reconnect

    Stateful wrappers for automatically reconnecting network clients.

    +jepsen.reconnect documentation

    jepsen.reconnect

    Stateful wrappers for automatically reconnecting network clients.

    A wrapper is a map with a connection atom conn and a pair of functions: (open), which opens a new connection, and (close conn), which closes a connection. We use these to provide a with-conn macro that acquires the current connection from a wrapper, evaluates body, and automatically closes/reopens the connection when errors occur.

    Connect/close/reconnect lock the wrapper, but multiple threads may acquire the current connection at once.

    close!

    (close! wrapper)

    Closes a wrapper.

    -

    conn

    (conn wrapper)

    Active connection for a wrapper, if one exists.

    -

    open!

    (open! wrapper)

    Given a wrapper, opens a connection. Noop if conn is already open.

    -

    reopen!

    (reopen! wrapper)

    Reopens a wrapper’s connection.

    -

    with-conn

    macro

    (with-conn [c wrapper] & body)

    Acquires a read lock, takes a connection from the wrapper, and evaluates body with that connection bound to c. If any Exception is thrown, closes the connection and opens a new one.

    -

    with-lock

    macro

    (with-lock wrapper lock-method & body)

    with-read-lock

    macro

    (with-read-lock wrapper & body)

    with-write-lock

    macro

    (with-write-lock wrapper & body)

    wrapper

    (wrapper options)

    A wrapper is a stateful construct for talking to a database. Options:

    +

    conn

    (conn wrapper)

    Active connection for a wrapper, if one exists.

    +

    open!

    (open! wrapper)

    Given a wrapper, opens a connection. Noop if conn is already open.

    +

    reopen!

    (reopen! wrapper)

    Reopens a wrapper’s connection.

    +

    with-conn

    macro

    (with-conn [c wrapper] & body)

    Acquires a read lock, takes a connection from the wrapper, and evaluates body with that connection bound to c. If any Exception is thrown, closes the connection and opens a new one.

    +

    with-lock

    macro

    (with-lock wrapper lock-method & body)

    with-read-lock

    macro

    (with-read-lock wrapper & body)

    with-write-lock

    macro

    (with-write-lock wrapper & body)

    wrapper

    (wrapper options)

    A wrapper is a stateful construct for talking to a database. Options:

  :name   An optional name for this wrapper (for debugging logs)
  :open   A function which generates a new conn
  :close  A function which closes a conn
  :log?   Whether to log reconnect messages. A special value, minimal, logs only a single line rather than a full stacktrace.
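
As a sketch, assuming a hypothetical client library aliased as client:

  (require '[jepsen.reconnect :as rc])
  (def db-conn
    (rc/wrapper {:name  :my-db                          ; hypothetical name
                 :open  (fn [] (client/connect "n1"))   ; hypothetical client fns
                 :close (fn [conn] (client/close! conn))
                 :log?  true}))
  ;; Evaluates the body with the current connection bound to c; on an Exception,
  ;; the connection is closed and a fresh one is opened.
  (rc/with-conn [c db-conn]
    (client/read c :some-key))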

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.repl.html b/jepsen.repl.html index e5e953d0f..b8f54abc2 100644 --- a/jepsen.repl.html +++ b/jepsen.repl.html @@ -1,5 +1,5 @@ -jepsen.repl documentation

    jepsen.repl

    Helper functions for mucking around with tests!

    +jepsen.repl documentation

    jepsen.repl

    Helper functions for mucking around with tests!

    latest-test

    (latest-test)

    Returns the most recently run test

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.report.html b/jepsen.report.html index d0527525e..18c3dfd2b 100644 --- a/jepsen.report.html +++ b/jepsen.report.html @@ -1,5 +1,5 @@ -jepsen.report documentation \ No newline at end of file diff --git a/jepsen.store.format.html b/jepsen.store.format.html index b08add432..2eaf3ef21 100644 --- a/jepsen.store.format.html +++ b/jepsen.store.format.html @@ -1,6 +1,6 @@ -jepsen.store.format documentation

    jepsen.store.format

    Jepsen tests are logically a map. To save this map to disk, we originally wrote it as a single Fressian file. This approach works reasonably well, but has a few problems:

    +jepsen.store.format documentation

    jepsen.store.format

    Jepsen tests are logically a map. To save this map to disk, we originally wrote it as a single Fressian file. This approach works reasonably well, but has a few problems:

• We write test files multiple times: once at the end of a test, and again once the analysis is complete, in case the analysis fails. Rewriting the entire file is inefficient. It would be nice to incrementally append new state.

      @@ -114,104 +114,104 @@

      That’s It

      When it comes time to reference the results or history in that lazy map, we look up the right block in the block index, seek to that offset, and decode whatever’s there.

      Decoding a block is straightforward. We grab the length header, run a CRC over that region of the file, check the block type, then decode the remaining data based on the block structure.
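
Putting that together, a reading sketch might look like this (the file path is hypothetical):

  (require '[jepsen.store.format :as fmt])
  ;; Open a handle and lazily read the test map back. Per read-test below, the
  ;; handle stays open: history and results blocks are decoded on demand.
  (def handle   (fmt/open "store/my-test/20240101T000000.000Z/test.jepsen"))
  (def test-map (fmt/read-test handle))
  (:valid? (:results test-map))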

    append-to-big-vector-block!

    (append-to-big-vector-block! w element)

    Appends an element to a BigVector block writer. This function is asynchronous and returns as soon as the writer’s queue has accepted the element. Close the writer to complete the process. Returns writer.

    -

    append-to-fressian-stream-block!

    (append-to-fressian-stream-block! writer data)

    Takes a FressianStreamBlockWriter and a Clojure value. Appends that value as Fressian to the stream. Returns writer.

    -

    assoc-block!

    (assoc-block! handle id offset)

    Takes a handle, a block ID, and its corresponding offset. Updates the handle’s block index (in-memory) to add this mapping. Returns handle.

    -

    big-vector-block-writer!

    (big-vector-block-writer! handle elements-per-chunk)(big-vector-block-writer! handle block-id elements-per-chunk)

Takes a handle, an optional block ID, and the maximum number of elements per chunk. Returns a BigVectorBlockWriter which can have elements appended to it via append-to-big-vector-block!. Those elements, in turn, are appended to a series of newly created FressianStream blocks, which are then stitched together into a BigVector block with the given ID. As each chunk of writes is finished, the writer automatically writes a new block index, ensuring we can recover at least part of the history from crashes.

    +

    append-to-fressian-stream-block!

    (append-to-fressian-stream-block! writer data)

    Takes a FressianStreamBlockWriter and a Clojure value. Appends that value as Fressian to the stream. Returns writer.

    +

    assoc-block!

    (assoc-block! handle id offset)

    Takes a handle, a block ID, and its corresponding offset. Updates the handle’s block index (in-memory) to add this mapping. Returns handle.

    +

    big-vector-block-writer!

    (big-vector-block-writer! handle elements-per-chunk)(big-vector-block-writer! handle block-id elements-per-chunk)

Takes a handle, an optional block ID, and the maximum number of elements per chunk. Returns a BigVectorBlockWriter which can have elements appended to it via append-to-big-vector-block!. Those elements, in turn, are appended to a series of newly created FressianStream blocks, which are then stitched together into a BigVector block with the given ID. As each chunk of writes is finished, the writer automatically writes a new block index, ensuring we can recover at least part of the history from crashes.

    The writer is asynchronous: it internally spawns a thread for serialization and IO. Appends to the writer are transferred to the IO thread via a queue; the IO thread then writes them to disk. Closing the writer blocks until the transfer queue is exhausted.
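
A small sketch of the append/close cycle, assuming handle is an open, writable handle and the chunk size is arbitrary:

  (require '[jepsen.store.format :as fmt])
  (let [w (fmt/big-vector-block-writer! handle 16384)] ; 16384 elements per chunk
    (doseq [op [{:type :invoke, :f :read, :value nil}
                {:type :ok,     :f :read, :value 3}]]
      (fmt/append-to-big-vector-block! w op))
    ;; Closing blocks until the queue drains and the final block index is written.
    (.close w))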

    -

    big-vector-block-writer-worker!

    (big-vector-block-writer-worker! handle block-id elements-per-chunk queue)

    Loop which writes values from a BigVectorBlockWriter’s queue to disk.

    -

    big-vector-chunk-size

    How many elements should we write to a chunk of a BigVector before starting a new one?

    -

    big-vector-count-size

    How many bytes do we use to store a bigvector’s count?

    -

    big-vector-index-size

    How many bytes do we use to store a bigvector element’s index?

    -

    block-checksum

    (block-checksum header data)

    Compute the checksum of a block, given two bytebuffers: one for the header, and one for the data.

    -

    block-checksum-given-data-checksum

    (block-checksum-given-data-checksum header data-crc)

    Computes the checksum of a block, given a ByteBuffer header, and an already-computed CRC32 checksum of the data. Useful for streaming writers which compute their own checksums while writing. Mutates data-crc in place; I can’t figure out how to safely copy it.

    -

    block-checksum-offset

    Where do we write a checksum in the block header?

    -

    block-checksum-size

    How long is the checksum for a block?

    -

    block-header

    (block-header)

    Returns a blank ByteBuffer for a block header. All fields zero.

    -

    block-header-checksum

    (block-header-checksum header)

    Fetches the checksum of a block header.

    -

    block-header-for-data

    (block-header-for-data block-type data)

    Takes a block type and a ByteBuffer of data, and constructs a block header whose type is the given type, and which has the appropriate length and checksum for the given data.

    -

    block-header-for-length-and-checksum!

    (block-header-for-length-and-checksum! block-type data-length data-checksum)

An optimized way to construct a block header, given a block type, the length of a data region (not including headers), and the CRC checksum of that data. Mutates the checksum in place.

    -

    block-header-length

    (block-header-length header)

    Fetches the length of a block header.

    -

    block-header-size

    How long is a block header?

    -

    block-header-type

    (block-header-type header)

    Returns the type of a block header, as a keyword.

    -

    block-id-size

    How many bytes per block ID?

    -

    block-index-data-size

    (block-index-data-size index)

    Takes a block index and returns the number of bytes required for that block to be written, NOT including headers.

    -

    block-index-offset-offset

    Where in the file do we write the offset of the index block?

    -

    block-len-offset

    Where do we write a block length in a block header?

    -

    block-len-size

    How long is the length prefix for a block?

    -

    block-offset-size

    How many bytes per block offset address?

    -

    block-ref

    (block-ref id)

    Constructs a new BlockRef object pointing to the given block ID.

    -

    block-references

    (block-references handle)(block-references handle block-id)

    Takes a handle and a block ID, and returns the set of all block IDs which that block references. Right now we do this by parsing the block data; later we might want to move references into block headers. With no block ID, returns references from the root.

    -

    block-type->short

    A map of block types to integer codes.

    -

    block-type-offset

    Where do we store the block type in a block header?

    -

    block-type-size

    How long is the type for a block?

    -

    check-block-checksum

    (check-block-checksum header data)

    Verifies the checksum of a block, given two ByteBuffers: one for the header, and one for the data.

    -

    check-magic

    (check-magic handle)

    Takes a Handle and reads the magic bytes, ensuring they match.

    -

    check-version!

    (check-version! handle)

    Takes a Handle and reads the version. Ensures it’s a version we can decode, and updates the Handle’s version if it hasn’t already been set.

    -

    close!

    (close! handle)

    Closes a Handle

    -

    copy!

    (copy! r w)

    Takes two handles: a reader and a writer. Copies the root and any other referenced blocks from reader to writer.

    -

    current-version

    The current file version.

    +

    big-vector-block-writer-worker!

    (big-vector-block-writer-worker! handle block-id elements-per-chunk queue)

    Loop which writes values from a BigVectorBlockWriter’s queue to disk.

    +

    big-vector-chunk-size

    How many elements should we write to a chunk of a BigVector before starting a new one?

    +

    big-vector-count-size

    How many bytes do we use to store a bigvector’s count?

    +

    big-vector-index-size

    How many bytes do we use to store a bigvector element’s index?

    +

    block-checksum

    (block-checksum header data)

    Compute the checksum of a block, given two bytebuffers: one for the header, and one for the data.

    +

    block-checksum-given-data-checksum

    (block-checksum-given-data-checksum header data-crc)

    Computes the checksum of a block, given a ByteBuffer header, and an already-computed CRC32 checksum of the data. Useful for streaming writers which compute their own checksums while writing. Mutates data-crc in place; I can’t figure out how to safely copy it.

    +

    block-checksum-offset

    Where do we write a checksum in the block header?

    +

    block-checksum-size

    How long is the checksum for a block?

    +

    block-header

    (block-header)

    Returns a blank ByteBuffer for a block header. All fields zero.

    +

    block-header-checksum

    (block-header-checksum header)

    Fetches the checksum of a block header.

    +

    block-header-for-data

    (block-header-for-data block-type data)

    Takes a block type and a ByteBuffer of data, and constructs a block header whose type is the given type, and which has the appropriate length and checksum for the given data.

    +

    block-header-for-length-and-checksum!

    (block-header-for-length-and-checksum! block-type data-length data-checksum)

An optimized way to construct a block header, given a block type, the length of a data region (not including headers), and the CRC checksum of that data. Mutates the checksum in place.

    +

    block-header-length

    (block-header-length header)

    Fetches the length of a block header.

    +

    block-header-size

    How long is a block header?

    +

    block-header-type

    (block-header-type header)

    Returns the type of a block header, as a keyword.

    +

    block-id-size

    How many bytes per block ID?

    +

    block-index-data-size

    (block-index-data-size index)

    Takes a block index and returns the number of bytes required for that block to be written, NOT including headers.

    +

    block-index-offset-offset

    Where in the file do we write the offset of the index block?

    +

    block-len-offset

    Where do we write a block length in a block header?

    +

    block-len-size

    How long is the length prefix for a block?

    +

    block-offset-size

    How many bytes per block offset address?

    +

    block-ref

    (block-ref id)

    Constructs a new BlockRef object pointing to the given block ID.

    +

    block-references

    (block-references handle)(block-references handle block-id)

    Takes a handle and a block ID, and returns the set of all block IDs which that block references. Right now we do this by parsing the block data; later we might want to move references into block headers. With no block ID, returns references from the root.

    +

    block-type->short

    A map of block types to integer codes.

    +

    block-type-offset

    Where do we store the block type in a block header?

    +

    block-type-size

    How long is the type for a block?

    +

    check-block-checksum

    (check-block-checksum header data)

    Verifies the checksum of a block, given two ByteBuffers: one for the header, and one for the data.

    +

    check-magic

    (check-magic handle)

    Takes a Handle and reads the magic bytes, ensuring they match.

    +

    check-version!

    (check-version! handle)

    Takes a Handle and reads the version. Ensures it’s a version we can decode, and updates the Handle’s version if it hasn’t already been set.

    +

    close!

    (close! handle)

    Closes a Handle

    +

    copy!

    (copy! r w)

    Takes two handles: a reader and a writer. Copies the root and any other referenced blocks from reader to writer.

    +

    current-version

    The current file version.

    Version 0 was the first version of the file format.

    Version 1 added support for FressianStream and BigVector blocks.

    -

    find-references

    (find-references x)

    A little helper function for finding BlockRefs in a nested data structure. Returns the IDs of all BlockRefs.

    -

    first-block-offset

    Where in the file the first block begins.

    -

    flush!

    (flush! handle)

    Flushes writes to a Handle to disk.

    -

    fressian-buffer-size

    How many bytes should we buffer before writing Fressian data to disk?

    -

    fressian-read-handlers

    How do we read Fressian data?

    -

    fressian-stream-block-writer!

    (fressian-stream-block-writer! handle)

    Takes a handle. Creates a new block ID, and prepares to write a new FressianStream block at the end of the file. Returns a FressianStreamBlockWriter which can be used to write elements to the FressianStream. When closed, the writer writes the block header and updates the handle’s block index to refer to the new block.

    -

    fressian-write-handlers

    How do we write Fressian data?

    -

    gc!

    (gc! file)

    Garbage-collects a file (anything that works with io/file) in-place.

    -

    IPartialMap

    protocol

    members

    partial-map-rest-id

    (partial-map-rest-id this)

    large-region-size

    How big does a file region have to be before we just mmap it instead of doing file reads?

    -

    load-block-index!

    (load-block-index! handle)

    Takes a handle, reloads its block index from disk, and returns handle.

    -

    magic

    The magic string at the start of Jepsen files.

    -

    magic-offset

    Where the magic is written

    -

    magic-size

    Bytes it takes to store the magic string.

    -

    new-block-id!

    (new-block-id! handle)

    Takes a handle and returns a fresh block ID for that handle, mutating the handle so that this ID will not be allocated again.

    -

    next-block-offset

    (next-block-offset handle)

    Takes a handle and returns the offset of the next block. Right now this is just the end of the file.

    -

    open

    (open path)

    Constructs a new handle for a Jepsen file of the given path (anything which works with io/file).

    -

    prep-read!

    (prep-read! handle)

    Called when we read anything from a handle. Ensures that we’ve checked the magic, version, and loaded the block index.

    -

    prep-write!

    (prep-write! handle)

    Called when we write anything to a handle. Ensures that we’ve written the header before doing anything else. Returns handle.

    -

    read-big-vector-block

    (read-big-vector-block handle buf)

    Takes a handle and a ByteBuffer for a big-vector block. Returns a lazy vector (specifically, a soft chunked vector) representing its data.

    -

    read-block-by-id

    (read-block-by-id handle id)

    Takes a handle and a logical block id. Looks up the offset for the given block and reads it using read-block-by-offset (which includes verifying the checksum).

    -

    read-block-by-offset

    (read-block-by-offset handle offset)

    Takes a Handle and the offset of a block. Reads the block header, validates the checksum, and interprets the block data depending on the block type. Returns a map of:

    +

    find-references

    (find-references x)

    A little helper function for finding BlockRefs in a nested data structure. Returns the IDs of all BlockRefs.

    +

    first-block-offset

    Where in the file the first block begins.

    +

    flush!

    (flush! handle)

    Flushes writes to a Handle to disk.

    +

    fressian-buffer-size

    How many bytes should we buffer before writing Fressian data to disk?

    +

    fressian-read-handlers

    How do we read Fressian data?

    +

    fressian-stream-block-writer!

    (fressian-stream-block-writer! handle)

    Takes a handle. Creates a new block ID, and prepares to write a new FressianStream block at the end of the file. Returns a FressianStreamBlockWriter which can be used to write elements to the FressianStream. When closed, the writer writes the block header and updates the handle’s block index to refer to the new block.

    +

    fressian-write-handlers

    How do we write Fressian data?

    +

    gc!

    (gc! file)

    Garbage-collects a file (anything that works with io/file) in-place.

    +

    IPartialMap

    protocol

    members

    partial-map-rest-id

    (partial-map-rest-id this)

    large-region-size

    How big does a file region have to be before we just mmap it instead of doing file reads?

    +

    load-block-index!

    (load-block-index! handle)

    Takes a handle, reloads its block index from disk, and returns handle.

    +

    magic

    The magic string at the start of Jepsen files.

    +

    magic-offset

    Where the magic is written

    +

    magic-size

    Bytes it takes to store the magic string.

    +

    new-block-id!

    (new-block-id! handle)

    Takes a handle and returns a fresh block ID for that handle, mutating the handle so that this ID will not be allocated again.

    +

    next-block-offset

    (next-block-offset handle)

    Takes a handle and returns the offset of the next block. Right now this is just the end of the file.

    +

    open

    (open path)

    Constructs a new handle for a Jepsen file of the given path (anything which works with io/file).

    +

    prep-read!

    (prep-read! handle)

    Called when we read anything from a handle. Ensures that we’ve checked the magic, version, and loaded the block index.

    +

    prep-write!

    (prep-write! handle)

    Called when we write anything to a handle. Ensures that we’ve written the header before doing anything else. Returns handle.

    +

    read-big-vector-block

    (read-big-vector-block handle buf)

    Takes a handle and a ByteBuffer for a big-vector block. Returns a lazy vector (specifically, a soft chunked vector) representing its data.

    +

    read-block-by-id

    (read-block-by-id handle id)

    Takes a handle and a logical block id. Looks up the offset for the given block and reads it using read-block-by-offset (which includes verifying the checksum).

    +

    read-block-by-offset

    (read-block-by-offset handle offset)

    Takes a Handle and the offset of a block. Reads the block header, validates the checksum, and interprets the block data depending on the block type. Returns a map of:

  {:type    The block type, as a keyword
   :offset  The offset of this block
   :length  How many bytes are in this block, total
   :data    The interpreted data stored in this block—depends on block type}

    -

    read-block-by-offset*

    (read-block-by-offset* handle offset)

    Takes a Handle and the offset of a block. Reads the block header and data, validates the checksum, and returns a map of:

    +

    read-block-by-offset*

    (read-block-by-offset* handle offset)

    Takes a Handle and the offset of a block. Reads the block header and data, validates the checksum, and returns a map of:

  {:header  header, as bytebuffer
   :data    data, as bytebuffer}

    -

    read-block-data

    (read-block-data handle offset header)

    Fetches the ByteBuffer for a block’s data, given a block header stored at the given offset.

    -

    read-block-header

    (read-block-header handle offset)

    Fetches the ByteBuffer for a block header at the given offset.

    -

    read-block-index-block

    (read-block-index-block handle data)

    Takes a ByteBuffer and reads a block index from it: a map of

    +

    read-block-data

    (read-block-data handle offset header)

    Fetches the ByteBuffer for a block’s data, given a block header stored at the given offset.

    +

    read-block-header

    (read-block-header handle offset)

    Fetches the ByteBuffer for a block header at the given offset.

    +

    read-block-index-block

    (read-block-index-block handle data)

    Takes a ByteBuffer and reads a block index from it: a map of

    {:root root-id :blocks {id offset, id2 offset2, …}}

    -

    read-block-index-offset

    (read-block-index-offset handle)

    Takes a handle and returns the current root block index offset from its file. Throws :type ::no-block-index if the block index is 0 or the file is too short.

    -

    read-file

    (read-file file offset size)

    Returns a ByteBuffer corresponding to a given file region. Uses mmap for large regions, or regular read calls for small ones.

    -

    read-fressian-block

    (read-fressian-block handle data)

    Takes a handle and a ByteBuffer of data from a Fressian block. Returns its parsed contents.

    -

    read-fressian-stream-block

    (read-fressian-stream-block handle data)

    Takes a handle and a ByteBuffer of data from a FressianStream block. Returns its contents as a vector.

    -

    read-partial-map-block

    (read-partial-map-block handle data)

    Takes a handle and a ByteBuffer for a partial-map block. Returns a lazy map representing its data.

    -

    read-root

    (read-root handle)

    Takes a handle. Looks up the root block from the current block index and reads it. Returns nil if there is no root.

    -

    read-test

    (read-test handle)

    Reads a test from a handle’s root. Constructs a lazy test map where history and results are loaded as-needed from the file. Leave the handle open so this map can use it; it’ll be automatically closed when this map is GCed. Includes metadata so that this test can be rewritten using write-results!

    -

    set-block-header-checksum!

    (set-block-header-checksum! buf checksum)

    Sets the checksum in a block header. Returns the block header.

    -

    set-block-header-length!

    (set-block-header-length! buf length)

    Sets the length in a block header. Returns the block header.

    -

    set-block-header-type!

    (set-block-header-type! buf block-type)

    Sets the type (a keyword) in a block header. Returns the header.

    -

    set-root!

    (set-root! handle root-id)

    Takes a handle and a block ID. Updates the handle’s block index (in-memory) to point to this block ID as the root. Returns handle.

    -

    short->block-type

    A map of integers to block types.

    -

    test-history-writer!

    (test-history-writer! handle test)(test-history-writer! handle test chunk-size)

    Takes a handle and a test created with write-initial-test!, and returns a BigVectorBlockWriter for writing operations to the history. Append elements using append-to-big-vector-block!, and .close the writer when done.

    -

    version

    (version handle)

    Returns the version of a Handle.

    -

    version-offset

    Where in the file the version begins

    -

    version-size

    Bytes it takes to store a version.

    -

    write-big-vector-block!

    (write-big-vector-block! handle id element-count chunks)

Takes a handle, a block ID, a count, and a vector of initial-index block-id chunks. Writes a BigVector block with the given count and chunks to the end of the file. Records the freshly written block in the handle’s block index, and returns ID.

+

    read-block-index-offset

    (read-block-index-offset handle)

    Takes a handle and returns the current root block index offset from its file. Throws :type ::no-block-index if the block index is 0 or the file is too short.

    +

    read-file

    (read-file file offset size)

    Returns a ByteBuffer corresponding to a given file region. Uses mmap for large regions, or regular read calls for small ones.

    +

    read-fressian-block

    (read-fressian-block handle data)

    Takes a handle and a ByteBuffer of data from a Fressian block. Returns its parsed contents.

    +

    read-fressian-stream-block

    (read-fressian-stream-block handle data)

    Takes a handle and a ByteBuffer of data from a FressianStream block. Returns its contents as a vector.

    +

    read-partial-map-block

    (read-partial-map-block handle data)

    Takes a handle and a ByteBuffer for a partial-map block. Returns a lazy map representing its data.

    +

    read-root

    (read-root handle)

    Takes a handle. Looks up the root block from the current block index and reads it. Returns nil if there is no root.

    +

    read-test

    (read-test handle)

    Reads a test from a handle’s root. Constructs a lazy test map where history and results are loaded as-needed from the file. Leave the handle open so this map can use it; it’ll be automatically closed when this map is GCed. Includes metadata so that this test can be rewritten using write-results!

    +

    set-block-header-checksum!

    (set-block-header-checksum! buf checksum)

    Sets the checksum in a block header. Returns the block header.

    +

    set-block-header-length!

    (set-block-header-length! buf length)

    Sets the length in a block header. Returns the block header.

    +

    set-block-header-type!

    (set-block-header-type! buf block-type)

    Sets the type (a keyword) in a block header. Returns the header.

    +

    set-root!

    (set-root! handle root-id)

    Takes a handle and a block ID. Updates the handle’s block index (in-memory) to point to this block ID as the root. Returns handle.

    +

    short->block-type

    A map of integers to block types.

    +

    test-history-writer!

    (test-history-writer! handle test)(test-history-writer! handle test chunk-size)

    Takes a handle and a test created with write-initial-test!, and returns a BigVectorBlockWriter for writing operations to the history. Append elements using append-to-big-vector-block!, and .close the writer when done.

    +

    version

    (version handle)

    Returns the version of a Handle.

    +

    version-offset

    Where in the file the version begins

    +

    version-size

    Bytes it takes to store a version.

    +

    write-big-vector-block!

    (write-big-vector-block! handle id element-count chunks)

    Takes a handle, a block ID, a count, and a vector of initial-index block-id chunks. Writes a BigVector block with the given count and chunks to the end of the file. Records the freshly written block in the handle’s block index, and returns ID.

    -

    write-block!

    (write-block! handle offset block-type data)

    Writes a block to a handle at the given offset, given a block type as a keyword and a ByteBuffer for the block’s data. Returns handle.

    -

    write-block-data!

    (write-block-data! handle offset data)

    Writes block data to the given block offset (e.g. the address of the header, not the data itself) in the file, backed by the given handle. Returns handle.

    -

    write-block-header!

    (write-block-header! handle offset block-header)

    Writes a block header to the given offset in the file backed by the given handle. Returns handle.

    -

    write-block-index!

    (write-block-index! handle)(write-block-index! handle offset)

    Writes a block index for a Handle, based on whatever that Handle’s current block index is. Automatically generates a new block ID for this index and adds it to the handle as well. Then writes a new block index offset pointing to this block index. Returns handle.

    -

    write-block-index-offset!

    (write-block-index-offset! handle root)

    Takes a handle and the offset of a block index block to use as the new root. Updates the file’s block pointer. Returns handle.

    -

    write-file!

    (write-file! file offset buffer)

    Takes a FileChannel, an offset, and a ByteBuffer. Writes the ByteBuffer to the FileChannel at the given offset completely. Returns number of bytes written.

    -

    write-fressian-block!

    (write-fressian-block! handle data)(write-fressian-block! handle id data)

    Takes a handle, an optional block ID, and some Clojure data. Writes that data to a Fressian block at the end of the file, records the new block in the handle’s block index, and returns the ID of the newly written block.

    -

    write-fressian-block!*

    (write-fressian-block!* handle offset data)

    Takes a handle, a byte offset, and some Clojure data. Writes that data to a Fressian block at the given offset. Returns handle.

    -

    write-fressian-to-file!

    (write-fressian-to-file! file offset checksum data)

    Takes a FileChannel, an offset, a checksum, and a data structure as Fressian. Writes the data structure as Fressian to the file at the given offset. Returns the size of the data that was just written, in bytes. Mutates checksum with written bytes.

    -

    write-header!

    (write-header! handle)

    Takes a Handle and writes the initial magic bytes and version number. Initializes the handle’s version to current-version if it hasn’t already been set. Returns handle.

    -

    write-initial-test!

    (write-initial-test! handle test)

    Writes an initial test to a handle, making the test the root. Creates an (initially nil) block for the history. Called when we first begin a test. Returns test with additional metadata, so we can write the history and results later.

    -

    write-partial-map-block!

    (write-partial-map-block! handle m rest-id)(write-partial-map-block! handle id m rest-id)

    Takes a handle, a Clojure map, and the ID of the block which stores the rest of the map (use nil if there is no more data to the PartialMap). Writes the map to a new PartialMap block, records it in the handle’s block index, and returns the ID of this block itself. Optionally takes an explicit ID for this block.

    -

    write-partial-map-block!*

    (write-partial-map-block!* handle offset m rest-id)

    Takes a handle, a byte offset, a Clojure map, and the ID of the block which stores the rest of the map (use nil if there is no more to the PartialMap). Writes the map and rest pointer to a PartialMap block at the given offset. Returns handle.

    -

    write-test!

    (write-test! handle test)

    Writes an entire test map to a handle, making the test the root. Useful for re-writing a completed test that’s already in memory, and migrating existing Fressian tests to the new format. Returns handle.

    -

    write-test-with-history!

    (write-test-with-history! handle test)

    Takes a handle and a test created with write-initial-test!, and writes it again as the root. Used for rewriting a test after running it, but before analysis, in case there’s state that changed. Returns test.

    -

    write-test-with-results!

    (write-test-with-results! handle test)

    Takes a handle and a test created with write-initial-test!, and appends its :results as a partial map block: :valid? in the top tier, and other results below. Writes test using those results and history blocks. Returns test, with ::results-id metadata pointing to the block ID of these results.

    -
    \ No newline at end of file +

    write-block!

    (write-block! handle offset block-type data)

    Writes a block to a handle at the given offset, given a block type as a keyword and a ByteBuffer for the block’s data. Returns handle.

    +

    write-block-data!

    (write-block-data! handle offset data)

    Writes block data to the given block offset (e.g. the address of the header, not the data itself) in the file, backed by the given handle. Returns handle.

    +

    write-block-header!

    (write-block-header! handle offset block-header)

    Writes a block header to the given offset in the file backed by the given handle. Returns handle.

    +

    write-block-index!

    (write-block-index! handle)(write-block-index! handle offset)

    Writes a block index for a Handle, based on whatever that Handle’s current block index is. Automatically generates a new block ID for this index and adds it to the handle as well. Then writes a new block index offset pointing to this block index. Returns handle.

    +

    write-block-index-offset!

    (write-block-index-offset! handle root)

    Takes a handle and the offset of a block index block to use as the new root. Updates the file’s block pointer. Returns handle.

    +

    write-file!

    (write-file! file offset buffer)

    Takes a FileChannel, an offset, and a ByteBuffer. Writes the ByteBuffer to the FileChannel at the given offset completely. Returns number of bytes written.

    +

    write-fressian-block!

    (write-fressian-block! handle data)(write-fressian-block! handle id data)

    Takes a handle, an optional block ID, and some Clojure data. Writes that data to a Fressian block at the end of the file, records the new block in the handle’s block index, and returns the ID of the newly written block.

    +

    write-fressian-block!*

    (write-fressian-block!* handle offset data)

    Takes a handle, a byte offset, and some Clojure data. Writes that data to a Fressian block at the given offset. Returns handle.

    +

    write-fressian-to-file!

    (write-fressian-to-file! file offset checksum data)

    Takes a FileChannel, an offset, a checksum, and a data structure as Fressian. Writes the data structure as Fressian to the file at the given offset. Returns the size of the data that was just written, in bytes. Mutates checksum with written bytes.

    +

    write-header!

    (write-header! handle)

    Takes a Handle and writes the initial magic bytes and version number. Initializes the handle’s version to current-version if it hasn’t already been set. Returns handle.

    +

    write-initial-test!

    (write-initial-test! handle test)

    Writes an initial test to a handle, making the test the root. Creates an (initially nil) block for the history. Called when we first begin a test. Returns test with additional metadata, so we can write the history and results later.

    +

    write-partial-map-block!

    (write-partial-map-block! handle m rest-id)(write-partial-map-block! handle id m rest-id)

    Takes a handle, a Clojure map, and the ID of the block which stores the rest of the map (use nil if there is no more data to the PartialMap). Writes the map to a new PartialMap block, records it in the handle’s block index, and returns the ID of this block itself. Optionally takes an explicit ID for this block.

    +

    write-partial-map-block!*

    (write-partial-map-block!* handle offset m rest-id)

    Takes a handle, a byte offset, a Clojure map, and the ID of the block which stores the rest of the map (use nil if there is no more to the PartialMap). Writes the map and rest pointer to a PartialMap block at the given offset. Returns handle.

    +

    write-test!

    (write-test! handle test)

    Writes an entire test map to a handle, making the test the root. Useful for re-writing a completed test that’s already in memory, and migrating existing Fressian tests to the new format. Returns handle.

    +

    write-test-with-history!

    (write-test-with-history! handle test)

    Takes a handle and a test created with write-initial-test!, and writes it again as the root. Used for rewriting a test after running it, but before analysis, in case there’s state that changed. Returns test.

    +

    write-test-with-results!

    (write-test-with-results! handle test)

    Takes a handle and a test created with write-initial-test!, and appends its :results as a partial map block: :valid? in the top tier, and other results below. Writes test using those results and history blocks. Returns test, with ::results-id metadata pointing to the block ID of these results.
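
Tying these together, a rough end-to-end writing sketch, assuming handle, an initial test map, a sequence ops of history operations, and a results map are already in hand:

  (require '[jepsen.store.format :as fmt])
  (let [test (fmt/write-initial-test! handle test)    ; test becomes the root
        w    (fmt/test-history-writer! handle test)]
    (doseq [op ops]
      (fmt/append-to-big-vector-block! w op))         ; stream the history
    (.close w)
    (let [test (fmt/write-test-with-history! handle test)]
      ;; Finally, attach the analysis and rewrite the root one more time.
      (fmt/write-test-with-results! handle (assoc test :results results))))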

    +
    \ No newline at end of file diff --git a/jepsen.store.fressian.html b/jepsen.store.fressian.html index ffab30b7c..2096db049 100644 --- a/jepsen.store.fressian.html +++ b/jepsen.store.fressian.html @@ -1,12 +1,12 @@ -jepsen.store.fressian documentation

    jepsen.store.fressian

    Supports serialization of various Jepsen datatypes via Fressian.

    +jepsen.store.fressian documentation

    jepsen.store.fressian

    Supports serialization of various Jepsen datatypes via Fressian.

    postprocess-fressian

    (postprocess-fressian obj)

    DEPRECATED: we now decode vectors directly in the Fressian reader.

    Fressian likes to give us ArrayLists, which are kind of a PITA when you’re used to working with vectors.

    We now write sequential types as their own vector wrappers, which means this is not necessary going forward, but I’m leaving this in place in case you have historical tests you need to re-process.

    -

    read-handlers

    read-handlers*

    reader

    (reader input-stream)(reader input-stream opts)

    Creates a Fressian reader given an InputStream. Options:

    +

    read-handlers

    read-handlers*

    reader

    (reader input-stream)(reader input-stream opts)

    Creates a Fressian reader given an InputStream. Options:

    :handlers Read handlers

    -

    write-handlers

    write-handlers*

    write-object+

    (write-object+ writer-opts writer x)(write-object+ writer-opts _ path x)

    Takes options for writer, a Fressian writer, and an object x. Writes x object to the given writer. If the write fails due to an unknown handler, backs up, traverses the structure of x, and determines the path to the specific part which could not be serialized, throwing a more specific error. Uses writer-opts to create new writers for this debugging process, if necessary.

    -

    writer

    (writer output-stream)(writer output-stream opts)

    Creates a Fressian writer given an OutputStream. Options:

    +

    write-handlers

    write-handlers*

    write-object+

    (write-object+ writer-opts writer x)(write-object+ writer-opts _ path x)

    Takes options for writer, a Fressian writer, and an object x. Writes x object to the given writer. If the write fails due to an unknown handler, backs up, traverses the structure of x, and determines the path to the specific part which could not be serialized, throwing a more specific error. Uses writer-opts to create new writers for this debugging process, if necessary.

    +

    writer

    (writer output-stream)(writer output-stream opts)

    Creates a Fressian writer given an OutputStream. Options:

    :handlers Write handlers

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.store.html b/jepsen.store.html index c22a016a9..7c5143215 100644 --- a/jepsen.store.html +++ b/jepsen.store.html @@ -1,42 +1,42 @@ -jepsen.store documentation

    jepsen.store

    Persistent storage for test runs and later analysis.

    +jepsen.store documentation

    jepsen.store

    Persistent storage for test runs and later analysis.

    all-tests

    (all-tests)

    A plain old vector of test delays, sorted in chronological order. Unlike tests, attempts to load tests with no .fressian or .jepsen file will return nil here, instead of throwing. Helpful when you want to slice and dice all tests at the REPL.

    -

    base-dir

    class-name->ns-str

    (class-name->ns-str class-name)

    Turns a class string into a namespace string (by translating _ to -)

    -

    close!

    (close! test)

    Takes a test map and closes its store handle, if one exists. Returns test without store handle.

    -

    console-appender

    default-edn-reader

    (default-edn-reader tag value)

    We use defrecords heavily and it’s nice to be able to deserialize them.

    -

    default-logging-overrides

    Logging overrides that we apply by default

    -

    default-nonserializable-keys

    What keys in a test can’t be serialized to disk, by default?

    -

    delete!

    (delete!)(delete! test-name)(delete! test-name test-time)

    Deletes all tests, or all tests under a given name, or, if given a date as well, a specific test.

    -

    delete-file-recursively!

    (delete-file-recursively! f)

    dir?

    (dir? f)

    Is this a directory?

    -

    edn-tag->constructor

    (edn-tag->constructor tag)

    Takes an edn tag and returns a constructor fn taking that tag’s value and building an object from it.

    -

    file-name

    (file-name f)

    Maps a File to a string name.

    -

    fressian-file

    (fressian-file test)

    Gives the path to a fressian file encoding all the results from a test.

    -

    fressian-file!

    (fressian-file! test)

    Gives the path to a fressian file encoding all the results from a test, ensuring its containing directory exists.

    -

    jepsen-file

    (jepsen-file test)

    Gives the path to a .jepsen file encoding all the results from a test.

    -

    jepsen-file!

    (jepsen-file! test)

    Gives the path to a .jepsen file, ensuring its directory exists.

    -

    latest

    (latest)

    Loads the latest test

    -

    load

    (load test)(load test-name test-time)

    Loads a specific test, either given a map with {:name … :start-time …}, or by name and time as separate arguments. Prefers to load a .jepsen file, falls back to .fressian.

    -

    load-fressian-file

    (load-fressian-file file)

    Loads an arbitrary Fressian file.

    -

    load-jepsen-file

    (load-jepsen-file file)

    Loads a test from an arbitrary Jepsen file. This is lazy, and retains a filehandle which will remain open until all references to this test are gone and the GC kicks in.

    -

    load-results

    (load-results test-name test-time)

    Loads the results map for a test by name and time. Prefers a lazy map from test.fressian; falls back to parsing results.edn.

    -

    load-results-edn

    (load-results-edn test)

    Loads the results map for a test by parsing the result.edn file, instead of test.jepsen.

    -

    memoized-edn-tag->constructor

    memoized-load-results

    migrate-to-jepsen-format!

    (migrate-to-jepsen-format!)

    Loads every test and copies their Fressian files to the new on-disk format.

    -

    nonserializable-keys

    (nonserializable-keys test)

    What keys in a test can’t be serialized to disk? The union of default nonserializable keys, plus any in :nonserializable-keys.

    -

    path

    (path test)(path test & args)

    With one arg, a test, returns the directory for that test’s results. Given additional arguments, returns a file with that name in the test directory. Nested paths are flattened: (path t [:a :b :c :d]) expands to …/a/b/c/d. Nil path components are ignored: (path t :a nil :b) expands to …/a/b.

    +

    base-dir

    class-name->ns-str

    (class-name->ns-str class-name)

    Turns a class string into a namespace string (by translating _ to -)

    +

    close!

    (close! test)

    Takes a test map and closes its store handle, if one exists. Returns test without store handle.

    +

    console-appender

    default-edn-reader

    (default-edn-reader tag value)

    We use defrecords heavily and it’s nice to be able to deserialize them.

    +

    default-logging-overrides

    Logging overrides that we apply by default

    +

    default-nonserializable-keys

    What keys in a test can’t be serialized to disk, by default?

    +

    delete!

    (delete!)(delete! test-name)(delete! test-name test-time)

    Deletes all tests, or all tests under a given name, or, if given a date as well, a specific test.

    +

    delete-file-recursively!

    (delete-file-recursively! f)

    dir?

    (dir? f)

    Is this a directory?

    +

    edn-tag->constructor

    (edn-tag->constructor tag)

    Takes an edn tag and returns a constructor fn taking that tag’s value and building an object from it.

    +

    file-name

    (file-name f)

    Maps a File to a string name.

    +

    fressian-file

    (fressian-file test)

    Gives the path to a fressian file encoding all the results from a test.

    +

    fressian-file!

    (fressian-file! test)

    Gives the path to a fressian file encoding all the results from a test, ensuring its containing directory exists.

    +

    jepsen-file

    (jepsen-file test)

    Gives the path to a .jepsen file encoding all the results from a test.

    +

    jepsen-file!

    (jepsen-file! test)

    Gives the path to a .jepsen file, ensuring its directory exists.

    +

    latest

    (latest)

    Loads the latest test

    +

    load

    (load test)(load test-name test-time)

    Loads a specific test, either given a map with {:name … :start-time …}, or by name and time as separate arguments. Prefers to load a .jepsen file, falls back to .fressian.

    +

    load-fressian-file

    (load-fressian-file file)

    Loads an arbitrary Fressian file.

    +

    load-jepsen-file

    (load-jepsen-file file)

    Loads a test from an arbitrary Jepsen file. This is lazy, and retains a filehandle which will remain open until all references to this test are gone and the GC kicks in.

    +

    load-results

    (load-results test-name test-time)

    Loads the results map for a test by name and time. Prefers a lazy map from test.fressian; falls back to parsing results.edn.

    +

    load-results-edn

    (load-results-edn test)

    Loads the results map for a test by parsing the result.edn file, instead of test.jepsen.

    +

    memoized-edn-tag->constructor

    memoized-load-results

    migrate-to-jepsen-format!

    (migrate-to-jepsen-format!)

    Loads every test and copies their Fressian files to the new on-disk format.

    +

    nonserializable-keys

    (nonserializable-keys test)

    What keys in a test can’t be serialized to disk? The union of default nonserializable keys, plus any in :nonserializable-keys.

    +

    path

    (path test)(path test & args)

    With one arg, a test, returns the directory for that test’s results. Given additional arguments, returns a file with that name in the test directory. Nested paths are flattened: (path t [:a :b :c :d]) expands to …/a/b/c/d. Nil path components are ignored: (path t :a nil :b) expands to …/a/b.

    Test must have only two keys: :name, and :start-time. :start-time may be a string, or a DateTime.
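
    As a sketch of the flattening rules, with a hypothetical test map carrying exactly those two keys:

    (require '[jepsen.store :as store])

    (def t {:name "demo", :start-time "2024-01-01T00:00:00-00:00"})

    (store/path t "results.edn")        ; a file inside this test's directory
    (store/path t [:a :b] nil "c.txt")  ; nested components flatten; nils are dropped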

    -

    path!

    (path! & args)

    Like path, but ensures the path’s containing directories exist.

    -

    read-handlers

    save-0!

    (save-0! test)

    Writes a test at the start of a test run. Updates symlinks. Returns a new version of test which should be used for subsequent writes.

    -

    save-1!

    (save-1! test)

    Phase 1: after completing the history, writes test.jepsen and history files to disk and updates latest symlinks. Returns test with metadata which should be preserved for calls to save-2!

    -

    save-2!

    (save-2! test)

    Phase 2: after computing results, we update the .jepsen file and write results as EDN. Returns test with metadata that should be preserved for future save calls.

    -

    serializable-test

    (serializable-test test)

    Takes a test and returns it without its nonserializable keys.

    -

    start-logging!

    (start-logging! test)

    Starts logging to a file in the test’s directory. Also updates current symlink. Test may include a :logging key, which should be a map with the following optional keys:

    +

    path!

    (path! & args)

    Like path, but ensures the path’s containing directories exist.

    +

    read-handlers

    save-0!

    (save-0! test)

    Writes a test at the start of a test run. Updates symlinks. Returns a new version of test which should be used for subsequent writes.

    +

    save-1!

    (save-1! test)

    Phase 1: after completing the history, writes test.jepsen and history files to disk and updates latest symlinks. Returns test with metadata which should be preserved for calls to save-2!

    +

    save-2!

    (save-2! test)

    Phase 2: after computing results, we update the .jepsen file and write results as EDN. Returns test with metadata that should be preserved for future save calls.

    +

    serializable-test

    (serializable-test test)

    Takes a test and returns it without its nonserializable keys.

    +

    start-logging!

    (start-logging! test)

    Starts logging to a file in the test’s directory. Also updates current symlink. Test may include a :logging key, which should be a map with the following optional keys:

    {:overrides   A map of packages to log level keywords}
     

    Test may also include a :logging-json? flag, which produces JSON structured Jepsen logs.

    -

    stop-logging!

    (stop-logging!)

    Resets logging to console only.

    -

    symlink?

    (symlink? f)

    Is this a symlink?

    -

    test

    (test which)

    Like load, loads a specific test. Frequently you don’t care about individual test names; you just want “the last test”, or “the third most recent”. This function can take:

    +

    stop-logging!

    (stop-logging!)

    Resets logging to console only.

    +

    symlink?

    (symlink? f)

    Is this a symlink?

    +

    test

    (test which)

    Like load, loads a specific test. Frequently you don’t care about individual test names; you just want “the last test”, or “the third most recent”. This function can take:

    • A Long. (test 0) returns the first test ever run; (test 2) loads the third. (test -1) loads the most recent test; -2 the next-to-most-recent.

      @@ -45,19 +45,19 @@

      A String. Can be a directory name, in which case we look for a test.jepsen in that directory.
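
    A brief sketch of these selectors at the REPL (the directory name here is hypothetical):

    (require '[jepsen.store :as store])

    (store/test -1)                   ; the most recent test
    (store/test 0)                    ; the first test ever run
    (store/test "store/demo/latest")  ; a directory containing test.jepsen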

    -

    test-names

    (test-names)

    Returns a seq of all known test names.

    -

    tests

    (tests)(tests test-name)

    If given a test name, returns a map of test runs to deref-able tests. With no test name, returns a map of test names to maps of runs to deref-able tests.

    -

    update-current-symlink!

    (update-current-symlink! test)

    Creates a current symlink to the currently running test, if a store directory exists.

    -

    update-symlink!

    (update-symlink! test dest)

    Takes a test and a symlink path. Creates a symlink from that path to the test directory, if it exists.

    -

    update-symlinks!

    (update-symlinks! test)

    Creates latest and current symlinks to the given test, if a store directory exists.

    -

    virtual-dir?

    (virtual-dir? f)

    Is this a . or .. directory entry?

    -

    with-handle

    macro

    (with-handle [test-sym test-expr] & body)

    Takes a binding symbol and a test expression. Opens a store.format handle for writing and reading test data, and evaluates body with that handle open, closing it automatically at the end of the block. Within the block, test-sym is bound to test-expr, with the writer handle available under the key path [:store :handle]. Returns the value of the body.

    +

    test-names

    (test-names)

    Returns a seq of all known test names.

    +

    tests

    (tests)(tests test-name)

    If given a test name, returns a map of test runs to deref-able tests. With no test name, returns a map of test names to maps of runs to deref-able tests.

    +

    update-current-symlink!

    (update-current-symlink! test)

    Creates a current symlink to the currently running test, if a store directory exists.

    +

    update-symlink!

    (update-symlink! test dest)

    Takes a test and a symlink path. Creates a symlink from that path to the test directory, if it exists.

    +

    update-symlinks!

    (update-symlinks! test)

    Creates latest and current symlinks to the given test, if a store directory exists.

    +

    virtual-dir?

    (virtual-dir? f)

    Is this a . or .. directory entry?

    +

    with-handle

    macro

    (with-handle [test-sym test-expr] & body)

    Takes a binding symbol and a test expression. Opens a store.format handle for writing and reading test data, and evaluates body with that handle open, closing it automatically at the end of the block. Within the block, test-sym is bound to test-expr, with the writer handle available under the key path [:store :handle]. Returns the value of the body.

    The generator interpreter, save-0, save-1, etc. use this handle to write and read test data as the test is run.
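
    A minimal sketch of that lifecycle, assuming my-test is an ordinary Jepsen test map:

    (require '[jepsen.store :as store])

    (store/with-handle [t my-test]
      (let [t (store/save-0! t)]
        ;; ... run the test and record its history here ...
        (-> t store/save-1! store/save-2!)))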

    -

    with-out-file

    macro

    (with-out-file test filename & body)

    Binds stdout to a file for the duration of body.

    -

    write-fressian!

    (write-fressian! test)

    Write the entire test as a .fressian file.

    -

    write-fressian-file!

    (write-fressian-file! data file)

    Writes a data structure to the given file, as Fressian. For instance:

    +

    with-out-file

    macro

    (with-out-file test filename & body)

    Binds stdout to a file for the duration of body.

    +

    write-fressian!

    (write-fressian! test)

    Write the entire test as a .fressian file.

    +

    write-fressian-file!

    (write-fressian-file! data file)

    Writes a data structure to the given file, as Fressian. For instance:

    (write-fressian-file! {:foo 2} (path! test "foo.fressian")).

    -

    write-handlers

    write-history!

    (write-history! test)

    Writes out history.txt and history.edn files.

    -

    write-jepsen!

    (write-jepsen! test)

    Takes a test and saves it as a .jepsen binary file.

    -

    write-results!

    (write-results! test)

    Writes out a results.edn file.

    -
    \ No newline at end of file +

    write-handlers

    write-history!

    (write-history! test)

    Writes out history.txt and history.edn files.

    +

    write-jepsen!

    (write-jepsen! test)

    Takes a test and saves it as a .jepsen binary file.

    +

    write-results!

    (write-results! test)

    Writes out a results.edn file.

    +
    \ No newline at end of file diff --git a/jepsen.tests.adya.html b/jepsen.tests.adya.html index 4af4d3f01..c7a560fdf 100644 --- a/jepsen.tests.adya.html +++ b/jepsen.tests.adya.html @@ -1,8 +1,8 @@ -jepsen.tests.adya documentation

    jepsen.tests.adya

    Generators and checkers for tests of Adya’s proscribed behaviors for weakly-consistent systems. See http://pmg.csail.mit.edu/papers/adya-phd.pdf

    +jepsen.tests.adya documentation

    jepsen.tests.adya

    Generators and checkers for tests of Adya’s proscribed behaviors for weakly-consistent systems. See http://pmg.csail.mit.edu/papers/adya-phd.pdf

    g2-checker

    (g2-checker)

    Verifies that at most one :insert completes successfully for any given key.

    -

    g2-gen

    (g2-gen)

    With concurrent, unique keys, emits pairs of :insert ops of the form [key a-id b-id], where one txn has a-id and the other has b-id. a-id and b-id are globally unique. Only two insert ops are generated for any given key. Keys and ids are positive integers.

    +

    g2-gen

    (g2-gen)

    With concurrent, unique keys, emits pairs of :insert ops of the form [key a-id b-id], where one txn has a-id and the other has b-id. a-id and b-id are globally unique. Only two insert ops are generated for any given key. Keys and ids are positive integers.

    G2 clients use two tables:

    create table a (
       id    int primary key,
    @@ -24,4 +24,4 @@
     

    into table a, if a-id is present. If b-id is present, insert into table b instead. Iff the insert succeeds, return :type :ok with the operation value unchanged.

    We’re looking to detect violations based on predicates; databases may prevent anti-dependency cycles with individual primary keys, but selects based on predicates might observe stale data. Clients should feel free to choose predicates and values in creative ways.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.tests.bank.html b/jepsen.tests.bank.html index 4f682db27..a1c9bca48 100644 --- a/jepsen.tests.bank.html +++ b/jepsen.tests.bank.html @@ -1,18 +1,18 @@ -jepsen.tests.bank documentation

    jepsen.tests.bank

    Helper functions for doing bank tests, where you simulate transfers between accounts, and verify that reads always show the same balance. The test map should have these additional options:

    +jepsen.tests.bank documentation

    jepsen.tests.bank

    Helper functions for doing bank tests, where you simulate transfers between accounts, and verify that reads always show the same balance. The test map should have these additional options:

    :accounts A collection of account identifiers. :total-amount Total amount to allocate. :max-transfer The largest transfer we’ll try to execute.

    by-node

    (by-node test history)

    Groups operations by node.

    -

    check-op

    (check-op accts total negative-balances? op)

    Takes a single op and returns errors in its balance

    -

    checker

    (checker checker-opts)

    Verifies that all reads must sum to (:total test), and, unless :negative-balances? is true, checks that all balances are non-negative.

    -

    diff-transfer

    Transfers only between different accounts.

    -

    err-badness

    (err-badness test err)

    Takes a bank error and returns a number, depending on its type. Bigger numbers mean more egregious errors.

    -

    generator

    (generator)

    A mixture of reads and transfers for clients.

    -

    ok-reads

    (ok-reads history)

    Filters a history to just OK reads. Returns nil if there are none.

    -

    plotter

    (plotter)

    Renders a graph of balances over time

    -

    points

    (points history)

    Turns a history into a sequence of [time, total-of-accounts] points.

    -

    read

    (read _ _)

    A generator of read operations.

    -

    test

    (test)(test opts)

    A partial test; bundles together some default choices for keys and amounts with a generator and checker. Options:

    +

    check-op

    (check-op accts total negative-balances? op)

    Takes a single op and returns errors in its balance

    +

    checker

    (checker checker-opts)

    Verifies that all reads must sum to (:total test), and, unless :negative-balances? is true, checks that all balances are non-negative.

    +

    diff-transfer

    Transfers only between different accounts.

    +

    err-badness

    (err-badness test err)

    Takes a bank error and returns a number, depending on its type. Bigger numbers mean more egregious errors.

    +

    generator

    (generator)

    A mixture of reads and transfers for clients.

    +

    ok-reads

    (ok-reads history)

    Filters a history to just OK reads. Returns nil if there are none.

    +

    plotter

    (plotter)

    Renders a graph of balances over time

    +

    points

    (points history)

    Turns a history into a sequence of [time, total-of-accounts] points.

    +

    read

    (read _ _)

    A generator of read operations.

    +

    test

    (test)(test opts)

    A partial test; bundles together some default choices for keys and amounts with a generator and checker. Options:

    :negative-balances? if true, doesn’t verify that balances remain positive
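
    A sketch of assembling the workload; the account values here are hypothetical, and you still supply your own :client and :db:

    (require '[jepsen.tests.bank :as bank])

    (def workload
      (merge (bank/test {:negative-balances? true})
             {:accounts     (vec (range 8))
              :total-amount 80
              :max-transfer 5}))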

    -

    transfer

    (transfer test _)

    Generator of a transfer: a random amount between two randomly selected accounts.

    -
    \ No newline at end of file +

    transfer

    (transfer test _)

    Generator of a transfer: a random amount between two randomly selected accounts.

    +
    \ No newline at end of file diff --git a/jepsen.tests.causal-reverse.html b/jepsen.tests.causal-reverse.html index 843f61c2a..20d4793cd 100644 --- a/jepsen.tests.causal-reverse.html +++ b/jepsen.tests.causal-reverse.html @@ -1,11 +1,11 @@ -jepsen.tests.causal-reverse documentation

    jepsen.tests.causal-reverse

    Checks for a strict serializability anomaly in which T1 < T2, but T2 is visible without T1.

    +jepsen.tests.causal-reverse documentation

    jepsen.tests.causal-reverse

    Checks for a strict serializability anomaly in which T1 < T2, but T2 is visible without T1.

    We perform concurrent blind inserts across n keys, and meanwhile, perform reads of n keys in a transaction. To verify, we replay the history, tracking the writes which were known to have completed before the invocation of any write w_i. If w_i is visible, and some w_j < w_i is not visible, we’ve found a violation of strict serializability.

    Splits keys up onto different tables to make sure they fall in different shard ranges

    checker

    (checker)

    Takes a history of writes and reads. Verifies that subsequent writes do not appear without prior acknowledged writes.

    -

    errors

    (errors history expected)

    Takes a history and an expected graph of write precedence, returning ops that violate the expected write order.

    -

    graph

    (graph history)

    Takes a history and returns a first-order write precedence graph.

    -

    workload

    (workload opts)

    A package of a generator and checker. Options:

    +

    errors

    (errors history expected)

    Takes a history and an expected graph of write precedence, returning ops that violate the expected write order.

    +

    graph

    (graph history)

    Takes a history and returns a first-order write precedence graph.

    +

    workload

    (workload opts)

    A package of a generator and checker. Options:

    :nodes A set of nodes you’re going to operate on. We only care about the count, so we can figure out how many workers to use per key. :per-key-limit Maximum number of ops per key. Default 500.
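
    A sketch of building the workload with hypothetical node names:

    (require '[jepsen.tests.causal-reverse :as causal-reverse])

    (def workload
      (causal-reverse/workload {:nodes         ["n1" "n2" "n3"]
                                :per-key-limit 200}))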

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.tests.causal.html b/jepsen.tests.causal.html index 1c1fee813..171fec7e6 100644 --- a/jepsen.tests.causal.html +++ b/jepsen.tests.causal.html @@ -1,6 +1,6 @@ -jepsen.tests.causal documentation

    jepsen.tests.causal

    causal-register

    (causal-register)

    check

    (check model)

    A series of causally consistent (CC) ops are a causal order (CO). We issue a CO of 5 read (r) and write (w) operations (r w r w r) against a register (key). All operations in this CO must appear to execute in the order provided by the issuing site (process). We also look for anomalies, such as unexpected values

    -

    cw1

    (cw1 _ _)

    cw2

    (cw2 _ _)

    inconsistent

    (inconsistent msg)

    Represents an invalid termination of a model; e.g. that an operation could not have taken place.

    -

    inconsistent?

    (inconsistent? model)

    Is a model inconsistent?

    -

    Model

    protocol

    members

    step

    (step model op)

    r

    (r _ _)

    ri

    (ri _ _)

    test

    (test opts)
    \ No newline at end of file +jepsen.tests.causal documentation

    jepsen.tests.causal

    causal-register

    (causal-register)

    check

    (check model)

    A series of causally consistent (CC) ops are a causal order (CO). We issue a CO of 5 read (r) and write (w) operations (r w r w r) against a register (key). All operations in this CO must appear to execute in the order provided by the issuing site (process). We also look for anomalies, such as unexpected values

    +

    cw1

    (cw1 _ _)

    cw2

    (cw2 _ _)

    inconsistent

    (inconsistent msg)

    Represents an invalid termination of a model; e.g. that an operation could not have taken place.

    +

    inconsistent?

    (inconsistent? model)

    Is a model inconsistent?

    +

    Model

    protocol

    members

    step

    (step model op)

    r

    (r _ _)

    ri

    (ri _ _)

    test

    (test opts)
    \ No newline at end of file diff --git a/jepsen.tests.cycle.append.html b/jepsen.tests.cycle.append.html index b9e9d067a..a93b54e1a 100644 --- a/jepsen.tests.cycle.append.html +++ b/jepsen.tests.cycle.append.html @@ -1,9 +1,9 @@ -jepsen.tests.cycle.append documentation

    jepsen.tests.cycle.append

    Detects cycles in histories where operations are transactions over named lists, and operations are either appends or reads. See elle.list-append for docs.

    +jepsen.tests.cycle.append documentation

    jepsen.tests.cycle.append

    Detects cycles in histories where operations are transactions over named lists, and operations are either appends or reads. See elle.list-append for docs.

    checker

    (checker)(checker opts)

    Full checker for append and read histories. See elle.list-append for options.

    -

    gen

    (gen opts)

    Wrapper for elle.list-append/gen; as a Jepsen generator.

    -

    test

    (test opts)

    A partial test, including a generator and checker. You’ll need to provide a client which can understand operations of the form:

    +

    gen

    (gen opts)

    Wrapper for elle.list-append/gen; as a Jepsen generator.

    +

    test

    (test opts)

    A partial test, including a generator and checker. You’ll need to provide a client which can understand operations of the form:

    {:type :invoke, :f :txn, :value [[:r 3 nil] [:append 3 2] [:r 3]]}
     

    and return completions like:

    @@ -11,4 +11,4 @@

    where the key 3 identifies some list, whose value is initially 1, and becomes 1 2.

    Options are passed directly to elle.list-append/check and elle.list-append/gen; see their docs for full options.
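
    A sketch of constructing the workload; the :consistency-models option here is just one example of an elle.list-append option, and you provide the :client yourself:

    (require '[jepsen.tests.cycle.append :as append])

    (def workload (append/test {:consistency-models [:serializable]}))
    ;; workload supplies the :generator and :checker for the ops shown above.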

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.tests.cycle.html b/jepsen.tests.cycle.html index 557a385de..b097f5deb 100644 --- a/jepsen.tests.cycle.html +++ b/jepsen.tests.cycle.html @@ -1,5 +1,5 @@ -jepsen.tests.cycle documentation

    jepsen.tests.cycle

    Tests based on transactional cycle detection via Elle. If you’re looking for code that used to be here, see elle.core.

    +jepsen.tests.cycle documentation

    jepsen.tests.cycle

    Tests based on transactional cycle detection via Elle. If you’re looking for code that used to be here, see elle.core.

    checker

    (checker analyze-fn)

    Takes a function which takes a history and returns a graph, explainer pair, and returns a checker which uses those graphs to identify cyclic dependencies.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.tests.cycle.wr.html b/jepsen.tests.cycle.wr.html index 7fb498e8e..cc601d313 100644 --- a/jepsen.tests.cycle.wr.html +++ b/jepsen.tests.cycle.wr.html @@ -1,12 +1,12 @@ -jepsen.tests.cycle.wr documentation

    jepsen.tests.cycle.wr

    A test which looks for cycles in write/read transactions. Writes are assumed to be unique, but this is the only constraint. See elle.rw-register for docs.

    +jepsen.tests.cycle.wr documentation

    jepsen.tests.cycle.wr

    A test which looks for cycles in write/read transactions. Writes are assumed to be unique, but this is the only constraint. See elle.rw-register for docs.

    checker

    (checker)(checker opts)

    Full checker for write-read registers. See elle.rw-register for options.

    -

    gen

    (gen opts)

    Wrapper around elle.rw-register/gen.

    -

    test

    (test opts)

    A partial test, including a generator and a checker. You’ll need to provide a client which can understand operations of the form:

    +

    gen

    (gen opts)

    Wrapper around elle.rw-register/gen.

    +

    test

    (test opts)

    A partial test, including a generator and a checker. You’ll need to provide a client which can understand operations of the form:

    {:type :invoke, :f :txn, :value [[:r 3 nil] [:w 3 6]]}

    and return completions like:

    {:type :ok, :f :txn, :value [[:r 3 1] [:w 3 6]]}

    Where the key 3 identifies some register whose value is initially 1, and which this transaction sets to 6.

    Options are passed directly to elle.rw-register/check and elle.rw-register/gen; see their docs for full options.

    -
    \ No newline at end of file +
    \ No newline at end of file diff --git a/jepsen.tests.html b/jepsen.tests.html index 35dd62059..e37f11775 100644 --- a/jepsen.tests.html +++ b/jepsen.tests.html @@ -1,7 +1,7 @@ -jepsen.tests documentation

    jepsen.tests

    Provide utilities for writing tests using jepsen.

    +jepsen.tests documentation

    jepsen.tests

    Provide utilities for writing tests using jepsen.

    atom-client

    (atom-client state)(atom-client state meta-log)

    A CAS client which uses an atom for state. Should probably move this into core-test.

    -

    atom-db

    (atom-db state)

    Wraps an atom as a database.

    -

    noop-test

    Boring test stub. Typically used as a basis for writing more complex tests.

    -
    \ No newline at end of file +

    atom-db

    (atom-db state)

    Wraps an atom as a database.

    +

    noop-test

    Boring test stub. Typically used as a basis for writing more complex tests.
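
    A sketch of using the stub as a starting point; the node names here are hypothetical:

    (require '[jepsen.tests :as tests]
             '[jepsen.core :as jepsen])

    (jepsen/run! (merge tests/noop-test
                        {:name "demo", :nodes ["n1" "n2" "n3"]}))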

    +
    \ No newline at end of file diff --git a/jepsen.tests.kafka.html b/jepsen.tests.kafka.html index 58fa3a21a..40ac037cf 100644 --- a/jepsen.tests.kafka.html +++ b/jepsen.tests.kafka.html @@ -1,6 +1,6 @@ -jepsen.tests.kafka documentation

    jepsen.tests.kafka

    This workload is intended for systems which behave like the popular Kafka queue. This includes Kafka itself, as well as compatible systems like Redpanda.

    +jepsen.tests.kafka documentation

    jepsen.tests.kafka

    This workload is intended for systems which behave like the popular Kafka queue. This includes Kafka itself, as well as compatible systems like Redpanda.

    At the abstract level of this workload, these systems provide a set of totally-ordered append-only logs called partitions, each of which stores a single arbitrary (and, for our purposes, unique) message at a particular offset into the log. Partitions are grouped together into topics: each topic is therefore partially ordered.

    Each client has a producer and a consumer aspect; in Kafka these are separate clients, but for Jepsen’s purposes we combine them. A producer can send a message to a topic-partition, which assigns it a unique, theoretically monotonically-increasing offset and saves it durably at that offset. A consumer can subscribe to a topic, in which case the system automatically assigns it any number of partitions in that topic–this assignment can change at any time. Consumers can also assign themselves specific partitions manually. When a consumer polls, it receives messages and their offsets from whatever topic-partitions it is currently assigned to, and advances its internal state so that the next poll (barring a change in assignment) receives the immediately following messages.

    Operations

    @@ -64,22 +64,27 @@

    Analysis

  • Intermediate reads? I assume these happen constantly, but are they supposed to? It’s not totally clear what this MEANS, but I think it might look like a transaction T1 which writes v1 v2 v3 to k, and another T2 which polls k and observes any of v1, v2, or v3, but not all of them. This miiight be captured as a wr-rw cycle in some cases, but perhaps not all, since we’re only generating rw edges for final reads.

  • Precommitted reads. These occur when a transaction observes a value that it wrote. This is fine in most transaction systems, but illegal in Kafka, which assumes that consumers (running at read committed) never observe uncommitted records.

    -

    allowed-error-types

    (allowed-error-types test)

    Redpanda does a lot of things that are interesting to know about, but not necessarily bad or against-spec. For instance, g0 cycles are normal in the Kafka transactional model, and g1c is normal with wr-only edges at read-uncommitted but not with read-committed. This is a very ad-hoc attempt to encode that so that Jepsen’s valid/invalid results are somewhat meaningful.]

    +

    allowed-error-types

    (allowed-error-types test)

    Redpanda does a lot of things that are interesting to know about, but not necessarily bad or against-spec. For instance, g0 cycles are normal in the Kafka transactional model, and g1c is normal with wr-only edges at read-uncommitted but not with read-committed. This is a very ad-hoc attempt to encode that so that Jepsen’s valid/invalid results are somewhat meaningful.

    Takes a test, and returns a set of keyword error types (e.g. :poll-skip) which this test considers allowable.

    -

    analysis

    (analysis history)(analysis history opts)

    Builds up intermediate data structures used to understand a history. Options include:

    +

    analysis

    (analysis history)(analysis history opts)

    Builds up intermediate data structures used to understand a history. Options include:

    :directory - Used for generating output files :ww-deps - Whether to perform write-write inference on the basis of log offsets.

    -

    around-key-offset

    (around-key-offset k offset history)(around-key-offset k offset n history)

    Filters a history to just those operations around a given key and offset; trimming their mops to just those regions as well.

    -

    around-key-value

    (around-key-value k value history)(around-key-value k value n history)

    Filters a history to just those operations around a given key and value; trimming their mops to just those regions as well.

    -

    around-some

    (around-some pred n coll)

    Clips a sequence to just those elements near a predicate. Takes a predicate, a range n, and a sequence xs. Returns the series of all x in xs such that x is within n elements of some x’ matching the predicate.

    -

    assocv

    (assocv v i value)

    An assoc on vectors which allows you to assoc at arbitrary indexes, growing the vector as needed. When v is nil, constructs a fresh vector rather than a map.

    -

    checker

    (checker)

    condense-error

    (condense-error test [type errs])

    Takes a test and a pair of an error type (e.g. :lost-write) and a seq of errors. Returns a pair of [type, {:count n, :errors }], which tries to show the most interesting or severe errors without making the pretty-printer dump out two gigabytes of EDN.

    -

    consume-counts

    (consume-counts history)

    Kafka transactions are supposed to offer ‘exactly once’ processing: a transaction using the subscribe workflow should be able to consume an offset and send something to an output queue, and if this transaction is successful, it should happen at most once. It’s not exactly clear to me how these semantics are supposed to work–it’s clearly not once per consumer group, because we routinely see dups with only one consumer group. As a fallback, we look for single consumer per process, which should DEFINITELY hold, but… appears not to.

    +

    around-key-offset

    (around-key-offset k offset history)(around-key-offset k offset n history)

    Filters a history to just those operations around a given key and offset; trimming their mops to just those regions as well.

    +

    around-key-value

    (around-key-value k value history)(around-key-value k value n history)

    Filters a history to just those operations around a given key and value; trimming their mops to just those regions as well.

    +

    around-some

    (around-some pred n coll)

    Clips a sequence to just those elements near a predicate. Takes a predicate, a range n, and a sequence xs. Returns the series of all x in xs such that x is within n elements of some x’ matching the predicate.

    +

    assocv

    (assocv v i value)

    An assoc on vectors which allows you to assoc at arbitrary indexes, growing the vector as needed. When v is nil, constructs a fresh vector rather than a map.
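
    A sketch of the growth behavior, assuming missing slots are padded with nil:

    (assocv [:a :b] 4 :e)   ; => [:a :b nil nil :e]
    (assocv nil 1 :x)       ; => [nil :x]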

    +

    checker

    (checker)

    condense-error

    (condense-error test [type errs])

    Takes a test and a pair of an error type (e.g. :lost-write) and a seq of errors. Returns a pair of [type, {:count n, :errors }], which tries to show the most interesting or severe errors without making the pretty-printer dump out two gigabytes of EDN.

    +

    consume-counts

    (consume-counts {:keys [history op-reads]})

    Kafka transactions are supposed to offer ‘exactly once’ processing: a transaction using the subscribe workflow should be able to consume an offset and send something to an output queue, and if this transaction is successful, it should happen at most once. It’s not exactly clear to me how these semantics are supposed to work–it’s clearly not once per consumer group, because we routinely see dups with only one consumer group. As a fallback, we look for single consumer per process, which should DEFINITELY hold, but… appears not to.

    We verify this property by looking at all committed transactions which performed a poll while subscribed (not assigned!) and keeping track of the number of times each key and value is polled. Yields a map of keys to values to consumed counts, wherever that count is more than one.

    -

    crash-client-gen

    (crash-client-gen opts)

    A generator which, if the test has :crash-clients? true, periodically emits an operation to crash a random client.

    -

    cycles!

    (cycles! {:keys [history directory], :as analysis})

    Finds a map of cycle names to cyclic anomalies in a partial analysis.

    -

    duplicate-cases

    (duplicate-cases {:keys [version-orders]})

    Takes a partial analysis and identifies cases where a single value appears at more than one offset in a key.

    -

    final-polls

    (final-polls offsets)

    Takes an atom containing a map of keys to offsets. Constructs a generator which:

    +

    crash-client-gen

    (crash-client-gen opts)

    A generator which, if the test has :crash-clients? true, periodically emits an operation to crash a random client.

    +

    cycles!

    (cycles! {:keys [history directory], :as analysis})

    Finds a map of cycle names to cyclic anomalies in a partial analysis.

    +

    datafy-version-order-log

    (datafy-version-order-log m)

    Takes a Bifurcan integer map of Bifurcan sets and converts it to a vector of Clojure sets.

    +

    downsample-plot

    (downsample-plot points)

    Sometimes we wind up feeding absolutely huge plots to gnuplot, which chews up a lot of CPU time. We downsample these points, skipping points which are close in both x and y.

    +

    duplicate-cases

    (duplicate-cases {:keys [version-orders]})

    Takes a partial analysis and identifies cases where a single value appears at more than one offset in a key.

    +

    final-polls

    (final-polls offsets)

    Takes an atom containing a map of keys to offsets. Constructs a generator which:

    1. Checks the topic-partition state from the admin API

      @@ -95,56 +100,63 @@

      Analysis

    This process repeats every 10 seconds until polls have caught up to the offsets in the offsets atom.

    -

    g1a-cases

    (g1a-cases {:keys [history writes-by-type writer-of]})

    Takes a partial analysis and looks for aborted reads, where a known-failed write is nonetheless visible to a committed read. Returns a seq of error maps, or nil if none are found.

    -

    graph

    (graph analysis history)

    A combined Elle dependency graph between completion operations.

    -

    index-seq

    (index-seq xs)

    Takes a seq of distinct values, and returns a map of:

    +

    firstv

    (firstv v)

    First for vectors.

    +

    g1a-cases

    (g1a-cases {:keys [history writes-by-type writer-of op-reads]})

    Takes a partial analysis and looks for aborted reads, where a known-failed write is nonetheless visible to a committed read. Returns a seq of error maps, or nil if none are found.

    +

    graph

    (graph analysis history)

    A combined Elle dependency graph between completion operations.

    +

    index-seq

    (index-seq xs)

    Takes a seq of distinct values, and returns a map of:

    {:by-index A vector of the sequence :by-value A map of values to their indices in the vector.}
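
    For example, per the shape described above:

    (index-seq [:x :y :z])
    ; => {:by-index [:x :y :z]
    ;     :by-value {:x 0, :y 1, :z 2}}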

    -

    int-poll-skip+nonmonotonic-cases

    (int-poll-skip+nonmonotonic-cases {:keys [history version-orders]})

    Takes a partial analysis and looks for cases where a single transaction contains:

    +

    int-poll-skip+nonmonotonic-cases

    (int-poll-skip+nonmonotonic-cases {:keys [history version-orders op-reads]})

    Takes a partial analysis and looks for cases where a single transaction contains:

    {:skip A pair of poll values which read the same key and skip over some part of the log which we know should exist. :nonmonotonic A pair of poll values which contradict the log order, or repeat the same value.}

    When a transaction’s rebalance log includes a key which would otherwise be involved in one of these violations, we don’t report it as an error: we assume that rebalances invalidate any assumption of monotonically advancing offsets.

    -

    int-send-skip+nonmonotonic-cases

    (int-send-skip+nonmonotonic-cases {:keys [history version-orders]})

    Takes a partial analysis and looks for cases where a single transaction contains a pair of sends to the same key which:

    +

    int-poll-skip+nonmonotonic-cases-per-key

    (int-poll-skip+nonmonotonic-cases-per-key version-orders op rebalanced-keys errs [k vs])

    A reducer for int-poll-skip+nonmonotonic-cases. Takes version orders, an op, a rebalanced-keys set, a transient vector of error maps, and a [key values] pair from (op-reads). Adds an error if we can find one for this key.

    +

    int-send-skip+nonmonotonic-cases

    (int-send-skip+nonmonotonic-cases {:keys [history version-orders]})

    Takes a partial analysis and looks for cases where a single transaction contains a pair of sends to the same key which:

    {:skip Skips over some indexes of the log :nonmonotonic Go backwards (or stay in the same place) in the log}

    -

    interleave-subscribes

    (interleave-subscribes txn-gen)

    Takes a txn generator and keeps track of the keys flowing through it, interspersing occasional :subscribe or :assign operations for recently seen keys.

    -

    key-order-viz

    (key-order-viz k log history)

    Takes a key, a log for that key (a vector of offsets to sets of elements which were observed at that offset) and a history of ops relevant to that key. Constructs an XML structure visualizing all sends/polls of that log’s offsets.

    -

    log->last-index->values

    (log->last-index->values log)

    Takes a log: a vector of sets of read values for each offset in a partition, possibly including nils. Returns a vector which takes indices (dense offsets) to sets of values whose last appearance was at that position.

    -

    log->value->first-index

    (log->value->first-index log)

    Takes a log: a vector of sets of read values for each offset in a partition, possibly including nils. Returns a map which takes a value to the index where it first appeared.

    -

    lost-write-cases

    (lost-write-cases {:keys [history version-orders reads-by-type writer-of readers-of]})

    Takes a partial analysis and looks for cases of lost write: where a write that we should have observed is somehow not observed. Of course we cannot expect to observe everything: for example, if we send a message to Redpanda at the end of a test, and don’t poll for it, there’s no chance of us seeing it at all! Or a poller could fall behind.

    +

    interleave-subscribes

    (interleave-subscribes opts txn-gen)

    Takes CLI options (:sub-p) and a txn generator. Keeps track of the keys flowing through it, interspersing occasional :subscribe or :assign operations for recently seen keys.

    +

    key-order-viz

    (key-order-viz k log history)

    Takes a key, a log for that key (a vector of offsets to sets of elements which were observed at that offset) and a history of ops relevant to that key. Constructs an XML structure visualizing all sends/polls of that log’s offsets.

    +

    log->last-index->values

    (log->last-index->values log)

    Takes a log: a vector of sets of read values for each offset in a partition, possibly including nils. Returns a vector which takes indices (dense offsets) to sets of values whose last appearance was at that position.

    +

    log->value->first-index

    (log->value->first-index log)

    Takes a log: a vector of sets of read values for each offset in a partition, possibly including nils. Returns a map which takes a value to the index where it first appeared.
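
    A sketch with a tiny log, where each slot holds the set of values observed at that offset (the exact padding of empty slots is an assumption):

    (def log [#{:a} #{:a :b} nil #{:c}])

    (log->value->first-index log)   ; => {:a 0, :b 1, :c 3}
    (log->last-index->values log)   ; last appearances, roughly [#{} #{:a :b} #{} #{:c}]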

    +

    lost-write-cases

    (lost-write-cases {:keys [history version-orders reads-by-type writer-of readers-of]})

    Takes a partial analysis and looks for cases of lost write: where a write that we should have observed is somehow not observed. Of course we cannot expect to observe everything: for example, if we send a message to Redpanda at the end of a test, and don’t poll for it, there’s no chance of us seeing it at all! Or a poller could fall behind.

    What we do instead is identify the highest read value for each key v_max, and then take the set of all values prior to it in the version order: surely, if we read v_max = 3, and the version order is 1 2 3 4, we should also have read 1 and 2.

    It’s not quite this simple. If a message appears at multiple offsets, the version order will simply pick one for us, which leads to nondeterminism. If an offset has multiple messages, a successfully inserted message could appear nowhere in the version order.

    To deal with this, we examine the raw logs for each key, and build two index structures. The first maps values to their earliest (index) appearance in the log: we use this to determine the highest index that must have been read. The second is a vector which maps indexes to sets of values whose last appearance in the log was at that index. We use this vector to identify which values ought to have been read.

    Once we’ve derived the set of values we ought to have read for some key k, we run through each poll of k and cross off the values read. If there are any values left, they must be lost updates.

    -

    mop-index

    (mop-index op f k v)

    Takes an operation, a function f (:poll or :send), a key k, and a value v. Returns the index (0, 1, …) within that operation’s value which performed that poll or send, or nil if none could be found.

    -

    must-have-committed?

    (must-have-committed? reads-by-type op)

    Takes a reads-by-type map and a (presumably :info) transaction which sent something. Returns true iff the transaction was :ok, or if it was :info and we can prove that some send from this transaction was successfully read.

    -

    nonmonotonic-send-cases

    (nonmonotonic-send-cases {:keys [history version-orders]})

    Takes a partial analysis and checks each process’s operations sequentially, looking for cases where a single process’s sends to a given key go backwards relative to the version order.

    -

    nth+

    (nth+ v i)

    Nth for vectors, but returns nil instead of out-of-bounds.

    -

    op->max-offsets

    (op->max-offsets op)

    Takes an operation (presumably, an OK or info one) and returns a map of keys to the highest offsets interacted with, either via send or poll, in that op.

    -

    op->max-poll-offsets

    (op->max-poll-offsets {:keys [type f value]})

    Takes an operation and returns a map of keys to the highest offsets polled.

    -

    op->max-send-offsets

    (op->max-send-offsets {:keys [type f value]})

    Takes an operation and returns a map of keys to the highest offsets sent.

    -

    op->thread

    (op->thread test op)

    Returns the thread which executed a given operation.

    -

    op-around-key-offset

    (op-around-key-offset k offset op)(op-around-key-offset k offset n op)

    Takes an operation and returns that operation with its value trimmed so that any send/poll operations are constrained to just the given key, and values within n of the given offset. Returns nil if operation is not relevant.

    -

    op-around-key-value

    (op-around-key-value k value op)(op-around-key-value k value n op)

    Takes an operation and returns that operation with its value trimmed so that any send/poll operations are constrained to just the given key, and values within n of the given value. Returns nil if operation is not relevant.

    -

    op-pairs

    (op-pairs op)

    Returns a map of keys to the sequence of all offset value pairs either written or read for that key; writes first.

    -

    op-read-offsets

    (op-read-offsets op)

    Returns a map of keys to the sequence of all offsets read for that key.

    -

    op-read-pairs

    (op-read-pairs op)

    Returns a map of keys to the sequence of all offset value pairs read for that key.

    -

    op-reads

    (op-reads op)

    Returns a map of keys to the sequence of all values read for that key.

    -

    op-reads-helper

    (op-reads-helper op f)

    Takes an operation and a function which takes an offset-value pair. Returns a map of keys read by this operation to the sequence of (f offset value) read for that key.

    -

    op-write-offsets

    (op-write-offsets op)

    Returns a map of keys to the sequence of all offsets written to that key in an op.

    -

    op-write-pairs

    (op-write-pairs op)

    Returns a map of keys to the sequence of all offset value pairs written to that key in an op.

    -

    op-writes

    (op-writes op)

    Returns a map of keys to the sequence of all values written to that key in an op.

    -

    op-writes-helper

    (op-writes-helper op f)

    Takes an operation and a function which takes an offset-value pair. Returns a map of keys written by this operation to the sequence of (f offset value) sends for that key. Note that offset may be nil.

    -

    plot-realtime-lag!

    (plot-realtime-lag! test lags {:keys [nemeses subdirectory filename group-fn group-name]})

    Takes a test, a collection of realtime lag measurements, and options (e.g. those to checker/check). Plots a graph file (realtime-lag.png) in the store directory

    -

    plot-realtime-lags!

    (plot-realtime-lags! test lags opts)

    Constructs realtime lag plots for all processes together, and then another broken out by process, and also by key.

    -

    plot-unseen!

    (plot-unseen! test unseen {:keys [subdirectory]})

    Takes a test, a collection of unseen measurements, and options (e.g. those to checker/check). Plots a graph file (unseen.png) in the store directory.

    -

    poll-skip+nonmonotonic-cases

    (poll-skip+nonmonotonic-cases {:keys [history version-orders]})

    Takes a partial analysis and checks each process’s operations sequentially, looking for cases where a single process either jumped backwards or skipped over some region of a topic-partition. Returns a map:

    +

    mop-index

    (mop-index op f k v)

    Takes an operation, a function f (:poll or :send), a key k, and a value v. Returns the index (0, 1, …) within that operation’s value which performed that poll or send, or nil if none could be found.

    +

    must-have-committed?

    (must-have-committed? reads-by-type op)

    Takes a reads-by-type map and a (presumably :info) transaction which sent something. Returns true iff the transaction was :ok, or if it was :info and we can prove that some send from this transaction was read.

    +

    nonmonotonic-send-cases

    (nonmonotonic-send-cases {:keys [history by-process version-orders]})

    Takes a partial analysis and checks each process’s operations sequentially, looking for cases where a single process’s sends to a given key go backwards relative to the version order.

    +

    nth+

    (nth+ v i)

    Nth for vectors, but returns nil instead of out-of-bounds.
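
    For example:

    (nth+ [:a :b :c] 1)   ; => :b
    (nth+ [:a :b :c] 9)   ; => nil rather than an IndexOutOfBoundsException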

    +

    op->max-offsets

    (op->max-offsets op)

    Takes an operation (presumably, an OK or info one) and returns a map of keys to the highest offsets interacted with, either via send or poll, in that op.

    +

    op->max-poll-offsets

    (op->max-poll-offsets {:keys [type f value]})

    Takes an operation and returns a map of keys to the highest offsets polled.

    +

    op->max-send-offsets

    (op->max-send-offsets {:keys [type f value]})

    Takes an operation and returns a map of keys to the highest offsets sent.

    +

    op->thread

    (op->thread test op)

    Returns the thread which executed a given operation.

    +

    op-around-key-offset

    (op-around-key-offset k offset op)(op-around-key-offset k offset n op)

    Takes an operation and returns that operation with its value trimmed so that any send/poll operations are constrained to just the given key, and values within n of the given offset. Returns nil if operation is not relevant.

    +

    op-around-key-value

    (op-around-key-value k value op)(op-around-key-value k value n op)

    Takes an operation and returns that operation with its value trimmed so that any send/poll operations are constrained to just the given key, and values within n of the given value. Returns nil if operation is not relevant.

    +

    op-pairs

    (op-pairs op)

    Returns a map of keys to the sequence of all offset value pairs either written or read for that key; writes first.

    +

    op-read-offsets

    (op-read-offsets op)

    Returns a map of keys to the sequence of all offsets read for that key.

    +

    op-read-pairs

    (op-read-pairs op)

    Returns a map of keys to the sequence of all offset value pairs read for that key.

    +

    op-reads

    (op-reads op)

    Returns a map of keys to the sequence of all values read for that key.
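
    A rough sketch of how these helpers slice an op; the micro-op shapes here are assumptions based on the helpers documented in this namespace (sends carry offset value pairs once acknowledged):

    (def op {:type  :ok, :f :txn,
             :value [[:send 5 [0 :a]]
                     [:poll {5 [[0 :a] [1 :b]]}]]})

    (op-reads  op)   ; => {5 [:a :b]}  values polled, by key
    (op-writes op)   ; => {5 [:a]}     values sent, by key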

    +

    op-reads-helper

    (op-reads-helper op f)

    Takes an operation and a function which takes an offset-value pair. Returns a map of keys read by this operation to the sequence of (f offset value) read for that key.

    +

    op-reads-index

    (op-reads-index history)

    We call op-reads a LOT. This takes a history and builds an efficient index, then returns a function which works just like (op-reads op), but is memoized.

    +

    op-write-offsets

    (op-write-offsets op)

    Returns a map of keys to the sequence of all offsets written to that key in an op.

    +

    op-write-pairs

    (op-write-pairs op)

    Returns a map of keys to the sequence of all offset value pairs written to that key in an op.

    +

    op-writes

    (op-writes op)

    Returns a map of keys to the sequence of all values written to that key in an op.

    +

    op-writes-helper

    (op-writes-helper op f)

    Takes an operation and a function which takes an offset-value pair. Returns a map of keys written by this operation to the sequence of (f offset value) sends for that key. Note that offset may be nil.

    +

    plot-bounds

    (plot-bounds points)

    Quickly determine {:min-x, :max-x, :min-y, :max-y} from a series of x y points. Nil if there are no points.

    +

    plot-realtime-lag!

    (plot-realtime-lag! test lags {:keys [nemeses subdirectory filename group-fn group-name]})

    Takes a test, a collection of realtime lag measurements, and options (e.g. those to checker/check). Plots a graph file (realtime-lag.png) in the store directory

    +

    plot-realtime-lags!

    (plot-realtime-lags! {:keys [history], :as test} lags opts)

    Constructs realtime lag plots for all processes together, and then another broken out by process, and also by key.

    +

    plot-unseen!

    (plot-unseen! test unseen {:keys [subdirectory]})

    Takes a test, a collection of unseen measurements, and options (e.g. those to checker/check). Plots a graph file (unseen.png) in the store directory.

    +

    poll-skip+nonmonotonic-cases

    (poll-skip+nonmonotonic-cases {:keys [history by-process version-orders op-reads]})

    Takes a partial analysis and checks each process’s operations sequentially, looking for cases where a single process either jumped backwards or skipped over some region of a topic-partition. Returns a task of a map:

    {:nonmonotonic Cases where a process started polling at or before a previous operation last left off :skip Cases where two successive operations by a single process skipped over one or more values for some key.}

    -

    poll-unseen

    (poll-unseen gen)

    Wraps a generator. Keeps track of every offset that is successfully sent, and every offset that’s successfully polled. When there’s a key that has some offsets which were sent but not polled, we consider that unseen. This generator occasionally rewrites assign/subscribe operations to try and catch up to unseen keys.

    -

    previous-value

    (previous-value version-order v2)

    Takes a version order for a key and a value. Returns the previous value in the version order, or nil if either we don’t know v2’s index or v2 was the first value in the version order.

    -

    readers-of

    (readers-of history)

    Takes a history and builds a map of keys to values to vectors of completion operations which observed that value.

    -

    reads-by-type

    (reads-by-type history)

    Takes a history and constructs a map of types (:ok, :info, :fail) to maps of keys to the set of all values which were read for that key. We use this to identify, for instance, the known-successful reads for some key as a part of finding lost updates.

    -

    reads-of-key

    (reads-of-key k history)(reads-of-key k v history)

    Returns a seq of all operations which read the given key, and, optionally, read the given value.

    -

    reads-of-key-offset

    (reads-of-key-offset k offset history)

    Returns a seq of all operations which read the given key and offset.

    -

    reads-of-key-value

    (reads-of-key-value k value history)

    Returns a seq of all operations which read the given key and value.

    -

    realtime-lag

    (realtime-lag history)

    Takes a history and yields a series of maps of the form

    +

    poll-skip+nonmonotonic-cases-per-process

    (poll-skip+nonmonotonic-cases-per-process version-orders op-reads ops)

    Per-process helper for poll-skip+nonmonotonic cases.

    +

    poll-unseen

    (poll-unseen gen)

    Wraps a generator. Keeps track of every offset that is successfully sent, and every offset that’s successfully polled. When there’s a key that has some offsets which were sent but not polled, we consider that unseen. This generator occasionally rewrites assign/subscribe operations to try and catch up to unseen keys.

    +

    precommitted-read-cases

    (precommitted-read-cases {:keys [history op-reads]})

    Takes a partial analysis with a history and looks for a transaction which observed its own writes. Returns a vector of error maps, or nil if none are found.

    +

    This is legal in most DBs, but in Kafka’s model, sent values are supposed to be invisible to all pollers until their producing txn commits.

    +

    previous-value

    (previous-value version-order v2)

    Takes a version order for a key and a value. Returns the previous value in the version order, or nil if either we don’t know v2’s index or v2 was the first value in the version order.

    +

    readers-of

    (readers-of history op-reads)

Takes a history and an op-reads fn, and builds a map of keys to values to vectors of completion operations which observed that value.

    +

    reads-by-type

    (reads-by-type history op-reads)

    Takes a history and an op-reads fn, and constructs a map of types (:ok, :info, :fail) to maps of keys to the set of all values which were read for that key. We use this to identify, for instance, the known-successful reads for some key as a part of finding lost updates.

    +

    reads-of-key

    (reads-of-key k history)(reads-of-key k v history)

    Returns a seq of all operations which read the given key, and, optionally, read the given value.

    +

    reads-of-key-offset

    (reads-of-key-offset k offset history)

    Returns a seq of all operations which read the given key and offset.

    +

    reads-of-key-value

    (reads-of-key-value k value history)

    Returns a seq of all operations which read the given key and value.

    +

    realtime-lag

    (realtime-lag history)

    Takes a history and yields a series of maps of the form

{:process  The process performing a poll
 :key      The key being polled
 :time     The time the read began, in nanos
 :lag      The realtime lag of this key, in nanos}

    The lag of a key k in a poll is the conservative estimate of how long it has been since the highest value in that poll was the final message in log k.

    For instance, given:

    @@ -153,35 +165,37 @@

    Analysis

{:time 3, :type :ok,     :value [:send :x 1 :b]}
{:time 4, :type :invoke, :value :poll}
{:time 5, :type :ok,     :value [:poll {:x []}]}

    The lag of this read is 4 - 3 = 1. By time 3, offset 1 must have existed for key x. However, the most recent offset we observed was 0, which could only have been the most recent offset up until the write of offset 1 at time 3. Since our read could have occurred as early as time 4, the lag is at least 1.

Might want to make this into actual lower/upper ranges, rather than just the lower bound on lag, but conservative feels OK for starters.

    -

    render-order-viz!

    (render-order-viz! test {:keys [version-orders errors history], :as analysis})

    Takes a test, an analysis, and for each key with certain errors renders an HTML timeline of how each operation perceived that key’s log.

    -

    stats-checker

    (stats-checker)(stats-checker c)

    Wraps a (jepsen.checker/stats) with a new checker that returns the same results, except it won’t return :valid? false if :crash or :debug-topic-partitions ops always crash. You might want to wrap your existing stats checker with this.

    -

    strip-types

    (strip-types ms)

    Takes a collection of maps, and removes their :type fields. Returns nil if none remain.

    -

    subscribe-ratio

    How many subscribe ops should we issue per txn op?

    -

    tag-rw

    (tag-rw gen)

Takes a generator and tags operations as :f :poll or :send if they’re composed entirely of sends/polls.

    -

    track-key-offsets

    (track-key-offsets keys-atom gen)

    Wraps a generator. Keeps track of every key that generator touches in the given atom, which is a map of keys to highest offsets seen.

    -

    txn-generator

    (txn-generator la-gen)

    Takes a list-append generator and rewrites its transactions to be :poll or :send k v micro-ops. Also adds a :keys field onto each operation, with a set of keys that txn would have interacted with; we use this to generate :subscribe ops later.

    -

    unseen

    (unseen history)

    Takes a history and yields a series of maps like

    +

    render-order-viz!

    (render-order-viz! test {:keys [version-orders errors history], :as analysis})

    Takes a test, an analysis, and for each key with certain errors renders an HTML timeline of how each operation perceived that key’s log.

    +

    secondv

    (secondv v)

    Second for vectors.

    +

    stats-checker

    (stats-checker)(stats-checker c)

    Wraps a (jepsen.checker/stats) with a new checker that returns the same results, except it won’t return :valid? false if :crash or :debug-topic-partitions ops always crash. You might want to wrap your existing stats checker with this.

    +

    strip-types

    (strip-types ms)

    Takes a collection of maps, and removes their :type fields. Returns nil if none remain.

    +

    tag-rw

    (tag-rw gen)

Takes a generator and tags operations as :f :poll or :send if they’re composed entirely of sends/polls.

    +

    track-key-offsets

    (track-key-offsets keys-atom gen)

    Wraps a generator. Keeps track of every key that generator touches in the given atom, which is a map of keys to highest offsets seen.

    +

    txn-generator

    (txn-generator la-gen)

    Takes a list-append generator and rewrites its transactions to be :poll or :send k v micro-ops. Also adds a :keys field onto each operation, with a set of keys that txn would have interacted with; we use this to generate :subscribe ops later.

    +

    unseen

    (unseen {:keys [history op-reads]})

    Takes a partial analysis and yields a series of maps like

{:time    The time in nanoseconds
 :unseen  A map of keys to the number of messages in that key which have been successfully acknowledged, but not polled by any client.}

    The final map in the series includes a :messages key: a map of keys to sets of messages that were unseen.

    -

    version-orders

    (version-orders history reads-by-type)(version-orders history reads-by-type logs)

    Takes a history and a reads-by-type structure. Constructs a map of:

    +

    version-orders

    (version-orders history reads-by-type)

    Takes a history and a reads-by-type structure. Constructs a map of:

{:orders  A map of keys to orders for that key. Each order is a map of:
          {:by-index  A vector which maps indices to single values, in log order.
           :by-value  A map of values to indices in the log.
           :log       A vector which maps offsets to sets of values in log order.}

 :errors  A series of error maps describing any incompatible orders, where a single offset for a key maps to multiple values.}

    Offsets are directly from Kafka. Indices are dense offsets, removing gaps in the log.

    -

    version-orders-reduce-mop

    (version-orders-reduce-mop logs mop)

    Takes a logs object from version-orders and a micro-op, and integrates that micro-op’s information about offsets into the logs.

    -

    version-orders-update-log

    (version-orders-update-log log offset value)

    Updates a version orders log with the given offset and value.

    -

    workload

    (workload opts)

    Constructs a workload (a map with a generator, client, checker, etc) given an options map. Options are:

    +

    Note that we infer version orders from sends only when we can prove their effects were visible, but from all polls, including :info and :fail ones. Why? Because unlike a traditional transaction, where you shouldn’t trust reads in aborted txns, pollers in Kafka’s transaction design are always supposed to emit safe data regardless of whether the transaction commits or not.

    +

    version-orders-reduce-mop

    (version-orders-reduce-mop logs mop)

    Takes a logs object from version-orders and a micro-op, and integrates that micro-op’s information about offsets into the logs.

    +

    version-orders-update-log

    (version-orders-update-log log offset value)

    Updates a version orders log with the given offset and value.

    +

    workload

    (workload opts)

    Constructs a workload (a map with a generator, client, checker, etc) given an options map. Options are:

    :crash-clients? If set, periodically emits a :crash operation which the client responds to with :info; this forces the client to be torn down and replaced by a fresh client.

    :crash-client-interval How often, in seconds, to crash clients. Default is 30 seconds.

    :sub-via A set of subscription methods: either #{:assign} or #{:subscribe}.

    :txn? If set, generates transactions with multiple send/poll micro-operations.

    +

    :sub-p The probability that the generator emits an assign/subscribe op.

    These options must also be present in the test map, because they are used by the checker, client, etc at various points. For your convenience, they are included in the workload map returned from this function; merging that map into your test should do the trick.

    … plus those taken by jepsen.tests.cycle.append/test, e.g. :key-count, :min-txn-length, …
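For instance, a call might look like this (a sketch; the option values are illustrative, not defaults):

    (workload {:crash-clients?        true
               :crash-client-interval 30
               :sub-via               #{:subscribe}
               :txn?                  true
               :sub-p                 1/3
               :key-count             10
               :min-txn-length        1})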

    -

    worst-realtime-lag

    (worst-realtime-lag lags)

    Takes a seq of realtime lag measurements, and finds the point with the highest lag.

    -

    wr-graph

    (wr-graph {:keys [writer-of readers-of]} history)

    Analyzes a history to extract write-read dependencies. T1 < T2 iff T1 writes some v to k and T2 reads k.

    -

    writer-of

    (writer-of history)

    Takes a history and builds a map of keys to values to the completion operation which attempted to write that value.

    -

    writes-by-type

    (writes-by-type history)

    Takes a history and constructs a map of types (:ok, :info, :fail) to maps of keys to the set of all values which were written for that key. We use this to identify, for instance, what all the known-failed writes were for a given key.

    -

    writes-of-key

    (writes-of-key k history)(writes-of-key k v history)

    Returns a seq of all operations which wrote the given key, and, optionally, sent the given value.

    -

    writes-of-key-offset

    (writes-of-key-offset k offset history)

    Returns a seq of all operations which wrote the given key and offset.

    -

    writes-of-key-value

    (writes-of-key-value k value history)

    Returns a seq of all operations which wrote the given key and value.

    -

    ww-graph

    (ww-graph {:keys [writer-of version-orders ww-deps]} history)

    Analyzes a history to extract write-write dependencies. T1 < T2 iff T1 sends some v1 to k and T2 sends some v2 to k and v1 < v2 in the version order.

    -
    \ No newline at end of file +

    worst-realtime-lag

    (worst-realtime-lag lags)

    Takes a seq of realtime lag measurements, and finds the point with the highest lag.

    +

    wr-graph

    (wr-graph {:keys [writer-of readers-of op-reads]} history)

    Analyzes a history to extract write-read dependencies. T1 < T2 iff T1 writes some v to k and T2 reads k.

    +

    writer-of

    (writer-of history)

    Takes a history and builds a map of keys to values to the completion operation which attempted to write that value.

    +

    writes-by-type

    (writes-by-type history)

    Takes a history and constructs a map of types (:ok, :info, :fail) to maps of keys to the set of all values which were written for that key. We use this to identify, for instance, what all the known-failed writes were for a given key.

    +

    writes-of-key

    (writes-of-key k history)(writes-of-key k v history)

    Returns a seq of all operations which wrote the given key, and, optionally, sent the given value.

    +

    writes-of-key-offset

    (writes-of-key-offset k offset history)

    Returns a seq of all operations which wrote the given key and offset.

    +

    writes-of-key-value

    (writes-of-key-value k value history)

    Returns a seq of all operations which wrote the given key and value.

    +

    ww-graph

    (ww-graph {:keys [writer-of version-orders ww-deps]} history)

    Analyzes a history to extract write-write dependencies. T1 < T2 iff T1 sends some v1 to k and T2 sends some v2 to k and v1 < v2 in the version order.

    +
    \ No newline at end of file diff --git a/jepsen.tests.linearizable-register.html b/jepsen.tests.linearizable-register.html index 8025b4d78..66c76e5f7 100644 --- a/jepsen.tests.linearizable-register.html +++ b/jepsen.tests.linearizable-register.html @@ -1,10 +1,10 @@ -jepsen.tests.linearizable-register documentation

    jepsen.tests.linearizable-register

    Common generators and checkers for linearizability over a set of independent registers. Clients should understand three functions, for writing a value, reading a value, and compare-and-setting a value from v to v’. Reads receive nil, and replace it with the value actually read.

    +jepsen.tests.linearizable-register documentation

    jepsen.tests.linearizable-register

    Common generators and checkers for linearizability over a set of independent registers. Clients should understand three functions, for writing a value, reading a value, and compare-and-setting a value from v to v’. Reads receive nil, and replace it with the value actually read.

    {:type :invoke, :f :write, :value [k v]}
     {:type :invoke, :f :read,  :value [k nil]}
     {:type :invoke, :f :cas,   :value [k [v v']]}
     
    -

    cas

    (cas _ _)

    r

    (r _ _)

    test

    (test opts)

    A partial test, including a generator, model, and checker. You’ll need to provide a client. Options:

    +

    cas

    (cas _ _)

    r

    (r _ _)

    test

    (test opts)

    A partial test, including a generator, model, and checker. You’ll need to provide a client. Options:

:nodes           A set of nodes you’re going to operate on. We only care about the count, so we can figure out how many workers to use per key.
:model           A model for checking. Default is (model/cas-register).
:per-key-limit   Maximum number of ops per key.
:process-limit   Maximum number of processes that can interact with a given key. Default 20.
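A minimal sketch, assuming this namespace is required with the alias lr; you merge the returned partial test into your own test map and supply a :client (my-client here is hypothetical):

    (merge (lr/test {:nodes         ["n1" "n2" "n3" "n4" "n5"]
                     :per-key-limit 100})
           {:client my-client})   ; my-client is your client implementation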

    -

    w

    (w _ _)
    \ No newline at end of file +

    w

    (w _ _)
    \ No newline at end of file diff --git a/jepsen.tests.long-fork.html b/jepsen.tests.long-fork.html index 177996af4..8410a886e 100644 --- a/jepsen.tests.long-fork.html +++ b/jepsen.tests.long-fork.html @@ -1,6 +1,6 @@ -jepsen.tests.long-fork documentation

    jepsen.tests.long-fork

    Tests for an anomaly in parallel snapshot isolation (but which is prohibited in normal snapshot isolation). In long-fork, concurrent write transactions are observed in conflicting order. For example:

    +jepsen.tests.long-fork documentation

    jepsen.tests.long-fork

    Tests for an anomaly in parallel snapshot isolation (but which is prohibited in normal snapshot isolation). In long-fork, concurrent write transactions are observed in conflicting order. For example:

T1: (write x 1)
T2: (write y 1)
T3: (read x nil) (read y 1)
T4: (read x 1) (read y nil)

    T3 implies T2 < T1, but T4 implies T1 < T2. We aim to observe these conflicts.

    To generalize to multiple updates…

    @@ -36,22 +36,22 @@

    We can verify this property in roughly linear time, which is nice. It doesn’t, however, prevent closed loops with no forking structure.

    To do loops, I think we have to actually do the graph traversal. Let’s punt on that for now.

    checker

    (checker n)

    Takes a group size n, and a history of :txn transactions. Verifies that no key is written multiple times. Searches for read transactions where one read observes x but not y, and another observes y but not x.

    -

    distinct-pairs

    (distinct-pairs coll)

    Given a collection, returns a sequence of all unique 2-element sets taken from that collection.

    -

    early-reads

    (early-reads reads)

Given a set of read txns, finds those that are too early to tell us anything, e.g. all nil.

    -

    ensure-no-long-forks

    (ensure-no-long-forks n reads)

    Returns a checker error if any long forks exist.

    -

    ensure-no-multiple-writes-to-one-key

    (ensure-no-multiple-writes-to-one-key history)

    Returns a checker error if we have multiple writes to one key, or nil if things are OK.

    -

    find-forks

    (find-forks ops)

    Given a set of read ops, compares every one to ensure a total order exists. If mutually incomparable reads exist, returns the pair.

    -

    generator

    (generator n)

    Generates single inserts followed by group reads, mixed with reads of other concurrent groups, just for grins. Takes a group size n.

    -

    group-for

    (group-for n k)

    Takes a key and returns the collection of keys for its group. Lower inclusive, upper exclusive.

    -

    groups

    (groups n read-ops)

    Given a group size n, and a set of read ops, partitions those read operations by group. Throws if any group has the wrong size.

    -

    late-reads

    (late-reads reads)

    Given a set of read txns, finds those that are too late to tell us anything; e.g. all 1.

    -

    op-read-keys

    (op-read-keys op)

    Given a read op, returns the set of keys read.

    -

    read-compare

    (read-compare a b)

    Given two maps of keys to values, a and b, returns -1 if a dominates, 0 if the two are equal, 1 if b dominates, or nil if a and b are incomparable.

    -

    read-op->value-map

    (read-op->value-map op)

    Takes a read operation, and converts it to a map of keys to values.

    -

    read-txn-for

    (read-txn-for n k)

    Takes a group size and a key and generates a transaction reading that key’s group in shuffled order.

    -

    read-txn?

    (read-txn? txn)

    Is this transaction a pure read txn?

    -

    reads

    (reads history)

    All ok read ops

    -

    workload

    (workload)(workload n)

    A package of a checker and generator to look for long forks. n is the group size: how many keys to check simultaneously.

    -

    write-txn?

    (write-txn? txn)

    Is this a pure write transaction?

    -
    \ No newline at end of file +

    distinct-pairs

    (distinct-pairs coll)

    Given a collection, returns a sequence of all unique 2-element sets taken from that collection.

    +

    early-reads

    (early-reads reads)

    Given a set of read txns finds those that are too early to tell us anything; e.g. all nil

    +

    ensure-no-long-forks

    (ensure-no-long-forks n reads)

    Returns a checker error if any long forks exist.

    +

    ensure-no-multiple-writes-to-one-key

    (ensure-no-multiple-writes-to-one-key history)

    Returns a checker error if we have multiple writes to one key, or nil if things are OK.

    +

    find-forks

    (find-forks ops)

    Given a set of read ops, compares every one to ensure a total order exists. If mutually incomparable reads exist, returns the pair.

    +

    generator

    (generator n)

    Generates single inserts followed by group reads, mixed with reads of other concurrent groups, just for grins. Takes a group size n.

    +

    group-for

    (group-for n k)

    Takes a key and returns the collection of keys for its group. Lower inclusive, upper exclusive.

    +

    groups

    (groups n read-ops)

    Given a group size n, and a set of read ops, partitions those read operations by group. Throws if any group has the wrong size.

    +

    late-reads

    (late-reads reads)

    Given a set of read txns, finds those that are too late to tell us anything; e.g. all 1.

    +

    op-read-keys

    (op-read-keys op)

    Given a read op, returns the set of keys read.

    +

    read-compare

    (read-compare a b)

    Given two maps of keys to values, a and b, returns -1 if a dominates, 0 if the two are equal, 1 if b dominates, or nil if a and b are incomparable.

    +

    read-op->value-map

    (read-op->value-map op)

    Takes a read operation, and converts it to a map of keys to values.

    +

    read-txn-for

    (read-txn-for n k)

    Takes a group size and a key and generates a transaction reading that key’s group in shuffled order.

    +

    read-txn?

    (read-txn? txn)

    Is this transaction a pure read txn?

    +

    reads

    (reads history)

    All ok read ops

    +

    workload

    (workload)(workload n)

    A package of a checker and generator to look for long forks. n is the group size: how many keys to check simultaneously.

    +

    write-txn?

    (write-txn? txn)

    Is this a pure write transaction?

    +
    \ No newline at end of file diff --git a/jepsen.util.html b/jepsen.util.html index 32b114be3..ab716d190 100644 --- a/jepsen.util.html +++ b/jepsen.util.html @@ -1,19 +1,19 @@ -jepsen.util documentation

    jepsen.util

    Kitchen sink

    +jepsen.util documentation

    jepsen.util

    Kitchen sink

    *relative-time-origin*

    dynamic

    A reference point for measuring time in a test run.

    -

    all-jdk-loggers

    (all-jdk-loggers)

    arities

    (arities c)

    The arities of a function class.

    -

    await-fn

    (await-fn f)(await-fn f opts)

    Invokes a function (f) repeatedly. Blocks until (f) returns, rather than throwing. Returns that return value. Catches Exceptions (except for InterruptedException) and retries them automatically. Options:

    +

    all-jdk-loggers

    (all-jdk-loggers)

    arities

    (arities c)

    The arities of a function class.

    +

    await-fn

    (await-fn f)(await-fn f opts)

    Invokes a function (f) repeatedly. Blocks until (f) returns, rather than throwing. Returns that return value. Catches Exceptions (except for InterruptedException) and retries them automatically. Options:

:retry-interval  How long between retries, in ms. Default 1s.
:log-interval    How long between logging that we’re still waiting, in ms. Default: retry-interval.
:log-message     What should we log to the console while waiting?
:timeout         How long until giving up and throwing :type :timeout, in ms. Default 60 seconds.
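A sketch of typical use (connect-to-node and node are hypothetical):

    (await-fn #(connect-to-node node)
              {:retry-interval 1000
               :log-message    "Waiting for node to accept connections..."
               :timeout        60000})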

    -

    buf-size

    chunk-vec

    (chunk-vec n v)

    Partitions a vector into reducibles of size n (somewhat like partition-all) but uses subvec for speed.

    +

    buf-size

    chunk-vec

    (chunk-vec n v)

    Partitions a vector into reducibles of size n (somewhat like partition-all) but uses subvec for speed.

    (chunk-vec 2 [1])     ; => ([1])
     (chunk-vec 2 [1 2 3]) ; => ([1 2] [3])
     
    -

    coll

    (coll thing-or-things)

    Wraps non-collection things into singleton lists, and leaves colls as themselves. Useful when you can take either a single thing or a sequence of things.

    -

    compare<

    (compare< a b)

    Like <, but works on any comparable objects, not just numbers.

    -

    concat-files!

    (concat-files! out fs)

    Appends contents of all fs, writing to out. Returns fs.

    -

    contains-many?

    (contains-many? m & ks)

    Takes a map and any number of keys, returning true if all of the keys are present. Ex. (contains-many? {:a 1 :b 2 :c 3} :a :b :c) => true

    -

    deepfind

    (deepfind pred haystack)(deepfind pred path haystack)

    Finds things that match a predicate in a nested structure. Returns a lazy sequence of matching things, each represented by a vector path which denotes how to access that object, ending in the matching thing itself. Path elements are:

    +

    coll

    (coll thing-or-things)

    Wraps non-collection things into singleton lists, and leaves colls as themselves. Useful when you can take either a single thing or a sequence of things.
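For example (results follow from the description above):

    (coll :foo)    ; => (:foo)
    (coll [:a :b]) ; => [:a :b]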

    +

    compare<

    (compare< a b)

    Like <, but works on any comparable objects, not just numbers.

    +

    concat-files!

    (concat-files! out fs)

    Appends contents of all fs, writing to out. Returns fs.

    +

    contains-many?

    (contains-many? m & ks)

    Takes a map and any number of keys, returning true if all of the keys are present. Ex. (contains-many? {:a 1 :b 2 :c 3} :a :b :c) => true

    +

    deepfind

    (deepfind pred haystack)(deepfind pred path haystack)

    Finds things that match a predicate in a nested structure. Returns a lazy sequence of matching things, each represented by a vector path which denotes how to access that object, ending in the matching thing itself. Path elements are:

    • keys for maps
    • integers for sequentials
@@ -21,23 +21,24 @@
    • :deref for deref-ables.

(deepfind string? [:a {:b "foo"} :c]) ; => ([1 :b "foo"])

    -

    default

    (default m k v)

    Like assoc, but only fills in values which are NOT present in the map.

    -

    drop-common-proper-prefix

    (drop-common-proper-prefix cs)

    Given a collection of sequences, removes the longest common proper prefix from each one.

    -

    ex-root-cause

    (ex-root-cause t)

    Unwraps throwables to return their original cause.

    -

    exception?

    (exception? x)

    Is x an Exception?

    -

    fast-last

    (fast-last coll)

    Like last, but O(1) on counted collections.

    -

    fcatch

    (fcatch f)

    Takes a function and returns a version of it which returns, rather than throws, exceptions.

    -

    fixed-point

    (fixed-point f x)

    Applies f repeatedly to x until it converges.

    -

    forget!

    (forget! this)

    Allows this forgettable reference to be reclaimed by the GC at some later time. Future attempts to dereference it may throw. Returns self.

    -

    forgettable

    (forgettable x)

    Constructs a deref-able reference to x which can be explicitly forgotten. Helpful for controlling access to infinite seqs (e.g. the generator) when you don’t have firm control over everyone who might see them.

    -

    fraction

    (fraction a b)

    a/b, but if b is zero, returns unity.

    -

    get-named-lock!

    (get-named-lock! locks name)

    Given a pool of locks, and a lock name, returns the object used for locking in that pool. Creates the lock if it does not already exist.

    -

    history->latencies

    (history->latencies history)

    Takes a history–a sequence of operations–and returns a new history where operations have two new keys:

    +

    default

    (default m k v)

    Like assoc, but only fills in values which are NOT present in the map.
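For example (results follow directly from the description):

    (default {:a 1} :a 2) ; => {:a 1}        ; :a already present; unchanged
    (default {:a 1} :b 2) ; => {:a 1, :b 2}  ; :b missing; filled in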

    +

    drop-common-proper-prefix

    (drop-common-proper-prefix cs)

    Given a collection of sequences, removes the longest common proper prefix from each one.

    +

    ex-root-cause

    (ex-root-cause t)

    Unwraps throwables to return their original cause.

    +

    exception?

    (exception? x)

    Is x an Exception?

    +

    extreme-by*

    (extreme-by* f coll retain?)

    Helper for min-by and max-by

    +

    fast-last

    (fast-last coll)

    Like last, but O(1) on counted collections.

    +

    fcatch

    (fcatch f)

    Takes a function and returns a version of it which returns, rather than throws, exceptions.

    +

    fixed-point

    (fixed-point f x)

    Applies f repeatedly to x until it converges.

    +

    forget!

    (forget! this)

    Allows this forgettable reference to be reclaimed by the GC at some later time. Future attempts to dereference it may throw. Returns self.

    +

    forgettable

    (forgettable x)

    Constructs a deref-able reference to x which can be explicitly forgotten. Helpful for controlling access to infinite seqs (e.g. the generator) when you don’t have firm control over everyone who might see them.

    +

    fraction

    (fraction a b)

    a/b, but if b is zero, returns unity.
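A quick sketch of the expected behaviour, following the description:

    (fraction 3 0) ; => 1
    (fraction 3 4) ; => 3/4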

    +

    get-named-lock!

    (get-named-lock! locks name)

    Given a pool of locks, and a lock name, returns the object used for locking in that pool. Creates the lock if it does not already exist.

    +

    history->latencies

    (history->latencies history)

    Takes a history–a sequence of operations–and returns a new history where operations have two new keys:

:latency     the time in nanoseconds it took for the operation to complete
:completion  the next event for that process

    -

    inc*

    (inc* x)

    Like inc, but (inc nil) => 1.

    -

    integer-interval-set-str

    (integer-interval-set-str set)

    Takes a set of integers and yields a sorted, compact string representation.

    -

    lazy-atom

    (lazy-atom f)

    An atom with lazy state initialization. Calls (f) on first use to provide the initial value of the atom. Only supports swap/reset/deref. Reset bypasses lazy initialization. If f throws, behavior is undefined (read: proper fucked).

    -

    letr

    macro

    (letr bindings & body)

    Let bindings, plus early return.

    +

    inc*

    (inc* x)

    Like inc, but (inc nil) => 1.

    +

    integer-interval-set-str

    (integer-interval-set-str set)

    Takes a set of integers and yields a sorted, compact string representation.

    +

    lazy-atom

    (lazy-atom f)

    An atom with lazy state initialization. Calls (f) on first use to provide the initial value of the atom. Only supports swap/reset/deref. Reset bypasses lazy initialization. If f throws, behavior is undefined (read: proper fucked).

    +

    letr

    macro

    (letr bindings & body)

    Let bindings, plus early return.

    You want to do some complicated, multi-stage operation assigning lots of variables–but at different points in the let binding, you need to perform some conditional check to make sure you can proceed to the next step. Ordinarily, you’d intersperse let and if statements, like so:

    (let [res (network-call)]
       (if-not (:ok? res)
    @@ -83,25 +84,25 @@
     

    returns 1, not 2, because (return 2) was not the terminal expression.

    return only works within letr’s bindings, not its body.

    -

    letr-let-if

    (letr-let-if groups body)

    Takes a sequence of binding groups and a body expression, and emits a let for the first group, an if statement checking for a return, and recurses; ending with body.

    -

    letr-partition-bindings

    (letr-partition-bindings bindings)

Takes a vector of bindings sym expr, sym’ expr, …. Returns binding-groups: a sequence of vectors of bindings, where the final binding in each group has an early return. The final group (possibly empty!) contains no early return.

    -

    letr-rewrite-return

    (letr-rewrite-return expr)

Rewrites (return x) to (Return. x) in expr. Returns a pair of changed? expr, where changed is whether the expression contained a return.

    letr-let-if

    (letr-let-if groups body)

    Takes a sequence of binding groups and a body expression, and emits a let for the first group, an if statement checking for a return, and recurses; ending with body.

    +

    letr-partition-bindings

    (letr-partition-bindings bindings)

Takes a vector of bindings sym expr, sym’ expr, …. Returns binding-groups: a sequence of vectors of bindings, where the final binding in each group has an early return. The final group (possibly empty!) contains no early return.

    +

    letr-rewrite-return

    (letr-rewrite-return expr)

    Rewrites (return x) to (Return. x) in expr. Returns a pair of changed? expr, where changed is whether the expression contained a return.

    -

    linear-time-nanos

    (linear-time-nanos)

    A linear time source in nanoseconds.

    -

    local-time

    (local-time)

    Local time.

    -

    log

    (log & things)

    log-op

    (log-op op)

    Logs an operation and returns it.

    -

    log-print

    (log-print _ & things)

    longest-common-prefix

    (longest-common-prefix cs)

    Given a collection of sequences, finds the longest sequence which is a prefix of every sequence given.

    -

    majority

    (majority n)

    Given a number, returns the smallest integer strictly greater than half.

    -

    map-keys

    (map-keys f m)

    Maps keys in a map.

    -

    map-kv

    (map-kv f m)

    Takes a function (f k v) which returns k v, and builds a new map by applying f to every pair.

    -

    map-vals

    (map-vals f m)

    Maps values in a map.

    -

    max-by

    (max-by f coll)

    Finds the maximum element of a collection based on some (f element), which returns Comparables. If coll is empty, returns nil.

    -

    maybe-number

    (maybe-number s)

Tries reading a string as a long, then double, then string. Passes through nil. Useful for getting nice values out of stats APIs that just dump a bunch of heterogeneously-typed strings at you.

    -

    meh

    macro

    (meh & body)

    Returns, rather than throws, exceptions.

    -

    min-by

    (min-by f coll)

    Finds the minimum element of a collection based on some (f element), which returns Comparables. If coll is empty, returns nil.

    -

    minority-third

    (minority-third n)

    Given a number, returns the largest integer strictly less than 1/3rd. Helpful for testing byzantine fault-tolerant systems.

    -

    ms->nanos

    (ms->nanos ms)

    mute

    macro

    (mute & body)

    mute-jdk

    macro

    (mute-jdk & body)

    name+

    (name+ x)

    Tries name, falls back to pr-str.

    -

    named-locks

    (named-locks)

    Creates a mutable data structure which backs a named locking mechanism.

    +

    linear-time-nanos

    (linear-time-nanos)

    A linear time source in nanoseconds.

    +

    local-time

    (local-time)

    Local time.

    +

    log

    (log & things)

    log-op

    (log-op op)

    Logs an operation and returns it.

    +

    log-print

    (log-print _ & things)

    longest-common-prefix

    (longest-common-prefix cs)

    Given a collection of sequences, finds the longest sequence which is a prefix of every sequence given.

    +

    majority

    (majority n)

    Given a number, returns the smallest integer strictly greater than half.
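For example (values follow from the definition):

    (majority 4) ; => 3
    (majority 5) ; => 3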

    +

    map-keys

    (map-keys f m)

    Maps keys in a map.

    +

    map-kv

    (map-kv f m)

    Takes a function (f k v) which returns k v, and builds a new map by applying f to every pair.

    +

    map-vals

    (map-vals f m)

    Maps values in a map.

    +

    max-by

    (max-by f coll)

    Finds the maximum element of a collection based on some (f element), which returns Comparables. If coll is empty, returns nil.

    +

    maybe-number

    (maybe-number s)

Tries reading a string as a long, then double, then string. Passes through nil. Useful for getting nice values out of stats APIs that just dump a bunch of heterogeneously-typed strings at you.

    +

    meh

    macro

    (meh & body)

    Returns, rather than throws, exceptions.

    +

    min-by

    (min-by f coll)

    Finds the minimum element of a collection based on some (f element), which returns Comparables. If coll is empty, returns nil.

    +

    minority-third

    (minority-third n)

    Given a number, returns the largest integer strictly less than 1/3rd. Helpful for testing byzantine fault-tolerant systems.

    +

    ms->nanos

    (ms->nanos ms)

    mute

    macro

    (mute & body)

    mute-jdk

    macro

    (mute-jdk & body)

    name+

    (name+ x)

    Tries name, falls back to pr-str.

    +

    named-locks

    (named-locks)

    Creates a mutable data structure which backs a named locking mechanism.

    Named locks are helpful when you need to coordinate access to a dynamic pool of resources. For instance, you might want to prohibit multiple threads from executing a command on a remote node at once. Nodes are uniquely identified by a string name, so you could write:

    (defonce node-locks (named-locks))
     
    @@ -112,18 +113,20 @@
     

    Now, concurrent calls to start-db! will not execute concurrently.

    The structure we use to track named locks is an atom wrapping a map, where the map’s keys are any object, and the values are canonicalized versions of that same object. We use standard Java locking on the canonicalized versions. This is basically an arbitrary version of string interning.

    -

    nanos->ms

    (nanos->ms nanos)

    nanos->secs

    (nanos->secs nanos)

    nemesis-intervals

    (nemesis-intervals history)(nemesis-intervals history opts)

    Given a history where a nemesis goes through :f :start and :f :stop type transitions, constructs a sequence of pairs of start and stop ops. Since a nemesis usually goes :start :start :stop :stop, we construct pairs of the first and third, then second and fourth events. Where no :stop op is present, we emit a pair like start nil. Optionally, a map of start and stop sets may be provided to match on user-defined :start and :stop keys.

    +

    nanos->ms

    (nanos->ms nanos)

    nanos->secs

    (nanos->secs nanos)

    nemesis-intervals

    (nemesis-intervals history)(nemesis-intervals history opts)

    Given a history where a nemesis goes through :f :start and :f :stop type transitions, constructs a sequence of pairs of start and stop ops. Since a nemesis usually goes :start :start :stop :stop, we construct pairs of the first and third, then second and fourth events. Where no :stop op is present, we emit a pair like start nil. Optionally, a map of start and stop sets may be provided to match on user-defined :start and :stop keys.

    Multiple starts are ended by the same pair of stops, so :start1 :start2 :start3 :start4 :stop1 :stop2 yields:

start1 stop1
start2 stop2
start3 stop1
start4 stop2

    -

    op->str

    (op->str op)

    Format an operation as a string.

    -

    parse-long

    (parse-long s)

    Parses a string to a Long. Look, we use this a lot, okay?

    -

    poly-compare

    (poly-compare a b)

Comparator function for sorting heterogeneous collections.

    -

    polysort

    (polysort coll)

Sort, but on heterogeneous collections.

    -

    pprint-str

    (pprint-str x)

    print-history

    (print-history history)(print-history printer history)

    Prints a history to the console.

    -

    prn-op

    (prn-op op)

    Prints an operation to the console.

    -

    processors

    (processors)

    How many processors on this platform?

    -

    pwrite-history!

    (pwrite-history! f history)(pwrite-history! f printer history)

    Writes history, taking advantage of more cores.

    -

    rand-distribution

    (rand-distribution)(rand-distribution distribution-map)

    Generates a random value with a distribution (default :uniform) of:

    +

    nil-if-empty

    (nil-if-empty seqable)

    Takes a seqable and returns it, or nil if (seq seqable) is nil. Helpful when you want to return a vector if non-empty, or nil otherwise.

    +

    op->str

    (op->str op)

    Format an operation as a string.

    +

    parse-long

    (parse-long s)

    Parses a string to a Long. Look, we use this a lot, okay?

    +

    partition-by-vec

    (partition-by-vec f xs)

    A faster version of partition-by which returns a vector of vectors, rather than using lazy seqs. Comes at the cost of eager evaluation.

    +

    poly-compare

    (poly-compare a b)

Comparator function for sorting heterogeneous collections.

    +

    polysort

    (polysort coll)

Sort, but on heterogeneous collections.

    +

    pprint-str

    (pprint-str x)

    print-history

    (print-history history)(print-history printer history)

    Prints a history to the console.

    +

    prn-op

    (prn-op op)

    Prints an operation to the console.

    +

    processors

    (processors)

    How many processors on this platform?

    +

    pwrite-history!

    (pwrite-history! f history)(pwrite-history! f printer history)

    Writes history, taking advantage of more cores.

    +

    rand-distribution

    (rand-distribution)(rand-distribution distribution-map)

    Generates a random value with a distribution (default :uniform) of:

    ; Uniform distribution from min (inclusive, default 0) to max (exclusive, default Long/MAX_VALUE). 
     {:distribution :uniform, :min 0, :max 1024}
     
    @@ -136,21 +139,21 @@
     ; Select a value based on weights. :weights are {value weight ...}
     {:distribution :weighted :weights {1e-3 1 1e-4 3 1e-5 1}}
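Typical calls, then, might look like this (a sketch; the arguments mirror the maps above):

    (rand-distribution)                                             ; uniform over [0, Long/MAX_VALUE)
    (rand-distribution {:distribution :uniform, :min 10, :max 20})
    (rand-distribution {:distribution :weighted, :weights {1 1, 2 3, 3 1}})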
     
    -

    rand-exp

    (rand-exp lambda)

Generates an exponentially distributed random value with rate parameter lambda.

    -

    rand-nth-empty

    (rand-nth-empty coll)

    Like rand-nth, but returns nil if the collection is empty.

    -

    random-nonempty-subset

    (random-nonempty-subset coll)

    A randomly selected, randomly ordered, non-empty subset of the given collection. Returns nil if collection is empty.

    -

    real-pmap

    (real-pmap f coll)

    Like pmap, but runs a thread per element, which prevents deadlocks when work elements have dependencies. The dom-top real-pmap throws the first exception it gets, which might be something unhelpful like InterruptedException or BrokenBarrierException. This variant works like that real-pmap, but throws more interesting exceptions when possible.

    -

    relative-time-nanos

    (relative-time-nanos)

    Time in nanoseconds since relative-time-origin

    -

    retry

    macro

    (retry dt & body)

    Evals body repeatedly until it doesn’t throw, sleeping dt seconds.

    -

    secs->nanos

    (secs->nanos s)

    sequential

    (sequential thing-or-things)

    Wraps non-sequential things into singleton lists, and leaves sequential things or nil as themselves. Useful when you can take either a single thing or a sequence of things.

    -

    sh

    (sh & args)

    A wrapper around clojure.java.shell’s sh which throws on nonzero exit.

    -

    sleep

    (sleep dt)

    High-resolution sleep; takes a (possibly fractional) time in ms.

    -

    spy

    (spy x)

    test->str

    (test->str test)

    Pretty-prints a test to a string. This binds print-length to avoid printing infinite sequences for generators.

    -

    time-

    macro

    (time- & body)

    timeout

    macro

    (timeout millis timeout-val & body)

    Times out body after n millis, returning timeout-val.

    -

    uninteresting-exceptions

    Exceptions which are less interesting; used by real-pmap and other cases where we want to pick a meaningful exception.

    -

    with-named-lock

    macro

    (with-named-lock locks name & body)

    Given a lock pool, and a name, locks that name in the pool for the duration of the body.

    -

    with-relative-time

    macro

    (with-relative-time & body)

    Binds relative-time-origin at the start of body.

    -

    with-retry

    macro

    (with-retry initial-bindings & body)

    It’s really fucking inconvenient not being able to recur from within (catch) expressions. This macro wraps its body in a (loop bindings(try …)). Provides a (retry & new bindings) form which is usable within (catch) blocks: when this form is returned by the body, the body will be retried with the new bindings.

    -

    with-thread-name

    macro

    (with-thread-name thread-name & body)

Sets the thread name for the duration of the block.

    -

    write-history!

    (write-history! f history)(write-history! f printer history)

    Writes a history to a file.

    -
    \ No newline at end of file +

    rand-exp

    (rand-exp lambda)

Generates an exponentially distributed random value with rate parameter lambda.

    +

    rand-nth-empty

    (rand-nth-empty coll)

    Like rand-nth, but returns nil if the collection is empty.

    +

    random-nonempty-subset

    (random-nonempty-subset coll)

    A randomly selected, randomly ordered, non-empty subset of the given collection. Returns nil if collection is empty.

    +

    real-pmap

    (real-pmap f coll)

    Like pmap, but runs a thread per element, which prevents deadlocks when work elements have dependencies. The dom-top real-pmap throws the first exception it gets, which might be something unhelpful like InterruptedException or BrokenBarrierException. This variant works like that real-pmap, but throws more interesting exceptions when possible.

    +

    relative-time-nanos

    (relative-time-nanos)

    Time in nanoseconds since relative-time-origin

    +

    retry

    macro

    (retry dt & body)

    Evals body repeatedly until it doesn’t throw, sleeping dt seconds.

    +

    secs->nanos

    (secs->nanos s)

    sequential

    (sequential thing-or-things)

    Wraps non-sequential things into singleton lists, and leaves sequential things or nil as themselves. Useful when you can take either a single thing or a sequence of things.

    +

    sh

    (sh & args)

    A wrapper around clojure.java.shell’s sh which throws on nonzero exit.

    +

    sleep

    (sleep dt)

    High-resolution sleep; takes a (possibly fractional) time in ms.

    +

    spy

    (spy x)

    test->str

    (test->str test)

    Pretty-prints a test to a string. This binds print-length to avoid printing infinite sequences for generators.

    +

    time-

    macro

    (time- & body)

    timeout

    macro

    (timeout millis timeout-val & body)

    Times out body after n millis, returning timeout-val.
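A sketch (slow-op is hypothetical):

    (timeout 5000 ::timed-out
      (slow-op))   ; returns ::timed-out if slow-op takes longer than 5000 ms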

    +

    uninteresting-exceptions

    Exceptions which are less interesting; used by real-pmap and other cases where we want to pick a meaningful exception.

    +

    with-named-lock

    macro

    (with-named-lock locks name & body)

    Given a lock pool, and a name, locks that name in the pool for the duration of the body.
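A sketch, echoing the named-locks example earlier (start-db! is hypothetical):

    (defonce node-locks (named-locks))

    (with-named-lock node-locks "n1"
      (start-db! "n1"))   ; only one thread at a time may run this for "n1"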

    +

    with-relative-time

    macro

    (with-relative-time & body)

    Binds relative-time-origin at the start of body.

    +

    with-retry

    macro

    (with-retry initial-bindings & body)

    It’s really fucking inconvenient not being able to recur from within (catch) expressions. This macro wraps its body in a (loop bindings(try …)). Provides a (retry & new bindings) form which is usable within (catch) blocks: when this form is returned by the body, the body will be retried with the new bindings.
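A sketch (connect! is hypothetical):

    (with-retry [attempts 5]
      (connect!)
      (catch java.io.IOException e
        (if (pos? attempts)
          (retry (dec attempts))
          (throw e))))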

    +

    with-thread-name

    macro

    (with-thread-name thread-name & body)

Sets the thread name for the duration of the block.

    +

    write-history!

    (write-history! f history)(write-history! f printer history)

    Writes a history to a file.

    +
    \ No newline at end of file diff --git a/jepsen.web.html b/jepsen.web.html index 05d051ec7..005977783 100644 --- a/jepsen.web.html +++ b/jepsen.web.html @@ -1,29 +1,29 @@ -jepsen.web documentation

    jepsen.web

    Web server frontend for browsing test results.

    -

    app

    (app req)

    assert-file-in-scope!

    (assert-file-in-scope! f)

    Throws if the given file is outside our store directory.

    -

    basic-date-time

    clj-escape

    (clj-escape s)

    Escape a Clojure string.

    -

    content-type

    Map of extensions to known content-types

    -

    dir

    (dir dir)

    Serves a directory.

    -

    dir-cell

    (dir-cell f)

    Renders a File (a directory) for a directory view.

    -

    dir-sort

    (dir-sort files)

    Sort a collection of Files. If everything’s an integer, sort numerically, else alphanumerically.

    -

    fast-tests

    (fast-tests)

    Abbreviated set of tests: just name, start-time, results. Memoizes (partially) via test-cache.

    -

    file-cell

    (file-cell f)

    Renders a File for a directory view.

    -

    file-url

    (file-url f)

    URL for a File

    -

    files

    (files req)

    Serve requests for /files/ urls

    -

    home

    (home req)

    Home page

    -

    js-escape

    (js-escape s)

    Escape a Javascript string.

    -

    page-limit

    How many test rows per page?

    -

    params

    (params req)

    Parses a query params map from a request.

    -

    parse-time

    (parse-time t)

    Parses a time from a string

    -

    relative-path

    (relative-path base target)

    Relative path, as a Path.

    -

    serve!

    (serve! options)

    Starts an http server with the given httpkit options.

    -

    test-cache

    An in-memory cache of {:name, :start-time, :valid?} maps, indexed by an ordered map of :start-time :name. Earliest start times at the front.

    -

    test-cache-key

    Function which extracts the key for the test cache from a map.

    -

    test-cache-mutable-window

    How far back in the test cache do we refresh on every page load?

    -

    test-header

    (test-header)

    test-row

    (test-row t)

    Turns a test map into a table row.

    -

    url

    (url t & args)

    Takes a test and filename components; returns a URL for that file.

    -

    url-encode-path-components

    (url-encode-path-components x)

    URL encodes individual components of a path, leaving / as / instead of encoded.

    -

    valid-color

    zip

    (zip req dir)

    Serves a directory as a zip file. Strips .zip off the extension.

    -

    zip-path!

    (zip-path! zipper base file)

    Writes a path to a zipoutputstream

    -
    \ No newline at end of file +jepsen.web documentation

    jepsen.web

    Web server frontend for browsing test results.

    +

    app

    (app req)

    assert-file-in-scope!

    (assert-file-in-scope! f)

    Throws if the given file is outside our store directory.

    +

    basic-date-time

    clj-escape

    (clj-escape s)

    Escape a Clojure string.

    +

    content-type

    Map of extensions to known content-types

    +

    dir

    (dir dir)

    Serves a directory.

    +

    dir-cell

    (dir-cell f)

    Renders a File (a directory) for a directory view.

    +

    dir-sort

    (dir-sort files)

    Sort a collection of Files. If everything’s an integer, sort numerically, else alphanumerically.

    +

    fast-tests

    (fast-tests)

    Abbreviated set of tests: just name, start-time, results. Memoizes (partially) via test-cache.

    +

    file-cell

    (file-cell f)

    Renders a File for a directory view.

    +

    file-url

    (file-url f)

    URL for a File

    +

    files

    (files req)

    Serve requests for /files/ urls

    +

    home

    (home req)

    Home page

    +

    js-escape

    (js-escape s)

    Escape a Javascript string.

    +

    page-limit

    How many test rows per page?

    +

    params

    (params req)

    Parses a query params map from a request.

    +

    parse-time

    (parse-time t)

    Parses a time from a string

    +

    relative-path

    (relative-path base target)

    Relative path, as a Path.

    +

    serve!

    (serve! options)

    Starts an http server with the given httpkit options.

    +

    test-cache

    An in-memory cache of {:name, :start-time, :valid?} maps, indexed by an ordered map of :start-time :name. Earliest start times at the front.

    +

    test-cache-key

    Function which extracts the key for the test cache from a map.

    +

    test-cache-mutable-window

    How far back in the test cache do we refresh on every page load?

    +

    test-header

    (test-header)

    test-row

    (test-row t)

    Turns a test map into a table row.

    +

    url

    (url t & args)

    Takes a test and filename components; returns a URL for that file.

    +

    url-encode-path-components

    (url-encode-path-components x)

    URL encodes individual components of a path, leaving / as / instead of encoded.

    +

    valid-color

    zip

    (zip req dir)

    Serves a directory as a zip file. Strips .zip off the extension.

    +

    zip-path!

    (zip-path! zipper base file)

    Writes a path to a zipoutputstream

    +
    \ No newline at end of file