From 36ccf26a68aa17b8a0db6e98c87fcecf67f0a452 Mon Sep 17 00:00:00 2001 From: <> Date: Mon, 22 Jul 2024 06:47:29 +0000 Subject: [PATCH] Deployed f91da1e with MkDocs version: 1.6.0 --- search/search_index.json | 2 +- sitemap.xml.gz | Bin 127 -> 127 bytes tutorials/github-actions/index.html | 6 +++--- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/search/search_index.json b/search/search_index.json index 43244272..17d98796 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#nf-test-a-simple-testing-framework-for-nextflow-pipelines","title":"nf-test: A simple testing framework for Nextflow pipelines","text":"

Test your production-ready Nextflow pipelines in an efficient and automated way. \ud83d\ude80


A DSL similar to Nextflow that describes expected behavior using 'when' and 'then' blocks. An abundance of functions for writing elegant and readable assertions. Snapshot support for testing complex data structures. Commands for generating boilerplate code. A test-runner that executes these scripts. Easy installation on CI systems.

"},{"location":"#unit-testing","title":"Unit testing","text":"

nf-test enables you to test all components of your data science pipeline: from end-to-end testing of the entire pipeline to specific tests of processes or even custom functions. This ensures that all testing is conducted consistently across your project.

The following examples show a pipeline test, a process test and function tests:
nextflow_pipeline {\n\n  name \"Test Hello World\"\n  script \"nextflow-io/hello\"\n\n  test(\"hello world example should start 4 processes\") {\n    expect {\n      with(workflow) {\n        assert success\n        assert trace.tasks().size() == 4\n        assert \"Ciao world!\" in stdout\n        assert \"Bonjour world!\" in stdout\n        assert \"Hello world!\" in stdout\n        assert \"Hola world!\" in stdout\n      }\n    }\n  }\n\n}\n
nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should create channel index files\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = file(\"test_data/transcriptome.fa\")\n                \"\"\"\n            }\n        }\n\n        then {\n            //check if test case succeeded\n            assert process.success\n            //analyze trace file\n            assert process.trace.tasks().size() == 1\n            with(process.out) {\n                // check if emitted output has been created\n                assert index.size() == 1\n                // count amount of created files\n                assert path(index.get(0)).list().size() == 16\n                // parse info.json file\n                def info = path(index.get(0)+'/info.json').json\n                assert info.num_kmers == 375730\n                assert info.seq_length == 443050\n                //verify md5 checksum\n                assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n            }\n        }\n\n    }\n\n}\n
nextflow_function {\n\n    name \"Test functions\"\n    script \"functions.nf\"\n\n    test(\"Test function1\") {\n      function \"function1\"\n      ...\n    }\n\n    test(\"Test function2\") {\n      function \"function2\"\n      ...\n    }\n}\n

Learn more about pipeline tests, workflow tests, process tests and function tests in the documentation.

"},{"location":"#snapshot-testing","title":"Snapshot testing","text":"

nf-test supports snapshot testing and automatically generates a baseline set of unit tests to safeguard against regressions caused by changes. nf-test captures a snapshot of output channels or any other objects and subsequently compares them to reference snapshot files stored alongside the tests. If the two snapshots do not match, the test will fail.
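For example, a single assertion in a then block is enough to snapshot all output channels of a process (a minimal sketch; see the snapshot documentation below for details):

assert snapshot(process.out).match()\n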

Learn more

"},{"location":"#highly-extendable","title":"Highly extendable","text":"

nf-test supports the inclusion of third-party libraries (e.g., jar files) or functions from Groovy files. This can be done either to extend its functionality or to prevent code duplication, thus maintaining simplicity in the logic of test cases. Given that many assertions are specific to use cases, nf-test incorporates a plugin system that allows for the extension of existing classes with custom methods, for example FASTA file support.

Learn more

"},{"location":"#support-us","title":"Support us","text":"

We love stars as much as we love rockets! So make sure you star us on GitHub.


Show the world your Nextflow pipeline is using nf-test and add the following badge to your README.md:

[![nf-test](https://img.shields.io/badge/tested_with-nf--test-337ab7.svg)](https://code.askimed.com/nf-test)\n
"},{"location":"#citation","title":"Citation","text":"

If you test your pipeline with nf-test, please cite:

Forer, L., & Sch\u00f6nherr, S. (2024). Improving the Reliability and Quality of Nextflow Pipelines with nf-test. bioRxiv. https://doi.org/10.1101/2024.05.25.595877

"},{"location":"#about","title":"About","text":"

nf-test has been created by Lukas Forer and Sebastian Sch\u00f6nherr and is MIT Licensed.

Thanks to all the contributors who help us maintain and improve nf-test!

"},{"location":"about/","title":"About","text":"

nf-test has been created by Lukas Forer and Sebastian Sch\u00f6nherr and is MIT Licensed.

"},{"location":"about/#contributors","title":"Contributors","text":""},{"location":"about/#statistics","title":"Statistics","text":"

GitHub:

Bioconda:

"},{"location":"installation/","title":"Installation","text":"

nf-test has the same requirements as Nextflow and can be used on POSIX-compatible systems like Linux or macOS. You can install nf-test using the following command:

curl -fsSL https://get.nf-test.com | bash\n

If you don't have curl installed, you can use wget:

wget -qO- https://get.nf-test.com | bash\n

It will create the nf-test executable file in the current directory. Optionally, move the nf-test file to a directory accessible by your $PATH variable.
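For example, assuming /usr/local/bin is on your $PATH (the target directory is just a common choice):

mv nf-test /usr/local/bin/\n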

"},{"location":"installation/#verify-installation","title":"Verify installation","text":"

Test the installation with the following command:

nf-test version\n

You should see something like this:

\ud83d\ude80 nf-test 0.5.0\nhttps://code.askimed.com/nf-test\n(c) 2021 -2022 Lukas Forer and Sebastian Schoenherr\n\nNextflow Runtime:\n\n      N E X T F L O W\n      version 21.10.6 build 5660\n      created 21-12-2021 16:55 UTC (17:55 CEST)\n      cite doi:10.1038/nbt.3820\n      http://nextflow.io\n

Now you are ready to write your first testcase.

"},{"location":"installation/#install-a-specific-version","title":"Install a specific version","text":"

If you want to install a specific version, pass it to the install script as follows:

curl -fsSL https://get.nf-test.com | bash -s 0.7.0\n
"},{"location":"installation/#manual-installation","title":"Manual installation","text":"

All releases are also available on GitHub.

"},{"location":"installation/#nextflow-binary-not-found","title":"Nextflow Binary not found?","text":"

If you get an error message like this, then nf-test was not able to detect your Nextflow installation.

\ud83d\ude80 nf-test 0.5.0\nhttps://code.askimed.com/nf-test\n(c) 2021 -2022 Lukas Forer and Sebastian Schoenherr\n\nNextflow Runtime:\nError: Nextflow Binary not found. Please check if Nextflow is in a directory accessible by your $PATH variable or set $NEXTFLOW_HOME.\n

To solve this issue you have two possibilities: ensure the nextflow binary is in a directory listed in your $PATH variable, or set the $NEXTFLOW_HOME environment variable.
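A sketch of both options, assuming Nextflow lives in the hypothetical directory /path/to/nextflow-dir:

# option 1: make the nextflow binary discoverable via $PATH\nexport PATH=\"/path/to/nextflow-dir:$PATH\"\n# option 2: point $NEXTFLOW_HOME to your Nextflow installation\nexport NEXTFLOW_HOME=\"/path/to/nextflow-dir\"\n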

"},{"location":"installation/#updating","title":"Updating","text":"

To update an existing nf-test installation to the latest version, run the following command:

nf-test update\n
"},{"location":"installation/#compiling-from-source","title":"Compiling from source","text":"

To compile nf-test from source, you need Maven installed. The following commands produce a nf-test/target/nf-test.jar file.

git clone git@github.com:askimed/nf-test.git\ncd nf-test\nmvn install\n
To use the newly compiled nf-test.jar, update the nf-test bash script that is on your PATH to point to the new .jar file. First locate it with which nf-test, and then modify the APP_HOME and APP_JAR variables at the top:
#!/bin/bash\nAPP_HOME=\"/PATH/TO/nf-test/target/\"\nAPP_JAR=\"nf-test.jar\"\nAPP_UPDATE_URL=\"https://code.askimed.com/install/nf-test\"\n...\n

"},{"location":"resources/","title":"Resources","text":"

This page collects videos and blog posts about nf-test created by the community. Have you authored a blog post or given a talk about nf-test? Feel free to contact us, and we will be delighted to feature it here.

"},{"location":"resources/#training-module-hello-nf-test","title":"Training Module: Hello nf-test","text":"

This training module introduces nf-test in a simple, easy-to-follow hello-nextflow style aimed at beginners.

hello nf-test training module

"},{"location":"resources/#blog-post-leveraging-nf-test-for-enhanced-quality-control-in-nf-core","title":"Blog post: Leveraging nf-test for enhanced quality control in nf-core","text":"

Reproducibility is an important attribute of all good science. This is especially true in the realm of bioinformatics, where software is hopefully being constantly updated, and pipelines are ideally being maintained. This blog post covers nf-test in the nf-core context.

Read blog post

"},{"location":"resources/#nf-corebytesize-converting-pytest-modules-to-nf-test","title":"nf-core/bytesize: Converting pytest modules to nf-test","text":"

Adam Talbot and Sateesh Peri give a live demo of converting nf-core DSL2 module pytests to nf-test.

The presentation was recorded as part of the nf-core/bytesize series.

"},{"location":"resources/#nf-test-a-simple-but-powerful-testing-framework-for-nextflow-pipelines","title":"nf-test: a simple but powerful testing framework for Nextflow pipelines","text":"

Lukas Forer provides an overview of nf-test, its evolution over time and upcoming features.

The presentation was recorded as part of the 2023 Nextflow Summit in Barcelona.

"},{"location":"resources/#nf-test-a-simple-test-framework-specifically-tailored-for-nextflow-pipelines","title":"nf-test, a simple test framework specifically tailored for Nextflow pipelines","text":"

Sateesh Peri does a hands-on exploration of nf-test, a simple test framework specifically tailored for Nextflow pipelines.

Slides to follow along can be found here.

The presentation was recorded as part of the Workflows Community Meetup - All Things Groovy at the Wellcome Genome Campus.

"},{"location":"resources/#nf-corebytesize-nf-test","title":"nf-core/bytesize: nf-test","text":"

Edmund Miller shares with us his impressions about nf-test from a user perspective. nf-test is a simple test framework for Nextflow pipelines.

The presentation was recorded as part of the nf-core/bytesize series.

"},{"location":"resources/#episode-8-nf-test-mentorships-and-debugging-resume","title":"Episode 8: nf-test, mentorships and debugging resume","text":"

Phil Ewels, Chris Hakkaart and Marcel Ribeiro-Dantas chat about the nf-test framework for testing Nextflow pipelines.

The presentation was part of the \"News & Views\" episode of Channels (Nextflow Podcast).

"},{"location":"resources/#blog-post-a-simple-test-framework-for-nextflow-pipelines","title":"Blog post: A simple test framework for Nextflow pipelines","text":"

Discover how nf-test originated from the need to efficiently and automatically test production-ready Nextflow pipelines.

Read blog post

"},{"location":"tutorials/","title":"Tutorials","text":""},{"location":"tutorials/#setup-nf-test-on-github-actions","title":"Setup nf-test on GitHub Actions","text":"

In this tutorial, we will guide you through setting up and running nf-test on GitHub Actions.

Read tutorial

"},{"location":"docs/configuration/","title":"Configuration","text":""},{"location":"docs/configuration/#nf-testconfig","title":"nf-test.config","text":"

The nf-test.config file is a configuration file used to customize settings and behavior for nf-test. This file must be located in the root of your project, and it is automatically loaded when you run nf-test test. Below are the parameters that can be adapted:

| Parameter | Description | Default Value |
| --- | --- | --- |
| testsDir | Location for storing all nf-test cases (test scripts). If you want all test files to be in the same directory as the script itself, you can set the testsDir to . | "tests" |
| workDir | Directory for storing temporary files and working directories for each test. This directory should be added to .gitignore. | ".nf-test" |
| configFile | Location of an optional nextflow.config file specifically used for executing tests. Learn more. | "tests/nextflow.config" |
| libDir | Location of a library folder that is automatically added to the classpath during testing to include additional libraries or resources needed for test cases. | "tests/lib" |
| profile | Default profile to use for running tests defined in the Nextflow configuration. Learn more. | "docker" |
| withTrace | Enable or disable tracing options during testing. Disable tracing if your containers don't include the procps tool. | true |
| autoSort | Enable or disable sorted channels by default when running tests. | true |
| options | Custom Nextflow command-line options to be applied when running tests. For example "-dump-channels -stub-run" | |
| ignore | List of filenames or patterns that should be ignored when building the dependency graph. For example: ignore 'folder/**/*.nf', 'modules/module.nf' | |
| triggers | List of filenames or patterns that should trigger a full test run. For example: triggers 'nextflow.config', 'test-data/**/*' | |
| requires | Can be used to specify the minimum required version of nf-test. Requires nf-test > 0.9.0 | |

Here's an example of what an nf-test.config file could look like:

config {\n    testsDir \"tests\"\n    workDir \".nf-test\"\n    configFile \"tests/nextflow.config\"\n    libDir \"tests/lib\"\n    profile \"docker\"\n    withTrace false\n    autoSort false\n    options \"-dump-channels -stub-run\"\n}\n

The requires keyword can be used to specify the minimum required version of nf-test. For instance, to ensure the use of at least nf-test version 0.9.0, define it as follows:

config {\n    requires (\n        \"nf-test\": \"0.9.0\"\n    )\n}\n
"},{"location":"docs/configuration/#testsnextflowconfig","title":"tests/nextflow.config","text":"

This optional nextflow.config file is used to execute tests. This is a good place to set default params for all your tests. For example, to set the number of threads:

params {\n    // run all tests with 1 thread\n    threads = 1\n}\n
"},{"location":"docs/configuration/#configuration-for-tests","title":"Configuration for tests","text":"

nf-test allows you to set and overwrite the config, autoSort and options properties for a specific testsuite:

nextflow_process {\n\n    name \"Test Process...\"\n    script \"main.nf\"\n    process \"my_process\"\n    config \"path/to/test/nextflow.config\"\n    autoSort false\n    options \"-dump-channels\"\n    ...\n\n}\n

It is also possible to overwrite these properties for a specific test. Depending on the Nextflow option used, you may also need to add the --debug nf-test option on the command line to see the additional output.

nextflow_process {\n\n   test(\"my test\") {\n\n      config \"path/to/test/nextflow.config\"\n      autoSort false\n      options \"-dump-channels\"\n      ...\n\n    }\n\n}\n
"},{"location":"docs/configuration/#managing-profiles","title":"Managing Profiles","text":"

Profiles in nf-test provide a convenient way to configure and customize Nextflow executions for your test cases. To run your test using a specific Nextflow profile, you can use the --profile argument on the command line or define a default profile in nf-test.config.
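For example, to run your tests with a docker profile defined in your Nextflow configuration:

nf-test test --profile docker\n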

"},{"location":"docs/configuration/#basic-profile-usage","title":"Basic Profile Usage","text":"

By default, nf-test reads the profile configuration from nf-test.config. If you've defined a profile called A in nf-test.config, running nf-test --profile B will start Nextflow with only the B profile. It replaces any existing profiles.

"},{"location":"docs/configuration/#combining-profiles-with","title":"Combining Profiles with \"+\"","text":"

To combine profiles, you can use the + prefix. For example, running nf-test --profile +B will start Nextflow with both A and B profiles, resulting in -profile A,B. This allows you to extend the existing configuration with additional profiles.
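A sketch, assuming nf-test.config defines the profile A:

nf-test test --profile B    # starts Nextflow with -profile B\nnf-test test --profile +B   # starts Nextflow with -profile A,B\n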

"},{"location":"docs/configuration/#profile-priority-order","title":"Profile Priority Order","text":"

Profiles are evaluated in a specific order, ensuring predictable behavior:

  1. Profile in nf-test.config: The first profile considered is the one defined in nf-test.config.

  2. Profile Defined in Testcase: If you specify a profile within a testcase, it takes precedence over the one in nf-test.config.

  3. Profile Defined on the Command Line (CLI): Finally, any profiles provided directly through the CLI have the highest priority and override or extend previously defined profiles.

By understanding this profile evaluation order, you can effectively configure Nextflow executions for your test cases in a flexible and organized manner.

"},{"location":"docs/configuration/#file-staging","title":"File Staging","text":"

Warning

File Staging is obsolete since version 0.9.0.

The stage section of the nf-test.config file is used to define files that are needed by Nextflow in the test environment (meta directory). Additionally, the directories lib, bin, and assets are automatically staged.

"},{"location":"docs/configuration/#supported-directives","title":"Supported Directives","text":""},{"location":"docs/configuration/#symlink","title":"symlink","text":"

This directive is used to create symbolic links (symlinks) in the test environment. Symlinks are pointers to files or directories and can be useful for creating references to data files or directories required for the test. The syntax for the symlink directive is as follows:

symlink \"source_path\"\n

source_path: The path to the source file or directory that you want to symlink.

"},{"location":"docs/configuration/#copy","title":"copy","text":"

This directive is used to copy files or directories into the test environment. It allows you to duplicate files from a specified source to a location within the test environment. The syntax for the copy directive is as follows:

copy \"source_path\"\n

source_path: The path to the source file or directory that you want to copy.

"},{"location":"docs/configuration/#example-usage","title":"Example Usage","text":"

Here's an example of how to use the stage section in an nf-test.config file:

config {\n    ...\n    stage {\n        symlink \"data/original_data.txt\"\n        copy \"resources/config.yml\"\n    }\n    ...\n}\n

In this example, data/original_data.txt is symlinked into the test environment and resources/config.yml is copied into it.

"},{"location":"docs/configuration/#testsuite","title":"Testsuite","text":"

Furthermore, it is also possible to stage files that are specific to a single testsuite:

nextflow_workflow {\n\n    name \"Test workflow HELLO_WORKFLOW\"\n\n    script \"./hello.nf\"\n    workflow \"HELLO_WORKFLOW\"\n\n    stage {\n        symlink \"test-assets/test.txt\"\n    }\n\n    test(\"Should print out test file\") {\n        expect {\n            assert workflow.success\n        }\n    }\n\n}\n
"},{"location":"docs/getting-started/","title":"Getting started","text":"

This guide helps you to understand the concepts of nf-test and to write your first test cases. Before you start, please check if you have installed nf-test properly on your computer. Also, this guide assumes that you have a basic knowledge of Groovy and unit testing. The Groovy documentation is the best place to learn its syntax.

"},{"location":"docs/getting-started/#lets-get-started","title":"Let's get started","text":"

To show the power of nf-test, we adapted a recently published proof-of-concept Nextflow pipeline to the new DSL2 syntax using modules. First, open the terminal and clone our test pipeline:

# clone nextflow pipeline\ngit clone https://github.com/askimed/nf-test-examples\n\n# enter project directory\ncd nf-test-examples\n

The pipeline consists of three modules (salmon_index.nf, salmon_align_quant.nf, fastqc.nf). Here, we use the SALMON_INDEX process to create a test case from scratch. This process takes a reference as input and creates an index using salmon.

"},{"location":"docs/getting-started/#init-new-project","title":"Init new project","text":"

Before creating test cases, we use the init command to set up nf-test.

# the init command has already been executed for our repository\nnf-test init\n

The init command creates the following files: nf-test.config and tests/nextflow.config. It also creates the tests folder, which is the home of your test code.
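After running init, a fresh project might therefore look like this (a sketch; main.nf stands in for your pipeline script):

my-pipeline\n\u251c\u2500\u2500 main.nf\n\u251c\u2500\u2500 nf-test.config\n\u2514\u2500\u2500 tests\n    \u2514\u2500\u2500 nextflow.config\n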

In the configuration section you can learn more about these files and how to customize the directory layout.

"},{"location":"docs/getting-started/#create-your-first-test","title":"Create your first test","text":"

The generate command helps you create skeleton test code for a Nextflow process or the complete pipeline/workflow.

Here we generate a test case for the SALMON_INDEX process defined in salmon_index.nf:

# delete already existing test case\nrm tests/modules/local/salmon_index.nf.test\nnf-test generate process modules/local/salmon_index.nf\n

This command creates a new file tests/modules/local/salmon_index.nf.test with the following content:

nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            params {\n                // define parameters here. Example:\n                // outdir = \"tests/results\"\n            }\n            process {\n                \"\"\"\n                // define inputs of the process here. Example:\n                // input[0] = file(\"test-file.txt\")\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            with(process.out) {\n              // Make assertions about the content and elements of output channels here. Example:\n              // assert out_channel != null\n            }\n        }\n\n    }\n\n}\n

The generate command automatically filled in the name, script and process properties of our test case and created a skeleton for your first test method. Typically, you create one file per process and use different test methods to describe the expected behaviour of the process.

This test has a name, a when and a then closure (when/then closures are required here, since inputs need to be defined). The when block describes the input parameters of the workflow or the process. nf-test executes the process with exactly these parameters and parses the content of the output channels. Then, it evaluates the assertions defined in the then block to check if the content of the output channels matches your expectations.

"},{"location":"docs/getting-started/#the-when-block","title":"The when block","text":"

The when block describes the input of the process and/or the Nextflow params.

The params block is optional and is a simple map that can be used to override Nextflow's input params.

The process block is a multi-line string. The input array can be used to set the different input arguments of the process. In our example, we only have one input that expects a file. Let us update the process block by setting the first element of the input array to the path of our reference file:

when {\n    params {\n        outdir = \"output\"\n    }\n    process {\n        \"\"\"\n        // use transcriptome.fa as the first input parameter of our process\n        input[0] = file(\"${projectDir}/test_data/transcriptome.fa\")\n        \"\"\"\n    }\n}\n

Everything defined in the process block is later executed in a Nextflow script (created automatically to test your process). Therefore, you can use every Nextflow-specific function or command to define the values of the input array (e.g. channels, files, paths).
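For instance, an input can also be defined with a channel factory (a sketch; the glob pattern is hypothetical):

when {\n    process {\n        \"\"\"\n        input[0] = Channel.fromPath(\"${projectDir}/test_data/*.fa\")\n        \"\"\"\n    }\n}\n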

"},{"location":"docs/getting-started/#the-then-block","title":"The then block","text":"

The then block describes the expected output channels of the process when we execute it with the input parameters defined in the when block.

The then block typically contains assertions that check assumptions (e.g. the size and content of an output channel). However, this block accepts any Groovy script, which means you can also import third-party libraries to define very specific assertions.

nf-test automatically loads all output channels of the process and all their items into a map named process.out. You can then use this map to formulate your assertions.

For example, for the SALMON_INDEX process we expect one task to be executed and 16 files to be created. We also want to check the md5 sum and look into the actual JSON file. Let us update the then block with some assertions that describe our expectations:

then {\n    //check if test case succeeded\n    assert process.success\n    //analyze trace file\n    assert process.trace.tasks().size() == 1\n    with(process.out) {\n      // check if emitted output has been created\n      assert index.size() == 1\n      // count amount of created files\n      assert path(index.get(0)).list().size() == 16\n      // parse info.json file using a json parser provided by nf-test\n      def info = path(index.get(0)+'/info.json').json\n      assert info.num_kmers == 375730\n      assert info.seq_length == 443050\n      assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n    }\n}\n

The items of a channel are always sorted by nf-test. This provides a deterministic order inside the channel and enables you to write reproducible tests.

"},{"location":"docs/getting-started/#your-first-test-specification","title":"Your first test specification","text":"

You can update the name of the test method to something that later gives a good description of our specification. When we put everything together, we get the following fully working test specification:

nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should create channel index files\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = file(\"${projectDir}/test_data/transcriptome.fa\")\n                \"\"\"\n            }\n        }\n\n        then {\n            //check if test case succeeded\n            assert process.success\n            //analyze trace file\n            assert process.trace.tasks().size() == 1\n            with(process.out) {\n              // check if emitted output has been created\n              assert index.size() == 1\n              // count amount of created files\n              assert path(index.get(0)).list().size() == 16\n              // parse info.json file\n              def info = path(index.get(0)+'/info.json').json\n              assert info.num_kmers == 375730\n              assert info.seq_length == 443050\n              assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n            }\n        }\n    }\n}\n
"},{"location":"docs/getting-started/#run-your-first-test","title":"Run your first test","text":"

Now, the test command can be used to run your test:

nf-test test tests/modules/local/salmon_index.nf.test --profile docker\n
"},{"location":"docs/getting-started/#specifying-profiles","title":"Specifying profiles","text":"

In this case, the docker profile defined in the Nextflow pipeline is used to execute the test. The profile is set using the --profile parameter, but you can also define a default profile in the configuration file.

Congratulations! You created your first nf-test specification.

"},{"location":"docs/getting-started/#nextflow-options","title":"Nextflow options","text":"

nf-test also allows you to specify Nextflow options (e.g. -dump-channels, -stub-run) globally in the nf-test.config file or by adding an options directive to the test suite or the actual test. Read more about this in the configuration documentation.

nextflow_process {\n\n    options \"-dump-channels\"\n\n}\n
"},{"location":"docs/getting-started/#whats-next","title":"What's next?","text":""},{"location":"docs/nftest_pipelines/","title":"Pipelines using nf-test","text":""},{"location":"docs/nftest_pipelines/#nf-test-examples","title":"nf-test-examples","text":"

All test cases described in this documentation can be found in the nf-test-examples repository.

"},{"location":"docs/nftest_pipelines/#gwas-regenie-pipeline","title":"GWAS-Regenie Pipeline","text":"

To show the power of nf-test, we applied nf-test to a Nextflow pipeline that performs whole genome regression modelling using regenie. Please click here to learn more about this pipeline and check out different kinds of test cases.

"},{"location":"docs/running-tests/","title":"Running tests","text":""},{"location":"docs/running-tests/#basic-usage","title":"Basic usage","text":"

The easiest way to use nf-test is to run the following command. This command will run all tests under the tests directory. The testsDir can be changed in nf-test.config.

nf-test test\n
"},{"location":"docs/running-tests/#execute-specific-tests","title":"Execute specific tests","text":"

You can also specify a list of tests that should be executed.

nf-test test tests/modules/local/salmon_index.nf.test tests/modules/bwa_index.nf.test\n\nnf-test test tests/modules tests/modules/bwa_index.nf.test\n
"},{"location":"docs/running-tests/#tag-tests","title":"Tag tests","text":"

nf-test provides a simple tagging mechanism that allows you to execute tests by name or by tag.

Tags can be defined for each testsuite or for each testcase using the tag directive:

nextflow_process {\n\n    name \"suite 1\"\n    tag \"tag1\"\n\n    test(\"test 1\") {\n        tag \"tag2\"\n        tag \"tag3\"   \n        ...\n    }\n\n    test(\"test 2\") {\n\n        tag \"tag4\"\n        tag \"tag5\"   \n        ...\n\n    }\n}\n

For example, to execute all tests with tag2 use the following command.

nf-test test --tag tag2  # collects test1\n

Names are automatically added as tags. This enables you to execute suites or tests directly by name.

nf-test test --tag \"suite 1\"  # collects test1 and test2\n

When multiple tags are provided, all tests that match at least one tag will be executed. Tags are not case-sensitive; both of the following lines select the same tests.

nf-test test --tag tag3,tag4  # collects test1 and test2\nnf-test test --tag TAG3,TAG4  # collects test1 and test2\n
"},{"location":"docs/running-tests/#create-a-tap-output","title":"Create a TAP output","text":"

To run all tests and create a report.tap file, use the following command.

nf-test test --tap report.tap\n
"},{"location":"docs/running-tests/#run-test-by-its-hash-value","title":"Run test by its hash value","text":"

To run a specific test using its hash, the following command can be used. The hash value is generated during its first execution.

nf-test test tests/main.nf.test@d41119e4\n
"},{"location":"docs/assertions/assertions/","title":"Assertions","text":"

Writing test cases means formulating assumptions by using assertions. Groovy's power assert provides detailed output when the boolean expression evaluates to false. nf-test provides several extensions and commands to simplify working with Nextflow channels. Here we summarise how Nextflow and nf-test handle channels and provide examples for the tools that nf-test provides:

"},{"location":"docs/assertions/assertions/#nextflow-channels-and-nf-test-channel-sorting","title":"Nextflow channels and nf-test channel sorting","text":"

Nextflow channels emit (in a random order) a single value or a tuple of values.

Channels that emit a single item produce an unordered list of objects, List<Object>, for example:

process.out.outputCh = ['Hola', 'Hello', 'Bonjour']\n

Channels that contain Nextflow file values have a unique path each run, for example:

process.out.outputCh = ['/.nf-test/tests/c563c/work/65/85d0/Hola.json', '/.nf-test/tests/c563c/work/65/fa20/Hello.json', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json']\n

Channels that emit tuples produce an unordered list of ordered objects, List<List<Object>>:

process.out.outputCh = [\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json'], \n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'], \n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json']\n]\n

Assertions by channel index are made possible through sorting of the Nextflow channel. The sorting is performed automatically by nf-test, via integer, string and path comparisons, before the then closure is launched. For example, the above would be sorted by nf-test as:

process.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n

"},{"location":"docs/assertions/assertions/#using-with","title":"Using with","text":"

These assertions...

assert process.out.imputed_plink2\nassert process.out.imputed_plink2.size() == 1\nassert process.out.imputed_plink2.get(0).get(0) == \"example.vcf\"\nassert process.out.imputed_plink2.get(0).get(1) ==~ \".*/example.vcf.pgen\"\nassert process.out.imputed_plink2.get(0).get(2) ==~ \".*/example.vcf.psam\"\nassert process.out.imputed_plink2.get(0).get(3) ==~ \".*/example.vcf.pvar\"\n

... can be written by using with(){} to improve readability:

assert process.out.imputed_plink2\nwith(process.out.imputed_plink2) {\n    assert size() == 1\n    with(get(0)) {\n        assert get(0) == \"example.vcf\"\n        assert get(1) ==~ \".*/example.vcf.pgen\"\n        assert get(2) ==~ \".*/example.vcf.psam\"\n        assert get(3) ==~ \".*/example.vcf.pvar\"\n    }\n}\n
"},{"location":"docs/assertions/assertions/#using-contains-to-assert-an-item-in-the-channel-is-present","title":"Using contains to assert an item in the channel is present","text":"

Groovy's contains and collect methods can be used to flexibly assert an item exists in the channel output.

For example, the below represents a channel that emits a two-element tuple: a string and a JSON file:

/*\ndef process.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n

To assert the channel contains one of the tuples, parse the json and assert:

testData = process.out.outputCh.collect { greeting, jsonPath -> [greeting, path(jsonPath).json] } \nassert testData.contains(['Hello', path('./myTestData/Hello.json').json])\n

To assert a subset of the tuple data, filter the channel using collect. For example, to assert the greeting only:

testData = process.out.outputCh.collect { greeting, jsonPath -> greeting } \nassert testData.contains('Hello')\n

See the files page for more information on parsing and asserting various file types.

"},{"location":"docs/assertions/assertions/#using-assertcontainsinanyorder-for-order-agnostic-assertion-of-the-contents-of-a-channel","title":"Using assertContainsInAnyOrder for order-agnostic assertion of the contents of a channel","text":"

assertContainsInAnyOrder(List<Object> list1, List<Object> list2) performs an order-agnostic assertion on channel contents and is available in every nf-test closure. It is a binding for Hamcrest's assertContainsInAnyOrder.

Some example use-cases are provided below.

"},{"location":"docs/assertions/assertions/#channel-that-emits-strings","title":"Channel that emits strings","text":"
// process.out.outputCh = ['Bonjour', 'Hello', 'Hola'] \n\ndef expected = ['Hola', 'Hello', 'Bonjour']\nassertContainsInAnyOrder(process.out.outputCh, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-a-single-maps-eg-valmymap","title":"Channel that emits a single maps, e.g. val(myMap)","text":"
/*\nprocess.out.outputCh = [\n  [\n    'D': [10,11,12],\n    'C': [7,8,9]\n  ],\n  [\n    'B': [4,5,6],\n    'A': [1,2,3]\n  ]\n]\n*/\n\ndef expected = [\n  [\n    'A': [1,2,3],\n    'B': [4,5,6]\n  ],\n  [\n    'C': [7,8,9],\n    'D': [10,11,12]\n  ]\n]\n\nassertContainsInAnyOrder(process.out.outputCh, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-json-files","title":"Channel that emits json files","text":"

See the files page for more information on parsing and asserting various file types.

Since the outputCh filepaths differ between consecutive runs, the files need to be read/parsed prior to comparison:

/*\nprocess.out.outputCh = [\n  '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json',\n  '/.nf-test/tests/c563c/work/65/fa20/Hello.json',\n  '/.nf-test/tests/c563c/work/65/85d0/Hola.json'\n]\n*/\n\ndef actual = process.out.outputCh.collect { filepath -> path(filepath).json }\ndef expected = [\n  path('./myTestData/Hello.json').json,\n  path('./myTestData/Hola.json').json,\n  path('./myTestData/Bonjour.json').json,\n]\n\nassertContainsInAnyOrder(actual, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-a-tuple-of-strings-and-json-files","title":"Channel that emits a tuple of strings and json files","text":"

See the files page for more information on parsing and asserting various file types.

Since the ordering of items within the tuples is consistent, we can assert this case:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> [greeting, path(filepath).json] }\ndef expected = [\n  ['Hola', path('./myTestData/Hola.json').json], \n  ['Hello', path('./myTestData/Hello.json').json],\n  ['Bonjour', path('./myTestData/Bonjour.json').json],\n]\n\nassertContainsInAnyOrder(actual, expected)\n

To assert the json only and ignore the strings:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> path(filepath).json }\ndef expected = [\n  path('./myTestData/Hello.json').json, \n  path('./myTestData/Hola.json').json,\n  path('./myTestData/Bonjour.json').json\n]\n\nassertContainsInAnyOrder(actual, expected)\n

To assert the strings only and not the json files:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> greeting }\ndef expected = ['Hello', 'Hola', 'Bonjour']\n\nassertContainsInAnyOrder(actual, expected)\n

"},{"location":"docs/assertions/assertions/#using-assertall","title":"Using assertAll","text":"

assertAll(Closure... closures) ensures that all supplied closures do not throw exceptions. The number of failed closures is reported in the Exception message. This is useful for efficient debugging of a set of test assertions from a single test run.

def a = 2\n\nassertAll(\n    { assert a==1 },\n    { a = 1/0 },\n    { assert a==2 },\n    { assert a==3 }\n)\n
The output will look like this:
assert a==1\n       ||\n       |false\n       2\n\njava.lang.ArithmeticException: Division by zero\nAssertion failed:\n\nassert a==3\n       ||\n       |false\n       2\n\nFAILED (7.106s)\n\n  java.lang.Exception: 3 of 4 assertions failed\n

"},{"location":"docs/assertions/fasta/","title":"FASTA Files","text":"

0.7.0

The nft-fasta plugin extends path by a fasta property that can be used to read FASTA files into maps. nft-fasta also supports gzipped FASTA files.

"},{"location":"docs/assertions/fasta/#setup","title":"Setup","text":"

To use the fasta property you need to activate the nft-fasta plugin in your nf-test.config file:

config {\n  plugins {\n    load \"nft-fasta@1.0.0\"\n  }\n}\n

More about plugins can be found here.

"},{"location":"docs/assertions/fasta/#comparing-files","title":"Comparing files","text":"
assert path('path/to/fasta1.fasta').fasta == path('path/to/fasta2.fasta').fasta\n
"},{"location":"docs/assertions/fasta/#work-with-individual-samples","title":"Work with individual samples","text":"
def sequences = path('path/to/fasta1.fasta.gz').fasta\nassert \"seq1\" in sequences\nassert !(\"seq8\" in sequences)\nassert sequences.seq1 == \"AGTACGTAGTAGCTGCTGCTACGTGCGCTAGCTAGTACGTCACGACGTAGATGCTAGCTGACTCGATGC\"\n
"},{"location":"docs/assertions/files/","title":"Files","text":""},{"location":"docs/assertions/files/#md5-checksum","title":"md5 Checksum","text":"

nf-test extends path by a md5 property that can be used to compare the file content with an expected checksum:

assert path(process.out.out_ch.get(0)).md5 == \"64debea5017a035ddc67c0b51fa84b16\"\n
Note that for gzip-compressed files, the md5 property is calculated after gunzipping the file contents, whereas for other file types it is calculated directly on the file itself.

"},{"location":"docs/assertions/files/#json-files","title":"JSON Files","text":"

nf-test supports comparison of JSON files and keys within JSON files. To assert that two JSON files contain the same keys and values:

assert path(process.out.out_ch.get(0)).json == path('./some.json').json\n
Individual keys can also be asserted:

assert path(process.out.out_ch.get(0)).json.key == \"value\"\n
"},{"location":"docs/assertions/files/#yaml-files","title":"YAML Files","text":"

nf-test supports comparison of YAML files and keys within YAML files. To assert that two YAML files contain the same keys and values:

assert path(process.out.out_ch.get(0)).yaml == path('./some.yaml').yaml\n
Individual keys can also be asserted:

assert path(process.out.out_ch.get(0)).yaml.key == \"value\"\n
"},{"location":"docs/assertions/files/#gzip-files","title":"GZip Files","text":"

nf-test extends path by a linesGzip property that can be used to read gzip compressed files.

assert path(process.out.out_ch.get(0)).linesGzip.size() == 5\nassert path(process.out.out_ch.get(0)).linesGzip.contains(\"Line Content\")\n
"},{"location":"docs/assertions/files/#filter-lines","title":"Filter lines","text":"

The returned array can also be filtered by line index:

def lines = path(process.out.gzip.get(0)).linesGzip[0..5]\nassert lines.size() == 6\ndef firstLine = path(process.out.gzip.get(0)).linesGzip[0]\nassert firstLine.equals(\"MY_HEADER\")\n
"},{"location":"docs/assertions/files/#grep-lines","title":"Grep lines","text":"

nf-test also provides the possibility to grep only specific lines, with the advantage that only a subset of lines needs to be read (especially helpful for larger files).

def lines = path(process.out.gzip.get(0)).grepLinesGzip(0,5)\nassert lines.size() == 6\ndef firstLine = path(process.out.gzip.get(0)).grepLineGzip(0)\nassert firstLine.equals(\"MY_HEADER\")\n
"},{"location":"docs/assertions/files/#snapshot-support","title":"Snapshot Support","text":"

Filtering lines from a *.gz file can also be combined with the snapshot functionality.

assert snapshot(\n    path(process.out.gzip.get(0)).linesGzip[0]\n).match()\n
"},{"location":"docs/assertions/libraries/","title":"Using Third-Party Libraries","text":"

nf-test supports including third-party libraries (e.g. jar files) or functions from Groovy files to either extend its functionality or to avoid duplicate code and keep the logic in test cases simple.

"},{"location":"docs/assertions/libraries/#using-local-groovy-files","title":"Using Local Groovy Files","text":"

0.7.0

If nf-test detects a lib folder in the directory of a testcase, it automatically adds it to the classpath.

"},{"location":"docs/assertions/libraries/#examples","title":"Examples","text":"

We have a Groovy script MyWordUtils.groovy that contains the following class:

class MyWordUtils {\n\n    def static capitalize(String word){\n      return word.toUpperCase();\n    }\n\n}\n

We can put this file in a subfolder called lib:

testcase_1\n\u251c\u2500\u2500 capitalizer.nf\n\u251c\u2500\u2500 capitalizer.test\n\u2514\u2500\u2500 lib\n    \u2514\u2500\u2500 MyWordUtils.groovy\n

The file capitalizer.nf contains the CAPITALIZER process:

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess CAPITALIZER {\n    input:\n        val cheers\n    output:\n        stdout emit: output\n    script:\n       println \"$cheers\".toUpperCase()\n    \"\"\"\n    \"\"\"\n\n}\n

Next, we can use this class in the capitalizer.nf.test like every other class that is provided by nf-test or Groovy itself:

nextflow_process {\n\n    name \"Test Process CAPITALIZER\"\n    script \"capitalizer.nf\"\n    process \"CAPITALIZER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = \"world\"\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert process.stdout.contains(MyWordUtils.capitalize('world'))\n        }\n\n    }\n\n}\n

If we have a project and we want to reuse libraries in multiple test cases, then we can store the class in the shared lib folder. Both test cases are now able to use MyWordUtils:

tests\n\u251c\u2500\u2500 testcase_1\n    \u251c\u2500\u2500 hello_1.nf\n    \u251c\u2500\u2500 hello_1.nf.test\n\u251c\u2500\u2500 testcase_2\n    \u251c\u2500\u2500 hello_2.nf\n    \u251c\u2500\u2500 hello_2.nf.test\n\u2514\u2500\u2500 lib\n    \u2514\u2500\u2500 MyWordUtils.groovy\n

The default location is tests/lib. This folder location can be changed in the nf-test config file.
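For example, to point nf-test at a different shared library folder (the folder name is hypothetical):

config {\n    libDir \"tests/mylibs\"\n}\n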

It is also possible to use the --lib parameter to add an additional folder to the classpath:

nf-test test tests/testcase_1/hello_1.nf.test --lib tests/mylibs\n

If multiple folders are used, they need to be separated with a colon (as in Java or Groovy classpaths).

"},{"location":"docs/assertions/libraries/#using-local-jar-files","title":"Using Local Jar Files","text":"

To integrate local jar files, you can either specify the path to the jar within the nf-test --lib option

nf-test test test.nf.test --lib tests/lib/groovy-ngs-utils/groovy-ngs-utils.jar\n

or add it as follows to the nf-test.config file:

libDir \"tests/lib:tests/lib/groovy-ngs-utils/groovy-ngs-utils.jar\"\n

You could then import the class and use it in the then statement:

import gngs.VCF;\n\nnextflow_process {\n\n    name \"Test Process VARIANT_CALLER\"\n    script \"variant_caller.nf\"\n    process \"VARIANT_CALLER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n           ...\n        }\n\n        then {\n            assert process.success             \n            def vcf = VCF.parse(\"$baseDir/tests/test_data/NA12879.vcf.gz\")\n            assert vcf.samples.size() == 10\n            assert vcf.variants.size() == 20\n        }\n\n    }\n\n}\n
"},{"location":"docs/assertions/libraries/#using-maven-artifcats-with-grab","title":"Using Maven Artifcats with @Grab","text":"

nf-test supports the @Grab annotation to include third-party libraries that are available in a Maven repository. As the dependency is defined as a Maven artifact, no local copy of the jar file is needed; Maven makes it possible to pin an exact version and provides an easy update process.

"},{"location":"docs/assertions/libraries/#example","title":"Example","text":"

The following example uses the WordUtils class from commons-lang:

@Grab(group='commons-lang', module='commons-lang', version='2.4')\nimport org.apache.commons.lang.WordUtils\n\nnextflow_process {\n\n    name \"Test Process CAPITALIZER\"\n    script \"capitalizer.nf\"\n    process \"CAPITALIZER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = \"world\"\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert process.stdout.contains(WordUtils.capitalize('world'))\n        }\n\n    }\n\n}\n
"},{"location":"docs/assertions/regular-expressions/","title":"Regular Expressions","text":""},{"location":"docs/assertions/regular-expressions/#using-operator","title":"Using ==~ operator","text":"

The operator ==~ can be used to check if a string matches a regular expression:

assert \"/my/full/path/to/process/dir/example.vcf.pgen\" ==~ \".*/example.vcf.pgen\"\n
"},{"location":"docs/assertions/snapshots/","title":"Snapshots","text":"

0.7.0

Snapshots are a very useful tool whenever you want to make sure your output channels or output files do not change unexpectedly. This feature is highly inspired by Jest.

A typical snapshot test case takes a snapshot of the output channels or any other object, then compares it to a reference snapshot file stored alongside the test (*.nf.test.snap). The test will fail if the two snapshots do not match: either the change is unexpected, or the reference snapshot needs to be updated to the new output of a process, workflow, pipeline or function.

"},{"location":"docs/assertions/snapshots/#using-snapshots","title":"Using Snapshots","text":"

The snapshot keyword creates a snapshot of the object, and its match method can then be used to check if it contains the expected data from the snap file. The following example shows how to create a snapshot of a workflow channel:

assert snapshot(workflow.out.channel1).match()\n

You can also create a snapshot of all output channels of a process:

assert snapshot(process.out).match()\n

Or a specific check on a file:

assert snapshot(path(process.out.get(0))).match()\n

Even the result of a function can be used:

assert snapshot(function.result).match()\n

The first time this test runs, nf-test creates a snapshot file. This is a JSON file that contains a serialized version of the provided object.
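A sketch of what such a *.nf.test.snap file might look like (the test name and fields are illustrative; the exact layout depends on the nf-test version):

{\n    \"my test\": {\n        \"content\": [ ... ],\n        \"timestamp\": \"...\"\n    }\n}\n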

The snapshot file should be committed alongside code changes, and reviewed as part of your code review process. nf-test uses pretty-format to make snapshots human-readable during code review. On subsequent test runs, nf-test will compare the data with the previous snapshot. If they match, the test will pass. If they don't match, either the test runner found a bug in your code that should be fixed, or the implementation has changed and the snapshot needs to be updated.

"},{"location":"docs/assertions/snapshots/#updating-snapshots","title":"Updating Snapshots","text":"

When a snapshot test fails due to an intentional implementation change, you can use the --update-snapshot flag to re-generate snapshots for all failed tests.

nf-test test tests/main.nf.test --update-snapshot\n
"},{"location":"docs/assertions/snapshots/#cleaning-obsolete-snapshots","title":"Cleaning Obsolete Snapshots","text":"

0.8.0

Over time, snapshots can become outdated, leading to inconsistencies in your testing process. To help you manage obsolete snapshots, nf-test generates a list of these obsolete keys. This list provides transparency into which snapshots are no longer needed and can be safely removed.

Running your tests with the --clean-snapshot or --wipe-snapshot option removes the obsolete snapshots from the snapshot file. This option is useful when you want to maintain the structure of your snapshot file but remove unused entries. It ensures that your snapshot file only contains the snapshots required for your current tests, reducing file bloat and improving test performance.

nf-test test tests/main.nf.test --clean-snapshot\n

Obsolete snapshots can only be detected when running all tests in a test file simultaneously, and when all tests pass. If you run a single test or if tests are skipped, nf-test cannot detect obsolete snapshots.

"},{"location":"docs/assertions/snapshots/#constructing-complex-snapshots","title":"Constructing Complex Snapshots","text":"

It is also possible to include multiple objects into one snapshot:

assert snapshot(workflow.out.channel1, workflow.out.channel2).match()\n

Every object that is serializable can be included in snapshots. Therefore, you can even make a snapshot of the complete workflow or process object. This includes stdout, stderr, exit status, trace etc. and is the easiest way to create a test that checks all of these properties:

assert snapshot(workflow).match()\n

You can also include output files to a snapshot (e.g. useful in pipeline tests where no channels are available):

assert snapshot(\n    workflow,\n    path(\"${params.outdir}/file1.txt\"),\n    path(\"${params.outdir}/file2.txt\"),\n    path(\"${params.outdir}/file3.txt\")\n).match()\n

By default, the snapshot has the same name as the test. You can also store a snapshot under a user-defined name. This enables you to use multiple snapshots in one single test and to separate them in a logical way. In the following example, a workflow snapshot is created and stored under the name \"workflow\".

assert snapshot(workflow).match(\"workflow\")\n

The next example creates a snapshot of two files and saves it under \"files\".

assert snapshot(path(\"${params.outdir}/file1.txt\"), path(\"${params.outdir}/file2.txt\")).match(\"files\")\n

You can also use helper methods to add objects to snapshots. For example, you can use the list() method to add all files of a folder to a snapshot:

 assert snapshot(workflow, path(params.outdir).list()).match()\n
"},{"location":"docs/assertions/snapshots/#compressed-snapshots","title":"Compressed Snapshots","text":"

If you add complex objects with large content to snapshots, you can use the md5() function to store the hash sum instead of the content in the snapshot file:

 assert snapshot(hugeObject).md5().match()\n
"},{"location":"docs/assertions/snapshots/#file-paths","title":"File Paths","text":"

If nf-test detects a path in the snapshot, it automatically replaces it with a unique fingerprint of the file that ensures the file content is the same. By default, the fingerprint is the md5 sum.

"},{"location":"docs/assertions/snapshots/#snapshot-differences","title":"Snapshot Differences","text":"

0.8.0

By default, nf-test uses the diff tool with a set of default arguments for comparing snapshots.

These default arguments are applied when no custom settings are specified.

If diff is not installed on the system, nf-test will print expected and found snapshots without highlighting differences.

"},{"location":"docs/assertions/snapshots/#customizing-diff-tool-arguments","title":"Customizing Diff Tool Arguments","text":"

Users have the flexibility to customize the arguments passed to the diff tool using an environment variable called NFT_DIFF_ARGS. This environment variable allows you to modify the way the diff tool behaves when comparing snapshots.

To customize the arguments, follow these steps:

  1. Set the NFT_DIFF_ARGS environment variable with your desired arguments.

    export NFT_DIFF_ARGS=\"<your_custom_arguments>\"\n
  2. Run nf-test to perform snapshot comparison, and it will utilize the custom arguments specified in NFT_DIFF_ARGS.

"},{"location":"docs/assertions/snapshots/#changing-the-diff-tool","title":"Changing the Diff Tool","text":"

nf-test not only allows you to customize the arguments but also provides the flexibility to change the diff tool itself. This can be achieved by using the environment variable NFT_DIFF.

"},{"location":"docs/assertions/snapshots/#example-using-icdiff","title":"Example: Using icdiff","text":"

As an example, you can change the diff tool to icdiff, which supports features like colors. To switch to icdiff, follow these steps:

  1. Install icdiff

  2. Set the NFT_DIFF environment variable to icdiff to specify the new diff tool.

    export NFT_DIFF=\"icdiff\"\n
  3. If needed, customize the arguments for icdiff using NFT_DIFF_ARGS as explained in the previous section

    export NFT_DIFF_ARGS=\"-N --cols 200 -L expected -L observed -t\"\n
  4. Run nf-test, and it will use icdiff as the diff tool for comparing snapshots.

"},{"location":"docs/cli/clean/","title":"clean command","text":""},{"location":"docs/cli/clean/#usage","title":"Usage","text":"
nf-test clean\n

The clean command removes the .nf-test directory.

"},{"location":"docs/cli/coverage/","title":"coverage command","text":"

0.9.0

"},{"location":"docs/cli/coverage/#usage","title":"Usage","text":"
nf-test coverage\n

The coverage command prints information about the number of Nextflow files that are covered by a test.

"},{"location":"docs/cli/coverage/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/coverage/#-csv-filename","title":"--csv <filename>","text":"

Writes a coverage report in CSV format.

"},{"location":"docs/cli/coverage/#-html-filename","title":"--html <filename>","text":"

Writes a coverage report in HTML format.
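For example, to write both report formats (the file names coverage.csv and coverage.html are arbitrary):

nf-test coverage --csv coverage.csv\nnf-test coverage --html coverage.html\n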

"},{"location":"docs/cli/generate/","title":"generate command","text":""},{"location":"docs/cli/generate/#usage","title":"Usage","text":"
nf-test generate <TEST_CASE_TYPE> <NEXTFLOW_FILES>\n
"},{"location":"docs/cli/generate/#supported-types","title":"Supported Types","text":""},{"location":"docs/cli/generate/#process","title":"process","text":""},{"location":"docs/cli/generate/#workflow","title":"workflow","text":""},{"location":"docs/cli/generate/#pipeline","title":"pipeline","text":""},{"location":"docs/cli/generate/#function","title":"function","text":""},{"location":"docs/cli/generate/#examples","title":"Examples","text":"

Create a test case for a process:

nf-test generate process modules/local/salmon_index.nf\n

Create test cases for all processes in the folder modules:

nf-test generate process modules/**/*.nf\n

Create a test case for a sub workflow:

nf-test generate workflow workflows/some_workflow.nf\n

Create a test case for the whole pipeline:

nf-test generate pipeline main.nf\n

Create a test case for each function in file functions.nf:

nf-test generate function functions.nf\n
"},{"location":"docs/cli/init/","title":"init command","text":""},{"location":"docs/cli/init/#usage","title":"Usage","text":"
nf-test init\n

The init command sets up nf-test in the current directory.

The init command creates the following files: nf-test.config and tests/nextflow.config. It also creates the folder tests, which is the home directory of your test code.

In the configuration section you can learn more about these files and how to customize the directory layout.

"},{"location":"docs/cli/list/","title":"list command","text":""},{"location":"docs/cli/list/#usage","title":"Usage","text":"

The list command provides a convenient way to list all available test cases.

nf-test list [<NEXTFLOW_FILES>|<SCRIPT_FOLDERS>]\n
"},{"location":"docs/cli/list/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/list/#-tags","title":"--tags","text":"

Print a list of all used tags.

"},{"location":"docs/cli/list/#-format-json","title":"--format json","text":"

Print the list of tests or tags as a JSON object.

"},{"location":"docs/cli/list/#-format-raw","title":"--format raw","text":"

Print the list of tests or tags as a simple list without formatting.

"},{"location":"docs/cli/list/#-silent","title":"--silent","text":"

Hide program version and header information.

"},{"location":"docs/cli/list/#-debug","title":"--debug","text":"

Show debugging information.

"},{"location":"docs/cli/list/#examples","title":"Examples","text":"
nf-test list --format json --silent\n[\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@69b98c67\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@fdb6c1cc\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@d1c219eb\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@3c54e3cb\",...]\n
nf-test list --format raw --silent\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@69b98c67\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@fdb6c1cc\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@d1c219eb\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@3c54e3cb\n...\n
nf-test list --tags --format json --silent\n[\"fastqc\",\"snakemake\"]\n
nf-test list --tags --format raw --silent\nfastqc\nsnakemake\n
"},{"location":"docs/cli/test/","title":"test command","text":""},{"location":"docs/cli/test/#usage","title":"Usage","text":"
nf-test test [<NEXTFLOW_FILES>|<SCRIPT_FOLDERS>]\n
"},{"location":"docs/cli/test/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/test/#-profile-nextflow_profile","title":"--profile <NEXTFLOW_PROFILE>","text":"

To run your test using a specific Nextflow profile, you can use the --profile argument. Learn more.
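For example, assuming your pipeline defines a profile named docker:

# assumes a profile named \"docker\" exists in your Nextflow configuration\nnf-test test --profile docker\n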

"},{"location":"docs/cli/test/#-dry-run","title":"--dry-run","text":"

This flag allows users to simulate the execution of tests.

"},{"location":"docs/cli/test/#-verbose","title":"--verbose","text":"

Prints out the Nextflow output during test runs.

"},{"location":"docs/cli/test/#-without-trace","title":"--without-trace","text":"

The Linux tool procps is required to run Nextflow tracing. In case your container does not provide this tool, you can also run nf-test without tracing. Please note that workflow.trace is not available when running with this flag.

"},{"location":"docs/cli/test/#-tag-tag","title":"--tag <tag>","text":"

Execute only tests with the provided tag. Multiple tags can be used and have to be separated by commas (e.g. tag1,tag2).
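For instance, using the illustrative tags fastqc and snakemake (tag names depend on your test suite):

nf-test test --tag fastqc,snakemake\n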

"},{"location":"docs/cli/test/#-debug","title":"--debug","text":"

The debug parameter prints out debugging messages and all available output channels which can be accessed in the then clause.

"},{"location":"docs/cli/test/#output-reports","title":"Output Reports","text":""},{"location":"docs/cli/test/#-tap-filename","title":"--tap <filename>","text":"

Writes test results in TAP format to file.

"},{"location":"docs/cli/test/#-junitxml-filename","title":"--junitxml <filename>","text":"

Writes test results in JUnit XML format to file, which conforms to the standard schema.
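For example (report.xml is an arbitrary file name):

nf-test test --junitxml report.xml\n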

"},{"location":"docs/cli/test/#-csv-filename","title":"--csv <filename>","text":"

Writes test results to a CSV file.

"},{"location":"docs/cli/test/#-ci","title":"--ci","text":"

By default, nf-test automatically stores a new snapshot. When CI mode is activated, nf-test will fail the test instead of storing the snapshot automatically.

"},{"location":"docs/cli/test/#-filter-types","title":"--filter <types>","text":"

Filter test cases by specified types (e.g., module, pipeline, workflow or function). Multiple types can be separated by commas.
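For example, to run only workflow and function tests:

nf-test test --filter workflow,function\n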

"},{"location":"docs/cli/test/#optimizing-test-execution","title":"Optimizing Test Execution","text":""},{"location":"docs/cli/test/#-related-tests-files","title":"--related-tests <files>","text":"

Finds and executes all related tests for the provided .nf or nf.test files. Multiple files can be provided, separated by spaces.

"},{"location":"docs/cli/test/#-follow-dependencies","title":"--follow-dependencies","text":"

When this flag is set, nf-test will traverse all dependencies when the related-tests flag is set. This option is particularly useful when you need to ensure that all dependent tests are executed, bypassing the firewall calculation process.

"},{"location":"docs/cli/test/#-only-changed","title":"--only-changed","text":"

When enabled, this parameter instructs nf-test to execute tests only for files that have been modified within the current git working tree.

"},{"location":"docs/cli/test/#-changed-since-commit_hashbranch_name","title":"--changed-since <commit_hash|branch_name>","text":"

This parameter triggers the execution of tests related to changes made since the specified commit, e.g. --changed-since HEAD^ for all changes between HEAD and its parent commit.

"},{"location":"docs/cli/test/#-changed-until-commit_hashbranch_name","title":"--changed-until <commit_hash|branch_name>","text":"

This parameter initiates the execution of tests related to changes made until the specified commit hash.
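As an illustration, to run tests related to changes made up to the parent of the current HEAD:

nf-test test --changed-until HEAD^\n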

"},{"location":"docs/cli/test/#-graph-filename","title":"--graph <filename>","text":"

Enables the export of the dependency graph as a dot file. The dot file format is commonly used for representing graphs in graphviz and other related software.
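For example (dependencies.dot is an arbitrary file name):

nf-test test --graph dependencies.dot\n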

"},{"location":"docs/cli/test/#sharding","title":"Sharding","text":"

This parameter allows users to divide the execution workload into manageable chunks, which can be useful for parallel or distributed processing.

"},{"location":"docs/cli/test/#-shard-shard","title":"--shard <shard>","text":"

Splits the execution into arbitrary chunks defined by the format i/n, where i denotes the index of the current chunk and n represents the total number of chunks. For instance, 2/5 executes the second chunk out of five.

"},{"location":"docs/cli/test/#-shard-strategy-strategy","title":"--shard-strategy <strategy>","text":"

Specifies the strategy used to build shards when the --shard parameter is utilized. Accepted values are round-robin or none. This parameter determines the method employed to distribute workload chunks among available resources. With the round-robin strategy, shards are distributed evenly among resources in a cyclic manner. The none strategy implies that shards won't be distributed automatically, and it's up to the user to manage the assignment of shards. The default value is round-robin.
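For instance, the following sketch assumes --shard-strategy can be combined with --shard in a single invocation:

# execute the first of four chunks without automatic distribution\nnf-test test --shard 1/4 --shard-strategy none\n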

"},{"location":"docs/cli/test/#examples","title":"Examples","text":"
nf-test test\n
nf-test test tests/modules/local/salmon_index.nf.test tests/modules/bwa_index.nf.test\n\nnf-test test tests/modules tests/modules/bwa_index.nf.test\n
nf-test test tests/main.nf.test@d41119e4\n
nf-test test --tap report.tap\n
nf-test test --related-tests modules/module_a.nf modules/module_b.nf\n
nf-test test --only-changed\n
nf-test test --changed-since HEAD^\n
nf-test test --shard 2/4 \n
"},{"location":"docs/plugins/developing-plugins/","title":"Plugin Development","text":"

0.7.0

The following plugin can be used as a boilerplate: https://github.com/askimed/nft-fasta

"},{"location":"docs/plugins/developing-plugins/#developing-plugins","title":"Developing Plugins","text":"

A plugin can:

  1. Add a new method to an existing class (e.g. the property fasta to class Path). This uses Groovy's ExtensionModule concept. Important: the method has to be static. One class can provide multiple methods.
// com.askimed.nf.test.fasta.PathExtension\npublic class PathExtension {\n  //can be used as: path(filename).fasta\n    public static Object getFasta(Path self) {\n    return FastaUtil.readAsMap(self);\n  }\n\n}\n
  2. Provide new methods
// com.askimed.nf.test.fasta.Methods\npublic class Methods {\n\n  //can be used as: helloFasta()\n  public static void helloFasta() {\n    System.out.println(\"Hello FASTA\");\n  }\n\n}\n
"},{"location":"docs/plugins/developing-plugins/#manifest-file","title":"Manifest file","text":"

You need to create a file META-INF/nf-test-plugin (in your resources). This file contains metadata about the plugin, and both classes can be registered using the extensionClasses and extensionMethods properties.

moduleName=nft-my-plugin\nmoduleVersion=1.0.0\nmoduleAuthors=Lukas Forer\nextensionClasses=com.askimed.nf.test.fasta.PathExtension\nextensionMethods=com.askimed.nf.test.fasta.Methods\n
"},{"location":"docs/plugins/developing-plugins/#building-a-jar-file","title":"Building a jar file","text":"

The plugin itself is a jar file that contains all classes and the META-INF/nf-test-plugin file. If you have dependencies, you have to create an uber-jar that includes all libraries, because nf-test doesn't support the classpath set in META-INF\MANIFEST.

"},{"location":"docs/plugins/developing-plugins/#publishing-plugins","title":"Publishing Plugins","text":"

Available plugins are managed in this default repository: https://github.com/askimed/nf-test-plugins/blob/main/plugins.json

Add your plugin or a new release to the plugins.json file and create a pull request to publish your plugin in the default repository, or host your own repository:

[{\n  \"id\": \"nft-fasta\",\n  \"releases\": [{\n    \"version\": \"1.0.0\",\n    \"url\": \"https://github.com/askimed/nft-fasta/releases/download/v1.0.0/nft-fasta-1.0.0.jar\",\n  },{\n    \"version\": \"2.0.0\",\n    \"url\": \"https://github.com/askimed/nft-fasta/releases/download/v2.0.0/nft-fasta-2.0.0.jar\",\n  }]\n},{\n  \"id\": \"nft-my-plugin\",\n  \"releases\": [{\n    \"version\": \"1.0.0\",\n    \"url\": \"https://github.com/lukfor/nft-my-plugin2/releases/download/v1.0.0/nft-my-plugin-1.0.0.jar\",\n  }]\n}]\n
"},{"location":"docs/plugins/using-plugins/","title":"Plugins","text":"

0.7.0

Most assertions are use-case specific. Therefore, separating this functionality and helper classes from the nf-test codebase has several advantages:

  1. nf-test releases are independent from plugin releases
  2. it is easier for third parties to develop and maintain plugins
  3. it is possible to use private repositories to integrate private/protected code in plugins without sharing them

For this purpose, we integrated the following plugin system that provides (1) the possibility to extend existing classes with custom methods (e.g. path(filename).fasta) and (2) the possibility to extend nf-test with new methods.

"},{"location":"docs/plugins/using-plugins/#using-plugins","title":"Using Plugins","text":"

Available plugins are listed here.

A plugin can be activated via the nf-test.config by adding the plugins section and using the load method to specify the plugin and its version:

config {\n\n  plugins {\n\n    load \"nft-fasta@1.0.0\"\n\n  }\n\n}\n

It is also possible to add one or more additional repositories (for example: a repository with development/snapshot versions, an in-house repository, ...).

config {\n\n  plugins {\n\n    repository \"https://github.com/askimed/nf-test-plugins/blob/main/plugins-snapshots.json\"\n    repository \"https://github.com/seppinho/nf-test-plugin2/blob/main/plugins.json\"\n\n    load \"nft-fasta@1.1.0-snapshot\"\n    load \"nft-plugin2@1.1.0\"\n\n    // you can also load jar files directly without any repository\n    // loadFromFile \"path/to/my/nft-plugin.jar\"\n  }\n\n}\n

All plugins are downloaded and cached in .nf-test\plugins. This installation mechanism is not yet safe for parallel execution when multiple nf-test instances resolve the same plugin. However, you can use nf-test update-plugins to download all plugins before you run your tests in parallel.
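For example, as a separate step before launching parallel test runs:

nf-test update-plugins\n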

To clear the cache and to force redownloading plugins and repositories you can execute the nf-test clean command.

One or multiple plugins can also be activated via the --plugins parameter:

nf-test test my-test.nf.test --plugins nft-fasta@1.0.0,plugin2@1.0.0\n

or

nf-test test my-test.nf.test --plugins path/to/my/nft-plugin.jar\n
"},{"location":"docs/testcases/","title":"Documentation","text":""},{"location":"docs/testcases/global_variables/","title":"Global Variables","text":"

The following variables are available and can be used in setup, when, then and cleanup closures.

| Name | Description | Example |
|------|-------------|---------|
| baseDir or projectDir | The directory where the nf-test.config script is located. | mypipeline |
| moduleDir | The directory where the module script is located. | mypipeline/modules/mymodule |
| moduleTestDir | The directory where the test script is located. | mypipeline/tests/modules/mymodule |
| launchDir | The directory where the test is run. | mypipeline/.nf-test/tests/<test_hash> |
| metaDir | The directory where all meta files are located (e.g. mock.nf). | mypipeline/.nf-test/tests/<test_hash>/meta |
| workDir | The directory where tasks' temporary files are created. | mypipeline/.nf-test/tests/<test_hash>/work |
| outputDir | An output directory in the $launchDir that can be used to store output files. The variable contains the absolute path. If you need a relative output directory, see the launchDir example. | mypipeline/.nf-test/tests/<test_hash>/output |
| params | Dictionary-like object holding all parameters. | |
"},{"location":"docs/testcases/global_variables/#examples","title":"Examples","text":""},{"location":"docs/testcases/global_variables/#outputdir","title":"outputDir","text":"

This variable points to the directory within the temporary test directory (.nf-test/tests/<test-dir>/output/). The variable can be set under params:

params {\n    outdir = \"$outputDir\"\n}\n
"},{"location":"docs/testcases/global_variables/#basedir","title":"baseDir","text":"

This variable points to the base directory of the main nf-test config. The variable can be used e.g. in the process definition to build absolute paths for input files:

process {\n    \"\"\"\n    file1 = file(\"$baseDir/tests/input/file123.gz\")\n    \"\"\"\n}\n
"},{"location":"docs/testcases/global_variables/#launchdir","title":"launchDir","text":"

This variable points to the directory where the test is executed. It can be used to access results that are created in a relative output directory:

when {\n    params {\n        outdir = \"results\"\n    }\n}\n
then {\n    assert path(\"$launchDir/results\").exists()\n}\n
"},{"location":"docs/testcases/nextflow_function/","title":"Function Testing","text":"

nf-test allows testing of functions that are defined in a Nextflow file or in the lib directory. Please check out the CLI to generate a function test.

"},{"location":"docs/testcases/nextflow_function/#syntax","title":"Syntax","text":"
nextflow_function {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    function \"<FUNCTION_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.

"},{"location":"docs/testcases/nextflow_function/#multiple-functions","title":"Multiple Functions","text":"

If a Nextflow script contains multiple functions and you want to test them all in the same test suite, you can override the function property in each test. For example:

"},{"location":"docs/testcases/nextflow_function/#functionsnf","title":"functions.nf","text":"
def function1() {\n  ...\n}\n\ndef function2() {\n  ...\n}\n
"},{"location":"docs/testcases/nextflow_function/#functionsnftest","title":"functions.nf.test","text":"
nextflow_function {\n\n    name \"Test functions\"\n    script \"functions.nf\"\n\n    test(\"Test function1\") {\n      function \"function1\"\n      ...\n    }\n\n    test(\"Test function2\") {\n      function \"function2\"\n      ...\n    }\n}\n
"},{"location":"docs/testcases/nextflow_function/#functions-in-lib-folder","title":"Functions in lib folder","text":"

If you want to test a function that is inside a Groovy file in your lib folder, you can omit the script property, because Nextflow automatically adds these files to the classpath. For example:

"},{"location":"docs/testcases/nextflow_function/#libutilsgroovy","title":"lib\\Utils.groovy","text":"
class Utils {\n\n    public static void sayHello(name) {\n        if (name == null) {\n            error('Cannot greet a null person')\n        }\n\n        def greeting = \"Hello ${name}\"\n\n        println(greeting)\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_function/#testslibutilsgroovytest","title":"tests\\lib\\Utils.groovy.test","text":"
nextflow_function {\n\n    name \"Test Utils.groovy\"\n\n    test(\"Test function1\") {\n      function \"Utils.sayHello\"\n      ...\n    }\n}\n

Note: the generate function command works only with Nextflow functions.

"},{"location":"docs/testcases/nextflow_function/#assertions","title":"Assertions","text":"

The function object can be used in asserts to check its status, result value or error messages.

// function status\nassert function.success\nassert function.failed\n\n// return value\nassert function.result == 27\n\n// function.stdout returns a list containing all lines from stdout\nassert function.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_function/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_function/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it functions.nf.

def say_hello(name) {\n    if (name == null) {\n        error('Cannot greet a null person')\n    }\n\n    def greeting = \"Hello ${name}\"\n\n    println(greeting)\n    return greeting\n}\n
"},{"location":"docs/testcases/nextflow_function/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it functions.nf.test.

nextflow_function {\n\n  name \"Test Function Say Hello\"\n\n  script \"functions.nf\"\n  function \"say_hello\"\n\n  test(\"Passing case\") {\n\n    when {\n      function {\n        \"\"\"\n        input[0] = \"aaron\"\n        \"\"\"\n      }\n    }\n\n    then {\n      assert function.success\n      assert function.result == \"Hello aaron\"\n      assert function.stdout.contains(\"Hello aaron\")\n      assert function.stderr.isEmpty()\n    }\n\n  }\n\n  test(\"Failure Case\") {\n\n    when {\n      function {\n        \"\"\"\n        input[0] = null\n        \"\"\"\n      }\n    }\n\n    then {\n      assert function.failed\n      //It seems to me that error(..) writes message to stdout\n      assert function.stdout.contains(\"Cannot greet a null person\")\n    }\n  }\n}\n
"},{"location":"docs/testcases/nextflow_function/#execute-test","title":"Execute test","text":"
nf-test test functions.nf.test\n
"},{"location":"docs/testcases/nextflow_pipeline/","title":"Pipeline Testing","text":"

nf-test also allows you to test the complete pipeline end-to-end. Please check out the CLI to generate a pipeline test.

"},{"location":"docs/testcases/nextflow_pipeline/#syntax","title":"Syntax","text":"
nextflow_pipeline {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#assertions","title":"Assertions","text":"

The workflow object can be used in asserts to check its status, error messages or traces.

// workflow status\nassert workflow.success\nassert workflow.failed\nassert workflow.exitStatus == 0\n\n// workflow error message\nassert workflow.errorReport.contains(\"....\")\n\n// trace\n//returns a list containing succeeded tasks\nassert workflow.trace.succeeded().size() == 3\n\n//returns a list containing failed tasks\nassert workflow.trace.failed().size() == 0\n\n//returns a list containing all tasks\nassert workflow.trace.tasks().size() == 3\n
"},{"location":"docs/testcases/nextflow_pipeline/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_pipeline/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it pipeline.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess SAY_HELLO {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n\nworkflow {\n    input = params.input_text.trim().split(',')\n    Channel.from(input) | SAY_HELLO\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it pipeline.nf.test.

nextflow_pipeline {\n\n    name \"Test Pipeline with 1 process\"\n    script \"pipeline.nf\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            params {\n              input_text = \"hello,nf-test\"\n            }\n        }\n\n        then {\n            assert workflow.success\n            assert workflow.trace.tasks().size() == 2\n        }\n\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test pipeline.nf.test\n
"},{"location":"docs/testcases/nextflow_process/","title":"Process Testing","text":"

nf-test allows you to test each process defined in a module file. Please check out the CLI to generate a process test.

"},{"location":"docs/testcases/nextflow_process/#syntax","title":"Syntax","text":"
nextflow_process {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    process \"<PROCESS_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.

"},{"location":"docs/testcases/nextflow_process/#assertions","title":"Assertions","text":"

The process object can be used in asserts to check its status or error messages.

// process status\nassert process.success\nassert process.failed\nassert process.exitStatus == 0\n\n// Analyze Nextflow trace file\nassert process.trace.tasks().size() == 1\n\n// process error message\nassert process.errorReport.contains(\"....\")\n\n// process.stdout returns a list containing all lines from stdout\nassert process.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_process/#output-channels","title":"Output Channels","text":"

The process.out object provides access to the content of all named output Channels (see Nextflow emit):

// channel exists\nassert process.out.my_channel != null\n\n// channel contains 3 elements\nassert process.out.my_channel.size() == 3\n\n// first element is \"hello\"\nassert process.out.my_channel.get(0) == \"hello\"\n

Channels that lack explicit names can be addressed using square brackets and the corresponding index. This indexing method provides a straightforward way to interact with channels without the need for predefined names. To access the first output channel, you can use the index [0] as demonstrated below:

// channel exists\nassert process.out[0] != null\n\n// channel contains 3 elements\nassert process.out[0].size() == 3\n\n// first element is \"hello\"\nassert process.out[0].get(0) == \"hello\"\n
"},{"location":"docs/testcases/nextflow_process/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_process/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it say_hello.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess SAY_HELLO {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n
"},{"location":"docs/testcases/nextflow_process/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it say_hello.nf.test.

nextflow_process {\n\n    name \"Test Process SAY_HELLO\"\n    script \"say_hello.nf\"\n    process \"SAY_HELLO\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = Channel.from('hello','nf-test')\n                \"\"\"\n            }\n        }\n\n        then {\n\n            assert process.success\n            assert process.trace.tasks().size() == 2\n\n            with(process.out.verbiage_ch2) {\n                assert size() == 2\n                assert path(get(0)).readLines().size() == 1\n                assert path(get(1)).readLines().size() == 1\n                assert path(get(1)).md5 == \"4a17df7a54b41a84df492da3f1bab1e3\"\n            }\n\n        }\n\n    }\n}\n
"},{"location":"docs/testcases/nextflow_process/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test say_hello.nf.test\n
"},{"location":"docs/testcases/nextflow_workflow/","title":"Workflow Testing","text":"

nf-test also allows you to test a specific workflow. Please check out the CLI to generate a workflow test.

"},{"location":"docs/testcases/nextflow_workflow/#syntax","title":"Syntax","text":"
nextflow_workflow {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    workflow \"<WORKFLOW_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.

"},{"location":"docs/testcases/nextflow_workflow/#assertions","title":"Assertions","text":"

The workflow object can be used in asserts to check its status, error messages or traces.

// workflow status\nassert workflow.success\nassert workflow.failed\nassert workflow.exitStatus == 0\n\n// workflow error message\nassert workflow.errorReport.contains(\"....\")\n\n// trace\n//returns a list containing succeeded tasks\nassert workflow.trace.succeeded().size() == 3\n\n//returns a list containing failed tasks\nassert workflow.trace.failed().size() == 0\n\n//returns a list containing all tasks\nassert workflow.trace.tasks().size() == 3\n\n// workflow.stdout returns a list containing all lines from stdout\nassert workflow.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_workflow/#output-channels","title":"Output Channels","text":"

The workflow.out object provides access to the content of all named output Channels (see Nextflow emit):

// channel exists\nassert workflow.out.my_channel != null\n\n// channel contains 3 elements\nassert workflow.out.my_channel.size() == 3\n\n// first element is \"hello\"\nassert workflow.out.my_channel.get(0) == \"hello\"\n
"},{"location":"docs/testcases/nextflow_workflow/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_workflow/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it trial.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess sayHello {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n\nworkflow trial {\n    take: things\n    main:\n        sayHello(things)\n        sayHello.out.verbiage_ch.view()\n    emit:\n        trial_out_ch = sayHello.out.verbiage_ch2\n}\n\nworkflow {\n    Channel.from('hello','nf-test') | trial\n}\n
"},{"location":"docs/testcases/nextflow_workflow/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it trial.nf.test.

nextflow_workflow {\n\n    name \"Test Workflow Trial\"\n    script \"trial.nf\"\n    workflow \"trial\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            workflow {\n                \"\"\"\n                input[0] = Channel.from('hello','nf-test')\n                \"\"\"\n            }\n        }\n\n        then {\n\n            assert workflow.success\n\n            with(workflow.out.trial_out_ch) {\n                assert size() == 2\n                assert path(get(0)).readLines().size() == 1\n                assert path(get(1)).readLines().size() == 1\n                assert path(get(1)).md5 == \"4a17df7a54b41a84df492da3f1bab1e3\"\n            }\n\n        }\n\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_workflow/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test trial.nf.test\n
"},{"location":"docs/testcases/params/","title":"Params Dictionary","text":"

The params block is optional and is a simple map that can be used to overwrite Nextflow's input params. The params block is located in the when block of a test case. You can set params manually:

when {\n    params {\n        outdir = \"output\"\n    }\n}\n

It is also possible to set nested params using the same syntax as in your Nextflow script:

when {\n    params {\n        output {\n          dir = \"output\"\n        }\n    }\n}\n

The params map can also be used in the then block:

then {\n    assert params.output == \"output\"    \n}\n
"},{"location":"docs/testcases/params/#load-params-from-files","title":"Load params from files","text":"

In addition, you can load the params from a JSON file:

when {\n    params {\n        load(\"$baseDir/tests/params.json\")\n    }\n}\n

or from a YAML file:

when {\n    params {\n        load(\"$baseDir/tests/params.yaml\")\n    }\n}\n

nf-test allows you to combine both techniques, and therefore it is possible to overwrite one or more params from the JSON file:

when {\n    params {\n        load(\"$baseDir/tests/params.json\")\n        outputDir = \"new/output/path\"\n    }\n}\n
"},{"location":"docs/testcases/setup/","title":"Setup Method","text":"

The setup method allows you to specify processes or workflows that need to be executed before the primary when block. It serves as a mechanism to prepare the required input data or set up essential steps prior to the primary processing block.

"},{"location":"docs/testcases/setup/#syntax","title":"Syntax","text":"

The setup method is typically used within the context of a test case. The basic syntax for the setup method is as follows:

test(\"my test\"){\n    setup {\n        // Define and execute dependent processes or workflows here\n    }\n}\n

Within the setup block, you can use the run method to define and execute dependent processes or workflows.

The run method syntax for a process is as follows:

run(\"ProcessName\") {\n    script \"path/to/process/script.nf\"\n    process {\n        // Define the process inputs here\n    }\n}\n

The run method syntax for a workflow is as follows:

run(\"WorkflowName\") {\n    script \"path/to/workflow/script.nf\"\n    workflow {\n        // Define the workflow inputs here\n    }\n}\n

If you need to run the same process multiple times, you can set the alias of the process:

run(\"GENERATE_DATA\", alias: \"MY_PROCESS\") {\n    script \"./generate_data.nf\"\n    process {\n       ...\n    }\n}\n

Warning

Please keep in mind that changes in processes or workflows which are executed in the setup method can result in a failed test run.

"},{"location":"docs/testcases/setup/#example-usage","title":"Example Usage","text":""},{"location":"docs/testcases/setup/#1-local-setup-method","title":"1. Local Setup Method","text":"

In this example, we create a setup method within a Nextflow process definition to execute a dependent process named \"ABRICATE_RUN.\" This process generates input data that is required for the primary process \"ABRICATE_SUMMARY.\" The setup block specifies the execution of \"ABRICATE_RUN,\" and the when block defines the processing logic for \"ABRICATE_SUMMARY.\"

nextflow_process {\n\n    name \"Test process data\"\n\n    script \"../main.nf\"\n    process \"ABRICATE_SUMMARY\"\n    config \"./nextflow.config\"\n\n    test(\"Should use process ABRICATE_RUN to generate input data\") {\n\n        setup {\n\n            run(\"ABRICATE_RUN\") {\n                script \"../../run/main.nf\"\n                process {\n                    \"\"\"\n                    input[0] =  Channel.fromList([\n                        tuple([ id:'test1', single_end:false ], // meta map\n                            file(params.test_data['bacteroides_fragilis']['genome']['genome_fna_gz'], checkIfExists: true)),\n                        tuple([ id:'test2', single_end:false ],\n                            file(params.test_data['haemophilus_influenzae']['genome']['genome_fna_gz'], checkIfExists: true))\n                    ])\n                    \"\"\"\n                }\n            }\n\n        }\n\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n}\n
"},{"location":"docs/testcases/setup/#2-global-setup-method","title":"2. Global Setup Method","text":"

In this example, a global setup method is defined for all tests within a Nextflow process definition. The setup method is applied to multiple test cases, ensuring consistent setup for each test. This approach is useful when multiple tests share the same setup requirements.

nextflow_process {\n\n    name \"Test process data\"\n\n    script \"../main.nf\"\n    process \"ABRICATE_SUMMARY\"\n    config \"./nextflow.config\"\n\n    setup {\n        run(\"ABRICATE_RUN\") {\n            script \"../../run/main.nf\"\n            process {\n                \"\"\"\n                input[0] =  Channel.fromList([\n                    tuple([ id:'test1', single_end:false ], // meta map\n                        file(params.test_data['bacteroides_fragilis']['genome']['genome_fna_gz'], checkIfExists: true)),\n                    tuple([ id:'test2', single_end:false ],\n                        file(params.test_data['haemophilus_influenzae']['genome']['genome_fna_gz'], checkIfExists: true))\n                ])\n                \"\"\"\n            }\n        }\n    }\n\n    test(\"first test\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n    test(\"second test\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n}\n
"},{"location":"docs/testcases/setup/#3-aliasing-of-dependencies","title":"3. Aliasing of Dependencies","text":"

In this example, the process UNTAR is used multiple times in the setup method:

nextflow_process {\n\n    ...\n\n    setup {\n\n        run(\"UNTAR\", alias: \"UNTAR1\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n\n        run(\"UNTAR\", alias: \"UNTAR2\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n\n        run(\"UNTAR\", alias: \"UNTAR3\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n    }\n\n    test(\"Test with three different inputs\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = UNTAR1.out.untar.map{ it[1] }\n                input[1] = UNTAR2.out.untar.map{ it[1] }\n                input[2] = UNTAR3.out.untar.map{ it[1] }\n                \"\"\"\n            }\n        }\n\n        then {\n            ...\n        }\n\n }\n\n}\n
"},{"location":"tutorials/github-actions/","title":"Setup nf-test on GitHub Actions","text":"

Warning

This feature is experimental and requires nf-test 0.9.0-rc1 or higher.

In this tutorial, we will guide you through setting up and running nf-test on GitHub Actions. We will start with a simple example where all tests run in a single job, then extend it to demonstrate how you can use sharding to distribute tests across multiple jobs for improved efficiency. Finally, we will show you how to run only the tests affected by the changed files using the --changed-since option.

By the end of this tutorial, you will have a clear understanding of how to:

  1. Set up a basic CI workflow for running nf-test on GitHub Actions.
  2. Extend the workflow to use sharding, allowing tests to run in parallel, which can significantly reduce the overall execution time.
  3. Configure the workflow to run only the test cases affected by the changed files, optimizing the CI process further.

Whether you are maintaining a complex bioinformatics pipeline or a simple data analysis workflow, integrating nf-test with GitHub Actions will help ensure the robustness and reliability of your code. Let's get started!

"},{"location":"tutorials/github-actions/#step-1-running-nf-test","title":"Step 1: Running nf-test","text":"

Create a file named .github/workflows/ci-tests.yml in your repository with the following content:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://code.askimed.com/install/nf-test | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests\n        run: nf-test test --ci\n
"},{"location":"tutorials/github-actions/#explanation","title":"Explanation:","text":"
  1. Checkout: Uses the actions/checkout@v4 action to check out the repository.
  2. Set up JDK 11: Uses the actions/setup-java@v2 action to set up Java Development Kit version 11.
  3. Setup Nextflow: Uses the nf-core/setup-nextflow@v1 action to install the latest-edge version of Nextflow.
  4. Install nf-test: Downloads and installs nf-test.
  5. Run Tests: Runs nf-test with the --ci flag. This activates CI mode. Instead of automatically storing a new snapshot as per usual, nf-test will fail the test if no reference snapshot is available. This enables tests to fail when a snapshot file has not been committed.
"},{"location":"tutorials/github-actions/#step-2-extending-to-use-sharding","title":"Step 2: Extending to Use Sharding","text":"

To distribute the tests across multiple jobs, you can set up sharding. Update your workflow file as follows:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        shard: [1, 2, 3, 4]\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://code.askimed.com/install/nf-test | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }})\n        run: nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }}\n
"},{"location":"tutorials/github-actions/#explanation-of-sharding","title":"Explanation of Sharding:","text":"
  1. Matrix Strategy: The strategy section defines a matrix with a shard parameter that has four values: [1, 2, 3, 4]. This will create four parallel jobs, one for each shard.
  2. Run Tests with Sharding: The run command for running tests is updated to nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }}. This command will run the tests for the specific shard. ${{ matrix.shard }} represents the current shard number, and ${{ strategy.job-total }} represents the total number of shards.
"},{"location":"tutorials/github-actions/#step-3-running-only-tests-affected-by-changed-files","title":"Step 3: Running Only Tests Affected by Changed Files","text":"

To optimize the workflow further, you can run only the tests that are affected by the changed files since the last commit. Update your workflow file as follows:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        shard: [1, 2, 3, 4]\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n        with:\n          fetch-depth: 0\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://code.askimed.com/install/nf-test | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }})\n        run: nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }} --changed-since HEAD^\n
"},{"location":"tutorials/github-actions/#explanation-of-changes","title":"Explanation of Changes:","text":"
  1. Checkout with Full History: The actions/checkout@v4 action is updated with fetch-depth: 0 to fetch the full history of the repository. This is necessary for accurately determining the changes since the last commit.
  2. Run Tests with Changed Files: The run command is further updated to include the --changed-since HEAD^ option. This option ensures that only the tests affected by the changes since the previous commit are run.
"},{"location":"tutorials/github-actions/#step-4-adapting-nf-testconfig-to-trigger-full-test-runs","title":"Step 4: Adapting nf-test.config to Trigger Full Test Runs","text":"

In some cases, changes to specific critical files should trigger a full test run, regardless of other changes. To configure this, you need to adapt your nf-test.config file.

Add the following lines to your nf-test.config:

config {\n    ...\n    triggers 'nextflow.config', 'nf-test.config', 'test-data/**/*'\n    ...\n}\n

The triggers directive in nf-test.config specifies a list of filenames or patterns that should trigger a full test run. For example:

  - 'nextflow.config': Changes to the main Nextflow configuration file will trigger a full test run.
  - 'nf-test.config': Changes to the nf-test configuration file itself will trigger a full test run.
  - 'test-data/**/*': Changes to any files within the test-data directory will trigger a full test run.

This configuration ensures that critical changes always result in a comprehensive validation of the pipeline, providing additional confidence in your CI process.

"},{"location":"tutorials/github-actions/#step-5-additional-useful-options","title":"Step 5: Additional useful Options","text":"

The --filter flag allows you to selectively run test cases based on their specified types. For example, you can filter tests by module, pipeline, workflow, or function. This is particularly useful when you have a large suite of tests and need to focus on specific areas of functionality. By separating multiple types with commas, you can run a customized subset of tests that match the exact criteria you're interested in, thereby saving time and resources.

The --related-tests flag enables you to identify and execute all tests related to the provided .nf or nf.test files. This is ideal for scenarios where you have made changes to specific files and want to ensure that only the relevant tests are run. You can provide multiple files by separating them with spaces, which makes it easy to manage and test multiple changes at once, ensuring thorough validation of your updates.

When the --follow-dependencies flag is set, the nf-test tool will automatically traverse and execute all tests for dependencies related to the files specified with the --related-tests flag. This ensures that any interdependent components are also tested, providing comprehensive coverage. This option is particularly useful for complex projects with multiple dependencies, as it bypasses the firewall calculation process and guarantees that all necessary tests are executed.

The --changed-until flag allows you to run tests based on changes made up until a specified commit hash or branch name. By default, this parameter uses HEAD, but you can specify any commit or branch to target the changes made up to that point. This is particularly useful for validating changes over a specific range of commits, ensuring that all modifications within that period are tested comprehensively.
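As a sketch, these options can be combined with the CI flags from the earlier steps (the filter types and module path below are illustrative):

# run only workflow and function tests in CI mode\nnf-test test --ci --filter workflow,function\n\n# run tests related to a changed module, including its dependencies\nnf-test test --ci --related-tests modules/module_a.nf --follow-dependencies\n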

"},{"location":"tutorials/github-actions/#summary","title":"Summary","text":"
  1. Without Sharding: A straightforward setup where all tests run in a single job.
  2. With Sharding: Distributes tests across multiple jobs, allowing them to run in parallel.
  3. With Sharding and Changed Files: Optimizes the CI process by running only the tests affected by the changed files since the last commit, in parallel jobs.

Choose the configuration that best suits your project's needs. Start with the simpler setup and extend it as needed to improve efficiency and reduce test execution time.

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#nf-test-a-simple-testing-framework-for-nextflow-pipelines","title":"nf-test: A simple testing framework for Nextflow pipelines","text":"

Test your production ready\u00a0Nextflow\u00a0pipelines in an efficient and automated way. \ud83d\ude80

Getting Started Installation Source

A DSL language similar to Nextflow Describes expected behavior using 'when' and 'then' blocks Abundance of functions for writing elegant and readable assertions Utilizes snapshots to write tests for complex data structures Provides commands for generating boilerplate code Includes a test-runner that executes these scripts Easy installation on CI systems

"},{"location":"#unit-testing","title":"Unit testing","text":"

nf-test enables you to test all components of your data science pipeline: from end-to-end testing of the entire pipeline to specific tests of processes or even custom functions. This ensures that all testing is conducted consistently across your project.

Pipeline Process Functions
nextflow_pipeline {\n\n  name \"Test Hello World\"\n  script \"nextflow-io/hello\"\n\n  test(\"hello world example should start 4 processes\") {\n    expect {\n      with(workflow) {\n        assert success\n        assert trace.tasks().size() == 4\n        assert \"Ciao world!\" in stdout\n        assert \"Bonjour world!\" in stdout\n        assert \"Hello world!\" in stdout\n        assert \"Hola world!\" in stdout\n      }\n    }\n  }\n\n}\n
nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should create channel index files\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = file(\"test_data/transcriptome.fa\")\n                \"\"\"\n            }\n        }\n\n        then {\n            //check if test case succeeded\n            assert process.success\n            //analyze trace file\n            assert process.trace.tasks().size() == 1\n            with(process.out) {\n                // check if emitted output has been created\n                assert index.size() == 1\n                // count amount of created files\n                assert path(index.get(0)).list().size() == 16\n                // parse info.json file\n                def info = path(index.get(0)+'/info.json').json\n                assert info.num_kmers == 375730\n                assert info.seq_length == 443050\n                //verify md5 checksum\n                assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n            }\n        }\n\n    }\n\n}\n
nextflow_function {\n\n    name \"Test functions\"\n    script \"functions.nf\"\n\n    test(\"Test function1\") {\n      function \"function1\"\n      ...\n    }\n\n    test(\"Test function2\") {\n      function \"function2\"\n      ...\n    }\n}\n

Learn more about pipeline tests, workflow tests, process tests and function tests in the documentation.

"},{"location":"#snapshot-testing","title":"Snapshot testing","text":"

nf-test supports snapshot testing and automatically generates a baseline set of unit tests to safeguard against regressions caused by changes. nf-test captures a snapshot of output channels or any other objects and subsequently compares them to reference snapshot files stored alongside the tests. If the two snapshots do not match, the test will fail.

Learn more

"},{"location":"#highly-extendable","title":"Highly extendable","text":"

nf-test supports the inclusion of third-party libraries (e.g., jar files) or functions from Groovy files. This can be done either to extend its functionality or to prevent code duplication, thus maintaining simplicity in the logic of test cases. Given that many assertions are specific to use cases, nf-test incorporates a plugin system that allows for the extension of existing classes with custom methods. For example, FASTA file support.

Learn more

"},{"location":"#support-us","title":"Support us","text":"

We love stars as much as we love rockets! So make sure you star us on GitHub.

Star

Show the world your Nextflow pipeline is using nf-test and add the following badge to your README.md:

[![nf-test](https://img.shields.io/badge/tested_with-nf--test-337ab7.svg)](https://code.askimed.com/nf-test)\n
"},{"location":"#citation","title":"Citation","text":"

If you test your pipeline with nf-test, please cite:

Forer, L., & Sch\u00f6nherr, S. (2024). Improving the Reliability and Quality of Nextflow Pipelines with nf-test. bioRxiv. https://doi.org/10.1101/2024.05.25.595877

"},{"location":"#about","title":"About","text":"

nf-test has been created by Lukas Forer and Sebastian Sch\u00f6nherr and is MIT Licensed.

Thanks to all the contributors who help us maintain and improve nf-test!

"},{"location":"about/","title":"About","text":"

nf-test has been created by Lukas Forer and Sebastian Sch\u00f6nherr and is MIT Licensed.

"},{"location":"about/#contributors","title":"Contributors","text":""},{"location":"about/#statistics","title":"Statistics","text":"

GitHub:

Bioconda:

"},{"location":"installation/","title":"Installation","text":"

nf-test has the same requirements as Nextflow and can be used on POSIX-compatible systems like Linux or OS X. You can install nf-test using the following command:

curl -fsSL https://get.nf-test.com | bash\n

If you don't have curl installed, you could use wget:

wget -qO- https://get.nf-test.com | bash\n

It will create the nf-test executable file in the current directory. Optionally, move the nf-test file to a directory accessible by your $PATH variable.
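For example, to make it available system-wide:

sudo mv nf-test /usr/local/bin/\n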

"},{"location":"installation/#verify-installation","title":"Verify installation","text":"

Test the installation with the following command:

nf-test version\n

You should see something like this:

\ud83d\ude80 nf-test 0.5.0\nhttps://code.askimed.com/nf-test\n(c) 2021 -2022 Lukas Forer and Sebastian Schoenherr\n\nNextflow Runtime:\n\n      N E X T F L O W\n      version 21.10.6 build 5660\n      created 21-12-2021 16:55 UTC (17:55 CEST)\n      cite doi:10.1038/nbt.3820\n      http://nextflow.io\n

Now you are ready to write your first testcase.

"},{"location":"installation/#install-a-specific-version","title":"Install a specific version","text":"

If you want to install a specific version, pass it to the install script as follows:

curl -fsSL https://get.nf-test.com | bash -s 0.7.0\n
"},{"location":"installation/#manual-installation","title":"Manual installation","text":"

All releases are also available on GitHub.

"},{"location":"installation/#nextflow-binary-not-found","title":"Nextflow Binary not found?","text":"

If you get an error message like this, then nf-test was not able to detect your Nextflow installation.

\ud83d\ude80 nf-test 0.5.0\nhttps://code.askimed.com/nf-test\n(c) 2021 -2022 Lukas Forer and Sebastian Schoenherr\n\nNextflow Runtime:\nError: Nextflow Binary not found. Please check if Nextflow is in a directory accessible by your $PATH variable or set $NEXTFLOW_HOME.\n

To solve this issue you have two possibilities: make sure the Nextflow binary is in a directory accessible by your $PATH variable, or set the $NEXTFLOW_HOME environment variable to point to your Nextflow installation.
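For example, to use the second option (the path below is a placeholder for your actual Nextflow installation directory):

# placeholder path: point this to your Nextflow installation\nexport NEXTFLOW_HOME=/path/to/nextflow\n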

"},{"location":"installation/#updating","title":"Updating","text":"

To update an existing nf-test installation to the latest version, run the following command:

nf-test update\n
"},{"location":"installation/#compiling-from-source","title":"Compiling from source","text":"

To compile nf-test from source, you need to have Maven installed. This will produce a nf-test/target/nf-test.jar file.

git clone git@github.com:askimed/nf-test.git\ncd nf-test\nmvn install\n
To use the newly compiled nf-test.jar, update the nf-test bash script that is on your PATH to point to the new .jar file. First locate it with which nf-test, and then modify the APP_HOME and APP_JAR variables at the top:
#!/bin/bash\nAPP_HOME=\"/PATH/TO/nf-test/target/\"\nAPP_JAR=\"nf-test.jar\"\nAPP_UPDATE_URL=\"https://code.askimed.com/install/nf-test\"\n...\n

"},{"location":"resources/","title":"Resources","text":"

This page collects videos and blog posts about nf-test created by the community. Have you authored a blog post or given a talk about nf-test? Feel free to contact us, and we will be delighted to feature it here.

"},{"location":"resources/#training-module-hello-nf-test","title":"Training Module: Hello nf-test","text":"

This training module introduces nf-test in a simple, easy-to-follow hello-nextflow style aimed at beginners.

hello nf-test training module

"},{"location":"resources/#blog-post-leveraging-nf-test-for-enhanced-quality-control-in-nf-core","title":"Blog post: Leveraging nf-test for enhanced quality control in nf-core","text":"

Reproducibility is an important attribute of all good science. This is especially true in the realm of bioinformatics, where software is (hopefully) constantly being updated and pipelines are ideally being maintained. This blog post covers nf-test in the nf-core context.

Read blog post

"},{"location":"resources/#nf-corebytesize-converting-pytest-modules-to-nf-test","title":"nf-core/bytesize: Converting pytest modules to nf-test","text":"

Adam Talbot & Sateesh Peri do a live demo of converting nf-core DSL2 module pytests to nf-test.

The presentation was recorded as part of the nf-core/bytesize series.

"},{"location":"resources/#nf-test-a-simple-but-powerful-testing-framework-for-nextflow-pipelines","title":"nf-test: a simple but powerful testing framework for Nextflow pipelines","text":"

Lukas Forer provides an overview of nf-test, its evolution over time and upcoming features.

The presentation was recorded as part of the 2023 Nextflow Summit in Barcelona.

"},{"location":"resources/#nf-test-a-simple-test-framework-specifically-tailored-for-nextflow-pipelines","title":"nf-test, a simple test framework specifically tailored for Nextflow pipelines","text":"

Sateesh Peri does a hands-on exploration of nf-test, a simple test framework specifically tailored for Nextflow pipelines.

Slides to follow along can be found here.

The presentation was recorded as part of the Workflows Community Meetup - All Things Groovy at the Wellcome Genome Campus.

"},{"location":"resources/#nf-corebytesize-nf-test","title":"nf-core/bytesize: nf-test","text":"

Edmund Miller shares with us his impressions about nf-test from a user perspective. nf-test is a simple test framework for Nextflow pipelines.

The presentation was recorded as part of the nf-core/bytesize series.

"},{"location":"resources/#episode-8-nf-test-mentorships-and-debugging-resume","title":"Episode 8: nf-test, mentorships and debugging resume","text":"

Phil Ewels, Chris Hakkaart and Marcel Ribeiro-Dantas chat about the nf-test framework for testing Nextflow pipelines.

The presentation was part of the \"News & Views\" episode of Channels (Nextflow Podcast).

"},{"location":"resources/#blog-post-a-simple-test-framework-for-nextflow-pipelines","title":"Blog post: A simple test framework for Nextflow pipelines","text":"

Discover how nf-test originated from the need to efficiently and automatically test production-ready Nextflow pipelines.

Read blog post

"},{"location":"tutorials/","title":"Tutorials","text":""},{"location":"tutorials/#setup-nf-test-on-github-actions","title":"Setup nf-test on GitHub Actions","text":"

In this tutorial, we will guide you through setting up and running nf-test on GitHub Actions.

Read tutorial

"},{"location":"docs/configuration/","title":"Configuration","text":""},{"location":"docs/configuration/#nf-testconfig","title":"nf-test.config","text":"

The nf-test.config file is a configuration file used to customize settings and behavior for nf-test. This file must be located in the root of your project, and it is automatically loaded when you run nf-test test. Below are the parameters that can be adapted:

testsDir: Location for storing all nf-test cases (test scripts). If you want all test files to be in the same directory as the script itself, you can set testsDir to ".". Default: "tests"
workDir: Directory for storing temporary files and working directories for each test. This directory should be added to .gitignore. Default: ".nf-test"
configFile: Location of an optional nextflow.config file specifically used for executing tests. Learn more. Default: "tests/nextflow.config"
libDir: Location of a library folder that is automatically added to the classpath during testing to include additional libraries or resources needed for test cases. Default: "tests/lib"
profile: Default profile to use for running tests, as defined in the Nextflow configuration. Learn more. Default: "docker"
withTrace: Enable or disable tracing options during testing. Disable tracing if your containers don't include the procps tool. Default: true
autoSort: Enable or disable sorted channels by default when running tests. Default: true
options: Custom Nextflow command-line options to be applied when running tests. For example: "-dump-channels -stub-run"
ignore: List of filenames or patterns that should be ignored when building the dependency graph. For example: ignore 'folder/**/*.nf', 'modules/module.nf'
triggers: List of filenames or patterns that should trigger a full test run. For example: triggers 'nextflow.config', 'test-data/**/*'
requires: Specifies the minimum required version of nf-test. Requires nf-test > 0.9.0

Here's an example of what an nf-test.config file could look like:

config {\n    testsDir \"tests\"\n    workDir \".nf-test\"\n    configFile \"tests/nextflow.config\"\n    libDir \"tests/lib\"\n    profile \"docker\"\n    withTrace false\n    autoSort false\n    options \"-dump-channels -stub-run\"\n}\n
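
The ignore and triggers directives accept one or more comma-separated patterns, as described above; the paths below are placeholders:

config {\n    ignore 'folder/**/*.nf', 'modules/module.nf'\n    triggers 'nextflow.config', 'test-data/**/*'\n}\n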

The requires keyword can be used to specify the minimum required version of nf-test. For instance, to ensure the use of at least nf-test version 0.9.0, define it as follows:

config {\n    requires (\n        \"nf-test\": \"0.9.0\"\n    )\n}\n
"},{"location":"docs/configuration/#testsnextflowconfig","title":"tests/nextflow.config","text":"

This optional nextflow.config file is used to execute tests. It is a good place to set default params for all your tests, for example the number of threads:

params {\n    // run all tests with 1 threads\n    threads = 1\n}\n
"},{"location":"docs/configuration/#configuration-for-tests","title":"Configuration for tests","text":"

nf-test allows you to set and overwrite the config, autoSort and options properties for a specific testsuite:

nextflow_process {\n\n    name \"Test Process...\"\n    script \"main.nf\"\n    process \"my_process\"\n    config \"path/to/test/nextflow.config\"\n    autoSort false\n    options \"-dump-channels\"\n    ...\n\n}\n

It is also possible to overwrite these properties for a specific test. Depending on the Nextflow option used, also add the --debug nf-test option on the command line to see the additional output.

nextflow_process {\n\n   test(\"my test\") {\n\n      config \"path/to/test/nextflow.config\"\n      autoSort false\n      options \"-dump-channels\"\n      ...\n\n    }\n\n}\n
"},{"location":"docs/configuration/#managing-profiles","title":"Managing Profiles","text":"

Profiles in nf-test provide a convenient way to configure and customize Nextflow executions for your test cases. To run your test using a specific Nextflow profile, you can use the --profile argument on the command line or define a default profile in nf-test.config.

"},{"location":"docs/configuration/#basic-profile-usage","title":"Basic Profile Usage","text":"

By default, nf-test reads the profile configuration from nf-test.config. If you've defined a profile called A in nf-test.config, running nf-test --profile B will start Nextflow with only the B profile. It replaces any existing profiles.

"},{"location":"docs/configuration/#combining-profiles-with","title":"Combining Profiles with \"+\"","text":"

To combine profiles, you can use the + prefix. For example, running nf-test --profile +B will start Nextflow with both A and B profiles, resulting in -profile A,B. This allows you to extend the existing configuration with additional profiles.
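
A minimal sketch, assuming nf-test.config defines the default profile A:

nf-test test --profile B    # replaces A: Nextflow runs with -profile B\nnf-test test --profile +B   # extends A: Nextflow runs with -profile A,B\n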

"},{"location":"docs/configuration/#profile-priority-order","title":"Profile Priority Order","text":"

Profiles are evaluated in a specific order, ensuring predictable behavior:

  1. Profile in nf-test.config: The first profile considered is the one defined in nf-test.config.

  2. Profile Defined in Testcase: If you specify a profile within a testcase, it takes precedence over the one in nf-test.config.

  3. Profile Defined on the Command Line (CLI): Finally, any profiles provided directly through the CLI have the highest priority and override or extend previously defined profiles.

By understanding this profile evaluation order, you can effectively configure Nextflow executions for your test cases in a flexible and organized manner.

"},{"location":"docs/configuration/#file-staging","title":"File Staging","text":"

Warning

File Staging is obsolete since version 0.9.0.

The stage section of the nf-test.config file is used to define files that are needed by Nextflow in the test environment (meta directory). Additionally, the directories lib, bin, and assets are automatically staged.

"},{"location":"docs/configuration/#supported-directives","title":"Supported Directives","text":""},{"location":"docs/configuration/#symlink","title":"symlink","text":"

This directive is used to create symbolic links (symlinks) in the test environment. Symlinks are pointers to files or directories and can be useful for creating references to data files or directories required for the test. The syntax for the symlink directive is as follows:

symlink \"source_path\"\n

source_path: The path to the source file or directory that you want to symlink.

"},{"location":"docs/configuration/#copy","title":"copy","text":"

This directive is used to copy files or directories into the test environment. It allows you to duplicate files from a specified source to a location within the test environment. The syntax for the copy directive is as follows:

copy \"source_path\"\n

source_path: The path to the source file or directory that you want to copy.

"},{"location":"docs/configuration/#example-usage","title":"Example Usage","text":"

Here's an example of how to use the stage section in an nf-test.config file:

config {\n    ...\n    stage {\n        symlink \"data/original_data.txt\"\n        copy \"resources/config.yml\"\n    }\n    ...\n}\n

In this example, data/original_data.txt is symlinked into the test environment, and resources/config.yml is copied into it.

"},{"location":"docs/configuration/#testsuite","title":"Testsuite","text":"

Furthermore, it is also possible to stage files that are specific to a single testsuite:

nextflow_workflow {\n\n    name \"Test workflow HELLO_WORKFLOW\"\n\n    script \"./hello.nf\"\n    workflow \"HELLO_WORKFLOW\"\n\n    stage {\n        symlink \"test-assets/test.txt\"\n    }\n\n    test(\"Should print out test file\") {\n        expect {\n            assert workflow.success\n        }\n    }\n\n}\n
"},{"location":"docs/getting-started/","title":"Getting started","text":"

This guide helps you to understand the concepts of nf-test and to write your first test cases. Before you start, please check if you have installed nf-test properly on your computer. Also, this guide assumes that you have a basic knowledge of Groovy and unit testing. The Groovy documentation is the best place to learn its syntax.

"},{"location":"docs/getting-started/#lets-get-started","title":"Let's get started","text":"

To show the power of nf-test, we adapted a recently published proof-of-concept Nextflow pipeline to the new DSL2 syntax using modules. First, open the terminal and clone our test pipeline:

# clone nextflow pipeline\ngit clone https://github.com/askimed/nf-test-examples\n\n# enter project directory\ncd nf-test-examples\n

The pipeline consists of three modules (salmon_index.nf, salmon_align_quant.nf, fastqc.nf). Here, we use the salmon_index.nf process to create a test case from scratch. This process takes a reference as input and creates an index using salmon.

"},{"location":"docs/getting-started/#init-new-project","title":"Init new project","text":"

Before creating test cases, we use the init command to set up nf-test.

# Init command has already been executed for our repository\nnf-test init\n

The init command creates the following files: nf-test.config and tests/nextflow.config. It also creates the tests folder, which is the home directory of your test code.

In the configuration section you can learn more about these files and how to customize the directory layout.

"},{"location":"docs/getting-started/#create-your-first-test","title":"Create your first test","text":"

The generate command helps you create skeleton test code for a Nextflow process or the complete pipeline/workflow.

Here we generate a test case for the process salmon.index.nf:

# delete already existing test case\nrm tests/modules/local/salmon_index.nf.test\nnf-test generate process modules/local/salmon_index.nf\n

This command creates a new file tests/modules/local/salmon_index.nf.test with the following content:

nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            params {\n                // define parameters here. Example:\n                // outdir = \"tests/results\"\n            }\n            process {\n                \"\"\"\n                // define inputs of the process here. Example:\n                // input[0] = file(\"test-file.txt\")\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            with(process.out) {\n              // Make assertions about the content and elements of output channels here. Example:\n              // assert out_channel != null\n            }\n        }\n\n    }\n\n}\n

The generate command automatically filled in the name, script and process properties of our test case and created a skeleton for your first test method. Typically, you create one file per process and use different test methods to describe the expected behaviour of the process, as sketched below.
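
A sketch of this one-file-per-process layout (the second test name and scenario are hypothetical):

nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should create channel index files\") {\n        // when/then blocks as described below\n    }\n\n    test(\"Should fail without a reference file\") {\n        // a second, independent specification\n    }\n\n}\n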

This test has a name, a when and a then closure (when/then closures are required here, since inputs need to be defined). The when block describes the input parameters of the workflow or the process. nf-test executes the process with exactly these parameters and parses the content of the output channels. Then, it evaluates the assertions defined in the then block to check whether the content of the output channels matches your expectations.

"},{"location":"docs/getting-started/#the-when-block","title":"The when block","text":"

The when block describes the input of the process and/or the Nextflow params.

The params block is optional and is a simple map that can be used to override Nextflow's input params.

The process block is a multi-line string. The input array can be used to set the different input arguments of the process. In our example, we only have one input that expects a file. Let us update the process block by setting the first element of the input array to the path of our reference file:

when {\n    params {\n        outdir = \"output\"\n    }\n    process {\n        \"\"\"\n        // Use transcriptome.fa as the first input parameter for our process\n        input[0] = file(\"${projectDir}/test_data/transcriptome.fa\")\n        \"\"\"\n    }\n}\n

Everything defined in the process block is later executed in a Nextflow script (created automatically to test your process). Therefore, you can use every Nextflow-specific function or command to define the values of the input array (e.g. Channels, files, paths, etc.).

"},{"location":"docs/getting-started/#the-then-block","title":"The then block","text":"

The then block describes the expected output channels of the process when we execute it with the input parameters defined in the when block.

The then block typically contains assertions to check assumptions (e.g. the size and the content of an output channel). However, this block accepts any Groovy script, which means you can also import third-party libraries to define very specific assertions.

nf-test automatically loads all output channels of the process and all their items into a map named process.out. You can then use this map to formulate your assertions.

For example, in the salmon_index process we expect one process to be executed and 16 files to be created. We also want to check the md5 sum and look into the actual JSON file. Let us update the then section with some assertions that describe our expectations:

then {\n    //check if test case succeeded\n    assert process.success\n    //analyze trace file\n    assert process.trace.tasks().size() == 1\n    with(process.out) {\n      // check if emitted output has been created\n      assert index.size() == 1\n      // count amount of created files\n      assert path(index.get(0)).list().size() == 16\n      // parse info.json file using a json parser provided by nf-test\n      def info = path(index.get(0)+'/info.json').json\n      assert info.num_kmers == 375730\n      assert info.seq_length == 443050\n      assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n    }\n}\n

The items of a channel are always sorted by nf-test. This provides a deterministic order inside the channel and enables you to write reproducible tests.

"},{"location":"docs/getting-started/#your-first-test-specification","title":"Your first test specification","text":"

You can update the name of the test method to something that later gives a good description of our specification. When we put everything together, we get the following fully working test specification:

nextflow_process {\n\n    name \"Test Process SALMON_INDEX\"\n    script \"modules/local/salmon_index.nf\"\n    process \"SALMON_INDEX\"\n\n    test(\"Should create channel index files\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = file(\"${projectDir}/test_data/transcriptome.fa\")\n                \"\"\"\n            }\n        }\n\n        then {\n            //check if test case succeeded\n            assert process.success\n            //analyze trace file\n            assert process.trace.tasks().size() == 1\n            with(process.out) {\n              // check if emitted output has been created\n              assert index.size() == 1\n              // count amount of created files\n              assert path(index.get(0)).list().size() == 16\n              // parse info.json file\n              def info = path(index.get(0)+'/info.json').json\n              assert info.num_kmers == 375730\n              assert info.seq_length == 443050\n              assert path(index.get(0)+'/info.json').md5 == \"80831602e2ac825e3e63ba9df5d23505\"\n            }\n        }\n    }\n}\n
"},{"location":"docs/getting-started/#run-your-first-test","title":"Run your first test","text":"

Now, the test command can be used to run your test:

nf-test test tests/modules/local/salmon_index.nf.test --profile docker\n
"},{"location":"docs/getting-started/#specifying-profiles","title":"Specifying profiles","text":"

In this case, the docker profile defined in the Nextflow pipeline is used to execute the test. The profile is set using the --profile parameter, but you can also define a default profile in the configuration file.

Congratulations! You created your first nf-test specification.

"},{"location":"docs/getting-started/#nextflow-options","title":"Nextflow options","text":"

nf-test also allows you to specify Nextflow options (e.g. -dump-channels, -stub-run) globally in the nf-test.config file or by adding an option to the test suite or the actual test. Read more about this in the configuration documentation.

nextflow_process {\n\n    options \"-dump-channels\"\n\n}\n
"},{"location":"docs/getting-started/#whats-next","title":"What's next?","text":""},{"location":"docs/nftest_pipelines/","title":"Pipelines using nf-test","text":""},{"location":"docs/nftest_pipelines/#nf-test-examples","title":"nf-test-examples","text":"

All test cases described in this documentation can be found in the nf-test-examples repository.

"},{"location":"docs/nftest_pipelines/#gwas-regenie-pipeline","title":"GWAS-Regenie Pipeline","text":"

To show the power of nf-test, we applied it to a Nextflow pipeline that performs whole genome regression modelling using regenie. Please click here to learn more about this pipeline and check out different kinds of test cases.

"},{"location":"docs/running-tests/","title":"Running tests","text":""},{"location":"docs/running-tests/#basic-usage","title":"Basic usage","text":"

The easiest way to use nf-test is to run the following command, which runs all tests under the tests directory. The testsDir can be changed in nf-test.config.

nf-test test\n
"},{"location":"docs/running-tests/#execute-specific-tests","title":"Execute specific tests","text":"

You can also specify a list of tests that should be executed.

nf-test test tests/modules/local/salmon_index.nf.test tests/modules/bwa_index.nf.test\n\nnf-test test tests/modules tests/modules/bwa_index.nf.test\n
"},{"location":"docs/running-tests/#tag-tests","title":"Tag tests","text":"

nf-test provides a simple tagging mechanism that allows you to execute tests by name or by tag.

Tags can be defined for each testsuite or for each testcase using the new tag directive:

nextflow_process {\n\n    name \"suite 1\"\n    tag \"tag1\"\n\n    test(\"test 1\") {\n        tag \"tag2\"\n        tag \"tag3\"   \n        ...\n    }\n\n    test(\"test 2\") {\n\n        tag \"tag4\"\n        tag \"tag5\"   \n        ...\n\n    }\n}\n

For example, to execute all tests with tag2, use the following command.

nf-test test --tag tag2  # collects test1\n

Names are automatically added to tags. This enables you to execute suites or tests directly.

nf-test test --tag \"suite 1\"  # collects test1 and test2\n

When multiple tags are provided, all tests that match at least one tag will be executed. Tags are also not case-sensitive; both of the following lines collect the same tests.

nf-test test --tag tag3,tag4  # collects test1 and test2\nnf-test test --tag TAG3,TAG4  # collects test1 and test2\n
"},{"location":"docs/running-tests/#create-a-tap-output","title":"Create a TAP output","text":"

To run all tests and create a report.tap file, use the following command.

nf-test test --tap report.tap\n
"},{"location":"docs/running-tests/#run-test-by-its-hash-value","title":"Run test by its hash value","text":"

To run a specific test using its hash, the following command can be used. The hash value is generated during its first execution.

nf-test test tests/main.nf.test@d41119e4\n
"},{"location":"docs/assertions/assertions/","title":"Assertions","text":"

Writing test cases means formulating assumptions by using assertions. Groovy's power assert provides detailed output when the boolean expression evaluates to false. nf-test provides several extensions and commands to simplify working with Nextflow channels. Here we summarise how Nextflow and nf-test handle channels and provide examples for the tools that nf-test provides:

"},{"location":"docs/assertions/assertions/#nextflow-channels-and-nf-test-channel-sorting","title":"Nextflow channels and nf-test channel sorting","text":"

Nextflow channels emit, in a random order, a single value or a tuple of values.

Channels that emit a single item produce an unordered list of objects, List<Object>, for example:

process.out.outputCh = ['Hola', 'Hello', 'Bonjour']\n

Channels that contain Nextflow file values have a unique path on each run. For example:

process.out.outputCh = ['/.nf-test/tests/c563c/work/65/85d0/Hola.json', '/.nf-test/tests/c563c/work/65/fa20/Hello.json', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json']\n

Channels that emit tuples produce an unordered list of ordered objects, List<List<Object>>:

process.out.outputCh = [\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json'], \n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'], \n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json']\n]\n

Assertions by channel index are made possible by sorting the Nextflow channel. nf-test performs this sorting automatically before the then closure is launched, using integer, string and path comparisons. For example, the above would be sorted by nf-test as:

process.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n

"},{"location":"docs/assertions/assertions/#using-with","title":"Using with","text":"

These assertions...

assert process.out.imputed_plink2\nassert process.out.imputed_plink2.size() == 1\nassert process.out.imputed_plink2.get(0).get(0) == \"example.vcf\"\nassert process.out.imputed_plink2.get(0).get(1) ==~ \".*/example.vcf.pgen\"\nassert process.out.imputed_plink2.get(0).get(2) ==~ \".*/example.vcf.psam\"\nassert process.out.imputed_plink2.get(0).get(3) ==~ \".*/example.vcf.pvar\"\n

... can be rewritten using with(){} to improve readability:

assert process.out.imputed_plink2\nwith(process.out.imputed_plink2) {\n    assert size() == 1\n    with(get(0)) {\n        assert get(0) == \"example.vcf\"\n        assert get(1) ==~ \".*/example.vcf.pgen\"\n        assert get(2) ==~ \".*/example.vcf.psam\"\n        assert get(3) ==~ \".*/example.vcf.pvar\"\n    }\n}\n
"},{"location":"docs/assertions/assertions/#using-contains-to-assert-an-item-in-the-channel-is-present","title":"Using contains to assert an item in the channel is present","text":"

Groovy's contains and collect methods can be used to flexibly assert that an item exists in the channel output.

For example, the following represents a channel that emits two-element tuples, each consisting of a string and a JSON file:

/*\ndef process.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n

To assert the channel contains one of the tuples, parse the json and assert:

testData = process.out.outputCh.collect { greeting, jsonPath -> [greeting, path(jsonPath).json] } \nassert testData.contains(['Hello', path('./myTestData/Hello.json').json])\n

To assert a subset of the tuple data, filter the channel using collect. For example, to assert the greeting only:

testData = process.out.outputCh.collect { greeting, jsonPath -> greeting } \nassert testData.contains('Hello')\n

See the files page for more information on parsing and asserting various file types.

"},{"location":"docs/assertions/assertions/#using-assertcontainsinanyorder-for-order-agnostic-assertion-of-the-contents-of-a-channel","title":"Using assertContainsInAnyOrder for order-agnostic assertion of the contents of a channel","text":"

assertContainsInAnyOrder(List<Object> list1, List<Object> list2) performs an order-agnostic assertion on a channel's contents and is available in every nf-test closure. It is a binding for Hamcrest's assertContainsInAnyOrder.

Some example use-cases are provided below.

"},{"location":"docs/assertions/assertions/#channel-that-emits-strings","title":"Channel that emits strings","text":"
// process.out.outputCh = ['Bonjour', 'Hello', 'Hola'] \n\ndef expected = ['Hola', 'Hello', 'Bonjour']\nassertContainsInAnyOrder(process.out.outputCh, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-a-single-maps-eg-valmymap","title":"Channel that emits a single maps, e.g. val(myMap)","text":"
/*\nprocess.out.outputCh = [\n  [\n    'D': [10,11,12],\n    'C': [7,8,9]\n  ],\n  [\n    'B': [4,5,6],\n    'A': [1,2,3]\n  ]\n]\n*/\n\ndef expected = [\n  [\n    'A': [1,2,3],\n    'B': [4,5,6]\n  ],\n  [\n    'C': [7,8,9],\n    'D': [10,11,12]\n  ]\n]\n\nassertContainsInAnyOrder(process.out.outputCh, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-json-files","title":"Channel that emits json files","text":"

See the files page for more information on parsing and asserting various file types.

Since the outputCh filepaths differ between consecutive runs, the files need to be read and parsed prior to comparison:

/*\nprocess.out.outputCh = [\n  '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json',\n  '/.nf-test/tests/c563c/work/65/fa20/Hello.json',\n  '/.nf-test/tests/c563c/work/65/85d0/Hola.json'\n]\n*/\n\ndef actual = process.out.outputCh.collect { filepath -> path(filepath).json }\ndef expected = [\n  path('./myTestData/Hello.json').json,\n  path('./myTestData/Hola.json').json,\n  path('./myTestData/Bonjour.json').json,\n]\n\nassertContainsInAnyOrder(actual, expected)\n
"},{"location":"docs/assertions/assertions/#channel-that-emits-a-tuple-of-strings-and-json-files","title":"Channel that emits a tuple of strings and json files","text":"

See the files page for more information on parsing and asserting various file types.

Since the ordering of items within the tuples is consistent, we can assert this case:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> [greeting, path(filepath).json] }\ndef expected = [\n  ['Hola', path('./myTestData/Hola.json').json], \n  ['Hello', path('./myTestData/Hello.json').json],\n  ['Bonjour', path('./myTestData/Bonjour.json').json],\n]\n\nassertContainsInAnyOrder(actual, expected)\n

To assert the json only and ignore the strings:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> path(filepath).json }\ndef expected = [\n  path('./myTestData/Hello.json').json, \n  path('./myTestData/Hola.json').json,\n  path('./myTestData/Bonjour.json').json\n]\n\nassertContainsInAnyOrder(actual, expected)\n

To assert the strings only and not the json files:

/*\nprocess.out.outputCh = [\n  ['Bonjour', '/.nf-test/tests/c563c/work/65/b62f/Bonjour.json'],\n  ['Hello', '/.nf-test/tests/c563c/work/65/fa20/Hello.json'],\n  ['Hola', '/.nf-test/tests/c563c/work/65/85d0/Hola.json']\n]\n*/\n\ndef actual = process.out.outputCh.collect { greeting, filepath -> greeting }\ndef expected = ['Hello', 'Hola', 'Bonjour']\n\nassertContainsInAnyOrder(actual, expected)\n

"},{"location":"docs/assertions/assertions/#using-assertall","title":"Using assertAll","text":"

assertAll(Closure... closures) ensures that all supplied closures do not throw exceptions. The number of failed closures is reported in the Exception message. This is useful for efficient debugging of a set of test assertions from a single test run.

def a = 2\n\nassertAll(\n    { assert a==1 },\n    { a = 1/0 },\n    { assert a==2 },\n    { assert a==3 }\n)\n
The output will look like this:
assert a==1\n       ||\n       |false\n       2\n\njava.lang.ArithmeticException: Division by zero\nAssertion failed:\n\nassert a==3\n       ||\n       |false\n       2\n\nFAILED (7.106s)\n\n  java.lang.Exception: 3 of 4 assertions failed\n

"},{"location":"docs/assertions/fasta/","title":"FASTA Files","text":"

0.7.0

The nft-fasta plugin extends path by a fasta property that can be used to read FASTA files into maps. nft-fasta also supports gzipped FASTA files.

"},{"location":"docs/assertions/fasta/#setup","title":"Setup","text":"

To use the fasta property, you need to activate the nft-fasta plugin in your nf-test.config file:

config {\n  plugins {\n    load \"nft-fasta@1.0.0\"\n  }\n}\n

More about plugins can be found here.

"},{"location":"docs/assertions/fasta/#comparing-files","title":"Comparing files","text":"
assert path('path/to/fasta1.fasta').fasta == path(\"path/to/fasta2.fasta'\").fasta\n
"},{"location":"docs/assertions/fasta/#work-with-individual-samples","title":"Work with individual samples","text":"
def sequences = path('path/to/fasta1.fasta.gz').fasta\nassert \"seq1\" in sequences\nassert !(\"seq8\" in sequences)\nassert sequences.seq1 == \"AGTACGTAGTAGCTGCTGCTACGTGCGCTAGCTAGTACGTCACGACGTAGATGCTAGCTGACTCGATGC\"\n
"},{"location":"docs/assertions/files/","title":"Files","text":""},{"location":"docs/assertions/files/#md5-checksum","title":"md5 Checksum","text":"

nf-test extends path by an md5 property that can be used to compare the file content with an expected checksum:

assert path(process.out.out_ch.get(0)).md5 == \"64debea5017a035ddc67c0b51fa84b16\"\n
Note that for gzip-compressed files, the md5 property is calculated after gunzipping the file contents, whereas for other file types the md5 property is calculated directly on the file itself.

"},{"location":"docs/assertions/files/#json-files","title":"JSON Files","text":"

nf-test supports comparison of JSON files and keys within JSON files. To assert that two JSON files contain the same keys and values:

assert path(process.out.out_ch.get(0)).json == path('./some.json').json\n
Individual keys can also be asserted:

assert path(process.out.out_ch.get(0)).json.key == \"value\"\n
"},{"location":"docs/assertions/files/#yaml-files","title":"YAML Files","text":"

nf-test supports comparison of YAML files and keys within YAML files. To assert that two YAML files contain the same keys and values:

assert path(process.out.out_ch.get(0)).yaml == path('./some.yaml').yaml\n
Individual keys can also be asserted:

assert path(process.out.out_ch.get(0)).yaml.key == \"value\"\n
"},{"location":"docs/assertions/files/#gzip-files","title":"GZip Files","text":"

nf-test extends path by a linesGzip property that can be used to read gzip-compressed files.

assert path(process.out.out_ch.get(0)).linesGzip.size() == 5\nassert path(process.out.out_ch.get(0)).linesGzip.contains(\"Line Content\")\n
"},{"location":"docs/assertions/files/#filter-lines","title":"Filter lines","text":"

The returned list can also be sliced to select specific lines.

def lines = path(process.out.gzip.get(0)).linesGzip[0..5]\nassert lines.size() == 6\ndef firstLine = path(process.out.gzip.get(0)).linesGzip[0]\nassert firstLine.equals(\"MY_HEADER\")\n
"},{"location":"docs/assertions/files/#grep-lines","title":"Grep lines","text":"

nf-test also provides the possibility to grep specific lines, with the advantage that only a subset of lines needs to be read (especially helpful for larger files).

def lines = path(process.out.gzip.get(0)).grepLinesGzip(0,5)\nassert lines.size() == 6\ndef firstLine = path(process.out.gzip.get(0)).grepLineGzip(0)\nassert firstLine.equals(\"MY_HEADER\")\n
"},{"location":"docs/assertions/files/#snapshot-support","title":"Snapshot Support","text":"

The possibility of filtering lines from a *.gz file can also be combined with the snapshot functionality.

assert snapshot(\n    path(process.out.gzip.get(0)).linesGzip[0]\n).match()\n
"},{"location":"docs/assertions/libraries/","title":"Using Third-Party Libraries","text":"

nf-test supports including third-party libraries (e.g. jar files) or functions from Groovy files to extend its functionality, avoid duplicate code, and keep the logic in test cases simple.

"},{"location":"docs/assertions/libraries/#using-local-groovy-files","title":"Using Local Groovy Files","text":"

0.7.0

If nf-test detects a lib folder in the directory of a testcase, it automatically adds it to the classpath.

"},{"location":"docs/assertions/libraries/#examples","title":"Examples","text":"

We have a Groovy script MyWordUtils.groovy that contains the following class:

class MyWordUtils {\n\n    def static capitalize(String word){\n      return word.toUpperCase();\n    }\n\n}\n

We can put this file in a subfolder called lib:

testcase_1\n\u251c\u2500\u2500 capitalizer.nf\n\u251c\u2500\u2500 capitalizer.test\n\u2514\u2500\u2500 lib\n    \u2514\u2500\u2500 MyWordUtils.groovy\n

The file capitalizer.nf contains the CAPITALIZER process:

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess CAPITALIZER {\n    input:\n        val cheers\n    output:\n        stdout emit: output\n    script:\n       println \"$cheers\".toUpperCase()\n    \"\"\"\n    \"\"\"\n\n}\n

Next, we can use this class in the capitalizer.nf.test like every other class that is provided by nf-test or Groovy itself:

nextflow_process {\n\n    name \"Test Process CAPITALIZER\"\n    script \"capitalizer.nf\"\n    process \"CAPITALIZER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = \"world\"\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert process.stdout.contains(MyWordUtils.capitalize('world'))\n        }\n\n    }\n\n}\n

If we have a project and we want to reuse libraries in multiple test cases, then we can store the class in the shared lib folder. Both test cases are now able to use MyWordUtils:

tests\n\u251c\u2500\u2500 testcase_1\n    \u251c\u2500\u2500 hello_1.nf\n    \u251c\u2500\u2500 hello_1.nf.test\n\u251c\u2500\u2500 testcase_2\n    \u251c\u2500\u2500 hello_2.nf\n    \u251c\u2500\u2500 hello_2.nf.test\n\u2514\u2500\u2500 lib\n    \u2514\u2500\u2500 MyWordUtils.groovy\n

The default location is tests/lib. This folder location can be changed in the nf-test.config file.

It is also possible to use the --lib parameter to add an additional folder to the classpath:

nf-test test tests/testcase_1/hello_1.nf.test --lib tests/mylibs\n

If multiple folders are used, they need to be separated with a colon (like in Java or Groovy).
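
For example (the additional folder name is a placeholder):

nf-test test tests/testcase_1/hello_1.nf.test --lib tests/lib:tests/mylibs\n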

"},{"location":"docs/assertions/libraries/#using-local-jar-files","title":"Using Local Jar Files","text":"

To integrate local jar files, you can either specify the path to the jar within the nf-test --lib option

nf-test test test.nf.test --lib tests/lib/groovy-ngs-utils/groovy-ngs-utils.jar\n

or add it as follows to the nf-test.config file:

libDir \"tests/lib:tests/lib/groovy-ngs-utils/groovy-ngs-utils.jar\"\n

You could then import the class and use it in the then statement:

import gngs.VCF;\n\nnextflow_process {\n\n    name \"Test Process VARIANT_CALLER\"\n    script \"variant_caller.nf\"\n    process \"VARIANT_CALLER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n           ...\n        }\n\n        then {\n            assert process.success             \n            def vcf = VCF.parse(\"$baseDir/tests/test_data/NA12879.vcf.gz\")\n            assert vcf.samples.size() == 10\n            assert vcf.variants.size() == 20\n        }\n\n    }\n\n}\n
"},{"location":"docs/assertions/libraries/#using-maven-artifcats-with-grab","title":"Using Maven Artifcats with @Grab","text":"

nf-test supports the @Grab annotation to include third-party libraries that are available in a Maven repository. Since the dependency is defined as a Maven artifact, no local copy of the jar file is needed; Maven makes it possible to include an exact version and provides an easy update process.

"},{"location":"docs/assertions/libraries/#example","title":"Example","text":"

The following example uses the WordUtils class from commons-lang:

@Grab(group='commons-lang', module='commons-lang', version='2.4')\nimport org.apache.commons.lang.WordUtils\n\nnextflow_process {\n\n    name \"Test Process CAPITALIZER\"\n    script \"capitalizer.nf\"\n    process \"CAPITALIZER\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = \"world\"\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert process.stdout.contains(WordUtils.capitalize('world'))\n        }\n\n    }\n\n}\n
"},{"location":"docs/assertions/regular-expressions/","title":"Regular Expressions","text":""},{"location":"docs/assertions/regular-expressions/#using-operator","title":"Using ==~ operator","text":"

The operator ==~ can be used to check if a string matches a regular expression:

assert \"/my/full/path/to/process/dir/example.vcf.pgen\" ==~ \".*/example.vcf.pgen\"\n
"},{"location":"docs/assertions/snapshots/","title":"Snapshots","text":"

0.7.0

Snapshots are a very useful tool whenever you want to make sure your output channels or output files do not change unexpectedly. This feature is highly inspired by Jest.

A typical snapshot test case takes a snapshot of the output channels or any other object, then compares it to a reference snapshot file stored alongside the test (*.nf.test.snap). The test will fail if the two snapshots do not match: either the change is unexpected, or the reference snapshot needs to be updated to the new output of a process, workflow, pipeline or function.

"},{"location":"docs/assertions/snapshots/#using-snapshots","title":"Using Snapshots","text":"

The snapshot keyword creates a snapshot of the object, and its match method can then be used to check if it contains the expected data from the snap file. The following example shows how to create a snapshot of a workflow channel:

assert snapshot(workflow.out.channel1).match()\n

You can also create a snapshot of all output channels of a process:

assert snapshot(process.out).match()\n

Or take a snapshot of a specific file:

assert snapshot(path(process.out.get(0))).match()\n

Even the result of a function can be used:

assert snapshot(function.result).match()\n

The first time this test runs, nf-test creates a snapshot file. This is a JSON file that contains a serialized version of the provided object.

The snapshot file should be committed alongside code changes, and reviewed as part of your code review process. nf-test uses pretty-format to make snapshots human-readable during code review. On subsequent test runs, nf-test will compare the data with the previous snapshot. If they match, the test will pass. If they don't match, either the test runner found a bug in your code that should be fixed, or the implementation has changed and the snapshot needs to be updated.

"},{"location":"docs/assertions/snapshots/#updating-snapshots","title":"Updating Snapshots","text":"

When a snapshot test is failing due to an intentional implementation change, you can use the --update-snapshot flag to re-generate snapshots for all failed tests.

nf-test test tests/main.nf.test --update-snapshot\n
"},{"location":"docs/assertions/snapshots/#cleaning-obsolete-snapshots","title":"Cleaning Obsolete Snapshots","text":"

0.8.0

Over time, snapshots can become outdated, leading to inconsistencies in your testing process. To help you manage obsolete snapshots, nf-test generates a list of these obsolete keys. This list provides transparency into which snapshots are no longer needed and can be safely removed.

Running your tests with the --clean-snapshot or --wipe-snapshot option removes the obsolete snapshots from the snapshot file. This option is useful when you want to maintain the structure of your snapshot file but remove unused entries. It ensures that your snapshot file only contains the snapshots required for your current tests, reducing file bloat and improving test performance.

nf-test test tests/main.nf.test --clean-snapshot\n

Obsolete snapshots can only be detected when running all tests in a test file simultaneously, and when all tests pass. If you run a single test or if tests are skipped, nf-test cannot detect obsolete snapshots.

"},{"location":"docs/assertions/snapshots/#constructing-complex-snapshots","title":"Constructing Complex Snapshots","text":"

It is also possible to include multiple objects into one snapshot:

assert snapshot(workflow.out.channel1, workflow.out.channel2).match()\n

Every object that is serializable can be included in snapshots. Therefore, you can even take a snapshot of the complete workflow or process object. This includes stdout, stderr, exit status, trace etc. and is the easiest way to create a test that checks all of these properties:

assert snapshot(workflow).match()\n

You can also include output files in a snapshot (e.g. useful in pipeline tests where no channels are available):

assert snapshot(\n    workflow,\n    path(\"${params.outdir}/file1.txt\"),\n    path(\"${params.outdir}/file2.txt\"),\n    path(\"${params.outdir}/file3.txt\")\n).match()\n

By default, the snapshot has the same name as the test. You can also store a snapshot under a user-defined name. This enables you to use multiple snapshots in one single test and to separate them in a logical way. In the following example, a workflow snapshot is created and stored under the name "workflow".

assert snapshot(workflow).match(\"workflow\")\n

The next example creates a snapshot of two files and saves it under \"files\".

assert snapshot(path(\"${params.outdir}/file1.txt\"), path(\"${params.outdir}/file2.txt\")).match(\"files\")\n

You can also use helper methods to add objects to snapshots. For example, you can use the list() method to add all files of a folder to a snapshot:

 assert snapshot(workflow, path(params.outdir).list()).match()\n
"},{"location":"docs/assertions/snapshots/#compressed-snapshots","title":"Compressed Snapshots","text":"

If you add complex objects with large content to snapshots, you can use the md5() function to store the hash sum instead of the content in the snapshot file:

 assert snapshot(hugeObject).md5().match()\n
"},{"location":"docs/assertions/snapshots/#file-paths","title":"File Paths","text":"

If nf-test detects a path in the snapshot, it automatically replaces it with a unique fingerprint of the file that ensures the file content is the same. By default, the fingerprint is the md5 sum.

"},{"location":"docs/assertions/snapshots/#snapshot-differences","title":"Snapshot Differences","text":"

0.8.0

By default, nf-test uses the diff tool with a set of default arguments for comparing snapshots. These default arguments are applied when no custom settings are specified.

If diff is not installed on the system, nf-test will print expected and found snapshots without highlighting differences.

"},{"location":"docs/assertions/snapshots/#customizing-diff-tool-arguments","title":"Customizing Diff Tool Arguments","text":"

Users have the flexibility to customize the arguments passed to the diff tool using an environment variable called NFT_DIFF_ARGS. This environment variable allows you to modify the way the diff tool behaves when comparing snapshots.

To customize the arguments, follow these steps:

  1. Set the NFT_DIFF_ARGS environment variable with your desired arguments.

    export NFT_DIFF_ARGS=\"<your_custom_arguments>\"\n
  2. Run nf-test to perform snapshot comparison, and it will utilize the custom arguments specified in NFT_DIFF_ARGS.

"},{"location":"docs/assertions/snapshots/#changing-the-diff-tool","title":"Changing the Diff Tool","text":"

nf-test not only allows you to customize the arguments but also provides the flexibility to change the diff tool itself. This can be achieved by using the environment variable NFT_DIFF.

"},{"location":"docs/assertions/snapshots/#example-using-icdiff","title":"Example: Using icdiff","text":"

As an example, you can change the diff tool to icdiff, which supports features like colors. To switch to icdiff, follow these steps:

  1. Install icdiff

  2. Set the NFT_DIFF environment variable to icdiff to specify the new diff tool.

    export NFT_DIFF=\"icdiff\"\n
  3. If needed, customize the arguments for icdiff using NFT_DIFF_ARGS as explained in the previous section.

    export NFT_DIFF_ARGS=\"-N --cols 200 -L expected -L observed -t\"\n
  4. Run nf-test, and it will use icdiff as the diff tool for comparing snapshots.

"},{"location":"docs/cli/clean/","title":"clean command","text":""},{"location":"docs/cli/clean/#usage","title":"Usage","text":"
nf-test clean\n

The clean command removes the .nf-test directory.

"},{"location":"docs/cli/coverage/","title":"coverage command","text":"

0.9.0

"},{"location":"docs/cli/coverage/#usage","title":"Usage","text":"
nf-test coverage\n

The coverage command prints information about the number of Nextflow files that are covered by a test.

"},{"location":"docs/cli/coverage/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/coverage/#-csv-filename","title":"--csv <filename>","text":"

Writes a coverage report in csv format.

"},{"location":"docs/cli/coverage/#-html-filename","title":"--html <filename>","text":"

Writes a coverage report in html format.
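
For example (the filenames are placeholders):

nf-test coverage --csv coverage.csv\nnf-test coverage --html coverage.html\n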

"},{"location":"docs/cli/generate/","title":"generate command","text":""},{"location":"docs/cli/generate/#usage","title":"Usage","text":"
nf-test generate <TEST_CASE_TYPE> <NEXTFLOW_FILES>\n
"},{"location":"docs/cli/generate/#supported-types","title":"Supported Types","text":""},{"location":"docs/cli/generate/#process","title":"process","text":""},{"location":"docs/cli/generate/#workflow","title":"workflow","text":""},{"location":"docs/cli/generate/#pipeline","title":"pipeline","text":""},{"location":"docs/cli/generate/#function","title":"function","text":""},{"location":"docs/cli/generate/#examples","title":"Examples","text":"

Create a test case for a process:

nf-test generate process modules/local/salmon_index.nf\n

Create test cases for all processes in the folder modules:

nf-test generate process modules/**/*.nf\n

Create a test case for a sub workflow:

nf-test generate workflow workflows/some_workflow.nf\n

Create a test case for the whole pipeline:

nf-test generate pipeline main.nf\n

Create a test case for each function in file functions.nf:

nf-test generate function functions.nf\n
"},{"location":"docs/cli/init/","title":"init command","text":""},{"location":"docs/cli/init/#usage","title":"Usage","text":"
nf-test init\n

The init command sets up nf-test in the current directory.

The init command creates the following files: nf-test.config and tests/nextflow.config. It also creates a folder tests, which is the home directory of your test code.

In the configuration section you can learn more about these files and how to customize the directory layout.

"},{"location":"docs/cli/list/","title":"list command","text":""},{"location":"docs/cli/list/#usage","title":"Usage","text":"

The list command provides a convenient way to list all available test cases.

nf-test list [<NEXTFLOW_FILES>|<SCRIPT_FOLDERS>]\n
"},{"location":"docs/cli/list/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/list/#-tags","title":"--tags","text":"

Print a list of all used tags.

"},{"location":"docs/cli/list/#-format-json","title":"--format json","text":"

Print the list of tests or tags as a json object.

"},{"location":"docs/cli/list/#-format-raw","title":"--format raw","text":"

Print the list of tests or tags as a simple list without formatting.

"},{"location":"docs/cli/list/#-silent","title":"--silent","text":"

Hide program version and header information.

"},{"location":"docs/cli/list/#-debug","title":"--debug","text":"

Show debugging information.

"},{"location":"docs/cli/list/#examples","title":"Examples","text":"
nf-test list --format json --silent\n[\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@69b98c67\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@fdb6c1cc\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@d1c219eb\",\"/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@3c54e3cb\",...]\n
nf-test list --format raw --silent\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@69b98c67\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@fdb6c1cc\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@d1c219eb\n/Users/lukfor/Development/git/nf-gwas/tests/main.nf.test@3c54e3cb\n...\n
nf-test list --tags --format json --silent\n[\"fastqc\",\"snakemake\"]\n
nf-test list --tags --format raw --silent\nfastqc\nsnakemake\n
"},{"location":"docs/cli/test/","title":"test command","text":""},{"location":"docs/cli/test/#usage","title":"Usage","text":"
nf-test test [<NEXTFLOW_FILES>|<SCRIPT_FOLDERS>]\n
"},{"location":"docs/cli/test/#optional-arguments","title":"Optional Arguments","text":""},{"location":"docs/cli/test/#-profile-nextflow_profile","title":"--profile <NEXTFLOW_PROFILE>","text":"

To run your test using a specific Nextflow profile, you can use the --profile argument. Learn more.

"},{"location":"docs/cli/test/#-dry-run","title":"--dry-run","text":"

This flag allows users to simulate the execution of tests.

"},{"location":"docs/cli/test/#-verbose","title":"--verbose","text":"

Prints out the Nextflow output during test runs.

"},{"location":"docs/cli/test/#-without-trace","title":"--without-trace","text":"

The Linux tool procps is required to run Nextflow tracing. In case your container does not support this tool, you can also run nf-test without tracing. Please note that workflow.trace is not available when running with this flag.

"},{"location":"docs/cli/test/#-tag-tag","title":"--tag <tag>","text":"

Execute only tests with the provided tag. Multiple tags can be used and have to be separated by commas (e.g. tag1,tag2).

"},{"location":"docs/cli/test/#-debug","title":"--debug","text":"

The debug parameter prints out debugging messages and all available output channels which can be accessed in the then clause.

"},{"location":"docs/cli/test/#output-reports","title":"Output Reports","text":""},{"location":"docs/cli/test/#-tap-filename","title":"--tap <filename>","text":"

Writes test results in TAP format to a file.

"},{"location":"docs/cli/test/#-junitxml-filename","title":"--junitxml <filename>","text":"

Writes test results in JUnit XML format to a file, which conforms to the standard schema.

"},{"location":"docs/cli/test/#-csv-filename","title":"--csv <filename>","text":"

Writes test results to a csv file.

"},{"location":"docs/cli/test/#-ci","title":"--ci","text":"

By default, nf-test automatically stores a new snapshot. When CI mode is activated, nf-test will fail the test instead of storing the snapshot automatically.

"},{"location":"docs/cli/test/#-filter-types","title":"--filter <types>","text":"

Filter test cases by specified types (e.g., module, pipeline, workflow or function). Multiple types can be separated by commas.
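
For example, to run only module and function tests:

nf-test test --filter module,function\n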

"},{"location":"docs/cli/test/#optimizing-test-execution","title":"Optimizing Test Execution","text":""},{"location":"docs/cli/test/#-related-tests-files","title":"--related-tests <files>","text":"

Finds and executes all related tests for the provided .nf or .nf.test files. Multiple files can be provided space-separated.

"},{"location":"docs/cli/test/#-follow-dependencies","title":"--follow-dependencies","text":"

When this flag is set, nf-test will traverse all dependencies when the related-tests flag is set. This option is particularly useful when you need to ensure that all dependent tests are executed, bypassing the firewall calculation process.

"},{"location":"docs/cli/test/#-only-changed","title":"--only-changed","text":"

When enabled, this parameter instructs nf-test to execute tests only for files that have been modified within the current git working tree.

"},{"location":"docs/cli/test/#-changed-since-commit_hashbranch_name","title":"--changed-since <commit_hash|branch_name>","text":"

This parameter triggers the execution of tests related to changes made since the specified commit, e.g. --changed-since HEAD^ for all changes between HEAD and HEAD - 1.

"},{"location":"docs/cli/test/#-changed-until-commit_hashbranch_name","title":"--changed-until <commit_hash|branch_name>","text":"

This parameter initiates the execution of tests related to changes made until the specified commit hash.
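
For example, to run tests for changes made up to the previous commit:

nf-test test --changed-until HEAD^\n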

"},{"location":"docs/cli/test/#-graph-filename","title":"--graph <filename>","text":"

Enables the export of the dependency graph as a dot file. The dot file format is commonly used for representing graphs in Graphviz and other related software.

"},{"location":"docs/cli/test/#sharding","title":"Sharding","text":"

Sharding allows users to divide the execution workload into manageable chunks, which can be useful for parallel or distributed processing.

"},{"location":"docs/cli/test/#-shard-shard","title":"--shard <shard>","text":"

Splits the execution into arbitrary chunks defined by the format i/n, where i denotes the index of the current chunk and n represents the total number of chunks. For instance, 2/5 executes the second chunk out of five.

"},{"location":"docs/cli/test/#-shard-strategy-strategy","title":"--shard-strategy <strategy>","text":"

Specifies the strategy used to build shards when the --shard parameter is utilized. Accepted values are round-robin or none. This parameter determines the method employed to distribute workload chunks among available resources. With the round-robin strategy, shards are distributed evenly among resources in a cyclic manner. The none strategy implies that shards won't be distributed automatically, and it's up to the user to manage the assignment of shards. The default value is round-robin.
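
For example, to execute the second of five chunks without automatic distribution (a sketch combining both flags):

nf-test test --shard 2/5 --shard-strategy none\n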

"},{"location":"docs/cli/test/#examples","title":"Examples","text":"
nf-test test\n
nf-test test tests/modules/local/salmon_index.nf.test tests/modules/bwa_index.nf.test\n\nnf-test test tests/modules tests/modules/bwa_index.nf.test\n
nf-test test tests/main.nf.test@d41119e4\n
nf-test test --tap report.tap\n
nf-test test --related-tests modules/module_a.nf modules/module_b.nf\n
nf-test test --only-changed\n
nf-test test --changed-since HEAD^\n
nf-test test --shard 2/4 \n
"},{"location":"docs/plugins/developing-plugins/","title":"Plugin Development","text":"

New in version 0.7.0

The following plugin can be used as a boilerplate: https://github.com/askimed/nft-fasta

"},{"location":"docs/plugins/developing-plugins/#developing-plugins","title":"Developing Plugins","text":"

A plugin can do two things:

  1. Add a new method to an existing class (e.g. the property fasta on class Path). This uses Groovy's ExtensionModule concept. Important: the method has to be static. One class can provide multiple methods.
// com.askimed.nf.test.fasta.PathExtension\npublic class PathExtension {\n  //can be used as: path(filename).fasta\n    public static Object getFasta(Path self) {\n    return FastaUtil.readAsMap(self);\n  }\n\n}\n
  2. Provide new global methods
// com.askimed.nf.test.fasta.Methods\npublic class Methods {\n\n  //can be used as: helloFasta()\n  public static void helloFasta() {\n    System.out.println(\"Hello FASTA\");\n  }\n\n}\n
"},{"location":"docs/plugins/developing-plugins/#manifest-file","title":"Manifest file","text":"

You need to create a file META-INF/nf-test-plugin (in your resources). This file contains metadata about the plugin; both classes above can then be registered by using the extensionClasses and extensionMethods properties.

moduleName=nft-my-plugin\nmoduleVersion=1.0.0\nmoduleAuthors=Lukas Forer\nextensionClasses=com.askimed.nf.test.fasta.PathExtension\nextensionMethods=com.askimed.nf.test.fasta.Methods\n
"},{"location":"docs/plugins/developing-plugins/#building-a-jar-file","title":"Building a jar file","text":"

The plugin itself is a jar file that contains all classes and the META-INF/nf-test-plugin file. If you have dependencies, you have to create an uber-jar that includes all libraries, because nf-test doesn't support the classpath set in META-INF/MANIFEST.MF.
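As a minimal sketch of one way to build such an uber-jar (the Gradle Shadow plugin and the example dependency are assumptions about your build setup, not something prescribed by nf-test):

// build.gradle -- minimal uber-jar sketch using the Gradle Shadow plugin\nplugins {\n    id 'groovy'\n    id 'com.github.johnrengelman.shadow' version '8.1.1'\n}\n\nrepositories {\n    mavenCentral()\n}\n\ndependencies {\n    // example third-party dependency that gets bundled into the jar\n    implementation 'org.apache.commons:commons-csv:1.10.0'\n}\n\n// place META-INF/nf-test-plugin under src/main/resources;\n// running 'gradle shadowJar' then produces build/libs/<name>-all.jar with all dependencies included\n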

"},{"location":"docs/plugins/developing-plugins/#publishing-plugins","title":"Publishing Plugins","text":"

Available plugins are managed in this default repository: https://github.com/askimed/nf-test-plugins/blob/main/plugins.json

Add your plugin or a new release to the plugins.json file and create a pull request to publish your plugin in the default repository, or host your own repository:

[{\n  \"id\": \"nft-fasta\",\n  \"releases\": [{\n    \"version\": \"1.0.0\",\n    \"url\": \"https://github.com/askimed/nft-fasta/releases/download/v1.0.0/nft-fasta-1.0.0.jar\",\n  },{\n    \"version\": \"2.0.0\",\n    \"url\": \"https://github.com/askimed/nft-fasta/releases/download/v2.0.0/nft-fasta-2.0.0.jar\",\n  }]\n},{\n  \"id\": \"nft-my-plugin\",\n  \"releases\": [{\n    \"version\": \"1.0.0\",\n    \"url\": \"https://github.com/lukfor/nft-my-plugin2/releases/download/v1.0.0/nft-my-plugin-1.0.0.jar\",\n  }]\n}]\n
"},{"location":"docs/plugins/using-plugins/","title":"Plugins","text":"

New in version 0.7.0

Most assertions are use-case specific. Therefore, separating this functionality and helper classes from the nf-test codebase has several advantages:

  1. nf-test releases are independent from plugin releases
  2. it is easier for third parties to develop and maintain plugins
  3. it is possible to use private repositories to integrate private/protected code in plugins without sharing it

For this purpose, we integrated a plugin system that provides (a) the possibility to extend existing classes with custom methods (e.g. path(filename).fasta) and (b) the possibility to extend nf-test with new methods.

"},{"location":"docs/plugins/using-plugins/#using-plugins","title":"Using Plugins","text":"

Available plugins are listed in the default repository: https://github.com/askimed/nf-test-plugins/blob/main/plugins.json

A plugin can be activated via nf-test.config by adding the plugins section and using the load method to specify the plugin and its version:

config {\n\n  plugins {\n\n    load \"nft-fasta@1.0.0\"\n\n  }\n\n}\n

It is also possible to add one or more additional repositories (for example, a repository with development/snapshot versions, an in-house repository, ...).

config {\n\n  plugins {\n\n    repository \"https://github.com/askimed/nf-test-plugins/blob/main/plugins-snapshots.json\"\n    repository \"https://github.com/seppinho/nf-test-plugin2/blob/main/plugins.json\"\n\n    load \"nft-fasta@1.1.0-snapshot\"\n    load \"nft-plugin2@1.1.0\"\n\n    // you can also load jar files directly without any repository\n    // loadFromFile \"path/to/my/nft-plugin.jar\"\n  }\n\n}\n

All plugins are downloaded and cached in .nf-test/plugins. This installation mechanism is not yet safe for parallel execution when multiple nf-test instances resolve the same plugin. However, you can use nf-test update-plugins to download all plugins before you run your tests in parallel.

To clear the cache and force re-downloading of plugins and repositories, you can execute the nf-test clean command.
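For example, to pre-fetch all configured plugins before a parallel run and to reset the cache afterwards:

nf-test update-plugins\nnf-test clean\n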

One or more plugins can also be activated via the --plugins parameter:

nf-test test my-test.nf.test --plugins nft-fasta@1.0.0,plugin2@1.0.0\n

or

nf-test test my-test.nf.test --plugins path/to/my/nft-plugin.jar\n
"},{"location":"docs/testcases/","title":"Documentation","text":""},{"location":"docs/testcases/global_variables/","title":"Global Variables","text":"

The following variables are available and can be used in setup, when, then and cleanup closures.

| Name | Description | Example |
| --- | --- | --- |
| baseDir or projectDir | The directory where the nf-test.config script is located. | mypipeline |
| moduleDir | The directory where the module script is located. | mypipeline/modules/mymodule |
| moduleTestDir | The directory where the test script is located. | mypipeline/tests/modules/mymodule |
| launchDir | The directory where the test is run. | mypipeline/.nf-test/tests/<test_hash> |
| metaDir | The directory where all meta files are located (e.g. mock.nf). | mypipeline/.nf-test/tests/<test_hash>/meta |
| workDir | The directory where tasks' temporary files are created. | mypipeline/.nf-test/tests/<test_hash>/work |
| outputDir | An output directory in the $launchDir that can be used to store output files. The variable contains the absolute path. If you need a relative output directory, see the launchDir example. | mypipeline/.nf-test/tests/<test_hash>/output |
| params | Dictionary-like object holding all parameters. | |
"},{"location":"docs/testcases/global_variables/#examples","title":"Examples","text":""},{"location":"docs/testcases/global_variables/#outputdir","title":"outputDir","text":"

This variable points to the output directory within the temporary test directory (.nf-test/tests/<test-dir>/output/). The variable can be set under params:

params {\n    outdir = \"$outputDir\"\n}\n
"},{"location":"docs/testcases/global_variables/#basedir","title":"baseDir","text":"

This variable points to the base directory of the project, i.e. the location of the main nf-test config. The variable can be used e.g. in the process definition to build absolute paths for input files:

process {\n    \"\"\"\n    file1 = file(\"$baseDir/tests/input/file123.gz\")\n    \"\"\"\n}\n
"},{"location":"docs/testcases/global_variables/#launchdir","title":"launchDir","text":"

This variable points to the directory where the test is executed. It can be used to access results that are created in a relative output directory:

when {\n    params {\n        outdir = \"results\"\n    }\n}\n
then {\n    assert path(\"$launchDir/results\").exists()\n}\n
"},{"location":"docs/testcases/nextflow_function/","title":"Function Testing","text":"

nf-test allows testing of functions that are defined in a Nextflow file or in lib. Please check out the CLI to generate a function test.

"},{"location":"docs/testcases/nextflow_function/#syntax","title":"Syntax","text":"
nextflow_function {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    function \"<FUNCTION_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.
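For example, a test script might reference a module one directory up; the file and function names here are illustrative:

nextflow_function {\n\n    name \"Test my functions\"\n    // resolved relative to the location of this test script\n    script \"../my_functions.nf\"\n    function \"my_function\"\n\n    test(\"should run\") {\n      ...\n    }\n}\n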

"},{"location":"docs/testcases/nextflow_function/#multiple-functions","title":"Multiple Functions","text":"

If a Nextflow script contains multiple functions and you want to test them all in the same test suite, you can override the function property in each test. For example:

"},{"location":"docs/testcases/nextflow_function/#functionsnf","title":"functions.nf","text":"
def function1() {\n  ...\n}\n\ndef function2() {\n  ...\n}\n
"},{"location":"docs/testcases/nextflow_function/#functionsnftest","title":"functions.nf.test","text":"
nextflow_function {\n\n    name \"Test functions\"\n    script \"functions.nf\"\n\n    test(\"Test function1\") {\n      function \"function1\"\n      ...\n    }\n\n    test(\"Test function2\") {\n      function \"function2\"\n      ...\n    }\n}\n
"},{"location":"docs/testcases/nextflow_function/#functions-in-lib-folder","title":"Functions in lib folder","text":"

If you want to test a function that is inside a Groovy file in your lib folder, you can omit the script property, because Nextflow automatically adds these files to the classpath. For example:

"},{"location":"docs/testcases/nextflow_function/#libutilsgroovy","title":"lib\\Utils.groovy","text":"
class Utils {\n\n    public static void sayHello(name) {\n        if (name == null) {\n            error('Cannot greet a null person')\n        }\n\n        def greeting = \"Hello ${name}\"\n\n        println(greeting)\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_function/#testslibutilsgroovytest","title":"tests\\lib\\Utils.groovy.test","text":"
nextflow_function {\n\n    name \"Test Utils.groovy\"\n\n    test(\"Test function1\") {\n      function \"Utils.sayHello\"\n      ...\n    }\n}\n

Note: the generate function command works only with Nextflow functions.

"},{"location":"docs/testcases/nextflow_function/#assertions","title":"Assertions","text":"

The function object can be used in asserts to check its status, result value or error messages.

// function status\nassert function.success\nassert function.failed\n\n// return value\nassert function.result == 27\n\n// function.stdout returns a list containing all lines from stdout\nassert function.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_function/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_function/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it functions.nf.

def say_hello(name) {\n    if (name == null) {\n        error('Cannot greet a null person')\n    }\n\n    def greeting = \"Hello ${name}\"\n\n    println(greeting)\n    return greeting\n}\n
"},{"location":"docs/testcases/nextflow_function/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it functions.nf.test.

nextflow_function {\n\n  name \"Test Function Say Hello\"\n\n  script \"functions.nf\"\n  function \"say_hello\"\n\n  test(\"Passing case\") {\n\n    when {\n      function {\n        \"\"\"\n        input[0] = \"aaron\"\n        \"\"\"\n      }\n    }\n\n    then {\n      assert function.success\n      assert function.result == \"Hello aaron\"\n      assert function.stdout.contains(\"Hello aaron\")\n      assert function.stderr.isEmpty()\n    }\n\n  }\n\n  test(\"Failure Case\") {\n\n    when {\n      function {\n        \"\"\"\n        input[0] = null\n        \"\"\"\n      }\n    }\n\n    then {\n      assert function.failed\n      //It seems to me that error(..) writes message to stdout\n      assert function.stdout.contains(\"Cannot greet a null person\")\n    }\n  }\n}\n
"},{"location":"docs/testcases/nextflow_function/#execute-test","title":"Execute test","text":"
nf-test test functions.nf.test\n
"},{"location":"docs/testcases/nextflow_pipeline/","title":"Pipeline Testing","text":"

nf-test also allows you to test the complete pipeline end-to-end. Please check out the CLI to generate a pipeline test.

"},{"location":"docs/testcases/nextflow_pipeline/#syntax","title":"Syntax","text":"
nextflow_pipeline {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#assertions","title":"Assertions","text":"

The workflow object can be used in asserts to check its status, error messages or traces.

// workflow status\nassert workflow.success\nassert workflow.failed\nassert workflow.exitStatus == 0\n\n// workflow error message\nassert workflow.errorReport.contains(\"....\")\n\n// trace\n//returns a list containing succeeded tasks\nassert workflow.trace.succeeded().size() == 3\n\n//returns a list containing failed tasks\nassert workflow.trace.failed().size() == 0\n\n//returns a list containing all tasks\nassert workflow.trace.tasks().size() == 3\n
"},{"location":"docs/testcases/nextflow_pipeline/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_pipeline/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it pipeline.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess SAY_HELLO {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n\nworkflow {\n    input = params.input_text.trim().split(',')\n    Channel.from(input) | SAY_HELLO\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it pipeline.nf.test.

nextflow_pipeline {\n\n    name \"Test Pipeline with 1 process\"\n    script \"pipeline.nf\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            params {\n              input_text = \"hello,nf-test\"\n            }\n        }\n\n        then {\n            assert workflow.success\n            assert workflow.trace.tasks().size() == 2\n        }\n\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_pipeline/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test pipeline.nf.test\n
"},{"location":"docs/testcases/nextflow_process/","title":"Process Testing","text":"

nf-test allows you to test each process defined in a module file. Please check out the CLI to generate a process test.

"},{"location":"docs/testcases/nextflow_process/#syntax","title":"Syntax","text":"
nextflow_process {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    process \"<PROCESS_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.

"},{"location":"docs/testcases/nextflow_process/#assertions","title":"Assertions","text":"

The process object can be used in asserts to check its status or error messages.

// process status\nassert process.success\nassert process.failed\nassert process.exitStatus == 0\n\n// Analyze Nextflow trace file\nassert process.trace.tasks().size() == 1\n\n// process error message\nassert process.errorReport.contains(\"....\")\n\n// process.stdout returns a list containing all lines from stdout\nassert process.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_process/#output-channels","title":"Output Channels","text":"

The process.out object provides access to the content of all named output Channels (see Nextflow emit):

// channel exists\nassert process.out.my_channel != null\n\n// channel contains 3 elements\nassert process.out.my_channel.size() == 3\n\n// first element is \"hello\"\nassert process.out.my_channel.get(0) == \"hello\"\n

Channels that lack explicit names can be addressed using square brackets and the corresponding index. This indexing method provides a straightforward way to interact with channels without the need for predefined names. To access the first output channel, you can use the index [0] as demonstrated below:

// channel exists\nassert process.out[0] != null\n\n// channel contains 3 elements\nassert process.out[0].size() == 3\n\n// first element is \"hello\"\nassert process.out[0].get(0) == \"hello\"\n
"},{"location":"docs/testcases/nextflow_process/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_process/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it say_hello.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess SAY_HELLO {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n
"},{"location":"docs/testcases/nextflow_process/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it say_hello.nf.test.

nextflow_process {\n\n    name \"Test Process SAY_HELLO\"\n    script \"say_hello.nf\"\n    process \"SAY_HELLO\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            process {\n                \"\"\"\n                input[0] = Channel.from('hello','nf-test')\n                \"\"\"\n            }\n        }\n\n        then {\n\n            assert process.success\n            assert process.trace.tasks().size() == 2\n\n            with(process.out.verbiage_ch2) {\n                assert size() == 2\n                assert path(get(0)).readLines().size() == 1\n                assert path(get(1)).readLines().size() == 1\n                assert path(get(1)).md5 == \"4a17df7a54b41a84df492da3f1bab1e3\"\n            }\n\n        }\n\n    }\n}\n
"},{"location":"docs/testcases/nextflow_process/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test say_hello.nf.test\n
"},{"location":"docs/testcases/nextflow_workflow/","title":"Workflow Testing","text":"

nf-test also allows you to test a specific workflow. Please check out the CLI to generate a workflow test.

"},{"location":"docs/testcases/nextflow_workflow/#syntax","title":"Syntax","text":"
nextflow_workflow {\n\n    name \"<NAME>\"\n    script \"<PATH/TO/NEXTFLOW_SCRIPT.nf>\"\n    workflow \"<WORKFLOW_NAME>\"\n\n    test(\"<TEST_NAME>\") {\n\n    }\n}\n

Script paths that start with ./ or ../ are considered relative paths. These paths are resolved based on the location of the test script. Relative paths are beneficial when you want to reference files or directories located within the same directory as your test script or in a parent directory. These paths provide a convenient way to access files without specifying the entire path.

"},{"location":"docs/testcases/nextflow_workflow/#assertions","title":"Assertions","text":"

The workflow object can be used in asserts to check its status, error messages or traces.

// workflow status\nassert workflow.success\nassert workflow.failed\nassert workflow.exitStatus == 0\n\n// workflow error message\nassert workflow.errorReport.contains(\"....\")\n\n// trace\n//returns a list containing succeeded tasks\nassert workflow.trace.succeeded().size() == 3\n\n//returns a list containing failed tasks\nassert workflow.trace.failed().size() == 0\n\n//returns a list containing all tasks\nassert workflow.trace.tasks().size() == 3\n\n// workflow.stdout returns a list containing all lines from stdout\nassert workflow.stdout.contains(\"Hello World\")\n
"},{"location":"docs/testcases/nextflow_workflow/#output-channels","title":"Output Channels","text":"

The workflow.out object provides access to the content of all named output Channels (see Nextflow emit):

// channel exists\nassert workflow.out.my_channel != null\n\n// channel contains 3 elements\nassert workflow.out.my_channel.size() == 3\n\n// first element is \"hello\"\nassert workflow.out.my_channel.get(0) == \"hello\"\n
"},{"location":"docs/testcases/nextflow_workflow/#example","title":"Example","text":""},{"location":"docs/testcases/nextflow_workflow/#nextflow-script","title":"Nextflow script","text":"

Create a new file and name it trial.nf.

#!/usr/bin/env nextflow\nnextflow.enable.dsl=2\n\nprocess sayHello {\n    input:\n        val cheers\n\n    output:\n        stdout emit: verbiage_ch\n        path '*.txt', emit: verbiage_ch2\n\n    script:\n    \"\"\"\n    echo -n $cheers\n    echo -n $cheers > ${cheers}.txt\n    \"\"\"\n}\n\nworkflow trial {\n    take: things\n    main:\n        sayHello(things)\n        sayHello.out.verbiage_ch.view()\n    emit:\n        trial_out_ch = sayHello.out.verbiage_ch2\n}\n\nworkflow {\n    Channel.from('hello','nf-test') | trial\n}\n
"},{"location":"docs/testcases/nextflow_workflow/#nf-test-script","title":"nf-test script","text":"

Create a new file and name it trial.nf.test.

nextflow_workflow {\n\n    name \"Test Workflow Trial\"\n    script \"trial.nf\"\n    workflow \"trial\"\n\n    test(\"Should run without failures\") {\n\n        when {\n            workflow {\n                \"\"\"\n                input[0] = Channel.from('hello','nf-test')\n                \"\"\"\n            }\n        }\n\n        then {\n\n            assert workflow.success\n\n            with(workflow.out.trial_out_ch) {\n                assert size() == 2\n                assert path(get(0)).readLines().size() == 1\n                assert path(get(1)).readLines().size() == 1\n                assert path(get(1)).md5 == \"4a17df7a54b41a84df492da3f1bab1e3\"\n            }\n\n        }\n\n    }\n\n}\n
"},{"location":"docs/testcases/nextflow_workflow/#execute-test","title":"Execute test","text":"
nf-test init\nnf-test test trial.nf.test\n
"},{"location":"docs/testcases/params/","title":"Params Dictionary","text":"

The params block is optional and is a simple map that can be used to overwrite Nextflow's input params. The params block is located in the when block of a testcase. You can set params manually:

when {\n    params {\n        outdir = \"output\"\n    }\n}\n

It is also possible to set nested params using the same syntax as in your Nextflow script:

when {\n    params {\n        output {\n          dir = \"output\"\n        }\n    }\n}\n

The params map can also be used in the then block:

then {\n    assert params.output == \"output\"    \n}\n
"},{"location":"docs/testcases/params/#load-params-from-files","title":"Load params from files","text":"

In addition, you can load the params from a JSON file:

when {\n    params {\n        load(\"$baseDir/tests/params.json\")\n    }\n}\n

or from a YAML file:

when {\n    params {\n        load(\"$baseDir/tests/params.yaml\")\n    }\n}\n

nf-test allows you to combine both techniques; therefore, it is possible to overwrite one or more params from the JSON file:

when {\n    params {\n        load(\"$baseDir/tests/params.json\")\n        outputDir = \"new/output/path\"\n    }\n}\n
"},{"location":"docs/testcases/setup/","title":"Setup Method","text":"

The setup method allows you to specify processes or workflows that need to be executed before the primary when block. It serves as a mechanism to prepare the required input data or set up essential steps prior to the primary processing block.

"},{"location":"docs/testcases/setup/#syntax","title":"Syntax","text":"

The setup method is typically used within the context of a test case. The basic syntax for the setup method is as follows:

test(\"my test\"){\n    setup {\n        // Define and execute dependent processes or workflows here\n    }\n}\n

Within the setup block, you can use the run method to define and execute dependent processes or workflows.

The run method syntax for a process is as follows:

run(\"ProcessName\") {\n    script \"path/to/process/script.nf\"\n    process {\n        // Define the process inputs here\n    }\n}\n

The run method syntax for a workflow is as follows:

run(\"WorkflowName\") {\n    script \"path/to/workflow/script.nf\"\n    workflow {\n        // Define the workflow inputs here\n    }\n}\n

If you need to run the same process multiple times, you can set the alias of the process:

run(\"GENERATE_DATA\", alias: \"MY_PROCESS\") {\n    script \"./generate_data.nf\"\n    process {\n       ...\n    }\n}\n

Warning

Please keep in mind that changes in processes or workflows that are executed in the setup method can result in a failed test run.

"},{"location":"docs/testcases/setup/#example-usage","title":"Example Usage","text":""},{"location":"docs/testcases/setup/#1-local-setup-method","title":"1. Local Setup Method","text":"

In this example, we create a setup method within a Nextflow process definition to execute a dependent process named \"ABRICATE_RUN.\" This process generates input data that is required for the primary process \"ABRICATE_SUMMARY.\" The setup block specifies the execution of \"ABRICATE_RUN,\" and the when block defines the processing logic for \"ABRICATE_SUMMARY.\"

nextflow_process {\n\n    name \"Test process data\"\n\n    script \"../main.nf\"\n    process \"ABRICATE_SUMMARY\"\n    config \"./nextflow.config\"\n\n    test(\"Should use process ABRICATE_RUN to generate input data\") {\n\n        setup {\n\n            run(\"ABRICATE_RUN\") {\n                script \"../../run/main.nf\"\n                process {\n                    \"\"\"\n                    input[0] =  Channel.fromList([\n                        tuple([ id:'test1', single_end:false ], // meta map\n                            file(params.test_data['bacteroides_fragilis']['genome']['genome_fna_gz'], checkIfExists: true)),\n                        tuple([ id:'test2', single_end:false ],\n                            file(params.test_data['haemophilus_influenzae']['genome']['genome_fna_gz'], checkIfExists: true))\n                    ])\n                    \"\"\"\n                }\n            }\n\n        }\n\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n}\n
"},{"location":"docs/testcases/setup/#2-global-setup-method","title":"2. Global Setup Method","text":"

In this example, a global setup method is defined for all tests within a Nextflow process definition. The setup method is applied to multiple test cases, ensuring consistent setup for each test. This approach is useful when multiple tests share the same setup requirements.

nextflow_process {\n\n    name \"Test process data\"\n\n    script \"../main.nf\"\n    process \"ABRICATE_SUMMARY\"\n    config \"./nextflow.config\"\n\n    setup {\n        run(\"ABRICATE_RUN\") {\n            script \"../../run/main.nf\"\n            process {\n                \"\"\"\n                input[0] =  Channel.fromList([\n                    tuple([ id:'test1', single_end:false ], // meta map\n                        file(params.test_data['bacteroides_fragilis']['genome']['genome_fna_gz'], checkIfExists: true)),\n                    tuple([ id:'test2', single_end:false ],\n                        file(params.test_data['haemophilus_influenzae']['genome']['genome_fna_gz'], checkIfExists: true))\n                ])\n                \"\"\"\n            }\n        }\n    }\n\n    test(\"first test\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n    test(\"second test\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = ABRICATE_RUN.out.report.collect{ meta, report -> report }.map{ report -> [[ id: 'test_summary'], report]}\n                \"\"\"\n            }\n        }\n        then {\n            assert process.success\n            assert snapshot(process.out).match()\n        }\n    }\n\n}\n
"},{"location":"docs/testcases/setup/#3-aliasing-of-dependencies","title":"3. Aliasing of Dependencies","text":"

In this example, the process UNTAR is used multiple times in the setup method:

nextflow_process {\n\n    ...\n\n    setup {\n\n        run(\"UNTAR\", alias: \"UNTAR1\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n\n        run(\"UNTAR\", alias: \"UNTAR2\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n\n        run(\"UNTAR\", alias: \"UNTAR3\") {\n            script \"modules/nf-core/untar/main.nf\"\n            process {\n            \"\"\"\n            input[0] = Channel.fromList(...)\n            \"\"\"\n            }\n        }\n    }\n\n    test(\"Test with three different inputs\") {\n        when {\n            process {\n                \"\"\"\n                input[0] = UNTAR1.out.untar.map{ it[1] }\n                input[1] = UNTAR2.out.untar.map{ it[1] }\n                input[2] = UNTAR3.out.untar.map{ it[1] }\n                \"\"\"\n            }\n        }\n\n        then {\n            ...\n        }\n\n }\n\n}\n
"},{"location":"tutorials/github-actions/","title":"Setup nf-test on GitHub Actions","text":"

Warning

This feature is experimental and requires nf-test 0.9.0-rc1 or higher

In this tutorial, we will guide you through setting up and running nf-test on GitHub Actions. We will start with a simple example where all tests run in a single job, then extend it to demonstrate how you can use sharding to distribute tests across multiple jobs for improved efficiency. Finally, we will show you how to run only the tests affected by the changed files using the --changed-since option.

By the end of this tutorial, you will have a clear understanding of how to:

  1. Set up a basic CI workflow for running nf-test on GitHub Actions.
  2. Extend the workflow to use sharding, allowing tests to run in parallel, which can significantly reduce the overall execution time.
  3. Configure the workflow to run only the test cases affected by the changed files, optimizing the CI process further.

Whether you are maintaining a complex bioinformatics pipeline or a simple data analysis workflow, integrating nf-test with GitHub Actions will help ensure the robustness and reliability of your code. Let's get started!

"},{"location":"tutorials/github-actions/#step-1-running-nf-test","title":"Step 1: Running nf-test","text":"

Create a file named .github/workflows/ci-tests.yml in your repository with the following content:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://get.nf-test.com | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests\n        run: nf-test test --ci\n
"},{"location":"tutorials/github-actions/#explanation","title":"Explanation:","text":"
  1. Checkout: Uses the actions/checkout@v4 action to check out the repository.
  2. Set up JDK 11: Uses the actions/setup-java@v2 action to set up Java Development Kit version 11.
  3. Setup Nextflow: Uses the nf-core/setup-nextflow@v1 action to install the latest-edge version of Nextflow.
  4. Install nf-test: Downloads and installs nf-test.
  5. Run Tests: Runs nf-test with the --ci flag. This activates CI mode: instead of automatically storing a new snapshot as usual, nf-test will fail the test if no reference snapshot is available. This makes tests fail when a snapshot file has not been committed.
"},{"location":"tutorials/github-actions/#step-2-extending-to-use-sharding","title":"Step 2: Extending to Use Sharding","text":"

To distribute the tests across multiple jobs, you can set up sharding. Update your workflow file as follows:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        shard: [1, 2, 3, 4]\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://get.nf-test.com | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }})\n        run: nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }}\n
"},{"location":"tutorials/github-actions/#explanation-of-sharding","title":"Explanation of Sharding:","text":"
  1. Matrix Strategy: The strategy section defines a matrix with a shard parameter that has four values: [1, 2, 3, 4]. This will create four parallel jobs, one for each shard.
  2. Run Tests with Sharding: The run command for the tests is updated to nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }}. This command runs the tests for the specific shard: ${{ matrix.shard }} represents the current shard number, and ${{ strategy.job-total }} represents the total number of shards.
"},{"location":"tutorials/github-actions/#step-3-running-only-tests-affected-by-changed-files","title":"Step 3: Running Only Tests Affected by Changed Files","text":"

To optimize the workflow further, you can run only the tests that are affected by the changed files since the last commit. Update your workflow file as follows:

name: CI Tests\n\non: [push, pull_request]\n\njobs:\n  test:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        shard: [1, 2, 3, 4]\n    steps:\n      - name: Checkout\n        uses: actions/checkout@v4\n        with:\n          fetch-depth: 0\n\n      - name: Set up JDK 11\n        uses: actions/setup-java@v2\n        with:\n          java-version: '11'\n          distribution: 'adopt'\n\n      - name: Setup Nextflow latest-edge\n        uses: nf-core/setup-nextflow@v1\n        with:\n          version: \"latest-edge\"\n\n      - name: Install nf-test\n        run: |\n          wget -qO- https://get.nf-test.com | bash\n          sudo mv nf-test /usr/local/bin/\n\n      - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }})\n        run: nf-test test --ci --shard ${{ matrix.shard }}/${{ strategy.job-total }} --changed-since HEAD^\n
"},{"location":"tutorials/github-actions/#explanation-of-changes","title":"Explanation of Changes:","text":"
  1. Checkout with Full History: The actions/checkout@v4 action is updated with fetch-depth: 0 to fetch the full history of the repository. This is necessary for accurately determining the changes since the last commit.
  2. Run Tests with Changed Files: The run command is further updated to include the --changed-since HEAD^ option. This option ensures that only the tests affected by the changes since the previous commit are run.
"},{"location":"tutorials/github-actions/#step-4-adapting-nf-testconfig-to-trigger-full-test-runs","title":"Step 4: Adapting nf-test.config to Trigger Full Test Runs","text":"

In some cases, changes to specific critical files should trigger a full test run, regardless of other changes. To configure this, you need to adapt your nf-test.config file.

Add the following lines to your nf-test.config:

config {\n    ...\n    triggers 'nextflow.config', 'nf-test.config', 'test-data/**/*'\n    ...\n}\n

The triggers directive in nf-test.config specifies a list of filenames or patterns that should trigger a full test run. For example:

  - 'nextflow.config': Changes to the main Nextflow configuration file will trigger a full test run.
  - 'nf-test.config': Changes to the nf-test configuration file itself will trigger a full test run.
  - 'test-data/**/*': Changes to any files within the test-data directory will trigger a full test run.

This configuration ensures that critical changes always result in a comprehensive validation of the pipeline, providing additional confidence in your CI process.

"},{"location":"tutorials/github-actions/#step-5-additional-useful-options","title":"Step 5: Additional useful Options","text":"

The --filter flag allows you to selectively run test cases based on their specified types. For example, you can filter tests by module, pipeline, workflow, or function. This is particularly useful when you have a large suite of tests and need to focus on specific areas of functionality. By separating multiple types with commas, you can run a customized subset of tests that match the exact criteria you're interested in, thereby saving time and resources.

The --related-tests flag enables you to identify and execute all tests related to the provided .nf or nf.test files. This is ideal for scenarios where you have made changes to specific files and want to ensure that only the relevant tests are run. You can provide multiple files by separating them with spaces, which makes it easy to manage and test multiple changes at once, ensuring thorough validation of your updates.

When the --follow-dependencies flag is set, the nf-test tool will automatically traverse and execute all tests for dependencies related to the files specified with the --related-tests flag. This ensures that any interdependent components are also tested, providing comprehensive coverage. This option is particularly useful for complex projects with multiple dependencies, as it bypasses the firewall calculation process and guarantees that all necessary tests are executed.

The --changed-until flag allows you to run tests based on changes made up until a specified commit hash or branch name. By default, this parameter uses HEAD, but you can specify any commit or branch to target the changes made up to that point. This is particularly useful for validating changes over a specific range of commits, ensuring that all modifications within that period are tested comprehensively.
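As an illustrative combination of these options (the filter value and commit references are placeholders, not a prescribed invocation):

nf-test test --ci --filter process --changed-since origin/main --changed-until HEAD^\n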

"},{"location":"tutorials/github-actions/#summary","title":"Summary","text":"
  1. Without Sharding: A straightforward setup where all tests run in a single job.
  2. With Sharding: Distributes tests across multiple jobs, allowing them to run in parallel.
  3. With Sharding and Changed Files: Optimizes the CI process by running only the tests affected by the changed files since the last commit, in parallel jobs.

Choose the configuration that best suits your project's needs. Start with the simpler setup and extend it as needed to improve efficiency and reduce test execution time.

"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 16f0ed9069316d3bb0a22be7583f1c078cef758b..71d22720c0b7986d2a962a19d547802e72774a85 100644 GIT binary patch delta 13 Ucmb=gXP58h;9yudcOrWQ02?F(4gdfE delta 13 Ucmb=gXP58h;ArrlGm*Ul033z{I{*Lx diff --git a/tutorials/github-actions/index.html b/tutorials/github-actions/index.html index 273a0255..982c31ed 100644 --- a/tutorials/github-actions/index.html +++ b/tutorials/github-actions/index.html @@ -1263,7 +1263,7 @@

Step 1: Running nf-test

- name: Install nf-test run: | - wget -qO- https://code.askimed.com/install/nf-test | bash + wget -qO- https://get.nf-test.com | bash sudo mv nf-test /usr/local/bin/ - name: Run Tests @@ -1306,7 +1306,7 @@

Step 2: Extending to Use Sharding

- name: Install nf-test run: | - wget -qO- https://code.askimed.com/install/nf-test | bash + wget -qO- https://get.nf-test.com | bash sudo mv nf-test /usr/local/bin/ - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }}) @@ -1348,7 +1348,7 @@

Step 3: Running Onl - name: Install nf-test run: | - wget -qO- https://code.askimed.com/install/nf-test | bash + wget -qO- https://get.nf-test.com | bash sudo mv nf-test /usr/local/bin/ - name: Run Tests (Shard ${{ matrix.shard }}/${{ strategy.job-total }})