The last 20 years have brought an increased understanding of the immense power of molecular methods for documenting the diversity of life on Earth. Seemingly sterile and mundane substrates such as soil and seawater turn out to abound with life, although perhaps not in a way that the casual observer would immediately appreciate. DNA-based studies have shown that organism groups such as fungi, insects, oomycetes, bacteria and archaea are everywhere, although we often cannot observe them physically (Debroas et al. 2017). The benefits of molecular methods are not restricted to the microscopic world: many organisms, such as some fish species, can at least in theory be observed physically, but doing so is costly, labour-intensive and potentially invasive (Boussarie et al. 2018). In such situations, DNA sequence data enables us to record the presence (and past presence) of these organisms non-invasively and with minimal effort. These developments mean that we do not always need tangible, physical manifestations of all organisms present at a site in order to record them. All organisms, whether or not they are physically observable, may be important for understanding biodiversity, ecology and biological conservation.
Sequence-derived data enables us to record inconspicuous or otherwise unobservable taxa that fall below the radar of vetted protocols for field work, checklists, depositions into natural science collections, etc. The current maturity of DNA methodologies enables us to record the presence of these organisms to a level of detail that exceeds that of macroscopic observations of organisms in general. It is important to use this moment to define and agree on how we should record and report an organism as present in some substrate or locality based on molecular data. Doing so will avoid the disorder that has occurred in other domains, in which the lack of standards and recommendations has led to very heterogeneous and largely incomparable bodies of data (Leebens-Mack et al. 2006; Yilmaz et al. 2011; Nilsson et al. 2012).
DNA-derived occurrence data of species should be as standardized and reproducible as possible, regardless of whether or not the detected species have formal scientific names. In some cases, such occurrence records will hint at previously unknown geographical and ecological properties of described species, thus enriching our body of knowledge on these taxa. In other cases, the data may allow us to amalgamate and visualize information on currently undescribed species, potentially speeding up their eventual formal description. The ability to collect usable data even for unnamed species adds significantly to the many ways in which GBIF and other biodiversity data platforms index the living world. This change will benefit researchers, policymakers, and other users of biodiversity data. Recent estimates suggest that at least 85 per cent of all extant species are undescribed. Existing data standards have been designed for the minority of taxa that have been described. Good practices for dealing with DNA-derived data will help to characterize occurrences of all organisms, whether described or not (Mora et al. 2011; Tedesco et al. 2014).
This guide sets out the ways in which DNA-derived occurrence data should be reported for standardized inclusion in GBIF and other biodiversity data platforms. It does not express any view on the issue of access and benefit sharing for digital sequence information, the subject of extensive discussion through the Convention on Biological Diversity (CBD). However, it is worth noting that genetic barcodes and metabarcodes are typically short fragments of coding or non-coding genes, which are not suitable for commercial exploitation. As the archiving of sequences through the International Nucleotide Sequence Database Collaboration (INSDC) is a widespread norm in sequence-based research, publication of occurrence data originating from sequences does not involve publishing new sequences: in most cases, these have already been placed in a public genetic repository. This guide therefore addresses the added value possible from deriving spatio-temporal occurrence data and sequence-based names from sequence data, rather than the value of the genetic information itself.
Reporting sequence-derived occurrences in an open and reproducible way brings many benefits: notably, it increases citability, highlights the taxa concerned in the context of biological conservation and contributes to taxonomic and ecological knowledge. Each of these three benefits provides a strong rationale for professionals to adopt the practices outlined in this guide, helping them to highlight a significant proportion of extant biodiversity, hasten its discovery and integrate it into biological conservation and policymaking.
This guide has been developed for multiple target audiences: students planning a first sequence-based study, researchers with old sequences and abundance tables they want to revive or preserve, biodiversity data specialists who are new to sequence-derived occurrences, and bioinformaticians familiar with sequence data but new to biodiversity data platforms. The guide does not directly target users of molecular data in biodiversity data platforms, but such users may find section 1.7 on Outputs particularly interesting. The authors' intention is to provide guidance on publishing data and associated attributes derived from genetic sequences through general biodiversity data platforms.
The flowchart outlines the processing steps involved in publishing amplicon-derived molecular biodiversity data in repositories such as GBIF and national biodiversity data platforms, including those built on the ALA platform. This guide's focus is primarily on the steps that follow the arrival of raw FASTQ sequences from the sequencing step. By familiarizing themselves with the flowchart, and noting which steps are familiar and which are unclear, users will be able to navigate the content of the guide.
We have done our best to present the information in this guide so that it is useful for each of the audiences described above, but background reading (e.g. GBIF quick guide to data publishing) may be required in certain cases.
Sequence-derived biological occurrence data includes information derived from sequencing individual organisms, but also from environmental DNA (eDNA, i.e. DNA extracted from environmental samples, Thomsen and Willerslev 2015) and from bulk samples comprising many individuals (e.g. plankton samples or Malaise trap samples consisting of multiple individuals from many species). Currently, the greatest volume of sequence-derived occurrence data derives from eDNA. Since analytical methods and end products are largely similar for all sample sources, the discussion below focuses on eDNA (see Category I: sequence-derived occurrences and Category II: enriched occurrences), noting that the outline is applicable to the other sources.
Surveys often utilize targeted sequencing of taxonomically and phylogenetically informative genetic markers, but can also use, for example, qPCR-based approaches that do not directly result in DNA sequence data (§3.1.3 and §3.2.3). Environmental DNA has been in use as a term since 1987, when it was first used to describe DNA from microbes in sediment samples (Ogram, Sayler, and Barkay 1987). eDNA is now more broadly used to describe a complex mix of DNA from different organisms (Taberlet et al. 2012, 2018). Thus, eDNA includes all DNA extracted from a given environmental sample, regardless of substrate and of which species it contains. It may be extracted from a wide range of sources, including skin and hair cells, saliva, soil, feces, and living or recently dead organisms (Pietramellara et al. 2009). In theory, environmental DNA represents all organisms present in a given sample. In practice, however, the presence of DNA in the environmental sample depends on the organism's habitat, body size, morphology and activity level, and on the sampling methods used to capture it (Taberlet et al. 2018).
Several studies show that, for water samples, analyses based on eDNA may have a higher probability of finding rare and cryptic species than conventional methods (Thomsen et al. 2012; Biggs et al. 2015; Valentini et al. 2016; Bessey et al. 2020). The same may be true in other environments. Therefore, eDNA may be suitable for monitoring rare red-listed species and unwanted alien species, which often occur at low densities and are difficult to detect with conventional methods. Environmental DNA methods can also detect cryptic organisms, especially those too small to be seen with the naked eye (e.g. bacteria and fungi). In addition, eDNA can be used to observe many species simultaneously, and may describe entire biological communities or major components of them (Ekrem & Majaneva 2019).
Identification and classification of organisms from sequence data and marker-based surveys depends on access to a reference library of sequences taken from known organisms that are matched against the newly generated sequences. The efficacy of classification depends on the completeness (coverage) and the reliability of reference libraries, as well as on the tools used to carry out the classification. These are all moving targets, making it essential to apply taxonomic expertise and caution when interpreting the results (see Taxonomy of sequences). Inclusion of all verified amplicon sequence variants (ASVs) (Callahan et al. 2017) in the reference library will enhance identification accuracy and allow for future reinterpretation of data.
Some studies show a relationship between the amount of DNA for a given species in an environmental sample and the biomass of the species in the environment. Environmental DNA may therefore allow so-called semi-quantitative estimates of organism biomass, both from environmental samples and from bulk samples (Takahara et al. 2012; Thomsen et al. 2012; Andersen et al. 2012; Ovaskainen et al. 2013; Lacoursière-Roussel, Rosabal, and Bernatchez 2016; Thomsen et al. 2016; Valentini et al. 2016; Doi et al. 2017; Fossøy et al. 2019; Yates, Fraser, and Derry 2019). However, other studies show little correlation between environmental DNA quantity and estimated population density (Knudsen et al. 2019). PCR, quantification, mixing and other biases are frequently debated. For example, moulting, reproduction and mass death can contribute to increased levels of crustacean environmental DNA in water, while turbidity and poor water quality reduce the amount of detectable environmental DNA (Strand et al. 2019). Large multicellular organisms are likely to shed more eDNA than microscopic ones (Elbrecht & Leese 2015). More testing and a better understanding of specific contexts will therefore be required before quantitative estimates of population sizes can be accepted and trusted. Without careful calibration, biological observations derived from eDNA should be thought of as "observed presences" or perhaps "relative abundances" (where the relationship is to other observations made with a particular sample and assay), rather than as reliable indicators of actual abundance or absence.
eDNA is thus a sample type, not a method: it includes DNA derived from any environmental sample rather than from the capture and sequencing of a targeted individual. Such sample types include water, soil, sediment and air, but also stool samples and tissue (plant/animal) where the host DNA is not targeted (Taberlet et al. 2018). Note, however, that the choice of PCR primers sets taxonomic limits and introduces biases into the signals of community composition and abundance. A number of analytical methods exist for studying environmental DNA. These can be divided into two main classes: 1) those that aim to detect a specific organism and 2) those that describe a community of a range of organisms. Different methods of analysis will generate different types and volumes of data. DNA concentrations are most often low, and technical and biological replicates should be included to validate species detection.
Publishing biodiversity data is largely a process of making species occurrence data findable, accessible, interoperable and reusable, in accordance with the FAIR principles (Wilkinson et al. 2016). Biodiversity data platforms help make genetic sequence data discoverable as biodiversity occurrence records alongside other types of biodiversity data, such as museum collection specimens, citizen science observations and classical field surveys. The structure, management and storage of each original data source will vary according to the needs of each community. Biodiversity data platforms support data discovery, access and reuse by making these individual datasets compatible with each other, addressing taxonomic, spatial and other inconsistencies in the available biodiversity data. Making data available through single access points supports large-scale, data-intensive research, management and policy. Compatibility between datasets is achieved through the process of standardization.
A number of data standards are in use for general biodiversity data, and a separate set of standards for genetic sequence data. Standards often highlight the subsets of fields that are most important or most frequently applicable. These subsets may be referenced as "cores". The preferred format for publishing data in the GBIF and ALA networks is the Darwin Core Archive (DwC-A), using the Darwin Core (DwC) data standard. In practice, this is a compressed folder (a zip file) containing data files in standard comma- or tab-delimited text format, a metadata file (eml.xml) that describes the data resource, and a metafile (meta.xml) that specifies the structure of the files and data fields included in the archive. Standardized packaging ensures that the data can travel between systems using specific data exchange protocols. Section 2 of this guide provides recommendations for the mapping of the data files, while guidelines and tools for constructing the XML files are provided by TDWG, GBIF and ALA.
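To make the packaging concrete, the following minimal sketch (in Python) assembles a one-record occurrence file, a skeletal meta.xml and a skeletal eml.xml into a Darwin Core Archive zip. The file names, field choices and values are illustrative only, and the XML is far from a complete, validated archive:

```python
# Minimal sketch: packaging an occurrence table into a Darwin Core Archive.
# All content is illustrative; a real archive needs complete EML metadata.
import zipfile

occurrence_tsv = (
    "occurrenceID\tbasisOfRecord\tscientificName\teventDate\n"
    "urn:uuid:0001\tMaterialSample\tFomitopsis pinicola\t2020-06-15\n"
)

meta_xml = """<?xml version="1.0" encoding="UTF-8"?>
<archive xmlns="http://rs.tdwg.org/dwc/text/" metadata="eml.xml">
  <core encoding="UTF-8" fieldsTerminatedBy="\\t" linesTerminatedBy="\\n"
        ignoreHeaderLines="1" rowType="http://rs.tdwg.org/dwc/terms/Occurrence">
    <files><location>occurrence.txt</location></files>
    <id index="0"/>
    <field index="1" term="http://rs.tdwg.org/dwc/terms/basisOfRecord"/>
    <field index="2" term="http://rs.tdwg.org/dwc/terms/scientificName"/>
    <field index="3" term="http://rs.tdwg.org/dwc/terms/eventDate"/>
  </core>
</archive>
"""

eml_xml = """<?xml version="1.0" encoding="UTF-8"?>
<eml:eml xmlns:eml="eml://ecoinformatics.org/eml-2.1.1"
         packageId="example" system="example">
  <dataset><title>Example eDNA occurrence dataset</title></dataset>
</eml:eml>
"""

# A DwC-A is simply a zip file holding the data files plus the two XML files.
with zipfile.ZipFile("dwca.zip", "w") as dwca:
    dwca.writestr("occurrence.txt", occurrence_tsv)
    dwca.writestr("meta.xml", meta_xml)
    dwca.writestr("eml.xml", eml_xml)
```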
A central part of the standardization process is the mapping of fields, which is required to transform the original field (column) structure in a source-data export into a standard field structure. Standardization may also affect the content of the individual fields within each record, for example, by recalculating coordinates to a common system, rearranging date elements, or mapping the contents of fields to a standard set of values, often called a vocabulary. The process of standardization also provides an opportunity to improve data quality, for example, by filling in omissions, correcting typos, removing extra spaces, and handling heterogeneities and outliers. Such improvements enhance the quality of the data and increase their suitability for reuse; at the same time, data published in any state is better than data that remains unpublished and inaccessible.
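As an illustration of field mapping, the sketch below renames hypothetical source columns to Darwin Core terms, trims stray whitespace, and adds a controlled-vocabulary value. The source column names and file names are invented for the example:

```python
# Sketch of field mapping: rename source columns to Darwin Core terms and
# normalize content. Column and file names are hypothetical.
import csv

FIELD_MAP = {            # source column -> Darwin Core term
    "sample_id": "materialSampleID",
    "taxon": "scientificName",
    "lat": "decimalLatitude",
    "lon": "decimalLongitude",
    "date": "eventDate",
}

def standardize(row):
    out = {FIELD_MAP[k]: (v or "").strip() for k, v in row.items() if k in FIELD_MAP}
    # Map content onto a controlled vocabulary where one exists.
    out["basisOfRecord"] = "MaterialSample"
    return out

with open("source_export.csv", newline="") as src, \
     open("occurrence.txt", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(
        dst, fieldnames=list(FIELD_MAP.values()) + ["basisOfRecord"], delimiter="\t")
    writer.writeheader()
    for row in reader:
        writer.writerow(standardize(row))
```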
Once a dataset has been through these standardization and data-quality processes, it should be placed in an accessible online location and associated with relevant metadata. Metadata, i.e. data or information about the dataset, includes key parameters that describe the dataset and further improve its discoverability and reuse. Metadata should include other important elements such as authorship, Digital Object Identifiers (DOIs), organizational affiliations and other provenance information, as well as procedural and methodological information about how the dataset was collected and curated.
Datasets and their associated metadata are indexed by each data portal: this process enables users to query, filter and process data through APIs and web portals. Unlike journal publications, datasets may be dynamic products that go through multiple versions, with an evolving number of records and mutable metadata fields under the same title and DOI.
Note that most holders of genetic sequence data are expected to upload and archive genetic sequence data in raw sequence data repositories such as NCBI’s SRA or EMBL’s ENA. This topic is not covered here. Biodiversity data platforms such as ALA, GBIF, and most national biodiversity portals are not archives or repositories for raw sequence reads and associated files. We do, however, stress the importance of maintaining links between such primary data and derived occurrences in Section 2.
Metabarcoding data can be produced on a number of different sequencing platforms (Illumina, PacBio, Oxford Nanopore, Ion Torrent, etc.) that rely on different readout principles and generate data that differ with respect to read length, error profile, whether sequences are single- or paired-end, etc. Currently, the Illumina short-read platform is the most widely adopted and as such forms the basis of the descriptions here. However, the bioinformatic processing of the data follows the same general principles (QC, denoising, classification) regardless of the sequencing technology used (Hugerth et al. 2017, Figure 2).
Typically, the DNA sequences are first pre-processed by removing primer sequences and, depending on the sequencing method used, low-quality bases, usually toward the 5’ and 3’ ends of the sequences. Sequences that do not fulfil requirements on length, overall quality, presence of primers, etc. are removed.
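The sketch below illustrates these pre-processing steps for single-end reads, assuming a known forward primer and Phred-scaled quality scores. The primer and thresholds are examples only; real studies would normally rely on dedicated tools such as cutadapt:

```python
# Minimal pre-processing sketch: check for and remove a known forward primer,
# trim low-quality 3' bases and apply a length filter. Illustrative only.

PRIMER = "GTGYCAGCMGCCGCGGTAA"  # example 16S primer with IUPAC ambiguity codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "M": "AC", "Y": "CT"}

def primer_matches(seq, primer=PRIMER):
    # The read must start with bases compatible with the (ambiguous) primer.
    return len(seq) >= len(primer) and all(
        base in IUPAC[p] for base, p in zip(seq, primer))

def trim_low_quality_tail(seq, quals, min_q=20):
    # Keep the read up to the last base whose Phred score meets min_q.
    for i in range(len(quals) - 1, -1, -1):
        if quals[i] >= min_q:
            return seq[:i + 1]
    return ""

def preprocess(seq, quals, min_len=100):
    if not primer_matches(seq):
        return None                               # discard: primer not found
    seq = trim_low_quality_tail(seq[len(PRIMER):], quals[len(PRIMER):])
    return seq if len(seq) >= min_len else None   # discard short remnants
```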
The pre-processed sequences can then be classified and identified to taxa using reference databases (closed-reference methods), or processed further to generate abundances without taxonomic identifications (open-reference methods). Open-reference methods are perhaps the most common and require either clustering of sequences into operational taxonomic units (OTUs; Blaxter et al. 2005) at some defined sequence similarity, or denoising of sequences to produce amplicon sequence variants (ASVs; also referred to as zero-radius OTUs, zOTUs). Denoising attempts to correct errors introduced in the PCR and/or sequencing steps, such that the denoised sequences represent the set of unique, biologically real sequences present in the original sequence mixture. In the case of paired-end sequences, the forward and reverse sequences may be denoised separately before merging, or else merged prior to denoising. ASVs in the resulting set can differ by as little as one base. Operationally, ASVs may be thought of as OTUs without a defined radius, and while denoising algorithms are typically very good, they do not entirely remove the problems of over-splitting or lumping sequences.
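For intuition, here is a toy greedy clustering routine in the spirit of OTU pickers. It uses simple Hamming identity on equal-length, abundance-sorted sequences, whereas production tools such as VSEARCH use alignment-based identity and many refinements:

```python
# Toy greedy OTU clustering at a fixed identity threshold. Assumes sequences
# of equal length sorted from most to least abundant; illustrative only.

def identity(a, b):
    if len(a) != len(b) or not a:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otus(seqs_by_abundance, threshold=0.97):
    centroids = []
    clusters = {}
    for seq in seqs_by_abundance:
        for c in centroids:
            if identity(seq, c) >= threshold:
                clusters[c].append(seq)   # join an existing OTU
                break
        else:
            centroids.append(seq)         # seq becomes a new OTU centroid
            clusters[seq] = [seq]
    return clusters
```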
The PCR used for generating the sequencing library can produce artefactual sequences in the form of chimeras: single sequences that originate from multiple parent sequences. Such sequences can be detected bioinformatically and removed, and this is typically done after OTU clustering or denoising.
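The following toy function illustrates the underlying idea of reference-based chimera detection: a read whose two halves are each explained better by different parent sequences than the whole read is by any single parent is flagged. Actual tools (e.g. UCHIME) use alignments and far more elaborate scoring models:

```python
# Toy chimera check on equal-length sequences; illustrative only.

def identity(a, b):
    # Fraction of matching positions (Hamming identity, truncating to the
    # shorter sequence); a stand-in for alignment-based identity.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b), 1)

def is_chimera(read, parents, min_gain=0.05):
    half = len(read) // 2
    best_whole = max(identity(read, p) for p in parents)
    best_left = max(identity(read[:half], p[:half]) for p in parents)
    best_right = max(identity(read[half:], p[half:]) for p in parents)
    # Chimeric if a two-parent model explains the read clearly better
    # than the best single parent does.
    return min(best_left, best_right) >= best_whole + min_gain
```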
Finally, the pre-processed sequences, OTUs or ASVs are taxonomically classified by comparing them to a database of annotated sequences (often referred to as a reference library, see §1.6). As with the previous steps, several alternative methods are available. Most of these are based either on aligning the metabarcoding sequences to the reference sequences or on counting shared k-mers (short exact subsequences of fixed length).
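A minimal sketch of the shared-k-mer approach, with an invented reference dictionary, might look as follows; real classifiers use much larger references and add confidence estimates:

```python
# Naive shared-k-mer classifier: assign each query the taxonomy of the
# reference sharing the largest number of 8-mers. Reference data invented.

def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify(query, references, k=8):
    """references: dict mapping taxon name -> reference sequence."""
    q = kmers(query, k)
    best_taxon, best_shared = None, 0
    for taxon, ref in references.items():
        shared = len(q & kmers(ref, k))
        if shared > best_shared:
            best_taxon, best_shared = taxon, shared
    return best_taxon, best_shared

refs = {"Fomitopsis pinicola": "TCGATACCGTTGG...",   # truncated examples
        "Amanita muscaria": "TCGAAACCTGCGG..."}
print(classify("TCGATACCGTTGGAATGA", refs))
```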
Several open-source pipelines exist for bioinformatic processing of metabarcoding data (QIIME, DADA2, SWARM, USEARCH, mothur, LULU, PROTAX) [LINKS]. Given the existence of many popular and well-used workflows, we make some recommendations below on analysing data for submission to biodiversity data platforms. This is not to suggest that these are the best or most appropriate methods for all purposes, but is an attempt to encourage the submission of relatively standardized data that may readily be compared via the platforms. If possible, a well-documented and maintained workflow should be used (e.g. the nf-core/ampliseq pipeline). Metadata should include workflow details and versions (SOP in the MIxS extension; see the mapping in Table 4). Sequence data should be deposited in an appropriate nucleotide archive (NCBI’s SRA (Leinonen et al. 2011) or EMBL’s ENA (Amid et al. 2020)), and data submitted to the biodiversity platform should include the biosample ID obtained from the archive (see the data mapping in §2.3). Making use of these sample IDs will reduce the chances of duplication and ensure that the sequence data are readily obtainable should opportunities for re-analysis arise as reference libraries and bioinformatic tools improve. The core end product of these pipelines is typically a file of counts of individual OTUs or ASVs in each sample, along with the taxonomy assigned to them. This is generated either in tabular format or in the BIOM format. OTU or ASV sequences are also usually provided in the FASTA format (Pearson & Lipman 1988).
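To illustrate the typical end products, the snippet below writes a small, entirely invented OTU-by-sample count table with taxonomy strings (tab-separated) and the corresponding sequences in FASTA (truncated here):

```python
# Sketch of typical pipeline end products; all identifiers, counts and
# taxonomy strings are invented examples.

otu_table = (
    "otu_id\tsample1\tsample2\ttaxonomy\n"
    "ASV_001\t1523\t87\tk__Fungi;p__Basidiomycota;g__Fomitopsis\n"
    "ASV_002\t0\t412\tk__Fungi;p__Ascomycota\n"
)
fasta = ">ASV_001\nTCGATACCGTTGG...\n>ASV_002\nTCGAAACCTGCGG...\n"

with open("otu_table.tsv", "w") as f:
    f.write(otu_table)
with open("otus.fasta", "w") as f:
    f.write(fasta)
```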
Taxonomic annotation of sequences is a critical step in the processing of molecular biodiversity datasets, as scientific names are key to accessing and communicating information about the observed organisms. The accuracy and precision of such sequence annotation will depend on the availability of reliable reference databases and libraries across all branches of the tree of life, which in turn will require joint efforts from taxonomists and molecular ecologists.
Species are described primarily by taxonomists, placing taxonomy at the heart of biology. Any attempt at characterizing biodiversity will therefore make use of the end products of taxonomic research. However, unlike DNA sequence data, taxonomic outputs may not always be readily amenable to direct algorithmic or computational interpretation: classical taxonomy is a human-driven process that includes manual steps of taxon delimitation, description and naming, culminating in a formal publication in accordance with the international Codes of Nomenclature. As discussed in previous chapters, DNA sequence-based surveys are very effective at detecting cryptic species and will often identify the presence of organisms currently outside traditional Linnaean taxonomic knowledge. While these guidelines do not address the publication of alternative species checklists derived from sequence data, the disconnect between traditional taxonomy and eDNA efforts is undesirable, and so we offer the following recommendations to readers of this guide.
As taxonomy is central to the discovery of biodiversity data, it is highly recommended that any eDNA sequencing efforts should seek to include relevant taxonomic expertise in their study. It will similarly be beneficial if eDNA sequencing studies are able to allocate a portion of their budget to generation and release of reference sequences from previously unsequenced type specimens or other important reference material from the local herbarium, museum, or biological collection. Taxonomists, too, can contribute towards this goal by always including relevant DNA sequences with each new species description (Miralles et al. 2020) and by targeting the many novel biological entities unraveled by eDNA efforts (e.g. Tedersoo et al. 2017).
Most current biodiversity data platforms are organised around traditional name lists and taxonomic indexes. As sequence-derived occurrences are rapidly becoming a significant source of biodiversity data, and as official taxonomy and nomenclature for such data lag behind, it is recommended that data providers and platforms continue to explore and include more flexible representations of taxonomy in their taxonomic backbones. These new representations include molecular reference databases (e.g. GTDB, BOLD BINs, UNITE) that recognize sequence data as reference material for previously unclassified organisms. Additionally, we suggest that other commonly used molecular databases (e.g. PR2, RDP, SILVA) should develop stable identifiers for taxa and make reference sequences available for those taxa, to allow their use as taxonomic references.
In contrast to classical taxonomy, which is a heavily manual (human) process, clustering DNA sequences into taxonomic concepts relies on algorithmic analysis of similarity and other signals (such as phylogeny and probability), as well as some human editing. The resulting OTUs vary in stability, presence of reference sequences and physical material, alignments and cut-off values, and OTU identifiers (such as DOIs). Even more importantly, they vary in scale, from local study- or project-specific libraries to global databases that enable broader cross-study comparison. In contrast to the centralization and codification of Linnaean taxa that are formally described in research publications, OTUs are distributed across multiple evolving digital reference libraries that differ in taxonomic focus, barcode genes and other factors.
Algorithms for taxonomic annotation of eDNA will typically assign each unique sequence to the nearest taxonomic group in a reference set, based on some criteria for relatedness and confidence. For poorly known groups of organisms, such as prokaryotes and fungi, the annotation may be a non-Linnaean placeholder name for a (cluster-based) taxon, and this taxon will often be ranked above species level. No reference database contains all species in a given group. Users' frequent neglect of this fact has been the source of numerous taxonomic misidentifications during the last 30 years.
During import into a biodiversity platform, the taxonomic resolution of these occurrences may be reduced even further, as the reference set used for annotation may not be included in the taxonomic index of that platform. Records assigned to a well-defined cluster within a genus are likely to be treated as undefined records of that genus. Including the underlying sequence in each record, directly or by reference, will allow future users to identify the organism to a greater level of granularity, particularly as reference libraries improve over time. In cases where the underlying sequence cannot be included as part of the submitted data, we advocate deposition of a (scientific or placeholder) name of the taxon plus an MD5 checksum of the sequence as a unique taxon ID (see §2.2 Data mapping). MD5 checksums are one-way hash functions commonly used for verifying file integrity and, historically, for storing passwords (ref). In this case, they provide a unique and repeatable representation of the original sequence that nevertheless does not allow the sequence itself to be recovered, which may be required where sensitivity exists around access. MD5 checksums enable efficient queries to determine whether the exact same sequence has been recovered in other eDNA efforts, but they are not a complete replacement for the sequence, as MD5 checksums do not enable further analyses. Two sequences differing by even a single base will get two completely different MD5 checksums, so BLAST-style sequence-similarity searches will not work.
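Computing such a checksum is straightforward; the sketch below normalizes the sequence (whitespace and case) before hashing so that the identifier is reproducible across data providers:

```python
# Derive a repeatable, sequence-based identifier via an MD5 checksum.
# Normalizing case and whitespace first keeps the digest reproducible.
import hashlib

def sequence_md5(seq: str) -> str:
    return hashlib.md5(seq.strip().upper().encode("ascii")).hexdigest()

# The same input always yields the same 32-character hex digest; a single
# base change yields a completely different one.
print(sequence_md5("acgtacgtacgt"))
```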
The purpose of exposing sequence-derived data through biodiversity platforms is to enable the reuse of these data alongside other biodiversity data types. It is very important to keep this reuse in mind when preparing your data for publication. Ideally, the metadata and data should tell a complete story, so that new, uninformed users can use the evidence without additional consultation or correspondence. Biodiversity data platforms provide search, filtering, browsing and data-access functionality. Users can often choose data-output formats (e.g. DwC-A, CSV) and then process, clean and transform the data into the shape and format needed for their analyses.
At GBIF.org or through the GBIF API, registered users can search, filter, and download biodiversity data in the following three formats:
- Simple: a tab-delimited format that includes only the GBIF-interpreted version of the data, resulting from the indexing process. This is suitable for quick tests and direct import into spreadsheets.
- Darwin Core Archive: a richer format that includes both the interpreted data and the original verbatim version provided by the publisher (prior to indexing and interpretation by GBIF). Because it includes all the metadata and issue flags, this format provides a fuller view of the downloaded dataset.
- Species list: a simple table format that includes only an interpreted list of unique species names from a dataset or query result.
Regardless of the selected format, each GBIF user download receives a reusable link to the query and a data citation that includes a DOI. This DOI-based citation system provides the means of recognizing and crediting data use, to the benefit of datasets and data originators, and improves both the credibility and transparency of findings based on the data.
UNITE is a web-based sequence management environment centred on the eukaryotic nuclear ribosomal ITS region. All public sequences are clustered into species hypotheses (SHs), which are assigned unique DOIs. An SH-matching service outputs various elements of information, including what species are present in eDNA samples, whether these species are potentially undescribed new species, other studies in which they were recovered, whether the species are alien to a region, and whether they are threatened. The DOIs are connected to the taxonomic backbone of the PlutoF platform and GBIF, such that they are accompanied by a taxon name where available.
The data used in UNITE are hosted and managed in PlutoF. Data are represented through a range of standards, primarily Darwin Core, MIxS, and DMP Common Standard; partial support is available for EML, MCL, and GGBN.
PlutoF exports data primarily in the CSV and FASTA formats. PlutoF can also be used to publish data to GBIF (using the DwC format) and to prepare GenBank submission files. It is furthermore possible to download species lists from your data, and to download your project as a JSON document with the project data hierarchically structured.
BOLD is the reference database maintained by the Centre for Biodiversity Genomics in Guelph on behalf of the International Barcode of Life Consortium (IBOL). It hosts data on barcode reference specimens and sequences for eukaryote species, particularly COI for animals, and maintains the Barcode Index Number (BIN; Ratnasingham & Hebert 2013) system, identifiers for OTUs of approximately species rank, based on clusters of closely similar sequences.
By associating standard sequences with identified reference specimens, BOLD and UNITE are establishing an essential mapping layer for linking ASVs and OTUs with the foundations of the Linnaean taxonomic framework.
It is essential to follow data citation recommendations and use DOIs, as a good data citation culture is not only the academic norm, but also a powerful mechanism for crediting, acknowledging and, thereby, incentivizing data publishers.