Kana comes from the Telugu word kaṇaṁ (కణం), which means ... drumroll... cell
Check out our preprint on bioRxiv
kana is a web application for single-cell RNA-seq data analysis that works directly in the browser. That's right - the calculations are performed client-side, by your browser, on your computer! This differs from the usual paradigm of, e.g., Shiny applications where data needs to be sent to a backend server that does the actual analysis. Our client-side approach has a number of advantages:
- Your data is never transferred anywhere, so you don't have to worry about problems with data privacy. These can be especially hairy when your backend server lies in a different jurisdiction from your data source. By performing the analysis on the client, we avoid all of these issues.
- kana is super-cheap to run and deploy - just serve it as a static website. There's no need to maintain a server or cloud compute instance; the user's machine takes care of it. It also naturally scales to any number of users, as they automatically provide the compute.
- By removing network latency, we can achieve a smooth interactive experience. This ranges from animated dimensionality reductions to user-driven marker detection and cell type annotation.
If you have a Matrix Market (.mtx) file, an HDF5 file (10x v3 format or an AnnData representation stored as .h5ad), a SummarizedExperiment (or derivatives like SingleCellExperiment) stored as an RDS file, or an ExperimentHub ID, you're ready to go.
- Launch the application by clicking here.
- Select the Matrix Market file (this may be Gzip-compressed). We recommend also providing the corresponding genes.tsv or features.tsv file to identify marker genes properly.
- Click the "Analyze" button, and we'll run a standard scRNA-seq analysis for you.
The standard analysis follows the flow described in the Orchestrating Single-Cell Analysis with Bioconductor book. Briefly, this involves the steps below (a rough sketch of the flow follows the list):
- Removal of low-quality cells
- Normalization and log-transformation
- Modeling of the mean-variance trend across genes
- A principal components analysis on the highly variable genes
- Clustering with graph-based methods
- The usual dimensionality reductions (t-SNE/UMAP)
- Marker detection for each cluster
- Custom cell selections, with marker detection for each selection
- Cell type annotation for each cluster against user-selected reference datasets
- Batch correction or integration using MNN correction. You can provide a single dataset containing multiple batches and specify the batch column in the cell annotations, or load multiple datasets where each dataset is treated as a separate batch
- Multi-modal analysis for CITE-seq data
- Analysis on subsets of cells (filtered based on cell annotations)
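For orientation, here is a minimal sketch of that flow in JavaScript. The helper names are hypothetical and chosen for readability - the actual implementation lives in scran.js/bakana, whose function names and signatures differ.

```js
// Hypothetical sketch of the standard analysis flow (not the real scran.js/bakana API).
async function runStandardAnalysis(matrix) {
  const qc = computeQualityControlMetrics(matrix);             // per-cell QC metrics
  const filtered = removeLowQualityCells(matrix, qc);          // removal of low-quality cells
  const normalized = logNormalizeCounts(filtered);             // normalization + log-transformation
  const modelled = modelGeneVariances(normalized);             // mean-variance trend across genes
  const hvgs = chooseHighlyVariableGenes(modelled, { top: 2000 });
  const pcs = runPCA(normalized, { features: hvgs });          // PCA on the highly variable genes
  const clusters = clusterGraph(buildNeighborGraph(pcs));      // graph-based clustering
  const [tsne, umap] = await Promise.all([runTSNE(pcs), runUMAP(pcs)]); // embeddings
  const markers = detectMarkers(normalized, clusters);         // per-cluster marker detection
  return { qc, clusters, tsne, umap, markers };
}
```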
The interface provides a depiction of the dimensionality reduction of choice, a ranking of marker genes for the cluster of interest, and diagnostic plots from the individual analysis steps.
For release notes and the changelog, read here
Tips and tricks:
- Clicking on a cluster name in the legend will highlight that cluster in the t-SNE/UMAP plot.
- Clicking on the droplet icon in the marker table will color the t-SNE/UMAP plot by the expression of the selected gene.
- Clicking on the plus icon in the marker table will give some details about that gene's expression in the selected cluster, including a histogram relative to cells in other clusters.
- Hovering over the bars for a gene in the Markers section displays a tooltip with various statistics for that gene in the selected cluster.
- Filter markers either by searching for a gene or using the sliders to filter by various statistics.
- Clicking on Save in the t-SNE or UMAP section will capture the current state of the visualization to the Gallery.
- Clicking on Animate will interactively visualize the embedding at various iterations as the t-SNE or UMAP algorithm computes it.
- Clicking on "What's happening" will show logs describing how long each step of the analysis took (and any errors during the analysis).
- Clicking Export will either save the analysis to the browser's storage or download it as a .kana file. Loading these files will restore the state of the application.
Deployment is as easy as serving the static files in this repository via HTTPS. Indeed, our deployment is just served via GitHub Pages. As promised, there's no need to set up a backend server.
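If you want to host it yourself on a server you control, the same idea applies. The sketch below is just an illustration (not part of this repository): it serves a local build/ directory with Node's built-in http module and sets the cross-origin isolation headers discussed further down, so the service worker workaround isn't needed. The directory name and port are assumptions about your setup, and a production deployment would still sit behind HTTPS.

```js
// serve.js: a minimal, illustrative static server for kana's build output.
const http = require("http");
const fs = require("fs");
const path = require("path");

const mime = {
  ".html": "text/html",
  ".js": "text/javascript",
  ".css": "text/css",
  ".wasm": "application/wasm",
  ".json": "application/json",
};

http.createServer((req, res) => {
  const target = req.url === "/" ? "/index.html" : req.url.split("?")[0];
  fs.readFile(path.join("build", target), (err, data) => {
    if (err) {
      res.writeHead(404);
      res.end("not found");
      return;
    }
    res.writeHead(200, {
      "Content-Type": mime[path.extname(target)] || "application/octet-stream",
      // These two headers enable cross-origin isolation, which SharedArrayBuffer needs.
      "Cross-Origin-Opener-Policy": "same-origin",
      "Cross-Origin-Embedder-Policy": "require-corp",
    });
    res.end(data);
  });
}).listen(8080);
```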
We have significantly revamped the entire application and the underlying infrastructure to support hybrid compute - either purely client-side with WebAssembly, on backend systems through Node.js, or both.
kana uses the scran.js library for efficient client-side execution of scRNA-seq analysis steps. This uses a variety of C/C++ libraries compiled to WebAssembly to enable heavy-duty calculations in the browser at near-native speed.
All computations performed by kana run in a Web Worker. This avoids blocking the main thread and keeps the application responsive. Data is sent to the main thread on an as-needed basis, e.g., for visualizations. We also create separate Web Workers for the t-SNE and UMAP steps so that they can run concurrently for maximum efficiency.
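As a rough illustration of that pattern (not kana's actual worker code - the message names and helpers here are made up), the main thread and the worker might communicate like this:

```js
// main.js: offload the heavy lifting to a Web Worker so the UI stays responsive.
const worker = new Worker("analysis.worker.js");
worker.onmessage = (event) => {
  if (event.data.type === "ANALYSIS_DONE") {
    renderResults(event.data.payload); // hypothetical function that updates the plots
  }
};
// "files" stands in for whatever the user selected in the upload form.
worker.postMessage({ type: "RUN_ANALYSIS", files });

// analysis.worker.js: runs in a separate thread; only results cross back to the main thread.
onmessage = async (event) => {
  if (event.data.type === "RUN_ANALYSIS") {
    const results = await runStandardAnalysis(event.data.files); // see the sketch above
    postMessage({ type: "ANALYSIS_DONE", payload: results });
  }
};
```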
The WASM code itself is compiled with PThreads support to enable parallelization of some analysis steps.
This involves the use of a SharedArrayBuffer to efficiently share memory across Web Workers, which in turn requires cross-origin isolation of the site. We achieve this by using a service worker to cache the resources and load the blobs with the relevant headers - hence the need for HTTPS.
- bakana: the core analysis workflow, refactored into an independent package to provide the same functionality in browser and Node.js environments. The kana front-end is now a wrapper around bakana.
- kanapi: provides a Node.js API (using WebSockets) to run single-cell analyses in backend environments (extending bakana). One could extend kana to interact with this API (#good-first-issue).
- kana-formats: as we add new functionality and features, we need to store and read the exported analysis state (.kana files). This package specifies the formats and provides readers for parsing various versions.
- kanaval: validates the exported analysis results.
Install dependencies:
npm install # or yarn, depending on what you use
To start the app:
yarn start # if using yarn, highly recommended
npm run start # if using npm
This usually runs on port 3000 unless something else is already running on the same port.
For the curious: this project was bootstrapped with Create React App.