diff --git a/.github/BLAS_benchmarks.md b/.github/BLAS_benchmarks.md
deleted file mode 100644
index cd41167..0000000
--- a/.github/BLAS_benchmarks.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# BLAS benchmarks
-
-The benchmark plots below show the performance of different BLAS libraries (OpenBLAS, Intel MKL, AMD AOCL BLIS) with different numbers of threads on my Ryzen Zen3 5950X (16c/32t). In my case, 16 threads with OpenBLAS is a good blend of performance and memory usage.
-
-
-
-
-
-I didn't include any GPU BLAS libraries (NVBLAS, cuBLAS, etc.) because the I'm limiting the scope of demucs.cpp to use only the CPU. The real PyTorch version of Demucs is suitable for GPU acceleration.
diff --git a/.github/PERFORMANCE.md b/.github/PERFORMANCE.md
new file mode 100644
index 0000000..b29677c
--- /dev/null
+++ b/.github/PERFORMANCE.md
@@ -0,0 +1,56 @@
+### Multi-core, OpenMP, BLAS, etc.
+
+:warning: `demucs.cpp` library code in `./src` **should not use any threading (e.g. pthread or OpenMP) except through the BLAS interface.** This is because demucs.cpp is compiled to a single-threaded WebAssembly module in .
+
+If you have OpenMP and OpenBLAS installed, OpenBLAS may automatically use every hardware thread on your machine, which is not always the fastest configuration. Use the `OMP_NUM_THREADS` environment variable to limit it. On my 16c/32t machine, I found `OMP_NUM_THREADS=16` to be the fastest. This matches the [Eigen recommendation](https://eigen.tuxfamily.org/dox/TopicMultiThreading.html) to use the same number of threads as physical cores:
+>On most OS it is very important to limit the number of threads to the number of physical cores, otherwise significant slowdowns are expected, especially for operations involving dense matrices.
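
As a rough illustration of that recommendation, here is a small stand-alone sketch (not demucs.cpp code; the helper name and the half-of-SMT-threads heuristic are my assumptions) of picking a thread count that honors `OMP_NUM_THREADS` and otherwise falls back to an estimate of the physical core count:

```cpp
#include <cstdlib>
#include <thread>

// Pick a thread count for BLAS/OpenMP work: honor OMP_NUM_THREADS if set,
// otherwise assume 2-way SMT and use half the hardware threads as an
// approximation of the physical core count.
static int pick_num_threads() {
    if (const char *env = std::getenv("OMP_NUM_THREADS")) {
        int n = std::atoi(env);
        if (n > 0) return n;
    }
    unsigned hw = std::thread::hardware_concurrency(); // logical threads
    return hw >= 2 ? static_cast<int>(hw / 2) : 1;
}
```

On a 16c/32t Zen3 machine this heuristic lands on 16 threads, matching the benchmark result above.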
+
+### BLAS benchmarks
+
+The benchmark plots below show the performance of different BLAS libraries (OpenBLAS, Intel MKL, AMD AOCL BLIS) with different numbers of threads on my Ryzen Zen3 5950X (16c/32t). In my case, 16 threads with OpenBLAS is a good blend of performance and memory usage.
+
+
+
+
+
+I didn't include any GPU BLAS libraries (NVBLAS, cuBLAS, etc.) because I'm limiting the scope of demucs.cpp to CPU-only execution. The real PyTorch version of Demucs is better suited for GPU acceleration.
+
+### GPUs, cuBLAS, NVBLAS
+
+There is a [branch](https://github.com/sevagh/demucs.cpp/tree/nvblas) where I explored NVBLAS (a cuBLAS wrapper with automatic host-GPU memory transfers). It didn't help much, which is what I expected: demucs.cpp is dominated by for-loops and small matrix-vector or matrix-matrix multiplications. This design targets Android phones (typically with small amounts of memory, 6-8 GB on flagships) and WebAssembly (which has a 4 GB memory limit per module).
+
+If I had written it to use large batched matrix operations, it would probably be faster (while consuming more memory and breaking the intended use case), and it would accelerate much better on GPUs.
+
+### Multi-threading
+
+There are two new programs, `demucs_mt.cpp.main` and `demucs_ft_mt.cpp.main`, that use C++11 [std::thread](https://en.cppreference.com/w/cpp/thread/thread).
+
+In the single-threaded programs:
+
+* User supplies a waveform of length N seconds
+* Waveform is split into 7.8-second segments for Demucs inference
+* Segments are processed sequentially, where each segment inference can use >1 core with `OMP_NUM_THREADS`
+
+In the multi-threaded programs:
+* User supplies a waveform of length N seconds and a `num_threads` argument
+* Waveform is split into `num_threads` sub-waveforms (of length M < N) to process in parallel with a 0.75-second overlap
+ * We always need overlapping segments in audio applications to eliminate [boundary artifacts](https://freemusicdemixer.com/under-the-hood/2024/02/23/Demucs-segmentation#boundary-artifacts-and-the-overlap-add-method)
+* `num_threads` threads are launched to perform Demucs inference on the sub-waveforms in parallel
+* Within each thread, the sub-waveform is split into 7.8-second segments
+* Segments within a thread are still processed sequentially, where each segment inference can use >1 core with `OMP_NUM_THREADS`
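
The splitting scheme above can be sketched as follows (a simplified stand-alone example, not the actual demucs.cpp implementation; the function name, the equal-split policy, and copying the chunk in place of real Demucs inference are placeholders):

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Split a waveform into num_threads sub-waveforms with a fixed overlap,
// run one worker thread per chunk, and collect the per-chunk results.
// In the real program each worker would run Demucs inference on its chunk,
// and the overlapping regions would later be cross-faded (overlap-add).
void process_in_parallel(const std::vector<float> &wav, int num_threads,
                         int overlap_samples,
                         std::vector<std::vector<float>> &out) {
    const long n = static_cast<long>(wav.size());
    const long step = (n + num_threads - 1) / num_threads;
    out.assign(num_threads, {});
    std::vector<std::thread> workers;
    for (int t = 0; t < num_threads; ++t) {
        // Extend each chunk by the overlap on both sides, clamped to the
        // waveform edges, so adjacent results can be blended later.
        long start = std::max(0L, t * step - overlap_samples);
        long end = std::min(n, (t + 1) * step + overlap_samples);
        workers.emplace_back([&, t, start, end]() {
            out[t].assign(wav.begin() + start, wav.begin() + end);
        });
    }
    for (auto &w : workers)
        w.join();
}
```

At 44.1 kHz, the 0.75-second overlap corresponds to `overlap_samples` of roughly 33,075 samples per boundary.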
+
+For the single-threaded `demucs.cpp.main`, my suggestion is `OMP_NUM_THREADS=$num_physical_cores`. On my 5950X system with 16 cores, the execution time for a 4-minute song was:
+```
+real 10m23.201s
+user 29m42.190s
+sys 4m17.248s
+```
+
+For the multi-threaded `demucs_mt.cpp.main`, using 4 `std::thread` workers with `OMP_NUM_THREADS=4` (4 x 4 = 16 physical cores):
+```
+real 4m9.331s
+user 18m59.731s
+sys 3m28.465s
+```
+
+More than 2x faster with 4 threads. This is inspired by the parallelism strategy used in .
diff --git a/.github/android-screenshot.png b/.github/android-screenshot.png
new file mode 100644
index 0000000..e5ad2b5
Binary files /dev/null and b/.github/android-screenshot.png differ
diff --git a/.github/google-play-badge.png b/.github/google-play-badge.png
new file mode 100644
index 0000000..131f3ac
Binary files /dev/null and b/.github/google-play-badge.png differ
diff --git a/README.md b/README.md
index b5c7cc2..e35a937 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,16 @@
# demucs.cpp
-C++17 implementation of the [Demucs v4 hybrid transformer](https://github.com/facebookresearch/demucs), a PyTorch neural network for music demixing. Similar project to [umx.cpp](https://github.com/sevagh/umx.cpp). This code powers my site .
+C++17 library that implements the inference of the [Demucs v4 hybrid transformer model](https://github.com/facebookresearch/demucs), a PyTorch neural network for music demixing.
+
+It uses only the standard library and the header-only library [Eigen](https://eigen.tuxfamily.org/index.php?title=Main_Page) as dependencies, making it straightforward to compile and run on many platforms. It was designed for low-memory environments, trading away the speed of the Torch implementation.
+
+Demucs.cpp powers my websites (, ) and now my new Android app [Music Demixer](https://play.google.com/store/apps/details?id=com.freemusicdemixer.pro) to bring Demucs to your pocket!
+
+
+
+See my other project [umx.cpp](https://github.com/sevagh/umx.cpp) for a similar library for Open-Unmix.
+
+### Library design
It uses [libnyquist](https://github.com/ddiakopoulos/libnyquist) to load audio files, the [ggml](https://github.com/ggerganov/ggml) file format to serialize the PyTorch weights of `htdemucs`, `htdemucs_6s`, and `htdemucs_ft` (4-source, 6-source, fine-tuned) to a binary file format, and [Eigen](https://eigen.tuxfamily.org/index.php?title=Main_Page) (+ OpenMP) to implement the inference. There are also programs for multi-threaded Demucs inference using C++11's `std::thread`.
@@ -14,48 +24,7 @@ It uses [libnyquist](https://github.com/ddiakopoulos/libnyquist) to load audio f
1. `demucs_mt.cpp.main`: run a single model, multi-threaded
1. `demucs_ft_mt.cpp.main`: run all four fine-tuned models, multi-threaded
-### Multi-core, OpenMP, BLAS, etc.
-
-:warning: `demucs.cpp` library code in `./src` **should not use any threading (e.g. pthread or OpenMP) except through the BLAS interface.** This is because demucs.cpp is compiled to a single-threaded WebAssembly module in .
-
-If you have OpenMP and OpenBLAS installed, OpenBLAS might automatically use all of the threads on your machine, which doesn't always run the fastest. Use the `OMP_NUM_THREADS` environment variable to limit this. On my 16c/32t machine, I found `OMP_NUM_THREADS=16` to be the fastest. This matches the [Eigen recommendation](https://eigen.tuxfamily.org/dox/TopicMultiThreading.html) to use the same number of threads as physical cores:
->On most OS it is very important to limit the number of threads to the number of physical cores, otherwise significant slowdowns are expected, especially for operations involving dense matrices.
-
-See the [BLAS benchmarks doc](./.github/BLAS_benchmarks.md) for more details.
-
-### Multi-threading
-
-There are two new programs, `demucs_mt.cpp.main` and `demucs_ft_mt.cpp.main` that use C++11 [std::threads](https://en.cppreference.com/w/cpp/thread/thread).
-
-In the single-threaded programs:
-
-* User supplies a waveform of length N seconds
-* Waveform is split into 7.8-second segments for Demucs inference
-* Segments are processed sequentially, where each segment inference can use >1 core with `OMP_NUM_THREADS`
-
-In the multi-threaded programs:
-* User supplies a waveform of length N seconds and a `num_threads` argument
-* Waveform is split into `num_threads` sub-waveforms (of length M < N) to process in parallel with a 0.75-second overlap
- * We always need overlapping segments in audio applications to eliminate [boundary artifacts](https://freemusicdemixer.com/under-the-hood/2024/02/23/Demucs-segmentation#boundary-artifacts-and-the-overlap-add-method)
-* `num_threads` threads are launched to perform Demucs inference on the sub-waveforms in parallel
-* Within each thread, the sub-waveform is split into 7.8-second segments
-* Segments within a thread are still processed sequentially, where each segment inference can use >1 core with `OMP_NUM_THREADS`
-
-For the single-threaded `demucs.cpp.main`, my suggestion is `OMP_NUM_THREADS=$num_physical_cores`. On my 5950X system with 16 cores, execution time for a 4-minute song:
-```
-real 10m23.201s
-user 29m42.190s
-sys 4m17.248s
-```
-
-For the multi-threaded `demucs_mt.cpp.main`, using 4 `std::thread` and OMP threads = 4 (4x4 = 16 physical cores):
-```
-real 4m9.331s
-user 18m59.731s
-sys 3m28.465s
-```
-
-More than 2x faster for 4 threads. This is inspired by the parallelism strategy used in .
+See the [PERFORMANCE doc](./.github/PERFORMANCE.md) for details on multi-threading, external BLAS libraries, etc.
## Instructions
@@ -149,9 +118,3 @@ Encoder Status: 0
```
For the 6-source model, additional targets 4 and 5 correspond to guitar and piano.
-
-## Dev tips
-
-* make lint
-* Valgrind memory error test: `valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose ./demucs.cpp.main ../ggml-demucs/ggml-model-htdemucs-f16.bin ../test/data/gspi_stereo.wav ./demucs-out-cpp/`
-* Callgrind + KCachegrind: `valgrind --tool=callgrind ./demucs.cpp.test --gtest_filter='*FreqDec*'`