I think some people (myself included) will come across this method being familiar with UMAP and classical dimensionality reduction techniques, but it might not be clear when to use UMAP vs. TopOMetry. Could you comment on this?
Might consider adding to the docs and/or README, but feel free to ignore the suggestion as well.
Hi @sgbaird! Thank you for your interest in TopOMetry.
it might not be clear when to use UMAP vs. TopOMetry
I imagined people would consider reading the introduction, specifically pages 6-8 of our preprint. I'm not telling people to 'ditch UMAP'. TopOMetry's assumptions about data structure are looser than UMAP's: we basically assume that the number of neighbors k divided by the total number of samples approaches zero (i.e., that the data comprises a set of topological manifolds, so that we can do calculus on it). When the data topology is highly non-uniform, as is common in biological data, TopOMetry yields greater detail, as in the PBMC68K example (Fig. 2 of the manuscript). Even on non-biological data, such as in Natural Language Processing, TopOMetry can better separate clusters and provide denoised affinity matrices for downstream clustering algorithms to be trained on. An important hint that your data may fall outside UMAP's assumptions is if the two embeddings look markedly different.
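As a rough illustration of that last hint, here is a minimal sketch of such a sanity check. It does not use TopOMetry's own classes; scikit-learn's `SpectralEmbedding` stands in as a generic diffusion/spectral-style method, and the dataset and parameters are arbitrary choices for the example.

```python
# Minimal sketch: embed the same data with UMAP and with a diffusion/spectral-
# style method (SpectralEmbedding stands in here, NOT TopOMetry's actual API)
# and quantify how differently the two embeddings arrange pairwise distances.
import umap
from sklearn.datasets import load_digits
from sklearn.manifold import SpectralEmbedding
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

X = load_digits().data

umap_emb = umap.UMAP(n_neighbors=15, random_state=42).fit_transform(X)
spec_emb = SpectralEmbedding(n_components=2, n_neighbors=15).fit_transform(X)

# A low rank correlation between the two sets of pairwise distances suggests
# the embeddings disagree strongly, i.e. the data may violate UMAP's
# uniformity assumption and a looser-assumption method is worth a closer look.
rho, _ = spearmanr(pdist(umap_emb), pdist(spec_emb))
print(f"Spearman correlation of pairwise distances: {rho:.3f}")
```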
A second point is that TopOMetry is intended to be a comprehensive framework. Separate steps can be pipelined at the user's will (e.g., use only an initial diffusion model followed by a specific layout technique, or reuse the same model across steps). I'm not saying the default workflows are necessarily the best, nor that they use the best possible methods for approximating the LBO; they are simply the best options currently available that stand on really solid mathematical ground. The idea is that TopOMetry works within a scikit-learn compatible workflow and that users can use its approximate kNN, affinity learning, orthogonal decomposition, and layout optimization modules separately, in any combination they wish (see the sketch below). My intent is to let the community contribute their thoughts, extensions, and improvements to this initial work. After all, I have done everything so far by myself.
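To make the modular idea concrete, here is a rough sketch of that kind of composition built entirely from generic scikit-learn / umap-learn components (not TopOMetry's actual classes), showing how the four stages can be assembled and swapped independently:

```python
# Rough sketch of the modular workflow described above, using generic
# components rather than TopOMetry's own classes, to illustrate the four
# stages: kNN -> affinity -> orthogonal decomposition -> layout.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding
import umap

X = load_digits().data

# 1) kNN graph (exact kNN distances here; an approximate kNN would be swapped in for large data)
knn = kneighbors_graph(X, n_neighbors=15, mode='distance')

# 2) affinity learning (a simple symmetrized Gaussian kernel as a placeholder)
sigma = knn.data.mean()
affinity = knn.copy()
affinity.data = np.exp(-(affinity.data ** 2) / (2 * sigma ** 2))
affinity = 0.5 * (affinity + affinity.T)   # symmetrize

# 3) orthogonal decomposition (a spectral/diffusion-style basis)
basis = SpectralEmbedding(n_components=10,
                          affinity='precomputed').fit_transform(affinity.toarray())

# 4) layout optimization (UMAP run on the learned basis instead of raw data)
layout = umap.UMAP(n_neighbors=15).fit_transform(basis)
print(layout.shape)   # (n_samples, 2)
```

Any of the four stages could be replaced (a different kernel, a different decomposition, a different layout method) without touching the others, which is the kind of flexibility the framework is aiming for.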
Might consider adding to the docs and/or README
I'm indeed considering it, as this was the first question I got after sharing the manuscript. I'll do it this week, along with some new tutorials.
Came across this via Leland's twitter post btw.
Prof. Leland was very helpful in providing his insights and believing in me in the early stages of this project. I'm thankful he shared this. UMAP is seminal, groundbreaking work, and if I could see a little further it was by standing on the shoulders of giants.