
How to handle a large dataset (>65,000 cells) in scMetabolism? #7

Open
ybyOxidant opened this issue Jan 26, 2022 · 5 comments


@ybyOxidant

Hello!
I am currently working with a dataset containing more than 60,000 cells. Running sc.metabolism or sc.metabolism.Seurat fails with the error below:
Error in asMethod(object) : Cholmod error 'problem too large' at file ../Core/cholmod_dense.c, line 102

I believe this happens when the number of columns exceeds the limit of as.matrix(). For now I break the dataset into two parts, process each with scMetabolism, and then merge them for downstream analysis. But that is not elegant, and the internal parameters of scMetabolism may not be equivalent across the splits. Do you have a better solution?

@wangshisheng

Hello!

I met the same problem — could you provide some help? Many thanks~~

@abollol

abollol commented Mar 24, 2022

Hello! Me too. Any solution?

@honghh2018

@wangshisheng @abollol I have the same issue.
How can this problem be solved?

@honghh2018

What happened to this R package? Is no one maintaining it?

@wu-yc
Owner

wu-yc commented May 23, 2022

This is limited by as.matrix(). You can split your dataset by cell type (e.g., T cells, B cells) and perform the analysis on each subset one by one.
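A minimal sketch of that split-and-recombine workaround, assuming your Seurat object has a metadata column named `celltype` (hypothetical — use your own annotation column) and that sc.metabolism.Seurat() stores its result in `obj@assays$METABOLISM$score` as in the scMetabolism README; the pathway rows are assumed identical across subsets, so the score matrices can be column-bound back together:

```r
library(Seurat)
library(scMetabolism)

# seu: a Seurat object with >65,000 cells and a "celltype" metadata column
# Split into per-cell-type objects, each small enough for as.matrix()
subsets <- SplitObject(seu, split.by = "celltype")

# Score each subset separately and collect the pathway-by-cell score matrices
score.list <- lapply(subsets, function(sub) {
  sub <- sc.metabolism.Seurat(obj = sub, method = "AUCell",
                              imputation = FALSE, ncores = 2,
                              metabolism.type = "KEGG")
  sub@assays$METABOLISM$score
})

# Recombine: cbind the per-subset scores, then reorder columns
# to match the cell order of the original object
score <- do.call(cbind, score.list)
score <- score[, colnames(seu)]
seu@assays$METABOLISM$score <- score
```

Note this sidesteps only the dense-conversion limit; with rank-based scorers such as AUCell the per-cell scores should be largely comparable across subsets, but verify this for the scoring method you choose.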
