Tracker for Coverage over CellML Model Repository #23
2.1.0 is giving ~178/940
2.2.0 is giving ~477/940
It is a known issue that some files in the CellML Model Repository have bad XML or do not fit the specification of CellML that we use. (Aside: @shahriariravanian, which version of CellML are we guaranteeing should work?) After removing Goldbeeter_2006 from my data folder, we now get 530/940. The problem is caused in EzXML: if it hits an error while parsing, it pushes to a global error stack that prevents further usage. Why they do this, I have no idea...
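One way to work around that could look like the following (a sketch, assuming the stack is exposed as `EzXML.XML_GLOBAL_ERROR_STACK`, as in EzXML.jl's source): wrap each parse in a try/catch and clear the global stack on failure so later parses are not poisoned.

```julia
using EzXML

# Attempt to parse one model file; on failure, clear EzXML's global
# error stack so that subsequent parses still work.
function tryparse_model(path::AbstractString)
    try
        return readxml(path)   # returns an EzXML.Document on success
    catch err
        @warn "failed to parse $path" exception = err
        # Assumption: EzXML accumulates libxml2 errors in this global vector.
        empty!(EzXML.XML_GLOBAL_ERROR_STACK)
        return nothing
    end
end
```

With a guard like this, one bad file in the repository would cost us one model rather than aborting the whole run.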
861 CellML models: we get 940 from the curls, but cloning the git repos returns 861, so that's where the discrepancy comes from.
What are the issues you see?
This data is from @shahriariravanian. Could you shed some light on Chris' question?
The remaining issues are:
Great, could you name a model with ? I'd like to look into that. Similarly for a model with missing vars and components. Also, if you end up doing some profiling, I think it would be good to add benchmarking to our testing of the model repo; I'm happy to add this with BenchmarkTools. That may help pin down inefficiencies, i.e., "is it dependent on parameter count, state count, etc.?"
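A minimal sketch of what that benchmarking could look like, timing each stage of the pipeline separately (the model path here is a stand-in, and the `CellModel`/`ODEProblem` calls follow CellMLToolkit's documented usage):

```julia
using BenchmarkTools, CellMLToolkit, OrdinaryDiffEq

# Hypothetical model path; substitute any model from the repository.
path = "models/beeler_reuter_1977.cellml.xml"

ml = CellModel(path)                  # parse the CellML file
prob = ODEProblem(ml, (0.0, 100.0))   # build the ODE problem

# Benchmark each stage separately to see where the time goes.
@benchmark CellModel($path)
@benchmark ODEProblem($ml, (0.0, 100.0))
@benchmark solve($prob, Tsit5())
```

Running this over models of varying parameter and state counts would give us the data to answer the "what is it dependent on?" question.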
These are the results of the latest run:
Here is the result file as a CSV. A 0 means we failed to generate an ODESystem.
Try setting the runner to a lower tolerance. That should help the domain-error cases. If not, generate
These are the latest tracking results using ver 2.4.1 (to be pushed soon):
This issue will track our progress in testing CellMLToolkit.jl on the CellML Model Repository.
I have a branch where I've added some functions to query the Model Repository for all of the "exposures" and then curl them, here. Additionally, I added some functions to create a DataFrame showing which models work and which don't, here.
This work is incomplete, and since the model repository is quite large, it takes a while to download.
We are planning to do something similar for SBML.jl and their test-suite so it'd be nice to have some consistency in testing.
I don't have the entire library, but from my sample of ~1000 models, I found that we can call `solve` on about 10% of these models and get back a `Solution`. @shahriariravanian, you've mentioned some of the issues that could be contributing to this 10% number. It would be good to list them, so that as they get fixed we can see how this percentage changes.
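The tally behind that percentage could be computed with something like this (a sketch; the `models` directory, timespan, and solver choice are assumptions, and the CellMLToolkit calls follow its documented `CellModel`/`ODEProblem` pipeline):

```julia
using CellMLToolkit, OrdinaryDiffEq

# Count how many downloaded models make it all the way to a Solution.
function success_rate(paths)
    ok = 0
    for path in paths
        try
            ml = CellModel(path)
            prob = ODEProblem(ml, (0.0, 100.0))
            sol = solve(prob, Tsit5())
            # retcode is a Symbol in the DiffEq versions current at the time
            sol.retcode == :Success && (ok += 1)
        catch
            # parse/codegen/solve failure: count as a miss and move on
        end
    end
    return ok / length(paths)
end

success_rate(readdir("models"; join = true))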