[Paper] add comparison tables and benchmark #127
Conversation
Possible alternative to comparing the 3 packages
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

@@ Coverage Diff @@
##             main     #127      +/-   ##
==========================================
+ Coverage   83.42%   87.31%   +3.89%
==========================================
  Files          29       29
  Lines         748      749       +1
==========================================
+ Hits          624      654      +30
+ Misses        124       95      -29

☔ View full report in Codecov by Sentry.
First table looks very good. The second one is surprising to me: this is the first time I ran BenchmarkTools on these things, and I NEVER optimized the code, so it is wild that we beat DatagenCopulaBased.jl like that. What about BivariateCopulas, why don't you have a result?
With "bivariatecopula.jl" it is only possible for the bivariate case and on the other hand I was not able to execute the code correctly. I was trying for quite some time using the documentation but I didn't succeed. Maybe it's a problem with my machine. On the other hand, do you think we should cite "BenchmarkTools" and put the characteristics of the machine I used to obtain those results? |
add dates of package bivariatecopula
That is good! Yes, you can cite BenchmarkTools like I cited the other packages: with a link to their repository plus, if they have a BibTeX citation in their README and/or docs, those BibTeX entries. Look at what I did for BivariateCopulas to see how it works. Nice numbers. The statement should be something like
joss/paper.md (Outdated)
The following table shows some characteristics that differentiate each package.

| Characteristic                    | Copulas.jl | DatagenCopulaBased.jl | BivariateCopulas.jl |
|-----------------------------------|------------|-----------------------|---------------------|
| Every Archimedean Copula sampling | Yes        | No                    | No                  |
You could add here a line "Classic bivariate copulas sampling" with Yes/Yes/Yes.
Also a line "Obscure bivariate copulas sampling" with Yes/No/No? ^^
Maybe the first column could be refactored to:

Sampling
- Classic bivariate copulas: Yes / Yes / Yes
- Obscure bivariate copulas: Yes / No / No
- Archimedean copulas: All / Classic only / Classic only
- Multivariate copulas: Yes / Yes / No (you exchanged the two, I think)
- Archimedean chains (this is the real name of "nested"): No / No / Yes

Fitting: No / No / Yes
Plotting: No / No / Yes
Dependence metrics: Partial / ? / ? (check)

A possible markdown rendering is sketched below.
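Purely as an illustration, the refactored table could look like this in the paper's markdown, with the values copied from the suggestion above (the "?" entries still need to be checked before anything is committed):

| Characteristic                           | Copulas.jl | DatagenCopulaBased.jl | BivariateCopulas.jl |
|------------------------------------------|------------|-----------------------|---------------------|
| Sampling: classic bivariate copulas      | Yes        | Yes                   | Yes                 |
| Sampling: obscure bivariate copulas      | Yes        | No                    | No                  |
| Sampling: Archimedean copulas            | All        | Classic only          | Classic only        |
| Sampling: multivariate copulas           | Yes        | Yes                   | No                  |
| Sampling: Archimedean chains ("nested")  | No         | No                    | Yes                 |
| Fitting                                  | No         | No                    | Yes                 |
| Plotting                                 | No         | No                    | Yes                 |
| Dependence metrics                       | Partial    | ?                     | ?                   |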
It would also be nice to add this benchmark to the documentation (with the code to run it) since it is pretty :)
modifying tables; BenchmarkTools.jl needs to be cited correctly
joss/paper.md (Outdated)
## Efficiency
- To perform an efficiency test we use the "BenchmarkTools" package with the objective of comparing the execution time and the amount of memory necessary to generate copula samples with each package. We generate 10^6 samples for Clayton copula of dimensions 2, 5, 10 with parameter 0.8
+ To perform an efficiency test we use the [`BenchmarkTools.jl`](https://github.com/JuliaCI/BenchmarkTools.jl) [@BenchmarkTools] package with the objective of comparing the execution time and the amount of memory necessary to generate copula samples with each package. We generate 10^6 samples for Clayton copula of dimensions 2, 5, 10 with parameter 0.8
For the reference to work you need to add it to the joss/paper.bib file, taking it from their CITATION file here: https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/CITATION.bib
@ARTICLE{BenchmarkTools,
author = {{Chen}, Jiahao and {Revels}, Jarrett},
title = "{Robust benchmarking in noisy environments}",
journal = {arXiv e-prints},
keywords = {Computer Science - Performance, 68N30, B.8.1, D.2.5},
year = 2016,
month = "Aug",
eid = {arXiv:1608.04295},
archivePrefix ={arXiv},
eprint = {1608.04295},
primaryClass = {cs.PF},
adsurl = {https://ui.adsabs.harvard.edu/abs/2016arXiv160804295C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
Hello, I added the code; however, I only added the one I used for Copulas.jl, but it works with any package. Additionally, it must be taken into account that the methodology for obtaining samples is different for each package.
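For context, here is a minimal sketch of what such a benchmark could look like for Copulas.jl alone, assuming its Distributions.jl-style ClaytonCopula(d, θ) constructor and rand(C, n) sampler; the exact script committed in this PR may differ, and the other packages need their own sampling calls.

using Copulas, BenchmarkTools

# Sketch: measure the time and memory needed to draw 10^6 samples
# from a Clayton copula with parameter 0.8, for each dimension
# compared in the paper.
for d in (2, 5, 10)
    C = ClaytonCopula(d, 0.8)
    println("Clayton copula, d = $d:")
    display(@benchmark rand($C, 10^6))
end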
Thanks, this is OK for now. I will finish this PR and merge it myself on Monday. Very good job. Can you now focus on #98 from now on? I am sure this is not very hard to solve, but it needs to work correctly.
Do you think I can add a thank you to the foundation that finances my studies?
Yes of course, add it under a section at the end of the paper, before the references, like so:

# Acknowledgement
Santiago Jiménez Ramos thanks XXX for the funding
Thanks, I will.
Add Acknowledgement
Hi Oskar, I don't know what you think about this: I used BenchmarkTools.jl to get the results. Can it be helpful for the correction?