replace unsupported get keyword with scheduler #48

Comments
I think for a longer period we should support both keywords. This allows people to also use an older dask version, so another release will not break their scripts.
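Supporting both keywords for a transition period could look roughly like the following sketch. This is a hypothetical shim, not PMDA's actual `run()` signature; the names are illustrative:

```python
import warnings

def run(n_jobs=1, get=None, scheduler=None, **kwargs):
    """Hypothetical transition shim accepting both the legacy ``get``
    and the new ``scheduler`` keyword (illustrative only)."""
    if get is not None and scheduler is not None:
        raise ValueError("pass either 'get' or 'scheduler', not both")
    if get is not None:
        warnings.warn("the 'get' keyword is deprecated, use 'scheduler'",
                      DeprecationWarning, stacklevel=2)
        scheduler = get  # honor the legacy keyword for now
    # ... the real code would dispatch the computation with `scheduler` ...
    return scheduler
```

Old scripts keep working (with a `DeprecationWarning`) while new scripts use `scheduler=` only.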
Good idea!
--
Oliver Beckstein
email: [email protected]
It seems the new version of dask (0.20.0), released on 2018-10-26, raises an error on the use of the get= keyword and set_options (http://docs.dask.org/en/latest/changelog.html).
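Concretely, the old call style `dask.compute(x, get=dask.multiprocessing.get)` becomes `dask.compute(x, scheduler="processes")`. A rough sketch of the correspondence; the mapping table and helper are ours, for illustration only:

```python
# Illustrative mapping from dask's removed get= callables to the
# scheduler= names that replace them (dask >= 0.18). The dict name
# LEGACY_TO_SCHEDULER is ours, not part of dask.
LEGACY_TO_SCHEDULER = {
    "dask.get": "synchronous",                # single-threaded
    "dask.threaded.get": "threads",           # thread pool
    "dask.multiprocessing.get": "processes",  # process pool
}

def migrate_call(get_name):
    """Translate a legacy get= value to the new scheduler= string."""
    return LEGACY_TO_SCHEDULER[get_name]
```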
Would be nice to support old and new dask, but dask is moving so quickly that I don't think we have the developer time to do this. Instead we will have to require dask ≥ 0.18.0 and have users update. At least that is relatively painless for dask.
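Requiring dask ≥ 0.18.0 could be enforced at import time with a check along these lines. The function names are illustrative; real code might use `packaging.version` instead of this simple tuple comparison:

```python
# Hedged sketch: enforce the "require dask >= 0.18.0" policy at
# import time (names are ours, not PMDA's).
MIN_DASK = (0, 18, 0)

def version_tuple(version):
    # "0.18.0" -> (0, 18, 0); only the first three fields are compared
    return tuple(int(part) for part in version.split(".")[:3])

def check_dask_version(installed):
    if version_tuple(installed) < MIN_DASK:
        raise ImportError(
            "PMDA requires dask >= 0.18.0 but found %s" % installed)
```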
I'm also OK with saying this library is cutting edge. We explicitly claim the library is not stable in the README.
- fix #48
- updated boiler-plate code in ParallelAnalysisBase.run and copied and pasted into leaflet.LeafletFinder.run() (TODO: make this more DRY)
- dask.distributed added as dependency (it is recommended by dask for a single node anyway, and it avoids imports inside if statements... much cleaner code in PMDA)
- removed scheduler kwarg: use dask.config.set(scheduler=...)
- 'multiprocessing' and n_jobs=-1 are now only selected if nothing is set by dask; if one wants n_jobs=-1 to always grab all cores then you must set the multiprocessing scheduler
- default is n_jobs=1 (instead of -1), i.e., the single-threaded scheduler
- updated tests
- removed unnecessary broken(?) test for "no deprecations" in parallel.ParallelAnalysisBase
- updated CHANGELOG
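The scheduler-selection rule described above (respect whatever the user configured via `dask.config.set(...)`, and only fall back to multiprocessing when more cores are explicitly requested) can be sketched like this. The plain dict `config` stands in for dask's configuration; the function name is ours:

```python
# Sketch of the scheduler-selection logic from the commit message:
# a user-configured scheduler always wins; otherwise n_jobs decides.
def choose_scheduler(config, n_jobs=1):
    scheduler = config.get("scheduler")
    if scheduler is not None:
        return scheduler          # whatever dask.config.set chose
    if n_jobs == 1:
        return "synchronous"      # new default: single-threaded
    return "processes"            # e.g. n_jobs=-1 grabs all cores
```

With real dask, the corresponding user-facing call is `with dask.config.set(scheduler="processes"): ...`.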
Expected behaviour
No warnings or errors are raised when using dask; PMDA should work with the latest dask releases (≥ 0.18.0).
Actual behaviour
Using the get= kwarg was deprecated and is now removed: passing it raises a TypeError. Use scheduler= instead.
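The mechanism behind the TypeError is ordinary Python keyword handling: a function whose signature (like compute() in dask ≥ 0.20) only knows `scheduler=` rejects `get=` as an unexpected keyword. A stand-in that mimics, but is not, dask's real code:

```python
# Stand-in signature: accepts scheduler=, no longer accepts get=
# (illustrative only, not dask's actual implementation).
def compute(*args, scheduler=None):
    return scheduler

def legacy_call_fails():
    try:
        compute(1, get="threads")  # old-style call with removed keyword
    except TypeError:
        return True                # Python rejects the unknown keyword
    return False
```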
Code to reproduce the behaviour
Current version of MDAnalysis: (run python -c "import MDAnalysis as mda; print(mda.__version__)")
Current version of pmda: (run python -c "import pmda; print(pmda.__version__)")
Current version of dask: (run python -c "import dask; print(dask.__version__)")