Following up on this discussion, it will be cumbersome to add new data sources and drivers. The folders created when the pipeline is run could be fully organized by data source and driver type, but they are not.
For example, after the pipeline is run, the directory structure in `1_fetch` looks like this:

Some issues with this organization system:

- No subfolders in `tmp/` or `out/`. At best, future data sources must be identified by a suffix (e.g. `_mntoha`). At worst, there is no suffix at all (as with `lake_metadata.csv`), so the situation is ripe for file collisions that result in a file being overwritten or used for the wrong data source.
- There's no distinction between NLDAS drivers and GCM drivers.
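For illustration only, here is one way the fetch outputs could be nested by data source and then by driver type; the layout and file placement below are a hypothetical sketch, not the pipeline's actual structure:

```
1_fetch/
├── tmp/
│   ├── mntoha/
│   │   ├── nldas/
│   │   └── gcm/
│   └── <new_source>/
└── out/
    ├── mntoha/
    │   ├── lake_metadata.csv
    │   ├── nldas/
    │   └── gcm/
    └── <new_source>/
```

Nesting could also go driver-first (e.g. `out/nldas/mntoha/`); either way, adding a new data source or driver becomes a matter of adding a folder rather than inventing another suffix.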
I have been using the suffix/prefix approach over in lake-temperature-out. I'm not super satisfied by it because you end up having to scroll through a lot of files, so I like the idea of a nested approach!