Reading large *.res1D files using the MIKEIO1D tool #85
Replies: 2 comments
-
Hi Michael, this is an interesting use case. When you load all the data by calling the read() method on a 200+ GB file, you will need at least that much RAM. If you are only interested in particular locations, my first suggestion is to use location filtering when loading the file. Additionally, to explore just the header information (to get the IDs of relevant nodes, links, etc. for filtering), you can load the res1d file with res1d = Res1d(filename, header_load=True).
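A minimal sketch of the two approaches above, assuming the mikeio1d API described in this thread (the file path and the reach/node IDs are placeholders you would replace with your own):

```python
# Sketch: header-only loading and location filtering with MIKE IO 1D.
# The reaches/nodes keyword filtering and header_load flag follow the
# usage suggested in this thread; IDs and paths are hypothetical.

def read_header_only(filename):
    """Load only the header, to discover node/reach IDs for filtering."""
    from mikeio1d import Res1d  # deferred import so the sketch stands alone
    return Res1d(filename, header_load=True)

def read_filtered(filename, reaches=None, nodes=None):
    """Load only the requested locations instead of the whole file."""
    from mikeio1d import Res1d
    res1d = Res1d(filename, reaches=reaches, nodes=nodes)
    return res1d.read()  # pandas DataFrame with just the filtered locations

# Example usage (placeholder IDs):
# header = read_header_only("large_model.res1d")
# df = read_filtered("large_model.res1d", reaches=["reach_1"], nodes=["node_5"])
```

Filtering at load time keeps memory proportional to the selected locations rather than the full 200+ GB result set.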
-
MIKE IO 1D is built on pandas with the assumption that there is enough memory for the results to be read. For larger files, you can filter as @gedaskir suggested. In the future it may also become possible to slice in time, or to lazy-load results. I'll move this into discussions.
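A back-of-envelope estimate shows why a fully loaded result set of this size cannot fit in 64 GB of RAM once it is materialized as a pandas DataFrame (the timestep and column counts below are illustrative placeholders, not figures from this thread):

```python
# Rough in-memory size of a fully loaded result file.
# Counts are hypothetical; float64 is pandas' default numeric dtype.
n_timesteps = 500_000      # assumed number of saved timesteps
n_columns = 50_000         # assumed number of location/quantity columns
bytes_per_value = 8        # float64

total_bytes = n_timesteps * n_columns * bytes_per_value
total_gb = total_bytes / 1024**3
print(f"~{total_gb:.0f} GB of RAM for the values alone")
```

With these assumed counts the values alone take roughly 186 GB, before any pandas index or column overhead, which is why filtering down to the locations of interest is the practical route today.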
-
I have a .res1D file from a model simulation. The file is large, about 200+ GB.
Opening this file in MIKE VIEW was painfully slow, and I have been testing the MIKE IO 1D Python tools developed by DHI, but that was also taking very long.
When I try to read the file, Python says there is not enough memory on the computer, even though I have 64 GB of RAM.
Can you suggest a better/faster way to read the .res1D file and extract the results?
Thanks,
Michael