NullPointerException reading from NetcdfFile when cache enabled (version 5.5.3) #981
-
Thanks for the report, Sami. Is it possible to get the server logs when this happens?
-
Hi @skauppin, older support messages show that this has been an issue before, and it is a bug in netCDF-Java, but it can be mitigated by also enabling the RandomAccessFile cache (a sketch is below). Let us know if that doesn't work for you.
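A minimal sketch of enabling that cache; the FileCache constructor arguments (cache name, min/max elements, hard limit, scour period in seconds) and the values used here are illustrative, so check the javadoc for your exact version:

```java
import ucar.nc2.util.cache.FileCache;
import ucar.unidata.io.RandomAccessFile;

public class EnableRafCache {
  public static void main(String[] args) {
    // Enable the global RandomAccessFile cache in addition to the NetcdfFile cache.
    // FileCache arguments: cache name, min elements, max elements,
    // hard limit (-1 = none), scour period in seconds -- illustrative values only.
    RandomAccessFile.setGlobalFileCache(
        new FileCache("RandomAccessFile", 100, 200, -1, 300));
  }
}
```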
-
Thanks for the answer! I was able to reproduce the original problem on my laptop, and enabling the RandomAccessFile cache did seem to help. But when we deployed to the production environment we still got those NullPointerException errors from RandomAccessFile. Many files are being accessed there in multiple threads, so it's more complicated. I haven't been able to reproduce it on my laptop with the RandomAccessFile cache enabled, so at the moment I can't provide more details on what is happening. I'll let you know if we find out anything.
-
Hi all,
I'm working with a multithreaded server that uses the netcdf-java library to access NetCDF files and provide data from them to client applications. We recently updated from netcdfAll-4.6.10.jar to version 5.5.3 (edu.ucar/cdm-core, edu.ucar/grib and edu.ucar/netcdf4).

We are using the caching functionality of the netcdf-java library. When the system starts, it initializes the cache by calling NetcdfDatasets.initNetcdfFileCache, and we get a NetcdfFile instance by calling NetcdfDatasets.acquireFile.
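A minimal sketch of the pattern, with a placeholder path and cache sizes rather than our production values:

```java
import java.io.IOException;
import ucar.nc2.NetcdfFile;
import ucar.nc2.dataset.DatasetUrl;
import ucar.nc2.dataset.NetcdfDatasets;

public class AcquireSketch {
  public static void main(String[] args) throws IOException {
    // Once at startup: min 50 / max 100 cached files, scour every 300 seconds
    // (placeholder numbers, not our production settings).
    NetcdfDatasets.initNetcdfFileCache(50, 100, 300);

    DatasetUrl url = DatasetUrl.findDatasetUrl("/data/example.nc"); // placeholder path
    NetcdfFile ncfile = NetcdfDatasets.acquireFile(url, null);
    try {
      // ... read data from ncfile ...
    } finally {
      ncfile.close(); // for an acquired file, close() releases it back to the cache
    }
  }
}
```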
Now with version 5.5.3 we are observing errors when reading data from files.
When not using the cache functionality there are no errors, but then we have performance problems.
To me it looks like the problem is that the NetcdfFile is being removed from the cache (and closed) while it's still in use by another thread.
This did not happen with library version 4.6.10.
We haven't tried it yet, but I guess one workaround could be to set the cache size so large that all the files being processed fit in the cache. That's of course not optimal in any way, and we'd prefer a sounder solution.
Reproducing the problem seems quite easy: just open and read from X files in parallel while setting the cache size < X.
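For example, something along these lines (file paths, counts, and cache sizes are made up):

```java
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;
import ucar.nc2.dataset.DatasetUrl;
import ucar.nc2.dataset.NetcdfDatasets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CacheEvictionRepro {
  public static void main(String[] args) {
    int numFiles = 20;
    // Cache capacity (max 10) is deliberately smaller than the number of files in flight.
    NetcdfDatasets.initNetcdfFileCache(5, 10, 300);

    ExecutorService pool = Executors.newFixedThreadPool(numFiles);
    for (int i = 0; i < numFiles; i++) {
      String path = "/data/test" + i + ".nc"; // placeholder file names
      pool.submit(() -> {
        try {
          for (int iter = 0; iter < 100; iter++) {
            DatasetUrl url = DatasetUrl.findDatasetUrl(path);
            NetcdfFile ncfile = NetcdfDatasets.acquireFile(url, null);
            try {
              for (Variable v : ncfile.getVariables()) {
                v.read(); // the NullPointerException surfaces here
              }
            } finally {
              ncfile.close(); // releases the file back to the cache
            }
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
      });
    }
    pool.shutdown();
  }
}
```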
I would appreciate any comments, insights, or ideas on how this should be fixed.
Thanks!
Sami