HDF5 version 1.15.0 currently under development
================================================================================
INTRODUCTION
============
This document describes the differences between this release and the previous
HDF5 release. It contains information on the platforms tested and known
problems in this release. For more details check the HISTORY*.txt files in the
HDF5 source.
Note that documentation in the links below will be updated at the time of each
final release.
Links to HDF5 documentation can be found on:
https://support.hdfgroup.org/releases/hdf5/latest-docs.html
The official HDF5 releases can be obtained from:
https://support.hdfgroup.org/downloads/index.html
Changes from Release to Release and New Features in the HDF5-1.16.x release series
can be found at:
https://support.hdfgroup.org/releases/hdf5/documentation/release_specific_info.md
If you have any questions or comments, please send them to the HDF Help Desk:
CONTENTS
========
- New Features
- Support for new platforms and languages
- Bug Fixes since HDF5-1.14.0
- Platforms Tested
- Known Problems
- CMake vs. Autotools installations
New Features
============
Configuration:
-------------
- Added signed Windows msi binary and signed Apple dmg binary files.
The release process now provides signed Windows and Apple installation
binaries in addition to the Debian and RPM installation binaries. Also,
these installer files are no longer compressed into packaged archives.
- Added configuration option for internal threading/concurrency support:
CMake: HDF5_ENABLE_THREADS (ON/OFF) (Default: ON)
Autotools: --enable-threads (yes/no) (Default: yes)
This option enables support for threading and concurrency algorithms
within the HDF5 library. It is required for, but separate from, the
'threadsafe' configure option, which makes the HDF5 API safe to call from
multiple threads. It is possible to enable the 'threads' option and
disable the 'threadsafe' option, but not vice versa. The 'threads' option
must be on to enable the subfiling VFD.
- Moved examples to the HDF5Examples folder in the source tree.
Moved the C++ and Fortran examples from the examples folder to the HDF5Examples
folder and renamed them to TUTR (tutorial). These are referenced from the LearnBasics
doxygen page.
- Added support for using zlib-ng package as the zlib library:
CMake: HDF5_USE_ZLIB_NG
Autotools: --enable-zlibng
Added the option HDF5_USE_ZLIB_NG to allow the replacement of the
default ZLib package by the zlib-ng package as a built-in compression library.
- Disable CMake UNITY_BUILD for hdf5
CMake added a target property, UNITY_BUILD, which, when set to true, combines the
target's source files into batches for faster compilation. By default, the setting
is OFF, but it could be enabled by a project that includes HDF5 as a subproject.
HDF5 has disabled this feature by setting the property to OFF in the HDFMacros.cmake file.
- Removed "function/code stack" debugging configuration option:
CMake: HDF5_ENABLE_CODESTACK
Autotools: --enable-codestack
This was used to debug memory leaks internal to the library, but has been
broken for >1.5 years and is now easily replaced with third-party tools
(e.g. libbacktrace: https://github.com/ianlancetaylor/libbacktrace) on an
as-needed basis when debugging an issue.
- Added configure options for enabling/disabling non-standard programming
language features
* Added a new configuration option that allows enabling or disabling of
support for features that are extensions to programming languages, such
as support for the _Float16 datatype:
CMake: HDF5_ENABLE_NONSTANDARD_FEATURES (ON/OFF) (Default: ON)
Autotools: --enable-nonstandard-features (yes/no) (Default: yes)
When this option is enabled, configure time checks are still performed
to ensure that a feature can be used properly, but these checks may not
be sufficient when compiler support for a feature is incomplete or broken,
resulting in library build failures. When set to OFF/no, this option
provides a way to disable support for all non-standard features to avoid
these issues. Individual features can still be re-enabled with their
respective configuration options.
* Added a new configuration option that allows enabling or disabling of
support for the _Float16 C datatype:
CMake: HDF5_ENABLE_NONSTANDARD_FEATURE_FLOAT16 (ON/OFF) (Default: ON)
Autotools: --enable-nonstandard-feature-float16 (yes/no) (Default: yes)
While support for the _Float16 C datatype can generally be detected and
used properly, some compilers have incomplete support for the datatype
and will pass configure time checks while still failing to build HDF5.
This option provides a way to disable support for the _Float16 datatype
when the compiler doesn't have the proper support for it.
- Deprecate bin/cmakehdf5 script
With the improvements made in CMake since version 3.23 and the addition
of CMake preset files, this script is no longer necessary.
See INSTALL_CMake.txt file, Section X: Using CMakePresets.json for compiling
- Overhauled LFS support checks
In 2024, we can assume that Large File Support (LFS) exists on all
systems we support, though it may require flags to enable it,
particularly when building 32-bit binaries. The HDF5 source does
not use any of the 64-bit specific API calls (e.g., ftello64)
or explicit 64-bit offsets via off64_t.
Autotools
* We now use AC_SYS_LARGEFILE to determine how to support LFS. We
previously used a custom m4 script for this.
CMake
* The HDF_ENABLE_LARGE_FILE option (advanced) has been removed
* We no longer run a test program to determine if LFS works, which
will help with cross-compiling
* On Linux we now unilaterally set -D_LARGEFILE_SOURCE and
-D_FILE_OFFSET_BITS=64, regardless of 32/64 bit system. CMake
doesn't offer a nice equivalent to AC_SYS_LARGEFILE and since
those options do nothing on 64-bit systems, this seems safe and
covers all our bases. We don't set -D_LARGEFILE64_SOURCE since
we don't use any of the POSIX 64-bit specific API calls like
ftello64, as noted above.
* We didn't test for LFS support on non-Linux platforms. We've added
comments for how LFS should probably be supported on AIX and Solaris,
which seem to be alive, though uncommon. PRs would be appreciated if
anyone wishes to test this.
This overhaul also fixes GitHub #2395, which points out that the LFS flags
used when building with CMake differ based on whether CMake has been
run before. The LFS check program that caused this problem no longer exists.
- The CMake HDF5_ENABLE_DEBUG_H5B option has been removed
This enabled some additional version-1 B-tree checks. These have been
removed so the option is no longer necessary.
This option was CMake-only and marked as advanced.
- New option for building with static CRT in Windows
The following option has been added:
HDF5_BUILD_STATIC_CRT_LIBS "Build With Static Windows CRT Libraries" OFF
Because our minimum CMake version is 3.18, the macro that changed the runtime
flags no longer works, as CMake changed the default behavior in CMake 3.15.
Fixes GitHub issue #3984
- Added support for the new MSVC preprocessor
Microsoft added support for a new, standards-conformant preprocessor
to MSVC, which can be enabled with the /Zc:preprocessor option. This
preprocessor would trip over our HDopen() variadic function-like
macro, which uses a feature that only works with the legacy preprocessor.
ifdefs have been added that select the correct HDopen() form and
allow building HDF5 with the /Zc:preprocessor option.
The HDopen() macro is located in an internal header file and only
affects building the HDF5 library from source.
Fixes GitHub #2515
- Renamed HDF5_ENABLE_USING_MEMCHECKER to HDF5_USING_ANALYSIS_TOOL
The HDF5_USING_ANALYSIS_TOOL option indicates to the test macros that
an analysis tool is being used and that the tests should not use
the runTest.cmake macros and its variations. Otherwise, analysis tools
like valgrind would test the macro code instead of the program under test.
HDF5_ENABLE_USING_MEMCHECKER is still used for controlling the HDF5
define, H5_USING_MEMCHECKER.
- New option for building and naming tools in CMake
The following option has been added:
HDF5_BUILD_STATIC_TOOLS "Build Static Tools Not Shared Tools" OFF
The default will build shared tools unless BUILD_SHARED_LIBS = OFF.
Tools will no longer have "-shared" as only one set of tools will be created.
- Incorporated HDF5 examples repository into HDF5 library.
The HDF5Examples folder is equivalent to the hdf5-examples repository.
This enables building and testing the examples
during the library build process or after the library has been installed.
Previously, the hdf5-examples archives were downloaded
for packaging with the library. Now the examples can be built
and tested without a packaged install of the library.
However, to maintain the ability to use the HDF5Examples with an installed
library, it is necessary to map the option names used by the library
to those used by the examples. The typical pattern is:
<example option> = <library option>
HDF_BUILD_FORTRAN = ${HDF5_BUILD_FORTRAN}
- Added new option for CMake to mark tests as SKIPPED.
HDF5_DISABLE_TESTS_REGEX is a regular expression string that is checked against
test names; if a test name matches, that test's property is set to DISABLED.
HDF5_DISABLE_TESTS_REGEX can be initialized on the
command line: "-DHDF5_DISABLE_TESTS_REGEX:STRING=<regex>"
See the CMake documentation for the regex specification.
- Added defaults to CMake for long double conversion checks
HDF5 performs a couple of checks at build time to see if long double
values can be converted correctly (IBM's Power architecture uses a
special format for long doubles). These checks were performed using
TRY_RUN, which is a problem when cross-compiling.
These checks now use default values appropriate for most non-Power
systems when cross-compiling. The cache values can be pre-set if
necessary, which will preempt both the TRY_RUN and the default.
Affected values:
H5_LDOUBLE_TO_LONG_SPECIAL (default no)
H5_LONG_TO_LDOUBLE_SPECIAL (default no)
H5_LDOUBLE_TO_LLONG_ACCURATE (default yes)
H5_LLONG_TO_LDOUBLE_CORRECT (default yes)
H5_DISABLE_SOME_LDOUBLE_CONV (default no)
Fixes GitHub #3585
- Improved support for Intel oneAPI
* Separates the old 'classic' Intel compiler settings and warnings
from the oneAPI settings
* Uses `-check nouninit` in debug builds to avoid false positives
when building H5_buildiface with `-check all`
* Both Autotools and CMake
- Added new options for CMake and Autotools to control the Doxygen
warnings as errors setting.
* HDF5_ENABLE_DOXY_WARNINGS: ON/OFF (Default: ON)
* --enable-doxygen-errors: enable/disable (Default: enable)
By default, the build will fail if Doxygen parsing generates warnings.
The option can be disabled if certain versions of Doxygen have parsing
issues, e.g. 1.9.5 or 1.9.8.
Addresses GitHub issue #3398
- Added support for AOCC and classic Flang w/ the Autotools
* Adds a config/clang-fflags options file to support Flang
* Corrects missing "-Wl," from linker options in the libtool wrappers
when using Flang, the MPI Fortran compiler wrappers, and building
the shared library. This would often result in unrecognized options
like -soname.
* Enable -nomp w/ Flang to avoid linking to the OpenMP library.
CMake can build the parallel, shared library w/ Fortran using AOCC
and Flang, so no changes were needed for that build system.
Fixes GitHub issues #3439, #1588, #366, #280
- Converted the build of libaec and zlib to use FETCH_CONTENT with CMake.
Using the CMake FetchContent module, the external filters can populate
content at configure time via any method supported by the ExternalProject
module. Whereas ExternalProject_Add() downloads at build time, the
FetchContent module makes content available immediately, allowing the
configure step to use the content in commands like add_subdirectory(),
include() or file() operations.
Removed HDF options for using FETCH_CONTENT explicitly:
BUILD_SZIP_WITH_FETCHCONTENT:BOOL
BUILD_ZLIB_WITH_FETCHCONTENT:BOOL
- Thread-safety + static library disabled on Windows w/ CMake
The thread-safety feature requires hooks in DllMain(), which is only
present in the shared library.
We previously just warned about this, but now any CMake configuration
that tries to build thread-safety and the static library will fail.
This cannot be overridden with ALLOW_UNSUPPORTED.
Fixes GitHub issue #3613
- Autotools builds now build the szip filter by default when an appropriate
library is found
Since libaec is prevalent and BSD-licensed for both encoding and
decoding, we build the szip filter by default now.
Both autotools and CMake build systems will process the szip filter the same as
the zlib filter is processed.
- Removed CMake cross-compiling variables
* HDF5_USE_PREGEN
* HDF5_BATCH_H5DETECT
These were used to work around H5detect and H5make_libsettings and
are no longer required.
- Running H5make_libsettings is no longer required for cross-compiling
The functionality of H5make_libsettings is now handled via template files,
so H5make_libsettings has been removed.
- Running H5detect is no longer required for cross-compiling
The functionality of H5detect is now exercised at library startup,
so H5detect has been removed.
- Updated HDF5 API tests CMake code to support VOL connectors
* Implemented support for fetching, building and testing HDF5
VOL connectors during the library build process and documented
the feature under doc/cmake-vols-fetchcontent.md
* Implemented the HDF5_TEST_API_INSTALL option that enables
installation of the HDF5 API tests on the system
- Added new CMake options for building and running HDF5 API tests
(Experimental)
HDF5 API tests are an experimental feature, primarily targeted
toward HDF5 VOL connector authors, that is currently being developed.
These tests exercise the HDF5 API and are being integrated back
into the HDF5 library from the HDF5 VOL tests repository
(https://github.com/HDFGroup/vol-tests). To support this feature,
the following new options have been added to CMake:
* HDF5_TEST_API: ON/OFF (Default: OFF)
Controls whether the HDF5 API tests will be built. These tests
will only be run during testing of HDF5 if the HDF5_TEST_SERIAL
(for serial tests) and HDF5_TEST_PARALLEL (for parallel tests)
options are enabled.
* HDF5_TEST_API_INSTALL: ON/OFF (Default: OFF)
Controls whether the HDF5 API test executables will be installed
on the system alongside the HDF5 library. This option is currently
not functional.
* HDF5_TEST_API_ENABLE_ASYNC: ON/OFF (Default: OFF)
Controls whether the HDF5 Async API tests will be built. These
tests will only be run if the VOL connector used supports Async
operations.
* HDF5_TEST_API_ENABLE_DRIVER: ON/OFF (Default: OFF)
Controls whether to build the HDF5 API test driver program. This
test driver program is useful for VOL connectors that use a
client/server model where the server needs to be up and running
before the VOL connector can function. This option is currently
not functional.
* HDF5_TEST_API_SERVER: String (Default: "")
Used to specify a path to the server executable that the test
driver program should execute.
- Added support for CMake presets file.
CMake supports two main files, CMakePresets.json and CMakeUserPresets.json,
that allow users to specify common configure options and share them with others.
HDF added a CMakePresets.json file for a typical configuration, along with a support
file, config/cmake-presets/hidden-presets.json.
Also added a section to INSTALL_CMake.txt with a basic explanation of the
process of using CMakePresets.
- Deprecated and removed old SZIP library in favor of LIBAEC library
LIBAEC library has been used in HDF5 binaries as the szip library of choice
for a few years. We are removing the options for using the old SZIP library.
Also removed the config/cmake/FindSZIP.cmake file.
- Enabled instrumentation of the library by default in CMake for parallel
debug builds
HDF5 can be configured to instrument portions of the parallel library to
aid in debugging. Autotools builds of HDF5 turn this capability on by
default for parallel debug builds and off by default for other build types.
CMake has been updated to match this behavior.
- Added new option to build libaec and zlib inline with CMake.
Using the CMake FetchContent module, the external filters can populate
content at configure time via any method supported by the ExternalProject
module. Whereas ExternalProject_Add() downloads at build time, the
FetchContent module makes content available immediately, allowing the
configure step to use the content in commands like add_subdirectory(),
include() or file() operations.
The HDF options (and defaults) for using this are:
BUILD_SZIP_WITH_FETCHCONTENT:BOOL=OFF
LIBAEC_USE_LOCALCONTENT:BOOL=OFF
BUILD_ZLIB_WITH_FETCHCONTENT:BOOL=OFF
ZLIB_USE_LOCALCONTENT:BOOL=OFF
The CMake variables to control the path and file names:
LIBAEC_TGZ_ORIGPATH:STRING
LIBAEC_TGZ_ORIGNAME:STRING
ZLIB_TGZ_ORIGPATH:STRING
ZLIB_TGZ_ORIGNAME:STRING
See the CMakeFilters.cmake and config/cmake/cacheinit.cmake files for usage.
- Added the CMake variable HDF5_ENABLE_ROS3_VFD to the HDF5 CMake config
file hdf5-config.cmake. This makes it easy to detect whether the library
has been built with or without read-only S3 functionality.
Library:
--------
- Added new routines for interacting with error stacks: H5Epause_stack,
H5Eresume_stack, and H5Eis_paused. These routines can be used to
indicate that errors from a call to an HDF5 routine should not be
pushed onto an error stack. Primarily targeted toward third-party
developers of Virtual File Drivers (VFDs) and Virtual Object Layer (VOL)
connectors, these routines allow developers to perform "speculative"
operations (such as trying to open a file or object) without requiring
that the error stack be cleared after a speculative operation fails.
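As an illustration, a speculative open could be wrapped as in the minimal C sketch
below (assuming, as with other H5E routines, that H5E_DEFAULT refers to the current
error stack; the file name argument is arbitrary and error checking is abbreviated):

    #include "hdf5.h"

    hid_t try_open(const char *name)
    {
        hid_t file_id;

        H5Epause_stack(H5E_DEFAULT);  /* stop pushing errors onto the default stack */
        file_id = H5Fopen(name, H5F_ACC_RDONLY, H5P_DEFAULT); /* may legitimately fail */
        H5Eresume_stack(H5E_DEFAULT); /* restore normal error reporting */

        return file_id;               /* negative if the speculative open failed */
    }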
- H5Pset_external() now uses HDoff_t, which is always a 64-bit type
The H5Pset_external() call took an off_t parameter in HDF5 1.14.x and
earlier. On POSIX systems, off_t is specified as a 64-bit type via
POSIX large-file support (LFS). On Windows, however, off_t is defined
as a 32-bit type, even on 64-bit Windows.
HDoff_t has been added to H5public.h and is defined to be __int64 on
Windows and the library has been updated to use HDoff_t in place of
off_t throughout. The H5Pset_external() offset parameter has also been
updated to be HDoff_t.
There is no API compatibility wrapper for this change.
Fixes GitHub issue #3506
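For example, an external segment can now be registered at an offset beyond 2 GiB on
every platform, including Windows (a sketch; the external file name is hypothetical):

    hid_t   dcpl   = H5Pcreate(H5P_DATASET_CREATE);
    HDoff_t offset = (HDoff_t)4 * 1024 * 1024 * 1024;   /* 4 GiB, larger than a 32-bit off_t */

    /* Map the dataset's raw data to "raw.bin", starting 4 GiB into that file */
    H5Pset_external(dcpl, "raw.bin", offset, (hsize_t)(1024 * 1024));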
- Relaxed behavior of H5Pset_page_buffer_size() when opening files
This API call sets the size of a file's page buffer cache. This call
was extremely strict about matching its parameters to the file strategy
and page size used to create the file, requiring a separate open of the
file to obtain these parameters.
These requirements have been relaxed when using the fapl to open
a previously-created file:
* When opening a file that does not use the H5F_FSPACE_STRATEGY_PAGE
strategy, the setting is ignored and the file will be opened, but
without a page buffer cache. This was previously an error.
* When opening a file that has a page size larger than the desired
page buffer cache size, the page buffer cache size will be increased
to the file's page size. This was previously an error.
The behavior when creating a file using H5Pset_page_buffer_size() is
unchanged.
Fixes GitHub issue #3382
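A minimal sketch of the relaxed open behavior (the file name is hypothetical; if the
file was created with a page size larger than the request, the cache size is simply
raised to match):

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Request a 64 KiB page buffer cache; 0/0 keep the default metadata/raw-data minimums */
    H5Pset_page_buffer_size(fapl, 64 * 1024, 0, 0);

    /* If "existing.h5" was not created with the H5F_FSPACE_STRATEGY_PAGE strategy,
     * the open now succeeds without a page buffer cache instead of failing. */
    hid_t file_id = H5Fopen("existing.h5", H5F_ACC_RDONLY, fapl);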
- Added support for _Float16 16-bit half-precision floating-point datatype
Support for the _Float16 C datatype has been added on platforms where:
- The _Float16 datatype and its associated macros (FLT16_MIN, FLT16_MAX,
FLT16_EPSILON, etc.) are available
- A simple test program that converts between the _Float16 datatype and
other datatypes with casts can be successfully compiled and run at
configure time. Some compilers appear to be buggy or feature-incomplete
in this regard and will generate calls to compiler-internal functions
for converting between the _Float16 datatype and other datatypes, but
will not link these functions into the build, resulting in build
failures.
The following new macros have been added:
H5_HAVE__FLOAT16 - This macro is defined in H5pubconf.h and will have
the value 1 if support for the _Float16 datatype is
available. It will not be defined otherwise.
H5_SIZEOF__FLOAT16 - This macro is defined in H5pubconf.h and will have
a value corresponding to the size of the _Float16
datatype, as computed by sizeof(). It will have the
value 0 if support for the _Float16 datatype is not
available.
H5_HAVE_FABSF16 - This macro is defined in H5pubconf.h and will have the
value 1 if the fabsf16 function is available for use.
H5_LDOUBLE_TO_FLOAT16_CORRECT - This macro is defined in H5pubconf.h and
will have the value 1 if the platform can
correctly convert long double values to
_Float16. Some compilers have issues with
this.
H5T_NATIVE_FLOAT16 - This macro maps to the ID of an HDF5 datatype representing
the native C _Float16 datatype for the platform. If
support for the _Float16 datatype is not available, the
macro will map to H5I_INVALID_HID and should not be used.
H5T_IEEE_F16BE - This macro maps to the ID of an HDF5 datatype representing
a big-endian IEEE 754 16-bit floating-point datatype. This
datatype is available regardless of whether _Float16 support
is available or not.
H5T_IEEE_F16LE - This macro maps to the ID of an HDF5 datatype representing
a little-endian IEEE 754 16-bit floating-point datatype.
This datatype is available regardless of whether _Float16
support is available or not.
The following new hard datatype conversion paths have been added, but
will only be used when _Float16 support is available:
H5T_NATIVE_SCHAR <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UCHAR <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_SHORT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_USHORT <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_INT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_UINT <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_LONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULONG <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_LLONG <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_ULLONG <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_FLOAT <-> H5T_NATIVE_FLOAT16 | H5T_NATIVE_DOUBLE <-> H5T_NATIVE_FLOAT16
H5T_NATIVE_LDOUBLE <-> H5T_NATIVE_FLOAT16
The H5T_NATIVE_LDOUBLE -> H5T_NATIVE_FLOAT16 hard conversion path will only
be available and used if H5_LDOUBLE_TO_FLOAT16_CORRECT has a value of 1. Otherwise,
the conversion will be emulated in software by the library.
Note that in the absence of any compiler flags for architecture-specific
tuning, the generated code for datatype conversions with the _Float16 type
may perform conversions by first promoting the type to float. Use of
architecture-specific tuning compiler flags may instead allow for the
generation of specialized instructions, such as AVX512-FP16 instructions,
if available.
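A hedged sketch of writing a small _Float16 buffer using the new macros (file_id is
assumed to be an already-open file; the dataset name is hypothetical):

    #ifdef H5_HAVE__FLOAT16
        _Float16 buf[4]  = {(_Float16)1.0f, (_Float16)2.0f, (_Float16)3.0f, (_Float16)4.0f};
        hsize_t  dims[1] = {4};

        hid_t space = H5Screate_simple(1, dims, NULL);

        /* Store little-endian IEEE 754 binary16 in the file, converting
         * from the native _Float16 type in memory */
        hid_t dset = H5Dcreate2(file_id, "half_data", H5T_IEEE_F16LE, space,
                                H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_FLOAT16, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset);
        H5Sclose(space);
    #endif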
- Made several improvements to the datatype conversion code
* The datatype conversion code was refactored to use pointers to
H5T_t datatype structures internally rather than IDs wrapping
the pointers to those structures. These IDs are needed if an
application-registered conversion function or conversion exception
function are involved during the conversion process. For simplicity,
the conversion code simply passed these IDs down and let the internal
code unwrap the IDs as necessary when needing to access the wrapped
H5T_t structures. However, this could cause a significant amount of
repeated ID lookups for compound datatypes and other container-like
datatypes. The code now passes down pointers to the datatype
structures and only creates IDs to wrap those pointers as necessary.
Quick testing showed an average ~3x to ~10x improvement in performance
of conversions on container-like datatypes, depending on the
complexity of the datatype.
* A conversion "context" structure was added to hold information about
the current conversion being performed. This allows conversions on
container-like datatypes to be optimized better by skipping certain
portions of the conversion process that remain relatively constant
when multiple elements of the container-like datatype are being
converted.
* After refactoring the datatype conversion code to use pointers
internally rather than IDs, several copies of datatypes that were
made by higher levels of the library were able to be removed. The
internal IDs that were previously registered to wrap those copied
datatypes were also able to be removed.
- Implemented optimized support for vector I/O in the Subfiling VFD
Previously, the Subfiling VFD would handle vector I/O requests by
breaking them down into individual I/O requests, one for each entry
in the I/O vectors provided. This could result in poor I/O performance
for features in HDF5 that utilize vector I/O, such as parallel I/O
to filtered datasets. The Subfiling VFD now properly handles vector
I/O requests in their entirety, resulting in fewer I/O calls, improved
vector I/O performance and improved vector I/O memory efficiency.
- Added a simple cache to the read-only S3 (ros3) VFD
The read-only S3 VFD now caches the first N bytes of a file stored
in S3 to avoid a lot of small I/O operations when opening files.
This cache is per-file and created when the file is opened.
N is currently 16 MiB or the size of the file, whichever is smaller.
Addresses GitHub issue #3381
- Added new API function H5Pget_actual_selection_io_mode()
This function allows the user to determine if the library performed
selection I/O, vector I/O, or scalar (legacy) I/O during the last HDF5
operation performed with the provided DXPL.
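For example, assuming the H5D_SELECTION_IO / H5D_VECTOR_IO / H5D_SCALAR_IO flag names
(and <stdio.h>), the mode used by a preceding H5Dread/H5Dwrite call made with dxpl can
be queried as sketched below:

    uint32_t io_mode = 0;

    H5Pget_actual_selection_io_mode(dxpl, &io_mode);
    if (io_mode & H5D_SELECTION_IO)
        printf("selection I/O was performed\n");
    else if (io_mode & H5D_VECTOR_IO)
        printf("vector I/O was performed\n");
    else if (io_mode & H5D_SCALAR_IO)
        printf("scalar (legacy) I/O was performed\n");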
- Added support for in-place type conversion in most cases
In-place type conversion allows the library to perform type conversion
without an intermediate type conversion buffer. This can improve
performance by allowing I/O in a single operation over the entire
selection instead of being limited by the size of the intermediate buffer.
Implemented for I/O on contiguous and chunked datasets when the selection
is contiguous in memory and when the memory datatype is not smaller than
the file datatype.
- Changed selection I/O to be on by default when using the MPIO file driver
- Added support for selection I/O in the MPIO file driver
Previously, only vector I/O operations were supported. Support for
selection I/O should improve performance and reduce memory use in some
cases.
- Changed the error handling for a path that is not found during the plugin search process.
Previously, while attempting to load a plugin, the HDF5 library would fail if one of the
directories in the plugin paths did not exist, even if there were more paths
to check. Instead of exiting the function with an error, the library now logs the error
and continues processing the list of paths to check.
- Implemented support for temporary security credentials for the Read-Only
S3 (ROS3) file driver.
When using temporary security credentials, one also needs to specify a
session/security token in addition to the access key id and secret access key.
This token can be specified by the new API function H5Pset_fapl_ros3_token().
The API function H5Pget_fapl_ros3_token() can be used to retrieve
the currently set token.
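A minimal sketch, assuming the H5_HAVE_ROS3_VFD feature macro and a fapl on which the
ros3 driver has already been configured via H5Pset_fapl_ros3(); the token string is a
placeholder:

    #ifdef H5_HAVE_ROS3_VFD
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* ... H5Pset_fapl_ros3() is called here with the region, access key id and
         *     secret access key of the temporary credentials ... */

        /* Add the session/security token that accompanies temporary credentials */
        H5Pset_fapl_ros3_token(fapl, "FwoGZXIvYXdzE...placeholder...");
    #endif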
- Added a Subfiling VFD configuration file prefix environment variable
The Subfiling VFD now checks for values set in a new environment
variable "H5FD_SUBFILING_CONFIG_FILE_PREFIX" to determine if the
application has specified a pathname prefix to apply to the file
path for its configuration file. For example, this can be useful
for cases where the application wishes to write subfiles to a
machine's node-local storage while placing the subfiling configuration
file on a file system readable by all machine nodes.
- Added H5Pset_selection_io(), H5Pget_selection_io(), and
H5Pget_no_selection_io_cause() API functions to manage the selection I/O
feature. This can be used to enable collective I/O with type conversion,
or it can be used with custom VFDs that support vector or selection I/O.
- Added H5Pset_modify_write_buf() and H5Pget_modify_write_buf() API
functions to allow the library to modify the contents of write buffers, in
order to avoid malloc/memcpy. Currently only used for type conversion
with selection I/O.
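A minimal sketch of the two entries above, assuming the H5D_SELECTION_IO_MODE_ON
enumerator; the resulting dxpl is then passed to H5Dwrite/H5Dread:

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

    H5Pset_selection_io(dxpl, H5D_SELECTION_IO_MODE_ON); /* request selection I/O */
    H5Pset_modify_write_buf(dxpl, (hbool_t)1);           /* library may modify the write
                                                          * buffer to avoid a malloc/memcpy */

    /* ... H5Dwrite(dset, mem_type, mem_space, file_space, dxpl, buf); ... */

    uint32_t cause = 0;
    H5Pget_no_selection_io_cause(dxpl, &cause);          /* why selection I/O was not used,
                                                          * if it was not */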
Parallel Library:
-----------------
- Added optimized support for the parallel compression feature when
using the multi-dataset I/O API routines collectively
Previously, calling H5Dwrite_multi/H5Dread_multi collectively in parallel
with a list containing one or more filtered datasets would cause HDF5 to
break out of the optimized multi-dataset I/O mode and instead perform I/O
by looping over each dataset in the I/O request. The library has now been
updated to perform I/O in a more optimized manner in this case by first
performing I/O on all the filtered datasets at once and then performing
I/O on all the unfiltered datasets at once.
- Changed H5Pset_evict_on_close so that it can be called with a parallel
build of HDF5
Previously, H5Pset_evict_on_close would always fail when called from a
parallel build of HDF5, stating that the feature is not supported with
parallel HDF5. This failure would occur even if a parallel build of HDF5
was used with a serial HDF5 application. H5Pset_evict_on_close can now
be called regardless of the library build type and the library will
instead fail during H5Fcreate/H5Fopen if the "evict on close" property
has been set to true and the file is being opened for parallel access
with more than 1 MPI process.
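For illustration, the property can now be set unconditionally; only a parallel open
with more than one MPI process will reject it (a sketch; the file name is hypothetical):

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    H5Pset_evict_on_close(fapl, (hbool_t)1);   /* no longer fails in parallel builds */

    /* Serial access succeeds; a parallel open with >1 MPI process would instead
     * fail in H5Fcreate/H5Fopen. */
    hid_t file_id = H5Fcreate("serial.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);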
Fortran Library:
----------------
- Add Fortran H5R APIs:
h5rcreate_attr_f, h5rcreate_object_f, h5rcreate_region_f,
h5ropen_attr_f, h5ropen_object_f, h5ropen_region_f,
h5rget_file_name_f, h5rget_attr_name_f, h5rget_obj_name_f,
h5rcopy_f, h5requal_f, h5rdestroy_f, h5rget_type_f
- Added Fortran H5E APIs:
h5eregister_class_f, h5eunregister_class_f, h5ecreate_msg_f, h5eclose_msg_f
h5eget_msg_f, h5epush_f, h5eget_num_f, h5ewalk_f, h5eget_class_name_f,
h5eappend_stack_f, h5eget_current_stack_f, h5eset_current_stack_f, h5ecreate_stack_f,
h5eclose_stack_f, h5epop_f, h5eprint_f (C h5eprint v2 signature)
- Added API support for Fortran MPI_F08 module definitions:
Adds support for MPI's MPI_F08 module datatypes: type(MPI_COMM) and type(MPI_INFO) for HDF5 APIs:
H5PSET_FAPL_MPIO_F, H5PGET_FAPL_MPIO_F, H5PSET_MPI_PARAMS_F, H5PGET_MPI_PARAMS_F
Ref. #3951
- Added Fortran APIs:
H5FGET_INTENT_F, H5SSEL_ITER_CREATE_F, H5SSEL_ITER_GET_SEQ_LIST_F,
H5SSEL_ITER_CLOSE_F, H5S_mp_H5SSEL_ITER_RESET_F
- Added Fortran Parameters:
H5S_SEL_ITER_GET_SEQ_LIST_SORTED_F, H5S_SEL_ITER_SHARE_WITH_DATASPACE_F
- Added Fortran Parameters:
H5S_BLOCK_F and H5S_PLIST_F
- The configuration definitions file, H5config_f.inc, is now installed
and the HDF5 version number has been added to it.
- Added Fortran APIs:
h5fdelete_f
- Added Fortran APIs:
h5vlnative_addr_to_token_f and h5vlnative_token_to_address_f
- Fixed an uninitialized error return value (hdferr) so that it
returns the error state of the h5aopen_by_idx_f API.
- Added h5pget_vol_cap_flags_f and related Fortran VOL
capability definitions.
- Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added.
- Added Fortran APIs:
h5pset_selection_io_f, h5pget_selection_io_f,
h5pget_actual_selection_io_mode_f,
h5pset_modify_write_buf_f, h5pget_modify_write_buf_f
- Added Fortran APIs:
h5get_free_list_sizes_f, h5dwrite_chunk_f, h5dread_chunk_f,
h5fget_info_f, h5lvisit_f, h5lvisit_by_name_f,
h5pget_no_selection_io_cause_f, h5pget_mpio_no_collective_cause_f,
h5sselect_shape_same_f, h5sselect_intersect_block_f,
h5pget_file_space_page_size_f, h5pset_file_space_page_size_f,
h5pget_file_space_strategy_f, h5pset_file_space_strategy_f
- Removed "-commons" linking option on Darwin, as COMMON and EQUIVALENCE
are no longer used in the Fortran source.
Fixes GitHub issue #3571
C++ Library:
------------
-
Java Library:
-------------
-
Tools:
------
- Add doxygen files for the tools
Implement the tools' usage text as pages in doxygen.
- Add option to adjust the page buffer size in tools
The page buffer cache size for a file can now be adjusted using the
--page-buffer-size=N
option in the h5repack, h5diff, h5dump, h5ls, and h5stat tools. This
will call the H5Pset_page_buffer_size() API function with the specified
size in bytes.
- Allow h5repack to reserve space for a user block without requiring a user block file
This is useful for users who want to reserve space
in the file for future use without having to supply a file whose contents are copied into the user block.
High-Level APIs:
----------------
- Added Fortran HL API: h5doappend_f
C Packet Table API:
-------------------
-
Internal header file:
---------------------
-
Documentation:
--------------
-
Support for new platforms, languages and compilers
==================================================
-
Bug Fixes since HDF5-1.14.0 release
===================================
Library
-------
- Fixed a bug with large external datasets
When performing a large I/O on an external dataset, the library would only
issue a single read or write system call. This could cause errors or cause
the data to be incorrect. These calls do not guarantee that they will
process the entire I/O request, and may need to be called multiple times
to complete the I/O, advancing the buffer and reducing the size by the
amount actually processed by read or write each time. Implemented this
algorithm for external datasets in both the read and write cases.
Fixes GitHub #4216
Fixes h5py GitHub #2394
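The pattern described above is the standard POSIX partial-I/O loop; a hedged sketch of
the read side (not the library's actual code) is:

    #include <unistd.h>

    /* Keep calling read() until the whole request is satisfied, EOF, or an error */
    ssize_t read_all(int fd, void *buf, size_t count)
    {
        char  *p    = (char *)buf;
        size_t left = count;

        while (left > 0) {
            ssize_t n = read(fd, p, left);
            if (n < 0)
                return -1;       /* error */
            if (n == 0)
                break;           /* EOF   */
            p    += n;           /* advance the buffer                     */
            left -= n;           /* reduce the size by the amount consumed */
        }
        return (ssize_t)(count - left);
    }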
- Fixed a bug in the Subfiling VFD that could cause a buffer over-read
and memory allocation failures
When performing vector I/O with the Subfiling VFD, making use of the
vector I/O size extension functionality could cause the VFD to read
past the end of the "I/O sizes" array that is passed in. When an entry
in the "I/O sizes" array has the value 0 and that entry is at an array
index greater than 0, this signifies that the value in the preceding
array entry should be used for the rest of the I/O vectors, effectively
extending the last valid I/O size across the remaining entries. This
allows an application to save a bit on memory by passing in a smaller
"I/O sizes" array. The Subfiling VFD didn't implement a check for this
functionality in the portion of the code that generates I/O vectors,
causing it to read past the end of the "I/O sizes" array when it was
shorter than expected. This could also result in memory allocation
failures, as the nearby memory allocations are based off the values
read from that array, which could be uninitialized.
- Fixed H5Rget_attr_name to return the length of the attribute's name
without the null terminator
H5Rget_file_name and H5Rget_obj_name both return the name's length
without the null terminator. H5Rget_attr_name now behaves consistently
with the other two APIs. Going forward, all HDF5 APIs that retrieve
character strings will be modified or written to report string lengths
in this manner.
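With this change, the usual two-call pattern allocates length + 1 bytes for the
terminator (a sketch; ref is an H5R_ref_t attribute reference obtained elsewhere and
<stdlib.h> is assumed to be included):

    ssize_t len  = H5Rget_attr_name(&ref, NULL, 0);   /* length without the NUL terminator */
    char   *name = malloc((size_t)len + 1);           /* +1 for the terminator             */

    H5Rget_attr_name(&ref, name, (size_t)len + 1);    /* fills and NUL-terminates          */
    /* ... use name ... */
    free(name);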
- Fixed library to allow usage of page buffering feature for serial file
access with parallel builds of HDF5
When HDF5 is built with parallel support enabled, the library would previously
disallow any usage of page buffering, even if a file was not opened with
parallel access. The library now allows usage of page buffering for serial
file access with parallel builds of HDF5. Usage of page buffering is still
disabled for any form of parallel file access, even if only 1 MPI process
is used.
- Fixed a leak of datatype IDs created internally during datatype conversion
Fixed an issue where the library could leak IDs that it creates internally
for compound datatype members during datatype conversion. When the library's
table of datatype conversion functions is modified (such as when a new
conversion function is registered with the library from within an application),
the compound datatype conversion function has to recalculate data that it
has cached. When recalculating that data, the library was registering new
IDs for each of the members of the source and destination compound datatypes
involved in the conversion process and was overwriting the old cached IDs
without first closing them. This would result in use-after-free issues due
to multiple IDs pointing to the same internal H5T_t structure, as well as
crashes due to the library not gracefully handling partially initialized or
partially freed datatypes on library termination.
Fixes h5py GitHub #2419
- Fixed the H5Requal function to actually compare the reference pointers
Fixed an issue with H5Requal always returning true because the
function was only comparing ref2_ptr to itself.
- Fixed an infinite loop when closing the library after running h5dump on a user-provided test file
The library's metadata cache calls the "get_final_load_size" client callback
to find out the actual size of the object header. When the size obtained
exceeds the file's EOA, an error is thrown, but the object header structure
allocated through the client callback is not freed, causing the
issue described.
(1) Free the structure allocated in the object header client callback after
saving the needed information in udata. (2) Deserialize the object header
prefix in the object header's "deserialize" callback regardless.
Fixes GitHub #3790
- Fixed many (future) CVE issues
A partner organization corrected many potential security issues, which
were fixed and reported to us before submission to MITRE. These do
not have formal CVE issues assigned to them yet, so the numbers assigned
here are just placeholders. We will update the HDF5 1.14 CVE list (link
below) when official MITRE CVE tracking numbers are assigned.
These CVE issues are generally of the same form as other reported HDF5
CVE issues, and rely on the library failing while attempting to read
a malformed file. Most of them cause the library to segfault and will
probably be assigned "medium (~5/10)" scores by NIST, like the other
HDF5 CVE issues.
The issues that were reported to us have all been fixed in this release,
so HDF5 will continue to have no unfixed public CVE issues.
NOTE: HDF5 versions earlier than 1.14.4 should be considered vulnerable
to these issues and users should upgrade to 1.14.4 as soon as
possible. Note that it's possible to build the 1.14 library with
HDF5 1.8, 1.10, etc. API bindings for people who wish to enjoy
the benefits of a more secure library but don't want to upgrade
to the latest API. We will not be bringing the CVE fixes to earlier
versions of the library (they are no longer supported).
LIST OF CVE ISSUES FIXED IN THIS RELEASE:
* CVE-2024-0116-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5D__scatter_mem resulting in causing denial of service or potential
code execution
* CVE-2024-0112-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5S__point_deserialize resulting in the corruption of the
instruction pointer and causing denial of service or potential code
execution
* CVE-2024-0111-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5T__conv_struct_opt resulting in causing denial of service or
potential code execution
* CVE-2023-1208-002
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5O__mtime_new_encode resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1208-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5O__layout_encode resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1207-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5O__dtype_encode_helper causing denial of service or potential
code execution
* CVE-2023-1205-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5VM_array_fill resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1202-002
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5T__get_native_type resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1202-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5T__ref_mem_setnull resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1130-001
HDF5 library versions <=1.14.3 contain a heap buffer overflow in
H5T_copy_reopen resulting in the corruption of the instruction
pointer and causing denial of service or potential code execution
* CVE-2023-1125-001
HDF5 versions <= 1.14.3 contain a heap buffer overflow in
H5Z__nbit_decompress_one_byte caused by the earlier use of an
uninitialized pointer. This may result in denial of service or
potential code execution