- Internal CI/CD release fix
- Fixed an OOM bug that occurred when using a partition file with a large value for num-parts
- Upgraded to use Spark 3.5.1.
- Upgraded to use Spark 3.4.2.
- Added Java 11 as the minimally required prerequisite to run CDM jobs.
- Code test & coverage changes
- Upgraded to use Scala 2.13
- Added support for Spark 3.4.1 and SCC 3.4.1, and began automated testing using the latest Cassandra® 4.x series.
- Improved unit test coverage
- Added support for the vector CQL data type
- Allowed reserved keywords to be used as Target column names
- In rare edge situations, counter tables with existing data in Target can have null values in Target. This release handles null values in the Target counter table transparently.
- Counter table columns usually start with zeros, but in rare edge situations they can have null values. This release handles null values in counter tables transparently.
- Fixed Docker build
- Documentation fixes in the README & properties file
- Config namespace fixes
- Refactored exception handling and the loading of token-range filters to use the same Migrate & DiffData jobs instead of separate jobs, reducing code & maintenance overhead (see the invocation sketch after this list)
- Captured failed partitions in a file for easier reruns
- Optimized mvn to reduce jar size
- Fixed bugs in docs
- Fixed broken Maven link in the Docker build process
- Upgraded to the latest stable Maven 3.x
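
As a quick reference for the unified jobs mentioned above, below is a minimal sketch of how the Migrate and DiffData jobs are typically launched via spark-submit. The flags, memory settings, and jar name are illustrative of the 4.x layout only; the project README documents the full command with driver/executor memory and connection settings.

```
# Sketch only: adjust the master, memory settings, properties file, and jar version to your setup.
spark-submit --properties-file cdm.properties \
  --master "local[*]" \
  --class com.datastax.cdm.job.Migrate cassandra-data-migrator-4.x.x.jar

# Validation uses the same jar and properties file; only the job class changes.
spark-submit --properties-file cdm.properties \
  --master "local[*]" \
  --class com.datastax.cdm.job.DiffData cassandra-data-migrator-4.x.x.jar
```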
This release is a major code refactor of Cassandra Data Migrator, focused on internal code structure and organization. Automated testing (both unit and integration) was introduced and incorporated into the build process. It includes all features of the previous version, but the properties specified within the configuration (.properties) file have been reorganized and renamed; therefore, the configuration file from the previous version will not work with this version.
New features were also introduced with this release, on top of the 3.4.5 version.
- New features:
  - Column renaming: Column names can differ between Origin and Target
  - Migrate UDTs across keyspaces: UDTs can be migrated from Origin to Target, even when the keyspace names differ
  - Data Type Conversion: Some predefined Codecs support type conversion between Origin and Target; custom Codecs can be added
  - Separate Writetime and TTL configuration: Writetime columns can differ from TTL columns
  - Subset of columns can be specified with Writetime and TTL: Not all eligible columns need to be used to compute the origin value
  - Automatic RandomPartitioner min/max: Partition min/max values no longer need to be manually configured
  - Populate Target columns with constant values: New columns can be added to the Target table, and populated with constant values
  - Explode Origin Map Column into Target rows: A Map in Origin can be expanded into multiple rows in Target when the Map key is part of the Target primary key
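
To make a few of these features concrete, here is a sketch of what the corresponding entries in a cdm.properties file might look like. The property keys below are illustrative approximations of the 4.x naming scheme, not authoritative names; the sample properties files shipped with the release document the exact keys and defaults.

```properties
# Illustrative sketch only -- key names approximate the 4.x naming scheme and may
# differ from the shipped defaults; consult the sample .properties files in the repo.

# Column renaming: map an Origin column name to a different Target column name
spark.cdm.schema.origin.column.names.to.target    old_name:new_name

# Populate a Target column with a constant value
spark.cdm.feature.constantColumns.names           migrated_by
spark.cdm.feature.constantColumns.values          'cdm'

# Explode an Origin map column into Target rows (the map key becomes part of the Target primary key)
spark.cdm.feature.explodeMap.origin.name          attributes
spark.cdm.feature.explodeMap.target.name.key      attribute_name
spark.cdm.feature.explodeMap.target.name.value    attribute_value
```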
Previous releases of the project have not been documented in this file.