There are two different ways of handling database deployment: state-based and migration-based.
A lot of frameworks have migrations built in, but the problem seems to be that they only support a subset of the functionality we need, typically only schemas and indexes. For example, Entity Framework has migrations baked in, but it only supports schema and index migrations. You can write your own migrations in pure SQL and put them next to the autogenerated files, but that is not desirable for us.
Microsoft has SQL Server Data Tools (SSDT), which supports a state-based approach. The problem with the state-based approach arises if we want to do some kind of data migration in a release. This is possible in SSDT, but the data migration gets "bound" to that release, so if you update your database across more than one release, you have to apply all the releases in between sequentially. This is not desirable either.
Further reading on the two approaches:
- Database versioning best practices
- State vs migration-driven database delivery
- Critiquing two different approaches to delivering databases: Migrations vs state
Based on the above, we chose to go down the migration-based path.
How DBMigrator works:
- Validation (TODO: hash validation)
- Diff
- Upgrade
- Rollback
To create a new version, run DBMigrator with parameters as follows:
-newversion "$version"
This creates a new folder named "$version" in the folder where the executable is located.
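For example, assuming the executable is called DBMigrator.exe and the next release is 1.2.0 (both hypothetical values):
.\DBMigrator.exe -newversion "1.2.0"
This would leave an empty folder named 1.2.0 next to the executable.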
To add a new feature, run NewFeature.ps1 as follows:
NewFeature.ps1 "$newfeaturename"
This creates a new folder named "$newfeaturename" in the folder where NewFeature.ps1 is located. Inside this folder is the following folder structure:
- Migrations
- Migrate
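For example, with a hypothetical feature name of AddCustomerTable:
.\NewFeature.ps1 "AddCustomerTable"
This would create the following folders next to the script:
- AddCustomerTable\Migrations
- AddCustomerTable\Migrate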
To upgrade a database, run DBMigrator with parameters as follows:
upgrade "$servername" "$database" "$username" "$password" "$toVersionString"
where $toVersionString is optional and defaults to the latest version.
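For example, to upgrade a hypothetical database AppDb on server SQL01 to version 1.2.0 (all example values, including the credentials):
.\DBMigrator.exe upgrade "SQL01" "AppDb" "deployuser" "p@ssw0rd" "1.2.0"
Leaving out the last argument would upgrade AppDb to the latest version instead.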
To roll back a database, run DBMigrator with parameters as follows:
rollback "$servername" "$database" "$username" "$password" "$toVersionString"
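For example, to roll the same hypothetical database back to version 1.1.0:
.\DBMigrator.exe rollback "SQL01" "AppDb" "deployuser" "p@ssw0rd" "1.1.0"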