Liquibase - How to skip changesets that have been executed

I have a Dropwizard application backed by a MySQL database, and I am using the Liquibase wrapper for database migrations.
To start with, I used the 'db dump' command to autogenerate the migrations.xml file for me.
Now I am refactoring the database, and I want to be able to update specific column names and names of tables.
I use preconditions like the following to skip the 'createTable' commands for tables that have already been generated:
<preConditions onFail="MARK_RAN">
    <not>
        <tableExists tableName="myTable" />
    </not>
</preConditions>
How do I now skip the execution of the primary key and foreign key constraints? Is there a precondition at the changelog level that I can use to indicate "skip already executed changesets"? Or do I just create a new migrations.xml? Will this delete the existing data?

See the documentation for preconditions: http://www.liquibase.org/documentation/preconditions.html
You can't tell Liquibase to skip all changesets that have already been executed, but you can simply exclude that file from the changelog. Save the old migrations.xml in some other folder (an archive, for example), because someday you may need to create the database structure from scratch, and you will need those changesets.
In other words, just create a new .xml file with the changesets you want executed. Liquibase won't delete any data; it works only with the XML files you give it.
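If you would rather keep the old changesets in place and guard the key constraints too, the same MARK_RAN trick works for them. Here is a minimal sketch (the table, column, and constraint names are hypothetical), using the foreignKeyConstraintExists precondition:

<changeSet id="add-fk-orders-app" author="me">
    <preConditions onFail="MARK_RAN">
        <not>
            <foreignKeyConstraintExists foreignKeyName="fk_orders_app"
                                        foreignKeyTableName="orders"/>
        </not>
    </preConditions>
    <addForeignKeyConstraint constraintName="fk_orders_app"
                             baseTableName="orders" baseColumnNames="app_id"
                             referencedTableName="App" referencedColumnNames="id"/>
</changeSet>

A primaryKeyExists precondition can guard primary key changesets in the same way.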

Related

Automatically migrate JSON data to newest version of JSON schema

I have a service running on my Linux machine that reads data stored in a .json file when the machine is booting. The service then validates the incoming JSON data and modifies specific system configurations according to the data. The service is written in C++, and for the validation I'm using https://github.com/pboettch/json-schema-validator.
In development it was easy to modify the JSON schema and just adapt the data manually. I've started to use semantic versioning for my JSON schema and include it the following way:
JSON schema:
{
    "$id": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    // Start of Schema definition
}
JSON data:
{
    "$schema": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
    // Rest of JSON data
}
With the addition of the version, I am able to check if a version mismatch exists before validating.
What I am looking for is a way to automatically migrate the JSON data to match the newer schema version if a version mismatch is identified. Is there any way to achieve this automatically, or is the only way to manually edit the JSON data to match the schema?
Since I plan on releasing this as open source, I would really like to include some form of automatic migration, so that when a version mismatch is identified I can ask users whether they want to migrate to the newest schema version instead of throwing an error.
What you're asking for is something that will need to make assumptions in order to work.
This is an age-old problem, and it is similar for databases. Schema migrations can be generated for many simple changes, but this is not viable if you also wish to translate the existing data automatically.
Let's look at a basic example: you rename a field.
How would a tool know you've renamed a field versus removed an old one and added a new one? Essentially, it cannot.
So, you need to write your migrations by hand.
You could use JSON transformation tools like jq or fx to create migration scripts without writing them in code, which may or may not be preferable. (jq has a steeper learning curve, but it's also very powerful.)
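As a minimal sketch of such a hand-written migration (the field names and version numbers are hypothetical), a jq script that renames a field and stamps the new schema version could look like this:

# Rename "hostname" to "host" and bump the declared schema version.
jq '.host = .hostname
    | del(.hostname)
    | .["$schema"] = "https://my-company.org/schemas/config/0.2.0/config.schema.json"' \
    config.json > config.migrated.json

You would keep one such script per schema version step and apply them in sequence until the data matches the current schema.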

TEIID - Importing DDL into VDB DDL

Currently my VDB DDL file is getting quite big. I want to split it into different files using the following:
IMPORT FROM REPOSITORY "DDL-FILE"
INTO test OPTIONS ("ddl-file" '/path/to/schema1.ddl')
However, this does not seem to work.
Can the DDL file path be relative, and if so, how?
Can the schema test be VIRTUAL?
Does "DDL-FILE" refer to "ddl-file"?
What should I put in my main VDB DDL, and what should I put in my extra DDLs? Should the extra DDLs contain server configuration details, or should they be defined as a VDB?
I would like to see a working example of how to use this.
This will be used in a Teiid Spring Boot project, where you can only load one main VDB file; it is not workable to have one very large DDL file.
I tried multiple approaches, but it does not seem to work, either giving me a null pointer exception with no error codes or error codes that tell me nothing.
Also the syntax in Teiid 9.3 seems different:
IMPORT FOREIGN SCHEMA public
FROM REPOSITORY DDL-FILE
INTO test OPTIONS ("ddl-file" '/path/to/schema.ddl')
This feature is currently not implemented in Teiid Spring Boot. The issue is captured in https://issues.redhat.com/browse/TEIIDSB-219
Update: I added the needed code to master; it should be available with the 1.7 release. Meanwhile, you can build the master branch and test it out.

Database Migration from one version to another using Liquibase

I rolled out the first version of my application, and a Postgres server is set up for it.
I am planning to roll out the second version of my application, which has structural changes to my tables.
For example: I had an App table with a column called version; now I need another column called releaseVersion, and I have to apply an ALTER to add it. In such a case, how can I use Liquibase to generate/apply the migration script?
Is Liquibase capable of such a migration?
In short, for my first version I created my table using the DDL
CREATE TABLE App (version varchar); -- I manually generated this using Liquibase offline mode and my metadata.
Now I have my db with the above column.
Now I need to generate the ALTER to add the column using Liquibase, something like this:
ALTER TABLE App ADD releaseVersion varchar;
Is this possible using Liquibase, given that it is the industry standard for migrations?
I tried liquibase:diff, but it is only capable of creating a difference changelog from two databases (a target db and a base db). In my case, there is only a production database.
Yes, it's possible.
Create a changeSet like:
<changeSet author="foo" id="bar">
    <preConditions onFail="MARK_RAN">
        <and>
            <columnExists tableName="App" columnName="version"/>
            <not>
                <columnExists tableName="App" columnName="releaseVersion"/>
            </not>
        </and>
    </preConditions>
    <renameColumn tableName="App" oldColumnName="version" newColumnName="releaseVersion" columnDataType="varchar(100)"/>
</changeSet>
and apply it using the liquibase update command.
If you need to just add a new column, then your changeSet will look like this:
<changeSet id="foo" author="bar">
    <preConditions onFail="MARK_RAN">
        <not>
            <columnExists tableName="App" columnName="releaseVersion"/>
        </not>
    </preConditions>
    <addColumn tableName="App">
        <column name="releaseVersion" type="varchar(100)"/>
    </addColumn>
</changeSet>
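For reference, applying the changelog from the command line looks something like the following; the connection details are placeholders:

liquibase --changeLogFile=changelog.xml \
    --url=jdbc:postgresql://localhost:5432/mydb \
    --username=myuser --password=secret \
    update

Liquibase records each applied changeset in its DATABASECHANGELOG table, so running update again will not re-execute changesets that have already run.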

What is the structure.sql used for?

I'm curious what the point of the structure.sql file is. It seems to be created and updated every time Rails migrations are run, so it appears to be a textual representation of our database schema. What else can it be used for?
When one runs structure:load, what does it do? What does it mean to load a structure file into a database? Why would you need to do that?
Should one be committing the structure.sql file?
It seems like your Rails app is configured to use the SQL schema format:
# config/application.rb
...
config.active_record.schema_format = :sql
...
structure.sql is used in place of schema.rb.
Running db:structure:load (or db:schema:load) will load your entire database schema. You only need to do this when bringing up a new app instance from scratch. After a while, your migration files will become quite lengthy, and it will be better to do a load first and then a migration when bringing up a new app instance.
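As a sketch, bootstrapping a fresh app instance then looks something like this (a rake task in older Rails versions, a rails task in newer ones):

# Load the dumped structure first, then apply any migrations added since the dump.
bin/rails db:structure:load
bin/rails db:migrate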

How to detect and respond to a database change (INSERT) from a django project?

I am setting up our project to integrate with a shipping platform called Endicia, which has the ability to insert new rows into our database when a package is shipped.
How can I detect from Python when a new row has been inserted?
My solution for now would be to query the DB every 30 seconds or so for new rows. Is there another solution to send a signal from Postgres to Python?
You'd set up a custom command that is run by the manage.py file.
You'd put it in the `yourapp/management/commands/` folder. Make sure to add an __init__.py file to both the management and commands folders, or the command won't work. Then you create the code for the custom command, as in the sketch below.
Then, see this related question about running a shell script when changes are made to a Postgres database. The answer there was to use PL/sh. You'll need to figure that part out on your own, but basically, however you do it, the end result is that the script should call something like /path/to/app/manage.py command_name
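As a minimal sketch of such a command (the app, command, and model names are hypothetical), the shell script triggered from Postgres would then invoke manage.py process_shipments:

# yourapp/management/commands/process_shipments.py
from django.core.management.base import BaseCommand

from yourapp.models import Shipment  # hypothetical model


class Command(BaseCommand):
    help = "Respond to shipment rows inserted by Endicia"

    def handle(self, *args, **options):
        # The 'processed' flag is an assumption for this sketch; any
        # marker that distinguishes new rows from handled ones works.
        for shipment in Shipment.objects.filter(processed=False):
            ...  # react to the new row here
            shipment.processed = True
            shipment.save()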