I rolled out the first version of my application, and a Postgres server is set up for it.
I am planning to roll out the second version of my application, which has structural changes in my tables.
For example: I had an App table with a column called version; now I have another column called releaseVersion, and I have to apply an ALTER to add this column. In such a case, how can I use Liquibase to generate/apply the migration script?
Is Liquibase capable of such a migration?
In short, for my first version I created my table using this DDL:
CREATE TABLE App (version varchar); -- I generated this manually using Liquibase offline mode and my metadata.
Now my database has the table with the above column.
I need Liquibase to generate the ALTER that adds the new column, something like this:
ALTER TABLE App ADD releaseVersion varchar;
Is this possible with Liquibase, given that it is the industry standard for migrations?
I used liquibase:diff, but it is only capable of creating a difference changelog from two databases (a target DB and a base DB). In my case, there is only the production database.
Yes, it's possible.
Create a changeSet like:
<changeSet author="foo" id="bar">
    <preConditions onFail="MARK_RAN">
        <and>
            <columnExists tableName="App" columnName="version"/>
            <not>
                <columnExists tableName="App" columnName="releaseVersion"/>
            </not>
        </and>
    </preConditions>
    <renameColumn tableName="App" oldColumnName="version" newColumnName="releaseVersion" columnDataType="varchar(100)"/>
</changeSet>
and apply it using the liquibase update command.
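For reference, on PostgreSQL a renameColumn changeSet like the one above boils down to SQL along these lines (a rough sketch; the actual identifier quoting and schema prefix depend on your configuration):
-- Approximate SQL produced by the renameColumn changeSet above (PostgreSQL)
ALTER TABLE App RENAME COLUMN version TO releaseVersion;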
If you need to just add a new column, then your changeSet will look like this:
<changeSet id="foo" author="bar">
    <preConditions onFail="MARK_RAN">
        <not>
            <columnExists tableName="App" columnName="releaseVersion"/>
        </not>
    </preConditions>
    <addColumn tableName="App">
        <column name="releaseVersion" type="varchar(100)"/>
    </addColumn>
</changeSet>
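On PostgreSQL, this addColumn changeSet translates to roughly the ALTER statement from the question (again a sketch; identifier quoting in the generated SQL may differ):
-- Approximate SQL produced by the addColumn changeSet above (PostgreSQL)
ALTER TABLE App ADD COLUMN releaseVersion varchar(100);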
Related
Currently my VDB DDL file is getting quite big. I want to split it into different files using the following:
IMPORT FROM REPOSITORY "DDL-FILE"
INTO test OPTIONS ("ddl-file" '/path/to/schema1.ddl')
However, this does not seem to work.
Can the DDL file path be relative, and if so, how?
Can the schema test be VIRTUAL?
Does "DDL-FILE" refer to "ddl-file"?
What should I put in my main VDB DDL, and what should I put in the extra DDLs? Should the extra DDLs contain server configuration details, or should they be defined as a VDB?
I would like to see a working example on how to use this.
This will be used in a Teiid Spring Boot project where you can only load one main VDB file. It is not workable to have one very large DDL file.
I tried multiple approaches, but it does not seem to work, either giving me a null pointer exception with no error codes or error codes that tell me nothing.
Also the syntax in Teiid 9.3 seems different:
IMPORT FOREIGN SCHEMA public
FROM REPOSITORY DDL-FILE
INTO test OPTIONS ("ddl-file" '/path/to/schema.ddl')
This feature is currently not implemented in Teiid Spring Boot. This issue is captured in https://issues.redhat.com/browse/TEIIDSB-219
Update: I added the needed code to master; it should be available with the 1.7 release. In the meantime, you can build the master branch and test it out.
I'm using Siddhi to create an app which also interacts with a PostgreSQL DB. Although I'm not sure, I believe there is a bug when making multiple updates on the same PG table within a single event (i.e. upon receiving an event, update a record in the table and then create another one in the same table); it seems the batch updates are causing some problems. So, I just want to give it a try after disabling batchUpdate (it is enabled by default). I just don't know how to configure this using the siddhi-sdk (via the IntelliJ plugin). There are two related tickets:
https://github.com/wso2-extensions/siddhi-store-rdbms/issues/43
https://github.com/wso2/product-sp/issues/472
Until these are documented, I'd like to get a quick response on how to set these fields.
When batchEnable is set to true, it performs the insert/update operations on a batch of events instead of performing those operations on each and every single event. Simply put, this was introduced to improve performance.
The default value of this parameter is currently set to "true".
However, batchEnable is configured through a system parameter called "{{RDBMS-Name}}.batchEnable", which has to be set in the WSO2 Stream Processor's deployment.yaml.
If you want to override this property in Product-SP, please find the steps below.
Open the deployment.yaml file located in {Product-SP-Home}/conf/editor/
Insert the following lines in the file.
siddhi:
  extensions:
    extension:
      name: store
      namespace: rdbms
      properties:
        PostgreSQL.batchEnable: true
However, there is currently no way to override those system configurations at the Siddhi app level. Since you are using the SDK, what you can do is change the default value of the above parameter to "false".
Please find the steps below to do it.
Find the siddhi-store-rdbms-4.x.xx.jar file in the Siddhi SDK. It is located in {siddhi-sdk-home}/lib/.
Open the jar file using an archive manager and open the rdbms-table-config.xml file located inside it with a text editor.
Set false in the <batchEnable>true</batchEnable> element under the <database name="PostgreSQL"> tag and save it.
Thanks Raveen. With a simple dash (-) before "extension", I was able to set the config:
siddhi:
  extensions:
    - extension:
        name: store
        namespace: rdbms
        properties:
          PostgreSQL.batchEnable: false
How do we manage schema migrations for Google BigQuery? We have used Liquibase and Flyway in the past. What kind of tools can we use to manage schema modifications and the like (e.g. adding a new column) across dev/staging environments?
I found an open-source framework for BigQuery schema migration:
https://github.com/medjed/bigquery_migration
One more solution:
https://robertsahlin.com/automatic-builds-and-version-control-of-your-bigquery-views/
P.S. In Flyway, someone opened a ticket to support BigQuery.
Flyway, a very popular database migration tool, now offers support for BigQuery as a beta, while pending certification.
You can get access to the beta version here: https://flywaydb.org/documentation/database/big-query after answering a short survey.
I've tested it from the command line and it works great! It took me about an hour to get familiar with Flyway's configuration, and I'm now calling it with a yarn command.
Here's an example for a NodeJS project with the following file structure:
package.json
flyway/
  <SERVICE_ACCOUNT_JSON_FILE>
  flyway.conf
  migrations/
    V1_<YOUR_MIGRATION>.sql
package.json
{
  ...
  "scripts": {
    ...
    "migrate": "flyway -configFiles=flyway/flyway.conf migrate"
  },
  ...
}
and flyway.conf:
flyway.url=jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<YOUR_PROJECT_ID>;OAuthType=0;OAuthServiceAcctEmail=<SERVICE_ACCOUNT_NAME>;OAuthPvtKeyPath=flyway/<SERVICE_ACCOUNT_JSON_FILE>;
flyway.schemas=<YOUR_DATASET_NAME>
flyway.user=
flyway.password=
flyway.locations=filesystem:./flyway/migrations
flyway.baselineOnMigrate=true
Then you can just call yarn migrate any time you have new migrations to apply.
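For completeness, a first migration file could look something like this (a hypothetical sketch; the dataset, table, and column names are placeholders, and BigQuery Standard SQL DDL is assumed):
-- flyway/migrations/V1__create_example_table.sql (hypothetical)
CREATE TABLE IF NOT EXISTS `<YOUR_DATASET_NAME>.example` (
  id INT64,
  name STRING,
  created_at TIMESTAMP
);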
I created an adapter for Sequel, sequel-bigquery, so we could manage our BigQuery database schema as a set of Ruby migration files which use Sequel's DSL - the same way we do for our PostgreSQL database.
Example
# migrations/bigquery/001_create_people_table.rb
Sequel.migration do
  change do
    create_table(:people) do
      String :name, null: false
      Integer :age, null: false
      TrueClass :is_developer, null: false
      DateTime :last_skied_at
      Date :date_of_birth, null: false
      BigDecimal :height_m
      Float :distance_from_sun_million_km
    end
  end
end
require 'sequel-bigquery'
require 'logger'
db = Sequel.connect(
  adapter: :bigquery,
  project: 'your-gcp-project',
  database: 'your_bigquery_dataset_name',
  location: 'australia-southeast2',
  logger: Logger.new(STDOUT),
)
Sequel.extension(:migration)
Sequel::Migrator.run(db, 'migrations/bigquery')
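For orientation, the migration above corresponds roughly to the following BigQuery DDL (a sketch only; the exact type mapping, e.g. DateTime to TIMESTAMP, is decided by the adapter):
-- Rough BigQuery equivalent of the Sequel migration above (assumed type mapping)
CREATE TABLE `your_bigquery_dataset_name.people` (
  name STRING NOT NULL,
  age INT64 NOT NULL,
  is_developer BOOL NOT NULL,
  last_skied_at TIMESTAMP,
  date_of_birth DATE NOT NULL,
  height_m NUMERIC,
  distance_from_sun_million_km FLOAT64
);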
According to the BQ docs, you can add a column to the schema without any additional process.
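For instance, a simple column addition can be expressed directly in BigQuery DDL (a sketch against the example.dataset.table used in the query below; the column name g is purely illustrative):
ALTER TABLE `example.dataset.table`
  ADD COLUMN IF NOT EXISTS g STRING;  -- hypothetical new column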
For more complex transformations, if the change can be expressed in a SQL query, you can just run that query with the destination table set to the source table (although I would suggest creating a backup of the table in case something goes wrong).
Example
Let's say I have a table with a column (column d) that should be an integer, but at insertion time it was written as a string. I can modify the table by setting the table itself as the destination table and running a query like:
SELECT
  a,
  b,
  c,
  CAST(d AS INT64) AS d,
  e,
  f
FROM
  `example.dataset.table`
This is an example of changing the schema, but the approach can be applied as long as you can get the desired result with a BQ query.
I have a Dropwizard application backed by a MySQL database. I am using the Liquibase wrapper for database migrations.
To start with, I used the 'db dump' command to autogenerate the migrations.xml file for me.
Now I am working on refactoring the database, and I want to be able to update specific column and table names.
I use preconditions to skip the 'createTable' commands for tables that already exist:
<preConditions onFail="MARK_RAN">
    <not>
        <tableExists tableName="myTable" />
    </not>
</preConditions>
How do I now skip the execution of the primary key and foreign key constraints? Is there a precondition at the changelog level that I can use to indicate "skip already executed changesets"? Or do I just create a new migrations.xml? Will this delete the existing data?
Check the documentation on preconditions: http://www.liquibase.org/documentation/preconditions.html
You can't tell Liquibase to skip all changesets that have already been executed, but you can simply exclude that file (keep migrations.xml in some other folder, e.g. an archive, because someday you may need to create the DB structure from scratch and you will need those changesets).
That means you can just create a new .xml file with the changesets that you want to be executed. Liquibase won't delete any data; it only works with the XML files you give it.
I use EclipseLink as my JPA 2 persistence layer, and I would like to see the values sent to the DB in my logs.
I already see the SQL queries (using <property name="eclipselink.logging.level" value="ALL" /> in my persistence.xml), but, for example in an SQL INSERT, I do not see the values, only the ? placeholders.
So, how do I see what values are sent?
You'll need to use a JDBC proxy driver like p6spy or log4jdbc to get the SQL statements issued with their values instead of the placeholders. This approach works well if you are using EclipseLink with a connection pool whose URL is derived from persistence.xml (where you can specify a JDBC URL recognized by the proxy driver instead of the actual one), but it may not be so useful in a Java EE environment (at least for log4jdbc), unless you can get the JNDI data sources to use the proxy driver.