One of my changesets had a logicalFilePath which was incorrect (two changesets accidentally had the same logicalFilePath). Upon editing the logicalFilePath in the existing changeset, liquibase update failed with a duplicate column error, which would mean that Liquibase considered the changeset not yet executed and re-ran it.
Does Liquibase identify whether a changeset has already been executed based on the 'EXECUTED' flag, or on the combination of 'id', 'author' and 'logicalFilePath'?
Also, how do I rectify the mistake in this case, where an existing changeset has an incorrect logicalFilePath?
How does it work:
From Liquibase docs:
logicalFilePath - Use to override the file name and path when creating the unique identifier of change sets. Required when moving or renaming change logs.
Liquibase calculates the MD5 checksum of the changeSet based on:
content of the changeSet;
id of the changeSet;
author of the changeSet;
path and name of your changeLog file or logicalFilePath;
If you don't change anything in your changeSet and just try to rerun it, Liquibase will look at databasechangelog.id, databasechangelog.author, databasechangelog.FILENAME and databasechangelog.MD5SUM, and if everything is the same as it was, then the changeSet will be skipped.
If you change the content of the changeSet, Liquibase will throw an error that the checksum has changed (while databasechangelog.id, databasechangelog.author and databasechangelog.FILENAME stay the same).
If you change the id, author or path (logicalFilePath), then Liquibase will think that it's a new changeSet and will try to execute it.
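As a simplified illustration of this identity rule (this is not Liquibase's exact checksum algorithm, just a sketch of the concept), you can think of it like this:

```python
import hashlib

def changeset_key(changeset_id, author, file_path):
    # Liquibase identifies a changeSet by the combination of
    # id, author, and file path (or the logicalFilePath override).
    return (changeset_id, author, file_path)

def changeset_checksum(content):
    # Simplified stand-in for the MD5SUM column: a hash of the
    # changeSet body. Liquibase's real algorithm normalizes the
    # content first, so this is only an illustration.
    return hashlib.md5(content.encode("utf-8")).hexdigest()

# Changing the logicalFilePath changes the identity, so Liquibase
# sees a "new" changeSet even though id and author are unchanged.
old_key = changeset_key("1", "alice", "db/changelog.xml")
new_key = changeset_key("1", "alice", "db/renamed-changelog.xml")
assert old_key != new_key
```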
Why do you have a problem:
Liquibase treats your changeSet as new, and since you get the error:
update failed with an error of duplicate column
I suppose you don't have any preConditions in your changeSet, or they aren't sufficient.
How do you fix it:
So, since Liquibase thinks that you're executing a new changeSet, nothing stops you from writing these preconditions:
<preConditions onFail="MARK_RAN">
<not>
<columnExists tableName="your_table" columnName="your_column"/>
</not>
</preConditions>
and because your_table.your_column already exists in the database, this changeSet will be marked with databasechangelog.EXECTYPE=MARK_RAN and skipped.
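For context, the precondition sits inside the changeSet element itself. A sketch with placeholder id/author values (the addColumn body is an assumption standing in for whatever your original change was):

```xml
<changeSet id="your-id" author="your-author" logicalFilePath="corrected/path/changelog.xml">
    <preConditions onFail="MARK_RAN">
        <not>
            <columnExists tableName="your_table" columnName="your_column"/>
        </not>
    </preConditions>
    <addColumn tableName="your_table">
        <column name="your_column" type="varchar(255)"/>
    </addColumn>
</changeSet>
```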
Problem solved!
Related
If I submit two change sets one right after the other, is the first guaranteed to complete before the second?
It is a bit unclear what you mean by submitting a changeset. If you're talking about creating changesets, this shouldn't be a problem, since creating one won't actually execute anything on your stack.
If you're talking about executing a changeset on a stack, CloudFormation will probably not accept the second changeset you have submitted since you cannot update a stack that has an in-progress status.
After the first changeset is executed successfully, the second changeset will automatically be removed as it is no longer valid for the stack after the update has been applied. Hence, if you try to use the ID of the old changeset, you will get a ChangeSetNotFound error.
What I'm sure won't happen is CloudFormation executing both changesets sequentially.
In SSDT project (using VS2017/VS2015, SSDT version 15.1.61702.140), I cannot get my project to build. The compiler keeps complaining about the sql statement in my PostDeploymentScript (yes, I have set the BuildAction property to PostDeploy). The sql statement is:
if ('$(env)' = 'dvp')
BEGIN
PRINT 'creating users for dvp'
:r .\SecurityAdditions\usersdvp.sql
END
ELSE IF ('$(env)' = 'qat')
BEGIN
PRINT 'creating users for qat'
:r .\SecurityAdditions\usersqat.sql
END
The actual error message is:
D:\My\File\Path\PostDeploymentScript.sql (lineNum, col): Error: SQL72007:
The syntax check failed 'Unexpected end of file occurred.' in the batch near:
The line number referred to in the error message is the last line (END). Any idea what's causing this?
Apparently the problem was due to the GO statements I had in the files I was referencing. Having GO statements inside an IF/ELSE block is invalid; here is an article explaining that. I was able to get it to work by removing all GO statements from the referenced files and by splitting the IF/ELSE into two IF blocks.
IF ('$(env)' = 'dvp')
BEGIN
:R .\SecurityAdditions\UsersDVP.sql
END
IF ('$(env)' = 'qat')
BEGIN
:R .\SecurityAdditions\UsersQAT.sql
END
GO
I had this same error because I forgot to end one of the scripts included in the post-deployment script with a GO statement. What makes it hard to fix is that the error points to the first line of the next script instead of the script where the GO statement is missing.
I ran into this issue while I was trying to create database users in a SQL Database project. Setting the build action to None is no use because then your script doesn't run during the deployment.
I was using a script like this to create the users:
IF NOT EXISTS (SELECT * FROM sys.sysusers WHERE name='$(DbUserName)')
BEGIN
CREATE USER [$(DbUserName)] WITH PASSWORD = '$(DbPassword)';
ALTER ROLE [db_owner] ADD MEMBER [$(DbUserName)];
END
I had two SQLCMD variables in the project file and setting a default value for one of them actually resolved the issue. It's really weird but I hope this helps some poor soul one day :)
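For reference, the default value lives on the SqlCmdVariable entry in the .sqlproj file. A sketch of roughly what that looks like (the variable name matches the script above; the default value and `SqlCmdVar__1` index are placeholders):

```xml
<ItemGroup>
  <SqlCmdVariable Include="DbUserName">
    <DefaultValue>app_user</DefaultValue>
    <Value>$(SqlCmdVar__1)</Value>
  </SqlCmdVariable>
</ItemGroup>
```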
I would like to share my experience here.
I got the same error building my SQL project, but the scenario was different and tricky.
I introduced a new column in one of my database tables, and I needed to populate that column for already existing rows in that table. It was meant to be a one-time process, so I decided to create a post-deployment script to do it. This post-deployment script
began with an IF condition to make sure it runs only once for a given database. Please note this does not allow a GO statement.
Then CREATE FUNCTION to create a temporary function. This needs a GO statement before CREATE FUNCTION, mainly because it makes changes to the database schema. This was tricky because IF does not allow a GO statement.
Then an UPDATE query using the temp function to achieve my requirement. This is fine without a GO statement.
Then DROP FUNCTION to remove the temporary function. This is also a database schema change and ideally needs a GO statement.
To handle this situation without any GO statements:
I created a variable, say @CreateFunction NVARCHAR(MAX), and set it to the whole CREATE FUNCTION statement.
I executed it with "EXEC sp_executesql @CreateFunction", which runs CREATE FUNCTION in a separate batch. I was expecting DROP FUNCTION would need the same treatment, but in my case it worked without GO or "EXEC sp_executesql", maybe because it was the last statement in the script and would run in the next batch anyway.
Everything else stayed as it was.
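The steps above can be sketched roughly as follows (table, column and function names are made up for illustration; the IF guard condition is whatever "run only once" check fits your case):

```sql
-- Guard so the script runs only once per database (condition is illustrative).
IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE NewColumn IS NULL)
BEGIN
    -- CREATE FUNCTION must be the only statement in its batch, so wrap it
    -- in dynamic SQL instead of using GO inside the IF block.
    DECLARE @CreateFunction NVARCHAR(MAX) = N'
        CREATE FUNCTION dbo.TempComputeValue (@Id INT)
        RETURNS NVARCHAR(100)
        AS
        BEGIN
            RETURN CONVERT(NVARCHAR(100), @Id);
        END';
    EXEC sp_executesql @CreateFunction;

    -- Populate the new column using the temporary function.
    UPDATE dbo.MyTable SET NewColumn = dbo.TempComputeValue(Id);

    -- Drop the helper function in its own batch as well.
    EXEC sp_executesql N'DROP FUNCTION dbo.TempComputeValue';
END
```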
Another reason this could happen is if a post-deployment script has a BEGIN statement without a corresponding END line. In such a case, any subsequent GO in another script will cause this error. I stumbled across this due to my own absent-mindedness when editing one of the post-deployment scripts.
We had a column type for an enum called enumFooType, which we had registered with \Doctrine\DBAL\Types\Type::addType().
When running vendor/bin/doctrine-module migrations:diff to generate the migration that would delete said column, an error was thrown:
[Doctrine\DBAL\DBALException]
Unknown column type "enumFooType" requested. Any Doctrine type that you use has to be registered with \Doctrine\DBAL\Types\Type::addType().
You can get a list of all the known types with \Doctrine\DBAL\Types\Type::getTypesMap().
If this error occurs during database introspection then you might have forgot to register all database types for a Doctrine Type.
Use AbstractPlatform#registerDoctrineTypeMapping() or have your custom types implement Type#getMappedDatabaseTypes().
If the type name is empty you might have a problem with the cache or forgot some mapping information.
I'm guessing the error was thrown because the database has a foo_type marked with (DC2Type:enumFooType).
What is the correct way of handling these types of deletions? My first thought would be to generate a blank migration using vendor/bin/doctrine-module migrations:generate and manually write the query, but I'd like a more automated way, if possible not writing anything manually.
TL;DR:
The class definition for the DBAL type enumFooType should exist before running the doctrine commands (now that I have written this line, it feels kind of obvious, like "duh!").
Long answer:
After a couple of rollbacks and trial and errors, I devised the following procedure for this kind of operations:
Delete the property of enumFooType from the entity class.
Create the migration (up to this point, the EnumFooType file still exists).
Delete the EnumFooType class that contains the definition of this dbal type.
The reason it has to be done in this order is that if you delete the type first, Doctrine won't load because the class file is missing, resulting in the exception posted in the original question.
Moreover, after you have created the migration and then deleted the type, if you ever need to roll back that change, you have to:
Restore the previous commit, so that EnumFooType exists and the property of type enumFooType is defined in the entity class.
Run the migration command to roll back.
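For reference, a minimal sketch of what such a DBAL type class might look like, i.e. the file that must still exist while the migration is generated (the class and type names are from the question; the concrete ENUM values in the SQL declaration are an assumption):

```php
<?php

use Doctrine\DBAL\Platforms\AbstractPlatform;
use Doctrine\DBAL\Types\Type;

class EnumFooType extends Type
{
    public function getName()
    {
        return 'enumFooType';
    }

    public function getSQLDeclaration(array $fieldDeclaration, AbstractPlatform $platform)
    {
        // Hypothetical values; use whatever your column actually allows.
        return "ENUM('foo', 'bar')";
    }
}

// Registered somewhere in the application's bootstrap code:
Type::addType('enumFooType', EnumFooType::class);
```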
I made a change, adding a unique constraint to a model, within the abc application and did a
./manage.py schemamigration abc --auto
That created a migration file, but as well as the expected change, the new migration file also contained a number of add_column statements adding columns which were previously added in an earlier migration (and which have already been the subject of a migrate).
I'm really puzzled as to why this has happened and what to do about it.
Will the add_column statements just be ignored if I do another migrate?
OK thanks to the #django-south irc channel I've figured this out.
This type of problem can arise when activity has taken place in different source control branches and, as a result of a merge, the dictionary of frozen models, which appears at the bottom of a south migration file, is missing some stuff which has already taken place. The result of this is that the next schemamigration tries to produce the "missing" changes.
The fix is to manually edit the migration file which was created by the schemamigration before doing migrate. This will get things back into synch.
There's some information about issues in the later part of this section : http://south.readthedocs.org/en/latest/tutorial/part5.html#team-workflow .
Thanks to carljm and maney on #django-south for helping me with this.
I'm having some difficulty getting my Django tests to run properly. I'm using nose, and I started getting an error when the migrations were being applied: a foreign key relation from table 1 to table 2 failed with the error:
django.db.utils.DatabaseError: relation "table2_column" does not exist
Looking at the way the migrations were being applied, it was clear to me that table 2 was not created before the foreign key relation was applied, so I tried to figure out how to force the dependency, and found the following article:
http://south.aeracode.org/docs/dependencies.html
I then added:
depends_on = (
("app2", "0001_inital"),
)
to my app1/0001_initial.py file.
Unfortunately now I'm getting the following error:
south.exceptions.DependsOnUnknownMigrationMigration 'app1:0001_initial' depends on unknown migration 'app2:0001_inital'.
Any ideas on how to solve this?
I'm not sure if this will solve your problem, but you can add a setting to use syncdb instead of migrations when running tests. Add the following line to your settings.py
SOUTH_TESTS_MIGRATE = False
You have a typo in the name of the migration it's depending on. It should be:
depends_on = (
("app2", "0001_initial"),
)
This dependency system worked for me, after having exactly the same issue you list here and then finding the dependency system in South's docs.
This error is also thrown if there is an error during the import of the target module: If you've got hand-constructed migrations and you're certain the file name matches your depends_on or needed_by, check the referenced file for errors.
Also, setting SOUTH_TESTS_MIGRATE to False won't fix the problem. It just means you won't see the problem until you try to use the migration.
http://south.readthedocs.org/en/latest/settings.html
(That's still useful if you want to speed up your unittests.)