Flyway 5.0.7 warning about using schema_version table - database-migration

We use the Flyway Gradle plugin to do our migrations offline (i.e. we migrate while the system is down). We recently upgraded to Flyway 5.0.7 and we see this warning now for migrations:
Could not find schema history table XXXXXXX.flyway_schema_history, but found XXXXXXX.schema_version instead. You are seeing this message because Flyway changed its default for flyway.table in version 5.0.0 to flyway_schema_history and you are still relying on the old default (schema_version). Set flyway.table=schema_version in your configuration to fix this. This fallback mechanism will be removed in Flyway 6.0.0.
(I've used the XXXXXXX to obscure the actual schema name).
So, it appears that we can suppress the warning by setting flyway.table=schema_version. But the message also says this fallback mechanism will be removed in Flyway 6.0.0.
Are we supposed to do something to make this compatible going forward? Do we have to manually rename the schema_version table to flyway_schema_history? Or is there a way to make Flyway do it? If not, what is going to happen when Flyway 6.0.0 comes out? Will it automatically migrate the data to the appropriate table name?

The default for flyway.table was changed from schema_version to flyway_schema_history. To avoid breaking existing installations that rely on the old default, Flyway also provides an automatic fallback that emits a warning.
In other words, from Flyway 5, if you do not specify the flyway.table property in your configuration, Flyway looks for the table flyway_schema_history in the DB; if it is not found, it falls back to the table schema_version and, if that old table exists, warns with the message you are getting now. From Flyway 6, this fallback mechanism is removed: if you do not provide the flyway.table property, Flyway will look only for flyway_schema_history and, if it is not found, will not look for an existing schema_version table; instead it will create a new table named flyway_schema_history.
In Flyway 6 your existing system will run fine if you set flyway.table=schema_version; you do not need to rename the table in the DB. But if you do not set the property, then you must rename the table, because otherwise Flyway will not recognize the existing schema_version table, will treat the system as a new one, will create a flyway_schema_history table, and will start executing scripts from the beginning.
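For example, keeping the old table name is a one-line entry in flyway.conf (the Gradle plugin accepts the same table property in its flyway block):

```
# Keep using the pre-5.0 history table name.
flyway.table=schema_version
```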
Hope it helps.

On PostgreSQL I have solved it with just one migration on top:
DO $$
BEGIN
    IF (EXISTS (SELECT 1 FROM information_schema.tables
                WHERE table_schema = 'public' AND table_name = 'schema_version')
        AND EXISTS (SELECT 1 FROM information_schema.tables
                    WHERE table_schema = 'public' AND table_name = 'flyway_schema_history'))
    THEN
        DROP TABLE schema_version;
    END IF;
    IF (EXISTS (SELECT 1 FROM information_schema.tables
                WHERE table_schema = 'public' AND table_name = 'schema_version')
        AND NOT EXISTS (SELECT 1 FROM information_schema.tables
                        WHERE table_schema = 'public' AND table_name = 'flyway_schema_history'))
    THEN
        CREATE TABLE flyway_schema_history AS TABLE schema_version;
    END IF;
END
$$;
It actually works in two stages:
On the first deployment, only schema_version exists, so the second branch copies the history into the new flyway_schema_history table; the record of this migration itself still goes into the 'old' history table.
On the second deployment, both tables exist, so the first branch drops the old schema_version table. From then on, migrations are recorded in the 'new' history table and everything is finished.

It is possible to migrate from schema_version to flyway_schema_history by creating the new table alongside the old one and copying the relevant records:
DROP TABLE IF EXISTS `flyway_schema_history`;
SET character_set_client = utf8mb4;
CREATE TABLE `flyway_schema_history` (
    `installed_rank` int(11) NOT NULL,
    `version` varchar(50) DEFAULT NULL,
    `description` varchar(200) NOT NULL,
    `type` varchar(20) NOT NULL,
    `script` varchar(1000) NOT NULL,
    `checksum` int(11) DEFAULT NULL,
    `installed_by` varchar(100) NOT NULL,
    `installed_on` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `execution_time` int(11) NOT NULL,
    `success` tinyint(1) NOT NULL,
    PRIMARY KEY (`installed_rank`),
    KEY `flyway_schema_history_s_idx` (`success`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
INSERT INTO flyway_schema_history (installed_rank, version, description, type, script, checksum, installed_by, installed_on, execution_time, success)
SELECT installed_rank, version, description, type, script, checksum, installed_by, installed_on, execution_time, success
FROM schema_version;
This is the schema of flyway_schema_history as of Flyway 5.2.2. To use this script safely, I recommend migrating to that version first and then moving forward.
Please understand that this script must be executed as-is in your DB console, and that it is for MySQL only. You will have to craft your own for other databases.

I don't think the claim above is true, that if you do not specify the flyway.table property, Flyway first looks for the table flyway_schema_history and only falls back to schema_version when the new table is not found.
I have both tables present, but it still complains: "Could not find schema history table "xxxx"."flyway_schema_history", but found "xxxx"."schema_version" instead. You are seeing this message because Flyway changed its default for flyway.table in version 5.0.0 to flyway_schema_history and you are still relying on the old default (schema_version)."
This means Flyway has the logic backwards: it must first check for "schema_version", and when it finds it, it assumes there is no "flyway_schema_history".

Related

Django migration IntegrityError: invalid foreign key (but the data exists)

I am gradually updating a legacy Django application from 1.19 -> 2.2 and beyond. To upgrade to 2.2, I added on_delete=models.CASCADE to all models.ForeignKey fields that did not have it (which, apparently, I also had to do retroactively for existing migrations...).
Possibly related/unrelated to that, when I run manage.py migrate, Django throws the following error (I shortened the table/field names for brevity):
django.db.utils.IntegrityError: The row in table 'X' with primary key '3' has an invalid foreign key: X.fieldname_id contains a value '4' that does not have a corresponding value in Y__old.id.
Note in particular the __old.id suffix for the db table that Django expects to contain a row with id 4. When manually inspecting the db, the table Y does really contain a valid row with id 4! I'm assuming, to support the migration, Django is making some temporary tables suffixed with __old and somehow it is unable to migrate said data?
The db row Y in question is really simple: a char, boolean, and number column.
Edit: this seems to be related to an old Django bug with SQLite. I am not sure how to solve it. It does not seem to occur for Django 2.1.15, and starts to occur in Django 2.2.
This problem is caused by the mentioned Django bug, but if you get the migration error, your database is broken already.
When you dump the DB to SQL, you can see REFERENCES statements which point to tables ending in __old, but these tables do not actually exist:
$> sqlite3 mydb.db .dump | grep '__old'
CREATE TABLE IF NOT EXISTS "company" [...]"account_id" integer NULL REFERENCES "account__old" ("id") [...]
Fortunately, the DB can be fixed easily by removing the __old suffixes and dumping into a new database. This can be automated with sed:
sqlite3 broken.db .dump | sed 's/REFERENCES "\(.[^"]*\)__old"/REFERENCES "\1"/g' | sqlite3 fixed.db
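The same repair can be sketched with Python's standard sqlite3 module (a sketch under the same assumption that only the REFERENCES clauses are broken; the table names here are illustrative):

```python
import re
import sqlite3

def fix_old_references(broken, fixed):
    # Dump the broken database to SQL, strip the stray "__old"
    # suffixes from REFERENCES clauses, and replay the dump into a
    # fresh database -- the same idea as the sed one-liner.
    dump = "\n".join(broken.iterdump())
    repaired = re.sub(r'REFERENCES "([^"]+?)__old"', r'REFERENCES "\1"', dump)
    fixed.executescript(repaired)

# Demo with in-memory databases, mimicking the broken schema above.
broken = sqlite3.connect(":memory:")
broken.execute('CREATE TABLE "account" ("id" integer NOT NULL PRIMARY KEY)')
broken.execute('CREATE TABLE "company" ("id" integer NOT NULL PRIMARY KEY, '
               '"account_id" integer NULL REFERENCES "account__old" ("id"))')
broken.execute('INSERT INTO account VALUES (1)')
broken.execute('INSERT INTO company VALUES (1, 1)')

fixed = sqlite3.connect(":memory:")
fix_old_references(broken, fixed)
schema = fixed.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'company'").fetchone()[0]
# schema now references "account" instead of the nonexistent "account__old"
```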
It is not an ideal solution, but you can manually delete the row from the database or set the value of the foreign key to a temporary value, migrate and then restore the original value.

What happens if I drop some "special" SQLite tables

First, some background info; maybe someone can suggest a better way than what I am trying. I need to export an SQLite database into a text file. For that I have to use C++, and I chose the CppSQLite lib.
What I do is collect the CREATE queries and then export every table's data. The problem is that there are tables like sqlite_sequence and sqlite_statN. During import I cannot create these tables because they are special-purpose, so the main question: would it affect stability if these tables are gone?
Another part of the question: is there any way to export and import an SQLite database using CppSQLite or any other SQLite lib for C++?
P.S. Copying the database file is not an option in this particular situation.
Object names beginning with sqlite_ are reserved; you cannot create them directly even if you wanted to. (But you can change the contents of some of them, and you can drop the sqlite_stat* tables.)
The sqlite_sequence table is created automatically when a table with an AUTOINCREMENT column is created.
The record for the actual sequence value of a table is created when it is needed first.
If you want to save/restore the sequence value, you have to re-insert the old value.
The sqlite_stat* tables are created by ANALYZE.
Running ANALYZE after importing the SQL text would be easiest, but slow; faster would be to create an empty sqlite_stat* table by running ANALYZE on a table that will not be analyzed (such as sqlite_master), and then inserting the old records manually.
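The save-and-restore of a sequence value can be sketched as follows; Python's sqlite3 module is used here only as a driver, and in C++ the same SQL statements would go through CppSQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A table with AUTOINCREMENT makes SQLite create sqlite_sequence.
con.execute("CREATE TABLE t(x INTEGER PRIMARY KEY AUTOINCREMENT)")
con.execute("INSERT INTO t DEFAULT VALUES")

# Save the current sequence value for table t ...
seq = con.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 't'").fetchone()[0]

# ... and restore it (e.g. after re-importing the schema) by
# re-inserting the old record. A larger value is restored here to
# show that AUTOINCREMENT continues from it.
con.execute("DELETE FROM sqlite_sequence WHERE name = 't'")
con.execute("INSERT INTO sqlite_sequence VALUES ('t', ?)", (seq + 99,))
con.execute("INSERT INTO t DEFAULT VALUES")
next_id = con.execute("SELECT max(x) FROM t").fetchone()[0]  # continues at 101
```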
All this is implemented in the .dump command of the sqlite3 command-line tool (source code in shell.c):
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table t(x integer primary key autoincrement);
sqlite> insert into t default values;
sqlite> insert into t default values;
sqlite> analyze;
sqlite> .dump
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE t(x integer primary key autoincrement);
INSERT INTO "t" VALUES(1);
INSERT INTO "t" VALUES(2);
ANALYZE sqlite_master;
INSERT INTO "sqlite_stat1" VALUES('t',NULL,'2');
DELETE FROM sqlite_sequence;
INSERT INTO "sqlite_sequence" VALUES('t',2);
COMMIT;
sqlite>

How to check in SQLite3 whether the number of columns has changed or not

I am coding in C and using SQLite3 as the database. I want to ask how we can check whether the number of columns in a table has changed. The situation is that I am going to run the application with a new executable, according to which new columns will be added to the table. So when the DB is created again, the application should check whether the table schema is the same, and create the table according to the new schema if not. I am developing the application for an embedded environment (specifically for a device).
When I change the number of columns of a table in the DB and run the new executable on the device, new tables are not created because of the presence of the old tables; but when I delete the old DB and create fresh tables, the changes appear. How do I handle this situation?
Platform: Linux, gcc compiler.
Thanks in advance.
Please guide me like this (assuming the old DB is already present): first check the schema of the old DB, and if some of the tables have changed (like new columns added or deleted), then create the new DB according to that.
Use Versioning and Explicit Column References
You can make use of database versioning to help assist with this sort of problem.
Create a separate table with only one column and one record to store the database version.
Whenever you upgrade your database, set the version number in the separate table.
Design your insert queries to specify the columns.
Define default values for new columns so that old programs insert default values.
Examples
UPDATE databaseVersion SET version=2;
Version 1 Query
INSERT INTO MyTable (id, var1, var2) VALUES (2, '5', '6');
Version 2 Query
INSERT INTO MyTable (id, var1, var2, var3) VALUES (3, '5', '6', '7');
This way your queries should still be compatible on the new DB when using the old program.
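The versioning scheme above can be sketched as follows; Python's sqlite3 module stands in for the C API, and the table and column names are illustrative (matching the example queries):

```python
import sqlite3

def open_and_upgrade(path):
    # Open the DB and bring the schema up to version 2.
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS databaseVersion (version INTEGER)")
    con.execute("CREATE TABLE IF NOT EXISTS MyTable "
                "(id INTEGER PRIMARY KEY, var1 TEXT, var2 TEXT)")
    row = con.execute("SELECT version FROM databaseVersion").fetchone()
    if row is None:
        # Fresh or pre-versioning database: record version 1.
        con.execute("INSERT INTO databaseVersion VALUES (1)")
        version = 1
    else:
        version = row[0]
    if version < 2:
        # Version 2 adds var3 with a default value, so old INSERT
        # queries that do not mention var3 keep working.
        con.execute("ALTER TABLE MyTable ADD COLUMN var3 TEXT DEFAULT '7'")
        con.execute("UPDATE databaseVersion SET version = 2")
    con.commit()
    return con
```

On the first open the upgrade runs once; subsequent opens read version 2 from the table and skip the ALTER.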

Django and sqlite email authentication

I wanted to create an email-authenticated Django user model, and I basically followed the steps on this website:
http://www.micahcarrick.com/django-email-authentication.html
I also included the table-alteration code in a post_syncdb function in a management module, to make the email a unique identifier. This should work fine with MySQL. BUT, it won't work for SQLite, because SQLite's table alteration is limited and won't allow you to change that attribute OR even add a column with a unique constraint.
If there is no elegant way of doing this, then I might have to switch to MySQL.
http://www.sqlite.org/faq.html#q26
So, UNIQUE is fully supported, but you cannot add it to an existing table using ALTER TABLE. So dump the table into a new table that has the UNIQUE constraint, then drop and rename the tables. Or just dump the database, modify the dump, and reimport it.
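A minimal sketch of the dump-and-rebuild route, using Python's sqlite3 module (the auth_user schema here is illustrative, reduced to just id and email):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Existing table without the UNIQUE constraint.
con.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("INSERT INTO auth_user (email) VALUES ('a@example.com')")

# SQLite cannot ALTER an existing column to UNIQUE, so rebuild:
# create a new table with the constraint, copy the rows, swap names.
con.executescript("""
    CREATE TABLE auth_user_new (id INTEGER PRIMARY KEY, email TEXT UNIQUE);
    INSERT INTO auth_user_new SELECT id, email FROM auth_user;
    DROP TABLE auth_user;
    ALTER TABLE auth_user_new RENAME TO auth_user;
""")
# Inserting a duplicate email now raises an IntegrityError.
```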
I think, in your post_syncdb hook, you can add:
cursor.execute(
    "CREATE UNIQUE INDEX IF NOT EXISTS auth_user_email_unique "
    "ON auth_user (email COLLATE NOCASE);"
)
You may have to break this out into different blocks based on settings.DATABASES['default']['ENGINE'].

primary key declaration error while creating table

CreateL()
{
    _LIT(KSQLCountry, "CREATE TABLE Country(CountryID INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,CountryName VARCHAR(45) NOT NULL,CountryCode VARCHAR(10) NOT NULL)");
    User::LeaveIfError(iDatabase.Execute(KSQLCountry));
}
While creating the table I want to declare a primary key and a foreign key, but it shows a run-time error (it crashes) during creation of the table.
What is the right way to declare a primary key?
I don't know which DB you are using, but maybe this will help you:
http://snippets.dzone.com/posts/show/1680
Try to use COUNTER data type instead of INTEGER and AUTOINCREMENT.
Another guess: isn't that AUTO_INCREMENT with underscore?
AUTO_INCREMENT is indeed with underscore, this is the error in that SQL
Seems like you're using the old legacy Symbian DBMS and not SQLite.
The old DBMS only supports a small subset of SQL; if my memory serves me well, that includes only some basic SELECT queries.
To create tables using the old DBMS, use the C++ API, e.g.
CDbColSet* columns = CDbColSet::NewLC();
TDbCol id(_L("CountryID"), EDbColInt32);
id.iAttributes = TDbCol::EAutoIncrement | TDbCol::ENotNull;
columns->AddL(id);
columns->AddL(TDbCol(_L("CountryName"), EDbColText, 45));
columns->AddL(TDbCol(_L("CountryCode"), EDbColText, 10));
User::LeaveIfError(aDatabase.CreateTable(_L("Country"), *columns));
CleanupStack::PopAndDestroy(columns);
Or just use the more recent SQLite-backed RSqlDatabase API.