Alter tables with Lobos migrations - Clojure

Does anybody know how to alter a table (add a column) with Lobos migrations?
There is no documentation on this that I can find; any help would be greatly appreciated.

Lobos will run all pending migrations according to the lobos_migrations table in your database.
Modifying an existing table is done with alter; you can find some docs here.
Here is an example migration that adds a new column:
(defmigration add-column-test
  (up [] (alter :add (table :your_table (integer :test_column))))
  ;; dropping the column again makes the migration reversible
  (down [] (alter :drop (table :your_table (column :test_column)))))
You can also find other alter options, such as column rename, in the test source code.
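For instance, going by the shape of those tests, a column rename would be a hedged sketch along these lines (:your_table and both column names are placeholders, so double-check against the test suite):
(defmigration rename-column-test
  ;; :rename changes a column's name; :to gives the new name
  (up [] (alter :rename (table :your_table (column :test_column :to :new_column))))
  (down [] (alter :rename (table :your_table (column :new_column :to :test_column)))))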

Related

Doctrine Migration from string to Entity

I have an apparently simple task to perform: I have to convert several table columns from a string to a new entity (integer FOREIGN KEY) value.
My DB has 10 tables with a column called "app_version", which are currently VARCHAR columns. Since I'm going to do a small project refactor, I'd like to convert those VARCHAR columns to a new column containing an ID that represents the newly mapped value, so:
V1 -> ID: 1
V2 -> ID: 2
and so on.
I've prepared a Doctrine Migration (I'm using Symfony 3.4) which performs the conversion by DROPPING the old column and adding the new id column for the AppVersion table.
Of course, I need to preserve my existing data.
I know about preUp and postUp, but I can't figure out how to do this without hitting DB performance too much. I could collect the data via SELECT in preUp, store it in some PHP vars, and use them later in postUp to write the new values to the DB, but since I have 10 tables with many rows this becomes a disaster real fast.
Do you have any suggestion I could apply to make this smooth and easy?
Please don't ask why I have to do this refactor now instead of setting up the DB correctly the first time. :D
Keywords for ideas: transaction? bulk query? avoid PHP var storage? write an SQL file? anything is welcome
I feel dumb, but the solution was much simpler: I created a custom migration with all the "ALTER TABLE [table_name] DROP app_version" statements, to be executed AFTER one that simply does:
UPDATE [table_name] SET app_version_id = 1 WHERE app_version = "V1"
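For reference, a minimal sketch of that first migration in Doctrine (class and table names here are hypothetical, and I'm assuming doctrine/migrations 1.x as shipped with Symfony 3.4). The whole conversion runs as plain SQL on the server, so no row data ever passes through PHP vars:
use Doctrine\DBAL\Migrations\AbstractMigration;
use Doctrine\DBAL\Schema\Schema;

class Version20180101000000 extends AbstractMigration
{
    public function up(Schema $schema)
    {
        // Bulk-map old VARCHAR values to the new FK column, one UPDATE
        // per version; a later migration drops app_version.
        $this->addSql('UPDATE my_table SET app_version_id = 1 WHERE app_version = "V1"');
        $this->addSql('UPDATE my_table SET app_version_id = 2 WHERE app_version = "V2"');
    }

    public function down(Schema $schema)
    {
        // Reverse mapping, sketched only for completeness.
        $this->addSql('UPDATE my_table SET app_version = "V1" WHERE app_version_id = 1');
        $this->addSql('UPDATE my_table SET app_version = "V2" WHERE app_version_id = 2');
    }
}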

How do we drop partitions in Hive with a regex? Is it possible?

I am trying to run the following
alter table historical_data drop partition (my_date not rlike '[A-Za-z]');
Which gives me an Exception
org.apache.hadoop.hive.ql.parse.ParseException: line 2:69 mismatched input 'not' expecting set null in drop partition statement
I couldn't find anything similar. I did see one answer to a similar question on SO, but it doesn't work.
Any help is appreciated.
Unfortunately, regexp is not supported.
You can use the comparators < > <= >= <> = and !=; maybe that will help. See usage in this answer: https://stackoverflow.com/a/56646879/2700344
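For example, a hedged sketch of the comparator workaround (the date literal is a placeholder; note that string partition values compare lexicographically):
-- drops every partition whose my_date sorts below the literal
alter table historical_data drop partition (my_date < '2020-01-01');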
See also this Jira: Extend ALTER TABLE DROP PARTITION syntax to use all comparators
And one more Jira, not implemented yet: Extend ALTER TABLE DROP PARTITION syntax to use multiple conditions
Impala supports LIKE in drop partition:
alter table historical_data drop partition (year < 1995, last_name like 'A%');
I created this Jira for adding regexp support; please vote for it if you need the feature.

Received "ValueError: Found wrong number (0) of constraints for ..." during Django migration

While using Django 1.7 migrations, I came across a migration that worked in development, but not in production:
ValueError: Found wrong number (0) of constraints for table_name(a, b, c, d)
This is caused by an AlterUniqueTogether rule:
migrations.AlterUniqueTogether(
    name='table_name',
    unique_together=set([('a', 'b')]),
)
Reading up on bugs and such in the Django bug DB, it seems to be caused by the existing unique_together in the DB not matching the migration history.
How can I work around this error and finish my migrations?
(Postgres and MySQL Answer)
If you look at your actual table (use \d table_name in psql) and look at the indexes, you'll find an entry for your unique constraint. This is what Django is trying to find and drop, but it can't find an exact match.
For example,
"table_name_...6cf2a9c6e98cbd0d_uniq" UNIQUE CONSTRAINT, btree (d, a, b, c)
In my case, the order of the keys (d, a, b, c) did not match the constraint it was looking to drop (a, b, c, d).
I went back into my migration history and changed the original AlterUniqueTogether to match the actual order in the database.
The migration then completed successfully.
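For example, with the column names from the error above, the fix amounts to editing the old migration so the tuple order matches what the database reports:
# what the old migration recorded, which fails to match the index
migrations.AlterUniqueTogether(
    name='table_name',
    unique_together=set([('a', 'b', 'c', 'd')]),
)
# reordered to match the constraint actually present in the database
migrations.AlterUniqueTogether(
    name='table_name',
    unique_together=set([('d', 'a', 'b', 'c')]),
)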
I had a similar issue come up while I was converting a CharField into a ForeignKey. The process itself worked, but I was left with Django thinking it still needed to update the unique_together in a new migration (even though everything looked correct from inside Postgres). Unfortunately, applying this new migration would then give a similar error:
ValueError: Found wrong number (0) of constraints for program(name, funder, payee, payer, location, category)
The fix that ultimately worked for me was to comment out all the previous AlterUniqueTogether operations for that model. The manage.py migrate worked without error after that.
"unique_together in the db not matching the migration history" - Every time an index is altered on a table it checks its previous index and drops it. In your case it is not able to fetch the previous index.
Solution-
1.Either you can generate it manually
2.Or revert to code where previous index is used and migrate.Then finally change to new index in your code and run migration.(django_migration files to be taken care of)
It's also worth checking that you only have the expected number of unique indexes on the table in question.
For example, if your table has multiple unique indexes, delete the extras so that only the expected pre-migration index (usually one) is present.
To check how many unique indexes there are for a given table in PostgreSQL:
SELECT *
FROM information_schema.table_constraints AS c
WHERE c.table_name = '<table_name>'
  AND c.constraint_type = 'UNIQUE';
Just in case someone runs into this and the previous answers haven't solved it: in my case the issue was that when I modified the unique together constraint, the migration was attempted but the data didn't allow it (because the new unique together constraint was more restrictive). However, the migration managed to delete the unique together constraint from the table, leaving it in an inconsistent state. I had to migrate back to zero and re-apply the migration without data; then it went through without problems.
In summary, make sure your data will be able to accept the new constraint before you apply the migration.
1. Find the latest migration file for the respective table, find its unique_together, and replace it with the current unique constraint fields.
2. Migrate the database using ./manage.py migrate your_app_name.
3. Revert or undo the changes in the previous migration file.
In my case the problem was that the previous migration was not present in the django_migrations table. I added the missing entry and then the new migration worked.
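Restoring such an entry by hand is a single INSERT; a hypothetical example (app and migration names are placeholders):
-- mark the missing migration as already applied
INSERT INTO django_migrations (app, name, applied)
VALUES ('myapp', '0002_alter_unique_together', NOW());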
Someone may hit this issue while modifying unique_together. Basically, the table state is not consistent with the migrations; you may need to add the previous constraints manually using the MySQL shell.
In case you are using migrate with Django and there is no data in the database, you can drop the database and then run python manage.py migrate again.

South data migration in Django after modifying the model

I have a project with an existing model class:
class Disability(models.Model):
    child = models.ForeignKey(Child, verbose_name=_("Child"))
But with the recent architecture change, I have to modify it as:
class Disability(models.Model):
    child = models.ManyToManyField(Child, verbose_name=_("Child"))
Now, for this new change (I even have to modify the existing database for it), I guess data migration is the best way to do it rather than doing it manually.
I referred to this online doc
http://south.readthedocs.org/en/latest/commands.html#commands-datamigration
but it says very little about data migration and much more about schema migration.
Question 1: If I do the schema migration, will this make me lose all my previous data belonging to the old model?
Question 2: Even when I try the schema migration, it asks this:
(rats)rats#Inspiron:~/profoundis/kenyakids$ ./manage.py schemamigration web --auto
? The field 'Disability.child' does not have a default specified, yet is NOT NULL.
? Since you are removing this field, you MUST specify a default
? value to use for existing rows. Would you like to:
? 1. Quit now, and add a default to the field in models.py
? 2. Specify a one-off value to use for existing columns now
? 3. Disable the backwards migration by raising an exception.
? Please select a choice: 1
Can anyone explain the concept of, and difference between, schema and data migration, and how each can be achieved separately?
Schema and data migrations are not different options you can take to modify your table structure; they are completely different things. Data migrations are fully described in the South docs.
Here a data migration will not help you, because you need to modify your schema. And the whole point of South and other migration systems is that they allow you to do that without losing data.
South will try to do this in a transaction by moving your table data to a temporary table (I could be wrong there), then restructuring the table and adding the original data back into the new structure. Like this:
old_table -> clone -> tmp_table
old_table ->restructure
tmp_table.data -> table
South will look at the field types. If there are big changes it will ask what to do. For example, changing a text field to an int field would be very hard to convert. :)
When you remove fields, you may still want to be able to convert back to the old structure, so South needs some default data to be able to create a table with the old structure.
Moving data is always an issue, since you may change table structure and field types. For example, how would you manually deal with data going from a Char(max_length=100) to a Char(max_length=50)?
The best suggestion is to keep good backups.
Also take advantage of Django's fixtures. You can save fixtures for different data structures along with South migrations.
South will load initial_data files in the same way as syncdb, but it
loads them at the end of every successful migration process
http://south.readthedocs.org/en/latest/commands.html#initial-data-and-post-syncdb
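For the ForeignKey-to-ManyToMany change in this question specifically, a common South recipe (sketched here with assumptions: the new field is named children and lives on the same model while the old child FK still exists) is three migrations: a schema migration adding the M2M field, a data migration copying the values across, and a schema migration dropping the FK. The data migration's forwards step might look like:
from south.v2 import DataMigration

class Migration(DataMigration):
    def forwards(self, orm):
        # Copy each row's old FK target into the new M2M relation
        # before the FK column is dropped.
        for disability in orm.Disability.objects.all():
            if disability.child_id:
                disability.children.add(disability.child_id)

    def backwards(self, orm):
        raise RuntimeError('Cannot reverse this migration.')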

Verify the structure of a database? (SQLite in C++ / Qt)

I was wondering what the "best" way is to verify the structure of my database with SQLite in Qt / C++. I'm using SQLite, so there is a file which contains my database, and I want to make sure that, when launching the program, the database is structured the way it should be, i.e. it has X tables, each with their own Y columns, appropriately named, etc. Could someone point me in the right direction? Thanks so much!
You can get a list of all the tables in the database with this query:
select tbl_name from sqlite_master;
And then for each table returned, run this query to get column information
pragma table_info(my_table);
For the pragma, each row of the result set will contain: a column index, the column name, the column's type affinity, whether the column may be NULL, and the column's default value.
(I'm assuming here that you know how to run SQL queries against your database in the SQLite C interface.)
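As a minimal sketch of that approach (the function name and error handling are mine, not part of the answer above):
// Print each column of a table via PRAGMA table_info so the result
// can be compared against the expected schema.
#include <sqlite3.h>
#include <cstdio>

void print_columns(sqlite3 *db, const char *table)
{
    char sql[256];
    std::snprintf(sql, sizeof(sql), "PRAGMA table_info(%s);", table);

    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return;  // bad table name or query failure

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        // result columns: cid, name, type, notnull, dflt_value, pk
        std::printf("%d %s %s\n",
                    sqlite3_column_int(stmt, 0),
                    (const char *)sqlite3_column_text(stmt, 1),
                    (const char *)sqlite3_column_text(stmt, 2));
    }
    sqlite3_finalize(stmt);
}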
If you have Qt and thus QtSql at hand, you can also use the QSqlDatabase::tables() (API doc) method to get the tables and QSqlDatabase::record(tablename) to get the field names. It can also give you the primary key(s), but for further details you will have to follow pkh's advice and use the table_info pragma.
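A short sketch of that (the function name is mine; it assumes a connection that is already open):
// Dump every table and its field names using QtSql's metadata API.
#include <QSqlDatabase>
#include <QSqlRecord>
#include <QDebug>

void dumpSchema(const QSqlDatabase &db)
{
    const QStringList tables = db.tables();
    for (const QString &table : tables) {
        QSqlRecord rec = db.record(table);
        QStringList fields;
        for (int i = 0; i < rec.count(); ++i)
            fields << rec.fieldName(i);
        qDebug() << table << fields;  // compare against the expected layout
    }
}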