I have some columns of a table in an "old" database that I want to migrate to a new one using DBFlow. DBFlow provides the @Migration annotation for databases, but it seems it only works to migrate tables within the same database.
What is the best approach to import columns into a new/different database using DBFlow?
It is not possible to migrate between different databases with DBFlow's @Migration alone. You need to copy/convert/migrate the data by hand.
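For what it's worth, since DBFlow sits on top of SQLite, the by-hand copy can be done with plain SQL using ATTACH. Here is a minimal sketch of the idea in Python's sqlite3 (file paths and table/column names are illustrative; in an Android app you would run the same statements through your database handle):

import sqlite3

# open the new database and attach the old one
conn = sqlite3.connect("new.db")
conn.execute("ATTACH DATABASE 'old.db' AS olddb")

# copy only the columns you care about into the new table
conn.execute(
    "INSERT INTO new_table (id, name) "
    "SELECT id, name FROM olddb.old_table"
)
conn.commit()
conn.execute("DETACH DATABASE olddb")
conn.close()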
I have a development database that changes constantly: columns and tables are added as we develop. At some point we want to apply these changes to the production database, but production holds live data (current users, app information, etc.). Will applying the changes with a migration erase or affect this data?
I'm looking for a "best-practice" guide/solution to the following situation.
I have a Django project with a MySQL DB which I created and manage. Every 5 minutes I have to import data from a second (external, not managed by me) DB in order to perform some actions. I have read rights on the external DB and all the necessary connection information.
I have read the Django docs regarding the usage of multiple databases (in short: register the DB in settings.py, migrate using the --database flag, query/access data by routing to the DB) and several questions on this matter on Stack Overflow.
So my plan is:
Register the second database in settings.py, use inspectdb to generate the models, migrate, and define a method which reads data from the external DB and adds it to the internal (own) DB.
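For reference, registering the second database in settings.py might look roughly like this (the alias and credentials here are illustrative):

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "own_db",
        # ... credentials for the DB I manage ...
    },
    "external": {
        # the engine depends on what the external DB actually is
        "ENGINE": "django.db.backends.mysql",
        "NAME": "external_db",
        "USER": "readonly_user",
        "PASSWORD": "secret",
        "HOST": "external.example.com",
        "PORT": "3306",
    },
}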
However, I do have some questions:
Do I have to register the external DB if I don't manage it?
(Most probably yes, in order to use the ORM or cursors to access the data.)
How can I migrate the models if I don't manage the DB and don't have write permissions? I also don't need all the tables (around 250, but only 5 are needed).
(Is a fake migration an option worth considering? I would use inspectdb and migrate only the necessary tables.)
Because I only need to retrieve data from the external DB and never write back, would it suffice to have a method that periodically gets the latest data, like the second solution suggested in this answer?
Any thoughts/ideas/suggestions are welcome!
I would not use Django's ORM for this, but rather just access the DB with psycopg2 and SQL, pull the columns you care about into dicts, and work with those. Otherwise any minor change to the external DB's tables may break your Django app, because the models no longer match. That could create more headaches than the ORM is worth.
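A minimal sketch of that approach, assuming the external DB is PostgreSQL (host, credentials, table and column names are all illustrative):

import psycopg2
import psycopg2.extras

conn = psycopg2.connect(
    host="external.example.com", dbname="external_db",
    user="readonly_user", password="secret",
)
# RealDictCursor returns each row as a dict keyed by column name
with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
    cur.execute("SELECT id, name, updated_at FROM some_table")
    rows = cur.fetchall()
conn.close()

for row in rows:
    do_something_with(row)  # hypothetical handler for your own processing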
I am connecting to a legacy MySQL database in the cloud from my Django project.
I need to fetch data from, and if required insert data into, the DB tables.
Do I need to write models for the tables which are already present in the DB?
If so, there are 90-plus tables; what happens if I create a model for a single table?
How do I talk to the database other than by creating models and migrating? Is there a better way in Django, or are models the better way?
When I create a model, what happens in the backend? Does it create those tables again in the same database?
There are several ways to connect to a legacy database; the two I use are creating a model for the data you need from the legacy database, or using raw SQL.
For example, if I'm going to be connecting to the legacy database for the foreseeable future, and performing both reads and writes, I'll create a model containing only the fields I need from the foreign table, and register the legacy DB as a secondary database. That method is well documented, and a bit more time-consuming.
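A rough sketch of that kind of model, assuming a legacy table named legacy_customer (names are illustrative):

from django.db import models

class LegacyCustomer(models.Model):
    email = models.CharField(max_length=255)
    name = models.CharField(max_length=255)

    class Meta:
        managed = False               # Django won't create or migrate this table
        db_table = "legacy_customer"  # map to the existing legacy table

Queries are then routed to the secondary database explicitly, e.g. LegacyCustomer.objects.using("my_secondary_db").all().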
However, if I'm only reading data from a legacy database which will be retired, I'll create a read-only user on the legacy database, and use raw SQL to grab what I need like so:
from django.db import connections

# cursor against the secondary database defined in settings.DATABASES
cursor = connections["my_secondary_db"].cursor()
cursor.execute("SELECT * FROM my_table")
for row in cursor.fetchall():
    insert_data_into_my_new_system_model(row)  # your own conversion/insert helper
I'm doing this right now with a legacy SQL Server database from our old website as we migrate our user and product data to Django/PostgreSQL. This has served me well over the years and saved a lot of time. I've used it to build sync routines as Django management commands, all within a single app; then, when the legacy database was done being migrated to Django, I completely deleted that single app containing all of the sync routines, for a clean break. Good luck!
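A skeleton of such a sync routine as a management command might look like this (the app, command, and helper names are illustrative; the helper is the same hypothetical one used above):

# myapp/management/commands/sync_legacy.py
from django.core.management.base import BaseCommand
from django.db import connections

class Command(BaseCommand):
    help = "Pull rows from the legacy database into the new models"

    def handle(self, *args, **options):
        cursor = connections["my_secondary_db"].cursor()
        cursor.execute("SELECT * FROM my_table")
        for row in cursor.fetchall():
            insert_data_into_my_new_system_model(row)

It can then be run by hand (or scheduled via cron) with python manage.py sync_legacy.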
I am currently developing a server using Flask/SQLAlchemy. When an ORM model is not present as a table in the database, SQLAlchemy creates it by default.
However, when an ORM class is changed, for instance when an extra column is added, those changes do not get applied to the database. So the extra column will be missing every time I query, and I have to adjust my DB manually every time there is a change in the models that I use.
Is there a better way to apply changes to the models during development? I hardly think manual MySQL manipulation is the best solution.
You can proceed as follows, using the sqlalchemy-migrate changeset extension:
from sqlalchemy import Column, String
import migrate.changeset  # importing this gives Column a .create() method
new_column = Column('new_column', String(100), default='some_default_value')
new_column.create(my_table, populate_default=True)  # emits ALTER TABLE ... ADD COLUMN
You can find more details about sqlalchemy-migrate's changeset operations at: https://sqlalchemy-migrate.readthedocs.org/en/latest/changeset.html
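For context, a fuller sketch showing where my_table might come from, using the SQLAlchemy 0.x-era API that sqlalchemy-migrate targets (the connection URL and table name are illustrative):

from sqlalchemy import create_engine, MetaData, Table, Column, String
import migrate.changeset  # monkey-patches Column with .create()

engine = create_engine("mysql://user:password@localhost/mydb")
metadata = MetaData(bind=engine)
my_table = Table("users", metadata, autoload=True)  # reflect the existing table

new_column = Column("new_column", String(100), default="some_default_value")
new_column.create(my_table, populate_default=True)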
I have created a sample Django application with multiple models and populated it with data.
Now I need to add a new column to one of the models.
Here are my concerns:
What will happen if I run syncdb after adding a column to the model? Will it just alter the table and add the new column?
Or will it create a new table after deleting all the columns?
Is there a better way to tackle this?
syncdb does not work for altering database tables.
Here is the documentation (read up on: "Syncdb will not alter existing tables").
A clean way to achieve this would be to use a 3rd-party tool such as Django South, which handles the migrations (the ALTER TABLE scripts, in your case) for you.
Here is a step-by-step tutorial on South, and here is the official documentation on South.
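With South installed, the typical workflow is something like this (the app name is illustrative):

$ python manage.py schemamigration myapp --initial
$ python manage.py migrate myapp

Then, after adding the new field to the model:

$ python manage.py schemamigration myapp --auto
$ python manage.py migrate myapp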
syncdb will not add the new column, and if the table already exists it will not create a new one. What I used to do is simple: after adding the field to your model, get into the shell and type:
$ python manage.py dbshell
This takes you directly into your database shell (mysql or psql, depending on which database you are using):
mysql> ALTER TABLE <table_name> ADD COLUMN <column_name> varchar(100);
This will add the new column to your table; it doesn't matter whether the table is already populated or not.
In development, I use the reset function a lot. It's useful for me as I don't mind blowing away dev data. It basically recreates the database tables for just that app, so it will remove your data. It is not useful if you wish to keep the data populated; South, as mentioned above, is better for that.
python manage.py reset <app-name>