Are there any third-party tools that can generate a MySQL database from a Doctrine schema?
I am using ZF2 with Doctrine ORM 2, with Skipper to generate my entities. However, when it comes to generating my MySQL database, I do this via Workbench. The problem is that this DB does not always keep up with the changes and updates I make to the Skipper files.
Now, I know I can update via the tool provided in Doctrine's vendor folder, but I have never been able to get this to work because other vendor modules crash the environment while in console mode. ZfcRbac, OAuth2, and others just don't play nicely on the command line.
So my question is simply: is there an easy third-party tool I can use to save time and frustration, or at least some other technique?
Thanks!
The best way to synchronize your database is to use Doctrine Migrations. Every time you update your entities, no matter how (manually or with Skipper), you should create a migration. My workflow is:
1) I modify some entity, let's say by adding a new property;
2) Then I use the Doctrine ORM tool to generate the entity accessors. (If you already have them generated by Skipper, you can skip this step.)
$ php public/index.php orm:generate-entities --update-entities="true" --generate-methods="true" module/Application/src
3) Generate the Doctrine proxies:
$ php public/index.php orm:generate-proxies
4) The next step is to generate the migrations:
$ php public/index.php migrations:diff
5) And the final step is to execute the migration and get your database synchronized with the new schema:
$ php public/index.php migrations:migrate
That's all; your database is now synchronized. I am not sure whether there is a third-party library for this. If you have problems with the other modules, it is a good idea to check your configurations and try to fix them.
Related
Like we used to do with RDBMS SQL scripts, I want to do a similar thing with my DynamoDB tables.
Currently it is very difficult to track changes from environment to environment (dev - qa - prod). We are making changes directly via the console.
What I want to do is keep the table data/JSON in Git version control, and whenever any dev makes a change, we should be able to just run a script that migrates the respective changes to the DynamoDB tables, e.g. update/create/delete tables and add/remove/update records.
But I am not able to find a proper way/guide to achieve this currently. I am using JavaScript/Node.js as our base language.
Any help regarding this scenario will be appreciated.
Thanks
ref : https://forums.aws.amazon.com/thread.jspa?threadID=342538
As far as I can tell, you described two separate issues:
Changing the tables "structure"
Updating records after the update
Before I go into my answer, remember that DynamoDB is a NoSQL database and your previous RDBMS was a relational database. Operational tasks can differ greatly between the two types of databases.
1. Automating changes to the "structure" of the table
For this you can check out Infrastructure as Code tools like Terraform, CloudFormation or Pulumi.
But since DynamoDB is a NoSQL database, you can only do a few things, like setting your hash and sort keys and defining indexes. Adding "fields" to DynamoDB is not done with those tools, because except for the hash and sort key there are no fields; everything else does not follow an explicit (SQL) schema.
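To make that concrete, here is a minimal sketch using boto3 in Python (rather than an IaC template; the table and attribute names are made up): the only "schema" you ever declare is the key attributes and the indexes.

import boto3

dynamodb = boto3.client("dynamodb")

# The only "schema" DynamoDB knows about: key attributes and indexes.
# Every other field lives purely inside the items themselves.
dynamodb.create_table(
    TableName="orders",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_date", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)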
2. Updating records after an update
If you do not have a lot of records, you could write a simple tool or script that does the relevant work using the AWS SDK and run it during your CI/CD pipeline. A simple approach would be to have a "migrations" folder: if there is a file in it, the pipeline executes it, and after the migration is done you just remove the file again. Not great, but pragmatic.
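As a rough sketch of that pipeline step in Python (the question mentions Node.js; the same idea maps directly to the AWS SDK for JavaScript, and the folder name and one-script-per-migration convention are assumptions):

import pathlib
import subprocess
import sys

# CI/CD step: run every pending migration script in order.
# Once a migration has run in all environments, the developer
# deletes the file from the repo again so it never runs twice.
migrations = sorted(pathlib.Path("migrations").glob("*.py"))  # hypothetical folder
for script in migrations:
    print("running migration", script.name)
    subprocess.run([sys.executable, str(script)], check=True)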
If you have a lot of records this approach won't work that well anymore, at least if you want a deployment without downtime. In that case you will have to update your software so it can work with both the old and new versions of the record structure, while you gradually update all records in the background (using a script, etc.). Once all the records are updated, you can remove the code paths that handle the old structure.
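A sketch of such a background update with boto3, assuming a hypothetical move from an old "name" attribute to a new "full_name" attribute; the scan is paginated so it also copes with large tables:

import boto3

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

# Gradually rewrite old-style records while both code paths are live.
scan_kwargs = {}
while True:
    page = table.scan(**scan_kwargs)
    for item in page["Items"]:
        if "name" in item:  # old record structure
            table.update_item(
                Key={"customer_id": item["customer_id"],
                     "order_date": item["order_date"]},
                # "name" is a DynamoDB reserved word, hence the alias
                UpdateExpression="SET full_name = :n REMOVE #old",
                ExpressionAttributeNames={"#old": "name"},
                ExpressionAttributeValues={":n": item["name"]},
            )
    if "LastEvaluatedKey" not in page:
        break
    scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]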
I migrated a database that was manipulated via SQL to use Django's migration module: https://docs.djangoproject.com/en/3.0/topics/migrations/.
Initially I thought of using migrations only for changes in the model, such as changes in columns or new tables, while performing deletes, inserts, and updates through SQL, since they are constant and writing a migration for each case would be a little impractical.
Could using the migration module without using data migrations cause me some kind of problem or inconsistency in the future?
You can think about a data migration if you made a change that also requires a manual fix to the data.
Maybe you decided to normalize something in the database, e.g. to split a name column into a first name and a last name. If you have only one instance of the application with one database and you are the only developer, then you will probably not write a data migration either. But if you want to make the change on a live production site with 24x7 traffic, or you cooperate with other developers, then you will probably prepare a data migration for their databases, or at least thoroughly test the migration on a copy of live data, so that the update works correctly on the production site, without issues and with minimal downtime. If you don't write a data migration and you had no problem immediately, then that is OK, and it is no worse than a possibly ill-conceived data migration.
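For the name-splitting example, the data migration could look roughly like this (the app, model, and field names are made up; RunPython with a noop reverse keeps the migration reversible):

from django.db import migrations

def split_name(apps, schema_editor):
    # Always use the historical model, never a direct import.
    Person = apps.get_model("people", "Person")  # hypothetical app/model
    for person in Person.objects.all():
        first, _, last = (person.name or "").partition(" ")
        person.first_name = first
        person.last_name = last
        person.save(update_fields=["first_name", "last_name"])

class Migration(migrations.Migration):
    dependencies = [
        ("people", "0007_add_first_and_last_name"),  # hypothetical predecessor
    ]
    operations = [
        migrations.RunPython(split_name, migrations.RunPython.noop),
    ]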
I have a multi-database Django project (Python 2.7, Django 1.4), and I need the database used by every query triggered in a request to be selected through the URL of that request.
Ex:
url='db1.myproject.com'
Would trigger the use of 'db1' in every transaction, like this:
MyModel.objects.using('db1').all()
That is where my problem resides.
It's virtually impossible to include the '.using(dbname)' call in every use of 'Model.objects' (this project already has two dozen models, and it's built with scaling in mind, so it could have hundreds; almost every model has, or might have, an implementation of post_save/post_delete that acts as a database trigger).
Is there a way of telling Django that it should change the default database for a given request?
I already have a custom QuerySet and a custom Manager, so if I could tell THEM which database to use, that could do the trick too (I can't find anywhere how to get request data inside them).
Any help would be appreciated.
I think my answer to a similar question here will do what you need:
https://stackoverflow.com/a/21382032/202168
...although I'm a bit worried you may have chosen an unconventional or 'wrong' way of doing database load balancing; maybe this article can help with some better ideas:
http://engineering.hackerearth.com/2013/10/07/scaling-database-with-django-and-haproxy/
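The gist of the linked answer is a piece of middleware that stashes the request's subdomain in thread-local storage, plus a database router that reads it back. A minimal sketch (the alias handling is an assumption; register the middleware in MIDDLEWARE_CLASSES and the router in DATABASE_ROUTERS):

import threading

from django.conf import settings

_local = threading.local()

class DatabaseFromUrlMiddleware(object):
    """Old-style middleware (fits Django 1.4): pick a DB alias from the subdomain."""
    def process_request(self, request):
        subdomain = request.get_host().split(".")[0]  # 'db1' from db1.myproject.com
        # Only accept aliases that are actually configured in settings.DATABASES.
        _local.db_alias = subdomain if subdomain in settings.DATABASES else "default"

class RequestBasedRouter(object):
    """Route every read and write to the alias chosen by the middleware."""
    def db_for_read(self, model, **hints):
        return getattr(_local, "db_alias", "default")

    def db_for_write(self, model, **hints):
        return getattr(_local, "db_alias", "default")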
Objective: unit testing the DAO layer using an in-memory database (HSQL/Derby?).
My end database is Sybase, so the query uses cross-database joins.
Example query:
select table1.col1, table2.col2
from db1..table1, db2..table2
where table1.col1=table2.col2
Obviously, table1 is a table in Database db1 and table2 is a table in Database db2.
Is there any way to test this out?
If you use a db-specific feature (cross-db joins) then you have two options to prevent failure on another db:
Teach the other db to understand the query. For example, you can register new functions in H2. This probably won't work in your case, and either way you don't test anything; you just prevent failure and/or allow your application to work on the other db.
Don't test it on the other db. Face it: even if your queries run on H2, you still have to run them on Sybase. Otherwise you simply don't know if they work.
So one way or another you need to run your tests on Sybase. If you don't plan to support another database, then what's the point in fighting to make this specific test work on H2? It will be completely irrelevant to your production code.
And answering your question: is there any way to test this out?
On Sybase? Yes, just run your query and clear your database afterwards. You can use Vagrant or a dedicated db.
On H2/HSQLDB? Probably not.
I need to move a series of tables from one datasource to another. Our hosting company doesn't give shared passwords amongst the databases so I can't write a SQL script to handle it.
The best option is just writing a little ColdFusion script that takes care of it.
Ordinarily I would do something like:
SELECT * INTO database.table FROM database.table
The only problem with this is that cfquery doesn't allow you to use two datasources in the same query.
I don't think I could use a QoQ either, because you can't tell it to use the second datasource; it has to have a dbType of 'query'.
Can anyone think of any intelligent ways of getting this done? Or is the only option to just loop over each line in the first query adding them individually to the second?
My problem with that is that it will take much longer. We have a lot of tables to move.
Ok, so you don't have a shared password between the databases, but you do seem to have the passwords for each individual database (since you have datasources set up). So, can you create a linked server definition from database 1 to database 2? User credentials can be saved against the linked server, so they don't have to be the same as the source DB. Once that's set up, you can definitely move data between the two DBs.
We use this all the time to sync data from our live database into our test environment. I can provide more specific SQL if this would work for you.
You CAN access two databases, but not two datasources in the same query.
I wrote something a few years ago called "DataSynch" for just this sort of thing.
http://www.bryantwebconsulting.com/blog/index.cfm/2006/9/20/database_synchronization
Everything you need for this to work is included in my free "com.sebtools" package:
http://sebtools.riaforge.org/
I haven't actually used this in a few years, but I can't think of any reason why it wouldn't still work.
Henry - why do any of this? Why not just use SQL manager to move over the selected tables using the "import data" function? (Right-click on your DB and choose "import", then use the native client and permissions for the "other" database to specify the tables.) Your SQL manager will need to have access to both DBs, but the db servers themselves do not need access to each other. Your Management Studio will serve as a conduit.