I'm attempting to programmatically create the hstore extension for a Postgres-backed Django application. Currently, after creation (via eb create --database.engine postgres), I connect to the DB instance directly with pgAdmin and create the extension manually.
Is there a way to do this with options, container commands, a pre-deploy hook, or the like? I've searched fairly hard and haven't found a guide on this. Or am I just thinking about this process backwards?
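One route I've been considering (a rough sketch, assuming Django >= 1.8; the dependencies list and app are placeholders) is Django's built-in HStoreExtension migration operation, so the extension gets created by the normal deploy-time manage.py migrate step:

```python
# Rough sketch: create the hstore extension from a Django migration so it
# runs during the deploy-time "manage.py migrate" step.
# Assumes Django >= 1.8 and a DB role allowed to run CREATE EXTENSION.
from django.contrib.postgres.operations import HStoreExtension
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = []  # placeholder; point at your app's previous migration

    operations = [
        HStoreExtension(),
    ]
```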
What is the best way to go about this? I have a mobile app that a project team developed, and they set up its database as a MySQL instance. For this new project with my own developers, however, we believe Postgres would better suit our needs, but I want everything on one DB instance, since data will be shared between the mobile app and the new project.
You will need to create a new RDS instance to switch the engine type to Postgres.
While transitioning you will need to have both running; to migrate the DB across, you will want to keep the data synchronised between the two. Luckily, AWS has the Database Migration Service (DMS) for exactly this.
You should migrate your existing application to the new instance first, then remove the DMS setup and shut down the MySQL database.
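If you'd rather script the DMS piece than click through the console, a hedged sketch with boto3 might look like the following; every ARN, identifier, and the region below is a placeholder, and the source/target endpoints and replication instance must already exist:

```python
# Hedged sketch of creating a DMS replication task with boto3.
# All ARNs and identifiers are placeholders for your own resources.
import json

import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is an assumption

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-postgres-sync",   # placeholder name
    SourceEndpointArn="arn:aws:dms:region:acct:endpoint:SOURCE",  # MySQL
    TargetEndpointArn="arn:aws:dms:region:acct:endpoint:TARGET",  # Postgres
    ReplicationInstanceArn="arn:aws:dms:region:acct:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # full load plus ongoing change capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```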
I'm using Docker for a project; the main goal is to keep the application available even if one of the nodes (it's a six-node cluster with Docker Swarm) goes down.
The application is basically a Django app that saves images from users, among other models. I'm currently saving the images as files, but since a volume is local to a single machine, I would like to know whether it would be better to save the images in a database cluster, so they stay available even if a whole node goes down. Or is there another way?
Edit: note that the cluster runs locally and doesn't have internet access.
The two options are to share the files via the database or via the file system.
For file-system sharing, you can use something like GlusterFS: each container mounts what looks like a host-local volume, but the volume is actually shared between the hosts via GlusterFS.
To my mind, if it's your own application (i.e. you can modify it at will), saving the files in the database is the easier approach for most developers.
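As a minimal sketch of the database route (the model and field names are illustrative, not from your app), the image bytes can live in an ordinary Django model:

```python
# Minimal sketch: storing uploaded images directly in the database.
# Model and field names are illustrative placeholders.
from django.db import models


class UserImage(models.Model):
    name = models.CharField(max_length=255)
    content_type = models.CharField(max_length=100)  # e.g. "image/png"
    data = models.BinaryField()  # raw image bytes live in the database row

    def __str__(self):
        return self.name
```

The trade-off is that large binary rows bloat the database and its backups, which is why many setups still prefer a shared file system or object store.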
The best solution is often a hosted option (such as MongoDB Atlas). Making a database resilient and highly available is really hard, and unless you are an expert in Docker and Mongo, I would strongly recommend going hosted.
I have a Django app running in a Docker swarm, and I'm trying to connect it to Cassandra using django-cassandra-engine.
Following the guides, I've configured the installed apps and connection settings, and the first step (manage.py sync_cassandra) works great: it created the keyspace and my models.
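For reference, my settings follow the documented django-cassandra-engine pattern, roughly like this (the keyspace, host, and replication values are placeholders, not my real ones):

```python
# Roughly the documented django-cassandra-engine setup; keyspace name,
# host, and replication settings below are placeholders.
INSTALLED_APPS = ['django_cassandra_engine'] + INSTALLED_APPS  # must come first

DATABASES = {
    'default': {
        'ENGINE': 'django_cassandra_engine',
        'NAME': 'my_keyspace',   # placeholder keyspace
        'HOST': 'cassandra',     # placeholder host (e.g. a swarm service name)
        'OPTIONS': {
            'replication': {
                'strategy_class': 'SimpleStrategy',
                'replication_factor': 1,
            },
        },
    },
}
```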
However, whenever I try to create data using the models or a raw query, the connection simply times out, with no other errors.
Swarm is running on AWS in a custom VPC, with Docker Flow Proxy as the reverse proxy for the setup (not that it should affect this connection in any way).
I've tried deploying Cassandra both standalone and as a Docker image.
I'm also able to ping the servers both ways.
I'm even able to connect to the Django app container manually, install Cassandra, and connect to the Cassandra cluster using cqlsh.
I've been banging my head against this for the past few days...
Has anyone encountered something similar? Any ideas as to where I can start digging?
Feel free to ask for any information you think may be relevant.
I've almost finished writing my app in Django and have checked out the steps for deployment. The one thing left is to switch my SQLite DB to PostgreSQL, but do I need to do that before the deployment stage? Won't I be installing a PostgreSQL DB during deployment anyway, since I'll be using a new system that DigitalOcean or AWS provides? Also, on a limited budget, should I be using DigitalOcean?
If you want to clone a project from your local machine to a VPS such as Amazon or DigitalOcean, you need to recreate the environment you have on your local PC. Create a new DB server if you will use PostgreSQL or MySQL; with SQLite you don't need one.
You will also need to delete the old migrations from your apps.
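For the PostgreSQL switch itself, a minimal settings.py sketch looks like this (every value is a placeholder for your own server's details, and you'll need psycopg2 installed):

```python
# Minimal sketch of pointing Django at PostgreSQL instead of SQLite.
# All values below are placeholders; requires the psycopg2 driver.
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject',                           # placeholder DB name
        'USER': 'myproject_user',                      # placeholder role
        'PASSWORD': os.environ.get('DB_PASSWORD', ''),
        'HOST': 'localhost',                           # or your DB server's address
        'PORT': '5432',
    }
}
```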
I have a database instance on RDS with two databases on it. Is there a good way, using the RDS command-line tools, to copy one database to the other? If not, what is the recommended way of doing it?
This is not an exact solution to the OP, but if all you need is to clone an existing database for a new purpose, there's an easier way: take a snapshot of the original RDS instance, then restore it to a new instance. You can even use the web console.
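If you'd rather script it than use the web console, a hedged boto3 sketch (the instance and snapshot identifiers below are placeholders):

```python
# Hedged sketch of the snapshot-and-restore route with boto3; the
# instance and snapshot identifiers are placeholders.
import boto3

rds = boto3.client("rds")

# Snapshot the original instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="source-instance",       # placeholder
    DBSnapshotIdentifier="source-instance-copy",  # placeholder
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="source-instance-copy"
)

# Restore the snapshot into a brand-new instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="cloned-instance",       # placeholder
    DBSnapshotIdentifier="source-instance-copy",
)
```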
I'd use mysqldump to get the tables and then mysql to import them.
Update 2014/07/08: Depending on what you're planning to do here, another solution today is to set up replication and then promote the slave to be the master, for example if you want to upgrade your database's release/version:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
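Scripted with today's boto3 (identifiers below are placeholders), the replicate-then-promote route looks roughly like this:

```python
# Hedged sketch of the replicate-then-promote route; both identifiers
# are placeholders for your own RDS instances.
import boto3

rds = boto3.client("rds")

# Create a read replica of the existing instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",   # placeholder
    SourceDBInstanceIdentifier="mydb",     # placeholder
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-replica")

# Once the replica has caught up, promote it to a standalone master.
rds.promote_read_replica(DBInstanceIdentifier="mydb-replica")
```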
If you're looking to back up externally, there's also replication:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
RDS has come a long way.
It depends on which database you are hosting there. For SQL Server, I have used the SQL Azure Migration Wizard (a free download from CodePlex).
To get full RDBMS functionality, the trick is to use the DNS name of your SQL Server instance in the wizard, but select 'SQL Server v2008' (or eventually v2012, once AWS RDS makes 2012 instances available) as the target, and do NOT select 'SQL Azure'. I did a short screencast on this on my blog as well.