Migrating from Heroku to Azure - getting the database migration right - django

I have a Django app live on Heroku. I'm migrating it to Azure, taking advantage of the $120K/yr credit they recently offered me. Here's what I've done so far:
i) I created an Azure VM with Ubuntu (Standard_D1).
ii) I installed postgresql on it (my db of choice)
iii) I pulled my Heroku app's files from my github onto the Azure VM.
iv) I created a postgres DB on the Azure VM, and then ran syncdb to create the required tables.
v) I tweaked postgresql.conf and pg_hba.conf to cater to some tuning requirements and such.
vi) I took a backup from my Heroku app's dashboard, and downloaded it. This backup file's name is a random uuid, without a file format (e.g. f0af6457-1a24-47d0-881c-434f9bef7c92).
vii) I'm now gearing up to use pg_restore to load the backup into the newly created and synced database on the Azure VM.
Does all this sound about right so far? I have 3 questions:
1) Will pg_restore work with the backup I got off Heroku? This backup doesn't have a file format at all; whereas I'm under the impression it has to be a .tar archive to be compatible with pg_restore.
2) My database is called mydbname. The data backup is saved at /datadrive/backup/filename. Thus, in my case is the correct pg_restore command something like: pg_restore -d mydbname /datadrive/backup/filename?
3) Once I successfully load the correct data in my Azure app, the final step, in my opinion, is to route traffic going to the Heroku app instead to the Azure app. For that, I'll tweak DNS entries. Am I missing anything else here, in your opinion?

Essentially the extension shouldn't matter; your restore should work, though frankly I haven't tested it myself with a Heroku backup.
However, what I would suggest is to make it a valid .dump file:
curl -o latest.dump "$(heroku pg:backups public-url --app <yourappname>)"
This should give you a valid .dump file, though it's not any different from the backup you already have.
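To address questions 1) and 2) a bit more concretely: Heroku's logical backups are normally custom-format pg_dump archives, so pg_restore can read them even without a .dump or .tar extension. A hedged sketch of the restore itself, assuming you connect as a local Postgres user you created on the VM (the --clean/--no-acl/--no-owner flags mirror what Heroku's docs suggest; adjust to taste):
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydbname /datadrive/backup/filename
If pg_restore complains that the input is not a valid archive, the file is probably a plain-SQL dump instead, in which case psql -d mydbname -f /datadrive/backup/filename is the fallback.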

Related

not able to connect superset to druid

I have Druid and Superset running locally, but I am not able to connect them together. I have the sample data wikiticker in Druid. I already installed pydruid with pip3: pip3 install pydruid (I am not sure if I need to install this in any particular location). I have also installed Superset locally with docker-compose, following this link. However, I am not able to connect Druid with Superset. I went to Data -> Databases -> add database. In the connection form I gave the database name as Druid, and I am not sure what to put in SQLALCHEMY URI. I tried these:
druid//admin:admin#localhost:8082/wikiticker
pydruid//admin:admin#localhost:8082/wikiticker
druid://admin:admin#localhost:8082/druid/v2/sql
but nothing is working.
As far as I know, Druid has no built-in authentication. The SQLALCHEMY_URI string should be druid+https://localhost:8082/druid/v2/sql/ (or druid+http://localhost:8082/druid/v2/sql/ if you're using HTTP).
As per the documentation, the connection string should look like this (the third variant in the question, but with @ rather than #):
druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql
The reason you cannot connect is probably your Docker setup: inside the Superset docker container, localhost refers to that container itself. For example, the database and the Redis cache are referred to as db and redis in the connection setup within docker-compose.yml and in the environment variables set in .env.
So you could extend docker-compose.yml to include the Druid container, named druid as well, and then connect to it like this:
druid://admin:admin@druid:PORTTHATYOUEXPOSED/druid/v2/sql
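To verify that the Superset container can actually reach Druid over the compose network, a hedged check run from the host (the container name superset_app and service name druid are assumptions based on a typical docker-compose setup; 8082 is the default broker port):
docker exec -it superset_app curl -s http://druid:8082/druid/v2/sql/ -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM wikiticker LIMIT 1"}'
If that returns JSON rows rather than a connection error, the SQLAlchemy URI with the same host and port should work from Superset.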
There is a good chance that you didn't add the Root Certificate. You can either do that or disable SSL verification. See the documentation here: https://superset.apache.org/docs/databases/druid

Connect Google Cloud Build to Google Cloud SQL

Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project, that runs in a Container on Google Cloud Run. Pushing my code to Cloud Build (installing the stuff, generating static pages and putting everything in a Container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality that also needs some data from my PostgreSQL instance running on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine, as the project can connect to my Cloud SQL proxy. Once running in Cloud Run this should also work, since Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials, etc. already live in Secrets Manager, so I should be able to use those details I guess 🤔
You can use whichever container you need to generate your static pages, and download the Cloud SQL Proxy inside the build step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
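Once the proxy is running in the background, Postgres is reachable on 127.0.0.1:5432 for the rest of that build step. A hedged sanity check you could put at the start of <YOUR SCRIPT> (this assumes the psql client is present in your image; $DB_USER, $DB_PASSWORD and $DB_NAME are placeholders you would fill from substitutions or Secret Manager):
PGPASSWORD="$DB_PASSWORD" psql -h 127.0.0.1 -p 5432 -U "$DB_USER" -d "$DB_NAME" -c 'SELECT 1;'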
App Engine has an exec wrapper which has the benefit of proxying Cloud SQL for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it'll be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for GCB to access GCSQL.
steps:
  - id: 'Connect to DB using appengine wrapper to help'
    name: gcr.io/google-appengine/exec-wrapper
    args:
      [
        '-i',  # The image you want to connect to the db from
        '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
        '-s',  # The postgres instance
        '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
        '-e',  # Get your secrets here...
        'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
        '--',  # And then the command you want to run, in my case a database migration
        'python',
        'manage.py',
        'migrate',
      ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* Unless you're willing to pay more and risk getting stung by Beta software, in which case you can use Cloud Build workers (which are in Beta at the time of writing, anyway... I'll come back and update if they make it into production and fix the issues).
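For the permission part mentioned above, a hedged sketch of granting the Cloud Build service account the Cloud SQL Client role (project ID and project number are placeholders):
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:MY_PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudsql.client"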
The ENV VARS (including DB connections) are not available during build steps.
However, you can use ENTRYPOINT (of Docker) to run commands when the container runs (after completing the build steps).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and by pointing ENTRYPOINT at a file/command I was able to run the migrations (which require DB connection details that are not available during the build process).
"How to" part is pretty brief and is located here : https://stackoverflow.com/a/69088911/867451

How to transfer one mongodb database from one computer to another

I am using Django 2.2 with MongoDB as the database for the backend, and I have inserted all the data in my application. I am also using Robo3T to view the collections of the MongoDB database. My database name is CIS_FYP_db. On my computer everything works perfectly, but I want to transfer the project to another computer. I am transferring the project along with the data\db directory, which contains many collection .wt files, but when I run the project on the other computer the database is blank: no data is present, and MongoDB creates a new database with the same name CIS_FYP_db and no collections. Please help me solve this problem: how can I transfer my MongoDB database to another computer so I can use it with the application that was built for that database? Thanks in advance.
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'CIS_FYP_db',
    }
}
When you create a connection with MongoDB, the database is created automatically if it does not already exist.
You can use the mongodump command to export all the database records and mongorestore to restore the database on your new machine.
Assumption: you have set up MongoDB locally and want to migrate it to another computer.
1. Requirements:
mongodump
mongorestore
1.1. How to install
To install the above requirements you have to install the MongoDB Database Tools.
Download link: https://www.mongodb.com/try/download/database-tools
1.2. Common error
Sometimes the path is not set, so try this in a cmd prompt: set path="C:\Program Files\MongoDB\Server\5.0\bin"
Note: adjust the path according to your installation folder.
2. Procedure:
Note: make sure you follow Step 1.
2.1. Approach
We are going to create a dump of MongoDB on the old PC (using mongodump), transfer that dump to the new PC, and import it using mongorestore.
2.2. Create the dump on the old PC (the one whose database you want to replicate)
mongodump --host localhost:27017 --out ~/Desktop/mongo-migration
The above command will create a dump at the given path: ~/Desktop/mongo-migration
Just copy that folder and transfer it to the new PC.
Note: if you have created an authenticated user, add these flags to the above command and provide the values: --username [yourUserName] --password [yourPassword] --authenticationDatabase admin
2.3. Import the dump (created on the old PC)
Place the dump folder somewhere and execute the command below:
mongorestore C:/....../mongo-migration/ -u root --host 127.0.0.1:27017
done :)
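If you only care about the single database from the question (CIS_FYP_db), a hedged variant of the restore that filters by namespace (the dump path is a placeholder; --nsInclude is available in current mongorestore versions):
mongorestore --host 127.0.0.1:27017 --nsInclude="CIS_FYP_db.*" ~/Desktop/mongo-migration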

Giving Docker access to db file outside container

I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?
You can mount a host directory (or a single file) into your Docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
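A concrete sketch for the sqlite3 case in the question, assuming the project lives at /app inside the container and the database file sits at ~/djangodata/db.sqlite3 on the host (image name and paths are placeholders):
docker run -v "$HOME/djangodata/db.sqlite3:/app/db.sqlite3" -p 8000:8000 my-django-image
Note that the file must already exist on the host before you start the container; otherwise Docker will create a directory with that name instead. Django's DATABASES['default']['NAME'] should then point at /app/db.sqlite3.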

Bitnami Redmine backup strategy

We started using Redmine at work. I know it uses MySQL as the database, and Apache 2 as the web server. How can Redmine be properly backed up so that it can be reloaded quickly when anything goes wrong?
This will do just fine:
mysqldump --single-transaction --user=user_name --password=your_password redmine_database > backup.sql
It will dump the entire contents of the redmine_database to the backup.sql file.
Update:
As far as backing up "apache", as I state in my comment below, you don't need or want to back up your Apache installation. If you ever need to recover your system, Apache would need to be reinstalled, as with any other application. If you are referring to the actual files and directories within your Redmine installation, those don't need to be backed up either, except for the files/ directory, which contains files uploaded to Redmine by users. You can back up your entire Redmine installation (to be safe) with the following command:
tar czvf redmine_backup.tar.gz /path/to/redmine/installation
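For completeness, a hedged sketch of the corresponding restore (same placeholder credentials, database name, and archive as above; tar stores the paths without the leading slash, so extracting with -C / puts them back in place):
mysql --user=user_name --password=your_password redmine_database < backup.sql
tar xzvf redmine_backup.tar.gz -C /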
Run it as a VM (JumpBox has a quickstartable one, I believe) then periodically pause or shutdown the VM and backup/copy the entire virtual disk.
I know this doesn't help with an existing installation, but it's what I'd recommend to anyone planning backups before they implement. That's not meant to be snide, just helpful to anyone else reading this thread.
Bitnami apps are self-contained, so another option, if you can afford some downtime, is simply to shut down the server and zip the directory contents. You may want to do this once a week, in addition to your mysqldump backups. This way you also capture any changes that may have happened in Apache, etc.
Read the Redmine user guide (look at the bottom).
Also, don't forget to backup the attached files.
Redmine backups should include:
Data (stored in your redmine database)
attachments (stored in the files directory of your Redmine install)
Here is a simple shell script that can be used for daily backups (assuming you're using a MySQL database):
# Database
/usr/bin/mysqldump -u <username> -p<password> <redmine_database> | gzip > /path/to/backup/db/redmine_`date +%y_%m_%d`.gz
# Attachments
rsync -a /path/to/redmine/files /path/to/backup/files
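To run that script daily, a hedged crontab entry, assuming you save the script as /path/to/backup/redmine_backup.sh (path and time are assumptions):
# run the Redmine backup script every day at 02:00 and log its output
0 2 * * * /path/to/backup/redmine_backup.sh >> /var/log/redmine_backup.log 2>&1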
Redmine sets the table charset to latin1.
So, if you use a non-latin1 charset (CJK in UTF-8, for example), you should pass the following options to the backup command:
mysqldump -u root -p --default-character-set=latin1 --skip-set-charset bitnami_redmine -r backup.sql
This skips the charset statements in the SQL dump, so you get a clean dump with no charset re-interpretation.
By the way, you have to back up the files directory as well; it holds all uploaded files. I installed the Bitnami Redmine stack on Windows.
For MySQL, I use MySQLAdmin to schedule a database backup every day.
And I use aceBackup to automatically back up the database dump files and Redmine's uploaded files to a remote FTP server.
When something goes wrong with the server, I can just reinstall the Bitnami Redmine stack, import the previously dumped database file, and then overwrite Redmine's files directory with the backed-up files.
And that's it.
This separates the program (the Bitnami Redmine stack) from the data (database & uploaded files) perfectly.