Bitnami Redmine backup strategy

We started using Redmine at work. I know it uses MySQL as the database, and Apache 2 as the web server. How can Redmine be properly backed up so that it can be reloaded quickly when anything goes wrong?

This will do just fine:
mysqldump --single-transaction --user=user_name --password=your_password redmine_database > backup.sql
It will dump the entire contents of the redmine_database to the backup.sql file.
Update:
As far as backing up "apache" goes, as I state in my comment below, you don't need or want to back up your Apache installation. If you ever need to recover your system, Apache would need to be reinstalled just like any other application. If you are referring to the actual files and directories within your Redmine installation, those don't need to be backed up either, except for the files/ directory, which contains the files users have uploaded to Redmine. To be safe, you can back up your entire Redmine installation with the following command:
tar czvf redmine_backup.tar.gz /path/to/redmine/installation
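To recover, a rough sketch of the reverse steps (assuming the same database name, credentials, and install path as above; the archive stores paths relative to /, so extracting from / puts the files back in place):
# Reload the database dump
mysql --user=user_name --password=your_password redmine_database < backup.sql
# Put the installation files back
tar xzvf redmine_backup.tar.gz -C /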

Run it as a VM (JumpBox has a quick-startable one, I believe), then periodically pause or shut down the VM and back up or copy the entire virtual disk.
I know this doesn't help with an existing installation, but it's what I'd recommend to anyone planning backups before they implement. That's not meant to be snide, just helpful to anyone else reading this thread.

Bitnami apps are self-contained, so another option, if you can afford some downtime, is simply to shut down the server and zip up the directory contents. You may want to do this maybe once a week, in addition to your mysqldump backups. This way you also capture any changes that may have happened in Apache, etc.
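A minimal sketch of that stop/archive/start cycle, assuming a Linux Bitnami stack installed under /opt/redmine and using the stack's ctlscript.sh control script (adjust the path to your own install directory):
# Stop all stack services (Apache, MySQL, Redmine)
/opt/redmine/ctlscript.sh stop
# Archive the whole self-contained installation
tar czvf redmine_stack_$(date +%Y%m%d).tar.gz /opt/redmine
# Bring the stack back up
/opt/redmine/ctlscript.sh start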

Read the Redmine user guide (look at the bottom).
Also, don't forget to back up the attached files.
Redmine backups should include:
Data (stored in your Redmine database)
Attachments (stored in the files directory of your Redmine install)
Here is a simple shell script that can be used for daily backups (assuming you're using a MySQL database):
# Database
/usr/bin/mysqldump -u <username> -p<password> <redmine_database> | gzip > /path/to/backup/db/redmine_`date +%y_%m_%d`.gz
# Attachments
rsync -a /path/to/redmine/files /path/to/backup/files
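A matching restore sketch, assuming the same paths and database name (the date suffix is just an example; note that the rsync above places the attachments under /path/to/backup/files/files):
# Database
gunzip < /path/to/backup/db/redmine_24_01_31.gz | /usr/bin/mysql -u <username> -p<password> <redmine_database>
# Attachments
rsync -a /path/to/backup/files/files/ /path/to/redmine/files/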

Redmine sets the table charset to "latin1".
So, if you store non-latin1 data (e.g. CJK text in UTF-8), you should pass the following options to your backup command:
mysqldump -u root -p --default-character-set=latin1 --skip-set-charset bitnami_redmine -r backup.sql
This suppresses the charset (SET NAMES) statements in the SQL dump, so you get a clean dump, i.e. one written without charset interpretation.
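For the restore, a hedged sketch is to load the dump back with the same character-set handling so MySQL doesn't reinterpret the bytes (bitnami_redmine is the default Bitnami database name used above):
mysql -u root -p --default-character-set=latin1 bitnami_redmine < backup.sql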

By the way, you have to back up the files directory as well; it holds all uploaded files. I installed the Bitnami Redmine stack on Windows.
For MySQL, I use MySQLAdmin to schedule a database backup every day.
And I use aceBackup to automatically back up the database dump files and Redmine's uploaded files to a remote FTP server.
When something goes wrong with the server, I can just reinstall the Bitnami Redmine stack, import the previously dumped database file, and then overwrite Redmine's files directory with the backed-up files.
And that works.
This separates the program (the Bitnami Redmine stack) and the data (database & uploaded files) perfectly.

Related

How to run a one-off task on Cloud Foundry to upload data before starting a Python app

Hi, this is my first experience trying to deploy a Python app to the cloud using CF. I am having issues deploying my app; I sincerely appreciate it if anyone can help me or point me in the right direction to solve the issue.
The main problem is that the app I am trying to deploy is large because of its many Python dependencies. The size of my app directory itself is 200 KB. The first error I observed was: staging fails due to "Failed to upload payload for droplet". I think the reason is that once all the Python dependencies from requirements.txt are downloaded and the droplet is finally created, it is too large to upload. The droplet size is 982.3 MB.
The first solution I tried was vendoring the app, where I created a vendor directory containing all the Python dependencies, but the vendor directory was larger than 1 GB, which made the upload exceed the 1 GB limit and caused the upload of the app files to fail.
The second solution I am working on is to upload all the installed Python libraries to an object store (in my case an S3 bucket bound to my app) and then download the dependencies folder, called Pypackages, to the app's root directory, /home/vcap/app, so that /home/vcap/app/Pypackages exists before my app starts on the cloud. But I couldn't do it successfully yet. I have included a Python script in my app directory which downloads files from the S3 bucket successfully (I have put the correct absolute path for the download in the downloadS3.py script, i.e. /home/vcap/app/Pypackages). I want to run this script using "python downloadS3.py" as a one-off task. First I tried the solution here: Can I have multiple commands run in a manifest.yml file?
and although I can see that the status of the task is SUCCEEDED via "cf tasks my-app-name", /home/vcap/app/Pypackages does not exist.
I also tried to run the one-off task with the steps below:
1-
$ cf push -c 'python downloadS3.py && sleep infinity' -i 1 --no-route
2-
$ cf push -c 'null'
I have printed the contents of /home/vcap/app from my app, i.e. when the app is started and I enter the URL in my browser (I don't know the right way to see the contents of the root directory). Anyway, the problem is that Pypackages is not downloaded to the correct root directory. I am not sure if I am running the one-off task the wrong way or if there is a better solution to make my app work.
I appreciate any help!
Diego cells stage apps and upload the droplet to the blobstore via the Cloud Controller; the maximum file size that can be uploaded is configurable at Ops Manager > TAS for VMs > Application Developer Control > Maximum File Upload Size (MB), and the default is 1024 MB. This seems to be causing the restriction; see if you can get it increased with your admin's help.
Tasks run in their own containers, so that is possibly not an option. I think the Python buildpack collects and installs the packages before creating the droplet, so I don't think copying packages directly to the /app directory will be of much help.
If you have data files, then you can use a .profile file and do some scripting to copy them from S3 or a server/NFS location into the /app directory. Something like:
wget http://s3.location.com/data_files
cp data_files /home/vcap/app/
But if all of these are packages and increasing the size limit is not feasible, then you may need to look at breaking up the app.
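As a sketch of the .profile approach (the bucket URL and archive name below are placeholders, not real ones), something like this in the app root runs before the start command and unpacks the data into /home/vcap/app:
# .profile - executed in the app container before the start command
wget -q https://my-bucket.s3.amazonaws.com/data_files.tar.gz -O /home/vcap/app/data_files.tar.gz
tar xzf /home/vcap/app/data_files.tar.gz -C /home/vcap/app/
rm /home/vcap/app/data_files.tar.gz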

Methods to automate ColdFusion Administrator settings

When working with a ColdFusion server you can access the CFIDE/administrator to set config values, which update the XML files under cfusion/lib/ (e.g. neo-runtime.xml, neo-mail.xml, etc.).
I'd like to automate a deployment process that includes setting these administrator values so that I don't have to log in and manually set them for each new box that shares settings. I'm unsure of the best way to go about it.
Some thoughts I had are:
Replacing the full files with ones containing my custom settings. I've done this for local development, but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
A script to read the WDDX XML files and replace the attribute values. I'm having trouble finding information about how to do this.
Has anyone done anything like this before? Or does anyone have any recommendations on how to best go about this?
At one company, we checked all the neo-*.xml files into source control, with a set for each environment. Devs only had access to the dev settings, and we could quickly deploy a local development environment with all the correct settings for new employees.
but it may not be an ideal method due to CF hot-fixes potentially adding/removing/changing attributes.
You have to keep up with those changes and migrate each environment appropriately.
While I was there, we upgraded from 8 to 9, from 9 to 11, and from 11 to 2016. Environments had to be mixed, as it took time to verify that the applications worked with each new version of CF. Each server got the correct XML files for its environment, and scripts would copy updates as needed. We had something like 55 servers in production running 8 instances each, so this scaled well.
There is a very useful tool developed by Ortus Solutions for this kind of automation called CFConfig, which can be installed with their CommandBox command-line utility. This tool isn't only capable of setting configuration values in the Administrator: it can also export/import settings to a JSON file (cfconfig.json). It might be what you need.
Here is the link to their docs
https://cfconfig.ortusbooks.com/introduction/getting-started-guide
CFConfig worked perfectly for my needs. I marked @AndreasRu's answer as accepted for introducing me to that tool! I'm just adding this response with some additional detail for posterity.
Install CommandBox as part of the deployment script
Install CFConfig as part of the deployment script
Use CFConfig to export a config.json file from an existing box that will share settings with the new deployment. Store this JSON file in source control for each type/env of box.
Use CFConfig to import the config.json as part of the deployment script
Here's a simple example of what this looks like on Debian:
# Installs CommandBox
curl -fsSl https://downloads.ortussolutions.com/debs/gpg | apt-key add -
echo "deb https://downloads.ortussolutions.com/debs/noarch /" | tee -a /etc/apt/sources.list.d/commandbox.list
apt-get update && apt-get install apt-transport-https commandbox
# Installs CFConfig module
box install commandbox-cfconfig
# Import config settings
box cfconfig import from=/<path-to-config>/config.json to=/opt/ColdFusion/cfusion/ toFormat=adobe#11.0.19
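For reference, the matching export on a box that already has the settings you want looks something like this (a sketch using the same parameter style as the import above; the path and engine version are placeholders):
# Export the current server settings to a JSON file
box cfconfig export from=/opt/ColdFusion/cfusion/ fromFormat=adobe#11.0.19 to=/<path-to-config>/config.json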

How to transfer a MongoDB database from one computer to another

I am using Django 2.2 and MongoDB as the database for the backend. I have inserted all the data in my application. I am also using Robo 3T to view the collections of the MongoDB database. My database name is CIS_FYP_db. On my computer everything works perfectly, but I want to transfer the project to another computer. The project also contains the data\db folder with many collections.wt files, but when I run the project on the other computer it shows me that the database is blank: no data is present, and MongoDB creates a new database with the same name, CIS_FYP_db, with no collections. Please help me solve this problem: how can I transfer my MongoDB database to another computer so I can use it in my application, which is already built for that database? Thanks in advance.
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'CIS_FYP_db',
    }
}
When you create a connection with MongoDB, the database is created automatically if it does not already exist.
You can use the mongodump command to dump all the database records and mongorestore to restore the database on your new machine.
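A minimal sketch for this specific case, assuming MongoDB runs on the default localhost:27017 on both machines and you only need the CIS_FYP_db database:
# On the old computer: dump only CIS_FYP_db
mongodump --host localhost:27017 --db CIS_FYP_db --out ./CIS_FYP_db_dump
# Copy the dump folder to the new computer, then restore it there
mongorestore --host localhost:27017 --db CIS_FYP_db ./CIS_FYP_db_dump/CIS_FYP_db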
Assumption: you have set up MongoDB locally and want to migrate it to another computer.
1. Requirements:
mongodump
mongorestore
1.1. How to install?
To install the above requirements you have to install the MongoDB Database Tools.
Download link: https://www.mongodb.com/try/download/database-tools
1.2. Common error:
Sometimes the path is not set, so try this in the cmd prompt: set path="C:\Program Files\MongoDB\Server\5.0\bin"
Note: please adjust the path according to your own installation folder.
2. Procedure:
Note: make sure you follow step 1.
2.1. Approach
We are going to create a dump of MongoDB on the old PC (using mongodump), transfer that dump to the new PC, and import it there using mongorestore.
2.2. Creating the dump on the old PC (the one whose database you want to replicate)
mongodump --host localhost:27017 --out ~/Desktop/mongo-migration
The command above will create a dump in the given path, ~/Desktop/mongo-migration.
Just copy that folder and transfer it to the new PC.
Note: if you have created an authenticated user, then add these flags to the command above and provide the values: --username [yourUserName] --password [yourPassword] --authenticationDatabase admin
2.3. Importing the dump (created on the old PC)
Place the dump folder somewhere and execute the command below:
mongorestore C:/....../mongo-migration/ -u root --host 127.0.0.1:27017
Done :)

Migrating from Heroku to Azure - getting the database migration right

I have a Django app live on Heroku. I'm migrating it to Azure, taking advantage of the $120K/yr credit they recently offered me. Here's what I've done so far:
i) I created an Azure VM with Ubuntu (Standard_D1).
ii) I installed postgresql on it (my db of choice)
iii) I pulled my Heroku app's files from my github onto the Azure VM.
iv) I created a postgres DB on the Azure VM, and then ran syncdb to create the required tables.
v) I tweaked postgresql.conf and pg_hba.conf to cater to some tuning requirements and such.
vi) I took a backup from my Heroku app's dashboard and downloaded it. This backup file's name is a random UUID, without a file extension (e.g. f0af6457-1a24-47d0-881c-434f9bef7c92).
vii) I'm now gearing up to use pg_restore to fit the backup in the newly created+synced app on Azure VM.
Does all this sound about right so far? I have 3 questions:
1) Will pg_restore work with the backup I got off Heroku? This backup doesn't have a file extension at all, whereas I'm under the impression it has to be a .tar archive to be compatible with pg_restore.
2) My database is called mydbname. The data backup is saved at /datadrive/backup/filename. Thus, in my case is the correct pg_restore command something like: pg_restore -d mydbname /datadrive/backup/filename?
3) Once I successfully load the correct data in my Azure app, the final step, in my opinion, is to route traffic going to the Heroku app instead to the Azure app. For that, I'll tweak DNS entries. Am I missing anything else here, in your opinion?
Essentially the extension shouldn't matter; your restore should work, but frankly I haven't tested it myself with a Heroku backup.
However, what I would suggest is to make it a valid .dump file:
curl -o latest.dump "$(heroku pg:backups public-url --app <yourappname>)"
This should give you a valid .dump file, though it's not any different from the backup you already have.
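Once you have a valid dump, the restore into your Azure database would look something like this sketch (the flags mirror Heroku's documented pg_restore usage; substitute your own user and database names):
pg_restore --verbose --clean --no-acl --no-owner -h localhost -U myuser -d mydbname latest.dump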

rhc snapshot not saving all the data

If I try to take a snapshot
rhc snapshot save -a django
It is not saving all the data and code on the server.
Here is the link of my app
http://django-appspot.rhcloud.com
It is running on:
Django-1.5.1
python-2.7
Mysql
The size without /tmp, /ssh, and /sandbox is 866 MB.
I think I am exceeding the disk quota.
Currently I am unable to take a backup of the media folder. Is there any way around this?
It is solved:
https://www.openshift.com/forums/openshift/rhc-snapshot-not-saving-all-the-data#comment-33800
Use WinFTP to download the backup files
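If the snapshot keeps hitting the disk quota, one workaround is to pull just the large directories over SSH instead of snapshotting the whole gear; a rough sketch, assuming OpenShift v2 gear SSH access and that the uploaded media lives under app-root/data (substitute your gear's SSH user):
rsync -avz <gear-uuid>@django-appspot.rhcloud.com:app-root/data/ ./django-media-backup/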