How to store Bacula (community edition) backup on Amazon S3?

I am using a CentOS 7 machine, Bacula community edition 11.0.5, and a PostgreSQL database.
Bacula is used to take full and incremental backups.
I followed the document linked below to store the backup in an Amazon S3 bucket.
https://www.bacula.lat/community/bacula-storage-in-any-cloud-with-rclone-and-rclone-changer/?lang=en
I configured the storage daemon as shown in that link. The backup completes successfully and the backed-up file is stored in the given path /mnt/vtapes/tapes, but the backup file is not moved from /mnt/vtapes/tapes to the AWS S3 bucket.
The document says we need to create schedule routines to the cloud to move the backup file from /mnt/vtapes/tapes to the Amazon S3 bucket.
I am not aware of what these cloud schedule routines in AWS are. Are they Lambda functions or something else?
Is there any S3 cloud driver that supports Bacula backups, or any other way to store Bacula community backup files on Amazon S3, other than s3fs-fuse and libs3?
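For context: in the rclone/rclone-changer setup from that article, the "schedule routine" is not an AWS-side component such as Lambda; it is simply a periodic job on the Bacula server that pushes the staged volumes from /mnt/vtapes/tapes to the bucket with rclone. A minimal sketch of such a cron job, assuming a hypothetical rclone remote named s3remote and a placeholder bucket name:

# /etc/crontab entry (hypothetical): once a day, move volumes that have been
# idle for at least an hour from the staging path into S3 via rclone.
0 3 * * *  root  rclone move /mnt/vtapes/tapes s3remote:my-bacula-bucket/tapes --min-age 1h

The exact schedule and rclone flags would need to follow the routine described in the article.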

The link you shared is for Bacula Enterprise, but we are using Bacula Community. Is there any related document you would recommend for the Bacula community edition?

Bacula Community has included an AWS S3 cloud driver since version 9.6.0. Check https://www.bacula.org/11.0.x-manuals/en/main/main.pdf, Chapter 3, New Features in 9.6.0, and additionally section 4.0.1, New Commands, Resource, and Directives for Cloud. It is exactly the same driver that is available in the Enterprise version.
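For orientation, a rough sketch of what the cloud-enabled Storage Daemon configuration looks like with that driver, based on the Cloud resource and Device directives described in the manual chapters referenced above. Every value below is a placeholder, and the exact directive set should be verified against the manual for your 11.0.5 build:

# bacula-sd.conf (sketch) -- Cloud resource pointing at an S3 bucket
Cloud {
  Name = S3Cloud
  Driver = "S3"
  HostName = "s3.amazonaws.com"
  BucketName = "my-bacula-bucket"          # placeholder bucket
  AccessKey = "AKIAXXXXXXXXXXXXXXXX"       # placeholder credentials
  SecretKey = "xxxxxxxxxxxxxxxxxxxxxxxx"
  Protocol = HTTPS
  UriStyle = VirtualHost
}

# Cloud-type device: volumes are written to a local cache and uploaded as parts
Device {
  Name = S3CloudDevice
  Device Type = Cloud
  Cloud = S3Cloud
  Archive Device = /mnt/bacula/cloud-cache # local cache directory (placeholder)
  Maximum Part Size = 10000000
  Media Type = CloudType
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}

With this driver the Storage Daemon itself uploads the volume parts, so no external rclone schedule routine is needed.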

Related

On-premise file backup to AWS

Use case:
I have one directory on-premise, and I want to make a backup of it, let's say every midnight. And I want to restore it if something goes wrong.
This doesn't seem like a complicated task, but reading through the AWS documentation, even this can be cumbersome and costly. Setting up Storage Gateway locally seems unnecessarily complex for a simple task like this, and setting it up on EC2 is costly as well.
What I have done:
Reading through this + some other blog posts:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
What I have found:
1. Setting up a file gateway (locally or as an EC2 instance):
It just mounts the files to S3, and that's it. So my on-premise app will constantly write to this S3 bucket. The documentation doesn't mention anything about scheduled backup and recovery.
2. Setting up a volume gateway:
Here I can make a scheduled synchronization/backup to S3, but using a whole volume for it would be a big overhead.
3. Standalone S3:
Just using a bare S3 bucket and copying my backup there via the AWS API/SDK with a manually created scheduled job.
Solutions:
Using point 1 from above, enable versioning, and the versions of the files will serve as recovery points.
Using point 3.
I think I am looking for a mix of the file and volume gateways: working at the file level and making asynchronous scheduled snapshots of the files.
How should this be handled? Isn't there a really easy way that will just send a backup of a directory to AWS?
The easiest way to back up a directory to Amazon S3 would be:
Install the AWS Command-Line Interface (CLI)
Provide credentials via the aws configure command
When required, run the aws s3 sync command
For example
aws s3 sync folder1 s3://bucketname/folder1/
This will copy any files from the source to the destination. It will only copy files that have been added or changed since a previous sync.
Documentation: sync — AWS CLI Command Reference
If you want to be fancier and keep multiple backups, you could copy to a different target directory, create a zip file first and upload that, or even use a backup program like CloudBerry Backup that knows how to use S3 and can do traditional-style backups.
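For the nightly schedule in the question, the sync command can simply be driven by cron. A minimal sketch, with a placeholder local path, bucket name, and CLI path (adjust to wherever aws is installed):

# Hypothetical crontab entry: sync the directory to S3 every midnight.
# Assumes `aws configure` has already been run for this user.
0 0 * * * /usr/local/bin/aws s3 sync /srv/mydata s3://my-backup-bucket/mydata/ >> /var/log/s3-backup.log 2>&1

Combined with bucket versioning (point 1 in the question), overwritten files remain recoverable as earlier object versions.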

AWS: How to transfer files from an EC2 instance (Windows Server) to S3 daily?

Can someone explain to me what the best way is to transfer data from a hard drive on an EC2 instance (running Windows Server 2012) to an S3 bucket in the same AWS account on a daily basis?
Background to this:
I'm generating a .csv file for one of our business partners daily at 11:00 am and I want to deliver it to S3 (he has access to our S3 bucket).
After that he can pull it out of S3 manually or automatically whenever he wants.
Hope you can help me. I only found manual solutions with the CLI, but no automated way for daily transfers.
Best Regards
You can directly mount S3 buckets as drives on your EC2 instances. This way you don't even need triggers or a daily task scheduler along with a third-party service, as objects would be directly available in the S3 bucket.
For Linux typically you would use Filesystem in Userspace (FUSE). Take a look at this repo if you need it for Linux: https://github.com/s3fs-fuse/s3fs-fuse.
Regarding Windows, there is this tool:
https://tntdrive.com/mount-amazon-s3-bucket.aspx
If these tools don't suit you, or if you don't want to mount the S3 bucket directly, here is another option: whatever you can do with the CLI you should be able to do with the SDK. Therefore, if you are able to code in one of the various languages AWS Lambda supports (C#/Java/Go/PowerShell/Python/Node.js/Ruby), you could automate this with a Lambda function triggered by a daily schedule at 11 a.m.
Hope this helps!
Create a small application that uploads your file to an S3 bucket (there are some examples here). Then use Task Scheduler to execute your application on a regular basis.
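If the AWS CLI is installed on the instance, that "small application" can be as little as a single aws s3 cp call registered with Task Scheduler. A rough sketch with placeholder paths, task name, and bucket (run from an elevated command prompt; assumes aws.exe is on the PATH):

rem Hypothetical: create a daily task at 11:05 that copies the generated CSV to the shared bucket.
schtasks /Create /TN "DailyCsvToS3" /SC DAILY /ST 11:05 /TR "aws s3 cp C:\exports\report.csv s3://partner-bucket/reports/"

The instance can get its S3 permissions from an attached IAM role, so no credentials need to be stored on disk.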

How to store AMI file to S3 bucket?

I have an Ubuntu EC2 instance at AWS. I took an AMI of the instance.
I want to store the AMI in an S3 bucket. Is there any way to do that? Also, is there any way to export the AMI from the S3 bucket?
Update: This feature is now available
From Store and restore an AMI using S3 - Amazon Elastic Compute Cloud:
You can store an Amazon Machine Image (AMI) in an Amazon S3 bucket, copy the AMI to another S3 bucket, and then restore it from the S3 bucket. By storing and restoring an AMI using S3 buckets, you can copy AMIs from one AWS partition to another, for example, from the main commercial partition to the AWS GovCloud (US) partition. You can also make archival copies of AMIs by storing them in an S3 bucket.
--- Old Answer ---
It is not possible to export an AMI.
An Amazon Machine Image (AMI) is a copy of an Elastic Block Store (EBS) volume. The AMI is stored in Amazon S3, but it is not accessible via the S3 service. Think of it as being stored in AWS's own S3 bucket, rather than yours.
If you wish to export a disk image, use a standard disk utility to copy the disk to ISO format, which can then be copied and mounted on other VMs.
Thank you John.
Hi guys, I had a chat with AWS support as well. For your reference:
10:15:45 AM Myself: Well, I have some doubts. I will ask, and please just clear me up on that.
10:15:49 AM AWS support: Sure
10:15:59 AM AWS support: I'll be happy to do so
10:16:25 AM Myself: Is there any option to store an AMI in an S3 bucket?
10:16:46 AM AWS support: No, this is not possible
10:17:05 AM AWS support: AMI data is a simple configuration file
10:17:11 AM AWS support: This is backed by S3
10:17:18 AM AWS support: But not stored in an S3 bucket
10:17:27 AM AWS support: The exact same is true for Snapshots
10:17:45 AM AWS support: It is stored and backed by S3- but not something that can be placed in one of your buckets
10:17:59 AM Myself: Is it possible to view that in S3?
10:18:51 AM AWS support: No, this is not something that is visible in S3, I am sorry to say
10:19:57 AM Myself: OK. I need to download the AMI. What can I do?
10:20:19 AM AWS support: AMI data is not something that is downloadable
10:20:35 AM AWS support: Are you seeking to Download your whole instance?
10:20:46 AM AWS support: Or download a complete volume?
10:21:07 AM AWS support: If you originally imported your instance from a VM, you can export the VM
10:21:29 AM AWS support: But if it's an EC2 instance that was created in EC2, you can not, I am really sorry to say
10:22:02 AM Myself: Okay fine.
I ran into the same problem and, to my delight, AWS has since added a feature for this.
You can store and restore your AMI in S3 now.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-store-restore.html#store-ami
UPDATE:
The version of the AWS CLI matters.
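The store/restore feature is exposed through the EC2 API, so the CLI needs to be recent enough to include the subcommands below. A hedged sketch with a placeholder AMI ID and bucket name (check describe-store-image-tasks or the bucket itself for the actual object key before restoring):

# Store an AMI into your own S3 bucket
aws ec2 create-store-image-task --image-id ami-0123456789abcdef0 --bucket my-ami-archive

# Check progress of the store task
aws ec2 describe-store-image-tasks --image-ids ami-0123456789abcdef0

# Later, restore the AMI from the stored object (the key is typically <ami-id>.bin)
aws ec2 create-restore-image-task --bucket my-ami-archive --object-key ami-0123456789abcdef0.bin --name "restored-ami"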

How to restore postgres dump with RDS?

I have a Postgres dump in an AWS S3 bucket. What is the most convenient way to restore it into AWS RDS?
AFAIK, there is no native AWS way to manually push data from S3 to anywhere else. The dump stored on S3 needs to first be downloaded and then restored.
You can use the link posted above (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html), however that doesn't help you download the data.
The easiest way to get something off of S3 is to simply go to the S3 console, point and click your way to the file, right-click it and click Download. If you need to restore FROM an EC2 instance (e.g. because your RDS instance does not have a public IP), then install and configure the AWS CLI (http://docs.aws.amazon.com/cli/latest/userguide/installing.html).
Once you have the CLI configured, download with the following command:
aws s3 cp s3://<<bucket>>/<<folder>>/<<folder>>/<<key>> dump.gz
NOTE: the above command may need some additional tweaking depending on whether you have multiple AWS profiles configured on the machine, whether the dump is split across multiple files, etc.
From there restore to RDS just like you would a normal Postgres server following the instructions in the AWS link.
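As a concrete sketch of that flow, with placeholder bucket, key, endpoint, user, and database names (assumes the AWS CLI and the PostgreSQL client tools are installed on the machine doing the restore):

# Download the dump from S3 and unpack it
aws s3 cp s3://my-bucket/backups/mydb.dump.gz ./mydb.dump.gz
gunzip mydb.dump.gz

# Restore into RDS: pg_restore for a custom-format dump, or psql -f for a plain SQL dump
pg_restore --host=mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com --port=5432 --username=masteruser --dbname=mydb --no-owner mydb.dump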
Hope that helps!

How to set up scheduled backups in amazon s3?

I have an S3 bucket "foo.backups", and s3cmd is installed on my DigitalOcean droplet. I need to back up the mydb.sqlite3 database and "myfolder".
How can I make scheduled daily backups of this database and folder with the following structure?
s3://foo.backups/
-30jan15/
--mydb.sqlite3
--myfolder/
---...
-31jan15/
--mydb.sqlite3
--myfolder/
---...
-1feb15/
--mydb.sqlite3
--myfolder/
---...
How can I set it up?
Thanks!
As an alternative to s3cmd and aws-cli, you might consider using https://github.com/minio/mc
mc implements the mc mirror command to recursively sync files and directories to multiple destinations in parallel.
Features a cool progress bar and session management for resumable copy/mirror operations.
$ mc mirror
NAME:
mc mirror - Mirror folders recursively from a single source to many destinations.
USAGE:
mc mirror SOURCE TARGET [TARGET...]
EXAMPLES:
1. Mirror a bucket recursively from Minio cloud storage to a bucket on Amazon S3 cloud storage.
$ mc mirror https://play.minio.io:9000/photos/2014 https://s3.amazonaws.com/backup-photos
2. Mirror a local folder recursively to Minio cloud storage and Amazon S3 cloud storage.
$ mc mirror backup/ https://play.minio.io:9000/archive https://s3.amazonaws.com/archive
3. Mirror a bucket from aliased Amazon S3 cloud storage to multiple folders on Windows.
$ mc mirror s3/documents/2014/ C:\backup\2014 C:\shared\volume\backup\2014
4. Mirror a local folder of non english character recursively to Amazon s3 cloud storage and Minio cloud storage.
$ mc mirror 本語/ s3/mylocaldocuments play/backup
5. Mirror a local folder with space characters to Amazon s3 cloud storage
$ mc mirror 'workdir/documents/Aug 2015' s3/miniocloud
Hope this helps.
As an alternative to s3cmd, you might consider using the AWS Command-Line Interface (CLI).
The CLI has an aws s3 sync command that will copy directories and sub-directories to/from Amazon S3. You can also nominate which file types are included/excluded. It will only copy files that are new or have been modified since the previous sync.
See: AWS CLI S3 documentation
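To get the dated folder layout from the question, either tool can be wrapped in a small script and run from cron. A minimal sketch using the AWS CLI, with placeholder paths; the date stamp mirrors the 30jan15 style from the question:

#!/bin/bash
# daily-backup.sh (hypothetical) -- assumes the AWS CLI is configured with access to foo.backups
# For a consistent copy of a live SQLite database, consider `sqlite3 mydb.sqlite3 ".backup /tmp/mydb.sqlite3"` first.
STAMP=$(date +%-d%b%y | tr '[:upper:]' '[:lower:]')   # e.g. 30jan15
aws s3 cp /home/user/mydb.sqlite3 "s3://foo.backups/${STAMP}/mydb.sqlite3"
aws s3 sync /home/user/myfolder "s3://foo.backups/${STAMP}/myfolder/"

A crontab entry such as 0 2 * * * /usr/local/bin/daily-backup.sh would then produce one dated prefix per night.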