restore MariaDB on EC2 (not RDS) from S3 backup - amazon-web-services

I have a snapshot of a MariaDB database and a folder with the corresponding images on S3.
I would like to launch an EC2 instance to run the MariaDB snapshot I have on S3.
Should I launch an EC2 instance, install MariaDB, and then somehow import the data from the snapshot?
I found plenty of literature on how to restore a snapshot into RDS, but it usually refers to an RDS snapshot, not a snapshot stored in S3. I could not find how to launch a database on an EC2 instance from an S3 snapshot.

Assuming the "S3 snapshot" means a SQL dump file, here is the command you need.
The prerequisite is an EC2 instance with the MariaDB software installed.
Also attach an IAM role with S3 read permissions to the instance so it can read files from S3. Then run the command below, which streams the dump from S3 straight into the database:
aws s3 cp s3://mysqldump_bucket_name/mysqldump.sql - | mysql -u db_user -ppassword database_name
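If the dump is large, it can be safer to download it to disk first and then import it, so that a dropped S3 connection does not leave a partially applied dump. A minimal sketch, reusing the example bucket and file names from above:
aws s3 cp s3://mysqldump_bucket_name/mysqldump.sql /tmp/mysqldump.sql
mysql -u db_user -p database_name < /tmp/mysqldump.sql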

Related

Question on using the AWS CLI to deploy code to an EC2 instance

I am looking at using Jenkins to deploy a war file to an EC2 instance. I have set up something similar before, with an EC2 instance, an S3 bucket, and a CodeDeploy application. The way that worked was:
1) Zip up the war/jar and upload it to an S3 bucket.
2) Use the AWS createDeployment step to deploy the zip file from the S3 bucket to the EC2 instance. This also involves creating an appspec.yml and scripts to set up the environment.
But I have been told there is another way that does not need CodeDeploy.
I have created an EC2 instance and set up a Docker container inside it with all the environment settings.
What I would like to do is load my zip file onto the EC2 instance directly, so that I don't need an AWS CodeDeploy application.
Is this correct? Is there an AWS CLI command to simply load a zip file onto the EC2 instance?
Thank you for any help.
You can copy from an S3 bucket.
To copy files from an S3 bucket to an EC2 instance:
Create an IAM role with S3 read access (or broader access if you also need to upload to S3)
Attach the IAM role to the EC2 instance
Install the AWS CLI on the EC2 instance
Run the aws s3 cp command to copy the files from S3 to EC2
aws s3 cp s3://<S3BucketName>/<ObjectKey> <Fully Qualified Local filename/Directory>
Here the source is the S3 object URL and the destination is a local file name or directory name.
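For the first two steps, a minimal sketch using the AWS CLI; it assumes a role named MyEC2S3Role and a matching instance profile MyEC2S3Profile already exist (those names and the instance ID are only examples), and attaches the AWS-managed read-only S3 policy:
aws iam attach-role-policy --role-name MyEC2S3Role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyEC2S3Profile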

Slow download / upload speed from AWS S3 to AWS EC2 in same region

I'm using the AWS CLI to download a big file from an S3 bucket (a Heroku DB backup) to my EC2 instance, and to upload a big file (about 110 GB) from my EC2 instance to my own S3 bucket.
The problem occurs both when downloading (from a bucket I don't own) and when uploading to my bucket (they are all in the same region, us-east-1). The download/upload speed starts at about 60 MB/s but decreases to 7-8 MB/s after the first 15 GB, even with transfer acceleration enabled.
So is this a problem with my AWS CLI config or with my EC2 instance (I'm testing with a t2.micro)? Thanks.
Here's a list of items you can try:
Use enhanced networking on the EC2 instance.
Use parallel workloads for the data transfer.
Customize the transfer configuration of the AWS Command Line Interface (AWS CLI); see the sketch after this list.
Use an Amazon Virtual Private Cloud (Amazon VPC) endpoint for Amazon S3.
Upgrade your EC2 instance type (a t2.micro is a burstable instance, and its throughput can drop to a low baseline once its credits are used up, which would match the slowdown you describe).
Use chunked transfers.
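For the CLI configuration item above, a minimal sketch of the S3 transfer settings you can tune; the values are only examples to experiment with:
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 64MB
aws configure set default.s3.multipart_threshold 64MB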
Read more here: https://aws.amazon.com/premiumsupport/knowledge-center/s3-transfer-data-bucket-instance/

Can I migrate an existing Aurora DB cluster to Aurora Serverless?

AWS Aurora FAQ states:
I have a snapshot of an existing Aurora (Postgres) provisioned cluster. The snapshot was originally taken in us-west-1, but I copied it to us-west-2 (not sure if this matters). When I attempt to restore this snapshot to an Aurora serverless setup, I only see the option to create a provisioned cluster.
What am I doing wrong?
You can do it via snapshot -> restore, but only if your source database snapshot has specific characteristics. To see if your version is compatible, use:
$ aws rds describe-db-engine-versions |
jq -r '.DBEngineVersions[] |
select(.SupportedEngineModes[]?=="serverless") |
"\(.Engine): \(.EngineVersion)"'
This yields (at the time of writing):
aurora-mysql: 5.7.mysql_aurora.2.07.1
aurora-postgresql: 10.14
aurora: 5.6.10a
The versions that can be swapped back and forth between serverless and provisioned via snapshots change over time. If your source database snapshot is not directly compatible, you can try a mysqldump/restore instead.
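If the snapshot is compatible, a minimal sketch of the restore itself; it assumes an Aurora PostgreSQL snapshot, and the cluster identifier, snapshot identifier, and version below are only examples taken from the list above:
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier my-serverless-cluster \
    --snapshot-identifier my-copied-snapshot \
    --engine aurora-postgresql \
    --engine-version 10.14 \
    --engine-mode serverless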

Not able to get data from Amazon S3 to EC2 for Training

I'm new to cloud infrastructure for deep learning, trying AWS for the first time, and I don't know how to access my data from the launched EC2 instance.
My data is stored in an S3 bucket, but I can't find a way to bring it together and start training.
Log in to that EC2 instance via SSH.
Install the AWS CLI if it's not already there.
Configure credentials, or add permission (an IAM role) for the EC2 instance to use the S3 bucket;
otherwise add an AWS access key and secret key with aws configure.
Get files from S3 to your local file system:
aws s3 cp s3://mybucket/test.txt test2.txt
Get files from local to S3:
aws s3 cp test.txt s3://mybucket/test2.txt
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#examples
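Since training data is usually a whole directory of files rather than a single object, syncing the prefix may be more convenient; the bucket and prefix names are only examples:
aws s3 sync s3://mybucket/training-data/ ./training-data/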

How to transfer data from Amazon S3 to Amazon EC2

I am using an EC2 instance and I have enabled access logging on my Elastic Load Balancer. The logs are stored in Amazon S3, and I want that data to be used as a dataset for Elasticsearch, which runs on my EC2 instance. Is there a way I can transfer the data to my EC2 instance, or access it directly from S3, for use in Elasticsearch?
The AWS Command Line Interface (CLI) has commands that make it easy to copy to/from Amazon S3.
You can run them on your EC2 instance to download data from Amazon S3.
aws s3 cp s3://bucket/path/file .           # copy a single object
aws s3 cp s3://bucket/path . --recursive    # copy everything under a prefix
aws s3 sync s3://bucket/path .              # copy only new or changed objects