Region conflict in AWS S3

I have a bucket in the EU (London) region in S3. I am trying to upload a tar file through the command line, and at the end it fails with the following error:
A client error (PermanentRedirect) occurred when calling the PutObject operation: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I have configured everything correctly using aws configure, with the right access key and region. Can someone shed light on this issue?
I have created a script that dumps the database, creates a tar file, and uploads it:
HOST=abc.com
DBNAME=db
BUCKET=s3.eu-west-2.amazonaws.com/<bucketname>/
USER=<user>
stamp=`date +"%Y-%m-%d"`
filename="Mgdb_$stamp.tar.gz"
TIME=`/bin/date +%Y-%m-%d-%T`
DEST=/home/$USER/tmp
TAR=$DEST/../$TIME.tar.gz
/bin/mkdir -p $DEST
echo "Backing up $HOST/$DBNAME to s3://$BUCKET/ on $TIME";
/usr/bin/mongodump --host $HOST --port 1234 -u "user" -p "pass" --authenticationDatabase "admin" -o $DEST
/bin/tar czvf $TAR -C $DEST .
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename
/bin/rm -f $TAR
/bin/rm -rf $DEST

Just append the region to the AWS Command-Line Interface (CLI) command:
aws s3 cp file.txt s3://my-bucket/file.txt --region eu-west-2

The format for the S3Uri in your script is incorrect. It should be s3://<bucketname>/<prefix>/<filename>. Then add the --region option to specify the bucket's region.
BUCKET=<bucketname>
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename --region eu-west-2
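Applied to the script above, the relevant lines would look roughly like this (a minimal sketch; my-backup-bucket is a placeholder for your actual bucket name):
# Use only the bucket name, not the regional endpoint hostname
BUCKET=my-backup-bucket
# Tell the CLI which region the bucket lives in
/usr/bin/aws s3 cp $TAR s3://$BUCKET/$stamp/$filename --region eu-west-2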

Related

Invalid bucket name on CircleCI (Deploy to AWS S3)

Situation
I got a successful build on CircleCI and tried deploying to AWS S3, but a problem occurs.
Goal
Build on CircleCI → AWS S3 → AWS CloudFront
This is my repo.
Error
#!/bin/bash -eo pipefail
if ["${develop}"=="master"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_PRODUCTION} --delete
elif ["${develop}" == "staging"]
then
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_STAGING} --delete
else
aws --region ${AWS_REGION} s3 sync ~/repo/build s3://{AWS_BUCKET_DEV} --delete
fi
/bin/bash: [==master]: command not found
/bin/bash: line 3: [: missing `]'
fatal error: Parameter validation failed:
Invalid bucket name "{AWS_BUCKET_DEV}": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$"
Variable name
AWS_BUCKET_PRODUCTION
AWS_BUCKET_STAGING
AWS_DEV_BUCKET
I used this site to check my bucket names:
https://regex101.com/r/2IFv8B/1
You need to put a $ in front of {AWS_BUCKET_DEV}, {AWS_BUCKET_STAGING}, and {AWS_BUCKET_PRODUCTION} respectively. Otherwise the shell doesn’t replace the variable names with their values.
... s3://${AWS_BUCKET_DEV} ...
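A minimal sketch of the corrected step: note the $ in front of each bucket variable and, judging from the [==master]: command not found error in the log, the spaces needed inside [ ... ] and around ==:
#!/bin/bash -eo pipefail
if [ "${develop}" == "master" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_PRODUCTION} --delete
elif [ "${develop}" == "staging" ]
then
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_STAGING} --delete
else
  aws --region ${AWS_REGION} s3 sync ~/repo/build s3://${AWS_BUCKET_DEV} --delete
fi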

Can you multi stage build a docker image with both aws/gsutil cli?

I am wondering if there is a straightforward way in Docker to build an image that has both the aws CLI and the gsutil CLI installed on it. Unfortunately, an S3 bucket name containing periods triggers a "Host ... returned an invalid certificate" error (https://github.com/GoogleCloudPlatform/gsutil/issues/267), and I cannot change the S3 bucket name, which means I cannot do the following:
gsutil -m cp -r "s3://path.with.periods/path/files" "gs://bucket_path/path"
so instead I'll have to do something like
aws s3 cp --recursive --quiet "s3://path.with.periods/path/files" ./
gsutil -m cp -r "./" "gs://bucket_path/path"
but I was wondering if there is a straightforward Dockerfile that could run these commands?

How to deploy files from s3 to ec2 instance based on S3 event

I am working on a pipeline. I have a scenario where I push some artifacts into S3. I have written a shell script which downloads the folder and copies each file to its desired location on a WildFly server (EC2 instance).
#!/bin/bash
mkdir /home/ec2-user/test-temp
cd /home/ec2-user/test-temp
aws s3 cp s3://deploy-artifacts/test-APP test-APP --recursive --region us-east-1
aws s3 cp s3://deploy-artifacts/test-COMMON test-COMMON --recursive --region us-east-1
cd /home/ec2-user/
sudo mkdir -p /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-COMMON/standalone/configuration/standalone.xml /opt/wildfly/standalone/configuration
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/microsoft/* /opt/wildfly/modules/system/layers/base/com/microsoft/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/com/mysql /opt/wildfly/modules/system/layers/base/com/
sudo cp -rf ./test-temp/test-COMMON/modules/system/layers/base/psg/common/* /opt/wildfly/modules/system/layers/base/psg/common
sudo cp -rf ./test-temp/test-APP/standalone/deployments/HS.war /opt/wildfly/standalone/deployments
sudo cp -rf ./test-temp/test-APP/bin/resource /opt/wildfly/bin/resource
sudo cp -rf ./test-temp/test-APP/modules/system/layers/base/psg/* /opt/wildfly/modules/system/layers/base/psg/
sudo cp -rf ./test-temp/test-APP/standalone/deployments/* /opt/wildfly/standalone/deployments/
sudo chown -R wildfly:wildfly /opt/wildfly/
sudo service wildfly start
But every time I push new artifacts into S3, I have to go to the server and run this script manually. Is there a way to automate it? I was reading about Lambda, but once Lambda knows about the change in S3, where am I going to define my shell script to run?
Any guidance will be helpful.
To trigger a Lambda function when a file is uploaded to the S3 bucket, you have to set up an event notification on the S3 bucket.
Steps for setting up the S3 event notification:
1 - your Lambda and S3 bucket should be in the same region
2 - go to the Properties tab of the S3 bucket
3 - open up Events and provide values for event types like Put or Copy
4 - specify the Lambda ARN in the Send to option
Now create a Lambda function and add the S3 bucket as a trigger. Just make sure your Lambda IAM policy is properly set.
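If you prefer to wire this up from the CLI rather than the console, a rough sketch could look like the following (the function name, statement id, and account ID are placeholders; the bucket matches the deploy-artifacts bucket from the script above):
# Allow S3 to invoke the function (placeholder names and ARNs)
aws lambda add-permission \
  --function-name deploy-artifacts-handler \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::deploy-artifacts

# Send ObjectCreated events from the bucket to the function
aws s3api put-bucket-notification-configuration \
  --bucket deploy-artifacts \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:deploy-artifacts-handler",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'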

Delete files from folder in S3 bucket

I have an AWS S3 bucket test-bucket with a data folder. The data folder will have multiple files.
I am able to delete the files in the S3 bucket.
But what I want is to delete the files in the data folder without deleting the folder.
I tried the following:
aws s3 rm s3://test-bucket/data/*
I also tried the --recursive option, but that does not work.
Is there a way I can delete the files in the folder using AWS CLI?
The following AWS CLI command worked:
aws s3 rm s3://test-bucket --recursive --exclude="*" --include="data/*.*"
You can do it using the AWS CLI (https://aws.amazon.com/cli/) and some unix commands.
This AWS CLI command should work:
aws s3 rm s3://<your_bucket_name> --exclude "*" --include "<your_regex>"
If you want to include sub-folders you should add the flag --recursive.
Or with unix commands:
aws s3 ls s3://<your_bucket_name>/ | awk '{print $4}' | xargs -I% <your_os_shell> -c 'aws s3 rm s3://<your_bucket_name>/% $1'
Explanation:
list all files in the bucket --pipe-->
get the 4th parameter (it's the file name) --pipe-->
run the delete command with the aws cli
aws s3 rm s3://bucket/folder1/folder2/ --recursive --dryrun
From what I see when I try it, adding the slash at the end means "delete everything below folder2", not including folder2 itself.
We can remove all files, including sub-folders, from an AWS S3 bucket in Node.js by using the function below.
The same command can be used with the AWS CLI by configuring AWS on the command line.
function removeAllFilesFromBucket(){
  const S3_REGION = "eu-west-1";
  const S3_BUCKET_NAME = "sample-staging";
  let filePathBucket = S3_BUCKET_NAME + '/assets/videos';
  let awsS3ShellCommand = 'aws s3 rm s3://' + filePathBucket + ' --region ' + S3_REGION + ' --recursive';

  const { exec } = require('child_process');
  exec(awsS3ShellCommand, (err, stdout, stderr) => {
    if (err) {
      console.log('err', err);
      return;
    } else {
      console.log('Bucket files and sub-folders deleted successfully !!!');
      console.log(`stdout: ${stdout}`);
      console.log(`stderr: ${stderr}`);
    }
  });
}
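For reference, the equivalent AWS CLI command that the function above shells out to (with its hard-coded sample values) is:
aws s3 rm s3://sample-staging/assets/videos --region eu-west-1 --recursive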

How to delete multiple files in S3 bucket with AWS CLI

Suppose I have an S3 bucket named x.y.z
In this bucket, I have hundreds of files. But I only want to delete 2 files named purple.gif and worksheet.xlsx
Can I do this from the AWS command line tool with a single call to rm?
This did not work:
$ aws s3 rm s3://x.y.z/worksheet.xlsx s3://x.y.z/purple.gif
Unknown options: s3://x.y.z/purple.gif
From the manual, it doesn't seem like you can delete a list of files explicitly by name. Does anyone know a way to do it? I prefer not using the --recursive flag.
You can do this by providing an --exclude or --include argument multiple times. But, you'll have to use --recursive for this to work.
When there are multiple filters, remember that the order of the filter parameters is important. The rule is the filters that appear later in the command take precedence over filters that appear earlier in the command.
aws s3 rm s3://x.y.z/ --recursive --exclude "*" --include "purple.gif" --include "worksheet.xlsx"
Here, all files will be excluded from the command except for purple.gif and worksheet.xlsx.
If you're unsure, always try a --dryrun first and inspect which files will be deleted.
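For example, the dry-run version of the command above only prints what would be deleted without removing anything:
aws s3 rm s3://x.y.z/ --recursive --dryrun --exclude "*" --include "purple.gif" --include "worksheet.xlsx"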
Source: Use of Exclude and Include Filters
s3 rm cannot delete multiple files, but you can use s3api delete-objects to achieve what you want here.
Example
aws s3api delete-objects --bucket x.y.z --delete '{"Objects":[{"Key":"worksheet.xlsx"},{"Key":"purple.gif"}]}'
Apparently aws s3 rm works only on individual files/objects.
Below is a bash command that constructs individual delete commands and then removes the objects one by one. It works with some success (it might be a bit slow, but it works):
aws s3 ls s3://bucketname/foldername/ |
awk '{print "aws s3 rm s3://bucketname/foldername/" $4}' |
bash
The first two lines are meant to construct the "rm" commands and the 3rd line (bash) will execute them.
Note that you might face issues if your object names contain spaces or unusual characters, because the output of "aws s3 ls" is parsed with awk '{print $4}', which only captures the key up to the first space.
This command deletes all files in a bucket:
aws s3 rm s3://bucketname --recursive
If you are using the AWS CLI, you can filter the ls results with a grep regex and delete them. For example:
aws s3 ls s3://BUCKET | awk '{print $4}' | grep -E -i '^2015-([0-9][0-9])\-([0-9][0-9])\-([0-9][0-9])\-([0-9][0-9])\-([0-9][0-9])\-([0-9a-zA-Z]*)' | xargs -I% bash -c 'aws s3 rm s3://BUCKET/%'
This is slow, but it works.
This solution works when you want to specify a wildcard for the object name.
aws s3 ls dmap-live-dwh-files/backup/mongodb/oms_api/hourly/ | grep order_2019_08_09_* | awk '{print "aws s3 rm s3://dmap-live-dwh-files/backup/mongodb/oms_api/hourly/" $4}' | bash
I found this one useful on the command line. I had more than 4 million files and it took almost a week to empty the bucket. This comes in handy, as the AWS console is not descriptive with its logs.
Note: You need the jq tool installed.
aws s3api list-object-versions --bucket YOURBUCKETNAMEHERE-processed \
--output json --query 'Versions[].[Key, VersionId]' \
| jq -r '.[] | "--key '\''" + .[0] + "'\'' --version-id " + .[1]' \
| xargs -L1 aws s3api delete-object --bucket YOURBUCKETNAMEHERE
You can delete multiple files using aws s3 rm. If you want to delete all files in a specific folder, just use
aws s3 rm --recursive --region <AWS_REGION> s3://<AWS_BUCKET>/<FOLDER_PATH>/
First, test it with the --dryrun option!
Quick way to delete a very large folder in AWS:
AWS_PROFILE=<AWS_PROFILE> AWS_BUCKET=<AWS_BUCKET> AWS_FOLDER=<AWS_FOLDER>; aws --profile $AWS_PROFILE s3 ls "s3://${AWS_BUCKET}/${AWS_FOLDER}/" | awk '{print $4}' | xargs -P8 -n1000 bash -c 'aws --profile '${AWS_PROFILE}' s3api delete-objects --bucket '${AWS_BUCKET}' --delete "Objects=[$(printf "{Key='${AWS_FOLDER}'/%s}," "$@")],Quiet=true" >/dev/null 2>&1'
PS: You might have to run this 2-3 times, because sometimes some deletions fail...