What is the curl command (in a .config file) to copy a .sh file from an AWS S3 bucket?
I want to copy the Fluentd (td-agent) installer file from an S3 location and install it on the EC2 instance.
I need to replace the command below
curl -L https://toolbelt.treasuredata.com/sh/install-amazon1-td-agent3.sh
with one that copies the file from an S3 location instead.
I am assuming you mean a .config file that would be deployed with something like Elastic Beanstalk.
Firstly, I would highly recommend that you turn off the bucket's public availability if it holds sensitive data.
Secondly, for this process, you should create a new IAM Policy (or a new Role) with access to the specific file in the Bucket you want to copy. Again, I am assuming Elastic Beanstalk for this answer.
Elastic Beanstalk environments spin up with a default Role named "aws-elasticbeanstalk-ec2-role".
Here is an example Policy that you can create, and then attach to the Role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::your-bucket-name/your-file.txt"
      ]
    }
  ]
}
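If you prefer the CLI to the console for this step, a policy document like the one above can also be attached to the Role as an inline policy. A minimal sketch, assuming the JSON above is saved locally as policy.json (the policy name is arbitrary):
# Attach the policy document to the default Beanstalk instance role
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name AllowS3GetYourFile \
  --policy-document file://policy.json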
The .config file contents should then be as follows:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["your-bucket-name"]
          roleName: "aws-elasticbeanstalk-ec2-role"
files:
  "/path/to/your-file.txt":
    mode: "000644"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3.REGION.amazonaws.com/your-bucket-name/your-file.txt
Replace:
REGION with your Bucket's region,
your-bucket-name with the Bucket's name,
your-file.txt with your file's name.
What this does
.config files are run before initialisation, and therefore allow you to do things before your code is run.
The code here creates a new file on the instance that it's deployed to, and copies the contents from the target in the S3 Bucket.
This file therefore exists before your own code is run, and you can safely use it at runtime.
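To tie this back to the original question: the files block above replaces the curl download, and the pre-installed AWS CLI gives you an equivalent one-liner if you'd rather fetch and run the installer in a commands step. A minimal sketch, assuming the instance role has s3:GetObject on the bucket and that the installer was uploaded under the key shown (both names are placeholders):
# Copy the installer from S3 instead of curl-ing it, then run it
aws s3 cp s3://your-bucket-name/install-amazon1-td-agent3.sh /tmp/install-td-agent.sh
chmod +x /tmp/install-td-agent.sh
sudo /tmp/install-td-agent.sh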
Related
We are asked to upload a file to a client's S3 bucket; however, we do not have an AWS account (nor do we plan on getting one). What is the easiest way for the client to grant us access to their S3 bucket?
My recommendation would be for your client to create an IAM user for you that is used for the upload. Then, you will need to install the AWS CLI. On your client's side there will be a user whose only permission is to write to their bucket. This can be done pretty simply and will look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::the-bucket-name/*",
        "arn:aws:s3:::the-bucket-name"
      ]
    }
  ]
}
I have not thoroughly tested the above permissions!
Then, on your side, after you install the AWS cli you need to have two files. They both live in the home directory of the user that runs your script. The first is $HOME/.aws/config. This has something like:
[default]
output=json
region=us-west-2
You will need to ask them what AWS region the bucket is in. Next is $HOME/.aws/credentials. This will contain something like:
[default]
aws_access_key_id=the-access-key
aws_secret_access_key=the-secret-key-they-give-you
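As an aside, you don't have to write these two files by hand; the CLI's built-in configure command creates both interactively:
# Prompts for access key, secret key, default region, and output format,
# then writes $HOME/.aws/config and $HOME/.aws/credentials
aws configure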
They must give you the region, the access key, the secret key, and the bucket name. With all of this you can now run something like:
aws s3 cp local-file-name.ext s3://the-client-bucket/destination-file-name.ext
This will transfer the local file local-file-name.ext to the bucket the-client-bucket under the name destination-file-name.ext. They may have a different path in the bucket for you to use.
To recap:
Client creates an IAM user that has very limited permission. Only API permission is needed, not console.
You install the AWS CLI
Client gives you the access key and secret key.
You configure the machine that does the transfers with the credentials
You can now push files to the bucket.
You do not need an AWS account to do this.
I deployed my Django project using AWS Elastic Beanstalk and S3,
and I tried to upload the profile avatar but it shows Server Error (500).
My Sentry log shows me,
"An error occurred (IllegalLocationConstraintException) when calling the PutObject operation: The eu-south-1 location constraint is incompatible for the region specific endpoint this request was sent to."
I think this error appeared
because I put my bucket in eu-south-1 but I am trying to access it and create a new object from Seoul, Korea.
Also, the AWS document said IllegalLocationConstraintException indicates that you are trying to access a bucket from a different Region than where the bucket exists. To avoid this error, use the --region option. For example: aws s3 cp awsexample.txt s3://testbucket/ --region ap-east-1.
(https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html)
but this solution might only apply when uploading a file from the AWS CLI...
I tried to change my bucket policy by adding this, but it doesn't work.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::{BucketName}/*"
    }
  ]
}
I don't know what I should do, or why access from other regions is not allowed.
How do I allow creating, updating, and removing objects in my bucket from anywhere in the world?
This is my first deployment, please help me🥲
Is your Django Elastic Beanstalk instance in a different region from the S3 bucket? If so, you need to set the AWS_S3_REGION_NAME setting as documented here.
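If you're unsure which region a bucket actually lives in, you can check it from the CLI (the bucket name is a placeholder):
# Prints the bucket's LocationConstraint; a null value means us-east-1
aws s3api get-bucket-location --bucket your-bucket-name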
Is it possible to use the S3 CLI to change the ACL of existing files, without using sync? I have about 1 TB of data in my bucket, and I'd like to change its ACL without syncing it to my computer.
There are two ways to make Amazon S3 content 'public':
Change the Access Control List (ACL) on an individual object
Create a Bucket Policy on a bucket or path within a bucket
It sounds like you want to make all objects within a given directory public, so you should use an Amazon S3 Bucket Policy, such as this one from Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/directory/*"]
    }
  ]
}
You can add this policy via the AWS CLI, but it's much easier to do it in the Amazon S3 management console (Permissions tab).
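For completeness, here is roughly what both routes look like from the CLI. This is a sketch: the bucket name and prefix are placeholders, policy.json is assumed to hold the policy above, and the loop will mishandle keys containing spaces:
# Option 1: apply the bucket policy from a local JSON file
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
# Option 2: flip the ACL on each existing object in place, without
# downloading or syncing anything
aws s3 ls s3://my-bucket/directory/ --recursive | awk '{print $4}' |
while read -r key; do
  aws s3api put-object-acl --bucket my-bucket --key "$key" --acl public-read
done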
I wish to achieve the following
Create S3 bucket that contains EMR bootstrap script and config file
Apply policy to this bucket so that only EMR default roles can access it along with specific admin users
EMR bootstrap action runs when cluster starts that accesses S3 bucket to retrieve script and config file and execute on EMR nodes
Here is the policy I have applied to the S3 bucket. I am using a NotPrincipal statement so it will deny access to everyone except the listed ARNs.
{
  "Id": "policy1",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxxxxx:user/user1@mydomain.com",
          "arn:aws:iam::xxxxxxxxxxxx:user/user2@mydomain.com",
          "arn:aws:iam::xxxxxxxxxxxx:root",
          "arn:aws:iam::xxxxxxxxxxxx:role/EMR_DefaultRole",
          "arn:aws:iam::xxxxxxxxxxxx:role/EMR_EC2_DefaultRole"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-restricted-access",
        "arn:aws:s3:::bucket-restricted-access/*"
      ]
    }
  ]
}
I then am trying to create an EMR cluster via the C# AWS SDK that includes a bootstrap action to run a script from the following location
s3://bucket-restricted-access/config/runscript.sh
However, as soon as the cluster starts I get an error
Terminated with errors - Access denied trying to read bootstrap action
file 's3://bucket-restricted-access/config/runscript.sh'
Is the EMR cluster using the assumed permissions from the EMR_EC2_DefaultRole role to try and retrieve the bootstrap action?
If not, is there another user/role that I need to add to the S3 bucket policy to fix the permissions issue?
The EMR cluster is launched with the security groups ElasticMapReduce-master and ElasticMapReduce-slave.
The access key and secret key that you use to create the EMR cluster should have permission to access the S3 bucket that holds your bootstrap script.
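If you want to narrow down which principal is being denied, IAM can simulate the fetch for the instance role. A sketch using values from the question; note that simulate-principal-policy evaluates the role's identity policies but not the bucket policy itself, so treat the result as a hint rather than proof:
# Does the EMR instance role's own policy allow reading the script?
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::xxxxxxxxxxxx:role/EMR_EC2_DefaultRole \
  --action-names s3:GetObject \
  --resource-arns arn:aws:s3:::bucket-restricted-access/config/runscript.sh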
I have files stored on S3 and wrote an .ebextensions config to automatically copy them to new instances. I'm receiving this error in the Elastic Beanstalk console:
[Instance: INSTANCEID Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPreBuild] command failed with error code 1: Error occurred during build: Failed to retrieve https://s3-us-west-1.amazonaws.com/MyBucket/MyFolder/_MyFile.txt: HTTP Error 403 : AccessDenied
My .ebextension config file has this section:
files:
  "/target/file/path":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
In attempting to make this file copying work, I've also relaxed permissions by giving the Elastic Beanstalk IAM role the standard read-only access policy to all of S3. Its policy is this:
{
  "Effect": "Allow",
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": "*"
}
Yet the prebuild copying step still fails. Did I give the source url in the correct format? Is there another security entity/policy involved? Help please :)
The documentation is very sketchy on the subject (probably an ideal candidate for StackExchange Docs!).
To do this correctly with .ebextensions, you need to allow the Beanstalk instance IAM role in the bucket policy, set up an AWS::CloudFormation::Authentication auth config, and attach that config to the remote sources. This is kind of a hybrid of all the other answers, but all of them failed in one way or another for me.
Assuming your IAM instance role is aws-elasticbeanstalk-ec2-role:
Set your AWS bucket to allow the Beanstalk IAM role. Edit the "bucket policy":
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<beanstalk_iam_role_arn>"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
where:
beanstalk_iam_role_arn = the fully qualified instance IAM role ARN. See the "IAM role" associated with a running instance if available, or see the environment configuration. Example: arn:aws:iam::12345689:role/aws-elasticbeanstalk-ec2-role
bucket_name = your bucket name
In your .ebextensions/myconfig.config, add an S3 authentication block that uses your IAM instance role:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["bucket_name"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:asg:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
Set bucket_name appropriately
Define a remote file and attach the S3 Authentication block:
"/etc/myfile.txt" :
mode: "000400"
owner: root
group: root
authentication: "S3Auth" # Matches to auth block above.
source: https://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt
Set your source URL appropriately
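If the deploy still fails after this, it helps to confirm on the instance whether the file actually arrived. A sketch, assuming the EB CLI is installed and an Amazon Linux platform (the log path differs on newer platforms):
eb ssh                                      # connect to the environment's instance
# then, on the instance:
cat /etc/myfile.txt                         # did the file arrive?
sudo tail -n 100 /var/log/eb-activity.log   # look for the EbExtensionPreBuild step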
Similar to chaseadamsio's answer, you can configure the role given to the EC2 instance with a policy to access S3 resources, then use the pre-installed AWS CLI utilities to move files around.
The way I approached this is to create a role dedicated to the given EB application, then attach a policy similar to:
"Statement": [
{
"Sid": "<sid>",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<your_bucket_path>/*"
]
}
]
This gives your instance access; then, to get the files, add a commands block to your config such as:
commands:
  01-get-file:
    command: aws s3 cp s3://<your_bucket_path>/your-file.txt /home/ec2-user
  02-execute-actions:
    [unpack, run scripts, etc..]
Obviously you can use other AWS CLI utilities as needed. I found this solved a lot of problems I was having with S3 access and makes deployment a lot easier.
I found a solution to overcome this error. It turns out adding a Resources section to the .ebextensions config file makes it work. The entire file becomes:
files:
  "/target/file/path":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: _MyBucket
At this point, I don't know enough to grok why it has to be this way. Hopefully it can help someone who's lost to move forward and eventually gain a better understanding. I based my answer on this link: https://forums.aws.amazon.com/message.jspa?messageID=541634
An alternative to setting the .ebextensions config would be to set a policy on the aws-elasticbeanstalk-ec2-role within the IAM Manager (or create a new role specifically for your Elastic Beanstalk environments to sandbox your autoscaled EC2 instances).
To do so, go to the IAM manager within the web console and click on "Roles" on the left side. You should see your instance's role name in the list of roles; clicking on it will take you to the administration page for that particular role. Attach a new role policy to the role under "Permissions" with a policy document matching what you want your EC2 instances to have permission to do (in this case, you'd give it a policy to access an S3 bucket called _MyBucket), and you should no longer need the Resources section in your .ebextensions config.
If you have the machine's IAM role configured to get access to the file, you can do the following in .ebextensions:
commands:
  01a_copy_file:
    command: aws s3 cp s3://bucket/path/file /destination/
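The same approach extends to whole prefixes with the standard --recursive flag (bucket and paths are placeholders):
# Copy everything under a prefix instead of a single file
aws s3 cp s3://bucket/path/ /destination/ --recursive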