Is it possible to copy between AWS accounts using AWS CLI? - amazon-web-services

Is it possible using AWS CLI to copy the contents of S3 buckets between AWS accounts? I know it's possible to copy/sync between buckets in the same account, but I need to get the contents of an old AWS account into a new one. I have AWS CLI configured with two profiles, but I don't see how I can use both profiles in a single copy/sync command.

Very simple. Let's say:
Old AWS Account = old@aws.com
New AWS Account = new@aws.com
Log into the AWS console as old@aws.com.
Go to the bucket of your choice and apply the bucket policy below:
{
"Statement": [
{
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name",
"Principal": {
"AWS": [
"account-id-of-new#aws.com-account"
]
}
},
{
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::bucket_name/*",
"Principal": {
"AWS": [
"account-id-of-new#aws.com-account"
]
}
}
]
}
I would guess that bucket_name and account-id-of-new@aws.com-account are self-explanatory in the above policy.
Now, make sure you are running the AWS CLI with the credentials of new@aws.com.
Run the command below and the copy will work like a charm:
aws s3 cp s3://bucket_name/some_folder/some_file.txt s3://bucket_in_new@aws.com_account/fromold_account.txt
Of course, make sure that new@aws.com has write privileges on its own bucket bucket_in_new@aws.com_account, which is used in the above command to store what is copied from the old@aws.com bucket.
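To copy the entire bucket rather than a single file, a recursive copy or sync along the same lines should also work (same placeholder bucket names as above, still running as new@aws.com):
aws s3 sync s3://bucket_name s3://bucket_in_new@aws.com_account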
Hope this helps.

OK, I have this working now! Thanks for your answers. In the end I used a combination of @slayedbylucifer's and @Sony Kadavan's answers. What worked for me was a new bucket policy and a new user policy.
I added the following bucket policy (Account A):
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::myfoldername",
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/myusername"
]
}
},
{
"Action": [
"s3:*"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::myfoldername",
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/myusername"
]
}
}
]
}
And the following user policy (Account B):
{
"Version": "2012-10-17",
"Statement":{
"Effect":"Allow",
"Action":"s3:*",
"Resource":"arn:aws:s3:::myfoldername/*"
}
}
And I used the following AWS CLI command (the region option was required because the buckets were in different regions):
aws --region us-east-1 s3 sync s3://myfoldername s3://myfoldername-accountb

Yes, you can.
You first need to create an IAM user in the second account and delegate permissions to it (read/write/list on the specific S3 bucket). Once you do this, provide this IAM user's credentials to your CLI and it will work.
How to delegate permissions:
Delegating Cross-Account Permissions to IAM Users - AWS Identity and Access Management : http://docs.aws.amazon.com/IAM/latest/UserGuide/DelegatingAccess.html#example-delegate-xaccount-roles
Sample S3 policy for delegation:
{
"Version": "2012-10-17",
"Statement" : {
"Effect":"Allow",
"Sid":"AccountBAccess1",
"Principal" : {
"AWS":"111122223333"
},
"Action":"s3:*",
"Resource":"arn:aws:s3:::mybucket/*"
}
}
When you do this on production setups, be more restrictive with the permissions. If your need is only to copy from one bucket to another, then on the source side you only need to grant List and Get (not Put).
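As a rough sketch of the CLI side (the profile and bucket names below are placeholders, not from the original setup), you would store the delegated IAM user's keys in a profile and run the copy with it:
# one-time: enter the delegated IAM user's access key, secret key and default region
aws configure --profile delegated-user
# copy from the old account's bucket into the new account's bucket
aws s3 sync s3://source-bucket-old-account s3://destination-bucket-new-account --profile delegated-user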

In my case the command below worked; hopefully it will work for you as well. I have two different AWS accounts in different regions, and I wanted to copy my old bucket's contents into a new bucket. I have the AWS CLI configured with two profiles.
I used the following AWS CLI command:
aws s3 cp --profile <profile1> s3://source_bucket_path/ --profile <profile2> s3://destination_bucket_path/ --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers --recursive

Related

Uploading to AWS S3 bucket from a profile in a different environment

I have access to one of two AWS environments, and I've created a protected S3 bucket in it to upload files to from an account in the environment that I do not have access to. That environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
...
},
{
"Sid": "Allow access to bucket from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:GetBucketLocation",
"s3:ListBucket*"
],
"Resource": "arn:aws:s3:::content"
},
{
"Sid": "Allow access to bucket items from profile in env1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111122223333:user/ci"
},
"Action": [
"s3:Get*",
"s3:PutObject",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
From inside a container that's configured for env1 and user/ci I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
Am I even using the correct aws command to upload the data to the bucket?
Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
To test your policy, I did the following:
Created an IAM User with no policies
Created an Amazon S3 bucket
Attached your Bucket Policy to the bucket, and updated the ARN and bucket name
Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to be giving all necessary permissions. It is possible that you have something else that is Denying access on the bucket.
The solution was to grant the following permissions
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::content",
"arn:aws:s3:::content/*"
]
}
]
}
to user/ci in env1.

AWS: Not able to give S3 access via an S3 bucket policy

I am the root user of my account, and I created a new user and am trying to give it access to S3 via an S3 bucket policy.
Here are my policy details:
{  "Id": "Policy1542998309644",  "Version": "2012-10-17",  "Statement": [    {      "Sid": "Stmt1542998308012",      "Action": [        "s3:ListBucket"      ],      "Effect": "Allow",      "Resource": "arn:aws:s3:::aws-bucket-demo-1",      "Principal": {        "AWS": [          "arn:aws:iam::213171387512:user/Dave"        ]      }    }  ]}
In IAM I have not given any access to the new user. I want to provide him access to S3 via an S3 bucket policy. Actually, I would like to achieve this: https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-access-certain-bucket/ but not from IAM; I want to use only an S3 bucket policy.
Based on the following AWS blog post (the blog shows IAM policy, but it can be adapted to a bucket policy):
How can I grant a user Amazon S3 console access to only a certain bucket?
you can make the following bucket policy:
{
"Id": "Policy1589632516440",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1589632482887",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::aws-bucket-demo-1",
"Principal": {
"AWS": [
"arn:aws:iam::213171387512:user/Dave"
]
}
},
{
"Sid": "Stmt1589632515136",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::aws-bucket-demo-1/*",
"Principal": {
"AWS": [
"arn:aws:iam::213171387512:user/Dave"
]
}
}
]
}
This will require the user to go directly to the bucket URL:
https://s3.console.aws.amazon.com/s3/buckets/aws-bucket-demo-1/
The reason is that the user does not have permissions to list all buckets available. Thus he/she has to go directly to the one you specify.
Obviously, the IAM user needs to have AWS Management Console access enabled when you create him/her in the IAM service. With programmatic access only, IAM users can't use the console, and no bucket policy can change that.
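If you would rather enable console access from the CLI than from the IAM console, something along these lines should work (the password is just a placeholder):
aws iam create-login-profile --user-name Dave --password 'TempPassword123!' --password-reset-required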
You will need to use ListBuckets.
It seems like you want this user to only be able to see your bucket but not access anything in it.

How to sync multiple S3 buckets using multiple AWS accounts?

I am having trouble syncing two S3 buckets that are attached to two separate AWS accounts.
There are two AWS accounts - Account A which is managed by a third party and Account B, which I manage. I am looking to pull files from an S3 bucket in Account A to an S3 bucket in Account B.
Account A provided me the following instructions:
In Account B, create a new IAM user called LogsUser. Attach the following policy to the user:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role"
}
]
}
Configure the AWS CLI by updating the config and credentials files. Specifically, the ~/.aws/config file should look like:
[profile LogsUser]
role_arn = arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role
source_profile = LogsUser
And the ~/.aws/credentials file should look like (note the [LogsUser] section header, which source_profile points to):
[LogsUser]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
From here, I am successfully able to query the log files in Account A's bucket using $ aws s3 ls --profile LogsUser s3://bucket-a.
I have set up bucket-b in Account B, however, I am unable to query any files in bucket-b. For example, $ aws s3 ls --profile LogsUser s3://bucket-b returns An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied.
Is there something additional I can add to the config file or my IAM policy to allow access to bucket-b using --profile LogsUser option? I can access bucket-b using other --profile settings, but am not looking to sync to the local file system and then to another bucket.
The desired result is to run a command like aws s3 sync s3://bucket-a s3://bucket-b --profile UserLogs.
For example, if you want to copy “Account A” S3 bucket objects to an “Account B” S3 bucket, follow the steps below.
Create a policy for the S3 bucket in “Account A” like the policy below. For that, you need the “Account B” account number; to find it, go to Support → Support Center and copy the account number from there.
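If you already have the CLI configured for “Account B”, you can also fetch the account number with a command like:
aws sts get-caller-identity --query Account --output text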
Set up the “Account A” bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DelegateS3Access",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT_B_NUMBER:root"
},
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::ACCOUNT_A_BUCKET_NAME/*",
"arn:aws:s3:::ACCOUNT_A_BUCKET_NAME"
]
}
]
}
Log into “Account B” and create a new IAM user, or attach the policy below to an existing user.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::ACCOUNT_A_BUCKET_NAME",
"arn:aws:s3:::ACCOUNT_A_BUCKET_NAME/*"
]
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::ACCOUNT_B_BUCKET_NAME",
"arn:aws:s3:::ACCOUNT_B_BUCKET_NAME/*"
]
}
]
}
Configure the AWS CLI with the “Account B” IAM user (the one you created with the above user policy), then run:
aws s3 sync s3://ACCOUNT_A_BUCKET_NAME s3://ACCOUNT_B_BUCKET_NAME --source-region ACCOUNT_A_REGION-NAME --region ACCOUNT_B_REGION-NAME
This way we can copy S3 bucket objects across different AWS accounts.
If you have multiple AWS CLI profiles, add --profile <profile-name> at the end of the command.
Your situation is:
You wish to copy from Bucket-A in Account-A
The files need to be copied to Bucket-B in Account-B
Account-A has provided you with the ability to assume LogAccess-role in Account-A, which has access to Bucket-A
When copying files between buckets using the CopyObject() command (which is used by the AWS CLI sync command), it requires:
Read Access on the source bucket (Bucket-A)
Write Access on the destination bucket (Bucket-B)
When you assume LogAccess-role, you receive credentials that have Read Access on Bucket-A. That is great! However, those credentials do not have permission to write to Bucket-B because it is in a separate account.
To overcome this, you should create a Bucket Policy on Bucket-B that grants Write Access to LogAccess-role from Account-A. The Bucket Policy would look something like:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT-A:role/12345-LogAccess-role"
},
"Action": [
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::bucket-a",
"arn:aws:s3:::bucket-a/*"
]
}
]
}
(You might need other permissions. Check any error messages for hints.)
That way, LogAccess-role will be able to read from Bucket-A and write to Bucket-B.
I would suggest you consider using AWS S3 bucket replication:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
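As a rough sketch (assuming versioning is already enabled on both buckets, a replication IAM role exists, and replication.json contains your rule set), the replication configuration can be applied to the source bucket from the CLI:
aws s3api put-bucket-replication --bucket bucket-a --replication-configuration file://replication.json
Note that replication only copies objects written after it is enabled, so it may not fit a one-off migration.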
If you just want to list objects in bucket-b, do this.
First, make sure the LogsUser IAM user has the proper permissions to access the bucket-b S3 bucket in Account B. If not, you can add this policy to the user:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bucket-b/*"
]
}
]
}
If the permissions are attached to the user, and if the access key and secret key stored as [default] in ~/.aws/credentials belong to the LogsUser IAM user, you can simply list objects inside bucket-b with the following command:
aws s3 ls
If you want to run the command aws s3 sync s3://bucket-a s3://bucket-b --profile UserLogs, do this.
Remember, we will be using temporary credentials created by STS after assuming the role with the permanent credentials of LogsUser. That means the role in Account A should have proper access to both buckets to perform the action, and the bucket (bucket-b) in the other account (Account B) should have a bucket policy that allows the role to perform S3 operations.
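As a quick sanity check (assuming the profile is set up as shown in the question), you can confirm which identity the profile actually resolves to before troubleshooting bucket policies:
aws sts get-caller-identity --profile LogsUser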
To give the role permission to access bucket-b, attach the following bucket policy to bucket-b:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNTID:role/12345-LogAccess-role"
},
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bucket-b/*"
]
}
]
}
Also, in Account A, attach a policy like the one below to the role to allow access to the S3 buckets in both accounts:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::bucket-b/*",
"arn:aws:s3:::bucket-a/*"
]
}
]
}

AWS IAM Role in EC2 and access to S3 from JupyterHub

In JupyterHub, installed on an EC2 instance with an IAM role that allows access to a specific S3 bucket, when I try to access a file in that bucket with this code:
s3nRdd = spark.sparkContext.textFile("s3n://bucket/file")
I get this error:
IllegalArgumentException: u'AWS Access Key ID and Secret Access Key
must be specified as the username or password (respectively) of a s3n
URL, or by setting the fs.s3n.awsAccessKeyId or
fs.s3n.awsSecretAccessKey properties (respectively).'
However, when I export an AWS access key ID and secret access key (with the same policy as that role) in the kernel configuration, the read of that file succeeds.
As the best practice is to use IAM roles, why doesn't the EC2 role work in this situation?
--update--
The EC2 IAM role has these 2 policies:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1488892557621",
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::<bucket_name>",
"arn:aws:s3:::<bucket_name>/*"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "Stmt1480684159000",
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": [
"*"
]
}
]
}
Also, I am using Hadoop version 2.4.0, which doesn't support the s3a protocol, and updating is not an option.
S3n doesn't support IAM roles, and 2.4 is a very outdated version anyway. Not as buggy as 2.5 when it comes to s3n, but still less than perfect.
If you want to use IAM roles, you are going to have to switch to S3a, and yes, for you, that does mean upgrading Hadoop. Sorry.
You must create a bucket policy to allow access from particular IAM roles. Since S3 doesn't trust the roles, the API just falls back and asks for an access key.
Just add something like this to your bucket policy, replacing all the custom <> parameters with your own values:
{
"Version": "2012-10-17",
"Id": "EC2IAMaccesss",
"Statement": [{
"Sid": "MyAppIAMRolesAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::<acc_id>:role/<yourIAMroleName>"
]
},
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<yourbucket>/*",
"arn:aws:s3:::<yourbucket>"
]
}
]
}
(updates)
Make sure you attach a proper policy to the EC2 IAM role. IAM roles are very powerful, and no policy is attached to them out of the box. You must assign a policy; e.g., for minimal S3 access, add the AmazonS3ReadOnlyAccess managed policy to the role.
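For example, attaching that managed policy from the CLI would look like this (the role name is a placeholder):
aws iam attach-role-policy --role-name my-ec2-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess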
You may encounter issues with Spark's problematic interaction with IAM roles. Please check the documentation on Spark access through the s3n:// scheme. Otherwise, use s3a://.
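As a rough sketch (assuming a Hadoop build with S3A support and matching hadoop-aws/aws-java-sdk jars on the classpath), pointing Spark at the instance-profile credentials might look like:
# tell S3A to use the EC2 instance profile for credentials
pyspark --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider
# then inside the session: spark.sparkContext.textFile("s3a://bucket/file")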

Grant EC2 instance access to S3 Bucket

I want to grant my ec2 instance access to an s3 bucket.
On this EC2 instance, a container with my application is launched. But I don't get permission on the S3 bucket.
This is my bucket policy
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::714656454815:role/ecsInstanceRole"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "private-ip/32"
}
}
}
]
}
But it doesn't work unless I give everyone permission to access the bucket.
I tried to curl the file in the S3 bucket from inside the EC2 instance, but this doesn't work either.
At least as of now (2019), there is a much easier and cleaner way to do it (the credentials never have to be stored on the instance; instead it can obtain them automatically):
create an IAM Role for your instance and assign it
create a policy to grant access to your s3 bucket
assign the policy to the instance's IAM role
upload/download objects, e.g. via the AWS CLI for S3: aws s3 cp <S3Uri> <LocalPath>
For step 2, an example of a JSON policy to allow read and write access to objects in an S3 bucket is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::bucket-name"]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": ["arn:aws:s3:::bucket-name/*"]
}
]
}
You have to adjust the allowed actions and replace "bucket-name".
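If you prefer the CLI to the console for attaching the role to a running instance (steps 1 and 3 above), something along these lines should work (the instance ID and profile name are placeholders):
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-s3-access-profile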
There is no direct way of granting the "EC2" instance access to the AWS server, but you can try the following.
Create a new user in AWS IAM, and download the credentials file.
This user will represent your EC2 server.
Provide the user with permissions to your S3 Bucket.
Next, place the credentials file in the following location:
EC2 - Windows Instance:
a. Place the credentials file anywhere you wish. (e.g. C:/credentials)
b. Create an environment variable AWS_CREDENTIAL_PROFILES_FILE and put the value as the path where you put your credentials file (e.g. C:/credentials)
EC2 - Linux Instance
a. Follow steps from windows instance
b. Create a folder .aws inside your app-server's root folder (e.g. /usr/share/tomcat6).
c. Create a symlink between your environment variable and your .aws folder:
sudo ln -s $AWS_CREDENTIAL_PROFILES_FILE /usr/share/tomcat6/.aws/credentials
Now that your credentials file is placed, you can use Java code to access the bucket.
NOTE: AWS-SDK libraries are required for this
AWSCredentials credentials = null;
try {
credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
LOG.error("Unable to load credentials " + e);
failureMsg = "Cannot connect to file server.";
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (environment variable : AWS_CREDENTIAL_PROFILES_FILE), and is in valid format.",
e);
}
AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix(prefix));
Where bucketName = [Your Bucket Name]
and prefix = [your folder structure inside your bucket, where your file(s) are contained]
Hope that helps.
Also, if you are not using Java, you can check out AWS-SDKs in other programming languages too.
I figured it out: it only works with the public IP of the EC2 instance.
Try this:
{
"Version": "2012-10-17",
"Id": "Policy1462808223348",
"Statement": [
{
"Sid": "Stmt1462808220978",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "yourIp/24"
}
}
}
]
}
I faced the same problem. I finally resolved it by creating an access point for the bucket in question using the AWS CLI (see https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html), and then created a bucket policy like the following:
{
"Version": "2012-10-17",
"Id": "Policy1583357393961",
"Statement": [
{
"Sid": "Stmt1583357315674",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<your-bucket>"
},
{
"Sid": "Stmt1583357391961",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
},
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::<your-bucket>/*"
}
]
}
Please make sure you are using a newer version of the AWS CLI (1.11.xxx didn't work for me). I finally installed version 2 of the CLI to get this to work.
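For reference, the access point itself can be created from the CLI with something like this (the account ID and names are placeholders):
aws s3control create-access-point --account-id 123456789012 --name my-access-point --bucket your-bucket-name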