AWS Lambda put data to cross account s3 bucket - amazon-web-services

Here is what I am trying to do:
I have access logs in Account A, which are encrypted by default by AWS, and I have a Lambda function and an S3 bucket in Account B. I want to trigger the Lambda when a new object lands in the Account A S3 bucket, so that the Lambda in Account B downloads the data and writes it to the Account B S3 bucket. Below are the blockers I am facing.
First approach:
I was able to trigger the Lambda in Account B from a new object in the Account A S3 bucket; however, the Lambda in Account B is not able to download the object - it gets an Access Denied error. After looking into it for a couple of days, I figured out that this is because the access logs are encrypted by default, and there is no way I can add the Lambda role to the encryption key policy so that it can encrypt/decrypt the log files. So I moved on to the second approach.
Second approach:
I have moved my Lambda to Account A. Now the source S3 bucket and the Lambda are in Account A, and the destination S3 bucket is in Account B. I can now process the access logs in Account A via the Lambda in Account A, but when it writes the file to the Account B S3 bucket, I get an Access Denied error when downloading/reading the file.
Lambda role policy:
In addition to full S3 access and full Lambda access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1574387531641",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    },
    {
      "Sid": "Stmt1574387531642",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-bucket",
        "arn:aws:s3:::Account-B-bucket/*"
      ]
    }
  ]
}
Trust relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com",
        "AWS": "arn:aws:iam::Account-B-ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Destination - Account B s3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::Account-A-ID:role/service-role/lambda-role"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-Bucket",
        "arn:aws:s3:::Account-B-Bucket/*"
      ]
    },
    {
      "Sid": "Stmt11111111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::Account-A-ID:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::Account-B-Bucket",
        "arn:aws:s3:::Account-B-Bucket/*"
      ]
    }
  ]
}
I am stuck here. I want lambda to be able to decrypt the access logs and read/process the data and write it to different account s3 bucket. Am I missing something? Help is much appreciated!
Adding file metadata:
File property screenshot
Lambda Code:
import io
import boto3

s3 = boto3.client('s3')

# Reading access logs from Account A. The Lambda is also running in Account A.
response = s3.get_object(Bucket=access_log_bucket, Key=access_log_key)
body = response['Body']
content = io.BytesIO(body.read())

# Processing access logs
processed_content = process_it(content)

# Writing to the Account B S3 bucket
s3.put_object(Body=processed_content,
              Bucket=processed_bucket,
              Key=processed_key)

Rather than downloading and then uploading the object, I would recommend that you use the copy_object() command.
The benefit of using copy_object() is that the object will be copied directly by Amazon S3, without the need to first download the object.
When doing so, the credentials you use must have read permissions on the source bucket and write permissions on the destination bucket. (However, if you are 'processing' the data, this of course won't apply.)
As part of this command, you can specify an ACL:
ACL='bucket-owner-full-control'
This is required because the object is being written from credentials in Account A to a bucket owned by Account B. Using bucket-owner-full-control will pass control of the object to Account B. (It is not required if using credentials from Account B and 'pulling' an object from Account A.)
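As a rough sketch (the bucket and key names below are placeholders, not from the question), a server-side cross-account copy with this ACL might look like the following; the helper that assembles the parameters is purely illustrative:

```python
# import boto3  # uncomment when running with real AWS credentials

def build_copy_kwargs(source_bucket, source_key, dest_bucket, dest_key):
    """Assemble copy_object parameters, including the ownership ACL."""
    return {
        'CopySource': {'Bucket': source_bucket, 'Key': source_key},
        'Bucket': dest_bucket,
        'Key': dest_key,
        # Hands control of the new object to the destination bucket's owner
        'ACL': 'bucket-owner-full-control',
    }

kwargs = build_copy_kwargs('account-a-logs', 'access.log',
                           'account-b-processed', 'access.log')
# s3 = boto3.client('s3')
# s3.copy_object(**kwargs)  # S3 copies server-side; no download/upload round-trip
```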

Thanks John Rotenstein for the direction. I found the solution: I only needed to add ACL='bucket-owner-full-control' to the put_object call. Below is the complete boto3 call.
s3.put_object(
    ACL='bucket-owner-full-control',
    Body=processed_content,
    Bucket=processed_bucket,
    Key=processed_key)

Related

S3 cross account file transfer, file not accessible

I am pushing an S3 file from accountA to accountB, but the pushed file is not accessible from accountB. I checked the pushed file, and the Owner of the pushed file appears to be accountA.
Here is what I have done.
The IAM role in accountA has this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*",
        "s3-object-lambda:*"
      ],
      "Resource": "*"
    }
  ]
}
The bucket policy in accountB looks like this:
{
  "Sid": "S3AllowPutFromDataLake",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::accountId:role/roleNameAccountA"
  },
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::bucketName/*"
}
How to fix this?
This is a common problem when copying S3 objects between AWS accounts. Here are several options to avoid it; pick whichever one you prefer:
Pull instead of Push
The problem occurs when Account A copies an object to Account B. Ownership stays with Account A.
This can be avoided by having Account B trigger the copy. It is, in effect, 'pulling' the object into Account B rather than 'pushing' the object. Ownership will stay with Account B, since Account B requested the copy.
Disable ACLs
The concept of object-level ACLs pre-dates Bucket Policies and causes many problems like the one you are experiencing.
Amazon S3 has now introduced the ability to disable ACLs on a bucket and this is the recommended option when creating new buckets. Disabling the ACLs will also remove this 'ownership' concept that is causing problems. In your situation, it is the Target bucket in Account B that should have ACLs disabled.
See: Disabling ACLs for all new buckets and enforcing Object Ownership - Amazon Simple Storage Service
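If you go this route, disabling ACLs can be done via the boto3 ownership-controls API; a minimal sketch (the bucket name is a placeholder) might be:

```python
# import boto3  # uncomment to run against AWS

# OwnershipControls payload that disables ACLs on the bucket entirely:
# with BucketOwnerEnforced, the bucket owner automatically owns every object.
ownership_controls = {
    'Rules': [
        {'ObjectOwnership': 'BucketOwnerEnforced'}
    ]
}

# s3 = boto3.client('s3')
# s3.put_bucket_ownership_controls(
#     Bucket='account-b-target-bucket',
#     OwnershipControls=ownership_controls,
# )
```

With ACLs disabled, the `--acl bucket-owner-full-control` workaround becomes unnecessary (and ACL-setting requests other than this value are rejected).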
Specify ownership while copying
When copying the file, it is possible to specify that ownership should be transferred by setting the ACL to bucket-owner-full-control.
Using the AWS CLI:
aws s3 cp s3://bucket-a/foo.txt s3://bucket-b/foo.txt --acl bucket-owner-full-control
Using boto3:
s3_client.copy_object(
    ACL='bucket-owner-full-control',
    Bucket=DESTINATION_BUCKET,
    Key=KEY,
    CopySource={'Bucket': SOURCE_BUCKET, 'Key': KEY}
)
Was able to fix this by modifying the bucket policy as below:
{
  "Sid": "S3AllowPutFromDataLake",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::accountId:role/roleNameAccountA"
  },
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::bucketName/*",
  "Condition": {
    "StringEquals": {
      "s3:x-amz-acl": "bucket-owner-full-control"
    }
  }
}
And adding this parameter while pushing the file:
'ACL': 'bucket-owner-full-control'
The owner is still accountA but now I am able to access the file from accountB.

AWS Lambda : Cross account Policy for Lambda function S3 to S3 copy

We are trying to implement a Lambda function which will copy an object from one S3 bucket to another, cross-account, based on source S3 bucket events. Currently we are able to copy the file between source and target within the same SAG. But when we tried to implement the same logic cross-account, we get a CopyObject operation: Access Denied error. I have applied the following bucket policy. Can you please help me get the correct IAM and bucket policies to resolve this issue?
{
  "Version": "2012-10-17",
  "Id": "Policy1603404813917",
  "Statement": [
    {
      "Sid": "Stmt1603404812651",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::6888889898:role/Staff"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::source-bucktet-testing-lambda/*",
        "arn:aws:s3:::source-bucktet-testing-lambda"
      ]
    }
  ]
}
Based on the https://www.lixu.ca/2016/09/aws-lambda-and-s3-how-to-do-cross_83.html link: yes, we can implement the same logic with the help of an access key ID and secret access key for source and destination. But I am trying to implement the same logic without access keys, instead granting access to both source and target buckets with appropriate policies, so that it works just like a same-account copy.
To reproduce your situation, I did the following:
In Account-A:
Created an Amazon S3 bucket (Bucket-A)
Created an IAM Role (Role-A)
Created an AWS Lambda function (Lambda-A) and assigned Role-A to the function
Configured an Amazon S3 Event on Bucket-A to trigger Lambda-A for "All object create events"
In Account-B:
Created an Amazon S3 bucket (Bucket-B) with a bucket policy (see below)
IAM Role
Role-A has the AWSLambdaBasicExecutionRole managed policy, and also this Inline Policy that assigns the Lambda function permission to read from Bucket-A and write to Bucket-B:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-a/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::bucket-b/*"
    }
  ]
}
Bucket Policy on destination bucket
The Bucket Policy on Bucket-B permits access from the Role-A IAM Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-A:role/role-a"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::bucket-b/*"
    }
  ]
}
Lambda Function
Lambda-A is triggered when an object is created in Bucket-A, and copies it to Bucket-B:
import boto3
import urllib.parse

TARGET_BUCKET = 'bucket-b'

def lambda_handler(event, context):
    # Get incoming bucket and key
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    source_key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])

    # Copy object to different bucket
    s3_resource = boto3.resource('s3')
    copy_source = {
        'Bucket': source_bucket,
        'Key': source_key
    }
    target_key = source_key  # Change if desired
    s3_resource.Bucket(TARGET_BUCKET).Object(target_key).copy(
        copy_source, ExtraArgs={'ACL': 'bucket-owner-full-control'})
I grant ACL=bucket-owner-full-control because copying objects to buckets owned by different accounts can sometimes cause the objects to still be 'owned' by the original account. Using this ACL grants ownership to the account that owns the destination bucket.
Testing
I uploaded a file to Bucket-A in Account-A.
The file was correctly copied to Bucket-B in Account-B.
Comments
The solution does NOT require:
A bucket policy on Bucket-A, since Role-A grants the necessary permissions
Turning off S3 Block Public Access, since the permissions assigned do not grant 'public' access
Assuming the following
Above mentioned policy is for the source bucket
6888889898 is the Destination AWS account
Lambda for copying the file is located in the destination AWS account and has Staff role attached to it.
Even after setting all of these correctly, the copy operation may fail. This is because the policy allows you to get/put S3 objects, but not the tags associated with those objects.
You will need to ALLOW the actions "s3:GetObjectTagging" and "s3:PutObjectTagging" as well.
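A sketch of the additional statement for the bucket policy (resource ARN taken from the question; adjust the principal to match) could look like:

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObjectTagging",
    "s3:PutObjectTagging"
  ],
  "Resource": "arn:aws:s3:::source-bucktet-testing-lambda/*"
}
```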

AWS: Could not able to give s3 access via s3 bucket policy

I am the root user of my account, and I created one new user and am trying to give him access to S3 via an S3 bucket policy.
Here are my policy details:
Here is my policy details :-
{
  "Id": "Policy1542998309644",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1542998308012",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::aws-bucket-demo-1",
      "Principal": {
        "AWS": [
          "arn:aws:iam::213171387512:user/Dave"
        ]
      }
    }
  ]
}
In IAM I have not given any access to the new user. I want to provide him access to S3 via the S3 bucket policy only. Actually I would like to achieve this: https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-access-certain-bucket/ But not via IAM - I want to use only the S3 bucket policy.
Based on the following AWS blog post (the blog shows IAM policy, but it can be adapted to a bucket policy):
How can I grant a user Amazon S3 console access to only a certain bucket?
you can make the following bucket policy:
{
  "Id": "Policy1589632516440",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1589632482887",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::aws-bucket-demo-1",
      "Principal": {
        "AWS": [
          "arn:aws:iam::213171387512:user/Dave"
        ]
      }
    },
    {
      "Sid": "Stmt1589632515136",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::aws-bucket-demo-1/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::213171387512:user/Dave"
        ]
      }
    }
  ]
}
This will require the user to navigate directly to the bucket URL:
https://s3.console.aws.amazon.com/s3/buckets/aws-bucket-demo-1/
The reason is that the user does not have permissions to list all buckets available. Thus he/she has to go directly to the one you specify.
Obviously the IAM user needs to have AWS Management Console access enabled when you create him/her in the IAM service. With Programmatic access only, IAM users can't use console and no bucket policy can change that.
You will need to use ListBuckets.
It seems like you want this user to only be able to see your bucket but not access anything in it.

Cannot 'getObject' from lambda on s3 bucket when the object is created by another account

I have 3 accounts I will refer to as account AA, BB and CC.
AA does a putObject to an S3 bucket in BB, and CC has a Lambda that is triggered when an object is created in BB.
When I create an object in the S3 bucket from account BB, the Lambda works as expected. I can do this through the console, or the S3 API.
When the object is put there from account AA, I am able to read the event in the Lambda, but get Access Denied when trying to do an s3:GetObject.
At one point I had the Lambda in BB, and it was able to perform the s3:GetObject on objects created by AA. It is only when I moved the Lambda to CC that I started experiencing issues with objects created by AA.
Here is my S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AA:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::CC:role/LAMBDA_ARN"
      },
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3::BUCKET_NAME/*"
    }
  ]
}
And here is my statement from CC lambda that allows access to the s3
{
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": [
    "arn:aws:s3:::BUCKET_NAME*",
    "arn:aws:s3:::BUCKET_NAME*/*"
  ],
  "Effect": "Allow"
}
The lambda execution role has full permissions to s3:get* on account BB.
The fact that it was written by another account should not affect reading that object, as I can take that same object that was written there by AA, write it into BB again, and the CC Lambda will be able to read it just fine.
When writing an object to Amazon S3 from a different account (that is, from an account that does not own the S3 bucket), it is recommended to set the ACL to bucket-owner-full-control. This allows the bucket owner to control the object.
As strange as it seems, it is possible to create objects in a bucket that the owning account cannot access!
See: Access Control List (ACL) Overview - Amazon Simple Storage Service

S3 IAM policy works in simulator, but not in real life

I have a client who I want to be able to upload files, but not navigate freely around my S3 bucket. I’ve created them an IAM user account, and applied the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1416387009000",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Sid": "Stmt1416387127000",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::progress"
      ]
    },
    {
      "Sid": "Stmt1416387056000",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::progress/*"
      ]
    }
  ]
}
There are three statements:
Ability to list all buckets (otherwise they can’t see anything in the S3 console when they log in)
Ability to list the contents of the progress bucket
Ability to put objects in the progress bucket
The user can log in to the AWS console with their username and password (and my custom account URL, i.e. https://account.signin.aws.amazon.com/console). They can go to the S3 section of the console, and see a list of all my buckets. However, if they click progress then they just get the following error message:
Sorry! You were denied access to do that.
I’ve checked with the IAM Policy Simulator whether the user has the ListBucket permission on the bucket’s ARN (arn:aws:s3:::progress) and the Policy Simulator says the user should be allowed.
I’ve logged out and in again as the target user in case policies are only refreshed on log out, but still no joy.
What have I done wrong? Have I missed something?
My guess is that when using the AWS console, another call is made to get the bucket location before it can list the objects in that bucket, and the user doesn't have permission to make that call. You need to also give the account access to GetBucketLocation. Relevant text from the documentation:
When you use the Amazon S3 console, note that when you click a bucket, the console first sends the GET Bucket location request to find the AWS region where the bucket is deployed. Then the console uses the region-specific endpoint for the bucket to send the GET Bucket (List Objects) request. As a result, if users are going to use the console, you must grant permission for the s3:GetBucketLocation action as shown in the following policy statement:
{
  "Sid": "RequiredByS3Console",
  "Action": ["s3:GetBucketLocation"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}