AWS S3 Bucket Access file permission - amazon-web-services

I'm not an AWS expert, so I need some help configuring access policies for an audio file in an S3 bucket.
Quick explanation:
I'm trying to call a Lambda function and access an audio file from an S3 bucket with private access. My Lambda function (same AWS account) should be able to access the mp3 file through its URI.
Details:
I'm developing an Alexa Skill in .NET hosted on AWS Lambda. This skill needs to play an audio file that is retrieved from an S3 bucket.
The only way I was able to play the audio was by leaving the mp3 file accessible to everyone (allowing public access), but I want to restrict access to my Lambda function (same AWS account) only. In other words: I don't want anyone to be able to access these files except my Lambda function.
Whenever I configure the access policy, the Alexa skill no longer accesses the file and returns: "It was not possible to establish a connection with the provided audio file URI"
I tried:
Creating a role in the IAM management console
Creating an inline policy and attaching all S3 list and read permissions for any resource
Setting the created role as my Lambda function's execution role
But it's not working.
Does anyone know how to configure it correctly?
Reference: lambda-execution-role-s3-bucket

You should create an IAM Role and associate that IAM Role with the AWS Lambda function.
The IAM Role should have the following permissions:
The AWSLambdaBasicExecutionRole managed policy, which gives the Lambda function permission to write logs to CloudWatch Logs (see Lambda execution role - AWS Lambda)
A policy that permits the Lambda function to access the Amazon S3 bucket, something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET-NAME"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET-NAME/*"
    }
  ]
}
This policy gives the Lambda function permission to list the contents of the bucket, and upload/download/delete objects from the bucket.
If you merely want the Lambda function to read files in the bucket, you can reduce it to:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET-NAME/*"
    }
  ]
}

What @Anon Coward suggested worked fine!
Are you trying to read the audio file in the Lambda, or pass a link off to Alexa for it to play on the device? If you're passing a link off, you likely need to pass a pre-signed URL so the device can access the data. – Anon Coward Nov 24 at 5:47
I hadn't realized that when I provide the URI to Alexa through the REST API, it isn't resolved by my own function, so my Lambda's permissions don't apply. For this reason Alexa didn't have access to any file.
Thank you all!
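For reference, a minimal sketch of generating such a pre-signed URL; it uses boto3 with made-up bucket/key names (the skill itself is in .NET, where the AWS SDK offers an equivalent pre-signed URL call):

import boto3

s3 = boto3.client("s3")

# "my-audio-bucket" and "sounds/clip.mp3" are placeholders.
# The Lambda's execution role still needs s3:GetObject on the object;
# the URL simply carries that permission to the Alexa device for a short time.
audio_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-audio-bucket", "Key": "sounds/clip.mp3"},
    ExpiresIn=300,  # URL is valid for 5 minutes
)
# Return audio_url in the audio directive instead of the raw S3 URI.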

Related

How to give access to an S3 bucket residing in Account A to different IAM users from multiple AWS accounts?

I am working on an AWS SAM project and I have a requirement to give access to my S3 bucket to multiple IAM users from unknown AWS accounts, but I can't make the bucket publicly accessible. I want to keep my bucket secure while still letting an IAM user from any AWS account access its contents. Is this possible?
Below is the policy I tried, and it worked perfectly.
{
  "Version": "2012-10-17",
  "Id": "Policy1616828964582",
  "Statement": [
    {
      "Sid": "Stmt1616828940658",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*"
    }
  ]
}
The above policy is for one user, but I want any user from another AWS account to access my contents without making the bucket and objects public. How can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets
that you can use to perform S3 object operations, such as GetObject
and PutObject. Each access point has distinct permissions and network
controls that S3 applies for any request that is made through that
access point. Each access point enforces a customized access point
policy that works in conjunction with the bucket policy that is
attached to the underlying bucket.
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy then allows access based on those conditions.
You can also apply global IAM conditions to S3 policies.
This isn't great security, though: malicious actors might be able to figure out the headers and spoof requests to your bucket. As noted for some conditions, such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is
provided by the caller in an HTTP header, unauthorized parties can use
modified or custom browsers to provide any aws:UserAgent value that
they choose. As a result, aws:UserAgent should not be used to
prevent unauthorized parties from making direct AWS requests. You can
use it to allow only specific client applications, and only after
testing your policy.
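For illustration only, a rough boto3 sketch of applying such a conditional bucket policy; the bucket name and the aws:UserAgent value are made up, and, per the warning above, this header is trivially spoofed (S3 Block Public Access may also reject a policy with a "*" principal):

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadsFromKnownClient",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::new-demo-bkt/*",
        # Only requests that send this User-Agent string match the statement.
        "Condition": {"StringEquals": {"aws:UserAgent": "my-sam-app/1.0"}},
    }],
}

s3.put_bucket_policy(Bucket="new-demo-bkt", Policy=json.dumps(policy))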

Allow access to S3 Bucket from all EC2 instances of specific Account

Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal, as I would like to provide ready-to-use AWS ParallelCluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a 2 step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
You can find more examples of bucket policies in the AWS documentation here.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope down the permissions to actions such as s3:GetObject etc. There is a website to help you generate these policies here. s3:* will contain delete permissions which if used incorrectly could result in nasty surprises.
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands on your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises the call on their side.
The policy that needs to be attached to the instance role should look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
Keep in mind, just because this policy grants s3:* it doesn't mean they can do anything on your bucket, not unless you have s3:* in your bucket policy. Actions of this policy will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket; as long as you define a bucket policy that trusts their account, the rest is on them. They can create an EC2 instance role and grant it permissions to your bucket, or an IAM User and grant it access to your bucket. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an ec2 instance, it's bad practice to use access keys and instead should use an ec2 instance role.
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, then you can add a User Data script that will execute when the instance starts. This can download the files from S3 to the instance.
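As a sketch of that User Data step, the following boto3 snippet (hypothetical bucket, prefix, and target directory) downloads everything under a prefix using the instance role's credentials, with no access keys involved:

import os
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("shared-data-bucket")  # placeholder bucket name

os.makedirs("/opt/data", exist_ok=True)
for obj in bucket.objects.filter(Prefix="datasets/"):
    if obj.key.endswith("/"):
        continue  # skip "directory" placeholder keys
    # Credentials come from the instance role attached at launch.
    bucket.download_file(obj.key, os.path.join("/opt/data", os.path.basename(obj.key)))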

s3 - use CLI to make directory public

Is it possible to use the S3 CLI to change the ACL of existing files, without using sync? I have about 1 TB of data in my bucket, and I'd like to change its ACL without syncing it to my computer.
There are two ways to make Amazon S3 content 'public':
Change the Access Control List (ACL) on an individual object
Create a Bucket Policy on a bucket or path within a bucket
It sounds like you want to make all objects within a given directory public, so you should use an Amazon S3 Bucket Policy, such as this one from Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/directory/*"]
    }
  ]
}
You can add this policy with the AWS CLI, but it's much easier to do in the Amazon S3 management console (Permissions tab).
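If you really do want the first option (changing the ACL on each existing object) instead of a bucket policy, a rough boto3 sketch would look like this; the bucket name and prefix are placeholders, and each object is updated in place without re-uploading any data:

import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="my-bucket", Prefix="directory/"):
    for obj in page.get("Contents", []):
        # Mark each existing object as publicly readable.
        s3.put_object_acl(Bucket="my-bucket", Key=obj["Key"], ACL="public-read")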

How can AWS CloudFormation Lambda resource access code file in S3 if it is KMS encrypted?

My Lambda function deployment via CloudFormation works OK when the Lambda's code file in the S3 bucket is not encrypted, but it fails when I use a KMS-encrypted code file.
I have an AWS CloudFormation stack that contains Lambda resources. My Python code ZIP file is in an S3 bucket. The Lambda resources in my CFN template contain a "Code" property that points to the S3Bucket and S3Key where the zip is located. The bucket policy allows my role the actions s3:GetObject, s3:PutObject, and s3:ListBucket. The stack build works fine when the code ZIP file is unencrypted. But when I use a KMS-encrypted zip file in the bucket, I get the error:
"Your access has been denied by S3, please make sure your request credentials have permission to GetObject for my-bucket/my-folder/sample.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied"
Do I need to enhance my S3 bucket policy to support accessing KMS encrypted files? How is that done? (The error message seems misleading, since my bucket policy already does allow my role GetObject access.) Thanks.
Since you are almost certain that the request is failing for encrypted objects, you have to give the "role" you are referring to permission to use the KMS CMK, and this must be done via the KMS key policy (and/or an IAM policy).
If you are using a customer managed CMK, you can refer here and add the IAM role as a Key User. If you are using an AWS managed CMK (identifiable by the AWS icon), you can add a permission policy to the IAM role such as the following:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "kms:*"
    ],
    "Resource": [
      "arn:aws:kms:*:account_id:key/key_id"
    ]
  }
}
Note:
The above policy allows all KMS API actions for the specific key, but you can tweak it to grant the minimum required permissions.
For customer managed CMKs, it is also possible to manage permission to the KMS CMK via an IAM policy (along with the key policy); since we don't know your key policy, I only included the option of managing it via the key policy itself.
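As a sanity check, reading an SSE-KMS object is transparent once the caller is allowed to use the key; a minimal boto3 sketch with placeholder bucket/key names:

import boto3

s3 = boto3.client("s3")

# S3 asks KMS to decrypt the object on the caller's behalf, so this succeeds
# only if the caller's role has kms:Decrypt on the CMK in addition to
# s3:GetObject; otherwise S3 returns AccessDenied.
obj = s3.get_object(Bucket="my-bucket", Key="my-folder/sample.zip")
data = obj["Body"].read()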

How to secure an S3 bucket to an Instance's Role?

Using CloudFormation, I have launched an EC2 instance with a role that has an S3 policy which looks like the following
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
In S3 the bucket policy is like so
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "ReadAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456678:role/Production-WebRole-1G48DN4VC8840"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::web-deploy/*"
    }
  ]
}
When I log in to the instance and attempt to curl any object I have uploaded to the bucket (without ACL modifications), I receive an Unauthorized 403 error.
Is this the correct way to restrict access to a bucket to only instances launched with a specific role?
The EC2 instance role is more than sufficient to write to and read from any of your S3 buckets, but you need to actually use the instance role, which is not done automatically by curl.
You should use, for example, aws s3 cp <local source> s3://<bucket>/<key>, which will automatically use the instance role.
There are three ways to grant access to an object in Amazon S3:
Object ACL: Specific objects can be marked as "Public", so anyone can access them.
Bucket Policy: A policy placed on a bucket to determine what access to Allow/Deny, either publicly or to specific Users.
IAM Policy: A policy placed on a User, Group or Role, granting them access to an AWS resource such as an Amazon S3 bucket.
If any of these policies grant access, the user can access the object(s) in Amazon S3. One exception is if there is a Deny policy, which overrides an Allow policy.
Role on the Amazon EC2 instance
You have granted this role to the Amazon EC2 instance:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
This will provide credentials to the instance that can be accessed by the AWS Command-Line Interface (CLI) or any application using the AWS SDK. They will have unlimited access to Amazon S3 unless there is also a Deny policy that otherwise restricts access.
If anything, that policy is granting too much permission. It is allowing an application on that instance to do anything it wants to your Amazon S3 storage, including deleting it all! It is better to assign least privilege, only giving permission for what the applications need to do.
Amazon S3 Bucket Policy
You have also created a Bucket Policy, which allows anything that has assumed the Production-WebRole-1G48DN4VC8840 role to retrieve the contents of the web-deploy bucket.
It doesn't matter what specific permissions the role itself has -- this policy means that merely using the role to access the web-deploy bucket will allow it to read all files. Therefore, this policy alone would be sufficient to your requirement of granting bucket access to instances using the Role -- you do not also require the policy within the role itself.
So, why can't you access the content? It is because a plain curl request does not identify your role/user. Amazon S3 receives the request and treats it as anonymous, thereby not granting access.
Try accessing the data via the CLI or programmatically via an SDK call. For example, this CLI command would download an object:
aws s3 cp s3://web-deploy/foo.txt foo.txt
The CLI will automatically grab credentials related to your role, allowing access to the objects.
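The SDKs behave the same way; for example, a minimal boto3 sketch (same bucket/key as the CLI command above) picks up the instance role's temporary credentials automatically:

import boto3

# Unlike a plain curl, the SDK signs the request with the instance role's
# temporary credentials, so the bucket policy's Principal is matched.
s3 = boto3.client("s3")
s3.download_file("web-deploy", "foo.txt", "foo.txt")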