Access Denied from S3 Bucket - amazon-web-services

In Account A I created an S3 bucket with CloudFormation, and a CodeBuild project builds an artifact and uploads it to this bucket. In Account B I try to create a stack with CloudFormation, using the artifact from Account A's bucket to deploy my Lambda function. But I get an Access Denied error. Does anyone know the solution? Thanks...
"TestBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"AccessControl": "BucketOwnerFullControl"
}
},
"IAMPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "TestBucket"
},
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
},
"Action": [
"s3:GetObject"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
},
"/*"
]
]
},
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
}
]
]
}
]
}
]
}
}
}

Assuming that the xxxxxxxxxxxx in the statement below is the account number of Account B:
"AWS": [
    "arn:aws:iam::xxxxxxxxxxxx:root",
    "arn:aws:iam::xxxxxxxxxxxx:root"
]
What this policy says is that the bucket grants access to Account B on the basis of the IAM permissions/policies those identities hold in Account B's own IAM service.
So essentially, only the users/instance profiles/roles in Account B that have explicit S3 permissions will be able to access this bucket in Account A. This suggests that the IAM policy you are attaching to the Lambda role in Account B doesn't have explicit S3 access.
I would suggest giving S3 access to your Lambda function's role, and this should work.
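For example, a minimal sketch of a statement you could add to the policy on the Lambda execution role in Account B (the bucket name here is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::account-a-artifact-bucket/*"
        }
    ]
}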
Please be aware that in the future, if you want to write to Account A's S3 bucket from Account B, you will have to make sure you put the bucket-owner-full-control ACL on the objects so that they are accessible across both accounts.
Example:
Using CLI:
$ aws s3api put-object --acl bucket-owner-full-control --bucket my-test-bucket --key dir/my_object.txt --body /path/to/my_object.txt

Instead of "arn:aws:iam::xxxxxxxxxxxx:root" granting access to the root role only, try granting access to all identities in the account by specifying just the account ID as the item within the Principal/AWS object: "xxxxxxxxxxxx".
See Using a Resource-based Policy to Delegate Access to an Amazon S3 Bucket in Another Account for more details.

Related

Why can't I access my bucket from an assumed role?

I have an S3 bucket with no attached ACLs or policies. It was created by terraform like so:
resource "aws_s3_bucket" "runners_cache" {
bucket = var.runners_cache.bucket
}
I created a role and attached a policy to it; see the following console log for details
$ aws iam get-role --role-name bootstrap-test-bootstrapper
{
    "Role": {
        "Path": "/bootstrap-test/",
        "RoleName": "bootstrap-test-bootstrapper",
        "RoleId": "#SNIP",
        "Arn": "arn:aws:iam::#SNIP:role/bootstrap-test/bootstrap-test-bootstrapper",
        ... #SNIP
$ aws iam list-attached-role-policies --role-name bootstrap-test-bootstrapper
{
    "AttachedPolicies": [
        {
            "PolicyName": "bootstrap-test-bootstrapper",
            "PolicyArn": "arn:aws:iam::#SNIP:policy/bootstrap-test/bootstrap-test-bootstrapper"
        },
        ... #SNIP
$ aws iam get-policy --policy-arn arn:aws:iam::#SNIP:policy/bootstrap-test/bootstrap-test-runner
{
    "Policy": {
        "PolicyName": "bootstrap-test-runner",
        "PolicyId": "#SNIP",
        "Arn": "arn:aws:iam::#SNIP:policy/bootstrap-test/bootstrap-test-runner",
        "Path": "/bootstrap-test/",
        "DefaultVersionId": "v7",
        ... #SNIP
$ aws iam get-policy-version --policy-arn arn:aws:iam::#SNIP:policy/bootstrap-test/bootstrap-test-runner --version-id v7
{
    "PolicyVersion": {
        "Document": {
            "Statement": [
                {
                    "Action": [
                        "s3:AbortMultipartUpload",
                        "s3:CompleteMultipartUpload",
                        "s3:ListBucket",
                        "s3:PutObject",
                        "s3:GetObject",
                        "s3:DeleteObject",
                        "s3:PutObjectAcl"
                    ],
                    "Effect": "Allow",
                    "Resource": [
                        "arn:aws:s3:::#SNIP-runners-cache/*",
                        "arn:aws:s3:::#SNIP-cloud-infrastructure-terraform-states/*"
                    ]
                },
                {
                    "Action": [
                        "s3:*"
                    ],
                    "Effect": "Allow",
                    "Resource": [
                        "arn:aws:s3:::*"
                    ]
                }
            ],
            "Version": "2012-10-17"
        },
        "VersionId": "v7",
        "IsDefaultVersion": true,
        "CreateDate": "2022-08-18T14:16:33+00:00"
    }
}
tl;dr: this role has an attached policy that allows full access to S3 within the account.
I can successfully assume this role:
$ aws sts assume-role --role-arn arn:aws:iam::#SNIP:role/bootstrap-test/bootstrap-test-bootstrapper --role-session-name test123
{ ... #REDACTED }
$ export AWS_ACCESS_KEY_ID=ASIA2 #REDACTED
$ export AWS_SECRET_ACCESS_KEY=8 #REDACTED
$ export AWS_SESSION_TOKEN=IQoJb #REDACTED
$ aws sts get-caller-identity
{
"UserId": "#SNIP",
"Account": "#SNIP",
"Arn": "arn:aws:sts::#SNIP:assumed-role/bootstrap-test-bootstrapper/test123"
}
However, once I do this, I no longer have access to S3:
$ aws s3 ls #SNIP-runners-cache
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
$ aws s3 ls
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
What am I missing? Is there some default behavior that prevents access to S3? How should I go about debugging these 403 errors?
It is easy to get over-obsessed with the details of the policy and forget about the role itself. In this case the permissions boundary went unnoticed in the CLI output, but it is quite easy to see in the web console.
Indeed, @luk2302 was right: the limiting factor was a permissions boundary. After removing it from the role, access to S3 was restored.
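For reference, the boundary does show up in the get-role output when one is attached, so a quick CLI check is possible (a sketch, using the role name from above):
$ aws iam get-role --role-name bootstrap-test-bootstrapper --query 'Role.PermissionsBoundary'
If this prints anything other than null, a permissions boundary is in play.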

Limiting access of a GCP Cloud IAM custom role only to a bucket

AWS provides a way through its IAM policies to limit access from a particular user/role to a specific named resource.
For example the following permission:
{
    "Sid": "ThirdStatement",
    "Effect": "Allow",
    "Action": [
        "s3:List*",
        "s3:Get*"
    ],
    "Resource": [
        "arn:aws:s3:::confidential-data",
        "arn:aws:s3:::confidential-data/*"
    ]
}
will allow all List* and Get* operations on the confidential-data bucket and its contents.
However, I could not find such an option when going through GCP's custom roles.
Now, I know that for GCS buckets (which is my use case) you can use ACLs to achieve (more or less?) the same result.
My question is: assuming I create a service account identified by someone@myaccount-googlecloud.com, and I want this account to have read/write permissions on gs://mybucket-on-google-cloud-storage, how should I format the ACL to do this?
(for the time being, it does not matter to me whatever other permissions are inherited from the organization/folder/project)
From documentation:
Grant the service account foo@developer.gserviceaccount.com WRITE access to the bucket example-bucket:
gsutil acl ch -u foo@developer.gserviceaccount.com:W gs://example-bucket
Grant the service account foo@developer.gserviceaccount.com READ access to the bucket example-bucket:
gsutil acl ch -u foo@developer.gserviceaccount.com:R gs://example-bucket
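Applied to the bucket and service account from the question, the WRITE grant would look something like this (a sketch; note that WRITER on a bucket also implies listing and creating objects):
gsutil acl ch -u someone@myaccount-googlecloud.com:W gs://mybucket-on-google-cloud-storage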
The format for the bucket's IAM policy bindings is as below:
{
    "bindings": [
        {
            "role": "[IAM_ROLE]",
            "members": [
                "[MEMBER_NAME]"
            ]
        }
    ]
}
Please refer to the Google documentation.
e.g.
{
    "kind": "storage#policy",
    "resourceId": "projects/_/buckets/bucket_name",
    "version": 1,
    "bindings": [
        {
            "role": "roles/storage.legacyBucketWriter",
            "members": [
                "projectEditor:projectname",
                "projectOwner:projectname"
            ]
        },
        {
            "role": "roles/storage.legacyBucketReader",
            "members": [
                "projectViewer:projectname"
            ]
        }
    ],
    "etag": "CAE="
}

Grant permissions between AWS resources with CloudFormation

I would like to have a CloudFormation template create an EC2 instance and give that instance access to a S3 bucket.
One way is to have the template create an IAM user with proper permissions and use its access key to grant access.
But what if I don't want to give that user access to the IAM service?
Is there a way to have that user deploy this template without IAM?
UPDATE:
I want to be able to just share that template, so I am wondering if it is possible to not have a dependency on pre-existing IAM resources (roles, policies, etc)
The common method to grant permissions for an instance is Instance Profiles. You create a role with all the required permissions, assign that role to an instance profile and then assign the profile to any instance you need.
You can do this with CloudFormation:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "myEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-205fba49",
                "InstanceType": "t2.micro",
                "IamInstanceProfile": {
                    "Ref": "RootInstanceProfile"
                }
            }
        },
        "MyRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": [ "ec2.amazonaws.com" ]
                            },
                            "Action": [ "sts:AssumeRole" ]
                        }
                    ]
                },
                "Path": "/"
            }
        },
        "RolePolicies": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "s3",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Action": [ "s3:PutObject", "s3:PutObjectAcl" ],
                            "Resource": [ "arn:aws:s3:::examplebucket/*" ]
                        }
                    ]
                },
                "Roles": [ { "Ref": "MyRole" } ]
            }
        },
        "RootInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {
                "Path": "/",
                "Roles": [ { "Ref": "MyRole" } ]
            }
        }
    }
}
If you want to avoid giving the user deploying this template IAM access, you can create the instance profile before deploying the template and specify the already existing instance profile in the template. I haven't tried that yet, but it seems that should only require ec2:AssociateIamInstanceProfile plus iam:PassRole for the role in the profile, and you should be able to constrain both to that one specific profile and role.
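A sketch of pre-creating such a profile with the CLI (names and policy files are placeholders; the trust and S3 policies are the same documents as in the template above):
$ aws iam create-role --role-name MyS3WriterRole --assume-role-policy-document file://ec2-trust.json
$ aws iam put-role-policy --role-name MyS3WriterRole --policy-name s3 --policy-document file://s3-policy.json
$ aws iam create-instance-profile --instance-profile-name MyS3WriterProfile
$ aws iam add-role-to-instance-profile --instance-profile-name MyS3WriterProfile --role-name MyS3WriterRole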
Depends on what you mean by IAM service.
You can create IAM User Access Keys that give permissions to specific AWS services and no others. Access Keys do not allow IAM Console Access (this requires login credentials or federation).
For your use case your user will need at a minimum:
Permission to use CloudFormation to execute your template.
Permission to create the EC2 instance.
These permissions are defined in a policy that you add to the IAM user in the AWS Management Console. You can create users that cannot log into the console. Then you generate the Access Keys that the user will use in their application, AWS CLI, etc.
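As a rough sketch (not exhaustive; real templates typically need more ec2:* actions, and iam:PassRole if the instance gets a profile), such a user policy might look like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackEvents",
                "ec2:RunInstances",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        }
    ]
}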
Overview of IAM Policies

AWS CodePipeline with a CodeCommit source repository in another account

Is it possible to create a CodePipeline whose source is a CodeCommit repository in another account?
I just had to do this, I'll explain the process.
Account C is the account with your CodeCommit repository.
Account P is the account with your CodePipeline... pipelines.
In Account P:
Create an AWS KMS encryption key and give Account C access to it (see the pre-requisite step in the guide here). You will also need to add the CodePipeline role, and if you have CodeBuild or CodeDeploy steps, add those roles too.
In your CodePipeline artifacts S3 bucket you need to add Account C access. Go to the Bucket Policy and add:
{
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTC_ID:root"
    },
    "Action": [
        "s3:Get*",
        "s3:Put*"
    ],
    "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
},
{
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::ACCOUNTC_ID:root"
    },
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
}
Change ACCOUNTC_ID to the account ID of Account C, and change YOUR_BUCKET_NAME to the CodePipeline artifact S3 bucket name.
Add a policy to your CodePipeline service role so you can get access to Account C and the CodeCommit repositories:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::ACCOUNTC_ID:role/*"
        ]
    }
}
Again, change ACCOUNTC_ID to the account ID of Account C.
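If you manage the service role from the CLI, the same document can be attached as an inline policy, e.g. (a sketch; the role and file names are placeholders):
$ aws iam put-role-policy --role-name YourCodePipelineServiceRole --policy-name CrossAccountAssume --policy-document file://assume-accountc.json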
In Account C:
Create an IAM policy that lets Account P access the CodeCommit resources and also the KMS key, so it can encrypt artifacts with the same key as the rest of your CodePipeline:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject*",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "codecommit:ListBranches",
                "codecommit:ListRepositories"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME_IN_ACCOUNTP_FOR_CODE_PIPELINE/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:GenerateDataKey*",
                "kms:Encrypt",
                "kms:ReEncrypt*",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:YOUR_KMS_ARN"
            ]
        }
    ]
}
Replace bucket name and KMS ARN in the above policy. Save the policy as something like CrossAccountPipelinePolicy.
Create a role for cross-account access and attach the above policy as well as the AWSCodeCommitFullAccess policy. Make sure to set the trusted entity to the account ID of Account P.
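A sketch of the same role creation from the CLI (the role name is a placeholder; ACCOUNTP_ID is Account P's account ID):
$ aws iam create-role --role-name CrossAccountPipelineRole --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::ACCOUNTP_ID:root"}, "Action": "sts:AssumeRole"}]}'
$ aws iam attach-role-policy --role-name CrossAccountPipelineRole --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess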
In AWS CLI
You can't do this bit in the console, so you have to use the AWS CLI. The goal is to get your CodePipeline in Account P to assume the role in the Source step and dump the source artifact in the S3 bucket for all your next steps to use.
aws codepipeline get-pipeline --name NameOfPipeline > pipeline.json
Modify the pipeline json so it looks a bit like this and replace the bits that you need to:
"pipeline": {
"name": "YOUR_PIPELINE_NAME",
"roleArn": "arn:aws:iam::AccountP_ID:role/ROLE_NAME_FOR_CODE_PIPELINE",
"artifactStore": {
"type": "S3",
"location": "YOUR_BUCKET_NAME",
"encryptionKey": {
"id": "arn:aws:kms:YOUR_KMS_KEY_ARN",
"type": "KMS"
}
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 1,
"roleArn": "arn:aws:iam::AccountC_ID:role/ROLE_NAME_WITH_CROSS_ACCOUNT_POLICY",
"configuration": {
"BranchName": "master",
"PollForSourceChanges": "false",
"RepositoryName": "YOURREPOSITORYNAME"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"inputArtifacts": []
}
]
},
Update the pipeline with aws codepipeline update-pipeline --cli-input-json file://pipeline.json
Verify it works by running the pipeline.
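You can also kick off a run from the CLI:
$ aws codepipeline start-pipeline-execution --name YOUR_PIPELINE_NAME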
You can deploy resources using a pipeline with a CodeCommit repository in another account.
Let's say you have Account A, where your CodeCommit repository sits, and Account B, where your CodePipeline sits.
Configure the following in Account B:
You need to create a custom KMS key (CMK), because the key policy of the AWS-managed default key cannot be edited. You can use Create a Pipeline in CodePipeline That Uses Resources from Another AWS Account if you need assistance with creating the CMK. Add the CodePipeline service role to the KMS key policy to allow the pipeline to use it.
Set up an event bus for receiving events from the other account: go to CloudWatch → Event Buses under the Events section → Add Permission → enter Account A's AWS account ID → Add. For more details, check Creating an Event Bus.
Add the following policy to the S3 pipeline artifact store bucket:
{
    "Version": "2012-10-17",
    "Id": "PolicyForKMSAccess",
    "Statement": [
        {
            "Sid": "AllowAccessFromAAccount",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_A_ID:root"
            },
            "Action": [
                "s3:Get*",
                "s3:Put*",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::NAME-OF-THE-BUCKET",
                "arn:aws:s3:::NAME-OF-THE-BUCKET/*"
            ]
        }
    ]
}
Edit the pipeline IAM role so it can assume a role in Account A, as follows:
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": [
            "arn:aws:iam::ACCOUNT_A_ID:role/*"
        ]
    }
}
Create a CloudWatch Event Rule that triggers the pipeline on the master branch of the CodeCommit repository in Account A. Add the CodePipeline's ARN as a target of this rule.
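A sketch of the event pattern for such a rule (region, account ID, and repository name are placeholders):
{
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:REGION:ACCOUNT_A_ID:YOUR_REPOSITORY"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["master"]
    }
}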
Now, do the following in Account A:
Create a cross-account IAM role with 3 policies.
a) AWSCodeCommitFullAccess
b) A trust relationship that allows Account B to assume this role, as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_B_ID:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
c) An inline policy for KMS, CodeCommit and S3 access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:Put*",
                "codecommit:*"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME_IN_B_FOR_CODE_PIPELINE_ARTIFACTS/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ],
            "Resource": [
                "arn:aws:kms:YOUR_KMS_ARN_FROM_B_ACCOUNT"
            ]
        }
    ]
}
2. Update your pipeline as @Eran Medan suggested.
For more details, please visit AWS CodePipeline with a Cross-Account CodeCommit Repository
Also, please note that I have given more permissions than required (for example codecommit:* and kms:*); you can tighten them as per your needs.
I hope this will help.
Yes, it should be possible. Follow these instructions: http://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

Amazon S3 Bucket policy to allow user to write to the bucket but that bucket only

I have a bucket in Amazon S3 called 'data1'.
When I connect using Cyberduck to my S3, I want the user to only have access to 'data1' bucket and none of the others.
I also set up a new IAM user, called data1, and attached the 'AmazonS3FullAccess' policy to the permissions for that user - but that gives access to all of the buckets, which is what you would expect.
I guess I need to set up another policy for this - but what policy would I use?
First find the user's principal ARN. It can be found by looking at the Arn field output by this command:
aws iam list-users
For instance
{
    "Users": [
        {
            "UserName": "eric",
            "Path": "/",
            "CreateDate": "2016-07-12T09:08:21Z",
            "UserId": "AIDAJXPI4SWK7X7PY4RX2",
            "Arn": "arn:aws:iam::930517348925:user/eric"
        },
        {
            "UserName": "bambi",
            "Path": "/",
            "CreateDate": "2015-07-15T11:07:16Z",
            "UserId": "AIDAJ2LEXFRXJI5AKUU7W",
            "Arn": "arn:aws:iam::930517348725:user/bambi"
        }
    ]
}
Then set up an S3 bucket policy. These apply to the bucket and are set per bucket. Normal IAM policies are set per IAM entity and are attached to the IAM entity, for instance the user. You already have IAM policies. For this requirement an S3 bucket policy is needed.
Just to emphasise: S3 bucket policies apply to the bucket and are "attached" to S3, while IAM policies apply to IAM and are associated with IAM entities. When IAM entities try to use an S3 bucket, both S3 policies and IAM policies can apply. See http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
Once you know the ARN of the principal, add an S3 bucket policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::930517348725:user/bambi" },
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
    ]
}
Note that a bucket policy can only reference its own bucket, so the "deny everything else" half has to live in an IAM policy attached to the user instead: a Deny on s3:* with a NotResource listing the allowed bucket, for instance:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "block",
            "Effect": "Deny",
            "Action": ["s3:*"],
            "NotResource": [
                "arn:aws:s3:::examplebucket",
                "arn:aws:s3:::examplebucket/*"
            ]
        }
    ]
}
I haven't tested this but that is the general idea. Sorry I didn't use "data1" for both the principal and bucket name in the example but it's too confusing..:)
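Once the policy document is ready, it is applied to the bucket itself, e.g.:
$ aws s3api put-bucket-policy --bucket examplebucket --policy file://policy.json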
For write-only access you can attach a policy like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
but it reads like you want to do more than just write?