CodePipeline S3 bucket access denied in CodeBuild

Background:
I'm testing a CodePipeline with a source stage containing a GitHub source and a test stage containing a CodeBuild project. The GitHub source is authenticated with a CodeStar connection.
Problem:
When the CodeBuild project is triggered via the pipeline, the project is denied access to the associated CodePipeline S3 artifact bucket. Here's the log from the build:
AccessDenied: Access Denied
status code: 403, request id: 123, host id: 1234
for primary source and source version arn:aws:s3:::my-bucket/foo/master/foo123
Here's the statement of the CodeBuild service role policy that's relevant to the problem:
{
  "Sid": "CodePipelineArtifactBucketAccess",
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:ListObjects",
    "s3:ListBucket",
    "s3:GetObjectVersion",
    "s3:GetObject",
    "s3:GetBucketLocation"
  ],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
Attempts:
1. Changing the resource attribute in the policy above from arn:aws:s3:::my-bucket/* to arn:aws:s3:::my-bucket*. (Same Access Denied error)
2. Checking the associated artifact bucket's permissions. Currently, it's set to block all public access and there is no bucket policy attached. The bucket's ACL is set to allow the bucket owner (me) to have read/write access. (Same Access Denied error)
3. Given this is a test pipeline, giving both the CodeBuild service role and the CodePipeline service role full S3 access to all resources. (Same Access Denied error)

Adding the CodeBuild role ARN to the usage and grant statements of the CMK's key policy did the trick. I had mindlessly assumed that the CodeBuild project would inherit the CodePipeline role, which would have let it decrypt the CMK associated with the CodePipeline artifact bucket; in fact, CodeBuild acts under its own service role. Here are the relevant statements I changed in the CMK's key policy:
{
  "Sid": "GrantPermissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/codebuild-role",
      "arn:aws:iam::111111111111:role/codepipeline-role"
    ]
  },
  "Action": [
    "kms:RevokeGrant",
    "kms:ListGrants",
    "kms:CreateGrant"
  ],
  "Resource": "*",
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": "true"
    }
  }
},
{
  "Sid": "UsagePermissions",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::111111111111:role/codebuild-role",
      "arn:aws:iam::111111111111:role/codepipeline-role"
    ]
  },
  "Action": [
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:Encrypt",
    "kms:DescribeKey",
    "kms:Decrypt"
  ],
  "Resource": "*"
}
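Alternatively, instead of editing the key policy's grant/usage statements, a one-off grant can authorize the CodeBuild role on the key. This is only a sketch, assuming the caller already holds kms:CreateGrant on this CMK; the region and key ID are placeholders:
aws kms create-grant \
  --key-id arn:aws:kms:us-east-1:111111111111:key/[key-id] \
  --grantee-principal arn:aws:iam::111111111111:role/codebuild-role \
  --operations Decrypt DescribeKey GenerateDataKey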

Related

How to set up Terraform state on an encrypted S3 bucket

I have set up an S3 backend for Terraform state following this excellent answer by Austin Davis, and I followed Matt Lavin's suggestion to add a policy encrypting the bucket.
Unfortunately, with that bucket policy in place, terraform state list now throws:
Failed to load state: AccessDenied: Access Denied status code: 403, request id: XXXXXXXXXXXXXXXX, host id: XXXX...
I suspect I'm missing either some configuration on the Terraform side to encrypt the communication, or an additional policy entry that would allow reading the encrypted state.
This is the policy added to the tf-state bucket:
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
  "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}
I would start by removing that bucket policy and just enabling the newer default bucket encryption setting on the S3 bucket. If you still get access denied after doing that, then the IAM role you are using when you run Terraform is missing some permissions.
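For illustration, default encryption can be enabled from the CLI; this is a sketch with a placeholder bucket name:
aws s3api put-bucket-encryption \
  --bucket my-terraform-state-bucket \
  --server-side-encryption-configuration \
  '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
With default encryption enabled, the RequireEncryptedStorage deny statement becomes unnecessary, because S3 encrypts new objects even when the client doesn't send the x-amz-server-side-encryption header.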

Uploading to AWS S3 bucket from a profile in a different environment

I have access to one of two AWS environments, and in it I've created a protected S3 bucket that should receive uploads from an account in the environment I don't have access to. That environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      ...
    },
    {
      "Sid": "Allow access to bucket from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket*"
      ],
      "Resource": "arn:aws:s3:::content"
    },
    {
      "Sid": "Allow access to bucket items from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:Get*",
        "s3:PutObject",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
From inside a container that's configured for env1 and user/ci I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
1. Am I even using the correct aws command to upload the data to the bucket?
2. Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
To test your policy, I did the following:
1. Created an IAM user with no policies
2. Created an Amazon S3 bucket
3. Attached your bucket policy to the bucket, and updated the ARN and bucket name
4. Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to grant all the necessary permissions. It is possible that something else is denying access to the bucket.
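If you want to chase down a stray Deny, one option is the IAM policy simulator. A sketch, assuming you have iam:SimulatePrincipalPolicy rights; the ARNs are taken from the question:
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111122223333:user/ci \
  --action-names s3:ListBucket s3:PutObject \
  --resource-arns arn:aws:s3:::content arn:aws:s3:::content/*
Any "explicitDeny" or "implicitDeny" in the evaluation results points at the conflicting or missing policy.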
The solution was to grant the following policy to user/ci in env1:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
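For completeness, a sketch of attaching that as an inline policy from the CLI; the policy name is made up, and policy.json holds the document above:
aws iam put-user-policy \
  --user-name ci \
  --policy-name content-upload \
  --policy-document file://policy.json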

Cross-Account IAM Access Denied with GUI Client, but permitted via CLI

I am stuck provisioning end-user access to a cross-account shared bucket, and need help figuring out whether there are specific policy requirements for accessing the bucket with GUI clients, versus the plain CLI.
IAM User Accounts are managed in our "Core" AWS Account.
S3 Bucket is provisioned in our "Dev" AWS Account.
S3 Bucket in Dev account is encrypted with KMS key in Dev Account.
We have configured our Bucket Policy to permit the user access.
We have configured user policies to permit access to the S3 bucket.
We have configured user policies to permit use of the KMS key.
When using the CLI, our user account can successfully access and use the S3 bucket. When attempting to connect with a GUI client (WinSCP, Cyberduck, ForkLift for Mac) we receive permission denied errors.
BUCKET POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
          "arn:aws:iam::[COREACCOUNT#]:user/end.user"
        ]
      },
      "Action": "s3:List*",
      "Resource": [
        "arn:aws:s3:::dev-mybucket",
        "arn:aws:s3:::dev-mybucket/*"
      ]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
          "arn:aws:iam::[COREACCOUNT#]:user/end.user"
        ]
      },
      "Action": [
        "s3:GetObject",
        "s3:Put*"
      ],
      "Resource": "arn:aws:s3:::dev-mybucket/*"
    }
  ]
}
User Policy - access KMS
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUseOfDevAPPSKey",
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:Describe*"
      ],
      "Resource": [
        "arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
      ]
    },
    {
      "Sid": "AllowAttachmentOfPersistentResources",
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": [
        "arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
      ],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    }
  ]
}
User policy - Access S3 Bucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToMyBucket",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::dev-mybucket/",
        "arn:aws:s3:::dev-mybucket/*"
      ]
    }
  ]
}
Using aws s3 commands we can ls content and cp content from local to remote and from remote to local. When configuring access with the GUI clients, we always receive somewhat generic 'permission denied' or 'access denied' errors.
The GUI client is probably making a call that is not List*, Put* or GetObject.
For example, it might be calling GetObjectVersion, GetObjectAcl or GetBucketAcl.
Try adding Get* permissions in addition to List*.
You might also be able to look at the events in your AWS CloudTrail trail to see what specific API calls were denied.
For details, see: Specifying Permissions in a Policy - Amazon Simple Storage Service
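As a sketch of the CloudTrail route (note that object-level S3 calls only appear if data events are enabled on the trail; management calls such as GetBucketAcl are logged by default):
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=s3.amazonaws.com \
  --max-results 50
Look for entries whose errorCode is AccessDenied to see exactly which API call the GUI client is making.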
Access to an S3 bucket via a GUI, such as the AWS web console or file-transfer clients with S3 functionality (FileZilla, Cyberduck, ForkLift, etc.), requires the s3:ListAllMyBuckets action in a policy attached to that IAM user. This is very unfortunate, as the user can now see ALL your bucket names in that account, even if they only have read, write, and/or list access to a single bucket in that account.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
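A minimal sketch of the extra user-policy statement; s3:ListAllMyBuckets cannot be scoped to a single bucket, hence the broad resource:
{
  "Sid": "AllowBucketEnumeration",
  "Effect": "Allow",
  "Action": "s3:ListAllMyBuckets",
  "Resource": "*"
}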
One other option is to go to the bucket URL directly. The user/role will require access via that bucket's Bucket policy.
https://s3.console.aws.amazon.com/s3/buckets/dev-mybucket

CodeBuild access denied while downloading artifact from S3

My CodeBuild project is configured with CodePipeline, and S3 is my artifact store. I keep getting an Access Denied message despite having attached IAM roles with what should be sufficient access.
[Screenshot of the error message]
I have already checked the service role associated with CodeBuild. It has the following policy attached to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Build",
        "arn:aws:logs:ap-southeast-1:682905754632:log-group:/aws/codebuild/Build:*"
      ],
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::codepipeline-ap-southeast-1-*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ]
    }
  ]
}
But when I test it using the IAM policy validator, I still get an access denied error message.
Based on the accepted answer to a related question (AWS Codebuild fails while downloading source. Message: Access Denied), the policy that I currently have should allow me to get the artifacts from S3 without any problems.
How do I get rid of the access denied message?
This generally happens when you already have a CodeBuild project and then integrate it into a CodePipeline pipeline. Once integrated, the project retrieves its source from the CodePipeline source stage's output, which is stored in the artifact store location: an S3 bucket, either a default bucket created by CodePipeline or one you specify upon pipeline creation.
So you need to give the CodeBuild service role permission to access the CodePipeline bucket in S3; the role requires permission both to put objects into the bucket and to get objects from it.
Here is a policy I tried, which works:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CodeBuildDefaultPolicy",
      "Effect": "Allow",
      "Action": [
        "codebuild:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CloudWatchLogsAccessPolicy",
      "Effect": "Allow",
      "Action": [
        "logs:FilterLogEvents",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Sid": "S3AccessPolicy",
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:GetObject",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
References: IAM Policy Simulator; AWS documentation.
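If Resource "*" is broader than you want, the S3 statement can instead be scoped to the CodePipeline artifact bucket that appears in the question's own policy, something like:
{
  "Sid": "S3AccessPolicy",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectVersion",
    "s3:List*",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::codepipeline-ap-southeast-1-*",
    "arn:aws:s3:::codepipeline-ap-southeast-1-*/*"
  ]
}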

AWS CodeBuild can't sync to S3 bucket: ListObjects permission denied

In CodeBuild, I have two projects: one for a staging site and one for a production site. When I compile my site and run it through the staging project, it works fine; it syncs successfully to the S3 bucket for the staging site. However, when I compile it and run it through the production project, the sync command returns an error:
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
[Container] 2018/09/11 08:40:33 Command did not exit successfully aws s3 sync public/ s3://$S3_BUCKET exit status 1
I did some digging around, and I think the problem is with my bucket policy. I am using CloudFront as a CDN on top of my S3 bucket. I don't want to modify the bucket policy of the production bucket until I'm absolutely sure that I must, because I'm worried it might have some effect on the live site.
Here is my bucket policy for the production bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[bucket_name]/*"
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity [access_code]"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[bucket_name]/*"
    }
  ]
}
As per the error description, the list permission is missing. Add the actions below to your bucket policy (a full statement sketch follows the fragment):
"Action": [
"s3:Get*",
"s3:List*"
]
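In context, that fragment would sit in a complete bucket-policy statement along these lines; the principal ARN and bucket name are placeholders, following the question's bracket convention:
{
  "Sid": "AllowCodeBuildReadList",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::[account_id]:role/[codebuild_service_role]"
  },
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": [
    "arn:aws:s3:::[bucket_name]",
    "arn:aws:s3:::[bucket_name]/*"
  ]
}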
This should solve your issue. Also check the IAM service role that CodeBuild uses to access S3 buckets; in this kind of setup, the S3 bucket policy and the IAM role both control access to the bucket.
Your service role should have list permission for S3.
{
  "Sid": "S3ObjectPolicy",
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:List*"
  ],
  "Resource": [
    "arn:aws:s3:::my_bucket",
    "arn:aws:s3:::my_bucket/*"
  ]
}