I created an S3 bucket in Terraform. However, after creating this bucket, I am getting the error:
error getting S3 Bucket Object Lock configuration: AccessDenied: Access Denied
I am using AWS Academy, so I do not have many permissions; however, there is a role in AWS Academy that allows the user to work with S3. Is there a way to attach this IAM role to the S3 bucket so I can access it via Terraform?
I would like to upload images to this bucket, but I can no longer deploy because Terraform tries to read the Object Lock configuration, which it does not have permission to do. Is there a way to tell Terraform not to request this information?
Here is my code:
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Image of Console
So it seems that you have enabled Object Lock on your bucket, which prevents you from writing or deleting any files in your S3 bucket.
One way is to disable it from the console and then refresh the Terraform state.
Requirement: Create a SageMaker GroundTruth labeling job with input/output location pointing to an S3 bucket in another AWS account.
High-level steps followed: Let's say Account_A has the SageMaker GroundTruth labeling job and Account_B has the S3 bucket.
Create role AmazonSageMaker-ExecutionRole in Account_A with 3 policies attached:
AmazonSageMakerFullAccess
Account_B_S3_AccessPolicy: Policy with necessary S3 permissions to access S3 bucket in Account_B
AssumeRolePolicy: Assume role policy for arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role
Create role Cross-Account-S3-Access-Role in Account_B with 1 policy and 1 trust relationship attached:
S3_AccessPolicy: Policy with the necessary S3 permissions to access the S3 bucket in this Account_B
TrustRelationship: For principal arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole
Error: While trying to create the SageMaker GroundTruth labeling job with the IAM role AmazonSageMaker-ExecutionRole, it throws the error: AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.
In your high level step 2, the approach should change to using a resource policy on your S3 bucket that allows Account A to write to it, rather than expecting Account A to assume a role in Account B, which I don't believe SageMaker will do. Therefore the general approach is to do the following:
The Account A SageMaker service has an IAM policy that allows access to the Account B bucket (basically what you've done).
The Account B bucket is given a resource policy that allows Account A to access it (see the sketch below).
The following article gives additional help on this topic: How can I provide cross-account access to objects that are in Amazon S3 buckets?
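A minimal sketch of such a resource policy, written as Terraform for consistency with the other examples on this page; the bucket name, account ID and role name below are placeholders, not values from the question:
resource "aws_s3_bucket_policy" "cross_account_access" {
  # Applied in Account_B, on the bucket that Ground Truth needs to read and write.
  bucket = "Account_B_S3_bucket_name" # placeholder

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowAccountASageMakerRole"
        Effect = "Allow"
        # Placeholder account ID for Account_A
        Principal = {
          AWS = "arn:aws:iam::111111111111:role/AmazonSageMaker-ExecutionRole"
        }
        Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
        Resource = [
          "arn:aws:s3:::Account_B_S3_bucket_name",
          "arn:aws:s3:::Account_B_S3_bucket_name/*"
        ]
      }
    ]
  })
}
The same statement can list the console user's ARN as an additional principal, which matches the observation below that the console flow also checks the calling user's access.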
Reverted to the original approach, where access for the SageMaker execution role was provided through a direct S3 bucket policy.
While creating the GT job from the console:
(i) It expects the user creating the job to also have access to the data in the cross-account S3 bucket; updated the bucket policy to grant access to both the SageMaker execution role and the user.
(ii) It expects the manifest to be in the own account's S3 bucket; it fails with 403 if the manifest is in the cross-account S3 bucket, even though the SageMaker execution role has access to that bucket.
While creating the GT job from the CLI: the above restrictions don't apply and I was able to create the GT job.
I am trying to deploy a simple Flask application on Elastic Beanstalk using Terraform.
I am using Terraform's default resource for an Elastic Beanstalk environment - aws_elastic_beanstalk_environment.
I am able to deploy my application successfully; however, during deployment Elastic Beanstalk creates an S3 bucket - elasticbeanstalk-region-account-id - which is not encrypted by default.
I want to change this behaviour and make sure this bucket is encrypted when it gets created. Which setting do I use to accomplish this? I could not find the relevant setting for this. Any ideas?
By default, AWS Elastic Beanstalk creates an unencrypted bucket, so the aws_elastic_beanstalk_environment resource cannot do anything about it.
From the AWS docs:
Elastic Beanstalk doesn't turn on default encryption for the Amazon S3 bucket that it creates. This means that by default, objects are stored unencrypted in the bucket (and are accessible only by authorized users). Some applications require all objects to be encrypted when they are stored—on a hard drive, in a database, etc. (also known as encryption at rest). If you have this requirement, you can configure your account's buckets for default encryption.
So you need to enable it yourself; try the following.
After you create the Beanstalk environment, look up the S3 bucket created by Beanstalk and enable server-side encryption on it with the Terraform resource aws_s3_bucket_server_side_encryption_configuration:
resource "aws_kms_key" "mykey" {
description = "This key is used to encrypt bucket objects"
deletion_window_in_days = 10
}
data "aws_s3_bucket" "mybucket" {
bucket = "elasticbeanstalk-region-account-id" # here change the value with your information
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = data.aws_s3_bucket.mybucket
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.mykey.arn
sse_algorithm = "aws:kms"
}
}
}
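To avoid hardcoding the region and account ID in that data source, you could also build the bucket name from data sources. A small sketch, assuming the bucket follows the elasticbeanstalk-<region>-<account-id> naming pattern mentioned in the question:
data "aws_region" "current" {}
data "aws_caller_identity" "current" {}

# Assumes the default Elastic Beanstalk bucket naming pattern.
data "aws_s3_bucket" "eb_bucket" {
  bucket = "elasticbeanstalk-${data.aws_region.current.name}-${data.aws_caller_identity.current.account_id}"
}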
I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code reusability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hardcode the name of the bucket that I created, but I would like to reference the bucket the same way I reference other resources in Terraform.
Would this be possible?
I've included my code below:
# configure terraform to use s3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when the initialization step happens (terraform init), meaning that you have to have an existing bucket before being able to provision any resources (before the first terraform apply).
If you do a terraform init with a bucket name which does not exist, you get this error:
The referenced S3 bucket must have been previously created. If the S3 bucket was created within the last minute, please wait for a minute or two and try again.
This is self-explanatory. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
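If the goal is mainly to avoid hardcoding the generated bucket name in source control, one common workaround (not part of the answer above, so treat it as a suggestion) is Terraform's partial backend configuration: leave bucket out of the backend block and supply it at init time. A rough sketch:
# Backend block with the bucket intentionally left out
terraform {
  backend "s3" {
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}

# The generated bucket name (from bucket_prefix) is then passed at init time:
#   terraform init -backend-config="bucket=<your-generated-bucket-name>"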
I found an issue with an S3 bucket.
The bucket doesn't have any ACL associated with it, and the user that created the bucket was deleted.
How is it possible to add some ACL to the bucket to get control back?
For any command using the AWS CLI, the result is always the same: An error occurred (AccessDenied) when calling the operation: Access Denied
Access is also denied in the AWS console.
First things first: an AccessDenied error in AWS indicates that your AWS user does not have access to the S3 service, so make sure your IAM user account has S3 permissions.
Since you are using the CLI, also make sure the AWS access key and secret are still set correctly locally.
Now the interesting use case:
You have access to the S3 service but cannot access the bucket because the bucket has some policies set.
In this case, if the user who set the policies left and no other user is able to access this bucket, the best way is to ask the AWS root account holder to change the bucket permissions.
An IAM user with the managed policy named AdministratorAccess should be able to access all S3 buckets within the same AWS account, unless you have applied some unusual S3 bucket policy or ACL, in which case you might need to log in as the account's root user and modify that bucket policy or ACL.
See Why am I getting an "Access Denied" error from the S3 when I try to modify a bucket policy?
I just posted this on a related thread...
https://stackoverflow.com/a/73977525/999943
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-full-control-acl/
Basically, when putting objects as a non-bucket-owner, you need to set the ACL at the same time:
--acl bucket-owner-full-control
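If the upload happens from Terraform rather than the CLI, the equivalent is the acl argument on the object resource. A minimal sketch, with a placeholder bucket name and file path:
resource "aws_s3_object" "example" {
  # Upload from the non-owner account and hand full control to the bucket owner.
  bucket = "the-target-bucket"   # placeholder
  key    = "uploads/example.png" # placeholder
  source = "files/example.png"   # placeholder local file
  acl    = "bucket-owner-full-control"
}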
I've set up an S3 inventory report for a bucket; the data being analyzed is in bucket/data, while the inventory report is generated and stored in bucket/meta/inventory/.
Now I want to access it from another AWS account. I have created the IAM role policy for cross-account access, and I can copy/get files via the SDK or the AWS CLI only from the bucket/data/ prefix. If I try to get files created for the S3 inventory report, like the manifest.json file or any CSV file from the inventory report with a path like bucket/meta/inventory/.../data/report.csv, I get:
403 Access Denied
or via CLI
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied.
It is strange, as I have a policy that allows s3:ListBucket and s3:GetObject on the whole bucket for that IAM role, but it seems that the files created by the s3.amazonaws.com service, in this case all files from the inventory report, are not accessible to that IAM role.
Has someone encountered this? Can anyone suggest a fix?
I have found the issue: it seems that you must provide an "s3:x-amz-acl": "bucket-owner-full-control" StringEquals condition in the bucket policy statement for the S3 inventory destination, as stated here:
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-9
Otherwise, the ACL on the files from the inventory report will block any access from outside the account that owns the bucket where the inventory is saved.
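For reference, a rough sketch of such a destination-bucket policy, written as Terraform like the other examples here; the bucket name, prefix and source account ID are placeholders:
resource "aws_s3_bucket_policy" "inventory_destination" {
  bucket = "destination-bucket" # placeholder

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "InventoryDeliveryPolicy"
        Effect = "Allow"
        # The S3 service itself writes the inventory files.
        Principal = { Service = "s3.amazonaws.com" }
        Action    = "s3:PutObject"
        Resource  = "arn:aws:s3:::destination-bucket/meta/inventory/*"
        Condition = {
          StringEquals = {
            "aws:SourceAccount" = "111111111111"              # placeholder source account
            "s3:x-amz-acl"      = "bucket-owner-full-control" # the condition from the linked doc
          }
        }
      }
    ]
  })
}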