I am trying to deploy a simple Flask application on Elastic Beanstalk using Terraform.
I am using Terraform's standard resource for an Elastic Beanstalk environment, aws_elastic_beanstalk_environment.
I am able to deploy my application successfully; however, during deployment Elastic Beanstalk creates an S3 bucket (elasticbeanstalk-region-account-id) which is not encrypted by default.
I want to change this behaviour and make sure this bucket is encrypted when it gets created. Which setting do I use to accomplish this? I could not find the relevant setting for this. Any ideas?
By default, AWS Elastic Beanstalk creates an unencrypted bucket, so the aws_elastic_beanstalk_environment resource cannot do anything about that on its own.
From the AWS docs:
Elastic Beanstalk doesn't turn on default encryption for the Amazon S3
bucket that it creates. This means that by default, objects are stored
unencrypted in the bucket (and are accessible only by authorized
users). Some applications require all objects to be encrypted when
they are stored—on a hard drive, in a database, etc. (also known as
encryption at rest). If you have this requirement, you can configure
your account's buckets for default encryption
So you need to enable it yourself. Try the following:
After you create the Beanstalk environment, look up the S3 bucket that Beanstalk created and enable server-side encryption on it with the Terraform resource aws_s3_bucket_server_side_encryption_configuration:
resource "aws_kms_key" "mykey" {
description = "This key is used to encrypt bucket objects"
deletion_window_in_days = 10
}
data "aws_s3_bucket" "mybucket" {
bucket = "elasticbeanstalk-region-account-id" # here change the value with your information
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = data.aws_s3_bucket.mybucket
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.mykey.arn
sse_algorithm = "aws:kms"
}
}
}
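One caveat worth noting: the data lookup only succeeds once the Beanstalk-created bucket actually exists. If the environment is managed in the same configuration, one option (a sketch, assuming your environment resource is hypothetically named aws_elastic_beanstalk_environment.app) is to add an explicit depends_on, so the data block above becomes:

data "aws_s3_bucket" "mybucket" {
  bucket = "elasticbeanstalk-region-account-id" # change this to your bucket name

  # Wait until the environment (and therefore its bucket) has been created;
  # "app" is a placeholder for your own aws_elastic_beanstalk_environment name.
  depends_on = [aws_elastic_beanstalk_environment.app]
}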
The new recommendation from AWS is to disable ACLs by default so that object ownership defaults to the bucket owner. How can I achieve this with the aws_s3_bucket resource in Terraform?
I tried the following, without success:
resource "aws_s3_bucket_acl" "example_bucket_acl" {
bucket = aws_s3_bucket.s3-bucket.id
acl = "private"
expected_bucket_owner = data.aws_caller_identity.current.account_id
}
data "aws_caller_identity" "current" {}
This code sets the ACL so that only the bucket owner can read and write the bucket and the objects within it, but the object ownership configuration is still set to "object writer". Furthermore, ACLs are not disabled as a result of setting this.
Terraform's documentation on the S3 ACL resource does not show any examples or provide any arguments for disabling ACLs.
I tried to brute force the solution by running terraform plan after manually changing the settings in AWS to see what differences I would get from the plan, but it says my infrastructure matches the configuration.
Does anyone have any ideas how this can be done? I'm currently using Terraform CLI v1.3.5 and AWS provider v4.40.0.
This is set using aws_s3_bucket_ownership_controls, not with aws_s3_bucket_acl. You can set the control to BucketOwnerEnforced.
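A minimal sketch of that, assuming the bucket resource is named aws_s3_bucket.s3-bucket as in the question:

resource "aws_s3_bucket_ownership_controls" "example_ownership" {
  bucket = aws_s3_bucket.s3-bucket.id

  rule {
    object_ownership = "BucketOwnerEnforced"
  }
}

With BucketOwnerEnforced, ACLs are disabled and the bucket owner automatically owns every object, so the aws_s3_bucket_acl resource is no longer needed and can be removed.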
I created an S3 bucket in Terraform. However, after creating this bucket, I am getting the error:
error getting S3 Bucket Object Lock configuration: AccessDenied: Access Denied
I am using AWS Academy, so I do not have many permissions; however, there is a role in AWS Academy that allows the user to work with S3. Is there a way to attach this IAM role to the S3 bucket so it can be accessed via Terraform?
I would like to upload images to this bucket, but I can no longer deploy code because Terraform tries to read the Object Lock configuration, which it does not have access to. Is there a way to tell Terraform not to try to get this information?
Here is my code
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
tags = {
Name = "My bucket"
Environment = "Dev"
}
}
Image of Console
It seems that you have enabled Object Lock on your bucket, which prevents you from writing or deleting any files in your S3 bucket.
One way forward is to disable it from the console and then refresh Terraform's state (for example with terraform apply -refresh-only).
I'm trying to set up a remote Terraform backend in S3. I was able to create the bucket, but I used bucket_prefix instead of bucket to define my bucket name. I did this to ensure code re-usability within my org.
My issue is that I've been having trouble referencing the new bucket in my Terraform backend config. I know that I can hard-code the name of the bucket that I created, but I would like to reference the bucket similarly to other resources in Terraform.
Would this be possible?
I've included my code below:
# configure terraform to use s3 as the backend
terraform {
  backend "s3" {
    bucket = "aws_s3_bucket.my-bucket.id"
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}
AWS S3 Resource definition
resource "aws_s3_bucket" "my-bucket" {
bucket_prefix = var.bucket_prefix
acl = var.acl
lifecycle {
prevent_destroy = true
}
versioning {
enabled = var.versioning
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = var.sse_algorithm
}
}
}
}
Terraform needs a valid backend configuration when the initialization step happens (terraform init), meaning that you have to have an existing bucket before being able to provision any resources (before the first terraform apply).
If you do a terraform init with a bucket name that does not exist, you get this error:
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
This is self-explanatory. It is not really possible to have the S3 bucket used for the backend also defined as a Terraform resource in the same configuration. While you can certainly use terraform import to bring an existing bucket into the state, I would NOT recommend importing the backend bucket.
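For the backend itself, the bucket name has to be a literal value (the backend block cannot interpolate resource attributes). A minimal sketch, using a hypothetical pre-existing bucket named my-org-terraform-state:

terraform {
  backend "s3" {
    # This bucket must already exist before terraform init runs.
    bucket = "my-org-terraform-state" # hypothetical name; replace with your real bucket
    key    = "terraform/terraform.tfstate"
    region = "ca-central-1"
  }
}

If you would rather not hard-code the name, you can omit bucket from the block and supply it at init time with terraform init -backend-config="bucket=my-org-terraform-state" (partial backend configuration).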
I was trying to create a remote backend for my Terraform state in an S3 bucket.
provider "aws" {
version = "1.36.0"
profile = "tasdik"
region = "ap-south-1"
}
terraform {
backend "s3" {
bucket = "ops-bucket"
key = "aws/ap-south-1/homelab/s3/terraform.tfstate"
region = "ap-south-1"
}
}
resource "aws_s3_bucket" "ops-bucket" {
bucket = "ops-bucket"
acl = "private"
versioning {
enabled = true
}
lifecycle {
prevent_destroy = true
}
tags {
Name = "ops-bucket"
Environmet = "devel"
}
}
I haven't applied anything yet, and the bucket is not present as of now. So Terraform asks me to do an init. But when I try to do so, I get:
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error loading state: BucketRegionError: incorrect region, the bucket is not in 'ap-south-1' region
status code: 301, request id: , host id:
Terraform will initialise the state backend before any other actions such as a plan or apply. Thus you can't create the S3 bucket that your state will be stored in at the same time as you define the state backend.
Terraform also won't create an S3 bucket for you to put your state in, you must create this ahead of time.
You can either do this outside of Terraform such as with the AWS CLI:
aws s3api create-bucket --bucket "${BUCKET_NAME}" --region "${BUCKET_REGION}" \
--create-bucket-configuration LocationConstraint="${BUCKET_REGION}"
Or you could create it via Terraform, as you are trying to do, but use local state for creating the bucket on the first apply, then add the backend configuration and re-init to have Terraform migrate the state to your new S3 bucket.
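A rough sketch of that two-step flow, using a hypothetical globally unique bucket name:

# Step 1: with no backend "s3" block configured yet, create the state bucket
# using local state.
resource "aws_s3_bucket" "state" {
  bucket = "my-unique-state-bucket-123456789012" # hypothetical name; must be globally unique

  lifecycle {
    prevent_destroy = true
  }
}

# Step 2: after the first apply succeeds, add this block and run terraform init
# again, confirming the prompt to copy the existing local state into the new
# backend (newer Terraform versions also accept `terraform init -migrate-state`).
terraform {
  backend "s3" {
    bucket = "my-unique-state-bucket-123456789012"
    key    = "aws/ap-south-1/homelab/s3/terraform.tfstate"
    region = "ap-south-1"
  }
}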
As for the error message, S3 bucket names are globally unique across all regions and all AWS accounts. The error message is telling you that it ran the GetBucketLocation call but couldn't find a bucket in ap-south-1. When creating your buckets I recommend making sure they are likely to be unique by doing something such as concatenating the account ID and possibly the region name into the bucket name.
I have set an S3 bucket policy in my AWS account via the web console:
https://i.stack.imgur.com/sppyr.png
My issue is that the Java code of my web app, when run on my local laptop, uploads images to S3 successfully:
final AmazonS3 s3 = new AmazonS3Client(
        new AWSStaticCredentialsProvider(new BasicAWSCredentials("accessKey*", "secretKey")));
s3.setRegion(Region.US_West.toAWSRegion());
s3.setEndpoint("s3-us-west-1.amazonaws.com");
versionId = s3.putObject(new PutObjectRequest("bucketName", name, convFile)).getVersionId();
But when I deploy my web app to Elastic Beanstalk, it does not successfully upload images to S3.
So should I set the S3 bucket policy programmatically in my Java code as well?
PS: Additional details that may be useful: Why am I able to upload to AWS S3 from my localhost, but not from my AWS Elastic BeanStalk instance?
Your S3 bucket policy is too permissive. You should delete it asap.
Instead of explicitly supplying credentials to your Elastic Beanstalk app in code, you should create an IAM role that the Elastic Beanstalk app will assume. That IAM role should have an attached IAM policy that allows appropriate access to your S3 bucket and to the objects in the bucket.
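If you manage the environment with Terraform, as elsewhere in this thread, a minimal sketch of such a role might look like the following; the resource names and the bucket ARN are placeholders, not your actual values:

# Role that the EC2 instances in the Beanstalk environment will assume.
resource "aws_iam_role" "app_role" {
  name = "beanstalk-app-role" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Allow the app to read and write objects in its own bucket only.
resource "aws_iam_role_policy" "app_s3" {
  name = "app-s3-access"
  role = aws_iam_role.app_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:PutObject", "s3:GetObject"]
      Resource = "arn:aws:s3:::bucketName/*" # replace with your bucket's ARN
    }]
  })
}

# Instance profile that the environment's instances launch with.
resource "aws_iam_instance_profile" "app_profile" {
  name = "beanstalk-app-profile" # hypothetical name
  role = aws_iam_role.app_role.name
}

You would then point the environment at the profile via a setting block on aws_elastic_beanstalk_environment with namespace aws:autoscaling:launchconfiguration and name IamInstanceProfile.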
When testing on your laptop, your app does not need to have credentials in the code. Instead, your app should leverage the fact that the AWS SDK will retrieve credentials for you from the environment that the app is running in. You should use the default credential provider chain.