Signature does not match : Amazon S3 bucket creation from terraform - amazon-web-services

I wanted to create a bucket and then have something like folder1 as a folder inside it (equivalent to the "Create folder" action on a bucket in the AWS console).
I am trying to do this with the following Terraform code:
resource "aws_s3_bucket" "bucket_create1" {
  bucket = "test_bucket/folder1/"
  acl    = "private"
}
I am getting the following error :
Error creating S3 bucket: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
How can I resolve this?

Don't create the folder as part of the bucket name. Bucket names cannot contain slashes (which is what breaks the request signature here); in S3, "folders" are just key prefixes on object names, not part of the bucket. Create the bucket on its own:
resource "aws_s3_bucket" "bucket_create1" {
  bucket = "test_bucket"
  acl    = "private"
}
Note that underscores are not allowed in bucket names either, so you will also need a name like test-bucket.
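If you still want an empty folder1/ to appear in the console (what the "Create folder" button actually creates), a minimal sketch using the same bucket resource — untested; the trailing slash in the key is what makes S3 display it as a folder:
```hcl
# "Folders" in S3 are just zero-byte objects whose key ends in "/".
resource "aws_s3_bucket_object" "folder1" {
  bucket  = aws_s3_bucket.bucket_create1.id
  key     = "folder1/"
  content = ""
}
```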

Related

Permission denied when creating an S3 bucket with Terraform

I am having permission issues when deploying resources to AWS with Terraform.
I have a Terraform template that deploys an S3 bucket, and I get a 403 Access Denied error when creating the bucket.
I first used my personal keys (admin rights), and have since also tried the root keys, with the same result.
Terraform version is 0.14.
Any help is appreciated // Timme
This is the terraform snippet that creates the S3 bucket:
resource "aws_s3_bucket" "root_bucket" {
  bucket = var.bucket_name
  acl    = "public-read"
  policy = templatefile("templates/s3-policy.json", { bucket = var.bucket_name })

  website {
    redirect_all_requests_to = "https://www.${var.domain_name}"
  }

  tags = var.common_tags
}

Creating IAM user via terraform and upload the secret key and access key in S3 bucket

I have written Terraform code to create an IAM user, and my requirement is to store the user's access key and secret key in an S3 bucket. I have tried doing this with S3 CLI commands, but without much success. Any suggestions would be appreciated.
I want to point out that storing credentials in S3 can be dangerous if the bucket is not configured correctly.
Make sure you understand how IAM policies and access control in S3 work: https://docs.aws.amazon.com/IAM/latest/UserGuide/access.html
With that out of the way, this is what I have come up with:
# The user to which we will grant access to S3
resource "aws_iam_user" "user" {
  name = "s3-user"
  path = "/"
}

# Create the access key
resource "aws_iam_access_key" "key" {
  user = aws_iam_user.user.name
}

# Create the bucket for storing tokens
# (bucket names may not contain underscores, hence the hyphens)
resource "aws_s3_bucket" "token" {
  bucket = "my-token-bucket"
  acl    = "private"
}

# Create the object inside the token bucket
resource "aws_s3_bucket_object" "tokens" {
  bucket                 = aws_s3_bucket.token.id
  key                    = "keys.txt"
  server_side_encryption = "AES256"
  content_type           = "text/plain"

  content = <<EOF
access_id: ${aws_iam_access_key.key.id}
access_secret: ${aws_iam_access_key.key.secret}
EOF
}
I haven't tested this.
You can also use a local-exec provisioner to run the copy with the AWS CLI:
resource "null_resource" "s3_copy" {
  provisioner "local-exec" {
    command = "aws s3 cp keys.txt s3://bucket/keys"
  }
}
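If you go the local-exec route, something still has to create keys.txt first. A sketch (also untested) using the local_file resource from the hashicorp/local provider, reusing the access key resource above:
```hcl
# Write the credentials to a local file that the
# "aws s3 cp" command above can then upload.
resource "local_file" "keys" {
  filename = "${path.module}/keys.txt"
  content  = <<EOF
access_id: ${aws_iam_access_key.key.id}
access_secret: ${aws_iam_access_key.key.secret}
EOF
}
```
Keep in mind that either approach leaves the secret in plain text in the Terraform state (and, here, on local disk as well).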

How to add public permissions for S3 bucket via terraform

I am creating an S3 bucket using the Terraform template below, and want to apply some (2 out of 4) of the public-access permissions to the bucket. Please suggest how to do that.
Terraform template for the S3 bucket:
resource "aws_s3_bucket" "example" {
  bucket = "example"
}
Now I want to tick 2 of the 4 public-access permissions shown on the bucket's Permissions tab in the AWS console. Please suggest how to apply those 2 permissions to my bucket.
You have to use aws_s3_bucket_public_access_block. For example:
resource "aws_s3_bucket" "example" {
  bucket = "example"
}

resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls   = true
  block_public_policy = true
}
The above are bucket-level settings. There are also account-level settings, which you can change using aws_s3_account_public_access_block.
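For reference, a sketch of the account-level equivalent with all four settings spelled out (account_id is optional and defaults to the caller's account):
```hcl
# Account-wide S3 public access settings; these apply on top of
# any per-bucket aws_s3_bucket_public_access_block settings.
resource "aws_s3_account_public_access_block" "example" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```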

Why does an S3 bucket created in Terraform need a bucket policy to grant access to Lambda

We use a combination of CloudFormation and Terraform: some common resources like DynamoDB and S3 are created using Terraform, and others like API Gateway are created using Serverless and CloudFormation. All resources are in the same AWS account.
I have an S3 bucket in terraform
resource "aws_s3_bucket" "payment_bucket" {
  bucket = "payment-bucket-${var.env_name}"
  acl    = "private"

  tags = merge(
    module.tags.base_tags,
    {
      "Name" = "payment-bucket-${var.env_name}"
    }
  )

  lifecycle {
    ignore_changes = [tags]
  }
}
This creates a private bucket payment-bucket-dev in my AWS account when I run terraform apply.
We have an API Gateway in the same AWS account, created using Serverless. One of the Lambdas needs access to this bucket, so I have created an IAM role for the Lambda function to grant it permission to access the bucket.
makePayment:
  name: makePayment-${self:provider.stage}
  handler: src/handler/makePayment.default
  events:
    - http:
        path: /payment
        method: post
        private: true
        cors: true
  iamRoleStatementsName: ${self:service}-${self:provider.stage}-makePayment-role
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
      Resource:
        - arn:aws:s3:::#{AWS::Region}:#{AWS::AccountId}:payment-bucket-${self:provider.stage}/capture/batch/*
But when I run this Lambda (make-payment-dev), it throws an AccessDenied error unless I add a bucket policy granting access to the Lambda's role:
resource "aws_s3_bucket_policy" "payment_service_s3_bucket_policy" {
..
..
}
Why do I need to add an S3 bucket policy when the S3 bucket, the Lambda function, and the role are all in the same account? Am I missing something?
Also, if I create the bucket using AWS::S3::Bucket as part of the CloudFormation stack the API Gateway is in (we are using Serverless), I don't need to add a bucket policy and it all works fine.
I think the problem is simply that the S3 bucket ARN is incorrect.
S3 bucket ARNs do not contain account IDs or regions; the format is arn:aws:s3:::mybucket/myprefix/*. So in your case the Resource should be arn:aws:s3:::payment-bucket-${self:provider.stage}/capture/batch/*.
The answer also depends on which AWS IAM role applies the Terraform plan, because the canned ACL "private" restricts bucket access as follows: the owner gets FULL_CONTROL, and no one else has access rights (the default). See the documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
You have to be relatively explicit at this point about who can access the bucket. When I go with a private ACL but want every other role in my AWS account to have access, I usually attach a bucket policy to the Terraform aws_s3_bucket resource to first allow access to the bucket, and then explicitly grant the Lambda's role access to that bucket via a separate inline policy.
In your case it would look something like this:
// Allow access to the bucket
data "aws_iam_policy_document" "bucket_policy" {
  statement {
    // Sids may only contain letters and numbers
    sid = "S3BucketPolicyForAccountAccess"

    actions = [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
    ]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::{your_account_id_here}:root",
      ]
    }

    // Bucket names may not contain underscores
    resources = [
      "arn:aws:s3:::test-bucket-name",
      "arn:aws:s3:::test-bucket-name/*",
    ]

    condition {
      // ArnLike is needed here: StringEquals does not expand wildcards
      test     = "ArnLike"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::{your_account_id_here}:role/*"]
    }
  }
}

resource "aws_s3_bucket" "this" {
  bucket = "test-bucket-name"
  acl    = "private"
  policy = data.aws_iam_policy_document.bucket_policy.json
}

// Grant the lambda's IAM role permissions to the bucket
data "aws_iam_policy_document" "grant_bucket_access" {
  statement {
    sid = "AccessToTheAppAuxFilesBucket"

    actions = [
      "s3:ListBucket",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject",
    ]

    resources = [
      "arn:aws:s3:::test-bucket-name/*",
      "arn:aws:s3:::test-bucket-name",
    ]
  }
}

// Data call to pull the lambda's IAM role
data "aws_iam_role" "cloudformation_provisioned_role" {
  name = "the_name_of_the_lambdas_iam_role"
}

resource "aws_iam_role_policy" "iam_role_inline_policy" {
  name = "s3_bucket_access"
  // aws_iam_role_policy expects the role name, not its ARN
  role   = data.aws_iam_role.cloudformation_provisioned_role.name
  policy = data.aws_iam_policy_document.grant_bucket_access.json
}
There is also an open bug here: acl and force_destroy are not imported correctly by terraform import: https://github.com/hashicorp/terraform-provider-aws/issues/6193

terraform multiple providers not working with s3 bucket

I'm trying to do this:
terraform {
  backend "s3" {
    bucket = "resources"
    region = "us-east-1"
    key    = "resources"
  }
}

// The default region
provider "aws" {
  region = "us-west-2"
}

// For creating buckets in other regions
// (the region parameter does not work on the aws_s3_bucket resource)
provider "aws" {
  alias  = "east1"
  region = "us-east-1"
}

resource "aws_s3_bucket" "zzzzz" {
  provider = "aws.east1"

  bucket        = "zzzzz"
  acl           = "private"
  force_destroy = true
}
And getting error
Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
I just needed to wait an hour or more, because I had recreated the bucket in a different region; it takes S3 a while to propagate that change.
This may also happen if your bucket name is not globally unique (unique across all AWS accounts, not just within your own). Trying a different (usually longer) name should help.
This error is related to your S3 bucket name. In my case, the bucket was named my_bucket (note that underscores are not allowed in bucket names either).
When I changed it to a more detailed name (my-project-s3-state-bucket), the error disappeared.
So, in conclusion, your S3 bucket name should be globally unique.
PS: Yeah, I agree that the Terraform/AWS provider error message isn't easy to understand.
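One common way to guarantee a globally unique name, sketched here with the hashicorp/random provider (resource names are illustrative):
```hcl
# A short random suffix makes the bucket name globally unique.
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "state" {
  bucket = "my-project-s3-state-bucket-${random_id.suffix.hex}"
}
```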