I'm trying to build multiple S3 buckets, each with its own ACL configuration.
The problem is that I won't know the ID of each bucket until it is created, and I need the ID in order to set the ACL.
When I run terraform validate with the code below there is no error. But when I run terraform plan, it tries to look up an id attribute inside the values I configured for each bucket, and since id does not exist there it returns an error.
locals {
  bucket_settings = {
    bucket-code-pipeline = {
      name = "cache-codepipeline-${var.env}-bucket-01"
      acl  = "private"
    },
    bucket-alb = {
      name = "alb-logs-${var.env}-bucket-02"
      acl  = "private"
    }
  }
}

resource "aws_s3_bucket" "bucket" {
  for_each = local.bucket_settings
  bucket   = each.value.name
}

resource "aws_s3_bucket_acl" "acl" {
  for_each = local.bucket_settings
  bucket   = local.bucket_settings[each.value.id]
  acl      = each.value.acl
}
➜ s3 git:(master) ✗ terraform validate
Success! The configuration is valid.
➜ s3 git:(master) ✗ terraform plan
var.env
Enter a value: dev
╷
│ Error: Unsupported attribute
│
│ on s3-buckets.tf line 37, in resource "aws_s3_bucket_acl" "acl":
│ 37: bucket = local.bucket_settings[each.value.id]
│ ├────────────────
│ │ each.value is object with 2 attributes
│
│ This object does not have an attribute named "id".
I'd like to understand why I can't access the bucket ID through each.value.id.
each.value here is just the value from local.bucket_settings for the current key, and that object only has the name and acl attributes you declared; the bucket ID only exists on the aws_s3_bucket resource once it has been created. Instead of
bucket = local.bucket_settings[each.value.id]
it should be
bucket = aws_s3_bucket.bucket[each.key].id
or
bucket = each.value.name
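Put together, the ACL resource would look like this (a sketch reusing the locals from the question):

resource "aws_s3_bucket_acl" "acl" {
  for_each = local.bucket_settings

  # Referencing the bucket created for the same map key gives Terraform an
  # implicit dependency, so the ACL is only applied after the bucket exists.
  bucket = aws_s3_bucket.bucket[each.key].id
  acl    = each.value.acl
}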
Related
I am trying to create a container registry and add the service account with the OWNER role by changing the google_storage_bucket_acl.
According to the docs, the name of that bucket can be accessed via google_container_registry.<name>.id.
resource "google_container_registry" "registry" {
project = var.project_id
location = "EU"
}
resource "google_storage_bucket_acl" "image_store_acl" {
depends_on = [google_container_registry.registry]
bucket = google_container_registry.registry.id
role_entity = [
"OWNER:${local.terraform_service_account}",
]
}
$terraform plan
..
Terraform will perform the following actions:
# google_storage_bucket_acl.image_store_acl will be created
+ resource "google_storage_bucket_acl" "image_store_acl" {
+ bucket = "eu.artifacts.dev-00-ebcd.appspot.com"
+ id = (known after apply)
+ role_entity = [
+ "OWNER:terraform-service-account#dev-00-ebcd.iam.gserviceaccount.com",
]
}
Plan: 1 to add, 0 to change, 0 to destroy.
However, when I run terraform apply, this is the error I get:
google_storage_bucket_acl.image_store_acl: Creating...
╷
│ Error: Error updating ACL for bucket eu.artifacts.dev-00-ebcd.appspot.com: googleapi: Error 400: Invalid argument., invalid
│
│ with google_storage_bucket_acl.image_store_acl,
│ on docker.tf line 6, in resource "google_storage_bucket_acl" "image_store_acl":
│ 6: resource "google_storage_bucket_acl" "image_store_acl" {
│
╵
I am creating a Lambda function that has its handler code stored in an S3 bucket, and I am using Terraform to create both resources.
It appears the S3 bucket is dependent on the Lambda's ARN output so that I can set the correct Principal config for the bucket.
The Lambda is also dependent on the S3 bucket existing so I can configure the bucket that stores the handler code.
I have two modules creating the required resources:
# S3 Bucket module
resource "aws_s3_bucket" "s3-lambda" {
  bucket = var.bucket_name
  acl    = "private"
  policy = data.aws_iam_policy_document.s3_lambda_permissions.json

  tags = {
    Name        = var.tag_name
    Environment = var.env_name
  }
}

# Lambda module
resource "aws_lambda_function" "redirect_lambda" {
  s3_bucket     = var.bucket_name
  s3_key        = var.key
  handler       = var.handler
  runtime       = var.runtime
  role          = aws_iam_role.redirect_lambda.arn
  function_name = "redirect_lambda-${var.env_name}"
  publish       = true
}
I am then calling these modules in my main.tf
module "qr_redirect_lambda" {
source = "./modules/qr-redirect"
env_name = var.env_name
bucket_name = var.qr_redirect_lambda_bucket_name
key = var.lambda_key
runtime = var.lambda_runtime_16
handler = var.lambda_handler
tag_name = "tag name
}
How can I create these 2 resources that are codependent on each other?
Error output:
Error: error creating Lambda Function (1): InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "xxx-xxx"
│ },
│ Message_: "Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist",
│ Type: "User"
│ }
│
│ with module.qr_redirect_lambda.aws_lambda_function.qr_redirect_lambda,
│ on modules/qr-lambda/main.tf line 1, in resource "aws_lambda_function" "qr_redirect_lambda":
│ 1: resource "aws_lambda_function" "qr_redirect_lambda" {
I think you can do it in three stages instead of two (sketched below):
1. Create the bucket without the bucket policy.
2. Create the Lambda. You can use depends_on to create the Lambda only after the bucket.
3. Use aws_s3_bucket_policy to create the bucket policy.
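A rough sketch of that shape, ignoring the question's module split for brevity and reusing the resource and data source names from the question:

# Stage 1: the bucket is created first, with no policy attached yet.
resource "aws_s3_bucket" "s3-lambda" {
  bucket = var.bucket_name
}

# Stage 2: the Lambda waits for the bucket that holds its deployment package.
resource "aws_lambda_function" "redirect_lambda" {
  depends_on = [aws_s3_bucket.s3-lambda]

  s3_bucket     = aws_s3_bucket.s3-lambda.id
  s3_key        = var.key
  handler       = var.handler
  runtime       = var.runtime
  role          = aws_iam_role.redirect_lambda.arn
  function_name = "redirect_lambda-${var.env_name}"
  publish       = true
}

# Stage 3: the policy is attached last, so it can reference the Lambda's ARN.
resource "aws_s3_bucket_policy" "s3-lambda" {
  bucket = aws_s3_bucket.s3-lambda.id
  policy = data.aws_iam_policy_document.s3_lambda_permissions.json
}

The point is that the bucket policy, which needs the Lambda's ARN, no longer has to exist before the Lambda is created.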
This is the detailed error I get after terraform apply
│ Error: Error validating S3 bucket name: only lowercase alphanumeric characters and hyphens allowed in "$var.bucket"
│
│ with module.s3.aws_s3_bucket.b,
│ on modules/s3/main.tf line 1, in resource "aws_s3_bucket" "b":
│ 1: resource "aws_s3_bucket" "b" {
This is the module's main.tf (modules/s3/main.tf):
resource "aws_s3_bucket" "b" {
bucket = "$var.bucket"
tags = {
Name = "$var.tag"
Environment = "$var.environment"
}
}
This is variables.tf:
variable "bucket" {
description = "The Dev Environment"
}
variable "tag" {
description = "Tagging the bucket"
}
variable "environment" {
description = "Environment of the bucket"
}
This is the root main.tf where the module is called:
module "s3" {
source = "./modules/s3"
environment = var.environment
bucket = var.bucket
tag = var.tag
}
I tried fixing the error, but I don't understand it because everything seems correct to me.
Instead of
bucket = "$var.bucket"
it should be
bucket = var.bucket
"$var.bucket" is treated as a literal string: Terraform only interpolates with the "${var.bucket}" syntax, and since the whole value is just the variable you can drop the quotes entirely. Also, the value of the bucket variable has to satisfy S3 naming requirements (only lowercase alphanumeric characters and hyphens). Try with something like:
variable "bucket" {
  description = "The Dev Environment"
  default     = "the-dev-environment-434331"
}
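For completeness, the same fix applies to the tag values, so the module's main.tf would end up looking roughly like this:

resource "aws_s3_bucket" "b" {
  bucket = var.bucket

  tags = {
    Name        = var.tag
    Environment = var.environment
  }
}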
I am using shared_cred_file for the AWS provider. With AWS provider version 3.63, for example, terraform plan works fine.
When I use AWS provider 4.0 it prompts me to switch to shared_credentials_files. After applying that change the deprecation warning is gone, but the second error (no valid credential sources) remains.
What could be the problem?
Warning: Argument is deprecated
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 15, in provider "aws":
│ 15: shared_credentials_file = "~/.aws/credentials"
│
│ Use shared_credentials_files instead.
│
│ (and one more similar warning elsewhere)
╵
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
│
///////////////////////////////
// Infrastructure init
terraform {
  backend "s3" {
    bucket                  = "monitoring-********-infrastructure"
    key                     = "tfstates/********-non-prod-rds-info.tfstate"
    profile                 = "test-prof"
    region                  = "eu-west-2"
    shared_credentials_file = "~/.aws/credentials"
  }
}

provider "aws" {
  profile                  = "test-prof"
  shared_credentials_files = ["~/.aws/credentials"]
  region                   = "eu-west-2"
}
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: no EC2 IMDS role found, operation error ec2imds: GetMetadata, canceled, context deadline exceeded
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 13, in provider "aws":
│ 13: provider "aws" {
cat config
[test-prof]
output = json
region = eu-west-2
cat credentials
[test-prof]
aws_access_key_id = ****************
aws_secret_access_key = ******************
According to the latest Terraform documentation, this is how it should work:
provider "aws" {
region = "us-east-1"
shared_credentials_files = ["C:/Users/tf_user/.aws/credentials"]
profile = "customprofile"
}
I had the same issue; this worked for me.
Changing
provider "aws" {
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}
to
provider "aws" {
  shared_credentials_file = "/Users/me/.aws/credentials"
  profile                 = "default"
  region                  = "us-east-1"
}
worked for me. Terraform does not expand $HOME inside a quoted string, so the first version points the provider at a literal "$HOME/.aws/credentials" path that does not exist.
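If you would rather not hard-code the home directory, Terraform's built-in pathexpand function can expand the leading ~ for you; a small sketch of that variant:

provider "aws" {
  # pathexpand() resolves the leading "~" to the current user's home
  # directory, so the path is not tied to one machine.
  shared_credentials_file = pathexpand("~/.aws/credentials")
  profile                 = "default"
  region                  = "us-east-1"
}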
We stumbled on this issue in our pipelines after migrating the AWS provider from version 3 to 4.
So, for anyone using Azure DevOps or any other CI tool, the fix should be as easy as adding a new step in the pipeline that creates the shared credentials file:
mkdir -p $HOME/.aws
echo "[default]" >> $HOME/.aws/credentials
echo "aws_access_key_id = ${AWS_ACCESS_KEY_ID}" >> $HOME/.aws/credentials
echo "aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}" >> $HOME/.aws/credentials
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be defined as variables or secrets in your pipeline.
When you are using
provider "aws" {
  region                  = "your region"
  shared_credentials_file = "C:/Users/terraform/.aws/credentials"
  profile                 = "profile_name"
}
the path should point at your credentials file, which on Windows is %USERPROFILE%\.aws\credentials.
That was the format accepted at the time of this answer; there are other ways to provide credentials too:
1. You can put your credentials in a .tf file:
provider "aws" {
profile = "profile_name"
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
If you are working on a project and don't want to share your credentials with your teammates, you can pass them in as variables like this:
main.tf
provider "aws" {
profile = "profile_name"
region = "us-west-2"
access_key = var.access_key
secret_key = var.secret_key
}
variables.tf
variable "access_key" {
description = "My AWS access key"
}
variable "secret_key" {
description = "My AWS secret key"
}
You can either fill them in when Terraform prompts for them at apply time, or put their values in a file such as terraform.tfvars and add that file to .gitignore.
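A minimal terraform.tfvars sketch for that second option (the values are placeholders):

# terraform.tfvars -- keep this file out of version control
access_key = "my-access-key"
secret_key = "my-secret-key"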
You can find more options here.
I am trying to spin up an AWS bastion host on AWS EC2. I am using the Terraform module provided by Guimove. I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of the key pair gets created by the module, so the key isn't there when the module tries to launch the instance and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into the AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.