I successfully applied and deployed this script a week ago. I've made no changes since then, either to the script or to anything else it depends on. Running it this morning throws this:
Terraform v1.0.8
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
aws_iam_role.iam_for_lambda: Refreshing state... [id=iam_for_lambda]
aws_lambda_function.lambda: Refreshing state... [id=MissingPostedTransactions]
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # aws_iam_role_policy_attachment.tf_vpc_execution_policy will be created
  + resource "aws_iam_role_policy_attachment" "tf_vpc_execution_policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
      + role       = "arn:aws:iam::<arn no>:role/iam_for_lambda"
    }
Then I type "yes" to apply the supposed "change", and I get this:
aws_iam_role_policy_attachment.tf_vpc_execution_policy: Creating...
╷
│ Error: Error attaching policy arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole to IAM Role arn:aws:iam::<arn no>:role/iam_for_lambda: ValidationError: The specified value for roleName is invalid. It must contain only alphanumeric characters and/or the following: +=,.#_-
│ status code: 400, request id: 8d354476-df67-4c2d-b3b8-c7aa7efce060
│
│ with aws_iam_role_policy_attachment.tf_vpc_execution_policy,
│ on main.tf line 55, in resource "aws_iam_role_policy_attachment" "tf_vpc_execution_policy":
│ 55: resource "aws_iam_role_policy_attachment" "tf_vpc_execution_policy" {
What am I missing here?
Everything in your resources is fine, except that role should be the role name, not the role ARN. Please refer to the Terraform documentation for more info:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment
  + resource "aws_iam_role_policy_attachment" "tf_vpc_execution_policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
      + role       = "<ROLE_NAME>"
    }
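In practice, rather than hard-coding the name, you can reference the name attribute of the role already defined in this configuration (a minimal sketch based on the aws_iam_role.iam_for_lambda resource from the question):

resource "aws_iam_role_policy_attachment" "tf_vpc_execution_policy" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}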
Related
I am trying to create an AWS instance through Terraform. Despite generating multiple users with different key pairs, all of them return an InvalidClientTokenId error when I run terraform plan. Below are the options I've tried, based on the research I've done:
Ran aws configure to save my credentials there
Ran "aws sts get-caller-identity" to confirm that the credentials are valid
Exported the credentials to my local environment
Pointed the instance.tf file to my ~/.aws credentials file through "shared_credentials_files"
Generated multiple access keys until I got secret keys with no special symbols
This is my code:
provider "aws" {
# access_key = "redacted"
# secret_key = "redacted"
shared_credentials_files = "/home/nocnoc/.aws/credentials"
region = "eu-central-1"
}
resource "aws_instance" "example" {
ami = "ami-0965bd5ba4d59211c"
instance_type = "t3.micro"
}
This is the error message:
$ terraform apply
╷
│ Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 2ea13d91-630c-40dc-84eb-72b26222aecb, api error InvalidClientTokenId: The security token included in the request is invalid.
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on instance.tf line 1, in provider "aws":
│ 1: provider "aws" {
│
Are there any other options that I have not yet considered? I have MFA set up on my AWS account, but so did my tutor, and the course didn't mention adding any special field to the Terraform file for that.
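For what it's worth, in version 4+ of the AWS provider, shared_credentials_files expects a list of paths rather than a single string, and temporary MFA credentials can be supplied via the token argument. A minimal sketch under those assumptions (the profile name and session token are placeholders):

provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/home/nocnoc/.aws/credentials"]
  profile                  = "default"

  # Only needed when authenticating with temporary credentials from
  # "aws sts get-session-token" (i.e. when MFA is enforced):
  # token = "<session-token>"
}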
I am trying to create a container registry and add the service account with the OWNER role by changing the google_storage_bucket_acl.
According to the docs, the name of that bucket can be accessed via google_container_registry.<name>.id.
resource "google_container_registry" "registry" {
project = var.project_id
location = "EU"
}
resource "google_storage_bucket_acl" "image_store_acl" {
depends_on = [google_container_registry.registry]
bucket = google_container_registry.registry.id
role_entity = [
"OWNER:${local.terraform_service_account}",
]
}
$ terraform plan
..
Terraform will perform the following actions:
  # google_storage_bucket_acl.image_store_acl will be created
  + resource "google_storage_bucket_acl" "image_store_acl" {
      + bucket      = "eu.artifacts.dev-00-ebcd.appspot.com"
      + id          = (known after apply)
      + role_entity = [
          + "OWNER:terraform-service-account@dev-00-ebcd.iam.gserviceaccount.com",
        ]
    }
Plan: 1 to add, 0 to change, 0 to destroy.
However, if I run terraform apply, the following error is what I get:
google_storage_bucket_acl.image_store_acl: Creating...
╷
│ Error: Error updating ACL for bucket eu.artifacts.dev-00-ebcd.appspot.com: googleapi: Error 400: Invalid argument., invalid
│
│ with google_storage_bucket_acl.image_store_acl,
│ on docker.tf line 6, in resource "google_storage_bucket_acl" "image_store_acl":
│ 6: resource "google_storage_bucket_acl" "image_store_acl" {
│
╵
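One thing worth checking against the google_storage_bucket_acl docs: entries in role_entity take the form ROLE:entity, where the entity is prefixed with its type (user-, group-, domain-, and so on). Assuming the local holds the service account's email address, the sketch below shows that format; this is a guess at the fix, not a confirmed one:

resource "google_storage_bucket_acl" "image_store_acl" {
  bucket = google_container_registry.registry.id

  role_entity = [
    # Service accounts are addressed as user-<email> in GCS ACLs
    "OWNER:user-${local.terraform_service_account}",
  ]
}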
I'm trying to build an AWS-Terraform-GitHub pipeline for a serverless app. In Terraform I define a Lambda function, and on push I want to update the Lambda function code and publish a new function version (to be used with an alias at a later date).
This is my code:
data "archive_file" "zip" {
type = "zip"
source_file = "${path.module}/lambda/hello.js"
output_path = "${path.module}/lambda/hello.zip"
}
resource "aws_lambda_function" "hello_terraform" {
filename = data.archive_file.zip.output_path
source_code_hash = filebase64sha256(data.archive_file.zip.output_path)
function_name = var.project_name
role = aws_iam_role.lambda_role.arn
handler = "hello.handler"
runtime = "nodejs12.x"
timeout = 10
publish = true
}
data "aws_iam_policy_document" "lambda_assume_role_policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
}
}
resource "aws_iam_role" "lambda_role" {
name = "${var.project_name}-lambda-role"
assume_role_policy = data.aws_iam_policy_document.lambda_assume_role_policy.json
}
When I do the initial push, or push a change that doesn't involve the Lambda function code, everything works. However, when I modify the code, I get this error in the GitHub workflow (on terraform apply):
│ Error: Error publishing Lambda Function (lambda-terraform-github-actions) version: ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:961736190498:function:lambda-terraform-github-actions
│ {
│ RespMetadata: {
│ StatusCode: 409,
│ RequestID: "d8c86252-a471-46be-9662-751fc935083c"
│ },
│ Message_: "The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:961736190498:function:lambda-terraform-github-actions",
│ Type: "User"
│ }
│
│ with aws_lambda_function.hello_terraform,
│ on lambda.tf line 9, in resource "aws_lambda_function" "hello_terraform":
│ 9: resource "aws_lambda_function" "hello_terraform" {
│
╵
Operation failed: failed running terraform apply (exit 1)
I tried adding depends_on, but I still have the same problem.
I also tried the same thing in a local environment, running terraform apply on the same code without the pipeline, but the same thing happens.
If I remove "publish", terraform apply works and the function gets updated, but of course there is no new function version.
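For reference, the alias mentioned at the top would eventually point at the published version along these lines (a sketch, not part of the original configuration; the alias name "live" is a placeholder):

resource "aws_lambda_alias" "live" {
  name             = "live"
  function_name    = aws_lambda_function.hello_terraform.function_name
  # "version" is only populated because publish = true on the function
  function_version = aws_lambda_function.hello_terraform.version
}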
Just today, whenever I run terraform apply, I see an error like this: Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration.
It was working yesterday.
Following is the command I run: terraform init && terraform apply
Following is the list of initialized provider plugins:
- Finding latest version of hashicorp/archive...
- Finding latest version of hashicorp/aws...
- Finding latest version of hashicorp/null...
- Installing hashicorp/null v3.1.0...
- Installed hashicorp/null v3.1.0 (signed by HashiCorp)
- Installing hashicorp/archive v2.2.0...
- Installed hashicorp/archive v2.2.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.0.0...
- Installed hashicorp/aws v4.0.0 (signed by HashiCorp)
Following are the errors:
Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│ 1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "lifecycle_rule": its value will be decided
│ automatically based on the result of applying this configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 1, in resource "aws_s3_bucket" "this":
│ 1: resource "aws_s3_bucket" "this" {
│
│ Can't configure a value for "server_side_encryption_configuration": its
│ value will be decided automatically based on the result of applying this
│ configuration.
╵
╷
│ Error: Value for unconfigurable attribute
│
│ with module.ssm-parameter-store-backup.aws_s3_bucket.this,
│ on .terraform/modules/ssm-parameter-store-backup/s3_backup.tf line 3, in resource "aws_s3_bucket" "this":
│ 3: acl = "private"
│
│ Can't configure a value for "acl": its value will be decided automatically
│ based on the result of applying this configuration.
╵
ERRO[0012] 1 error occurred:
* exit status 1
My code is as follows:
resource "aws_s3_bucket" "this" {
bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
acl = "private"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = data.aws_kms_key.s3.arn
sse_algorithm = "aws:kms"
}
}
}
lifecycle_rule {
id = "backups"
enabled = true
prefix = "backups/"
transition {
days = 90
storage_class = "GLACIER_IR"
}
transition {
days = 180
storage_class = "DEEP_ARCHIVE"
}
expiration {
days = 365
}
}
tags = {
Name = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
Environment = var.environment
}
}
The Terraform AWS Provider was upgraded to version 4.0.0, published on 10 February 2022.
Major changes in the release include:
Version 4.0.0 of the AWS Provider introduces significant changes to the aws_s3_bucket resource.
Version 4.0.0 of the AWS Provider will be the last major version to support EC2-Classic resources as AWS plans to fully retire EC2-Classic Networking. See the AWS News Blog for additional details.
Version 4.0.0 and 4.x.x versions of the AWS Provider will be the last versions compatible with Terraform 0.12-0.15.
The reason for this change by Terraform is as follows: To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the aws_s3_bucket resource have become read-only. Configurations dependent on these arguments should be updated to use the corresponding aws_s3_bucket_* resource. Once updated, new aws_s3_bucket_* resources should be imported into Terraform state.
So, I updated my code accordingly by following the guide here: Terraform AWS Provider Version 4 Upgrade Guide | S3 Bucket Refactor
The new working code looks like this:
resource "aws_s3_bucket" "this" {
bucket = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
tags = {
Name = "${var.project}-${var.environment}-ssm-parameter-store-backups-bucket"
Environment = var.environment
}
}
resource "aws_s3_bucket_acl" "this" {
bucket = aws_s3_bucket.this.id
acl = "private"
}
resource "aws_s3_bucket_server_side_encryption_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = data.aws_kms_key.s3.arn
sse_algorithm = "aws:kms"
}
}
}
resource "aws_s3_bucket_lifecycle_configuration" "this" {
bucket = aws_s3_bucket.this.id
rule {
id = "backups"
status = "Enabled"
filter {
prefix = "backups/"
}
transition {
days = 90
storage_class = "GLACIER_IR"
}
transition {
days = 180
storage_class = "DEEP_ARCHIVE"
}
expiration {
days = 365
}
}
}
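Once the configuration is refactored, the settings that already exist on the bucket should be imported into the new resources so Terraform doesn't try to recreate them. Per the upgrade guide, the import commands look roughly like this (resource addresses match the code above; the bucket name is a placeholder):

$ terraform import aws_s3_bucket_acl.this <bucket-name>,private
$ terraform import aws_s3_bucket_server_side_encryption_configuration.this <bucket-name>
$ terraform import aws_s3_bucket_lifecycle_configuration.this <bucket-name>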
If you don't want to upgrade your Terraform AWS Provider version to 4.0.0, you can use the existing or older version by specifying it explicitly in the code as below:
terraform {
  required_version = "~> 1.0.11"

  required_providers {
    aws = "~> 3.73.0"
  }
}
It's broken because the Terraform AWS Provider was updated to version 4.0.0.
If you can't upgrade your version, maybe you could lock your AWS provider version like this:
terraform {
  required_version = "~> 0.12.31"

  required_providers {
    aws = "~> 3.74.1"
  }
}
For Terragrunt/Terraform users:
As others have mentioned, AWS Provider upgraded to 4.0. Breaking changes are delineated here (under the git 4.0 tag): GitHub | terraform-provider-aws | v4.0.0
Note the breaking changes to S3: I found 39 references to aws_s3_bucket on that page. The reality is that some of us don't have time to address all the breaking changes in our current projects; I have found version 3.74.1 to be quite effective.
To restrict all your Terraform projects which are configured with Terragrunt, inside the root terragrunt.hcl file of your terragrunt repo, you can specify the following:
generate "versions" {
path = "versions_override.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
terraform {
required_providers {
aws = {
version = "= 3.74.1"
source = "hashicorp/aws"
}
}
}
EOF
}
In effect, Terragrunt will generate a versions_override.tf Terraform config file in each project, pinning the provider to exactly version 3.74.1.
I am following the tutorial in the Terraform docs to create a service on AWS Lambda:
https://learn.hashicorp.com/tutorials/terraform/lambda-api-gateway
This configuration
resource "aws_s3_bucket" "lambda_bucket" {
bucket = random_pet.lambda_bucket_name.id
acl = "private"
force_destroy = true
}
will incur the following error.
Error: Value for unconfigurable attribute
with aws_s3_bucket.lambda_bucket,
on main.tf line 32, in resource "aws_s3_bucket" "lambda_bucket":
32: acl = "private"
Can't configure a value for "acl": its value will be decided automatically
based on the result of applying this configuration.
Since acl is now read only, update your configuration to use the aws_s3_bucket_acl resource and remove the acl argument in the aws_s3_bucket resource:
resource "aws_s3_bucket" "lambda_bucket" {
bucket = random_pet.lambda_bucket_name.id
force_destroy = true
}
resource "aws_s3_bucket_acl" "lamdbda_bucket" {
bucket = aws_s3_bucket.lambda_bucket.id
acl = "private"
}
Quick solution: Keep your project on version 3 until you are ready to move to version 4 following the upgrade guide provided by Terraform here: Terraform AWS Provider Version 4 Upgrade Guide.
In order to do it, freeze your provider as shown below:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74.2"
    }

    consul = {
      source = "hashicorp/consul"
    }
  }

  required_version = ">= 0.13"
}
I want to deploy infrastructure on AWS using Terraform. This is the main.tf config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "us-west-2"
}

resource "aws_instance" "app_server" {
  ami           = "ami-830c94e3"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
AWS config file, ~/.aws/config:
[default]
region = us-east-1
[humboi]
region = us-east-1
Running terraform apply and entering "yes" gives:
aws_instance.app_server: Creating...
╷
│ Error: Error launching source instance: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: r8hvTFNQzGA7k309BxQ9OYRxCaCH-0wwYvhAzbjEt77PsyOYyWItWNrOPUW4Z1CIzm8A6x6euBuSZsE8uSfb3YdPuzLXttHT3DS9IJsDs0ilX0Vxtu1OZ3nSCBowuylMuLEXY8VdaA35Hb7CaLb-ktQwb_ke0Pku-Uh2Vi_cwsYwAdXdGVeTETkiuErZ3tAU37f5DyZkaL4dVgPMynjRI3-GW0P63WJxcZVTkfNcNzuTx6PQfdv-YydIdUOSAS-RUVqK6ewiX-Mz4S0GwAaIFeJ_4SoIQVjogbzYYBC0bI4-sBSyVmySGuxNF6x-BOU0Zt2-po1mwEiPaDBVL9aOt6k_eZKMbYM9Ef8qQRcxnSLWOCiHuw6LVbmPJzaDQRFNZ2eO11Fa2oOcu8JMEOQjOtPkibQNAdO_5LZWAnc6Ye2-Ukt2_folTKN6TH6v1hmwsLAO7uGL60gQ-n9iBfCIqEE_6gfImsdbOptgz-IRtTrz5a8bfLOBVfd9oNjKGXQoA2ZKhM35m1ML1DQKY8LcDv0aULkGzoM6bRYoq1UkJBYuF-ShamtSpSlzpd4KDXztpxUdb496FR4MdOoHgS04W_3WXoN-hb_lG-Wgbkv7CEWMv2pNhBCRipBgUUw3QK-NApkeTxxJXy9vFQ4fTZQanEIQa_Bxxg
│ status code: 403, request id: 0c1f14ec-b5f4-4a3f-bf1f-40be4cf370fc
│
│ with aws_instance.app_server,
│ on main.tf line 17, in resource "aws_instance" "app_server":
│ 17: resource "aws_instance" "app_server" {
│
╵
The error says the operation was unauthorized. What could cause an unauthorized operation if I have both ~/.aws/config and ~/.aws/credentials?
I have had this happen when I change my backend configuration without deleting .terraform. I believe Terraform caches credentials in .terraform; if you delete that directory, it will be regenerated, and that might work for you.
Also, make sure you restart your machine after setting environment variables for AWS.
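Concretely, the cleanup mentioned above amounts to something like this, run from the project directory (.terraform holds provider binaries, module caches, and backend metadata, and is recreated by init):

$ rm -rf .terraform
$ terraform init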
The IAM user you created doesn't have admin access or EC2 full access, so enable that and try again.
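For example, attaching the AWS-managed EC2 policy to the user could itself be done in Terraform (a sketch; the user name is a placeholder for whichever IAM user your credentials belong to):

resource "aws_iam_user_policy_attachment" "ec2_full_access" {
  user       = "<your-iam-user-name>"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}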