This is my template:
resource "aws_ecs_cluster" "doesntmatter" {
name = var.doesntmatter_name
capacity_providers = ["FARGATE", "FARGATE_SPOT"]
setting {
name = "containerInsights"
value = "enabled"
}
tags = var.tags
}
When I run it, it properly creates the cluster and sets containerInsights to enabled.
But when I run Terraform again, it wants to change this property as if it had never been set.
It doesn't matter how many times I run it; it still thinks it needs to change it on every deployment.
Additionally, terraform state show resName does show that this setting is saved in the state file.
It's a bug that is resolved in v3.57.0 of the Terraform AWS Provider (released yesterday).
Amazon ECS is making a change to the ECS Describe-Clusters API. Previously, the response to a successful Describe-Clusters request included the cluster settings by default. This behavior was incorrect since, as documented here (https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-clusters.html), cluster settings is an optional field that should only be included when explicitly requested by the customer. With the change, ECS will no longer surface the cluster settings field in the Describe-Clusters response by default. Customers can continue to use the --include SETTINGS flag with Describe-Clusters to receive the cluster settings.
Tracking bug: https://github.com/hashicorp/terraform-provider-aws/issues/20684
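Until you can upgrade, a common mitigation is simply to require the fixed provider release. This is a minimal sketch (the required_providers syntax assumes Terraform 0.13+); the constraint value comes straight from the fix version mentioned above:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # v3.57.0 is the first release containing the fix for this drift
      version = ">= 3.57.0"
    }
  }
}

After updating the constraint, run terraform init -upgrade so the newer provider is actually installed.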
I have a Terraform script which provisions a Kubernetes deployment and a few ClusterRoles and ClusterRoleBindings via Helm.
Now I need to edit one of the provisioned ClusterRoles via Terraform and add another block of permissions. Is there a way to do this, or would I need to recreate a similar resource from scratch?
This is the block that creates the deployment for the efs-csi-driver:
resource "helm_release" "aws-efs-csi-driver" {
name = "aws-efs-csi-driver"
chart = "aws-efs-csi-driver"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
version = "2.x.x"
namespace = "kube-system"
timeout = 3600
values = [
file("${path.module}/config/values.yaml"),
]
}
Somehow I need to modify https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/45c5e752d2256558170100138de835b82d54b8af/deploy/kubernetes/base/controller-serviceaccount.yaml#L11 by adding a couple more permission blocks. Is there a way I can patch it (or completely overlay it)?
We have an AWS SecretsManager Secret that was created once. That secret will be updated by an external job every hour.
I have the problem that sometimes the terraform plan/apply fails with the following message:
AWS Provider 2.48
Error: Error refreshing state: 1 error occurred:
* module.xxx.xxx: 1 error occurred:
* module.xxx.aws_secretsmanager_secret_version.xxx:
aws_secretsmanager_secret_version.xxx: error reading Secrets Manager Secret Version: InvalidRequestException: You can't perform this operation on secret version 68AEABC3-34BE-4723-8BF5-469A44F9B1D9 because it was deleted.
We've tried two solutions:
1) Force delete the whole secret via the AWS CLI, but this has the side effect that one of our dependent resources will also be recreated (the ECS task definition depends on that secret). This works, but we do not want the side effect of recreating the ECS resource.
2) Manually edit the backend .tfstate file and set the current AWS secret version. Then run the plan again.
Both solutions seem hacky in a way. What is the best way to solve this issue?
You can use terraform import to reconcile the state difference before you run a plan or apply.
In your case, this would look like:
terraform import module.xxx.aws_secretsmanager_secret_version.xxx arn:aws:secretsmanager:some_region:some_account_id:secret:example-123456|xxxxx-xxxxxxx-xxxxxxx-xxxxx
I think the problem you are having is that, by default, AWS tries to "help you" by not letting you delete secrets until 7 days have elapsed. The idea is to give you a 7-day grace period to update any code that may rely on the secret, but this makes automation more difficult.
I have worked around this by setting the recovery window to 0 days, effectively eliminating that grace period.
Then you can have Terraform rename or delete your secret at will, either manually (via the AWS CLI) or via Terraform.
You can update an existing secret by applying this value FIRST. Then change the name of the secret (if you wish to), or delete it (this Terraform section) as desired, and run Terraform again after recovery_window_in_days = 0 has been applied.
Here is an example:
resource "aws_secretsmanager_secret" "mySecret" {
name = "your secret name"
recovery_window_in_days = "0"
// this is optional and can be set to true | false
lifecycle {
create_before_destroy = true
}
}
*Note: there is also a create_before_destroy option you can set in the lifecycle block.
https://www.terraform.io/docs/configuration/resources.html
Also, you can use the aws_secretsmanager_secret_version resource to set the secret values, as in the example at the end of this answer.
That example sets the secret values once and then tells Terraform to ignore any changes made to the values (username and password in this example) after the initial creation.
If you remove the lifecycle section, then Terraform will keep track of whether or not the secret values themselves have changed. If they have changed, they would be reverted back to the value in the Terraform state.
Storing your tfstate files in a protected S3 bucket is safer than not doing so, because the secret values are plaintext in the state file, so anyone with access to your Terraform state file could see them.
I would suggest: 1) figuring out what is deleting your secrets unexpectedly, and 2) having your "external job" be a Terraform/bash script that updates the values using a resource as in the example below.
Hope this gives you some ideas.
resource "aws_secretsmanager_secret_version" "your-secret-data" {
secret_id = aws_secretsmanager_secret.your-secret.id
secret_string = <<-EOF
{
"username": "usernameValue",
"password": "passwordValue"
}
EOF
// ignore any updates to the initial values above done after creation.
lifecycle {
ignore_changes = [
secret_string
]
}
}
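As a side note (my own addition, not part of the original answer), you can also build the JSON with Terraform's built-in jsonencode() function instead of a heredoc, which avoids quoting and whitespace pitfalls. A minimal sketch reusing the same hypothetical resource names:

resource "aws_secretsmanager_secret_version" "your-secret-data" {
  secret_id = aws_secretsmanager_secret.your-secret.id

  // jsonencode() renders an HCL object as a valid JSON string
  secret_string = jsonencode({
    username = "usernameValue"
    password = "passwordValue"
  })

  // same as above: ignore later out-of-band updates to the value
  lifecycle {
    ignore_changes = [secret_string]
  }
}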
I am new to Terraform. How do I stop GCP VM instances using Terraform?
I have tried changing the status of the VM instance; that option is available for AWS, but I couldn't find a way to do it for GCP.
Edit
Since version v3.11.0 of the Google provider (released 2020/03/02), it is possible to shut down and start a Compute instance with the desired_status field:
compute: added the ability to manage the status of google_compute_instance resources with the desired_status field
Just declare in your Terraform resource :
resource "google_compute_instance" "default" {
name = "test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
[...]
desired_status = "TERMINATED"
}
And apply your changes. If your instance was running before, it should be shut down. This PR shows the modifications that were added, if you are interested in taking a look. desired_status can take either the RUNNING or TERMINATED value.
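If you prefer to toggle the state from a variable instead of editing the resource by hand, a conditional expression works; this is a minimal sketch of my own, and the instance_running variable name is an assumption, not something defined by the provider:

variable "instance_running" {
  type    = bool
  default = false
}

resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  # boot_disk and network_interface blocks omitted, as in the example above

  # the conditional picks the desired state based on the variable
  desired_status = var.instance_running ? "RUNNING" : "TERMINATED"
}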
Previous answer (as of 2019/10/26)
As of the time of the question (2019/09/18), with the latest Google provider available then (version v2.15.0), it was not possible to update the status of a Google Compute instance.
The following issue is open on the Google Terraform provider on GitHub:
google_compute_instance should allow to specify instance state #1719
There is also a Pull Request to add this feature :
ability to change instance_state #2956
But unfortunately, this PR seems to be stale (not updated since 2019/03/13).
I am configuring S3 backend through terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend (bucket name, key & region) when running the "terraform init" command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running the "terraform init" command it didn't prompt for any access key or secret key.
How do I resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
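Alternatively (my own addition, hedged), you can keep the keys off the command line by using Terraform's partial backend configuration: put the settings in a file and pass that file to init. The file name backend.hcl below is just an example:

# backend.hcl -- example partial configuration for the "s3" backend
access_key = "<your access key>"
secret_key = "<your secret key>"

Then initialize with:

terraform init -backend-config=backend.hcl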
I also had this issue; the easiest and most secure way to fix it is to configure an AWS profile. Even if you have properly mentioned the AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend configuration file. When I ran the command terraform init I got the same error message:
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that the provider configuration is not enough for the Terraform backend; you have to mention the AWS_PROFILE in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see the provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
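For anyone who wants the exact commands, the steps were simply the following (run from the configuration's working directory):

# remove the local initialization cache, then re-initialize the backend
rm -rf .terraform
terraform init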
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code, then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - Add yourself a remote_state.tf that looks like
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the permissions for the remote_state and could even be different AWS accounts (or even another cloud provider).
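To make that last point concrete, here is a small illustration of my own (the profile name is made up): the backend block carries no credentials at all and authenticates from your ambient AWS configuration, while the provider for the actual resources can point at a completely different profile or account.

# remote_state.tf stays exactly as above, with no credentials in it
provider "aws" {
  region  = "eu-west-1"
  profile = "workload-account" # hypothetical profile, unrelated to where the state bucket lives
}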
I had the same issue while using export AWS_PROFILE as I always had. I checked my credentials, which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue and below is my use case.
AWS account 1: Management account (the IAM user is created here; this user will assume roles in the Dev and Prod accounts)
AWS account 2: Dev environment account (a role is created here for the trusted account, in this case the Management account user)
AWS account 3: Prod environment account (a role is created here for the trusted account, in this case the Management account user)
So I created a dev-backend.conf and a prod-backend.conf file with the content below. The main point that fixed this issue was passing the role_arn value in the S3 backend configuration.
Content of the dev-backend.conf and prod-backend.conf files:
bucket         = "<your bucket name>"
key            = "<your key path>"
region         = "<region>"
dynamodb_table = "<db name>"
encrypt        = true
profile        = "<your profile>" # this profile has the access key and secret key of the IAM user created in the Management account
role_arn       = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name>"
Terraform initialise with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Terraform apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Terraform initialise with the prod S3 bucket config (switching from the dev S3 bucket to the prod S3 bucket state):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Terraform apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the aws cli, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the aws cli to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation spells out the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the aws cli installed during runtime. If you somehow need this, then you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to set the backend configuration dynamically, which is not possible in plain Terraform, as illustrated below.
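To illustrate that Terragrunt point (a sketch of my own, with placeholder bucket and region), a terragrunt.hcl can generate the backend configuration dynamically for each module it includes:

# terragrunt.hcl -- generates a backend.tf per module
remote_state {
  backend = "s3"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket  = "my-state-bucket"                                 # placeholder
    key     = "${path_relative_to_include()}/terraform.tfstate" # unique key per module
    region  = "eu-central-1"                                    # placeholder
    encrypt = true
  }
}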
If someone is using LocalStack, the only thing that worked for me was this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, two profile names were listed one after another, which caused the error for me. When I removed the second profile name the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization's VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it:
My VPN caused the issue, so this may not apply to everyone.
Turning off my VPN fixed it.
I am running terraform on my Linux instance and I am getting the errors below.
+ /usr/local/bin/terraform workspace new test
Backend reinitialization required. Please run "terraform init".
Reason: Initial configuration of the requested backend "s3"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.

Failed to load backend: Initialization required. Please see the error message above.
Here is the Terraform configuration file.
provider "aws" {
# don't touch below here
access_key = "${var.aws_access_key}"
secret_key = "${var.aws_secret_key}"
region = "us-west-2"
}
# Configure Terraform to store this in S3
terraform {
backend "s3" {
bucket = "nom-terraform"
key = "apps/onboarding/terraform.tfstate"
region = "us-west-2"
}
}
Before running terraform apply, I managed to run terraform plan successfully.
It seems that you have added a new S3 backend, so Terraform requires re-initialization.
Just run terraform init; it will add S3 as the backend and ask permission to transfer the local state file to S3.
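A minimal sketch of the sequence (the migration prompt wording is paraphrased from memory, not copied from Terraform's output):

terraform init
# Terraform detects the new "s3" backend and asks whether to copy the
# existing local state to it -- answer "yes" to migrate the state file
terraform workspace new test
terraform plan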
You would first need to run terraform init; that downloads the providers at the latest version, and the output would be something like this:
# terraform init
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "spotinst" (1.1.1)...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 1.2"
* provider.cloudflare: version = "~> 0.1"
* provider.spotinst: version = "~> 1.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
I would first delete any state file and the .terraform folder; many errors sometimes exist due to corruption.
Afterwards I would run init and it should work.
I do not believe adding the backend was the issue, as Terraform should have tried to merge between the states.