terraform import fargate cluster - amazon-web-services

I have an existing, manually created Fargate cluster named "test-cluster" in us-west-1. In my Terraform configuration file I created:
resource "aws_ecs_cluster" "mycluster" {
}
I ran the terraform command to import it:
terraform import aws_ecs_cluster.mycluster test-cluster
I received this error message:
Error: Cannot import non-existent remote object
While attempting to import an existing object to aws_ecs_cluster.cluster, the
provider detected that no object exists with the given id. Only pre-existing
objects can be imported; check that the id is correct and that it is
associated with the provider's configured region or endpoint, or use
"terraform apply" to create a new remote object for this resource.
I've also run aws configure and added the correct region.

Based on the comments:
The issue was caused by using the wrong account in Terraform and/or the AWS console. The solution was to use the correct account.
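A quick way to check which account and region your credentials actually point at (a small sketch using standard AWS CLI commands; the cluster name and region are taken from the question):
# show the account ID and identity behind the current credentials
aws sts get-caller-identity
# list ECS clusters in the region where the cluster was created
aws ecs list-clusters --region us-west-1
If "test-cluster" does not show up in that list, Terraform is talking to a different account or region than the one the cluster lives in.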

Related

How to map AWS resource type to Terraform type

I am trying to import existing AWS resources through the Terraform import command.
Programmatically I am able to get the AWS resource ID through the resource tagging API, but then I cannot find a proper way to map it to a Terraform type.
For example, EC2 instance i-abcd has to be imported into Terraform through the following command:
terraform import aws_instance.foo i-abcd
Is there any way that I can determine the Terraform type of i-abcd, knowing that it is an instance in AWS?
Something like a dictionary:
AWS Resource type | Terraform Resource type
instance | aws_instance
Is there any solution like the above one out there or any workarounds to create it without too many manual mappings?
Thanks in advance!
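As far as I know there is no official dictionary of AWS resource type to Terraform type, but as a rough starting point you can at least dump the Terraform side of such a table from the provider schema (terraform providers schema is a built-in command; the jq filter is only a sketch):
# run from a directory where the AWS provider has been initialised
terraform providers schema -json | jq -r '.provider_schemas[].resource_schemas | keys[]'
# prints every resource type the provider supports, e.g. aws_instance, aws_ecs_cluster, ...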

Create an iam role under specific aws account using terraform

I'm really new to Terraform and have been stuck on this for a while.
So I'm using an external module which creates an aws_iam_role and the corresponding policies. In my Terraform code I just use the following block to instantiate the module, but how can I make sure the roles are created under a specific AWS account? I have multiple AWS accounts right now, but I just want the external module's resources to be in one of them. The account ID for the target AWS account is known.
module "<external_module>" {
source = "git::..."
...
}
Thanks!
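One common way to pin the module's resources to a specific account (a minimal sketch, not from the original thread; the region, role and account values are placeholders) is a provider that assumes a role in the target account, passed explicitly to the module:
provider "aws" {
  alias  = "target"
  region = "us-east-1" # placeholder region

  assume_role {
    # placeholder ARN of a role in the target account that Terraform is allowed to assume
    role_arn = "arn:aws:iam::<target_account_id>:role/<deploy_role_name>"
  }
}

module "<external_module>" {
  source = "git::..."

  # route all of the module's resources through the aliased provider
  providers = {
    aws = aws.target
  }
}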

Terraform quickest way to import multiple resources

My Terraform state file is messed up. The resources already exist on AWS. When I run the terraform apply command I get multiple "Already Exists" errors like the one below.
aws_autoscaling_group.mysql-asg: Error creating AutoScaling Group: AlreadyExists: AutoScalingGroup by this name already exists - A group with the name int-mysql-asg already exists
When I do a terraform import the error goes away, but I have hundreds of resources that give this error. What is the best way to sync the Terraform state and make terraform apply succeed?
You may want to look at Terraforming
It's a Ruby project that states "Export existing AWS resources to Terraform style (tf, tfstate)"
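If a tool like Terraforming is not an option, the imports can also be scripted directly. A rough sketch (Auto Scaling groups are imported by name; asg-names.txt and the one-resource-block-per-name addressing are assumptions of this example):
# single import for the group from the error above
terraform import aws_autoscaling_group.mysql-asg int-mysql-asg

# bulk import: assumes each name in asg-names.txt matches a resource block
# already declared in the configuration
while read -r name; do
  terraform import "aws_autoscaling_group.${name}" "${name}"
done < asg-names.txt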

Terraform + Route53 - manage existing record

I have a production environment that is configured to have a domain name that points to a load-balancer. This is already working, and it was configured using Route53.
I am using Terraform to deploy the infrastructure, including the Route53 record.
The Route53 record was set manually.
I would like for Terraform to manage the Route53 record in subsequent deployments. However, when I run an update to update the infrastructure and include the Route53 record, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* module.asg.aws_route53_record.www: 1 error(s) occurred:
* aws_route53_record.www: [ERR]: Error building changeset:
InvalidChangeBatch: [Tried to create a resource record set
[name='foo.com.', type='A'] but it already exists]
Well, at first, this error makes sense, because the resource already exists. But, given this, how can I overcome this issue without causing downtime?
I've tried to manually edit the state file to include the route53 record, but that failed with the same error...
I'm happy to provide more information if necessary. Any suggestions that you might have are welcome. Thank you.
You can use terraform import to import the existing Route53 resource into your current terraform infrastructure. Here are the steps:
Init Terraform with your desired workspace via terraform init.
Define your aws_route53_record exactly the same as the existing resource that you have:
resource "aws_route53_record" "www" {
// your code here
}
Import the desired resource
terraform import aws_route53_record.www ZONEID_RECORDNAME_TYPE_SET-IDENTIFIER
For example:
terraform import aws_route53_record.www Z4KAPRWWNC7JR_dev.example.com_CNAME
After a successful import, the state of the existing resource is saved.
Do terraform plan to check the resource.
You can now update your existing resource.
You have to import the record into your Terraform state with the terraform import command. You should not edit the state manually!
See the resource Docs for additional information on how to import the record.
Keeping it here for new visitors.
Later versions of the AWS provider (~3.10) offer an argument allow_overwrite, which defaults to false.
There is no need to edit the state file (not recommended) or do a terraform import.
allow_overwrite - (Optional) Allow creation of this record in Terraform to overwrite an existing record, if any. This does not affect the ability to update the record in Terraform and does not prevent other resources within Terraform or manual Route 53 changes outside Terraform from overwriting this record. false by default. This configuration is not recommended for most environments.
from: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_record#allow_overwrite
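A minimal sketch of what that looks like on the record from the question (zone ID and alias target values are placeholders):
resource "aws_route53_record" "www" {
  zone_id         = "<your hosted zone id>"
  name            = "foo.com"
  type            = "A"
  allow_overwrite = true # take over the manually created record on the next apply

  alias {
    name                   = "<load balancer DNS name>"
    zone_id                = "<load balancer hosted zone id>"
    evaluate_target_health = true
  }
}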

Error while configuring Terraform S3 Backend

I am configuring S3 backend through terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend (bucket name, key and region) when running the "terraform init" command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access and secret keys as variables in providers.tf. While running the "terraform init" command it didn't prompt for any access key or secret key.
How can I resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys), so your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue; the easiest and most secure way to fix it is to configure an AWS profile. Even if you properly set the AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend. I ran the command terraform init and got the same error message:
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that the provider configuration is not enough for the Terraform backend; you have to mention the AWS_PROFILE in the backend file as well.
Full solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile, then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add the access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS you should be using an instance profile; roles can be used too.
If you hardcode the profile into your tf code then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - Add yourself a remote_state.tf that looks like:
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you run terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
Had the same issue and I was using export AWS_PROFILE as I always had. I checked my credentials which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue; below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created dev-backend.conf and prod-backend.conf files with the content below. The main point that fixed this issue was passing the "role_arn" value in the S3 backend configuration.
The content of the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config (moving from local state to S3 state):
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Initialise Terraform with the prod S3 bucket config (switching the state from the dev S3 bucket to the prod S3 bucket):
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the aws cli, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the aws cli to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation describes the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file.
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the aws cli installed at runtime. If you would somehow need this, then you should probably think of splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
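For reference, a minimal Terragrunt sketch of a dynamically generated backend (the bucket name and region are placeholders, not taken from the answer above):
# terragrunt.hcl
remote_state {
  backend = "s3"
  config = {
    bucket = "my-state-bucket" # placeholder
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-central-1"
  }
}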
If someone is using LocalStack, for me it only worked using this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, two profile names listed one after another caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it:
My VPN caused the issue; this may not apply to everyone.
Turning off my VPN fixed it.