I want to run terraform plan to validate a Terraform configuration file uploaded by a user and detect the resources it declares.
However, running terraform plan currently requires AWS credentials.
Is there a way to run plan without credentials, or to extract the list of resources from the .tf file in another way?
I found a solution here:
https://github.com/terraform-providers/terraform-provider-aws/issues/5584#issuecomment-433203543
Along with the skip_credentials_validation flag, mock access and secret keys are also required.
provider "aws" {
  region                      = var.region
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true
  access_key                  = "mock_access_key"
  secret_key                  = "mock_secret_key"
}
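For the second half of the question, a rough resource inventory can also be pulled straight from the .tf source without running Terraform at all. A minimal sketch using a regular expression (not a full HCL parser, so resources inside comments or unusual formatting may be miscounted):

```python
import re

# Matches top-level resource block headers like: resource "aws_vpc" "test" {
RESOURCE_HEADER = re.compile(r'^\s*resource\s+"([^"]+)"\s+"([^"]+)"', re.MULTILINE)

def list_resources(tf_source: str):
    """Return (type, name) pairs declared in a .tf file's text."""
    return RESOURCE_HEADER.findall(tf_source)

config = '''
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
'''
print(list_resources(config))  # [('aws_vpc', 'test')]
```

For anything beyond a quick inventory, a real HCL parser (or `terraform show -json` on a saved plan) is the more robust route.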
Related
I would like to store a Terraform state file in one AWS account and deploy infrastructure into another. Is it possible to provide different sets of credentials for the backend and the AWS provider using environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)? Or perhaps provide credentials to one via environment variables and to the other through a shared_credentials_file?
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 3.74.3"
    }
  }

  backend "s3" {
    encrypt = true
    bucket  = "bucket-name"
    region  = "us-east-1"
    key     = "terraform.tfstate"
  }
}

variable "region" {
  default = "us-east-1"
}

provider "aws" {
  region = var.region
}

resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}
Yes, the AWS profile/access-key configuration used by the S3 backend is separate from the one used by the AWS provider. By default both look in the same place, but you can configure the backend to use a different profile so that it connects to a different AWS account.
Yes, and you can even keep them in separate files in the same folder to avoid confusion:
backend.tf
terraform {
  backend "s3" {
    profile        = "profile-1"
    region         = "eu-west-1"
    bucket         = "your-bucket"
    key            = "terraform-state/terraform.tfstate"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
main.tf
provider "aws" {
  profile = "profile-2"
  region  = "us-east-1"
}
resource .......
This way, the state file is stored in the account behind profile-1, and all the resources are created in the account behind profile-2.
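For reference, both profiles are just named sections in the shared credentials file (~/.aws/credentials by default); a sketch of what it would contain, with placeholder keys:

```ini
[profile-1]
aws_access_key_id     = AKIA...STATE
aws_secret_access_key = <state-account secret key>

[profile-2]
aws_access_key_id     = AKIA...DEPLOY
aws_secret_access_key = <deploy-account secret key>
```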
Question: I am trying to reference the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY inside a Terraform module that uses AWS CodeBuild projects. The credentials are loaded from a shared credentials file.
In my terraform project
main.tf
# master profile
provider "aws" {
  alias       = "master"
  max_retries = "5"
  profile     = "master"
  region      = var.region
}

# env profile
provider "aws" {
  max_retries = "5"
  region      = var.region
  profile     = "dev-terraform"
}

module "code_build" {
  source = "../../modules/code_build"
  ...
}
code_build.tf
resource "aws_codebuild_project" "sls_deploy" {
  ...
  environment {
    ...
    environment_variable {
      name  = "AWS_ACCESS_KEY_ID"
      value = "..." # trying to read the AWS access key from the provider profile here
    }
  }
}
Can anyone explain how I can reference AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the provider credentials specified in main.tf?
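Terraform does not expose a provider's resolved credentials as attributes you can reference from other resources. One workaround is to pass the keys into the module explicitly as input variables. A sketch, with made-up variable names (note that baking long-lived keys into CodeBuild environment variables is generally discouraged in favor of giving the project an IAM service role):

```hcl
# modules/code_build/variables.tf (hypothetical variable names)
variable "codebuild_access_key" {
  type      = string
  sensitive = true
}

variable "codebuild_secret_key" {
  type      = string
  sensitive = true
}

# Then, inside the aws_codebuild_project environment block:
#   environment_variable {
#     name  = "AWS_ACCESS_KEY_ID"
#     value = var.codebuild_access_key
#   }
```

The root module would populate these variables when calling the module, e.g. from TF_VAR_-prefixed environment variables.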
I am looking to deploy ECE (Elastic Cloud Enterprise) on AWS with Terraform. Reading through the documentation, I'm still not clear on how this model works.
In the provider block below, what is the reason for the endpoint? Is Terraform connecting to this endpoint with the specified username and password? And are these credentials provided with the ECE license?
Hence, I'm thinking the ECE installation endpoint can't be private. But I need to provision this privately, so I probably won't be able to do it via Terraform. Does anyone have experience with this?
provider "ec" {
  # ECE installation endpoint
  endpoint = "https://my.ece-environment.corp"

  # If the ECE installation has a self-signed certificate
  # you must set insecure to true.
  insecure = true

  username = "my-username"
  password = "my-password"
}

data "ec_stack" "latest" {
  version_regex = "latest"
  region        = "us-east-1"
}

resource "ec_deployment" "example_minimal" {
  # Optional name.
  name = "my_example_deployment"

  # Mandatory fields
  region                 = "us-east-1"
  version                = data.ec_stack.latest.version
  deployment_template_id = "aws-io-optimized-v2"

  elasticsearch {}
}
I tried to create a simple example in an AWS environment. To begin, I exported two values:
export AWS_ACCESS_KEY_ID= something
export AWS_SECRET_ACCESS_KEY= something
After that, I wrote this simple code:
provider "aws" {
  region     = "us-east-1"
  access_key = AWS_ACCESS_KEY_ID
  secret_key = AWS_SECRET_ACCESS_KEY
}

resource "aws_instance" "example" {
  ami           = "ami-40d28157"
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
When I put literal values in place of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY everything works, but with the code above I see the following error:
on main.tf line 4, in provider "aws":
4: secret_key = AWS_SECRET_ACCESS_KEY
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Some ideas on how to solve this problem?
You don't have to do anything. As explained in the Terraform AWS provider's authentication documentation, Terraform automatically looks for credentials in this order:
Static credentials
Environment variables
Shared credentials/configuration file
CodeBuild, ECS, and EKS Roles
EC2 Instance Metadata Service (IMDS and IMDSv2)
So once you export your keys (make sure to export them correctly, with no space after the equals sign and the values quoted):
export AWS_ACCESS_KEY_ID="something"
export AWS_SECRET_ACCESS_KEY="something"
in your config file you would just use (exemplified in the docs):
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-40d28157"
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
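If you do want to reference the values explicitly rather than relying on the chain above, note that bare names like AWS_ACCESS_KEY_ID are not valid expressions in HCL (which is what produced the "reference to a resource type" error). Instead, declare input variables and feed them via TF_VAR_-prefixed environment variables. A sketch, with made-up variable names:

```hcl
variable "aws_access_key" {
  type      = string
  sensitive = true
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}
```

Then export TF_VAR_aws_access_key and TF_VAR_aws_secret_key before running terraform, and the variables are populated automatically.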
This is the official page: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/secret_manager_secret
I created these files:
variables.tf
variable "gcp_project" {
  type = string
}
main.tf
resource "google_secret_manager_secret" "my_password" {
  provider  = google-beta
  secret_id = "my-password"

  replication {
    automatic = true
  }
}

data "google_secret_manager_secret_version" "my_password_v1" {
  provider = google-beta
  project  = var.gcp_project
  secret   = google_secret_manager_secret.my_password.secret_id
  version  = 1
}
outputs.tf
output "my_password_version" {
  value = data.google_secret_manager_secret_version.my_password_v1.version
}
When I applied it, I got this error:
Error: Error retrieving available secret manager secret versions: googleapi: Error 404: Secret Version [projects/2381824501/secrets/my-password/versions/1] not found.
So I created the secret with the gcloud CLI:
echo -n "my_secret_password" | gcloud secrets create "my-password" \
  --data-file - \
  --replication-policy "automatic"
Then I applied Terraform again, and it said: Error: project: required field is not set.
If I want to use Terraform to create a secret with a real value, how should I do it?
I found the following article on Managing Secret Manager with Terraform that I consider useful.
You have to:
Create the setup:
Create a file named versions.tf that defines the version constraints.
Create a file named main.tf and configure the Google provider stanza.
This is the code for creating a Secret Manager secret named "my-secret" with an automatic replication policy:
resource "google_secret_manager_secret" "my-secret" {
  provider  = google-beta
  secret_id = "my-secret"

  replication {
    automatic = true
  }

  depends_on = [google_project_service.secretmanager]
}
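Note that this block creates only the secret container; the actual value lives in a version. A sketch of adding one with the google_secret_manager_secret_version resource (the literal value ends up in the Terraform state, so the state file must be treated as sensitive):

```hcl
resource "google_secret_manager_secret_version" "my_secret_v1" {
  secret      = google_secret_manager_secret.my-secret.id
  secret_data = "my_secret_password" # stored in plain text in the state file
}
```

For real deployments, the value is usually supplied via a sensitive input variable rather than a literal.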
Following #marian.vladoi's answer, if you're having issues with the Cloud Resource Manager API, enable it like so:
resource "google_project_service" "cloudresourcemanager" {
  service = "cloudresourcemanager.googleapis.com"
}
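If several APIs need enabling, the same resource can be generalized with for_each, a sketch:

```hcl
resource "google_project_service" "apis" {
  for_each = toset([
    "cloudresourcemanager.googleapis.com",
    "secretmanager.googleapis.com",
  ])

  service = each.value
}
```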
You can also enable the required APIs from the terminal with gcloud, e.g. for the Secret Manager API:
gcloud services enable secretmanager.googleapis.com