I am trying to create a VPC in GCP using Terraform, but when I run terraform apply I get an error in the terminal. I am new to Terraform and GCP; this is the code I have used:
// Google Cloud provider
provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.gcp_project}"
  region      = "${var.region}"
}

// Create VPC
resource "google_compute_network" "vpc_network" {
  name                    = "${var.name}-vpc"
  auto_create_subnetworks = "false"
}
//variables.tf
variable "region" {}
variable "gcp_project" {}
variable "credentials" {}
variable "name" {}
variable "subnet_cidr" {}
// terraform.tfvars
region = "europe-west2"
gcp_project = "rock-prism-350316"
credentials = "credentials.json"
name = "dev"
subnet_cidr = "10.10.0.0/24"
I am using a service account which has the following access: Editor on the project and Compute Network Admin.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_network.vpc_network: Creating...
╷
│ Error: Error creating Network: Post
"https://compute.googleapis.com/compute/v1/projects/rock-prism-350316/global/networks?
alt=json": oauth2: cannot fetch token: unexpected EOF
│
│ with google_compute_network.vpc_network,
│ on main.tf line 8, in resource "google_compute_network" "vpc_network":
│ 8: resource "google_compute_network" "vpc_network" {
You can use either the contents of a key file or a file path for the credentials argument in the provider (see the credentials section of the provider documentation).
The error message you're getting shows that it's trying to create a network in project rock-prism-350316 using credentials for project sunlit-vortex-184612.
Try correcting the gcp_project value in your tfvars file. It's also a good idea to add the project parameter to your VPC resource:
// Google Cloud provider
provider "google" {
  credentials = "${file("${var.credentials}")}"
  project     = "${var.gcp_project}"
  region      = "${var.region}"
}

// Create VPC
resource "google_compute_network" "vpc_network" {
  name                    = "${var.name}-vpc"
  project                 = "${var.gcp_project}"
  auto_create_subnetworks = "false"
}
//variables.tf
variable "region" {}
variable "gcp_project" {}
variable "credentials" {}
variable "name" {}
variable "subnet_cidr" {}
// terraform.tfvars
region = "europe-west2"
gcp_project = "rock-prism-350316"
credentials = "credentials.json"
name = "dev"
subnet_cidr = "10.10.0.0/24"
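As noted above, the credentials argument accepts either the key file contents or a path to the key file, so both of the following forms work; this is just a sketch reusing the file name from the question:

provider "google" {
  // Option 1: pass the key file contents
  credentials = file(var.credentials)
  // Option 2: pass the path to the key file instead
  // credentials = var.credentials
  project = var.gcp_project
  region  = var.region
}

Setting the GOOGLE_APPLICATION_CREDENTIALS environment variable is a third option that keeps credentials out of the configuration entirely.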
The issue was that security software installed on my device was blocking communication between the GCP provider and Terraform. I had to disable it from Services; once that was done, it worked fine. There were no issues in the code or with authentication.
Related
Using Terraform v1.2.5 I am attempting to deploy an AWS VPC Peer. However, the following code fails validation:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.1"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

data "aws_vpc" "accepter" {
  provider = aws.accepter
  id       = var.accepter_vpc_id
}

locals {
  accepter_account_id = element(split(":", data.aws_vpc.accepter.arn), 4)
}

resource "aws_vpc_peering_connection" "requester" {
  description   = "peer_to_${var.accepter_profile}"
  vpc_id        = var.requester_vpc_id
  peer_vpc_id   = data.aws_vpc.accepter.id
  peer_owner_id = local.accepter_account_id
}
When validating this Terraform code I receive the following error:
$ terraform validate
╷
│ Error: Provider configuration not present
│
│ To work with data.aws_vpc.accepter its original provider configuration at provider["registry.terraform.io/hashicorp/aws"].accepter is
│ required, but it has been removed. This occurs when a provider configuration is removed while objects created by that provider still exist
│ in the state. Re-add the provider configuration to destroy data.aws_vpc.accepter, after which you can remove the provider configuration
│ again.
What am I missing or misconfigured that is causing this error?
You need to define the aliased provider configuration you are referencing here:
data "aws_vpc" "accepter" {
provider = aws.accepter # <--- missing aliased provider
id = var.accepter_vpc_id
}
To fix this, you just need to add the corresponding aliased provider block [1]:
provider "aws" {
alias = "accepter"
region = "us-east-1" # make sure the region is right
}
[1] https://developer.hashicorp.com/terraform/language/providers/configuration#alias-multiple-provider-configurations
Since this VPC peering is an inter-region peering, we need two AWS regions. The way to achieve this is to create an AWS provider block and an aliased AWS provider block.
See an example:
provider "aws" {
region = "us-east-1"
# Requester's credentials.
}
provider "aws" {
alias = "peer"
region = "us-west-2"
# Accepter's credentials.
}
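A sketch of how the peering resources might then use these two configurations; the accepter resource, peer_region, and auto_accept values here are assumptions based on a typical cross-region setup, not code from the question:

resource "aws_vpc_peering_connection" "requester" {
  vpc_id        = var.requester_vpc_id
  peer_vpc_id   = data.aws_vpc.accepter.id
  peer_owner_id = local.accepter_account_id
  peer_region   = "us-west-2" # region of the aliased (accepter) provider
}

resource "aws_vpc_peering_connection_accepter" "accepter" {
  provider                  = aws.peer # the accepter side uses the aliased provider
  vpc_peering_connection_id = aws_vpc_peering_connection.requester.id
  auto_accept               = true
}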
Your data source doesn't know which provider configuration to use; the one you referenced does not exist.
You have two options:
1. Add an alias to the provider:
provider "aws" {
  region = "us-east-1"
  alias  = "accepter"
}
2. Remove the line provider = aws.accepter from data "aws_vpc" "accepter", as shown in the sketch below.
The first option is probably the more appropriate one in your case.
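With the second option, the data source simply falls back to the default provider configuration:

data "aws_vpc" "accepter" {
  id = var.accepter_vpc_id
}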
Question: I am trying to reference the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables inside a Terraform module which uses AWS CodeBuild projects. The credentials are being loaded from a shared credentials file.
In my terraform project
main.tf
# master profile
provider "aws" {
  alias       = "master"
  max_retries = "5"
  profile     = "master"
  region      = var.region
}

# env profile
provider "aws" {
  max_retries = "5"
  region      = var.region
  profile     = "dev-terraform"
}

module "code_build" {
  source = "../../modules/code_build"
  ...
}
code_build.tf
resource "aws_codebuild_project" "sls_deploy" {
...
environment {
...
environment_variable {
name = "AWS_ACCESS_KEY_ID"
value = "(Trying to read the aws access key from the aws provider profile env)
}
Can anyone explain how I can reference the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY from the provider credentials specified in main.tf?
I created a new EC2 instance on AWS.
I am trying to run Terraform on the AWS server and I'm getting an error.
I didn't create the AMI beforehand, so I'm not sure if that is the issue.
I checked my keypair and ensured it is correct.
I also checked the API details and they are correct too. I'm using a college AWS App account where the API details are the same for all users. Not sure if that would be an issue.
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"

  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:
This is the official page: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/secret_manager_secret
I created these files:
variables.tf
variable "gcp_project" {
  type = string
}
main.tf
resource "google_secret_manager_secret" "my_password" {
provider = google-beta
secret_id = "my-password"
replication {
automatic = true
}
}
data "google_secret_manager_secret_version" "my_password_v1" {
provider = google-beta
project = var.gcp_project
secret = google_secret_manager_secret.my_password.secret_id
version = 1
}
outputs.tf
output "my_password_version" {
  value = data.google_secret_manager_secret_version.my_password_v1.version
}
When I applied it, I got this error:
Error: Error retrieving available secret manager secret versions: googleapi: Error 404: Secret Version [projects/2381824501/secrets/my-password/versions/1] not found.
So I created the secret with the gcloud CLI:
echo -n "my_secret_password" | gcloud secrets create "my-password" \
--data-file - \
--replication-policy "automatic"
Then I applied Terraform again, and it said: Error: project: required field is not set.
If I want to use Terraform to create a secret with a real value, how do I do that?
I found the following article that I consider to be useful on Managing Secret Manager with Terraform.
You have to create the setup:
1. Create a file named versions.tf that defines the version constraints.
2. Create a file named main.tf and configure the Google provider stanza.
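A minimal sketch of that provider stanza, plus enabling the Secret Manager API that the depends_on below refers to (the project and region values here are assumptions, not from the question):

provider "google-beta" {
  project = var.gcp_project
  region  = "us-central1" # assumed region
}

# Enable the Secret Manager API (referenced by depends_on in the secret resource)
resource "google_project_service" "secretmanager" {
  provider = google-beta
  service  = "secretmanager.googleapis.com"
}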
This is the code for creating a Secret Manager secret named "my-secret" with an automatic replication policy:
resource "google_secret_manager_secret" "my-secret" {
provider = google-beta
secret_id = "my-secret"
replication {
automatic = true
}
depends_on = [google_project_service.secretmanager]
}
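To store an actual value (the "real value" part of the question), you can add a secret version on top of this; a minimal sketch, assuming a variable that holds the value:

variable "my_secret_value" {
  type      = string
  sensitive = true
}

resource "google_secret_manager_secret_version" "my-secret-version" {
  provider    = google-beta
  secret      = google_secret_manager_secret.my-secret.id
  secret_data = var.my_secret_value
}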
Following marian.vladoi's answer, if you're having issues with the Cloud Resource Manager API, enable it like so:
resource "google_project_service" "cloudresourcemanager" {
service = "cloudresourcemanager.googleapis.com"
}
You can also enable the Cloud Resource Manager API using this gcloud command in the terminal:
gcloud services enable cloudresourcemanager.googleapis.com
I would like to manage AWS S3 buckets with terraform and noticed that there's a region parameter for the resource.
I have an AWS provider that is configured for one region, and I would like to use that provider to create S3 buckets in multiple regions if possible. My S3 buckets have a lot of common configuration that I don't want to repeat, so I have a local module to do all the repetitive stuff.
In mod-s3-bucket/main.tf, I have something like:
variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  region = var.bucket_region
  bucket = var.bucket_name
}
And then in main.tf in the parent directory (tf root):
provider "aws" {
region = "us-east-1"
}
module "somebucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-1"
bucket_name = "useast1-bucket"
}
module "anotherbucket" {
source = "mod-s3-bucket"
bucket_region = "us-east-2"
bucket_name = "useast2-bucket"
}
When I run a terraform apply with that, both buckets get created in us-east-1. Is this expected behaviour? My understanding is that the region parameter should make the buckets get created in different regions.
Further to that, if I run a terraform plan after bucket creation, I see the following on one of the buckets:
~ region = "us-east-1" -> "us-east-2"
but after an apply, the region has not changed.
I know I can easily solve this by using a 2nd, aliased AWS provider, but am asking specifically about how the region parameter is meant to work for an aws_s3_bucket resource (https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#region)
terraform v0.12.24
aws v2.64.0
I think you'll need to do something like the docs show in this example for Replication Configuration: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-replication-configuration
# /root/main.tf
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
}

module "somebucket" {
  source        = "mod-s3-bucket"
  bucket_region = "us-east-1"
  bucket_name   = "useast1-bucket"
}

module "anotherbucket" {
  source        = "mod-s3-bucket"
  provider      = "aws.us-east-2"
  bucket_region = "us-east-2"
  bucket_name   = "useast2-bucket"
}

# /mod-s3-bucket/main.tf
variable "provider" {
  type    = string
  default = "aws"
}

variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  provider = var.provider
  region   = var.bucket_region
  bucket   = var.bucket_name
}
I've never explicitly set the provider like that though in a resource but based on the docs it might work.
The region attribute in the aws_s3_bucket resource isn't applied as expected; there is an open bug for this:
https://github.com/terraform-providers/terraform-provider-aws/issues/592
The multiple-provider approach is needed.
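A minimal sketch of that approach, assuming an aliased us-east-2 provider like the one in the previous answer and passing it to the module with the providers meta-argument (the module arguments here are assumptions):

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
}

module "anotherbucket" {
  source      = "./mod-s3-bucket"
  bucket_name = "useast2-bucket"

  providers = {
    aws = aws.us-east-2 # resources inside the module now use this configuration
  }
}

Inside the module, the aws_s3_bucket resource then picks up its region from whichever provider configuration is passed in, so the region argument on the bucket can be dropped.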
Terraform informs you if you try to set the region directly in the resource:
╷
│ Error: Value for unconfigurable attribute
│
│ with aws_s3_bucket.my_bucket,
│ on s3.tf line 10, in resource "aws_s3_bucket" "my_bucket":
│ 28: region = "us-east-1"
│
│ Can't configure a value for "region": its value will be decided automatically based on the result of applying this configuration.
Terraform uses the configuration of the provider, where the region is set, for managing resources. Alternatively, as already mentioned, you can use multiple configurations for the same provider by making use of the alias meta-argument.
You can optionally define multiple configurations for the same provider, and select which one to use on a per-resource or per-module basis. The primary reason for this is to support multiple regions for a cloud platform; other examples include targeting multiple Docker hosts, multiple Consul hosts, etc.
...
A provider block without an alias argument is the default configuration for that provider. Resources that don't set the provider meta-argument will use the default provider configuration that matches the first word of the resource type name. (https://developer.hashicorp.com/terraform/language/providers/configuration)
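For completeness, per-resource selection would look like this, using the aliased configuration from the example above (the bucket name is just a placeholder):

resource "aws_s3_bucket" "useast2_bucket" {
  provider = aws.us-east-2 # explicit per-resource provider selection
  bucket   = "useast2-bucket"
}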