Error: Invalid version constraint when using a module

I'm creating a few resources such as an ALB and security groups, and I use modules from GitHub:
module "efs_sg" {
  source  = "git::https://github.com/terraform-aws-modules/terraform-aws-security-group.git"
  version = "3.2.0"

  name        = "${var.default_tags["app"]}-efs"
  description = "Security group for FE"
  vpc_id      = data.terraform_remote_state.network.outputs.vpc_id

  computed_ingress_with_source_security_group_id = [
    {
      from_port                = 2049
      to_port                  = 2049
      protocol                 = "tcp"
      description              = "NFS"
      source_security_group_id = module.asg_sg.this_security_group_id
    }
  ]
  number_of_computed_ingress_with_source_security_group_id = 1

  tags = var.default_tags
}
When I run terraform plan/apply I get this error:
Error: Invalid version constraint
Cannot apply a version constraint to module "efs_sg" (at
terraform/dev/eu-west-1/sg.tf:107) because it has a non Registry
URL.
I'm using Terraform v0.12.12. How can I fix this?

You can't use the version argument with a Git-hosted module; version constraints are only supported for modules hosted in a Terraform registry.
Instead, add a tag to the repo and select it with the ?ref query string.
eg:
"git::https://github.com/terraform-aws-modules/terraform-aws-security-group.git?ref=3.2.0"
https://www.terraform.io/docs/modules/sources.html#selecting-a-revision
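Applied to the module call above, the pinned source would look like this (a sketch; check the repository's tags to see whether the tag name is 3.2.0 or v3.2.0):

```hcl
module "efs_sg" {
  # Pin the Git-hosted module via ?ref instead of a version argument.
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-security-group.git?ref=v3.2.0"

  name   = "${var.default_tags["app"]}-efs"
  vpc_id = data.terraform_remote_state.network.outputs.vpc_id
  # ... remaining arguments unchanged ...
}
```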


Terraform 0.12 issue . This object does not have an attribute named "xyz"

I have a project that I created in Terraform 0.12, and it's modularized.
It's something like:
<project_name>
├── sg
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── ecs
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── efs
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── alb
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
I consume the output values from sg [security group] using remote state.
I was able to reference the output values from sg in ecs and the other modules successfully, but doing the same in alb I get the following error:
"This object does not have an attribute named "alb_sg"".
The outputs.tf file for sg is:
output "alb_sg" {
  value = [module.alb_sg.this_security_group_id]
}
...
...
...
Security group Output from terraform apply:
alb_sg = [
"sg-somevalue"
]
ecs_sg = [
"sg-somevalue"
]
efs_sg = [
"sg-somevalue"
]
ALB resource code from the alb module:
resource "aws_lb" this
{
  name               = somename
  subnets            = flatten(module.vpc_presets.subnet_ids)
  security_groups    = [data.terraform_remote_state.remote_state_sg.outputs.alb_sg]
  internal           = "true"
  loab_balancer_type = "application"
  tags               = var.tags
}
The error after I run terraform apply from inside the alb module:
Error: Unsupported attribute

on main.tf line 12, in resource "aws_lb" "this":
  12: security_groups = [data.terraform_remote_state.remote_state_sg.outputs.alb_sg]

data.terraform_remote_state.remote_state_sg.outputs is object with 3 attributes

This object does not have an attribute named "alb_sg"
The issue was a wrong reference to the remote state: I was referring to a different remote state that did not have the alb_sg attribute. After going through the code again I realized it was a coding issue.
There is also a mistake in your resource declaration: the second label must be quoted, "this" instead of this. The corrected resource (also fixing the loab_balancer_type typo and quoting the name) looks like this:
resource "aws_lb" "this" {
  name    = "somename"
  subnets = flatten(module.vpc_presets.subnet_ids)

  # outputs.alb_sg is already a list, so it should not be wrapped in another list.
  security_groups = data.terraform_remote_state.remote_state_sg.outputs.alb_sg

  internal           = true
  load_balancer_type = "application"
  tags               = var.tags
}
You can refer to the Terraform documentation for the aws_lb resource type.
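Since the root cause was reading the wrong remote state, it is also worth double-checking the data source configuration. A sketch assuming an S3 backend (the bucket and key names here are hypothetical):

```hcl
data "terraform_remote_state" "remote_state_sg" {
  backend = "s3"

  config = {
    # Must point at the state file that actually contains the alb_sg output.
    bucket = "my-terraform-state"   # hypothetical bucket
    key    = "sg/terraform.tfstate" # hypothetical key for the sg module's state
    region = "eu-west-1"            # hypothetical region
  }
}

# The output is then available as:
# data.terraform_remote_state.remote_state_sg.outputs.alb_sg
```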

target_groups optional param is not optional

Error: Invalid index

on .terraform/modules/database-security-group/main.tf line 70, in resource "aws_security_group_rule" "ingress_rules":
  70: to_port = var.rules[var.ingress_rules[count.index]][1]
    |----------------
    | count.index is 0
    | var.ingress_rules is list of string with 1 element
    | var.rules is map of list of string with 119 elements

The given key does not identify an element in this collection value.
It's all Greek to me. We could use the help.
module "database-security-group" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "database-security"
  description = "Security group for Database on database subnet."
  vpc_id      = module.vpc.vpc_id

  ingress_cidr_blocks = ["0.0.0.0/0"]
  ingress_rules       = ["http-3306-tcp"]
  egress_rules        = ["all-all"]

  tags = {
    Name        = "Database"
    Environment = "spoon"
  }
}
I believe the intention of this particular module is that you select from its table of predefined rules when specifying ingress_rules and egress_rules.
At the time I write this I don't see a definition for a rule "http-3306-tcp", and so I think that's the cause of your error. If your intent was to allow TCP port 3306 for MySQL then it seems the rule key for that is "mysql-tcp".
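If MySQL on TCP 3306 is the intent, the fix is a one-line change to the module call (a sketch; "mysql-tcp" is one of the module's predefined rule keys):

```hcl
module "database-security-group" {
  source = "terraform-aws-modules/security-group/aws"

  name                = "database-security"
  description         = "Security group for Database on database subnet."
  vpc_id              = module.vpc.vpc_id
  ingress_cidr_blocks = ["0.0.0.0/0"]

  # "http-3306-tcp" is not in the module's rules map, which caused the
  # Invalid index error; "mysql-tcp" is the predefined key for TCP/3306.
  ingress_rules = ["mysql-tcp"]
  egress_rules  = ["all-all"]
}
```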

Terraform Error: "replication_group_id": conflicts with engine_version. ( redis )

I'm trying to create an aws_elasticache_replication_group using Redis:
resource "aws_elasticache_cluster" "encryption-at-rest" {
  count                = 1
  cluster_id           = "${var.namespace}-${var.environment}-encryption-at-rest"
  engine               = "redis"
  engine_version       = var.engine_version
  node_type            = var.node_type
  num_cache_nodes      = 1
  port                 = var.redis_port
  #az_mode             = var.az_mode
  replication_group_id = aws_elasticache_replication_group.elasticache_replication_group.id
  security_group_ids   = [aws_security_group.redis_security_group.id]
  subnet_group_name    = aws_elasticache_subnet_group.default.name
  apply_immediately    = true

  tags = {
    Name = "${var.namespace}-${var.environment}-redis"
  }
}

resource "aws_elasticache_replication_group" "elasticache_replication_group" {
  automatic_failover_enabled    = false //var.sharding_automatic_failover_enabled
  availability_zones            = ["ap-southeast-1a"] //data.terraform_remote_state.network.outputs.availability_zones
  replication_group_id          = "${var.namespace}-${var.environment}-encryption-at-rest"
  replication_group_description = "${var.namespace} ${var.environment} replication group"
  security_group_ids            = [aws_security_group.redis_security_group.id]
  subnet_group_name             = aws_elasticache_subnet_group.default.name
  node_type                     = var.node_type
  number_cache_clusters         = 1 //2
  parameter_group_name          = aws_elasticache_parameter_group.param_group.name
  port                          = var.redis_port
  at_rest_encryption_enabled    = true
  kms_key_id                    = data.aws_kms_alias.kms_redis.target_key_arn
  apply_immediately             = true
}

resource "aws_elasticache_parameter_group" "param_group" {
  name   = "${var.namespace}-${var.environment}-params"
  family = "redis5.0"
}
But I get the following error:
aws_security_group_rule.redis_ingress[0]: Refreshing state... [id=sgrule-3474516270]
aws_security_group_rule.redis_ingress[1]: Refreshing state... [id=sgrule-2582511137]
aws_elasticache_replication_group.elasticache_replication_group: Refreshing state... [id=cbpl-uat-encryption-at-rest]
Error: "replication_group_id": conflicts with engine_version
on redis.tf line 1, in resource "aws_elasticache_cluster" "encryption-at-rest":
1: resource "aws_elasticache_cluster" "encryption-at-rest" {
Releasing state lock. This may take a few moments...
The aws_elasticache_cluster resource docs say this:
replication_group_id - (Optional) The ID of the replication group to
which this cluster should belong. If this parameter is specified, the
cluster is added to the specified replication group as a read replica;
otherwise, the cluster is a standalone primary that is not part of any
replication group.
engine – (Required unless replication_group_id is provided) Name
of the cache engine to be used for this cache cluster. Valid values
for this parameter are memcached or redis
If you're going to join it to a replication group then the engine must match the replication group's engine type and so it shouldn't be set on the aws_elasticache_cluster.
The AWS provider overloads the aws_elasticache_cluster structure to handle multiple dissimilar configurations. The internal logic contains a set of 'ConflictsWith' validations which are based on the premise that certain arguments simply cannot be specified together because they represent different modes of elasticache clusters (or nodes).
If you are specifying a replication_group_id then the value of engine_version will be managed by the corresponding aws_elasticache_replication_group.
Therefore, the solution is simply to remove the engine_version argument from your aws_elasticache_cluster resource specification. If you so choose (or in cases where it is required), you can also add that argument to the aws_elasticache_replication_group.
Example: Redis Cluster Mode Disabled Read Replica Instance
// These inherit their settings from the replication group.
resource "aws_elasticache_cluster" "replica" {
  cluster_id           = "cluster-example"
  replication_group_id = aws_elasticache_replication_group.example.id
}
In this mode, the aws_elasticache_cluster structure requires very few arguments.
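Applied to the original configuration, that means removing engine_version (and engine) from the aws_elasticache_cluster and, if you want to pin the Redis version, setting it on the replication group instead. A sketch:

```hcl
resource "aws_elasticache_replication_group" "elasticache_replication_group" {
  replication_group_id          = "${var.namespace}-${var.environment}-encryption-at-rest"
  replication_group_description = "${var.namespace} ${var.environment} replication group"

  # Pin the engine version here; member clusters inherit it.
  engine_version = var.engine_version

  node_type             = var.node_type
  number_cache_clusters = 1
  port                  = var.redis_port
  # ... remaining arguments unchanged ...
}
```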

Terraform - re-use an existing subnetwork to create a cloud sql instance on GCP

I am attempting to create a Cloud SQL instance on GCP using Terraform. I want to use an existing VPC subnetwork created in an earlier step, but there does not seem to be a way to refer to it; all the examples seem to require a new IP range to be set up. This is my current code that creates the new IP range:
resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  project       = "project_name"
  name          = "private_range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 18
  network       = "projects/project_name/global/networks/vpc_name"
  address       = "192.168.128.0"
}
resource "google_service_networking_connection" "private_vpc_connection" {
  provider                = google-beta
  network                 = "projects/project_name/global/networks/vpc_name"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

resource "google_sql_database_instance" "instance" {
  provider         = google-beta
  project          = "project_name"
  name             = "db-instance10"
  region           = "us-east1"
  database_version = "MYSQL_5_7"
  depends_on       = [google_service_networking_connection.private_vpc_connection]

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      ipv4_enabled    = false
      private_network = "projects/project_name/global/networks/vpc_name"
    }
  }
}

provider "google-beta" {
  region = "us-east1"
  zone   = "us-east1-c"
}
When I specify the exact same IP range as the existing subnet, I receive the error:
Error: Error waiting to create GlobalAddress: Error waiting for Creating GlobalAddress: Requested range conflicts with other resources: The provided IP range overlaps with existing subnetwork IP range.
There does not seem to be any obvious way to refer to the existing subnetwork resource as the reserved_peering_ranges parameter only seems to accept the name of an IP address range resource.
Here is the resource specification for the existing subnetwork:
resource "google_compute_subnetwork" "vpc_subnet_name" {
  creation_timestamp       = "2020-06-03T07:28:05.762-07:00"
  enable_flow_logs         = true
  fingerprint              = "ied1TiEZjgc="
  gateway_address          = "192.168.128.1"
  id                       = "us-east1/vpc_subnet_name"
  ip_cidr_range            = "192.168.128.0/18"
  name                     = "vpc_subnet_name"
  network                  = "https://www.googleapis.com/compute/v1/projects/project_name/global/networks/vpc_name"
  private_ip_google_access = true
  project                  = "project_name"
  region                   = "us-east1"
  secondary_ip_range       = []
  self_link                = "https://www.googleapis.com/compute/v1/projects/project_name/regions/us-east1/subnetworks/vpc_subnet_name"

  log_config {
    aggregation_interval = "INTERVAL_5_SEC"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}
Connecting to a Cloud SQL instance through a private IP requires configuring private services access, which uses an allocated IP address range that must not overlap with any existing VPC subnet.
The private connection links your VPC network with the service's VPC network. This connection allows VM instances in your VPC network to use internal IP addresses to reach service resources, for example a Cloud SQL instance that has internal IP addresses.
Once created, the allocated IP address range and the connection can be reused with other services.
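So rather than reusing the subnet's 192.168.128.0/18, reserve a separate range for the peering. A sketch (the 10.100.0.0/16 range below is only an example and must not overlap any existing subnet in the VPC):

```hcl
resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  project       = "project_name"
  name          = "private-service-range" # hypothetical name
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  # Example range outside 192.168.128.0/18; adjust to your addressing plan.
  address       = "10.100.0.0"
  prefix_length = 16
  network       = "projects/project_name/global/networks/vpc_name"
}
```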
You can use the module below to create the Cloud SQL instance with an existing private VPC, but you need to modify it according to your network. In this scenario I have created a separate private network and create the Cloud SQL instance using that network:
https://github.com/gruntwork-io/terraform-google-sql
Get the existing network in your cloud infrastructure from which you want to create your Cloud SQL instance; the command below lists the network URIs:
gcloud compute networks list --uri
Put that network where the self link is mentioned, and comment out the steps where the VPC is created. Please refer to the main.tf file below; its location is Cloud_SQL.terraform\modules\sql_example_postgres-private-ip\examples\postgres-private-ip\main.tf.
Add the variables accordingly.
# ------------------------------------------------------------------------------
# LAUNCH A POSTGRES CLOUD SQL PRIVATE IP INSTANCE
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# CONFIGURE OUR GCP CONNECTION
# ------------------------------------------------------------------------------
provider "google-beta" {
  project = var.project
  region  = var.region
}

terraform {
  # This module is now only being tested with Terraform 0.14.x. However, to make upgrading easier, we are setting
  # 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
  # forwards compatible with 0.14.x code.
  required_version = ">= 0.12.26"

  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 3.57.0"
    }
  }
}
# ------------------------------------------------------------------------------
# CREATE A RANDOM SUFFIX AND PREPARE RESOURCE NAMES
# ------------------------------------------------------------------------------
resource "random_id" "name" {
  byte_length = 2
}

####################################################################
# Reserve a global internal address range for the peering
resource "google_compute_global_address" "private_ip_address" {
  provider = google-beta

  # name = local.private_ip_name
  name          = var.vpc_network
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16

  # network = google_compute_network.private_network.self_link
  # network = google_compute_network.vpc_network.self_link
  network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc/"
}

# Establish the VPC network peering connection using the reserved address range
resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta

  # network = google_compute_network.private_network.self_link
  network                 = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
# ------------------------------------------------------------------------------
# CREATE DATABASE INSTANCE WITH PRIVATE IP
# ------------------------------------------------------------------------------
module "postgres" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  # source = "github.com/gruntwork-io/terraform-google-sql.git//modules/cloud-sql?ref=v0.2.0"
  source = "../../modules/cloud-sql"

  project = var.project
  region  = var.region
  name    = var.instance_name
  db_name = var.db_name

  engine       = var.postgres_version
  machine_type = var.machine_type

  # To make it easier to test this example, we are disabling deletion protection so we can destroy the databases
  # during the tests. By default, we recommend setting deletion_protection to true, to ensure database instances are
  # not inadvertently destroyed.
  deletion_protection = false

  # These together will construct the master_user privileges, i.e.
  # 'master_user_name'@'master_user_host' IDENTIFIED BY 'master_user_password'.
  # These should typically be set as the environment variable TF_VAR_master_user_password, etc.
  # so you don't check these into source control.
  master_user_password = var.master_user_password
  master_user_name     = var.master_user_name
  master_user_host     = "%"

  # Pass the private network link to the module
  # private_network = google_compute_network.private_network.self_link
  private_network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"

  # Wait for the vpc connection to complete
  dependencies = [google_service_networking_connection.private_vpc_connection.network]

  custom_labels = {
    test-id = "postgres-private-ip-example"
  }
}

Terraform: Creating GCP Project using Shared VPC

I've been working through this for what feels like an eternity now. The host project already exists and has all the VPNs and networking set up. I am looking to create a new project through Terraform, allowing it to use the host project's shared VPC.
Every time I run up against a problem and resolve it, I just run up against another one.
Right now I'm seeing:
google_compute_shared_vpc_service_project.project: googleapi: Error 404: The resource 'projects/intacct-staging-db3b7e7a' was not found, notFound
* google_compute_instance.dokku: 1 error(s) occurred:
As well as:
google_compute_instance.dokku: Error loading zone 'europe-west2-a': googleapi: Error 404: Failed to find project intacct-staging, notFound
I was originally convinced it was ordering, which is why I was playing around with depends_on configurations to sort out the order. That hasn't resolved it.
Reading it simply, the service project doesn't exist as far as google_compute_shared_vpc_service_project is concerned, even though I've added the following to google_compute_shared_vpc_service_project:
depends_on = [
  "google_project.project",
  "google_compute_shared_vpc_host_project.host_project",
]
Perhaps, because the host project already exists, I should use data to refer to it instead of resource?
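If you only need to reference the existing host project rather than manage it, a data source sketch:

```hcl
# Look up the existing host project instead of declaring it as a resource.
data "google_project" "host" {
  project_id = "${var.vpc_parent}"
}

# It can then be referenced as "${data.google_project.host.project_id}",
# e.g. for the host_project argument of google_compute_shared_vpc_service_project.
```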
My full TF file is here:
provider "google" {
  region      = "${var.gcp_region}"
  credentials = "${file("./creds/serviceaccount.json")}"
}

resource "random_id" "id" {
  byte_length = 4
  prefix      = "${var.project_name}-"
}

resource "google_project" "project" {
  name            = "${var.project_name}"
  project_id      = "${random_id.id.hex}"
  billing_account = "${var.billing_account}"
  org_id          = "${var.org_id}"
}

resource "google_project_services" "project" {
  project = "${google_project.project.project_id}"
  services = [
    "compute.googleapis.com"
  ]
  depends_on = ["google_project.project"]
}

# resource "google_service_account" "service-account" {
#   account_id   = "intacct-staging-service"
#   display_name = "Service Account for the intacct staging app"
# }

resource "google_compute_shared_vpc_host_project" "host_project" {
  project = "${var.vpc_parent}"
}

resource "google_compute_shared_vpc_service_project" "project" {
  host_project    = "${google_compute_shared_vpc_host_project.host_project.project}"
  service_project = "${google_project.project.project_id}"
  depends_on = [
    "google_project.project",
    "google_compute_shared_vpc_host_project.host_project",
  ]
}
resource "google_compute_address" "dokku" {
  name         = "fr-intacct-staging-ip"
  address_type = "EXTERNAL"
  project      = "${google_project.project.project_id}"
  depends_on   = ["google_project_services.project"]
}

resource "google_compute_instance" "dokku" {
  project                   = "${google_project.project.name}"
  name                      = "dokku-host"
  machine_type              = "${var.comp_type}"
  zone                      = "${var.gcp_zone}"
  allow_stopping_for_update = "true"
  tags                      = ["intacct"]

  # Install Dokku
  metadata_startup_script = <<SCRIPT
sed -i 's/PermitRootLogin no/PermitRootLogin yes/' /etc/ssh/sshd_config && service sshd restart
SCRIPT

  boot_disk {
    initialize_params {
      image = "${var.compute_image}"
    }
  }

  network_interface {
    subnetwork         = "${var.subnetwork}"
    subnetwork_project = "${var.vpc_parent}"

    access_config = {
      nat_ip = "${google_compute_address.dokku.address}"
    }
  }

  metadata {
    sshKeys = "root:${file("./id_rsa.pub")}"
  }
}
EDIT:
As discussed below, I was able to resolve the latter "project not found" error by changing the reference to project_id instead of name, as name does not include the random hex.
I'm now also seeing another error, referring to the static IP. The network interface is configured to use the subnetwork from the host VPC...
network_interface {
  subnetwork         = "${var.subnetwork}"
  subnetwork_project = "${var.vpc_parent}"

  access_config = {
    nat_ip = "${google_compute_address.dokku.address}"
  }
}
The IP is set up here:
resource "google_compute_address" "dokku" {
  name         = "fr-intacct-staging-ip"
  address_type = "EXTERNAL"
  project      = "${google_project.project.project_id}"
}
The IP should really be in the host project, which I've tried; when I do, I get an error saying that cross-project use is not allowed with this resource.
When I change back to the above, it errors saying that the new project is not capable of handling API calls, which I suppose makes sense, as I only allowed Compute API calls per the google_project_services resource.
I'll try allowing network API calls and see if that works, but I'm thinking the external IP needs to be in the host project's shared VPC?
For anyone encountering the same problem: in my case the "project not found" error was solved just by enabling the Compute Engine API.
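If you would rather keep that step in Terraform than enable the API by hand, a sketch using the google_project_service resource (compute.googleapis.com is the standard identifier for the Compute Engine API):

```hcl
# Enable the Compute Engine API on the new service project before
# creating compute resources or attaching it to the shared VPC.
resource "google_project_service" "compute" {
  project            = "${google_project.project.project_id}"
  service            = "compute.googleapis.com"
  disable_on_destroy = false
}
```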