Variables in Terragrunt: AWS region selection

I'm trying to create an AWS environment using Terragrunt. I was working in the us-east-2 region and now I want to work in eu-central-1.
In my code I have a single variable that represents the region. I changed it to eu-central-1, but when I execute "terragrunt run-all plan" I still see my old environment in the outputs.
I deleted the tfstate and all other local files created by Terragrunt. I also deleted the S3 bucket and DynamoDB table in AWS. Where does Terragrunt store information about the region? How can I use the new region?
terraform {
  source = "/home/bohdan/Dev_ops/terragrunt_vpc/modules//vpc"

  extra_arguments "custom_vars" {
    commands = get_terraform_commands_that_need_vars()
  }
}

locals {
  remote_state_bucket_prefix = "tfstate"
  environment                = "dev"
  app_name                   = "demo3"
  aws_account                = "873432059572"
  aws_region                 = "eu-central-1"
  image_tag                  = "v1"
}

inputs = {
  remote_state_bucket = format("%s-%s-%s-%s", local.remote_state_bucket_prefix, local.app_name, local.environment, local.aws_region)
  environment         = local.environment
  app_name            = local.app_name
  aws_account         = local.aws_account
  aws_region          = local.aws_region
  image_tag           = local.image_tag
}

remote_state {
  backend = "s3"
  config = {
    encrypt = true
    bucket  = format("%s-%s-%s-%s", local.remote_state_bucket_prefix, local.app_name, local.environment, local.aws_region)
    key     = format("%s/terraform.tfstate", path_relative_to_include())
    region  = local.aws_region
    # dynamodb_table = format("tflock-%s-%s-%s", local.environment, local.app_name, local.aws_region)
    dynamodb_table = "my-lock-table"
  }
}

The problem was in source = "/home/bohdan/Dev_ops/terragrunt_vpc/modules//vpc":
I was referring to the wrong module, one with the region hardcoded.
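For anyone hitting the same thing, one way to keep the region out of the module entirely is to let Terragrunt generate the provider configuration from local.aws_region. This is only a sketch of that pattern, not part of the original setup (the generated file name is arbitrary):
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
# Generated by Terragrunt: the provider region always follows local.aws_region
provider "aws" {
  region = "${local.aws_region}"
}
EOF
}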

Related

I am trying to create a service account using Terraform. I have 3 accounts: dev, qa, and production. I want to give access to an S3 bucket in each environment

I have this in my main.tf
module "service_accout_iam_role" {
for_each = { for sa in local.service_accounts : sa.name => sa }
source = "./service_account_iam_role"
environment = var.environment
eks_cluster_name = var.eks_cluster_name
account_id = var.account_id
region = var.region
service_account_name = each.value.name
namespace = each.value.namespace
policies = each.value.policies
}
And
locals {
service_accounts = [
{
name = "my-account"
namespace = "test123"
policies = [
{
name = "deleteS3"
resources = [
"arn:aws:s3:::my-dev-bucket",
"arn:aws:s3:::my-qa-bucket",
"arn:aws:s3:::my-Prod-bucket"
]
},
]
},
]
}
Whenever I run terraform apply in dev it should give permissions in dev, when I run it in qa it needs access to qa, and the same for production. How can I write a condition?
You should have three separate setups for your environments. This is most commonly done using workspaces. Otherwise, whenever you change your env, you will just be overwriting the settings of the old environment.
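As a sketch of the workspace approach (the workspace names and bucket ARNs below are taken from the question and are only illustrative), you can key the environment-specific values off terraform.workspace:
locals {
  # One entry per workspace, created with "terraform workspace new dev", etc.
  bucket_arns = {
    dev  = ["arn:aws:s3:::my-dev-bucket"]
    qa   = ["arn:aws:s3:::my-qa-bucket"]
    prod = ["arn:aws:s3:::my-Prod-bucket"]
  }

  service_accounts = [
    {
      name      = "my-account"
      namespace = "test123"
      policies = [
        {
          name = "deleteS3"
          # Only the ARNs for the active workspace are granted
          resources = local.bucket_arns[terraform.workspace]
        },
      ]
    },
  ]
}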

Issue while deploying a GCP Cloud Function

I have the following issue while deploying a Cloud Function (I am completely new to GCP and Terraform).
I am trying to deploy a Cloud Function through Terraform, but when I deploy it, it destroys an existing Cloud Function that was already deployed in GCP (by a colleague), even though the Cloud Function name, bucket object name and archive file name are different (only the bucket name and project ID are the same).
It looks like it is picking up the state of the existing Cloud Function that is already deployed.
Is there any way to keep the existing state unaffected?
Code snippet (as mentioned above, there is already one Cloud Function deployed with the same project ID and bucket):
main.tf:
provider "google" {
project = "peoject_id"
credentials = "cdenetialfile"
region = "some-region"
}
locals {
timestamp = formatdate("YYMMDDhhmmss", timestamp())
root_dir = abspath("./app/")
}
data "archive_file" "archive" {
type = "zip"
output_path = "/tmp/function-${local.timestamp}.zip"
source_dir = local.root_dir
}
resource "google_storage_bucket_object" "object_archive" {
name = "archive-${local.timestamp}.zip"
bucket = "dev-bucket-tfstate"
source = "/tmp/function-${local.timestamp}.zip"
depends_on = [data.archive_file.archive]
}
resource "google_cloudfunctions_function" "translator_function" {
name = "Cloud_functionname"
available_memory_mb = 256
timeout = 61
runtime = "java11"
source_archive_bucket = "dev-bucket-tfstate"
source_archive_object = google_storage_bucket_object.object_archive.name
entry_point = "com.test.controller.myController"
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = "topic_name"
}
}
backend.tf
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    credentials = "cdenetialfile"
  }
}
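One thing worth knowing about the gcs backend: it supports a prefix argument that namespaces the state objects inside the bucket, so two configurations sharing a bucket do not end up reading the same state. A minimal sketch, with a purely illustrative prefix value:
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    credentials = "cdenetialfile"
    # Illustrative prefix: keeps this configuration's state separate from
    # anything else stored in the same bucket.
    prefix = "terraform/state/translator-function"
  }
}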

terraform plan error: Argument not expected here

I am trying to create custom IAM roles for different environments, with different sets of permissions for dev and prod, in Google Cloud Platform. My folder structure is as follows:
iamroles (root folder)
-- main.tf
-- variables.tf
--customiamroles(folder)
----main.tf
--environments
----nonprod
------main.tf
------variables.tf
----prod
------main.tf
------variables.tf
The main.tf in the root folder has the below code:
iamroles/main.tf
/*
  This is the 'main' Terraform file. It calls the child modules to create roles in the corresponding environments.
*/
provider "google" {
  credentials = file("${var.project_id}.json")
  project     = var.project_id
  region      = var.location
}

module "nonprod" {
  source = "./environments/nonprod"
}
iamroles/variables.tf
variable "project_id"{
type = string
}
variable "location" {
type = string
default = "europe-west3"
}
iamroles/environments/nonprod/main.tf
module "nonprod" {
role_details = [{
role_id = "VS_DEV_NONPROD_CLOUDSQL",
title = "VS DEVELOPER NON PROD CLOUD SQL",
description = "Role which provides limited view and update access to Cloud SQL",
permissions = var.developer_nonprod_sql
},
{
role_id = "VS_DEV_NONPROD_APPENGINE",
title = "VS DEVELOPER NON PROD APPENGINE",
description = "Appengine access for developers for non production environments to View, Create and Delete versions, View and Delete instances, View and Run cron jobs",
permissions = var.developer_nonprod_appengine
}]
source = "../../customiamroles"
}
iamroles/environments/nonprod/variables.tf
variable "role_details" {
type = list(object({
role_id = string
title = string
description = string
permissions = list(string)
}))
}
variable "developer_nonprod_sql" {
default = ["cloudsql.databases.create","cloudsql.databases.get"]
}
variable "developer_nonprod_appengine" {
default = ["appengine.applications.get","appengine.instances.get","appengine.instances.list","appengine.operations.*","appengine.services.get","appengine.services.list"]
}
iamroles/customiamroles/main.tf
# Creating custom roles
resource "google_project_iam_custom_role" "vs-custom-roles" {
  for_each    = var.role_details
  role_id     = each.value.role_id
  title       = each.value.title
  description = each.value.description
  permissions = each.value.permissions
}
While executing terraform plan from the iamroles folder, I am getting the "Argument not expected here" exception from the title.
I am new to Terraform, having been learning it for the past two days. I could use some help understanding what I am doing wrong.
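Two things in the snippets above are worth checking against that message: iamroles/customiamroles has no variables.tf declaring role_details (passing an undeclared argument to a module produces Terraform's "An argument named ... is not expected here" error), and for_each only accepts a map or a set of strings, not a list of objects. A sketch of what a fix could look like, under the assumption that those are the culprits:
# iamroles/customiamroles/variables.tf (assumed to be missing in the layout above)
variable "role_details" {
  type = list(object({
    role_id     = string
    title       = string
    description = string
    permissions = list(string)
  }))
}

# iamroles/customiamroles/main.tf -- key the list by role_id so for_each gets a map
resource "google_project_iam_custom_role" "vs-custom-roles" {
  for_each    = { for role in var.role_details : role.role_id => role }
  role_id     = each.value.role_id
  title       = each.value.title
  description = each.value.description
  permissions = each.value.permissions
}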

Having trouble with Terraform and AWS Storage Gateway disks

I am using Terraform with AWS and have been able to create an AWS Storage Gateway file gateway using the aws_storagegateway_gateway resource.
The gateway is created and its status is 'online', but there is no cache disk added yet in the console, which is normal since that has to be done after the gateway is created. The VM does have a disk, it is available to add in the console, and adding it there works perfectly.
However, I am trying to add the disk with Terraform once the gateway is created and cannot seem to get the code to work, or quite possibly don't understand how to get it to work.
I am trying to use the aws_storagegateway_cache resource, but I get an error on the disk_id and do not know how to get it back from the gateway creation.
Does someone have a working example of how to add the cache disk with Terraform once the gateway is created, or know how to get the disk_id so I can add it?
Adding code
provider "aws" {
access_key = "${var.access-key}"
secret_key = "${var.secret-key}"
token = "${var.token}"
region = "${var.region}"
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = "${var.gateway-ip-address}"
gateway_name = "${var.gateway-name}"
gateway_timezone = "${var.gateway-timezone}"
gateway_type = "${var.gateway-type}"
smb_active_directory_settings {
domain_name = "${var.domain-name}"
username = "${var.username}"
password = "${var.password}"
}
}
resource "aws_storagegateway_cache" "sgwdisk" {
disk_id = "SCSI"
gateway_arn = "${aws_storagegateway_gateway.hmsgw.arn}"
}
output "gatewayid" {
value = "${aws_storagegateway_gateway.hmsgw.arn}"
}
The error I get is:
aws_storagegateway_cache.sgwdisk: error adding Storage Gateway cache: InvalidGatewayRequestException: The specified disk does not exist.
status code: 400, request id: fda602fd-a47e-11e8-a1f4-b383e2e2e2f6
I have attempted to hard-code the disk_id as above, or to use a variable. With the variable I don't know whether it is returned or even exists, so that could be the issue; I'm new to this.
Before creating the aws_storagegateway_cache resource, use a data source to get the disk ID. I am using the script below and it works fine.
variable "upload_disk_path" {
default = "/dev/sdb"
}
data "aws_storagegateway_local_disk" "upload_disk" {
disk_path = "${var.upload_disk_path}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
resource "aws_storagegateway_upload_buffer" "stg_upload_buffer" {
disk_id = "${data.aws_storagegateway_local_disk.upload_disk.disk_id}"
gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}
In case you are using two disks (one for the upload buffer and one for the cache), use the same code but set the default value of cache_disk_path = "/dev/sdc".
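A sketch of that two-disk variant, reusing the naming from the snippet above and assuming the cache disk shows up at /dev/sdc:
variable "cache_disk_path" {
  default = "/dev/sdc"
}

data "aws_storagegateway_local_disk" "cache_disk" {
  disk_path   = "${var.cache_disk_path}"
  gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}

resource "aws_storagegateway_cache" "stg_cache" {
  # disk_id comes from the data source, exactly as for the upload buffer
  disk_id     = "${data.aws_storagegateway_local_disk.cache_disk.disk_id}"
  gateway_arn = "${aws_storagegateway_gateway.this.arn}"
}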
If you use the AWS CLI to run aws storagegateway list-local-disks --gateway-arn [your gateway's arn] --region [gateway's region], you'll get data returned that includes the disk ID.
Then, in your example code, you replace SCSI with "${gateway_arn}:[diskID from command above]" and your cache volume will be created.
One thing I've noticed, though: when I've done this and then tried to apply the same Terraform code again, and in some cases even with a targeted deploy of a specific resource within my Terraform, it wants to redeploy the cache volume, because Terraform detects the disk ID changing to a value of "1". Passing in "1" as the value in the Terraform, however, does not seem to work.
This would also work:
variable "disk_path" {
default = "/dev/sdb"
}
provider "aws" {
alias = "primary"
access_key = var.access-key
secret_key = var.secret-key
token = var.token
region = var.region
}
resource "aws_storagegateway_gateway" "hmsgw" {
gateway_ip_address = var.gateway-ip-address
gateway_name = var.gateway-name
gateway_timezone = var.gateway-timezone
gateway_type = var.gateway-type
smb_active_directory_settings {
domain_name = var.domain-name
username = var.username
password = var.password
}
}
data "aws_storagegateway_local_disk" "sgw_disk" {
disk_path = var.disk_path
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}
resource "aws_storagegateway_cache" "sgw_cache" {
disk_id = data.aws_storagegateway_local_disk.sgw_disk.id
gateway_arn = aws_storagegateway_gateway.hmsgw.arn
provider = aws.primary
}

AWS Beanstalk Tomcat and Terraform

I'm trying to set up Tomcat using Elastic Beanstalk.
Here's my Terraform code (the bucket is created beforehand):
# Upload the JAR to bucket
resource "aws_s3_bucket_object" "myjar" {
  bucket = "${aws_s3_bucket.mybucket.id}"
  key    = "src/java-tomcat-v3.zip"
  source = "${path.module}/src/java-tomcat-v3.zip"
  etag   = "${md5(file("${path.module}/src/java-tomcat-v3.zip"))}"
}

# Define app
resource "aws_elastic_beanstalk_application" "tftestapp" {
  name        = "tf-test-name"
  description = "tf-test-desc"
}

# Define beanstalk jar version
resource "aws_elastic_beanstalk_application_version" "myjarversion" {
  name         = "tf-test-version-label"
  application  = "tf-test-name"
  description  = "My description"
  bucket       = "${aws_s3_bucket.mybucket.id}"
  key          = "${aws_s3_bucket_object.myjar.id}"
  force_delete = true
}

# Deploy env
resource "aws_elastic_beanstalk_environment" "tftestenv" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftestapp.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v3.0.0 running Tomcat 7 Java 7"

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }
  ...
}
And I end up with a very strange error saying it can't find the file in the bucket:
InvalidParameterCombination: Unable to download from S3 location
(Bucket: mybucket Key: src/java-tomcat-v3.zip). Reason: Not Found
Nevertheless, connecting to the web console and accessing my bucket, I can see the zip file is right there...
I don't get it, any help please?
PS: I tried with and without the src/
Cheers
I was recently having this same error on Terraform 0.13.
Differences between 0.13 and older versions:
The documentation appears to be out of date. For instance, under aws_elastic_beanstalk_application_version it shows
resource "aws_s3_bucket" "default" {
bucket = "tftest.applicationversion.bucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.default.id
key = "beanstalk/go-v1.zip"
source = "go-v1.zip"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.default.id
key = aws_s3_bucket_object.default.id
}
If you attempt to use this, Terraform fails on the bucket object because the "source" argument is no longer available within aws_elastic_beanstalk_application_version.
After removing the "source" property, it moved on to the next issue, which was: Error: InvalidParameterCombination: Unable to download from S3 location (Bucket: mybucket Key: mybucket/myfile.txt). Reason: Not Found
That error comes from this Terraform:
resource "aws_s3_bucket" "bucket" {
bucket = "mybucket"
}
resource "aws_s3_bucket_object" "default" {
bucket = aws_s3_bucket.bucket.id
key = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "default" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_application_version" "default" {
name = "tf-test-version-label"
application = "tf-test-name"
description = "application version created by terraform"
bucket = aws_s3_bucket.bucket.id
key = aws_s3_bucket_object.default.id
}
What Terraform ends up doing here is prepending the bucket name to the key. When you run terraform plan you see that bucket = "mybucket" and key = "mybucket/myfile.txt". The problem is that Terraform then looks in the bucket for the file "mybucket/myfile.txt" when it should ONLY be looking for "myfile.txt".
Solution
What I did was REMOVE the bucket and bucket object resources from the script and place the names in variables, as follows:
variable "sourceCodeS3BucketName" {
type = string
description = "The bucket that contains the engine code."
default = "mybucket"
}
variable "sourceCodeFilename" {
type = string
description = "The code file name."
default = "myfile.txt"
}
resource "aws_elastic_beanstalk_application" "myApp" {
name = "my-beanstalk-app"
description = "My application"
}
resource "aws_elastic_beanstalk_application_version" "v1_0_0" {
name = "my-application-v1_0_0"
application = aws_elastic_beanstalk_application.myApp.name
description = "Application v1.0.0"
bucket = var.sourceCodeS3BucketName
key = var.sourceCodeFilename
}
By directly using the name of the file and the bucket, Terraform does not prepend the bucket name to the key, and it can find the file just fine.
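If you prefer to keep the bucket object managed by Terraform, another possibility (a sketch only, reusing the resource names from the failing example above, not something verified in this thread) is to reference the object's key attribute instead of its id, so the bucket name can never be folded into the key:
resource "aws_s3_bucket_object" "default" {
  bucket = aws_s3_bucket.bucket.id
  key    = "myfile.txt"
  source = "myfile.txt"
}

resource "aws_elastic_beanstalk_application_version" "default" {
  name        = "tf-test-version-label"
  application = aws_elastic_beanstalk_application.default.name
  description = "application version created by terraform"
  bucket      = aws_s3_bucket.bucket.id
  # key, not id: only the object key is handed to Beanstalk
  key         = aws_s3_bucket_object.default.key
}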