terraform plan error: Argument not expected here

I am trying to create custom IAM roles for different environments in Google Cloud Platform, with different sets of permissions for dev and prod. My folder structure is as follows:
iamroles (root folder)
-- main.tf
-- variables.tf
-- customiamroles (folder)
---- main.tf
-- environments (folder)
---- nonprod
------ main.tf
------ variables.tf
---- prod
------ main.tf
------ variables.tf
The main.tf in the root folder has the below code:
iamroles/main.tf
/*
This is the 'main' Terraform file. It calls the child modules to create roles in the corresponding environments.
*/
provider "google" {
  credentials = file("${var.project_id}.json")
  project     = var.project_id
  region      = var.location
}

module "nonprod" {
  source = "./environments/nonprod"
}
iamroles/variables.tf
variable "project_id"{
type = string
}
variable "location" {
type = string
default = "europe-west3"
}
iamroles/environments/nonprod/main.tf
module "nonprod" {
role_details = [{
role_id = "VS_DEV_NONPROD_CLOUDSQL",
title = "VS DEVELOPER NON PROD CLOUD SQL",
description = "Role which provides limited view and update access to Cloud SQL",
permissions = var.developer_nonprod_sql
},
{
role_id = "VS_DEV_NONPROD_APPENGINE",
title = "VS DEVELOPER NON PROD APPENGINE",
description = "Appengine access for developers for non production environments to View, Create and Delete versions, View and Delete instances, View and Run cron jobs",
permissions = var.developer_nonprod_appengine
}]
source = "../../customiamroles"
}
iamroles/environments/nonprod/variables.tf
variable "role_details" {
type = list(object({
role_id = string
title = string
description = string
permissions = list(string)
}))
}
variable "developer_nonprod_sql" {
default = ["cloudsql.databases.create","cloudsql.databases.get"]
}
variable "developer_nonprod_appengine" {
default = ["appengine.applications.get","appengine.instances.get","appengine.instances.list","appengine.operations.*","appengine.services.get","appengine.services.list"]
}
iamroles/customiamroles/main.tf
# Creating custom roles
resource "google_project_iam_custom_role" "vs-custom-roles" {
  for_each    = var.role_details
  role_id     = each.value.role_id
  title       = each.value.title
  description = each.value.description
  permissions = each.value.permissions
}
While executing terraform plan from the iamroles folder, I am getting the exception below.
I am new to Terraform (I have been learning for the past two days) and could use some help understanding what I am doing wrong.
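A likely cause, judging from the configuration shown: environments/nonprod/main.tf passes role_details to the customiamroles module, but customiamroles never declares a matching variable, and Terraform reports unknown module arguments with exactly this "not expected here" error. A sketch of a fix follows (the new variables.tf for customiamroles and the for_each conversion are assumptions; for_each also rejects plain lists, so the list is keyed by role_id):

# iamroles/customiamroles/variables.tf (assumed new file): declare the
# argument so the module call in environments/nonprod/main.tf is accepted.
variable "role_details" {
  type = list(object({
    role_id     = string
    title       = string
    description = string
    permissions = list(string)
  }))
}

# iamroles/customiamroles/main.tf: for_each requires a map or a set of
# strings, so convert the list of objects into a map keyed by role_id.
resource "google_project_iam_custom_role" "vs-custom-roles" {
  for_each    = { for role in var.role_details : role.role_id => role }
  role_id     = each.value.role_id
  title       = each.value.title
  description = each.value.description
  permissions = each.value.permissions
}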

Related

Cannot create Cloud Composer

I am getting the error below while provisioning Composer via Terraform.
Error: Error waiting to create Environment: Error waiting to create Environment: Error waiting for Creating Environment: error while retrieving operation: Get "https://composer.googleapis.com/v1beta1/projects/aayush-terraform/locations/us-central1/operations/ee459492-abb0-4646-893e-09d112219d79?alt=json&prettyPrint=false": write tcp 10.227.112.165:63811->142.251.12.95:443: write: broken pipe. An initial environment was or is still being created, and clean up failed with error: Getting creation operation state failed while waiting for environment to finish creating, but environment seems to still be in 'CREATING' state. Wait for operation to finish and either manually delete environment or import "projects/aayush-terraform/locations/us-central1/environments/example-composer-env" into your state.
Below is the code snippet:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.0"
    }
  }
}
variable "gcp_region" {
type = string
description = "Region to use for GCP provider"
default = "us-central1"
}
variable "gcp_project" {
type = string
description = "Project to use for this config"
default = "aayush-terraform"
}
provider "google" {
region = var.gcp_region
project = var.gcp_project
}
resource "google_service_account" "test" {
account_id = "composer-env-account"
display_name = "Test Service Account for Composer Environment"
}
resource "google_project_iam_member" "composer-worker" {
role = "roles/composer.worker"
member = "serviceAccount:${google_service_account.test.email}"
}
resource "google_compute_network" "test" {
name = "composer-test-network"
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "test" {
name = "composer-test-subnetwork"
ip_cidr_range = "10.2.0.0/16"
region = "us-central1"
network = google_compute_network.test.id
}
resource "google_composer_environment" "test" {
name = "example-composer-env"
region = "us-central1"
config {
node_count = 3
node_config {
zone = "us-central1-a"
machine_type = "n1-standard-1"
network = google_compute_network.test.id
subnetwork = google_compute_subnetwork.test.id
service_account = google_service_account.test.name
}
}
}
NOTE: The Composer environment is getting created even after this error is thrown, and I am provisioning it via a service account which has been given owner access.
I had the same problem and I solved it by giving the composer.operations.get permission to the service account which is provisioning the Composer environment.
This permission is part of the Composer Administrator role.
So that future operations like updates or deletion through Terraform also work, I think it's better to use the role rather than the single permission.
Or, if you want to work toward least privilege, you can start with the role, then remove the permissions you think you won't need and test your Terraform code.
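As a sketch, that grant could also be managed in Terraform itself (the service account address below is a placeholder, and roles/composer.admin is the Composer Administrator role mentioned above):

# Hypothetical example: grant the provisioning service account the
# Composer Administrator role, which includes composer.operations.get.
resource "google_project_iam_member" "composer_admin" {
  project = var.gcp_project
  role    = "roles/composer.admin"
  member  = "serviceAccount:provisioner@aayush-terraform.iam.gserviceaccount.com" # placeholder
}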

error creating SageMaker project in terraform because service catalog product "does not exist or access was denied"

I have a product with id: prod-xxxxxxxxxxxx. I have checked that it exists in AWS Service Catalog. However, when I try to create an aws_sagemaker_project using Terraform:
resource "aws_sagemaker_project" "test-project" {
project_name = "test-project"
service_catalog_provisioning_details {
product_id = "prod-xxxxxxxxxxxx"
}
}
I get the error: "error creating SageMaker project: ValidationException: Product prod-xxxxxxxxxxxx does not exist or access was denied". How do I ensure that I can access this product?
Do I need a launch constraint for this product, and to grant access to the portfolio to end users as described here: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-templates-custom.html?
You need an aws_servicecatalog_product resource in your Terraform state to refer to for the product_id.
resource "aws_servicecatalog_product" "example" {
name = "example"
owner = [aws_security_group.example.id]
type = aws_subnet.main.id
provisioning_artifact_parameters {
template_url = "https://s3.amazonaws.com/cf-templates-ozkq9d3hgiq2-us-east-1/temp1.json"
}
tags = {
foo = "bar"
}
}
resource "aws_sagemaker_project" "example" {
project_name = "example"
service_catalog_provisioning_details {
product_id = aws_servicecatalog_product.example.id
}
}
This error means that you haven't granted access to the Service Catalog portfolio to your Terraform IAM principal (user/role). Basically you are unable to "see" the product, based on the end-user section of the Service Catalog portfolio.
You can fix this by adding the following Service Catalog resources:
resource "aws_servicecatalog_principal_portfolio_association" "project" {
portfolio_id = aws_servicecatalog_portfolio.portfolio.id
principal_arn = "${ROLE_ARN}"
}
resource "aws_servicecatalog_portfolio" "portfolio" {
name = "My App Portfolio"
description = "List of my organizations apps"
provider_name = "Brett"
}
resource "aws_servicecatalog_product_portfolio_association" "example" {
portfolio_id = aws_servicecatalog_portfolio.portfolio.id
product_id = "prod-xxxxxxxxxxxx"
}

Issue while deploying a GCP cloud function

I have the following issue while deploying a cloud function (I am completely new to GCP and Terraform).
I am trying to deploy a cloud function through Terraform, but when I deploy it, it destroys an existing cloud function which was already deployed in GCP (deployed by a colleague), even though the cloud function name, bucket object name, and archive file name are different (only the bucket name and project ID are the same).
It looks like it is picking up the state of the existing cloud function which is already deployed.
Is there any way to keep the existing state unaffected?
Code snippet (as mentioned above, there is already one cloud function deployed with the same project ID and bucket):
main.tf:
provider "google" {
project = "peoject_id"
credentials = "cdenetialfile"
region = "some-region"
}
locals {
timestamp = formatdate("YYMMDDhhmmss", timestamp())
root_dir = abspath("./app/")
}
data "archive_file" "archive" {
type = "zip"
output_path = "/tmp/function-${local.timestamp}.zip"
source_dir = local.root_dir
}
resource "google_storage_bucket_object" "object_archive" {
name = "archive-${local.timestamp}.zip"
bucket = "dev-bucket-tfstate"
source = "/tmp/function-${local.timestamp}.zip"
depends_on = [data.archive_file.archive]
}
resource "google_cloudfunctions_function" "translator_function" {
name = "Cloud_functionname"
available_memory_mb = 256
timeout = 61
runtime = "java11"
source_archive_bucket = "dev-bucket-tfstate"
source_archive_object = google_storage_bucket_object.object_archive.name
entry_point = "com.test.controller.myController"
event_trigger {
event_type = "google.pubsub.topic.publish"
resource = "topic_name"
}
}
backend.tf
terraform {
  backend "gcs" {
    bucket      = "dev-bucket-tfstate"
    credentials = "credentialfile"
  }
}
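One plausible explanation, given this configuration: the gcs backend sets no prefix, so every configuration that points at the dev-bucket-tfstate bucket reads and writes the same default state object, and Terraform then plans to destroy the resources recorded there by the colleague's deployment. A sketch of a separation, assuming each deployment gets its own state path (the prefix value is only an example):

terraform {
  backend "gcs" {
    bucket = "dev-bucket-tfstate"
    # A distinct prefix per deployment keeps the state objects separate,
    # so one configuration cannot clobber another's resources.
    prefix      = "terraform/state/translator-function" # example value
    credentials = "credentialfile"
  }
}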

googleapi: Error 400: Precondition check failed., failedPrecondition while creating Cloud Composer Environment through Terraform

I'm trying to create a Cloud Composer environment through Terraform and am getting this error:
googleapi: Error 400: Precondition check failed., failedPrecondition while creating Cloud Composer Environment through Terraform
The service account of the VM from which I'm trying to create the composer environment has owner permissions in the GCP project.
I have tried the same composer configuration from the GCP console and the environment got created without any issues.
I have tried disabling the Cloud Composer API and enabling it once again, yet no solution.
Initially, on the very first terraform apply, it tried to create the composer environment but failed with a version error, so I changed the image version of composer. Now I'm facing this issue. Can anyone help?
Error message from terminal
composer/main.tf
resource "google_composer_environment" "etl_env" {
provider = google-beta
name = var.env_name
region = var.region
config {
node_count = 3
node_config {
zone = var.zone
machine_type = var.node_machine_type
network = var.network
subnetwork = var.app_subnet_selflink
ip_allocation_policy {
use_ip_aliases = true
}
}
software_config {
image_version = var.software_image_version
python_version = 3
}
private_environment_config {
enable_private_endpoint = false
}
database_config {
machine_type = var.database_machine_type
}
web_server_config {
machine_type = var.web_machine_type
}
}
}
composer/variables.tf
variable "app_subnet_selflink" {
type = string
description = "App Subnet Selflink"
}
variable "region" {
type = string
description = "Region"
default = "us-east4"
}
variable "zone" {
type = string
description = "Availability Zone"
default = "us-east4-c"
}
variable "network" {
type = string
description = "Name of the network"
}
variable "env_name" {
type = string
default = "composer-etl"
description = "The name of the composer environment"
}
variable "node_machine_type" {
type = string
default = "n1-standard-1"
description = "The machine type of the worker nodes"
}
variable "software_image_version" {
type = string
default = "composer-1.15.2-airflow-1.10.14"
description = "The image version used in the software configurations of composer"
}
variable "database_machine_type" {
type = string
default = "db-n1-standard-2"
description = "The machine type of the database instance"
}
variable "web_machine_type" {
type = string
default = "composer-n1-webserver-2"
description = "The machine type of the web server instance"
}
Network and Subnetwork are referenced from another module and they are correct.
The issue is likely with the master_ipv4_cidr_block range. If left blank, the default value of 172.16.0.0/28 is used. As you already created an environment manually, that range is already in use, so use some other range.
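As a sketch, the range can be overridden inside private_environment_config (the CIDR below is only an example of a non-default range):

resource "google_composer_environment" "etl_env" {
  # ... rest of the configuration as above ...
  config {
    # ...
    private_environment_config {
      enable_private_endpoint = false
      # Example non-default range, avoiding the 172.16.0.0/28 block
      # already taken by the manually created environment.
      master_ipv4_cidr_block = "172.16.64.0/28"
    }
  }
}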

How to make a GCP cloud function public using Terraform

I will start by saying I am very new to both GCP and Terraform, so I hope there is a simple answer that I have just overlooked.
I am trying to create a GCP cloud function and then make it public using Terraform. I am able to create the function but not make it public, despite closely following the documentation's example: https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
I receive the error "googleapi: Error 403: Permission 'cloudfunctions.functions.setIamPolicy' denied on resource ... (or resource may not exist)" when the google_cloudfunctions_function_iam_member resource is reached.
How can I make this function public? Does it have something to do with the account/api key I am using for credentials to create all these resources?
Thanks in advance.
my main.tf file:
provider "google" {
project = "my-project"
credentials = "key.json" #compute engine default service account api key
region = "us-central1"
}
terraform {
backend "gcs" {
bucket = "manually-created-bucket"
prefix = "terraform/state"
credentials = "key.json"
}
}
# create the storage bucket for our scripts
resource "google_storage_bucket" "source_code" {
name = "test-bucket-lh05111992"
location = "us-central1"
force_destroy = true
}
# zip up function source code
data "archive_file" "my_function_script_zip" {
type = "zip"
source_dir = "../source/scripts/my-function-script"
output_path = "../source/scripts/my-function-script.zip"
}
# add function source code to storage
resource "google_storage_bucket_object" "my_function_script_zip" {
name = "index.zip"
bucket = google_storage_bucket.source_code.name
source = "../source/scripts/my-function-script.zip"
}
#create the cloudfunction
resource "google_cloudfunctions_function" "function" {
name = "send_my_function_script"
description = "This function is called in GTM. It sends a users' google analytics id to BigQuery."
runtime = "nodejs10"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.source_code.name
source_archive_object = google_storage_bucket_object.my_function_script_zip.name
trigger_http = true
entry_point = "handleRequest"
}
# IAM entry for all users to invoke the function
resource "google_cloudfunctions_function_iam_member" "invoker" {
project = google_cloudfunctions_function.function.project
region = "us-central1"
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "allUsers"
}
It seems the only problem with that example from the Terraform site is the "Cloud Functions IAM resources", which were modified in Nov 2019; now you have to specify these resources as explained here. For your use case (a public cloud function) I'd recommend you follow this configuration and just change the "members" attribute to "allUsers", so it'd be something like this:
resource "google_cloudfunctions_function_iam_binding" "binding" {
project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
members = [
"allUsers",
]
}
Finally, you can give it a test by modifying the functions you've already created through the "Try this API" panel of the API reference; enter the proper resource and a request body like the one below (make sure to enter the "resource" parameter correctly):
{
  "policy": {
    "bindings": [
      {
        "members": [
          "allUsers"
        ],
        "role": "roles/cloudfunctions.invoker"
      }
    ]
  }
}
In addition to adjusting the IAM roles as @chinoche suggested, I also discovered that I needed to modify the service account I was using to give it project owner permissions (I guess the default one I was using didn't have them). I updated my key.json and it finally worked.
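For reference, a sketch of such a grant in Terraform (the service account address is a placeholder; roles/owner is very broad, and a narrower role that also carries cloudfunctions.functions.setIamPolicy, such as roles/cloudfunctions.admin, may be preferable):

# Hypothetical example: give the deploying service account a role that
# includes cloudfunctions.functions.setIamPolicy.
resource "google_project_iam_member" "deployer" {
  project = "my-project"
  role    = "roles/cloudfunctions.admin" # or "roles/owner", as used above
  member  = "serviceAccount:deployer@my-project.iam.gserviceaccount.com" # placeholder
}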