SOPS with Terraform using aliases - amazon-web-services

I have a working SOPS solution that encrypts files using one AWS account's (aws_sops) KMS key and then deploys the secrets to another AWS account's (aws_secrets) Secrets Manager.
This is done by connecting to aws_sops, having the .sops.yaml file point at its KMS key, and using a provider alias to deploy the secret.
While this works, it saves the state of the aws_secrets workspace to the aws_sops statefile, which means I can't deploy this solution to a Terraform workspace that is already hosted in the aws_secrets account.
Is it possible to switch the solution to use an alias for aws_sops and connect directly to the aws_secrets account? I don't see how to tell SOPS to use the AWS alias instead of the default provider.
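For reference, a minimal sketch of the .sops.yaml this describes (the key ARN and path regex are placeholders). Note that SOPS resolves AWS credentials through the standard SDK chain (environment variables such as AWS_PROFILE, or the aws_profile field below), not through Terraform provider blocks:

creation_rules:
  - path_regex: secrets\.json$
    kms: arn:aws:kms:eu-west-2:xxx:key/xxx
    # aws_profile tells SOPS which local AWS profile to use when
    # contacting KMS; it is independent of Terraform's providers.
    aws_profile: aws_sops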
Working solution (which I don't like):
provider "aws" {
alias = "development"
profile = "development"
}
provider "aws" {}
provider "sops" {}
terraform {
backend "s3" {
bucket = "xxx-statefile"
encrypt = true
key = "pat/terraform.tfstate"
}
}
data "sops_file" "sops-secret" {
source_file = "../secrets.json"
}
resource "aws_secretsmanager_secret" "pipeline" {
provider = aws.development
name = "service-accounts/pipeline/resource-access-pat"
recovery_window_in_days = 0
force_overwrite_replica_secret = true
}
resource "aws_secretsmanager_secret_version" "pipeline" {
provider = aws.development
secret_id = aws_secretsmanager_secret.pipeline.id
secret_string = jsonencode(
{
"pat" : data.sops_file.sops-secret.data["token"]
})
}
Failed solution 1
This was to remove the provider alias from the secrets resources and put it in the data call instead, as that is the only time/place I can see SOPS being called.
But that gets the error:
│ Error: Invalid data source
│
│ on ../data.tf line 1, in data "sops_file" "test":
│ 1: data "sops_file" "test" {
│
│ The provider hashicorp/aws does not support data source "sops_file".
which makes sense, as it is just reading a local file.
Failed solution 2
It looks like someone had a similar problem and raised a ticket: https://github.com/carlpett/terraform-provider-sops/issues/89
A possible solution was to add the role for aws_sops.
I've tried adding a role with admin permissions to KMS etc., like:
"sops": {
"kms": [
{
"arn": "arn:aws:kms:eu-west-2:xxx:key/xxx",
"role": "arn:aws:iam::xxx:role/TerraformAccountAccessRole",
"created_at": "2023-02-10T13:53:05Z",
"enc": "xx==",
"aws_profile": ""
}
and tried adding the aws_profile as well:
"sops": {
"kms": [
{
"arn": "arn:aws:kms:xxx:xxx:key/xxx",
"role": "arn:aws:iam::xxx:role/TerraformAccountAccessRole",
"created_at": "2023-02-10T13:53:05Z",
"enc": "xx==",
"aws_profile": "aws_sops"
}
but I get an error:
│ Error: Failed to get the data key required to decrypt the SOPS file.
│
│ Group 0: FAILED
│ arn:aws:kms:xxx:xxx:key/xxx: FAILED
│ - | Error creating AWS session: Failed to assume role
│ | "arn:aws:iam::xxx:role/TerraformAccountAccessRole":
│ | AccessDenied: User:
│ | arn:aws:sts::089449186373:assumed-role/AWSReservedSSO_DevOps_xxx/xxx@xxx.com
│ | is not authorized to perform: sts:AssumeRole on resource:
│ | arn:aws:iam::xxx:role/TerraformAccountAccessRole
│ | status code: 403, request id:
│ | d9327e8c-8ffc-4873-9279-112c1c8c7258
│
│ Recovery failed because no master key was able to decrypt the file. In
│ order for SOPS to recover the file, at least one key has to be successful,
│ but none were.
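As an aside (an observation, not from the thread): the AccessDenied above means the SSO role session is not trusted by TerraformAccountAccessRole. Two things generally have to hold: the target role's trust policy must trust the calling account or role, and the caller's SSO permission set must be allowed to call sts:AssumeRole. A sketch of such a trust policy, using the caller's account ID from the error (everything else is a placeholder; JSON does not allow comments, so the placeholders are noted here instead):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::089449186373:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}

With that trust in place, the "role" entry in the SOPS metadata should become assumable even with aws_profile left empty.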

Related

Creating folder under organization using terraform in GCP

I have created a folder named terraform and created a service account with Owner permission on it. I then added that service account at the organization level and granted it Owner permission.
Now I am trying to create a folder under the organization using Terraform.
# Top-level folder under an organization.
resource "google_folder" "department1" {
  parent       = "organizations/70497122"
  display_name = "department1"
}

provider "google" {
  #project = "terraform-project-0"
  #region  = "us-central1"
  credentials = file("c:/terraform/credentials/terraform-day1.json")
}
Now, as per the documentation:
## The service account used to run Terraform when creating
## a google_folder resource must have roles/resourcemanager.folderCreator
and I am getting the below error in Terraform, which says the cloudresourcemanager.googleapis.com API has to be enabled on project 1003453129743, but there is no project with the project number 1003453129743.
│ Error: Error creating folder 'department1' in 'organizations/70497122': googleapi: Error 403: Cloud Resource Manager API has not been used in project 1003453129743 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=1003453129743 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
│ Details:
│ [
│ {
│ "#type": "type.googleapis.com/google.rpc.Help",
│ "links": [
│ {
│ "description": "Google developers console API activation",
│ "url": "https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/overview?project=1003453129743"
│ }
│ ]
│ },
│ {
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "domain": "googleapis.com",
│ "metadata": {
│ "consumer": "projects/1003453129743",
│ "service": "cloudresourcemanager.googleapis.com"
│ },
│ "reason": "SERVICE_DISABLED"
│ }
│ ]
│ , accessNotConfigured
│
│ with google_folder.department1,
│ on main.tf line 5, in resource "google_folder" "department1":
│ 5: resource "google_folder" "department1" {
Now, please help solve the questions below so I can create the folder under the organization:
How do I assign roles/resourcemanager.folderCreator to the service account at the organization level?
Why is there this misleading error ("cloudresourcemanager.googleapis.com service disabled for projects/1003453129743") when there is no project with this number?
Because of these errors I am not able to create a folder under the organization using Terraform.
I am using terraform1.3.4.exe.
First, to assign a role at the organization level, select the organization in the project selector and then open IAM.
Second, the number in the error is the project number, not the project ID.
To grant organization access using Terraform, refer to the documentation below:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_organization_iam
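Based on that documentation, a minimal sketch of the binding (the service account email is a placeholder; the org_id comes from the question):

resource "google_organization_iam_member" "folder_creator" {
  org_id = "70497122"
  role   = "roles/resourcemanager.folderCreator"
  # Placeholder email; substitute your actual service account.
  member = "serviceAccount:terraform@terraform-project-0.iam.gserviceaccount.com"
}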
Enabling the Cloud Resource Manager API resolved the issue.
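If you prefer the CLI, something like this should enable it; note the error identifies the project Terraform's credentials belong to by project number rather than project ID (the ID below is a placeholder):

gcloud services enable cloudresourcemanager.googleapis.com --project=terraform-project-0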

gcloud cli authentication with google workspace scopes not allowed - "this app is blocked"

I am attempting to deploy some GCP and Google Workspace resources via Terraform. I am using a GCP identity which has admin access to both GCP and Workspace. Everything is fine when I perform the actions manually in console.cloud.google.com and admin.google.com.
I can deploy the GCP resources via Terraform without issue after performing a gcloud auth application-default login and then referencing the GCP provider:
provider "google" {
project = var.project
region = "us-central1"
zone = "us-central1-c"
}
When I attempt to add in Google Workspace resources...
provider "googleworkspace" {
customer_id = var.workspace_customer_id
}
resource "googleworkspace_role" "IntegrationRole" {
name = var.common_resource_name
privileges {
privilege_name = "ORGS_RETRIEVE_PRIVILEGE_GROUP"
service_id = "..."
}
}
I get the following error.
Error: googleapi: Error 403: Request had insufficient authentication scopes.
│ Details:
│ [
│ {
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "domain": "googleapis.com",
│ "metadata": {
│ "method": "ccc.hosted.frontend.directory.v1.DirectoryRoles.Insert",
│ "service": "admin.googleapis.com"
│ },
│ "reason": "ACCESS_TOKEN_SCOPE_INSUFFICIENT"
│ }
│ ]
│
│ More details:
│ Reason: insufficientPermissions, Message: Insufficient Permission
In researching this, it seems I need to provide the appropriate scopes to perform actions in Google Workspace. So I do so...
gcloud auth application-default login \
--scopes https://www.googleapis.com/auth/admin.directory.rolemanagement,https://www.googleapis.com/auth/admin.directory.user
But when attempting to approve this I get a "This app is blocked" error.
I have attempted to turn on "less secure app settings" but the result is the same.
How can I make this work?
Thanks in advance!
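One avenue worth sketching (an assumption, not confirmed in this thread): instead of relying on gcloud application-default credentials, the googleworkspace provider can authenticate with a service account key and impersonate a Workspace admin via domain-wide delegation, requesting the scopes explicitly:

provider "googleworkspace" {
  customer_id = var.workspace_customer_id
  # Placeholder key file and admin email; the service account's client ID
  # must be granted these scopes under domain-wide delegation in the
  # Admin console for this to work.
  credentials             = "workspace-sa-key.json"
  impersonated_user_email = "admin@example.com"
  oauth_scopes = [
    "https://www.googleapis.com/auth/admin.directory.rolemanagement",
    "https://www.googleapis.com/auth/admin.directory.user",
  ]
}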

Unable to run GCP terraform commands from GitHub Actions

I have set up keyless authentication for my GitHub Actions pipeline using Workload Identity Federation by following the official tutorial.
When running a terraform init command from my pipeline I get the following error:
│ Error: Failed to get existing workspaces: querying Cloud Storage failed: Get "https://storage.googleapis.com/storage/v1/b/lws-dev-common-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=global%2Fnetworking.state%2F&prettyPrint=false&projection=full&versions=false": oauth2/google: status code 403: {
│ "error": {
│ "code": 403,
│ "message": "Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).",
│ "status": "PERMISSION_DENIED",
│ "details": [
│ {
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "reason": "IAM_PERMISSION_DENIED",
│ "domain": "iam.googleapis.com",
│ "metadata": {
│ "permission": "iam.serviceAccounts.getAccessToken"
│ }
│ }
│ ]
│ }
│ }
I have ensured that the service account I am using has the proper permissions, including:
Cloud Run Admin
Cloud Run Service Agent
Below is a snippet of my pipeline code:
- id: 'auth'
  name: 'Authenticate to Google Cloud'
  uses: 'google-github-actions/auth@v0.4.0'
  with:
    workload_identity_provider: 'projects/385050593732/locations/global/workloadIdentityPools/my-pool/providers/my-provider'
    service_account: 'lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com'

- name: Terraform Init
  working-directory: ./Terraform/QuickStartDeployments/EKSCluster
  run: terraform init
and my terraform code:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.89.0"
    }
  }

  backend "gcs" {
    bucket = "lws-dev-common-bucket"
    prefix = "global/networking.state"
  }

  required_version = ">= 0.14.9"
}

provider "google" {
  project = var.project_id
  region  = var.region
}

module "vpc" {
  source     = "../../Modules/VPC"
  project_id = var.project_id
  region     = "us-west1"
  vpc_name   = var.vpc_name
}
I ran into the same issue and was able to fix it by manually granting the service account the Service Account Token Creator role on the project IAM page.
This can also happen if your service account doesn't have permission to access the storage bucket where your Terraform state file is stored, or if your service account doesn't have the Workload Identity User role set properly.
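For the Workload Identity User part, the binding usually looks something like this (the pool, project number, and service account are taken from the pipeline snippet above; the GitHub repository is a placeholder):

gcloud iam service-accounts add-iam-policy-binding \
  lws-d-iac-sa@lefewaresolutions-poc.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/385050593732/locations/global/workloadIdentityPools/my-pool/attribute.repository/my-org/my-repo"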

Configure terraform with aws without user credentials

I am trying to configure AWS from Terraform, running Terraform from EC2, and have attached the AmazonEC2FullAccess policy to the role attached to the EC2 instance.
I don't have access and secret keys; using keys for the AWS CLI and Terraform is not allowed. I need to use the existing role to authenticate to AWS and create resources with it.
I get the below error when using the AmazonEC2FullAccess policy with EC2:
[ec2-user@ip-1*-1*-1*-2** terraform]$ terraform plan
╷
│ Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
│
│ Please see https://registry.terraform.io/providers/hashicorp/aws
│ for more information about providing credentials.
│
│ Error: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request send failed, Get "http://1**.***.***.***/latest/meta-data/iam/security-credentials/": proxyconnect tcp: dial tcp 1*.*.*.*:8***: i/o timeout
│
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 17, in provider "aws":
│ 17: provider "aws" {
│
Resource vpc file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat vpc.tf
resource "aws_vpc" "main" {
cidr_block = "1*.*.*.*/16"
}
main.tf file:
[ec2-user@ip-1*.1*.1*.*** terraform]$ cat main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.39.0"
    }
  }

  required_version = ">= 1.3.0"
}

provider "aws" {
  region = var.aws_region
  #role_arn = var.aws_role_arn
}
I also tried using role_arn in main.tf, which gives the following error:
│ Error: Unsupported argument
│
│ on main.tf line 19, in provider "aws":
│ 19: role_arn =var.aws_role_arn
│
│ An argument named "role_arn" is not expected here.
Any help is much appreciated.
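Two hedged suggestions rather than a confirmed fix. First, the AWS provider expects the role inside a nested assume_role block, which is why a top-level role_arn is rejected:

provider "aws" {
  region = var.aws_region

  # assume_role is a nested block in the AWS provider, not a
  # top-level role_arn argument.
  assume_role {
    role_arn = var.aws_role_arn
  }
}

Second, the "proxyconnect tcp ... i/o timeout" in the first error suggests an HTTP proxy configured on the instance is intercepting the call to the instance metadata service, so the instance role can never be discovered; if so, excluding the metadata endpoint from the proxy may help:

export no_proxy=169.254.169.254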

GCP - Terraform - google_project_services module breaks terraform pipeline

As per the subject title, when using this module to just delete the default network and nothing else, it breaks my Terraform pipeline entirely; even when the module is removed/commented out, I still get the same error:
resource "google_project_service" "service" {
project = var.project_id
service = "compute.googleapis.com"
disable_dependent_services = false
disable_on_destroy = false
provisioner "local-exec" {
command = "gcloud -q compute networks delete default --project=${var.project_id}"
}
}
This is the error I get (replaced my actual project id with "project_id"):
Error: Error when reading or editing Project Service project_id/compute.googleapis.com: Error disabling service "compute.googleapis.com" for project "project_id": googleapi: Error 400: The service compute.googleapis.com is depended on by the following active service(s): container.googleapis.com; Please specify disable_dependent_services=true if you want to proceed with disabling all services.
│ Help Token: Ae-hA1POavq8x9V18i7Um0cW3sx_9lXuuNzjqDzX3zZ3HEYjJ91bGelEobL22DVMdY27NCRrCtZbyE-GbagPtdmxWhdpSamwl0JJomQ4KTRUQDK5
│ Details:
│ [
│ {
│ "#type": "type.googleapis.com/google.rpc.PreconditionFailure",
│ "violations": [
│ {
│ "subject": "?error_code=100001\u0026service_name=compute.googleapis.com\u0026services=container.googleapis.com",
│ "type": "googleapis.com"
│ }
│ ]
│ },
│ {
│ "#type": "type.googleapis.com/google.rpc.ErrorInfo",
│ "domain": "serviceusage.googleapis.com",
│ "metadata": {
│ "service_name": "compute.googleapis.com",
│ "services": "container.googleapis.com"
│ },
│ "reason": "COMMON_SU_SERVICE_HAS_DEPENDENT_SERVICES"
│ }
│ ]
│ , failedPrecondition
I have had issues like this before with this module when wanting to enable GCP APIs in a newly created project with Terraform, so I just stopped using it.
Any ideas how I can fix the above?
I am running terraform init, refresh, plan, and apply; it fails with the error above at the terraform apply stage.
It seems the module/resource was still defined in the state file; removing it from state fixed it.
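If it helps, removing it from state is typically done with the resource address from the block above:

terraform state rm google_project_service.service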
Some other service (here, container.googleapis.com) is using compute.googleapis.com as a dependency, so it is stopping compute.googleapis.com from being disabled because that might affect the dependent service. Here is the reference doc.
Try it like this and it should work:
disable_dependent_services = true
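For completeness, a sketch of the adjusted block (the local-exec provisioner from the question is omitted for brevity; note HCL wants lowercase true):

resource "google_project_service" "service" {
  project                    = var.project_id
  service                    = "compute.googleapis.com"
  # Allow dependent services (e.g. container.googleapis.com) to be
  # disabled along with compute.googleapis.com.
  disable_dependent_services = true
  disable_on_destroy         = false
}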