Terraform 0.9.6 remote config outdated

I have been trying to update some of my Terraform scripts from version 0.6.13 to 0.9.6. In my scripts I previously had
terraform remote config -backend=s3 \
-backend-config="bucket=my_bucker" \
-backend-config="access_key=my_access_key" \
-backend-config="secret_key=my_secret" \
-backend-config="region=my_region" \
-backend-config="key=my_state_key"
and then
terraform remote pull
which pulled the remote state from AWS. Running terraform apply would then show exactly which resources needed to be updated or created, based on the remote tfstate stored in an S3 bucket.
Now the issue I'm facing is that the remote pull and remote config commands are deprecated and no longer work.
I tried to follow the instructions at https://www.terraform.io/docs/backends/types/remote.html,
but they weren't much help.
From what I understand, I would have to run an init first with a partial configuration, which would presumably pull the remote state automatically, like so:
terraform init -var-file="terraform.tfvars" \
-backend=true \
-backend-config="bucket=my_bucker" \
-backend-config="access_key=my_access_key" \
-backend-config="secret_key=my_secret" \
-backend-config="region=my_region" \
-backend-config="key=my_state_key"
However, it doesn't actually pull the remote state the way it did before.
Would anyone be able to point me in the right direction?

You don't need terraform remote pull any more. Terraform will now refresh the state automatically before operations such as plan and apply, controlled by the -refresh flag, which defaults to true.
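For example, under the newer workflow the remote state is read and refreshed as part of the usual commands (a minimal sketch; -refresh=true is the default and can be omitted):
terraform plan -refresh=true
terraform apply

# an explicit one-off refresh of the state is still available
terraform refresh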

Apparently I had to add a minimal backend configuration such as
terraform {
  backend "s3" {
  }
}
in my main.tf file for it to work.
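To tie it together, a sketch of how that empty block pairs with the partial configuration from the question (the bucket, keys, region and state key below are placeholders):
terraform init \
  -backend-config="bucket=my-bucket" \
  -backend-config="access_key=my-access-key" \
  -backend-config="secret_key=my-secret-key" \
  -backend-config="region=my-region" \
  -backend-config="key=my-state-key"
After this init succeeds, plan and apply read from (and write to) the S3-stored state directly.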

Related

Using credential process for IAM Roles Anywhere in a Spring Boot application

I have a use case where I need to access an SNS topic from outside AWS. We planned to use https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/ as it seems to be the right fit.
But I'm unable to get this working correctly. I followed the link above exactly; the contents of my .aws/config file are
credential_process = ./aws_signing_helper credential-process
--certificate /path/to/certificate.pem
--private-key /path/to/private-key.pem
--trust-anchor-arn <TA_ARN>
--profile-arn <PROFILE_ARN>
--role-arn <ExampleS3WriteRole_ARN>
But my Spring Boot application throws an error stating that it could not fetch the credentials to connect to AWS. Kindly assist.
I found the easiest thing to do was to create a separate script for credential_process to target; this isn't strictly necessary, I just found it easier.
So create a script along the lines of:
#! /bin/bash
# raw_helper.sh
/path/to/aws_signing_helper credential-process \
--certificate /path/to/cert.crt \
--private-key /path/to/key.key \
--trust-anchor-arn <TA_ARN> \
--profile-arn <Roles_Anywhere_Profile_ARN> \
--role-arn <IAM_Role_ARN>
The key thing I found is that most places (including the AWS documentation) tell you to use the ~/.aws/config file and declare the profile there. This didn't seem to work for me, but when I added the profile to my ~/.aws/credentials file it did work. Assuming you've created the helper script, it would look like this:
# ~/.aws/credentials
[raw_profile]
credential_process = /path/to/raw_helper.sh
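As a quick sanity check (a sketch; raw_profile and the jar name are placeholders, and this assumes the AWS CLI is installed), you can verify the profile outside the application and then point the Spring Boot process at it via the standard AWS_PROFILE variable, which the default SDK credential chain honours:
chmod +x /path/to/raw_helper.sh
aws sts get-caller-identity --profile raw_profile

# run the app with the same profile so the SDK picks up the credential_process
export AWS_PROFILE=raw_profile
java -jar my-app.jar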

Dataproc Serverless - how to set javax.net.ssl.trustStore property to fix java.security.cert.CertPathValidatorException

Trying to use google-cloud-dataproc-serverless with the spark.jars.repositories option:
gcloud beta dataproc batches submit pyspark sample.py --project=$GCP_PROJECT --region=$MY_REGION --properties \
spark.jars.repositories='https://my.repo.com:443/artifactory/my-maven-prod-group',\
spark.jars.packages='com.spark.mypackage:my-module-jar',spark.dataproc.driverEnv.javax.net.ssl.trustStore=.,\
spark.driver.extraJavaOptions='-Djavax.net.ssl.trustStore=. -Djavax.net.debug=true' \
--files=my-ca-bundle.crt
This gives the following exception:
javax.net.ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException
I tried to set the javax.net.ssl.trustStore property using spark.dataproc.driverEnv/spark.driver.extraJavaOptions, but it's not working.
Is it possible to fix this issue by setting the right config properties and values, or is a custom image with pre-installed certificates the only solution?
You need to have a Java trust store with your cert imported, then submit the batch with:
--files=my-trust-store.jks \
--properties spark.driver.extraJavaOptions='-Djavax.net.ssl.trustStore=./my-trust-store.jks',spark.executor.extraJavaOptions='-Djavax.net.ssl.trustStore=./my-trust-store.jks'
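A sketch of building such a trust store from the CA bundle in the question using the JDK's keytool (the alias and store password here are arbitrary placeholders):
keytool -importcert \
  -alias my-ca \
  -file my-ca-bundle.crt \
  -keystore my-trust-store.jks \
  -storepass changeit \
  -noprompt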

How to create a connector in Airflow that is of an external-provider type (like google-cloud-platform) with the Airflow REST API

I'm trying to automate the creation of a connector in Airflow via a GitHub Action, but since it is an external provider, the payload that needs to be sent to the Airflow REST API doesn't work, and I didn't find any documentation on how to do it.
So here is the payload I'm trying to send:
PAYLOAD = {
    "connection_id": CONNECTOR,
    "conn_type": "google_cloud_platform",
    "extra": json.dumps({
        "google_cloud_platform": {
            "keyfile_dict": open(CONNECTOR_SERVICE_ACCOUNT_FILE, "r").read(),
            "num_retries": 2,
        }
    })
}
This payload follows the Airflow documentation and the fields shown on the Airflow UI's "create connection" page.
But I receive no error (HTTP 200), and the connector is created, yet it doesn't have the settings I tried to configure.
I can confirm that creating it through the UI works.
Does anyone have a solution, or a document that describes the exact payload I need to send to the Airflow REST API? Or maybe I'm missing something.
Airflow version : 2.2.3+composer
Cloud Composer version (GCP) : 2.0.3
Github runner version : 2.288.1
Language : Python
Thanks guys and feel free to contact me for further questions.
Bye
@vdolez was right; it's kind of a pain to format the payload into the exact format the Airflow REST API wants. It's something like this:
"{\"extra__google_cloud_platform__key_path\": \"\",
\"extra__google_cloud_platform__key_secret_name\": \"\",
\"extra__google_cloud_platform__keyfile_dict\": \"{}\",
\"extra__google_cloud_platform__num_retries\": 5,
\"extra__google_cloud_platform__project\": \"\",
\"extra__google_cloud_platform__scope\": \"\"}"
And when you need to nest a dictionary inside some of these fields, it's not worth the time and effort. But in case someone wants to know: you have to escape every special character.
I changed my workflow to notify the relevant users to create the connector manually after my pipeline succeeds.
I will try to contact Airflow/Cloud Composer support to see whether we can get a feature for better formatting.
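For anyone who still wants to go through the REST API, here is a minimal sketch of POSTing the double-encoded extra field with curl (the host, credentials and connection id are placeholders; a real keyfile_dict would need its quotes escaped the same way):
curl -X POST "https://<AIRFLOW_HOST>/api/v1/connections" \
  -H "Content-Type: application/json" \
  --user "<USER>:<PASSWORD>" \
  -d @- <<'EOF'
{
  "connection_id": "my_gcp_conn",
  "conn_type": "google_cloud_platform",
  "extra": "{\"extra__google_cloud_platform__keyfile_dict\": \"{}\", \"extra__google_cloud_platform__num_retries\": 2}"
}
EOF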
You might be running into encoding/decoding issues while sending data over the web.
Since you're using Composer, it might be a good idea to use the Composer CLI to create the connection.
Here's how to run Airflow commands in Composer:
gcloud composer environments run ENVIRONMENT_NAME \
--location LOCATION \
SUBCOMMAND \
-- SUBCOMMAND_ARGUMENTS
Here's how to create a connection with the native Airflow commands:
airflow connections add 'my_prod_db' \
--conn-type 'my-conn-type' \
--conn-login 'login' \
--conn-password 'password' \
--conn-host 'host' \
--conn-port 'port' \
--conn-schema 'schema' \
...
Combining the two, you'll get something like:
gcloud composer environments run ENVIRONMENT_NAME \
--location LOCATION \
connections \
-- add 'my_prod_db' \
--conn-type 'my-conn-type' \
--conn-login 'login' \
--conn-password 'password' \
--conn-host 'host' \
--conn-port 'port' \
--conn-schema 'schema' \
...
You could run this in a Docker image where gcloud is already installed.
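Applied to the google_cloud_platform type from the question, a sketch might look like this (the environment name, location, connection id and extra values are placeholders; the real keyfile JSON would go, quote-escaped, into keyfile_dict):
gcloud composer environments run my-composer-env \
  --location us-central1 \
  connections \
  -- add 'my_gcp_conn' \
  --conn-type 'google_cloud_platform' \
  --conn-extra '{"extra__google_cloud_platform__keyfile_dict": "{}", "extra__google_cloud_platform__num_retries": 2}'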

How to download jar from artifact registry (GCP)?

I have a Maven Artifact Registry and am able to add the dependency in pom.xml and get the jar.
I have another use case where I would like to download only the jar using the CLI, something you can easily do with other external Maven repos, e.g. curl https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-spark-runtime/0.7.0-incubating/iceberg-spark-runtime-0.7.0-incubating.jar --output temp.jar
I don't see any instructions about how to do this.
I needed this too.
I configured a service account following the GCP guide.
Then I executed the following command to get the basic-auth credentials:
gcloud artifacts print-settings gradle \
[--project=PROJECT] \
[--repository=REPOSITORY] \
[--location=LOCATION] \
--json-key=KEY-FILE \
[--version-policy=VERSION-POLICY] \
[--allow-snapshot-overwrites]
In the output you have the artifactRegistryMavenSecret.
Finally, you get your artifact with:
curl -L -u _json_key_base64:{{ artifactRegistryMavenSecret }} https://{{ region }}-maven.pkg.dev/{{ projectId }}/{{ repository }}/path/of/artifact/module/{{ version }}/app-{{ version }}.jar -o file.jar
It seems that this feature does not exist yet for Artifact Registry, based on this open feature request (which currently has no ETA). However, you can implement a Cloud Build automation that not only saves your built artifact in Artifact Registry but also stores it in Google Cloud Storage or another storage repository, so you can easily access the JARs (since Cloud Storage supports direct downloads).
In order to do this, you would need to integrate Cloud Build with Artifact Registry. The documentation page has instructions for using Maven projects with Cloud Build and Artifact Registry. In addition, you can configure Cloud Build to store built artifacts in Cloud Storage.
Both of these integrations are configured through a Cloud Build configuration file. In this file, the steps for building a project are defined, including integrations with other serverless services. This integration would involve defining a target Maven repository:
steps:
- name: gcr.io/cloud-builders/mvn
  args: ['deploy']
And a location to deploy the artifacts into Cloud Storage:
artifacts:
  objects:
    location: [STORAGE_LOCATION]
    paths: [[ARTIFACT_PATH],[ARTIFACT_PATH], ...]
In addition to @Nicolas Roux's answer:
artifactRegistryMavenSecret is basically a base64 encoding of the service account JSON key.
So instead of running gcloud artifacts print-settings gradle and curl -u _json_key_base64:{{ artifactRegistryMavenSecret }}, another way is to use the token from gcloud auth print-access-token directly and pass it to cURL.
For example:
1. gcloud auth activate-service-account SERVICE_ACCOUNT@DOMAIN.COM \
--key-file=/path/key.json --project=PROJECT_ID
2. curl --oauth2-bearer "$(gcloud auth print-access-token)" \
-o app-{{ version }}.jar \
-L https://{{ region }}-maven.pkg.dev/{{ projectId }}/{{ repository }}/path/of/artifact/module/{{ version }}/app-{{ version }}.jar
That way, if you're working with the Google Auth Action (google-github-actions/auth@v0) in a GitHub Actions workflow, you can easily run the curl command without needing to extract artifactRegistryMavenSecret.

Terraform backend remote config with Google Cloud Buckets

I'm running the command below and seeing the message that Terraform has been successfully initialized!
terraform init \
-backend=true \
-backend-config="bucket=terraform-remote-states" \
-backend-config="project=<<my-poject>>" \
-backend-config="path=terraform.tfstate"
However, when I run the template, it creates the state file locally instead of within GCS.
Not sure what I'm missing here. Appreciate any thoughts and help.
When you execute the listed terraform init command, it seems you don't have a backend block like the one below in any of the .tf files in that directory.
terraform {
  backend "gcs" {
    bucket  = "terraform-state"
    path    = "/terraform.tfstate"
    project = "my-project"
  }
}
None of those -backend-config arguments you're passing tell Terraform that you want the state to go into GCS.
Without an explicit backend "gcs" {} declaration as above, Terraform will default to storing state locally, which is the behaviour you're currently seeing.
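Once that block exists (an empty backend "gcs" {} block also works), a sketch of how the question's flags become a partial configuration, assuming you re-run init from the same directory (the bucket, project and path values are placeholders):
terraform init \
  -backend-config="bucket=terraform-remote-states" \
  -backend-config="project=my-project" \
  -backend-config="path=terraform.tfstate"
# subsequent plan/apply runs will then read and write the state in GCS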