Unable to install a VM extension via Terraform - azure-virtual-machine

I am having an issue installing a virtual machine extension (VSTS Agent) on an Azure VM with Terraform. It reports the error below:
1 error(s) occurred:
2019-05-01T13:11:47.4220106Z
2019-05-01T13:11:47.4281029Z * azurerm_virtual_machine_extension.tf-vm-erx-bussvc-ext: 1 error(s) occurred:
2019-05-01T13:11:47.4285499Z * azurerm_virtual_machine_extension.tf-vm-erx-bussvc-ext: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter" Message="The value of parameter typeHandlerVersion is invalid." Target="typeHandlerVersion"
2019-05-01T13:11:47.4286000Z
The terraform apply command-line arguments were as below:
usr/local/bin/terraform apply -var location=australiasoutheast -var interf_base_hostname=erxpreinterf -var mkconn_base_hostname=erxpremkconn -var bussvc_base_hostname=erxprebussvc -var sql_base_hostname=erxpresqldbs -var win_image_publisher=MicrosoftWindowsServer -var sql_image_publisher=MicrosoftSQLServer -var win_image_offer=WindowsServer -var sql_image_offer=SQL2014SP3-WS2012R2 -var win_2012_sku=2012-R2-Datacenter -var win_2016_sku=2016-Datacenter -var sql_sku=sqldev -var interf_vm_size=Standard_D2s_v3 -var mkconn_vm_size=Standard_D2s_v3 -var bussvc_vm_size=Standard_D2s_v3 -var sqldbs_vm_size=Standard_DS3_v2 -var interf_avset=erx-sha-pre-interf-avs-au-se -var mkconn_avset=erx-sha-pre-mkconn-avs-au-se -var bussvc_avset=erx-sha-pre-bussvc-avs-au-se -var sqldbs_avset=erx-sha-pre-sqldbs-avs-au-se -var application_nsg=erx-sha-pre-applic-nsg-au-se -var sql_nsg=erx-sha-pre-sqldbs-nsg-au-se -var username=scmadmin -var password=*** -var TF_LOG=DEBUG -var sqldbs_avset-02=erx-sha-pre-sqldbs-avs-au-se-02 -var builds_base_hostname=erxprebuilds -var builds_vm_size=Standard_B2ms -var linux_image_offer=CentOS -var linux_image_publisher=OpenLogic -var linux_sku=7.5 -var buildserver_nsg=erx-sha-pre-builds-nsg-au-se -var git_username=user.name%40companyname.com.au -var git_pat=n2kk5jmu77qxxxxxxxxxxxxxxxxxxxxxxxxxxxxx5ff33xoc3q -var git_url=azure repo url -var extension_publisher=Microsoft.VisualStudio.Services -var extension_type=TeamServicesAgent -var extension_version=1.26.0.9 -auto-approve
data.azurerm_resource_group.tf-rg-erx-external
type: "" => "TeamServicesAgent"
type_handler_version: "" => "1.26.0.9"
virtual_machine_name: "" => "erxprebussvc01"
What is the name of the correct variable?
My Azure virtual machine extension code is below:
resource "azurerm_virtual_machine_extension" "tf-vm-erx-bussvc-ext" {
name = "${var.bussvc_base_hostname}${format("%02d",count.index+1)}-EXT"
location = "${data.azurerm_resource_group.tf-rg-erx-external.location}"
resource_group_name = "${data.azurerm_resource_group.tf-rg-erx-external.name}"
virtual_machine_name = "${var.bussvc_base_hostname}${format("%02d",count.index+1)}"
publisher = "${var.extension_publisher}"
type = "${var.extension_type}"
type_handler_version = "${var.extension_version}"
settings = <<SETTINGS
{
"VstsAccountName":"https://companyname.visualstudio.com/",
"TeamProject":"Fred",
"DeploymentGroup": "eRx",
"Tags": [
"PreProdAzure","Role"
]
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"PATToken": "${var.git_pat}"
}
PROTECTED_SETTINGS
}

A workaround would be to set the type_handler_version variable to "1.0" and add:
auto_upgrade_minor_version = true
to the extension resource in your Terraform code. That type_handler_version ("1.26.0.9") is invalid, I suppose.
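For illustration, a minimal sketch of that change against the question's resource (only the two version-related arguments differ; all other arguments, settings, and protected_settings stay as they are), assuming the azurerm provider's auto_upgrade_minor_version argument:
resource "azurerm_virtual_machine_extension" "tf-vm-erx-bussvc-ext" {
  # unchanged arguments (name, location, resource_group_name, virtual_machine_name,
  # publisher, type, settings, protected_settings) omitted for brevity
  type_handler_version       = "1.0" # major.minor form; the four-part 1.26.0.9 was rejected as invalid
  auto_upgrade_minor_version = true  # let Azure pick up newer minor versions automatically
}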

Related

Importing airflow variables in a json file using gitlab ci/cd

How can I import variables using a GitLab CI/CD YAML file?
I have found how to import Airflow variables from a JSON file using the command line, but that is not helping.
You can import a JSON file as Airflow variables.
variables.json file:
{
  "feature": {
    "param1": "param1",
    "param2": "param2",
    ...
  }
}
For example, this file can be put in the following structure:
my-project
  config
    dags
      variables
        dev
          variables.json
        prd
          variables.json
You can then create a shell script to deploy this variables file to Cloud Composer, deploy_dags_config.sh:
#!/usr/bin/env bash
set -e
set -o pipefail
set -u
export FEATURE_NAME=my_feature
export ENV=dev
export COMPOSER_ENVIRONMENT=my-composer-env
export ENVIRONMENT_LOCATION=europe-west1
export GCP_PROJECT_ID=my-gcp-project
echo "### Deploying the data config variables of module ${FEATURE_NAME} to composer"
# deploy variables
gcloud composer environments storage data import \
--source config/dags/variables/${ENV}/variables.json \
--destination "${FEATURE_NAME}"/config \
--environment ${COMPOSER_ENVIRONMENT} \
--location ${ENVIRONMENT_LOCATION} \
--project ${GCP_PROJECT_ID}
gcloud beta composer environments run ${COMPOSER_ENVIRONMENT} \
--project ${GCP_PROJECT_ID} \
--location ${ENVIRONMENT_LOCATION} \
variables import \
-- /home/airflow/gcs/data/"${FEATURE_NAME}"/config/variables.json
echo "Variables of ${FEATURE_NAME} are well imported in environment ${COMPOSER_ENVIRONMENT} for project ${GCP_PROJECT_ID}"
This shell script is then used in the GitLab CI YAML file:
deploy_conf:
  image: google/cloud-sdk:416.0.0
  script:
    - . ./authentication.sh
    - . ./deploy_dags_config.sh
Your GitLab runner has to be authenticated to GCP.
In the Airflow DAG code, the variables can then be retrieved as a Dict as follows:
from typing import Dict
from airflow.models import Variable
variables: Dict = Variable.get("feature", deserialize_json=True)
This works because the root node of the variables.json file and of the imported object is feature (this name should be unique):
{
  "feature": {
    "param1": "param1",
    "param2": "param2",
    ...
  }
}

Terragrunt - dynamically add TF and TG versions to AWS default tags

I have a default tags block and would like to add new tags showing the TG and TF versions used in deployment.
I assumed this would work, but I was wrong.
locals {
  terraform_version  = "${run_cmd("terraform --version")}"
  terragrunt_version = "${run_cmd("terragrunt --version")}"
}
provider "aws" {
  default_tags {
    tags = {
      terraform_version  = local.terraform_version
      terragrunt_version = local.terragrunt_version
    }
  }
}
I'm sure there's a simple way to do this, but it eludes me.
Here's the error message:
my-mac$ terragrunt apply
ERRO[0000] Error: Error in function call
ERRO[0000] on /Users/me/git/terraform/environments/terragrunt.hcl line 8, in locals:
ERRO[0000] 8: terraform_version = "${run_cmd("terraform --version")}"
ERRO[0000]
ERRO[0000] Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Encountered error while evaluating locals in file /Users/me/git/terraform/environments/terragrunt.hcl
ERRO[0000] /Users/me/git/terraform/environments/terragrunt.hcl:8,31-39: Error in function call; Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
The run_cmd function takes the command to run and its arguments as separate parameters. Your example tries to run a command literally named "terraform --version" rather than terraform with the --version argument. You should update your code like the following:
locals {
  terraform_version  = "${run_cmd("terraform", "--version")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Building on jordanm's good work, I found the TG version was fine, but I needed to trim some verbosity from the TF output for it to be usable as an AWS tag.
locals {
  terraform_version  = "${run_cmd("/bin/bash", "-c", "terraform --version | sed 1q")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Good work everybody!
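For anyone wiring this up end to end: locals defined in terragrunt.hcl are not visible to the Terraform code itself, so one way to get them into the provider's default_tags is to render the provider with a Terragrunt generate block. A minimal sketch, assuming that pattern (the block name "provider" and the provider.tf path are illustrative):
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  default_tags {
    tags = {
      # interpolated by Terragrunt when provider.tf is generated
      terraform_version  = "${local.terraform_version}"
      terragrunt_version = "${local.terragrunt_version}"
    }
  }
}
EOF
}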

Issue while calling Bash script using External data source in Terraform

I have an external data source that calls a bash script.
main.tf:
resource "aws_ami_from_instance" "QA-ami" {
name = "QA-ami"
source_instance_id = "i-00f4*****75**a"
}
resource "aws_instance" "QA-server-via-ami" {
ami = aws_ami_from_instance.QA-ami.id
instance_type = var.qa_instance_type
subnet_id = var.qa_subnet_id
key_name = var.qa_key_name
}
data "external" "instance_status" { //line 38
program = ["bash", "${path.module}/check_instance_status.sh"]
query = {
id = aws_instance.QA-server-via-ami.id
}
}
output "test" {
value = data.external.instance_status.result
}
Bash script:
#!/bin/bash
set -e
eval "$(jq -r '@sh "INSTANCE_ID=\(.id)"')"
sleep 600
status=$(aws ec2 describe-instance-status --instance-ids ${INSTANCE_ID} --output json --query 'InstanceStatuses[0]')
instance_status=$(echo ${status} | jq -r '.InstanceStatus.Details[0].Status')
system_status=$(echo ${status} | jq -r '.SystemStatus.Details[0].Status')
jq -n --arg inst_status "$instance_status" \
      --arg sys_status "$system_status" \
      '{"instance_status":$inst_status,"system_status":$sys_status}'
But when I run terraform apply, I get the error below:
Error: failed to execute "bash": bash: ./check_instance_status.sh: No such file or directory
on main.tf line 38, in data "external" "instance_status":
38: data "external" "instance_status" {
My bash script is present at /check_instance_status.sh, yet I am still getting the error.
Please assist me.
It's probably just a path problem; I'm assuming this is in a submodule? Then try path.root, like this: program = ["bash", "${path.root}/check_instance_status.sh"]
Also make sure that check_instance_status.sh is executable with chmod +x check_instance_status.sh and that it runs correctly on the command line.
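For context, a sketch of the data block with that change applied (everything else as in the question):
data "external" "instance_status" {
  # path.root resolves to the root module's directory, where the script lives
  program = ["bash", "${path.root}/check_instance_status.sh"]
  query = {
    id = aws_instance.QA-server-via-ami.id
  }
}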
Alternatively, add the complete path directly in
program = ["bash", "/path/to/check_instance_status.sh"]

Serverspec test fail when i run its pipeline from other pipeline

I'm running 3 pipelines in Jenkins (CI, CD, CDP). When I run the CI pipe, the final stage is a trigger that activates the CD (Continuous Deployment) pipe; this receives an APP_VERSION parameter from the CI (Continuous Integration) pipe, deploys an instance with Packer, and runs the Serverspec tests, but the Serverspec test fails.
However, the demo-app is installed via SaltStack.
The strange thing is that when I run the CD pipe and pass the APP_VERSION parameter manually, it WORKS!!
This is the final stage of the CI pipeline:
stage "Trigger downstream"
echo 'parametro'
def versionApp = sh returnStdout: true, script:"echo \$(git rev-parse --short HEAD) "
build job: "demo-pipeCD", parameters: [[$class: "StringParameterValue", name: "APP_VERSION", value: "${versionApp}"]], wait: false
}
I have passed the sbin PATH to Serverspec and it does not work.
EDIT: I am adding the code of the test.
require 'spec_helper'
versionFile = open('/tmp/APP_VERSION')
appVersion = versionFile.read.chomp
describe package("demo-app-#{appVersion}") do
  it { should be_installed }
end
Also, I am adding the job pipeline:
#!groovy
node {
  step([$class: 'WsCleanup'])
  stage "Checkout Git repo"
  checkout scm
  stage "Checkout additional repos"
  dir("pipeCD") {
    git "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/pipeCD"
  }
  stage "Run Packer"
  sh "echo $APP_VERSION"
  sh "\$(export PATH=/usr/bin:/root/bin:/usr/local/bin:/sbin)"
  sh "/opt/packer validate -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json"
  sh "/opt/packer build -machine-readable -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json | tee packer/packer.log"
To repeat: the APP_VERSION parameter in the job pipe is right, and the demo-app is installed before the test executes.

Terraform - output ec2 instance ids to calling shell script

I am using 'terraform apply' in a shell script to create multiple EC2 instances. I need to output the list of generated IPs into a script variable and use the list in another sub-script. I have defined an output variable for the IPs in a Terraform config file, 'instance_ips':
output "instance_ips" {
value = [
"${aws_instance.gocd_master.private_ip}",
"${aws_instance.gocd_agent.*.private_ip}"
]
}
However, the terraform apply command prints the entire EC2 creation output in addition to the output variables.
terraform init \
-backend-config="region=$AWS_DEFAULT_REGION" \
-backend-config="bucket=$TERRAFORM_STATE_BUCKET_NAME" \
-backend-config="role_arn=$PROVISIONING_ROLE" \
-reconfigure \
"$TERRAFORM_DIR"
OUTPUT=$( terraform apply <input variables e.g. -var="aws_region=$AWS_DEFAULT_REGION"> \
  -auto-approve \
  -input=false \
  "$TERRAFORM_DIR"
)
terraform output instance_ips
So the 'OUTPUT' script variable content is
Terraform command: apply Initialising the backend... Successfully
configured the backend "s3"! Terraform will automatically use this
backend unless the backend configuration changes. Initialising provider
plugins... Terraform has been successfully initialised!
.
.
.
aws_route53_record.gocd_agent_dns_entry[2]: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_master_dns_entry: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_agent_dns_entry[1]: Creation complete after 53s
(ID:<zone ............................)
Apply complete! Resources: 9 added, 0 changed, 0 destroyed. Outputs:
instance_ips = [ 10.39.209.155, 10.39.208.44, 10.39.208.251,
10.39.209.227 ]
instead of just the EC2 ips.
Running 'terraform output instance_ips' throws an 'Initialisation Required' error, which I understand means 'terraform init' is required.
Is there any way to suppress the EC2 creation output and just print the output variables? If not, how can I retrieve the IPs using the 'terraform output' command without needing to do a terraform init?
If I understood the context correctly, you can actually create a file in that directory, and that file can be used by your sub-shell script. You can do it by using a null_resource or a local_file.
Here is how we can use it in a modularized structure -
Using null_resource -
resource "null_resource" "instance_ips" {
triggers {
ip_file = "${sha1(file("${path.module}/instance_ips.txt"))}"
}
provisioner "local-exec" {
command = "echo ${module.ec2.instance_ips} >> instance_ips.txt"
}
}
Using local_file -
resource "local_file" "instance_ips" {
content = "${module.ec2.instance_ips}"
filename = "${path.module}/instance_ips.txt"
}