Using Terraform commands in a Jenkins Pipeline

I am very new to Terraform and am currently working on running Terraform scripts in a Jenkins Pipeline. I have a .tfvars file for each region, for example tf-de-sandbox.tfvars, tf-fr-sandbox.tfvars, tf-de-prod.tfvars and tf-fr-prod.tfvars. I am trying to run the plan and apply commands through the Jenkins Pipeline. What I am looking for is: is there any way I can run both sandbox files in parallel, and both prod files in parallel? For example, can I give multiple tfvars files to the plan command and then use the apply command, something like this?
terraform plan -var-file=[tf-de-sandbox.tfvars,tf-fr-sandbox.tfvars]
My Jenkins Pipeline looks like this:
pipeline {
    agent none
    triggers {
        cron('0 9 * * *') // schedule your build every day at 9:00
    }
    stages {
        stage('Pre-deploy stacks') {
            agent { node { label 'deploy-python36' } }
            steps {
                sh 'cf sync --confirm cfn/stacks.yml'
                sh 'curl https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo | tee /etc/yum.repos.d/hashicorp.repo && yum install terraform -y'
            }
        }
        stage('TF Initialization') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform init'
                }
            }
        }
        stage('TF/DE planning [box]') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-de-sandbox.tfvars'
                }
            }
        }
    }
}

Yes, you can. For example:
terraform plan --var-file=dev.tfvars --var-file=common.tfvars --out=planoutput
And then, to apply:
terraform apply planoutput
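For running the two sandbox regions at the same time, a declarative parallel block also works. Here is a minimal sketch, assuming the node label, tf directory and var-file layout from the question, and assuming each region has its own state (two plans against the same state file would contend for the state lock):

stage('TF sandbox planning') {
    parallel {
        stage('TF/DE planning [box]') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-de-sandbox.tfvars -out=de-sandbox.plan'
                }
            }
        }
        stage('TF/FR planning [box]') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-fr-sandbox.tfvars -out=fr-sandbox.plan'
                }
            }
        }
    }
}

Each saved plan can then be applied in a later stage with terraform apply de-sandbox.plan and terraform apply fr-sandbox.plan.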

Related

Using SSH Agent in Jenkins to connect to multiple instances

I created a pipeline job that connects to an AWS account and runs some shell scripts.
At one point I wanted to add an SSH step, in which Jenkins SSHes into each instance inside the account, performs an update and a restart, then continues until all instances are updated.
The account is comprised of 4 instances, grouped into pairs.
This is my Jenkinsfile (the ssh part is commented out for now):
pipeline {
    agent {
        label "amazon-small"
    }
    stages {
        stage('Repo Checkout') {
            steps {
                git credentialsId: 'jenkins-ssh', url: 'ssh://git@REPO.git'
            }
        }
        stage('Create snapshots') {
            steps {
                withAWS(role: "${config.aws_role}", roleAccount: "${config.aws_acount}", duration: 3600) {
                    sh "aws_patch_stuff/snapshots.sh"
                }
            }
        }
        stage('Deregister first ID pair from target group') {
            steps {
                sh "source vars.sh; deregister_pair_from_target_group ${id_pair_1[@]}"
            }
        }
        stage('SSH into first host pair, update and restart') {
            steps {
                sshagent(credentials: [config.credential_id]) {
                    sh "python3 check_active_sessions.py"
                    // sh "source vars.sh; ssh_into_pair ${ip_pair_1[@]}"
                }
            }
        }
        stage('Register targets after reboot') {
            steps {
                sh "source vars.sh; register_pair_to_target_group ${id_pair_1[@]}"
            }
        }
        stage('Check software availability') {
            steps {
                sh "source vars.sh; check_soft_availability ${ip_pair_1[@]}"
            }
        }
        stage('Deregister second ID pair from target group') {
            steps {
                sh "source vars.sh; deregister_pair_from_target_group ${id_pair_2[@]}"
            }
        }
        stage('SSH into second host pair, update and restart') {
            steps {
                sshagent(credentials: [config.credential_id]) {
                    sh "python3 check_active_sessions.py"
                    // sh "source vars.sh; ssh_into_pair ${ip_pair_2[@]}"
                }
            }
        }
        stage('Register targets after reboot') {
            steps {
                sh "source vars.sh; register_pair_to_target_group ${id_pair_2[@]}"
            }
        }
        stage('Check software availability') {
            steps {
                sh "source vars.sh; check_soft_availability ${ip_pair_2[@]}"
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = currentBuild.result ?: 'SUCCESS'
                mail bcc: '', body: 'Update done, hosts rebooted, packages are up to date', cc: '', from: '', replyTo: '', subject: 'Status for update and reboot pipeline', to: 'some.name@email.com'
            }
        }
    }
}
The ssh function looks like this:
ssh_into_pair () {
    arr=("$@")
    for INSTANCEIP in "${arr[@]}"
    do
        echo "Updating $INSTANCEIP..."
        ssh -o StrictHostKeyChecking=no -l ec2-user $INSTANCEIP uname -a
        sudo yum update
        echo "Performing restart..."
        sudo needs-restarting -r || sudo shutdown -r 1
        sleep 10
    done
}
I tried using the SSH plugin for Jenkins; however, I am pretty sure I'm not using it right.
The private keys are stored in Jenkins and are provided in the groovy file of the job, so this shouldn't be an issue. Is there a way I can actually achieve this? Should I maybe consider using Ansible as well?
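As a side note, the loop above only runs uname -a on the remote host; the yum update and the restart run locally on the Jenkins agent. A minimal sketch of the function with the whole sequence executed remotely, under the same assumptions as the question (ec2-user login and the IP arrays from vars.sh):

ssh_into_pair () {
    arr=("$@")
    for INSTANCEIP in "${arr[@]}"
    do
        echo "Updating $INSTANCEIP..."
        # run the update and the conditional reboot on the remote host,
        # not on the Jenkins agent
        ssh -o StrictHostKeyChecking=no -l ec2-user "$INSTANCEIP" \
            'sudo yum -y update && (sudo needs-restarting -r || sudo shutdown -r 1)'
        sleep 10
    done
}

Inside the pipeline this would still be called under sshagent(credentials: [config.credential_id]), so the key stored in Jenkins is used for the connection.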

Startup scripts with Google Compute Engine in Node.js

I'm a novice with the Google Cloud Compute API in Node.
I'm using this library:
https://googleapis.dev/nodejs/compute/latest/index.html
I'm authenticated and can make API requests; that is all set up.
All I'm trying to do is make a startup script that will download from this URL:
http://eve-robotics.com/release/EveAIO_setup.exe and place the folder on the desktop.
I have this, but I'm 100% sure this is way off based on some articles and docs I am seeing; I know nothing about bash or startup scripts. This is what I have:
const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createVM() {
    const vmName = 'start-script-trial3';
    // const [vm, operation] = await zone.createVM(vmName, {
    // })
    const config = {
        os: 'windows',
        http: true,
        metadata: {
            items: [
                {
                    key: 'startup-script',
                    value: `curl http://eve-robotics.com/release/EveAIO_setup.exe --output Eve`,
                },
            ],
        },
    };
    const vm = zone.vm(vmName);
    const [gas, operation] = await vm.create(config);
    console.log(operation.id);
}

createVM();
I was able to do it in bash.
I made a 'bat' script for Windows:

@ECHO OFF
curl http://eve-robotics.com/release/EveAIO_setup.exe --output C:\Users\Eve

I copied the script to GCS:

gsutil cp file.bat gs://my-bucket/

Then I ran the gcloud command:

gcloud compute instances create example-windows-instance --scopes storage-ro --image-family=windows-1803-core --image-project=windows-cloud --metadata windows-startup-script-url=gs://marian-b/file.bat --zone=europe-west1-c
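The same approach should carry back to the Node client. Here is a minimal sketch, assuming the bat file has been uploaded as above and using a hypothetical instance name; on Windows images, GCE reads the windows-startup-script-url metadata key rather than startup-script:

const Compute = require('@google-cloud/compute');

const compute = new Compute();
const zone = compute.zone('us-central1-c');

async function createWindowsVM() {
    const vm = zone.vm('start-script-trial4'); // hypothetical name
    const [instance, operation] = await vm.create({
        os: 'windows',
        http: true,
        metadata: {
            items: [
                {
                    // Windows images read this key instead of 'startup-script'
                    key: 'windows-startup-script-url',
                    value: 'gs://my-bucket/file.bat',
                },
            ],
        },
    });
    console.log(operation.id);
}

createWindowsVM();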

AWS-CDK & PowerShell Lambda

I have a PowerShell Lambda that I would like to deploy via the AWS CDK, but I'm having issues getting it to run.
Deploying the PowerShell Lambda via a manual Publish-AWSPowerShellLambda works:

Publish-AWSPowerShellLambda -ScriptPath .\PowershellLambda.ps1 -Name PowershellLambda

However, the same script deployed with the CDK doesn't log to CloudWatch Logs, even though it has permission:
import events = require('@aws-cdk/aws-events');
import targets = require('@aws-cdk/aws-events-targets');
import lambda = require('@aws-cdk/aws-lambda');
import cdk = require('@aws-cdk/core');

export class LambdaCronStack extends cdk.Stack {
    constructor(app: cdk.App, id: string) {
        super(app, id);

        const lambdaFn = new lambda.Function(this, 'Singleton', {
            code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
            handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
            timeout: cdk.Duration.seconds(300),
            runtime: lambda.Runtime.DOTNET_CORE_2_1
        });

        const rule = new events.Rule(this, 'Rule', {
            schedule: events.Schedule.expression('rate(1 minute)')
        });
        rule.addTarget(new targets.LambdaFunction(lambdaFn));
    }
}

const app = new cdk.App();
new LambdaCronStack(app, 'LambdaCronExample');
app.synth();
The PowerShell script currently contains just the following lines and works when deployed by Publish-AWSPowerShellLambda on the CLI:

#Requires -Modules @{ModuleName='AWSPowerShell.NetCore';ModuleVersion='3.3.335.0'}
Write-Host "Powershell Lambda Executed"
Note: for the CDK deployment I generate the .zip file using a build step in package.json:

"scripts": {
    "build": "tsc",
    "build-package": "pwsh -NoProfile -ExecutionPolicy Unrestricted -command New-AWSPowerShellLambdaPackage -ScriptPath './PowershellLambda/PowershellLambda.ps1' -OutputPackage ./PowershellLambda/PowershellLambda.zip",
    "watch": "tsc -w",
    "cdk": "cdk"
}
The CDK deploys fine and the Lambda runs as expected, but the only thing in CloudWatch Logs is this:

START RequestId: 4c12fe1a-a9e0-4137-90cf-747b6aecb639 Version: $LATEST

I've checked that the handler in the CDK script matches the output of Publish-AWSPowerShellLambda, and that the zip file uploaded fine and contains the correct code.
Any suggestions as to why this isn't working?
Setting the memory size to 512 MB within the lambda.Function has resolved the issue.
The CloudWatch entry showed the Lambda starting, but it appears there wasn't enough memory to initialize and run the .NET runtime.
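In CDK terms that is one extra property on the function construct; a sketch of just the changed declaration (memorySize is in MB, and the default is 128):

const lambdaFn = new lambda.Function(this, 'Singleton', {
    code: new lambda.AssetCode('./PowershellLambda/PowershellLambda.zip'),
    handler: 'PowershellLambda::PowershellLambda.Bootstrap::ExecuteFunction',
    timeout: cdk.Duration.seconds(300),
    memorySize: 512, // the 128 MB default is too small for the .NET runtime
    runtime: lambda.Runtime.DOTNET_CORE_2_1
});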

Terraform - output ec2 instance ids to calling shell script

I am using 'terraform apply' in a shell script to create multiple EC2 instances. I need to output the list of generated IPs to a script variable and use the list in another sub-script. I have defined an output variable for the IPs in a Terraform config file, 'instance_ips':
output "instance_ips" {
value = [
"${aws_instance.gocd_master.private_ip}",
"${aws_instance.gocd_agent.*.private_ip}"
]
}
However, the terraform apply command prints the entire EC2 generation output on top of the output variables:
terraform init \
  -backend-config="region=$AWS_DEFAULT_REGION" \
  -backend-config="bucket=$TERRAFORM_STATE_BUCKET_NAME" \
  -backend-config="role_arn=$PROVISIONING_ROLE" \
  -reconfigure \
  "$TERRAFORM_DIR"

OUTPUT=$( terraform apply \
  -var="aws_region=$AWS_DEFAULT_REGION" \
  -auto-approve \
  -input=false \
  "$TERRAFORM_DIR"
)

terraform output instance_ips
So the 'OUTPUT' script variable content is
Terraform command: apply Initialising the backend... Successfully
configured the backend "s3"! Terraform will automatically use this
backend unless the backend configuration changes. Initialising provider
plugins... Terraform has been successfully initialised!
.
.
.
aws_route53_record.gocd_agent_dns_entry[2]: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_master_dns_entry: Creation complete after 52s
(ID:<zone ............................)
aws_route53_record.gocd_agent_dns_entry[1]: Creation complete after 53s
(ID:<zone ............................)
Apply complete! Resources: 9 added, 0 changed, 0 destroyed. Outputs:
instance_ips = [ 10.39.209.155, 10.39.208.44, 10.39.208.251,
10.39.209.227 ]
instead of just the EC2 ips.
Firing 'terraform output instance_ips' throws an 'Initialisation Required' error, which I understand means 'terraform init' is required.
Is there any way to suppress the EC2 generation output and just print the output variables? If not, how can I retrieve the IPs using the 'terraform output' command without needing to do a terraform init?
If I understood the context correctly, you can create a file in that directory that your sub-shell script can then use. You can do it by using a null_resource or a local_file.
Here is how we can use them in a modularized structure.
Using null_resource -

resource "null_resource" "instance_ips" {
  triggers {
    ip_file = "${sha1(file("${path.module}/instance_ips.txt"))}"
  }
  provisioner "local-exec" {
    command = "echo ${module.ec2.instance_ips} >> instance_ips.txt"
  }
}
Using local_file -

resource "local_file" "instance_ips" {
  content  = "${module.ec2.instance_ips}"
  filename = "${path.module}/instance_ips.txt"
}

ASP.Net Core at AWS EBS - write permissions and .ebextensions

We have deployed an ASP.Net Core app on AWS EBS and have a problem with writing files on it:

Access to the path C:\inetpub\AspNetCoreWebApps\app\App_Data\file.txt is denied

I added .ebextensions\[app_name].config, but it did nothing:
{
  "container_commands": {
    "01": {
      "command": "icacls \"C:/inetpub/AspNetCoreWebApps/app/App_Data\" /grant DefaultAppPool:(OI)(CI)"
    }
  }
}
I know that this is a permission problem, because when I RDPed to the machine and changed the permissions manually, it solved the problem. I would like to do this during the deploy using .ebextensions\[app_name].config.

.ebextensions\[app_name].config runs before the deploy, and during the deploy the folder is recreated; that is why it was not working. I fixed it by adding a postInstall PowerShell script to aws-windows-deployment-manifest.json:
"scripts": {
"postInstall": {
"file": "SetupScripts/PostInstallSetup.ps1"
}
# # PostInstallSetup.ps1 #
$SharePath = "C:\inetpub\AspNetCoreWebApps\app\App_Data"
$Acl = Get-ACL $SharePath
$AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("DefaultAppPool","full","ContainerInherit,Objectinherit","none","Allow")
$Acl.AddAccessRule($AccessRule)
Set-Acl $SharePath $Acl
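For context, the scripts section sits inside a deployment entry in aws-windows-deployment-manifest.json; a minimal sketch of the surrounding file, with the app name and bundle paths as placeholder assumptions:

{
  "manifestVersion": 1,
  "deployments": {
    "aspNetCoreWeb": [
      {
        "name": "app",
        "parameters": {
          "appBundle": "site.zip",
          "iisPath": "/",
          "iisWebSite": "Default Web Site"
        },
        "scripts": {
          "postInstall": {
            "file": "SetupScripts/PostInstallSetup.ps1"
          }
        }
      }
    ]
  }
}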