Referencing an Environment Variable in serverless.yml from a Jenkinsfile - amazon-web-services

I am trying to reference variables that are set in my Jenkinsfile in my serverless.yml file.
In the Jenkinsfile I have this:
environment {
    HELLO = 'hello-world'
}
In the serverless.yml file I have this:
custom:
  secret: ${env:HELLO}
When running the Jenkins pipeline I get this error:
A valid environment variable to satisfy the declaration 'env:HELLO' could not be found.
Here is my full Jenkinsfile as requested. The end goal is to use val1 and val2 as environment variables, but if I can figure out how to do it with hello-world, it is the same thing.
import com.lmig.intl.cloud.jenkins.exception.BuildException

def getJobName() {
    return env.JOB_NAME
}

environment {
    HELLO = 'hello-world'
}

def getEnvironment() {
    def jobName = getJobName().split('/')
    def environment = jobName[1].toLowerCase()
    return environment.toLowerCase()
}

node('linux') {
    stage('Checkout') {
        checkout scm
    }
    stage('Pull Secrets From Vault') {
        withAWS(credentials: 'aws-cred') {
            def secret = vaultPullSecrets(app: "sls-auxiliary-service", appenv: "nonprod", runtime: 'nonprod', keys: '["saslusername","saslpassword"]')
            def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
            def val2 = new groovy.json.JsonSlurper().parseText(secret)[1].SASLPASSWORD
            if (val1 != '' && val2 != '') {
                echo "Vault Secret pulled Successfully"
            } else {
                echo "Vault Secret Not Found"
                throw new BuildException("Vault Secret Not Found")
            }
        }
    }
    stage('Deploy') {
        def ENVIRONMENT = getEnvironment().replaceAll("\\_", "")
        withAWS(credentials: 'aws-cred') {
            sh 'npm i serverless-python-requirements'
            sh 'npm install --save-dev serverless-step-functions'
            sh 'npm install serverless-deployment-bucket --save-dev'
            sh 'npm i serverless-pseudo-parameters'
            sh 'npm i serverless-plugin-resource-tagging'
            sh 'pip3 install --user -r requirements.txt'
            sh "serverless deploy --stage ${ENVIRONMENT}"
        }
    }
}

You can use sed to replace the placeholder ${env:HELLO} with the real value, as long as you can make the Jenkins job always execute on a Linux agent.
stage('Pull Secrets From Vault') {
    withAWS(credentials: 'aws-cred') {
        def secret = vaultPullSecrets(app: "sls-auxiliary-service", appenv: "nonprod", runtime: 'nonprod', keys: '["saslusername","saslpassword"]')
        def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
        sh """
        sed -i 's/\${env:HELLO}/${val1}/' <relative path to>/serverless.yml
        """
    }
}
I did a quick test with a simple pipeline as follows, and the sed command I gave works well.
node('docker') {
    stage('A') {
        sh '''
        set +x
        echo 'custom:' > serverless.yml
        echo ' secret: ${env:HELLO}' >> serverless.yml
        echo '### Before replace ###'
        cat serverless.yml
        '''
        def val1 = 'hello'
        sh """
        set +x
        sed -i 's/\${env:HELLO}/${val1}/' ./serverless.yml
        echo '### After replace ###'
        cat serverless.yml
        """
    }
}
Output of the job build:
[script-pipeline-practice] Running shell script
+ set +x
### Before replace ###
custom:
 secret: ${env:HELLO}
[Pipeline] sh
[script-pipeline-practice] Running shell script
+ set +x
### After replace ###
custom:
 secret: hello
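Applied back to the original question, a minimal sketch of the Deploy stage could look like the following. The placeholder names ${env:SASL_USERNAME} and ${env:SASL_PASSWORD} and the path to serverless.yml are assumptions for this sketch; the point is only to run the same sed replacement before serverless deploy. Note that val1 and val2 must be visible in this stage (assigned without def inside the Vault stage, or declared at script level, so they are not stage-local).
stage('Deploy') {
    def ENVIRONMENT = getEnvironment().replaceAll("\\_", "")
    withAWS(credentials: 'aws-cred') {
        // Replace the serverless.yml placeholders with the values pulled from Vault.
        // Placeholder names and the file path are assumptions in this sketch.
        sh """
        sed -i 's/\${env:SASL_USERNAME}/${val1}/' ./serverless.yml
        sed -i 's/\${env:SASL_PASSWORD}/${val2}/' ./serverless.yml
        """
        sh "serverless deploy --stage ${ENVIRONMENT}"
    }
}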

Related

Jenkins pipeline assign variable multiple times

Is it possible to re-assign a variable's value several times inside an IF in one script block?
I have a script block where I need to pass variable values to different environments:
script {
    if (env.DEPLOY_ENV == 'staging') {
        echo 'Run LUX-staging build'
        def ENV_SERVER = ['192.168.141.230']
        def UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
        echo 'Run STAGE ADN deploy'
        def ENV_SERVER = ['192.168.111.30']
        def UML_SUFFIX = ['stage-sg']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
        echo 'Run STAGE SG deploy'
        def ENV_SERVER = ['stage-sg-pbo-api.example.com']
        def UML_SUFFIX = ['stage-ba']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
}
But I receive an error in the Jenkins job on the second variable assignment:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 80: The current scope already contains a variable of the name ENV_SERVER
# line 80, column 11.
def ENV_SERVER = ['192.168.111.30']
^
WorkflowScript: 81: The current scope already contains a variable of the name UML_SUFFIX
# line 81, column 11.
def UML_SUFFIX = ['stage-sg']
^
Or perhaps there are other ways to do multiple assignments inside one IF part of a script block.
Using def declares the variable, and this is only needed on the first assignment,
so removing the def from the later assignments should work:
script {
    if (env.DEPLOY_ENV == 'staging') {
        echo 'Run LUX-staging build'
        def ENV_SERVER = ['192.168.141.230']
        def UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
        echo 'Run STAGE ADN deploy'
        ENV_SERVER = ['192.168.111.30']
        UML_SUFFIX = ['stage-sg']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
        echo 'Run STAGE SG deploy'
        ENV_SERVER = ['stage-sg-pbo-api.example.com']
        UML_SUFFIX = ['stage-ba']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
}
The variables will only be scoped to the if block, so you won't have access to them outside that block.
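If you do need the values after the if block, a minimal sketch (same idea, just moving the declaration up) is to declare the variables once before the if and only re-assign them inside it:
script {
    // Declared before the if, so the last assigned values stay visible afterwards.
    def ENV_SERVER = []
    def UML_SUFFIX = []
    if (env.DEPLOY_ENV == 'staging') {
        ENV_SERVER = ['192.168.141.230']
        UML_SUFFIX = ['stage-or']
        sh 'ansible-playbook nginx_depl.yml --limit 127.0.0.1'
    }
    echo "Last values: ${ENV_SERVER} / ${UML_SUFFIX}"
}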

Is there a way to confirm user_data ran successfully with Terraform for EC2?

I'm wondering if it's possible to know when the script in user data has finished executing.
data "template_file" "script" {
template = file("${path.module}/installing.sh")
}
data "template_cloudinit_config" "config" {
gzip = false
base64_encode = false
# Main cloud-config configuration file.
part {
filename = "install.sh"
content = "${data.template_file.script.rendered}"
}
}
resource "aws_instance" "web" {
ami = "ami-04e7b4117bb0488e4"
instance_type = "t2.micro"
key_name = "KEY"
vpc_security_group_ids = [aws_default_security_group.default.id]
subnet_id = aws_default_subnet.default_az1.id
associate_public_ip_address = true
iam_instance_profile = "Role_S3"
user_data = data.template_cloudinit_config.config.rendered
tags = {
Name = "Terraform-Ansible"
}
}
The content of the script is the following. Terraform tells me it applied the changes successfully, but the script is still running; is there a way I can monitor that?
#!/usr/bin/env bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
sudo apt update
sudo apt upgrade -y
sudo apt install -y unzip
echo END
No, you can not confirm the user data status from Terraform itself, because user data is just a launch script that executes once the EC2 instance has launched. But with some extra effort in the init script there is one way to check:
How to check User Data status while launching the instance in aws
If, as described in the post above, you create a marker file once the user data has completed, then you can try this to check:
resource "null_resource" "user_data_status_check" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m wait for few minute for instance warm up, adjust accordingly \x1B[0m"
# wait 30 sec
sleep 30
ssh -i yourkey.pem instance_ip ConnectTimeout=30 -o 'ConnectionAttempts 5' test -f "/home/user/markerfile.txt" && echo found || echo not found
if [ $? -eq 0 ]; then
echo "user data sucessfully executed"
else
echo "Failed to execute user data"
fi
EOT
}
triggers = {
#remove this once you test it out as it should run only once
always_run ="${timestamp()}"
}
depends_on = ["aws_instance.my_instance"]
}
So this script checks for the marker file on the newly launched server over SSH, with a 30 second connection timeout and a maximum of 5 connection attempts.
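For the marker file itself, a minimal sketch (the path and filename are assumptions that simply match the check above) is to append one line at the very end of the user data script:
# last line of the user data script: signals that everything above has completed
touch /home/user/markerfile.txt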
Here are some pointers to remember:
User data shell scripts must start with the Shebang #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Scripts entered as user data are run as the root user, so no need to use the sudo command in the init script.
When a user data script is processed, it is copied to and run from /var/lib/cloud/instances/instance-id/. The script is not deleted after it is run and can be found in this directory with the name user-data.txt. So to check whether your shell script made it to the server, look for that file in this directory.
The cloud-init output log file (/var/log/cloud-init-output.log) captures the console output of your user_data shell script. To see how your user_data shell script was executed and what it printed, check this file.
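For example, after SSHing into the instance you can follow that log live with plain tail:
tail -f /var/log/cloud-init-output.log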
Source: https://www.middlewareinventory.com/blog/terraform-aws-ec2-user_data-example/
Well I use these two ways to confirm.
At the end of the cloud-init config file, this line sends me a notification through WhatsApp (using CallMeBot). So no matter how long the setup takes, I always get notified when it's ready to use. I watch some series or read something in that time, no time wasted.
curl -X POST "https://api.callmebot.com/whatsapp.php?phone=12345678910&text=Ec2+transcoder+setup+complete&apikey=12345"
At the end of the cloud-init config, these lines run:
echo "for faster/visual confirmation of above execution.."
wget https://www.sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4 -O /home/ubuntu/dpnd_comp.mp4
When I sign in to the instance I can see the file directly.
And I'm loving it. Hope this helps someone. Also, don't forget to tell me your method too.

How to return as a list only numbers from a command output in Jenkinsfile/Groovy?

We need to return, as an option in this Jenkinsfile, only the revision numbers of our k8s applications, but the command returns the entire output, and none of my regexes and escapes on the command were working. Here is the code:
choiceType: 'PT_SINGLE_SELECT',
description: 'Revision of the application on kubernetes',
name: 'revision',
omitValueField: false,
randomName: 'choice-parameter-5633384460832177',
referencedParameters: 'namespaces,deployment',
script: [
    $class: 'GroovyScript',
    script: [
        classpath: [],
        sandbox: true,
        script: """
            if (namespaces.equals("Select")) {
                return ["Nothing to do - Select your deployment"]
            } else {
                def revResult = null
                def kubecmd0 = "kubectl rollout history deploy --kubeconfig=${kubefilePrd} -n " + namespaces + " " + deployment + " "
                def kubecmd1 = kubecmd0.execute().in.text.split().toList()
                return kubecmd1
            }
        """
    ]
On the Jenkins job, the parameter shows the entire command output (screenshot omitted).
Is there any function or magic regex that could solve this?
Problem solved:
[$class: 'CascadeChoiceParameter',
    choiceType: 'PT_SINGLE_SELECT',
    description: 'Revision of the application on kubernetes',
    name: 'revision',
    randomName: 'choice-parameter-5633384460832177',
    referencedParameters: 'namespaces,deployment',
    script: [
        $class: 'GroovyScript',
        script: [
            classpath: [],
            sandbox: true,
            script: """
                if (namespaces.equals("Select")) {
                    return ["Nothing to do - Select your deployment"]
                } else {
                    def command = "kubectl rollout history deploy --kubeconfig=${kubefilePrd} -n " + namespaces + " " + deployment + "| grep -v REVISION | grep -v deployment | cut -f1 -d' '"
                    def output = ['bash', '-c', command].execute().in.text
                    return output.split().toList()
                }
            """
        ]
Basically, it's necessary to call bash rather than running the command string from Groovy directly. Works for me. :)
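For reference, a pure-Groovy variant of the same filtering is also possible. This is just a sketch, assuming the usual kubectl rollout history output (a REVISION header followed by numbered rows) and the same kubefilePrd, namespaces and deployment values as above:
def command = "kubectl rollout history deploy --kubeconfig=${kubefilePrd} -n " + namespaces + " " + deployment
def output = command.execute().in.text
// keep only the lines that start with a revision number and return their first column
return output.readLines()
             .findAll { it =~ /^\d+/ }
             .collect { it.split(/\s+/)[0] }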

Serverspec test fails when I run its pipeline from another pipeline

I'm running 3 pipelines in Jenkins (CI, CD, CDP). When I run the CI pipe, the final stage is a trigger that activates the CD (Continuous Deployment) pipe. The CD pipe receives a parameter APP_VERSION from the CI (Continuous Integration) pipe, deploys an instance with Packer and runs the Serverspec tests, but the Serverspec test fails,
even though the demo-app is installed via SaltStack.
The strange thing is that when I run the CD pipe and pass the parameter APP_VERSION manually, it WORKS !!
This is the final stage of the CI pipeline:
stage "Trigger downstream"
echo 'parametro'
def versionApp = sh returnStdout: true, script:"echo \$(git rev-parse --short HEAD) "
build job: "demo-pipeCD", parameters: [[$class: "StringParameterValue", name: "APP_VERSION", value: "${versionApp}"]], wait: false
}
I have passed the sbin PATH to Serverspec and it does not work.
EDIT: I have added the code of the test.
require 'spec_helper'
versionFile = open('/tmp/APP_VERSION')
appVersion = versionFile.read.chomp
describe package("demo-app-#{appVersion}") do
it { should be_installed }
end
Also, I have added the job pipeline:
#!groovy
node {
    step([$class: 'WsCleanup'])

    stage "Checkout Git repo"
    checkout scm

    stage "Checkout additional repos"
    dir("pipeCD") {
        git "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/pipeCD"
    }

    stage "Run Packer"
    sh "echo $APP_VERSION"
    sh "\$(export PATH=/usr/bin:/root/bin:/usr/local/bin:/sbin)"
    sh "/opt/packer validate -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json"
    sh "/opt/packer build -machine-readable -var=\"appVersion=$APP_VERSION\" -var-file=packer/demo-app_vars.json packer/demo-app.json | tee packer/packer.log"
TO REPEAT: the parameter APP_VERSION in the job pipe is right, and the demo-app is installed before the tests execute.

Jenkinsfile to automatically deploy to EKS

How do I pass my AWS credentials when I am running a Jenkins job,
taking this as an example: https://github.com/PaulMaddox/amazon-eks-kubectl
$ docker run -v ~/.aws:/home/kubectl/.aws -e CLUSTER=demo maddox/kubectl get services
The above works on my laptop, but I want to pass the AWS credentials in the file. I have AWS configured in my Jenkins --> Credentials. I also have a Bitbucket repo which contains a Jenkinsfile and a yaml file for the "service" and "deployment".
The way I do it now is to run kubectl create -f filename.yaml and it deploys to EKS. I just want to do the same thing but automate it with a Jenkinsfile; any suggestions on how to do it, either with kubectl or with Helm?
In your Jenkinsfile you should include a similar section:
stage('Deploy on Dev') {
    node('master') {
        withEnv(["KUBECONFIG=${JENKINS_HOME}/.kube/dev-config", "IMAGE=${ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/${ECR_REPO_NAME}:${IMAGETAG}"]) {
            sh "sed -i 's|IMAGE|${IMAGE}|g' k8s/deployment.yaml"
            sh "sed -i 's|ACCOUNT|${ACCOUNT}|g' k8s/service.yaml"
            sh "sed -i 's|ENVIRONMENT|dev|g' k8s/*.yaml"
            sh "sed -i 's|BUILD_NUMBER|01|g' k8s/*.yaml"
            sh "kubectl apply -f k8s"
            DEPLOYMENT = sh(
                script: 'cat k8s/deployment.yaml | yq -r .metadata.name',
                returnStdout: true
            ).trim()
            echo "Creating k8s resources..."
            sleep 180
            DESIRED = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$2}' | grep -v DESIRED",
                returnStdout: true
            ).trim()
            CURRENT = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$3}' | grep -v CURRENT",
                returnStdout: true
            ).trim()
            if (DESIRED.equals(CURRENT)) {
                currentBuild.result = "SUCCESS"
                return
            } else {
                error("Deployment Unsuccessful.")
                currentBuild.result = "FAILURE"
                return
            }
        }
    }
}
This stage will be responsible for automating the deployment process.
I hope it helps.
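If the missing piece is specifically the AWS credentials, one possible sketch (not part of the answer above) is to reuse the withAWS step with a stored credentials ID, the way the first question's Jenkinsfile does, and let the AWS CLI write the kubeconfig before applying the manifests. The credentials ID 'aws-cred', the region and the cluster name demo are assumptions here:
stage('Deploy to EKS') {
    node('master') {
        // 'aws-cred' is the ID of the AWS credentials stored in Jenkins (assumption)
        withAWS(credentials: 'aws-cred', region: 'us-east-1') {
            // generate a kubeconfig for the EKS cluster, then apply the manifests
            sh 'aws eks update-kubeconfig --name demo'
            sh 'kubectl apply -f k8s'
        }
    }
}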