I created a pipeline job that connects to an AWS account and runs some shell scripts.
At one point I wanted to add an SSH step, in which Jenkins SSHes into each instance in the account, performs an update and a restart, and continues until all instances are updated.
The account consists of 4 instances, grouped into pairs.
This is my Jenkinsfile (the ssh part is commented out for now):
pipeline {
    agent {
        label "amazon-small"
    }
    stages {
        stage('Repo Checkout') {
            steps {
                git credentialsId: 'jenkins-ssh', url: 'ssh://git@REPO.git'
            }
        }
        stage('Create snapshots') {
            steps {
                withAWS(role: "${config.aws_role}", roleAccount: "${config.aws_account}", duration: 3600) {
                    sh "aws_patch_stuff/snapshots.sh"
                }
            }
        }
        stage('Deregister first ID pair from target group') {
            steps {
                sh 'source vars.sh; deregister_pair_from_target_group "${id_pair_1[@]}"'
            }
        }
        stage('SSH into first host pair, update and restart') {
            steps {
                sshagent(credentials: [config.credential_id]) {
                    sh "python3 check_active_sessions.py"
                    // sh 'source vars.sh; ssh_into_pair "${ip_pair_1[@]}"'
                }
            }
        }
        stage('Register targets after reboot (first pair)') {
            steps {
                sh 'source vars.sh; register_pair_to_target_group "${id_pair_1[@]}"'
            }
        }
        stage('Check software availability (first pair)') {
            steps {
                sh 'source vars.sh; check_soft_availability "${ip_pair_1[@]}"'
            }
        }
        stage('Deregister second ID pair from target group') {
            steps {
                sh 'source vars.sh; deregister_pair_from_target_group "${id_pair_2[@]}"'
            }
        }
        stage('SSH into second host pair, update and restart') {
            steps {
                sshagent(credentials: [config.credential_id]) {
                    sh "python3 check_active_sessions.py"
                    // sh 'source vars.sh; ssh_into_pair "${ip_pair_2[@]}"'
                }
            }
        }
        stage('Register targets after reboot (second pair)') {
            steps {
                sh 'source vars.sh; register_pair_to_target_group "${id_pair_2[@]}"'
            }
        }
        stage('Check software availability (second pair)') {
            steps {
                sh 'source vars.sh; check_soft_availability "${ip_pair_2[@]}"'
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = currentBuild.result ?: 'SUCCESS'
                mail bcc: '', body: 'Update done, hosts rebooted, packages are up to date', cc: '', from: '', replyTo: '', subject: 'Status for update and reboot pipeline', to: 'some.name@email.com'
            }
        }
    }
}
The ssh function looks like this:
ssh_into_pair () {
    arr=("$@")
    for INSTANCEIP in "${arr[@]}"
    do
        echo "Updating $INSTANCEIP..."
        # run the update and the conditional reboot on the remote host, not on the Jenkins agent
        ssh -o StrictHostKeyChecking=no -l ec2-user "$INSTANCEIP" \
            'uname -a; sudo yum -y update; echo "Performing restart..."; sudo needs-restarting -r || sudo shutdown -r 1'
        sleep 10
    done
}
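For context, the pair variables in vars.sh are meant to be plain bash arrays, so the caller expands them into separate arguments. vars.sh itself isn't shown here, so the values below are just placeholders:

# hypothetical excerpt from vars.sh (the real file isn't part of the question)
ip_pair_1=("10.0.1.10" "10.0.1.11")

# "${ip_pair_1[@]}" expands to one argument per IP,
# which ssh_into_pair collects again via arr=("$@")
ssh_into_pair "${ip_pair_1[@]}"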
I tried using the SSH Agent plugin for Jenkins, but I'm pretty sure I'm not using it right.
The private keys are stored in Jenkins and are provided in the Groovy file of the job, so this shouldn't be the issue. Is there a way I can actually achieve this? Should I maybe consider using Ansible as well?
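For context, the step I'm ultimately trying to enable would look roughly like this, the assumption being that the key stored under config.credential_id is the one authorized for ec2-user on the instances:

stage('SSH into first host pair, update and restart') {
    steps {
        sshagent(credentials: [config.credential_id]) {
            // the key loaded by sshagent is available to every ssh call
            // made inside this block, including those in ssh_into_pair
            sh 'source vars.sh; ssh_into_pair "${ip_pair_1[@]}"'
        }
    }
}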
Related
I am very new to Terraform and I'm currently working on running Terraform scripts in a Jenkins pipeline. I have a .tfvars file for each region, for example tf-de-sandbox.tfvars, tf-fr-sandbox.tfvars, tf-de-prod.tfvars and tf-fr-prod.tfvars. I am trying to run the plan and apply commands through the Jenkins pipeline. What I am looking for is a way to run both sandbox files in parallel and both prod files in parallel. For example, can I pass multiple tfvars files to the plan command, something like this, and then run apply?
terraform plan -var-file=[tf-de-sandbox.tfvars,tf-fr-sandbox.tfvars]
My Jenkins pipeline looks like this:
pipeline {
    agent none
    triggers {
        cron('0 9 * * *') // schedule your build every day at 9:00
    }
    stages {
        stage('Pre-deploy stacks') {
            agent { node { label 'deploy-python36' } }
            steps {
                sh 'cf sync --confirm cfn/stacks.yml'
                sh 'curl https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo | tee /etc/yum.repos.d/hashicorp.repo && yum install terraform -y'
            }
        }
        stage('TF Initialization') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform init'
                }
            }
        }
        stage('TF/DE planning [box]') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-de-sandbox.tfvars'
                }
            }
        }
Yes, you can. For example:
terraform plan --var-file=dev.tfvars --var-file=common.tfvars --out=planoutput
And then, to apply:
terraform apply planoutput
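As for running the sandbox (or prod) plans in parallel, a declarative parallel block can do that. A rough sketch based on your existing stages (the stage names are placeholders, and note that two plans sharing one working directory and state may hit state locking, so separate directories or workspaces could be needed):

stage('TF planning [sandbox]') {
    parallel {
        stage('DE sandbox') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-de-sandbox.tfvars -out=plan-de-sandbox'
                }
            }
        }
        stage('FR sandbox') {
            agent { node { label 'deploy-python36' } }
            steps {
                dir('./tf') {
                    sh 'terraform plan -var-file=conf/tf-fr-sandbox.tfvars -out=plan-fr-sandbox'
                }
            }
        }
    }
}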
I am creating an instance from a sourceImage, using this terraform template:
resource "tls_private_key" "sandbox_ssh" {
algorithm = "RSA"
rsa_bits = 4096
}
output "tls_private_key_sandbox" { value = "${tls_private_key.sandbox_ssh.private_key_pem}" }
locals {
custom_data1 = <<CUSTOM_DATA
#!/bin/bash
CUSTOM_DATA
}
resource "google_compute_instance_from_machine_image" "sandboxvm_test_fromimg" {
project = "<proj>"
provider = google-beta
name = "sandboxvm-test-fromimg"
zone = "us-central1-a"
tags = ["test"]
source_machine_image = "projects/<proj>/global/machineImages/sandboxvm-test-img-1"
can_ip_forward = false
labels = {
owner = "test"
purpose = "test"
ami = "sandboxvm-test-img-1"
}
metadata = {
ssh-keys = "${var.sshuser}:${tls_private_key.sandbox_ssh.public_key_openssh}"
}
network_interface {
network = "default"
access_config {
// Include this section to give the VM an external ip address
}
}
metadata_startup_script = local.custom_data1
}
output "instance_ip_sandbox" {
value = google_compute_instance_from_machine_image.sandboxvm_test_fromimg.network_interface.0.access_config.0.nat_ip
}
output "user_name" {
value = var.sshuser
}
I can't even ping or netcat either the private or the public IP of the created VM. Not even the "serial port" SSH, passed in via the custom startup script, helps.
I suspect that, since this is a google-beta capability, it may not be fully working / reliable yet.
Maybe we just can't create VMs (i.e. GCEs) from machine images in GCP yet; unless proven otherwise, it could also be a simple goof-up that isn't very evident in my TF.
I could actually solve it, and the whole thing somehow sounds like a rather sick quirk of GCE.
The problem was that, while creating the base image, the instance I had chosen had the following:
#sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2
#sudo update-alternatives --install /usr/bin/python3 python /usr/bin/python3.7 1
Maybe I should try with "python3" instead of "python", but when instantiating GCEs based on this machine image, it looks for the rather deprecated "python2.7" and not "python3", and complains about missing / unreadable packages like netplan etc.
Commenting out the "update-alternatives" lines and installing python3.6 and python3.7 explicitly did the trick!
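For reference, a sketch of what the explicit installs could look like, assuming an Ubuntu-based image (which the netplan reference suggests); exact package availability depends on the release:

# install the interpreters directly instead of re-pointing /usr/bin/python
sudo apt-get update
sudo apt-get install -y python3.6 python3.7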
I am trying to reference variables that are set in my jenkinsfile in my serverless.yml file.
In the Jenkinsfile I have this:
environment {
    HELLO = 'hello-world'
}
In the serverless.yml file I have this:
custom:
  secret: ${env:HELLO}
When running the Jenkins pipeline I get this error:
A valid environment variable to satisfy the declaration 'env:HELLO' could not be found.
Here is my full Jenkinsfile as requested. The end goal is to use val1 and val2 as env variables, but if I can figure out how to do it with hello-world, it's the same thing.
import com.lmig.intl.cloud.jenkins.exception.BuildException

def getJobName() {
    return env.JOB_NAME
}

environment {
    HELLO = 'hello-world'
}

def getEnvironment() {
    def jobName = getJobName().split('/')
    def environment = jobName[1].toLowerCase()
    return environment.toLowerCase()
}

node('linux') {
    stage('Checkout') {
        checkout scm
    }
    stage('Pull Secrets From Vault') {
        withAWS(credentials: 'aws-cred') {
            def secret = vaultPullSecrets(app: "sls-auxiliary-service", appenv: "nonprod", runtime: 'nonprod', keys: '["saslusername","saslpassword"]')
            def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
            def val2 = new groovy.json.JsonSlurper().parseText(secret)[1].SASLPASSWORD
            if (val1 != '' && val2 != '') {
                echo "Vault Secret pulled Successfully"
            } else {
                echo "Vault Secret Not Found"
                throw new BuildException("Vault Secret Not Found")
            }
        }
    }
    stage('Deploy') {
        def ENVIRONMENT = getEnvironment().replaceAll("\\_", "")
        withAWS(credentials: 'aws-cred') {
            sh 'npm i serverless-python-requirements'
            sh 'npm install --save-dev serverless-step-functions'
            sh 'npm install serverless-deployment-bucket --save-dev'
            sh 'npm i serverless-pseudo-parameters'
            sh 'npm i serverless-plugin-resource-tagging'
            sh 'pip3 install --user -r requirements.txt'
            sh "serverless deploy --stage ${ENVIRONMENT}"
        }
    }
}
You can use sed to replace the placeholder ${env:HELLO} with the real value, as long as you can make the Jenkins job always execute on a Linux agent.
stage('Pull Secrets From Vault') {
    withAWS(credentials: 'aws-cred') {
        def secret = vaultPullSecrets(app: "sls-auxiliary-service", appenv: "nonprod", runtime: 'nonprod', keys: '["saslusername","saslpassword"]')
        def val1 = new groovy.json.JsonSlurper().parseText(secret)[0].SASLUSERNAME
        sh """
            sed -i 's/\${env:HELLO}/${val1}/' <relative path to>/serverless.yml
        """
    }
}
I did a quick test with a simple pipeline as follows, and the sed command I gave works well.
node('docker') {
    stage('A') {
        sh '''
            set +x
            echo 'custom:' > serverless.yml
            echo ' secret: ${env:HELLO}' >> serverless.yml
            echo '### Before replace ###'
            cat serverless.yml
        '''
        def val1 = 'hello'
        sh """
            set +x
            sed -i 's/\${env:HELLO}/${val1}/' ./serverless.yml
            echo '### After replace ###'
            cat serverless.yml
        """
    }
}
Output of the job build:
[script-pipeline-practice] Running shell script
+ set +x
### Before replace ###
custom:
secret: ${env:HELLO}
[Pipeline] sh
[script-pipeline-practice] Running shell script
+ set +x
### After replace ###
custom:
secret: hello
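An alternative that avoids editing the file: ${env:HELLO} in serverless.yml reads the process environment, so you could export the value around the deploy step with withEnv. A sketch, assuming val1 is made visible to the Deploy stage (in the original script it is declared inside another stage's closure):

stage('Deploy') {
    def ENVIRONMENT = getEnvironment().replaceAll("\\_", "")
    withAWS(credentials: 'aws-cred') {
        // HELLO is exported only for the commands inside this block,
        // so serverless can resolve ${env:HELLO} at deploy time
        withEnv(["HELLO=${val1}"]) {
            sh "serverless deploy --stage ${ENVIRONMENT}"
        }
    }
}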
I have a machine with NixOS (provisioned using Terraform, config), and I want to connect to it using deployment.targetHost = ipAddress and deployment.targetEnv = "none".
But I can't configure NixOps to use the ./secrets/stage_ssh_key SSH key.
This is not working (it's actually not documented; I found it here: https://github.com/NixOS/nixops/blob/d4e5b779def1fc9e7cf124930d0148e6bd670051/nixops/backends/none.py#L33-L35):
{
  stage =
    { pkgs, ... }:
    {
      deployment.targetHost = (import ./nixos-generated/stage.nix).terraform.ip;
      deployment.targetEnv = "none";
      deployment.none.sshPrivateKey = builtins.readFile ./secrets/stage_ssh_key;
      deployment.none.sshPublicKey = builtins.readFile ./secrets/stage_ssh_key.pub;
      deployment.none.sshPublicKeyDeployed = true;
      environment.systemPackages = with pkgs; [
        file
      ];
    };
}
nixops ssh stage asks for a password; expected: login without a password.
nixops ssh stage -i ./secrets/stage_ssh_key works as expected and no password is asked for.
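A partial workaround, assuming nixops shells out to the system ssh and the agent's keys get offered: loading the key into an ssh-agent before deploying avoids passing -i every time, for example:

# make the key available to the ssh calls nixops performs
eval "$(ssh-agent -s)"
ssh-add ./secrets/stage_ssh_key
nixops deploy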
How to reproduce:
download repo
rm -rf secrets/*
add aws keys in secrets/aws.nix
{
  EC2_ACCESS_KEY="XXXX";
  EC2_SECRET_KEY="XXXX";
}
nix-shell
make generate_stage_ssh_key
terraform apply
make nixops_create
nixops deploy asks for a password
I'm very new to this whole Packer/Vagrant/Puppet world. I'm trying to build my first VM using Packer and Puppet.
I can successfully build a VirtualBox VM, and I've included a shell script provisioner to install Puppet. I've SSH'ed into the VM to verify that it works and that Puppet is installed.
Then I added an additional puppet-masterless provisioner whose manifest looks simply like this:
# java dependency
package { 'openjdk-7-jdk':
  ensure => present
}
When I run packer, it gets to this point and gets stuck:
==> virtualbox-iso: Provisioning with Puppet...
virtualbox-iso: Creating Puppet staging directory...
virtualbox-iso: Uploading manifests...
virtualbox-iso: Running Puppet: sudo -E puppet apply --verbose --modulepath='' --detailed-exitcodes /tmp/packer-puppet-masterless/manifests/ubuntu.pp
Any suggestions would be helpful, even on how to debug it to see what's going on behind the scenes.
I was having the same problem, and changed the execute_command to feed sudo the password of the vagrant user.
"override": {
"virtualbox-iso": {
"execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E {{end}}puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
}
}
The whole block looks like this
{
  "type": "puppet-masterless",
  "manifest_file": "../puppet/manifests/base.pp",
  "module_paths": [
    "../puppet/modules/"
  ],
  "override": {
    "virtualbox-iso": {
      "execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E {{end}}puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
    }
  }
}
Source: Found an example here https://github.com/AdoptOpenJDK/openjdk-virtual-images/blob/master/packer/openjdk-development/openjdk-development.json
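As for the debugging part of the question, Packer's own verbose logging and step-through mode are the usual starting points; for example (the template filename here is a placeholder):

# verbose Packer logs show what the provisioner is doing behind the scenes
PACKER_LOG=1 packer build template.json

# or pause before each build step and inspect the VM interactively
packer build -debug template.json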