Jenkinsfile to automatically deploy to EKS - amazon-web-services

How do I pass my AWS credentials when I am running a Jenkins job?
Taking this as an example: https://github.com/PaulMaddox/amazon-eks-kubectl
$ docker run -v ~/.aws:/home/kubectl/.aws -e CLUSTER=demo maddox/kubectl get services
The above works on my laptop, but I want to pass AWS credentials in the pipeline. I have AWS configured in my Jenkins --> Credentials. I also have a Bitbucket repo which contains a Jenkinsfile and a YAML file for the "service" and "deployment".
The way I do it now is to run kubectl create -f filename.yaml, which deploys to EKS. I just want to do the same thing but automate it with a Jenkinsfile. Suggestions on how to do it, either with kubectl or with Helm, are welcome.

In your Jenkinsfile you should include a section similar to this:
stage('Deploy on Dev') {
    node('master') {
        withEnv(["KUBECONFIG=${JENKINS_HOME}/.kube/dev-config", "IMAGE=${ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/${ECR_REPO_NAME}:${IMAGETAG}"]) {
            sh "sed -i 's|IMAGE|${IMAGE}|g' k8s/deployment.yaml"
            sh "sed -i 's|ACCOUNT|${ACCOUNT}|g' k8s/service.yaml"
            sh "sed -i 's|ENVIRONMENT|dev|g' k8s/*.yaml"
            sh "sed -i 's|BUILD_NUMBER|01|g' k8s/*.yaml"
            sh "kubectl apply -f k8s"
            DEPLOYMENT = sh(
                script: 'yq -r .metadata.name k8s/deployment.yaml',
                returnStdout: true
            ).trim()
            echo "Creating k8s resources..."
            sleep 180
            DESIRED = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$2}' | grep -v DESIRED",
                returnStdout: true
            ).trim()
            CURRENT = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$3}' | grep -v CURRENT",
                returnStdout: true
            ).trim()
            if (DESIRED.equals(CURRENT)) {
                currentBuild.result = "SUCCESS"
            } else {
                // error() aborts the build, so set the result before calling it
                currentBuild.result = "FAILURE"
                error("Deployment Unsuccessful.")
            }
        }
    }
}
This section is responsible for automating the deployment process.
I hope it helps.
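As for the credentials themselves, one option (a minimal sketch, not tested against your setup) is to have the job generate the kubeconfig at deploy time with aws eks update-kubeconfig, assuming the credentials stored under Jenkins --> Credentials are exposed to the build as the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables (for example via a credentials binding). The cluster name demo and the region are placeholders:
# Sketch: build the kubeconfig from the AWS credentials the job already has.
# "demo" and "us-east-1" are placeholders for your cluster name and region.
aws eks update-kubeconfig --name demo --region us-east-1 \
    --kubeconfig "${JENKINS_HOME}/.kube/dev-config"
# The stage above can then keep pointing KUBECONFIG at that file.
kubectl --kubeconfig "${JENKINS_HOME}/.kube/dev-config" apply -f k8s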

Related

How to Mount EFS to EC2 Instance with UserData using Pulumi?

I've been struggling to be able to mount an EFS volume to an EC2 instance on creation with the UserData field. I'm using Pulumi's Go library and what I have looks like the following:
// ... EFS with proper security groups and mountTarget created above ...
dir := configuration.Deployment.Efs.MountPoint
availabilityZone := configuration.Deployment.AvailabilityZone
region := configuration.Deployment.Region
userdata := args.Efs.ID().ToStringOutput().ApplyT(func(id string) (string, error) {
    script := `
#!/bin/bash -xe
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
mkdir -p %s
echo "%s.%s.%s.amazonaws.com:/ %s nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0" | tee -a /etc/fstab
mount -a
`
    return fmt.Sprintf(script, dir, availabilityZone, id, region, dir), nil
}).(pulumi.StringOutput)
ec2, err := ec2.NewInstance(ctx, fmt.Sprintf("%s_instance", name), &ec2.InstanceArgs{
    // ... (other fields) ...
    UserData: userdata,
    // ... (other fields) ...
})
But when I create all the resources with Pulumi, the UserData script doesn't run at all. My assumption is that the EFS ID isn't resolved in time by the time the EC2 instance is created, but I thought that Pulumi would handle the dependency ordering automatically since the EC2 instance is now dependent on the EFS volume. I also added an explicit DependsOn() to see if that could be the issue, but it didn't help.
Is there something that I am doing wrong? Any help would be appreciated, thank you!
I've tried several variations of the above example. I looked at this example: Pulumi - EFS Id output to EC2 LaunchConfiguration UserData
But couldn't get that to work either.
I was able to figure it out; the issue ended up being a couple of things:
The formatting on the inlined script needed to not have tabs.
pulumi.Sprintf() ended up working better than using ApplyT().
The EFS volume wasn't ready to mount when it tried to do mount -a.
Put together, it now looks like this:
instanceArgs := &ec2.InstanceArgs{
    // ... arg fields ...
}
script := `#!/bin/bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
mkdir -p %s
echo "%s.efs.%s.amazonaws.com:/ %s nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0" >> /etc/fstab
EFS_STATUS="unknown"
WAIT_TIME=10
RETRY_CNT=15
while [[ $EFS_STATUS != "\"available\"" ]]; do
    echo "Waiting for EFS to start..."
    sleep $WAIT_TIME
    EFS_STATUS=$(aws efs describe-file-systems | jq '.FileSystems | map(select(.FileSystemId == "%s")) | map(.LifeCycleState) | .[0]')
done
while true; do
    mount -a -t nfs4
    if [ $? = 0 ]; then
        echo "Successfully mounted EFS to instance."
        break
    fi
    if [ $RETRY_CNT -lt 1 ]; then
        echo "EFS could not mount after all retries."
        exit 1
    fi
    echo "EFS could not mount, retrying..."
    ((RETRY_CNT--))
    sleep $WAIT_TIME
done`
userData := pulumi.Sprintf(script, mountDir, Efs.ID(), region, mountDir, Efs.ID())
instanceArgs.UserData = userData
ec2, err := ec2.NewInstance(ctx, fmt.Sprintf("%s_instance", name), instanceArgs)
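If a mount still fails, a quick way to check on the instance itself (assuming SSH access; the mount point below is a placeholder):
# Inspect the log the user-data script tees to, then confirm the NFS mount.
tail -n 30 /var/log/user-data.log
mount -t nfs4     # lists mounted nfs4 filesystems; the EFS export should appear
df -h /mnt/efs    # placeholder path; use your configured mount point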

ACR purge - How can I set a regular expression to skip specific images starting with v from purging

I am managing an Azure Container Registry. I have scheduled an ACR Purge task that deletes all image tags older than 7 days, excluding versioned tags starting with v, so that certain images are skipped during cleanup.
For example, if a repository has tags like:
123abc
v1.2
v1.3
xit5424
v1.4
34xyurc
v2.1
it should keep the tags starting with v and delete only the rest. For example, it should delete these tags:
123abc
xit5424
34xyurc
My script is something like this:
PURGE_CMD="acr purge --filter 'Repo1:.' --filter 'ubuntu:.' --ago 7d --untagged --keep 5"
az acr run \
    --cmd "$PURGE_CMD" \
    --registry Myregistry \
    /dev/null
Thanks Ashish
Please check if the below gives an idea for a workaround.
Here I am trying to make use of the delete command.
grep -v >> invert the sense of matching, to select non-matching lines.
grep -o >> show only the part of a matching line that matches PATTERN.
grep - Reference
1) Try to get the tags which do not start with "v":
$tagsArray = az acr repository show-tags --name myacr --repository myrepo --orderby time_desc --output tsv | grep -v "^v"
2) Check with the purge command below if possible (not tested):
PURGE_CMD="acr purge --filter 'Repo1:.' --filter 'ubuntu:.' --ago 7d --filter '$tagsArray' --untagged --keep 5"
az acr run --cmd "$PURGE_CMD" --registry Myregistry /dev/null
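Alternatively, since ACR purge filters are Go regular expressions (which have no negative lookahead), a character class can match tags whose first character is not v. An untested sketch, reusing the repository names from the question:
# Purge only tags that do NOT start with "v"; ^[^v].* anchors on the first character.
PURGE_CMD="acr purge --filter 'Repo1:^[^v].*' --filter 'ubuntu:^[^v].*' --ago 7d --untagged --keep 5"
az acr run --cmd "$PURGE_CMD" --registry Myregistry /dev/null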
(or)
check by using the delete command
Ex:
$repositoryList = (az acr repository list --name $registryName --output json | ConvertFrom-Json)
foreach ($repo in $repositoryList)
{
    $tagsArray = az acr repository show-tags --name $registryName --repository $repo --orderby time_desc --output tsv | grep -v "^v"
    foreach ($tag in $tagsArray)
    {
        az acr repository delete --name $registryName --image $repo":"$tag --yes
    }
}
Or we can get all the tags that should not be deleted with a query, and use an if/else on each tag.
foreach ($repo in $repositoryList)
{
    $AllTags = (az acr repository show-tags --name $registryName --repository $repo --orderby time_asc --output json | ConvertFrom-Json) | Select-Object -SkipLast $skipLastTags
    $doNotDeleteTags = $(az acr repository show-tags --name $registryName --repository $repo --query "[?contains(name, 'tagname')]" --output tsv)
    # or: $doNotDeleteTags = az acr repository show-tags --name $registryName --repository $repo --query "[?starts_with(name,'prefix')].name" --output tsv
    foreach ($tag in $AllTags)
    {
        if ($doNotDeleteTags -contains $tag)
        {
            Write-Output "This tag is not deleted: $tag"
        }
        else
        {
            az acr repository delete --name $registryName --image $repo":"$tag --yes
        }
    }
}
References:
Fetch the latest image from ACR that doesn't start with a prefix
Azure Container Registry delete
How to delete an image from Azure Container Registry
ACR delete only old images

How to Check if AWS Named Configure profile exists

How do I check if a named profile exists before I attempt to use it ?
The aws cli will throw an ugly error if I attempt to use a non-existent profile, so I'd like to do something like this:
$(awsConfigurationExists "${profile_name}") && aws iam list-users --profile "${profile_name}" || echo "can't do it!"
Method 1 - Check entries in the .aws/config file
function awsConfigurationExists() {
    local profile_name="${1}"
    local profile_name_check=$(grep "\[profile ${profile_name}]" "$HOME/.aws/config")
    if [ -z "${profile_name_check}" ]; then
        return 1
    else
        return 0
    fi
}
Method 2 - Check results of aws configure list, see aws-cli issue #819
function awsConfigurationExists() {
    local profile_name="${1}"
    local profile_status=$( (aws configure --profile ${profile_name} list) 2>&1)
    if [[ $profile_status = *'could not be found'* ]]; then
        return 1
    else
        return 0
    fi
}
usage
awsConfigurationExists "my-aws-profile" && echo "does exist" || echo "does not exist"
or
if awsConfigurationExists "my-aws-profile"; then
    echo "does exist"
else
    echo "does not exist"
fi
The functions signal existence via their exit status, so they can be called directly in a condition.
I was stuck with the same problem and the proposed answer did not work for me.
Here is my solution with aws-cli/2.8.5 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off:
export AWS_PROFILE=localstack
# -x: whole-line match, so a profile like "local" does not match "localstack"
if aws configure list-profiles | grep -qx "${AWS_PROFILE}"; then
    echo "AWS Profile [$AWS_PROFILE] already exists"
else
    echo "Creating AWS Profile [$AWS_PROFILE]"
    aws configure --profile $AWS_PROFILE set aws_access_key_id test
    aws configure --profile $AWS_PROFILE set aws_secret_access_key test
fi
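Combining the two approaches, a compact variant (a sketch assuming AWS CLI v2, which added configure list-profiles):
awsConfigurationExists() {
    # Exact-match the profile name against the CLI's own profile listing.
    aws configure list-profiles 2>/dev/null | grep -qx "$1"
}
awsConfigurationExists "my-aws-profile" && echo "does exist" || echo "does not exist"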

jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end in windows

I am trying to read credentials (such as AccessKeyId) from an assumed role and store them in a variable, but I am getting errors.
My code and the errors are:
jq -r '".Credentials.AccessKeyId"' mysession.json | awk '"{print "set","AWS_ACCESS_KEY_ID="$0}"' > variables
jq: error: syntax error, unexpected INVALID_CHARACTER, expecting $end (Windows cmd shell quoting issues?) at , line 1:
'".Credentials.AccessKeyId"'
jq: 1 compile error
awk: '"{print
awk: ^ invalid char ''' in expression
Please suggest how to achieve this in Windows CMD. I have installed jq and awk on Windows.
aws sts assume-role --role-arn role_arn --role-session-name session_name > mysession.json
$ak = jq -r ".Credentials.AccessKeyId" mysession.json
$sk = jq -r ".Credentials.SecretAccessKey" mysession.json
$tk = jq -r ".Credentials.SessionToken" mysession.json
Write-Host "Acccess Key ID:" $ak
Write-Host "Secret Acccess Key:" $sk
Write-Host "Session Token:" $tk
PowerShell
$source_profile = "default"
$region = "ap-southeast-2"
$role_arn = "arn:aws:iam::account_id:role/role-test"
$target_profile = "test"
$target_profile_path = "$HOME\.aws\credentials"
$session_name = "test"
# Assume Role
$Response = (Use-STSRole -Region $region -RoleArn $role_arn -RoleSessionName $session_name -ProfileName $source_profile).Credentials
# Export credentials as environment variables
$env:AWS_ACCESS_KEY_ID=$Response.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY=$Response.SecretAccessKey
$env:AWS_SESSION_TOKEN=$Response.SessionToken
# Create Profile with Credentials
Set-AWSCredential -StoreAs $target_profile -ProfileLocation $target_profile_path -AccessKey $Response.AccessKeyId -SecretKey $Response.SecretAccessKey -SessionToken $Response.SessionToken
# Print expiration time
Write-Host("Credentials will expire at: " + $Response.Expiration)
AWS Assume Role Script
How can I parse an assumed role's credentials in powershell and set them as a variable in a script?
On the jq site it mentions syntax adjustments for Windows:
"when using the Windows command shell (cmd.exe) it's best to use
double quotes around your jq program when given on the command-line
(instead of the -f program-file option), but then double-quotes in the
jq program need backslash escaping."
So, instead of
jq -r '".Credentials.AccessKeyId"' mysession.json
you need to drop the inner double quotes as well (with them, jq treats the filter as a string literal and echoes it back instead of extracting the field), and wrap the program itself in double quotes:
jq -r ".Credentials.AccessKeyId" mysession.json

Accumulo Overview console not reachable outside of VirtualBox VM

I am running Accumulo 1.5 in an Ubuntu 12.04 VirtualBox VM. I have set the accumulo-site.xml instance.zookeeper.host file to the VM's IP address, and I can connect to accumulo and run queries from a remote client machine. From the client machine, I can also use a browser to see the hadoop NameNode, browse the filesystem, etc. But I cannot connect to the Accumulo Overview page (port 50095) from anywhere else than directly from the Accumulo VM. There is no firewall between the VM and the client, and besides the Accumulo Overview page not being reachable, everything else seems to work fine.
Is there a config setting that I need to change to allow outside access to the Accumulo Overview console?
thanks
I was able to get the Accumulo monitor to bind to all network interfaces by manually applying this patch:
https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=commit;h=7655de68
In conf/accumulo-env.sh add:
# Should the monitor bind to all network interfaces -- default: false
export ACCUMULO_MONITOR_BIND_ALL="true"
In bin/config.sh add:
# ACCUMULO-1985 provide a way to use the scripts and still bind to all network interfaces
export ACCUMULO_MONITOR_BIND_ALL=${ACCUMULO_MONITOR_BIND_ALL:-"false"}
And modify bin/start-server.sh to match:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
SOURCE="$(readlink "$SOURCE")"
[[ $SOURCE != /* ]] && SOURCE="$bin/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Stop: Resolve Script Directory
. "$bin"/config.sh
HOST="$1"
host "$1" >/dev/null 2>/dev/null
if [ $? -ne 0 ]; then
LOGHOST="$1"
else
LOGHOST=$(host "$1" | head -1 | cut -d' ' -f1)
fi
ADDRESS="$1"
SERVICE="$2"
LONGNAME="$3"
if [ -z "$LONGNAME" ]; then
LONGNAME="$2"
fi
SLAVES=$( wc -l < ${ACCUMULO_HOME}/conf/slaves )
IFCONFIG=/sbin/ifconfig
if [ ! -x $IFCONFIG ]; then
IFCONFIG='/bin/netstat -ie'
fi
# ACCUMULO-1985 Allow monitor to bind on all interfaces
if [ ${SERVICE} == "monitor" -a ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]; then
ADDRESS="0.0.0.0"
fi
ip=$($IFCONFIG 2>/dev/null| grep inet[^6] | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
if [ $? != 0 ]
then
ip=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
fi
if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
PID=$(ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
else
PID=$($SSH $HOST ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
fi
if [ -z $PID ]; then
echo "Starting $LONGNAME on $HOST"
if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
#${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
MAX_FILES_OPEN=$(ulimit -n)
else
#$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
MAX_FILES_OPEN=$($SSH $HOST "/usr/bin/env bash -c 'ulimit -n'")
fi
if [ -n "$MAX_FILES_OPEN" ] && [ -n "$SLAVES" ] ; then
if [ "$SLAVES" -gt 10 ] && [ "$MAX_FILES_OPEN" -lt 65536 ]; then
echo "WARN : Max files open on $HOST is $MAX_FILES_OPEN, recommend 65536"
fi
fi
else
echo "$HOST : $LONGNAME already running (${PID})"
fi
Check that the monitor is bound to the correct interface, and not the "localhost" loopback interface. You may have to edit the monitors file in Accumulo's configuration directory with the IP/hostname of the correct interface.
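Once the monitor has been restarted, a quick sanity check (the VM IP below is a placeholder):
# On the VM: the monitor should now listen on 0.0.0.0:50095 rather than 127.0.0.1:50095.
netstat -ltn | grep 50095
# From the client machine: fetch the Overview page headers.
curl -I http://<vm-ip>:50095/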