I am trying to run an Ansible playbook that provisions EC2 instances in AWS using Jenkins.
My Jenkins application is installed on an EC2 instance that has the required IAM roles to provision instances, and my JENKINS_USER is ec2-user.
I am able to execute the playbook manually when logged in as ec2-user. However, when I try to execute the exact same Ansible command through Jenkins, the job stalls indefinitely.
Building in workspace /var/lib/jenkins/workspace/Provision-AWS-Environment-dev
[Provision-AWS-Environment-dev] $ /bin/ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: awsprovision.yml *****************************************************
2 plays in /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml
PLAY [awsmaster] ***************************************************************
TASK [provision : Provison "3" ec2 instances in "ap-southeast-2"] **************
task path: /home/ec2-user/efx-devops-jenkins/aws/roles/provision/tasks/main.yml:5
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<10.39.144.187> ESTABLISH LOCAL CONNECTION FOR USER: ec2-user
<10.39.144.187> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" && echo ansible-tmp-1489656061.65-268771004227615="` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" ) && sleep 0'
<10.39.144.187> PUT /tmp/tmpvvKnfU TO /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py
<10.39.144.187> EXEC /bin/sh -c 'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py && sleep 0'
<10.39.144.187> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-uatxqcnoparsvzhjhxvlccmbjwaxjqaz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/" > /dev/null 2>&1'"'"' && sleep 0'
Can anyone identify why I am not able to execute the playbook using Jenkins?
The issue was that the Jenkins Master node (where the Ansible playbook was being executed) was missing some environment variables (configured under Manage Jenkins > Manage Nodes > Configure Master). Below is the list of variables I added to the Jenkins Master node.
Name: http_proxy
Value: http://proxy.com:123
Name: PATH
Value: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
Name: SUDO_COMMAND
Value: /bin/su ec2-user
Name: SUDO_USER
Value: svc_ansible_lab
Once I added the above variables, I was able to execute the Ansible playbooks with no issues.
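For reference, a minimal sketch of the equivalent workaround at the job level: the same variables could be exported at the top of the job's shell build step instead of in the node configuration (values copied from the list above; whether the SUDO_* entries are also needed depends on the setup):
# Export the proxy and PATH the Ansible run expects (values from the node configuration above)
export http_proxy=http://proxy.com:123
export PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv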
The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be at the root of the filesystem (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
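Since the environ file is NUL-delimited, a small (assumed) helper makes it readable and shows whether HOME is set for the daemon at all:
# environ entries are NUL-separated; split them onto lines and look for a HOME entry
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n' | grep -i 'HOME'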
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
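As for the systemd workaround mentioned in the question, it is roughly the following sketch: a drop-in that passes AWS credentials to the daemon as environment variables (the drop-in file name and the placeholder values are illustrative, not part of the original post):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"
Environment="AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker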
I'm trying to implement CD for my dockerized Django application on a DigitalOcean droplet.
Here's my .gitlab-ci.yml:
image:
  name: docker/compose:1.29.1
  entrypoint: [""]
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
build:
  stage: build
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/web:nginx || true
    - docker-compose -f docker-compose.prod.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx
deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
    - bash ./deploy.sh
  only:
    - master
I have copied my public key to the production server (DO droplet).
The build job is successful but the deploy stage failed with the following error:
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 26
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (abdul12391@gmail.com)
$ ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
$ chmod +x ./deploy.sh
$ scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Warning: Permanently added '143.198.103.99' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
root@143.198.103.99: Permission denied (publickey,password).
lost connection
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The official process is "How to Upload an SSH Public Key to an Existing Droplet", and it usually involves a regular username, not root.
While your pipeline might be executed as root (as the Identity added: /root/.ssh/id_rsa message suggests), your scp should use a regular DO remote user, not the remote DO root account: the same username whose remote ~/.ssh/authorized_keys you added the public key to.
So:
username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
# not
root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
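A minimal sketch of the adjusted deploy line, assuming a hypothetical non-root user named deployer whose ~/.ssh/authorized_keys on the droplet contains the public key and who can write to the target directory:
# Copy the compose files as the non-root user instead of root ("deployer" is a placeholder name)
scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml deployer@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org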
Try the following on the DigitalOcean server:
cat ~/.ssh/id_rsa.pub
and copy the public key into the authorized keys file:
nano ~/.ssh/authorized_keys
then change the permissions:
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa
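As a quick sanity check before rerunning the pipeline, you could verify from any machine that holds the matching private key that the droplet now accepts it (the username placeholder and IP are taken from the answer and logs above):
# Should print "ok" without prompting for a password if the key is accepted
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no username@143.198.103.99 'echo ok'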
I am trying to copy files from my GitLab repository to the folder of my ec2 instance over ssh using server_ip and ec2 private_key.
I am not able to copy my files into the target folder.
My .gitlab-ci.yml:
stages:
  - deploy
deploy:
  stage: deploy
  image: alpine
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
    - scp -r . ubuntu@$DEPLOY_SERVER:/var/www/html   # How can I copy all my repository files to the target folder?
First check that the ssh call just before the scp actually works.
Then try:
scp -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html
That will give you an idea why the scp fails, while the ssh call, I presume, works.
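Put together, a sketch of what those two checks could look like in the job's script section (it assumes the same variables as above and carries the StrictHostKeyChecking=no option over to scp, since the original scp line omits it):
# Sanity-check the plain ssh connection first, then run scp with debug logging
ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'echo ssh ok'
scp -o StrictHostKeyChecking=no -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html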
The simple goal:
I would like to have two containers, both running on my local machine: one Jenkins container and one SSH server container. A Jenkins job could then connect to the SSH server container and execute an aws command to upload a file to S3.
My workspace directory structure:
a docker-compose.yml (details see below)
a directory named centos/,
Inside centos/ I have a Dockerfile for building the SSH server image.
The docker-compose.yml:
In my docker-compose.yml I declared the two containers (services):
One Jenkins container, named jenkins.
One SSH server container, named remote_host.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote_host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
The Dockerfile for the remote_host is like this (Notice the last RUN installs the AWS CLI):
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
    echo remote_user:1234 | chpasswd && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN ssh-keygen -A
RUN rm -rf /run/nologin
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
Current situation with the above setup:
I run docker-compose build and docker-compose up. Both jenkins container and the remote_host(SSH server) container are up and running successfully.
I can go inside jenkins container by :
$ docker exec -it jenkins bash
jenkins@7551f2fa441d:/$
I can successfully ssh to the remote_host container by:
jenkins@7551f2fa441d:/$ ssh -i /tmp/remote-key remote_user@remote_host
Warning: the ECDSA host key for 'remote_host' differs from the key for the IP address '172.19.0.2'
Offending key for IP in /var/jenkins_home/.ssh/known_hosts:1
Matching host key in /var/jenkins_home/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
[remote_user@8c203bbdcf72 ~]$
Inside the remote_host container, I have also configured my AWS access key and secret key under ~/.aws/credentials:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
I can successfully run aws command to upload a file from remote_host container to my AWS S3 bucket. Like:
[remote_user@8c203bbdcf72 ~]$ aws s3 cp myfile s3://mybucket123asx/myfile
What the issue is
Now, I would like my jenkins job to execute the aws command to upload file to S3. So I created a shell script inside my remote_host container, the script is like this:
#!/bin/bash
BUCKET_NAME=$1
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
In Jenkins, I have configured the SSH connection, and my Jenkins job configuration simply runs the script located in the remote_host container.
When I build the Jenkins job, I always get this error in the console: upload failed: ../../tmp/myfile to s3://mybucket123asx/myfile Unable to locate credentials.
Why does the same s3 command work when executed in the remote_host container but not when run from the Jenkins job?
I also tried explicitly exporting the AWS key id and secret key in the script (bear in mind that I have ~/.aws/credentials configured in remote_host, which works without explicitly exporting the AWS secret key):
#!/bin/bash
BUCKET_NAME=$1
export aws_access_key_id=AKAARXL1CFQNN4UV5TIO
export aws_secret_access_key=MY_SECRETE_KEY
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
OK, I solved my issue by changing the export statements to capital letters. The cause of the issue is that when Jenkins runs the script, it runs as remote_user on remote_host. Although I have ~/.aws/credentials set up on remote_host, that file lives under root's home directory and cannot be read by remote_user:
[root@8c203bbdcf72 /]# ls -l ~/.aws/
total 4
-rw-r--r-- 1 root root 112 Sep 25 19:14 credentials
That's why the Jenkins job got the Unable to locate credentials failure when uploading the file to S3: the credentials file can't be read by remote_user. So I still have to keep the lines that export the AWS key id and secret key. @Marcin's comment was helpful: the variable names need to be in capital letters, otherwise it does not work.
So, overall, what I did to fix the issue was to update my script with:
export AWS_ACCESS_KEY_ID=AKAARXL1CFQNN4UV5TIO
export AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY
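For reference, the whole upload script after the fix would presumably look like this (the key values are the placeholders used above):
#!/bin/bash
# Upload a file to the bucket passed as the first argument.
# The AWS CLI only picks these up when the variable names are upper case.
BUCKET_NAME=$1
export AWS_ACCESS_KEY_ID=AKAARXL1CFQNN4UV5TIO
export AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile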
I am trying to test my build locally without needing to upload my code all the time. Therefore, I downloaded codebuild.sh onto my Ubuntu machine and placed it at ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
And with the following buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute my command like that:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas 32 Aug 8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug 8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
Also, the .codebuild file contains the required docker-login params:
USER=^CENCORED^
TOKEN=^CENCORED^
Hence, I can get the params required for docker login.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENCORED^
DOCKER_TOKEN=^CENCORED^
It seems the CodeBuild agent is interpreting the 'TOKEN' environment variable itself.
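If the variables are renamed this way, the pre_build login command in buildspec.yml would presumably need to reference the new names as well, for example:
# pre_build command using the renamed variables from the .codebuild file
docker login -u $DOCKER_USER -p $DOCKER_TOKEN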