docker invalid format ssh key - amazon-web-services

I'm trying to deploy a Django app using Docker and GitLab CI/CD. I have a running instance on AWS and have also created a Postgres database for it. The deployment script fails with the following error:
Login Succeeded
$ mkdir -p ~/.ssh
$ echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 28
$ ssh-add ~/.ssh/id_rsa
Error loading key "/root/.ssh/id_rsa": invalid format
Running after_script
00:02
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
How can I fix this issue?
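One frequent cause of this "invalid format" error (not confirmed from this log alone) is a private key written without its trailing newline, which OpenSSH rejects. A hedged sketch of the fix, using printf to guarantee the final newline and 600 permissions (id_rsa is a file, so 600 rather than 700 is what ssh expects):

```shell
# Sketch of a possible fix: printf '%s\n' guarantees the trailing newline
# that OpenSSH requires; tr strips any Windows carriage returns.
mkdir -p ~/.ssh
printf '%s\n' "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa   # 600, not 700: id_rsa is a file, not a directory
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
```

It is also worth verifying that the CI variable holds the full key, including the `-----BEGIN ... KEY-----` and `-----END ... KEY-----` lines.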

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook with a Linux VM, and on Google CloudShell):
The .aws folder must be in the filesystem root (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
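For the systemd workaround mentioned in the NOTE above, a minimal sketch (assuming systemd manages docker.service; the credential values and file name are placeholders) is a drop-in override that hands the daemon its credentials as environment variables:

```shell
# Sketch: pass AWS credentials to dockerd via a systemd drop-in override
# (assumed setup; replace the placeholder values with real credentials).
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Since the daemon inherits its environment from systemd rather than from any user's shell, this avoids the $HOME ambiguity entirely.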

GitLab returns Permission denied (publickey,password) for DigitalOcean server

I'm trying to implement CD for my dockerized Django application on a DigitalOcean droplet.
Here's my .gitlab-ci.yml:
image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/web:nginx || true
    - docker-compose -f docker-compose.prod.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
    - bash ./deploy.sh
  only:
    - master
I have copied my public key to the production server (DO droplet).
The build job succeeds, but the deploy stage fails with the following error:
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 26
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (abdul12391@gmail.com)
$ ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
$ chmod +x ./deploy.sh
$ scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Warning: Permanently added '143.198.103.99' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
root@143.198.103.99: Permission denied (publickey,password).
lost connection
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The official process is "How to Upload an SSH Public Key to an Existing Droplet", and it usually involves a regular username, not root.
While your pipeline is executed as root (as the Identity added: /root/.ssh/id_rsa message suggests), your scp should target a regular remote DO user account, not the remote DO root account: the same username whose ~/.ssh/authorized_keys you added the public key to.
So:
username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
# not
root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Try the following on the DigitalOcean server:
cat ~/.ssh/id_rsa.pub
and copy the public key into the authorized keys file:
nano ~/.ssh/authorized_keys
then fix the permissions:
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa
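The steps above can be sketched end-to-end; "deployer" is a hypothetical non-root username and the paths assume a stock OpenSSH setup:

```shell
# Sketch: install the pipeline's public key for a non-root deploy user
# ("deployer" is a hypothetical username) so key-based scp/ssh works.
ssh-copy-id deployer@$DO_PUBLIC_IP_ADDRESS

# Or, manually on the droplet while logged in as that user:
mkdir -p ~/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```

After this, the pipeline's scp target becomes deployer@$DO_PUBLIC_IP_ADDRESS rather than root.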

Why does codebuild.sh fail to run my local build?

I am trying to test my build locally without needing to upload my code every time. Therefore, I downloaded codebuild.sh onto my Ubuntu machine and placed it at ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
And with the following buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute my command like that:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas  32 Aug  8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug  8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
The .codebuild file also contains the parameters required for the docker login:
USER=^CENSORED^
TOKEN=^CENSORED^
Hence, the parameters required for the docker login are available.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the Docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENSORED^
DOCKER_TOKEN=^CENSORED^
It seems the CodeBuild agent interprets the 'TOKEN' environment variable itself.
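A sketch of the rename on both sides (the values are placeholders): the env file uses names the agent does not treat specially, and the buildspec references the new names.

```shell
# Sketch: renamed .codebuild env file (placeholder values).
cat > .codebuild <<'EOF'
DOCKER_USER=example-user
DOCKER_TOKEN=example-token
EOF
# The matching buildspec pre_build command then becomes:
#   docker login -u $DOCKER_USER -p $DOCKER_TOKEN
```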

Create a Dockerfile for a database with a restoring dump

I want to create a Dockerfile for the database. In this Dockerfile I want to add a dump and restore it, so that after I build the image, every container I run will already have the database restored.
This is my Dockerfile
FROM postgres:9.5.8
WORKDIR /home/
COPY my_dump.sql my_dump.sql
EXPOSE 5432 5432
RUN psql -f my_dump.sql postgres
Then I execute
$ docker build -t my_postgres_db .
I get
Step 5/5 : RUN psql -f my_dump.sql postgres
---> Running in 70f7b511cc7c
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is a nice little script for doing this, taken and altered from https://docs.docker.com/compose/startup-order/.
All you need to do is create the script:
#!/bin/bash
# wait-for-postgres.sh
set -e
host="$1"
shift
until psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done
>&2 echo "Postgres is up - you can execute commands now"
and then add this to the Dockerfile:
RUN /wait-for-postgres.sh host
Then run your commands after that
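An alternative worth noting, as a sketch that relies on the official postgres image's documented behavior: the original RUN psql fails because no server is running at build time, but SQL files copied into /docker-entrypoint-initdb.d/ are executed automatically when the container first starts, which sidesteps the build-time problem entirely.

```shell
# Sketch (assumes the official postgres image): restore the dump at first
# container start instead of at build time.
cat > Dockerfile <<'EOF'
FROM postgres:9.5.8
COPY my_dump.sql /docker-entrypoint-initdb.d/
EOF
docker build -t my_postgres_db .
docker run -d -e POSTGRES_PASSWORD=example my_postgres_db
```

Note that the init scripts only run when the data directory is empty, i.e. on the container's first start.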

Executing Ansible playbook in Jenkins to provision EC2

I am trying to run an Ansible playbook that provisions EC2 instances in AWS using Jenkins.
My Jenkins application is installed on an EC2 instance that has the required roles to provision instances, and my JENKINS_USER is ec2-user.
I am able to execute the playbook manually when logged in as ec2-user. However, when I try to execute the exact same Ansible command, Jenkins stalls indefinitely.
Building in workspace /var/lib/jenkins/workspace/Provision-AWS-Environment-dev
[Provision-AWS-Environment-dev] $ /bin/ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: awsprovision.yml *****************************************************
2 plays in /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml
PLAY [awsmaster] ***************************************************************
TASK [provision : Provison "3" ec2 instances in "ap-southeast-2"] **************
task path: /home/ec2-user/efx-devops-jenkins/aws/roles/provision/tasks/main.yml:5
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<10.39.144.187> ESTABLISH LOCAL CONNECTION FOR USER: ec2-user
<10.39.144.187> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" && echo ansible-tmp-1489656061.65-268771004227615="` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" ) && sleep 0'
<10.39.144.187> PUT /tmp/tmpvvKnfU TO /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py
<10.39.144.187> EXEC /bin/sh -c 'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py && sleep 0'
<10.39.144.187> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-uatxqcnoparsvzhjhxvlccmbjwaxjqaz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/" > /dev/null 2>&1'"'"' && sleep 0'
Can anyone identify why I am not able to execute the playbook using Jenkins?
The issue was that the Jenkins master node (where the Ansible playbook was being executed) was missing some environment variables (configured under Manage Jenkins > Manage Nodes > Configure Master). Below is the list of variables I added to the Jenkins master node.
Name: http_proxy
Value: http://proxy.com:123
Name: PATH
Value: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
Name: SUDO_COMMAND
Value: /bin/su ec2-user
Name: SUDO_USER
Value: svc_ansible_lab
Once I added the above variables, I was able to execute the Ansible Playbooks with no issues.
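When diagnosing a difference like this, a quick way to see what the Jenkins job is actually missing, as a sketch (file paths are illustrative), is to capture and diff the two environments:

```shell
# Sketch: capture the interactive environment, then the one Jenkins sees.
env | sort > /tmp/env-interactive.txt   # run this in an ec2-user login shell
# In a Jenkins "Execute shell" build step, run:
#   env | sort > /tmp/env-jenkins.txt
# then compare the two captures:
diff /tmp/env-interactive.txt /tmp/env-jenkins.txt || true
```

Variables present interactively but absent in the Jenkins capture (PATH entries, proxy settings, and so on) are the ones to add to the node configuration.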