I am trying to copy files from my GitLab repository to a folder on my EC2 instance over SSH, using the server IP and the EC2 private key.
I am not able to copy the files into the target folder.
My .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: alpine
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
    - scp -r . ubuntu@$DEPLOY_SERVER:/var/www/html
How can I copy all my repository files to the target folder?
First check that the ssh call just before the scp actually works.
Then try:
scp -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html
That will give you an idea of why the scp fails while the ssh call, I presume, works.
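If the ssh line succeeds but the scp does not, one difference worth ruling out is that the scp line lacks the StrictHostKeyChecking option the ssh line has. A minimal sketch of the script section with that option (and the debug logging) added to scp as well, using the variable names from the question; this is a suggestion to try, not a confirmed fix:

script:
  - ssh -o StrictHostKeyChecking=no ubuntu@$DEPLOY_SERVER 'rm -rf /var/www/html/*'
  - scp -o StrictHostKeyChecking=no -o LogLevel=DEBUG -r . ubuntu@$DEPLOY_SERVER:/var/www/html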
I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permissions to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
so it's copying all the files correctly. The thing is that when I SSH into the instance, I don't see any files:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected: all the yum installs, Python, Docker, etc. were successfully installed, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path and then look there, because as written we don't know which directory the script is going to use.
Use the following command form to copy to a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, run the pwd command and print its output with echo so that you can see the present working directory of the script.
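The question already notes that the content folder is created in the root directory: user-data runs as root, and relative paths in the script resolve against the script's working directory, not against /home/ec2-user, which is why nothing shows up in ec2-user's home. A minimal sketch of logging the working directory and copying to an explicit destination instead (the /home/ec2-user/content destination and the log file path are illustrative assumptions, not part of the original script):

# log where the script actually runs (illustrative log path)
echo "user-data pwd: $(pwd)" >> /var/log/user-data-debug.log
# copy to an explicit destination instead of a relative one
mkdir -p /home/ec2-user/content
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
aws s3 sync s3://jv-pocho/themes/ /home/ec2-user/content/themes
chown -R ec2-user:ec2-user /home/ec2-user/content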
I'm trying to implement CD for my dockerized Django application on the DigitalOcean droplet.
Here's my .gitlab-ci.yml:
image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/web:nginx || true
    - docker-compose -f docker-compose.prod.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
    - bash ./deploy.sh
  only:
    - master
I have copied my public key to the production server (DO droplet).
The build job succeeds, but the deploy stage fails with the following error:
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 26
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (abdul12391@gmail.com)
$ ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
$ chmod +x ./deploy.sh
$ scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Warning: Permanently added '143.198.103.99' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
root@143.198.103.99: Permission denied (publickey,password).
lost connection
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The official process is "How to Upload an SSH Public Key to an Existing Droplet", and it usually involves a regular username, not root.
While your pipeline might be executed as root (as the "Identity added: /root/.ssh/id_rsa" message suggests), your scp should use a regular DO remote user, not the remote DO root account: the same account username for which you have added the public key to the remote ~/.ssh/authorized_keys.
So:
username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
# not
root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Try the following on the DigitalOcean server:
cat ~/.ssh/id_rsa.pub
and copy the public key into authorized_keys:
nano ~/.ssh/authorized_keys
then change the permissions:
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa
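To confirm the key pair actually matches the target account before running the whole pipeline, a hedged check is to add a plain ssh call in the deploy script right before the scp (username stands for whichever account has the public key in its ~/.ssh/authorized_keys):

- ssh -o StrictHostKeyChecking=no username@$DO_PUBLIC_IP_ADDRESS 'echo connection ok'
- scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org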
So I use AWS Elastic Beanstalk to serve my PHP application. I want to mount EFS to have permanent storage for the images uploaded via my application.
I have created a .ebextensions folder containing one file called mount.config with the code below:
packages:
  yum:
    nfs-utils: []
    jq: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /mnt/efs
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/ /mnt/efs || true
      mkdir -p /mnt/efs/questions
      chown webapp:webapp /mnt/efs/questions

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"

container_commands:
  01-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/ondeck/images/
Everything is working fine until the last line, where it fails to create the symlink.
What I have tried so far:
Running the command directly on the machine while changing ondeck -> current. This works fine.
Removing the EC2 instance and adding a new one. Still failing.
In the logs I see
ln: failed to create symbolic link '/var/app/current/images/questions': No such file or directory
Any suggestions as to what could be the reason?
Ok, I fixed it by replacing ondeck with staging
And adding this line under container_commands:
  01-change-permission:
    command: chmod -R 777 /var/app/staging/images
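Putting both changes together, the container_commands section of mount.config would look roughly like this (a sketch based on the fix above, with the symlink key renamed so the ordering is explicit; container commands run in alphabetical order of their names, so the chmod runs before the symlink):

container_commands:
  01-change-permission:
    command: chmod -R 777 /var/app/staging/images
  02-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/staging/images/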
I am trying to test my build locally without having to upload my code every time. Therefore, I downloaded codebuild.sh onto my Ubuntu machine and placed it into ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
And with the following buildspec.yml:
version: 0.2

phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute the command like this:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas  32 Aug  8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug  8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
Also, the .codebuild file contains the required docker-login params:
USER=^CENSORED^
TOKEN=^CENSORED^
Hence, I can get the params required for docker login.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENSORED^
DOCKER_TOKEN=^CENSORED^
It seems the CodeBuild agent is interpreting the 'TOKEN' environment variable itself.
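If the variables are renamed, the buildspec has to reference the new names as well; a minimal sketch of the adjusted pre_build section under that assumption:

  pre_build:
    commands:
      - docker login -u $DOCKER_USER -p $DOCKER_TOKEN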
I am trying to run an Ansible playbook that provisions EC2 instances in AWS using Jenkins.
My Jenkins application is installed on an EC2 instance that has the required roles to provision instances, and my JENKINS_USER is ec2-user.
I am able to execute the playbook manually when logged in as ec2-user. However, when I try to execute the exact same Ansible command through Jenkins, the job stalls indefinitely.
Building in workspace /var/lib/jenkins/workspace/Provision-AWS-Environment-dev
[Provision-AWS-Environment-dev] $ /bin/ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: awsprovision.yml *****************************************************
2 plays in /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml
PLAY [awsmaster] ***************************************************************
TASK [provision : Provison "3" ec2 instances in "ap-southeast-2"] **************
task path: /home/ec2-user/efx-devops-jenkins/aws/roles/provision/tasks/main.yml:5
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<10.39.144.187> ESTABLISH LOCAL CONNECTION FOR USER: ec2-user
<10.39.144.187> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" && echo ansible-tmp-1489656061.65-268771004227615="` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" ) && sleep 0'
<10.39.144.187> PUT /tmp/tmpvvKnfU TO /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py
<10.39.144.187> EXEC /bin/sh -c 'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py && sleep 0'
<10.39.144.187> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-uatxqcnoparsvzhjhxvlccmbjwaxjqaz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/" > /dev/null 2>&1'"'"' && sleep 0'
Can anyone identify why I am not able to execute the playbook using Jenkins?
The issue was that the Jenkins master node (where the Ansible playbook was being executed) was missing some environment variables (configured under Manage Jenkins > Manage Nodes > Configure Master). See below the list of variables I added to the Jenkins master node.
Name: http_proxy
Value: http://proxy.com:123
Name: PATH
Value: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
Name: SUDO_COMMAND
Value: /bin/su ec2-user
Name: SUDO_USER
Value: svc_ansible_lab
Once I added the above variables, I was able to execute the Ansible playbooks with no issues.
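If editing the node configuration is not an option, a hedged alternative is to export the same variables in the Jenkins job's shell step before invoking the playbook (the values are the ones listed above for this particular setup):

export http_proxy=http://proxy.com:123
export PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml \
  -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv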