I tried to deploy an API service from GitHub to EC2 via CodeDeploy. The code from GitHub is transferred to the appropriate location, but when the bash script is executed it fails.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/Gritly-backend-v2
hooks:
  AfterInstall:
    - location: build.sh
      timeout: 400
      runas: root
build.sh
#!/bin/bash
cd /home/ec2-user/Gritly-backend-v2

# Rebuild the image from the development Dockerfile
docker build . -t gritly_backend -f Dockerfile.dev

# Stop and remove the old container if it exists (|| true keeps the
# first deployment from failing when there is nothing to remove yet)
docker stop gritly_backend || true
docker rm gritly_backend || true

docker run -d -p 80:80 --name gritly_backend gritly_backend

# Clean up dangling images left behind by the rebuild
OUTPUT=$(docker images --filter "dangling=true" -q --no-trunc | wc -l)
if [ "${OUTPUT}" -ne 0 ]; then
    docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
fi
Error
LifecycleEvent - AfterInstall
Script - build.sh
[stderr]bash: /opt/codedeploy-agent/deployment-root/4ceeb37e-1874-4290-9e55-a92d655d1558/d-G03KT19HJ/deployment-archive/build.sh: /bin/bash^M: bad interpreter: No such file or directory
I also tried to run sh build.sh, which produced:
: No such file or directory/ec2-user/Gritly-backend-v2
: no such file or directoryunable to evaluate symlinks in Dockerfile path: lstat /home/ec2-user/Gritly-backend-v2/Dockerfile.dev
Error response from daemon: No such container: gritly_backend
Error: No such container: gritly_backend
docker: invalid reference format.
See 'docker run --help'.
build.sh: line 8: $'\r': command not found
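The /bin/bash^M: bad interpreter and $'\r': command not found messages both point to Windows (CRLF) line endings in build.sh: the carriage return at the end of the shebang line becomes part of the interpreter path. A minimal sketch of converting the file to Unix (LF) endings on the instance, assuming dos2unix or sed is available:

# Strip carriage returns from the script (either tool works)
dos2unix /home/ec2-user/Gritly-backend-v2/build.sh
# or, without dos2unix:
sed -i 's/\r$//' /home/ec2-user/Gritly-backend-v2/build.sh

To keep the problem from recurring, the script can be committed with LF endings, e.g. via a .gitattributes rule such as *.sh text eol=lf.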
Related
I'm using AWS CodeBuild to build a Docker image and push it to ECR. This is the CodeBuild configuration.
Here is the buildspec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - aws ecr get-login-password --region my-region | docker login --username AWS --password-stdin my-image-uri
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t pos-drf .
      - docker tag pos-drf:latest my-image-uri/pos-drf:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push my-image-uri/pos-drf:latest
It works up until the build command docker build -t pos-drf . — the error message I get is the following:
[Container] 2022/12/30 15:12:39 Running command docker build -t pos-drf .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /codebuild/output/src696881611/src/Dockerfile: no such file or directory
[Container] 2022/12/30 15:12:39 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t pos-drf .. Reason: exit status 1
I'm quite sure this is not a permission-related issue.
Please let me know if I need to share something else.
UPDATE:
This is the Dockerfile
# base image
FROM python:3.8
# setup environment variable
ENV DockerHOME=/home/app/webapp
# set work directory
RUN mkdir -p $DockerHOME
# where your code lives
WORKDIR $DockerHOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
# copy whole project to your docker home directory.
COPY . $DockerHOME
RUN apt-get update && apt-get dist-upgrade -y
# RUN apt-get install mysql-client mysql-server
# run this command to install all dependencies
RUN pip install -r requirements.txt
# port where the Django app runs
EXPOSE 8000
# start server
# bind to 0.0.0.0 so the app is reachable on the exposed port
CMD python manage.py runserver 0.0.0.0:8000
My mistake was that I had the Dockerfile locally but hadn't pushed it.
CodeBuild worked successfully after pushing the Dockerfile.
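For anyone else hitting the same "unable to evaluate symlinks in Dockerfile path" error, a quick way to confirm what actually reached the build context is to list the source directory before building. A small diagnostic sketch (these commands are an addition, not part of the original buildspec; $CODEBUILD_SRC_DIR is the standard CodeBuild source directory variable):

  pre_build:
    commands:
      - echo Inspecting the build context...
      - ls -la "$CODEBUILD_SRC_DIR"

If the Dockerfile doesn't show up in that listing, it never made it into the pushed source, which is exactly what happened here.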
Title has most of the question, but more context is below
Tried following the directions found here:
https://cloud.google.com/compute/docs/gpus/monitor-gpus
I modified the code a bit, but haven't been able to get it working. Here's the abbreviated cloud-config I've been running; it should show the relevant parts:
write_files:
  - path: /etc/scripts/gpumonitor.sh
    permissions: "0644"
    owner: root
    content: |
      #!/bin/bash
      echo "Starting script..."
      sudo mkdir -p /etc/google
      cd /etc/google
      sudo git clone https://github.com/GoogleCloudPlatform/compute-gpu-monitoring.git
      echo "Downloaded Script..."
      echo "Starting up monitoring service..."
      sudo systemctl daemon-reload
      sudo systemctl --no-reload --now enable /etc/google/compute-gpu-monitoring/linux/systemd/google_gpu_monitoring_agent.service
      echo "Finished Script..."
  - path: /etc/systemd/system/install-monitoring-gpu.service
    permissions: "0644"
    owner: root
    content: |
      [Unit]
      Description=Install GPU Monitoring
      Requires=install-gpu.service
      After=install-gpu.service

      [Service]
      User=root
      Type=oneshot
      RemainAfterExit=true
      ExecStart=/bin/bash /etc/scripts/gpumonitor.sh
      StandardOutput=journal+console
      StandardError=journal+console

runcmd:
  - systemctl start install-monitoring-gpu.service
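When a unit like this doesn't come up, the usual first step is to inspect it over SSH; a short diagnostic sketch, assuming the instance is reachable:

sudo cloud-init status --long                                  # did the cloud-config apply at all?
sudo systemctl status install-monitoring-gpu.service
sudo journalctl -u install-monitoring-gpu.service --no-pager   # output of gpumonitor.sh
sudo journalctl -u google_gpu_monitoring_agent.service --no-pager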
Edit:
It turned out to be best to build a Docker container with the monitoring script in it and to run that container from my config script, passing the GPU into the container as shown in the following link:
https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus
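Following that doc, the pattern on Container-Optimized OS is to mount the NVIDIA driver directories and pass the device nodes into the container. A rough sketch (the image name is a placeholder for the container holding the monitoring script):

docker run --rm \
  --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
  --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
  --device /dev/nvidia0:/dev/nvidia0 \
  --device /dev/nvidia-uvm:/dev/nvidia-uvm \
  --device /dev/nvidiactl:/dev/nvidiactl \
  my-gpu-monitoring-image   # placeholder image name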
I'm trying to implement CD for my dockerized Django application on a DigitalOcean droplet.
Here's my .gitlab-ci.yml:
image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/nginx:nginx || true
    - docker-compose -f docker-compose.prod.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
    - bash ./deploy.sh
  only:
    - master
I have copied my public key to the production server (DO droplet).
The build job is successful but the deploy stage failed with the following error:
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 26
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (abdul12391@gmail.com)
$ ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
$ chmod +x ./deploy.sh
$ scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Warning: Permanently added '143.198.103.99' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
root@143.198.103.99: Permission denied (publickey,password).
lost connection
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The official process is "How to Upload an SSH Public Key to an Existing Droplet", but it usually involves a regular username, not root.
While your pipeline might be executed as root (as the "Identity added: /root/.ssh/id_rsa" message suggests), your scp should use a regular DO remote user, not the remote DO root account: the same account username whose remote ~/.ssh/authorized_keys you added the public key to.
So:
username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
# not
root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Try the following on the digital ocean server:
cat ~/.ssh/id_rsa.pub
and copy the public key into authorized_keys:
nano ~/.ssh/authorized_keys
then change the permissions:
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa
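Once the key is in authorized_keys, it's worth verifying key-based login from the runner before re-running the pipeline; a quick check, using the same username and variable as above:

ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no username@$DO_PUBLIC_IP_ADDRESS 'echo key auth works'

If this still falls back to a password prompt, the private key in $PRIVATE_KEY doesn't match the public key on the droplet.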
I am trying to deploy a Windows container to AWS ECR using GitLab CI.
Here is the GitLab YAML file:
variables:
  AWS_REGISTRY: ****************.amazonaws.com/devops
  AWS_DEFAULT_REGION: *****
  APP_NAME: devops

windows:
  stage: build
  tags:
    - prod
  before_script:
    - ./docker_install.sh > /dev/null
  script:
    - docker build -t ${AWS_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG} .
    - docker push ${AWS_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG}
The Dockerfile is:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
CMD [ "cmd" ]
The error is:
Running with gitlab-runner 13.8.0 (*****)
on *********-aws-gitlab-runner-prod ******
Resolving secrets
00:00
Preparing the "docker" executor
00:02
Using Docker executor with image alpine:latest ...
Pulling docker image alpine:latest ...
Using docker image sha256:*************** for alpine:latest with digest alpine@sha256:********************* ...
Preparing environment
00:01
Running on runner-***************** via ******************.compute.internal...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/jostens/devops/ci-images/docker-base-windows-2019-std-core/.git/
Checking out 76498ebe as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
/bin/sh: eval: line 110: docker: not found
$ docker build -t ${AWS_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG} .
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
Please help/advise.
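The /bin/sh: eval: line 110: docker: not found line shows the job ran on a Linux Docker executor with the default alpine:latest image, which contains no docker binary. As a sketch of the usual Linux-side fix for that part only, jobs that run docker commands typically use a docker image plus the docker:dind service, as in the GitLab example earlier on this page:

windows:
  stage: build
  image: docker:latest          # image that actually contains the docker CLI
  services:
    - docker:dind
  script:
    - docker build -t ${AWS_REGISTRY}/${CI_PROJECT_PATH}:${CI_COMMIT_REF_SLUG} .

Note that building the mcr.microsoft.com/windows/servercore image itself would still require a Windows runner (e.g. a shell executor on a Windows host), since Windows containers cannot be built on a Linux daemon.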
I am trying to test my build locally without needing to upload my code all the time. Therefore, I downloaded codebuild_build.sh onto my Ubuntu machine and placed it at ~/.local/bin/codebuild_build.
Then I made it executable via:
chmod +x ~/.local/bin/codebuild_build
And with the following buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - docker login -u $USER -p $TOKEN
  build:
    commands:
      - docker build -f ./dockerfiles/7.0.8/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_708) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.0.8/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile -t myapp/php7.0.8:$(cat VERSION_PHP_72) -t myapp/php7.0.8:latest .
      - docker build -f ./dockerfiles/7.2/Dockerfile_develop -t myapp/php7.0.8-dev:$(cat VERSION_PHP_708) -t myapp/php7.0.8-dev:latest .
  post_build:
    commands:
      - docker push etable/php7.2
      - docker push etable/php7.2-dev
      - docker push etable/php7.0.8
      - docker push etable/php7.0.8-dev
I tried to execute my command like that:
codebuild_build -i amazon/aws-codebuild-local -a /tmp/artifacts/docker-php -e .codebuild -c ~/.aws
But I get the following output:
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=amazon/aws-codebuild-local" -e "ARTIFACTS=/tmp/artifacts/docker-php" -e "SOURCE=/home/pcmagas/Kwdikas/docker-php" -v "/home/pcmagas/Kwdikas/docker-php:/LocalBuild/envFile/" -e "ENV_VAR_FILE=.codebuild" -e "AWS_CONFIGURATION=/home/pcmagas/.aws" -e "INITIATOR=pcmagas" amazon/aws-codebuild-local:latest
Removing agent-resources_build_1 ... done
Removing agent-resources_agent_1 ... done
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
build_1 | 2020/01/16 14:43:58 Unable to initialize (*errors.errorString: AgentAuth was not specified)
agent-resources_build_1 exited with code 10
Stopping agent-resources_agent_1 ... done
Aborting on container exit...
My ~/.aws has the following files:
$ ls -l /home/pcmagas/.aws
total 8
-rw------- 1 pcmagas pcmagas  32 Aug  8 17:29 config
-rw------- 1 pcmagas pcmagas 116 Aug  8 17:34 credentials
Whilst the config has the following:
[default]
region = eu-central-1
And ~/.aws/credentials is in the following format:
[default]
aws_access_key_id = ^KEY_ID_CENSORED^
aws_secret_access_key = ^ACCESS_KEY_CENSORED^
Also, the .codebuild file contains the required docker login params:
USER=^CENSORED^
TOKEN=^CENSORED^
Hence, I can get the params required for docker login.
Do you have any idea why the build fails to run locally?
Your pre_build step has a command that logs you in to Docker:
docker login -u $USER -p $TOKEN
Make sure that you have included the docker login credentials in your local environment file.
Change the environment variable names in the '.codebuild' file, e.g.:
DOCKER_USER=^CENCORED^
DOCKER_TOKEN=^CENCORED^
It seems the CodeBuild agent is interpreting the 'TOKEN' environment variable itself.
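After renaming the variables, the login command in the buildspec has to reference the new names as well; a minimal sketch of the adjusted pre_build section:

  pre_build:
    commands:
      - docker login -u $DOCKER_USER -p $DOCKER_TOKEN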