Elastic Beanstalk: Environment in an unhealthy state (Codeship Pro, Jets) - Django

I am trying to deploy a Django app to Elastic Beanstalk using Codeship Pro (using Docker, of course). This step fails when deploying:
codeship-steps.yml:
- name: deployment
  tag: aws-docker
  service: awsdeployment
  command: codeship_aws eb_deploy ./deploy my-project staging my-bucket
docker-compose.yml:
services:
  app: ...
  db: ...
  awsdeployment:
    image: codeship/aws-deployment
    encrypted_env_file: aws-deployment.env.encrypted
    environment:
      - AWS_DEFAULT_REGION=eu-central-1
    volumes:
      - ./:/deploy
Error:
Info: I am trying to set up a CI/CD environment for the project (staging/production environments).
UPDATE: in my Elastic Beanstalk logs, I see that the Django extension is not found, although I have made sure the dependencies are installed in my Dockerfile (pip install -r requirements.txt).
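For comparison, a minimal Dockerfile sketch for a Django app on the Elastic Beanstalk Docker platform that installs requirements at build time (the base image, port, and project name `myproject` are illustrative, not taken from the question):

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Copy and install dependencies first so this layer is cached
# between builds and the packages are baked into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The container must listen on the port Elastic Beanstalk proxies to
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi"]
```

If a package still shows up as missing at runtime, it is worth checking that the environment is actually running this image rather than a stale build.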

Related

How to pull and use existing image from Azure ACR through Dockerfile

I am performing an AWS to Azure services migration.
I am using a CentOS VM and am trying to pull an existing image from ACR and create a container from it, using a Dockerfile. I have created an image on Azure ACR, and I need help pulling this image and creating a container on the CentOS VM.
Earlier, I was doing this with images on AWS ECR (not sure if by using AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY) as below. But I am not sure how this can be done with Azure ACR. How do I give the application containing the Dockerfile and docker-compose.yml below access to Azure? Do I need to use an access/secret key pair similar to AWS, and if so, how do I create this pair on Azure?
Below are the files I was using for container creation on CentOS with the AWS image.
Dockerfile:
FROM 12345.ecrImageUrl/latestImages/pk-image-123:latest
RUN yum update -y
docker-compose.yml:
version: '1.2'
services:
  initn:
    <<: *mainnode
    entrypoint: ""
    command: "npm i"
  bldn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:build"
  runn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:run"
    links:
      - docker-host
    ports:
      - "8011:8080"
    environment:
      - AWS_ACCESS_KEY=${AWS_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
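The ACR analogue of the ECR access/secret key pair is a service principal (or the registry's admin credentials). A hedged sketch of the flow, where `myregistry` and the image path are placeholders, not names from the question:

```shell
# Create a service principal with pull rights on the registry;
# the command prints an appId/password pair to use as credentials
az ad sp create-for-rbac \
  --name myregistry-pull \
  --role acrpull \
  --scopes $(az acr show --name myregistry --query id --output tsv)

# Log in to the registry with the printed appId and password
docker login myregistry.azurecr.io --username <appId> --password <password>

# Pull the image and start a container from it
docker pull myregistry.azurecr.io/latestimages/pk-image-123:latest
docker run -d myregistry.azurecr.io/latestimages/pk-image-123:latest
```

The appId/password pair can then be passed to docker-compose via environment variables, much like AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY were before.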

Docker-compose in GCP Cloud build

I'm trying to build and deploy an app to GCP Cloud Run using GCP Cloud Build.
I can already build, push, and deploy the service using the Dockerfile, but I need to use the docker-compose of the project. My setup runs in Docker Desktop perfectly, but I am not finding documentation for docker-compose with GCP Artifact Registry.
My dockerfile:
FROM python:3.10.5-slim-bullseye
#docker build -t cloud_app .
#docker image ls
#docker run -p 81:81 cloud_app
RUN mkdir wd
WORKDIR /wd
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./ ./
CMD python3 main.py
My docker-compose:
version: "3.3"
services:
  web:
    build:
      context: ./destripa_frame/
      dockerfile: ./Docker_files/Dockerfile
    image: bank_anon_web_dev_build
    restart: always
    expose:
      - 8881
      - 80
      - 2222
      - 22
    ports:
      - "2222:2222"
      - "80:80"
    environment:
      - TZ=America/Chicago
My cloud-build configuration:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '--build', '-f', './cloud_run/docker-compose.devapp.yml', '-d']
  - name: 'docker/compose:1.28.2'
    args: ['-f', './cloud_run/docker-compose.devapp.yml', 'up', 'docker-build']
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/job_app:$COMMIT_SHA']
The Cloud Build execution for the commit succeeds.
How can I modify the Cloud Build config to push the docker-compose image to Artifact Registry?
EDIT: Found the correct method to push the image to Artifact Registry using Cloud Build and docker-compose:
modify my cloud-build.yml configuration to build the image, then re-tag the docker-compose image with the Artifact Registry image name.
Cloud Build automatically pushes the image to the repository (if the image name is not a registry URL, it pushes it to docker.io).
My new cloud-build.yml:
steps:
  - name: 'docker/compose:1.28.2'
    args: [
      '-p', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador',
      '-f', './cloud_run/docker-compose.devapp.yml',
      'up', '--build', 'web'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'tag',
      'bank_anon_web_dev_build',
      'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build'
    ]
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build']
Hopefully this helps anyone who needs to understand GCP Cloud Build with docker-compose, because the guides on the web do not explain this last part.

Getting error when trying to execute cloud build to deploy application to cloud run

I deployed an application to Cloud Run in GCP, which executed successfully using a Dockerfile. Now I am setting up CI/CD using cloudbuild.yaml. I mirrored the repo to CSR, created a Cloud Build service, and placed cloudbuild.yaml in my repository. When executed from Cloud Build, it throws the following error:
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Deploying...
Creating Revision...failed
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
Docker file is attached below:
#pulls python 3.7’s image from the docker hub
FROM python:alpine3.7
#copies the flask app into the container
COPY . /app
#sets the working directory
WORKDIR /app
#install each library written in requirements.txt
RUN pip install -r requirements.txt
#exposes port 8080
EXPOSE 8080
#Entrypoint and CMD together just execute the command
#python app.py which runs this file
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/projectid/servicename', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/projectid/servicename']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'phase-2'
      - '--image'
      - 'gcr.io/projectid/servicename'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/projectid/servicename'
OP got the issue resolved as seen in the comments:
Got the issue resolved. It was because of a Python compatibility issue: I should use pip3 and python3 in the Dockerfile. I think the gcr.io image is compatible with Python 3.
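Independent of the interpreter fix, the "failed to listen on the port defined by the PORT environment variable" error usually means the app binds a hard-coded port. A minimal sketch (the helper name `resolve_port` is mine, not from the question) of reading Cloud Run's injected PORT variable with a local fallback:

```python
import os

def resolve_port(env=None):
    """Return the port the server should bind to.

    Cloud Run injects the required port via the PORT environment
    variable; fall back to 8080 for local runs.
    """
    env = os.environ if env is None else env
    return int(env.get("PORT", "8080"))

# e.g. with Flask: app.run(host="0.0.0.0", port=resolve_port())
```

Binding to 0.0.0.0 on this port (rather than 127.0.0.1 or a fixed 8080) is what lets the Cloud Run health check reach the container.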

Docker Compose Up works locally, fails to deploy to AWS

I am trying to deploy my Docker setup (two services defined in one compose file) to AWS. I can successfully run docker compose up locally, which builds and runs the containers on my local Docker.
However, I have set up a new context for ECS and switched to this new context. When I run docker compose up (which I believe should now deploy to AWS), I get the error docker.io/xxxx/concordejs_backend:latest: not found.
My docker-compose.yml file looks like this:
version: '3'
services:
  backend:
    image: xxxx/concordejs_backend
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    container_name: concorde-backend
    ports:
      - "5000:5000"
  frontend:
    image: xxxx/concordejs_frontend
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    container_name: concorde-frontend
    ports:
      - "3001:3000"
The image has been built on your local machine and is subsequently retrieved from there each time you launch docker-compose locally.
The AWS service is trying to retrieve the image from the public repository docker.io (Docker Hub), since it doesn't have the image you built locally.
One solution is to push your local image to Docker Hub so it is accessible to ECS, or you can use AWS's registry service, ECR: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_ECS.html
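The ECR route sketched above looks roughly like this (the account ID, region, and repository name are placeholders):

```shell
# Authenticate the Docker CLI against your private ECR registry
aws ecr get-login-password --region eu-central-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com

# Tag the locally built image with the ECR repository URI and push it
docker tag xxxx/concordejs_backend:latest \
  123456789012.dkr.ecr.eu-central-1.amazonaws.com/concordejs_backend:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/concordejs_backend:latest
```

The compose file's `image:` fields would then need to reference the full ECR URIs so the ECS context pulls from the registry instead of docker.io.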

Error deploying django website on docker through heroku - "Your app does not include a heroku.yml build manifest"

I am in the final steps of deploying a Django website. It uses Docker, and I'm finally deploying it through Heroku. I run into an error when running "git push heroku master": "Your app does not include a heroku.yml build manifest. To deploy your app, either create a heroku.yml: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml". This is odd, as I do in fact have a heroku.yml in the app.
heroku.yml
setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile
release:
  image: web
  command:
    - python manage.py collectstatic --noinput
run:
  web: gunicorn books.wsgi
The tutorial I am following uses "gunicorn bookstore_project.wsgi", but I used books.wsgi since that is the directory my website is in. Neither worked.
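Another common cause of this exact message is that the app's Heroku stack is not set to container, in which case the build ignores heroku.yml entirely. A sketch of checking and switching it (the app name is a placeholder):

```shell
# Show the stack the app currently builds on
heroku stack --app my-books-app

# heroku.yml builds are only honoured on the container stack
heroku stack:set container --app my-books-app
git push heroku master
```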
This happened to me when I pushed the wrong branch to Heroku. I was testing in the develop branch but pushing master, which did not have a heroku.yml.
Previous gitlab-ci:
stages:
  - staging

staging:
  stage: staging
  image: ruby:latest
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/$PROJECT.git
    - git push -f heroku master
  only:
    - develop
Current gitlab-ci:
stages:
  - staging

staging:
  stage: staging
  image: ruby:latest
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/$PROJECT.git
    - git push -f heroku develop:master
  only:
    - develop