How to pull and use existing image from Azure ACR through Dockerfile - amazon-web-services

I am performing an AWS to Azure services migration.
I am using a CentOS VM and am trying to pull an existing image from ACR and create a container from it using a Dockerfile. I have already created an image on Azure ACR; I need help pulling this image and creating a container on the CentOS VM.
Earlier I was doing this with images on AWS ECR (presumably by using AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY), as shown below. I am not sure how the same can be done with Azure ACR. How do I give the application containing the Dockerfile and docker-compose.yml below access to Azure? Do I need to use an access/secret key pair similar to AWS, and if so, how do I create that pair on Azure?
Below are the files I was using for container creation on CentOS with the AWS image.
Dockerfile:
FROM 12345.ecrImageUrl/latestImages/pk-image-123:latest
RUN yum update -y
docker-compose.yml:
version: '1.2'
services:
  initn:
    <<: *mainnode
    entrypoint: ""
    command: "npm i"
  bldn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:build"
  runn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:run"
    links:
      - docker-host
    ports:
      - "8011:8080"
    environment:
      - AWS_ACCESS_KEY=${AWS_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
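For reference, a minimal sketch of the ACR equivalent (not from the original post; the registry and service principal names below are hypothetical): the ACR analogue of the AWS key pair is an Azure service principal with pull rights on the registry, whose appId/password can be used with docker login before building.
# Create a service principal with pull rights on the registry (names are hypothetical)
az ad sp create-for-rbac --name acr-pull-sp \
  --scopes $(az acr show --name myregistry --query id --output tsv) \
  --role acrpull
# Log in to ACR with the appId/password returned by the command above
docker login myregistry.azurecr.io --username <appId> --password <password>
The Dockerfile's FROM line would then reference the ACR login server, e.g. FROM myregistry.azurecr.io/latestImages/pk-image-123:latest, and the appId/password pair can be passed through the compose environment the same way AWS_ACCESS_KEY/AWS_SECRET_ACCESS_KEY were.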

Related

Docker-compose in GCP Cloud build

I'm trying to build and deploy an app to GCP Cloud Run using GCP Cloud Build.
I can already build, push, and deploy the service using a Dockerfile, but I need to use the project's docker-compose setup. My Dockerfile runs perfectly in Docker Desktop, but I am not finding documentation for using docker-compose with GCP Artifact Registry.
My Dockerfile:
FROM python:3.10.5-slim-bullseye
#docker build -t cloud_app .
#docker image ls
#docker run -p 81:81 cloud_app
RUN mkdir wd
WORKDIR /wd
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./ ./
CMD python3 main.py
My docker-compose:
version: "3.3"
services:
web:
build:
context: ./destripa_frame/
dockerfile: ./Docker_files/Dockerfile
image: bank_anon_web_dev_build
restart: always
expose:
- 8881
- 80
- 2222
- 22
ports:
- "2222:2222"
- "80:80"
environment:
- TZ=America/Chicago
My cloud-build configuration:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '--build', '-f', './cloud_run/docker-compose.devapp.yml', '-d']
  - name: 'docker/compose:1.28.2'
    args: ['-f', './cloud_run/docker-compose.devapp.yml', 'up', 'docker-build']
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/job_app:$COMMIT_SHA']
The Cloud Build execution for the commit succeeds (see the Cloud Build execution screenshot).
How can I modify the Cloud Build configuration to push the docker-compose image to Artifact Registry?
EDIT: I found the correct method to push the image to Artifact Registry using Cloud Build and docker-compose.
I modified my cloud-build.yml configuration to build the image and then re-tag the docker-compose image as the Artifact Registry image.
Cloud Build automatically pushes any image listed under images to the repository (if the image name is not a registry URL, it is pushed to Docker.io).
My new Cloud-build.yml:
steps:
  - name: 'docker/compose:1.28.2'
    args: [
      '-p', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador',
      '-f', './cloud_run/docker-compose.devapp.yml',
      'up', '--build', 'web'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'tag',
      'bank_anon_web_dev_build',
      'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build'
    ]
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build']
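For completeness, a sketch of how this configuration would typically be submitted from the repository root (assuming the file above is saved as cloud-build.yml; adjust the path to your layout):
# Submit the compose-based build above to Cloud Build
gcloud builds submit --config cloud-build.yml .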
Hopefully this helps anyone who needs to understand GCP Cloud Build with docker-compose, because the guides on the web do not explain this last part.

Getting error when trying to execute cloud build to deploy application to cloud run

I deployed an application to Cloud Run in GCP, which executed successfully using a Dockerfile. Now I am setting up CI/CD using cloudbuild.yaml. I mirrored the repo to CSR, created a Cloud Build trigger, and placed cloudbuild.yaml in my repository. When executed from Cloud Build, it throws the following error:
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Deploying...
Creating Revision...failed
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
The Dockerfile is attached below:
#pulls python 3.7’s image from the docker hub
FROM python:alpine3.7
#copies the flask app into the container
COPY . /app
#sets the working directory
WORKDIR /app
#install each library written in requirements.txt
RUN pip install -r requirements.txt
#exposes port 8080
EXPOSE 8080
#Entrypoint and CMD together just execute the command
#python app.py which runs this file
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/projectid/servicename', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/projectid/servicename']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'phase-2'
      - '--image'
      - 'gcr.io/projectid/servicename'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/projectid/servicename'
The OP got the issue resolved, as seen in the comments:
Got the issue resolved. It was caused by a Python compatibility issue: I should use pip3 and python3 in the Dockerfile. I think the gcr.io image is compatible with python3.
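A minimal sketch of what that change would look like in the Dockerfile above (an assumption based on the comment, not the OP's exact file; only the pip/python lines differ):
FROM python:alpine3.7
COPY . /app
WORKDIR /app
# Use pip3/python3 explicitly, per the resolution in the comments
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]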

Docker Compose to Cloud Run

I created a docker-compose file containing a Django app and PostgreSQL, and it runs perfectly. Now I am confused: can I push this docker-compose setup to Google Container Registry and run it on Cloud Run?
version: "3.8"
services:
app:
build: .
volumes:
- .:/app
ports:
- 8000:8000
image: django-app
container_name: django_container
command: >
bash -c "python manage.py migrate
&& python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=nukacola
- POSTGRES_PASSWORD=as938899
container_name: postgres_db
Thank you for answering my question.
You cannot run a docker-compose configuration on Cloud Run. Cloud Run only supports individual containers.
To run your Django app on Cloud Run, you can do the following.
Build your Docker image for Django locally using the docker build command.
Push the image to GCR using the docker push command.
Create a new Cloud Run service and use the newly pushed Docker image.
Create a Cloud SQL Postgres instance and use its credentials as environment variables in your Cloud Run service (a rough sketch of these steps follows below).
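A minimal sketch of those four steps with the docker and gcloud CLIs (the project ID, service name, Cloud SQL instance, and region below are hypothetical placeholders):
# 1. Build the Django image locally
docker build -t gcr.io/my-project/django-app .
# 2. Push it to GCR
docker push gcr.io/my-project/django-app
# 3 & 4. Deploy to Cloud Run, attaching the Cloud SQL instance and passing DB credentials as env vars
gcloud run deploy django-app \
  --image gcr.io/my-project/django-app \
  --region us-central1 \
  --platform managed \
  --add-cloudsql-instances my-project:us-central1:django-postgres \
  --set-env-vars POSTGRES_DB=postgres,POSTGRES_USER=nukacola,POSTGRES_PASSWORD=as938899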
You can also host your own Compute Engine instance and run docker-compose on it but I would not recommend that.
You can also create a GKE cluster and run Django and Postgres in it, but that requires knowledge of Kubernetes (Deployments, StatefulSets, Services, etc.).

Host Dockerized Django project on AWS

I have a Django project which is working fine on my local machine. I want to host it on AWS, but I am confused about which service to use and what the best practice is. Do I use EC2, create an Ubuntu instance on it and install Docker, or use ECS?
What is the best practice for moving my Django project to AWS? Do I create a repository on Docker Hub?
Please help me understand the best workflow for this.
My docker-compose file looks like this:
version: '3'
services:
  db:
    image: mysql:latest
    restart: always
    environment:
      - MYSQL_DATABASE=tg_db
      - MYSQL_ROOT_PASSWORD=password
    volumes:
      - ./dbdata:/var/lib/mysql
  web:
    build: .
    command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Thanks!
UPDATE (Steps I took for deployment)
Dockerfile:
# Start with a python image
FROM python:3
# Some stuff that everyone has been copy-pasting
# since the dawn of time.
ENV PYTHONUNBUFFERED 1
# Install things
RUN apt-get update
# Make folders and locations for project
RUN mkdir /code
COPY . /code
WORKDIR /code/project/t_backend
# Install requirements
RUN pip install -U pip
RUN pip install -Ur requirements.txt
I used sudo docker-compose up -d and the project is running locally.
Now I have pushed my tg_2_web:latest image to ECR.
Where do the database and Apache containers come into action?
Do I have to create separate repositories for the MySQL database and the Apache container?
How will I connect all the containers using ECS?
Thanks!
The answer to this question can be really wide, but just to give you a heads-up on all the processes it is supposed to go through -
Packaging Images
You create a Docker image by writing a Dockerfile which copies your Python/Django source code and installs all the dependencies.
This can either be done locally, or you can use any CI/CD tool for the same.
Storing Images
This is the part where you push and store your Docker image. All the packaged images will be pushed in this step.
This could be any registry from which EC2 instances can fetch the Docker image, preferably ECR, but you can opt for Docker Hub as well. In the case of Docker Hub, you need to store your credentials in S3.
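For the ECR route, a minimal sketch of the login/tag/push commands (the account ID, region, and repository name are hypothetical placeholders; assumes AWS CLI v2):
# Authenticate Docker against the ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag the locally built image with the repository URL and push it
docker tag tg_2_web:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/tg_2_web:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/tg_2_web:latest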
Deploying images
In this part, you will be deploying the images to EC2 instances.
You can use various services depending on your requirement, like ECS, Elastic Beanstalk multi-container, or maybe Fargate (relatively new).
ECS - The most preferred way of deployment, but you need to manage clusters and resources yourself. Images have to be defined in a task definition, which is a JSON file (see the sketch after this list).
Beanstalk Multi-Container - Newer than plain ECS; it uses ECS in the background to deploy your Docker images to the clusters. You do not have to worry about resources, just feed a JSON file to your environment and the rest is taken care of by Beanstalk.
Fargate - Manage or deploy your containers without worrying about clusters/managers etc. Quite new; I never got a chance to look into it.
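A minimal sketch of the kind of ECS task definition mentioned for ECS above, mapping the web and db containers from the compose file (container names, memory values, and the image URL are hypothetical):
{
  "family": "tg-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tg_2_web:latest",
      "memory": 512,
      "essential": true,
      "portMappings": [{ "containerPort": 8000, "hostPort": 8000 }]
    },
    {
      "name": "db",
      "image": "mysql:latest",
      "memory": 512,
      "essential": true,
      "environment": [
        { "name": "MYSQL_DATABASE", "value": "tg_db" },
        { "name": "MYSQL_ROOT_PASSWORD", "value": "password" }
      ]
    }
  ]
}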
Ref -
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html
https://aws.amazon.com/fargate/

ElasticBeanstalk: Environment in an unhealthy state (Codeship Pro, Jets)

I am trying to deploy a Django app to Elastic Beanstalk using Codeship Pro (using Docker, of course). The step below fails when deploying.
codeship-steps.yml:
- name: deployment
  tag: aws-docker
  service: awsdeployment
  command: codeship_aws eb_deploy ./deploy my-project staging my-bucket
docker-compose.yml
services:
  app: ...
  db: ...
  awsdeployment:
    image: codeship/aws-deployment
    encrypted_env_file: aws-deployment.env.encrypted
    environment:
      - AWS_DEFAULT_REGION=eu-central-1
    volumes:
      - ./:/deploy
Error:
Info: I am trying to set up a CI/CD environment for the project (staging/production environments).
UPDATE: In my Elastic Beanstalk environment, I see that the Django extension is not found, although I have made sure it is installed in my Dockerfile (pip install -r requirements.txt).