Docker-compose in GCP Cloud build - google-cloud-platform

I'm trying to build and deploy an app to GCP Cloud Run using GCP Cloud Build.
I can already build, push, and deploy the service using a plain Dockerfile, but I need to use the project's docker-compose setup, which builds from the project's Dockerfile. It runs perfectly in Docker Desktop, but I can't find documentation for using docker-compose with GCP Artifact Registry.
My Dockerfile:
FROM python:3.10.5-slim-bullseye
#docker build -t cloud_app .
#docker image ls
#docker run -p 81:81 cloud_app
RUN mkdir wd
WORKDIR /wd
RUN apt-get update
RUN apt-get install ffmpeg libsm6 libxext6 -y
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY ./ ./
CMD python3 main.py
My docker-compose:
version: "3.3"
services:
web:
build:
context: ./destripa_frame/
dockerfile: ./Docker_files/Dockerfile
image: bank_anon_web_dev_build
restart: always
expose:
- 8881
- 80
- 2222
- 22
ports:
- "2222:2222"
- "80:80"
environment:
- TZ=America/Chicago
My cloud-build configuration:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '--build', '-f', './cloud_run/docker-compose.devapp.yml', '-d']
  - name: 'docker/compose:1.28.2'
    args: ['-f', './cloud_run/docker-compose.devapp.yml', 'up', 'docker-build']
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/job_app:$COMMIT_SHA']
The Cloud Build execution triggered by the commit succeeds.
How can I modify the Cloud Build configuration to push the docker-compose image to Artifact Registry?
EDIT: I found the correct method to push the image to Artifact Registry using Cloud Build and docker-compose.
I modified my cloud-build.yml configuration to build the image and then re-tag the docker-compose image with the Artifact Registry image name.
Cloud Build automatically pushes the image to the repository (if the image name is not a registry URL, it pushes it to Docker.io).
My new Cloud-build.yml:
steps:
  - name: 'docker/compose:1.28.2'
    args: [
      '-p', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador',
      '-f', './cloud_run/docker-compose.devapp.yml',
      'up', '--build', 'web'
    ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'tag',
      'bank_anon_web_dev_build',
      'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build'
    ]
images: ['us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build']
Hopefully this can help anyone who needs to understand GCP Cloud Build with docker-compose, because none of the guides on the web explain this last part.
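If the final goal is to run the image on Cloud Run, a deploy step can be appended after the tag step. Note that the images: field only pushes after all steps finish, so the image has to be pushed explicitly before gcloud run deploy can use it. This is only a sketch; the service name job-app and the region are assumptions, not part of the original configuration:

  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    # job-app is a placeholder service name; adjust the region and platform as needed
    args: ['run', 'deploy', 'job-app',
           '--image', 'us-central1-docker.pkg.dev/${PROJECT_ID}/app-destripador/bank_anon_web_dev_build',
           '--region', 'us-central1', '--platform', 'managed']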

Related

Getting error "Already have image (with digest): gcr.io/cloud-builders/docker" while trying Gitlab CICD

I am trying to use Gitlab CI/CD with Cloud Build and Cloud Run to deploy a Flask application.
I am getting an error
starting build "Edited"
FETCHSOURCE
Fetching storage object: gs://Edited
Copying gs://Edited
\ [1 files][ 2.1 GiB/ 2.1 GiB] 43.5 MiB/s
Operation completed over 1 objects/2.1 GiB.
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
--------------------------------------------------------------------------------
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
My .gitlab-ci.yml
image: aft/ubuntu-py-dvc
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - fts-cicd
  image: aft/ubuntu-py-gcloudsdk-dvc
  services:
    - docker:dind
  script:
    - echo $dvc > CI_PIPELINE_ID.json
    - echo $GCP_LOGIN > gcloud-service-key.json
    - dvc remote modify --local view-model-weights credentialpath CI_PIPELINE_ID.json
    - dvc pull
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/fts-im', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/fts-im']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'fts_im', '--image', 'gcr.io/$PROJECT_ID/fts_im', '--platform', 'managed', '--region', 'asia-northeast1', '--port', '8000', '--memory', '7G', '--cpu', '2', '--allow-unauthenticated']
images:
  - gcr.io/$PROJECT_ID/fts-im
Dockerfile
FROM python:3.9.16-slim
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ADD . /app
COPY .* app/
WORKDIR /app
ADD . .secrets
COPY CI_PIPELINE_ID.json .secrets/CI_PIPELINE_ID.json
RUN ls -la .
RUN ls -la data/
RUN pwd
RUN ls -la .secrets
RUN pip install -r requirements.txt
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "wsgi:app"]
Trying other solutions, I pruned the Docker images from the VM that is used as the runner in the CI/CD settings. I had experimented from a test repo and it worked completely; I am getting this error while replicating it on a new repo, with the name changed to fts_im.
I haven't deleted the previous build and the deployed app from Cloud Build and Cloud Run, because while using the previous repo I ran the build multiple times, all successful.
As per this document, the Dockerfile should be present in the same directory as the build config file.
Run the command below to check whether a Dockerfile is present in the current directory:
docker build -t docker-whale .
If the Dockerfile is present in the same directory as the build config file, then review this documentation to ensure the correct working directory has been set in the build config file.
Make sure GitLab CI/CD is set up correctly and configured to run on the current branch.
You also have to specify the full path of the Dockerfile in the cloudbuild.yaml file.
The file should be named Dockerfile and not *.Dockerfile; it should not have any extension. Check that the Dockerfile is named correctly.
Check that you have not misspelled the image name: I can see two different image names, gcr.io/$PROJECT_ID/fts-im and gcr.io/$PROJECT_ID/fts_im. I'm not sure whether these are two different images or an _ (underscore) was mixed up with a - (hyphen).
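As a side note, if the Dockerfile does live in a subdirectory rather than next to cloudbuild.yaml, the build step can point at it explicitly with docker's -f flag. A minimal sketch, assuming a hypothetical docker/ subdirectory:

steps:
  - name: 'gcr.io/cloud-builders/docker'
    # -f selects the Dockerfile; the trailing '.' keeps the repository root as the build context
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/fts-im', '-f', 'docker/Dockerfile', '.']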

CICD with BitBucket, Docker Image and Azure

EDITED
I am learning CICD and Docker. So far I have managed to successfully create a docker image using the code below:
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
My code is on BitBucket and I have a pipeline file as follows:
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io
            - docker build -t xxx.azurecr.io .
            - docker push xxx.azurecr.io
With xxx being the container registry on Azure. When the pipeline job runs, I am getting a "denied: requested access to the resource is denied" error on BitBucket.
What did I not do correctly?
Thanks.
The Edit
Changes in docker-compose.yml and bitbucket-pipelines.yml
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    image: xx.azurecr.io/myticket
    container_name: xx
    command: python manage.py runserver 0.0.0.0:80
    ports:
      - 80:80
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
            - docker build -t xx.azurecr.io/xx .
            - docker push xx.azurecr.io/xx
You didn't specify CMD or ENTRYPOINT in your Dockerfile.
There are stages when building a Dockerfile: first you pull a base image, then you package your requirements, and so on. Those stages are executed while the image is being built; what's missing is the last stage, the command that runs inside the container once it is up.
ENTRYPOINT and CMD are what tell Docker what to execute when the container starts, so for this to work you must add a CMD or an ENTRYPOINT at the bottom of your Dockerfile.
Change your files accordingly and try again.
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
# Execute commands inside the container
CMD python manage.py runserver 0.0.0.0:8000
Check you are able to build and run the image by going to its directory and running
docker build -t app .
docker run -d -p 80:80 app
docker ps
See if your container is running.
Next, update the image property in the docker-compose file.
Prefix the image name with the login server name of your Azure container registry, <acrName>.azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property is then myregistry.azurecr.io/azure-vote-front.
Change the ports mapping to 80:80, then save the file.
The updated file should look similar to the following:
docker-compose.yml
version: '3'
services:
  foo:
    build: .
    image: foo.azurecr.io/atlassian/default-image:2
    container_name: foo
    ports:
      - "80:80"
By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.
More in the documentation.
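For completeness, once the compose file tags the image with the full registry name (as in the edited docker-compose.yml above), the pipeline can also build and push through docker-compose itself rather than a separate docker build and docker push. This is only a sketch, reusing the xx.azurecr.io placeholder from the question:

script:
  - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
  # builds the web service and tags it as xx.azurecr.io/myticket (the image: value in the compose file)
  - docker-compose build web
  # pushes whatever tag the compose file assigned to the web service
  - docker-compose push web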

How to pull and use existing image from Azure ACR through Dockerfile

I am performing an AWS to Azure services migration.
I am using a CentOS VM and am trying to pull an existing image from ACR and create a container, using a Dockerfile to do so. I have created an image on Azure ACR. I need help pulling this image and creating the container on the CentOS VM.
Earlier, I was doing this with images on AWS ECR (not sure if by using AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY) as below, but I am not sure how this can be done with Azure ACR. How do I give Azure access to the application containing the Dockerfile and docker-compose.yml below? Do I need to use an access/secret key pair similar to AWS? If so, how do I create this pair on Azure?
Below are the files I was using for container creation on CentOS with the AWS image.
Dockerfile:
FROM 12345.ecrImageUrl/latestImages/pk-image-123:latest
RUN yum update -y
docker-compose.yml:
version: '1.2'
services:
  initn:
    <<: *mainnode
    entrypoint: ""
    command: "npm i"
  bldn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:build"
  runn:
    <<: *mainnode
    entrypoint: ""
    command: "npm run watch:run"
    links:
      - docker-host
    ports:
      - "8011:8080"
    environment:
      - AWS_ACCESS_KEY=${AWS_ACCESS_KEY}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
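On the Azure side, the usual analogue of the AWS access/secret key pair is a service principal (or an az acr login from the VM) rather than environment variables in the compose file. The commands below are only a sketch, not from the original thread; myregistry, appId, password and tenant are placeholders:

# authenticate the VM's Docker daemon to the registry before building
az login --service-principal -u <appId> -p <password> --tenant <tenant>
az acr login --name myregistry
# the Dockerfile's FROM line can then reference myregistry.azurecr.io/latestImages/pk-image-123:latest
docker-compose up --build -d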

Getting error when trying to execute cloud build to deploy application to cloud run

I deployed an application to Cloud Run in GCP, which executed successfully using the Dockerfile. Now I am setting up CI/CD using cloudbuild.yaml. I mirrored the repo to CSR, created a Cloud Build trigger, and placed cloudbuild.yaml in my repository. When executed from Cloud Build, it throws the following error:
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Deploying...
Creating Revision...failed
Deployment failedERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
The Dockerfile is attached below:
#pulls python 3.7’s image from the docker hub
FROM python:alpine3.7
#copies the flask app into the container
COPY . /app
#sets the working directory
WORKDIR /app
#install each library written in requirements.txt
RUN pip install -r requirements.txt
#exposes port 8080
EXPOSE 8080
#Entrypoint and CMD together just execute the command
#python app.py which runs this file
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/projectid/servicename', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/projectid/servicename']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'phase-2'
      - '--image'
      - 'gcr.io/projectid/servicename'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/projectid/servicename'
OP got the issue resolved, as seen in the comments:
Got the issue resolved. It was caused by a Python compatibility issue: I should use pip3 and python3 in the Dockerfile. I think the gcr.io image is compatible with python3.
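For illustration, the fix described in that comment would amount to switching the Dockerfile above to the python3/pip3 binaries, roughly like this (a sketch, not the OP's exact file):

FROM python:alpine3.7
COPY . /app
WORKDIR /app
# call pip3/python3 explicitly, per the resolution in the comments
RUN pip3 install -r requirements.txt
EXPOSE 8080
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]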

Bitbucket Pipelines to build Java app, Docker image and push it to AWS ECR?

I am setting up Bitbucket Pipelines for my Java app, and what I want to achieve is: whenever I merge something into master, Bitbucket fires the pipeline, which in the first step builds and tests my application, and in the second step builds a Docker image from it and pushes it to ECR. The problem is that the second step can't use the JAR file produced in the first step, because every step runs in a separate, fresh Docker container. Any ideas how to solve it?
My current files are:
1) Bitbucket-pipelines.yaml
pipelines:
  branches:
    master:
      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - apt-get update
            - apt-get install -y python-pip
            - pip install --no-cache-dir docker-compose
            - bash ./gradlew clean build test testIntegration
      - step:
          name: Build and push image
          services:
            - docker
          image: atlassian/pipelines-awscli
          caches:
            - gradle
          script:
            - echo $(aws ecr get-login --no-include-email --region us-west-2) > login.sh
            - sh login.sh
            - docker build -f Dockerfile -t my-application .
            - docker tag my-application:latest 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
            - docker push 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
2) Dockerfile:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8080
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
And the error I receive:
Step 4/5 : COPY build/libs/*.jar app.jar
COPY failed: no source files were specified
I have found the solution; it's quite simple: we should just use the "artifacts" feature, so adding these lines to the first step:
artifacts:
  - build/libs/*.jar
solves the problem.
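Put together, the first step would look roughly like this sketch (based on the pipeline above); the second step then finds the JAR under build/libs/ when it runs docker build:

      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - bash ./gradlew clean build test testIntegration
          # files matching these paths are kept and made available to later steps
          artifacts:
            - build/libs/*.jar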