Docker image fails to build on Google Container Registry - google-cloud-platform

I have set up a trigger from Bitbucket to Google Container Registry.
I have a Dockerfile in the root, and am able to build the container fine from my local machine.
I get this error in Google Container Registry when the trigger runs (I did not modify the command that GCR wanted to run - it's the default). My project name has been replaced with "project":
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/project/r/bitbucket-project-gateway
* branch c65f16b3f52262a047c71e7140aecc4300265497 -> FETCH_HEAD
HEAD is now at c65f16b testing
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
invalid argument "gcr.io/project/bitbucket-project-gateway:" for t: invalid reference format
See 'docker build --help'.
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:e576df764ae28d3c072019a235b6c8966df11eecb472c59b0963d783bb8a713b" failed: exit status 125

It looks like the image's tag is missing (after the ":").
Do you have a cloudbuild.yaml config file? If so, do you use any substitution variables (e.g. $REVISION_ID)? Maybe there is a misspelling there?
Cheers,
Philmod

For others who come along who run into this same issue when pushing a Dockerfile with a Cloud Build YAML file - my mistakes were:
I had ${SHORT_SHA} in one place and not the other (it was on the Artifact Registry push but not on the build step). Credit to Philmod's answer (https://stackoverflow.com/a/44716934/18176030) for pointing out the tag wasn't right.
I was using "gcr.io" in the build step but not in the Artifact Registry push (which used "us-east1-docker.pkg.dev"), so the two image names did not match.
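To make the fix concrete, here is a minimal sketch of a consistent config. The registry host, repo, and image names (us-east1-docker.pkg.dev, my-repo, gateway) are placeholders, not from the original post; the point is that the -t tag in the build step and the pushed image name must match exactly, including ${SHORT_SHA}:

```yaml
steps:
# Build and tag with the exact name:tag that will be pushed below.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/gateway:${SHORT_SHA}', '.']
# Push the same name:tag. A mismatch (or an empty ${SHORT_SHA}) produces
# a trailing ":" and the "invalid reference format" error shown above.
images:
- 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/gateway:${SHORT_SHA}'
```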

Related

--source in cloudbuild.yaml

I have my current Cloud Build working: I connect my GitHub repo to trigger the Cloud Build when I push to the main branch, which then creates my Cloud Function, but I am confused about the --source flag. I have read the Google Cloud Functions docs. They state that the
minimal source repository URL is: https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}. If I were to input this into my cloudbuild.yaml file, does this mean that I am mimicking the complete path of my GitHub URL? I am currently just using "." which I believe means the entire root directory.
my cloudbuild.yaml file:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  id: "deploypokedex"
  args:
  - functions
  - deploy
  - my_pokedex_function
  - --source=.
  - --entry-point=get_pokemon
  - --trigger-topic=pokedex
  - --timeout=540s
  - --runtime=python39
  - --region=us-central1
Yes, you are mimicking the complete path of the GitHub URL. --source=. means that you are using the source code in your current working directory. You can check this link on how to configure the Cloud Build deployment.
Also based on the documentation you provided,
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
Let me know if you have questions or clarifications.
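For illustration, a hedged sketch of what the same deploy step could look like if you pointed --source at a Cloud Source Repository URL instead of the working directory. ${PROJECT} and ${REPO} are placeholders for your own project and repository names, not values from the original question:

```yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
  id: "deploypokedex"
  args:
  - functions
  - deploy
  - my_pokedex_function
  # Deploy from a Cloud Source Repository instead of the local checkout.
  # Replace ${PROJECT} and ${REPO} with your project and repo names.
  - --source=https://source.developers.google.com/projects/${PROJECT}/repos/${REPO}
  - --entry-point=get_pokemon
  - --trigger-topic=pokedex
  - --timeout=540s
  - --runtime=python39
  - --region=us-central1
```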

How to use custom Cloud Builders with images from Google Artifact Registry

How do I use a custom builder image in Cloud Build which is stored in a repository in Artifact Registry (instead of Container Registry)?
I have set up a pipeline in Cloud Build where some python code is executed using official python images. As I want to cache my python dependencies, I wanted to create a custom Cloud Builder as shown in the official documentation here.
GCP clearly indicates that we should switch to Artifact Registry, as it will replace Container Registry. Consequently, I have pushed my Docker image to Artifact Registry. I also gave my Cloud Build service account reader permissions on Artifact Registry.
Using the image in a Cloud Build step like this
steps:
- name: 'europe-west3-docker.pkg.dev/xxxx/yyyy:latest'
  id: install_dependencies
  entrypoint: pip
  args: ["install", "-r", "requirements.txt", "--user"]
throws the following error
Step #0 - "install_dependencies": Pulling image: europe-west3-docker.pkg.dev/xxxx/yyyy:latest
Step #0 - "install_dependencies": Error response from daemon: manifest for europe-west3-docker.pkg.dev/xxxx/yyyy:latest not found: manifest unknown: Requested entity was not found.
"xxxx" is the repository name and "yyyy" the name of my image. The tag "latest" exists.
I can pull the image locally and access the repository.
I could not find any documentation on how to integrate these images from Artifact Registry. There is only this official guide, where the image is built using the Docker image from Container Registry; however, that is not future-proof.
It looks like you need to add your Project ID to your image name.
You can use the "$PROJECT_ID" Cloud Build default substitution variable.
So your updated image name would look something like this:
steps:
- name: 'europe-west3-docker.pkg.dev/$PROJECT_ID/xxxx/yyyy:latest'
For more details about substituting variable values in Cloud Build see:
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values

AWS CodeBuild with Multi Docker Containers: unable to prepare context: unable to evaluate symlinks in Dockerfile path

I am trying to deploy my multi-container (frontend, backend, and Nginx) application to AWS Elastic Beanstalk. I am using CodeBuild to build the Docker images with a buildspec.yml file. The build fails when trying to build the first container (the frontend application). Kindly refer to the attached image for the error details.
It is basically saying could not find the Dockerfile in the client directory but the funny thing is that it exists and it works as expected locally when I build the containers with docker-compose.
Here is the project directory:
buildspec.yml file:
For the benefit of others: the reason for the error is that the Dockerfile is missing from that location. Make sure you have the Dockerfile inside the directory (./client in this case). It has to be spelled exactly "Dockerfile". If it's not there, check the source repo and ensure that the Dockerfile was committed.
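As a sanity check, here is a minimal buildspec.yml sketch (the ./client path and the image name "frontend" are assumptions based on the question) that verifies the file exists and passes the Dockerfile path to docker build explicitly, so a missing or misspelled file fails with a clear message:

```yaml
version: 0.2
phases:
  build:
    commands:
      # Fail early with an explicit message if the file was never committed.
      - test -f ./client/Dockerfile || { echo "client/Dockerfile missing"; exit 1; }
      # -f points at the exact Dockerfile; ./client is the build context.
      - docker build -f ./client/Dockerfile -t frontend ./client
```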

Google Container Registry build trigger on folder change

I can set up a build trigger on GCR to build my Docker image every time my Git repository gets updated. However, I have a single repository with multiple folders, and a Dockerfile in each folder.
Ex:
my_app
-- service-1
     Dockerfile-1
-- service-2
     Dockerfile-2
How do I only build Dockerfile-1 when the service-1 folder gets updated?
This is a variation on this GitHub feature request -- in your case, differential behavior based on the changed files (folders) rather than the branch.
We are considering this feature as part of the development of support for more advanced workflow control and will post back on that GitHub issue when it becomes available.
The work-around available to you today is to use a bash script that conditionally builds (or doesn't) based on an inspection of the files changed in the $COMMIT_SHA that triggered the build. Note that the git builder can be used to get the list of files changed via git diff-tree --no-commit-id --name-only -r $COMMIT_SHA.
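A minimal sketch of that work-around as a single build step (the folder name service-1 comes from the question; the builder image and target tag are assumptions, and it assumes both git and docker are on the step's PATH): inspect the files changed in $COMMIT_SHA and only build when something under service-1/ changed.

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # List files touched by the commit that triggered the build,
    # then build only if any of them live under service-1/.
    if git diff-tree --no-commit-id --name-only -r $COMMIT_SHA | grep -q '^service-1/'; then
      docker build -t gcr.io/$PROJECT_ID/service-1 -f service-1/Dockerfile-1 service-1
    else
      echo "service-1 unchanged; skipping build"
    fi
```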

Building Linux C++ with VSTS

I'm trying to build a C++ app for Linux using VSTS. The build is defined by the Docker container template, and the Agent queue is Hosted Linux.
When running, I get
[error]Unhandled: No Docker file matching /opt/vsts/work/1/s/**/Dockerfile was found.
How do I create the Docker file requested by the error message?
The error means that there is no Dockerfile in the working folder. You can include the Dockerfile in source control and map it to the agent (see "Get sources" in the build definition).
There are Docker images shared by others, for example madduci/docker-ubuntu-cpp, and the CMake-generated files will be in the build folder. If you just need to build the C++ project, you can follow these steps (CMakeLists.txt is in the root of the repository):
Add Docker task (Action: Run a Docker command; Command: run -v $(Build.SourcesDirectory):/project madduci/docker-ubuntu-cpp)
Publish Build Artifacts (Path to publish: $(Build.SourcesDirectory)/build)
If you need to build the Docker image, you need to create a Dockerfile.
When the Docker task is set to "Build an image", you get an option to specify a Docker file:
**/Dockerfile means that the task will search your repository for a file named Dockerfile and use that to build the image.
The error you get means that this file can't be found. You can find some examples of Dockerfiles in the Docker documentation. This blog describes how to build C++ applications that run in a Linux container.
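If you go the build-an-image route, here is a minimal Dockerfile sketch for a CMake-based C++ project. The base image and paths are assumptions for illustration, not from the original answer; committed at the repository root, it would satisfy the **/Dockerfile search:

```dockerfile
# Minimal image that builds a CMake-based C++ project.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /project
# Copy the repository (assumes CMakeLists.txt at the root) and build.
COPY . .
RUN cmake -S . -B build && cmake --build build
```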