gcloud builds submit fails with "fatal: not a git repository"

I have a Go Dockerfile from https://cloud.google.com/run/docs/quickstarts/build-and-deploy with a one line change so that I can tell what version I'm running:
RUN go build -ldflags "-X main.Version=$(git describe --always)" -mod=readonly -v -o server
When I build locally via docker build . and test, git describe works with no problem; however, if I submit the image to be built via gcloud builds submit, it fails with:
fatal: not a git repository (or any of the parent directories): .git
How do I build my Cloud Run docker image so it has this Git version reference?

When you run gcloud builds submit, not all of your project files are sent to Cloud Build. The command takes into account your .gitignore and .gcloudignore files; if you don't have a .gcloudignore, a default behavior is enforced in addition to the .gitignore directives. More detail here.
So, to fix this, create a .gcloudignore file containing only the files to exclude from your build. Leave .git/ out of it (don't add it to the file) and it will work.
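For illustration, a minimal .gcloudignore sketch (the excluded entries are placeholders; list whatever your build genuinely doesn't need, and note that .git/ is deliberately absent so it is uploaded):
# .gcloudignore -- exclude only what the build doesn't need.
# .git/ is intentionally NOT listed here, so it gets uploaded
# and `git describe` works inside the build.
.gcloudignore
/bin/
*.log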

Related

gcloud builds submit command is not working as per the documentation

I'm trying to build the image using the gcloud builds submit command, passing the source as a GCS bucket as per the documented syntax, but it's not working.
gcloud builds submit gs://bucket/object.zip --tag=gcr.io/my-project/image
Error : -bash: gs://bucket_name/build_files.zip: No such file or directory
This path exists in the GCP project where I'm executing the command, but it still says no such file or directory.
What am I missing here?
Cloud Build looks for a local file or a tar.gz file on Google Cloud Storage.
In the case of a zip file like yours, the solution is to download the file locally, UNZIP THE FILE, and then launch your Cloud Build.
Indeed, you need to unzip the file. Cloud Build won't do it for you; it can only ungzip and untar files. When you add the --tag parameter, Cloud Build looks for a Dockerfile in your set of files and runs a docker build with it.
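A sketch of that workflow, reusing the names from the question (it assumes the Dockerfile sits at the root of the unzipped archive):
# Download the archive locally, unzip it, then submit the directory;
# gcloud re-uploads it to Cloud Build as a tar.gz for you.
gsutil cp gs://bucket/object.zip .
unzip object.zip -d build_files
cd build_files
gcloud builds submit . --tag=gcr.io/my-project/image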
Please try single quotes (') or double quotes (") around gs://bucket/object.zip, and not backquotes (`), so the command looks like this:
gcloud builds submit 'gs://bucket/object.zip' --tag=gcr.io/my-project/image
It looks like there is an issue with the documentation; the changes have now been submitted to Google.

Why does google cloud build run differently for these two commands?

We run these two commands (the first one is async and the other runs synchronously)
# async BUT does something funky and doesn't run the Dockerfile image as-is
gcloud alpha builds triggers run staging-deploy --branch master
# sync BUT runs the image the way it's supposed to run!!!
gcloud builds submit --config cloudbuild.yaml
Both use our cloudbuild.yaml:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  args: ['${_SERVICE}', '${_DOWNLOAD_URL}']
  timeout: 1000s
substitutions:
  _SERVICE: none
  _DOWNLOAD_URL: none
timeout: 1100s
Our Dockerfile is very, very simple:
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:alpine
RUN mkdir -p ./monobuild
COPY . ./monobuild/
WORKDIR "/monobuild"
# NOTE: This file in the Google Cloud Build trigger MUST be in the root of the monorepo, BUT I don't know why
# NOTE: This command receives any arguments to docker,
# i.e. for "docker run {image} {args}", it receives the args
ENTRYPOINT ["./downloadAndExtract.sh"]
Sooo, when I run the SECOND command, it completely uses the Docker image, obeying the Dockerfile. When I run the first command, it ignores all my Dockerfile stuff and tries to run scripts in my git repo (which is very frustrating and not what I want).
We HAD this directory structure:
- gitroot
  - stagingDeploy
    - Dockerfile
    - deployStaging.sh  # part of Dockerfile
    - cloudbuild.yaml
  - prodDeploy
    - Dockerfile
    - prodDeploy.sh  # part of Dockerfile
    - cloudbuild.yaml
Of course, only the second command works with this directory structure. The first command CANNOT find deployStaging.sh until we ln -s stagingDeploy/deployStaging.sh from our git repo root, and since we have around five deploy directories, our git repo root is now fully polluted.
It is, to say the least, very frustrating, and we are not sure how to clean this up so that prodDeploy contains all the prod deploy scripts, stagingDeploy the staging ones, and the root is rid of all these files.
Of course, we now have a cluttered git repo directory structure with a whole slew of files in the root directory from various builds (sometimes conflicting by accident, as files occasionally end up with the same names).
EDIT: There is not really much to share on the configuration of the triggers, as each one just points to its yaml file.
thanks,
Dean

Building Linux C++ with VSTS

I'm trying to build a C++ app for Linux using VSTS. The build is defined by the Docker container template, and the Agent queue is Hosted Linux.
When running, I get
[error]Unhandled: No Docker file matching /opt/vsts/work/1/s/**/Dockerfile was found.
How do I create the Docker file requested by the error message?
The error means that no Dockerfile exists in the working folder. You can include the Dockerfile in source control and map it to the agent (see Get sources in the build definition).
There are Docker images shared by others, for example madduci/docker-ubuntu-cpp, where the CMake-generated files end up in the build folder. If you just need to build the C++ project, you can refer to these steps (CMakeLists.txt is in the root of the repository):
Add Docker task (Action: Run a Docker command; Command: run -v $(Build.SourcesDirectory):/project madduci/docker-ubuntu-cpp)
Publish Build Artifacts (Path to publish: $(Build.SourcesDirectory)/build)
If you need to build the Docker image, you need to create a Dockerfile.
When the Docker task is set to Build an image you get an option to specify a Docker file:
**/Dockerfile means that the task will search your repository for a file named Dockerfile and use that to build the image.
The error you get means that this file can't be found. You can find some examples of Dockerfiles in the Docker documentation. This blog describes how to build C++ applications that run in a Linux container.
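If you go that route, here is a minimal sketch of such a Dockerfile (the base image tag and the CMake-based layout are assumptions; adapt them to your toolchain):
# Build a CMake-based C++ project inside an Ubuntu container.
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /project
COPY . .
# Configure and compile; binaries end up in /project/build.
RUN cmake -S . -B build && cmake --build build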

Fetching Tags in Google Cloud Builder

In the newly created Google Container Builder, I am unable to fetch git tags during a build. During the build process, the default cloning does not seem to fetch git tags. I added a custom build step which calls git fetch --tags, but this results in the error:
Fetching origin
git: 'credential-gcloud.sh' is not a git command. See 'git --help'.
fatal: could not read Username for 'https://source.developers.google.com': No such device or address
# cloudbuild.yaml
#!/bin/bash
openssl aes-256-cbc -k "$ENC_TOKEN" -in gcr_env_vars.sh.enc -out gcr_env_vars.sh -
source gcr_env_vars.sh
env
git config --global url.https://${CI_USER_TOKEN}@github.com/.insteadOf git@github.com:
pushd vendor
git submodule update --init --recursive
popd
docker build -t gcr.io/project-compute/continuous-deploy/project-ui:$COMMIT_SHA -f /workspace/installer/docker/ui/Dockerfile .
docker build -t gcr.io/project-compute/continuous-deploy/project-auth:$COMMIT_SHA -f /workspace/installer/docker/auth/Dockerfile .
This worked for me, as the first build step:
- name: gcr.io/cloud-builders/git
  args: [fetch, --depth=100]
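If it's specifically the tags you need rather than deeper history, the same builder should also accept an explicit tag fetch (a sketch; it assumes the cloned remote is already authenticated, as it is for triggered builds):
- name: gcr.io/cloud-builders/git
  args: [fetch, --tags]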
To be clear, you want all tags to be available in the Git repo, not just to trigger on tag changes? In the latter case, the triggering tag should be available, IIUC.
I'll defer to someone on the Container Builder team for a more detailed explanation, but that error tells me that they used gcloud to clone the Google Cloud Source Repository (GCSR), which configures a Git credential helper named as such. They likely did this in another container before invoking yours, or on the host. Since gcloud and/or the gcloud credential helper aren't available in your container, you can't authenticate properly with GCSR.
You can learn a bit more about the credential helper here.

Elastic Beanstalk .ebextensions config file not getting deployed with git aws.push

I've linked a git branch to my Elastic Beanstalk environment and using git aws.push it deploys correctly.
I've now added a .ebextensions directory which contains a config script that should create a couple of directories. However, nothing appears to be happening.
I understand that the .ebextensions directory should be copied across to the EC2 instance as well, but I'm not seeing it.
I've checked eb-tools.log and it's not mentioned in the upload.
Is there something additional that's required?
The script contains:
commands:
  cache:
    command: mkdir /tmp/cache
  items:
    command: mkdir /tmp/cache/items
  chmod:
    command: chmod -R 644 /tmp
You can find the run logs for this at /var/log/cfn-init.log.
In there I could see that the mkdir commands had worked initially but subsequently failed because the directory already existed.
It turns out that .ebextensions runs commands in alphabetical order, so I had to rename the commands to:
01command1:
02command2:
etc.
From this point on it worked fine.
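Applied to the config above, the renaming looks something like this (a sketch; the -p flag is my addition so the mkdir commands are safe to re-run):
commands:
  01_cache:
    command: mkdir -p /tmp/cache
  02_items:
    command: mkdir -p /tmp/cache/items
  03_chmod:
    command: chmod -R 644 /tmp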
Something else that was confusing me is that the .ebextensions directory in my local git repo was not appearing in the target instance directory. This is because once it has been run, the directory is deleted.
Double check that your local script file has a .config extension. I was having a similar problem because my local file was called .ebextensions/01_stuff.yaml and it was fixed once I renamed it to .ebextensions/01_stuff.config.