Google Cloud Build pipeline in a mono-repository architecture with a single cloudbuild.yaml

We have multiple Python deployments in a single GitHub repository, organized by folder. Each directory contains a separate scripts module.
service-1/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
    deployment-2/
        app/
        Dockerfile
        cloudbuild.yaml
service-2/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
service-3/
    deployment-1/
        app/
        Dockerfile
        cloudbuild.yaml
    deployment-2/
        app/
        Dockerfile
        cloudbuild.yaml
.gitignore
README.md
requirements.txt
where, for each service, deployment-1 works as one deployment and deployment-2 as another.
We are planning to manage a single trigger in the pipeline that starts a build only for the deployment where the latest commit changed something.
Can anyone please suggest how to keep a single YAML file and build this in a better way with Cloud Build, so that we don't have to manage multiple triggers?

Sadly, there is no magic here! The dispatch is done either by configuration (multiple triggers) or by code.
If you want to avoid multiple triggers, you need to code the dispatch:
- Detect the code that has changed in Git (it could be several services at the same time).
- Iterate over the updated folders and run a new Cloud Build for each of them.
It's a small piece of shell code. Not so difficult, but you have to maintain/test/debug it. Is it easier than multiple triggers? That's up to you, depending on your team's skills in the DevOps area.
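For illustration, here is a minimal sketch of such a dispatch in a root cloudbuild.yaml. It assumes each deployment folder carries its own cloudbuild.yaml as in the layout above, that the .git directory is available in the build workspace (depending on how your trigger fetches source, you may need to fetch history first), and that the builder images provide bash:

steps:
# List the deployment folders touched by the commit that triggered this build.
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    git diff-tree --no-commit-id --name-only -r "$COMMIT_SHA" \
      | awk -F/ 'NF > 2 {print $1 "/" $2}' | sort -u > /workspace/changed_dirs.txt
# Submit a child build for every changed folder that has its own config.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    while read -r dir; do
      config="${dir}/cloudbuild.yaml"
      if [[ -f "${config}" ]]; then
        echo "Submitting build for ${dir}"
        gcloud builds submit "${dir}" --config="${config}" &
      fi
    done < /workspace/changed_dirs.txt
    wait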

Related

How to import pipeline.gocd.yaml from github repo to build gocd pipeline

I'm quite new to GoCD. I have a pipeline.gocd.yaml in my Git repo in which I have defined my pipeline. Is there a way I can import this into my GoCD server (through the agent) to build the pipeline?
I can't seem to find a way. Any help will be much appreciated.
You can use the config repository plugin, which scans the repo for any *.gocd.yaml files and automatically creates the pipelines, groups, configuration, etc.:
https://github.com/tomzo/gocd-yaml-config-plugin
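For reference, a minimal pipeline.gocd.yaml in that plugin's format might look like the sketch below; the pipeline name, repository URL, and make target are placeholders, so check the plugin's README for the authoritative schema:

format_version: 10
pipelines:
  my-pipeline:
    group: default
    materials:
      repo:
        git: https://github.com/example/repo.git
        branch: master
    stages:
      - build:
          jobs:
            build:
              tasks:
                - exec:
                    command: make
                    arguments:
                      - build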

AWS CodeBuild with Multi Docker Containers: unable to prepare context: unable to evaluate symlinks in Dockerfile path

I am trying to deploy my multi-container (frontend, backend, and nginx) application to AWS Elastic Beanstalk. I am using CodeBuild to build the Docker images using a buildspec.yml file. The build fails when building the first container (the containerized frontend application); kindly refer to the attached image for the error details.
It basically says it could not find the Dockerfile in the client directory, but the funny thing is that it exists, and it works as expected locally when I build the containers with docker-compose.
The project directory and the buildspec.yml file are shown in the attached images.
For the benefit of others: the reason for the error is that the Dockerfile is missing from that location. Make sure you have the Dockerfile inside the directory (./client in this case). It has to be spelled exactly as Dockerfile. If it's not there, check the source repo and ensure that the Dockerfile is committed.
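As a hedged sketch, a buildspec.yml for such a multi-container build could pass each service's directory as the Docker build context, so that Docker looks for the Dockerfile at <context>/Dockerfile; the directory and image names below are illustrative:

version: 0.2
phases:
  build:
    commands:
      # Each build uses the service folder as its context, so the
      # Dockerfile must exist at e.g. ./client/Dockerfile.
      - docker build -t myapp-client ./client
      - docker build -t myapp-server ./server
      - docker build -t myapp-nginx ./nginx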

GitHub Cloud Build Integration with multiple cloudbuild.yamls in monorepo

GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
The Cloud Build integration output is shown in the attached image.
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
- traverse each subdirectory or use find to locate additional cloudbuild.yaml files;
- for each cloudbuild.yaml found, fork and submit a build by running gcloud builds submit;
- wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    for d in */; do
      config="${d}cloudbuild.yaml"
      if [[ ! -f "${config}" ]]; then
        continue
      fi
      echo "Building $d ... "
      (
        gcloud builds submit $d --config=${config}
      ) &
    done
    wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to detect not only the changes, but also any services that depend on those changes. Here is what we are doing:
- Require every service to have a Makefile with a build command.
- Put a cloudbuild.yaml at the root of the mono-repo.
- Run a custom build step with this little tool (old, but it still seems to work): https://github.com/jharlap/affected, which lists all packages that have changed and all packages that depend on those packages, and so on.
- Have the shell script run make build on any service that is affected by the change (a sketch follows below).
So far it is working well, but I totally understand if this doesn't fit your workflow.
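As a hedged sketch of that custom step, the shell portion could look like the following; the affected invocation and its output format are assumptions (check the tool's README), and here it is assumed to print one changed-or-dependent package path per line:

# Rebuild every service the 'affected' tool reports as impacted.
for pkg in $(affected); do
  if [[ -f "${pkg}/Makefile" ]]; then
    echo "Rebuilding affected service: ${pkg}"
    (cd "${pkg}" && make build)
  fi
done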
Another option many people use is Bazel. It is not the simplest tool, but it is especially great if you have many different languages or build processes across your mono-repo.
You can also create a build trigger per configuration for your repository. When setting up a trigger with a cloudbuild.yaml for the build configuration, you need to provide the path to that cloudbuild.yaml within the repository.
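As a hedged illustration (the repository owner, name, and branch are placeholders), such a trigger can be created from the CLI, using --included-files so it only fires when the matching folder changes:

gcloud beta builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-monorepo \
  --branch-pattern='^main$' \
  --build-config=services/api/cloudbuild.yaml \
  --included-files='services/api/**'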

Google Container Registry build trigger on folder change

I can set up a build trigger on GCR to build my Docker image every time my Git repository is updated. However, I have a single repository with multiple folders and a Dockerfile in each folder.
Ex:
my_app
  -- service-1
       Dockerfile-1
  -- service-2
       Dockerfile-2
How do I only build Dockerfile-1 when the service-1 folder gets updated?
This is a variation on this GitHub feature request: in your case, differential behavior based on the changed files (folders) rather than the branch.
We are considering this feature as part of the development of support for more advanced workflow control and will post back on that GitHub issue when it becomes available.
The workaround available to you today is to use a bash script that conditionally builds (or doesn't) based on an inspection of the files changed in the $COMMIT_SHA that triggered the build. Note that the git builder can be used to get the list of changed files via git diff-tree --no-commit-id --name-only -r $COMMIT_SHA.
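As a hedged sketch of that workaround (service names and image tags are illustrative, and the .git directory must be present in the workspace), a conditional build could look like:

steps:
# Record whether anything under service-1/ changed in the triggering commit.
- name: 'gcr.io/cloud-builders/git'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if git diff-tree --no-commit-id --name-only -r "$COMMIT_SHA" | grep -q '^service-1/'; then
      touch /workspace/build_service_1
    fi
# Build the image only if the marker file was created above.
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [[ -f /workspace/build_service_1 ]]; then
      docker build -t gcr.io/$PROJECT_ID/service-1 -f service-1/Dockerfile-1 service-1
    fi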

AWS Elastic Beanstalk - .ebextensions

My app currently uses a folder called "Documents" located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files, etc. I'm trying to move my app from Azure to Beanstalk, and I don't know how to give permissions to this folder and its sub-folders. I think it's supposed to be done using .ebextensions, but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions on files/folders within your deployment directory.
If you look at the event hooks for an Elastic Beanstalk deployment (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands), you'll find that commands run before the EC2 app and web server are set up, and container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is most useful
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory, modifies its permissions, and adds a file with some content. Note that command keys run in alphabetical order, hence the numeric prefixes:

====== .ebextensions/custom_directory.config ======

commands:
  01_create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  02_modify_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W

files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
.ebextensions config files go into a directory called .ebextensions at the root of the application source code. For more information on how to use ebextensions, please go through the AWS documentation.
Place a file 01_fix_permissions.config inside the .ebextensions folder:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions commands and container_commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS GitHub, it does not seem to be explicitly prohibited.
I can see how Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use commands/container_commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks on Linux or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ on Windows) to run at your desired stage of deployment.
It's worth noting that differences exist between platforms and versions, such as Amazon Linux 1 and Amazon Linux 2.
I hope this helps someone. It took me a day to gather this info plus what's on this page and pick what I liked best.
Edit 11/4: I want to note that I saw some inconsistencies with the files .ebextensions directive when trying to place scripts directly into the platform hook directories during repeated deployments. Specifically, the files directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a container command to copy, with overwriting, from another directory into the desired hook directory to overcome this issue.
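As a hedged sketch of that copy-into-hooks approach on a Linux platform (the script name, hook stage, and chown target are all illustrative, and the Windows hook directory differs as noted above):

files:
  "/tmp/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Illustrative: give the app user ownership of the Documents folder.
      chown -R webapp:webapp /var/app/current/Documents

container_commands:
  01_install_hook:
    # cp -f overwrites any existing copy, avoiding the .bak backup issue.
    command: cp -f /tmp/49_change_permissions.sh /opt/elasticbeanstalk/hooks/appdeploy/post/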