I'm starting to lose my sanity over a YAML build. This is the very first YAML build I've ever tried to configure, so it's likely I'm making some basic mistake.
This is my yaml build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)
trigger:
  branches:
    include:
    - master
    - features/120414-delete-failed-container-instances
schedules:
- cron: '20,50 * * * *'
  displayName: At minutes 20 and 50
  branches:
    include:
    - features/120414-delete-failed-container-instances
  always: 'true'
pool:
  name: Standard-Windows
variables:
- name: ResourceGroup
  value: myResourceGroup
stages:
- stage: Delete
  displayName: Delete containers
  jobs:
  - job: Job1
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
        scriptType: 'pscore'
        scriptLocation: 'scriptPath'
        scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
        arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build it stays like this forever:
I've tried running the same task with an inline script, without success. The same thing happens if I try to run a PowerShell task instead of an Azure CLI task.
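For reference, the inline variant I tried looked roughly like this (the az command here is only a placeholder, not my real script):

- task: AzureCLI@2
  inputs:
    azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # placeholder command just to confirm the task runs at all
      az group show --name $(ResourceGroup)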
What am I missing here?
TL;DR: the issue was caused by (a lack of) permissions.
More details
After enabling the following feature I could see more details about the problem:
The following warning was shown after enabling the feature:
Clicking on View shows the Azure subscription used in the Azure CLI task. After clicking on Permit, everything works as expected.
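If you want to avoid the manual Permit step for future runs, the service connection can also be authorized for all pipelines up front. A rough sketch using the azure-devops CLI extension (treat the exact flag names as an assumption and check az devops service-endpoint update -h first):

az devops service-endpoint update `
  --id <service-connection-guid> `
  --enable-for-all true `
  --organization https://dev.azure.com/<your-organization> `
  --project <your-project>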
Your YAML file should be correct. I have tested your YAML on my side and it works fine.
The only thing I modified was the agent pool, which I changed to my private agent:
pool:
  name: MyPrivateAgent
Besides, according to the state in your image, it seems the private agent in the agent pool you specified for the build definition is not running:
Get the agent running, then the build will start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.
Related
I am trying to deploy a GCP function. My code uses a package that's in a private repository. I create a local copy of that package in the folder, and then use gcloud functions deploy from the folder to deploy the function.
This works well. I can see a function that is deployed, with the localpackage.
The problem is with using GitHub Actions to deploy the function.
The function is part of a repository that has multiple functions, so when I deploy I run GitHub Actions from outside the function's folder, and while the function gets deployed, the dependencies do not get picked up.
For example, this is my folder structure:
my_repo
- .github/
  - workflows/
    - function_deploy.yaml
- function_1_folder
  - main.py
  - requirements.txt
  - .gcloudignore
  - localpackages --> These are the packages I need uploaded to GCP
My function_deploy.yaml looks like:
name: Build and Deploy to GCP functions
on:
  push:
    paths:
      - 'function_1_folder/**.py'
env:
  PROJECT_ID: <project_id>
jobs:
  job_id:
    runs-on: ubuntu-latest
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - uses: 'actions/checkout@v3'
      - id: 'auth'
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: <credentials>
      - id: 'deploy'
        uses: 'google-github-actions/deploy-cloud-functions@v0'
        with:
          name: <function_name>
          runtime: 'python38'
          region: <region>
          event_trigger_resource: <trigger_resource>
          entry_point: 'main'
          event_trigger_type: <pubsub>
          memory_mb: <size>
          source_dir: function_1_folder/
The Google function does get deployed, but it fails with:
google-github-actions/deploy-cloud-functions failed with: operation failed: Function failed on loading user code. This is likely due to a bug in the user code. Error message: please examine your function logs to see the error cause...
When I look at the google function, I see that the localpackages folder hasn't been uploaded to GCP.
When I deploy from my local machine however, it does upload the localpackages.
Any suggestions on what I may be doing incorrectly, and how to upload the localpackages?
I looked at this question:
Github action deploy-cloud-functions not building in dependencies?
But I didn't quite understand what was done there.
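One workaround I'm considering (an assumption on my part, not something from that thread): add google-github-actions/setup-gcloud after the auth step and run the same gcloud functions deploy command that already works locally, from inside the function folder, so .gcloudignore and the localpackages folder are handled exactly as in my local deploys. Roughly:

      - id: 'setup-gcloud'
        uses: 'google-github-actions/setup-gcloud@v0'
      - id: 'deploy'
        run: |
          cd function_1_folder
          gcloud functions deploy <function_name> \
            --project=${{ env.PROJECT_ID }} \
            --runtime=python38 \
            --region=<region> \
            --entry-point=main \
            --trigger-topic=<pubsub_topic> \
            --source=.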
I'm having an issue with my CI/CD pipeline:
it successfully deploys to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/tpdropd-int-front' ]
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated' ]
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by using:
options:
  logging: CLOUD_LOGGING_ONLY
in cloudbuild.yaml.
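Applied to the cloudbuild_int.yaml from the question, it would look roughly like this (the steps stay as they are; only the options block is added):

steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/tpdropd-int-front' ]
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated' ]

# Build logs go to Cloud Logging only, so gcloud builds submit
# no longer needs to stream from the default logs bucket.
options:
  logging: CLOUD_LOGGING_ONLY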
Alternatively, there is this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps and so on) via service accounts.
(Moreover, in this case there is no need to turn off logging via --suppress-logs.)
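With the .gitlab-ci.yml from the question, only the submit line needs to change, for example:

    - gcloud builds submit . --config=cloudbuild_int.yaml --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE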
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this as a new answer.
Initially I was facing the same issue where, despite the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by @Kevin worked for me too. Just add the parameter as mentioned above in the cloudbuild.yaml file.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY
Action: I tried to configure and run a simple C++ Azure pipeline on a self-hosted Windows computer. I'm pretty new to all this. I ran the script below.
Expected: to see the build task, display task and clean task, and to see "hello world".
Result: error; the script can't find my build agent.
##[warning]An image label with the label Weltgeist does not exist.
##[error]The remote provider was unable to process the request.
Pool: Azure Pipelines
Image: Weltgeist
Started: Today at 10:16 p.m.
Duration: 14m 23s
Info & tests:
My self-hosted agent is named Weltgeist and it's part of the default agent pool. It's a Windows computer, with g++, MinGW and other related tools on it.
I tried my build task locally with no problem.
I tried my build task using the Azure 'ubuntu-latest' agent with no problem.
I created the self-hosted agent following these specifications: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
I'm the owner of the Azure repo.
How do I correctly configure the pool YAML parameter for a self-hosted agent?
Do I have additional steps to do server-side, or in the Azure repo configuration?
Any other idea of what went wrong?
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'Weltgeist' # Testing with self-hosted agent

steps:
- script: |
    mkdir ./build
    g++ -g ./src/hello-world.cpp -o ./build/hello-world.exe
  displayName: 'Run a build script'
- script: |
    ./build/hello-world.exe
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
(UPDATE)
Solution:
Thanks, after updating it as stated in an answer below and reading a bit more about the pool YAML definition, it works. Note: I modified a couple of other lines to make it work in my environment.
trigger:
- master

pool:
  name: Default
  demands:
    - agent.name -equals Weltgeist

steps:
- script: |
    mkdir build
    g++ -o ./build/hello-world.exe ./src/hello-world.cpp
  displayName: 'Run a build script'
- script: |
    cd build
    hello-world.exe
    cd ..
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
I was confused by the Default because there was already a pipeline named Default in the organization.
Expanding on the answers provided here.
pool:
  name: NameOfYourPool
  demands:
    - agent.name -equals NameOfYourAgent
Here is the screen where you'll find that information in DevOps.
Since you are using the self-hosted agent, you could use the following format:
pool:
  name: Default
  demands:
    - agent.name -equals Weltgeist
Then it should work as expected.
You could refer to the docs about the pool definition in YAML.
I faced the same issue, and replacing vmImage under pool with name worked for me. Please find the configuration below:
trigger:
- master

pool:
  name: 'Weltgeist' # Testing with self-hosted agent
Also be aware that if your agent only appears in the "Azure Pipelines" pool and not in any of the other pools, then the agent may have been configured as an "Environment" resource, and it can't be used as part of the build step.
I spent ages trying to use a self-hosted VM for a build step, thinking that the correct way to reference the VM was by creating a VM resource from the Pipelines > Environments area:
The agent would be properly created and visible in the "Azure Pipelines" pool, but wouldn't be available in any of the other pools, which meant it couldn't be referenced in the YAML used for setting the server used for builds.
I was able to resolve the issue by de-registering the agent on my self-hosted VM with .\config.cmd remove and running ./config again without the --environment --environmentname "<name>" arguments that were provided within the registration script mentioned above (shown in the "Add resource" screenshot).
Oddly, the registration script is a much quicker way to register an agent than the "New agent" form shown in Agent Pools:
the necessary files are pulled to the server (without having to download them first) and a PAT with a 3-hour lifetime is auto-generated.
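For reference, a minimal sketch of re-registering the agent into a regular pool (organization URL, PAT and pool name are placeholders; see config.cmd --help for the full list of flags):

.\config.cmd remove
.\config.cmd --unattended `
  --url https://dev.azure.com/<your-organization> `
  --auth pat --token <personal-access-token> `
  --pool Default `
  --agent Weltgeist `
  --runAsService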
I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder, to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (i.e. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B, and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is, that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin and mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A and have a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Builds seem to always run with a Cloud Build service account, of which you can manage the roles, but I can't see how I could give it access to another project. Also in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem and solved it by having multiple Cloud Build files.
A Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the new project's source repo, which also has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
  - name: gcr.io/cloud-builders/git
    args: ['checkout', '--orphan', 'temp']
  - name: gcr.io/cloud-builders/git
    args: ['add', '-A']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.name', 'Your Name']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.email', 'Your Email']
  - name: gcr.io/cloud-builders/git
    args: ['commit', '-am', 'latest production commit']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-D', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-m', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushes all of the source code into the new project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
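As a rough sketch of that grant (the project-number placeholder is project A's, and the exact role you need may differ; Source Repository Writer is roles/source.writer):

gcloud projects add-iam-policy-binding project-prod \
  --member="serviceAccount:<PROJECT_A_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"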
As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you need to pay extra for each build.
You could otherwise use third-party services to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could give the build permissions so it can refer to resources from another project, but I would keep the projects separated to minimize complexity.
My solution:
From project A, create a new Cloud Build trigger on branch release-*; in the build configuration, specify $_PROJECT_ID as project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable named _PROJECT_ID whose value is project B's ID.
NOTE: Remember to grant permissions for project A's Cloud Build service account (@cloudbuild.gserviceaccount.com) on project B; see the sketch after the cloudbuild.yaml below.
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gcloud
    args:
      - beta
      - run
      - deploy
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--project=$_PROJECT_ID'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - driveit-hp-agreement-mngt-api
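As a hedged illustration of the permissions note above (the roles are only examples; what you actually need depends on what the build deploys, and pushing images to project B's registry needs storage permissions too):

# Allow project A's Cloud Build service account to deploy Cloud Run services in project B
gcloud projects add-iam-policy-binding <PROJECT_B_ID> \
  --member="serviceAccount:<PROJECT_A_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Allow it to act as the runtime service account used by the deployed service
gcloud projects add-iam-policy-binding <PROJECT_B_ID> \
  --member="serviceAccount:<PROJECT_A_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"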
Unfortunately, Google doesn't seem to provide that functionality within Source Repositories (it would rock if it did).
An alternative option you could consider (though involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync. (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A)
EDIT
To illustrate my alternative solution, here is a simple diagram
We have two separate GCP projects (one for dev and one for prod). We are using Cloud Build to deploy our project by utilizing repo mirroring and a Cloud Build trigger that fires whenever the dev or prod branches are updated. The cloudbuild.yaml file looks like this:
steps:
  # Firestore security rules deploy
  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ["deploy", "--only", "firestore:rules"]
    secretEnv: ['FIREBASE_TOKEN']

  # Firestore indexes deploy
  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ["deploy", "--only", "firestore:indexes"]
    secretEnv: ['FIREBASE_TOKEN']

secrets:
  - kmsKeyName: 'projects/my-dev-project/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
    secretEnv:
      FIREBASE_TOKEN: 'myreallylongtokenstring'

timeout: "1600s"
The problem we have is that the kmsKeyName apparently needs to be hardcoded in order for GCP to read it, meaning we can't do something like this:
secrets:
  - kmsKeyName: 'projects/$PROJECT_ID/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
    secretEnv:
      FIREBASE_TOKEN: 'myreallylongtokenstring'
This does not lend itself well to a continuous-deployment process like the one we are using since we'd like that kmsKeyName string to be dynamically set with the relevant project-id value depending on the dev or prod environment we are deploying to.
Is there a way around this that would allow us to dynamically specify the kmsKeyName?
Update:
We found a quick/dirty solution, which was to create individual cloudbuild.yaml files: one for dev (cloudbuild-dev.yaml) and one for prod (cloudbuild-prod.yaml). Each cloudbuild file is identical except for the last part, where we specify our hardcoded "secrets" info.
Explanation: GCP Cloud Build relies on individual triggers for each environment build, and each trigger can be configured to point at a specific cloudbuild YAML file, which is what we have done. The dev build trigger points at cloudbuild-dev.yaml, and the production trigger points at cloudbuild-prod.yaml.
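For completeness, a sketch of how the two triggers could point at the per-environment files (repo name and branch patterns are placeholders; flags as I recall them from gcloud beta builds triggers create, so double-check against the current CLI):

gcloud beta builds triggers create cloud-source-repositories \
  --repo=my-mirrored-repo \
  --branch-pattern="^dev$" \
  --build-config=cloudbuild-dev.yaml

gcloud beta builds triggers create cloud-source-repositories \
  --repo=my-mirrored-repo \
  --branch-pattern="^prod$" \
  --build-config=cloudbuild-prod.yaml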
Indeed, I tried different configurations: with single quotes, with double quotes, without quotes, with substitution variables, ...
The boring solution is to use manual decoding as described here, but then you can use variables and substitution variables as you want.
The boring part is that you have to inject the secret into each step which requires it, like this (for example, as an environment variable):
- name: "gcr.io/$PROJECT_ID/firebase"
entrypoint: "bach"
args:
- "-c"
- "export FIREBASE_TOKEN=$(cat secrets.json) && firebase deploy --only firestore:rules"
I don't know of another workaround.