I'm trying to implement a GitHub self-hosted runner, but I've hit a wall hard.
I want to have different runners for my prod and dev servers.
From what I understand, it's possible to set labels depending on the environment, but both my dev and prod servers are essentially the same (both are Windows Server 2012 R2, with similar hardware).
I have two YAML files pointing to dev and master respectively, but how can I point each runner to the right workflow?
I've tried to add a label to the runners like this:
But when I publish to master, the top runner is triggered
The yml file for prod looks something like this:
name: SSR-Prod
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Restore dependencies
        run: npm install
      - name: Build and publish
        run: npm run build:ssr
You need to specify enough labels to select the correct runner, like so:
runs-on: [ self-hosted, master ]
This will make sure your workflow runs on the second runner.
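For example, if the dev runner was registered with a `dev` label and the prod runner with a `master` label, the two workflow files could each select their runner like this (a sketch; the label names are whatever you assigned when registering the runners):

```yaml
# Fragment of the dev workflow (triggered on the dev branch)
jobs:
  build:
    runs-on: [self-hosted, dev]
---
# Fragment of the prod workflow (triggered on master)
jobs:
  build:
    runs-on: [self-hosted, master]
```

Because both labels must match, each workflow will only be picked up by the runner carrying the corresponding label.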
I actually found that @frennky's answer didn't seem to work for us; I spent a lot of time ripping my hair out (fortunately, I have a lot).
It's easy to do, actually, but finding out how is oddly difficult.
The documentation is marvelous at telling you all the options, but not always great for giving examples.
For us, what worked was this:
setup-dns:
  name: Setup dynamic DNS
  runs-on: prod
Assuming, of course, that your label is 'prod'. (BTW, don't make Prod use dynamic DNS). According to the docs and common sense, this should work:
setup-dns:
  name: Setup dynamic DNS
  runs-on: [self-hosted, prod]
But it does not; the runner never runs. I did have a conversation with GitHub support about it, but I've lost the link. As long as you don't create a self-hosted runner with a label (not name) of 'ubuntu-latest' (or another GitHub-hosted runner OS version), you won't have a problem!
When you think about it logically, putting in your own label implies that it's a self-hosted runner, so the two values for "runs-on" are somewhat redundant. My guess is that you could still use two self-hosted runner labels, and the job would run on whichever one was available. We didn't go that route as we needed patches run on multiple production servers deterministically, so we literally have prod1, prod2, prod3, etc.
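If you did want a single workflow to run the same job across several labeled runners, one way to sketch it is a matrix over the labels (the `prod1`–`prod3` labels and the script name here are assumptions for illustration):

```yaml
jobs:
  patch:
    strategy:
      matrix:
        runner: [prod1, prod2, prod3]
    runs-on: ${{ matrix.runner }}
    steps:
      # apply-patches.ps1 is a hypothetical script name
      - run: ./apply-patches.ps1
```

Each matrix entry produces a separate job pinned to one specific runner, which keeps the fan-out deterministic.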
Now here's where it gets interesting. Let's say that you have a callable workflow - better known as a "Subroutine". In that case, you cannot use "runs-on" in the caller; only the callee.
So here's what you do, in your callable workflow:
on:
  workflow_call:
    inputs:
      target_runner_label:
        type: string
        description: 'Target GitHub Runner'
        required: true
  # not strictly needed, but great for testing your callable workflow
  workflow_dispatch:
    inputs:
      target_runner_label:
        type: string
        description: 'Target GitHub Runner'
        required: true
jobs:
  report_settings:
    name: Report Settings
    runs-on: ${{ inputs.target_runner_label || github.event.inputs.target_runner_label }}
Now, you might wonder about the || expression in the middle. It turns out the syntax for referring to workflow_call inputs and workflow_dispatch inputs is completely different.
Another gotcha: The 'name' of the runner is completely superfluous, because it's the label, not the name, that the workflow uses to trigger it. In the below example, you wouldn't use "erpnext_erpdev" to trigger the workflow, you'd use "erp_fix"
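Putting it together, the caller passes the label through `with:` rather than setting `runs-on` itself. A sketch, assuming the callable workflow lives at `.github/workflows/report-settings.yml` in the same repo and the target runner carries a `prod` label:

```yaml
jobs:
  report:
    # runs-on is NOT allowed here; the callee resolves it from the input
    uses: ./.github/workflows/report-settings.yml
    with:
      target_runner_label: prod
```

The callee's `runs-on: ${{ inputs.target_runner_label || ... }}` then resolves to `prod`.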
I've set up Google Cloud Run with continuous deployment from a GitHub repo, and it redeploys every time there's a push to main (which is what I want), but when I go to check the site, it hasn't updated the HTML I've been testing with. I've tested it on my local machine, and it updates the code when I run the Django server, so I'm guessing it's something with my cloudbuild.yml? There was another post I tried to mimic, but it didn't take.
Any advice would be very helpful! Thank you!
cloudbuild.yml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore', './ExePlore']
  # Push the image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/${PROJECT_ID}/exeplore']
  # Deploy image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'exeplore'
      - '--image'
      - 'gcr.io/${PROJECT_ID}/exeplore'
      - '--region'
      - 'europe-west2'
      - '--platform'
      - 'managed'
images:
  - gcr.io/${PROJECT_ID}/exeplore
Here are the variables for GCR
Edit 1: I've now updated my cloudbuild file, so the SHORT_SHA is all gone, but now Cloud Run is saying it can't find my manage.py at /Exeplore/manage.py. I might have to trial-and-error it, as running the container locally is fine, and the same goes for running the server locally. I have yet to try what Ezekias suggested, as I tried rolling back to when it was correctly running the server and it didn't like that.
Edit 2: I've checked the services, it is at 100% Latest
Check your Cloud Run service, either on the Cloud Console or by running gcloud run services describe. It may be set to serve traffic to a specific revision instead of having 100% of traffic serving LATEST.
If that's the case, it won't automatically move traffic to the new revision when you deploy. If you want it to automatically switch to the new update, you can run gcloud run services update-traffic --to-latest or use the "Manage Traffic" button on the revisions tab of the Cloud Console to set 100% of traffic to the latest healthy revision.
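Concretely, the two commands might look like this (a sketch using the service name and region from the question's cloudbuild.yml):

```shell
# Inspect the service, including which revision(s) are serving traffic
gcloud run services describe exeplore --region europe-west2 --platform managed

# Route 100% of traffic to the latest healthy revision
gcloud run services update-traffic exeplore --to-latest --region europe-west2 --platform managed
```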
It looks like you're building gcr.io/${PROJECT_ID}/exeplore:$SHORT_SHA, but pushing and deploying gcr.io/${PROJECT_ID}/exeplore. These are essentially different images.
Update any image variables to include the SHORT_SHA to ensure all references are the same.
To avoid duplication you may also want to use dynamic substitution variables
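A consistent version of the build/push/deploy steps could look like this (a sketch based on the original file; `$SHORT_SHA` is a built-in substitution that Cloud Build populates for triggered builds, and note it now appears in all four image references):

```yaml
steps:
  # Build the image, tagged with the commit's short SHA
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}', './ExePlore']
  # Push the same tag
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}']
  # Deploy the same tag to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'exeplore'
      - '--image'
      - 'gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}'
      - '--region'
      - 'europe-west2'
      - '--platform'
      - 'managed'
images:
  - gcr.io/${PROJECT_ID}/exeplore:${SHORT_SHA}
```

Because every reference carries the same tag, the deployed image is guaranteed to be the one that was just built.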
I'm trying to set up a YAML file for GitLab that will deploy to my QA server only when a specific folder has a change in it.
This is what I have but it doesn't want to work. The syntax doesn't register any errors.
deploy to qa:
  script: **aws scripts**
  only:
    refs:
      - master
    changes:
      - directory/*
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
The problem seems to be with this section; the rest works without it. The documentation talks about using rules as a replacement when only and changes are used together, but I couldn't get that to work either.
only:
  refs:
    - master
  changes:
    - directory/*
The issue you're running into is the refs section of your "only" rule. Per GitLab's documentation on "changes": "If you use refs other than branches, external_pull_requests, or merge_requests, changes can’t determine if a given file is new or old and always returns true." Since you're using master as your ref, you are running into this issue.
As you've ascertained, the correct answer to this is to use a rules keyword instead. The equivalent rules setup should be as follows:
deploy to qa:
  script: **aws scripts**
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - directory/*
      when: on_success
    - when: never
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
Essentially, the rule is saying "If the commit you're building from exists on your default branch (master in your case), and you have changes in directory/*, then run this job when previous jobs have succeeded. ELSE, never run this job"
Note: Technically the when: never is implied if no clauses match, but I prefer including it because it explicitly states your expectation for the next person who has to read your CI/CD file.
I'm starting to lose my sanity over a YAML build. This is the very first YAML build I've ever tried to configure, so it's likely I'm making some basic mistake.
This is my yaml build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)
trigger:
  branches:
    include:
      - master
      - features/120414-delete-failed-container-instances
schedules:
  - cron: '20,50 * * * *'
    displayName: At minutes 20 and 50
    branches:
      include:
        - features/120414-delete-failed-container-instances
    always: true
pool:
  name: Standard-Windows
variables:
  - name: ResourceGroup
    value: myResourceGroup
stages:
  - stage: Delete
    displayName: Delete containers
    jobs:
      - job: Job1
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
              scriptType: 'pscore'
              scriptLocation: 'scriptPath'
              scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
              arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build it stays like this forever:
I've tried running the same task with an inline script, without success. The same thing happens if I try to run a PowerShell task instead of an Azure CLI task.
What am I missing here?
TL;DR: the issue was caused by (a lack of) permissions.
More details
After enabling the following feature I could see more details about the problem:
The following warning was shown after enabling the feature:
Clicking on View shows the Azure subscription used in the Azure CLI task. After clicking on Permit, everything works as expected.
Cannot run Azure CLI task on yaml build
Your YAML file should be correct. I have tested your YAML on my side, and it works fine.
The only place I modified is change the agent pool with my private agent:
pool:
  name: MyPrivateAgent
Besides, judging by the state shown in your image, it seems the private agent in the agent pool you specified for the build definition is not running. Once the agent is running, the build will start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.
I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder, to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (i.e. 'release-*'). In the end want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B, and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is, that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin and mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A and have a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Builds seem to always run with a Cloud Build service account, of which you can manage the roles, but I can't see how I could give it access to another project. Also in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem, and I solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's repo, which in turn has its own Cloud Build file for deployment within that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
  - name: gcr.io/cloud-builders/git
    args: ['checkout', '--orphan', 'temp']
  - name: gcr.io/cloud-builders/git
    args: ['add', '-A']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.name', 'Your Name']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.email', 'Your Email']
  - name: gcr.io/cloud-builders/git
    args: ['commit', '-am', 'latest production commit']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-D', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-m', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushes all of the source code into the new project.
Note: you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
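That grant could be sketched as follows (assumptions: `project-prod` is the target project from the push URL above, `PROJECT_A_NUMBER` is a placeholder for the source project's number, and `roles/source.writer` is the role that allows pushes to Cloud Source Repositories):

```shell
# Grant project A's Cloud Build service account write access
# to project B's source repositories, so the push step succeeds
gcloud projects add-iam-policy-binding project-prod \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"
```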
As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you need to pay extra for each build.
You could use third party services otherwise to build your repos outside and deploy the result wherever you want for ex. look into CircleCI or similar service.
You could give permissions to build that it could refer to resources from another project, but I would keep them separated to minimize complexity.
My solution:
From project A, create a new Cloud Build trigger on branch release-*, with a build configuration that specifies $_PROJECT_ID as project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable _PROJECT_ID set to project B's ID.
NOTE: Remember to grant permissions for the service account of project A (@cloudbuild.gserviceaccount.com) on project B.
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gcloud
    args:
      - beta
      - run
      - deploy
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--project=$_PROJECT_ID'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - driveit-hp-agreement-mngt-api
Unfortunately Google doesn't seem to provide that functionality within Source Repositories (would rock if you could).
An alternative option you could consider (though involves external dependencies) is to mirror your Source Repositories first to GitHub or Bitbucket, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync. (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A)
EDIT
To illustrate my alternative solution, here is a simple diagram
We have two separate GCP projects (one for dev, and one for prod). We are using Cloud Build to deploy our project by utilizing repo mirroring and a Cloud Build trigger that fires whenever the dev or prod branch is updated. The cloudbuild.yaml file looks like this:
steps:
  # Firestore security rules deploy
  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ["deploy", "--only", "firestore:rules"]
    secretEnv: ['FIREBASE_TOKEN']
  # Firestore indexes deploy
  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ["deploy", "--only", "firestore:indexes"]
    secretEnv: ['FIREBASE_TOKEN']
secrets:
  - kmsKeyName: 'projects/my-dev-project/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
    secretEnv:
      FIREBASE_TOKEN: 'myreallylongtokenstring'
timeout: "1600s"
The problem we have is that the kmsKeyName apparently needs to be hardcoded in order for GCP to read it, meaning we can't do something like this:
secrets:
  - kmsKeyName: 'projects/$PROJECT_ID/locations/global/keyRings/ci-ring/cryptoKeys/deployment'
    secretEnv:
      FIREBASE_TOKEN: 'myreallylongtokenstring'
This does not lend itself well to a continuous-deployment process like the one we are using since we'd like that kmsKeyName string to be dynamically set with the relevant project-id value depending on the dev or prod environment we are deploying to.
Is there a way around this that would allow us to dynamically specify the kmsKeyName?
Update:
We have found a quick/dirty solution, which was to create individual cloudbuild.yaml files: one for dev (cloudbuild-dev.yaml) and one for prod (cloudbuild-prod.yaml). Each cloudbuild file is identical except for the last part, where we specify our hardcoded "secrets" info.
Explanation: GCP Cloud Build relies on individual triggers for each environment build, and each trigger can be configured to point at a specific cloudbuild YAML file, which is what we have done. The dev build trigger points at cloudbuild-dev.yaml, and the production trigger points at cloudbuild-prod.yaml.
Indeed, I tried different configurations: with single quotes, double quotes, no quotes, with substitution variables, ...
The boring solution is to use manual decoding as described here, but that way you can use variables and substitution variables as you want.
The boring part is that you have to inject the secret into each step that requires it, like this (for example as an environment variable):
- name: "gcr.io/$PROJECT_ID/firebase"
  entrypoint: "bash"
  args:
    - "-c"
    - "export FIREBASE_TOKEN=$(cat secrets.json) && firebase deploy --only firestore:rules"
I don't know of another workaround.
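For reference, the manual-decoding step that produces `secrets.json` could be sketched like this (the key ring and key names are taken from the question; the encrypted file name is an assumption). Because it is an ordinary build step rather than a `secrets:` block, substitution variables like `$PROJECT_ID` work here:

```yaml
steps:
  # Decrypt the Firebase token with Cloud KMS; unlike kmsKeyName in a
  # secrets: block, this step's args accept substitution variables
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - kms
      - decrypt
      - '--ciphertext-file=firebase-token.enc'  # assumed file name in the repo
      - '--plaintext-file=secrets.json'
      - '--location=global'
      - '--keyring=ci-ring'
      - '--key=deployment'
      - '--project=$PROJECT_ID'
```

Later steps can then `cat secrets.json` as shown above, so the same cloudbuild.yaml works for both the dev and prod projects.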