Can I run a subset of tests from one GitHub repo in Test Kitchen? - test-kitchen

I reached the point where I think it makes sense to put my inspec tests in a different repo than my Chef cookbook. I just copied all dirs under test/integration into a new dir and created a repo from that. There are subdirs common, master, and worker. I'm not sure how best to manage this given my Test Kitchen setup.
Original kitchen.yml content:
suites:
  - name: master
    ...
    verifier:
      inspec_tests:
        - test/integration/common
        - test/integration/master
    ...
New content based on reading the docs:
suites:
  - name: master
    ...
    verifier:
      inspec_tests:
        - git@github.com:redacted/inspec-redacted.git
    ...
As soon as I wrote this, I looked for some way to choose only the 2 desired dirs common and master but I don't see this documented. Is it even possible?

Maybe you could narrow it down with specific controls in your suite:
verifier:
  inspec_tests:
    - git@github.com:redacted/inspec-redacted.git
  controls:
    - xyz
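A hedged sketch of how that could look in the suite from the question (the control names are placeholders for whatever controls your common and master tests actually define):
suites:
  - name: master
    verifier:
      inspec_tests:
        - git@github.com:redacted/inspec-redacted.git
      controls:
        - some-common-control    # placeholder: a control from the common tests
        - some-master-control    # placeholder: a control from the master tests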

Related

How to Have 2 Code Coverages in GitLab Repo Badges

My team has a GitLab repo. It has two parts: an NPM package under the projects folder and an Angular application under the src folder, so there are 2 projects in the angular.json file.
We currently have unit tests with coverage set up in our GitLab pipeline. The issue is that, since we have 2 projects in this repo, we really need to show the coverage for each project.
I noticed in the demo image of the GitLab badges documentation (https://docs.gitlab.com/ee/user/project/badges.html) that they have a 'JS Coverage' badge. This seems to be a custom badge (I can't find a list of built-in badges, and I'm not finding anything for 'JS Coverage', so I'm assuming it's custom).
So I think I can do something like that to create 2 custom badges showing the code coverage of each project (one for 'Pkg Coverage' and one for 'App Coverage'). But (TBH) the documentation around creating custom badges isn't great. I need to know how to store this custom value for use in the badge, and how to update it in the GitLab pipeline.
Does anyone know how to achieve this? If I could just figure out how that example uses 'JS Coverage' (and how the value is updated in the pipeline), then I could work out what I need for my 2 custom badges. Any tips?
Some details: right now we have a GitLab job like this (it runs the unit tests and updates the coverage value; since 'ng test' runs the tests of both projects one by one, only the coverage of the first project is saved to the 'coverage' value):
unit-tests:
  stage: test
  rules:
    # Run unit tests, including when merge requests are merged to default branch (so coverage % is updated)
    - when: on_success
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  before_script:
    - *angular-env-setup-script
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    - npm run build:ds-prod
    - npm install dist/ds
    - ng test --code-coverage --progress false --watch false
  artifacts:
    expose_as: "Coverage Report"
    paths:
      - coverage/
  tags:
    - kubernetes-runner
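One hedged approach (not from the thread; the project names below are guesses based on the build scripts above): split this into two jobs, run ng test per Angular project, and give each job its own coverage: regex. The coverage badge URL can then point at a specific job with the ?job= query parameter, which gives one badge per project.
pkg-unit-tests:
  stage: test
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    - ng test ds --code-coverage --progress false --watch false       # "ds" is an assumed project name from angular.json
app-unit-tests:
  stage: test
  image: trion/ng-cli-karma:$ANGULAR_VERSION
  coverage: '/Statements \W+: (\d+\.\d+)%.*/'
  script:
    - ng test my-app --code-coverage --progress false --watch false   # "my-app" is an assumed project name from angular.json
# Badge URLs would then look something like:
# https://gitlab.example.com/<group>/<project>/badges/<branch>/coverage.svg?job=pkg-unit-tests
# https://gitlab.example.com/<group>/<project>/badges/<branch>/coverage.svg?job=app-unit-tests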

ArgoCD - When deploying one app in a monorepo with multiple applications, a re-sync of all apps is triggered

Hi all! :)
I'm using a mono-repo, multi-application architecture.
- foo
  - dev
  - alp
  - prd
- bar
  - dev
  - alp
  - prd
- argocd
  - dev
    - foo-application (argo cd app, target revision: master, destination cluster: dev, path: foo/dev)
    - bar-application (argo cd app, target revision: master, destination cluster: dev, path: bar/dev)
  - alp
    - foo-application (argo cd app, target revision: master, destination cluster: alp, path: foo/alp)
    - bar-application (argo cd app, target revision: master, destination cluster: alp, path: bar/alp)
  - ...
Recently I found out that merging to the master branch triggers a sync of the other applications as well, despite there being no change in their target path directories.
So whenever one application is modified and merged into master, multiple applications repeatedly go Out-Of-Sync -> Syncing -> Synced. :(
I expected that if there is no code change in the target path, the application would stay Synced even if the git SHA of the branch changes.
But it doesn't work that way: when the git SHA of the target branch changes, ArgoCD is unconditionally triggered because the cache key changes.
To solve this problem, it seems wasteful to create a separate manifest repository for each application.
While looking for a solution, I came across this feature.
webhook-and-manifest-paths-annotation
However, according to the documentation, this seems to work when used with a GitHub webhook.
Currently we have ArgoCD polling the repository every 3 minutes. Does this annotation not work in that case?
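For reference, the feature behind that link is an annotation on the Application resource. A hedged sketch for the layout above (the repo URL and cluster address are assumptions, and whether it takes effect with polling instead of a webhook is exactly the open question here):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: foo-application
  annotations:
    # "." is relative to spec.source.path, so only changes under foo/dev
    # should mark this app OutOfSync / trigger a refresh
    argocd.argoproj.io/manifest-generate-paths: .
spec:
  project: default
  source:
    repoURL: https://example.com/redacted/monorepo.git   # assumed repo URL
    targetRevision: master
    path: foo/dev
  destination:
    server: https://kubernetes.default.svc               # assumed dev cluster
    namespace: foo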

GitHub self-hosted runners per branch

I'm trying to implement GitHub self-hosted runners, but I've hit a wall.
I want to have different runners for my prod and dev servers.
From what I understand, it's possible to set labels depending on the environment, but both my dev and prod servers are essentially the same (both are Windows Server 2012 R2, with similar hardware).
I have two yml files pointing to dev and master respectively, but how can I point each workflow to the right runner?
I've tried to add a label to each runner, but when I publish to master, the top runner in the list is the one that gets triggered.
The yml file for prod looks something like this:
name: SSR-Prod
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Restore dependencies
        run: npm install
      - name: Build and publish
        run: npm run build:ssr
You need to specify enough labels to select the correct runner, like so:
runs-on: [ self-hosted, master ]
This will make sure your workflow runs on the second runner.
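For reference, a hedged sketch of how those labels get attached in the first place: custom labels are set when the runner is registered (or edited later in the repository's runner settings). The URL, token, and other registration flags below are placeholders:
# On the dev server (Windows runner, run from PowerShell)
.\config.cmd --url https://github.com/YOUR-ORG/YOUR-REPO --token <REGISTRATION-TOKEN> --labels dev
# On the prod server
.\config.cmd --url https://github.com/YOUR-ORG/YOUR-REPO --token <REGISTRATION-TOKEN> --labels master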
I actually found that @frennky's answer didn't seem to work for us; I spent a lot of time ripping my hair out (fortunately, I have a lot).
It's easy to do, actually, but finding out how is oddly difficult.
The documentation is marvelous at telling you all the options, but not always great at giving examples.
For us, what worked was this:
setup-dns:
  name: Setup dynamic DNS
  runs-on: prod
Assuming, of course, that your label is 'prod'. (BTW, don't make Prod use dynamic DNS). According to the docs and common sense, this should work:
setup-dns:
  name: Setup dynamic DNS
  runs-on: [self-hosted, prod]
But it does not; the runner never runs. I did have a conversation with GitHub support about it, but I've lost the link. As long as you don't create a self-hosted runner with a label (not a name) of 'ubuntu-latest' (or another GitHub-hosted runner OS label), you won't have a problem!
When you think about it logically, putting in your own label requires that it's a self-hosted runner, so the two values for "runs-on" are somewhat redundant. My guess is that you could still use two self-hosted runner labels, and the job would run on whichever one was available. We didn't go that route as we needed patches run on multiple production servers deterministically, so we literally have prod1, prod2, prod3, etc.
Now here's where it gets interesting. Let's say that you have a callable workflow - better known as a "Subroutine". In that case, you cannot use "runs-on" in the caller; only the callee.
So here's what you do, in your callable workflow:
on:
  workflow_call:
    inputs:
      target_runner_label:
        type: string
        description: 'Target GitHub Runner'
        required: true
  # not strictly needed, but great for testing your callable workflow
  workflow_dispatch:
    inputs:
      target_runner_label:
        type: string
        description: 'Target GitHub Runner'
        required: true
jobs:
  report_settings:
    name: Report Settings
    runs-on: ${{ inputs.target_runner_label || github.event.inputs.target_runner_label }}
Now, you might wonder about the || expression in the middle. It turns out the syntax for referring to workflow_call inputs and workflow_dispatch inputs is completely different: the former come through inputs.*, the latter through github.event.inputs.*.
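For completeness, a hedged sketch of the caller side (the workflow file name and label are assumptions); note there is no runs-on here, because the callee's runs-on picks the runner:
jobs:
  call_report_settings:
    uses: ./.github/workflows/report-settings.yml   # assumed path to the callable workflow above
    with:
      target_runner_label: prod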
Another gotcha: the 'name' of the runner is completely superfluous, because it's the label, not the name, that the workflow uses to select it. For example, if a runner is named 'erpnext_erpdev' but labelled 'erp_fix', you would use 'erp_fix' in runs-on, not 'erpnext_erpdev'.

How to set up a YAML file with GitLab to deploy when a specific file changes?

I'm trying to set up a YAML file for GitLab that will deploy to my QA server only when a specific folder has a change in it.
This is what I have but it doesn't want to work. The syntax doesn't register any errors.
deploy to qa:
  script: **aws scripts**
  only:
    refs:
      - master
    changes:
      - directory/*
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
The problem seems to be with this section; the rest works without it. The documentation talks about using rules as a replacement when only and changes are used together, but I couldn't get that to work either.
only:
  refs:
    - master
  changes:
    - directory/*
The issue you're running into is the refs section of your "only" rule. Per GitLab's documentation on "changes": "If you use refs other than branches, external_pull_requests, or merge_requests, changes can’t determine if a given file is new or old and always returns true." Since you're using master as your ref, you are running into this issue.
As you've ascertained, the correct answer to this is to use a rules keyword instead. The equivalent rules setup should be as follows:
deploy to qa:
  script: **aws scripts**
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - directory/*
      when: on_success
    - when: never
  stage: deploy
  environment:
    name: qa
    url: **aws bucket url**
Essentially, the rule is saying "If the commit you're building from exists on your default branch (master in your case), and you have changes in directory/*, then run this job when previous jobs have succeeded. ELSE, never run this job"
Note: Technically the when: never is implied if no clauses match, but I prefer including it because it explicitly states your expectation for the next person who has to read your CI/CD file.
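One extra hedged note, not from the original answer: directory/* only matches files directly inside directory. If the job should also react to changes in nested subfolders, GitLab's glob patterns allow something like:
rules:
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    changes:
      - directory/**/*
    when: on_success
  - when: never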

How to access a GCP Cloud Source Repository from another project?

I have project A and project B.
I use a GCP Cloud Source Repository on project A as my 'origin' remote.
I use Cloud Build with a trigger on changes to the 'develop' branch of the repo to trigger builds. As part of the build I deploy some stuff with the gcloud builder, to project A.
Now, I want to run the same build on project B. Maybe the same branch, maybe a different branch (i.e. 'release-*'). In the end I want to deploy some stuff with the gcloud builder to project B.
The problem is, when I'm on project B (in Google Cloud Console), I can't even see the repo in project A. It asks me to "connect repository", but I can only select GitHub or Bitbucket repos for mirroring. The option "Cloud Source Repositories" is greyed out, telling me that they "are already connected". Just evidently not one from another project.
I could set up a new repo on project B, and push to both repos, but that seems inefficient (and likely not sustainable long term). The curious thing is, that such a setup could easily be achieved using an external Bitbucket/GitHub repo as origin and mirrored in both projects.
Is anything like this at all possible in Google Cloud Platform without external dependencies?
I also tried running all my builds in project A and have a separate trigger that deploys to project B (I use substitutions to manage that), but it fails with permission issues. Cloud Builds seem to always run with a Cloud Build service account, of which you can manage the roles, but I can't see how I could give it access to another project. Also in this case both builds would appear indistinguishable in a single build history, which is not ideal.
I faced a similar problem and I solved it by having multiple Cloud Build files.
One Cloud Build file (triggered when code was pushed to a certain branch) was dedicated to copying all of my source code into the other project's source repo, which in turn has its own Cloud Build file for deployment to that project.
Here is a sample of the Cloud Build file that copies sources to another project:
steps:
  - name: gcr.io/cloud-builders/git
    args: ['checkout', '--orphan', 'temp']
  - name: gcr.io/cloud-builders/git
    args: ['add', '-A']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.name', 'Your Name']
  - name: gcr.io/cloud-builders/git
    args: ['config', '--global', 'user.email', 'Your Email']
  - name: gcr.io/cloud-builders/git
    args: ['commit', '-am', 'latest production commit']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-D', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['branch', '-m', 'master']
  - name: gcr.io/cloud-builders/git
    args: ['push', '-f', 'https://source.developers.google.com/p/project-prod/r/project-repo', 'master']
This pushed all of the source code into the new project.
Note that you need to give your Cloud Build service account permission to push source code into the other project's source repositories.
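If it helps, a hedged sketch of that grant (the project ID and project number are placeholders; roles/source.writer is the Source Repository Writer role):
# Allow project A's Cloud Build service account to push to repos in the target project
gcloud projects add-iam-policy-binding project-prod \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/source.writer"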
As you have already said, you can host your repos externally on Bitbucket/GitHub and sync them to each project, but you need to pay extra for each build.
Otherwise, you could use third-party services to build your repos externally and deploy the result wherever you want; for example, look into CircleCI or a similar service.
You could also give the build permissions to refer to resources from another project, but I would keep the projects separated to minimize complexity.
My solution:
In project A, create a new Cloud Build trigger on branch release-* whose build configuration sets $_PROJECT_ID to project B's ID.
In the GCP Cloud Build trigger definition, add a new substitution variable _PROJECT_ID set to project B's ID.
NOTE: Remember to grant permissions on project B to project A's Cloud Build service account (the @cloudbuild.gserviceaccount.com account).
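A hedged sketch of what those grants might look like for the Cloud Run deploy below (project IDs/numbers are placeholders, and the exact roles depend on what the build deploys):
# Let project A's Cloud Build service account deploy Cloud Run services in project B
gcloud projects add-iam-policy-binding PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"
# It also needs to act as the runtime service account used by the Cloud Run service
gcloud iam service-accounts add-iam-policy-binding PROJECT_B_NUMBER-compute@developer.gserviceaccount.com \
  --project=PROJECT_B_ID \
  --member="serviceAccount:PROJECT_A_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"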
cloudbuild.yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: gcr.io/cloud-builders/gcloud
    args:
      - beta
      - run
      - deploy
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
      - '--project=$_PROJECT_ID'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$_PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
timeout: '20m'
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - driveit-hp-agreement-mngt-api
(screenshot: https://i.stack.imgur.com/XhRJ4.png)
Unfortunately Google doesn't seem to provide that functionality within Source Repositories (would rock if you could).
An alternative option you could consider (though it involves external dependencies) is to mirror your Source Repositories to GitHub or Bitbucket first, then mirror back again into Source Repositories. That way, any changes made to any mirror of the repository will sync (i.e. a change pushed in Project B will sync with Bitbucket, and likewise in Project A).
EDIT
To illustrate my alternative solution, here is a simple diagram