Just curious, why isn't there an officially supported Helm cloud builder? It seems like a very common requirement, yet I'm not seeing one in the list here:
https://github.com/GoogleCloudPlatform/cloud-builders
I was previously using alpine/helm in my cloudbuild.yaml for my helm deployment as follows:
steps:
  # Build app image
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
      - ./cloudbuild/$_CONTAINER_NAME/
  # Push my-app image to Artifact Registry
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - $_IMAGE_REPO/$_CONTAINER_NAME:$COMMIT_SHA
  # Configure a kubectl workspace for this project
  - name: gcr.io/cloud-builders/kubectl
    args:
      - cluster-info
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER
      - KUBECONFIG=/workspace/.kube/config
  # Deploy with Helm
  - name: alpine/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - KUBECONFIG=/workspace/.kube/config
      - TILLERLESS=false
      - TILLER_NAMESPACE=kube-system
      - USE_GKE_GCLOUD_AUTH_PLUGIN=True
timeout: 1200s
substitutions:
  # substitutionOption: ALLOW_LOOSE
  # dynamicSubstitutions: true
  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: demo-gke
  _IMAGE_REPO: us-east1-docker.pkg.dev/fakeproject/my-docker-repo
  _CONTAINER_NAME: app2
options:
  logging: CLOUD_LOGGING_ONLY
  # The private worker pool created in an earlier step
  workerPool: 'projects/fakeproject/locations/us-east1/workerPools/cloud-build-pool'
And this was working with no issues. Then it recently started failing with the following error, so I'm guessing something changed:
Error: Kubernetes cluster unreachable: Get "https://10.10.2.2/version": getting credentials: exec: executable gke-gcloud-auth-plugin not found
I get this error regularly on VMs and can work around it by setting USE_GKE_GCLOUD_AUTH_PLUGIN=True, but adding it to the env section does not seem to fix the issue here. So I'm looking for recommendations on how to use Helm with Cloud Build. alpine/helm was just something I tried at random and it worked for me up until now, but there are probably better solutions out there.
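Presumably adding USE_GKE_GCLOUD_AUTH_PLUGIN=True to the env section doesn't help because the alpine/helm image simply doesn't contain the gke-gcloud-auth-plugin binary. One option I'm considering is building the community Helm builder from the GoogleCloudPlatform/cloud-builders-community repo into my own project and using it instead of alpine/helm, along these lines (untested sketch; the exact image name and the env vars the builder honours are assumptions on my part):

# One-time: build the community Helm builder into your own project
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/helm
gcloud builds submit --config=cloudbuild.yaml .

and then replace the deploy step with something like:

  # Deploy with Helm using the community builder
  - name: gcr.io/$PROJECT_ID/helm
    args:
      - upgrade
      - -i
      - $_CONTAINER_NAME
      - ./cloudbuild/$_CONTAINER_NAME/k8s
      - --set
      - image.repository=$_IMAGE_REPO/$_CONTAINER_NAME,image.tag=$COMMIT_SHA
      - -f
      - ./cloudbuild/$_CONTAINER_NAME/k8s/values.yaml
    env:
      - CLOUDSDK_COMPUTE_REGION=$_CUSTOM_REGION
      - CLOUDSDK_CONTAINER_CLUSTER=$_CUSTOM_CLUSTER

But I'd still like to know what the recommended approach is.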
Thanks!
Related
So I have been at this for days now and it is driving me crazy. Based on other posts, I have set up the following cloudbuild.yaml:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - gcr.io/${INSTANCE_NAME}
      - .
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - gcr.io/${INSTANCE_NAME}
  - name: 'gcr.io/${INSTANCE_NAME}'
    entrypoint: sh
    env:
      - DATABASE_URL=postgresql://USER:PASSWORD@localhost/DATABASE?host=/cloudsql/CONNECTION_NAME
    args:
      - -c
      - |
        wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
        chmod +x cloud_sql_proxy
        ./cloud_sql_proxy -instances=CONNECTION_NAME=tcp:5432 & sleep 3
        npx prisma migrate deploy
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - backend
      - --image
      - gcr.io/${INSTANCE_NAME}
      - --region
      - europe-west1
images:
  - gcr.io/${INSTANCE_NAME}
When running this, I am greeted by:
Step #2: 2023/02/05 13:00:49 Listening on 127.0.0.1:5432 for CONNECTION_NAME
Step #2: 2023/02/05 13:00:49 Ready for new connections
Step #2: 2023/02/05 13:00:49 Generated RSA key in 118.117245ms
Step #2: npm WARN exec The following package was not found and will be installed: prisma@4.9.0
Step #2: Prisma schema loaded from prisma/schema.prisma
Step #2: Datasource "db": PostgreSQL database "develop", schema "public" at "localhost"
Step #2:
Step #2: Error: P1001: Can't reach database server at `/cloudsql/CONNECTION_NAME`:`5432`
Step #2:
Step #2: Please make sure your database server is running at `/cloudsql/CONNECTION_NAME`:`5432`.
So even with the database URL hardcoded and the Cloud SQL proxy running, I am STILL getting this error. What am I missing?
Check the container name in the .env file and change it to postgres, as it replaces the host name in the connection string, as discussed here.
Or try the following format if you don't want to hardcode the IP address:
DB_USER=dbuser
DB_PASS=dbpass
DB_HOST=localhost
DB_PORT=5432
CLOUD_SQL_CONNECTION_NAME=/cloudsql/gcp-project-id:europe-west3:db-instance-name
DATABASE_URL=postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_BASE}?host=${CLOUD_SQL_CONNECTION_NAME}
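With those values (and assuming DB_BASE, which isn't defined above, is set to the database name, e.g. develop), the URL expands to roughly:

postgres://dbuser:dbpass@localhost:5432/develop?host=/cloudsql/gcp-project-id:europe-west3:db-instance-name

i.e. the ?host= query parameter makes the client connect through the proxy's Unix socket under /cloudsql rather than over TCP.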
If you have a public IP, try connecting via the Unix socket.
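Also note that in the cloudbuild.yaml above the proxy is started in TCP mode (-instances=CONNECTION_NAME=tcp:5432), which only opens a listener on 127.0.0.1:5432 and does not create the /cloudsql socket directory that the URL points at. If you keep the proxy in TCP mode, the URL should target the TCP listener instead, roughly (placeholders throughout):

DATABASE_URL=postgresql://USER:PASSWORD@127.0.0.1:5432/DATABASE

Alternatively, start the proxy with -dir=/cloudsql -instances=CONNECTION_NAME so the Unix-socket form of the URL can work.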
I am trying to set up a GCP Cloud Build pipeline for my Bitbucket repo. However, it's not being triggered.
I am using the following sample from their own website.
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "Hello, world!" > /persistent_volume/file
    volumes:
      - name: 'myvolume'
        path: '/persistent_volume'
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        cat /persistent_volume/file
    volumes:
      - name: 'myvolume'
        path: '/persistent_volume'
I have the following action enabled for the webhook.
Even if I keep this simple single step, it's not working.
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "Hello, world!"
Error
If I keep the trigger in the global region, it complains about a VPC-SC error:
RESOURCES_NOT_IN_SAME_SERVICE_PERIMETER
If I move the pipeline to northamerica-northeast1, it says:
triggerError spanner trigger (923134934784, test) not found
I've managed to replicate your issue by following this link on creating webhook triggers and got the same error.
{
  "error": {
    "code": 404,
    "message": "triggerError spanner trigger (${PROJECT_ID}, ${TRIGGER_NAME}) not found",
    "status": "NOT_FOUND"
  }
}
The weird thing is that even if I selected a non-global region, the generated webhook URL preview is for the global region.
Upon doing some research online, it turns out there's an ongoing issue with creating regional webhook triggers. You can check the status through this issue tracker and track it for updates.
I am creating a Tekton project which spawns Docker images that in turn run a few kubectl commands. I have accomplished this by using a docker:dind image as a Tekton sidecar and setting
  securityContext:
    privileged: true
  env:
However, one of the tasks is failing, since it needs the equivalent of --net=host from the docker run command.
I have tried to set a podTemplate with hostNetwork: true, but then the task with the sidecar fails to start the Docker daemon.
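What I tried was roughly this on the TaskRun (a sketch; the resource names here are placeholders):

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: my-taskrun
spec:
  taskRef:
    name: my-task
  podTemplate:
    hostNetwork: true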
Any idea how I could implement --net=host in the task YAML file? It would be really helpful.
Snippet of my task with the sidecar:
sidecars:
  - image: mypvtreg:exv1
    name: mgmtserver
    args:
      - --storage-driver=vfs
      - --userland-proxy=false
      # - --net=host
    securityContext:
      privileged: true
    env:
      # Write generated certs to the path shared with the client.
      - name: DOCKER_TLS_CERTDIR
        value: /certs
    volumeMounts:
      - mountPath: /certs
As commented by @SYN: using docker:dind as a sidecar, your builder container, executing in your Task steps, should connect to 127.0.0.1. That's how you would talk to your dind sidecar.
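A minimal sketch of what that looks like in the Task steps (assuming the sidecar shares its generated TLS client certs through a volume mounted at /certs, as in the snippet above; the volume name dind-certs is a placeholder):

steps:
  - name: docker-client
    image: docker:latest
    env:
      # Talk to the dind sidecar over the pod's loopback interface.
      - name: DOCKER_HOST
        value: tcp://127.0.0.1:2376
      - name: DOCKER_TLS_VERIFY
        value: "1"
      - name: DOCKER_CERT_PATH
        value: /certs/client
    volumeMounts:
      - mountPath: /certs/client
        name: dind-certs
    script: |
      docker info
      # your docker build / run / kubectl commands go here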
I have been trying to figure out how to configure the Docker version of Concourse (https://github.com/concourse/concourse-docker) to use AWS Secrets Manager. I added the following environment variables to the docker-compose file, but from the logs it doesn't look like it ever reaches out to AWS to fetch the creds. Am I missing something, or should this happen automatically when these environment variables are added under environment in the docker-compose file? Here are the docs I have been looking at: https://concourse-ci.org/aws-asm-credential-manager.html
version: '3'
services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database
  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["9090:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://XXX.XXX.XXX.XXX:9090
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_AWS_SECRETSMANAGER_REGION: us-east-1
      CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
      CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
pipeline.yml example:
jobs:
  - name: build-ui
    plan:
      - get: web-ui
        trigger: true
      - get: resource-ui
      - task: build-task
        file: web-ui/ci/build/task.yml
      - put: resource-ui
        params:
          repository: updated-ui
          force: true
      - task: e2e-task
        file: web-ui/ci/e2e/task.yml
        params:
          UI_USERNAME: ((ui-username))
          UI_PASSWORD: ((ui-password))
resources:
  - name: cf
    type: cf-cli-resource
    source:
      api: https://api.run.pivotal.io
      username: ((cf-username))
      password: ((cf-password))
      org: Blah
  - name: web-ui
    type: git
    source:
      uri: git@github.com:blah/blah.git
      branch: master
      private_key: ((git-private-key))
When storing parameters for Concourse pipelines in AWS Secrets Manager, they must follow this syntax:
/concourse/TEAM_NAME/PIPELINE_NAME/PARAMETER_NAME
If you have common parameters that are used across multiple pipelines in a team, use this syntax instead to avoid creating redundant parameters in Secrets Manager:
/concourse/TEAM_NAME/PARAMETER_NAME
The highest level supported is the Concourse team level; global parameters are not possible. Thus these variables in your compose environment will not do what you expect:
CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
Unless you want to change the /concourse prefix, these parameters should be left at their defaults.
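For example, the ((ui-username)) parameter referenced in the pipeline above could be stored like this (assuming the main team and a pipeline named web-ui; adjust both to your actual names):

aws secretsmanager create-secret \
  --name /concourse/main/web-ui/ui-username \
  --secret-string 'my-ui-user'

# Or, shared across all pipelines in the team:
aws secretsmanager create-secret \
  --name /concourse/main/ui-username \
  --secret-string 'my-ui-user'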
And when retrieving these parameters in the pipeline, no changes are required in the template. Just reference the PARAMETER_NAME; Concourse will handle the lookup in Secrets Manager based on the team and pipeline name.
...
params:
  UI_USERNAME: ((ui-username))
  UI_PASSWORD: ((ui-password))
...
I'm trying to provision my infrastructure on AWS using Ansible playbooks. I have the instance and am able to provision docker-engine, docker-py, etc., and, I swear, this worked correctly yesterday and I haven't changed the code since.
The relevant portion of my playbook is:
- name: Ensure AWS CLI is available
  pip:
    name: awscli
    state: present
  when: aws_deploy
- block:
    - name: Add .boto file with AWS credentials.
      copy:
        content: "{{ boto_file }}"
        dest: ~/.boto
      when: aws_deploy
    - name: Log in to docker registry.
      shell: "$(aws ecr get-login --region us-east-1)"
      when: aws_deploy
    - name: Remove .boto file with AWS credentials.
      file:
        path: ~/.boto
        state: absent
      when: aws_deploy
- name: Create docker network
  docker_network:
    name: my-net
- name: Start Container
  docker_container:
    name: example
    image: "{{ docker_registry }}/example"
    pull: true
    restart: true
    network_mode: host
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone
My {{ docker_registry }} is set to my-acct-id.dkr.ecr.us-east-1.amazonaws.com and the result I'm getting is:
"msg": "Error pulling my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example - code: None message: Get http://: http: no Host in request URL"
However, as mentioned, this worked correctly last night. Since then I've made some VPC/subnet changes, but I'm able to SSH to the instance and run docker pull my-acct-id.dkr.ecr.us-east-1.amazonaws.com/example with no issues.
Googling hasn't led me very far, as I can't seem to find other folks with the same error. I'm wondering what changed and how I can fix it! Thanks!
EDIT: Versions:
ansible - 2.2.0.0
docker - 1.12.3 6b644ec
docker-py - 1.10.6
I had the same problem. Downgrading the docker-compose pip package on the host machine from 1.9.0 to 1.8.1 solved it.
- name: Install docker-compose
  pip: name=docker-compose version=1.8.1
Per this thread: https://github.com/ansible/ansible-modules-core/issues/5775, the real culprit is requests. This fixes it:
- name: fix requests
  pip: name=requests version=2.12.1 state=forcereinstall
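If you manage the host entirely from the playbook, you can also pin both packages up front, before any docker_* tasks run (a sketch combining the two fixes above; versions taken from those answers):

- name: Pin docker SDK dependencies before using docker modules
  pip:
    name:
      - requests==2.12.1
      - docker-compose==1.8.1
    state: forcereinstall  # matches the workaround above; reinstalls on every run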