I deployed two Cloud Run services (staging and production) using GCP Cloud Build with this command:
entrypoint: gcloud
args: ['run', 'deploy', 'app', '--project', '$PROJECT_ID', '--image', 'image:$COMMIT_SHA', '--region', 'us-central1', '--allow-unauthenticated', '--memory', '256Mi', '--update-env-vars', 'ENV=production']
I noticed that the same command behaves differently on staging and production: on one of the services, traffic is not automatically routed to the newest revision.
Already have image (with digest):
Deploying container to Cloud Run service
Deploying...
Setting IAM Policy..............done
Creating Revision...done
Done.
Service [] revision [] has been deployed and is serving 0 percent of traffic.
I am missing this step:
Routing traffic......done
I checked the cloud run service.yaml and the traffic argument is set :
traffic:
- latestRevision: true
  percent: 100
If I run the same command from the GCP console, everything works as expected.
Question:
Why does gcloud run deploy not route traffic to the new revision when run from a Cloud Build pipeline? (I do not have the --no-traffic flag set.)
It seems to be related to this issue: https://issuetracker.google.com/issues/172165141
There are two modes available to you: route traffic to the latest revision automatically, or distribute it manually.
If you switch to manual routing, the service stays that way until you revert it with gcloud run services update-traffic testservice --platform=managed --to-latest. This is designed to keep things simple and to prevent ambiguity and unexpected traffic switches.
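If you want the pipeline to always promote the new revision, one workaround is to reset routing right after the deploy step. A minimal sketch based on the command from the question (service name app and region us-central1 are taken from the question; adjust to your setup):

steps:
# Deploy the new revision (may serve 0% if the service was switched to manual routing)
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'app', '--image', 'image:$COMMIT_SHA', '--region', 'us-central1']
# Explicitly route 100% of traffic back to the latest revision
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'services', 'update-traffic', 'app', '--region', 'us-central1', '--to-latest']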
Related
I want to execute a script on my Compute Engine VM using Cloud Build, but Cloud Build is not able to SSH into my VM. OS Login is enabled on the VM, and it only has an internal IP.
Here is my cloudbuild.yaml file:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: Update staging server
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    set -x &&
    gcloud compute ssh vm_name --zone=us-central1-c --command='/bin/sh /pullscripts/pull.sh'
I am attaching screenshots of my errors:
cloudbuild error page 1
cloudbuild error page 2
Also, my question is: is it possible to connect to a VM using the Cloud SDK if OS Login is enabled?
You'll probably have to add the roles/iap.tunnelResourceAccessor role to the Cloud Build service account. Please read this Google documentation, which shows what to do for a specific error code.
Error code 4033
Either you don't have permission to access the instance, the instance doesn't exist, or the instance is stopped.
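Granting that role to the Cloud Build service account would look roughly like this (PROJECT_ID and PROJECT_NUMBER are placeholders for your own project values; the Cloud Build service account normally follows the PROJECT_NUMBER@cloudbuild.gserviceaccount.com pattern):

# Allow the Cloud Build service account to tunnel through IAP
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member='serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com' \
    --role='roles/iap.tunnelResourceAccessor'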
In fact, you can use Cloud Build to connect to any VM; you just need a Docker configuration and to upload the required files (private key, scripts, etc.). I have this repo which solves this problem: https://github.com/jmbl1685/gcloudbuild-vm-ssh-connect
I hope the above helps you.
Try adding --internal-ip, which looks as follows:
gcloud compute ssh vm_name --zone=us-central1-c --internal-ip
I need to run a specific gcloud SDK command, and I need to do it remotely from my Express server. Is this possible?
The command is related to the Cloud CDN service, which doesn't seem to have an npm package to access its API in an easy way. I've noticed in a cloudbuild.yaml that you can actually run a gcloud command in a build process, like:
cloudbuild.yaml
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "server"
And then I thought: if it's possible to run a gcloud command through Cloud Build, isn't there some way to create, basically, a "Cloud Script" that I could call to trigger a gcloud command like that?
This is my environment and what I'd like to run:
Express server hosted on Cloud Run
I would like to run a command to clear the Cloud CDN cache, like this:
gcloud compute url-maps invalidate-cdn-cache URL_MAP_NAME \
--path "/images/*"
There doesn't seem to be a Node.js client API to access the Cloud CDN service.
There is a REST POST endpoint for this: https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps/invalidateCache
You can create a Cloud Function, or call the endpoint from pretty much anywhere else, to invalidate your cache.
To use the gcloud command instead, you would have to create a VM on Compute Engine and expose some endpoint that executes the command; there is no other way. I suggest you use the REST endpoint, as you can call it from whatever environment you use.
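As a rough sketch, calling that endpoint with an access token from the Cloud SDK could look like this (PROJECT_ID and URL_MAP_NAME are placeholders; on a server you would typically obtain the token from a service account instead):

# Invalidate /images/* on the given URL map via the Compute REST API
TOKEN=$(gcloud auth print-access-token)
curl -X POST \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"path": "/images/*"}' \
    "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/URL_MAP_NAME/invalidateCache"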
I have created two ECS clusters in the same subnetwork, one for the Jenkins master and the other for the Jenkins slave (an empty cluster). I have installed the Amazon ECS plugin on the Jenkins master, but I am not able to configure the Jenkins slave node. I created both clusters using the ecs-cli up command, and the following are my settings for the Amazon ECS plugin, matching my cluster. After running the job, a task definition is created in ECS, but the service and task are not created in the cluster.
Name: ecs-jenkins-slave
Amazon ECS Credential: aws_credentials
ECS Region Name: cluster_region
ECS Cluster: cluster_cluster
ECS Agent Template
Label: ecs-jenkins-slave
Docker Image: jenkinsci/jnlp-slave
Subnet: cluster_subnet
security_group: cluster_sg
(rest of the fields are default)
I created a test job to verify the configuration. Under "Restrict where this project can be run" in the test job, I am getting the message "Label ecs-jenkins-slave is serviced by no nodes and 1 cloud. Permissions or other restrictions provided by plugins may prevent this job from running on those nodes". When I run the job, it goes into the pending state with the message "(pending—'Jenkins' doesn't have label 'ecs-jenkins-slave')".
I've been struggling with configuring Kubernetes for many hours, and I don't know how to move forward.
What I did :
I created a few services using Spring Cloud
I created docker images for each service
I pushed those images to docker hub
I launched Kubernetes on AWS by running
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
Command kubectl cluster-info shows that it actually works.
I created Kubernetes pods for each service. The command kubectl get pods shows that all pods have the status Running.
The problem is that when I log to my AWS account I don't see any running instance, although I can see kubernetes-staging created in my S3 bucket.
My goal is to actually access my services, not just on localhost. How can I do that?
You should be able to see instances, of course. As @kichik mentioned, check whether your AWS console is set to the same region as the deployment scripts.
To use your services/applications, the next step is to expose them to the public with Kubernetes services, as described here and here.
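For example, since you created the pods directly, a minimal sketch (my-pod is a placeholder for one of your pod names, and 8080 is an assumed container port):

# Expose a pod publicly through a cloud load balancer
kubectl expose pod my-pod --type=LoadBalancer --port=80 --target-port=8080
# Watch for the external IP to be assigned
kubectl get services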
I'm trying to set up the example from Running Wordpress with a Single Pod.
I've done the Before You Begin section:
$ gcloud config list
[compute]
zone = europe-west1-c
[core]
account = user@email.com
disable_usage_reporting = False
project = com-project-default
I've done the steps from the tutorial:
"Step 1: Create your cluster" logs here
"Step 2: Create your pod" logs here
"Step 3: Allow external traffic" logs here
More logs:
$ kubectl get pods - log
$ gcloud compute firewall-rules list - log
So, when I try to connect to http://104.155.7.213/, I receive "This web page is not available: ERR_CONNECTION_REFUSED".
I tried adding "Allow HTTP traffic" explicitly to the node in the Compute Engine VMs dashboard, and I also tried using "kubectl run" instead of the deprecated "kubectl run-container", but it doesn't help. Also, I sometimes receive "last termination: exit code 2" (1 or 2) in the "message" column when running "kubectl get pods" (but not this time).
Info:
GKE from June 10, 2015
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.1", GitCommit:"befd1385e5af5f7516f75a27a2628272bb9e9f36", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"18", GitVersion:"v0.18.2", GitCommit:"1f12b893876ad6c41396222693e37061f6e80fe1", GitTreeState:"clean"}
$ gcloud version
Google Cloud SDK 0.9.64
alpha 2015.06.02
bq 2.0.18
bq-nix 2.0.18
compute 2015.06.09
core 2015.06.09
core-nix 2015.06.02
dns 2015.06.02
gcloud 2015.06.09
gcutil-msg 2015.06.09
gsutil 4.13
gsutil-nix 4.12
kubectl
kubectl-linux-x86_64 0.18.1
preview 2015.06.09
sql 2015.06.09
Thank you for your help in advance!
If you want to access the container directly using the node VM's IP address, you need to specify a host port in addition to a container port, e.g.
kubectl run-container wordpress --image=tutum/wordpress --port=80 --hostport=80
Alternatively, you can access wordpress via the proxy running on the master by running kubectl proxy and then pointing your web browser at http://localhost:8001/api/v1beta3/proxy/namespaces/default/pods/wordpress-3gaq6.
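In other words, something like this (the pod name wordpress-3gaq6 comes from the example above; yours will differ, so check kubectl get pods first):

# Start a local proxy to the Kubernetes API server
kubectl proxy --port=8001
# In another terminal, fetch the wordpress pod through the proxy
curl http://localhost:8001/api/v1beta3/proxy/namespaces/default/pods/wordpress-3gaq6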