Google Cloud Run deployment not working using `gcloud` SDK CLI

I have a service on Google Cloud Run that I am able to deploy manually through the Google Cloud Console UI using an image on Container Registry, but deployment from the CLI fails. Here is the command I am using and the error I get; I cannot work out what I am missing:
$ gcloud beta run deploy service-name --platform managed --region region-name --image image-url
Deploying container to Cloud Run service [service-name] in project [project-name] region [region-name]
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: The request has errors
- '#type': type.googleapis.com/google.rpc.BadRequest
fieldViolations:
- description: spec.revisionTemplate.spec.container.ports should be empty
field: spec.revisionTemplate.spec.container.ports
Update 1:
I have updated the SDK using gcloud components update, but I still have the same issue.
Here's my SDK version:
$ gcloud version
Google Cloud SDK 270.0.0
beta 2019.05.17
bq 2.0.49
core 2019.11.04
gsutil 4.46
I am using a multistage docker build. Here's my Dockerfile:
# Stage 1: build a statically linked Go binary
FROM custom-dev-image
COPY . /project_dir
WORKDIR /project_dir
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
/usr/local/bin/go build -a \
-ldflags '-w -extldflags "-static"' \
-o /root/go/bin/executable ./cmds/project/main.go

# Stage 2: copy the binary into a minimal runtime image
FROM alpine:3.10
ENV GIN_MODE=release APP_NAME=project_name
COPY --from=0 /root/go/bin/executable /usr/local/bin/
# Exec form so the binary receives signals directly (Cloud Run sends SIGTERM on shutdown)
CMD ["executable"]
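For context, pushing the image to Container Registry before deploying typically looks like this (the gcr.io path here is a placeholder, not the actual image URL from the question):
docker build -t gcr.io/project-name/service-name .
docker push gcr.io/project-name/service-name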

I had this same problem, and I assume it was because my Cloud Run service had been created with an older version of the SDK, before I ran gcloud components update.
I was able to fix it by deleting the whole Cloud Run service (through the GUI) and deploying it from scratch again (via the terminal). I noticed that the ports: definition disappeared from the service YAML once I did this.
After this I could deploy normally; a CLI-only equivalent is sketched below.
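If you prefer to stay in the terminal, something like this should work (service-name, region-name and image-url are the same placeholders as in the question):
gcloud run services describe service-name --platform managed --region region-name --format yaml
gcloud run services delete service-name --platform managed --region region-name
gcloud beta run deploy service-name --platform managed --region region-name --image image-url
The describe call lets you check whether the ports: block is present in the spec before you delete anything.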

This was a bug in Cloud Run. It has been fixed, and deploying with the CLI is working for me now. Here's the link to the issue I raised with Google Cloud, which has a response from them: https://issuetracker.google.com/issues/144069696

Related

GCSFuse not finding default credentials when running a Cloud Run app in Docker locally

I am working on mounting a Cloud Storage bucket to my Cloud Run app, using the example and code from the official tutorial: https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
The application uses Docker only (no cloudbuild.yaml).
The Dockerfile builds without issue using the command:
docker build --platform linux/amd64 -t fusemount .
I then start the container with the following command:
docker run --rm -p 8080:8080 -e PORT=8080 fusemount
and when the container runs, gcsfuse is invoked with both the mount directory and the bucket URL:
gcsfuse --debug_gcs --debug_fuse gs://<my-bucket> /mnt/gs
But the connection fails:
2022/12/11 13:54:35.325717 Start gcsfuse/0.41.9 (Go version go1.18.4) for app "" using mount point: /mnt/gcs
2022/12/11 13:54:35.618704 Opening GCS connection...
2022/12/11 13:57:26.708666 Failed to open connection: GetTokenSource: DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I have already set up the application-default credentials with the following command:
gcloud auth application-default login
and I have a Python-based Cloud Function project that I have tested on the same local machine, which has no problem accessing the same storage bucket with the same default credentials.
What am I missing?
Google client libraries look in ~/.config/gcloud when using the Application Default Credentials (ADC) approach.
Your local Docker container doesn't contain this config when running locally.
So you might want to mount it when running the container:
$ docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud -p 8080:8080 -e PORT=8080 fusemount
Some notes:
1. I'm not sure which OS you are using, so replace /home/$USER with the real path to your home directory.
2. Likewise, I'm not sure your image uses /root as the home directory, so make sure the path from note 1 is mounted to the right target.
3. Make sure your local user is authorized with the gcloud CLI, as you mentioned, using gcloud auth application-default login (a quick check is sketched below).
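A quick way to confirm the ADC file exists where gcloud writes it by default (this path is the usual Linux/macOS default and is an assumption about your setup):
ls ~/.config/gcloud/application_default_credentials.json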
Let me know if this helped.
If you are using Docker and not Google Compute Engine (GCE), did you try mounting a service account key into the container and using that key when mounting GCSFuse? A sketch follows.
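Something along these lines (the key path and file name are placeholders; GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google auth libraries read):
docker run --rm \
  -v /path/to/sa-key.json:/secrets/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json \
  -p 8080:8080 -e PORT=8080 fusemount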
If you are building and deploying to Cloud Run, did you grant the required permissions mentioned in https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse#ship-code?

GCP Cloud Run for Anthos - PERMISSION_DENIED when deploying service with gcloud run deploy

I successfully followed this quick-start, which uses the Cloud Run GUI to deploy a Cloud Run for Anthos service.
Then I wanted to deploy a Cloud Run for Anthos service using the gcloud run deploy command directly from Cloud Shell on the GCP website instead of the Cloud Run GUI.
I got the following error:
ERROR: (gcloud.run.deploy) PERMISSION_DENIED: Permission denied to get service [resourcesettings.googleapis.com]
- '#type': type.googleapis.com/google.rpc.PreconditionFailure
violations:
- subject: '110002'
type: googleapis.com
- '#type': type.googleapis.com/google.rpc.ErrorInfo
domain: serviceusage.googleapis.com
reason: AUTH_PERMISSION_DENIED
gcloud --version returns:
Google Cloud SDK 340.0.0
alpha 2021.05.07
app-engine-go 1.9.71
app-engine-java 1.9.88
app-engine-python 1.9.91
app-engine-python-extras 1.9.91
beta 2021.05.07
bigtable
bq 2.0.67
cbt 0.9.0
cloud-build-local 0.5.2
cloud-datastore-emulator 2.1.0
core 2021.05.07
datalab 20190610
gsutil 4.61
kind 0.7.0
kpt 0.39.2
local-extract 1.0.0
minikube 1.19.0
pubsub-emulator 0.4.0
skaffold 1.23.0
I do not understand how I was able to deploy through the GUI but not through the Cloud Shell CLI (using the same GKE cluster, the same service name, and the same Docker image).
Note: gcloud run deploy worked with gcloud config set run/platform managed instead of gcloud config set run/platform gke (both invocations are sketched below).
It seems to be a GKE/Anthos-related issue.
Note 2: It is a small one-person GCP project that I just created for testing Cloud Run for Anthos. I have the Owner role.
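For reference, the two invocations look roughly like this (service, cluster, zone and image names are placeholders; only the managed form worked for me):
gcloud config set run/platform managed
gcloud run deploy my-service --image gcr.io/my-project/my-image --region us-central1

gcloud config set run/platform gke
gcloud run deploy my-service --cluster my-cluster --cluster-location us-central1-a --image gcr.io/my-project/my-image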

GCloud Cloud Run Deploy "Error: ERROR: (gcloud.run.deploy) unrecognized arguments" from within Gitlab-Ci Runner Container?

This is a strange one.
A Google Cloud Run deployment run from the gcloud command line on my macOS machine works, while the identical command, run with the identical gcloud version by a service account user within our Alpine-based GitLab CI runner container/executor, crashes and complains about unrecognized arguments.
With the arguments copied and pasted, why does gcloud (within the Alpine GitLab runner/executor container) fail to recognize the arguments where my local install works fine?
As background:
We run CI/CD within a GitLab CI runner, where the Docker executor that deploys our final container previously needed kubectl to push that container to a GCP-managed Kubernetes cluster, which was expensive. So we moved the production container to Cloud Run, which was cheaper.
Now I am working on resetting our CI/CD deployments and ran into the above issue while attempting to deploy a container from within our GitLab CI pipeline.
The gcloud command that works looks like this (on my local Mac):
gcloud run deploy site-production \
--platform=managed \
--allow-unauthenticated \
--image=us.gcr.io/some-site-333333/site:master \
--region=us-east1
That same (EXACT) command on the GitLab runner gets me:
ERROR: (gcloud.run.deploy) unrecognized arguments:
--platform=managed
--allow-unauthenticated
--image=us.gcr.io/some-site-333333/site:master
--region=us-east1
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
This seems super weird, and I was pretty sure I must have had a typo or something, but the command itself was copied (and modified) from Google's own Cloud Run docs.
If I am missing something dumb, let me know; until then my plan is to start shaving off optional flags to see which of those parameters it's complaining about. Ideas are appreciated!
Try making the command a one-liner (backslash line continuations sometimes don't survive CI YAML or shell quoting intact):
gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1
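If it helps, a minimal sketch of how that one-liner might sit in .gitlab-ci.yml (the job name and image are assumptions, not taken from the original pipeline):
deploy:
  image: google/cloud-sdk:alpine
  script:
    - gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1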

Deploy app created with docker-compose to AWS

Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the Docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the ECS CLI (a sketch follows below).
This command translates the docker-compose file you created into an ECS task definition.
If you're interested in finding out more about the CLI, take a read of the AWS documentation here.
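A minimal sketch of that flow (the cluster and project names are placeholders, and the Fargate launch type is an assumption):
ecs-cli configure --cluster my-cluster --region us-east-2 --default-launch-type FARGATE
ecs-cli compose --project-name exchange up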
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli:
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the architecture illustrated in that post.
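Another minimal sketch, using the Docker Compose ECS integration that the announcement describes (the context name is a placeholder):
docker context create ecs myecscontext
docker context use myecscontext
docker compose up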

gcloud crashed (AttributeError): 'NoneType' object has no attribute 'revisionTemplate'

I'm working on Cloud Run, which still seems to be in beta, and it is preventing me from redeploying, as shown below. It works if I delete the service from the GCP console and then deploy the same Docker image as a new service. I could not find a way to set revisionTemplate.
I run this command to deploy a Cloud Run service using gcloud:
gcloud beta run deploy v2-cms --image gcr.io/my-project/v2-cms --quiet
Then it fails like this:
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: gcloud crashed (AttributeError): 'NoneType' object has no attribute 'revisionTemplate'
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
To fix this issue, please update gcloud to its latest version with gcloud components update.
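A minimal sketch of that fix, reusing the deploy command from the question:
gcloud components update
gcloud beta run deploy v2-cms --image gcr.io/my-project/v2-cms --quiet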
Make sure that your local TensorFlow version is still supported by gcloud: https://cloud.google.com/ai-platform/training/docs/runtime-version-list