Why Is Cloud Run Failing When I Use Optionals In Express?

Can someone let me know why Express runs fine locally when I use optional chaining such as:
account.setting?.amount
But when I try to deploy the same express service to cloud run I get the following error:
DEFAULT 2023-01-03T04:45:46.367770Z > my-api-v2#1.0.0 start /usr/src/app
DEFAULT 2023-01-03T04:45:46.367798Z > node app.js
DEFAULT 2023-01-03T04:45:46.684797Z accountAmount: parseInt((account.setting?.amount ? account.setting?.amount : 0)),
DEFAULT 2023-01-03T04:45:46.684807Z ^
ERROR 2023-01-03T04:45:46.684845Z SyntaxError: Unexpected token '.' at wrapSafe (internal/modules/cjs/loader.js:915:16) at Module._compile (internal/modules/cjs/loader.js:963:27) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10) at Module.load (internal/modules/cjs/loader.js:863:32) at Function.Module._load (internal/modules/cjs/loader.js:708:14) at Module.require (internal/modules/cjs/loader.js:887:19) at require (internal/modules/cjs/helpers.js:74:18)
It seems that when I take the question mark out, the build runs fine, but when I use the question mark for optional chaining the build fails. Is there anything I can do? Also, I am using Node 16 in my package.json and my Cloud Build YAML. Here are the scripts:
package.json
"engines": {
  "node": ">=16.0.0"
},
cloudbuild.yaml
- name: 'gcr.io/cloud-builders/docker'
  dir: 'api'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-api-v2', '.', '-t', 'node16']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-api-v2']
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'my-api-v2', '--image', 'gcr.io/$PROJECT_ID/my-api-v2', '--region', 'us-central1']
Any help would be greatly appreciated. Thanks.

Answering this as community wiki, as mentioned in the comments above:
Cloud Run runs containers, not Node.js versions. The Node.js version is the one you define in your Dockerfile, and it's the same whatever the runtime environment; the engines field in package.json does not control the version installed in the image, and optional chaining (?.) requires Node 14 or later. Also note that gen1 is a sandboxed runtime environment; try using the 2nd gen execution environment.
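Since the image determines the Node version, a minimal Dockerfile sketch pinning Node 16 might look like the following (the app path and entry file are assumptions based on the log above, not confirmed by the question):

```dockerfile
# Pin a Node release that understands optional chaining (Node 14+)
FROM node:16-slim

WORKDIR /usr/src/app

# Install dependencies first to take advantage of layer caching
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and start it
COPY . .
CMD ["node", "app.js"]
```

With a pinned base image, the runtime version no longer depends on whatever `node:latest` happened to be when the image was last built.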

Related

Filter build steps by branch - Google Cloud Platform

I have the below steps:
steps:
  # This step shows the version of Gradle
  - id: Gradle Install
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["--version"]
  # This step builds the gradle application
  - id: Build
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["build"]
  # This step publishes the application
  - id: Publish
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["publish"]
I want to run the last step only on the master branch.
I found one link related to this: https://github.com/GoogleCloudPlatform/cloud-builders/issues/138
It uses a bash command; how can I put the gradle command inside the bash invocation?
Update
After the suggested answer I updated the step as:
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "bash"
  args:
    - "-c"
    - |
      [[ "$BRANCH_NAME" == "develop" ]] && gradle publish
The build pipeline failed with the below exception:
Starting Step #2 - "Publish"
Step #2 - "Publish": Already have image: gradle:7.4.2-jdk17-alpine
Finished Step #2 - "Publish"
ERROR
ERROR: build step 2 "gradle:7.4.2-jdk17-alpine" failed: starting step container failed: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
The suggested solution didn't work for me; I had to write the below code:
# This step publishes the application
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "sh"
  args:
    - -c
    - |
      if [ "$BRANCH_NAME" = "master" ]
      then
        echo "Branch is = $BRANCH_NAME"
        gradle publish
      fi
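Beyond the missing bash binary in the Alpine image, the two forms also differ in the step's exit code, and Cloud Build marks a step failed whenever its shell exits non-zero. A quick sketch in plain POSIX sh (outside Cloud Build) illustrates why the bare && chain fails the step on non-matching branches while the if form passes:

```shell
BRANCH_NAME=develop   # pretend this build runs on a non-matching branch

# Bare && chain: when the test is false, the whole command exits 1,
# so Cloud Build would mark the step as failed.
rc=0
( [ "$BRANCH_NAME" = "master" ] && echo "gradle publish" ) || rc=$?
echo "chain exit code: $rc"        # 1 -> step fails

# if/then with no else: exits 0 whether or not the branch matches,
# so the step succeeds on every branch and only publishes on master.
if [ "$BRANCH_NAME" = "master" ]; then
  echo "gradle publish"
fi
echo "if-form exit code: $?"       # 0 -> step succeeds
```

This is why the if/then variant above is the safer pattern for steps that should be silently skipped on other branches.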
The current workarounds are the following:
Using different cloudbuild.yaml files for each branch
Overriding the entrypoint and injecting bash, as mentioned in the link:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Here's a convenient pattern to use for embedding shell scripts in cloudbuild.yaml."
    echo "This step only pushes an image if this build was triggered by a push to master."
    [[ "$BRANCH_NAME" == "master" ]] && docker push gcr.io/$PROJECT_ID/image
This tutorial outlines an alternative where you check in different cloudbuild.yaml files on different development branches.
You can try the following for the Gradle command, as mentioned by bhito:
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: sh
  args:
    - -c
    - |
      [[ "$BRANCH_NAME" == "master" ]] && gradle publish
Cloud Build supports configuring triggers by branch, tag, and PR. This lets you define different build configs for different repo events, e.g. one for PRs, another for deploying to prod. You can refer to the documentation on how to create and manage triggers.
You can check this blog for updates on additional features, and go through the release notes for more Cloud Build updates.
To gain more insight into Gradle, you can refer to the link.

Google Cloud Build - Multiple Environments

In my app, I have the following:
app.yaml
cloudbuild.yaml
I use the above for the first time to deploy the default service.
app.qa.yaml
cloudbuild_qa.yaml
app.staging.yaml
cloudbuild_staging.yaml
app.prod.yaml
cloudbuild_prod.yaml
They all reside at the root of the app.
For instance, the cloudbuild_qa.yaml is as follows:
steps:
  - name: node:14.0.0
    entrypoint: npm
    args: ['install']
  - name: node:14.0.0
    entrypoint: npm
    args: ['run', 'prod']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'app', 'deploy', '--project', '$PROJECT_ID', '-q', '$_GAE_PROMOTE', '--version', '$_GAE_VERSION', '--appyaml', 'app.qa.yaml']
timeout: '3600s'
The Cloud Build runs well; however, it doesn't respect app.qa.yaml: it always takes the default app.yaml.
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [test-project]
target service: [default]
target version: [qa]
target url: [https://test-project.uc.r.appspot.com]
Any idea what's happening? Do you know how to use the correct app.yaml file in such a case?
Remove the standalone '--appyaml', entry from the args list and pass the file as a single '--appyaml=app.qa.yaml' argument instead.
However, I'm not sure it is good practice to have a different deployment file per environment: when you update something in one place, you could forget to update the same thing in the other files.
Have you thought about replacing placeholders in the files, or using substitution variables in Cloud Build?
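For example, a single cloudbuild.yaml could select the app file through a substitution, so only the trigger differs per environment. A minimal sketch (the _ENV and _GAE_VERSION names are illustrative, not from the question):

```yaml
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', '--project', '$PROJECT_ID', '--version', '${_GAE_VERSION}', '--appyaml=app.${_ENV}.yaml']
substitutions:
  _ENV: 'qa'          # overridden per trigger: qa, staging, prod
  _GAE_VERSION: 'qa'
```

Each environment's trigger then just overrides the substitution values instead of carrying its own cloudbuild file.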
In our build we are using:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', '--appyaml=app-qa.yaml', '--no-promote', '--version=${_TAG_VERSION}']
FYI:
I've noticed you are building your application with the node builder, but you could also add a gcp-build script to your package.json: gcloud app deploy looks for a script named gcp-build and executes it before deploying.
{
  "scripts": {
    ...
    "build": "tsc",
    "start": "node -r ./tsconfig-paths-dist.js dist/index.js",
    "gcp-build": "npm run build"
  }
}
Reference: https://cloud.google.com/appengine/docs/standard/nodejs/running-custom-build-step

Errors when trying to run an AWSPowerShellModuleScript@1 in Azure DevOps pipeline

I currently have an Azure Devops pipeline to build and deploy a next.js application via the serverless framework.
Upon reaching the AWSPowerShellModuleScript@1 task I get these errors:
[warning]MSG:UnableToDownload "https:..." ""
[warning]Unable to download the list of available providers. Check
your internet connection.
[warning]Unable to download from URI 'https:...' to ''.
[error]No match was found for the specified search criteria for the
provider 'NuGet'. The package provider requires 'PackageManagement'
and 'Provider' tags. Please check if the specified package has the
tags.
[error]No match was found for the specified search criteria and
module name 'AWSPowerShell'. Try Get-PSRepository to see all
available registered module repositories.
[error]The specified module 'AWSPowerShell' was not loaded because no
valid module file was found in any module directory.
I do have the AWS Toolkit installed, and it's visible when I go to manage extensions within Azure DevOps.
My pipeline:
trigger: none
stages:
  - stage: develop_build_deploy_stage
    pool:
      name: Default
      demands:
        - msbuild
        - visualstudio
    jobs:
      - job: develop_build_deploy_job
        steps:
          - checkout: self
            clean: true
          - task: NodeTool@0
            displayName: Install Node
            inputs:
              versionSpec: '12.x'
          - script: |
              npm install
              npx next build
            displayName: Install Dependencies and Build
          - task: CopyFiles@2
            inputs:
              Contents: 'build/**'
              TargetFolder: '$(Build.ArtifactStagingDirectory)'
          - task: PublishBuildArtifacts@1
            displayName: Publish Artifact
            inputs:
              pathtoPublish: $(Build.ArtifactStagingDirectory)
              artifactName: dev_artifacts
          - task: AWSPowerShellModuleScript@1
            displayName: Deploy to Lambda@Edge
            inputs:
              awsCredentials: '###'
              regionName: '###'
              scriptType: 'inline'
              inlineScript: 'npx serverless --package dev_artifacts'
I know I can use the ubuntu vmImage and then make use of the AWSShellScript task, but the build agent available to me doesn't support bash.

Google Cloud Build - Terraform Self-Destruction on Build Failure

I'm currently facing an issue with my Google Cloud Build for CI/CD.
First, I build new docker images of multiple microservices and use Terraform to create the GCP infrastructure for the containers that they will also live in production.
Then I perform some Integration / System Tests and if everything is fine I push new versions of the microservice images to the container registry for later deployment.
My problem is that the Terraformed infrastructure doesn't get destroyed if the Cloud Build fails.
Is there a way to always execute a Cloud Build step even if some previous steps have failed? Here I would want to always execute "terraform destroy".
Or, specifically for Terraform, is there a way to define a self-destructing Terraform environment?
cloudbuild.yaml example with just one docker container:
steps:
  # build fresh ...
  - id: build
    name: 'gcr.io/cloud-builders/docker'
    dir: '...'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/staging/...:latest', '-t', 'gcr.io/$PROJECT_ID/staging/...:$BUILD_ID', '.', '--file', 'production.dockerfile']
  # push
  - id: push
    name: 'gcr.io/cloud-builders/docker'
    dir: '...'
    args: ['push', 'gcr.io/$PROJECT_ID/staging/...']
    waitFor: [build]
  # setup terraform
  - id: terraform-init
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['init']
    waitFor: [push]
  # deploy GCP resources
  - id: terraform-apply
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['apply', '-auto-approve']
    waitFor: [terraform-init]
  # tests
  - id: tests
    name: 'python:3.7-slim'
    dir: '...'
    waitFor: [terraform-apply]
    entrypoint: /bin/sh
    args:
      - -c
      - 'pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate'
  # remove GCP resources
  - id: terraform-destroy
    name: 'hashicorp/terraform:0.12.28'
    dir: '...'
    args: ['destroy', '-auto-approve']
    waitFor: [tests]
Google Cloud Build doesn't yet support allow_failure or a similar mechanism, as mentioned in this unsolved but closed issue.
What you can do instead, as mentioned in the linked issue, is chain shell conditional operators.
If you want to run a command on failure, you can do something like this:
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
    - -c
    - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || echo "This failed!"
This would run your tests as normal and then echo This failed! to the logs if the tests fail. If you want to run terraform destroy -auto-approve on failure, replace the echo "This failed!" with terraform destroy -auto-approve. You will also need the Terraform binary in the Docker image you are using, so you will need a custom image that has both Python and Terraform in it for this to work.
- id: tests
  name: 'example-custom-python-and-terraform-image:3.7-slim-0.12.28'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
    - -c
    - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || { terraform destroy -auto-approve; false; }
The above job also runs false after the destroy so that the step still returns a non-zero exit code and is marked as failed, rather than succeeding just because terraform destroy succeeded. The braces matter: with a bare ; false outside a group, the false would run even when the tests pass and fail the step unconditionally.
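The cleanup-on-failure pattern can be sketched in plain POSIX sh with stand-ins for pytest and terraform (names here are placeholders, not the real tools):

```shell
# run_step mimics the Cloud Build step: run the tests, and on failure
# run the cleanup, then force a non-zero exit so the step is marked failed.
run_step() {
  $1 || { echo "destroying infra"; false; }
}

# Passing tests: cleanup is skipped and the step exits 0.
rc=0; run_step true  || rc=$?
echo "tests pass -> exit $rc"

# Failing tests: cleanup runs, and the step still exits non-zero.
rc=0; run_step false || rc=$?
echo "tests fail -> exit $rc"
```

The brace group keeps both the cleanup and the forced failure on the error path only, which is exactly the behavior the step above needs.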
An alternative to this would be to use something like Test Kitchen, which will automatically stand up the infrastructure, run the necessary verifiers, and then destroy it at the end, all in a single kitchen test command.
It's probably also worth mentioning that your pipeline is entirely serial, so you don't need to use waitFor. This is mentioned in the Google Cloud Build documentation:
A build step specifies an action that you want Cloud Build to perform. For each build step, Cloud Build executes a docker container as an instance of docker run. Build steps are analogous to commands in a script and provide you with the flexibility of executing arbitrary instructions in your build. If you can package a build tool into a container, Cloud Build can execute it as part of your build. By default, Cloud Build executes all steps of a build serially on the same machine. If you have steps that can run concurrently, use the waitFor option.
and
Use the waitFor field in a build step to specify which steps must run before the build step is run. If no values are provided for waitFor, the build step waits for all prior build steps in the build request to complete successfully before running. For instructions on using waitFor and id, see Configuring build step order.
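As an illustration of that doc passage, a sketch where two builds run concurrently while a later step waits only on the one it needs (image names and paths here are illustrative, not from the question):

```yaml
steps:
  - id: build-a
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/a', 'a/']
    waitFor: ['-']          # '-' means start immediately
  - id: build-b
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/b', 'b/']
    waitFor: ['-']          # runs concurrently with build-a
  - id: push-a
    name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/a']
    waitFor: ['build-a']    # only depends on build-a, not build-b
```

In a fully serial pipeline like the one above, omitting waitFor entirely gives the same ordering with less noise.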

Google Cloud Builder - how to trigger build configuration in a subdirectory?

I'm trying to establish a Google Cloud Builder Build Trigger to autobuild and deploy my ASP .NET Core application to Google AppEngine.
Using the current cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/dotnet'
    args: ['publish', '-c', 'Release']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', './bin/Release/netcoreapp2.1/publish/app.yaml']
I have tested the build locally using the cloud-build-local tool.
These two approaches worked locally:
From the application subdirectory: cloud-build-local --config=cloudbuild.yaml --dryrun=false .
From the repository root: cloud-build-local --config=clearbooks-rest-aspnetcore/cloudbuild.yaml --dryrun=false clearbooks-rest-aspnetcore
The Build Trigger definition seems to partially support config files in a subdirectory of the repository root (approach no. 2), but it seems to assume that the code always lives in the repository root.
How do I configure Cloud Builder to start a build in a subdirectory of the repository?
The solution is to update cloudbuild.yaml:
Add the dir: option to the build step
Provide the correct app.yaml location in the deploy step
Here is the working cloudbuild.yaml:
steps:
  - name: 'gcr.io/cloud-builders/dotnet'
    args: ['publish', '-c', 'Release']
    dir: 'clearbooks-rest-aspnetcore'
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', 'clearbooks-rest-aspnetcore/bin/Release/netcoreapp2.1/publish/app.yaml']
When testing locally, run cloud-build-local from the repository root, never from the app subdirectory:
cloud-build-local --config=clearbooks-rest-aspnetcore/cloudbuild.yaml --dryrun=false .
This reflects the way Cloud Build works: the --config path points at the correct cloudbuild.yaml, and the current directory supplies the source.
I was developing a sample project with Spring Boot on App Engine, and the directory structure is:
google-cloud
  - appengine-spring-boot
  - appflexengine-spring-boot
Below is the cloudbuild.yaml file that works for me:
steps:
  - name: 'gcr.io/cloud-builders/mvn'
    dir: "appengine-spring-boot"
    #args: [ 'package', '-f', 'pom.xml', '-Dmaven.test.skip=true' ]
    args: [ 'clean', 'package' ]
  - name: "gcr.io/cloud-builders/gcloud"
    dir: "appengine-spring-boot"
    args: [ "app", "deploy" ]
timeout: "1600s"