Using Google Cloud Secret as environment variables in Google Cloud Build

I'm deploying my Node apps to Google Cloud Run using Cloud Build and I want to run some tests during the build. My tests require some environment variables, so I have been following this guide to achieve this.
The guide makes the following note:
Note: To use the secret in an environment variable, you need to prefix
the variable name with an underscore "_" and escape the value using
'$$'. For example: _VARIABLE_NAME=$(cat password.txt) && echo -n
$$_VARIABLE_NAME.
However, I am not quite sure how to implement this.
I have attempted the following in my cloudbuild.yaml.
- id: Execute tests
  name: node
  args: ['_VAR_ONE=$(cat var-one.txt)', '_VAR_TWO=$(cat var-two.txt)', 'jest -V']
Which returns the following: Error: Cannot find module '/workspace/_VAR_ONE=$(cat var-one.txt)'.
I also tried a few variations of the escape that the above note mentions, but they result in the same error.
What's the best way to get the secrets into my code as environment variables?
Also, if I need to use multiple environment variables, is it better to use Cloud KMS with an .env file?
Thanks!

It looks like you are incorrectly using the entrypoint provided by the node image. You are effectively running the command:
node _VAR_ONE=$(cat var-one.txt) _VAR_TWO=$(cat var-two.txt) jest -V
I want to digress for a moment and note that this pattern does not work in Node; you need to specify the environment variables first, before calling node, for example: VAR_ONE=$(cat foo.txt) VAR_TWO=bar npm run test
Anyway, I think what you want to run is:
_VAR_ONE=$(cat var-one.txt) _VAR_TWO=$(cat var-two.txt) jest -V
Here is how we will do that. Assuming you have a previous step that writes the contents of the secrets out to the files var-one.txt and var-two.txt, this is how you would use them in the node step; it's just the standard way you pass environment variables when running a command from the command line:
- id: Execute tests
  name: node
  entrypoint: '/bin/bash'
  args: [
    '-c',
    '_VAR_ONE=$(cat var-one.txt) _VAR_TWO=$(cat var-two.txt) jest -V'
  ]
You need to ensure that in the Node environment you reference the variables as specified (i.e. process.env._VAR_ONE or process.env._VAR_TWO). I don't think you need the _ prefix here, but I haven't tested it to confirm that. You can try the above and it should get you much further, I think.
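For completeness, a hedged sketch of what such a preceding step could look like with Cloud KMS (the pattern the referenced guide uses); the keyring and key names below are assumptions, not from the question:

- id: Decrypt var-one
  name: gcr.io/cloud-builders/gcloud
  args: ['kms', 'decrypt',
         '--ciphertext-file=var-one.txt.enc',
         '--plaintext-file=var-one.txt',
         '--location=global',
         '--keyring=my-keyring',   # assumed keyring name
         '--key=my-key']           # assumed key name
# a second, analogous step would decrypt var-two.txt.enc to var-two.txt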

Related

Where do I tell AWS SAM which file to choose depending on the stage/environment?

In app.js, I want to require a different "config" file depending on the stage/account.
For example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm now looking at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one fits my use case.
There is the config file, but I have no idea how to configure it since I'm in a pipeline context; where would I instruct my process to use the correct JSON?
There are also parameters and mappings.
My JSON is not just a few vars; it's a somewhat complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single variable containing the filename that I want to use could do the job.
But I have no idea how to tell which stage of deployment I'm currently in, or how to pass that value so I can access it from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. The following solved it for me:
try building your project first using the sam build command.
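For the stage-selection part of the question, here is a minimal sketch of the parameters route the question mentions; the Stage parameter, STAGE variable, and MyFunction logical ID are hypothetical names for illustration:

Parameters:
  Stage:
    Type: String
    Default: dev
    AllowedValues: [dev, prod]

Resources:
  MyFunction:                      # hypothetical logical ID
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          STAGE: !Ref Stage        # exposed to the code at runtime

You could then deploy with sam deploy --parameter-overrides Stage=prod and, inside app.js, pick the file with something like const config = require(`./config-${process.env.STAGE}.json`);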

GCP Cloud Build - what is the _REPO_NAME variable?

The Cloud Build Building Python applications example has the lines below, in which the _REPO_NAME variable is used.
# [START cloudbuild_python_image_yaml]
# Docker Build
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t',
         'us-central1-docker.pkg.dev/${PROJECT_ID}/${_REPO_NAME}/myimage:${SHORT_SHA}', '.']
The Substituting variable values documentation lists $REPO_NAME ("the name of your repository") but does not mention _REPO_NAME.
Please help me understand where it is defined and what it is.
With Cloud Build, all user-managed substitution variables start with _. If your variable doesn't start with _, like $REPO_NAME, it's an auto-generated substitution variable.
Therefore, in your example, you have to provide $_REPO_NAME (and $_BUCKET_NAME) when you start your Cloud Build process; otherwise it will fail because it doesn't know the variable.
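For example, a hedged sketch of supplying user-defined substitutions when starting a build manually (the values are placeholders):

gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_REPO_NAME=my-repo,_BUCKET_NAME=my-bucket .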
Why use $_REPO_NAME instead of $REPO_NAME? IMO, it's a mistake. A developer trick is to replace the auto-generated variable with a user-managed one during testing; like that, you don't have to push new code to git to test your build pipeline, you simply set that variable manually (with a gcloud command).
It might have been forgotten when that code example was released. Just an assumption.

How do you add a custom kustomize transformer?

I'd like to add a transformer to my kustomize setup: one that includes a dynamic value for the case where the local tag needs to override the production tag. If there's a simpler and better way to do this, that would be great. I've looked through the list of transformers and generators to see whether there was a way to provide this value at runtime (though I think kustomize is specifically designed to never use runtime values).
I can specify something like this:
images:
- name: my-image
  newTag: my-sha1
The problem is changing the my-sha1 value after each new local build so that image is picked up when I apply the local deployment.
How can I set newTag after I run a build locally to match the tag of the image I just built? I can easily obtain the latest build tag and provide it to kubectl apply -f, but I'm not seeing a flag, environment variable, or anything similar to do so.
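One approach worth trying (a hedged sketch, not from the original thread) is the kustomize CLI's edit set image subcommand, which rewrites the images stanza of kustomization.yaml in place; my-image and the git-derived tag are assumptions:

# run from the directory containing kustomization.yaml
kustomize edit set image my-image=my-image:$(git rev-parse --short HEAD)
kubectl apply -k .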

Explain how `<<: *name` makes a reference to `&name` in docker-compose?

I'm trying to understand how this docker-compose file was put together, as I want to replicate it in a Kubernetes deployment YAML file.
In reference to a cookiecutter-django's docker-compose production.yaml file:
...
services:
  django: &django
    ...
By docker-compose design, the name of the service here is already defined as django, but then I noticed this extra bit, &django. This made me wonder why it's here. Further down, I noticed the following:
...
  celeryworker:
    <<: *django
...
I don't understand how that works. The docker-compose docs have no reference to or mention of using <<, let alone making a reference to a named service like *django.
Can anyone explain how the above works, and how I can replicate it in a Kubernetes deployment or service YAML file (or both), if possible?
Edit:
The question that @jonsharpe shared was similar, but the answer wasn't clear to me on how it's used.
There are three different things happening here, and none of them is compose-specific syntax; rather, they are all YAML syntax.
First is defining an anchor with the & followed by a name. That's similar to defining a variable to use later in the yaml, with the value matching the value of the yaml object where it appears.
Next is the alias, specified with * and the same name as the anchor. That uses the anchor in the second location in the yaml file.
Last is a mapping merge using the << syntax, which merges all of the mapped values in the alias with the rest of the values in the current map, allowing you to override values in the saved anchor with values specific to that section of the compose file.
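Putting the three together, a hedged, minimal illustration (the image and command values are placeholders, not from the cookiecutter file):

services:
  django: &django            # anchor: saves this mapping under the name "django"
    image: myapp:latest
    env_file: .env

  celeryworker:
    <<: *django              # alias + merge: copies image and env_file from the anchor
    command: celery worker   # keys specific to this service override or extend it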
To dig more into this, try searching on "yaml anchors and aliases". The first hit for me is this blog post: https://medium.com/@kinghuang/docker-compose-anchors-aliases-extensions-a1e4105d70bd

Spring XD Dynamic Module ClassLoader Issue

According to the documentation, we can load JARs dynamically at module creation time via the module.classloader attribute in the .properties file:
http://docs.spring.io/spring-xd/docs/1.3.1.RELEASE/reference/html/#module-class-loading
I spent two days trying to test this feature; it does not work. The module.classloader option seems to be simply ignored.
I did not find any string named module.classloader in the XD code, but I found another one called module.classpath in this class:
https://github.com/spring-projects/spring-xd/blob/master/spring-xd-module/src/main/java/org/springframework/xd/module/options/ModuleUtils.java
The code in the above class seems to match the documentation, but unfortunately it does not work either. My classes are not found, and I get java.lang.ClassNotFoundException.
I have a module option named dir4jars where I put the JARs to load at creation time (when I issue job create --name xx --definition ..). It's a directory, and I have tested the following possibilities with both module.classpath and module.classloader:
module.classpath=${dir4jars}/*.jar
module.classloader=${dir4jars}/*.jar
job create --name jobName --definition "myJobModuleName --dir4jars=C:/ELS/Flash/libxd" --deploy
and
job create --name jobName --definition "myJobModuleName --dir4jars=file:C:/ELS/Flash/libxd" --deploy
I need the dir4jars to be absolute and outside XD home.
So my questions:
What's the right option to use for this dynamic load: module.classpath or module.classloader?
How can I set an absolute directory as I mentioned above?
Many thanks.
I think it has to be module.classpath; module.classloader looks like a mistake in the documentation. Does it work when you explicitly use module.classpath=file:C:/ELS/Flash/libxd?
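If it helps, a minimal sketch of how that might look in the module's .properties file, combining the file: prefix above with the dir4jars option from the question (an untested assumption on my part):

# myJobModuleName.properties - untested sketch
module.classpath=file:${dir4jars}/*.jar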
As a side note: Please consider using Spring Cloud Data Flow which is the successor of Spring XD.