Specifying a different 'Executed Function' than 'Name' in GCP cloudbuild.yaml - google-cloud-platform

How do I specify a different "Executed Function" in my cloudbuild.yaml file than the actual name of the function in GCP?
For example:
I have a Cloud Function written in Python called hello_world.
In my GCP deployment, I want to name the function hello-world-dev and hello-world-prod, with the environment passed in dynamically through the trigger's substitution variables at build time.
The build fails because it expects the function in the code to be called hello-world-dev (or whatever the deployed name is).
I'm sure there's a flag to specify the executing function, but I haven't found it.
My cloudbuild.yaml file looks like this:
#hello-world
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args:
  - gcloud
  - functions
  - deploy
  - hello-world-${_ENV}
  - --region=us-west2
  - --source=./demo/hello-world/
  - --trigger-http
  - --runtime=python39
Steps I've Tried
I've tried the following flags:
--function
--call
--deploy
Looking at this documentation: https://cloud.google.com/functions/docs/deploying

The "Executed Function" is the name of the function in your code. By default (and this is the cause of your error), if it is not specified, it must match the name of the deployed function.
If your deployed function has a different name than the executed function, you need to specify the entry point of your function (the function to run in your code) with the --entry-point= parameter.
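For example, applied to the deploy step from the question, the fix might look like this (a sketch, assuming the Python function in your source is still named hello_world):
#hello-world
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args:
  - gcloud
  - functions
  - deploy
  - hello-world-${_ENV}
  - --region=us-west2
  - --source=./demo/hello-world/
  - --trigger-http
  - --runtime=python39
  # deployed as hello-world-dev / hello-world-prod, but the code still runs hello_world
  - --entry-point=hello_world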

Related

AWS CodeBuild buildspec - are changes to variables in phases section available in artifacts section?

I define some variables in env/variables, then change their values in phases/pre_build. I want to use the variables down in artifacts, but it looks like the changes are not persisted.
This is a legacy Windows .NET Framework 4.7.2 application getting deployed to IIS.
My buildspec.yml file:
version: 0.2
env:
  variables:
    APPNAME: DummyApp
    BRANCH: manual
phases:
  pre_build:
    commands:
      - echo "start BRANCH = ${BRANCH}"
      - echo "CODEBUILD_WEBHOOK_HEAD_REF = ${env:CODEBUILD_WEBHOOK_HEAD_REF}"
      # CODEBUILD_WEBHOOK_HEAD_REF is null when the build is triggered from the console as opposed to a webhook
      - if (${CODEBUILD_WEBHOOK_HEAD_REF}) { ${BRANCH} = ($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')) }
      - echo "after BRANCH = ${env:BRANCH}"
  build:
    commands:
      - echo "build commands happen here"
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  # not sure why this doesn't work down here; are changes in the phases section above not propagated?
  name: ${env:APPNAME}/${env:APPNAME}-${env:BRANCH}.zip
  discard-paths: yes
The value of $CODEBUILD_WEBHOOK_HEAD_REF is "refs/heads/develop".
The value of $BRANCH after the replace statement is "develop".
The value of my artifact in S3 is "DummyApp/DummyApp-manual.zip".
I want the artifact named "DummyApp/DummyApp-develop.zip".
Some sort of scoping issue?
Saw various indications that this is not possible.
https://blog.shikisoft.com/define-environment-vars-aws-codebuild-buildspec/
The crucial thing you should note here is that you can only assign literal values to the environment variables declared this way. You cannot assign dynamic values at runtime. If you would like to change the value of the <...> variable above, you have to change your buildspec file and push your changes to your repository again. So it is like hardcoding parameter values. But it is better than typing them in all the commands needed in the phases section.
In addition to trying to simply set the local var in pre_build, I tried a number of approaches, including:
running a custom PowerShell script to parse the branch name as the first step in pre_build
running the command in the variable declaration itself
calling the PowerShell SetEnvironmentVariable method
The thing that seems to work is using the replace call down in the artifacts name itself:
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  name: ${env:APPNAME}/${env:APPNAME}-$($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')).zip
  discard-paths: yes
This created the artifact DummyApp\DummyApp-develop.zip.

AWS CodePipeline & CodeBuild - How to add environment variables to the Docker image?

Thanks for any help.
I'm using AWS CodePipeline with AWS CodeBuild (to build my Dockerfile and save the image in ECR). So far it is working, but I don't understand how to get my environment variables into the project. I connected my GitHub account with CodePipeline, and for security I didn't push my envs to GitHub. So on GitHub I have an env file like:
config/prod.env
ACCESS_TOKEN_SECRET=
CSRF_TOKEN_SECRET=
ACCESS_TOKEN_PASSWORD=
REFRESH_TOKEN_SECRET=
CLUDINARY_API=
CLUDINARY_API_SECRET=
CLUDINARY_API_NAME=
GOOGLE_AUDIENCE=
ORIGIN=
GOOGLE_TOKEN=
DATABASE_URL=
NODE_ENV=
FORGOTTEN_PASSWORD=
YAHOO_PASSWORD=
Now in AWS CodeBuild there is a section for environment variables (image from the AWS docs).
I have the feeling this is not the right place for envs, because when I put all my variables into those fields I get this error:
ValidationException
1 validation error detected: Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
For example:
Name: ACCESS_TOKEN_SECRET
Value: My_SUPER_PASSWORD
If I use just a few variables I don't get an error, but with all of them I do (no matter which combination of envs I try).
What am I doing wrong? How can I get my env variables into my Docker image in ECR with CodeBuild & CodePipeline?
To pass variables from the CodeBuild project, you need to set the env: section in the buildspec.yml file, for example:
env:
  variables:
    Execution_ID: $Execution_ID
    Commit_ID: $Commit_ID
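As a usage sketch (not part of the original answer; the variable name and image tag are assumptions), such variables can then be forwarded into the image at build time with docker build --build-arg, with a matching ARG declared in the Dockerfile:
phases:
  build:
    commands:
      # forward a CodeBuild environment variable into the image build
      - docker build --build-arg ACCESS_TOKEN_SECRET=$ACCESS_TOKEN_SECRET -t my-image .
Note that build args end up recorded in the image metadata, so for real secrets it is generally safer to inject them at container runtime instead.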

Lambda container images complain about entrypoint needing handler name as the first argument

When I run an AWS Lambda container (Docker) image, for example:
docker run public.ecr.aws/lambda/java bash
I get the following error:
entrypoint requires the handler name to be the first argument
What should the handler name be?
It depends on the language of the runtime. For example, if it is Node.js, then the handler name should look like:
"app.handler"
If it is Java, then it should look like:
"com.example.LambdaHandler::handleRequest"
The image will look for them in LAMBDA_TASK_ROOT, so you will need to make sure your code (or compiled code) is copied to that folder when you build the image, for example:
COPY target/* ${LAMBDA_TASK_ROOT}
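A minimal Dockerfile for the Java base image might then look like this (a sketch; the image tag, paths, and class name are assumptions):
FROM public.ecr.aws/lambda/java:11
# copy compiled classes and their dependencies into the task root
COPY target/classes ${LAMBDA_TASK_ROOT}
COPY target/dependency/* ${LAMBDA_TASK_ROOT}/lib/
# the handler name is handed to the image's entrypoint as its first argument
CMD ["com.example.LambdaHandler::handleRequest"]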
In my case I wanted to use the Lambda image in GitLab CI.
The solution was to override the base image's entrypoint by adding the following line to my Dockerfile:
ENTRYPOINT ["/bin/bash", "-l", "-c"]
Note that this means the image will no longer be usable in AWS Lambda itself.
The others in this thread are right; I got stuck at the same place with my small Node.js script.
So, for me the right solution was adding ENTRYPOINT ["/lambda-entrypoint.sh", "helloWorldFunction.handler"].
Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14
COPY helloWorldFunction.js package*.json ./
RUN npm install
ENTRYPOINT ["/lambda-entrypoint.sh", "helloWorldFunction.handler"]
CMD [ "helloWorldFunction.handler" ]
lambda-entrypoint.sh is already present in the AWS base image; you don't need to add it manually.
helloWorldFunction.js :
exports.handler = async (event, context) => {
    return "Hello World!";
};
Also, in case you are deploying using Visual Studio and the AWS Toolkit, clear the "image-command" field in your aws-lambda-tools-defaults.json, as it overrides Docker's CMD.

deploying lambda code inside a folder with an autogenerated name

I am trying to set up a Lambda in pulumi-aws, but when deployed my function code is wrapped in a folder with the same name as the generated Lambda function name.
I would prefer not to have this, as it's unnecessary, but more than that it means I can't work out what my handler should be, since the folder name is generated.
(I realise I can probably use a reference to get this generated name, but I don't like the added complexity for no reason. I don't see a good reason for having this folder inside the Lambda.)
E.g. my function code is one simple index.js file with one named export, handler. I would expect my Lambda handler to be index.handler.
(Note I am using TypeScript for my Pulumi code but the Lambda is in JavaScript.)
I have tried a couple of options for the code property:
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset('./lambdas/add-timesheet-entry/index.js'),
    }),
In this example the zip file was simply an index.js with no folder information in the zip.
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.FileArchive("lambdatest.zip"),
AWS Lambda code is always in a "folder" named after the function name; the same wrapper folder appears for a Lambda created directly in the web console (screenshot omitted).
It doesn't affect the naming of the handler though; index.handler is just fine.
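For completeness, a sketch of the full resource with the handler set explicitly (the runtime and role values here are assumptions, not from the question):
const addTimesheetEntryLambda = new aws.lambda.Function("add-timesheet-entry", {
    code: new pulumi.asset.AssetArchive({
        "index.js": new pulumi.asset.FileAsset("./lambdas/add-timesheet-entry/index.js"),
    }),
    runtime: "nodejs18.x",    // assumed runtime
    handler: "index.handler", // unaffected by the console's wrapper folder
    role: lambdaRole.arn,     // hypothetical IAM role defined elsewhere
});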

Putting environment variables into env.yml

I have these config files serverless.yml and env.yml and when I try to deploy, I get an error. The Lambda functions cannot be deployed.
serverless.yml
---omitted---
provider:
  environment: ${file(env.yml):${self:custom.stage}}
---omitted---
env.yml
---omitted---
dev:
  keyzero: "valuezero"
  keyone:
    keyoneone: "valueoneone"
    keyonetwo: "valueonetwo"
    keyonethree: "valueonethree"
---omitted---
ERROR:
Serverless: Operation failed!
Serverless Error ---------------------------------------
An error occurred: PingLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
You need to specify which value from env.yml you want to use.
In your example, if you want to get the value of keyonetwo, you'd use:
${file(env.yml):${opt:stage}.keyone.keyonetwo}
which would yield valueonetwo.
Also, check out the documentation on how to reference environment variables.
You need to set each environment variable individually, so you'd need:
provider:
  environment:
    keyoneone: ${file(env.yml):${opt:stage}.keyone.keyoneone}
    keyonetwo: ${file(env.yml):${opt:stage}.keyone.keyonetwo}
Environment variables cannot be objects; they are simply key-value pairs where the value must be of a primitive type (string/number/boolean/null).
Your keyone variable is an object, which is why it throws the error "Variables must be an object with String (or simple type) properties".
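If you would rather keep assigning the whole map at once, as in the original serverless.yml, another option is to flatten env.yml so that every value is a simple type (a sketch based on the keys from the question):
dev:
  keyzero: "valuezero"
  keyoneone: "valueoneone"
  keyonetwo: "valueonetwo"
  keyonethree: "valueonethree"
With that layout, environment: ${file(env.yml):${self:custom.stage}} resolves to a flat map of strings, which CloudFormation accepts.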