I have two config files, serverless.yml and env.yml, and when I try to deploy I get an error: the Lambda functions cannot be deployed.
serverless.yml
---omitted---
provider:
  environment: ${file(env.yml):${self:custom.stage}}
---omitted---
env.yml
---omitted---
dev:
  keyzero: "valuezero"
  keyone:
    keyoneone: "valueoneone"
    keyonetwo: "valueonetwo"
    keyonethree: "valueonethree"
---omitted---
ERROR:
Serverless: Operation failed!
Serverless Error ---------------------------------------
An error occurred: PingLambdaFunction - Value of property Variables
must be an object with String (or simple type) properties.
You need to specify which value from env.yml you want to use.
In your example, if you want to get the value of keyonetwo, you'd use
${file(env.yml):${opt:stage}.keyone.keyonetwo}
Which would yield valueonetwo
Also, check out the documentation on how environment variables are referenced.
You need to set each environment variable, so you'd need
provider:
  environment:
    keyoneone: ${file(env.yml):${opt:stage}.keyone.keyoneone}
    keyonetwo: ${file(env.yml):${opt:stage}.keyone.keyonetwo}
Environment variables cannot be objects; they are simply key-value pairs where the value must be of a primitive type (string/number/boolean/null).
Your keyone variable is an object, which is why it throws the error "Value of property Variables must be an object with String (or simple type) properties".
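If you would rather keep a single environment: reference instead of listing every key, another option is to flatten env.yml so that every value is a primitive; a sketch, reusing the keys from the example above:
dev:
  keyzero: "valuezero"
  keyoneone: "valueoneone"
  keyonetwo: "valueonetwo"
  keyonethree: "valueonethree"
With that shape, environment: ${file(env.yml):${self:custom.stage}} resolves to plain string properties and the validation error goes away.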
I define some variables in the env/variables section and then change a value in phases/pre_build. I want to use the variable down in artifacts, but it looks like the change is not persisted.
This is a legacy Windows .NET Framework 4.7.2 application getting deployed to IIS.
My buildspec.yml file:
version: 0.2
env:
  variables:
    APPNAME: DummyApp
    BRANCH: manual
phases:
  pre_build:
    commands:
      - echo "start BRANCH = ${BRANCH}"
      - echo "CODEBUILD_WEBHOOK_HEAD_REF = ${env:CODEBUILD_WEBHOOK_HEAD_REF}"
      # CODEBUILD_WEBHOOK_HEAD_REF is null when the build is triggered from the console as opposed to a webhook
      - if (${CODEBUILD_WEBHOOK_HEAD_REF}) { ${BRANCH} = ($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')) }
      - echo "after BRANCH = ${env:BRANCH}"
  build:
    commands:
      - echo "build commands happen here"
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  # not sure why this doesn't work down here; are changes in the phases section above not propagated?
  name: ${env:APPNAME}/${env:APPNAME}-${env:BRANCH}.zip
  discard-paths: yes
The value of $CODEBUILD_WEBHOOK_HEAD_REF is "refs/heads/develop".
The value of $BRANCH after the replace statement is "develop".
The value of my artifact in S3 is "DummyApp/DummyApp-manual.zip".
I want the artifact named "DummyApp/DummyApp-develop.zip".
Some sort of scoping issue?
Saw various indications that this is not possible.
https://blog.shikisoft.com/define-environment-vars-aws-codebuild-buildspec/
The crucial thing you should note here is that you can only assign literal values to the environment variables declared this way. You cannot assign dynamic values at runtime. If you would like to change the value of the <...> variable above, you have to change your buildspec file and push your changes to your repository again. So it is like hardcoding parameter values. But it is better than typing them in all the commands needed in the phases section.
In addition to simply trying to set the local variable in pre_build, I tried a number of approaches, including
running a custom PowerShell script to parse the branch name as the first step in pre_build
running the command in the variable declaration itself
calling the PowerShell SetEnvironmentVariable method
The thing that seems to work is using the replace command down in the artifact/name itself:
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  name: ${env:APPNAME}/${env:APPNAME}-$($CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '')).zip
  discard-paths: yes
created this artifact: DummyApp\DummyApp-develop.zip
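Since CODEBUILD_WEBHOOK_HEAD_REF is null for console-triggered builds, the inline expression can also be guarded so those builds fall back to the static BRANCH default. A sketch under that assumption (untested), relying on the artifacts name field accepting a PowerShell expression just as the snippet above does:
artifacts:
  files:
    - .\Dummy\bin\Debug\*
  # falls back to the static BRANCH value ("manual") when no webhook ref is present
  name: ${env:APPNAME}/${env:APPNAME}-$(if ($CODEBUILD_WEBHOOK_HEAD_REF) { $CODEBUILD_WEBHOOK_HEAD_REF.replace('refs/heads/', '') } else { $BRANCH }).zip
  discard-paths: yes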
I (try to) deploy my current application using CDK pipelines.
In doing so, I stumbled across an unexpected behavior (here, if interested) which I am now trying to resolve. I have a Lambda function whose asset is a directory that is dynamically generated during a CodeBuild step. The line is currently defined like this in my CDK stack:
code: lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda || "")
The issue is that locally this triggers the unexpected and undesired behavior, because the environment variable does not exist and therefore falls back to the default "".
What is the proper way to avoid this issue?
Thanks!
Option 1: Set the env var locally, pointing to the correct source directory:
CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda cdk deploy
(The variable must be on the same line as the command, or exported first; CODEBUILD_SRC_DIR_BuildLambda=path/to/lambda && cdk deploy would only set a shell variable without passing it into the cdk process.)
Option 2: Define a dummy asset if CODEBUILD_SRC_DIR_BuildLambda is undefined:
code: process.env.CODEBUILD_SRC_DIR_BuildLambda
? lambda.Code.fromAsset(process.env.CODEBUILD_SRC_DIR_BuildLambda)
: new lambda.InlineCode('exports.handler = async () => console.log("NEVER")'),
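A third option (not in the original answer) is to fail fast instead of synthesizing a placeholder, so a local cdk synth without the variable errors out immediately; a minimal sketch:
// read once, narrowing the type from string | undefined to string
const lambdaSrc = process.env.CODEBUILD_SRC_DIR_BuildLambda;
if (!lambdaSrc) {
  // fail the synth early rather than silently bundling an empty or dummy asset
  throw new Error("CODEBUILD_SRC_DIR_BuildLambda is not set; set it locally or run inside the pipeline");
}
// ...later, in the function props:
code: lambda.Code.fromAsset(lambdaSrc)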
How do I specify a different "Executed Function" in my cloudbuild.yaml file than the actual function name in GCP?
For example:
I have a Cloud Function written in Python called hello_world.
In my GCP deployment I want to name the function hello-world-dev and hello-world-prod, with the suffix passed in dynamically via the trigger's substitution variables on build.
The build fails because it expects the function in the source to be called hello-world-dev (or whatever the deployed name is).
I'm sure there's a flag to specify the executed function, but I haven't found it.
My cloudbuild.yaml file looks like this:
#hello-world
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args:
    - gcloud
    - functions
    - deploy
    - hello-world-${_ENV}
    - --region=us-west2
    - --source=./demo/hello-world/
    - --trigger-http
    - --runtime=python39
Steps I've Tried
I've tried the following flags:
--function
--call
--deploy
Looking at this documentation: https://cloud.google.com/functions/docs/deploying
Thanks for any help!
The "executed function" is the name of the function in your code. By default (and this is the cause of your error), when it is not specified, it must be the same as the deployed function name.
If your deployed function name differs from the function in your code, you need to specify the entry point of your function (the function to run in your code). Use the parameter --entry-point=
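Applied to the cloudbuild.yaml above, keeping the Python function hello_world from the question as the entry point, that would look like:
#hello-world
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  args:
    - gcloud
    - functions
    - deploy
    - hello-world-${_ENV}
    - --entry-point=hello_world
    - --region=us-west2
    - --source=./demo/hello-world/
    - --trigger-http
    - --runtime=python39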
I'm using AWS CodePipeline with AWS CodeBuild (to build my Dockerfile and push the image to ECR). So far it is working, but I don't understand how to get my environment variables into the project. I connected my GitHub account to CodePipeline, and for security I didn't push my env values to GitHub. So on GitHub I have an env file like:
config/prod.env
ACCESS_TOKEN_SECRET=
CSRF_TOKEN_SECRET=
ACCESS_TOKEN_PASSWORD=
REFRESH_TOKEN_SECRET=
CLUDINARY_API=
CLUDINARY_API_SECRET=
CLUDINARY_API_NAME=
GOOGLE_AUDIENCE=
ORIGIN=
GOOGLE_TOKEN=
DATABASE_URL=
NODE_ENV=
FORGOTTEN_PASSWORD=
YAHOO_PASSWORD=
Now in AWS CodeBuild there is a section for environment variables (image from the AWS docs).
I have the feeling this is not the right place for my env values, because if I put all my variables into those fields I get this error:
ValidationException
1 validation error detected: Value at 'pipeline.stages.2.member.actions.1.member.configuration' failed to satisfy constraint: Map value must satisfy constraint: [Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
For example:
Name: ACCESS_TOKEN_SECRET
Value: My_SUPER_PASSWORD
With just a few variables there is no error, but with all of them the error appears, regardless of which combination of variables I use.
What am I doing wrong? How can I get my env variables into my Docker image in ECR with CodeBuild and CodePipeline?
To pass variables into a CodeBuild project, you need to set the env: section in your buildspec.yml file, for example:
env:
  variables:
    Execution_ID: $Execution_ID
    Commit_ID: $Commit_ID
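Since the values here are secrets, and the ValidationException comes from a length limit on the pipeline action's configuration, a common alternative is to keep them out of the pipeline definition entirely and let the buildspec pull them from SSM Parameter Store (or Secrets Manager). A sketch, with hypothetical parameter paths:
env:
  parameter-store:
    # hypothetical parameter names; create these in SSM Parameter Store first
    ACCESS_TOKEN_SECRET: /myapp/prod/ACCESS_TOKEN_SECRET
    DATABASE_URL: /myapp/prod/DATABASE_URL
The CodeBuild service role also needs ssm:GetParameters permission on those parameters.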
I am using context to pass values to CDK. Is there currently a way to define a project context file per deployment environment (dev, test), so that as the number of values I have to pass grows, they are easier to manage than passing them all on the command line:
cdk synth --context bucketName1=my-dev-bucket1 --context bucketName2=my-dev-bucket2 MyStack
It would be possible to use one cdk.json context file and only pass the environment as a context value on the command line, then select the correct values depending on it:
{
  ...
  "context": {
    "devBucketName1": "my-dev-bucket1",
    "devBucketName2": "my-dev-bucket2",
    "testBucketName1": "my-test-bucket1",
    "testBucketName2": "my-test-bucket2"
  }
}
But preferably, I would like to split it into separate files, e.g. cdk.dev.json and cdk.test.json, which would contain their corresponding values, and use the correct one depending on the environment.
According to the documentation, CDK will look for context in one of several places. However, there's no mention of defining multiple/additional files.
The best solution I've been able to come up with is to make use of JSON to separate context out per environment:
"context": {
"dev": {
"bucketName": "my-dev-bucket"
}
"prod": {
"bucketName": "my-prod-bucket"
}
}
This allows you to access the different values programmatically depending on which environment CDK is deploying to.
const myEnv = "dev"; // This could be passed in as a property of the class instead and accessed via props.myEnv
const myBucket = new s3.Bucket(this, "MyBucket", {
  bucketName: app.node.tryGetContext(myEnv).bucketName
});
You can also do so programmatically in your code:
For instance, I have a context variable deploy_tag: cdk deploy Stack\* -c deploy_tag=PROD
Then in my code I retrieve that deploy_tag variable and make the decisions there, such as (using Python, but the idea is the same):
bucket_name = BUCKET_NAME_PROD if deploy_tag == 'PROD' else BUCKET_NAME_DEV
This can give you a lot more control, and if you set up a constants file in your code you can keep that up to date with far less clutter in your cdk.json, which can become crowded with larger stacks and multiple environments. If you go this route, you can have Prod and Dev constants files, and your context variable can tell the CDK which file to load for a given deployment.
I also tend to create a new class object with all my deployment properties either assigned or derived, pass that object into each stack, and retrieve what I need from there.
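A minimal sketch of that flow in Python (assuming CDK v2; the constant names and values are hypothetical, standing in for a per-environment constants module):
from aws_cdk import App

# hypothetical per-environment constants, e.g. from a constants.py module
BUCKET_NAME_PROD = "my-prod-bucket"
BUCKET_NAME_DEV = "my-dev-bucket"

app = App()
# returns "PROD" when deployed with: cdk deploy Stack\* -c deploy_tag=PROD
deploy_tag = app.node.try_get_context("deploy_tag")
bucket_name = BUCKET_NAME_PROD if deploy_tag == "PROD" else BUCKET_NAME_DEV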