I am creating AWS resources using CloudFormation nested stacks, and the pipeline runs in GitLab.
Resources:
  CFResource:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: <local path of yaml file>
      Parameters:
When I run cfn-lint on the parent stack, I get the warning below.
W3002 This code may only work with package cli command as the property (/TemplateURL) is a string
Can you help me solve this?
It's a warning to let you know that you can't deploy this template directly without first packaging it. You can suppress it by passing -i W3002 to cfn-lint.
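For example, assuming the parent template is template.yaml (the bucket name below is a placeholder):
# Lint the parent stack while ignoring the packaging warning
cfn-lint template.yaml -i W3002

# Package before deploying: this uploads the local TemplateURL files
# to S3 and rewrites them to S3 URLs in the output template
aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged.yaml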
I'm setting an environment variable in my AWS Batch job definition like so:
BatchJobDef:
  Type: 'AWS::Batch::JobDefinition'
  Properties:
    Type: container
    JobDefinitionName: xxxxxxxxxx
    ContainerProperties:
      Environment:
        - Name: 'PROC_ENV'
          Value: 'dev'
When I look at my job definition, I can see it listed in the environment variables configuration.
Then I'm trying to access it in my job's Python code like this:
env = os.environ['PROC_ENV']
but there is no PROC_ENV variable set, and I get the following error when I run my job:
raise KeyError(key) from None
KeyError: 'PROC_ENV'
Can anyone tell me what I'm missing here? Am I accessing this environment variable the correct way?
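One way to narrow this down (a debugging sketch, not from the question; the 'unset' fallback is only illustrative) is to dump the environment the container actually receives at startup:
import os

# Print every environment variable the job actually received,
# to verify whether PROC_ENV reached the container at all.
for key, value in sorted(os.environ.items()):
    print(f"{key}={value}")

# A lookup that won't raise KeyError when the variable is missing:
env = os.environ.get('PROC_ENV', 'unset')
print(f"PROC_ENV={env}")
If PROC_ENV is absent from the dump, the variable never made it into the container, e.g. because the job ran against an older job definition revision.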
Here's my problem: I'm currently struggling to run a basic shell script execution action in my pipeline. The pipeline was created through the Pipeline construct in @aws-cdk/aws-codepipeline:
import { Artifact, IAction, Pipeline } from "@aws-cdk/aws-codepipeline";

const pipeline = new Pipeline(this, "backend-pipeline", {
  ...
});
Now, I'm running a cross-deployment pipeline and would like to invoke a lambda right after it's been created. Previously, a simple ShellScriptAction from the older @aws-cdk/pipelines package would have sufficed, but for some reason the pipelines and aws-codepipeline packages are both maintained at the same time.
What I would like to know is how to run a simple command in the new (aws-codepipeline) package, ideally as an Action in a Stage.
Thanks in advance!
You would use a codebuild.PipelineProject in a codepipeline_actions.CodeBuildAction to run arbitrary shell commands in your pipeline. The CDK has several build tool constructs*, used in different places: pipelines.CodePipeline specializes in deploying CDK apps, while the lower-level codepipeline.Pipeline has broader build capabilities:
Build tool construct       | CloudFormation Resource | Can use where?
---------------------------|-------------------------|-------------------------------------
pipelines.ShellStep        | AWS::CodeBuild::Project | pipelines.CodePipeline
pipelines.CodeBuildStep    | AWS::CodeBuild::Project | pipelines.CodePipeline
codebuild.PipelineProject  | AWS::CodeBuild::Project | codepipeline.Pipeline
codebuild.Project          | AWS::CodeBuild::Project | codepipeline.Pipeline or standalone
In your case, the setup goes Pipeline > Stage > CodeBuildAction > PipelineProject.
// Add to the stage's actions
new codepipeline_actions.CodeBuildAction({
  actionName: 'CodeBuild',
  project: new codebuild.PipelineProject(this, 'PipelineProject', {
    buildSpec: codebuild.BuildSpec.fromObject({
      version: '0.2',
      phases: {
        build: { commands: ['echo "[project foo] $PROJECT_FOO"'] },
      },
    }),
    environmentVariables: {
      PROJECT_FOO: {
        value: 'Foo',
        type: codebuild.BuildEnvironmentVariableType.PLAINTEXT,
      },
    },
  }),
  input: sourceOutput, // the Artifact produced by the pipeline's source action
});
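For completeness, here is a sketch of wiring such an action into a stage; the stage name and the pipelineProject variable are assumptions for illustration:
// Assign the action and add it to a pipeline stage
const buildAction = new codepipeline_actions.CodeBuildAction({
  actionName: 'CodeBuild',
  project: pipelineProject, // the PipelineProject created above
  input: sourceOutput,
});
pipeline.addStage({
  stageName: 'RunCommands',
  actions: [buildAction],
});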
* The ShellScriptAction you mentioned is another one, now deprecated in v1 and removed from v2.
@aws-cdk/aws-codepipeline is for AWS CodePipeline. @aws-cdk/pipelines is for utilizing AWS CodePipeline to deploy CDK apps. Read more about the package and its justification here.
Regarding your question, you have some options here.
First of all, if you're looking for a simple CodeBuild action to run arbitrary commands, you can use CodeBuildAction.
There's also a separate action specifically for invoking a Lambda: LambdaInvokeAction.
Both are part of the @aws-cdk/aws-codepipeline-actions module.
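A minimal sketch of the Lambda option (the function ARN, logical IDs, and names below are placeholders, not from the question):
import * as codepipeline_actions from '@aws-cdk/aws-codepipeline-actions';
import * as lambda from '@aws-cdk/aws-lambda';

// Reference an existing function and invoke it as a pipeline action
const fn = lambda.Function.fromFunctionArn(
  this, 'PostDeployFn',
  'arn:aws:lambda:us-east-1:123456789012:function:post-deploy', // placeholder ARN
);
const invokeAction = new codepipeline_actions.LambdaInvokeAction({
  actionName: 'InvokePostDeploy',
  lambda: fn,
});
// pipeline.addStage({ stageName: 'PostDeploy', actions: [invokeAction] });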
Here's how I'm instantiating my stack:
new LambdaStack(new App(), 'LambdaStack', {
  env: { account: AWS_ACCOUNT_ID, region: 'us-east-1' },
  synthesizer: new DefaultStackSynthesizer({
    qualifier: 'lambda-stk',
  }),
  stackName: 'LambdaStack',
});
First I ensure that my ~/.aws/credentials file has the correct credentials. Then I bootstrap:
npx cdk bootstrap --qualifier lambda-stk --toolkit-stack-name LambdaStack aws://ACCOUNT_ID_HERE/us-east-1
Everything looks good in the console. Then, I deploy:
npx cdk deploy --require-approval never
Everything still looks good in the console -- the lambdas have been created as I expected, etc.
Then, I simply run the same deploy command again without changing anything and I get this error:
LambdaStack failed: Error: LambdaStack: SSM parameter /cdk-bootstrap/lambda-stk/version not found. Has the environment been bootstrapped? Please run 'cdk bootstrap' (see https://docs.aws.amazon.com/cdk/latest/guide/bootstrapping.html)
Upon further investigation, it seems that the bootstrap command properly creates the referenced SSM parameter but then the first deploy command deletes that parameter. Why would it do that and how can I fix this problem?
Fixed it by naming the bootstrap stack something different from LambdaStack. I was under the impression that the bootstrap command was spinning up the stack that the "main" stack would use, but it's actually a completely different stack: because --toolkit-stack-name and my app's stackName were both LambdaStack, the deploy replaced the bootstrap stack, along with its SSM parameter. So I changed the bootstrap command to:
npx cdk bootstrap --qualifier lambda-stk --toolkit-stack-name LambdaStackCDKToolkit aws://ACCOUNT_ID_HERE/us-east-1
And it worked.
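As a side note, the qualifier passed to cdk bootstrap has to match the one the synthesizer uses. Instead of passing it to DefaultStackSynthesizer in code, it can also be set in cdk.json (a sketch, assuming the same qualifier as above):
{
  "context": {
    "@aws-cdk/core:bootstrapQualifier": "lambda-stk"
  }
}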
When I run sls offline, I see a deprecation warning:
Serverless: Deprecation warning: Variables resolver reports following resolution errors:
- Variable syntax error at "functions.Test.environment.TEST_URL": Invalid variable type at index 20 in "${file(./env.yml):${'${self:provider.stage}.TEST_URL'}}"
From a next major this will be communicated with a thrown error.
Set "variablesResolutionMode: 20210326" in your service config, to adapt to new behavior now
The documentation is not clear about this.
env.yml
dev:
  TEST_URL: https://example.com/
serverless.yml
frameworkVersion: '2'
...
functions:
  Test:
    handler: handler.test
    environment:
      TEST_URL: ${file(./env.yml):${'${self:provider.stage}.TEST_URL'}} # <-------
It works correctly with frameworkVersion (>=1.1.0 <2.0.0).
What is the new approach to get data from another file?
This is the new approach to get data from another file:
environment:
  TEST_URL: ${file(./env.yml):${self:provider.stage}.TEST_URL}
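To confirm the variable resolves before deploying, you can print the fully resolved service configuration for a given stage (the stage name here is just an example):
sls print --stage dev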
The project I am currently working on creates a Lambda layer which contains a file called app.py; within this file is a function named lambda_handler, which is intended to be used as the Handler for whatever Lambda function includes the layer. The SAM template I use to do this looks as follows:
Resources:
  LamLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Join
        - ''
        - - 'LamLayer'
          - !Ref AWS::StackName
      ContentUri: ./lam_layer
      CompatibleRuntimes:
        - python3.8
    Metadata:
      BuildMethod: python3.8
  LamFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./lam_function
      Runtime: python3.8
      Handler: app.lambda_handler
      Layers:
        - !Ref LamLayer
      Timeout: 60
      AutoPublishAlias: live
Now although the Handler: app.lambda_handler is not present in the Lambda function's own code, it is present in the included layer.
After creating this setup I tested it by calling sam build; sam deploy, and it deployed and worked successfully. When I called the LamFunction it found the Handler and ran it.
The problem arises when I push my changes to the CodePipeline we have set up. The build and deploy succeed, but when I now call the LamFunction it throws the following error:
Unable to import module 'app': No module named 'app'
After debugging this for a while, I seem to have narrowed the problem down to the difference between how I build the project and how the pipeline builds it.
I called: sam build; sam deploy
Whereas the pipeline calls: sam build; sam package --s3-bucket codepipeline-eu-central-1-XXXXXXXXXX --output-template-file packaged-template.yml and then uses the standard pipeline deploy stage to deploy from the S3 bucket.
But although I think this difference is causing the problem, I am not sure what the underlying reason is or what I need to change to fix it.
---- EDIT ----
Here is the buildspec.yml in case it is the culprit:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
  build:
    commands:
      - sam build
      - sam package --s3-bucket codepipeline-eu-central-1-XXXXXXXXXX --output-template-file packaged-template.yml
artifacts:
  files:
    - packaged-template.yml
In the end I managed to trace the issue back to the CodeBuild image used in the pipeline. Due to an oversight during the creation of the pipeline, I had used a managed image running CodeBuild standard 1.0, which does not support building nested stacks/templates. Since the stack above was being built as a nested stack of a larger template, it was never built, which caused the error with the layer.
After changing to CodeBuild standard 3.0, the stack built and packaged as expected.
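For reference, the image is set on the pipeline's CodeBuild project. A minimal CloudFormation sketch of the relevant fragment (the logical ID is an assumption, and required properties such as ServiceRole, Source, and Artifacts are omitted):
BuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL
      # was aws/codebuild/standard:1.0, which cannot build nested templates
      Image: aws/codebuild/standard:3.0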