AWS CodePipeline BuildAction not detecting buildspec.yml secondary artifacts - amazon-web-services

I'm trying to use secondary artifacts to separate the web page files from the CDK-generated stack files, but the CodeBuildAction in the pipeline is not detecting the secondary artifacts. I've tried following the recommendations in the AWS docs on buildspec.yml, as well as multiple sources on multiple outputs, but I can't get it to work.
Here's my CDK code for the build action:
const buildStage = pipeline.addStage({ stageName: 'Build' });
const buildOutputWeb = new Artifact('webapp');
const buildOutputTemplates = new Artifact('template');
const project = new PipelineProject(this, 'Wavelength_build', {
  environment: {
    buildImage: LinuxBuildImage.STANDARD_3_0
  },
  projectName: 'WebBuild'
});
buildStage.addAction(new CodeBuildAction({
  actionName: 'Build',
  project,
  input: sourceOutput,
  outputs: [buildOutputWeb, buildOutputTemplates]
}));
Here's the section relating to the build action from the generated stack file:
{
  "Actions": [
    {
      "ActionTypeId": {
        "Category": "Build",
        "Owner": "AWS",
        "Provider": "CodeBuild",
        "Version": "1"
      },
      "Configuration": {
        "ProjectName": {
          "Ref": "Wavelengthbuild7D63C781"
        }
      },
      "InputArtifacts": [
        {
          "Name": "SourceOutput"
        }
      ],
      "Name": "Build",
      "OutputArtifacts": [
        {
          "Name": "webapp"
        },
        {
          "Name": "template"
        }
      ],
      "RoleArn": {
        "Fn::GetAtt": [
          "WavelengthPipelineBuildCodePipelineActionRoleC08CF8E2",
          "Arn"
        ]
      },
      "RunOrder": 1
    }
  ],
  "Name": "Build"
},
And here is my buildspec.yml
version: 0.2
env:
  variables:
    S3_BUCKET: "wavelenght-web.ronin-ddd-dev-web.net"
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install -g @angular/cli
      - npm install typescript -g
      - npm install -D lerna
  build:
    commands:
      - echo Build started on `date`
      - npm run release
      - cd $CODEBUILD_SRC_DIR
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
  secondary-artifacts:
    artifact1:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - 'packages/website/dist/**/*'
      name: webapp
      discard-paths: yes
    artifact2:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - '*/WavelengthAppStack.template.json'
      name: template
      discard-paths: yes

I figured out the problem. It turns out that the name attribute in the secondary artifacts doesn't change the identifier. My buildspec.yml artifacts section now looks like this:
artifacts:
  secondary-artifacts:
    webapp:
      base-directory: packages/website/dist
      files:
        - '**/*'
      name: webapp
    template:
      base-directory: packages/infrastructure/cdk.out
      files:
        - 'WavelengthAppStack.template.json'
      name: template
Notice that instead of artifact1: followed by all the data for that artifact, it is now webapp: followed by the data.

webapp and template secondary artifacts (from the docs):
Each artifact identifier in this block must match an artifact defined in the secondaryArtifacts attribute of your project.
In what you've posted in the question I don't see any evidence of the secondary outputs being defined in your build project, which probably explains why you get errors about "no definition".
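To make the matching rule concrete: CodeBuild pairs each declared output artifact with a buildspec entry by the secondary-artifact identifier (the YAML key), not by its name field. A minimal sketch of that rule (an illustrative helper, not an AWS API):

```typescript
// Illustrative sketch of how the artifact names declared in the CDK
// CodeBuildAction must line up with the identifier keys under
// `secondary-artifacts` in buildspec.yml. Not an AWS API; just the rule.
function findUnmatchedArtifacts(
  pipelineOutputNames: string[],
  buildspecIdentifiers: string[],
): string[] {
  const ids = new Set(buildspecIdentifiers);
  return pipelineOutputNames.filter((name) => !ids.has(name));
}

// The original buildspec used `artifact1`/`artifact2` as identifiers, so
// neither pipeline output ('webapp', 'template') matched:
console.log(findUnmatchedArtifacts(['webapp', 'template'], ['artifact1', 'artifact2']));
// The fixed buildspec keys the blocks by `webapp:`/`template:`, so both match:
console.log(findUnmatchedArtifacts(['webapp', 'template'], ['webapp', 'template']));
```

This is why renaming the identifier keys themselves, rather than setting the name attribute, resolved the question above.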

Related

Serverless Error: TypeError: Cannot read property 'options' of undefined

I'm trying to upgrade my serverless app from 1.51.0 to 2.7.0. While deploying the app I'm getting the below error:
[08:39:38] 'dev:sls-deploy' errored after 9.85 s
[08:39:38] TypeError: Cannot read property 'options' of undefined
at module.exports (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/utils/telemetry/generatePayload.js:236:66)
at PluginManager.run (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/classes/PluginManager.js:685:9)
08:39:38.428520 durable_task_monitor.go:63: exit status 1
First I thought it might be due to plugins; I updated the plugins but still couldn't resolve it.
Here is my serverless.yml:
service: my-service
plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-es-logs
  - serverless-domain-manager
  - serverless-plugin-ifelse
  - serverless-prune-plugin
  - serverless-offline
provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  lambdaHashingVersion: 20201221
  endpointType: PRIVATE
  role: lambdaExecutionRole
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
      - Effect: Deny
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
        Condition:
          StringNotEquals:
            aws:SourceVpce:
              - "vpce-************"
  environment:
    APP_SERVICE: ${self:service}
    APP_ENV: ${self:custom.stage}
    APP_REGION: ${self:custom.region}
    BUCKET_NAME: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
    LOG_REQUEST_ID: "x-request-id"
custom:
  prune:
    automatic: true
    includeLayers: true
    number: 5
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "uat"'
      Exclude:
        - functions.abc-handler
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.abc-handler
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
functions:
  ms4-handler:
    handler: src/apifunctions/my.handler
    events:
      - http:
          path: /hello
          method: ANY
      - http:
          path: /hello/{proxy+}
          method: ANY
resources:
  Resources:
    onboardingBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
        LifecycleConfiguration:
          Rules:
            - Id: expirationRule
              Status: "Enabled"
              ExpirationInDays: 10
Jenkins Deployment Steps:
#!groovy
import groovy.json.JsonSlurperClassic
pipeline {
    agent { label 'ecs-tf12' }
    stages {
        stage('Serverless Deploy') {
            agent {
                docker {
                    label "ecs"
                    image "node:10.15"
                    args "-u 0:0"
                }
            }
            steps {
                script {
                    sh 'node --version'
                    sh 'npm --version'
                    sh 'npm config set registry http://registry.npmjs.org'
                    sh 'npm install -g serverless@2.7.0'
                    sh 'npm list -g serverless'
                    sh 'npm install -g typescript@3.9.10'
                    sh 'npm install'
                    sh "npx gulp install-terraform-linux"
                    sh 'cp -v serverless-private.yml serverless.yml'
                    sh "sls create_domain --stage ${params.env}"
                    sh "npx gulp ${params.env}:sls-deploy"
                    sh 'cp -v serverless-public.yml serverless.yml'
                    sh "sls create_domain --stage ${params.env}"
                    sh "npx gulp ${params.env}:sls-deploy"
                }
            }
        }
    }
}
My package.json:
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "Serverless Service",
  "scripts": {
    "build": "tslint --project tsconfig.json **/*.ts && serverless package",
    "deploy": "tslint --project tsconfig.json **/*.ts && serverless deploy",
    "offline": "tslint --project tsconfig.json **/*.ts && serverless offline"
  },
  "dependencies": {
    "ajv": "^6.10.2",
    "axios": "^0.27.2",
    "body-parser": "^1.19.0",
    "express": "^4.17.1",
    "https-proxy-agent": "^4.0.0",
    "joi": "^17.4.0",
    "json-stream-stringify": "^2.0.4",
    "launchdarkly-node-server-sdk": "^6.4.3",
    "lodash": "^4.17.21",
    "serverless-domain-manager": "^5.1.1",
    "serverless-http": "^2.7.0",
    "serverless-step-functions": "^2.23.0",
    "source-map-support": "^0.5.16",
    "uuid": "^3.3.3",
    "xml-js": "^1.6.11"
  },
  "devDependencies": {
    "@hewmen/serverless-plugin-typescript": "^1.1.17",
    "@types/aws-lambda": "8.10.39",
    "@types/body-parser": "^1.17.1",
    "@types/express": "^4.17.2",
    "@types/lodash": "^4.14.149",
    "@types/node": "^13.1.6",
    "@types/uuid": "^3.4.6",
    "aws-sdk": "^2.1204.0",
    "execa": "^4.0.0",
    "gulp": "^4.0.2",
    "serverless": "^2.7.0",
    "serverless-es-logs": "^3.4.2",
    "serverless-offline": "^8.0.0",
    "serverless-plugin-ifelse": "^1.0.7",
    "serverless-plugin-typescript": "^1.2.0",
    "serverless-prune-plugin": "^2.0.1",
    "serverless-webpack": "^5.5.0",
    "ts-loader": "^6.2.1",
    "tslint": "^5.20.1",
    "tslint-config-prettier": "^1.18.0",
    "typescript": "^3.9.10",
    "typescript-tslint-plugin": "^0.5.5",
    "webpack": "^4.41.5",
    "webpack-node-externals": "^1.7.2"
  },
  "author": "The serverless webpack authors (https://github.com/elastic-coders/serverless-webpack)",
  "license": "MIT"
}
I'm not able to figure out the reason or a solution for this. Any ideas?
I found a similar question, but it doesn't resolve my issue since I'm using 2.7.0.

Serverless with aws container images

I am trying to use AWS ECR for my serverless application but I am failing to do so. My main problem is the 50 MB upload limit Lambda has, and this is the config in my serverless file (I am not sure it is correct, since there is not a lot of documentation about it online).
I am using the aws-nodejs-typescript template. addFriend is the function I am trying to build with Docker.
This is my Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14 as builder
WORKDIR /usr/app
COPY package.json handler.ts ./
RUN npm install
RUN npm run build
FROM public.ecr.aws/lambda/nodejs:14
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["handler.main"]
and my serverless.ts
const serverlessConfiguration: AWS = {
  ...
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node14',
      define: { 'require.resolve': undefined },
      platform: 'node',
    },
    ...
  },
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    profile: <PROFILE>,
    region: 'us-east-1',
    stage: 'dev',
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    iamRoleStatements: [
      {
        Effect: 'Allow',
        Action: ['s3:*', 'sns:*'],
        Resource: '*',
      },
    ],
    ecr: {
      images: {
        addfriendfunction: {
          path: './src/functions/addFriend',
        },
      },
    },
    lambdaHashingVersion: '20201221',
  },
  functions: {
    ...
    addPushToken,
    addFriend: {
      image: {
        name: 'addfriendfunction',
      },
      events: [
        {
          http: {
            method: 'get',
            path: 'api/v1/add-friend',
          },
        },
      ],
    },
The error in the console is:
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type
string. Received undefined
I am stuck with this issue and unable to continue working. Is there any solution for this with the Serverless Framework?
Follow this guide for Node.js.
Can you try copying the built JS code instead? For example, something like .esbuild/.build/src/functions/addFriend/handler.js instead of this step:
COPY package.json handler.ts ./

Serverless command "invoke test" not found. Run "serverless help" for a list of all available commands

I have my serverless.yml like below
service: omy-api
provider:
  name: aws
  runtime: nodejs14.x
  region: ${opt:region, 'us-east-1'}
  stage: ${opt:stage, 'devint'}
  memorySize: 128
  timeout: 15
custom:
  jest:
    collectCoverage: true
functions:
  - ${file(config/Customer/delete.yml)}
plugins:
  - serverless-jest-plugin
In my package.json I have:
{
  "name": "omy-api",
  "version": "1.0.0",
  "scripts": {
    "deploy": "sls deploy",
    "test": "jest"
  },
  "jest": {
    "collectCoverage": true,
    "coverageReporters": ["text-summary"],
    "coverageThreshold": {
      "global": {
        "branches": 90,
        "functions": 90,
        "lines": 90,
        "statements": -10
      }
    }
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "serverless": "^1.83.3",
    "serverless-jest-plugin": "^0.3.0"
  },
  "dependencies": {
    "jest": "^27.0.6"
  }
}
I am following the https://github.com/nordcloud/serverless-jest-plugin documentation.
When I run sls invoke test, I get the error:
Serverless Error ----------------------------------------
Serverless command "invoke test" not found. Run "serverless help" for a list of all available commands.
I tried sls invoke test --help, but I get the same error.
If you want to run it by entering a command like the one below, you must first install Serverless and the Serverless plugins you use globally, for example:
npm install -g serverless
npm install -g serverless-jest-plugin
and run
sls invoke test

Deploying Python Lambda Functions in AWS Codepipeline using AWS CDK

So I am trying to deploy some Lambda functions through CodePipeline using Amazon's new Cloud Development Kit in TypeScript. The issue is that for the Build stage of my pipeline, the docs only provide an example for building Lambda functions written in TypeScript. I know this is probably a simple issue for someone more experienced with build specs, but I was wondering if someone could provide me with the equivalent buildspec for Python Lambdas.
I have pasted the code below that defines the pipeline I am trying to create. The cdkBuild works fine, but I am having trouble coming up with the proper commands for install, pre_build, and build in the buildspec for lambdaBuild.
const cdkBuild = new codebuild.PipelineProject(this, 'CdkBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: 'npm install',
      },
      build: {
        commands: [
          'npm run build',
          'npm run cdk synth -- -o dist'
        ],
      },
    },
    artifacts: {
      'base-directory': 'dist',
      files: [
        'AdminStack.template.json',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
const lambdaBuild = new codebuild.PipelineProject(this, 'LambdaBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: [
          /*'python3 -m venv .venv',
          'source .venv/bin/activate',*/
          'pip install -r requirements.txt -t lambda'
        ],
      },
      build: {
        //commands: 'npm run build',
      },
    },
    artifacts: {
      'base-directory': 'lambda',
      files: [
        'admin/tutors/put.py',
        'requirements.txt',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
const sourceOutput = new codepipeline.Artifact();
const cdkBuildOutput = new codepipeline.Artifact('CdkBuildOutput');
const lambdaBuildOutput = new codepipeline.Artifact('LambdaBuildOutput');
const pipeline = new codepipeline.Pipeline(this, 'BackendPipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [
        new codepipeline_actions.CodeCommitSourceAction({
          actionName: 'CodeCommit_Source',
          repository: code,
          output: sourceOutput,
        }),
      ],
    },
    {
      stageName: 'Build',
      actions: [
        new codepipeline_actions.CodeBuildAction({
          actionName: 'Lambda_Build',
          project: lambdaBuild,
          input: sourceOutput,
          outputs: [lambdaBuildOutput],
        }),
        new codepipeline_actions.CodeBuildAction({
          actionName: 'CDK_Build',
          project: cdkBuild,
          input: sourceOutput,
          outputs: [cdkBuildOutput],
        }),
      ],
    },
    {
      stageName: 'Deploy',
      actions: [
        new codepipeline_actions.CloudFormationCreateUpdateStackAction({
          actionName: 'AdminStack_CFN_Deploy',
          templatePath: cdkBuildOutput.atPath('AdminStack.template.json'),
          stackName: 'AdminStack',
          adminPermissions: true,
          parameterOverrides: {
            ...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
          },
          extraInputs: [lambdaBuildOutput],
        }),
      ],
    },
  ],
});
First of all, you do not need to use a virtual environment.
The artifacts should be what would be in the .zip you upload if you created the Lambda manually: the required libraries as well as your own code. Assuming all your Python Lambda code and the requirements.txt are under lambda/, the buildspec part should look like this:
codebuild.BuildSpec.fromObject({
  version: '0.2',
  phases: {
    build: {
      commands: [
        'pip install -r lambda/requirements.txt -t lambda'
      ],
    },
  },
  artifacts: {
    'base-directory': 'lambda',
    files: [
      '**/*'
    ],
  },
}),
environment: {
  buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
},
});
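For reference, here is the same buildspec as the plain object handed to BuildSpec.fromObject, with the lambda/ directory layout assumed from the question; the point to notice is that each phase body is an object whose commands key holds an array of shell commands:

```typescript
// Sketch of the buildspec object for the Python Lambda build (the `lambda/`
// layout is assumed from the question). Each phase needs a `commands` array;
// `pip install -t lambda` drops the dependencies next to the handler code so
// they end up inside the artifact zip.
const lambdaBuildSpec = {
  version: '0.2',
  phases: {
    build: {
      commands: ['pip install -r lambda/requirements.txt -t lambda'],
    },
  },
  artifacts: {
    'base-directory': 'lambda',
    files: ['**/*'],
  },
};

console.log(JSON.stringify(lambdaBuildSpec, null, 2));
```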

YAML_FILE_ERROR Message: Wrong number of container tags, expected 1

I am new to AWS CodePipeline and I am getting this error on AWS CodeBuild:
"YAML_FILE_ERROR Message: Wrong number of container tags, expected 1"
I have set up AWS CodePipeline with CodeBuild and CloudFormation for an ASP.NET Core 2.1 project. Here is my buildspec.yml:
{
  "name": "Utility",
  "source": {
    "type": "S3",
    "location": "<location>/windows-dotnetcore.zip"
  },
  "artifacts": {
    "type": "S3",
    "location": "<location>",
    "packaging": "ZIP",
    "name": "Utility.zip"
  },
  "environment": {
    "type": "LINUX_CONTAINER",
    "image": "aws/codebuild/dot-net:core-2.1",
    "computeType": "BUILD_GENERAL1_SMALL"
  },
  "serviceRole": "<value>",
  "encryptionKey": "<value>"
}
This happened for me when I omitted the first 'version' line from the yml:
version: 0.2
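A small local guard (an illustrative helper, not part of CodeBuild) can catch this class of mistake before a commit: parse buildspec.yml with any YAML parser and check that the required top-level version key is present.

```typescript
// Illustrative pre-commit check: CodeBuild rejects a buildspec without a
// top-level `version` key (or an entirely blank file) with YAML_FILE_ERROR.
// `parsed` would come from any YAML parser; here we only check the shape.
function buildspecHasVersion(parsed: unknown): boolean {
  if (parsed === null || typeof parsed !== 'object') return false;
  return (parsed as Record<string, unknown>).version !== undefined;
}

console.log(buildspecHasVersion({ version: 0.2, phases: {} })); // well-formed
console.log(buildspecHasVersion({ phases: {} }));               // missing version line
console.log(buildspecHasVersion(null));                         // blank file
```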
I received this error when I had a blank buildspec.yml checked in to CodeCommit. Once I updated it with something like this, I was good to go:
version: 0.2
phases:
  install:
    commands:
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install unit.js
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha HelloWorld.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - HelloWorld.js
Out of curiosity, I thought it might have been a formatting error, so I tried checking in some garbage text and received the following error instead:
Phase context status code: YAML_FILE_ERROR Message: stat