I am trying to deploy some Lambda functions through CodePipeline using AWS's Cloud Development Kit (CDK) in TypeScript. The issue is that for the Build stage of my pipeline, the docs only provide an example buildspec for Lambda functions written in TypeScript. This is probably a simple issue for someone more experienced with buildspecs, but I was wondering if someone could provide me with the equivalent buildspec for Python Lambdas.
I have pasted the code below that defines the pipeline I am trying to create. The cdkBuild project works fine, but I am having trouble coming up with the proper install, pre_build, and build commands for the lambdaBuild buildspec.
const cdkBuild = new codebuild.PipelineProject(this, 'CdkBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: 'npm install',
      },
      build: {
        commands: [
          'npm run build',
          'npm run cdk synth -- -o dist'
        ],
      },
    },
    artifacts: {
      'base-directory': 'dist',
      files: [
        'AdminStack.template.json',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
const lambdaBuild = new codebuild.PipelineProject(this, 'LambdaBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: [
          /*'python3 -m venv .venv',
          'source .venv/bin/activate',*/
          'pip install -r requirements.txt -t lambda'
        ],
      },
      build: {
        //commands: 'npm run build',
      },
    },
    artifacts: {
      'base-directory': 'lambda',
      files: [
        'admin/tutors/put.py',
        'requirements.txt',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
const sourceOutput = new codepipeline.Artifact();
const cdkBuildOutput = new codepipeline.Artifact('CdkBuildOutput');
const lambdaBuildOutput = new codepipeline.Artifact('LambdaBuildOutput');
const pipeline = new codepipeline.Pipeline(this, 'BackendPipeline', {
stages: [
{
stageName: 'Source',
actions: [
new codepipeline_actions.CodeCommitSourceAction({
actionName: 'CodeCommit_Source',
repository: code,
output: sourceOutput,
}),
],
},
{
stageName: 'Build',
actions: [
new codepipeline_actions.CodeBuildAction({
actionName: 'Lambda_Build',
project: lambdaBuild,
input: sourceOutput,
outputs: [lambdaBuildOutput],
}),
new codepipeline_actions.CodeBuildAction({
actionName: 'CDK_Build',
project: cdkBuild,
input: sourceOutput,
outputs: [cdkBuildOutput],
}),
],
},
{
stageName: 'Deploy',
actions: [
new codepipeline_actions.CloudFormationCreateUpdateStackAction({
actionName: 'AdminStack_CFN_Deploy',
templatePath: cdkBuildOutput.atPath('AdminStack.template.json'),
stackName: 'AdminStack',
adminPermissions: true,
parameterOverrides: {
...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
},
extraInputs: [lambdaBuildOutput],
}),
],
},
],
});
First of all, you do not need a virtual environment.
The artifacts should contain what would go into the .zip you would upload if you created the Lambda manually: the required libraries as well as your own code. Assuming all your Python Lambda code and the requirements.txt are under /lambda, the buildspec part should look like this:
const lambdaBuild = new codebuild.PipelineProject(this, 'LambdaBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'pip install -r lambda/requirements.txt -t lambda'
        ],
      },
    },
    artifacts: {
      'base-directory': 'lambda',
      files: [
        '**/*'
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
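For completeness, here is the same buildspec sketched as a plain object before it is passed to codebuild.BuildSpec.fromObject(), with an install phase pinning a Python runtime. The runtime-versions entry and the 3.7 version are assumptions for the STANDARD_2_0 image; adjust them to whatever your build image provides.

```typescript
// A sketch of the Python Lambda buildspec as a plain object. The
// 'runtime-versions' entry is an assumption: standard CodeBuild images
// select language runtimes this way; pick the version your image supports.
const pythonLambdaBuildSpec = {
  version: '0.2',
  phases: {
    install: {
      'runtime-versions': { python: 3.7 },
    },
    build: {
      // install dependencies next to the handler code so both end up in
      // the artifact zip, mirroring a manually-built Lambda package
      commands: ['pip install -r lambda/requirements.txt -t lambda'],
    },
  },
  artifacts: {
    'base-directory': 'lambda',
    files: ['**/*'],
  },
};
```

The 'base-directory' key means the zip root is the lambda folder itself, so the handler path in the function definition stays the same as in a hand-built package.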
Related
I'm building a CDK pipeline that will update another CDK template.
This CDK template is a static frontend React app.
The backend uses an AWS Lambda, API Gateway, and CloudFront distribution to host the site.
I want to put the APIs in the config.json file, as I normally would if I were building it manually one service at a time.
The problem seems to be in the CDK pipeline stack, which builds the static frontend stack.
When you initialize a new pipeline, it wants you to add shell steps first (npm i, cd into the correct folder, npm run build, etc.), which creates the distribution folder I need,
as well as turning the whole thing into a CloudFormation template.
Then you can drop that into the different stages you want, e.g., test and prod.
However, I won't receive CfnOutputs until the stages are built, and the CfnOutputs hold the API URLs and other info I need to put into the config.json file (which was already built first, with empty values).
There is even an envFromCfnOutputs param to add to the initial CodeBuild pipeline, but since the stages are initialized/created later, TypeScript yells at me for referencing them before they exist. I understand why that errors, but I can't figure out a clever way to fix this issue.
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as pipelines from "aws-cdk-lib/pipelines";
import * as codecommit from "aws-cdk-lib/aws-codecommit";
import { Stages } from "./stages";
import { Stack, Stage } from "aws-cdk-lib";
interface PipelineStackProps extends cdk.StackProps {
env: {
account: string;
region: string;
stage: string;
};
}
export class PipelineStack extends cdk.Stack {
constructor(scope: Construct, id: string, props: PipelineStackProps) {
super(scope, id, props);
/************ Grab Repo ************/
const source = codecommit.Repository.fromRepositoryName(
this,
"PreCallbackSMSSolution",
"PreCallbackSMSSolution"
);
/************ Define Pipeline & Build ShellStep (for Frontend) ************/
const Pipeline = new pipelines.CodePipeline(this, "Pipeline", {
pipelineName: `CodePipeline`,
selfMutation: true,
crossAccountKeys: true,
synthCodeBuildDefaults: {
rolePolicy: [
// #desc Policy to allow CodeBuild to use CodeArtifact
// #external https://docs.aws.amazon.com/codeartifact/latest/ug/using-npm-packages-in-codebuild.html
new cdk.aws_iam.PolicyStatement({
actions: [
"codeartifact:GetAuthorizationToken",
"codeartifact:GetRepositoryEndpoint",
"codeartifact:ReadFromRepository",
],
resources: ["*"],
}),
new cdk.aws_iam.PolicyStatement({
actions: ["sts:GetServiceBearerToken"],
resources: ["*"],
conditions: {
StringEquals: {
"sts:AWSServiceName": "codeartifact.amazonaws.com",
},
},
}),
],
},
synth: new pipelines.ShellStep("Synth", {
input: pipelines.CodePipelineSource.codeCommit(source, "master"),
installCommands: [
"cd $CODEBUILD_SRC_DIR/deployment",
"npm install -g typescript",
"npm run co:login",
"npm i",
],
env: {
stage: props.env.stage,
},
envFromCfnOutputs: {
// TODO: cfn outputs need to go here!
// CcpUrlOutput: TestStage.CcpUrlOutput,
// loginUrlOutput: TestStage.LoginUrlOutput,
// regionOutput: TestStage.RegionOutput,
// apiOutput: TestStage.ApiOutput
},
commands: [
"cd $CODEBUILD_SRC_DIR/frontend",
"pwd",
"apt-get install jq -y",
"chmod +x ./generate-config.sh",
"npm i",
"npm run build-prod",
"pwd",
"cat ./src/config-prod.json",
"cd ../deployment",
"npx cdk synth",
],
primaryOutputDirectory: "$CODEBUILD_SRC_DIR/deployment/cdk.out", // $CODEBUILD_SRC_DIR = starts root path
}),
});
/************ Initialize Test Stack & Add Stage************/
const TestStage = new Stages(this, "TestStage", {
env: { account: "***********", region: "us-east-1", stage: "test" },
}); // Aspen Sandbox
Pipeline.addStage(TestStage);
/************ Initialize Prod Stack & Add Stage ************/
const ProdStage = new Stages(this, "ProdStage", {
env: { account: "***********", region: "us-east-1", stage: "prod" },
}); // Aspen Sandbox
Pipeline.addStage(ProdStage);
/************ Build Pipeline ************/
Pipeline.buildPipeline();
/************ Manual Approve Stage ************/
const ApproveStage = Pipeline.pipeline.addStage({
stageName: "PromoteToProd",
placement: {
justAfter: Pipeline.pipeline.stage("TestStage"),
},
});
ApproveStage.addAction(
new cdk.aws_codepipeline_actions.ManualApprovalAction({
actionName: "Approve",
additionalInformation: "Approve this deployment for production.",
})
);
}
/****/
}
I'm trying to upgrade my serverless app from 1.51.0 to 2.7.0.
While deploying the app, I'm getting the error below:
[08:39:38] 'dev:sls-deploy' errored after 9.85 s
[08:39:38] TypeError: Cannot read property 'options' of undefined
at module.exports (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/utils/telemetry/generatePayload.js:236:66)
at PluginManager.run (/home/jenkins/workspace/TMC-Broker/DEV/node_modules/serverless/lib/classes/PluginManager.js:685:9)
08:39:38.428520 durable_task_monitor.go:63: exit status 1
At first I thought it might be due to the plugins, so I updated them, but I'm still not able to resolve it.
Here is my serverless.yml:
service: my-service
plugins:
  - serverless-webpack
  - serverless-step-functions
  - serverless-es-logs
  - serverless-domain-manager
  - serverless-plugin-ifelse
  - serverless-prune-plugin
  - serverless-offline
provider:
  name: aws
  runtime: nodejs12.x
  timeout: 30
  stage: dev
  region: us-west-2
  lambdaHashingVersion: 20201221
  endpointType: PRIVATE
  role: lambdaExecutionRole
  apiGateway:
    resourcePolicy:
      - Effect: Allow
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
      - Effect: Deny
        Principal: "*"
        Action: execute-api:Invoke
        Resource: "*"
        Condition:
          StringNotEquals:
            aws:SourceVpce:
              - "vpce-************"
  environment:
    APP_SERVICE: ${self:service}
    APP_ENV: ${self:custom.stage}
    APP_REGION: ${self:custom.region}
    BUCKET_NAME: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
    LOG_REQUEST_ID: "x-request-id"
custom:
  prune:
    automatic: true
    includeLayers: true
    number: 5
  serverlessIfElse:
    - If: '"${self:custom.stage}" == "uat"'
      Exclude:
        - functions.abc-handler
    - If: '"${self:custom.stage}" == "prod"'
      Exclude:
        - functions.abc-handler
  region: ${self:provider.region}
  stage: ${opt:stage, self:provider.stage}
  prefix: ${self:service}-${self:custom.stage}
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
functions:
  ms4-handler:
    handler: src/apifunctions/my.handler
    events:
      - http:
          path: /hello
          method: ANY
      - http:
          path: /hello/{proxy+}
          method: ANY
resources:
  Resources:
    onboardingBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:service}-${self:custom.stage}-#{aws:accountId}-onboard-s3
        LifecycleConfiguration:
          Rules:
            - Id: expirationRule
              Status: "Enabled"
              ExpirationInDays: 10
Jenkins Deployment Steps:
#!groovy
import groovy.json.JsonSlurperClassic
pipeline {
agent {label 'ecs-tf12'}
stage('Serverless Deploy') {
agent {
docker {
label "ecs"
image "node:10.15"
args "-u 0:0"
}
}
steps {
script {
sh 'node --version'
sh 'npm --version'
sh 'npm config set registry http://registry.npmjs.org'
sh 'npm install -g serverless@2.7.0'
sh 'npm list -g serverless'
sh 'npm install -g typescript@3.9.10'
sh 'npm install'
sh "npx gulp install-terraform-linux"
sh 'cp -v serverless-private.yml serverless.yml'
sh "sls create_domain --stage ${params.env}"
sh "npx gulp ${params.env}:sls-deploy"
sh 'cp -v serverless-public.yml serverless.yml'
sh "sls create_domain --stage ${params.env}"
sh "npx gulp ${params.env}:sls-deploy"
}
}
}
}
My Package.json
{
"name": "my-app",
"version": "1.0.0",
"description": "Serverless Service",
"scripts": {
"build": "tslint --project tsconfig.json **/*.ts && serverless package",
"deploy": "tslint --project tsconfig.json **/*.ts && serverless deploy",
"offline": "tslint --project tsconfig.json **/*.ts && serverless offline"
},
"dependencies": {
"ajv": "^6.10.2",
"axios": "^0.27.2",
"body-parser": "^1.19.0",
"express": "^4.17.1",
"https-proxy-agent": "^4.0.0",
"joi": "^17.4.0",
"json-stream-stringify": "^2.0.4",
"launchdarkly-node-server-sdk": "^6.4.3",
"lodash": "^4.17.21",
"serverless-domain-manager": "^5.1.1",
"serverless-http": "^2.7.0",
"serverless-step-functions": "^2.23.0",
"source-map-support": "^0.5.16",
"uuid": "^3.3.3",
"xml-js": "^1.6.11"
},
"devDependencies": {
"@hewmen/serverless-plugin-typescript": "^1.1.17",
"@types/aws-lambda": "8.10.39",
"@types/body-parser": "^1.17.1",
"@types/express": "^4.17.2",
"@types/lodash": "^4.14.149",
"@types/node": "^13.1.6",
"@types/uuid": "^3.4.6",
"aws-sdk": "^2.1204.0",
"execa": "^4.0.0",
"gulp": "^4.0.2",
"serverless": "^2.7.0",
"serverless-es-logs": "^3.4.2",
"serverless-offline": "^8.0.0",
"serverless-plugin-ifelse": "^1.0.7",
"serverless-plugin-typescript": "^1.2.0",
"serverless-prune-plugin": "^2.0.1",
"serverless-webpack": "^5.5.0",
"ts-loader": "^6.2.1",
"tslint": "^5.20.1",
"tslint-config-prettier": "^1.18.0",
"typescript": "^3.9.10",
"typescript-tslint-plugin": "^0.5.5",
"webpack": "^4.41.5",
"webpack-node-externals": "^1.7.2"
},
"author": "The serverless webpack authors (https://github.com/elastic-coders/serverless-webpack)",
"license": "MIT"
}
I'm not able to figure out the reason for this or a solution.
Any ideas?
I found a similar question, but it doesn't resolve my issue since I'm already using 2.7.0.
I am trying to use AWS ECR for my serverless application, but I am failing to do so.
My main problem is the 50 MB upload limit Lambda has. Below is the config in my serverless project (I am not sure it is correct, since there is not much documentation about it online).
I am using the aws-nodejs-typescript template.
addFriend is the function I am trying to build with Docker.
This is my Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14 as builder
WORKDIR /usr/app
COPY package.json handler.ts ./
RUN npm install
RUN npm run build
FROM public.ecr.aws/lambda/nodejs:14
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["handler.main"]
And my serverless.ts:
const serverlessConfiguration: AWS = {
...
custom: {
esbuild: {
bundle: true,
minify: false,
sourcemap: true,
exclude: ['aws-sdk'],
target: 'node14',
define: { 'require.resolve': undefined },
platform: 'node',
},
...
},
plugins: ['serverless-esbuild'],
provider: {
name: 'aws',
runtime: 'nodejs14.x',
profile: <PROFILE>,
region: 'us-east-1',
stage: 'dev',
apiGateway: {
minimumCompressionSize: 1024,
shouldStartNameWithService: true,
},
iamRoleStatements: [
{
Effect: 'Allow',
Action: ['s3:*', 'sns:*'],
Resource: '*',
},
],
ecr: {
images: {
addfriendfunction: {
path: './src/functions/addFriend',
},
},
},
lambdaHashingVersion: '20201221',
},
functions: {
...
addPushToken,
addFriend: {
image: {
name: 'addfriendfunction',
},
events: [
{
http: {
method: 'get',
path: 'api/v1/add-friend',
},
},
],
},
The error in the console is:
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type
string. Received undefined
I am stuck on this issue and unable to continue working. Is there any solution for this with the Serverless Framework?
Follow this guide for Node.js.
Can you try copying the compiled JS code, for example something like .esbuild/.build/src/functions/addFriend/handler.js, instead of this step:
COPY package.json handler.ts ./
My goal is to create a GCP CloudBuild Trigger using Pulumi. I'm using the Typescript client.
When creating a Google-managed secret (as opposed to customer-managed) I don't use KMS.
What would I put into the required (!) variable build.secrets[0].kmsKeyName? This is trivial when using KMS, but I found no "default" or "global" KMS name that would work when running the trigger with a Google-managed secret. I can create the trigger with a "fake" KMS name, but it doesn't run, complaining with:
Failed to trigger build: generic::invalid_argument: invalid build: invalid secrets: kmsKeyName "?WHAT TO PUT HERE?" is not a valid KMS key resource.
Thank you in advance for any suggestions.
import * as gcp from "@pulumi/gcp";
const ghToken = new gcp.secretmanager.Secret("gh-token", {
secretId: "gh-token",
replication: {
automatic: true,
},
})
const ghTokenSecretVersion = new gcp.secretmanager.SecretVersion("secret-version", {
secret: ghToken.id,
secretData: "the-secret-token",
});
const cloudBuild = new gcp.cloudbuild.Trigger("trigger-name", {
github: {
owner: "the-org",
name: "repo-name",
push: {
branch: "^main$"
}
},
build: {
substitutions: {
"_SERVICE_NAME": "service-name",
"_DEPLOY_REGION": "deploy-region",
"_GCR_HOSTNAME": "gcr.io",
},
steps: [
{
id: "Build",
name: "gcr.io/cloud-builders/docker",
entrypoint: "bash",
args: [
"-c",
`docker build --no-cache
-t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--build-arg GH_TOKEN=$$GH_TOKEN
.
-f Dockerfile
`,
],
secretEnvs: ["GH_TOKEN"],
},
],
tags: ["my-tag"],
secrets: [
{
kmsKeyName: "?WHAT TO PUT HERE?",
secretEnv: {
"GH_TOKEN": ghTokenSecretVersion.secretData
}
}
]
},
})
I don't think you can use a Secret Manager secret with Cloud Build through Pulumi. I solved it by creating a KMS key and encrypting my data using gcp.kms.SecretCiphertext. Here's what it looks like:
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
export const keyRing = new gcp.kms.KeyRing("keyring", {
location: "global",
}, {protect: true});
export const secretsEncryptionKey = new gcp.kms.CryptoKey("secrets-key", {
keyRing: keyRing.id,
rotationPeriod: "100000s",
}, { protect: true });
const config = new pulumi.Config();
export const githubTokenCiphertext = new gcp.kms.SecretCiphertext("github-token", {
cryptoKey: secretsEncryptionKey.id,
plaintext: config.requireSecret("github-token"),
});
const cloudBuild = new gcp.cloudbuild.Trigger("trigger-name", {
github: {...},
build: {
...,
secrets: [
{
kmsKeyName: githubTokenCiphertext.cryptoKey,
secretEnv: {
"GH_TOKEN": githubTokenCiphertext.ciphertext,
}
}
]
},
})
I'm trying to use secondary artifacts to separate the web page files from the CDK-generated stack files, but the BuildAction in the pipeline is not detecting the secondary artifacts that separate the two.
I've tried following the recommendations in the AWS docs on buildspec.yml, as well as multiple sources and multiple outputs, but can't get it to work.
Here's my CDK code for the build action:
const buildStage = pipeline.addStage({ stageName: 'Build'});
const buildOutputWeb = new Artifact("webapp")
const buildOutputTemplates = new Artifact("template")
const project = new PipelineProject(this, 'Wavelength_build', {
environment: {
buildImage: LinuxBuildImage.STANDARD_3_0
},
projectName: 'WebBuild'
});
buildStage.addAction(new CodeBuildAction({
actionName: 'Build',
project,
input: sourceOutput,
outputs: [buildOutputWeb, buildOutputTemplates]
}));
Here's the section relating to the BuildAction in the generated stack file:
{
"Actions": [
{
"ActionTypeId": {
"Category": "Build",
"Owner": "AWS",
"Provider": "CodeBuild",
"Version": "1"
},
"Configuration": {
"ProjectName": {
"Ref": "Wavelengthbuild7D63C781"
}
},
"InputArtifacts": [
{
"Name": "SourceOutput"
}
],
"Name": "Build",
"OutputArtifacts": [
{
"Name": "webapp"
},
{
"Name": "template"
}
],
"RoleArn": {
"Fn::GetAtt": [
"WavelengthPipelineBuildCodePipelineActionRoleC08CF8E2",
"Arn"
]
},
"RunOrder": 1
}
],
"Name": "Build"
},
And here is my buildspec.yml
version: 0.2
env:
  variables:
    S3_BUCKET: "wavelenght-web.ronin-ddd-dev-web.net"
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Installing source NPM dependencies...
      - npm install -g @angular/cli
      - npm install typescript -g
      - npm install -D lerna
  build:
    commands:
      - echo Build started on `date`
      - npm run release
      - cd $CODEBUILD_SRC_DIR
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - '**/*'
  secondary-artifacts:
    artifact1:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - 'packages/website/dist/**/*'
      name: webapp
      discard-paths: yes
    artifact2:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - '*/WavelengthAppStack.template.json'
      name: template
      discard-paths: yes
I figured out the problem.
It turns out that the name attribute in the secondary artifacts doesn't change the identifier.
My buildspec.yml artifacts section now looks like this:
artifacts:
  secondary-artifacts:
    webapp:
      base-directory: packages/website/dist
      files:
        - '**/*'
      name: webapp
    template:
      base-directory: packages/infrastructure/cdk.out
      files:
        - 'WavelengthAppStack.template.json'
      name: template
Notice that now, instead of artifact1: followed by all the data for that artifact, it is webapp: followed by the data.
Regarding the webapp and template secondary artifacts (from the docs):
"Each artifact identifier in this block must match an artifact defined in the secondaryArtifacts attribute of your project."
In what you've posted in the question, I don't see any evidence of the secondary outputs being defined in your build project, which probably explains why you get errors about "no definition".
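To make the contract concrete, the matching rule both answers rely on can be sketched as a small check (findUnmatchedArtifacts is a hypothetical helper for illustration, not part of the CDK or CodeBuild API):

```typescript
// Hypothetical helper: checks that every output Artifact name declared on
// the CDK side has a matching identifier key under secondary-artifacts in
// the buildspec, which is the contract that made the build above work.
function findUnmatchedArtifacts(
  cdkOutputNames: string[],
  buildspecSecondaryKeys: string[],
): string[] {
  return cdkOutputNames.filter((name) => !buildspecSecondaryKeys.includes(name));
}

// Mirrors the working setup: new Artifact("webapp") / new Artifact("template")
// in the CDK code, and webapp:/template: keys in the buildspec.
const unmatched = findUnmatchedArtifacts(
  ['webapp', 'template'],
  ['webapp', 'template'],
);
// unmatched is empty, so the BuildAction can wire up both outputs

// With the original buildspec keys, both outputs go unmatched:
const broken = findUnmatchedArtifacts(
  ['webapp', 'template'],
  ['artifact1', 'artifact2'],
);
// broken contains both names, which surfaced as the "no definition" errors
```

The name: attribute inside each secondary artifact does not participate in this matching; only the identifier key does.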