I am trying to use AWS ECR for my serverless application, but I am failing to do so. My main problem is the 50 MB (zipped) deployment package limit Lambda has, which is why I want to build this function as a container image. This is the config in my serverless.ts; I am not sure if it is correct, since there is not much documentation about it online. (I am using the aws-nodejs-typescript template.)
addFriend is the function that I am trying to build with Docker.
This is my Dockerfile:
# Build stage: install dependencies and compile the TypeScript handler
FROM public.ecr.aws/lambda/nodejs:14 as builder
WORKDIR /usr/app
COPY package.json handler.ts ./
RUN npm install
RUN npm run build

# Runtime stage: copy only the compiled output into the Lambda task root
FROM public.ecr.aws/lambda/nodejs:14
WORKDIR ${LAMBDA_TASK_ROOT}
COPY --from=builder /usr/app/dist/* ./
CMD ["handler.main"]
And my serverless.ts:
const serverlessConfiguration: AWS = {
  ...
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ['aws-sdk'],
      target: 'node14',
      define: { 'require.resolve': undefined },
      platform: 'node',
    },
    ...
  },
  plugins: ['serverless-esbuild'],
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x',
    profile: <PROFILE>,
    region: 'us-east-1',
    stage: 'dev',
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    iamRoleStatements: [
      {
        Effect: 'Allow',
        Action: ['s3:*', 'sns:*'],
        Resource: '*',
      },
    ],
    ecr: {
      images: {
        addfriendfunction: {
          path: './src/functions/addFriend',
        },
      },
    },
    lambdaHashingVersion: '20201221',
  },
  functions: {
    ...
    addPushToken,
    addFriend: {
      image: {
        name: 'addfriendfunction',
      },
      events: [
        {
          http: {
            method: 'get',
            path: 'api/v1/add-friend',
          },
        },
      ],
    },
  },
};
The error in the console is:
TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received undefined
I am stuck on this issue and unable to continue working. Is there any solution for this with the Serverless Framework?
Follow this guide for Node.js.
Can you try copying the compiled JS code instead? For example, something like .esbuild/.build/src/functions/addFriend/handler.js instead of this step:
COPY package.json handler.ts ./
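A minimal sketch of that idea, assuming serverless-esbuild has already written the bundled handler to .esbuild/.build/src/functions/addFriend/ (the exact output path depends on the plugin version and config, so verify it locally first):

# Single-stage image: reuse the esbuild bundle instead of compiling inside Docker
FROM public.ecr.aws/lambda/nodejs:14
# Assumed path: the handler already bundled by serverless-esbuild
COPY .esbuild/.build/src/functions/addFriend/handler.js ${LAMBDA_TASK_ROOT}/
CMD ["handler.main"]

Since esbuild bundles the dependencies into handler.js, no npm install should be needed inside the image.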
I'm building a CDK pipeline that will update another CDK template.
This CDK template is a static frontend React app.
The backend uses an AWS Lambda, API Gateway, and a CloudFront distribution to host the site.
I want to put the APIs in the config.json file, as I normally would if I were building it manually one service at a time.
The problem seems to be in the CDK pipeline stack, which builds the static frontend stack.
When you initialize a new pipeline, it wants you to add shell steps first (npm i, cd into the correct folder, npm run build, etc.), which creates the distribution folder I need,
as well as turning the whole thing into a CloudFormation template.
Then you can drop that into the different stages you want, e.g., test and prod.
However, I won't receive CfnOutputs until the stages are built, and the CfnOutputs hold the APIs and other info I need to put into the config.json file (which was already built first, with empty values).
There is even an envFromCfnOutputs param to add to the initial CodeBuild pipeline, but since the stages are initialized/created later, TypeScript yells at me for referencing them there. I understand why that errors, but I can't figure out a clever way to fix this issue. (One possible direction is sketched after the code below.)
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as pipelines from "aws-cdk-lib/pipelines";
import * as codecommit from "aws-cdk-lib/aws-codecommit";
import { Stages } from "./stages";
import { Stack, Stage } from "aws-cdk-lib";

interface PipelineStackProps extends cdk.StackProps {
  env: {
    account: string;
    region: string;
    stage: string;
  };
}

export class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: PipelineStackProps) {
    super(scope, id, props);

    /************ Grab Repo ************/
    const source = codecommit.Repository.fromRepositoryName(
      this,
      "PreCallbackSMSSolution",
      "PreCallbackSMSSolution"
    );

    /************ Define Pipeline & Build ShellStep (for Frontend) ************/
    const Pipeline = new pipelines.CodePipeline(this, "Pipeline", {
      pipelineName: `CodePipeline`,
      selfMutation: true,
      crossAccountKeys: true,
      synthCodeBuildDefaults: {
        rolePolicy: [
          // #desc Policy to allow CodeBuild to use CodeArtifact
          // #external https://docs.aws.amazon.com/codeartifact/latest/ug/using-npm-packages-in-codebuild.html
          new cdk.aws_iam.PolicyStatement({
            actions: [
              "codeartifact:GetAuthorizationToken",
              "codeartifact:GetRepositoryEndpoint",
              "codeartifact:ReadFromRepository",
            ],
            resources: ["*"],
          }),
          new cdk.aws_iam.PolicyStatement({
            actions: ["sts:GetServiceBearerToken"],
            resources: ["*"],
            conditions: {
              StringEquals: {
                "sts:AWSServiceName": "codeartifact.amazonaws.com",
              },
            },
          }),
        ],
      },
      synth: new pipelines.ShellStep("Synth", {
        input: pipelines.CodePipelineSource.codeCommit(source, "master"),
        installCommands: [
          "cd $CODEBUILD_SRC_DIR/deployment",
          "npm install -g typescript",
          "npm run co:login",
          "npm i",
        ],
        env: {
          stage: props.env.stage,
        },
        envFromCfnOutputs: {
          // TODO: cfn outputs need to go here!
          // CcpUrlOutput: TestStage.CcpUrlOutput,
          // loginUrlOutput: TestStage.LoginUrlOutput,
          // regionOutput: TestStage.RegionOutput,
          // apiOutput: TestStage.ApiOutput
        },
        commands: [
          "cd $CODEBUILD_SRC_DIR/frontend",
          "pwd",
          "apt-get install jq -y",
          "chmod +x ./generate-config.sh",
          "npm i",
          "npm run build-prod",
          "pwd",
          "cat ./src/config-prod.json",
          "cd ../deployment",
          "npx cdk synth",
        ],
        primaryOutputDirectory: "$CODEBUILD_SRC_DIR/deployment/cdk.out", // $CODEBUILD_SRC_DIR = starts root path
      }),
    });

    /************ Initialize Test Stack & Add Stage ************/
    const TestStage = new Stages(this, "TestStage", {
      env: { account: "***********", region: "us-east-1", stage: "test" },
    }); // Aspen Sandbox
    Pipeline.addStage(TestStage);

    /************ Initialize Prod Stack & Add Stage ************/
    const ProdStage = new Stages(this, "ProdStage", {
      env: { account: "***********", region: "us-east-1", stage: "prod" },
    }); // Aspen Sandbox
    Pipeline.addStage(ProdStage);

    /************ Build Pipeline ************/
    Pipeline.buildPipeline();

    /************ Manual Approve Stage ************/
    const ApproveStage = Pipeline.pipeline.addStage({
      stageName: "PromoteToProd",
      placement: {
        justAfter: Pipeline.pipeline.stage("TestStage"),
      },
    });
    ApproveStage.addAction(
      new cdk.aws_codepipeline_actions.ManualApprovalAction({
        actionName: "Approve",
        additionalInformation: "Approve this deployment for production.",
      })
    );
  }
  /****/
}
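Not a definitive fix, but one direction that may work with the CDK pipelines API: CfnOutputs can never feed the synth step (they don't exist until a stage has deployed), but envFromCfnOutputs is also accepted by a ShellStep added as a post step of a stage, and post steps run after that stage's stacks deploy. The stage could expose its outputs as properties, and a post step could then regenerate config.json with the real values and republish the frontend assets. A rough sketch, reusing the imports from the file above and assuming the Stages construct exposes its CfnOutputs as public properties (ccpUrlOutput and apiOutput here are hypothetical names):

const testStage = new Stages(this, "TestStage", {
  env: { account: "***********", region: "us-east-1", stage: "test" },
});
Pipeline.addStage(testStage, {
  post: [
    new pipelines.ShellStep("GenerateFrontendConfig", {
      // Works here because the stage has already deployed when this step runs
      envFromCfnOutputs: {
        CCP_URL: testStage.ccpUrlOutput, // hypothetical public CfnOutput properties
        API_URL: testStage.apiOutput,
      },
      commands: [
        "cd frontend",
        "./generate-config.sh", // writes the real values into config.json
        "npm i",
        "npm run build-prod",
        // republishing the rebuilt assets (e.g., an S3 sync) would go here
      ],
    }),
  ],
});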
I am deploying on Amazon using a CI/CD pipeline with GitHub Actions. It deploys to Amazon successfully, but when I run a GraphQL query on AppSync it shows that @aws-sdk/client-dynamodb was not found.
You can see the error in this image.
Successfully deployed on GitHub.
I added the DynamoDB package in package.json:
{
  "name": "Serverless-apix",
  "type": "module",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "start": "sls offline start --stage local",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.215.0",
    "ramda": "^0.28.0",
    "serverless": "^3.24.1",
    "serverless-appsync-plugin": "^1.14.0",
    "serverless-iam-roles-per-function": "^3.2.0"
  },
  "devDependencies": {}
}
Controller where I call DynamoDB:
import { PutItemCommand } from "@aws-sdk/client-dynamodb";
import { ddbClient } from "../../dynamodb/db.js";
Db file:
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const REGION = "us-east-1"; // e.g. "us-east-1"
// const client = new DynamoDBClient({
//   // region: "us-east-1",
//   // accessKeyId: "<ACCESS_KEY_ID>",        // credentials redacted
//   // secretAccessKey: "<SECRET_ACCESS_KEY>",
//   // endpoint: "http://localhost:8000"
//   region: 'localhost',
//   endpoint: 'http://localhost:8000'
// });
const ddbClient = new DynamoDBClient({ region: REGION });
export { ddbClient };
Serverless file:
service: image-base-serverless-api
provider:
  name: aws
  runtime: nodejs14.x
  stage: ${opt:stage, 'dev'}
  region: us-east-1
  environment:
    # DYNAMODB_TABLE_NAME: ${self:custom.usersTableName}
    STAGE: ${self:provider.stage}
    REGION: ${self:provider.region}
    APPSYNC_NAME: "${self:custom.defaultPrefix}-appsync"
    SERVICE_NAME: ${self:service}-${self:provider.stage}
    DYNAMODB: ${self:service}-${self:provider.stage}
    TABLE_NAME:
      Ref: usersTable
  iam:
    role:
      statements: # permissions for all of your functions can be set here
        - Effect: Allow
          Action: # Gives permission to DynamoDB tables in a specific region
            - dynamodb:*
            - lambda:*
            - s3:*
          Resource:
            - arn:aws:dynamodb:${self:provider.region}:*:*
            - arn:aws:lambda:${self:provider.region}:*:*
            - "Fn::GetAtt": [usersTable, Arn]
plugins: ${file(plugins/plugins-${self:provider.stage}.yml)}
package:
  exclude:
    - node_modules/**
    - venv/**
custom:
  usersTableName: users-table-${self:provider.stage}
  dynamodb:
    stages:
      - local
    start:
      port: 8000
      inMemory: false
      dbPath: "dynamodb_local_data"
      migrate: true
  appSync:
    name: image-base-serverless-backened-api-${self:provider.stage}
    schema: schema.graphql
    authenticationType: API_KEY
    serviceRole: "AppSyncServiceRole"
    mappingTemplates: ${file(appsync/mappingtemplate.yml)}
    dataSources: ${file(appsync/datasource.yml)}
  appsync-offline: # appsync-offline configuration
    port: 62222
    dynamodb:
      client:
        endpoint: "http://localhost:8000"
        region: localhost
  defaultPrefix: ${self:service}-${self:provider.stage}
functions:
  - ${file(src/adminusers/admin-user-route.yml)}
resources:
  # Roles
  - ${file(resources/roles.yml)}
  # DynamoDB tables
  - ${file(resources/dynamodb-tables.yml)}
  # - ${file(resources/dynamodb.yml)}
  # - ${file(resources/iam.yml)}
I shall be very thankful if you help me solve this error. I downgraded the Node version, tried deploying to Amazon directly, and also deployed the code using the CI/CD pipeline, but it shows the same error on each try.
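One thing that stands out in the config above (an observation, not a confirmed fix): the package section excludes node_modules/**, and the nodejs14.x runtime only ships the v2 aws-sdk, not the v3 @aws-sdk/* clients, so @aws-sdk/client-dynamodb most likely never reaches the deployed function. A sketch of the packaging change that would keep the dependency in the bundle, using the newer patterns syntax:

package:
  patterns:
    - '!venv/**' # still exclude what the function does not need
    # node_modules is intentionally not excluded, so that
    # @aws-sdk/client-dynamodb ships with the deployment package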
I am trying to deploy a simple Nuxt 3 application to AWS Lambda for SSR. So far I have:
In nitro.config.ts:
import { defineNitroConfig } from 'nitropack';

export default defineNitroConfig({
  preset: 'aws-lambda',
  serveStatic: true
});
In the Lambda handler:
import { handler } from './output/server/index.mjs';

export const runner = (event, context) => {
  const { statusCode, headers, body } = handler({ rawPath: '/' });
  return {
    statusCode,
    body,
    headers
  };
};
In serverless.yaml:
functions:
  ssr:
    handler: handler.runner
    timeout: 15
    package:
      individually: true
      include:
        - output/**
    events:
      - httpApi:
          path: '*'
          method: '*'
I run yarn build and rename the .output folder to output so that I can include it with package, but I still get errors like "errorMessage": "SyntaxError: Cannot use import statement outside a module".
Does anyone have an idea how this could be done?
It's easier than that: Nuxt already exports the handler, so there is nothing for us to do.
serverless.yml:
service: nuxt-app-lamba
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1
functions:
  nuxt:
    handler: .output/server/index.handler
    events:
      - httpApi: '*'
nuxt.config.ts:
// https://v3.nuxtjs.org/api/configuration/nuxt.config
export default defineNuxtConfig({
  nitro: {
    preset: 'aws-lambda',
    serveStatic: true
  }
})
package.json:
{
  "private": true,
  "scripts": {
    "build": "nuxt build",
    "build:lamba": "NITRO_PRESET=aws-lambda nuxt build",
    "dev": "nuxt dev",
    "generate": "nuxt generate",
    "preview": "nuxt preview",
    "postinstall": "nuxt prepare",
    "deploy": "sls deploy"
  },
  "devDependencies": {
    "nuxt": "3.0.0-rc.11"
  }
}
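With that in place, building and deploying is just the two scripts defined above:

npm run build:lamba   # nuxt build with NITRO_PRESET=aws-lambda
npm run deploy        # sls deploy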
Working example: https://8cyysz2g5f.execute-api.us-east-1.amazonaws.com/ (note: this example was reported as no longer working as of 2022-12-09).
So I am trying to deploy some Lambda functions through CodePipeline using Amazon's new Cloud Development Kit in TypeScript. The issue is that for the Build stage of my pipeline, the docs only provide an example for building Lambda functions written in TypeScript. I know this is probably a simple issue for someone more experienced with buildspecs, but I was wondering if someone could provide me with the equivalent buildspec for Python Lambdas.
I have pasted the code below that defines the pipeline I am trying to create. The cdkBuild works fine, but I am having trouble coming up with the proper commands for install, prebuild, and build in the buildspec for lambdaBuild.
const cdkBuild = new codebuild.PipelineProject(this, 'CdkBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: 'npm install',
      },
      build: {
        commands: [
          'npm run build',
          'npm run cdk synth -- -o dist'
        ],
      },
    },
    artifacts: {
      'base-directory': 'dist',
      files: [
        'AdminStack.template.json',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});

const lambdaBuild = new codebuild.PipelineProject(this, 'LambdaBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      install: {
        commands: [
          /*'python3 -m venv .venv',
          'source .venv/bin/activate',*/
          'pip install -r requirements.txt -t lambda'
        ],
      },
      build: {
        //commands: 'npm run build',
      },
    },
    artifacts: {
      'base-directory': 'lambda',
      files: [
        'admin/tutors/put.py',
        'requirements.txt',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});

const sourceOutput = new codepipeline.Artifact();
const cdkBuildOutput = new codepipeline.Artifact('CdkBuildOutput');
const lambdaBuildOutput = new codepipeline.Artifact('LambdaBuildOutput');

const pipeline = new codepipeline.Pipeline(this, 'BackendPipeline', {
  stages: [
    {
      stageName: 'Source',
      actions: [
        new codepipeline_actions.CodeCommitSourceAction({
          actionName: 'CodeCommit_Source',
          repository: code,
          output: sourceOutput,
        }),
      ],
    },
    {
      stageName: 'Build',
      actions: [
        new codepipeline_actions.CodeBuildAction({
          actionName: 'Lambda_Build',
          project: lambdaBuild,
          input: sourceOutput,
          outputs: [lambdaBuildOutput],
        }),
        new codepipeline_actions.CodeBuildAction({
          actionName: 'CDK_Build',
          project: cdkBuild,
          input: sourceOutput,
          outputs: [cdkBuildOutput],
        }),
      ],
    },
    {
      stageName: 'Deploy',
      actions: [
        new codepipeline_actions.CloudFormationCreateUpdateStackAction({
          actionName: 'AdminStack_CFN_Deploy',
          templatePath: cdkBuildOutput.atPath('AdminStack.template.json'),
          stackName: 'AdminStack',
          adminPermissions: true,
          parameterOverrides: {
            ...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
          },
          extraInputs: [lambdaBuildOutput],
        }),
      ],
    },
  ],
});
First of all, you do not need to use a virtual environment.
The artifacts should be what would go into the .zip you would upload if you created the Lambda manually: the required libraries as well as your own code. Assuming all your Python Lambda code and the requirements.txt are under /lambda, the LambdaBuild project should look like this:
const lambdaBuild = new codebuild.PipelineProject(this, 'LambdaBuild', {
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          // install the dependencies next to the source so both land in the artifact
          'pip install -r lambda/requirements.txt -t lambda',
        ],
      },
    },
    artifacts: {
      'base-directory': 'lambda',
      files: [
        '**/*',
      ],
    },
  }),
  environment: {
    buildImage: codebuild.LinuxBuildImage.STANDARD_2_0,
  },
});
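As a side note, the parameterOverrides: { ...props.lambdaCode.assign(lambdaBuildOutput.s3Location) } line in the question's Deploy stage assumes the Lambda stack declares its code as CfnParametersCode. A minimal sketch of what that could look like in AdminStack (the construct id and handler path here are hypothetical, and the runtime is an assumption):

import * as lambda from '@aws-cdk/aws-lambda'; // CDK v1-style import, matching the question's code

// Declare the code as CloudFormation parameters; the pipeline's
// parameterOverrides fill them in with the LambdaBuild artifact's S3 location.
const lambdaCode = lambda.Code.fromCfnParameters();
new lambda.Function(this, 'PutTutorFunction', {
  code: lambdaCode,
  handler: 'admin/tutors/put.handler',
  runtime: lambda.Runtime.PYTHON_3_8,
});
// lambdaCode is then passed to the pipeline stack as props.lambdaCode.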
We have two different builds of our Dojo application, using either ourapp.profile.js or ourapp.custom.profile.js, which contain the Dojo application build profiles.
Apart from a few differences in the layers property, the rest of these two files are virtually identical. What's the best way to share the common settings between these two files?
Here's a simplified example of one of our application profiles:
var profile = (function () {
    'use strict';
    return {
        basePath: "../",
        releaseDir: "../../../build",
        releaseName: "js",
        action: "release",
        dirs: ["../css", "../css/font", "../img", "../img/icons", "../stylus/themes/common"],
        packages: [
            {
                name: "dbootstrap",
                location: "dbootstrap"
            },
            {
                name: "dgrid",
                location: "dgrid"
            },
            {
                name: "dstore",
                location: "dstore"
            },
            {
                name: "dijit",
                location: "dijit"
            },
            {
                name: "dojo",
                location: "dojo"
            },
            {
                name: "dojox",
                location: "dojox"
            },
            {
                name: "ourapp",
                location: "ourapp"
            },
            {
                name: "lib",
                location: "lib"
            },
            {
                name: "xstyle",
                location: "xstyle"
            },
            {
                name: "specs",
                location: "specs"
            }
        ],
        layers: {
            "dojo/dojo": {
                include: [
                    "dojo/dojo",
                    "dojo/i18n",
                    "dojo/domReady",
                    "ourapp/boot",
                    // more includes
                    ...
                ],
                customBase: true,
                boot: true,
            },
            // other layers
            ...
        },
        layerOptimize: "closure",
        optimize: "closure",
        cssOptimize: "comments",
        mini: 1,
        stripConsole: "warn",
        selectorEngine: "lite",
        insertAbsMids: false,
        staticHasFeatures: {
            "config-deferredInstrumentation": 0,
            // more settings
            ...
        },
        defaultConfig: {
            hasCache: {
                "dojo-built": 1,
                "dojo-loader": 1,
                "dom": 1,
                "host-browser": 1,
                "config-selectorEngine": "lite"
            },
            async: 1
        }
    };
})();
Ideally we'd like both files to share one common set of settings and just specify the parts that differ in our two application profiles.
Update:
This page talks about multiple profile sources, so I'm going to try splitting out the common parts into another profile file and then, when building, running something like:
>build.bat --profile ourapp.shared.profile.js --profile ourapp.profile.js
or
>build.bat --profile ourapp.shared.profile.js --profile ourapp.custom.profile.js
Has anyone tried something similar?
The approach suggested in the update to the question does work, but how different profile properties are combined or replaced isn't well documented, so it took some trial and error, as certain properties are treated differently.
What we have now is the profile shown in the question (ourapp.profile.js), plus ourapp.custom.profile.js as follows:
var profile = (function () {
    'use strict';
    return {
        basePath: "../",
        releaseName: "js-custom",
        packages: [
            {
                name: "ourapp",
                location: "ourapp-custom"
            }
        ]
    };
})();
Now, for our custom build, we run this from the command line:
build.bat --profile ourapp.profile.js --profile ourapp.custom.profile.js
The properties in ourapp.custom.profile.js replace those in ourapp.profile.js, changing the release name to 'js-custom' and replacing the standard ourapp package with an alternative one in ourapp-custom.
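To make the merge concrete, the effective profile for the custom build behaves roughly like the sketch below (based on our trial and error, not on documented guarantees: scalar properties from the later --profile win, and the packages array is merged by package name rather than replaced wholesale):

var effectiveProfile = {
    basePath: "../",
    releaseDir: "../../../build",
    releaseName: "js-custom", // overridden by ourapp.custom.profile.js
    action: "release",
    packages: [
        // ...all other packages from ourapp.profile.js, unchanged, plus:
        { name: "ourapp", location: "ourapp-custom" } // replaced by name
    ]
    // layers, optimize settings, etc. carried over from ourapp.profile.js
};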