I am new to AWS. I am exploring hosting my REST server on AWS but have not been able to solve the following problem:
Failed to find package.json. Node.js may have issues starting. Verify package.json is valid or place code in a file named server.js or app.js.
There were suggestions to zip only the files and sub-directories, not the root folder itself. I tried that, but it still doesn't work.
These are the files I zip:
bin
models
node_modules
public
routes
views
app.js
authenticate.js
ca.crt
ca.key
config.js
db.json
ia.crt
ia.csr
ia.key
ia.p12
package.json
The following is the content of my package.json file:
{
  "name": "rest-server",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.15.2",
    "cookie-parser": "~1.4.3",
    "debug": "~2.2.0",
    "express": "~4.14.0",
    "jade": "~1.11.0",
    "jsonwebtoken": "^7.1.9",
    "mongoose": "^4.7.0",
    "mongoose-currency": "^0.2.0",
    "morgan": "~1.7.0",
    "passport": "^0.3.2",
    "passport-facebook": "^2.1.1",
    "passport-local": "^1.0.0",
    "passport-local-mongoose": "^4.0.0",
    "serve-favicon": "~2.3.0"
  }
}
I am using the Elastic Beanstalk dashboard, not the eb command line tool.
What am I doing wrong?
There is a possible solution on the AWS developer forum regarding this Node.js deploy error in Elastic Beanstalk. It might be due to package.json not being zipped at the correct location:
Node.js deploy fail in Elasticbeanstalk
https://forums.aws.amazon.com/thread.jspa?threadID=130140&tstart=0
You may try downloading the sample application from the AWS website and following the folder structure used there:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/tutorials.html
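The usual cause of that message is an extra top-level folder inside the zip, so package.json is not at the archive root. Creating the archive from inside the project directory avoids that; a rough sketch (the folder and archive names are just examples):

# Run from inside the project folder so package.json sits at the root of the zip
cd my-rest-server
zip -r ../rest-server.zip . -x "*.git*"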
I'm making an API that returns a PDF generated by Puppeteer. I installed the packages with npm install chrome-aws-lambda and npm install puppeteer --save-dev, but when I run the API I get this exception.
I tried running npm install again but it doesn't work. How can I install Chromium or make Puppeteer work?
This is my code and package.json:
const chromium = require("chrome-aws-lambda");

exports.handler = async (event) => {
  let browser = await chromium.puppeteer.launch({ headless: true });
  let page = await browser.newPage();
  await page.goto("https://www.google.com");
  const pdf = await page.pdf({
    format: "A4",
    printBackground: false,
    preferCSSPageSize: true,
    displayHeaderFooter: false,
    headerTemplate: `<div class="header" style="font-size:20px; padding-left:15px;"><h1>Main Heading</h1></div> `,
    footerTemplate: '<footer><h5>Page <span class="pageNumber"></span> of <span class="totalPages"></span></h5></footer>',
    margin: { top: "200px", bottom: "150px", right: "20px", left: "20px" },
    height: "200px",
    width: "200px",
  });
  await browser.close();
  return {
    statusCode: 200,
    // Uncomment below to enable CORS requests
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true
    },
    body: pdf
  };
};
package.json:
{
  "name": "amplifysandboxpdf",
  "version": "2.0.0",
  "description": "Lambda function generated by Amplify",
  "main": "index.js",
  "license": "Apache-2.0",
  "devDependencies": {
    "@types/aws-lambda": "^8.10.92",
    "@types/puppeteer": "^5.4.6",
    "puppeteer": "^17.1.2"
  },
  "dependencies": {
    "chrome-aws-lambda": "^10.1.0",
    "puppeteer-core": "^10.0.0"
  }
}
Usually, when you upload a zip to Lambda, you need to provide both your source code and node_modules (modules can also be added as a Lambda layer). In your case the error is caused by a missing package, so the first place I'd look is the zip you provide to Lambda and whether it contains all the necessary packages.
I see in your package.json that you have both puppeteer and puppeteer-core. Puppeteer downloads Chromium on installation (that's the bit that errors for you). First decide whether puppeteer is even necessary; maybe the core package is enough. But if it is necessary, then, once again, it needs to be in the zip you provide.
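If puppeteer-core plus chrome-aws-lambda turns out to be enough, the launch call is usually pointed at the bundled Chromium roughly like this (a minimal sketch based on chrome-aws-lambda's documented exports; the response shape is just an example, not your exact handler):

const chromium = require("chrome-aws-lambda");

exports.handler = async (event) => {
  // chrome-aws-lambda bundles a Chromium build plus the launch flags it needs on Lambda
  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });
  try {
    const page = await browser.newPage();
    await page.goto("https://www.google.com");
    const pdf = await page.pdf({ format: "A4" });
    // Returning base64 here assumes API Gateway is configured for binary responses
    return { statusCode: 200, body: pdf.toString("base64"), isBase64Encoded: true };
  } finally {
    await browser.close();
  }
};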
I'm not sure exactly how puppeteer does this, but if it downloads Chromium into node_modules it should work once your zip file is correctly packaged. Otherwise, you might need to create your own Docker container using the Puppeteer base image.
If this is the case, these are the steps you need to take:
Create a Dockerfile based on the Puppeteer base image, build your own image, and make sure your application is installed in it. Here is an example of what this file could look like:
FROM ghcr.io/puppeteer/puppeteer:16.1.0
WORKDIR /home/pptruser/app
ADD ./package-lock.json /home/pptruser/app/package-lock.json
ADD ./package.json /home/pptruser/app/package.json
RUN npm ci
# Customize the line below - copy files that your application requires
ADD ./src/. /home/pptruser/app/src/
# Remove development dependencies (in your case puppeteer is not a devDependency)
RUN npm prune --production
CMD [ "index.js" ]
Test locally whether it works in the container, then publish the image to ECR (the AWS registry for Docker containers)
Launch the Lambda from that image instead of using the Node.js runtime
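Roughly, the build-and-publish steps could look like this (the account ID, region, and names below are placeholders; adjust them to your setup):

# Build the image from the Dockerfile above
docker build -t pdf-lambda .
# Create an ECR repository and push the image (placeholders: 123456789012 / eu-west-1)
aws ecr create-repository --repository-name pdf-lambda --region eu-west-1
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag pdf-lambda:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/pdf-lambda:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/pdf-lambda:latest
# Create the Lambda function from the container image
aws lambda create-function --function-name pdf-lambda --package-type Image --code ImageUri=123456789012.dkr.ecr.eu-west-1.amazonaws.com/pdf-lambda:latest --role arn:aws:iam::123456789012:role/pdf-lambda-role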
I have started using the Serverless Framework with AWS. My source is in TypeScript, which is built to JavaScript before deploying. This gets uploaded to S3 and then the Lambda function is created. I noticed that my Lambda functions are over 70 MB although I only have a few lines of code, with operations that use just the aws-sdk, like querying DynamoDB or Secrets Manager.
To investigate this, I downloaded the zipped file that the Serverless Framework uploads to S3 and unzipped it to inspect its contents. It has a folder named ${WORKSPACE} which accounts for about 70% of the package size and does not seem to contain anything relevant to the Lambda function.
My package.json looks like this
{
  "name": "my-api",
  "version": "0.0.1",
  "description": "API to retrieve secrets from Secrets Manager",
  "license": "UNLICENSED",
  "engines": {
    "node": ">= 14.0.0"
  },
  "scripts": {
    "build": "tsc -p .",
    "deploy": "sls deploy"
  },
  "dependencies": {
    "@aws-sdk/client-secrets-manager": "^3.33.0"
  },
  "devDependencies": {
    "serverless": "^2.59.0",
    "typescript": "^4.4.3"
  }
}
and the relevant part of the serverless.yml looks like this
service: sls-my-api
useDotenv: true
variablesResolutionMode: 20210326
frameworkVersion: '2'
package:
  patterns:
    # Includes
    - 'dist/**.js'
    # Excludes
    - '!./**.md'
    - '!./**.env*'
    - '!.github/**'
    - '!.npm/**'
    - '!coverage/**'
    - '!docker/**'
    - '!docs/**'
    - '!dist/tsconfig.build.tsbuildinfo'
    - '!dist/**.d.ts'
    - '!src/**'
    - '!test/**'
    - '!.eslintrc.js'
    - '!.npmrc'
    - '!.prettier*'
    - '!./tsconfig*.json'
    - '!Jenkinsfile'
    - '!jest.json'
    - '!nest-cli.json'
    - '!run-sonar.sh'
    - '!sonar-project.properties'
    - '!tslint.json'
    - '!.gitattributes'
    - '!.npmignore'
    - '!.s2iignore'
    - '!database/**'
In the Jenkins stages, the following commands are run:
npm run build
npm run deploy
Also, because of the size limit, inline editing is not available in the Lambda console, which is usually very useful for me, especially for quick testing and debugging without waiting for a redeployment.
Does the directory ${WORKSPACE} have any significance for this deployment? If not, how do I exclude it from the deployment package and make my lambda lean?
False Alarm!
The directory ${WORKSPACE} is generated by the Jenkins run, not by the Serverless Framework. The framework, however, was picking it up while packaging and deploying the application, which bloated the Lambda function.
Excluding it as follows did the trick.
package:
  patterns:
    # Includes
    - 'dist/**.js'
    # Excludes
    - '!${WORKSPACE}'
This might be good to know for people using the Serverless CLI on Jenkins.
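If you want to verify what ends up in the artifact before deploying, one quick check (assuming the default .serverless output directory, the service name from above, and that functions are not packaged individually) is:

# Build the deployment artifact locally without deploying
npm run build
npx sls package
# List the contents of the generated zip
unzip -l .serverless/sls-my-api.zip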
I am working on a Google Cloud Functions project and I want to use the AWS SDK, so I edited the package.json and it looks like this:
{"name": "sample-http",
"version": "0.0.1"
"dependencies": {
"aws-sdk": "2.574.0"
}
}
The deployment crashes with the following message in the logs: INVALID_ARGUMENT
I am working in the Browser environment.
Can somebody help, or does it have something to do with the fact that I am not on the paid plan yet?
I already saw this post but the answer is not working for me as you can see.
Your JSON is not valid; it is missing a comma after the "version" entry:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "aws-sdk": "2.574.0"
  }
}
Using npm itself (npm init to create package.json, npm install <package> --save to add dependencies) lets you avoid hand-editing mistakes like this. Additionally, there are plenty of publicly available JSON validators on the web you can use to check for problems like this.
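As a rough sketch of that workflow (the version pin just mirrors the one in the question):

# Generate a valid package.json with defaults, then let npm write the dependency entry
npm init -y
npm install aws-sdk@2.574.0 --save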
For some reason, when I install packages for Cloud Functions, package.json is not updated.
I'm developing a PWA using Ionic 3.
Normally, the JavaScript files generated by Ionic 3's browser build are placed in the www/build folder.
I wish to change this folder to www/build/v1 and, of course, keep the application working.
How could I change the build process of Ionic 3 to achieve this result?
The simplest way is to add a "config" section to your package.json:
{
  "name": "projectname",
  "version": "0.0.1",
  "author": "Ionic Framework",
  ...,
  "config": {
    "ionic_www_dir": "www/v1",
    "ionic_build_dir": "www/v1/build"
  },
  "dependencies": {
    ...
  }
}
You can read "Custom Project Structure" here ionic docs.
You may want to try the config option in the package.json file to provide custom build configuration.
To get started, add a config entry to the package.json file. From there, you can provide your own configuration file.
Here's an example of specifying a custom configuration:
"config": {
...
"ionic_rollup": "./config/rollup.config.js",
"ionic_cleancss": "./config/cleancss.config.js",
...
},
You may want to see this Ionic documentation for more information.
I'm trying to get casperjs to work with my AWS Lambda function.
{
  "errorMessage": "Cannot find module 'casper'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)",
    "Object.<anonymous> (/var/task/index.js:3:14)",
    "Module._compile (module.js:409:26)",
    "Object.Module._extensions..js (module.js:416:10)",
    "Module.load (module.js:343:32)",
    "Function.Module._load (module.js:300:12)",
    "Module.require (module.js:353:17)"
  ]
}
I keep getting this error where Lambda can't find casperjs. I installed the casperjs module into my directory before zipping the files up, and then uploaded the zip file to Lambda.
My package.json file says I have casperjs installed.
{
  "name": "lambda",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "casperjs": "^1.1.3"
  }
}
Would anyone know what I'm doing wrong? Thanks.
Since CasperJs relies on PhantomJs, you can set it up very similarly to this repo: https://github.com/TylerPachal/lambda-node-phantom.
The main difference is that you need to add and target CasperJs, and you need to make sure that CasperJs can find and load PhantomJs.
Create a node_modules directory in your package directory.
Add a dependency for CasperJs to the package.json file:
"dependencies": {
"casperjs": "latest"
}
In Terminal, navigate to your package directory and run 'npm update' to add the CasperJs package to the node_modules directory.
Assuming that you want to run CasperJs with the 'test' argument, the index.js file will need to be changed to look like this:
var childProcess = require('child_process');
var path = require('path');

exports.handler = function(event, context) {
  // Set the path as described here: https://aws.amazon.com/blogs/compute/running-executables-in-aws-lambda/
  process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'];

  // Set the path to casperjs
  var casperPath = path.join(__dirname, 'node_modules/casperjs/bin/casperjs');

  // Arguments for the casper script
  var processArgs = [
    'test',
    path.join(__dirname, 'casper_test_file.js')
  ];

  // Launch the child process
  childProcess.execFile(casperPath, processArgs, function(error, stdout, stderr) {
    if (error) {
      context.fail(error);
      return;
    }
    if (stderr) {
      context.fail(stderr);
      return;
    }
    context.succeed(stdout);
  });
}
If you don't want to run CasperJs with the 'test' argument, just remove it from the arguments list.
The PhantomJs binary in the root directory of your package needs to be renamed to phantomjs, so that CasperJs can find it. If you would like to get a new version of PhantomJs, you can get one here: https://bitbucket.org/ariya/phantomjs/downloads. Make sure to download a linux-x86_64.tar.bz2 type so that it can run in Lambda. Once downloaded, just pull a new binary out of the bin directory and place it in your root package directory.
In order for Lambda to have permission to access all the files, it's easiest to zip the package in a Unix-like operating system. Make sure that all the files in the package have read and execute permissions. From within the package directory: chmod -R o+rx *. Then zip it up with: zip -r my_package.zip *.
Upload the zipped package to your Lambda function.
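If you prefer the command line to the console upload, the same zip can also be pushed with the AWS CLI (the function name below is a placeholder):

# Update an existing function's code with the zipped package
aws lambda update-function-code --function-name my-casper-function --zip-file fileb://my_package.zip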
According to the CasperJS docs, it is not actually a Node module, so you cannot just list it in package.json, zip it up with your node modules, and require it. You will need to find out how to install it on the Lambda instance, or find an actual Node module that does what you want. I suspect installing Casper on Lambda might not be possible, but that's just my gut.
Warning
While CasperJS is installable via npm, it is not a NodeJS module and will not work with NodeJS out of the box. You cannot load casper by using require(‘casperjs’) in node. Note that CasperJS is not capable of using a vast majority of NodeJS modules out there. Experiment and use your best judgement.
http://docs.casperjs.org/en/latest/installation.html