AWS serverless deployment with webpack-built modules failing

I have created a serverless aws-nodejs template project and in it I have organized my js files in the following way -
project root
| .env
| src
|   | controllers
|   |   <js_files_here>
|   | helpers
|   |   <js_files_here>
|   | models
|   |   <js_files_here>
|   | routes
|   |   <yml_files_here>
And this is my serverless.yml -
service: rest-api
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-south-1
plugins:
  - serverless-bundle
  - serverless-offline
functions:
  ${file(./src/routes/index.yml)}
and in one of my js files I am trying to use -
require('dotenv').config({ path: './.env' });
So I am trying to load some of the environment variables from this .env file. This works as expected when I test the files locally with sls offline start,
but when I deploy to an AWS account the APIs stop working, and when I inspect the package (rest-api.zip) in the .serverless directory I do not see all the files from the src directory in there.
So, how do I fix this issue and deploy my project correctly with Serverless on AWS?

Your problem is that webpack failed to include the .env file in its transitive closure when working out which files you need, because dotenv imports it dynamically.
You could use a webpack plugin to explicitly include your env file, such as this one.
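For example, since your serverless.yml already uses serverless-bundle, its copyFiles option can copy extra files into the package; treat the exact keys below as a sketch to verify against the plugin's documentation:

custom:
  bundle:
    copyFiles:
      # copy the .env file into the root of the packaged bundle
      - from: '.env'
        to: './'

Alternatively, you can skip dotenv in the deployed code entirely and define the variables under provider.environment in serverless.yml, so Lambda injects them at runtime.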

Related

Local packages not loading to GCP python functions with github actions

I am trying to deploy a GCP function. My code uses a package that's on a private repository. I create a local copy of that package in the folder, and then use gcloud function deploy from the folder to deploy the function.
This works well. I can see a function that is deployed, with the localpackage.
The problem is with using GitHub Actions to deploy the function.
The function is part of a repository that has multiple functions, so I run GitHub Actions from outside the function's folder when I deploy, and while the function gets deployed, the dependencies do not get picked up.
For example, this is my folder structure:
my_repo
- .github/
  - workflows/
    - function_deploy.yaml
- function_1_folder
  - main.py
  - requirements.txt
  - .gcloudignore
  - localpackages --> These are the packages I need uploaded to GCP
My function_deploy.yaml looks like :
name: Build and Deploy to GCP functions
on:
  push:
    paths:
      - function_1_folder/**.py
env:
  PROJECT_ID: <project_id>
jobs:
  job_id:
    runs-on: ubuntu-latest
    permissions:
      contents: 'read'
      id-token: 'write'
    steps:
      - uses: 'actions/checkout@v3'
      - id: 'auth'
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: <credentials>
      - id: 'deploy'
        uses: 'google-github-actions/deploy-cloud-functions@v0'
        with:
          name: <function_name>
          runtime: 'python38'
          region: <region>
          event_trigger_resource: <trigger_resource>
          entry_point: 'main'
          event_trigger_type: <pubsub>
          memory_mb: <size>
          source_dir: function_1_folder/
The google function does get deployed, but it fails with:
google-github-actions/deploy-cloud-functions failed with: operation failed: Function failed on loading user code. This is likely due to a bug in the user code. Error message: please examine your function logs to see the error cause...
When I look at the google function, I see that the localpackages folder hasn't been uploaded to GCP.
When I deploy from my local machine however, it does upload the localpackages.
Any suggestions on what I may be doing incorrectly? And how to upload the localpackages?
I looked at this question:
Github action deploy-cloud-functions not building in dependencies?
But I didn't quite understand what was done there.

How do I create build and deploy paths for GitLab CI/CD?

My project folder:
- api
- frontend
The build and deploy succeed, but there is no effect on the website,
so I think I need to set the frontend path in my .yml file, but I don't know how to do that.
Can anyone help me?
stages:
  - build
  - deploy
variables:
  ARTIFACT_NAME: my-cookbook.tgz
  DEV_BUCKET: dev-account-devops
  PROD_BUCKET: prod-account-devops
  S3_PATH: elk/${ARTIFACT_NAME}-${CI_BUILD_ID}-${CI_BUILD_REF}
package:
  stage: build
  script: git archive --format tgz HEAD > $ARTIFACT_NAME
  artifacts:
    untracked: true
    expire_in: 1 week
deploy_development:
  stage: deploy
  script:
    - export AWS_ACCESS_KEY=$DEV_AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$DEV_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$DEV_BUCKET/$S3_PATH
  environment: development
deploy_production:
  stage: deploy
  script:
    - export AWS_ACCESS_KEY=$PROD_AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$PROD_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$PROD_BUCKET/$S3_PATH
  environment: production
  when: manual
  only:
    - master
[The OP answered their own question in the linked Forum Post that was replaced with the .yml file in the question. The answer is copied here so that this question has an answer.]
The export for the AWS Access Key had a typo and was missing the last bit. It should be AWS_ACCESS_KEY_ID not AWS_ACCESS_KEY.
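Applied to the deploy jobs above, the fix looks like this (only the variable name changes; shown for the development job, the production job is analogous):

deploy_development:
  stage: deploy
  script:
    # AWS_ACCESS_KEY_ID is the variable name the AWS CLI actually reads
    - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_KEY
    - export AWS_SECRET_ACCESS_KEY=$DEV_SECRET_ACCESS_KEY
    - aws s3 cp $ARTIFACT_NAME s3://$DEV_BUCKET/$S3_PATH
  environment: development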

How to get un-versioned files included when deploying serverless apps via GitHub Actions?

I have a serverless project which requires SSL signed cert/private key for communication to an API. The cert/key aren't in version control, but locally are in my file system. The files get bundled with the lambdas in the service and are accessible for use when deployed.
package:
  individually: true
  include:
    - signed-cert.pem
    - private-key.pem
Deployment is done via Github Actions.
e.g. npm install serverless ... npx serverless deploy
How could those files be included without adding them to version control? Could they be retrieved from S3? Some other way?
It looks like encrypting the files may work, but is there a better approach? The lambdas could fetch them from S3, but I'd rather avoid additional latency on every startup if possible.
Looks like adding a GitHub secret for the private key and certificate works. Just paste the cert/private key text into a GitHub secret e.g.
Secret: SIGNED_CERT, Value: -----BEGIN CERTIFICATE-----......-----END CERTIFICATE-----
Then in the GitHub Action Workflow:
- name: create ssl signed certificate
  run: 'echo "$SIGNED_CERT" > signedcert.pem'
  shell: bash
  env:
    SIGNED_CERT: ${{secrets.SIGNED_CERT}}
  working-directory: serverless/myservice
- name: create ssl private key
  run: 'echo "$PRIVATE_KEY" > private-key.pem'
  shell: bash
  env:
    PRIVATE_KEY: ${{secrets.PRIVATE_KEY}}
  working-directory: serverless/myservice
The working-directory is only needed if the serverless.yml isn't at the root level of the project.
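As a quick sanity check (not part of the original answer, just a suggestion), you can package without deploying and list the zip contents to confirm the .pem files were picked up:

npx serverless package
unzip -l .serverless/*.zip | grep pem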

AWS Layer: adding "/opt/" path when using a Nodejs layer

So I uploaded this layer to AWS using the Serverless framework:
service: webstorm-layer
provider:
  name: aws
  runtime: nodejs8.10
  region: us-east-1
layers:
  nodejs:
    path: nodejs # path to contents on disk
    name: node-webstormlibs # optional, Deployed Lambda layer name
    description: JS shared libs for node
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - '*'
The libraries I need are inside the "nodejs" directory; that is where my package.json file and the "node_modules" directory live. So far all looks fine, but when I try to run a lambda that uses the "node-webstormlibs" layer, I get the message:
"errorMessage": "Cannot find module 'pg'",
The pg module does exist in the zip file that creates the layer, so I am unsure how to import a module that lives inside the layer. In some tutorials I see:
import pg from "pg";
as usual, but in others I see:
import pg from "/opt/pg";
or even:
import pg from "/opt/nodejs/node_modules/pg";
I don't know if the "path:" option in my serverless.yml is correct, though.
On the server, the path is:
NODE_PATH=/opt/nodejs/node8/node_modules/:/opt/nodejs/node_modules:$LAMBDA_RUNTIME_DIR/node_modules
UPDATE
Putting everything in the nodejs/node8 directory did the trick.
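Based on the NODE_PATH above for the nodejs8.10 runtime, the layer zip is expected to unpack under /opt with roughly this layout (a sketch inferred from those paths, not an official listing):

nodejs/
  node8/
    node_modules/
      pg/
      # ...the rest of the shared packages

With that layout, /opt/nodejs/node8/node_modules ends up on NODE_PATH and a plain import pg from "pg" resolves.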

Serverless Offline undefined module when loaded from lambda layer

I have the following project tree, where the nodejs folder is a lambda layer defined in the following serverless.yaml:
service: aws-nodejs # NOTE: update this with your service name
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
plugins:
  - serverless-offline
layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name
functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
The layer1 layer only contains the uuid package.
When I try to run the lambda locally using the serverless-offline plugin, it says it can't find the module uuid.
But when I deploy the code to AWS, it runs like a charm.
Is there any way to get lambda layers running locally, for testing purposes and for speeding up development?
Or is there a way to dynamically point the node_modules path at the layer folder during development, and switch it to the proper path when pushing to production?
OK, after many trials I figured out a working solution.
I added an npm run script which exports a temporary node_modules path to the list of lookup paths:
"scripts": {
  "offline": "export NODE_PATH=\"${PWD}/nodejs/node_modules\" && serverless offline"
},
This way, Node can also look up the modules inside that subfolder.
I got around this by running serverless-offline in a container and copying my layers into the /opt/ directory with gulp. I set a gulp watch to monitor any layer changes and to copy them to the /opt/ directory.
I use layers in serverless-offline by installing the layer from the local file system as a dev dependency:
npm i <local_path_to_my_layer_package> --save-dev
BTW, this issue was fixed in Serverless 1.49.0. Just upgrade:
sudo npm i serverless
Then you should specify a package include in the layers section of serverless.yml:
service: aws-nodejs # NOTE: update this with your service name
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
plugins:
  - serverless-offline
layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    package:
      include:
        - node_modules/**
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name
functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
Tested on nodejs10.x runtime
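For a quick local check with this config, something like the following should work (a hedged sketch: serverless-offline listens on port 3000 by default, and newer plugin versions may prefix the stage to the URL, e.g. /dev/dev):

npx serverless offline
curl http://localhost:3000/dev    # the http GET endpoint defined above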