Serverless Python package - dlib dependency

I am building a Python deployment package for AWS Lambda that relies on dlib. dlib has OS dependencies and relies on cmake to build its binaries. I am wondering how to do this given that I have a Mac and do my development in that environment. I am aware of Docker, but I am not sure how to set up an image to compile the binaries for AWS. Any help in doing this would be appreciated.

The easiest way is to use the serverless-package-python-functions plugin.
So simply define this in serverless.yml:
package:
  individually: true

custom:
  pkgPyFuncs:
    buildDir: _build
    requirementsFile: requirements.txt
    cleanup: true
    useDocker: true
The important part is useDocker: true - this spins up a Docker container locally, based on the AWS AMI, so you get the right dependencies.
After that, create your function in serverless.yml:
functions:
  test:
    name: ${opt:stage, self:provider.stage}-${self:service}-test
    handler: lambda_function.lambda_handler
    package:
      include:
        - ./test
      artifact: ${self:custom.pkgPyFuncs.buildDir}/${self:functions.test.name}.zip
Place the requirements.txt inside your test folder. This file will be used to deploy the service with the right packages.
Let me know if you have further questions.
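For reference, the pieces above combine into a single serverless.yml roughly like the following sketch. The service name, stage, and runtime are assumed values, and dlib itself would be listed in test/requirements.txt:
# Sketch of a combined serverless.yml - service name, stage and runtime are assumptions
service: my-dlib-service

provider:
  name: aws
  runtime: python3.6
  stage: dev

plugins:
  - serverless-package-python-functions

package:
  individually: true

custom:
  pkgPyFuncs:
    buildDir: _build
    requirementsFile: requirements.txt
    cleanup: true
    useDocker: true

functions:
  test:
    name: ${opt:stage, self:provider.stage}-${self:service}-test
    handler: lambda_function.lambda_handler
    package:
      include:
        - ./test
      artifact: ${self:custom.pkgPyFuncs.buildDir}/${self:functions.test.name}.zip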

Related

Is it wrong/dangerous to include the aws-exports.js file in source control?

Amplify automatically adds aws-exports.js to .gitignore, possibly just because it may change frequently and is fully generated - but maybe there are also security concerns?
For this project my GitHub repository is private, so that is not a concern, but I am wondering about future projects that could be public.
The reason I ask is that if I want to run my app setup/build/test through GitHub workflows, I seem to need this file for the build to complete properly on GitHub machines.
I also appear to need it for my Amplify CI hosting to work in the Amplify Console (I have connected my Amplify Console build->deploy to my GitHub master branch and it all works perfectly, but only when aws-exports.js is in source control).
Here is my amplify.yml. I am using reason-react with Next.js, and the Amplify Console tells me I am connected to the correct backend:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn run build
  artifacts:
    baseDirectory: out
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
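One way to keep aws-exports.js out of source control while still letting CI builds complete is to store its contents as a secret and recreate the file during the workflow. A minimal GitHub Actions sketch, where the secret name AWS_EXPORTS_JS and the src/aws-exports.js path are assumptions rather than anything from the question:
# .github/workflows/build.yml - sketch; secret name and file path are assumed
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Recreate aws-exports.js from a repository secret
        run: echo "$AWS_EXPORTS_JS" > src/aws-exports.js
        env:
          AWS_EXPORTS_JS: ${{ secrets.AWS_EXPORTS_JS }}
      - run: yarn install
      - run: yarn run build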

Serverless Offline undefined module when loaded from lambda layer

I have a project tree in which the nodejs folder is a lambda layer, defined in the following serverless.yaml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
The layer1 only contains the uuid package.
When I try to run the lambda locally using the serverless-offline plugin, it says it can't find the module uuid.
But when I deploy the code to AWS, it runs like a charm.
Is there any way to get lambda layers running locally, for testing purposes and to speed up development?
Or is there a way to dynamically set the node_modules path to point to the layer folder during development, and change it to the proper path when pushing to production?
OK, after many trials I figured out a working solution.
I added an npm run command which exports a temporary node_modules path onto Node's module search path:
"scripts": {
"offline": "export NODE_PATH=\"${PWD}/nodejs/node_modules\" && serverless offline"
},
With NODE_PATH set, Node can also look up modules inside the layer's subfolder, so running npm run offline picks up the layer's node_modules.
I got around this by running serverless-offline in a container and copying my layers into the /opt/ directory with gulp. I set a gulp watch to monitor any layer changes and to copy them to the /opt/ directory.
I use layers in serverless-offline by installing the layer from the local file system as a dev dependency:
npm i <local_path_to_my_layer_package> --save-dev
BTW, this issue was fixed in sls 1.49.0.
Just run:
sudo npm i serverless
Then specify a package include in the layer section of serverless.yml:
service: aws-nodejs # NOTE: update this with your service name

provider:
  name: aws
  runtime: nodejs8.10
  stage: dev

plugins:
  - serverless-offline

layers:
  layer1:
    path: nodejs # required, path to layer contents on disk
    package:
      include:
        - node_modules/**
    name: ${self:provider.stage}-layerName # optional, Deployed Lambda layer name

functions:
  hello:
    handler: handler.hello
    layers:
      - {Ref: Layer1LambdaLayer}
    events:
      - http:
          path: /dev
          method: get
Tested on nodejs10.x runtime

How to serve a Java application as Docker container and .war file?

Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we have some customers who rely on an external installation.
Since some of them still use Windows Server (2008, though), I cannot install a proper Docker environment there; instead we need to install Apache Tomcat and run the application inside it.
Question: how do I deal with that? I would need a pipeline that creates both a Docker image and a .war file.
Should I simply create two completely independent pipelines, or handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines lets you pick the right one for each client.
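As an illustration of keeping the two deliveries separate while reusing the existing configuration, one possible sketch adds a second deploy job next to the existing install job, gated by a pipeline variable. The DELIVERY variable and the Dockerfile at the repository root are assumptions, not part of the original setup:
# Sketch: extra deploy job for the Docker delivery; the DELIVERY variable and
# a Dockerfile in the repository root are assumed
docker-image:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  rules:
    - if: '$DELIVERY == "docker"'
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
A client needing the Tomcat delivery would run the pipeline as today and take the .war artifact from the install job; a client on Docker would trigger it with DELIVERY set to docker.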

How to upload node_modules to AWS with serverless?

I have a project on the Serverless framework. I need to resize images, so I wrote a lambda function and installed the sharp module. I also use serverless-webpack. In the webpack config I added externals: ['sharp'] and added the following to serverless.yml:
custom:
  webpack:
    includeModules:
      packagePath: './src/package.json'
I deployed it successfully, but when I run the lambda I get:
error: Cannot find module 'sharp'
Maybe I am doing something wrong. If more information is needed, I can add it.
You can use forceInclude
# serverless.yml
custom:
  webpack:
    includeModules:
      forceInclude:
        - sharp
Reference document
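If you keep the nested package.json from the question, both settings can sit together under includeModules; a sketch combining the two snippets above, assuming sharp is declared in ./src/package.json:
# serverless.yml - sketch combining the question's packagePath with forceInclude
custom:
  webpack:
    includeModules:
      packagePath: './src/package.json'
      forceInclude:
        - sharp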

Caching the Gradle wrapper in AWS CodeBuild

This is what my current buildspec looks like:
phases:
  build:
    commands:
      - ./gradlew soakTest -s

cache:
  paths:
    - '/root/.gradle/caches/**/*'
    - '.gradle/**/*'
But when this buildspec runs in CodeBuild, it prints messages that it is downloading Gradle 4.7.
Other things appear to be cached correctly - I don't see log messages about downloading jar dependencies, for example.
What should the buildspec cache specification look like to make sure the Gradle distribution that the Gradle wrapper downloads gets cached?
Add the wrapper directory to the cache paths:
- '/root/.gradle/wrapper/**/*'
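The resulting cache section would then look roughly like this sketch, based on the buildspec above:
cache:
  paths:
    - '/root/.gradle/caches/**/*'
    - '/root/.gradle/wrapper/**/*'
    - '.gradle/**/*'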