How do I deploy monorepo code to AWS Lambda using lerna?

I am attempting to make two AWS Lambda functions (written in typescript). Both of these functions share the same code for interacting with an API. In order to not have to copy the same code to two different Lambdas, I would like to move my shared code to a local module, and have both my Lambdas depend on said module.
My initial attempt at sharing code between the two lambdas was to use a monorepo and lerna. My current project structure looks like this:
- lerna.json
- package.json
- packages
  - api
    - package.json
  - lambdas
    - funcA
      - package.json
    - funcB
      - package.json
lerna.json:
{
  "packages": [
    "packages/api",
    "packages/lambdas/*"
  ],
  "version": "1.0.0"
}
In the package.json of each of my Lambda functions, I am able to include my local api module like so:
"dependencies": {
"#local/api": "*"
}
With this, I've been able to move the common code to its own module. However, I'm now not sure how to bundle my functions to deploy to AWS Lambda. Is there a way for lerna to be able to create a bundle that can be deployed?
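For context, the lambda code then just imports from the shared package; a rough sketch (the ApiClient name and file path are made up for illustration):
// packages/lambdas/funcA/src/handler.ts (illustrative only)
import { ApiClient } from "@local/api"; // the shared module

export const handler = async (): Promise<void> => {
  const api = new ApiClient();
  // ...call the shared API-interaction code here
};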

As cp -rL doesn't work on macOS, I had to come up with something similar.
Here is a workflow that works if all of your packages belong to one scope (@org):
In the package.json of your lerna repo:
"scripts": {
"deploy": "lerna exec \"rm -rf node_modules\" && lerna bootstrap -- --production && lerna run deploy && lerna bootstrap"
}
In the package that contains your lambda function:
"scripts":{
"deploy": "npm pack && tar zxvf packagename-version.tgz && rm -rf node_modules/#org && cp -r node_modules/* package/node_modules && cd package && npm dedupe"
}
Now replace "packagename-version" and "#org" with the respective values of your project. Also add all of the dependent packages to "bundledDependencies".
After running npm run deploy in the root of your lerna monorepo, you end up with a folder "package" inside the package that contains your lambda function. It has all the dependencies needed to run your function. Take it from there.
I had hoped that using npm pack would allow me to utilize .npmignore files, but it seems that doesn't work. If anyone has an idea how to make it work, let me know.

I have struggled with this same problem for a while now, and I was finally forced to do something about it.
I was using a little package named slice-node-modules, as found here in this similar question, which was good enough for my purposes for a while. As I have consolidated more of my projects into monorepos and begun using shared dependencies which reside as siblings rather than being externally published, I ran into shortcomings with that approach.
I've created a new tool called lerna-to-lambda which was specifically tailored to my use case. I published it publicly with minimal documentation, hopefully enough to help others in similar situations. The gist of it is that you run l2l in your bundling step, after you've installed all of your dependencies, and it copies what is needed into an output directory which is then ready to deploy to Lambda using SAM or whatever.
For example, from the README, something like this might be in your Lambda function's package.json:
"scripts": {
...
"clean": "rimraf build lambda",
"compile": "tsc -p tsconfig.build.json",
"package": "l2l -i build -o lambda",
"build": "yarn run clean && yarn run compile && yarn run package"
},
In this case, the compile step is compiling TypeScript files from a source directory into JavaScript files in the build directory. Then the package step bundles up all the code from build along with all of the Lambda's dependencies (except aws-sdk) into the directory lambda, which is what you'd deploy to AWS. If someone were using plain JavaScript rather than TypeScript, they could just copy the necessary .js files into the build directory before packaging.
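For reference, a minimal tsconfig.build.json matching that layout could look something like the following; the exact compiler options are an assumption, the important bits are the src input and the build output directory:
{
  "compilerOptions": {
    "target": "ES2019",
    "module": "commonjs",
    "outDir": "build",
    "strict": true
  },
  "include": ["src/**/*.ts"]
}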
I realize this question is over 2 years old, and you've probably figured out your own solutions and/or workarounds since then. But since it is still relevant to me, I assume it's still relevant to someone out there, so I am sharing.

Running lerna bootstrap will create a node_modules folder in each "package". This will include all of your lerna managed dependencies as well as external dependencies for that particular package.
From then on, your deployment of each lambda will be agnostic of the fact that you're using lerna. The deployment package will need to include the code for that specific lambda and the node_modules folder for that lambda - you can zip these and upload them manually, or use something like SAM or CloudFormation.
Edit: as you rightly point out, you'll end up with symlinks in your node_modules folder, which makes things awkward to package up. To get around this, you could run something like this prior to packaging for deployment:
cp -rL lambdas/funcA/node_modules lambdas/funcA/packaged/node_modules
The -L will force the symlinked directories to be copied into the folder, which you can then zip.
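From there, a rough sketch of zipping and uploading manually (the function name and paths are placeholders; SAM/CloudFormation users would point their template at the packaged folder instead):
# after copying the code and the de-symlinked node_modules into packaged/
(cd lambdas/funcA/packaged && zip -r ../funcA.zip .)
aws lambda update-function-code --function-name funcA --zip-file fileb://lambdas/funcA/funcA.zip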

I have used a custom script to copy the dependencies during the install process. This allows me to develop and deploy the application with the same code.
Project structure
In the package.json file of lambda_a, I have the following line:
"scripts": {
  "install": "node ./install_libs.js @libs/library_a"
},
@libs/library_a can be used by the lambda code using the following statement:
const library_a = require('@libs/library_a')
For SAM builds, I use the following command from the lambdas folder:
export SAM_BUILD=true && sam build
install_libs.js
console.log("Starting module installation")
var fs = require('fs');
var path = require('path');
var {exec} = require("child_process");
if (!fs.existsSync("node_modules")) {
fs.mkdirSync("node_modules");
}
if (!fs.existsSync("node_modules/#libs")) {
fs.mkdirSync("node_modules/#libs");
}
const sam_build = process.env.SAM_BUILD || false
libs_path = "../../"
if (sam_build) {
libs_path = "../../" + libs_path
}
process.argv.forEach(async function (val, index, array) {
if (index > 1) {
var currentLib = libs_path + val
console.log(`Building lib ${currentLib}`)
await exec(`cd ${currentLib} && npm install` , function (error, stdout, stderr){
if (error) {
console.log(`error: ${error.message}`);
return;
}
console.log(`stdout: ${stdout}`);
console.log('Importing module : ' + currentLib);
copyFolderRecursiveSync(currentLib, "node_modules/#libs")
});
}
});
function copyFolderRecursiveSync(source, target) {
var files = [];
// Check if folder needs to be created or integrated
var targetFolder = path.join(target, path.basename(source));
if (!fs.existsSync(targetFolder)) {
fs.mkdirSync(targetFolder);
}
// Copy
if (fs.lstatSync(source).isDirectory()) {
files = fs.readdirSync(source);
files.forEach(function (file) {
var curSource = path.join(source, file);
if (fs.lstatSync(curSource).isDirectory()) {
copyFolderRecursiveSync(curSource, targetFolder);
} else {
copyFileSync(curSource, targetFolder);
}
});
}
}
function copyFileSync(source, target) {
var targetFile = target;
// If target is a directory, a new file with the same name will be created
if (fs.existsSync(target)) {
if (fs.lstatSync(target).isDirectory()) {
targetFile = path.join(target, path.basename(source));
}
}
fs.writeFileSync(targetFile, fs.readFileSync(source));
}

Related

Server-side AssemblyScript: How to read a file?

I'd like to write some server-side AssemblyScript that uses the WASI interface to read a file and process the contents.
I know that AssemblyScript and the ByteCode Alliance have recently had a falling out over the "openness" of the WASI standard, but I was hoping that they would still play nicely together...
I've found several AssemblyScript tools/libraries that appear to bridge this gap, and the one that seems the simplest to use is as-wasi. After following the installation instructions, I'm just trying to run the little demo app.
All the VSCode design time errors have disappeared, but the AssemblyScript compiler still barfs at the initial import statement.
import "wasi"
import { Console, Environ } from "as-wasi/assembly";
// Create an environ instance
let env = new Environ();
// Get the HOME Environment variable
let home = env.get("HOME")!;
// Log the HOME string to stdout
Console.log(home);
Running npm run asbuild gives:
$ npm run asbuild
> file_reader@1.0.0 asbuild
> npm run asbuild:debug && npm run asbuild:release
> file_reader@1.0.0 asbuild:debug
> asc assembly/index.ts --target debug
ERROR TS6054: File '~lib/wasi.ts' not found.
:
1 │ import "wasi"
│ ~~~~~~
└─ in assembly/index.ts(1,8)
FAILURE 1 parse error(s)
The file ~lib/wasi.ts does not exist and creating this file as a softlink pointing to the index.ts in the ./node_modules/as-wasi/assembly/ directory makes no difference.
Since the library is called as-wasi and not wasi, I've tried importing as-wasi, but this also fails.
I've also tried adapting tsconfig.json to include
{
  "extends": "assemblyscript/std/assembly.json",
  "include": [
    "../node_modules/as-wasi/assembly/*.ts",
    "./**/*.ts"
  ]
}
But this also has no effect.
What is causing asc to think that the required library should be in the directory called ~lib/ and how should I point it to the correct place?
Thanks
Your question sent me down a bit of a rabbit hole, but I think I solved it.
So, apparently, after the wasi schism, AssemblyScript added the wasi-shim repository, which you have to install as well:
npm install --save @assemblyscript/wasi-shim
The import "wasi" is no longer necessary after version 0.20 of AssemblyScript according to the same page, so you have to remove that import entirely. Also, be sure to add the extends to your asconfig.json, as recommended in the same wasi-shim page. Mine looks like this:
{
  "extends": "./node_modules/@assemblyscript/wasi-shim/asconfig.json",
  "targets": {
    "debug": {
      "outFile": "build/debug.wasm",
      "textFile": "build/debug.wat",
      "sourceMap": true,
      "debug": true
    },
    "release": {
      "outFile": "build/release.wasm",
      "textFile": "build/release.wat",
      "sourceMap": true,
      "optimizeLevel": 3,
      "shrinkLevel": 0,
      "converge": false,
      "noAssert": false
    }
  },
  "options": {
    "bindings": "esm"
  }
}
It is just the generated original asconfig.json plus that extends.
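With that shim in place and the bare import removed, the demo from the question reduces to:
// assembly/index.ts – same demo, minus the `import "wasi"` line
import { Console, Environ } from "as-wasi/assembly";

// Create an environ instance
let env = new Environ();
// Get the HOME environment variable
let home = env.get("HOME")!;
// Log the HOME string to stdout
Console.log(home);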
Now things got interesting. I got a compilation error:
ERROR TS2300: Duplicate identifier 'wasi_abort'.
:
1100 │ export function wasi_abort(
│ ~~~~~~~~~~
└─ in ~lib/as-wasi/assembly/as-wasi.ts(1100,17)
:
19 │ export function wasi_abort(
│ ~~~~~~~~~~
└─ in ~lib/wasi_internal.ts(19,17)
So I investigated, and it seems that as-wasi was exporting a symbol that was the same as a symbol exported by wasi-shim. No biggie: I went into node_modules/as-wasi/ and renamed that function to as_wasi_abort. I did this also with the invocations of the function, namely three instances found in the package.json from as-wasi:
{
  "asbuild:untouched": "asc assembly/index.ts -b build/untouched.wasm -t build/untouched.wat --use abort=as_wasi_abort --debug",
  "asbuild:small": "asc assembly/index.ts -b build/optimized.wasm -t build/optimized.wat --use abort=as_wasi_abort -O3z",
  "asbuild:optimized": "asc assembly/index.ts -b build/optimized.wasm -t build/optimized.wat --use abort=as_wasi_abort -O3"
}
Having done all this, the package compiled and the example from Wasm By Example finally worked.
Your code should compile now, and I will try to make a pull request to all the places necessary so that the examples are updated, the code in as-wasi is updated, and so that nobody has to go through this again. Please comment if there are further problems.
Edit: It seems that I was right about the wasi_abort function being a problem. It is actually removed on the as-wasi repo, but the npm package is outdated. I asked in my pull request for it to be updated.

Using "m1-medium" not recognized as valid by EAS

When I follow these instructions on an M2 MBA using Expo SDK 47.0.13 and EAS CLI 3.5.2 (darwin-arm64) I get
InvalidEasJsonError: eas.json is not valid.
- "build.dev-hardware.resourceClass" must be one of [default, medium]
which seems like a direct contradiction of those instructions. Why isn't the specified value (m1-medium) recognized as valid?
I had the same issue and I solved it by updating my eas-cli at the global level. In my situation I tried updating it with
npm install -g eas-cli
If you've used a different package manager to install eas-cli earlier, like I did, you may need to run the corresponding command. In my case it was
yarn global add eas-cli
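Either way, it's worth verifying which binary and version actually run when you type eas:
which eas        # shows whether the npm- or yarn-installed copy wins on your PATH
eas --version    # should now report the freshly updated CLI version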
Also, it may be worth checking whether your eas.json file has any setting related to the CLI version, like this:
{
  "cli": {
    "version": ">= 3.3.0"
  },
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal",
      "ios": {
        "resourceClass": "m1-medium"
      }
    },
    "production": {
      "ios": {
        "resourceClass": "m1-medium"
      }
    }
  }
}
EDIT
I remembered that when this was happening, whenever I ran eas build the console printed a message suggesting an update of eas-cli via npm install.
I ran the suggested npm install command, but the message was still shown, which led me to believe that yarn was in control of the version of the eas-cli that executes eas build.
This is why I ran yarn global add, which fixed the issue.

Pass globally installed package from step to step

I have a CDK CodePipeline which, simplified, looks something like this:
const pipeline = new CodePipeline(this, 'Pipeline', {
  synth: new CodeBuildStep('InstallStep', {
    commands: ['npm install -g some-package'],
  }),
});
const initStep = new CodeBuildStep(`InitStep`, {
  commands: ['some-package']
})
An npm package is installed globally during the synth step. Is there a way to use it in other build steps without reinstalling it again? I know I can easily pass build artifacts between the steps, but I'm not sure about globally installed things.
Considering each CodeBuild step is a new compute instance, I don't think you can cache global node_modules. You may be able to instead do something like
const initStep = new CodeBuildStep(`InitStep`, {
  commands: ['npx some-package']
})
Using npx to call the package avoids having to have the extra install step, assuming you call it like a cli command.
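If build reproducibility matters, you could also pin the version in that call (package name and version are placeholders):
const initStep = new CodeBuildStep(`InitStep`, {
  // npx downloads and runs the CLI on demand inside this step's own container
  commands: ['npx some-package@1.2.3'],
});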

Vendoring npm packages in deno

How does one vendor an npm package in Deno?
import_map.json:
{
  "imports": {
    "lume/": "https://deno.land/x/lume@v1.12.1/"
  }
}
Lume has some npm dependencies, like https://registry.npmjs.org/markdown-it/-/markdown-it-13.0.0.tgz.
deno.jsonc:
{
  "importMap": "import_map.json"
}
dev_deps.ts:
export * as lume from "https://deno.land/x/lume@v1.12.1/mod.ts";
command:
$ deno vendor --force --unstable dev_deps.ts
# ...
Download https://registry.npmjs.org/markdown-it-attrs/-/markdown-it-attrs-4.1.3.tgz
# ...
thread 'main' panicked at 'Could not find local path
for npm:markdown-it-attrs@4.1.3', cli/tools/vendor/mappings.rs:138:11
I tried adding export * as ma from "npm:markdown-it-attrs"; to dev_deps.ts, but it did nothing.
I found the following issue on GitHub.
Maybe this issue does have something to do with it.
I didn't find anything about how to resolve the problem in the official Deno documentation or the Lume documentation.
Unfortunately, you currently cannot use an import map in your Deno project if your goal is to publish a module that is meant to be used in other applications, simply because you don't control how the Deno runtime will be started.
From the application's point of view, the deno run command cannot search every import_map configuration in your dependencies and handle them properly.
The import_map feature should be used only at the end-application level.
The fallback is to use, by convention, a deps.ts source file to centralize all your dependencies.
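As a sketch of that convention, a deps.ts just pins and re-exports the third-party modules in one place, and the rest of your code imports from it:
// deps.ts – central place to pin and re-export dependencies
export * as lume from "https://deno.land/x/lume@v1.12.1/mod.ts";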

ember-cli-eslint, ember-cli-stylelint to run automatically only if desired

I understand that the purpose of ember-cli-eslint and ember-cli-stylelint is to run automatically.
I am wondering if there is a way to control this behavior.
For example, run ember-cli-eslint and ember-cli-stylelint automatically only if a certain ENVIRONMENT_VARIABLE is set, or maybe via a custom script.
I am wondering if that is possible. A Google search did not provide me with any pointers.
Yes.
For ESLint:
Remove the addon ember-cli-eslint
Install the npm package eslint in your project
ESLint will then run only when you actually run ./node_modules/.bin/eslint .
You should update your package.json's lint:js script as well.
For Stylelint:
Remove the addon ember-cli-stylelint
Install the npm package stylelint in your project
Stylelint will then run only when you actually run ./node_modules/.bin/stylelint
You should update your package.json's lint:css script as well.
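Those scripts might end up looking something like this (adjust the paths to your project):
"scripts": {
  "lint:js": "eslint .",
  "lint:css": "stylelint app/styles"
}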
As suggested by @Turbo87 at https://github.com/ember-cli/ember-cli-eslint/issues/333 I have updated ember-cli-build.js like so:
const blacklist = [];
if (process.env.DISABLE_AUTO_LINT) {
  blacklist.push('ember-cli-eslint', 'ember-cli-stylelint');
}
let app = new EmberApp(defaults, {
  addons: { blacklist },
});
And it works as desired.
A simplified package.json scripts section looks something like this:
"scripts": {
"eslint": "eslint .",
"stylelint": "stylelint app/styles",
"lint": "npm run eslint && npm run stylelint",
"start": "DISABLE_AUTO_LINT=true ember serve",
"test": "npm run lint --silent && DISABLE_AUTO_LINT=true ember exam --split=10 --parallel",
}
ember serve functions as usual.