AWS CDK: Docker used for Lambda bundling instead of esbuild (Debian WSL2)

I'm having issues with the CDK while trying to bundle Lambdas with esbuild in my Debian WSL2 environment.
esbuild is installed both as a global npm package and in the devDependencies of my CDK project.
node --version
v14.16.0
cdk --version
1.95.1
esbuild --version
0.11.2
Example of a Lambda definition:
lex_create_bot = _lambda_node.NodejsFunction(
    self,
    id="lambda-lex-create-bot",
    entry="lambdas_fns/lex_create_bot/lex-create-bot.ts",
    handler="handler",
    runtime=_lambda.Runtime.NODEJS_14_X,
    bundling={"minify": True},
)
Every time I try to deploy or check a diff, the CDK bundles the Lambdas with Docker instead of esbuild.
I had been working on this stack for a while and everything was fine until I switched from remote-container to WSL2 to manage my dev environment in VS Code.
Docker is really slow for bundling, and it produces diffs for already-deployed Lambdas whose code has not changed.
Any idea how to solve this?
EDIT
Same issue with Ubuntu 20.04 on WSL2.

I upgraded to CDK 1.97.0 and esbuild 0.11.5 this morning, and everything is working now.
It's still strange behavior that I'd like to avoid in the future; if anyone has a more generic solution to this problem, I'd welcome it.
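For reference, pinning esbuild explicitly in the CDK project's devDependencies is what lets the NodejsFunction construct use local bundling instead of Docker. A minimal package.json fragment (the version shown is the one from this question; treat it as illustrative):

```json
{
  "devDependencies": {
    "esbuild": "0.11.5"
  }
}
```

The construct is documented to fall back to Docker bundling when it cannot find a usable esbuild install, so a local project install (not only a global one) is the safer setup.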

Related

Serverless: Running "serverless" installed locally (in service node_modules)

I am trying to deploy a service to AWS using Serverless, through GitLab CI/CD instead of locally. Initially I had not pinned any specific Serverless version, but when I pushed my code to GitLab I got errors in the pipeline because the latest version was not stable, so I had to pin a stable version. Now when I push my code changes to GitLab, my deployment fails with:
Serverless Error ----------------------------------------
Cannot run local installation of the Serverless Framework by the outdated global version. Please upgrade via:
npm install -g serverless
Note: Latest release can run any version of the locally installed Serverless Framework.
I don't want to upgrade my Serverless version.
In my gitlab-ci.yml I changed
- npm install -g serverless
to this
- npm install -g serverless@2.69.1
Is there any way I can fix this?
Any help would be appreciated, thank you.
In your case, the most likely reason is that you have a local installation of Serverless, or some of your plugins or other dependencies list Serverless v3 in their peer dependencies, which npm@7 and higher installs by default.
To resolve it, either remove the local installation or pin the version of the locally installed Serverless (in the devDependencies of your project's package.json).
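Pinning it in package.json looks like this (the version is chosen to match the one in the question; adjust to your project):

```json
{
  "devDependencies": {
    "serverless": "2.69.1"
  }
}
```

With the pin in place, CI runs the local binary via npx serverless or ./node_modules/.bin/sls, regardless of what is installed globally.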
./node_modules/.bin/sls deploy does the trick.
However, the proper answer is in the docs:
There are 2 scenarios:
Using v3 globally, and v2 in specific projects.
This is the simplest. Upgrade the global version to v3, and install v2 in specific projects (via NPM). The serverless command will automatically run the correct version (v3 can run v2).
Using v2 globally, and v3 in specific projects.
To achieve that, install v3 in specific projects (via NPM). Then, use serverless for v2 projects, and npx serverless for v3 projects.
https://www.serverless.com/framework/docs/guides/upgrading-v3#using-v2-and-v3-in-different-projects

ECS scheduled task with specific Fargate platform version (on CDK)

I'm trying to schedule a task on ECS (running on Fargate), via CDK. When it runs, my task uses the LATEST platform version (which is currently 1.3.0, despite 1.4.0 having been released). I'd like to use version 1.4.0, but I can't see a way to specify that in the CDK objects - it only seems to be supported when creating a FargateService, whereas I'm using a FargateTaskDefinition. Does anyone know if this is possible? (It's possible via the direct API, and via the console).
You have to upgrade your aws-cdk first:
npm install -g aws-cdk
Then upgrade the package.json:
npx npm-check -u
The latest "@aws-cdk/aws-ecs" (1.72) allows you to specify the platform version in an ECS task:
new EcsTask({
  ...
  platformVersion: FargatePlatformVersion.VERSION1_4
})

Running AWS SAM projects locally gives an error

I am trying to run an AWS Lambda project locally on Ubuntu. When I run the project with AWS SAM Local it shows me this error: Error: Running AWS SAM projects locally requires Docker. Have you got it installed?
I had trouble installing it on Fedora.
When I followed the Docker postinstall instructions I managed to get past this issue.
https://docs.docker.com/install/linux/linux-postinstall/
I had to:
Delete the ~/.docker directory;
Create the "docker" group;
Add my user to the "docker" group;
Logout and back in again;
Restart the "docker" daemon.
I was then able to run the command:
sam local start-api
If you want to run the SAM CLI locally, you first have to install Docker from the official Docker website, then run sudo sam local start-api. Note that sudo is needed to run it with the required privileges.
This error mostly arises from lacking the privileges to use Docker. Just add sudo to your command and it will work.
eg: sudo sam local start-api --region eu-west-3
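A quick way to tell whether your account already has non-root access to the Docker daemon is to check your group membership (the "docker" group name comes from Docker's post-install docs):

```shell
# Does the current user belong to the "docker" group?
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group: yes (no sudo needed)"
else
  echo "docker group: no (use sudo or follow the post-install steps)"
fi
```

If the group is missing, the post-install steps quoted above (create the group, add your user, log out and back in) remove the need for sudo.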
We are working on Mac and were seeing the same message when using an older version of Docker (1.12.6). We have since updated to a newer (but not latest) version, 17.12.0-ce-mac49, and it is now fine.
Another cause for this is this recent issue within Docker for Mac.
A quick workaround, as specified in the issue itself, is to run SAM with:
$ DOCKER_HOST=unix://$HOME/.docker/run/docker.sock sam local start-api
You don't need to run SAM as root.
I am using Colima for Docker on a Mac with an Intel chip and faced this error. I was able to resolve it by adding DOCKER_HOST to my ~/.zshrc file:
export DOCKER_HOST="unix://$HOME/.colima/docker.sock"
Then reload your shell (for example, source ~/.zshrc).
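To confirm what the docker CLI will actually see, you can echo the variable and check that the socket exists (the path below is Colima's default; adjust it if you use a named profile):

```shell
# Colima's default socket path; adjust if you configured a profile.
export DOCKER_HOST="unix://$HOME/.colima/docker.sock"
echo "$DOCKER_HOST"   # the docker CLI and sam both honor this variable

# Sanity check: the socket only exists while colima is running.
if [ -S "$HOME/.colima/docker.sock" ]; then
  echo "socket present"
else
  echo "socket missing (is colima started?)"
fi
```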

Script works on AWS EC2, but not on AWS Lambda after zipping

I am creating a simple AWS Lambda function using M2Crypto library. I followed the steps for creating deployment package from here. The lambda function works perfectly on an EC2 Linux instance (AMI).
This is my Function definition:
CloudOAuth.py
from M2Crypto import BIO, RSA, EVP

def verify(event, context):
    pem = "-----BEGIN PUBLIC KEY-----\n{0}\n-----END PUBLIC KEY-----".format("hello")
    bio = BIO.MemoryBuffer(str.encode(pem))
    print(bio)
    return
Deployment Package structure:
When I run the Lambda, I get the following issue. I also tried including libcrypto.so.10 from the /lib64 directory, but it didn't help.
Issue when running Lambda
/var/task/M2Crypto/_m2crypto.so: symbol sk_deep_copy, version libcrypto.so.10 not defined in file libcrypto.so.10 with link time reference
Python: 2.7
M2Crypto: 0.27.0
I would guess that M2Crypto was built with a different version of OpenSSL than the one on Lambda. See the relevant code. If not (the upstream maintainer speaking here), please file a bug at https://gitlab.com/m2crypto/m2crypto/issues
I just want to add some more details to @mcepl's answer. The most important point is that the OpenSSL version on AWS Lambda and in the environment where you build your M2Crypto library (in my case EC2) must match.
To check openssl version on Lambda, use print in your handler:
print(ssl.OPENSSL_VERSION)
To check openssl version on your build environment, use:
$ openssl version
Once they match, it works.
Don't hesitate to downgrade or upgrade OpenSSL in your build environment to match the Lambda environment; I had to downgrade OpenSSL on EC2 to match the Lambda runtime:
sudo yum -y downgrade openssl-devel-1.0.1k openssl-1.0.1k
Hope it will help anyone trying to use M2Crypto :)
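The version check described above can be wrapped in a tiny snippet that runs unchanged on both the build host and inside a Lambda handler (only the standard-library ssl module is used; no Lambda-specific API is assumed):

```python
import ssl

# Print the OpenSSL this Python interpreter is linked against.
# Run it on the build machine and inside the Lambda handler, then
# compare the outputs; binary modules such as M2Crypto's
# _m2crypto.so only load cleanly when they match.
print(ssl.OPENSSL_VERSION)        # human-readable, e.g. "OpenSSL 1.0.1k-fips ..."
print(ssl.OPENSSL_VERSION_INFO)   # numeric tuple, convenient for comparisons
```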
Copying my answer from a similar question here:
AWS Lambda runs code on an old version of Amazon Linux (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2), as mentioned in the official documentation:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
So code that depends on shared libraries needs to be compiled in the same environment so it links correctly.
What I usually do in such cases is create a virtualenv using a Docker container. The virtualenv can then be packaged with the Lambda code.
Please note that if you need to install anything using yum (in the Docker container), you must use the same release server as the Amazon Linux version:
yum --releasever=2017.03 install ...
The virtualenv can be built on an EC2 instance as well instead of a Docker container (though I find the Docker method easier). Just make sure the AMI used for the EC2 instance is the same as the one used by Lambda.
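The Docker-based build this answer describes can be sketched as a Dockerfile. The amazonlinux image tag mirrors the AMI generation quoted above, but the exact yum package names here are illustrative assumptions — check them against your runtime:

```dockerfile
# Build environment matching the Lambda runtime's Amazon Linux release.
FROM amazonlinux:2017.03

# Use the same release server as the Amazon Linux version (see above);
# package names are assumptions for a Python 2.7 / M2Crypto build.
RUN yum --releasever=2017.03 install -y gcc openssl-devel python27-pip

# Install dependencies into a directory that gets zipped with the code.
RUN pip install M2Crypto -t /build/package
```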

Tag lts-3.1 not found when pulling a fresh Yesod scaffold with stack docker

I'm trying to deploy a simple Yesod web site using stack's Docker support.
My steps:
stack yesod init ... stack exec -- yesod devel works fine.
export DOCKER_HOST=myhost and test docker info runs ok.
add docker: \n enable: true to stack.yaml.
Then it fails:
$ stack docker pull
Pulling image from registry: 'fpco/stack-build:lts-3.1'
Pulling repository docker.io/fpco/stack-build
Tag lts-3.1 not found in repository docker.io/fpco/stack-build
Could not pull Docker image:
fpco/stack-build:lts-3.1
There may not be an image on the registry for your resolver's LTS version in stack.yaml.
I'm using
$ stack exec -- ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.10.2
I know
Not every LTS version is guaranteed to have an image existing, and new
LTS images tend to lag behind the LTS snapshot being published on
stackage.org. Be warned: these images are rather large!
My first goal is to use stack docker and to find out whether I'm doing something wrong.
Thank you!
For the moment, use a resolver setting in your stack.yaml that matches one of the available tags, such as resolver: lts-2.22 (see https://hub.docker.com/r/fpco/stack-build/tags/ for a list). I'm working on LTS 3.x images, but have run into some trouble building all the packages in it, and debugging has gone slowly due to how long it takes to build all of Stackage.
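Concretely, the workaround is a stack.yaml like the following (the resolver value is taken from the answer; check the linked tag list for what is currently published):

```yaml
# stack.yaml
resolver: lts-2.22   # must match an existing fpco/stack-build tag
docker:
  enable: true
```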