I am new to AWS CDK. I imported aws_stepfunctions_tasks from aws_cdk.aws_stepfunctions_tasks, following the docs here: https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_stepfunctions_tasks/DynamoPutItem.html
but it raises an import error. All the other imports I use work fine. I also tried installing the package through pip with several different versions, including the one I want, but I still get the error attached below. The code is in my stack file. The same issue occurred earlier when I was using s3_deploy, and I never found a solution for it. Can someone please help with this issue?
CDK dependencies have changed between V1 and the recently released CDK V2. Make sure you are not mixing V1 and V2 dependencies. Here is a Python example for both versions from aws-samples:
CDK V2
requirements.txt
aws-cdk-lib>=2.0.0
constructs>=10.0.0
app.py
from constructs import Construct
from aws_cdk import (
    App, Stack,
    aws_lambda as _lambda,
    aws_apigateway as _apigw
)
CDK V1
requirements.txt
aws-cdk.core
aws-cdk.aws_lambda
aws-cdk.aws_apigateway
app.py
from aws_cdk import (
    core,
    aws_lambda as _lambda,
    aws_apigateway as _apigw
)
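For the module in the question, the V2 pattern would look like the following; a minimal sketch, assuming aws-cdk-lib>=2.0.0 is installed (MyStack is a placeholder name):
from constructs import Construct
from aws_cdk import (
    Stack,
    aws_stepfunctions_tasks as tasks,  # in V2 this module ships inside aws_cdk
)

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # tasks.DynamoPutItem(...) is now available without a separate pip package
Under V1 you would instead pip-install aws-cdk.aws-stepfunctions-tasks as its own package.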
I know this question has been asked many times before. After days of struggle, I have narrowed the issue down to this:
The code runs perfectly locally via python .\test.py, but fails while running in the sam local invoke container.
AWS Lambda supports Python 3.9, so I tried to figure out a dependency combination that works for these imports:
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter
from sktime.transformations.series.outlier_detection import HampelFilter
from sklearn.linear_model import LinearRegression
from scipy.signal import find_peaks, peak_prominences
The code runs locally without problems but fails to run in the container.
Please help.
I figured out that sktime supports Python 3.9 but depends on numpy>=1.21.0.
On the other hand, no matter what I tried, I couldn't figure out which dependency versions would work on AWS Python 3.9.
I tried installing from wheels and tried different versions of numpy, but it is always the same error.
I ended up limiting the requirements like so:
numpy==1.21.0
pandas
sktime
This installed what was needed.
Then I confirmed again that I can run it locally with no issues.
So the problem must be with the container that sam local invoke runs.
Note that I could use the AWS layer for pandas on Python 3.9, but I also need sktime, and that didn't work for me (not to mention the 250 MB upload limit, a struggle I leave for later).
OK, I now have a better understanding of the whole issue. Here is what I have learned:
When you install Python for AWS work, install the version Lambda supports (3.9 as of now). That way everything installed from requirements.txt is built for Python 3.9, so all installations should match without any issues.
Still, running locally did not work for me, and I do not know why.
One solution involves attaching the Lambda to EFS, which requires many steps and manual work. You don't want to go there.
The solution for me was using Lambda container images. This was easier than I expected. The steps include some Docker configuration and pointing the SAM template (the yaml) at the Dockerfile; 'sam deploy' then publishes a Docker image for you to the Elastic Container Registry (ECR).
See:
Creating Lambda container images
Using container image support for AWS Lambda with AWS SAM
The first link leads on to further supporting links within the AWS documentation.
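For reference, here is a minimal Dockerfile sketch of the kind those guides walk through, assuming the handler is lambda_handler in a file called app.py (both placeholder names):
FROM public.ecr.aws/lambda/python:3.9
# install the heavy scientific stack inside the image instead of a zip bundle
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the handler module into the Lambda task root
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.lambda_handler"]
The SAM template then sets PackageType: Image on the function and points its Metadata at this Dockerfile; sam deploy builds and pushes the image to ECR.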
Hope it will help someone.
I just started playing around with AWS CDK yesterday and I found something very weird.
First of all, I'm using TypeScript for my CDK app (I used cdk init --language typescript to generate the project files), and I tried to import the aws-ec2 module. This is what I did:
import cdk = require('@aws-cdk/core');
import ec2 = require('@aws-cdk/aws-ec2');

export class vpcStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // ... all other code goes here ...
  }
}
However, when importing the aws-ec2 module this way, I got this error when trying to deploy the stack:
Unable to compile TypeScript:
lib/cdk-type_script-stack.ts:2:22 - error TS2307: Cannot find module '@aws-cdk/aws-ec2'.
2 import ec2 = require('@aws-cdk/aws-ec2');
                       ~~~~~~~~~~~~~~~~~~
Subprocess exited with error 1
This is very weird because the API docs right here clearly state that this is how I should import the aws-ec2 module in TypeScript.
You need to install the node package before you can import and use it.
Run the following on the command line to install the npm package for aws-cdk:
npm i @aws-cdk/aws-ec2
npm install (to install the libraries)
npm run build (to compile your code)
After that, you can run:
cdk synth
cdk deploy
You may have a version of npm that is incompatible with the version of @aws-cdk/pipelines, as explained here: https://github.com/aws/aws-cdk/issues/13541#issuecomment-801606777
In addition to @juned-ashan's answer, verify that you are installing the module version that corresponds to your CDK version (and to the other CDK modules installed).
For example:
$ npm install --save @aws-cdk/aws-ec2@1.10.0
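For instance, a package.json sketch keeping both modules pinned to the same version (1.10.0 here, matching the command above):
"dependencies": {
    "@aws-cdk/core": "1.10.0",
    "@aws-cdk/aws-ec2": "1.10.0"
}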
Note: not enough points to add this as a comment in Juned's answer.
The same code works on my local machine; however, I get the error below when I try to test it in AWS Lambda:
Unable to import module 'lambda_function': Missing required dependencies ['numpy']
You need to download the packages from pypi.org and include them in the zip file, so that it contains both your code (in a .py file) and the packages. Find a more detailed description here: https://www.protos-technologie.de/en/2020/07/02/dependency-management-for-aws-lambda/
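A minimal sketch of that packaging step on the command line, assuming the handler lives in lambda_function.py:
pip install numpy -t package/
cd package
zip -r ../deployment.zip .
cd ..
zip deployment.zip lambda_function.py
The -t flag makes pip install into the package/ directory, so the libraries end up next to your handler inside the zip you upload.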
I want to use sklearn on AWS Lambda. sklearn depends on scipy (173 MB) and numpy (75 MB). The combined size of these packages exceeds the AWS Lambda disk space limit of 256 MB.
How can I use sklearn on AWS Lambda?
This guy gets it down to 40MB, though I have not tried it myself yet.
The relevant Github repo.
There are two ways to do this:
1) installing the modules dynamically
2) AWS Batch
1) Installing the modules dynamically
import subprocess
import sys

def lambda_handler(event, context):
    # install numpy into /tmp, the only writable path in Lambda (assumes pip is available)
    subprocess.check_call([sys.executable, "-m", "pip", "install", "numpy", "--target", "/tmp/pkgs"])
    sys.path.insert(0, "/tmp/pkgs")
    import numpy as np
    # ... numpy code ...
    # then uninstall numpy and run the scipy install and scipy code the same way
or vice versa, depending on your code
2) Using AWS Batch
This is the best way, since you have no limitation on disk space.
You just need to build a Docker image and list all the required packages and libraries in the requirements.txt file, as in the sketch below.
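A minimal sketch of such an image, assuming the job's entry point is a file called job.py (a placeholder name):
FROM python:3.9-slim
# requirements.txt lists numpy, scipy, scikit-learn, and anything else needed
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY job.py .
CMD ["python", "job.py"]
AWS Batch runs the container, so the Lambda package size limits do not apply.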
I wanted to do the same, and it was very difficult indeed. I ended up buying this layer that includes scikit-learn, pandas, numpy and scipy.
https://www.awslambdas.com/layers/3/aws-lambda-scikit-learn-numpy-scipy-python38-layer
There is another layer that includes xgboost as well.
I'm trying to run a Go script as part of the build process. The script imports a 'custom' package; however, I get this import error.
The repository name is bisrepo. The script I run is configbis.go. The package imported by configbis.go is mymodule.
The project structure is as follows:
bisrepo
├── mymodule
└── configbis.go
go run configbis.go
configbis.go:16:2: cannot find package "bisrepo/mymodule" in any of:
/home/travis/.gvm/gos/go1.1.2/src/pkg/bisrepo/mymodule (from $GOROOT)
/home/travis/.gvm/pkgsets/go1.1.2/global/src/bisrepo/mymodule (from $GOPATH)
I've tried to import mymodule in configbis.go as follows:
import "mymodule"
import "bisrepo/mymodule"
import "github.com/user/bisrepo/mymodule"
None of them works. I've run out of ideas/options...
I read the travis-ci documentation and found it useless.
You could try adding something like this in your .travis.yml:
install:
- go get github.com/user/bisrepo/mymodule
In order to use private repos you must provide a GitHub API auth token (similarly to deploying Go projects which reference private repos on Heroku). You can try adding something like this in your .travis.yml:
before_install:
- echo "machine github.com login $GITHUB_AUTH_TOKEN" > ~/.netrc