While deploying to AWS Lambda I get the following error:
Error: `python.exe -m pip install -t C:/Users/asma/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/fa6c9f84e92253cbebe2f17deb9708a48dc1d1d7bff853c13add0f8197336d73_x86_64_slspyc -r C:/Users/asma/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/fa6c9f84e92253cbebe2f17deb9708a48dc1d1d7bff853c13add0f8197336d73_x86_64_slspyc/requirements.txt --cache-dir C:\Users\asma\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\downloadCacheslspyc` Exited with code 1
at ChildProcess.<anonymous> (D:\serverless project\serverless framework\timezone789\time789\venv\node_modules\child-process-ext\spawn.js:38:8)
at ChildProcess.emit (node:events:527:28)
at ChildProcess.emit (node:domain:475:12)
at ChildProcess.cp.emit (D:\serverless project\serverless framework\timezone789\time789\venv\node_modules\cross-spawn\lib\enoent.js:34:29)
at maybeClose (node:internal/child_process:1092:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)
###########
My serverless.yml file is as follows:
org: sayyedaasma
app: zones
service: time789
frameworkVersion: '3'

provider:
  name: aws
  runtime: python3.8

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /
          method: get

plugins:
  - serverless-python-requirements
############# requirements.txt file is as follows:
pytz==2022.1
############# handler.py file
import json
import pytz


def lambda_handler(event, context):
    l1 = []
    # print('Timezones')
    for timeZone in pytz.all_timezones:
        l1.append(timeZone)
    return {
        'statusCode': 200,
        'body': json.dumps(l1)
    }
I have followed this tutorial:
https://www.serverless.com/blog/serverless-python-packaging/
I don't understand the cause of this issue since I am a beginner. Can anyone here please guide me?
Similar issues are being reported here:
https://github.com/serverless/serverless-python-requirements/issues/663
Try this:
https://github.com/serverless/serverless-python-requirements/issues/663#issuecomment-1131211339
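If the linked workaround doesn't apply, a commonly suggested alternative (not necessarily what the linked comment proposes) is to let serverless-python-requirements build the dependencies inside Docker, which sidesteps Windows-specific pip problems. A minimal sketch, assuming Docker is installed and running:

custom:
  pythonRequirements:
    dockerizePip: true

Then redeploy with sls deploy.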
You may try running these tutorials on either a Cloud9 console or CloudShell; you will face far fewer issues in an AWS environment.
I followed this tutorial: https://docs.gitlab.cn/14.0/ee/user/project/clusters/serverless/aws.html#serverless-framework
Created a function in AWS Lambda called create-promo-animation
Created a /src/handler.js
"use strict";
module.exports.hello = async (event) => {
return {
statusCode: 200,
body: JSON.stringify(
{
message: "Your function executed successfully!",
},
null,
2
),
};
};
Created .gitlab-ci.yml
stages:
  - deploy

production:
  stage: deploy
  before_script:
    - npm config set prefix /usr/local
    - npm install -g serverless
  script:
    - serverless deploy --stage production --verbose
  environment: production
Created serverless.yml
service: gitlab-example

provider:
  name: aws
  runtime: nodejs14.x

functions:
  create-promo-animation:
    handler: src/handler.hello
    events:
      - http: GET hello
I pushed to GitLab and the pipeline ran well, but the code is not updating in AWS. Why?
I'm trying to dynamically pass options to resolve when deploying my functions with Serverless, but they always resolve to null or hit the fallback.
custom:
  send_grid_api: ${opt:sendgridapi, 'missing'}
  SubscribedUsersTable:
    name: !Ref UsersSubscriptionTable
    arn: !GetAtt UsersSubscriptionTable.Arn
  bundle:
    linting: false

provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: us-west-2
  environment:
    STAGE: ${self:provider.stage}
    SEND_GRID_API_KEY: ${self:custom.send_grid_api}
I've also tried:
environment:
  STAGE: ${self:provider.stage}
  SEND_GRID_API_KEY: ${opt:sendgridapi, 'missing'}
Both yield 'missing', but why? The deploy command is:
sls deploy --stage=prod --sendgridapi=xxx
It also fails if I try a space instead of =.
Edit: Working Solution
In my GitHub Actions template, I defined the following:
- name: create env file
  run: |
    touch .env
    echo SEND_GRID_API_KEY=${{ secrets.SEND_GRID_KEY }} >> .env
    ls -la
    pwd
In addition, I explicitly set the working directory for this stage like so:
working-directory: /home/runner/work/myDir/myDir/
In my serverless.yml I added the following:
environment:
  SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
sls will read the contents of the file and load them properly.
opt is for Serverless' own CLI options; these are part of the framework, not options you define yourself.
You can instead use...
provider:
  ...
  environment:
    ...
    SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
And pass the value as an environment variable in your deploy step.
- name: Deploy
  run: sls deploy --stage=prod
  env:
    SEND_GRID_API_KEY: "insert api key here"
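At runtime the function can then read the value from its environment; a minimal sketch of the function side (Node.js, matching the nodejs12.x runtime in the question):

// Environment variables declared under provider.environment in
// serverless.yml are exposed to the running Lambda via process.env.
const sendGridApiKey = process.env.SEND_GRID_API_KEY;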
I am trying to deploy my serverless project locally with LocalStack and the serverless-localstack plugin. When I try to deploy it with serverless deploy, it throws an error and fails to create the CloudFormation stack. However, I can create the same stack when I deploy the project to the real AWS environment. What is the possible issue here? I have checked the answers to all previous questions on similar issues; nothing seems to work.
docker-compose.yml
version: "3.8"
services:
  localstack:
    container_name: "serverless-localstack_main"
    image: localstack/localstack
    ports:
      - "4566-4597:4566-4597"
    environment:
      - AWS_DEFAULT_REGION=eu-west-1
      - EDGE_PORT=4566
      - SERVICES=lambda,cloudformation,s3,sts,iam,apigateway,cloudwatch
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
serverless.yml
service: serverless-localstack-test
frameworkVersion: '2'

plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    stages:
      - local
    endpointFile: config.json

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: local
  region: eu-west-1
  deploymentBucket:
    name: deployment

functions:
  hello:
    handler: handler.hello
config.json (which has the endpoints)
{
  "CloudFormation": "http://localhost:4566",
  "CloudWatch": "http://localhost:4566",
  "Lambda": "http://localhost:4566",
  "S3": "http://localhost:4566"
}
Error in the LocalStack container:
serverless-localstack_main | 2021-06-04T17:41:49:WARNING:localstack.utils.cloudformation.template_deployer: Error calling
<bound method ClientCreator._create_api_method.<locals>._api_call of
<botocore.client.Lambda object at 0x7f31f359a4c0>> with params: {'FunctionName':
'serverless-localstack-test-local-hello', 'Runtime': 'nodejs12.x', 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole',
'Handler': 'handler.hello', 'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Timeout': 6,
'MemorySize': 1024} for resource: {'Type': 'AWS::Lambda::Function', 'Properties':
{'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Handler':
'handler.hello', 'Runtime': 'nodejs12.x', 'FunctionName': 'serverless-localstack-test-
local-hello', 'MemorySize': 1024, 'Timeout': 6, 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole'},
'DependsOn': ['HelloLogGroup'], 'LogicalResourceId': 'HelloLambdaFunction',
'PhysicalResourceId': None, '_state_': {}}
I fixed that problem using this plugin: https://www.serverless.com/plugins/serverless-deployment-bucket
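For reference, enabling that plugin is just a matter of adding it to the plugins list in serverless.yml; a sketch based on the config above:

plugins:
  - serverless-localstack
  - serverless-deployment-bucket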
You need to make some adjustments in your files.
Update your docker-compose.yml; use the reference Docker Compose file from LocalStack (you can check it here).
Use a template that works correctly; the AWS docs have several examples (you can check it here).
Run it with the following command:
aws cloudformation create-stack --endpoint-url http://localhost:4566 --stack-name samplestack --template-body file://lambda.yml --profile dev
You can also run LocalStack using Python with the following commands:
pip install localstack
localstack start
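Once the container is up, you can sanity-check that the edge endpoint responds. One option (assuming you also install the separate awscli-local wrapper, which routes AWS CLI calls to localhost:4566):

pip install awscli-local
awslocal cloudformation list-stacks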
File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: libpq.so.5: cannot open shared object file: No such file or directory
I'm receiving the following error after deploying a Django application via Serverless, when the application is deployed via our Bitbucket Pipeline.
Here's the Pipeline:
- step: &Deploy-serverless
    caches:
      - node
    image: node:11.13.0-alpine
    name: Deploy Serverless
    script:
      # Initial Setup
      - apk add curl postgresql-dev gcc python3-dev musl-dev linux-headers libc-dev
      # Load our environment.
      ...
      - apk add python3
      - npm install -g serverless
      # Set Pipeline Variables
      ...
      # Configure Serverless
      - cp requirements.txt src/requirements.txt
      - printenv > src/.env
      - serverless config credentials --provider aws --key ${AWS_KEY} --secret ${AWS_SECRET}
      - cd src
      - sls plugin install -n serverless-python-requirements
      - sls plugin install -n serverless-wsgi
      - sls plugin install -n serverless-dotenv-plugin
Here's the Serverless File:
service: serverless-django

plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-dotenv-plugin

custom:
  wsgi:
    app: arc.wsgi.application
    packRequirements: false
  pythonRequirements:
    dockerFile: ./serverless-dockerfile
    dockerizePip: non-linux
    pythonBin: python3
    useDownloadCache: false
    useStaticCache: false

provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource: "arn:aws:s3:::*"

functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: "ANY {proxy+}"
    timeout: 60
Here's the Dockerfile:
FROM lambci/lambda:build-python3.7
RUN yum install -y postgresql-devel python-psycopg2 postgresql-libs
And here's the requirements:
amqp==2.6.1
asgiref==3.3.1
attrs==20.3.0
beautifulsoup4==4.9.3
billiard==3.6.3.0
boto3==1.17.29
botocore==1.20.29
celery==4.4.7
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
coverage==5.5
Django==3.1.7
django-cachalot==2.3.3
django-celery-beat==2.2.0
django-celery-results==2.0.1
django-filter==2.4.0
django-google-analytics-app==5.0.2
django-redis==4.12.1
django-timezone-field==4.1.1
djangorestframework==3.12.2
Djaq==0.2.0
drf-spectacular==0.14.0
future==0.18.2
idna==2.10
inflection==0.5.1
Jinja2==2.11.3
joblib==1.0.1
jsonschema==3.2.0
kombu==4.6.11
livereload==2.6.3
lunr==0.5.8
Markdown==3.3.4
MarkupSafe==1.1.1
mkdocs==1.1.2
nltk==3.5
psycopg2-binary==2.8.6
pyrsistent==0.17.3
python-crontab==2.5.1
python-dateutil==2.8.1
python-dotenv==0.15.0
pytz==2021.1
PyYAML==5.4.1
redis==3.5.3
regex==2020.11.13
requests==2.25.1
sentry-sdk==1.0.0
six==1.15.0
soupsieve==2.2
sqlparse==0.4.1
structlog==21.1.0
tornado==6.1
tqdm==4.59.0
uritemplate==3.0.1
urllib3==1.26.3
uWSGI==2.0.19.1
vine==1.3.0
Werkzeug==1.0.1
And here are the database settings:
# Database Definitions
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "HOST": load_env("PSQL_HOST", "127.0.0.1"),
        "NAME": load_env("PSQL_DATABASE", ""),
        "PASSWORD": load_env("PSQL_PASSWORD", ""),
        "USER": load_env("PSQL_USERNAME", ""),
        "PORT": load_env("PSQL_PORT", "5432"),
        "TEST": {
            "NAME": "arc_unittest",
        },
    },
}
I'm at a loss as to what exactly the issue is. Thoughts?
File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2._psycopg'
I receive this similar error when deploying locally.
In my case, I needed to replace the psycopg2-binary with aws-psycopg2 to be Lambda friendly.
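For reference, the swap in requirements.txt would look like this (aws-psycopg2 is deliberately left unpinned here; pin whichever version matches your setup):

# psycopg2-binary==2.8.6   <- removed
aws-psycopg2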
Apparently it is a dependency error; on Debian-based distributions you solve it by installing the libpq-dev package.
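For instance, on a Debian/Ubuntu build host:

sudo apt-get install -y libpq-dev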
You can also use a Lambda layer and pick one already created in this repo.
At the bottom of your Lambda console, there is a "Layers" section where you can click "Add Layer" and specify the corresponding ARN based on your Python version and AWS region.
It will also have the benefit of reducing your package size.
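Since this question deploys with the Serverless Framework, the console step has a serverless.yml equivalent: attach the layer ARN to the function. A sketch; the ARN below is a placeholder, substitute the real one for your region and Python version from the repo:

functions:
  app:
    handler: wsgi.handler
    layers:
      # Placeholder ARN - replace with the psycopg2 layer for your region/runtime
      - arn:aws:lambda:us-east-1:123456789012:layer:psycopg2:1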
I am trying to run an AWS SDK Python script using GitHub Actions. GitHub Actions installs Boto3 successfully, but the script fails with the following error:
error:
Run python3 s3.py
Traceback (most recent call last):
File "s3.py", line 5, in <module>
response = s3.list_buckets()
File "/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/botocore/client.py",
File "/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/botocore/signers.py", line 162, in sign
auth.add_auth(request)
File "/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Error: Process completed with exit code 1.
Implemented so far:
ListS3bucket.yml
name: Python package

on: [push]

jobs:
  deploy:
    name: sample code
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v2 # checkout the repository content to github runner.
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Upgrade PIP
        run: |
          /opt/hostedtoolcache/Python/3.8.6/x64/bin/python -m pip install --upgrade pip
      - name: install boto3
        run: |
          python -m pip install boto3
      - name: execute script
        env:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
        run: |
          python3 s3.py
s3.py
import boto3

# Retrieve the list of existing buckets
s3 = boto3.client('s3')
response = s3.list_buckets()

# Output the bucket names
print('Existing buckets:')
for bucket in response['Buckets']:
    print(f'  {bucket["Name"]}')
Kindly guide me on this.
Based on the comments, the solution was that the environment variable names were incorrect. The correct ones are AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, as shown in the docs.
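Applied to the workflow above, the execute-script step would become (a sketch using the corrected variable names):

- name: execute script
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
  run: |
    python3 s3.py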