File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: libpq.so.5: cannot open shared object file: No such file or directory
I'm receiving the above error after deploying a Django application via Serverless, when I deploy the application through our Bitbucket Pipeline.
Here's the Pipeline:
- step: &Deploy-serverless
    caches:
      - node
    image: node:11.13.0-alpine
    name: Deploy Serverless
    script:
      # Initial Setup
      - apk add curl postgresql-dev gcc python3-dev musl-dev linux-headers libc-dev
      # Load our environment.
      ...
      - apk add python3
      - npm install -g serverless
      # Set Pipeline Variables
      ...
      # Configure Serverless
      - cp requirements.txt src/requirements.txt
      - printenv > src/.env
      - serverless config credentials --provider aws --key ${AWS_KEY} --secret ${AWS_SECRET}
      - cd src
      - sls plugin install -n serverless-python-requirements
      - sls plugin install -n serverless-wsgi
      - sls plugin install -n serverless-dotenv-plugin
Here's the Serverless File:
service: serverless-django

plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-dotenv-plugin

custom:
  wsgi:
    app: arc.wsgi.application
    packRequirements: false
  pythonRequirements:
    dockerFile: ./serverless-dockerfile
    dockerizePip: non-linux
    pythonBin: python3
    useDownloadCache: false
    useStaticCache: false

provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:GetObject
        - s3:PutObject
      Resource: "arn:aws:s3:::*"

functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: "ANY {proxy+}"
    timeout: 60
Here's the Dockerfile:
FROM lambci/lambda:build-python3.7
RUN yum install -y postgresql-devel python-psycopg2 postgresql-libs
And here are the requirements:
amqp==2.6.1
asgiref==3.3.1
attrs==20.3.0
beautifulsoup4==4.9.3
billiard==3.6.3.0
boto3==1.17.29
botocore==1.20.29
celery==4.4.7
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
coverage==5.5
Django==3.1.7
django-cachalot==2.3.3
django-celery-beat==2.2.0
django-celery-results==2.0.1
django-filter==2.4.0
django-google-analytics-app==5.0.2
django-redis==4.12.1
django-timezone-field==4.1.1
djangorestframework==3.12.2
Djaq==0.2.0
drf-spectacular==0.14.0
future==0.18.2
idna==2.10
inflection==0.5.1
Jinja2==2.11.3
joblib==1.0.1
jsonschema==3.2.0
kombu==4.6.11
livereload==2.6.3
lunr==0.5.8
Markdown==3.3.4
MarkupSafe==1.1.1
mkdocs==1.1.2
nltk==3.5
psycopg2-binary==2.8.6
pyrsistent==0.17.3
python-crontab==2.5.1
python-dateutil==2.8.1
python-dotenv==0.15.0
pytz==2021.1
PyYAML==5.4.1
redis==3.5.3
regex==2020.11.13
requests==2.25.1
sentry-sdk==1.0.0
six==1.15.0
soupsieve==2.2
sqlparse==0.4.1
structlog==21.1.0
tornado==6.1
tqdm==4.59.0
uritemplate==3.0.1
urllib3==1.26.3
uWSGI==2.0.19.1
vine==1.3.0
Werkzeug==1.0.1
And here are the database settings:
# Database Definitions
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "HOST": load_env("PSQL_HOST", "127.0.0.1"),
        "NAME": load_env("PSQL_DATABASE", ""),
        "PASSWORD": load_env("PSQL_PASSWORD", ""),
        "USER": load_env("PSQL_USERNAME", ""),
        "PORT": load_env("PSQL_PORT", "5432"),
        "TEST": {
            "NAME": "arc_unittest",
        },
    },
}
I'm at a loss as to what exactly the issue is. Thoughts?
File "/var/task/django/db/backends/postgresql/base.py", line 29, in
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2._psycopg'
I receive a similar error when deploying locally.
In my case, I needed to replace the psycopg2-binary with aws-psycopg2 to be Lambda friendly.
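A minimal sketch of that change in requirements.txt (assuming nothing else in the project pins the binary wheel by name):

# requirements.txt — before
# psycopg2-binary==2.8.6
# requirements.txt — after: aws-psycopg2 is prebuilt for the Lambda/Amazon Linux environment
aws-psycopg2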
Apparently it is a dependency error; on Debian-based distributions you can solve it by installing the libpq-dev package.
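For example, on a Debian or Ubuntu build image (the package name is the one mentioned above):

apt-get update && apt-get install -y libpq-dev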
You can also use a Lambda layer, for instance one already created in this repo.
At the bottom of your Lambda console, you have a "Layers" section where you can click "Add Layer" and specify the corresponding ARN based on your Python version and AWS region.
It will also have the benefit of reducing your package size.
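Since the question uses the Serverless Framework, the same layer can also be referenced directly in serverless.yml. A sketch — the ARN below is a placeholder you would replace with the real one for your region and Python version:

functions:
  app:
    handler: wsgi.handler
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:psycopg2:1   # placeholder ARN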
Related
I have a CircleCI configuration that runs my tests before merging to master. I start my server for the tests and need to connect to my RDS database, which is protected by security groups. I tried to whitelist the CircleCI IP to allow this, but with no luck.
version: 2.1

orbs:
  aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
  aws-cli: circleci/aws-cli@0.1.13

jobs:
  aws_setup:
    docker:
      - image: cimg/python:3.11.0
    steps:
      - aws-cli/install
      - aws-white-list-circleci-ip/add
  build:
    docker:
      - image: cimg/node:18.4
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - run:
          name: start the server
          command: npm start
          background: true
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
      - aws-white-list-circleci-ip/remove

workflows:
  build-workflow:
    jobs:
      - aws_setup:
          context: aws_context
      - build:
          requires:
            - aws_setup
          context: aws_context
My context environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
The error is attached as a screenshot (not reproduced here).
The orb I am using:
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out:
version: 2.1

orbs:
  aws-cli: circleci/aws-cli@0.1.13

jobs:
  build:
    docker:
      - image: cimg/python:3.11.0-node
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - aws-cli/install
      - run:
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            echo "this computers public ip address is $public_ip_address"
            aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  build-workflow:
    jobs:
      - build:
          context: aws_context
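One note on this fixed config: the original used the orb's remove step to drop the whitelist entry after the tests, and this version no longer does. A sketch of an equivalent clean-up step, reusing the same variables (my own addition, not from the original post):

      - run:
          name: remove CircleCI ip from security group
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            aws ec2 revoke-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
          when: always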
I've been trying to add a CircleCI CI/CD pipeline to my AWS project written in Terraform.
The problem is, terraform init/plan/apply works on my local machine, but it throws this error in CircleCI.
Error -
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this -
version: 2.1

orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0

jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev

# Invoke jobs via workflows
workflows:
  .......
And my init.sh is -
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]];
then
    echo "environment: $1"
    terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
    echo "Wrong Argument"
    echo "Pass 'dev', 'stage' or 'prod' only."
fi
My main.tf is -
provider "aws" {
profile = "${var.profile}"
region = "${var.region}"
}
terraform {
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is -
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (macOS on an M1), but in CircleCI the backend throws this error. Yes, I've added environment variables with my aws_secret_access_key and aws_access_key_id; it still doesn't work.
I've seen so many tutorials and nothing seems to solve this, and I don't want to write AWS credentials into my code. Any idea how I can solve this?
Update:
I have updated my pipeline to this -
version: 2.1

orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3

jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a

  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity

  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
          context: terraform

# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
      # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.
You have probably added the aws_secret_access_key and aws_access_key_id to your project settings, but I don't see them being used in your pipeline configuration. You should do something like the following, so they are known at runtime:
version: 2.1

orbs:
  python: circleci/python@1.5.0

jobs:
  build:
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    environment:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    steps:
      - run:
          name: Check python version
          command: python --version
      ...
I would advise you to read about environment variables in the documentation.
OK, I managed to fix this issue. You have to remove profile from the provider and the other .tf files.
So my main.tf file is -
provider "aws" {
region = "${var.region}"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket"
key = "dev/xxx.tfstate"
region = "ap-south-1"
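With the profile removed, the provider and the S3 backend fall back to the standard AWS credential chain, so exporting the keys as environment variables is enough for terraform init to authenticate. A minimal sketch with placeholder values (my own illustration, not from the original post):

export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>
export AWS_DEFAULT_REGION=ap-south-1
terraform init -backend-config=backend.dev.conf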
And most importantly, you have to put the access key, access key ID, and region inside CircleCI -> your project -> environment variables.
And you have to set up the AWS CLI on CircleCI, apparently inside a job, in config.yml -
version: 2.1

orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3

jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project

  plan-apply:
    executor: aws-cli/default
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    working_directory: ~/project
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
      - run:
          name: Init infrastructure
          command: sh scripts/init.sh dev
      - run:
          name: Plan infrastructure
          command: sh scripts/plan.sh dev
      - run:
          name: Apply infrastructure
          command: sh scripts/apply.sh dev
.....
.....
This solved the issue, but you have to init, plan, and apply inside the job where you set up the AWS CLI. I might be wrong to do the setup and plan inside the same job, but I'm learning now and this did the job. The API changed and old tutorials don't work nowadays.
Comment me your suggestions if any.
Adding a profile to your backend will fix this issue. Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
    bucket  = "terraform-state"
    region  = "ap-south-1"
    key     = "dev/xxx.tfstate"
    profile = "myAwsCliProfile"
  }
}
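For that to work in CI, the named profile has to exist on the runner before terraform init runs. A sketch of the ~/.aws/credentials entry the backend points at, with placeholder values (my own illustration, not from the original answer):

[myAwsCliProfile]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>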
While deploying to AWS Lambda I get the following error (reproduced below as text):
Error: `python.exe -m pip install -t C:/Users/asma/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/fa6c9f84e92253cbebe2f17deb9708a48dc1d1d7bff853c13add0f8197336d73_x86_64_slspyc -r C:/Users/asma/AppData/Local/UnitedIncome/serverless-python-requirements/Cache/fa6c9f84e92253cbebe2f17deb9708a48dc1d1d7bff853c13add0f8197336d73_x86_64_slspyc/requirements.txt --cache-dir C:\Users\asma\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\downloadCacheslspyc` Exited with code 1
at ChildProcess.<anonymous> (D:\serverless project\serverless framework\timezone789\time789\venv\node_modules\child-process-ext\spawn.js:38:8)
at ChildProcess.emit (node:events:527:28)
at ChildProcess.emit (node:domain:475:12)
at ChildProcess.cp.emit (D:\serverless project\serverless framework\timezone789\time789\venv\node_modules\cross-spawn\lib\enoent.js:34:29)
at maybeClose (node:internal/child_process:1092:16)
at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5)
###########
my serverless.yml file is as follows:
org: sayyedaasma
app: zones
service: time789

frameworkVersion: '3'

provider:
  name: aws
  runtime: python3.8

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi:
          path: /
          method: get

plugins:
  - serverless-python-requirements
############# requirements.txt file is as follows
pytz==2022.1
############# handler.py file
import json
import pytz


def lambda_handler(event, context):
    l1 = []
    # print('Timezones')
    for timeZone in pytz.all_timezones:
        l1.append(timeZone)
    return {
        'statusCode': 200,
        'body': json.dumps(l1)
    }
I have followed this tutorial:
https://www.serverless.com/blog/serverless-python-packaging/
I don't understand what the cause of this issue is, since I am a beginner. Can anyone here please guide me?
Similar issues are being reported here:
https://github.com/serverless/serverless-python-requirements/issues/663
Try this:
https://github.com/serverless/serverless-python-requirements/issues/663#issuecomment-1131211339
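If the linked workaround doesn't help, another option worth trying on Windows hosts — an assumption on my part, not taken from that issue — is to let serverless-python-requirements build the dependencies inside Docker instead of with the local python.exe, via serverless.yml:

custom:
  pythonRequirements:
    dockerizePip: true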
You may try running these tutorials in either a Cloud9 console or CloudShell; you will face far fewer issues in an AWS environment.
I am trying to deploy my serverless project locally with LocalStack and the serverless-localstack plugin. When I try to deploy it with serverless deploy, it throws an error and fails to create the CloudFormation stack. However, I can create the same stack when I deploy the project to the real AWS environment. What is the possible issue here? I checked the answers in all the previous questions asked on similar issues; nothing seems to work.
docker-compose.yml
version: "3.8"
services:
localstack:
container_name: "serverless-localstack_main"
image: localstack/localstack
ports:
- "4566-4597:4566-4597"
environment:
- AWS_DEFAULT_REGION=eu-west-1
- EDGE_PORT=4566
- SERVICES=lambda,cloudformation,s3,sts,iam,apigateway,cloudwatch
volumes:
- "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
serverless.yml
service: serverless-localstack-test
frameworkVersion: '2'

plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    stages:
      - local
    endpointFile: config.json

provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: local
  region: eu-west-1
  deploymentBucket:
    name: deployment

functions:
  hello:
    handler: handler.hello
Config.json (which has the endpoints)
{
  "CloudFormation": "http://localhost:4566",
  "CloudWatch": "http://localhost:4566",
  "Lambda": "http://localhost:4566",
  "S3": "http://localhost:4566"
}
Error in Localstack container
serverless-localstack_main | 2021-06-04T17:41:49:WARNING:localstack.utils.cloudformation.template_deployer: Error calling
<bound method ClientCreator._create_api_method.<locals>._api_call of
<botocore.client.Lambda object at 0x7f31f359a4c0>> with params: {'FunctionName':
'serverless-localstack-test-local-hello', 'Runtime': 'nodejs12.x', 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole',
'Handler': 'handler.hello', 'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Timeout': 6,
'MemorySize': 1024} for resource: {'Type': 'AWS::Lambda::Function', 'Properties':
{'Code': {'S3Bucket': '__local__', 'S3Key':
'/Users/charles/Documents/Practice/serverless-localstack-test'}, 'Handler':
'handler.hello', 'Runtime': 'nodejs12.x', 'FunctionName': 'serverless-localstack-test-
local-hello', 'MemorySize': 1024, 'Timeout': 6, 'Role':
'arn:aws:iam::000000000000:role/serverless-localstack-test-local-eu-west-1-lambdaRole'},
'DependsOn': ['HelloLogGroup'], 'LogicalResourceId': 'HelloLambdaFunction',
'PhysicalResourceId': None, '_state_': {}}
I fixed that problem using this plugin: https://www.serverless.com/plugins/serverless-deployment-bucket
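A sketch of how that plugin could slot into the serverless.yml above (assuming the same deployment bucket name; the plugin then creates the bucket if it doesn't already exist):

plugins:
  - serverless-localstack
  - serverless-deployment-bucket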
You need to make some adjustments to your files.
Update your docker-compose.yml; use the reference Docker Compose file from LocalStack (you can check it here).
Use a template that works correctly; the AWS docs page has several examples (you can check them here).
Run it with the following command: aws cloudformation create-stack --endpoint-url http://localhost:4566 --stack-name samplestack --template-body file://lambda.yml --profile dev
You can also run LocalStack using Python with the following commands:
pip install localstack
localstack start
I am trying to deploy a Flask application on AWS Lambda via Zappa through GitLab CI. Since inline editing isn't possible via GitLab CI, I generated the zappa_settings.json file on my remote computer and I am trying to use it to do zappa deploy dev.
My zappa_settings.json file:
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "prices-service-",
        "runtime": "python3.7",
        "s3_bucket": -MY_BUCKET_NAME-
    }
}
My .gitlab-ci.yml file:
image: ubuntu:18.04

stages:
  - deploy

before_script:
  - apt-get -y update
  - apt-get -y install python3-pip python3.7 zip
  - python3.7 -m pip install --upgrade pip
  - python3.7 -V
  - pip3.7 install virtualenv zappa

deploy_job:
  stage: deploy
  script:
    - mv requirements.txt ~
    - mv zappa_settings.json ~
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    - zappa deploy dev
The CI file, upon running, gives me an error (attached as a screenshot, not reproduced here).
Any suggestions are appreciated.
In my setup, zappa_settings.json is committed to the repo and not created on the fly. What is created on the fly is the AWS credentials file. The required values are read from GitLab environment variables set in the project's web UI.
zappa_settings.json
{
    "prod": {
        "lambda_handler": "main.handler",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "dummy-name",
        "s3_bucket": "dummy-name",
        "aws_environment_variables": {
            "STAGE": "prod",
            "PROJECT": "dummy-name"
        }
    },
    "dev": {
        "extends": "prod",
        "debug": true,
        "aws_environment_variables": {
            "STAGE": "dev",
            "PROJECT": "dummy-name"
        }
    }
}
.gitlab-ci.yml
image:
  python:3.6

stages:
  - test
  - deploy

variables:
  AWS_DEFAULT_REGION: "eu-central-1"
  # variables set in gitlab's web gui:
  # AWS_ACCESS_KEY_ID
  # AWS_SECRET_ACCESS_KEY

before_script:
  # adding pip cache
  - export PIP_CACHE_DIR="/home/gitlabci/cache/pip-cache"

.zappa_virtualenv_setup_template: &zappa_virtualenv_setup
  # `before_script` should not be overridden in the job that uses this template
  before_script:
    # creating virtualenv because zappa MUST have it and activating it
    - pip install virtualenv
    - virtualenv ~/zappa
    - source ~/zappa/bin/activate
    # installing requirements in virtualenv
    - pip install -r requirements.txt

test code:
  stage: test
  before_script:
    # installing testing requirements
    - pip install -r requirements_testing.txt
  script:
    - py.test

test package:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: test
  script:
    - zappa package $ZAPPA_STAGE

deploy to production:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: deploy
  environment:
    name: production
  script:
    # creating aws credentials file
    - mkdir -p ~/.aws
    - echo "[default]" >> ~/.aws/credentials
    - echo "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
    - echo "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
    # try to update, if the command fails (probably not even deployed) do the initial deploy
    - zappa update $ZAPPA_STAGE || zappa deploy $ZAPPA_STAGE
  after_script:
    - rm ~/.aws/credentials
  only:
    - master
I haven't used Zappa in a while, but I remember that a lot of errors were caused by bad AWS credentials, with Zappa reporting something else.
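A quick way to rule that out in the CI job — a sketch, assuming the AWS CLI is available in the image — is to check which identity the credentials resolve to right before calling zappa:

    # add to the deploy job's script, after the credentials file is written
    - aws sts get-caller-identity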