Unable to create environment in AWS Elastic Beanstalk? - django

I made a small Django app that I want to deploy on AWS. I followed the commands here. Now when I do eb create it fails, saying:
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-05fde0dc] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Detailed logs are here. My database is PostgreSQL; do I have to run a separate RDS instance for that?
My config.yml
branch-defaults:
  master:
    environment: feedy2-dev
    group_suffix: null
global:
  application_name: feedy2
  default_ec2_keyname: aws-eb
  default_platform: Python 2.7
  default_region: us-west-2
  profile: eb-cli
  sc: git
My 01-django-eb.config
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "feedy2.settings"
    PYTHONPATH: "/opt/python/current/app/feedy2:$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "feedy2/feedy2/wsgi.py"
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
My directory structure:
.
├── feedy2
│   ├── businesses
│   │  
│   ├── customers
│   │ 
│   ├── db.sqlite3
│   ├── feedy2
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── settings.py
│   │   ├── settings.pyc
│   │   ├── urls.py
│   │   ├── urls.pyc
│   │   ├── wsgi.py
│   │   └── wsgi.pyc
│   ├── manage.py
│   ├── questions
│   │  
│   ├── static
│   ├── surveys
│   └── templates
├── readme.md
└── requirements.txt

You truncated the relevant part of the output, but it's in the pastebin link:
Collecting psycopg2==2.6.1 (from -r /opt/python/ondeck/app/requirements.txt (line 20))
Using cached psycopg2-2.6.1.tar.gz
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
You need to install the postgresql[version]-devel package. Put the following in .ebextensions/packages.config:
packages:
  yum:
    postgresql94-devel: []
Source: Psycopg2 on Amazon Elastic Beanstalk
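Since your config.yml above uses sc: git, eb deploy ships the most recent commit, so the new file has to be committed before it reaches the instance. A minimal sketch of the redeploy, assuming .ebextensions/packages.config sits in the repository root next to requirements.txt:
$ git add .ebextensions/packages.config
$ git commit -m "install postgresql-devel so psycopg2 can build"
$ eb deploy
After that, pip should find pg_config and the requirements.txt hook should succeed.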

Related

AWS CDK CI/CD Pipeline - Deployed Lambda returns ClassNotFoundException

I am trying to build a CI/CD pipeline for a Lambda function with the AWS CDK. We are using a Gradle project, and I followed the example documentation. We have two stacks defined, ApiStack and ApiStackPipeline, where ApiStack is handled by Lambda_Build and ApiStackPipeline by CDK_Build.
We initialize the Lambda function within ApiStack like this:
final Function contactFunction = Function.Builder.create(this, "contactFunction")
        .role(roleLambda)
        .runtime(Runtime.JAVA_8)
        .code(lambdaCode)
        .handler("com.buraktas.contact.main.ContactLambda::handleRequest")
        .memorySize(512)
        .timeout(Duration.minutes(1))
        .environment(environment)
        .description(Instant.now().toString())
        .build();
In this case we set the lambdaCode parameter with this.lambdaCode = new CfnParametersCode();, the same as shown in the documentation (even though I am not sure how it gets populated).
We then pass this lambdaCode into ApiStackPipeline, which looks like this:
IRepository repository = Repository.fromRepositoryName(this, repoName, repoName);
IBucket bucket = Bucket.fromBucketName(this, "codepipeline-api", "codepipeline-api");

PipelineProject lambdaBuild = PipelineProject.Builder.create(this, "ApiBuild")
        .buildSpec(BuildSpec.fromSourceFilename("lambda-buildspec.yml"))
        .environment(BuildEnvironment.builder().buildImage(LinuxBuildImage.STANDARD_4_0).build())
        .build();

PipelineProject cdkBuild = PipelineProject.Builder.create(this, "ApiCDKBuild")
        .buildSpec(BuildSpec.fromSourceFilename("cdk-buildspec.yml"))
        .environment(BuildEnvironment.builder().buildImage(LinuxBuildImage.STANDARD_4_0).build())
        .build();

Artifact sourceOutput = new Artifact();
Artifact cdkBuildOutput = new Artifact("CdkBuildOutput");
Artifact lambdaBuildOutput = new Artifact("LambdaBuildOutput");

Pipeline.Builder.create(this, "ApiPipeline")
        .stages(Arrays.asList(
                StageProps.builder()
                        .stageName("Source")
                        .actions(Arrays.asList(
                                CodeCommitSourceAction.Builder.create()
                                        .actionName("Source")
                                        .repository(repository)
                                        .output(sourceOutput)
                                        .build()))
                        .build(),
                StageProps.builder()
                        .stageName("Build")
                        .actions(Arrays.asList(
                                CodeBuildAction.Builder.create()
                                        .actionName("Lambda_Build")
                                        .project(lambdaBuild)
                                        .input(sourceOutput)
                                        .outputs(Arrays.asList(lambdaBuildOutput))
                                        .build(),
                                CodeBuildAction.Builder.create()
                                        .actionName("CDK_Build")
                                        .project(cdkBuild)
                                        .input(sourceOutput)
                                        .outputs(Arrays.asList(cdkBuildOutput))
                                        .build()))
                        .build(),
                StageProps.builder()
                        .stageName("Deploy")
                        .actions(Arrays.asList(
                                CloudFormationCreateUpdateStackAction.Builder.create()
                                        .actionName("Lambda_CFN_Deploy")
                                        .templatePath(cdkBuildOutput.atPath("ApiStackAlfa.template.json"))
                                        .adminPermissions(true)
                                        .parameterOverrides(lambdaCode.assign(lambdaBuildOutput.getS3Location()))
                                        .extraInputs(Arrays.asList(lambdaBuildOutput))
                                        .stackName("ApiStackAlfaDeployment")
                                        .build()))
                        .build()))
        .artifactBucket(bucket)
        .restartExecutionOnUpdate(true)
        .build();
Here are the *-buildspec.yml files as well:
lambda-buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo current directory `pwd`
      - echo building gradle project on `date`
      - ./gradlew clean build
artifacts:
  files:
    - build/distributions/src.zip
  discard-paths: yes
cdk-buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
      java: corretto8
    commands:
      - echo installing aws-cdk on `date`
      - npm install aws-cdk
  build:
    commands:
      - echo current directory `pwd`
      - ls -l
      - echo building cdk project on `date`
      - ./gradlew clean build
      - npx cdk synth -o dist
  post_build:
    commands:
      - echo listing files after build under dist
      - ls -l dist
artifacts:
  files:
    - ApiStackAlfa.template.json
  base-directory: dist
Here is the exception stack trace I am getting
Class not found: com.buraktas.api.main.Lambda: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.buraktas.api.main.Lambda
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
And finally, here is a simplified version of the project structure, in case it helps:
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com
│   │   │       └── buraktas
│   │   │           └── api
│   │   │               ├── main
│   │   │               │   ├── ApiMain.java
│   │   │               │   ├── ApiPipelineStack.java
│   │   │               │   ├── ApiStack.java
│   │   │               │   └── Lambda.java
│   │   │               └── repository
│   │   │                   └── Repository.java
│   │   └── resources
│   │       └── log4j.properties
│   └── test
│       ├── java
│       │   ├── DocumentTest.java
│       │   └── JsonWriterSettingsTest.java
│       └── resources
│           └── request.http
Everything appears to be working fine: the pipeline is created successfully and the Source -> Build -> Deploy stages run smoothly. However, when I trigger my Lambda function I get a ClassNotFoundException. I tried both .zip and .jar (fat jar) artifacts, but nothing changed.
Thanks for your help.
I figured out that the problem happens because CodeBuild creates a zip from the given artifact. The uploaded archive is therefore a zip file containing src.zip, which itself contains the correct project build files. Since this outer zip is what gets uploaded to Lambda, Lambda cannot find the handler definition and throws a ClassNotFoundException. This additional zip step is mentioned neither in the example documentation nor in the AWS CodeBuild buildspec reference documentation. We need to manually unzip the contents of the zip file and emit those as the artifact output. Here is the final version of our buildspec.yml. Alternatively, if you don't want to deal with unzipping, configure your build tool (Gradle in our case) not to package the build output into a zip after running the build.
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo current directory `pwd`
      - echo building gradle project on `date`
      - ./gradlew clean build
  post_build:
    commands:
      - mkdir build/distributions/api
      - unzip build/distributions/api.zip -d build/distributions/api
artifacts:
  files:
    - '**/*'
  base-directory: build/distributions/api
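To sanity-check the artifact before Lambda sees it, you can reproduce what the post_build step does locally. A rough sketch, assuming the Gradle build still produces build/distributions/api.zip:
$ ./gradlew clean build
$ unzip -l build/distributions/api.zip          # what must end up at the artifact root
$ mkdir -p build/distributions/api
$ unzip build/distributions/api.zip -d build/distributions/api
$ ls build/distributions/api                    # the handler's classes/jars should sit here, not a nested zip
The key point is that the files emitted under artifacts are exactly what Lambda receives, so the handler's classes (or the fat jar's contents) must be at the top level of that directory.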

Heroku can't find out that python is necessary

I have a project with a structure similar to what is described in Two Scoops of Django.
Namely:
1. photoarchive_project is the repository root (where .git lives).
2. The project itself is photoarchive.
3. Config files are separate for separate environments.
The traceback and other info are below.
The file runtime.txt sits next to the .git directory, that is, in the very directory where git is initialized.
The problem is that Heroku can't even determine that Python should be applied. Could you give me a kick in the right direction here?
.git/config
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
[remote "origin"]
url = ssh://git@bitbucket.org/Kifsif/photoarchive.git
fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
remote = origin
merge = refs/heads/master
[remote "heroku"]
url = https://git.heroku.com/powerful-plains-97572.git
fetch = +refs/heads/*:refs/remotes/heroku/*
traceback
(photoarchive) michael@ThinkPad:~/workspace/photoarchive_project$ git push heroku master
Counting objects: 3909, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3617/3617), done.
Writing objects: 100% (3909/3909), 686.44 KiB | 0 bytes/s, done.
Total 3909 (delta 2260), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: ! No default language could be detected for this app.
remote: HINT: This occurs when Heroku cannot detect the buildpack to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to powerful-plains-97572.
remote:
To https://git.heroku.com/powerful-plains-97572.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/powerful-plains-97572.git'
tree
(photoarchive) michael@ThinkPad:~/workspace/photoarchive_project$ tree
.
├── docs
├── media
├── photoarchive
│   ├── config
│   │   ├── settings
│   │   │   ├── base.py
│   │   │   ├── constants.py
│   │   │   ├── heroku.py
│   │   │   ├── __init__.py
│   │   │   ├── local.py
│   │   │   └── production.py
│   └── manage.py
├── .git
├── .gitignore
├── Procfile
└── runtime.txt
runtime.txt
python-3.6.1
You need to define a requirements.txt inside the root of your project folder. This file should contain a list of all your project dependencies.
You can generate this file on your local development machine by running:
$ pip freeze > requirements.txt
Then check it into version control and push it to Heroku.
Heroku looks for this file to determine that your app is, in fact, a Python application =)
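Roughly, from the repository root (the same directory that already holds Procfile and runtime.txt):
$ pip freeze > requirements.txt
$ git add requirements.txt
$ git commit -m "Add requirements.txt so Heroku detects a Python app"
$ git push heroku master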

How do you import modules from google.cloud for use in AWS Lambda?

I'm trying to run a script on AWS Lambda that sends data to Google Cloud Storage (GCS) at the end. When I do so locally, it works, but when I run the script on AWS Lambda, importing the GCS client library fails (other imports work fine though). Anyone know why?
Here's an excerpt of the script's imports:
# main_script.py
import robobrowser
from google.cloud import storage
# ...generate data...
# ...send data to storage...
The error message from AWS:
Unable to import module 'main_script': No module named google.cloud
To confirm that the problem is with the google client library import, I ran a version of this script in AWS Lambda with and without the GCS import (commenting out the later references to it) and the script proceeds as usual without import-related errors when the GCS client library import is commented out. Other imports (robobrowser) work fine at all times, locally and on AWS.
I'm using a virtualenv with python set to 2.7.6. To deploy to AWS Lambda, I'm going through the following manual process:
zip the pip packages for the virtual environment:
cd ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages
zip -r9 ~/Code/{PROJECT_NAME}.zip *
zip the contents of the project, adding them to the same zip as above:
zip -g ~/Code/{PROJECT_NAME}.zip *
upload the zip to AWS and test using the web console
Here is a subset of the result from running tree inside ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages:
...
│
├── google
│   ├── ...
│   ├── cloud
│   │   ├── _helpers.py
│   │   ├── _helpers.pyc
│   │   ├── ...
│   │   ├── bigquery
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── _helpers.py
│   │   │   ├── _helpers.pyc
│   │   ├── ...
│   │   ├── storage
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── _helpers.py
│   │   │   ├── _helpers.pyc
├── robobrowser
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── browser.py
│   ├── browser.pyc
│   ├── ...
...
Unzipping and inspecting the contents of the zip confirms this structure is kept intact during the zipping process.
I was able to solve this problem by adding __init__.py to the google and google/cloud directories in the pip installation for google-cloud. Despite the current google-cloud package (0.24.0) claiming Python 2.7 support, the package structure as downloaded with pip seems to cause problems for me.
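In practice that meant creating the missing files inside the virtualenv's site-packages before re-zipping; a sketch of the workaround, reusing the zip step from the question (google-cloud apparently ships google and google.cloud as namespace packages without __init__.py files):
$ cd ~/.virtualenvs/{PROJECT_NAME}/lib/python2.7/site-packages
$ touch google/__init__.py google/cloud/__init__.py
$ zip -r9 ~/Code/{PROJECT_NAME}.zip *    # then add the project files as before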
In the interest of reporting everything, I also had a separate problem after doing this: AWS lambda had trouble importing the main script as a module. I fixed this by recreating the repo step-by-step from scratch. Wasn't able to pinpoint the cause of this 2nd issue, but hey. Computers.

Django runs successfully at localhost but 500 on AWS EB

I just tried writing a simple Django application hosted on AWS Elastic Beanstalk. I can run the server successfully on my localhost. However, when I deploy it on EB, it fails with a 500 error.
Here is my project tree
.
├── README.md
├── db.sqlite3
├── djangosite
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── intro
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
├── manage.py
├── requirement.txt
└── templates
    └── index.html
I didn't find a log entry with the matching time in the logs. Usually a 500 means there is something wrong with my code, but it runs well if I start the server locally:
$ python manage.py runserver
I tried to use eb ssh to log in to the instance and found there is no Django in /opt/current/app where my code sits.
But I did add Django==1.9.8 to requirement.txt. It seems eb did not install Django; it is also not in /opt/python/run/venv/lib/python2.7/site-packages/.
(I don't have enough reputation to comment)
I'm assuming that your application starts at all on the production server (you don't mention whether it does).
Did you set DEBUG = False on the production server? Then uncaught exceptions cause a 500 response, while having DEBUG = True in development (locally) shows you the debug screen instead.
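If DEBUG is off, the traceback behind the 500 usually ends up in the instance logs rather than in the browser; a quick way to fetch them with the EB CLI you already used:
$ eb logs        # pulls recent instance logs, including the web server error log
$ eb ssh         # or inspect /var/log/ on the instance directly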

How can I correctly set DJANGO_SETTINGS_MODULE for my Django project (I am using virtualenv)?

I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── fixtures
│   │   └── initial_data_test.yaml
│   ├── manage.py
│   ├── onora
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── admin.py
│   │   ├── admin.pyc
│   │   ├── models.py
│   │   ├── models.pyc
│   │   ├── tests.py
│   │   └── views.py
│   ├── settings.py
│   ├── settings.pyc
│   ├── sqlite3-database
│   ├── urls.py
│   └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
    ├── bin
    │   ├── activate
    │   ├── activate.csh
    │   ├── activate.fish
    │   ├── activate_this.py
    │   ├── django-admin.py
    │   ├── easy_install
    │   ├── easy_install-2.7
    │   ├── gunicorn
    │   ├── gunicorn_django
    │   ├── gunicorn_paster
    │   ├── pip
    │   ├── pip-2.7
    │   ├── python
    │   └── python2.7 -> python
    ├── include
    │   └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
    └── lib
        └── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to run cd ~/dev/django-project, then source virtual_environment/bin/activate, and then django-admin.py dumpdata.
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tells me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
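For the layout shown above, that would be roughly:
$ cd ~/dev/django-project/blossom
$ source ../virtual_environment/bin/activate
$ python manage.py dumpdata > dump.json    # redirect the output wherever you like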
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH; this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you choose to advance to virtualenvwrapper at some point (which is designed for this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
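For reference, a postactivate hook along those lines might look like this (a sketch assuming the default virtualenvwrapper layout and the paths from the tree above):
# ~/.virtualenvs/<env_name>/bin/postactivate
export DJANGO_SETTINGS_MODULE="blossom.settings"
export PYTHONPATH="$PYTHONPATH:$HOME/dev/django-project"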
On a Unix-like machine you can simply alias the virtualenv activation like this and use the alias instead of typing it every time:
.bashrc
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py, in Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info: Django docs.
I know there are plenty of answers, but this one worked for me, just for the record.
Navigate to your .virtual_env folder, where all the virtual environments are.
Go to the environment folder specific to your project.
Append export DJANGO_SETTINGS_MODULE=<django_project>.settings,
or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
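For example, assuming the environment folder is ~/.virtual_env/myproject (a hypothetical name), the line can be appended to its activate script:
$ echo 'export DJANGO_SETTINGS_MODULE=<django_project>.settings' >> ~/.virtual_env/myproject/bin/activate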
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions:
from dotenv import load_dotenv
load_dotenv()
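A hypothetical .env file next to manage.py, carrying the two variables mentioned above (adjust names and paths to your project):
# .env
DJANGO_SETTINGS_MODULE=blossom.settings
PYTHONPATH=/path/to/dev/django-project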
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration. Use Python: Django as your debug config option.
First, ensure you've exported/set DJANGO_SETTINGS_MODULE correctly as described here.