Deploy private GitHub repo Golang app on Elastic Beanstalk - amazon-web-services

I have been struggling to deploy my Golang app to AWS EB for a couple of days.
I am trying to deploy my app on an EB server (Preconfigured Docker - Go 1.4 running on 64bit Debian/2.9.2) using the EB CLI, via the command eb deploy in my app folder.
After a couple of seconds, I get an error message saying that my app wasn't deployed because of an error.
Looking at eb-activity.log, here is what I see:
/var/log/eb-activity.log
-------------------------------------
Fetching https://golang.org/x/crypto?go-get=1
Parsing meta tags from https://golang.org/x/crypto?go-get=1 (status code 200)
golang.org/x/crypto (download)
Fetching https://golang.org/x/sys/unix?go-get=1
Parsing meta tags from https://golang.org/x/sys/unix?go-get=1 (status code 200)
get "golang.org/x/sys/unix": found meta tag main.metaImport{Prefix:"golang.org/x/sys", VCS:"git", RepoRoot:"https://go.googlesource.com/sys"} at https://golang.org/x/sys/unix?go-get=1
get "golang.org/x/sys/unix": verifying non-authoritative meta tag
Fetching https://golang.org/x/sys?go-get=1
Parsing meta tags from https://golang.org/x/sys?go-get=1 (status code 200)
golang.org/x/sys (download)
github.com/randomuser/private-repo (download)
# cd .; git clone https://github.com/randomuser/private-repo /go/src/github.com/randomuser/private-repo
Cloning into '/go/src/github.com/randomuser/private-repo'...
fatal: could not read Username for 'https://github.com': No such device or address
package github.com/Sirupsen/logrus
imports golang.org/x/crypto/ssh/terminal
imports golang.org/x/sys/unix
imports github.com/randomuser/private-repo/apis: exit status 128
package github.com/Sirupsen/logrus
imports golang.org/x/crypto/ssh/terminal
imports golang.org/x/sys/unix
imports github.com/randomuser/private-repo/app
imports github.com/randomuser/private-repo/app
imports github.com/randomuser/private-repo/app: cannot find package "github.com/randomuser/private-repo/app" in any of:
/usr/src/go/src/github.com/randomuser/private-repo/app (from $GOROOT)
/go/src/github.com/randomuser/private-repo/app (from $GOPATH)
I suppose there is an issue when the server tries to install the app: it seems to be trying to retrieve my private repo from GitHub ...
I referenced my app's sub-packages as github.com/randomuser/private-repo/subpackage; I suppose this is why it behaves like that.
Is there a way to deploy all my code, forcing my private repo to be populated within $GOPATH/src/github.com/randomuser/private-repo/ so the server doesn't have to fetch it?
I didn't find any proper example (for multi-package apps) in the Amazon docs or on GitHub.
Am I missing anything? Is there a better solution?
On a side note, I also tried to deploy my compiled binary directly (creating a folder containing only the binary, zipping it and uploading it to the EB environment), but that didn't work either ... Maybe this option requires yet another env config (if so, which one?).
Thanks for your help :)
Configuration
Golang app with the following folder structure:
├── Dockerfile
├── server.go
├── Gopkg.lock
├── Gopkg.toml
├── Makefile
├── apis
│   ├── auth.go
│   ├── auth_test.go
│   ├── ...
├── app
│   ├── config.go
│   ├── init.go
│   ├── logger.go
│   ├── scope.go
│   ├── transactional.go
│   └── version.go
├── config
│   ├── dev.app.yaml
│   ├── errors.yaml
│   └── prod.app.yaml
├── daos
│   ├── auth.go
│   ├── auth_test.go
│   ├── ...
├── errors
│   ├── api_error.go
│   ├── api_error_test.go
│   ├── errors.go
│   ├── errors_test.go
│   ├── template.go
│   └── template_test.go
├── models
│   ├── identity.go
│   ├── ...
├── services
│   ├── auth.go
│   ├── auth_test.go
│   ├── ...
├── util
│   ├── paginated_list.go
│   └── paginated_list_test.go
Here is the content of my server.go:
package main

import (
    "flag"
    "fmt"
    "net/http"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"

    "github.com/randomuser/private-repo/apis"
    "github.com/randomuser/private-repo/app"
    "github.com/randomuser/private-repo/daos"
    "github.com/randomuser/private-repo/errors"
    "github.com/randomuser/private-repo/services"
)

func main() {
    // getting env from command line
    // env is either prod, preprod or dev
    // by default, env is prod
    env := flag.String("env", "prod", "environment: prod, preprod or dev")
    flag.Parse()
    ...
    router.To("GET,HEAD", "/ping", func(c *routing.Context) error {
        c.Abort() // skip all other middlewares/handlers
        return c.Write("OK " + app.Version)
    })
    ...
    // Serve on port 5000
}
My Dockerfile content:
FROM golang:1.4.2-onbuild
# Copy the sources into the GOPATH location matching the import paths.
ADD . /go/src/github.com/randomuser/private-repo
# Compile and install the binary into /go/bin.
RUN go install github.com/randomuser/private-repo
# The app listens on port 5000.
EXPOSE 5000
ENTRYPOINT /go/bin/private-repo
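For what it's worth, one workaround I have seen suggested (but have not tried) is to give git credentials inside the build container, so that go get can clone the private repo non-interactively, along these lines:
# Hypothetical: rewrite GitHub HTTPS URLs to embed a personal access token.
# <token> is a placeholder you would have to supply yourself.
RUN git config --global url."https://<token>@github.com/".insteadOf "https://github.com/"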

I finally managed to make it work.
I created a brand new EB app (without Docker).
I then figured out that my app somehow wasn't able to retrieve the env vars set in the console ...
So I forced the env variables to be passed to my app at startup time via my production config file, which is generated by the build.sh script below:
#!/bin/bash -xe
# See http://tldp.org/LDP/abs/html/options.html
# -x -> Print each command to stdout before executing it, expand commands
# -e -> Abort script at first error, when a command exits with non-zero status
# (except in until or while loops, if-tests, list constructs)
# $GOPATH isn't set by default, nor do we have a usable Go workspace :'(
GOPATH="/var/app/current"
APP_BUILD_DIR="$GOPATH/src/to-be-defined" # We will build the app here
APP_STAGING_DIR="/var/app/staging" # Current directory
DEP_VERSION="v0.3.2" # Use specific version for stability
ENV_VAR_PREFIX="TO_BE_DEFINED_"
# Install dep, a Go dependency management tool, if not already installed or if
# the version does not match.
if ! hash dep 2> /dev/null ||\
   [[ $(dep version | awk 'NR==2{print $3}') != "$DEP_VERSION" ]]; then
    # /usr/local/bin is expected to be on $PATH.
    curl -L \
        -s https://github.com/golang/dep/releases/download/$DEP_VERSION/dep-linux-amd64 \
        -o /usr/local/bin/dep
    chmod +x /usr/local/bin/dep
fi
# Remove the $APP_BUILD_DIR just in case it was left behind in a failed build.
rm -rf $APP_BUILD_DIR
# Setup the application directory
mkdir -p $APP_BUILD_DIR
# mv all files to $APP_BUILD_DIR
# https://superuser.com/questions/62141/how-to-move-all-files-from-current-directory-to-upper-directory
mv * .[^.]* $APP_BUILD_DIR
cd $APP_BUILD_DIR
# Pull in dependencies into vendor/.
dep ensure
# Build the binary with jsoniter tag.
go build -o application -tags=jsoniter .
# Modify permissions to make the binary executable.
chmod +x application
# Move the binary back to staging dir.
# Along with the configuration files.
mkdir $APP_STAGING_DIR/bin
# By default, `bin/application` is executed. This way, a Procfile isn't needed.
mv application $APP_STAGING_DIR/bin
cp -r config $APP_STAGING_DIR
# TODO: Fix the viper not working with env var
# Generate prod config from env variables
/opt/elasticbeanstalk/bin/get-config environment --output YAML | sed s/${ENV_VAR_PREFIX}//g > $APP_STAGING_DIR/config/prod.app.yaml
# Copy .ebextensions back to staging directory.
# cp -r .ebextensions $APP_STAGING_DIR
# Clean up.
rm -rf $APP_BUILD_DIR
echo "Build successful!!"
My build.sh file is called by Elastic Beanstalk using this Buildfile:
make: ./build.sh
Et voilà! Everything works properly now :)
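For reference, the bin/application convention used above is the Go platform default; if you wanted an explicit Procfile instead, my understanding is that the equivalent would be this one-liner:
web: bin/application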

Related

AWS CDK CI/CD Pipeline - Deployed Lambda returns ClassNotFoundException

I am trying to build a CI/CD pipeline for a Lambda with the AWS CDK. We are using a Gradle project here, and I followed the example documentation. We have two stacks defined, ApiStack and ApiStackPipeline, where ApiStack is handled by Lambda_Build and ApiStackPipeline is handled by CDK_Build.
We initialize the Lambda function within ApiStack like this:
final Function contactFunction = Function.Builder.create(this, "contactFunction")
        .role(roleLambda)
        .runtime(Runtime.JAVA_8)
        .code(lambdaCode)
        .handler("com.buraktas.contact.main.ContactLambda::handleRequest")
        .memorySize(512)
        .timeout(Duration.minutes(1))
        .environment(environment)
        .description(Instant.now().toString())
        .build();
In this case we set the lambdaCode parameter with this.lambdaCode = new CfnParametersCode();, the same as shown in the documentation (even though I am not sure how it gets populated).
Now we pass this lambdaCode into ApiStackPipeline, which looks like this:
IRepository repository = Repository.fromRepositoryName(this, repoName, repoName);
IBucket bucket = Bucket.fromBucketName(this, "codepipeline-api", "codepipeline-api");

PipelineProject lambdaBuild = PipelineProject.Builder.create(this, "ApiBuild")
        .buildSpec(BuildSpec.fromSourceFilename("lambda-buildspec.yml"))
        .environment(BuildEnvironment.builder().buildImage(LinuxBuildImage.STANDARD_4_0).build())
        .build();

PipelineProject cdkBuild = PipelineProject.Builder.create(this, "ApiCDKBuild")
        .buildSpec(BuildSpec.fromSourceFilename("cdk-buildspec.yml"))
        .environment(BuildEnvironment.builder().buildImage(LinuxBuildImage.STANDARD_4_0).build())
        .build();

Artifact sourceOutput = new Artifact();
Artifact cdkBuildOutput = new Artifact("CdkBuildOutput");
Artifact lambdaBuildOutput = new Artifact("LambdaBuildOutput");

Pipeline.Builder.create(this, "ApiPipeline")
        .stages(Arrays.asList(
                StageProps.builder()
                        .stageName("Source")
                        .actions(Arrays.asList(
                                CodeCommitSourceAction.Builder.create()
                                        .actionName("Source")
                                        .repository(repository)
                                        .output(sourceOutput)
                                        .build()))
                        .build(),
                StageProps.builder()
                        .stageName("Build")
                        .actions(Arrays.asList(
                                CodeBuildAction.Builder.create()
                                        .actionName("Lambda_Build")
                                        .project(lambdaBuild)
                                        .input(sourceOutput)
                                        .outputs(Arrays.asList(lambdaBuildOutput))
                                        .build(),
                                CodeBuildAction.Builder.create()
                                        .actionName("CDK_Build")
                                        .project(cdkBuild)
                                        .input(sourceOutput)
                                        .outputs(Arrays.asList(cdkBuildOutput))
                                        .build()))
                        .build(),
                StageProps.builder()
                        .stageName("Deploy")
                        .actions(Arrays.asList(
                                CloudFormationCreateUpdateStackAction.Builder.create()
                                        .actionName("Lambda_CFN_Deploy")
                                        .templatePath(cdkBuildOutput.atPath("ApiStackAlfa.template.json"))
                                        .adminPermissions(true)
                                        .parameterOverrides(lambdaCode.assign(lambdaBuildOutput.getS3Location()))
                                        .extraInputs(Arrays.asList(lambdaBuildOutput))
                                        .stackName("ApiStackAlfaDeployment")
                                        .build()))
                        .build()))
        .artifactBucket(bucket)
        .restartExecutionOnUpdate(true)
        .build();
Here are the *-buildspec.yml files:
lambda-buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo current directory `pwd`
      - echo building gradle project on `date`
      - ./gradlew clean build
artifacts:
  files:
    - build/distributions/src.zip
  discard-paths: yes
cdk-buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
      java: corretto8
    commands:
      - echo installing aws-cdk on `date`
      - npm install aws-cdk
  build:
    commands:
      - echo current directory `pwd`
      - ls -l
      - echo building cdk project on `date`
      - ./gradlew clean build
      - npx cdk synth -o dist
  post_build:
    commands:
      - echo listing files after build under dist
      - ls -l dist
artifacts:
  files:
    - ApiStackAlfa.template.json
  base-directory: dist
Here is the exception stack trace I am getting:
Class not found: com.buraktas.api.main.Lambda: java.lang.ClassNotFoundException
java.lang.ClassNotFoundException: com.buraktas.api.main.Lambda
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
And finally, here is a simplified version of the project structure, if it helps:
├── src
│   ├── main
│   │   ├── java
│   │   │   └── com
│   │   │       └── buraktas
│   │   │           └── api
│   │   │               ├── main
│   │   │               │   ├── ApiMain.java
│   │   │               │   ├── ApiPipelineStack.java
│   │   │               │   ├── ApiStack.java
│   │   │               │   └── Lambda.java
│   │   │               └── repository
│   │   │                   └── Repository.java
│   │   └── resources
│   │       └── log4j.properties
│   └── test
│       ├── java
│       │   ├── DocumentTest.java
│       │   └── JsonWriterSettingsTest.java
│       └── resources
│           └── request.http
It looks like everything is working fine: the pipeline is created successfully and the Source -> Build -> Deploy stages run smoothly. However, when I trigger my Lambda function I get a ClassNotFoundException. I tried both .zip and .jar (fat jar) artifacts, but nothing changed.
Thanks for your help.
I figured out that the problem happens because CodeBuild creates a zip from the given artifact. This means there will be a zip file containing src.zip itself, which contains the correct project build files. Since this outer zip file is what gets uploaded to Lambda, Lambda cannot find the handler definition, so it throws a ClassNotFoundException. However, this additional zip step is not mentioned in either the example documentation or the AWS CodeBuild reference documentation for buildspec. We need to manually unzip the contents of the zip file and give them as the artifact output. Here is the final version of our buildspec.yml. Additionally, if you don't want to deal with unzipping the contents, you need to configure your build tool (we are using Gradle here) to not pack the build output into a zip file after running the build command.
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
  build:
    commands:
      - echo current directory `pwd`
      - echo building gradle project on `date`
      - ./gradlew clean build
  post_build:
    commands:
      - mkdir build/distributions/api
      - unzip build/distributions/api.zip -d build/distributions/api
artifacts:
  files:
    - '**/*'
  base-directory: build/distributions/api
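As for the build-tool alternative mentioned above, here is a minimal sketch of a Gradle (Groovy DSL) task that lays out an exploded Lambda directory instead of a zip; the task name and output directory are illustrative choices, not from the original project:
// build.gradle: hypothetical task, assuming the standard 'java' plugin.
task buildLambdaDir(type: Copy) {
    // Compiled classes and resources go at the artifact root,
    // dependency jars under lib/, which is the layout Lambda expects.
    into "${buildDir}/distributions/api"
    from compileJava
    from processResources
    into('lib') {
        from configurations.runtimeClasspath
    }
}
build.dependsOn buildLambdaDir
With something like this, the post_build unzip step becomes unnecessary and the buildspec's base-directory can point straight at build/distributions/api.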

Unable to create environment in AWS Elastic Beanstalk?

I made a small Django app; I want to deploy it on AWS. I followed the commands here. Now when I do eb create it fails, saying:
ERROR: Your requirements.txt is invalid. Snapshot your logs for details.
ERROR: [Instance: i-05fde0dc] Command failed on instance. Return code: 1 Output: (TRUNCATED)...)
File "/usr/lib64/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03deploy.py failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Detailed logs are here. My database is PostgreSQL; do I have to run a separate RDS instance for that?
My config.yml:
branch-defaults:
  master:
    environment: feedy2-dev
    group_suffix: null
global:
  application_name: feedy2
  default_ec2_keyname: aws-eb
  default_platform: Python 2.7
  default_region: us-west-2
  profile: eb-cli
  sc: git
My 01-django-eb.config:
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "feedy2.settings"
    PYTHONPATH: "/opt/python/current/app/feedy2:$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "feedy2/feedy2/wsgi.py"
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
My directory structure:
.
├── feedy2
│   ├── businesses
│   │  
│   ├── customers
│   │ 
│   ├── db.sqlite3
│   ├── feedy2
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── settings.py
│   │   ├── settings.pyc
│   │   ├── urls.py
│   │   ├── urls.pyc
│   │   ├── wsgi.py
│   │   └── wsgi.pyc
│   ├── manage.py
│   ├── questions
│   │  
│   ├── static
│   ├── surveys
│   └── templates
├── readme.md
└── requirements.txt
You truncated the relevant part of the output, but it's in the pastebin link:
Collecting psycopg2==2.6.1 (from -r /opt/python/ondeck/app/requirements.txt (line 20))
Using cached psycopg2-2.6.1.tar.gz
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
You need to install the postgresql[version]-devel package. Put the following in .ebextensions/packages.config:
packages:
  yum:
    postgresql94-devel: []
Source: Psycopg2 on Amazon Elastic Beanstalk

initializing virtualenvwrapper on Mac (10.6.8) for Django

I want to use Django and create virtual environments. I don't quite understand the initializing steps documentation on the virtualenvwrapper website. I've installed virtualenvwrapper in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages. I've already installed Xcode, Homebrew and Postgres as well.
The documentation tells me to:
$ export WORKON_HOME=~/Envs
$ mkdir -p $WORKON_HOME
$ source /usr/local/bin/virtualenvwrapper.sh
$ mkvirtualenv env1
I'm especially confused about the first line. Is it telling me that I need to create a project folder named 'WORKON_HOME' and export it into another folder called 'Envs'? (I've searched for both folders on my Mac but didn't find them.) And then in the second line I make another directory 'WORKON_HOME'?
If you have suggestions or links to better explanations/tutorials, I would greatly appreciate it. Thanks.
Place these 3 lines in your ~/.bash_profile file:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/work
source `which virtualenvwrapper.sh`
The $HOME environment variable points to your user's home directory. Also known as the tilde "~", i.e. /Users/your_osx_username/.
WORKON_HOME is the new environment variable that you are assigning by using the export call in your ~/.bash_profile file. This is where all your newly created virtualenv directories will be kept.
PROJECT_HOME is where you normally place all your custom project directories manually. It has nothing to do with your virtualenvs per se, but it is an easy reference point for you to cd to, using the cd $PROJECT_HOME syntax.
which virtualenvwrapper.sh points to the location where the bash script virtualenvwrapper.sh is located, so when you source it, the functions in that bash script become available for your mkvirtualenv calls.
Whenever you open a new shell (a new tab, or closing and reopening your current tab after you first update your ~/.bash_profile file), all these environment variables and bash functions will thus be available in your shell.
When we create a new virtualenv using mkvirtualenv -p python2.7 --distribute my_new_virtualenv_1, what actually happens is that a new directory called my_new_virtualenv_1, containing a symlink to your global python2.7 and a new python site-packages sub-directory, is created in your ~/.virtualenvs/ directory. Reference:
calvin$ mkvirtualenv -p python2.7 --distribute my_new_virtualenv_1
Running virtualenv with interpreter /opt/local/bin/python2.7
New python executable in my_new_virtualenv_1/bin/python
Installing distribute..........................................................................................................................................................................................................done.
Installing pip................done.
virtualenvwrapper.user_scripts creating /Users/calvin/.virtualenvs/my_new_virtualenv_1/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/calvin/.virtualenvs/my_new_virtualenv_1/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/calvin/.virtualenvs/my_new_virtualenv_1/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/calvin/.virtualenvs/my_new_virtualenv_1/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/calvin/.virtualenvs/my_new_virtualenv_1/bin/get_env_details
So if you do
cd ~/.virtualenvs/my_new_virtualenv_1
calvin$ tree -d
.
├── bin
├── include
│   └── python2.7 -> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
└── lib
    └── python2.7
        ├── config -> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config
        ├── distutils
        ├── encodings -> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings
        ├── lib-dynload -> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
        └── site-packages
            ├── distribute-0.6.28-py2.7.egg
            │   ├── EGG-INFO
            │   └── setuptools
            │       ├── command
            │       └── tests
            ├── pip-1.2.1-py2.7.egg
            │   ├── EGG-INFO
            │   └── pip
            │       ├── commands
            │       └── vcs
            └── readline
You will see this directory structure in it.
Note of course that you are using Envs and I am using .virtualenvs to act as the virtual env holding directory.
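Once a virtualenv exists, you would normally switch into it with virtualenvwrapper's workon command and leave it with deactivate, for example:
workon my_new_virtualenv_1
deactivate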

Merging an independent Git repo with another Git repo that is a conduit with Subversion: avoiding duplication when merging

I am happily developing a Django project in my own Git repo on my localhost. I am creating branches, committing and merging happily. The path is something like:
/path/to/git/django/
and the structure is:
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
The rest of my development team still use Subversion, which is one giant repo with multiple projects. When I am working with that on my localhost I am still using Git (via git-svn). The path is something like
/path/to/giant-svn-repo/
Projects live under this like:
giant-svn-repo
├── project1
├── project2
├── project3
└── project4
When I want to work with the latest changes from the remote svn repo I just do a git-svn rebase. Then for new features I create a new branch, develop, commit, checkout master, merge branch with master, delete branch, and then a final git-svn dcommit. Cool. Everything works well.
These two repositories (let's call them git-django and git-svn) are completely independent right now.
Now I want to add git-django into the git-svn repo as a new project (ie. in a child directory called djangoproject). I have this working pretty well, using the following workflow:
cd into git-svn repo
Create a new branch in the git-svn repo
Make a new directory to host my django project
Add a new remote that links to my original Django project
Merge the remote into my local directory
Read-tree with the prefix of relative path djangoproject so it puts the codebase into the correct location based on the root of git-svn repo
Commit the changes so everything gets dumped into the correct place
From the command line this looks like:
> cd /path/to/giant-svn-repo
> git checkout -b my_django_project
> mkdir /path/to/giant-svn-repo/djangoproject
> git remote add -f local_django_dev /path/to/git/django/project
> git merge -s ours --no-commit local_django_dev/master
> git read-tree --prefix=djangoproject -u local_django_dev/master
> git commit -m 'Merged local_django_dev into subdirectory djangoproject'
This works, but in addition to the contents of the django git repo being in /path/to/giant-svn-repo/djangoproject, they are also in the root of the repository tree!
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── djangoproject
│   ├── README
│   ├── REQUIREMENTS
│   ├── __init__.py
│   ├── fabfile.py
│   ├── app1
│   ├── manage.py
│   ├── app2
│   ├── app3
│   ├── app4
│   └── project
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
I seem to have polluted the parent directory where all the projects of the giant-svn-repo are located.
Is there any way I can stop this happening?
(BTW this has all been done in a test directory structure - I haven't corrupted anything yet. Just trying to figure out the best way to do it)
I am sure it is just a matter of (re)defining one more argument to either git merge, git read-tree or git commit, but I am pretty much at the limit of my git kung-fu.
Thanks in advance.

How can I correctly set DJANGO_SETTINGS_MODULE for my Django project (I am using virtualenv)?

I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── fixtures
│   │   └── initial_data_test.yaml
│   ├── manage.py
│   ├── onora
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── admin.py
│   │   ├── admin.pyc
│   │   ├── models.py
│   │   ├── models.pyc
│   │   ├── tests.py
│   │   └── views.py
│   ├── settings.py
│   ├── settings.pyc
│   ├── sqlite3-database
│   ├── urls.py
│   └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
    ├── bin
    │   ├── activate
    │   ├── activate.csh
    │   ├── activate.fish
    │   ├── activate_this.py
    │   ├── django-admin.py
    │   ├── easy_install
    │   ├── easy_install-2.7
    │   ├── gunicorn
    │   ├── gunicorn_django
    │   ├── gunicorn_paster
    │   ├── pip
    │   ├── pip-2.7
    │   ├── python
    │   └── python2.7 -> python
    ├── include
    │   └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
    └── lib
        └── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to run cd ~/dev/django-project, then source virtual_environment/bin/activate, and then django-admin.py dumpdata.
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tell me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH; this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you choose to advance to virtualenvwrapper at some point (which is designed for this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
On a unix-like machine you can simply alias the virtualenv setup like this, and use the alias instead of typing it every time:
.bashrc
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py, in Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info in the django docs.
I know there are plenty of answers, but for the record this one worked for me.
Navigate to your .virtual_env folder, where all the virtual environments are.
Go to the environment folder specific to your project.
Append export DJANGO_SETTINGS_MODULE=<django_project>.settings,
or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
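As a concrete sketch (placeholders kept, and the exact file you append to depends on your setup, e.g. the env's activate script or virtualenvwrapper's postactivate hook):
echo 'export DJANGO_SETTINGS_MODULE=<django_project>.settings' >> ~/.virtual_env/<env_name>/bin/activate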
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions.
from dotenv import load_dotenv
load_dotenv()
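For illustration, the .env file at the project root would then contain entries along these lines (the path is hypothetical, following this question's layout):
DJANGO_SETTINGS_MODULE=blossom.settings
PYTHONPATH=/path/to/dev/django-project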
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration. Use Python: Django as your debug config option.
First, ensure you've exported/set DJANGO_SETTINGS_MODULE correctly, as described here.