Trouble launching AWS environment with Django

I followed the steps in the documentation line by line, but I keep getting this error:
Your WSGIPath refers to a file that does not exist.
Here is my .config file (except for the app name and the keys):
container_commands:
  01_syncdb:
    command: "python manage.py syncdb --noinput"
    leader_only: true
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: [myapp]/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: [myapp].settings
  - option_name: AWS_SECRET_KEY
    value: XXXX
  - option_name: AWS_ACCESS_KEY_ID
    value: XXXX
I googled around and found that someone else had a similar problem, which they solved by editing 'optionsettings.[myapp]'. I don't want to delete something I need, but here is what I have:
[aws:autoscaling:asg]
Custom Availability Zones=
MaxSize=1
MinSize=1
[aws:autoscaling:launchconfiguration]
EC2KeyName=
InstanceType=t1.micro
[aws:autoscaling:updatepolicy:rollingupdate]
RollingUpdateEnabled=false
[aws:ec2:vpc]
Subnets=
VPCId=
[aws:elasticbeanstalk:application]
Application Healthcheck URL=
[aws:elasticbeanstalk:application:environment]
DJANGO_SETTINGS_MODULE=
PARAM1=
PARAM2=
PARAM3=
PARAM4=
PARAM5=
[aws:elasticbeanstalk:container:python]
NumProcesses=1
NumThreads=15
StaticFiles=/static/=static/
WSGIPath=application.py
[aws:elasticbeanstalk:container:python:staticfiles]
/static/=static/
[aws:elasticbeanstalk:hostmanager]
LogPublicationControl=false
[aws:elasticbeanstalk:monitoring]
Automatically Terminate Unhealthy Instances=true
[aws:elasticbeanstalk:sns:topics]
Notification Endpoint=
Notification Protocol=email
[aws:rds:dbinstance]
DBDeletionPolicy=Snapshot
DBEngine=mysql
DBInstanceClass=db.t1.micro
DBSnapshotIdentifier=
DBUser=ebroot
The user who solved that problem deleted certain lines and then ran 'eb start'. I deleted the same lines they said they deleted, but when I ran 'eb start' I got the exact same problem again.
If anybody can help me out, that would be amazing!

I was having this exact problem all day yesterday, on Ubuntu 13.10.
I also tried deleting the options file under .ebextensions, to no avail.
What I believe finally fixed the issue was ~/mysite/requirements.txt.
After eb init and eb start were done, I double-checked the file's contents and noticed they were different from what http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html shows at the beginning of the tutorial.
When I checked during the WSGIPath problem, the file was missing the MySQL line, so I simply added:
MySQL-python==1.2.3
and then committed all the changes, and it worked.
If that doesn't work for you, below are the .config file settings and the directory structure.
My .config file under ~/mysite/.ebextensions is exactly what is in the tutorial, minus the secret key and access key; you need to replace those with your own:
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
option_settings:
  - namespace: aws:elasticbeanstalk:container:python
    option_name: WSGIPath
    value: mysite/wsgi.py
  - option_name: DJANGO_SETTINGS_MODULE
    value: mysite.settings
  - option_name: AWS_SECRET_KEY
    value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  - option_name: AWS_ACCESS_KEY_ID
    value: AKIAIOSFODNN7EXAMPLE
My requirements.txt:
Django==1.4.1
MySQL-python==1.2.3
argparse==1.2.1
wsgiref==0.1.2
And my tree structure. This starts out in ~/, so if I run
cd ~/
tree -a mysite
you should get the following output, including a bunch of directories under .git (I removed them because there are a lot):
mysite
├── .ebextensions
│   └── myapp.config
├── .elasticbeanstalk
│   ├── config
│   └── optionsettings.mysite-env
├── .git
├── .gitignore
├── manage.py
├── mysite
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── settings.py
│   ├── settings.pyc
│   ├── urls.py
│   ├── wsgi.py
│   └── wsgi.pyc
└── requirements.txt
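As a quick sanity check (a sketch, assuming your project lives in ~/mysite and WSGIPath is set to mysite/wsgi.py as above), you can confirm the WSGIPath actually points at a committed file before running eb start:

# hypothetical check: the WSGIPath value must exist relative to the repository root,
# and it must be committed, since the eb tooling deploys the committed tree
cd ~/mysite
test -f mysite/wsgi.py && echo "WSGIPath OK" || echo "WSGIPath missing"
git ls-files --error-unmatch mysite/wsgi.py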

Related

How do I start the Django server directly from the Docker container folder

I have this project structure:
└── folder
└── my_project_folder
├── my_app
│   ├── __init__.py
│   ├── asgi.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
└── manage.py
├── .env.dev
├── docker-compose.yml
├── entrypoint.sh
├── requirements.txt
└── Dockerfile
docker-compose.yml:
version: '3.9'
services:
  web:
    build: .
    command: python my_app/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - .env.dev
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=db_admin
      - POSTGRES_PASSWORD=db_pass
      - POSTGRES_DB=some_db
volumes:
  postgres_data:
Dockerfile:
FROM python:3.10.0-alpine
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
RUN apk update
RUN apk add postgresql-dev gcc python3-dev musl-dev
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
COPY . .
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
It's working, but I don't like the line python my_app/manage.py runserver 0.0.0.0:8000 in my docker-compose file.
What should I change to run manage.py from the Docker folder?
I mean, how can I use python manage.py runserver 0.0.0.0:8000 (without my_app)?
In your Dockerfile, you can use WORKDIR to change the working directory inside the image:
...
COPY . .
WORKDIR "my_app"
...
Then you are inside the my_app directory and can call your command:
python manage.py ...
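With that WORKDIR change in place, the compose command can drop the my_app/ prefix; a minimal sketch (assuming the rest of the compose file stays as above):

# sketch: since the image's working directory is now my_app, manage.py is resolved
# relative to it and no longer needs the my_app/ prefix
command: python manage.py runserver 0.0.0.0:8000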

I cannot access a mounted directory from Django when using Docker

I run Django in Docker and I want to read a file from the host. I mount a host path into the container using the following entry in my docker-compose file:
volumes:
  - /path/on/host/sharedV:/var/www/project/src/shV
Then I execute mongodump and export a collection into sharedV on the host. After that, I inspect the web container, go to the shV directory inside the container, and I can see the backup file. However, when I run os.listdir(path) in Django, the result is an empty list. In other words, I can access the sharedV directory from Django, but I cannot see its contents!
Here is the Mount part of container inspect:
"Mounts": [
{
"Type": "bind",
"Source": "/path/on/host/sharedV",
"Destination": "/var/www/project/src/shV",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
]
Does anyone have an idea how I can access the host from a running container?
Thanks.
This works for me; trying to give you the overall picture.
Project Tree
.
├── app
│   ├── dir
│   ├── file.txt
│   └── main.py
├── dir
│   └── demo.txt
├── docker-compose.yml
└── Dockerfile
Dockerfile
# Dockerfile
FROM python:3.7-buster
RUN mkdir -p /app
WORKDIR /app
RUN useradd appuser && chown -R appuser /app
USER appuser
CMD [ "python", "./main.py" ]
docker-compose
version: '2'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    volumes:
      - ./app:/app/
      - ./dir:/app/dir
main.py
import os

if __name__ == '__main__':
    path = './dir'
    dir = os.listdir(path)
    print('Hello', dir)
Prints
Hello ['demo.txt']
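If the mount still looks empty from Django, a quick check (a sketch; the path is the container-side mount target taken from the inspect output above) is to print exactly what the Django process sees, to rule out a relative-path or wrong-container mismatch:

import os

# hypothetical debug snippet, run inside the same container that has the bind mount
path = '/var/www/project/src/shV'
print(os.path.abspath(path), os.path.isdir(path))
print(os.listdir(path))  # should list the mongodump output if the mount is visible here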

Using outputs from other .tf files in Terraform

Is there a way of using output values of a module that is located in another folder? Imagine the following environment:
tm-project/
├── lambda
│   └── vpctm-manager.js
├── networking
│   ├── init.tf
│   ├── terraform.tfvars
│   ├── variables.tf
│   └── vpc-tst.tf
├── prd
│   ├── init.tf
│   ├── instances.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── security
    └── init.tf
I want to create EC2 instances and place them in a subnet that is declared in the networking folder. So I was wondering if, by any chance, I could access the outputs of the module I used in networking/vpc-tst.tf as inputs to my prd/instances.tf.
Thanks in advance.
You can use an outputs.tf file to define the outputs of a Terraform module. Each output has a name, as in the example below.
output "vpc_id" {
value = "${aws_vpc.default.id}"
}
These can then be referenced within your prd/instances.tf via the module name combined with the output name you defined.
For example, if you instantiate this as a module named vpc, you can use the output like this:
module "vpc" {
......
}
resource "aws_security_group" "my_sg" {
vpc_id = module.vpc.vpc_id
}
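In the same spirit, here is a sketch of how prd/instances.tf could consume the networking outputs to place an EC2 instance in the shared subnet (assuming the networking configuration is written as a reusable module and also exports a subnet_id output; the path, variable, and output names are hypothetical):

module "networking" {
  source = "../networking"   # hypothetical relative path to the networking folder
}

resource "aws_instance" "app" {
  ami           = var.ami_id                     # assumed variable
  instance_type = "t3.micro"
  subnet_id     = module.networking.subnet_id    # assumed output name
}

Note that this instantiates networking as a module inside prd's own state; if networking is applied separately as its own root configuration, its outputs are not directly visible this way.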

Unable to locate global value from helm subchart

This is my first time using nested Helm charts and I'm trying to access a global value from the root values.yaml file. According to the documentation I should be able to use the syntax below in my secret.yaml file; however, when I run helm template api --debug, I get the following error:
Error: template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
helm.go:84: [debug] template: api/templates/secret.yaml:7:21: executing "api/templates/secret.yaml" at <.Values.global.sa_json>: nil pointer evaluating interface {}.sa_json
/primaryChart/charts/api/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-service-account-secret
type: Opaque
data:
  sa_json: {{ .Values.global.sa_json }}
primaryChart/values.yaml
global:
  sa_json: _b64_sa_credentials
Folder structure is as follows:
/primaryChart
├── values.yaml
└── charts
    └── api
        └── templates
            └── secret.yaml
With the following directory layout, .Values.global.sa_json will only be available if you call helm template api . from your main chart:
/mnt/c/home/primaryChart> tree
.
├── Chart.yaml            <-- your main chart
├── charts
│   └── api
│       ├── Chart.yaml    <-- your subchart
│       ├── charts
│       ├── templates
│       │   └── secrets.yaml
│       └── values.yaml
├── templates
└── values.yaml           <-- this is where your global.sa_json is defined
Your values file should be called values.yaml and not value.yaml; alternatively, pass any other file with the -f flag: helm template api . -f value.yaml
/mnt/c/home/primaryChart> helm template api .
---
# Source: primaryChart/charts/api/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-service-account-secret
type: Opaque
data:
  sa_json: _b64_sa_credentials
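If you do need to render the subchart on its own, one workaround (a sketch, not required by the answer above) is to supply the global explicitly, since the subchart cannot see the parent's values.yaml when templated in isolation:

# sketch: render only the subchart, passing the global value on the command line
helm template api ./charts/api --set global.sa_json=_b64_sa_credentials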

Django startapp workflow with docker

I am confused about how to work with Django apps (creating and using them) when using Docker. Some tutorials suggest running the startapp command after starting the web Docker container (I'm using docker-compose to bring up the containers). But since the files are created inside that container, how do I add code to them from my local dev machine? Moreover, creating apps like this just to edit code does not seem right...
I've been using the following structure so far, which starts up the container and works fine, but with just one "app", todo.
(taken from https://github.com/realpython/dockerizing-django)
.
├── README.md
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── sites-enabled
│       └── django_project
├── production.yml
├── tmp.json
└── web
    ├── Dockerfile
    ├── Pipfile
    ├── Pipfile.lock
    ├── docker_django
    │   ├── __init__.py
    │   ├── apps
    │   │   ├── __init__.py
    │   │   └── todo
    │   │       ├── __init__.py
    │   │       ├── admin.py
    │   │       ├── models.py
    │   │       ├── static
    │   │       │   └── main.css
    │   │       ├── templates
    │   │       │   ├── _base.html
    │   │       │   ├── dashboard.html
    │   │       │   └── home.html
    │   │       ├── tests.py
    │   │       ├── urls.py
    │   │       └── views.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── manage.py
    ├── requirements.txt
    ├── shell-script.sh
    └── tests
        ├── __init__.py
        └── test_env_settings.py
When I use the above structure, I am not able to create apps locally, since apps are created with manage.py; I would need to navigate to the apps folder to do that, but manage.py is not accessible from there. When I try giving the full absolute path to manage.py, it complains about a SETTINGS_MODULE / SECRET_KEY error.
What is the proper way to work with Django apps when using docker-compose?
Do I need to change the above structure, or should I change my workflow?
EDIT:
my docker-compose:
version: '3.7'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - web-static:/usr/src/app/static
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  pgadmin:
    restart: always
    image: fenglc/pgadmin4
    ports:
      - "5050:5050"
    volumes:
      - pgadmindata:/var/lib/pgadmin/data/
    environment:
      DEFAULT_USER: 'pgadmin4@pgadmin.org'
      DEFAULT_PASSWORD: 'admin'
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  web-django:
  web-static:
  pgdata:
  redisdata:
  pgadmindata:
My Dockerfile inside the web folder:
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD Pipfile /usr/src/app
ADD Pipfile.lock /usr/src/app
RUN python -m pip install --upgrade pip
RUN python -m pip install pipenv
COPY requirements.txt requirements.txt
RUN pipenv install --system
COPY . /usr/src/app
Your structure is correct. What you are looking for is a volume: by mounting your Django project from the host into the container, you can create whatever you like in your project on the host, and the changes take effect in the container.
For example:
The structure is:
.
├── django
│   ├── Dockerfile
│   └── entireDjangoAppFiles
└── docker-compose.yml
Say this is my Django Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip install Django psycopg2
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
and my docker compose:
version: '3.7'
services:
  django:
    build:
      context: django
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - "./django:/code"
Now any change I make in my django directory will be applied to the container's /code dir as well.
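The same mount also answers the startapp question: because /code is a bind mount of the host directory, files generated inside the container land on the host too. A sketch (using the django service name from the compose file above, a hypothetical app name, and assuming manage.py sits at the top of the mounted directory):

# sketch: run startapp through the container; the generated files appear in ./django
# on the host because /code is a bind mount
docker-compose run --rm django python manage.py startapp myapp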
EDIT
Our docker-compose files are not similar... you are using named volumes instead of the usual bind mounts. Those volumes are created inside Docker's own volumes directory and the containers can use them, but nothing tells Docker that you want those volumes to contain your apps, so they are empty. To fix this, remove them from the volumes option in your docker-compose and use bind mounts instead:
version: '3.7'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - postgres:postgres
      - redis:redis
    volumes:
      - ./web:/usr/src/app                               # mount the project dir
      - ./path/to/static/files/dir:/usr/src/app/static   # mount the static files dir
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn docker_django.wsgi:application -w 2 -b :8000
  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "80:80"
    volumes:
      - ./path/to/static/files/dir:/usr/src/app/static   # same static dir as the web service
    links:
      - web:web
  postgres:
    restart: always
    image: postgres:latest
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data/
  pgadmin:
    restart: always
    image: fenglc/pgadmin4
    ports:
      - "5050:5050"
    volumes:
      - pgadmindata:/var/lib/pgadmin/data/
    environment:
      DEFAULT_USER: 'pgadmin4@pgadmin.org'
      DEFAULT_PASSWORD: 'admin'
  redis:
    restart: always
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
volumes:
  # web-django:
  # web-static:
  pgdata:
  redisdata:
  pgadmindata:
A note about the other named volumes, in case you wondered why you still have to use them: they are the database volumes, which are supposed to be populated by the containers themselves.
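If you switch over, a quick way to confirm which named volumes remain and to drop the now-unused app volumes without touching the database ones (a sketch; the exact volume names are hypothetical, since compose prefixes them with the project name):

# list the named volumes compose created
docker volume ls
# remove only the unused app volumes; keep pgdata, redisdata, pgadmindata
docker volume rm <project>_web-django <project>_web-static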