I have a Django app (a URL shortener for k8s) here: https://github.com/MrAmbiG/shorty/tree/k8s. The docker-compose version works, but docker run with the same image doesn't (I cannot access it from the host, and there are no errors). Both docker and docker-compose are from docker.io and both use the same docker image, so why the difference?
I apologize for not posting all the contents of the file but rather posting the github url itself.
version: '3.7'
services:
  django:
    image: gajuambi/shorty
    ports:
      - 80:8001
    env_file:
      - ../.env
The command below doesn't work:
docker run --name shorty -it --env-file .env gajuambi/shorty -p 8001:8001
The docker image itself runs with no errors, but when I enter the address in the browser on the host (my Windows laptop), I get nothing.
I tried the following URLs in the browser on the host (my laptop) where docker is running:
http://localhost:8001/
http://127.0.0.1:8001/
I tried binding Django to the following addresses:
0.0.0.0
0
127.0.0.1
but no go.
ports:
  - 80:8001
I think your application is running on port 80: since you bound the app to 0.0.0.0 without specifying a port, the default will be 80,
but you are forwarding port 8001 when running the docker command.
Please try with:
docker run --name shorty -it --env-file .env -p 8001:80 gajuambi/shorty
Also, try opening the IP of the host machine (computer) or the **docker bridge IP**:
http://{host IP}:8001
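If it is unclear which address and port Docker actually exposed, a couple of quick checks (a sketch, assuming the container is named shorty as above):
docker port shorty
# shows the host->container port mappings Docker created (empty if -p never took effect)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' shorty
# prints the container's IP on its network(s)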
I updated the entrypoint command to
daphne shorty.asgi:application -b 0 -p 8000
Currently the following is working fine:
docker rm shorty -f && docker build -t gajuambi/shorty -f .\Deployment\Dockerfile . && docker run --name shorty -it --env-file .env -p 80:8000 gajuambi/shorty
I have updated the github repo for reference.
https://github.com/MrAmbiG/shorty.git
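For reference, flag placement explains the difference between the failing and the working command: docker run treats everything after the image name as the container's command and arguments, not as Docker options. A minimal illustration with the same image:
# docker run [OPTIONS] IMAGE [COMMAND] [ARG...]: options must precede the image name
docker run --name shorty -p 80:8000 --env-file .env gajuambi/shorty
# here -p 80:8000 is parsed by Docker and the port is published
docker run --name shorty --env-file .env gajuambi/shorty -p 80:8000
# here -p 80:8000 is passed to the container's entrypoint instead, so nothing is published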
Related
I have a simple Django project with one HTML page, and I am trying to deploy it with Docker. You can see my Dockerfile below:
FROM python:3.10.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I run my image with the following command:
docker run -p 8000:8000 --rm 751aa7a2d66f
But when I open 127.0.0.1:8000 in my browser, it doesn't work.
At the same time, I run the docker ps command and it shows the following:
(screenshot: docker ps result)
What's the problem?
Thanks in advance for your help.
For your information, I work on Windows 7.
I tried to run docker run -p 127.0.0.1:8000:8000 --rm 751aa7a2d66f but it didn't help.
I also tried changing the port on my local machine to 8001, with the same result.
You can run Django with the docker-compose.yml below:
version: '3'
services:
  my_django_service:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python manage.py runserver 0.0.0.0:8000'
    ports:
      - 8000:8000
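From the directory containing this docker-compose.yml and the Dockerfile, something like the following should bring the app up (a sketch; older installs use docker-compose instead of docker compose):
docker compose up --build
# then open http://localhost:8000/ in the browser
One caveat for the Windows 7 setup mentioned above: there Docker runs inside a VirtualBox VM (Docker Toolbox), so the app may only be reachable at the VM's IP rather than at localhost:
docker-machine ip default
# prints the VM address, often 192.168.99.100; then browse http://192.168.99.100:8000/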
I have a problem with docker-compose (1) and docker compose (2) pull. When I try to pull images from the remote repository (I use ECR), both commands (sudo docker-compose -f docker-compose.prod.yml pull and sudo docker compose -f docker-compose.prod.yml pull) give me the following output:
$ export DOCKER_REGISTRY={{ secret path to AWS ECR }}
$ echo $DOCKER_REGISTRY
{{ secret path to AWS ECR }}
$ sudo docker compose -f docker-compose.prod.yml pull
WARN[0000] The "DOCKER_REGISTRY" variable is not set. Defaulting to a blank string.
WARN[0000] The "DOCKER_REGISTRY" variable is not set. Defaulting to a blank string.
WARN[0000] The "DOCKER_REGISTRY" variable is not set. Defaulting to a blank string.
[+] Running 0/0
⠋ cron Pulling 0.0s
⠿ db Error 0.0s
⠋ traefik Pulling 0.0s
⠋ web Pulling 0.0s
WARNING: Some service image(s) must be built from source by running:
docker compose build cron traefik web
invalid reference format
$ sudo docker-compose -f docker-compose.prod.yml pull
WARNING: The DOCKER_REGISTRY variable is not set. Defaulting to a blank string.
Pulling db ... done
In line 1 I export the DOCKER_REGISTRY variable, which is used in docker-compose.prod.yml. In line 2 I check this variable, and then I run both of the commands above. (2) sees all the needed images in the yml file but can't pull them, because it doesn't see the DOCKER_REGISTRY variable. (1) sees only db.
Part of the prod.yml file:
version: '3.7'
services:
  web:
    container_name: web
    image: ${DOCKER_REGISTRY}/me-project_web:latest
    build:
      context: ./MoreEnergy
      dockerfile: Dockerfile.prod
    restart: always
    env_file: ./.env.prod
    entrypoint: sh ./entrypoint.sh
    command: gunicorn MoreEnergy.wsgi:application --bind 0.0.0.0:8000
    expose:
      - 8000
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=Host(`lunev.dmitrium.com`)"
      - "traefik.http.routers.web.tls=true"
      - "traefik.http.routers.web.tls.certresolver=letsencrypt"
I have a .env.prod file with all the needed variables too, but (1) and (2) don't see the DOCKER_REGISTRY variable anyway.
What should I do? I'm trying to implement CI/CD using GitHub Actions. In the current state, docker-compose can build and push all my images to AWS ECR, but it can't pull them back.
The answer was embarrassingly simple. Ten minutes before asking the question, I had successfully started the containers (pulled earlier with the docker pull command), but I forgot about it, because the detail that made it succeed was not conspicuous: I had run the command WITHOUT sudo. Yes, that's it. The docker compose -f docker-compose.prod.yml up (and pull) commands run correctly only without root access, because sudo does not pass the exported DOCKER_REGISTRY variable through by default.
Maybe it's not a common solution, and in other cases sudo is required, but not in mine.
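For context, sudo resets the environment by default on most systems, which is why the exported DOCKER_REGISTRY never reached docker compose. A minimal sketch of the difference (registry value elided as above):
export DOCKER_REGISTRY={{ secret path to AWS ECR }}
sudo sh -c 'echo "$DOCKER_REGISTRY"'
# prints an empty line: sudo's env_reset drops the variable
sudo -E docker compose -f docker-compose.prod.yml pull
# -E asks sudo to preserve the caller's environment
sudo --preserve-env=DOCKER_REGISTRY docker compose -f docker-compose.prod.yml pull
# newer sudo versions can preserve just the one variable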
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED          SIZE
mellon_app    latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres      latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir /usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and then clean up:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development: it makes it easier to start several containers and combine them in a local docker-compose network, etc.
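Concretely, with the app image from the question (yourhubuser is a hypothetical Docker Hub account name):
docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest yourhubuser/mellon_app:latest
docker push yourhubuser/mellon_app:latest
# clean up the local tags afterwards
docker rmi mellon_app:latest yourhubuser/mellon_app:latest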
These are the steps I follow to run my application within a docker container:
docker run -i -t -d -p 8000:8000 c4ba9ec8e613 /bin/bash
docker attach c4ba9ec8e613
My startup script:
#!/bin/bash
#activate virtual env
echo Activate vitualenv.
source /home/my_env/bin/activate
#restart nginx
echo Restarting Nginx
service nginx restart
# Start Gunicorn processes
echo Starting Gunicorn.
gunicorn OPC.wsgi:application --bind=0.0.0.0:8000 --daemon
This setup works fine on the local machine but is not working within docker.
I needed to change the port number for the application to be accessible, since my nginx server responds on port 80:
docker run -i -t -d -p 80:80 c4ba9ec8e613 /bin/bash
docker attach c4ba9ec8e613
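As a side note, attaching to a shell isn't necessary if the startup script is made the container's main process. A sketch, assuming the script was copied into the image at the hypothetical path /start.sh:
# Remove --daemon from the gunicorn line first, so gunicorn stays in the
# foreground and keeps the container alive, then:
docker run -d -p 80:80 c4ba9ec8e613 /bin/bash /start.sh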
I've imported into PyCharm 5.1 Beta 2 a tutorial project, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote python interpreter is causing problems.
I've been trying to work out what the service name field is expecting:
(screenshot: remote interpreter - docker compose window: http://i.stack.imgur.com/Vah7P.png)
My docker-compose.yml file is:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT 1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all the links - I have a new-user link limit.
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (the password is 'screencast', set in the Dockerfile above):
$ ssh root@192.168.99.100 -p 2000
Note: we are aware the IP and port might change depending on your docker and compose configs.
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html
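To find the IP and port to use on your machine, a couple of quick checks (a sketch, assuming the SSH-enabled service is the web service from the compose file above and its port 22 is published):
docker-compose port web 22
# prints the host address:port mapped to the container's sshd
docker-machine ip default
# on Docker Toolbox setups, the VM IP to ssh into (e.g. 192.168.99.100)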