What is wrong with my Gulp Watch task with Docker? [duplicate] - django

This question already has an answer here:
Docker bound mount - can not see changes on browser
(1 answer)
Closed 3 years ago.
EDIT: Gulp's watch doesn't work on Windows with mounted volumes because no "file change" event is sent into the container. My current workaround is to run Docker Windows Volume Watcher on my local machine while I look into integrating that solution into my code.
I'm trying to run a gulp watch task in my Dockerfile, and gulp isn't picking up when my files change.
Quick Notes:
This setup works when I use it for my locally hosted WordPress installs
The file changes are reflected in my Docker container, according to PyCharm's Docker service
Running the "styles" gulp task works; it's just the file watching that does not
It's clear to me that there's some sort of disconnect between how gulp watches for changes and how Docker propagates those changes.
Github link
Edit: It looks possible to do what I want; here's a link to someone doing it slightly differently.
gulpfile excerpt:
export const watchForChanges = () => {
  watch('scss-js/scss/**/*.scss', gulp.series('styles'));
  watch('scss-js/js/**/*.js', scripts);
  watch('scss-js/scss/*.scss', gulp.series('styles'));
  watch('scss-js/js/*.js', scripts);
  // Try absolute path to see if it works
  watch('scss-js/scss/bundle.scss', gulp.series('styles'));
}
...
// Compile SCSS through styles command
export const styles = () => {
  // Want more than one SCSS file? Just turn the below string into an array
  return src('scss-js/scss/bundle.scss')
    // If we're in dev, init sourcemaps. Any plugins below need to be compatible with sourcemaps.
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    // Throw errors
    .pipe(sass().on('error', sass.logError))
    // In production use auto-prefixer, fix general grid and flex issues.
    .pipe(
      gulpif(
        PRODUCTION,
        postcss([
          autoprefixer({
            grid: true
          }),
          require("postcss-flexbugs-fixes"),
          require("postcss-preset-env")
        ])
      )
    )
    .pipe(gulpif(PRODUCTION, cleanCss({ compatibility: 'ie8' })))
    // In dev write source maps
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    // TODO: Update this source folder
    .pipe(dest('blog/static/blog/'))
    .pipe(server.stream());
}
...
export const dev = series(parallel(styles, scripts), watchForChanges);
Docker-Compose:
version: "3.7"
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8002:8000"
      - "3001:3001"
      - "3000:3000"
    volumes:
      - ./django_project:/django_project
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"
    restart: always
    depends_on:
      - db
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example1
    ports:
      - "5432:5432"
    restart: always
Dockerfile:
FROM python:3.8-buster
MAINTAINER Austin
ENV PYTHONUNBUFFERED 1
# Install node
RUN apt-get update && apt-get -y install nodejs
RUN apt-get install npm -y
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update Node
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
    apt-transport-https \
    build-essential \
    ca-certificates \
    curl \
    git \
    libssl-dev \
    wget
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.14.0
WORKDIR $NVM_DIR
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash \
    && . $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# What PIP installs need to get done?
COPY django_project/requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Copy local directory to target new docker directory
RUN mkdir -p /django_project
WORKDIR /django_project
COPY ./django_project /django_project
# Make Postgres Work
EXPOSE 5432/tcp
WORKDIR /django_project
RUN npm install gulp-cli -g
What do you think could be going on?

My guess is that you are running on Windows, right?
If so, take a look at the following answer:
https://stackoverflow.com/a/58969398/12153397
Below is the gist of the linked answer.
Issue identified
Bind mounting actually does not work for Docker Toolbox: file change events in mounted folders of the host are not propagated to the container by Docker for Windows.
Solution
This script is intended to be the answer to this issue: docker-windows-volume-watcher.
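If you would rather keep everything inside the container, another workaround (my own suggestion, not part of the linked answer) is to make gulp poll the filesystem instead of waiting for change events. gulp passes its watch options through to chokidar, so a sketch along these lines should do it:
// Sketch: force polling so changes in a Windows bind mount are noticed.
// The 500 ms interval is arbitrary; tune it for your project.
export const watchForChanges = () => {
  const opts = { usePolling: true, interval: 500 };
  watch('scss-js/scss/**/*.scss', opts, gulp.series('styles'));
  watch('scss-js/js/**/*.js', opts, scripts);
}
Polling is heavier on CPU than native events, but it works regardless of whether the host forwards file-change notifications to the container.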

Related

Why is my docker image not running when using docker run (image), but I can run containers generated by docker-compose up?

My docker-compose setup creates 3 containers: django, celery and rabbitmq. When I run docker-compose build followed by docker-compose up, the containers run successfully.
However, I am having issues deploying the image. The generated image has an ID of 24d7638e2aff. For whatever reason, if I just run the command below, nothing happens and the container exits with code 0. Both the django and celery applications have the same image ID.
docker run 24d7638e2aff
This is not good, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
  django:
    container_name: testapp_django
    build:
      context: .
      args:
        build_env: production
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    links:
      - rabbitmq
      - celery
  rabbitmq:
    container_name: testapp_rabbitmq
    restart: always
    image: rabbitmq:3.10-management
    ports:
      - "5672:5672"   # specifies port of queue
      - "15672:15672" # specifies port of management plugin
  celery:
    container_name: testapp_celery
    restart: always
    build:
      context: .
      args:
        build_env: production
    command: celery -A testapp worker -l INFO -c 4
    depends_on:
      - rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
    # dependencies for building Python packages
    build-essential \
    # psycopg2 dependencies
    libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
    -r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
    && adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
    # psycopg2 dependencies
    libpq-dev \
    # Translations dependencies
    gettext \
    # git for GitPython commands
    git-all \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$@"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now a plain docker run as you've shown it will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
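For example, the Celery container in a pod spec could look roughly like this (a sketch only; the image name is a placeholder for wherever you push the image):
# args: is appended to the image's ENTRYPOINT, unlike command:, which would replace it
containers:
  - name: celery
    image: your-registry/testapp:latest
    args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]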
I think that is because the commands that run the Django server are in the docker-compose.yml.
You should move these commands into the entrypoint.
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Note that python manage.py runserver 0.0.0.0:8000 starts the application with a development server that should not be used in production.
You should look at gunicorn or something similar.
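For example, assuming the project module is testapp (as in the Celery command above) and that gunicorn is listed in your requirements, the last line of the entrypoint could become something like this sketch:
# Sketch: serve the app with gunicorn instead of the development server
gunicorn testapp.wsgi:application --bind 0.0.0.0:8000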

Running "/usr/local/bin/gunicorn" in a docker build says " stat /usr/local/bin/gunicorn: no such file or directory"

From the top-level maps directory, I'm able to install the gunicorn package ...
(venv) localhost:maps davea$ pip3 install gunicorn
Collecting gunicorn
Downloading gunicorn-20.0.4-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 1.2 MB/s
Requirement already satisfied: setuptools>=3.0 in ./web/venv/lib/python3.7/site-packages (from gunicorn) (45.1.0)
Installing collected packages: gunicorn
Successfully installed gunicorn-20.0.4
Below is my docker-compose.yml file
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000
  apache:
    restart: always
    build: ./apache/
    ports:
      - "80:80"
    #volumes:
    #  - web-static:/www/static
    links:
      - web:web
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
And then I have web/Dockerfile as follows ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
RUN pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
However, when I build/start my docker instance, I'm told it can't find my "gunicorn" command ...
(venv) localhost:maps davea$ docker-compose up
Starting maps_web_1 ...
Starting maps_web_1 ... error
ERROR: for maps_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/local/bin/gunicorn\": stat /usr/local/bin/gunicorn: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
Your Docker container is a totally isolated environment. Nothing you install on the host is visible inside the container; nothing that happens inside the container is accessible on the host. There are ways to bridge this boundary (with docker run -v bind mounts, for example), but that's not possible during the docker build phase.
In this example your local source tree has a requirements.txt file that lists the packages that need to be installed when the image is built. (The RUN pip3 freeze line has no effect; the COPY on the next line overwrites its output with the file from your local source tree.) It's enough to add the dependency to the requirements.txt file:
gunicorn
In your development environment, you can re-run pip install -r requirements.txt to update the packages installed in your virtual environment. When you re-run docker build, having this line in the requirements.txt file will cause gunicorn to be installed when the image is built.
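In other words, the workflow is roughly this sketch (run from the directory that holds requirements.txt; the service name web comes from your compose file):
# update the packages in your local virtual environment
pip install -r requirements.txt
# rebuild the image so gunicorn is installed inside it, then restart
docker-compose build web
docker-compose up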
You can clean up the Dockerfile a little bit. The resulting Dockerfile would be a pretty typical one for Python packages with C dependencies:
# Start from a totally clean environment with Python installed,
# but no non-system libraries and nothing from your host system.
FROM python:3.7-slim
# Install C dependencies.
# It's important to do apt-get update and install in the
# same command. It's more efficient to only do it once.
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       gcc \
       libmariadb-dev \
       libmariadb-dev-compat
# Update pip
RUN python -m pip install --upgrade pip
# Create the application directory and point there
# (WORKDIR will implicitly create it)
WORKDIR /app/
# Install all of the Python dependencies. These are
# listed, one to a line, in the requirements.txt file,
# possibly with version constraints. Having this as
# a separate block allows Docker to not repeat it if
# only your application code changes.
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
# Copy in the rest of the application.
COPY . .
# Specify what port your application uses, and the
# default command to use when launching the container.
EXPOSE 8000
CMD /usr/local/bin/gunicorn maps.wsgi:application -w 2 -b :8000

Installing Geospatial libraries in Docker

Django's official documentation lists 3 geospatial dependencies needed to start developing a PostGIS application; the required packages are given in a table that varies by database.
I use Docker for my local development, and I am confused about which of those packages should be installed in the Django container and which in the PostgreSQL container. I am guessing some of them should be in both.
I would appreciate your help with this.
You need to install the geospatial libraries only in the Django container, because they are used for interacting with a spatially enabled DB (such as PostgreSQL with PostGIS). You can deploy such a DB by using a ready-made image for that purpose, such as kartoza/postgis, as a base.
Here is a nice example of a Dockerfile that uses python:3.6-slim as a base and builds the GDAL dependencies into the container. The part of the Dockerfile that you need for that is the following:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
# Add unstable repo to allow us to access latest GDAL builds
# Existing binutils causes a dependency conflict; the correct version will be installed when GDAL gets installed
RUN echo deb http://deb.debian.org/debian testing main contrib non-free >> /etc/apt/sources.list && \
    apt-get update && \
    apt-get remove -y binutils && \
    apt-get autoremove -y
# Install GDAL dependencies
RUN apt-get install -y libgdal-dev g++ --no-install-recommends && \
    pip install pipenv && \
    pip install whitenoise && \
    pip install gunicorn && \
    apt-get clean -y
# Update C env vars so compiler can find gdal
ENV CPLUS_INCLUDE_PATH=/usr/include/gdal
ENV C_INCLUDE_PATH=/usr/include/gdal
ENV LC_ALL="C.UTF-8"
ENV LC_CTYPE="C.UTF-8"
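If your Django app also needs the GDAL Python bindings (an assumption on my part; they are not shown in the snippet above), a common pattern is to pin them to the system library version, for example:
# Sketch: install the Python GDAL bindings matching the system libgdal version
RUN pip install GDAL=="$(gdal-config --version).*"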
You can deploy both the Django app and the DB using docker-compose, using the following docker-compose.yaml (from the same repo as the Dockerfile):
# Sample compose file for a django app and postgis
version: '3'
services:
  postgis:
    image: kartoza/postgis:9.6-2.4
    volumes:
      - postgis_data:/var/lib/postgresql
    environment:
      ALLOW_IP_RANGE: 0.0.0.0/0
      POSTGRES_PASS: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_DB: postgis
  web:
    image: intelligems/geodjango:latest
    command: python manage.py runserver 0.0.0.0:8000
    environment:
      DEBUG: "True"
      SECRET_KEY: ${SECRET_KEY}
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgis:5432/postgis
      SENTRY_DSN: ${SENTRY_DSN}
    ports:
      - 8000:8000
    depends_on:
      - postgis
volumes:
  postgis_data: {}
In this repository, you can find more info and interesting bits of configuration for your issue: https://github.com/intelligems/docker-library/tree/master/geodjango (the Dockerfile snippet above is from that repo).
As a note:
If you want to create a PostgreSQL database with PostGIS enabled as a "local DB" to interact with your local Django, you can deploy the previously mentioned kartoza/postgis image:
Create a Volume:
$ docker volume create postgresql_data
Deploy the container:
$ docker run \
    --name=postgresql-with-postgis -d \
    -e POSTGRES_USER=user_name \
    -e POSTGRES_PASS=user_pass \
    -e ALLOW_IP_RANGE=0.0.0.0/0 \
    -p 5433:5432 \
    -v postgresql_data:/var/lib/postgresql \
    --restart=always \
    kartoza/postgis:9.6-2.4
Connect to the default DB (postgres) of the container and create your DB:
$ psql -h localhost -p 5433 -U user_name -d postgres
CREATE DATABASE database_name;
Enable the PostGIS extension on the database:
\connect database_name
CREATE EXTENSION postgis;
This will result in a DB named database_name listening on port 5433 of your localhost, and you can connect to it from your local Django app.
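On the Django side, your local settings would then point at that host port; a minimal sketch (all names are the placeholders used above):
# settings.py (sketch): connect GeoDjango to the dockerized PostGIS on host port 5433
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'database_name',
        'USER': 'user_name',
        'PASSWORD': 'user_pass',
        'HOST': 'localhost',
        'PORT': '5433',
    }
}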

Setting up docker for django, vue.js, rabbitmq

I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory is that it's trying to find. Have I set this all up correctly for my project structure?
For the django part you're missing a copy of the code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You probably also want to use the Python Alpine image instead of Ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these (the nice part of the python-alpine image is that you don't need to install any of the Python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000, we use non-default but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need /code here since WORKDIR is effectively a change directory
RUN chmod +x /run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["/run-django.sh"]
We have a similar run-django.sh script that calls python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
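For reference, a minimal run-django.sh along those lines might look like this sketch (assuming the dev server on port 8000):
#!/bin/bash
# Sketch: apply migrations, then start the development server
set -e
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000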
Long story short, you weren't copying the code from back into /code.
Also, in your docker-compose the django service doesn't have a build context like you have for the vue service.
As for your rabbitmq container failure, you need to stop the service associated with RabbitMQ on your own machine. I get this error when I'm trying to expose a postgresql or redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, so that there are no collisions on that service's default port.
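In your case the colliding service is RabbitMQ, so on a typical Debian/Ubuntu host that would be something like this (assuming the package is installed as rabbitmq-server):
# free up ports 5672/15672 for the container
sudo /etc/init.d/rabbitmq-server stop
# or, on systemd machines:
sudo systemctl stop rabbitmq-server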

How to deploy a docker image created with a version 2 compose file on AWS

I am new to Docker. I somehow created a Docker project with a version 2 docker-compose file. The following is my docker-compose.yml:
version: "2"
services:
  # Configuration for php web server
  webserver:
    image: inshastri/laravel-adminpanel:latest
    restart: always
    ports:
      - '8080:80'
    networks:
      - web
    volumes:
      - ./:/var/www/html
      - ./apache.conf:/etc/apache2/sites-available/000-default.conf
    depends_on:
      - db
    links:
      - db
      # - redis
    environment:
      DB_HOST: db
      DB_DATABASE: phpapp
      DB_USERNAME: root
      DB_PASSWORD: toor
  # Configuration for mysql db server
  db:
    image: "mysql:5"
    volumes:
      - ./mysql:/etc/mysql/conf.d
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_DATABASE: phpapp
    networks:
      - web
    restart: always
  # Configuration for phpmyadmin (optional)
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_PORT: 3306
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: toor
    ports:
      - "8004:80"
    restart: always
    depends_on:
      - db
    networks:
      - web
  redis:
    image: redis:4.0-alpine
# Network connecting the whole app
networks:
  web:
    driver: bridge
and with the Dockerfile below:
FROM ubuntu:16.04
RUN apt-get update \
    && apt-get install -qy language-pack-en-base \
    && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
RUN apt-get -y install apache2
RUN a2enmod headers
RUN a2enmod rewrite
# add PPA for PHP 7
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ondrej/php
# Adding php 7
RUN apt-get update
RUN apt-get install -y php7.1 php7.1-fpm php7.1-cli php7.1-common php7.1-mbstring php7.1-gd php7.1-intl php7.1-xml php7.1-mysql php7.1-mcrypt php7.1-zip
RUN apt-get -y install libapache2-mod-php7.1 php7.1 php7.1-cli php-xdebug php7.1-mbstring sqlite3 php7.1-mysql php-imagick php-memcached php-pear curl imagemagick php7.1-dev php7.1-phpdbg php7.1-gd npm nodejs-legacy php7.1-json php7.1-curl php7.1-sqlite3 php7.1-intl apache2 vim git-core wget libsasl2-dev libssl-dev
RUN apt-get -y install libsslcommon2-dev libcurl4-openssl-dev autoconf g++ make openssl libssl-dev libcurl4-openssl-dev pkg-config libsasl2-dev libpcre3-dev
RUN apt-get install -y imagemagick graphicsmagick
RUN a2enmod headers
RUN a2enmod rewrite
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
ln -sf /dev/stderr /var/log/apache2/error.log
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR
# Update application repository list and install the Redis server.
RUN apt-get update && apt-get install -y redis-server
# Allow Composer to be run as root
ENV COMPOSER_ALLOW_SUPERUSER 1
# Setup the Composer installer
RUN curl -o /tmp/composer-setup.php https://getcomposer.org/installer \
    && curl -o /tmp/composer-setup.sig https://composer.github.io/installer.sig \
    && php -r "if (hash('SHA384', file_get_contents('/tmp/composer-setup.php')) !== trim(file_get_contents('/tmp/composer-setup.sig'))) { unlink('/tmp/composer-setup.php'); echo 'Invalid installer' . PHP_EOL; exit(1); }" \
    && php /tmp/composer-setup.php \
    && chmod a+x composer.phar \
    && mv composer.phar /usr/local/bin/composer
# Install composer dependencies
RUN echo pwd: `pwd` && echo ls: `ls`
# RUN composer install
EXPOSE 80
# Expose default port
EXPOSE 6379
VOLUME [ "/var/www/html" ,"./mysql:/etc/mysql/conf.d",]
WORKDIR /var/www/html
ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "FOREGROUND"]
COPY . /var/www/html
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
Now there are 2 things I cannot understand, after googling a lot:
1) When I give the image to my friend, he pulls it, and when he runs it, the other services like mysql and phpmyadmin are missing.
2) How should I deploy this application to Amazon EC2?
There are lots of options (EC2, Elastic Beanstalk, etc.) but I cannot understand any of them.
Please guide me through a simple way to upload my image to AWS and run it there. Also, how can I run my image on my friend's PC? I thought Docker was a container management system, so it should bring up all my services when my friend or anyone else pulls my image.
For reference, my image is inshastri/laravel-adminpanel.
Please help, thanks in advance.