Chained Docker image: start httpd and php-fpm - Dockerfile

I've got 2 Docker containers: httpd-container and php-container.
httpd-container Dockerfile:
FROM centos:latest
RUN yum -y install httpd
RUN sed -i 's/AllowOverride None/AllowOverride all/g' /etc/httpd/conf/httpd.conf
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
php-container Dockerfile:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Currently I have to run /usr/sbin/php-fpm in php-container after it starts.
I've tried putting both commands in one script and setting that as the entry point, but it does not find it.
I've tried running supervisord and got errors as well.
Any advice is deeply appreciated!

Change the php-container Dockerfile to:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
COPY ./start_services.sh /
CMD ["/start_services.sh"]
Create start_services.sh with:
#!/bin/sh
/usr/sbin/php-fpm
/usr/sbin/httpd -D FOREGROUND
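Also make sure start_services.sh is executable before the build (COPY preserves the permission bit), otherwise the container will complain that it cannot start it:
chmod +x start_services.sh
Alternatively, add RUN chmod +x /start_services.sh after the COPY line. php-fpm normally daemonizes by default, so the script can then hand the foreground over to httpd, which keeps the container running.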

Related

docker not installed through yum in user data

This is what my user data looks like, but Docker is not installed when I connect to my EC2 machine:
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
What can I do?
When using a user-data script, you can debug what is happening by connecting to the instance over SSH and checking the output in cloud-init-output.log:
sudo cat /var/log/cloud-init-output.log
When doing this you'll find a strange error containing:
Jan 29 11:58:25 cloud-init[2970]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'sudo yum -y install dock...'
This means that the default interpreter seems to be Python and it's necessary to start the user-data with #!/bin/bash. (See this other Stack Overflow answer.)
When changing the user-data to:
#!/bin/bash
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
it will be executed as expected and you will find nginx running on your EC2 instance.
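You can then confirm it worked by connecting to the instance over SSH and checking that the container is up, for example:
sudo docker ps
curl http://localhost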

Dockerfile throwing error when I try to run it: "AH00111: Config variable ${APACHE_RUN_DIR} is not defined"

I am trying my hand at Docker.
I am trying to install apache2 into an Ubuntu image.
FROM ubuntu
RUN echo "welcome to yellow pages"
RUN apt-get update
RUN apt-get install -y tzdata
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN echo 'Hello, docker' > /var/www/index.html
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]
I found a reference online (reference).
I have added the line "RUN apt-get install -y tzdata" because the build was prompting for tzdata configuration and stopping image creation.
Now when I run my image I am getting the below error:
[Thu Jan 07 09:43:57.213998 2021] [core:warn] [pid 1] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
I am new to Docker and it's a bit of a task for me to understand it.
Could anyone help me out with this?
This seems to be an Apache issue, not a Docker issue. Your conf seems to have errors. You have a parameter there called DefaultRuntimeDir which is pointing at a directory that does not exist in the container. Review your config file and ensure the directories you specified in there exist in the container.
You can poke around inside the container by simply running:
docker build -t my_image_name .
docker run -it --rm --entrypoint /bin/bash my_image_name
# now you are in your docker container, you can check if your directories exist
Without knowing your config, I would simply add an ENV and one more RUN (I made this path up; you can change it to whatever you like):
ENV APACHE_RUN_DIR /var/lib/apache/runtime
RUN mkdir -p ${APACHE_RUN_DIR}
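Alternatively, since the Debian/Ubuntu apache2 package defines APACHE_RUN_DIR and the other APACHE_* variables in /etc/apache2/envvars, you can start Apache through the apache2ctl wrapper, which sources that file for you (a sketch, not tested against your exact config):
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]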
As a side note, I would also combine all the RUN instructions into a single one, like this:
RUN echo "welcome to yellow pages" \
&& apt-get update \
&& apt-get install -y tzdata apache2 \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/www \
&& echo 'Hello, docker' > /var/www/index.html

I am not able to execute the next commands after the localstack start --host command

FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y apt-utils && \
apt-get install -y python3-pip python3-dev \
pypy-setuptools
COPY . .
WORKDIR .
RUN pip3 install boto3
RUN pip3 install awscli
RUN apt-get install libsasl2-dev
ENV HOST_TMP_FOLDER=/tmp/localstack
RUN apt-get install -y git
RUN apt-get install -y npm
RUN mkdir -p .localstacktmp
ENV TMPDIR=.localstacktmp
RUN pip3 install localstack[full]
RUN SERVICES=s3,lambda,es DEBUG=1 localstack start --host
WORKDIR ./boto3Tools
ENTRYPOINT [ "python3" ]
CMD [ "script.py" ]
You can't start services in a Dockerfile.
In your case what's happening is that your Dockerfile is running RUN localstack start. That goes ahead and starts up the selected set of services and stays running, waiting for connections. Meanwhile, the image build is waiting for the command you launched to finish before it moves on.
The usual answer to this is to start servers and clients in separate containers (or start a server in a container and run clients directly from your host). In this case, there is already a localstack/localstack Docker image and a prebuilt Docker Compose setup, so you can just run it:
curl -LO https://github.com/localstack/localstack/raw/master/docker-compose.yml
docker-compose up
The localstack GitHub repo has more information on using it.
If you wanted to use a Boto-based application with this, the easiest way is to add it to the same docker-compose.yml file (or, conversely, add Localstack to the Compose setup you already have). At this point you can use normal Docker inter-container communication to reach the mock AWS, but you have to configure this in your code:
import boto3

s3 = boto3.client('s3',
                  endpoint_url='http://localstack:4566')
You have to make similar changes anyway to use Localstack, so the only difference is the hostname you're setting.
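If you go the Compose route, a minimal sketch of wiring your own Boto-based container next to Localstack could look like this (service and image names are assumptions; the edge port 4566 matches the endpoint above):
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3,lambda,es
      - DEBUG=1
    ports:
      - "4566:4566"
  app:
    build: .
    depends_on:
      - localstack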

How to run Django Daphne service on Google Kubernetes Engine and Google Container Registry

Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install build-essential -y
WORKDIR /app
COPY . /app/
# Python
RUN apt-get install python3-pip -y
RUN python3 -m pip install virtualenv
RUN python3 -m virtualenv /env36
ENV VIRTUAL_ENV /env36
ENV PATH /env36/bin:$PATH
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Start Daphne [8443]
ENV DJANGO_SETTINGS_MODULE=settings
CMD daphne -e ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem asgi:application
# Open port 8443
EXPOSE 8443
Enable Google IP Alias so that we can connect to Google Memorystore/Redis.
Build & Push
$ docker build -t [GCR_NAME] -f path/to/Dockerfile .
$ docker tag [GCR_NAME] gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
$ docker push gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
Deploy to GKE
$ envsubst < k8s.yml > patched_k8s.yml
$ kubectl apply -f patched_k8s.yml
$ kubectl rollout status deployment/[GKE_WORKLOAD_NAME]
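The k8s.yml itself is not shown here; a minimal Deployment sketch that matches the envsubst placeholders above could look like this (labels and replica count are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${GKE_WORKLOAD_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${GKE_WORKLOAD_NAME}
  template:
    metadata:
      labels:
        app: ${GKE_WORKLOAD_NAME}
    spec:
      containers:
      - name: ${GKE_WORKLOAD_NAME}
        image: gcr.io/${GOOGLE_PROJECT_ID}/${GCR_NAME}:${TAG}
        ports:
        - containerPort: 8443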
This is how I configured Daphne on GKE/GCR. If you have other solutions, please share your advice.
systemd is not included in the ubuntu:18.04 Docker image.
Add an ENTRYPOINT to your Dockerfile with the commands from the ExecStart property of project-daphne.service.
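For example, if the ExecStart line of project-daphne.service runs the same daphne command as the CMD in the Dockerfile above, the equivalent would be roughly (the exact command is an assumption):
ENTRYPOINT ["daphne", "-e", "ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem", "asgi:application"]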

Application served by uWSGI with Supervisord from Docker

I am trying to serve a Django application with uWSGI from Docker. I am using supervisord to start the process for me at the end of the Dockerfile. When I run the image, it says that the uWSGI process starts and succeeds, but I'm unable to view the application at the URL I thought would display it. Perhaps I do not have things set up/configured correctly?
I am not having supervisord start nginx right now because I am currently serving static files via Amazon S3, and want to first focus on getting the wsgi up and running.
I can successfully run the application locally with uwsgi --ini uwsgi.ini:local, but I am having trouble moving it into Docker.
Here is my Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
ADD . /home/docker/code
# Setup config files
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/sites-enabled/default
RUN ln -s /home/docker/code/nginx-app.conf /etc/nginx/sites-enabled/
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/app/requirements.txt
EXPOSE 8080
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf", "-n"]
And here is my uwsgi.ini
[uwsgi]
# this config will be loaded if nothing specific is specified
# load base config from below
ini = :base
# %d is the dir this configuration file is in
socket = %dmy_app.sock
master = true
processes = 4
[dev]
ini = :base
# socket (uwsgi) is not the same as http, nor http-socket
socket = :8001
[local]
ini = :base
http = :8000
# set the virtual env to use
home=/Users/my_user/.virtualenvs/my_env
[base]
# chdir to the folder of this config file, plus app/website
chdir = %dmy_app/
# load the module from wsgi.py, it is a python path from
# the directory above.
module=my_app.wsgi:application
# allow anyone to connect to the socket. This is very permissive
chmod-socket=666
http = :8080
And here is my supervisor-app.conf file
[program:app-uwsgi]
command = /usr/local/bin/uwsgi --ini /home/docker/code/uwsgi.ini
From a Mac using boot2docker, I am trying to access the application at $(boot2docker ip):8080.
Ultimately I want to upload this container to AWS Elastic Beanstalk, with not only a uWSGI process running, but a celery worker running as well.
When I run my container, I can see from the logs that both supervisor and uwsgi successfully start. I was able to get things running on my local machine both using uwsgi by itself and uwsgi through supervisor, but for some reason when I containerize the thing I can't find it anywhere.
Here is what is logged when I boot up the docker container
2014-12-25 15:08:03,950 CRIT Supervisor running as root (no user in config file)
2014-12-25 15:08:03,953 INFO supervisord started with pid 1
2014-12-25 15:08:04,957 INFO spawned: 'uwsgi' with pid 9
2014-12-25 15:08:05,970 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
How are you starting the docker container?
I don't see any CMD or ENTRYPOINT script, so I'm unclear as to how anything is getting started.
In general, I would advise avoiding things like supervisord unless absolutely necessary; just start uWSGI in the foreground from a CMD line. Try adding the following as the last line in the Dockerfile:
CMD ["/usr/local/bin/uwsgi", "--ini", "/home/docker/code/uwsgi.ini"]
and then just run with docker run -p 8080:8080 image_name. You should get some reply from uWSGI. If that works, I recommend you move the other services (postgres, node) to separate containers. There are official images for Node, Python and Postgres which should save you some time.
Remember, Docker containers only run as long as their main process (which must be in the foreground).
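For the Celery worker mentioned in the question, the same pattern applies: run a second container from the same image with the worker as its foreground command (this assumes celery is in your requirements.txt and that my_app is your project module, as in the uwsgi.ini above):
docker run image_name celery -A my_app worker --loglevel=info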