This is what my user data looks like, but Docker is not installed when I connect to my EC2 machine:
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
What can I do?
When using a user-data script, you can debug what is happening by connecting to the instance over SSH and checking the output in cloud-init-output.log:
sudo cat /var/log/cloud-init-output.log
When doing this you'll find a strange error containing:
Jan 29 11:58:25 cloud-init[2970]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'sudo yum -y install dock...'
This means that the default interpreter seems to be Python, and it's necessary to start the user-data with #!/bin/bash. (See this other StackOverflow answer.)
When changing the user-data to:
#!/bin/bash
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
it will be executed as expected and you will find nginx running on your EC2 instance.
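To verify from inside the instance, you can check that nginx answers on port 80:
curl -I http://localhost
which should return an HTTP 200 response with an nginx Server header.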
Related
I have installed docker and httpd on an instance, and when I run
sudo docker run -d -p 8600:8080 pengbai/docker-supermario
over SSH, the command works. But when I restart the instance with this user data (docker and httpd already installed):
#!/bin/bash
sudo yum update -y
sudo systemctl enable httpd
sudo systemctl start httpd
sudo systemctl enable docker
sudo systemctl start docker
sudo docker run -d -p 8600:8080 pengbai/docker-supermario
it does not work.
I have tried it with sudo, without sudo, and even added a sleep command between some of the commands, but I still couldn't get it to work.
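One likely cause: by default, EC2 runs a user data script only on the first boot, not on every restart. A sketch of a workaround, roughly following the multipart user data format from the AWS docs, which tells cloud-init to run the scripts-user module on every boot (the shell part reuses the commands above):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
sudo systemctl start httpd
sudo systemctl start docker
sudo docker run -d -p 8600:8080 pengbai/docker-supermario
--//--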
I've got 2 Docker containers: httpd-container and php-container.
httpd-container Dockerfile:
FROM centos:latest
RUN yum -y install httpd
RUN sed -i 's/AllowOverride None/AllowOverride all/g' /etc/httpd/conf/httpd.conf
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
php-container Dockerfile:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Currently I have to run /usr/sbin/php-fpm in php-container after it starts.
I've tried putting both commands in one script and setting that as the entry point, but it does not find it.
I've tried running supervisord and got errors as well.
Any advice is deeply appreciated!
Change the php-container Dockerfile to:
FROM httpd-container:latest
RUN yum -y install php php-cli php-fpm php-mysqlnd php-zip php-devel php-gd php-mbstring php-curl php-xml php-pear php-bcmath php-json
RUN mkdir /run/php-fpm
COPY ./start_services.sh /
CMD ["/start_services.sh"]
and create start_services.sh with:
#!/bin/sh
/usr/sbin/php-fpm
/usr/sbin/httpd -D FOREGROUND
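If the container then fails because it can't find or run /start_services.sh, make sure the script is executable, e.g. by adding a step after the COPY line:
RUN chmod +x /start_services.sh
Since php-fpm daemonizes by default, the script starts it in the background and then keeps httpd in the foreground, which is what keeps the container running.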
Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install build-essential -y
WORKDIR /app
COPY . /app/
# Python
RUN apt-get install python3-pip -y
RUN python3 -m pip install virtualenv
RUN python3 -m virtualenv /env36
ENV VIRTUAL_ENV /env36
ENV PATH /env36/bin:$PATH
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Start Daphne [8443]
ENV DJANGO_SETTINGS_MODULE=settings
CMD daphne -e ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem asgi:application
# Open port 8443
EXPOSE 8443
Enable Google IP Alias so that the cluster can connect to Google Memorystore (Redis).
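If the cluster doesn't have it yet, IP aliasing can be enabled at cluster creation time; a sketch (the cluster name is a placeholder):
$ gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias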
Build & Push
$ docker build -t [GCR_NAME] -f path/to/Dockerfile .
$ docker tag [GCR_NAME] gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
$ docker push gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
Deploy to GKE
$ envsubst < k8s.yml > patched_k8s.yml
$ kubectl apply -f patched_k8s.yml
$ kubectl rollout status deployment/[GKE_WORKLOAD_NAME]
This is how I configured Daphne on GKE/GCR. If you have other solutions, please give me your advice.
systemd is not included in the ubuntu:18.04 Docker image.
Add an ENTRYPOINT to your Dockerfile with the commands from the ExecStart property of project-daphne.service.
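For instance, assuming project-daphne.service's ExecStart runs the same daphne command as the CMD in the Dockerfile above, the equivalent exec-form instruction would be roughly:
ENTRYPOINT ["daphne", "-e", "ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem", "asgi:application"]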
I built, tagged & published my first (ever) Docker image to Quay:
docker build -t myapp .
docker tag <imageId> quay.io/myorg/myapp:1.0.0-SNAPSHOT
docker login quay.io
docker push quay.io/myorg/myapp:1.0.0-SNAPSHOT
I then logged into Quay.io to confirm the tagged image was successfully pushed, and it was. So then I SSHed into a brand-spanking-new AWS EC2 instance and followed their instructions to install Docker:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker info
Interestingly enough, the sudo usermod -a -G docker ec2-user command doesn't seem to work as advertised, as I still need to prepend sudo to all my commands...
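(A side note on that: group membership is only picked up on a new login session, so after the usermod you normally have to log out and back in, or start a shell in the new group with:
newgrp docker
before docker works without sudo.)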
So I try to pull my tagged image:
sudo docker pull quay.io/myorg/myapp:1.0.0-SNAPSHOT
Please login prior to pull:
Username: myorguser
Password: <password entered>
1.0.0-SNAPSHOT: Pulling from myorg/myapp
<hashNum1>: Pull complete
<hashNum2>: Pull complete
<hashNum3>: Pull complete
<hashNum4>: Pull complete
<hashNum5>: Pull complete
<hashNum6>: Pull complete
Digest: sha256:<longHashNum>
Status: Downloaded newer image for quay.io/myorg/myapp:1.0.0-SNAPSHOT
So far, so good (I guess!). Let's see what images my local Docker engine knows about:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Hmmm... that doesn't seem right. Oh well, let's try running a container for my (successfully?) pulled image:
sudo docker run -it -p 8080:80 -d --name myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
Unable to find image 'myapp:1.0.0-SNAPSHOT' locally
docker: Error response from daemon: repository myapp not found: does not exist or no pull access.
See 'docker run --help'.
Any idea where I'm going awry?
To list images, you need to use: docker images
When you pull, the image keeps its full repository name and tag, so if you wish to run it, you will need to reference it the same way (note that a container name cannot contain a colon, so --name is shortened here):
sudo docker run -it -p 8080:80 -d --name myapp quay.io/myorg/myapp:1.0.0-SNAPSHOT
If you wish to use a short name, you need to retag it after the docker pull:
sudo docker tag quay.io/myorg/myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
After that, your docker run command will work with the short name. Note that docker ps only lists running containers; use docker ps -a to also see stopped ones.
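To double-check the retag, list the local images:
sudo docker images
Both quay.io/myorg/myapp and myapp should show up with the same IMAGE ID, since a tag is just an additional name for the same image.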
I have a Django app that I will need to deploy on Amazon's EC2 Container Service. In the meantime, to test the deployment, I am trying to deploy it in a Docker container locally first, but even when running a simple demo Django application, I am unable to see the page at localhost:8000.
Here is my setup.
Create a docker machine:
$ docker-machine create --driver virtualbox testmachine
After this I set up my environment:
$ eval "$(docker-machine env testmachine)"
I set up a Dockerfile for my test container:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install python-pip -y
RUN pip install django
RUN mkdir django_test
RUN cd django_test && \
django-admin.py startproject django_test .
Then I call
$ docker build -t dockertest .
... builds successfully
$ docker run -d -i -t -p 8000:8000 dockertest
cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
$ docker attach cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
root@cbef144ac068:/# cd django_test
root@cbef144ac068:/django_test# python manage.py runserver 0.0.0.0:8000
This successfully starts the server at 0.0.0.0:8000 inside the container.
However, when I try to go to localhost:8000 in my browser, I get a "This webpage is not available." What am I missing?
Turns out I was looking at the wrong IP.
To figure out the correct IP, I ran:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
testmachine * virtualbox Running tcp://192.168.99.100:2376
I then loaded 192.168.99.100:8000 in my browser, and it worked like a charm.
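As a shortcut, docker-machine can print that IP directly:
$ docker-machine ip testmachine
192.168.99.100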