I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1 This application is already running (Or
delete /opt/docker/RUNNING_PID file).
And deployment fails. I cannot get out of this other than by terminating the environment and setting it up again. How can I prevent the environment from getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new PID namespace for each container, which means the first process gets PID 1. PID 1 is a special ID that should be used only by processes designed for it. Could you try using Supervisord instead of having the Play framework run as the primary process and see if that resolves your issue? Hopefully supervisord handles Amazon's termination commands better than the Play framework does.
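For illustration, a minimal Supervisord setup for a Play image could look something like this (a sketch only: the /opt/docker path comes from your error message, while the package name and config paths are assumptions that depend on your base image):

# supervisord.conf (sketch)
[supervisord]
nodaemon=true

[program:play]
command=/opt/docker/bin/<your app name> -Dhttp.port=9000
autorestart=true

# Dockerfile additions (sketch)
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]

With nodaemon=true, supervisord stays in the foreground as PID 1 and restarts the Play process if it dies.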
@dkm I was having the same issue with my dockerized Play app. I package my apps as standalone for production using the sbt clean dist command. This produces a .zip file that you can deploy to some folder in your docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
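If you still need to get the dist zip into that container, one option (a sketch; the paths and names are placeholders, and target/universal is where sbt dist normally drops the zip) is docker cp:

$ docker cp target/universal/<your app name>-1.0.zip <containerID>:/var/www/
# then, from the bash shell inside the container (assumes unzip is installed):
$ unzip /var/www/<your app name>-1.0.zip -d /var/www/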
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and the contents should be something like this:
Create the script file in the docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/<your app name>/conf/<your app conf file>
Save the startapp script, then from a new terminal commit your changes to your container's image so they will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale you can docker stop or exit out of the running container's bash. Next, start the Play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app
Hope this helps...
I am working on a project where I need to read a text file from an S3 bucket using Boto3 and then dockerize my application. I implemented my code using Boto3 and it's running perfectly fine (note: it takes two arguments through argparse, with switches -p and -n). But when I try the same using docker run:
PS my Working Directory > docker run --rm Image_name -p Argument1 -n Argument2
<class 'botocore.exceptions.ProfileNotFound'> Code.py 92
I searched a lot on this; my understanding is that the container is unable to locate my credentials and config files, which are stored in the .aws folder in my home directory.
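A minimal sketch of the kind of Boto3 + argparse code in question (the bucket, key, and profile names here are placeholders, not my actual code):

import argparse
import boto3

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--profile", required=True)  # placeholder meaning for -p
parser.add_argument("-n", "--name", required=True)     # placeholder meaning for -n
args = parser.parse_args()

# botocore resolves profiles from ~/.aws/credentials and ~/.aws/config;
# if those files are not visible inside the container, it raises ProfileNotFound
session = boto3.Session(profile_name=args.profile)
s3 = session.client("s3")
body = s3.get_object(Bucket="my-bucket", Key=args.name)["Body"].read().decode("utf-8")
print(body)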
What I Tried:
1. Path mounting as below:
PS my Working Directory > docker run --rm -it -v %userprofile%\.aws:/root/.aws amazon/aws-cli
docker: Error response from daemon: %!u(string=is not a valid Windows path)serprofile%!\(MISSING).aws.
See 'docker run --help'.
I don't understand what's wrong with the syntax. I also tried manually substituting my user profile directory, C:/Users/Deepak, for %userprofile%.
Then, strangely, a WSL 2 (backend) popup appeared saying that containers on Windows may perform poorly.
I am not sure what that means. Does it have any effect on Docker containers built in a Windows environment?
2. I also moved my credentials and config files into my working directory and tried the command below:
PS my Working Directory > docker run --rm -it -v ${PWD}:/root/.aws amazon/aws-cli Image_name -p Argument1 -n Argument2
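For reference, %userprofile% is cmd.exe syntax; the PowerShell equivalent is $env:USERPROFILE, so the mount from step 1 spelled in PowerShell form would look roughly like this sketch (same placeholder image and arguments as above):

PS my Working Directory > docker run --rm -v "$env:USERPROFILE\.aws:/root/.aws" Image_name -p Argument1 -n Argument2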
I was building the network at the Fabric level, following this tutorial: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
I have made changes in the following files and added 2 more peers in organisation 1 only (see the sketch after this list):
configtx.yaml
crypto-config.yaml
docker-compose-cli.yaml
docker-compose-couch.yaml
docker-compose-e2e.yaml
docker-compose-e2e-template.yaml
docker-compose-base.yaml
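For reference, the kind of change assumed here in crypto-config.yaml (a sketch based on the byfn sample, where Template.Count controls how many peers are generated for the org; the extra peers also need matching service definitions in the compose files listed above):

# crypto-config.yaml (excerpt, sketch)
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 4   # was 2; generates peer0 through peer3 for Org1
    Users:
      Count: 1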
When I'm firing the ./byfn.sh -m up command, it gets stuck at one step (see screenshot) and doesn't even show an error.
I'm trying to add 2 more peers to the first organisation. Is this the correct way? Am I doing something wrong?
This also happened to me.
1. sudo docker stop $(docker ps -a -q) && sudo docker rm $(docker ps -a -q) (be careful to remove only the containers related to the Fabric nodes).
2. Reboot your computer and restart your Docker service with the sudo service docker start or systemctl start docker command.
I'm using the AWS "Windows Server 2016 Base with Containers" image (ami-5e6bce3e).
Using docker info I can confirm I have the latest (Server Version: 1.12.2-cs-ws-beta).
From Powershell (running as Admin) I can successfully run the "microsoft/windowsservercore" container in interactive mode, connecting to CMD in the container:
docker run -it microsoft/windowsservercore cmd
When I attempt to run the "microsoft/iis" container in interactive mode, although I am able to connect to IIS (via browser), I am never connected to the interactive CMD session in the container.
docker run -it -p 80:80 microsoft/iis cmd
Instead, I simply get:
Service 'w3svc' started
Using another Powershell window, I can:
docker container ls
...and see my container running.
Attempting to attach locks up and never returns.
I have since switched regions and found that there are different AMIs in each region:
us-east-1: ami-d08edfc7
us-west-2: ami-5e6bce3e
...both of these have the same result.
Relevant links used:
AWS announcement and simple Docker example
MSDN simple Docker example
MSDN IIS Docker example
Update
Using the following link I was able to create my own Dockerfile based off the server base image, installing IIS, and this seems to work fine.
custom Dockerfile
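A minimal sketch of that kind of Dockerfile (the feature name and base tag follow the public examples, not the exact file behind the link):

FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
EXPOSE 80
# no ENTRYPOINT here, so an interactive shell can be requested as usual
CMD ["powershell"]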
This is not an issue with the AWS AMIs; it was due to the way the Microsoft IIS Dockerfile was written (and to my being new to Docker).
Link to Microsoft's IIS DockerFile
The last line (line 7):
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
Difference between CMD and ENTRYPOINT
So since this Dockerfile uses ENTRYPOINT, to launch an interactive powershell session, use the following command:
docker run --entrypoint powershell -it -p 80:80 microsoft/iis
Note that the --entrypoint flag needs to come after run and before the image name, as this won't work:
docker run -it -p 80:80 microsoft/iis --entrypoint powershell
Here is another reference link regarding ENTRYPOINT and CMD differences
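A tiny hypothetical illustration of that difference:

# with ENTRYPOINT, anything after the image name is passed as arguments to the entrypoint,
# so "docker run <image> powershell" runs "C:\ServiceMonitor.exe w3svc powershell":
ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]
# with CMD, anything after the image name replaces the command entirely,
# so "docker run <image> powershell" would just run powershell:
CMD ["C:\\ServiceMonitor.exe", "w3svc"]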
Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address, which looked odd to me (172.17.0.2)
Attempted to access port 4200 on that IP from the browser on the host Windows machine and also from the Docker CLI via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands will background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure whether one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
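In context that would look something like this (a sketch; the service name, image, and ports are assumptions based on the tutorial's compose file, not verified here):

ember:
  image: <your ember-cli image>
  command: ember server
  ports:
    - "4200:4200"
  stdin_open: True
  tty: True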
OK, finally resolved it. The issue with module resolution may have been long-filename resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch ember-server
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully and then I was able to access the Ember page served up at the IP:Port specified earlier in the comments
http://192.168.99.100:4200/
I'm packaging a project into a Docker Jetty image and I'm trying to access the logs, but there are no logs.
Dockerfile
FROM jetty:9.2.10
MAINTAINER Me "me@me.com"
ADD ./target/abc-1.0.0 /var/lib/jetty/webapps/ROOT
EXPOSE 8080
Bash script to start docker image:
docker pull me/abc
docker stop abc
docker rm abc
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
The image is running, but I'm not seeing any jetty logs in /var/log.
I've tried a docker run -it jetty bash, but not seeing any jetty logs in /var/log either.
Am I missing a parameter to make jetty output logs or does it output it somewhere other than /var/log/jetty?
Why you aren't seeing logs
2 things to note:
Running docker run -it jetty bash will start a new container instead of connecting you to your existing daemonized container.
And it would invoke bash instead of starting jetty in that container, so it won't help you to get logs from either container.
So this interactive container won't help you in any case.
But also...
Jetty logs are disabled anyway
Also, you won't see the logs in the standard location (say, if you tried to use docker exec to read the logs, or to get them into a volume), quite simply because the Jetty Dockerfile disables logging entirely.
If you look at the jetty:9.2.10 Dockerfile, you will see this line:
&& sed -i '/jetty-logging/d' etc/jetty.conf \
Which nicely removes the entire line referencing the jetty-logging.xml default logging configuration.
What to do then?
Reading logs with docker logs
Docker gives you access to the container's standard output.
After you did this:
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
You can simply do this:
docker logs abc
And be greeted with something similar to this:
Running Jetty:
2015-05-15 13:33:00.729:INFO::main: Logging initialized #2295ms
2015-05-15 13:33:02.035:INFO:oejs.SetUIDListener:main: Setting umask=02
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Opened ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Setting GID=999
2015-05-15 13:33:02.106:INFO:oejs.SetUIDListener:main: Setting UID=999
2015-05-15 13:33:02.133:INFO:oejs.Server:main: jetty-9.2.10.v20150310
2015-05-15 13:33:02.170:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/var/lib/jetty/webapps/] at interval 1
2015-05-15 13:33:02.218:INFO:oejs.ServerConnector:main: Started ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.219:INFO:oejs.Server:main: Started #3785ms
Use docker help logs for more details.
Customize
Obviously your other option is to revert what the default Dockerfile for jetty is doing, or to create your own dockerized Jetty.
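A sketch of that first option, putting the jetty-logging line back on top of the same base image (the etc/jetty.conf path inside the image is an assumption here, and the log files will likely land under the Jetty base's logs directory rather than /var/log/jetty, so the volume mount would need to point there instead):

FROM jetty:9.2.10
# re-add the jetty-logging.xml entry that the base image's sed removed
RUN echo "etc/jetty-logging.xml" >> /usr/local/jetty/etc/jetty.conf
ADD ./target/abc-1.0.0 /var/lib/jetty/webapps/ROOT
EXPOSE 8080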