So I have a Docker image, vidyo/mediabridge.
I want to automatically run some scripts that start a task inside the container on startup.
So when I run my image as:
docker run vidyo/mediabridge
OR
dockerfile:
from vidyo/mediabridge
docker execution:
docker build . -t basicimage
docker run basicimage
I get output as
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 8
but when I edit the image and run it as
dockerfile:
from vidyo/mediabridge
CMD echo "hi"
it gives output as
hi
and exits. So basically my base image vidyo/mediabridge is not running. Further, when I try to execute other commands, such as
dockerfile
from vidyo/mediabridge
ENTRYPOINT curl $s3path -o /opt/vidyo/config && sleep 10 && ./opt/vidyo/connect
it shows
* syslog-ng is not running
* Starting system logging syslog-ng
* Retrying
and then it exits. Can anyone help? I think my base image vidyo/mediabridge is not running properly.
Is there a way to run the base image and then execute my commands?
When you add a CMD or ENTRYPOINT, it overrides the CMD or ENTRYPOINT of the base image you are using.
To extend this image, do not add another CMD or ENTRYPOINT. If you want to add RUN instructions to execute things at build time, they will work.
Note that you do not have to add either of the above instructions to your Dockerfile; the parent image's will persist as long as you don't override them.
If you want to modify the CMD or add to it, I would recommend running docker image inspect vidyo/mediabridge, getting the CMD or ENTRYPOINT that your base image runs, and adding that to the end of a shell script that you run as your CMD.
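For example, a wrapper along these lines (a sketch only: /sbin/my_init is just a guess at the base image's original command based on the runit output above and must be confirmed with docker image inspect vidyo/mediabridge; run.sh is a name I made up, and the curl step is taken from the question):
run.sh:
#!/bin/bash
# setup step from the question ($s3path must be provided, e.g. via -e at run time)
curl "$s3path" -o /opt/vidyo/config
# hand off to the base image's original init so the container keeps running
exec /sbin/my_init
dockerfile:
FROM vidyo/mediabridge
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/run.sh"]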
Related
I updated my Dockerfile to upgrade Ubuntu, but it started failing and I'm unsure why...
dockerfile:
# using the digest for version 20.04 as there are multiple digests that use this tag
FROM ubuntu@sha256:82becede498899ec668628e7cb0ad87b6e1c371cb8a1e597d83a47fac21d6af3
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
#install tools
#removed for clarity
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
My events from the pod:
Successfully assigned se-agents/agent-se-linux-5c9f647768-25p7v to aks-linag-56790600-vmss000002
Pulling image "compregistrynp.azurecr.io/agent-se-linux:25319"
Successfully pulled image "comregistrynp.azurecr.io/agent-se-linux:25319"
Created container agent-se-linux
Started container agent-se-linux
Back-off restarting failed container
When I check the error in the pod, I see the following message:
standard_init_linux.go:228: exec user process caused: no such file or directory
Not even sure where to look anymore. The only differences in the Dockerfile were the Ubuntu tag and one additional tool to install. I tried to deploy what was in prod to dev and it's failing with the same error. I'm convinced there's something in my AKS...
So the issue was that someone on my team had modified the shell script and didn't set the end-of-line characters to LF.
I will be running a step to convert the file to Linux line endings to ensure this doesn't happen again in my pipeline!
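If you also want to guard against this inside the image itself, one option is to normalize the line endings during the build (a sketch using GNU sed, added after the COPY ./start.sh . step; dos2unix would work just as well if it's installed):
dockerfile:
# strip Windows CRLF endings so the interpreter on the shebang line is found
RUN sed -i 's/\r$//' start.sh && chmod +x start.sh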
I am working on a project where I need to read a text file from an S3 bucket using Boto3, and then I need to dockerize my application. I implemented my code using Boto3 and it's running perfectly fine (note: it takes two arguments through argparse with the switches -p and -n). But when I try the same using docker run:
PS **my Working Directory** > docker run --rm Image_name -p Argument1 -n Argument2
<class 'botocore.exceptions.ProfileNotFound'> Code.py 92
I searched a lot on this; my understanding is that the container is unable to locate my credentials file and config file, which are stored in the .aws folder in my home directory.
What I Tried:
1. Path mounting as below:
PS **my Working Directory** > docker run --rm -it -v %userprofile%\.aws:/root/.aws amazon/aws-cli
docker: Error response from daemon: %!u(string=is not a valid Windows path)serprofile%!\(MISSING).aws.
See 'docker run --help'.
I completely don't understand what's wrong with the syntax. I tried manually substituting my user profile directory for %userprofile% as C:/Users/Deepak (see the PowerShell note after attempt 2 below).
Then, strangely, a WSL 2 (backend) popup appears saying that sharing Windows paths with containers may perform poorly.
I am not sure what it means. Does it have any effect on Docker containers built in a Windows environment?
2. I also moved my credentials and config files into my working directory and tried the command below:
PS my Working Directory > docker run --rm -it -v ${PWD}:/root/.aws amazon/aws-cli Image_name -p Argument1 -n Argument2
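For what it's worth, the %userprofile% in attempt 1 is cmd.exe syntax; PowerShell does not expand it, which is why Docker received the literal string in the error above. In PowerShell the equivalent is $env:USERPROFILE, so the mounts would look something like this (a sketch; Image_name and the arguments are the placeholders from the question):
PS **my Working Directory** > docker run --rm -it -v "$env:USERPROFILE\.aws:/root/.aws" amazon/aws-cli
PS **my Working Directory** > docker run --rm -v "$env:USERPROFILE\.aws:/root/.aws" Image_name -p Argument1 -n Argument2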
I'm trying to containerize my Django application, and everything is working fine, except in the situation where I try to run the container from a shell script. In this case, the Django server is running, but the port is not open inside the container.
Here's the command I'm running to start the container:
docker run -d -p 8000:8000 --net=mynet --name myapp -v $PWD:/myapp myimage ./ss
ss is a shell script that launches my Django app. It contains:
python3 manage.py runserver 0:8000
When I run the Docker RUN command from the command line, everything works fine; the port is mapped correctly, I can browse to my app from a browser in my host and it loads correctly, etc.
However, if I copy the above run command in a shell script (start_container.sh for example), the container launches just fine, the ports are mapped correctly, but when I try to open the app in my browser, I get a connection reset error.
If I open a shell into the container by running
docker exec -i -t myapp /bin/bash
I can get into the container. If I check running processes with ps -eaf, I do see the Python process running my Django app. However, if I check open ports from within the container with netstat -a or netstat -l, port 8000 is NOT listed.
If I then stop the container, then restart it from the command line, and inspect the container, netstat -a will show port 8000 as available, and I can connect to my app from a host browser.
I'm a bit at a loss to explain how the way the container is launched from the host would have this impact on the container internally, and I'm not sure what my next debugging steps should be.
Note 1: When inside the container, if I run the start script ./ss, Django starts and the port opens as expected.
Note 2: I also tried using the CMD ["ss"] instruction in my container's Dockerfile, and I get the same result; if I launch the container from the commandline, it works fine. If I launch it from a shell script, the port inside the container doesn't open.
Working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml
Ran the two docker-compose commands below from the terminal (added -d because without that you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the ip address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 on that IP from the host Windows machine's browser and also from the Docker CLI via curl, but without success.
Ran docker ps -a and found that both containers that had been instantiated had exited.
Now if I try to start the container again it just exits immediately
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the Default VM running in Virtualbox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs for a detached container you can run docker logs <container name>. If there are any errors you'll see them there.
A likely cause of the "container exit" is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands background themselves by default. To keep the process in the foreground you can sometimes use a flag like --foreground or --no-daemon, but I'm not sure if one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking if stdin/stdout are connected to a tty. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: True
tty: True
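In context, the service entry in docker-compose.yml would look something like this (a sketch; the service name, image, and port are assumptions based on the tutorial linked above):
ember:
  image: my-ember-image
  ports:
    - "4200:4200"
  stdin_open: True
  tty: True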
OK, finally resolved it. The issue with the module resolution may have been long file name resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then from the terminal window I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully, and then I was able to access the Ember page served up at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/
I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1
This application is already running (Or delete /opt/docker/RUNNING_PID file).
And deployment fails. I cannot get out of this other than by terminating the environment and setting it up again. How can I prevent the environment from getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new PID namespace for each container, which means the first process gets PID 1. PID 1 is a special ID which should be used only by processes designed for it. Could you try using Supervisord instead of having Play run as the primary process, and see if that resolves your issue? Hopefully supervisord handles Amazon's termination commands better than the Play framework does.
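As a rough illustration, a Supervisord setup could look something like this (a sketch; supervisord must be installed in the image, the /opt/docker/bin path is an assumption based on the RUNNING_PID path in the error above, and the program name is a placeholder):
supervisord.conf:
[supervisord]
nodaemon=true

[program:play]
command=/opt/docker/bin/<your app name>
autorestart=true
with the image's CMD changed to run supervisord, e.g. CMD ["supervisord", "-c", "/etc/supervisord.conf"].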
@dkm, I was having the same issue with my dockerized Play app. I package my apps as standalone for production using the sbt clean dist command. This produces a .zip file that you can deploy to some folder in your Docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and its contents should be something like this:
Create the script file in the docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/<your app name>/conf/<your app conf. file>
Save the startapp script, then from a new terminal you must commit your changes to your container's image so it will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale, you can docker stop or exit out of the running container's bash. Next, start the Play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app
Hope this helps...