I set up two Swarm manager nodes (mgr1, mgr2), but when I try to connect to the container it throws an error message.
[root@ip-10-3-2-24 ec2-user]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise 10.3.2.24:3375 consul://10.3.2.24:8500/
[root@ip-10-3-2-24 ec2-user]# docker exec -it mgr1 /bin/bash
rpc error: code = 2 desc = "oci runtime error: exec failed: exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
It happens on both servers (mgr1, mgr2). I'm also running a Consul container on each node and am able to connect to the Consul containers.
/bin/bash might not be available in the container. You can use sh instead, as shown below:
docker exec -it mgr1 sh or
docker exec -it mgr1 /bin/sh or
docker exec -it mgr1 bash or
docker attach mgr1
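If you are unsure which shells an image actually ships, the fallback idea behind the list above can be sketched as a small POSIX function (`first_shell` is a hypothetical helper name; you would run it inside the container):

```shell
# Print the first shell found from a candidate list; busybox/alpine
# images usually have only sh, while debian/ubuntu images have bash.
first_shell() {
  for s in bash sh; do
    if command -v "$s" >/dev/null 2>&1; then
      echo "$s"
      return 0
    fi
  done
  echo "no shell found" >&2
  return 1
}
first_shell
```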
UPDATE: Based on the comments
busybox is a very lightweight Linux-based image, and most of the commands above work fine against it:
bash $ sudo docker exec -it test1 bash
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n"
bash $ sudo docker exec -it test1 sh
/ # exit
bash $ sudo docker exec -it test1 /bin/sh
/ # exit
bash $ sudo docker attach test1
/ # exit
bash $
Related
I would like to create a container that runs only a shell. To do this, I tried the following:
FROM alpine:latest
ENTRYPOINT ["/bin/sh"]
Unfortunately I don't get a shell when I start the container.
podman build -t $IMAGENAME .
podman run --name foobar $IMAGENAME
podman start -ai foobar
But if I start the container as follows, it works:
podman run --name foobar2 -ti $IMAGENAME /bin/sh
/ #
CTRL+C
podman start -ai foobar2
/ #
I had assumed that the entrypoint "/bin/sh" would directly execute a shell that you can work with.
Your Containerfile is fine.
The problem is that your main command is a shell (/bin/sh): a shell reads commands from stdin, so without an attached stdin and a pseudo-TTY it hits end-of-file and exits immediately.
You can pass a pseudo-TTY with the --tty or -t option. Also, a good option is to use --interactive or -i to allow the main process to receive input.
All the commands below will work for you:
podman build -t $IMAGENAME .
# run (use CTRL + P + Q to exit)
podman run --name foobar -ti $IMAGENAME
# create + start
podman create --name foobar -ti $IMAGENAME
podman start foobar
This is not the case if the main command is something other than a shell, such as a web server like Apache.
The entrypoint needs to be a long-lived process. Using /bin/sh as the entrypoint causes the container to exit as soon as it starts, because the shell reaches end-of-input right away.
Try using:
FROM alpine:latest
ENTRYPOINT ["sleep", "9999"]
Then you can exec into the container and run your commands.
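You can reproduce the exit-on-start behaviour locally without a container: a shell whose stdin is already at end-of-file quits immediately, which is what happens when the container runs /bin/sh with no TTY attached:

```shell
# sh reads commands from stdin; redirecting from /dev/null gives it
# immediate EOF, so it exits at once with status 0.
sh < /dev/null
echo "sh exited with status $?"
```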
I am trying to run a Docker image inside an EC2 instance using GitLab CI/CD.
I am trying to expose port 5000 for the application.
I am aware that this job will work the first time, but subsequent runs will fail, since Docker does not allow two containers to bind the same port. So I am trying to implement a fail-safe mechanism: before running, check for the existing process; if it exists, stop and remove the container, then run the image on port 5000.
The problem is that when this job runs for the first time, there are no containers, and docker stop requires at least one argument.
Is there a way to run this command conditionally, i.e. run it only if the process exists and skip it otherwise?
deploy:
stage: deploy
before_script:
- chmod 400 $SSH_KEY
script: ssh -o StrictHostKeyChecking=no -i $SSH_KEY ec2-user@ecxxxxx-xxxx.ap-southeast-1.compute.amazonaws.com "
docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
docker ps -aq | xargs docker stop | xargs docker rm &&
docker run -d -p 5000:5000 $IMAGE_NAME:$IMAGE_TAG"
Error on the pipeline:
"docker stop" requires at least 1 argument.
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
The problem is with the xargs docker stop | xargs docker rm command. Is there a way to solve this kind of problem?
Edit: This doesn't exactly answer my question, because the solution requires knowing the name of the image/container. If a junior engineer who doesn't know the name is assigned the task of setting up the pipeline, this won't work.
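For what it's worth, there is also a name-free fix: GNU xargs supports -r (--no-run-if-empty), which makes it run nothing when its input is empty, so the original pipeline stops failing on a host with no containers. Sketched here with echo standing in for docker so it runs anywhere:

```shell
# With -r, xargs skips the command entirely on empty input instead of
# invoking it with zero arguments (which is what broke docker stop).
printf '' | xargs -r echo stop          # prints nothing
printf 'c1 c2\n' | xargs -r echo stop   # prints: stop c1 c2
```

Note that -r is a GNU extension (BusyBox xargs behaves this way by default), so it is safe on typical Linux CI hosts but not portable to BSD xargs.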
What I understand is that you are not stopping an image: you are stopping a container, removing it, and then creating a new container exposing port 5000.
So give the container a constant name, which will be the same every time it is created. The "|| true" lets the pipeline continue even when there is no existing container to stop or remove.
variables:
CONTAINER_NAME: <your-container-name> #please give a name for container to be created for this image
deploy:
stage: deploy
before_script:
- chmod 400 $SSH_KEY
script: ssh -o StrictHostKeyChecking=no -i $SSH_KEY ec2-user#ecxxxxx-xxxx.ap-southeast-1.compute.amazonaws.com "
docker login -u $REGISTRY_USER -p $REGISTRY_PASS &&
docker stop $CONTAINER_NAME || true &&
docker rm $CONTAINER_NAME || true &&
docker run -d -p 5000:5000 --name $CONTAINER_NAME $IMAGE_NAME:$IMAGE_TAG"
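As an aside, the reason each command needs its own || true is shell operator precedence: || and && bind to the single command on their left, not to the whole line. A quick local demonstration (no Docker involved):

```shell
# '|| true' guards only the command immediately to its left, so a
# failing command elsewhere in the chain still short-circuits '&&'.
false || true && echo "guarded: chain continues"
false && echo "unguarded: this echo is skipped"
echo "done with status $?"
```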
I have an issue with my script. I use Elastic Beanstalk to deploy my ASP.NET Core code, and in my post-deploy hook I have this code:
#!/usr/bin/env bash
file1=$(sudo cat /opt/elasticbeanstalk/config/ebenvinfo/region)
file2=$(/opt/elasticbeanstalk/bin/get-config container -k environment_name)
file3=$file2.$file1.elasticbeanstalk.com
echo $file3
sudo certbot -n -d $file3 --nginx --agree-tos --email al#gmail.com
It works perfectly if I launch it on the instance, but as a post-deploy hook I get the error:
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/00_get_certificate.sh failed with error fork/exec .platform/hooks/postdeploy/00_get_certificate.sh: exec format error
PS: My .ebextensions config grants the script execute permission:
container_commands:
00_permission_hook:
command: "chmod +x .platform/hooks/postdeploy/00_get_certificate.sh"
What's wrong?
I had the same issue and solved it by adding
#!/bin/bash
to the top of the .sh file and also running chmod +x on the file.
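An "exec format error" generally means the kernel could not recognise the first bytes of the file when the hook runner exec()'d it directly: there is no #! line at all, or an invisible UTF-8 BOM sits in front of it (CRLF line endings produce a different "no such file or directory" error instead). A quick way to inspect, sketched with throwaway files:

```shell
# The kernel reads a script's first two bytes to pick an interpreter;
# if they are not '#!', direct exec() fails with "exec format error".
printf '#!/bin/bash\necho ok\n' > /tmp/good.sh
printf 'echo ok\n' > /tmp/no_shebang.sh
head -c 2 /tmp/good.sh; echo         # prints: #!
head -c 2 /tmp/no_shebang.sh; echo   # prints: ec
```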
I am trying to create a Dockerfile that installs the AWS CLI and runs a command to list S3 buckets; once the command has run, the container itself exits. I built the image with docker build --tag aws-cli:1.0 . and I am running it with docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli
Error: Unable to find image 'aws-cli:latest' locally docker: Error response from daemon: pull access denied for aws-cli, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
FROM python:2.7-alpine3.10
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
RUN pip install awscli
CMD s3 ls
ENTRYPOINT [ "awscli" ]
You are missing the image name (with its tag) in the docker run command. It should be like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' <docker image>
You missed the image tag. You built the image as aws-cli:1.0, so provide that full name when running docker run, like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli:1.0
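Separately from the tag issue, the Dockerfile in the question likely has two more problems: pip install awscli installs a binary named aws (not awscli), and the shell-form CMD s3 ls gets wrapped in /bin/sh -c rather than appended as arguments to the exec-form ENTRYPOINT. A corrected sketch (untested; credentials are better passed with -e at run time than baked into the image):

```dockerfile
FROM python:2.7-alpine3.10
RUN pip install awscli

# The CLI binary is "aws"; an exec-form CMD supplies default arguments
# to the entrypoint, so a plain `docker run` executes `aws s3 ls`.
ENTRYPOINT ["aws"]
CMD ["s3", "ls"]
```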
I am trying to run a Docker image (e.g. webwhatsapi) on a Selenium network.
I ran the following commands:
docker network create selenium
docker run -d -p 4444:4444 -p 5900:5900 --name firefox --network selenium -v /dev/shm:/dev/shm selenium/standalone-firefox-debug
docker build -t webwhatsapi .
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
On AWS, I have the following configuration in the security group.
I am trying to open http://{public ip}:4444 in the Firefox browser, but it shows an error ("This site can't be reached"). I think I should change my last command so that it works from the browser URL.
Last command:
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
Please let me know where I am going wrong.