My host operating system is Windows 10, and I have installed Ubuntu 20.04 (Focal) with the help of WSL2.
My goal is to debug a SAM application without using Docker Desktop.
I have installed the Docker daemon on the Ubuntu Linux distro with the help of the link below.
How to run docker on windows without using docker desktop
Now I want to debug the SAM application that I have developed on my local machine, and I want to use the dockerd installed on Ubuntu.
To launch dockerd on Ubuntu, I use the following command.
sudo dockerd -H `ifconfig eth0 | grep -E "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep -v 127.0.0.1 | awk '{ print $2 }' | cut -f2 -d:`
As a result, I get the IP address on which dockerd listens:
INFO[2021-12-31T10:31:12.959579412+05:30] API listen on 172.17.52.120:2375
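If ifconfig is not available (net-tools is often missing on newer Ubuntu images), the same address can be extracted with the ip command; this variant is only a sketch and assumes the interface is still eth0:

sudo dockerd -H `ip -4 addr show eth0 | grep -oP '(?<=inet )\d+(\.\d+){3}'`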
For the CLI, I am using the Docker CLI.
To test whether dockerd is working, I use the following command:
docker -H 172.20.5.64 run --rm hello-world
How can I tell SAM CLI to use the dockerd running inside Ubuntu when invoking the sam local invoke command?
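For anyone hitting the same question: sam local invoke goes through the standard Docker client configuration, so exporting DOCKER_HOST before running SAM CLI should be enough. A minimal sketch, assuming the address dockerd printed above (note that the WSL2 address can change across restarts):

export DOCKER_HOST=tcp://172.17.52.120:2375
sam local invoke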
The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
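For reference, on systemd-based systems that workaround is a drop-in file that passes the credentials to the daemon as environment variables, following the pattern in Docker's documentation. A sketch with placeholder values:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=placeholder"
Environment="AWS_SECRET_ACCESS_KEY=placeholder"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker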
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the root folder of the filesystem (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
# grab the PID of the running dockerd (pgrep -o dockerd would also work)
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
# dump that process's environment (NUL-separated)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
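A quick way to confirm this on a given system (a diagnostic sketch, reusing the DOCKERD_PID variable from above):

# print the daemon's environment one variable per line; HOME is typically absent
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n' | grep '^HOME=' || echo "HOME is not set for dockerd"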
I have a 64-bit Windows host machine. I have installed WSL (Debian), then Docker, and now I'm trying to compile a Qt project in a Red Hat Enterprise Linux 5.5 32-bit container (sharing a host directory with the code), but when I run qmake...
/usr/local/Trolltech/Qt-4.8.3/bin/qmake MYFILE.pro -spec linux-g++ -r CONFIG+=debug
I get:
QFSFileEngine::currentPath: stat(".") failed
And I can't continue my build. (The same qmake command works on a RHEL 5.5 virtual machine, so it's a container problem.)
I launch the Docker container like this:
docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
I found a solution.
It's a filesystem problem. I moved "E:\codeRepo" to "\\wsl$\Debian\codeRepo" (the WSL filesystem exposed as a network drive in Windows) and it works.
Now I'm sharing an ext4 folder with the Docker container, and there is no problem with QMake.
So, this doesn't work:
docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
But this works:
docker run -it -v \\wsl$\Debian\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
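Equivalently, when the Docker CLI is run from inside the WSL (Debian) shell itself, a bind mount from a directory on the ext4 filesystem avoids the Windows drive share entirely. A sketch, assuming the repo has been moved under the WSL home directory:

docker run -it -v $HOME/codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh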
I have a VM instance on Google Cloud with Ubuntu 20.04 LTS, and I set it to allow HTTP traffic.
I need to set up Label Studio (https://github.com/heartexlabs/label-studio) on this VM so anyone can access it by just typing the VM's public IP.
I already tried building it with Docker:
sudo docker build -t heartexlabs/label-studio:latest .
But when I run it with:
sudo docker run -d -p 80:80 heartexlabs/label-studio:latest
It doesn't work. Here's the output of the container list:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e728edbe6d5 heartexlabs/label-studio:latest "./tools/run.sh" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 8080/tcp gracious_shamir
I also tried to install it with pip and run it with:
label-studio start --host 34.66.116.52 --port 80 testproject
If anyone has experience with Google Cloud VMs and can help set this up with Docker or with a WSGI server, I'd appreciate it.
Try this:
sudo docker run --rm -d -p 80:8080 -v `pwd`/my_project:/label-studio/my_project --name label-studio heartexlabs/label-studio:latest label-studio start my_project --init
I was able to access it from the External IP with this. Note the -p 80:8080 mapping: the docker ps output above shows the container listening on 8080 internally, so the earlier -p 80:80 forwarded port 80 to a container port nothing was bound to.
I am trying to run a Docker image (e.g. webwhatsapi) on a Selenium network.
I ran the following commands:
docker network create selenium
docker run -d -p 4444:4444 -p 5900:5900 --name firefox --network selenium -v /dev/shm:/dev/shm selenium/standalone-firefox-debug
docker build -t webwhatsapi .
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
On AWS, I have the following configuration in the security group.
I am trying to open http://{public ip}:4444 in the Firefox browser. It shows an error ("This site can't be reached"). I think I should change my last command in a way that makes it work from the browser URL.
Last command:
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
Please let me know where I am going wrong.
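One thing worth checking first, from the EC2 host itself, is whether the hub answers locally at all; a diagnostic sketch, assuming the container names used above:

docker port firefox                        # should print 0.0.0.0:4444->4444/tcp (and 5900)
curl http://localhost:4444/wd/hub/status   # should return a JSON status payload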
I have installed Docker using sudo yum install -y docker and started the Docker service by running the following commands. Initially it worked and I was able to run Docker containers. Now the Docker daemon is running, but when I run Docker commands like docker ps, docker info, etc., nothing is shown on stdout.
I have uninstalled that Docker version using sudo yum remove docker, removed all the files manually, and installed a fresh one, but it's still the same issue.
Here is the link that I have followed to install docker in EC2 instance.
https://aws.amazon.com/blogs/devops/set-up-a-build-pipeline-with-jenkins-and-amazon-ecs/
Docker version
1.12.6, build 7392c3b/1.12.6
uname -a
Linux ip adress 4.4.41-36.55.amzn1.x86_64 #1 SMP Wed Jan 18 01:03:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
I was not able to figure out what went wrong. Could you please help me debug this issue?
Thank you in advance.
As I understand from what you said, and from going through the link you mentioned, you have given Docker command capabilities to the user jenkins, which you did using:
usermod -a -G docker jenkins
So in order to run Docker-related commands, you should log in as the user jenkins. You can use the following command to log in as the user jenkins:
sudo -su jenkins
From there you should be able to run the docker commands as expected.
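For example, a quick check (assuming the group change above has already been applied):

sudo -su jenkins
id          # "docker" should appear in the groups list
docker ps   # should now print the container table instead of nothing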
PS - Follow the steps again to install docker.
Hope this helps.