Dockerfile for awscli - amazon-web-services

I am trying to create a Dockerfile that installs awscli and runs a command to list S3 buckets; once the command has executed, the container itself exits. I built the image with:
docker build --tag aws-cli:1.0 .
After building, I run it with:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli
Error: Unable to find image 'aws-cli:latest' locally docker: Error response from daemon: pull access denied for aws-cli, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
FROM python:2.7-alpine3.10
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
RUN pip install awscli
CMD s3 ls
ENTRYPOINT [ "awscli" ]

You are missing the image name in the docker run command. It should be like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' <docker image>

You missed the image name. Please provide the image name when running docker run, like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli:1.0
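
Beyond the missing tag, the Dockerfile shown above would still fail once the container starts: the pip package awscli installs an executable named aws (not awscli), and a shell-form CMD combined with an exec-form ENTRYPOINT gets wrapped in /bin/sh -c instead of being passed as plain arguments. A corrected sketch (the region and key placeholders are still yours to fill in, and baking credentials into the image remains inadvisable):
FROM python:2.7-alpine3.10
# Prefer passing credentials at run time with -e; kept here only to mirror the original
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
RUN pip install awscli
# The awscli package installs the "aws" binary; an exec-form CMD is appended to the
# ENTRYPOINT as plain arguments, so the container runs "aws s3 ls" and then exits
ENTRYPOINT ["aws"]
CMD ["s3", "ls"]
With that, docker run -it --rm ... aws-cli:1.0 should print the S3 listing and exit.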

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the filesystem root (/), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
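
As an aside, the systemd workaround mentioned in the question's note usually takes the form of a drop-in file that hands the credentials to the Docker daemon as environment variables (a sketch; the drop-in file name is arbitrary and the values are placeholders):
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=..."
Environment="AWS_SECRET_ACCESS_KEY=..."
sudo systemctl daemon-reload
sudo systemctl restart docker
That only helps where systemd manages dockerd, which is exactly what the question says is missing on Google CloudShell.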

Publish beanstalk environment hook issues

I have an issue with my script. I use Elastic Beanstalk to deploy my ASP.NET Core code, and in my postdeploy hook I have this script:
#!/usr/bin/env bash
file1=$(sudo cat /opt/elasticbeanstalk/config/ebenvinfo/region)
file2=$(/opt/elasticbeanstalk/bin/get-config container -k environment_name)
file3=$file2.$file1.elasticbeanstalk.com
echo $file3
sudo certbot -n -d $file3 --nginx --agree-tos --email al#gmail.com
It works perfectly if I launch it on the instance, but as a postdeploy hook it fails with this error:
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/00_get_certificate.sh failed with error fork/exec .platform/hooks/postdeploy/00_get_certificate.sh: exec format error
PS: My project has an .ebextensions config that grants the script exec rights:
container_commands:
  00_permission_hook:
    command: "chmod +x .platform/hooks/postdeploy/00_get_certificate.sh"
What's wrong?
I had the same issue. Adding
#!/bin/bash
to the top of the .sh file and running chmod +x on it solved the problem.
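
For context, exec format error means the kernel could not parse the script's interpreter line at all, which usually comes down to a missing or garbled shebang or Windows (CRLF) line endings rather than permissions. A couple of checks on the hook file (a sketch):
# "with CRLF line terminators" in the output points at Windows line endings
file .platform/hooks/postdeploy/00_get_certificate.sh
# Strip any carriage returns in place
sed -i 's/\r$//' .platform/hooks/postdeploy/00_get_certificate.sh
# The shebang must be the very first bytes of the file (no BOM, no blank line above it)
head -c 16 .platform/hooks/postdeploy/00_get_certificate.sh | od -c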

How to pass config file to docker run command on Google Compute Engine?

I'm deploying this Dockerfile:
FROM zenika/alpine-chrome:with-node
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD 1
ENV PUPPETEER_EXECUTABLE_PATH /usr/bin/chromium-browser
WORKDIR /usr/src/app
COPY --chown=chrome package.json yarn.lock ./
RUN yarn --frozen-lockfile
COPY --chown=chrome src ./src
COPY --chown=chrome tsconfig.json ./
COPY --chown=chrome chrome.json /
RUN yarn run build
ENTRYPOINT ["tini", "--"]
CMD ["node", "./dist/start.js"]
using this bash script:
echo "start deploying"
PROJECT_ID=...
APP_ID=...
LAST_COMMIT_HASH=`git log --pretty=format:'%h' -n 1`
GCR_ADDRESS="gcr.io/$PROJECT_ID/$APP_ID:$LAST_COMMIT_HASH"
echo "authenticate with service account"
gcloud auth activate-service-account --key-file=./google-key.json
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-a
gcloud auth configure-docker
echo "build docker image"
docker build . -t $GCR_ADDRESS
echo "push docker image to $GCR_ADDRESS"
docker push $GCR_ADDRESS
echo "create VM, if it doesn't exist yet"
gcloud compute instances create-with-container my-vm --container-image=$GCR_ADDRESS --container-arg="-it --rm --security-opt seccomp=/chrome.json" || {
echo "failed to create VM. Probably it already exists. Updating existing VM..."
gcloud compute instances update-container my-vm --container-image=$GCR_ADDRESS --container-arg="-it --rm --security-opt seccomp=/chrome.json"
}
When this container is being started by GCE, it throws the error:
[FATAL tini (6)] exec -it --rm --security-opt seccomp=/chrome.json failed: No such file or directory
How do I pass seccomp file to GCE?
In your args, seccomp=/chrome.json references the seccomp JSON file relative to the root directory.
Verify that the file is really at / (not recommended); if it is not, change the path in --security-opt seccomp=/path/to/seccomp/profile.json [1], for example to ./chrome.json.
Also take into consideration that every argument appended to the container entrypoint must have its own flag, and arguments are appended in the order of the flags.
Assuming the default entrypoint of the container (or an entrypoint overridden with the --container-command flag) is a Bourne-shell-compatible executable, to execute the command 'ls -l' in the container: [2]
--container-arg="-c" --container-arg="ls -l"

Creating Selenium network run via docker for firefox node in AWS

I am trying to run a Docker image (e.g. webwhatsapi) over a Selenium network. I ran the following commands:
docker network create selenium
docker run -d -p 4444:4444 -p 5900:5900 --name firefox --network selenium -v /dev/shm:/dev/shm selenium/standalone-firefox-debug
docker build -t webwhatsapi .
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
On AWS, I have the following configuration in the security group.
I am trying to open http://{public ip}:4444 in the Firefox browser, but it shows an error (This site can't be reached). I think I should change my last command in a way that makes it work from the browser URL.
Last command:
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
Please let me know where I am going wrong.
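
One way to sanity-check the hub before involving a remote browser is to query its status endpoint, first from the EC2 instance itself (the -p 4444:4444 mapping above publishes it on the host) and then from outside to confirm the security group allows TCP 4444 (a sketch):
# On the instance: should return the hub's JSON status if the container is up
curl http://localhost:4444/wd/hub/status
# From your own machine: the same check against the public IP exercises the security group rule
curl http://{public ip}:4444/wd/hub/status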

Docker pull can authenticate but run cannot

I built, tagged & published my first (ever) Docker image to Quay:
docker build -t myapp .
docker tag <imageId> quay.io/myorg/myapp:1.0.0-SNAPSHOT
docker login quay.io
docker push quay.io/myorg/myapp:1.0.0-SNAPSHOT
I then logged into Quay.io to confirm the tagged image was successfully pushed, and it was. So then I SSHed into a brand-spanking-new AWS EC2 instance and followed their instructions to install Docker:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker info
Interestingly enough, the sudo usermod -a -G docker ec2-user command doesn't seem to work as advertised, as I still need to prefix all my commands with sudo...
So I try to pull my tagged image:
sudo docker pull quay.io/myorg/myapp:1.0.0-SNAPSHOT
Please login prior to pull:
Username: myorguser
Password: <password entered>
1.0.0-SNAPSHOT: Pulling from myorg/myapp
<hashNum1>: Pull complete
<hashNum2>: Pull complete
<hashNum3>: Pull complete
<hashNum4>: Pull complete
<hashNum5>: Pull complete
<hashNum6>: Pull complete
Digest: sha256:<longHashNum>
Status: Downloaded newer image for quay.io/myorg/myapp:1.0.0-SNAPSHOT
So far, so good (I guess!). Let's see what images my local Docker engine knows about:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Hmmm... that doesn't seem right. Oh well, let's try running a container for my (successfully?) pulled image:
sudo docker run -it -p 8080:80 -d --name myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
Unable to find image 'myapp:1.0.0-SNAPSHOT' locally
docker: Error response from daemon: repository myapp not found: does not exist or no pull access.
See 'docker run --help'.
Any idea where I'm going awry?
To list images, you need to use: docker images
When you pull, the image keeps its full repository name and tag. So if you wish to run it, you will need to use:
sudo docker run -it -p 8080:80 -d --name myapp quay.io/myorg/myapp:1.0.0-SNAPSHOT
If you wish to use a short name, you need to retag it after the docker pull:
sudo docker tag quay.io/myorg/myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
After that, your docker run command will work (drop the colon from the --name value, though, since container names cannot contain ':'). Note that docker ps only lists running containers; add -a to also show stopped ones.
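
To make the naming point concrete, the pulled image shows up under its full repository name (illustrative output; the ID, age and size are placeholders):
sudo docker images
REPOSITORY               TAG              IMAGE ID       CREATED       SIZE
quay.io/myorg/myapp      1.0.0-SNAPSHOT   <imageId>      <created>     <size>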