Docker pull can authenticate but run cannot - amazon-web-services

I built, tagged & published my first (ever) Docker image to Quay:
docker build -t myapp .
docker tag <imageId> quay.io/myorg/myapp:1.0.0-SNAPSHOT
docker login quay.io
docker push quay.io/myorg/myapp:1.0.0-SNAPSHOT
I then logged into Quay.io to confirm the tagged image was successfully pushed, and it was. So then I SSHed into a brand-spanking-new AWS EC2 instance and followed their instructions to install Docker:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker info
Interestingly enough, the sudo usermod -a -G docker ec2-user command doesn't seem to work as advertised, as I still need to prefix all my commands with sudo...
So I try to pull my tagged image:
sudo docker pull quay.io/myorg/myapp:1.0.0-SNAPSHOT
Please login prior to pull:
Username: myorguser
Password: <password entered>
1.0.0-SNAPSHOT: Pulling from myorg/myapp
<hashNum1>: Pull complete
<hashNum2>: Pull complete
<hashNum3>: Pull complete
<hashNum4>: Pull complete
<hashNum5>: Pull complete
<hashNum6>: Pull complete
Digest: sha256:<longHashNum>
Status: Downloaded newer image for quay.io/myorg/myapp:1.0.0-SNAPSHOT
So far, so good (I guess!). Let's see what images my local Docker engine knows about:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Hmmm...that doesn't seem right. Oh well, let's try running a container for my (successfully?) pulled image:
sudo docker run -it -p 8080:80 -d --name myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
Unable to find image 'myapp:1.0.0-SNAPSHOT' locally
docker: Error response from daemon: repository myapp not found: does not exist or no pull access.
See 'docker run --help'.
Any idea where I'm going awry?

To list images, use: docker images (docker ps lists containers, not images).
When you pull, the image keeps its fully qualified name, so to run it you need to reference it by that name. Note also that container names may not contain ':', so --name needs a plain name:
sudo docker run -it -p 8080:80 -d --name myapp quay.io/myorg/myapp:1.0.0-SNAPSHOT
If you wish to use a short name, you need to retag it after the docker pull:
sudo docker tag quay.io/myorg/myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
After that, your docker run command will work with the short image name (again with a valid --name). Note that docker ps shows containers that are running (or have exited in the recent past if used with -a).
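Put together, the pull-retag-run sequence might look like the sketch below (image names are the ones from the question; the docker steps are guarded because they need a running daemon):

```shell
# Full image reference as pushed to Quay (from the question above).
FULL=quay.io/myorg/myapp:1.0.0-SNAPSHOT
# Strip the registry/org prefix to get a short local name.
SHORT=${FULL##*/}
echo "$SHORT"    # -> myapp:1.0.0-SNAPSHOT

# These steps need a Docker daemon, so guard them for illustration.
if command -v docker >/dev/null 2>&1; then
  docker pull "$FULL"
  docker tag "$FULL" "$SHORT"
  # Container names may not contain ':', so --name gets a plain name.
  docker run -d -p 8080:80 --name myapp "$SHORT"
  docker images   # lists images; 'docker ps' lists running containers
fi
```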

Related

docker not installed through yum in user data

this is what my user data looks like, but docker is not installed when connecting to my ec2 machine:
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
what can I do?
When using a user-data script, you can debug what happened by SSHing into the instance and checking the output in cloud-init-output.log:
sudo cat /var/log/cloud-init-output.log
When doing this you'll find a strange error containing:
Jan 29 11:58:25 cloud-init[2970]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'sudo yum -y install dock...'
This means that the default interpreter seems to be Python, so it's necessary to start the user-data with #!/bin/bash. (See this other StackOverflow answer)
When changing the user-data to:
#!/bin/bash
sudo yum -y install docker
sudo service docker start
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
it will be executed as expected and you will find nginx running on your EC2 instance.
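The fix can be checked locally: write the corrected user-data to a file and confirm the interpreter line comes first. (Note that user-data runs as root, so the sudo prefixes are unnecessary, an assumption reflected below.)

```shell
# Write the corrected user-data to a file; the leading #!/bin/bash is what
# makes cloud-init execute it as a shell script instead of handing it to
# the default handler. (User-data runs as root, so sudo is not needed.)
cat > user-data.sh <<'EOF'
#!/bin/bash
yum -y install docker
service docker start
docker pull nginx
docker run -d -p 80:80 nginx
EOF
head -n 1 user-data.sh    # -> #!/bin/bash
```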

Docker, GitLab and deploying an image to AWS EC2

I am trying to learn how to create a .gitlab-ci.yml and am really struggling to find the resources to help me. I am using dind to create a docker image to push to the docker hub, then trying to log into my AWS EC2 instance, which also has docker installed, to pull the image and start it running.
I have successfully managed to build my image using GitLab and pushed it to the docker hub, but now I have the problem of trying to log into the EC2 instance to pull the image.
My first naive attempt looks like this:
#.gitlab-ci.yml
image: docker:18.09.7
variables:
  DOCKER_REPO: myrepo
  IMAGE_BASE_NAME: my-image-name
  IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:$CI_COMMIT_REF_SLUG
  CONTAINER_NAME: my-container-name
services:
  - docker:18.09.7-dind
before_script:
  - docker login -u "$DOCKER_REGISTRY_USER" -p "$DOCKER_REGISTRY_PASSWORD"
after_script:
  - docker logout
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - docker build . -t $IMAGE -f $PWD/staging.Dockerfile
    - docker push $IMAGE
deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $IMAGE
    - docker push $IMAGE
    - docker tag $IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    # So far so good - this is where it starts to go pear-shaped
    - apt-get install sudo -y
    - sudo apt install openssh-server -y
    - ssh -i $AWS_KEY $AWS_URL "docker pull $RELEASE_IMAGE"
    - ssh -i $AWS_KEY $AWS_URL "docker rm --force $CONTAINER_NAME"
    - ssh -i $AWS_KEY $AWS_URL "docker run -p 3001:3001 -p 3002:3002 -w "/var/www/api" --name ${CONTAINER_NAME} ${IMAGE}"
It seems that whatever operating system the docker image is built upon does not have apt-get, ssh and a bunch of other useful commands installed. I receive the following error:
/bin/sh: eval: line 114: apt-get: not found
Can anyone help me with the commands I need to log into my EC2 instance and pull and run the image in gitlab-ci.yml using this docker:dind image? Upon which operating system is the docker image built?
The official Docker image is based on Alpine Linux, which uses the apk package manager.
Try replacing your apt-get commands with the following instead:
- apk add openssh-client
There is no need to install sudo, nor openssh-server (you only need the SSH client), so those steps were removed.
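A fuller sketch of the deploy step might look like this. The values are placeholders standing in for the question's CI variables, and -o StrictHostKeyChecking=no is an assumption to avoid the interactive host-key prompt in a non-interactive CI job:

```shell
# Compose the command to run on the EC2 host (placeholder values mirroring
# the question's $RELEASE_IMAGE and $CONTAINER_NAME).
RELEASE_IMAGE=myrepo/my-image-name:latest
CONTAINER_NAME=my-container-name
REMOTE_CMD="docker pull $RELEASE_IMAGE && (docker rm --force $CONTAINER_NAME || true) && docker run -d -p 3001:3001 -p 3002:3002 -w /var/www/api --name $CONTAINER_NAME $RELEASE_IMAGE"
echo "$REMOTE_CMD"
# In the Alpine-based CI job you would then install the ssh client and run it:
#   apk add --no-cache openssh-client
#   chmod 600 "$AWS_KEY"
#   ssh -i "$AWS_KEY" -o StrictHostKeyChecking=no "$AWS_URL" "$REMOTE_CMD"
```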

How to inject file.log to logstash and display it via kibana

I am using Docker and docker-compose to create ELK containers; after the containers are created, I need to inject a log file into Logstash and display it via Kibana.
I hadn't worked with Docker until three days ago. I have been working on this problem, surfed at least 10 websites plus YouTube, and can't understand what I should do.
I succeeded in creating a Docker container and installing docker-compose.
I have pulled docker-elk from git, so I have ready-made yml files for docker-compose, Logstash, Kibana and Elasticsearch. I have tried to push a file into Logstash, but I can't tell whether I did it right, or how to check it at all.
I saw an option to check the IP addresses of the running containers and open ip:5601 and ip:9200, but nothing worked.
I have installed Docker and pulled docker-elk:
sudo amazon-linux-extras install docker
Download docker-elk:
git clone https://github.com/deviantony/docker-elk
Download docker-compose:
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
Then I created the ELK containers. I tried two commands; the second one worked better:
sudo docker-compose -d
sudo docker-compose -f /full address/docker-compose.yml up
I expect to see the log file I injected into Logstash displayed in a Kibana graph.
What you need is a log shipper like Filebeat, which does not come with the ELK stack. After you configure Filebeat to send logs to Logstash, you will see the logs in Kibana.
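As a minimal sketch, a Filebeat configuration shipping one file to Logstash could look like the one written below. The log path is hypothetical, and it assumes the docker-elk Logstash pipeline exposes a Beats input on port 5044, which must be verified in its logstash.conf:

```shell
# Write a minimal filebeat.yml (the path and port are assumptions, not
# values from the question).
cat > filebeat.yml <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/file.log   # hypothetical path to the log to ship
output.logstash:
  hosts: ["localhost:5044"]
EOF
grep -c "output.logstash" filebeat.yml   # -> 1
# Then run Filebeat (e.g. the official container) mounting this config plus
# the log directory, and watch Kibana's Discover tab for incoming events.
```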

Random number after run a docker for ASP on AWS

I'm trying to put my new ASP.NET website (MVC 5) on an AWS server. For this I used EC2 and created a Linux virtual machine. I also built my Docker image and now I'm trying to run it with:
sudo docker run -t -d -p 80:5004 myapp
But each time I just get a random number like this :
940e7abfe315d32cc8f5cfeb0c1f13750377fe091aa43b5b7ba4
When I try to check whether my container is running with:
sudo docker ps
no information is shown...
For information, when I run sudo docker images, I do see the image I created for my application.
My Dockerfile contain:
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
EXPOSE 5004
ENTRYPOINT ["dnx", "-p", "project.json", "kestrel"]
-d means to run detached. The number you see is the container id.
If you want to get information about the container, you can use that container id
docker inspect <container id>
docker logs <container id>
If you'd like the container to run and print logs to the terminal, remove the -d
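A small illustration of working with that id (the long hash is the one printed in the question; the daemon-dependent commands are shown as comments since they need Docker running):

```shell
# The long string printed by 'docker run -d' is the container id; docker
# accepts it, or its 12-character short form, wherever a container is named.
CID=940e7abfe315d32cc8f5cfeb0c1f13750377fe091aa43b5b7ba4   # id from the question
echo "$CID" | cut -c1-12    # -> 940e7abfe315  (the short id shown by 'docker ps')
# With a daemon available you would capture and use it directly:
#   CID=$(sudo docker run -t -d -p 80:5004 myapp)
#   sudo docker logs "$CID"     # shows why the container exited, if it did
#   sudo docker ps -a           # exited containers only appear with -a
```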

Can't see django site being run in docker container on localhost

I have a django app that I will need to deploy on Amazon's EC2 Container Service. In the meantime, in order to test the deployment, I am trying to deploy it in a docker container locally first, but even when running a simple demo django application, I am unable to see the page at localhost:8000.
Here is my setup.
Create a docker machine:
$ docker-machine create --driver virtualbox testmachine
After this I set up my environment:
$ eval "$(docker-machine env testmachine)"
I set up a Dockerfile for my test container:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install python-pip -y
RUN pip install django
RUN mkdir django_test
RUN cd django_test && \
django-admin.py startproject django_test .
Then I call
$ docker build -t dockertest .
... builds successfully
$ docker run -d -i -t -p 8000:8000 dockertest
cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
$ docker attach cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
root@cbef144ac068:/# cd django_test
root@cbef144ac068:/django_test# python manage.py runserver 0.0.0.0:8000
This successfully starts the server on 0.0.0.0:8000 inside the container.
However, when I try to go to localhost:8000 in my browser, I get a "This webpage is not available." What am I missing?
Turns out I was looking at the wrong IP.
To figure out the correct IP, I ran:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
testmachine * virtualbox Running tcp://192.168.99.100:2376
I then loaded 192.168.99.100:8000 in my browser, and it worked like a charm.
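A quick sketch of turning the machine's address into the URL to browse; `docker-machine ip` prints just the address, and the fallback here is the IP from the listing above for when the command is unavailable:

```shell
# The published port lives on the docker-machine VM, not on localhost.
# Ask docker-machine for the VM's address (falling back to the IP shown
# in the 'docker-machine ls' URL column from the question).
IP=$(docker-machine ip testmachine 2>/dev/null || echo 192.168.99.100)
echo "browse to: http://$IP:8000/"
```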