I have run through the Docker 'Get Started' tutorial (https://docs.docker.com/get-started/part6/) and have also followed all the instructions with my own application and AWS. I used the wrong image in the service definition in my docker-compose.yml file. I have since corrected the docker-compose.yml file and tried to run docker stack deploy, but I get the following output and then nothing happens on the swarm. Is there something I can do to get the swarm to use the correct image, or do I need to start from scratch?
[myapp-swarm] ~/PycharmProjects/myapp $ docker stack deploy -c docker-compose.yml myapp
Updating service myservice_web (id: somerandomidstring)
image my_user/myprivaterepo:myapptag could not be accessed on a registry to record
its digest. Each node will access my_user/myprivaterepo:myapptag independently,
possibly leading to different nodes running different versions of the image.
When updating services that need credentials to pull the image, you need to pass --with-registry-auth. Images pulled for a service take a different path than a regular docker pull: the actual pull is performed on each node in the swarm where an instance is scheduled. For that to work, the swarm needs to have the registry credentials stored, so that they can be passed on to whichever node performs the pull.
Can you confirm whether passing --with-registry-auth makes the problem go away?
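For example, a deploy that forwards the local registry credentials to the swarm might look like this (myapp and the compose file name are taken from the question above):

```shell
# Log in to the registry first so local credentials exist to forward.
docker login

# --with-registry-auth sends those credentials to the swarm agents,
# so each node can pull my_user/myprivaterepo:myapptag itself.
docker stack deploy -c docker-compose.yml --with-registry-auth myapp
```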
I have created Docker images for Druid and Superset, and now I want to push these images to ECR and start an ECS cluster to run the containers. What I have done is create the images by running docker-compose up on my YML file. Now when I type docker image ls I can see the images listed.
I have created an AWS account and created a repository. AWS provided the push commands, and I pushed the Superset image into ECR to start with (I didn't push any of its dependencies).
I created a cluster in AWS; in one configuration step it asked for a custom port, and I provided 8088. I don't know what this port is for or why it is asked for.
Then I created a load balancer with the default configuration
After some time I could see the container status turn to running.
I navigated to the public IP I mentioned, on port 8088, and could see Superset running.
Now I have two problems
Superset always shows a login error.
The service stops automatically after some time, restarts after that, and this cycle continues.
Should I create different ECR repos and push all the dependencies to ECR before creating a cluster in ECS?
For the service going up and down: since you mentioned you have an LB associated with the service, you may have an issue with the health check configuration.
If the health check fails a number of consecutive times, ECS will kill the task and restart it.
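If that is the case, relaxing the target group's health check is one way to test the theory. A sketch using the AWS CLI; the target group ARN is a placeholder, and /health is an assumption about your Superset version's endpoints, so check what it actually serves:

```shell
# Point the ALB health check at a lightweight endpoint and give the
# container more slack before it is marked unhealthy and replaced.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/superset/abc123 \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 5
```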
I'm trying to set up some infrastructure using AWS ECR to store Docker images. I'm just wondering if I have access to the same base images that I do on Docker Hub, e.g. whether FROM node works in my Dockerfile after I log in to ECR. I'm just wondering where this image is getting pulled from. I can't find anything regarding a public ECR repository that stores base images. Thanks.
The name of a Docker image identifies the repository that it comes from. For example:
docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest
The registry is aws_account_id.dkr.ecr.us-west-2.amazonaws.com, the image name is amazonlinux, and the tag is latest. The punctuation characters / and : separate these three components.
When you pull from Docker Hub, you don't have a registry name, just an image name and tag (node:latest).
When you run docker login, it adds credentials to those known by Docker. You can login to as many registries as you want. When you then run docker pull, it looks to see if it has credentials for the specific registry.
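So FROM node still resolves to Docker Hub unless you qualify it with a registry name. As an illustration (the account ID and region are placeholders):

```shell
# Log in to a private ECR registry (AWS CLI v2 style).
aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin \
    aws_account_id.dkr.ecr.us-west-2.amazonaws.com

# This pull goes to ECR, because the name starts with a registry host:
docker pull aws_account_id.dkr.ecr.us-west-2.amazonaws.com/amazonlinux:latest

# This pull still goes to Docker Hub; registry-less names default there:
docker pull node:latest
```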
How do I deploy a named data volume, with contents, to nodes in a swarm? Here is what I want to do, as described in the Docker documentation:
“Consider a situation where your image starts a lightweight web server. You could use that image as a base image, copy in your website’s HTML files, and package that into another image. Each time your website changed, you’d need to update the new image and redeploy all of the containers serving your website. A better solution is to store the website in a named volume which is attached to each of your web server containers when they start. To update the website, you just update the named volume.”
(source: https://docs.docker.com/engine/reference/commandline/service_create/#add-bind-mounts-or-volumes)
I'd like to use the better solution. But the description doesn't say how the named volume is deployed to host machines running the web servers, and I can't get a clear read on this from the documentation. I'm using Docker-for-AWS to set up a swarm where each node is running on a different EC2 instance. If the containers are supposed to mount the volume locally, then how is it deployed to each node of the swarm? If it is mounted from a manager node as a network filesystem visible to the nodes, how is this specified in the docker-compose yaml file? And how does the revised volume get deployed from the development machine to the swarm manager? Can this be done through a deploy directive in a docker-compose yaml file? Can it be done in Docker Cloud?
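For reference, the cited docs page demonstrates the named-volume approach with docker service create; a minimal sketch (the service and volume names are illustrative) is:

```shell
# Create a service with a named volume mounted into each replica.
# Docker creates the "site-content" volume locally on every node where
# a task lands, which is exactly the part I'm unclear about: how do the
# per-node copies get populated and updated with the website contents?
docker service create \
  --name web \
  --mount type=volume,source=site-content,target=/usr/share/nginx/html \
  nginx:alpine
```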
Thanks
I have an image on Amazon's Elastic Container Registry (ECR) that I want to deploy as a Docker service in my Docker single-node swarm. Currently the service is running an older version of the image's latest tag, but I've since uploaded a newer version of the latest tag to ECR.
Running docker service update --force my_service on my swarm node, which uses image XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest, results in:
image XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest could not be accessed on a registry to record its digest. Each node will access XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest independently,
possibly leading to different nodes running different versions of the image.
This appears to prevent the node from pulling a new copy of the latest tag from the registry, and the service from properly updating.
I'm properly logged in with docker login to ECR, and running docker pull XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/my_service:latest works fine (and returns a digest).
Why is docker service update unable to read the digest from the ECR registry despite the image being available?
I had the same problem, and I solved it by using --with-registry-auth.
After you have logged in with docker login, can you try the same update command with --with-registry-auth?
https://github.com/moby/moby/issues/34153#issuecomment-316047924
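Concretely, the sequence would be roughly as follows (the region, registry host, and XXXXXXXXXXXX account-ID placeholder are copied from the question):

```shell
# Refresh the ECR credentials on the node you run the update from.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin \
    XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com

# Forward those credentials to the swarm so the manager can resolve the
# image digest and each node can pull the new :latest.
docker service update --with-registry-auth --force my_service
```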
I use Docker Hub to store a private Docker image. The repository has a webhook that, once the image is updated, calls a service I built to:
update the ECS task definition
update the ECS service
deregister the old ECS task definition
The service runs accordingly. After it runs, ECS creates a new task with the new task definition, stops the task with the old task definition, and the service comes back up with the new definition.
The problem is that the Docker image is not updated: once the service starts with the new task definition, it keeps running the old image.
Am I doing something wrong? How do I ensure the Docker image is updated?
After analysing the AWS ECS logs I found out that the problem was in the ECS Docker authentication.
To solve that I've added the following data to the file /etc/ecs/ecs.config
ECS_CLUSTER=default
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"YOUR_DOCKER_HUB_AUTH","email":"YOUR_DOCKER_HUB_EMAIL"}}
Just replace YOUR_DOCKER_HUB_AUTH and YOUR_DOCKER_HUB_EMAIL with your own information and it should work properly.
To find this information, you can execute docker login on your own computer and then look for the data in the file ~/.docker/config.json.
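For example, assuming jq is installed and the credentials are stored directly in the file (newer Docker versions may delegate to a credential helper, in which case the auth field will be absent):

```shell
# Extract the base64-encoded auth string for Docker Hub from the local
# Docker config; this is the value to use for YOUR_DOCKER_HUB_AUTH.
jq -r '.auths["https://index.docker.io/v1/"].auth' ~/.docker/config.json
```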
For more information on the Private Registry Authentication topic, please look at the Amazon ECS Developer Guide: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html