I have created Docker images for Druid and Superset, and now I want to push these images to ECR and start an ECS service to run the containers. So far, I have built the images by running docker-compose up on my YML file. When I type docker image ls, I can see the resulting images listed.
I have created an AWS account and created a repository. AWS provides the push commands, and I pushed the Superset image into ECR to start with (I didn't push any dependencies).
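For reference, the push commands ECR shows for a repository typically look like the following sketch (the account ID, region, and repository name below are placeholders, not the asker's actual values):

```shell
# Authenticate the local Docker client against ECR, then tag and push.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag superset:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/superset:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/superset:latest
```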
I created a cluster in AWS. In one configuration step, it asked for a port mapping, and I entered custom port 8088. I don't know what this port is for or why it is requested.
Then I created a load balancer with the default configuration.
After some time, I could see the container status turn to RUNNING.
I navigated to the public IP I mentioned on port 8088 and could see Superset running.
Now I have two problems:
1. It always shows a login error in Superset.
2. The container stops automatically after some time, then restarts, and this cycle continues.
Should I create separate ECR repositories and push all the dependency images to ECR before creating the cluster in ECS?
For the service going up and down: since you mentioned you have a load balancer associated with the service, you may have an issue with the health check configuration.
If the health check fails a number of consecutive times, ECS will kill the task and restart it.
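If that is the cause, relaxing the target group's health check, or pointing it at Superset's /health endpoint, usually stops the restart loop. A hedged sketch (the target group ARN below is a placeholder):

```shell
# Point the health check at Superset's /health endpoint and give the
# container more time before it is declared unhealthy.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/superset/abc123 \
  --health-check-path /health \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 5
```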
Here is my setup.
My Django code is hosted on Github
I have docker setup in my EC2 Instance
My deployment process is manual: I have to run git pull, docker build, and docker run on every single code change. I use a dockerservice account to perform this step.
How do I automate step 3 with AWS CodeDeploy or something similar?
Every example I see on the internet involves ECS or Fargate, which I am not ready to use yet.
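The manual step above can be wrapped in a single script that a GitHub webhook handler or a CodeDeploy lifecycle hook could invoke on the instance. A minimal sketch, assuming the repo lives at /home/dockerservice/app and the image/container are named myapp (all paths and names are illustrative):

```shell
#!/bin/sh
# Illustrative deploy script: pull the latest code, rebuild the image,
# and replace the running container.
set -e
cd /home/dockerservice/app
git pull origin master
docker build -t myapp:latest .
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp -p 80:8000 myapp:latest
```

With CodeDeploy, this script would typically be registered as an AfterInstall or ApplicationStart hook in an appspec.yml.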
Check this out on how to use Docker images from a private registry (e.g. Docker Hub) for your build environment:
How to Use Docker Images from a Private Registry for Your Build Environment
I have a Python 3 project which runs in a Docker container environment.
My Python project uses AWS access keys and a secret via a credentials file stored on the machine, which is added to the container using ADD.
I deployed my project to EC2. The server has one task running, which works fine. I can reach the webserver (Airflow) on port 8080.
When I make a new commit and push to the master branch on GitHub, the hook downloads the content and deploys it without a build stage.
The new code is on the EC2 server (I checked via SSH), but the container running in the task gets "stuck": the bind volumes disappear and stop working until I start a new task. Then the volumes are applied again from scratch and reference the new code. This action is fully manual.
Then, to fix it, I heard about AWS ECS Blue/Green deployment, so I implemented it. In this case, CodePipeline adds a build stage, but here the problem starts. If, in the build, I try to push a Docker image to ECR (which my task definition references), it fails. It fails because neither the server nor the repo (to which I commit and push my new code) contains the credentials file.
I tried building the latest Docker image from my localhost and skipping the build stage in CodePipeline, and that works fine. But then, when I go to port 8080 on both working IPs, I can reach the webserver, but the code is not there. If I click anywhere, it says the code was not found.
So, in general, I would like to understand what I am doing wrong and how to fix it, and on the other hand, to ask why my EC2 instance in the AWS ECS Blue/Green cluster has three IPs.
The first one is the one I use to reach the server through port 22. If I run docker ps there, I see one or two containers running, depending on whether I am in the middle of a deployment. If I search for my new code here, it's not there...
The other two IPs change after every deployment (I guess they are blue and green), and both work fine until CodePipeline destroys the green one (after a 5-minute wait), but the code is not there. I know this because when I click any of the links in the webserver, it fails, saying the Airflow DAG wasn't found.
So my problem is that I have a fully working AWS ECS Blue/Green deployment, but without my code, so my webserver doesn't have anything to run.
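One common way around the missing-credentials-file problem is to keep credentials out of the image entirely: CodeBuild's service role authorizes the ECR push at build time, and at runtime the ECS task role supplies AWS credentials, so the ADD of a credentials file is no longer needed. An illustrative buildspec.yml for the build stage (account ID, region, and repo name are placeholders):

```yaml
# Sketch of a CodeBuild buildspec: no credentials file is baked into
# the image; the build role logs in to ECR, and the task role handles
# AWS access at runtime.
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - docker build -t airflow:latest .
      - docker tag airflow:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/airflow:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/airflow:latest
```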
The Jenkins master is not dockerized and is not on AWS.
I want to add nodes (old name: slaves) that are on AWS, that auto-scale, and that have minimal infrastructure. According to Amazon's marketing blurb, I should use Docker on Fargate.
In AWS Fargate, I have one cluster called "default", which contains one service, "jenkins-service", created from the image jenkins/slave:latest (Dockerfile at https://hub.docker.com/r/jenkins/slave/dockerfile).
I'll illustrate how I did it with screenshots.
Went to https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun
Container definition: custom
Name: jenkins, image: jenkins/slave:latest, port mapping: 22/tcp
Leave task definition at default settings and press Next
Service: leave default settings and press Next
Cluster, again, leave default values and press Next
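The console wizard steps above roughly correspond to these CLI calls (the role ARN, subnet/security-group IDs, and CPU/memory sizes below are illustrative assumptions, not values from the wizard):

```shell
# Register a Fargate task definition for the jenkins/slave image.
aws ecs register-task-definition \
  --family jenkins-agent \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"jenkins","image":"jenkins/slave:latest","portMappings":[{"containerPort":22,"protocol":"tcp"}]}]'

# Create the service in the default cluster.
aws ecs create-service \
  --cluster default \
  --service-name jenkins-service \
  --task-definition jenkins-agent \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'
```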
Let it run and after a while I see this
View Service shows me this:
In Jenkins, on $JENKINS_URL/configureClouds/, I can add a Docker cloud.
I have to give it a Docker host URI and I cannot find that anywhere in AWS.
Running TeamCity 2019.1.4 with one server and three separate agents. The agents and the server each run in their respective server/agent containers on separate EC2 instances. I want the build artifact (a Docker image) to be pushed to ECR. Permissions are configured via an IAM role. I get an Unauthorized error when pushing/pulling. Manually pulling the image from the agent's EC2 host works, but manually pulling from within the agent's container gives the same error. How do I configure the TeamCity agent container to identify itself as the host machine?
PS: An option I am trying to avoid is running the TeamCity agents in classic mode (manual installation), which would most likely work.
Do the following:
1. In the TeamCity project configuration, add an ECR connection.
2. In the build configuration, add the "Docker Support" build feature.
3. Make sure the option "Log in to the Docker registry before the build" is checked, and select the ECR connection from the project configuration.
I have some misunderstanding. I have a Jenkins instance, an ECS cluster, and a docker-compose config file to compose my images and link up the containers.
After a git push, my Jenkins instance grabs all sources from all repos (webapp, api, lb) and performs a batch of operations like building, copying files, etc.
After that, I have all folders with Dockerfiles in a "ready for compose" state.
At this stage, I can't figure out how my ECS cluster on AWS should grab all the images from Jenkins and compose them with my docker-compose.yml config.
I would be glad of any useful information.
Thanks.
First, you need to push your images from the Jenkins server into ECR. Push each image individually to its own repo.
Next, you have to create an ECS cluster. In this cluster, you create an ECS service that runs in the cluster. For this Service, you create and configure a Task Definition to run your linked containers. You don't need to use docker-compose for this: in ECS you define the links between containers in the configuration of the Task Definition. You can add several container definitions to the same Task Definition, and ECS will link them together.
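A minimal sketch of such a Task Definition with two linked containers (the names, images, and ports are illustrative; note that container links require the bridge network mode on the EC2 launch type):

```json
{
  "family": "myapp",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
      "memory": 512,
      "essential": true
    },
    {
      "name": "lb",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/lb:latest",
      "memory": 256,
      "essential": true,
      "portMappings": [{"containerPort": 80, "hostPort": 80}],
      "links": ["api"]
    }
  ]
}
```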
You can automate all of this from your Jenkins server by attaching an Instance Profile to it that allows it to call the ECS API. To deploy new images, all you need to do is push them to their ECR repos, register a new Task Definition revision pointing to them, and update the service that runs those task definitions. ECS will then roll out the new version on your behalf.
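That last step can be scripted from Jenkins in a few commands (the repo, cluster, and service names below are placeholders):

```shell
# Push the freshly built image, register a new task definition revision,
# and point the service at it; ECS then replaces the running tasks.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs update-service --cluster my-cluster --service my-service --task-definition myapp
```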