Elastic Beanstalk Deployment TaskStoppedBeforePullBeginError - amazon-web-services

I am experiencing an issue deploying a multicontainer Elastic Beanstalk app which appears to be undocumented by AWS. The error reads: "client: TaskStoppedBeforePullBeginError: Task stopped before image pull could begin for task:", where client is my container name. I am using the deprecated Multicontainer Docker environment because the newer Docker environment on Amazon Linux 2 is poorly documented and had a lot of issues when I tried it.
Things I have tried:
verified that the image is pullable and runnable from my local machine (i.e. the image runs without immediately exiting)
used eb-cli to verify that my Dockerrun.aws.json configuration is working by running it all locally using eb local run
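For reference, a minimal version-2 Dockerrun.aws.json for a single container named client looks roughly like this (the image, memory, and port values are placeholders, not my actual configuration):
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "client",
      "image": "myorg/client:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 80, "containerPort": 80 }
      ]
    }
  ]
}
eb local run builds and starts this definition against the local Docker daemon, so the file format itself appears to be fine.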
I honestly have no idea how I could go about further troubleshooting the issue. Any help here would be much appreciated. Here are the last 100 lines of logs produced by Elastic Beanstalk.

Related

Deploy app to AWS Elastic Beanstalk using docker-compose

I'm trying to deploy a multi-container app to AWS Elastic Beanstalk using docker-compose. My folder structure is as follows:
AppDirectory/
app/
proxy/
scripts/
static/
docker-compose.yml
Dockerfile
requirements.txt
I've got the images built and pushed to Docker Hub, and the environment works as expected when running docker-compose up in development. Using the AWS Elastic Beanstalk dashboard, I create an application and then proceed to create an environment, using Docker as the platform. I have a .zip file with the structure mentioned above.
When creating the environment, there's first a console message telling me "Configuration files cannot be extracted from the application version appname-source. Check that the application version is a valid zip or war file."
Not sure what it means here, as I'm uploading a .zip file and inside there are Dockerfile and docker-compose.yml.
After it says the app deployed successfully, I get a 502 Bad Gateway error from nginx, and then the environment sits at severe health and keeps updating for hours without changes. I've been following the documentation on this and I believe it is possible to deploy with docker-compose.yml, but I'm wondering whether my configuration is enough. I linked the Docker Hub images inside my docker-compose file as well. I'm not able to request logs either, as the state must be 'ready' and it is constantly 'updating'.
Anyone with any experience with such deployment that wants to share any tips or configuration settings? Thank you.
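For anyone comparing, a minimal docker-compose.yml of this shape (two services pulling images from Docker Hub) might look roughly like the following; the service names, images, and ports here are illustrative placeholders, and the file needs to sit at the root of the uploaded .zip:
version: "3.8"
services:
  app:
    # placeholder image name; the real one lives on Docker Hub
    image: mydockerhubuser/app:latest
    expose:
      - "8000"
  proxy:
    # placeholder image name; publishes port 80 for the instance to serve
    image: mydockerhubuser/proxy:latest
    ports:
      - "80:80"
    depends_on:
      - app
A 502 Bad Gateway from nginx generally means nothing is listening where the proxy expects it, so checking which service actually publishes port 80 is a reasonable first step.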

Local VMware takes a long time to pull the Docker image from AWS ECR

Recently I moved my registry from Docker Hub to AWS ECR.
I'm using Jenkins to pull the image and deploy to a local VMware host, with Docker Swarm as the container orchestration tool. When I was using Docker Hub, Jenkins was able to pull and deploy the Docker services successfully, but with AWS ECR the Jenkins job is UNSTABLE.
The Jenkins job is timing out. When I checked on the server, some images had been pulled successfully but some had not.
docker pull takes much longer when we use AWS ECR. Any idea?
This could be due to many reasons:
Network/firewall issues.
Storage volume slowness, etc.
I think first you need to narrow down the issue. Did you check the latency between the Jenkins worker and AWS ECR? I would suggest logging directly into the Jenkins worker and trying to pull the image by hand. If the direct pull works without any slowness, you may have to dig into Jenkins to understand what's happening.
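A quick way to test that from the Jenkins worker is to authenticate and time a pull by hand; the region, account ID, and repository name below are placeholders:
# log in to ECR, then time a manual pull of the image
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
time docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
If the manual pull is also slow, the problem is on the network side rather than anything Jenkins is doing.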
I was able to resolve the issue. Because of network slowness, Jenkins was timing out.
So I removed the timeout limit from Jenkins and then it worked fine:
Go to Post-build Actions
Open Advanced
Set Exec timeout (ms) to 0
Save and rebuild the job

How to debug failed NetCore AWS Elastic Beanstalk deployment?

I have a .NET Core AWS Elastic Beanstalk environment which has started failing to deploy. The environment waits up to 10 minutes for the health check to pass, but consistently gets "403 - Forbidden: Access is denied.".
I've RDP'd to the environment and the folder C:\inetpub\AspNetCoreWebApps is empty. In working environments, this contains the code.
I've tried redeploying the entire environment and deploying a package from a week ago which was previously fine. Additionally, I've tried deploying using the AWS Toolkit for Visual Studio and by uploading a package rather than using CodePipeline. All of these actions result in the same behaviour.
I'm struggling to find any logs which indicate why the code isn't being deployed to the environment. Requesting the last 100 lines doesn't return anything useful, and I can't find any deployment logs on the filesystem. In the pulled logs there is no AWS.DeploymentCommands log file.
The environment is configured for rolling deployments with an additional batch, so it is a fresh EC2 instance that is being deployed to.
What is the next step in debugging the cause of the failure?
I was able to diagnose the problem: a public file referenced in the .ebextensions folder had been deleted. The log file I was looking for was C:\cfn\log\cfn-init.txt.
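For anyone hitting the same wall, a quick way to inspect that log over an RDP session is a couple of PowerShell one-liners; the -Tail length is arbitrary:
# show the most recent cfn-init activity on the instance
Get-Content C:\cfn\log\cfn-init.txt -Tail 100
# list whatever other logs live in the same folder
Get-ChildItem C:\cfn\log\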

What triggers Elastic Beanstalk to pull in an updated Docker image

I have an Elastic Beanstalk application running and configured to serve a Docker container ("generic Docker" configuration) and linked to a private image on Docker Hub.
How can I prompt the Elastic Beanstalk application to download the latest version of the Docker Hub image after pushing up a new version with docker push?
Do I need to "restart the app server", "rebuild the environment", something else, or is it supposed to pull the image in automatically? I'm not seeing this addressed in the docs.
** EDIT **
To be clear, eb deploy does NOT pull in an updated Docker image, but it does push up the files from your application directory to your ec2 instances.
So, at the end of the day, I'm probably not going to use docker push for deployments. I'll use it just to keep the image up to date for the cases where you only need environment configuration changes rather than code changes, or when bringing on a new developer who can then docker pull the image.
Currently eb deploy my-environment-name is working great for Docker based Elastic Beanstalk deployments.
You just need to run eb deploy from the command line. Here is a nice tutorial: http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk.
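A typical end-to-end flow looks like the sketch below; the image and environment names are placeholders:
# build and push the new image to the registry your app references
docker build -t myorg/myapp:latest .
docker push myorg/myapp:latest
# create and deploy a new application version from the project directory
eb deploy my-environment-name
As the edit above notes, the deploy step uploads the project files; whether the instance actually pulls a newer image depends on the tag referenced in your Dockerfile or Dockerrun.aws.json.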

Deploy to elasticbeanstalk via CLI deploy command with Dockerrun.aws.json

I am running an elasticbeanstalk application, with multiple environments. This particular application is hosting docker containers which host a webservice.
To upload and deploy a new version of the application to one of the environments, I can go through the web client and click on "Upload and Deploy" and from the file option I select my latest Dockerrun.aws.json file, which references the latest version of the container that is privately hosted. The upload and deploy works fine and without issue.
To make it simpler for myself and others to deploy, I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the CLI eb deploy command without any special configuration, the normal process of zipping up the whole application and sending it to the host occurs and fails (it cannot work out that it only needs to read the Dockerrun.aws.json file).
I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.
Using this syntax:
deploy:
  artifact: Dockerrun.aws.json
The file is uploaded and actually deploys successfully to the first batch of instances, and then always fails to deploy to the second set of instances.
The failure error is of the flavor: 'container exited unexpectedly...'
Can anyone explain, or provide link to the canonical approach for using the CLI to deploy single docker container applications?
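For context, the deploy/artifact setting from the question sits at the top level of .elasticbeanstalk/config.yml, alongside the values that eb init generates; the branch, application, environment, and region names below are placeholders:
branch-defaults:
  main:
    environment: my-environment-name
deploy:
  artifact: Dockerrun.aws.json
global:
  application_name: my-application
  default_region: us-east-1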
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running Docker container on the hosts was not being stopped by EB.
I think what was happening was that EB was sending something like
sudo docker kill --signal=SIGTERM $CONTAINER_ID instead of the more common sudo docker stop $CONTAINER_ID
The specific container I was running didn't respond to SIGTERM and so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it.
The issue wasn't the deployment methodology but rather confusion in the output that EB generated and my misinterpretation.
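If you hit the same behaviour, one lightweight fix (a sketch with a hypothetical service binary, not this specific container) is to make sure the entrypoint execs the real process, so that it runs as PID 1 and receives the SIGTERM instead of a wrapping shell swallowing it:
#!/bin/sh
# entrypoint.sh - hypothetical wrapper around /usr/local/bin/my-service
# exec replaces the shell, so the service becomes PID 1 and receives the
# SIGTERM that docker kill/stop delivers on shutdown
exec /usr/local/bin/my-service "$@"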
Since you have asked for a link, I am providing the one which I initially used to successfully test and deploy Docker using the Elastic Beanstalk CLI.
Kindly see if this helps you as well: https://fangpenlin.com/posts/2014/11/25/running-docker-with-aws-elastic-beanstalk/