I am trying to deploy a zip to Elastic Beanstalk. There is one EC2 instance, the environment health is green, and everything seems fine. When I deploy the zip I get:
ERROR During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
ERROR Failed to deploy application.
ERROR Unsuccessful command execution on instance id(s) 'xxx'. Aborting the operation.
I have deployed ten different zips, all with the same issue. I even tried to redeploy the zip that is currently deployed (several times), but even that fails. What could be the issue?
Some configuration:
Environment type: Load balanced, auto scaling
Number of instances: 1-1 (also tried 1-2 and 0-2)
Instance type: t2.medium
I can only deploy successfully if I terminate the EC2 instances manually and start a new deployment before the instances are scaled up again.
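For reference, that workaround looks roughly like this (a sketch; the instance ID, environment name, and version label are placeholders for your own values):

# terminate the stuck instance; the Auto Scaling group will replace it
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
# kick off the deployment before the replacement instance comes up
aws elasticbeanstalk update-environment --environment-name my-env --version-label my-new-version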
I was also facing the same issue: I couldn't deploy old builds or even the current one. It suddenly started working after a while; it seems like it was a temporary issue with AWS.
Related
When I try to run CodeDeploy, I get the following error:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
When I look at the EC2 instances created by the Auto Scaling group, they are both running, and their status shows 2/2 checks passed in green. I am wondering if this is just a catch-all error, because it is supposed to be thrown if one, some, or all of the instances are not running.
It seems it wanted an appspec.yml. I am not sure why an appspec.yml is needed: if you create the Auto Scaling group and create the CodeDeploy application, why is an appspec.yml still required? I am using Jenkins to run the CodeDeploy step. I gave it an appspec.yml that is essentially empty, and that seems to work.
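For anyone hitting the same thing, a near-minimal appspec.yml can be written like this (a sketch; CodeDeploy requires at least the version and os keys, and the files mapping here assumes you want the bundle copied to /var/www, which is an arbitrary example path):

cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www
EOF

CodeDeploy reads this file from the root of the revision bundle to decide what to copy and which lifecycle hooks to run; without it, the deployment fails before any of your code runs.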
I can't deploy a new version on Elastic Beanstalk.
Everything was working fine until I tried to deploy a new version, where I ran into lots of issues (it is not the first time I have deployed a new version to this environment; I have already deployed dozens). When I managed to fix all of them, I got these errors:
Failed to deploy application.
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
Unsuccessful command execution on instance id(s) 'i-...'. Aborting the operation
Redeploying the version does not work either.
Here are screenshots of the Elastic Beanstalk console and the Elastic Beanstalk events (images omitted).
The request-logs button in Elastic Beanstalk returns nothing.
The system log from the EC2 instance shows the logs of the last working version.
I enabled CloudWatch Logs from the Configuration navigation pane. It added four files to CloudWatch Logs (see the tail command after this list):
/var/log/eb-activity.log -> empty so far
/var/log/httpd/access_log -> empty so far
/var/log/httpd/error_log -> empty so far
/environment-health.log -> "Command is executing on all instances (56 minutes or more elapsed)." and "Incorrect application version found on all instances. Expected version 'prod-v1.7.28-0' (deployment 128)."
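To tail these streams from a terminal, something like the following should work (a sketch, assuming AWS CLI v2 and an environment named my-env; Elastic Beanstalk names the log groups after the environment and file path):

aws logs tail "/aws/elasticbeanstalk/my-env/var/log/eb-activity.log" --follow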
It is an Amazon Linux t2.medium instance with Apache as the web server.
What I have already tried:
Changing the name of the .zip each time so it differs from the zips already deployed
Terminating the instance and letting a new one be created automatically
Rebooting the instance
Rebuilding the Elastic Beanstalk environment
Deploying the simplest possible code
I tried to deploy a zip containing just the code below, but I got the same errors.
<html>
<head>
<title>This is the title of the webpage!</title>
</head>
<body>
<p>This is an example paragraph. Anything in the <strong>body</strong> tag will appear on the page, just like this <strong>p</strong> tag and its contents.</p>
</body>
</html>
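For completeness, this is how such a zip can be packaged and deployed from the command line (a sketch; the file name, S3 bucket, application name, environment name, and version label are all placeholders):

zip sample.zip index.html
aws s3 cp sample.zip s3://my-deploy-bucket/sample.zip
aws elasticbeanstalk create-application-version --application-name my-app \
    --version-label sample-test \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=sample.zip
aws elasticbeanstalk update-environment --environment-name my-env --version-label sample-test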
It always rolls back to the last working version, and when I try to deploy the new version it fails.
In some posts I have seen people suggest that the instance may be too small, but it was working perfectly before, and the instance size has not changed since then.
If you have any questions or ideas I will be very thankful.
Have a nice day!
Answer:
The issue was in the logs, as you said. I had to SSH into my EC2 instance to reach them. The error was in the file cfn-init-cmd.log: one of the commands was waiting for input, so it timed out with no error message.
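In case it helps someone, this is roughly how I got to that file (a sketch; eb ssh assumes the EB CLI is set up with a key pair for the environment):

eb ssh
sudo less /var/log/cfn-init-cmd.log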
You should check the Elastic Beanstalk logs for any hints as to what goes wrong with your deployment. The AWS console can be helpful for that. There are also the logs that can be pulled directly from the EC2 instance, and CloudWatch Logs is another thing to check.
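With the EB CLI, for example, the logs can be fetched in one go (a sketch, assuming the EB CLI is initialized against the environment):

# tail the most recent log lines
eb logs
# download the complete log bundle to .elasticbeanstalk/logs/
eb logs --all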
You should also check the Auto Scaling group and see whether there are any health checks configured there. What kind of checks are they? What is the grace period?
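Both settings are visible from the CLI (a sketch; the group name is a placeholder for the one Elastic Beanstalk created):

aws autoscaling describe-auto-scaling-groups \
    --auto-scaling-group-names my-asg \
    --query 'AutoScalingGroups[].[HealthCheckType,HealthCheckGracePeriod]'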
Here's a list of reasons that an EC2 instance status check could fail. You could also launch a larger EC2 instance for troubleshooting. The following are examples of problems that can cause instance status checks to fail:
Failed system status checks
Incorrect networking or startup configuration
Exhausted memory
Corrupted file system
Incompatible kernel
Also, rebuilding is a drastic step, as it destroys and recreates all your resources. Your ELB DNS name, for example, will be gone, and any associated Elastic IP will be released. These things can't be reclaimed.
I also faced the same issue. I deleted the broken application versions and increased the command timeout.
The default maximum deployment time (command timeout) is 600 seconds (10 minutes).
Go to Your Environment → Configuration → Deployment preferences → Command timeout.
Increase the command timeout to a higher value such as 1800, then try to deploy the previous working application version. It should work.
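The same setting can be changed from the CLI (a sketch; the environment name is a placeholder, and aws:elasticbeanstalk:command is the namespace that holds the deployment command timeout):

aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=Timeout,Value=1800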
I have an AWS Elastic Beanstalk application that I recently updated with an error in my (Flask) code (one line of Python with an unrecognized named argument), which resulted in an expected server error. I then tried to redeploy a previously functioning version of my app, but got an error that EB was not in a state that allowed deployment. I then attempted to abort the current operation, but got:
User initiated abort was received, however the current step of the operation in progress is not cancellable. The current operation will be aborted as soon as the non-cancellable step(s) complete.
Eventually the abort succeeded, but now all my attempts to update the EB app fail with
Environment update is starting.
Deploying new version to instance(s).
[Instance: i-xxxxxxxx] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
Unsuccessful command execution on instance id(s) 'i-xxxxxx'. Aborting the operation.
Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
followed by a warning in the events log
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
I'm now stuck in an infinite loop (regardless of where I try to update from: the command line, the EB console, or previous EB app versions), with all of my EB websites down. It doesn't seem like this should be possible, and I must be missing some operation I can perform to restart my application with a previously functioning version.
UPDATE: Here's what seems to have worked:
Delete CloudFormation stack
Rebuild Elastic Beanstalk environment
Repair A records where necessary to use new EC2 instance IP
Why any of this was necessary — and why nothing less drastic touched the problem — remains a mystery.
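For anyone repeating this, the first two steps map to CLI calls like these (a sketch; the stack name is whatever awseb- stack backs your environment, and the environment name is a placeholder):

# find the environment's backing stack (names start with awseb-)
aws cloudformation describe-stacks --query 'Stacks[].StackName'
aws cloudformation delete-stack --stack-name awseb-e-xxxxxxxxxx-stack
# then rebuild the environment
aws elasticbeanstalk rebuild-environment --environment-name my-env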
Yesterday I attempted to follow this answer from Stack Overflow to add multiple users to the authorized_keys file in our Elastic Beanstalk environment for QA:
https://stackoverflow.com/a/46269015/1827986
However, after doing so, the deployment failed with the following error:
2018-10-16 19:04:22 INFO Environment update is starting.
2018-10-16 19:05:05 INFO Deploying new version to instance(s).
2018-10-16 19:06:09 ERROR [Instance: i-05cc43b96ffc69145] Command failed on instance. Return code: 1 Output: (TRUNCATED)...erform: iam:GetGroup on resource: group BeanstalkAccess
declare -a users_array='()'
chmod: cannot access '/home/ec2-user/.ssh/authorized_keys': No such file or directory
chown: cannot access '/home/ec2-user/.ssh/authorized_keys': No such file or directory.
Hook /opt/elasticbeanstalk/hooks/appdeploy/post/980_beanstalk_ssh.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2018-10-16 19:06:09 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2018-10-16 19:06:10 ERROR Unsuccessful command execution on instance id(s) 'i-05cc43b96ffc69145'. Aborting the operation.
2018-10-16 19:06:10 ERROR Failed to deploy application.
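As an aside, the truncated output reads as if the instance profile was not allowed to call iam:GetGroup on the BeanstalkAccess group that the hook script queries. A hedged sketch of granting that (the role name is the Elastic Beanstalk default instance profile and the policy name is made up; adjust both to your setup):

cat > allow-getgroup.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:GetGroup",
      "Resource": "arn:aws:iam::*:group/BeanstalkAccess"
    }
  ]
}
EOF
aws iam put-role-policy --role-name aws-elasticbeanstalk-ec2-role \
    --policy-name allow-getgroup --policy-document file://allow-getgroup.json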
I then tried to deploy a previous application version that was known to work, and that failed too. I attempted to rebuild the environment so that I could deploy a working application version; however, it keeps trying to deploy the same stuck version that is producing the errors.
Also, now after the rebuild I am getting a bunch of errors about the instance not being reachable by the ELB:
100.0 % of the requests are erroring with HTTP 4xx. Insufficient request rate (12.0 requests/min) to determine application health.
Command failed on all instances.
ELB health is failing or not available for all instances.
When I go into EC2, it shows the instance as running with green health.
I then went to Application Versions and deleted the version that would not deploy, hoping that the environment would restore from the working version. However, now it just gives the following error when I try to deploy the working version:
Environment health has transitioned from Degraded to Severe. Command failed on all instances. Incorrect application version found on all instances. Expected version "app-v1_5_13-719-gc533-181016_092351" (deployment 291). Application update failed 2 minutes ago and took 2 minutes. ELB health is failing or not available for all instances.
What else can I do to get this back to a working state?
This turned out to be a memory issue, since we were running on micro instances. I am not sure why we hit the limits all of a sudden; however, upgrading to a small instance solved the problem.
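If you manage the environment with the AWS CLI, the instance type can be bumped with an option setting like this (a sketch; the environment name and target type are placeholders, and aws:autoscaling:launchconfiguration is the classic namespace for the instance type):

aws elasticbeanstalk update-environment \
    --environment-name my-env \
    --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t2.small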
My CloudFormation stack produces a ScalingGroup which has MinSize and MaxSize set to 1. It also creates a DeploymentGroup that targets this ScalingGroup.
When the deployment group is configured with the configuration name CodeDeployDefault.OneAtATime, the deployment starts successfully.
When the deployment group is configured with the configuration name CodeDeployDefault.AllAtOnce, then upon creation of the stack CodeDeploy does nothing, and you can't see any events or log traces on the EC2 instance. There is only one mention of the error code HEALTH_CONSTRAINTS. If I terminate the instance, the scaling group launches a new instance, but again CodeDeploy does nothing. If I manually start a deployment with CodeDeployDefault.OneAtATime, it works.
From what I've read in the documentation, this should not happen:
There should be no health checks, because no instances exist in the deployment group yet.
CodeDeployDefault.AllAtOnce has a minimum healthy host percentage of 0, so it could be that the error is raised because the healthy host count equaled 0 and CodeDeploy decided not to continue, reporting the health error code.
My reading of CodeDeployDefault.AllAtOnce is that it will not do a health check at all, because conceptually there is no point: all instances will be configured at the same time.
Is my expectation correct, or am I doing something wrong?
The error code HEALTH_CONSTRAINTS means the CodeDeploy deployment failed because the configured healthy hosts ratio was not satisfied. You might want to go to the AWS CodeDeploy console, click on the deployment that was created during the CloudFormation run, and check why the deployment failed. The difference between CodeDeployDefault.OneAtATime and CodeDeployDefault.AllAtOnce can be found here: http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
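The same details are available from the CLI if you prefer it over the console (a sketch; the application name, deployment group name, and deployment ID are placeholders):

aws deploy list-deployments --application-name my-app --deployment-group-name my-group
aws deploy get-deployment --deployment-id d-XXXXXXXXX \
    --query 'deploymentInfo.errorInformation'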
CodeDeployDefault.AllAtOnce Attempts to deploy an application revision to as many instances as possible at once. The status of the overall deployment will be displayed as Succeeded if the application revision is deployed to one or more of the instances. The status of the overall deployment will be displayed as Failed if the application revision is not deployed to any of the instances. Using an example of nine instances, CodeDeployDefault.AllAtOnce will attempt to deploy to all nine instances at once. The overall deployment will succeed if deployment to even a single instance is successful; it will fail only if deployments to all nine instances fail.
CodeDeployDefault.OneAtATime Deploys the application revision to only one instance at a time.
For deployment groups that contain more than one instance:
The overall deployment succeeds if the application revision is deployed to all of the instances. The exception to this rule is if deployment to the last instance fails, the overall deployment still succeeds. This is because AWS CodeDeploy allows only one instance at a time to be taken offline with the CodeDeployDefault.OneAtATime configuration.
The overall deployment fails as soon as the application revision fails to be deployed to any but the last instance.
In an example using nine instances, it will deploy to one instance at a time. The overall deployment succeeds if deployment to the first eight instances is successful; the overall deployment fails if deployment to any of the first eight instances fails.
For deployment groups that contain only one instance, the overall deployment is successful only if deployment to the single instance is successful.
Since your deployment group contains only a single instance, note the last rule above: with CodeDeployDefault.OneAtATime, the overall deployment succeeds only if deployment to that single instance succeeds. Please check the deployment details in the AWS CodeDeploy console.