Boot Script received the following error: Cfn-Signal failed to execute

I am creating an ASG using a CloudFormation template, and the instances created by the Auto Scaling group are failing to send the success signal to the CloudFormation stack.
This happens even though the system boots up successfully.
There is no problem when I create EC2 instances manually; I only see the issue with the ASG-created instances.
Here is the error:
Boot Script received the following error: Cfn-Signal failed to execute
Thanks in advance for helping me
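In my experience the usual causes are that cfn-signal is missing from the instance (the aws-cfn-bootstrap package was never installed), or that UserData calls it with the wrong --resource: when the CreationPolicy sits on the ASG, the signal must name the Auto Scaling group's logical ID, not the instance. A minimal UserData sketch, assuming placeholder names MyStack and MyAutoScalingGroup and region us-east-1:

#!/bin/bash -xe
# Install the CloudFormation helper scripts (provides /opt/aws/bin/cfn-signal
# on Amazon Linux; the package manager and path may differ on other distros).
yum install -y aws-cfn-bootstrap
# ... your boot script here ...
# Signal the boot script's exit code ($?) to the ASG's CreationPolicy.
# MyStack, MyAutoScalingGroup, and us-east-1 are placeholders for your values.
/opt/aws/bin/cfn-signal -e $? --stack MyStack --resource MyAutoScalingGroup --region us-east-1

The CreationPolicy on the Auto Scaling group also needs a Count and Timeout that the signals can realistically satisfy.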

Related

AWS Deployment failure due to missing appspec.yml when it is already present

I set up the pipeline three months ago and everything has been running fine with the same appspec.yml. But now AWS CodeDeploy suddenly gives an error that it can't find the appspec.yml, although it is already there. It fails at the very first ApplicationStop event. The error is as follows:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
My appspec.yml is as follows:
version: 0.0
os: windows
files:
  - source: \
    destination: c:\home\afb
file_exists_behavior: OVERWRITE
I also have the folder (c:\home\afb) already created on my EC2 instance. The health of the EC2 instance is fine, as I can see on the Dashboard, and I can also access it via RDP. The CodeDeploy agent is also running fine on the instance.
Please help. Thanks in advance for any advice!
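One detail worth knowing here: the ApplicationStop hook is run from the previously deployed revision that the agent has cached, not from the bundle you are deploying now, so a broken or stale cached revision can fail the deployment before your current appspec.yml is even read. A sketch of one way to get past a stuck ApplicationStop, using the --ignore-application-stop-failures flag of aws deploy create-deployment (the application, deployment group, bucket, and key names are placeholders):

aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --s3-location bucket=my-bucket,key=my-app.zip,bundleType=zip \
  --ignore-application-stop-failures

Once a good revision has been deployed this way, subsequent deployments run its ApplicationStop scripts and should no longer need the flag.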

Unable to deploy code on ec2 instance using codedeploy

I have a single EC2 instance running Ubuntu Server and I am trying to implement a CI/CD flow using CodeDeploy, with Bitbucket as the source. I have also installed the codedeploy-agent on the EC2 instance, and it is running successfully, but whenever I deploy code to the instance the deployment fails with the error shown below:
The overall deployment failed because too many individual instances failed deployment, too few
healthy instances are available for deployment, or some instances in your deployment group are
experiencing problems.
The CodeDeploy agent log file, which I am viewing with less /var/log/aws/codedeploy-agent/codedeploy-agent.log, shows the error below:
ERROR [codedeploy-agent(31598)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Missing credentials - please check if this instance was started with an IAM instance profile
I am unable to understand how I can overcome this error; could someone let me know?
The CodeDeploy agent requires the IAM permissions provided by the IAM role/instance profile of your instance. The exact permissions needed are given in the AWS docs:
Step 4: Create an IAM instance profile for your Amazon EC2 instances
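As a concrete illustration, here is a minimal CLI sketch of what that doc step amounts to. The role, profile, and file names plus the instance ID are placeholders; the managed policy ARN is the one AWS provides for CodeDeploy EC2 instances:

# ec2-trust.json (hypothetical file): a trust policy letting EC2 assume the role, e.g.
# {"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}
aws iam create-role --role-name CodeDeployEC2Role --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name CodeDeployEC2Role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
aws iam create-instance-profile --instance-profile-name CodeDeployEC2Profile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployEC2Profile --role-name CodeDeployEC2Role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CodeDeployEC2Profile
# Restart the agent so it picks up the new instance-profile credentials:
sudo service codedeploy-agent restart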

Terminate AWS EC2 instance when SSM Run Command status changes

I would like to (1) launch an AWS EC2 instance, (2) run a shell script (that sends output to an S3 bucket) and (3) terminate the instance automatically when the script terminates, all remotely without logging into the instance. I have managed to get parts (1) and (2) working using the AWS CLI commands aws ec2 run-instances and aws ssm send-command. I am struggling with part (3) - getting the instance to terminate automatically when the script completes.
I have seen in the AWS docs that you can use CloudWatch to monitor the SSM Run Command status, and I thought that this might be a solution - when the status changes, terminate the instance. Is this a feasible option? If so, how do you implement it using AWS CLI?
Within the SSM script, you can issue a command to the operating system to shut down the computer. If you launched the instance with a shutdown behavior of Terminate, this will terminate the instance.
Alternatively, the script can retrieve the Instance ID of the instance it is running on, and issue the aws ec2 terminate-instances command, specifying its own Instance ID.
See: Self-Terminating AWS EC2 Instance?
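A minimal sketch of both approaches in one script body (the S3 destination, region, and the work itself are placeholders; instance-ID lookup uses the standard instance metadata endpoint, and the second option requires the instance profile to allow ec2:TerminateInstances):

#!/bin/bash
# Hypothetical script body sent via `aws ssm send-command`.
# ... do the work and copy output to S3 here ...
aws s3 cp ./output.log s3://my-bucket/output.log

# Option 1: if the instance was launched with
#   aws ec2 run-instances ... --instance-initiated-shutdown-behavior terminate
# then a plain OS shutdown terminates it:
#   shutdown -h now

# Option 2: self-terminate explicitly, using the instance's own ID from the
# metadata service (an IMDSv2-only instance would first need a session token):
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" --region us-east-1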

Using command line to create AWS Elastic Beanstalk fail

I tried to set up EB for the worker tier using the following command:
eb create -t worker
But I received the following error:
2015-11-04 16:44:01 UTC+0800 ERROR Stack named 'awseb-e-wh4epksrzi-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry, AWSEBSecurityGroup].
2015-11-04 16:43:58 UTC+0800 ERROR Creating security group named: sg-7ba1f41e failed Reason: Resource creation cancelled
Is there something specific needed to run this from the command line?
I found the eb command line buggy. Try using the web console instead; it is much more reliable.

AWS Elastic Beanstalk: Command Hooks Failed

Trying to deploy an IIS application to AWS Elastic Beanstalk via Visual Studio 2015. Every time we try to deploy it we get the following errors:
Error occurred during build: Command hooks failed
[Instance: i-XXXXXXXX ConfigSet: Infra-WriteRuntimeConfig, Infra-WriteApplication1,
Infra-WriteApplication2, Infra-EmbeddedPreBuild, Hook-PreAppDeploy,
Infra-EmbeddedPostBuild, Hook-EnactAppDeploy, Hook-PostAppDeploy]
Command failed on instance. Return code: 1 Output: null.
Unsuccessful command execution on instance id(s) 'i-XXXXXXXX'. Aborting the operation.
We've tried restarting the Application. We've tried deleting and recreating the Environment and Application. All to no avail; it's always the same error. I cannot find anything online that explains how to fix this. Has anyone else run into this problem and found a solution?
Beanstalk deployment errors are usually quite generic.
To find the root cause, follow the troubleshooting steps in the AWS Elastic Beanstalk Troubleshooting doc:
Retrieve and investigate the EB logs.
If the retrieved logs are not helpful, SSH into the EC2 instance to get more details, for example to check that the instance configuration is really what you expect (see the commands below).
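For example, with the standard EB CLI (my-env is a placeholder environment name):

eb logs my-env          # pull the recent log bundle for the environment
eb logs my-env --all    # retrieve the full logs (saved under .elasticbeanstalk/)
eb ssh my-env           # open an SSH session to the instance for a closer look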
Before deploying, log in to the EC2 instance via SSH and fetch the log in real time:
tail -f /var/log/eb-engine.log
This will show the actual error explaining why the hooks failed.