Creating CloudFormation stacks - amazon-web-services

So I am using the https://github.com/commonsearch/cosr-ops/ deployment tool for commonsearch. The tool creates clusters/instances via CloudFormation!
I had quite a few bugs with it, which I fixed myself, but this one I have been banging my head against for 2 days now.
Some resources fail to CREATE when calling make aws_elasticsearch_create.
The error is listed below. Any ideas?
The following resource(s) failed to create: [ElasticsearchLbLaunchConfiguration, ElasticsearchMasterLaunchConfiguration, ElasticsearchDataLaunchConfiguration]. . Rollback requested by user.
I resolved this by changing the AMI IDs in the Mappings section.
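If anyone else hits this: one way to look up a current AMI ID for the Mappings section is the public SSM parameters for Amazon Linux. A sketch, assuming Amazon Linux 2 is suitable for these nodes (adjust the parameter path otherwise):

# Look up the latest Amazon Linux 2 AMI ID in the configured default region.
# Assumption: Amazon Linux 2 works for these instances; pick a different
# public parameter path for other distributions.
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --query 'Parameters[0].Value' \
  --output text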
BUT!
I ran into another problem.
This time it's not the launch configurations but the Auto Scaling groups.
See below:
The following resource(s) failed to create: [ElasticsearchLbAutoScalingGroup, ElasticsearchMasterAutoScalingGroup, ElasticsearchDataAutoScalingGroup]. . Rollback requested by user.
Any ideas are welcome. Help a fellow programmer sort this out :)

Finally, I made some progress!
You must signal the Auto Scaling resources from your local console for them to proceed. Here's how you do it:
aws cloudformation signal-resource --stack-name search-elasticsearch --logical-resource-id ElasticsearchDataAutoScalingGroup --status SUCCESS --unique-id ElasticsearchDataAutoScalingGroup
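The other two Auto Scaling groups presumably need the same signal; a sketch covering all three, assuming the same stack name and unique-id style as above:

# Signal each Auto Scaling group named in the error
for RESOURCE in ElasticsearchLbAutoScalingGroup ElasticsearchMasterAutoScalingGroup ElasticsearchDataAutoScalingGroup; do
  aws cloudformation signal-resource \
    --stack-name search-elasticsearch \
    --logical-resource-id "$RESOURCE" \
    --status SUCCESS \
    --unique-id "$RESOURCE"
done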
I've already hit the next error, but it's a validation error. I will try to figure it out.

Related

Error on trying to create a spot fleet request

How do I fix the error in the screenshot?
I've followed the guide at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/work-with-spot-fleets.html#spot-fleet-prerequisites, created the policy, and added it to the user in question, but the error persists.
Also tried our root admin account: same error.
Not sure what I'm doing wrong. I recall it working without a hitch about a year ago, for the same region (Frankfurt).
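One thing worth checking alongside the user policy is whether the Spot Fleet service-linked role exists in the account, since it's also part of the prerequisites that guide covers. A hedged sketch of how to check and create it:

# Check whether the Spot Fleet service-linked role already exists
aws iam get-role --role-name AWSServiceRoleForEC2SpotFleet

# If it doesn't, create it (only needed once per account)
aws iam create-service-linked-role --aws-service-name spotfleet.amazonaws.com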

Errors during deployment to AWS using Terraform (cdktf)

I am trying to create or update Lambdas on AWS using Terraform CDKTF. During deployment, I am getting the following error:
"An event source mapping with SQS arn (\" arn:aws:sqs:eu-west-2:*******:*****-*****-******** \") and function (\" ******-******-****** \") already exists. Please update or delete the existing mapping with UUID *******-****-****-****-***********"
The asterisks are sensitive info I have swapped out.
Some of our Lambdas are invoked via SQS, which is what this mapping refers to. I assumed the first fix would be to remove the mappings that might already exist (from a previous deployment that may have partly gone through), but I am unsure where to find them, or whether they can even be deleted. I originally assumed that calling cdktf deploy would update these mappings and not throw the error at all.
Does anyone have any advice?
Your diagnosis seems right; there might be some stray resources left behind by an aborted or unfinished Terraform run. You should be able to clean up after these runs by running terraform destroy in the stack directory ./cdktf.out/stacks/..../. That should delete all resources previously created through this Terraform stack.
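If you'd rather remove just the offending mapping instead of destroying the whole stack, the AWS CLI can list and delete event source mappings directly. A sketch with a placeholder function name; the UUID is the one reported in the error message:

# List existing event source mappings for the Lambda in question
aws lambda list-event-source-mappings --function-name my-function

# Delete the stray mapping by the UUID from the error
aws lambda delete-event-source-mapping --uuid <uuid-from-error>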

AWS sam deploy with nested stacks - errors from child stacks don't bubble up

I'm just starting my serverless/cloudformation/AWS SAM journey. I've created a stack that has a resource of type AWS::CloudFormation::Stack, and I've separated some of my resources into that child stack.
When I do sam build and then sam deploy, I get the following error:
Embedded stack arn:aws:cloudformation:us-west-2:111111111111:stack/ParentStack-ChildStack-1QK94LXRA71CS/f9885e30-631c-11eb-bfd8-021cb123b7ed was not successfully created: The following resource(s) failed to create: [DynamoDBTable].

The following resource(s) failed to create: [ChildStack].
Of course, what I really want to know is which resource in the nested stack failed to create, and why. When I copy/paste the resources from the child stack into the parent .yaml file and rebuild/redeploy, I see:
One or more parameter values were invalid: Some index key attributes are not defined in AttributeDefinitions. Keys: [userID], AttributeDefinitions: [userId] (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: SMJDHUT0CQKM8IBQJVMAIJM4RRVV4KQNSO5AEMVJF66Q9ASUAAJG; Proxy: null)
This is what I want to see in the output when I build the parent stack: the errors that caused the child stack to fail.
This has led me to use a rather tortuous workflow: build the resources in the main stack, then separate them into an independent stack once they build properly. There's got to be a better way, and I'm sure the community knows something here that I don't.
How do y'all debug child stacks when you're on the CloudFormation train?
This is normal behaviour; you have to get the details from the AWS Console or use the AWS CLI in this case.
Deploy error reporting is not showing the reason of failure when using nested stacks. #5974
Feature Requests for nested stacks
Why doesn't the error tell me what's wrong?
CloudFormation hands the creation and updating of each resource to the service responsible for that resource type. When a resource fails to create or update, the backing service returns a reason to the stack, which gets logged as the Status Reason within the events. ChildStack is an AWS::CloudFormation::Stack resource, so it's created by CloudFormation itself, and as far as the parent stack is concerned, CloudFormation didn't hit an error doing its own work: everything on the CloudFormation side went fine, and the failure belongs to the resource inside the child that failed to create. So ChildStack only tells ParentStack that it failed because a resource inside it failed, and the actual reason is recorded in the child stack's own events.
You can read more about it here: CloudFormation troubleshooting.
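Concretely, you can pull the failure reasons straight from the child stack's own events with the CLI. A sketch using the embedded stack name from the error above (if the child stack was already deleted during rollback, pass its full ARN instead):

# Show which resources in the child stack failed and why
aws cloudformation describe-stack-events \
  --stack-name ParentStack-ChildStack-1QK94LXRA71CS \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table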

AWS CloudFormation - resources failed to create error

I was creating a CloudFormation stack using the sample JSON template shown at the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-opsworks.html
However, I am getting the same error again and again, i.e.:
Error link
Please help me in troubleshooting the error.
Regards,
Raghav

Trying to set up AWS IoT button for the first time: Please correct validation errors with your trigger

Has anyone successfully set up their AWS IoT button?
When stepping through with default values I keep getting this message: Please correct validation errors with your trigger. But there are no validation errors on any of the setup pages, or on the page with the error message.
I hate asking a broad question like this but it appears no one has ever had this error before.
This has been driving me nuts for a week!
I got it to work by using Custom IoT Rule instead of IoT Button for the IoT Type. The default rule name is iotbutton_xxxxxxxxxxxxxxxx and the default SQL statement is SELECT * FROM 'iotbutton/xxxxxxxxxxxxxxxx' (xxx... = serial number).
Make sure you copy the policy from the sample code into the execution role - I know that has tripped up a lot of people.
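For reference, the same custom rule can also be created from the CLI. A sketch with the button serial number, region/account, and Lambda function name left as placeholders:

# Create the custom topic rule (placeholders: button serial number,
# region, account ID, and the Lambda function name)
aws iot create-topic-rule \
  --rule-name iotbutton_XXXXXXXXXXXXXXXX \
  --topic-rule-payload '{
    "sql": "SELECT * FROM '\''iotbutton/XXXXXXXXXXXXXXXX'\''",
    "actions": [{"lambda": {"functionArn": "arn:aws:lambda:REGION:ACCOUNT_ID:function:MyButtonFunction"}}]
  }'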
I was getting the same error. The cause turned out to be that I had multiple certificates associated with the button. This was caused by starting over in the wizard and generating and loading a new cert & key each time. While this doesn't seem to be a problem on the device itself, the result was that on AWS I had multiple certs associated with the device.
Within the AWS IoT Resources view I eventually managed to delete all the resources. It took some fiddling to get the certs detached so they could be deleted. Once I had deleted all the resources, I returned to the wizard, created yet another cert & key pair, pushed the Lambda code, and everything works.
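The fiddling boils down to detaching the certificate from the thing (and from any attached policy), deactivating it, and then deleting it. A sketch with placeholder identifiers taken from the list-certificates output:

# Find the certificates and note their ARNs/IDs
aws iot list-certificates

# Detach a certificate from the thing and any attached policy, then
# deactivate and delete it (thing name, policy name, and cert IDs
# are placeholders)
aws iot detach-thing-principal --thing-name iotbutton_XXXXXXXXXXXXXXXX --principal <certificate-arn>
aws iot detach-policy --policy-name <policy-name> --target <certificate-arn>
aws iot update-certificate --certificate-id <certificate-id> --new-status INACTIVE
aws iot delete-certificate --certificate-id <certificate-id>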