CloudFormation viewing inactive/deleted change sets - amazon-web-services

I have a CloudFormation stack which is in an active state. I executed a change set on the specific stack.
Once executed, the change set no longer appears as a value on the CloudFormation console. However, if I do a describe-change-set operation with the change set ARN, I get the details of the change set.
CloudFormation has a list-stacks API which lists deleted and active stacks. Is there any API to list inactive/expired/deleted change sets? Is that even possible?

No, there doesn't appear to be any API that can list inactive (but not deleted) CloudFormation Change Sets.
The ListChangeSets API is described as follows:
Returns the ID and status of each active change set for a stack.
Once a Change Set is executed, it is not deleted, but enters the EXECUTE_COMPLETE state. The Change Set is still referenced by the stack in the Stack.ChangeSetId property returned by the DescribeStacks API, as used by the CloudFormation Console's Change Sets tab, though it no longer appears in the ListChangeSets output.
Beyond that, since Change Sets become unusable as soon as the stack has been updated, there's not really any other use for them. I'm also surprised they are still retained (indefinitely?). I wouldn't be surprised if a more explicit/controllable lifecycle for inactive Change Sets is eventually added, since this feature is still less than a year old.
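Since the executed change set stays reachable through Stack.ChangeSetId and DescribeChangeSet still answers for its ARN, the pair can be used to audit what a stack update actually applied. The following is an added example (not code from the original post); it assumes boto3 credentials and takes any client-like object, so it can be exercised without AWS:

```python
"""Added audit: find executed change sets that ListChangeSets omits but the stack still references."""
from typing import Dict, List


def find_unlisted_change_sets(stacks: List[dict], listed: Dict[str, List[dict]]) -> List[dict]:
    """stacks: the 'Stacks' list from DescribeStacks; listed: StackName -> 'Summaries' from
    ListChangeSets. Returns stacks whose Stack.ChangeSetId is missing from the active list."""
    found = []
    for stack in stacks:
        cs_arn = stack.get("ChangeSetId")
        if not cs_arn:
            continue  # stack has never been updated through a change set
        active = {s["ChangeSetId"] for s in listed.get(stack["StackName"], [])}
        if cs_arn not in active:
            found.append({"StackName": stack["StackName"], "ChangeSetId": cs_arn})
    return found


def _paginate(call, key: str, **kwargs) -> List[dict]:
    """Follow NextToken until the API stops returning one."""
    items, token = [], None
    while True:
        resp = call(**kwargs, **({"NextToken": token} if token else {}))
        items.extend(resp.get(key, []))
        token = resp.get("NextToken")
        if not token:
            return items


def audit(client) -> List[dict]:
    """Executed change sets are gone from ListChangeSets, but DescribeChangeSet by ARN still works."""
    stacks = _paginate(client.describe_stacks, "Stacks")
    listed = {s["StackName"]: _paginate(client.list_change_sets, "Summaries", StackName=s["StackName"])
              for s in stacks}
    found = find_unlisted_change_sets(stacks, listed)
    for item in found:
        detail = client.describe_change_set(ChangeSetName=item["ChangeSetId"])
        item["Status"] = detail.get("Status")  # expected EXECUTE_COMPLETE for an executed set
        item["ChangeCount"] = len(detail.get("Changes", []))
    return found


if __name__ == "__main__":
    import boto3  # real run only; the functions above take any client-like object
    for row in audit(boto3.client("cloudformation")):
        print(row)
```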

Related

How to renew a cloudformation created API Gateway API Key

I've created users with API Keys in a cloudformation yaml file. We want to renew one API Key but an API Key is immutable so has to be deleted and regenerated. Deleting an API Key manually and then hoping that rerunning the cloudformation script is going to replace it with no other ill effects seems like risky business. What is the recommended way to do this (I'd prefer not to drop and recreate the entire stack for availability reasons and because I only want to renew one of our API keys, not all of them)?
The only strategy I can think of right now is:
- change the stack so that the name associated with the API Key in question is changed
- deploy the stack (which should delete the old API Key and create the new one)
- change the stack to revert the 1st change, which should leave me with a changed API Key with the same name
- deploy the stack
Clunky eh!
It is indeed a bit clunky, but manually deleting it will not cause CloudFormation to recreate the API key, since it has an internal state of the stack in which the key still exists.
You could simply change the resource name of the API key and update the stack, but this will only work if you can have duplicate names for API keys, which I doubt, but I could not find confirmation in the docs.
This leaves only one way to do it, in two steps (if you want to keep the same name): one update to remove the old key, and a second update to create the new key. This can be achieved by simply commenting out the corresponding lines in the first step and subsequently uncommenting them for the second step, or, as you suggested, by changing the name of the API key and then changing it back.

Export/Outputs that don't exist preventing stack from updating/deleting

Using serverless to deploy to AWS.
I created a Cognito user pool via serverless, then realised I wanted to change its attributes.
I couldn't deploy because you can't update attributes on an existing user pool.
"No problem - I'll just delete it and make it again," I thought. So I did.
But I had created two Outputs referencing the Client ID and Pool ID, so now I get this:
Export alpha-UserPoolId cannot be deleted as it is in use by alpha-Stack
I can't see any way to remove these references manually via the AWS console.
Anyone know what I can do to remove these dead references?
There's no option to manually remove an Output, and I tried editing the template but it didn't seem to actually do anything.
Thanks
[EDIT: Check comments for full details on solution]
You have to edit the importing stack to not rely on these values, afterwards you can remove them.
As long as there is an Fn::ImportValue somewhere, it won't let you delete the export.
From the docs:
The following restrictions apply to cross stack references
...
You can't delete a stack if another stack references one of its outputs.
You can't modify or remove an output value that is referenced by another stack.

Cloudformation: The resource you requested does not exist

I have a CloudFormation stack which has a Lambda function that is mapped as a trigger to an SQS queue.
What happened was that I had to delete the mapping and create it again manually because I wanted to change the batch size. Now when I want to update the mapping, CloudFormation throws an error with the message "The resource you requested does not exist."
The resource mapping code looks like this:
"EventSourceMapping": {
  "Properties": {
    "BatchSize": 5,
    "Enabled": "true",
    "EventSourceArn": {
      "Fn::GetAtt": ["ProcessorQueue", "Arn"]
    },
    "FunctionName": {
      "Fn::GetAtt": ["ProcessorLambda", "Arn"]
    }
  },
  "Type": "AWS::Lambda::EventSourceMapping"
}
I know that I've deleted the mapping CloudFormation created initially and added it manually, which is causing the issue. How do I fix this? Because I cannot push any update now.
Please help
What you did, from my perspective, is a mistake. When you use CloudFormation you are not supposed to apply changes manually. You can, and maybe that's fine, since one may not care about the stack once it is created. But since you are trying to update the stack, this tells me that you want to keep the stack and update it on a time basis.
To narrow down your problem, first let me make clear that the manually-created mapping is out of sync with your CloudFormation stack. So, from a CloudFormation perspective, it doesn't matter if you keep that mapping or not. I'm wondering what would happen if you keep the manually-created mapping and create a new one from CloudFormation? Maybe it will complain, since you would have repeated mappings for the same pair of (lambda, queue). Try this:
1. Create a change set for your stack where you completely remove the EventSourceMapping resource from your script. This step is basically to clean up loose references. Apply the change set.
2. Then, and this is where I think you may get some kind of issue, add EventSourceMapping back to your stack.
If you get errors in step 2, like "this mapping already exists", you will have to remove the manually-created mapping from the console. And then try step 2 again.
You probably know now that you should not have removed the resource manually. If you change the CF template, you can update it without changing resources which did not change in CF. You can try to replace the resource with the exact same physical name: https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/ The other option is to remove the resource from CF, update, and then add it back and update again (from the same doc).
While the comments above are valid, I found it interesting that no one mentioned a much simpler option: using SAM commands (sam build/sam deploy). It's understandable that during the development process and designing the architecture, there might be flaws and situations where manual input in the console is necessary, therefore there's something I refer to every time I have a similar issue.
Simply comment out the chunk of code that is creating trouble and run sam build/deploy on top of it; the CloudFormation stack will recognize that the resource is no longer in the template and will delete it.
Now, since the resource is no longer in the architecture anyway (removed manually prior), it will have no issues passing the step and successfully updating the stack.
Then simply uncomment, make any necessary changes (if any) and deploy.
Works every time.
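The remove-then-re-add procedure the answers describe can be driven from boto3 instead of by hand-editing the template twice. This is an added example, not code from any of the posters; it assumes a JSON template as on the page, a placeholder stack name, and it refuses to drop a resource that other resources still reference via Ref or Fn::GetAtt, since that first update would fail:

```python
"""Added example of the two-step recovery described above, using boto3 (stack name is a placeholder)."""
import copy
import json


def references(node, logical_id: str) -> bool:
    """True if any Ref or Fn::GetAtt under `node` points at `logical_id`."""
    if isinstance(node, dict):
        if node.get("Ref") == logical_id:
            return True
        get_att = node.get("Fn::GetAtt")
        if isinstance(get_att, list) and get_att and get_att[0] == logical_id:
            return True
        if isinstance(get_att, str) and get_att.split(".", 1)[0] == logical_id:
            return True
        return any(references(v, logical_id) for v in node.values())
    if isinstance(node, list):
        return any(references(v, logical_id) for v in node)
    return False


def without_resource(template: dict, logical_id: str) -> dict:
    """Copy of the template minus one resource. Refuses if anything else still depends on it,
    since CloudFormation would reject that update anyway."""
    stripped = copy.deepcopy(template)
    stripped["Resources"].pop(logical_id)  # KeyError if the logical id is not in the template
    if references(stripped, logical_id):
        raise ValueError(f"{logical_id} is still referenced via Ref/Fn::GetAtt; remove those first")
    return stripped


def two_step_update(client, stack_name: str, template: dict, logical_id: str) -> list:
    """Step 1: update without the resource, so the stack forgets the stale physical id of the
    manually deleted mapping. Step 2: update with the full template, which recreates it."""
    statuses = []
    for body in (without_resource(template, logical_id), template):
        client.update_stack(StackName=stack_name, TemplateBody=json.dumps(body),
                            Capabilities=["CAPABILITY_IAM"])  # needed if the stack creates IAM roles
        client.get_waiter("stack_update_complete").wait(StackName=stack_name)
        stack = client.describe_stacks(StackName=stack_name)["Stacks"][0]
        statuses.append(stack["StackStatus"])
    return statuses


if __name__ == "__main__":
    import boto3
    with open("template.json") as fh:  # placeholder path for the page's JSON template
        tmpl = json.load(fh)
    print(two_step_update(boto3.client("cloudformation"), "my-stack", tmpl, "EventSourceMapping"))
```

If the manually created mapping still exists when step 2 runs, CloudFormation will report the conflict the answer above anticipates, and that mapping has to be removed in the console before retrying.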

Google Cloud Storage metadata updates

I have a bit of a two-part question regarding the nature of metadata update notifications in GCS. // For the mods: if I should split this into two, let me know and I will.
I have a bucket in Google Cloud Storage, with Pub/Sub notifications configured for object metadata changes. I routinely get doubled metadata updates, seemingly out of nowhere. What happens is that at one point, a Cloud Run container reads the object designated by the notification and does some things that result in
a) a new file being added.
b) an email being sent.
And this should be the end of it.
However, approx. 10 minutes later, a second notification fires for the same object, with the metageneration incremented but no actual changes being evident in the notification object.
Strangely, the ETag seems to change minimally (CJ+2tfvk+egCEG0 -> CJ+2tfvk+egCEG4), but the CRC32C and MD5 checksums remain the same - this is correct in the sense that the object is not being written.
The question is twofold, then:
- What exactly constitutes an increment in the metageneration attribute, when no metadata is being set/updated?
- How can the ETag change if the underlying data does not, as shown by the checksums (I guess the documentation does say "that they will change whenever the underlying data changes"[1], which does not strictly mean they cannot change otherwise).
1: https://cloud.google.com/storage/docs/hashes-etags#_ETags
As commented by #Brandon Yarbrough, if the metageneration number increases, the most likely cause is an explicit call from somewhere unexpected to update the metadata in some fashion, and a way to verify that no extra update calls are being executed is by enabling Stackdriver or bucket access logs.
Regarding the ETag changes, the ETag documentation on Cloud Storage states that
Users should make no assumptions about those ETags except that they will change whenever the underlying data changes.
This indicates that the only scenario that is guaranteed that the ETag will be changed is on the data change, however, other events may trigger an ETag change as well, so you should not use ETags as a reference for file changes.

User X not authorized to perform cloudformation:CreateChangeSet on resource arn:cloudformation:ap-xx-x:transform:Serverless-2016-10-31

Beginning with a new stack, I get the error message as in the title.
I am using SAM, and I am confused why it wants to update the macro.
I thought this macro is provided by AWS, and I wonder why it is requesting to modify it.
My template spins up a lambda, a database and a REST API, but does not even try to touch existing macros.
My template did contain the TableName tag for a DynamoDB table.
As I am aware, named tables cannot be updated if resource replacement is required. I was not trying to do updates on that resource though.
The table existed before I cloudformed that new stack though.