How to work around Cfn action's character limit in CodePipeline - amazon-web-services

Using the AWS CDK, I have a CodePipeline that produces build artifacts for 5 different Lambda functions, and then passes those artifacts as parameters to a CloudFormation template. The basic setup is the same as this example, and the CloudFormation deploy action looks basically like this:
new CloudFormationCreateUpdateStackAction({
  actionName: 'Lambda_CFN_Deploy',
  templatePath: cdkBuildOutput.atPath('LambdaStack.template.json'),
  stackName: 'LambdaDeploymentStack',
  adminPermissions: true,
  parameterOverrides: {
    ...props.lambdaCode.assign(lambdaBuildOutput.s3Location),
    // more parameter overrides here
  },
  extraInputs: [lambdaBuildOutput],
})
However, when I try to deploy, I get this error:
1 validation error detected: Value at 'pipeline.stages.3.member.actions.1.member.configuration' failed to satisfy constraint:
Map value must satisfy constraint:
[Member must have length less than or equal to 1000, Member must have length greater than or equal to 1]
The CodePipeline documentation specifies that values in the Configuration property of the ActionDeclaration can be up to 1000 characters. If I look at the YAML output from cdk synth, the ParameterOverrides property comes out to 1351 characters. So that's a problem.
How can I work around this issue? I may need to add more Lambda functions in the future, so this problem will only get worse. Part of the problem is that the CDK code uses 'LambdaSourceBucketNameParameter' and 'LambdaSourceObjectKeyParameter' as the parameter names for each bucket/object pair in the configuration output, putting me at 61 * 5 = 305 characters lost just to verbosity. Could I get part of the way there by overriding those generated names?

I got some assistance from a CDK maintainer here, which let me get well under the 1000-character limit. Reproducing the workaround here:
LambdaSourceBucketNameParameter and LambdaSourceObjectKeyParameter are just the default parameter names. You can create your own:
lambda.Code.fromCfnParameters({
  bucketNameParam: new CfnParameter(this, 'A'),
  objectKeyParam: new CfnParameter(this, 'B'),
});
You can also name Artifacts explicitly, thus saving a lot of characters over the defaults:
const sourceOutput = new codepipeline.Artifact('S');
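Putting both tips together, here is a minimal sketch of how the shortened names could be wired up (shown with CDK v2 imports; the construct IDs 'B1'/'K1' and the artifact name 'L' are illustrative assumptions, not names from the answer):
import { CfnParameter } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';

// Inside the stack that defines the Lambda code: short CfnParameter IDs replace
// the verbose defaults 'LambdaSourceBucketNameParameter' / 'LambdaSourceObjectKeyParameter'.
const lambdaCode = lambda.Code.fromCfnParameters({
  bucketNameParam: new CfnParameter(this, 'B1'),
  objectKeyParam: new CfnParameter(this, 'K1'),
});

// A one-character artifact name keeps the generated S3 location references short.
const lambdaBuildOutput = new codepipeline.Artifact('L');

// The map produced by lambdaCode.assign(lambdaBuildOutput.s3Location) is now keyed
// by 'B1' and 'K1', so the generated ParameterOverrides configuration shrinks accordingly.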
EDIT 10-Jan-2020
I finally got a response from AWS Support regarding the issue:
I've queried the CodePipeline team and searched through the current development workflows and couldn't find any current activity related to increasing the limit for parameters passed to a CloudFormation stack or any alternative method for this action, so we have issued a feature request based on your request for our development team.
I'm not able to provide an estimated time for this feature to be available, but you can follow the release on new features through the CloudFormation and CodePipeline official pages to check when the new feature will be available.
So for now, it looks like the CfnParameter workaround is the best option.

Related

AWS CDK conditional ImportValue

I'm importing an ARN from another stack with the cdk.Fn.importValue method. This works fine if I know that the output value is always present, but I don't know how to handle the case when the value I try to import is optional.
How can I get something similar to: (checking if the value exists before importing it)
if (value exists) {
  cdk.Fn.importValue("value")
}
AFAIK there currently is no way in CDK to perform a lookup of CloudFormation exports at synthesis time.
If you don't want to fiddle around with CloudFormation API calls via the aws-sdk before creating the CDK stack, in my opinion the most elegant way to share conditional values between stacks is to use SSM parameters instead of CloudFormation exports.
SSM parameters can be looked up during synthesis time. See docs: https://docs.aws.amazon.com/cdk/v2/guide/get_ssm_value.html
So, with StringParameter.valueFromLookup you are then able to use the value only if it exists (IIRC the method throws an error if the parameter doesn't exist, so try-catch is your friend here, but not 100% sure).
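For example, a minimal sketch of that pattern (the parameter name '/shared/optional-arn' is an assumption, and the throw-on-missing behaviour is only as reliable as the caveat above):
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ssm from 'aws-cdk-lib/aws-ssm';
import { Construct } from 'constructs';

export class ConsumerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    let sharedArn: string | undefined;
    try {
      // Resolved at synthesis time via the SSM context provider (cached in cdk.context.json).
      sharedArn = ssm.StringParameter.valueFromLookup(this, '/shared/optional-arn');
    } catch {
      // If the lookup fails because the parameter doesn't exist, treat the value as absent.
      sharedArn = undefined;
    }

    if (sharedArn) {
      // ...create the resources that depend on the imported ARN here...
    }
  }
}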

Selectively update Lambda version description for a stack

We have a setup where we deploy a stack that contains a bunch of Lambdas.
We want to version these lambdas (so far so good) and we want to update the version description to contain, say, the commit-id that was responsible for the new version (or some other identifier).
What happens in our current approach:
We pass the commit-id into the template
We set the description of each Lambda to contain the commit-id
CloudFormation takes the new version description (with the current commit-id) to be a change to the Lambda definition, but we don't change all Lambdas every time
The version description is immutable, so if the Lambda hasn't actually changed and we try to set a new one, the entire change set fails
Setting the description of only the changed lambdas is hard because we don't have that information at this stage.
Is there a straight-forward way for us to set the description for a lambda version to something that correlates with the contents?
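For reference, a hedged sketch of the approach described above, recast in CDK terms purely for illustration (the CommitId parameter and function names are assumptions; the question itself is template-agnostic):
import { Stack, StackProps, CfnParameter } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class LambdaVersionStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The commit-id is passed into the template as a parameter.
    const commitId = new CfnParameter(this, 'CommitId', { type: 'String' });

    new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
      // Setting the version description to the commit-id means the description
      // changes on every deploy, even when the function code itself has not
      // changed, which is exactly the failure mode described above.
      currentVersionOptions: { description: commitId.valueAsString },
    });
  }
}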

Where do I tell AWS SAM which file to choose depending on the stage/environment?

In the app.js, I want to require a different "config" file depending on the stage/account.
for example:
dev account: const config = require("config-dev.json")
prod account: const config = require("config-prod.json")
At first I tried passing it using build --container-env-var-file, but after getting undefined when using process.env.myVar, I think that env file is used at the build stage and has nothing to do with my function, though I could use it in the template creation stage.
So I'm looking now at deploy, and there are a few different things that seem relevant, but it's quite confusing to choose which one is relevant for my use case.
There is the config file, in which case I have no idea how to configure it since I'm in a pipeline context, so where would I instruct my process to use the correct JSON?
There are also parameters, and mappings.
My JSON is not just a few vars; it's a bit of a complex object. Nothing crazy, but not simple enough to pass the vars one by one.
So I thought a single parameter containing the filename that I want to use could do the job.
But I have no idea how to tell which stage of deployment I am currently in, or how to pass that value so it can be accessed from the Lambda function.
I also faced this issue while executing an AWS Lambda function locally. My issue was solved by this command:
try configuring your file using the sam build command
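One pattern that would fit the question, sketched here as an assumption rather than something the answer above confirms: pass the stage into the template as a parameter, surface it to the function as an environment variable (e.g. Environment / Variables / STAGE: !Ref Stage in template.yaml), and pick the config file from it in the handler:
// app.ts, a hedged sketch; the STAGE variable name and config file names are assumptions.
import devConfig from './config-dev.json';
import prodConfig from './config-prod.json';

// The SAM template maps the Stage parameter to an environment variable named STAGE.
const stage = process.env.STAGE ?? 'dev';
const config = stage === 'prod' ? prodConfig : devConfig;

export const handler = async () => {
  // config is now the dev or prod object, depending on the deployed stage.
  return { statusCode: 200, body: JSON.stringify(config) };
};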

Storing parameterized values in CloudFormation and referencing them

Is there a way to store variables in CloudFormation?
I've created a resource with a stage-specific name in the following form:
DeliveryStreamName: {'Fn::Sub': ['firehose-events-${Stage}', {'Stage': {'Ref': 'Stage'}}]}
Now if I have to create a CloudWatch alarm on that resource, I'm again following the same pattern:
Dimensions:
  - Value: {'Fn::Sub': ['firehose-events-${Stage}', {'Stage': {'Ref': 'Stage'}}]}
Instead, if I could store the whole value in one variable, it would be much easier for me to refer to it.
I initially thought of storing it in parameters, like this:
Parameters:
  FirehoseEvent: {Type: String, Default: 'firehose-events-${Stage}'}
But the Stage value doesn't seem to get passed in here. And there is no non-default value for this resource name either.
The other option I considered was using mapping, but that defeats the purpose of using ${Stage}.
Is there some other way which I've missed?
Sadly you haven't missed anything. Parameters can't reference other parameters in their definition.
The only way I can think of doing what you want would be through a custom macro. In its simplest form, the macro would just perform traditional find-and-replace template processing.
However, the time required to develop such a macro might not be worth the benefit, at least for the simple example you've provided in the question.
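For completeness, a minimal sketch of that find-and-replace idea as a macro-backed Lambda handler (the placeholder token '@@FirehoseEvent@@' and the simple string substitution are assumptions, not something from the answer above):
// macro-handler.ts: a CloudFormation macro receives the template fragment,
// transforms it, and returns it with the same requestId.
export const handler = async (event: any) => {
  const stage = event.templateParameterValues?.Stage ?? 'dev';
  const replacement = `firehose-events-${stage}`;

  // Recursively replace the token in every string value of the fragment.
  const walk = (node: any): any => {
    if (typeof node === 'string') {
      return node.split('@@FirehoseEvent@@').join(replacement);
    }
    if (Array.isArray(node)) {
      return node.map(walk);
    }
    if (node && typeof node === 'object') {
      return Object.fromEntries(Object.entries(node).map(([k, v]) => [k, walk(v)]));
    }
    return node;
  };

  return {
    requestId: event.requestId,
    status: 'success',
    fragment: walk(event.fragment),
  };
};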

Google Deployment Manager - BigTable example

I have been trying this example provided in Google's Deployment Manager GitHub project.
It works, yet I am not sure what the purpose is of creating three instances named instance_create, instance_update and instance_delete.
For example, taken from the link:
instance_create = {
    'name': 'instance_create',
    'action': 'gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create',
    'properties': {
        'parent': project_path,
        'instanceId': instance_name,
        'clusters': copy.deepcopy(initial_cluster),
        'instance': context.properties['instance']
    },
    'metadata': {
        'runtimePolicy': ['CREATE']
    }
}
What is the purpose of `action` and `metadata.runtimePolicy`? I have tried to find it in the documentation but failed miserably.
Why are there three `BigTable` instances there?
You are right, the documentation is missing the information that would answer your questions regarding these parameters.
However, it helps to know what's going on in the Deployment Manager example you linked.
First of all, the following line in the config.yaml is where things get tricky:
resources:
- name: my-bigtable
  type: bigtable.py
This line will call the bigtable.py Python file, which defines the deployment's resources in its GenerateConfig function. See how this is done here.
The resources are returned as {'resources': resources} at the end of it, where the resources variable is a list of templates created there.
These templates have different name identifiers, which are set by the "name" tag.
So you are not creating three different instances named instance_create, instance_update and instance_delete in this file; you are creating three templates with those names, which are later appended to the resources list and returned to the config.yaml resources.type tag.
These templates will then be sequentially built and executed by Deployment Manager once the create command is used. Note that they might appear out of order; this is due to not using a schema.
It's easier to see this structure in a .yaml file format. For example, built with jinja, the template you posted would be:
resources:
- action: gcp-types/bigtableadmin-v2:bigtableadmin.projects.instances.create
  name: instance_create
  metadata:
    runtimePolicy:
    - CREATE
  properties:
    clusters:
      initial:
        defaultStorageType: HDD
        location: projects/<PROJECT_ID>/locations/<PROJECT_LOCATION>
        serveNodes: 4
    instance:
      displayName: My BigTable Instance.
      type: PRODUCTION
    instanceId: my-instance
    parent: projects/<PROJECT_ID>
Notice that the parameters under properties are the fields in the request body of bigtableadmin.projects.instances.create (which nests a clusters object and an instance object). Note that the instanceId under properties is always the same, hence the BigTable instance on which the templates do their calls is always the same one.
The thing is that the example you linked not only creates several templates to be run in the same script, but the resource type of each template is a call to the BigTable API.
Normally the template resources are specified with the type tag, but since you are calling a resource that directly runs an API call (i.e. instead of just specifying gcp-types/bigtableadmin-v2, you are specifying bigtableadmin-v2:bigtableadmin.projects.instances.create), the action tag is used. I haven't found this difference in usage documented anywhere, but it needs to be specified like that.
You will know you are calling an API 'endpoint' directly if the resource ends with create, update or delete.
Finally, I have investigated on my side, and metadata.runtimePolicy is tied to the fact that the resource type is an API call (as in the previous point). Once again, I haven't found this documented anywhere.
However, since this is a requirement, you will always have to set the correct value in this field. It basically boils down to having metadata.runtimePolicy set to these values, depending on which type of API call you do:
create -> ['CREATE']
update -> ['UPDATE_ON_CHANGE']
delete -> ['DELETE']
Summarizing:
You are not creating three different instances, but three different templates, which do the work on the same BigTable instance.
You need to change the resource type flag to action if you are calling an API endpoint (create/update/delete), instead of just naming the base API.
The metadata.runtimePolicy value is a requirement when making a call to one of the aforementioned endpoints.