CDK adds random parameters - amazon-web-services

So I have this function I'm trying to declare, and it works and deploys just dandy unless you uncomment the logRetention setting. If logRetention is specified, the cdk deploy operation adds additional parameters to the stack. And, of course, this behavior is completely unexplained in the documentation.
https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-readme.html#log-group
SingletonFunction.Builder.create(this, "native-lambda-s3-fun")
        .functionName(funcName)
        .description("")
        // .logRetention(RetentionDays.ONE_DAY)
        .handler("app")
        .timeout(Duration.seconds(300))
        .runtime(Runtime.GO_1_X)
        .uuid(UUID.randomUUID().toString())
        .environment(new HashMap<String, String>() {{
            put("FILE_KEY", "/file/key");
            put("S3_BUCKET", junk.getBucketName());
        }})
        .code(Code.fromBucket(uploads, functionUploadKey(
            "formation-examples",
            "native-lambda-s3",
            lambdaVersion.getValueAsString()
        )))
        .build();
"Parameters": {
    "lambdaVersion": {
        "Type": "String"
    },
    "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3BucketB030C8A8": {
        "Type": "String",
        "Description": "S3 bucket for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
    },
    "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aS3VersionKey6A2AABD7": {
        "Type": "String",
        "Description": "S3 key for asset version \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
    },
    "AssetParametersceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40aArtifactHashEDC522F0": {
        "Type": "String",
        "Description": "Artifact hash for asset \"ceefd938ac7ea929077f2e2f4cf09b5034ebdd14799216b1281f4b28427da40a\""
    }
},

It's a bug. They're Working On It™. So, rejoice - we can probably expect a fix sometime within the next decade.
I haven't tried it yet, but I'm guessing the workaround is to manipulate the low-level CfnLogGroup construct, since it has the authoritative retentionInDays property. The relevant high-level Log Group construct can probably be obtained from the Function via its logGroup property. Failing that, the LogGroup can be created from scratch (which will probably be a headache all on its own).

I also encountered the problem described above. From what I can tell, we are unable to specify a log group name and thus the log group name is predictable.
My solution was to simply create a LogGroup with the same name as my Lambda function with the /aws/lambda/ prefix.
Example:
var function = new Function(
    this,
    "Thing",
    new FunctionProps
    {
        FunctionName = $"{Stack.Of(this).StackName}-Thing",
        // ...
    });

_ = new LogGroup(
    this,
    "ThingLogGroup",
    new LogGroupProps
    {
        LogGroupName = $"/aws/lambda/{function.FunctionName}",
        Retention = RetentionDays.ONE_MONTH,
    });
This does not create unnecessary "AssetParameters..." CF template parameters like the inline option does.
Note: I'm using CDK version 1.111.0 and 1.86.0 with C#

Related

How to automate the creation of elasticsearch index patterns for all days?

I am using a CloudWatch subscription filter which automatically sends logs to Elasticsearch on AWS, and then I use Kibana from there. The issue is that every day CloudWatch creates a new index, so each day I have to manually create the new index pattern in Kibana, and likewise create new monitors and alerts in Kibana as well. I have to automate this somehow. Also, if there is a better option I could go forward with, that would be great; I know Datadog is one good option.
A typical workflow looks like this (there are other methods):
Choose a naming pattern when creating indexes, like staff-202001, staff-202002, etc.
Add each index to an alias, like staff.
This can be achieved in multiple ways; the easiest is to create a template with an index pattern, alias, and mapping.
Example: any new index created matching the pattern staff-* will be assigned the given mapping and attached to the alias staff, and we can query staff instead of individual indexes and set up alerts on it.
The template below uses cwl--aws-containerinsights-eks-cluster-for-test-host as the pattern and alias name, so we can run queries against that alias.
POST _template/cwl--aws-containerinsights-eks-cluster-for-test-host
{
  "index_patterns": [
    "cwl--aws-containerinsights-eks-cluster-for-test-host-*"
  ],
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "firstName": {
        "type": "text"
      },
      "lastName": {
        "type": "text"
      }
    }
  },
  "aliases": {
    "cwl--aws-containerinsights-eks-cluster-for-test-host": {}
  }
}
Note: If unsure of mapping, we can remove mapping section.
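Once the template is in place, each daily index the subscription filter creates that matches the pattern automatically picks up the alias. You can verify which indexes an alias currently covers from Kibana Dev Tools (the same console used for the POST above):

```
GET _alias/cwl--aws-containerinsights-eks-cluster-for-test-host
```

Index patterns, monitors, and alerts in Kibana can then target the alias name once, instead of being recreated for each day's index.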

List of Active Directory DNS servers IP addresses in an SSM document

I am converting my 0.11 code to 0.12. Most things seem to be working out well, but I am really lost on the SSM document.
In my 0.11 code, I had this code:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"

  content = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": [
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[0]}",
          "${aws_directory_service_directory.microsoftad-lab.dns_ip_addresses[1]}"
        ]
      }
    }
  }
}
DOC

  depends_on = ["aws_directory_service_directory.microsoftad-lab"]
}
This worked reasonably well. However, Terraform 0.12 does not accept this code, saying
This value does not have any indices.
I have been trying to look up different solutions on the web, but I am encountering countless issues with datatypes. For example, one of the solutions I have seen proposes this:
"dnsIpAddresses": [
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[0]}",
  "${sort(aws_directory_service_directory.oit-microsoftad-lab.dns_ip_addresses)[1]}",
]
}
and I am getting
InvalidDocumentContent: JSON not well-formed
which is kinda weird to me, since when I look at the trace log, I seem to be getting relatively correct values:
{"Content":"{\n \"schemaVersion\": \"1.0\",\n \"description\": \"Automatic Domain Join Configuration\",\n \"runtimeConfig\": {\n \"aws:domainJoin\": {\n \"properties\": {\n \"directoryId\": \"d-9967245377\",\n \"directoryName\": \"012mig.lab\",\n \"dnsIpAddresses\": [\n \"10.0.0.227\",\n
\"10.0.7.103\",\n ]\n }\n }\n }\n}\n \n","DocumentFormat":"JSON","DocumentType":"Command","Name":"ssm_document_012mig.lab"}
I have tried concat and list to put the values together, but then I am getting the datatype errors. Right now, it looks like I am going around in loops here.
Does anyone have any direction to give me here?
Terraform 0.12 has stricter types than 0.11 and less automatic type coercion going on under the covers, so here you're running into the fact that the aws_directory_service_directory resource's dns_ip_addresses attribute isn't a list but a set:
"dns_ip_addresses": {
    Type:     schema.TypeSet,
    Elem:     &schema.Schema{Type: schema.TypeString},
    Set:      schema.HashString,
    Computed: true,
},
Sets can't be indexed directly; in 0.12 they must first be explicitly converted to a list.
As an example:
variable "example_list" {
  type = list(string)

  default = [
    "foo",
    "bar",
  ]
}

output "list_first_element" {
  value = var.example_list[0]
}
Running terraform apply on this will output the following:
Outputs:
list_first_element = foo
However if we use a set variable instead:
variable "example_set" {
  type = set(string)

  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = var.example_set[0]
}
Then attempting to run terraform apply will throw the following error:
Error: Invalid index
on main.tf line 22, in output "set_foo":
22: value = var.example_set[0]
This value does not have any indices.
If we convert the set variable into a list with tolist first then it works:
variable "example_set" {
  type = set(string)

  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = tolist(var.example_set)[0]
}
Outputs:
set_first_element = bar
Note that sets may have different ordering to what you might expect (in this case it is ordered alphabetically rather than as declared). In your case this isn't an issue, but it's worth thinking about when indexing and expecting the elements to be in the order you declared them.
Another possible option here: instead of building the JSON output element by element from the set or list, you could directly encode the dns_ip_addresses attribute as JSON with the jsonencode function:
variable "example_set" {
  type = set(string)

  default = [
    "foo",
    "bar",
  ]
}

output "set_first_element" {
  value = jsonencode(var.example_set)
}
Which outputs the following after running terraform apply:
Outputs:
set_first_element = ["bar","foo"]
So for your specific example we would want to do something like this:
resource "aws_ssm_document" "ssm_document" {
  name          = "ssm_document_${terraform.workspace}${var.addomainsuffix}"
  document_type = "Command"

  content = <<DOC
{
  "schemaVersion": "1.0",
  "description": "Automatic Domain Join Configuration",
  "runtimeConfig": {
    "aws:domainJoin": {
      "properties": {
        "directoryId": "${aws_directory_service_directory.microsoftad-lab.id}",
        "directoryName": "${aws_directory_service_directory.microsoftad-lab.name}",
        "dnsIpAddresses": ${jsonencode(aws_directory_service_directory.microsoftad-lab.dns_ip_addresses)}
      }
    }
  }
}
DOC
}
Note that I also removed the unnecessary depends_on. If a resource interpolates values from another resource, Terraform automatically understands that the interpolated resource needs to be created before the one referencing it.
The resource dependencies documentation goes into this in more detail:
Most resource dependencies are handled automatically. Terraform
analyses any expressions within a resource block to find references to
other objects, and treats those references as implicit ordering
requirements when creating, updating, or destroying resources. Since
most resources with behavioral dependencies on other resources also
refer to those resources' data, it's usually not necessary to manually
specify dependencies between resources.
However, some dependencies cannot be recognized implicitly in
configuration. For example, if Terraform must manage access control
policies and take actions that require those policies to be present,
there is a hidden dependency between the access policy and a resource
whose creation depends on it. In these rare cases, the depends_on
meta-argument can explicitly specify a dependency.

How to get logical ID of resource with CDK?

I'm attempting to write some tests for a CDK Construct that validates security group rules defined as part of the construct.
The Construct looks something like the following.
export interface SampleConstructProps extends StackProps {
  srcSecurityGroupId: string;
}

export class SampleConstruct extends Construct {
  securityGroup: SecurityGroup;

  constructor(scope: Construct, id: string, props: SampleConstructProps) {
    super(scope, id);

    // const vpc = Vpc.fromLookup(...);
    this.securityGroup = new SecurityGroup(this, "SecurityGroup", {
      vpc: vpc,
      allowAllOutbound: true,
    });

    const srcSecurityGroup = SecurityGroup.fromSecurityGroupId(
      this,
      "SrcSecurityGroup",
      props.srcSecurityGroupId
    );
    this.securityGroup.addIngressRule(srcSecurityGroup, Port.tcp(22));
  }
}
And I want to write a test that looks something like the following.
test("Security group config is correct", () => {
  const stack = new Stack();
  const srcSecurityGroupId = "id-123";

  const testConstruct = new SampleConstruct(stack, "TestConstruct", {
    srcSecurityGroupId: srcSecurityGroupId,
  });

  expect(stack).to(
    haveResource(
      "AWS::EC2::SecurityGroupIngress",
      {
        IpProtocol: "tcp",
        FromPort: 22,
        ToPort: 22,
        SourceSecurityGroupId: srcSecurityGroupId,
        GroupId: {
          "Fn::GetAtt": [testConstruct.securityGroup.logicalId, "GroupId"], // Can't do this
        },
      },
      undefined,
      true
    )
  );
});
The issue here is that the test is validated against the synthesized CloudFormation template, so if you want to verify that the security group created by this construct has a rule allowing access from srcSecurityGroup, you need the Logical ID of the security group that was created as part of the Construct.
You can see this in the generated CloudFormation template here.
{
  "Type": "AWS::EC2::SecurityGroupIngress",
  "Properties": {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "GroupId": {
      "Fn::GetAtt": [
        "TestConstructSecurityGroup95EF3F0F", <-- This
        "GroupId"
      ]
    },
    "SourceSecurityGroupId": "id-123",
    "ToPort": 22
  }
}
That Fn::GetAtt is the crux of this issue. Since these tests really just do an object comparison, you need to be able to replicate the Fn::GetAtt invocation, which requires the CloudFormation logical ID.
Note that the CDK does provide a handful of identifiers for you.
Unique ID provides something very close, but it's not the same identifier used in the CloudFormation stack. For example, securityGroup.uniqueId returns TestStackTestConstructSecurityGroup10D493A7, whereas the CloudFormation template displays TestConstructSecurityGroup95EF3F0F. Note the differences: uniqueId includes the full construct path (here prefixed with the stack name), and the appended hash is different in each.
Construct ID is just the identifier that you provide when instantiating a construct. It is not the logical ID either, though it is used as part of the logical ID. I also have not seen a way of programmatically retrieving this ID from the construct directly. You can of course define the ID somewhere and just reuse it, but this still doesn't solve the problem of it not fully matching the logical ID. In this case it's a difference of SecurityGroup as the construct ID and TestConstructSecurityGroup95EF3F0F as the logical ID in the synthesized template.
Is there a straightforward way getting the logical ID of CDK resources?
After writing up this whole post and digging through the CDK code, I stumbled on the answer I was looking for. If anybody has a better approach for getting the logical ID from a higher level CDK construct, the contribution would be much appreciated.
If you need to get the logical ID of a CDK resource you can do the following:
const stack = new Stack();
const construct = new SampleConstruct(stack, "SampleConstruct");
const logicalId = stack.getLogicalId(construct.securityGroup.node.defaultChild as CfnSecurityGroup);
Note that if you already have a CloudFormation resource (e.g. something that begins with Cfn), then it's a little easier.
// Pretend construct.securityGroup is of type CfnSecurityGroup
const logicalId = stack.getLogicalId(construct.securityGroup);
From my testing, it seems that stack.getLogicalId will always return the original, CDK-allocated logical ID; it won't change if you call overrideLogicalId, so it won't always match the synthesized output.
This worked for me, even with a logicalId override set:
stack.resolve((construct.node.defaultChild as cdk.CfnElement).logicalId)
stack.resolve is necessary because .logicalId is a token.
In addition to the excellent answer from jaredready, you can also explicitly set the logical ID using resource.node.default_child.overrideLogicalId("AnyStringHere")
This may make it easier as you can set it once and use hard-coded strings rather than looking up the value for every test.

Utterances to test lambda function not working (but lambda function itself executes)

I have a lambda function that executes successfully with an intent called GetEvent that returns a specific string. I've created one utterance for this intent for testing purposes (one that is simple and doesn't require any of the optional slots for invoking the skill), but when using the service simulator to test the lambda function with this utterance for GetEvent I'm met with a lambda response that says "The response is invalid". Here is what the interaction model looks like:
#Intent Schema
{
  "intents": [
    {
      "intent": "GetVessel",
      "slots": [
        {
          "name": "boat",
          "type": "LIST_OF_VESSELS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        },
        {
          "name": "date",
          "type": "AMAZON.DATE"
        },
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        }
      ]
    },
    {
      "intent": "GetLocation",
      "slots": [
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        },
        {
          "name": "date",
          "type": "AMAZON.DATE"
        },
        {
          "name": "boat",
          "type": "LIST_OF_VESSELS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        }
      ]
    },
    {
      "intent": "GetEvent",
      "slots": [
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        }
      ]
    }
  ]
}
This is paired with the appropriate custom slot type definitions, and the following utterances:
#First test Utterances
GetVessel what are the properties of {boat}
GetLocation where did {event} occur
GetEvent get me my query
When giving Alexa the utterance get me my query the lambda response should output the string as it did in the execution. I'm not sure why this isn't the case; this is my first project with the Alexa Skills Kit, so I am pretty new. Is there something I'm not understanding with how the lambda function, the intent schema and the utterances are all pieced together?
UPDATE: Thanks to some help from AWSSupport, I've narrowed the issue down to the area in the json request where new session is flagged as true. For the utterance to work this must be set to false (this works when inputting the json request manually, and this is also the case during the lambda execution). Why is this the case? Does Alexa really care about whether or not it is a new session during invocation? I've cross-posted this to the Amazon Developer Forums as well a couple of days ago, but have yet to get a response from someone.
This may or may not have changed -- the last time I used the service simulator (about two weeks ago at the time of writing) it had a pretty severe bug which would lead to requests being mapped to your first / wrong intent, regardless of actual simulated speech input.
So even if you typed in something random like wafaaefgae it simply tries to map that to the first intent you have defined, providing no slots to said intent which may lead to unexpected results.
Your issue could very well be related to this, triggering the same unexpected / buggy behavior because you aren't using any slots in your sample utterance.
Before spending more time debugging this, I'd recommend trying the intent on an actual Echo, or alternatively https://echosim.io/ -- interaction via actual speech works as expected, unlike the 'simulator'.

requestParameters returning "Invalid mapping expression specified: true"

I'm configuring a lambda function's API gateway integration with the Serverless Framework version 0.4.2.
My problem is with defining an endpoint's request parameters. The AWS docs for API gateway entry says:
requestParameters
Represents request parameters that can be accepted by Amazon API Gateway. Request parameters are represented as a key/value map, with a source as the key and a Boolean flag as the value. The Boolean flag is used to specify whether the parameter is required. A source must match the pattern method.request.{location}.{name}, where location is either querystring, path, or header. name is a valid, unique parameter name. Sources specified here are available to the integration for mapping to integration request parameters or templates.
As I understand it, the config in the s-function.json is given directly to the AWS CLI, so I've specified the request parameters in the format:
"method.request.querystring.startYear": true. However, I'm receiving an Invalid mapping expression specified: true error. I've also tried specifying the config as "method.request.querystring.startYear": "true" with the same result.
s-function.json:
{
  "name": "myname",
  // etc...
  "endpoints": [
    {
      "path": "mypath",
      "method": "GET",
      "type": "AWS",
      "authorizationType": "none",
      "apiKeyRequired": false,
      "requestParameters": {
        "method.request.querystring.startYear": true,
        "method.request.querystring.startMonth": true,
        "method.request.querystring.startDay": true,
        "method.request.querystring.currentYear": true,
        "method.request.querystring.currentMonth": true,
        "method.request.querystring.currentDay": true,
        "method.request.querystring.totalDays": true,
        "method.request.querystring.volume": true,
        "method.request.querystring.userId": true
      },
      // etc...
    }
  ],
  "events": []
}
Any ideas? Thanks in advance!
It looks like the requestParameters in the s-function.json file is meant for configuring the integration request section, so I ended up using:
"requestParameters": {
  "integration.request.querystring.startYear": "method.request.querystring.startYear",
  "integration.request.querystring.startMonth": "method.request.querystring.startMonth",
  "integration.request.querystring.startDay": "method.request.querystring.startDay",
  "integration.request.querystring.currentYear": "method.request.querystring.currentYear",
  "integration.request.querystring.currentMonth": "method.request.querystring.currentMonth",
  "integration.request.querystring.currentDay": "method.request.querystring.currentDay",
  "integration.request.querystring.totalDays": "method.request.querystring.totalDays",
  "integration.request.querystring.volume": "method.request.querystring.volume",
  "integration.request.querystring.userId": "method.request.querystring.userId"
},
This ended up adding them automatically to the method request section on the dashboard as well.
I could then use them in the mapping template to build the request body that gets sent as the event into my Lambda function. Right now I have a specific mapping template that I'm using, but in the future I may use Alua K's suggested method of mapping all of the inputs in a generic way, so that I don't have to configure a separate mapping template for each function.
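For reference, a specific mapping template along those lines might look like the following sketch (startYear and volume are stand-ins for your own parameter names; $input.params('name') is API Gateway's mapping-template accessor, which looks a parameter up across the path, query string, and headers):

```
{
  "startYear": "$input.params('startYear')",
  "volume": "$input.params('volume')"
}
```

With this template on the integration request, the Lambda event arrives as a plain JSON object with those keys.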
You can pass query params to your lambda like
"requestTemplates": {
  "application/json": {
    "querystring": "$input.params().querystring"
  }
}
In the Lambda function, you can then access the query string as event.querystring.
First, you need to execute a put-method command to create the method request with query parameters:
aws apigateway put-method --rest-api-id "yourAPI-ID" --resource-id "yourResource-ID" --http-method GET --authorization-type "NONE" --no-api-key-required --request-parameters "method.request.querystring.paramname1=true","method.request.querystring.paramname2=true"
Only after this can you execute the put-integration command; otherwise it will give an invalid mapping error:
"requestParameters": {
  "integration.request.querystring.paramname1": "method.request.querystring.paramname1",
  "integration.request.querystring.paramname2": "method.request.querystring.paramname2"
}
Make sure you're using the right endpoints as well. There are two types in AWS; a friend of mine got caught out by that in the past.