AWS CDK: enabling access logging for Classic Load Balancer - amazon-web-services

We are using a Classic Load Balancer in our infra, deployed via CDK. For deploying the load balancer we are using L2 constructs. The code looks like this:
const lb = new elb.LoadBalancer(this, 'LB', {
  vpc: vpcRef,
  internetFacing: true,
  healthCheck: {
    port: 80,
  },
});

lb.addListener({
  externalPort: 80,
});
We are not able to find any property with which we can enable access logging. Someone suggested using AccessLoggingPolicyProperty. I checked it and found that this property can be used with L1 constructs only. Can someone please guide me on how to enable access logs via CDK on a Classic Load Balancer using L2 constructs?

As per the documentation, you need an S3 bucket with the right permissions configured. With that in place, you can follow the aws-cdk documentation on how to get access to the underlying L1 construct.
It is going to look roughly like the following code:
const lbLogs = new Bucket(this, 'LB Logs');

const elbAccountId = 'TODO: find right account for your region in docs';
lbLogs.grantPut(new AccountPrincipal(elbAccountId));
lbLogs.grantPut(
  new ServicePrincipal('delivery.logs.amazonaws.com', {
    conditions: {
      StringEquals: {
        's3:x-amz-acl': 'bucket-owner-full-control',
      },
    },
  })
);
lbLogs.grantRead(new ServicePrincipal('delivery.logs.amazonaws.com'));

const cfnLoadBalancer = lb.node.defaultChild as CfnLoadBalancer;
cfnLoadBalancer.accessLoggingPolicy = {
  enabled: true,
  s3BucketName: lbLogs.bucketName,
};
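If you also want to control the delivery frequency and the key prefix within the bucket, the same L1 AccessLoggingPolicyProperty accepts two more optional fields; a sketch (the values below are illustrative):

```typescript
cfnLoadBalancer.accessLoggingPolicy = {
  enabled: true,
  s3BucketName: lbLogs.bucketName,
  emitInterval: 5,           // minutes; Classic ELB supports 5 or 60
  s3BucketPrefix: 'my-elb',  // optional prefix for the log object keys
};
```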

Related

AWS CDK -- How do I retrieve my NS Records from my newly created Hosted Zone by AWS CDK

Say I created a public hosted zone, or fetched a hosted zone via lookup, and I want to retrieve the NS records for other usage:
const zone = new route53.PublicHostedZone(this, domain + 'HostedZone', {
  zoneName: '' + domain,
})

// or, from lookup:
const zone = HostedZone.fromLookup(this, 'HostedZone', { domainName: config.zoneName });
Does the current CDK have any methods to do that? I've looked around the API docs and found none. Any suggestions?
Update
I did try the hostedZoneNameServers property. However, it doesn't seem to return anything.
const zone = route53.HostedZone.fromLookup(this, 'DotnetHostedZone', {
  domainName: <myDomain>,
});

new CfnOutput(this, 'output1', {
  value: zone.zoneName,
});
new CfnOutput(this, 'output2', {
  value: zone.hostedZoneId,
});
new CfnOutput(this, 'output3', {
  value: zone.hostedZoneNameServers?.toString() || 'No NameServer',
});
✅ test-ops
Outputs:
test-ops.output1 = <myDomain>
test-ops.output2 = <myZoneId>
test-ops.output3 = No NameServer
And I confirmed with my zone: when I do a record export, I can retrieve all my records.
The ultimate goal is to automate a subdomain provisioning. But I'm currently scratching my head on this route.
There is a hostedZoneNameServers property on the zone object.
const zone = HostedZone.fromLookup(this, 'HostedZone', { domainName: config.zoneName });
const nsRecords = zone.hostedZoneNameServers;
Reference:
https://docs.aws.amazon.com/cdk/api/latest/typescript/api/aws-route53/hostedzone.html#aws_route53_HostedZone_hostedZoneNameServers
I do not believe you can do that right from the script. The values will just be "tokens", which are resolved by CloudFormation during or after deployment, not during synthesis. Outputting them during synthesis will therefore leave you blind. You will need to fetch them in a post-processing step, I guess.
I am running into the same issue, which is why I found your post :D
hostedZoneNameServers is not defined for private or imported zones, as mentioned in the docs. You can use it only if you create your zone in CDK (e.g. new PublicHostedZone(...).hostedZoneNameServers).
If you create the zone elsewhere, try to use AWS Route53 GetHostedZone API.
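For a zone created outside CDK, one option is the CLI wrapper around that API; a sketch (the zone ID is a placeholder):

```shell
# GetHostedZone returns the zone's delegation set, i.e. its name servers
aws route53 get-hosted-zone --id Z0000000EXAMPLE \
  --query 'DelegationSet.NameServers' --output text
```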
This worked for me:
const nsRecords = hostedZone.hostedZoneNameServers;
if (nsRecords) {
  for (let i = 0; i < 4; i++) {
    context.cfnOutput(this, `NS Record${i + 1}`, Fn.select(i, nsRecords));
  }
}
As #JD D mentioned, there is a hostedZoneNameServers attribute on hosted zones, but it isn't available cross-stack. The documentation has been updated (or this was missed when first answered) to reflect this.
CDK V1 /
CDK V2
hostedZoneNameServers?
Type: string[] (optional)
Returns the set of name servers for the specific hosted zone. For example: ns1.example.com.
This attribute will be undefined for private hosted zones or hosted zones imported from another stack.
So in order to accomplish what you want, you will need to set the NS values as an output on the stack that created the hosted zone, and consume them by referencing that output from the other stack.
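A minimal sketch of that export/import pattern, assuming CDK v2 and two stacks in the same account and region (the stack variables and export name are made up):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as route53 from 'aws-cdk-lib/aws-route53';

// Producer stack: hostedZoneNameServers is only defined because
// the zone is created here, not imported.
const zone = new route53.PublicHostedZone(producerStack, 'Zone', {
  zoneName: 'child.example.com',
});
new cdk.CfnOutput(producerStack, 'ZoneNs', {
  value: cdk.Fn.join(',', zone.hostedZoneNameServers!),
  exportName: 'ChildZoneNameServers',
});

// Consumer stack: import the joined value and split it back into a list.
const nsValues = cdk.Fn.split(',', cdk.Fn.importValue('ChildZoneNameServers'));
```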
I was able to automate subdomain provisioning with the following code. Note that these hosted zones share the same stack, which may not work for your use case.
export const hostedZone = new HostedZone(stack, `${env}-hosted-zone`, {
  zoneName: host,
})

// API
const apiHost = `api.${host}`
export const apiHostedZone = new HostedZone(stack, `${env}-hosted-zone-api`, {
  zoneName: apiHost,
})

// note that this record is actually on the parent zone,
// authoritatively pointing to its sub-subdomain
export const apiHostedZoneNsRecord = new NsRecord(stack, `${env}-hosted-zone-ns-api`, {
  recordName: apiHost,
  values: apiHostedZone.hostedZoneNameServers as string[],
  zone: hostedZone,
})
This resulted in the following snippet of CFT (${env} and ${rnd} replaced with concrete values, of course):
"ResourceRecords": {
"Fn::GetAtt": [
"${env}hostedzoneapi${rnd}",
"NameServers"
]
},
If you can accept the same stack constraint, you should be able to accomplish this. Note that while I could accept the constraint for this stack, more broadly I have a multi-account structure and had to manually add the sub-account's subdomain NS record to the parent account's root domain. Summary of this setup:
root account:
example.com
NS child.example.com // manually added
child account:
child.example.com // contents of `host` above
NS api.child.example.com
api.child.example.com // automatic subdomain created with code above

Google Deployment Manager error when using manual IP allocation in NAT (HTTP 400)

Context
I am trying to associate serverless egress with a static IP address (GCP Docs). I have been able to set this up manually through the gcp-console, and now I am trying to implement it with Deployment Manager. However, with just the IP address and the router it works; once I add the NAT config, I get a 400, "Request contains an invalid argument.", which does not give me enough information to fix the problem.
# config.yaml
resources:
  # addresses spec: https://cloud.google.com/compute/docs/reference/rest/v1/addresses
  - name: serverless-egress-address
    type: compute.v1.address
    properties:
      region: europe-west3
      addressType: EXTERNAL
      networkTier: PREMIUM
  # router spec: https://cloud.google.com/compute/docs/reference/rest/v1/routers
  - name: serverless-egress-router
    type: compute.v1.router
    properties:
      network: projects/<project-id>/global/networks/default
      region: europe-west3
      nats:
        - name: serverless-egress-nat
          natIpAllocateOption: MANUAL_ONLY
          sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES
          natIPs:
            - $(ref.serverless-egress-address.selfLink)
# error response
code: RESOURCE_ERROR
location: /deployments/<deployment-name>/resources/serverless-egress-router
message: '{
  "ResourceType":"compute.v1.router",
  "ResourceErrorCode":"400",
  "ResourceErrorMessage":{
    "code":400,
    "message":"Request contains an invalid argument.",
    "status":"INVALID_ARGUMENT",
    "statusMessage":"Bad Request",
    "requestPath":"https://compute.googleapis.com/compute/v1/projects/<project-id>/regions/europe-west3/routers/serverless-egress-router",
    "httpMethod":"PUT"
  }
}'
Notably, if I remove the natIPs array and set natIpAllocateOption to AUTO_ONLY, it goes through without errors. While this is not the configuration I need, it narrows the problem down to these config options.
Question
Which is the invalid argument?
Are there things outside of the YAML which I should check? In the docs it says the following, which makes me wonder if there are other caveats like it:
Note that if this field contains ALL_SUBNETWORKS_ALL_IP_RANGES or ALL_SUBNETWORKS_ALL_PRIMARY_IP_RANGES, then there should not be any other Router.Nat section in any Router for this network in this region.
I checked the API reference, and passing the values that you used should work. Furthermore, if you talk directly to the API using a JSON payload with these values, it returns 200:
{
  "name": "nat",
  "network": "https://www.googleapis.com/compute/v1/projects/project/global/networks/nat1",
  "nats": [
    {
      "natIps": [
        "https://www.googleapis.com/compute/v1/projects/project/regions/us-central1/addresses/test"
      ],
      "name": "nat1",
      "natIpAllocateOption": "MANUAL_ONLY",
      "sourceSubnetworkIpRangesToNat": "ALL_SUBNETWORKS_ALL_IP_RANGES"
    }
  ]
}
From what I can see, the request is correctly formed when using methods other than Deployment Manager, so there might be an issue in the tool itself.
I have filed an issue about this on Google's Issue Tracker for them to take a look at it.
The DM team might be able to shed light on what's happening here.
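As a further cross-check outside Deployment Manager, the same NAT configuration can be attempted with the gcloud CLI; a sketch (resource names are placeholders, and the router is assumed to exist already):

```shell
# Reserve a static external address in the router's region
gcloud compute addresses create serverless-egress-address --region=europe-west3

# Attach it to a NAT config on the existing router
gcloud compute routers nats create serverless-egress-nat \
  --router=serverless-egress-router --region=europe-west3 \
  --nat-external-ip-pool=serverless-egress-address \
  --nat-all-subnet-ip-ranges
```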

How to get the AWS IoT custom endpoint in CDK?

I want to pass the IoT custom endpoint as an env var to a lambda declared in CDK.
I'm talking about the IoT custom endpoint shown on the Settings page of the AWS IoT console. How do I get it in the context of CDK?
You can refer to this AWS sample code:
https://github.com/aws-samples/aws-iot-cqrs-example/blob/master/lib/querycommandcontainers.ts
const getIoTEndpoint = new customResource.AwsCustomResource(this, 'IoTEndpoint', {
  onCreate: {
    service: 'Iot',
    action: 'describeEndpoint',
    physicalResourceId: customResource.PhysicalResourceId.fromResponse('endpointAddress'),
    parameters: {
      endpointType: 'iot:Data-ATS',
    },
  },
  policy: customResource.AwsCustomResourcePolicy.fromSdkCalls({
    resources: customResource.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

const IOT_ENDPOINT = getIoTEndpoint.getResponseField('endpointAddress');
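The original question was about passing this to a Lambda; once you have the token, it can go into the function's environment like any other string. A sketch, assuming CDK v2 and made-up runtime/handler/asset values:

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';

const fn = new lambda.Function(this, 'MyHandler', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
  environment: {
    // resolved by CloudFormation at deploy time, not at synth time
    IOT_ENDPOINT: getIoTEndpoint.getResponseField('endpointAddress'),
  },
});
```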
AFAIK the only way to retrieve it is by using a custom resource (Lambda), for example (IoTThing): https://aws.amazon.com/blogs/iot/automating-aws-iot-greengrass-setup-with-aws-cloudformation/

Set cfn deletion policy for RDS instance using AWS CDK

I have a CDK stack that includes an RDS instance. I want to make sure the DB instance never gets deleted. I can't figure out how to set the deletion policy via CDK.
It looks like I can set deletion protection like this:
this.database = new rds.DatabaseInstanceFromSnapshot(this, 'backendAPIDatabase', {
  snapshotIdentifier: this.props.snapshotIdentifier,
  instanceIdentifier: this.props.environmentName,
  engine: rds.DatabaseInstanceEngine.POSTGRES,
  instanceClass: this.props.databaseInstanceSize,
  vpc: this.vpc,
  multiAz: this.props.databaseMultiAz,
  enablePerformanceInsights: true,
  parameterGroup,
  allocatedStorage: this.props.allocatedDatabaseStorage,
});

(this.database.node.defaultChild as rds.CfnDBInstance).deletionProtection = true;
But I can't figure out how to apply a deletion policy as a second backup.
You can set it using the removalPolicy property. You should also set deletion protection via the constructor, as shown below.
this.database = new rds.DatabaseInstanceFromSnapshot(this, 'backendAPIDatabase', {
  ...,
  deletionProtection: true,
  removalPolicy: cdk.RemovalPolicy.RETAIN,
});

Can you create a Route 53 A record that maps directly to the IP address of an ECS service and ECS task defined using the AWS CDK?

I have the following code
FargateTaskDefinition taskDef = new FargateTaskDefinition(this, "DevStackTaskDef", new FargateTaskDefinitionProps()
{
    MemoryLimitMiB = 2048,
    Cpu = 512
});

var service = new FargateService(this, "DevStackFargateService", new FargateServiceProps()
{
    ServiceName = "DevStackFargateService",
    TaskDefinition = taskDef,
    Cluster = cluster,
    DesiredCount = 1,
    SecurityGroup = securityGroup,
    AssignPublicIp = true,
    VpcSubnets = new SubnetSelection()
    {
        SubnetType = SubnetType.PUBLIC
    }
});

new ARecord(this, "AliasRecord", new ARecordProps()
{
    Zone = zone,
    Target = RecordTarget.FromIpAddresses() // here is the line in question.
});
The ARecordProps.Target value is the one I'm stuck on. I cannot find a way to get the IP address of the task that will be created. Does anyone know if this is possible? I would really like to avoid using load balancers, as this is a dev/test environment. I have also looked at the aws-route53-targets module and see that it only supports:
ApiGateway
ApiGatewayDomain
BucketWebsiteTarget
ClassicLoadBalancerTarget
CloudFrontTarget
LoadBalancerTarget
Any help would be much appreciated. Thanks