Can you create a Route 53 A record that maps directly to the IP address of an ECS service and ECS task defined using the AWS CDK?

I have the following code:
FargateTaskDefinition taskDef = new FargateTaskDefinition(this, "DevStackTaskDef", new FargateTaskDefinitionProps()
{
    MemoryLimitMiB = 2048,
    Cpu = 512
});

var service = new FargateService(this, "DevStackFargateService", new FargateServiceProps()
{
    ServiceName = "DevStackFargateService",
    TaskDefinition = taskDef,
    Cluster = cluster,
    DesiredCount = 1,
    SecurityGroup = securityGroup,
    AssignPublicIp = true,
    VpcSubnets = new SubnetSelection()
    {
        SubnetType = SubnetType.PUBLIC
    }
});

new ARecord(this, "AliasRecord", new ARecordProps()
{
    Zone = zone,
    Target = RecordTarget.FromIpAddresses() // here is the line in question
});
The ARecordProps.Target value is the one I'm stuck on. I cannot find a way to get the IP address of the task that will be created. Does anyone know if this is possible? I would really like to avoid using load balancers, as this is a dev/test environment. I have also looked at the aws-route53-targets module and see that it only supports:
ApiGateway
ApiGatewayDomain
BucketWebsiteTarget
ClassicLoadBalancerTarget
CloudFrontTarget
LoadBalancerTarget
Any help would be much appreciated. Thanks
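For context, RecordTarget.FromIpAddresses expects literal IP addresses that are known at synthesis time, which is why there is no handle for a task's runtime IP. A minimal TypeScript sketch of how the call is meant to be used, with a placeholder address:

import * as route53 from 'aws-cdk-lib/aws-route53';

// The addresses must be fixed at synth time; a Fargate task's IP is only
// assigned once the task starts, so it can't be referenced here.
// '203.0.113.10' is a documentation placeholder, not a real value.
new route53.ARecord(this, 'StaticARecord', {
    zone,
    target: route53.RecordTarget.fromIpAddresses('203.0.113.10'),
});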

Related

Declare custom DNS name for Fargate using CDK

I am creating a C# CDK stack that deploys both a Fargate service and an Elastic Beanstalk service.
Both of these services will have a custom domain pointing to them via a third-party domain service (i.e. non-AWS).
For Elastic Beanstalk, I can simply set the CnamePrefix below, and AWS will generate something like http://beanstalk-dev.ap-southeast-2.elasticbeanstalk.com/, which I can then point my custom domain at (after adding some listeners to the load balancer).
var elasticBeanstalkEnv = new CfnEnvironment(this, "ElbsEnv", new CfnEnvironmentProps
{
    EnvironmentName = "beanstalk-dev",
    ApplicationName = appName,
    SolutionStackName = "64bit Amazon Linux 2 v2.4.1 running .NET Core",
    OptionSettings = optionSettingProperties,
    VersionLabel = version.Ref,
    CnamePrefix = "beanstalk-dev", // <--- This property
    Tier = new CfnEnvironment.TierProperty
    {
        Name = "WebServer",
        Type = "Standard"
    }
});
I am also trying to do something similar with Fargate, but I cannot find any setting like the one for Elastic Beanstalk.
Below is what I have so far, which deploys a DNS name like LB-71455902.ap-southeast-2.elb.amazonaws.com:
var fargate = new ApplicationLoadBalancedFargateService(this, "reportviewer-server",
    new ApplicationLoadBalancedFargateServiceProps
    {
        TaskImageOptions = new ApplicationLoadBalancedTaskImageOptions
        {
            Image = ContainerImage.FromRegistry("myImage",
                new RepositoryImageProps { Credentials = mycreds })
        },
        PublicLoadBalancer = true,
        LoadBalancerName = "LB",
        ServiceName = "frontend",
        RecordType = ApplicationLoadBalancedServiceRecordType.CNAME
    });
fargate.LoadBalancer.AddListener("ecs-https-listener", new ApplicationListenerProps
{
    SslPolicy = SslPolicy.RECOMMENDED,
    Protocol = ApplicationProtocol.HTTPS,
    Open = true,
    Port = 443,
    Certificates = new IListenerCertificate[]
    {
        ListenerCertificate.FromArn("myArn")
    },
    DefaultTargetGroups = new IApplicationTargetGroup[]
    {
        fargate.TargetGroup
    }
});
How would I set up my stack to create a "static" DNS name that won't be different each time I create/destroy my stack?
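An ALB's generated DNS name can't be chosen the way CnamePrefix allows, so one workaround (a TypeScript sketch, not from the original post) is to output the generated name and point the third-party CNAME at it. Note the name only stays stable for the lifetime of the load balancer; destroying and recreating the stack produces a new one, so a truly static name has to live in DNS you control.

import { CfnOutput } from 'aws-cdk-lib';

// Surface the deploy-time token for the ALB's DNS name as a stack output
// so it can be wired into an external (non-AWS) DNS provider.
// (TS property shown; the C# equivalent is LoadBalancer.LoadBalancerDnsName.)
new CfnOutput(this, 'LoadBalancerDnsName', {
    value: fargate.loadBalancer.loadBalancerDnsName,
});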

AWS CDK -- How do I retrieve my NS Records from my newly created Hosted Zone by AWS CDK

Say I created a public hosted zone, or fetched a hosted zone from lookup, and I want to retrieve the NS records for other usage:
const zone = new route53.PublicHostedZone(this, domain + 'HostedZone', {
    zoneName: '' + domain
})
// or, for an existing zone:
const zone = HostedZone.fromLookup(this, 'HostedZone', { domainName: config.zoneName });
Does the current CDK have any method to do that? I've looked around the API docs and found none. Any suggestions?
Update
I did try the hostedZoneNameServers property. However, it doesn't seem to return anything.
const zone = route53.HostedZone.fromLookup(this, 'DotnetHostedZone', {
    domainName: <myDomain>,
});
new CfnOutput(this, `output1`, {
    value: zone.zoneName
});
new CfnOutput(this, `output2`, {
    value: zone.hostedZoneId
});
new CfnOutput(this, 'output3', {
    value: zone.hostedZoneNameServers?.toString() || 'No NameServer'
});
✅ test-ops
Outputs:
test-ops.output1 = <myDomain>
test-ops.output2 = <myZoneId>
test-ops.output3 = No NameServer
And I confirmed with my zone, by doing a record export, that I can retrieve all my records.
The ultimate goal is to automate subdomain provisioning, but I'm currently scratching my head on this route.
There is a hostedZoneNameServers property on the zone object.
const zone = HostedZone.fromLookup(this, 'HostedZone', { domainName: config.zoneName });
const nsRecords = zone.hostedZoneNameServers;
Reference:
https://docs.aws.amazon.com/cdk/api/latest/typescript/api/aws-route53/hostedzone.html#aws_route53_HostedZone_hostedZoneNameServers
I do not believe you can do that right from the script. The values will be just "Tokens" that are replaced by CloudFormation during or after the deployment, not during synthesis. Outputting them during synthesis will therefore leave you blind. You will need to fetch them in a post-processing step, I guess.
I am running into the same issue, which is why I found your post :D
hostedZoneNameServers is not defined for private or imported zones, as mentioned in the docs. You can use it only if you create your zone in the CDK (e.g. new PublicHostedZone(...).hostedZoneNameServers).
If you create the zone elsewhere, try the AWS Route 53 GetHostedZone API.
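A minimal sketch of that lookup with the AWS SDK for JavaScript v3 (the zone ID is a placeholder):

import { Route53Client, GetHostedZoneCommand } from '@aws-sdk/client-route-53';

// GetHostedZone returns the zone's delegation set, i.e. its name servers.
const client = new Route53Client({});
const resp = await client.send(new GetHostedZoneCommand({ Id: 'Z0000000EXAMPLE' })); // placeholder zone ID
console.log(resp.DelegationSet?.NameServers);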
This worked for me:
const nsRecords = hostedZone.hostedZoneNameServers;
if (nsRecords) {
    for (let i = 0; i < 4; i++) {
        context.cfnOutput(this, `NS Record${i + 1}`, Fn.select(i, nsRecords));
    }
}
As #JD D mentioned, there is a hostedZoneNameServers attribute on hosted zones, but it isn't available across stacks. The documentation has been updated (or this was missed when first answered) to reflect this.
From the CDK v1 / CDK v2 API docs:
hostedZoneNameServers?
Type: string[] (optional)
Returns the set of name servers for the specific hosted zone. For example: ns1.example.com.
This attribute will be undefined for private hosted zones or hosted zones imported from another stack.
So in order to accomplish what you want, you will need to set the NS values as an output on the stack that created the hosted zone and consume them by referencing the stack that provides the NS output.
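A minimal TypeScript sketch of that producing-stack output (the export name is illustrative, not from the original answer):

// Join the name-server token list into a single exportable value.
// hostedZoneNameServers is defined here because the zone is created
// in this stack, not imported.
new CfnOutput(this, 'HostedZoneNameServers', {
    value: Fn.join(',', zone.hostedZoneNameServers ?? []),
    exportName: 'HostedZoneNameServers', // illustrative export name
});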
I was able to automate subdomain provisioning with the following code. Note that these hosted zones share the same stack, which may not work for your use case.
export const hostedZone = new HostedZone(stack, `${env}-hosted-zone`, {
    zoneName: host,
})

// API
const apiHost = `api.${host}`
export const apiHostedZone = new HostedZone(stack, `${env}-hosted-zone-api`, {
    zoneName: apiHost,
})

// note that this record is actually on the parent zone,
// authoritatively pointing to its sub-subdomain
export const apiHostedZoneNsRecord = new NsRecord(stack, `${env}-hosted-zone-ns-api`, {
    recordName: apiHost,
    values: apiHostedZone.hostedZoneNameServers as string[],
    zone: hostedZone,
})
This resulted in the following snippet of CFT (${env} and ${rnd} replaced with concrete values, of course):
"ResourceRecords": {
"Fn::GetAtt": [
"${env}hostedzoneapi${rnd}",
"NameServers"
]
},
If you can accept the same-stack constraint, you should be able to accomplish this. Note that while I could accept the constraint for this stack, more broadly I have a multi-account structure and had to manually add the sub-account's subdomain NS record to the parent account's root domain. Summary of this setup:
root account:
    example.com
        NS child.example.com // manually added
child account:
    child.example.com // contents of `host` above
        NS api.child.example.com
    api.child.example.com // automatic subdomain created with the code above

Google Cloud Run outbound static IP is 169.254.X.X instead of reserved one

I created a Google Cloud Run revision with a Serverless VPC Access connector to a VPC network. The VPC network has access to the internet via Cloud NAT, to allow the Cloud Run instance to have a static outbound IP address, as described in this tutorial: https://cloud.google.com/run/docs/configuring/static-outbound-ip. I followed the tutorial, and all was well: I got a static IP address on egress traffic from the Cloud Run instance.
I used Terraform to deploy all the resources; the code is below. The problem is this: after destroying the resources, I got the following error:
ERROR: (gcloud.compute.networks.delete) Could not fetch resource:
- The network resource 'projects/myproject/global/networks/webhook-network' is already being used by 'projects/myproject/global/networkInstances/v1823516883-618da3a7-bd4f-4524-...-...'
(the dots stand in for more numbers, but as this seems to be some kind of UUID I prefer not to share the rest).
So I can't delete the first network. When I change the network's name and reapply, the apply succeeds, but the outbound static IP address of the egress traffic is 169.254.X.X, about which I found the following information:
"When you see a 169.254.X.X, you definitely have a problem" ==> smells like trouble.
Any Googlers that can help me out? I think the steps to reproduce the "corrupted" VPC network are: create a Serverless VPC Access connector with a connection to the VPC, reference it from a Cloud Run revision, and then delete the VPC network and the connector before you delete the Cloud Run revision. But honestly I'm not sure; I don't really have spare GCP projects lying around to test it on.
This Server Fault question did not help: https://serverfault.com/questions/1016015/google-cloud-platform-find-resource-by-full-resource-name, and it's the only related one I can find.
Anyone have any ideas?
locals {
  region = "europe-west1"
}

resource "google_compute_network" "webhook_network" {
  name                    = "webhook-network-6"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnetwork" {
  depends_on = [
    google_compute_network.webhook_network
  ]
  name          = "webhook-subnet-6"
  network       = google_compute_network.webhook_network.self_link
  ip_cidr_range = "10.14.0.0/28"
  region        = local.region
}

resource "google_compute_router" "router" {
  depends_on = [
    google_compute_subnetwork.subnetwork,
    google_compute_network.webhook_network
  ]
  name    = "router6"
  region  = google_compute_subnetwork.subnetwork.region
  network = google_compute_network.webhook_network.name
}

// I created the static IP address manually
//resource "google_compute_address" "static_address" {
//  name   = "nat-static-ip-address"
//  region = local.region
//}

resource "google_compute_router_nat" "advanced-nat" {
  name                   = "natt"
  router                 = google_compute_router.router.name
  region                 = local.region
  nat_ip_allocate_option = "MANUAL_ONLY"
  nat_ips = [
    var.ip_name
  ]
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

data "google_project" "project" {
}
resource "google_vpc_access_connector" "access_connector" {
depends_on = [
google_compute_network.webhook_network,
google_compute_subnetwork.subnetwork
]
name = "stat-ip-conn-6"
project = var.project_id
region = local.region
ip_cidr_range = "10.4.0.0/28"
network = google_compute_network.webhook_network.name
}
Turns out the setup was working correctly; the way I was testing it was wrong. I was testing it using the following Cloud Function:
def hello_world(request):
    request_json = request.get_json()
    ip = request.remote_addr  # the culprit
    remote_port = request.environ.get('REMOTE_PORT')
    url = request.url
    host_url = request.host_url
    return {"ip": ip, "url": url, "port": remote_port, "host_url": host_url}
which returns the 169.254.X.X address, but when I test against curlmyip.org, the egress IP is indeed the correct (reserved) address.
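To reproduce that check from code rather than the function's own request object, a minimal sketch (TypeScript; assumes a runtime with a global fetch, such as Node 18+):

// Ask an external echo service which address our traffic arrives from.
// Unlike request.remote_addr inside the service, this reflects the
// Cloud NAT egress IP rather than the link-local peer address.
const res = await fetch('https://curlmyip.org');
console.log((await res.text()).trim()); // expected: the reserved static IP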
But that still does not solve the issue of not being able to delete the VPC network.

How do I create an AWS App Mesh using AWS CDK

I am attempting to create a stack for (currently) 9 .NET Core microservices that will run in ECS Fargate and communicate with each other via App Mesh. I plan on creating an infrastructure stack that creates the App Mesh resource and the ECS cluster, and a microservice stack that creates the resources for each service and adds them to the App Mesh and ECS cluster.
I currently have this code:
Vpc = Amazon.CDK.AWS.EC2.Vpc.FromLookup(this, "vpc", new VpcLookupOptions
{
    VpcId = "xxxxxxxxxxxx"
});

DefaultCloudMapNamespace = new CloudMapNamespaceOptions
{
    Vpc = Vpc,
    Name = dnsNamespace,
    Type = NamespaceType.DNS_PRIVATE,
};

EcsCluster = new Cluster(this, $"{Env}-linux-cluster", new ClusterProps
{
    Vpc = Vpc,
    ClusterName = $"{Env}-linux-cluster",
    DefaultCloudMapNamespace = DefaultCloudMapNamespace
});
This seems to be okay - it creates a hosted zone in Route53.
When I am creating the Service for Cloud Map, I'm using this code:
var cloudMapService = new Service(this, serviceName, new ServiceProps
{
    Namespace = new PrivateDnsNamespace(this, $"{serviceNameHyphen}-cm-namespace", new PrivateDnsNamespaceProps
    {
        Vpc = infrastructureStack.Vpc,
        Name = $"{serviceName}.dev",
    }),
    DnsRecordType = DnsRecordType.SRV,
    DnsTtl = Duration.Seconds(60),
    RoutingPolicy = RoutingPolicy.MULTIVALUE,
    Name = serviceName
});
This is the first time I'm working with App Mesh & Cloud Map, but I would expect to use the same private hosted zone for both the Cloud Map namespace and the Cloud Map Service namespace.
Is this the correct approach?
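Not from the original post, but a minimal TypeScript sketch of the reuse the question is hinting at, assuming the cluster was created with DefaultCloudMapNamespace as in the infrastructure stack above: the Cloud Map service can register into the cluster's existing namespace instead of creating a second private hosted zone.

import * as servicediscovery from 'aws-cdk-lib/aws-servicediscovery';
import { Duration } from 'aws-cdk-lib';

// Reuse the namespace created alongside the cluster rather than
// instantiating a new PrivateDnsNamespace per service.
// `cluster` and `serviceName` stand in for the question's own bindings.
const cloudMapService = new servicediscovery.Service(this, 'CloudMapService', {
    namespace: cluster.defaultCloudMapNamespace!, // defined because the cluster set one
    dnsRecordType: servicediscovery.DnsRecordType.SRV,
    dnsTtl: Duration.seconds(60),
    routingPolicy: servicediscovery.RoutingPolicy.MULTIVALUE,
    name: serviceName,
});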
My approach:
First, I create the namespace:
cloud_map = sds.PrivateDnsNamespace(
    self,
    "PrivateNameSpace",
    vpc=vpcObject,
    description=' '.join(["Private DNS for", self.node.try_get_context('EnvironmentName')]),
    name=service_domain
)
Then, when creating the Virtual Service, I use the same domain for it:
vservice = mesh.VirtualService(
    self,
    "VirtualService",
    virtual_service_name='.'.join([node_name, service_domain]),
    virtual_service_provider=mesh.VirtualServiceProvider.virtual_node(vnode)
)
Then I reference it when creating the ECS service:
ecs_service = ecs.Ec2Service(
    self,
    "ECSService",
    task_definition=ecs_task,
    placement_strategies=[
        ecs.PlacementStrategy.spread_across_instances()
    ],
    desired_count=desiredCount,
    cluster=clusterObject,
    security_groups=[sgObject],
    vpc_subnets=ec2.SubnetSelection(
        subnet_type=ec2.SubnetType.PRIVATE
    ),
    enable_ecs_managed_tags=True,
    health_check_grace_period=cdk.Duration.seconds(120),
    max_healthy_percent=200,
    min_healthy_percent=50,
    cloud_map_options=ecs.CloudMapOptions(
        cloud_map_namespace=cloud_map,
        dns_record_type=cm.DnsRecordType.A,
        dns_ttl=cdk.Duration.seconds(300),
        failure_threshold=1,
        name=node_name
    ),
)

How to create RFC 2782-compliant SRV record with Terraform and AWS Service Discovery?

I'm running a MySQL database in AWS ECS (I can't use RDS), and I'd like to use ECS Service Discovery to populate an SRV record that points to that database. I'm using Terraform to configure all AWS services.
This is what I have working so far:
resource "aws_service_discovery_service" "mysqldb" {
name = "mysqldb"
health_check_custom_config {
failure_threshold = 3
}
dns_config {
namespace_id = "${aws_service_discovery_private_dns_namespace.stack_local.id}"
dns_records {
ttl = "300"
type = "SRV"
}
routing_policy = "MULTIVALUE"
}
}
resource "aws_service_discovery_private_dns_namespace" "stack_local" {
name = "${var.stack_name}.local"
description = "Testing with AWS Service Discovery"
vpc = "${aws_vpc.vpc.id}"
}
However, this creates an SRV record (mysqldb.stack_name.local.) that is not RFC 2782-compliant. To be compliant, it should include the service name and protocol, like this: _mysql._tcp.mysqldb.stack_name.local.
I tried changing the name to something like _mysql._tcp.mysqldb, but that failed because AWS requires the name to be a single label without any . in it.
Is it possible to create an RFC 2782-compliant SRV record using ECS Service Discovery with Terraform and the Terraform AWS provider?