Declare custom DNS name for Fargate using CDK - amazon-web-services

I am creating a C# CDK stack that deploys both a Fargate service and an Elastic Beanstalk service.
Both of these services will have a custom domain pointing to them via a third-party domain service (i.e. non-AWS).
For Elastic Beanstalk, I can simply set the CnamePrefix below, and AWS will generate something like http://beanstalk-dev.ap-southeast-2.elasticbeanstalk.com/ which I can then point my custom domain to (after adding some listeners to the load balancer).
var elasticBeanstalkEnv = new CfnEnvironment(this, "ElbsEnv", new CfnEnvironmentProps
{
    EnvironmentName = "beanstalk-dev",
    ApplicationName = appName,
    SolutionStackName = "64bit Amazon Linux 2 v2.4.1 running .NET Core",
    OptionSettings = optionSettingProperties,
    VersionLabel = version.Ref,
    CnamePrefix = "beanstalk-dev", // <-- this property
    Tier = new CfnEnvironment.TierProperty
    {
        Name = "WebServer",
        Type = "Standard"
    }
});
I am also trying to do something similar with Fargate, but I cannot find any setting like the one for Elastic Beanstalk.
Below is what I have so far, which deploys a DNS name like LB-71455902.ap-southeast-2.elb.amazonaws.com
var fargate = new ApplicationLoadBalancedFargateService(this, "reportviewer-server",
    new ApplicationLoadBalancedFargateServiceProps
    {
        TaskImageOptions = new ApplicationLoadBalancedTaskImageOptions
        {
            Image = ContainerImage.FromRegistry("myImage",
                new RepositoryImageProps { Credentials = mycreds })
        },
        PublicLoadBalancer = true,
        LoadBalancerName = "LB",
        ServiceName = "frontend",
        RecordType = ApplicationLoadBalancedServiceRecordType.CNAME
    });
fargate.LoadBalancer.AddListener("ecs-https-listener", new ApplicationListenerProps
{
    SslPolicy = SslPolicy.RECOMMENDED,
    Protocol = ApplicationProtocol.HTTPS,
    Open = true,
    Port = 443,
    Certificates = new IListenerCertificate[]
    {
        ListenerCertificate.FromArn("myArn")
    },
    DefaultTargetGroups = new IApplicationTargetGroup[]
    {
        fargate.TargetGroup
    }
});
How would I set up my stack to create a "static" DNS name that won't be different each time I create/destroy the stack?
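As far as I know, the random suffix in the generated ALB hostname cannot be pinned, so with a third-party DNS the usual options are either to delegate a subdomain to a Route 53 hosted zone and set DomainName/DomainZone on ApplicationLoadBalancedFargateServiceProps, or to export the generated hostname and point the external CNAME at it after each deploy. A minimal sketch of the latter (the output name is illustrative):

// Surface the generated ALB hostname as a stack output so the third-party
// DNS CNAME can be pointed at it (the value changes if the load balancer is recreated).
new CfnOutput(this, "FargateAlbDnsName", new CfnOutputProps
{
    Value = fargate.LoadBalancer.LoadBalancerDnsName
});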

Related

Terraform - Cyclic dependency issue on GCP

I am provisioning multiple resources on GCP, including a Cloud SQL (Postgres) DB and one VM instance. I am struggling with a cyclic dependency in Terraform during terraform apply:
Cloud SQL (Postgres) needs the IP of the VM for IP whitelisting
The VM uses a start-up script that requires the Public IP of the Postgres DB
Hence the cyclic dependency... Do you have any suggestions for tackling this in Terraform?
File that creates the GCP VM (includes a startup script that requires the IP of the Postgres DB)
data "template_file" "startup_script_airbyte" {
template = file("${path.module}/sh_scripts/airbyte.sh")
vars = {
db_public_ip = "${google_sql_database_instance.postgres.public_ip_address}"
db_name_prefix = "${var.db_name}"
db_user = "${var.db_user}"
db_password = "${var.db_password}"
}
}
resource "google_compute_instance" "airbyte_instance" {
name = "${google_project.data_project.project_id}-airbyte"
machine_type = local.airbyte_machine_type
project = google_project.data_project.project_id
metadata_startup_script = data.template_file.startup_script_airbyte.rendered #file("./sh_scripts/airbyte.sh")
allow_stopping_for_update = true
depends_on = [
google_project_service.data_project_services,
]
boot_disk {
initialize_params {
image = "ubuntu-2004-focal-v20210415"
size = 50
type = "pd-balanced"
}
}
network_interface {
network = "default"
access_config {
network_tier = "PREMIUM"
}
}
service_account {
email = google_service_account.airbyte_sa.email
scopes = ["cloud-platform"]
}
}
Script that creates the Postgres DB (requires IP of the VM above)
resource "google_sql_database_instance" "postgres" {
name = "postgres-instance-${random_id.db_name_suffix.hex}"
project = google_project.data_project.project_id
database_version = "POSTGRES_13"
settings{
tier = "db-f1-micro"
backup_configuration {
enabled = true
start_time = "02:00"
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
database_flags {
name = "max_connections"
value = 30000
}
#Whitelisting the IPs of the GCE VMs in Postgres
ip_configuration {
ipv4_enabled = "true"
authorized_networks {
name = "${google_compute_instance.airbyte_instance.name}"
value = "${google_compute_instance.airbyte_instance.network_interface.0.access_config.0.nat_ip}"
}
}
}
}
One way to overcome this would be to reserve a static public IP using google_compute_address. You do this before you create your instance, and then attach the address to the instance.
This way the IP can be whitelisted in Cloud SQL before the instance is created.
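A rough sketch of that approach (the address name and region are illustrative): the VM still depends on the Cloud SQL instance for its startup script, but Cloud SQL now only depends on the reserved address, so the cycle is gone.

resource "google_compute_address" "airbyte_ip" {
  name    = "airbyte-static-ip"
  project = google_project.data_project.project_id
  region  = "europe-west1" # illustrative; use your region
}

# In google_sql_database_instance.postgres -> ip_configuration:
#   authorized_networks {
#     name  = "airbyte"
#     value = google_compute_address.airbyte_ip.address
#   }

# In google_compute_instance.airbyte_instance -> network_interface:
#   access_config {
#     nat_ip       = google_compute_address.airbyte_ip.address
#     network_tier = "PREMIUM"
#   }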
The correct solution is to install the Cloud SQL Auth Proxy in the VM. Then you do not need to whitelist IP addresses. This will remove the cyclic dependency.
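A sketch of what that could look like (the db_connection_name variable and the proxy invocation inside airbyte.sh are assumptions, not code from the question); the authorized_networks block that references the VM can then be removed:

data "template_file" "startup_script_airbyte" {
  template = file("${path.module}/sh_scripts/airbyte.sh")
  vars = {
    # airbyte.sh would start the proxy itself, e.g.:
    #   ./cloud_sql_proxy -instances=${db_connection_name}=tcp:5432 &
    db_connection_name = google_sql_database_instance.postgres.connection_name
    db_name_prefix     = var.db_name
    db_user            = var.db_user
    db_password        = var.db_password
  }
}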

Creating endpoint in cloud run with Terraform and Google Cloud Platform

I'm researching a way to use Terraform with the GCP provider to create a Cloud Run endpoint. As a starter I'm testing with a simple hello-world service. I have the Cloud Run service resource and the Cloud Endpoints resource configured, with the endpoints resource set to depends_on the Cloud Run service. The files are constructed following best practice, with a module containing the Cloud Run and Cloud Endpoints resources. However, when I try to pass the Cloud Run URL as the service name to Cloud Endpoints,
service_name = "${google_cloud_run_service.default.status[0].url}"
Terraform throws an Error: Invalid character. I've also tried module.folder.output.url.
I have the openapi_config.yml hardcoded within the TF config.
I'm wondering if it's possible to get this to work. I've researched many posts, and some forums are outdated.
#Cloud Run
resource "google_cloud_run_service" "default" {
  name     = var.name
  location = var.location

  template {
    spec {
      containers {
        image = "gcr.io/cloudrun/hello"
      }
    }
    metadata {
      annotations = {
        "autoscaling.knative.dev/maxScale" = "1000"
        "run.googleapis.com/cloudstorage"  = "project_name:us-central1:${google_storage_bucket.storage-run.name}"
        "run.googleapis.com/client-name"   = "terraform"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  autogenerate_revision_name = true
}

output "url" {
  value = "${google_cloud_run_service.default.status[0].url}"
}

data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = [
      "allUsers",
    ]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  location    = google_cloud_run_service.default.location
  project     = google_cloud_run_service.default.project
  service     = google_cloud_run_service.default.name
  policy_data = data.google_iam_policy.noauth.policy_data
}

#CLOUD STORAGE
resource "google_storage_bucket" "storage-run" {
  name               = var.name
  location           = var.location
  force_destroy      = true
  bucket_policy_only = true
}

data "template_file" "openapi_spec" {
  template = file("${path.module}/openapi_spec.yml")
}

#CLOUD ENDPOINT SERVICE
resource "google_endpoints_service" "api-service" {
  service_name   = "api_name.endpoints.project_name.cloud.goog"
  project        = var.project
  openapi_config = data.template_file.openapi_spec.rendered
}
ERROR: googleapi: Error 400: Service name 'CLOUD_RUN_ESP_NAME' provided in the config files doesn't match the service name 'api_name.endpoints.project_name.cloud.goog' provided in the request., badRequest
So I later discovered that the service name must match the Cloud Run ESP service URL (the host, without https://) in order for the Cloud Endpoints service to provision. The Terraform docs state otherwise, in the form "$apiname.endpoints.$projectid.cloud.goog" (terraform_cloud_endpoints), while the GCP docs state that the Cloud Run ESP service name must be the URL without https://, e.g. gateway-12345-uc.a.run.app.
Getting Started with Endpoints for Cloud Run
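Based on that finding, a sketch of deriving the service name from the Cloud Run URL (assumes Terraform 0.12+ for the replace() function; the host in openapi_spec.yml must be templated to the same run.app hostname):

resource "google_endpoints_service" "api-service" {
  # Strip the scheme so the service name matches the run.app host,
  # e.g. "gateway-12345-uc.a.run.app".
  service_name   = replace(google_cloud_run_service.default.status[0].url, "https://", "")
  project        = var.project
  openapi_config = data.template_file.openapi_spec.rendered
}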

Can you create a Route 53 A record that maps directly to the IP address of an ECS service and ECS task defined using the AWS CDK?

Can you create a Route 53 A record that maps directly to the IP address of an ECS service and ECS task defined using the AWS CDK?
I have the following code
FargateTaskDefinition taskDef = new FargateTaskDefinition(this, "DevStackTaskDef", new FargateTaskDefinitionProps()
{
    MemoryLimitMiB = 2048,
    Cpu = 512
});

var service = new FargateService(this, "DevStackFargateService", new FargateServiceProps()
{
    ServiceName = "DevStackFargateService",
    TaskDefinition = taskDef,
    Cluster = cluster,
    DesiredCount = 1,
    SecurityGroup = securityGroup,
    AssignPublicIp = true,
    VpcSubnets = new SubnetSelection()
    {
        SubnetType = SubnetType.PUBLIC
    }
});

new ARecord(this, "AliasRecord", new ARecordProps()
{
    Zone = zone,
    Target = RecordTarget.FromIpAddresses() // here is the line in question
});
The ARecordProps.Target value is the one I'm stuck on. I cannot find a way to get the IP address of the task that will be created. Does anyone know if this is possible? I would really like to avoid using load balancers, as this is a dev/test environment. I have also looked at the aws-route53-targets module and see that it only supports:
ApiGateway
ApiGatewayDomain
BucketWebsiteTarget
ClassicLoadBalancerTarget
CloudFrontTarget
LoadBalancerTarget
Any help would be much appreciated. Thanks
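The aws-route53-targets module has no target for an ECS task, since a task's IP is only known once it is running. If a private DNS name is acceptable for a dev/test setup, one alternative (a sketch with illustrative names, assuming awsvpc networking as Fargate uses) is to let ECS Service Discovery register the record instead of creating a Route 53 ARecord yourself:

// Cloud Map creates and maintains an A record per running task in a private
// hosted zone, so no load balancer or manual ARecord is needed.
cluster.AddDefaultCloudMapNamespace(new CloudMapNamespaceOptions
{
    Name = "dev.local" // hypothetical private namespace
});

service.EnableCloudMap(new CloudMapOptions
{
    Name = "devstack",               // resolves as devstack.dev.local inside the VPC
    DnsRecordType = DnsRecordType.A, // A record pointing at the task's IP
    DnsTtl = Duration.Seconds(60)
});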

How do I create an AWS App Mesh using AWS CDK

I am attempting to create a stack for (currently) 9 .NET Core microservices that will run in ECS Fargate and communicate with each other via App Mesh. I plan on creating an Infrastructure stack, which creates the App Mesh resource and the ECS cluster, and a Microservice stack, which creates the resources for each service and adds them to the App Mesh and ECS cluster.
I currently have this code:
Vpc = Amazon.CDK.AWS.EC2.Vpc.FromLookup(this, "vpc", new VpcLookupOptions
{
    VpcId = "xxxxxxxxxxxx"
});

DefaultCloudMapNamespace = new CloudMapNamespaceOptions
{
    Vpc = Vpc,
    Name = dnsNamespace,
    Type = NamespaceType.DNS_PRIVATE,
};

EcsCluster = new Cluster(this, $"{Env}-linux-cluster", new ClusterProps
{
    Vpc = Vpc,
    ClusterName = $"{Env}-linux-cluster",
    DefaultCloudMapNamespace = DefaultCloudMapNamespace
});
This seems to be okay - it creates a hosted zone in Route53.
When I am creating the Service for Cloud Map, I'm using this code:
var cloudMapService = new Service(this, serviceName, new ServiceProps
{
    Namespace = new PrivateDnsNamespace(this, $"{serviceNameHyphen}-cm-namespace", new PrivateDnsNamespaceProps
    {
        Vpc = infrastructureStack.Vpc,
        Name = $"{serviceName}.dev",
    }),
    DnsRecordType = DnsRecordType.SRV,
    DnsTtl = Duration.Seconds(60),
    RoutingPolicy = RoutingPolicy.MULTIVALUE,
    Name = serviceName
});
This is the first time I'm working with App Mesh & Cloud Map, but I would expect to use the same private hosted zone for both the Cloud Map namespace and the Cloud Map Service namespace.
Is this the correct approach?
My approach:
I create the namespace first:
cloud_map = sds.PrivateDnsNamespace(
    self,
    "PrivateNameSpace",
    vpc=vpcObject,
    description=' '.join(["Private DNS for", self.node.try_get_context('EnvironmentName')]),
    name=service_domain
)
Then, when creating the Virtual Service, I use the same domain for it:
vservice = mesh.VirtualService(
    self,
    "VirtualService",
    virtual_service_name='.'.join([node_name, service_domain]),
    virtual_service_provider=mesh.VirtualServiceProvider.virtual_node(vnode)
)
Then I reference it when creating the ECS service:
ecs_service = ecs.Ec2Service(
    self,
    "ECSService",
    task_definition=ecs_task,
    placement_strategies=[
        ecs.PlacementStrategy.spread_across_instances()
    ],
    desired_count=desiredCount,
    cluster=clusterObject,
    security_groups=[sgObject],
    vpc_subnets=ec2.SubnetSelection(
        subnet_type=ec2.SubnetType.PRIVATE
    ),
    enable_ecs_managed_tags=True,
    health_check_grace_period=cdk.Duration.seconds(120),
    max_healthy_percent=200,
    min_healthy_percent=50,
    cloud_map_options=ecs.CloudMapOptions(
        cloud_map_namespace=cloud_map,
        dns_record_type=cm.DnsRecordType.A,
        dns_ttl=cdk.Duration.seconds(300),
        failure_threshold=1,
        name=node_name
    ),
)

How to create RFC 2782-compliant SRV record with Terraform and AWS Service Discovery?

I'm running a MySQL database in AWS ECS (I can't use RDS), and I'd like to use ECS Service Discovery to populate an SRV record that points to that database. I'm using Terraform to configure all AWS services.
This is what I have working so far ...
resource "aws_service_discovery_service" "mysqldb" {
name = "mysqldb"
health_check_custom_config {
failure_threshold = 3
}
dns_config {
namespace_id = "${aws_service_discovery_private_dns_namespace.stack_local.id}"
dns_records {
ttl = "300"
type = "SRV"
}
routing_policy = "MULTIVALUE"
}
}
resource "aws_service_discovery_private_dns_namespace" "stack_local" {
name = "${var.stack_name}.local"
description = "Testing with AWS Service Discovery"
vpc = "${aws_vpc.vpc.id}"
}
However, this creates an SRV record (mysqldb.stack_name.local.) that is not RFC 2782-compliant. In order to be compliant, it should include the service name and protocol, like this: _mysql._tcp.mysqldb.stack_name.local.
I tried changing the name to something like _mysql._tcp.mysqldb but that failed because AWS requires the name to be a single label without any . in it.
Is it possible to create an RFC 2782-compliant SRV record using ECS Service Discovery with Terraform and the Terraform AWS provider?