I am trying to create logs for the Network Load Balancer (not the task). I'm currently using the following code:
taskImageOptions: {
  containerPort: 8080,
  image: BrazilContainerImage.fromBrazil({
    brazilPackage: BrazilPackage.fromString('Service'),
    transformPackage: BrazilPackage.fromString('ServiceImageBuild'),
    componentName: 'service',
  }),
  containerName: 'Application',
  taskRole: this.taskRole,
  environment: {
    'STAGE': props.stage,
    'SERVICE_RUN': 'true'
  },
  logDriver: new AwsLogDriver({
    streamPrefix: 'NetworkLoadBalancer-',
    logGroup: new LogGroup(this, 'Service-NetworkLoadBalancer', {
      removalPolicy: RemovalPolicy.RETAIN,
      retention: RetentionDays.THREE_MONTHS,
    })
  }),
},
But this creates a new log group by deleting the existing ServiceTaskDefApplicationLogGroup. I guess this is happening because the logDriver is inside taskImageOptions, and there are no load balancer logging options available in NetworkLoadBalancedFargateService. Any suggestions?
The logDriver setting is specifically for your ECS tasks. It configures the logging for the output of your docker container(s). It is not related to load balancer access logs in any way.
You would need to take the loadBalancer property from the NetworkLoadBalancedFargateService and then call logAccessLogs() on it, as documented here.
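For example, here's a minimal sketch of what that could look like, assuming the pattern instance is stored in a variable named service and that you create a dedicated S3 bucket for the logs (the bucket construct ID and prefix below are hypothetical):

import { Bucket } from '@aws-cdk/aws-s3';

// S3 bucket to receive the NLB access logs (hypothetical construct ID)
const accessLogsBucket = new Bucket(this, 'NlbAccessLogsBucket');

// Enable access logging on the load balancer created by the pattern
service.loadBalancer.logAccessLogs(accessLogsBucket, 'nlb-access-logs');

Note that NLB access logs are only generated for TLS listeners, so a plain TCP listener will not produce any log files.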
I've got the following Fargate service created by an ecs pattern. The CloudMap I create here only points to the underlying task which is a private IP and runs on port 8080 (Tomcat). The ALB forwards properly from 80->8080. How can I get the DNS to properly route to the task? Can I get the DNS service to route directly to the ALB?
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'MyAppWebstartFargateService', {
  serviceName: "myapp-service",
  cluster: cluster,
  cpu: 512,
  memoryLimitMiB: 2048,
  cloudMapOptions: {
    name: "myapp",
    containerPort: 8080,
    cloudMapNamespace: namespace,
    dnsRecordType: svc_dsc.DnsRecordType.A,
    dnsTtl: Duration.seconds(300),
  },
  desiredCount: 1,
  publicLoadBalancer: false,
  securityGroups: [sg],
  listenerPort: 80,
  openListener: true,
  healthCheckGracePeriod: Duration.seconds(300),
  targetProtocol: elbv2.ApplicationProtocol.HTTP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  enableExecuteCommand: true,
  taskImageOptions: {
    containerName: "myapp-container",
    containerPort: 8080,
    enableLogging: true,
    image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
  },
});
I figured it out! I needed to call registerLoadBalancer on the Cloud Map service and give it the resulting load balancer from the Fargate pattern. Hope this helps someone down the road, because I could not find any solution to this exact use case.
const namespace = svc_dsc.PrivateDnsNamespace.fromPrivateDnsNamespaceAttributes(this, "MyAppCloudMapNamespace", {
  namespaceArn: "*****************",
  namespaceId: "999999999999999",
  namespaceName: "mydomain.com"
});

const mapService = new svc_dsc.Service(this, 'MyAppCloudMapService', {
  namespace: namespace,
  dnsRecordType: svc_dsc.DnsRecordType.A,
  dnsTtl: Duration.seconds(300),
  name: "myapp",
  routingPolicy: svc_dsc.RoutingPolicy.WEIGHTED,
  loadBalancer: true // Important! If you choose WEIGHTED but don't set this, the routing policy will default to MULTIVALUE instead
});

const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'MyAppWebstartFargateService', {
  serviceName: "myapp-service",
  cluster: cluster,
  cpu: 512,
  memoryLimitMiB: 2048,
  desiredCount: 1,
  publicLoadBalancer: false,
  securityGroups: [sg],
  listenerPort: 80,
  openListener: true,
  healthCheckGracePeriod: Duration.seconds(300),
  targetProtocol: elbv2.ApplicationProtocol.HTTP,
  protocol: elbv2.ApplicationProtocol.HTTP,
  enableExecuteCommand: true,
  taskImageOptions: {
    containerName: "myapp-container",
    containerPort: 8080,
    enableLogging: true,
    image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
  },
});

mapService.registerLoadBalancer("MyAppLoadBalancer", service.loadBalancer);
I have created a Fargate service running on an ECS cluster fronted by an application load balancer using the ApplicationLoadBalancedFargateService CDK construct.
const loadBalancedFargateService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "Service", {
  cluster,
  memoryLimitMiB: 1024,
  desiredCount: 1,
  cpu: 512,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
  },
});
There are no props for enabling deletion protection. Can anyone share how to do this from their experience?
CDK offers the escape hatches feature to set CloudFormation properties when a high-level construct does not expose them.
// Create a load-balanced Fargate service and make it public
var loadBalancedService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, `${ENV_NAME}-pgadmin4`, {
  cluster: cluster, // Required
  cpu: 512, // Default is 256
  desiredCount: 1, // Default is 1
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('image'),
    environment: {}
  },
  memoryLimitMiB: 1024, // Default is 512
  assignPublicIp: true
});

// Get the CloudFormation resource
const cfnLB = loadBalancedService.loadBalancer.node.defaultChild as elbv2.CfnLoadBalancer;
cfnLB.loadBalancerAttributes = [{
  key: 'deletion_protection.enabled',
  value: 'true',
}];
I have a CDK project that creates a CodePipeline which deploys an application on ECS. I had it all previously working, but the VPC was using a NAT gateway, which ended up being too expensive. So now I am trying to recreate the project without requiring a NAT gateway. I am almost there, but I have now run into issues when the ECS service is trying to start tasks. All tasks fail to start with the following error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 5 time(s): failed to fetch secret
At this point I've kind of lost track of the different things I have tried, but I will post the relevant bits here as well as some of my attempts.
const repository = ECR.Repository.fromRepositoryAttributes(
  this,
  "ecr-repository",
  {
    repositoryArn: props.repository.arn,
    repositoryName: props.repository.name,
  }
);

// vpc
const vpc = new EC2.Vpc(this, this.resourceName(props, "vpc"), {
  maxAzs: 2,
  natGateways: 0,
  enableDnsSupport: true,
});

const vpcSecurityGroup = new SecurityGroup(this, "vpc-security-group", {
  vpc: vpc,
  allowAllOutbound: true,
});

// tried this to allow the task to access secrets manager
const vpcEndpoint = new EC2.InterfaceVpcEndpoint(this, "secrets-manager-task-vpc-endpoint", {
  vpc: vpc,
  service: EC2.InterfaceVpcEndpointAwsService.SSM,
});

const secrets = SecretsManager.Secret.fromSecretCompleteArn(
  this,
  "secrets",
  props.secrets.arn
);

const cluster = new ECS.Cluster(this, this.resourceName(props, "cluster"), {
  vpc: vpc,
  clusterName: `api-cluster`,
});

const ecsService = new EcsPatterns.ApplicationLoadBalancedFargateService(
  this,
  "ecs-service",
  {
    taskSubnets: {
      subnetType: SubnetType.PUBLIC,
    },
    securityGroups: [vpcSecurityGroup],
    serviceName: "api-service",
    cluster: cluster,
    cpu: 256,
    desiredCount: props.scaling.desiredCount,
    taskImageOptions: {
      image: ECS.ContainerImage.fromEcrRepository(
        repository,
        this.ecrTagNameParameter.stringValue
      ),
      secrets: getApplicationSecrets(secrets), // returns
      logDriver: LogDriver.awsLogs({
        streamPrefix: "api",
        logGroup: new LogGroup(this, "ecs-task-log-group", {
          logGroupName: `${props.environment}-api`,
        }),
        logRetention: RetentionDays.TWO_MONTHS,
      }),
    },
    memoryLimitMiB: 512,
    publicLoadBalancer: true,
    domainZone: this.hostedZone,
    certificate: this.certificate,
    redirectHTTP: true,
  }
);

const scalableTarget = ecsService.service.autoScaleTaskCount({
  minCapacity: props.scaling.desiredCount,
  maxCapacity: props.scaling.maxCount,
});

scalableTarget.scaleOnCpuUtilization("cpu-scaling", {
  targetUtilizationPercent: props.scaling.cpuPercentage,
});

scalableTarget.scaleOnMemoryUtilization("memory-scaling", {
  targetUtilizationPercent: props.scaling.memoryPercentage,
});

secrets.grantRead(ecsService.taskDefinition.taskRole);
repository.grantPull(ecsService.taskDefinition.taskRole);
I read somewhere that it probably has something to do with Fargate version 1.4.0 vs 1.3.0, but I'm not sure what I need to change to allow the tasks to access what they need to run.
You need to create interface endpoints for Secrets Manager, ECR (two types of endpoints), and CloudWatch Logs, as well as a gateway endpoint for S3.
Refer to the documentation on the topic.
Here's an example in Python; it would work the same way in TS:
vpc.add_interface_endpoint(
    "secretsmanager_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
)

vpc.add_interface_endpoint(
    "ecr_docker_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
)

vpc.add_interface_endpoint(
    "ecr_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.ECR,
)

vpc.add_interface_endpoint(
    "cloudwatch_logs_endpoint",
    service=ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
)

vpc.add_gateway_endpoint(
    "s3_endpoint",
    service=ec2.GatewayVpcEndpointAwsService.S3
)
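For reference, a TypeScript sketch of the same endpoints, assuming vpc is the EC2.Vpc instance from the question (the construct IDs are arbitrary):

// Interface endpoints for Secrets Manager, ECR (api + docker) and CloudWatch Logs
vpc.addInterfaceEndpoint("secrets-manager-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});
vpc.addInterfaceEndpoint("ecr-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.ECR,
});
vpc.addInterfaceEndpoint("ecr-docker-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
});
vpc.addInterfaceEndpoint("cloudwatch-logs-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
});
// Gateway endpoint for S3 (ECR image layers are pulled from S3)
vpc.addGatewayEndpoint("s3-endpoint", {
  service: EC2.GatewayVpcEndpointAwsService.S3,
});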
Keep in mind that interface endpoints cost money as well, and may not be cheaper than a NAT.
I am setting up a database cluster (Aurora MySQL 5.7) using the DatabaseCluster Construct from #aws-cdk/aws-rds.
My question: where in the setup can I change the certificate authority? I want to programmatically set up the database to use rds-ca-2019 instead of rds-ca-2015. Note, I want to change this using CDK, not by "clicking in the AWS GUI".
The image below shows which setting I am referring to.
I have been browsing the docs for RDS CDK, and tried to Google this without success.
This guide describes the manual steps on how to do this.
AWS CDK RDS module
DatabaseCluster Construct
Low-level Cluster (CfnCluster)
BTW, my current config looks a bit like this:
const cluster = new rds.DatabaseCluster(this, 'aurora-cluster', {
  clusterIdentifier: 'aurora-cluster',
  engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
  masterUser: {
    username: 'someuser',
    password: 'somepassword'
  },
  defaultDatabaseName: 'db',
  instances: 2,
  instanceIdentifierBase: 'aurora-',
  instanceProps: {
    instanceType: ...,
    vpcSubnets: {
      subnetType: ec2.SubnetType.PUBLIC,
    },
    vpc: myVpc
  },
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  parameterGroup: {
    parameterGroupName: 'default.aurora-mysql5.7'
  },
  port: 3306,
  storageEncrypted: true
});
Apparently CloudFormation doesn't support the certificate authority field, and therefore CDK can't either.
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/211
I upvoted the issue; feel free to join me!
I'm trying to create an unmanaged instanceGroup with several VMs in it via a Deployment Manager configuration (YAML file).
I can easily find docs about addInstances via Google API, but couldn't find docs about how to do this in a YAML file:
instances
instanceGroups
What properties should be included in instances/instanceGroup resource to make it work?
The YAML below will create a compute engine instance, create an unmanaged instance group, and add the instance to the group.
resources:
- name: instance-1
  type: compute.v1.instance
  properties:
    zone: australia-southeast1-a
    machineType: zones/australia-southeast1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      diskType: zones/australia-southeast1-a/diskTypes/pd-ssd
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/debian-9-stretch-v20180716
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: ig-1
  type: compute.v1.instanceGroup
  properties:
    zone: australia-southeast1-a
    network: global/networks/default
- name: ig-1-members
  action: gcp-types/compute-v1:compute.instanceGroups.addInstances
  properties:
    project: YOUR_PROJECT_ID
    zone: australia-southeast1-a
    instanceGroup: ig-1
    instances: [ instance: $(ref.instance-1.selfLink) ]
There is currently no way to do it with Google Cloud Deployment Manager.
This was tested, and it seemed that while Google Deployment Manager was able to complete without issue with the following snippet:
{
  "instances": [
    {
      "instance": string
    }
  ]
}
it did not add the instances specified, but created the IGM.
However, Terraform seems to be able to do it: https://www.terraform.io/docs/providers/google/r/compute_instance_group.html
I think @mcourtney's answer is correct.
I just had this scenario, and I used a Python template with a YAML config to add instances to an unmanaged instance group.
Here is the snippet of the resource definition in my Python template:
{
    'name': name + '-ig-members',
    'action': 'gcp-types/compute-v1:compute.instanceGroups.addInstances',
    'properties': {
        'project': '<YOUR PROJECT ID>',
        'zone': context.properties['zone'],  # Defined in config yaml
        'instanceGroup': '<YOUR Instance Group name ( not url )>',
        'instances': [
            {
                'instance': 'projects/<PROJECT ID>/zones/<YOUR ZONE>/instances/<INSTANCE NAME>'
            }
        ]
    }
}
Reference API is documented here :
https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroups/addInstances
This is just an example; you can abstract all the hard-coded things to either the YAML configuration or variables at the top of the Python template.