I am wondering if it is possible to configure the “public access source allowlist” from CDK. I can see and manage this in the console under the networking tab, but can’t find anything in the CDK docs about setting the allowlist during deploy. I tried creating and assigning a security group (code sample below), but this didn't work. Also the security group was created as an "additional" security group, rather than the "cluster" security group.
declare const vpc: ec2.Vpc;
declare const adminRole: iam.Role;
const securityGroup = new ec2.SecurityGroup(this, 'my-security-group', {
  vpc,
  allowAllOutbound: true,
  description: 'Created in CDK',
  securityGroupName: 'cluster-security-group'
});
securityGroup.addIngressRule(
  ec2.Peer.ipv4('<vpn CIDR block>'),
  ec2.Port.tcp(8888),
  'allow frontend access from the VPN'
);
const cluster = new eks.Cluster(this, 'my-cluster', {
  vpc,
  clusterName: 'cluster-cdk',
  version: eks.KubernetesVersion.V1_21,
  mastersRole: adminRole,
  defaultCapacity: 0,
  securityGroup
});
Update: I attempted the following, and it updated the cluster security group, but I'm still able to access the frontend when I'm not on the VPN:
cluster.connections.allowFrom(
  ec2.Peer.ipv4('<vpn CIDR block>'),
  ec2.Port.tcp(8888)
);
Update 2: I tried this as well, and I can still access my application's frontend even when I'm not on the VPN. However, I can now only use kubectl when I'm on the VPN, which is good! It's a step forward, since I've at least improved the cluster's security in a useful way.
const cluster = new eks.Cluster(this, 'my-cluster', {
  vpc,
  clusterName: 'cluster-cdk',
  version: eks.KubernetesVersion.V1_21,
  mastersRole: adminRole,
  defaultCapacity: 0,
  endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('<vpn CIDR block>')
});
In general, EKS has two relevant security groups:
The one used by the nodes, which AWS calls the "cluster security group". It's set up automatically by EKS. You shouldn't need to touch it unless you want (a) more restrictive rules than the defaults, or (b) to open your nodes to maintenance tasks (e.g. SSH access). This is what you are accessing via cluster.connections (a short sketch of this appears after this list).
The Ingress load balancer security group. This is an Application Load Balancer created and managed by EKS. In CDK, it can be created like so:
const cluster = new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_22,
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
});
This will serve as a gateway for all internal services that need an Ingress. You can access it via the cluster.albController property and add rules to it like a regular Application Load Balancer. I have no idea how EKS handles task communication when an Ingress ALB is not present.
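As a minimal sketch of the first point above (the CIDR and port here are placeholder assumptions, not something from the original setup), opening the EKS-managed cluster security group for node maintenance would look something like this:
// Allow SSH to the nodes through the EKS-managed cluster security group.
// '10.10.0.0/24' is a placeholder for your bastion/VPN CIDR.
cluster.connections.allowFrom(
  ec2.Peer.ipv4('10.10.0.0/24'),
  ec2.Port.tcp(22),
  'allow SSH for node maintenance'
);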
Relevant docs:
Amazon EKS security group considerations
ALB Controller in the CDK docs
The albController property for EKS Cluster objects
I'm able to create an EC2 instance and an RDS database via the CDK, but can't find out how to connect them outside of the AWS console.
When setting up the connection manually I get the screen that describes the changes saying "To set up a connection between the database and the EC2 instance, VPC security group xxx-xxx-x is added to the database, and VPC security group xxx-xxx-x is added to the EC2 instance."
Is there a way to do this via cdk?
Is it possible to do this in the instanceProps of the DatabaseCluster?
My existing code for the RDS db cluster looks something like this:
this.dbCluster = new DatabaseCluster(this, 'MyDbCluster', {
  //
  instanceProps: { vpc: props.vpc, vpcSubnets: { subnetGroupName: 'private' } }
})
How would I add the group to my existing code for the EC2 instance - in the vpcSubnets section?
const ec2Instance = new Instance(this, 'ec2instance', {
  //
  vpcSubnets: {
    subnetGroupName: 'private',
  },
})
You need to allow EC2 to connect to RDS.
You do this by using the DatabaseCluster.connections prop (docs).
Look at the examples in the Connections class docs.
this.dbCluster.connections.allowFrom(ec2Instance, ec2.Port.tcp(5432));
Adjust the port to match your database engine's port.
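As a small variation (same objects as above), Connections also exposes allowDefaultPortFrom, which reads the port from the cluster itself so you don't have to hard-code it; a minimal sketch:
// Same rule, but using the database cluster's default port instead of a literal.
this.dbCluster.connections.allowDefaultPortFrom(ec2Instance, 'allow the EC2 instance to reach the database');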
Background:
We're using AWS Cloud Development Kit (CDK) 2.5.0.
Manually using the AWS Console and hard-coded IP addresses, Route 53 to an ALB (Application Load Balancer) to a private Interface VPC Endpoint to a private REST API-Gateway (and so on..) works. See image below.
Code:
We're trying to code this manual solution via CDK, but are stuck on how to get and use the IP addresses or in some way hook up the load balancer to the Interface VPC Endpoint. (Endpoint has 3 IP addresses, one per availability zone in the region.)
The ALB needs a Target Group which targets the IP addresses of the Interface VPC Endpoint. (Using an "instance" approach instead of IP addresses, we tried using InstanceIdTarget with the endpoint's vpcEndpointId, but that failed. We got the error Instance ID 'vpce-WITHWHATEVERWASHERE' is not valid )
Using CDK, we created the following (among other things) using the aws_elasticloadbalancingv2 module:
ApplicationLoadBalancer (ALB)
ApplicationTargetGroup (ATG) aka Target Group
We were hopeful about aws_elasticloadbalancingv2_targets similar to aws_route53_targets, but no luck. We know the targets property of the ApplicationTargetGroup takes an array of IApplicationLoadBalancerTarget objects, but that's it.
:
import { aws_ec2 as ec2 } from 'aws-cdk-lib';
:
import { aws_elasticloadbalancingv2 as loadbalancing } from 'aws-cdk-lib';
// endpointSG, loadBalancerSG, vpc, ... are defined up higher
const endpoint = new ec2.InterfaceVpcEndpoint(this, `ABCEndpoint`, {
  service: {
    name: `com.amazonaws.us-east-1.execute-api`,
    port: 443
  },
  vpc,
  securityGroups: [endpointSG],
  privateDnsEnabled: false,
  subnets: { subnetGroupName: "Private" }
});
const loadBalancer = new loadbalancing.ApplicationLoadBalancer(this, `abc-${config.LEVEL}-load-balancer`, {
  vpc: vpc,
  vpcSubnets: { subnetGroupName: "Private" },
  internetFacing: false,
  securityGroup: loadBalancerSG
});
const listenerCertificate = loadbalancing.ListenerCertificate.fromArn(config.ARNS.CERTIFICATE);
const listener = loadBalancer.addListener('listener', {
  port: 443,
  certificates: [listenerCertificate]
});
let applicationTargetGroup = new loadbalancing.ApplicationTargetGroup(this, `abc-${config.LEVEL}-target-group`, {
  port: 443,
  vpc: vpc,
  // targets: [ HELP ] - how to get the IApplicationLoadBalancerTarget objects?
})
listener.addTargetGroups('abc-listener-forward-to-target-groups', {
  targetGroups: [applicationTargetGroup]
});
As you can see above, we added a listener to the ALB. We added the Target Group to the listener.
Some of the resources we used:
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_elasticloadbalancingv2.ApplicationLoadBalancer.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_elasticloadbalancingv2.ApplicationTargetGroup.html
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.InterfaceVpcEndpoint.html
https://docs.aws.amazon.com/cdk/api/v2//docs/aws-cdk-lib.aws_elasticloadbalancingv2_targets.InstanceIdTarget.html and related.
"How to get PrivateIPAddress of VPC Endpoint in CDK?", but this did not help.
In case visualizing the set up via an image helps, here's a close approximation of what we're going for.
Any help populating that targets property of the ApplicationTargetGroup with IApplicationLoadBalancerTarget objects is appreciated. Thanks!
https://aws.amazon.com/blogs/networking-and-content-delivery/accessing-an-aws-api-gateway-via-static-ip-addresses-provided-by-aws-global-accelerator/
This blog shows how to configure the architecture given in the question using the AWS console (just disable the Global Accelerator option). The key takeaway is that the Application Load Balancer uses target type IP and resolves the VPC endpoint domain name manually in step 2. The other two options, instance (the target is an EC2 instance) and lambda (the target is an AWS Lambda function), cannot be used.
The ec2.InterfaceVpcEndpoint construct has no output which directly gives an IP address. The underlying CloudFormation resource also does not support it. Instead, you will have to use the vpcEndpointDnsEntries property of ec2.InterfaceVpcEndpoint and resolve the domain names to IP addresses in your code (the console configuration also required the same domain name resolution). You can use an IpTarget object in your ApplicationTargetGroup.
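For illustration, here is a minimal sketch of the target group side, reusing the question's loadbalancing alias and vpc and assuming you already have the resolved IP addresses in hand (the IPs below are placeholders; the next paragraphs explain why obtaining them is the hard part):
import { aws_elasticloadbalancingv2_targets as targets } from 'aws-cdk-lib';
// Placeholder IPs, one per availability zone, resolved from the endpoint's DNS names.
const endpointIps = ['10.0.1.45', '10.0.2.17', '10.0.3.92'];
const targetGroup = new loadbalancing.ApplicationTargetGroup(this, 'vpce-target-group', {
  port: 443,
  vpc: vpc,
  targetType: loadbalancing.TargetType.IP,
  targets: endpointIps.map(ip => new targets.IpTarget(ip, 443)),
});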
At this point, you will run into one final roadblock due to how CDK works under the hood.
If you have all your resources defined in one CDK application, the value for each parameter (or a reference to the value using underlying CloudFormation functions like Ref, GetAtt, etc.) needs to be available before the synthesis step, since that's when all templates are generated. AWS CDK uses tokens for this purpose, which during synthesis resolve to values such as { 'Fn::GetAtt': ['EndpointResourceLogicalName', 'DnsEntries'] }. However, since we need the actual value of the DNS entry to be able to resolve it, the token value won't be useful.
One way to fix this issue is to have two completely independent CDK applications structured this way:
Application A with the VPC and interface endpoint. Define the vpcEndpointDnsEntries and the VPC ID as outputs using CfnOutput.
Application B with the rest of the resources. You will have to write code to read the outputs of the CloudFormation stack created by Application A. You can use Fn.importValue for the VPC ID, but you cannot use it for the DnsEntries output, since it would again just resolve to an Fn::ImportValue-based token. You need to read the actual value of the stack output, using the AWS SDK or some other option. Once you have the domain name, you can resolve it in your TypeScript code (I am not very familiar with TypeScript; this might require a third-party library).
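A rough sketch of what Application B's bootstrap step could look like, with the stack name, output key, and output format as assumptions; it reads the output with the AWS SDK and resolves the DNS name with Node's built-in dns module (no third-party library needed):
import { CloudFormationClient, DescribeStacksCommand } from '@aws-sdk/client-cloudformation';
import * as dns from 'dns';

// Read the endpoint DNS entries exported by Application A, then resolve one of
// them to IP addresses. Assumes Application A joined vpcEndpointDnsEntries with
// commas and each entry has the '<hosted zone id>:<dns name>' shape.
async function resolveEndpointIps(): Promise<string[]> {
  const cfn = new CloudFormationClient({});
  const { Stacks } = await cfn.send(new DescribeStacksCommand({ StackName: 'application-a-stack' }));
  const output = Stacks?.[0].Outputs?.find(o => o.OutputKey === 'EndpointDnsEntries');
  const dnsName = output!.OutputValue!.split(',')[0].split(':')[1];
  return dns.promises.resolve4(dnsName);
}
The resolved addresses can then feed the IpTarget sketch shown earlier.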
Image credits:
https://docs.aws.amazon.com/cdk/v2/guide/apps.html
Try using a custom resource to get the ENI IP addresses:
https://repost.aws/questions/QUjISNyk6aTA6jZgZQwKWf4Q/how-to-connect-a-load-balancer-and-an-interface-vpc-endpoint-together-using-cdk
I have an existing VPC endpoint. Now, using CDK, I need to add a new security group to the existing endpoint. CDK has an option to import the endpoint using the following method:
const vpce = InterfaceVpcEndpoint.fromInterfaceVpcEndpointAttributes(this, 'TransferVpce', {
  port: 443,
  vpcEndpointId: "vpce-EndPointID",
});
But once imported, it does not give me an option to update it by adding a new security group. Any suggestions?
InterfaceVpcEndpoint exposes a Connections object through which you can reach its security groups. You can then create (or look up) your SecurityGroup and modify the rules.
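A minimal sketch of that idea, assuming you pass the endpoint's existing security group into the import so the Connections object has something to attach rules to (the IDs and CIDR below are placeholders):
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

// Look up the security group that is already attached to the endpoint (placeholder ID).
const existingSg = ec2.SecurityGroup.fromSecurityGroupId(this, 'ExistingVpceSg', 'sg-0123456789abcdef0');

const vpce = ec2.InterfaceVpcEndpoint.fromInterfaceVpcEndpointAttributes(this, 'TransferVpce', {
  port: 443,
  vpcEndpointId: 'vpce-EndPointID',
  securityGroups: [existingSg],
});

// Rules added through Connections land on the security group passed above.
vpce.connections.allowFrom(ec2.Peer.ipv4('10.0.0.0/16'), ec2.Port.tcp(443), 'allow HTTPS from the VPC');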
I have a service that exposes multiple ports, and it worked fine with Kubernetes, but now we are moving it to AWS ECS. It seems I can only expose ports via a load balancer, and I am limited to one port per service/task: even when Docker defines multiple ports, I have to choose one port.
The "Add to load balancer" button allows adding one port. Once added, there is no button to add a second port.
Is there any nicer workaround than creating a second proxy service to expose the second port?
UPDATE: I use a Fargate-based service.
You don't need any workaround; AWS ECS now supports multiple target groups within the same ECS service. This is helpful for use cases where you want to expose multiple ports of your containers.
Currently, if you want to create a service specifying multiple target groups, you must create the service using the Amazon ECS API, SDK, AWS CLI, or an AWS CloudFormation template. After the service is created, you can view the service and the target groups registered to it with the AWS Management Console.
For example, a Jenkins container might expose port 8080 for the Jenkins web interface and port 50000 for the API.
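Since most of this document uses CDK (which synthesizes to CloudFormation, one of the supported paths mentioned above), here is a hedged CDK sketch of that Jenkins example; the service, target groups, and container name are assumed to exist already:
import { aws_ecs as ecs, aws_elasticloadbalancingv2 as elbv2 } from 'aws-cdk-lib';

declare const service: ecs.FargateService;
declare const webTargetGroup: elbv2.ApplicationTargetGroup;
declare const apiTargetGroup: elbv2.ApplicationTargetGroup;

// Register one service with two target groups, one per container port.
webTargetGroup.addTarget(service.loadBalancerTarget({ containerName: 'jenkins', containerPort: 8080 }));
apiTargetGroup.addTarget(service.loadBalancerTarget({ containerName: 'jenkins', containerPort: 50000 }));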
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/register-multiple-targetgroups.html
https://aws.amazon.com/about-aws/whats-new/2019/07/amazon-ecs-services-now-support-multiple-load-balancer-target-groups/
Update:
I was able to configure multiple target groups using Terraform, but so far I have not found this option in the AWS console.
resource "aws_ecs_service" "multiple_target_example" {
name = "multiple_target_example1"
cluster = "${aws_ecs_cluster.main.id}"
task_definition = "${aws_ecs_task_definition.with_lb_changes.arn}"
desired_count = 1
iam_role = "${aws_iam_role.ecs_service.name}"
load_balancer {
target_group_arn = "${aws_lb_target_group.target2.id}"
container_name = "ghost"
container_port = "3000"
}
load_balancer {
target_group_arn = "${aws_lb_target_group.target2.id}"
container_name = "ghost"
container_port = "3001"
}
depends_on = [
"aws_iam_role_policy.ecs_service",
]
}
Version note:
Multiple load_balancer configuration block support was added in Terraform AWS Provider version 2.22.0.
ecs_service_terraform
I can't say that this is a nice workaround, but I was working on a project where I needed to run Ejabberd on AWS ECS, and the same issue came up when binding the service's ports to the load balancer.
We were working with Terraform, and due to this limitation of AWS ECS we agreed to run one container per instance to resolve the port issue, since we needed to expose two ports.
If you do not want to assign a dynamic port to your container and you want to run one container per instance, then this solution will definitely work:
Create a target group and specify the second port of the container.
Go to the Auto Scaling group of your ECS cluster.
Edit the Auto Scaling group and add the newly created target group to it.
So if you scale to two containers, there will be two instances, and the newly launched instance will register with the second target group; the Auto Scaling group takes care of that.
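A rough CDK sketch of steps 1 and 3 above, assuming an existing vpc and the autoScalingGroup that provides your ECS cluster's capacity (the port is a placeholder):
import { aws_autoscaling as autoscaling, aws_ec2 as ec2, aws_elasticloadbalancingv2 as elbv2 } from 'aws-cdk-lib';

declare const vpc: ec2.Vpc;
declare const autoScalingGroup: autoscaling.AutoScalingGroup;

// Target group for the container's second port, registered against the ASG's instances.
const secondPortTargetGroup = new elbv2.ApplicationTargetGroup(this, 'SecondPortTargetGroup', {
  vpc,
  port: 5280, // placeholder second port
  targetType: elbv2.TargetType.INSTANCE,
});
secondPortTargetGroup.addTarget(autoScalingGroup);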
This approach is working fine in my case, but a few things need to be considered:
Do not bind the primary port in the target group; it is better to bind the primary port in the ALB service. The main advantage of this approach is that if your container fails to respond to the AWS health check, the container will be restarted automatically, whereas the target group health check will not recreate your container.
This approach will not work when a dynamic port is exposed by the Docker container.
AWS should update its ECS agent to handle such scenarios.
I faced this issue while creating more than one container per instance, and the second container was not coming up because it was using the same port defined in the task definition.
What we did was create an Application Load Balancer on top of these containers and remove the hard-coded ports. When the Application Load Balancer does not get predefined ports, it uses dynamic port mapping: containers come up on random ports, sit in one target group, and the load balancer automatically sends requests to those ports.
More details can be found here
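For context, a minimal CDK sketch of what dynamic port mapping looks like for the EC2 launch type with bridge networking (the image and memory values are placeholders): setting hostPort to 0 lets Docker pick an ephemeral host port, which the ECS agent then registers with the target group.
import { aws_ecs as ecs } from 'aws-cdk-lib';

// EC2 task definition; the default network mode here is bridge.
const taskDefinition = new ecs.Ec2TaskDefinition(this, 'AppTaskDef');

const container = taskDefinition.addContainer('app', {
  image: ecs.ContainerImage.fromRegistry('amazon/amazon-ecs-sample'), // placeholder image
  memoryLimitMiB: 512,
});

// hostPort 0 requests a dynamic host port on each container instance.
container.addPortMappings({ containerPort: 80, hostPort: 0, protocol: ecs.Protocol.TCP });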
Thanks to mohit's answer, I used the AWS CLI to register multiple target groups (multiple ports) with one ECS service:
ecs-sample-service.json
{
  "serviceName": "sample-service",
  "taskDefinition": "sample-task",
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:0000000000:targetgroup/sample-target-group/00000000000000",
      "containerName": "faktory",
      "containerPort": 7419
    },
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:0000000000:targetgroup/sample-target-group-web/11111111111111",
      "containerName": "faktory",
      "containerPort": 7420
    }
  ],
  "desiredCount": 1
}
aws ecs create-service --cluster sample-cluster --service-name sample-service --cli-input-json file://ecs-sample-service.json --network-configuration "awsvpcConfiguration={subnets=[subnet-0000000000000],securityGroups=[sg-00000000000000],assignPublicIp=ENABLED}" --launch-type FARGATE
If the task needs internet access to pull its image, make sure subnet-0000000000000 has internet access.
Security group sg-00000000000000 needs to allow inbound access on the relevant ports (in this case, 7419 and 7420).
If the traffic only comes from the ALB, the task does not need a public IP, and assignPublicIp can be false.
Usually I use the AWS CLI method myself, creating the task definition and target groups and attaching them to the Application Load Balancer. But when there are multiple services to set up, this is a time-consuming task, so I would use Terraform to create such services.
Terraform module link: this is a multi-port ECS service with a Fargate deployment. Currently it supports only two ports. When using multiple ports with sockets, the socket port won't send any response, so the health check might fail.
To fix that, I would override the port in the target group to one of the other ports.
Hope this helps.
I'm trying to create an ElasticLoadBalancer for a Kubernetes cluster running on EKS. I would like to avoid creating a new security group and instead use one that I specify. From the Kubernetes source code (below) it appears that I can accomplish this by setting c.cfg.Global.ElbSecurityGroup.
func (c *Cloud) buildELBSecurityGroupList(...) {
    var securityGroupID string
    if c.cfg.Global.ElbSecurityGroup != "" {
        securityGroupID = c.cfg.Global.ElbSecurityGroup
    } else {
        ...
    }
    ...
}
How can I set the Kubernetes global config value ElbSecurityGroup?
Related and outdated question: Kubernetes and AWS: Set LoadBalancer to use predefined Security Group
As mentioned in the kops documentation, you can do it by editing the kops cluster configuration:
cloudConfig
elbSecurityGroup
WARNING: this works only for Kubernetes versions above 1.7.0.
To avoid creating a security group per ELB, you can specify the security group ID that will be assigned to your LoadBalancer. It must be a security group ID, not a name. api.loadBalancer.additionalSecurityGroups must be empty, because Kubernetes will add rules for the ports specified in the service file. This can be useful to avoid AWS limits: 500 security groups per region and 50 rules per security group.
spec:
  cloudConfig:
    elbSecurityGroup: sg-123445678