Hi, I am trying to create a security group in AWS CDK. When I create an ingress rule, I want to specify source_security_group_id. I created the security group as below.
mws_vpc_sg = ec2.SecurityGroup(self, "securitygroupname",
    description="EC2 Services Security Group",
    security_group_name="securitygroupname",
    vpc=vpc
)
Then I want to add ingress rules as below.
SecurityGroupIngress:
  - IpProtocol: tcp
    FromPort: "80"
    ToPort: "80"
    SourceSecurityGroupId: !Ref MerchWebServicesLoadBalancerSecurityGroup
Can someone help me write the same template using the AWS CDK?
According to the documentation, ec2.SecurityGroup has a method called add_ingress_rule that takes a peer and a connection as arguments; in this case, those are another security group and an ec2.Port.
So you can try something like this:
my_sg = ec2.SecurityGroup(self, "securitygroupone",
    description="EC2 Services Security Group",
    security_group_name="securitygroupone",
    vpc=vpc
)

my_sg_two = ec2.SecurityGroup(self, "securitygrouptwo",
    description="EC2 Services Security Group with Ingress from securitygroupone",
    security_group_name="securitygrouptwo",
    vpc=vpc
)

# allow TCP 8080 into my_sg_two from my_sg
my_sg_two.add_ingress_rule(my_sg, ec2.Port.tcp(8080))
https://docs.aws.amazon.com/cdk/api/latest/python/aws_cdk.aws_ec2/SecurityGroup.html#aws_cdk.aws_ec2.SecurityGroup.add_ingress_rule
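To reproduce the original CloudFormation snippet (TCP port 80 from the load balancer's security group), a minimal sketch along the same lines; the security group id and the variable name merch_web_services_lb_sg are just placeholders for your existing load balancer security group:

merch_web_services_lb_sg = ec2.SecurityGroup.from_security_group_id(
    self, "MerchWebServicesLoadBalancerSecurityGroup",
    "sg-0123456789abcdef0"  # placeholder: id of the load balancer security group
)

# allow TCP 80 into mws_vpc_sg from the load balancer security group
mws_vpc_sg.add_ingress_rule(merch_web_services_lb_sg, ec2.Port.tcp(80))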
Related
AWS CDK lib version: 2.62.2
When I let the CDK implicitly create the SecurityGroup as below:
this.alb = new ApplicationLoadBalancer(this, 'Alb', {
  loadBalancerName: "Raido",
  vpc: props.vpc,
  internetFacing: true,
  vpcSubnets: {
    subnetType: SubnetType.PUBLIC,
    onePerAz: true,
  },
  // securityGroup: props.securityGroup,
  idleTimeout: Duration.seconds(60),
  dropInvalidHeaderFields: false,
  desyncMitigationMode: DesyncMitigationMode.DEFENSIVE,
});
It creates an ICMP egress rule blocking port 86, like this:
egress rule:
Type: Custom ICMP - IPv4
Protocol: 252
Port: 86
Destination: 255.255.255.255/32
Description: Disallow all traffic
When I create my own SecurityGroup manually, it doesn't have the egress rule.
Why does the CDK create this egress rule, and should I add a similar rule of my own?
I'm reluctant to just copy/paste the rule without knowing what it's for.
Apparently, this rule is added when the allowAllOutbound option is false, which is the default for the ApplicationLoadBalancer construct.
My takeaway: there is no point in copying the bogus rule onto your own security group. If you want to allow all outbound traffic, set allowAllOutbound; otherwise, set egress rules appropriate to your situation.
Apparently, the 255.255.255.255/32 destination is intended to match no traffic, because no machine can ever actually have the 255.255.255.255 IP address.
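A minimal sketch of the two alternatives, assuming CDK v2 imports, a vpc available via props.vpc, and a hypothetical backend security group called serviceSecurityGroup:

import { SecurityGroup, Port } from 'aws-cdk-lib/aws-ec2';
import { ApplicationLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Option 1: let the security group allow all outbound traffic
// (no "Disallow all traffic" dummy rule is generated in this case)
const albSecurityGroup = new SecurityGroup(this, 'AlbSecurityGroup', {
  vpc: props.vpc,
  allowAllOutbound: true,
});

// Option 2: keep outbound locked down and add only the egress you actually need,
// e.g. TCP 8080 to a backend security group (serviceSecurityGroup is a hypothetical name)
const restrictedAlbSg = new SecurityGroup(this, 'RestrictedAlbSg', {
  vpc: props.vpc,
  allowAllOutbound: false,
});
restrictedAlbSg.addEgressRule(serviceSecurityGroup, Port.tcp(8080), 'Forward traffic to the service');

// Pass whichever group you chose instead of letting the construct create one implicitly
const alb = new ApplicationLoadBalancer(this, 'Alb', {
  vpc: props.vpc,
  internetFacing: true,
  securityGroup: albSecurityGroup, // or restrictedAlbSg
});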
I want to create the target group with the CDK and use an ALB that already exists.
This error happens:
Error: .listeners can only be accessed if the class was constructed as an owned, not looked up, load balancer
at LookedUpApplicationLoadBalancer.get listeners [as listeners]
I can't access the listeners of the ALB.
What I want to do is use one load balancer for two ECS services and two domains:
www.exampleA.com -> port 80 -> ALB -> fargate A
www.exampleB.com -> port 80 -> ALB -> fargate B
This is my code:
const lb = elb.ApplicationLoadBalancer.fromLookup(this, 'ALB', {
  loadBalancerArn: 'arn:aws:elasticloadbalancing:ap-northeast-1:678100228232:loadbalancer/app/app-load-balancer/1a97159fcaf4d6c0',
});

const listener = lb.listeners[0];

const targetGroup = listener.addTargets("ECS", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 80,
  targets: [ecsAdminService]
});

targetGroup.configureHealthCheck({
  path: "/",
  port: "8080"
})
Or, if that is impossible, I want to create the target group without the ALB
(then I can attach the target group to the ALB manually).
So I tried this:
const targetGroup = new elb.ApplicationTargetGroup(this, "ECS", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 80,
  targets: [ecsAdminService],
  vpc: cluster.vpc,
});

targetGroup.configureHealthCheck({
  path: "/",
  port: "8080"
})
However, this error comes up:
"Invalid request provided: UpdateService error: The target group with targetGroupArn arn:aws:elasticloadbalancing:ap-northeast-1:678100228133:targetgroup/CdkTr-ECSD2-S1ROICFY9661/f1f3e3b280c2a008 does not have an associated load balancer
I found that I can use the existing ALB like this.
Actually, all I need is the ALB's security group and the listener:
const securityGroup = ec2.SecurityGroup.fromSecurityGroupId(this, "MyAlbSecGroup", "sg-0ea7a62badcc673a3")

const listenerArn = "arn:aws:elasticloadbalancing:ap-northeast-1:67810022242:listener/app/my-alb-id-listener/1a97159fcaf4d6c0/09a32815415beae6";

const existingListener = elb.ApplicationListener.fromApplicationListenerAttributes(this, "SharedListener", {
  listenerArn,
  securityGroup
});

const targetGroup = new elb.ApplicationTargetGroup(this, "ECS", {
  port: 80,
  targets: [ecsAdminService],
  vpc: cluster.vpc,
});

existingListener.addTargetGroups("tg", {
  priority: 1,
  conditions: [
    elb.ListenerCondition.hostHeaders(['my.example.com'])
  ],
  targetGroups: [targetGroup]
})
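The second domain from the question can reuse the same shared listener with another rule; a minimal sketch, assuming a second Fargate service called ecsPublicService (a hypothetical name):

const targetGroupB = new elb.ApplicationTargetGroup(this, "ECS-B", {
  port: 80,
  targets: [ecsPublicService], // hypothetical second Fargate service
  vpc: cluster.vpc,
});

existingListener.addTargetGroups("tg-b", {
  priority: 2, // every rule on the listener needs a unique priority
  conditions: [
    elb.ListenerCondition.hostHeaders(['www.exampleB.com'])
  ],
  targetGroups: [targetGroupB]
})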
I want to expose a few webapps in EKS to the internet in a centrally managed secure way.
In AWS, using an ALB is nice, as it for example allows you to terminate TLS and add authentication using Cognito. (see here)
To provision an ALB and connect it to the application there is the aws-load-balancer-controller.
It works fine, but it requires configuring a new ALB for each and every app/ingress:
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/tags: Environment=test,Project=cognito
  external-dns.alpha.kubernetes.io/hostname: sample.${COK_MY_DOMAIN}
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
  alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  alb.ingress.kubernetes.io/auth-type: cognito
  alb.ingress.kubernetes.io/auth-scope: openid
  alb.ingress.kubernetes.io/auth-session-timeout: '3600'
  alb.ingress.kubernetes.io/auth-session-cookie: AWSELBAuthSessionCookie
  alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
  alb.ingress.kubernetes.io/auth-idp-cognito: '{"UserPoolArn": "$(aws cognito-idp describe-user-pool --user-pool-id $COK_COGNITO_USER_POOL_ID --region $COK_AWS_REGION --query 'UserPool.Arn' --output text)","UserPoolClientId":"${COK_COGNITO_USER_POOL_CLIENT_ID}","UserPoolDomain":"${COK_COGNITO_DOMAIN}.auth.${COK_AWS_REGION}.amazoncognito.com"}'
  alb.ingress.kubernetes.io/certificate-arn: $COK_ACM_CERT_ARN
  alb.ingress.kubernetes.io/target-type: 'ip'
I would love to have one central, well-defined ALB so that the applications do not need to care about this anymore.
My idea was to have a regular nginx-ingress-controller and expose it via a central ALB.
Now the question is: How do I connect the ALB to the nginx-controller?
One way would be to configure the ALB manually and build the target group by hand, which does not feel like a stable solution.
Another way would be to use the aws-load-balancer-controller to expose nginx. In that case, however, nginx does not seem to be able to publish the correct load balancer address, and external-dns will create the wrong DNS records. (Unfortunately, there seems to be no --publish-ingress option in the usual ingress controllers like nginx or traefik.)
Question:
Is there a way to make the nginx-ingress-controller provide the correct address?
Is there maybe an easier way than combining two ingress controllers?
I think I found a good solution.
I set up my environment using Terraform.
After setting up the ALB ingress controller, I can create a suitable ingress object, wait until the ALB is up, use Terraform to extract the address of the ALB, and use publish-status-address to tell nginx to publish exactly that address on all of its ingresses:
resource "kubernetes_ingress_v1" "alb" {
wait_for_load_balancer = true
metadata {
name = "alb"
namespace = "kube-system"
annotations = {
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\": 80}, {\"HTTPS\":443}]"
"alb.ingress.kubernetes.io/ssl-redirect" = "443"
"alb.ingress.kubernetes.io/certificate-arn" = local.cert
"alb.ingress.kubernetes.io/target-type" = "ip"
}
}
spec {
ingress_class_name = "alb"
default_backend {
service {
name = "ing-nginx-ingress-nginx-controller"
port {
name = "http"
}
}
}
}
}
resource "helm_release" "ing-nginx" {
name = "ing-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "kube-system"
set {
name = "controller.service.type"
value = "ClusterIP"
}
set {
name = "controller.publishService.enabled"
value = "false"
}
set {
name = "controller.extraArgs.publish-status-address"
value = kubernetes_ingress_v1.alb.status.0.load_balancer.0.ingress.0.hostname
}
set {
name = "controller.config.use-forwarded-headers"
value = "true"
}
set {
name = "controller.ingressClassResource.default"
value = "true"
}
}
It is a bit weird, as it introduces something like a circular dependency, but the ingress simply waits until nginx is finally up and all is well.
This solution is not exactly the same as the --publish-ingress option, as it will not adapt to any changes of the ALB address. Luckily, I don't expect that address to change, so I'm fine with this solution.
You can achieve this with two ingress controllers. The ALB ingress controller will handle the publicly exposed endpoint and route traffic to the nginx ingress controller as its backend. Then you configure your nginx ingress controller to manage ingress for application traffic.
I am trying to provision an ECS cluster using Terraform along with an ALB. The targets come up as unhealthy. The error code in the console is 502: Health checks failed with these codes: [502]
I checked through the AWS Troubleshooting guide and nothing helped there.
EDIT: I have no services/tasks running on the EC2 container instances. It's a vanilla ECS cluster.
Here is my relevant code for the ALB:
# Target Group declaration
resource "aws_alb_target_group" "lb_target_group_somm" {
  name                 = "${var.alb_name}-default"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = "${var.vpc_id}"
  deregistration_delay = "${var.deregistration_delay}"

  health_check {
    path     = "/"
    port     = 80
    protocol = "HTTP"
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Environment = "${var.environment}"
  }

  depends_on = ["aws_alb.alb"]
}

# ALB Listener with default forward rule
resource "aws_alb_listener" "https_listener" {
  load_balancer_arn = "${aws_alb.alb.id}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    target_group_arn = "${aws_alb_target_group.lb_target_group_somm.arn}"
    type             = "forward"
  }
}

# The ALB has a security group with ingress rules on TCP port 80 and egress rules to anywhere.
# There is a security group rule for the EC2 instances that allows ingress traffic to the ECS cluster from the ALB:
resource "aws_security_group_rule" "alb_to_ecs" {
  type = "ingress"
  /*from_port = 32768 */
  from_port                = 80
  to_port                  = 65535
  protocol                 = "TCP"
  source_security_group_id = "${module.alb.alb_security_group_id}"
  security_group_id        = "${module.ecs_cluster.ecs_instance_security_group_id}"
}
Has anyone hit this error and figured out how to debug/fix it?
It looks like you're trying to register the ECS cluster instances with the ALB target group. This isn't how you're meant to send traffic to an ECS service via an ALB.
Instead you should have your service join the tasks to the target group. This will mean that if you are using host networking then only the instances with the task deployed will be registered. If you are using bridge networking then it will add the ephemeral ports used by your task to your target group (including allowing for there to be multiple targets on a single instance). And if you are using awsvpc networking then it will register the ENIs of every task that the service spins up.
To do this you should use the load_balancer block in the aws_ecs_service resource. An example might look something like this:
resource "aws_ecs_service" "mongo" {
name = "mongodb"
cluster = "${aws_ecs_cluster.foo.id}"
task_definition = "${aws_ecs_task_definition.mongo.arn}"
desired_count = 3
iam_role = "${aws_iam_role.foo.arn}"
load_balancer {
target_group_arn = "${aws_lb_target_group.lb_target_group_somm.arn}"
container_name = "mongo"
container_port = 8080
}
}
If you were using bridge networking, this would mean that the tasks are accessible on the ephemeral port range on the instances, so your security group rule would need to look like this:
resource "aws_security_group_rule" "alb_to_ecs" {
type = "ingress"
from_port = 32768 # ephemeral port range for bridge networking tasks
to_port = 60999 # cat /proc/sys/net/ipv4/ip_local_port_range
protocol = "TCP"
source_security_group_id = "${module.alb.alb_security_group_id}"
security_group_id = "${module.ecs_cluster.ecs_instance_security_group_id}"
}
It looks like http://ecsInstanceIp:80 is not returning HTTP 200 OK. I would check that first. It would be easy to check if the instance is public, but that won't be the case most of the time. Otherwise, I would create an EC2 instance and make a curl request to confirm it.
You may also check the container logs to see if it's logging the health check response.
Hope this helps. Good luck.
Using CloudFormation, is it possible to set the GroupName of a SecurityGroup, or does it have to be provided by CloudFormation?
The final generated name is pretty long and does not look nice, and it is also not easy to use for finding the group from the command line.
I know I can use tags, but I still don't understand why AWS doesn't allow us to set it; I guess it's because they're lazy and don't want to implement a validation.
Regards.
You can set the name for a SecurityGroup by adding a Tag with the key "Name", like this:
"MySecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Allow http",
"SecurityGroupIngress": [
{"IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0"}
],
"Tags": [
{"Key": "Name", "Value": "MySecurityGroup"},
]
}
},
[Updated Jun 26 2017]
As of Apr 28 2017, it is now possible to specify a custom name for an EC2 Security Group using CloudFormation, using the GroupName property on the AWS::EC2::SecurityGroup resource.
Thanks surenyonjan for the comment on this update.
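For example, a minimal sketch using the GroupName property (the group name and the myVPC reference are just placeholders):

mySecurityGroup:
  Type: 'AWS::EC2::SecurityGroup'
  Properties:
    GroupName: my-security-group
    GroupDescription: Allow http
    VpcId: !Ref myVPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0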
[Original answer Dec 23 2016]
No, it is not currently possible to provide a custom name for an EC2 Security Group using CloudFormation.
According to the AWS::EC2::SecurityGroup resource documentation, there is no Name or GroupName property available. You can provide tags using the Tags property as an alternative, as you pointed out.
Recently, some CloudFormation resources have started supporting custom names via a Name property. A full list of supported resources is in the Name Type section of the documentation.
AWS::EC2::SecurityGroup is not one of the resources supporting custom names. As for why, presumably this is because this CloudFormation resource is an earlier implementation, created before custom names were supported by the service.
It's possible that AWS will eventually go back and update all of its existing CloudFormation resources with custom name support at some point, if enough users ask them to do so. If this is an important/critical feature for your use case, I'd recommend contacting their product/support teams with a feature request to help them make it higher priority.
For most resources, the name is simply a special kind of tag with the key Name. To set this field, just add the tag to the template:
mySecurityGroup:
  Type: 'AWS::EC2::SecurityGroup'
  Properties:
    VpcId: !Ref myVPC
    GroupDescription: Security Group
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0
    Tags:
      - Key: Name
        Value: mySecurityGroup