AWS outbound rule for ECS hosts in VPC

I'm trying to set up my ECS hosts so that the outbound rules do not allow the whole world, very similar to this issue. The ideal way would be to point directly at the NAT gateway, but according to Amazon, that is not possible:
Note that security groups cannot be directly associated with a NAT gateway. Instead, customers can use EC2 instance security groups outbound rules to control authorized network destinations or leverage a network ACL associated with the NAT gateway’s subnet to implement subnet-level controls over NAT gateway traffic.
How do I setup a proxy or ACL for the ECS hosts?

This reference architecture should be helpful to you. It contains a CloudFormation template that automatically sets this up for you, so you can learn from the configuration it contains: https://github.com/awslabs/ecs-refarch-cloudformation
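If you want to script the outbound-rule approach that Amazon's note describes, here is a minimal boto3 sketch: it drops the default allow-all egress rule and permits HTTPS only to a known CIDR. The group ID and CIDR are hypothetical placeholders; the right destinations depend on your setup.

```python
# Lock down a security group's egress: remove the default allow-all rule,
# then allow outbound HTTPS to a specific CIDR only. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # hypothetical ECS host security group

# Remove the default allow-all egress rule.
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Allow outbound HTTPS only toward a known destination range.
ec2.authorize_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "VPC-internal HTTPS"}],
    }],
)
```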

Related

AWS Security Group, if open to all ingress ports, can it be externally accessed?

In order to use a Glue job that writes data into an RDS instance in a VPC, I need to have a self-referencing security group (done) and I need a security group that is also open to all ingress ports. If I do this, does that allow external access? I haven't really found a concrete answer in the docs.
Sounds like you need a VPC endpoint for Glue.
If your security group is open to all ingress ports, your endpoint exposes a public IP address, and you haven't implemented any custom network ACL / firewall rules, you will most likely make the RDS instance publicly accessible.
EDIT: Saw clarification in comments re: self-referencing sg. If you are only allowing traffic from the sg, you will not be exposing your RDS instance to the public. You will only be exposing your RDS instance to clients within the VPC.
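As a rough illustration, a self-referencing rule looks like this with boto3 (the group ID is a placeholder): it opens all ports, but only to members of the same security group, which is what Glue needs without exposing anything publicly.

```python
# Add a self-referencing ingress rule: all ports, but only from members
# of the same security group. SG_ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"

ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",                        # "all ingress ports"
        "UserIdGroupPairs": [{"GroupId": SG_ID}],  # the self-reference
    }],
)
```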

How to expose API endpoints from a private AWS ALB

We have several microservices on AWS ECS, with a single ALB that has a different target group for each microservice. We want to expose some endpoints externally, while other endpoints are just for internal communication.
The problem is that if we put our load balancer in a public subnet, then we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of all security concerns like DDoS protection.
What possible approaches can we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running 2 ALBs for this. Sure, it will cost you more (though not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running 2 ALBs will be the least admin and probably the cheapest overall.
Check out AWS WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance:
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Assign your ALB to the WAF ACL (a sketch of the association call follows below).
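Most of these steps happen in the console or via the WAF rule APIs; the final association is a single call. A minimal boto3 sketch using the newer wafv2 API (both ARNs below are placeholders):

```python
# Attach an existing WAF Web ACL to an ALB (wafv2 API; the ARNs are
# hypothetical placeholders). The ACL and its rules are assumed to have
# been created already, e.g. following the steps above.
import boto3

wafv2 = boto3.client("wafv2")
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/EXAMPLE-ID",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/EXAMPLE-ID",
)
```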
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB) with one IP-based target group.
The IP-based target group will hold the internal ALB's IP addresses. Since the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well; a simplified sketch of the idea follows the link):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
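In essence, the function resolves the ALB's current private IPs from its network interfaces and syncs them into the NLB's target group. A stripped-down sketch (the ALB name and target group ARN are placeholders, and deregistration of stale IPs is omitted):

```python
# Look up the internal ALB's current private IPs via its ENIs, then
# register them in the NLB's IP-based target group.
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

ALB_NAME = "my-internal-alb"  # hypothetical
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/alb-ips/EXAMPLE-ID"

# ALB network interfaces carry a description of the form "ELB app/<name>/<id>".
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": [f"ELB app/{ALB_NAME}/*"]}]
)
alb_ips = [eni["PrivateIpAddress"] for eni in enis["NetworkInterfaces"]]

elbv2.register_targets(
    TargetGroupArn=TG_ARN,
    Targets=[{"Id": ip, "Port": 80} for ip in alb_ips],
)
```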
Both ALB and NLB are scalable and protected from DDoS by AWS. AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it:
Two LBs, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that needed to be exposed, we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway, which proxies back to the API.
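As a quick illustration, an HTTP API in API Gateway can be quick-created with a default route that proxies everything to a backend URL. The name and target below are placeholder assumptions, and a private ALB would additionally need a VPC link integration rather than a plain URL:

```python
# Quick-create an HTTP API that proxies all requests to a backend endpoint.
# Name and Target are hypothetical placeholders.
import boto3

apigw = boto3.client("apigatewayv2")
api = apigw.create_api(
    Name="public-api",
    ProtocolType="HTTP",
    Target="https://backend.example.com",  # placeholder backend URL
)
print(api["ApiEndpoint"])  # the public endpoint API Gateway serves
```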
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
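To make the integration concrete, here is a hedged boto3 sketch of wiring an ECS service to Cloud Map; the namespace ID, cluster, and service names are hypothetical placeholders:

```python
# Enable Cloud Map service discovery for an ECS service (sketch).
import boto3

sd = boto3.client("servicediscovery")
ecs = boto3.client("ecs")

# Assumes a private DNS namespace already exists in the VPC;
# create_private_dns_namespace is asynchronous, so the ID is hardcoded here.
NAMESPACE_ID = "ns-EXAMPLE0123456789"  # hypothetical

# A discovery service that keeps one A record per running task.
svc = sd.create_service(
    Name="orders",  # hypothetical microservice name
    DnsConfig={"NamespaceId": NAMESPACE_ID, "DnsRecords": [{"Type": "A", "TTL": 60}]},
)

# Point the ECS service at the registry; ECS then registers and deregisters
# task IPs automatically. (A records require awsvpc network mode; the
# network configuration is omitted for brevity.)
ecs.create_service(
    cluster="my-cluster",
    serviceName="orders",
    taskDefinition="orders:1",
    desiredCount=2,
    serviceRegistries=[{"registryArn": svc["Service"]["Arn"]}],
)
```

Tasks are then resolvable inside the VPC under a name derived from the service and namespace you created.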
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use case, though, might be their doc on ELB security groups. These are, as you may expect, security groups that are applied at the ELB level rather than the instance level.
Using security groups, you can specify who has access to which endpoints.
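For instance, a minimal boto3 sketch of a load balancer security group that only admits HTTPS from a known address range (the VPC ID and CIDR are placeholders):

```python
# Create a security group for the ELB and allow HTTPS in from one CIDR only.
# VpcId and CidrIp are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="alb-restricted",
    Description="HTTPS from a known range only",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}],
    }],
)
```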

AWS private subnet security group egress whitelist for AWS services?

I have some EC2 instances in a private subnet that need to access DynamoDB and KMS. Since VPC endpoints do not support either of these at this time, I will need to grant internet access via a NAT gateway.
I want to restrict the security group egress rules to only these 2 services, but the only info I have found to date is http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Has anyone else been able to restrict the security group egress rules to just AWS services?
From what I can see, the EC2 service entries are a subset of the AMAZON service entries, so I'm guessing that if I were to include all the CIDR blocks that do not appear in the EC2 list, that would leave me with all the other AWS service IPs?
I know these are dynamic and would therefore need to subscribe and handle updates.
Thanks in advance
Pat
Ideally you would use the AWS service DNS names (for example dynamodb.amazonaws.com) in the security group, but security groups don't support DNS names. So you have 2 options:
Allow all outbound access
Use a proxy like Squid. Add a route in your private subnet to send internet traffic to the proxy, and connect the proxy to the internet through the NAT. In the proxy you can add rules to allow traffic only to the desired services and an explicit DENY for all other traffic.
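If you do pursue the IP-range allowlist the question mentions, the published ip-ranges.json file can be filtered programmatically. A minimal sketch (standard library only; the service and region values are examples, and the file changes over time, so this needs periodic re-runs):

```python
# Derive DynamoDB CIDR blocks for one region from AWS's published ranges.
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
with urllib.request.urlopen(URL) as resp:
    ranges = json.load(resp)

dynamodb_cidrs = sorted(
    p["ip_prefix"]
    for p in ranges["prefixes"]
    if p["service"] == "DYNAMODB" and p["region"] == "us-east-1"  # example region
)
print(dynamodb_cidrs)  # candidate CIDRs for egress rules
```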

AWS: Share "NAT Gateway" among VPCs

I have one VPC where I configured a NAT Gateway. The other VPCs do not have any "public subnet" or IGW. I would like to share a single NAT Gateway among many VPCs.
I tried to configure the routing table, but it does not allow specifying a NAT Gateway from a different VPC.
As a possible solution, I installed an HTTP/S proxy in the VPC with the IGW and configured proxy settings on every instance in the other VPCs. It worked, but I would like to use a NAT Gateway due to easier management.
Is it possible to make this kind of configuration at AWS?
There are a few VPCs and I do not want to add a NAT Gateway to each VPC.
Zdenko
You can't share a NAT Gateway among multiple VPCs.
To access a resource in another VPC without crossing over the Internet and back requires VPC peering or another type of VPC-to-VPC VPN, and these arrangements do not allow transit traffic, for very good reasons. Hence:
You cannot route traffic to a NAT gateway through a VPC peering connection, a VPN connection, or AWS Direct Connect. A NAT gateway cannot be used by resources on the other side of these connections.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html#nat-gateway-other-services
The instances in the originating VPC are, by definition, "on the other side of" one of the listed interconnection arrangements.
AWS Transit Gateway now provides an option to do what you wish, although you will want to consider the costs involved -- there are hourly and data charges. There is a reference architecture published in which multiple VPCs share a NAT gateway without allowing traffic between the VPCs:
https://aws.amazon.com/blogs/networking-and-content-delivery/creating-a-single-internet-exit-point-from-multiple-vpcs-using-aws-transit-gateway/
You basically have 3 options:
Connect to a shared VPC (typically in a shared "network" account) that holds the NAT, via VPC peering. There are no additional costs for the VPC peering, but it is cumbersome to set up if you have a lot of accounts.
The same, but using Transit Gateway. A peering attachment costs almost the same as a single NAT gateway, so this will only save costs if you would otherwise use multiple NAT gateways for high bandwidth.
Set up a shared VPC (e.g. in an infrastructure account) that holds the NAT, then share the private subnets via AWS Resource Access Manager (RAM) with the accounts that need outgoing access (see the sketch below). This has the additional benefit that you have a single place where you allocate VPC IP ranges, and not every account needs to bother with setting up a full VPC. More details in AWS VPC sharing best practices. This setup avoids both the Transit Gateway costs and the burden of setting up VPC peering, but it needs more careful planning to keep things isolated (and likely not everything in the same VPC).
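For the third option, the subnet sharing itself is a single RAM call. A hedged boto3 sketch (the share name, subnet ARN, and account ID are all placeholders):

```python
# Share a private subnet from a central "network" account with another
# account via AWS RAM. All identifiers are hypothetical placeholders.
import boto3

ram = boto3.client("ram")
ram.create_resource_share(
    name="shared-private-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0",
    ],
    principals=["222222222222"],    # account that needs outgoing access
    allowExternalPrincipals=False,  # keep the share within the organization
)
```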

Does the container instance need to have an outbound HTTPS rule and either a route table with an IGW or a NAT?

For an EC2 instance to register with an AWS ECS cluster, does it need to belong to a security group with an outbound rule allowing at least HTTPS, and be able to connect outside the VPC? (in addition to having the ecsInstanceRole IAM role)
For an EC2 instance to be managed by ECS, the ECS Agent on the instance must be able to talk to the ECS service endpoints. This means that either it needs to be in a VPC with internet access (via either an internet gateway or NAT, with appropriate security group outbound rules) or be configured to use an HTTP proxy.
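In security group terms, that minimal outbound rule looks like the following boto3 sketch (the group ID is a placeholder; the destination could be narrowed further, e.g. to the relevant AWS ranges or VPC endpoints):

```python
# Allow outbound HTTPS so the ECS agent can reach the ECS service endpoints
# (through an IGW or NAT). The security group ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_egress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "ECS agent to ECS endpoints"}],
    }],
)
```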