Securing AWS ECS Cluster

We are trying to create an ECS cluster, but we noticed that the ECS agent on the container instances is unable to register. We allowed TCP 443 (in both the ACL and the security group), but it still did not register. We then opened up everything (All Traffic, both TCP and UDP), and the agent was able to register.
We tried to investigate what is being used with VPC Flow Logs, but the agent appears to use a random port and a different IP each time, which makes it almost impossible for us to secure our network. We searched extensively for documentation on exactly what the ECS agent needs in order to run properly, to no avail.
What we would like to achieve is to secure our network while still allowing the agent to function. Perhaps a better question is: which ports does the ecs-agent use, and to/from which IPs should we allow that traffic?
In just one hour the flow log shows IPs from all over the world trying to hit the servers, so it doesn't make sense for us not to prioritize this.

The ECS agent needs outbound internet access to register itself with the cluster.
Here are some steps to try:
Check the security group on the EC2 instances to ensure it allows outbound traffic (a minimal egress sketch follows this list).
Check the VPC configuration where the ECS instances are running and ensure they have internet access.
Check the VPC route tables to ensure they route destination 0.0.0.0/0 to your Internet Gateway.
Check your network ACL rules and ensure your outbound rules match your inbound rules - which has bitten me a few times!
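As an illustration of the first check, here is a minimal sketch using the AWS SDK for JavaScript v3 that opens outbound HTTPS (TCP 443) on the instance security group, which is what the agent primarily needs to reach the ECS endpoints; the region and group ID are placeholder assumptions.

    import { EC2Client, AuthorizeSecurityGroupEgressCommand } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({ region: "us-east-1" }); // placeholder region

    // Allow the ECS agent to make outbound HTTPS calls (cluster registration, image pulls, etc.).
    async function allowOutboundHttps(groupId: string): Promise<void> {
      await ec2.send(
        new AuthorizeSecurityGroupEgressCommand({
          GroupId: groupId,
          IpPermissions: [
            { IpProtocol: "tcp", FromPort: 443, ToPort: 443, IpRanges: [{ CidrIp: "0.0.0.0/0" }] },
          ],
        })
      );
    }

    allowOutboundHttps("sg-0123456789abcdef0").catch(console.error); // placeholder group ID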

Related

My AWS instance is stuck and cannot connect using SSH, what should I do

My AWS instance is stuck and I cannot connect using an SSH client, what should I do?
My hosted websites are also not working. I do not want to restart my AWS instance through the AWS console.
Please help me in this regard.
Thanks in advance.
A recommendation for troubleshooting these kinds of problems:
Always generate logs.
Always use the CloudWatch agent to collect specific logs from your instances.
Check this link to learn more about it: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/send_logs_to_cwl.html
About your problem:
I think you tried to connect to it via SSH too many times without closing the previous connections.
Your instance may be out of memory; in that situation you must restart your instance.
You can get the latest screenshot of your instance using the options in the console.
Follow this link for more information about troubleshooting:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
Some suggestions from that link:
Check your security group rules. You need a security group rule that allows inbound traffic from your public IPv4 address on the proper port.
[EC2-VPC] Check the route table for the subnet. You need a route that sends all traffic destined outside the VPC to the internet gateway for the VPC.
[EC2-VPC] Check the network access control list (ACL) for the subnet. The network ACLs must allow inbound and outbound traffic from your local IP address on the proper port. The default network ACL allows all inbound and outbound traffic.
If your computer is on a corporate network, ask your network administrator whether the internal firewall allows inbound and outbound traffic from your computer on port 22 (for Linux instances) or port 3389 (for Windows instances).
And more...
If the issue still continues, create an AMI (image) of the instance and launch a new instance from that AMI. Then try to SSH in; if everything goes smoothly, terminate the old instance.
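If you prefer to script that procedure, here is a rough sketch with the AWS SDK for JavaScript v3; the instance ID, image name, and instance type are placeholder assumptions, not values from the question.

    import {
      EC2Client,
      CreateImageCommand,
      RunInstancesCommand,
      waitUntilImageAvailable,
    } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({ region: "us-east-1" }); // placeholder region

    async function cloneInstance(instanceId: string): Promise<void> {
      // 1. Create an AMI from the stuck instance.
      const image = await ec2.send(
        new CreateImageCommand({ InstanceId: instanceId, Name: `rescue-${Date.now()}` })
      );

      // 2. Wait until the AMI is available before launching from it.
      await waitUntilImageAvailable(
        { client: ec2, maxWaitTime: 900 },
        { ImageIds: [image.ImageId!] }
      );

      // 3. Launch a replacement instance from the new AMI.
      await ec2.send(
        new RunInstancesCommand({
          ImageId: image.ImageId,
          InstanceType: "t3.micro", // placeholder instance type
          MinCount: 1,
          MaxCount: 1,
        })
      );
    }

    cloneInstance("i-0123456789abcdef0").catch(console.error); // placeholder instance ID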

Securing Amazon ECS cluster instances with dynamic port mapping behind an ALB

I'm trying to work out how I can tighten up the security group which I have assigned to container instances in AWS ECS.
I have a fairly simple Express.js web service running in a service on ECS
I'm running an Application Load Balancer in front of ECS
My Express.js container exposes and listens on port 8080
I'm using dynamic port mapping to allow container instances to run multiple containers on the same port (8080)
I have found that in order for the ALB to forward requests on to containers running in the ECS cluster, I need the security group assigned to the container instances to allow all traffic. Bearing in mind that these instances (for reasons I don't know) are assigned public IPv4 addresses, even though I've configured the cluster to place instances in my private subnets, I'm not comfortable with these instances being essentially wide open just so the ALB can pass requests to them inside the VPC.
I understand that with dynamic port mapping, my containers are not running on one single port on the underlying Docker host. I also understand that there's no single IP from which requests may arrive at the EC2 instances from the ALB, so it seems to me that I can't lock this down when using dynamic port mapping, because there's no single point of origin or destination for the traffic coming into the EC2 instances. I feel like I'm missing something here, but I can't for the life of me work out how to do this.
How should I configure ECS or my EC2 security group so that access to the container instances is only allowed from the ALB and not from the rest of the internet?
I've tried to include as much info as is necessary without swamping the question with unnecessary details. If there's any details that would be useful that I've not included, please leave a comment and I'll be happy to provide them.
1) There is no reason why you have to have public IP addresses on your container instances. Just don't set the option at launch; see this page, particularly step "e":
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html?shortFooter=true
If the instances are in a private subnet, then the routing should not allow ingress anyway...
2) It is possible to lock down the security using security groups. Referencing the "security group id" instead of the IP address means that you do not have to know the exact address of the ALB. See this page for instructions on configuring the ALB's security groups this way:
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-update-security-groups.html
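To illustrate point 2, here is a minimal sketch (AWS SDK for JavaScript v3) of an ingress rule that admits only the ALB's security group on the ephemeral port range that dynamic port mapping uses; both group IDs, the region, and the exact port range are assumptions to adjust for your setup.

    import { EC2Client, AuthorizeSecurityGroupIngressCommand } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({ region: "us-east-1" }); // placeholder region

    // Allow traffic to the container instances only from the ALB's security group,
    // across the ephemeral ports that dynamically mapped containers listen on.
    ec2
      .send(
        new AuthorizeSecurityGroupIngressCommand({
          GroupId: "sg-0aaaaaaaaaaaaaaaa", // placeholder: container instance security group
          IpPermissions: [
            {
              IpProtocol: "tcp",
              FromPort: 32768, // assumed ephemeral range used by dynamic port mapping
              ToPort: 65535,
              UserIdGroupPairs: [{ GroupId: "sg-0bbbbbbbbbbbbbbbb" }], // placeholder: ALB security group
            },
          ],
        })
      )
      .catch(console.error);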

Amazon EC2 Security Group with Host / Dynamic IP / DNS

I am seeking some guidance on the best approach to take with EC2 security groups and services with dynamic IPs. I want to make use of services such as SendGrid, Elastic Cloud, etc., which all use dynamic IPs over ports 80/443. However, access to ports 80/443 is closed with the exception of whitelisted IPs. So far the solutions I have found are:
A cron job to ping the service, take the IPs, and update the EC2 security group via the EC2 API.
Create a new EC2 instance to act as a proxy with ports 80/443 open. The new server communicates with SendGrid/Elastic Cloud, inspects responses, and returns the relevant parts to the main server.
Are there any other better solutions?
Firstly, please bear in mind that security groups in AWS are stateful, meaning that, for example, if you open ports 80 and 443 to all destinations (0.0.0.0/0) in your outbound rules, your EC2 machines will be able to connect to remote hosts and get the response back even if there are no inbound rules for a given IP.
However, this approach works only if the connection is always initiated by your EC2 instance and the remote services are just responding. If you require connections to your EC2 instances to be initiated from the outside, you do need to specify inbound rules in the security group(s). If you know the CIDR block of their public IP addresses, that can solve the problem, as you can specify it as the source in an inbound security group rule. If you don't know the IP range of the hosts that are going to reach your machines, then access restriction at the network level is not feasible and you need to implement some form of authorisation of the requester.
P.S. Please also bear in mind that there is a soft default limit of 50 inbound or outbound rules per security group.
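If you do end up needing inbound rules and go with the cron-style idea from the question, a rough sketch could look like the following; the hostname, port, and group ID are placeholder assumptions, and note that re-authorizing an already-existing rule fails with a duplicate-permission error, so stale rules still need separate cleanup.

    import { promises as dns } from "node:dns";
    import { EC2Client, AuthorizeSecurityGroupIngressCommand } from "@aws-sdk/client-ec2";

    const ec2 = new EC2Client({ region: "us-east-1" }); // placeholder region

    // Resolve the service's current IPs and whitelist them for inbound HTTPS.
    async function whitelistService(hostname: string, groupId: string): Promise<void> {
      const addresses = await dns.resolve4(hostname);
      await ec2.send(
        new AuthorizeSecurityGroupIngressCommand({
          GroupId: groupId,
          IpPermissions: [
            {
              IpProtocol: "tcp",
              FromPort: 443,
              ToPort: 443,
              IpRanges: addresses.map((ip) => ({ CidrIp: `${ip}/32` })),
            },
          ],
        })
      );
    }

    // Placeholder hostname and security group ID.
    whitelistService("api.example-service.com", "sg-0123456789abcdef0").catch(console.error);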

How to configure AWS internet facing LB SecurityGroup for internal and external requests

I'm having a hard time figuring out how to set the correct SecurityGroup rules for my LoadBalancer. I have made a diagram to try and illustrate this problem; please take a look at the image below:
I have an internet-facing LoadBalancer ("Service A LoadBalancer" in the diagram) that is requested from "inhouse" and from one of our ECS services ("Task B" in the diagram). For the inhouse requests, I can configure a SecurityGroup rule for "Service A LoadBalancer" that allows incoming requests to the LoadBalancer on port 80 from the CIDR for our inhouse IPs. No problem there. But for the other ECS service, Task B, how would I go about adding a rule (for "Service A SecurityGroup" in the diagram) that only allows requests from Task B (or only from tasks in the ECS cluster)? Since it is an internet-facing load balancer, requests are made from the public IP of the EC2 machine, not the private one (as far as I can tell?).
I can obviously make a rule that allows requests on port 80 from 0.0.0.0/0, and that would work, but that's far from restrictive enough. And since it is an internet-facing LoadBalancer, adding a rule that allows requests from the "Cluster SecurityGroup" (in the diagram) will not cut it. I assume this is because the LB cannot infer from which SecurityGroup the request originated, as it is internet-facing, and that this would work if it were an internal LoadBalancer. But I cannot use an internal LoadBalancer, as it is also requested from outside AWS (inhouse).
Any help would be appreciated.
Thanks
Frederik
We solve this by running separate internet-facing and internal load balancers. You can have multiple ELBs or ALBs (ELBv2) for the same cluster. Assuming your ECS cluster runs on an IP range such as 10.X.X.X, you can open 10.X.0.0/16 for internal access on the internal ELB. Just make sure the ECS cluster SG is also open to the ELB. Task B can reach Task A over the internal ELB, assuming you use the internal ELB's DNS name when making the request. If you hit the IP behind a public DNS name, it will always be a public request.
However, you may want to think long term about whether you really need a public ELB at all. Instead of IP restrictions, the next step is usually to run a VPN such as OpenVPN so you can connect into the VPC and access everything on the private network. We generally only run internet-facing ELBs if we truly want something on the internet, such as for external customers.
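As a small usage sketch of the "use the internal DNS name" point, Task B would call Service A through the internal load balancer's DNS name rather than the public one; the hostname below is a placeholder.

    // Placeholder hostname: the *internal* load balancer's DNS name, so the
    // request stays on the private network instead of going out to the internet.
    const INTERNAL_ALB = "http://internal-service-a-123456789.eu-west-1.elb.amazonaws.com";

    async function callServiceA(): Promise<void> {
      const response = await fetch(`${INTERNAL_ALB}/health`); // global fetch (Node 18+)
      console.log(response.status);
    }

    callServiceA().catch(console.error);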

How to connect to AWS elasticache?

Could someone give a step-by-step procedure for connecting to ElastiCache?
I'm trying to connect to a Redis ElastiCache node from inside my EC2 instance (SSHed in). I'm getting Connection Timed Out errors each time, and I can't figure out what's wrong with how I've configured my AWS settings.
They are in different VPCs, but in my ElastiCache VPC I have a custom TCP inbound rule on port 6379 that accepts from anywhere. The two VPCs also share an active peering connection that I set up. What more am I supposed to do?
EDIT:
I am trying to connect via the redis-cli command. I SSHed in because I was originally trying to connect via the node-redis module, since my EC2 instance hosts a Node server. So officially my two attempts are: 1. a scripted module and 2. the redis-cli command provided in the AWS documentation.
As far as I can tell, I have also set up the route tables correctly according to this: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#route-tables-vpc-peering
You cannot connect to ElastiCache from outside its VPC. It's a weird design decision on AWS's part, and although it's not documented well, it is documented here:
Amazon ElastiCache Nodes, deployed within a VPC, can never be accessed from the Internet or from EC2 Instances outside the VPC.
You can set your security groups to allow connections from everywhere, and it will look like it worked, but it won't matter or let you actually connect from outside the VPC (also a weird design decision).
In your Redis cluster properties you have a reference to its security group. Copy its ID.
On your EC2 instance you also have a security group. Edit this security group and add the ID of the Redis security group as the destination (where you would normally put a CIDR) in an outbound rule on port 6379.
This way the two security groups are linked and the connection can be established.
Two things we might forget when trying to connect to ElastiCache:
1. Configuring an inbound TCP rule to allow incoming requests on port 6379
2. Adding the EC2 security group to the ElastiCache instance
The second one helped me.
Reference to (2) : https://www.youtube.com/watch?v=fxjsxtcgDoc&ab_channel=HendyIrawanSocialEnterprise
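For point (2), a minimal sketch of attaching a VPC security group to the cache cluster with the AWS SDK for JavaScript v3; the cluster ID, security group ID, and region are placeholder assumptions.

    import { ElastiCacheClient, ModifyCacheClusterCommand } from "@aws-sdk/client-elasticache";

    const elasticache = new ElastiCacheClient({ region: "us-east-1" }); // placeholder region

    // Attach the VPC security group that permits the EC2 instance's traffic
    // to the ElastiCache cluster.
    elasticache
      .send(
        new ModifyCacheClusterCommand({
          CacheClusterId: "my-redis-cluster",         // placeholder cluster ID
          SecurityGroupIds: ["sg-0123456789abcdef0"], // placeholder VPC security group ID
          ApplyImmediately: true,
        })
      )
      .catch(console.error);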
Here are step-by-step instructions for connecting to a Redis ElastiCache cluster from an EC2 instance located in the same VPC as ElastiCache:
Connect to an ElastiCache Redis Cluster's Node
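Since the question mentions the node-redis module, here is a minimal connection sketch (node-redis v4) run from an EC2 instance in the same VPC; the endpoint is a placeholder for your cluster's primary endpoint from the ElastiCache console.

    import { createClient } from "redis";

    // Placeholder: substitute your cluster's primary endpoint.
    const client = createClient({
      url: "redis://my-cluster.abc123.0001.use1.cache.amazonaws.com:6379",
    });

    client.on("error", (err) => console.error("Redis client error", err));

    async function main(): Promise<void> {
      await client.connect(); // times out if security groups or VPC routing block port 6379
      await client.set("hello", "world");
      console.log(await client.get("hello")); // "world"
      await client.quit();
    }

    main().catch(console.error);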