aws ecs service security

I'm new to AWS ECS and would like to know about security inside the ECS service.
I'm creating an ECS task which includes two Docker containers (A and B). A Spring Boot application runs in container B and works as a gateway to the backend services. No login/security is necessary to access this app from container A ... so I can invoke it like http://localhost:8080/middleware/ ... and then one of the servlets generates a SAML token and invokes the backend services by adding this token as an authorization header. All looks good and works fine. However, a couple of developers indicated this design has a flaw. "Even if the ECS service is running in a SecurityGroup and only an entry-point port is open, it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware can invoke the Spring Boot app running in container B, which is a security breach."
I'm not sure whether what I heard from my co-workers is true. Is the security in AWS not strong enough to communicate over localhost between containers without additional security? If anyone can tell me more about this, it would be much appreciated!

Security and compliance is a shared responsibility between AWS and the customer.
In general, AWS is responsible for the security of the overall infrastructure of the cloud, and the customer is responsible for the security of the application, instances and of their data.
A service like ECS is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and related management tasks.
As the customer you would normally secure an EC2-type ECS workload by hardening the instances, using proper security groups, implementing VPC security features (e.g. NACLs and private subnets), and using least-privilege IAM users/roles, while also applying Docker security best practices to secure the containers and images.
Note: Docker itself is a complicated system, and there is no one trick you can use to maintain Docker container security. Rather you have to think broadly about ways to secure your Docker containers, and harden your container environment at multiple levels, including the instance itself. Doing this is the only way to ensure that you can have all the benefits of Docker, without leaving yourself at risk of major security issues.
Some answers to your specific questions and comments:
it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware
If hackers have penetrated your instance and installed malware, then you have a major security flaw at the instance level, not at the container level. Harden and secure your instances to ensure your perimeter is protected. This is the customer's responsibility.
Is the security in AWS not strong enough to communicate over localhost between containers without additional security?
AWS infrastructure is secure and compliant, and maintains certified compliance with security standards like PCI and HIPAA. You don't need to worry about security at the infrastructure level for this reason; that is AWS's responsibility.
No login/security is necessary to access this app from container A ... so I can invoke it like http://localhost:8080/middleware
This is certainly not ideal security, and again it is the customer's responsibility to secure such application endpoints. You should consider implementing basic authentication here - this can be implemented by virtually any web or app server. You could also implement IP whitelisting so API calls can only be accepted from the container A network subnet.
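For illustration, here is a minimal sketch of what the caller in container A might look like once basic authentication is enabled on the middleware; the credentials are hypothetical placeholders and the Spring Boot app would need a matching auth configuration:

    # Hypothetical caller in container A: invoke the middleware in container B
    # with HTTP Basic Authentication. Credentials here are placeholders.
    import requests

    MIDDLEWARE_URL = "http://localhost:8080/middleware/"

    response = requests.get(
        MIDDLEWARE_URL,
        auth=("svc-container-a", "example-password"),  # hypothetical service account
        timeout=5,
    )
    response.raise_for_status()  # fail loudly if auth is rejected
    print(response.status_code)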
For more information on ECS security see
Security in Amazon Elastic Container Service
For more information on AWS infrastructure security see Amazon Web Services: Overview of Security Processes

Yes, your colleagues' observation is correct.
There is a very real possibility of such attacks. But AWS does provide many different ways in which you can secure your own servers and containers.
Using nested security groups in a public subnet
In this scenario, AWS allows you to grant port access to a particular security group rather than to an IP address / CIDR range. Only resources that carry the referenced security group can access those ports, while no one from outside can reach them.
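A minimal boto3 sketch of such a rule, with placeholder security group IDs: the backend's security group accepts traffic on port 8080 only from members of the app's security group.

    # Sketch (boto3): allow inbound traffic on port 8080 only from resources
    # that carry a specific "app" security group. Both group IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # security group of the backend
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 8080,
                "ToPort": 8080,
                # Reference another security group instead of a CIDR range:
                "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
            }
        ],
    )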
Using a Virtual Private Cloud
In this scenario, host all of your instances and ECS containers in private subnets and expose only the particular ports needed for public access, with outbound internet access going through a NAT gateway. In such a scenario your instances won't be directly vulnerable to attacks.

Related

Bastion Server for all AWS instances

I have more than 30 production Windows servers across all AWS regions. I would like to connect to all servers from one base bastion host. Can anyone please let me know which one is a good choice? How can I set up one bastion host to communicate with all servers across different regions and different VPCs? Kindly, any advice on this would be appreciated.
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with security best practices built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
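As an illustration, a hedged boto3 sketch of what Run Command looks like for Windows instances; the region, instance ID and command are placeholders:

    # Sketch (boto3): run a command on Windows instances via SSM Run Command
    # instead of connecting through a bastion. Instances need the SSM agent
    # and an instance profile that allows Systems Manager.
    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],     # placeholder instance
        DocumentName="AWS-RunPowerShellScript",  # built-in Windows document
        Parameters={"commands": ["Get-HotFix | Select-Object -First 5"]},
    )
    print(response["Command"]["CommandId"])      # poll this ID for results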
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs (across regions), one way to connect them all and access them from your bastion's VPC is to set up an inter-Region Transit Gateway. However, among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise, it will be impossible to arrange the routing tables properly.
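For illustration, a boto3 sketch of requesting an inter-Region Transit Gateway peering attachment; all IDs, the account number and the regions are placeholders, and the peer side still has to accept the attachment:

    # Sketch (boto3): peer a transit gateway in us-east-1 with one in eu-west-1.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    attachment = ec2.create_transit_gateway_peering_attachment(
        TransitGatewayId="tgw-0123456789abcdef0",      # TGW in us-east-1
        PeerTransitGatewayId="tgw-0fedcba9876543210",  # TGW in eu-west-1
        PeerAccountId="111122223333",                  # placeholder account
        PeerRegion="eu-west-1",
    )
    # The attachment must then be accepted in the peer region
    # (accept_transit_gateway_peering_attachment) before routes can be added.
    print(attachment["TransitGatewayPeeringAttachment"]["State"])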
Secondly, you need to ensure that access from your bastion is allowed in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all the instances in private subnets. From private subnets you can always reach the internet via NAT gateways (or NAT instances) while staying protected from unauthorized external access attempts. Therefore, if your bastion is in a private subnet, you can use SSM's capability to establish a port-forwarding session to your local machine. In this way you can connect while your bastion stays secured in a private subnet.
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges to your AWS account. So it would be wise to consider what practical problem you are trying to solve (not shared in the question). Afterwards, you can evaluate whether there is an applicable managed service, like SSM, already provided by AWS. In the end, from a security perspective, granting access to all instances from a single bastion might not be best practice: if you consider scenarios in which your bastion is compromised for whatever reason, you have basically compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of your potential solution.

Purpose of using multiple AWS Elastic network interfaces

I am reading about AWS elastic network interfaces. Can someone give me a good practical example of creating multiple network interfaces?
Based on what I understood:
Assume there's an application in EC2. This application has several user functions and admin functions. We create 2 subnets, one for users and another for admins. We can create another elastic network interface for admin users to access the EC2 instance. Is my understanding correct? What are some common use cases for ENIs?
A very common use case for using multiple ENIs is in ECS and awsvpc network mode.
The awsvpc network mode attaches an extra ENI to your instance for each task. This:
simplifies container networking and gives you more control over how containerized applications communicate with each other and other services within your VPCs.
awsvpc network mode is also required when using the Fargate launch type, but there it is managed by AWS. If you want to use awsvpc mode on your own container instances (EC2 launch type), you will have to ensure that your instance type can accommodate the extra ENIs (one per task) with proper security groups.
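As a sketch, this is roughly what creating an awsvpc-mode service looks like with boto3; the cluster, task definition, subnet and security group values are placeholders, and the task definition is assumed to be registered with networkMode set to awsvpc:

    # Sketch (boto3): an ECS service whose tasks use awsvpc mode, so each
    # task gets its own ENI with its own security groups.
    import boto3

    ecs = boto3.client("ecs")

    ecs.create_service(
        cluster="my-cluster",
        serviceName="my-awsvpc-service",
        taskDefinition="my-task:1",        # registered with networkMode="awsvpc"
        desiredCount=2,
        launchType="EC2",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",  # required for the EC2 launch type
            }
        },
    )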
There are also AWS managed ENIs that are created in your VPC. Examples are VPC Interface endpoints and lambda VPC integration.
Yes, the use case you describe is one of the common reasons to have multiple ENIs on a single instance.
There was a similar question on ServerFault with some good answers. It’s not specific to AWS, but the reasoning can be applied here as well: https://serverfault.com/questions/129935/is-there-any-reason-to-have-2-nics-on-a-server

How to expose APIs endpoints from private AWS ALB

We have several microservices on AWS ECS. We have a single ALB with a different target group for each microservice. We want to expose some endpoints externally, while some endpoints are just for internal communication.
The problem is that if we put our load balancer in a public subnet, then it means we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of all security concerns like DDoS protection etc.
What possible approaches could we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running 2 ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running 2 ALBs will be the least admin overhead and probably the cheapest overall.
Check out AWS WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance (a sketch using the current WAFv2 API follows the list):
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Associate your ALB with the WAF ACL.
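For illustration, the same intent expressed with boto3 against the WAFv2 API (the steps above describe the older console flow), here phrased as "block requests to the private paths unless they come from the allowed IP set"; all names, ranges and ARNs are placeholders:

    # Sketch (boto3/WAFv2): default-allow ACL that blocks /internal/* paths
    # unless the caller is in an allowed IP set. All values are placeholders.
    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    ip_set = wafv2.create_ip_set(
        Name="internal-callers",
        Scope="REGIONAL",            # REGIONAL scope is what ALBs use
        IPAddressVersion="IPV4",
        Addresses=["10.0.0.0/16"],   # allowed source range
    )

    acl = wafv2.create_web_acl(
        Name="private-endpoint-acl",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},  # public endpoints stay reachable
        Rules=[
            {
                "Name": "block-private-paths-from-outside",
                "Priority": 0,
                "Statement": {
                    "AndStatement": {
                        "Statements": [
                            {
                                # Request targets a private path...
                                "ByteMatchStatement": {
                                    "SearchString": b"/internal/",
                                    "FieldToMatch": {"UriPath": {}},
                                    "TextTransformations": [
                                        {"Priority": 0, "Type": "NONE"}
                                    ],
                                    "PositionalConstraint": "STARTS_WITH",
                                }
                            },
                            {
                                # ...and does NOT come from the allowed IP set.
                                "NotStatement": {
                                    "Statement": {
                                        "IPSetReferenceStatement": {
                                            "ARN": ip_set["Summary"]["ARN"]
                                        }
                                    }
                                }
                            },
                        ]
                    }
                },
                "Action": {"Block": {}},
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "blockPrivatePaths",
                },
            }
        ],
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "privateEndpointAcl",
        },
    )

    # Attach the ACL to the (placeholder) ALB:
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",
    )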
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will contain the internal ALB's IP addresses. Because the private IP addresses of an ALB are not static, you will need to set up a scheduled CloudWatch rule that triggers this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
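For reference, a minimal sketch of what such a Lambda function boils down to; the linked function also deregisters stale IPs, which is omitted here, and the DNS name and target group ARN are placeholders:

    # Sketch: resolve the internal ALB's current IP addresses via DNS and
    # register them in the NLB's IP-based target group.
    import socket
    import boto3

    ALB_DNS_NAME = "internal-my-alb-123456789.us-east-1.elb.amazonaws.com"
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/alb-ips/0123456789abcdef"

    def handler(event, context):
        elbv2 = boto3.client("elbv2")
        _, _, alb_ips = socket.gethostbyname_ex(ALB_DNS_NAME)  # current ALB IPs
        elbv2.register_targets(
            TargetGroupArn=TARGET_GROUP_ARN,
            Targets=[{"Id": ip, "Port": 80} for ip in alb_ips],
        )
        return {"registered": alb_ips}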
Both ALB and NLB are scalable and protected from DDoS by AWS; AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support registering multiple target groups per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it.
Two LBs, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that needed to be exposed, we added a proxy to the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway and proxying back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
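As a sketch, consumers can then discover the registered tasks either through plain DNS or through the Cloud Map API; the namespace and service names here are hypothetical:

    # Sketch: resolving ECS tasks registered in Cloud Map. The "orders"
    # service and "internal.local" namespace are placeholders.
    import socket
    import boto3

    # DNS-based discovery (private DNS namespace):
    _, _, task_ips = socket.gethostbyname_ex("orders.internal.local")

    # API-based discovery via the Cloud Map API:
    sd = boto3.client("servicediscovery")
    resp = sd.discover_instances(NamespaceName="internal.local", ServiceName="orders")
    api_ips = [i["Attributes"]["AWS_INSTANCE_IPV4"] for i in resp["Instances"]]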
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use case, though, might be their doc on ELB security groups. These are, as you may expect, security groups that are applied at the ELB level rather than the instance level.
Using security groups, you can specify who has access to which endpoints.

Is ssl termination at AWS load balancer ELB secure?

We have a web application running on an EC2 instance.
We have added an AWS ELB to route all requests for the application through the load balancer.
An SSL certificate has been applied to the ELB.
I am worried about whether the HTTP communication between the ELB and the EC2 instance is secure,
or
should I use HTTPS communication between the ELB and the EC2 instance?
Does AWS guarantee the security of HTTP communication between the ELB and the EC2 instance?
I answered a similar question once but would like to highlight some points:
Use a VPC with a proper security group setup (a must) and network ACLs (optional).
Be careful with how your private keys are distributed. AWS makes this easy by storing them safely in its system, so they never need to be used on your servers again. It is probably better to use self-signed certificates on your servers (reducing the chance of leaking your real private keys).
SSL is cheap these days (compute-wise).
It all depends on your security requirements, regulations, and how much complexity overhead you are willing to take on.
AWS does provide some guarantees (see the network section) against spoofing / retrieval of information by other tenants, but the safe assumption is that a multi-tenant public cloud environment is not 100% hygienic and you should encrypt.
A single-tenant instance (as suggested by @andreimarinescu) will not help, as the attack vector discussed here is the network between the ELB (a shared environment) and your instance. (It might, however, help against Xen zero-days.)
Long answer with a short summary - encrypt.
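As an illustration of the "encrypt" recommendation, a boto3 sketch (shown with an application load balancer target group; the VPC ID is a placeholder): the target group forwards to the instances over HTTPS, so the hop behind the load balancer is encrypted too, with the instances serving a certificate, possibly self-signed, on port 443.

    # Sketch (boto3): a target group that forwards to backends over HTTPS,
    # encrypting the load-balancer-to-instance hop as well.
    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.create_target_group(
        Name="backend-https",
        Protocol="HTTPS",               # re-encrypt behind the load balancer
        Port=443,
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC
        HealthCheckProtocol="HTTPS",
    )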
Absolute control over security and cloud deployments are in my opinion two things that don't mix very well.
Regarding the security of traffic between the ELB and the EC2 instances, you should probably deploy your resources in a VPC in order to add a new layer of isolation. AWS doesn't offer any security guarantees.
If the data transferred is too sensitive, you might also want to look at deploying in a dedicated data center where you can have greater control over the networking aspects. You might also look at single-tenant instances on EC2, since you're probably sharing your physical resources with other EC2 customers.
That being said, there's one more aspect to take into account: SSL termination is quite an expensive job, so terminating SSL at the ELB level allows your backend instances to focus their resources on actually fulfilling requests. But this also impacts the ELB: it will scale automatically, but it has to do so faster, and you might see increased latency during spikes of traffic.

Amazon Elastic Beanstalk internal and internet access

We’re trying to create a setup of multiple APIs via the Amazon AWS Elastic Beanstalk (AEB) component. The reason we have chosen AEB is because it provides seamless deployment and scaling for the applications we deploy, without the need to manually create Load Balancers (LB) and scaling rules. We would very much like to keep it this way as we are planning on launching a (large) number of applications and APIs.
However, we’re facing a number of challenges with AEB.
First and foremost, some of the APIs need to communicate internally, and low latency is a core requirement for us. In order to utilize internal network communication in AEB we have been "forced" to:
Allocate a VPC in Amazon
Deploy each application to this VPC - each behind its own internal LB
Now, when using the Elastic Beanstalk URLs, the APIs are able to resolve the internal IP of another API's LB, and thus the latency issue is eliminated and all is good - the APIs can communicate with one another.
However, this spawns another issue for us:
Some of these “internally” allocated APIs (remember, they’re behind an internal LB in a VPC) must also be accessible from the internet.
We still haven't found a way to make the internal LBs internet-accessible (while keeping their ability to also act as internal LBs), so any help on this matter is greatly appreciated.
Each application should be in a subnet within the VPC.
Update the network ACL and the ELB security group to allow external access.
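For what it's worth, the scheme of a Beanstalk environment's load balancer is controlled through an option setting in the aws:ec2:vpc namespace; a boto3 sketch with a placeholder environment name (note that changing the scheme may require rebuilding the environment's load balancer):

    # Sketch (boto3): set the Elastic Beanstalk load balancer scheme.
    # "public" makes it internet-facing; "internal" keeps it VPC-internal.
    import boto3

    eb = boto3.client("elasticbeanstalk")

    eb.update_environment(
        EnvironmentName="my-api-env",  # placeholder environment
        OptionSettings=[
            {
                "Namespace": "aws:ec2:vpc",
                "OptionName": "ELBScheme",
                "Value": "public",
            }
        ],
    )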
AWS Elastic Load Balancing Inside of a Virtual Private Cloud
Also, this question on SO contains relevant information: Amazon ELB in VPC