I am reading about AWS elastic network interfaces. Can someone give me a good practical example of creating multiple network interfaces?
Based on what I understood:
Assume there's an application running on EC2. This application has several user functions and admin functions. We create two subnets, one for users and another for admins. We can then attach an additional elastic network interface so that admin users can access the EC2 instance. Is my understanding correct? What are some common use cases for ENIs?
A very common use case for using multiple ENIs is in ECS and awsvpc network mode.
The awsvpc network mode requires an extra ENI on your instance for each task. This:
simplifies container networking and gives you more control over how containerized applications communicate with each other and other services within your VPCs.
awsvpc network mode is also required when using the Fargate launch type, but in that case the ENIs are managed by AWS. If you want to use awsvpc mode on your own container instances (EC2 launch type), you will have to ensure that the instance can accommodate the extra ENIs (one per task) with the proper security groups.
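As a rough sketch of what this looks like in practice (all names, images, and IDs below are placeholders), a task definition can be registered with awsvpc mode and the per-task ENI placement is then supplied when the service is created:

```shell
# Register a task definition that uses awsvpc network mode.
# Each running task gets its own ENI with its own private IP and security groups.
aws ecs register-task-definition \
  --family my-webapp \
  --network-mode awsvpc \
  --requires-compatibilities EC2 \
  --cpu 256 --memory 512 \
  --container-definitions '[{
    "name": "web",
    "image": "nginx:latest",
    "portMappings": [{"containerPort": 80, "protocol": "tcp"}]
  }]'

# When creating the service, the ENI placement (subnets, security groups)
# is given via the network configuration:
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-webapp-svc \
  --task-definition my-webapp \
  --desired-count 2 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}'
```

With the EC2 launch type, each of those two tasks consumes one of the instance's available ENI slots, which is why instance size (or ENI trunking) matters here.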
There are also AWS-managed ENIs that are created in your VPC. Examples are VPC interface endpoints and Lambda VPC integration.
Yes, the use case you describe is one of the common reasons to have multiple ENIs on a single instance.
There was a similar question on ServerFault with some good answers. It’s not specific to AWS, but the reasoning can be applied here as well: https://serverfault.com/questions/129935/is-there-any-reason-to-have-2-nics-on-a-server
Related
I have more than 30 production Windows servers across all AWS regions. I would like to connect to all of these servers from one bastion host. Can anyone advise on a good approach? How can I set up one bastion host to communicate with all servers across different regions and different VPCs?
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or apply patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager, respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with strong security principles built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs across regions, one way to connect them all and access them from your bastion's VPC is to set up an inter-region Transit Gateway. However, among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap. Otherwise, it will be impossible to arrange the routing tables properly.
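As a sketch of the Transit Gateway part (all IDs, account numbers, and regions below are placeholders), the hub is created in the bastion's region, the bastion's VPC is attached to it, and it is then peered with a transit gateway in another region:

```shell
# Create a transit gateway in the bastion's region
aws ec2 create-transit-gateway \
  --description "bastion-hub" \
  --region us-east-1

# Attach the bastion's VPC to it
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --region us-east-1

# Peer it with a transit gateway in another region (inter-region peering)
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --peer-transit-gateway-id tgw-0fedcba9876543210 \
  --peer-account-id 111122223333 \
  --peer-region eu-west-1 \
  --region us-east-1
```

You would repeat the peering attachment for each remote region, accept each attachment on the far side, and add static routes for the remote CIDRs to each transit gateway route table, which is where the non-overlapping CIDR requirement bites.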
Secondly, you need to ensure that access from your bastion is allowed in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all instances in private subnets. From private subnets you can still reach the internet via NAT gateways (or NAT instances) while staying protected from unauthorized external access attempts. And if your bastion is in a private subnet, you can use SSM's port-forwarding capability to establish a session from your local machine. This way you can connect while even your bastion remains secured in a private subnet.
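The port-forwarding session looks roughly like this (the instance ID is a placeholder, and it assumes the bastion runs the SSM agent with the proper IAM instance role):

```shell
# Forward local port 13389 to RDP (3389) on the private bastion;
# an RDP client can then connect to localhost:13389
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["13389"]}'
```

No inbound security group rule on the bastion is needed for this; the session is established outbound through the SSM endpoints.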
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges on your AWS account. So it would be wise to consider what practical problem you are trying to solve (not shared in the question). Afterwards, you can evaluate whether there is an applicable managed service, like SSM, that AWS already provides. Finally, from a security perspective, granting access to all instances from a single bastion might not be best practice: if your bastion is compromised for whatever reason, you have effectively compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of your potential solution.
I'm looking for the best way to get access to a service running in container in ECS cluster "A" from another container running in ECS cluster "B".
I don't want to make any ports public.
Currently I found a way to make it work in the same VPC: by adding the security group of the instances in cluster "B" to an inbound rule of the security group of cluster "A". After that, services from cluster "A" are reachable from containers running in "B" by private IP address.
But that requires this security rule to be added (which is inconvenient) and won't work across regions. Is there a better solution that covers both cases: same VPC and region, and different VPCs and regions?
The most flexible solution for your problem is to rely on some kind of service discovery. The AWS-native option would be Route 53 Service Registry or AWS Cloud Map. The latter is newer and also the one recommended in the docs. Check out these two links:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
You could also go for open-source solutions like Consul.
All this could be overkill if you just need to link two individual containers. In this case you could create a small script that could be deployed as a Lambda that queries the AWS API and retrieves the target info.
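If you do go the Cloud Map route, a rough sketch looks like this (namespace name, ARNs, and IDs are all placeholders):

```shell
# Create a private DNS namespace inside the VPC
aws servicediscovery create-private-dns-namespace \
  --name internal.local \
  --vpc vpc-0123456789abcdef0

# Create a discovery service inside that namespace
aws servicediscovery create-service \
  --name my-api \
  --dns-config 'NamespaceId=ns-0123456789abcdef0,DnsRecords=[{Type=A,TTL=60}]'

# Register the ECS service against it; tasks then become resolvable
# inside the VPC as my-api.internal.local
aws ecs create-service \
  --cluster cluster-a \
  --service-name my-api \
  --task-definition my-api \
  --desired-count 2 \
  --service-registries registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0123456789abcdef0
```

Note that A records like this generally assume awsvpc network mode; with bridge mode you would use SRV records instead.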
Edit: Since you want to expose multiple ports on the same service, you could also use a load balancer and declare multiple target groups for your service. This way you could communicate between containers via the load balancer. Note that this can lead to increased costs because traffic goes through the LB.
Here is an answer that talks about this approach: https://stackoverflow.com/a/57778058/7391331
To avoid adding custom security rules, you could simply set up VPC peering between regions, which allows instances in VPC 1 in region A to reach instances in VPC 2 in region B. This document describes how such connectivity can be established; the same document provides references on how to link VPCs in the same region as well.
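As a sketch, cross-region peering is a three-step process (all IDs, CIDRs, and regions below are placeholders):

```shell
# 1. Request a peering connection from VPC 1 (region A) to VPC 2 (region B)
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0123456789abcdef0 \
  --peer-vpc-id vpc-0fedcba9876543210 \
  --peer-region eu-west-1 \
  --region us-east-1

# 2. Accept it on the other side
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --region eu-west-1

# 3. In each VPC's route table, point the peer's CIDR at the peering connection
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0123456789abcdef0 \
  --region us-east-1
```

Be aware that with cross-region peering you still need security group rules allowing the peer's CIDR range, since security groups cannot be referenced by ID across regions.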
I'm new to aws ecs service and would like to know about the security inside ecs service.
I'm creating an ECS task which includes two Docker containers (A and B). A Spring Boot application is running in container B and works as a gateway to the backend services. No login/security is necessary to access this app from container A, so I can invoke it like http://localhost:8080/middleware/ ... and then one of the servlets generates a SAML token and invokes the backend services by adding this token as an authorization header. All looks good and works fine. However, a couple of developers indicated this design has a flaw: "Even if the ECS service runs in a security group and only an entry-point port is open, it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware can invoke the Spring Boot app running in container B, which is a security breach."
I'm not sure whether what I heard from my co-workers is true. Is the security in AWS not strong enough for containers to communicate over localhost without security between them? If anyone can tell me about this, it would be much appreciated!
Security and compliance is a shared responsibility between AWS and the customer.
In general, AWS is responsible for the security of the overall infrastructure of the cloud, and the customer is responsible for the security of the application, instances and of their data.
For a service like ECS with the EC2 launch type, the workload is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and related management tasks.
As the customer, you would normally secure an EC2-type ECS workload by hardening the instance, using proper security groups, implementing VPC security features (e.g. NACLs and private subnets), and using least-privilege IAM users/roles, while also applying Docker security best practices to secure the containers and images.
Note: Docker itself is a complicated system, and there is no one trick you can use to maintain Docker container security. Rather you have to think broadly about ways to secure your Docker containers, and harden your container environment at multiple levels, including the instance itself. Doing this is the only way to ensure that you can have all the benefits of Docker, without leaving yourself at risk of major security issues.
Some answers to your specific questions and comments:
it is possible for hacker to install malwares onto ec2 instance on which two containers are running, and this malware
If hackers have penetrated your instance and have installed malware, then you have a major security flaw at the instance level, not at the container level. Harden and secure your instances to ensure your perimeter is protected. This is the customer's responsibility.
The security in aws is not strong enough to communicate using localhost without security between containers?
AWS infrastructure is secure and maintains certified compliance with security standards like PCI and HIPAA. You don't need to worry about security at the infrastructure level for this reason; that is AWS's responsibility.
No login/security is necessary to access this app from container A .. so I can invoke like http://localhost:8080/middleware
This is certainly not ideal from a security standpoint, and again it is the customer's responsibility to secure such application endpoints. You should consider implementing basic authentication here; this can be implemented by virtually any web or app server. You could also implement IP whitelisting so API calls can only be accepted from container A's network subnet.
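Once basic authentication is enabled in the Spring Boot app, container A's calls would simply carry credentials on each request, along these lines (username and password are placeholders, and ideally come from a secrets store rather than being hard-coded):

```shell
# Anonymous access should now be rejected by the middleware
curl -i http://localhost:8080/middleware/

# Container A authenticates with HTTP basic auth on each call;
# curl -u adds the Authorization: Basic header for you
curl -i -u svc-user:changeme http://localhost:8080/middleware/
```

This doesn't stop a root-level attacker who can read the credentials off the box, but it does stop any arbitrary process from invoking the gateway unauthenticated.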
For more information on ECS security see
Security in Amazon Elastic Container Service
For more information on AWS infrastructure security see Amazon Web Services: Overview of Security Processes
Yes, your colleagues' observation is correct.
There is a very real possibility of such hacks. However, AWS provides many different ways in which you can secure your own servers and containers.
Using nested security groups in Public Subnet
In this scenario, AWS allows you to grant port access to a particular security group rather than to an IP address / CIDR range. So only resources that have that particular security group attached can access those ports, while no one from outside can reach them.
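A security-group-to-security-group rule looks like this (both group IDs and the port are placeholders):

```shell
# Allow inbound TCP 8080 only from resources carrying the "frontend"
# security group; no CIDR range is opened at all
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaabbbbccccdddd \
  --protocol tcp \
  --port 8080 \
  --source-group sg-0eeeeffff00001111
```

The referenced group acts as a membership label: any instance or ENI that carries sg-0eeeeffff00001111 can connect, and nothing else can.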
Using Virtual Private Cloud
In this scenario, host all your instances and ECS containers in private subnets. Outbound internet access goes via a NAT gateway, and inbound public access is exposed only on the specific port of a public-facing entry point (such as a load balancer). This way your instances are not directly vulnerable to attacks.
I'm trying to work out how I can tighten up the security group which I have assigned to container instances in AWS ECS.
I have a fairly simple Express.js web service running in a service on ECS
I'm running an Application Load Balancer in front of ECS
My Express.js container exposes and listens on port 8080
I'm using dynamic port mapping to allow container instances to run multiple containers on the same port (8080)
I have found that in order for the ALB to forward requests on to containers running in the ECS cluster, the security group assigned to the container instances needs to allow all traffic. Bearing in mind that these instances (for reasons I don't know) are assigned public IPv4 addresses, regardless of the fact that I've configured the cluster to place instances in my private subnets, I'm not comfortable with these instances being essentially wide open just so the ALB can pass requests to them inside the VPC.
I understand that with dynamic port mapping, my containers are not running on one single port on the underlying Docker host. I also understand that there's no single IP that requests may arrive from at the EC2 instances via the ALB, so it seems to me that I can't lock this down if I'm using dynamic port mapping, because there's no single point of origin or destination for the traffic coming into the EC2 instances. I feel like I'm missing something here, but I can't for the life of me work out how to do this.
How should I configure ECS or my EC2 security group so that the container instances accept traffic only from the ALB and not from the rest of the internet?
I've tried to include as much info as is necessary without swamping the question with unnecessary details. If there's any details that would be useful that I've not included, please leave a comment and I'll be happy to provide them.
1) There is no reason why you have to have public IP addresses on your container instances. Just don't set that option at launch; see this page, particularly step "e":
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html?shortFooter=true
If the instances are in a private subnet, then the routing should not allow ingress from the internet anyway...
2) It is possible to lock down access using security groups. Using the security group ID instead of an IP address means that you do not have to know the exact address of the ALB. See this page for instructions on configuring the ALB this way:
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-update-security-groups.html
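Putting the two together, a sketch of the rule for dynamic port mapping (group IDs are placeholders, and the port range assumes the default ephemeral range used by the ECS-optimized AMI):

```shell
# Allow the ECS dynamic host port range only from the ALB's security group.
# 32768-65535 covers the default ephemeral range on recent ECS-optimized AMIs;
# no 0.0.0.0/0 rule is needed at all.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ecs1nstances0000 \
  --protocol tcp \
  --port 32768-65535 \
  --source-group sg-0alb0000000000000
```

Because the source is the ALB's security group rather than an IP, it keeps working no matter which IPs the ALB nodes use, and the dynamic ports stay unreachable from anywhere else.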
We're trying to create a setup of multiple APIs via AWS Elastic Beanstalk (AEB). The reason we have chosen AEB is that it provides seamless deployment and scaling for the applications we deploy, without the need to manually create load balancers (LBs) and scaling rules. We would very much like to keep it this way, as we are planning on launching a (large) number of applications and APIs.
However, we’re facing a number of challenges with AEB.
First and foremost, some of the APIs need to communicate internally, and low latency is a core requirement for us. In order to use internal network communication in AEB we have been "forced" to:
Allocate a VPC in Amazon
Deploy each application to this VPC - each behind their own internal LB
Now, when using the Elastic Beanstalk URLs, the APIs are able to resolve the internal IP of another API's LB, and thus the latency is eliminated and all is good: the APIs can communicate with one another.
However, this spawns another issue for us:
Some of these “internally” allocated APIs (remember, they’re behind an internal LB in a VPC) must also be accessible from the internet.
We still haven't found a way to make the internal LBs internet-accessible (while keeping their ability to also act as internal LBs), so any help on this matter is greatly appreciated.
Each application should be in a subnet within the VPC
Update the network ACL and the ELB security group to allow external access
AWS Elastic Load Balancing Inside of a Virtual Private Cloud
Also, this question on SO contains relevant information: Amazon ELB in VPC
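For reference, the LB scheme in Elastic Beanstalk is controlled by the aws:ec2:vpc namespace and is fixed when the environment is created, so a single LB cannot be both internal and internet-facing; one common pattern is to run a second, internet-facing environment (or endpoint) for the public traffic. A sketch of setting the scheme at creation time (application name, environment name, solution stack, and IDs are all placeholders):

```shell
# Create an environment in the VPC with an internal LB; omitting ELBScheme
# (or setting it to "public") would create an internet-facing LB instead
aws elasticbeanstalk create-environment \
  --application-name my-api \
  --environment-name my-api-env \
  --solution-stack-name "64bit Amazon Linux 2 v5.8.0 running Node.js 18" \
  --option-settings \
    Namespace=aws:ec2:vpc,OptionName=VPCId,Value=vpc-0123456789abcdef0 \
    Namespace=aws:ec2:vpc,OptionName=Subnets,Value=subnet-0123456789abcdef0 \
    Namespace=aws:ec2:vpc,OptionName=ELBScheme,Value=internal
```

Changing ELBScheme on an existing environment typically means rebuilding it, which is worth factoring in before committing to one scheme.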