Is SSL termination at the AWS load balancer (ELB) secure?

We have a web application running on an EC2 instance.
We have added an AWS ELB to route all requests to the application through the load balancer.
An SSL certificate has been applied to the ELB.
I am worried about whether the HTTP communication between the ELB and the EC2 instance is secure,
or
should I use HTTPS communication between the ELB and the EC2 instance?
Does AWS guarantee the security of HTTP communication between the ELB and the EC2 instance?

I answered a similar question once but would like to highlight some points:
Use a VPC with a proper security group setup (a must) and network ACLs (optional).
Pay attention to how your private keys are distributed. AWS makes this easy by storing the key safely in their system and never using it again on your servers. It is probably better to use self-signed certificates on your servers (reducing the chance of leaking your real private keys).
SSL is cheap these days (compute-wise).
It all depends on your security requirements, regulations and how much complexity overhead you are willing to take on.
AWS does provide some guarantees (see the network section) against spoofing / retrieval of information by other tenants, but the safe assumption is that a multi-tenant public cloud environment is not 100% hygienic, and you should encrypt.
A single-tenant instance (as suggested by #andreimarinescu) will not help, as the attack vector discussed here is the network between the ELB (a shared environment) and your instance (it might, however, help against Xen zero-days).
Long answer with a short summary: encrypt. A rough sketch of what that looks like follows.
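To make the "encrypt" advice concrete, here is a minimal boto3 sketch, assuming an existing VPC, an ACM certificate and an Application Load Balancer (all IDs and ARNs below are placeholders): the public TLS connection terminates at the load balancer, but the target group still speaks HTTPS to the instances, so the ELB-to-instance leg is encrypted as well; a self-signed certificate on the instances is enough for that leg.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Target group that talks HTTPS to the instances, so traffic between
    # the load balancer and EC2 is encrypted (a self-signed certificate
    # on the instances is sufficient for this leg).
    tg = elbv2.create_target_group(
        Name="app-backend-https",
        Protocol="HTTPS",
        Port=443,
        VpcId="vpc-0123456789abcdef0",            # placeholder
        HealthCheckProtocol="HTTPS",
        HealthCheckPath="/health",
        TargetType="instance",
    )["TargetGroups"][0]

    # Public listener: the "real" certificate (e.g. from ACM) lives only
    # on the load balancer and never touches the instances.
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/my-lb/abc",  # placeholder
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111111111111:certificate/placeholder"}],
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )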

Absolute control over security and cloud deployments are in my opinion two things that don't mix very well.
Regarding the security of traffic between the ELB and the EC2 instances, you should probably deploy your resources in a VPC in order to add a new layer of isolation. AWS doesn't offer any security guarantees.
If the data transferred is too sensitive, you might also want to look at deploying in a dedicated data-center where you can have greater control over the networking aspects. Also, you might want to look at single-tenant instances on EC2, since you're probably sharing your physical resources with other EC2 customers.
That being said, there's one more aspect you should take into account: SSL termination is quite an expensive job, so terminating SSL at the ELB level will allow your backend instances to focus their resources on actually fulfilling the requests. But this will also impact the ELB: it scales automatically, but it will have to do so faster, and you might see increased latency while it does so during spikes of traffic.

Related

AWS ECS service security

I'm new to the AWS ECS service and would like to know about security inside an ECS service.
I'm creating an ECS task which includes two Docker containers (A and B). A Spring Boot application is running in container B and works as a gateway to the backend services. No login/security is necessary to access this app from container A, so I can invoke something like http://localhost:8080/middleware/ ... and then one of the servlets generates a SAML token and invokes the backend services by adding this token as an authorization header. All looks good and works fine. However, a couple of developers indicated this design has a flaw: "Even if the ECS service is running in a security group and only an entry-point port is open, it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware can invoke the Spring Boot app running in container B, which is a security breach."
I'm not sure whether what I heard from my co-workers is true. Is the security in AWS not strong enough for containers to communicate over localhost without any security between them? Any insight would be very much appreciated!
Security and compliance is a shared responsibility between AWS and the customer.
In general, AWS is responsible for the security of the overall infrastructure of the cloud, and the customer is responsible for the security of the application, the instances and their data.
A service like ECS is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and related management tasks.
As the customer, you would normally secure an EC2-type ECS workload by hardening the instance, using proper security groups, implementing VPC security features (e.g. NACLs and private subnets), and using least-privilege IAM users/roles, while also applying Docker security best practices to secure the containers and images.
Note: Docker itself is a complicated system, and there is no single trick you can use to maintain Docker container security. Rather, you have to think broadly about ways to secure your Docker containers and harden your container environment at multiple levels, including the instance itself. Doing this is the only way to ensure that you can have all the benefits of Docker without leaving yourself at risk of major security issues.
Some answers to your specific questions and comments:
it is possible for a hacker to install malware onto the EC2 instance on which the two containers are running, and this malware
If hackers have penetrated your instance and installed malware, then you have a major security flaw at the instance level, not at the container level. Harden and secure your instances to ensure your perimeter is protected. This is the customer's responsibility.
Is the security in AWS not strong enough for containers to communicate over localhost without any security between them?
AWS infrastructure is secure and compliant, and maintains certified compliance with security standards like PCI and HIPAA. You don't need to worry about security at the infrastructure level for this reason; that is AWS's responsibility.
No login/security is necessary to access this app from container A, so I can invoke something like http://localhost:8080/middleware
This is certainly not ideal security, and again it is the customer's responsibility to secure such application endpoints. You should consider implementing basic authentication here; this can be implemented by virtually any web or app server. You could also implement IP whitelisting so that API calls are only accepted from container A's network subnet.
For more information on ECS security, see Security in Amazon Elastic Container Service.
For more information on AWS infrastructure security, see Amazon Web Services: Overview of Security Processes.
Yes, your colleagues' observation is correct.
There is a very real possibility of such hacks, but AWS does provide many different ways in which you can secure your own servers and containers.
Using nested security groups in a public subnet
In this scenario, AWS allows port access to be granted to a particular security group rather than to an IP address / CIDR range. Only resources that have that security group attached can access those ports, while no one from outside can reach them (see the sketch at the end of this answer).
Using a Virtual Private Cloud
In this scenario, host all of your instances and ECS containers in a private subnet and allow access to particular ports only via a NAT gateway for public access; that way your instances won't be directly vulnerable to attacks.
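As a rough sketch of the "nested security groups" point above (boto3 is just one way to do it; the same rule can be added in the console, and both security group IDs below are placeholders), this grants the application port only to the load balancer's security group rather than to any IP range:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    APP_SG = "sg-0aaaabbbbccccdddd"   # placeholder: attached to the ECS container instances
    LB_SG  = "sg-0eeeeffff00001111"   # placeholder: attached to the load balancer

    # Allow the application port only from the load balancer's security
    # group, not from an IP range, so nothing outside that group can
    # reach the instances directly.
    ec2.authorize_security_group_ingress(
        GroupId=APP_SG,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8080,
            "ToPort": 8080,
            "UserIdGroupPairs": [{"GroupId": LB_SG}],
        }],
    )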

What does mid-tier load balancing mean?

I was going through the article https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance#how-different-is-eureka-from-aws-elb about Eureka when I came across this term. I'm also quite confused about what the paragraph means (EC2 Classic and AWS security groups). It said:
AWS Elastic Load Balancer is a load balancing solution for edge services exposed to end-user web traffic. Eureka fills the need for mid-tier load balancing. While you can theoretically put your mid-tier services behind the AWS ELB, in EC2 classic you expose them to the outside world and thereby losing all the usefulness of the AWS security groups.
I'm completely new to microservice architecture and am reading articles from whatever sources I can find. Any help would be appreciated!
A mid-tier load balancer is a load balancer that isn't exposed to the Internet, but instead is intended to distribute internally-generated traffic between components in your stack.
An example would be the "order placement" (micro)service verifying prices by sending requests to the "catalog item details" (micro)service -- you need a mid-tier load balancer in front of the multiple nodes providing the "catalog item details" service so that the request is routed to a healthy endpoint for that service, without "order placement" needing to be responsible for somehow finding a healthy "catalog item details" endpoint on its own.
Eureka was first committed to Github in 2012. Back then, much of EC2 was still running inside "EC2 Classic" -- in simple terms, this is the old way EC2 worked, before VPC. It was a much more primitive environment compared to today.
With EC2-Classic, your instances run in a single, flat network that you share with other customers. With Amazon VPC, your instances run in a virtual private cloud (VPC) that's logically isolated to your AWS account.
The EC2-Classic platform was introduced in the original release of Amazon EC2. If you created your AWS account after 2013-12-04, it does not support EC2-Classic, so you must launch your Amazon EC2 instances in a VPC.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-classic-platform.html
EC2 Classic supported security groups for securing access to EC2 instances, but Elastic Load Balancers (ELB) inside EC2 Classic did not.
VPC became generally available in August, 2011.
Elastic Load Balancer -- originally the only type, this type was later rebranded as "ELB Classic," and is not recommended for new environments -- was released for VPC in November, 2011 but only in the Internet-facing variety. Before this, as noted above, ELB only worked in EC2 Classic, only faced the Internet, and accepted HTTP and HTTPS traffic from everywhere. You couldn't control access with security groups.
ELB Classic learned a new trick in June 2012, with the release of Internal Elastic Load Balancers -- accessible only from services inside the VPC. These could be used securely for mid-tier, but they were very limited because they could not make routing decisions based on hostname or path. ELB Classic was a very barebones load balancer with very little flexibility. You'd essentially need a different balancer for each service. One common configuration was to use HAProxy behind ELB Classic to fill in some of the feature gaps.
AWS didn't have a solid, managed, mid-tier load balancer offering until August, 2016, when the new Application Load Balancer was announced -- with the ability to send traffic to different backend target groups based on pattern matching in the request path sent to the balancer... and with support for deploying in an Internet-facing or internal-only scheme.
In April 2017, Application Load Balancers were enhanced with the ability to also select a back-end target group by pattern-matching the HTTP Host header, in addition to the path matching available before.
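As a hedged illustration of what such a mid-tier setup can look like today (all names, subnets and ARNs below are placeholders, and the listener and target group are assumed to already exist), here is a boto3 sketch of an internal-only Application Load Balancer plus a host- and path-based routing rule:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # An internal (not Internet-facing) ALB, reachable only inside the VPC.
    alb = elbv2.create_load_balancer(
        Name="mid-tier-alb",
        Subnets=["subnet-0aaa0aaa0aaa0aaa0", "subnet-0bbb0bbb0bbb0bbb0"],  # private subnets
        SecurityGroups=["sg-0aaaabbbbccccdddd"],
        Scheme="internal",
        Type="application",
    )["LoadBalancers"][0]
    print("Internal balancer DNS name:", alb["DNSName"])

    # Route catalog requests to the "catalog item details" target group;
    # the listener and target group ARNs are placeholders created elsewhere.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/mid-tier-alb/abc/def",
        Priority=10,
        Conditions=[
            {"Field": "host-header", "Values": ["catalog.internal.example.com"]},
            {"Field": "path-pattern", "Values": ["/catalog/*"]},
        ],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/catalog-details/123",
        }],
    )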
At this point, VPC and ALB fill many (but, in some cases, not all) of the needs that seem to have driven the development of Eureka.
I would assume that this middle tier is something that can act as a barrier or protection in front of your AWS ELB. Let's use the example of people trying to run an SQL injection attack or spamming your AWS ELB. Also, security groups in AWS allow you to specify which protocols and sources can reach the ALB or any other resources in AWS when you create them. So, for example, you can set up a security group that only accepts traffic from your middle-tier server as an additional level of security.
Hope this helps with a better understanding.

AWS architecture with limited elastic IPs

Right now our small-ish business has 3 clients who we have assigned to 3 elastic IPs in Amazon Web Services (AWS).
If we restart an instance no one loses access because the IPs are the same after restart.
Is there a way to handle expanding to 3 more clients without having things fall apart if there's a restart?
I'm trying to request more IPs, but they suggest it depends on our architecture, and I'm not sure what architecture they're looking for (or why some architectures would warrant more Elastic IPs than others, or whether this is just an unchecked suggestion box).
I realize this is a very basic question, but googling around only gets me uninformative docs from the vendor's mouth.
EDIT:
There is a lot of content on the interwebs (mostly old) about AWS supporting IPv6, but that functionality appears to be deprecated.
You can request more EIPs in the short run. Up to 5 EIPs are free, depending on your account. You should also consider using name-based URLs and assigning each of your clients to a subdomain, for example:
clientA.example.com
clientB.example.com
clientC.example.com
This way you will not need an additional IP for every client you add. Depending on your traffic, one EC2 instance can serve many clients, and as you scale you can put multiple EC2 instances behind an AWS Elastic Load Balancer, which will scale to serve exponentially more clients.
If the clients want to keep their servers separate and can pay for them, you can request as many EIPs as you need. You should also consider separating the database into one database instance per client, which is probably what clients desire more than separation of IPs.
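If you go the subdomain route, here is a minimal boto3 sketch of creating those records in Route 53, assuming a hosted zone for example.com and a load balancer DNS name (both values below are placeholders):

    import boto3

    route53 = boto3.client("route53")

    HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"                      # placeholder zone for example.com
    LB_DNS_NAME = "my-lb-1234567890.us-east-1.elb.amazonaws.com"  # placeholder ELB DNS name

    # One CNAME per client, all pointing at the same load balancer.
    changes = [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": f"{client}.example.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": LB_DNS_NAME}],
        },
    } for client in ("clienta", "clientb", "clientc")]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Comment": "Per-client subdomains", "Changes": changes},
    )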
For IPv6, a quick workaround would be to use a front-end ELB that supports both IPv6 and IPv4.
If you use Elastic IPs from a VPC, you get 5 per region for an AWS account. See Amazon VPC Limits.
So, you can go to the console, select VPC, click on Elastic IPs and create one. Once created, assign it to the relevant instance.
So, at least for now, you can solve the problem if you are not bothered about the region.
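The same console steps can be scripted; a small boto3 sketch (the instance ID is a placeholder) that allocates a VPC Elastic IP and associates it with an instance:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allocate a new VPC Elastic IP and attach it to an existing instance.
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",          # placeholder
        AllocationId=allocation["AllocationId"],
    )
    print("Allocated and attached", allocation["PublicIp"])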

AWS alternative to DNS failover?

I recently started reading about and playing around with AWS. I have a particular interest in the different high-availability architectures that can be achieved using the platform. Specifically, I am looking for a reliable poor man's solution that can be implemented using the least number of servers.
So far, I am satisfied with solutions for the main HA concerns: load balancing, redundancy, auto recovery, scalability ...
The only sticking point I have is with failover solutions.
Using an ELB might seem great; however, ELB actually uses DNS balancing under the hood. See Is AWS's Elastic Load Balancer a single point of failure?. Also, from a Netflix blog post, Lessons Netflix Learned from the AWS Outage:
This is because the ELB is a two tier load balancing scheme. The first tier consists of basic DNS based round robin load balancing. This gets a client to an ELB endpoint in the cloud that is in one of the zones that your ELB is configured to use.
Now, I have learned DNS failover is not an ideal solution, as others have pointed out, mainly because of unpredictable DNS caching. See for example: Why is DNS failover not recommended?.
Other than ELBs, it seems to me that most AWS HA architectures rely on DNS failover using Route 53.
Finally, the floating IP / Elastic IP (EIP) strategy has popped up in a very small number of articles, such as Leveraging Multiple IP Addresses for Virtual IP Address Fail-over, and I'm having a hard time figuring out whether this is a viable solution for production systems. Also, all the examples I came across implemented it using a set of active-passive instances. It seems like a waste to have a passive instance for every active one just to achieve this.
In light of this, I would like to ask you what is a faster and more reliable way to perform failover?
More specifically, please discuss how to perform failover without using DNS for the following 2 setups:
2 active-active EC2 instances in separate AZs. Active-active, because this is a budget setup where we can't afford to have an instance sitting around.
1 ELB with 2 EC2 instances in region A, and 1 ELB with 2 EC2 instances in region B. Again, both regions are active and serving traffic. How do you handle the failover from one ELB to the other?
You'll understand ELB better by playing with it, if you are the inquisitive type, as I am.
"1" ELB provisioned in 2 availability zones is billed as 1 but deployed as 2. There are 2 IP addresses assigned, one to each balancer, and 2 A records auto-created, one for each, with very short TTLs.
Each of these 2 balancers will forward traffic to the instance in its same AZ, or you can enable cross-AZ load balancing (and you should, if you only have 1 server instance in each AZ).
These IP addresses do not change often and though it stands to reason that ELBs fail like anything else, I have maybe 30 of them and have never knowingly had a dead one on my hands, presumably because the ELB infrastructure will replace a dead instance and change the DNS without your intervention.
For 2 regions, you have little choice other than using DNS at some level. Latency-based routing from Route 53 can send people to the closest site in normal operations and route all traffic to the other site in the event of an outage of an entire region (as detected by Route 53 health checks), but this is somewhat more likely to encounter issues with DNS caching when an entire region is unavailable.
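For completeness, here is a hedged boto3 sketch of the Route 53 side of that idea, using failover record sets backed by a health check (latency-based records look similar, with a Region field instead of Failover). The zone ID, hostnames and ELB DNS names below are placeholders:

    import boto3

    route53 = boto3.client("route53")

    HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"   # placeholder

    # Health check against region A's endpoint (placeholder hostname/path).
    hc = route53.create_health_check(
        CallerReference="region-a-health-check-1",
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "a.example.com",
            "Port": 443,
            "ResourcePath": "/health",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )

    # PRIMARY points at region A's ELB, SECONDARY at region B's; Route 53
    # serves the secondary record only while the health check is failing.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
                "SetIdentifier": "region-a", "Failover": "PRIMARY",
                "HealthCheckId": hc["HealthCheck"]["Id"],
                "ResourceRecords": [{"Value": "elb-a.us-east-1.elb.amazonaws.com"}],
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
                "SetIdentifier": "region-b", "Failover": "SECONDARY",
                "ResourceRecords": [{"Value": "elb-b.eu-west-1.elb.amazonaws.com"}],
            }},
        ]},
    )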
Of course, part of the active/passive dilemma in a single region using Elastic IP is easily remedied with HAProxy on both app servers. It's an http request router and load balancer like ELB, but with a broader set of features. The code is so tight that you can likely run it on your app servers with negligible CPU consumption. The instance with the EIP would then balance traffic between its local app server and the peer. Across regions, HAProxy behind ELB could forward traffic to a mate in a remote region, if the local region is up but for whatever reason the application can't serve requests from the local region. (I have used such a setup to increase availability of external services, by bouncing the request to a remote AWS region when the direct Internet path from the local region is not working.)

Using EC2 only when under load or incase of failure

Is it possible to have most of our server hardware outside of EC2, but with some kind of load balancer to divert traffic to EC2 when there's load that our servers can't handle, or as a backup in case those servers go down?
For example, we have a physical server serving our service (let's ignore database consistency for the moment), but there's a huge spike due to some coolness: can we spin up some EC2 instances and divert traffic to them? This is much like Amazon's own auto scaling.
And also, if our server hardware dies for some reason (gremlins eat the power cables, for example), can we route all our traffic over to EC2 instances?
Thanks
Yes, you can, but you will have to write some code. AWS has command line tools for doing EC2 / Auto Scaling / S3 tasks with simple commands in bash, as well as other interfaces and SDKs, such as Boto for Python.
You can find them here: http://aws.amazon.com/code/
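For example, a minimal sketch using boto3 (the current successor to the Boto library mentioned above) that spins up extra capacity during a spike and tears it down afterwards; the AMI, subnet and security group IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch two extra instances when your own hardware is saturated.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",           # AMI pre-baked with your application
        InstanceType="t3.medium",
        MinCount=2,
        MaxCount=2,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )
    instance_ids = [i["InstanceId"] for i in response["Instances"]]

    # ...register these with your load balancer or DNS, and once the
    # spike is over, terminate them so you stop paying for them.
    ec2.terminate_instances(InstanceIds=instance_ids)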
Each EC2 instance has a public network interface associated with it. Use a DNS CNAME record to "switch" your site traffic to the EC2 instance. If you need to load-balance across multiple machines, you can use round-robin DNS, or start an ELB and put any number of EC2 instances behind it.
EC2 infrastructure is extremely easy to scale. Deploying your application on top of EC2 is a whole other matter. It could be trivial -- or insanely complicated.