Setting Up SSL on AWS Between ELB and Relay Servers

I have three relay server instances that are currently sitting behind an ELB. The clients send their emails via Postfix to the ELB, and the traffic is load balanced between the three relay servers before going outbound. I want to set up encryption between the ELB and the relay servers, so that the ELB listens on port 25 but forwards traffic on port 587, for example. How do I go about setting this up in AWS? Thanks for the help!
Best,
Ahmed

Related

Ephemeral ports on AWS Web server NACL Rule

I am new to AWS and have been experimenting with NACL rules. I went through Amazon VPC NACL default rules evaluation order to understand how NACL rules work.
I've created a test EC2 instance (with NGINX) in a public subnet with an Elastic IP. I have added the instance to the default security group, which allows all traffic on all ports. I initially configured the NACL to block all traffic. This worked as expected, because I was not able to SSH into the instance or reach it over HTTPS. My goal is to let 0.0.0.0/0 reach my instance on HTTP port 80.
Understanding that NACLs are stateless, I added communication to/from 0.0.0.0/0 on all TCP ports. This worked fine.
Now, I thought of restricting inbound and outbound to port 80. However, with this, I wasn't able to access the test NGINX page.
I noticed that if I change the outbound rule to allow all ports, I am able to access the NGINX page. I am not sure why this is happening.
Here's the new config:
Do I need to add ephemeral ports as well? https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-ephemeral-ports
Yes. You need to open ephemeral ports 1024-65535 (assuming a Linux server is being used).
Your server will receive requests on 80 (or 443) but send the response from one of those ephemeral ports. Blocking outbound traffic on the ephemeral ports blocks that response.
You do not need to open 80 (or 443) outbound for your web server to work. Your web server would only need port 80 (or 443) outbound if it needs to make an HTTP request to another web server, which it may well need to do, for example to call a third-party API.
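If it helps, here is a minimal sketch of that outbound rule using boto3 (the NACL ID and rule number below are placeholders you'd replace with your own):

```python
import boto3

ec2 = boto3.client("ec2")  # assumes credentials/region are already configured

# Outbound (egress) rule allowing the Linux ephemeral port range, so the
# web server's responses to port-80 requests can leave the subnet.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=110,                        # placeholder rule number
    Protocol="6",                          # TCP
    RuleAction="allow",
    Egress=True,                           # outbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 1024, "To": 65535},
)
```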

AWS ELB backend authentication

I was reading about the backend authentication option in AWS ELB.
What it mentions is that an instance public key (PEM-encoded) is to be configured on the ELB.
What I could not understand is: what is this key or certificate?
Since it is optional, will the traffic still be encrypted between the ELB and the EC2 instances if port 443 is used?
There are no details on how to actually do this.
Basically I want end-to-end encryption from the user to the ELB, and from the ELB to EC2.
Basically what this is saying is that if you want encryption in transit for the entire journey, you will need to install an SSL certificate on your EC2 instance. The journey will look like this:
client ---(HTTPS)--> load balancer ---(HTTPS)--> EC2 host
You will need to either purchase an SSL certificate or use a free option such as Certbot on your server.
Once you have the certificate, you will need to configure it for the web server software you are running. Below are some instructions for common web servers:
Apache
Nginx
IIS
Tomcat
Ensure that your target group is configured for HTTPS traffic on port 443, so that the load balancer forwards requests to your backend over HTTPS.
If the load balancer to EC2 hop is not encrypted (plain HTTP), the client's traffic to the load balancer will still be encrypted, but after that it will be forwarded as plain HTTP.
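As a rough sketch of that target group setup with boto3 (the name and VPC ID are placeholders, and this assumes your instances already serve HTTPS on 443):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group that talks HTTPS to the backend, so traffic stays encrypted
# between the load balancer and the EC2 instances.
tg = elbv2.create_target_group(
    Name="backend-https",               # placeholder name
    Protocol="HTTPS",
    Port=443,
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
    HealthCheckProtocol="HTTPS",
    HealthCheckPath="/",
)
print(tg["TargetGroups"][0]["TargetGroupArn"])
```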

Adding a secure HTTPS certificate to AWS EC2 Instance

I have an application running on an AWS EC2 instance with the domain's nameservers on AWS as well. I have an A record with the public IP.
I've created a certificate with ACM and also created an ELB load balancer. My domain still doesn't show HTTPS in front of it.
Can anyone provide some help? Many thanks
Have you tried this?
First, you need to open the HTTPS port (443). To do that, go to https://console.aws.amazon.com/ec2/, click the Security Groups link on the left, then create a new security group that also allows HTTPS. Then just update the security group of the running instance, or create a new instance using that group.
After these steps, the EC2 side is finished, and the rest is an application problem.
Credit to : https://stackoverflow.com/a/6253484/8131036
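For reference, a boto3 sketch of that security group step (the group name, description, and VPC ID are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group that allows HTTP and HTTPS in from anywhere.
sg = ec2.create_security_group(
    GroupName="web-https",                # placeholder name
    Description="Allow HTTP and HTTPS",
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```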
The solution is pretty simple.
First of all, edit the listeners on your ELB and do the following:
443 (HTTPS) => 80 (HTTP), and apply the ACM certificate.
What this essentially does is tell the ELB to listen on port 443 (HTTPS), terminate SSL there, and then forward traffic internally over port 80 (HTTP), the port the instance is listening on.
You can also add a port 80 (HTTP) listener that forwards to port 80 (HTTP) (recommended; then set up your application to redirect all users to HTTPS). You can read more about ELB and setting up listeners here: Create a Classic Load Balancer with an HTTPS Listener.
The second thing you need to do is update Route 53 to point to the ELB.
For example, an A record with an ALIAS target of your ELB's DNS name: ascisolutions.com. A ALIAS <your-elb-dns-name>. You can read more about it here: Routing Traffic to an ELB Load Balancer.
Let me know if you have more questions in the comments section and I'll do my best to reply.
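If you'd rather do it programmatically, here is a sketch of that listener for a Classic Load Balancer using boto3 (the load balancer name and certificate ARN are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# HTTPS listener on 443 that terminates SSL with the ACM certificate and
# forwards to the instances over plain HTTP on port 80.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-classic-elb",  # placeholder ELB name
    Listeners=[
        {
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            "SSLCertificateId": "arn:aws:acm:us-west-2:123456789012:certificate/placeholder",
        }
    ],
)
```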
You cannot install an ACM certificate on an EC2 instance directly, but you can install it on your load balancer and have the load balancer terminate SSL.
Create a target group and register your EC2 instances using port 80.
In your ELB, set up listeners for both port 80 and 443. You'll need to add your ACM cert to your HTTPS listener (port 443). Note that the certificate needs to be issued in the same region as your ELB.
The ELB does not handle redirecting insecure traffic to HTTPS; if needed, you will need to update your application to redirect HTTP to HTTPS.
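A rough boto3 sketch of those steps for an Application Load Balancer (every name, ID, and ARN below is a placeholder):

```python
import boto3

elbv2 = boto3.client("elbv2")
lb_arn = "arn:aws:elasticloadbalancing:us-west-2:123456789012:loadbalancer/app/placeholder/1234567890abcdef"

# Target group on port 80, then register the instance.
tg = elbv2.create_target_group(
    Name="web-http",                    # placeholder
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",      # placeholder
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}],  # placeholder instance ID
)

# HTTPS listener with the ACM certificate, forwarding to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-west-2:123456789012:certificate/placeholder"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Plain HTTP listener on 80 (your application should redirect users to HTTPS).
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```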

Issues with EC2 Load balancer, SNI, multiple SSL domains on the same server

I am having issues setting up an EC2 load balancer in front of an instance that has multiple domains protected by SSL.
Is it possible to make the load balancer pass the HTTPS request through as is, and have it decrypted at the server level? If so, how do I set that up?
I have a standard LAMP setup on an EC2 instance.
On your Elastic Load Balancer, configure a TCP listener that listens on port 443 and forwards to port 443 on the instances. This will allow your EC2 instances to perform the SSL termination.
Note that you won't be able to use Sticky Sessions in this configuration.
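As a boto3 sketch for a Classic Load Balancer (the name is a placeholder), the passthrough listener could be added like this:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Plain TCP listener: the ELB does not terminate SSL, it just passes the
# encrypted stream through to port 443 on the instances, so SNI on the
# server keeps working for multiple domains.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-classic-elb",  # placeholder ELB name
    Listeners=[
        {
            "Protocol": "TCP",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "TCP",
            "InstancePort": 443,
        }
    ],
)
```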

SMTP through HAProxy / Elastic Beanstalk

Kind of an unusual setup here:
We have an SMTP service running on Tomcat / Elastic Beanstalk on AWS in an auto-scaling group behind an ELB load-balancer.
Now, for one of our clients we need to have a static IP for the SMTP service. Since this is not possible with the out-of-the-box load balancer on AWS, we have a separate HAProxy instance transparently routing the :25 traffic through the AWS load balancer.
For some reason, the HAProxy chokes after exactly 3 SMTP calls. After that connections either time out or take minutes to go through.
The interesting part is that the following configurations work perfectly fine:
Calling the SMTP service on the AWS load-balancer directly
Load-balancing the Elastic Beanstalk's nodes through HAProxy directly.
The target setup, but with HTTP calls on port 80 instead of SMTP on port 25
Help is really appreciated.
That sounds like EC2 rate limiting what appears -- to the system -- to be "outbound" SMTP from your HAProxy instance.
You're accessing the ELB from the HAProxy by one of its outside addresses, and this is causing your traffic to be treated as Internet-bound.
In order to maintain the quality of Amazon EC2 addresses for sending email, we enforce default limits on the amount of email that can be sent from EC2 accounts. If you wish to send larger amounts of email from EC2, you can apply to have these limits removed from your account by filling out this form.
https://portal.aws.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request
One solution is to have those limits removed, but consider your next step carefully -- you'd be better served by load-balancing the EB nodes through the HAProxy directly, using the nodes' private IP addresses -- because there is a charge for traffic to your ELB from within EC2 on the public IP.
Data Transfer OUT From Amazon EC2 To ... Amazon Elastic Load Balancing ... in the same Availability Zone ... Using a public or Elastic IP address ... $0.01/GB.
http://aws.amazon.com/ec2/pricing/
Not a massive charge, perhaps, but it should be an avoidable charge nonetheless.
Additionally, there's no way to configure HAProxy to re-resolve the ELB's hostname on each request. HAProxy resolves hostnames on startup, and if the ELB's IP address changes, HAProxy will not detect the change.
On the flip side, you can't reliably configure HAProxy to connect directly to the EB instances, since they're dynamically addressed as well.
The simplest way to prove that my diagnosis is correct is to set the ELB's TCP listener on another port, such as 587 (or 2025, or whatever), mapped to port 25 on the EB instances. Then have the HAProxy target the traffic to port 587. That should eliminate the EC2 rate limiting on SMTP, although you do still have an issue to deal with if the ELB's external IP changes.
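A minimal boto3 sketch of that test for a Classic Load Balancer (the load balancer name is a placeholder):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Listen on 587 externally but still deliver to port 25 on the Elastic
# Beanstalk instances, so HAProxy can send to 587 and avoid EC2's
# throttling of what looks like outbound port-25 SMTP.
elb.create_load_balancer_listeners(
    LoadBalancerName="my-eb-elb",  # placeholder ELB name
    Listeners=[
        {
            "Protocol": "TCP",
            "LoadBalancerPort": 587,
            "InstanceProtocol": "TCP",
            "InstancePort": 25,
        }
    ],
)
```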