EC2 Load Balancer Instance Protocol and Port - amazon-web-services

I have an elastic load balancer that works properly when configured like so:
Load Balancer Protocol: HTTPS
Load Balancer Port: 443
Instance Protocol: HTTP
Instance Port: 80
However, if I attempt to change the instance protocol to HTTPS and instance port to 443, my server stops responding.
What do I have to do in order for my instance port to be 443?
The reason I want my instance port to be 443 is that my Rails app must verify that the incoming connection uses SSL, but with the instance protocol set to HTTP, that check fails.

You need to change the way your application detects whether the client is using SSL.
The port on the server won't give you that information. Look at the ELB documentation and see if you can use the X-Forwarded-Proto or X-Forwarded-Port headers.
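In other words, check the forwarded headers rather than the local port. A minimal sketch of that check, with header names taken from the ELB documentation and a plain dict standing in for whatever headers object your framework exposes (in Rails this would be request.headers):

```python
# Sketch: detect whether the original client connection used SSL when an
# ELB terminates HTTPS and forwards plain HTTP to the instance.
# The dict-of-headers interface is illustrative, not a real framework API.

def client_used_ssl(headers):
    """Return True if the ELB says the client connected over HTTPS."""
    # The ELB sets X-Forwarded-Proto to the protocol the client used,
    # regardless of the instance-side protocol and port.
    proto = headers.get("X-Forwarded-Proto", "").lower()
    if proto:
        return proto == "https"
    # Fallback: X-Forwarded-Port carries the load balancer listener port.
    return headers.get("X-Forwarded-Port") == "443"
```

With this approach the instance can keep serving plain HTTP on port 80 while the application still knows which connections arrived over SSL.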

Your instance server has to be able to respond on port 443, with a valid HTTPS response.
That generally means you have to install a signed SSL cert and configure your web server to use it.
Without knowing what web server you are using, I can't give specific instructions.
For instance, if you are using Apache, you need to enable something like mod_ssl.

Related

enable 443 port for TURN server

Which port do I need to open on my EC2 instance so that the link below works over SSL as well?
Without HTTPS: this works fine.
With HTTPS: this gives an error.
Please help.
You will need port 443 open in the security group for HTTPS to work. Also, if it is a Windows instance, make sure the Windows Firewall has that port open.
Your working link is specifying port 3478: http://live.talkrecruit.com:3478/
Your non-working version is specifying HTTPS, but is also still specifying port 3478: https://live.talkrecruit.com:3478/
Since you have configured HTTPS to be served on port 443, you obviously need to change the port in the link to be 443, like so: https://live.talkrecruit.com:443/
And since 443 is the default port for HTTPS, you can leave it off and the browser will automatically try to use port 443 like so: https://live.talkrecruit.com/
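As a small illustration of why the port can be omitted: the well-known service registry already maps scheme names to their default ports, which is what the browser consults implicitly. This uses Python's stdlib lookup of the system services database, so it assumes a typical Unix host:

```python
import socket

# Why https://live.talkrecruit.com/ needs no explicit port: "https" is a
# well-known service name mapped to TCP 443, and "http" maps to TCP 80,
# so the browser fills the port in automatically.
https_port = socket.getservbyname("https", "tcp")
http_port = socket.getservbyname("http", "tcp")
```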

AWS Ubuntu instance as proxy

I'm not sure why my browser times out when I try to connect to the Squid proxy on my AWS Ubuntu instance.
I want to have my AWS Ubuntu instance act as a proxy for my python requests. The requests I make in my program will hit my AWS proxy and my proxy will return to me the webpage. The proxy is acting as a middleman. I am running squid in this Ubuntu instance. This instance is also within a VPC.
The VPC security group inbound traffic is currently set to
HTTP, TCP, 80, 0.0.0.0/0
SSH, TCP, 22, 0.0.0.0/0
RDP, TCP, 3389, 0.0.0.0/0
HTTPS, TCP, 443, 0.0.0.0/0
and outbound traffic is open to all traffic
My current Squid configuration is the default squid.conf, except that I changed one line to
http_access allow all
meaning traffic is open to all.
However, when I configure my Firefox browser to use the Ubuntu instance's public IP and Squid's default port of 3128, I cannot see any traffic going through the proxy using this command on the Ubuntu instance:
tail -f /var/log/squid/access.log
My browser actually times out when I try to connect to a website such as google.com. I am following a tutorial but I cannot get the traffic logs that this person is getting.
HTTP/S as shown in security group settings actually has nothing whatsoever to do with HTTP/S.
Many port numbers have assigned names. When you see "HTTP" here, it's only an alias that means "whatever happens on TCP port 80." The list of values only includes common services, and the names aren't always precise compared to the official port names, but the whole point is to give neophytes a word that makes sense.
What should I change? I always thought I should be leaving HTTP/S ports to their default values.
That is not at all what this does. As already inferable from above, changing an "HTTP" rule from port 80 to something else does not change the value for the HTTP port on instances behind it. Changing the port value makes the rule no longer be an "HTTP" rule, since HTTP is just a friendly label which means "this rule is for TCP port 80."
You need a custom TCP rule allowing port 3128 from your IP, and that's it.
You need to add 3128 as a custom TCP rule in your security group. This will allow Squid to send/receive traffic.
Also, as a best practice, make SSH accessible from your own IP rather than from the whole internet.
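As a sketch, this is the ingress rule you would pass to boto3's authorize_security_group_ingress to open Squid's port from your own IP only. The helper and the example CIDR are hypothetical; building the rule as a plain dict keeps it testable without AWS credentials:

```python
# Hypothetical helper: build the IpPermissions entry that opens Squid's
# port 3128 from a single CIDR. You would then call something like
#   ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=[rule])
# with a boto3 EC2 client.

def squid_ingress_rule(my_ip_cidr, port=3128):
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        # Restrict to your own IP, e.g. "203.0.113.9/32", not 0.0.0.0/0.
        "IpRanges": [{"CidrIp": my_ip_cidr}],
    }
```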

My AWS ec2 instance is running on ec2-xx-1xx-xxx-24.compute-1.amazonaws.com:8000. how do i make it run on ec2-xx-1xx-xxx-24.compute-1.amazonaws.com

I am using the Gunicorn server, and it is a Django application on an Ubuntu server.
You can configure this via a virtual host in httpd.conf with a redirection rule, or you can use an ELB that listens for requests on port 80 and forwards them to port 8000.
This is a two step problem:
You have to configure Django to listen on the right port, and you also have to modify the security group attached to your instance to allow connection on port 80.
You can either allow access from anywhere or from a specific IP/Range of IPs.
Another solution is to create an ELB and configure it to listen on port 80 and send the traffic to port 8000.
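The ELB variant can be sketched as the listener mapping you would hand to a classic load balancer. Field names follow the classic ELB Listeners structure; this only builds the description and does not call AWS:

```python
# Listener for a classic ELB: accept HTTP on port 80 from clients and
# forward to the instances on port 8000, where Gunicorn is listening.

def http_listener(lb_port=80, instance_port=8000):
    return {
        "Protocol": "HTTP",
        "LoadBalancerPort": lb_port,
        "InstanceProtocol": "HTTP",
        "InstancePort": instance_port,
    }
```

Remember that the instance security group still has to allow the ELB to reach port 8000, and the ELB's own group has to allow port 80 from clients.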

How to get the port number of client request with AWS ELB using TCP and Nginx on server

While using HTTP/HTTPS as the load balancer protocol, we get the client's original protocol (i.e. whether it was HTTP or HTTPS) from the X-Forwarded-Proto header.
Using this header in the nginx configuration, we can determine whether the originating call was HTTP or HTTPS and act accordingly.
But if the ELB listener configuration is as shown in the image below, how can we determine whether the request came via port 80 or port 443?
You have a couple of options, at least:
Option 1 is not to send both types of traffic to the same port on the instances. Instead, configure the application to listen on an additional port, such as 81 or 8080, and send the SSL-originating traffic there. Then use the port where the traffic arrives at the instance, to differentiate between the two types of traffic.
Option 2 is to enable the PROXY protocol on the ELB, after modifying the application to understand it. This has the advantage of also giving you the client IP address.
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
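A minimal sketch of what "understanding" the PROXY protocol means for option 2: with it enabled, the ELB prefixes each TCP connection with one text line (v1 format, per the haproxy spec above), and the last field is the front-end port the client actually hit:

```python
# Parse a PROXY protocol v1 header line. With the ELB's PROXY protocol
# enabled, the first line of each TCP connection looks like
#   PROXY TCP4 <client-ip> <dest-ip> <client-port> <dest-port>\r\n
# so the destination port tells you whether the client connected to 80 or 443.

def parse_proxy_v1(line):
    """Return (client_ip, client_port, dest_port) from a v1 header line."""
    parts = line.rstrip("\r\n").split(" ")
    if parts[0] != "PROXY" or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("not a PROXY protocol v1 header: %r" % line)
    src_ip, _dst_ip, src_port, dst_port = parts[2:6]
    return src_ip, int(src_port), int(dst_port)
```

A real server would read this line off the socket before handing the rest of the stream to the application; the client IP comes along for free, as the answer notes.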

Front-Ending an app server on AWS EC2

I have 2 instances set up in EC2. One is running nginx and has an association with the elastic IP address, so its publicly accessible.
The other doesn't have a web server but has a RESTful server running on port 8080.
Both belong to a security group with these rules:
Security group: MongoDB-2-2-2-AutogenByAWSMP-
Ports   Protocol   Source
22      tcp        0.0.0.0/0
80      tcp        0.0.0.0/0
8080    tcp        0.0.0.0/0
If I understand that right then port 8080 should be open.
If I ssh onto my web box (with nginx running) I'm trying to test access to my RESTful server on the other instance:8080, so I tried:
curl http://10.151.87.76:8080/1/tlc/ping
curl http://ip-10-151-87-76:8080/1/tlc/ping
curl http://ip-10-151-87-76.ec2.internal:8080/1/tlc/ping
All of these gave me "couldn't connect to host" errors.
If I log onto the RESTful box directly and do the following, it works.
curl localhost:8080/1/tlc/ping
So I know my service is up and healthy.
Any ideas why I can't see port 8080 from the other instance are appreciated.
Make sure instances are in the same availability zone. If not, you may need to access the instance by public DNS name (something like ec2-XXX-XX-XXX-XXX.YYY.amazonaws.com).
Make sure 10.151.87.76 is the correct IP. Note that this will probably change after the instance is stopped and started again.
Make sure your headless service is publicly available -- it may be listening on localhost:8080 only, but it should listen on 0.0.0.0:8080. Try nmap 10.151.87.76 -p 8080 from the other instance; it should list 8080 as an open port.
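If nmap isn't installed on the other instance, the same reachability check can be sketched in a few lines of Python (the host and port are whatever you are probing, e.g. 10.151.87.76 and 8080 from the question):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A True result means something accepted the connection; a timeout or refusal points at the security group, the bind address, or the service not running.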
"Make sure your headless service is publicly available" -- so this is the reason. What web server are you using for the REST API? If it is Apache, make sure the config says Listen 8080, not Listen 1.2.3.4:8080. If it is a standalone app, make sure it can listen on all interfaces; some servers listen on localhost by default. – hudolejev
This! Buried deep (deep) within my code was a piece of the server wired to "localhost". Changed that to key off hostname and all was well! Happy.
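The root cause here in miniature: a server bound to 127.0.0.1 is reachable only from the instance itself, while 0.0.0.0 accepts connections on every interface, which is what the nginx box needed to reach port 8080. A generic socket sketch, not the asker's actual server code:

```python
import socket

def make_listener(bind_addr, port=0):
    """Open a listening TCP socket on the given address (port 0 = ephemeral)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, port))
    s.listen(1)
    return s

local_only = make_listener("127.0.0.1")  # invisible to other hosts
all_ifaces = make_listener("0.0.0.0")    # reachable on every interface
local_only.close()
all_ifaces.close()
```

Frameworks hide this behind a setting (Gunicorn's --bind, Apache's Listen directive), but it always comes down to which address the socket is bound to.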