AWS EC2 security group https vs tcp vs ssh - amazon-web-services

I am confused about configuring the EC2 security group settings.
There are three options (TCP, SSH, HTTPS) and each of them requires you to add an IP/port number.
For context, in my work I'm usually running Flask apps on EC2 and I only want particular people to view them. My question is about understanding the difference between TCP, SSH, and HTTPS, but more importantly which of these I need to configure.

Within the EC2 Console, under Security Groups:
SSH and HTTPS, in the Type dropdown, are presets that set the port to 22 and 443 respectively.
TCP is the protocol. Both SSH and HTTPS are TCP.
If you're running a server which you want to expose on a non-standard port, you can select Custom TCP Rule, then set the port accordingly.
You should probably have one security group that allows SSH traffic, then assign this security group to the EC2 instances you wish to shell into. Then have a separate security group that allows the webserver traffic; in this case I also have one for port 80, as well as 443.
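If you'd rather do this from the AWS CLI than the console, a rough sketch of those two groups might look like the following (the sg-SSH and sg-WEB group IDs and the 203.0.113.10/32 address are placeholders for your own values):

# SSH group: allow port 22, ideally only from your own IP (placeholder group ID and CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-SSH \
    --protocol tcp --port 22 --cidr 203.0.113.10/32

# webserver group: allow ports 80 and 443 from anywhere
aws ec2 authorize-security-group-ingress --group-id sg-WEB \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-WEB \
    --protocol tcp --port 443 --cidr 0.0.0.0/0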
Of course you will then need a server running on that EC2 instance to receive the traffic. This might be a reverse proxy like nginx, which then proxies traffic to the correct port for your app server (run your flask app with something like gunicorn in production).
If nginx and gunicorn are running on the same box, and say gunicorn serves on port 8000, then you wouldn't need a security group for this as it's loopback traffic. Your nginx configuration points to port 8000.
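As a minimal sketch of that single-box setup (assuming the Flask app lives in app.py with an application object called app, and that nginx picks up configs from /etc/nginx/conf.d/):

# minimal nginx reverse proxy pointing at the loopback port gunicorn will use
sudo tee /etc/nginx/conf.d/flaskapp.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8000;   # gunicorn on loopback, so no security group rule needed
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx

# run the Flask app with gunicorn, bound to loopback only
# (runs in the foreground; use a process manager such as systemd in practice)
gunicorn --bind 127.0.0.1:8000 app:app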
However if you have a separate EC2 instance running gunicorn, you might wish to set up a security group for this to allow internal traffic from your VPC CIDR range.
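That internal rule might look something like this, assuming gunicorn on port 8000 and a 10.0.0.0/16 VPC range (the group ID, port and CIDR are all placeholders):

# allow the app server port only from inside the VPC
aws ec2 authorize-security-group-ingress --group-id sg-APP \
    --protocol tcp --port 8000 --cidr 10.0.0.0/16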
I only want particular people to view them
This is probably a job for authentication on the app, as opposed to security groups, unless you're certain of the public IPs from which you wish people to connect.
In the examples above, a Source of 0.0.0.0/0 is allowing traffic from anywhere to reach that port. The console has a convenient dropdown which lets you set My IP if you only want to allow traffic from the IP you're using to connect to the console. Otherwise you'd need to manually calculate the CIDR blocks.
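For example, if the only viewers connect from one known office IP, the HTTPS rule can be narrowed to a single /32 (the group ID and address are placeholders):

# allow HTTPS only from one known public IP
aws ec2 authorize-security-group-ingress --group-id sg-WEB \
    --protocol tcp --port 443 --cidr 198.51.100.7/32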
Hope this helps. It probably raises more questions.

HTTP and HTTPS are the important ones for you. Both are used with websites. HTTPS is HTTP over SSL, meaning it is more secure than HTTP. You just need these.
HTTP and HTTPS use TCP ports 80 and 443 by default, respectively.
SSH is used to securely access a Unix-based server.

Related

How can I troubleshoot an AWS Application Load Balancer giving 504, while the EC2 instance behind it gives 200?

I have an EC2 instance with a few applications successfully deployed onto it, listening for connections on ports 3000/3001/3002. I can correctly load a web page from it by connecting to its public DNS or public IP on the given port. I.e. curl http://<ec2-ip-address>:3000 works. So I know that the apps are running, and I know that the port bindings/firewall rules/EC2 security groups are all set up correctly to receive connections from the outside world.
I also have an Application Load Balancer, which is supposed to route traffic to the 3 apps depending on the host name, but it always gives me "504 Gateway Time-out". I've checked all the settings but I can't see what's wrong and I'm not really sure how to troubleshoot it from here.
The ALB has a single HTTPS/443 listener, with a cert that's valid for mydomain.com, app1.mydomain.com, app2.mydomain.com, app3.mydomain.com.
The listener has 3 rules, plus the default rule:
Host == app1.mydomain.com => app1-target-group
Host == app2.mydomain.com => app2-target-group
Host == app3.mydomain.com => app3-target-group
Default action (last resort) => default-target-group
Each target group contains only the single EC2 instance, over HTTP, with the following ports:
app1-target-group: 3000
app2-target-group: 3001
app3-target-group: 3002
default-target-group: 3000
Given that I can access the app directly, I'm sure it must be a problem with the way I've configured the ALB/listener/target groups. But the 504 doesn't give me much to go on.
I've tried to turn on access logs to an S3 bucket, but it doesn't seem to be writing anything there. There's a single object called ELBAccessLogTestFile, and no actual logs in the bucket.
EDIT: Some more information... I actually have nginx installed on the EC2 instance, which is where I was previously doing the SSL termination and hostname-to-port mapping/routing. If I change the default-target-group above to point to port 443 over HTTPS, then it works!
So for some reason, routing traffic
- from the ALB to the EC2 instance over HTTPS on port 443 -> OK!
- from the ALB to the EC2 instance over HTTP on port 3000 -> Broken!
But again, I can hit the instance directly on HTTP/3000 from my laptop.
Communication between resources in the same security group is not open by default. Security group membership alone does not provide special access. You still need to open the ports in the security group to allow other resources in the security group to access those ports. You can specify the security group ID in the rule's source field if you don't want to open it up beyond the resources in the security group.
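For the setup in this question, that means the instance's security group needs an inbound rule for each target-group port with the ALB's security group as the source. A rough AWS CLI sketch (sg-INSTANCE and sg-ALB are placeholders for your real group IDs):

# on the instance's security group: allow the target-group port from the ALB's security group
aws ec2 authorize-security-group-ingress --group-id sg-INSTANCE \
    --ip-permissions 'IpProtocol=tcp,FromPort=3000,ToPort=3000,UserIdGroupPairs=[{GroupId=sg-ALB}]'
# repeat for ports 3001 and 3002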

AWS public DNS for ubuntu instance is not accessible from the browser

I am relatively new to AWS and I've been looking at quite a few tutorials for the past couple of days trying to figure out how to make my AWS ubuntu instance accessible from the browser.
What I've done:
1st: I configured security groups to accept all traffic for ssh, http, https just to see if the public DNS listed in the instance is accessible.
2nd: I changed the IP of my instance to an elastic IP
3rd: I wrote a simple node.js file that listens on port: 9000 and console.logs 'hello world'
For some reason SSH works, and I can run my node.js file, but again I cannot access the remote instance from the browser.
Any help would be greatly appreciated since I've been on this for a couple of days
Thanks!
Thank you everyone for the quick responses!
My issue was I did not include a TCP rule to my specific port. Now I am able to access that port via ec2-DNSNAME:9123.
And, just to clarify, if I want to host that DNS for all traffic I should specify 'anywhere' for the TCP rule, correct?
I configured security groups to accept all traffic for ssh, http, https
In security groups, "HTTP" does not mean "HTTP on any port"... it means "any traffic on TCP port 80" -- 80 being the standard IANA assigned port for HTTP.
Security groups are not aware of the type of traffic you are passing, only the IP protocol (e.g. TCP, UDP, ICMP, GRE, etc.) and port number (for protocols that use port numbers) and any protocol specific information (ICMP message types).
You need a rule allowing traffic to port 9000.
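In AWS CLI terms, that would be a custom TCP rule roughly like the following (the security group ID is a placeholder; narrow the CIDR to your own IP if the port shouldn't be world-reachable):

# allow inbound TCP 9000 from anywhere (tighten the CIDR as needed)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 9000 --cidr 0.0.0.0/0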
Firstly, go to your EC2 instance and see if curl http://localhost works.
Also, if you are exposing your Node.js app on port 9000, did you open 9000 in the security groups as well?
A few things to check:
- Security groups
- Subnet NACLs (these can function as a subnet-level firewall, but unless you've messed with these they should allow all traffic)
- On the server, if you run netstat -na | grep <PORT>, do you see your application listening on the correct ports?
You may also check your system for a firewall that could be short-circuiting the requests.
If the above doesn't point you towards the issue, you can grab tcpdump and filter it just for requests coming from your web browser (e.g. after installing it, tcpdump -vvn host 10.20.30.40 port 8000; substitute your IP and port). This will let you know if you're running into a network issue (packets aren't reaching the server) or if it's something with the app.
I'd also recommend using IP addresses while doing your initial troubleshooting. That way we can establish that it's not network/server configuration before going into DNS.
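For example (the 203.0.113.25 address below is a placeholder public IP), compare what you see from your laptop with what you see on the instance itself:

# from your laptop: does anything answer on the port?
curl -v http://203.0.113.25:9000/
# on the instance: is the app reachable locally, and bound to 0.0.0.0 rather than just 127.0.0.1?
netstat -na | grep 9000
curl -v http://localhost:9000/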

Websocket timeouts using AWS Application Load Balancer

I'm getting gateway time-outs when trying to use a port specifically for websockets using an Application Load Balancer inside an Elastic Beanstalk environment.
The web application and websocket server are held within a Docker container; the application runs fine, however wss://domain.com:8080 will just time out.
Here are the load balancer listeners, using the SSL cert for wss.
The target group it points to accepts a 'Protocol' of HTTP (I've tried HTTPS) and forwards to 8080 on an EC2 instance. Or at least it should. (There doesn't appear to be an option for TCP on Application Load Balancers.)
I've had a look over the Application Load Balancer logs and it looks like it reaches the target group, but times out on its connection to the EC2 instance, and I'm stumped as to why.
All AWS Security Groups have been opened on all traffic for the time being, I've checked the host and found that the port is open and being listened to by Nginx which will route to the correct port to the docker container:
docker ps also shows me:
And once inside the container I can see that the port is being listened to by the Websocket server:
So it can't be the EC2 instance itself, can it? Is there an issue routing websockets via ports in an ALB?
-- Edit --
Current SG of the ALB:
The EC2 instance SG:
Accepted answer here seems to be "open Security Groups for EC2 (web server) and ALB inbound & outbound communication on required ports since websockets need two way communication."
This is incorrect and the reason why it solved the problem is coincidental.
Let me explain:
"Websockets needs two way communication..." - Sure but the TCP sessions is only ever opened from one way - from the client.
You don't have to allow any outbound connections from the EC2 instance (web server) in order to use web sockets.
Of course the ALB needs to be able to make TCP connections to the EC2 instance. But not to the client. Why? Well, the ALB is accepting TCP connections (usually on ports 80 and 443). It is setting up a TCP session that was initiated by the client. It is then trying to set up a new TCP session to the web server behind the ALB. This should be done on the port that you decided to have the web server listen on. The security group around the ALB needs to be able to make outbound connections on this port to the web server. This is the reason why "open up everything" worked. It has nothing to do with "two way communication".
You could use any ports, of course, but you don't need to use any ports other than 80 & 443 (such as 8080) on either the load balancer or the EC2 instance.
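In concrete terms, that usually boils down to one inbound rule on the EC2 instance's security group for the target group's port, sourced from the ALB's security group (sg-INSTANCE and sg-ALB below are placeholders, and port 8080 is the one from the question):

# allow the ALB's security group to reach the websocket target port on the instance
aws ec2 authorize-security-group-ingress --group-id sg-INSTANCE \
    --ip-permissions 'IpProtocol=tcp,FromPort=8080,ToPort=8080,UserIdGroupPairs=[{GroupId=sg-ALB}]'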
Websockets need two-way communication; make sure the security groups attached to all resources (EC2 & ALB) allow both inbound & outbound communication on the required ports.

AWS Ubuntu instance as proxy

I'm not sure why my browser is timing out when I try to connect to my AWS Ubuntu Instance squid proxy
I want to have my AWS Ubuntu instance act as a proxy for my python requests. The requests I make in my program will hit my AWS proxy and my proxy will return to me the webpage. The proxy is acting as a middleman. I am running squid in this Ubuntu instance. This instance is also within a VPC.
The VPC security group inbound traffic is currently set to
HTTP, TCP, 80, 0.0.0.0/0
SSH, TCP, 22, 0.0.0.0/0
RDP, TCP, 3389, 0.0.0.0/0
HTTPS, TCP, 443, 0.0.0.0/0
and outbound traffic is open to all traffic
My current squid configuration is the default squid.conf, except that I changed one line to
http_access allow all, meaning traffic is open to all.
However, when I configured my Mozilla browser to use the Ubuntu instance's public IP and squid.conf's default port of 3128, I cannot see any traffic going through my proxy using this command on the Ubuntu instance:
tail -f /var/log/squid/access.log
My browser actually times out when I try to connect to a website such as google.com. I am following this tutorial but I cannot get the traffic logs that this person is getting.
HTTP/S as shown in security group settings actually has nothing whatsoever to do with HTTP/S.
Many port numbers have assigned names. When you see "HTTP" here, it's only an alias that means "whatever stuff happens on TCP port 80." The list of values only includes common services, and the names aren't always precise compared to the official port names, but the whole point is to give neophytes a word that makes sense.
What should I change? I always thought I should be leaving HTTP/S ports to their default values.
That is not at all what this does. As already inferable from above, changing an "HTTP" rule from port 80 to something else does not change the value for the HTTP port on instances behind it. Changing the port value makes the rule no longer be an "HTTP" rule, since HTTP is just a friendly label which means "this rule is for TCP port 80."
You need a custom TCP rule allowing port 3128 from your IP, and that's it.
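Something like this, with a placeholder group ID and your own public IP substituted in:

# allow the Squid port only from your own IP
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3128 --cidr 198.51.100.7/32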
You need to add 3128 as a custom TCP rule in your SG. This will allow Squid to send/receive traffic.
Also, as a best practice, make SSH accessible from your own IP rather than from anywhere.

EC2 security group concern

EC2 --> RDS:
RDS (DB Engine): I have inbound and outbound open on port 3306 for the web server's security group.
EC2 (Web Server): I have inbound open for 80, 443 and 22 (my IP). Outbound is open for 80, 443 and 3306, and it needs all traffic as well to function properly.
My question is about the outbound rules of my web server. Why do I need all traffic to be open? Does this have any security concern?
Some people lock down outbound to prevent against data loss. It works better for immutable architecture since you've removed the ability to update packages from public sources.
Obviously you can choose your own security profile; generally speaking, I consider these the levels of security:
Port 22 open to the world
Port 22 access by white listed IPs
Bastion host with white listed IPs
VPN (from here down, all using VPN)
Private IPs + NAT
Proxy servers for outbound access
That's my EC2 security maturity model. I'm sure I missed some - feel free to comment below.
The security group outbound rules let you specify the "destination", not the source. Basically, you don't need to worry about being attacked by denial of service through the outbound rules.
On the other hand, if your web server needs to connect out to the Internet without restriction, then you set the destination for 80 and 443 to 0.0.0.0/0.
Otherwise, if your web server only needs to connect to OS repositories for security updates (e.g. Ubuntu, Apache, etc.), then you can explicitly specify the repositories' IP addresses instead of using 0.0.0.0/0.
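A rough sketch of that kind of egress lockdown (the group ID and the 192.0.2.0/24 repository range are placeholders; note that a new security group ships with an allow-all outbound rule, which you would remove first):

# remove the default allow-all outbound rule
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
# then allow HTTPS out only to the known repository range
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges=[{CidrIp=192.0.2.0/24}]'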
Other than that, there is little risk, unless you load something that renders webpages, e.g. a web browser on the web server that reads random webpages; that makes you susceptible to browser / Java engine / rendering engine exploits. If an exploit can execute something like an SSH reverse tunnel, then there is a possibility that an attacker may gain access to your web server.