502 Bad Gateway in AWS Elastic Beanstalk

I have deployed a Spring Boot application to Elastic Beanstalk.
In the "application.properties" file, I have set:
server.port=5000
I have added an RDS database and set the following environment properties.
I have also added an inbound rule in the security group of the environment as shown in the image below:
I am still getting the 502 Bad Gateway error when I click on the URL.

Your rule is incorrect.
0.0.0.0/32 means that you accept traffic only from the IP address 0.0.0.0, which basically doesn't exist.
What you want is to allow traffic from 0.0.0.0/0, which means accepting traffic from anywhere in the world.

In the case of Elastic Beanstalk, your instance runs an nginx reverse proxy, which accepts HTTP connections on port 80 and proxies them to port 5000.
Your security group inbound rule should therefore accept HTTP connections on port 80 from everywhere (0.0.0.0/0).
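For reference, a minimal sketch of the corrected rule using the AWS CLI; the security group ID below is a placeholder for the group attached to your environment's instance or load balancer:

# Allow HTTP from anywhere (0.0.0.0/0, not 0.0.0.0/32).
# sg-0123456789abcdef0 is a placeholder ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0

With that in place, nginx on the instance can receive requests on port 80 and hand them to your app on port 5000.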

Related

Expose an endpoint for an ECS Fargate container that is using port 8545, through AWS Route 53 and an ALB

I would like to expose the endpoint of a tool that's using port 8545 through AWS Route 53, an Application Load Balancer and ECS Fargate. I've created a Dockerfile with the following:
FROM trufflesuit/ganache-cli:latest
EXPOSE 8546
CMD ["--fork", "https://Infura_node_URL"]
For the target group, I've been using Protocol HTTP, port 8546;
For Application Load Balancer, I've set HTTP:80 to be redirected to 443;
For ECS task definition, I've set the container port as 8545
When I run the script that connects to this container, an error occurs:
Error: Connection refused or URL couldn't be resolved: https://Infura_node_URL
If I browse to the Route 53 URL I've configured, it keeps loading until it eventually times out.
I am relatively new to networking, but I believe there might be something wrong with the protocol or the port I've set. Can someone please help?
*If I run this Docker container locally, http://localhost:8546 shows '400 Bad Request', which is the expected response.
The problem here is that the Fargate service is not allowing traffic from the load balancer. Add a rule to the Fargate service's security group that allows HTTP traffic from the ALB's security group. The source of that rule will be the ALB's security group ID.
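A rough sketch of that rule with the AWS CLI, using placeholder security group IDs and assuming the target group forwards to port 8546:

# Placeholder IDs: --group-id is the Fargate service's security group,
# --source-group is the ALB's security group.
# Use whatever port your target group actually forwards traffic to.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8546 \
  --source-group sg-0fedcba9876543210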

AWS EC2 security group: HTTPS vs TCP vs SSH

I am confused about configuring EC2 security group settings.
There are three options (TCP, SSH, HTTPS) and each of them requires you to add an IP and port number.
For context, at work I'm usually running Flask apps on EC2 and I only want particular people to view them. My question is about the difference between TCP, SSH, and HTTPS, and more importantly which of these I need to configure.
Within the EC2 Console, under Security Groups:
SSH and HTTPS in the Type dropdown are presets which set the port to 22 and 443 respectively.
TCP is the protocol. Both SSH and HTTPS run over TCP.
If you're running a server which you want to expose on a non-standard port, you can select Custom TCP Rule, then set the port accordingly.
You should probably have one security group that allows SSH traffic, and assign it to the EC2 instances you wish to shell into.
Then have a separate security group that allows the webserver traffic; in this case one for port 80 as well as 443 (see the sketch below):
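A minimal sketch of both groups with the AWS CLI, using placeholder group IDs and a placeholder IP:

# SSH security group: allow port 22, ideally only from your own IP.
aws ec2 authorize-security-group-ingress --group-id sg-0aaa111122223333a --protocol tcp --port 22 --cidr 203.0.113.25/32
# Webserver security group: allow HTTP and HTTPS from anywhere.
aws ec2 authorize-security-group-ingress --group-id sg-0bbb444455556666b --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0bbb444455556666b --protocol tcp --port 443 --cidr 0.0.0.0/0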
Of course you will then need a server running on that EC2 instance to receive the traffic. This might be a reverse proxy like nginx, which then proxies traffic to the correct port for your app server (run your Flask app with something like gunicorn in production).
If nginx and gunicorn are running on the same box, and say gunicorn serves on port 8000, then you wouldn't need a security group rule for this as it's loopback traffic. Your nginx configuration would just point to port 8000.
However, if you have a separate EC2 instance running gunicorn, you might wish to set up a security group for it that allows internal traffic from your VPC CIDR range:
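Something like this would do it, with a placeholder group ID and CIDR (substitute your gunicorn instance's security group and your VPC's range):

# Allow the app port only from inside the VPC.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ccc777788889999c \
  --ip-permissions 'IpProtocol=tcp,FromPort=8000,ToPort=8000,IpRanges=[{CidrIp=10.0.0.0/16,Description="gunicorn from inside the VPC"}]'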
I only want particular people to view them
This is probably a job for authentication in the app, as opposed to security groups, unless you're certain of the public IPs from which you wish people to connect.
In the examples above, a Source of 0.0.0.0/0 allows traffic from anywhere to reach that port. The console has a convenient dropdown which lets you set My IP if you only want to allow traffic from the IP you're using to connect to the console. Otherwise you'd need to work out the CIDR blocks manually.
Hope this helps. It probably raises more questions.
HTTP/HTTPS are what matter for you; both are used for serving websites. HTTPS is HTTP over SSL/TLS, so it is more secure than plain HTTP. You just need these.
HTTP and HTTPS use TCP ports 80 and 443 by default.
SSH is used to securely access a Unix-based server.

How to use Amazon ALB port forwarding to run multiple services on a single EC2 instance

I have multiple services running on multiple ports on a single AWS EC2 instance. I've been using two ALBs to run these services, but I'd like to combine them into a single ALB that forwards to the correct service based on the host name. One service is a Node app running on port 80 and the other is a Flask app running on port 5001.
As of now, I have a target group set up as my-website for the Node app on port 80, and api-service for my Flask app on port 5001.
I added those target groups to an ALB, my-alb, and set up rules so that ports 80 and 5001 redirect to port 443. On port 443 I set up forwarding rules so that if the host matches api.* it will forward to the target group api-service, otherwise it will default to my-website.
I have also set up my ALB as the alias for api.mywebsite.com and www.mywebsite.com in Route 53, as well as setting up the certificate. All the health checks are passing for both my target groups.
Here's the issue:
www.mywebsite.com works properly. I get forwarded to the HTTPS version of the site and everything looks fine. When I try to use api.mywebsite.com, it doesn't load and I get a 504 Gateway Time-out error.
To summarize, here are the steps I've completed:
Set up two target groups for my services on ports 80 and 5001
Added those two target groups to the ALB and set routing rules to redirect to port 443
Set forwarding rules for port 443 to forward to the service on port 5001 if the host matches api.*, else route to the service on port 80
Set the ALB as the alias for api.mywebsite.com and www.mywebsite.com in Route 53
Any help would be appreciated, thanks!
EDIT: Got it working.
I had configured my security group incorrectly; that was the step I was missing :D. Once I added port 5001 to the security group assigned to my ALB and EC2 instance, it began to work properly.
Thanks!
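For anyone hitting the same wall, here is a sketch of the kind of rule that was missing, with placeholder security group IDs: the instance's security group has to allow port 5001 from the ALB's security group.

# Placeholder IDs: --group-id is the EC2 instance's security group,
# --source-group is the ALB's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0ddd000011112222d \
  --protocol tcp \
  --port 5001 \
  --source-group sg-0eee333344445555e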

How can I troubleshoot an AWS Application Load Balancer giving 504, while the EC2 instance behind it gives 200?

I have an EC2 instance with a few applications successfully deployed onto it, listening for connections on ports 3000/3001/3002. I can correctly load a web page from it by connecting to its public DNS or public IP on the given port. I.e. curl http://<ec2-ip-address>:3000 works. So I know that the apps are running, and I know that the port bindings/firewall rules/EC2 security groups are all set up correctly to receive connections from the outside world.
I also have an Application Load Balancer, which is supposed to route traffic to the 3 apps depending on the host name, but it always gives me "504 Gateway Time-out". I've checked all the settings but I can't see what's wrong and I'm not really sure how to troubleshoot it from here.
The ALB has a single HTTPS/443 listener, with a cert that's valid for mydomain.com, app1.mydomain.com, app2.mydomain.com, and app3.mydomain.com.
The listener has 3 rules, plus the default rule:
Host == app1.mydomain.com => app1-target-group
Host == app2.mydomain.com => app2-target-group
Host == app3.mydomain.com => app3-target-group
Default action (last resort) => default-target-group
Each target group contains only the single EC2 instance, over HTTP, with the following ports:
app1-target-group: 3000
app2-target-group: 3001
app3-target-group: 3002
default-target-group: 3000
Given that I can access the app directly, I'm sure it must be a problem with the way I've configured the ALB/listener/target groups. But the 504 doesn't give me much to go on.
I've tried to turn on access logs to an S3 bucket, but it doesn't seem to be writing anything there. There's a single object called ELBAccessLogTestFile, and no actual logs in the bucket.
EDIT: Some more information... I actually have nginx installed on the EC2 instance, which is where I was previously doing the SSL termination and hostname-to-port mapping/routing. If I change the default-target-group above to point to port 443 over HTTPS, then it works!
So for some reason, routing traffic
- from the ALB to the EC2 instance over HTTPS on port 443 -> OK!
- from the ALB to the EC2 instance over HTTP on port 3000 -> Broken!
But again, I can hit the instance directly on HTTP/3000 from my laptop.
Communication between resources in the same security group is not open by default. Security group membership alone does not provide special access. You still need to open the ports in the security group to allow other resources in the security group to access those ports. You can specify the security group ID in the rule's source field if you don't want to open it up beyond the resources in the security group.
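As a sketch, with a placeholder ID for the security group shared by the ALB and the instance, the rule for the first app port would look like this (repeat for 3001 and 3002):

# Using the group's own ID as the source opens the port only to members
# of the same security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0fff666677778888f \
  --protocol tcp \
  --port 3000 \
  --source-group sg-0fff666677778888f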

AWS Ubuntu instance as proxy

I'm not sure why my browser is timing out when I try to connect to my AWS Ubuntu instance's Squid proxy.
I want to have my AWS Ubuntu instance act as a proxy for my Python requests. The requests I make in my program will hit my AWS proxy, and the proxy will return the web page to me. The proxy is acting as a middleman. I am running Squid on this Ubuntu instance. The instance is also within a VPC.
The VPC security group inbound traffic is currently set to
HTTP, TCP, 80, 0.0.0.0/0
SSH, TCP, 22, 0.0.0.0/0
RDP, TCP, 3389, 0.0.0.0/0
HTTPS, TCP, 443, 0.0.0.0/0
and outbound traffic is open to all traffic
My current Squid configuration is the default squid.conf, except that I changed one line to
http_access allow all
meaning traffic is open to all.
However, when I configure my Mozilla browser to use the Ubuntu instance's public IP and the squid.conf default port of 3128, I cannot see any traffic going through the proxy using this command on the Ubuntu instance:
tail -f /var/log/squid/access.log
My browser actually times out when I try to connect to a website such as google.com. I am following this tutorial but I cannot get the traffic logs that this person is getting.
HTTP/S as shown in the security group settings actually has nothing whatsoever to do with HTTP/S.
Many port numbers have assigned names. When you see "HTTP" here, it's only an alias that means "whatever stuff happens on TCP port 80." The list of values only includes common services, and the names aren't always precise compared to the official port names, but the whole point is to give neophytes a word that makes sense.
What should I change? I always thought I should be leaving HTTP/S ports to their default values.
That is not at all what this does. As already inferable from the above, changing an "HTTP" rule from port 80 to something else does not change the value of the HTTP port on instances behind it. Changing the port value simply makes the rule no longer an "HTTP" rule, since HTTP is just a friendly label which means "this rule is for TCP port 80."
You need a custom TCP rule allowing port 3128 from your IP, and that's it.
You need to add 3128 as a custom TCP rule in your SG. This will allow Squid to send and receive traffic.
Also, as a best practice, make SSH accessible from your own IP rather than from the public internet.
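A sketch of both suggestions with the AWS CLI, using a placeholder group ID and IP address:

# Allow Squid's port 3128 only from your own IP.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc123456789def0 \
  --protocol tcp \
  --port 3128 \
  --cidr 203.0.113.25/32
# Tighten SSH: remove the open rule, then re-add it from your IP only.
aws ec2 revoke-security-group-ingress --group-id sg-0abc123456789def0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0abc123456789def0 --protocol tcp --port 22 --cidr 203.0.113.25/32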