I have multiple websites on one EC2 instance, all of which were working fine over both HTTP and HTTPS until this morning. I also have Jenkins installed, on port 8080.
Strangely, no changes were made, but now ports 80, 443 and 8080 are all blocked.
I've currently allowed all traffic from all sources, and those ports are still blocked.
The SSH port works, and when I SSH in and test using wget:
wget -O - http://localhost - works
wget -O - http://private-ip - works
wget -O - http://public-ip - no response
wget -O - http://my-domain - no response
Moreover, if I run nginx or some other HTTP server on a port other than 80, 443 or 8080, I receive requests on both the public IP and my domain.
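As a quick sketch of that kind of test (assuming Python 3 is available on the instance; any throwaway server works equally well):
sudo python3 -m http.server 81 --bind 0.0.0.0   # on the instance; root needed for ports below 1024
wget -O - http://public-ip:81                   # from an outside machine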
ufw is disabled and the iptables rules are empty:
sudo ufw status
Status: inactive
sudo iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
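(For completeness, -t nat only lists the NAT chains; the filter table, which is what would actually drop inbound packets, can be listed separately:)
sudo iptables -vnL
sudo ip6tables -vnL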
wget -O - http://localhost
--2020-11-11 16:08:54-- http://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
wget -O - http://private-ip
--2020-11-11 16:09:19-- http://private-ip/
Connecting to private-ip:80... connected.
HTTP request sent, awaiting response... 200 OK
wget -O - http://public-ip
--2020-11-11 16:10:11-- http://public-ip/
Connecting to public-ip:80...
HTTP Server on port 81 works.
I managed to solve the issue. Since WordPress was hosted on the instance, it picked up some malicious scripts, other sites reported us to EC2 abuse, and AWS blocked those ports.
I'm not sure why this wasn't stated anywhere in the AWS Console; it seems that information is only available if the client has paid support.
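For anyone hitting the same symptom: from a machine outside AWS, an upstream block like this typically shows the ports as filtered (silently dropped) rather than closed, which you can check with something like nmap (port 81 included here as a control that should show as open):
nmap -Pn -p 80,443,8080,81 public-ip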
I know there are a lot of questions about this specific issue, but believe me, I have tried everything I could get my hands on. After connecting to WireGuard and establishing a successful handshake, I have no internet connection. I will describe everything I have tried so far, with no luck.
I am using a Virtual Machine in Google Cloud and a physical Windows machine as client.
Here is what my server configuration (Google Cloud VM) looks like:
[Interface]
Address = 10.100.100.1/24
SaveConfig = true
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820
PrivateKey = PRIVATE_KEY
[Peer]
PublicKey = CLIENTS_PUBLIC_KEY
AllowedIPs = 10.100.100.2/32
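(A side note worth checking with a config like this: the PostUp rules masquerade out of eth0, so that name has to match the VM's actual uplink interface, and the rules should actually be present once the tunnel is up. For example:)
ip route get 8.8.8.8                        # shows which interface outbound traffic really uses
sudo iptables -t nat -L POSTROUTING -v -n   # the MASQUERADE rule should be listed with that interface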
Here is what my client-side configuration looks like:
[Interface]
PrivateKey = CLIENTS_PRIVATE_KEY
Address = 10.100.100.2/32
[Peer]
PublicKey = SERVER_PUBLIC_KEY
AllowedIPs = 0.0.0.0/0
Endpoint = BASTION_SERVER_PUBLIC_IP:51820
I have enabled IPv4 forwarding on the cloud VM by modifying /etc/sysctl.conf and uncommenting the following line:
net.ipv4.ip_forward=1
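(Editing the file alone does not change the running kernel; the setting has to be reloaded and can then be verified:)
sudo sysctl -p                 # reload /etc/sysctl.conf
sysctl net.ipv4.ip_forward     # should now print: net.ipv4.ip_forward = 1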
Since this is a cloud environment with an external firewall, I manually added a rule to the cloud's firewall for port 51820. After that point I can complete a handshake successfully, but there is no internet on the client side.
I have checked if the server itself has internet access, which it does.
Disabled my whole firewall on the client side since I thought it might interfere with something.
I have read in another post a suggestion to set the MTU value explicitly. Google uses an MTU of 1460, which apparently differs from WireGuard's default. I have added this to both the client and server configuration, with no luck (see the sketch after this list).
Explicitly set the DNS record in the client's configuration. Still no luck.
Enabled UFW and explicitly allowed the port required by WireGuard, 51820.
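For reference, a minimal sketch of what the explicit MTU setting looks like in the [Interface] section. The value 1380 is an assumption here (Google's 1460 minus roughly 80 bytes of WireGuard/UDP/IP overhead); your value may differ:
[Interface]
Address = 10.100.100.1/24
MTU = 1380    # assumed: outer MTU of 1460 minus tunnel overhead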
Is there something that I am missing regarding all of this? I have tried everything I can get my hands on but still there is no internet connection from the client after the handshake.
Thank you in advance!
Update 1
I have enabled IP Forwarding as suggested.
After this configuration, I can see on the server side that the handshake is successful:
peer: PUBLIC_KEY
endpoint: CLIENT_IP:56507
allowed ips: 10.100.100.2/32
latest handshake: 4 minutes, 11 seconds ago
transfer: 52.60 KiB received, 344 B sent
It also shows 52.60 KiB of data received, which was not the case before.
However the problem still persists. I still have no access to the internet from client side.
By default, Google Cloud performs strict source and destination checking for packets so that:
VM instances can only send packets whose sources are set to match an internal IP address of its interface in the network.
Packets are only delivered to an instance if their destinations match the IP address of the instance's interface in the network.
When creating a VM, you must enable IP Forwarding. This cannot be changed after a VM is created.
Enabling IP forwarding for instances
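You can check whether the flag is set on an existing VM with something like the following (instance name and zone are placeholders):
gcloud compute instances describe wireguard-vm --zone=europe-west1-b --format='get(canIpForward)'   # names here are placeholders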
I've created a test VM instance on GCP:
installed apache2, and the service started successfully (screenshot: apache2 started)
the firewall is set up with the defaults (screenshot: firewall setup)
the Apache ports config (screenshot: port config)
the external IP (screenshot: external ip)
It all seems OK, but I cannot access it via the external IP as described in the document: https://cloud.google.com/community/tutorials/setting-up-lamp
Please give me some suggestions, thanks.
=================================
curl --head http://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 80: Operation timed out
curl --head https://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 443: Operation timed out
netstat -lntup output (attached as screenshot: result)
Assuming that your Linux has dual stack enabled, the :::80 entry from netstat means that Apache2 is listening on port 80 for both IPv4 and IPv6, on all network interfaces. You can check with the following command; a value of 0 means that dual stack is enabled.
cat /proc/sys/net/ipv6/bindv6only
Given the above, most likely your system does not have an iptables rule allowing port 80. Assuming Ubuntu 18.04 (adjust for your distribution):
Backup the iptables rules:
sudo iptables-save > iptables.backup
Allow ingress port 80:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
Optionally allow ingress port 443:
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
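Note that rules added this way do not survive a reboot. If you want them to persist on Ubuntu, one option is the iptables-persistent package:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save   # writes the current rules to /etc/iptables/rules.v4 and rules.v6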
I have set up an EC2 instance at AWS, with Java and Tomcat 9 installed, running Ubuntu Server 18.04 LTS (HVM). I am able to connect to the instance over SSH using its Elastic IP [ssh -i "path/to/.pem-file" ubuntu@XX.XX.XX.XX], but I am unable to access the Tomcat default page from a browser outside EC2 using the AWS public DNS address or the Elastic IP.
I have added a Security Group and set up an inbound rule as below.
This is the output of iptables -nL on EC2.
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I have gone through similar posts here and followed the same steps mentioned in this article, but
http://ec2-XXX-XXX-XXX-XXX.us-east-2.compute.amazonaws.com still does not load the Tomcat default page.
I need some help.
Edit:
netstat -na | grep 80
displays
tcp6 0 0 :::8080 :::* LISTEN
which suggests it is listening only on IPv6 addresses, and as per the official docs, Elastic IP addresses are not supported for IPv6.
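If the goal is to force an explicit IPv4 bind, one option is to set the address attribute on the HTTP connector in server.xml and restart Tomcat. This is only a sketch; the file path depends on how Tomcat was installed (e.g. /etc/tomcat9/server.xml for the Ubuntu package), and note that on a dual-stack system the :::8080 socket normally accepts IPv4 connections as well, so the security group is still worth re-checking first:
<!-- bind the HTTP connector explicitly to IPv4; file location depends on your install -->
<Connector port="8080" protocol="HTTP/1.1"
           address="0.0.0.0"
           connectionTimeout="20000"
           redirectPort="8443" />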
I have an EC2 instance in which I am running a Flask server on port 8080.
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
I can run curl and get the response from my EC2 instance.
$ curl -X GET '0.0.0.0:8080/fbeac'
However, I cannot use the public IP/DNS to get the response and running
$ curl -X GET '3.135.62.118:8080/fbeac'
results in curl: (7) Failed to connect to 3.135.62.118 port 8080: Connection refused. I get the same error when I curl from my local machine.
My application is listening on port 8080, which I checked by running netstat.
$ netstat -an | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
Moreover, I have ensured the Security Groups are set up correctly. I have experimented with just custom TCP port 8080, all TCP ports, and (currently) all traffic. I have also opened up HTTP/HTTPS ports on the side just in case, but with no luck.
This leads me to believe it might be a firewall issue but I am on an Amazon Linux machine and the default policy seems to be to ACCEPT, which I checked by running iptables.
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Is there some other thing I should be checking or button I should be flipping?
My AWS Instance ID is i-0080c730c4287ca3c.
I would suggest checking which access rules are defined in your EC2 instance's security groups. You might need to open up port 8080.
Also, as a side note, consider not using the Flask development server in production like that. It's fine for testing, but you really should use something like uWSGI for production workloads.
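For example, a minimal sketch of serving the same app with uWSGI instead (assuming the Flask application object is named app in app.py; names are placeholders):
pip install uwsgi
uwsgi --http 0.0.0.0:8080 --module app:app --processes 4   # app:app assumes app.py exposes a Flask object named app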
Just in case anyone else is having this problem too: make sure that you have an Internet Gateway configured for your VPC, with a route to it in the subnet's route table.
I've read through this answer but for the life of me, I can't figure out this one out.
I have an Ubuntu 18 EC2 instance running RStudio Server and RStudio Connect, both using default configuration and listening on ports 8787 and 3939 respectively.
Here are my config files:
ubuntu#EC2:~$ cat /etc/rstudio/rserver.conf
# Server Configuration File
#
#
ubuntu#EC2:~$ sudo cat /etc/rstudio-connect/rstudio-connect.gcfg
; RStudio Connect configuration file
[Server]
; SenderEmail is an email address used by RStudio Connect to send outbound
; email. The system will not be able to send administrative email until this
; setting is configured.
;
; SenderEmail = account@company.com
SenderEmail =
; Address is a public URL for this RStudio Connect server. Must be configured
; to enable features like including links to your content in emails. If
; Connect is deployed behind an HTTP proxy, this should be the URL for Connect
; in terms of that proxy.
;
; Address = https://rstudio-connect.company.com
Address =
[HTTP]
; RStudio Connect will listen on this network address for HTTP connections.
Listen = :3939
[Authentication]
; Specifies the type of user authentication.
Provider = password
Here's what I've tried:
Created inbound rules for ports 8787, 3939 and all TCP ports in my security group.
Checked the Network ACL for the subnet the instance is on
Ensured that rstudio-server and rstudio-connect are running and listening on all interfaces and not just localhost
ubuntu#EC2:~$ netstat -ltpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::8787 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::3939 :::* LISTEN -
Checked that ufw is inactive
ubuntu#EC2:~$ sudo ufw status
Status: inactive
Created an iptables rule for port 8787
ubuntu#EC2:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8787
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I still can't access port 8787 or 3939 externally. However, I can access both of them on the host using Lynx.
If I change RStudio Server's configuration to use port 80 instead, I can access it externally, but it doesn't work on ports 8787 or 3939.
Any ideas why and how to fix this?
I just figured out the answer myself. There was absolutely nothing wrong with my configuration. Opening up all the TCP ports in my security group was overkill and entirely unnecessary, so don't do that.
The issue was that the corporate network I am connected to blocks outbound traffic to external hosts on certain non-standard ports.
If you're in the same boat as me and need to host two services on the same EC2 instance, but don't know which ports are unavailable or blocked by your organization, you can use nmap and portquiz.net to figure it out.
nmap is a port scanner, and portquiz.net is a service that listens for connections on all TCP ports. You can scan that host with nmap over the range of TCP ports you're interested in and see which ones show up as open:
nmap -v -p0-8000 portquiz.net
Starting Nmap 7.70 ( https://nmap.org ) at 2019-04-02 16:47 IST
Initiating Ping Scan at 16:47
Scanning portquiz.net (5.196.70.86) [2 ports]
Completed Ping Scan at 16:47, 0.13s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 16:47
Completed Parallel DNS resolution of 1 host. at 16:47, 0.14s elapsed
Initiating Connect Scan at 16:47
Scanning portquiz.net (5.196.70.86) [8001 ports]
Discovered open port 22/tcp on 5.196.70.86
Discovered open port 80/tcp on 5.196.70.86
Discovered open port 443/tcp on 5.196.70.86
Discovered open port 21/tcp on 5.196.70.86
Discovered open port 4080/tcp on 5.196.70.86
Completed Connect Scan at 16:48, 84.98s elapsed (8001 total ports)
Nmap scan report for portquiz.net (5.196.70.86)
Host is up (0.13s latency).
rDNS record for 5.196.70.86: electron.positon.org
Not shown: 7996 filtered ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
80/tcp open http
443/tcp open https
4080/tcp open lorica-in
Here, I have 4080 and 80 open, which means the corporate firewall isn't blocking outbound traffic to those ports. After configuring RStudio Server and RStudio Connect to listen on ports 80 and 4080 respectively, I'm now able to access both services externally.
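For reference, a sketch of the two configuration changes, assuming the default file locations shown above:
# /etc/rstudio/rserver.conf
www-port=80
; /etc/rstudio-connect/rstudio-connect.gcfg, under [HTTP]
Listen = :4080
Then restart both services:
sudo systemctl restart rstudio-server rstudio-connect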