First of all, please excuse my English; I'm not a native speaker, but I'll try to explain as well as I can.
I really have no idea about this situation. I thought it was an iptables problem, but it seems not.
I'm using a hosted server (CentOS).
I installed Nginx + Django, and nginx listens on port 8080.
A domain is connected to the server.
When I run "wget [domain]:8080/[app name]/" on the server itself, it works.
Of course, "wget 127.0.0.1:8080/[app name]/" has no problem either (and neither does "wget [server ip]:8080/[app name]/").
However, connecting from other computers fails.
I checked my firewall settings and executed these commands:
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
/etc/init.d/iptables restart
I don't really understand all the options of these commands, and I think some of them were unnecessary, but I tried every iptables setting I could find by googling.
Still, I cannot connect to my server.
What should I check first?
I don't know if this is important, but I'm adding it to this post:
An Apache server is running on port 80, and it works fine; I can connect to Apache from other computers.
There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug.
Thank you for reading this question.
Update: I tried stopping my firewall, and it worked, so the problem is in my iptables settings.
I had allowed port 8080, but there must have been a mistake in the rules. I regret that I didn't read and study the settings carefully.
I flushed all the rules and restarted the server. Everything looks fine now.
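For anyone hitting the same thing, a minimal sequence to verify whether iptables is the culprit and to open the port cleanly might look like this (a sketch for CentOS-style iptables; port 8080 as above):
iptables -L INPUT -n --line-numbers   # check rule order; a REJECT rule above your ACCEPT still blocks traffic
service iptables stop                 # temporarily disable the firewall to confirm it is the cause
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
service iptables save                 # persist the rule so it survives a restart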
I have a WordPress site with gunicorn and varnish running on an AWS instance.
This morning, the website gave a "502 Bad Gateway nginx" error.
Upon investigation, the varnish.service ExecStart line showed it listening on port 6081:
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
According to some notes, the port needs to be 80 and not 6081. Changing the port to 80 fixed the nginx error.
This issue seems to happen about once a year: the varnish.service port suddenly changes by itself and someone has to manually change it back to 80.
So my question is - why would varnish.service suddenly change its port? As far as I know, there were no updates or changes anywhere.
It depends on what file you're editing.
Make sure you're editing /etc/systemd/system/varnish.service. If that file isn't there, just run the following command:
sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/
When you're done editing the port, just run the following 2 commands:
sudo systemctl daemon-reload
sudo systemctl restart varnish
See https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration for a detailed tutorial.
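For reference, a sketch of what the edited ExecStart line could look like, keeping the same defaults as above and only changing the listen port to 80:
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m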
Why can't I run python manage.py runserver on all ports?
I understand this is probably a newbie question, but what prevents me from achieving this?
I would like to access my app without having to include the port, like a regular website
WARNING - Do not run the test server in production!
The reason you have to type in the port when connecting to the test server is that it doesn't run on a standard web port (HTTP is 80, HTTPS is 443). If you use the command below, you won't need to provide a port number when connecting to the test server. Keep in mind that you will need root or sudo access, and if something is already running on port 80 it will fail.
Runserver with port:
python manage.py runserver 80
Just run it on port 80 and you won't have to specify the port.
You can't run it on every port at once, because many other services already use those ports; network services need a specific port to bind to.
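As a minimal sketch of the above (assuming nothing else is already bound to port 80 and you have sudo rights), binding to all interfaces on the standard HTTP port so other machines can reach it without a port number would look like:
sudo python manage.py runserver 0.0.0.0:80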
Using VirtualBox, I have a NAT-enabled VM running CentOS 7. The host OS is Windows 7. I can't seem to access the Django web server running inside the VM. What am I missing?
I have two port forwarding rules set for the Virtual Machine:
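(For illustration, NAT port-forwarding rules like these can also be created on the host with VBoxManage; the VM name, rule names, and host ports below are just placeholders, not my actual settings:)
VBoxManage modifyvm "centos7-vm" --natpf1 "ssh,tcp,,2222,,22"
VBoxManage modifyvm "centos7-vm" --natpf1 "django,tcp,,8000,,8000"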
I start the Django web server on the guest OS with:
python manage.py runserver 0.0.0.0:8000
And I try to visit the webpage on the host OS at:
http://localhost:8000
Google Chrome gives me the error code ERR_CONNECTION_RESET.
The result of curl on the host OS:
[user@win7 ~]$ curl http://localhost:8000
curl: (56) Recv failure: Connection reset by peer
Here is the result of a netstat performed on the guest OS:
[user@vm ~]$ netstat -na | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
Here is the result of a netstat performed on the host OS (with Cygwin):
[user@win7 ~]$ netstat -na | grep 8000
TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING
It is also worth mentioning that the SSH rule works. I can SSH into the machine with no problems.
This is not a solution, but a work-around for my problem. Maybe this will help anyone encountering a similar problem who just wants to be able to connect to their VM's web server.
Since SSH was working, I figured I could access the webpage via an SSH tunnel. The command-line syntax for doing so is:
ssh -L <local-port>:<remote-host>:<remote-port> <user>@<ssh-host>
So in my situation, if I wanted to open a tunnel via command line I would do:
ssh -L 8000:127.0.0.1:8000 <user>@<ssh-host>
This would allow me to browse to http://localhost:8000 and access the website.
You can also do this via PuTTY, but I won't explain that here, so just Google for a guide.
The SSH tunnel is an OK work-around, but the problem is almost certainly CentOS 7, which now uses firewalld rather than iptables to manage access. And, unlike iptables, the default configuration is quite restrictive.
If
ps -ae | grep firewall
returns something like
602 ? 00:00:00 firewalld
your system is running firewalld, not iptables. They do not run together.
To correct your VM so you can access your Django site from the host, use the commands:
firewall-cmd --zone=public --add-port=8000/tcp --permanent
firewall-cmd --reload
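To double-check that the rule took effect, listing the open ports in the zone is a quick sanity check (assuming the default public zone, as above):
firewall-cmd --zone=public --list-ports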
Many thanks to pablo v in the post "Access django server on virtual Machine" for pointing this out.
I use CentOS 6.5 and Jetty 9.1.0.v20131115. I use Jetty's JMX capabilities.
I want to have JMX accessible only from within the running computer (localhost, or 127.0.0.0/8), but not from outside (e.g. JMX shall not be accessible from public.example.com).
Therefore, I configured Jetty's JMX RMI host to use jetty.jmxrmihost=localhost instead of a wildcard jetty.jmxrmihost=0.0.0.0.
Yet still, my Jetty server instance is accessible from "outside", allowing anyone to connect to my Jetty server via JMX.
What do I have to configure to make Jetty listen to only those JMX connections which originate from localhost?
Here are my Jetty configuration files that are relevant to this topic:
file ${jetty.base}/start.d/jmx.ini:
--module=jmx
#jetty.jmxrmihost=localhost # I tried this one, but it didn't work either
jetty.jmxrmihost=127.0.0.1
jetty.jmxrmiport=1099
file ${jetty.base}/start.d/jmx-remote.ini:
--module=jmx-remote
Just from the way the question is asked, it seems like it is less of a Jetty/JMX issue and more of a firewall issue - what you want is to block unwanted outside traffic to the JMX port on this server.
If you have permissions and are willing to do so, you will want to remove any rule from /etc/sysconfig/iptables that is opening the JMX port (in this example, 1099). Such a rule will look like the following:
[0:0] -A INPUT -s SOME_IP_SUBNET -p tcp -m tcp --dport 1099 -j ACCEPT
Or, on the flip side, you may want to enable JMX monitoring only for a specific subnet (such as for a company's subnet), at which point, you'd want to add the following:
[0:0] -A INPUT -s MY_IP_SUBNET_HERE -p tcp -m tcp --dport JMX_PORT -j ACCEPT
Replace MY_IP_SUBNET_HERE and JMX_PORT with your own IP subnet and JMX port, respectively.
I haven't written a lot of rules for iptables myself, so please consider the above as an example and not necessarily the exact syntax you need. *nixCraft provides a basic guide to handling iptables/sysctl, which also covers how to modify rules without editing the file (I usually just modify the file).
Two notes, if you go the route of modifying the iptables file:
Be sure to restart iptables (/etc/init.d/iptables restart or service iptables restart)
Call /sbin/sysctl -p after restarting iptables. Restarting iptables wipes out any custom rules from sysctl.conf; calling sysctl -p will restore those rules.
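If the goal from the original question is specifically "JMX reachable from localhost only", a minimal sketch of rules for that (using port 1099 as above; adjust to your actual JMX/RMI ports) would be:
[0:0] -A INPUT -s 127.0.0.0/8 -p tcp -m tcp --dport 1099 -j ACCEPT
[0:0] -A INPUT -p tcp -m tcp --dport 1099 -j DROP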
For some reason my Vagrant/Puppet instance stopped working out of the blue: I am no longer able to reach the VM from my host machine, despite no configuration or network changes.
Interestingly, the private network must be recognized, because the browser attempts to connect; however, the request seems to time out when issued from OS X. Also worth noting, I have not installed any system updates at this time. The VM was working previously on 10.9.
Steps I have tried to resolve the issue:
vagrant destroy && vagrant up
Result: Vagrant loads properly, SSH works, and Apache is running, with the proper result returned from ping 127.0.0.1
vagrant reload
Result: Same as above; VM reloads successfully, no change in network accessibility
sudo killall -HUP mDNSResponder
Result: No change in accessibility via the bound IP (10.0.0.100)
Port forwarding (explicit) vs "private_network" in the Vagrantfile
Result: No change in accessibility via the bound IP (10.0.2.15)
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Result: No change in accessibility via the bound IP, connection still times out
Vagrant File: http://pastebin.com/Hk8drWxF
Puppet File: http://pastebin.com/20Sp1m22
Any thoughts? Thanks!
Could this be an issue with the netmask? You specify two IPs there: 10.0.0.100 and 10.0.2.15. If you're using the default subnet mask (class C, i.e. /24), they end up on different subnets: 10.0.0.100 is in 10.0.0.0/24 while 10.0.2.15 is in 10.0.2.0/24, so the two hosts would be unable to speak directly to each other.