Jetty JMX open port 1099

I use CentOS 6.5 and Jetty 9.1.0.v20131115. I use Jetty's JMX capabilities.
I want to have JMX accessible only from within the running computer (localhost, or 127.0.0.0/8), but not from outside (e.g. JMX shall not be accessible from public.example.com).
Therefore, I configured Jetty's JMX RMI host to use jetty.jmxrmihost=localhost instead of a wildcard jetty.jmxrmihost=0.0.0.0.
Yet my Jetty server instance is still accessible from "outside", allowing anyone to connect to it via JMX.
What do I have to configure to make Jetty listen to only those JMX connections which originate from localhost?
Here are my Jetty configuration files that are relevant to this topic:
file ${jetty.base}/start.d/jmx.ini:
--module=jmx
#jetty.jmxrmihost=localhost # I tried this one, but it didn't work either
jetty.jmxrmihost=127.0.0.1
jetty.jmxrmiport=1099
file ${jetty.base}/start.d/jmx-remote.ini:
--module=jmx-remote

Just from the way the question is asked, it seems like it is less of a Jetty/JMX issue and more of a firewall issue - what you want is to block unwanted outside traffic to the JMX port on this server.
If you have permissions and are willing to do so, you will want to remove any rule from /etc/sysconfig/iptables that is opening the JMX port (in this example, 1099). Such a rule will look like the following:
[0:0] -A INPUT -s SOME_IP_SUBNET -p tcp -m tcp --dport 1099 -j ACCEPT
Or, on the flip side, you may want to enable JMX monitoring only for a specific subnet (such as for a company's subnet), at which point, you'd want to add the following:
[0:0] -A INPUT -s MY_IP_SUBNET_HERE -p tcp -m tcp --dport JMX_PORT -j ACCEPT
Replace MY_IP_SUBNET_HERE and JMX_PORT with your own IP subnet and JMX port, respectively.
I haven't written a lot of rules for iptables myself, so please consider the above as an example and not necessarily the exact syntax you need. *nixCraft provides a basic guide to handling iptables/sysctl, which also covers how to modify rules without editing the file (I usually just modify the file).
Two notes, if you go the route of modifying the iptables file:
Be sure to restart iptables (/etc/init.d/iptables restart or service iptables restart)
Call /sbin/sysctl -p after restarting iptables. Restarting iptables wipes out any custom settings from sysctl.conf; calling sysctl -p restores them.
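A minimal sketch of that sequence on CentOS 6, assuming the stock iptables init script:
service iptables restart    # reloads the rules from /etc/sysconfig/iptables
/sbin/sysctl -p             # re-applies the settings from /etc/sysctl.conf that the restart dropped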

Related

Troubleshooting mount.nfs: Connection timed out for CentOS 7 machines

Can somebody help me troubleshoot setting up an NFS share between two CentOS 7 machines?
I followed these tutorials:
https://www.howtoforge.com/nfs-server-and-client-on-centos-7
https://www.unixmen.com/setting-nfs-server-client-centos-7/
I have configured the firewall and the server is working fine; I can mount the shared folder from a different (third) CentOS 7 machine.
However, on this other client machine, let's call it 111.111.111.111, I cannot mount:
`mount -t nfs 255.255.255.255:/var/nfsshare /some/existing/folder`
(I get mount.nfs: Connection timed out)
When I run tcpdump alongside, I get:
[root@111.111.111.111 ~]# tcpdump -i eth0 -n host 255.255.255.255
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:45:35.795666 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467213240 ecr 0,nop,wscale 7], length 0
13:45:36.797428 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467214242 ecr 0,nop,wscale 7], length 0
...
The client CAN ping the server.
rpcinfo -p 161.53.19.149
gives:
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection timed out
However, I can telnet from the client to both ports 111 and 2049.
From what I've read this should be a firewall issue, but apparently it is not, as it doesn't work even if I disable the firewall on the server (or even at the client).
How should I troubleshoot this next?
Here's the best workbook I've found for troubleshooting NFS connections:
https://docs.oracle.com/cd/E23824_01/html/821-1454/rfsadmin-215.html
Follow those instructions slowly and carefully and they should turn up the problem. That doc is a good example of a step-by-step troubleshooting guide where you check all the connectivity prerequisites before checking the actual service you're trying to test.
Here's some additional info that may help:
Your network sniff output is simple: the server isn't responding to you on the NFS TCP port. I hope the server's IP isn't really 255.255.255.255, since that's a broadcast address and is unlikely to work reliably.
You may have dropped all the firewalls, but the NFS server has its own permissions control, in the /etc/exports file according to the HowToForge link that you were following. You need to specify ALL the clients, not just a single IP address. You can also use a network range that includes all the clients. "man 5 exports" should tell you more about how to edit this file. Please DON'T put in "*" to match all IP addresses as suggested in the HowToForge link, that is generally a bad idea.
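As a hedged illustration (the path comes from your mount command; the subnet is a placeholder), an /etc/exports entry admitting a whole client subnet could look like:
/var/nfsshare 192.168.1.0/24(rw,sync)
After editing /etc/exports, run exportfs -ra on the server to re-export the shares.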
The portmapper might be using the TCP wrappers permissions files, /etc/hosts.deny and /etc/hosts.allow; see "man 5 hosts_access" for the format of these files.
Look in the syslog files for the IP address of the client to see if there are any messages about that client.
Even though you think you turned the firewall off, run "iptables -vL" to see if there are any rules you overlooked and whether they have any hits.
If you have custom MTU settings on any of the machines (for example, on storage-specific LANs people often set up jumbo packets) make sure that there are no mismatches. This is unlikely to happen on a home network.
Your sniff shows the client is attempting to connect via TCP to the nfs port 2049; it's possible the client is configured for NFSv4 and the server only for NFSv3 or lower. You could check this with the rpcinfo command, since it shows the versions of NFS supported by the server.
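If a version mismatch is indeed the culprit, a quick hedged test is to force NFSv3 from the client (substitute your real server IP and paths):
mount -t nfs -o vers=3 255.255.255.255:/var/nfsshare /some/existing/folder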

docker: networking without linking

I have the following setup running on one host:
1 container with nginx: this one serves as a reverse proxy for some webservices
x containers offering webservices, each exposing a port to the host
x "old school"/non-dockerized webservices
When configuring nginx to proxy to "localhost:$EXPOSED_OR_NATIVE_PORT", this does not work because nginx can't connect to this port.
How do I have to configure the dockerized nginx so it can serve as a proxy for both containers and standard services?
Linking nginx with the Docker webservices might be one solution, although I don't like the idea of having all containers linked to the nginx container. And it does not solve the problem that this nginx should also serve as a reverse proxy for standard services on this host.
Any idea/recommendation?
Thanks
If you want nginx inside a container to proxy for services on the host, you might just run that container with --net=host, so it is not placed inside a network namespace and accesses the host's network interfaces directly.
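A minimal sketch of that approach (the image name and config mount are placeholders, not taken from your setup):
docker run -d --net=host --name nginx-proxy \
    -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
    nginx
With --net=host there is no port mapping at all: nginx listens directly on the host's interfaces, so proxying to localhost:$EXPOSED_OR_NATIVE_PORT resolves the way you expect.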
Answering myself after trying a lot of stuff. I hope this helps someone.
I had the following process:
As @Ben mentioned, using the bridge IP helped and everything was fine.
But then I realized that this setup does not work with UFW on Ubuntu: every exposed port of every running Docker container was reachable from the internet.
The reason is that Docker fiddles with iptables itself, and this conflicts with the UFW-generated iptables rules. Quite dangerous in my eyes. To fix that problem, I started the Docker daemon with DOCKER_OPTS="--iptables=false". That solved the problem of the world-reachable exposed Docker ports, but then I could no longer reach the other Docker containers from the nginx container. This is where @Bryan helped out: a container started with --net=host has access to localhost and all exposed ports.
One last step was necessary: this iptables rule was needed to give the Docker containers outbound internet access: iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
Best regards,
Dakky
If your nginx is dockerized and you want to reach another container or the host, you should use the host's IP as seen from the container, NOT localhost. The default bridge address is 172.17.42.1, as described at https://docs.docker.com/articles/networking/
So you should proxy to:
proxy_pass http://172.17.42.1:$EXPOSED_OR_NATIVE_PORT;
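A hedged sketch of the surrounding server block (the listen port, location, and upstream port 8080 are placeholders for your own values):
server {
    listen 80;
    location / {
        # 172.17.42.1 is the docker0 bridge address; services on the host are reachable through it
        proxy_pass http://172.17.42.1:8080;
        proxy_set_header Host $host;
    }
}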

Vagrant/Puppet Connection Timeout (Obvious Fixes Attempted, Working Previously)

For some reason my Vagrant/Puppet instance stopped working out of the blue: I am no longer able to reach the VM from my host machine, despite no configuration or network changes.
Interestingly, the private network must be recognized, as the browser attempts to connect; however, the request seems to time out when issued from OS X. Also worth noting: I have not installed any system updates at this time. The VM was working previously on 10.9.
Steps I have tried to resolve the issue:
vagrant destroy && vagrant up
Result: Vagrant loads properly, SSH works, and Apache is running with the proper result returned from ping 127.0.0.1
vagrant reload
Result: Same as above; VM reloads successfully, no change in network accessibility
sudo killall -HUP mDNSResponder
Result: No change in accessibility via the bound IP (10.0.0.100)
Port forwarding (explicit) vs. "private_network" in the Vagrantfile
Result: No change in accessibility via the bound IP (10.0.2.15)
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Result: No change in accessibility via the bound IP, connection still times out
Vagrant File: http://pastebin.com/Hk8drWxF
Puppet File: http://pastebin.com/20Sp1m22
Any thoughts? Thanks!
Could this be an issue with the netmask? You specify two IPs there: 10.0.0.100 and 10.0.2.15. If you're using the default (class C) subnet mask, they end up on different subnets and are unable to talk to each other directly.
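If the VM is meant to be reachable on a single host-only address, a hedged Vagrantfile line (using the 10.0.0.100 address from the question) would be:
config.vm.network "private_network", ip: "10.0.0.100"
10.0.2.15 is typically the NAT adapter's address inside the VM, so it is not normally an address you would browse to from the host.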

I only can connect to my nginx server from local computer

First of all, please excuse my English.
I'm not a native English speaker, but I'll try to explain as well as possible.
I really have no idea about this situation.
I thought it was an iptables problem, but it seems not.
I'm using a hosted server (CentOS).
I installed Nginx + Django, and nginx uses port 8080.
A domain is connected to the server.
When I executed "wget [domain]:8080/[app name]/" on the server,
it worked.
Of course, "wget 127.0.0.1:8080/[app name]/" has no problem.
("wget [server ip]:8080/[app name]/" works as well.)
However, connecting from other computers fails.
I checked my firewall setting.
I executed these commands.
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
/etc/init.d/iptables restart
I don't really understand all the options, and some of these commands may be redundant, but I tried every iptables setting I could find by googling.
But still I cannot connect to my server.
What should I check, first?
I don't know if this is important, but I am adding it to this post:
An Apache server is running on port 80.
It works fine; I can connect to Apache from other computers.
There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug.
Thank you for reading this question.
I tried stopping my firewall, and it worked.
So the problem was in my iptables settings.
I had allowed port 8080, but I think there was a mistake in the settings. I regret that I didn't read and study the settings carefully.
I flushed all the settings and restarted the server. Everything looks fine now.
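For reference, a minimal hedged sequence to allow port 8080 and persist the rule on CentOS (assuming the stock iptables service):
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT   # -I inserts at the top, ahead of any REJECT rule
service iptables save                             # writes the running rules to /etc/sysconfig/iptables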

Setting up JMeter for Distributed testing in AWS with connectivity issues

I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local server, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step-by-step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeter config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server. Make sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then, on our local client machine, we set up JMeter to run tests remotely on these instances:
Ensured we used the same version of JMeter as was running on the servers, and installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The parameter remote_hosts needed to be set with the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the RMI_HOST_DEF parameter that the /usr/local/jmeter/bin/jmeter-server script uses when starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
When a server node attempted to return its results, it tried to connect to the external IP address of my local machine, which threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH and set up a reverse tunnel to port 60000 on the client.
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little while. To prevent that, we added a parameter to each SSH tunnel command directing the client to send a keepalive (null) packet to the server every 60 seconds of inactivity:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)
I just went through this on OpenStack and found the same issues; no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP; on AWS/OpenStack this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 (I use the default)
Ingress rules for the RMI "local" port, which defaults to a dynamic port. Below I use 4001 for the client and 4000 for the servers. The port can be the same, but note the properties are different.
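A hedged example of adding those ingress rules with the AWS CLI (the security group ID and source CIDR are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1099 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4000-4001 --cidr 203.0.113.0/24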
If you are using your workstation as the client, you probably still need tunnels; Archana Aggarwal's answer above has good tips for them.
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity I've set java.rmi.server.hostname but left server.rmi.localport as dynamic:
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
For distributed testing using JMeter in AWS, I would suggest using Docker, which helps you spin up the JMeter test infrastructure very quickly. This way you can also ensure that the same versions of Java and JMeter are installed in all the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server (they do not have to be exactly 1099 and 50000):
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances have their own IP addresses on the Docker network, so the master and slaves cannot communicate otherwise. That is why we explicitly set this property.
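A hedged sketch of launching a slave container with those settings (the image name is a placeholder; the ports match the properties above, and 1099 is already JMeter's default server_port):
docker run -d --name jmeter-slave -p 1099:1099 -p 50000:50000 \
    my-jmeter-image \
    jmeter-server -Djava.rmi.server.hostname=SERVER_IP -Dserver.rmi.localport=50000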
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/