Can somebody help me troubleshoot setting up an NFS share between two CentOS 7 machines?
https://www.howtoforge.com/nfs-server-and-client-on-centos-7
https://www.unixmen.com/setting-nfs-server-client-centos-7/
I have configured the firewall and the server is working fine; I can mount the shared folder from a different (third) CentOS 7 machine.
However, on this other client machine, let's call it 111.111.111.111, I cannot mount:
`mount -t nfs 255.255.255.255:/var/nfsshare /some/existing/folder`
(I get `mount.nfs: Connection timed out`.)
When I run tcpdump alongside, I get:
[root@111.111.111.111 ~]# tcpdump -i eth0 -n host 255.255.255.255
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:45:35.795666 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467213240 ecr 0,nop,wscale 7], length 0
13:45:36.797428 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467214242 ecr 0,nop,wscale 7], length 0
...
The client CAN ping the server.
rpcinfo -p 255.255.255.255
gives:
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection timed out
However, I can telnet from the client to the server on both port 111 and port 2049.
From what I've read this should be a firewall issue, but apparently it is not, as it doesn't work even if I disable the firewall on the server (or even on the client).
How should I troubleshoot this next?
Here's the best workbook I've found for troubleshooting NFS connections:
https://docs.oracle.com/cd/E23824_01/html/821-1454/rfsadmin-215.html
Follow those instructions slowly and carefully and they should turn up the problem. That doc is a good example of step-by-step troubleshooting where you check all the connectivity prerequisites before checking the actual service you're trying to test.
Here's some additional info that may help:
Your network sniff output tells a simple story: the server isn't responding to you on the NFS TCP port. I hope the server's IP isn't really 255.255.255.255, since that's a broadcast address and is unlikely to work reliably.
You may have dropped all the firewalls, but the NFS server has its own permissions control, in the /etc/exports file according to the HowToForge link that you were following. You need to specify ALL the clients, not just a single IP address. You can also use a network range that includes all the clients. "man 5 exports" should tell you more about how to edit this file. Please DON'T put in "*" to match all IP addresses as suggested in the HowToForge link; that is generally a bad idea.
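For example - a sketch only, assuming your clients sit on a hypothetical 192.168.1.0/24 network - a scoped /etc/exports entry could look like this:
/var/nfsshare 192.168.1.0/24(rw,sync,no_root_squash)
After editing, run "exportfs -r" on the server so the changed exports take effect.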
The portmapper might be using the TCP wrappers permissions files - /etc/hosts.deny and /etc/hosts.allow - see "man 5 hosts_access" for the format of these files.
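For example (assuming the client really is 111.111.111.111, and noting that the daemon names can vary by distribution), a restrictive pair of files might look like:
# /etc/hosts.allow
rpcbind: 111.111.111.111
mountd: 111.111.111.111
# /etc/hosts.deny
rpcbind: ALL
mountd: ALL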
Look in the syslog files for the IP address of the client to see if there are any messages about that client.
Even though you think you turned the firewall off, run "iptables -vL" to see if there are any rules you overlooked and whether they have any hits.
If you have custom MTU settings on any of the machines (for example, on storage-specific LANs people often set up jumbo packets) make sure that there are no mismatches. This is unlikely to happen on a home network.
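A quick way to check for an MTU mismatch is a ping that forbids fragmentation; for a 9000-byte jumbo frame, the payload is 8972 bytes once the 28 bytes of IP/ICMP headers are counted. If this fails while a plain ping works, some hop in between isn't carrying jumbo frames:
ping -M do -s 8972 <server-ip>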
Your sniff shows the client attempting to connect via TCP to the NFS port 2049; it's possible the client is configured for NFSv4 and the server is configured for NFSv3 or lower. You might see this with the rpcinfo command, since it shows the versions of NFS supported by the server.
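To test that theory, you can pin the client to a specific NFS version on the mount command line (vers=3 here is just an example) and compare it against what rpcinfo reports:
rpcinfo -p <server-ip>
mount -t nfs -o vers=3 <server-ip>:/var/nfsshare /some/existing/folder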
Related
I want to test the network speed from my home network to my Google Cloud VM, because it's taking 16 seconds to download a 1 MB file:
I tried running iperf3 -s -p 5201 in the VM and allowed ingress on the same port in the Google Cloud firewall page. I also allowed the port in UFW on the Linux VM.
However, when running iperf3 -c from my PC, it doesn't connect. Running iperf3 from my PC to another network worked.
Are there other settings I need to adjust in GC?
I realise that the firewall should not block traffic moving between terminal sessions on the same server, but I have included details of my firewall here as it might be related somehow. The crux of this problem is: what Linux/AWS setting could be stopping me from communicating on a port on the same instance?
I have an Amazon instance (not built by me) running Debian. I am trying to get an email relay running, but that question is in another post. For starters, I just want to make sure that a port is open. The way I do this on other servers is: I make sure the firewall is not blocking the port, and then get netcat to listen on that port. So, for my instance, I went to AWS security management and opened port 2525, both UDP and TCP. Nothing is blocked outbound. I also checked the local firewall:
root@lamp # iptables-save
# Generated by iptables-save v1.4.14 on Sun Feb 28 10:36:57 2016
*nat
:PREROUTING ACCEPT [727933:41936189]
:INPUT ACCEPT [727933:41936189]
:OUTPUT ACCEPT [4341889:262878645]
:POSTROUTING ACCEPT [4341889:262878645]
COMMIT
# Completed on Sun Feb 28 10:36:57 2016
Then I ran netcat to listen on port 2525
root@lamp # nc -l 2525
Then I logged on via a different terminal session to the same server:
root@lamp /home/www# nc localhost 2525
localhost [127.0.0.1] 2525 (?) : Connection refused
root@lamp /home/www# netstat -anp | grep 2525
root@lamp /home/www# telnet localhost 2525
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
When I try this on my Ubuntu laptop or on my Rackspace instance, the nc command gets me a kind of chat session, which I terminate with Ctrl-D.
I am not too familiar with the way Amazon does things, so I guess I am missing some AWS web interface setting, but what confuses me is that I would expect all traffic to be free to travel between different sessions on the same localhost. Any light that could be shed on allowing traffic on this port would be appreciated.
AWS Security Groups aren't involved here, since you're opening the port and connecting to the server locally. They're only relevant when traffic flows to or from other servers.
I suspect your issue is a Linux configuration issue, but of what flavour I do not know.
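One detail in your transcript points the same way: "netstat -anp | grep 2525" printed nothing, which means nc never actually bound the port. Some netcat variants (Debian's netcat-traditional, for example) need the listen port given with -p, so it may be worth trying:
nc -l -p 2525
and then, in the second session, confirming the listener exists before connecting:
netstat -anp | grep 2525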
I have a virtual machine (CentOS 7) in VirtualBox on a RHEL 5 host running MonetDB5 (server v11.19.9). I can connect to the database from within the guest through both JDBC and mclient. However, I cannot connect to it from the host.
I have port 50000 forwarded to port 5555, and have set shared=yes in monetdb and control=yes on monetdbd. When I try to connect using
java -jar jdbcclient.jar -dmydatabase -umonetdb -hlocalhost -p5555 -Xdebug
I type in my password and it waits a long time, then says:
Database connect failed: Unable to connect (localhost:5555): Connection to server lost!
The javaclient log file is unhelpful:
RD 1438806937222: server closed the connection (EOF)
The merovingian.log file is also unhelpful since nothing is added to it. Note that the mserver command in the log says --set-mapi_open=false, even though I have set sharing and control = yes. But I can't find the MonetDB.conf file so I don't know where to change "mapi_open=true." I tried making my own MonetDB.conf file and putting it in /etc/ but it doesn't seem to work there.
Note that I can connect to the machine with
ssh me@localhost -p222 -X
where I have forwarded port 22 to 222, so I feel good about the port forwarding. Any guidance would be greatly appreciated. Thanks!
The problem was with the firewall. Even though port 22 (ssh) was automatically opened on the guest machine, port 50000 needed to be configured manually.
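For reference, on a CentOS 7 guest running firewalld, opening that port manually looks something like this:
firewall-cmd --permanent --add-port=50000/tcp
firewall-cmd --reload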
I use CentOS 6.5 and Jetty 9.1.0.v20131115. I use Jetty's JMX capabilities.
I want to have JMX accessible only from within the running computer (localhost, or 127.0.0.0/8), but not from outside (e.g. JMX shall not be accessible from public.example.com).
Therefore, I configured Jetty's JMX RMI host to use jetty.jmxrmihost=localhost instead of a wildcard jetty.jmxrmihost=0.0.0.0.
Yet still, my Jetty server instance is accessible from "outside", allowing anyone to connect to my Jetty server via JMX.
What do I have to configure to make Jetty listen to only those JMX connections which originate from localhost?
Here are my Jetty configuration files that are relevant to this topic:
file ${jetty.base}/start.d/jmx.ini:
--module=jmx
#jetty.jmxrmihost=localhost # I tried this one, but it didn't work either
jetty.jmxrmihost=127.0.0.1
jetty.jmxrmiport=1099
file ${jetty.base}/start.d/jmx-remote.ini:
--module=jmx-remote
Just from the way the question is asked, it seems like it is less of a Jetty/JMX issue and more of a firewall issue - what you want is to block unwanted outside traffic to the JMX port on this server.
If you have permissions and are willing to do so, you will want to remove any rule from /etc/sysconfig/iptables that opens the JMX port (in this example, 1099). Such a rule will look like the following:
[0:0] -A INPUT -s SOME_IP_SUBNET -p tcp -m tcp --dport 1099 -j ACCEPT
Or, on the flip side, you may want to enable JMX monitoring only for a specific subnet (such as for a company's subnet), at which point, you'd want to add the following:
[0:0] -A INPUT -s MY_IP_SUBNET_HERE -p tcp -m tcp --dport JMX_PORT -j ACCEPT
Replace MY_IP_SUBNET_HERE and JMX_PORT with your own IP subnet and JMX port, respectively.
I haven't written a lot of rules for iptables myself, so please consider the above as an example and not necessarily the exact syntax you need. *nixCraft provides a basic guide to handling iptables/sysctl, which also covers how to modify rules without editing the file (I usually just modify the file).
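As a rough sketch of the no-editing route (same placeholders as above, so treat it as an example rather than exact syntax), you can insert the rule live and then persist it:
iptables -I INPUT -s MY_IP_SUBNET_HERE -p tcp --dport 1099 -j ACCEPT
service iptables save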
Two notes, if you go the route of modifying the iptables file:
Be sure to restart iptables (/etc/init.d/iptables restart or service iptables restart)
Call /sbin/sysctl -p after restarting iptables: restarting iptables wipes out any custom settings from sysctl.conf, and calling sysctl -p restores them.
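Independent of the firewall rules, it's also worth double-checking which address the JMX/RMI port actually bound to on the server; if this shows 0.0.0.0:1099 rather than 127.0.0.1:1099, the jetty.jmxrmihost setting isn't taking effect:
netstat -tlnp | grep 1099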
I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local machine, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step-by-step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeter config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server, making sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then, on our local client machine, we set up JMeter to run tests remotely on these instances:
Ensured we used the same version of JMeter as was running on the servers, and installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file in the bin folder of the JMeter installation. The remote_hosts parameter needed to be set to the public addresses of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the RMI_HOST_DEF variable that the /usr/local/jmeter/bin/jmeter-server script passes when starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
When the server node attempted to return the results and connect back to the client, it tried to connect to the external IP address of my local machine and threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH and set up a reverse tunnel to port 60000 on the client:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little while. To prevent that from happening, we added a parameter to each of the SSH tunnel commands directing the client to send a keepalive (null) packet to the server after every 60 seconds of inactivity:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)
I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need:
All nodes to advertise their public IP - on AWS/OpenStack this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 - I use the default
Ingress rules for the RMI "local" port, which defaults to a dynamic port. Below I use 4001 for the client and 4000 for the servers. The port can be the same, but note the properties are different.
If you are using your workstation as the client, you probably still need tunnels. Archana Aggarwal's answer above has good tips for tunnels.
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity, I've set java.rmi.server.hostname but left server.rmi.localport as dynamic:
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
When you go for distributed testing using JMeter in AWS, I would suggest you use Docker, which helps you set up the JMeter test infrastructure very quickly. It also ensures that the same versions of Java and JMeter are installed on all the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server (they do not have to be exactly 1099 and 50000):
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances get their own IP addresses on the Docker network, so the master and slaves cannot communicate unless we explicitly set this property.
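As a sketch (the image name here is hypothetical, not an official image), starting a JMeter server container with the ports published and the properties above set might look like:
docker run -d --name jmeter-server -p 1099:1099 -p 50000:50000 my-jmeter-image jmeter-server -Djava.rmi.server.hostname=SERVER_IP -Dserver.rmi.localport=50000
The -p flags publish the RMI ports on the instance, and the -D flags mirror the properties listed above.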
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/