I'm currently preparing a unit test, so I need to emulate a client-server connection. I tried doing it like this:
nc -l 6543 < dummy-result.txt
But netcat does not close the connection after returning the content of the file, so my client waits endlessly for the server to close the connection.
Does anyone know how to get netcat to close the connection after serving the file? It would also be useful to have it serve the file (and close the connection) for multiple requests (i.e., like -k).
Actually, this was a bug in the OpenBSD implementation of netcat: the server-side -N option did not terminate the connection if the client was another netcat. If the client was telnet, it terminated as expected. (I'm now using the nmap netcat, ncat, and it works.)
For handling multiple connections I wrapped it in a simple shell loop: while true; do nc -l 6543 < dummy-result.txt; done
Related
I need to send a simple request to one of my servers at defined intervals, say once every 2 seconds, in order to tell the server my machine's IP address (since I have a dynamic one). I'm currently doing it in a while loop with a delay, using a std::system call to run curl with the --silent option and redirecting the rest to /dev/null. Something like this:
curl -s 'http://example.com' > /dev/null
The server currently parses the request and extracts the required IP address from it. Is there any other way to do this?
Another alternative would be to send a simple UDP datagram. Upon receipt, the server can obtain the sender's address equally well.
That requires a little less overhead than establishing a TCP connection. Of course, UDP offers no delivery guarantee, and an occasional datagram will be lost; but since you're pinging the server regularly, that should be fine. More important, however, is that a UDP sender's IP address is trivially forged. Whether or not that is an issue for your application is something only you can determine.
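A minimal sketch of the UDP variant, assuming the host and port are placeholders you would substitute (the function name and payload are mine, purely illustrative):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Send a one-byte UDP datagram so the server can learn our address from
// recvfrom(). Returns 0 on success, -1 on error.
int send_heartbeat(const char *ip, unsigned short port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return -1;

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &dst.sin_addr) != 1) {
        close(fd);
        return -1;
    }

    // The payload is irrelevant; the server only cares about the source address.
    const char byte = '!';
    ssize_t n = sendto(fd, &byte, 1, 0,
                       reinterpret_cast<const sockaddr *>(&dst), sizeof dst);
    close(fd);
    return n == 1 ? 0 : -1;
}
```

Calling this once every 2 seconds replaces the curl invocation entirely; the server reads the peer address from the recvfrom() source, not from the payload.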
If you're going to stick with TCP, one thing you can do is establish the socket connection yourself. Let's examine what executing system(), just to have curl make a dummy connection, involves:
1. Forking a new child process.
2. The child process executing /bin/bash, passing it the command to parse.
3. Bash reading $HOME/.bashrc and executing any commands in there. If your shell is something other than bash, it will have its own startup script to execute.
4. The shell forking a new child process.
5. The child process executing curl.
6. The loader finding all of the libraries that curl requires and opening them.
7. curl now executes. Only then, after all this work, is there code running that opens a socket and attempts to connect to the remote host.
Steps 1 through 6 can be trivially skipped, simply by creating a socket and connecting yourself. It's not rocket science: socket(), connect(), close(), and that's it. This task does not require HTTP. The server process only needs to socket(), bind(), listen(), and accept(), and then obtain the IP address from the accepted connection. Done.
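The client side of that description can be sketched in a few lines (host/port and the function name are placeholders, not from the question):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Connect and immediately close. No HTTP, no shell, no curl; the server
// learns the client's IP from accept() alone. Returns 0 on success.
int tcp_ping(const char *ip, unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    if (inet_pton(AF_INET, ip, &dst.sin_addr) != 1 ||
        connect(fd, reinterpret_cast<sockaddr *>(&dst), sizeof dst) != 0) {
        close(fd);
        return -1;
    }
    close(fd);  // the connection itself carried the address; nothing to send
    return 0;
}
```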
I want to replicate a condition wherein, if I press CTRL-C after connecting to a server using telnet, the server crashes. I want to write a C++ program that does this. What does telnet send over the network when we press CTRL-C? I read that CTRL-C is 0x03. Does telnet send the same thing, or something else, and how should I send it from a C++ application?
Telnet really does send 0x03. Telnet is just a regular TCP connection with some escape codes that do various things.
But to test a server over telnet, you probably really want to use the "Expect" utility: http://en.wikipedia.org/wiki/Expect
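Since it is just one byte on the wire, sending it from C++ needs nothing telnet-specific. A minimal sketch (the helper name is mine) that writes 0x03 to an already connected TCP socket:

```cpp
#include <sys/socket.h>

// Send the raw byte telnet transmits for CTRL-C (0x03, ASCII ETX) over an
// already connected stream socket. Returns 0 on success, -1 on error.
int send_ctrl_c(int connected_fd) {
    const char etx = 0x03;  // what telnet puts on the wire for CTRL-C
    return send(connected_fd, &etx, 1, 0) == 1 ? 0 : -1;
}
```

Whether the server treats that byte specially is entirely up to the server; TCP itself attaches no meaning to it.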
I have an application which connects to a remote database server.
If the MySQL server stops for some reason and then restarts successfully, my application cannot detect the change in server status quickly. It takes nearly 20 seconds to reconnect to the database server, so my GUI freezes. I do not want the GUI to freeze for 20 seconds.
So far I have tried the
mysql_ping
mysql_real_connect
functions and the
MYSQL_OPT_RECONNECT
MYSQL_OPT_CONNECT_TIMEOUT
options.
My environment is not multi-threaded. So
how can I detect the server status faster?
If you do networking synchronously, be prepared for freezes. For this very reason it makes sense to do data-manipulation in a separate thread.
You could try telnet to the mysql port (usually 3306). If you get a connection refused, mysql isn't listening.
Working.
root@XXXXXX:~# telnet localhost 3306
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
L
5.6.4-m7)#m#_8:W�hP5YBzaXs[MOmysql_native_password
Down.
root@XXXXXX:~# telnet localhost 3306
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
The refused message is almost instant.
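The same check can be done from the application itself without blocking for 20 seconds, which matters here since the environment is single-threaded. A hedged sketch (function name and timeout are mine) using a non-blocking connect() plus select():

```cpp
#include <arpa/inet.h>
#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Probe ip:port with a hard deadline. Returns 0 if the port accepts a TCP
// connection within timeout_ms, -1 otherwise (refused or timed out).
int tcp_probe(const char *ip, unsigned short port, int timeout_ms) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(port);
    inet_pton(AF_INET, ip, &dst.sin_addr);

    int rc = connect(fd, reinterpret_cast<sockaddr *>(&dst), sizeof dst);
    if (rc != 0 && errno == EINPROGRESS) {
        // Connection attempt in flight: wait for writability, then check
        // SO_ERROR to see whether it actually succeeded.
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        timeval tv{timeout_ms / 1000, (timeout_ms % 1000) * 1000};
        rc = -1;
        if (select(fd + 1, nullptr, &wfds, nullptr, &tv) == 1) {
            int err = 0;
            socklen_t len = sizeof err;
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            rc = (err == 0) ? 0 : -1;
        }
    }
    close(fd);
    return rc == 0 ? 0 : -1;
}
```

A connection refused comes back almost instantly, just like the telnet test above, while an unreachable host is bounded by timeout_ms instead of the library's 20-second default.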
As already discussed by others, I won't talk about using multiple threads or processes. Can you connect to your MySQL server over TCP? That way, in most scenarios, you would receive a TCP FIN immediately to indicate a closed connection, though at times even that might not happen. But most robust applications do a proper close.
shell> mysql --protocol=TCP
(See the MySQL manual on how to specify the connection protocol.)
If the server doesn't accept TCP connections, I believe that can be enabled in the config settings.
However, this does not address scenarios such as the server suddenly dropping off the network, or your client's connection going down, etc.
I have an application which talks to a server over HTTP. I have written code to control the connect timeout (the amount of time the client waits for the server's reply). But I am finding it hard to create a test case for my connect-timeout code. Could you please help me?
Basically, the TCP handshake consists of:
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP socket connection is ESTABLISHED.
In my application, if a server does not complete the TCP handshake within x seconds, the application moves on to the next server. But to test this code, I need a server stub which will accept the SYN packet from the client but will not send the SYN-ACK packet back, thus making the client wait for the server's reply.
Could you please help me create a small server stub which will listen on a particular port but will not complete the handshake?
Given you mentioned RHEL I think you're best off using iptables to help test this. For example you could call:
iptables -I INPUT -s hostb -d hosta -p tcp --dport $port --tcp-flags SYN,ACK SYN,ACK -j DROP
calling that before running the test (or even during it, perhaps), and an equivalent matched -D to delete it afterwards, seems to be by far the simplest way of breaking a handshake halfway through.
Drop all SYN+ACK (warning, WILL break new SSH connections):
iptables -I INPUT -p tcp --tcp-flags SYN,ACK SYN,ACK -j DROP
Drop all from or to 10.10.22.34:
iptables -I INPUT -s 10.10.22.34 -j DROP
iptables -I OUTPUT -d 10.10.22.34 -j DROP
Personally I would use the most specific match you can possibly write to avoid accidentally breaking remote access or anything else at the same time.
You could get fancier and use the -m owner match to only apply this rule for packets to/from the UID you run this test as.
I wouldn't rely on iptables, or any other such tool, for unit testing; those tests would be too brittle. What if the IP address changes, or the unit tests are run on another machine? What if the code has to be ported to an OS where iptables is not available?
In order to keep the unit tests isolated from the network, I would encapsulate the socket API in a Socket class. Then I would have a Connection class that uses the Socket class. I would unit test the Connection class with a TimeoutSocket class (derived from Socket) that simulates the server not accepting the first connection request.
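A minimal sketch of that seam (all class and method names here are illustrative, not a real API): Connection depends on an abstract Socket, so the test substitutes a fake that simulates a connect timeout without touching the network.

```cpp
#include <string>

// Abstraction over the socket API so tests can substitute a fake.
struct Socket {
    virtual ~Socket() = default;
    // Returns true if the connection completed within timeout_ms.
    virtual bool connectTo(const std::string &host, int port, int timeout_ms) = 0;
};

class Connection {
    Socket &sock_;
public:
    explicit Connection(Socket &s) : sock_(s) {}
    // The failover policy under test: try each host, report the first success.
    bool connectWithFailover(const std::string &a, const std::string &b,
                             int port, int timeout_ms) {
        return sock_.connectTo(a, port, timeout_ms) ||
               sock_.connectTo(b, port, timeout_ms);
    }
};

// Test double: the first connect attempt "times out", the second succeeds.
struct TimeoutSocket : Socket {
    int calls = 0;
    bool connectTo(const std::string &, int, int) override {
        return ++calls > 1;
    }
};
```

The production build would provide a Socket subclass backed by the real socket()/connect() calls; the unit test never opens a file descriptor.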
Your code should not depend on what's going on on the wire.
I am working on a project where a partner provides a service as a socket server, and I am writing socket clients to communicate with it. The communication is two-way: I send a request to the server and then receive a response from it.
The problem is that I send the data to the server, but apparently the server does not receive it.
On my side I use a very simple implementation, much like the example from http://www.linuxhowtos.org/C_C++/socket.htm
#include <sys/socket.h>
// pseudocode outline of my client
socket_connect();
construct_request_data();
send(sock, request_data, request_length, 0); // flags set to 0
// now the server should receive my request and send a response back
recv(sock, response_data, response_length, 0);
socket_close();
It seems that the server socket is implemented with a "binding" to std::iostream, i.e. it is a buffered stream (the socket send/recv is done through iostream::write/read):
server_socket_io >> receive_data;
server_socket_io << response_data;
By the way, I got a test client from my partner, and it is wrapped in an iostream as well. That test client can communicate with the server without problems, but it must call iostream::flush() after every send.
I want to keep my client simple and not wrap it in an iostream.
I wonder whether the buffered iostream causes the problem: the data the client sends is very small, so perhaps it is never processed because it is still sitting in a buffer.
Or could the problem be on my side? How can I know whether I have really sent the data? Does my client socket buffer it as well?
I have tried the "bad" workaround of setting TCP_NODELAY, but it didn't help.
How can I solve the problem, from the client side or the server side?
Should I close the socket after sending the request and before receiving the response, so that the data is "flushed" and processed?
Or should I wrap my socket in an iostream and flush?
Or should the server socket use an unbuffered stream?
Thanks for any suggestions and advice!
Further to Jay's answer, you can use a network packet sniffer to check whether your packets are reaching the server. Have a look at Wireshark or tcpdump.
Let's use "divide and conquer" to solve the problem.
First, does the server work?
From your code look up the port number that your server is listening on.
Start your server program.
Run the following command line program to see if the server is really listening:
netstat -an -p tcp
It will produce a list of connections. You should see a connection on your selected port when the server is running. Stop the server and run the command again to ensure the port is no longer in use.
Once you've verified the server is listening try to connect to it using the following command:
telnet your-server-address-here your-port-number-here
telnet will print what your server sends to you on the screen and send what you type back to the server.
This should give you some good clues.
I had a similar issue once. My problem was that I never accepted the (TCP) connection on the server in order to create the stream between server and client. After I accepted the connection on the server side, everything worked as designed.
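To illustrate the point (all names here are mine, a minimal sketch rather than the asker's server): after bind()/listen(), the server must call accept() to get a per-client descriptor; reads and writes go through that descriptor, not the listening one.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Create a loopback listener on an ephemeral port; reports the chosen port.
// Error handling trimmed for brevity.
int make_listener(unsigned short *port_out) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // let the kernel pick a port
    bind(lfd, reinterpret_cast<sockaddr *>(&addr), sizeof addr);
    listen(lfd, 1);
    socklen_t len = sizeof addr;
    getsockname(lfd, reinterpret_cast<sockaddr *>(&addr), &len);
    *port_out = ntohs(addr.sin_port);
    return lfd;
}

// One request/response exchange on an accepted connection.
int serve_one(int lfd) {
    int cfd = accept(lfd, nullptr, nullptr);  // without this, no stream exists
    if (cfd < 0) return -1;
    char buf[64];
    ssize_t n = recv(cfd, buf, sizeof buf, 0);
    if (n > 0) send(cfd, buf, static_cast<size_t>(n), 0);  // echo back
    close(cfd);
    return 0;
}
```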
You should check the firewall settings for both systems. They may not be passing along your data.