flash.net.Socket and C++ winsock WSAECONNRESET

I'm working on a flash application that needs to communicate with my C++ server for things like account validation and state updates. I have a non-blocking TCP socket on the server listening on a specific port.
The process goes like this:
Socket listens on server machine
Flash connects using a flash.net.Socket
Server accepts socket connection
Flash sends a policy file request
Server sends policy file data
Flash accepts connection
Two problems occur from here on out. When I send bytes from Flash, the server doesn't recognize them at all, but it doesn't block either; recv() just returns 0 bytes. When I send bytes from the server after sending the policy file, it gives me a WSAECONNRESET error.
Resources for Flash communicating with C or C++ are very limited, so any help is greatly appreciated.

When the Flash client sends "<policy-file-request/>", the server should send the policy file and then close the connection.
The client will need to reconnect after it receives the policy.
Trust me on this.
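For illustration, here is a minimal sketch of that handshake on the server side. This is an assumption-laden example, not the asker's code: it uses plain blocking winsock, assumes WSAStartup has already run and that client is the accepted SOCKET, and assumes the whole request arrives in one recv(). One detail that is easy to miss: Flash terminates the request with a null byte and expects the policy XML to be null-terminated as well.

// Sketch: answer a Flash socket-policy request, then close the connection.
// Assumes winsock is initialized and `client` is an accepted SOCKET.
#include <winsock2.h>
#include <cstring>

void servePolicyFile(SOCKET client) {
    char buf[128] = {0};
    // Assumes the 23-byte request ("<policy-file-request/>" plus '\0')
    // arrives in a single recv().
    int n = recv(client, buf, sizeof(buf) - 1, 0);
    if (n > 0 && std::strcmp(buf, "<policy-file-request/>") == 0) {
        const char policy[] =
            "<?xml version=\"1.0\"?>"
            "<cross-domain-policy>"
            "<allow-access-from domain=\"*\" to-ports=\"*\"/>"
            "</cross-domain-policy>";
        // sizeof(policy) includes the trailing '\0' Flash expects.
        send(client, policy, sizeof(policy), 0);
    }
    closesocket(client);  // Flash then reconnects on a fresh socket
}

After the close, the Flash side reconnects and the real application protocol can begin on the new connection.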

Related

C++ UDP Socket not working to send back from server to client after receiving first packets from client

Writing a UDP client-server app in C++ (done that lots of times before in many languages in the past 15 years), but somehow this one is not working correctly.
I cannot post actual code nor minimal reproducible app at the moment but I am willing to pay for live help if anyone is available to help solve this quickly with screensharing.
I think this is a particularity with C++ sockets and the way I am using them in this specific app which is quite complex.
Basically the issue is that the packets sent from the server to the client are not received by the client, but only when said client is behind a separate NAT.
When both are on the same local network and using their local IPs, everything works as expected.
Here is what I am doing :
Client sendto()s packets over UDP to the server at a specific host and port 12345 (and keeps sending these non-stop)
On another thread, the client bind()s to port 12345 on "0.0.0.0" and tries to poll() and recvfrom() in a loop (poll() always returns 0 here when the client is behind a separate NAT)
Server bind()s to port 12345 on "0.0.0.0", then poll()s and recvfrom()s in a loop
Upon receiving the first UDP message from a client, the server starts a thread that sends UDP messages back to that client on a new socket, using the sockaddr_in obtained from recvfrom() in its sendto() calls
Result: the server perfectly receives ALL messages from all clients and sends all messages back to all clients, but any client that is not on the same NAT never receives any of them (poll() always returns 0).
As far as I understand it, when the client sends a UDP message to the server on a specific remote port (12345 in this case), it punches a hole in its NAT so that it can receive replies from the remote server on that port...
I tested five different client network configurations:
Local network with the server, using local IP addresses (WORKS)
Local network with the server while client is using a VPN thus going through a remote NAT (DOES NOT WORK)
Local network with the server but client is using the WAN ip address to connect to the server (DOES NOT WORK)
Client at an actual remote network from a friend's connection, behind a router (DOES NOT WORK)
Client going through a wifi hotspot created using my phone (DOES NOT WORK)
For all tests above, the server was correctly receiving all communications from clients.
I also tried forcing the port to 12345 for the sendto() instead of using the sockaddr_in as set by recvfrom(); same issue.
Am I doing anything wrong?
If you want to help but need to see actual code, I can do that live with screen sharing and I will pay for the help.
Thanks.
Also, if anyone can point me to a great site where I can pay for VERY QUICK help, please let me know. I don't even bother searching Google because I really want advice from people who have actually tried these services, not ads trying to rip me off...
Only the original receiver socket is allowed to reply to the client, because it's the client's request that opens the port in the NAT. So either use the same socket in the server to receive and reply, or get the port that the second server socket was bound to and transfer it to the client in an initial message through the original server port, so the client can send to it and punch the hole.
It looks so strange to create two half-duplex sockets when a socket is a full-duplex communication object that I'd go with the first option.
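A minimal sketch of that first option (POSIX flavor for brevity, error handling trimmed): receive and reply through the same file descriptor, so the reply's source port matches the one the client punched through its NAT.

// Sketch: receive and reply on the SAME UDP socket so replies traverse
// the NAT mapping created by the client's outgoing packet.
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;  // "0.0.0.0"
    addr.sin_port = htons(12345);
    bind(fd, (sockaddr*)&addr, sizeof(addr));

    char buf[1500];
    for (;;) {
        sockaddr_in peer{};
        socklen_t peerLen = sizeof(peer);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (sockaddr*)&peer, &peerLen);
        if (n < 0) break;
        // Reply through the same fd: the source port the client sees is the
        // one it originally sent to, so its NAT lets the packet back in.
        sendto(fd, buf, n, 0, (sockaddr*)&peer, peerLen);
    }
    close(fd);
}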

winsock udp connect missing or dropped packets

I am in the process of adding client/server UDP support to thekogans stream library and have run into a problem on Windows. Here is what I am doing:
server udp socket is bound to 0.0.0.0:8854.
server udp socket has IP_PKTINFO = true.
server udp socket has SO_REUSEADDR = true.
server udp socket starts an overlapped WSARecvMsg operation.
client binds to 0.0.0.0:0 and connects to 127.0.0.1:8854.
client sends a message using WSASend.
server socket receives the message and creates a new UDP socket with the following attributes (see the sketch after this list):
SO_REUSEADDR = true
bind to address returned by IP_PKTINFO (127.0.0.1:8854).
connect to whatever address was returned by WSARecvMsg.
client and the new server UDP socket exchange a bunch of messages (using WSASend and WSARecv).
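For reference, a rough sketch of the per-client socket setup described in the steps above, written with plain (non-overlapped) winsock calls for brevity; makeClientSocket is an illustrative name, peerAddr is whatever WSARecvMsg reported, and localAddr is the address recovered via IP_PKTINFO:

// Sketch: create a per-client "connected" UDP socket on the server.
// peerAddr  = client address reported by WSARecvMsg
// localAddr = local address from IP_PKTINFO (e.g. 127.0.0.1:8854)
#include <winsock2.h>

SOCKET makeClientSocket(const sockaddr_in& localAddr,
                        const sockaddr_in& peerAddr) {
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    BOOL reuse = TRUE;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
               (const char*)&reuse, sizeof(reuse));
    // Bind to the concrete local address the datagram arrived on...
    bind(s, (const sockaddr*)&localAddr, sizeof(localAddr));
    // ...and connect() to the client so only its datagrams land here.
    connect(s, (const sockaddr*)&peerAddr, sizeof(peerAddr));
    return s;
}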
Here is the behavior I am seeing:
the first connection between client and server works flawlessly.
I then have the client exit and restart.
all packets from the restarted client are dropped.
if I set a timeout on the new server UDP socket (127.0.0.1:8854), and it times out and is closed, then the client can connect again. In other words, the scheme seems to work, but only for one client at a time: as long as the server has a concrete (non-wildcard) socket bound to the same port, no other client can send it messages.
Some more information that may be helpful: the server is async and uses IOCP. This code (using epoll and kqueue) works perfectly on Linux and OS X. I feel like I am missing some flag somewhere that winsock needs set, but I can't seem to find it. I have tried googling various search terms but have hit a wall.
Any and all help would be greatly appreciated. Thank you.

UDP Concurrent client recvfrom error

I am doing concurrent socket programming in C/C++. I made the server receive requests from clients and send response packets back. I use one thread to receive requests from clients; when the server gets a new request, a new thread is created to send packets to that client. However, recvfrom() on my client side always returns the winsock error 10054 while my server is sending packets out to that particular client.
This error message means the UDP port is closed and a packet arrived for that closed port. For example, in a VoIP phone the client sends origport=12295, meaning "please send the packets on this port", and will close the working port 32000:
08:43:32.377 cip=172.x.23.225 sip=10.x.20.2 cport=32000 sport=32128 origport=12295
But if the server doesn't understand this and the client still receives packets on 32000 from the server, the client will show this error message.
According to this forum thread, it is a harmless error and you can just ignore it in the client.
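On Windows, error 10054 (WSAECONNRESET) on a UDP socket means an earlier send bounced with ICMP Port Unreachable. If you just want to suppress it in the client, the per-socket SIO_UDP_CONNRESET ioctl turns the reporting off; a minimal sketch:

// Sketch: stop a Windows UDP socket from surfacing WSAECONNRESET (10054)
// when a previously sent datagram bounced with ICMP Port Unreachable.
#include <winsock2.h>
#include <mstcpip.h>   // defines SIO_UDP_CONNRESET on recent SDKs

#ifndef SIO_UDP_CONNRESET
#define SIO_UDP_CONNRESET _WSAIOW(IOC_VENDOR, 12)
#endif

void disableUdpConnReset(SOCKET s) {
    BOOL newBehavior = FALSE;      // FALSE = do not report the resets
    DWORD bytesReturned = 0;
    WSAIoctl(s, SIO_UDP_CONNRESET, &newBehavior, sizeof(newBehavior),
             nullptr, 0, &bytesReturned, nullptr, nullptr);
}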

Send Close File Signal to FTP Server

I am implementing an FTP client in C++ using Windows Sockets. I have successfully connected to the server on port 21 and transmitted the file in PASV mode using the "STOR Sample.txt" command on the data port. The problem is that I am unable to tell the server that the transfer is complete (I want to send the signal to close the data connection) so that I can receive the "226 Transfer OK" from the server on my control connection.
Further, I am not receiving anything from the server via recv(). I think that is because the server is still listening on the data connection.
Have a look at this:
proper use of STOR command
You get back the port to connect to when you send PASV. Then you tell the server where to store the data using STOR, connect to the port the PASV command returned, and send the data. When you are done, you close this second socket and continue sending commands over the original one.
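In code terms, the end-of-file signal is simply closing the data socket; the 226 reply then shows up on the control connection. A minimal winsock sketch (dataSock and ctrlSock are illustrative names):

// Sketch: finish an FTP STOR in passive mode.
// Closing the data connection tells the server the file is complete;
// "226 Transfer OK" then arrives on the control connection.
#include <winsock2.h>
#include <cstdio>

void finishStor(SOCKET dataSock, SOCKET ctrlSock) {
    shutdown(dataSock, SD_SEND);   // signal EOF for any buffered data
    closesocket(dataSock);

    char reply[512] = {0};
    int n = recv(ctrlSock, reply, sizeof(reply) - 1, 0);
    if (n > 0)
        std::printf("%s", reply);  // expect something like "226 Transfer OK"
}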

client socket sends data but server socket does not receive them. c++ buffered stream?

I am working on a project where a partner provides a service as socket server. And I write client sockets to communicate with it. The communication is two way: I send a request to server and then receive a response from server.
The problem is that I send the data to the server but apparently the server cannot receive the data.
From my side I just use very simple implementation just like the example from http://www.linuxhowtos.org/C_C++/socket.htm
#include <sys/socket.h>

int sock = socket_connect();        // pseudocode: create the socket and connect()
construct_request_data();           // pseudocode: fill request_data / request_length
send(sock, request_data, request_length, 0);   // flags = 0
// now the server should receive my request and send a response back
recv(sock, response_data, response_length, 0);
socket_close(sock);                 // pseudocode: close the socket
And it seems that the server socket is implemented with a "binding" to std::iostream, i.e. it is a buffered stream (the socket send/recv is done inside iostream::write/read):
server_socket_io >> receive_data;
server_socket_io << response_data;
Btw, I got a test client from my partner and it is wrapped in an iostream as well. The test client can communicate with the server without problems, but it must call iostream::flush() after every socket send.
But I want to just keep it simple not to wrap my socket client in iostream.
I just wonder whether the buffered iostream causes the problem: the amount of data my client sends is very small, so maybe the server never processes it because it is still sitting in a buffer.
Or could the problem be on my side? How can I know whether I really sent out the data? Does my client socket also buffer the data?
I have tried some "bad" workaround with TCP_NODELAY but it didn't help!
How can I solve the problem? from client side? or server side?
Should I close the socket after sending the request and before receiving the response, so that the data is "flushed" and processed?
Or should I wrap my socket in an iostream and flush?
Or should the server socket use an "unbuffered" stream?
Thanks for any suggestions and advice!
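A note on the client side of this question: a plain send() on a TCP socket does not buffer in user space, so there is nothing to flush; the bytes are handed to the kernel when the call returns. What send() can do is accept fewer bytes than requested, so the one thing worth checking is its return value, e.g. with a small send-all loop like this sketch:

// Sketch: send() may write fewer bytes than asked for; loop until done.
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

bool sendAll(int sock, const char* data, size_t length) {
    size_t sent = 0;
    while (sent < length) {
        ssize_t n = send(sock, data + sent, length - sent, 0);
        if (n <= 0)
            return false;   // error (or connection closed)
        sent += static_cast<size_t>(n);
    }
    return true;
}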
Further to Jay's answer, you can try any network packet sniffer and check whether your packets are getting to the server or not. Have a look at wireshark or tcpdump.
Let's use "divide and conquer" to solve the problem.
First, does the server work?
From your code look up the port number that your server is listening on.
Start your server program.
Run the following command line program to see if the server is really listening:
netstat -an -p tcp
It will produce a list of connections. You should see a connection on your selected port when the server is running. Stop the server and run the command again to ensure the port is no longer in use.
Once you've verified the server is listening try to connect to it using the following command:
telnet your-server-address-here your-port-number-here
telnet will print what your server sends to you on the screen and send what you type back to the server.
This should give you some good clues.
I had a similar issue once before. My problem was that I never accepted a connection (TCP) on the server in order to create the stream between server and client. After I accepted the connection on the server side, everything worked as designed.
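For completeness, the accept step that was missing looks like this (minimal blocking sketch; the port number is illustrative):

// Sketch: a TCP server must accept() each incoming connection before it
// can recv()/send(); the listening socket itself never carries data.
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);            // illustrative port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, SOMAXCONN);

    sockaddr_in peer{};
    socklen_t peerLen = sizeof(peer);
    int client = accept(listener, (sockaddr*)&peer, &peerLen);  // the missing step
    // ... recv()/send() on `client`, not on `listener` ...
    close(client);
    close(listener);
}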
You should check the firewall settings for both systems. They may not be passing along your data.