I'm trying to code a simple ActionScript TCP client that sends data to a C++ TCP server. I'm new to ActionScript and I'm using sample code from Adobe for the client (see link below). I can make a connection and send data, but the data only becomes available at the server when the object is unloaded on the client side (which I guess closes the socket). I tried a C++ client, and there the data is immediately available at the server, so I must be missing something on the client side. Maybe I need to append some kind of termination/marker sequence?
ActionScript code sending data over TCP:
private function tcpConnect():void
{
    // connect to the local C++ server on port 5331
    var customSocket:CustomSocket = new CustomSocket("127.0.0.1", 5331);
    customSocket.timeout = 100;
    socketWrite(customSocket, 53);
    socketWrite(customSocket, 54);
    socketWrite(customSocket, 55);
    socketWrite(customSocket, 56);
}

private function socketWrite(sock:CustomSocket, b:int):void
{
    // write the byte followed by a 0 byte, then flush the socket buffer
    sock.writeByte(b);
    sock.writeByte(0);
    sock.flush();
}
C++ tcp server: http://msdn.microsoft.com/en-us/library/windows/desktop/ms737593(v=vs.85).aspx
Actionscript tcp client: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/Socket.html#includeExamplesSummary
Right after connecting to the server, the Flash client socket will send a request for the cross-domain policy file. It looks like this:
<policy-file-request/>
You have probably seen this in the server logs.
At that point the server should send the policy file back over the same socket connection.
Once the client gets the file it will probably close the connection.
After that you need to reconnect, and then you can send all your data without hindrance.
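In case it helps, here is a minimal sketch of how the C++ side might answer that request before resuming normal traffic (assuming a blocking Winsock socket; handlePolicyRequest and the policy XML are illustrative, and the to-ports value matches the 5331 used above):

#include <winsock2.h>
#include <cstring>

// Minimal sketch: answer Flash's <policy-file-request/> on a freshly accepted
// socket, then close it so the client can reconnect and send real data.
// Error handling is omitted for brevity.
static const char kPolicy[] =
    "<?xml version=\"1.0\"?>"
    "<cross-domain-policy>"
    "<allow-access-from domain=\"*\" to-ports=\"5331\"/>"
    "</cross-domain-policy>";

void handlePolicyRequest(SOCKET client)
{
    char buf[128] = {0};
    int n = recv(client, buf, sizeof(buf) - 1, 0);
    if (n > 0 && std::strstr(buf, "<policy-file-request/>") != 0) {
        // Flash expects the policy terminated by a null byte;
        // sizeof(kPolicy) includes the trailing '\0'.
        send(client, kPolicy, (int)sizeof(kPolicy), 0);
    }
    closesocket(client);   // the Flash client will reconnect afterwards
}

After the reconnect, the bytes written by socketWrite() should arrive immediately, the way they do with the C++ client.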
I am in the process of adding client/server UDP support to the thekogans stream library and have run into a problem on Windows. Here is what I am doing:
server udp socket is bound to 0.0.0.0:8854.
server udp socket has IP_PKTINFO = true.
server udp socket has SO_REUSEADDR = true.
server udp socket starts an overlapped WSARecvMsg operation.
client binds to 0.0.0.0:0 and connects to 127.0.0.1:8854.
client sends a message using WSASend.
server socket receives the message and creates a new UDP socket with the following attributes (see the sketch after this list):
SO_REUSEADDR = true
bind to address returned by IP_PKTINFO (127.0.0.1:8854).
connect to whatever address was returned by WSARecvMsg.
client and the new server UDP socket exchange a bunch of messages (using WSASend and WSARecv).
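In Winsock terms, the per-client socket described above is created roughly like this (a simplified sketch, not the actual thekogans code; createClientSocket is a made-up name and error handling is omitted):

#include <winsock2.h>

// Sketch: build the "connected" server-side UDP socket for one client.
// localAddr is the address IP_PKTINFO reported (127.0.0.1:8854 here),
// peerAddr is the source address WSARecvMsg returned.
SOCKET createClientSocket(const sockaddr_in &localAddr, const sockaddr_in &peerAddr)
{
    SOCKET s = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                         0, 0, WSA_FLAG_OVERLAPPED);
    BOOL reuse = TRUE;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR,
               reinterpret_cast<const char *>(&reuse), sizeof(reuse));
    bind(s, reinterpret_cast<const sockaddr *>(&localAddr), sizeof(localAddr));
    connect(s, reinterpret_cast<const sockaddr *>(&peerAddr), sizeof(peerAddr));
    return s;   // then associate with the IOCP and post WSARecv on it
}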
Here is the behavior I am seeing:
the first connection between client and server works flawlessly.
I then have the client exit and restart.
all packets from the restarted client are dropped.
if I set a timeout on the new server UDP socket (127.0.0.1:8854) and it times out and is closed, then the client can connect again. In other words, the scheme seems to work but only one client at a time. If the server has a concrete (not wildcard) socket created for the same port, no other client can send it messages.
Some more information that may be helpful: the server is async and uses IOCP. The same code (using epoll and kqueue) works perfectly on Linux and OS X. I feel like I am missing some flag that Winsock needs to have set, but I can't seem to find it. I have tried googling various search terms and have hit a wall.
Any and all help would be greatly appreciated. Thank you.
Hi there: I wrote an AS3 client socket in an AIR project; the other end is a C++ server.
In the C++ server I use a non-blocking socket, set up with ioctlsocket() and read with recv().
Every time the AS3 client socket connects to the C++ server, it reports that the connection succeeded,
but recv() returns 0 on the very next tick, right after the successful connection from the AS3 client.
According to MSDN, when recv() returns 0 it means the peer closed the connection gracefully.
But when I test the connection with a C++ client socket, this does not happen.
The client and server are both local, so the client connects to "127.0.0.1" on port 5001.
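For context, the server-side read is basically the pattern below (a minimal sketch, assuming the socket has already been made non-blocking with ioctlsocket() as mentioned above; pollClient is just an illustrative name):

#include <winsock2.h>

// Sketch: non-blocking read on an accepted socket, distinguishing
// "no data yet" from "peer closed" (the recv() == 0 case above).
void pollClient(SOCKET clientSock)
{
    char buf[512];
    int n = recv(clientSock, buf, sizeof(buf), 0);
    if (n > 0) {
        // got n bytes of payload
    } else if (n == 0) {
        // peer closed the connection gracefully -- the case I keep hitting
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        // non-blocking socket has nothing to read yet; try again next tick
    } else {
        // genuine socket error
    }
}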
Finally I found that AIR applications do not need crossdomain.xml. I think the disconnect was caused by the way I wrote my functions: I create the socket in one function and keep it in a variable typed as *, so it may have been garbage-collected, which would make the AIR socket disconnect on its own.
I'm currently working on a C++ application that communicates with my browser using WebSockets. The connection is only local, there won't ever be any non-local socket.
Currently my C++ code looks like this (just as an example):
while (true) {
    WebSocket *socket = server->accept();   // wait for the browser to connect
    socket->read(buffer, 256);              // then try to read the first packet
}
And my javascript code:
var socket = new WebSocket("ws://localhost:4564");
socket.onopen = function () {
    socket.send("Hello my name is Holt!");
};
As you can see, I'm waiting for a packet that should be sent as soon as the connection is opened. So I have two questions:
First, is there any way to send this information directly as part of the connection itself? (I think not, which is why I have a second question.)
Second, given that the connection is local, is it possible for the server to accept the socket but then be unable to retrieve the packet afterwards?
To add a bit more information, the current C++ application is based on Qt 5.3 with the QtWebSockets module and the javascript code is a Google Chrome extension that will run a script on specific websites.
Thanks for your help!
After you establish the WebSocket connection between client and server, the client can call socket.send() whenever it wants to send data, as long as the connection is alive.
Even though the connection is local, the server still needs to wait for and receive the data; it is a full TCP/IP data transfer either way.
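Since the question mentions Qt 5.3 with the QtWebSockets module, the usual pattern is to receive the message through signals rather than a blocking accept()/read() loop. A minimal sketch (the port 4564 comes from the question, everything else is illustrative):

#include <QtCore/QCoreApplication>
#include <QtCore/QDebug>
#include <QtNetwork/QHostAddress>
#include <QtWebSockets/QWebSocketServer>
#include <QtWebSockets/QWebSocket>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QWebSocketServer server(QStringLiteral("demo"), QWebSocketServer::NonSecureMode);
    server.listen(QHostAddress::LocalHost, 4564);

    QObject::connect(&server, &QWebSocketServer::newConnection, [&server]() {
        QWebSocket *socket = server.nextPendingConnection();
        // Each text frame is delivered asynchronously instead of blocking on a read().
        QObject::connect(socket, &QWebSocket::textMessageReceived,
                         [](const QString &message) {
            qDebug() << "received:" << message;   // e.g. "Hello my name is Holt!"
        });
        QObject::connect(socket, &QWebSocket::disconnected, socket, &QWebSocket::deleteLater);
    });

    return app.exec();
}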
I'm working on a flash application that needs to communicate with my C++ server for things like account validation and state updates. I have a non-blocking TCP socket on the server listening on a specific port.
The process goes like this:
Socket listens on server machine
Flash connects using a flash.net.Socket
Server accepts socket connection
Flash sends a policy file request
Server sends policy file data
Flash accepts connection
Two problems occur from here on out. When I send bytes from Flash, the server doesn't recognize them at all, but it doesn't block either; I just recv 0 bytes. When I send bytes from the server after sending the policy file, it gives me a WSAECONNRESET error.
Resources on Flash communicating with C or C++ are very limited, so any help is greatly appreciated.
When the Flash client sends "<policy-file-request/>", the server should send the file and then close the connection.
The client will need to reconnect after it receives the policy.
Trust me on this.
I am working on a project where a partner provides a service as socket server. And I write client sockets to communicate with it. The communication is two way: I send a request to server and then receive a response from server.
The problem is that I send the data to the server but apparently the server cannot receive the data.
On my side I just use a very simple implementation, much like the example from http://www.linuxhowtos.org/C_C++/socket.htm
#include <sys/socket.h>

socket_connect();            // pseudocode: socket() + connect() to the server
construct_request_data();    // pseudocode: fill request_data / request_length
send(socket, request_data, request_length, 0 /* flags */);   // I set flags to 0
// now the server should receive my request and send a response back to me
recv(socket, response_data, response_length, 0);
socket_close();              // pseudocode: close the socket
And it seems that the server socket is wrapped in ("bound to") a std::iostream, i.e. it is a buffered stream (the socket send/recv is done inside iostream::write/read):
server_socket_io >> receive_data;
server_socket_io << response_data;
By the way, I got a test client from my partner and it is wrapped in an iostream as well. The test client can communicate with the server without problems, but it must call iostream::flush() after every socket send.
But I want to keep it simple and not wrap my client socket in an iostream.
I just wonder whether the buffered iostream causes the problem: the data my client sends is very small, so maybe it is still sitting in a buffer and never gets processed.
Or could the problem be on my side? How can I know whether I really sent out the data? Does my client socket also buffer the data?
I have tried a "bad" workaround with TCP_NODELAY, but it didn't help!
How can I solve the problem? From the client side, or the server side?
Should I close (or half-close) the socket after sending the request and before receiving the response, so that the data gets "flushed" and processed? (A sketch of what I mean follows these questions.)
Or should I wrap my socket in an iostream and call flush()?
Or should the server socket use an "unbuffered" stream?
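To make that first option concrete, what I have in mind is a half-close with shutdown() after sending, so the server sees end-of-stream but I can still read the response. A rough sketch (plain POSIX sockets; request_response and the buffer names are placeholders, like in the snippet above):

#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Rough sketch: send the request, half-close the write side so the peer
 * sees end-of-stream (letting a buffered read on the server complete),
 * then read the reply on the still-open read side. */
ssize_t request_response(int sockfd,
                         const char *request_data, size_t request_length,
                         char *response_data, size_t response_capacity)
{
    send(sockfd, request_data, request_length, 0);
    shutdown(sockfd, SHUT_WR);   /* no more data from us, but recv() still works */
    ssize_t n = recv(sockfd, response_data, response_capacity, 0);
    close(sockfd);
    return n;
}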
Thanks for any suggestions and advice!
Further to Jay's answer, you can try a network packet sniffer to check whether your packets are actually getting to the server. Have a look at Wireshark or tcpdump.
Let's use "divide and conquer" to solve the problem.
First, does the server work?
From your code look up the port number that your server is listening on.
Start your server program.
Run the following command line program to see if the server is really listening:
netstat -an -p tcp
It will produce a list of connections. You should see a connection on your selected port when the server is running. Stop the server and run the command again to ensure the port is no longer in use.
Once you've verified the server is listening, try to connect to it using the following command:
telnet your-server-address-here your-port-number-here
telnet will print what your server sends to you on the screen and send what you type back to the server.
This should give you some good clues.
I had a similar issue once before. My problem was that I never accepted the (TCP) connection on the server in order to create the stream between server and client. After I accepted the connection on the server side, everything worked as designed.
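In case it helps, the step I had missed looks roughly like this (a minimal POSIX-style sketch; accept_one_client is just an illustrative name, error handling omitted):

#include <sys/socket.h>

/* Sketch: the accept() step that was missing -- it turns the listening
 * socket into a per-client connected socket you can recv()/send() on. */
int accept_one_client(int listen_fd)
{
    struct sockaddr_storage peer;
    socklen_t peer_len = sizeof(peer);
    int client_fd = accept(listen_fd, (struct sockaddr *)&peer, &peer_len);
    return client_fd;   /* use this fd for recv()/send(), not listen_fd */
}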
You should check the firewall settings for both systems. They may not be passing along your data.