I am programming a real-time game in which I need reliable UDP, so I've chosen to work with UDT (UDP-based data transfer protocol - http://sourceforge.net/projects/udt/).
The clients (in browsers) send real-time messages to my server via CGI scripts. The problem is that some messages are being lost, and I don't know why: the server reports that it sent every message successfully to the corresponding client, but sometimes the client never receives it.
In my debug file, I've found that when a message is not received by the client, its script says:
error in recv();
recv: Connection was broken.
I would like some help on how the server can know whether the client actually got its message; should I send a NACK or something from the client side? I thought UDT would handle that for me. Can someone clarify this situation?
The relevant sections of the communication parts of my code are below, with some comments:
server's relevant code:
//...
void send_msg_in(player cur, char* xml){
/*this function stores the current message, xml, in a queue if xml!=NULL, and sends the 1st message of the queue to the client*/
/*this function is called with xml=NULL when the player connects, to get the 1st message of the queue,
or with xml!=NULL when a new message arrives: in that case the message is stored in the queue and sent at the appropriate time, i.e. the messages are kept in order.*/
char* msg_ptr=NULL;
if (xml!=NULL){ //add the message to a queue (FIFO), the cur.xml_msgs
msg_ptr=(char*) calloc(strlen(xml)+1, sizeof(char));
strcpy(msg_ptr, xml);
(*(cur.xml_msgs)).push(msg_ptr);
} //get the 1st message of the queue
if (!(*(cur.xml_msgs)).empty()){
xml=(*(cur.xml_msgs)).front();
}
if (cur.get_udt_socket_in()!=NULL){
UDTSOCKET cur_udt = *(cur.get_udt_socket_in());
// cout << "send_msg_in(), cur_udt: " << cur_udt << endl;
//send the "xml", i.e. the 1st message of the queue...
if (UDT::ERROR == UDT::send(cur_udt, xml, strlen(xml)+1, 0)){
UDT::close(cur_udt);
cur.set_udt_socket_in(NULL);
}
else{ //if no error this else is reached
cout << "TO client:\n" << xml << "\n"; /*if there is no error,
i.e. on success, the server prints the message that was sent.*/
// / \
// /_!_\
/*the problem is that
the messages that are lost don't appear on the client side,
but they appear here on the server! */
if (((string) xml).find("<ack.>")==string::npos){
UDT::close(cur_udt);
cur.set_udt_socket_in(NULL); //close the socket
}
(*(cur.xml_msgs)).pop();
}
}
}
//...
client's relevant code:
//...
#define MSGBUFSIZE 1024
char msgbuf[MSGBUFSIZE];
UDTSOCKET client;
ofstream myfile;
//...
main(int argc, char *argv[]){
//...
// connect to the server, implicit bind
if (UDT::ERROR == UDT::connect(client, (sockaddr*)&serv_addr, sizeof(serv_addr))){
cout << "error in connect();" << endl;
return 0;
}
myfile.open("./log.txt", ios::app);
send(xml);
char* cur_xml;
do{
cur_xml = receive(); //wait for an ACK or a new message...
myfile << cur_xml << endl << endl; // / \
/* /_!_\ the lost messages appear neither on the website
nor in this log file.*/
} while (((string) cur_xml).find("<ack.>")!=string::npos);
cout << cur_xml << endl;
myfile.close();
UDT::close(client);
return 0;
}
char* receive(){
if (UDT::ERROR == UDT::recv(client, msgbuf, MSGBUFSIZE, 0)){
// / \
/* /_!_\ when a message is not well received
this code is usually reached, and an error is printed.*/
cout << "error in recv();" << endl;
myfile << "error in recv();" << endl;
myfile << "recv: " << UDT::getlasterror().getErrorMessage() << endl << endl;
return 0;
}
return msgbuf;
}
void* send(string xml){
if (UDT::ERROR == UDT::send(client, xml.c_str(), strlen(xml.c_str())+1, 0)){
cout << "error in send();" << endl;
myfile << "error in send();" << endl;
myfile << "send: " << UDT::getlasterror().getErrorMessage() << endl << endl;
return 0;
}
}
Thank you for any help!
PS. I tried to increase the linger time on close(), after finding the link http://udt.sourceforge.net/udt4/doc/opt.htm, adding the following to the server's code:
struct linger l;
l.l_onoff = 1;
l.l_linger = ...; //a huge value in seconds...
UDT::setsockopt(*udt_socket_ptr, 0, UDT_LINGER, &l, sizeof(l));
but the problem is still the same...
PPS. The other communication-related parts of the server side are below (note: they don't seem all that relevant to me):
main(int argc, char *argv[]){
char msgbuf[MSGBUFSIZE];
UDTSOCKET serv = UDT::socket(AF_INET, SOCK_STREAM, 0);
sockaddr_in my_addr;
my_addr.sin_family = AF_INET;
my_addr.sin_port = htons(PORT);
my_addr.sin_addr.s_addr = INADDR_ANY;
memset(&(my_addr.sin_zero), '\0', sizeof(my_addr.sin_zero));
if (UDT::ERROR == UDT::bind(serv, (sockaddr*)&my_addr, sizeof(my_addr))){
cout << "error in bind();";
return 0;
}
UDT::listen(serv, 1);
int namelen;
sockaddr_in their_addr;
while (true){
UDTSOCKET recver = UDT::accept(serv, (sockaddr*)&their_addr, &namelen);
if (UDT::ERROR == UDT::recv(recver, msgbuf, MSGBUFSIZE, 0)){
//this recv() function is called only once for each accept(), because the clients call CGI scripts via a browser, so they need to call a new CGI script with a new UDT socket for each request (this is in agreement with the clients' code presented before).
cout << "error in recv();" << endl;
}
char* player_xml = (char*) &msgbuf;
cur_result = process_request((char*) &msgbuf, &recver, verbose); //ACK
}
}
struct result process_request(char* xml, UDTSOCKET* udt_socket_ptr, bool verbose){
//parse the XML...
//...
player* cur_ptr = get_player(me); //searches in a vector of player, according to the string "me" of the XML parsing.
UDTSOCKET* udt_ptr = (UDTSOCKET*) calloc(1, sizeof(UDTSOCKET));
memcpy(udt_ptr, udt_socket_ptr, sizeof(UDTSOCKET));
if (cur_ptr==NULL){
//register the player:
player* this_player = (player*) calloc(1, sizeof(player));
//...
}
else if (strcmp(request_type.c_str(), "info_waitformsg")==0){
if (udt_ptr!=NULL){
cur_ptr->set_udt_socket_in(udt_ptr);
if (!(*(cur_ptr->xml_msgs)).empty()){
send_msg_in(*cur_ptr, NULL, true);
}
}
}
else{ //messages that get instant response from the server.
if (udt_ptr!=NULL){
cur_ptr->set_udt_socket_out(udt_ptr);
}
if (strcmp(request_type.c_str(), "info_chat")==0){
info_chat cur_info;
to_object(&cur_info, me, request_type, msg_ptr); //convert the XML string values to a struct
process_chat_msg(cur_info, xml);
}
/* else if (...){ //other types of messages...
}*/
}
}
void process_chat_msg(info_chat cur_info, char* xml_in){
player* player_ptr=get_player(cur_info.me);
if (player_ptr){
int i=search_in_matches(matches, cur_info.match_ID);
if (i>=0){
match* cur_match=matches[i];
vector<player*> players_in = cur_match->followers;
int n=players_in.size();
for (int i=0; i<n; i++){
if (players_in[i]!=msg_owner){
send_msg_in(*(players_in[i]), xml, flag);
}
}
}
}
}
Looking at the UDT source code at http://sourceforge.net/p/udt/git/ci/master/tree/udt4/src/core.cpp, the error message "Connection was broken" is produced when either of the Boolean flags m_bBroken or m_bClosing is true and there is no data in the receive buffer.
Those flags are set in just a few cases:
In sections of code marked "should not happen; attack or bug" (unlikely)
In deliberate close or shutdown actions (don't see this happening in your code)
In expiration of a timer that checks for peer activity (the likely culprit)
In that source file at line 2593 it says:
// Connection is broken.
// UDT does not signal any information about this instead of to stop quietly.
// Application will detect this when it calls any UDT methods next time.
//
m_bClosing = true;
m_bBroken = true;
// ...[code omitted]...
// app can call any UDT API to learn the connection_broken error
Looking at the send() call, I don't see anywhere that it waits for an ACK or NAK from the peer before returning, so I don't think a successful return from send() on the server side is indicative of successful receipt of the message by the client.
You didn't show the code on the server side that binds to the socket and listens for responses from the client; if the problem is there then the server might be happily sending messages and never listening to the client that is trying to respond.
UDP is not a guaranteed-transmission protocol. A host sends a message, but if the recipient does not receive it, or does not receive it correctly, no error is raised. That is why it is commonly used in applications that favour speed over perfect delivery, such as games. TCP, by contrast, does guarantee delivery: it requires a connection to be set up first, and every segment is acknowledged by the receiver.
I would encourage you to think about whether you actually need guaranteed receipt of that data, and, if you do, consider using TCP.
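If you stay with UDT, another option is an explicit application-level acknowledgement: the server keeps the message in its queue (and the socket open) until the client has echoed a confirmation back, and only then pops it. Here is a minimal sketch of that idea, assuming the already-connected sockets from your code; the "<recvd/>" tag and both helper names are made up for illustration, not part of UDT:
#include <udt.h>
#include <cstring>
// Client side: after successfully handling a message, echo a confirmation.
bool confirm_receipt(UDTSOCKET sock){
    const char ack[] = "<recvd/>";
    return UDT::send(sock, ack, sizeof(ack), 0) != UDT::ERROR;
}
// Server side: call this before popping the message from cur.xml_msgs;
// keep (or re-send) the message if the confirmation never arrives.
bool wait_for_receipt(UDTSOCKET sock){
    char buf[64];
    int n = UDT::recv(sock, buf, sizeof(buf) - 1, 0);
    if (n == UDT::ERROR || n <= 0)
        return false;
    buf[n] = '\0';
    return strstr(buf, "<recvd/>") != NULL;
}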
Related
I'm working on a server/client chat room for school.
Everything worked fine until I decided to handle input from my clients. Since then, I don't know why, but every time I create a new socket it always gets the same value. I even tried a fresh solution that does nothing but create a socket, and even there the sockets are the same.
My test code as simple as that:
#pragma comment (lib, "Ws2_32.lib")
#include <WinSock2.h>
#include <iostream>
int main()
{
WSADATA wsaData;
if (WSAStartup(MAKEWORD(2, 2), &wsaData))
{
return 1;
}
// Create Socket
SOCKET sock = socket(AF_INET, SOCK_STREAM, 0);
char a;
std::cout << "Socket : " << sock << std::endl;
std::cin >> a;
}
I end up with the same socket value four times in a row:
Sometimes, weirdly, it works completely fine, but shortly afterwards it goes back to that.
Edit:
To say more about the project: I'm not sure how to explain the code without posting 300 lines, which I assume isn't the best idea.
So here is my update for the server (yes, we're using polling because my teacher doesn't want us to use multithreading for now).
void Server::Update()
{
do
{
WSAPoll(fds_, MAX_CLIENTS, -1);
for (int i = 0; i < MAX_CLIENTS; ++i)
{
if (fds_[i].revents & POLLRDNORM)
{
if (i == 0)
{
// Accept
AcceptClient(sock_);
}
else
{
// Receive
ReceiveMsg(fds_[i].fd, receiveBuffer);
// Send the message to all clients except the sender
for (int j = 0; j < clients_.size(); ++j)
{
if (clients_[j].socket != fds_[i].fd)
{
SendMsg(clients_[j].socket, receiveBuffer);
}
}
}
}
if (fds_[i].revents & POLLHUP)
{
closesocket(clients_[i].socket);
std::cout << "Client with socket " << clients_[i].socket << " disconnected" << std::endl;
clients_.erase(clients_.begin() + i);
}
}
} while (true);
}
Here is the Accept code:
void Server::AcceptClient(SOCKET sock)
{
// Client Socket
SOCKET csock;
SOCKADDR_IN csin;
int crecsize = sizeof(csin);
// Address Buffer
char adressBuffer[65];
csock = accept(sock, (SOCKADDR*)&csin, &crecsize);
if (csock != INVALID_SOCKET)
{
std::cout
<< "Client with socket " << csock
<< " connected from " << inet_ntop(AF_INET, &csin.sin_addr, adressBuffer, sizeof(adressBuffer))
<< ":" << csin.sin_port << std::endl;
clients_.push_back(Client(csock));
fds_[clients_.size()].fd = csock;
fds_[clients_.size()].events = POLLIN;
}
else
{
printError(WSAGetLastError(), __LINE__, __FILE__);
return;
}
}
But, by using WSAGetLastError(), I know that the error occurs on the client side during the call to connect():
void NetworkClient::ConnectToServer(SOCKET sock, SOCKADDR_IN sin, int recsize)
{
int sock_err = connect(sock, (SOCKADDR*)&sin, recsize);
if (sock_err != INVALID_SOCKET)
{
std::cout << "Connexion avec le serveur reussie" << std::endl;
}
else
{
printError(WSAGetLastError(), __LINE__, __FILE__);
return;
}
}
So I still end up with the same error, even though my socket is non-blocking.
Unlike on other platforms, where sockets are indexes into a per-process file table, sockets on Windows are kernel objects. When a process exits, any open objects are released automatically, allowing the kernel to reuse them. This is perfectly normal behavior.
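You can see this in action by creating more than one socket inside the same run: handles that are alive at the same time are distinct, but once they are closed (or the process exits) nothing stops the kernel from handing out the same values again. A small sketch along the lines of your test program (the printed values are whatever the kernel chooses, not something to rely on):
#pragma comment (lib, "Ws2_32.lib")
#include <WinSock2.h>
#include <iostream>
int main()
{
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData))
    {
        return 1;
    }
    // Two sockets that are alive at the same time get distinct handle values.
    SOCKET a = socket(AF_INET, SOCK_STREAM, 0);
    SOCKET b = socket(AF_INET, SOCK_STREAM, 0);
    std::cout << "a: " << a << "  b: " << b << std::endl;
    // After they are closed (or when the process exits), the kernel is free
    // to reuse the same values for the next socket() call or program run.
    closesocket(a);
    closesocket(b);
    WSACleanup();
    return 0;
}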
UPDATE:
But, by using WSAGetLastError(), I know that the error occurs on the client side during the call to connect()
The error code you have shown is 10035 (WSAEWOULDBLOCK), which is normal behavior for a non-blocking connect(). It is NOT an error condition, so don't treat it like one. It simply means the connection operation is in progress. WSAPoll() (or select(), etc) will tell you at a later time when the operation is actually finished, and whether it was successful or not (in your case, the connection is successful, as evident by your server log). This is explained in the connect() documentation:
For connection-oriented, nonblocking sockets, it is often not possible to complete the connection immediately. In such a case, this function returns the error WSAEWOULDBLOCK. However, the operation proceeds.
When the success or failure outcome becomes known, it may be reported in one of two ways, depending on how the client registers for notification.
If the client uses the select function, success is reported in the writefds set and failure is reported in the exceptfds set.
If the client uses the functions WSAAsyncSelect or WSAEventSelect, the notification is announced with FD_CONNECT and the error code associated with the FD_CONNECT indicates either success or a specific reason for failure.
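For completeness, with plain select() the pending connect can be resolved roughly like this. This is only a sketch: sock is assumed to be the non-blocking socket on which connect() just returned WSAEWOULDBLOCK, and error handling is trimmed:
// Wait for the non-blocking connect() to finish: writability means success,
// the except set means failure; SO_ERROR then holds the final error code.
fd_set writefds, exceptfds;
FD_ZERO(&writefds);
FD_ZERO(&exceptfds);
FD_SET(sock, &writefds);
FD_SET(sock, &exceptfds);
timeval tv = { 5, 0 }; // give the connect up to 5 seconds
int n = select(0, NULL, &writefds, &exceptfds, &tv); // first argument is ignored on Windows
if (n > 0 && FD_ISSET(sock, &writefds))
{
    std::cout << "connect() completed successfully" << std::endl;
}
else if (n > 0 && FD_ISSET(sock, &exceptfds))
{
    int err = 0;
    int errlen = sizeof(err);
    getsockopt(sock, SOL_SOCKET, SO_ERROR, (char*)&err, &errlen);
    std::cout << "connect() failed with error " << err << std::endl;
}
else
{
    std::cout << "connect() still in progress (select timed out)" << std::endl;
}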
I'm working on a client application that sends sensor data one way to a remote server. After the initial login there is no return data from the server. My problem is that when the Ethernet link is lost, such as a hard disconnect (e.g. the wireless link goes down), my application does not get an error return value from a subsequent 'send' call. I am using a single non-blocking socket instance. The thread checks for a 'recv' on each loop using 'select'. It does eventually get an error on 'recv' but never on 'send'.
When the remote PC loses internet connectivity, the program remains apparently connected to the server for minutes to hours before it recognises that the connection was lost and switches to re-logging into the server. What can be done to help detect the hard disconnect?
void checkConnect(NTRIP& server)
{
//1st check for recv or gracefully closed socket
char databuf[SERIAL_BUFFERSIZE];
fd_set Reader, Writer, Err;
TIMEVAL Timeout;
Timeout.tv_sec = 1; // timeout after 1 second
Timeout.tv_usec = 0;
FD_ZERO(&Reader);
FD_ZERO(&Err);
FD_SET(server.socket, &Reader);
FD_SET(server.socket, &Err);
int iResult = select(0, &Reader, NULL, &Err, &Timeout);
if(iResult > 0)
{
if(FD_ISSET(server.socket, &Reader) )
{
int recvBytes = recv(server.socket, databuf, sizeof(databuf), 0);
if(recvBytes == SOCKET_ERROR)
{
cout << "socket error on receive call from server " << WSAGetLastError() << endl;
closesocket(server.socket);
server.connected_IP = false;
}
else if(recvBytes == 0)
{
cout << "server closed the connection gracefully" << endl;
closesocket(server.socket);
server.connected_IP = false;
}
else //>0 bytes were received so read data if needed
{
}
}
if(FD_ISSET(server.socket, &Err))
{
cout << "select returned socket in error state" << endl;
closesocket(server.socket);
server.connected_IP = false;
}
}
else if(iResult == SOCKET_ERROR)
{
cout << "ip thread select socket error " << WSAGetLastError() << endl;
closesocket(server.socket);
server.connected_IP = false;
}
//2nd check hard disconnect if no other data has been sent recently
if(server.connected_IP == true && getTimePrecise() - server.lastDataSendTime > 5.0)
{
char buf1[] = "hello";
cout << "checking send for error" << endl;
iResult = send(server_main.socket, buf1, sizeof(buf1), 0);
if(iResult == SOCKET_ERROR)
{
int lasterror = WSAGetLastError();
if(lasterror == WSAEWOULDBLOCK)
{
cout << "server send WSAEWOULDBLOCK" << endl;
}
if(lasterror != WSAEWOULDBLOCK)
{
cout << "server testing connection send function error " << lasterror << endl;
closesocket(server.socket);
server.connected_IP = false;
}
}
else
{
cout << "sent out " << iResult << " bytes" << endl;
}
server.lastDataSendTime = getTimePrecise();
}
}
It is not possible to detect disconnect until you try to send something.
The solution for you is the following:
You detect that you have received no data for a certain period of time and want to check whether the connection is still alive.
You send some data to the server using the send function. It could be a protocol-specific ping packet or even garbage. The send function returns immediately, because it does not wait for the data to actually be transmitted; it only fills the internal send buffer.
You begin waiting for a socket read.
While you are waiting, the OS tries to send the data in the send buffer to the server.
When the OS detects that it cannot deliver the data to the server, the connection is marked as erroneous.
Now you will get an error when calling the recv and send functions.
The send timeout is system specific and can be configured. Usually it is about 20 seconds (Linux) to 2 minutes (Windows). This means you may have to wait quite a while before you receive an error.
Notes:
You can also turn on the TCP keep-alive mechanism, but I don't recommend doing this (a sketch follows after these notes).
You can also modify the TCP timeout intervals. That can be helpful when you want the connection to survive a temporary network disconnect.
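If you do decide to experiment with keep-alive despite that caveat, on Windows the probe timing can be set per socket with WSAIoctl and SIO_KEEPALIVE_VALS. A minimal sketch; the 30 s / 5 s values are only examples:
#pragma comment (lib, "Ws2_32.lib")
#include <WinSock2.h>
#include <mstcpip.h> // tcp_keepalive, SIO_KEEPALIVE_VALS
// Enable TCP keep-alive probes on a connected socket so a dead peer is
// detected even while the application itself is idle.
bool enableKeepAlive(SOCKET s)
{
    tcp_keepalive ka;
    ka.onoff = 1;
    ka.keepalivetime = 30000;    // first probe after 30 s of inactivity
    ka.keepaliveinterval = 5000; // then probe every 5 s
    DWORD bytesReturned = 0;
    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    NULL, 0, &bytesReturned, NULL, NULL) == 0;
}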
That's how TCP works and is intended to work. You will get an error from a subsequent send, but never from the first send after the disconnect. There is buffering, retransmission, and a retransmission timeout to get through before an error is signalled.
I'm trying to create a timeout using select() for UDP socket transfer. I want to send an int from client to server, wait 300ms, and if I don't get an ACK, resend the packet. I'm not sure how to set this up properly with the timeout. From what I've gathered online and on the notes I have from class, select should be used on the receiving end.
The client and the server send the numbers 1-100 back and forth. I have separate simulated-router code that randomly drops packets.
Here is the code I have for the client side:
int sent = 1;
int received = 1;
for (int i = 0; i < 100; i++)
{
string sent1 = to_string(sent);
char const *pchar = sent1.c_str();
if(!sendto(s, pchar, sizeof(pchar), 0, (struct sockaddr*) &sa_in, sizeof(sa_in)))
cout << "send NOT successful\n";
else
{
cout << "Client sent " << sent << endl;
sent++;
}
// receive
fd_set readfds; //fd_set is a type
FD_ZERO(&readfds); //initialize
FD_SET(s, &readfds); //put the socket in the set
if(!(outfds = select (1 , &readfds, NULL, NULL, & timeouts)))
break;
if (outfds == 1) //receive frame
{
if (!recvfrom(s, buffer2, sizeof(buffer2), 0, (struct sockaddr*) &client, &client_length))
cout << "receive NOT successful\n";
else
{
received = atoi(buffer2);
cout << "Client received " << received << endl;
received++;
}
}
}
The code is identical for the receiving side except it is in reverse: receive first, then send
My code doesn't utilize the timeout at all. This is basically what I want to do:
send packet(N)
if (timeout)
resend packet(N)
else
send packet(N+1)
If the receiver gets a timeout, it either needs to tell the sender or stay silent and let the sender time out on its own. In other words, you have to implement either a NACK-based protocol or an ACK-based protocol.
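For the ACK-based variant you describe (send packet N, resend it on timeout), the select() timeout belongs on the sender while it waits for the ACK. Here is a rough POSIX-style sketch of that loop, reusing your s and sa_in names and the 300 ms figure; on Windows the types differ slightly and select()'s first argument is ignored:
#include <sys/select.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <cstdio>
#include <cstdlib>
// Stop-and-wait: send the number, wait up to 300 ms for the echoed ACK,
// and resend the same number if the wait times out.
bool send_with_retry(int s, const sockaddr_in& sa_in, int number)
{
    char out[16];
    int outlen = snprintf(out, sizeof(out), "%d", number);
    for (;;)
    {
        sendto(s, out, outlen, 0, (const sockaddr*)&sa_in, sizeof(sa_in));
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(s, &readfds);
        timeval timeout = { 0, 300000 }; // 300 ms; must be reset on every iteration
        int ready = select(s + 1, &readfds, NULL, NULL, &timeout);
        if (ready == 0)
            continue;     // timeout: fall through and resend packet N
        if (ready < 0)
            return false; // select() error
        char ackbuf[16];
        sockaddr_in from;
        socklen_t fromlen = sizeof(from);
        int n = recvfrom(s, ackbuf, sizeof(ackbuf) - 1, 0,
                         (sockaddr*)&from, &fromlen);
        if (n > 0)
        {
            ackbuf[n] = '\0';
            if (atoi(ackbuf) == number) // the ACK for this packet arrived
                return true;
        }
        // otherwise: wrong or garbled reply, loop around and resend
    }
}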
I send a packet as a client to the server, and I want the server to forward that packet to all clients. Here is the code:
#include <iostream>
#include <SFML/Network.hpp>
using namespace std;
int main()
{
int fromID; // receive data from 'fromID'
int Message; // fromID's message
sf::SocketTCP Listener;
if (!Listener.Listen(4567))
return 1;
// Create a selector for handling several sockets (the listener + the socket associated to each client)
sf::SelectorTCP Selector;
Selector.Add(Listener);
while (true)
{
unsigned int NbSockets = Selector.Wait();
for (unsigned int i = 0; i < NbSockets; ++i)
{
// Get the current socket
sf::SocketTCP Socket = Selector.GetSocketReady(i);
if (Socket == Listener)
{
// If the listening socket is ready, it means that we can accept a new connection
sf::IPAddress Address;
sf::SocketTCP Client;
Listener.Accept(Client, &Address);
cout << "Client connected ! (" << Address << ")" << endl;
// Add it to the selector
Selector.Add(Client);
}
else
{
// Else, it is a client socket so we can read the data he sent
sf::Packet Packet;
if (Socket.Receive(Packet) == sf::Socket::Done)
{
// Extract the message and display it
Packet >> Message;
Packet >> fromID;
cout << Message << " From: " << fromID << endl;
//send the message to all clients
for(unsigned int j = 0; j < NbSockets; ++j)
{
sf::SocketTCP Socket2 = Selector.GetSocketReady(j);
sf::Packet SendPacket;
SendPacket << Message;
if(Socket2.Send(SendPacket) != sf::Socket::Done)
cout << "Error sending message to all clients" << endl;
}
}
else
{
// Error : we'd better remove the socket from the selector
Selector.Remove(Socket);
}
}
}
}
return 0;
}
Client code:
In the Player class I have this function:
void Player::ReceiveData()
{
int mess;
sf::Packet Packet;
if(Client.Receive(Packet) == sf::Socket::Done)
{
Client.Receive(Packet);
Packet >> mess;
cout << mess << endl;
}
}
main.cpp:
Player player;
player.Initialize();
player.LoadContent();
player.Connect();
..
..
//GAME LOOP
while(running==true)
{
sf::Event Event;
while(..) // EVENT LOOP
{
...
}
player.Update(Window);
player.ReceiveData();
player.Draw(Window);
}
When I run this client code, the program stops responding and freezes.
The problem is with that ReceiveData() function.
All sockets, even the ones created by SFML, are blocking by default. This means that when you try to receive while there is nothing to receive, the call will block, making your application appear frozen.
You can toggle the blocking status of a SFML socket with the sf::SocketTCP::SetBlocking function.
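For example (SFML 1.6 API, as in your code), ReceiveData() could look roughly like this so the game loop keeps running when nothing has arrived yet; treat it as a sketch, not a drop-in fix:
void Player::ReceiveData()
{
    Client.SetBlocking(false); // could also be done once, right after connecting
    sf::Packet Packet;
    if (Client.Receive(Packet) == sf::Socket::Done)
    {
        int mess;
        Packet >> mess;
        cout << mess << endl;
    }
    // sf::Socket::NotReady just means no data yet; carry on with the frame.
}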
Sending to all clients fails because you use GetSocketReady to obtain the sockets to send to. That function only returns a socket for clients that are ready (i.e. the previous call to Wait marked the socket as having input).
You need to refactor the server to keep track of the connected clients in another way. The common way is to reset and recreate the selector every time in the outer loop, and have a separate collection of the connected clients (e.g. a std::vector).
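A minimal sketch of that idea with the same SFML 1.6 API (the container and the Broadcast helper are illustrative, not part of SFML):
#include <vector>
#include <iostream>
#include <SFML/Network.hpp>
// Server-side bookkeeping: every accepted client goes into this list
// (Clients.push_back(Client) right after Selector.Add(Client)), and the
// matching entry is erased again when Selector.Remove(Socket) is called.
std::vector<sf::SocketTCP> Clients;
// Broadcast to every connected client, not just the ones that happen to be
// "ready" in the current call to Wait().
void Broadcast(int Message)
{
    sf::Packet SendPacket;
    SendPacket << Message;
    for (std::size_t j = 0; j < Clients.size(); ++j)
    {
        if (Clients[j].Send(SendPacket) != sf::Socket::Done)
            std::cout << "Error sending message to a client" << std::endl;
    }
}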
I am trying to write a threaded function that sends system information via TCP/IP over the local network to another computer. I have been using sockets to achieve this and it has worked out quite all right so far. But I am now at a point where it usually works, yet around 30% of the time I get error messages telling me that the socket cannot be opened. I use the activeSocket library for the sockets.
#include "tbb/tick_count.h"
#include "ActiveSocket.h"
using namespace std;
CActiveSocket socket;
extern int hardwareStatus;
int establishTCP() {
char time[11];
int communicationFailed = 0;
memset(&time, 0, 11);
socket.Initialize();
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
return communicationFailed;
}
int monitor() {
cout << "Monitor: init continious monitoring" << endl;
int communicationFailed;
tbb::tick_count monitorCounter = tbb::tick_count::now();
while (!closeProgram) {
tbb::tick_count currentTick = tbb::tick_count::now();
tbb::tick_count::interval_t interval;
interval = currentTick - monitorCounter;
if (interval.seconds() > 2) {
monitorCounter = tbb::tick_count::now();
communicationFailed = 1;
char buffer[256];
sprintf(buffer, "%d;", hardwareStatus);
establishTCP();
char *charip = new char[monitoringIP.size() + 1];
charip[monitoringIP.size()] = 0;
memcpy(charip, monitoringIP.c_str(), monitoringIP.size());
const uint8* realip = (const uint8 *) charip;
int monitorCount = 0;
cout << "Monitor: " << buffer << endl;
while (communicationFailed == 1 && monitorCount < 2) {
monitorCount++;
if (socket.Open(realip, 2417)) {
if (socket.Send((const uint8 *) buffer, strlen(buffer))) {
cout << "Monitor: Succeeded sending data" << endl;
communicationFailed = 0;
socket.Close();
} else {
socket.Close();
communicationFailed = 1;
cout << "Monitor: FAILED TO SEND DATA" << endl;
}
} else {
socket.Close();
communicationFailed = 1;
cout << "Monitor: FAILED TO OPEN SOCKET FOR DATA" << endl;
}
}
if (monitorCount == 2) cout << "Monitor: UNABLE TO SEND DATA" << endl;
}
}
return communicationFailed;
}
I think I am doing something wrong with these functions and that the problem is not on the other side of the line where this data is received. Can anyone see any obvious mistakes in this code that could cause the failure? I keep getting my own cout message "Monitor: FAILED TO OPEN SOCKET FOR DATA"
EDIT: With telnet everything works fine, 100% of the time
You can use netstat to check that the server is listening on the port and that connections are being established. Snoop is another good tool in your armoury for finding out what is going wrong. Another possibility is to use telnet to see if the client can connect to that IP address and port. As for the code, I will take a look at it later to see if something has gone awry.
socket is a global variable. It might be re-used concurrently between two threads or sequentially inside one thread. In fact, the while (!closeProgram) loop indicates that you intend to use it sequentially.
Some documentation for CActiveSocket::Open reads: "Connection-based protocol sockets (CSocket::SocketTypeTcp) may successfully call Open() only once..."
Perhaps your program fails when you call .Open() twice on the same object.
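If that is what is happening, a simple variation is to construct a fresh CActiveSocket for every connection attempt instead of reusing the global one. A sketch, following the same calls your code already uses (the sendStatus name and its parameters are made up, and the Open/Send signatures are taken from the question's usage rather than verified against the library headers):
#include "ActiveSocket.h"
#include <cstring>
// One CActiveSocket per attempt, so Open() is never called twice on the
// same object.
bool sendStatus(const uint8* ip, uint16 port, const char* payload)
{
    CActiveSocket sock;
    if (!sock.Initialize())
        return false;
    bool ok = false;
    if (sock.Open(ip, port))
    {
        ok = sock.Send((const uint8*)payload, strlen(payload)) > 0;
    }
    sock.Close();
    return ok;
}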
I eventually found the problem with my code. Since the connection was unstable and only working about 70% of the time, it turned out to be a timeout issue. I removed the two timeout settings:
socket.SetConnectTimeout(0, 20);
socket.SetSendTimeout(0, 20);
Now it works perfectly fine, thanks for the troubleshooting tips though!
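For anyone hitting the same thing: if the second argument of those calls is in microseconds (an assumption on my part, which the 70% success pattern would fit), 20 µs is an extremely tight limit. Rather than removing the timeouts entirely, a more generous value might give the same stability while still bounding how long a dead peer can stall the monitor loop, e.g.:
socket.SetConnectTimeout(1, 0); // 1 second instead of an (assumed) 20 microseconds
socket.SetSendTimeout(1, 0);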