WSARecv() receives fewer bytes than sent - C++

I'm trying to send an image through WinSock. I'm reading the image in blocks of fixed size. WSASend() sends the right amount of data, but when I receive it, the pieces I get are smaller than the block size.
char* TCPClient::ReadSocket()
{
    Flags = 0;
    if ((WSARecv(Info->GetSocket(), &(Info->GetWSABufForRead()), 1, &RecvBytes, &Flags, NULL, NULL)) == SOCKET_ERROR) {
        if (WSAGetLastError() != WSAEWOULDBLOCK) {
            sprintf(exceptBuf, "WSARecv() failed with error %d\n", WSAGetLastError());
            throw new CException(exceptBuf);
        }
        else {
            printf("WSARecv() is OK!\n");
        }
    }
    else {
        if (RecvBytes == 0) {
            return nullptr;
        }
        Info->SetRecvBytes(RecvBytes);
        return Info->ReadBuffer();
    }
    return nullptr;
}
EDIT: What I actually want to know is how to get the entire block of information.
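TCP is a byte stream: a single WSARecv() may return any non-zero number of bytes up to the amount requested, and the sender's block boundaries are not preserved. The usual fix is to loop until the expected number of bytes has arrived. A minimal sketch of that loop; `FakeStream` is a stand-in for a socket that returns short reads (illustrative, not the asker's `TCPClient` API), so the sketch runs without a network:

```cpp
#include <algorithm>
#include <cstring>
#include <string>

// Stand-in for a TCP socket: hands out at most `chunk` bytes per call,
// simulating the short reads WSARecv()/recv() can legally return.
struct FakeStream {
    std::string data;
    std::size_t pos;
    std::size_t chunk;
    int Recv(char* buf, int len) {
        std::size_t n = std::min(std::min(static_cast<std::size_t>(len), chunk),
                                 data.size() - pos);
        std::memcpy(buf, data.data() + pos, n);
        pos += n;
        return static_cast<int>(n);  // 0 means "connection closed"
    }
};

// Loop until exactly `len` bytes have arrived (or the peer closes early).
bool RecvExact(FakeStream& s, char* buf, int len) {
    int got = 0;
    while (got < len) {
        int n = s.Recv(buf + got, len - got);
        if (n <= 0) return false;  // closed; a real socket would also check errors
        got += n;
    }
    return true;
}
```

On a real socket the same loop applies, with WSAEWOULDBLOCK handling added for non-blocking sockets; sending a length prefix before each image block tells the receiver how many bytes to expect.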

Filtering by TCP SYN packets with npcap not working

I'm trying to sniff all TCP SYN packets received by any of my network adapters, and I tried doing so using the free npcap library available online.
You can see my code below:
pcap_if_t* allNetworkDevices;
std::vector<pcap_t*> networkInterfacesHandles;
std::vector<WSAEVENT> sniffEvents;

void packet_handler(u_char* user, const struct pcap_pkthdr* header, const u_char* packet) {
    cout << "here" << endl;
}

BOOL openAllInterfaceHandles()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* curNetworkHandle;
    if (pcap_findalldevs(&allNetworkDevices, errbuf) == -1) {
        printf("Error in pcap_findalldevs: %s\n", errbuf);
        return FALSE;
    }
    for (pcap_if_t* d = allNetworkDevices; d != NULL; d = d->next) {
        //curNetworkHandle = pcap_open(d->name, 65536, PCAP_OPENFLAG_PROMISCUOUS, 1000, NULL, errbuf);
        printf("%s\n", d->description);
        curNetworkHandle = pcap_open_live(d->name, BUFSIZ, 1, 1000, errbuf);
        if (curNetworkHandle == NULL) {
            printf("Couldn't open device %s: %s\n", d->name, errbuf);
            continue;
        }
        networkInterfacesHandles.push_back(curNetworkHandle);
        // Compile and set the filter
        struct bpf_program fp;
        char filter_exp[] = "(tcp[tcpflags] & tcp-syn) != 0";
        if (pcap_compile(curNetworkHandle, &fp, filter_exp, 1, PCAP_NETMASK_UNKNOWN) < 0) {
            printf("Couldn't parse filter %s: %s\n", filter_exp, pcap_geterr(curNetworkHandle));
            continue;
        }
        if (pcap_setfilter(curNetworkHandle, &fp) == -1) {
            printf("Couldn't install filter %s: %s\n", filter_exp, pcap_geterr(curNetworkHandle));
            continue;
        }
        // Create an event for the handle
        sniffEvents.push_back(pcap_getevent(curNetworkHandle));
    }
    return TRUE;
}

int main()
{
    openAllInterfaceHandles();
    while (TRUE)
    {
        DWORD result = WaitForMultipleObjects((DWORD)sniffEvents.size(), sniffEvents.data(), FALSE, INFINITE);
        if (result == WAIT_FAILED) {
            printf("Error in WaitForMultipleObjects: %lu\n", GetLastError());
            break;
        }
        // Dispatch packets for the handle associated with the triggered event
        int index = result - WAIT_OBJECT_0;
        pcap_dispatch(networkInterfacesHandles[index], -1, &packet_handler, NULL);
        if (cond)
        {
            cout << "done" << endl;
            break;
        }
    }
    while (!networkInterfacesHandles.empty())
    {
        pcap_close(networkInterfacesHandles.back());
        networkInterfacesHandles.pop_back();
    }
    pcap_freealldevs(allNetworkDevices);
    return 0;
}
cond is some condition I'm using which is irrelevant to the problem.
For some reason it never enters packet_handler, even when I receive TCP SYN packets (which I verify using Wireshark). I tried sending them both via loopback and from another PC on the same LAN.
Any help to find the problem would be greatly appreciated.

Calling avformat_find_stream_info() twice crashes

My version of FFmpeg is 4.4.
There is logic in my code that calls avformat_find_stream_info() twice in a row, but I don't understand why it crashes here. I tried single-step debugging, but it didn't help. Here is my simple code, which can be run directly:
#include <libavutil/timestamp.h>
#include <libavformat/avformat.h>

int main()
{
    av_log_set_level(AV_LOG_DEBUG);
    const char* in_filename_a = "aoutput.aac";
    AVFormatContext* ifmt_ctx_a = NULL;
    int ret = avformat_open_input(&ifmt_ctx_a, in_filename_a, 0, 0);
    if (ret < 0)
    {
        fprintf(stderr, "Could not open input_a %s", in_filename_a);
        return -1;
    }
    fprintf(stderr, "before ifmt_ctx_a->streams[0]=%p\n", (void*)ifmt_ctx_a->streams[0]);
    ret = avformat_find_stream_info(ifmt_ctx_a, 0);
    if (ret < 0)
    {
        fprintf(stderr, "Could not find input_a stream info");
        return -1;
    }
    fprintf(stderr, "after ifmt_ctx_a->streams[0]=%p\n", (void*)ifmt_ctx_a->streams[0]);
    /// crashed here
    ret = avformat_find_stream_info(ifmt_ctx_a, 0);
    if (ret < 0)
    {
        fprintf(stderr, "Could not find input_a stream info");
        return -1;
    }
}
There are a few hints implying that we shouldn't execute avformat_find_stream_info twice.
The documentation says: "examined packets may be buffered for later processing."
There is a chance that the first execution buffers a few packets, and the second execution tries to buffer the packets again without allocating additional space.
The console prints logging messages as:
Before avformat_find_stream_info() pos: 0 bytes read:65696 seeks:4 nb_streams:1
After avformat_find_stream_info() pos: 27420 bytes read:65696 seeks:4 frames:50
The messages imply that the position advances from 0 to 27420. The address of ifmt_ctx_a->streams[0] is the same, but the first execution of avformat_find_stream_info did some seeking (so the second execution is not the same as the first one).
Using the debugger, we can see that ifmt_ctx_a->pb[0].buf_ptr is increased after the first execution of avformat_find_stream_info.
Note:
I don't know if the crash is normal behavior or a bug in Libavformat library.
I didn't try looking at the source code of Libavformat.
For reading twice, you may close and reopen ifmt_ctx_a:
avformat_close_input(&ifmt_ctx_a);
ret = avformat_open_input(&ifmt_ctx_a, in_filename_a, 0, 0);
I don't see any reason to do that...
Another option is opening a second AVFormatContext:
AVFormatContext* ifmt_ctx_a = NULL;
int ret = avformat_open_input(&ifmt_ctx_a, in_filename_a, 0, 0);
if (ret < 0)
{
    fprintf(stderr, "Could not open input_a %s", in_filename_a);
    return -1;
}
ret = avformat_find_stream_info(ifmt_ctx_a, NULL);
if (ret < 0)
{
    fprintf(stderr, "Could not find input_a stream info");
    return -1;
}

//Opening another AVFormatContext:
////////////////////////////////////////////////////////////////////////////
AVFormatContext* ifmt_ctx_a2 = NULL;
ret = avformat_open_input(&ifmt_ctx_a2, in_filename_a, 0, 0);
if (ret < 0)
{
    fprintf(stderr, "Could not open input_a %s", in_filename_a);
    return -1;
}
ret = avformat_find_stream_info(ifmt_ctx_a2, NULL);
if (ret < 0)
{
    fprintf(stderr, "Could not find input_a stream info");
    return -1;
}
////////////////////////////////////////////////////////////////////////////

Non-blocking socket loses data on Windows

I have a non-blocking socket server that supports all connecting clients. It is multi-threaded and cross-compilable with GCC.
It works perfectly (as I want) on Linux, but when I use it on Windows and send a 70 MB file through it, around 20 MB of the file is lost.
All sockets are non-blocking, so I don't check or wait on the recv/send calls. It runs in a loop and sends whatever it receives, acting as a sort of echo server, but on Windows it loses data. I'm using Winsock 2.2 in WSAStartup.
What is wrong? How can I make the send calls wait/flush without ever blocking the recv calls (if that is the issue)?
Code pieces:
How I make it non-blocking:
u_long iMode = 1;
ioctlsocket(sock1, FIONBIO, &iMode);
ioctlsocket(sock2, FIONBIO, &iMode);
How I send/receive between two sockets:
for (;;)
{
    memset(buffer, 0, 8192);
    int count = recv(sock1, buffer, sizeof(buffer), 0);
    receiveResult = WSAGetLastError();
    if (receiveResult == WSAEWOULDBLOCK && count <= 0)
        continue;
    if (count <= 0)
    {
        closesocket(sock1);
        closesocket(sock2);
        return;
    }
    if (count > 0)
    {
        int retval = send(sock2, buffer, count, 0);
    }
}
int count = recv(sock1, buffer, sizeof(buffer), 0);
receiveResult = WSAGetLastError();
if (receiveResult == WSAEWOULDBLOCK && count <= 0)
When calling recv() or send(), WSAGetLastError() returns a meaningful value only if -1 (SOCKET_ERROR) was returned, but you are also checking it when 0 is returned. These functions do not set an error code when they return >= 0, so you need to separate those conditions.
Also, just because you have read X number of bytes does not guarantee that you will be able to send X number of bytes at one time, so you need to check send() for WSAEWOULDBLOCK until you have no more data to send.
Try something more like this:
bool keepLooping = true;
do
{
    int count = recv(sock1, buffer, sizeof(buffer), 0);
    if (count > 0)
    {
        // data received...
        char *p = buffer;
        do
        {
            int retval = send(sock2, p, count, 0);
            if (retval > 0)
            {
                p += retval;
                count -= retval;
            }
            else if (retval == 0)
            {
                // peer disconnected...
                keepLooping = false;
            }
            else if (WSAGetLastError() != WSAEWOULDBLOCK)
            {
                // a real error occurred...
                keepLooping = false;
            }
            else
            {
                // peer is not ready to receive...
                // optionally use select() to wait here until it is...
            }
        }
        while ((count > 0) && (keepLooping));
    }
    else if (count == 0)
    {
        // peer disconnected...
        keepLooping = false;
    }
    else if (WSAGetLastError() != WSAEWOULDBLOCK)
    {
        // a real error occurred...
        keepLooping = false;
    }
    else
    {
        // no data is available for reading...
        // optionally use select() to wait here until it is...
    }
}
while (keepLooping);

closesocket(sock1);
closesocket(sock2);
return;

File descriptor returned from socket is larger than FD_SETSIZE

I have a problem where the file descriptor returned from socket() gradually increases until it is larger than FD_SETSIZE.
My TCP server is continually shut down, which requires my client to close the socket and reconnect. The client then attempts to reconnect to the server by calling socket() to obtain a new file descriptor before calling connect().
However, it appears that every time I call socket() the returned file descriptor is incremented, and after a while it becomes larger than FD_SETSIZE, which is a problem because I use select() to monitor the socket.
Is it OK to reuse the first file descriptor returned from socket() for the connect() call even though the socket was closed? Or are there other workarounds?
Reconnect code (looping until connected):
int s = getaddrinfo(hostname, port, &hints, &result);
if (s != 0) { ... HANDLE ERROR ...}
...
struct addrinfo *rp;
int sfd;
for (rp = result; rp != NULL; rp -> ai_protocol)
{
    sfd = socket( rp->ai_family, rp->ai_sockettype, rp->ai_addrlen);
    if (sfd >= 0)
    {
        int res = connect(sfd, rp->ai_addr, rp->ai_addrlen);
        if (res != -1)
        {
            _sockFd = sfd;
            _connected = true;
            break;
        }
        else
        {
            close (sfd);
            break;
        }
    }
}
if (result != NULL)
{
    free(result);
}
Read Message code:
if (_connected)
{
    ...
    retval = select(n, &rec, NULL, NULL, &timeout);
    if (retval == -1)
    {
        ...
        _connected = false;
        close(_sockFd);
    }
    else if (retval)
    {
        if (FD_ISSET(_sockFd, &rec) == 0)
        {
            ....
            return;
        }
        int count = read(...);
        if (count)
        {
            ....
            return;
        }
        else
        {
            ....
            _connected = false;
            close(_sockFd);
        }
    }
}
You're not advancing through the address list: the for loop's third clause, `rp -> ai_protocol`, just reads a field and discards the value; it should be `rp = rp->ai_next`. The socket() call is also passing the wrong arguments; it should be `socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol)`. You're breaking out of the loop after the first failed connect() instead of trying the next address, and the getaddrinfo() result must be released with freeaddrinfo(result), not free(result). You're also not checking the result of read() for -1, so an error gets treated as received data.
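Regarding the FD_SETSIZE ceiling itself: select() cannot safely monitor a descriptor whose value is at or above FD_SETSIZE, but poll() has no such limit because it takes an explicit array of descriptors. A minimal POSIX sketch of the poll() pattern; a pipe stands in for the socket, and the function name is illustrative:

```cpp
#include <poll.h>
#include <unistd.h>

// Returns 1 if poll() reports the pipe's read end readable, else 0.
int demo_poll_readable(void) {
    int fds[2];
    if (pipe(fds) != 0) return 0;
    if (write(fds[1], "x", 1) != 1) {       // make data available
        close(fds[0]); close(fds[1]);
        return 0;
    }
    struct pollfd p;
    p.fd = fds[0];                          // any descriptor value works here,
    p.events = POLLIN;                      // no FD_SETSIZE ceiling
    int r = poll(&p, 1, 1000);
    int ok = (r == 1) && (p.revents & POLLIN);
    close(fds[0]);
    close(fds[1]);
    return ok;
}
```

Winsock offers an analogous WSAPoll() with the same pollfd-array interface, so the same approach works on Windows.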

Winsock nonblocking send() wait buffer. What is the correct method?

I have some questions about when the data needs to be stored in a wait buffer (while waiting for the FD_WRITE event).
This is my send function (fixed):
bool MyClass::DataSend(char *buf, int len)
{
    if (len <= 0 || m_Socket == INVALID_SOCKET) return false;
    if (m_SendBufferLen > 0)
    {
        if ((m_SendBufferLen + len) < MAX_BUFF_SIZE)
        {
            memcpy((m_SendBuffer + m_SendBufferLen), buf, len);
            m_SendBufferLen += len;
            return true;
        }
        else
        {
            // close the connection and log insufficient wait buffer size
            return false;
        }
    }
    int iResult;
    int nPosition = 0;
    int nLeft = len;
    while (true)
    {
        iResult = send(m_Socket, (char*)(buf + nPosition), nLeft, 0);
        if (iResult != SOCKET_ERROR)
        {
            if (iResult > 0)
            {
                nPosition += iResult;
                nLeft -= iResult;
            }
            else
            {
                // log 0 bytes sent
                break;
            }
        }
        else
        {
            if (WSAGetLastError() == WSAEWOULDBLOCK)
            {
                if ((m_SendBufferLen + nLeft) < MAX_BUFF_SIZE)
                {
                    // log data copied to the wait buffer
                    memcpy((m_SendBuffer + m_SendBufferLen), (buf + nPosition), nLeft);
                    m_SendBufferLen += nLeft;
                    return true;
                }
                else
                {
                    // close the connection and log insufficient wait buffer size
                    return false;
                }
            }
            else
            {
                // close the connection and log winsock error
                return false;
            }
        }
        if (nLeft <= 0) break;
    }
    return true;
}
My send (FD_WRITE event) function (fixed):
bool MyClass::DataSendEvent()
{
    if (m_SendBufferLen < 1) return true;
    int iResult;
    int nPosition = 0;
    int nLeft = m_SendBufferLen;
    while (true)
    {
        iResult = send(m_Socket, (char*)(m_SendBuffer + nPosition), nLeft, 0);
        if (iResult != SOCKET_ERROR)
        {
            if (iResult > 0)
            {
                nPosition += iResult;
                nLeft -= iResult;
            }
            else
            {
                // log 0 bytes sent
                break;
            }
        }
        else
        {
            if (WSAGetLastError() == WSAEWOULDBLOCK)
            {
                if (nPosition > 0)
                {
                    memmove(m_SendBuffer, (m_SendBuffer + nPosition), (m_SendBufferLen - nPosition));
                    m_SendBufferLen -= nPosition;
                }
                break;
            }
            else
            {
                // close the connection and log winsock error
                return false;
            }
        }
        if (nLeft <= 0)
        {
            if (m_SendBufferLen == nPosition)
            {
                m_SendBufferLen = 0;
                break;
            }
            else
            {
                memmove(m_SendBuffer, (m_SendBuffer + nPosition), (m_SendBufferLen - nPosition));
                m_SendBufferLen -= nPosition;
                nPosition = 0;
                nLeft = m_SendBufferLen;
            }
        }
    }
    return true;
}
Do I really need the if (nPosition > 0) check or not? How do I simulate this scenario? Is it possible for send() in non-blocking mode to send fewer bytes than requested? If not, why use the while() loop?
This is the final code (thanks to @user315052).
At the top of your while loop, you are already decrementing nLeft, so you don't need to subtract nPosition from it.
iResult = send(m_Socket, (char*)(buf + nPosition), nLeft, 0);
In your second function, when you are shifting the unsent bytes to the beginning of the array, you should use memmove, since you have overlapped regions (you are copying a region of m_SendBuffer into itself). The overlap is illustrated below, where some of the A's would get copied onto itself.
m_SendBuffer: [XXXXAAAAAAAAAAAA]
nPosition: 4
nLeft: 12
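The overlapped copy can be checked with a small standalone snippet (illustrative names): memcpy on overlapping regions would be undefined behavior, which is why memmove is required here.

```cpp
#include <cstring>
#include <string>

// Shift the 12 unsent bytes (the A's) over the 4 already-sent bytes (the X's),
// the same operation DataSendEvent() performs on m_SendBuffer.
std::string demo_shift() {
    char buf[17] = "XXXXAAAAAAAAAAAA";
    int nPosition = 4;                       // bytes already sent
    int nUnsent = 12;                        // bytes still pending
    std::memmove(buf, buf + nPosition, nUnsent);  // regions overlap: memmove, not memcpy
    return std::string(buf, nUnsent);
}
```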
I am a little confused about why DataSend() is implemented to let the caller keep calling it successfully even after WSAEWOULDBLOCK is encountered. I would suggest modifying the interface to return a result that tells the caller to stop sending and to wait for an indication before resuming.
You don't need the nPosition > 0 check.
You can force the case to occur by having the receiver of the data not read anything.
It is definitely possible for send in non-blocking mode to send fewer bytes than requested.
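That last point is easy to demonstrate: with a peer that never reads, the kernel's send buffer fills, and a non-blocking send() soon returns either a short count or a would-block error. A POSIX sketch, where socketpair() stands in for the Winsock connection (on Windows the same experiment would use ioctlsocket(FIONBIO) and check for WSAEWOULDBLOCK):

```cpp
#include <cerrno>
#include <cstring>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

// Fill a non-blocking socket whose peer never reads; returns 1 once a short
// write or EWOULDBLOCK/EAGAIN is observed, 0 on unexpected failure.
int demo_force_would_block(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 0;
    fcntl(sv[0], F_SETFL, O_NONBLOCK);        // writer side is non-blocking
    char chunk[65536];
    std::memset(chunk, 'A', sizeof(chunk));
    int result = 0;
    for (int i = 0; i < 4096; i++) {          // peer sv[1] never reads
        ssize_t n = send(sv[0], chunk, sizeof(chunk), 0);
        if (n < 0) {                          // buffer full: would block
            result = (errno == EWOULDBLOCK || errno == EAGAIN);
            break;
        }
        if (n < (ssize_t)sizeof(chunk)) {     // partial send observed
            result = 1;
            break;
        }
    }
    close(sv[0]);
    close(sv[1]);
    return result;
}
```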