Potential crash caused by the 'select' syscall in the Chilkat lib when a socket fd is larger than 1024 - chilkat

We are using the Chilkat lib on Linux x64 to fetch some web resources from the internet. We noticed that the Chilkat lib calls 'select' to monitor socket events; from gdb attached to my program, the following functions called 'select':
ChilkatSocket::waitWriteableMsHB(unsigned int, bool, bool, SocketParams&, LogBase&) ()
ChilkatSocket::waitReadableMsHB(unsigned int, SocketParams&, LogBase&) ()
Wouldn't this lead to an FD_SET overflow when a socket fd is larger than FD_SETSIZE (1024 on Linux), and then crash the program? Is it necessary to use a modern syscall such as 'poll' or 'epoll' instead?
Thanks

Chilkat only uses the select system call when the socket fd value is less than FD_SETSIZE. If the fd value is 1024 or greater, it always uses poll. There is no worry of an FD_SET overflow, because Chilkat chooses "poll" in every case where it matters.
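For reference, here is a minimal sketch of that guard; this is our own illustration (the name waitReadable is hypothetical), not Chilkat's actual code:
#include <poll.h>
#include <sys/select.h>
/* Wait until fd is readable or timeoutMs elapses.
 * Returns >0 if readable, 0 on timeout, -1 on error. */
int waitReadable(int fd, int timeoutMs)
{
    if (fd >= FD_SETSIZE) {
        /* fd would overflow an fd_set, so use poll() instead */
        struct pollfd p;
        p.fd = fd;
        p.events = POLLIN;
        p.revents = 0;
        return poll(&p, 1, timeoutMs);
    }
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    struct timeval tv;
    tv.tv_sec  = timeoutMs / 1000;
    tv.tv_usec = (timeoutMs % 1000) * 1000;
    return select(fd + 1, &readfds, NULL, NULL, &tv);
}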

Related

Short read errors while performing beast::http::async_read operation

Logs:
OnWrite(): bytes_transferred=343
OnRead(): bytes_transferred=0
OnRead(): Recv:
HTTP/1.1 200 OK
fail(): OnRead: short read
I am getting short read errors while performing a beast::http::async_read operation.
Basically, I first perform an async_write operation to send data to the server. Then, to read the response, I perform an async_read operation.
As the logs suggest, no bytes are read in async_read, resulting in a short read error. I want to understand why this could happen.
In async_read:
Buffer used: beast::flat_buffer
Response message used: beast::http::response; I am also clearing the response message in the async_read callback.
Without seeing the actual code, it's hard to diagnose the issue. Here's some general information that can help you understand the cause and potentially fix it.
short_read doesn't happen when there's not enough buffer space; it happens when the data received doesn't satisfy the expected amount.
Short reads are normal, see Streams, Short Reads and Short Writes.
So use the appropriate read primitives to handle expected short reads:
Data may be read from or written to a connected TCP socket using the receive(), async_receive(), send() or async_send() member functions. However, as these could result in short writes or reads, an application will typically use the following operations instead: read(), async_read(), write() and async_write().
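For illustration, here is a synchronous sketch of a composed read that treats the expected end-of-stream short read as success (this is not the asker's code; sock is assumed to be a connected boost::asio::ip::tcp::socket):
#include <boost/asio.hpp>
#include <vector>
boost::system::error_code ec;
std::vector<char> buf(4096);
// read() fills the whole buffer or reports why it stopped
std::size_t n = boost::asio::read(sock, boost::asio::buffer(buf), ec);
if (ec == boost::asio::error::eof)
    ec = {};  // peer closed cleanly; the first n bytes are still valid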
The usual place where short reads crop up as an error condition is in SSL streams. OpenSSL generates the error condition¹ as stream_errors::stream_truncated:
stream_truncated
The underlying stream closed before the ssl stream gracefully shut down.
It can naturally occur when the peer doesn't negotiate a proper shutdown.
Note: SSLv2 doesn't support a protocol-level shutdown, so an eof will be passed through as eof instead.
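A minimal sketch of handling this in a completion handler (the names on_read and fail are our own, not the asker's):
#include <boost/asio/ssl/error.hpp>
#include <boost/beast/core/error.hpp>
void on_read(boost::beast::error_code ec, std::size_t /*bytes_transferred*/)
{
    if (ec == boost::asio::ssl::error::stream_truncated)
        ec = {};  // peer skipped the TLS close_notify; harmless if the
                  // HTTP response was already complete
    if (ec)
        return;   // a real error: report it, e.g. fail(ec, "read")
    // ... consume the response ...
}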
Relevant version history
Previous releases fixed bugs related to possible short reads:
Fixed an ssl::stream<> bug that may result in spurious 'short read' errors.
¹ For OpenSSL < v1.1, via SSL_R_SHORT_READ
See also
Existing posts on this site, e.g. Short read error-Boost asio synchoronous https call

Behavior of Select and FD_SET When Fd is Bigger than 1024

As far as I know, select supports no more than 1024 sockets. But a process can own 65535 sockets, which means most socket numbers are bigger than 1024, so I have three questions:
Q1. What will happen if passing socket numbers bigger than 1024 to FD_SET()?
Q2. What will happen if passing fd_set whose socket numbers are all bigger than 1024 to select()?
Q3. On Linux Fedora with kernel 2.6.8, x86 64bit, will exceptions be thrown in Q1 and Q2?
An fd_set is an array of bits, only manipulated with FD_* macros because C doesn't have a "bit" type. (The type is officially opaque, and could be implemented a different way - in fact winsock does implement it differently - but all unix-like OSes use the array of bits.)
So this code:
fd_set my_fds;
....
FD_SET(1024, &my_fds);
has the same problem as this code:
char my_fds[1024];
....
my_fds[1024] = 1;
assuming FD_SETSIZE is 1024.
You will be overwriting whatever comes after the fd_set in memory, causing a segfault if you're lucky, or more subtle errors if you're not.
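The practical defense is to range-check before every FD_SET, as in this sketch:
if (fd >= FD_SETSIZE) {
    /* cannot be monitored with select(); use poll()/epoll instead */
    return -1;
}
FD_SET(fd, &my_fds);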

WriteFile in pure DOS mode?

As we know, VC's WriteFile() writes data to the specified file or I/O device in the OS (see WriteFile).
I want to know if there is such an API in pure DOS for this purpose. (Using Watcom C...)
Then I found _dos_write() on page 197 of the Watcom C library reference (see _dos_write()); it uses DOS system call 0x40 to write count bytes of data from the buffer pointed to by buffer to the file specified by handle.
The count parameter is an unsigned (16-bit) type, which means the maximum byte count per call is 65535.
My question is: is there any other API that can transfer more than 65535 bytes "at once" (like WriteFile() does) in pure DOS?
P.S. It is NOT about the command prompt in Windows!
65535 bytes is only the limit on how many bytes can be written/read at a time with one call. If the file is not closed, simply call write/read again with another buffer location; the file pointer will have moved past the previous 65535 bytes. As Jerry Coffin said, we just have to use multiple calls before we close the file handle.
Dirk
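A minimal sketch of the loop Dirk describes (Watcom C; the name write_all is our own, and the __huge pointer keeps the buffer arithmetic valid across 64K segment boundaries):
#include <dos.h>
int write_all(int handle, const char __huge *buf, unsigned long total)
{
    unsigned written;
    while (total > 0) {
        unsigned chunk = (total > 65535UL) ? 65535U : (unsigned)total;
        /* INT 21h, AH=40h under the hood; at most 65535 bytes per call */
        if (_dos_write(handle, buf, chunk, &written) != 0 || written == 0)
            return -1;  /* DOS error or disk full */
        buf   += written;   /* DOS advances the file pointer for us */
        total -= written;
    }
    return 0;
}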

Threads are blocked in malloc and free, virtual size

I'm running a 64-bit multi-threaded program on Windows Server 2003 (x64). It has run into a case where some threads seem to be blocked in malloc or free forever. The stack traces are as follows:
ntdll.dll!NtWaitForSingleObject() + 0xa bytes
ntdll.dll!RtlpWaitOnCriticalSection() - 0x1aa bytes
ntdll.dll!RtlEnterCriticalSection() + 0xb040 bytes
ntdll.dll!RtlpDebugPageHeapAllocate() + 0x2f6 bytes
ntdll.dll!RtlDebugAllocateHeap() + 0x40 bytes
ntdll.dll!RtlAllocateHeapSlowly() + 0x5e898 bytes
ntdll.dll!RtlAllocateHeap() - 0x1711a bytes
MyProg.exe!malloc(unsigned __int64 size=0) Line 168 C
MyProg.exe!operator new(unsigned __int64 size=1) Line 59 + 0x5 bytes C++
ntdll.dll!NtWaitForSingleObject()
ntdll.dll!RtlpWaitOnCriticalSection()
ntdll.dll!RtlEnterCriticalSection()
ntdll.dll!RtlpDebugPageHeapFree()
ntdll.dll!RtlDebugFreeHeap()
ntdll.dll!RtlFreeHeapSlowly()
ntdll.dll!RtlFreeHeap()
MyProg.exe!free(void * pBlock=0x000000007e8e4fe0) C
BTW, the parameter values shown as passed to operator new are probably not correct here, maybe due to optimization.
Also, at the same time, I found in Process Explorer that the virtual size of this program is 10GB, but the private bytes and working set are very small (<2GB). We do have some threads using VirtualAlloc, but in a way that commits the memory in the call, and those threads are not blocked:
m_pBuf = VirtualAlloc(NULL, m_size, MEM_COMMIT, PAGE_READWRITE);
......
VirtualFree(m_pBuf, 0, MEM_RELEASE);
This looks strange to me: it seems a lot of virtual space is reserved but not committed, and malloc/free is blocked on a lock. I'm wondering whether there is corruption in memory or objects, so I plan to turn on gflags with PageHeap to troubleshoot this.
Does anyone have similar experience with this? Could you share it so I can get more hints?
Thanks a lot!
Your program is using PageHeap, which is intended for debugging only and imposes a ton of memory overhead. To see which programs have PageHeap activated, run this at a command line:
% Gflags.exe /p
To disable it for your process, type this (for MyProg.exe):
% Gflags.exe /p /disable MyProg.exe
Pageheap.exe detects most heap-related bugs - try Pageheap
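For completeness, if you later want full PageHeap on deliberately (for example, to catch the suspected corruption), the usual invocation with the same tool looks like this; adjust the image name as needed:
% Gflags.exe /p /enable MyProg.exe /full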
Also, you should look into "the param values passed to the new ...": does this corruption occur in debug mode? Make sure all optimizations are disabled.
If your system is running out of memory, the OS may be swapping. That means that for a single allocation, in the worst case, the OS could need to locate the best candidate for swapping, write it to disk, free the memory, and return it. Are you sure it is deadlocked, or might it just be performing very slowly? Could another thread be swapping memory to disk while these two threads wait for their calls to malloc/free to complete?
My preferred solution for debugging leaks in native applications is to use UMDH to get consecutive snapshots of the user-mode heap(s) in the process, and then run UMDH again to diff the snapshots. Any pattern of change in the snapshots is likely a leak.
You get a count and size of memory blocks bucketed by their allocating callstack so it's reasonably straightforward to see where the biggest hogs are.
The user-mode dump heap (UMDH) utility works with the operating system to analyze Windows heap allocations for a specific process.
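A typical workflow looks like this sketch (the PID, symbol path, and file names are examples):
% set _NT_SYMBOL_PATH=srv*C:\symbols*https://msdl.microsoft.com/download/symbols
% umdh -p:1234 -f:snap1.txt
(let the process run for a while)
% umdh -p:1234 -f:snap2.txt
% umdh snap1.txt snap2.txt > diff.txt
Any callstack bucket that keeps growing between snapshots is a likely leak.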

Is it safe to cast SOCKET to int under Win64?

I’m working on a Windows port of a POSIX C++ program.
The problem is that standard POSIX functions like accept() or bind() expect an ‘int’ as the first parameter while its WinSock counterparts use ‘SOCKET’.
When compiled for 32-bit everything is fine, because both are 32-bit, but under Win64 SOCKET is 64-bit while int remains 32-bit, and this generates a lot of compiler warnings like this:
warning C4244: '=' : conversion from 'SOCKET' to 'int', possible loss of data
I tried to work around the issue by using a typedef:
#ifdef _WIN32
typedef SOCKET sock_t;
#else
typedef int sock_t;
#endif
and replacing ‘int’s with sock_t in the appropriate places.
This was fine until I reached a part of the code which calls OpenSSL APIs.
As it turned out, OpenSSL uses ints for sockets even on Win64. That seemed really strange, so I started searching for an answer, but the only thing I found was an old post on the openssl-dev mailing list which referred to a comment in e_os.h:
/*
* Even though sizeof(SOCKET) is 8, it's safe to cast it to int, because
* the value constitutes an index in per-process table of limited size
* and not a real pointer.
*/
So my question is:
is it really safe to cast SOCKET to int?
I’d like to see some kind of documentation which proves that values for SOCKET can't be larger than 2^32.
Thanks in advance!
Ryck
This post seems to be repeating the information on kernel objects at MSDN:
Kernel object handles are process specific. That is, a process must either create the object or open an existing object to obtain a kernel object handle. The per-process limit on kernel handles is 2^24.
The thread goes on to cite Windows Internals by Russinovich and Solomon as a source for the high bits being zero.
The simple answer to this question is no. Take a look at the description of SOCKET values on MSDN [1]:
Windows Sockets handles have no restrictions, other than that the value INVALID_SOCKET is not a valid socket. Socket handles may take any value in the range 0 to INVALID_SOCKET–1.
So clearly, the API allows all values in the range [0, 2^64 - 1) on 64-bit Windows. And if the API ever returned a value greater than 2^32 - 1, assigning it to an int would result in handle truncation. Also have a look at the description of the return value from the socket() function [2]:
If no error occurs, socket returns a descriptor referencing the new socket.
Notice that it most emphatically does not promise to return a kernel handle. This makes any discussion about the possible values for kernel handles moot.
All that being said, as of this writing, the socket() function really does return a kernel handle (or something indistinguishable from a kernel handle) [3] and kernel handles really are limited to 32-bits [4]. But keep in mind that Microsoft could change any of these things tomorrow without breaking their interface contracts.
However, since a doubtless large number of applications have taken a dependency on these particular implementation details (and, more importantly, so has OpenSSL), Microsoft would likely think twice about making any breaking changes. So go ahead and cast a SOCKET to an int. Just keep in mind that this is an inherently dangerous, bad practice, justifiable only in the name of expediency.
[1] http://msdn.microsoft.com/en-us/library/windows/desktop/ms740516(v=vs.85).aspx
[2] http://msdn.microsoft.com/en-us/library/windows/desktop/ms740506(v=vs.85).aspx
[3] http://msdn.microsoft.com/en-us/library/windows/desktop/ms742295(v=vs.85).aspx
[4] http://msdn.microsoft.com/en-us/library/windows/desktop/aa384267(v=vs.85).aspx
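If you do cast, a safer middle ground is a checked narrowing helper before handing a SOCKET to an int-based API such as OpenSSL's SSL_set_fd. A sketch (the name checked_fd is our own):
#include <winsock2.h>
#include <limits>
#include <stdexcept>
int checked_fd(SOCKET s)
{
    // Refuse to truncate rather than silently corrupt the handle.
    // (Extra parentheses dodge the max() macro from Windows headers.)
    if (s > static_cast<SOCKET>((std::numeric_limits<int>::max)()))
        throw std::runtime_error("SOCKET value does not fit in int");
    return static_cast<int>(s);
}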
Edit (2018-01-29)
Since this topic still seems to be of some interest, it's worth pointing out that it's quite easy to write portable sockets code in C++11 without resorting to questionable type casts:
using socket_t = decltype(socket(0, 0, 0));
socket_t s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
/*
* Even though sizeof(SOCKET) is 8, it's safe to cast it to int, because
* the value constitutes an index in per-process table of limited size
* and not a real pointer.
*/
This comment is correct. A SOCKET is a file handle on the Windows NT series, and I've never seen a 64-bit 9x series, so there should be nothing to worry about.