I am trying to replicate a real-time application on a Windows computer so that debugging and making changes is easier, but I ran into an issue with delayed ACK. I have already disabled Nagle and confirmed that it improves the speed a bit. When sending lots of small packets, Windows doesn't ACK right away and delays it by 200 ms. Doing more research about it, I came across this. The problem with changing the registry value is that it affects the whole system rather than just the application I am working on. Is there any way to disable delayed ACK per socket on Windows, like TCP_QUICKACK on Linux via setsockopt? I tried hard-coding 12 as the option value, but got WSAEINVAL from WSAGetLastError.
I saw a dev on GitHub mention using SIO_TCP_SET_ACK_FREQUENCY, but I couldn't find any example of how to actually use it.
So I tried the following:
#define SIO_TCP_SET_ACK_FREQUENCY _WSAIOW(IOC_VENDOR,23)
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, 0, 0, info, sizeof(info), &bytes, 0, 0);
and I got WSAEFAULT as an error code. Please help!
I've seen several references online suggesting that TCP_QUICKACK may actually be supported by Winsock via setsockopt() (option 12), even though it is not documented or officially confirmed anywhere.
But, regarding your actual question, your use of SIO_TCP_SET_ACK_FREQUENCY is failing because you are not providing any input buffer to WSAIoctl() to set the actual frequency value. Try something like this:
int freq = 1; // can be 1..255, default is 2
result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY, &freq, sizeof(freq), NULL, 0, &bytes, NULL, NULL);
Note that SIO_TCP_SET_ACK_FREQUENCY is available in Windows 7 / Server 2008 R2 and later.
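Putting that together with the IOCTL definition from your question, the whole sequence would look something like the sketch below (not tested here; it assumes the usual winsock2.h setup and a connected SOCKET named sock; by most accounts a frequency of 1 means an ACK for every segment, i.e. delayed ACK is effectively off for that socket):
// Vendor IOCTL, not in the public headers, so defined by hand (same value as in the question):
#define SIO_TCP_SET_ACK_FREQUENCY _WSAIOW(IOC_VENDOR, 23)

int freq = 1;                  // 1..255; 1 = ACK every segment, default is 2
DWORD bytes = 0;
int result = WSAIoctl(sock, SIO_TCP_SET_ACK_FREQUENCY,
                      &freq, sizeof(freq),   // input buffer carries the frequency
                      NULL, 0,               // no output buffer needed
                      &bytes, NULL, NULL);
if (result == SOCKET_ERROR)
    printf("SIO_TCP_SET_ACK_FREQUENCY failed: %d\n", WSAGetLastError());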
Is there simple example code that shows how to create a non-blocking, network-based BIO from scratch? I do not need verification or renegotiation right now, just to get data flowing back and forth first.
I'm trying to use OpenSSL on top of an already existing abstraction over epoll, sockets, and buffers. I already have machinery for all of that and am trying to create a BIO over it, but I cannot for the life of me get it to work. I already have an established TCP connection, so I need to wrap it in a source/sink BIO and then do the handshake.
The current state is the dreaded scenario where SSL_connect returns -1, SSL_get_error returns 5 (SSL_ERROR_SYSCALL), and errno reads SUCCESS. I have seen others hit the same problem, but not a single answer.
The reason for doing this instead of just using a memory BIO to shuttle bytes back and forth is that the rest of the stack is fairly well optimized, and I don't want to do the extra copying.
My first idea was just to implement a BIO over the in and out buffers I already maintain, but I cannot get that to work either. There are a lot of questions floating around this site and others; some have outdated answers, and most have no working answers when they have answers at all.
Well, interesting. I would have made this a comment but it's too long.
I think this might boil down to the version of OpenSSL you are using. I took my existing blocking socket implementation and did this (sorry, it's a bit quick and dirty and doesn't bother to clear the error stack as it should, but that doesn't seem to be causing a problem. The busy loop is intentional, to hammer OpenSSL as hard as possible, but I also tried it with a short sleep and the result was the same):
u_long non_blocking = 1;
int ioctl_err = ioctlsocket (skt, FIONBIO, &non_blocking);
assert (ioctl_err == 0);
int connect_result = 0;
int connect_err = 0;
for ( ; ; )
{
    connect_result = SSL_connect (ssl);
    if (connect_result >= 0)
        break;
    connect_err = SSL_get_error (ssl, connect_result);
    if (connect_err != SSL_ERROR_WANT_READ && connect_err != SSL_ERROR_WANT_WRITE)
        break; // I put a breakpoint here; it was never hit
}
non_blocking = 0; // back to blocking, so that the rest of my code still works
ioctl_err = ioctlsocket (skt, FIONBIO, &non_blocking);
if (connect_result <= 0) // this never happened either
    do_something_appropriate ();
// Various (synchronous) calls to `SSL_read` and `SSL_write`, these all worked fine
Now I know this isn't exactly what you are doing, but from OpenSSL's point of view that shouldn't matter. So, for me, I can get it to work.
Testing environment:
Windows (sorry!)
OpenSSL 3.0.1 (which, if not the latest, is not far off)
tl;dr If you're using the OpenSSL libraries that came with your Linux installation, it might be time to move on. Plus, if you build it from source you can build it with debug info, which might come in handy some time (I've actually built two versions for exactly this reason - one optimised and one not).
So that's it. HTH. Looks like you might be OK after all.
PS: I'm obviously not doing anything with BIOs here, and maybe your problem lies there instead. If so, we need some self-contained sample code using those which exhibits the problems you are having. Then, perhaps, someone can suggest a solution.
Define BIO method:
BIO_meth_new
BIO_meth_set_write_ex
BIO_meth_set_read_ex
BIO_meth_set_ctrl
BIO_meth_free
Create BIO:
BIO_new
BIO_set_data
BIO_free
Associate BIO with SSL:
SSL_set_bio
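To make that list concrete, below is a minimal sketch of how those pieces fit together. It runs the custom BIO over a plain non-blocking socket fd purely for illustration; in your case the read/write callbacks would talk to your existing in/out buffers instead, and wrap_fd is just a made-up helper name. The two details that most often cause trouble are the retry flags and a ctrl handler that doesn't acknowledge BIO_CTRL_FLUSH.
#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>

/* Read/write callbacks over a non-blocking fd; swap in your own buffer layer here. */
static int my_read_ex(BIO *b, char *buf, size_t len, size_t *readbytes)
{
    int fd = (int)(intptr_t)BIO_get_data(b);
    BIO_clear_retry_flags(b);
    ssize_t n = recv(fd, buf, len, 0);
    if (n > 0) { *readbytes = (size_t)n; return 1; }
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        BIO_set_retry_read(b);              /* so SSL_get_error() reports WANT_READ */
    return 0;
}

static int my_write_ex(BIO *b, const char *buf, size_t len, size_t *written)
{
    int fd = (int)(intptr_t)BIO_get_data(b);
    BIO_clear_retry_flags(b);
    ssize_t n = send(fd, buf, len, 0);
    if (n > 0) { *written = (size_t)n; return 1; }
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        BIO_set_retry_write(b);             /* so SSL_get_error() reports WANT_WRITE */
    return 0;
}

static long my_ctrl(BIO *b, int cmd, long num, void *ptr)
{
    (void)b; (void)num; (void)ptr;
    return (cmd == BIO_CTRL_FLUSH) ? 1 : 0; /* FLUSH must report success or the handshake stalls */
}

SSL *wrap_fd(SSL_CTX *ctx, int fd)          /* illustrative helper name */
{
    /* In real code, create the BIO_METHOD once and reuse it for every connection. */
    BIO_METHOD *meth = BIO_meth_new(BIO_get_new_index() | BIO_TYPE_SOURCE_SINK, "my-bio");
    BIO_meth_set_read_ex(meth, my_read_ex);
    BIO_meth_set_write_ex(meth, my_write_ex);
    BIO_meth_set_ctrl(meth, my_ctrl);

    BIO *bio = BIO_new(meth);
    BIO_set_data(bio, (void *)(intptr_t)fd);
    BIO_set_init(bio, 1);                   /* without this, reads/writes fail immediately */

    SSL *ssl = SSL_new(ctx);
    SSL_set_bio(ssl, bio, bio);             /* the SSL object now owns the BIO */
    return ssl;
}
If a callback returns 0 without a retry flag set, OpenSSL has no way of knowing it should come back later and treats it as a hard failure, which is one common way to end up with SSL_get_error returning SSL_ERROR_SYSCALL while errno still says SUCCESS.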
I am currently using the BlueZ DBus API to scan for BLE devices but it stops responding completely after varying intervals. Sometimes it's minutes, other times it's one or two hours.
My assumption is that I am forgetting to do something. The strange thing is that after I exit the application, tools like bluetoothctl and hciconfig also no longer respond. Sometimes a reboot is not enough to get it working again and I need to power-cycle the machine. It happens on several different machines as well.
I am acquiring the bus using:
GError* error = nullptr;
mConnection = g_bus_get_sync(G_BUS_TYPE_SYSTEM, nullptr, &error);
Starting the loop:
mLoop = g_main_loop_new(nullptr, false);
g_main_loop_run(mLoop);
Then I power on the adapter by setting the Powered property to true and calling StartDiscovery. Devices are then reported through:
guint iface_add_sub = g_dbus_connection_signal_subscribe(mConnection, "org.bluez", "org.freedesktop.DBus.ObjectManager", "InterfacesAdded", nullptr, nullptr, G_DBUS_SIGNAL_FLAGS_NONE, device_appeared, this, nullptr);
guint iface_remove_sub = g_dbus_connection_signal_subscribe(mConnection, "org.bluez", "org.freedesktop.DBus.ObjectManager", "InterfacesRemoved", nullptr, nullptr, G_DBUS_SIGNAL_FLAGS_NONE, device_disappeared, this, nullptr);
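For completeness, the power-on and discovery calls look roughly like this (simplified; the adapter path /org/bluez/hci0 is hard-coded here):
GError* err = nullptr;
// Set org.bluez.Adapter1.Powered = true via org.freedesktop.DBus.Properties
GVariant* reply = g_dbus_connection_call_sync(mConnection,
    "org.bluez", "/org/bluez/hci0",
    "org.freedesktop.DBus.Properties", "Set",
    g_variant_new("(ssv)", "org.bluez.Adapter1", "Powered", g_variant_new_boolean(TRUE)),
    nullptr, G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &err);
if (reply)
    g_variant_unref(reply);

// Then start scanning
reply = g_dbus_connection_call_sync(mConnection,
    "org.bluez", "/org/bluez/hci0",
    "org.bluez.Adapter1", "StartDiscovery",
    nullptr, nullptr, G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &err);
if (reply)
    g_variant_unref(reply);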
Is there anything I am missing that would prevent BlueZ from becoming unresponsive?
bluetoothd -v = 5.53
ldd --version = GLIBC 2.31-0ubuntu9.2
BlueZ cannot scan indefinitely; it will eventually crash, as you are observing. In my opinion it is a bug…
What I do is scan for 8 seconds and then stop the scan. BlueZ will then clean up its internal cache. After allowing 1 second for the cleanup, I restart the scan.
This way it holds up much longer. Nonetheless, after a week or so it might still crash…
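In code, the cycle is just two timers bouncing off each other. A rough sketch with GDBus (the adapter path /org/bluez/hci0 and the exact timings are assumptions you should adapt):
#include <gio/gio.h>

#define ADAPTER_PATH "/org/bluez/hci0"

static gboolean start_scan(gpointer user_data);

/* Stop discovery, give bluetoothd a second to clean up, then start again. */
static gboolean stop_scan(gpointer user_data)
{
    GDBusConnection* conn = static_cast<GDBusConnection*>(user_data);
    g_dbus_connection_call(conn, "org.bluez", ADAPTER_PATH,
        "org.bluez.Adapter1", "StopDiscovery",
        nullptr, nullptr, G_DBUS_CALL_FLAGS_NONE, -1, nullptr, nullptr, nullptr);
    g_timeout_add_seconds(1, start_scan, conn);
    return G_SOURCE_REMOVE;
}

/* Start discovery and schedule the stop after 8 seconds. */
static gboolean start_scan(gpointer user_data)
{
    GDBusConnection* conn = static_cast<GDBusConnection*>(user_data);
    g_dbus_connection_call(conn, "org.bluez", ADAPTER_PATH,
        "org.bluez.Adapter1", "StartDiscovery",
        nullptr, nullptr, G_DBUS_CALL_FLAGS_NONE, -1, nullptr, nullptr, nullptr);
    g_timeout_add_seconds(8, stop_scan, conn);
    return G_SOURCE_REMOVE;
}

// Kick it off once before g_main_loop_run() instead of a single StartDiscovery call:
//     start_scan(mConnection);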
I am trying to initialise a sound port from PJSIP/PJSUA with the standard pjmedia_snd_port_create, but the result is never successful.
pj_caching_pool_init(&cp, &pj_pool_factory_default_policy, 0);
pool = pj_pool_create(&cp.factory,
                      "pool1",
                      4000,
                      4000,
                      NULL);
pjmedia_snd_port *snd_port1 = NULL;
status = pjmedia_snd_port_create(pool, id1, id1, clock_rate,
channel_count, samples_per_frame,
bits_per_sample,
0, &snd_port1);
My device id1 is 0, as I got it from the audio device manager. I've tried -1 for the defaults, but it always fails. I have an endpoint created with the pjsua2 API from a C++ class, the lib is OK and running, and I can create conference bridges, but the sound port fails. A bit of a hint would be great.
I've fixed it with an initialising loop; I guess it has nothing to do with the setup. What I actually needed was to register all of my playback and recording devices from my hardware.
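For reference, enumerating what the audio subsystem can see looks roughly like this (a sketch against the pjmedia audio-device API from pjmedia-audiodev/audiodev.h, reusing cp and status from the snippet above; error handling omitted):
/* Initialise the audio device subsystem, then list every playback/recording device. */
status = pjmedia_aud_subsys_init(&cp.factory);

unsigned dev_count = pjmedia_aud_dev_count();
for (unsigned i = 0; i < dev_count; ++i) {
    pjmedia_aud_dev_info info;
    if (pjmedia_aud_dev_get_info(i, &info) == PJ_SUCCESS) {
        printf("dev %u: %s (in=%u, out=%u)\n",
               i, info.name, info.input_count, info.output_count);
    }
}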
I'm trying to write some code to read from an Isochronous pipe using LibUsbK in Win32. I have successfully initialised the device into the correct state to send and receive Isochronous data and I can see data being sent over the USB in my hardware USB analyser, but the buffers I am receiving are always unfilled even though the analyser shows that there was data in the packets sent to the PC.
I'm new to LibUsbK and to isochronous transfers (though not to USB in general), and I've been really struggling with this.
The code I'm using to read from the device is something like this...
UsbK_SelectInterface(usbHandle,1,0);
UsbK_SetAltInterface(usbHandle,1,0,1);
IsoK_Init(&isoCtx, ISO_PACKETS_PER_XFER, 0);
IsoK_SetPackets(isoCtx, ISO_PACKET_SIZE); // Size of each individual packet
OvlK_Init(&ovlPool, usbHandle, 4, 0);
OvlK_ResetPipe(usbHandle, 0x83);
OvlK_Acquire(&ovlkHandle, ovlPool);
UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
while (!finished)
{
    if (OvlK_IsComplete(ovlkHandle))
    {
        fwrite(inBuffer, sizeof(inBuffer), 1, outFile);
        memset(inBuffer, 0xcc, sizeof(inBuffer));
        OvlK_ReUse(ovlkHandle);
        UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
    }
}
If I put a breakpoint at the fwrite line, then inBuffer is always full of 0xCC - i.e., it has not been filled by the ISO read.
I've checked all the error return values from the UsbK/OvlK function calls and they are all as they should be. I've checked that my buffers are sufficiently big to receive the data.
I use very similar code to write to the ISO OUT pipe on endpoint 0x02 and that works perfectly. The only real difference between the code above and my write code is that the fwrite/memset calls are replaced with a call to a "fillbuffer" function that populates my outBuffer before calling UsbK_IsoWritePipe.
I tried looking through any examples I could find in the samples and online, but struggled to understand them or get them working with my particular device.
Any suggestions or help greatly appreciated.
So it appears that the above code did work and I was being misled by the fact that the debugger was interrupting the flow of things - I keep forgetting that trying to debug real-time stuff can introduce its own issues.
The first issue was that stepping through the code in the debugger interfered with the low-level libusbK code capturing the USB packets and filling my buffers correctly - once I let it run at full speed and found other ways to test the buffers, I found there actually was some data in there.
The second problem was that quite often the buffer only started being filled part-way through (and not always right from the start), so when I examined the data I was only printing the first part of the buffer to the console; all I saw was 0xCC, so I assumed it hadn't worked.
Once I realised that there was actually data later in the buffer, I just started looking through it in packet-sized chunks: if a chunk consisted entirely of 0xCC I skipped it and moved on, but if any of it was not 0xCC I treated it as a valid packet. This worked perfectly and I successfully received all the data. I'm sure there's a more "proper" way of doing this, but it works for me for now.
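The check itself was nothing more sophisticated than this (roughly; ISO_PACKET_SIZE is the same constant used when setting up the ISO context above):
// Walk the buffer in packet-sized chunks and keep anything that isn't still the 0xCC fill pattern.
for (size_t offset = 0; offset + ISO_PACKET_SIZE <= sizeof(inBuffer); offset += ISO_PACKET_SIZE)
{
    const unsigned char *pkt = (const unsigned char *)inBuffer + offset;
    bool untouched = true;
    for (size_t i = 0; i < ISO_PACKET_SIZE; i++)
    {
        if (pkt[i] != 0xCC)
        {
            untouched = false;
            break;
        }
    }
    if (!untouched)
        fwrite(pkt, ISO_PACKET_SIZE, 1, outFile);   // treat it as a valid packet
}
The cleaner approach is presumably to read the per-packet lengths back out of the ISO context once the transfer completes, rather than inferring them from the fill pattern, but the above was enough for my purposes.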
Ok, one for the SO hive mind...
I have code which has - until today - run just fine on many systems and is deployed at many sites. It involves threads reading and writing data from a serial port.
Trying to check out a new device, my code was swamped with 995 (ERROR_OPERATION_ABORTED) errors when calling GetOverlappedResult after the ReadFile. Sometimes the read would work; other times I'd get this error. Just ignoring the error and retrying would - amazingly - work without dropping any data. No ClearCommError required.
Here's the snippet.
if (!ReadFile(handle, &c, 1, &read, &olap))
{
    if (GetLastError() != ERROR_IO_PENDING)
    {
        logger().log_api(LOG_ERROR, "ser_rx_char:ReadFile");
        throw Exception("ser_rx_char:ReadFile");
    }
}

WaitForSingleObjectEx(r_event, INFINITE, true); // alertable, so the thread can be closed correctly

if (GetOverlappedResult(handle, &olap, &read, TRUE) != 0)
{
    if (read != 1)
        throw Exception("ser_rx_char: no data");
    logger().log(LOG_VERBOSE, "read char %d (read = %d)", c, read);
}
else
{
    DWORD err = GetLastError();
    if (err != 995) // Filters out ERROR_OPERATION_ABORTED
    {
        logger().log_api(LOG_ERROR, "ser_rx_char: GetOverlappedResult");
        throw Exception("ser_rx_char:GetOverlappedResult");
    }
}
My first guess is to blame the COM port driver, which I haven't used before (it's an RS-422 port on a Blackmagic DeckLink, FYI), but that feels like a cop-out.
Oh, and Vista SP1 Business 32-bit, for my sins.
Before I just put this down to "Someone else's problem", does anyone have any ideas of what might cause this?
How are you setting up the OVERLAPPED structure before the ReadFile? I always zero them (other than the hEvent, obviously), which is perhaps partly superstition, but I have a feeling it has caused me a problem in the past.
I'm afraid blaming the driver (if it's non-MS and not just a tiny tweak of the reference driver) is not completely unrealistic. Writing a COM driver is an incredibly complex thing, and the difficulty with testing it is that every application ever written uses the serial ports and their IOCTLs slightly differently.
Another common problem is not setting the whole port up - for example, not calling SetCommTimeouts or SetupComm. I've no idea if you're making this sort of mistake, but I have met people who say they're not using timeouts when they actually mean they didn't call SetCommTimeouts - so they are using them, but have no idea what they're set to...
This kind of stuff can be murder for 3rd-party COM drivers, because people have often got away with any old crap with the MS driver, and it doesn't always work the same with another device.
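By way of illustration, the sort of setup I mean looks roughly like this (using your variable names, but obviously a sketch rather than your actual code):
SetupComm(handle, 4096, 4096);      // suggest buffer sizes to the driver (values illustrative)

COMMTIMEOUTS to = {0};              // all zeros = no timeouts at all -
SetCommTimeouts(handle, &to);       // the point is simply that you now know what is in force

OVERLAPPED olap;
ZeroMemory(&olap, sizeof(olap));    // zero everything...
olap.hEvent = r_event;              // ...then set only the event you actually wait on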
In addition to zeroing the OVERLAPPED, you might also check how you're setting olap.hEvent - that is, what are your arguments to CreateEvent? If you're creating an event that's pre-signalled (i.e. the third argument to CreateEvent is TRUE), I would expect an immediate return. Also, don't forget that if you specify manual reset (the second argument to CreateEvent) as FALSE, GetOverlappedResult() will helpfully clear the event for you - which might explain why it works the second time around.
Can't really tell from your snippet whether either of these affect you - hope this helps.