WriteFile() crashes when trying to use multithreading - C++

I am working on an MFC DLL that is accessed via a script, and all of this works fine. I have added a multithreading component to it and am trying to use the WriteFile() function to write to my serial port, but somehow WriteFile() exits the application after the second write command is executed.
Without the multithreading, everything works normally and I can issue as many WriteFile calls as I want.
Multi-threading: I am using
CreateThread(NULL,0,WorkerThread,this,0,0);
to create my thread; WorkerThread carries out the WriteFile operations described earlier in the background.
Additionally, I need to use the Sleep() function so that the writes happen at intervals I define. At the moment the program just quits when I try to use Sleep(), so I have removed it for the time being, but I will need it at a later stage.
Is this a known problem, or something with an obvious solution?
Update: I have gotten somewhere close to the problem but still have not been able to resolve it. Apparently there is some problem with my WriteFile() parameters.
WriteFile(theApp.m_hCom,tBuffer,sizeof(tBuffer),&iBytesWritten,NULL);
It is not taking sizeof(tBuffer) properly, and that is why it crashes. I checked the string being passed, and it is exactly what I need to pass, but the program crashes if I write the code as above (for WriteFile()). When I keep the string length, i.e. manually set the size parameter to 14 instead of sizeof(tBuffer), the program runs, but the command does not get executed because the total string in the buffer is 38 characters.
CString sStore = "$ABCDEF,00000020,01000000C1200000*##\r\n";
char tBuffer[256];
memset(tBuffer,0,sizeof(tBuffer));
int Length = sizeof(TCHAR)* sStore.GetLength();
memcpy(&tBuffer,sStore.GetBuffer(),Length);
and then sending it with the WriteFile command.
WriteFile(theApp.m_hCom,tBuffer,sizeof(tBuffer),&iBytesWritten,NULL);

This is wrong: sizeof(TCHAR). Since you are using char, you should use sizeof(char) instead; TCHAR could be either 1 or 2 bytes...
In the call to WriteFile you should use Length instead of sizeof(tBuffer). Otherwise you'd probably end up with garbage data in your file (which I assume is later read from somewhere else).
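Putting both fixes together, here is a minimal corrected sketch. It keeps the buffer approach from the question and assumes, as there, that theApp.m_hCom is a serial-port handle that was opened successfully; it also assumes a non-Unicode (MBCS) build, as the char buffer suggests (in a UNICODE build the CString would first have to be narrowed, e.g. with WideCharToMultiByte).

CString sStore = "$ABCDEF,00000020,01000000C1200000*##\r\n";
char tBuffer[256];
memset(tBuffer, 0, sizeof(tBuffer));

// sizeof(char) is always 1, so Length is the real byte count of the string
int Length = sizeof(char) * sStore.GetLength();
memcpy(tBuffer, (LPCTSTR)sStore, Length);

DWORD iBytesWritten = 0;
// Pass Length, the number of bytes actually in tBuffer, not sizeof(tBuffer)
if (!WriteFile(theApp.m_hCom, tBuffer, Length, &iBytesWritten, NULL))
{
    // write failed; GetLastError() reports why
}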

I'm guessing it's crashing because you are trying to run that directly from your DLL. The WriteFile call looks fine to me, and I think if you run your program from the Python script only, it should work. I faced something similar earlier and came to the conclusion of not running my DLL through the debugger, but only through the script.
Please read this and this for more information.
Hope this helps.
Good Luck!

Related

Multiple calls to std::cout make subprocess hang

I'll copy here part of my previous question to describe the problem:
I wrote an application in C++ that has two parts: the frontend and the backend. These two communicate using the IPC layer provided by wxWidgets. In the backend I use some legacy functions for image data manipulation. One of these functions sometimes hangs or falls into some infinite loop (I can observe that 0% of the process resources are used by the process after some point), but this happens only if I run the backend as a subprocess of the frontend. Otherwise (when I run it manually) it works just fine.
It turns out that printing too many lines with std::cout was causing this, but I'd like to understand why. Could it be that wxWidgets uses some buffer for storing application output and the printing simply overflowed it? Or is this a native Windows issue? Or maybe it is related to the std::cout implementation? I'm pretty sure I'm not able to reproduce this with printf. It seems that I was wrong, though: printf also seems to trigger the issue.
The stdout buffer is of a finite size. Something must be reading what you write into the buffer, whether that is a file, a console window, or another process. If you write faster than the reader can cope with, the buffer will eventually fill up and block any further writes until the reader has consumed some data.
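A tiny, self-contained way to see this effect on Windows (child.exe here is just a hypothetical program that prints many lines with std::cout): the parent below attaches a pipe to the child's stdout with _popen and then never reads from it, so once the child has written more than the pipe buffer holds, its next write blocks and the process sits at 0% CPU, exactly as described.

#include <cstdio>
#include <windows.h>

int main()
{
    // _popen attaches a pipe to the child's stdout and gives us the read end
    FILE* child = _popen("child.exe", "r");
    if (child == NULL)
        return 1;

    // Deliberately never read from 'child'. After the child has filled the
    // pipe buffer (a few KB), its next std::cout/printf call blocks inside
    // the OS write, which looks like a hang with 0% CPU usage.
    Sleep(INFINITE);

    _pclose(child);   // never reached in this demo
    return 0;
}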

C++ Recording Audio When a Certain Key Is Down Until it is Up

Basically I need a program that runs on Linux and records to a .wav or .flac while I hold Alt. So far I have a program (in C++) that recognizes when Alt is up or down, but I need a way to record until I release it. Here's some pseudocode of what I've got so far:

while 1:
    if altChanged:
        if altIsDown:
            // Call system(arecord OPTIONS > /tmp/blah.wav) to record audio.
        end
        else
            // Get PID
            // Use system(kill PID) to fake Ctrl+C and stop recording
        end
    end
end
This doesn't work because I was too stupid to see that the program halts at the first system() call, waiting for arecord to end, which it never does since the program never reaches the kill. Do I need to figure out how to do threading? Or is there a library where I could cheat and use a record.start(); record.stop(); pair of functions?
The system() function is not appropriate for much at all (and maybe even less than that). Your best bet to call external applications is to use fork()/execl() (or other exec functions) directly.
Since you're on Linux, your best bet is to pull the source to whatever external application you're currently calling system() on and figure out how to add the functionality into your own code.
Additionally, you will probably want a dedicated thread watching the key events that start and stop the recording.
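A rough sketch of that approach, under the assumption that the Alt-key detection from the question already exists elsewhere: fork()/execlp() starts arecord when the key goes down, and SIGINT (the same signal Ctrl+C sends) stops it cleanly when the key goes up. The options passed to arecord here are only an example.

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t recorder = -1;

void startRecording()
{
    recorder = fork();
    if (recorder == 0)
    {
        // Child process: replace ourselves with arecord, writing to the wav file
        execlp("arecord", "arecord", "-f", "cd", "/tmp/blah.wav", (char*)NULL);
        _exit(1);   // only reached if execlp failed
    }
}

void stopRecording()
{
    if (recorder > 0)
    {
        kill(recorder, SIGINT);          // same effect as pressing Ctrl+C
        waitpid(recorder, NULL, 0);      // reap the child so it doesn't become a zombie
        recorder = -1;
    }
}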

Why changing colors in C++ console application by system("color YX") is not the best solution?

I read somewhere (knowing that both ways will work only on Windows) that using system is not the best solution.
Why
#include<windows.h>
...
HANDLE hOut;
hOut = GetStdHandle(STD_OUTPUT_HANDLE);
SetConsoleTextAttribute(hOut,BACKGROUND_RED);
is better?
What I know is that system("color YX") changes the colors of the entire console. However, I assume there is a way to do it by "switching on and off" some colors while printing text.
Is it true that system(command) goes through an additional layer when communicating with the system/console, which could be avoided with the second method?
Is there any other reason why I should use the second method?
Is there any other reason why I should use the second method?
system(command) will compile on any system, no matter what 'command' is. Using the Windows functions ensures that your code will only compile on a system where it actually works. Down the line, if you ever want to port this code, you will get a clear compiler error, so you'll spend less time tracking down why your code doesn't work.
Optimization-wise, the system(command) call creates a separate process, passes it the argument "color XY", and that process then looks up the command "color" and executes it.
Note that since color is a console-internal command, its execution is practically immediate. However, for commands that are not internal, it would create yet another process and execute the command there.
That means creating two processes, which is very slow (2 MB of stack for each, full process state such as the instruction pointer, registers, stdin/stdout/stderr, ...).
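For comparison, the "switching on and off" the question asks about needs no extra process at all: read the current attributes once, change them for the text you want highlighted, and put them back. A minimal sketch (error handling omitted):

#include <windows.h>
#include <iostream>

int main()
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);

    // Remember the current colors so they can be restored afterwards
    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(hOut, &info);
    WORD oldAttributes = info.wAttributes;

    SetConsoleTextAttribute(hOut, FOREGROUND_RED | FOREGROUND_INTENSITY);
    std::cout << "This line is printed in bright red" << std::endl;

    SetConsoleTextAttribute(hOut, oldAttributes);   // switch the color back off
    std::cout << "Back to the normal console colors" << std::endl;
    return 0;
}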

Heisenbug issue with using a dll. What do I do next?

I am working on a system that uses a Voltage Controlled Oscillator chip (VCO) to help process a signal. The makers of the chip (Analog Devices) provide a program to load setup files onto the VCO but I want to be able to setup the chip from within the overarching signal processing control system. Fortunately Analog Devices also provides a DLL to interface with their chip and load setup files myself. I am programming in Visual C++ 6.0 (old I know) and my program is a dialog application.
I got the system to work perfectly, writing setup files to the card and reading its status. I then decided that I needed to handle the case where multiple cards are attached and one must be selected. The DLL provides GetDeviceCount(), which returns an integer. For some reason, every time the executable runs it returns 15663105 (garbage, I assume). Whenever I debug my code, however, the function returns the correct number of cards. Here is my call to GetDeviceCount().
typedef int (__stdcall *GetDeviceCount)();

int AD9516_Setup()
{
    int NumDevices;
    GetDeviceCount _GetDeviceCount;
    HINSTANCE hInstLibrary = LoadLibrary("AD9516Interface.dll");
    _GetDeviceCount = (GetDeviceCount)GetProcAddress(hInstLibrary, "GetDeviceCount");
    NumDevices = _GetDeviceCount();
    return NumDevices;
}
Just to be clear: every other function from the DLL I have used is called exactly like this and works perfectly in both the executable and the debugger. I did some research and found that a common cause of Heisenbugs is threading. I know there is some threading behind the scenes in the dialogs I am using, so I deleted all my calls to the function except one. I also learned that code executes more slowly under the debugger, and I thought the chip might not have enough time to finish processing each command. First I tried burning time between the chip function calls by inserting an empty for loop, and when that did not work I commented out all other calls to the DLL.
I do not have access to the source code used to build the DLL, and I have no idea why its function returns garbage in the executable but not in the debugger. What other differences are there between running under the debugger and running the executable that could cause an error? What else can I do to track down this error?
Some compilers/IDEs add extra padding to variables in debug builds or initialize them to 0 - this might explain the differences you're encountering between debugging and "normal" execution.
Some things that might be worth checking:
- are you using the correct calling convention?
- do you get the same return value if no devices are connected?
- are you using the correct return type (uint vs int vs long vs ..)?
Try setting _GetDeviceCount to 0 before calling the function; that could be what the debugger is doing for you.
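Along the same lines, a more defensive version of the question's function is sketched below. It only adds checks related to the points above (LoadLibrary/GetProcAddress failures and a known initial value); the __stdcall in the typedef is still an assumption that has to match whatever the DLL really exports, since a calling-convention mismatch corrupts the stack and can easily produce results that differ between debug and release runs.

typedef int (__stdcall *GetDeviceCountFn)();

int AD9516_Setup()
{
    int NumDevices = 0;   // start from a known value instead of uninitialized garbage

    HINSTANCE hInstLibrary = LoadLibrary("AD9516Interface.dll");
    if (hInstLibrary == NULL)
        return NumDevices;   // DLL not found or failed to load

    GetDeviceCountFn pGetDeviceCount =
        (GetDeviceCountFn)GetProcAddress(hInstLibrary, "GetDeviceCount");
    if (pGetDeviceCount != NULL)
        NumDevices = pGetDeviceCount();

    FreeLibrary(hInstLibrary);   // or keep the library loaded if other calls follow
    return NumDevices;
}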

Crossplatform Bidirectional IPC

I have a project that I thought was going to be relatively easy, but it is turning out to be more of a pain than I had hoped. First, most of the code I'm interacting with is legacy code that I don't have control over, so I can't make big paradigm changes.
Here's a simplified explanation of what I need to do: Say I have a large number of simple programs that read from stdin and write to stdout. (These I can't touch). Basically, input to stdin is a command like "Set temperature to 100" or something like that. And the output is an event "Temperature has been set to 100" or "Temperature has fallen below setpoint".
What I'd like to do is write an application that can start a bunch of these simple programs, watch for events, and then send commands to them as necessary. My initial plan was to use something like popen, but I need a bidirectional popen to get both read and write pipes. I hacked together something I call popen2, where I pass it the command to run and two FILE* that get filled with the read and write streams. Then all I need to do is write a simple loop that reads from the stdout of each process, does the logic it needs, and writes commands back to the proper process.
Here's some pseudocode
FILE *p1read, *p1write;
FILE *p2read, *p2write;
FILE *p3read, *p3write;

// start each command, attach to its stdin and stdout
popen2("process1", &p1read, &p1write);
popen2("process2", &p2read, &p2write);
popen2("process3", &p3read, &p3write);

while (1)
{
    // read status from each process
    char status1[1024];
    char status2[1024];
    char status3[1024];
    fread(status1, 1, sizeof(status1), p1read);
    fread(status2, 1, sizeof(status2), p2read);
    fread(status3, 1, sizeof(status3), p3read);

    char command1[1024];
    char command2[1024];
    char command3[1024];
    // do some logic here

    // write a command back to each process
    fwrite(command1, 1, strlen(command1), p1write);
    fwrite(command2, 1, strlen(command2), p2write);
    fwrite(command3, 1, strlen(command3), p3write);
}
The real program is more complicated where it peeks in the stream to see if anything is waiting, if not, it will skip that process, likewise if it doesn't need to send a command to a certain process it doesn't. But this code gives the basic idea.
Now this works great on my UNIX box and even pretty well on a Windows XP box with Cygwin. However, now I need to get it to work natively on Win32.
The hard part is that my popen2 uses fork() and execl() to start the process and attach the streams to the stdin and stdout of the child process. Is there a clean way I can do this on Windows? Basically, I'd like to create a popen2 that works on Windows the same way as my Unix version. That way the only Windows-specific code would be in that function, and everything else could stay the same.
Any Ideas?
Thanks!
On Windows, you call CreatePipe first (similar to pipe(2)), then CreateProcess. The trick is that CreateProcess has a parameter through which you can hand stdin, stdout, and stderr handles to the newly created process.
Notice that if you want to use stdio on top of that, you need fdopen to create the FILE object afterwards, and fdopen expects a file number. In the Microsoft CRT, file numbers are not the same as OS file handles, so to return the parent's end of the CreatePipe handles to the caller you first need _open_osfhandle to get a CRT file number, and then fdopen on that.
If you want to see working code, check out _PyPopen in
http://svn.python.org/view/python/trunk/Modules/posixmodule.c?view=markup
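A hedged sketch of what a Win32 popen2() built from those pieces could look like (most error handling omitted; the command line is copied because CreateProcess may modify the string it is given):

#include <windows.h>
#include <fcntl.h>
#include <io.h>
#include <stdio.h>

bool popen2(const char* command, FILE** readStream, FILE** writeStream)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // inheritable handles

    HANDLE childStdoutRd, childStdoutWr;   // child writes, parent reads
    HANDLE childStdinRd,  childStdinWr;    // parent writes, child reads
    if (!CreatePipe(&childStdoutRd, &childStdoutWr, &sa, 0)) return false;
    if (!CreatePipe(&childStdinRd,  &childStdinWr,  &sa, 0)) return false;

    // The parent's ends must NOT be inherited by the child
    SetHandleInformation(childStdoutRd, HANDLE_FLAG_INHERIT, 0);
    SetHandleInformation(childStdinWr,  HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;
    si.hStdInput  = childStdinRd;
    si.hStdOutput = childStdoutWr;
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    PROCESS_INFORMATION pi;
    char cmdline[1024];
    lstrcpynA(cmdline, command, sizeof(cmdline));
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
        return false;

    // The child owns its ends now; close them in the parent
    CloseHandle(childStdinRd);
    CloseHandle(childStdoutWr);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);

    // Turn the raw handles into CRT file numbers, then into FILE*
    *readStream  = _fdopen(_open_osfhandle((intptr_t)childStdoutRd, _O_RDONLY), "r");
    *writeStream = _fdopen(_open_osfhandle((intptr_t)childStdinWr,  0),         "w");
    return *readStream != NULL && *writeStream != NULL;
}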
I think you've made a very good start to your problem by using the popen2() function to abstract away the cross-platform issues. I was expecting to come and suggest 'sockets', but I'm sure that's not relevant after reading the question. You could use sockets instead of pipes - it would be hidden in the popen2() function.
I am 99% sure you can implement the required functionality on Windows using the Windows APIs. What I cannot do is reliably point you to the right functions. However, you should be aware that Microsoft provides most of the POSIX-like API calls, but with names prefixed by an underscore. There are also native API calls that achieve the effects of fork and exec.
Your comments suggest that you are aware of issues with availability of data and possible deadlocks - be cautious.