Multiple calls to std::cout make subprocess hang - c++

I'll copy here part of my previous question to describe the problem:
I wrote an application in C++ that has two parts: the frontend and the backend. These two communicate using the IPC layer provided by wxWidgets. In the backend I use some legacy functions for image data manipulation. One of these functions sometimes hangs or falls into some infinite loop (I can observe that 0% of the process resources are used by the process after some point), but this happens only if I run the backend as a subprocess of the frontend. Otherwise (when I run it manually) it works just fine.
It turns out that printing too many lines with std::cout was causing this, but I'd like to understand why. Could it be that wxWidgets uses some buffer for storing the application's output, and printing was simply overflowing it? Or is this rather a native Windows issue? Or could it be related to the std::cout implementation? I'm pretty sure I'm not able to reproduce this with printf. Update: it seems that I was wrong; printf also seems to trigger the issue.

The stdout buffer is of a finite size. Something must be reading what you are writing into the buffer, whether that is a file, a console window, or another process. If you write faster than the reader can cope with, the buffer will eventually fill up and block any further writes until the reader has consumed some data.
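When the backend runs as a child process, its stdout is usually redirected to a pipe owned by the frontend, so if the frontend never reads from that pipe, the backend blocks as soon as the buffer fills, which looks exactly like a hang at 0% CPU. Here is a rough sketch of one way the frontend could keep the pipe drained with wxWidgets (the "backend.exe" name and the timer wiring are my placeholders, not from the question):

// Sketch: the frontend drains the backend's stdout so the backend never blocks on a full pipe.
#include <wx/process.h>
#include <wx/utils.h>

wxProcess* StartBackend()
{
    wxProcess* proc = new wxProcess();
    proc->Redirect();                            // give the child pipes instead of a console
    wxExecute("backend.exe", wxEXEC_ASYNC, proc);
    return proc;
}

// Call this regularly, e.g. from a wxTimer or an EVT_IDLE handler.
void DrainBackendOutput(wxProcess* proc)
{
    while (proc && proc->IsInputAvailable())     // is there stdout data waiting?
    {
        char buf[4096];
        proc->GetInputStream()->Read(buf, sizeof(buf) - 1);
        size_t n = proc->GetInputStream()->LastRead();
        buf[n] = '\0';
        // append buf to a log window or a file, or simply discard it
    }
}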

Grabbing printed statements from console C++

I have two loggers in my program: one that I made inside a GUI, and one that is super complicated but very well designed and prints to the console. I am trying to get the output from the nice console logger into the GUI one. I have tried everything under the sun to get this to work but I can't seem to figure it out (due to my lack of understanding of the code from the other logger, spdlog). My conclusion is that taking the logs directly from what is printed is the best way to do this, but I can't find anyone online asking how to do this. I have seen a few questions, but they just post code as an answer and don't really explain what is going on. My question: is there a way to grab printed statements from the console, and what are the performance issues/complications that come with doing something like this?
For example, if I do std::cout << "hello!" << std::endl; or some printf statement, I want to be able, further down in the code, to grab "hello!".
My conclusion is that taking the logs directly from what is printed is the best way to do this, but I can't find anyone online asking how to do this.
Consoles nowadays are terminal emulators. The original terminals' output went to printers and couldn't be (easily) read back.
An application's stdout and stderr (console) streams are write-only. Moreover, on both Windows and Unix/Linux you can pipe your program's console output (stdout, stderr, or both) into another application with | (pipe), which creates an IPC pipe between your application's stdout and the other application's stdin. That IPC pipe is write-only from your side; your application cannot possibly read back from it.
You may be able to get access to the contents of the screen buffer of the Windows console window that hosts your application, but that won't be a verbatim, byte-exact copy of the data you wrote into stdout, because of the escape sequences interpreted by the Windows console.
If stdout is redirected into a file you can re-open that file for reading, but there is no portable way of doing so.
In other words, there is no portable way to read console output back.
I have tried everything under the sun to get this to work but I can't seem to figure it out (due to my lack of understanding of the code from the other logger, spdlog).
I bet you haven't tried reading the spdlog documentation, in particular the part about a logger with multiple sinks. A sink is an output abstraction whose implementation can write into a file, into memory, or both. What you need is to attach your own sink, one that prints into your UI, to spdlog.
Derive your sink from base_sink and implement its abstract member functions:
virtual void sink_it_(const details::log_msg &msg) = 0; to print into the UI, and
virtual void flush_() = 0; to do nothing.
Then attach an object of your sink class to that spdlog logger.
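For reference, a minimal sketch of such a sink, following the base_sink pattern from the spdlog documentation; MyGuiLog is a hypothetical function standing in for whatever pushes a finished line into your GUI logger:

// Custom spdlog sink that forwards formatted log lines to a GUI (sketch).
#include <spdlog/spdlog.h>
#include <spdlog/sinks/base_sink.h>
#include <mutex>
#include <string>

void MyGuiLog(const std::string& line);          // hypothetical: appends a line to the GUI

template <typename Mutex>
class gui_sink : public spdlog::sinks::base_sink<Mutex>
{
protected:
    void sink_it_(const spdlog::details::log_msg& msg) override
    {
        // Let spdlog apply the pattern (time, level, ...) before handing the text to the UI.
        spdlog::memory_buf_t formatted;
        spdlog::sinks::base_sink<Mutex>::formatter_->format(msg, formatted);
        MyGuiLog(fmt::to_string(formatted));
    }
    void flush_() override {}                    // nothing to flush for an in-memory UI log
};

using gui_sink_mt = gui_sink<std::mutex>;

// Attach it, e.g.: spdlog::default_logger()->sinks().push_back(std::make_shared<gui_sink_mt>());

The mutex template parameter is what lets the _mt variant be called safely from multiple threads; base_sink locks it around each sink_it_ call.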

WriteFile() crashes when trying to carry on multithreading

I am working on an MFC DLL that is accessed via a script, and all of this works fine. I have added a multi-threading component to it and am trying to use the WriteFile() function to write to my serial port, but somehow the WriteFile() call exits the application after the 2nd write command gets executed.
Without the multi-threading bit, everything works normally and I can issue as many WriteFile calls as I want.
Multi-threading: I am using
CreateThread(NULL,0,WorkerThread,this,0,0);
to create my thread, with "WorkerThread" carrying out the WriteFile operations described earlier in the background.
Additionally, I need to use the Sleep() function while writing, at intervals defined by me. At the moment the program just quits when trying to use Sleep(), so I have removed it for the time being, but I will need it at a later stage.
Is this a known problem, or is there an obvious solution I'm missing?
Update: I have narrowed the problem down somewhat, but I still haven't been able to resolve it. It looks like there is some problem with my WriteFile() parameters.
WriteFile(theApp.m_hCom,tBuffer,sizeof(tBuffer),&iBytesWritten,NULL);
It is not handling sizeof(tBuffer) properly, and that is what makes it crash. I checked the string to be passed, and it is exactly what I need to send, but the program crashes if I write the call as shown above. When I hard-code the size, i.e. manually set the sizeof(tBuffer) parameter to 14, the program runs, but the command does not get executed, because the total string size in the buffer is 38.
CString sStore = "$ABCDEF,00000020,01000000C1200000*##\r\n";
char tBuffer[256];
memset(tBuffer,0,sizeof(tBuffer));
int Length = sizeof(TCHAR)* sStore.GetLength();
memcpy(&tBuffer,sStore.GetBuffer(),Length);
and then sending it with the WriteFile command.
WriteFile(theApp.m_hCom,tBuffer,sizeof(tBuffer),&iBytesWritten,NULL);
This is wrong: sizeof(TCHAR). Since you are using char you should use sizeof(char) instead. TCHAR could be either 1 or 2 bytes...
In the call to WriteFile you should use Length instead of sizeof(tBuffer). Otherwise you'd probably end up with garbage data in your file (which I assume is later read from somewhere else).
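Putting both suggestions together, and assuming an ANSI (char-based) build so the CString holds char data, the copy and the write would look roughly like this:

char tBuffer[256];
memset(tBuffer, 0, sizeof(tBuffer));
int Length = sStore.GetLength() * sizeof(char);   // sizeof(char) is always 1
memcpy(tBuffer, sStore.GetBuffer(), Length);
sStore.ReleaseBuffer();
DWORD iBytesWritten = 0;
// write only the Length valid bytes, not the whole 256-byte buffer
WriteFile(theApp.m_hCom, tBuffer, Length, &iBytesWritten, NULL);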
I'm guessing it's crashing because you are trying to run that directly from your DLL. The write function looks fine to me, and I think if you try to run your program from the Python script ONLY, it should work. I have faced something similar earlier and came to the conclusion of not running my DLL through the debugger, but only through the script.
Please read this and this for more information.
Hope this helps.
Good Luck!

Async Console Output

I have a problem with my application's Win32 console.
The console is used to give commands to my application. However, at the same time it is used to output log messages, most of which come from asynchronous threads. This becomes a problem when the user tries to type some input and an async log message is printed at the same time, garbling the display of the user's input.
I would like some advice on how to handle such a situation.
Is it possible for example to dedicate the last line in the console to input, similarly to how it looks in the in-game consoles for some games?
You can use SetConsoleMode to disable input echo and line editing mode. You can then echo back input whenever your program is ready to do so. Note that this means you will need to implement things like backspace manually. And don't forget to reset the mode back when you're done with the console!
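A minimal sketch of that approach (Win32, assuming the standard console handles), disabling echo and line input and restoring the original mode afterwards:

// Turn off echo and cooked line input so the program controls when typed characters appear.
#include <windows.h>

int main()
{
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);
    DWORD oldMode = 0;
    GetConsoleMode(hIn, &oldMode);

    // Disable automatic echo and line editing; keep processed input (Ctrl+C etc.).
    SetConsoleMode(hIn, oldMode & ~(ENABLE_ECHO_INPUT | ENABLE_LINE_INPUT));

    // ... read characters with ReadConsole/ReadConsoleInput and echo them yourself,
    // handling backspace and Enter manually ...

    SetConsoleMode(hIn, oldMode);   // restore the original mode when done
    return 0;
}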
This is possible using the Console API, but it involves quite a bit of work and all the threads that use the console will have to cooperate by calling your output method rather than directly calling the Console API functions or the runtime library output functions.
The basic idea is to have your common output function write to the console screen buffer, and scroll the buffer in code rather than letting the text flow onto the last line and scroll automatically. As I recall, you'll have to parse the output for newlines and other control characters, and handle them correctly.
You might be able to get away with using "cooked" console input on the last line, although in doing so you risk problems if the user enters more text than will fit on a single line. Also, the user hitting Enter at the end of the line might cause it to scroll up. Probably best in this situation to use raw console input.
You'll want to become very familiar with Windows consoles.
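To make the idea concrete, here is a rough sketch (Win32 Console API; the function name is mine) of how a common output routine could scroll the region above a reserved input line and write the new message there:

// Scroll rows above the bottom (input) line up by one, then write the new text
// on the row just above it; the bottom row stays reserved for user input.
#include <windows.h>
#include <string.h>

void LogAboveInputLine(const char* text)
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(hOut, &info);

    SHORT width  = info.dwSize.X;
    SHORT bottom = info.srWindow.Bottom;          // last visible row: the input line

    SMALL_RECT scroll = { 0, (SHORT)(info.srWindow.Top + 1),
                          (SHORT)(width - 1), (SHORT)(bottom - 1) };
    COORD dest = { 0, info.srWindow.Top };
    CHAR_INFO fill;
    fill.Char.AsciiChar = ' ';
    fill.Attributes = info.wAttributes;
    ScrollConsoleScreenBufferA(hOut, &scroll, NULL, dest, &fill);

    COORD where = { 0, (SHORT)(bottom - 1) };
    DWORD written = 0;
    WriteConsoleOutputCharacterA(hOut, text, (DWORD)strlen(text), where, &written);
}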
Any time you have asynchronous threads trying to update the same device at once, you are going to have issues like this unless something synchronizes them.
If you have access to everyone's source code, the thing to do would probably be to create some kind of sync object that every task must use to access the console (semaphore, etc).
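One way to implement that sync object (not from the answer above, just a minimal standard-C++ sketch using a mutex):

#include <iostream>
#include <mutex>
#include <string>

std::mutex g_consoleMutex;

void LogLine(const std::string& text)
{
    std::lock_guard<std::mutex> lock(g_consoleMutex);   // one writer at a time
    std::cout << text << '\n';
}

Every thread, including the one handling user input, has to go through LogLine (or an equivalent) for this to help, which is exactly the cooperation the answers above mention.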

Why is my paintBox Canvas being erased when my program is "Not Responding"?

I have written a small program using Borland's C++ Builder, and along the way everything seemed fine. My program has a map window and a table window, and when a user presses a button, a long process is started that reads in all the map and table information and then displays it. Every time I ran it through the debugger, I had no issues. Then today I decided to test it without the debugger. To my horror, the program reads in the map information and displays it on the paintbox canvas without a problem, but when it loads the information for the grid, the map gets erased! It appears to happen during the load phase for the table, which takes about 4 seconds, during which time the window tells me that it isn't responding. This is when the map gets erased. Anyone have any ideas on why this is happening? It's driving me nuts, and I don't really understand what's going on under the hood here.
UPDATE:
I have fixed the problem to some degree. I was poking around and found this: Avoiding "(Not Responding)" label in windows while processing lots of data in one lump
I added the code so that it runs once in the middle of the data read-in for the table. This fixed my problem. However, I was wondering if anyone knows why this is the case: why does my program going unresponsive cause my canvas to be erased?
Marcus Junglas wrote a detailed explanation of the problem, which affects both Delphi and C++Builder.
When programming an event handler in Delphi (like the OnClick event of a TButton), there comes a time when your application needs to be busy for a while, e.g. the code needs to write a big file or compress some data.
If you do that, you'll notice that your application seems to be locked. Your form cannot be moved anymore and the buttons show no sign of life. It seems to have crashed.
The reason is that a Delphi application is single threaded. The code you are writing represents just a bunch of procedures which are called by Delphi's main thread whenever an event occurs. The rest of the time the main thread is handling system messages and other things like form and component handling functions.
So, if you don't finish your event handling because you are doing some lengthy work, you will prevent the application from handling those messages.
You can reduce the problem by calling Application->ProcessMessages() periodically while loading your map data; however, I recommend using a separate thread to load the data.
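A rough sketch of the ProcessMessages approach (C++Builder VCL; TMainForm, LoadRow and RowCount are placeholder names, assuming a loop-based load):

// Pump the message queue every so often so paint messages get through during the load.
void __fastcall TMainForm::LoadTable()
{
    for (int row = 0; row < RowCount; ++row)
    {
        LoadRow(row);                           // placeholder for the real per-row work
        if (row % 100 == 0)                     // don't pump on every single iteration
            Application->ProcessMessages();     // lets WM_PAINT and other messages through
    }
}

Be aware that while ProcessMessages() runs, other events (button clicks, etc.) can fire re-entrantly, which is one reason the separate-thread approach is cleaner.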
I have never used C++ Builder, but I have used Delphi, and I think the libraries are the same.
Does the component you use store the image data? It may only draw to the screen. Try covering your app's window with another window: if that erases the drawing, you have to use a component which stores the image.
See this, it is for Delphi, but it may help. There should be an Image component in C++ Builder. Try using that instead of PaintBox.
You can solve the unresponsiveness problem by running the time-consuming task in a separate thread, or by calling some function that processes the window's messages.

File corruption detection and error handling

I'm a newbie C++ developer and I'm working on an application which needs to write out a log file every so often, and we've noticed that the log file has been corrupted a few times when running the app. The main scenarios seem to be when the program is shutting down or crashes, but I'm concerned that this isn't the only time something may go wrong, as the application was born out of a fairly "quick and dirty" project.
It's not critical to have the most up-to-date data saved, so one idea someone mentioned was to alternate writing to two log files; then if the program crashes, at least one will still have proper integrity. But this doesn't smell right to me, as I haven't really seen any other application use this method.
Are there any "best practises" or standard "patterns" or frameworks to deal with this problem?
At the moment I'm thinking of doing something like this -
Write data to a temp file
Check the data was written correctly with a hash
Rename the original file, and put the temp file in place.
Delete the original
Then if anything fails I can roll back by just deleting the temp file, and the original will be untouched.
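A rough sketch of that scheme (C++17 <filesystem>; error handling and the hash check are omitted, and the file names are placeholders):

#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

bool SaveLog(const fs::path& target, const std::string& contents)
{
    fs::path temp = target;
    temp += ".tmp";

    {
        std::ofstream out(temp, std::ios::binary | std::ios::trunc);
        if (!out.write(contents.data(), static_cast<std::streamsize>(contents.size())))
        {
            fs::remove(temp);                    // roll back: just delete the temp
            return false;
        }
    }                                            // close (and flush) the temp file

    fs::path backup = target;
    backup += ".bak";
    if (fs::exists(target))
        fs::rename(target, backup);              // step 3: move the original aside,
    fs::rename(temp, target);                    //         put the temp file in place
    if (fs::exists(backup))
        fs::remove(backup);                      // step 4: delete the original
    return true;
}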
You must find the reason why the file gets corrupted. If the app crashes unexpectedly, it can't corrupt the file; the only thing that can happen is that the file is truncated (i.e. the last log messages are missing). The app can't really jump around in the file and modify something elsewhere (unless you call seek in the logging code, which would surprise me).
My guess is that the app is multi-threaded and the logging code is being called from several threads, which can easily lead to the data being corrupted before it is written to the log.
You probably forgot to call fsync() every so often, or the data comes in from different threads without proper synchronization among them. Hard to tell without more information (platform, form of corruption you see).
A workaround would be to use log file rollover, i.e. starting a new file every so often.
I really think that you (and others) are wasting your time when you start adding complexity to log files. The whole point of a log is that it should be simple to use and implement, and should work most of the time. To that end, just write the log to an unbuffered stream (like cerr in a C++ program) and live with the very occasional (in my experience) snafus.
OTOH, if you really need an audit trail of everything your app does, for legal reasons, then you should be using some form of transactional storage such as a SQL database.
Not sure if your app is multi-threaded -- if so, consider using the Active Object Pattern (PDF) to put a queue in front of the log and make all writes from a single thread. That thread can commit the log in the background. All log writes will be asynchronous and in order, but not necessarily written immediately.
The active object can also batch writes.
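A minimal standard-C++ sketch of that queue-in-front-of-the-log idea (not a full Active Object implementation; the class and member names are mine):

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncLog
{
public:
    explicit AsyncLog(const std::string& path)
        : out_(path, std::ios::app), worker_([this] { Run(); }) {}

    ~AsyncLog()
    {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    void Write(std::string line)                 // safe to call from any thread
    {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(line)); }
        cv_.notify_one();
    }

private:
    void Run()                                   // the single writer thread
    {
        std::unique_lock<std::mutex> lock(m_);
        while (!done_ || !q_.empty())
        {
            cv_.wait(lock, [this] { return done_ || !q_.empty(); });
            while (!q_.empty())                  // batch: drain everything queued
            {
                std::string line = std::move(q_.front());
                q_.pop();
                lock.unlock();
                out_ << line << '\n';            // do the I/O outside the lock
                lock.lock();
            }
            out_.flush();                        // one flush per batch
        }
    }

    std::ofstream out_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
    bool done_ = false;
    std::thread worker_;                         // declared last so it starts after the rest
};

Writes return immediately and stay in order, and the worker flushes whatever has accumulated since its last pass, which matches the batching behaviour described above.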