I've got a function I need to call from a third-party library which I can't control. That function evaluates a command I pass in and prints its results to stdout. In my use case, I need to capture the results into a std::string variable (not write to a file), which I can do just fine in a single-threaded example:
int fd[2];
pipe( fd );                          // create the pipe before forking
pid_t pid = fork();
if ( pid == 0 )
{
    // Child: route stdout into the pipe, run the command, then exit.
    dup2( fd[1], STDOUT_FILENO );
    close( fd[0] );
    close( fd[1] );
    // This func will print the results I want to stdout, but I have no control over its code.
    festival_eval_command("(print utt2)");
    _exit( 0 );
}
// Parent: read whatever the child printed.
close( fd[1] );
char buffer[1000000];
ssize_t length = read( fd[0], buffer, sizeof(buffer) - 1 );
buffer[length > 0 ? length : 0] = '\0'; // read() does not null-terminate
std::string RESULT = buffer;
close( fd[0] );
// RESULT now holds the contents that would have been printed to stdout by festival_eval_command().
Some constraints/detail:
My program is multi-threaded, so other threads may be writing to stdout at the same time (my understanding is that output from multiple threads is all funnelled into the one shared stdout)
The third-party library is Festival, an open-source speech synthesis system whose command language is a LISP dialect (which I have no experience in). I'm using its C++ API by calling: festival_eval_command("(print utt2)");
festival_eval_command appears to use stdout, not std::cout (I've tested by redirecting both in a single-threaded program and only the stdout redirection captures the output from utt2)
As far as I can tell from the source, festival_eval_command doesn't allow for an alternate file descriptor.
This function is only called from one thread of my multithreaded program, so I'm only concerned with isolating the festival_eval_command output from the other threads' stdout output.
My question: Is there a way I can safely retrieve just the results of festival_eval_command() from stdout in a multi-threaded program? It sounds like my options are:
Launch this function in a separate process, which has its own stdout. Do the IO redirection in that separate process, get the output I need and return it back to my main program process. Is this correct? How would I go about doing this?
Use a mutex around the festival_eval_command. I don't quite understand how mutexes interact with other threads though. If I have this example:
void do_stuff_simultaneously() {
    std::cout << "Printing output to terminal..." << std::endl;
}

// main thread
void do_stuff() {
    // launch a separate thread that may print to stdout
    std::thread t(do_stuff_simultaneously);
    // lock stdout somehow
    // redirect stdout to string variable
    festival_eval_command("(print utt2)");
    // unlock stdout
    t.join(); // the thread has to be joined (or detached) before it is destroyed
}
Does the locking of stdout prevent do_stuff_simultaneously from accessing it? Is there a way to make stdout thread-safe like this?
However, my program is multi-threaded, so other threads may be using stdout simultaneously
The output of the threads is going to be interleaved in a fashion you cannot control, unless each thread writes its entire output using one std::cout.write call (see below for why).
Is there a way I can safely retrieve just the results of festival_eval_command() from stdout in a multi-threaded program?
Each thread must run that function in a separate process, from which you capture its stdout into a std::string s (different one for each process).
Then, in the parent process, you write that std::string to stdout with
std::cout.write(s.data(), s.size()). std::cout.write synchronizes access to the stream (on common implementations it locks the underlying stdio FILE, which protects it from data races and corruption when multiple threads write to it in any way, including operator<<), so the output of one process is not interleaved with anything else.
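A minimal POSIX sketch of that approach. The helper name capture_festival_output and the festival.h include path are assumptions, not from the question, and error handling is omitted; note also that calling non-trivial library code after fork() in a multi-threaded parent has its own caveats.

#include <festival.h>   // Festival's C++ API header (exact path depends on the install)
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>
#include <string>

// Run the library call in a child process and capture everything it prints to stdout.
std::string capture_festival_output(const char* cmd)
{
    int fd[2];
    pipe(fd);
    pid_t pid = fork();
    if (pid == 0)                       // child: it gets its own copy of stdout
    {
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);
        close(fd[1]);
        festival_eval_command(cmd);     // prints into the pipe, not the parent's stdout
        _exit(0);
    }
    close(fd[1]);                       // parent: drain the pipe until the child is done
    std::string result;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        result.append(buf, n);
    close(fd[0]);
    waitpid(pid, nullptr, 0);
    return result;
}

// In whichever thread needs it:
//   std::string s = capture_festival_output("(print utt2)");
//   std::cout.write(s.data(), s.size());   // one call, so it goes out in one piece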
Note up front: this shows why globals are often a bad idea! Even more, library code (i.e. code intended for re-use in different contexts) should never use globals. This is also something to tell the supplier of that code: they should fix their library to provide a version that at least takes an output file descriptor instead of writing to stdout.
Here's what I would consider doing: Move the whole function execution to a separate process. That way, if multiple threads need to run it, they will start separate processes with separate outputs that they can process independently.
An alternative is to wrap this single function. The wrapper does all the I/O redirection and, being a critical section, is guarded by a mutex, so two threads invoking the wrapper are serialized (see the sketch below). However, this has downsides: in the meantime, that code still messes with your process's standard streams, so a stray call that outputs something would be mixed into the function's output.
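A rough sketch of such a wrapper, assuming POSIX dup()/pipe() and Festival's festival_eval_command(); the helper name is illustrative and this is not production code.

#include <unistd.h>
#include <cstdio>
#include <mutex>
#include <string>

// Redirect this process's stdout into a pipe for the duration of the call,
// then restore it. The static mutex serializes all callers.
std::string eval_redirected(const char* cmd)
{
    static std::mutex m;
    std::lock_guard<std::mutex> lock(m);

    fflush(stdout);
    int saved = dup(STDOUT_FILENO);      // remember the real stdout
    int fd[2];
    pipe(fd);
    dup2(fd[1], STDOUT_FILENO);          // stdout now feeds the pipe
    close(fd[1]);

    festival_eval_command(cmd);          // the library's output lands in the pipe
    fflush(stdout);

    dup2(saved, STDOUT_FILENO);          // put the real stdout back
    close(saved);

    // Caveat: if the output can exceed the pipe buffer (~64 KB on Linux),
    // it must be drained while the command runs, or the library will block.
    std::string result;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        result.append(buf, n);
    close(fd[0]);
    return result;
}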
A second alternative is to put the function into a wrapper process whose only goal is to serialize the use of the function. You'd start that process on demand or at the start of your application and use some form of IPC to communicate with it.
Related
If I want to redirect stdin, stdout, and stderr without the risk of deadlock (for example, the child process may need more data on stdin before it flushes stdout), do I have to spawn multiple threads, or is there another solution to the problem? Current implementation:
std::thread stderr_proc{read, io_redirector.handle(), stderr, io_redirector.stderr()};
std::thread stdout_proc{read, io_redirector.handle(), stdout, io_redirector.stdout()};
write(io_redirector.handle(), stdin, io_redirector.stdin());
int status;
if(::waitpid(pid, &status, 0) == -1) { abort(); }
stdout_proc.join();
stderr_proc.join();
Including the main thread, this implementation uses one thread per stream to avoid deadlock, but I think it is quite heavy-weight to start two new threads. Especially since this is called from one of many worker threads, it would be nice to have a single-threaded solution.
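For comparison, the usual single-threaded alternative multiplexes all three descriptors with poll(). A sketch under assumptions: the helper name pump() is invented, and it works on raw pipe file descriptors rather than the io_redirector helpers from the snippet above; error handling is minimal.

#include <poll.h>
#include <unistd.h>
#include <string>

// Feed `input` to the child's stdin while draining its stdout and stderr,
// all from one thread. in_fd/out_fd/err_fd are the parent's ends of the pipes.
void pump(int in_fd, int out_fd, int err_fd,
          const std::string& input, std::string& out, std::string& err)
{
    size_t sent = 0;
    if (input.empty() && in_fd >= 0) { close(in_fd); in_fd = -1; }
    while (in_fd >= 0 || out_fd >= 0 || err_fd >= 0)
    {
        pollfd fds[3];
        nfds_t n = 0;
        int in_i = -1, out_i = -1, err_i = -1;
        if (in_fd  >= 0) { in_i  = n; fds[n++] = {in_fd,  POLLOUT, 0}; }
        if (out_fd >= 0) { out_i = n; fds[n++] = {out_fd, POLLIN,  0}; }
        if (err_fd >= 0) { err_i = n; fds[n++] = {err_fd, POLLIN,  0}; }
        if (poll(fds, n, -1) < 0)
            break;

        char buf[4096];
        if (in_i >= 0 && fds[in_i].revents) {
            ssize_t w = -1;
            if (fds[in_i].revents & POLLOUT) {
                size_t chunk = input.size() - sent;
                if (chunk > 512) chunk = 512;   // <= PIPE_BUF, so a ready fd won't block
                w = write(in_fd, input.data() + sent, chunk);
                if (w > 0) sent += w;
            }
            if (w <= 0 || sent == input.size()) { close(in_fd); in_fd = -1; }
        }
        if (out_i >= 0 && fds[out_i].revents) {
            ssize_t r = read(out_fd, buf, sizeof buf);
            if (r > 0) out.append(buf, r); else { close(out_fd); out_fd = -1; }
        }
        if (err_i >= 0 && fds[err_i].revents) {
            ssize_t r = read(err_fd, buf, sizeof buf);
            if (r > 0) err.append(buf, r); else { close(err_fd); err_fd = -1; }
        }
    }
    // The caller can now waitpid() for the child without any extra threads.
}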
New to threading. I would like to have two separate threads that do two different things:
ThreadA: read an input file line by line
ThreadB: do things with the line that was previously read
How can I achieve this? Thanks in advance
class A
{
    //...
public:
    void processFile(ifstream& input, string& s)
    {
        // read file line by line in ThreadA
        // process that line in ThreadB
    }
};
int main()
{
// ?
}
Threading is a difficult concept to get your mind around.
Conceptually, threads provide parallel execution paths which appear to execute concurrently. On a multi-core processor they may actually run simultaneously; on a single-core processor they don't actually run concurrently, but they appear to.
To use multi-threading effectively, you have to be able to break down a problem in a way where you can imagine that having two functions running simultaneously will benefit you. In your case, you desire to read information in one function, while processing the information in another completely separate function. Once you can see how to do that, you just have to run the functions on separate threads, and figure out how to get the information safely from one function to the other.
I would suggest writing a function that reads from the file, and stores the information in a queue or buffer of some sort. Write another function that takes information from the buffer or queue, and processes the information. Adhere to the rule that the read function only writes to the queue, and the processing function only reads from the queue.
Once you have those functions constructed, tackle the issue of running them on threads. The general concept is that you launch one thread with the read function and another thread with the processing function, then 'join' both threads when they are done.
For the read thread this is straightforward: once the file has been read and the information is all in the queue, it is done. The processing thread is a little more difficult: it needs to figure out when the information is going to stop coming. It may be necessary for the reading function to add something to the queue to indicate that the reading is done.
There are a number of ways to create the threads and run the functions on the threads. I'm pretty sure that your instructor is recommending ways of doing that.
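A minimal sketch of that queue-based design. The file name, the condition-variable details, and the trivial std::cout "processing" are all placeholders, not part of the question.

#include <condition_variable>
#include <fstream>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> lines;
std::mutex m;
std::condition_variable cv;
bool done = false;                    // the "reading is finished" marker

void reader(std::ifstream& in)        // runs on ThreadA: only writes to the queue
{
    std::string line;
    while (std::getline(in, line)) {
        std::lock_guard<std::mutex> lock(m);
        lines.push(line);
        cv.notify_one();
    }
    std::lock_guard<std::mutex> lock(m);
    done = true;
    cv.notify_one();
}

void processor()                      // runs on ThreadB: only reads from the queue
{
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !lines.empty() || done; });
        if (lines.empty()) break;     // reader finished and nothing is left
        std::string line = std::move(lines.front());
        lines.pop();
        lock.unlock();
        std::cout << line << '\n';    // "process" the line
    }
}

int main()
{
    std::ifstream in("input.txt");
    std::thread a(reader, std::ref(in));
    std::thread b(processor);
    a.join();
    b.join();
}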
For the assignment to make sense, one thread should be reading a new line of input while the other thread is processing the previously read line.
To answer the question as posed, one may use std::async (the include file is <future>). See this near-duplicate.
I am tempted to post a correct program.
EDIT. I cannot help myself. This, like so many things, is simple once it's understood (and you know some tricks).
WARNING SPOILER ALERT
You (OP) should not read on until after you have turned in the assignment.
#include <fstream>
#include <future>
#include <iostream>
#include <string>
#include <utility>

int main() {
    std::ifstream input("foo.txt"); // Or whatever
    using std::string;
    using std::getline;
    using std::async;
    using std::move;
    // The function that processes the line ...
    // Notice that "line" is bound by value. Using a reference
    // (const string &line) would create a conflict between threads.
    auto process = [](const string line) -> bool {
        return !!(std::cout << line << std::endl); // or whatever...
    };
    string line;
    // The bang-bang !! turns the result of getline into bool
    bool line_ready = !!getline(input, line); // Read first line
    bool process_ok = true;
    while (line_ready && process_ok) {
        auto handle = async(std::launch::async, process, move(line)); // Launch thread
        line_ready = !!getline(input, line); // Fetch next line while processing previous
        process_ok = handle.get(); // Wait for processing to finish
    }
    return (process_ok && input.eof()) ? 0 : -1;
}
I have a program in C++ with several threads in it. I want one of the threads to be able to read/get commands from the console while the others continue running, for example: "play", "stop", "pause", ...
something like:
while (1)
{
    std::string str;
    getline(std::cin, str);
    /* do something */
}
Will it work? Any suggestions?
Thanks in advance.
Short Answer: Yes.
Long Answer: It depends on what you call 'work'; there is nothing that prevents you from calling a blocking function/method in one thread while other threads are running.
However, threads share memory and resources. On a UNIX machine (and it's more or less the same on Windows), stdin and stdout are shared between threads. std::cin will manipulate stdin under the hood at some point, and you should ensure that only one thread manipulates a given resource at a time.
You can do that either by making sure that only one thread can reach the code that uses std::cin, or by using synchronization with a mutex/semaphore (a small sketch follows).
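For example, a sketch with one dedicated console-reader thread and a mutex guarding std::cout; the function and variable names are illustrative, not from the question.

#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex console_mutex;   // shared by every thread that writes to the console

void console_reader()       // the only thread that touches std::cin
{
    std::string cmd;
    while (std::getline(std::cin, cmd)) {
        std::lock_guard<std::mutex> lock(console_mutex);
        std::cout << "got command: " << cmd << '\n';
        // hand the command ("play", "stop", "pause", ...) off to the rest of the program
    }
}

void worker()
{
    std::lock_guard<std::mutex> lock(console_mutex);
    std::cout << "status message from a worker thread\n";
}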
I have got a program which checks if there's a version update on the server. Now I have to do something like
if(update_avail) {
    system("updater.exe");
    exit(0);
}
but without waiting for "updater.exe" to complete. Otherwise I can't replace my main program because it is still running. So how do I execute "updater.exe" and exit immediately? I know the *nix way with fork and so on; how do I do this on Windows?
Use CreateProcess(); it runs asynchronously. Then you would only have to ensure that updater.exe can write to the original EXE, which you can do by waiting or retrying until the original process has ended (with a grace interval, of course).
There is no fork() in Win32. The API call you are looking for is called ::CreateProcess(). This is the underlying function that system() is using. ::CreateProcess() is inherently asynchronous: unless you are specifically waiting on the returned process handle, the call is non-blocking.
There is also a higher-level function, ::ShellExecute(), that you could use if you are not redirecting the process's standard I/O or waiting on the process. It has the advantage of searching the system PATH for the executable file, as well as the ability to launch batch files and even start the program associated with a document file.
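A minimal sketch of the ::CreateProcess() route for the updater case; the command line is a placeholder and error handling is omitted.

#include <windows.h>

int main()
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    // CreateProcess may modify the command-line buffer, so it must be writable.
    char cmdline[] = "updater.exe";

    if (CreateProcessA(nullptr, cmdline, nullptr, nullptr, FALSE,
                       0, nullptr, nullptr, &si, &pi))
    {
        // Don't wait on the new process; just release our handles and quit.
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;   // the updater keeps running after this process exits
}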
You need a thread for that
Look here: http://msdn.microsoft.com/en-us/library/y6h8hye8(v=vs.80).aspx
You are currently writing your code in the "main thread" (which usually is also your frame code).
So if you run something that takes time to complete, it will halt the execution of your main thread; if you run it in a second thread, your main thread will continue.
Update:
I missed the part about wanting to exit immediately.
execl() is likely what you want.
#include <unistd.h>

int main() {
    execl("C:\\path\\to\\updater.exe", "updater.exe", (char *) 0);
    return 0; // only reached if execl() fails, since exec replaces the process image
}
The suggested CreateProcess() can be used as well, but execl() conforms to POSIX and keeps your code more portable (if you care about that at all).
#include <unistd.h>
extern char **environ;
int execl(const char *path, const char *arg, ...);
Update:
Tested on Windows 7 using gcc as the compiler.
I'm writing a server (mainly for Windows, but it would be cool if I could keep it multi-platform) and I just use a normal console window for it. However, I want the server to be able to handle commands like say text_to_say_here or kick playername, etc. How can I have asynchronous input/output? I already tried some things with plain printf() and gets_s, but that resulted in some really weird behavior.
I mean something like this.
Thanks.
Quick code to take advantage of C++11 features (i.e. cross-platform)
#include <atomic>
#include <thread>
#include <iostream>
#include <string>

void ReadCin(std::atomic<bool>& run)
{
    std::string buffer;
    while (run.load())
    {
        std::cin >> buffer;
        if (buffer == "Quit")
        {
            run.store(false);
        }
    }
}

int main()
{
    std::atomic<bool> run(true);
    std::thread cinThread(ReadCin, std::ref(run));
    while (run.load())
    {
        // main loop
    }
    run.store(false);
    cinThread.join();
    return 0;
}
You can simulate asynchronous I/O using threads, but more importantly, the read and write threads must share a mutex so that one thread cannot step on another and write to the console on top of another thread's output. In other words, std::cout, std::cin, fprintf(), etc. are not thread-safe in the sense you need here; without locking you get an unpredictable interleaving of the two operations, where a read or write takes place while another read or write is already in progress. You could easily end up with a read in the middle of a write, and while you're typing input on the console, another thread could start writing to it, making a visual mess of what you're trying to type.
In order to properly manage your asynchronous read and write threads, it would be best to set up two classes, one for reading and one for writing. In each class, set up a message queue that stores messages (most likely std::string) for the main thread to retrieve, in the case of the read thread, or for the main thread to push messages to, in the case of the write thread. You may also want a special version of your read thread that prints a prompt, pushed into its queue by the main thread, before reading from stdin or std::cin. Both classes then share a common mutex or semaphore to prevent unpredictable interleaving of I/O: by locking that common mutex before any iostream calls (and unlocking it afterwards), interleaving is avoided. Each class also has a second mutex of its own to maintain exclusive access to its internal message queue. Finally, the message queues in each class can be implemented as a std::queue<std::string> (see the sketch below).
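A bare-bones sketch of such a queue; the class and method names are illustrative, not from any particular library.

#include <mutex>
#include <queue>
#include <string>

// Each I/O thread owns one of these; the main thread pushes or pops messages
// through it. A separate mutex (not shown) still guards the actual
// std::cin/std::cout calls, as described above.
class MessageQueue
{
public:
    void push(std::string msg)
    {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(msg));
    }

    bool try_pop(std::string& msg)
    {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty())
            return false;
        msg = std::move(q_.front());
        q_.pop();
        return true;
    }

private:
    std::queue<std::string> q_;
    std::mutex m_;
};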
If you want to make your program as cross-platform as possible, I would suggest implementing this with either Boost.Thread or the C++11 std::thread library.
If you ditch the console window and use TCP connections for command and control, your server will be much easier to keep multi-platform, and also simpler and more flexible.
You can try placing the input and output on separate threads. I'm not quite sure why you want to do this, but threading should do the job.
:)
http://en.wikibooks.org/wiki/C++_Programming/Threading