File Copy Tool with Producer/Consumer Model - C++

So I was looking over my next school assignment, and I'm baffled. I figured I would come to the experts for some direction. My knowledge of synchronization is severely lacking, and I didn't do so hot on the "mcopyfile" assignment it refers to; terrible would probably be a good word for it. If I could get some direction on how to approach this problem, it would be much appreciated. I'm not looking for someone to do my assignment, just for someone to point me in the right direction. Baby steps.
Based on the multi-threaded file copy tool
(mcopyfile) you have created in Lab 2, now please use a worker
pool (Producer-Consumer model) implementation that uses a fixed
number of threads to handle the load (regardless how many files in the
directory to copy). Your program should create 1 file copy producer
thread and multiple file copy consumer threads (this number is taken
from the command-line argument). The file copy producer thread will
generate a list of (source and destination) file descriptors in a buffer
structure with bounded size. Each time the producer accesses the buffer, it
will write one (source, destination) file entry per visit. All file copy
consumer threads will read from this buffer, execute the actual file
copy task, and remove the corresponding file entry (each consumer
will consume one entry each time). Both producer and consumer
threads will write a message to standard output giving the file name
and the completion status (e.g., for producer: “Completing putting
file1 in the buffer”, for consumer: “Completing copying file1 to …”).

Assuming you know how to spawn threads, let me break down the problem for you. There are the following components:
Producer. It generates Tasks for the Consumers based on the source directory input parameter.
Task. A Task is the information a Consumer needs to execute one copy: a tuple of source and destination file paths (or descriptors).
Queue. It is the central piece of communication between Producer and Consumers. The Producer writes Tasks to the Queue and the Consumers consume them.
Consumer. You have a pool of actual workers, each of which takes a Task as input and executes the copy operation.
Now, as per the question, spawn one thread for the producer and n threads for consumers. This is what the threads do:
Producer thread:
    For each file in the source directory:
        Task <- (source file path, destination file path)
        Acquire lock on Queue
        Write Task to Queue
        Release lock on Queue
        Acquire lock on stdout
        Write status message to stdout
        Release lock on stdout
Consumer thread:
    While True:
        Acquire lock on Queue
        If Queue is empty:
            Release lock on Queue
            Sleep for some time
        Else:
            Dequeue a Task
            Release lock on Queue
            Execute copy operation
            Acquire lock on stdout
            Write status message to stdout
            Release lock on stdout
(Note that the empty check happens while holding the lock; checking the size before locking would race with the other consumers.)
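A note on the Else branch: sleeping and retrying works, but a condition variable lets each consumer block until there is actually work, and lets the producer block when the bounded buffer is full. Here is a minimal C++11 sketch of such a bounded buffer, assuming std::thread and friends are allowed for your assignment; CopyTask and BoundedBuffer are illustrative names, not anything the assignment prescribes:

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

struct CopyTask {                 // one (source, destination) entry
    std::string src, dst;
};

class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t cap) : cap_(cap) {}

    void put(CopyTask t) {        // called by the producer
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [this] { return q_.size() < cap_; });
        q_.push(std::move(t));
        not_empty_.notify_one();  // wake one waiting consumer
    }

    CopyTask take() {             // called by each consumer
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [this] { return !q_.empty(); });
        CopyTask t = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();   // the producer may be blocked on a full buffer
        return t;
    }

private:
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    std::queue<CopyTask> q_;
    std::size_t cap_;
};

You still need a way to tell consumers that no more work is coming, e.g. by having the producer push one sentinel task (say, one with an empty source path) per consumer thread after it has walked the whole directory.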
I hope this helps.

Your assignment looks pretty straightforward to me once you know what API/library you'll use for the threading functionality.
First, you'll parse the command-line argument and create the specified number of threads. Then, from the main thread, obtain the list of files in the folder and start putting them in a container (like a std::vector) that is shared among the threads and synchronized with a mutex (or a critical section on Windows). Whenever one of the consumer threads acquires the mutex, it makes a copy of a file entry in the container, removes that entry, releases the mutex so that another thread can start doing the same, and starts copying the file represented by the entry it removed.
I would give you some code snippets, but you didn't say what API/library you're using for the threading functionality.

Related

Threading - The fastest way to handle recurring threads?

I am writing my first threaded application for an industrial machine that has a very fast line speed. I am using MFC for the UI, and once the user pushes the machine's "Start" button, I need to be executing three operations simultaneously. I need to collect data, process it, and output results very quickly, as well as check whether the user has turned the machine "off". When I say very quickly, I expect the analyze portion of the execution to take the longest, and it needs to happen in well under a second. I am mostly concerned about eliminating the overhead associated with threads. What is the fastest way to implement the loop below:
void Scanner(CString& m_StartStop) {
    std::thread Collect(CollectData);
    while (m_StartStop == "Start") {
        Collect.join();
        std::thread Analyze(AnalyzeData);
        std::thread Collect(CollectData);
        Analyze.join();
        std::thread Send(SendData);
        Send.join();
    }
}
I realize this sample is likely way off base, but hopefully it gets the point across. Should I be creating three threads and suspending them instead of creating and joining them over and over? Also, I am a little unclear on whether the UI needs its own thread, since the user needs to be able to pause or stop the line at any time.
In case anyone is wondering why this needs to be threaded as opposed to sequential, the answer is that the line speed of the machine will cause the need to be collecting data for the second part while the first part is being analyzed. Every 1 second equates to 3 ft of linear part movement down this machine.
Think about the functional problem before thinking about implementation.
So we have a continuous flow of data that needs to be collected, analyzed and sent elsewhere, with a supervision point to be able to stop or pause the process:
- collection should be limited by the input flow
- analysis should only be CPU bound
- sending should be I/O bound
You just need to make sure that the slowest part is collection, i.e. that analysis and sending can each keep up with the input rate.
That is a correct use case for threads. The implementation could use:
- a pool of input buffers that are filled by the collect task and used by the analyze task
- one thread that continuously:
    - checks whether it should exit (a dedicated variable)
    - takes an input object from the pool
    - fills it with data
    - passes it to the analyze task
- one thread that continuously:
    - waits for the first of: an input object from the collect task, or a request to exit
    - analyzes the object and prepares the output
    - sends the output
Optionally, you can have a separate thread for processing the output. In that case, the last line becomes:
    - passes an output object to the sending task
and we must add:
- one thread that continuously:
    - waits for the first of: an output object from the analyze task, or a request to exit
    - sends the output
And you must provide a way to signal the request to pause or exit, either with a completely external program and a signaling mechanism, or with a GUI thread. A minimal sketch of the two-thread version follows.
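To make that concrete, here is a minimal C++11 sketch of the two-thread version; CollectData/AnalyzeData/SendData stand in for your real routines, and the buffer pool is simplified to a queue of filled buffers:

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Sample = std::vector<double>;   // stand-in for one collected buffer

std::atomic<bool> g_stop{false};
std::mutex g_m;
std::condition_variable g_cv;
std::queue<Sample> g_full;            // filled buffers waiting for analysis

void collector() {                    // paced by the input flow
    while (!g_stop) {
        Sample s(1024);               // placeholder for CollectData()
        std::lock_guard<std::mutex> lk(g_m);
        g_full.push(std::move(s));
        g_cv.notify_one();
    }
}

void analyzer() {                     // CPU-bound stage; also sends here
    for (;;) {
        std::unique_lock<std::mutex> lk(g_m);
        g_cv.wait(lk, [] { return g_stop || !g_full.empty(); });
        if (g_full.empty()) return;   // told to stop, nothing left to drain
        Sample s = std::move(g_full.front());
        g_full.pop();
        lk.unlock();                  // analyze and send without holding the lock
        // AnalyzeData(s); SendData(result);
    }
}

int main() {
    std::thread c(collector), a(analyzer);
    // ... run until the supervisor asks to stop ...
    {
        std::lock_guard<std::mutex> lk(g_m);
        g_stop = true;                // set under the lock to avoid a missed wakeup
    }
    g_cv.notify_all();
    c.join();
    a.join();
}

The threads are created once and run until told to stop; there is no per-buffer create/join.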
Any threads you need should already be running, waiting for work. You should not create or join threads per work item.
If job A has to finish before job B can start, the completion of job A should trigger the start of job B: when the thread doing job A finishes it, that thread should either do job B itself or trigger the dispatch of job B. There shouldn't be some other thread waiting for job A to finish just so it can start job B.

Multithreading – known solution for this pattern?

I am looking for a known solution (like the producer-consumer problem) for this situation.
In my case an item is one of two things:
- a link to an image,
- a text file with links to images and links to other text files (which contain further links).
I'm trying to create a multi-threaded downloader in C++ (on Unix) using POSIX mutexes and POSIX semaphores.
The application starts with a link to the first text file:
- Threads sleep (semaphore = 0).
- The main thread downloads the first text file.
- It is parsed for other links, which are put in a queue (semaphore += link count, so the other threads wake up).
- The other threads consume links from the queue and, when they download text files, produce more links.
What should the main thread do meanwhile? How do I detect that the other threads have finished?
With a finite queue there can be a deadlock: a text file may contain many links, the queue fills up with text-file entries, and then no text file can finish being parsed.
Thank you for your ideas.
Well, your problem is still a kind of producer/consumer problem, but your consumers are also producers. Some ways to deal with the problem:
Do not limit your queue size. Simply fail when your process runs out of memory. Not very elegant, but it will probably work in 99.99% of all download scenarios (assuming about 100 bytes per download link on average and about 2 GB of available memory, you would have to store more than 20 million links in your queue before running out of memory).
Split your producer and consumer by using the hard drive as a buffer. Download files into a temporary folder. Have a thread watch that folder for new files. Once a new file appears, parse it and put the items into the consumer queue. Once the file has finished parsing, move it to the final download location. This way you are only limited by disk space, and your producer (the parser) is a different thread from your consumers (the downloaders).
Edit
You can wait on your worker threads with pthread_join in the main thread.
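A minimal sketch of that, assuming a fixed pool of workers (worker() and NUM_WORKERS are placeholders for your own code):

#include <pthread.h>

#define NUM_WORKERS 4

void *worker(void *arg) {             /* consumes links from the shared queue */
    /* ... download loop ... */
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    /* ... download the first text file, fill the queue, post the semaphore ... */
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);   /* blocks until each worker has exited */
    return 0;
}

The hard part remains deciding when workers should exit; a common trick is to push one "no more work" sentinel per worker once the count of outstanding files reaches zero.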

I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?

I would like to spawn off threads to perform certain tasks, and use a thread-safe queue to communicate with them. I would also like to be doing IO to a variety of file descriptors while I'm waiting.
What's the recommended way to accomplish this? Do I have to create an inter-thread pipe and write to it when the queue goes from no elements to some elements? Isn't there a better way?
And if I have to create the inter-thread pipe, why don't more libraries that implement shared queues allow you to create the shared queue and inter-thread pipe as a single entity?
Does the fact I want to do this at all imply a fundamental design flaw?
I'm asking this about both C++ and Python. And I'm mildly interested in a cross-platform solution, but primarily interested in Linux.
For a more concrete example...
I have some code which will be searching for stuff in a filesystem tree. I have several communications channels open to the outside world through sockets. Requests that may (or may not) result in a need to search for stuff in the filesystem tree will be arriving.
I'm going to isolate the code that searches for stuff in the filesystem tree in one or more threads. I would like to take requests that result in a need to search the tree and put them in a thread-safe queue of things to be done by the searcher threads. The results will be put into a queue of completed searches.
I would like to be able to service all the non-search requests quickly while the searches are going on. I would like to be able to act on the search results in a timely fashion.
Servicing the incoming requests would generally imply some kind of event-driven architecture that uses epoll. The queue of disk-search requests and the return queue of results would imply a thread-safe queue that uses mutexes or semaphores to implement the thread safety.
The standard way to wait on an empty queue is to use a condition variable. But that won't work if I need to service other requests while I'm waiting: either I end up polling the results queue all the time (delaying the results by half the poll interval, on average), or I block on the condition variable and stop servicing requests.
Whenever one uses an event-driven architecture, one is required to have a single mechanism to report event completion. On Linux, if one is using files, one is required to use something from the select or poll family, meaning that one is stuck with using a pipe to initiate all non-file-related events.
Edit: Linux has eventfd and timerfd. These can be added to your epoll list and used to break out of epoll_wait when triggered from another thread or by a timer event, respectively.
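For example, a minimal eventfd sketch (Linux-only; error handling omitted, and the shared result queue is elided):

#include <cstdint>
#include <cstdio>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <thread>
#include <unistd.h>

int main() {
    int efd = eventfd(0, 0);                  // counter starts at 0
    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = efd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev); // sockets are added the same way

    std::thread worker([efd] {
        // ... produce a result, push it onto the shared queue ...
        uint64_t one = 1;
        write(efd, &one, sizeof(one));        // wakes the epoll loop
    });

    epoll_event out{};
    epoll_wait(epfd, &out, 1, -1);            // returns as soon as the worker writes
    uint64_t count;
    read(efd, &count, sizeof(count));         // resets the counter; now drain the queue
    std::printf("woken by worker, %llu event(s)\n", (unsigned long long)count);

    worker.join();
    close(efd);
    close(epfd);
}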
There is another option, and that is signals. One can use fcntl to modify a file descriptor such that a signal is emitted when the file descriptor becomes active. The signal handler may then push a file-ready message onto any type of queue of your choosing. This may be a simple semaphore or a mutex/condvar-driven queue. Since one is now no longer using select/poll, one no longer needs to use a pipe to queue non-file-based messages.
Health warning: I have not tried this and although I cannot see why it will not work, I don't really know the performance implications of the signal approach.
Edit: Manipulating a mutex in a signal handler is probably a very bad idea.
I've solved this exact problem using what you mention, pipe() and libevent (which wraps epoll). The worker thread writes a byte to its pipe fd when its output queue goes from empty to non-empty. That wakes up the main I/O thread, which can then grab the worker thread's output. This works great and is actually very simple to code.
You have the Linux tag, so I am going to throw this out: POSIX message queues do all of this, which should fulfill your "built-in" request if not your less-desired cross-platform wish.
The thread-safe synchronization is built in. You can have your worker threads block on a read of the queue. Alternatively, MQs can use mq_notify() to spawn a new thread (or signal an existing one) when a new item is put in the queue. And since it looks like you are going to be using select(), the MQ's identifier (mqd_t) can be used as a file descriptor with select.
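A small sketch of the select() side, with the caveat that treating an mqd_t as a file descriptor is Linux-specific behavior, not something POSIX guarantees (the queue name and sizes are arbitrary; link with -lrt on older glibc; error handling omitted):

#include <cstdio>
#include <fcntl.h>
#include <mqueue.h>
#include <sys/select.h>

int main() {
    mq_attr attr{};
    attr.mq_maxmsg = 10;
    attr.mq_msgsize = 128;
    mqd_t mq = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);

    mq_send(mq, "hello", 5, 0);          // a worker thread would do this

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET((int)mq, &rfds);              // Linux: the mqd_t is a real fd
    select((int)mq + 1, &rfds, nullptr, nullptr, nullptr);

    char buf[128];                       // must be >= mq_msgsize
    ssize_t n = mq_receive(mq, buf, sizeof(buf), nullptr);
    std::printf("got %zd bytes\n", n);

    mq_close(mq);
    mq_unlink("/demo_q");
}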
It seems nobody has mentioned this option yet:
Don't run select/poll/etc. in your "main thread". Start a dedicated secondary thread which does the I/O and pushes notifications into your thread-safe queue (the same queue which your other threads use to communicate with the main thread) when I/O operations complete.
Then your main thread just needs to wait on the notification queue.
Duck's and twk's answers are actually better than doron's (the one selected by the OP), in my opinion. doron suggests writing to a message queue from within the context of a signal handler and states that the message queue can be "any type of queue." I would strongly caution you against this, since many C library/system calls cannot safely be called from within a signal handler (see async-signal-safe).
In particular, if you choose a queue protected by a mutex, you should not access it from a signal handler. Consider this scenario: your consumer thread locks the queue to read it. Immediately after, the kernel delivers the signal to notify you that a file descriptor now has data on it. Your signal handler runs (in the consumer thread, necessarily) and tries to put something on your queue. To do this, it first has to take the lock; but that thread already holds the lock, so you are now deadlocked.
select/poll is, in my experience, the only viable solution for an event-driven program on UNIX/Linux. I wish there were a better way inside a multithreaded program, but you need some mechanism to "wake up" your consumer thread. I have yet to find a method that does not involve a system call (since the consumer thread is on a wait queue inside the kernel during any blocking call such as select).
EDIT: I forgot to mention one Linux-specific way to handle signals when using select/poll: signalfd(2). You get a file descriptor you can select/poll on, and your handling code runs normally instead of in a signal handler's context.
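A tiny signalfd sketch (Linux-only): the signal arrives as ordinary readable data instead of interrupting you asynchronously. In a real program you would add sfd to your epoll/select set alongside your sockets:

#include <csignal>
#include <cstdio>
#include <sys/signalfd.h>
#include <unistd.h>

int main() {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigprocmask(SIG_BLOCK, &mask, nullptr);   // the signal must be blocked first
    int sfd = signalfd(-1, &mask, 0);         // becomes readable when SIGINT arrives

    signalfd_siginfo si;
    read(sfd, &si, sizeof(si));               // blocks until the signal is delivered
    std::printf("caught signal %u synchronously\n", si.ssi_signo);
    close(sfd);
}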
This is a very commonly seen problem, especially when you are developing a network server-side program. The main loop of most Linux server-side programs will look like this:
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        req = fd.read();
        resp = proc(req);
        fd.send(resp);
    }
}
It is a single-threaded (the main thread), epoll-based server framework. The problem is that it is single-threaded, not multi-threaded. It requires that proc() never block or run for a significant amount of time (say, 10 ms for common cases).
If proc() will ever run for a long time, WE NEED MULTIPLE THREADS, and must execute proc() in a separate worker thread.
We can submit a task to a worker thread without blocking the main thread by using a mutex-based message queue; it is fast enough.
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        req = fd.read();
        queue.add_job(req); // fast, non-blocking
    }
}
Then we need a way to obtain the task result from a worker thread. How? What if we just check the message queue directly, before or after epoll_wait()?
epoll_add(serv_sock);
while (1) {
    ret = epoll_wait(); // may block for 10 ms
    resp = queue.check_result(); // fast, non-blocking
    foreach (ret as fd) {
        req = fd.read();
        queue.add_job(req); // fast, non-blocking
    }
}
However, the check only runs after epoll_wait() returns, and epoll_wait() usually blocks for the full timeout (10 ms in common cases) if none of the file descriptors it waits on is active.
For a server, 10 ms is quite a long time! Can we signal epoll_wait() to return immediately when a task result is generated?
Yes! I will describe how it is done in one of my open-source projects:
Create a pipe shared with all worker threads, and have epoll wait on that pipe as well. Once a task result is generated, the worker thread writes one byte into the pipe, and epoll_wait() will return almost immediately (a Linux pipe has 5 us to 20 us of latency).
In my project SSDB (a Redis-protocol-compatible on-disk NoSQL database), I created a SelectableQueue for passing messages between the main thread and worker threads. Just like its name says, SelectableQueue has a file descriptor which can be waited on by epoll.
SelectableQueue: https://github.com/ideawu/ssdb/blob/master/src/util/thread.h#L94
Usage in main thread:
epoll_add(serv_sock);
epoll_add(queue->fd());
while (1) {
    ret = epoll_wait();
    foreach (ret as fd) {
        if (fd is queue) {
            sock, resp = queue->pop_result();
            sock.send(resp);
        }
        if (fd is client_socket) {
            req = fd.read();
            queue->add_task(fd, req);
        }
    }
}
Usage in worker thread:
fd, req = queue->pop_task();
resp = proc(req);
queue->add_result(fd, resp);
C++11 has std::mutex and std::condition_variable. The two can be used to have one thread signal another when a certain condition is met. It sounds to me like you will need to build your solution out of these primitives. If your environment does not yet support these C++11 library features, you can find very similar ones in Boost. Sorry, I can't say much about Python.
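For reference, a tiny sketch of those two primitives in action, where the bool is the "certain condition":

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    std::thread t([&] {
        std::lock_guard<std::mutex> lk(m);
        ready = true;                     // the condition
        cv.notify_one();
    });

    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [&] { return ready; });   // the predicate handles spurious wakeups
    std::cout << "signalled\n";
    t.join();
}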
One way to accomplish what you're looking to do is by implementing the Observer pattern.
You would register your main thread as an observer with all your spawned threads, and have them notify it when they are done doing what they were supposed to do (or update it during their run with the info you need).
Basically, you want to change your approach to an event-driven model.
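A hedged sketch of the idea; note the comment, because the callback runs on the worker's thread, so in a real program it should only do something thread-safe (such as push a notification onto the main thread's queue):

#include <functional>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    // The "observer" is just a registered callback here.
    std::function<void(int)> on_done = [](int id) {
        std::cout << "worker " << id << " finished\n";  // runs on the worker's thread!
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < 3; ++i)
        pool.emplace_back([i, &on_done] {
            // ... do the work ...
            on_done(i);                   // notify the registered observer
        });
    for (auto &t : pool) t.join();
}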

Dedicated thread (one thread per connection) with buffering capability (C/C++)

My process reads, from a single queue, tasks that need to be sent to several destinations.
We need to maintain order between the tasks (i.e., a task that arrived in the queue at 00:00 needs to be sent before the task that arrived at 00:01), therefore we cannot use a plain thread pool. Order needs to be maintained per destination.
One solution is to create a dedicated thread per destination. The main thread reads a task from the queue and, depending on the destination, routes it to the correct thread.
This solution has a problem: if a worker thread is busy, the master thread remains blocked, making the system slow. What I need is a new queue per thread: the master thread distributes the work to the queues, and each worker thread reads its own queue for incoming messages...
I would like to share my thoughts with the SO community, and I am searching for a C/C++ solution close to my description. Is there a library that implements such a model?
The design you want is fairly straightforward; I think you can probably write the code you need and get it working in an hour or two. Looking for a 3rd party library to implement this is probably overkill (unless I am misunderstanding the problem).
In particular, for each 'worker' thread, you need a FIFO data structure (e.g. std::queue), a mutex, and a mechanism that the 'master' thread can use to signal the worker to wake up and check the data structure for new messages (e.g. a condition variable, or a semaphore, or even a socketpair that the worker blocks on reading, and that the master can send a byte on to wake the worker up).
Then to send a task to a particular worker thread, the master would do something like this (pseudocode):
struct WorkerThreadData & workerThread = _workerThreads[threadIndexIWantToSendTo];
workerThread.m_mutex.Lock();
workerThread.m_incomingTasks.push_back(theNewTaskObject);
workerThread.m_mutex.Unlock();
workerThread.m_signalMechanism.SignalThreadToWakeUp(); // make sure the worker looks at the task list!
... and each worker thread would have an event loop like this:
struct WorkerThreadData & myData = _workerThreads[myWorkerIndex];
TaskObject * taskObject;
while (1)
{
    myData.m_signalMechanism.WaitForSignal(); // block until the main thread wakes me up
    myData.m_mutex.Lock();
    taskObject = (myData.m_incomingTasks.length() > 0) ? myData.m_incomingTasks.pop_front() : NULL;
    myData.m_mutex.Unlock();
    if (taskObject)
    {
        taskObject->DoTheWork();
        delete taskObject;
    }
}
This will never block the master thread (for any significant amount of time), since the Mutex is only held very briefly by anyone. In particular, the worker threads are not holding the mutex while they are working on a task object.
The "need to maintain order" all-but-directly states that you're going to be executing the tasks serially no matter how many threads you have. That being the case, you're probably best off with just one thread servicing the requests.
You could gain something if the requirement is a bit looser than that -- for example, if all the tasks for one destination need to remain in order, but there's no ordering requirement for tasks with different destinations. If this is the case, then your solution of a master queue sending tasks to an input queue for each individual thread sounds like quite a good one.
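To illustrate, here is a self-contained C++11 sketch of that looser version: tasks are routed by hashing the destination, so all tasks for one destination land on the same worker queue (and stay in order) while different destinations proceed in parallel. All names are illustrative:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Queue {                            // minimal thread-safe FIFO
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> q;
    void push(std::string v) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    std::string pop() {                   // an empty string serves as a stop sentinel
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        std::string v = std::move(q.front());
        q.pop();
        return v;
    }
};

int main() {
    const std::size_t N = 2;
    std::vector<Queue> queues(N);
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < N; ++i)
        workers.emplace_back([&, i] {
            for (std::string t = queues[i].pop(); !t.empty(); t = queues[i].pop())
                std::cout << "worker " << i << " sends: " << t << "\n";
        });

    // Master: route by destination so per-destination order is preserved.
    std::hash<std::string> h;
    for (std::string task : {"destA:1", "destA:2", "destB:1"}) {
        std::string dest = task.substr(0, task.find(':'));
        queues[h(dest) % N].push(task);
    }
    for (auto &q : queues) q.push("");    // stop sentinels
    for (auto &w : workers) w.join();
}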
Edit:
Specifying the number of threads/mutexes dynamically is pretty easy. For example, to take the number from the command line, you could do something along these lines (leaving out error and sanity checking for the moment):
std::vector<pthread_t> threads;
int num_threads = atoi(argv[1]);
threads.resize(num_threads);
for (int i = 0; i < num_threads; i++)
    pthread_create(&threads[i], NULL, thread_routine, NULL);

Writing concurrently to a file

I have this tool in which a single log-like file is written to by several processes.
What I want to achieve is to have the file truncated when it is first opened, and then have all writes appended at the end by the several processes that have it open.
All writes are systematically flushed and mutex-protected so that I don't get jumbled output.
First, a process creates the file, then it starts a sequence of other processes, one at a time, that open the file and write to it (the master sometimes chimes in with additional content; a slave process may or may not be open and writing something at the same time).
I'd like, as much as possible, not to use more IPC than what already exists (all I'm doing now is writing to a popen-created pipe). I have no access to external libraries other than the CRT and the Win32 API, and I would like not to start writing serialization code.
Here is some code that shows where I've gone:
// open the file: truncate it if we're the 'master', append to it if we're a 'slave'
std::ofstream blah(filename, std::ios::out | (isClient ? std::ios::app : std::ios::openmode()));
// do stuff...
// write stuff
myMutex.acquire();
blah << "stuff to write" << std::flush;
myMutex.release();
Well, this does not work: although the output of the slave processes is ordered as expected, what the master writes is either bunched together or in the wrong place, when it appears at all.
I have two questions: is the flag combination given to the ofstream's constructor the right one? Am I going about this the right way at all?
If you'll be writing a lot of data to the log from multiple threads, you'll need to rethink the design, since all threads will block trying to acquire the mutex, and in general you don't want your threads blocked from doing work just so they can log. In that case, you'd want your worker threads to log entries to a queue (which just requires moving stuff around in memory) and have a dedicated thread pull entries off the queue and write them to the output. That way your worker threads are blocked for as short a time as possible.
You can do even better than this by using async I/O, but that gets a bit more tricky.
As suggested by reinier, the problem was not in the way I use the files but in the way the programs behave.
The fstreams do just fine.
What I missed was the synchronization between the master and the slave (the former assumed a particular operation was synchronous when it was not).
edit: Oh well, there still was a problem with the open flags. The process that opened the file with ios::out did not move its file pointer as needed (it was erasing text other processes were writing), and using seekp() completely screwed up the output to cout, as another part of the code uses cerr.
My final solution is to keep the mutex and the flush and, for the master process, to open the file in ios::out mode (to create or truncate it), close it, and reopen it using ios::app.
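In snippet form (a sketch of the fix described, not the tool's actual code):

#include <fstream>
#include <string>

// Master: truncate once, then reopen for appending like everyone else.
std::ofstream open_log_as_master(const std::string &filename) {
    {
        std::ofstream create(filename, std::ios::out | std::ios::trunc);
    }                                                 // truncated and closed here
    return std::ofstream(filename, std::ios::app);    // every write now lands at EOF
}
// Slaves simply open with std::ios::app from the start.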
I made a little log system that has its own process and handles all the writing; the idea is quite simple. The processes that use the log just send entries to a pending queue, which the log process drains and writes to a file. It's like batch processing in any realtime rendering app. This way you get rid of too many open/close file operations. If I can, I'll add the sample code.
How do you create that mutex?
For this to work, it needs to be a named mutex so that both processes actually lock on the same thing.
You can check that your mutex is actually working correctly with a small piece of code that locks it in one process while another process tries to acquire it.
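For example, with the Win32 API (the mutex name is illustrative; every process that opens the same name gets the same kernel object):

#include <windows.h>

int main() {
    // CreateMutexA either creates the named mutex or opens the existing one.
    HANDLE h = CreateMutexA(nullptr, FALSE, "Global\\my_log_mutex");

    WaitForSingleObject(h, INFINITE);   // acquire across processes
    // ... write to the shared file, then flush ...
    ReleaseMutex(h);

    CloseHandle(h);
    return 0;
}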
I suggest blocking such that the text is completely written to the file before releasing the mutex. I've had instances where the text from one task was interrupted by text from a higher-priority thread; it doesn't look very pretty.
Also, put the output into comma-separated format, or some other format that can easily be loaded into a spreadsheet. Include the thread ID and a timestamp. The interlacing of the text lines shows how the threads are interacting; the ID column allows you to sort by thread, and timestamps can be used to show sequential access as well as duration. Writing in a spreadsheet-friendly format will let you analyze the log file with an external tool without writing any conversion utilities. This has helped me greatly.
One option is to use ACE::logging. It has an efficient implementation of concurrent logging.