I was wondering if C++ had any way of doing interrupts. I want one program to store information in a text file, while the other one prints a statement depending on what is in the text file. Since I want it to be as accurate as possible, I need the print program to be interrupted when the update program updates the file.
C++ itself doesn't provide this capability; the language knows nothing of other programs that may or may not be running.
What you need to look into is IPC (inter-process communications), something your operating system will probably provide.
Things like signals, shared memory, semaphores, message queues and so on.
Since you seem to be using the file itself as the method of delivering content to the other process, signals are probably the way to go. You would simply raise a signal from process A to process B and a signal handler would run in the latter.
Of course this all depends on which operating system you're targeting.
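For instance, on a POSIX system the updating process could send the printing process a signal whose handler merely sets a flag. This is only a minimal sketch (the choice of SIGUSR1 and the printf are illustrative, not part of the original question):
#include <csignal>
#include <cstdio>
#include <unistd.h>

// Set by the handler when the updater tells us the file changed.
static volatile sig_atomic_t g_file_changed = 0;

static void on_sigusr1(int) { g_file_changed = 1; }

int main()
{
    // Install the handler; the updating process would send the signal
    // with kill(printer_pid, SIGUSR1) right after rewriting the file.
    struct sigaction sa = {};
    sa.sa_handler = on_sigusr1;
    sigaction(SIGUSR1, &sa, nullptr);

    for (;;) {
        pause();                 // sleep until any signal arrives
        if (g_file_changed) {
            g_file_changed = 0;
            std::printf("file was updated, re-reading it...\n");   // re-read and print here
        }
    }
}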
If you are using Windows you can use FindFirstChangeNotification.
Here's some old code I have. This is run in its own thread:
DWORD CDiskWatcher::Run(void *vpParameter)
{
    CFileNotifyInterface *pIface = (CFileNotifyInterface *)vpParameter;
    HANDLE handles[2];
    handles[0] = m_hQuitEvent;
    handles[1] = ::FindFirstChangeNotification(m_szPath, FALSE, FILE_NOTIFY_CHANGE_LAST_WRITE|FILE_NOTIFY_CHANGE_FILE_NAME);
    DWORD dwObject;
    if (INVALID_HANDLE_VALUE != handles[1]) {
        do {
            // Wait for the notification
            dwObject = ::WaitForMultipleObjects(2, handles, FALSE, INFINITE);
            if (WAIT_OBJECT_0 + 1 == dwObject) {
                // Continue waiting...
                ::FindNextChangeNotification(handles[1]);
                pIface->FireFileSystemChange(m_szPath);
            }
        } while (WAIT_OBJECT_0 != dwObject);
        // Close handle
        ::FindCloseChangeNotification(handles[1]);
    }
    return 0;
}
Note m_hQuitEvent is created with CreateEvent() and CFileNotifyInterface is for callbacks:
class CFileNotifyInterface
{
public:
    virtual void FireFileSystemChange(const char *szPath) = 0;
};
I'm running a Visual C++ MFC application in release mode. I'm compiling everything using Visual Studio 2010.
My app runs a mini CNC mill through USB VCP communication.
I have a XML file that stores the app's settings.
My problem is this: occasionally (and this is repeatable) the pointer to the tinyxml2::XMLDocument I'm using gets set to 0x000.
Info:
Occasionally, the XML file gets written to while the mill is running.
Before the error happens, the mill I'm running seizes up for almost 30 seconds.
I'm using mutex locks to make sure the xmldoc doesn't get written to file twice at once.
The mutex locks are working, and the mutex error never occurs. I know the mutex code isn't perfect, but that isn't the issue. Honest.
I never write to the xmldoc pointer except when the parent class is booting up.
And then, all of a sudden, the xmlDoc pointer gets set to zero.
Any thoughts anyone?
Here is my saving code, although the problem may lie elsewhere:
void XMLSettings::SaveToXML()
{
    HANDLE g_Mutex = CreateMutex( NULL, TRUE, "XMLSavingMutex");
    DWORD wait_success = WaitForSingleObject( g_Mutex, 30000L);
    if(wait_success == WAIT_OBJECT_0){
        CIsoProApp* pApp = (CIsoProApp*)AfxGetApp();
        if(PathFileExists(pApp->DrivePath + "IsoPro\\temp.xml"))
        {
            DeleteFile(pApp->DrivePath + "IsoPro\\temp.xml");
        }
        if(0==&xmlDoc)
        {
            OutputDebugString("xmlDoc == NULL");
        }
        int errorcode = xmlDoc->SaveFile(pApp->DrivePath + "IsoPro\\temp.xml");
        if(errorcode != 0)
        {
            OutputDebugString("xmlDoc == errorcode");
        }
        if(0==&xmlDoc)
        {
            OutputDebugString("xmlDoc == NULL2");
        }
        if(0==xmlDoc)
        {
            OutputDebugString("xmlDoc == NULL");
        }
        if(PathFileExists(pApp->DrivePath + "IsoPro\\Settings.xml"))
        {
            DeleteFile(pApp->DrivePath + "IsoPro\\Settings.xml");
        }
        MoveFile(pApp->DrivePath + "IsoPro\\temp.xml",pApp->DrivePath + "IsoPro\\Settings.xml");
        ReleaseMutex(g_Mutex);
    }
    else
    {
        int errorInt = GetLastError();
        CString error;
        error.Format("%d",errorInt);
        if(errorInt != ERROR_ALREADY_EXISTS)
        {
            AfxMessageBox("XMLSavingMutex Error. WaitSuccess = " + wait_success);
            AfxMessageBox("XMLSavingMutex Error. GetLastError = " + error);
        }
    }
    CloseHandle(g_Mutex);
}
Since it seems that you are creating a Mutex each time SaveToXML is called, you should change your call to
HANDLE g_Mutex = CreateMutex( NULL, FALSE, "XMLSavingMutex");
Doing this still creates (or opens) the named mutex, but no longer claims initial ownership; other threads and processes that call CreateMutex with the same name receive a handle to the same mutex.
From the doc:
Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership.
(Credit to WhozCraig for pointing out named mutexes)
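Put together, the intended pattern looks roughly like this (a minimal sketch reusing the names from the question, error handling omitted):
// Create or open the named mutex without claiming initial ownership.
HANDLE hMutex = CreateMutex(NULL, FALSE, "XMLSavingMutex");
if (hMutex != NULL)
{
    // Acquire ownership; this blocks until any other saver releases it.
    if (WaitForSingleObject(hMutex, 30000L) == WAIT_OBJECT_0)
    {
        // ... save the XML document here ...
        ReleaseMutex(hMutex);
    }
    CloseHandle(hMutex);
}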
It appears that I was accessing the xml getter while writing the xml to a file. I put a single mutex lock in place for all xml actions and things seem to be functioning properly. Thanks to everyone for their help. I'll be in touch with more info if it becomes available.
I am wondering what the best practice is for executing new processes (programs) from a running process. To be more specific, I am implementing a C/C++ job scheduler that has to run multiple binaries while communicating with them. Is exec or fork common? Or is there any library taking care of this?
You can use popen() to spawn the processes and communicate with them. In order to handle communication with many processes from a single parent process, use select() or poll() to multiplex the reading/writing of the file descriptors given to you by popen() (you can use fileno() to turn a FILE* into an integer file descriptor).
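For example, here's a bare-bones popen() sketch (assuming a POSIX system; "ls ." is just a stand-in for one of your job binaries):
#include <cstdio>

int main()
{
    // Spawn a child process and read its standard output.
    FILE *child = popen("ls .", "r");
    if (!child)
        return 1;

    char line[256];
    while (fgets(line, sizeof line, child))
        fputs(line, stdout);        // forward whatever the child prints

    return pclose(child);           // reap the child and return its exit status
}
With several children running at once, fileno() on each FILE* gives you the descriptor to hand to select() or poll(), as described above.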
If you want a library to abstract much of this for you, I suggest libuv. Here's a complete example program I whipped up, largely following the docs at https://nikhilm.github.io/uvbook/processes.html#spawning-child-processes:
#include <cstdio>
#include <cstdlib>
#include <inttypes.h>
#include <uv.h>

static void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf)
{
    *buf = uv_buf_init((char*)malloc(suggested_size), suggested_size);
}

void echo_read(uv_stream_t *server, ssize_t nread, const uv_buf_t* buf)
{
    if (nread == -1) {
        fprintf(stderr, "error echo_read");
        return;
    }
    puts(buf->base);
}

static void on_exit(uv_process_t *req, int64_t exit_status, int term_signal)
{
    fprintf(stderr, "Process %d exited with status %" PRId64 ", signal %d\n",
            req->pid, exit_status, term_signal);
    uv_close((uv_handle_t*)req, NULL);
}

int main()
{
    uv_loop_t* loop = uv_default_loop();
    const int N = 3;
    uv_pipe_t channel[N];
    uv_process_t child_req[N];
    for (int ii = 0; ii < N; ++ii) {
        char* args[3];
        args[0] = const_cast<char*>("ls");
        args[1] = const_cast<char*>(".");
        args[2] = NULL;
        uv_pipe_init(loop, &channel[ii], 1);
        uv_stdio_container_t child_stdio[3]; // {stdin, stdout, stderr}
        child_stdio[STDIN_FILENO].flags = UV_IGNORE;
        child_stdio[STDOUT_FILENO].flags = uv_stdio_flags(UV_CREATE_PIPE | UV_WRITABLE_PIPE);
        child_stdio[STDOUT_FILENO].data.stream = (uv_stream_t*)&channel[ii];
        child_stdio[STDERR_FILENO].flags = UV_IGNORE;
        uv_process_options_t options = {};
        options.exit_cb = on_exit;
        options.file = "ls";
        options.args = args;
        options.stdio = child_stdio;
        options.stdio_count = sizeof(child_stdio) / sizeof(child_stdio[0]);
        int r;
        if ((r = uv_spawn(loop, &child_req[ii], &options))) {
            fprintf(stderr, "%s\n", uv_strerror(r));
            return EXIT_FAILURE;
        } else {
            fprintf(stderr, "Launched process with ID %d\n", child_req[ii].pid);
            uv_read_start((uv_stream_t*)&channel[ii], alloc_buffer, echo_read);
        }
    }
    return uv_run(loop, UV_RUN_DEFAULT);
}
The above will spawn three copies of ls to print the contents of the current directory. They all run asynchronously.
Okay, let's start.
There are a few ways to create another parallel task from a running task, although I wouldn't call all of them processes.
Using the fork() system call
As you have already mentioned, fork() creates a new process from your parent process. There are a few good things and a few bad things about fork().
Good things
fork() creates a completely separate process, and on multi-core CPU systems it can achieve true parallelism.
fork() also creates a child process with a different pid, which is handy if you ever want to kill that process explicitly.
The wait() and waitpid() system calls are convenient for making the parent wait for the child.
When the child terminates, the parent receives SIGCHLD; with the sigaction() function you can react to the child's exit without blocking the parent.
Bad things
Forked processes do not share the same address space, so if one process has, say, a variable var, the other process cannot access that same var directly. Communication therefore becomes a big issue.
To communicate you need to use an IPC mechanism such as pipes, named pipes, message queues or shared memory.
Of these, pipes, named pipes and message queues are used through the read and write system calls; because those calls block, your application stays synchronized, but these IPCs are comparatively slow. The only fast IPC is shared memory, but it cannot use read and write, so you need your own synchronization mechanism, such as semaphores, and implementing semaphores correctly in bigger applications is difficult. A fork-and-pipe sketch follows below.
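To make the fork-plus-pipe idea concrete, here is a minimal sketch (error handling mostly omitted, and the message text is just for illustration): the child writes into the pipe, the parent reads it and then reaps the child with waitpid().
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    int fds[2];
    if (pipe(fds) != 0)              // fds[0] = read end, fds[1] = write end
        return 1;

    pid_t pid = fork();
    if (pid == 0) {
        // Child: write a message into the pipe and exit.
        close(fds[0]);
        const char msg[] = "hello from the child";
        write(fds[1], msg, sizeof msg);
        close(fds[1]);
        _exit(0);
    }

    // Parent: read the child's message, then wait for the child to finish.
    close(fds[1]);
    char buf[64] = {0};
    read(fds[0], buf, sizeof buf - 1);
    close(fds[0]);
    std::printf("parent got: %s\n", buf);
    waitpid(pid, NULL, 0);
    return 0;
}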
Here come threads (pthreads)
Threads remove most of the difficulties faced with fork:
A thread doesn't create a separate process.
Instead it creates light-weight subtasks which can run almost in parallel.
All threads share the same address space, so there is no need for any IPC.
They come with mutexes, which are wonderful for any synchronization needed, even in bigger applications.
Threads also don't create a new process, so all threads are part of the same process and share the same pid.
Note: in C++, std::thread is part of the standard library, not a system call.
Note 2: Boost threads in C++ are much more mature and recommended.
The main thing, though, is to know when to use a thread and when to use a process.
If you need to create a subtask that doesn't need to work with some other task but has to work in isolation, use a process; otherwise use a thread (see the sketch below).
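For the thread side, a minimal std::thread sketch (the shared counter and the mutex are purely illustrative):
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    int counter = 0;     // shared state, visible to both threads
    std::mutex m;        // protects counter

    auto work = [&]() {
        for (int i = 0; i < 1000; ++i) {
            std::lock_guard<std::mutex> lock(m);   // no IPC needed, just a mutex
            ++counter;
        }
    };

    std::thread t1(work);
    std::thread t2(work);
    t1.join();           // wait for both subtasks to finish
    t2.join();

    std::cout << "counter = " << counter << "\n";  // always 2000
    return 0;
}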
The exec family of syscalls is different: the new program keeps the same pid. So if you have an application of, say, 500 lines and you make an exec call at line 250, the exec'ed program is pasted over your whole process image, and after the exec call your program will not resume from line 251. Also, exec calls don't flush your stdio buffers.
But yes, if you intend to create a separate process and then use an exec call inside it to perform that task and come back out, you are welcome to do so; just remember to set up IPC to collect the output, otherwise it is of no use.
For more info on fork, click here
For more info on threads, click here
For Boost threads, click here
@John Zwinck's answer is also good. I know little about the select() system call, but yes, it is possible that way too.
Edited: as @Jonathan Leffler pointed out.
Editing after a long time: after some years, I now never think of using all these spooky libraries or gruesome ways of parallel (or should I say seemingly parallel) processing. Enter coroutines, the future of concurrent processing. Look at the following Go code. Sure, this is possible in C/C++ too. For 7.7 million rows in a database, this code would be at most a few milliseconds slower than a C/C++ thread-based implementation, but several times more manageable and scalable.
package main

import (
    "fmt"
    "reflect"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/sqlite"
)

type AirQuality struct {
    // gorm.Model
    // ID uint `gorm:"column:id"`
    Index   string `gorm:"column:index"`
    BEN     string `gorm:"column:BEN"`
    CH4     string `gorm:"column:CH4"`
    CO      string `gorm:"column:CO"`
    EBE     string `gorm:"column:EBE"`
    MXY     string `gorm:"column:MXY"`
    NMHC    string `gorm:"column:NMHC"`
    NO      string `gorm:"column:NO"`
    NO2     string `gorm:"column:NO_2"`
    NOX     string `gorm:"column:NOx"`
    OXY     string `gorm:"column:OXY"`
    O3      string `gorm:"column:O_3"`
    PM10    string `gorm:"column:PM10"`
    PM25    string `gorm:"column:PM25"`
    PXY     string `gorm:"column:PXY"`
    SO2     string `gorm:"column:SO_2"`
    TCH     string `gorm:"column:TCH"`
    TOL     string `gorm:"column:TOL"`
    Time    string `gorm:"column:date; type:timestamp"`
    Station string `gorm:"column:station"`
}

func (AirQuality) TableName() string {
    return "AQ"
}

func main() {
    c := generateRowsConcurrent("boring!!")
    for row := range c {
        fmt.Println(row)
    }
}

func generateRowsConcurrent(msg string) <-chan []string {
    c := make(chan []string)
    go func() {
        db, err := gorm.Open("sqlite3", "./load_testing_7.6m.db")
        if err != nil {
            panic("failed to connect database")
        }
        defer db.Close()
        rows, err := db.Model(&AirQuality{}).Limit(20).Rows()
        if err != nil {
            panic(err)
        }
        defer rows.Close()
        for rows.Next() {
            var aq AirQuality
            db.ScanRows(rows, &aq)
            v := reflect.Indirect(reflect.ValueOf(aq))
            var buf []string
            for i := 0; i < v.NumField(); i++ {
                buf = append(buf, v.Field(i).String())
            }
            c <- buf
        }
        defer close(c)
    }()
    return c
}
I need a code construction for my project which waits for some time, but when there is an interrupt (e.g. incoming UDP packets) it leaves this loop, does something, and afterwards restarts the waiting.
How can I implement this? My first idea was to use while(wait(2000)), but wait is a void construct...
Thank you!
I would put the loop inside a function
void awesomeFunction() {
    bool loop = true;
    while (loop) {
        wait(2000);
        ...
        ...
        if (conditionMet)
            loop = false;
    }
}
Then I would put this function inside another loop
while (programRunning) {
    awesomeFunction();
    /* Loop ended, do stuff... */
}
There are a few things I am not clear about from the question. Is this a multi-threaded application, where one thread handles (say) the UDP packets, and the other waits for the event, or is this single-threaded? You also didn't mention what operating system this is, which is relevant. So I am going to assume Linux, or something that supports the poll API, or something similar (like select).
Let's assume a single threaded application that waits for UDP packets. The main idea is that once you have the socket's file descriptor, you have an infinite loop on a call to poll. For instance:
#include <poll.h>

// ...

void handle_packets() {
    // m_fd was created with `socket` and `bind` or `connect`.
    struct pollfd pfd = {.fd = m_fd, .events = POLLIN};
    int timeout;
    timeout = -1; // Wait indefinitely
    // timeout = 2000; // Wait for 2 seconds
    while (true) {
        pfd.revents = 0;
        poll(&pfd, 1, timeout);
        if ((pfd.revents & POLLIN) != 0) {
            handle_single_packet(); // Method to actually read and handle the packet
        }
        if ((pfd.revents & (POLLERR | POLLHUP)) != 0) {
            break; // return on error or hangup
        }
    }
}
A simple example of select can be found here.
If you are looking at a multi-threaded application and trying to communicate between the two threads, there are several options, two of which are:
Use the same mechanism as above, where the file descriptor is the result of a call to pipe. The sleeping thread gets the read end of the pipe; the waking thread gets the write end and writes a character when it's time to wake up.
Use C++'s std::condition_variable. It is documented here, with a complete example, and a brief sketch follows below. Whether this fits depends on your context, e.g., whether you have a variable that you can wait on, or what has to be done.
Other interrupts can also be caught in this way. Signals, for instance, have a signalfd. Timer events have timerfd. This depends a lot on what you need, and in what environment you are running. For instance, timerfd is Linux-specific.
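Going back to the std::condition_variable option, here is a minimal sketch (the flag name and the 100 ms delay are purely illustrative):
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool event_ready = false;            // the "variable you can wait on"

void waiter()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return event_ready; });   // sleeps until notified
    std::cout << "woke up, handling the event\n";
    event_ready = false;                         // then restart the waiting
}

void notifier()
{
    {
        std::lock_guard<std::mutex> lock(m);
        event_ready = true;
    }
    cv.notify_one();                 // wake the waiting thread
}

int main()
{
    std::thread t(waiter);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    notifier();
    t.join();
}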
I have created the following things for my threads:
int Data_Of_Thread_1 = 1;
int Data_Of_Thread_2 = 2;
Handle_Of_Thread_1 = 0;
Handle_Of_Thread_2 = 0;
HANDLE Array_Of_Thread_Handles[2];
Handle_Of_Thread_1 = CreateThread( NULL, 0,ModbusRead, &Data_Of_Thread_1, 0, NULL);
Handle_Of_Thread_2 = CreateThread( NULL, 0,ModbusWrite, &Data_Of_Thread_2, 0, NULL);
Now I have to control the execution of these two threads. The condition is as follows:
function ModbusWrite
{
    if (condition1 true)
    {
        Pause thread1
        if (condition2 true)
        {
            resume thread1
        }
    }
}
I have gone through various sites; they mention synchronization objects such as events, mutexes, semaphores, etc. I think I have to use either an event or a mutex, but I am not quite clear on how to use either. First we create the event or mutex, but then how do I apply it to my condition above? I am also not clear about the WaitForSingleObject function: where and how should I use it? If anyone can help me with the code, I would be grateful.
On Windows, one typically uses event objects to wait for conditions without wasting CPU. If the external software you're interfacing with provides some sort of asynchronous callback mechanism, then you'd want to do something like this:
// Create an anonymous auto-reset event, initial state unsignaled
HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

...

void ThreadProcedure()
{
    while (threadShouldContinueRunning())
    {
        // Wait until event is signaled
        WaitForSingleObject(hEvent, INFINITE);
        // Now the thread has woken up, check the condition and respond
        // accordingly
    }
}

...

void OnExternalCallback()
{
    // Called from external library when the condition becomes true -- signal
    // the worker thread to resume
    SetEvent(hEvent);
}

...

// Don't forget to cleanup
CloseHandle(hEvent);
Now, if the external library does not provide any sort of callback mechanism to inform you when the condition becomes true, you're in trouble. In that case, the only way to detect when the condition becomes true is to continuously poll it, optionally sleeping in between to avoid burning CPU time. The major downside of this, of course, is that you introduce unnecessary latency in detecting the condition change (the latency amount is the sleep time), or you waste a lot of CPU (and therefore power/battery life) spinning.
void ThreadProcedure()
{
    while (threadShouldContinueRunning())
    {
        // Avoid polling if at all possible -- this adds latency and/or wastes
        // CPU and power/battery life
        if (externalConditionIsTrue())
        {
            // Handle
        }
        else
        {
            Sleep(50); // Tune this number to balance latency vs. CPU load
        }
    }
}
I keep running into this problem of trying to run a thread with the following properties:
runs in an infinite loop, checking some external resource, e.g. data from the network or a device,
gets updates from its resource promptly,
exits promptly when asked to,
uses the CPU efficiently.
First approach
One solution I have seen for this is something like the following:
void class::run()
{
    while(!exit_flag)
    {
        if (resource_ready)
            use_resource();
    }
}
This satisfies points 1, 2 and 3, but being a busy waiting loop, uses 100% CPU.
Second approach
A potential fix for this is to put a sleep statement in:
void class::run()
{
    while(!exit_flag)
    {
        if (resource_ready)
            use_resource();
        else
            sleep(a_short_while);
    }
}
We now don't hammer the CPU, so we address 1 and 4, but we could wait up to a_short_while unnecessarily when the resource is ready or we are asked to quit.
Third approach
A third option is to do a blocking read on the resource:
void class::run()
{
    while(!exit_flag)
    {
        obtain_resource();
        use_resource();
    }
}
This will satisfy 1, 2, and 4 elegantly, but now we can't ask the thread to quit if the resource does not become available.
Question
The best approach seems to be the second one, with a short sleep, so long as the tradeoff between CPU usage and responsiveness can be achieved.
However, this still seems suboptimal, and inelegant to me. This seems like it would be a common problem to solve. Is there a more elegant way to solve it? Is there an approach which can address all four of those requirements?
This depends on the specifics of the resources the thread is accessing, but basically, to do it efficiently with minimal latency, the resources need to provide an API for doing an interruptible blocking wait.
On POSIX systems, you can use the select(2) or poll(2) system calls to do that, if the resources you're using are files or file descriptors (including sockets). To allow the wait to be preempted, you also create a dummy pipe which you can write to.
For example, here's how you might wait for a file descriptor or socket to become ready or for the code to be interrupted:
// Dummy pipe used for sending interrupt message
int interrupt_pipe[2];
int should_exit = 0;

void class::run()
{
    // Set up the interrupt pipe
    if (pipe(interrupt_pipe) != 0)
        ; // Handle error

    int fd = ...; // File descriptor or socket etc.

    while (!should_exit)
    {
        // Set up a file descriptor set with fd and the read end of the dummy
        // pipe in it
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        FD_SET(interrupt_pipe[0], &fds);
        int maxfd = max(fd, interrupt_pipe[0]);

        // Wait until one of the file descriptors is ready to be read
        int num_ready = select(maxfd + 1, &fds, NULL, NULL, NULL);
        if (num_ready == -1)
            ; // Handle error

        if (FD_ISSET(fd, &fds))
        {
            // fd can now be read/recv'ed from without blocking
            read(fd, ...);
        }
    }
}

void class::interrupt()
{
    should_exit = 1;

    // Send a dummy message to the write end of the pipe to wake up the
    // select() call
    char msg = 0;
    write(interrupt_pipe[1], &msg, 1);
}

class::~class()
{
    // Clean up pipe etc.
    close(interrupt_pipe[0]);
    close(interrupt_pipe[1]);
}
If you're on Windows, the select() function still works for sockets, but only for sockets, so you should instead use WaitForMultipleObjects to wait on a resource handle and an event handle. For example:
// Event used for sending interrupt message
HANDLE interrupt_event;
int should_exit = 0;

void class::run()
{
    // Set up the interrupt event as an auto-reset event
    interrupt_event = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (interrupt_event == NULL)
        ; // Handle error

    HANDLE resource = ...; // File or resource handle etc.

    while (!should_exit)
    {
        // Wait until one of the handles becomes signaled
        HANDLE handles[2] = {resource, interrupt_event};
        DWORD which_ready = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        if (which_ready == WAIT_FAILED)
            ; // Handle error
        else if (which_ready == WAIT_OBJECT_0)
        {
            // resource can now be read from without blocking
            ReadFile(resource, ...);
        }
    }
}

void class::interrupt()
{
    // Signal the event to wake up the waiting thread
    should_exit = 1;
    SetEvent(interrupt_event);
}

class::~class()
{
    // Clean up event etc.
    CloseHandle(interrupt_event);
}
You get an efficient solution if your obtain_resource() function supports a timeout value:
while(!exit_flag)
{
    obtain_resource_with_timeout(a_short_while);
    if (resource_ready)
        use_resource();
}
This effectively combines the sleep() with the obtain_resource() call.
Check out the manpage for nanosleep:
If the nanosleep() function returns because it has been interrupted by a signal, the function returns a value of -1 and sets errno to indicate the interruption.
In other words, you can interrupt sleeping threads by sending a signal (the sleep manpage says something similar). This means you can use your 2nd approach, and use an interrupt to immediately wake the thread if it's sleeping.
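A rough sketch of that idea (the SIGUSR1 choice and the 50 ms interval are arbitrary, and in a multithreaded program you may need pthread_kill to direct the signal at the sleeping thread):
#include <cerrno>
#include <csignal>
#include <cstdio>
#include <ctime>

static volatile sig_atomic_t wake_up = 0;
static void on_signal(int) { wake_up = 1; }

int main()
{
    // No SA_RESTART, so nanosleep is allowed to fail with EINTR.
    struct sigaction sa = {};
    sa.sa_handler = on_signal;
    sigaction(SIGUSR1, &sa, nullptr);

    for (;;) {
        struct timespec interval = {0, 50 * 1000 * 1000};   // a_short_while = 50 ms
        if (nanosleep(&interval, nullptr) == -1 && errno == EINTR) {
            // A signal interrupted the sleep: react immediately instead of
            // waiting out the rest of the interval.
            std::printf("woken early by a signal\n");
        }
        if (wake_up)
            break;       // in the real loop: check resource_ready / exit_flag here
    }
}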
Use the Gang of Four Observer Pattern:
http://home.comcast.net/~codewrangler/tech_info/patterns_code.html#Observer
Callback, don't block.
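A bare-bones sketch of that idea (the interface and class names are invented for illustration):
#include <iostream>
#include <vector>

// Observer interface: implementers get called back when the resource changes.
class IResourceObserver {
public:
    virtual ~IResourceObserver() = default;
    virtual void onResourceReady() = 0;
};

// Subject: owns the resource and notifies observers instead of making them poll.
class Resource {
public:
    void attach(IResourceObserver *obs) { observers_.push_back(obs); }
    void dataArrived() {                       // called by whoever produces the data
        for (IResourceObserver *obs : observers_)
            obs->onResourceReady();            // callback, don't block
    }
private:
    std::vector<IResourceObserver *> observers_;
};

class Printer : public IResourceObserver {
public:
    void onResourceReady() override { std::cout << "resource updated\n"; }
};

int main()
{
    Resource res;
    Printer printer;
    res.attach(&printer);
    res.dataArrived();     // triggers the callback
}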
Self-Pipe trick can be used here.
http://cr.yp.to/docs/selfpipe.html
Assuming that you are reading the data from a file descriptor:
Create a pipe and select() for readability on the pipe's read end as well as on the resource you are interested in.
Then, when data arrives on the resource, the thread wakes up and does the processing; otherwise it sleeps.
To terminate the thread, send it a signal and, in the signal handler, write something on the pipe (say, something which will never come from the resource you are interested in, such as a NUL byte, just to illustrate the point). The select call returns, and the thread, on reading the input, knows that it got the poison pill, so it is time to exit and call pthread_exit().
EDIT: A better way is simply to notice that data arrived on the pipe and exit, rather than checking the value that came on the pipe.
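A condensed sketch of the self-pipe part (the SIGTERM choice and the names are illustrative; note that the handler calls only write(), which is async-signal-safe):
#include <csignal>
#include <unistd.h>

static int self_pipe[2];   // [0] = read end watched by select(), [1] = write end

// Signal handler: writing one byte is all it does.
static void terminate_handler(int)
{
    char byte = 0;
    write(self_pipe[1], &byte, 1);
}

void setup()
{
    if (pipe(self_pipe) != 0)
        return;                          // handle error
    struct sigaction sa = {};
    sa.sa_handler = terminate_handler;
    sigaction(SIGTERM, &sa, nullptr);
    // The worker thread then select()s on both the resource fd and
    // self_pipe[0]; readability of self_pipe[0] means "time to exit".
}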
The Win32 API uses more or less this approach:
someThreadLoop( ... )
{
    MSG msg;
    int retVal;

    while( (retVal = ::GetMessage( &msg, TaskContext::winHandle_, 0, 0 )) > 0 )
    {
        ::TranslateMessage( &msg );
        ::DispatchMessage( &msg );
    }
}
GetMessage itself blocks until any type of message is received, so it consumes no CPU while waiting (see the docs). If WM_QUIT is received, it returns FALSE, exiting the thread function gracefully. This is a variant of the producer/consumer mentioned elsewhere.
You can use any variant of a producer/consumer, and the pattern is often similar. One could argue that one would want to split the responsibility concerning quitting and obtaining of a resource, but OTOH quitting could depend on obtaining a resource too (or could be regarded as one of the resources - but a special one). I would at least abstract the producer consumer pattern and have various implementations thereof.
Therefore:
AbstractConsumer:
void AbstractConsumer::threadHandler()
{
    do
    {
        try
        {
            process( dequeNextCommand() );
        }
        catch( const base_except& ex )
        {
            log( ex );
            if( ex.isCritical() ){ throw; }
            //else we don't want loop to exit...
        }
        catch( const std::exception& ex )
        {
            log( ex );
            throw;
        }
    }
    while( !terminated() );
}
virtual void /*AbstractConsumer::*/process( std::unique_ptr<Command>&& command ) = 0;
//Note:
// Either may or may not block until resource arrives, but typically blocks on
// a queue that is signalled as soon as a resource is available.
virtual std::unique_ptr<Command> /*AbstractConsumer::*/dequeNextCommand() = 0;
virtual bool /*AbstractConsumer::*/terminated() const = 0;
I usually encapsulate a command that executes a function in the context of the consumer, but the pattern in the consumer is always the same.
Any (well, at least most) of the approaches mentioned above do the following: a thread is created, then it blocks waiting for the resource, then it is deleted.
If you're worried about efficiency, this is not the best approach when waiting for IO. On Windows, at least, you'll allocate around 1 MB of user-mode memory, plus some in the kernel, for just one additional thread. What if you have many such resources? Having many waiting threads also increases context switches and slows down your program. What if a resource takes longer to become available and many requests are made? You may end up with tons of waiting threads.
Now, the solution (again on Windows, but I'm sure there is something similar on other OSes) is to use a thread pool (the one provided by Windows). It will not only create a limited number of threads, it can also detect when a thread is waiting for IO, steal that thread and reuse it for other operations while the wait is in progress.
See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686766(v=vs.85).aspx
Also, for more fine-grained control while still being able to give up the thread when waiting for IO, see I/O completion ports (I think they use the thread pool internally anyway): http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx
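As a minimal sketch of handing work to the Windows thread pool (the callback body is a stand-in for whatever the resource handling would be):
#include <windows.h>
#include <stdio.h>

// Work callback: runs on a thread owned by the default process thread pool.
VOID CALLBACK DoWork(PTP_CALLBACK_INSTANCE instance, PVOID context, PTP_WORK work)
{
    (void)instance; (void)work;
    printf("processing item %d on a pool thread\n", *(int *)context);
}

int main()
{
    int item = 42;

    // Create a work object bound to the default thread pool.
    PTP_WORK work = CreateThreadpoolWork(DoWork, &item, NULL);
    if (!work)
        return 1;

    SubmitThreadpoolWork(work);                    // queue it; the pool picks a thread
    WaitForThreadpoolWorkCallbacks(work, FALSE);   // wait for the callback to finish
    CloseThreadpoolWork(work);
    return 0;
}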