Are POSIX' read() and write() system calls atomic? - concurrency

I am trying to implement a database index based on the data structure (the B-link tree) and the algorithms suggested by Lehman and Yao in this paper. On page 2, the authors state that:
The disk is partitioned in sections of fixed size (physical pages; in this paper, these correspond to the nodes of the tree). These are the only units that can be read or written by a process. [emphasis mine] (...)
(...) a process is allowed to lock and unlock a disk page. This lock gives that process exclusive modification rights to that page; also, a process must have a page locked in order to modify that page. (...) Locks do not prevent other processes from reading the locked page. [emphasis mine]
I am not completely sure my interpretation is correct (I am not used to reading academic papers), but I think it can be concluded from the emphasized sentences that the authors mean the operations that read and write a page are assumed to be "atomic", in the sense that, if a process A has already begun reading (resp. writing) a page, another process B may not begin writing (resp. reading) that same page until A is done performing its read (resp. write) operation. Multiple processes simultaneously reading the same page is, of course, a legitimate condition, as is having multiple processes simultaneously performing arbitrary operations on exclusively different pages (process A on page P, process B on page Q, process C on page R, etc.).
Is my interpretation correct?
Can I assume POSIX' read() and write() system calls are "atomic" in the sense described above? Can I rely on these system calls having some internal logic to determine whether a specific read() or write() call should be temporarily blocked, based on the position of the file descriptor and the specified size of the chunk to be read or written?
If the answer to the above questions is "No", how should I roll my own locking mechanism?

I don't believe the text you cite implies anything of the sort. It doesn't even mention read() or write() or POSIX. In fact, read() and write() cannot be relied on to be atomic. The only thing POSIX says is that write() must be atomic if the size of the write is less than PIPE_BUF bytes, and even that only applies to pipes.
I didn't read the context around the part of the paper you cited, but it sounds like that passage is stating constraints which must be placed on an implementation in order for the algorithm to work correctly. In other words, it states that an implementation of this algorithm requires locking.
How you do that locking is up to you (the implementor). If we are dealing with a regular file and multiple independent processes, you might try fcntl(F_SETLKW)-style locking. If your data structure is in memory and you are dealing with multiple threads in the same process, something else might be appropriate.
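For the regular-file, multi-process case, here is a minimal sketch of fcntl(F_SETLKW) record locking on fixed-size pages; the DB_PAGE_SIZE constant, helper names, and omitted error handling are illustrative assumptions, not anything prescribed by the paper:

/* Lock one fixed-size "page" of a file with a POSIX advisory record lock. */
#include <fcntl.h>
#include <unistd.h>

#define DB_PAGE_SIZE 4096   /* illustrative page size */

/* Block until the given page is locked; type is F_RDLCK (shared read)
 * or F_WRLCK (exclusive write). Returns 0 on success, -1 on error. */
static int lock_page(int fd, long pageno, short type)
{
    struct flock fl = {
        .l_type   = type,
        .l_whence = SEEK_SET,
        .l_start  = (off_t)pageno * DB_PAGE_SIZE,
        .l_len    = DB_PAGE_SIZE,
    };
    return fcntl(fd, F_SETLKW, &fl);   /* W = wait until the lock is granted */
}

static int unlock_page(int fd, long pageno)
{
    struct flock fl = {
        .l_type   = F_UNLCK,
        .l_whence = SEEK_SET,
        .l_start  = (off_t)pageno * DB_PAGE_SIZE,
        .l_len    = DB_PAGE_SIZE,
    };
    return fcntl(fd, F_SETLK, &fl);
}

A writer would take an F_WRLCK on a page before modifying it; whether readers also take F_RDLCK, or rely on page reads being atomic as the paper assumes, is a design choice for the implementor. Note these locks are advisory: every process touching the file has to follow the same protocol.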

Answers:
Reads concurrent with writes may see torn writes, depending on the OS, the file system, and the flags you opened the file with. A quick summary by flags, OS and file system is below.
You can lock byte ranges in a file before accessing them using fcntl() on POSIX or LockFile() on Windows.
No O_DIRECT/FILE_FLAG_NO_BUFFERING:
Microsoft Windows 10 with NTFS: update atomicity = 1 byte
Linux 4.2.6 with ext4: update atomicity = 1 byte
FreeBSD 10.2 with ZFS: update atomicity = at least 1Mb, probably infinite (*)
O_DIRECT/FILE_FLAG_NO_BUFFERING:
Microsoft Windows 10 with NTFS: update atomicity = up to 4096 bytes only if page aligned, otherwise 512 bytes if FILE_FLAG_WRITE_THROUGH off, else 64 bytes. Note that this atomicity is probably a feature of PCIe DMA rather than designed in (*).
Linux 4.2.6 with ext4: update atomicity = at least 1Mb, probably infinite (*). Note that earlier Linuxes with ext4 definitely did not exceed 4096 bytes, XFS certainly used to have custom locking but it looks like recent Linux has finally fixed this.
FreeBSD 10.2 with ZFS: update atomicity = at least 1Mb, probably infinite (*)
You can see the raw empirical test results at https://github.com/BoostGSoC13/boost.afio/blob/master/fs_probe/fs_probe_results.yaml. The results were generated by a program written using asynchronous file i/o on all platforms. Note we test for torn offsets only on 512 byte multiples, so I cannot say if a partial update of a 512 byte sector would tear during the read-modify-write cycle.

Related

Intel TSX hardware transactional memory: what do non-transactional threads see?

Suppose you have two threads, one creates a TSX transaction, and modifies some data structure. The other thread does no synchronization of any kind and reads the same data structure. Is the transaction atomic to it? I can't actually imagine that it can be true, since there is no way afaik to block or restart it if it tries reading a cache line modified by the transaction.
If the transaction is not atomic, then are the write ordering rules on x86 still respected? If it sees write #2, then it is guaranteed that it must be able to see the previous write #1. Does this still hold for writes that happen as part of a transaction?
I could not find answers to these questions anywhere, and I kind of doubt anyone on SO would know either, but at least when somebody finds out this is a Google friendly place to put an answer.
(My answer is based on Intel® 64 and IA-32 Architectures Optimization Reference Manual, Chapter 12)
The transaction is atomic to the read, in that the read will cause the transaction to abort, and thus appear that it never took place. In the transactional region, the cache lines that are read (tracked in the L1) form the read-set and the lines written to form the write-set. If another processor reads from the write-set (which is your example) or writes to either the read- or write-set, then there is a data conflict.
Data conflicts are detected through the cache coherence protocol. Data conflicts cause transactional aborts. In the initial implementation, the thread that detects the data conflict will transactionally abort.
Thus the thread attempting the transaction is tracking the line and will detect the conflict when the other thread makes its read request. It aborts and "the hardware will restart at the instruction address provided by the operation of the XBEGIN instruction". In this chapter, there are no distinctions as to what the second processor is doing. It does not matter whether it is attempting a transaction or performing a simple read.
To summarize, all threads (whether transactional or not) see either the full transaction or nothing. Only the thread in a TSX transaction can see the intermediate state of memory.
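Purely as an illustration (this pattern is not in the question or the cited chapter), here is a hedged sketch of the common RTM usage, a transaction with a fallback spinlock, using the <immintrin.h> intrinsics; the counter and fallback_lock names are invented, and the code assumes a compiler flag such as -mrtm:

#include <immintrin.h>
#include <stdatomic.h>

static long counter;
static atomic_int fallback_lock;   /* 0 = free, 1 = held */

void increment(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Subscribe to the fallback lock: if another thread holds it (or
         * takes it later), this transaction aborts instead of racing. */
        if (atomic_load_explicit(&fallback_lock, memory_order_relaxed) != 0)
            _xabort(0xff);
        counter++;                 /* speculative write, invisible until _xend() */
        _xend();
    } else {
        /* Abort path: take the ordinary lock and do the update normally. */
        while (atomic_exchange_explicit(&fallback_lock, 1, memory_order_acquire))
            ;
        counter++;
        atomic_store_explicit(&fallback_lock, 0, memory_order_release);
    }
}

The point relevant to the question is in the comment: a non-transactional thread that merely reads counter while the transaction is in flight touches the write-set and forces an abort, so it never observes the speculative value.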

Why do developers lock word-length data during reads/writes in multithreaded code?

Specifically in unmanaged languages (e.g. C++, C), my understanding is that reads/writes of word-length data are atomic. If this is the case then why do people still lock (via mutex) word-length data during reads/writes in a multithreaded environment?
Reads and writes may* be individually atomic, but read-modify-write sequences are not.
*That depends very much on the architecture and how you use it.
It depends on what you mean by "atomic". There is not any guarantee in C++ that a read or a write to a variable actually ends up in global memory where other threads can see it.
An Intel x86 (or compatible) processor will do word-sized reads and writes atomically as long as the data is properly aligned (specifically, so the entire word is in the same cache line).
Two obvious problems with that though:
Incorrect alignment can break it
It's not portable -- a different CPU could break it as well
Less obviously, atomic operations can force a memory fence so operations happen in the correct order. For example, if I'm writing some data, then writing a status variable to tell another process that the data is now valid, it's not enough that each of those writes is atomic -- it's crucial that the "valid" status only be set after the data itself has actually been written. Without some sort of memory fence operation, the processor is free to rearrange writes so the status could be written before the data.
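A minimal C11 sketch of that data-then-flag pattern (the payload and valid names are made up for the example); the release store cannot be reordered before the data write, and the acquire load cannot be reordered after the data read:

#include <stdatomic.h>
#include <stdbool.h>

static int payload;                /* plain data, written before the flag */
static atomic_bool valid;          /* zero-initialized: "not ready" */

void producer(void)
{
    payload = 42;                                          /* ordinary write */
    atomic_store_explicit(&valid, true, memory_order_release);
}

bool consumer(int *out)
{
    if (atomic_load_explicit(&valid, memory_order_acquire)) {
        *out = payload;            /* guaranteed to see the value written above */
        return true;
    }
    return false;
}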
Because an assignment (what I assume you mean by read/write of data) in a language like C or C++ may still be multiple assembly instructions, and the thread can be preempted at any of them.

Thread-safety of sub-word-size flags

Consider the following scenario:
There exists a globally accessible variable F.
Thread A repeatedly assigns a random value to F (without any regard to the previous value of F).
Thread B does the same thing as thread A (independent of A).
Thread C repeatedly reads the value of F (and say, prints it).
This is in C++ (Visual C++) on Windows on x64 architecture, multi-processor. F is of type bool and marked volatile, and none of the accesses are protected by any locks.
Question: Is there anything thread-unsafe about this scenario?
Assuming that the logical behavior of the code is valid, is there anything unsafe about the fact that multiple threads are reading and writing the values to the same location at the same time?
What guarantees can be made (across architectures, OSes, compilers) about the atomicity of reading from and writing to variables that are <= word-size on the platform? (I am assuming word-size is important...)
On a related note, what is an acceptable way of communicating between threads the state of completion of some operation (none of the threads are waiting for the operation to complete, they may just be interested in querying the state from time to time)?
It depends on your thread-safety requirements. You'll always get a consistent value (that is, it is impossible that you get half the value of thread A's write and the other half from thread B), but there is no guarantee that the value you'll read is, in fact, the latest one that was logically written.
The problem here is the CPU cache, which may or may not get flushed. When a thread writes to memory, the value first goes to the cache, and eventually it gets written to memory. In the meantime, if other cores attempt to read the object from memory, they'll get the old value.
Under x86, any read or write to a type correctly aligned for its size is considered to be atomic (so for a bool it only needs a 1-byte alignment boundary), but it's recommended to use explicit atomic ops for portability and the memory barriers they provide. An excerpt from the Intel System Programming Guide, Vol 3A, Section 8, May 2011 (there is another one as well, which I can't find at the moment):
The Intel486 processor (and newer processors since) guarantees that the following basic memory operations will always be carried out atomically:
• Reading or writing a byte
• Reading or writing a word aligned on a 16-bit boundary
• Reading or writing a doubleword aligned on a 32-bit boundary
The Pentium processor (and newer processors since) guarantees that the following additional memory operations will always be carried out atomically:
• Reading or writing a quadword aligned on a 64-bit boundary
• 16-bit accesses to uncached memory locations that fit within a 32-bit data bus
The P6 family processors (and newer processors since) guarantee that the following additional memory operation will always be carried out atomically:
• Unaligned 16-, 32-, and 64-bit accesses to cached memory that fit within a cache line
Microsoft also has a few examples of using volatile bools for signaling thread exits; however, if you want to signal threads that are waiting, it's best to use kernel constructs. On Windows this would be an event (see CreateEventA/W), which will prevent the wait from burning CPU cycles when the variable hasn't been set yet.
Update:
For threads that will have almost zero wait time, it's a good idea to implement a user-level lock, with optional backoff if it's a high-contention environment; Intel has a good article on that here. Alternatively you can use WinAPI's CriticalSections (these are semi-kernel level).
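A hedged sketch of such a user-level lock with exponential backoff, written against C11 atomics rather than any specific Intel or WinAPI primitive; the backoff bounds and the crude delay loop are placeholders (real code would use a pause/yield instruction):

#include <stdatomic.h>
#include <sched.h>

typedef struct { atomic_int locked; } backoff_lock;   /* zero-init = unlocked */

static void backoff_lock_acquire(backoff_lock *l)
{
    int delay = 1;
    for (;;) {
        /* Test, then test-and-set: only attempt the exchange when it looks free. */
        if (atomic_load_explicit(&l->locked, memory_order_relaxed) == 0 &&
            atomic_exchange_explicit(&l->locked, 1, memory_order_acquire) == 0)
            return;
        for (volatile int i = 0; i < delay; i++)
            ;                                    /* crude spin delay */
        if (delay < 1024)
            delay *= 2;                          /* exponential backoff */
        else
            sched_yield();                       /* give up the CPU under heavy contention */
    }
}

static void backoff_lock_release(backoff_lock *l)
{
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}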

Is it safe to parse a /proc/ file?

I want to parse /proc/net/tcp/, but is it safe?
How should I open and read files from /proc/ and not be afraid that some other process (or the OS itself) will be changing them at the same time?
In general, no. (So most of the answers here are wrong.) It might be safe, depending on what property you want. But it's easy to end up with bugs in your code if you assume too much about the consistency of a file in /proc. For example, see this bug which came from assuming that /proc/mounts was a consistent snapshot.
For example:
/proc/uptime is totally atomic, as someone mentioned in another answer -- but only since Linux 2.6.30, which is less than two years old. So even this tiny, trivial file was subject to a race condition until then, and still is in most enterprise kernels. See fs/proc/uptime.c for the current source, or the commit that made it atomic. On a pre-2.6.30 kernel, you can open the file, read a bit of it, then if you later come back and read again, the piece you get will be inconsistent with the first piece. (I just demonstrated this -- try it yourself for fun.)
/proc/mounts is atomic within a single read system call. So if you read the whole file all at once, you get a single consistent snapshot of the mount points on the system. However, if you use several read system calls -- and if the file is big, this is exactly what will happen if you use normal I/O libraries and don't pay special attention to this issue -- you will be subject to a race condition. Not only will you not get a consistent snapshot, but mount points which were present before you started and never stopped being present might go missing in what you see. To see that it's atomic for one read(), look at m_start() in fs/namespace.c and see it grab a semaphore that guards the list of mountpoints, which it keeps until m_stop(), which is called when the read() is done. To see what can go wrong, see this bug from last year (same one I linked above) in otherwise high-quality software that blithely read /proc/mounts.
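For illustration, a sketch of slurping /proc/mounts through a single read() call, assuming a 64 KiB buffer is big enough to hold the whole file; a robust reader would retry with a larger buffer when the read fills it completely:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64 * 1024];
    int fd = open("/proc/mounts", O_RDONLY);
    if (fd < 0)
        return 1;
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* one read(), one consistent snapshot */
    close(fd);
    if (n < 0)
        return 1;
    buf[n] = '\0';
    fputs(buf, stdout);
    return 0;
}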
/proc/net/tcp, which is the one you're actually asking about, is even less consistent than that. It's atomic only within each row of the table. To see this, look at listening_get_next() in net/ipv4/tcp_ipv4.c and established_get_next() just below in the same file, and see the locks they take out on each entry in turn. I don't have repro code handy to demonstrate the lack of consistency from row to row, but there are no locks there (or anything else) that would make it consistent. Which makes sense if you think about it -- networking is often a super-busy part of the system, so it's not worth the overhead to present a consistent view in this diagnostic tool.
The other piece that keeps /proc/net/tcp atomic within each row is the buffering in seq_read(), which you can read in fs/seq_file.c. This ensures that once you read() part of one row, the text of the whole row is kept in a buffer so that the next read() will get the rest of that row before starting a new one. The same mechanism is used in /proc/mounts to keep each row atomic even if you do multiple read() calls, and it's also the mechanism that /proc/uptime in newer kernels uses to stay atomic. That mechanism does not buffer the whole file, because the kernel is cautious about memory use.
Most files in /proc will be at least as consistent as /proc/net/tcp, with each row a consistent picture of one entry in whatever information they're providing, because most of them use the same seq_file abstraction. As the /proc/uptime example illustrates, though, some files were still being migrated to use seq_file as recently as 2009; I bet there are still some that use older mechanisms and don't have even that level of atomicity. These caveats are rarely documented. For a given file, your only guarantee is to read the source.
In the case of /proc/net/tcp, you can read it and parse each line without fear. But if you try to draw any conclusions from multiple lines at once -- beware, other processes and the kernel are changing it while you read it, and you are probably creating a bug.
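A small sketch of that line-by-line approach for /proc/net/tcp; the sscanf format follows the sample output quoted later in this thread and is only a starting point, not a complete parser:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/tcp", "r");
    if (!f)
        return 1;

    char line[512];
    if (!fgets(line, sizeof line, f)) {          /* skip the header row */
        fclose(f);
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        unsigned local_ip, local_port, rem_ip, rem_port, state;
        /* Each row is self-consistent; no conclusions are drawn across rows. */
        if (sscanf(line, " %*d: %8X:%4X %8X:%4X %2X",
                   &local_ip, &local_port, &rem_ip, &rem_port, &state) == 5)
            printf("local %08X:%04X remote %08X:%04X state %02X\n",
                   local_ip, local_port, rem_ip, rem_port, state);
    }
    fclose(f);
    return 0;
}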
Although the files in /proc appear as regular files in userspace, they are not really files but rather entities that support the standard file operations from userspace (open, read, close). Note that this is quite different from having an ordinary file on disk that is being changed by the kernel.
All the kernel does is print its internal state into its own memory using a sprintf-like function, and that memory is copied into userspace whenever you issue a read(2) system call.
The kernel handles these calls in an entirely different way than for regular files, which could mean that the entire snapshot of the data you will read is prepared at the time you open(2) it, with the kernel making sure that concurrent calls are consistent and atomic. I haven't read that anywhere, but it doesn't really make sense to be otherwise.
My advice is to take a look at the implementation of a proc file in your particular Unix flavour. This is really an implementation issue (as is the format and the contents of the output) that is not governed by a standard.
The simplest example would be the implementation of the uptime proc file in Linux. Note how the entire buffer is produced in the callback function supplied to single_open.
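For reference, a hedged sketch of what such a single_open-based proc file looks like in the old file_operations style used by kernels of that era; the entry name and the printed value are invented, and registration via proc_create() is omitted:

#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int example_proc_show(struct seq_file *m, void *v)
{
        /* The entire buffer is produced here in one go, so one read() of the
         * file returns a consistent snapshot. */
        seq_printf(m, "value: %d\n", 42);
        return 0;
}

static int example_proc_open(struct inode *inode, struct file *file)
{
        return single_open(file, example_proc_show, NULL);
}

static const struct file_operations example_proc_fops = {
        .open    = example_proc_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
};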
/proc is a virtual file system: in fact, it just gives a convenient view of the kernel internals. It's definitely safe to read it (that's why it's there), but it's risky in the long term, as the internals of these virtual files may evolve with newer versions of the kernel.
EDIT
More information is available in the proc documentation in the Linux kernel docs, chapter 1.4 Networking.
I can't find how the information evolves over time. I thought it was frozen on open, but I can't give a definite answer.
EDIT2
According to the SCO documentation (not Linux, but I'm pretty sure all flavours of *nix behave like that):
Although process state and consequently the contents of /proc files can change from instant to instant, a single read(2) of a /proc file is guaranteed to return a "sane" representation of state, that is, the read will be an atomic snapshot of the state of the process. No such guarantee applies to successive reads applied to a /proc file for a running process. In addition, atomicity is specifically not guaranteed for any I/O applied to the as (address-space) file; the contents of any process's address space might be concurrently modified by an LWP of that process or any other process in the system.
The procfs API in the Linux kernel provides an interface to make sure that reads return consistent data. Read the comments in __proc_file_read. Item 1) in the big comment block explains this interface.
That being said, it is of course up to the implementation of a specific proc file to use this interface correctly to make sure its returned data is consistent. So, to answer your question: no, the kernel does not guarantee consistency of the proc files during a read but it provides the means for the implementations of those files to provide consistency.
I have the source for Linux 2.6.27.8 handy since I'm doing driver development at the moment on an embedded ARM target.
The file ...linux-2.6.27.8-lpc32xx/net/ipv4/raw.c at line 934 contains, for example
seq_printf(seq, "%4d: %08X:%04X %08X:%04X"
" %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d\n",
i, src, srcp, dest, destp, sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
0, 0L, 0, sock_i_uid(sp), 0, sock_i_ino(sp),
atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
which outputs
[wally@zenetfedora ~]$ cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 15160 1 f552de00 299
1: 00000000:C775 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 13237 1 f552ca00 299
...
in function raw_sock_seq_show() which is part of a hierarchy of procfs handling functions. The text is not generated until a read() request is made of the /proc/net/tcp file, a reasonable mechanism since procfs reads are surely much less common than updating the information.
Some drivers (such as mine) implement the proc_read function with a single sprintf(). The extra complication in the core drivers' implementation is handling potentially very long output which may not fit in the intermediate, kernel-space buffer during a single read.
I tested that with a program using a 64 KB read buffer, but on my system proc_read returns data through a kernel-space buffer of 3072 bytes. Multiple calls with advancing pointers are needed to get more than that much text returned. I don't know what the right way is to make the returned data consistent when more than one I/O is needed. Certainly each entry in /proc/net/tcp is self-consistent. There is some likelihood that adjacent lines are snapshots taken at different times.
Short of unknown bugs, there are no race conditions in /proc that would lead to reading corrupted data or a mix of old and new data. In this sense, it's safe. However there's still the race condition that much of the data you read from /proc is potentially outdated as soon as it's generated, and even more so by the time you get to reading and processing it. For instance, processes can die at any time and a new process can be assigned the same pid; the only process ids you can ever use without race conditions are those of your own child processes. The same goes for network information (open ports, etc.) and really most of the information in /proc. I would consider it bad and dangerous practice to rely on any data in /proc being accurate, except data about your own process and potentially its child processes. Of course it may still be useful to present other information from /proc to the user/admin for informative/logging/etc. purposes.
When you read from a /proc file, the kernel is calling a function which has been registered in advance to be the "read" function for that proc file. See the __proc_file_read function in fs/proc/generic.c .
Therefore, the safety of the proc read is only as safe as the function the kernel calls to satisfy the read request. If that function properly locks all data it touches and returns to you in a buffer, then it is completely safe to read using that function. Since proc files like the one used for satisfying read requests to /proc/net/tcp have been around for a while and have undergone scrupulous review, they are about as safe as you could ask for. In fact, many common Linux utilities rely on reading from the proc filesystem and formatting the output in a different way. (Off the top of my head, I think 'ps' and 'netstat' do this).
As always, you don't have to take my word for it; you can look at the source to calm your fears. The following documentation from proc_net_tcp.txt tells you where the "read" functions for /proc/net/tcp live, so you can look at the actual code that is run when you read from that proc file and verify for yourself that there are no locking hazards.
This document describes the interfaces /proc/net/tcp and /proc/net/tcp6. Note that these interfaces are deprecated in favor of tcp_diag. These /proc interfaces provide information about currently active TCP connections, and are implemented by tcp4_seq_show() in net/ipv4/tcp_ipv4.c and tcp6_seq_show() in net/ipv6/tcp_ipv6.c, respectively.

Is it safe to read an integer variable that's being concurrently modified without locking?

Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though.
This question assumes that a single transaction only consists of updating a single integer variable so I'm not worried about the states of any other variables that might also be involved in a transaction.
atomic read
As said before, it's platform dependent. On x86, the value must be aligned on a 4 byte boundary. Generally for most platforms, the read must execute in a single CPU instruction.
optimizer caching
The optimizer doesn't know you are reading a value modified by a different thread. Declaring the value volatile helps with that: the optimizer will issue a memory read/write for every access, instead of trying to keep the value cached in a register.
CPU cache
Still, you might read a stale value, since on modern architectures you have multiple cores with individual cache that is not kept in sync automatically. You need a read memory barrier, usually a platform-specific instruction.
On Wintel, thread synchronization functions will automatically add a full memory barrier, or you can use the InterlockedXxxx functions.
MSDN: Memory and Synchronization issues, MemoryBarrier Macro
[edit] please also see drhirsch's comments.
You ask a question about reading a variable and later you talk about updating a variable, which implies a read-modify-write operation.
Assuming you really mean the former, the read is safe if it is an atomic operation. For almost all architectures this is true for integers.
There are a few (and rare) exceptions:
The read is misaligned, for example accessing a 4-byte int at an odd address. Usually you need to force the compiler with special attributes to do some misalignment.
The size of an int is bigger than the natural size of instructions, for example using 16-bit ints on an 8-bit architecture.
Some architectures have an artificially limited bus width. I only know of very old and outdated ones, like a 386sx or a 68008.
I'd recommend not to rely on any compiler or architecture in this case.
Whenever you have a mix of readers and writers (as opposed to just readers or just writers) you'd better sync them all. Imagine your code running someone's artificial heart: you don't really want it to read wrong values, and you surely don't want a power plant in your city to go 'boooom' because someone decided not to use that mutex. Save yourself some sleepless nights in the long run; sync 'em.
If you only have one thread reading, you're good to go with just that one mutex; however, if you're planning for multiple readers and multiple writers you'd need a sophisticated piece of code to sync that. A nice implementation of a read/write lock that is also 'fair' is yet to be seen by me.
Imagine that you're reading the variable in one thread, that thread gets interrupted while reading and the variable is changed by a writing thread. Now what is the value of the read integer after the reading thread resumes?
Unless reading the variable is an atomic operation (in this case, one taking only a single assembly instruction), you cannot ensure that the above situation will not happen.
(The variable could be written to memory, and retrieving the value would take more than one instruction)
The consensus is that you should encapsulate/lock all writes individually, while reads can be executed concurrently with (only) other reads.
Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though.
In the general case, that is potentially every architecture. Every architecture has cases where reading concurrently with a write will result in garbage.
However, almost every architecture also has exceptions to this rule.
It is common that word-sized variables are read and written atomically, so synchronization is not needed when reading or writing. The proper value will be written atomically as a single operation, and threads will read the current value as a single atomic operation as well, even if another thread is writing. So for integers, you're safe on most architectures. Some will extend this guarantee to a few other sizes as well, but that's obviously hardware-dependent.
For non-word-sized variables both reading and writing will typically be non-atomic, and will have to be synchronized by other means.
If you don't use the previous value of this variable when writing the new one, then:
You can read and write an integer variable without using a mutex. This is because an integer is a native type on a 32-bit architecture and every read/write of its value is done in one operation.
But if you are doing something such as an increment:
myvar++;
Then you need to use a mutex, because this construct expands to myvar = myvar + 1, and between reading myvar and incrementing it, myvar can be modified by another thread. In that case you will get a bad value.
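If the increment itself is all that needs to be atomic, a C11 atomic fetch-and-add (a sketch of one option, not the only one) avoids both the mutex and the lost update:

#include <stdatomic.h>

static atomic_int myvar;

void bump(void)
{
    /* Read-modify-write performed as one indivisible operation. */
    atomic_fetch_add_explicit(&myvar, 1, memory_order_relaxed);
}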
While it would probably be safe to read ints on 32-bit systems without synchronization, I would not risk it. While multiple concurrent reads are not a problem, I do not like writes happening at the same time as reads.
I would recommend placing the reads in the critical section too and then stress-testing your application on multiple cores to see if this is causing too much contention. Finding concurrency bugs is a nightmare I prefer to avoid. What happens if in the future someone decides to change the int to a long long or a double, so they can hold larger numbers?
If you have a nice thread library like boost.thread or zthread then you should have read/writer locks. These would be ideal for your situation as they allow multiple reads while protecting writes.
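POSIX threads already provide this via pthread_rwlock_t, so a sketch of the readers/writer pattern (with made-up names for the shared value and accessors) needs no extra library:

#include <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;

int read_value(void)
{
    pthread_rwlock_rdlock(&rwlock);    /* many readers may hold this concurrently */
    int v = shared_value;
    pthread_rwlock_unlock(&rwlock);
    return v;
}

void write_value(int v)
{
    pthread_rwlock_wrlock(&rwlock);    /* writers get exclusive access */
    shared_value = v;
    pthread_rwlock_unlock(&rwlock);
}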
This (reading a garbled, half-updated value) may happen on 8-bit systems which use 16-bit integers.
If you want to avoid locking you can under suitable circumstances get away with reading multiple times, until you get two equal consecutive values. For example, I've used this approach to read the 64 bit clock on a 32 bit embedded target, where the clock tick was implemented as an interrupt routine. In that case reading three times suffices, because the clock can only tick once in the short time the reading routine runs.
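A sketch of that read-until-stable loop, assuming a 64-bit tick counter that only the timer interrupt updates (the names are illustrative):

#include <stdint.h>

static volatile uint64_t clock_ticks;    /* written only by the timer ISR */

uint64_t read_clock(void)
{
    uint64_t a, b;
    do {
        a = clock_ticks;                 /* two 32-bit loads on a 32-bit target */
        b = clock_ticks;
    } while (a != b);                    /* a torn read shows up as a mismatch */
    return a;
}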
In general, each machine instruction goes through several hardware stages when executing. As most current CPUs are multi-core or hyper-threaded, that means that reading a variable may start it moving through the instruction pipeline, but it doesn't stop another CPU core or hyper-thread from concurrently executing a store instruction to the same address. The two concurrently executing instructions, read and store, might "cross paths", meaning that the read will receive the old value just before the new value is stored.
To summarize: you do need the mutex for both reads and writes.
Both reading / writing to variables with concurrency must be protected by a critical section (not mutex). Unless you want to waste your whole day debugging.
Critical sections are platform-specific, I believe. On Win32, a critical section is very efficient: when no contention occurs, entering the critical section is almost free and does not affect overall performance. When contention does occur, it is still more efficient than a mutex, because it implements a series of checks before suspending the thread.
Depends on your platform. Most modern platforms offer atomic operations for integers: Windows has InterlockedIncrement, InterlockedDecrement, InterlockedCompareExchange, etc. These operations are usually supported by the underlying hardware (read: the CPU) and they are usually cheaper than using a critical section or other synchronization mechanisms.
See MSDN: InterlockedCompareExchange
I believe Linux (and modern Unix variants) support similar operations in the pthreads package but I don't claim to be an expert there.
If a variable is marked with the volatile keyword then, on Microsoft's compiler and for suitably aligned word-sized types, reads/writes behave as ordered atomic accesses, but this has many, many other implications in terms of what the compiler does and how it behaves, and it shouldn't just be used for this purpose.
Read up on what volatile does before you blindly start using it: http://msdn.microsoft.com/en-us/library/12a04hfd(VS.80).aspx