Is reading a double an atomic operation in linux? - c++

I'm creating a simple server which stores several variables globally. On occasion these variables are updated, and during that time they are locked against other threads. Each client that accesses the server is granted its own thread but has no way to change these variables, so they are essentially read-only. My question to the internet is: do I need to worry about either a) two threads reading the same variable at the same time (not changing it), or b) a write to one of these variables interrupting a concurrent read?
I know that in most cases writing a double is not an atomic operation because it usually involves more than one register, but can the reading operation be interrupted?
Thanks

My first guess is that this has nothing to do with Linux as an OS.
It is certainly related to the CPU in use, as some can load/store a double to/from memory in one operation. The x86 series has such an opcode for the FPU.
It may also depend on the compiler, which can make use of these CPU abilities to load/store doubles in one operation. I don't know what gcc does.

[Edit] Apologies, I was out of it when I read this question originally and gave an incorrect answer which jalf kindly pointed out.
I know that in most cases writing a double is not an atomic operation because it usually involves more than one register, but can the reading operation be interrupted?
Yes. Imagine trying to write your own IEEE double-precision floating-point type using two WORD-sized variables. We cannot read both of these atomically, as they are two distinct parts; one could be in the process of being modified at the same time we are trying to read.
do I need to worry about either a) two threads reading the same variable at the same time (not changing it), or b) a write to one of these variables interrupting a concurrent read?
a: no
b: yes
You'll either need to use a synchronization mechanism for the readers (in addition to the writer), or, if you're like me, just make the shared data a WORD-sized single-precision float, whose reads are often atomic on modern systems (though you should verify this), stick to atomic operations to modify it, and avoid the headaches.
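As a concrete illustration of that last suggestion, here is a minimal C++11 sketch (assuming a compiler with <atomic>; the variable names are made up) where a writer publishes a shared floating-point value through std::atomic and readers load it without a mutex, so no torn value can ever be observed. On most 64-bit targets std::atomic<double> is lock-free, which is_lock_free() lets you verify at run time.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<double> g_price{0.0};   // shared, read-mostly value (illustrative name)

    void writer() {
        for (int i = 1; i <= 1000; ++i)
            g_price.store(i * 0.01);    // whole-value atomic store, never torn
    }

    void reader() {
        for (int i = 0; i < 1000; ++i) {
            double p = g_price.load();  // whole-value atomic load
            (void)p;                    // use the snapshot locally
        }
    }

    int main() {
        std::printf("atomic<double> lock-free: %d\n", (int)g_price.is_lock_free());
        std::thread w(writer), r(reader);
        w.join();
        r.join();
    }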

If you are only reading, you do not have to worry about atomicity. If you are reading and writing, you will need to use a mutex lock, on both your "server" and "client" threads. Locking only when writing gets just half the job done.
Now, with a single double you may be in luck, depending on how your compiler works and on your exact hardware architecture. See this answer.

Related

Using Mutex for shared memory of 1 word

I have an application where multiple threads access and write to a shared memory of 1 word (16bit).
Can I expect that the processor reads and writes a word from/to memory in an atomic operation? So I don't need mutex protection of the shared memory/variable?
The target is an embedded device running VxWorks.
EDIT: There's only one CPU and it is an old one (>7 years) - I am not exactly sure about the architecture and model, but I am also more interested in the general way that "most" CPUs will work. If it is a 16-bit CPU, would it then, in most cases, be fair to expect that it will read/write a 16-bit variable in one operation? Or should I always use mutex protection in any case? And let's say that I don't care about portability, and we talk about C++98.
All processors will read and write aligned machine words atomically, in the sense that you won't get half the bits of the old value and half the bits of the new value if the word is read by another processor.
To achieve good speed, modern processors will NOT synchronize read-modify-write operations to a particular location unless you actually ask for it - since nearly all reads and writes go to "non-shared" locations.
So, if the value is, say, a counter of how many times we've encountered a particular condition, or some other "if we read/write an old value, it'll go wrong" situation, then you need to ensure that two processors don't simultaneously update the value. This can typically be done with atomic instructions (or some other form of atomic updates) - this will ensure that one, and only one, processor touches the value at any given time, and that all the other processors DO NOT hold a copy of the value that they think is accurate and up to date when another has just made an update. See the C++11 std::atomic set of functions.
Note the distinction between atomically reading or writing the machine word's value and atomically performing the whole update.
The problem is not the atomicity of the access (which you can usually assume unless you are using an 8-bit MCU), but the missing synchronization, which leads to undefined behavior.
If you want to write portable code, use atomics instead. If you want to achieve maximal performance for your specific platform, read the documentation of your OS and compiler very carefully and see what additional mechanisms or guarantees they provide for multithreaded programs (But I really doubt that you will find anything more efficient than std::atomic that gives you sufficient guarantees).
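To make the distinction drawn above concrete, here is a small hedged sketch with C++11 std::atomic on a 16-bit word (matching the question); the load and the store are each atomic accesses to the word, but only fetch_add makes the whole read-modify-write atomic. This only illustrates the idea and is not code from the original answer.
    #include <atomic>
    #include <cstdint>

    std::atomic<uint16_t> shared_word{0};   // the shared 16-bit word

    void demo() {
        uint16_t v = shared_word.load();    // atomic read of the whole word
        shared_word.store(42);              // atomic write of the whole word

        // NOT an atomic update: the load and the store are each atomic, but
        // another thread may modify shared_word between them (a lost update).
        shared_word.store(static_cast<uint16_t>(shared_word.load() + 1));

        // Atomic read-modify-write: no update can be lost.
        shared_word.fetch_add(1);
        (void)v;
    }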
Can I expect that the processor reads and writes a word from/to memory in an atomic operation?
Yes.
So I don't need mutex protection of the shared memory/variable?
No. Consider:
++i;
Even if the read and write are atomic, two threads doing this at the same time can each read, each increment, and then each write, resulting in only one increment where two are needed.
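A hedged, runnable illustration of that lost-update problem (thread and iteration counts are arbitrary, and the race on the plain int is deliberate, purely to show the failure mode): the plain counter usually comes up short of the expected total, while std::atomic<int>::fetch_add keeps every increment.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    int plain = 0;                     // unsynchronized counter (racy on purpose)
    std::atomic<int> atomic_cnt{0};    // atomic counter

    void work() {
        for (int i = 0; i < 100000; ++i) {
            ++plain;                   // read + increment + write: increments can be lost
            atomic_cnt.fetch_add(1);   // single atomic read-modify-write
        }
    }

    int main() {
        std::thread a(work), b(work);
        a.join();
        b.join();
        // Both should reach 200000; 'plain' will typically fall short.
        std::printf("plain=%d atomic=%d\n", plain, atomic_cnt.load());
    }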
Can I expect that the processor reads and writes a word from/to memory in an atomic operation?
Yes, if the data's properly aligned and no bigger than a machine word, most CPU instructions will operate on it atomically in the sense you describe.
So I don't need mutex protection of the shared memory/variable?
You do need some synchronisation - whether a mutex or using atomic operations ala std::atomic.
The reasons for this include:
if your variable is not volatile (or atomic), the compiler might not even emit read and write instructions for the memory address nominally holding that variable at the places you might expect; it may instead reuse values read or set earlier that are kept in CPU registers or known at compile time (illustrated in the sketch after this list)
if you use a mutex or std::atomic type you do not need to use volatile as well
further, even if the data is written towards memory, it may not leave the CPU caches and be written to actual RAM where other cores and CPUs can see it unless you explicitly use a memory barrier (std::mutex and std::atomic types do that for you)
finally, delays between reading and writing values can cause unexpected results, so operations like ++x can fail as explained by David Schwartz.
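A sketch of the first point in the list above (flag names invented for illustration): a polling loop on a plain bool may be compiled into an infinite loop because the value is kept in a register, whereas std::atomic<bool> forces a fresh load on every check.
    #include <atomic>

    bool plain_stop = false;               // plain flag: the read may be hoisted out of the loop
    std::atomic<bool> atomic_stop{false};  // atomic flag: every check is a real load

    void worker_plain() {
        // With optimization enabled, the compiler may read plain_stop once
        // and spin forever, even after another thread sets it to true.
        while (!plain_stop) { /* do work */ }
    }

    void worker_atomic() {
        // The atomic load cannot be cached away; the loop exits once a writer
        // calls atomic_stop.store(true).
        while (!atomic_stop.load()) { /* do work */ }
    }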

Is it possible to have a race condition for a write-only operation?

If I have several threads trying to write the same value to a single location in memory, is it possible to have a race condition? Can the data somehow get corrupted during the writes? There are no preceding reads or test conditions, only the write...
EDIT: To clarify, I'm computing a dot product on a GPU. I'm using several threads to calculate the individual products (one thread per row/column element) and saving them to a temporary location in memory. I need to then sum those intermediate products and save the result.
I was thinking about having all threads individually perform this sum/store operation since branching on a GPU can hurt performance. (You would think it should take the same amount of time for the sum/store whether it's done by a single thread or all threads, but I've tested this and there is a small performance hit.) All threads will get the same sum, but I'm concerned about a race condition when they each try to write their answer to the same location in memory. In the limited testing I've done, everything seems fine, but I'm still nervous...
Under most threading standards on most platforms, this is simply prohibited or undefined. That is, you are not allowed to do it and if you do, anything can happen.
High-level language compilers like those for C and C++ are free to optimize code based on the assumption that you will not do anything you are not allowed to do. So a "write-only" operation may turn out to be no such thing. If you write i = 1; in C or C++, the compiler is free to generate the same code as if you wrote i = 0; i++;. Similarly confounding optimizations really do occur in the real world.
Instead, follow the rules for whatever threading model you are using and use the appropriate synchronization primitives. If your platform provides them, use appropriate atomic operations.
There is no problem having multiple threads writing a single (presumably shared or global) memory location in CUDA, even "simultaneously" i.e. from the same line of code.
If you care about the order of the writes, then this is a problem, as CUDA makes no guarantee of order, for multiple threads executing the same write operation to the same memory location. If this is an issue, you should use atomics or some other method of refactoring your code to sort it out. (It doesn't sound like this is an issue for you.)
Presumably, as another responder has stated, you care about the result at some point. Therefore it's necessary to have a barrier of some sort before you read that location and expect a sensible result, either explicit (e.g. __syncthreads(), for multiple threads within a block using shared memory) or implicit (e.g. the end of a kernel, for multiple threads writing to a location in global memory). Note these are not the only possible barrier methods that could give you sane results, just two examples. Warp-synchronous behavior or other clever coding techniques could be leveraged to ensure the sanity of a read following a collection of writes.
Although on the surface the answer would seem to be "no, there are no race conditions", the answer is a bit more nuanced. Boris is right that on some 32-bit architectures, storing a 64-bit long or address may take two operations, and the value may therefore be read in an invalid state. This is probably pretty difficult to reproduce, since memory pages are what typically get updated and a long would rarely span a memory page.
However, the more important issue is that you need to realize that without memory synchronization there are no guarantees around when a thread would see the updated value. A thread could run for a long period of time reading an out-of-date value from memory. It wouldn't be an invalid value but it would not be the most recent one written. That may not specifically cause a "race-condition" but it might cause your program to perform in an unexpected manner.
Also, although you say it is "write-only", obviously someone is reading the value otherwise there would be no reason to perform the update. The details of what portion of the code is reading the value will better inform us as to whether the write-only without synchronization is truly safe.
If write-only operations are not atomic, there will obviously be a moment when another thread may observe the data in a corrupted state.
For example, consider writes to a 64-bit integer that is stored as a pair of 32-bit words.
Thread A has just finished writing the high-order word, while thread B has just finished writing the low-order word and is about to set the high-order word;
thread C may then see an integer consisting of the low-order word written by thread B and the high-order word written by thread A.
P.S. This question is very generic; actual results will depend on the memory model of the environment (language) and the underlying processor architecture (hardware).
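That torn-value scenario can be sketched in C++ as below; the writer deliberately updates the two 32-bit halves separately (as a 32-bit target would), and the reader occasionally observes halves from two different updates. This is intentionally racy demonstration code, not something to imitate; real code should use std::atomic<uint64_t> or a lock.
    #include <cstdint>
    #include <cstdio>
    #include <thread>

    // Two halves of one logical 64-bit value; the intended invariant is hi == lo.
    volatile uint32_t g_lo = 0, g_hi = 0;

    int main() {
        std::thread writer([] {
            for (uint32_t i = 1; i <= 100000; ++i) {
                g_lo = i;   // low half written first...
                g_hi = i;   // ...high half written later; a reader can land in between
            }
        });
        std::thread reader([] {
            for (int n = 0; n < 100000; ++n) {
                uint32_t hi = g_hi, lo = g_lo;
                if (hi != lo)   // halves came from two different updates
                    std::printf("torn value observed: hi=%u lo=%u\n", hi, lo);
            }
        });
        writer.join();
        reader.join();
    }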

Is volatile a proper way to make a single byte atomic in C/C++?

I know that volatile does not enforce atomicity on int for example, but does it if you access a single byte? The semantics require that writes and reads are always from memory if I remember correctly.
Or in other words: Do CPUs read and write bytes always atomically?
Not only does the standard not say anything about atomicity, but you are likely even asking the wrong question.
CPUs typically read and write single bytes atomically. The problem comes because when you have multiple cores, not all cores will see the byte as having been written at the same time. In fact, it might be quite some time (in CPU terms, thousands or millions of instructions, i.e. microseconds or maybe milliseconds) before all cores have seen the write.
So, you need the somewhat misnamed C++0x atomic operations. They use CPU instructions that ensure the order of things doesn't get messed up, and that when other cores look at the value you've written after you've written it, they see the new value, not the old one. Their job is not so much atomicity of operations exactly, but making sure the appropriate synchronization steps also happen.
The standard says nothing about atomicity.
The volatile keyword is used to indicate that a variable (int, char, or otherwise) may be given a value from an external, unpredictable source. This usually deters the compiler from optimizing out the variable.
For atomics you will need to check your compiler's documentation to see if it provides any assistance, directives, or pragmas.
On any sane cpu, reading and writing any aligned, word-size-or-smaller type is atomic. This is not the issue. The issues are:
Just because reads and writes are atomic, it does not follow that read/modify/write sequences are atomic. In the C language, x++ is conceptually a read/modify/write cycle. You cannot control whether the compiler generates an atomic increment, and in general, it won't.
Cache synchronization issues. On halfway-crap architectures (pretty much anything non-x86), the hardware is too dumb to ensure that the view of memory each cpu sees reflects the order in which writes took place. For example if cpu 0 writes to addresses A then B, it's possible that cpu 1 sees the update at address B but not the update at address A. You need special memory fence/barrier opcodes to address this issue, and the compiler will not generate them for you.
The second point only matters on SMP/multicore systems, so if you're happy restricting yourself to single-core, you can ignore it, and then plain reads and writes will be atomic in C on any sane cpu architecture. But you can't do much useful with just that. (For instance, the only way you can implement a simple lock this way involves O(n) space, where n is the number of threads.)
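The write-to-A-then-B example above is exactly what C++11 release/acquire ordering addresses; here is a hedged sketch (variable names invented) where the writer publishes data and then sets a flag with a release store, and a reader that sees the flag via an acquire load is guaranteed to also see the data.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;                        // "address A": plain data
    std::atomic<bool> ready{false};      // "address B": publication flag

    void producer() {
        data = 42;                                     // write A
        ready.store(true, std::memory_order_release);  // write B, ordered after A
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        assert(data == 42);   // acquire on B guarantees the write to A is visible
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }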
Short answer : Don't use volatile to guarantee atomicity.
Long answer
One might think that since CPUs handle words in a single instruction, simple word operations are inherently thread safe. The idea of using volatile is to then ensure that the compiler makes no assumptions about the value contained in the shared variable.
On modern multi-processor machines, this assumption is wrong. Given that different processor cores will normally have their own cache, circumstances might arise where reads and writes to main memory are reordered and your code ends up not behaving as expected.
For this reason, always use locks such as mutexes or critical sections when accessing memory shared between threads. They are surprisingly cheap when there is no contention (they normally have no need to make a system call) and they will do the right thing.
Typically they will prevent out-of-order reads and writes by issuing a Data Memory Barrier instruction (DMB on ARM), which guarantees that the reads and writes happen in the right order. Look here for more detail.
The other problem with volatile is that it will prevent the compiler from making optimizations even when it is perfectly OK to do so.

Do I need to use locking with integers in c++ threads

If I am accessing a single integer type (e.g. long, int, bool, etc...) in multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, as atomic types, they don't need locked access, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not. So if the item I'm accessing corresponds to a bus-width integer (e.g. 4 bytes on a 32-bit processor), do I need to lock access to it when it is being used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from the same variable, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture dependent, e.g. OK for 4-byte integers on 32-bit systems but unsafe for 8-byte integers on 64-bit systems?
Edit: Just saw this related post which helps a fair bit.
You are never locking a value - you are locking an operation ON a value.
C & C++ do not explicitly mention threads or atomic operations - so operations that look like they could or should be atomic - are not guaranteed by the language specification to be atomic.
It would admittedly be a pretty deviant compiler that managed a non-atomic read of an int: if you have an operation that only reads a value, there's probably no need to guard it. However, it might be non-atomic if it spans a machine word boundary.
Operations as simple as m_counter++ involve a fetch, an increment, and a store - a race condition: another thread can change the value after the fetch but before the store - and hence need to be protected by a mutex - OR you can use your compiler's support for interlocked operations. MSVC has functions like _InterlockedIncrement() that will safely increment a memory location as long as all other writers similarly use the interlocked APIs to update the memory location - which is orders of magnitude more lightweight than invoking even a critical section.
GCC has intrinsic functions like __sync_add_and_fetch which also can be used to perform interlocked operations on machine word values.
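For reference, a hedged sketch of how those intrinsics are typically used (check the exact headers and signatures against your toolchain; the counter name is illustrative):
    #if defined(_MSC_VER)
        #include <intrin.h>
        volatile long g_counter = 0;
        void bump() { _InterlockedIncrement(&g_counter); }     // atomic increment, MSVC
    #elif defined(__GNUC__)
        long g_counter = 0;
        void bump() { __sync_add_and_fetch(&g_counter, 1); }   // atomic increment, GCC/Clang
    #endif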
If you're on a machine with more than one core, you need to do things properly even though writes of an integer are atomic. The issues are two-fold:
You need to stop the compiler from optimizing out the actual write! (Somewhat important this. ;-))
You need memory barriers (not things modeled in C) to make sure the other cores take notice of the fact that you've changed things. Otherwise you'll be tangled up in caches between all the processors and other dirty details like that.
If it was just the first thing, you'd be OK with marking the variable volatile, but the second is really the killer and you will only really see the difference on a multicore machine. Which happens to be an architecture that is becoming far more common than it used to be… Oops! Time to stop being sloppy; use the correct mutex (or synchronization or whatever) code for your platform and all the details of how to make memory work like you believe it to will go away.
In 99.99% of cases you must lock, even for access to seemingly atomic variables. Since the C++ compiler is not aware of multi-threading at the language level, it can do a lot of non-trivial reorderings.
Case in point: I was bitten by a spin lock implementation where unlock was simply assigning zero to a volatile integer variable. The compiler was reordering the unlock operation before the actual operation under the lock, which, unsurprisingly, led to mysterious crashes. (A corrected sketch follows the links below.)
See:
Lock-Free Code: A False Sense of Security
Threads Cannot be Implemented as a Library
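For comparison, a minimal sketch of a spin lock written with C++11 atomics, which avoids the reordering bug described above because acquire/release ordering keeps the locked region from leaking past the unlock (this is the standard textbook pattern, not code from the linked articles):
    #include <atomic>

    std::atomic_flag g_lock = ATOMIC_FLAG_INIT;

    void lock() {
        // acquire: nothing after this call can be reordered before taking the lock
        while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
    }

    void unlock() {
        // release: nothing before this call can be reordered after releasing the lock
        g_lock.clear(std::memory_order_release);
    }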
There's no support for atomic variables in C++, so you do need locking. Without locking you can only speculate about what exact instructions will be used for data manipulation and whether those instructions will guarantee atomic access - that's not how you develop reliable software.
Yes it would be better to use synchronization. Any data accessed by multiple threads must be synchronized.
If it is a Windows platform you can also check here: Interlocked Variable Access.
Multithreading is hard and complex. The number of hard-to-diagnose problems that can come up is quite large. In particular, on Intel architectures reads and writes of aligned 32-bit integers are guaranteed to be atomic by the processor, but that does not mean that it is safe to rely on this in multithreaded environments.
Without proper guards, the compiler and/or the processor can reorder the instructions in your block of code. It can cache variables in registers and they will not be visible in other threads...
Locking is expensive, and there are different implementations of lock-less data structures to optimize for high performance, but it is hard to do correctly. And the problem is that concurrency bugs are usually obscure and difficult to debug.
Yes. If you are on Windows you can take a look at Interlocked functions/variables and if you are of the Boost persuasion then you can look at their implementation of atomic variables.
If boost is too heavyweight putting "atomic c++" into your favourite search engine will give you plenty of food for thought.

Is it safe to read an integer variable that's being concurrently modified without locking?

Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though.
This question assumes that a single transaction only consists of updating a single integer variable so I'm not worried about the states of any other variables that might also be involved in a transaction.
atomic read
As said before, it's platform dependent. On x86, the value must be aligned on a 4 byte boundary. Generally for most platforms, the read must execute in a single CPU instruction.
optimizer caching
The optimizer doesn't know you are reading a value modified by a different thread. Declaring the value volatile helps with that: the optimizer will issue a memory read/write for every access, instead of trying to keep the value cached in a register.
CPU cache
Still, you might read a stale value, since on modern architectures you have multiple cores with individual cache that is not kept in sync automatically. You need a read memory barrier, usually a platform-specific instruction.
On Wintel, thread synchronization functions will automatically add a full memory barrier, or you can use the InterlockedXxxx functions.
MSDN: Memory and Synchronization issues, MemoryBarrier Macro
[edit] please also see drhirsch's comments.
You ask a question about reading a variable and later you talk about updating a variable, which implies a read-modify-write operation.
Assuming you really mean the former, the read is safe if it is an atomic operation. For almost all architectures this is true for integers.
There are a few (and rare) exceptions:
The read is misaligned, for example accessing a 4-byte int at an odd address. Usually you need to force the compiler with special attributes to do some misalignment.
The size of the int is bigger than the natural size of instructions, for example using 16-bit ints on an 8-bit architecture.
Some architectures have an artificially limited bus width. I only know of very old and outdated ones, like a 386sx or a 68008.
I'd recommend not to rely on any compiler or architecture in this case.
Whenever you have a mix of readers and writers (as opposed to just readers or just writers) you'd better sync them all. Imagine your code running someone's artificial heart: you don't really want it to read wrong values, and you surely don't want a power plant in your city to go 'boooom' because someone decided not to use that mutex. Save yourself some sleepless nights in the long run; sync 'em.
If you only have one thread reading, you're good to go with just that one mutex; however, if you're planning for multiple readers and multiple writers you'd need a sophisticated piece of code to sync that. A nice implementation of a read/write lock that is also 'fair' is yet to be seen by me.
Imagine that you're reading the variable in one thread, that thread gets interrupted while reading, and the variable is changed by a writing thread. Now what is the value of the read integer after the reading thread resumes?
Unless reading the variable is an atomic operation, which in this case means it takes only a single (assembly) instruction, you cannot ensure that the above situation won't happen.
(The variable could be written out to memory, and retrieving the value could take more than one instruction.)
The consensus is that you should encapsulate/lock all writes individually, while reads can be executed concurrently with (only) other reads.
Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though.
In the general case, that is potentially every architecture. Every architecture has cases where reading concurrently with a write will result in garbage.
However, almost every architecture also has exceptions to this rule.
It is common that word-sized variables are read and written atomically, so synchronization is not needed when reading or writing. The proper value will be written atomically as a single operation, and threads will read the current value as a single atomic operation as well, even if another thread is writing. So for integers, you're safe on most architectures. Some will extend this guarantee to a few other sizes as well, but that's obviously hardware-dependent.
For non-word-sized variables both reading and writing will typically be non-atomic, and will have to be synchronized by other means.
If you don't use the previous value of this variable when writing a new one, then:
You can read and write an integer variable without using a mutex. This is because an integer is a native type on a 32-bit architecture and every read/write of the value is done in one operation.
But if you are doing something such as an increment:
myvar++;
then you do need a mutex, because this construction is expanded to myvar = myvar + 1, and between the read of myvar and the write of the incremented value, myvar can be modified by another thread. In that case you will get a bad value.
While it would probably be safe to read ints on 32-bit systems without synchronization, I would not risk it. While multiple concurrent reads are not a problem, I do not like writes to happen at the same time as reads.
I would recommend placing the reads in the critical section too and then stress-testing your application on multiple cores to see if this causes too much contention. Finding concurrency bugs is a nightmare I prefer to avoid. What happens if in the future someone decides to change the int to a long long or a double, so it can hold larger numbers?
If you have a nice thread library like boost.thread or zthread then you should have reader/writer locks. These would be ideal for your situation as they allow multiple reads while protecting writes.
This may happen on 8-bit systems which use 16-bit integers.
If you want to avoid locking you can under suitable circumstances get away with reading multiple times, until you get two equal consecutive values. For example, I've used this approach to read the 64 bit clock on a 32 bit embedded target, where the clock tick was implemented as an interrupt routine. In that case reading three times suffices, because the clock can only tick once in the short time the reading routine runs.
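A minimal sketch of that read-until-stable idea, assuming a 64-bit tick counter that only a timer interrupt writes and a 32-bit target on which each access is two loads (names are illustrative; as noted above, it relies on the counter changing at most once during the short read window):
    #include <cstdint>

    volatile uint64_t g_ticks;      // written only by the timer ISR

    uint64_t read_ticks() {
        uint64_t a = g_ticks;       // may be torn if the ISR fires mid-read
        uint64_t b = g_ticks;
        while (a != b) {            // retry until two consecutive reads agree
            a = b;
            b = g_ticks;
        }
        return b;
    }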
In general, each machine instruction goes through several hardware stages when executing. As most current CPUs are multi-core or hyper-threaded, reading a variable may start it moving through the instruction pipeline, but that doesn't stop another CPU core or hyper-thread from concurrently executing a store instruction to the same address. The two concurrently executing instructions, read and store, might "cross paths", meaning that the read will receive the old value just before the new value is stored.
To sum up: you do need the mutex for both reads and writes.
Both reads and writes of variables accessed concurrently must be protected by a critical section (not a mutex), unless you want to waste your whole day debugging.
Critical sections are platform-specific, I believe. On Win32, a critical section is very efficient: when there is no contention, entering it is almost free and does not affect overall performance. When contention does occur, it is still more efficient than a mutex, because it performs a series of checks before suspending the thread.
Depends on your platform. Most modern platforms offer atomic operations for integers: Windows has InterlockedIncrement, InterlockedDecrement, InterlockedCompareExchange, etc. These operations are usually supported by the underlying hardware (read: the CPU) and they are usually cheaper than using a critical section or other synchronization mechanisms.
See MSDN: InterlockedCompareExchange
I believe Linux (and modern Unix variants) support similar operations in the pthreads package but I don't claim to be an expert there.
If a variable is marked with the volatile keyword then a single aligned read/write may happen to be atomic on many platforms anyway, but volatile itself has many, many other implications in terms of what the compiler does and how it behaves and shouldn't just be used for this purpose.
Read up on what volatile does before you blindly start using it: http://msdn.microsoft.com/en-us/library/12a04hfd(VS.80).aspx