Can indirect change of volatile const be treated as undefined behavior? - c++

Does volatile write to volatile const introduce undefined behavior? What if I drop volatile when writing?
volatile const int x = 42;
const volatile int *p = &x;
*(volatile int *)p = 8; // Does this line introduce undefined behavior?
*(int *)p = 16; // And what about this one?

It is undefined behavior (for both statements), as you attempt to modify an object originally defined const. From C11 (N1570) 6.7.3/p6 Type qualifiers (emphasis mine):
If an attempt is made to modify an object defined with a
const-qualified type through use of an lvalue with non-const-qualified
type, the behavior is undefined.
For completeness it may be worth adding, that Standard says also that:
If an attempt is made to refer to an object defined with a
volatile-qualified type through use of an lvalue with
non-volatile-qualified type, the behavior is undefined.
Hence the latter statement, that is:
*(int *)p = 16;
is undefined by that second passage as well (it's a "double UB").
I believe the rules are the same for C++, but I don't own a copy of C++14 to confirm.

Writing to a variable that is originally const is undefined behaviour, so all your example writes to *p are undefined.
Removing volatile is not, in and of itself, undefined.
However, if we have something like const volatile int *p = (const volatile int*)0x12340000; /* Address of hw register */, then removing volatile may mean the hardware updates the register value but your program doesn't pick it up. (E.g. if we "busy wait" with while(*p & 0x01) ;, the compiler should reload the value pointed to by p every time, but with while((*(const int *)p) & 1) ; the compiler is entirely free to read the value once and reuse that initial value, looping forever if bit 0 is set).
You could of course have extern volatile int x; and then use const volatile int *p = &x; in some code, and x gets updated by some other piece of code outside of your current translation unit (e.g. another thread) - in which case removing either const or volatile is valid, but as above, you may "miss" updated values because the compiler doesn't expect the global value to get updated outside of your module unless you call functions. [Edit: no, taking away volatile by casting is also forbidden in the standard - it is, however, valid to add const or volatile to something and then remove it again, if the original object referred to did not have it].
Edit 2: volatile is needed to tell the compiler that "the value may change at any time, even if you think nothing should have changed it". This happens, in general, in two situations:
Hardware registers that are updated outside of the software altogether - such as status registers for a serial port, a timer-counter register, or the interrupt status of the interrupt controller, to name a few cases - there are thousands of other variations, but it's all the same idea: the hardware changes the value, without direct relation to the software that is accessing such registers.
Variables updated by another thread within the process (or, in the case of shared memory, by another process) - again, the compiler won't be able to "see" such changes. Often one can write code that appears to function correctly by calling a function inside wait-loops and such, and the compiler will then reload the values of non-local variables anyway; but some time later the compiler decides to inline that function call, realizes that the code is not updating that value, and so doesn't reload it -> bug when you inspect the "updated" value and find the same old value that the compiler had already loaded earlier.
Note also that volatile is NOT a guarantee of any kind of thread/process correctness - it just guarantees that the compiler doesn't skip over reads or writes to that variable. It is still up to the programmer to ensure that, for example, multiple values that depend on ordering are indeed updated in the correct order, and, on systems where caches are not coherent between processing units, that the caches are made coherent via software [for example, a CPU and a GPU may not use coherent memory updates, so a write by the CPU does not reach the GPU until the cache has been flushed on the CPU - no amount of applying volatile to the code will fix this].
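As a minimal sketch of the second situation (the thread setup is my own illustration, not part of the answer above): without volatile, the compiler may hoist the read of done out of the loop and spin forever; with volatile it must re-read it each iteration. Even then this is formally a data race, and std::atomic<bool> is the proper tool.
#include <chrono>
#include <thread>

volatile bool done = false;   // set by another thread; volatile keeps the loop re-reading it

void worker()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    done = true;              // updated outside the waiting code's view
}

int main()
{
    std::thread t(worker);
    while (!done) { }         // without volatile, the load may be hoisted and the loop never exit;
                              // strictly speaking this is still a data race, std::atomic is preferred
    t.join();
}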

Related

C++: Can an object both be stored and not stored?

I've been in a debate about a corner case regarding local variables in a multithread environment.
The question is regarding programs formed like:
std::mutex mut;
int main()
{
std::size_t i = 0;
doSomethingWhichMaySpawnAThreadAndUseTheMutex();
mut.lock();
i += 1; // can this be reordered?
mut.unlock();
return i;
}
The question revolves around whether the i += 1 can be reordered to occur above the mutex locking.
The obvious parts are that mut.lock() happens-before i += 1, so if any other thread might be able to observe the value of i, the compiler is obliged to not have incremented it. From 3.9.2.3 of the C++ spec, "If an object of type T is located at an address A, a pointer of type cv T* whose value is the address A is said to point to that object, regardless of how the value was obtained." This means that if I used any means to get a pointer to i, I can expect to see the right value.
However, the spec does state that the compiler may use the "as-if" rule to not give an object a memory address (footnote 4 on section 1.8.6). For example, i could be stored in a register, which has no memory address. In such a case, there would be no memory address to point to, so the compiler could prove that no other thread could access i.
The question I am interested in is what if the compiler does not do this "as-if" optimization, and does indeed store the object. Is the compiler permitted to store i, but do reordering as-if i was not actually stored? From an implementation perspective, this would mean that i might be stored on a stack, and thus it would be possible to have a pointer point at it, but have the compiler assume nobody can see i, and do the re-order?
The compiler is allowed to perform optimizations as long as the observable results of program execution legitimately could have been obtained ("as-if") without those optimizations.[1] So this question uses "as-if" in a misleading manner, if not actually asking a backwards question:
Is the compiler permitted to store i, but do reordering as-if i was not actually stored?
This asks if the compiler is permitted to do things as long as the results of program execution could have been obtained with an optimization. That is not the question to ask. The question should use non-optimized behavior as the reference. So something more like: "Is the compiler permitted to re-order the statements?" The answer is yes, as long as the observable results do not change. Nothing external to this particular function is told how to access i, so the compiler should be allowed to implement the increment anywhere between the surrounding uses of it (specifically: its definition and the return statement).
That being said, what I would expect a compiler to do in this case is neither give i a memory address nor treat it as a register variable. I would expect the compiler to treat it like a constant, effectively changing your function to:
int main()
{
doSomethingWhichMaySpawnAThreadAndUseTheMutex();
mut.lock();
mut.unlock();
return 1;
}
This is allowed as long as you have no way to detect that it has been done (short of examining the machine code directly).
Note:
[1] The use of "could have been" is an acknowledgement that there are portions of the C++ specification that use the word "unspecified". These portions allow compilers to make choices that (when dealing with non-robust code) could change observable behavior. That is, there can be a set of allowed behaviors, rather than a single allowed behavior. As long as the results remain in this set, an optimization is allowed.
I find this question very muddled. With the code as posted, the compiler is obviously aware that the only use of i is in the return statement, so i will be optimised away, end of story. The mutex doesn't come into it.
But as soon as you take the address of i - and give it away to somebody else - the game changes. Now the compiler has to put a real variable on the stack and manipulate it only between mutex.lock() and mutex.unlock(). Doing anything else would alter the semantics of your program. The mutex also gives you a memory fence.
You can see this clearly at Godbolt.
Edit: I have fixed a silly bug in that code that rather obscured the point I was trying to make, sorry about that.
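For illustration, a minimal variant of the original program in which the address of i is given away (the global pointer name is mine; the doSomething... function is the question's): once the address escapes, the compiler must keep a real i and perform the increment between the lock and the unlock.
#include <cstddef>
#include <mutex>

std::mutex mut;
std::size_t* escaped = nullptr;   // hypothetical: other code/threads could read i through this

void doSomethingWhichMaySpawnAThreadAndUseTheMutex();   // as in the original question

int main()
{
    std::size_t i = 0;
    escaped = &i;                                  // the address of i is now visible elsewhere
    doSomethingWhichMaySpawnAThreadAndUseTheMutex();
    mut.lock();
    i += 1;                                        // must now really happen between lock and unlock
    mut.unlock();
    return static_cast<int>(i);
}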
The whole sequence:
mut.lock(); // i == 0 at that point
i += 1; // can this be reordered?
mut.unlock(); // i == 1 at that point
exit(i);
}
can be compiled as just exit(1); and other threads can be ignored as there is no proper synchronisation.
You need to wait for other threads to terminate, or for them to settle into doing nothing ever again. You don't do that, so it can be assumed that all other threads are doing nothing.
The mutex has no meaningful role here.

How to tell c++ compiler that the object is not changed elsewhere to reach better optimization

I would like to optimize some C++ code. Let have a simplified example:
int DoSomeStuff(const char* data)
{
while(data != NULL)
{
//compute something and use optimization as much as possible
}
return result;
}
I know that *data is not changed elsewhere. I mean, it's not changed in any other thread, but the compiler cannot know that. Is there some way to tell the compiler that the data behind the pointer is not changed for the entire lifetime of the scope?
UPDATE:
int DoSomeStuff(const volatile char* data)
{
while(data != NULL)
{
//compiler should assume that the data are changed elsewhere
}
return result;
}
1) Will the compiler optimize if the variable is const ?
According to the standard (7.1.6.1):
A pointer or reference to a cv-qualified type need not actually point or refer
to a cv-qualified object, but it is treated as if it does
So I understand it as: the compiler treats accesses through the const-qualified path as if the data really were const, but the object itself may still be modified through some other, non-const path, so const on the pointer alone is a weaker guarantee than a variable actually defined const.
2) Will the compiler not optimize to protect inter-thread access ?
According to the standard (1.10):
The execution of a program contains a data race if it contains two conflicting
actions in different threads, at least one of which is not atomic, and neither
happens before the other. Any such data race results in undefined behavior.
A conflicting action is defined as:
Two expression evaluations conflict if one of them modifies a memory location
(1.7) and the other one accesses or modifies the same memory location.
The happens-before relationship is complex but as I understand it, unless you explicitly used atomic operations (or mutexes) in your code, accessing the same variable from two threads (one writing, another one reading/writing) is undefined behavior. This case being undefined behavior I don't think the compiler will protect you, and it is free to optimize.
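As a small illustration of that last point (my own sketch, not from the question): the two accesses below conflict, and without the locks they would form a data race and hence undefined behaviour; with the mutex, the accesses are properly synchronized.
#include <mutex>
#include <thread>

int shared = 0;          // stands in for the *data of the question
int observed = 0;
std::mutex m;

void writer()
{
    std::lock_guard<std::mutex> lock(m);
    shared = 42;         // modifies the memory location
}

void reader()
{
    std::lock_guard<std::mutex> lock(m);
    observed = shared;   // accesses the same location; fine only because of the lock
}

int main()
{
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}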
3) Volatile ?
Volatile is designed to prevent optimization (7.1.6.1):
volatile is a hint to the implementation to avoid aggressive optimization
involving the object because the value of the object might be changed by means
undetectable by an implementation
Think about a memory-mapped I/O register (like a special input port). Even though you cannot write to it (because the hardware implementation is read-only), you cannot optimize your read accesses (because they may return a different value each time). This special I/O port may be a temperature sensor, for instance.
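A hedged sketch of that read-only register case (the address and type are invented for illustration; a real one comes from the hardware documentation):
#include <cstdint>

// Hypothetical memory-mapped, read-only temperature register.
volatile const std::uint32_t* const TEMP_SENSOR =
    reinterpret_cast<volatile const std::uint32_t*>(0x40001000);

std::uint32_t temperature_delta()
{
    std::uint32_t first  = *TEMP_SENSOR;   // each read really goes to the device,
    std::uint32_t second = *TEMP_SENSOR;   // so the two values may legitimately differ
    return second - first;
}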
The compiler is already allowed to assume *data is not modified by another thread. Modifying data in one thread and accessing it in another thread without proper synchronisation is Undefined Behaviour. The compiler is totally free to do anything at all with a program that contains undefined behaviour.
I think what you're saying is that it's not aliased anywhere else.
In MSVC you can do:
int DoSomeStuff(const char* __restrict data)
{
while(data != NULL)
{
//compute something and use optimization as much as possible
}
return result;
}
See:
http://msdn.microsoft.com/en-us/library/5ft82fed.aspx
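For what it's worth, GCC and Clang spell the same non-standard hint __restrict__ (they also accept __restrict), so a sketch along the lines of the simplified example above might look like this (result is left undeclared, exactly as in the question's snippet):
int DoSomeStuff(const char* __restrict__ data)
{
    while (data != NULL)
    {
        // compute something; the compiler may assume nothing else aliases *data here
    }
    return result;
}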

Is const a lie? (since const can be cast away) [duplicate]

Possible Duplicate:
Sell me on const correctness
What is the usefulness of the keyword const in C or C++, since such a thing is allowed?
#include <stdio.h>
void const_is_a_lie(const int* n)
{
*((int*) n) = 0;
}
int main()
{
int n = 1;
const_is_a_lie(&n);
printf("%d", n);
return 0;
}
Output: 0
It is clear that const cannot guarantee the non-modifiability of the argument.
const is a promise you make to the compiler, not something it guarantees you.
For example,
void const_is_a_lie(const int* n)
{
*((int*) n) = 0;
}
#include <stdio.h>
int main()
{
const int n = 1;
const_is_a_lie(&n);
printf("%d", n);
return 0;
}
Output shown at http://ideone.com/Ejogb is
1
Because of the const, the compiler is allowed to assume that the value won't change, and therefore it can skip rereading it, if that would make the program faster.
In this case, since const_is_a_lie() violates its contract, weird things happen. Don't violate the contract. And be glad that the compiler gives you help keeping the contract. Casts are evil.
In this case, n is a pointer to a constant int. When you cast it to int* you remove the const qualifier, and so the operation is allowed.
If you tell the compiler to remove the const qualifier, it will happily do so. The compiler will help ensure that your code is correct, if you let it do its job. By casting the const-ness away, you are telling the compiler that you know that the target of n is non-constant and you really do want to change it.
If the thing that your pointer points to was in fact declared const in the first place, then you are invoking undefined behavior by attempting to change it, and anything could happen. It might work. The write operation might not be visible. The program could crash. Your monitor could punch you. (Ok, probably not that last one.)
#include <stdio.h>
void const_is_a_lie(const char * c) {
*((char *)c) = '5';
}
int main() {
const char * text = "12345";
const_is_a_lie(text);
printf("%s\n", text);
return 0;
}
Depending on your specific environment, there may be a segfault (aka access violation) in const_is_a_lie since the compiler/runtime may store string literal values in memory pages that are not writable.
The Standard has this to say about modifying const objects.
7.1.6.1/4 The cv-qualifiers [dcl.type.cv]
Except that any class member declared mutable (7.1.1) can be modified, any attempt to modify a const object during its lifetime (3.8) results in undefined behavior
"Doctor, it hurts when I do this!" "So don't do that."
Your...
int n = 1;
...ensures n exists in read/write memory; it's a non-const variable, so a later attempt to modify it will have defined behaviour. Given such a variable, you can have a mix of const and/or non-const pointers and references to it - the constness of each is simply a way for the programmer to guard against accidental change in that "branch" of code. I say "branch" because you can visualise the access given to n as being a tree in which - once a branch is marked const, all the sub-branches (further pointers/references to n whether additional local variables, function parameters etc. initialised therefrom) will need to remain const, unless of course you explicitly cast that notion of constness away. Casting away const is safe (if potentially confusing) for variables that are mutable like your n, because they're ultimately still writing back into a memory address that is modifiable/mutable/non-const. All the bizarre optimisations and caching you could imagine causing trouble in these scenarios aren't allowed as the Standard requires and guarantees sane behaviour in the case I've just described.
Sadly, it's also possible to cast away the constness of genuinely, inherently const variables like, say, const int o = 1;, and any attempt to modify them has undefined behaviour. There are many practical reasons for this, including the compiler's right to place them in memory it then marks read-only (e.g. see UNIX mprotect(2)) such that an attempted write causes a CPU trap/interrupt, or to read from the variable whenever the originally-set value is needed (even if the variable's identifier was never mentioned in the code using the value), or to use an inlined-at-compile-time copy of the original value - ignoring any runtime change to the variable itself. So the Standard leaves the behaviour undefined. Even if such a variable happens to be modified as you might have intended, the rest of the program has undefined behaviour thereafter.
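A small sketch of what that can look like in practice (the exact outcome is undefined, so this merely shows one plausible result of the constant folding and read-only placement just described):
#include <iostream>

const int o = 1;                       // genuinely const object

int main()
{
    int* p = const_cast<int*>(&o);
    *p = 2;                            // undefined behaviour: modifying an object defined const
    std::cout << o << ' ' << *p << '\n';
    // A typical compiler may print "1 2" because uses of o were folded to 1 at compile time,
    // or the write may crash if o was placed in a read-only page; anything is permitted.
}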
But, that shouldn't be surprising. It's the same situation with types - if you have...
double d = 1;
*(int*)&d = my_int;
d += 1;
...you have lied to the compiler about the type of d. Ultimately d occupies memory that's probably untyped at a hardware level, so all the compiler ever has is a perspective on it, shuffling bit patterns in and out. But, depending on the value of my_int and the double representation on your hardware, you may have created an invalid combination of bits in d that doesn't represent any valid double value, such that subsequent attempts to read the memory back into a CPU register and/or do something with d such as += 1 have undefined behaviour and might, for example, generate a CPU trap / interrupt.
This is not a bug in C or C++... they're designed to let you make dubious requests of your hardware so that if you know what you're doing you can do some weird but useful things and rarely need to fall back on assembly language to write low level code, even for device drivers and Operating Systems.
Still, it's precisely because casts can be unsafe that a more explicit and targeted casting notation has been introduced in C++. There's no denying the risk - you just need to understand what you're asking for, why it's ok sometimes and not others, and live with it.
The type system is there to help, not to babysit you. You can circumvent the type system in many ways, not only regarding const, and each time you do that you are taking one safety net out of your program. You can ignore const-correctness or even the basic type system by passing void* around and casting as needed. That does not mean that const or types are a lie, only that you can force your way past the compiler.
const is there as a way of making the compiler aware of the contract of your function, and to let it help you not violate it, in the same way that a variable being typed means you don't need to guess how to interpret the data, because the compiler will help you. But it won't babysit: if you force your way and tell it to remove const-ness, or tell it how the data is to be reinterpreted, the compiler will just let you. After all, you did design the application; who is it to second-guess your judgement...
Additionally, in some cases you might actually cause undefined behavior, and your application might even crash. For example, if you cast away const from an object that is really const and you modify the object, you might find that the side effects are not seen in some places (the compiler assumed that the value would not change and thus performed constant folding), or your application might crash if the constant was loaded into a read-only memory page.
const never guaranteed immutability: the standard defines const_cast, which lets you cast constness away from an access path (actually modifying an object that was defined const remains undefined behavior).
const is useful for you to declare more intent and avoid changing data that you meant to be read only. You'll get a compilation error asking you to think twice if you do otherwise. You can change your mind, but that's not recommended.
As mentioned by other answers, the compiler may optimize a bit more if you use const-ness, but the benefits are not always significant.

Is accessing volatile local variables not accessed from outside the function observable behavior in C++?

In C++03 Standard observable behavior (1.9/6) includes reading and writing volatile data. Now I have this code:
int main()
{
const volatile int value = 0;
if( value ) {
}
return 0;
}
which formally initializes a volatile variable and then reads it. Visual C++ 10 emits machine code that makes room on the stack by pushing a dword there, then writes zero into that stack location, then reads that location.
To me it makes no sense - no other code or hardware could possibly know where the local variable is located (since it's in automatic storage) and so it's unreasonable to expect that the variable could have been read/written by any other party and so it can be eliminated in this case.
Is eliminating this variable access allowed? Is accessing a volatile local whose address is not known to any other party observable behavior?
The thread's entire stack might be located on a protected memory page, with a handler that logs all reads and writes (and allows them to complete, of course).
However, I don't think MSVC really cares whether or how the memory access might be detected. It understands volatile to mean, among other things, "do not bother applying optimizations to this object". So it doesn't. It doesn't have to make sense, because MSVC is not interested in speeding up this kind of use of volatile.
Since it's implementation-dependent whether and how observable behavior can actually be observed, I think you're right that an implementation can "cheat" if it knows, because of details of the hardware, that the access cannot possibly be detected. Observable behavior that has no physically-detectable effect can be skipped: no matter what the standard says, the means to detect non-conforming behavior are limited to what's physically possible.
If an implementation fails to conform to the standard in a forest, and nobody notices, does it make a sound? Kind of thing.
That's the whole point of declaring a variable volatile: you tell the implementation that that variable may change or be read by means unknown to the implementation itself and that the implementation should refrain from performing optimizations that might impact such access.
When a variable is declared both volatile and const your program may not change it, but it may still be changed from outside. This implies that not only the variable itself but also all read operations on it cannot be optimized away.
no other code or hardware could possibly know
You can look at the assembly (you just did!), figure out the address of the variable, and map it to some hardware for the duration of the call. volatile means the implementation is obliged to account for such things too.
Volatile also applies to your own code.
volatile int x;
spawn_thread(&x);
x = 0;
while (x == 0){};
This will be an endless loop if x is not volatile.
As for the const, I'm unsure whether the compiler can use that to decide.
To me it makes no sense - no other code or hardware could possibly
know where the local variable is located (since it's in automatic
storage)
Really? So if I write an x86 emulator and run your code on it, then that emulator won't know about that local variable?
The implementation can never actually know for sure that the behaviour is unobservable.
My answer is a bit late. Anyway, this statement
To me it makes no sense - no other code or hardware could possibly
know where the local variable is located (since it's in automatic
storage)
is wrong. The difference between volatile and not is actually very observable in VC++ 2010. For instance, in a Release build you cannot add a breakpoint at a local variable declaration that was eliminated by optimization. Hence, if you need to set a breakpoint at a variable declaration, or even just watch its value in the debugger, you have to use a Debug build. To debug a specific local variable in a Release build, we can make use of the volatile keyword:
int _tmain(int argc, _TCHAR* argv[])
{
int a;
//int volatile a;
a=1; //break point here is not possible in Release build, unless volatile used
printf("%d\n",a);
return 0;
}

What Rules does compiler have to follow when dealing with volatile memory locations?

I know that when reading from a memory location which is written to by several threads or processes, the volatile keyword should be used for that location, as in some of the cases below. But I want to know more about what restrictions it really places on the compiler: what rules does the compiler have to follow when dealing with such a case, and is there any exceptional case where, despite simultaneous access to a memory location, the volatile keyword can be ignored by the programmer?
volatile SomeType * ptr = someAddress;
void someFunc(volatile const SomeType & input){
//function body
}
What you know is false. Volatile is not used to synchronize memory access between threads, apply any kind of memory fences, or anything of the sort. Operations on volatile memory are not atomic, and they are not guaranteed to be in any particular order. volatile is one of the most misunderstood facilities in the entire language. "Volatile is almost useless for multi-threaded programming."
What volatile is used for is interfacing with memory-mapped hardware, signal handlers, and setjmp/longjmp.
It can also be used in a similar way that const is used, and this is how Alexandrescu uses it in this article. But make no mistake. volatile doesn't make your code magically thread safe. Used in this specific way, it is simply a tool that can help the compiler tell you where you might have messed up. It is still up to you to fix your mistakes, and volatile plays no role in fixing those mistakes.
EDIT: I'll try to elaborate a little bit on what I just said.
Suppose you have a class that has a pointer to something that cannot change. You might naturally make the pointer const:
class MyGizmo
{
public:
const Foo* foo_;
};
What does const really do for you here? It doesn't do anything to the memory. It's not like the write-protect tab on an old floppy disc. The memory itself is still writable. You just can't write to it through the foo_ pointer. So const is really just a way to give the compiler another way to let you know when you might be messing up. If you were to write this code:
gizmo.foo_->bar_ = 42;
...the compiler won't allow it, because it's marked const. Obviously you can get around this by using const_cast to cast away the const-ness, but if you need to be convinced this is a bad idea then there is no help for you. :)
Alexandrescu's use of volatile is exactly the same. It doesn't do anything to make the memory somehow "thread safe" in any way whatsoever. What it does is it gives the compiler another way to let you know when you may have screwed up. You mark things that you have made truly "thread safe" (through the use of actual synchronization objects, like Mutexes or Semaphores) as being volatile. Then the compiler won't let you use them in a non-volatile context. It throws a compiler error you then have to think about and fix. You could again get around it by casting away the volatile-ness using const_cast, but this is just as Evil as casting away const-ness.
My advice to you is to completely abandon volatile as a tool in writing multithreaded applications (edit:) until you really know what you're doing and why. It has some benefit, but not in the way that most people think, and if you use it incorrectly, you could write dangerously unsafe applications.
It's not as well defined as you probably want it to be. Most of the relevant standardese from C++98 is in section 1.9, "Program Execution":
The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions.
Accessing an object designated by a volatile lvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment. Evaluation of an expression might produce side effects. At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place.
Once the execution of a function begins, no expressions from the calling function are evaluated until execution of the called function has completed.
When the processing of the abstract machine is interrupted by receipt of a signal, the values of objects with type other than volatile sig_atomic_t are unspecified, and the value of any object not of volatile sig_atomic_t that is modified by the handler becomes undefined.
An instance of each object with automatic storage duration (3.7.2) is associated with each entry into its block. Such an object exists and retains its last-stored value during the execution of the block and while the block is suspended (by a call of a function or receipt of a signal).
The least requirements on a conforming implementation are:
At sequence points, volatile objects are stable in the sense that previous evaluations are complete and subsequent evaluations have not yet occurred.
At program termination, all data written into files shall be identical to one of the possible results that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place in such a fashion that prompting messages actually appear prior to a program waiting for input. What constitutes an interactive device is implementation-defined.
So what that boils down to is:
The compiler cannot optimize away reads or writes to volatile objects. For simple cases like the one casablanca mentioned, that works the way you might think. However, in cases like
volatile int a;
int b;
b = a = 42;
people can and do argue about whether the compiler has to generate code as if the last line had read
a = 42; b = a;
or if it can, as it normally would (in the absence of volatile), generate
a = 42; b = 42;
(C++0x may have addressed this point, I haven't read the whole thing.)
The compiler may not reorder operations on two different volatile objects that occur in separate statements (every semicolon is a sequence point) but it is totally allowed to rearrange accesses to non-volatile objects relative to volatile ones. This is one of the many reasons why you should not try to write your own spinlocks, and is the primary reason why John Dibling is warning you not to treat volatile as a panacea for multithreaded programming.
Speaking of threads, you will have noticed the complete absence of any mention of threads in the standards text. That is because C++98 has no concept of threads. (C++0x does, and may well specify their interaction with volatile, but I wouldn't be assuming anyone implements those rules yet if I were you.) Therefore, there is no guarantee that accesses to volatile objects from one thread are visible to another thread. This is the other major reason volatile is not especially useful for multithreaded programming.
There is no guarantee that volatile objects are accessed in one piece, or that modifications to volatile objects avoid touching other things right next to them in memory. This is not explicit in what I quoted but is implied by the stuff about volatile sig_atomic_t -- the sig_atomic_t part would be unnecessary otherwise. This makes volatile substantially less useful for access to I/O devices than it was probably intended to be, and compilers marketed for embedded programming often offer stronger guarantees, but it's not something you can count on.
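Since volatile sig_atomic_t comes up here, a tiny sketch of the one classic use the standard itself blesses (a plain illustration, not from the quoted text): a flag set from a signal handler and polled by the main loop.
#include <csignal>
#include <cstdio>

volatile std::sig_atomic_t got_signal = 0;   // the classic type for flags set from a signal handler

extern "C" void on_sigint(int)
{
    got_signal = 1;                           // the handler does nothing else
}

int main()
{
    std::signal(SIGINT, on_sigint);
    while (!got_signal) {
        // do work; volatile makes the flag be re-read on each iteration
    }
    std::puts("interrupted");
}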
Lots of people try to make specific accesses to objects have volatile semantics, e.g. doing
T x;
*(volatile T *)&x = foo();
This is legit (because it says "object designated by a volatile lvalue" and not "object with a volatile type") but has to be done with great care, because remember what I said about the compiler being totally allowed to reorder non-volatile accesses relative to volatile ones? That goes even if it's the same object (as far as I know anyway).
If you are worried about reordering of accesses to more than one volatile value, you need to understand the sequence point rules, which are long and complicated and I'm not going to quote them here because this answer is already too long, but here's a good explanation which is only a little simplified. If you find yourself needing to worry about the differences in the sequence point rules between C and C++ you have already screwed up somewhere (for instance, as a rule of thumb, never overload &&).
A particular and very common optimization that is ruled out by volatile is to cache a value from memory into a register, and use the register for repeated access (because this is much faster than going back to memory every time).
Instead the compiler must fetch the value from memory every time (taking a hint from Zach, I should say that "every time" is bounded by sequence points).
Nor can a sequence of writes make use of a register and only write the final value back later on: every write must be pushed out to memory.
Why is this useful? On some architectures certain IO devices map their inputs or outputs to a memory location (i.e. a byte written to that location actually goes out on the serial line). If the compiler redirects some of those writes to a register that is only flushed occasionally then most of the bytes won't go onto the serial line. Not good. Using volatile prevents this situation.
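A hedged sketch of that serial-line case (the address is made up; a real one comes from the device's datasheet): every store below must actually reach the data register, so the compiler cannot keep the byte in a register and write only the final value.
#include <cstdint>

// Hypothetical memory-mapped UART transmit register.
volatile std::uint8_t* const UART_TX =
    reinterpret_cast<volatile std::uint8_t*>(0x40002000);

void send(const char* msg)
{
    while (*msg != '\0') {
        *UART_TX = static_cast<std::uint8_t>(*msg++);   // each write goes out on the line
    }
}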
Declaring a variable as volatile means the compiler can't make any assumptions about the value that it could have done otherwise, and hence prevents the compiler from applying various optimizations. Essentially it forces the compiler to re-read the value from memory on each access, even if the normal flow of code doesn't change the value. For example:
int *i = ...;
cout << *i; // line A
// ... (some code that doesn't use i)
cout << *i; // line B
In this case, the compiler would normally assume that since the value at i wasn't modified in between, it's okay to retain the value from line A (say in a register) and print the same value in B. However, if you mark i as volatile, you're telling the compiler that some external source could have possibly modified the value at i between line A and B, so the compiler must re-fetch the current value from memory.
The compiler is not allowed to optimize away reads of a volatile object in a loop, which it would otherwise normally do (e.g. in a strlen()-style loop).
It's commonly used in embedded programming when reading a hardware register at a fixed address whose value may change unexpectedly. (In contrast with "normal" memory, which doesn't change unless written to by the program itself...)
That is its main purpose.
It could also be used to make sure one thread sees changes to a value written by another, but it in no way guarantees atomicity when reading/writing said object.
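To make the atomicity point concrete, a small sketch (my own, not part of the answer): both counters below are incremented the same number of times, but only the std::atomic one is guaranteed to end with the full count; the volatile one can lose updates, and the race on it is undefined behaviour anyway.
#include <atomic>
#include <iostream>
#include <thread>

volatile int plain_counter = 0;          // reads/writes are not elided, but they are not atomic
std::atomic<int> atomic_counter{0};      // increments cannot be lost

void work()
{
    for (int i = 0; i < 100000; ++i) {
        plain_counter = plain_counter + 1;   // separate read then write: racy and lossy
        ++atomic_counter;                    // well-defined; total will be exactly 200000
    }
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << plain_counter << ' ' << atomic_counter << '\n';
}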