Is this code thread-safe? - c++

This is a simplified version of some code I'm currently maintaining:
int SomeFunc()
{
    const long lIndex = m_lCurrentIndex;
    int nSum = 0;
    nSum += m_someArray[lIndex];
    nSum += m_someArray[lIndex];
    return nSum;
}
m_lCurrentIndex is updated periodically by another thread. The question is: will making a local copy of m_lCurrentIndex make sure both accesses to m_someArray use the same index?
Please note that this is a simplified example; I'm thinking about the concept of making a local copy, not the exact piece of code shown here. I know the compiler will put the value in a register, but that is still a local copy as opposed to reading from m_lCurrentIndex twice.
Thanks!
Edit: The initial assignment is safe, both are guaranteed to be 32 bit in our setup.
Edit2: And they are correctly aligned on a 32bit boundary (forgot about that one)

No, the initialisation of the local, which reads a shared variable, is not necessarily atomic (consider what code would be needed on an 8-bit platform, for example). In general, the only way to write thread-safe code is to use atomic operations specified by your compiler and/or OS, or to use OS locking features.

will making a local copy of m_lCurrentIndex make sure both accesses to m_someArray use the same index?
In the same execution of SomeFunc, yes, of course. A local integer variable (lIndex) will not magically change its value in the middle of the function.
Of course, the following are also true: the actual value of m_someArray[lIndex] (as opposed to that of lIndex) might change; m_someArray in itself might change; and what Neil said about the validity of lIndex's initial value.

Yes the copy of the current index will make sure both array accesses use the same index. That is not really what I would have thought of as "thread safe" though. You need to concern yourself with concurrent access to shared variables. To me that looks like the access to the array could be an area of potential concern as well.

This should be thread safe (at least it will on all the compilers/OSes I've worked with). However, to be extra doubly sure you could declare m_lCurrentIndex as volatile. Then the compiler will know that it might change at any time.

Another question to ask here is: is the copy operation from m_lCurrentIndex to lIndex an atomic operation? If it isn’t you might end up using very weird values which will probably do nothing good. :)
Point is: when you are using multiple threads there won’t be a way around some kind of locking or synchronization.

Concept of local copy
I'm thinking about the concept of making a local copy, not the exact piece of code shown here.
This question cannot be answered without knowing more details. It boils down to the question of whether this "making a local copy" of m_lCurrentIndex into lIndex is atomic.
Assuming x86, and assuming m_lCurrentIndex is DWORD-aligned and long represents a DWORD (which is true on most x86 compilers), then yes, this is atomic. Assuming x64, and assuming either that long represents a DWORD and m_lCurrentIndex is DWORD-aligned, or that long represents a 64-bit word and m_lCurrentIndex is 64-bit aligned, then again yes, this is atomic. On other platforms, or without the alignment guarantee, two or more physical reads may be required for the copy.
Even without the local copy being atomic you may still be able to make it lock-free and thread-safe using a CAS-style loop (be optimistic and assume you can do without locking; after doing the operation, check whether everything went OK and, if not, roll back and try again), but it is a lot more work and the result will be lock-free, not wait-free.
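For illustration, here is a rough sketch of the read-then-validate retry loop described above, in the spirit of the question's code (member names mirror the question; the snapshot/validation logic and memory orders are illustrative assumptions, and this does not guard against the array contents themselves changing mid-read):

#include <atomic>

class Summer {
public:
    int SomeFunc() const
    {
        for (;;) {
            // Take a snapshot of the shared index.
            const long lIndex = m_lCurrentIndex.load(std::memory_order_acquire);

            int nSum = 0;
            nSum += m_someArray[lIndex];
            nSum += m_someArray[lIndex];

            // Validate: if the index has not changed, the result is consistent
            // with a single index value; otherwise retry (lock-free, not wait-free).
            if (m_lCurrentIndex.load(std::memory_order_acquire) == lIndex)
                return nSum;
        }
    }

private:
    std::atomic<long> m_lCurrentIndex{0};
    int m_someArray[16] = {};
};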
Memory barriers
A word of caution: once you move one step forward, you will most likely be handling multiple variables simultaneously, or handling shared variables which act as pointers or indices used to access other shared variables. At that point things get more and more complicated, as from then on you need to consider things like read/write reordering and memory barriers. See also How can I write a lock free structure

Yes, it will access the same element of the array. It is as if you are taking a snapshot of the value of m_lCurrentIndex into the local variable. Since the local variable has its own memory, whatever you do to m_lCurrentIndex will have no effect on the local variable. However, note that since the assignment operation is not guaranteed to be atomic, you may very well end up with an invalid value in lIndex. This happens if you update m_lCurrentIndex from one thread and at the same time attempt the assignment into lIndex from the other thread.

Related

should I use a lock if I am sure that every thread will always write the same value to shared memory?

C++: I have a vector<bool>; no thread will ever write false to any of the indices. Should I still use a lock? I don't think it will cause a race condition, as all threads are writing the same value.
The function being accessed by multiple threads looks like:
void setVal(int index)
{
    if (boolvec[index] == false)
        boolvec[index] = true;
}
To get by without a lock you should use a std::vector<atomic_flag_wrapper> where atomic_flag_wrapper wraps a std::atomic_flag similar to the code in this answer.
With std::vector<bool> you must use a lock, the standard explicitly tells you so:
Notwithstanding [res.on.data.races], implementations are required to avoid data races when the contents of the contained object in different elements in the same container, excepting vector<bool>, are modified concurrently.
http://eel.is/c++draft/container.requirements.dataraces#2 (C++ draft, 02.08.2020), emphasis mine
In plain English:
std::vector<bool> does not need to make sure that writes to two different elements are race-free; therefore there can be a data race; therefore you need a lock.
If it were e.g. a std::vector<char>, then the standard makes sure that charVector[0] and charVector[1] can be written to concurrently. But still then you cannot write to charVector[0] from more than one thread concurrently; you need to use atomics.
std::atomic<bool> is not guaranteed to be lock-free, so you should use std::atomic_flag, which has this guarantee. You cannot put these into a std::vector, though, because they are not copy constructible. For that, you need a wrapper as described in this answer.
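As a hedged sketch of that wrapper idea (the name atomic_flag_wrapper and the pre-sizing of the vector are illustrative assumptions; std::atomic_flag is not copy-constructible, so the copy operations here simply produce a cleared flag, which is fine as long as the vector is sized before the threads start):

#include <atomic>
#include <vector>

struct atomic_flag_wrapper {
    std::atomic_flag flag;

    atomic_flag_wrapper() { flag.clear(); }
    // Copies start cleared; std::atomic_flag itself cannot be copied.
    atomic_flag_wrapper(const atomic_flag_wrapper&) { flag.clear(); }
    atomic_flag_wrapper& operator=(const atomic_flag_wrapper&) { return *this; }
};

std::vector<atomic_flag_wrapper> boolvec(100);   // sized before any thread runs

void setVal(int index)
{
    // test_and_set is atomic and guaranteed lock-free, so no mutex is needed.
    boolvec[index].flag.test_and_set(std::memory_order_release);
}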
You should use a lock. On some platforms it may be unnecessary, while on others it is essential to get correct functionality due to the nature of std::vector<bool>.
std::vector<bool> is designed for space efficiency, and elements are stored as individual bits. If one thread writes to boolvec[3], and another writes to boolvec[4], both will modify the same byte in memory. This requires a lock to ensure that the proper value is visible to all threads, because of the read-modify-write cycle necessary to change the value. Without the lock, on some platforms the first write may not be immediately visible to all CPUs/cores/threads, resulting in the second write using a stale value and effectively undoing the previous thread's write to the same address.
Yes, you should. Always a good idea, since it affects readability and maintainability.
Even though today your code may not be expected to write specific values, that may change in the future. While it may work without a lock now, suppose you later want to change something. Then you get a bug and have to chase it down, a bug you could reasonably have prevented by writing your code defensively from the beginning.
In this specific case you can avoid using a lock. If multiple threads read false at the same index, it does not matter, since the end result will be that the value at that index is true.

In C11/C++11, possible to mix atomic/non-atomic ops on the same memory?

Is it possible to perform atomic and non-atomic ops on the same memory location?
I ask not because I actually want to do this, but because I'm trying to understand the C11/C++11 memory model. They define a "data race" like so:
The execution of a program contains a data race if it contains two
conflicting actions in different threads, at least one of which is not
atomic, and neither happens before the other. Any such data race
results in undefined behavior.
-- C11 §5.1.2.4 p25, C++11 § 1.10 p21
It's the "at least one of which is not atomic" part that is troubling me. If it weren't possible to mix atomic and non-atomic ops, it would just say "on an object which is not atomic."
I can't see any straightforward way of performing non-atomic operations on atomic variables. std::atomic<T> in C++ doesn't define any operations with non-atomic semantics. In C, all direct reads/writes of an atomic variable appear to be translated into atomic operations.
I suppose memcpy() and other direct memory operations might be a way of performing a non-atomic read/write on an atomic variable, i.e. memcpy(&atomicvar, othermem, sizeof(atomicvar))? But is this even defined behavior? In C++, std::atomic is not copyable, so would it be defined behavior to memcpy() it in C or C++?
Initialization of an atomic variable (whether through a constructor or atomic_init()) is defined to not be atomic. But this is a one-time operation: you're not allowed to initialize an atomic variable a second time. Placement new or an explicit destructor call would also not be atomic. But in all of these cases, it doesn't seem like it would be defined behavior anyway to have a concurrent atomic operation that might be operating on an uninitialized value.
Performing atomic operations on non-atomic variables seems totally impossible: neither C nor C++ define any atomic functions that can operate on non-atomic variables.
So what is the story here? Is it really about memcpy(), or initialization/destruction, or something else?
I think you're overlooking another case, the reverse order. Consider an initialized int whose storage is reused to create an std::atomic_int. All atomic operations happen after its ctor finishes, and therefore on initialized memory. But any concurrent, non-atomic access to the now-overwritten int has to be barred as well.
(I'm assuming here that the storage lifetime is sufficient and plays no role)
I'm not entirely sure because I think that the second access to int would be invalid anyway as the type of the accessing expression int doesn't match the object's type at the time (std::atomic<int>). However, "the object's type at the time" assumes a single linear time progression which doesn't hold in a multi-threaded environment. C++11 in general has that solved by making such assumptions about "the global state" Undefined Behavior per se, and the rule from the question appears to fit in that framework.
So perhaps rephrasing: if a single memory location contains an atomic object as well as a non-atomic object, and if the destruction of the earliest created (older) object is not sequenced-before the creation of the other (newer) object, then access to the older object conflicts with access to the newer object unless the former is scheduled-before the latter.
disclaimer: I am not a parallelism guru.
Is it possible to mix atomic/non-atomic ops on the same memory, and if so, how?
You can write it in the code and it will compile, but it will probably yield undefined behaviour.
When talking about atomics, it is important to understand what kind of problems they solve.
As you might know, what we casually call "memory" is a multi-layered set of entities capable of holding data.
First we have the RAM, then the cache lines, then the registers.
On single-core processors we don't have any synchronization problem; on multi-core processors we have all of them. Every core has its own set of registers and cache lines.
This causes a few problems.
The first of them is memory reordering: the CPU may decide at runtime to reorder some read/write instructions to make the code run faster. This may yield strange results that are completely invisible in the high-level code that produced these instructions. The classic example of this phenomenon is the "two threads, two integers" example:
int i=0;
int j=0;
thread a -> i=1, then print j
thread b -> j=1 then print i;
Logically, the result "00" should be impossible: if a finishes first the result may be "01", if b finishes first the result may be "10", and if both finish at the same time the result may be "11". Yet if you build a small program which imitates this situation and run it in a loop, you will very quickly see the result "00".
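A concrete, hedged version of that example (plain ints would make it a data race, so relaxed atomics are used here to keep the behaviour defined while still permitting the counter-intuitive "00" outcome; run it in a loop, as suggested, to observe it):

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> i{0}, j{0};
int r1 = 0, r2 = 0;

void thread_a()
{
    i.store(1, std::memory_order_relaxed);
    r1 = j.load(std::memory_order_relaxed);   // may observe 0
}

void thread_b()
{
    j.store(1, std::memory_order_relaxed);
    r2 = i.load(std::memory_order_relaxed);   // may observe 0
}

int main()
{
    std::thread a(thread_a), b(thread_b);
    a.join();
    b.join();
    std::printf("%d%d\n", r1, r2);   // "01", "10", "11", and "00" is also allowed
}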
Another problem is memory visibility. As I mentioned before, a variable's value may be cached in one of the cache lines, or stored in one of the registers. When the CPU updates a variable's value it may delay writing the new value back to the RAM; it may keep the value in the cache/register because it was told (by the compiler optimizations) that the value will be updated again soon, so in order to make the program faster it updates the value again and only then writes it back to the RAM. This may cause undefined behaviour if another CPU (and consequently a thread or a process) depends on the new value.
For example, look at this pseudo-code:
bool b = true;
while (b) -> print 'a'
new thread -> sleep 4 seconds -> b=false;
The character 'a' may be printed infinitely, because b may be cached and never updated.
There are many more problems when dealing with parallelism.
Atomics solve these kinds of issues by (in a nutshell) telling the compiler/CPU how to read and write data to/from RAM correctly, without doing unwanted reordering (read about memory orders). A memory order may force the CPU to write its values back to the RAM, or to read the values from the RAM even if they are cached.
So although you can mix non-atomic actions with atomic ones, you are only doing part of the job.
For example, let's go back to the second example:
atomic bool b = true;
while (reload b) print 'a'
new thread -> b = (non-atomically) false
So although one thread re-reads the value of b from the RAM again and again, the other thread may never write false back to the RAM.
So although you can mix these kinds of operations in the code, it will yield undefined behavior.
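For contrast, a hedged sketch of the same flag example with both sides atomic, so the write is guaranteed to become visible to the reading thread and the loop terminates (timing and memory orders are illustrative):

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> b{true};

int main()
{
    std::thread t([] {
        std::this_thread::sleep_for(std::chrono::seconds(4));
        b.store(false, std::memory_order_release);   // atomic write, will be seen by the loop
    });

    while (b.load(std::memory_order_acquire))        // atomic re-read on every iteration
        std::cout << 'a';

    t.join();
}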
I'm interested in this topic since I have code in which sometimes I need to access a range of addresses serially, and at other times to access the same addresses in parallel with some way of managing contention.
So not exactly the situation posed by the original question which (I think) implies concurrent, or nearly so, atomic and non atomic operations in parallel code, but close.
I have managed by some devious casting to persuade my C11 compiler to allow me to access an integer and much more usefully a pointer both atomically and non-atomically ("directly"), having established that both types are officially lock-free on my x86_64 system. My, possibly simplistic, interpretation of that is that the sizes of the atomic and non atomic types are the same and that the hardware can update such types in a single operation.
I definitely would not attempt to mix both types of access to an address in a parallel context; I think that would be doomed to fail. However I have been successful in using "direct" syntax operations in serial code and "atomic" syntax in parallel code, giving me the best of both worlds of the fastest possible access (and much simpler syntax) in serial, and safely managed contention when in parallel.
So you can do it so long as you don't try to mix both methods in parallel code and you stick to using lock-free types, which probably means up to the size of a pointer.

Is it necessary to lock an array that is *only written to* from one thread and *only read from* another?

I have two threads running. They share an array. One of the threads adds new elements to the array (and removes them) and the other uses this array (read operations only).
Is it necessary for me to lock the array before I add/remove to/from it or read from it?
Further details:
I will need to keep iterating over the entire array in the other thread. No write operations over there as previously mentioned. "Just scanning something like a fixed-size circular buffer"
The easy thing to do in such cases is to use a lock. However, locks can be very slow. I did not want to use locks if their use could be avoided. Also, as it came out of the discussion, it is not actually necessary to lock all operations on the array. Locking just the management of an iterator for the array (the count variable that will be used by the other thread) is enough.
I don't think the question is "too broad". If it still comes out to be so, please let me know. I know the question isn't perfect. I had to combine at least 3 answers in order to be able to solve the question - which suggests most people were not able to fully understand all the issues and were forced to do some guess work. But most of it came out through the comments which I have tried to incorporate in the question. The answers helped me solve my problem quite objectively and I think the answers provided here are quite a helpful resource for someone starting out with multithreading.
If two threads perform an operation on the same memory location, and at least one operation is a write operation, you have a so-called data race. According to C11 and C++11, the behaviour of programs with data races is undefined.
So, you have to use some kind of synchronization mechanism, for example:
std::atomic
std::mutex
If you are writing to and reading from the same location from multiple threads you will need to perform locking or use atomics. We can see this by looking at the C11 draft standard (the C++11 standard looks almost identical; the equivalent section is 1.10), which says the following in section 5.1.2.4 Multi-threaded executions and data races:
Two expression evaluations conflict if one of them modifies a memory
location and the other one reads or modifies the same memory location.
and:
The execution of a program contains a data race if it contains two
conflicting actions in different threads, at least one of which is not
atomic, and neither happens before the other. Any such data race
results in undefined behavior.
and:
Compiler transformations that introduce assignments to a potentially
shared memory location that would not be modified by the abstract
machine are generally precluded by this standard, since such an
assignment might overwrite another assignment by a different thread in
cases in which an abstract machine execution would not have
encountered a data race. This includes implementations of data member
assignment that overwrite adjacent members in separate memory
locations. We also generally preclude reordering of atomic loads in
cases in which the atomics in question may alias, since this may
violate the "visible sequence" rules.
If you were just adding data to the array then in the C++ world a std::atomic index would be sufficient since you can add more elements and then atomically increment the index. But since you want to grow and shrink the array then you will need to use a mutex, in the C++ world std::lock_guard would be a typical choice.
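A hedged sketch of that "atomic index" idea for the add-only case (the names, the fixed capacity, and the single-writer assumption are illustrative; the writer fills the slot first and only then publishes it by bumping the count):

#include <atomic>
#include <cstddef>

constexpr std::size_t kCapacity = 1024;   // assumed fixed capacity, no reallocation
int data[kCapacity];
std::atomic<std::size_t> count{0};

// Writer thread (only one writer): append an element.
void append(int value)
{
    const std::size_t n = count.load(std::memory_order_relaxed);
    data[n] = value;                                  // write the element first...
    count.store(n + 1, std::memory_order_release);    // ...then make it visible
}

// Reader thread: only touches elements below the published count.
int sum_all()
{
    const std::size_t n = count.load(std::memory_order_acquire);
    int s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += data[i];
    return s;
}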
To answer your question: maybe.
Simply put, the way that the question is framed doesn't provide enough information about whether or not a lock is required.
In most standard use cases, the answer would be yes. And most of the answers here are covering that case pretty well.
I'll cover the other case.
When would you not need a lock given the information you have provided?
There are some other questions here that would help better define whether you need a lock, whether you can use a lock-free synchronization method, or whether or not you can get away with no explicit synchronization.
Will writing data ever be non-atomic? Meaning, will writing data ever result in "torn data"? If your data is a single 32 bit value on an x86 system, and your data is aligned, then you would have a case where writing your data is already atomic. It's safe to assume that if your data is of any size larger than the size of a pointer (4 bytes on x86, 8 on x64), then your writes cannot be atomic without a lock.
Will the size of your array ever change in a way that requires reallocation? If your reader is walking through your data, will the data suddenly be "gone" (memory has been "delete"d)? Unless your reader takes this into account (unlikely), you'll need a lock if reallocation is possible.
When you write data to your array, is it ok if the reader "sees" old data?
If your data can be written atomically, your array won't suddenly not be there, and it's ok for the reader to see old data... then you won't need a lock. Even with those conditions being met, it would be appropriate to use the built in atomic functions for reading and storing. But, that's a case where you wouldn't need a lock :)
Probably safest to use a lock since you were unsure enough to ask this question. But, if you want to play around with the edge case of where you don't need a lock... there you go :)
One of the threads adds new elements to the array [...] and the other [reads] this array
In order to add and remove elements to/from an array, you will need an index that specifies the last place in the array where valid data is stored. Such an index is necessary because arrays cannot be resized without potential reallocation (which is a different story altogether). You may also need a second index to mark the initial location from which reading is allowed.
If you have an index or two like this, and assuming that you never re-allocate the array, it is not necessary to lock when you write to the array itself, as long as you lock the writes of valid indexes.
int lastValid = 0;
int shared[MAX];
...
int count = toAddCount;

// Add the new data
for (int i = lastValid; count != 0; count--, i++) {
    shared[i] = new_data(...);
}

// Lock a mutex before modifying lastValid.
// You need to use the same mutex to protect the read of the lastValid variable.
lock_mutex(lastValid_mutex);
lastValid += toAddCount;
unlock_mutex(lastValid_mutex);
The reason this works is that when you perform writes to shared[] outside the locked region, the reader does not "look" past the lastValid index. Once the writing is complete, you lock the mutex, which normally causes a flush of the CPU cache, so the writes to shared[] would be complete before the reader is allowed to see the data.
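A matching reader-side sketch under the same assumptions (process() is a hypothetical consumer; only the read of lastValid needs the mutex, since the elements below it are already fully written):

// Reader thread
lock_mutex(lastValid_mutex);
int localLastValid = lastValid;
unlock_mutex(lastValid_mutex);

for (int i = 0; i < localLastValid; i++) {
    process(shared[i]);   // process() is hypothetical
}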
Lock? No. But you do need some synchronization mechanism.
What you're describing sounds an awful lot like a "SPSC" (Single Producer Single Consumer) queue, of which there are tons of lock-free implementations out there, including one in Boost.Lockfree.
The general way these work is that underneath the covers you have a circular buffer containing your objects and an index. The writer knows the last index it wrote to, and if it needs to write new data it (1) writes to the next slot, (2) updates the index by setting the index to the previous slot + 1, and then (3) signals the reader. The reader then reads until it hits an index that doesn't contain the index it expects and waits for the next signal. Deletes are implicit since new items in the buffer overwrite previous ones.
You need a way to atomically update the index, which is provided by atomic<> and has direct hardware support. You need a way for a writer to signal the reader. You also might need memory fences, depending on the platform, so that (1)-(3) occur in order. You don't need anything as heavy as a lock.
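A minimal single-producer/single-consumer ring buffer along those lines, as a hedged sketch (a real implementation such as boost::lockfree::spsc_queue handles much more; the capacity handling and memory orders here are illustrative):

#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscQueue {
public:
    bool push(const T& value)                          // called only by the producer
    {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                              // full
        buffer_[head] = value;                         // write the slot first...
        head_.store(next, std::memory_order_release);  // ...then publish it
        return true;
    }

    bool pop(T& out)                                   // called only by the consumer
    {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                              // empty
        out = buffer_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }

private:
    T buffer_[N];
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};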
"Classical" POSIX would indeed need a lock for such a situation, but this is overkill. You just have to ensure that the reads and writes are atomic. C and C++ have that in the language since their 2011 versions of their standards. Compilers start to implement it, at least the latest versions of Clang and GCC have it.
It depends. One situation where it could be bad is if you are removing an item in one thread and then reading the last item by its index in your read thread. That read thread would hit an out-of-bounds error.
As far as I know, this is exactly the use case for a lock. Two threads which access one array concurrently must ensure that one thread has finished its work before the other reads the data.
Thread B might read unfinished data if thread A has not finished its work.
If it's a fixed-size array, and you don't need to communicate anything extra like indices written/updated, then you can avoid mutual exclusion with the caveat that the reader may see:
no updates at all
If your memory ordering is relaxed enough that this happens, you need a store fence in the writer and a load fence in the consumer to fix it
partial writes
if the stored type is not atomic on your platform (int generally should be)
or your values are un-aligned, and especially if they may span cache lines
This is all dependent on your platform though - hardware, OS and compiler can all affect it. You haven't told us what they are.
The portable C++11 solution is to use an array of atomic<int>. You still need to decide what memory ordering constraints you require, and what that means for correctness and performance on your platform.
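A hedged sketch of that portable approach (the array size and the memory orders are illustrative choices):

#include <array>
#include <atomic>
#include <cstddef>

std::array<std::atomic<int>, 64> slots;   // static storage, so the values start at zero

void write_slot(std::size_t i, int value)
{
    slots[i].store(value, std::memory_order_release);   // writer thread
}

int read_slot(std::size_t i)
{
    return slots[i].load(std::memory_order_acquire);     // reader thread
}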
If you use e.g. vector for your array (so that it can dynamically grow), then reallocation may occur during the writes, you lose.
If you use data entries larger than what can be written and read atomically (virtually any complex data type), you lose.
If the compiler / optimizer decides to keep certain things in registers (such as the counter holding the number of valid entries in the array) during some operations, you lose.
Or even if the compiler / optimizer decides to switch order of execution for your array element assignments and counter increments/decrements, you lose.
So you certainly do need some sort of synchronization. What the best way to do it is (for example, it may be worthwhile to lock only parts of the array) depends on your specifics (how often, and in what pattern, the threads access the array).

CRITICAL_SECTION for set and get single bool value

I'm now writing a complicated class and feel that I use too many CRITICAL_SECTIONs.
As far as I know there are atomic operations for some types that are always executed without any hardware or software interrupt.
I want to check if I understand everything correctly.
To set or get an atomic value we don't need a CRITICAL_SECTION, because there won't be any interrupts while doing it.
bool is atomic.
So those are my statements; I want to ask if they are correct, and, if they are correct, what other types of variables may also be set or get without a CRITICAL_SECTION?
P. S. I'm talking about getting or setting one single value per method, not two, not five, but one.
You don't need locks around atomic data, but internally they might lock. Note, for example, that C++11's std::atomic has an is_lock_free function.
bool may not be atomic. See here and here
Note: This answer applies to Windows and says nothing of other platforms.
There are no InterlockedRead or InterlockedWrite functions; simple reads and writes with the correct integer size (and alignment) are atomic on Windows ("Simple reads and writes to properly-aligned 32-bit variables are atomic operations.").
(and there are no cache problems since a properly-aligned variable is always on a single cache line).
However, reading and modifying such variables (or any other variable) are not atomic:
Read a bool? Fine. Test-and-set a bool? Better use InterlockedCompareExchange.
Overwrite an integer? Great! Add to it? Critical section.
The following can also be found in the documentation:
Simple reads and writes to properly aligned 64-bit variables are
atomic on 64-bit Windows. Reads and writes to 64-bit values are not
guaranteed to be atomic on 32-bit Windows. Reads and writes to
variables of other sizes are not guaranteed to be atomic on any
platform.
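To illustrate the test-and-set case above, a hedged Windows-only sketch (the flag is a LONG rather than a bool because InterlockedCompareExchange operates on 32-bit values; the names are illustrative):

#include <windows.h>

volatile LONG g_flag = 0;   // 0 = clear, 1 = set

bool TestAndSet()
{
    // Atomically: if g_flag == 0, set it to 1. Returns true only for the
    // thread that actually performed the 0 -> 1 transition.
    return InterlockedCompareExchange(&g_flag, 1, 0) == 0;
}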
The result should be correct, but in programming it is better not to trust "should". There still remains a small possibility of failure because of the CPU cache.
You cannot guarantee for all implementations/platforms/compilers that bool, or any other type, or most operations, are atomic. So, no, I don't believe your statements are correct. You can retool your logic or use other means of establishing atomicity, but you probably can't get away with just removing CRITICAL_SECTION usage if you rely on it.

Synchronizing access to variable

I need to provide synchronization to some members of a structure.
If the structure is something like this
struct SharedStruct {
    int Value1;
    int Value2;
};
and I have a global variable
SharedStruct obj;
I want that the write from a processor
obj.Value1 = 5; // Processor B
to be immediately visible to the other processors, so that when I test the value
if(obj.Value1 == 5) { DoSmth(); } // Processor A
else DoSmthElse();
to get the new value, not some old value from the cache.
First I thought that if I use volatile when writing/reading the values, it would be enough. But I read that volatile can't solve this kind of issue.
The members are guaranteed to be properly aligned on 2/4/8 byte boundaries, and writes should be atomic in this case, but I'm not sure how the cache could interfere with this.
Would using memory barriers (mfence, sfence, etc.) be enough? Or are some interlocked operations required?
Or maybe something like
lock mov addr, REGISTER
?
The easiest would obviously be some locking mechanism, but speed is critical and I can't afford locks :(
Edit
Maybe I should clarify a bit. The value is set only once (behaves like a flag). All the other threads need just to read it. That's why I think that it may be a way to force the read of this new value without using locks.
Thanks in advance!
There Ain't No Such Thing As A Free Lunch. If your data is being accessed from multiple threads, and it is necessary that updates are immediately visible by those other threads, then you have to protect the shared struct by a mutex, or a readers/writers lock, or some similar mechanism.
Performance is a valid concern when synchronizing code, but it is trumped by correctness. Generally speaking, aim for correctness first and then profile your code. Worrying about performance when you haven't yet nailed down correctness is premature optimization.
Use explicitly atomic instructions. I believe most compilers offer these as intrinsics. Compare and Exchange is another good one.
If you intend to write a lockless algorithm, you need to write it so that your changes only take effect when conditions are as expected.
For example, if you intend to insert a linked list object, use the compare/exchange stuff so that it only inserts if the pointer still points at the same location when you actually do the update.
Or if you are going to decrement a reference count and free the memory at count 0, you will want to pre-free it by making it unavailable somehow, check that the count is still 0 and then really free it. Or something like that.
Using a lock, operate, unlock design is generally a lot easier. The lock-free algorithms are really difficult to get right.
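A hedged sketch of the compare/exchange pattern just described, for pushing onto a lock-free singly linked list (a Treiber-style stack; the node type, naming, and memory orders are illustrative, and node reclamation is deliberately ignored):

#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

void push(int value)
{
    Node* node = new Node{value, head.load(std::memory_order_relaxed)};
    // The insert only takes effect if head still points where we last saw it;
    // on failure, compare_exchange_weak reloads head into node->next and we retry.
    while (!head.compare_exchange_weak(node->next, node,
                                       std::memory_order_release,
                                       std::memory_order_relaxed)) {
    }
}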
All the other answers here seem to hand wave about the complexities of updating shared variables using mutexes, etc. It is true that you want the update to be atomic.
And you could use various OS primitives to ensure that, and that would be good
programming style.
However, on most modern processors (certainly the x86), writes of small, aligned scalar values are atomic and immediately visible to other processors due to cache coherency.
So in this special case, you don't need all the synchronizing junk; the hardware does the
atomic operation for you. Certainly this is safe with 4 byte values (e.g., "int" in 32 bit C compilers).
So you could just initialize Value1 with an uninteresting value (say 0) before you start the parallel threads, and simply write other values there. If the question is exiting the loop on a fixed value (e.g., if value1 == 5) this will be perfectly safe.
If you insist on capturing the first value written, this won't work. But if you have a parallel set of threads, and any value written other than the uninteresting one will do, this is also fine.
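The question predates C++11, but for completeness, here is a hedged sketch of the portable modern equivalent of this set-once flag (the field names mirror the struct in the question; DoSmth/DoSmthElse are the question's placeholders and the memory orders are illustrative):

#include <atomic>

void DoSmth();
void DoSmthElse();

struct SharedStruct {
    std::atomic<int> Value1{0};
    int Value2 = 0;
};

SharedStruct obj;

void writer()   // Processor B
{
    obj.Value1.store(5, std::memory_order_release);
}

void reader()   // Processor A
{
    if (obj.Value1.load(std::memory_order_acquire) == 5)
        DoSmth();
    else
        DoSmthElse();
}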
I second peterb's answer to aim for correctness first. Yes, you can use memory barriers here, but they will not do what you want.
You said immediately. However, no matter how immediate this update can be, you could (and will) end up with the if() clause being evaluated, then the flag being set, and then DoSmthElse() being executed afterwards. This is called a race condition...
You want to synchronize something, it seems, but it is not this flag.
Making the field volatile should make the change "immediately" visible in other threads, but there is no guarantee that the instant at which thread A executes the update doesn't occur after thread B tests the value but before thread B executes the body of the if/else statement.
It sounds like what you really want to do is make that if/else statement atomic, and that will require either a lock, or an algorithm that is tolerant of this sort of situation.