questions regarding JVM specification on Threads and Locks - concurrency

In the JVM spec
When data is copied from the main memory to a working memory, two
actions must occur: a read operation performed by the main memory,
followed some time later by a corresponding load operation performed
by the working memory.
Must the read and load operations happen back to back, or can there be other machine instructions between them? The read operation only says where the data is read from, not where it is kept. So if other instructions do come between read and load, where is the data held in the meantime?
A lock action (by a thread tightly synchronized with main memory)
causes a thread to acquire one claim on a particular lock. An unlock
action (by a thread tightly synchronized with main memory) causes a
thread to release one claim on a particular lock.
What is the relation between the lock/unlock actions specified here and the lock/unlock implemented by the synchronized keyword and the locks in the java.util.concurrent library? How is the lock mentioned in the spec implemented? Is it implemented by some machine instructions, or by setting a mark in the object header?

Related

Thread-local acquire/release synchronization

In general, load-acquire/store-release synchronization is one of the most common forms of memory-ordering based synchronization in the C++11 memory model. It's basically how a mutex provides memory ordering. The "critical section" between a load-acquire and a store-release is always synchronized among different observer threads, in the sense that all observer threads will agree on what happens after the acquire and before the release.
Generally, this is achieved with a read-modify-write instruction, like compare-exchange, along with an acquire barrier, when entering the critical section, and another read-modify-write instruction with a release barrier when exiting the critical section.
But there are some situations where you might have a similar critical section[1] between a load-acquire and a release-store, except only one thread actually modifies the synchronization variable. Other threads may read the synchronization variable, but only one thread actually modifies it. In this case, when entering the critical section, you don't need a read-modify-write instruction. You would just need a simple store, since you are not racing with other threads that are attempting to modify the synchronization flag. (This may seem odd, but note that many lock-free memory reclamation deferral patterns, like user-space RCU or epoch based reclamation, use thread-local synchronization variables that are written to only by one thread, but read by many threads, so this isn't too weird of a situation.)
So, when entering the critical section, you could just do something like:
sync_var.store(true, ...);
.... critical section ....
sync_var.store(false, std::memory_order_release);
There is no race, because, again, there is no need for a read-modify-write when only one thread needs to set/unset the critical section variable. Other threads can simply read the critical section variable with a load-acquire.
The problem is, when you're entering the critical section, you need an acquire operation or fence. But you don't need to do a LOAD, only a STORE. So what is a good way to produce acquire ordering when you only really need a STORE? I see only two real options that fall within the C++ memory model. Either:
Use an exchange instead of a store, so you can do sync_var.exchange(true, std::memory_order_acquire). The downside here is that exchange is a more heavy-weight read-modify-write operation, when all you really need is a simple store.
Insert a "dummy" load-acquire, like:
(void)sync_var.load(std::memory_order_acquire);
sync_var.store(true, std::memory_order_relaxed);
The "dummy" load-acquire seems better. Presumably, the compiler can't optimize away the unused load, because it's an atomic instruction that has the side-effect of producing a "synchronizes-with" relationship with a release operation on sync_var. But it also seems very hacky, and the intention is unclear without comments explaining what's going on.
So what is the best way to produce acquire semantics when all we need to do is a simple store?
[1] I use the term "critical section" loosely. I don't necessarily mean a section that is always accessed via mutual exclusion. Rather, I just mean any section where memory ordering is synchronized via acquire-release semantics. This could refer to a mutex, or it could just mean something like RCU, where the critical section can be accessed concurrently by multiple readers.
The flaw in your logic is the assumption that an atomic RMW is not required because the data in the critical section is modified by a single thread while all other threads only have read access.
This is not true; there still needs to be a well-defined order between reading and writing.
You don't want data to be modified while another thread is still reading it. Therefore, each thread needs to inform other threads when it has finished accessing the data.
By only using an atomic store to enter the critical section, the 'synchronizes-with' relationship cannot be established.
Acquire/release synchronization is based on a runtime relationship where the acquirer knows that synchronization is complete only after observing a particular value returned by the atomic load.
This can never be achieved with a single atomic store, since the one modifying thread can change the atomic variable sync_var at any time,
and as such it has no way of knowing whether another thread is still reading the data.
The option with a 'dummy' load/acquire is also invalid because it fails to inform other threads that it wants exclusive access. You attempt to solve that by using a single (relaxed) store,
but the load and the store are separate operations that can be interleaved with operations from other threads (i.e. multiple threads accessing the critical area simultaneously).
An atomic RMW must be used by each thread to load a particular value and, at the same time, update the variable to inform all other threads that it now has exclusive access
(regardless of whether that access is for reading or writing).
#include <atomic>

std::atomic<bool> sync_var{false};

void lock()
{
    // Spin until we are the thread that changes false -> true (acquire).
    while (sync_var.exchange(true, std::memory_order_acquire));
}

void unlock()
{
    // Release so the critical-section writes become visible to the next acquirer.
    sync_var.store(false, std::memory_order_release);
}
Optimizations are possible where multiple threads have read access at the same time (e.g. std::shared_mutex).
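For that reader-heavy case, a minimal sketch with std::shared_mutex (C++17; the names are illustrative) could look like this: readers take a shared lock and may run concurrently, while the single writer takes an exclusive lock.

#include <shared_mutex>

std::shared_mutex rw_mutex;
int shared_data = 0;   // illustrative shared state

void reader() {
    std::shared_lock<std::shared_mutex> lk(rw_mutex);  // many readers at once
    int copy = shared_data;                             // read under the shared lock
    (void)copy;
}

void writer(int value) {
    std::unique_lock<std::shared_mutex> lk(rw_mutex);   // exclusive access
    shared_data = value;
}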

Visibility of change to shared memory from shm_open() + mmap()

Let's say I am on CentOS 7 x86_64 + GCC 7.
I would like to create a ringbuffer in shared memory.
If I have two processes Producer and Consumer, and both share a named shared memory, which is created/accessed through shm_open() + mmap().
If Producer writes something like:
struct Data {
    uint64_t length;
    char data[100];
};
to the shared memory at a random time, and the Consumer is constantly polling the shared memory to read, will I have some sort of synchronization issue where the member length is visible but the member data is still in the process of being written? If so, what is the most efficient technique to avoid the issue?
I see this post:
Shared-memory IPC synchronization (lock-free)
But I would like to get a deeper, more low level of understanding what's required to synchronize between two processes efficiently.
Thanks in advance!
To avoid this, you would want to make the structure std::atomic and access it with acquire-release memory ordering. On most modern processors, the instructions this inserts are memory fences, which guarantee that the writer wait for all loads to complete before it begins writing, and that the reader wait for all stores to complete before it begins reading.
There are, in addition, locking primitives in POSIX, but the <atomic> header is newer and what you probably want.
What the Standard Says
From [atomics.lockfree], emphasis added:
Operations that are lock-free should also be address-free. That is, atomic operations on the same memory location via two different addresses will communicate atomically. The implementation should not depend on any per-process state. This restriction enables communication by memory that is mapped into a process more than once and by memory that is shared between two processes.
For lockable atomics, the standard says in [thread.rec.lockable.general], emphasis added:
An execution agent is an entity such as a thread that may perform work in parallel with other execution agents. [...] Implementations or users may introduce other kinds of agents such as processes [....]
You will sometimes see the claim that the standard supposedly makes no mention of using the <atomic> primitives with memory shared between processes, only threads. This is incorrect.
However, passing pointers to the other process through shared memory will not work, as the shared memory may be mapped to different parts of the address space, and of course a pointer to any object not in shared memory is right out. Indices and offsets of objects within shared memory will. (Or, if you really need pointers, Boost provides IPC-safe wrappers.)
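To make that concrete for the struct in the question, here is a minimal sketch (my own, assuming the struct has been placement-new constructed in the mmap()'d region and ignoring ring-buffer wrap-around): the producer fills data first and then publishes it by storing length with release ordering; the consumer polls length with acquire ordering, so once it observes a non-zero length the preceding writes to data are visible.

#include <atomic>
#include <cstdint>
#include <cstring>

struct Data {
    std::atomic<std::uint64_t> length{0};  // 0 means "not published yet"
    char data[100];
};

// Producer side: shm points into the region returned by mmap();
// len must be <= sizeof(shm->data).
void publish(Data* shm, const char* payload, std::uint64_t len) {
    std::memcpy(shm->data, payload, len);               // plain writes first
    shm->length.store(len, std::memory_order_release);  // then publish
}

// Consumer side: returns the number of bytes copied out, or 0 if not ready.
std::uint64_t poll(Data* shm, char* out) {
    std::uint64_t len = shm->length.load(std::memory_order_acquire);
    if (len != 0)
        std::memcpy(out, shm->data, len);  // safe: writes before the release are visible
    return len;
}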
Yes, you will ultimately run into data races: not only can length be read before data has been written, but parts of those members may also be observed half-written, out of sync with the reading process.
Although lock-free is the new trend, I'd suggest going for a simpler tool as your first IPC synchronization mechanism: the semaphore. On Linux, the following man pages will be useful:
sem_init
sem_wait
sem_post
The idea is to have each process signal the other that it is currently reading or writing the shared memory segment. With a semaphore, you can build an inter-process mutex:
Producer:
    while true:
        (opt) create resource
        lock semaphore (sem_wait)
        copy resource to shm
        unlock semaphore (sem_post)

Consumer:
    while true:
        lock semaphore (sem_wait)
        copy resource to local memory, or crunch resource
        unlock semaphore (sem_post)
If, for instance, Producer is writing into shm while Consumer calls sem_wait, Consumer will block until Producer calls sem_post; but you have no guarantee that Producer won't go for another loop and write twice in a row before Consumer wakes up. You have to build a mechanism to ensure Producer and Consumer take turns.
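As a hedged sketch of the pseudocode above: the process-shared semaphore can live in the same shm_open()+mmap() region as the data, and passing pshared = 1 to sem_init is what makes it usable across processes (the struct layout is only illustrative, and error handling is omitted).

#include <semaphore.h>
#include <cstdint>
#include <cstring>

struct Shared {
    sem_t mutex;             // protects length and data below
    std::uint64_t length;
    char data[100];
};

// Called once by whichever process creates the segment.
void init_shared(Shared* shm) {
    sem_init(&shm->mutex, /*pshared=*/1, /*value=*/1);  // binary semaphore = mutex
}

// Producer: copy a resource into shared memory under the lock.
void produce(Shared* shm, const char* src, std::uint64_t len) {
    sem_wait(&shm->mutex);
    shm->length = len;
    std::memcpy(shm->data, src, len);
    sem_post(&shm->mutex);
}

// Consumer: copy the resource out under the lock.
std::uint64_t consume(Shared* shm, char* dst) {
    sem_wait(&shm->mutex);
    std::uint64_t len = shm->length;
    std::memcpy(dst, shm->data, len);
    sem_post(&shm->mutex);
    return len;
}

As noted above, this alone does not force strict alternation; that would need extra signalling, for example a second pair of semaphores used as empty/full counters.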

Performance difference between mutex and critical section in C++

I was reading this post on performance differences in C# between critical sections and mutexes for a given test case. I'm wondering if there is any further documentation out there that gives performance overheads for the various locking classes for a C++ application, specifically MFC running on a Windows 32 or 64 bit platform?
The reason that I'm asking is that the profiler results I get across broad automated tests show a lot of time spent in mutex code. What I'm trying to figure out is how much of this is reasonable delay while waiting for a resource to become available, and how much is due to the implementation and specifics of the locking structure. I'm only dealing with a single process, which includes multiple threads, and am considering changing to critical sections. Long term automated testing shows that I don't need the time-outs offered by the mutex class.
Hence the question, is anyone aware of any reference documentation relating to the performance overheads of different MFC locking mechanisms on different Windows platforms?
As far as I can understand, a Win32 Mutex is a full-blown kernel object. This means that any call to a Mutex will involve a system call. This will often invalidate the cache and can therefore be quite expensive.
Critical Sections are user-side objects that make no use of the kernel in cases where there is no contention. This is probably done using the x86 LOCK assembler instruction or similar to guarantee atomicity. Since no system call is made, it will be faster; but because it is not a kernel object, there is no way to access a critical section from another process.
The crucial difference between Critical Sections and Mutexes in Windows is that you can create a named mutex and use it from multiple processes, whereas there is no way to access a critical section of one process from another.
A consequence of a mutex being available in multiple processes is that access to it must be controlled by the kernel.
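For reference, the two usage patterns look roughly like this (a sketch only; the mutex name is illustrative and error handling is omitted):

#include <windows.h>

CRITICAL_SECTION cs;   // user-mode object, usable only within this process

void cs_example() {
    // InitializeCriticalSection(&cs) must have been called once beforehand.
    EnterCriticalSection(&cs);     // no system call when uncontended
    // ... access shared data ...
    LeaveCriticalSection(&cs);
}

void mutex_example() {
    // A named mutex is a kernel object and can be opened from other processes.
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, L"Global\\MyAppMutex");
    WaitForSingleObject(hMutex, INFINITE);   // enters the kernel
    // ... access shared data ...
    ReleaseMutex(hMutex);
    CloseHandle(hMutex);
}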
Read the following support article from Microsoft: http://support.microsoft.com/kb/105678.
Critical sections and mutexes provide synchronization that is very similar, except that critical sections can be used only by the threads of a single process. There are two areas to consider when choosing which method to use within a single process:
Speed. The Synchronization overview says the following about critical sections:
... critical section objects provide a slightly faster, more efficient
mechanism for mutual-exclusion synchronization. Critical sections use
a processor-specific test and set instruction to determine mutual
exclusion.
Deadlock. The Synchronization overview says the following about mutexes:
If a thread terminates without releasing its ownership of a mutex
object, the mutex is considered to be abandoned. A waiting thread can
acquire ownership of an abandoned mutex, but the wait function's
return value indicates that the mutex is abandoned.
WaitForSingleObject() will return WAIT_ABANDONED for a mutex that has
been abandoned. However, the resource that the mutex is protecting is
left in an unknown state.
There is no way to tell whether a critical section has been abandoned.
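In code, the abandoned case is simply another return value to check for; a sketch (other error handling omitted):

#include <windows.h>

void guarded_access(HANDLE hMutex) {
    DWORD result = WaitForSingleObject(hMutex, INFINITE);
    if (result == WAIT_ABANDONED) {
        // The previous owner terminated without releasing the mutex. We now
        // own it, but the protected resource may be in an inconsistent state,
        // so repair or reinitialise it here if possible.
    }
    if (result == WAIT_OBJECT_0 || result == WAIT_ABANDONED) {
        // ... access shared data ...
        ReleaseMutex(hMutex);
    }
}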

Critical sections better in thread or main program?

I have been using critical sections (in C++) to block threads' execution while they access shared data, but since a thread has to wait until the data is no longer in use before it can take the lock, I'm not sure whether it's better to use them in the main program or in the worker threads.
If I want my main program to have priority and not be blocked, should I use critical sections inside it to block the other threads, or the other way around?
You seem to have rather a misconception over what critical sections are and how they work.
Speaking generically, a critical section (CS) is a piece of code that needs to run "exclusively" -- i.e., you need to ensure that only one thread is executing that piece of code at any given time.
As the term is used in most environments, a CS is really a mutex -- a mutual exclusion semaphore (aka binary semaphore). It's a data structure (and set of functions) you use to ensure that a section of code gets executed exclusively (rather than referring to the code itself).
In any case, a CS only makes sense at all when/if you have some code that will execute in more than one thread, and you need to ensure that it only ever executes in one thread at any given time. This is typically when you have some shared data that could and would be corrupted if more than one thread tried to manipulate it at one time. When/if that arises, you need to "use" the critical section for every thread that manipulates that data to assure that the shared data isn't corrupted.
Assuring that a particular thread remains responsive is a whole separate question. In most cases, this means using a queue (for one possibility) to allow the thread to "hand off" a task to some other thread quickly, with minimal contention (i.e., instead of using a CS for the duration of processing the data, the CS only lasts long enough to put a data structure into a queue, and some other thread takes the processing from there).
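A minimal sketch of that hand-off pattern (assuming C++17 for std::optional, with an illustrative Task type): the lock is held only long enough to push or pop, and the actual processing happens outside it.

#include <mutex>
#include <optional>
#include <queue>

struct Task { int id; };      // illustrative payload

std::mutex queue_mutex;
std::queue<Task> tasks;

// Responsive thread: hold the lock only long enough to enqueue.
void submit(Task t) {
    std::lock_guard<std::mutex> lk(queue_mutex);
    tasks.push(t);
}

// Worker thread: pop under the lock, process the task after releasing it.
std::optional<Task> try_take() {
    std::lock_guard<std::mutex> lk(queue_mutex);
    if (tasks.empty()) return std::nullopt;
    Task t = tasks.front();
    tasks.pop();
    return t;
}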
You cannot say "I am using a critical section in thread A but not in thread B". A critical section is a piece of code that accesses a shared resource. When this code is executed from two threads that run in parallel, the shared resource might get corrupted, so you need to synchronise access to it: you need to use one of the synchronisation objects (mutexes, semaphores, events... depending on the platform and API you are using). ThreadA locks the critical section, so ThreadB needs to wait until ThreadA releases it.
If you want your main thread to block (wait) less than working thread, set working thread priority to be lower than priority of the main thread.

Why do locks work?

If the locks make sure only one thread accesses the locked data at a time, then what controls access to the locking functions?
I thought that boost::mutex::scoped_lock should be at the beginning of each of my functions so the local variables don't get modified unexpectedly by another thread, is that correct? What if two threads are trying to acquire the lock at very close times? Won't the lock's local variables used internally be corrupted by the other thread?
My question is not boost-specific but I'll probably be using that unless you recommend another.
You're right, when implementing locks you need some way of guaranteeing that two processes don't get the lock at the same time. To do this, you need to use an atomic instruction - one that's guaranteed to complete without interruption. One such instruction is test-and-set, an operation that will get the state of a boolean variable, set it to true, and return the previously retrieved state.
What this does is allow you to write code that continually tests whether it can get the lock. Assume x is a variable shared between threads:
while(testandset(x));
// ...
// critical section
// this code can only be executed by one thread at a time
// ...
x = 0; // set x to 0, allow another process into critical section
Since the other threads continually test the lock until they're let into the critical section, this is a very inefficient way of guaranteeing mutual exclusion. However, using this simple concept, you can build more complicated control structures like semaphores that are much more efficient (because the processes aren't looping, they're sleeping).
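In modern C++ the same idea maps directly onto std::atomic_flag, whose test_and_set is guaranteed to be lock-free; a minimal spinlock sketch:

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_lock() {
    // Loop until test_and_set returns false, i.e. we were the thread that set it.
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
        // busy-wait (inefficient, as noted above)
    }
}

void spin_unlock() {
    lock_flag.clear(std::memory_order_release);  // let another thread in
}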
You only need to have exclusive access to shared data. Unless they're static or on the heap, local variables inside functions will have different instances for different threads and there is no need to worry. But shared data (stuff accessed via pointers, for example) should be locked first.
As for how locks work, they're carefully designed to prevent race conditions and often have hardware-level support to guarantee atomicity. That is, there are machine-language constructs guaranteed to be atomic, and semaphores (and mutexes) may be implemented via these.
The simplest explanation is that the locks, way down underneath, are based on a hardware instruction that is guaranteed to be atomic and can't clash between threads.
Ordinary local variables in a function are already specific to an individual thread. It's only statics, globals, or other data that can be simultaneously accessed by multiple threads that needs to have locks protecting it.
The mechanism that operates the lock controls access to it.
Any locking primitive needs to be able to communicate changes between processors, so it's usually implemented on top of bus operations, i.e., reading and writing to memory. It also needs to be structured such that two threads attempting to claim it won't corrupt its state. It's not easy, but you can usually trust that any OS implemented lock will not get corrupted by multiple threads.