C++ communication between threads - c++

I have a couple of classes that each open a different program in a different thread and do/hold information about it using CreateProcess (if there's a more C++-oriented way to do this, let me know; I looked).
Some of the classes are dependent on one of the other programs running, i.e. B must stop if A stopped. I made this code a while ago, and my solution then was having a class with static functions that run the various programs and static member variables that hold their "state". I was also using CreateThread.
Looking back, this method seemed... fragile and awkward looking.
I have no idea if using such a "static class" is good practice or not (especially recalling how awkward initializing the state member variables was). I'd like to perhaps make each class contain its own run function. However, the issue I am considering is how to let class B know if A has unexpectedly stopped. They'd still need a way to be aware of each other's state. Note that I'd like to use std::thread in this rework and that I have little to no experience with multithreading. Thanks for any help.

Well, in a multi-process application you would be using pipes/files to transmit information from one process to another (or maybe even the return value of a child process). You could also try shared memory, though it can be somewhat challenging (look into Boost.Interprocess if you wish to).
In a multi-threaded application you basically have the same options available:
you can use memory that is shared (provided you synchronize access)
you can use queues to pass information from one thread to another (such as with pipes)
so really the two are quite similar.
Following Tony Hoare's precept, you should generally share by communicating, not communicate by sharing, which means preferring queues/pipes over shared memory; however, for just a boolean flag, shared memory can prove easier to put in place:
#include <atomic>
#include <chrono>
#include <functional>
#include <future>
#include <iostream>
#include <thread>

void do_a(std::atomic_bool& done) {
    // do things
    done = true;
}

int main() {
    std::atomic_bool done{false};
    // std::ref is required: atomics cannot be copied into std::async
    auto a = std::async(do_a, std::ref(done));
    auto b = std::async([](std::atomic_bool& done) {
        while (not done) {
            std::cout << "still not done" << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }, std::ref(done));
    // other stuff in parallel.
}
And of course, you can even put this flag in a std::shared_ptr to avoid the risk of dangling references.

Related

Shared file logging between threads in C++ 11 [duplicate]

This question already has answers here:
Is cout synchronized/thread-safe?
(4 answers)
Closed 5 years ago.
Recently I started learning C++ 11. I only studied C/C++ for a brief period of time when I was in college. I come from another ecosystem (web development), so as you can imagine I'm relatively new to C++.
At the moment I'm studying threads and how I could accomplish logging from multiple threads with a single writer (file handle). So I wrote the following code based on tutorials and reading various articles.
My first question and request would be to point out any bad practices / mistakes that I have overlooked (although the code works with VC 2015).
Secondly, and this is my main concern: I'm not closing the file handle, and I'm not sure if that causes any issues. If it does, when and how would be the most appropriate way to close it?
Lastly, and correct me if I'm wrong, I don't want to "pause" a thread while another thread is writing. I'm writing line by line each time. Is there any case where the output gets messed up at some point?
Thank you very much for your time, below is the source (currently for learning purposes everything is inside main.cpp).
#include <iostream>
#include <fstream>
#include <memory>
#include <thread>
#include <string>

static const int THREADS_NUM = 8;

class Logger
{
public:
    Logger(const std::string &path) : filePath(path)
    {
        this->logFile.open(this->filePath);
    }
    void write(const std::string &data)
    {
        this->logFile << data;
    }
private:
    std::ofstream logFile;
    std::string filePath;
};

void spawnThread(int tid, const std::shared_ptr<Logger> &logger)
{
    std::cout << "Thread " + std::to_string(tid) + " started" << std::endl;
    logger->write("Thread " + std::to_string(tid) + " was here!\n");
}

int main()
{
    std::cout << "Master started" << std::endl;
    std::thread threadPool[THREADS_NUM];
    auto logger = std::make_shared<Logger>("test.log");
    for (int i = 0; i < THREADS_NUM; ++i)
    {
        threadPool[i] = std::thread(spawnThread, i, logger);
        threadPool[i].join();
    }
    return 0;
}
PS1: In this scenario there will always be only 1 file handle open for threads to log data.
PS2: The file handle ideally should close right before the program exits... Should it be done in Logger destructor?
UPDATE
The current output with 1000 threads is the following:
Thread 0 was here!
Thread 1 was here!
Thread 2 was here!
Thread 3 was here!
.
.
.
.
Thread 995 was here!
Thread 996 was here!
Thread 997 was here!
Thread 998 was here!
Thread 999 was here!
I don't see any garbage so far...
My first question and request would be to point out any bad practices / mistakes that I have overlooked (although the code works with VC 2015).
Subjective, but the code looks fine to me, although you are not synchronizing the threads' writes (a std::mutex in the logger would do the trick).
Also note that this:
std::thread threadPool[THREADS_NUM];
auto logger = std::make_shared<Logger>("test.log");
for (int i = 0; i < THREADS_NUM; ++i)
{
    threadPool[i] = std::thread(spawnThread, i, logger);
    threadPool[i].join();
}
is pointless. You create a thread, join it and then create a new one. I think this is what you are looking for:
std::vector<std::thread> threadPool;
auto logger = std::make_shared<Logger>("test.log");
// create all threads
for (int i = 0; i < THREADS_NUM; ++i)
    threadPool.emplace_back(spawnThread, i, logger);
// after all are created, join them
for (auto& th : threadPool)
    th.join();
Now you create all threads and then wait for all of them. Not one by one.
Secondly, and this is my main concern: I'm not closing the file handle, and I'm not sure if that causes any issues. If it does, when and how would be the most appropriate way to close it?
And when do you want to close it? After each write? That would be redundant OS work with no real benefit. The file is supposed to stay open for the entire program's lifetime, so there is no reason to close it manually at all. On a graceful exit, std::ofstream's destructor closes the file; on a non-graceful exit, the OS closes all remaining handles anyway.
Flushing a file's buffer (possibly after each write?) would be helpful though.
Lastly, and correct me if I'm wrong, I don't want to "pause" a thread while another thread is writing. I'm writing line by line each time. Is there any case where the output gets messed up at some point?
Yes, of course. Since you are not synchronizing writes to the file, the output might be garbage. You can easily check this yourself: spawn 10000 threads and run the code. It's very likely you will get a corrupted file.
There are many different synchronization mechanisms, but all of them are either lock-free or lock-based (or possibly a mix). In any case, a simple std::mutex (basic lock-based synchronization) in the logger class should be fine.
The first massive mistake is saying "it works with MSVC, I see no garbage", even more so as it only works because your test code is not actually concurrent (so of course it works fine).
But even if the code were concurrent, saying "I don't see anything wrong" is a terrible mistake. Multithreaded code is not correct because you see nothing wrong; it is incorrect unless proven correct.
The goal of not blocking ("pausing") one thread while another is writing is unachievable if you want correctness, at least if they concurrently write to the same descriptor. You must synchronize properly (call it whatever you like, and use any method you like), or the behavior will be incorrect. Or worse, it will look correct for as long as you look at it, and it will behave wrongly six months later when your most important customer uses it for a multi-million dollar project.
Under some operating systems, you can "cheat" and get away without explicit synchronization, as these offer syscalls that have atomicity guarantees (e.g. writev). That is, however, not what you may think: it is indeed heavyweight synchronization, only you don't see it.
A better (more efficient) strategy than a mutex or atomic writes might be to have a single consumer thread which writes to disk, with as many producer threads as you like pushing log tasks onto a concurrent queue. This has minimal latency for the threads that you don't want to block, and blocking only where you don't care. Plus, you can coalesce several small writes into one.
Closing or not closing the file seems like a non-issue; after all, when the program exits, files are closed anyway. Well yes, except there are three layers of caching (four, actually, if you count the physical disk's cache), two of them within your application and one within the operating system.
When data has made it at least into the OS buffers, all is good unless power fails unexpectedly. Not so for the other two levels of cache!
If your process dies unexpectedly, its memory will be released, which includes anything cached within iostream and anything cached within the CRT. So if you need any amount of reliability, you will either have to flush regularly (which is expensive) or use a different strategy. File mapping may be such a strategy, because whatever you copy into the mapping is automatically (by definition) within the operating system's buffers, and unless power fails or the computer explodes, it will be written to disk.
That being said, there exist dozens of free and readily available logging libraries (such as e.g. spdlog) which do the job very well. There's really not much of a reason to reinvent this particular wheel.
Hello and welcome to the community!
A few comments on the code, and a few general tips on top of that.
Don't use native arrays if you do not absolutely have to.
Eliminating the native std::thread[] array and replacing it with a std::array would allow you to use a range-based for loop, which is the preferred way of iterating over things in C++. A std::vector would also work, since you have to generate the threads (which you can do with std::generate in combination with std::back_inserter).
Don't use smart pointers if you do not have specific memory management requirements; in this case a reference to a stack-allocated logger would be fine (the logger will probably live for the duration of the program anyway, hence no need for explicit memory management). In C++ you try to use the stack as much as possible; dynamic memory allocation is slow in many ways, and shared pointers introduce overhead (unique pointers are zero-cost abstractions).
The join in the for loop is probably not what you want, it will wait for the previously spawned thread and spawn another one after it is finished. If you want parallelism you need another for loop for the joins, but the preferred way would be to use std::for_each(begin(pool), end(pool), [](auto& thread) { thread.join(); }) or something similar.
Use the C++ Core Guidelines and a recent C++ standard (C++17 is the current), C++11 is old and you probably want to learn the modern stuff instead of learning how to write legacy code. http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
C++ is not Java; use the stack as much as possible, as this is one of the biggest advantages of using C++. Make sure you understand by heart how the stack, constructors, and destructors work.
The first question is subjective, so someone else may want to give advice, but I don't see anything awful.
Nothing in the C++ standard library is thread-safe except for some rare cases. A good answer on using ofstream in a multithreaded environment is given here.
Not closing a file is indeed an issue. You have to get familiar with RAII as it is one of the first things to learn. The answer by Detonar is a good piece of advice.

OO Approach For Hardware Communication... Possible Singleton?

I am working on a project where I need to talk to a particular box over UDP. There will only ever be one box connected to the system at any given time. The connection should last the entire duration of the program.
I have written a class that works (yay!) in providing the necessary data to the hardware. However, my main problem is that now I have to account for the fact that someone (a programmer down the road who will more than likely just ignore all my very neat comments ;) ) may create more than one instance of this class. This will more than likely result in some hilarious and rather amusing crash where the hardware in question is wondering why it is receiving data from two sockets on the same machine. More troublesome is the fact that creating the object actually spawns a thread that periodically sends updates. So you can imagine if my imaginary future programmer does something like create a linked list of these objects (after all, this is C++ and we have the ability to do such things) the CPU might not be very happy after a while.
As a result, I turn to you... the more experienced people of SO who have seen such issues in the past. I have debated creating a singleton to handle all of this, but some of my readings lead me to believe that this might not be the way to go. There is a TON of information regarding them on the internet, and it's almost like asking a highly sensitive political question based on the responses I've seen.
An alternative I've developed that will preserve as much code as possible is to just use a static bool to keep track if there is an active thread passing data to the hardware. However, I suspect my approach can lead to race conditions in the case where I have competing threads attempting to access the class at the same time. Here's what I have thus far:
// in MyClass.cpp:
bool MyClass::running_ = false; // declared in the class in the .h, but defined here

MyClass::MyClass() {
    // various initialization stuff you don't care about goes here
    if (pthread_create(mythread_, NULL, MyThreadFunc, this) != 0) {
        // error
    }
    else {
        // no error
    }
}

void* MyClass::MyThreadFunc(void* args) {
    MyClass* myclass = static_cast<MyClass*>(args);
    // now I have access to all the stuff in MyClass
    // do various checks here to make sure I can talk to the box
    if (!running_) {
        running_ = true;
        // open a connection
        while (!myclass->terminate) { // terminate is a flag set to true in the destructor
            // update the hardware via UDP
        }
        // close the socket
        running_ = false;
    }
    return NULL;
}
While I certainly note that this will check for only one instance being active, there is still the possibility that two concurrent threads will access the !running_ check at the same time and therefore both open the connection.
As a result, I'm wondering what my options are here? Do I implement a singleton? Is there a way I can get the static variable to work? Alternatively, do I just comment about this issue and hope that the next programmer understands to not open two instances to talk to the hardware?
As always, thanks for the help!
Edited to add:
I just had another idea pop into my mind... what if the static bool was a static lock instead? That way, I could set the lock and then just have subsequent instances attempt to get the lock and if they failed, just return a zombie class... Just a thought...
You're right, asking about singletons is likely to start a flame war; that will not make you any wiser. You'd better make up your mind yourself. It's not that hard, really, if you are aware of the primary principles.
For your case I'd skip that whole branch as irrelevant, as your post is motivated by FEAR: fear of a speculative issue. So let me just advise you on that: relax. You can't fight idiots. As soon as you invent some fool-proof scheme, the universe evolves and produces a better idiot who will get around it. It's not worth the effort. Leave the idiot problem to management and HR, to keep them employed elsewhere.
Your task is to provide a working solution and proper documentation on how to use it (ideally with tests and examples too). If you document that usage means creating just a single instance of your stuff and doing the listed init and teardown steps, you can just expect that to be followed; if it isn't, it's the next guy's problem.
Most of the real-life grief comes NOT from dismissing docs, but from docs that are missing or inaccurate. So just do that part properly.
Once that's done, certainly nothing forbids you to add a few static or runtime asserts on preconditions: it's not hard to count your class's instances and assert the count will not go over 1.
What if you have two instances of the hardware itself? [I know you say it will only be one - but I've been there, done that on the aspect of "It's only ever going to be one!! Oh, <swearword>, now we need to use two..."].
Of course, your if(running_) is a race-condition. You really should use some sort of atomic type, so that you don't get two attempts to start the class at once. That also won't stop someone from trying to start two instances of the overall program.
Returning a zombie class seems like a BAD solution - throwing an exception, returning an error value, or some such would be a much better choice.
Would it be possible to have "the other side" control the number of connections? In other words, if a second instance tries to communicate, it gets an error back from the hardware that receives the message "Sorry, already have a connection"?
Sorry if this isn't really "an answer".
First, I do not think you can really protect anything from this imaginary future developer if he's so intent on breaking your code. Comments/docs should do the trick. If he misses them, the hardware (or the code) will likely crash, and he will notice. Moreover, if he has a good reason to reuse your class (like connecting to some other hardware of the same kind), you do not want to block him with nasty hidden tricks.
That said, for your example I would consider using an atomic<bool> to avoid any concurrency issue, and using the compare_exchange member function instead of if(!running) running = true:
static std::atomic<bool> running;
...
bool expected = false;
if(running.compare_exchange_strong(expected, true)) {
...

c++ sharing single class object between multiple processes

I have a relatively complex class in c++. It works perfectly when used within one process. However, now I want multiple processes to be able to share one object instance of this class. One process (Master) will access read and write functions of the object, while the other 2 processes (Slave) will only use the read functions. I want to modify the class as little as possible. So far I have considered singletons and shared memory, but neither seems ideal or straightforward. This is a research application that will only ever be used by me on Linux. What is the simplest possible solution?
Thanks so much!
Edit: To be absolutely clear, the asker is interested in sharing an object across multiple processes, not threads.
Inter-process communication is never simple. You may want to use a library for IPC/RPC and expose only the functions the slaves use to read data, not the entire class.
I can't give you any good recommendations because I have never found a library that made it simple and I don't have much experience with it.
One idea might be to use sockets or a socket library to share the data amongst the processes. A library which seems to be very handy for that is ØMQ. You can also try Boost.Asio, which is a bit more complex.
You can find a small example for ØMQ here.
I think the simplest coding solution would be a singleton with a global(or class instance) mutex, though the singleton part of that is optional. I personally think singletons to be an overused idiom. Up to you whether you think that is good design in this case or not. Really, adding the global mutex is all you need.
For the interprocess portion, I recommend boost.
http://www.boost.org/doc/libs/1_36_0/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.semaphores.semaphores_interprocess_semaphores
One option is to have both the master and slave processes create instances of the same object. Because the master process will be the only one to modify this 'shared' object, it only has to alert the slave processes to any changes it makes. To do this, you could set up a messaging system which the master process uses to communicate changes to the shared object to the slave processes. The drawback here is that a slave process may reference the shared object while it is out of sync with the master, but this is a common problem in replication. Also, you could use an RPC overlay to make the master/slave applications easier to develop/maintain.
I'll try to provide a very high-level example of this design below. Forgive me for utilizing real code and pseudocode side by side; I didn't want to fully code this, but also didn't want it to be made up of just comments :)
Here's our shared object that gets defined in both master/slave code
struct sharedobj {
    int var1;
};
Here's an example of the master process updating the shared object and propagating changes
int counter = 0;
sharedobj mysharedobj;
while (true) {
    // update the local version first
    mysharedobj.var1 = counter++;
    // then call some function to push these changes to the slaves
    updateSharedObj(mysharedobj);
}
Here's the function that propagates the master's changes to the slaves;
void updateSharedObj(sharedobj obj) {
    // set up some sort of message that encompasses these changes
    string msg = "var1:" + the string value of obj.var1;
    // go through the set of slave processes
    // if we've just done basic messaging, maybe we have a socket open for each process
    while (socketit != socketlist.end()) {
        // send message to slave
        send(*socketit, msg.c_str(), msg.length(), 0);
        ++socketit;
    }
}
And here's the slave code that receives these changes and updates its 'shared' object, most likely running in another thread so the slave can run without having to stop and check for object updates.
while (true) {
    // wait on the socket for updates
    read(mysock, msgbuf, msgbufsize, 0);
    // parse the msgbuf
    int newv1 = the int value of var1 from the msg;
    // if we're in another thread we need to synchronize access to the object between
    // update thread and slave
    pthread_mutex_lock(&objlock);
    // update the value of var1
    sharedobj.var1 = newv1;
    // and release the lock
    pthread_mutex_unlock(&objlock);
}
See "shared memory" in Boost Interprocess: http://www.boost.org/doc/libs/1_63_0/doc/html/interprocess/sharedmemorybetweenprocesses.html

C++ objects in multithreading

I would like to ask about thread safety in C++ (using POSIX threads with a C++ wrapper, for example) when a single instance/object of a class is shared between different threads. For example, the member methods of this single object of class A would be called within different threads. What should/can I do about thread safety?
class A {
private:
    int n;
public:
    void increment()
    {
        ++n;
    }
    void decrement()
    {
        --n;
    }
};
Should I protect the class member n within the increment/decrement methods with a lock or something else? Do static (class) members also need such a lock?
If a member is immutable, I do not have to worry about it, right?
Anything that I cannot foresee now?
In addition to the scenario with a single object shared between threads, what about multiple objects with multiple threads, where each thread owns an instance of a class? Anything special other than static (class variable) members?
These are the things on my mind, but I believe this is a large topic, and I would be glad if you have good resources or can point me to previous discussions about it.
Regards
Suggestion: don't try to do it by hand. Use a good multithreading library like the one from Boost: http://www.boost.org/doc/libs/1_47_0/doc/html/thread.html
This article from Intel will give you a good overview: http://software.intel.com/en-us/articles/multiple-approaches-to-multithreaded-applications/
It's a really large topic, and it's probably impossible to cover it completely in this thread.
The golden rule is "You can't read while somebody else is writing."
So if you have an object that shares a variable, you have to put a lock in the functions that access the shared variable.
There are very few cases when this is not true.
The first case is integer numbers, where you can use the atomic functions as shown by c-smile; in this case the CPU will use a hardware lock on the cache, so other cores can't modify the variable.
The second case is lock-free queues: special queues that use compare-and-exchange functions to ensure the atomicity of the operations.
All the other cases MUST be locked...
The first approach is to lock everything. This can lead to a lot of problems when more objects are involved (ObjA tries to read from ObjB, but ObjB is using the variable and is also waiting for ObjC, which waits on ObjA), where circular locks can lead to indefinite waiting (deadlock).
A better approach is to minimize the points where threads share variables.
For example, if you have an array of data and you want to parallelize the computation on it, you can launch two threads: thread one works only on the even indices while thread two works on the odd ones. The threads are working on the same set of data, but as long as the data don't overlap you don't have to use locks. (This is called data parallelism.)
The other approach is to organize the application as a set of "works" (functions that run on a thread and produce a result) and make the works communicate only with messages. You only have to implement a thread-safe message system and a work scheduler and you are done. Or you can use a library like Intel TBB.
Neither approach solves the deadlock problem, but both let you isolate the problem and find bugs more easily. Bugs in multithreaded code are really hard to debug and are sometimes also difficult to find.
So, if you are studying, I suggest starting with the theory and with pthreads; then, when you have learned the basics, move to a more user-friendly library like Boost or, if you are using GCC 4.6 as your compiler, the C++0x std::thread.
Yes, you should protect the functions with a lock if they are used in a multithreading environment. You can use the Boost libraries.
And yes, immutable members should not be a concern, since such a member cannot be changed once it has been initialized.
Concerning "multiple objects with multiple threads": that depends very much on what you want to do. In some cases you could use a thread pool, which is a mechanism that has a defined number of threads standing by for jobs to come in. But there's no thread concurrency there, since each thread does one job.
You have to protect counters. No other options.
On Windows you can do this using these functions:
#if defined(PLATFORM_WIN32_GNU)
typedef long counter_t;
inline long _inc(counter_t& v) { return InterlockedIncrement(&v); }
inline long _dec(counter_t& v) { return InterlockedDecrement(&v); }
inline long _set(counter_t& v, long nv) { return InterlockedExchange(&v, nv); }
#elif defined(WINDOWS) && !defined(_WIN32_WCE) // let's try to keep things for wince as simple as we can
typedef volatile long counter_t;
inline long _inc(counter_t& v) { return InterlockedIncrement((LPLONG)&v); }
inline long _dec(counter_t& v) { return InterlockedDecrement((LPLONG)&v); }
inline long _set(counter_t& v, long nv) { return InterlockedExchange((LPLONG)&v, nv); }
#endif

proper way to use lock file(s) as locks between multiple processes

I have a situation where two different processes (mine in C++, the other done by other people in Java) are a writer and a reader of some shared data file. So I was trying to avoid race conditions by writing a class like this (EDIT: this code is broken; it was just an example):
class ReadStatus
{
    bool canRead;
public:
    ReadStatus()
    {
        if (filesystem::exists(noReadFileName))
        {
            canRead = false;
            return;
        }
        ofstream noWriteFile;
        noWriteFile.open(noWriteFileName.c_str());
        if ( ! noWriteFile.is_open())
        {
            canRead = false;
            return;
        }
        boost::this_thread::sleep(boost::posix_time::seconds(1));
        if (filesystem::exists(noReadFileName))
        {
            filesystem::remove(noWriteFileName);
            canRead = false;
            return;
        }
        canRead = true;
    }
    ~ReadStatus()
    {
        if (filesystem::exists(noWriteFileName))
            filesystem::remove(noWriteFileName);
    }
    inline bool OKToRead()
    {
        return canRead;
    }
};
usage:
ReadStatus readStatus; // RAII FTW
if ( ! readStatus.OKToRead())
    return;
This is for one program, of course; the other will have an analogous class.
The idea is:
1. Check if the other program has created its "I'm the owner" file; if it has, break, else go to 2.
2. Create my "I'm the owner" file, check again if the other program has created its own; if it has, delete my file and break, else go to 3.
3. Do my reading, then delete my "I'm the owner" file.
Please note that rare occurrences where neither of us reads or writes are OK, but I still see a small chance of a race condition: theoretically the other program can check for the existence of my lock file, see that there isn't one, then I create mine, the other program creates its own, but before the FS creates its file I check again, it isn't there, and disaster occurs. This is why I added the one-second delay, but as a CS nerd I find it unnerving to have code like that running.
Of course I don't expect anybody here to write me a solution, but I would be happy if someone knows a link to reliable code that I can use.
P.S. It has to be files, because I'm not writing the entire project and that is how it has been arranged to be done.
P.P.S.: Access to the data file isn't reader, writer, reader, writer... It can be reader, reader, writer, writer, writer, reader, writer...
P.P.P.S.: The other process is not written in C++ :(, so Boost is out of the question.
On Unices the traditional way of doing pure filesystem based locking is to use dedicated lockfiles with mkdir() and rmdir(), which can be created and removed atomically via single system calls. You avoid races by never explicitly testing for the existence of the lock --- instead you always try to take the lock. So:
lock:
while mkdir(lockfile) fails
sleep
unlock:
rmdir(lockfile)
I believe this even works over NFS (which usually sucks for this sort of thing).
However, you probably also want to look into proper file locking, which is loads better; I use F_SETLK/F_UNLCK fcntl locks for this on Linux (note that these are different from flock locks, despite the name of the structure). This allows you to properly block until the lock is released. These locks also get automatically released if the app dies, which is usually a good thing. Plus, these will let you lock your shared file directly without needing a separate lockfile. This, too, works on NFS.
Windows has very similar file locking functions, and it also has easy to use global named semaphores that are very convenient for synchronisation between processes.
As far as I've seen, you can't reliably use plain file-existence checks as locks for multiple processes. The problem is that while you create the file in one process, you might get interrupted and the OS switches to the other process, because the I/O is taking so long. The same holds true for deletion of the lock file.
If you can, take a look at Boost.Interprocess, under the synchronization mechanisms part.
While I'm generally against making API calls that can throw from a constructor/destructor (see the docs on boost::filesystem::remove), or making throwing calls without a catch block in general, that's not really what you were asking about.
You could check out the Overlapped IO library if this is for windows. Otherwise have you considered using shared memory between the processes instead?
Edit: Just saw that the other process is Java. You may still be able to create a named mutex that can be shared between processes and use that to create locks around the file IO bits, so they have to take turns writing. Sorry, I don't know Java, so I have no idea whether that's more feasible than shared memory.