Process Isolation in Rust

I want to implement a server for a protocol. For security reasons the parser should be isolated in its own thread from the rest of the program and only a bidirectional channel should be held open for communication.
The parser thread should have no way to modify the other thread's memory, and it should lose the ability to make syscalls (using seccomp).
Is there an easy way to achieve this behavior for the parser thread in Rust?

If you're concerned about issues beyond what Rust's strong safety and type system can protect against (e.g. bugs in those, or in third-party libraries etc.) then you really want separate processes rather than just threads; even if you use seccomp on an untrusted thread, at the OS/CPU level it still has full write access to other threads' memory in the same process.
Either way you'll need to write code designed to run in seccomp carefully (for example allocating extra heap memory might not work) - but the good news is that Rust is a great language for having that control!
There's a reasonably useful discussion on seccomp in Rust which has some suggestions.
The best bet looks like gaol from the Servo project, which is a more general process sandbox (including seccomp). There are also some other lower level seccomp wrappers like this one.
I haven't tried any of this yet, so I'd be interested to hear any other viewpoints/experience.
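For what it's worth, the bare-bones OS mechanics of the separate-process route are language-agnostic; the sketch below is in C++ only because the relevant syscalls are a C API, and everything in it (the names, the strict-mode choice, the lack of error handling) is illustrative rather than a tested recipe. The parent forks the parser into a child process, keeps one end of a socketpair as the bidirectional channel, and the child enters seccomp strict mode, after which only read, write, exit and sigreturn are permitted.

    // Hypothetical, untested sketch: isolate an untrusted parser in a child
    // process that can only use the inherited socket (Linux, error handling omitted).
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/socket.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);     // the bidirectional channel

        pid_t pid = fork();
        if (pid == 0) {                              // child: the untrusted parser
            close(sv[0]);
            // From here on, only read(2), write(2), _exit(2) and sigreturn(2)
            // are allowed; any other syscall kills the process with SIGKILL.
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT);

            char buf[256];                           // no heap allocation after seccomp
            ssize_t n = read(sv[1], buf, sizeof buf);
            // ... parse buf in place here ...
            if (n > 0) write(sv[1], buf, n);         // hand the result back
            syscall(SYS_exit, 0);                    // glibc _exit() uses exit_group, which strict mode forbids
        }

        close(sv[1]);                                // parent: the trusted side
        write(sv[0], "raw protocol bytes", 18);
        char reply[256];
        read(sv[0], reply, sizeof reply);
        waitpid(pid, nullptr, 0);
    }

A real sandbox (gaol, or a seccomp-bpf filter) would allow a more useful syscall whitelist than strict mode, but the shape of the program stays the same.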

Related

Exchange and store values between different processes

My application needs to store and exchange a value (just one) between different processes. The value also needs to survive a restart of the application (but not a system reboot).
I can write and read this value in a file and synchronize the access. The file could live in a ramfs. That solution would work, but I have the feeling I'm using the wrong method.
Is there a better lightweight solution for this? Am I missing a straightforward approach?
I was thinking about named pipes (mkfifo), but don't they always require an active writer and reader?
You are asking about inter-process communication. There are a number of methods for communicating between processes:
Low-level shared memory
Low-level named pipes
Low-level sockets
Remote procedure call mechanisms (DCOM, CORBA, ONC RPC, etc.)
REST APIs
Distributed system frameworks
Which of these will work best for you depends on the complexity of the messages exchanged between processes, the complexity of the overall system, the need for portability, etc.
Shared memory is a very low-level approach and can feel like the easiest solution "because it's just bytes in memory addressed by a pointer". However, its inherently low-level nature also makes it tedious to use. There is no universally agreed upon C++ interface to these facilities, so you are left with low-level C-style APIs for accessing and configuring shared memory between processes. There are differences between platforms (POSIX does it one way; Windows does it another).
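For instance, the POSIX flavour looks roughly like this (a sketch only; the name "/my_value" and the single-int payload are made up, error handling is omitted, and synchronization between processes is entirely up to you):

    // Raw POSIX shared memory: one process creates and writes, another maps and reads.
    // Link with -lrt on older glibc versions.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        int fd = shm_open("/my_value", O_CREAT | O_RDWR, 0600);   // named shared object
        ftruncate(fd, sizeof(int));                               // size it
        int* value = static_cast<int*>(
            mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        *value = 42;   // visible to any process that shm_open()s and mmap()s "/my_value"

        munmap(value, sizeof(int));
        close(fd);
        // shm_unlink("/my_value");   // remove the object when it is no longer needed
    }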
Boost.Interprocess gives you a portable way to access shared memory mechanisms and aims to make using them simpler.
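As a rough comparison, the same "one shared value" idea with Boost.Interprocess might look like this (the segment and object names are illustrative; normally the writer and reader parts would live in separate processes):

    #include <boost/interprocess/managed_shared_memory.hpp>
    #include <boost/interprocess/shared_memory_object.hpp>
    #include <iostream>

    namespace bip = boost::interprocess;

    int main() {
        bip::shared_memory_object::remove("MySegment");            // clean up leftovers
        bip::managed_shared_memory writer(bip::create_only, "MySegment", 65536);
        writer.construct<int>("TheValue")(42);                     // named object in the segment

        // In the other process: open the same segment and look the object up by name.
        bip::managed_shared_memory reader(bip::open_only, "MySegment");
        auto found = reader.find<int>("TheValue");                 // pair of pointer and count
        if (found.first)
            std::cout << "shared value = " << *found.first << "\n";

        bip::shared_memory_object::remove("MySegment");
    }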

Thread per connection vs Reactor pattern (with a thread pool)?

I want to write a simple multiplayer game as part of my C++ learning project.
So I thought, since I am at it, I would like to do it properly, as opposed to just getting-it-done.
If I understood correctly: Apache uses a Thread-per-connection architecture, while nginx uses an event-loop and then dedicates a worker [x] for the incoming connection. I guess nginx is wiser, since it supports a higher concurrency level. Right?
I have also come across this clever analogy, but I am not sure if it can be applied to my situation. The analogy also seems very idealistic: I have rarely seen my computer run at 100% CPU (even with an umptillion Chrome tabs open, Photoshop, and what-not running simultaneously).
Also, I have come across a SO post (somehow it vanished from my history) where a user asked how many threads they should use, and one of the answers was that it's perfectly acceptable to have around 700, even up to 10,000 threads. This question was related to JVM, though.
So, let's assume a fictional user base of around 5,000 users. Which approach would be the "most concurrent" one?
A reactor pattern running everything in a single thread.
A reactor pattern with a thread pool (approximately how big do you suggest the thread pool should be?).
Creating a thread per connection and then destroying the thread when the connection closes.
I admit option 2 sounds like the best solution to me, but I am very green in all of this, so I might be a bit naive and missing some obvious flaw. Also, it sounds like it could be fairly difficult to implement.
PS: I am considering using POCO C++ Libraries. Suggesting any alternative libraries (like boost) is fine with me. However, many say POCO's library is very clean and easy to understand. So, I would preferably use that one, so I can learn about the hows of what I'm using.
Reactive Applications certainly scale better, when they are written correctly. This means
Never blocking in a reactive thread:
Any blocking will seriously degrade the performance of your server. You typically use a small number of reactive threads, so blocking can also quickly cause deadlock.
No mutexes, since these can block, so no shared mutable state. If you require shared state you will have to wrap it in an actor or similar so that only one thread has access to it.
All work in the reactive threads should be CPU bound.
All IO has to be asynchronous, or be performed in a different thread pool with the results fed back into the reactor (there is a sketch of this below).
This means using either futures or callbacks to process replies; this style of code can quickly become unmaintainable if you are not used to it and disciplined.
All work in the reactive threads should be small
To maintain responsiveness of the server all tasks in the reactor must be small (bounded by time)
On an 8-core machine you cannot allow 8 long tasks to arrive at the same time, because no other work will start until they are complete.
If a task could take a long time it must be broken up (cooperative multitasking).
Tasks in reactive applications are scheduled by the application not the operating system, that is why they can be faster and use less memory. When you write a Reactive application you are saying that you know the problem domain so well that you can organise and schedule this type of work better than the operating system can schedule threads doing the same work in a blocking fashion.
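To make the "never block the reactor" rule concrete, here is a rough sketch using Boost.Asio (it assumes Boost 1.66 or later for asio::thread_pool, and blocking_database_lookup is a made-up stand-in for any blocking call):

    #include <boost/asio.hpp>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    namespace asio = boost::asio;

    std::string blocking_database_lookup(int id) {       // stands in for any blocking call
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        return "row " + std::to_string(id);
    }

    int main() {
        asio::io_context reactor;                // single-threaded event loop
        asio::thread_pool blocking_pool(4);      // separate pool for blocking work

        auto work = asio::make_work_guard(reactor);

        // A small, CPU-bound task runs directly on the reactor...
        asio::post(reactor, [&] {
            // ...but the blocking part is shipped off to the other pool,
            asio::post(blocking_pool, [&] {
                std::string row = blocking_database_lookup(42);
                // ...and the result is fed back into the reactor as another small task.
                asio::post(reactor, [&, row] {
                    std::cout << "got " << row << " on the reactor thread\n";
                    work.reset();                // let run() return for this demo
                });
            });
        });

        reactor.run();                           // the one reactive thread
        blocking_pool.join();
    }

The reactor thread only ever runs small, non-blocking tasks; the slow call lives in the separate pool, and its result re-enters the reactor as just another small task.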
I am a big fan of reactive architectures but they come with costs. I am not sure I would write my first C++ application as a reactive one; I normally try to learn one thing at a time.
If you decide to use a reactive architecture use a good framework that will help you design and structure your code or you will end up with spaghetti. Things to look for are:
What is the unit of work?
How easy is it to add new work? Can it only come in from an external event (e.g. a network request)?
How easy is it to break work up into smaller chunks?
How easy is it to process the results of this work?
How easy is it to move blocking code to another thread pool and still process the results?
I cannot recommend a C++ library for this, I now do my server development in Scala and Akka which provide all of this with an excellent composable futures library to keep the code clean.
Best of luck learning C++ and with whichever choice you make.
Option 2 will most efficiently occupy your hardware. Here is the classic article, ten years old but still good.
http://www.kegel.com/c10k.html
The best library combination these days for structuring an application with concurrency and asynchronous waiting is Boost Thread plus Boost ASIO. You could also try the C++11 std thread library and std mutex (but Boost ASIO is better than mutexes in a lot of cases: just always call back to the same thread and you don't need protected regions). Stay away from std future, because it's broken:
http://bartoszmilewski.com/2009/03/03/broken-promises-c0x-futures/
The optimal number of threads in the thread pool is one thread per CPU core. 8 cores -> 8 threads. Plus maybe a few extra, if you think it's possible that your threadpool threads might call blocking operations sometimes.
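A rough sketch of such a pool with Boost ASIO (recent versions spell the event loop io_context; older ones call it io_service), where every worker thread services the same event loop:

    #include <boost/asio.hpp>
    #include <algorithm>
    #include <thread>
    #include <vector>

    int main() {
        boost::asio::io_context io;
        auto work = boost::asio::make_work_guard(io);     // keep run() alive while idle

        // One worker per core, as suggested above; add a couple more if handlers
        // may occasionally block.
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back([&io] { io.run(); });       // all threads share one loop

        // ... set up async_accept / async_read handlers on io here ...

        work.reset();                                     // let run() return once work drains
        for (auto& t : pool) t.join();
    }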
FWIW, Poco has supported option 2 (ParallelReactor) since version 1.5.1.
I think that option 2 is the best one. As for tuning of the pool size, I think the pool should be adaptive. It should be able to spawn more threads (with some high hard limit) and remove excessive threads in times of low activity.
As the analogy you linked to (and its comments) suggests, this is somewhat application dependent. Now, what you are building here is a game server. Let's analyze that.
Game servers (generally) do a lot of I/O and relatively few calculations, so they are far from 100% CPU applications.
On the other hand they also usually change values in some database (a "game world" model). All players create reads and writes to this database, which is exactly the intersection problem in the analogy.
So while you may gain some from handling the I/O in separate threads, you will also lose from having separate threads accessing the same database and waiting for its locks.
So either option 1 or option 2 is acceptable in your situation. For scalability reasons I would not recommend option 3.

Is it possible to make multiple processes hibernate (core dump?)?

I have a piece of software (C++) that runs a few processes (each process is a major system in itself).
The processes communicate with each other via XML-RPC or Boost.Asio.
I want to be able to freeze or stop all processes at a given moment and be able to bring the system (all processes) back up later in the same state as before hibernating.
How can I do that in C++?
Would it even be feasible, given that the processes communicate with each other?
The big picture is that you need to get the system to a stable consistent state, then persist that state in some re-creatable form.
You can in principle write such code, the degree of difficulty depends on your application. You will need to figure out things such as:
How the processes agree that they are in a consistent state. You may need to define some new "Get ready to hibernate" and "I'm ready" messages.
For each process, how to persist and recover its state (a minimal sketch of this follows the list). Depending upon the complexity of any live data structures, this may be quite tricky. On the other hand, if your processes are stateless then this could be really easy.
You'll need to devise a scheme for managing the sets of hibernated data, and how you determine a consistent set across all the processes.
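As a minimal sketch of the per-process persist/recover piece (the state here is a made-up two-field struct, and the coordination messages are only described in the comments):

    #include <fstream>
    #include <iostream>
    #include <string>

    // Stand-in for one process's live state; a real system has far more than this.
    struct ProcessState {
        long messages_handled = 0;
        std::string last_peer;

        void save(const std::string& path) const {    // persist in a re-creatable form
            std::ofstream out(path);
            out << messages_handled << '\n' << last_peer << '\n';
        }
        void restore(const std::string& path) {       // rebuild the state on wake-up
            std::ifstream in(path);
            in >> messages_handled;
            in.ignore();                              // skip the newline
            std::getline(in, last_peer);
        }
    };

    int main() {
        ProcessState s;
        s.messages_handled = 42;
        s.last_peer = "peer-B";

        // On "Get ready to hibernate": stop accepting new work, drain in-flight
        // messages, snapshot, then reply "I'm ready" to whoever coordinates.
        s.save("proc_A.snapshot");

        // On wake-up: every process restores its snapshot before the
        // communication channels are re-opened.
        ProcessState restored;
        restored.restore("proc_A.snapshot");
        std::cout << restored.messages_handled << " " << restored.last_peer << "\n";
    }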
I see this as a significant coding effort; the degree of difficulty will depend on the complexity of your application and the quality of its implementation. In a well-structured application, such major "replumbing" exercises often go surprisingly smoothly.
Unless you're an OS - no, it won't be possible.
What you need to do instead is to make sure that each process can do it for itself (i.e. write functionality that allows saving and restoring the state of each process), and also to accommodate inconsistencies in the communication (for example, require an ACK on messages, and resend if a state was saved without the ACK being received).
It's feasible if done right, but it's easier said than done, of course, and assumes you can actually change the processes.
Well, the other answers are fine. There is another rather "exotic" way which may solve this quickly, but it may be overkill or not suitable. But who knows? So, just in case...
I suggest running your program in a virtual machine (for example, Linux under VMware) and pausing/waking up this virtual machine at will.
If you are using an inter-process communication method which is not disrupted by this kind of operation, it may work and save you a lot of time.
Good luck.

What should I know about multithreading and when to use it, mainly in C++

I have never come across multithreading but I hear about it everywhere. What should I know about it, and when should I use it? I code mainly in C++.
Mostly, you will need to learn about the MT libraries of the OSes on which your application needs to run. Until and unless C++0x becomes a reality (which looks a long way off right now), there is no support for threads in the language proper or the standard library. I suggest you take a look at the POSIX pthreads library for *nix, and at Windows threads, to get started.
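To give a feel for the pthreads side, a minimal example looks something like this (compile with -pthread):

    #include <pthread.h>
    #include <cstdio>

    void* worker(void* arg) {
        int id = *static_cast<int*>(arg);
        std::printf("hello from thread %d\n", id);
        return nullptr;
    }

    int main() {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; ++i) {
            ids[i] = i;
            pthread_create(&threads[i], nullptr, worker, &ids[i]);  // start the thread
        }
        for (int i = 0; i < 4; ++i)
            pthread_join(threads[i], nullptr);                      // wait for it to finish
        return 0;
    }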
This is my opinion, but the biggest issue with multithreading is that it is difficult. I don't mean that from an experienced programmer's point of view, I mean it conceptually. There really are a lot of difficult concurrency problems that appear once you dive into parallel programming. This is well known, and there are many approaches taken to make concurrency easier for the application developer. Functional languages have become a lot more popular because of their lack of side effects and idempotency. Some vendors choose to hide the concurrency behind APIs (like Apple's Core Animation).
Multithreaded programs can see some huge gains in performance (both in user perception and in the actual amount of work done), but you do have to spend time to understand the interactions that your code and data structures make.
MSDN Multithreading for Rookies article is probably worth reading. Being from Microsoft, it's written in terms of what Microsoft OSes support(ed in 1993), but most of the basic ideas apply equally to other systems, with suitable renaming of functions and such.
That is a huge subject.
A few points...
With multi-core, the importance of multi-threading is now huge. If you aren't multithreading, you aren't getting the full performance capability of the machine.
Multi-threading is hard. Communicating and synchronization between threads is tricky to get right. Problems are often intermittent, hard to diagnose, and if the design isn't right for multi-threading, hard to fix.
Multi-threading is currently mostly non-portable and platform specific.
There are portable libraries with wrappers around threading APIs. Boost is one. wxWidgets (mainly a GUI library) is another. It can be done reasonably portably, but you won't have all the options you get from platform-specific APIs.
I've got an introduction to multithreading that you might find useful.
In this article there isn't a single line of code and it's not aimed at teaching the intricacies of multithreaded programming in any given programming language, but to give a short introduction, focusing primarily on how and especially why and when multithreaded programming would be useful.
Here's a link to a good tutorial on POSIX threads programming (with diagrams) to get you started. While this tutorial is pthread specific, many of the concepts transfer to other systems.
To understand more about when to use threads, it helps to have a basic understanding of parallel programming. Here's a link to a tutorial on the very basics of parallel computing intended for those who are just becoming acquainted with the subject.
The other replies covered the how part, I'll briefly mention when to use multithreading.
The main alternative to multithreading is using a timer. Consider, for example, that you need to update a little label on your form with the existence of a file. If the file exists, you need to draw a special icon or something. Now if you use a timer with a low timeout, you can achieve basically the same thing: a function that polls very frequently to see if the file exists and updates your UI. No extra hassle.
But your function is doing a lot of unnecessary work, isn't it? The OS provides a "hey, this file has been created" primitive that puts your thread to sleep until your file is ready. Obviously you can't use this from the UI thread or your entire application would freeze, so instead you spawn a new thread and set it to wait on the file-creation event.
Now your application is using as little CPU as possible, because threads can wait on events (be it with mutexes or events). Say your file is ready, however. You can't update your UI from different threads, because all hell would break loose if two threads try to change the same bit of memory at the same time. In fact this is so bad that Windows flat out rejects your attempts to do it at all.
So you could introduce a synchronization mechanism of sorts to talk to the UI serially, so the threads don't step on each other's toes, but you can't really code the main-thread side of that, because the UI loop is hidden deep inside Windows.
The other alternative is to use another way to communicate between threads. In this case, you might use PostMessage to post a message to the main UI loop saying that the file has been found, and let it do its job.
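A rough Win32 sketch of that arrangement, where g_mainWindow, the watched directory and the WM_APP_FILE_FOUND message are placeholders for things the rest of the application would already have:

    #include <windows.h>
    #include <process.h>

    HWND g_mainWindow = nullptr;                    // assumed to be set when the window is created
    const UINT WM_APP_FILE_FOUND = WM_APP + 1;      // our private notification message

    unsigned __stdcall wait_for_file(void*) {
        // Sleep until something is created/deleted/renamed in the watched directory.
        HANDLE change = FindFirstChangeNotificationA(
            "C:\\watched", FALSE, FILE_NOTIFY_CHANGE_FILE_NAME);
        if (change != INVALID_HANDLE_VALUE) {
            WaitForSingleObject(change, INFINITE);  // the thread uses no CPU while waiting
            // Never touch the UI from here; hand the event to the UI thread instead.
            PostMessage(g_mainWindow, WM_APP_FILE_FOUND, 0, 0);
            FindCloseChangeNotification(change);
        }
        return 0;
    }

    void start_watcher() {
        // The UI thread just fires off the worker and keeps pumping messages;
        // its WndProc handles WM_APP_FILE_FOUND and updates the label there.
        _beginthreadex(nullptr, 0, wait_for_file, nullptr, 0, nullptr);
    }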
Now if your work can't be waited upon and can't be split nicely into little bits (for use in a short-timeout timer), all you have left is another thread and all the synchronization issues that arise from it.
It might be worth it. Or it might bite you in the ass after days and days, potentially weeks, of debugging the odd race condition you missed. It might pay off to spend a long time first to try to split it up into little bits for use with a timer. Even if you can't, the few cases where you can will outweigh the time cost.
You should know that it's hard. Some people think it's impossibly hard, and that there's no practical way to verify that a program is thread-safe. Dr. Hipp, the author of SQLite, states that threads are evil. This article covers the problems with threads in detail.
The Chrome browser uses processes instead of threads, and tools like Stackless Python avoid hardware-supported threads in favor of interpreter-supported "micro-threads". Even things like web servers, where you'd think threading would be a perfect fit, are moving towards event-driven architectures.
I myself wouldn't say it's impossible: many people have tried and succeeded. But there's no doubt that writing production-quality multi-threaded code is really hard. Successful multi-threaded applications tend to use only a few, predetermined threads with just a few carefully analyzed points of communication. For example, a game with just two threads, physics and rendering, or a GUI app with a UI thread and a background thread, and nothing else. A program that's spawning and joining threads throughout the code base will certainly have many impossible-to-find intermittent bugs.
It's particularly hard in C++, for two reasons:
the current version of the standard doesn't mention threads at all. All threading libraries are platform- and implementation-specific.
The scope of what's considered an atomic operation is rather narrow compared to a language like Java.
Cross-platform libraries like Boost Threads mitigate this somewhat. The future C++0x standard will introduce some threading support. Boost also has good interprocess communication support that you could use to avoid threads altogether.
If you know nothing else about threading other than that it's hard and should be treated with respect, then you know more than 99% of programmers.
If, after all that, you're still interested in starting down the long hard road towards being able to write a multi-threaded C++ program that won't segfault at random, then I recommend starting with Boost threads. They're well documented, high level, and work cross-platform. The concepts (mutexes, locks, futures) are the same few key concepts present in all threading libraries.
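A minimal Boost threads example, just to show the flavour of those concepts (link against boost_thread; the counter and loop bound are arbitrary):

    // Two threads incrementing a shared counter behind a mutex.
    #include <boost/thread.hpp>
    #include <iostream>

    boost::mutex counter_mutex;
    long counter = 0;

    void work() {
        for (int i = 0; i < 100000; ++i) {
            boost::mutex::scoped_lock lock(counter_mutex);  // scoped lock, released on scope exit
            ++counter;
        }
    }

    int main() {
        boost::thread a(work);
        boost::thread b(work);
        a.join();
        b.join();
        std::cout << counter << "\n";   // always 200000 because of the mutex
    }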

Remote proxy with shared memory in C++

Suppose I have a daemon that is sharing its internal state with various applications via shared memory. Processes can send IPC messages to the daemon on a named pipe to perform various operations. In this scenario, I would like to create a C++ wrapper class for clients that acts as a kind of "remote proxy" to hide some of the gory details (synchronization, message passing, etc.) from clients and make it easier to isolate code for unit tests.
I have three questions:
Generally, is this a good idea/approach?
Do you have any tips or gotchas for synchronization in this setup, or is it enough to use a standard reader-writer mutex setup?
Are there any frameworks that I should consider?
The target in question is an embedded linux system with a 2.18 kernel, therefore there are limitations on memory and compiler features.
Herb Sutter had an article, Sharing Is the Root of All Contention, that I broadly agree with; if you are using a shared-memory architecture, you are exposing yourself to quite a few potential threading problems.
A client/server model can make things drastically simpler, where clients write to the named server pipe, and the server writes back on a unique client pipe (or use sockets). It would also make unit testing simpler (since you don't have to worry about testing shared memory), could avoid mutexing, etc.
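A sketch of what such a client-side proxy could look like, with all names and the one-line wire format invented for illustration; the point is that client code and unit tests only ever see the abstract interface:

    #include <fstream>
    #include <string>
    #include <utility>

    // What client code and unit tests see.
    class DaemonClient {
    public:
        virtual std::string query_status(const std::string& key) = 0;
        virtual ~DaemonClient() = default;
    };

    // Real implementation: serialises the request, writes it to the daemon's
    // named pipe, reads the reply from a per-client pipe. Locking, framing and
    // timeouts would live here, hidden from callers.
    class PipeDaemonClient : public DaemonClient {
    public:
        PipeDaemonClient(std::string request_pipe, std::string reply_pipe)
            : request_pipe_(std::move(request_pipe)), reply_pipe_(std::move(reply_pipe)) {}

        std::string query_status(const std::string& key) override {
            {
                std::ofstream out(request_pipe_);   // e.g. a FIFO created with mkfifo
                out << "STATUS " << key << "\n";
            }
            std::ifstream in(reply_pipe_);          // the daemon writes the answer here
            std::string reply;
            std::getline(in, reply);
            return reply;
        }

    private:
        std::string request_pipe_, reply_pipe_;
    };

    // In unit tests, a FakeDaemonClient deriving from DaemonClient returns canned
    // replies, so no daemon, pipes or shared memory are needed.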
There's the Boost.Interprocess library, though I can't comment on its suitability for embedded systems.