Protected data structure elements passed by value across functions - C++

I have a network application where each user's userContext is kept on the server. Users continuously send different requests to the server, and the server processes them asynchronously using a thread pool and an event loop (epoll, for example).
Every user has their own ID, unique within the server. The server stores the user contexts in a map<int,userContext>. When the server receives a message from a user, it decodes it and determines the request type, the userID, and so on. Then the server processes the request and updates the stored userContext in the map as required (for example, state-machine updates to the userContext's current state).
My application has many such procedure calls (some of them nested), and I pass the user context by value. I cannot pass references to the map values, because then the mutex protection no longer applies.
Below is a sample implementation of the context store and two procedures.
class userContext {
public:
    int id;
    int value1;
    int value2;
    userContext() : id(-1), value1(-1), value2(-1) {}
    userContext(const userContext &context)
        : id(context.id), value1(context.value1), value2(context.value2) {}
};
class contextStore {
public:
    map<int,userContext> Map;
    std::mutex m;

    void update(userContext context, int id) {
        lock_guard<std::mutex> lock(m);
        Map[id] = context; // operator[] inserts a new entry or overwrites the existing one
    }

    userContext getUserContext(int id) {
        lock_guard<std::mutex> lock(m);
        // note: operator[] default-constructs an entry for an unknown id
        return Map[id];
    }

    void printContext(int id) {
        lock_guard<std::mutex> lock(m);
        auto it = Map.find(id);
        if (it != Map.end()) {
            cout << it->second.value1 << endl;
            cout << it->second.value2 << endl;
        }
    }
};
void procedureA(contextStore &store, userContext context) {
    // do some long processing using the provided context,
    // changing the local copy based on that processing
    context.value1 += 20; // example of a change
    int id = context.id;
    store.update(context, id);
}

void procedureB(contextStore &store, int id) {
    userContext context(store.getUserContext(id));
    // do some other long processing,
    // changing the local copy along the way
    context.value1 -= 10; // example of a change
    store.update(context, id);
}
Is there a better way to pass the userContext objects for the modifications a particular procedure needs, ideally one that avoids copying the object multiple times?
The second issue is that I am using one coarse lock that protects the entire map. While one user's request is being processed, no other user's request can proceed, because the lock covers the whole map. Is there a way to get finer-grained locking for this situation?
Thanks!
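Regarding the second question, a common approach to finer-grained locking is lock striping: split the single map into N shards, each with its own mutex, and choose the shard by hashing the user ID. The ShardedContextStore below is a hypothetical sketch, not a drop-in replacement; it also takes the context by const reference in update() to shave off one of the copies:

#include <array>
#include <cstddef>
#include <map>
#include <mutex>

// Hypothetical sketch: N independent shards, each with its own mutex, so
// requests for different users usually contend on different locks.
class ShardedContextStore {
    static const std::size_t kShards = 16; // tuning parameter

    struct Shard {
        std::mutex m;
        std::map<int, userContext> map;
    };
    std::array<Shard, kShards> shards;

    Shard& shardFor(int id) {
        return shards[static_cast<std::size_t>(id) % kShards];
    }

public:
    void update(const userContext &context, int id) {
        Shard &s = shardFor(id);
        std::lock_guard<std::mutex> lock(s.m);
        s.map[id] = context; // insert or overwrite under this shard's lock only
    }

    userContext getUserContext(int id) {
        Shard &s = shardFor(id);
        std::lock_guard<std::mutex> lock(s.m);
        return s.map[id]; // copy out; operator[] default-constructs a missing id
    }
};

To eliminate the remaining copies as well, each entry could instead hold a std::shared_ptr<userContext> together with a per-user mutex, so a procedure locks only the single user it touches for the duration of its update.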

Related

Modify data inside thread

I am writing unit tests for one of my classes. The function infFooLoop() that I want to test executes in an endless loop (the request to stop it comes externally). My problem is that I want to change the private variable state_ via the setter setState while the function executes asynchronously in a separate thread. A simplified code example is here:
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

enum class State : int
{
    Success = 1,
    Failure = 2
};

class Foo
{
private:
    State state_{State::Success};
    bool stop_flag_{false};
public:
    void setState(State state) { state_ = state; }
    void infFooLoop(bool &start)
    {
        while (start)
        {
            std::cout << "While loop executes\n";
            if (state_ == State::Failure)
            {
                stop_flag_ = true;
            }
            else if (stop_flag_)
            {
                std::cout << "Program stopped\n";
                // Report error; for this example, just break
                break;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }
};

int main()
{
    Foo obj;
    bool flag{true};
    std::future<void> future = std::async(std::launch::async, [&]() { obj.infFooLoop(flag); });
    // Try to change the data:
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    // Change the state to Failure so the stop_flag_ branch will be taken
    obj.setState(State::Failure);
    std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    // terminate loop
    flag = false;
    return 0;
}
state_ will be set from an external component, and stop_flag_ is used to handle a corner case here (in reality it won't break the loop, just report an error to a different component).
Since infFooLoop executes on a separate thread, I assume I can't just call another function from a different thread to change this data. stop_flag_ is an internal variable used only in this one function, so I want to keep it as simple as possible (and not introduce mutexes/atomics into the Foo class only for the tests). Can you give me some suggestions on what I should do here?
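One caveat worth stating plainly: state_ and the start flag are each written by one thread and read by another with no synchronization, which is a data race and therefore undefined behavior in C++, however the tests happen to behave. A minimal sketch of the usual fix with std::atomic, touching only those two variables (stop_flag_ is accessed only from the loop's own thread, so it can stay a plain bool):

#include <atomic>
#include <chrono>
#include <thread>

class Foo
{
private:
    std::atomic<State> state_{State::Success}; // written by setState, read by the loop
    bool stop_flag_{false};                    // only ever touched on the loop's thread
public:
    void setState(State state) { state_ = state; } // now safe to call from any thread
    void infFooLoop(std::atomic<bool> &start)      // the caller's flag must be atomic too
    {
        while (start)
        {
            if (state_ == State::Failure)
                stop_flag_ = true;
            else if (stop_flag_)
                break; // report the error in the real code
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
        }
    }
};

The call site then declares std::atomic<bool> flag{true}; and everything else stays the same. The overhead of the atomic loads is negligible for a loop that sleeps 200 ms per iteration, so this does not complicate the class for the sake of the tests.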

gRPC and etcd client

This question involves etcd-specific stuff, but I think it relates more to working with gRPC in general.
I'm trying to create an etcd Watch for some keys. Since the documentation is sparse, I had a look at the Nokia implementation.
It was easy to adapt the code to my needs, and I came up with a first version that worked just fine: creating a WatchCreateRequest and firing the callback on a key update. So far so good. Then I tried to add more than one key to watch. Fiasco! The ClientAsyncReaderWriter fails to Read/Write in that case. Now to the question.
If I have the following members in my class
Watch::Stub watchStub;
CompletionQueue completionQueue;
ClientContext context;
std::unique_ptr<ClientAsyncReaderWriter<WatchRequest, WatchResponse>> stream;
WatchResponse reply;
and I want to support multiple Watches added to my class, I guess I have to hold several of these variables per Watch and not as class members.
First of all, I guess WatchResponse reply should be one per Watch. I'm less sure about the stream: should I hold one per Watch? I'm almost sure that the context could be reused for all Watches, and 100% sure that the stub and completionQueue can be reused for all Watches.
So the question is: is my guesswork right? What about thread safety? I didn't find any documentation describing which objects are safe to use from multiple threads and where I have to synchronize access.
Any link to documentation (not this one) will be appreciated!
Here is my test code, from before I split the members into per-Watch properties
(no proper shutdown, I know):
using namespace grpc;

class Watcher
{
public:
    using Callback = std::function<void(const std::string&, const std::string&)>;

    Watcher(std::shared_ptr<Channel> channel) : watchStub(channel)
    {
        stream = watchStub.AsyncWatch(&context, &completionQueue, (void*) "create");
        eventPoller = std::thread([this]() { WaitForEvent(); });
    }

    void AddWatch(const std::string& key, Callback callback)
    {
        AddWatch(key, callback, false);
    }

    void AddWatches(const std::string& key, Callback callback)
    {
        AddWatch(key, callback, true);
    }

private:
    void AddWatch(const std::string& key, Callback callback, bool isRecursive)
    {
        auto insertionResult = callbacks.emplace(key, callback);
        if (!insertionResult.second) {
            throw std::runtime_error("Event handle already exists.");
        }
        WatchRequest watch_req;
        WatchCreateRequest watch_create_req;
        watch_create_req.set_key(key);
        if (isRecursive) {
            watch_create_req.set_range_end(key + "\xFF");
        }
        watch_req.mutable_create_request()->CopyFrom(watch_create_req);
        stream->Write(watch_req, (void*) insertionResult.first->first.c_str());
        stream->Read(&reply, (void*) insertionResult.first->first.c_str());
    }

    void WaitForEvent()
    {
        void* got_tag;
        bool ok = false;
        while (completionQueue.Next(&got_tag, &ok)) {
            if (ok == false) {
                break;
            }
            if (got_tag == (void*) "writes done") {
                // Signal shutdown
            }
            else if (got_tag == (void*) "create") {
            }
            else if (got_tag == (void*) "write") {
            }
            else {
                auto tag = std::string(reinterpret_cast<char*>(got_tag));
                auto findIt = callbacks.find(tag);
                if (findIt == callbacks.end()) {
                    throw std::runtime_error("Key \"" + tag + "\" not found");
                }
                if (reply.events_size()) {
                    ParseResponse(findIt->second);
                }
                stream->Read(&reply, got_tag);
            }
        }
    }

    void ParseResponse(Callback& callback)
    {
        for (int i = 0; i < reply.events_size(); ++i) {
            auto event = reply.events(i);
            callback(event.kv().key(), event.kv().value());
        }
    }

    Watch::Stub watchStub;
    CompletionQueue completionQueue;
    ClientContext context;
    std::unique_ptr<ClientAsyncReaderWriter<WatchRequest, WatchResponse>> stream;
    WatchResponse reply;
    std::unordered_map<std::string, Callback> callbacks;
    std::thread eventPoller;
};
I'm sorry, but I'm not very sure about the proper Watch design here; it's not clear to me whether you want to create a separate gRPC call for each Watch.
In any case, each gRPC call will have its own ClientContext and ClientAsyncReaderWriter, but the stub and the CompletionQueue are not per-call objects.
As far as I know, there is no central place to find the thread-safe classes. You may want to read the API documentation to set the correct expectations.
When I wrote the async server load reporting service, the only place I added synchronization myself was around the CompletionQueue, so that I wouldn't enqueue new tags into the cq after it was shut down.
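Concretely, following that answer, one way to restructure the Watcher is to keep the stub and the CompletionQueue as shared class members and bundle everything per-call into one struct per Watch. The WatchCall struct below is an illustrative sketch, not an etcd or gRPC API:

// Illustrative sketch: each Watch call owns its own context, stream and
// reply buffer; the stub and completion queue stay shared in the Watcher.
struct WatchCall {
    grpc::ClientContext context;
    std::unique_ptr<grpc::ClientAsyncReaderWriter<WatchRequest, WatchResponse>> stream;
    WatchResponse reply;
    Callback callback; // Watcher::Callback from the question's code
};

// The Watcher would then hold, for example:
//   std::mutex callsMutex; // guards the map against the poller thread
//   std::unordered_map<std::string, std::unique_ptr<WatchCall>> calls;
// and pass the WatchCall* as the completion-queue tag, so WaitForEvent can
// recover which call a finished Read or Write belongs to without comparing
// string-literal addresses.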

Is it better to sort on the server and distribute to clients, or to send unsorted data and let the clients sort it?

This is an online game, and I was paid to implement a game event: a player-versus-player game system.
In my design I have a server-side class (named PvpEventManager) that acts as the event manager. It processes a player killing another, a player joining or leaving the event (whether by choice or by disconnection), and so on, and has many other functions.
This class also holds a container with the player information for the duration of the event (named vPlayerInfo), used for all kinds of processing. When a player kills someone, his kill count must be increased, and the victim's death count too, quite obviously. The clients also display a scoreboard, and since it's the server's job to process a kill and tell all the other clients connected to the event about it, the container gets updated.
The container must be sorted by the kill struct member in descending order so that the scoreboard can be rendered properly at the client.
What would be better?
To sort the container on the server, increasing the server's work, and send the already-sorted, ready-to-render container to all clients.
To send the container unsorted and let every connected game client sort it itself upon receipt.
Note that the server receives and sends many thousands of packets every tick and already does a LOT of other processing.
The following code shows how the container is currently sorted, taken from part of the actual code:
#include <iostream>
#include <array>
#include <vector>
#include <map>
#include <algorithm>

struct PlayerScoreBoardInfo
{
    std::string name;
    unsigned int kill, death, suicide;
    unsigned int scorestreak;

    PlayerScoreBoardInfo()
    {
        kill = death = suicide = scorestreak = 0;
        name = "";
    }

    explicit PlayerScoreBoardInfo( std::string strname, unsigned int nkill, unsigned int ndeath,
                                   unsigned int nsuicide, unsigned int nscorestreak ) :
        name(strname), kill(nkill), death(ndeath), suicide(nsuicide), scorestreak(nscorestreak)
    {
    }
};

class GameArenaManager
{
private:
    GameArenaManager() {}
public:
    std::map<u_long, PlayerScoreBoardInfo> m_Infos;

    static GameArenaManager& GetInstance()
    {
        static GameArenaManager InstanceObj;
        return InstanceObj;
    }
};

template <typename T1, typename T2>
inline void SortVecFromMap( std::map<T1,T2>& maptodosort, std::vector<T2>& vectosort )
{
    // Collect all kill counts, sort them descending, then scan the map
    // once per value to emit the players in that order.
    std::vector<T1> feedvector;
    feedvector.reserve( maptodosort.size() );
    for( const auto& it : maptodosort )
        feedvector.push_back( it.second.kill );
    std::sort( feedvector.begin(), feedvector.end(), std::greater<T1>() );
    for( const auto& itV : feedvector ) {
        for( const auto& itM : maptodosort ) {
            if( itM.second.kill == itV ) {
                vectosort.push_back( itM.second );
            }
        }
    }
}

int main()
{
    GameArenaManager& Manager = GameArenaManager::GetInstance();
    PlayerScoreBoardInfo info[5];
    info[0] = PlayerScoreBoardInfo("ArchedukeShrimp", 5, 4, 0, 0);
    info[1] = PlayerScoreBoardInfo("EvilLactobacilus", 9, 4, 0, 0);
    info[2] = PlayerScoreBoardInfo("DolphinetelyOrcaward", 23, 4, 0, 0);
    info[3] = PlayerScoreBoardInfo("ChuckSkeet", 1, 4, 0, 0);
    info[4] = PlayerScoreBoardInfo("TrumpMcDuck", 87, 4, 0, 0);
    for( int i = 0; i < 5; i++ )
        Manager.m_Infos.insert( std::make_pair( i, info[i] ) );

    std::vector<PlayerScoreBoardInfo> sortedvec;
    SortVecFromMap( Manager.m_Infos, sortedvec );

    for( auto it = sortedvec.begin(); it != sortedvec.end(); ++it )
    {
        std::cout << "Name: " << it->name << " ";
        std::cout << "Kills: " << it->kill << std::endl;
    }
    return 0;
}
Here's a link to the code compiled on ideone: http://ideone.com/B08y9l
And a Wandbox link in case you want to edit and compile it in real time: http://melpon.org/wandbox/permlink/6OVLBGEXiux5Vn34
This question will probably get closed as "off topic". However:
First question: does the server ever need a sorted scoreboard?
If not, why do the work?
If anything, the server will want to index the scoreboard by player ID, which argues for either sorting by ID (to aid binary searches) or using a hashed container (O(1) search time).
Furthermore, the ultimate bottleneck is network bandwidth. You'll eventually want to be in a position to send scoreboard deltas rather than state-of-the-world messages.
This argues further for making the clients do the work of re-sorting.
There is one more philosophical argument:
Is sorting by anything other than a primary key a data concern or a presentation concern? It's a presentation concern.
Who does presentation? The client.
QED
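As an aside, whichever side ends up sorting, the quadratic SortVecFromMap above can be replaced by one pass to copy the values plus a single std::sort with a comparator. A sketch (note also that the original double loop emits every player in a group of tied kill counts once per tie):

#include <algorithm>
#include <map>
#include <vector>

// Copy the map values once, then sort descending by kill count:
// O(n log n) instead of O(n^2), and correct even with duplicate scores.
template <typename K>
std::vector<PlayerScoreBoardInfo>
SortedScoreboard( const std::map<K, PlayerScoreBoardInfo>& players )
{
    std::vector<PlayerScoreBoardInfo> result;
    result.reserve( players.size() );
    for( const auto& entry : players )
        result.push_back( entry.second );
    std::sort( result.begin(), result.end(),
               []( const PlayerScoreBoardInfo& a, const PlayerScoreBoardInfo& b )
               { return a.kill > b.kill; } );
    return result;
}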

RxCpp: how to control subject observer's lifetime when used with buffer_with_time

The purpose of the following code is to let various classes publish data to an observable. Some classes will observe every data item; some will observe periodically with buffer_with_time().
This works well until the program exits, at which point it crashes, probably because the observer using buffer_with_time() is still hanging on to some thread.
struct Data
{
    Data() : _subscriber(_subject.get_subscriber()) {}
    ~Data() { _subscriber.on_completed(); }

    void publish(std::string data) { _subscriber.on_next(data); }
    rxcpp::observable<std::string> observable() { return _subject.get_observable(); }

private:
    rxcpp::subjects::subject<std::string> _subject;
    rxcpp::subscriber<std::string> _subscriber;
};

void foo()
{
    Data data;
    auto period = std::chrono::milliseconds(30);
    auto s1 = data.observable()
        .buffer_with_time(period, rxcpp::observe_on_new_thread())
        .subscribe([](std::vector<std::string>& data)
                   { std::cout << data.size() << std::endl; });
    data.publish("test 1");
    data.publish("test 2");
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    // hope to call something here so s1's thread can be joined;
    // program crashes upon exit
}
I tried calling s1.unsubscribe(), and various combinations of as_blocking(), from(), and merge(), but I still can't get the program to exit gracefully.
Note that I used subjects here so that publish can be called from different places (which can be different threads). I am not sure this is the best mechanism for that; I am open to other ways to accomplish it.
Advice?
This is very close to working.
However, having the Data destructor complete the input, while also wanting the subscription to block the exit of foo until the input has completed, makes this more complex.
Here is a way to ensure that foo blocks until after Data destructs, using the existing Data contract:
void foo1()
{
    rxcpp::observable<std::vector<std::string>> buffered;
    {
        Data data;
        auto period = std::chrono::milliseconds(30);
        buffered = data.observable()
            .buffer_with_time(period, rxcpp::observe_on_new_thread())
            .publish().ref_count();
        buffered
            .subscribe([](const std::vector<std::string>& data)
                       { printf("%lu\n", data.size()); },
                       []() { printf("data complete\n"); });
        data.publish("test 1");
        data.publish("test 2");
    }
    // blocks here until the buffered observable completes, so the
    // worker thread is joined before foo1 returns
    buffered.as_blocking().subscribe();
    printf("exit foo1\n");
}
Alternatively, changing the shape of Data (adding a complete method) would allow the following code:
struct Data
{
    Data() : _subscriber(_subject.get_subscriber()) {}
    ~Data() { complete(); }

    void publish(std::string data) { _subscriber.on_next(data); }
    void complete() { _subscriber.on_completed(); }
    rxcpp::observable<std::string> observable() { return _subject.get_observable(); }

private:
    rxcpp::subjects::subject<std::string> _subject;
    rxcpp::subscriber<std::string> _subscriber;
};

void foo2()
{
    printf("foo2\n");
    Data data;
    auto newthread = rxcpp::observe_on_new_thread();
    auto period = std::chrono::milliseconds(30);
    auto buffered = data.observable()
        .buffer_with_time(period, newthread)
        .tap([](const std::vector<std::string>& data)
             { printf("%lu\n", data.size()); },
             []() { printf("data complete\n"); });
    auto emitter = rxcpp::sources::timer(std::chrono::milliseconds(0), newthread)
        .tap([&](long) {
            data.publish("test 1");
            data.publish("test 2");
            data.complete();
        });
    // block until both observables complete, so the worker thread
    // is joined before foo2 returns
    buffered.combine_latest(newthread, emitter).as_blocking().subscribe();
    printf("exit foo2\n");
}
I think this better expresses the dependencies.

Accessing a field of the class from a different thread

I have a class that contains a vector of Facebook friends' data:
std::vector<FBFriend> m_friends
FBFriend is a very simple struct:
struct FBFriend
{
    std::string ID;
    std::string photoPath;
    std::string name;
    bool installed;
    int score;
};
When I download the data from Facebook (in an async thread), I iterate over the m_friends field to assign the corresponding picture, but I get a bad access error.
Any idea?
Thanks.
When a variable is accessed from multiple threads, the reads and writes must be synchronized to avoid a data race.
Here's a simple example of how you can synchronize access using std::mutex and std::lock_guard:
std::mutex m;
std::vector<FBFriend> v;

// Add two FBFriend to the vector.
v.push_back({"user1", "/photos/steve", "Steve", true, 0});
v.push_back({"user2", "/photos/laura", "Laura", true, 0});

// Change an element in the vector from a different thread.
auto f = std::async(std::launch::async, [&] {
    std::lock_guard<std::mutex> lock(m); // Acquire lock before writing.
    v[0].photoPath = "/images/steve";
});
f.wait(); // Wait for the task to finish; this also synchronizes the reads below.

std::cout << v[0].photoPath << std::endl;
std::cout << v[1].photoPath << std::endl;
Output:
/images/steve
/photos/laura
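Applied to the original problem, the same pattern means the download thread must hold the lock while it iterates m_friends, and so must any other thread that reads or resizes the vector at the same time. A minimal sketch; the pathForFriend helper is hypothetical, standing in for however the downloaded picture is located for a given friend:

std::mutex m_friendsMutex; // guards every access to m_friends

// In the async download handler:
{
    std::lock_guard<std::mutex> lock(m_friendsMutex);
    for (FBFriend& f : m_friends)
        f.photoPath = pathForFriend(f.ID); // hypothetical: look up the downloaded picture for this ID
}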