Using Redis with uWebSockets C++ server

I currently have a C++ WebSocket server using uWebSockets, and I want to scale it horizontally using Redis. This means I'll use this Redis client.
However, I'm facing a problem implementing pub/sub channels. Since the Redis channel subscription needs its own event loop (according to this example), and the uWebSockets app does too (see this example), I end up with two event loops, and my problem is that I don't know how to run these two loops together properly.
I tried running them on two different threads, which works if they are totally independent of each other. However, since I want to broadcast each incoming Redis message to all WebSocket clients, I need the uWebSockets app instance (see this example) in the Redis thread to broadcast it:
Subscriber sub = redis->subscriber();
sub.on_message([](std::string channel, std::string msg) {
    app->publish("broadcast", msg, (uWS::OpCode)1);
});
Therefore the two event loops are not independent of each other, and when I receive a message from Redis, it takes about 5 seconds before it is handled by the uWebSockets app.
Does someone know how to properly set up this Redis pub/sub feature? Thank you for your help.

I managed to solve my problem.
I found that calling app->publish(...) from my second thread was not thread-safe. An interesting post showed me that in order to access the app from another thread, we have to use the defer method on the event loop. Therefore, the structure becomes:
...
uWS::SSLApp *app = nullptr;
uWS::Loop *loop = nullptr;
Redis *redis = nullptr;
...
void redisEventLoopThread(Subscriber *sub) {
    sub->on_message([](string channel, string msg) {
        // defer() hands the callback to the uWS loop thread, the only
        // thread that may touch the app; channel must be captured too,
        // since it is used inside the lambda
        loop->defer([channel, msg]() {
            app->publish(channel, msg, ...);
        });
    });
    sub->subscribe("channel_name");
    while (true) {
        try {
            sub->consume();
        } catch (const Error &err) {...}
    }
}
...
int main() {
    app = new uWS::SSLApp();
    loop = uWS::Loop::get(); // called on the thread that will run the app
    redis = new Redis(...);
    Subscriber sub = redis->subscriber();
    thread redisThread(redisEventLoopThread, &sub);
    app->ws<...>(...).listen(...).run();
    ...
}
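For what it's worth, the only piece elided above is the publish opcode. A hedged guess at the complete call, assuming text frames (this matches the (uWS::OpCode)1 cast from the question; use uWS::OpCode::BINARY for binary payloads):

// inside the deferred lambda: broadcast to all subscribers of `channel`
app->publish(channel, msg, uWS::OpCode::TEXT);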

Related

How to lock a long async call in a WebApi action?

I have a scenario with a WebApi and an endpoint that, when triggered, does a lot of work (around 2-5 minutes). It is a POST endpoint with side effects, and I would like to limit execution so that if 2 requests are sent to this endpoint (this should not happen, but better safe than sorry), one of them will have to wait, in order to avoid race conditions.
I first tried to use a simple static lock inside the controller like this:
lock (_lockObj)
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
This is of course not possible because of the await inside the lock statement.
Another solution I considered was to use a SemaphoreSlim implementation like this:
await semaphore.WaitAsync();
try
{
    var results = await _service.LongRunningWithSideEffects();
    return Ok(results);
}
finally
{
    semaphore.Release();
}
However, according to MSDN:
The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short.
Since in this scenario the wait times may even reach 5 minutes, what should I use for concurrency control?
EDIT (in response to plog17):
I do understand that passing this task onto a service might be the optimal way, however, I do not necessarily want to queue something in the background that still runs after the request is done.
The request involves other requests and integrations that take some time, but I would still like the user to wait for this request to finish and get a response regardless.
This request is expected to be only fired once a day at a specific time by a cron job. However, there is also an option to fire it manually by a developer (mostly in case something goes wrong with the job) and I would like to ensure the API doesn't run into concurrency issues if the developer e.g. double-sends the request accidentally etc.
If only one request of that sort can be processed at a given time, why not implement a queue?
With such a design, there is no longer any need to lock or wait while processing the long-running request.
Flow could be:
The client POSTs /ResourcesToProcess and should quickly receive 202 Accepted
The HttpController simply queues the task to be processed (and returns the 202 Accepted)
Another service (a Windows service?) dequeues the next task to process
Processes the task
Updates the resource status
During this process, the client should easily be able to get the status of previously made requests:
If the task is not found: 404 Not Found ("Resource not found for id 123")
If the task is processing: 200 OK ("123 is processing")
If the task is done: 200 OK (the process response)
Your controller could look like:
public class TaskController
{
    //constructor and private members

    [HttpPost, Route("")]
    public IHttpActionResult QueueTask(RequestBody body)
    {
        messageQueue.Add(body);
        return StatusCode(HttpStatusCode.Accepted); // 202 Accepted
    }

    [HttpGet, Route("{taskId}")]
    public IHttpActionResult GetTaskStatus(string taskId)
    {
        YourThing thing = tasksRepository.Get(taskId);
        if (thing == null)
        {
            return NotFound(); // thing does not exist
        }
        if (thing.IsProcessing)
        {
            return Ok("thing is processing");
        }
        if (!thing.IsDone) // assumes a completion flag on YourThing
        {
            return Ok("thing is not processing yet");
        }
        // here we assume thing has been processed
        return Ok(thing.ResponseContent);
    }
}
This design suggests that you do not handle long-running processes inside your WebApi. Indeed, that may not be the best design choice. If you still want to do so, you may want to read:
Long running task in WebAPI
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/

How to make sure last AMQP message is published successfully before closing connection?

I have multiple processes working together as a system. One of the processes acts as the main process. When the system is shutting down, every process needs to send a notification (via RabbitMQ) to the main process and then exit. The programs are written in C++ and I am using the AMQPCPP library.
The problem is that sometimes the notification is not published successfully. I suspect exiting too soon is the cause of the problem, as the AMQPCPP library has no chance to send the message out before its connection is closed.
The documentation of AMQPCPP says:
Published messages are normally not confirmed by the server, and the RabbitMQ will not send a report back to inform you whether the message was successfully published or not. Therefore the publish method does not return a Deferred object.
As long as no error is reported via the Channel::onError() method, you can safely assume that your messages were delivered.
This can of course be a problem when you are publishing many messages. If you get an error halfway through there is no way to know for sure how many messages made it to the broker and how many should be republished. If this is important, you can wrap the publish commands inside a transaction. In this case, if an error occurs, the transaction is automatically rolled back by RabbitMQ and none of the messages are actually published.
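For reference, the transaction the documentation mentions would look roughly like this with AMQP-CPP (a sketch based on the library's startTransaction/commitTransaction API; the exchange name, routing key, and payload are placeholders):

// wrap the publish in a transaction so the broker confirms the commit
channel.startTransaction();
channel.publish("my-exchange", "routing-key", "terminate-notification");
channel.commitTransaction()
    .onSuccess([]() {
        // the notification definitely reached the broker; safe to exit
    })
    .onError([](const char *message) {
        // the transaction was rolled back; nothing was published
    });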
Without a confirmation from the RabbitMQ server, it's hard to decide when it is safe to exit the process. Furthermore, using a transaction sounds like overkill for a notification.
Could anyone suggest a simple solution for graceful shutdown without losing the last notification?
It turns out that I can set up a callback when closing the channel, so that I can safely close the connection once all channels are closed successfully. I am not entirely sure whether this guarantees that all outgoing messages are really published, but from my test results it seems the problem is solved.
class MyClass
{
    ...
    AMQP::TcpConnection m_tcpConnection;
    AMQP::TcpChannel m_channelA;
    AMQP::TcpChannel m_channelB;
    ...
};
void MyClass::stop(void)
{
    sendTerminateNotification();

    // a shared counter keeps the callbacks valid even after stop()
    // returns, since they run asynchronously on the event loop
    auto remainChannel = std::make_shared<int>(2);
    auto closeConnection = [this, remainChannel]() {
        --(*remainChannel);
        if (*remainChannel == 0) {
            // close connection when all channels are closed.
            m_tcpConnection.close();
            ev::get_default_loop().break_loop();
        }
    };
    auto closeChannel = [closeConnection](AMQP::TcpChannel & channel) {
        channel.close()
            .onSuccess([closeConnection]() { closeConnection(); })
            .onError([closeConnection](const char * msg) {
                std::cout << "cannot close channel: " << msg << std::endl;
                // close the connection anyway
                closeConnection();
            });
    };
    closeChannel(m_channelA);
    closeChannel(m_channelB);
}

Eclipse RAP Multi-client but single server thread

I understand how RAP creates scopes with a specific thread for each client and so on. I also understand how the application scope is unique across several clients; however, I don't know how to access that specific scope in a single-threaded manner.
I would like to have a server side (with access to databases and so on) that runs as a single execution, to ensure it has global knowledge of all transactions and that requests from clients are executed in sequence instead of in parallel.
Currently I am accessing the application context as follows from the UI:
synchronized( MyServer.class ) {
    ApplicationContext appContext = RWT.getApplicationContext();
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
    myServer.doSomething(RWTUtils.getSessionID());
}
Even if I access the myServer object there and trigger requests, the execution still runs in the UI thread.
For now, the only way I can ensure sequential execution is to use synchronized in my server, as follows:
public class MyServer {
    String text = "";

    public void doSomething(String string) {
        try {
            synchronized (this) {
                System.out.println("doSomething - start :" + string);
                text += "[" + string + "]";
                System.out.println("text: " + (text));
                Thread.sleep(10000);
                System.out.println("text: " + (text));
                System.out.println("doSomething - stop :" + string);
            }
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Is there a better way to not have to manage the thread synchronization myself?
Any help is welcome.
EDIT:
To better explain myself, here is what I mean. Either I trust the database to handle multiple requests properly, and I also have to handle some other knowledge in a synchronized manner to share information between clients (example A), or I find a solution where another thread handles both the knowledge and the database (example B). Of course, the problem there is that one client may block the others, but this can be managed with background threads for long actions; most of them will be no problem. My initial question was: is there perhaps already some specific thread of the application scope that does example B, or is example A actually the way to go?
Conclusion (so far)
Basically, option A is the way to go. For database access it will require connection pooling, and for shared information it will require thoughtful synchronization of key objects. The main attention has to go into the database design and the synchronization of objects, to ensure that two clients cannot write incompatible data at the same time (e.g. write contradicting entries that make the result depend on the write order).
First of all, the way you create MyServer in the first snippet is not thread-safe. You are likely to create more than one instance of MyServer.
You need to synchronize the creation of MyServer, like this for example:
synchronized( MyServer.class ) {
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
}
See also this post How to implement thread-safe lazy initialization? for other possible solutions.
Furthermore, your code calls doSomething() on the client thread (i.e. the UI thread), which will cause each client to wait until the pending requests of other clients are processed. The client UI will become unresponsive.
To solve this problem, your code should call doSomething() (or any other long-running operation, for that matter) from a background thread (see also Threads in RAP).
When the background thread has finished, you should use Server Push to update the UI.

How to shutdown gRPC server from Client (using RPC function)

I'm using gRPC for inter-process communication between a C++ app (gRPC server) and a Java app (gRPC client). Everything runs on one machine. I want to give the client the ability to shut down the server. My idea is to add an RPC function to the service in the proto file which would do it.
The C++ implementation would be:
class Service : public grpcGeneratedService
{
public:
    ......
private:
    grpc::Server* m_pServer;
};

grpc::Status Service::ShutDown(grpc::ServerContext* pContext, const ShutDownRequest* pRequest, ShutDownResponse* pResponse)
{
    if (m_pServer)
        m_pServer->Shutdown();

    return grpc::Status(grpc::StatusCode::OK, "");
}
However, ShutDown() blocks until all RPC calls are processed, which means a deadlock. Is there an elegant way to implement this?
I'm using a std::promise with a method almost exactly like yours.
// Somewhere in the global scope :/
std::promise<void> exit_requested;

// My method looks nearly identical to yours
Status CoreServiceImpl::shutdown(ServerContext *context, const SystemRequest *request, Empty*)
{
    LOG(INFO) << context->peer() << " - Shutdown request acknowledged.";
    exit_requested.set_value();
    return Status::OK;
}
In order to make this work, I call server->Wait() in a second thread, and the main thread waits on the future of the exit_requested promise before calling Shutdown():
auto serveFn = [&]() {
    server->Wait();
};
std::thread serving_thread(serveFn);

auto f = exit_requested.get_future();
f.wait();
server->Shutdown();
serving_thread.join();
Once I had this, I was also able to support a clean shutdown via signal handlers:
auto handler = [](int s) {
    exit_requested.set_value();
};
std::signal(SIGINT, handler);
std::signal(SIGTERM, handler);
std::signal(SIGQUIT, handler);
I've been satisfied with this approach so far, and it has kept me within the bounds of gRPC and the standard C++ library. Rather than using a globally scoped promise (I have to declare it as extern in my service implementation source), I should probably think of something more elegant.
One thing to note here is that setting the value of the promise more than once will throw an exception. This could happen if you somehow send the shutdown message and also pkill -2 my_awesome_service at the same time. I actually ran into this when a deadlock in my persistence layer prevented shutdown from finishing: when I tried to send a SIGINT again, the service aborted instead! For my needs this is still an acceptable solution, but I'd love to hear about alternatives that work around or solve that little problem.
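One sketch of a workaround (request_exit is a hypothetical helper name; both the shutdown RPC and the signal handlers would call it instead of touching the promise directly):

#include <future>
#include <mutex>

std::promise<void> exit_requested;
std::once_flag exit_once;

// makes a second shutdown trigger (e.g. RPC + SIGINT) a no-op instead
// of an exception; note that, like the original handler, this is not
// strictly async-signal-safe when invoked from a signal handler
void request_exit()
{
    std::call_once(exit_once, []() { exit_requested.set_value(); });
}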
You can create an std::function from the ShutDown() handler and run that function in a separate thread (or thread pool). This decouples the handling of the RPC from the execution of the shutdown logic and eliminates the deadlock.
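For illustration, that could look roughly like this (a sketch reusing the question's names, not a definitive implementation):

#include <thread>

grpc::Status Service::ShutDown(grpc::ServerContext* pContext, const ShutDownRequest* pRequest, ShutDownResponse* pResponse)
{
    if (m_pServer)
    {
        // Shutdown() blocks until in-flight RPCs (including this one)
        // complete, so run it on a detached thread instead of inline
        grpc::Server* pServer = m_pServer;
        std::thread([pServer]() { pServer->Shutdown(); }).detach();
    }
    return grpc::Status(grpc::StatusCode::OK, "");
}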

ActiveMQ-cpp Broker URI with PrefetchPolicy has no effect

I am using activemq-cpp 3.7.0 with VS 2010 to build a client; the server is ActiveMQ 5.8. I have created a message consumer using code similar to the following, based on the CMS configurations mentioned here. ConnClass is an ExceptionListener and a MessageListener. I only want to consume a single message before calling cms::Session::commit().
void ConnClass::setup()
{
    // Create a ConnectionFactory
    std::tr1::shared_ptr<ConnectionFactory> connectionFactory(
        ConnectionFactory::createCMSConnectionFactory(
            "tcp://localhost:61616?cms.PrefetchPolicy.queuePrefetch=1"));

    // Create a Connection
    m_connection = std::tr1::shared_ptr<cms::Connection>(
        connectionFactory->createConnection());
    m_connection->start();
    m_connection->setExceptionListener(this);

    // Create a Session
    m_session = std::tr1::shared_ptr<cms::Session>(
        m_connection->createSession(Session::SESSION_TRANSACTED));

    // Create the destination (Queue)
    m_destination = std::tr1::shared_ptr<cms::Destination>(
        m_session->createQueue("myqueue?consumer.prefetchSize=1"));

    // Create a MessageConsumer from the Session to the Queue
    m_consumer = std::tr1::shared_ptr<cms::MessageConsumer>(
        m_session->createConsumer( m_destination.get() ));
    m_consumer->setMessageListener( this );
}

void ConnClass::onMessage( const Message* message )
{
    // read message code ...
    // schedule a processing event for
    // another thread that calls m_session->commit() when done
}
The problem is I am receiving multiple messages instead of one message before calling m_session->commit() -- I know this because the commit() call is triggered by user input. How can I ensure onMessage() is only called once before each call to commit()?
It doesn't work that way. When using async consumers, messages are delivered as fast as the onMessage method can complete them. If you want to consume one and only one message, use a sync receive call.
For an async consumer, the prefetch allows the broker to buffer up work on the client instead of sending one message at a time, so you generally get better performance. In your case, as the async onMessage call completes, an ack is sent back to the broker and the next message is sent to the client.
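A minimal sketch of that synchronous pattern with the question's transacted session (assuming the setMessageListener call is dropped and this runs in the consuming thread's own loop):

// blocking receive: exactly one message per commit
std::tr1::shared_ptr<cms::Message> message(m_consumer->receive());
if (message.get() != NULL) {
    // ... process the message ...
    m_session->commit(); // ack just this one delivery
}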
Yes, I found this too. However, when I use the Destination URI option ("consumer.prefetchSize=15", http://activemq.apache.org/cms/configuring.html#Configuring-DestinationURIParameters) for the asynchronous consumer, it works well.
BTW, I used the latest ActiveMQ-CPP v3.9.4 by Tim, and ActiveMQ v5.12.1 on CentOS 7.
Thanks!