gRPC / C++ - How to keep a server streaming RPC open - c++

Hello I have this code:
Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
                    ServerWriter<Feature>* writer) override {
  auto lo = rectangle->lo();
  auto hi = rectangle->hi();
  long left = std::min(lo.longitude(), hi.longitude());
  long right = std::max(lo.longitude(), hi.longitude());
  long top = std::max(lo.latitude(), hi.latitude());
  long bottom = std::min(lo.latitude(), hi.latitude());
  for (const Feature& f : feature_list_) {
    if (f.location().longitude() >= left &&
        f.location().longitude() <= right &&
        f.location().latitude() >= bottom &&
        f.location().latitude() <= top) {
      writer->Write(f);
    }
  }
  return Status::OK;
}
This is a server-streaming RPC initiated by a unary client call.
I would like to not close the stream. Once the client initiates the call, I would like to keep the server stream open "forever" so I can send messages whenever I like.
As far as I understand, the moment this line is executed:
return Status::OK;
the stream is closed. Is there any way I can keep it open so that I can send more server streaming messages later?

There are two major solutions here for you.
First, the API is designed to be thread-safe and usable through thread pools. It's okay to never return, consuming a thread in the process, and to continue writing endlessly. Obviously this means keeping a thread alive to send out your responses continuously, which can be resource-intensive depending on your situation. You'd also need to set up your thread pool properly: holding the handler open indefinitely will cause problems if you haven't tuned the pool beforehand.
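As a rough illustration of that first option (message_queue_ is a hypothetical thread-safe blocking queue, not part of the question's code):

Status ListFeatures(ServerContext* context, const Rectangle* rectangle,
                    ServerWriter<Feature>* writer) override {
  Feature f;
  // Block until another part of the server pushes a Feature, then stream
  // it out; the RPC stays open for as long as this loop keeps running.
  while (!context->IsCancelled() && message_queue_.WaitAndPop(&f)) {
    if (!writer->Write(f)) {
      break;  // the client disconnected
    }
  }
  return Status::OK;  // only reached on cancellation or disconnect
}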
But secondly, there's the reactor mechanism, which allows you to return without tearing down the RPC: your callbacks have void returns, and you'd need to explicitly call Finish(Status::OK) to terminate the RPC. It's a different base API, so you'd need to convert your code to it. You can see an example here:
https://github.com/grpc/grpc/blob/master/examples/cpp/route_guide/route_guide_callback_server.cc
The Chatter call is an example of a streaming server API that hops threads to send its replies.
The details of the callback API can be found here: https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md
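If you go the callback route, a minimal sketch of a reactor for the question's ListFeatures that stays open until you decide to finish might look like this (the publish queue, locking, and registration step are assumptions for illustration, not part of the route_guide example):

#include <deque>
#include <mutex>
#include <grpcpp/grpcpp.h>

class FeaturePublisher : public grpc::ServerWriteReactor<Feature> {
 public:
  // Call this from any thread, at any time, to push another message.
  void Publish(const Feature& f) {
    std::lock_guard<std::mutex> lock(mu_);
    pending_.push_back(f);
    MaybeStartWrite();
  }

  void OnWriteDone(bool ok) override {
    std::lock_guard<std::mutex> lock(mu_);
    writing_ = false;
    if (!ok) { Finish(grpc::Status::CANCELLED); return; }
    MaybeStartWrite();
  }

  void OnDone() override { delete this; }

 private:
  void MaybeStartWrite() {  // caller must hold mu_
    if (!writing_ && !pending_.empty()) {
      current_ = pending_.front();
      pending_.pop_front();
      writing_ = true;
      StartWrite(&current_);  // current_ must stay alive until OnWriteDone
    }
  }

  std::mutex mu_;
  std::deque<Feature> pending_;
  Feature current_;
  bool writing_ = false;
};

// The handler returns immediately; the RPC stays open until Finish() is
// called (or the client cancels):
grpc::ServerWriteReactor<Feature>* ListFeatures(
    grpc::CallbackServerContext* context, const Rectangle* rectangle) override {
  auto* publisher = new FeaturePublisher();
  // Register `publisher` somewhere so other code can call Publish() later.
  return publisher;
}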

Related

boost::asio write() API stuck while writing the data

While sending some data to a client (multiple chunks of data), if the client stops reading the data after some packets, the server gets stuck in boost::asio::write(), which results in unwanted behavior in the product.
We thought of shifting to async_write() with a timer over it, so that if this condition occurs we could fall back to the original good state. But due to design faults we could not use io_service after async_write (due to high concurrency), which meant we never got the callbacks needed to stop the timer.
So, is there any way (without using io_service) to unblock the write() API?
Something like executing write() on a separate thread and terminating it through a timer. But then the question arises: is there any way to clear out the boost buffers that already have pending write data?
Any help would be appreciated.
Thanks.
Eventually I went with boost::asio::async_write(), but driven by io_service::poll(), poll() being non-blocking.
run() was not an option, as the system is highly concurrent and read/write had to share the same io_service.
Pseudo code looks something like this:
data_to_write = size of data;
set current_bytes_transferred = 0
set timeout_occurred to false
/*
current_bytes_transferred -> obtained from the async_write() callback
timeout_occurred -> obtained from a separate timer
*/
while ((data_to_write != current_bytes_transferred) && (!timeout_occurred))
{
    // poll() is used instead of run() as the system
    // has high concurrency, and read and write operations
    // share the same io_service
    io_service.poll();
}
if (data_to_write == current_bytes_transferred)
{
    // SUCCESS write logic
}
else if (timeout_occurred)
{
    // timeout logic
}
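For context, a fuller sketch of the same idea, assuming Boost.Asio's pre-1.66 io_service API (as used in this answer): an async_write guarded by a deadline_timer, both pumped with poll():

#include <boost/asio.hpp>
#include <iostream>
#include <vector>

void write_with_timeout(boost::asio::io_service& io,
                        boost::asio::ip::tcp::socket& sock,
                        const std::vector<char>& data,
                        boost::posix_time::time_duration timeout)
{
    std::size_t transferred = 0;
    bool timed_out = false;

    boost::asio::deadline_timer timer(io);
    timer.expires_from_now(timeout);
    timer.async_wait([&](const boost::system::error_code& ec) {
        if (!ec) {               // timer actually fired (was not cancelled)
            timed_out = true;
            sock.cancel();       // aborts the pending async_write
        }
    });

    boost::asio::async_write(sock, boost::asio::buffer(data),
        [&](const boost::system::error_code& ec, std::size_t n) {
            transferred = n;
            if (!ec) timer.cancel();  // success: stop the timer
        });

    // poll() runs only the handlers that are ready and then returns, so
    // reads sharing this io_service are never blocked by this loop.
    while (transferred != data.size() && !timed_out) {
        io.poll();
    }

    // Drain any remaining (cancelled) handler before the locals captured
    // by reference go out of scope.
    io.poll();

    if (timed_out) {
        std::cerr << "write timed out after " << transferred << " bytes\n";
    }
}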

I need help figuring out tcp sockets (clsocket)

I am having trouble figuring out sockets. I am just asking the server for data at a position (glm::i64vec4) and expecting a response, but the position gets way off by the time I get the response, and the data for that position reflects that (aka my voxel game makes a kind of cool-looking but useless mess).
It's probably just me not understanding sockets whatsoever, or maybe something weird with this library.
One thought I had was that it might be something to do with mismatching blocking and non-blocking modes on the server and client,
but when I switched the server to blocking (and put each client in a separate thread from each other and from the accepting process) it did nothing.
If I'm doing something really stupid, please tell me; I know next to nothing about sockets.
Here is some code that probably looks horrible.
Server Code
std::deque<CActiveSocket*> clients;
CPassiveSocket socket;
socket.Initialize();
socket.SetNonblocking(); // I'm doing this so I don't need multiple threads for clients
socket.Listen("0.0.0.0", port);
while (1) {
    {
        CActiveSocket* c;
        if ((c = socket.Accept()) != NULL) {
            clients.emplace_back(c);
        }
    }
    for (CActiveSocket*& c : clients) {
        c->Receive(sizeof(glm::i64vec4));
        if (c->GetBytesReceived() == sizeof(glm::i64vec4)) {
            chkpkt chk;
            chk.pos = *(glm::i64vec4*)c->GetData();
            LOOP3D(chksize + 2) {
                chk.data(i, j, k).val = chk.pos.y * chksize + j;
                chk.data(i, j, k).id = 0;
            }
            while (c->Send((uint8*)&chk, sizeof(chkpkt)) != sizeof(chkpkt)) {}
        }
    }
}
Client Code
// v is a glm::i64vec4
// fsock is set to Blocking
if (fsock.Send((uint8*)&v, sizeof(glm::i64vec4)))
    if (fsock.Receive(sizeof(chkpkt))) {
        tthread::lock_guard<tthread::fast_mutex> lock(wld->filemut);
        wld->ichks[v] = (*(chkpkt*)fsock.GetData()).data;
        // I tried using the position I get back from the server to set this
        // (instead of v), but that made it so nothing loaded.
        // I checked, and the chunk's position never lines up with what I sent.
    }
Without your complete application code, it's unrealistic to offer suggestions about any particular lines to correct.
But it seems like you are using this library. It doesn't matter much if not, because most of the time in network programming, sockets' quirks make these problems somewhat universal. So here are a few suggestions for the socket portion of your project:
It suffices to use BLOCKING sockets.
Most of the time, a socket read has somewhat surprising behavior: it might not receive the requested number of bytes in a single call. Because of this, you need to call read repeatedly until the expected number of bytes has been received. For a complete and robust solution, refer to Stevens's readn routine ([Ref.1], page 122).
If you are using exactly the library mentioned above, you can see that your fsock.Receive eventually calls recv. And recv is just a variant of read [Ref.2], so the solution is identical for both. This pattern might help:
while (fsock.Receive(sizeof(chkpkt)) > 0)
{
    // ...
}
Ref.1: https://mathcs.clarku.edu/~jbreecher/cs280/UNIX%20Network%20Programming(Volume1,3rd).pdf
Ref.2: https://man7.org/linux/man-pages/man2/recv.2.html#DESCRIPTION
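For reference, a minimal sketch of the Stevens-style readn loop over a raw POSIX descriptor (the clsocket wrapper ultimately calls recv, so the same idea applies):

#include <cerrno>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t readn(int fd, void* buf, size_t n)
{
    char* p = static_cast<char*>(buf);
    size_t left = n;
    while (left > 0) {
        ssize_t r = recv(fd, p, left, 0);
        if (r < 0) {
            if (errno == EINTR) continue; // interrupted: just retry
            return -1;                    // real error
        }
        if (r == 0) break;                // peer closed the connection
        left -= r;
        p += r;
    }
    return n - left;                      // bytes actually read
}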

How do I use select() and gRPC to create a server?

I need to use gRPC but in a single-threaded application (with additional socket channels). Naively, I'm thinking of using select() and depending on which file descriptor pops, calling gRPC to handle the message. My question is, can someone give me a rough (5-10 lines of code) outline skeleton on what I need to call after the select() pops?
Looking at Google's "hello world" example in the synchronous case implies a thread pool (which I can't use), and in the asynchronous case shows the main loop blocking -- which doesn't work for me because I need to handle other socket operations.
You can't do it, at this point (and probably ever).
One of the big weaknesses of event loops, including direct use of select()/poll() style APIs, is that they aren't composable in any natural way short of direct integration between the two.
We could theoretically add such functionality for Linux -- exporting an epoll_fd with a timerfd that becomes readable when it would be productive to call into a completion queue -- but doing so would impose substantial constraints and architectural overhead on the rest of the stack just to support this use case, and only on Linux. Everywhere else would require a background thread to manage that fd's readability.
This can be done using a gRPC async service along with grpc::Alarm to send any events that come from select or other polling APIs onto the gRPC completion queue. You can see an example using Epoll and gRPC together in this gist. The important functions are these two:
bool grpc_tick(grpc::ServerCompletionQueue& queue) {
  void* tag = nullptr;
  bool ok = false;
  auto next_status = queue.AsyncNext(&tag, &ok, std::chrono::system_clock::now());
  if (next_status == grpc::CompletionQueue::GOT_EVENT) {
    if (ok && tag) {
      static_cast<RequestProcessor*>(tag)->grpc_queue_tick();
    } else {
      std::cerr << "Not OK or bad tag: " << ok << "; " << tag << std::endl;
      return false;
    }
  }
  return next_status != grpc::CompletionQueue::SHUTDOWN;
}

bool tick_loops(int epoll, grpc::ServerCompletionQueue& queue) {
  // Pump epoll events over to gRPC's completion queue.
  epoll_event event{0};
  while (epoll_wait(epoll, &event, /*maxevents=*/1, /*timeout=*/0)) {
    grpc::Alarm alarm;
    alarm.Set(&queue, std::chrono::system_clock::now(), event.data.ptr);
    if (!grpc_tick(queue)) return false;
  }
  // Make sure gRPC gets at least 1 tick.
  return grpc_tick(queue);
}
Here you can see the tick_loops function repeatedly calls epoll_wait until no more events are returned. For each epoll event, a grpc::Alarm is constructed with the deadline set to right now. After that, the gRPC event loop is immediately pumped with grpc_tick.
Note that the grpc::Alarm instance MUST outlive its time on the completion queue. In a real-world application, the alarm should be somehow attached to the tag (event.data.ptr in this example) so it can be cleaned up in the completion callback.
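One way to arrange that, sketched here as an assumption rather than taken from the gist: let the tag be a small struct that owns its alarm (grpc_tick would then cast the tag to EpollTag* instead of RequestProcessor*).

struct EpollTag {
  RequestProcessor* processor;
  grpc::Alarm alarm;  // lives exactly as long as the tag itself
};

// When pumping epoll events:
auto* t = new EpollTag;
t->processor = static_cast<RequestProcessor*>(event.data.ptr);
t->alarm.Set(&queue, std::chrono::system_clock::now(), t);

// When the tag is dequeued from the completion queue, process
// t->processor->grpc_queue_tick() and then delete t to free the alarm.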
The gRPC event loop is then pumped again to ensure that any non-epoll events are also processed.
Completion queues are thread safe, so you could also put the epoll pump on one thread and the gRPC pump on another. With this setup you would not need to set the polling timeouts for each to 0 as they are in this example. This would reduce CPU usage by limiting dry cycles of the event loop pumps.

How would one avoid race conditions from multiple threads of a server sending data to a client? C++

I was following a tutorial on YouTube on building a chat program using Winsock and C++. Unfortunately the tutorial never bothered to consider race conditions, and this causes many problems.
The tutorial had us open a new thread every time a new client connected to the chat server, which would handle receiving and processing data from that individual client.
void Server::ClientHandlerThread(int ID) // ID = the index in the SOCKET Connections array
{
    Packet PacketType;
    while (true)
    {
        if (!serverptr->GetPacketType(ID, PacketType)) // Get packet type
            break; // If there is an issue getting the packet type, exit this loop
        if (!serverptr->ProcessPacket(ID, PacketType)) // Process packet (packet type)
            break; // If there is an issue processing the packet, exit this loop
    }
    std::cout << "Lost connection to client ID: " << ID << std::endl;
}
When the client sends a message, the thread will process it and send it by first sending packet type, then sending the size of the message/packet, and finally sending the message.
bool Server::SendString(int ID, std::string& _string)
{
    if (!SendPacketType(ID, P_ChatMessage))
        return false;
    int bufferlength = _string.size();
    if (!SendInt(ID, bufferlength))
        return false;
    int RetnCheck = send(Connections[ID], _string.c_str(), bufferlength, NULL); // Send string buffer
    if (RetnCheck == SOCKET_ERROR)
        return false;
    return true;
}
The issue arises when two threads (two separate clients) try to send a message at the same time to the same ID (the same third client). One thread may send the client the int packet type, so the client is now prepared to receive an int, but then the second thread sends a string (because that thread assumes the client is waiting for its string). The client cannot process this correctly, and the program becomes unusable.
How would I solve this issue?
One solution I had:
Rather than allow each thread to execute server commands on their own, they would set an input value. The main server thread would loop through all the input values from each thread and then execute the commands one by one.
However, I am unsure this won't have problems of its own... If a client sends multiple messages within the time frame of a single server loop, only one of the messages will be sent (since the new message would overwrite the previous one). Of course there are ways around this, such as arrays of input or faster loops, but it still poses a problem.
Another issue that I thought of was that a client with a lower ID would always end up having their message sent first each loop. This isn't that big of a deal but if there was a situation, say, a trivia game, where two clients entered the correct answer in the same loop then the client with the lower ID would end up saying the answer "first" every time.
Thanks in advance.
If all I/O is being handled through a central server, a simple (but certainly not elegant) solution is to create a barrier around the I/O mechanisms to each client. In the simplest case this can just be a mutex. Associate that barrier with each client and anytime someone wants to send that client something (a complete message), lock the barrier. Unlock it when the complete message is handled. That way only one client can actually send something to another client at a time. In C++11, see std::mutex.
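A minimal sketch of that barrier, reusing the question's names (MAX_CONNECTIONS and the wrapper function are assumptions):

#include <mutex>

std::mutex client_send_mutex[MAX_CONNECTIONS]; // one barrier per client slot

bool Server::SafeSendString(int ID, std::string& _string)
{
    // Only one thread at a time may interleave the packet-type, length,
    // and payload sends to this particular client.
    std::lock_guard<std::mutex> lock(client_send_mutex[ID]);
    return SendString(ID, _string);
}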

Is it expected for poll() to take 40ms to return even though data will be available sooner?

I created a proxy server to handle CQL orders from website clients. The proxy listens for incoming connections and each connection is given a thread. The thread loops as long as the socket exists and dies on HUP. You may also stop the proxy, which will stop the threads by sending an event (See eventfd()) to each thread.
By itself, this already allows me to save a good 100ms because the proxy is local and connecting to a local service is much faster than a service on a remote computer... (even if the computer is local.)
However, I send orders and once in a while the proxy sees no incoming data (i.e. it calls read() on the socket, which is set up as non-blocking, and gets -1 in return with errno == EAGAIN). When that happens, I call poll() to wait for additional data, the HUP, or a hit on the eventfd meaning I have to quit (i.e. 2 fds, the socket and the eventfd).
Somehow, more often than not, when I hit the poll() call, it adds an extra 40ms to a message's round trip. Although one would think this only happens on larger messages, it happens when I receive an order, which is less than 100 bytes! So size should not be the culprit. I also changed the code to make sure I send the entire order from the client to the proxy in one write(), and to avoid the poll() if at all possible (i.e. I call read() first, and poll() only if nothing is available).
Note that I have no timeout in this case because there is nothing to check other than the incoming orders and the eventfd. So I would imagine that the timeout won't be a problem.
The code base is really big, but the client/server comes down to something like this (the sizes in the original are fully dynamic):
// Client
...
connect(socket);
...
write(socket, order, sizeof(order));
read(socket, result, sizeof(result));
// repeat for other orders, as required by client...

// Server
...
socket = accept(); // happens for each client
...
pthread_create(runner);
...

// Server thread (runner)
...
for(;;)
{
    int r(0);
    for(;;)
    {
        r += read(socket, order, sizeof(order));
        if(r >= sizeof(order))
        {
            break;
        }
        // wait for more data if not enough was received yet
        poll(..."socket" + "eventfd"...); // <-- this will often take 40ms
        if(eventfd_happened)
        {
            // quit thread
            return;
        }
    }
    ...
    [work on order]
    ...
    write(socket, result, sizeof(result));
}
Note 1: I see the problem when I have a single client. So having multiple clients does not in itself cause the problem either.
Note 2: The client really uses BIO_connect(), BIO_read() and BIO_write() [from OpenSSL], but I doubt that would be a problem. I do not use any kind of encryption.
I don't see why you're using non-blocking I/O given you have a dedicated thread per socket. Just block in read(). Use SO_RCVTIMEO if you need an overall read timeout.
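A minimal sketch of that suggestion (the 5-second value is illustrative):

#include <sys/socket.h>
#include <sys/time.h>

struct timeval tv;
tv.tv_sec = 5;  // give up if no data arrives for 5 seconds
tv.tv_usec = 0;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

// read() now blocks normally, but returns -1 with errno set to
// EAGAIN/EWOULDBLOCK when the timeout expires with no data.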