How to do IPC using Unix Domain Socket in D?

Here I have a program that needs to detect whether it is the only running instance:

1. It does that by creating a Unix domain socket and trying to bind it to a specific address.
2. If no duplicate instance is running, it establishes the UDS and then listens on the socket.
2.1. If any message comes through that socket, the program logs the incoming message.
2.2. Otherwise it should keep listening on the socket forever.
3. If a duplicate instance is already running, the program should send it a message and then exit.

Here's what I have:
import std.socket, std.experimental.logger;

immutable string socketAddress = "\0/tmp/com.localserver.myapp";

void main()
{
    auto socket = new std.socket.Socket(std.socket.AddressFamily.UNIX,
            std.socket.SocketType.STREAM);
    auto addr = new std.socket.UnixAddress(socketAddress);

    auto isUnique = () {
        bool result;
        scope (success)
            log("returns: ", result);
        try
        {
            socket.bind(addr);
            result = true;
        }
        catch (std.socket.SocketOSException e)
            result = false;
        // else throw error
        return result;
    }();

    if (isUnique)
    {
        log("Unique instance detected. Listening...");
        // works up to now
        char[] buffer = [];
        while (1)
        {
            socket.listen(0);
            socket.receive(buffer);
            if (buffer != []) {
                log("Received message: ", buffer);
            }
            buffer = [];
        }
    }
    else
    {
        log("Duplicate instance detected.");
        socket.connect(addr);
        import std.stdio;
        stdout.write("Enter your message:\t");
        socket.send(readln());
        log("Message has been sent. Exiting.");
    }
}
The documentation does not seem very friendly to those who do not have any experience in socket programming. How can I send and receive messages with std.socket.Socket?

After binding, you actually need to accept(). It will return a new Socket instance which you can actually receive from. Your client-side branch looks OK; I think that is your key mistake here.
I also have a code sample in my book that shows the basic functionality of std.socket, which can help as an example:
http://arsdnet.net/dcode/book/chapter_02/03/
It is TCP, but making it Unix just means changing the family, like you already did in your code.
You can also look up socket tutorials for C and so on; the D Socket is just a thin wrapper around those same BSD-style socket functions.
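A rough sketch of that underlying BSD-style sequence in plain POSIX calls (error handling omitted, and a filesystem path stands in for the abstract "\0"-prefixed address used in the question):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_UNIX, SOCK_STREAM, 0);          // same family/type as the D code
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/com.localserver.myapp", sizeof(addr.sun_path) - 1);

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));  // fails if another instance already owns the address
    listen(listener, 1);

    int conn = accept(listener, NULL, NULL);                 // accept() gives you the socket you actually read from
    char buf[64];
    ssize_t n = recv(conn, buf, sizeof(buf), 0);
    if (n > 0)
        printf("received: %.*s\n", (int)n, buf);

    close(conn);
    close(listener);
    return 0;
}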

As Adam pointed out, I had to use the listen() method first and then call the accept() method, which returns a socket that can actually receive the message. That receiver socket then reads into a char[N] buffer.
import std.socket, std.experimental.logger;

class UDSIPC
{
private:
    static immutable size_t messageBufferSize = 64;
    static immutable string socketAddressName = "\0/tmp/com.localserver.myapp";
    Socket socket;
    UnixAddress uaddr;

public:
    this(in string socketAddressName = socketAddressName)
    {
        socket = new Socket(AddressFamily.UNIX, SocketType.STREAM);
        uaddr = new UnixAddress(socketAddressName); // use the address passed in (defaults to the static one)
    }

    // True if this is the only running instance, i.e. bind() succeeded.
    bool getUniqueness()
    {
        bool result;
        scope (success)
            log("returns: ", result);
        try
        {
            socket.bind(uaddr);
            result = true;
        }
        catch (SocketOSException e)
            result = false;
        // else throw error
        return result;
    }

    // Blocks until a client connects, then returns whatever it sent.
    string getMessage()
    {
        socket.listen(0);
        auto receiverSocket = socket.accept();
        char[messageBufferSize] buffer;
        auto amount = receiverSocket.receive(buffer);
        import std.string;
        return format!"%s"(buffer[0 .. amount]);
    }

    void sendMessage(in string message)
    {
        socket.connect(uaddr);
        socket.send(message);
    }
}

void main()
{
    auto ipc = new UDSIPC();
    if (ipc.getUniqueness())
    {
        while (true)
        {
            log(ipc.getMessage());
        }
    }
    else
    {
        import std.stdio, std.string;
        ipc.sendMessage(readln().chomp());
    }
}

Related

Reason for losing messages over NNG sockets in raw mode

Some context to my problem:
I need to establish inter-process communication using C++ and sockets, and I picked the NNG library for that, along with the nngpp C++ wrapper. I need to use the push/pull protocol, so no context handling is available to me. I wrote some code based on the raw example from the nngpp demo. The difference here is that, by using the push/pull protocol, I split this into two separate programs: one for sending and one for receiving.
Problem description:
I need to receive, let's say, a thousand or more messages per second. For now, all messages are captured only when I send about 50 per second. That is way too slow, and I do believe it can be done faster. The faster I send, the more I lose. At the moment, when sending 1000 msg/s, I lose about 150 messages.
Some words about the code:
The code may use the C++17 standard. It is written in an object-oriented manner; in the end I want to have a class with a "receive" method that simply gives me the received messages. For now, I just print the results on screen. Below, I supply some parts of the project with descriptions:
NOTE: msgItem is a struct like this:

struct msgItem {
    nng::aio aio;
    nng::msg msg;
    nng::socket_view itemSock;
    explicit msgItem(nng::socket_view sock) : itemSock(sock) {}
};

It is taken from the example mentioned above.
This is the callback function that is executed when a message is received by one of the aios (the callback is passed to the constructor of the aio object). It checks whether the transmission was OK, retrieves my Payload (just a string for now) and pushes it onto the queue while a flag is set. Then I want to print those messages from the queue using a separate thread.
void ReceiverBase<Payload>::aioCallback(void *arg) try {
    msgItem *msgItem = (struct msgItem *)arg;
    Payload retMsg{};
    auto result = msgItem->aio.result();
    if (result != nng::error::success) {
        throw nng::exception(result);
    }
    // Here we extract the message
    auto msg = msgItem->aio.release_msg();
    auto const *data = static_cast<typename Payload::value_type *>(msg.body().data());
    auto const count = msg.body().size() / sizeof(typename Payload::value_type);
    std::copy(data, data + count, std::back_inserter(retMsg));
    {
        std::lock_guard<std::mutex> lk(m_msgMx);
        newMessageFlag = true;
        m_messageQueue.push(std::move(retMsg));
    }
    msgItem->itemSock.recv(msgItem->aio); // re-arm this aio so it can receive the next message
} catch (const nng::exception &e) {
    fprintf(stderr, "server_cb: %s: %s\n", e.who(), e.what());
} catch (...) {
    fprintf(stderr, "server_cb: unknown exception\n");
}
This is the separate thread for listening for the flag change and printing. The while loop at the end keeps the program running. I use msgCounter to count successfully received messages.
void ReceiverBase<Payload>::start() {
    auto listenerLambda = []() {
        std::string temp;
        while (true) {
            std::lock_guard<std::mutex> lg(m_msgMx);
            if (newMessageFlag) {
                temp = std::move(m_messageQueue.front());
                m_messageQueue.pop();
                ++msgCounter;
                std::cout << msgCounter << "\n";
                newMessageFlag = false;
            }
        }
    };
    std::thread listenerThread(listenerLambda);
    while (true) {
        std::this_thread::sleep_for(std::chrono::microseconds(1));
    }
}
This is my sender application. I tweak the frequency of message sending by changing the value in std::chrono::milliseconds(val).
int main(int argc, char *argv[])
{
    std::string connection_address{"ipc:///tmp/async_demo1"};
    std::string longMsg{" here normally I have some long test text"};
    std::cout << "Trying connecting sender:";
    StringSender sender(connection_address);
    sender.setupConnection();
    for (int i = 0; i < 1000; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(3));
        sender.send(longMsg);
    }
}
And this is the receiver:

int main(int argc, char *argv[])
{
    std::string connection_address{"ipc:///tmp/async_demo1"};
    std::cout << "Trying connecting receiver:";
    StringReceiver receiver(connection_address);
    receiver.setupConnection();
    std::cout << "Connection set up. \n";
    receiver.start();
    return 0;
}
Nothing special in those two applications, as you see. The setup method from StringReceiver looks like this:
bool ReceiverBase<Payload>::setupConnection() {
    m_connected = false;
    try {
        for (size_t i = 0; i < m_parallel; ++i) {
            m_msgItems.at(i) = std::make_unique<msgItem>(m_sock);
            m_msgItems.at(i)->aio =
                nng::aio(ReceiverBase::aioCallback, m_msgItems.at(i).get());
        }
        m_sock.listen(m_adress.c_str());
        m_connected = true;
        for (size_t i = 0; i < m_parallel; ++i) {
            m_msgItems.at(i)->itemSock.recv(m_msgItems.at(i)->aio);
        }
    } catch (const nng::exception &e) {
        printf("%s: %s\n", e.who(), e.what());
    }
    return m_connected;
}
Do you have any suggestions why the performance is so low? Do I use the lock_guards properly here? What I want them to do is basically lock the flag and the queue so that only one side has access to them at a time; a sketch of a locking pattern along those lines is shown below.
NOTE: Adding more listener threads does not affect the performance either way.
NOTE 2: newMessageFlag is atomic.
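A minimal sketch of that producer/consumer locking shape, using a condition variable so the consumer sleeps instead of polling the flag (the names are illustrative and not the classes from this code):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::mutex msgMx;
std::condition_variable msgCv;
std::queue<std::string> messageQueue;

// Called from the receive path: push under the lock, then wake the consumer.
void producer(const std::string &payload) {
    {
        std::lock_guard<std::mutex> lk(msgMx);
        messageQueue.push(payload);
    }
    msgCv.notify_one();
}

// Prints a fixed number of messages, sleeping on the condition variable in between.
void consumer(int expected) {
    std::unique_lock<std::mutex> lk(msgMx);
    for (int printed = 0; printed < expected; ) {
        msgCv.wait(lk, [] { return !messageQueue.empty(); });
        while (!messageQueue.empty()) { // drain the queue while holding the lock
            std::cout << messageQueue.front() << "\n";
            messageQueue.pop();
            ++printed;
        }
    }
}

int main() {
    std::thread consumerThread(consumer, 5);
    for (int i = 0; i < 5; ++i)
        producer("message " + std::to_string(i));
    consumerThread.join();
}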

How to use gRPC C++ ClientAsyncReader<Message> for server-side streams

I am using a very simple proto where the Message contains only 1 string field. Like so:
service LongLivedConnection {
    // Starts a grpc connection
    rpc Connect(Connection) returns (stream Message) {}
}

message Connection {
    string userId = 1;
}

message Message {
    string serverMessage = 1;
}
The use case is that the client should connect to the server, and the server will use this gRPC stream for push messages.
Now, for the client code, assuming that I am already in a worker thread, how do I properly set it up so that I can continuously receive messages that come from the server at random times?
void StartConnection(const std::string& user) {
    Connection request;
    request.set_userId(user);

    Message message;
    ClientContext context;

    stub_->Connect(&context, request, &reply);

    // What should I do from now on?
    // notify(serverMessage);
}

void notify(std::string message) {
    // generate message events and pass to main event loop
}
I figured out how to use the API. It looks like it is pretty flexible, but still a little bit weird, given that I typically just expect an async API to take some kind of lambda callback.
The code below is blocking, so you'll have to run this in a different thread so it doesn't block your application; a small sketch of that follows the code.
I believe you can have multiple threads accessing the CompletionQueue, but in my case I just had one single thread handling this gRPC connection.
GrpcConnection.h file:

public:
    void StartGrpcConnection();
private:
    std::shared_ptr<grpc::Channel> m_channel;
    std::unique_ptr<grpc::ClientReader<push_notifications::Message>> m_reader;
    std::unique_ptr<push_notifications::PushNotificationService::Stub> m_stub;

GrpcConnection.cpp file:
...
void GrpcConnectionService::StartGrpcConnection()
{
    m_channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());

    LongLiveConnection::Connect request;
    request.set_user_id(12345);
    m_stub = LongLiveConnection::LongLiveConnectionService::NewStub(m_channel);

    grpc::ClientContext context;
    grpc::CompletionQueue cq;
    std::unique_ptr<grpc::ClientAsyncReader<LongLiveConnection::Message>> reader =
        m_stub->PrepareAsyncConnect(&context, request, &cq);

    void* got_tag;
    bool ok = false;
    LongLiveConnection::Message reply;

    reader->StartCall((void*)1);
    cq.Next(&got_tag, &ok);
    if (ok && got_tag == (void*)1)
    {
        // StartCall() is successful if ok is true and got_tag is (void*)1.
        // Start the first read with a different hardcoded tag.
        reader->Read(&reply, (void*)2);
        while (true)
        {
            ok = false;
            cq.Next(&got_tag, &ok);
            if (got_tag == (void*)2)
            {
                // This is a message from the server.
                std::string body = reply.server_message();
                // Do whatever you want with body; in my case I push it to my
                // application's event stream to be processed by other components.
                // Lastly, initiate another read.
                reader->Read(&reply, (void*)2);
            }
            else if (got_tag == (void*)3)
            {
                // If you do something else in your call, such as listening to gRPC channel
                // state changes, you can pass a different hardcoded tag; then, in here, you
                // will be notified when the result is received from that call.
            }
        }
    }
}
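Since StartGrpcConnection() blocks in the cq.Next() loop, a minimal sketch of driving it from its own thread (this assumes GrpcConnectionService from the snippet above is default-constructible; adapt the construction to your application):

#include <thread>

int main() {
    GrpcConnectionService service;
    std::thread grpcThread([&service] { service.StartGrpcConnection(); });

    // ... the rest of the application keeps running here without being blocked ...

    grpcThread.join(); // the loop above never exits as written, so a real
                       // application would add a shutdown path before joining
}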

C++ GRPC Async Bidirectional Streaming - How to tell when a client sent a message?

There is zero documentation on how to do an async bidirectional stream with gRPC. I've made guesses by piecing together the regular async examples with what I found in people's GitHub repositories.
With the Frankenstein code I have, I cannot figure out how to tell when a client sent me a message. Here is the procedure I have running on its own thread.
void GrpcStreamingServerImpl::listeningThreadProc()
{
    try
    {
        // I think we make a call to the RPC method and wait for others to stream to it?
        ::grpc::ServerContext context;
        void * ourOneAndOnlyTag = reinterpret_cast<void *>(1); ///< Identifies the call we are going to make. I assume we can only handle one client
        ::grpc::ServerAsyncReaderWriter<mycompanynamespace::OutputMessage,
                                        mycompanynamespace::InputMessage>
            stream(&context);
        m_service.RequestMessageStream(&context, &stream, m_completionQueue.get(), m_completionQueue.get(), ourOneAndOnlyTag);

        // Now I'm going to loop and get events from the completion queue
        bool keepGoing = false;
        do
        {
            void* tag = nullptr;
            bool ok = false;
            const std::chrono::time_point<std::chrono::system_clock> deadline(std::chrono::system_clock::now() +
                                                                               std::chrono::seconds(1));
            grpc::CompletionQueue::NextStatus nextStatus = m_completionQueue->AsyncNext(&tag, &ok, deadline);
            switch(nextStatus)
            {
                case grpc::CompletionQueue::NextStatus::TIMEOUT:
                {
                    keepGoing = true;
                    break;
                }
                case grpc::CompletionQueue::NextStatus::GOT_EVENT:
                {
                    keepGoing = true;
                    if(ok)
                    {
                        // This seems to get called if a client connects
                        // It does not get called if we didn't call 'RequestMessageStream' before the loop started
                        // TODO - How do we tell when the client sent us a message?
                        // TODO - How do we know if they are just connecting?
                        // TODO - How do we get the message the client sent?
                        // The tag corresponds to the request we made
                        if(tag == reinterpret_cast<void *>(1))
                        {
                            // SNIP successful writing of a message
                            stream.Write(*(outputMessage.get()), reinterpret_cast<void*>(2));
                        }
                        else if(tag == reinterpret_cast<void *>(2))
                        {
                            // This is telling us the message we sent was completed
                        }
                        else
                        {
                            // TODO - I dunno what else it can be
                        }
                    }
                    break;
                }
                case grpc::CompletionQueue::NextStatus::SHUTDOWN:
                {
                    keepGoing = false;
                    break;
                }
            }
        } while(keepGoing);

        // Completion queue was shutdown
    }
    catch(std::exception& e)
    {
        QString errorMessage(
            QString("An std::exception was caught in the listening thread. Exception message: %1").arg(e.what()));
        m_backPointer->onImplError(errorMessage);
    }
    catch(...)
    {
        QString errorMessage("An exception of unknown type, was caught in the listening thread.");
        m_backPointer->onImplError(errorMessage);
    }
}
The setup looked like this:
// Start up the grpc service
grpc::ServerBuilder builder;
builder.RegisterService(&m_service);
builder.AddListeningPort(endpoint.toStdString(), grpc::InsecureServerCredentials());
m_completionQueue = builder.AddCompletionQueue();
m_server = builder.BuildAndStart();
// Start the listening thread
m_listeningThread = QThread::create(&GrpcStreamingServerImpl::listeningThreadProc, this);
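For completeness, the usual way to tell that the client sent a message with ServerAsyncReaderWriter is to post an asynchronous Read with its own tag, and treat completion of that tag as "a message arrived". A rough fragment in the style of the loop above; inputMessage and the extra tag values are illustrative, not from the original code:

// Illustrative tag values; tag 1 is the RequestMessageStream tag used above.
void* const connectTag = reinterpret_cast<void*>(1);
void* const readTag    = reinterpret_cast<void*>(2);
void* const writeTag   = reinterpret_cast<void*>(3);

mycompanynamespace::InputMessage inputMessage; // filled in by each completed Read

// ... inside the GOT_EVENT case, when ok is true ...
if (tag == connectTag)
{
    // A client has connected; start waiting for its first message.
    stream.Read(&inputMessage, readTag);
}
else if (tag == readTag)
{
    // The client sent a message; inputMessage now contains it.
    // Handle it, then post another Read to wait for the next one.
    stream.Read(&inputMessage, readTag);
}
else if (tag == writeTag)
{
    // A Write posted earlier has completed.
}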

Sending data in second thread with Mongoose server

I'm trying to create a multithreaded server application using the Mongoose web server library.
I have a main thread serving connections and sending requests to processors that work in their own threads. The processors then place results into a queue, and a queue observer must send the results back to the clients.
The sources look like this.
Here I prepare the data for the processors and place it into the queue:
typedef std::pair<struct mg_connection*, const char*> TransferData;

int server_app::event_handler(struct mg_connection *conn, enum mg_event ev)
{
    Request req;
    if (ev == MG_AUTH)
        return MG_TRUE; // Authorize all requests
    else if (ev == MG_REQUEST)
    {
        req = parse_request(conn);
        task_queue->push(TransferData(conn, req.second));
        mg_printf(conn, "%s", ""); // (1)
        return MG_MORE;            // (2)
    }
    else
        return MG_FALSE; // Rest of the events are not processed
}
And here I'm trying to send the result back. This function runs in its own thread.
void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3)
    }
}
The problem is that the client doesn't receive anything from the server.
If I run the check_results function manually in the event_handler after placing a task into the queue and then pass the computed result back to event_handler, I'm able to send it to the client using mg_printf_data (returning MG_TRUE). Any other way, I'm not.
What exactly should I change in these sources to make it work?
OK... It looks like I've solved it myself.
I'd been looking into the mongoose.c code, and an hour later I found the piece of code below:
static void write_terminating_chunk(struct connection *conn) {
    mg_write(&conn->mg_conn, "0\r\n\r\n", 5);
}

static int call_request_handler(struct connection *conn) {
    int result;
    conn->mg_conn.content = conn->ns_conn->recv_iobuf.buf;
    if ((result = call_user(conn, MG_REQUEST)) == MG_TRUE) {
        if (conn->ns_conn->flags & MG_HEADERS_SENT) {
            write_terminating_chunk(conn);
        }
        close_local_endpoint(conn);
    }
    return result;
}
So I tried doing mg_write(&conn->mg_conn, "0\r\n\r\n", 5); after line (3), and now it's working; a sketch with the fix in place is below.
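A sketch of check_results() with that fix applied (res.first is the mg_connection stored in the queue, so it takes the place of &conn->mg_conn here):

void server_app::check_results()
{
    while (true)
    {
        TransferData res;
        if (!res_queue->pop(res))
        {
            boost::this_thread::sleep_for(boost::chrono::milliseconds(100));
            continue;
        }
        mg_printf_data(res.first, "%s", res.second); // (3) send the chunked body
        mg_write(res.first, "0\r\n\r\n", 5);          // terminating zero-length chunk, as in mongoose.c
    }
}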

Cannot get response when using socket in C++.NET

I wrote two programs, one as the server and the other as the client. The server is written in standard C++ using WinSock2.h. It is a simple echo server, which means the server sends back to the client whatever it receives. I use a new thread for every client connection, and in each thread:
Socket* s = (Socket*) a;
while (1) {
    std::string r = s->ReceiveLine();
    if (r.empty()) {
        break;
    }
    s->SendLine(r);
}
delete s;
return 0;
Socket is a class from here. The server runs properly; I've tested it using telnet, and it works well.
Then I wrote the client using C++.NET (or C++/CLI); TcpClient is used to send and receive messages from the server. The code is like this:
String^ request = "test";
TcpClient ^ client = gcnew TcpClient(server, port);
array<Byte> ^ data = Encoding::ASCII->GetBytes(request);
NetworkStream ^ stream = client->GetStream();
stream->Write(data, 0, data->Length);
data = gcnew array<Byte>(256);
String ^ response = String::Empty;
int bytes = stream->Read(data, 0, data->Length);
response = Encoding::ASCII->GetString(data, 0, bytes);
client->Close();
When I run the client and try to show the response message on my form, the program halts at the line int bytes = stream->Read(data, 0, data->Length); and cannot fetch the response. The server is running, and it has nothing to do with the network, as they are all running on the same computer.
I guess the reason is that the data the server responds with is shorter than data->Length, so the Read method is waiting for more data. Is that right? How should I solve this problem?
Edit
I think I've solved the problem... There are another two methods in the Socket class, ReceiveBytes and SendBytes, and these two methods do not trim the unused space in the byte array. So the length of the data coming back from the server matches what the client sent, and thus the Read method does not wait for more data to arrive.
std::string Socket::ReceiveLine() {
    std::string ret;
    while (1) {
        char r;
        switch (recv(s_, &r, 1, 0)) {
            case 0: // not connected anymore;
                    // ... but last line sent
                    // might not end in \n,
                    // so return ret anyway.
                return ret;
            case -1:
                return "";
                // if (errno == EAGAIN) {
                //     return ret;
                // } else {
                //     // not connected anymore
                //     return "";
                // }
        }
        ret += r;
        if (r == '\n') return ret;
    }
}
I guess the ReceiveLine function of the server is waiting for a newline character, '\n'.
Just try it with a "test\n" string:
String^ request = "test\n";
// other codes....