ZeroMQ Pub Sending Empty String - c++

I've got a simple C++ PUB and Python SUB set up, with the intent to have the C++ side built as a simple DLL eventually. I've had prior experience with a similar setup with Python on both sides, with no issues. I am, however, a total C++ noob.
My C++ code:
#define ZMQ_EXPORT
#include "stdafx.h"
#include "zmq.hpp"

int _tmain(int argc, _TCHAR* argv[]) {
    zmq::context_t context(1);
    zmq::socket_t publisher(context, ZMQ_PUB);
    publisher.bind("tcp://*:6666");

    zmq::message_t message(5);
    memcpy(message.data(), "Hello", 5);

    while (true) {
        Sleep(500);
        publisher.send(message);
    }
    return 0;
}
Result from python SUB script on recv_multipart():
['']
I am confident it is otherwise working, though I think there's a flaw with how I am doing the memcpy.

I'm thinking you're missing the whole 'subscription' part of pub/sub.
You need to give the PUB message some sort of message filter. This also means that your SUB needs to call setsockopt to be able to receive messages.
Your example shows that you in fact do not have a message filter for your PUB message (or rather, your "Hello" IS your message filter, and the data message is in fact an empty string).
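For illustration, here is a sketch of the PUB loop sending an explicit filter/topic frame followed by a data frame, plus the matching subscription on the Python side; the topic text and payload are my own placeholders, not taken from the question:

// Sketch only: topic frame + payload frame, rebuilt on every iteration.
while (true) {
    Sleep(500);
    zmq::message_t topic(5);
    memcpy(topic.data(), "Hello", 5);        // acts as the subscription filter
    publisher.send(topic, ZMQ_SNDMORE);      // more frames follow
    zmq::message_t body(5);
    memcpy(body.data(), "World", 5);         // the actual payload
    publisher.send(body);
}
// Python SUB side (before recv_multipart()):
//   subscriber.setsockopt(zmq.SUBSCRIBE, b"Hello")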

Related

How can I add a subscriber in C++ in ROS

So I am trying to add a subscriber to a specific topic.
The purpose of the subscriber is to get range messages from the pi_sonar topic and use it in the code.
This is the code here:
line Follower code
So, if I wanted to add the sonar messages, should it look like this:
void turtlebot::range_sub('package name of sonars'::Range msg) {
    turtlebot::rng = msg.range;
}
based on what I was able to understand here, I mean…
Is that correct?
I am gonna try it once I have my hands on the robot.
You can follow this tutorial; it explains exactly what you are looking for:
#include <ros/ros.h>
#include <sensor_msgs/Range.h>

void sonarCallback(const sensor_msgs::Range::ConstPtr& msg)
{
    ROS_INFO("Sonar Seq: [%d]", msg->header.seq);
    ROS_INFO("Sonar Range: [%f]", msg->range);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "infrared_listener");
    ros::NodeHandle n;
    ros::Subscriber sub = n.subscribe("sensor/sonar0", 1000, sonarCallback);
    ros::spin();
    return 0;
}
Basically, you have to create a ros::Subscriber whose callback (sonarCallback here) listens to incoming messages; inside it you can implement whatever logic you need for the sonar sensor reading. Please go through the link I shared, and if something is not clear, update the question accordingly.
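If you want the reading stored in a class member, as in your snippet, a member-function callback is one way to do it. A rough sketch, where the topic name and the member names are assumptions based on the question:

#include <ros/ros.h>
#include <sensor_msgs/Range.h>

class turtlebot {
public:
    explicit turtlebot(ros::NodeHandle& n) {
        // Bind a member function as the callback for the sonar topic.
        range_sub_ = n.subscribe("pi_sonar/sonar0", 10, &turtlebot::rangeCallback, this);
    }

    void rangeCallback(const sensor_msgs::Range::ConstPtr& msg) {
        rng = msg->range;   // keep the latest sonar reading for the line-follower logic
    }

private:
    ros::Subscriber range_sub_;
    float rng = 0.0f;       // last range reading
};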

ZeroMQ PubSub using inproc sockets hangs forever

I'm adapting a TCP PubSub example to use inproc with multiple threads. It ends up hanging forever.
My setup
macOS Mojave, Xcode 10.3
zmq 4.3.2
The source code reproducing the issue:
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <thread>
#include "zmq.h"

void hello_pubsub_inproc() {
    void* context = zmq_ctx_new();
    void* publisher = zmq_socket(context, ZMQ_PUB);
    printf("Starting server...\n");
    int pub_conn = zmq_bind(publisher, "inproc://*:4040");

    void* subscriber = zmq_socket(context, ZMQ_SUB);
    printf("Collecting stock information from the server.\n");
    int sub_conn = zmq_connect(subscriber, "inproc://localhost:4040");
    sub_conn = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, 0, 0);

    std::thread t_pub = std::thread([&]{
        const char* companies[2] = {"Company1", "Company2"};
        int count = 0;
        for(;;) {
            int which_company = count % 2;
            int index = (int)strlen(companies[0]);
            char update[12];
            snprintf(update, sizeof update, "%s", companies[which_company]);
            zmq_msg_t message;
            zmq_msg_init_size(&message, index);
            memcpy(zmq_msg_data(&message), update, index);
            zmq_msg_send(&message, publisher, 0);
            zmq_msg_close(&message);
            count++;
        }
    });

    std::thread t_sub = std::thread([&]{
        int i;
        for(i = 0; i < 10; i++) {
            zmq_msg_t reply;
            zmq_msg_init(&reply);
            zmq_msg_recv(&reply, subscriber, 0);
            int length = (int)zmq_msg_size(&reply);
            char* value = (char*)malloc(length);
            memcpy(value, zmq_msg_data(&reply), length);
            zmq_msg_close(&reply);
            printf("%s\n", value);
            free(value);
        }
    });

    t_pub.join();

    // Give publisher time to set up.
    sleep(1);

    t_sub.join();

    zmq_close(subscriber);
    zmq_close(publisher);
    zmq_ctx_destroy(context);
}

int main (int argc, char const *argv[]) {
    hello_pubsub_inproc();
    return 0;
}
The result
Starting server...
Collecting stock information from the server.
I've also tried adding this before joining threads to no avail:
zmq_proxy(publisher, subscriber, NULL);
The workaround: Replacing inproc with tcp fixes it instantly. But shouldn't inproc target in-process use cases?
Quick research tells me that it couldn't have been the order of bind vs. connect, since that problem is fixed in my zmq version.
The example below somehow tells me I don't have a missing shared-context issue, because it uses none:
ZeroMQ Subscribers not receiving message from Publisher over an inproc: transport class
I read from the Guide in the section Signaling Between Threads (PAIR Sockets) that
You can use PUB for the sender and SUB for the receiver. This will correctly deliver your messages exactly as you sent them and PUB does not distribute as PUSH or DEALER do. However, you need to configure the subscriber with an empty subscription, which is annoying.
What does it mean by an empty subscription?
Where am I going wrong?
You can use PUB for the sender and SUB for the receiver. This will correctly deliver your messages exactly as you sent them and PUB does not distribute as PUSH or DEALER do. However, you need to configure the subscriber with an empty subscription, which is annoying.
Q: What does it mean by an empty subscription?
It means setting (configuring) a subscription, i.e., the topic filter that drives message delivery, using an empty subscription string, so that every message passes the filter.
Q: Where am I going wrong?
Here:
// sub_conn = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, 0, 0);   // Wrong
sub_conn    = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);  // Empty string
There are also doubts here about proper syntax and naming rules:
// int pub_conn = zmq_bind(publisher, "inproc://*:4040");
int pub_conn = zmq_bind(publisher, "inproc://<aStringWithNameMax256Chars>");
The inproc:// transport class does not use any external networking stack; it maps the socket's I/O onto one or more memory locations (it is stack-less and does not even require an I/O thread).
Given that, there is no "<address>:<port#>" for any such (here missing) protocol to interpret, so the string is used as-is to identify which memory location the message data will go into.
So "inproc://*:4040" does not get expanded; it is used literally as the name of an inproc:// memory location identified as [*:4040]. A subsequent .connect("inproc://localhost:4040") will then lexically miss the prepared memory location ["*:4040"], because the strings do not match.
That .connect() therefore ought to fail, but the error handling may be silent: since version 4.x it is no longer necessary to .bind() first (creating a "known" named memory location for inproc://) before calling .connect(), so v4.0+ will most probably not raise any error when you .bind("inproc://*:4040") to one landing zone and then ask for a non-matching .connect("inproc://localhost:4040") that has no previously prepared landing zone in an existing named memory location.
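Putting the two corrections together, a minimal sketch of the relevant lines (the endpoint name "stock_feed" is my own choice; any matching string would do):

void* context   = zmq_ctx_new();

void* publisher = zmq_socket(context, ZMQ_PUB);
zmq_bind(publisher, "inproc://stock_feed");          // a plain name, no <address>:<port#>

void* subscriber = zmq_socket(context, ZMQ_SUB);
zmq_connect(subscriber, "inproc://stock_feed");      // must match the bind string exactly
zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);    // empty subscription = receive everything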

How to send a serialized leap motion frame containing \0's via zeromq?

I need to send a leap motion frame via network using ZeroMQ.
The send and receive functionality already seems to work but I have a problem with the data being sent.
The Leap::Frame class contains a serialize and a deserialize method which creates a byte string of the given frame (or recreates a frame from a string).
For this simple example I send the string, without it being encapsulated in any class or anything else, from the client to the server.
The problem is that the byte string seems to have some \0's in it, so only the data up to the first \0 arrives at the server.
The client:
int main(int argc, char** argv)
{
    Leap::Controller controller;
    zmq::context_t context = zmq::context_t(1);
    zmq::socket_t client = zmq::socket_t(context, ZMQ_REQ);
    client.connect("tcp://192.168.0.101:6881");

    while(true)
    {
        Leap::Frame frame = controller.frame(0);
        std::string frame_string = frame.serialize();
        zmq::message_t message( frame_string.size() );
        memcpy(message.data(), &frame_string, frame_string.size());
        client.send(message);
    }
    return 0;
}
The server:
int main(int argc, char** argv)
{
    zmq::context_t context = zmq::context_t(1);
    zmq::socket_t server = zmq::socket_t(context, ZMQ_REP);
    server.bind("tcp://*:6881");

    while(true)
    {
        zmq::message_t message;
        server.recv(&message);
        std::string frame_string(static_cast<char*>(message.data()), message.size());
        Leap::Frame received_frame;
        received_frame.deserialize(frame_string);
    }
    return 0;
}
These are the first 200 chars of a serialized frame on the client side (total size for a frame with one hand is around 3700 chars). The first of many \0's is at position 149:
s\x1è\x1d\bÒϤ\x2\x10ÝË裧\x1\x1až\x3\b2\"3\n\xf\rEÝ{Á\x15Ì\x1d“C
\x1dæx•Á\x12\xf\r²\b\r¾\x15w/õ=\x1d†³{¿\x1a\xf\rw+ŠB\x15Ò0¡A\x1dB-
\x6B*\xf\rôX\x1a?\x15è\x5G¿\x1d|k7¾2!\n\xf\rœ„ÚÂ\x15¯£ÍC\x1d6´GÂ
\x15\x1fÁ\x18C\x18ÿÿÿÿÿÿÿÿÿ\x1:\x1b\tzù\x1c”hª\x1eÀ\x11jÆaê‘HIÀ
\x19\t‡\0p0_#ÀBW\n\x1b\t8O\x13U\x15Õï?\x11Y\x2\a;Y=²¿\x19fó™Æ<IJ?
\x12\x1b\t\x12\x1>\b<±?\x117T\x2
On the server side the following arrives:
s\x1è\x1d\bÒϤ\x2\x10ÝË裧\x1\x1až\x3\b2\"3\n\xf\rEÝ{Á\x15Ì\x1d“C
\x1dæx•Á\x12\xf\r²\b\r¾\x15w/õ=\x1d†³{¿\x1a\xf\rw+ŠB\x15Ò0¡A\x1dB-
\x6B*\xf\rôX\x1a?\x15è\x5G¿\x1d|k7¾2!\n\xf\rœ„ÚÂ\x15¯£ÍC\x1d6´GÂ
\x15\x1fÁ\x18C\x18ÿÿÿÿÿÿÿÿÿ\x1:\x1b\tzù\x1c”hª\x1eÀ\x11jÆaê‘HIÀ
\x19\t‡
So unsurprisingly only the chars until the first \0 arrive at the server.
Does anybody know a workaround to send a byte array with \0's in it or another way to solve this problem?
The solution should be as fast as possible since the leap sensor creates about 100 frames or more (up to 200) per second and I need to get as many of them as possible.
Thanks in advance.
Try initializing the message you send with the size, and copy from the string's character data rather than the string object itself. Currently you're memcpy'ing &frame_string, which is the address of the std::string object, not its contents.
zmq::message_t message( frame_string.size() );
memcpy(message.data(), frame_string.c_str(), frame_string.size());
client.send(message);
Also, when receiving a message you don't need to initialize it with a size; the recv call will size it for you.
zmq::message_t message;
server.recv(&message);
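As a side note, zmq::message_t can also be constructed straight from a buffer plus an explicit size, which copies every byte including embedded \0's; this is just an equivalent variant of the fix above:

zmq::message_t message(frame_string.data(), frame_string.size());
client.send(message);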

Running C++ application reacting to outside changes

I have a C++ application that will be running constantly. It is listening for messages from a wireless module.
#include <stdlib.h>
#include <stdio.h>

struct payload {
    char node[16];
    char message;
};
...
int main(int argc, char** argv) {
    mesh.setNodeID(0); //irrelevant
    mesh.begin();      //irrelevant

    while(1) {
        mesh.update();  //irrelevant
        mesh.DHCP();    //irrelevant

        while(network.available()) {
            struct payload received;
            mesh.read(header, &received, sizeof(received)); //irrelevant
        }

        //below code goes here
    }
}
And I want to be able to also send messages from this system.
I am currently reading a line from file:
//pseudo code
if (!fileEmpty) {
line = readLine();
struct payload send;
send.node = //splitted line
send.message = //splitted line
mesh.write(header, &send, sizeof(send));
And I split the line using strtok and assign the parts to a struct.
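For reference, the splitting I describe looks roughly like this (only a sketch; the comma delimiter and the field order in the line are assumptions):

// Assumes each line looks like "<node>,<message char>"; needs <string.h> for strtok/strncpy.
char buf[64];
strncpy(buf, line.c_str(), sizeof(buf) - 1);
buf[sizeof(buf) - 1] = '\0';

struct payload out = {};
char* tok = strtok(buf, ",");                           // first field: node id
if (tok) strncpy(out.node, tok, sizeof(out.node) - 1);
tok = strtok(NULL, ",");                                // second field: one-character message
if (tok) out.message = tok[0];

mesh.write(header, &out, sizeof(out));                  // same call as above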
But there should be a better way.
I can't split the sending code into a different file (called with arguments), because there is some problem with the wireless module when I am listening and sending messages simultaneously. Assuming I split the code into two different files, I could kill the listening program when sending and then run it again, but this seems like a bad way of doing things.
So I am out of ideas.

Mongoose C++ Http server get only MG_OPEN_FILE event

I have this server using mongoose, which takes some requests, parses the information, performs an action, and returns the result.
For example, I can query it this way: server:port/action?arg1=test&arg2=...
My problem is that any time I query the server I only get "MG_OPEN_FILE" events, and for each request I get 3 of them.
I read that it may be normal to have some of them in HTTP queries, but the problem here is that I don't get any "MG_NEW_REQUEST" events.
Basically, whenever I start the server, the first connection (and all of them after) always returns the following events:
MG_OPEN_FILE
MG_OPEN_FILE
MG_OPEN_FILE
MG_REQUEST_COMPLETE
I start my server this way:
int main(int argc, char* argv[]) {
    struct mg_context *ctx;
    const char *options[] = {"listening_ports", "8080", "num_threads", "10", NULL};

    ctx = mg_start(&callback, NULL, options);
    while(1) {
        getchar(); // Wait until user hits "enter"
    }
    mg_stop(ctx);
    return 0;
}
And the callback function starts with:
static void *callback(enum mg_event event, struct mg_connection *conn)
{
    const struct mg_request_info *request_info = mg_get_request_info(conn);
    if (event == MG_NEW_REQUEST)
    {
But it is always an "MG_OPEN_FILE" event, and I have no clue why :(
So if anyone has any idea about the reason for this, I would be extremely thankful!
When you're getting MG_OPEN_FILE, check (char *) mg_get_request_info(conn)->ev_data.
It contains the file name mongoose wants to open.
If you have that file in memory, return its data and size.
If you don't, return NULL.
Is your callback returning that you processed the event? I only return "yes" if I process the event.
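Combining both suggestions, a sketch of how the callback could look (this assumes the old mg_event-style mongoose API shown in the question and that nothing is served from memory):

static void *callback(enum mg_event event, struct mg_connection *conn)
{
    const struct mg_request_info *request_info = mg_get_request_info(conn);

    if (event == MG_OPEN_FILE) {
        const char *path = (const char *) request_info->ev_data;  // file mongoose wants to open
        (void) path;       // we serve nothing from memory, so fall through to normal handling
        return NULL;
    }

    if (event == MG_NEW_REQUEST) {
        // parse request_info->query_string, perform the action, write the reply (e.g. mg_printf)...
        return (void *) "yes";   // non-NULL tells mongoose the event was processed
    }

    return NULL;   // event not handled
}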