Prevent wxWidgets Boost thread from closing - C++

The following function is run in a Boost thread. In a console app it works like this:
void function()
{
    /* snip */
    try
    {
        /* Do stuff; in this case a P2P transfer is started using an external library.
           As long as this thread is alive, the P2P transfer will be too. */

        // Prevent the thread from closing: block until the user types something.
        char a;
        std::cin >> a;
    }
    catch (std::exception& e)
    {
        /* snip */
    }
}
This prevents the thread from closing until the user types something. Ultimately, what I want is this:
void function()
{
    int x = 1;
    /* snip */
    try
    {
        /* Do stuff; in this case a P2P transfer is started using an external library.
           As long as this thread is alive, the P2P transfer will be too. */

        // Prevent the thread from closing
        while (x == 1)
        {
            // Do nothing until the user stops the transfer (x becomes 0 when the user hits stop)
        }
    }
    catch (std::exception& e)
    {
        /* snip */
    }
}
But this is no good: the CPU spikes to 100%. I tried putting sleep(1); in the while loop, but it made no difference, and I'm not sure how it would affect the transfer.
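A common way to block without burning CPU is to have the thread wait on a condition variable that the GUI's stop handler signals. Here is a rough sketch of that idea (not taken from the original code; std::condition_variable is shown, but boost::condition_variable works the same way):

#include <condition_variable>
#include <mutex>

std::mutex stopMutex;
std::condition_variable stopCondition;
bool stopRequested = false;

void function()
{
    try
    {
        // Start the P2P transfer here...

        // Block until the user requests a stop; no CPU time is burned while waiting.
        std::unique_lock<std::mutex> lock(stopMutex);
        stopCondition.wait(lock, [] { return stopRequested; });
    }
    catch (std::exception& e)
    {
        /* snip */
    }
}

// Called from the GUI's "stop" button handler.
void requestStop()
{
    {
        std::lock_guard<std::mutex> lock(stopMutex);
        stopRequested = true;
    }
    stopCondition.notify_one();
}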

Related

How to determine which port has invoked an sc_thread if the thread is sensitive to multiple ports in SystemC?

I have a thread which is sensitive to an array of ports.
In this thread I want to find out which port triggered it, so that I can read the value of that port.
Is there a way to determine this?
Example code is given below.
What should the logic be to determine which port triggered thread_name()?
tb.h ----------------
class tb : public sc_core::sc_module
{
    sc_core::sc_out<uint32_t> port_name[10];
    void thread_name();
};
tb.cpp --------------
tb::tb(...)
{
    SC_THREAD(thread_name);
    for (int i = 0; i < 10; i++)
        sensitive << port_name[i];
    dont_initialize();
}
void tb::thread_name()
{
    // print the data of the port that triggered this thread
    // in this case, only port_name[2] should be read and the value 5 should be printed
}
int main()
{
    tb tb("");
    tb.port_name[2].write(5);
}
There is no standard way of identifying what triggered a particular process. You have several alternatives to choose from if you do need this ability. Here is one such alternative.
Note: I did not compile this code. This is just to give you a starting point.
tb::tb(...) {
    for (int i = 0; i != 10; ++i) {
        // Options for a thread statically sensitive to port_name[i]. According to the
        // standard, "the application is not obliged to keep the sc_spawn_options
        // object valid after the return from function sc_spawn".
        sc_core::sc_spawn_options opts;
        opts.dont_initialize();
        opts.set_sensitivity(&port_name[i]);
        // We bind the current value of i to a call to the thread_name function,
        // and create a spawned process out of this call.
        sc_core::sc_spawn(std::bind(&tb::thread_name, this, i), "", &opts);
    }
}
void tb::thread_name(std::size_t _port_id) {
    // _port_id tells you which port this spawned instance is sensitive to,
    // i.e. which port triggered it.
    // ...
}

Reason for losing messages over NNG sockets in raw mode

Some context to my problem:
I need to establish inter-process communication using C++ and sockets, and I picked the NNG library for that along with the nngpp C++ wrapper. I need to use the push/pull protocol, so no context handling is available to me. I wrote some code based on the raw example from the nngpp demo. The difference here is that, by using the push/pull protocol, I split this into two separate programs: one for sending and one for receiving.
Problem description:
I need to receive, let's say, a thousand or more messages per second. For now, all messages are captured only when I send about 50 per second. That is way too slow and I believe it can be done faster. The faster I send, the more I lose. At the moment, when sending 1000 msg/s I lose about 150 msgs.
Some words about the code
The code may use the C++17 standard. It is written in an object-oriented manner, so in the end I want to have a class with a "receive" method that simply gives me the received messages. For now, I just print the results on the screen. Below, I supply some parts of the project with descriptions:
NOTE: msgItem is a struct like this:
struct msgItem {
nng::aio aio;
nng::msg msg;
nng::socket_view itemSock;
explicit msgItem(nng::socket_view sock) : itemSock(sock) {}
};
It is taken from the example mentioned above.
This is the callback function that is executed when a message is received by one of the aios (the callback is passed in the constructor of the aio object). It checks whether the transmission was OK, retrieves my Payload (just a string for now), and pushes it onto a queue while setting a flag. I then want to print those messages from the queue on a separate thread.
void ReceiverBase<Payload>::aioCallback(void *arg) try {
msgItem *msgItem = (struct msgItem *)arg;
Payload retMsg{};
auto result = msgItem->aio.result();
if (result != nng::error::success) {
throw nng::exception(result);
}
//Here we extract the message
auto msg = msgItem->aio.release_msg();
auto const *data = static_cast<typename Payload::value_type *>(msg.body().data());
auto const count = msg.body().size()/sizeof(typename Payload::value_type);
std::copy(data, data + count, std::back_inserter(retMsg));
{
std::lock_guard<std::mutex> lk(m_msgMx);
newMessageFlag = true;
m_messageQueue.push(std::move(retMsg));
}
msgItem->itemSock.recv(msgItem->aio);
} catch (const nng::exception &e) {
fprintf(stderr, "server_cb: %s: %s\n", e.who(), e.what());
} catch (...) {
fprintf(stderr, "server_cb: unknown exception\n");
}
A separate thread listens for the flag change and prints. The while loop at the end keeps the program running. I use msgCounter to count successfully received messages.
void ReceiverBase<Payload>::start() {
    auto listenerLambda = [this]() {
        std::string temp;
        while (true) {
            std::lock_guard<std::mutex> lg(m_msgMx);
            if (newMessageFlag) {
                temp = std::move(m_messageQueue.front());
                m_messageQueue.pop();
                ++msgCounter;
                std::cout << msgCounter << "\n";
                newMessageFlag = false;
            }
        }
    };
    std::thread listenerThread(listenerLambda);
    while (true) {
        std::this_thread::sleep_for(std::chrono::microseconds(1));
    }
}
This is my sender application. I tweak the frequency of message sending by changing the value in std::chrono::milliseconds(val).
int main (int argc, char *argv[])
{
std::string connection_address{"ipc:///tmp/async_demo1"};
std::string longMsg{" here normally I have some long test text"};
std::cout << "Trying connecting sender:";
StringSender sender(connection_address);
sender.setupConnection();
for (int i=0; i<1000; ++i) {
std::this_thread::sleep_for(std::chrono::milliseconds(3));
sender.send(longMsg);
}
}
And this is the receiver:
int main (int argc, char *argv[])
{
std::string connection_address{"ipc:///tmp/async_demo1"};
std::cout << "Trying connecting receiver:";
StringReceiver receiver(connection_address);
receiver.setupConnection();
std::cout<< "Connection set up. \n";
receiver.start();
return 0;
}
Nothing special in those two applications, as you can see. The setup method from StringReceiver looks like this:
bool ReceiverBase<Payload>::setupConnection() {
m_connected = false;
try {
for (size_t i = 0; i < m_parallel; ++i) {
m_msgItems.at(i) = std::make_unique<msgItem>(m_sock);
m_msgItems.at(i)->aio =
nng::aio(ReceiverBase::aioCallback, m_msgItems.at(i).get());
}
m_sock.listen(m_adress.c_str());
m_connected = true;
for (size_t i = 0; i < m_parallel; ++i) {
m_msgItems.at(i)->itemSock.recv(m_msgItems.at(i)->aio);
}
} catch (const nng::exception &e) {
printf("%s: %s\n", e.who(), e.what());
}
return m_connected;
}
Do you have any suggestions why the performance is so low? Am I using the lock_guards properly here? What I want them to do is basically lock the flag and the queue so that only one side has access to them at a time.
NOTE: Adding more listener threads does not affect the performance either way.
NOTE2: newMessageFlag is atomic
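For comparison, the usual pattern for this kind of producer/consumer hand-off is to pair the mutex with a condition variable, so the listener blocks instead of spinning on a flag. A minimal sketch with illustrative names (not taken from the project above):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>

std::mutex queueMutex;
std::condition_variable queueCondition;
std::queue<std::string> messageQueue;

// Producer side, e.g. called from the aio callback.
void pushMessage(std::string msg) {
    {
        std::lock_guard<std::mutex> lock(queueMutex);
        messageQueue.push(std::move(msg));
    }
    queueCondition.notify_one();
}

// Consumer side, i.e. the listener thread.
void consumeLoop() {
    std::size_t counter = 0;
    while (true) {
        std::unique_lock<std::mutex> lock(queueMutex);
        queueCondition.wait(lock, [] { return !messageQueue.empty(); });
        while (!messageQueue.empty()) {
            std::string msg = std::move(messageQueue.front());
            messageQueue.pop();
            std::cout << ++counter << "\n";
        }
    }
}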

Event loop handling for sd-bus in libuv

We have an event loop from libuv to handle Unix sockets and TCP sockets. The program now also has to handle D-Bus, and we decided to use sd-bus for that.
Lennart wrote on his blog:
Note that our APIs, including sd-bus, integrate nicely into sd-event
event loops, but do not require it, and may be integrated into other
event loops too, as long as they support watching for time and I/O events.
So I assume it must be possible.
I can get the D-Bus socket fd via sd_bus_get_fd(sd_bus *bus).
But I can't find any obvious way to stop sd-bus from using its bus_poll method to wait for events internally.
For example, calling a method with sd_bus_call(...) will block in ppoll.
So: how do I handle the D-Bus events in libuv?
I figured it out; here's an example of how to combine C++, libuv and sd-bus:
I recommend that you read http://0pointer.de/blog/the-new-sd-bus-api-of-systemd.html to understand sd-bus in general.
These are code snippets from my implementation at https://github.com/TheJJ/horst
Method calls can then be done with sd_bus_call_async, which does not block (as opposed to sd_bus_call).
Don't forget to call update_events() after sd_bus_call_async so the call is sent out over the socket!
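For illustration, an asynchronous call could look roughly like this. The destination, object path, interface, method name and reply handler are made up for this sketch (DBusConnection and update_events() are shown further down):

// Hypothetical reply handler; the signature is the one sd_bus_message_handler_t requires.
static int on_reply(sd_bus_message *reply, void *userdata, sd_bus_error * /*ret_error*/) {
    const char *answer = nullptr;
    int r = sd_bus_message_read(reply, "s", &answer);
    if (r < 0) {
        fprintf(stderr, "[dbus] failed to parse reply: %s\n", strerror(-r));
        return r;
    }
    printf("[dbus] got reply: %s\n", answer);
    return 0;
}

void DBusConnection::call_something() {
    sd_bus_message *msg = nullptr;
    int r = sd_bus_message_new_method_call(
        this->get_bus(), &msg,
        "rofl.lol",    // destination (made up, matching the example below)
        "/rofl/lol",   // object path
        "rofl.lol",    // interface
        "SomeMethod"   // member (hypothetical)
    );
    if (r >= 0) {
        // Queue the call; the reply arrives later via on_reply. No slot object
        // is kept around in this sketch, and 0 selects the default timeout.
        r = sd_bus_call_async(this->get_bus(), nullptr, msg, on_reply, this, 0);
    }
    sd_bus_message_unref(msg);
    // Make sure the outgoing message is actually flushed to the socket.
    this->update_events();
}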
/**
* Callback function that is invoked from libuv
* once dbus events flowed in.
*/
static void on_dbus_ready(uv_poll_t *handle, int /*status*/, int /*events*/) {
DBusConnection *connection = (DBusConnection *)handle->data;
sd_bus *bus = connection->get_bus();
// let dbus handle the available events request
while (true) {
// this will trigger the dbus vtable-registered functions
int r = sd_bus_process(bus, nullptr);
if (r < 0) {
printf("[dbus] Failed to process bus: %s", strerror(-r));
break;
}
else if (r > 0) {
// try to process another request!
continue;
}
else {
// no more progress, wait for the next callback.
break;
}
}
// update the events we watch for on the socket.
connection->update_events();
}
/**
* Convert the sdbus-returned poll flags to
* corresponding libuv flags.
*/
int poll_to_libuv_events(int pollflags) {
int ret = 0;
if (pollflags & (POLLIN | POLLPRI)) {
ret |= UV_READABLE;
}
if (pollflags & POLLOUT) {
ret |= UV_WRITABLE;
}
// we also have the non-corresponding UV_DISCONNECT
return ret;
}
class DBusConnection {
public:
DBusConnection(Satellite *sat);
virtual ~DBusConnection() = default;
/** connect to dbus */
int connect() {
int r = sd_bus_open_system(&this->bus);
if (r < 0) {
printf("[dbus] Failed to connect to bus: %s", strerror(-r));
goto clean_return;
}
r = sd_bus_add_object_vtable(
this->bus,
&this->bus_slot,
"/rofl/lol", // object path
"rofl.lol", // interface name
your_vtable,
this // this is the userdata that'll be passed
// to the dbus methods
);
if (r < 0) {
printf("[dbus] Failed to install the horst sdbus object: %s", strerror(-r));
goto clean_return;
}
// register our service name
r = sd_bus_request_name(this->bus, "moveii.horst", 0);
if (r < 0) {
printf("[dbus] Failed to acquire service name: %s", strerror(-r));
goto clean_return;
}
// register the filedescriptor from
// sd_bus_get_fd(bus) to libuv
uv_poll_init(this->loop, &this->connection, sd_bus_get_fd(this->bus));
// make `this` reachable in callbacks.
this->connection.data = this;
// init the dbus-event-timer
uv_timer_init(this->loop, &this->timer);
this->timer.data = this;
// process initial events and set up the
// events and timers for subsequent calls
on_dbus_ready(&this->connection, 0, 0);
printf("[dbus] Listener initialized");
return 0;
clean_return:
sd_bus_slot_unref(this->bus_slot);
sd_bus_unref(this->bus);
this->bus = nullptr;
return 1;
}
/** update the events watched for on the filedescriptor */
void update_events() {
sd_bus *bus = this->get_bus();
// prepare the callback for calling us the next time.
int new_events = poll_to_libuv_events(
sd_bus_get_events(bus)
);
uint64_t usec;
int r = sd_bus_get_timeout(bus, &usec);
if (not r) {
// if the timer is running already, it is stopped automatically
// inside uv_timer_start.
uv_timer_start(
&this->timer,
[] (uv_timer_t *handle) {
// yes, handle is not a poll_t, but
// we just care for its -> data member anyway.
on_dbus_ready((uv_poll_t *)handle, 0, 0);
},
usec / 1000, // time in milliseconds, sd_bus provides µseconds
0 // don't repeat
);
}
// always watch for disconnects:
new_events |= UV_DISCONNECT;
// activate the socket watching,
// and if active, invoke the callback function
uv_poll_start(&this->connection, new_events, &on_dbus_ready);
}
/** close the connections */
int close() {
// TODO: maybe this memoryerrors when the loop actually
// does the cleanup. we have to wait for the callback.
uv_close((uv_handle_t *) &this->timer, nullptr);
uv_poll_stop(&this->connection);
sd_bus_close(this->bus);
sd_bus_slot_unref(this->bus_slot);
sd_bus_unref(this->bus);
return 0;
}
/**
* Return the bus handle.
*/
sd_bus *get_bus() const {
return this->bus;
}
protected:
/**
* loop handle
*/
uv_loop_t *loop;
/**
* polling object for dbus events
*/
uv_poll_t connection;
/**
* dbus also wants to be called periodically
*/
uv_timer_t timer;
/**
* dbus bus handle
*/
sd_bus *bus;
/**
* dbus slot handle
*/
sd_bus_slot *bus_slot;
};
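For completeness, wiring this into an application could look roughly like the following. This is only a sketch: it assumes the constructor stores the Satellite pointer and initializes the protected loop member with the application's uv_loop_t, neither of which is shown in the snippets above.

int main() {
    uv_loop_t *loop = uv_default_loop();

    // Satellite pointer omitted for the sketch; the real constructor is also
    // assumed to store `loop` so that connect() can register its handles.
    DBusConnection dbus(nullptr);
    if (dbus.connect() != 0) {
        return 1;
    }

    // From here on, on_dbus_ready() and update_events() keep the poll and timer
    // handles registered, so the D-Bus traffic is driven by this loop alongside
    // the existing Unix/TCP socket handles.
    uv_run(loop, UV_RUN_DEFAULT);

    dbus.close();
    return 0;
}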

POCO C++ Getting only one notification when socket is readable?

I am now writing a game server and here is my onReadable function:
void CSConnection::onReadable(const AutoPtr<ReadableNotification>& pNf)
{
    try
    {
        char *rbuff = new char[128](); // allocate memory for the incoming packet
        int n = _socket.receiveBytes(rbuff, 128);
        if (n > 8)
        {
            WorkerThreadPool::getInstance().tp->start(*new LogicHandler(*this, rbuff));
        }
        else
        {
            delete[] rbuff; // array new must be paired with array delete
            delete this;
        }
    }
    catch (Poco::Exception& exc)
    {
        app.logger().log(exc);
        delete this;
    }
}
I tried to move reading the packet into the thread pool's logic handler, but the notification is fired multiple times.
Is there a way to get only one notification?
Reading in a thread is slower, and onReadable ends up being called 5-9 times.
Thanks.

How to trace resource deadlocks?

I've written a timer using std::thread; here is what it looks like:
TestbedTimer::TestbedTimer(char type, void* contextObject) :
Timer(type, contextObject) {
this->active = false;
}
TestbedTimer::~TestbedTimer(){
if (this->active) {
this->active = false;
if(this->timer->joinable()){
try {
this->timer->join();
} catch (const std::system_error& e) {
std::cout << "Caught system_error with code " << e.code() <<
" meaning " << e.what() << '\n';
}
}
if(timer != nullptr) {
delete timer;
}
}
}
void TestbedTimer::run(unsigned long timeoutInMicroSeconds){
this->active = true;
timer = new std::thread(&TestbedTimer::sleep, this, timeoutInMicroSeconds);
}
void TestbedTimer::sleep(unsigned long timeoutInMicroSeconds){
unsigned long interval = 500000;
if(timeoutInMicroSeconds < interval){
interval = timeoutInMicroSeconds;
}
while((timeoutInMicroSeconds > 0) && (active == true)){
if (active) {
timeoutInMicroSeconds -= interval;
/// set the sleep time
std::chrono::microseconds duration(interval);
/// set thread to sleep
std::this_thread::sleep_for(duration);
}
}
if (active) {
this->notifyAllListeners();
}
}
void TestbedTimer::interrupt(){
this->active = false;
}
I'm not really happy with that kind of implementation since I let the timer sleep for a short interval and check if the active flag has changed (but I don't know a better solution since you can't interrupt a sleep_for call). However, my program core dumps with the following message:
thread is joinable
Caught system_error with code generic:35 meaning Resource deadlock avoided
thread has rejoined main scope
terminate called without an active exception
Aborted (core dumped)
I've looked up this error and it seems that I have a thread which waits for another thread (the reason for the resource deadlock). However, I want to find out where exactly this happens. I'm using a C library (which uses pthreads) in my C++ code; it provides, among other features, an option to run as a daemon, and I'm afraid that this interferes with my std::thread code. What's the best way to debug this?
I've tried to use helgrind, but this hasn't helped very much (it doesn't find any error).
TIA
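(As an aside on the remark above that a sleep_for call can't be interrupted: a condition-variable wait with a timeout can be. A rough sketch, with names that are illustrative rather than taken from TestbedTimer:)

#include <chrono>
#include <condition_variable>
#include <mutex>

class InterruptibleTimer {
public:
    // Returns true if the full timeout elapsed, false if interrupt() was called.
    bool wait(unsigned long timeoutInMicroSeconds) {
        std::unique_lock<std::mutex> lock(mutex);
        return !condition.wait_for(lock,
                                   std::chrono::microseconds(timeoutInMicroSeconds),
                                   [this] { return interrupted; });
    }

    void interrupt() {
        {
            std::lock_guard<std::mutex> lock(mutex);
            interrupted = true;
        }
        condition.notify_all();
    }

private:
    std::mutex mutex;
    std::condition_variable condition;
    bool interrupted = false;
};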
** EDIT: The code above is not just example code, but code I've written for a routing daemon. The routing algorithm is reactive, meaning it starts a route discovery only if it has no route to a desired destination and does not try to build up a routing table for every host in its network. Every time a route discovery is triggered, a timer is started. If the timer expires, the daemon is notified and the packet is dropped. Basically, it looks like this:
void Client::startNewRouteDiscovery(Packet* packet) {
AddressPtr destination = packet->getDestination();
...
startRouteDiscoveryTimer(packet);
...
}
void Client::startRouteDiscoveryTimer(const Packet* packet) {
RouteDiscoveryInfo* discoveryInfo = new RouteDiscoveryInfo(packet);
/// create a new timer of a certain type
Timer* timer = getNewTimer(TimerType::ROUTE_DISCOVERY_TIMER, discoveryInfo);
/// pass this class as the callback object which is notified if the timer expires (the class implements an interface for that)
timer->addTimeoutListener(this);
/// start the timer
timer->run(routeDiscoveryTimeoutInMilliSeconds * 1000);
AddressPtr destination = packet->getDestination();
runningRouteDiscoveries[destination] = timer;
}
If the timer has expired, the following method is called.
void Client::timerHasExpired(Timer* responsibleTimer) {
char timerType = responsibleTimer->getType();
switch (timerType) {
...
case TimerType::ROUTE_DISCOVERY_TIMER:
handleExpiredRouteDiscoveryTimer(responsibleTimer);
return;
....
default:
// if this happens it's a bug in our code
logError("Could not identify expired timer");
delete responsibleTimer;
}
}
I hope that helps to give a better understanding of what I'm doing. However, I did not intend to bloat the question with that additional code.