My application uses libhiredis with the libev backend. I need to send Redis async commands and process the resulting async callbacks. However, unlike the simple example from here, I cannot use the default event loop. The following code approximates that example with a custom event loop. When the only libev watcher is the io watcher installed by redisLibevAttach(), the event loop thread terminates immediately. You can see this by running
g++ -g -std=c++11 -Wall -Wextra -Werror hiredis_ev.cpp -o hiredis_ev -lpthread -lhiredis -lev && gdb ./hiredis_ev
where GDB happily prints that a new thread is created and almost immediately terminates. This is further confirmed by running info threads in GDB, which does not show my_ev_loop. However, if I change the code to add any other libev watcher, like a timer, then everything is fine. You can see this by running
g++ -g -DTIMER -std=c++11 -Wall -Wextra -Werror hiredis_ev.cpp -o hiredis_ev -lpthread -lhiredis -lev && ./hiredis_ev
I should not need a dummy libev timer to keep the event loop running. What am I missing?
#include <iostream>
#include <thread>
#include <hiredis/hiredis.h>
#include <hiredis/async.h>
#include <hiredis/adapters/libev.h>
static struct ev_loop *loop = nullptr;
static void redis_async_cb(redisAsyncContext *, void *, void *)
{
    std::cout << "Redis async callback" << std::endl;
    fflush(nullptr);
}
#ifdef TIMER
static ev_timer timer_w;
static void ev_timer_cb(EV_P_ ev_timer *, int)
{
    std::cout << "EV timer callback" << std::endl;
    fflush(nullptr);
}
#endif
int main()
{
    loop = ev_loop_new(EVFLAG_AUTO);
#ifdef TIMER
    ev_timer_init(&timer_w, ev_timer_cb, 0, 0.1);
    ev_timer_start(loop, &timer_w);
#endif
    redisAsyncContext* async_context = redisAsyncConnect("localhost", 6379);
    if (nullptr == async_context)
    {
        throw std::runtime_error("No redis async context");
    }
    redisLibevAttach(loop, async_context);
    std::thread ev_thread(ev_run, loop, 0);
    pthread_setname_np(ev_thread.native_handle(), "my_ev_loop");
    ev_thread.detach();
    // Give the event loop time to start
    while (!ev_iteration(loop))
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // Send a SUBSCRIBE message which should generate an async callback
    if (REDIS_OK != redisAsyncCommand(async_context, redis_async_cb, nullptr, "SUBSCRIBE foo"))
    {
        throw std::runtime_error("Could not issue redis async command");
    }
    std::cout << "Waiting for async callback" << std::endl;
    fflush(nullptr);
    // Wait forever (use CTRL-C to terminate)
    while (true)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return 0;
}
I found out that the hiredis community answers questions on their GitHub issue tracker. Since I hadn't yet received an answer here, I asked there. The answer can be found at https://github.com/redis/hiredis/issues/801#issuecomment-626400959
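For reference, the likely mechanism (assuming libev's documented behaviour is the whole story): ev_run() returns as soon as its loop has no active watchers, and the io watcher installed by redisLibevAttach() is only active while hiredis has pending work, so the loop thread can exit before the first command is issued. A minimal sketch of keeping the loop alive without a dummy timer, using an ev_async watcher (watcher and callback names are illustrative, not from the linked answer):

// Long-lived watcher that keeps ev_run() from returning while the hiredis
// io watcher is inactive; also gives other threads a clean wakeup hook.
static ev_async wakeup_w;

static void wakeup_cb(EV_P_ ev_async *, int)
{
    // Runs on the loop thread when someone calls ev_async_send();
    // a convenient place to call ev_break() for a clean shutdown.
}

// In main(), before starting the loop thread:
//   ev_async_init(&wakeup_w, wakeup_cb);
//   ev_async_start(loop, &wakeup_w);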
uv_timer_stop() does not appear to stop and reset the timer. How can I stop and reset the timer correctly?
#include <iostream>
#include <thread>
#include <unistd.h> // usleep
#include <uv.h>

uv_loop_t *loop;
uv_timer_t timer_req;
float timerSeconds = 2.0f;
float sleepSeconds = 1.0f;

void run() {
    while (true) uv_run(loop, UV_RUN_DEFAULT);
}

void timerCallback(uv_timer_t* handle) {
    std::cout << "Callback" << std::endl;
}

int main() {
    loop = uv_default_loop();
    std::thread t(run);
    uv_timer_init(loop, &timer_req);
    while (true) {
        std::cout << "sleep" << std::endl;
        uv_timer_start(&timer_req, timerCallback, 1000*timerSeconds, 0);
        usleep(1000*1000*sleepSeconds);
        uv_timer_stop(&timer_req);
    };
    return 0;
}
output on Debian Linux with libuv1-dev 1.24.1:
$ g++ main.cc -o main `pkg-config --libs libuv`
$ ./main
sleep
sleep
sleep
sleep
sleep
Callback
sleep
^C
Expected output: The timer is set to call the callback after two seconds (float timerSeconds = 2.0f), but we stop (and restart) the timer every second (float sleepSeconds = 1.0f), so the timer should never run long enough for the callback to execute, and the output should show only the 'sleep' messages, never 'Callback'. That is, it should be:
$ g++ main.cc -o main `pkg-config --libs libuv`
$ ./main
sleep
[...]
I've tried:
stopping the timer inside the callback
reinitialising the timer every time: using uv_timer_init() inside the while loop, before calling uv_timer_start()
Why I am doing this: I'm writing a program to detect the long-press of a button (a short-press of a button triggers a different action). There is a callback to detect a button-press, and a button-depress. The button-press should start the timer, and button-depress should stop the timer. If the button has been pressed down long enough (two seconds), the timer callback executes, and if we reach this callback, then we know that the button has been long-pressed.
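A likely culprit, assuming the libuv documentation applies here as written: the libuv API is not thread-safe, and this code calls uv_timer_start()/uv_timer_stop() from the main thread while uv_run() executes the loop on another thread, so each stop races the loop's own timer processing. uv_async_send() is the one call documented as safe from other threads, so a minimal sketch of forwarding the start/stop requests onto the loop thread (handle and function names are illustrative):

#include <uv.h>

uv_loop_t *loop;
uv_timer_t timer_req;
uv_async_t start_async, stop_async;

void timerCallback(uv_timer_t*) { /* long-press detected */ }

// These callbacks run on the loop thread, so touching the timer here is safe.
void onStartRequest(uv_async_t*) { uv_timer_start(&timer_req, timerCallback, 2000, 0); }
void onStopRequest(uv_async_t*)  { uv_timer_stop(&timer_req); }

int main() {
    loop = uv_default_loop();
    uv_timer_init(loop, &timer_req);
    uv_async_init(loop, &start_async, onStartRequest);
    uv_async_init(loop, &stop_async, onStopRequest);
    // Button-press handler (any thread):   uv_async_send(&start_async);
    // Button-depress handler (any thread): uv_async_send(&stop_async);
    return uv_run(loop, UV_RUN_DEFAULT);
}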
MinTTY does not seem to raise a signal to my mingw-w64 program when I hit CTRL+C. In CMD, the identical program receives the signal correctly. Why is this?
The program is compiled under MSYS2 mingw-w64 with g++ -static -static-libstdc++ -std=c++14 -Wall -Wextra -pedantic testan.cpp. In both cases, signal() does not return SIG_ERR, so the handler seems to be installed correctly.
code:
#include <chrono>
#include <thread>
#include <iostream>
#include <csignal>

using namespace std;

void signalHandler( int x ) {
    cout << "Interrupt: " << x << endl;
    exit( 123 );
}

int main () {
    if( signal(SIGINT, signalHandler) == SIG_ERR )
        cout << "received SIG_ERR" << endl;
    while( true ) {
        cout << "waiting for CTRL+C" << endl;
        this_thread::sleep_for( 1s );
    }
    return 0;
}
mintty output:
$ ./a.exe
waiting for CTRL+C
waiting for CTRL+C
waiting for CTRL+C
$
CMD output:
C:\Users\Xunie\Desktop\project>a.exe
waiting for CTRL+C
waiting for CTRL+C
Interrupt: 2
C:\Users\Xunie\Desktop\project>
MinTTY is a POSIX-oriented terminal emulator: it uses Cygwin/MSYS2 PTYs, which don't interface well with native (non-Cygwin, non-MSYS2) programs. That affects signal delivery, detection of interactive input, and so on. MinTTY doesn't attempt to fix this itself, but Cygwin has recently (since v3.1.0) improved its support for this use case by using the new ConPTY API. As of May 2020, MSYS2 hasn't yet integrated those changes into its runtime, so you can't see the benefits there yet. In the meantime (and on older Windows versions), you can use the winpty wrapper, installable with pacman.
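For example (package and command names per standard MSYS2 usage):

$ pacman -S winpty   # install the wrapper in an MSYS2 shell
$ winpty ./a.exe     # run the native program behind a Windows console so CTRL+C is delivered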
I am currently trying to get a working TLS websocket client running in C++ (which is a pain in the ass). I have tried the CPP Rest SDK as well as Websocket++, and both spit out a bunch of compile errors (see below). When I compile Websocket++ without TLS, it works, so the errors are clearly related to SSL.
I tried different OpenSSL versions (1.0.1, 1.0.2, 1.1.0) and different C++ standards (11, 14 and even 17), and I just can't get it to compile.
I googled, and none of the solutions worked. I am on Ubuntu 16, and my build command looks like this:
g++ source/* -o test.out -Iinclude/ -std=c++14 -L/lib64 -lcurl -lboost_system -lssl -lcrypto -l pthread
Here are some of the errors:
/usr/include/boost/asio/ssl/detail/impl/openssl_init.ipp: In constructor ‘boost::asio::ssl::detail::openssl_init_base::do_init::do_init()’:
/usr/include/boost/asio/ssl/detail/impl/openssl_init.ipp:43:23: error: expected id-expression before ‘(’ token
mutexes_.resize(::CRYPTO_num_locks());
/usr/include/boost/asio/ssl/detail/impl/engine.ipp:221:9: error: ‘SSL_R_SHORT_READ’ was not declared in this scope
ERR_PACK(ERR_LIB_SSL, 0, SSL_R_SHORT_READ),
And here is the basic source code:
#include <websocketpp/config/asio_client.hpp>
#include <websocketpp/client.hpp>
#include <iostream>

// pull out the type of messages sent by our config
typedef websocketpp::config::asio_tls_client::message_type::ptr message_ptr;
typedef websocketpp::client<websocketpp::config::asio_tls_client> client;

using websocketpp::lib::placeholders::_1;
using websocketpp::lib::placeholders::_2;
using websocketpp::lib::bind;

void on_close(client* c, websocketpp::connection_hdl hdl) {
    c->get_alog().write(websocketpp::log::alevel::app, "Connection Closed");
}

int main(int argc, char* argv[]) {
    client c;
    std::string uri = "wss://gateway.discord.gg/";
    if (argc == 2) {
        uri = argv[1];
    }
    try {
        // set logging policy if needed
        c.clear_access_channels(websocketpp::log::alevel::frame_header);
        c.clear_access_channels(websocketpp::log::alevel::frame_payload);
        //c.set_error_channels(websocketpp::log::elevel::none);
        // Initialize ASIO
        c.init_asio();
        // Register our handlers
        c.set_open_handler(bind(&on_open,&c,::_1));
        c.set_fail_handler(bind(&on_fail,&c,::_1));
        c.set_message_handler(bind(&on_message,&c,::_1,::_2));
        c.set_close_handler(bind(&on_close,&c,::_1));
        // Create a connection to the given URI and queue it for connection once
        // the event loop starts
        websocketpp::lib::error_code ec;
        client::connection_ptr con = c.get_connection(uri, ec);
        c.connect(con);
        // Start the ASIO io_service run loop
        c.run();
    } catch (const std::exception & e) {
        std::cout << e.what() << std::endl;
    } catch (websocketpp::lib::error_code e) {
        std::cout << e.message() << std::endl;
    } catch (...) {
        std::cout << "other exception" << std::endl;
    }
}
This was a long time ago, but in case it helps: adding -lcrypto -lssl to the g++ arguments solved the problem for me.
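Worth noting for later readers: the specific errors quoted above (CRYPTO_num_locks() rejected, SSL_R_SHORT_READ undeclared) are the classic signature of compiling an older Boost.Asio against OpenSSL 1.1.0, which removed both symbols, so pinning OpenSSL 1.0.x or upgrading Boost is the usual way out. Separately, a websocketpp asio_tls_client also needs a tls_init handler supplying the SSL context, or the connection fails at runtime. A minimal sketch (not part of the original post):

#include <websocketpp/config/asio_client.hpp>
#include <websocketpp/client.hpp>

typedef websocketpp::lib::shared_ptr<boost::asio::ssl::context> context_ptr;

// Returns the SSL context websocketpp should use for each connection.
context_ptr on_tls_init(websocketpp::connection_hdl) {
    context_ptr ctx = websocketpp::lib::make_shared<boost::asio::ssl::context>(
        boost::asio::ssl::context::sslv23);
    ctx->set_options(boost::asio::ssl::context::default_workarounds |
                     boost::asio::ssl::context::no_sslv2 |
                     boost::asio::ssl::context::no_sslv3);
    return ctx;
}

// Registered alongside the other handlers, before connecting:
//   c.set_tls_init_handler(bind(&on_tls_init, ::_1));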
(Context) I'm developing a cross-platform (Windows and Linux) application for distributing files among computers, based on BitTorrent Sync. I've already made it in C#, and am now porting it to C++ as an exercise.
BTSync can be started in API mode, and for such, one must start the 'btsync' executable passing the name and location of a config file as arguments.
At this point, my greatest problem is getting my application to manage the executable. I came across Boost.Process while searching for a cross-platform process-management library and decided to give it a try. Some evidence suggests that v0.5 is its latest working release, and it can be inferred that a number of people are using it.
I implemented the library as follows (relevant code only):
File: test.hpp
namespace testingBoostProcess
{
    class Test
    {
        void StartSyncing();
    };
}
File: Test.cpp
#include <string>
#include <vector>
#include <iostream>
#include <boost/process.hpp>
#include <boost/process/mitigate.hpp>
#include "test.hpp"
using namespace std;
using namespace testingBoostProcess;
namespace bpr = ::boost::process;
#ifdef _WIN32
const vector<wstring> EXE_NAME_ARGS = { L"btsync.exe", L"/config", L"conf.json" };
#else
const vector<string> EXE_NAME_ARGS = { "btsync", "--config", "conf.json" };
#endif
void Test::StartSyncing()
{
    cout << "Starting Server...";
    try
    {
        bpr::child exeServer = bpr::execute(bpr::initializers::set_args(EXE_NAME_ARGS),
            bpr::initializers::throw_on_error(), bpr::initializers::inherit_env());

        auto exitStatus = bpr::wait_for_exit(exeServer); // type will be either DWORD or int
        int exitCode = BOOST_PROCESS_EXITSTATUS(exitStatus);
        cout << " ok" << "\tstatus: " << exitCode << "\n";
    }
    catch (const exception& excStartExeServer)
    {
        cout << "\n" << "Error: " << excStartExeServer.what() << "\n";
    }
}
(Problem) On Windows, the above code will start btsync and wait (block) until the process is terminated (either by using Task Manager or by the API's shutdown method), just like desired.
But on Linux it finishes execution immediately after starting the process, as if wait_for_exit() weren't there at all, even though the btsync process isn't terminated.
In an attempt to see if that has something to do with the btsync executable itself, I replaced it by this simple program:
File: Fake-Btsync.cpp
#include <cstdio>

#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN
#define SLEEP Sleep(20000)
#include <Windows.h>
#else
#include <unistd.h>
#define SLEEP sleep(20)
#endif

using namespace std;

int main(int argc, char* argv[])
{
    for (int i = 0; i < argc; i++)
    {
        printf("%s\n", argv[i]); // don't pass argv[i] itself as the format string
    }
    SLEEP;
    return 0;
}
When used with this program, instead of the original btsync downloaded from the official website, my application works as desired. It will block for 20 seconds and then exit.
Question: What is the reason for the described behavior? The only thing I can think of is that btsync restarts itself on Linux. But how can I confirm that? Or what else could it be?
Update: All I needed was to learn what forking is and how it works, as pointed out in sehe's answer (thanks!).
Question 2: If I use the System Monitor to send an End command to the child process 'Fake-Btsync' while my main application is blocked, wait_for_exit() will throw an exception saying:
waitpid(2) failed: No child processes
Which is a different behavior than on Windows, where it simply says "ok" and terminates with status 0.
Update 2: sehe's answer is great, but it didn't quite address Question 2 in a way I could actually understand. I'll write a new question about that and post the link here.
The problem is your assumption about btsync. Let's start it:
./btsync
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
BitTorrent Sync forked to background. pid = 24325. default port = 8888
So that's the whole story right there: BitTorrent Sync forked to the background. Nothing more, nothing less. If you don't want it to daemonize, btsync --help tells you to pass --nodaemon.
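To spell out what daemonizing means here: the process you launched forks a copy of itself and the original exits, so the child that wait_for_exit() is watching really does exit almost immediately, while the work continues in the forked copy. A minimal sketch of the pattern (illustrative, not btsync's actual code):

#include <unistd.h>
#include <cstdlib>

// Classic daemonize: the exec'd process (the one being waited on) exits
// right away; the fork carries on in the background.
void daemonize()
{
    if (fork() != 0)   // parent: this is the process wait_for_exit() sees
        std::exit(0);  // ...and it exits immediately
    setsid();          // child: detach from the controlling terminal
    // ... keep running in the background ...
}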
Testing Process Termination
Let's run btsync with --nodaemon from the test program. In a separate subshell, we kill the child btsync process after 5 seconds:
sehe#desktop:/tmp$ (./test; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24553
[2] 24554
By using this application, you agree to our Privacy Policy, Terms of Use and End User License Agreement.
http://www.bittorrent.com/legal/privacy
http://www.bittorrent.com/legal/terms-of-use
http://www.bittorrent.com/legal/eula
[20141029 10:51:16.344] total physical memory 536870912 max disk cache 2097152
[20141029 10:51:16.344] Using IP address 192.168.2.136
[20141029 10:51:16.346] Loading config file version 1.4.93
[20141029 10:51:17.389] UPnP: Device error "http://192.168.2.1:49000/l2tpv3.xml": (-2)
[20141029 10:51:17.407] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564. Deleting mapping and trying again: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.415] UPnP: ERROR removing TCP port 43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:17.423] UPnP: ERROR mapping TCP port 43564 -> 192.168.2.136:43564: (403) Unknown result code (UPnP protocol violation?)
[20141029 10:51:21.428] Received shutdown request via signal 15
[20141029 10:51:21.428] Shutdown. Saving config sync.dat
Starting Server... ok status: 0
exit code 0
[1]- Done ( ./test; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m6.093s
user 0m0.003s
sys 0m0.026s
No problem!
A Better Fake Btsync
This should still be portable and be (much) better behaved when killed/terminated/interrupted:
#include <boost/asio/signal_set.hpp>
#include <boost/asio.hpp>
#include <algorithm> // std::copy
#include <iterator>  // std::ostream_iterator
#include <iostream>
#include <string>

int main(int argc, char* argv[])
{
    boost::asio::io_service is;
    boost::asio::signal_set ss(is);
    boost::asio::deadline_timer timer(is, boost::posix_time::seconds(20));

    ss.add(SIGINT);
    ss.add(SIGTERM);

    auto stop = [&]{
        ss.cancel(); // one of these will be redundant
        timer.cancel();
    };

    ss.async_wait([=](boost::system::error_code ec, int sig){
        std::cout << "Signal received: " << sig << " (ec: '" << ec.message() << "')\n";
        stop();
    });

    timer.async_wait([&](boost::system::error_code ec){
        std::cout << "Timer: '" << ec.message() << "'\n";
        stop();
    });

    std::copy(argv, argv+argc, std::ostream_iterator<std::string>(std::cout, "\n"));

    is.run();
    return 0;
}
You can test whether it is well-behaved
(./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
The same test can be run with the "official" btsync and the "fake" one. Output on my Linux box:
sehe#desktop:/tmp$ (./btsync --nodaemon; echo exit code $?) & (sleep 5; killall btsync)& time wait
[1] 24654
[2] 24655
./btsync
--nodaemon
Signal received: 15 (ec: 'Success')
Timer: 'Operation canceled'
exit code 0
[1]- Done ( ./btsync --nodaemon; echo exit code $? )
[2]+ Done ( sleep 5; killall btsync )
real 0m5.014s
user 0m0.001s
sys 0m0.014s
I have a program that gets a KERN_PROTECTION_FAILURE with EXC_BAD_ACCESS in a very strange place when running multithreaded, and I haven't the faintest idea how to troubleshoot it further. This is on MacOS 10.6 using GCC.
The strange place is the entry to a function: it dies not on the first line of the body, but on the actual jump to the function GetMachineFactors():
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0xb00009ec
[Switching to process 28242]
0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
168 MachineFactors* GetMachineFactors()
(gdb) bt
#0 0x00012592 in GetMachineFactors () at ../sysinfo/OSX.cpp:168
#1 0x000156d0 in CollectMachineFactorsThreadProc (parameter=0x200280) at Threads.cpp:341
#2 0x952f681d in _pthread_start ()
#3 0x952f66a2 in thread_start ()
(gdb)
If I run this non-threaded, it runs great, no issues:
#include "MachineFactors.h"
int main( int argc, char** argv )
{
MachineFactors* factors = GetMachineFactors();
std::string str = CreateJSONObject(factors);
cout << str;
delete factors;
return 0;
}
If I run this in a pthread, I get the EXC_BAD_ACCESS above.
THREAD_FUNCTION CollectMachineFactorsThreadProc( LPVOID parameter )
{
    Main* client = (Main*) parameter;
    if ( parameter == NULL )
    {
        ERRORLOG( "No data passed to machine identification thread. Aborting." );
        return 0;
    }
    MachineFactors* mfactors = GetMachineFactors(); // This is where it dies.
    // If I don't call GetMachineFactors and do something like
    // mfactors = new MachineFactors(); everything is good and the threads
    // communicate and exit normally.
    if (mfactors == NULL)
    {
        ERRORLOG("Failed to collect machine identification: GetMachineFactors returned NULL." << endl)
        return 0;
    }
    client->machineFactors = CreateJSONObject(mfactors);
    delete mfactors;
    EVENT_RAISE(client->machineFactorsEvent);
    return 0;
}
Here is an excerpt from the GetMachineFactors() code:
MachineFactors* GetMachineFactors() // Dies on this line in multi-threaded.
{
    // printf( "Getting machine factors.\n"); // Tried with and without this, never prints.
    factors = new MachineFactors();
    factors->OSName = "MacOS";
    factors->Manufacturer = "Apple";
    //...
    // gather various machine metrics here.
    //...
    return factors;
}
For reference, I am using a socketpair to wait on the thread to complete:
// From the header file I use for cross-platform defines (this runs on OSX, Windows, and Linux).
struct _waitt
{
    int fds[2];
};
#define THREAD_FUNCTION void*
#define THREAD_REFERENCE pthread_t
#define MUTEX_REFERENCE pthread_mutex_t*
#define MUTEX_LOCK(m) pthread_mutex_lock(m)
#define MUTEX_UNLOCK pthread_mutex_unlock
#define EVENT_REFERENCE struct _waitt
#define EVENT_WAIT(m) do { char lc; if (read(m.fds[0], &lc, 1)) {} } while (0)
#define EVENT_RAISE(m) do { char lc = 'j'; if (write(m.fds[1], &lc, 1)) {} } while (0)
#define EVENT_NULL(m) do { m.fds[0] = -1; m.fds[1] = -1; } while (0)
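(The post doesn't show how the descriptor pair is created; presumably a hypothetical EVENT_INIT macro wrapping the socketpair() mentioned above, something like:)

// Hypothetical init macro, not shown in the post: creates the fd pair that
// EVENT_WAIT/EVENT_RAISE read from and write to.
#define EVENT_INIT(m) socketpair(AF_UNIX, SOCK_STREAM, 0, (m).fds)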
Here is the code where I launch the thread.
void Main::CollectMachineFactors()
{
#ifdef WIN32
    machineFactorsThread = CreateThread(NULL, 0, CollectMachineFactorsThreadProc, this, 0, 0);
    if ( machineFactorsThread == NULL )
    {
        ERRORLOG( "Could not create thread for machine id: " << ERROR_NO << endl )
    }
#else
    int retval = pthread_create(&machineFactorsThread, NULL, CollectMachineFactorsThreadProc, this);
    if (retval)
    {
        ERRORLOG( "Return code from machine id pthread_create() is " << retval << endl )
    }
#endif
}
Here's the simple failure case of running this multithreaded. It always fails for this code with the stack trace above:
CollectMachineFactors();
EVENT_WAIT(machineFactorsEvent);
cout << machineFactors;
return 0;
At first I suspected a library problem. Here's my makefile:
# Main executable file
PROGRAM = sysinfo
# Object files
OBJECTS = Version.h Main.o Protocol.o Socket.o SSLConnection.o Stats.o TimeElapsed.o Formatter.o OSX.o Threads.o
# Include directories
INCLUDE = -Itaocrypt/include -IyaSSL/taocrypt/mySTL -IyaSSL/include -isysroot /Developer/SDKs/MacOSX10.5.sdk -mmacosx-version-min=10.5
# Library settings
STATICLIBS = libtaocrypt.a libyassl.a -Wl,-rpath,. -ldl -lpthread -lz -lexpat
# Compile settings
RELCXX = g++ -g -ggdb -DDEBUG -Wall $(INCLUDE)
.SUFFIXES: .o .cpp
.cpp.o :
	$(RELCXX) -c -Wall $(INCLUDE) -o $@ $<
all: $(PROGRAM)
$(PROGRAM): $(OBJECTS)
$(RELCXX) -o $(PROGRAM) $(OBJECTS) $(STATICLIBS)
clean:
rm -f *.o $(PROGRAM)
I can't for the life of me see anything particularly odd or dangerous and I'm not sure where to look. The same threaded process works fine on any Linux machine I have tried. Any suggestions? Any tools I should try?
I can add more info if it would be helpful.
I can see a problem with your Windows code, but not with the OSX code that's crashing on you.
It seems that you're not posting the actual code for GetMachineFactors, since the variable factors is not declared there. But regarding debugging: don't take the absence of printf output as proof that the statement never executed; stdio buffering can swallow it. Use debugger facilities instead, such as breakpoints or debugger trace output (I'm not sure what gdb offers here; it's a very primitive debugger, but perhaps Apple has better tools?).
For Windows, you should use the runtime library's thread creation (e.g. _beginthreadex) instead of the Windows API CreateThread. With CreateThread the runtime library isn't informed, so a new expression or other call into the runtime library might fail.
Sorry I can't help more.
I think it could perhaps have something to do with the GetMachineFactors code that you haven't shown?
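One more avenue worth checking (a common cause, not a confirmed diagnosis): a crash on the very first instruction of a function, with KERN_PROTECTION_FAILURE at an address like 0xb00009ec, is the classic signature of overflowing a secondary thread's stack. OSX gives pthreads a 512 KiB stack by default, far less than the main thread gets, which would also fit the single-threaded run and the Linux runs being fine. If that's the cause, a sketch of requesting a bigger stack at thread creation:

// Sketch, assuming the crash is a thread stack overflow: give the worker
// thread a main-thread-sized stack instead of the OSX pthread default.
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setstacksize(&attr, 8 * 1024 * 1024); // 8 MiB
int retval = pthread_create(&machineFactorsThread, &attr,
                            CollectMachineFactorsThreadProc, this);
pthread_attr_destroy(&attr);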
It turns out (and I can't explain why) that a fork() call combined with a socketpair() as the IPC mechanism was the workaround that got things going as intended.
I wish I knew why it was failing in the first place (headscratch), but that approach seems to have been a good workaround.
It almost seemed like the kind of "build out of whack" problem that can be caused by failing to run 'make clean' after changing header files, but that wasn't the case here.