This is the test case:
#include <boost/coroutine2/all.hpp>
#include <iostream>
#include <cassert>
int main() {
    auto sum = 0;
    using Coroutine_t = boost::coroutines2::coroutine<int>::push_type;
    auto coro = Coroutine_t{[&](auto& yield) {
        for (;;) {
            auto val = yield.get();
            std::cout << "Currently " << val << std::endl;
            sum += val;
            yield(); // jump back to starting context
        }
    }};
    std::cout << "Transferring 1" << std::endl;
    coro(1); // transfer {1} to coroutine-function
    std::cout << "Transferring 2" << std::endl;
    coro(2); // transfer {2} to coroutine-function
    // assert(sum == 3);
}
For some reason the assert at the end fails, with the value of sum being 14. I installed boost (version 1.63) context with the commands
./bootstrap.sh --prefix=build --with-libraries=context
./b2 --prefix=build --with-context
I am running this on macOS 10.12.6. The compile command was
g++ -std=c++14 -O3 -I boost co.cpp boost/stage/lib/libboost_*.a
where boost is the boost folder downloaded from SourceForge.
Strangely, the output of the above test case without the assert is this:
Transferring 1
Currently 0
Transferring 2
Currently 2
Currently 2
Why is the first line printed in the coroutine "Currently 0"? And why is "Currently 2" printed twice? The latter can be seen here as well: https://wandbox.org/permlink/zEL9fGT5MrzWGgQB
For the second question, it seems that after the main thread has finished, control is transferred back to the coroutine one last time. Why is that? That seems strange.
UPDATE: For the second question, the behavior seems to be different in boost 1.65: https://wandbox.org/permlink/JQa9Wq1jp8kB49Up
The output of your app with boost-1.65.1 is:
Transferring 1
Currently 1
Transferring 2
Currently 2
Your problem was probably caused by a bug that has been fixed as of boost-1.65.
I've discovered an issue impacting several unit tests at my work which only happens when the tests are run under valgrind: the values returned from std::cos and std::sin differ for identical inputs, depending on whether the unit test is run in isolation or under valgrind.
This issue only seems to happen for some specific inputs, because many unit tests that run through the same code pass.
Here's a minimal reproducible example (slightly worsened so that my compiler wouldn't optimize away any of the logic):
#include <complex>
#include <iomanip>
#include <iostream>
int main()
{
    std::complex<long double> input(0,0), output(0,0);
    input = std::complex<long double>(39.21460183660255L, -40);
    std::cout << "input: " << std::setprecision(20) << input << std::endl;
    output = std::cos(input);
    std::cout << "output: " << std::setprecision(20) << output << std::endl;
    if (std::abs(output) < 5.0)
    {
        std::cout << "TEST FAIL" << std::endl;
        return 1;
    }
    std::cout << "TEST PASS" << std::endl;
    return 0;
}
Output when run normally:
input: (39.21460183660254728,-40)
output: (6505830161375283.1118,117512680740825220.91)
TEST PASS
Output when run under valgrind:
input: (39.21460183660254728,-40)
output: (0.18053126362312540976,3.2608771240037195405)
TEST FAIL
Notes:
OS: Red Hat Enterprise Linux 7
Compiler: Intel oneAPI 2022 next-generation DPC++ Compiler
Valgrind: 3.20 (built with same compiler), also occurred on official distribution of 3.17
Issue did not manifest when the unit tests were built with GCC-7 (we cannot go back to that compiler) or GCC-11 (another, larger bug with boost prevents us from using this with valgrind)
-O0/1/2/3 make no difference on this issue
The only compiler flag I have set is "-fp-speculation=safe", which, if unset, causes numerical precision issues in other unit tests.
Are there any better ways to figure out what's going on and resolve this situation, or should I submit a bug report to valgrind? I hope this issue is benign, but I want to be able to trust my valgrind output.
In version 6.1, ncurses introduced init_extended_pair to extend the limit on the number of color pairs beyond the short limit.
In my experiments everything works up to value 255. For values 256 and greater there is no error, but the foreground and background have default values. For values 32767 and greater the function returns an error.
The program prints:
COLOR_PAIRS: 65536
Error: 32767
What is the proper way to create a large number of color pairs? In my case I need at least 65536 of them. (Tested on Ubuntu 19.04.)
#include <iostream>
#include <ncurses.h>
// g++ main.cpp -l:libncursesw.so.6.1 -ltinfo
int main() {
    initscr();
    start_color();
    std::cout << "COLOR_PAIRS: " << COLOR_PAIRS << std::endl;
    init_extended_color(2, 999, 0, 0);
    init_extended_color(3, 0, 999, 0);

    int pair1 = 255;
    if (init_extended_pair(pair1, 2, 3) == ERR)
        std::cout << "Error: " << pair1 << std::endl;
    attron(COLOR_PAIR(pair1));
    mvprintw(2, 1, "pair255");
    attroff(COLOR_PAIR(pair1));

    int pair2 = 256;
    if (init_extended_pair(pair2, 2, 3) == ERR)
        std::cout << "Error: " << pair2 << std::endl;
    attron(COLOR_PAIR(pair2));
    mvprintw(3, 1, "pair256");
    attroff(COLOR_PAIR(pair2));

    int pair3 = 32767; // 2^15-1
    if (init_extended_pair(pair3, 3, 2) == ERR)
        std::cout << "Error: " << pair3 << std::endl;
    attron(COLOR_PAIR(pair3));
    mvprintw(4, 1, "pair32767");
    attroff(COLOR_PAIR(pair3));

    refresh();
    getch();
    endwin();
    return 0;
}
Edit:
Regarding the similar question How to enable 32k color pairs in ncurses?: in my case COLOR_PAIRS returns 65536, not 256. Moreover, that question is from 2015, while init_extended_pair was added to the library on 2017-04-01 and released in version 6.1 on January 27, 2018. Despite this, I rebuilt the libncursesw6 package with --enable-ext-colors (--enable-widec was already enabled), but I get the same result.
Actually (running this against the ncurses 6.1 development code), I do not see a failure from init_extended_pair. At first glance, the problem appeared to be this chunk:
attron(COLOR_PAIR(pair3));
mvprintw(4, 1, "pair32767");
attroff(COLOR_PAIR(pair3));
Those attron/attroff calls are legacy functions; you should use attr_on and attr_off. The macro forms of attron and attroff (which are normally used instead of the functions) are
#define wattron(win,at) wattr_on(win, NCURSES_CAST(attr_t, at), NULL)
#define wattroff(win,at) wattr_off(win, NCURSES_CAST(attr_t, at), NULL)
But in either case the data is the "same": what fits in attr_t (a 32-bit value). In some other functions the color pair is passed through separately, and ncurses 6.1 provides for passing pairs larger than 16 bits via the opts parameter. These particular functions aren't extended in that way.
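As a hedged sketch (not from the original answer) of what "passed through via the opts parameter" looks like in practice: per the curs_attr manual, ncurses 6.1 lets attribute functions such as attr_set take an int* through the reserved opts argument, and the pointed-to int carries the full extended pair number, so the short pair argument no longer limits you.

```cpp
#include <ncurses.h>

int main() {
    initscr();
    start_color();
    init_extended_color(2, 999, 0, 0);
    init_extended_color(3, 0, 999, 0);

    int pair = 32767;
    init_extended_pair(pair, 2, 3);

    // Extended form: the pair number is passed as an int through the
    // opts pointer instead of the (short-limited) pair argument.
    attr_set(A_NORMAL, 0, &pair);
    mvprintw(4, 1, "pair32767");
    attr_set(A_NORMAL, 0, NULL); // back to the default pair 0

    refresh();
    getch();
    endwin();
    return 0;
}
```

This requires a terminal and an ncurses built with extended colors, so treat it as an illustration of the opts convention rather than a drop-in test.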
However, your program is returning an error for init_extended_pair. That could be any of (a few) returns from _nc_init_pair, but the principal one uses ValidPair:
#define ValidPair(sp,pair) \
((sp != 0) && (pair >= 0) && (pair < sp->_pair_limit) && sp->_coloron)
To check this, I ran the code against current ncurses6 with TERM=xterm-256color and TERM=xterm-direct. Both worked, though the init_extended_color call in the latter fails (as expected). I can see that failure by compiling ncurses with TRACE and turning the trace on with NCURSES_TRACE=0x220.
The current code is available from the ncurses homepage. If you are able to reproduce the problem using the current code, you might want to discuss it on the bug-ncurses mailing list. Otherwise, the Debian package is the reference for the version you are using.
I am quite new to boost, as well as to multithreading and to launching applications from libraries. For my desired functionality, a colleague recommended the boost::process library.
But the documentation for this part of boost is quite thin, so I could not determine from it which function suits my task best. I therefore started trying several of the functions there, but none has all the desired properties.
However, there is one I cannot figure out how to use properly. I cannot even compile it, let alone run it: boost::process::async_system. I could not find a step-by-step guide anywhere on the internet on how to use this function and what the individual components mean and do.
Could someone explain to me in detail the individual arguments and template arguments of the function? Or provide a link to a detailed manual?
I like the examples here: https://theboostcpplibraries.com/boost.thread-futures-and-promises
For example, look at Example 44.16; it clearly shows how to use async:
#define BOOST_THREAD_PROVIDES_FUTURE
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
#include <iostream>
int accumulate()
{
    int sum = 0;
    for (int i = 0; i < 5; ++i)
        sum += i;
    return sum;
}

int main()
{
    boost::future<int> f = boost::async(accumulate);
    std::cout << f.get() << '\n';
}
Waiting happens at the get method, not before. You might use a non-waiting mechanism, too.
As for compiling, you first need to build boost. Building is explained in detail here: https://www.boost.org/doc/libs/1_62_0/more/getting_started/windows.html
Most parts of the library work header-only. For asio, building the binary libraries (also explained in the link) is necessary. In your project (i.e. a Visual Studio project, an Xcode project, or just some makefiles), you need to set boost's include and library paths to use it. The link above helps with this as well.
I'm just ramping up on Boost.Process, but the sample code I have working might be helpful here.
boost::process::async_system() takes 3 parameters: a boost::asio::io_context object, an exit-handler function, and the command you want to run (just like system(); it can be either a single line or more than one arg).
After it's invoked, you use the io_context object from the calling thread to manage and monitor the async task. I use the run_one() method, which will "Run the io_context object's event processing loop to execute at most one handler", but you can also use other methods to run for a duration, etc.
Here's my working code:
#include <boost/process.hpp>
#include <iostream>

using namespace boost;

namespace {
    // declare exit handler function
    void _exitHandler(boost::system::error_code err, int rc) {
        std::cout << "DEBUG async exit error code: "
                  << err << " rc: " << rc << std::endl;
    }
}

int main() {
    // create the io_context
    asio::io_context ioctx;
    // call async_system
    process::async_system(ioctx, _exitHandler, "ls /usr/local/bin");
    std::cout << "just called 'ls /usr/local/bin', async" << std::endl;
    int breakout = 0; // safety for weirdness
    do {
        std::cout << " - checking to see if it stopped..." << std::endl;
        if (ioctx.stopped()) {
            std::cout << " * it stopped!" << std::endl;
            break;
        } else {
            std::cout << " + calling io_context.run_one()..." << std::endl;
            ioctx.run_one();
        }
        ++breakout;
    } while (breakout < 1000);
    return 0;
}
The only thing my example lacks is how to use boost::asio::async_result to capture the result. The samples I've seen (including here on Stack Overflow) still don't make much sense to me, but hopefully this much is helpful.
Here's the output of the above on my system:
just called 'ls /usr/local/bin', async
- checking to see if it stopped...
+ calling io_context.run_one()...
- checking to see if it stopped...
+ calling io_context.run_one()...
VBoxAutostart easy_install pybot
VBoxBalloonCtrl easy_install-2.7 pyi-archive_viewer
((omitted - a bunch more files from the ls command))
DEBUG async exit error code: system:0 rc: 0
- checking to see if it stopped...
* it stopped!
Program ended with exit code: 0
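On the loose end about capturing the result: one documented alternative in recent boost versions (a sketch under that assumption, untested here) is to pass boost::asio::use_future as the completion token, so async_system hands back the child's exit code as a std::future<int> and no hand-written exit handler is needed:

```cpp
#include <boost/asio.hpp>
#include <boost/process.hpp>
#include <future>
#include <iostream>

namespace bp = boost::process;

int main() {
    boost::asio::io_context ioctx;
    // use_future turns the completion handler into a std::future
    // holding the child's exit code.
    std::future<int> exit_code =
        bp::async_system(ioctx, boost::asio::use_future, "ls /usr/local/bin");
    ioctx.run(); // drive the async operation to completion
    std::cout << "exit code: " << exit_code.get() << std::endl;
    return 0;
}
```

The future's get() then plays the role of the exit handler's rc parameter in the example above.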
I have a C++ date/time library that I have literally used for decades. It's been rock solid, without any issues. But today, as I was making some small enhancements, my test code started complaining violently. The following program demonstrates the problem:
#include <iostream>
#include <time.h>
int main() {
    _tzset();
    std::cout << "_tzname[ 0 ]=" << _tzname[ 0 ] << std::endl;
    std::cout << "_tzname[ 1 ]=" << _tzname[ 1 ] << std::endl;
    std::cout << "_timezone=" << _timezone << std::endl;
    size_t ret;
    char buf[ 64 ];
    _get_tzname(&ret, buf, 64, 0);
    std::cout << "_get_tzname[ 0 ]=" << buf << std::endl;
    _get_tzname(&ret, buf, 64, 1);
    std::cout << "_get_tzname[ 1 ]=" << buf << std::endl;
    return 0;
}
If I run this in the Visual Studio debugger I get the following output:
_tzname[ 0 ]=SE Asia Standard Time
_tzname[ 1 ]=SE Asia Daylight Time
_timezone=-25200
_get_tzname[ 0 ]=SE Asia Standard Time
_get_tzname[ 1 ]=SE Asia Daylight Time
This is correct.
But if I run the program from the command line I get the following output:
_tzname[ 0 ]=Asi
_tzname[ 1 ]=a/B
_timezone=0
_get_tzname[ 0 ]=Asi
_get_tzname[ 1 ]=a/B
Note that the TZ environment variable is set to Asia/Bangkok, which is a synonym for SE Asia Standard Time (UTC+7). You will notice in the command-line output that the _tzname[ 0 ] value is the first 3 characters of Asia/Bangkok and _tzname[ 1 ] is the next 3 characters. I have some thoughts on this, but I cannot make sense of it, so I'll just stick to the facts.
Note that I included the calls to _get_tzname(...) to demonstrate that I am not getting caught in some kind of deprecation trap, given that _tzname and _timezone are deprecated.
I'm on Windows 7 Professional and I am linking statically to the runtime library (Multi-threaded Debug (/MTd)). I recently installed Visual Studio 2015 and, while I am not using it yet, I compiled this program there and the results are the same. I thought there was a chance that I was somehow linking with the VS2015 libraries, but I cannot verify this. The Platform Toolset setting in both projects reflects what I would expect.
Thank you for taking the time to look at this...
My application keeps closing when debugging. I'm not able to view what the results are, since it closes too fast.
I've looked at many different forums and topics, and none of the solutions given apply. I've tried different commands before return 0;, etc., and also changing an option in the project.
I'm just starting out and trying to learn from the C++ Primer, but this is frustrating me already :).
Following is my code, please help!
#include <iostream>
int main()
{
    int sum = 0, val = 1;
    while (val <= 10) {
        sum += val;
        ++val;
    }
    std::cout << "Sum of 1 to 10 inclusive is "
              << sum << std::endl;
    Console.Read();
    return 0;
}
Don't use Console.Read(); (that's C#, not C++), use std::cin.get();.
Try this:
#include <iostream>

int main()
{
    int sum = 0, val = 1;
    while (val <= 10) {
        sum += val;
        ++val;
    }
    std::cout << "Sum of 1 to 10 inclusive is "
              << sum << std::endl;
    std::cin.get(); // hackish but better than system("PAUSE");
    return 0;
}
Assuming that you are using Visual Studio:
Debug builds will run until they hit a breakpoint or until the program finishes (whichever comes first). If the program finishes, the console closes. Place a breakpoint on the line containing return 0; and your console will stay open until you click Continue.
Release builds will run until the program finishes. If the program finishes, you will be prompted to Press any key to continue. . . and the console will stay open.
If you are not setting breakpoints in such a small program, you are wasting your resources -- debug mode will impact the performance of the program.
Thus, you should build in Release mode and forget about using std::cin.get().