Branch coverage for std::unique_lock<std::mutex> lock{mtx} - c++

I couldn't figure out how to test all branches of the following code. The gtest coverage report shows that the unique_lock declaration and initialization has two branches, but I don't know how to cover the second one.
Can you help with this?
std::mutex mtx;
std::unique_lock<std::mutex> lock{mtx};
I wasn't expecting this declaration to have two branches.

Related

Dart tests are executed after normal code

I think this is poorly documented (if not a bug). Dart first runs the code in main(), and only then the tests registered with test(). This code shows the behavior:
import 'package:test/test.dart';

void main() {
  test('', () { print('1'); });
  print('2');
  test('', () { print('3'); });
}
Output: 213
Is there a way to return an error or suppress the execution of spurious code not pertaining to tests? Or maybe a way to have a lint warning from the analyzer?

C++ pimpl mutex preventing usage of std::condition_variable

C++/CLI is known to reject the <mutex> header when a project is compiled using the /clr:pure or /clr flag.
The error is reported here
https://social.msdn.microsoft.com/Forums/vstudio/en-US/d4d082ff-ce43-478d-8386-0effed04b108/ccli-and-stdmutex?forum=vclanguage
The recommended solution seems to be to use the pimpl pattern. See here
Turn off clr option for header file with std::mutex
The problem I see is when using other std functions.
For example, consider std::condition_variable:
mutexPimpl _mut;
std::unique_lock<mutexPimpl> lk(_mut); // Fine: std::unique_lock is templated.
std::condition_variable _gate1;
_gate1.wait(lk); // Error: wait expects std::unique_lock<std::mutex> as argument
Is there any easy way to resolve / work around this problem?
You can try using the recursive_mutex class, since objects won't be locked. Refer to https://msdn.microsoft.com/en-us/library/hh921466.aspx as well.
I solved it by forward-declaring std::condition_variable. Visual Studio's compiler problem with the mutex includes applies only to headers; including them in source files still worked.

using std::condition_variable with std::timed_mutex

Is it possible? I want to use a timed_mutex instead of a regular mutex with a condition_variable, but it won't compile, and looking at the sources:
void
wait(unique_lock<mutex>& __lock, _Predicate __p)
{
    while (!__p())
        wait(__lock);
}
(indentation courtesy of libc++ authors, really?)
So it looks like it is in fact limited to straight mutexes, not timed ones. But why??
Yes, std::condition_variable is limited to std::unique_lock<std::mutex>. However, you can use the more generic std::condition_variable_any with anything that has a compatible interface.

boost::asio reasoning behind num_implementations for io_service::strand

We've been using Asio in production for years, and recently we reached a critical point where our servers became loaded just enough to notice a mysterious issue.
In our architecture, each separate entity that runs independently uses a dedicated strand object. Some of the entities can perform long work (reading from a file, performing a MySQL request, etc.). Obviously, the work is performed within handlers wrapped with the strand. It all sounds nice and pretty and should work flawlessly, until we began to notice impossible things like timers expiring seconds after they should, even though threads were 'waiting for work', and work being halted for no apparent reason. It looked like long work performed inside one strand had an impact on other, unrelated strands: not all of them, but most.
Countless hours were spent pinpointing the issue. The trail led to the way the strand object is created: strand_service::construct (here).
For some reason the developers decided to have a limited number of strand implementations, meaning that some totally unrelated objects share a single implementation and hence get bottlenecked because of it.
The standalone (non-Boost) Asio library uses a similar approach, except that each implementation is now independent but may share a mutex object with other implementations (here).
What is this all about? I have never heard of limits on the number of mutexes in a system, or of any overhead related to their creation and destruction. Though the latter could easily be solved by recycling mutexes instead of destroying them.
Here is a minimal test case showing how dramatic the performance degradation is:
#include <boost/asio.hpp>
#include <atomic>
#include <functional>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>
#include <unistd.h> // for sleep()

std::atomic<bool> running{true};
std::atomic<int> counter{0};

struct Work
{
    Work(boost::asio::io_service & io_service)
        : _strand(io_service)
    { }

    static void start_the_work(boost::asio::io_service & io_service)
    {
        std::shared_ptr<Work> _this(new Work(io_service));
        _this->_strand.get_io_service().post(_this->_strand.wrap(std::bind(do_the_work, _this)));
    }

    static void do_the_work(std::shared_ptr<Work> _this)
    {
        counter.fetch_add(1, std::memory_order_relaxed);
        if (running.load(std::memory_order_relaxed)) {
            start_the_work(_this->_strand.get_io_service());
        }
    }

    boost::asio::strand _strand;
};

struct BlockingWork
{
    BlockingWork(boost::asio::io_service & io_service)
        : _strand(io_service)
    { }

    static void start_the_work(boost::asio::io_service & io_service)
    {
        std::shared_ptr<BlockingWork> _this(new BlockingWork(io_service));
        _this->_strand.get_io_service().post(_this->_strand.wrap(std::bind(do_the_work, _this)));
    }

    static void do_the_work(std::shared_ptr<BlockingWork> _this)
    {
        sleep(5);
    }

    boost::asio::strand _strand;
};

int main(int argc, char ** argv)
{
    boost::asio::io_service io_service;
    std::unique_ptr<boost::asio::io_service::work> work{new boost::asio::io_service::work(io_service)};

    for (std::size_t i = 0; i < 8; ++i) {
        Work::start_the_work(io_service);
    }

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < 8; ++i) {
        workers.push_back(std::thread([&io_service] {
            io_service.run();
        }));
    }

    if (argc > 1) {
        std::cout << "Spawning a blocking work" << std::endl;
        workers.push_back(std::thread([&io_service] {
            io_service.run();
        }));
        BlockingWork::start_the_work(io_service);
    }

    sleep(5);
    running = false;
    work.reset();

    for (auto && worker : workers) {
        worker.join();
    }

    std::cout << "Work performed:" << counter.load() << std::endl;
    return 0;
}
Build it using this command:
g++ -o asio_strand_test_case -pthread -I/usr/include -std=c++11 asio_strand_test_case.cpp -lboost_system
Test run in the usual way:
time ./asio_strand_test_case
Work performed:6905372
real 0m5.027s
user 0m24.688s
sys 0m12.796s
Test run with a long blocking work:
time ./asio_strand_test_case 1
Spawning a blocking work
Work performed:770
real 0m5.031s
user 0m0.044s
sys 0m0.004s
The difference is dramatic. What happens is that each new non-blocking work creates a new strand object up until it shares an implementation with the strand of the blocking work. When that happens it's a dead end until the long work finishes.
Edit:
Reduced the parallel work down to the number of worker threads (from 1000 to 8) and updated the test run output. Did this because the issue is more visible when the two numbers are close.
Well, an interesting issue, and +1 for giving us a small example reproducing it.
The problem you are having with the Boost implementation, as I understand it, is that by default it instantiates only a limited number of strand_impl objects: 193, as I see in my version of Boost (1.59).
What this means is that a large number of requests will be in contention, as they will be waiting for the lock to be released by another handler (one using the same instance of strand_impl).
My guess as to why it's done this way is to avoid overloading the OS by creating lots and lots of mutexes, which would be bad. The current implementation allows the locks to be reused (in a configurable way, as we will see below).
In my setup:
MacBook-Pro:asio_test amuralid$ g++ -std=c++14 -O2 -o strand_issue strand_issue.cc -lboost_system -pthread
MacBook-Pro:asio_test amuralid$ time ./strand_issue
Work performed:489696
real 0m5.016s
user 0m1.620s
sys 0m4.069s
MacBook-Pro:asio_test amuralid$ time ./strand_issue 1
Spawning a blocking work
Work performed:188480
real 0m5.031s
user 0m0.611s
sys 0m1.495s
Now, there is a way to change this number of cached implementations by setting the Macro BOOST_ASIO_STRAND_IMPLEMENTATIONS.
Below is the result I got after setting it to a value of 1024:
MacBook-Pro:asio_test amuralid$ g++ -std=c++14 -DBOOST_ASIO_STRAND_IMPLEMENTATIONS=1024 -o strand_issue strand_issue.cc -lboost_system -pthread
MacBook-Pro:asio_test amuralid$ time ./strand_issue
Work performed:450928
real 0m5.017s
user 0m2.708s
sys 0m3.902s
MacBook-Pro:asio_test amuralid$ time ./strand_issue 1
Spawning a blocking work
Work performed:458603
real 0m5.027s
user 0m2.611s
sys 0m3.902s
Almost the same for both cases! You might want to adjust the value of the macro as per your needs to keep the deviation small.
Note that if you don't like Asio's implementation you can always write your own strand which creates a separate implementation for each strand instance. This might be better for your particular platform than the default algorithm.
Edit: As of recent Boosts, standalone ASIO and Boost.ASIO are now in sync. This answer is preserved for historical interest.
Standalone ASIO and Boost.ASIO have become quite detached in recent years as standalone ASIO is slowly morphed into the reference Networking TS implementation for standardisation. All the "action" is happening in standalone ASIO, including major bug fixes. Only very minor bug fixes are made to Boost.ASIO. There is several years of difference between them by now.
I'd therefore suggest anyone finding any problems at all with Boost.ASIO should switch over to standalone ASIO. The conversion is usually not hard; look into the many macro configs for switching between C++11 and Boost in config.hpp. Historically Boost.ASIO was auto-generated by script from standalone ASIO; it may be that Chris has kept those scripts working, in which case you could regenerate a brand shiny new Boost.ASIO with all the latest changes. I'd suspect such a build is not well tested, however.

Xcode 7: C++ threads ERROR: Attempting to use a deleted function

I have been writing a sudoku solver in C++ in Xcode 7. I managed to write a successful solver using a backtracking algorithm.
Now I'm trying to parallelize the functions inside my solver so that I can speed up the solving algorithm. I have three functions that are in charge of checking whether the number currently being inserted into the grid exists in the row, column or box (standard sudoku rules). Since these are mutually exclusive operations, I want to parallelize them.
I know it's overkill to multithread this program, but the goal is to learn multithreading rather than to speed up my solver.
This is what I've got so far.
I've included the standard c++11 thread library.
Using default Xcode 7 build settings.
The error I get says that I'm attempting to use a deleted function, and it pops up when I hit the "Build and Run" button in Xcode. Xcode's IntelliSense does not complain about my code, so I don't understand this. Please help.
#include <thread>
....
typedef uint8_t byte;
typedef uint16_t dbyte;
....
bool sudokuGame::check(byte num, byte row, byte col)
{
    setBoxFlag(true);
    setColFlag(true);
    setRowFlag(true);

    std::thread t1{&sudokuGame::checkRow, num, row};
    std::thread t2{&sudokuGame::checkColumn, num, col};
    std::thread t3{&sudokuGame::checkBox, num, row, col};

    t1.join();
    t2.join();
    t3.join();

    return (getBoxFlag() && getRowFlag() && getColFlag());
}
Somewhere inside <thread> is where the "attempting to use a deleted function" error pops up:
...
__thread_execute(tuple<_Fp, _Args...>& __t, __tuple_indices<_Indices...>)
{
    __invoke(_VSTD::move(_VSTD::get<0>(__t)),
             _VSTD::move(_VSTD::get<_Indices>(__t))...);
}
...
My build settings look like this
To create a thread using a non-static member function, you need to provide the instance the member function should be called on (the `this` pointer inside the thread function). This is done by passing the instance as the first argument to the thread constructor:
std::thread t1{&sudokuGame::checkRow, this, num, row};
// ^^^^
// Note the use of `this` here
Note that you don't need to change the thread function, it's all handled by the standard library and the compiler.
Then for another problem: thread assignments like
t1 = std::thread(&sudokuGame::checkRow, num, row);
don't make any sense. You have already created the thread objects, initialized them, and got the threads running, and because the threads are already running you can't assign to them. See e.g. this reference for the overloaded thread assignment operator.
The compiler error you get is because of the first problem, that you don't pass an instance when creating the threads.