Our business has recently moved to a TDD style and I'm new at writing unit tests. The C# (.NET 3.5) piece I'm writing now needs to verify that a separate process is running; as I understand it, the best way to do that is with the Mutex class.
So I have a method in my SrsUpdaterController class like so...
public bool IsUpdaterRunning()
{
    Mutex srsUpdaterMutex = new Mutex(false, SRS_UPDATERGUID);
    if (srsUpdaterMutex.WaitOne(0)) // we acquired the mutex, so SRS Updater is NOT running
    {
        srsUpdaterMutex.ReleaseMutex();
        srsUpdaterMutex.Close();
        return false;
    }
    else // the mutex is already held, presumably by the updater process
    {
        srsUpdaterMutex.Close();
        return true;
    }
}
and I have a test
[TestMethod()]
public void IsUpdaterRunningTrueTest()
{
    SrsUpdaterController target = new SrsUpdaterController();
    string mutexGuid = SrsUpdaterController.SRS_UPDATERGUID;
    bool expected = true;
    bool actual;
    Mutex srsUpdaterMutex = new Mutex(false, mutexGuid);
    srsUpdaterMutex.WaitOne(3000);
    actual = target.IsUpdaterRunning();
    srsUpdaterMutex.ReleaseMutex();
    srsUpdaterMutex.Close();
    Assert.AreEqual(expected, actual);
}
It doesn't work because the unit test and the IsUpdaterRunning method run on the same thread, so Windows is "smart" enough not to make the process block itself (a named mutex is re-entrant for the thread that already owns it). The problem is I WANT the process to block itself so that it simulates the mutex being claimed. Is there any way to do this? Or am I approaching unit testing/process synchronization/mutex management all wrong?
(Note: I did also try locking the Mutex on a separate thread launched from the test, but it still allowed me to claim the mutex in both places. Which is OK, because I'd rather avoid threads when possible.)
Thanks in advance!
Related
I've implemented a C++ class that executes something in a timed cycle using a thread. The thread is set to be scheduled with the SCHED_DEADLINE scheduler of the Linux kernel. To set up the scheduler, the process running this must have certain Linux capabilities.
My question is, how to test this?
I can of course write a unit test that creates the thread, does some counting, and exits after a while to validate the cycle counter, but that only works if the unit test is allowed to apply the right scheduler. If not, the default scheduler applies, the cyclic loops run with no timing between them, and the code therefore exhibits different behaviour.
How would you test this scenario?
Some example code:
void thread_handler() {
    // setup SCHED_DEADLINE parameters
    while (running) {
        // execute application logic
        sched_yield();
    }
}
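For completeness, the setup hidden behind that comment looks roughly like the sketch below (the runtime/deadline/period values are only illustrative; glibc traditionally has no wrapper for sched_setattr, so it goes through syscall(), and this is the call that fails with EPERM unless the process has CAP_SYS_NICE or runs as root; very recent glibc versions ship their own struct sched_attr and wrapper, in which case the manual definition is unnecessary):

#include <cstdint>
#include <cstring>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

// Mirrors the kernel's struct sched_attr; defined here because older glibc
// does not ship it.
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;   // nanoseconds
    uint64_t sched_deadline;  // nanoseconds
    uint64_t sched_period;    // nanoseconds
};

static bool apply_deadline_scheduling() {
    sched_attr attr;
    std::memset(&attr, 0, sizeof(attr));
    attr.size           = sizeof(attr);
    attr.sched_policy   = SCHED_DEADLINE;
    attr.sched_runtime  =  2 * 1000 * 1000;   //  2 ms of CPU time per period
    attr.sched_deadline =  5 * 1000 * 1000;   //  5 ms deadline
    attr.sched_period   = 10 * 1000 * 1000;   // 10 ms cycle
    // pid 0 means "the calling thread"; returns -1 and sets errno on failure.
    return syscall(SYS_sched_setattr, 0, &attr, 0) == 0;
}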
There are two separate units to test here: first, the cyclic execution of code, and second, the strategy that wraps the OS interface. The first unit would look like this:
class CyclicThread : public std::thread {
public:
    CyclicThread(Strategy& strategy) :
        std::thread(std::bind(&CyclicThread::worker, this)),
        strategy(strategy) { }

    void add_task(std::function<void()> handler) {
        ...
    }

private:
    Strategy& strategy;
    std::atomic<bool> running{true};

    void worker() {
        while (running) {
            execute_handler();   // run the registered handlers
            strategy.yield();
        }
    }
};
This is fairly easy to test with a mock object of the strategy.
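For example, a minimal sketch of such a test (assuming Google Test, assuming Strategy is an abstract interface with a virtual yield(), and assuming CyclicThread gains a stop() helper that clears running and joins; none of that is shown above):

#include <atomic>
#include <chrono>
#include <thread>
#include <gtest/gtest.h>

// A hand-rolled mock; a framework such as Google Mock would work just as well.
struct CountingStrategy : Strategy {
    std::atomic<int> yields{0};
    void yield() override { ++yields; }   // count instead of touching the OS
};

TEST(CyclicThreadTest, RunsHandlerAndYieldsEveryCycle) {
    CountingStrategy strategy;
    std::atomic<int> handler_calls{0};

    CyclicThread cyclic(strategy);
    cyclic.add_task([&] { ++handler_calls; });

    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    cyclic.stop();   // hypothetical helper: clears 'running' and joins the thread

    EXPECT_GT(handler_calls.load(), 0);
    EXPECT_GT(strategy.yields.load(), 0);
}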
The Deadline scheduling strategy looks like this:
class DeadlineStrategy {
public:
    void yield() {
        sched_yield();
    }
};
This class can also be tested fairly easily by mocking the sched_yield() system call.
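A minimal sketch of that, assuming Google Test and the GNU linker: build the test with -Wl,--wrap=sched_yield so that every call to sched_yield() in the linked objects is redirected to __wrap_sched_yield(), which the test controls:

#include <atomic>
#include <gtest/gtest.h>

static std::atomic<int> g_yield_calls{0};

// Replacement supplied by the test; called instead of the real sched_yield()
// because of -Wl,--wrap=sched_yield.
extern "C" int __wrap_sched_yield() {
    ++g_yield_calls;
    return 0;   // pretend the yield succeeded
}

TEST(DeadlineStrategyTest, YieldForwardsToSchedYield) {
    g_yield_calls = 0;
    DeadlineStrategy strategy;   // the class shown above
    strategy.yield();
    EXPECT_EQ(g_yield_calls.load(), 1);
}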
Context:
I'm writing unit tests for a gRPC service. I want to verify that the method of the mock on the server side is called. I'm using EasyMock. To be sure we get the gRPC response (whatever it is), I need to suspend the test thread before EasyMock verifies the calls.
So I tried something like this using LockSupport:
@Test
public void alphaMethodTest() throws Exception
{
    Dummy dummy = createNiceMock(Dummy.class);
    dummy.alphaMethod(anyBoolean());
    expectLastCall().once();
    EasyMock.replay(dummy);

    DummyServiceGrpcImpl dummyServiceGrpc = new DummyServiceGrpcImpl();
    dummyServiceGrpc.setDummy(dummy);
    DummyServiceGrpc.DummyServiceStub stub = setupDummyServiceStub();

    Thread thread = Thread.currentThread();
    stub.alphaMethod(emptyRequest, new StreamObserver<X>() {
        @Override
        public void onNext(X value) {
            LockSupport.unpark(thread);
        }
        @Override
        public void onError(Throwable t) { }
        @Override
        public void onCompleted() { }
    });

    Instant expirationTime = Instant.now().plus(pDuration);
    LockSupport.parkUntil(expirationTime.toEpochMilli());
    verify(dummy);
}
But I have many tests like this one (around 40) and I suspect a threading issue. I usually get one or two failing at the verify step; sometimes all of them pass. I tried using a ReentrantLock with a Condition instead, but again some fail (IllegalMonitorStateException on the signalAll):
@Test
public void alphaMethodTest() throws Exception
{
    Dummy dummy = createNiceMock(Dummy.class);
    dummy.alphaMethod(anyBoolean());
    expectLastCall().once();
    EasyMock.replay(dummy);

    DummyServiceGrpcImpl dummyServiceGrpc = new DummyServiceGrpcImpl();
    dummyServiceGrpc.setDummy(dummy);
    DummyServiceGrpc.DummyServiceStub stub = setupDummyServiceStub();

    ReentrantLock lock = new ReentrantLock();
    Condition conditionPromiseTerminated = lock.newCondition();
    stub.alphaMethod(emptyRequest, new StreamObserver<X>() {
        @Override
        public void onNext(X value) {
            conditionPromiseTerminated.signalAll();
        }
        @Override
        public void onError(Throwable t) { }
        @Override
        public void onCompleted() { }
    });

    Instant expirationTime = Instant.now().plus(pDuration);
    conditionPromiseTerminated.awaitUntil(new Date(expirationTime.toEpochMilli()));
    verify(dummy);
}
I'm sorry for not providing a runnable example; my current code uses a private API :/.
Do you think LockSupport may cause trouble because of the multiple tests running? Am I missing something in how I use LockSupport or ReentrantLock? Can you think of any other class in the concurrency API that would suit my needs better?
LockSupport is a bit dangerous; you will need to read the documentation closely and notice that:
The call spuriously (that is, for no reason) returns.
So when you think your code will do some "waiting", it might simply return immediately. A spurious wakeup is only one reason; an earlier unpark, an interrupt, or the deadline elapsing will also end the park early.
When using ReentrantLock, all of them should fail with an IllegalMonitorStateException, because you never acquire the lock via ReentrantLock::lock before calling signalAll or awaitUntil. And avoid new Date(...); java.util.Date is legacy API, and the await(long, TimeUnit) overload does the same job without it.
I think you are over-complicating things; you could do the same signaling with a plain lock. A simplified example:
public static void main(String[] args) {
    Object lock = new Object();

    Thread first = new Thread(() -> {
        synchronized (lock) {
            System.out.println("Locked");
            try {
                System.out.println("Sleeping");
                lock.wait();
                System.out.println("Waked up");
            } catch (InterruptedException e) {
                // these are your tests, no one should interrupt
                // unless it's yourself
                throw new RuntimeException(e);
            }
        }
    });
    first.start();

    sleepOneSecond();

    Thread second = new Thread(() -> {
        synchronized (lock) {
            System.out.println("notifying waiting threads");
            lock.notify();
        }
    });
    second.start();
}

private static void sleepOneSecond() {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
Notice the output:
Locked
Sleeping
notifying waiting threads
Waked up
It should be obvious how the "communication" (signaling) between threads happens.
I'm trying to write my own torrent program based on libtorrent-rasterbar and I'm having problems getting the alert mechanism to work correctly. libtorrent offers the function
void set_alert_notify (boost::function<void()> const& fun);
which is supposed to
The intention of the function is that the client wakes up its main thread, to poll for more alerts using pop_alerts(). If the notify function fails to do so, it won't be called again, until pop_alerts is called for some other reason.
So far so good; I think I understand the intention behind this function. However, my actual implementation doesn't work so well. My code so far is like this:
std::unique_lock<std::mutex> ul(_alert_m);
session.set_alert_notify([&]() { _alert_cv.notify_one(); });
while (!_alert_loop_should_stop) {
    if (!session.wait_for_alert(std::chrono::seconds(0))) {
        _alert_cv.wait(ul);
    }
    std::vector<libtorrent::alert*> alerts;
    session.pop_alerts(&alerts);
    for (auto alert : alerts) {
        LTi_ << alert->message();
    }
}
However, there is a race condition: if wait_for_alert returns NULL (no alerts yet) but the function passed to set_alert_notify is called before _alert_cv.wait(ul), the whole loop waits forever (because of the second sentence in the quote).
For the moment my solution is just changing _alert_cv.wait(ul); to _alert_cv.wait_for(ul, std::chrono::milliseconds(250));, which keeps the number of wakeups per second low enough while keeping latency acceptable.
But it's really more a workaround than a solution, and I keep thinking there must be a proper way to handle this.
You need a variable to record the notification. It should be protected by the same mutex that owns the condition variable.
bool _alert_pending = false;

session.set_alert_notify([&]() {
    std::lock_guard<std::mutex> lg(_alert_m);
    _alert_pending = true;          // remember the notification even if no one is waiting yet
    _alert_cv.notify_one();
});

std::unique_lock<std::mutex> ul(_alert_m);
while (!_alert_loop_should_stop) {
    _alert_cv.wait(ul, [&]() {
        return _alert_pending || _alert_loop_should_stop;
    });
    if (_alert_pending) {
        _alert_pending = false;
        ul.unlock();                // don't hold the mutex while handling alerts
        session.pop_alerts(...);
        ...
        ul.lock();
    }
}
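One more detail worth noting: the wait predicate above also checks _alert_loop_should_stop, so whoever stops the loop has to flip that flag under the same mutex and then notify, or the loop can keep sleeping. A minimal sketch of a matching stop routine (using the same member names as above):

void stop_alert_loop() {
    {
        std::lock_guard<std::mutex> lg(_alert_m);
        _alert_loop_should_stop = true;   // observed by the wait predicate
    }
    _alert_cv.notify_one();               // wake the loop so it can exit
}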
I'm just getting into concurrent programming. Most probably my issue is very common, but since I can't find a good name for it, I can't google it.
I have a C++ UWP application where I try to apply the MVVM pattern, but I guess the pattern, and even the fact that it's UWP, is not really relevant here.
First, I have a service interface that exposes an operation:
struct IService
{
    virtual task<int> Operation() = 0;
};
Of course, I provide a concrete implementation, but it is not relevant for this discussion. The operation is potentially long-running: it makes an HTTP request.
Then I have a class that uses the service (again, irrelevant details omitted):
class ViewModel
{
    unique_ptr<IService> service;
public:
    task<void> Refresh();
};
I use coroutines:
task<void> ViewModel::Refresh()
{
    auto result = co_await service->Operation();
    // use result to update UI
}
The Refresh function is invoked on timer every minute, or in response to a user request. What I want is: if a Refresh operation is already in progress when a new one is started or requested, then abandon the second one and just wait for the first one to finish (or time out). In other words, I don't want to queue all the calls to Refresh - if a call is already in progress, I prefer to skip a call until the next timer tick.
My attempt (probably very naive) was:
mutex refresh;

task<void> ViewModel::Refresh()
{
    unique_lock<mutex> lock(refresh, try_to_lock);
    if (!lock)
    {
        // lock.release(); commented out as harmless but useless => irrelevant
        co_return;
    }
    auto result = co_await service->Operation();
    // use result to update UI
}
Edit after the original post: I commented out the line in the code snippet above, as it makes no difference. The issue is still the same.
But of course an assertion fails: unlock of unowned mutex. I guess the problem is the unlock of the mutex by the unique_lock destructor, which happens in the continuation of the coroutine, on a different thread than the one the mutex was originally locked on.
Using Visual C++ 2017.
Use std::atomic_bool:
std::atomic_bool isRunning = false;   // shared across calls, e.g. a ViewModel member

if (isRunning.exchange(true, std::memory_order_acq_rel) == false) {
    try {
        auto result = co_await service->Operation();
        isRunning.store(false, std::memory_order_release);
        // use result
    }
    catch (...) {
        isRunning.store(false, std::memory_order_release);   // reset the flag before rethrowing
        throw;
    }
}
Two possible improvements: wrap the isRunning.store in a RAII class, and use std::shared_ptr<std::atomic_bool> if the lifetime of the atomic_bool is scoped.
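A minimal sketch of the RAII variant (illustrative only; it assumes isRunning is a std::atomic_bool member of ViewModel and puts the guard inside Refresh itself):

// Resets the flag when the coroutine body finishes, on whatever thread
// that happens to be, including when an exception propagates out.
struct RunningGuard {
    explicit RunningGuard(std::atomic_bool& flag) : flag_(flag) {}
    ~RunningGuard() { flag_.store(false, std::memory_order_release); }
    RunningGuard(const RunningGuard&) = delete;
    RunningGuard& operator=(const RunningGuard&) = delete;
private:
    std::atomic_bool& flag_;
};

task<void> ViewModel::Refresh()
{
    if (isRunning.exchange(true, std::memory_order_acq_rel))
        co_return;                     // a refresh is already in flight: skip this tick

    RunningGuard guard(isRunning);     // owns the reset of the flag
    auto result = co_await service->Operation();
    // use result to update UI
}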
I'm using a QThread, and inside its run method I have a timer invoking a function that performs some heavy actions that take some time, usually longer than the interval that triggers the timer (but not always).
What I need is to protect this method so it can be invoked only if it has completed its previous job.
Here is the code:
NotificationThread::NotificationThread(QObject *parent)
    : QThread(parent),
      bWorking(false),
      m_timerInterval(0)
{
}

NotificationThread::~NotificationThread()
{
}

void NotificationThread::fire()
{
    if (!bWorking)
    {
        m_mutex.lock(); // <-- This is not protecting the GetUpdateTime method from being invoked over and over.
        bWorking = true;
        int size = groupsMarkedForUpdate.size();
        if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        {
            bWorking = false;
            emit UpdateNotifications();
        }
        m_mutex.unlock();
    }
}

void NotificationThread::run()
{
    m_NotificationTimer = new QTimer();
    connect(m_NotificationTimer,
            SIGNAL(timeout()),
            this,
            SLOT(fire()),
            Qt::DirectConnection);
    int interval = val.toInt();
    m_NotificationTimer->setInterval(3000);
    m_NotificationTimer->start();
    QThread::exec();
}

// This method is invoked from the main class
void NotificationThread::Execute(const QStringList batchReqList)
{
    m_batchReqList = batchReqList;
    start();
}
You could always have the thread that needs to run the method connect to an onDone signal that alerts all subscribers that it is complete. Then you should not run into the problems associated with double-checked locking and memory reordering. Maintain the run state in each thread.
I'm assuming you want to protect your thread from calls from another thread. Am I right? If yes, then...
This is what QMutex is for. QMutex gives you an interface to lock a section of code until it is unlocked, serializing access to it. You can keep it locked until the work is done, but use it with care: QMutex presents its own problems when used incorrectly. Refer to the documentation for more information on this.
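A rough sketch of that idea with QMutex::tryLock(), reusing the names from your fire() (illustrative only, not a drop-in replacement):

void NotificationThread::fire()
{
    if (!m_mutex.tryLock())   // previous run still in progress: skip this tick
        return;

    if (MyApp::getInstance()->GetUpdateTime(batchVectorResult))
        emit UpdateNotifications();

    m_mutex.unlock();
}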
But there are many more ways to solve your problem. For example, @Beached suggests a simpler one: your instance of QThread would emit a signal when it's done. Or better yet, keep a bool isDone inside your thread that is true when it's done and false when it's not; whenever it's true, it's safe to call the method. But make sure you do not manipulate isDone outside the thread that owns it; I suggest you only touch isDone inside your QThread.
Here's the class documentation: link
LOL, I seriously misinterpreted your question. Sorry. It seems you've already done my second suggestion with bWorking.