SetThreadAffinityMask Usage - C++

I really have a hard time understanding the SetThreadAffinityMask function. I'm trying to implement a timer with the QueryPerformanceCounter function, but I don't understand how to get the thread affinity right. Someone on MSDN posted the following code example:
void HRTimer::StartTimer(void)
{
    DWORD_PTR oldmask = ::SetThreadAffinityMask(::GetCurrentThread(), 0);
    ::QueryPerformanceCounter(&start);
    ::SetThreadAffinityMask(::GetCurrentThread(), oldmask);
}
But when I adopt this code snippet, the value of oldmask returned by SetThreadAffinityMask is zero. On MSDN I saw that a return value of zero means an error occurred. I called GetLastError() and got the error code ERROR_INVALID_PARAMETER. Now I'm wondering whether the code snippet above is incorrect. Can someone please explain to me how to use SetThreadAffinityMask correctly, so that QueryPerformanceCounter is, for example, only called on the first CPU in the system? Or is the above example correct even though SetThreadAffinityMask returns zero?
Thank you in advance.

The mask is a bitfield: each bit designates a processor. A mask of 0 means "no processor", which makes no sense, hence the ERROR_INVALID_PARAMETER.
0x0001 : proc 1
0x0003 : proc 1 and 2
0x000F : proc 1, 2, 3, 4
...
MSDN for SetThreadAffinityMask
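For example, to pin the thread to the first CPU while taking the reading, the intended pattern presumably looks like this (the helper name is mine, not from the MSDN post):
#include <windows.h>

LARGE_INTEGER ReadCounterOnFirstCpu()
{
    LARGE_INTEGER now = {};
    // Mask 1 = bit 0 set = first processor only; a mask of 0 is invalid.
    DWORD_PTR oldMask = ::SetThreadAffinityMask(::GetCurrentThread(), 1);
    if (oldMask != 0) // 0 means the call failed; check GetLastError()
    {
        ::QueryPerformanceCounter(&now);
        // Restore the previous affinity so the thread can migrate again.
        ::SetThreadAffinityMask(::GetCurrentThread(), oldMask);
    }
    return now;
}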

Related

ASIOCallbacks::bufferSwitchTimeInfo comes very slowly in 2.8MHz Samplerate with DSD format on Sony PHA-3

I bought a Sony PHA-3 and am trying to write an app to play DSD in native mode. (I've already succeeded in DoP mode.)
However, when I set the sample rate to 2.8 MHz, I found that ASIOCallbacks::bufferSwitchTimeInfo doesn't get called as often as it should:
it takes nearly 8 seconds to request one second's worth of samples at 2.8 MHz.
The code is only lightly modified from the host sample of the ASIO SDK 2.3, so I'll post just the key parts to make the question complete.
After ASIOStart, the host sample keeps printing the progress to indicate the time info, like this:
fprintf(stdout, "%d ms / %d ms / %d samples %ds", asioDriverInfo.sysRefTime,
    (long)(asioDriverInfo.nanoSeconds / 1000000.0),
    (long)asioDriverInfo.samples,
    (long)(asioDriverInfo.samples / asioDriverInfo.sampleRate));
The final expression tells me how many seconds have elapsed (asioDriverInfo.samples / asioDriverInfo.sampleRate),
where asioDriverInfo.sampleRate is 2822400 Hz,
and asioDriverInfo.samples is assigned in ASIOCallbacks::bufferSwitchTimeInfo as shown below:
if (timeInfo->timeInfo.flags & kSamplePositionValid)
    asioDriverInfo.samples = ASIO64toDouble(timeInfo->timeInfo.samplePosition);
else
    asioDriverInfo.samples = 0;
This is the original code from the sample.
So I can easily see that the elapsed time advances very slowly.
I've tried raising the sample rate even higher, say 2.8 MHz x 4; it takes even longer for the time to advance one second.
I tried lowering the sample rate below 2.8 MHz, but the API failed.
I have definitely set the sample format according to the SDK guide:
ASIOIoFormat aif;
memset(&aif, 0, sizeof(aif));
aif.FormatType = kASIODSDFormat;
ASIOSampleRate finalSampleRate = 176400;
if (ASE_SUCCESS == ASIOFuture(kAsioSetIoFormat, &aif)) {
    finalSampleRate = 2822400;
}
In fact, without setting the sample format to DSD, setting the sample rate to 2.8 MHz leads to an API failure.
Finally, I remembered that all DAWs (Cubase, Reaper, ...) have an option to set the thread priority, so I suspected the callback thread's priority wasn't high enough and tried raising it to see whether that would help. However, when I check the thread priority, it returns THREAD_PRIORITY_TIME_CRITICAL:
static double processedSamples = 0;
if (processedSamples == 0)
{
    HANDLE t = GetCurrentThread();
    int p = GetThreadPriority(t); // I get THREAD_PRIORITY_TIME_CRITICAL here
    SetThreadPriority(t, THREAD_PRIORITY_HIGHEST); // So there's no need to raise the priority anymore... (sad)
}
It's the same for the thread priority boost property: it's not disabled (already boosted).
Has anybody written an ASIO host demo and can help me resolve this issue?
Thanks very much in advance.
Issue resolved.
I should call getBufferSize and createBuffers after kAsioSetIoFormat.
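For anyone hitting the same wall, a hedged sketch of the corrected order, using the SDK's C wrappers (ASIOFuture, ASIOSetSampleRate, ASIOGetBufferSize, ASIOCreateBuffers); channel and callback setup are elided, and the exact names may differ in your host code:
// Switch the driver to DSD *before* querying buffer sizes, presumably
// because the buffer geometry can depend on the I/O format.
ASIOIoFormat aif;
memset(&aif, 0, sizeof(aif));
aif.FormatType = kASIODSDFormat;
if (ASE_SUCCESS == ASIOFuture(kAsioSetIoFormat, &aif))
{
    ASIOSetSampleRate(2822400); // DSD64

    long minSize, maxSize, preferredSize, granularity;
    ASIOGetBufferSize(&minSize, &maxSize, &preferredSize, &granularity);

    // ... fill the ASIOBufferInfo array for the channels in use, then:
    // ASIOCreateBuffers(bufferInfos, numChannels, preferredSize, &asioCallbacks);
}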

Measure time on C++ for system call command in Linux

I am trying to measure the CPU time of an external program that I call from my C++ code with system() (on Linux). I wonder whether the user and system times from getrusage can be compared with the user and system times reported by the time command.
For example, would the times returned by these two pieces of code be (approximately) the same, that is, would I be doing a fair comparison?
//CODE 1 (GETRUSAGE), requires <sys/resource.h>
long int timeUsage1 = 0, timeUsage2 = 0;
struct rusage usage;
getrusage(RUSAGE_SELF, &usage);
timeUsage1 = usage.ru_utime.tv_sec + usage.ru_stime.tv_sec;
//C++ code
getrusage(RUSAGE_SELF, &usage);
timeUsage2 = (usage.ru_utime.tv_sec + usage.ru_stime.tv_sec) - timeUsage1;
//CODE 2 (TIME LINUX COMMAND from my C++ code)
system("time ./external"); // where external is equivalent to the C++ code above
Thanks,
Ana
PS: With the time command from CODE 2 I get something like this:
4.89user 2.13system 0:05.11elapsed 137%CPU (0avgtext+0avgdata 23968maxresident)k
0inputs+86784outputs (0major+2386minor)pagefaults 0swaps
Should I be concerned about the 137%CPU at all?
So, the 137% is based on the fact that 7.02s (total "user + system") is 137% of the actual wall-clock time it took to run the code, 5.11s. So, if you had only one processor (core), the overall time would be at least 7.02s.
Since we don't see your actual code, it's hard to say whether this is caused by your code being multithreaded, or whether the time is spent in the kernel, which runs things in multiple threads "behind the scenes", so to speak.
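To make the figure concrete, here is the arithmetic reproduced from the numbers in the output above (nothing here beyond the quoted values):
#include <cstdio>

int main()
{
    // user + sys from the question's `time` output, versus elapsed time.
    double user = 4.89, sys = 2.13, elapsed = 5.11;
    std::printf("CPU%% = %.0f%%\n", 100.0 * (user + sys) / elapsed); // ~137%
    return 0;
}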

pthread_cond_timedwait returns error 138

I can't find any information on this with Google, so I'm posting here hoping that someone can help.
My problem is with the Windows pthreads function pthread_cond_timedwait(). When the indicated time has elapsed, the function should return ETIMEDOUT. Instead, in my code, where the condition variable is never signaled, it returns the value 138, and it does so much earlier than the expected timeout, sometimes immediately.
So my questions are: what is this error 138? And why does the timeout not fully elapse?
The code I use for the thread is:
int retcode = 0;
timeb tb;
ftime(&tb);
struct timespec timeout;
timeout.tv_sec = tb.time + 8;
timeout.tv_nsec = tb.millitm * 1000 * 1000;
pthread_mutex_lock(&mutex_);
retcode = pthread_cond_timedwait(&cond_, &mutex_, &timeout);
pthread_mutex_unlock(&mutex_);
if (retcode == ETIMEDOUT)
{
    addLog("Timed-out. Sending request...", LOG_DEBUG);
}
else // Something happened
{
    std::stringstream ss;
    ss << "Thread interrupted (Error " << retcode << ")";
    addLog(ss.str().c_str(), LOG_DEBUG);
}
Is there something wrong with my absolute timeout computation?
Only this thread and the calling thread exist. The calling thread joins the created one right after creating it and correctly waits until it finishes.
Currently the condition variable cond_ is never signaled, but if I do signal it, pthread_cond_timedwait() returns 0 as expected.
Even though it's not shown here, both cond_ and mutex_ are correctly initialised (if I don't do this, I get an EINVAL error).
Also, going through the pthreads code, I can't find this error; I can only find some return errno statements that could produce it, but I don't know the meaning of 138.
If it helps, I am using Visual Studio 2003 with pthreads-win32 v2.9.1.
Thanks,
RG
Maybe this answer will be helpful for someone.
I ran into the same issue: pthread_cond_timedwait returning error 138.
I rummaged through all the source code of pthreads-win32 but didn't find anything resembling error code 138.
I downloaded the pthreads source code, built it with Visual Studio 2008, and... everything worked fine! :(
The cause of this behaviour is that the precompiled DLL was built with MSVC100, but I built my app with MSVC90. ETIMEDOUT in MSVC100 is 138, but in MSVC90 it is 10060.
That's all! It's Windows, bro!
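A quick way to confirm a mismatch like this is to print the ETIMEDOUT your own build actually sees and compare it with the raw return value from the DLL; a trivial check, assuming the pthreads-win32 headers are on the include path:
#include <pthread.h> // pthreads-win32; supplies ETIMEDOUT if the CRT lacks it
#include <stdio.h>

int main()
{
    // If this prints 10060 while pthread_cond_timedwait returns 138 (or vice
    // versa), the DLL and your app were built against different CRTs.
    printf("ETIMEDOUT as this build sees it: %d\n", ETIMEDOUT);
    return 0;
}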

HResult 0x80040204 from IMediaObject::ProcessInput

I get this HRESULT when I resample PCM sound to IEEE float sound with the DirectX Media Resampler.
Changing the bits per sample at the same sampling rate is no problem, and neither is resampling from IEEE float to PCM.
This HRESULT is not documented in the context of a DMO object,
and it doesn't happen on every resampling, but periodically.
Does anyone know, or can anyone guess, what it means?
That's DMO_E_NOTACCEPTING; the documentation says:
DMO_E_NOTACCEPTING: Data cannot be accepted.
You can see the code that generates this in dmoimpl.h, although without the derived DMO's code I don't think that helps (it means the DMO's InternalAcceptingInput method didn't return S_OK).
I guess this all means that the resampler DMO doesn't like your input data. Is it definitely set up correctly?
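For what it's worth, the usual pattern that avoids DMO_E_NOTACCEPTING is to ask the DMO whether it will accept input before each ProcessInput call, and to drain ProcessOutput whenever it won't. A hedged sketch (mediaObject and inBuffer are assumed to be set up elsewhere; error handling and buffer management are omitted):
// mediaObject is an IMediaObject*, inBuffer an IMediaBuffer*.
DWORD status = 0;
HRESULT hr = mediaObject->GetInputStatus(0, &status);
if (SUCCEEDED(hr) && (status & DMO_INPUT_STATUSF_ACCEPT_DATA))
{
    hr = mediaObject->ProcessInput(0, inBuffer, 0, 0, 0);
}
else
{
    // The DMO is holding buffered data: call ProcessOutput (with a filled
    // DMO_OUTPUT_DATA_BUFFER) until it starts accepting input again.
}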

GetThreadContext returns EBP = 0

I'm trying to get the value of another process's EBP register on Windows 7 64-bit.
For this I'm using GetThreadContext, like this:
static CONTEXT threadContext;
memset(&threadContext, 0, sizeof(CONTEXT));
threadContext.ContextFlags = CONTEXT_FULL;
bool contextOk = GetThreadContext(threadHandle, &threadContext);
The EIP value seems OK, but EBP = 0.
I also tried WOW64_GetThreadContext, but it didn't help.
GetLastError() returns 0, so the call itself appears to succeed.
I do suspend the thread with SuspendThread, and it DOESN'T happen every time I sample the thread.
What could cause this?
One possible cause is that the register's value really is zero at the time you inspect it. It's a general-purpose register, so the program can set it to whatever value it wants.
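Note too that a zero EBP can be perfectly legitimate in code built with frame-pointer omission, where EBP is just another scratch register. For completeness, a minimal sampling sketch under the usual constraints (the thread must stay suspended while its context is read; the helper name is mine, and for a 32-bit target inspected from a 64-bit process you would use Wow64SuspendThread / WOW64_GetThreadContext with a WOW64_CONTEXT instead):
#include <windows.h>

bool SampleThreadContext(HANDLE threadHandle, CONTEXT& out)
{
    // The context is only meaningful while the thread is not running.
    if (::SuspendThread(threadHandle) == (DWORD)-1)
        return false;

    ZeroMemory(&out, sizeof(out));
    out.ContextFlags = CONTEXT_FULL;
    BOOL ok = ::GetThreadContext(threadHandle, &out);

    ::ResumeThread(threadHandle);
    return ok != FALSE;
}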