Can SYSTEM_INFO::dwActiveProcessorMask change while my process is running? - c++

I'm curious about something. Can the dwActiveProcessorMask member of the SYSTEM_INFO struct change after my service starts running (on Windows)? If not, I'd like to cache it during initialization.

It is reasonable to assume that it could change. See, for example, this description of dealing with dynamic partitioning and how to code and test for correctness.

Of course not; dwActiveProcessorMask is set during the hardware-detection phase at boot, and it can only change if the hardware changes. If you read the value during your application's initialization phase, you will always be fine.

Related

C++ std::streambuf::pubseekpos(): check if random access is supported

While writing a generic function taking a reference to an std::streambuf, I'd like to check whether the supplied buffer is tied to something supporting random access, and optimize the processing accordingly, i.e. check whether moving around the stream with pubseekpos() is available or not.
I might have overlooked that in the documentation. Would you happen to have a crystal clear solution, other than discovering after the fact whether it works based on the return value of that method (which returns -1 on failure to seek, a failure that could have other causes)?
Available docs:
https://www.cplusplus.com/reference/streambuf/streambuf/pubseekpos/
https://en.cppreference.com/w/cpp/io/basic_streambuf/pubseekpos
Thanks in advance. Regards.
(Edit #GIJD after first comment)
Even if I'm not really glad about it, you've got a good point here. The same as "rather than testing if a file exists before opening it, open it and process error_status/exceptions" (as a race condition is always possible between the test and the opening otherwise).
OK, the actual function call by itself might indeed be the best proof, better than testing some feature flag (which might also be wrongly set; unlikely if the code was duly tested... well, I just read what I wrote, as if code were always duly tested. OK, ha ha, better to test the function directly, yes).
But still, a -1 return value might mean error rather than lack of random access. One might argue that the result is the same: if it fails, whatever the reason (missing feature or some error), I won't be able to do random access, so I'll have to fall back to a one-pass stream read anyway.
Thanks. Regards.

Pentaho Data Integration "Variable scope type" in Set Variables

I have a job running in PDI that is transferring data from different sources to different targets and back for a specific system. This job has a lot of child jobs. Let's call that job MasterJob1.
We have the same System running for another purpose. Therefore, I want to copy that job in PDI. Here I just have to change a few settings. Let's call that MasterJob2.
To make different variables available for the entire job (also in parent jobs, child jobs and so on of the masterjob), we are using "Set Variables". Here, we have a lot of different variables. Let's say, one variable is called TestVar. At the moment, the "Variable Scope type" of these Variables in MasterJob1 is always set on "Valid in Java Virtual Machine".
According to the PDI documentation http://wiki.pentaho.com/display/EAI/Set+Variables, this means the variables are available everywhere in the virtual machine. As I understand it, this means that if I copy the job and leave the "Variable Scope type" as it is, the variable TestVar can be written by MasterJob1 but can also be overwritten by MasterJob2.
I definitely want to avoid MasterJob1 overwriting variables of MasterJob2 and vice versa. However, the variables that are set in MasterJob1 must be available everywhere in MasterJob1, and the variables in MasterJob2 must be available everywhere in MasterJob2. Therefore I continued reading the documentation. It says that there is a "Variable Scope Type" called "Valid in the root Job". Is my assumption right that this is the variable scope type I need to use?
Unfortunately I do not have much experience with this, and I hope you can tell me whether that is the right way. Creating a test environment would take me some days, so I hope you can give me an easy "Yes, go for it" or the right solution.
Your assumption is correct.
Avoid using Valid in the virtual machine for jobs on the server, although it is handy for debug on your dev PC.
Use Valid in the parent job when a transformation (or job) has to return a value to the caller.
Use Valid in the grand-parent job very rarely, although I remember some special moments where it was useful.
Use Valid in the root job almost all the time.

Is ref-copying a compiler optimization, and can I avoid it?

I dislike pointers, and generally try to write as much code as I can using refs instead.
I've written a very rudimentary "vertical layout" system for a small Win32 app. Most of the Layout methods look like this:
void Control::DoLayout(int availableWidth, int &consumedYAmt)
{
    textYPosition = consumedYAmt;
    consumedYAmt += measureText(font, availableWidth);
}
They are looped through like so:
int innerYValue = 0;
for (auto &control : controls) {
    control->DoLayout(availableWidth, innerYValue);
}
int heightOfControl = innerYValue;
It's not drawing its content here, just calculating exactly how much space this control will require (usually adding padding too, etc.). This has worked great for me... in Debug mode.
I found that in Release mode, I could suddenly see tangible, loggable issues where, when I'm looping through controls and calling DoLayout(), the consumedYAmt variable actually stays at 0 in the outside loop. The most annoying part is that if I put in breakpoints and walk through the code line by line, this stops happening and parts of it are properly updated by the inside "add" methods.
I'm kind of thinking about whether this would be some compiler optimization where they think I'm simply adding the ref flag to ints as a way to optimize memory; or if there's any possibility this actually works in a way different from how it seems.
I would give a minimum reproducible example, but I wasn't able to do so with a tiny commandline app. I get the sense that if this is an optimization, it only kicks in for larger code blocks and indirections.
EDIT: Again sorry for generally low information, but I'm now getting hints that this might be some kind of linker issue. I skipped one part of the inheritance model in my pseudocode: The calling class actually calls "Layout()", which is a non-virtual function on the root definition of the class. This function performs some implementation-neutral logic, and then calls DoLayout() with the same arguments. However, I'm now noticing that if I try adding a breakpoint to Layout(), Visual Studio claims that "The breakpoint will not be hit. No executable code of the debugger's target code type is associated with this line." I am able to add breakpoints to certain other lines, but I'm beginning to notice weird stepping logic where it refuses to go inside certain functions, like Layout. Already tried completely clearing the build folders and rebuilding. I'm going to have to keep looking, since I have to admit this isn't a lot to go on.
Also, random addition: The "controls" list is a vector containing shared_ptr objects. I hadn't suspected the looping mechanism previously but now I'm looking more closely.
"the consumedYAmt variable actually stays at 0"
The behavior you describe is typical for a specific optimization that's more due to the CPU than the compiler. I suspect you're logging consumedYAmt from another thread. The updates to consumedYAmt simply don't make it to that other thread.
This is legal for the CPU because the C++ compiler didn't put in memory fences, and the compiler didn't put in fences because the variable isn't atomic.
In a small program without threads, this simply doesn't show up, nor does it show in debug mode.
(Answer written by the OP)
Okay, eventually figured this one out. As simple as the issue was, pinning it down became difficult because Release mode's debugger seemingly acted in inconsistent ways. When I changed tactics and added logging statements in lots of places, I found that my Control class had an "mShowing" variable that was never initialized in its constructor. In Debug mode the uninitialized memory apparently happened to read as "true", but in Release mode my best analysis is that it came out "false" most of the time, which skipped the main body of the DoLayout method.
Since, throughout the process, responders had little information to work with (it certainly would have been easier if I had posted a longer example), I simply upvoted each comment that mentioned uninitialized variables.

PROCESS_MEMORY_COUNTERS_EX creates unreliable PrivateUsage field, why?

Using the following code on VS 2012, native C++ development:
SIZE_T CppUnitTests_MemoryValidation::TakeMemoryUsageSnapshot() {
    PROCESS_MEMORY_COUNTERS_EX processMemoryCounter;
    GetProcessMemoryInfo(GetCurrentProcess(),
        (PROCESS_MEMORY_COUNTERS*)&processMemoryCounter,
        sizeof(processMemoryCounter));
    return processMemoryCounter.PrivateUsage;
}
I call this method before and after each CPPUnitTest and calculate the difference of the PrivateUsage field. Normally this difference should be zero, assuming my memory allocation doesn't leak.
Only simple things happen inside my test class. Even without any memory allocation, just creating an instance of my test class and releasing it again, sometimes (not in every test iteration) the difference gets above zero, so this scheme seems to be non-deterministic.
Is there somebody with more insight than me who could either explain how to tackle this or tell me what is wrong with my assumptions?
In short, your assumptions are not correct. There can be a lot of other things going on in your process that perform memory allocation (the Event Tracing thread, and any others created by third-party add-ons on your system) so it is not surprising to see memory use go up occasionally.
Following Hans Passant's debug allocator link, I noticed some more information about memory leak detection instrumentation by Microsoft, in particular the _CrtMemCheckpoint function(s).
The link I followed was "http://msdn.microsoft.com/en-us/library/5tz9b54s(v=vs.90).aspx"
Now when taking my memory snapshots with this function and checking for a difference with the _CrtMemDifference function, this seems to work reliably and deterministically.

Accessing direct memory addresses and obtaining the values in C++

I was wondering if it was possible to access a direct block of memory using C/C++ and grab the value. For example:
int i = 15;
int *p = &i;
cout << &i;
If I took the printed value here, that would give me the address of the variable i, which contains the value 15. I will just say it printed out 0x0ff9c1 for this example. If I have a separate program which declares a pointer like so...
int *p = 0x0ff9c1;
cout << *p;
Would it be possible to print out that 15 that the other application placed in the memory block 0x0ff9c1? I know my pointer declaration with the memory address is incorrect; I am unsure how to do it otherwise. I have tried using memcpy but I have not been able to get that to work either. I know this is possible somehow, as I have a program called Cheat Engine which modifies game memory address values to gain unfair advantages. I have been successful in entering the printed memory location and obtaining the value (15) through Cheat Engine. My goal is to do this using C++.
If this is too confusing, basically I would like to access a variable that another application stored using its memory address and print out the value. I am using Windows 7 x64 with MinGW compiler if that matters. Thanks!
PS: I'll post a picture of what Cheat Engine does to give a better idea.
The two processes have separate address spaces. One process cannot access another process's memory unless it is explicitly shared memory.
You can't do it in a platform-agnostic way in C++. While I haven't used this "cheat engine" specifically, it almost certainly is using the same special API that a debugger uses. The code will be specific to Windows, and you will require a certain privilege level on the running process.
(For instance, if you are using Visual Studio and execute a program from it in a Debug Mode, Visual Studio can look at and modify values in that program.)
I haven't written a debugger in a while, so I don't know where a good place to get started on the Debug API is, but you can search around the web for things like this article:
http://www.woodmann.com/fravia/iceman1.htm
If you want to change the memory used by another process, one way would be to inject your code into the other process. From that point, you can do whatever you want to the other program's memory as if it were your own.
Search around for remote thread creation or hooking. There are more than a few questions about it here (and here, for starters).
In general, it's not usually possible for one program to modify the memory of another. The system goes to great lengths to ensure this. If it did not, no program would be safe. This is particularly true in all the Unix variants I've worked on, though not on all proprietary OSes I've seen.
Note that none of these rules apply to the kernel ...
There is also a programming paradigm called shared memory, but you have to explicitly set that up.
Short answer: you can't usually do that. I believe you mentioned Windows. I know nothing about Windows, so your mileage may vary.
A bit late, but you could still do this through DLL injection. Here is a link to a tutorial: http://resources.infosecinstitute.com/using-createremotethread-for-dll-injection-on-windows/