I'm working on a machine that has 8 gigs of memory installed and I'm trying to programmatically determine how much memory is installed in the machine. I've already attempted using sysctlbyname() to get the amount of memory installed, however it seems to be limited to returning a signed 32 bit integer.
uint64_t total = 0;
size_t size = sizeof(total);
if( !sysctlbyname("hw.physmem", &total, &size, NULL, 0) )
    m_totalMemory = total;
The above code, no matter what type is passed to sysctlbyname, always returns 2147483648 in the total variable. I've been searching through IOKit and IORegistryExplorer for another route of determining installed memory, but have come up with nothing so far. I've found IODeviceTree:/memory in IORegistryExplorer, but there's no field in there for size. I'm not finding anything anywhere else in the IO Registry either. Is there a way to access this information via IOKit, or a way to make sysctlbyname return more than a 32-bit signed integer?
You can use sysctl() and query HW_MEMSIZE. This returns the memory size as a 64-bit integer, instead of the default 32-bit integer.
The man page gives the details.
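For example, here is a minimal sketch of that query (assuming macOS / Darwin, where HW_MEMSIZE is defined; sysctlbyname("hw.memsize", ...) is the equivalent by-name form):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

int main()
{
    int mib[2] = { CTL_HW, HW_MEMSIZE };   // HW_MEMSIZE reports physical memory as a 64-bit value
    uint64_t total = 0;
    size_t size = sizeof(total);

    // sysctlbyname("hw.memsize", &total, &size, NULL, 0) should behave the same way
    if (sysctl(mib, 2, &total, &size, NULL, 0) == 0)
        printf("Installed memory: %llu bytes\n", (unsigned long long)total);
    return 0;
}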
The easy way:
[[NSProcessInfo processInfo] physicalMemory]
I've got a project using Crypto++.
Crypto++ is its own project, which builds into a static lib.
Aside from that I have another large project that uses some of the Crypto++ classes and processes various algorithms, which also builds into a static lib.
Two of the functions are these:
long long MyClass::EncryptMemory(std::vector<byte> &inOut, char *cPadding, int rounds)
{
    typedef std::numeric_limits<char> CharNumLimit;
    char sPadding = 0;

    //Calculates padding and returns value as provided type
    sPadding = CalcPad<decltype(sPadding)>(reinterpret_cast<MyClassBase*>(m_P)->BLOCKSIZE, static_cast<int>(inOut.size()));

    //Push random chars as padding, we never care about padding's content so it doesn't matter what the padding is
    for (auto i = 0; i < sPadding; ++i)
        inOut.push_back(sRandom(CharNumLimit::min(), CharNumLimit::max()));

    std::size_t nSize = inOut.size();
    EncryptAdvanced(inOut.data(), nSize, rounds);

    if (cPadding)
        *cPadding = sPadding;

    return nSize;
}

//Removing the padding is the responsibility of the caller.
//Nevertheless the string is encrypted with padding
//and should here be the right string with a little padding
long long MyClass::DecryptMemory(std::vector<byte> &inOut, int rounds)
{
    DecryptAdvanced(inOut.data(), inOut.size(), rounds);
    return inOut.size();
}
Where EncryptAdvanced and DecryptAdvanced pass the arguments to the Crypto++ object.
//...
AdvancedProcessBlocks(bytePtr, nullptr, bytePtr, length, 0);
//...
These functions have so far worked flawlessly; no modifications have been applied to them for months.
The logic around them has evolved, though the calls and the data passed to them did not change.
The data being encrypted / decrypted is rather small but has a dynamic size, which is being padded if (datasize % BLOCKSIZE) has a remainder.
Example: AES Blocksize is 16. Data is 31. Padding is 1. Data is now 32.
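For illustration only (this is not the author's CalcPad, just a sketch of the padding rule described in the example):

#include <cstddef>

// Pad up to the next multiple of the cipher block size, only when needed.
std::size_t PadAmount(std::size_t blockSize, std::size_t dataSize)
{
    std::size_t rem = dataSize % blockSize;
    return rem ? blockSize - rem : 0;   // 31 % 16 == 15  ->  pad by 1 to reach 32
}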
After encrypting and before decrypting, the string is the same - as in the picture.
Running all this in debug mode apparently works as intended. Even when running this program on another computer (with VS installed for DLLs) it shows no difference. The data is correctly encrypted and decrypted.
Trying to run the same code in release mode results in a totally different encrypted string, plus it does not decrypt correctly - "trash data" is decrypted. The wrongly encrypted or decrypted data is consistent - always the same trash is decrypted. The key/password and the rounds/iterations are the same all the time.
Additional info: The data is saved in a file (ios_base::binary) and correctly processed in debug mode, from two different programs in the same solution using the same static librar(y/ies).
What could be the cause of this Debug / Release problem?
I re-checked the git history a couple of times, debugged for days through the code, yet I cannot find any possible cause for this problem. If any information - aside from a (here rather impossible) MCVE is needed, please leave a comment.
Apparently this is a bug in CryptoPP. The minimum keylength of Rijndael / AES is set to 8 instead of 16. Using an invalid keylength of 8 bytes will cause an out-of-bounds access to the in-place array of Rcon values. This keylength of 8 bytes is currently reported as valid and has to be fixed in CryptoPP.
See this issue on github for more information. (On-going conversation)
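Until the fix lands, a defensive guard on the application side can at least reject the key lengths that trigger the bug. This is only a sketch of such a check, not part of the author's code:

#include <cstddef>
#include <stdexcept>

// AES only accepts 16-, 24- or 32-byte keys; the buggy minimum of 8 bytes
// would otherwise be accepted and lead to the out-of-bounds Rcon access.
void CheckAesKeyLength(std::size_t keyLength)
{
    if (keyLength != 16 && keyLength != 24 && keyLength != 32)
        throw std::invalid_argument("invalid AES key length");
}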
I am trying to pass some data from my kernel driver up to the user application.
I have defined the structure in my header file shared by my driver and application:
typedef struct _CallBack
{
    HANDLE hParId;
    HANDLE hProId;
    BOOLEAN bCreate;
} CB_INFO, *PCB_INFO;
In my driver, I have a switch statement
case IOCTL_CODE:
    if (outputBufferLength >= sizeof(PCB_INFO))
    {
        callback->hParId = deviceExtension->hParId;
        callback->hProId = deviceExtension->hProId;
        callback->bCreate = deviceExtension->bCreate;
        Irp->IoStatus.Information = outputBufferLength;
        Status = STATUS_SUCCESS;
    }
I have tried debugging the code using DbgPrint; there was nothing wrong with the if statement, as outputBufferLength is 12 and sizeof(PCB_INFO) is 8.
As for the DeviceIoControl code in my application:
DeviceIoControl(
    driver,
    IOCTL_CODE,
    0,
    0,
    &callback,
    sizeof(callback),
    &bytesReturn,
    NULL);
I have checked bytesReturn and it does not return 0; it returns 12.
Other info:
I am using 64-bit Windows 7.
I really have no idea what is wrong, and really would appreciate any form of help. I would be glad to provide more of my code if you need more details. Could it be something to do with me writing the driver on a 64-bit platform, or is there just something wrong with my code?
Thanks in advance!
Firstly, PCB_INFO is a pointer type, so sizeof(PCB_INFO) is the size of a pointer, not the size of the buffer you're pointing to. Use sizeof(CB_INFO) (or, equivalently, sizeof(*callback)) instead. The code shown in the question is actually writing past the end of the buffer, so the results are unpredictable.
Secondly, your structure includes two elements of type HANDLE which has a different size in 32-bit and 64-bit architectures. In most situations Windows automatically takes care of converting ("thunking") between 32-bit and 64-bit structures, but in the case of I/O control codes this is your driver's responsibility. This is described in the DDK article Supporting 32-Bit I/O in Your 64-Bit Driver.
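A rough sketch of what that can look like (assuming a WDM driver and reusing the question's callback and deviceExtension variables; CB_INFO_32 is a made-up name for the 32-bit image of the structure):

// 32-bit layout of the structure: HANDLE is 4 bytes in a 32-bit process.
typedef struct _CB_INFO_32
{
    ULONG   hParId;
    ULONG   hProId;
    BOOLEAN bCreate;
} CB_INFO_32, *PCB_INFO_32;

// Inside the IOCTL handler:
#ifdef _WIN64
    if (IoIs32bitProcess(Irp))
    {
        PCB_INFO_32 cb32 = (PCB_INFO_32)callback;
        if (outputBufferLength >= sizeof(CB_INFO_32))
        {
            cb32->hParId  = (ULONG)(ULONG_PTR)deviceExtension->hParId;
            cb32->hProId  = (ULONG)(ULONG_PTR)deviceExtension->hProId;
            cb32->bCreate = deviceExtension->bCreate;
            Irp->IoStatus.Information = sizeof(CB_INFO_32);
            Status = STATUS_SUCCESS;
        }
    }
    else
#endif
    {
        // 64-bit caller: use the native CB_INFO layout as in the question,
        // but check against sizeof(CB_INFO) rather than sizeof(PCB_INFO).
    }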
Alternatively you can make your application 64-bit, or change the structure so that it uses only elements of constant size.
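For the last option, a sketch of a layout-stable version of the structure (the type name is made up; the IDs are stored as fixed-width integers, which is fine since process IDs fit in 32 bits anyway):

typedef struct _CB_INFO_FIXED
{
    ULONGLONG parId;    // parent process id, stored as a fixed-width integer
    ULONGLONG proId;    // process id
    UCHAR     create;   // non-zero on process creation
    UCHAR     pad[7];   // explicit padding: the size is 24 bytes in both 32-bit and 64-bit builds
} CB_INFO_FIXED, *PCB_INFO_FIXED;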
I'm writing a utility which should get the current CPU load. At the moment it's working and using \Processor(_Total)\% Processor Time in my localization. For multi-lingual support I'm getting the counter name from the registry via PdhLookupPerfNameByIndex.
The code now looks like:
PdhLookupPerfNameByIndex(NULL, 6, processorTime, &cbPathSize);
PdhLookupPerfNameByIndex(NULL, 238, processor, &cbPathSize);
PDH_COUNTER_PATH_ELEMENTS elements = {NULL, processor, "_Total", NULL, NULL, processorTime};
PdhMakeCounterPath(&elements, fullPath, &cbPathSize, 0);
and I want to remove the hard-coded constants 6 and 238.
Are there constants that represent the indexes for Processor and % Processor Time?
The indexes differ between systems, you have to determine them dynamically. The procedure is hinted at in the MSDN documentation of PdhLookupPerfNameByIndex:
Retrieve the REG_MULTI_SZ data of the registry value HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\009\Counter
Note: "009" stands for English. That key always exists, even on machines with different language versions of Windows.
Look for your (English) counter/object name in the returned data. Format: index followed by English counter/object name, e.g.:
6
% Processor Time
There is your index. Just convert from string to DWORD and use it with PdhLookupPerfNameByIndex.
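A sketch of that lookup (FindCounterIndex is a made-up helper name; error handling is reduced to the essentials):

#include <windows.h>
#include <cwchar>
#include <vector>

DWORD FindCounterIndex(const wchar_t* englishName)
{
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Perflib\\009",
            0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 0;

    // First call gets the size, second call gets the REG_MULTI_SZ data.
    DWORD cbData = 0;
    RegQueryValueExW(hKey, L"Counter", NULL, NULL, NULL, &cbData);
    std::vector<wchar_t> data(cbData / sizeof(wchar_t) + 1);
    RegQueryValueExW(hKey, L"Counter", NULL, NULL,
                     reinterpret_cast<LPBYTE>(data.data()), &cbData);
    RegCloseKey(hKey);

    // Layout: "index\0name\0index\0name\0...\0\0"
    DWORD index = 0;
    for (const wchar_t* p = data.data(); *p != L'\0'; )
    {
        const wchar_t* idx = p;
        p += wcslen(p) + 1;            // advance to the name
        if (*p == L'\0')
            break;                     // malformed data, stop
        if (_wcsicmp(p, englishName) == 0)
        {
            index = static_cast<DWORD>(wcstoul(idx, NULL, 10));
            break;
        }
        p += wcslen(p) + 1;            // advance to the next index
    }
    return index;
}

// Usage (the indexes then go into PdhLookupPerfNameByIndex as before):
// DWORD processorIdx     = FindCounterIndex(L"Processor");        // e.g. 238
// DWORD processorTimeIdx = FindCounterIndex(L"% Processor Time"); // e.g. 6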
I'm trying to use GetDiskFreeSpaceEx in my C++ win32 application to get the total available bytes on the 'current' drive. I'm on Windows 7.
I'm using this sample code: http://support.microsoft.com/kb/231497
And it works! Well, almost. It works if I provide a drive, such as:
...
szDrive[0] = 'C'; // <-- specifying drive
szDrive[1] = ':';
szDrive[2] = '\\';
szDrive[3] = '\0';
pszDrive = szDrive;
...
fResult = pGetDiskFreeSpaceEx ((LPCTSTR)pszDrive,
(PULARGE_INTEGER)&i64FreeBytesToCaller,
(PULARGE_INTEGER)&i64TotalBytes,
(PULARGE_INTEGER)&i64FreeBytes);
fResult becomes true and I can go on to accurately calculate the number of free bytes available.
The problem, however, is that I was hoping to not have to specify the drive, but instead just use the 'current' one. The docs I found online (Here) state:
lpDirectoryName [in, optional]
A directory on the disk. If this parameter is NULL, the function uses the root of the current disk.
But if I pass in NULL for the Directory Name then GetDiskFreeSpaceEx ends up returning false and the data remains as garbage.
fResult = pGetDiskFreeSpaceEx (NULL,
(PULARGE_INTEGER)&i64FreeBytesToCaller,
(PULARGE_INTEGER)&i64TotalBytes,
(PULARGE_INTEGER)&i64FreeBytes);
//fResult == false
Is this odd? Surely I'm missing something? Any help is appreciated!
EDIT
As per JosephH's comment, I did a GetLastError() call. It returned the DWORD for:
ERROR_INVALID_NAME 123 (0x7B)
The filename, directory name, or volume label syntax is incorrect.
2nd EDIT
Buried down in the comments I mentioned:
I tried GetCurrentDirectory and it returns the correct absolute path, except it prefixes it with \\?\
That's the key to this mystery. What you got back is the name of the directory in the native API path format. Windows is an operating system that internally looks very different from what you are familiar with from winapi programming. The Windows kernel has a completely different API; it resembles the DEC VMS operating system a lot. No coincidence: David Cutler used to work for DEC. On top of that native OS there were originally three API layers: Win32, POSIX and OS/2. They made it easy to port programs from other operating systems to Windows NT. Nobody cared much for the POSIX and OS/2 layers, and they were dropped around the XP era.
One infamous restriction in Win32 is the value of MAX_PATH, 260. It sets the largest permitted size of a C string that stores a file path name. The native API permits much larger names, around 32,000 characters. You can bypass the Win32 restriction by writing the path name in the native API format, which is simply the same path name you are familiar with, but prefixed with \\?\.
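For example (only to illustrate the prefix; the path below is made up), a file whose full path exceeds MAX_PATH can still be opened when the name is given in the native form:

#include <windows.h>

// The \\?\ form must be an absolute path with backslashes; it lifts the
// MAX_PATH limit and turns off most of the Win32 path normalization.
HANDLE h = CreateFileW(L"\\\\?\\C:\\some\\very\\long\\directory\\name\\file.txt",
                       GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);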
So surely the reason you got such a string back from GetCurrentDirectory() is that your current directory name is longer than 259 characters. Extrapolating further, GetDiskFreeSpaceEx() failed because it has a bug: it rejects the long name it sees when you pass NULL. Somewhat understandable, since it isn't normally asked to deal with long names; everybody just passes the drive name.
This is fairly typical for what happens when you create directories with such long names: stuff just starts falling over randomly. In general there is a lot of C code around that uses MAX_PATH, and that code will fail miserably when it has to deal with path names that are longer than that. This is a pretty exploitable problem too, for its ability to create a stack buffer overflow in a C program; technically, a carefully crafted file name could be used to manipulate programs and inject malware.
There is no real cure for this problem, that bug in GetDiskFreeSpaceEx() isn't going to be fixed any time soon. Delete that directory, it can cause lots more trouble, and write this off as a learning experience.
I am pretty sure you will have to retrieve the current drive and directory and pass that to the function. I remember attempting to use GetDiskFreeSpaceEx() with the directory name as ".", but that did not work.
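A sketch of that workaround (error handling kept minimal; note that it will still fail if the current directory itself is longer than MAX_PATH, as in the answer above):

#include <windows.h>

BOOL GetFreeSpaceForCurrentDir(ULARGE_INTEGER* freeToCaller,
                               ULARGE_INTEGER* total,
                               ULARGE_INTEGER* totalFree)
{
    TCHAR dir[MAX_PATH];

    // Ask for the current directory explicitly instead of passing NULL.
    if (GetCurrentDirectory(MAX_PATH, dir) == 0)
        return FALSE;

    // GetDiskFreeSpaceEx accepts any directory on the volume, not just the drive root.
    return GetDiskFreeSpaceEx(dir, freeToCaller, total, totalFree);
}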
So I work on a device that outputs large images (anywhere from 30MB to 2GB+). Before we begin creating one of these images we check to see if there is sufficient disk space via GetDiskFreeSpaceEx. Typically (and in this case) we are writing to a shared folder on the same network. There are no user quotas on disk space at play.
Last night, in preparation for a demo, we kicked off a test run. During the run we experienced a failure. We needed 327391776 bytes and were told that we only had 186580992 available. The numbers from GetDiskFreeSpaceEx were:
User free space available: 186580992
Total free space available: 186580992
Those correspond to the QuadPart members of the two (output) arguments lpFreeBytesAvailable and lpTotalNumberOfFreeBytes of GetDiskFreeSpaceEx.
This code has been in use for years now and I have never seen a false negative. Here is the complete function:
long IsDiskSpaceAvailable( const char* inDirectory,
                           const _int64& inRequestedSize,
                           _int64& outUserFree,
                           _int64& outTotalFree,
                           _int64& outCalcRequest )
{
    ULARGE_INTEGER fba;
    ULARGE_INTEGER tnb;
    ULARGE_INTEGER tnfba;
    ULARGE_INTEGER reqsize;
    string dir;
    size_t len;

    dir = inDirectory;
    len = strlen( inDirectory );

    outUserFree = 0;
    outTotalFree = 0;
    outCalcRequest = 0;

    if( inDirectory[len-1] != '\\' )
        dir += "\\";

    // this is the value of inRequestSize that was passed in
    // inRequestedSize = 3273917760;
    if( GetDiskFreeSpaceEx( dir.c_str(), &fba, &tnb, &tnfba ) )
    {
        outUserFree = fba.QuadPart;
        outTotalFree = tnfba.QuadPart;

        // this is computed dynamically given a specific compression
        // type, but for simplicity I had hard-coded the value that was used
        float compressionRatio = 10.0;
        reqsize.QuadPart = (ULONGLONG) (inRequestedSize / compressionRatio);
        outCalcRequest = reqsize.QuadPart;

        // this is what was triggered to cause the failure,
        // i.e., user free space was < the request size
        if( fba.QuadPart < reqsize.QuadPart )
            return( RetCode_OutOfSpace );
    }
    else
    {
        return( RetCode_Failure );
    }

    return( RetCode_OK );
}
So, a value of 3273917760 was passed to the function which is the total amount of disk space needed before compression. The function divides this by the compression ratio of 10 to get the actual size needed.
When I checked the disk that the share resides on it had ~177GB free, far more than what was reported. After starting the test again without changing anything it worked.
So my question here is; what could cause something like this? As far as I can tell it is not a programming error and, as I mentioned earlier, this code has been in use for a very long time now with no problems.
I checked the event log of the remote machine and found nothing of interest around the time of the failure. I'm hoping that someone out there has seen something similar before, thanks in advance.
Might not be of any use, but it's "strange" that:
177GB ~= 186580992 * 1000.
This could be explained by stack corruption (since you don't initialize your local variables) happening elsewhere in the code.
The code "inRequestedSize / compressionRatio" doesn't have to use float for the division, and since you've silenced the "conversion may lose precision" warning with the cast, you might actually hit an error too (but the number given in the example should work). You could simply do "inRequestedSize / 10".
Last but not least, you don't say where the code is running. On Windows Mobile, the documentation of GetDiskFreeSpaceEx states:
When Mobile Encryption is enabled, the reporting behavior of this function changes. Each encrypted file has at least one 4-KB page of overhead associated. This function takes this overhead into account when it reports the amount of space available. That is, if a 128-KB disk contains a single 60-KB file, this function reports that 64 KB is available, subtracting the space occupied by both the file and its associated overhead.
Although this function reports the total available space, keep the space requirement for encrypted files in mind when estimating whether multiple new files will fit into the remaining space. Include the amount of space required for overhead when Mobile Encryption is enabled. Each file requires at least an additional 4 KB. For example, a single 60-KB file requires 64 KB, but two 30-KB files actually require 68 KB.