Using the link below I am able to get the hard disk space.
Get Hard Disk Space
But if I connect a secondary hard disk, it does not show that disk's details.
How can I loop over the number of hard disks and retrieve the free space for each one?
I would like to do this in a loop: get the hard disk count, then loop over the drives on harddisk1, then the drives on harddisk2, and so on.
Use Windows API's GetLogicalDriveStrings function.
std::vector< std::basic_string<TCHAR> > drives;

TCHAR szBuffer[1024];
// The buffer receives a double-null-terminated list of root paths ("C:\", "D:\", ...).
// Pass the buffer size minus one TCHAR, since the count excludes the final terminating null.
::GetLogicalDriveStrings(sizeof(szBuffer) / sizeof(szBuffer[0]) - 1, szBuffer);

TCHAR *pCurrentDrive = szBuffer;
while (*pCurrentDrive)
{
    drives.push_back( pCurrentDrive );
    // Skip past this string and its terminating null to the next entry.
    pCurrentDrive = &pCurrentDrive[_tcslen(pCurrentDrive) + 1];
}
Then call GetDiskFreeSpaceEx for every element in the drives vector.
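For example, a minimal sketch of that second step, assuming the drives vector from the snippet above (it just prints the values and omits error handling), might look like this:
// Sketch: query free space for each root path collected above.
for (const auto& drive : drives)
{
    ULARGE_INTEGER freeToCaller, total, totalFree;
    if (::GetDiskFreeSpaceEx(drive.c_str(), &freeToCaller, &total, &totalFree))
    {
        _tprintf(_T("%s free: %llu bytes\n"), drive.c_str(),
                 (unsigned long long)freeToCaller.QuadPart);
    }
}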
You could also use the GetLogicalDrives function instead, which returns the drives as a bit mask. However, I think GetLogicalDriveStrings is simpler in this case, because it returns the drives as strings which you can pass to GetDiskFreeSpaceEx directly.
How about:
for (char drive = 'a'; drive <= 'z'; drive++)
{
    // Query the free space for `drive` here (build "<drive>:\" and ask for its free space)
}
And for those who are wondering... Yes this is very naive and probably time-consuming.
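For what it's worth, one way to flesh that loop out (only a sketch, skipping letters that aren't mapped to anything and with minimal error handling) could be:
for (char drive = 'a'; drive <= 'z'; drive++)
{
    char root[] = { drive, ':', '\\', '\0' };
    if (GetDriveTypeA(root) == DRIVE_NO_ROOT_DIR)
        continue;   // no volume mounted under this letter

    ULARGE_INTEGER freeToCaller, total, totalFree;
    if (GetDiskFreeSpaceExA(root, &freeToCaller, &total, &totalFree))
        printf("%s free: %llu bytes\n", root, (unsigned long long)freeToCaller.QuadPart);
}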
I am writing bit of code in C++ where I want to play a .wav file and perform an FFT (with fftw) on it as it comes (and eventually display that FFT on screen with ncurses). This is mainly just as a "for giggles/to see if I can" project, so I have no restrictions on what I can or can't use aside from wanting to try to keep the result fairly lightweight and cross-platform (I'm doing this on Linux for the moment). I'm also trying to do this "right" and not just hack it together.
I'm using SDL2_audio to achieve the playback, which is working fine. The callback is called at some interval requesting N bytes (seems to be desiredSamples*nChannels). My idea is that at the same time I'm copying the memory from my input buffer to SDL I might as well also copy it in to fftw3's input array to run an FFT on it. Then I can just set ncurses to refresh at whatever rate I'd like separate from the audio callback frequency and it'll just pull the most recent data from the output array.
The catch is that the input file is formatted with the channels interleaved, i.e. "(LR) (LR) (LR) ...". So while SDL expects this, I need a way to extract just one channel to send to FFTW.
The audio callback format from SDL looks like so:
void myAudioCallback(void* userdata, Uint8* stream, int len) {
    SDL_memset(stream, 0, len);           // note: sizeof(stream) is just the pointer size, so clear `len` bytes
    SDL_memcpy(stream, audio_pos, len);   // copy the next `len` bytes of audio into SDL's buffer
    audio_pos += len;
}
where userdata is (currently) unused, stream is the array that SDL wants filled, and len is the length of stream (i.e. the number of bytes SDL is looking for).
As far as I know there's no way to get memcpy to just copy every other sample (read: Copy N bytes, skip M, copy N, etc). My current best idea is a brute-force for loop a la...
// pseudocode
for (int i = 0; i < len/2; i++) {
    fftw_in[i] = *(sample*)(audio_pos + 2*i*sizeof(sample));
}
or even more brute force by just reading the file a second time and only taking every other byte or something.
Is there another way to go about accomplishing this, or is one of these my best option? It feels kind of kludgey to go from a nice one-line memcpy to send the data to SDL to some sort of weird loop to send it to fftw.
The OP's solution can be simplified (for copying bytes):
// pseudocode
const char* s = audio_pos;
for (int d = 0; s < audio_pos + len; d++, s += 2*sizeof(sample)) {
    fftw_in[d] = *s;
}
If I knew what fftw_in is, I would memcpy blocks of sizeof(*fftw_in).
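For instance, if the stream held interleaved 16-bit signed samples (AUDIO_S16 stereo) and fftw_in were an array of double (both assumptions here, not something stated above), a typed loop avoids the byte arithmetic entirely:
// Sketch under assumed formats: interleaved AUDIO_S16 stereo input, fftw_in is double[].
// Takes the left sample of each (L, R) frame.
const Sint16* samples = reinterpret_cast<const Sint16*>(audio_pos);
const int frames = len / (2 * (int)sizeof(Sint16));   // bytes -> stereo frames
for (int i = 0; i < frames; ++i) {
    fftw_in[i] = (double)samples[2 * i];   // samples[2*i + 1] would be the right channel
}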
Please check the assembly generated by @S.M.'s solution.
If the code is not vectorized, I would use intrinsics (depending on your hardware support) like _mm_mask_blend_epi8.
I am trying to use Win API ReadConsole(...) and I want to pass in a delimiter char to halt the input from the console.
The below code works but it only stops reading the input on \r\n.
I would like it to stop reading the console input on '.' for instance.
void read(char *cIn, char delim)
{
    HANDLE hFile;
    DWORD charsRead;
    DWORD charsToRead = MAX_PATH;
    CONSOLE_READCONSOLE_CONTROL cReadControl;

    cReadControl.nLength = sizeof(CONSOLE_READCONSOLE_CONTROL);
    cReadControl.nInitialChars = 0;
    cReadControl.dwCtrlWakeupMask = delim;
    cReadControl.dwControlKeyState = NULL;

    DWORD lpMode;
    // char cIn[MAX_PATH]; //-- buffer to hold data from the console

    hFile = CreateFile("CONIN$", GENERIC_READ | GENERIC_WRITE,
                       FILE_SHARE_WRITE | FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, 0, NULL);

    GetConsoleMode(hFile, &lpMode);
    // lpMode &= ~ENABLE_LINE_INPUT;    //-- turns off this flag
    // SetConsoleMode(hFile, lpMode);   //-- set the mode with the new flag off

    bool read = ReadConsole(hFile, cIn, charsToRead * sizeof(TCHAR), &charsRead, &cReadControl);

    cIn[charsRead - 2] = '\0';
}
I know there are other easy ways to do this but I am just trying to understand some of the win api functions and how to use them.
Thank you.
I saw this question and assumed it would be trivial, but spent the last 30 minutes trying to figure it out and finally have something.
That dwCtrlWakeupMask is pretty poorly documented in CONSOLE_READCONSOLE_CONTROL. MSDN says "A user-defined control character used to signal that the read is complete.", but why is it called mask? Why is it a ULONG instead of a TCHAR or something like that? I tried feeding it chars and wchars and nothing happened, so there must be more to the story.
I took to the web searching for that particular member and found this link:
https://groups.google.com/forum/#!topic/golang-codereviews/KSp37ITmcUg
It is a random Go library coder asking for help, and the answer is that tab is 1 << '\t'. I tried it, and it works!
So, for future web searchers: dwCtrlWakeupMask is a bitmask of ASCII control characters that will cause ReadConsole to return. You can | together as many 1 << ctrl_char values as you like, but they cannot be arbitrary characters: since this is a bitmask in a 32-bit value, only the characters 1-31 (inclusive) are possible. (This group, by the way, is called the control characters; it includes things like tab, backspace, and bell, which do not represent printable characters per se.)
Thus, this mask:
cReadControl.dwCtrlWakeupMask = (1 << '\t') | (1 << 0x08);
Will cause ReadConsole to return when tab (\t) OR when backspace (0x08) is pressed.
The characters represented by ctrl plus a letter are numbered by that letter's position in the English alphabet, starting at a == 1. So ctrl+d is 4, and ctrl+z is 26.
Therefore, this will return when the user hits ctrl+d or ctrl+z:
cReadControl.dwCtrlWakeupMask = (1 << 4) | (1 << 26);
Note that the Linux terminal driver also returns on read when the user hits ctrl+d so this might be a nice compatibility thing.
I believe the point of this argument is to allow easier tab-completion in processed input mode; otherwise, you'd have to turn processed input off and process keys one by one to do that. Now you don't have to.... though tbh, I still prefer taking my input with ReadConsoleInput for interactive programs since you get much better control over it all.
There are plenty of other ways to do what you want, and using . as a delimiter is impossible here since its value is >= 32, so you will need to handle that yourself. Still, understanding what this parameter does is interesting to me anyway, and resources on the web are scarce, so I'm writing this up for future reference.
Note that this does not appear to work in wineconsole so be sure you are on a real Windows box to test it out.
Now, dwControlKeyState is actually set BY the function. The value you pass in is ignored (at least as far as I can tell), but you can inspect it for the given flags when the function returns. So, for example, after calling ReadConsole and hitting the wakeup key, it will be 32 if your num lock was on, and 48 if num lock was on and you pressed shift+tab. So you test it after the function returns.
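In code, that check might look something like this (NUMLOCK_ON and SHIFT_PRESSED are the standard wincon.h flag constants; the other variables are the ones from the question):
// Sketch: inspect dwControlKeyState after ReadConsole returns.
if (ReadConsole(hFile, cIn, charsToRead, &charsRead, &cReadControl))
{
    if (cReadControl.dwControlKeyState & NUMLOCK_ON)     // 0x0020 == 32
        puts("num lock was on");
    if (cReadControl.dwControlKeyState & SHIFT_PRESSED)  // 0x0010 == 16
        puts("shift was held when the wakeup character was typed");
}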
I typically like MSDN docs but IMO they completely dropped the ball on explaining this parameter!
You will find this code ridiculous, but it is most likely the only way to do this. If you have to adapt it to use ReadFile later, it is the only way that doesn't consume extra input.
Most of the time you don't really want ReadConsole at all you want ReadFile on the standard input handle, but I digress.
char *cInptr = cIn;
BOOL read = FALSE;   // declared outside the loop so the while condition can see it
do {
    read = ReadConsole(hFile, cInptr, sizeof(TCHAR), &charsRead, &cReadControl);   // one character at a time
    if (read) cInptr += charsRead;
} while (read && charsRead > 0 && cInptr[-1] && cInptr[-1] != '.');
I might have too many tests in the loop due to being paranoid. I'm not inclined to look up all predicates to determine which are implied by the contract of ReadConsole.
I have a series of large text files (10s - 100s of thousands of lines) that I want to parse line-by-line. The idea is to check if the line has a specific word/character/phrase and to, for now, record to a secondary file if it does.
The code I've used so far is:
string line;
ifstream infile1("c:/test/test.txt");
ofstream outfile1("c:/test/out.txt"); // the output path here is just illustrative

while (getline(infile1, line)) {
    if (line.empty()) continue;
    if (line.find("mystring") != std::string::npos) {
        outfile1 << line << '\n';
    }
}
The end goal is to be writing those lines to a database. My thinking was to write them to the file first and then to import the file.
The problem I'm facing is the time taken to complete the task. I'm looking to minimize the time as far as possible, so any suggestions as to time savings on the read/write scenario above would be most welcome. Apologies if anything is obvious, I've only just started moving into C++.
Thanks
EDIT
I should say that I'm using VS2015
EDIT 2
So this was my own dumb fault: when I switched to Release and changed the architecture type, I saw noticeable speed increases. Thanks to everyone for pointing me in that direction. I'm also looking at the mmap stuff and that's proving useful too. Thanks guys!
When you use ifstream to read and process really big files, you have to increase the default buffer size that it uses (normally 512 bytes).
The best buffer size depends on your needs, but as a hint you can use the partition block size of the file(s) you're reading/writing. To find that information you can use a number of tools, or even code.
Example in Windows:
fsutil fsinfo ntfsinfo c:
Now, you have to create a new buffer to ifstream like this:
size_t newBufferSize = 4 * 1024; // 4K
char * newBuffer = new char[newBufferSize];
ifstream infile1;
infile1.rdbuf()->pubsetbuf(newBuffer, newBufferSize);
infile1.open("c:/test/test.txt");

while (getline(infile1, line)) {
    /* ... */
}

delete[] newBuffer; // array delete, and only after the stream is done with the buffer
Do the same with the output stream, and don't forget to set the new buffer before opening the file or it may not work.
You can play with values to find the very best size for you.
You'll note the difference.
C-style I/O functions are much faster than fstream.
You may use fgets/fputs to read/write each text line.
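For example, a minimal C stdio version of the same filter loop might look like this (the output path is just illustrative):
#include <cstdio>
#include <cstring>

int main()
{
    FILE* in  = std::fopen("c:/test/test.txt", "r");
    FILE* out = std::fopen("c:/test/out.txt", "w");   // illustrative output path
    if (!in || !out) return 1;

    char buf[4096];
    while (std::fgets(buf, sizeof(buf), in))
    {
        if (std::strstr(buf, "mystring"))
            std::fputs(buf, out);   // fgets keeps the '\n', so no extra newline is needed
    }

    std::fclose(in);
    std::fclose(out);
    return 0;
}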
I am trying to initialize and partition an attached virtual hard disk through the Windows API. I have been successful using DeviceIoControl() to do so, however whenever I apply the desired drive layout Windows is automatically assigning a drive letter to the partition and popping up an annoying "Would you like to format?" dialog.
My intent is to handle the formatting and mounting of this partition later in the program, but I'm not sure how to stop this behavior. I have tried setting RecognizedPartition to FALSE, but this seems to have no effect.
Relevant code:
Layout.PartitionStyle = PARTITION_STYLE_MBR;
Layout.PartitionCount = 4;
Layout.Mbr.Signature = MY_DISK_MBR_SIGNATURE;

Layout.PartitionEntry[0].PartitionStyle = PARTITION_STYLE_MBR;
Layout.PartitionEntry[0].PartitionNumber = 1;
Layout.PartitionEntry[0].StartingOffset.QuadPart = MY_DISK_OFFSET;
Layout.PartitionEntry[0].PartitionLength.QuadPart =
    (Geom.DiskSize.QuadPart - MY_DISK_OFFSET);
Layout.PartitionEntry[0].Mbr.PartitionType = PARTITION_IFS;
Layout.PartitionEntry[0].Mbr.BootIndicator = FALSE;
Layout.PartitionEntry[0].Mbr.RecognizedPartition = FALSE;
Layout.PartitionEntry[0].Mbr.HiddenSectors =
    (MY_DISK_OFFSET / Geom.Geometry.BytesPerSector);

for (int i = 0; i < 4; i++)
{
    Layout.PartitionEntry[i].RewritePartition = TRUE;
}

if (!DeviceIoControl(hDisk, IOCTL_DISK_SET_DRIVE_LAYOUT_EX,
                     &Layout, dwLayoutSz, NULL, 0, &dwReturn, NULL))
{
    // Handle error
}

DeviceIoControl(hDisk, IOCTL_DISK_UPDATE_PROPERTIES,
                NULL, 0, NULL, 0, &dwReturn, NULL);
What can I do to prevent automatic drive letter assignment?
The only reliable way I could find to work around this issue was to stop the "Shell Hardware Detection" service while the volume was created and formatted. However, this approach is so unapologetically silly that I refused to put it into code.
Another "hackish" option is to have the service start up and then immediately spawn itself (or a "worker" executable) in a hidden window via CreateProcess() with the CREATE_NO_WINDOW flag.
Since this software runs as a system service and I'd rather not complicate the code for something that only happens once or twice over the lifetime of the system, I've just had to accept that sometimes there will occasionally be an Interactive Services Detection window pop up for a few moments while creating the partitions.
If anyone discovers a good method for preventing the format prompt while programmatically creating and formatting a drive, I'll happily change the accepted answer (and owe you a beer).
It's been a while since I've used this API, but from memory you can't. That doesn't stop you from removing the drive letter assignment after the fact, though.
I'm not sure if it will stop the format prompt, though; all the times I have done this, the partition had already been formatted correctly before I did the disk layout update.
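For the drive letter part, a minimal sketch of removing an auto-assigned letter after the fact might look like this (the E: letter below is only an example; you would discover the actual letter first):
#include <windows.h>

// Removes the mount point (drive letter) but leaves the volume and its data intact.
bool RemoveDriveLetter(wchar_t letter)
{
    wchar_t mountPoint[] = L"?:\\";
    mountPoint[0] = letter;
    return DeleteVolumeMountPointW(mountPoint) != FALSE;
}

// e.g. RemoveDriveLetter(L'E');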
I just solved this problem by waiting several seconds for the drive to become available and then directly issuing a format action. See my answer here.
Rufus has an interesting workaround: it installs a window event hook that detects the "do you want to format this drive?" prompts and immediately closes them. See source code here.
It then goes on to arrange to mount only the partitions it cares about, but that's orthogonal.
So I work on a device that outputs large images (anywhere from 30MB to 2GB+). Before we begin creating one of these images we check to see if there is sufficient disk space via GetDiskFreeSpaceEx. Typically (and in this case) we are writing to a shared folder on the same network. There are no user quotas on disk space at play.
Last night, in preparation for a demo, we kicked off a test run. During the run we experienced a failure. We needed 327391776 bytes and were told that we only had 186580992 available. The numbers from GetDiskFreeSpaceEx were:
User free space available: 186580992
Total free space available: 186580992
Those correspond to the QuadPart members of the two (output) arguments lpFreeBytesAvailable and lpTotalNumberOfFreeBytes to GetDiskFreeSpaceEx.
This code has been in use for years now and I have never seen a false negative. Here is the complete function:
long IsDiskSpaceAvailable( const char* inDirectory,
                           const _int64& inRequestedSize,
                           _int64& outUserFree,
                           _int64& outTotalFree,
                           _int64& outCalcRequest )
{
    ULARGE_INTEGER fba;
    ULARGE_INTEGER tnb;
    ULARGE_INTEGER tnfba;
    ULARGE_INTEGER reqsize;
    string dir;
    size_t len;

    dir = inDirectory;
    len = strlen( inDirectory );

    outUserFree = 0;
    outTotalFree = 0;
    outCalcRequest = 0;

    if( inDirectory[len-1] != '\\' )
        dir += "\\";

    // this is the value of inRequestedSize that was passed in
    // inRequestedSize = 3273917760;
    if( GetDiskFreeSpaceEx( dir.c_str(), &fba, &tnb, &tnfba ) )
    {
        outUserFree = fba.QuadPart;
        outTotalFree = tnfba.QuadPart;

        // this is computed dynamically given a specific compression
        // type, but for simplicity I had hard-coded the value that was used
        float compressionRatio = 10.0;

        reqsize.QuadPart = (ULONGLONG) (inRequestedSize / compressionRatio);
        outCalcRequest = reqsize.QuadPart;

        // this is what was triggered to cause the failure,
        // i.e., user free space was < the request size
        if( fba.QuadPart < reqsize.QuadPart )
            return( RetCode_OutOfSpace );
    }
    else
    {
        return( RetCode_Failure );
    }

    return( RetCode_OK );
}
So, a value of 3273917760 was passed to the function which is the total amount of disk space needed before compression. The function divides this by the compression ratio of 10 to get the actual size needed.
When I checked the disk that the share resides on, it had ~177GB free, far more than what was reported. After starting the test again without changing anything, it worked.
So my question here is; what could cause something like this? As far as I can tell it is not a programming error and, as I mentioned earlier, this code has been in use for a very long time now with no problems.
I checked the event log of the remote machine and found nothing of interest around the time of the failure. I'm hoping that someone out there has seen something similar before, thanks in advance.
Might not be of any use, but it's "strange" that:
177GB ~= 186580992 * 1000.
This could be explained by stack corruption (since you don't initialize your local variables) happening elsewhere in the code.
The code "inRequestedSize / compressionRatio" doesn't have to be using float for the division, and since you've silented the "conversion loose precision" warning with the cast, you might actually hit an error too (but the number given in the example should work). You could simply do "inRequestedSize / 10".
Last but not least, you don't say where the code is running. On Mobile, the documentation of GetDiskFreeSpaceEx states:
When Mobile Encryption is enabled, the reporting behavior of this function changes. Each encrypted file has at least one 4-KB page of overhead associated. This function takes this overhead into account when it reports the amount of space available. That is, if a 128-KB disk contains a single 60-KB file, this function reports that 64 KB is available, subtracting the space occupied by both the file and its associated overhead.
Although this function reports the total available space, keep the space requirement for encrypted files in mind when estimating whether multiple new files will fit into the remaining space. Include the amount of space required for overhead when Mobile Encryption is enabled. Each file requires at least an additional 4 KB. For example, a single 60-KB file requires 64 KB, but two 30-KB files actually require 68 KB.