I want to delete the first 10 files on a mounted drive. The drive is a Unix system drive. I have written code which works fine for a local drive but not for the mounted drive: it deletes files in a seemingly random order, not sequentially. The code is written in MFC C++. Please let me know if anyone knows the solution. The code is below.
char fileFound[256];
WIN32_FIND_DATA info;
HANDLE hp = INVALID_HANDLE_VALUE;
int count = 10;
// sprintf_s (narrow), not swprintf_s, must be used with a char buffer
sprintf_s(fileFound, 256, "%s\\*.*", "G:\\foldername");
hp = FindFirstFile(fileFound, &info);
if (hp == INVALID_HANDLE_VALUE)
    return;
do
{
    // skip the "." and ".." pseudo-entries
    if (strcmp(info.cFileName, ".") == 0 || strcmp(info.cFileName, "..") == 0)
        continue;
    // the backslash must be escaped: "G:\\foldername", not "G:\foldername"
    sprintf_s(fileFound, 256, "%s\\%s", "G:\\foldername", info.cFileName);
    DeleteFile(fileFound);
    count--;
} while (FindNextFile(hp, &info) && count);
FindClose(hp);
This behavior is documented:
[...] FindFirstFile does no sorting of the search results.
As well as here:
The order in which the search returns the files, such as alphabetical order, is not guaranteed, and is dependent on the file system. If the data must be sorted, the application must do the ordering after obtaining all the results.
If you need to delete the first n files from a set of files, you need to gather the entire set of files, sort the set based on an arbitrary predicate, and then perform an action on the first n items.
I need to make a program that outputs all the folders, and their sizes, that are in the same directory as the executable file. When you run it from the command prompt it should display all the folders in that directory; for example, if the finished executable were moved into the Documents folder, running it would list every folder inside Documents. What I do know is that I will probably need a vector or some other container to hold the folder names and sizes, and output them with a ranged for loop. There are also command line arguments that modify how the information is output, such as sorting the folders alphabetically; I should be able to do that part. What I do not know is how to pull folder information in C++. If I know how to get the folder names and sizes, I can do the rest. Any help would be appreciated.
Most of what you want to do can be done using std::filesystem in C++17.
You can get the current path using current_path(), loop over everything contained in a directory using a directory_iterator, and use file_size() to get the size of a file.
auto path = std::filesystem::current_path();
for (auto& obj : std::filesystem::directory_iterator(path)) {
// Get size of files
if (!std::filesystem::is_directory(obj)) {
auto size = std::filesystem::file_size(obj);
}
// Do other things
// ...
}
If you are able to use C++17 std::filesystem is definitely the way to go.
Purpose: I am monitoring file writes in a particular directory on iOS using BSD kernel queues, and poll for file sizes to determine write ends (when the size stops changing). The basic idea is to refresh a folder only after any number of file copies coming from iTunes sync. I have a completely working Objective-C implementation for this but I have my reasons for needing to implement the same thing in C++ only.
Problem: The one thing stopping me is that I can't find a C or C++ API that will get the correct file size during a write. Presumably, one must exist because Objective-C's [NSFileManager attributesOfItemAtPath:] seems to work and we all know it is just calling a C API underneath.
Failed Solutions:
I have tried using stat() and lstat() to get st_size (and even st_blocks for the allocated block count). They return correct sizes for most files in a directory, but when a write is in progress that file's size never changes between poll intervals, and every subsequent file iterated in that directory has a bad size.
I have tried fseek and ftell, but they result in a very similar issue.
I have also tried modified date instead of size using stat() and st_mtimespec, and the date doesn't appear to change during a write - not that I expected it to.
Going back to NSFileManager's ability to give me the right values, does anyone have an idea what C API call that [NSFileManager attributesOfItemAtPath:] is actually using underneath?
Thanks in advance.
Update:
It appears that this has less to do with in-progress write operations and more to do with specific files. After closer inspection, some files always return a size, while other files never return a size via the C API (but work fine with the Objective-C API). Even when I create a copy of a "good" file, the C API will not give a size for the copy, though it still works for the original. I have both failures and successes with text (xml) files and binary (zip) files. I am using iTunes to add these files to the iPad app's Documents directory. It is an iPad Mini Retina.
Update 2 - Answer:
Probably any of the above file size methods will work, provided your path isn't invisibly trashed like mine was. See the accepted answer for why the path was trashed.
Well, this weird behavior turned out to be a problem with the paths themselves: they produced strings that print normally but are trashed in memory badly enough that file calls sometimes rejected them (hence the failures only on certain file paths). I was using the dirent API to iterate over the files in a directory and was concatenating the directory path and file name erroneously.
Bad Path Concatenation: Obviously (or apparently not so obvious at runtime), str-copying into the same buffer three times is not going to end well.
char* fullPath = (char*)malloc(strlen(dir) + strlen(file) + 2);
strcpy(fullPath, dir);
strcpy(fullPath, "/");  // BUG: overwrites dir instead of appending
strcpy(fullPath, file); // BUG: overwrites again; fullPath is now just file
long sizeBytes = getSize(fullPath);
free(fullPath);
Correct Path Concatenation: Use proper str-concatenation.
char* fullPath = (char*)malloc(strlen(dir) + strlen(file) + 2);
strcpy(fullPath, dir);
strcat(fullPath, "/");
strcat(fullPath, file);
long sizeBytes = getSize(fullPath);
free(fullPath);
Long story short, it was sloppy work on my part, via two typos.
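For what it's worth, one way to avoid this whole class of typo (my suggestion, not from the original answer) is to build the path in a single formatted write, so there is no copy/concatenate sequence to get wrong:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Build "dir/file" in one snprintf call; the length argument also
// guarantees the write cannot overflow the allocated buffer.
char* join_path(const char* dir, const char* file)
{
    size_t len = strlen(dir) + strlen(file) + 2;  // separator + NUL
    char* fullPath = (char*)malloc(len);
    if (fullPath)
        snprintf(fullPath, len, "%s/%s", dir, file);
    return fullPath;
}
```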
I am trying to write a program that allows users to traverse the contents of an SD card with buttons on a touch screen (assume there is only one level of files, i.e. no folders). However, I am finding it impossible to get a "list" of all the files on the SD card:
- I can't just create an array of strings or char* because I don't know the number of files on the card. Besides, I want the container to be dynamic if possible.
- I can't create a vector because Arduino doesn't recognize std::vector or vector, even with C++ for Arduino; searching Google produces "new does not exist in Arduino's C++".
- I could use malloc (or new), but that would involve writing my own container class. As interesting as that might be, the goal here is not to implement a dynamic container class.
Have I missed something major in my search for such a function?
I recommend you look at my example MP3 File Player and the Web Player.
There are TWO issues:
1) You need to approach this from the point of view that the Arduino does not have enough resources (SRAM) to hold a list of an entire SdFat directory. Hence my approach was to use the user's console to retain the list: the sketch dumps the directory's contents to the console along with a corresponding number, from which the user can select the item they wish to enter. The Web Player does the same thing, except that when generating the HTML it creates a link pointing to each listed item. Either way, the list is stored on the console, be it the browser or the Serial Monitor.
2) The stock SD library is not sufficient to do what you want. Arduino recently incorporated Bill Greiman's SdFatLib as the under-the-hood class, but limited it. Using Bill's native SdFat library gives you additional methods for accessing individual objects, such as getFilename(), which are not available in SD. This is necessary when walking the directory: sd.ls(LS_DATE | LS_SIZE) will only dump directly to Serial, whereas you need to access the individual files themselves, as shown below or in the actual code.
SdFile file;
char filename[13];
sd.chdir("/", true);
uint16_t count = 1;
while (file.openNext(sd.vwd(), O_READ))
{
  file.getFilename(filename);
  Serial.print(count);
  Serial.print(F(": "));
  Serial.println(filename);
  file.close();  // close each file before opening the next
  count++;
}
Additionally, there are buried public methods accessible by reference, as shown in the Web Player's ListFiles() function, for more discrete handling of the files.
My question is: how is it possible to get a file's disk offset when the file (very important) is small (less than one cluster, only a few bytes)?
Currently I use this Windows API function:
DeviceIOControl(FileHandle, FSCTL_GET_RETRIEVAL_POINTERS, @InBuffer, SizeOf(InBuffer), @OutBuffer, SizeOf(OutBuffer), Num, Nil);
FirstExtent.Start := OutBuffer.Pair[0].LogicalCluster;
It works perfectly with files bigger than a cluster but it just fails with smaller files, as it always returns a null offset.
What is the procedure to follow with small files? Where are they located on an NTFS volume? Is there an alternative way to get a file's offset? This subtlety doesn't seem to be documented anywhere.
Note: the question is tagged as Delphi but C++ samples or examples would be appreciated as well.
The file is probably resident, meaning that its data is small enough to fit in its MFT entry. See here for a slightly longer description:
http://www.disk-space-guide.com/ntfs-disk-space.aspx
So you'd basically need to find the location of the MFT entry in order to know where the data is on disk. Do you control this file? If so, the easiest thing to do is make sure it's always larger than the size of an MFT entry (not a documented value, but you could always just use 4K or so).
Please see edit with advice taken so far...
I am attempting to list all the directories(folders) in a given directory using WinAPI & C++.
Right now my algorithm is slow & inefficient:
- Use FindFirstFileEx() to open the folder I am searching.
- Look at every file in the directory (using FindNextFile()); if it's a directory I store its absolute path in a vector, if it's just a file I do nothing.
This seems extremely inefficient because I am looking at every file in the directory.
Is there a WinAPI function that I can use that will tell me all the sub-directories in a given directory?
Do you know of an algorithm I could use to efficiently locate & identify folders in a directory(folder)?
EDIT:
So after taking the advice I have searched using FindExSearchLimitToDirectories, but it still prints out all the files (.txt, etc.) and not just folders. Am I doing something wrong?
WIN32_FIND_DATA dirData;
HANDLE dir = FindFirstFileEx( "c:\\users\\soribo\\desktop\\*", FindExInfoStandard, &dirData,
                              FindExSearchLimitToDirectories, NULL, 0 );
if ( dir != INVALID_HANDLE_VALUE )
{
    do  // a do-while, so the entry returned by FindFirstFileEx itself isn't skipped
    {
        printf( "FileName: %s\n", dirData.cFileName );
    } while ( FindNextFile( dir, &dirData ) != 0 );
    FindClose( dir );
}
In order to see a performance boost there must be support at the file system level. If this does not exist then the system must enumerate every single object in the directory.
In principle, you can use FindFirstFileEx specifying the FindExSearchLimitToDirectories flag. However, the documentation states (emphasis mine):
This is an advisory flag. If the file system supports directory filtering, the function searches for a file that matches the specified name and is also a directory. If the file system does not support directory filtering, this flag is silently ignored.
If directory filtering is desired, this flag can be used on all file systems, but because it is an advisory flag and only affects file systems that support it, the application must examine the file attribute data stored in the lpFindFileData parameter of the FindFirstFileEx function to determine whether the function has returned a handle to a directory.
However, from what I can tell, and information is sparse, FindExSearchLimitToDirectories flag is not widely supported on desktop file systems.
Your best bet is to use FindFirstFileEx with FindExSearchLimitToDirectories. You must still perform your own filtering in case you meet a file system that doesn't support directory filtering at file system level. If you get lucky and hit upon a file system that does support it then you will get the performance benefit.
If you're using FindFirstFileEx, you should be able to specify FindExSearchLimitToDirectories (from the _FINDEX_SEARCH_OPS enumeration, passed as the fSearchOp parameter) to limit the initial search, and any subsequent FindNextFile() calls, to directories.