File search progress bar [duplicate] - c++

I'm using FindFirstFile and FindNextFile recursively to search for a file, going up to 20 directory levels deep.
How would I go about adding a progress bar to show the progress of the search?
I want something similar to the progress bar in explorer when you search for a file.
But how would I figure out how many files there are in total to search through, so that I can compute the percentage completed?

If the only thing you do is search for a file, then the only approach that comes to mind is to estimate the average number of files per directory. You presumably have many more files than directories, so as you progress through directories you divide 100% by a larger and larger number. Of course, you may see the progress stall or even go backwards.
If you do something with each file, I would suggest running a separate thread that traverses the file system while another thread does the work on each file found. Once the traversing thread has counted all the files, and maybe even their total sizes, your progress becomes most accurate (of course, you will still have some trouble on a live file system where files may be added or removed in the meantime).
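For the counting pass, a minimal sketch (untested) of the kind of recursive traversal that could run on the counting thread is below; the function name and the 20-level depth cap are just illustrative:

    #include <windows.h>
    #include <cwchar>
    #include <string>

    // Recursively count files under `dir`, up to `maxDepth` levels deep.
    // This could run on a worker thread and feed the denominator of a progress bar.
    static unsigned long long CountFiles(const std::wstring& dir, int maxDepth)
    {
        if (maxDepth < 0)
            return 0;

        unsigned long long count = 0;
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW((dir + L"\\*").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE)
            return 0;

        do {
            if (wcscmp(fd.cFileName, L".") == 0 || wcscmp(fd.cFileName, L"..") == 0)
                continue;

            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                count += CountFiles(dir + L"\\" + fd.cFileName, maxDepth - 1);
            else
                ++count;
        } while (FindNextFileW(h, &fd));

        FindClose(h);
        return count;
    }

The processing thread can then report its own running count against this total to drive the progress bar.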

Related

track progress of moving to recycle bin on windows [closed]

I was planning to use IFileOperation::DeleteItems(items) and IFileOperationProgressSink::UpdateProgress(workTotal, workSoFar) to track the progress of moving files and folders to the recycle bin. This works well when I call it on a list of files to be deleted/moved to the recycle bin: UpdateProgress() is then called correctly after each file, returning a gradually increasing number of deleted items. But when I try to delete one large folder containing multiple nested subfolders and thousands of files, UpdateProgress() keeps returning 0 (as the number of files done) and then suddenly returns, for example, 8000 (as if all 8000 files in the large folder were deleted at once). There is no gradual progress; it just jumps from 0% to 100%. Is this the normal behavior, or am I doing something wrong? I would like to show the code, but even the relevant snippet is terribly long.
I simply modified the sample from Windows-classic-samples and tried it with at least 8,000 files in the same subfolder. UpdateProgress works: the process first discovers all the items, then deletes them while reporting progress.
This is the sample I used.
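For reference, here is a compact sketch (untested, and not the poster's or the sample's actual code) of how a progress sink can be wired up before a recursive delete. The folder path is a placeholder and error checking is omitted for brevity:

    #include <windows.h>
    #include <shobjidl.h>
    #include <cstdio>

    // Minimal progress sink: every callback accepts and does nothing except
    // UpdateProgress, which just prints the numbers the shell reports.
    class ProgressSink : public IFileOperationProgressSink {
        LONG m_ref = 1;
    public:
        // IUnknown
        IFACEMETHODIMP QueryInterface(REFIID riid, void** ppv) {
            if (riid == __uuidof(IUnknown) || riid == __uuidof(IFileOperationProgressSink)) {
                *ppv = static_cast<IFileOperationProgressSink*>(this);
                AddRef();
                return S_OK;
            }
            *ppv = nullptr;
            return E_NOINTERFACE;
        }
        IFACEMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_ref); }
        IFACEMETHODIMP_(ULONG) Release() {
            ULONG n = InterlockedDecrement(&m_ref);
            if (n == 0) delete this;
            return n;
        }

        // Overall progress of the whole operation (what the question is about).
        IFACEMETHODIMP UpdateProgress(UINT total, UINT soFar) {
            wprintf(L"progress: %u / %u\n", soFar, total);
            return S_OK;
        }

        // All other callbacks: no-ops.
        IFACEMETHODIMP StartOperations() { return S_OK; }
        IFACEMETHODIMP FinishOperations(HRESULT) { return S_OK; }
        IFACEMETHODIMP PreRenameItem(DWORD, IShellItem*, LPCWSTR) { return S_OK; }
        IFACEMETHODIMP PostRenameItem(DWORD, IShellItem*, LPCWSTR, HRESULT, IShellItem*) { return S_OK; }
        IFACEMETHODIMP PreMoveItem(DWORD, IShellItem*, IShellItem*, LPCWSTR) { return S_OK; }
        IFACEMETHODIMP PostMoveItem(DWORD, IShellItem*, IShellItem*, LPCWSTR, HRESULT, IShellItem*) { return S_OK; }
        IFACEMETHODIMP PreCopyItem(DWORD, IShellItem*, IShellItem*, LPCWSTR) { return S_OK; }
        IFACEMETHODIMP PostCopyItem(DWORD, IShellItem*, IShellItem*, LPCWSTR, HRESULT, IShellItem*) { return S_OK; }
        IFACEMETHODIMP PreDeleteItem(DWORD, IShellItem*) { return S_OK; }
        IFACEMETHODIMP PostDeleteItem(DWORD, IShellItem*, HRESULT, IShellItem*) { return S_OK; }
        IFACEMETHODIMP PreNewItem(DWORD, IShellItem*, LPCWSTR) { return S_OK; }
        IFACEMETHODIMP PostNewItem(DWORD, IShellItem*, LPCWSTR, LPCWSTR, DWORD, HRESULT, IShellItem*) { return S_OK; }
        IFACEMETHODIMP ResetTimer() { return S_OK; }
        IFACEMETHODIMP PauseTimer() { return S_OK; }
        IFACEMETHODIMP ResumeTimer() { return S_OK; }
    };

    int wmain() {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        {
            IFileOperation* op = nullptr;
            CoCreateInstance(CLSID_FileOperation, nullptr, CLSCTX_ALL, IID_PPV_ARGS(&op));
            op->SetOperationFlags(FOF_ALLOWUNDO | FOF_NOCONFIRMATION);  // recycle bin, no prompts

            ProgressSink* sink = new ProgressSink();
            DWORD cookie = 0;
            op->Advise(sink, &cookie);

            IShellItem* folder = nullptr;
            // Placeholder path: point this at the folder you want to recycle.
            SHCreateItemFromParsingName(L"C:\\temp\\big-folder", nullptr, IID_PPV_ARGS(&folder));
            op->DeleteItem(folder, nullptr);
            op->PerformOperations();

            folder->Release();
            op->Unadvise(cookie);
            op->Release();
            sink->Release();
        }
        CoUninitialize();
        return 0;
    }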

Sublime Find and Replace without having to click save

When I do a global search/replace in a project, Sublime will automatically open all the files involved, and not save them. I then have to manually save every single file.
Is there a way to have Sublime automatically save all the changes that have been made, and not open the files that were not previously open?
Thanks in advance to anyone who can help me with this.
http://www.sublimetext.com/forum/viewtopic.php?f=3&p=40348
I think it's Option + Command + S (Mac) or Command + Alt + S (Windows).
I just performed a similar task with the free Visual Studio Code: it does Replace in Files without opening them, and it's fast. I made 60k changes in just a few minutes.
Robust if awkward.
While it doesn't change the fact that Sublime Text doesn't seem to be the best tool for this job, here's a practical workaround procedure I've come to rely on. It's reasonably slow but painless if done correctly. Memory consumption impact seems negligible, if you're wondering. The ballpark of my use cases is up to about 10k files in up to a handful of minutes on a mediocre memory-cramped machine using Sublime Text 3.2.2 (3211) on Windows 10 Pro x64.
Requirements
You'll need the package SideBarEnhancements to be installed.
…which in turn relies on Sublime Text 3 or newer as of writing this.
Procedure
1. Add the directory A that you're going to operate on to the project sidebar. (This could probably be deferred until step 3.)
2. Initiate your batch replacement operation over the numerous files inside directory A. Sublime Text takes its time to open numerous tabs belonging to directory A recursively and to perform the replacement in each.
3. Right-click on directory A in the project sidebar to bring up its context menu and choose "Save Views". Sublime Text takes its time to save each tab belonging to directory A recursively.
4. Right-click on directory A in the project sidebar to bring up its context menu and choose "Close Views". Sublime Text takes its time to close all tabs belonging to directory A recursively.
Disclaimers
Warning
Do not make the mistake of skipping step 3 ("Save Views"), or you'll effectively become stuck in a GUI loop of save-confirmation dialogs defaulting to "Yes" for however many thousands of files you're operating on.
If you decide to abort the operation between steps 2 and 3, your best course of action is to back the files up on disk, proceed with the outlined procedure anyway, and then restore the backup.
Caution
All of the tabs belonging to the directory you operate on will be closed by the end of this procedure. If you need a substantial portion of them to remain open throughout the replacement, consider organizing the files into a sub-directory structure conducive to cherry-picking.
General advice
As a rule of thumb, before proceeding with this procedure, it would be wise to check if the required context menu entries are in fact present (greyed out or not) in your combination of editor+package versions. And to be on the safe side, you might want to back up your data and Sublime Text session before massive operations.

Read in a directory from a given file point in C++

I have two programs that will be reading / writing files to the same directory at the same time (but not to the same exact files at the same time). I have the writing portion done, but I am struggling to get a half way decent and working implementation of the reading directory portion.
The files within the directory follow the following naming scheme:
Image-[INDEX]-[KEY/DEL]--[TIMESTAMP]
[INDEX] increments up from 000000, [KEY/DEL] alternates based on whether the image is a key or a delta frame and [TIMESTAMP] is the Unix / Linux epoch time at file creation.
Right now, the reading program reads in the directory (using the dirent.h library) one file at a time every time it needs to find an image within the directory. When the directory gets extremely large, I would imagine that this operation / method will quickly become extremely resource intensive, and eventually fail. So, I am trying to find an alternative method. I was thinking of reading in the entire directory at initialization, and saving the file information in an array to access / use later in the program. Then, when a file is requested that is not in the array, the program would go and update the array of files by reading in the directory, but this time starting from the point it left off at the end of the initialization.
Is it possible to start reading the file names within a directory from a known point (the last file "read in"), or do I have to start all the way from the beginning each time?
Or is there a better way of doing this?
Thanks.
As Andrew said, I would confirm that this is actually a problem before trying to solve it.
If you can discount the possibility of files being created out of sequence, that is, no file you wish to process before another file will ever be created after that file, then you can use this method.
First, read the entire directory listing into an array or vector. Then, when iterating files, just iterate the vector. Finally, if you get a file not found or reach the end of the vector, refresh it just in case more have been created.
You will no doubt want to encapsulate this logic into some sort of context object, which remembers the last file read. You could also optimise by sorting the vector.
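As an illustration only (not the asker's code), a minimal sketch of such a cached, sorted listing using dirent.h could look like this; the class and method names are made up for the example:

    #include <dirent.h>
    #include <algorithm>
    #include <string>
    #include <vector>

    // Caches the directory listing and refreshes it only when a requested
    // file is not found in the cache.
    class DirectoryCache {
    public:
        explicit DirectoryCache(std::string path) : path_(std::move(path)) { Refresh(); }

        // Returns true if `name` exists in the directory, refreshing the
        // cached listing once if it is not found on the first pass.
        bool Contains(const std::string& name) {
            if (Lookup(name)) return true;
            Refresh();               // the writer may have created it since the last scan
            return Lookup(name);
        }

        const std::vector<std::string>& Entries() const { return entries_; }

    private:
        bool Lookup(const std::string& name) const {
            return std::binary_search(entries_.begin(), entries_.end(), name);
        }

        void Refresh() {
            entries_.clear();
            if (DIR* dir = opendir(path_.c_str())) {
                while (dirent* entry = readdir(dir)) {
                    std::string name = entry->d_name;
                    if (name != "." && name != "..")
                        entries_.push_back(std::move(name));
                }
                closedir(dir);
            }
            std::sort(entries_.begin(), entries_.end());   // enables binary_search
        }

        std::string path_;
        std::vector<std::string> entries_;
    };

Keeping the "refresh on miss" policy inside one method is the context-object idea from the answer: callers never care whether the hit came from the cache or from a fresh scan.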

Is there a way to limit the number of output files of a process?

An application of our company uses pdfimages (from xpdf) to check whether certain pages in a PDF file, on which we know there is no text, consist of a single image.
For this we run pdfimages on that page and count how many output files are created: zero, one, or two or more (the files could be JPG, PPM, PGM or PBM).
The problem is that for some PDF files, we get millions of 14-byte PPM images, and the process has to be killed manually.
We know that by assigning the process to a job we can restrict how long it will run. But it would probably be better if we could ensure that the process creates at most two new files during its execution.
Do you have any clue for doing that?
Thank you.
One approach is to monitor the directory for file creations: http://msdn.microsoft.com/en-us/library/aa365261(v=vs.85).aspx - the monitoring app could then terminate the PDF image extraction process.
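A rough sketch of that watchdog idea (untested; the limit, the directory, and how you obtain the process handle are all assumptions). Note that a change notification is not guaranteed to correspond to exactly one new file; ReadDirectoryChangesW would give more precise information:

    #include <windows.h>

    // Watches `dir` for file-name changes and terminates `process` once more
    // than `maxFiles` change notifications have been seen.
    void WatchAndKill(LPCWSTR dir, HANDLE process, int maxFiles)
    {
        HANDLE change = FindFirstChangeNotificationW(dir, FALSE, FILE_NOTIFY_CHANGE_FILE_NAME);
        if (change == INVALID_HANDLE_VALUE)
            return;

        int seen = 0;
        HANDLE handles[2] = { process, change };
        for (;;) {
            DWORD wait = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
            if (wait == WAIT_OBJECT_0)            // extraction process exited on its own
                break;
            if (wait == WAIT_OBJECT_0 + 1) {      // something changed in the directory
                if (++seen > maxFiles) {
                    TerminateProcess(process, 1); // too many output files: kill it
                    break;
                }
                FindNextChangeNotification(change);
            }
        }
        FindCloseChangeNotification(change);
    }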
Another would be to use a simple ramdisk which limited the number of files that could be created: you might modify something like http://support.microsoft.com/kb/257405.
If you can set up a FAT16 filesystem, I think the root directory has a fixed limit on the number of entries (typically 512); with files this small, that limit would be reached quickly.
Also, aside from my 'joke' comment, you might want to check out _setmaxstdio and see if that helps ( http://msdn.microsoft.com/en-us/library/6e3b887c(VS.71).aspx ).

How to get the latest file from a directory

This is specific to creating log files. When I am connecting to a server using my application, it writes the details to a log file. When the log file reaches a specific size, let's say 1 MB, I create another file named LOG2.log.
Now, while writing back to the log, there are two or even more log files and I want to pick up the latest one. I don't want to traverse all the files in that directory to pick the right file, as this will take processing time. Is there any other way to get the last created log file in the directory?
Your best bet is to rotate log files, which is what is normally done on Unix (generally via cron).
One possible implementation is to keep 10 (or however many) old log files around, if your program detects that Log.log is over 1MB then move Log09.log to Log10.log, Log08.log to Log09.log, 7 to 8, 6 to 7, ... 2 to 3, and then Log.log to Log02.log. Finally, create a new Log.log file and continue recording.
This way you'll always write to Log.log and there's no filesystem mystery. In theory, this approach is scalable to ridiculous numbers of log files (more than you would ever reasonably need) and is more standard than writing to Log3023.log. Plus, one would always know where to find the current log.
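A minimal sketch of that rotation step (untested; it assumes the Log.log / Log02.log ... Log10.log naming from above and C++17 std::filesystem):

    #include <cstdio>
    #include <filesystem>
    #include <system_error>

    namespace fs = std::filesystem;

    // Rotate Log.log -> Log02.log -> ... -> Log10.log, dropping the oldest.
    // `dir` is the log directory; call this when Log.log exceeds the size limit.
    void RotateLogs(const fs::path& dir, int maxLogs = 10)
    {
        std::error_code ec;
        char buf[32];

        // Drop the oldest log if it exists (Log10.log with the defaults).
        std::snprintf(buf, sizeof(buf), "Log%02d.log", maxLogs);
        fs::remove(dir / buf, ec);

        // Shift Log09 -> Log10, Log08 -> Log09, ..., Log02 -> Log03.
        for (int i = maxLogs - 1; i >= 2; --i) {
            std::snprintf(buf, sizeof(buf), "Log%02d.log", i);
            fs::path from = dir / buf;
            std::snprintf(buf, sizeof(buf), "Log%02d.log", i + 1);
            fs::rename(from, dir / buf, ec);   // ec ignored: the file may simply not exist yet
        }

        // Finally, the current log becomes Log02.log; a fresh Log.log can be created.
        fs::rename(dir / "Log.log", dir / "Log02.log", ec);
    }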
I believe the answer is "stiff". You have to iterate and find the most recent one yourself, as the OS won't keep indices for each possible sort order around on the off chance someone may want them.
Are you able to modify the server? If so, perhaps introduce a LASTLOG.log file that either contains the name of the latest log file, or the actual contents of it.
Otherwise, Tony's right.. No real way to do it other than iterate through yourself.
How about the elegant:
ls -t | head -n 1
The most efficient way is to use a specialized function to go through all the entries (since neither NTFS nor FAT indexes by time) but ignore what you don't need. For that, call FindFirstFileEx with the info level FindExInfoBasic. This skips 8.3 short-name resolution.
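For illustration, a small sketch (untested) of using FindFirstFileEx with FindExInfoBasic to pick the most recently written file; comparing ftLastWriteTime here is an assumption, and you could equally use ftCreationTime for "last created":

    #include <windows.h>
    #include <string>

    // Returns the name of the most recently written file in `dir` (empty if none),
    // using FindExInfoBasic to skip 8.3 short-name resolution.
    std::wstring LatestFile(const std::wstring& dir)
    {
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileExW((dir + L"\\*").c_str(), FindExInfoBasic, &fd,
                                    FindExSearchNameMatch, nullptr, 0);
        if (h == INVALID_HANDLE_VALUE)
            return L"";

        std::wstring newest;
        FILETIME newestTime = {};
        do {
            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                continue;
            if (CompareFileTime(&fd.ftLastWriteTime, &newestTime) > 0) {
                newestTime = fd.ftLastWriteTime;
                newest = fd.cFileName;
            }
        } while (FindNextFileW(h, &fd));

        FindClose(h);
        return newest;
    }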