I'm looking (if possible) for a check-and-copy batch script that I can run remotely to check multiple directories and copy the one with the newest modified date.
To clarify: on the remote machine there are up to five profile folders (which may or may not exist). I need the script to check the last modified date of two sub-folders (Desktop and Favorites) in each of the user's potential profiles, then pick the most recent modified date and copy those folders to another location.
So the paths look like:
"\\%asset%\c$\documents and settings\%username%\Desktop"
"\\%asset%\c$\documents and settings\%username%\Favorites"
To check the date and compare it with (potentially)
"\\%asset%\c$\documents and settings\%username%.temp\Desktop"
"\\%asset%\c$\documents and settings\%username%.temp\Favorites"
Or
"\\%asset%\c$\documents and settings\%username%.temp001\Desktop"
"\\%asset%\c$\documents and settings\%username%.temp001\Favorites"
Once it has found the sub-folders with the most recent modified date, it should copy those (and only those) to:
"\\%asset%\c$\documents and settings\Backup"
I know I can get the check done on one location, but I don't know how to ask batch to run multiple checks and then pick the most recent.
Is that actually possible, or am I trying this in the wrong language? I've gotten everything but the check written out, and that's where I'm getting stuck...
Any help would be appreciated!
As I said in my comment, I think PowerShell would be better suited to the task. But I did think of one approach that may work in batch without having to resort to textual date comparisons (which are difficult).
You may be able to use robocopy with its /copy:t option, which copies timestamps. Imagine copying all three directories to one temporary location, then doing a dir /b /od to list that directory sorted by date, finding the most recent entry, and copying that to your target.
I don't have time to test this theory out or give you real code, but hopefully it gives you an approach to try. Or convinces you to take a look at PowerShell. :-)
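In case it helps, here is a rough Python sketch of the pick-the-newest logic (not the robocopy trick above, just direct mtime comparison; the %asset%/%username% placeholders become function arguments, and the .temp/.temp001 suffixes are taken from the question):

import os
import shutil

def backup_newest(asset, username):
    base = r"\\%s\c$\documents and settings" % asset
    suffixes = ["", ".temp", ".temp001"]          # the potential profiles
    profiles = [os.path.join(base, username + s) for s in suffixes]
    profiles = [p for p in profiles if os.path.isdir(p)]
    if not profiles:
        return

    def newest_mtime(profile):
        # Most recent modified date across Desktop/Favorites, -1 if neither exists.
        times = [os.path.getmtime(os.path.join(profile, sub))
                 for sub in ("Desktop", "Favorites")
                 if os.path.isdir(os.path.join(profile, sub))]
        return max(times) if times else -1

    winner = max(profiles, key=newest_mtime)
    for sub in ("Desktop", "Favorites"):
        src = os.path.join(winner, sub)
        if os.path.isdir(src):
            shutil.copytree(src, os.path.join(base, "Backup", sub),
                            dirs_exist_ok=True)   # Python 3.8+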
I am trying to use pysvn to get the creation and last-modification revisions (and above all, dates) of files in an svn repository...
The typical file history I'm struggling with looks like this:
I'm using pysvn, but I can't understand the documentation very well. So far I can get either the branching date/revision or the last modification on the root. What I would like is the real creation date (regardless of whether it happened on the root or on a branch), and the last modification date, excluding the branch copy itself if there has been no modification on the branch since it was created.
Thanks in advance if someone can help me with this; I don't want to spend too much time on this specific part of my script... :)
Manu
(BTW, I'm currently using a simple existing algorithm that tries to find the creation date by bisection: given min and max revisions to check, it recursively tests whether the file exists in a given revision until it reaches the earliest one. Sorry for the not-very-English explanation!)
(EDIT: of course, for now, the branch has not been merged back into the root; otherwise I would probably not struggle with the branch revisions and could just take the merge revision as the last modification, for instance.)
pysvn.Client().log() will return information on commits that you can then analyse.
Of interest to you is the optional changed_paths information; make sure you set discover_changed_paths=True to have it returned.
That will show you when a file was added to the repo, which is your creation event, and when a file was modified, which is your modified date.
You may also have to figure out when a file was renamed, which looks like a commit with a delete of the original file and the addition of another.
You can also figure out the branch relationship by looking at the values of copyfrom_path and copyfrom_revision.
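To make that concrete, here is a hedged pysvn sketch (the repo URL and file path are hypothetical, and rename tracking plus trunk/branch path mapping are left out):

import pysvn

REPO = "http://svn.example.com/repo"      # hypothetical repo URL
FILE = "/branches/feature/src/foo.c"      # hypothetical path inside the repo

client = pysvn.Client()
# Entries come back newest-first by default; discover_changed_paths=True
# adds the per-file action list to each log entry.
entries = client.log(REPO, discover_changed_paths=True)

created = modified = None
for entry in entries:
    for change in entry.changed_paths:
        if change.path != FILE:
            continue
        if change.action == "A" and change.copyfrom_path is None:
            created = entry                 # a real add, not a branch copy
        elif change.action == "M" and modified is None:
            modified = entry                # first match = most recent
if created:
    print("created in r%d at %s" % (created.revision.number, created.date))
if modified:
    print("last modified in r%d at %s" % (modified.revision.number, modified.date))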
A little retrospective now that I've settled into Mercurial. Forget combining ignore files with hg remove; it's crazy and bass-ackwards. You can use hg remove once you've established that something in an ignore file isn't being forgotten because the item in question was tracked before the original repo was created. Note that hg remove effectively clears tracked status, but it also schedules the file for deletion in anything that pulls changes from your repo. If the file is ignored, however, the tracking deactivation still happens, but that delete-me changeset won't ever reach another repo, and for some reason the file will never be deleted in yours, which IMO is counter-intuitive. It is a very sure sign that somebody (and I don't know these guys) is UNWILLING TO COMPROMISE ON DUH DESIGN PROBLEMS. The important thing to understand is that you don't determine what's important, Mercurial does. Except when you're merging on a pull, of course. It's entirely reasonable then. But I digress...
Ignore-file-plus-remove is a good combo for already-tracked but very specific files you want forgotten, but if you're dealing with a larger quantity of built files matched by broader patterns, it's not worth the risk. Just go with a double-repo setup: pull -u from the remote repo into your syncing repo, then pull -u commits from your working repo, and merge in a repo whose sole purpose is to merge changes and pass them on, in a place where your not-quite-tracked or untracked files (behavior differs between pulling and pushing, of course, because hey, why be consistent?) won't cause frustration. Trust me. The idea that you should need two repos just to get 'er done offends for good reason, AND THE FACT THAT SO MANY OF US ARE DOING IT should suggest a serious !##$ing design problem, but it's much less painful than all the other awful things that will make you regret seeking a sensible alternative.
And use hg help. It's actually Mercurial's best feature and often better than the internet (which I don't fault for confusion on the matter of all things hg) for getting answers to everything that is confusing and counter-intuitive in this VCS.
/retrospective
# switch to regexp syntax.
syntax: regexp
#Config Files
#.Net
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp[\\/]app\.config
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp_test[\\/]App\.config
#and more of the same following
And in my mercurial.ini at the root of my user directory:
[ui]
username = ereppen
merge = bcomp
ignore = C:\<path to user ignore file>\.hgignore-config
Context:
I wrote an auto-config utility in Node. I just want the changes it makes to those files to be ignored. We have two teams and they aren't on the same page about making this universal, so it needs to be user-specific for now.
The config file is in place and pointed at by my ini file. I clone, run the config utility to change the files, and hg status reveals a list of every single file with an M next to it. I thought it was the UTF-8 thing and explicitly set the file to UTF-16 little-endian. I don't think I'm doing anything with the regex that any modern flavor of regex worth the name wouldn't support.
The .hgignore file has no effect on files that are tracked. Its function is to stop you from seeing files you want ignored listed as "untracked". If you're seeing "M", then they're already added (you got them with the clone), so .hgignore does nothing.
The usual way config files that differ from machine to machine are handled is to put an app.config.sample in source control, have app.config in .hgignore, and have people make a copy when they're making their config edits.
Alternatively, if your config files allow for includes and overrides, you end them with an include of app-local.config and put any overridden settings in app-local.config, which you don't add to the repo but do include in .hgignore.
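The include-and-override idea, sketched generically in Python (the question's .config files are XML, so treat this as the shape of the pattern rather than a drop-in; the filenames come from the answer above):

import configparser
import os

cfg = configparser.ConfigParser()
cfg.read("app.config")                  # tracked, shared defaults
if os.path.exists("app-local.config"):  # untracked; listed in .hgignore
    cfg.read("app-local.config")        # read later, so its settings win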
This is specific to creating log files. When I am connecting to a server using my application, it writes the details to a log file. When the log file reaches a specific size, let's say 1 MB, I create another file named LOG2.log.
Now, while writing back to the log file, there are two or even more log files, and I want to pick up the latest one. I don't want to traverse through all the files in that directory to pick the file, as this will take processing time. Is there any other way to get the last-created log file in the directory?
Your best bet is to rotate log files, which is what is normally done on Unix (generally via cron).
One possible implementation is to keep 10 (or however many) old log files around, if your program detects that Log.log is over 1MB then move Log09.log to Log10.log, Log08.log to Log09.log, 7 to 8, 6 to 7, ... 2 to 3, and then Log.log to Log02.log. Finally, create a new Log.log file and continue recording.
This way you'll always write to Log.log and there's no filesystem mystery. In theory, this approach is scalable to ridiculous numbers of log files (more than you would ever reasonably need) and is more standard than writing to Log3023.log. Plus, one would always know where to find the current log.
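A short Python sketch of that rotation scheme (the file names and the 1 MB threshold are taken from the answer above; the log directory is assumed to be the current one):

import os

KEEP = 10                                   # how many old logs to keep around

def rotate():
    name = lambda n: "Log%02d.log" % n      # Log02.log ... Log10.log
    if os.path.exists(name(KEEP)):
        os.remove(name(KEEP))               # drop the oldest, Log10.log
    for n in range(KEEP - 1, 1, -1):        # Log09 -> Log10, ..., Log02 -> Log03
        if os.path.exists(name(n)):
            os.rename(name(n), name(n + 1))
    os.rename("Log.log", name(2))           # current log becomes Log02.log

if os.path.exists("Log.log") and os.path.getsize("Log.log") > 1024 * 1024:
    rotate()   # then reopen a fresh Log.log and keep writing to it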
I believe the answer is "tough luck": you have to iterate and find the most recent one yourself, as the OS won't keep indices for each possible sort order around on the off chance someone may want them.
Are you able to modify the server? If so, perhaps introduce a LASTLOG.log file that either contains the name of the latest log file, or the actual contents of it.
Otherwise, Tony's right: there's no real way to do it other than iterating through yourself.
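For what it's worth, the iterate-yourself version is short, and os.scandir only stats each entry once (the directory argument and the .log suffix are assumptions):

import os

def latest_log(directory="."):
    logs = [e for e in os.scandir(directory) if e.name.endswith(".log")]
    # Newest by last-modified time; None if the directory has no logs yet.
    return max(logs, key=lambda e: e.stat().st_mtime).path if logs else None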
How about the elegant:
ls -t | head -n 1
The most efficient way is to use a specialized function to go through all entries (as NTFS and FAT don't index by time), ignoring what you don't need. For that, call FindFirstFileEx with info level FindExInfoBasic. This skips 8.3 name resolution.
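For illustration, a rough ctypes sketch of that call from Python (Windows only; the *.log pattern is an assumption, and error handling is minimal):

import ctypes
from ctypes import wintypes

FindExInfoBasic = 1          # skip 8.3 short-name resolution
FindExSearchNameMatch = 0
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value

kernel32 = ctypes.windll.kernel32
kernel32.FindFirstFileExW.restype = wintypes.HANDLE
kernel32.FindNextFileW.argtypes = [wintypes.HANDLE,
                                   ctypes.POINTER(wintypes.WIN32_FIND_DATAW)]
kernel32.FindClose.argtypes = [wintypes.HANDLE]

def newest_log(directory):
    data = wintypes.WIN32_FIND_DATAW()   # struct ships with ctypes.wintypes
    handle = kernel32.FindFirstFileExW(directory + "\\*.log",
                                       FindExInfoBasic, ctypes.byref(data),
                                       FindExSearchNameMatch, None, 0)
    if handle == INVALID_HANDLE_VALUE:
        return None
    best_name, best_time = None, -1
    try:
        while True:
            # Combine the FILETIME halves into one 64-bit last-write stamp.
            t = ((data.ftLastWriteTime.dwHighDateTime << 32)
                 | data.ftLastWriteTime.dwLowDateTime)
            if t > best_time:
                best_name, best_time = data.cFileName, t
            if not kernel32.FindNextFileW(handle, ctypes.byref(data)):
                break
    finally:
        kernel32.FindClose(handle)
    return best_name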
I'm looking for a good, efficient method for scanning a directory structure for changed files on Windows XP+. Something like what git does is exactly what I'm looking for: running git status displays all modified files, all new (untracked) files, and deleted files very quickly, which is exactly what I would like to do.
I have a basic model up and running which performs an initial scan and stores all filenames, size, dates and attributes.
On a subsequent scan it checks if the size, attributes or date have changed and marks as a changed file.
My issue now comes in detecting moved and deleted files. Is there a tried and tested method for this sort of thing? I'm struggling to come up with a good method.
I should mention that it will eventually use ReadDirectoryChangesW to monitor files and alert the user when something changes so a full scan is really a last resort after the initial scan.
Thanks,
J
EDIT: I think I may have described the problem badly. The issue I'm facing is not so much detecting the changes - I have ReadDirectoryChangesW() using IOCP on multiple threads to detect when a change happens - the issue is more what to do with the information. For example, a moved file is reported as a delete followed by a create, and a rename comes in two parts: old name, followed by new name. So what I'm asking is how to differentiate between a delete that is part of a move and an actual delete. I'm guessing buffering the changes and processing them in batches would be an option, but it feels messy.
In native code, FileSystemWatcher is replaced by ReadDirectoryChangesW. Using this properly is not simple; there is a good baseline to build off here.
I have used this code in a previous job and it worked pretty well. The Win32 API itself (and FileSystemWatcher) are prone to problems that are described in the docs and also discussed in various places online, but the impact of those will depend on your use cases.
EDIT: the exact change is indicated in the FILE_NOTIFY_INFORMATION structure that you get back - adds, removals, rename data including old and new name.
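A minimal Python sketch of that (assumes the pywin32 package; the watched directory is hypothetical, and the IOCP/overlapped machinery from the question is omitted). Note how a rename arrives as an old-name/new-name pair, which is what lets you tell it apart from a genuine delete:

import win32con
import win32file

ACTIONS = {1: "added", 2: "removed", 3: "modified",
           4: "renamed from", 5: "renamed to"}

FILE_LIST_DIRECTORY = 0x0001
handle = win32file.CreateFile(
    r"C:\watched",                         # hypothetical directory to watch
    FILE_LIST_DIRECTORY,
    win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE |
    win32con.FILE_SHARE_DELETE,
    None, win32con.OPEN_EXISTING,
    win32con.FILE_FLAG_BACKUP_SEMANTICS, None)

while True:
    # Blocks until something changes; returns a list of (action, filename)
    # pairs decoded from the FILE_NOTIFY_INFORMATION buffer.
    for action, filename in win32file.ReadDirectoryChangesW(
            handle, 65536, True,
            win32con.FILE_NOTIFY_CHANGE_FILE_NAME |
            win32con.FILE_NOTIFY_CHANGE_DIR_NAME |
            win32con.FILE_NOTIFY_CHANGE_LAST_WRITE,
            None, None):
        print(ACTIONS.get(action, "unknown"), filename)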
I voted Liviu M. up. However, another option if you don't want to use the .NET framework for some reason, would be to use the basic Win32 API call FindFirstChangeNotification.
You can use USN journaling if you are up to it; that is pretty low-level (NTFS-level) stuff.
Here you can find detailed information and source code. It is written in C#, but most of it is P/Invoking C/C++ functions.
I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories.
I am using svn version control. There are two ways to refactor: one preserving svn history (using the svn move command) and the other without preserving it. I think refactoring while preserving svn history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio does not fit at all for directory restructuring).
But right now, since the code is not released, we have the option to not preserve history.
Still, there remains the task of changing the include directives for header files wherever they are included. I am thinking of writing a small script in Python that receives a map from current filename to new filename and makes the renames wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
If you're having to rewrite the #includes to do this, you did it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, and only use a second level to organize around architecture or OS dependencies (like sys/types.h).
Then change your make files to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
As far as the history part, I personally find it easier to make a clean start when doing this sort of thing; archive the old one, make a new repository v2, go from there. The counterargument is when there is a whole lot of history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history can be made in the following cases:
your code might have embarrassing things, like profanity and fighting among developers
you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured at the beginning, you can do about 75-90% of the work using scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories matching namespaces. So a file that declared the class protocols::serialization::SerializerBase moved to src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so doing a find-and-replace on #includes in every source file in the tree was trivial too, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either doing everything by hand or writing our own C++ parser.
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl & Python offer better utility.
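A hedged Python sketch of that kind of script (the RENAMES map and the src tree are hypothetical; svn move keeps the history while a regex fixes the #include directives):

import os
import re
import subprocess

RENAMES = {   # hypothetical old path -> new path map
    "src/SerializerBase.h": "src/protocols/serialization/SerializerBase.h",
}

# 1. Move the files with svn so history is preserved
#    (--parents creates and schedules any new intermediate directories).
for old, new in RENAMES.items():
    subprocess.run(["svn", "move", "--parents", old, new], check=True)

# 2. Rewrite #include "..." directives everywhere using the same map.
include_re = re.compile(r'(#\s*include\s*")([^"]+)(")')
for root, _, files in os.walk("src"):
    for name in files:
        if not name.endswith((".h", ".c", ".cpp")):
            continue
        path = os.path.join(root, name)
        with open(path) as fh:
            text = fh.read()
        fixed = include_re.sub(
            lambda m: m.group(1) + RENAMES.get(m.group(2), m.group(2)) + m.group(3),
            text)
        if fixed != text:
            with open(path, "w") as fh:
                fh.write(fixed)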
It really is worth saving the history. Especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531