Sometimes I have to work on code that moves the computer clock forward. When that happens, some .cpp or .h files get their last-modification date set to a future time.
Later on, when my clock is fixed and I compile my sources, the system rebuilds most of the project because some of the last-modification dates are still in the future. Each subsequent recompile has the same problem.
Solutions that I know of are:
a) Find the files that have a future time and re-save them. This method is not ideal because the project is very big, and it takes time even for Windows advanced search to find the files that have changed.
b) Delete the whole project and re-check it out from svn.
Does anyone know how I can get around this problem?
Is there perhaps a setting in Visual Studio that will let me tell the compiler to use the archive bit instead of the last-modification date to detect source file changes?
Or perhaps there is a recursive modification date reset tool that can be used in this situation?
I would recommend using a virtual machine where you can mess with the clock to your heart's content and it won't affect your development machine. Two free ones are Virtual PC from Microsoft and VirtualBox from Sun.
If this was my problem, I'd look for ways to avoid mucking with the system time: isolate the code under unit tests, run it in a virtual machine, or something along those lines.
However, because I love PowerShell:
Get-ChildItem -r . |
? { $_.LastWriteTime -gt ([DateTime]::Now) } |
Set-ItemProperty -Name "LastWriteTime" -Value ([DateTime]::Now)
I don't know if this works in your situation but how about you don't move your clock forward, but wrap your gettime method (or whatever you're using) and make it return the future time that you need?
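A minimal sketch of that idea in C++ (the helper names here are invented; the real code would wrap whatever time call the project already uses):

// Hypothetical time shim: instead of changing the system clock, shift the
// value returned by the one helper that the rest of the code calls.
#include <ctime>

namespace clock_shim {
    static std::time_t g_offset = 0;                  // seconds added to "now"

    // Call this everywhere instead of std::time(NULL) directly.
    inline std::time_t now() { return std::time(NULL) + g_offset; }

    // Simulate the clock jumping forward without touching the real clock.
    inline void advance(std::time_t seconds) { g_offset += seconds; }
}

The OS clock, and therefore the file timestamps, stay untouched while the code under test still sees the future time.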
Install Unix Utils
touch temp
find . -newer temp -exec touch {} ;
rm temp
Make sure to use the full path when calling find or it will probably use Windows' find.exe instead. This is untested in the Windows shell -- you might need to modify the syntax a bit.
I don't use Windows - but surely there is something like awk or grep that you can use to find the "future"-timestamped files, and then "touch" them so they have the right time - even a Perl script.
1) Use a build system that doesn't use timestamps to detect modifications, like scons
2) Use ccache to speed up a build system that does use timestamps (it will still trigger rebuilds, but ccache makes them cheap).
In either case, MD5 checksums, not timestamps, are used to verify whether a file has actually been modified.
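For intuition, here is a toy C++ sketch of content-based change detection, the idea scons and ccache rely on (std::hash over the file contents stands in for a real MD5 checksum):

#include <fstream>
#include <functional>
#include <iterator>
#include <string>

// Hash the file's current contents.
std::size_t contentHash(const std::string& path)
{
    std::ifstream in(path.c_str(), std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
    return std::hash<std::string>()(data);
}

// Rebuild only if the contents differ from last time, regardless of what the
// timestamp says.
bool needsRebuild(const std::string& path, std::size_t lastKnownHash)
{
    return contentHash(path) != lastKnownHash;
}

A file whose timestamp has jumped into the future but whose contents are unchanged hashes to the same value and does not trigger a rebuild.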
Alright. What I'm after is, what seems to me, fairly simple.
I've got File I/O down to a fine art for basic text files.
But, what I need now, is a way to read a text file that's online.
Let's say, something like: http://www.iamawebsite.com.au/file.txt
I CAN download the file and store it locally, but that will cause a lot more pain for me in the future, and even more so for redistribution of the end program, so if I can avoid doing that I will be forever grateful. (Also, if possible, I'd like to refrain from using any additional libraries. If I have to use one, I will, but if there's a way around it, I'm happy.)
I have looked around for a while on ways to do similar tasks, but they seem to be going for more than what I'm after, and skipping the small steps which are the ones I can't quite get.
(If it helps, using Windows 8, Visual Studio 2010 Ultimate, needs to work in Windows 7 and 8 if possible)
I tried many different things, and I couldn't get anything to work without over-complicating things to a ridiculous level. I also tried libcurl, but I couldn't manage to link it properly; not sure why.
I ended up using a combination of Batch and Powershell scripts, simple and powerful, and best of all it works.
If anybody is interested:
Batch script:
powershell -ExecutionPolicy Bypass "& 'fileUrl\name.ps1'"
Powershell script:
$webClient = New-Object System.Net.WebClient;
$url = "http://www.iamawebsite.com.au/iamafile.txt";
$file = "whereToSaveFile\desiredNameOfFile.txt";
$webClient.DownloadFile($url, $file);
I have both my Batch and Powershell files in the same directory, just to make it a little easier on myself
Thanks!
Try the URLDownloadToCacheFile function, maybe.
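A rough, untested sketch of that approach (Win32, link with urlmon.lib; the URL is the example one from the question and error handling is minimal):

#include <windows.h>
#include <urlmon.h>
#include <iostream>
#pragma comment(lib, "urlmon.lib")

int main()
{
    char cachedPath[MAX_PATH] = {0};
    HRESULT hr = URLDownloadToCacheFileA(
        NULL,                                       // no controlling IUnknown
        "http://www.iamawebsite.com.au/file.txt",   // URL from the question
        cachedPath, MAX_PATH,                       // receives the local cache path
        0, NULL);                                   // reserved, no status callback
    if (SUCCEEDED(hr))
        std::cout << "Cached copy at: " << cachedPath << "\n";
    return SUCCEEDED(hr) ? 0 : 1;
}

The file lands in the browser cache, so the existing text-file reading code can simply be pointed at cachedPath afterwards.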
One way would be to use InternetOpen, then InternetOpenUrl, and finally InternetReadFile.
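Roughly like this, as a hedged, untested sketch (WinInet; link with wininet.lib, error handling kept minimal):

#include <windows.h>
#include <wininet.h>
#include <iostream>
#include <string>
#pragma comment(lib, "wininet.lib")

int main()
{
    // Open a WinInet session using the machine's preconfigured proxy settings.
    HINTERNET hSession = InternetOpenA("MyApp", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    if (!hSession) return 1;

    // Open the URL directly; no manual connect/request steps are needed for a simple GET.
    HINTERNET hUrl = InternetOpenUrlA(hSession, "http://www.iamawebsite.com.au/file.txt",
                                      NULL, 0, INTERNET_FLAG_RELOAD, 0);
    if (!hUrl) { InternetCloseHandle(hSession); return 1; }

    // Read the body in chunks until InternetReadFile reports zero bytes.
    std::string contents;
    char buffer[4096];
    DWORD bytesRead = 0;
    while (InternetReadFile(hUrl, buffer, sizeof(buffer), &bytesRead) && bytesRead > 0)
        contents.append(buffer, bytesRead);

    InternetCloseHandle(hUrl);
    InternetCloseHandle(hSession);

    std::cout << contents << std::endl;
    return 0;
}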
I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I usually try to implement each new feature in a new branch (using "hg clone"), and I may have several new features in development during the day, so it quickly gets very tedious waiting for each new feature branch to build.
Are there any recipes to somehow re-use object files from other already built branches?
P.S. in git there are named branches within the same repository, which lets the build system re-use the existing object files; however, I prefer the simpler Mercurial separate-branches model...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. The way it works is as follows:
You define a place to be used as the cache (and the maximum cache size) by using the CCACHE_DIR environment variable
Your compiler should be set to ccache ${CC} or ccache ${CXX}
ccache takes the output of ${CC} -E and the compilation flags and uses that as a base for its hash. As long as the compiler flags, source file and the headers are all unchanged, the object file will be taken from cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
A simple way to speed up your builds could be to use a local "build directory" on your disk. This way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Woops, I missed your P.S. where you don't like having multiple named branches in the same repo and that you prefer separate clones.. sorry about that.
I too have somewhat large C++ projects and the clone-per-feature workflow didn't work for me very well. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I've created the clone. Secondly, like you said, a lot of code must be recompiled unnecessarily. Thirdly, I have to keep track of where I've pushed to and pulled from - gets confusing when you start a new feature and then get sidetracked onto a new one. Before you know it you have many clones and not sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by facilitating the naming of heads in your repo. These heads would normally be unnamed and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tip of your feature-branch. With bookmarks you can name them descriptive names like 'my-cool-feature' or 'bugfix-392'.
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It allows you to create feature-branches (now called tasks) and suppress the pushing of incomplete tasks. This is handy when you have a few feature-branches at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (and not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone to do branchy development, I guess you could try creating a folder link (shortcut under windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder - just put your obj folder there (it shouldn't be under source control anyways, imo). Use relative paths to refer to it.
A word of warning: many .o symbol tables (or equivalent) contain the full path name of the source file. If that other file changes (or if the path is not visible from the new directory) you may encounter weirdness when debugging.
I'm working on a 1M+ LOC C/C++ project on Solaris (remotely, via VNC or SSH). I have a daily-updated copy of the source code on my local machine too (Windows, just for browsing code).
I use the VIM and ctags combo (on both Solaris and Windows), but I'm not happy with the results / speed. What settings for ctags would you recommend? There are a lot of options for what should be tagged and how. Should I use a single tag file per project, per directory, or perhaps just one for everything?
Using anything less than one for everything doesn't really make sense to me. Being able to quickly jump around your project is what tags are for in the first place. For instance, our code is divided into 3 main sections, Include/, Processes/, Libraries/. Without being able to jump between these I would be incredibly unproductive.
Personally I use cscope (its C++ parsing isn't great, but it's OK, and its VIM integration is better than plain ctags), but when I do use ctags I usually just add --c++-kinds=+p.
I use etags:
find src1 src2 src3 | grep -v "\\.svn" | xargs etags --append
In Emacs, position the cursor on an identifier and press M-. ([alt] + [period], or [esc] followed by [period]).
I don't know how it compares to your setup as far as speed goes, or if you're willing to use emacs. I'm just posting in case you want to try some alternatives.
I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories.
I am using svn for version control. There are two ways to refactor: one preserving svn history (using the svn move command) and the other without preserving it. I think refactoring while preserving svn history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio is not at all suited to directory restructuring).
But right now since the code is not released, we have the option to not preserve history.
Still, there remains the task of changing the #include directives wherever those headers are included. I am thinking of writing a small script in Python that receives a map from current filenames to new filenames and performs the renames wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
If you're having to rewrite the #includes to do this, you did it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, and only use a second level to organize around architecture or OS dependencies (like sys/types.h).
Then change your make files to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
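For illustration (the header name and include path below are made up, not from any real project), every source file would then only ever contain short, stable includes like:

// Found via a compiler include path such as -Iinclude (or /I include in Visual Studio).
#include "netio/Socket.h"
// The second level is reserved for architecture/OS dependencies.
#include <sys/types.h>

Moving headers around then only means adjusting the -I paths in the makefiles, never editing the sources.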
As far as the history part, I personally find it easier to make a clean start when doing this sort of thing; archive the old one, make a new repository v2, go from there. The counterargument is when there is a whole lot of history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history applies in the following cases:
your code might have embarrassing things, like profanity and fighting among developers
you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured at the beginning, you can do about 75-90% of the work using scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories based on namespaces. So a file that declared the class protocols::serialization::SerializerBase was located in src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so doing a find and replace on #includes in every source file in the tree was trivial, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either having to do everything by hand or having to write our own C++ parser.
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl & Python offer better utility.
It really is worth saving the history. Especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531
I am comparing two almost identical folders, which include hidden .svn folders that should be ignored. I want to keep quickly re-comparing the folders as files are patched, checking the differences without examining the unchanged, matching files again.
edit:
Because there are so many options, I'm interested in a solution that clearly exploits the knowledge from the previous compare, because any other approach is not really feasible when doing repeated comparisons.
If you are willing to spend a bit of money, Beyond Compare is a pretty powerful diffing tool that can do folder based diffing.
I personally use WinMerge and find it very useful. It has filters that exclude .svn files. Under Linux I prefer Meld.
One option would be to use rsync. Something like:
rsync -n -r -v -C dir_a dir_b
The -n option does a dry-run so no files will be modified. -r does a recursive comparison. Optionally turn on verbose mode with -v. (You could use -i to itemize the changes instead of -v.) To ignore commonly ignored files such as .svn/ use -C.
This should be faster than a simple diff, judging by the rsync manpage:
Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
Since the "quick check" algorithm does not look at file contents directly, it might be fooled. In that case, the -c option, which performs a checksum instead, may be needed. It is likely to be faster than an ordinary diff.
In addition, if you plan on syncing the directories at some point, this is a good tool for that job as well.
Not foolproof, but you could just compare the timestamps.
Use Total Commander! All the cool developers use it :)
If you are on Linux or some variant, you should be able to do:
prompt$ diff -r dir1 dir2 --exclude=.svn
The -r forces recursive lookups. There are a bunch of switches to ignore stuff like whitespace etc.