I know I can use install-data-hook to do anything I want after my data files are copied and, this way, I can adjust the file permissions the way I want.
My question, though, concerns what happens before that.
Is there any way I can tell automake to set a standard permission mask for any data group before it gets copied?
I mean I want the install step to use the correct mode in the first place, rather than letting it use the default 0644 and then correcting all the file permissions afterwards.
In other words, I want the task done right the first time, without having to fix it up later.
Is this possible?
Thanks!
Automake implements the GNU Coding Standards. These state that data files should be installed using the command $(INSTALL_DATA), which should default to $(INSTALL) -m 644.
What you can do is override the value of INSTALL_DATA in some Makefile.am; all data files installed by that Makefile.am will then use your definition. If you have two groups of data files that require different modes, you will have to move them into two different directories so each can have its own Makefile.am.
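For example, a minimal Makefile.am sketch of that override (the file name here is made up), installing its data files with mode 0640:

INSTALL_DATA = $(INSTALL) -m 640
pkgdata_DATA = private-defaults.conf

Every _DATA file installed by this Makefile.am then gets the requested mode from the start, with no fix-up hook needed.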
There's a file, just a single file; there are no shards or anything else involved.
/alex/projects/inspector/inspector.cr
I want to require it in another file in another folder
/alex/projects/my-project/play.cr
This won't work:
require "/alex/projects/inspector/inspector"
require "/alex/projects/inspector/inspector.cr"
This also doesn't work:
CRYSTAL_PATH=$CRYSTAL_ROOT/src:lib:/alex/projects/inspector
require "inspector"
require "inspector.cr"
require "./inspector"
require "./inspector.cr"
P.S.
I would like to avoid using shards etc. as I have no plans to share that file or publish it. It's just a file that's used by a couple of other files in different locations.
I solved it by creating a symlink:
ln -s /alex/projects/inspector/inspector.cr /alex/projects/my-project/inspector.cr
require "./inspector"
Currently, require is only relative (as you learned from your forum post). However, you were on the right track with CRYSTAL_PATH.
CRYSTAL_PATH is an environment variable used by the Crystal compiler which tells it where to look for dependencies. So instead of using it in code, as it appears you did, you should use it when building the executable:
CRYSTAL_PATH=$CRYSTAL_ROOT/src:lib:/alex/projects/inspector crystal build /alex/projects/my-project/play.cr
Note that CRYSTAL_ROOT must be defined for that exact command to work. If you need to find the current CRYSTAL_PATH in order to append to it, you can use crystal env.
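For example (paths assumed), you can print the compiler's current search path and extend it rather than replacing it:

crystal env CRYSTAL_PATH
CRYSTAL_PATH="$(crystal env CRYSTAL_PATH):/alex/projects/inspector" crystal build /alex/projects/my-project/play.cr

This keeps the default lookup locations (the standard library and lib) and only appends the extra directory.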
A little retrospective now that I've settled into Mercurial. Forget forget files combined with hg remove. It's crazy and bass-ackwards. You can use hg remove once you've established that something in a forget file isn't being forgotten because the item in question was tracked before the original repo was created. Note that hg remove effectively clears tracked status, but it also schedules the file for deletion in anything that gets changes from your repo. If the file is ignored, however, the tracking deactivation still happens, but that delete-me changeset won't ever reach another repo and for some reason will never delete in yours, which IMO is counter-intuitive. It is a very sure sign that somebody, and I don't know these guys, is UNWILLING TO COMPROMISE ON DUH DESIGN PROBLEMS. The important thing to understand is that you don't determine what's important, Mercurial does. Except when you're merging on a pull, of course. It's entirely reasonable then. But I digress...
Ignore-file/remove is a good combo for already-tracked but very specific files you want forgotten, but if you're dealing with a larger quantity of built files matched by broader patterns, it's not worth the risk. Just go with a double-repo setup: pull -u from the remote repo to your syncing repo, then pull -u commits from your working repo, and merge in a repo whose sole purpose is to merge changes and pass them on, in a place where your not-quite-tracked or untracked files (behavior is different when pulling rather than pushing, of course, because hey, why be consistent?) won't cause frustration. Trust me. The idea that you should have to have two repos just to get 'er done offends for good reason, AND THAT SO MANY OF US ARE DOING IT should suggest a serious !##$ing design problem, but it's much less painful than all the other awful things that will make you regret seeking a sensible alternative.
And use hg help. It's actually Mercurial's best feature and often better than the internet (which I don't fault for confusion on the matter of all things hg) for getting answers to everything that is confusing and counter-intuitive in this VCS.
/retrospective
# switch to regexp syntax.
syntax: regexp
#Config Files
#.Net
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp[\\/]app\.config
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp_test[\\/]App\.config
#and more of the same following
And in my mercurial.ini at the root of my user directory
[ui]
username = ereppen
merge = bcomp
ignore = C:\<path to user ignore file>\.hgignore-config
Context:
I wrote an auto-config utility in Node. I just want the changes it makes to those files to be ignored. We have two teams that aren't on the same page about making this universal, so it needs to be user-specific for now.
The config file is in place and pointed at by my ini file. I clone, run the config utility to change the files, and hg stat reveals a list of every single file with an M next to it. I thought it was the utf-8 thing and explicitly set the file to utf-16 little endian. I don't think I'm doing anything with the regex that any modern flavor of regex worth the name wouldn't support.
The .hgignore file has no effect for files that are tracked. Its function is to stop you from seeing files you want ignored listed as "untracked". If you're seeing "M" then they're already added (you got them with the clone) so .hgignore does nothing.
The usual way config files that differ from machine to machine are handled is to put an app.config.sample in source control, have app.config in .hgignore, and have people make a copy when they're making their config edits.
Alternatively, if your config files allow includes and overrides, you end them with include app-local.config and override any settings in an app-local.config, which you don't add to source control and do include in .hgignore.
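A minimal sketch of the sample-file pattern (the file names are assumptions). In .hgignore (glob syntax):

syntax: glob
app.config

Then:

hg add app.config.sample           # the template lives in source control
cp app.config.sample app.config    # each developer edits the local copy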
I have a class method (implemented in a shared object in a UNIX environment) which needs to access a text data file at runtime (using ifstream). Currently the method assumes that the data file is available for opening without any relative path, i.e. something like
ifstream dataFile("data.txt");
The shared object is loaded from Python code, and in order for it to be available for loading, it is copied to the /usr/lib/ folder as a post-build step of the makefile. My question is how to make the text data file available to the shared object. I have considered the following possibilities:
Use some relative path, but that method is not totally fool-proof (the project is hosted in various places and I cannot be sure the directory tree will stay the same a month from now).
Copy the data file to /usr/lib as well, but that feels like the wrong approach.
Any suggestions are welcome.
The proper way to go about this is to make the location of the text file a configurable value that will be set when your project is installed. Using a configuration file in /etc/ is a common way to store that value.
That way you can put the text file in e.g. /usr/share/ with all the machine-independent files (that data file is machine-independent, right?) and your code would "know" where to find it.
Note that if the data file is going to be modified as part of your code's operation, then it should probably be placed somewhere under /var (/var/lib or perhaps /var/cache) according to the Filesystem Hierarchy Standard (FHS) and most other Unix filesystem standards.
If the data file could be considered a configuration file, as you mentioned in one of your comments, you could just hard-code its path to somewhere under /etc/ (e.g. /etc/MyProject/data.cfg) and go on.
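A minimal sketch of the install side of that approach, with made-up paths:

mkdir -p /etc/myproject
install -D -m 644 data.txt /usr/share/myproject/data.txt
echo "/usr/share/myproject/data.txt" > /etc/myproject/data.path

The code then reads the single line in /etc/myproject/data.path at startup (e.g. with std::getline) and opens that path instead of the bare "data.txt".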
I can think of two solutions:
When you load your shared object, you somehow give it the path to your file.
Instead of copying the file to /usr/lib you could create a symbolic link to it in /usr/lib, but that is not the best thing to do imho.
The first solution is the best one for me.
I'm using Mercurial for development of quite a large C++ project which takes about 30 minutes to build from scratch (while incremental builds are very quick).
I'm usually trying to implement each new feature in a new branch (using "hg clone"), and I may have several new features in development during the day, so it quickly gets very tedious to wait for each new feature branch to build.
Are there any recipes to somehow re-use object files from other already built branches?
P.S. In git there are named branches within the same repository, which make reuse of existing object files possible for the build system; however, I prefer the simpler Mercurial separate-branches model...
I suggest using ccache as a way to speed up compilation of (mostly) the same code tree. The way it works is as follows:
You define the place to be used as the cache with the CCACHE_DIR environment variable (the maximum cache size is set separately, e.g. with ccache -M)
Your compiler should be set to ccache ${CC} or ccache ${CXX}
ccache takes the output of ${CC} -E and the compilation flags and uses that as a base for its hash. As long as the compiler flags, source file and the headers are all unchanged, the object file will be taken from cache, saving valuable compilation time.
Note that this method speeds up compilation of any source file that eventually produces the same hash. If you share source files across projects, ccache will handle them as well.
If you already use distcc and wish to use it with ccache, set the CCACHE_PREFIX environment variable to distcc.
Using ccache sped up our source tree compilation around tenfold.
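A minimal setup sketch (the paths and sizes are just examples):

export CCACHE_DIR=$HOME/.ccache    # where the cache lives
ccache -M 10G                      # cap the cache size
export CC="ccache gcc"             # wrap the compilers
export CXX="ccache g++"
export CCACHE_PREFIX=distcc        # optional: chain ccache with distcc
make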
A simple way to speed up your builds could be to use a local "build directory" on your disk. This way you can check out into this directory and start the build. The first time it will take the full time, but after that it will (hopefully) only rebuild the files whose source code changed.
My Localbranch extension was designed partly around this use case. It uses a single working directory, but I think it's simpler than git. It's essentially a mechanism for maintaining multiple repository clones under one working directory, where only one is active at a given time.
Whoops, I missed your P.S. where you say you don't like having multiple named branches in the same repo and prefer separate clones... sorry about that.
I too have somewhat large C++ projects, and the clone-per-feature workflow didn't work very well for me. Firstly, I had to close down my Vim session and then reopen (many of the same) files once I'd created the clone. Secondly, like you said, a lot of code must be recompiled unnecessarily. Thirdly, I have to keep track of where I've pushed to and pulled from - it gets confusing when you start a new feature and then get sidetracked onto another. Before you know it you have many clones and aren't sure which ones need to be pushed back to your main.
You definitely don't want to use named branches (as I'm sure you know) to handle this as they are quite permanent.
What you need are bookmarks: https://www.mercurial-scm.org/wiki/BookmarksExtension
Bookmarks allow you to create lightweight (and otherwise anonymous) branches per feature by facilitating the naming of heads in your repo. These heads would normally be unnamed and you would have to look at the output of 'hg log' or use some graphical tool to find the revision numbers for the tip of your feature-branch. With bookmarks you can give them descriptive names like 'my-cool-feature' or 'bugfix-392'.
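For example, with the extension enabled:

hg bookmark my-cool-feature    # name the current head
# ...commit work; the active bookmark advances with each commit...
hg update bugfix-392           # switch to another bookmarked head
hg bookmarks                   # list all bookmarks and show the active one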
If you like the idea of bookmarks, I'd also recommend my own extension called 'tasks': http://bitbucket.org/alu/hgtasks. This extension works like bookmarks but adds some more functionality. It allows you to create feature-branches (now called tasks) and suppresses the pushing of incomplete tasks. This is handy when you have a few feature-branches going at once. You may not be ready to push your 'my-cool-feature' task, but 'bugfix-392' is ready to go. Because tasks track a set of changesets (and not just one 'tip' changeset), there are some things you can do with tasks that you can't with bookmarks. See an example workflow here: http://x.zpuppet.org/2009/03/09/mercurial-tasks-extension/.
Mercurial also has local named branches, see the hg branch command.
If you insist on using hg clone to do branchy development, I guess you could try creating a folder link (a shortcut under Windows) in your repo to a shared obj folder. This will work with hg clone, but I'm not sure your build tool will pick it up.
Otherwise, you probably keep all your repos in one folder - just put your obj folder there (it shouldn't be under source control anyways, imo). Use relative paths to refer to it.
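If you try that, a sketch of the layout (the directory name and makefile variable are assumptions):

mkdir ../shared-obj
# point each clone's build at it via a relative path, e.g. in the makefile:
# OBJDIR = ../shared-obj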
A word of warning: many .o symbol tables (or equivalent) contain the full path name of the source file. If that other file changes (or if the path is not visible from the new directory) you may encounter weirdness when debugging.
I am comparing two almost identical folders, which include hidden .svn folders that should be ignored. I want to quickly re-compare the folders as files are patched, seeing the differences without checking the unchanged, matching files again.
edit:
Because there are so many options, I'm interested in a solution that clearly exploits the knowledge from the previous compare, because any other solution is not really feasible when doing repeated comparisons.
If you are willing to spend a bit of money, Beyond Compare is a pretty powerful diffing tool that can do folder based diffing.
I personally use WinMerge and find it very useful. It has filters that exclude svn files. Under Linux I prefer Meld.
One option would be to use rsync. Something like:
rsync -n -r -v -C dir_a/ dir_b
The -n option does a dry-run so no files will be modified. -r does a recursive comparison. Optionally turn on verbose mode with -v. (You could use -i to itemize the changes instead of -v.) To ignore commonly ignored files such as .svn/ use -C.
This should be faster than a simple diff. From the rsync manpage:
Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
Since the "quick check" algorithm does not look at file contents directly, it might be fooled. In that case, the -c option, which performs a checksum instead, may be needed. It is likely to be faster than an ordinary diff.
In addition, if you plan on syncing the directories at some point, this is a good tool for that job as well.
Not foolproof, but you could just compare the timestamps.
Use Total Commander! All the cool developers use it :)
If you are on Linux or some variant, you should be able to do:
prompt$ diff -r dir1 dir2 --exclude=.svn
The -r flag forces a recursive comparison. There are a bunch of switches to ignore things like whitespace, etc.