I use git to interface with an SVN repository. I have several git branches for the different projects I work on.
Now, whenever I switch from one branch to another using 'git checkout <branch>', all the compiled executables and object files from the previous branch are still there. What I would like is for switching from branch A to branch B to result in a tree with all the object files and binaries from the last time I worked on branch B.
Is there a way to handle this without creating multiple git repositories?
Update: I understand that executables and binaries should not end up in the repository. I'm a bit disappointed that all the branching stuff in git is useless to me, as it turns out I'll have to clone my proxy git repository for every branch I want to start. Something I already did for SVN and hoped to avoid with git. Of course, I don't have to, but not doing so would mean a fresh make nearly every time I switch branches (not fun).
What you want is a full context, not just the branch... which is generally out of scope for a version control tool. The best way to do that is to use multiple repositories.
Don't worry about the inefficiency of that though... Make your second repository a clone of the first. Git will automatically use links to avoid having multiple copies on disk.
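For example (a sketch; the path and branch names are just placeholders):

# clone from a local path: git hardlinks the object store instead of copying it
git clone /path/to/proxy-repo projB
cd projB
git checkout branchB

Since the clone is on the same filesystem, the object files are shared, so the second repository costs very little extra disk space.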
Here's a hack to give you what you want.
Since you have separate obj directories, you could modify your Makefiles to make the base location dynamic using something like this:
OBJBASE = `git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1\//'`
OBJDIR = "$(OBJBASE).obj"
# branch master: OBJBASE == "master/", OBJDIR == "master/.obj"
# non-git checkout: OBJBASE == "", OBJDIR == ".obj"
That will put your branch name into OBJBASE, which you can use to build your actual objdir location. I'll leave it to you to adapt it to your environment and make it friendly to non-git users of your Makefiles.
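Note that backticks only expand inside recipe commands; if you also want OBJDIR usable in target paths, a GNU-make-flavoured sketch of the same idea (rule and flags are illustrative) would be:

# evaluate the branch name once, at parse time
OBJBASE := $(shell git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1\//')
OBJDIR  := $(OBJBASE).obj

# compile each .c into the per-branch object directory
$(OBJDIR)/%.o: %.c
	@mkdir -p $(OBJDIR)
	$(CC) $(CFLAGS) -c $< -o $@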
This is not git or svn specific - you should have your compiler and other tools direct the output of intermediate files like .o files to directories that are not under version control.
To keep multiple checkouts of the same repo, you can use git --work-tree.
For example,
mkdir $BRANCH.d
GIT_INDEX_FILE=$BRANCH.index git --work-tree $BRANCH.d checkout $BRANCH
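For what it's worth, Git 2.5 and later ship a built-in command covering the same ground (path and branch name are placeholders):

git worktree add ../myproj-branchB branchB

Each branch then gets its own working directory and index while sharing the one repository.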
You could set your IDE compiler to generate all private temporary files (.class and so on) in <output>\branchName\....
By configuring your compilation settings branch by branch, you can include the name of the branch in the output directory path.
That way, even though private files remain when you git checkout, your project on the new branch is ready to go.
In the contrib/ directory of the git distribution, there is a script called git-new-workdir that allows you to check out multiple branches in different directories without cloning your repository.
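Its usage is along these lines (paths and branch name are placeholders):

git-new-workdir /path/to/repo /path/to/new-workdir branchB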
Those files aren't tracked by Git or Subversion, so they're left alone on the assumption that they are of some use to you.
I just do my checkouts in different directories. Saves me the trouble of doing cleanup.
A make clean should not be necessary, because files that differ between branches are checked out with a fresh timestamp.
This means that if your Makefile is correct, only those object files, libraries and executables that actually changed because of the checkout are compiled again - which is exactly the reason a Makefile is there in the first place.
The exception is if you need to switch compiler options or even compilers in different branches. In that case probably git-new-workdir is the best solution.
If the compiled executables are files that have been checked in
then git stash solves the problem.
[compile]
git stash save "first branch"
git checkout other_branch
[Fiddle with your code]
[compile]
git stash save "second branch"
git checkout first_branch
git stash apply [whatever index your "first branch" stash has]
# alternatively git stash pop [whatever index...]
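To find the right index, git stash list shows every stash together with the message you gave it, for example:

$ git stash list
stash@{0}: On other_branch: second branch
stash@{1}: On first_branch: first branch
$ git stash apply stash@{1}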
If the compiled executables are files that have not and will not be checked in
then simply add them to .gitignore
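For example, a minimal .gitignore for typical build output (the exact patterns depend on your toolchain; my_program stands in for your executable's name):

*.o
*.obj
build/
my_program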
Context:
MainProject depends on a header-only dependency Module.
Both MainProject and Module are:
still under development and subject to modifications
modern CMake projects
independent repositories on Github
controlled by me
Problem:
A few months ago, I tried without success to manage this dependency using CMake and versioning. Pressed by deadlines, I ended up opting for the "simplest solution": copy-pasting the Module headers into MainProject. Developing MainProject led me to add features and modify interfaces in the local copy of Module. Now there are two diverging versions of Module.
How it could have worked
It could have worked if Module were very stable (copy-pasting headers is actually the solution I opted for with dependencies that are stable and that I don't own).
I could have modified/committed/pushed/re-copied/re-pasted the Module repository for every modification I wanted to bring. But of course I did not, because ... time and deadlines.
Question
Now I would like to step back from this solution (i.e., reflect the modifications back onto the initial Module project) and choose a better dependency management strategy.
What I can think of
create a new branch update on the Module git project, copy-paste the modified version, commit it, and use git diff to check the differences with branch master
use one or a combination of these three approaches (but I don't know how to choose):
git submodules
git subtrees
C++20 modules
Your Module project literally is a Git submodule: an independently-updated history, from which your MainProject builds use specific revisions.
Developing MainProject led me to add features and modify interfaces in the local copy of Module. Now there are two diverging versions of Module
Quickest: from a clean checkout of your current MainProject revision,
git tag slice $(git commit-tree -m slice @:path/to/Module)
git rm -rf path/to/Module
git submodule add u://r/l path/to/Module
git push path/to/Module slice
cd path/to/Module
git read-tree -um slice
git commit -m 'Module content from MainProject'
and now you've got your content and ancestry looking serviceable, and you can add labels and push it wherever it needs to go, e.g. git checkout -b MainProjectModule; git push -u origin MainProjectModule
If you've got a long history of Module changes in your main project that you want to preserve in the Module history proper, that's doable, and even fairly efficient, but you'll need to adapt some history surgery to achieve it: instead of tagging a nonce commit, tag the submodule commit that command produces, and merge that rather than just adding its tip content as a new commit.
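One way to do that surgery - not the only one - is git subtree split, which rewrites just the subdirectory's commits into a standalone history (branch names here are illustrative):

# extract the history of path/to/Module onto its own branch
git subtree split --prefix=path/to/Module -b module-history
# push it to the Module repository
git push u://r/l module-history:MainProjectModule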
I have a C# solution with projects that are not all in folders under the *.sln directory. If I make a change in a file from such a lateral project and then try to commit the solution, these changes are not listed - only those made in projects under the solution directory are.
My experience is that the property to see differences from all projects, independently of their physical locations, is sometimes present - and sometimes not, and I do not see why. Where is the configuration data for VisualHG in VS 2017 that tells it which projects to consider (when pushing 'commit solution')?
I do not know how to find these settings either (in my experience, what you describe really does happen), but I can propose a workaround.
Namely, you can simply create a batch file (something like my_batch_commit.bat) in which you list all local folders with your partial HG repositories, e.g.:
cd C:\MyProjects\Project_A
thg commit
cd C:\MyProjects\Project_B
thg commit
cd C:\MyProjects\Project_C
thg commit
REM ...
the folders are those where the corresponding .hg directories lie. thg.exe is the Tortoise HG GUI (it must be reachable via PATH, but if TortoiseHG is correctly installed, this is the case automatically).
If you now execute the batch file, you will be presented with a number of THG instances, one for each HG repository, so you can deal with them one after another.
Having used CMake, I've become used to out-of-source builds, which are encouraged with CMake. How can out-of-source builds be done with Cargo?
Using in-source-builds again feels like a step backwards:
Development tools need to be configured to ignore paths - sometimes multiple plugins and development tools, especially when using VIM or Emacs!
Some tools can't easily be configured to hide build files. While dotfiles are typically hidden, they will still show Cargo.lock and target/, worse still, recursively exposing their contents.
Deleting untracked files to remove everything outside of version control, typically to clean up editor temp files or some test output, can backfire if you forgot to add a new file to version control and don't check the file list carefully before deleting.
Dependencies are downloaded into your source code path, sometimes adding *.rs files in the target directory as part of building indirect deps, so operating on all *.rs files may accidentally pick up files which aren't in a hidden directory and so might not be ignored even after development tools have been configured.
While it's possible to work around all these issues, I'd rather just have an external build path and keep the source directory pristine.
You can specify the directory of the target/ folder either via configuration file (key build.target-dir) or environment variable (CARGO_TARGET_DIR). Here is an example using a configuration file:
Suppose you want to have a directory ~/work/ in which you want to save the Cargo project (~/work/foo/) and next to it the target directory (~/work/my-target/).
$ cd ~/work
$ cargo new --bin foo
$ mkdir .cargo
$ $EDITOR .cargo/config
Then insert the following into the configuration file:
[build]
target-dir = "./my-target"
If you then build in your normal Cargo project directory:
$ cd foo
$ cargo build
You will notice that there is no target/ dir, but everything is in ~/work/my-target/.
However, the Cargo.lock is still saved inside the Cargo project directory, but that kinda makes sense. For executables, you should check the Cargo.lock file into your git! For libraries, you shouldn't. I guess having to ignore one file is better than having to ignore an entire folder.
Lastly, there are a few caveats to changing the target-dir, which are listed in the PR which introduced the feature.
While useful, manually setting this up isn't all that convenient. I wanted to be able to build multiple crates within a source tree, having all of them out-of-source - something that a ../target-dir configuration option wouldn't achieve.
Helper utility for convenient out-of-source builds
Using the environment variable, I've written a small utility to wrap cargo so that it automatically builds out-of-source, supporting crates both at the top level and in a subdirectory of the source tree.
Thanks to Lukas for pointing out CARGO_TARGET_DIR and target-dir configuration option.
What I really wanted was a dynamic CARGO_TARGET_DIR that changes relative to where I am.
This bash alias puts all builds in a mirrored directory structure, e.g. instead of putting target into ~/mydir/myproj it puts in into ~/rustbuild/mydir/myproj
alias cargo='CARGO_TARGET_DIR=$(echo $PWD | sed "s|$HOME|$HOME/rustbuild|g") cargo'
You could also make your rustbuild directory hidden.
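With the alias in place, a build from a project under your home directory lands in the mirrored tree, e.g.:

$ cd ~/mydir/myproj
$ cargo build
$ ls ~/rustbuild/mydir/myproj/debug/   # compiled artifacts end up here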
I'm the author of a utility that makes compressing projects using zip a bit easier, especially when you have to compress regularly, such as for updating projects submitted to an application store (like Chrome's Web Store).
I'm attempting to make quite a few improvements, but have run into an issue, described below.
A Quick Overview
My utility's command format is similar to command OPTIONS DEST DIR1 {DIR2 DIR3 DIR4...}. It works by running zip -r DEST.zip DIR1; a fairly simple process. The benefit to my utility, however, is the ability to use a predetermined file (think .gitignore) to ignore specific files/directories, or files/directories which match a pattern.
It's pretty simple -- if the "ignorefile" exists in a target directory (DIR1, DIR2, DIR3, etc), my utility will add exclusions to the zip -r DEST.zip DIR1 command using the pattern -x some_file or -x some_dir/*.
The Issue
I am running into an issue with directory exclusion, however, and I can't quite figure out why (probably because I am still quite the sh novice). I'll run through some examples:
Let's say that I want to ignore two things in my project directory: .git/* and .gitignore. Running command foo.zip project_dir builds the following command:
zip -r foo.zip project_dir -x project_dir/.git/\* -x project_dir/.gitignore
Woohoo! Success! Well... not quite.
In this example, .gitignore is not added to the compressed output file, foo.zip. The directory .git/* and all of its subdirectories (and files), however, are added to the compressed output file.
Manually running the command:
zip -r foo.zip project_dir -x project_dir/.git/\* -x project_dir/.gitignore
Works as expected, of course, so naturally I am pretty puzzled as to why the identical, but dynamically-built, command does not work.
Attempted Resolutions
I have attempted a few different methods of resolving this to no avail:
Removing -x project_dir/.git/\* from the command, and instead adding each subdirectory and file within that directory, such as -x project_dir/.git/config -x project_dir/.git/HEAD, etc. (including children of subdirectories)
Removing the backslash before the asterisk, so that the resulting exclusion option within the command is -x project_dir/.git/*
Bashing my head on the keyboard in angst (I'm really surprised this didn't work, it usually does)
Some notes
My utility uses /bin/sh; I would prefer to keep it that way for maximum compatibility.
I am aware of the git archive feature -- my use of .git/* and .gitignore in the above example is simply as an example; my utility is not dependent on git nor is used exclusively for projects which are git repositories.
I suspected the problem would be in the evaluation of the generated command, since you said the same command worked when executed directly.
So, as the comment section says, I think you already found the correct solution. This happens because when you expand a variable holding a whole command, the shell does not re-parse quotes or escapes: globs can be expanded on the spot instead of being passed to the command, backslashes are handed over literally, and arguments may get split in the wrong places, depending on the situation.
Yes, in that case:
eval "$COMMAND"
is the way to go.
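If you'd rather avoid eval entirely, a POSIX-sh sketch that builds the argument list with set -- instead (the ignore-file name and the DEST/DIR variables are hypothetical):

# start with the fixed arguments
set -- -r "$DEST.zip" "$DIR"
# append one -x option per pattern in the ignore file
while read -r pattern; do
    set -- "$@" -x "$DIR/$pattern"
done < "$DIR/.zipignore"
zip "$@"

This keeps each pattern as a single argument, so no backslash-escaping or re-evaluation is needed.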
I am wondering if anyone has experience working on Django projects in a small team (3 in my case), using Git source control management.
The project is hosted on a development server, which is why I am having such a problem. Developers can't see if their code works until they commit their changes to their local repository, then push those changes to the server. Even then, however, git doesn't seem to update the files inside the directory holding the repository on the server - probably because it only stores the changes to save space.
We are beginning to tread on each other's toes when working on this project, so some kind of version control is required - but I just can't figure out a solution.
If anyone has overcome a similar problem I'd love to hear how it can be done.
When pushing to a remote repository, best results are when the remote repository is a "bare" repository with no working directory. It sounds like you have a working directory on the remote repository, which will not be updated by Git when doing a push.
For your situation, I would recommend that developers have their own testing environment that they can test against locally before having to push their code anywhere else. Having one central location where everybody needs to push their work before they can even try it will lead to much pain and suffering.
For deployment, I would recommend pushing to a central "bare" repository, then having a process where the deployment server pulls the latest code from the central repository into its working directory.
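A minimal sketch of that layout (paths are assumptions):

# on the server: the central repository, bare
git init --bare /srv/git/project.git

# each developer pushes finished work there (assumes origin points at it)
git push origin master

# deployment: a working clone on the server pulls the latest code
cd /srv/www/project
git pull origin master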
When you push to a (shared) git repository, it doesn't update that repository's working files, basically because the working files might be dirty, in which case you'd have to merge - and for that you need full shell access there, which may not be the case in general.
If you want to have the most recent "master" of the shared repo checked out somewhere, you can arrange for that by writing a post-update hook. I'll give an example of one below that I use to check out the "ui" subdirectory and make it available to Apache.
However, I will say that I think your process could be improved. Developers generally need personal servers that they can test on before pushing to a shared point: otherwise that shared repo is likely to be hideously unreliable. Consider: if I push a change to it and it doesn't work, is that my change that broke it, or a side-effect of someone else's?
OK, I use this as a post-update hook:
#!/bin/sh
# Should be run from a Git repository, with a set of refs to update from on the command line.
# This is the post-update hook convention.

info() {
    echo "post-update: $*"
}

die() {
    echo "post-update: $*" >&2
    exit 1
}

output_dir=..

for refname in "$@"; do
    case $refname in
    refs/heads/master)
        new_tree_id=$(git rev-parse $refname:ui)
        new_dir="$output_dir/tree-$new_tree_id"
        if [ ! -d "$new_dir" ]; then
            info "Checking out UI"
            mkdir "$new_dir"
            git archive --format=tar $new_tree_id | ( cd $new_dir && tar xf - )
        fi
        prev_link_target=$(readlink $output_dir/current)
        if [ -n "$prev_link_target" -a "$prev_link_target" = "tree-$new_tree_id" ]; then
            info "UI unchanged"
        else
            rm -f $output_dir/current
            ln -snf "tree-$new_tree_id" "$output_dir/current"
            info "UI updated"
            title=$(git show --quiet --pretty="format:%s" "$refname" | \
                sed -e 's/[^A-Za-z][^A-Za-z]*/_/g')
            date=$(git show --quiet --pretty="format:%ci" "$refname" | \
                sed -e 's/\([0-9]*\)-\([0-9]*\)-\([0-9]*\) \([0-9]*\):\([0-9]*\):\([0-9]*\) +0000/\1\2\3T\4\5\6Z/')
            ln -s "tree-$new_tree_id" "$output_dir/${date}__${title}"
        fi
        ;;
    esac
done
As mentioned, this just checks out the "ui" subdirectory. That's the ":ui" bit setting new_tree_id. Just take the ":ui" out (or change to "^{tree}") to check out everything.
Checkouts go in the directory containing the git repo, controlled by output_dir. The script expects to be running inside the git repo (which in turn is expected to be bare): this isn't very clean.
Checkouts are put into "tree-XXXX" directories and a "current" symlink managed to point to the most recent. This makes the change from one to another atomic, although it's unlikely to take so long that it matters. It also means reverts reuse the old files. And it also means it chews up disk space as you keep pushing revisions...
I had the same problem, also working with Django.
I agree with testing locally prior to deployment, as already mentioned.
You can then push the local version to a new branch on the server, and merge this branch into master. After this you'll see the updated files.
If you accidentally pushed to the master branch, you can do a git reset --hard. However, all changes not committed in the current working branch will be lost, so take care.