get operation in fabric to use hostname - fabric

I am pretty new to fabric and trying to set up a deployment in the below fashion:
Main repo -> Local repo -> Deployment server
I want to
(1) push the build from the main repo to the local repo
(2) have the deployment server pull the available code from the local repo
I did the first step successfully using put, but I am not able to do the 2nd step using the get operation.
I tried using git pull, but I get an error stating it's not a git repo, and the same goes for hg pull.
Is there a way to combine the get operation with the host name? For example:
get('username@localrepo/local_repo_build_path', deployment_server_local_path)

If you want to use git pull, you'll most likely need the cd/lcd context managers to move into the repo's directory first. Also, you can't specify the username/host like that: the host is set in the @hosts or @roles definition for the task, and fabric picks that up automatically. Note too that get isn't going to pull down a full directory; you'd need the contrib rsync helper (rsync_project) for something like that.
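A minimal sketch of that approach, assuming Fabric 1.x; the host, user, and paths below are placeholders, not values from the question:

from fabric.api import cd, env, run
from fabric.contrib.project import rsync_project

# The deployment host is declared here (or with an @hosts/@roles decorator on the task);
# it is never passed as part of the path given to get()/put().
env.hosts = ['deployuser@deployment-server']      # placeholder host

DEPLOY_PATH = '/srv/app/current'                  # placeholder path on the deployment server

def pull_on_deployment_server():
    """Run git pull inside the existing checkout on the deployment server."""
    with cd(DEPLOY_PATH):                         # assumes this directory is already a git clone
        run('git pull')

def sync_build_dir():
    """Copy a whole local build directory to the deployment server,
    since get/put are not meant for full directory trees."""
    rsync_project(remote_dir=DEPLOY_PATH, local_dir='build/')   # 'build/' is a placeholder local dir

You would then run, for example, fab pull_on_deployment_server from the machine that holds the fabfile.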

Related

Can I track and commit without push?

What I'm trying to do is version a file without ever pushing it to GitHub. Is that something I can do?
Context:
For instance, my local dev database (Django) is SQLite3, which creates a file "db.sqlite3". I don't want this file on GitHub, but I'd like to be able to reset it to a previous version if I mess up the migrations, for example.
Maybe there are better ways than git; I'm open to suggestions.
Thank you!
As far as I'm aware, a file is either tracked (included in commits and pushes) or ignored. There isn't a middle ground, and I'm not sure there needs to be.
See the file lifecycle diagram from the Pro Git book: https://git-scm.com/book/en/v2/images/lifecycle.png
If you can isolate the file within its own directory, create a separate git repo for that directory and don't associate a remote with it.
Maintain git repo inside another git repo
Furthermore, you could schedule automatic git commits (using cron, for example) so you can do point-in-time recovery for that directory.
If you can't move the file, schedule a backup of it to another folder and commit the backup to a different local repo.
Copy SQLite database to another path
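A minimal sketch of that scheduled-backup idea in Python; the paths are placeholders, and it assumes ~/db_backups is a local-only repo you have already created with git init and no remote. A cron entry would simply run this script on whatever schedule you like:

#!/usr/bin/env python3
"""Copy db.sqlite3 into a local-only git repo and commit it if it changed (run from cron)."""
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

DB_FILE = Path.home() / "myproject" / "db.sqlite3"   # placeholder path to the dev database
BACKUP_REPO = Path.home() / "db_backups"             # placeholder local-only repo ('git init', no remote)

def main():
    shutil.copy2(DB_FILE, BACKUP_REPO / "db.sqlite3")
    subprocess.run(["git", "add", "db.sqlite3"], cwd=BACKUP_REPO, check=True)
    # 'git diff --cached --quiet' exits non-zero only when something is staged,
    # so the commit is skipped when the database has not changed since the last run.
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=BACKUP_REPO)
    if staged.returncode != 0:
        message = "db backup " + datetime.now().isoformat(timespec="seconds")
        subprocess.run(["git", "commit", "-m", message], cwd=BACKUP_REPO, check=True)

if __name__ == "__main__":
    main()

Rolling the database back is then just a matter of checking out an earlier commit of db.sqlite3 in the backup repo and copying the file back into the project.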

Git commit from a bisect

I have only ever used git add . for this project, but somewhere along the line I started getting the strange "modified content, untracked content" message on one of my subdirectories (called users). Other Stack Overflow answers didn't work for me. I used checkout to go back through previous commits, but the buggy/untracked subdirectory didn't change with the rest of the directory. I ended up making manual changes to it and then running git checkout master to make sure everything else was back where it started.
Git is saying that I'm bisecting, and it won't let me commit. I looked over Stack Overflow answers and tried some of the following commands:
git pull:
There is no tracking information for the current branch.
Please specify which branch you want to merge with.
git pull origin master:
fatal: 'origin' does not appear to be a git repository
git branch --set-upstream-to=origin/master master:
error: the requested upstream branch 'origin/master' does not exist
hint:
hint: If you are planning on basing your work on an upstream
hint: branch that already exists at the remote, you may need to
hint: run "git fetch" to retrieve it.
hint:
hint: If you are planning to push out a new local branch that
hint: will track its remote counterpart, you may want to use
hint: "git push -u" to set the upstream config as you push.
git pull --rebase:
There is no tracking information for the current branch.
Please specify which branch you want to rebase against.
Apologies if these commands are all over the place. I intend to really learn how git works soon, but right now I just want to commit the changes so I can deploy my project.
UPDATE: I used git bisect reset, created a new branch out of my detached HEAD, and then merged it with master. This kept the changes I made, so now I just need to figure out how to get users tracked again in my commits. git add users still isn't doing anything.
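For anyone who lands here with the same problem, one way to do roughly what that update describes is sketched below; the branch name rescue-from-bisect is a placeholder, and this is an illustration rather than the exact commands that were run:

import subprocess

def git(*args):
    """Run a git command and stop at the first failure."""
    subprocess.run(("git",) + args, check=True)

git("branch", "rescue-from-bisect")   # pin the current detached HEAD to a branch so it can't be lost
git("bisect", "reset")                # leave bisect mode and go back to the pre-bisect checkout
git("checkout", "master")
git("merge", "rescue-from-bisect")    # bring the rescued work into master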

Google Container Registry build trigger on folder change

I can set up a build trigger on GCR to build my Docker image every time my Git repository gets updated. However, I have a single repository with multiple folders, and a Dockerfile in each folder.
Ex:
my_app
-- service-1
     Dockerfile-1
-- service-2
     Dockerfile-2
How do I only build Dockerfile-1 when the service-1 folder gets updated?
This is a variation on this GitHub feature request -- in your case, differential behavior based on the changed files (folders) rather than the branch.
We are considering this feature as part of the development of support for more advanced workflow control and will post back on that GitHub issue when it becomes available.
The work-around available to you today is to use a bash script that conditionally builds (or doesn't) based on an inspection of the files changed in the $COMMIT_SHA that triggered the build. Note that the git builder can be used to get the list of changed files via git diff-tree --no-commit-id --name-only -r $COMMIT_SHA; a sketch of the idea follows below.
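The answer calls for a bash script; here is the same idea sketched in Python instead. service-1 and Dockerfile-1 come from the question, the gcr.io/my-project/service-1 image name is a placeholder, and COMMIT_SHA is assumed to be exported into the environment by the trigger:

#!/usr/bin/env python3
"""Build service-1's image only when the triggering commit touched files under service-1/."""
import os
import subprocess

commit_sha = os.environ["COMMIT_SHA"]   # assumed to be provided by the build trigger
changed_files = subprocess.run(
    ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", commit_sha],
    check=True, capture_output=True, text=True,
).stdout.splitlines()

if any(path.startswith("service-1/") for path in changed_files):
    subprocess.run(
        ["docker", "build",
         "-f", "service-1/Dockerfile-1",
         "-t", "gcr.io/my-project/service-1",   # placeholder image name
         "service-1"],
        check=True,
    )
else:
    print("service-1 unchanged in", commit_sha, "- skipping build")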

Using Bitbucket for existing project

I have an existing Django project on my local machine (in a virtualenvwrapper environment). How do I add it to Bitbucket?
Let say my django project is like this:
project
--manage.py
--apps
--db.sqlite3
...
So should I do 'git init' under 'project' directory?
Since it is developed inside the virtualenvwrapper environment, I think only the project files will be pushed to Bitbucket, is that right? If I want to develop the project on a different computer and want to pull the project files from Bitbucket, how should I do it? I mean, should I create another virtual environment on my new machine and install Django and the necessary packages before importing the files from Bitbucket?
I am new to git, so I don't know what is the best to do it.
So should I do 'git init' under 'project' directory?
Yes, but after that, don't add everything.
Create a .gitignore file first, where you declare the files that shouldn't be versioned (the ones that are generated).
Then add and commit: that updates your local repo.
You can then easily link it to an existing empty Bitbucket repo:
git remote add origin ssh://git@bitbucket.org/username/myproject.git
git push -u origin master # to push changes for the first time
Normally, you wouldn't store a binary like db.sqlite3 in a source repo.
But this blog post suggests a way to do so, through a sqlite-aware diff driver:
In a .gitattributes or .git/info/attributes file, give Git a filename pattern and the name of a diff driver, which we'll define next. In my case, I added:
db.sqlite3 diff=sqlite3
Then in .git/config or $HOME/.gitconfig, define the diff driver. Mine looks like:
[diff "sqlite3"]
textconv = dumpsqlite3
I chose to define an external dumpsqlite3 script, since this can be useful elsewhere.
It just dumps SQL to stdout for the filename given by its first argument:
#!/bin/sh
sqlite3 "$1" .dump

How can I get git in sync with remote master branch

I'm confused about why my production server seems to think it is ahead of my master branch. I use fabric to deploy, and it runs a git pull on the server from my master branch on github. I make no changes that I'm aware of on the production server itself, and I certainly do not make commits on the production server.
git status yields:
# On branch master
# Your branch is ahead of 'github/master' by 57 commits.
#
nothing to commit (working directory clean)
As far as I can tell, what's on the production server matches the master branch in my dev environment. At least the site acts the same, but I find this disconcerting. Any ideas on how to get the production repository on the same page as github master and stop it from giving me this message?
Edit (4/11/2013):
Just to clarify, when I use fabric to deploy, it runs:
git pull github master on the server.
My git status results written above in the original question, are on the server. I NEVER make commits on the server, only in my dev environment, which I push to github, which are in turn pulled to the server. That's why I'm confused. I certainly don't want to push anything from my production server to github, that's the opposite direction of my workflow.
2nd Edit (4/11/2013):
here's the fabfile git code:
def prepare_remote_deployment_area():
    """Prepare the production host with updated resources"""
    with cd(DEPLOYMENT_ROOT):
        run('git pull github master')
this is called from deploy:
def deploy():
    """Deploy an update (committed to GitHub) to production"""
    check_git_status()
    check_git_status(on_remote=True)
    prepare_remote_deployment_area()
    restart_uwsgi()
    restart_nginx()
    restart_celery()
Again, this all seems to work in the sense that the changes I make in my dev environment show up on production. I just don't understand why the production repository thinks it's so far ahead of github master.
Perhaps a plain git pull will do it:
git pull origin master
More info, from the git-pull man page:
NAME
git-pull - Fetch from and merge with another repository or a local
branch
DESCRIPTION
Incorporates changes from a remote repository into the current
branch. In its default mode, git pull is shorthand for git fetch
followed by git merge FETCH_HEAD.
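In other words, a pull is just a fetch followed by a merge. Here is a minimal sketch of splitting that step out explicitly in the fabfile above, assuming Fabric 1.x, a remote named github as in the question, and a placeholder DEPLOYMENT_ROOT. A full git fetch github also refreshes the github/master remote-tracking ref that the "ahead of 'github/master'" message is measured against, so if that ref was merely stale, the message should go away:

from fabric.api import cd, run

DEPLOYMENT_ROOT = '/srv/app'   # placeholder; the real value is defined in the fabfile

def resync_with_github():
    """Do the pull in two explicit halves and then check the result."""
    with cd(DEPLOYMENT_ROOT):
        run('git fetch github')          # updates refs/remotes/github/master, which 'git status' compares against
        run('git merge github/master')   # effectively what 'git pull github master' does
        run('git status')                # should no longer report being ahead if the tracking ref was just stale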