Can I track and commit without push? - django

What I'm trying to do is version a file without ever pushing it to GitHub. Is that something I can do?
Context:
For instance, my local dev database (Django) is SQLite3, which lives in a file called "db.sqlite3". I don't want this file on GitHub, but I'd like to be able to reset it to a previous version if I mess up the migrations, for example.
Maybe there are better ways than git, I'm open to suggestions.
Thank you!

As far as I’m aware a file is either tracked (included in commits and pushes) or ignored. There isn’t a middle ground and I’m not sure there needs to be.
See the file status lifecycle diagram: https://git-scm.com/book/en/v2/images/lifecycle.png
If you can isolate the file within its own directory, create a separate git repo for that directory and don't associate a remote with it.
Maintain git repo inside another git repo
Furthermore, you could schedule automatic git commits (using cron, for example) so you can do point-in-time recovery for that directory.
If you can't move the file, schedule a backup of it to another folder and commit the backup to a different local repo (see the sketch below).
Copy SQLite database to another path
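As a rough sketch of that idea (paths and schedule are only assumptions): keep a copy of db.sqlite3 in its own backup directory, make that directory a repo with no remote, and let cron commit a snapshot periodically.

cd ~/myproject/db_backups        # assumed directory that holds only database snapshots
git init                         # no "git remote add", so nothing can ever be pushed
cp ~/myproject/db.sqlite3 .
git add db.sqlite3
git commit -m "initial snapshot"

# crontab entry (crontab -e): snapshot the database every hour
0 * * * * cd ~/myproject/db_backups && cp ~/myproject/db.sqlite3 . && git add db.sqlite3 && git commit -m "hourly snapshot" >/dev/null 2>&1

You can then restore any snapshot with git checkout <commit> -- db.sqlite3 and copy it back over the live database.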

Related

What's the best practice for including a .env file in a jenkins build?

Here is my situation:
I have a Django app which depends on config values stored in a .env file. This .env file is kept out of source control to keep sensitive info private. The Django app is deployed in a Docker container, and I have Jenkins set up to rebuild the container whenever changes are checked into our git repository. The build will fail unless there is a .env file present in the build environment. What is the best way to include that file?
I currently have Jenkins set up to execute a shell command that writes the file to the build environment, but I can't help but feel that is sub-optimal, security-wise. What would be a better way to do this?
The answer we have come up with is to store the file on S3 and use the AWS CLI to fetch it at build time. Since the build is destined to be uploaded to EC2 anyway, it makes sense to use the AWS credentials for both operations.
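For what it's worth, a minimal sketch of that approach, assuming a bucket name and key that are not from the original post, run as a Jenkins build step before the Docker build:

# the Jenkins node's AWS credentials/IAM role must allow s3:GetObject on this key
aws s3 cp s3://my-config-bucket/myapp/.env .env
docker build -t myapp .

The .env then exists in the workspace only for the duration of the build.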
Would including the file in source control, with access granted only to you/authorised users, break your privacy policy?
Otherwise you can try to always keep the file in the Jenkins workspace directory and never delete it when cleaning the workspace.

Using Bitbucket for existing project

I have an existing Django project on my local machine (in virtualenvwrapper). How do I add it to Bitbucket?
Let's say my Django project is like this:
project
--manage.py
--apps
--db.sqlite3
...
So should I do 'git init' under the 'project' directory?
Since it is developed in virtualenvwrapper, I think only the project files will be pushed to Bitbucket, is that right? If I want to develop the project on a different computer and pull the project files from Bitbucket, how should I do it? I mean, should I create another virtual environment on my new machine and install Django and the necessary packages before importing the files from Bitbucket?
I am new to git, so I don't know the best way to do it.
So should I do 'git init' under the 'project' directory?
Yes, but after that, don't add everything.
Create a .gitignore file first, where you declare the files that shouldn't be versioned (the ones that are generated).
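For a typical Django project, that .gitignore might look something like this (a sketch; the venv/ name is just an assumption about your layout):

*.pyc
__pycache__/
db.sqlite3
.env
venv/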
Then add and commit: that updates a local repo.
Then you can easily link it to an existing empty Bitbucket repo:
git remote add origin ssh://git@bitbucket.org/username/myproject.git
git push -u origin master # to push changes for the first time
Normally, you wouldn't store a binary like db.sqlite3 in a source repo.
But this blog post suggests a way to do so, using a custom diff driver that dumps the database as SQL:
In a .gitattributes or .git/info/attributes file, give Git a filename pattern and the name of a diff driver, which we'll define next. In my case, I added:
db.sqlite3 diff=sqlite3
Then in .git/config or $HOME/.gitconfig, define the diff driver. Mine looks like:
[diff "sqlite3"]
textconv = dumpsqlite3
I chose to define an external dumpsqlite3 script, since this can be useful elsewhere.
It just dumps SQL to stdout for the filename given by its first argument:
#!/bin/sh
# dump the SQLite database given as the first argument as SQL on stdout
sqlite3 "$1" .dump
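For the diff driver to kick in, the dumpsqlite3 script must be executable and on your PATH; the location below is only an assumption:

chmod +x ~/bin/dumpsqlite3     # assuming you saved the script there and ~/bin is on your PATH
git diff db.sqlite3            # changes now show up as SQL statements instead of "Binary files differ"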

git (python/django repo) merge challenge (original repo copied to another repo) now need to merge it back

I have a git repo at repo1URL.git. It's a production repo and I didn't want to mess with it (in hindsight it would have been easier to just create a branch in that repo!).
It's a Django app that is very poorly structured (with third-party libs added as folders, rather than via virtualenv and pip).
I made a physical copy and deleted the .git folder of the copy, created a new repo at repo1URL-changes.git, and added it as a new remote in the copy. I cleaned things up by removing unnecessary folders, etc., and pushed it to the new repo.
Now I'd like to merge those changes into the main production repo. I found the following question: how to import existing git repo into another
I followed the instructions in the selected answer. But the result of:
git merge ZZZ
is rather a nightmare, with conflicts even on .png files and almost every other file.
What's the best way to go about this?

How do I push git commits in different Django directories to GitHub without having to keep doing pull merges?

I am pushing one set of commits to GitHub from my Django app directory and another set of commits from my templates directory. Because each is a different directory, I had to set up a git remote in each one so that it knows to push to GitHub.
Here's the problem: Each set of commits has its own "master" branch that isn't aware of the other. First, I push the commits from the Django app directory's master branch to GitHub's master branch and that works fine. But then, if I push the set of commits from the template directory's master branch to the same GitHub master branch, I get an error saying that I need to do a merge before I can push. The push goes through after I do the merge, but it's annoying to have to keep merging the GitHub master to the different master branches of my different Django directories.
My question is: is it possible to set one master branch for all the different Django directories I need to work with, so that I can just do one push for files in all my directories? It seems that I need to initialize a .git repo for each directory I want to work with, which consequently gives each directory its own master branch.
That is very strange. I use a single master branch for Django projects that are hosted remotely on GitHub.
To have a single remote master branch, you need a single repo whose .git folder sits in the topmost directory of the project.
For example, my project name is: BigCoolApp
If you do: cd /user/BigCoolApp
You should find a .git folder in there. It will be among all the other folders. If done correctly, you will now have a single master branch for your Django project.
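A rough sketch of consolidating into one repo (directory names and the remote URL are assumptions, and note this discards the separate histories of the nested repos):

cd /user/BigCoolApp
rm -rf app/.git templates/.git      # remove the per-directory repos (back them up first if you need their history)
git init
git add .
git commit -m "Track the whole project in a single repo"
git remote add origin git@github.com:username/BigCoolApp.git
git push -u origin master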

get operation in fabric to use hostname

I am pretty new to Fabric and am trying to set up a deployment in the following fashion:
Main repo -> Local repo -> Deployment server
I want to:
(1) push the build from the main repo to the local repo
(2) have the deployment server pull the available code from the local repo
I did the first step successfully using put, but I am not able to do the 2nd step using the get operation.
I tried using git pull but I get an error stating it's not a git repo, and the same goes for hg pull.
Is there a way to combine the get operation with the host name? For example:
get('username@localrepo/local_repo_build_path', deployment_server_local_path)
If you want to use git pull, you most likely have to use the cd/lcd context managers to move into the repo directory. Also, you can't specify the username/host like that: it is set in the @hosts or @roles definition for the task (or in env.hosts), and Fabric picks it up automatically. Note as well that get isn't going to pull down a full directory; you'd need the contrib rsync helper (rsync_project) for something like that.
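A minimal fabfile sketch along those lines (Fabric 1.x API; the host name and paths are assumptions, not taken from the question):

# fabfile.py
from fabric.api import env, cd, run
from fabric.contrib.project import rsync_project

env.hosts = ['deploy@deployment-server']      # assumed deployment host

def pull_build():
    # run `git pull` inside the repo directory on the deployment server
    with cd('/srv/myapp'):                    # assumed path of the repo on that server
        run('git pull origin master')

def push_build_dir():
    # get/put operate on single files; to move a whole directory use rsync_project instead
    rsync_project(remote_dir='/srv/myapp/', local_dir='build/')

Run it with "fab pull_build" (or "fab push_build_dir") once the host is reachable over SSH.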