Creating a submodule in a git project - django

I have a Django project that's currently hosted on GitHub as a private repo. I'm looking to move many useful parts of it into an open-source project. I think I need to use a 'submodule', but unfortunately I have no idea how to work with these.
Please can someone help me :)
Joe

move many useful parts of it into an open-source project.
That means extracting one or several directories (and their associated history) into separate independent Git repositories, each one pushed to a public GitHub repo.
To extract a sub-directory from a Git repo, see the filter-branch command in this SO question
(see also: "How to extract a git subdirectory and make a submodule out of it?" and "Detach subdirectory into separate Git repository")
To reference those new repositories, add them back as submodules in your original private repo, so you can still see them directly from your current Django project: see the true nature of submodules.
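A minimal sketch of that extraction, assuming the useful code lives in a directory called useful_app (all names and URLs here are hypothetical):

# work on a fresh clone, since filter-branch rewrites history in place
git clone git@github.com:joe/private-django.git useful-app-extract
cd useful-app-extract
# keep only useful_app (and its history) and make it the new root
git filter-branch --prune-empty --subdirectory-filter useful_app master
# point the clone at a new public repo and publish it
git remote set-url origin git@github.com:joe/useful-app.git
git push origin master

Back in the private repo, you would then remove the directory and re-attach the public repo as a submodule:

git rm -r useful_app
git submodule add git@github.com:joe/useful-app.git useful_app
git commit -m "replace useful_app with a submodule"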

Related

Why does PlatformIO not download the latest commit of a Git repo when the URL is specified as a lib_deps entry?

Aim:
Get PlatformIO to track the latest commit of a personal/private C++ Git repo, to be used in a PlatformIO project as a library.
Issue:
When trying to build the project, an old commit of the repo is pulled and used by the project. This version of the repo has bugs which prevent compilation.
Things I have already tried:
Deleting the PlatformIO cache
Specifying the exact commit which should be used for the project (pinning syntax shown below)
Specifying exact branch which should be used for the project
May be relevant
One Git repo is dependent on another but is not explicitly linked as something like a submodule (this is to be addressed, but it is only expected to be an issue once the code that is pulled is the correct version).
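For reference, PlatformIO normally lets you pin a branch, tag or commit by appending a # suffix to the repository URL in platformio.ini; a sketch (the URL, commit hash and environment are hypothetical):

[env:myboard]
platform = atmelavr
framework = arduino
; pin one specific commit of the library
lib_deps =
    https://github.com/me/my-private-lib.git#a1b2c3d4

If an old commit still shows up after changing this, deleting the project's .pio/libdeps folder forces a fresh fetch on the next build.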

Do I need to stage the 'vcpkg' directory to git

I am new to the C++ development environment, coming from a JavaScript dev environment. Compared to JavaScript package management, C++ is complicated. I found vcpkg, which is like npm for C++.
The question: when it comes to vcpkg, do I need to stage all the files contained in the /vcpkg directory (to git), or just add it to .gitignore?
The project directory: the /vcpkg folder contains a lot of files; that's why I asked.
You shouldn't upload the dependencies to your repository. The correct thing to do is to use vcpkg in manifest mode. This way, a vcpkg.json manifest is used to keep track of your dependencies. Every time you install or remove a package, vcpkg.json is automatically updated, eliminating the need to upload your dependencies to your repository. You only need to upload vcpkg.json, which is much faster. It also has many more advantages; take a look at https://vcpkg.readthedocs.io/en/latest/users/manifests/
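A minimal manifest sketch, assuming manifest mode is enabled (the project name and package names are just examples):

{
  "name": "my-project",
  "version": "0.1.0",
  "dependencies": [
    "fmt",
    "nlohmann-json"
  ]
}

With this vcpkg.json at the project root, you would stage the manifest itself and ignore the installed tree, e.g. by adding vcpkg_installed/ to .gitignore.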

Git - How to commit a local repository to a subfolder of another local repository?

I have a Django project that I started some time ago and was hosting at Bitbucket. Now I need to host it at OpenShift, and the way that works is that they provide you with a Git repository, and every time you push they deploy automatically. The problem is that the repository comes with several top-level folders for configuration and setup, and the actual Django project must live inside a subfolder called wsig/openshift.
My question is: how can I commit my changes from my local Django repository to the wsig/openshift subfolder of my local OpenShift repository? I intend to continue to develop on the Bitbucket/local repository.
You are probably looking for submodules. From the docs:
Submodules allow foreign repositories to be embedded within a
dedicated subdirectory of the source tree, always pointed at a
particular commit.
So you would have the bitbucket repository embedded as a separate repository in a subfolder of the openshift repository by running
git submodule add path_to_bitbucket folder/in/openshift
in the openshift repository.
You will have to run an occasional git submodule update to keep openshift up to date, but you probably already expected extra work of that sort.
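A sketch of that workflow, using the paths from the question (the Bitbucket URL is hypothetical; OpenShift will need to be able to reach it when it deploys):

# inside the openshift repository
git submodule add https://bitbucket.org/joe/django-project.git wsig/openshift
git commit -m "embed the django project as a submodule"

# later, to pick up new commits from bitbucket:
cd wsig/openshift
git pull origin master          # advance the submodule checkout
cd ../..
git add wsig/openshift          # record the new commit pointer
git commit -m "update django submodule"
git push                        # triggers the openshift deploy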
I had the exact same problem too! It is highly annoying, but I took another road:
Why don't you create a Python 2.7 project from scratch? The current Django structure is honestly annoying. The way I did it was:
1. Create an OpenShift project, which was in that annoying structure.
2. Copy and preserve (in my local FS, not in OpenShift) a copy of settings.py and wsgi.py.
3. Discard that project, and start a bare Python 2.7 project.
4. Check it out to my local FS, and create a Django project in my local FS.
5. Replace the contents of wsgi.py and settings.py accordingly, adapting any possibly misplaced paths - it's easier than it looks (see the sketch below).
6. Commit/push (this new structure).
You will do point 4 differently: you will also check out the remote branch (bitbucket), merge it into the openshift branch, change those files accordingly as in point 5, and push the openshift branch.
There you have a brand-new project matching your structure (perhaps you want to configure both remotes in your environment: openshift and bitbucket).
That's the way I did it, and honestly I have nothing to regret.
Off-topic, but perhaps useful since you're using Django: this is especially important if you want to also use (gunicorn|uwsgi)+nginx (with a custom cartridge, which provides nginx and Python rather than Apache), and so cannot use the default Django cartridge.
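For step 5, the adapted wsgi.py usually ends up close to Django's standard one; a sketch, assuming a project module named myproject (very old Django versions use django.core.handlers.wsgi instead):

import os
from django.core.wsgi import get_wsgi_application

# point Django at the settings module preserved in step 2
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
application = get_wsgi_application()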

AWS, OpsWorks and Chef dependencies: what's the cleanest solution?

I've got a Chef project that, locally with Vagrant, works really nicely. I'm using librarian-chef, which means I can specify my dependencies in a Cheffile like this:
site 'http://community.opscode.com/api/v1'
cookbook 'jenkins'
When I then run librarian-chef install, it pulls down jenkins and all the cookbooks it depends on into a cookbooks directory.
There's also another directory, site-cookbooks, which is where I'm writing all of my own custom cookbooks and recipes.
In the Vagrantfile, you can then tell it to look at two different paths for cookbooks:
config.vm.provision "chef_solo" do |chef|
  chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
  # snip
end
This works perfectly when I run vagrant up. However, it doesn't seem to play nicely with AWS OpsWorks – as this requires all cookbooks to be at the top level of the Chef repository.
My question then, is: what's the nicest way to use Chef with OpsWorks without including all of the dependencies at the top level of my repository?
OpsWorks doesn't play nicely with a number of tools created for Chef. I tried using it not long after it came out and gave up on it (OpsWorks was using Chef 0.9 at the time).
I suggest you either move from OpsWorks to Enterprise Chef, or try the following:
1. Create a separate repo for your cookbooks
Keep all cookbooks in a separate repo. If you want them to be part of a larger repository, you can include them as a git submodule. Git submodules are generally a bad thing, but cookbooks are a separate entity that can live independently of the rest of your project, so it actually works quite well in this case.
To add a cookbooks repository inside another repo, use:
git submodule add git://github.com/my/cookbooks.git ./cookbooks
2. Keep your cookbooks together with community cookbooks
You can either clone the cookbooks into your repository, add them as submodules, or try using librarian-chef/Berkshelf to manage them. You could try this method; it should work with librarian-chef: https://sethvargo.com/using-amazon-opsworks-with-berkshelf/
More recently, OpsWorks added support for Chef 11.10 and Berkshelf, giving you a much nicer way of managing cookbook dependencies.
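For the Berkshelf route, a minimal Berksfile sketch might look like this (the site cookbook name and path are hypothetical):

source 'https://supermarket.chef.io'

# community cookbook, resolved from the supermarket
cookbook 'jenkins'
# your own cookbook, kept alongside in the same repo
cookbook 'my-app', path: 'site-cookbooks/my-app'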

How do I clone a Mercurial repository into a directory that already exists?

I have a client's Django project that I'm developing locally, using Mercurial for version control. I push my local repository to my personal remote server (where I keep all my projects), and then when I come to deploy it (on whichever web server) I clone that repository there from my personal server.
This works fine on most servers (where I have total control), but I have a few projects that I'm deploying onto WebFaction. WebFaction is great, but a little unusual with its setup, as I first need to declare the Django project as an 'application' through their control panel. This automatically creates a few things, such as an 'apache2' folder, a 'myproject' folder, etc. It's this same folder, though, into which I want to clone the repository from my personal remote server. Doing the usual hg clone command just doesn't work, as it says the destination folder already exists. There isn't much I can do about the contents of this folder really, so I need to work around this.
I'm not an expert at Mercurial, and the only way I could work it out was to clone into another folder and then move all the contents (including the .hg) into the actual folder I want. This seems silly though...
I'm using Mercurial v1.6.2 (installed through easy_install). Could anyone share some light on this?
Many thanks.
Copying just the .hg dir definitely works, but you could also do a hg init and then hg pull http://remote/repo. A repo that has just been initialized has only the 000000000000000 changeset, so you can pull from any repo without getting the "unrelated repos" warning. This is essentially the same as hg clone --pull with a manual init.
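A minimal sketch of that approach, run from inside the folder WebFaction created (the path and URL are hypothetical):

cd ~/webapps/myproject
hg init                                      # empty repo: only the 000000000000000 changeset
hg pull http://my.personal.server/myproject  # pull everything from the remote repo
hg update                                    # populate the working directory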
You can copy just the .hg folder, then revert or update to tip. E.g.:
cp -a src/.hg dest/
cd dest
hg up -C
You can either move the folder after the fact, or you can just make a symlink to it. My WebFaction directory is actually symlinked, so I know it works fine.
In the main, it looks like you might be trying to use Mercurial as an installation manager, which is certainly not its design goal.
If I am reading you correctly, part of your source repository should be something like a make deploy target which puts the files into their proper places. Put another way, having a repository clone (with .hg) in your deployment directory seems odd and trouble-prone.