Developing two projects with bower - build

I have Project A (a full blown web app), and Project B - a component project used by Project A.
Project A uses bower to define a dependency on Project B.
The goal in a nutshell - I want to be able to develop both A and B simultaneously, so that when the sources of B change, A will be notified, and these changes will be reflected in the browser right away.
The problem seems to be that I can't find a convenient way to define the dependency of A on B's sources, rather than its wrapped-up artifacts.
In A's index.html, I need to somehow include the very final artifacts provided by B. Namely, it would be something like project_b.js and project_b.css.
These artifacts are the final products of B - they are the result of building the sources.
In B's bower.json file, then, I need to define these artifacts under the "main" section.
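For illustration, B's bower.json might then look something like this (the dist/ paths are just an assumption about where B's build puts its artifacts):

    # bower.json for Project B (the dist/ paths are illustrative)
    {
      "name": "project-b",
      "main": [
        "dist/project_b.js",
        "dist/project_b.css"
      ]
    }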
I also use grunt-bower-install / grunt-wiredep in order to properly wire all bower dependencies into A's index.html. A tool of this kind will eventually work with the files defined in B's "main" section.
But what happens in dev mode?
Ideally, in dev mode I could change the code of the source of B, and see it reflected in A.
bower link solves only part of the problem, since for A to really react to changes in B's source, I would have to initiate a build in B that changes the project's "main" artifacts, and only then would A really be affected.
That's not convenient IMO.
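For reference, the part that bower link does handle is the standard two-step link (assuming B's registered bower name is project-b):

    # in Project B's directory: register B's working copy globally
    bower link
    # in Project A's directory: symlink A's bower_components/project-b to it
    bower link project-b

The missing piece is that A still only sees whatever B's last build left in its "main" files.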
In dev mode, I usually work against source files and avoid continuous building (apart from minor tasks such as jshint, compass, etc.).
I would like to also work against the source files of B...
This is also important with regard to debugging.
Do you have an idea of how to achieve this?
I have some tricks in mind, but I'm not sure they'd really work.
Thank you very much in any case,
Daniel

Related

C++ build artifacts management

We are moving our build and testing system to Jenkins, and are looking for an easy way (where we don't have to code all the logic ourselves) to manage the build artifacts.
Basically we need an organised way to store them, ordered by build type, the user who built it, and so on. For example:
johnd/nightly/r543241/win32/program.zip
johnd/nightly/trunk/lin64/program.tar.gz
master/release/2.1/win32/program.zip
This way we can upload when a build is complete, and retrieve the required artifact in the testing stage with ease.
Until now we simply stored the files in directories on NFS, but recently started considering an artifact manager. I've looked at Artifactory, Archiva and Nexus, but all seem very Java-centred, or at least to require Maven to work with them. Since I don't want to introduce more complexity (we mainly work with Python; SCons is our build tool) and I don't want to introduce Maven into the mix, I'm looking for something with an easy command line (or better, a REST/Python interface) to upload, download and manage artifacts.
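To give an idea of the kind of interface I mean, something like Artifactory's plain-HTTP REST API would do (server, repository and credentials below are placeholders):

    # deploy an artifact with a plain HTTP PUT...
    curl -u user:password -T program.zip \
      "http://artifactory.example.com/artifactory/builds/johnd/nightly/r543241/win32/program.zip"
    # ...and fetch it back later with a GET
    curl -O "http://artifactory.example.com/artifactory/builds/johnd/nightly/r543241/win32/program.zip"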
If you don't use an artifact manager, but use some other clever method to manage your C++ artifacts for release/test needs I'd be happy to hear that also.
I have exactly the same issue. I didn't manage to integrate Artifactory easily, so I gave up on it.
For the moment I use the copy-artifact plugin and scp to copy the files to an NFS share. But this is not a good solution, since shares may be mounted differently on two machines, and Windows cannot access them directly.
But a real artifact manager would be such a good thing for our project.
In my previous job, I was using a documentation CMS (LiveLink) which provided good services:
- self-organising folders.
- persistent URLs (we could move or rename folders and files, and a public URL to a given file never changed; that was SOOO great: "http://server/go/123456789123456789" always pointed to the same file, so you just had to give this URL to wget and you got the file).
- file versioning.
- metadata and advanced search.
I'm still searching for such a solution in open-source software and haven't found one.
If you just need to share artifacts between two jobs, simply use the "copy artifact from another build" plugin. I use it to split my job into a build job and a test job (actually there are 3 test jobs: a very fast one run after each commit, a quite slow one run every day, and a very slow one run each weekend).

Common test data for multiple independent maven projects

I have a maven project that converts text files of a specific format to another format.
For testing I have put in src/test/resources a large amount of test files.
I also have another project that uses the first one to do the conversion and then does some extra stuff on the output format. I also want to test this project against the same test dataset, but I don't want to duplicate the data set, and I want to be able to test the first project alone, since it is also a standalone converter project.
Is there any common solution for doing that? I don't mind not having the test dataset inside the project's source tree, as long as each project can access the data set independently of the other. I don't want to set up a database for that either. I am thinking of something like a repository of test data, simpler than an RDBMS. Is there any application for this kind of need that I can use with a specific Maven plugin? Ease of setup and simplicity are my priorities. I'm also thinking of something like packaging the test data, putting it in an internal Maven repo, and then downloading and unzipping it in the JUnit code. Or better, is there a Maven plugin that can do this for me?
Any ideas?
It is possible to share resources with Maven, for example with the help of the Assembly and Dependency plugins. Basically:
- Create a module with a packaging of type pom to hold the shared resources
- Use the Assembly plugin to package the module as a zip
- Declare a dependency on this zip in "consumer" modules
- And use dependency:unpack-dependencies (plus some exclusion rules) to unpack the zip
This approach is detailed in How to share resources across projects in Maven. It requires a bit of setup (which is "provided") and is not too complicated.
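The unpack step might look roughly like this in a consumer's pom.xml (the artifact id and output directory are illustrative):

    <!-- unpack the shared test-data zip into target/test-data before tests run -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
        <execution>
          <id>unpack-test-data</id>
          <phase>generate-test-resources</phase>
          <goals>
            <goal>unpack-dependencies</goal>
          </goals>
          <configuration>
            <includeArtifactIds>shared-test-data</includeArtifactIds>
            <outputDirectory>${project.build.directory}/test-data</outputDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>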
Just place them in a tree in the src/main/resources directory of a separate module created specifically to share the test data. They will be added to the jar file and be nicely compressed and versioned in your Nexus repository, file share, ~/.m2/repository or whatever you use to store/distribute Maven artifacts.
Then just add a dependency in the projects that need the data, in test scope, and use resource loading to get them from the jars.
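The dependency declaration would be something like this (coordinates are hypothetical):

    <!-- shared test data, only visible on the test classpath -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>shared-test-data</artifactId>
      <version>1.0</version>
      <scope>test</scope>
    </dependency>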
You do not need any special plugins or other infrastructure. This just works.

One project, Multiple customers with git?

I'm new to Git and don't know yet how much it will fit my needs, but it looks impressive.
I have a single webapp that I use for different customers (Django + JavaScript).
I plan to use Git to handle these different customer versions as branches. Each customer can have custom files, folders and settings, improved versions... but they should share the same 'core'. We are a small team and subscribed to a GitHub account.
Are branches the right way to handle this case?
About the settings file, how would you proceed? Would you .gitignore the customer-specific settings file and, for example, add a settings.xml.sample file to the repo?
Also, is there any way to prevent some files from being merged into master (while still being committed to the customer branch)? For example, I'd like to save some customer data to the customer branch but prevent it from being committed to master.
Is the .gitignore file branch-specific? YES
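(To spell that out: .gitignore is just a tracked file, so each branch can carry its own version. The branch and path names below are illustrative.)

    # on the customer branch, ignore that customer's data files
    git checkout customer-a
    echo "customer_a_data/" >> .gitignore
    git add .gitignore
    git commit -m "ignore customer A data on this branch only"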
EDIT
After reading all your answers (thanks!) I decided to first refactor my Django project structure to isolate the core and my different apps in an apps subfolder. Doing this makes a cleaner project, and tweaking the .gitignore file makes it easy to use git branches to manage the different customers and settings!
Ju.
In addition to cpharmston's answer, it sounds like you need to do some refactoring to separate out what is truly custom for each client and what isn't. Then you may consider adding additional repositories to track the customizations for each client (entirely new repos, not branches). Then your deployment can pull your "core" from your main repo, and the client-specific stuff from that repo.
I would not use branches to accomplish what you are trying to do.
In source control, branches are intended to be used for things that are meant to be merged back into trunk. For example, Alex Gaynor spent his summer of code working on a branch of Django that allows for support for multiple databases, with the goal of eventually merging it back into the Django trunk.
Checkouts (or clones, in Git's case) might better suit what you are trying to do. You would create a repo containing all of the project's base files (and .sample files, if you will), and clone the repo to all the various locations where you wish to deploy the code. Then manually create the configuration and customization files at each deployment (taking care not to add them to the repo). Whenever you update the code in the repo, run a pull on each deployment to update the code. Voila!
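As a concrete sketch of that workflow (the URL and file names are illustrative):

    # one clone per customer deployment
    git clone git@github.com:example/webapp.git customer-a
    cd customer-a
    # create the untracked, deployment-specific config by hand
    cp settings.py.sample local_settings.py
    # later, to pick up core updates on this deployment:
    git pull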
Other answers are correct that you'll be in the best shape for maintenance to the extent that you separate out your core code from the custom per-client code. However, I'll break from the crowd and say that if you're unable to do that (say because you need to add extra functionality to core code for a certain client), DVCS branches would work just fine for what you want to do. Though I'd probably recommend per-directory branches rather than in-repo branches for this purpose (git can do per-directory branches as well, it's nothing but a cloned repo that diverges).
I use hg, not git, but all of my Django projects are cloned from the same base "project template" repo that has utility scripts, a basic common set of INSTALLED_APPS, etc. This means when I make changes to that project template, I can easily merge those common updates into existing projects. This isn't exactly the same as what you're planning, but is similar. You will occasionally have to deal with merge conflicts, if you modify the same area of code in the core that you've already customized for a specific client.
Matthew Talbert is correct, you really need to separate the custom stuff from non-custom stuff. If you can refactor all the core code to be contained in one directory, your clients can use it as a read-only git submodule. The added benefit is that you lock them into an explicit version of the core code. This means that they would have to consciously update to a newer revision, which is what you want for production code.
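A submodule setup might look like this (the repo URL and path are assumptions):

    # inside a client project, vendor the core at a pinned revision
    git submodule add git@github.com:example/webapp-core.git core
    git commit -m "add core as a read-only submodule"
    # updating to a newer core revision is a deliberate act:
    cd core
    git checkout master && git pull
    cd ..
    git commit -am "bump core to the new revision"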

Handling relations between multiple subversion projects

In my company we are using one SVN repository to hold our C++ code. The code base is composed of a common part (infrastructure and applications) and client projects (developed as plugins).
The repository layout looks like this:
Infrastructure
App1
App2
App3
project-for-client-1
    App1-plugin
    App2-plugin
    Configuration
project-for-client-2
    App1-plugin
    App2-plugin
    Configuration
A typical release for a client project includes the project data and every project that is used by it (e.g. Infrastructure).
The actual layout of each directory is:
Infrastructure
    branches
    tags
    trunk
project-for-client-2
    branches
    tags
    trunk
And the same goes for the rest of the projects.
We have several problems with the layout above:
It's hard to set up a fresh development environment for a client project, since one has to check out all of the involved projects (for example: Infrastructure, App1, App2, project-for-client-1).
It's hard to tag a release in a client project, for the same reason as above.
In case a client project needs to change some common code (e.g. Infrastructure), we sometimes use a branch. It's hard to keep track of which branches are used in which projects.
Is there any way in SVN to solve any of the above? I thought of using svn:externals in the client projects, but after reading this post I understand it might not be right choice.
You could handle this with svn:externals. An svn:externals property maps a directory of your working copy to a URL (and optionally a revision) in an svn repository.
This lets you pull in parts of a different repository (or the same one). One way to use this: under project-for-client-2, you add an svn:externals link to the branch of Infrastructure you need, the branch of App1 you need, etc. So when you check out project-for-client-2, you get all of the correct pieces.
The svn:externals links are versioned along with everything else, so as project-for-client-1 gets tagged, branched, and updated, the correct external branches will always get pulled in.
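Setting that up might look like this (repository URLs and branch names are illustrative):

    # from a working copy of project-for-client-2, define its externals
    svn propset svn:externals '
    Infrastructure http://svn.example.com/repo/Infrastructure/branches/client-2
    App1 http://svn.example.com/repo/App1/trunk
    App2 http://svn.example.com/repo/App2/trunk
    ' .
    svn commit -m "pull in Infrastructure and Apps via externals"
    # a fresh checkout of project-for-client-2 now fetches everything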
A suggestion is to change the directory layout from
Infrastructure
    branches
    tags
    trunk
project-for-client-1
    branches
    tags
    trunk
project-for-client-2
    branches
    tags
    trunk
to
branches
    feature-1
        Infrastructure
        project-for-client-1
        project-for-client-2
tags
trunk
    Infrastructure
    project-for-client-1
    project-for-client-2
There are some problems with this layout too. Branches become massive, but at least it's easier to tag specific places in your code.
To work with the code, one would simply check out the trunk and work with that. Then you don't need scripts that check out all the different projects; they just refer to Infrastructure with "../Infrastructure". Another problem with this layout is that you need to check out several copies if you want to work on projects completely independently. Otherwise a change in Infrastructure for one project might cause another project not to compile until it is updated too.
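With this layout the setup step really is a single command (URL illustrative):

    # one checkout brings Infrastructure and all client projects in together
    svn checkout http://svn.example.com/repo/trunk work
    cd work/project-for-client-1   # sees Infrastructure as ../Infrastructure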
This might also make releases, and separating the code of different projects, a little more cumbersome.
Yeah, it sucks. We do the same thing, but I can't really think of a better layout.
So, what we have is a set of scripts that automate everything subversion-related. Each customer's project contains a file called project.list, which lists all of the subversion projects/paths needed to build that customer. For example:
Infrastructure/trunk
LibraryA/trunk
LibraryB/branches/foo
CustomerC/trunk
Each script then looks something like this:
for PROJ in $(cat project.list); do
    # run the appropriate svn command for each listed path; checkout is
    # shown here, the update and tag scripts substitute their own command
    # (the repository URL is illustrative)
    svn checkout "http://svn.example.com/repo/$PROJ" "$PROJ"
done
Where the command might be a checkout, update or tag. It is a bit more complicated than that, but it does mean that everything is consistent, and checking out, updating and tagging each become a single command.
And of course, we try to branch as little as possible, which is the most important suggestion I can possibly make. If we need to branch something, we will try to either work off of the trunk or the previously-tagged version of as many of the dependencies as possible.
First, I don't agree that externals are evil, although they aren't perfect.
At the moment you're doing multiple checkouts to build up a working copy. If you used externals, it would do exactly this, but automatically and consistently each time.
If you point your externals to tags (and or specific revisions) within the target projects, you only need to tag the current project per release (as that tag would indicate exactly what external you were pointing to). You'd also have a record within your project of exactly when you changed your externals references to use a new version of a particular library.
Externals aren't a panacea, and as the post shows there can be problems. I'm sure there's something better than externals, but I haven't found it yet (even conceptually). Certainly the structure you're using can yield a great deal of information and control in your development process, and using externals can add to that. However, the problems he had weren't fundamental corruption issues; a clean checkout would resolve everything, and they are pretty rare (are you really unable to create a new branch of a library in your repo?).
A point to consider: recursive externals. I'm not sold on either the yes or the no of this, and tend to go with a pragmatic approach.
Consider using Piston, as the article suggests. I've not seen it in action so I can't really comment; it may do the same job as externals in a better way.
From my experience I think it's more useful to have a repository for each individual project. Otherwise you have the problems you describe, and furthermore, revision numbers change when other projects change, which could be confusing.
Only when there's a relation between individual projects (such as software, hardware schematics, documentation, etc.) do we use a single repository, so the revision number serves to get the whole pack to a known state.

Best practice for managing project variants in Git?

I have to develop two Django projects which share 90% of the same code, but have some variations in several applications, templates and within the model itself.
I'm using Git for distributed source-control.
My requirements are that:
- common code for both projects is developed in one place (Project1's development environment)
- periodically this is merged into the development environment of the second project (Project2)
- the variations are not easily encapsulated within apps (e.g. there are apps such as "profiles" which vary between Project1 and Project2 but for which there's also an ongoing common evolution)
- both Project1 and Project2 have public repositories, so that I can collaborate with others
- similarly, Project1 and Project2 ought to have development, demo, staging and production servers
- however, the public repository is not on the same server in both cases; so, for example, when I'm developing in Project1, I want to be able to "push" to my GitHub server, but not have Project2 stuff going there
- there are files such as local_settings.py which are completely different between Project1 and Project2, but should be shared between the multiple developers of each project
So what's the best way of managing this situation?
What would seem ideal is something like a "filtered pull", where instead of .gitignore saying "ignore this file entirely", I could say "ignore this file when pulling from that repo". I couldn't see anything quite like that in the documentation, but might there be something like it?
Move the common code to its own library and make it a dependency of those two projects. This is not a question of version control, but a question of code reuse, design and eliminating duplication.
Considering it is a Django / Pinax site, with variants scattered around inside several different applications, I would not recommend using submodules.
The variants should be managed independently in project1 branch and project2 branch, removing the need to "filter" a gitignore result.
If you identify some really common code, it may end up in a third repo which you could then "subtree merge" into the project1 and project2 repositories (the subtree merge strategy being illustrated in this SO answer).
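The classic subtree-merge recipe looks roughly like this (the remote name and paths are illustrative):

    # inside project1: graft the shared repo under common/
    git remote add common /path/to/common-repo
    git fetch common
    git merge -s ours --no-commit common/master   # newer git also needs --allow-unrelated-histories
    git read-tree --prefix=common/ -u common/master
    git commit -m "merge shared code as a subtree"
    # pulling later updates from the shared repo:
    git pull -s subtree common master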
You can use two different git branches for development. When you make changes in one that are common to the other, just git cherry-pick them over. You can push and pull specific branches, too, so no one ever need know you're working on both of them at the same time.
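For example (branch names, remote name and the commit hash are placeholders):

    # apply a common fix made on project1's branch to project2's branch
    git checkout project2
    git cherry-pick abc1234
    # push only that branch to its own public remote
    git push project2-remote project2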
I would try submodules: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html#submodules
I would make a third repo where I would place the code that the projects share. Then Project1 and Project2 would each have their own repo, and they can pull from that "shared" third repo.
I think that your idea of a "filtered pull" will make it difficult to maintain.