Best practice for DRF versioning - copy whole folder? Or subclass previous version? - django

Based on this question, and this answer in particular, what's the most sustainable way of creating v2, v3, etc.? Most of the time, each version introduces incremental changes over the previous one: most endpoints stay the same, and most fields stay the same.
Option 1: Copy the v1 folder, redo the internal references to ensure the code is updated, and then make your changes on top of that. This keeps every version self-contained; if a bug shows up, you fix it in every version. Versions are clean and dependencies are easier to manage. However, you end up with lots of duplicated code by, say, v30.
Option 2: Create a v2 folder and make the v2 classes subclass the v1 classes to provide the base functionality, then add your changes (a minimal sketch follows below). This promotes code reuse, but can get unwieldy very fast, e.g. when tracing a change or fixing a bug across 30+ versions.
Any prevailing best practices, pros/cons?
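For concreteness, a minimal sketch of what Option 2 could look like in DRF terms; the api.v1 package, the Widget* classes, and the added colour field are all hypothetical:

from api.v1.serializers import WidgetSerializerV1  # hypothetical v1 classes
from api.v1.views import WidgetViewSetV1

class WidgetSerializerV2(WidgetSerializerV1):
    # v2 inherits everything from v1 and overrides only what changed
    class Meta(WidgetSerializerV1.Meta):
        fields = WidgetSerializerV1.Meta.fields + ["colour"]

class WidgetViewSetV2(WidgetViewSetV1):
    serializer_class = WidgetSerializerV2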

Your Option 2 will turn into Option 1 in a few versions.
In my opinion there are two cases:
Case 1: You have a traditional, mostly-CRUD API. Then I would suggest looking at this post, which shows a way to create transitions between versions through serializers (a sketch of the idea follows below).
Case 2: Your API is more about algorithms, logic and data processing. Then you can go with Option 1: create another app in DRF (copy the folder), move all common libraries out of the app, and keep in the app only the classes that could change and need backwards-compatibility support.
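A minimal sketch of that serializer-transition idea, assuming DRF's URLPathVersioning is configured so that request.version carries the requested version; the Account model and the two serializer classes are hypothetical:

from rest_framework import viewsets

from myapp.models import Account  # hypothetical model
from myapp.serializers import AccountSerializerV1, AccountSerializerV2  # hypothetical

class AccountViewSet(viewsets.ModelViewSet):
    queryset = Account.objects.all()

    def get_serializer_class(self):
        # one view serves every version; only the representation changes
        if self.request.version == "v2":
            return AccountSerializerV2
        return AccountSerializerV1

A bug fix in the view logic then lands once, and each version differs only in its serializer.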

Related

Best Practice - REST API versioning: Where and How to physically store source code

My question is not about best practices for REST API URI design.
I've decided for myself that I'm going to use the following approach:
https://theserver.com/api/v1/whatsoever
I'm much more curious about how to design the actual source code in advance to easily extend the API with more versions.
Let's assume we're using a classic MVC framework in our favorite programming language. Our API works fine, but we want to add and change functionality that is not backwards compatible. We did think about a nice URI design, but didn't think about how our code should look in order to work nicely with different API versions. Crap... what now?
Question: What should the source code for a versionable REST API look like?
Nice to have:
Not mixing up the different versions
Still best use of DRY
Don't reinvent the wheel
Easily extendable for future versions
Possible answers I can think of:
Same project - different Namespaces & Subfolders (a Django-flavoured sketch of this option follows after the list)
Namespace: namespace App\Http\Controllers\v1\Users;
Folder: {root_folder}\app\Http\Controllers\v1\Users\UserLoginController.php
Different projects
Point https://theserver.com/api/v1/whatsoever to project 1
and https://theserver.com/api/v2/whatsoever to project 2
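For the first option, here's a Django-flavoured sketch; the api.v1 and api.v2 packages, each with its own urls.py, are hypothetical, and path() assumes Django 2.0+:

from django.urls import include, path

urlpatterns = [
    # each version lives in its own namespace and subfolder
    path("api/v1/", include(("api.v1.urls", "api"), namespace="v1")),
    path("api/v2/", include(("api.v2.urls", "api"), namespace="v2")),
]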
Here is my logic. First of all, we need to answer the question "Why do we need versioning?"
If we can extend our API in a backward-compatible way, then we don't need versioning: all applications and services keep using the same API, and no changes are needed.
If we cannot provide a backward-compatible API, then we need to introduce the next version of our API. This allows all applications and services to migrate smoothly to the new version while the old one keeps working. After some period of time (one year, say), the first version can be deprecated and shut down.
So, based on the answer above, I would keep API versions in separate branches in my repository: one codebase, with a branch for each version. The first branch corresponds to v1, which is stable and receives only fixes; no active development happens there. The second branch corresponds to v2, which gets all the new features.

Maintaining South migrations on Django forks

I'm working on a pretty complex Django project (50+ models) with some complicated logic (lots of different workflows, views, signals, APIs, background tasks etc.). Let's call this project-base. Currently using Django 1.6 + South migrations and quite a few other 3rd party apps.
Now, one of the requirements is to create a fork of this project that will add some fields/models here and there and some extra logic on top of that. Let's call this project-fork. Most of the extra work will be on top of the existing models, but there will also be a few new ones.
As project-base continues to be developed, we want these features to also get into project-fork (much like a rebase/merge in git-land). The extra changes in project-fork will not be merged back into project-base.
What could be the best possible way to accomplish this? Here are some of my ideas:
Use South merges in project-fork to keep it up to date with the latest changes from project-base, as explained here. Use signals and any other means necessary to keep the new logic in project-fork as loosely coupled as possible, to avoid potential conflicts.
Do not modify ANY of the original project-base models; instead, create new models in different apps that reference the old ones (i.e. using OneToOneField). Extra logic could end up in the old and/or new apps. (A sketch of both ideas follows after this list.)
your idea here please :)
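A minimal sketch of both ideas, assuming project-base ships a core app with a Customer model; CustomerExtension and loyalty_tier are hypothetical fork additions:

from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver

from core.models import Customer  # owned by project-base, left untouched

class CustomerExtension(models.Model):
    # option 2: one row of fork-specific data per base Customer
    # (Django 1.6 style; newer Django also requires on_delete here)
    customer = models.OneToOneField(Customer, related_name="extension")
    loyalty_tier = models.CharField(max_length=32, blank=True)

@receiver(post_save, sender=Customer)
def create_extension(sender, instance, created, **kwargs):
    # option 1-style loose coupling: the fork reacts to base-project
    # events through signals instead of modifying base code
    if created:
        CustomerExtension.objects.create(customer=instance)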
I would go with option 1 as it seems less complicated as a whole, but might expose a greater risk. Here's how I would see it happening:
Migrations on project-base:
0001_project_base_one
0002_project_base_two
0003_project_base_three
Migrations on project-fork:
0001_project_base_one
0002_project_fork_one
After merge, the migrations would look like this:
0001_project_base_one
0002_project_base_two
0002_project_fork_one
0003_project_base_three
0004_project_fork_merge_noop (added to merge in changes from both projects)
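As a sketch, that no-op merge migration could be an empty South SchemaMigration (South 0.8 style; the frozen models dict South normally generates is omitted here for brevity):

from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # intentionally empty: both branches' schema changes are already applied
        pass

    def backwards(self, orm):
        pass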
Are there any pitfalls using this approach? Is there a better way?
Thank you for your time.
Official South workflow:
The official South recommendation is to try the --merge flag: http://south.readthedocs.org/en/latest/tutorial/part5.html#team-workflow
This obviously won't work in all cases; in my experience it works in most, though.
Pitfalls:
Multiple changes to the same model can still break
Duplicate changes can break things
The "better" way is usually to avoid simultaneous changes to the same models, the easiest way to do this is by reducing the error-window as much as possible.
My personal workflows in these cases:
With small forks where the model changes are obvious from the beginning:
Discuss which model changes will need to be done for the fork
Apply those changes to both/all branches as fast as possible to avoid conflicts
Work on the fork ...
Merge the fork, which produces no new migrations
With large forks where the changes are not always obvious and/or will change again:
Do the normal fork and development stuff while trying to stay up to date with the latest master/develop branch as much as possible
Before merging back, throw away all schemamigrations in the fork
Merge all changes from the master/develop
Recreate all needed schema changes
Merge to develop/master

Bamboo CI Plan Templates?

Many of my project builds utilize the same stages, jobs and tasks over and over again. Is there any way to define a "template" plan and use it to make other templated plans from? I'm not talking about cloning, because with cloning, you are then able to make independent changes to all the clones.
What I want is a way to template, say, 10 different plans, and then if I want to add a new job/task to all of them, I would only need to change the template and that would ripple out into all the plans utilizing the template.
Is this possible, and if so, how?
That isn't currently possible, unfortunately:
A fairly old feature request for plan templates to reuse across projects (BAM-907) has been resolved as Fixed due to the introduction of plan branches in Bamboo 4.0 (see Using plan branches for details):
Plan Branches are a Bamboo Plan configuration that represent a branch in your version control system. They inherit all of the configuration defined by the parent Plan, except that instead of building against the repository's main line, they build against a specified branch. It is also worth noting that only users with edit access to the Plan can create Plan Branches that inherit from that plan.
While plan branches are indeed a killer simplification for typical Git workflows around feature branches and pull requests, and might help accordingly, they presumably cover neither the original request nor yours in full - that aspect is meanwhile tracked via Add possibility to create plan templates and choose a template when creating a plan (BAM-11380) and especially Build and deployment templates (BAM-13600), the latter featuring a somewhat promising comment from January 2014:
Thank you for reporting this issue. We've been thinking about templates a lot over the last few months. When we've got more news to share on this, we will be sure to update this ticket.
I know this question is closed, just wanted to add something I bumped into today:
https://ecosystem.atlassian.net/browse/PLATFORM-48
By the looks of this (issue in review at the time of this comment) we should be able to use templates for Bamboo plans pretty soon.

Sitecore : how to avoid breaking dependencies of items that are referenced via Control-Datasource

Using Sitecore.NET 6.3.0.
Our Sitecore content has a great many items that refer to other items via the control renderings collection. This is done by setting a path to the item as the control's datasource.
Since this link is specified via a path - not an id - to the linked item, it is currently possible to break the link if you change the linked item's location, or delete it completely.
With the goal of either preventing broken links, or at least detecting them before a publish, what is the best approach to avoiding this problem?
I'm aware that it is possible to link in a standard way (reference by id), but this would rule out any links where we must link via relative paths.
Is there any way to go about detecting, or even better preventing broken links of this kind?
EDIT: This is more akin to assigning a DataSource to a sublayout in presentation layout details, rather than doing anything in code. (It's something a content editor would do.)
I'm not sure what the best approach is, but I do agree there's potential for a lot of problems here.
While reading your post, I was thinking... it's a naughty move, but you COULD change the field type of the "DataSource" field. Of course, mucking about in system templates is something to be cautious about - but in this case, the alternative seems slightly worse.
If doing so, you would need to hook into the getRenderingDatasource pipeline as well, and override the GetDatasourceLocation step.
I've not done this myself, so I cannot guarantee it will work. It seems fairly straightforward, however :-)

Handling relations between multiple subversion projects

In my company we are using one SVN repository to hold our C++ code. The code base is composed of a common part (infrastructure and applications) and client projects (developed as plugins).
The repository layout looks like this:
Infrastructure
App1
App2
App3
project-for-client-1
    App1-plugin
    App2-plugin
    Configuration
project-for-client-2
    App1-plugin
    App2-plugin
    Configuration
A typical release for a client project includes the project data and every project that is used by it (e.g. Infrastructure).
The actual layout of each directory is:
Infrastructure
    branches
    tags
    trunk
project-for-client-2
    branches
    tags
    trunk
And the same goes for the rest of the projects.
We have several problems with the layout above:
It's hard to start a fresh development environment for a client project, since one has to check out all of the involved projects (for example: Infrastructure, App1, App2, project-for-client-1).
It's hard to tag a release in a client project, for the same reason as above.
In case a client project needs to change some common code (e.g. Infrastructure), we sometimes use a branch. It's hard to keep track of which branches are used in which projects.
Is there any way in SVN to solve any of the above? I thought of using svn:externals in the client projects, but after reading this post I understand it might not be the right choice.
You could handle this with svn:externals. An external is a URL pointing to a spot in an svn repo.
This lets you pull in parts of a different repository (or the same one). One way to use it: under project-for-client-2, you add an svn:externals link to the branch of Infrastructure you need, the branch of App1 you need, etc. So when you check out project-for-client-2, you get all of the correct pieces.
The svn:externals links are versioned along with everything else, so as project-for-client-2 gets tagged, branched, and updated, the correct external branches will always get pulled in.
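As a sketch, the svn:externals property on project-for-client-2's trunk might look something like this (the ^/ root-relative syntax requires SVN 1.5+, and the branch names are purely illustrative):

^/Infrastructure/trunk Infrastructure
^/App1/branches/client-2-fixes App1
^/App2/trunk App2

Checking out project-for-client-2/trunk then pulls Infrastructure, App1 and App2 into the working copy automatically.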
A suggestion is to change directory layout from
Infrastructure
    branches
    tags
    trunk
project-for-client-1
    branches
    tags
    trunk
project-for-client-2
    branches
    tags
    trunk
to
branches
    feature-1
        Infrastructure
        project-for-client-1
        project-for-client-2
tags
trunk
    Infrastructure
    project-for-client-1
    project-for-client-2
There are some problems with this layout too. Branches become massive, but at least it's easier to tag specific places in your code.
To work with the code, one would simply check out the trunk and work with that. Then you don't need the scripts that check out all the different projects; the projects just refer to Infrastructure with "../Infrastructure". Another problem with this layout is that you need to check out several copies if you want to work on projects completely independently; otherwise, a change in Infrastructure for one project might cause another project not to compile until it is updated too.
This might also make releases, and separating the code of the different projects, a little more cumbersome.
Yeah, it sucks. We do the same thing, but I can't really think of a better layout.
So, what we have is a set of scripts that can automate everything subversion related. Each customer's project will contain a file called project.list, which contains all of the subversion projects/paths needed to build that customer. For example:
Infrastructure/trunk
LibraryA/trunk
LibraryB/branches/foo
CustomerC/trunk
Each script then looks something like this:
for PROJ in $(cat project.list); do
    # run the relevant svn command (checkout, update or tag) against $PROJ
done
Where the commands might be a checkout, update or tag. It is a bit more complicated than that, but it does mean that everything is consistent: checking out, updating and tagging each become a single command.
And of course, we try to branch as little as possible, which is the most important suggestion I can possibly make. If we need to branch something, we will try to either work off of the trunk or the previously-tagged version of as many of the dependencies as possible.
First, I don't agree that externals are evil, although they aren't perfect.
At the moment you're doing multiple checkouts to build up a working copy. If you used externals it would do exactly this, but automatically and consistently each time.
If you point your externals to tags (and/or specific revisions) within the target projects, you only need to tag the current project per release (as that tag would indicate exactly which externals you were pointing to). You'd also have a record within your project of exactly when you changed your externals references to use a new version of a particular library.
Externals aren't a panacea - and as the post shows, there can be problems. I'm sure there's something better than externals, but I haven't found it yet (even conceptually). Certainly, the structure you're using can yield a great deal of information and control in your development process, and using externals can add to that. However, the problems he had weren't fundamental corruption issues - a clean get would resolve everything - and they are pretty rare (are you really unable to create a new branch of a library in your repo?).
One point to consider is using recursive externals. I'm not sold on either a yes or a no here, and tend to go with a pragmatic approach.
Consider using Piston, as the article suggests. I've not seen it in action so can't really comment; it may do the same job as externals in a better way.
In my experience it's more useful to have a repository for each individual project. Otherwise you get the problems you describe, and furthermore, revision numbers change whenever other projects change, which can be confusing.
We use a single repository only when there's a relation between the individual projects - such as software, hardware schematics, documentation, etc. - so the revision number serves to get the whole pack to a known state.