Best Practice - REST API versioning: Where and How to physically store source code

My question is not about best practices for REST API URI design.
I've decided for myself that I'm going to use the following approach:
https://theserver.com/api/v1/whatsoever
I'm much more curious about how to design the actual source code in advance to easily extend the API with more versions.
Let's assume we've used a classic MVC framework for our favorite programming language. Our API works fine, but we want to add & change functionality that is not backwards compatible. We did think about a nice URI design, but didn't think about how our code should look in order to work nicely with different API versions. Crap... What now?
Question: What should the source code for a versionable REST API look like?
Nice to have:
Not mixing up the different versions
Still best use of DRY
Don't reinvent the wheel over and over again
Easy to extend with further versions
Possible answers I can think of:
Same project - different Namespaces & Subfolders
Namespace: namespace App\Http\Controllers\v1\Users;
Folder: {root_folder}\app\Http\Controllers\v1\Users\UserLoginController.php
Different projects
Point https://theserver.com/api/v1/whatsoever to project 1
and https://theserver.com/api/v2/whatsoever to project 2

Here is my logic: first of all, we need to answer the question "Why do we need versioning?"
- If we can extend our API in a way that is backward compatible, we don't need versioning (all applications and services keep using the same API and no changes are needed).
- If we cannot provide a backward compatible API, we need to introduce the next version of our API. This allows all applications and services to migrate smoothly to the new version while the old one keeps working. After a set time period (say, one year), the first version can be deprecated and shut down.
Based on the answer above, I would keep API versions in separate branches of my repository: one codebase, one branch per version. The first branch corresponds to v1, which is stable and receives only fixes; there is no active development here. The second branch corresponds to v2, which gets all the new features.
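For completeness, here is roughly what option 1 from the question (same project, per-version namespaces/subfolders) could look like. This is a minimal sketch using Django-style URL routing for illustration; the module and view names are my own assumptions, not something either approach prescribes.

    # project urls.py -- each version lives in its own package
    from django.urls import include, path

    urlpatterns = [
        path("api/v1/", include("api.v1.urls")),
        path("api/v2/", include("api.v2.urls")),
    ]

    # api/v2/urls.py -- v2 reuses the v1 views it does not change (DRY)
    # and overrides only the endpoints with breaking changes
    from django.urls import path
    from api.v1.views import UserLoginView      # unchanged endpoint, reused
    from api.v2.views import UserProfileView    # breaking change, new code

    urlpatterns = [
        path("users/login/", UserLoginView.as_view()),
        path("users/profile/", UserProfileView.as_view()),
    ]

Routing everything through version-prefixed URLConfs keeps the versions from mixing, while still letting v2 import whatever it can safely share with v1.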

Related

Create a separate app for my REST API or place it inside my working app?

I'm building a simple GIS system on GeoDjango.
The app displays a set of maps, and I'm also attempting to provide a RESTful API for these maps.
I'm facing a decision whether to create a separate app for the API or to work inside my existing app.
The two apps are logically separate but they share the same models.
So what is considered better?
Although a case can be made for either approach, I think keeping the APIs inside their associated apps is the better one. Since the API code is going to depend on the models or other utility methods anyway, keeping the APIs in the same app leads to more cohesive code. Besides, the very ideology behind Django apps is that they can be isolated and reused.
There used to be a similar debate about storing templates. In the early days of Django, people preferred to store all templates together in one global folder (with subdirectories named after each app); however, Django has since started discouraging that approach in favour of storing templates in the respective app itself.
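As a rough sketch of that layout, assuming Django REST Framework and a hypothetical Map model (all names are illustrative), the API code sits right next to the models it depends on:

    # maps/serializers.py
    from rest_framework import serializers
    from maps.models import Map

    class MapSerializer(serializers.ModelSerializer):
        class Meta:
            model = Map
            fields = ["id", "name"]  # geometry fields are usually exposed
                                     # via the rest_framework_gis package

    # maps/views.py
    from rest_framework import viewsets
    from maps.models import Map
    from maps.serializers import MapSerializer

    class MapViewSet(viewsets.ReadOnlyModelViewSet):
        queryset = Map.objects.all()
        serializer_class = MapSerializer

    # maps/urls.py
    from rest_framework.routers import DefaultRouter
    from maps.views import MapViewSet

    router = DefaultRouter()
    router.register(r"maps", MapViewSet)
    urlpatterns = router.urls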
@hspandher's answer is very solid and will allow most of your needs to be implemented.
There is, though, another approach which may be a bit more complicated to achieve but gives you all the space you may need for experimentation and reuse:
Separate everything:
Backend:
Isolate your API from its visualization (see frontend below) and make it completely autonomous and self-contained.
That can be achieved by separating the apps inside your Django project and exposing the corresponding APIs, which must be the only way for an external party (e.g. a client, another app, etc.) to "talk" to any one of your apps.
Frontend:
Assuming that your APIs are exposed, you have effectively separated the visualization from the logic, so you have many options for how to visualize your maps.
For example, you can now build a React app which makes requests to your API and visualizes the responses using any of these tools: Leaflet.js, D3.js, or anything else you like.
Summary:
The benefits of this separation are:
Separation of logic and implementation.
Better maintainability.
Many tool and technology options to use.
Reusability.
As a side note, you can read about the twelve-factor app methodology and think about using it in your implementation.

Is the Managed AddIn Framework still alive?

I need to implement a solution with add-ins executed in their own AppDomains. I came across MAF, which is - by its description - exactly what I need.
However, the documentation and its CodePlex project seem a bit outdated; some pages in the docs do not exist for the "current version" of .NET.
I also found posts about gotchas and the complexity of MAF.
So I'm now not sure if I should use it or rather do all the work myself (add-in management, loading/unloading AppDomains, etc.).
Any thoughts and/or experience appreciated.
MAF is a supported piece of the .NET Framework, but it hasn't received much attention in years.
Pros
Supports out-of-process / separate-AppDomain loading of add-ins
Supports backward compatibility for add-ins
Cons
Complex (Requires 5 DLLs in the pipeline)
Requires investment in tooling (You need to update/maintain your own copy of the Pipeline generation code)
Hasn't received any updates in functionality since it was released
There is not a lot of information on the web on best practices or issues people have run into
While there are more cons than pros in that list, it does work and mostly does what you expect. My suggestion is to try it out and see how it works. At the end of the day, the consumers of your API are coding against an interface, so you can always swap out the MAF layer in the future and your add-ins wouldn't need to change.
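MAF itself is .NET, but the point about consumers only seeing an interface is language-agnostic. A rough sketch of the idea in Python (all names hypothetical): as long as callers and add-ins depend only on the abstract types, the concrete host can be MAF-backed today and hand-rolled tomorrow.

    from abc import ABC, abstractmethod

    class AddIn(ABC):
        @abstractmethod
        def execute(self, payload: dict) -> dict:
            """Do the add-in's work and return a result."""

    class AddInHost(ABC):
        @abstractmethod
        def load(self, name: str) -> AddIn:
            """Locate, isolate and activate an add-in by name."""

        @abstractmethod
        def unload(self, addin: AddIn) -> None:
            """Tear down the add-in and whatever isolation it ran in."""

    class EchoAddIn(AddIn):
        def execute(self, payload: dict) -> dict:
            return payload

    class InProcessHost(AddInHost):
        """Trivial stand-in; a MAF-style host would spin up a separate
        AppDomain/process behind this same interface."""
        def load(self, name: str) -> AddIn:
            return EchoAddIn()

        def unload(self, addin: AddIn) -> None:
            pass

    host: AddInHost = InProcessHost()
    plugin = host.load("echo")
    print(plugin.execute({"msg": "hello"}))
    host.unload(plugin)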

Best practice for DRF versioning - copy whole folder? Or subclass previous version?

Based on this question, and this answer in particular, what's the most sustainable way of creating v2, v3, etc.? Most of the time, each version introduces incremental changes over the previous one: most endpoints stay the same, and most fields stay the same.
Option 1: Copy the v1 folder, redo the internal references to ensure the code is updated, and then make your changes on top of that. This keeps every version self-contained. If a bug shows up, you fix it in all versions. Versions are clean and dependencies are easier to manage. However, you end up with lots of duplicated code after, say, v30.
Option 2: Create a v2 folder, make the v2 classes subclass the v1 classes to provide the base functionality, and then add your changes. This promotes code reuse, but can get unwieldy very fast, e.g. tracing a change or fixing a bug when you have over 30 versions.
Any prevailing best practices, pros/cons?
Your Option 2 will turn into Option 1 in a few versions.
In my opinion there are two cases:
Case 1: you have a traditional, mostly-CRUD API. Then I would suggest looking at this post, which shows a way to create transitions between versions through serializers.
Case 2: your API is more about algorithms, logic and data processing. Then you can go with Option 1: create another app in DRF (copy the folder), move all common libraries out of the app, and keep in the app only the classes that could change and need backwards-compatibility support.
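To make the trade-off concrete, here is a rough sketch of option 2's mechanics under DRF, with hypothetical field names: the v2 module subclasses v1 and only touches what actually changed (recent DRF versions let a subclass remove an inherited field by redeclaring it as None), while each version keeps its own URLConf so old clients are untouched.

    # api/v1/serializers.py
    from rest_framework import serializers

    class UserSerializer(serializers.Serializer):
        id = serializers.IntegerField()
        full_name = serializers.CharField()

    # api/v2/serializers.py -- only the breaking difference lives here
    from rest_framework import serializers
    from api.v1.serializers import UserSerializer as UserSerializerV1

    class UserSerializer(UserSerializerV1):
        full_name = None                      # removed in v2
        first_name = serializers.CharField()  # replaces the single field
        last_name = serializers.CharField()

    # project urls.py -- versions stay separate at the routing level
    from django.urls import include, path

    urlpatterns = [
        path("api/v1/", include("api.v1.urls")),
        path("api/v2/", include("api.v2.urls")),
    ]

Option 1 is the same sketch with the v2 files being full copies instead of subclasses; which one stays manageable depends on how much of the API actually changes per version.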

What are some specific reasons why one would use SphinxAPI over SphinxQL?

Are there any capabilities that one inherently lacks that the other doesn't?
SphinxQL (according to benchmarks on the Sphinx blog) returns query results faster than SphinxAPI for interpreted languages, and the premise of such a comparison would presumably be that the functionality available in both is the same.
Why the API then?
Any clarity on this issue is much appreciated.
(This is about the C++ based open source search engine)
I just found a satisfactory answer:
SphinxQL is simply a language for querying Sphinx.
SphinxAPI is a framework that allows you to compute results based on the queries.
The queries could still be made via SphinxQL or via the API's own syntax; it doesn't matter. SphinxQL and the SphinxAPI are different things that accomplish different goals (as highlighted above).
SphinxAPI is legacy. That is why, in an existing production system, I'd rather go with the flow and keep the API than switch to SphinxQL. But for new projects SphinxQL is the only choice, as it evolves more quickly and gets all new features first. The other big thing is that with SphinxQL you aren't tied to the developer of an API binding for languages or platforms that aren't officially supported; instead you can use any MySQL client/library.
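To make the practical difference concrete, here is a rough sketch assuming a local Sphinx instance on its default ports (9312 for the native API, 9306 for SphinxQL), the official sphinxapi.py client, the PyMySQL library, and a hypothetical index called products:

    # Native SphinxAPI: the sphinxapi.py client ships with the Sphinx distribution
    import sphinxapi

    client = sphinxapi.SphinxClient()
    client.SetServer("127.0.0.1", 9312)
    result = client.Query("laptop", "products")   # search the 'products' index
    if result:
        for match in result["matches"]:
            print(match["id"], match["weight"])

    # SphinxQL: any MySQL client/library works, here PyMySQL on port 9306
    import pymysql

    conn = pymysql.connect(host="127.0.0.1", port=9306, user="")
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM products WHERE MATCH('laptop') LIMIT 10")
        for row in cur.fetchall():
            print(row)
    conn.close()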

Handling the Same Class Definition From Multiple Web Services

The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service APIs, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference in a new project, copy the References.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. Two of these contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object, or a list of objects, that is exactly the same semantically. And I need to process the data the same way, whether it came from one service or the other.
My solution was to strip out the duplicated classes in the services and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so this change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you always use the same objects, and they are squeezed into (and out of) the web service proxy types inside the abstraction layer. This would also allow for unit testing :)
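Sketched in Python for brevity (the real thing would sit in VB/C# over the generated proxies, and every name here is hypothetical), the abstraction layer maps each proxy's user type into one shared domain type:

    from dataclasses import dataclass

    # Stand-ins for the two generated proxy types, which are semantically
    # identical but live in different namespaces.
    class ServiceAUser:
        def __init__(self, user_id, name):
            self.UserId = user_id
            self.Name = name

    class ServiceBUser:
        def __init__(self, user_id, name):
            self.Id = user_id
            self.FullName = name

    # The one domain type the rest of the code (and the unit tests) uses.
    @dataclass
    class User:
        user_id: int
        name: str

    # The abstraction layer squeezes proxy objects into and out of it.
    def from_service_a(u: ServiceAUser) -> User:
        return User(user_id=u.UserId, name=u.Name)

    def from_service_b(u: ServiceBUser) -> User:
        return User(user_id=u.Id, name=u.FullName)

    users = [from_service_a(ServiceAUser(1, "Ada")),
             from_service_b(ServiceBUser(2, "Grace"))]
    print(users)  # downstream code never sees the proxy types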
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which get used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the Thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures match. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.