I am writing a huge REST API. To make it easily discoverable, I am using a URL pattern like this:
http://127.0.0.1:8000/membership/api/v1/make-a-payment
But I notice people usually use a pattern like this: http://127.0.0.1:8000/api/v1/blabla
Can anyone tell me what the best practice is?
Is it OK to write the pattern like this: http://127.0.0.1:8000/membership/api/v1/make-a-payment ?
I am just trying to make it easily discoverable through the Swagger docs.
What is your opinion?
Can anyone tell me what the best practice is?
REST doesn't care what spelling conventions you use for your resource identifiers.
RFC 3986 defines paths and path segments.
The path component contains data, usually organized in hierarchical form...
Many will therefore choose to align their identifier hierarchy with the hierarchy of their resources. It's not required that you do so, but as an organizing principle it's not bad, and of course no worse than any other arbitrarily selected convention.
For example, a common practice is that collection items will have identifier spellings subordinate to the identifier of the collection itself.
/photos <-- the collection
/photos/17 <-- an item in the collection
So you might reasonably ask whether api is one item of several in a membership collection, or if membership is one item of several in an api collection.
You might also want to review relative resolution and how dot-segments can be used to navigate up the hierarchy. If references between memberships and some other idea are more common than references between api and some other idea, then the spelling /api/membership may prove to be the more convenient choice.
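For example, here is how a relative reference with dot-segments resolves under RFC 3986 (the membership paths are hypothetical):
/api/membership/42 <-- base identifier
../plans <-- relative reference
/api/plans <-- result: ".." navigates up one level of the hierarchy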
I think a good guideline is this: any path segment implies the existence of siblings at the same hierarchical level. /membership/api implies the existence of /membership/something-that-isnt-api -- otherwise, why not just /membership ?
What are the benefits of the "Convention over Configuration" paradigm in web development? And are there cases where sticking with it doesn't make sense?
Thanks
Convention states that 90% of the time it will be a certain way. When you deviate from that convention, then you can make changes... versus forcing each and every user to understand each and every configuration parameter. The idea is that if you need it to differ, you will seek it out at that point in time, versus trying to wrap your head around all the configuration parameters when they often have no real value.
IMHO it always makes sense. Making convention the priority over explicit configuration is ideal. Again if someone has a concern, they will force themselves to investigate the need.
I think the benefit is simple: No configuration necessary. You don't need to define locations for this-or-that type of resource, for example, for the app/framework to find them itself.
As for cases where it does not make sense: any situation where it will be fairly frequent that alternative configurations would be required, or where it makes sense that a developer/admin would need to 'opt-in' to some behavior explicitly (for example, to prevent unintended and unexpected side-effects that could have security implications).
The benefit of the convention-over-configuration paradigm in web development is productivity: you are not required to configure all the rules yourself, and there are fewer decisions a programmer has to make. This is evident when using the .NET Framework.
The most obvious benefit is that you have to write less code. Take the case of the Java Persistence API: when you define a POJO with attributes and corresponding setters/getters, it's a simple class. But the moment you annotate it with @javax.persistence.Entity, it becomes an entity object (table) that can be persisted to the database. This is achieved with just a simple annotation and no separate config file.
Another plus is that all your logic lives in one place and in one language (i.e. you get rid of separate XML).
I think this wikipedia article has explained it very well:
Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make without necessarily losing flexibility. The concept was introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework, but is related to earlier ideas like the concept of "sensible defaults" and the principle of least astonishment in user interface design.
The phrase essentially means a developer only needs to specify unconventional aspects of the application. For example, if there is a class Sales in the model, the corresponding table in the database is called "sales" by default. It is only if one deviates from this convention, such as the table "product sales", that one needs to write code regarding these names.
When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
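To make the quoted Sales example concrete, here's a minimal C++ sketch of the convention-plus-override idea; the Mapping struct and table_for helper are hypothetical illustrations, not any real framework's API:

#include <cctype>
#include <string>

// Hypothetical mini-ORM helper: the table name defaults to the lowercased
// class name (the convention); an explicit mapping overrides it (the configuration).
struct Mapping {
    std::string table;                     // empty => derive by convention
};

std::string table_for(const std::string& class_name, const Mapping& m = {}) {
    if (!m.table.empty()) return m.table;  // explicit configuration wins
    std::string t = class_name;
    for (char& c : t)
        c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return t;                              // conventional default
}

// table_for("Sales")                     -> "sales"          (convention)
// table_for("Sales", {"product_sales"}) -> "product_sales"  (configured)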
I heavily rely on CFCs. Sometimes within an application I'll have multiple CFCs containing dozens of functions per CFC. So over time, it's easy to forget or miss already-created functions.
So my question is: how do you manage all these functions? Do you keep a separate document listing all the functions and indexing them that way? Is there a built-in automated feature we can use?
What I've been doing is naming functions more meaningfully, but it's very tedious. There has to be a better way to do this. Just looking for your thoughts.
Thank you in advance.
I don't think there's a magic bullet here. Programmers with a bit more OCD than I have will likely respond and give you an ironclad solution. For me (or my team), I keep a library of common components in a folder that I reuse for various sites and applications. Then I add them as a /util or /lib folder for a given project and use them (or extend them) as needed. Good planning and good documentation (a wiki is a great choice for a team) are a must.
Planning carefully whether to extend a CFC is especially important. Otherwise you have to chase down nested functions that are part of some superclass way down in the weeds (as in: this works, but I really have no idea why it works).
This is where frameworks can provide much needed structure. For common functions and events they generally provide a location and a convention for creating such things. That makes them easy to decipher (as long as you've been indoctrinated into the framework). They have some downsides but they make life a lot easier :)
- You should follow proper naming conventions for each and every CFC.
- Each CFC should be meant for a particular purpose, i.e. a login CFC should only contain login-related functions.
- All common functions should be kept together in one CFC that can be extended by the other CFCs.
- You can use a generic CFC for random functions.
Now, if you want to add a new function for some functionality, you only have to scan three CFCs: the one dedicated to that functionality, the common one, and the generic one. Then add the new function wherever it fits best.
Problem domain
I'm working on a rather big application, which uses a hierarchical data model. It takes images, extracts images' features and creates analysis objects on top of these. So the basic model is like Object-(1:N)-Image_features-(1:1)-Image. But the same set of images may be used to create multiple analysis objects (with different options).
An object or image can then have many other connected objects: for example, an analysis object can be refined with additional data, or complex conclusions (solutions) can be built on top of the analysis object and other data.
Current solution
This is a sketch of the solution. Stacks represent sets of objects and arrows represent pointers (i.e. image features link to their images, but not vice versa). Some parts (images, image features, additional data) may be included in multiple analysis objects, because the user wants to run analyses on different sets of objects, combined differently.
Images, features, additional data and analysis objects are stored in global storage (god-object). Solutions are stored inside analysis objects by means of composition (and contain solution features in turn).
All the entities (images, image features, analysis objects, solutions, additional data) are instances of corresponding classes (like IImage, ...). Almost all the parts are optional (i.e., we may want to discard images after we have a solution).
Current solution drawbacks
1. Navigating this structure is painful when you need connections like the dotted one in the sketch. If you have to display an image with a couple of solution features on top, you first have to iterate through the analysis objects to find which of them are based on this image, and then iterate through the solutions to display them.
2. If, to solve drawback 1, you choose to explicitly store the dotted links (i.e. the image class holds pointers to the solution features related to it), you'll spend a lot of effort maintaining the consistency of these pointers and constantly updating the links when something changes.
My idea
I'd like to build a data model that is more extensible (drawback 2) and flexible (drawback 1). The first idea was to use a relational model, separating objects and their relations. And why not use an RDBMS here: SQLite seems an appropriate engine to me. Complex relations would then be accessible by simple (left) JOINs on the database (pseudocode: "images JOIN images_to_image_features JOIN image_features JOIN image_features_to_objects JOIN objects JOIN solutions JOIN solution_features"), and then the actual C++ objects for the solution features would be fetched from global storage by ID.
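To make that concrete, fetching all solution features for a given image could look like this through the sqlite3 C API; the table and column names below are just the hypothetical ones from my pseudocode:

#include <sqlite3.h>
#include <vector>

// Fetch the IDs of all solution features linked to a given image,
// following the relation tables from the pseudocode above (hypothetical schema).
std::vector<int> solution_feature_ids_for_image(sqlite3* db, int image_id) {
    static const char* sql =
        "SELECT sf.id "
        "FROM images i "
        "JOIN images_to_image_features fi  ON fi.image_id   = i.id "
        "JOIN image_features f             ON f.id          = fi.feature_id "
        "JOIN image_features_to_objects fo ON fo.feature_id = f.id "
        "JOIN solution_features sf         ON sf.object_id  = fo.object_id "
        "WHERE i.id = ?";
    std::vector<int> ids;
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return ids;
    sqlite3_bind_int(stmt, 1, image_id);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ids.push_back(sqlite3_column_int(stmt, 0)); // then look up C++ objects by ID
    sqlite3_finalize(stmt);
    return ids;
}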
The question
So my primary question is:
Is using an RDBMS an appropriate solution for the problems I described, or is it not worth it, and are there better ways to organize the information in my app?
If an RDBMS is OK, I'd appreciate any advice on using an RDBMS and the relational approach to store C++ objects' relationships.
You may want to look at Semantic Web technologies, such as RDF, RDFS and OWL, which provide an alternative, extensible way of modeling the world. There are some open-source triple stores available, and some of the mainstream RDBMSs also have triple-store capabilities.
In particular, take a look at Manchester University's Protégé/OWL tutorial: http://owl.cs.manchester.ac.uk/tutorials/protegeowltutorial/
And if you decide this direction is worth looking at further, I can recommend "Semantic Web for the Working Ontologist".
Just based on the diagram, I would suggest that an RDBMS solution would indeed work. It has been years since I was a developer on an RDBMS (called RDM, of course!), but I was able to renew my knowledge and gain many valuable insights into data structure and layout very similar to what you describe by reading the fabulous book "The Art of SQL" by Stephane Faroult. His book will go a long way toward answering your questions.
I've included a link to it on Amazon, to ensure accuracy: http://www.amazon.com/The-Art-SQL-Stephane-Faroult/dp/0596008945
You will not go wrong by reading it, even if in the end it does not solve your problem fully, because the author does such a great job of breaking down a relation in clear terms and presenting elegant solutions. The book is not a manual for SQL, but an in-depth analysis of how to think about data and how it interrelates. Check it out!
Using an RDBMS to track the links between data can be an efficient way to store and think about the analysis you are seeking, and the links are "soft" -- that is, they go away when the hard objects they link are deleted. This ensures data integrity, and M. Faroult can answer what to do to ensure that remains true.
I don't recommend an RDBMS, based on your requirement for an extensible and flexible model.
Whenever you change your data model, you will have to change the DB schema, and that can involve more work than a change in code.
Any problems with DB queries are discovered only at runtime. This can make a lot of difference to the cost of maintenance.
I strongly recommend using standard C++ OO programming with the STL:
- You can make use of encapsulation to ensure any data change is done properly, with updates to related objects and indexes.
- You can use the STL to build highly efficient indexes on the data (see the sketch at the end of this answer).
- You can create facades to get at the information easily, rather than having to go to multiple objects/collections. This is one-time work.
- You can write unit tests to ensure correctness (much less complicated than unit testing with databases).
- You can make use of polymorphism to build different kinds of objects, different types of analysis, etc.
All very basic points, but I reckon your effort would be best spent improving the current solution rather than looking for a DB-based solution.
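As a rough sketch of the encapsulation-plus-index idea (class names borrowed from your description; a std::multimap is just one possible index choice):

#include <map>
#include <vector>

class Image;             // your existing classes
class SolutionFeature;

// Facade over the global storage: one place that answers the "dotted link"
// question, backed by an index that is only ever updated here.
class Repository {
public:
    void link(const Image* img, SolutionFeature* feat) {
        byImage_.insert({img, feat});            // encapsulated index update
    }
    std::vector<SolutionFeature*> featuresFor(const Image* img) const {
        std::vector<SolutionFeature*> out;
        auto range = byImage_.equal_range(img);  // one lookup, no iteration
        for (auto it = range.first; it != range.second; ++it)
            out.push_back(it->second);
        return out;
    }
private:
    std::multimap<const Image*, SolutionFeature*> byImage_;
};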
http://www.boost.org/doc/libs/1_51_0/libs/multi_index/doc/index.html
"you'll put very much effort maintaining consistency of these pointers
and constantly updating the links when something changes."
With the help of Boost.MultiIndex you can create almost every kind of index on a "table". I think the quoted problem is not so serious, so the original solution is manageable.
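For example, a link "table" with one index per direction could look like this; the Link struct and its fields are hypothetical placeholders for your relations:

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>

// A relation "table" between images and solution features (hypothetical fields).
struct Link {
    int image_id;
    int solution_feature_id;
};

namespace bmi = boost::multi_index;

using LinkTable = bmi::multi_index_container<
    Link,
    bmi::indexed_by<
        bmi::ordered_non_unique<bmi::member<Link, int, &Link::image_id>>,           // index 0
        bmi::ordered_non_unique<bmi::member<Link, int, &Link::solution_feature_id>> // index 1
    >
>;

// Usage: both directions of the dotted link are one lookup away.
// LinkTable links;
// links.insert({17, 42});
// auto range = links.get<0>().equal_range(17); // all features for image 17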
We are currently building a pile of SOAP Web Services to front access to various backend systems.
While defining our request/response message XML, we see multiple services needing the 'Account' object with different 'mandatory/optional' fields.
How should we define and enforce the validation of these 'mandatory/optional' fields on the same message? I see these options:
1) Enforce validation with XSD by creating different 'Account' complex types.
Pros: design-time clarity.
Cons: proliferation of object types, less reuse of objects.
2) Enforce validation with XSD by extending + restricting a single base 'Account' type.
Pros: design-time clarity.
Cons: not sure of the support for the extension + restriction feature (Java, .NET).
3) Use a single 'Account' type and enforce validation at runtime (i.e. in the code); a sketch of this appears below.
Pros: simple.
Cons: no design-time validation; field requirements must be communicated via a specification doc.
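For option 3, I picture something like this per-service check; the Account fields and the required-field sets below are just illustrative, not our real message:

#include <set>
#include <stdexcept>
#include <string>

// Hypothetical Account message in which every field is optional in the XSD.
struct Account {
    std::string id, name, iban;   // empty == not supplied
};

// Option 3: each service declares its own required fields and checks at runtime.
void require_fields(const Account& a, const std::set<std::string>& required) {
    auto check = [&](const char* field, const std::string& value) {
        if (required.count(field) && value.empty())
            throw std::invalid_argument(std::string("missing field: ") + field);
    };
    check("id", a.id);
    check("name", a.name);
    check("iban", a.iban);
}

// e.g. a payment service: require_fields(acct, {"id", "iban"});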
What are your thoughts on that?
I would have to assume that: i) some of what you call optional fields are actually fields that are not applicable (don't make sense) for all accounts, and ii) we're not talking about trivial scenarios (like two types of accounts with two fields each, kind of thing).
Firstly, I would say that unless you're really lucky from a requirements perspective, you're going to end up with some sort of "validation at runtime" no matter which option you go with. XML Schema can't express some common data-validation requirements (such as cross-field validation), and sometimes the data in your XML is simply not sufficient to feed the rules that validate the integrity of the message (the data in the message being a subset of what's available at the time the XML is being un/marshalled).
Secondly, I would avoid deriving new complex types through restriction; from an authoring perspective you don't achieve much in terms of reuse, and you might end up with problems in how it is interpreted by your XSD-to-code tooling. I like to think that the original intention of deriving through restriction was to provide a tool for xsd:redefine scenarios, for people who don't want to fiddle with XML Schemas authored by someone else. If you own (author) the schema, you can work around the need to restrict by defining the "lesser" object first and extending from that.
As to the "proliferation of objects", you get that with option #2 as well (when compared with #1). What I mean is that all the tools I know of will create a class for each named (global) complex type in your XSD; so if you have three types of accounts, you'll have three classes for scenario #1, and four or so if you choose to extend from one or more base classes (the worst case for the latter being when you need three concrete specializations). Anyway, in my experience the difference in real-life scenarios is not something that would really tip the decision one way or the other.
Extending base types in XML Schema is good for reuse; however, reuse brings coupling. If you're analysing this from a forward/backward-compatibility point of view, extending something in the base type could break unmarshalling (deserialization) for clients of your service(s) that don't want to change their code base, while you want to maintain only one Web Service endpoint for all. In that case, a forward-compatibility strategy that relies on an xsd:any at the end of a compositor (xsd:sequence) would be rendered useless by the first release that extends your base type.
There's more to it; because of all this, I don't think there's a single correct answer, at least not for the criteria you seem to imply with your pros and cons.
All of my preferred options below assume that you put high value on the requirement to ensure forward/backward compatibility of your services, and you want to minimize the cost of your clients having to deal with your services (because of XML Schema changes).
I would say that if all your domain (accounts in particular) can be fully modeled (assume no future change basically) and that there is enough commonality to justify reuse, then go with option #2. Otherwise, go with option #1 since I have yet to see things that don't change...
If the modeling of your domain can be done 80% or more (or some number you consider high) and there is enough commonality to justify reuse, then I would still go with option #2, with the caveat that any future extensions for attributes common across accounts must be applied to each individual account (basically turning your option into a hybrid of #2 and #1).
For anything else, I would go with #1. Whew, I can't believe I wrote all of this...
So, I've come back to ask, once more, a patterns-related question. This may be too generic to answer, but my problem is this (I am programming and applying concepts that I learn as I go along):
I have several structures within structures (note, I'm using the word structure in the general sense, not in the strict C struct sense (whoa, what a tongue twister)), and quite a bit of complicated inter-communications going on. Using the example of one of my earlier questions, I have Unit objects, UnitStatistics objects, General objects, Army objects, Soldier objects, Battle objects, and the list goes on, some organized in a tree structure.
After researching a little bit and asking around, I decided to use the mediator pattern because the interdependencies were becoming a trifle too much, and the classes were starting to appear too tightly coupled (yes, another term which I just learned and am too happy about not to use it somewhere). The pattern makes perfect sense and it should straighten some of the chaotic spaghetti that I currently have boiling in my project pot.
But well, I guess I haven't yet learned enough about OO design. My question is this (finally; PS, I hope it makes sense): should I have one central mediator that deals with all communications within the program, and is that even possible? Or should I have, say, an abstract mediator and one subclassed mediator per structure type that deals with the communication of a particular set of classes, e.g. a concrete mediator per army which helps out the army, its general, its units, etc.?
I'm leaning more towards the second option, but I really am no expert when it comes to OO design. So my third question is: what should I read to learn more about this kind of subject? (I've looked at Head First's Design Patterns and the GoF book, but they're more of a "learn the vocabulary" kind of book than a "learn how to use your vocabulary" kind of book, which is what I need in this case.)
As always, thanks for any and all help (including the witty comments).
I don't think you've provided enough info above to be able to make an informed decision as to which is best.
From looking at your other questions, it seems that most of the communication occurs between components within an Army. You don't mention much occurring between one Army and another. In that case it would seem to make sense to have each Mediator instance coordinate communication between the components comprising a single Army, i.e. the Generals, Soldiers, etc. So if you have ten Armies then you will have ten ArmyMediators.
If you really want to learn O-O Design you're going to have to try things out and run the risk of getting it wrong from time to time. I think you'll learn just as much, if not more, from having to refactor a design that doesn't quite model the problem correctly into one that does, as you will from getting the design right the first time around.
Often you just won't have enough information up front to be able to choose the right design from the get-go anyway. Just choose the simplest one that works for now, and improve it later when you have a better idea of the requirements and/or the shortcomings of the current design.
Regarding books, personally I think the GoF book is more useful if you focus less on the specific set of patterns they describe, and focus more on the overall approach of breaking classes down into smaller reusable components, each of which typically encapsulates a single unit of functionality.
I can't answer your question directly, because I have never used that design pattern. However, whenever I have this problem of message passing between various objects, I use the signal-slot pattern. Usually I use Qt's, but my second option is Boost's. They both solve the problem by having a single, global message-passing handler. They are also both type-safe and quite efficient, both in terms of CPU cycles and in productivity. Because they are so flexible (i.e. any object can emit any kind of signal, and any other object can receive any signal), you'll end up solving, I think, what you describe.
Sorry if I just made things worse by not choosing either of the 2 options, but instead adding a 3rd!
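For what it's worth, a minimal Boost.Signals2 sketch of the idea; the unit_damaged signal and the reacting slots are made-up examples:

#include <boost/signals2.hpp>
#include <iostream>

int main() {
    // Any object can own a signal; any other object can connect a slot to it.
    boost::signals2::signal<void(int)> unit_damaged;

    unit_damaged.connect([](int unit_id) {   // e.g. the Army reacts
        std::cout << "army notified: unit " << unit_id << " damaged\n";
    });
    unit_damaged.connect([](int unit_id) {   // e.g. the General reacts
        std::cout << "general notified: unit " << unit_id << " damaged\n";
    });

    unit_damaged(42);                        // emit: both slots run
}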
In order to use Mediator you need to determine:
(1) What does the group of objects that need mediation consist of?
(2) Among these, which are the ones that have a common interface?
The Mediator design pattern relies on the group of objects that are to be mediated having a "common interface", i.e. the same base class: the widgets in the GoF book example inherit from the same Widget base, etc.
So, for your application:
(1) Which are the structures (Soldier, General, Army, Unit, etc.) that need mediation between each other?
(2) Which ones of those (Soldier, General, Army, Unit, etc.) have a common base?
This should help you determine, as a first step, an outline of the participants in the Mediator design pattern. You may find that some structures in (1) fall outside of (2). Then you may need to force them to adhere to a common interface too, if you can change them or if you can afford to make that change... (it may turn out to be too much redesign work, and it violates the Open-Closed Principle: your design should be, as much as possible, open to adding new features but closed to modifying existing ones).
If you discover that (1) and (2) above result in a partition into separate groups, each with its own mediator, then the number of these partitions dictates the number of different types of mediators. Now, should these different mediators have a common interface of their own? Maybe, maybe not. Polymorphism is a way of handling complexity by grouping different entities under a common interface so that they can be handled as a group rather than individually. So, would there be any benefit to grouping all these supposedly different types of mediators under a common interface (like the DialogDirector in the GoF book example)? Possibly, if:
(a) You may have to use a heterogeneous collection of mediators;
or
(b) You envision that these mediators will evolve in the future (and they probably will). Providing an abstract interface then allows you to derive more evolved versions of mediators without affecting existing ones or their colleagues (the clients of the mediators).
So, without knowing more, I'd have to guess that, yes, it's probably better to use abstract mediators and subclass them for each group partition, just to prepare for future changes without having to redesign your mediators (remember the Open-Closed Principle).
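As a rough C++ sketch of that outline (the Army/Soldier names are borrowed from your question; one concrete mediator per Army behind an abstract Mediator base):

#include <iostream>
#include <string>
#include <vector>

class Colleague;

// Abstract mediator: lets you derive evolved versions later (Open-Closed).
class Mediator {
public:
    virtual ~Mediator() = default;
    virtual void changed(Colleague& who, const std::string& event) = 0;
};

// The common interface the pattern relies on.
class Colleague {
public:
    explicit Colleague(Mediator& m) : mediator_(m) {}
    virtual ~Colleague() = default;
    virtual void notify(const std::string& event) = 0;
protected:
    Mediator& mediator_;
};

// One concrete mediator per Army coordinates that Army's components.
class ArmyMediator : public Mediator {
public:
    void add(Colleague& c) { colleagues_.push_back(&c); }
    void changed(Colleague& who, const std::string& event) override {
        for (Colleague* c : colleagues_)
            if (c != &who) c->notify(event);  // relay to every other colleague
    }
private:
    std::vector<Colleague*> colleagues_;
};

class Soldier : public Colleague {
public:
    using Colleague::Colleague;
    void notify(const std::string& event) override {
        std::cout << "Soldier reacts to: " << event << '\n';
    }
    void charge() { mediator_.changed(*this, "charge!"); } // talk via the mediator
};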
Hope this helps.