I'm looking for a possibility to monitor all messages in an SOA environment with an intermediary that is designed to enforce different rule sets over message structure and sequencing (e.g., it would check and ensure that Service A is consumed before Service B).
Obviously the first idea that came to mind was WS-Addressing, but I'm not sure it helps here, as I don't see any mechanism in it to ensure that a message gets delivered via a given intermediary (as there is in WS-Routing, an outdated proprietary protocol by Microsoft).
Or maybe there's even a different approach in which the monitor wouldn't be part of the route but would be notified of requests/responses, though that might in turn make it harder to actively enforce rules.
I'm looking forward to any suggestions.
You can implement a "service firewall" by intercepting all the calls in each service as part of your basic service host. Alternatively, you can use third-party solutions and route all your service calls through them (they do the intercepting and then forward the calls to your services).
You can use an ESB to do the routing (and intercepting), or you can use dedicated solutions like IBM's DataPower, the XML firewall from Layer 7, etc.
For all my (technical) services I use messaging and the command processor pattern, which I describe here without actually naming the pattern. I send a message and the framework finds the corresponding class that implements the interface matching my message. I can create multiple classes that handle my message, or a single class that handles a multitude of messages. In the article these are classes implementing the IHandleMessages interface.
Either way, as long as I can create multiple classes implementing this interface, and they are all called, I can easily add auditing without mixing this logic into my business logic. Just add an additional implementation for every single message, or enhance the framework so one IHandleMessages implementation receives all messages. That class can then audit every single message and store them all centrally.
After doing that, you can find out more about the messages and the flow. For example, if you put into the headers of your WCF/MSMQ message where it came from, and perhaps a unique identifier for that single message, you can track the flow across the various components.
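To illustrate the shape of this, here is a minimal sketch in Python (this is not the NServiceBus API; all names are made up):

# Minimal sketch of the command processor pattern with an audit handler.
# Hypothetical names; not the NServiceBus API.
audit_log = []

class PlaceOrderHandler:
    def handle(self, message):
        print("processing order", message["body"])  # business logic lives here

class AuditHandler:
    # Registered alongside the business handlers; stores every message
    # centrally, together with its origin and unique id from the headers.
    def handle(self, message):
        audit_log.append((message["origin"], message["id"], message["body"]))

def dispatch(message, handlers):
    # The framework calls every handler registered for this message type.
    for handler in handlers:
        handler.handle(message)

dispatch({"origin": "web-frontend", "id": "42", "body": {"sku": "A-1"}},
         [PlaceOrderHandler(), AuditHandler()])

Because the audit handler is just one more implementation, the business handlers never know auditing is happening.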
NServiceBus also has this auditing functionality, and the team is working on additional tooling for it, called ServiceInsight.
Hope this helps.
I've been diving into REST lately, and a few things still bug me:
1) Since there are only resources and no services to call, how can I provide operations to the client that only do stuff and don't change any data?
For example, in my application it is possible to trigger a service that connects to a remote server and executes a shell script. I don't see how this scenario would map to a resource.
2) Another thing I'm not sure about is side effects: Let's say I have a resource that can be in certain states. When transitioning into another state, a lot of things might happen (e-mails might be sent). The transition is triggered by the client. Should I handle this transition merely by letting the resource be updated via PUT? This feels a bit odd.
For the client this means that updating an attribute of this resource might only change the attribute, or it might also do a lot of other things. So PUT =/= PUT, kind of.
And implementation-wise, I have to check what exactly the PUT request changed, and trigger the side effects accordingly. So there would be a lot of checks like if(old_attribute != new_attribute) {side_effects}.
Is this how it's supposed to be?
BR,
Philipp
Since there are only resources and no services to call, how can I provide operations to the client that only do stuff and don't change any data?
HTTP is a document transport application. Send documents (ie: messages) that trigger the behaviors that you want.
In other words, you can think about the message you are sending as a description of a task, or as an entry being added to a task queue. "I'm creating a task resource that describes some work I want done."
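For the shell-script scenario from the question, that might look like this (hypothetical URLs and payload):

POST /script-executions HTTP/1.1
Host: example.com
Content-Type: application/json

{"script": "cleanup.sh", "target": "remote-server-01"}

HTTP/1.1 202 Accepted
Location: /script-executions/17

The client can then GET /script-executions/17 to see whether the task has completed and what its outcome was.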
Jim Webber covers this pretty well.
Another thing I'm not sure about is side effects: Let's say I have a resource that can be in certain states. When transitioning into another state, a lot of things might happen (e-mails might be sent). The transition is triggered by the client. Should I handle this transition merely by letting the resource be updated via PUT?
Maybe, but that's not your only choice -- you could handle the transition by having the client put some other resource (i.e., a message describing the change to be made). That affords having a number of messages (commands) that describe very specific modifications to the domain entity.
In other words, you can work around PUT =/= PUT by putting more specific things.
(In HTTP, the semantics of PUT are effectively create-or-replace, which is great for dumb documents, or CRUD, but needs a bit of design help when applied to an entity with its own agency.)
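For example, instead of PUTting the whole resource with a changed state attribute, the client could POST a transition as its own resource (hypothetical URLs):

POST /orders/42/cancellations HTTP/1.1
Host: example.com
Content-Type: application/json

{"reason": "customer request"}

Now the server knows exactly which transition was requested, and the e-mail side effect hangs off that one message rather than off a diff of old and new attributes.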
And implementation-wise, I have to check what exactly the PUT request changed, and trigger the side effects accordingly.
Is this how it's supposed to be?
Sort of. Review Udi Dahan's talk on reliable messaging; it's not REST specific, but it may help clarify the separation of responsibilities here.
Is there any method to identify from which source an API is called? By "source" I mean an iOS application, a web application (a page or button click, Ajax calls, etc.), and so on.
Passing a flag like ?source=ios or ?source=webapp when calling the API would work, but I just wanted to know whether there is a better option to accomplish this.
I also feel this requirement is odd, because in general an app or a web application is used by any number of users, so it is difficult to monitor that many API calls.
Please share your suggestions.
There is no perfect way to solve this. Designating a special flag won't solve your problem, because consumers can put in whatever they want and you cannot be sure whether it is legitimate. The same holds true if you issue different API keys to different consumers - you never know if they decide to switch them up.
The only option that comes to mind is to analyze the HTTP headers and see what you can deduce from them. As you probably know, a typical HTTP request looks something like this:
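GET /api/users HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15
Accept: application/json
Referer: https://www.example.com/dashboard

(The values above are only an illustration; the headers that matter for your purposes are User-Agent and Referer.)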
You can try and see how the requests from the various sources differ in your case and decide whether you can reliably differentiate between them. If you have the luxury of developing the client (i.e., this is not a public API), you can set custom User-Agent strings for the different sources.
But keep in mind that the Referer header is not mandatory, and thus not very reliable, and the User-Agent can also be spoofed. So it is a solution that is better than nothing, but not 100% reliable.
Hope this helps; also, here is a similar question. Good luck!
I am writing a C++ API which is to be used as a web service. The functions in the API take images/paths_to_images as input parameters, process them, and give a different set of images/paths_to_images as outputs. I was thinking of implementing a REST interface to enable developers to use this API in their projects (independent of whatever language they'd like to work in). But I understand REST is good only when you have a collection of data that you want to query or manipulate, which is not exactly the case here.
[The collection I have is of different functions that manipulate the supplied data.]
So, is it better for me to implement an RPC interface for this, or can this be done using REST itself?
Like lcfseth, I would also go for REST. REST is indeed resource-based and, in your case, you might consider that there's no resource to deal with. However, that's not exactly true, the image converter in your system is the resource. You POST images to it and it returns new images. So I'd simply create a URL such as:
POST http://example.com/image-converter
You POST images to it and it returns some array with the paths to the new images.
Potentially, you could also have:
GET http://example.com/image-converter
which could tell you about the status of the image conversion (assuming it is a time consuming process).
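A hypothetical exchange (the exact request and response formats are up to you) might be:

POST /image-converter HTTP/1.1
Host: example.com
Content-Type: application/json

{"images": ["/uploads/in1.png", "/uploads/in2.png"]}

HTTP/1.1 200 OK
Content-Type: application/json

{"results": ["/converted/out1.png", "/converted/out2.png"]}

For a long-running conversion you could instead return 202 Accepted with a Location header pointing at a job resource, which the GET above would then report on.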
The advantage of doing it like that is that you are re-using HTTP verbs that developers are familiar with, the interface is almost self-documenting (though of course you still need to document the format accepted and returned by the POST call). With RPC, you would have to define new verbs and document them.
REST uses the common operations GET, POST, DELETE, HEAD, and PUT. As you can imagine, this is very data oriented. However, there is no restriction on the data type and no restriction on the size of the data (none I'm aware of, anyway).
So it's possible to use it in almost every context (including sending binary data). One of the advantages of REST is that web browsers understand it, so your users won't need a dedicated application to send requests.
RPC presents more possibilities and can also be used. You can define custom operations for example.
Not sure you need that much power given what you intend to do.
Personally I would go with REST.
Here's a link you might wanna read:
http://www.sitepen.com/blog/2008/03/25/rest-and-rpc-relationship/
Compared to RPC, a REST (JSON-style) interface is lightweight and easy for API users to consume. RPC (SOAP/XML) seems complex and heavy.
I guess what you want is an HTTP+JSON based API, not a REST API as defined by REST's author:
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
I'm using Django to implement a private REST-like API and I'm unsure of how to handle different versions of the API on the backend.
Meaning, if I have two versions of the API, what does my code look like? Should I have different apps that handle different versions? Should different functions handle different versions? Or should I just use if statements where one version differs from another?
I plan on stating the version in the header.
Thanks
You do not need to version REST APIs. With REST, versioning happens at runtime either through what one might call 'must-ignore payload extension rules' or through content negotiation.
'must-ignore payload extension rules' refer to an aspect you build into the design of your messages. 'Must-ignore' means that a piece of software that processes a message of the given format must ignore any unknown syntactical constructs. This is what we all know from HTML and what makes it possible to insert all sorts of fancy tags into an HTML page without the parser choking.
'Must-ignore' allows you to evolve the capabilities of your service by adding stuff to what you send already without considering clients that only understand the older versions.
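For example (hypothetical payloads), if version 1 of a customer representation is

{"name": "Ada", "email": "ada@example.com"}

and a later version adds a field,

{"name": "Ada", "email": "ada@example.com", "nickname": "ada"}

then a must-ignore client that only knows the version-1 fields keeps working; it simply skips "nickname".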
Content-negotiation refers to the HTTP-built-in mechanism of negotiating the actual representation the server sends to a given client at runtime. The typical scenario is this: Clients send the Accept header in the request to advertise what they are capable of and servers pick the representation to send back based on these capabilities. But there are also variations of this theme (see here for details: http://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html ).
Content negotiation allows for incompatible changes, meaning that I can evolve my service to being able to send incompatible old and new versions and based on the Accept header my service will send the appropriate one.
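A hypothetical exchange using a versioned media type looks like this:

GET /customers/42 HTTP/1.1
Host: api.example.com
Accept: application/vnd.example.customer-v2+json

HTTP/1.1 200 OK
Content-Type: application/vnd.example.customer-v2+json

A client that still only advertises the v1 media type keeps receiving the v1 representation; the URI never changes.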
Bottom line: with both approaches, your API remains as it is. No need to do any versioning at the API level - especially not the often suggested (but totally wrong) inclusion of version identifiers in the URIs (remember, you are doing REST here, not SOAP!)
The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service APIs, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference in a new project, copy the generated Reference.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use three web services for the same integration. Two of these contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object, or list of objects, that is exactly the same semantically. And I need to process the data the same way, whether it came from one service or the other.
My solution was to strip out the duplicated classes in the service proxies and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so the change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
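With WCF's svcutil, for example, you can point the generator at an assembly containing the shared types (the service URLs and assembly name here are hypothetical):

svcutil http://servicea.example.com/service?wsdl /reference:Common.Types.dll
svcutil http://serviceb.example.com/service?wsdl /reference:Common.Types.dll

Both generated proxies then reuse the matching types from Common.Types.dll instead of generating their own copies.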
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you can always use the same objects, as they are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
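A minimal sketch of that layer (in Python for brevity; the idea translates directly to VB.NET, and all type names are hypothetical):

from collections import namedtuple

# Stand-ins for the generated proxy types from the two services.
SearchUser = namedtuple("SearchUser", "Id FullName")
BlockUser = namedtuple("BlockUser", "Id FullName")

class User:
    # Your own domain type; the rest of the code only ever sees this.
    def __init__(self, user_id, name):
        self.user_id, self.name = user_id, name

def to_domain(proxy_user):
    # One adapter per proxy type; here both happen to map identically.
    return User(proxy_user.Id, proxy_user.FullName)

users = [to_domain(SearchUser(1, "Ada")), to_domain(BlockUser(2, "Grace"))]

The adapters are the only place that knows about the generated namespaces, so regenerating a proxy only ever touches that one spot.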
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g., you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which gets used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures match. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.