Logical Layer to connect multiple .Net Services - web-services

I am not sure if this is the appropriate place for this, but I have come up with a "conceptual" modular design architecture that separates the logic out into individual services to allow an almost plug-and-play scenario whereby there are no dependencies between the services. Think of a list of features where you enable only the ones you want.
To facilitate this, I realise that I will need some type of middleware that will connect these all together and control the flow of data. However, I am not sure of the specifics of what would be appropriate to achieve this.
I plan on implementing the services as .NET SOAP-based services, so is this a case of using something like TIBCO?
Any suggestions around what would be most appropriate or even where to start looking would be great.
If the above description didn't make sense hopefully this image is a bit clearer in describing the relationship between the services.
Thanks.

Depending on your needs you could use NServiceBus (http://particular.net/nservicebus). NServiceBus is communication middleware that can be used with different types of queuing systems such as MSMQ, RabbitMQ and others. It is essentially a service bus that is very developer friendly and focused. It not only facilitates asynchronous, message-based distributed communication but also provides:
Publish / Subscribe that is transport agnostic using automatic registration
Transports: Can be used with MSMQ, RabbitMQ, Azure Storage Queues, etc.
Security: Supports encryption of messages
BLOBs: Supports storing large message payloads transparently via the data bus, to allow communicating messages larger than the transport allows.
Scalability: Scaling out and up to increase throughput
Reliability: Deduplication and idempotent processing without requiring distributed transactions.
Orchestration: Sagas can help in controlling message flow and routing.
Exception handling: Failed messages are automatically retried in two different stages.
Monitoring: Tools like ServicePulse, ServiceInsight and Windows performance counters let you monitor performance and see which errors occurred.
Serialization: Can use different serializers supporting formats such as XML, JSON and binary
Open Source: All source code is available
Auditing: Can move all processed messages to an audit queue for archiving or audit requirements
Community: Has a large community of developers who are active on the forums and also supply additional transports, serializers and other features.
I must mention that I work for Particular, but also that there are other options to consider. NServiceBus does not use SOAP for message exchange but a lightweight message in a format of your choice, as mentioned in the serialization bullet. It can integrate with services that require SOAP: it can expose a service (endpoint) as a WCF service for easy integration, and it can call external SOAP services from within code using the features that the .NET Framework and Visual Studio provide.
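NServiceBus itself is a .NET product, but the core idea behind the publish/subscribe bullet above is easy to sketch. Below is a minimal, hypothetical illustration in Python (not the NServiceBus API): services register handlers for message types and communicate only through the bus and its queue, so no service depends on another, matching the "no dependencies between the services" goal in the question.

```python
from collections import defaultdict
from queue import Queue

class ToyBus:
    """Minimal in-process stand-in for a service bus (illustration only)."""
    def __init__(self):
        self._handlers = defaultdict(list)   # message type -> subscriber handlers
        self._queue = Queue()                # pending messages

    def subscribe(self, message_type, handler):
        self._handlers[message_type].append(handler)

    def publish(self, message_type, payload):
        # Asynchronous: enqueue rather than calling subscribers directly.
        self._queue.put((message_type, payload))

    def process_pending(self):
        while not self._queue.empty():
            message_type, payload = self._queue.get()
            for handler in self._handlers[message_type]:
                handler(payload)

# Two "services" that know nothing about each other, only about message types.
bus = ToyBus()
billing_log = []
bus.subscribe("OrderPlaced", lambda order: billing_log.append(order["id"]))
bus.publish("OrderPlaced", {"id": 42})
bus.process_pending()
```

A real bus adds durable transports, retries and serialization on top of this shape, but the decoupling mechanism is the same.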
Good luck in choosing the right technology for your project.

Related

Avro message for Google Cloud Pub-Sub?

What is the best data format for publishing and consuming to/from Pub/Sub? I am looking at the Avro message format due to its binary encoding.
The use case is real-time microservice applications publishing Avro messages to Pub/Sub. Given that an Avro message is best suited to batching up messages (along with a schema attached to the binary message) before publishing, would it be a suitable format for this microservice use case?
The Google Cloud documentation contains some JSON examples, but when looking for efficiency the main suggestion is to use the available client libraries, except if your needs aren't met by what the client libraries can offer or if you are running on the Google App Engine standard environment, in which case the use of two APIs is suggested.
In fact, the most important factor for efficiency is using the gRPC API instead of the REST API (which libraries' calls do by default). As mentioned here:
There are two major factors at work here: more efficient data encoding and HTTP/2. gRPC keeps data in binary both in client memory and on the wire by building on HTTP/2 and Protocol Buffers. This eliminates processing and space required for string encoding schemes such as Base64 or JSON. In addition, HTTP/2 itself makes things go faster with multiplexed requests over a single connection and header compression.
I did not find any explicit mention of data formats. I suggest using your preferred language for the message, for example Python. The client library description is here and sample code is here.
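The encoding overhead the quote above refers to is easy to demonstrate. This self-contained snippet (unrelated to any Google API) shows how Base64-encoding binary data, as a text transport like JSON requires, inflates it by a third, which binary protocols such as gRPC with Protocol Buffers avoid:

```python
import base64
import os

raw = os.urandom(3000)            # arbitrary binary payload
encoded = base64.b64encode(raw)   # what a text-based (JSON) transport would carry

# Base64 maps every 3 input bytes to 4 output characters: ~33% larger.
overhead = len(encoded) / len(raw)
print(f"raw={len(raw)} bytes, base64={len(encoded)} bytes, overhead={overhead:.2f}x")
```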
Based on this StackOverflow post, you can optimize your Pub/Sub system efficiently by:
Making sure you are using gRPC
Batching where possible, to reduce the number of calls and amortize per-call latency.
Only compressing when needed and after benchmarking (implies extra logic in your application)
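The "compress only when needed and after benchmarking" point above can be made concrete. This is a hypothetical helper (not part of any Pub/Sub client library) that keeps a payload uncompressed unless compression actually shrinks it enough to be worthwhile; repetitive JSON compresses well, while random or already-compressed data does not:

```python
import os
import zlib

def maybe_compress(payload: bytes, min_ratio: float = 0.9):
    """Return (data, compressed_flag); keep the original unless zlib
    shrinks it below min_ratio of its original size."""
    compressed = zlib.compress(payload)
    if len(compressed) < len(payload) * min_ratio:
        return compressed, True
    return payload, False

text = b'{"client": "acme"}' * 100   # repetitive JSON: compresses well
random_like = os.urandom(1000)       # incompressible: compression only adds overhead

data1, did1 = maybe_compress(text)
data2, did2 = maybe_compress(random_like)
```

The flag would then travel as a message attribute so subscribers know whether to decompress; that conditional logic is the "extra logic in your application" the bullet warns about.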
Finally, if you intend to deploy a robust Pub/Sub system, have a look at this post by Anusha Ramesh. She is a Project Manager at Google and elaborates on three tips:
Don't underestimate the importance of capacity planning.
Make sure your pub/sub system is fault-tolerant.
NSM: Never Stop Monitoring.
There isn't going to be one correct answer for the best format to use for the messages for all use cases. Avro is certainly a popular choice. Protocol buffers would be another possibility, as would Thrift. For Pub/Sub, the data is all just bytes and it is up to the publisher and the subscriber to determine the interpretation of this data. People have run comparisons on the different data formats, so you may want to make the decision based on your needs in terms of performance and message sizes.
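Since Pub/Sub just carries bytes, the format is purely a contract between publisher and subscriber. A tiny comparison using only the standard library (Avro, Protocol Buffers and Thrift would behave like the binary case, with the schema held outside the message) illustrates why binary encodings tend to be smaller than JSON for numeric data:

```python
import json
import struct

record = {"sensor_id": 17, "temperature": 21.5, "humidity": 0.48}

# Text encoding: self-describing but verbose.
as_json = json.dumps(record).encode("utf-8")

# Binary encoding: fixed layout agreed on by publisher and subscriber
# (int32 + two float64s = 20 bytes, regardless of field names).
as_binary = struct.pack("<idd", record["sensor_id"],
                        record["temperature"], record["humidity"])

# Subscriber side: both decode back to the same values.
sensor_id, temperature, humidity = struct.unpack("<idd", as_binary)
```

Whether that size difference matters for your throughput and costs is exactly the kind of benchmark-driven decision the answer recommends.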
Pub/Sub itself uses Protocol buffers for defining its data types. With regard to batching, the Cloud Pub/Sub client libraries do batching themselves for publish, so you don't necessarily have to worry about that on your own. You can control the batch settings to optimize throughput and latency based on your use case by calling, for example, setBatchSettings in the Publisher.Builder for Java (other languages have an equivalent as well). You may decide to do your own batching if you want to associate some metadata with a set of messages instead of with each individual message or you have very specific needs in terms of how messages are batched together. Otherwise, depending on the client library to do the batching is probably the correct decision.
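The batching behaviour described above can be sketched generically. This is a toy batcher, not the actual client-library implementation, that flushes either when enough messages accumulate or when the oldest pending message has waited too long, the same two knobs `setBatchSettings` exposes:

```python
import time

class ToyBatcher:
    """Illustrative publisher-side batcher: flush on batch size or age."""
    def __init__(self, send, max_messages=3, max_delay_s=0.05):
        self.send = send                  # callable that transmits one batch
        self.max_messages = max_messages
        self.max_delay_s = max_delay_s
        self._batch = []
        self._oldest = None

    def publish(self, message):
        if not self._batch:
            self._oldest = time.monotonic()
        self._batch.append(message)
        if (len(self._batch) >= self.max_messages
                or time.monotonic() - self._oldest >= self.max_delay_s):
            self.flush()

    def flush(self):
        if self._batch:
            self.send(list(self._batch))
            self._batch.clear()

sent_batches = []
batcher = ToyBatcher(sent_batches.append)
for i in range(7):
    batcher.publish(i)
batcher.flush()   # drain the final partial batch
```

Tuning `max_messages` up favours throughput; tuning `max_delay_s` down favours latency, which mirrors the trade-off the batch settings control in the real libraries.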

Web services, architectural design advice for central logging

We have a number of SOAP and REST web services which provide legal information to clients. Management demands that we log all the information requested through these services. Using the logs, they want to collect statistics and bill clients.
My colleague offered to use central relational database for logging.
I don't like this solution because the number of services is growing, and I think such an architecture will become a performance bottleneck.
Can you advise me what architectural design will be good for such kind of task ?
When you say the central database will be a bottleneck, do you mean that it will be too slow to keep up with the logging requests? Or are you saying you are expecting database changes for each logging request type?
I would define a more generic payload for logging (figure out your minimum standardized fields), and then create a database for those logs.
<log><loglevel>INFO</loglevel><systemName>ClientValueActualizer</systemName><userIp>123.123.123.432</userIp><logpayload><![CDATA[useful payload for billing]]></logpayload></log>
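A sketch of producing that standardized record with Python's standard library, using the field names from the example above; any service could emit this shape regardless of its own payload (note that ElementTree escapes the payload text rather than wrapping it in CDATA, which serves the same purpose of keeping it opaque):

```python
import xml.etree.ElementTree as ET

def build_log_entry(level, system, user_ip, payload):
    """Build the generic <log> record; the payload stays opaque to the log store."""
    log = ET.Element("log")
    ET.SubElement(log, "loglevel").text = level
    ET.SubElement(log, "systemName").text = system
    ET.SubElement(log, "userIp").text = user_ip
    ET.SubElement(log, "logpayload").text = payload
    return ET.tostring(log, encoding="unicode")

entry = build_log_entry("INFO", "ClientValueActualizer", "123.123.123.432",
                        "useful payload for billing")
```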
If you are worried about capacity, you could throw a queue in front of it, which would have the advantage of not bogging down the client if the logs are busy.
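Putting a queue in front of the log store can be sketched in a few lines: service threads enqueue and return immediately, while a background worker drains the queue into the database. This is a hypothetical in-process version; a real deployment would put MSMQ, RabbitMQ or similar between the services and the writer:

```python
import queue
import threading

log_queue = queue.Queue()
stored = []   # stand-in for the central log database

def log_writer():
    """Single consumer draining the queue into the store."""
    while True:
        entry = log_queue.get()
        if entry is None:          # sentinel: shut down
            break
        stored.append(entry)       # real code: INSERT into the log DB

worker = threading.Thread(target=log_writer)
worker.start()

# Service request handlers just enqueue; they never wait on the database.
for i in range(5):
    log_queue.put({"request_id": i, "client": "acme"})

log_queue.put(None)   # tell the worker to stop
worker.join()
```

The client-facing request finishes as soon as `put` returns, so a busy log store slows down only the writer, not the services.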
You can decouple the consumption of these messages into separate systems, each of which can understand the various payloads. The risk here is that if you want to add new attributes, it will be difficult to control which systems are sending what. But that's a general issue with decoupled services.
You can consider Apache Kafka as a distributed commit log. It is good performance-wise because it scales out horizontally, and it delivers messages only when clients pull them.
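The "clients pull at their own pace" property is the key difference from push-based logging. A minimal in-memory sketch (nothing like Kafka's real implementation, just its consumption model): the log is append-only, and each consumer tracks its own offset independently.

```python
class ToyCommitLog:
    """Append-only log; consumers pull from their own offsets (illustration)."""
    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)
        return len(self._entries) - 1          # offset of the new entry

    def read_from(self, offset, max_count=10):
        return self._entries[offset:offset + max_count]

log = ToyCommitLog()
for n in range(4):
    log.append({"event": n})

# Two independent consumers, each with its own position in the log:
# billing replays everything, statistics has already processed two entries.
billing_batch = log.read_from(0)
stats_batch = log.read_from(2)
```

This is why a log suits the billing-plus-statistics requirement: both consumers read the same entries without slowing down the services that wrote them.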

Open source ESBs supporting web service transactions?

Do any of the major open source ESBs such as Mule or ServiceMix properly support web service transaction specifications (like http://en.wikipedia.org/wiki/WS-Atomic_Transaction)?
I've just briefly looked but it seems like support is not very good.
I would like to use the ESB to build macro services by composing from modular smaller services. I think this would be a pretty typical use of an ESB, and I don't see how you can implement anything practical if you don't have transactions.
WS Atomic Transaction is not in the list of supported WS-standards for Mule, so no luck here.
In terms of design, the "transaction over SOAP" paradigm never really took off. Approaches like stateful conversations with idempotent retries and compensations are usually what people prefer when integrating services over HTTP. It is certainly more work than simply flipping the "transaction" switch on.
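The alternative described above, idempotent retries plus compensation instead of a two-phase commit, can be sketched as follows (hypothetical service names and in-memory state, for illustration only):

```python
processed_ids = set()   # dedup store, so redelivered messages are harmless
reserved = []           # state changed by the "reserve stock" step

def reserve_stock(message_id, item):
    """Idempotent handler: a retried message is recognized and skipped."""
    if message_id in processed_ids:
        return "duplicate-ignored"
    processed_ids.add(message_id)
    reserved.append(item)
    return "reserved"

def compensate_reservation(item):
    """Compensation: explicitly undo a prior step instead of rolling back
    a global transaction spanning multiple services."""
    if item in reserved:
        reserved.remove(item)

first = reserve_stock("msg-1", "widget")
retry = reserve_stock("msg-1", "widget")   # same message redelivered after a timeout
compensate_reservation("widget")           # a downstream step failed: undo ours
```

Each service stays consistent on its own, and the overall outcome is coordinated by retries and compensating actions rather than by WS-AtomicTransaction.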

What is web service composition?

What exactly is web service composition?
Composition refers to the way something is built. The newer term at the moment is mash-up, which basically means utilising a variety of different services in a composite application, so that the functionality of disparate applications can be used in one application.
I think you're referring to service granularity, which means how much functionality a service exposes. A coarse-grained service will expose a whole process as a consumable unit, whereas a fine-grained service will expose a specific unit of logic from a larger process. It is up to the service architects to determine what granularity of service works best in the given environment.
This also, in a way, has to do with the style of SOAP message you are using, whether RPC style or document style, and with the principle that a service should be atomic and not hold external state, meaning it needs no information beyond that in the SOAP message to perform its function.
Hope this gives you a good starting point. The trouble with service-orientation is that it differs depending on who you read, but the main points stay the same!
Jon
Some web services provided to clients are abstract and are a composition of several smaller web services; this is called web service composition.
Sometimes there is more than one candidate web service to use as one of those smaller web services, so we choose among them based on QoS (Quality of Service); much research has been done on this subject.
Web service composition involves the integration of two or more web services to achieve added business value. A workflow composer is responsible for aggregating different web services to act as a single service according to functional requirements as well as QoS constraints. BPEL is one of the popular composers; it uses an XML language to perform service composition. Fine-grained services perform a single business task and provide higher flexibility and reusability, whereas a coarse-grained service performs complex business functionality, leading to lower flexibility.
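A toy sketch of what a composer does (hypothetical service names and made-up QoS scores; a real composer would be BPEL or code against live endpoints): pick the best candidate implementation of each step by its QoS score, then chain the steps into one composite service.

```python
# Candidate implementations of each step, each with an assumed QoS score
# (e.g. combining latency and availability; higher is better).
candidates = {
    "geocode": [(lambda addr: f"coords({addr})", 0.7),
                (lambda addr: f"coords2({addr})", 0.9)],
    "route":   [(lambda coords: f"route({coords})", 0.8)],
}

def pick_by_qos(step):
    """Select the highest-scoring candidate service for a step."""
    return max(candidates[step], key=lambda pair: pair[1])[0]

def composite_service(address):
    """One consumable service aggregated from two smaller ones."""
    geocode = pick_by_qos("geocode")
    route = pick_by_qos("route")
    return route(geocode(address))

result = composite_service("10 Main St")
```

The caller sees a single coarse-grained operation, while the composer retains the flexibility of the fine-grained services underneath.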

Notification/messaging software for news/updates?

I think a package that would be quite useful is a centralised notification/news system.
This would run on a web server and client libraries could send messages to the server. Examples of messages might be:
Commits to version control.
Continuous build server failures (including logs).
News from project management.
Users could create accounts on the server and decide how they want to view the messages, e.g. email, RSS, etc. There could be filters based on channels, priorities, regexs, etc.
Does anyone know of any software package that provides these features (or could be extended to do so)? (Preferably Windows based, but please cover other platforms.)
If I can't find one I was thinking of writing one in Python using Django.
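If you do end up writing one, the filtering described above can be sketched simply: each user registers a subscription built from a channel, a minimum priority and an optional regex, and the server fans each message out only to matching subscriptions. A toy in-memory version (field names are assumptions, not any existing package's API):

```python
import re

subscriptions = []   # (deliver_fn, channel, min_priority, compiled pattern or None)

def subscribe(deliver, channel, min_priority=0, pattern=None):
    subscriptions.append((deliver, channel, min_priority,
                          re.compile(pattern) if pattern else None))

def publish(channel, priority, text):
    """Deliver the message to every subscription whose filters match."""
    for deliver, chan, min_prio, pat in subscriptions:
        if chan == channel and priority >= min_prio and (pat is None or pat.search(text)):
            deliver(text)

inbox = []   # stand-in for one user's email/RSS delivery
subscribe(inbox.append, "builds", min_priority=2, pattern=r"FAIL")
publish("builds", 3, "build #12 FAILED: see log")   # matches
publish("builds", 1, "build #13 FAILED")            # priority too low
publish("commits", 3, "FAIL fixed in r100")         # wrong channel
```

Different delivery callables (email, RSS item writer, etc.) per subscription would cover the "decide how they want to view the messages" requirement.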
I found an XMPP protocol extension (XEP-0060) from the PubSubHubbub link:
This specification defines an XMPP protocol extension for generic publish-subscribe functionality. The protocol enables XMPP entities to create nodes (topics) at a pubsub service and publish information at those nodes; an event notification (with or without payload) is then broadcasted to all entities that have subscribed to the node. Pubsub therefore adheres to the classic Observer design pattern and can serve as the foundation for a wide variety of applications, including news feeds, content syndication, rich presence, geolocation, workflow systems, network management systems, and any other application that requires event notifications.
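The node-based model the spec describes maps directly onto a few lines of code. This is a loose sketch of the XEP-0060 shape (create a node at a service, subscribe entities, broadcast notifications), not the XMPP protocol itself:

```python
class PubSubService:
    """Sketch of XEP-0060-style nodes: create, subscribe, publish-with-broadcast."""
    def __init__(self):
        self.nodes = {}   # node (topic) name -> list of subscriber callbacks

    def create_node(self, node):
        self.nodes.setdefault(node, [])

    def subscribe(self, node, callback):
        self.nodes[node].append(callback)

    def publish(self, node, item):
        # Event notification broadcast to all entities subscribed to the node.
        for notify in self.nodes[node]:
            notify(node, item)

service = PubSubService()
service.create_node("news/releases")
received = []
service.subscribe("news/releases", lambda node, item: received.append((node, item)))
service.publish("news/releases", "v2.0 released")
```

Channels in the proposed tool would be nodes, and the per-user filters would sit inside each subscriber callback.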
What you describe sounds like a normal feed aggregator service, but with real-time notification?
I recently saw a video of a Google tech talk where they announced a product that hooks over the existing RSS/Atom structure but provides real-time notification. Didn't bookmark it unfortunately (hopefully someone will comment with it?), but that sounds like the underlying technology for what you want.