I am working on an integration project where we want to use JIRA tickets for business follow-up operations. The JIRA instance (externally hosted) is not always available, hence I want to use a guaranteed-delivery pattern. So the question: is it possible in WSO2 ESB to use existing connectors (JIRA) from a message processor?
Message processors and connectors are independent. This is what you have to do (you are on the right track at the moment too).
Put your message into a message store. This can be the in-memory message store (which loses messages upon a server restart) or a persistent message store such as an ActiveMQ queue.
Then, configure a message processor to consume messages from this store. There are two types of message processors, namely forwarding and sampling processors. Here you need a sampling processor: https://docs.wso2.com/display/ESB490/Message+Processors
The consumed messages can be handed over to a sequence, and that sequence can use the JIRA connector to create the JIRA issue.
The problem I see with this approach is that sampling processors do not support guaranteed delivery (but the forwarding processor does). However, AFAIK, we cannot use connectors with forwarding processors because we need to provide an endpoint in the forwarding processor's configuration.
You will understand the difference and the pros and cons of the two types when you go through the docs. As a workaround, I can suggest the following.
Create a proxy service which uses the JIRA connector to create the JIRA issue.
Then use the forwarding processor to send the consumed message to that proxy service.
I think that, with the above approach, you will be able to achieve guaranteed delivery.
In developing backend components, I need to decide how these components will interact and communicate with each other. In particular, I need to decide whether it is better to use (RESTful, micro) web services or a message broker (e.g. RabbitMQ). Are there certain criteria to help decide between using web services for each component versus messaging?
Eranda covered some of this in his answer, but I think three of the key drivers are:
Are you modeling a Request-Response type interaction?
Can your interaction be asynchronous?
How much knowledge does the sender of the information need to have about the recipients?
It is possible to do Request-Response type interactions with an asynchronous messaging infrastructure, but it adds significantly to the complexity, so generally Request-Response type interactions (i.e. where the sender needs some data returned from the recipient) are more easily modeled as RPC/REST interactions.
If your interaction can be asynchronous then it is possible to implement it using a REST interaction, but it may scale better if you use a fire-and-forget messaging interaction.
An asynchronous messaging interaction will also be much more appropriate if the provider of the information doesn't care who is consuming the information. An information provider could be publishing information and new consumers of that information could be added to the system later without having to change the provider.
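To make the fire-and-forget case concrete, here is a minimal publisher sketch using the standard javax.jms API in Java. It assumes an ActiveMQ broker at a placeholder address; the broker choice, topic name and payload are illustrative only, not part of the original answer.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderEventPublisher {
    public static void main(String[] args) throws Exception {
        // ActiveMQ is only one possible broker; the javax.jms calls are the same for others.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("orders.created"); // illustrative topic name
            MessageProducer producer = session.createProducer(topic);

            // Fire and forget: the publisher neither knows nor cares who (if anyone) subscribes.
            producer.send(session.createTextMessage("{\"orderId\": 42}"));
        } finally {
            connection.close();
        }
    }
}
```

New consumers can later subscribe to the same topic without this publisher changing at all, which is the decoupling benefit described above.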
Web servers and message brokers have their own use cases. A web server is used to host web services, and a message broker is used to exchange messages between two points. If you need to deploy a web service then you have to use a web server, where you can process the request and send back a response. Now suppose you need a publisher/subscriber pattern and/or reliable messaging between any two nodes (between two servers, between a client and a server, or a server and a client); that's where a message broker comes into the picture: you put the broker in the middle of the two nodes to achieve it. Using a message broker gives you reliability, but you pay for it in performance. So which components you should use depends on your use case, although there are multiple options available.
I want to enable event notifications for my customers. There are many possible ways to send notifications: emails, sms, XMPP/other IM, pre-recorded voice messages over SIP, phone-specific message push services, REST callbacks etc.
I don't want to develop all these transports myself, so I need a web service that can manage those notifications for customers. Also I don't want to store emails/phones/other personally identifiable information.
The notifications are transactional (i.e. it's not mass delivering same message to everyone). Paid solutions are welcome.
There is http://pagerduty.com but it is
designed to work within an enterprise and not with outside customers
focused on full cycle of incident response as opposed to simple message delivery
So it puts more burden on respondents and I want something that requires zero effort for the users to setup.
Monitis is another example. It has multiple transports including Twitter, but again it's designed for insiders and not for service subscribers coming in bulk numbers.
Amazon SNS seems to be too low-level as it only manages delivery of push notifications, but for displaying them I would have to write a mobile app, which I don't want.
XMPP servers as described in How best to deliver notifications to various IM / notification services? have traditionally supported the idea of different transports, but I'd like a third-party hosted service.
Twilio has only two transports (SMS and voice calls) and is more oriented towards full two-way communication.
I cannot even find the right google keywords to search for the service/SaaS I want.
The question is, are there any such services? A sample of a few would give me an idea of what to look for.
This comes very late, perhaps too late but...
You should not need to implement any of the transports, but you may be required to build some of the gateways, and you will most likely need to assemble the application which talks to each of the gateways. You are not likely to find a single service for this.
You've already outlined the strategy. You basically have these pieces:
transports
gateways
application
Each of the transports is accessed through some client via either an API or a CLI - so you'll need to figure out what your environment is. Java is probably a good choice but other cross-platform environments would likely work. Existing infrastructure like Apache ServiceMix has support for some of these transports:
https://cwiki.apache.org/confluence/display/SM/Components+list
and there may be other middleware with support for similar transports.
You will likely want a gateway for each provider for each transport type. You may be able to find a provider which adequately services multiple transports, e.g. Twilio's SMS and voice, but that will likely be the exception. You may also find that because of the differences in transports (and therefore functionality), it's more convenient to build a gateway for each transport type. So, you might have two configured providers in your SMS gateway, one for Twilio and one for Kannel, and you might have your Twilio account used in the SMS gateway and in the SIP gateway.
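As a rough illustration of that gateway-per-transport layout, here is a Java sketch. Every type and method name below is hypothetical; the provider classes only mark where the real Twilio or Kannel calls would go.

```java
import java.util.List;

// One interface per transport type...
interface SmsGateway {
    void send(String phoneNumber, String message);
}

// ...with one implementation per configured provider behind it.
class TwilioSmsProvider implements SmsGateway {
    public void send(String phoneNumber, String message) {
        // The actual call to Twilio's REST API (credentials, request building) would live here.
    }
}

class KannelSmsProvider implements SmsGateway {
    public void send(String phoneNumber, String message) {
        // The actual HTTP call to a Kannel sendsms endpoint would live here.
    }
}

// The application layer only talks to the gateway, never to a provider directly.
class SmsNotifier {
    private final List<SmsGateway> providers;

    SmsNotifier(List<SmsGateway> providers) {
        this.providers = providers;
    }

    void notifyUser(String phoneNumber, String message) {
        // Naive strategy: try each configured provider in order until one succeeds.
        for (SmsGateway provider : providers) {
            try {
                provider.send(phoneNumber, message);
                return;
            } catch (RuntimeException e) {
                // Log and fall through to the next provider.
            }
        }
    }
}
```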
The final step is assembling your application into something meaningful. This might be something like:
sent.......: "Thanks for your purchase, ${username}!"
sent to the channels (i.e., provider-transport pairs) configured, perhaps, by the user, while being able to collect the response from the user:
response...: "It was a pleasure! --Bob"
You will need to store the basics of each transport's endpoint, e.g., phone number for SMS, username for chat, etc., so if you have PII security issues to address you'll need to think through that. One option may be to turn all the PII over to each provider, but you'll still need to keep each account for your users in each provider, and you will likely need to know something about the user, like "${username}" above, to personalize your notification appropriately within your application. So, removing all PII from your application seems unlikely.
I'm not sure how much this helps, but perhaps it gives you some ideas.
I would like to use Amazon SQS in my application to queue requests from other external systems that don't belong to me.
What is the better way of doing this: directly expose the SQS queue and the required message format, or publish a web service (WCF) that queues the request?
Also, I read that SQS is relatively slow for a single access, but am I right that it can easily handle a lot of concurrent accesses from different clients?
Best
Thomas
This is largely a matter of preference and depends a bit on your situation. But my recommendation would be to wrap it with your own web-service.
Building your own web-service allows you to do things like validation, throttling, schema versioning etc. E.g. you can reject invalid messages with immediate synchronous feedback to the sender. If the external systems are publishing directly to your queue, then invalid messages become your problem, not theirs, and if you revise your schema and want to reject old-schema messages then you either have to drop them or set up a separate back-channel to feed information back to the publisher. That adds unnecessary complexity to your system. Having a web-service would even let you switch to other queuing technologies later if you need to.
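As a sketch of that validate-then-enqueue idea (shown in Java with the AWS SDK rather than WCF, purely to illustrate the pattern; the queue URL and the validation rule are placeholders):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class QueueFacade {
    // Placeholder queue URL; in practice this would come from configuration.
    private static final String QUEUE_URL =
            "https://sqs.eu-west-1.amazonaws.com/123456789012/incoming-requests";

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    public void accept(String payload) {
        // Reject bad input synchronously so the sender gets immediate feedback
        // and invalid messages never reach the queue.
        if (payload == null || payload.isEmpty() || !payload.trim().startsWith("{")) {
            throw new IllegalArgumentException("Rejected: expected a non-empty JSON document");
        }
        sqs.sendMessage(new SendMessageRequest(QUEUE_URL, payload));
    }
}
```

The facade is also the natural place to add throttling or to translate between schema versions before anything reaches the queue.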
But building your own web-service has downsides too: will your own service be able to handle the same load as the SQS API with the same low latency? It won't scale infinitely like SQS, so how responsive will you need to be to changes in load? Have you got the resources to manage a separate service? And it's more work than just giving a client's AWS account permission to publish to your queue.
If you're happy with the extra work involved, and you want a more future-proof system, IMHO it's worth building the web-service wrapper.
I've been tasked with creating an intermediate layer which needs to exchange data (over HTTP) between two independent systems (e.g. Receiver <=> Intermediate Layer (IL) <=> Sender). Receiver and Sender both expose a set of APIs via Web Services. Every time a transaction occurs in the Sender system, the IL should know about it (I'm thinking of creating a Windows Service which constantly pings the Sender), massage the data, then deliver it to the Receiver. The IL can temporarily store the data in a SQL database until it is transferred to the Receiver. I have the following questions -
Can WCF (haven't used it a lot) be used to talk to the Sender and Receiver (both expose web services)?
How do I ensure guaranteed delivery?
How do I ensure security of the messages over the Internet?
What are best practices for handling concurrency issues?
What are best practices for error handling?
How do I ensure reliability of the data (data is not tampered along the way)
How do I ensure the receipt of the data back to the Sender?
What are the constraints that I need to be aware of?
I need to implement this on the MS platform using a custom .NET solution. I was told not to use any middleware like BizTalk. The receiver is an SFDC instance, if that matters.
Any pointers are greatly appreciated. Thank you.
A Windows Service that orchestrates the exchange sounds fine.
Yes WCF can deal with traditional Web Services.
How do I ensure guaranteed delivery?
To ensure delivery you can use TransactionScope to handle the passing of data between Receiver <=> Intermediate Layer and Intermediate Layer <=> Sender, but I wouldn't try to do them together.
You might want to consider some sort of queuing mechanism to send the data to the receiver; I guess I'm thinking more of a logical queue rather than an actual queuing component. A workflow framework could also be an option.
Make sure you have good logging / auditing in place; make sure it's rock solid, has the right information and is easy to read. Assuming you write a service, it will execute without supervision, so the operational / support aspects are more demanding.
Think about scenarios:
How do you manage failed deliveries?
What happens if the receiver (or sender) is unavailable for periods of time (and how long is that?); for example, do you need to "escalate" to an operator via email?
How do I ensure security of the messages over the Internet?
HTTPS. Assuming other existing clients make calls to the Web Services, how do they ensure security? (I'm thinking encryption.)
What are best practices for handling concurrency issues?
Hmm, probably a separate question. You should be able to find information on that easily enough. How much data are we talking about? What sort of frequency? How many instances of the Windows Service were you thinking of having - if one is enough, why would concurrency be an issue?
What are best practices for error handling?
Same as for concurrency, but I can offer some pointers:
Use an established logging framework, I quite like MS EntLibs but there are others (re-using whatever's currently used is probably going to make more sense - if there is anything).
Remember that execution is unattended so ensure information is complete, clear and unambiguous. I'd be tempted to log more and dial it down once a level of comfort is reached.
Use a top-level handler to ensure nothing gets lost, but don't be afraid to log deep in the application where you can still get useful context (like the metadata of the data being sent / received).
How do I ensure the receipt of the data back to the Sender?
Include it (sending the receipt) as a step that is part of the transaction.
On a different angle - have a look on CodePlex for ESB type libraries, you might find something useful: http://www.codeplex.com/site/search?query=ESB&ac=8
For example ESBasic, which seems to be a class library you could reuse.
I'm working on a real-time application implemented in a SOA style (read: loosely coupled components connected via some messaging protocol - JMS, MQ or HTTP).
The architect who designed this system opted to use JMS to connect the components. This system is real-time, so there is no need to queue up messages should one component fail (the transaction will simply time out). Further, there is no need for guaranteed delivery or rollback.
In this instance, is there any benefit to using JMS over something like an HTTP web service (speed, resource footprint, etc)?
One thing that I'm thinking is: since the JMS approach requires us to set a thread pool size (the number of components listening to a JMS topic/queue), wouldn't an HTTP service be a better fit, since this additional configuration is not needed (a new thread is created for each HTTP request, making the application scalable to an "unlimited" number of requests until the server runs out of resources)?
Am I missing something?
I don't disagree with the points made by S.Lott at all, but here are a couple of points to consider regarding HTTP web services:
Your clients only need to know how to communicate via HTTP - a protocol well supported by just about every modern language in one form or another. JMS, though popular, is more specialist than HTTP, and so restricts the languages your interconnected systems can use. Perhaps not an issue for your system at the moment, but will you need to plug in other systems later that might struggle to support JMS connectivity?
Standards like WSDL and SOAP, which you could leverage for your services, are well supported by many languages, and there are plenty of tools around that will generate code to implement both ends of the pipeline (client and server) for you from a WSDL file, reducing the amount of dev you'll have to do. These standards also make it relatively simple to define and publish the specification of the data you'll be passing between your systems, something you'll presumably have to do by hand using a queueing technology like JMS.
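To give a feel for the consuming side once the WSDL tooling has done its work, here is a minimal JAX-WS client sketch in Java; the service name, port interface, operation and WSDL location are entirely hypothetical, and a JAX-WS runtime is assumed to be on the classpath.

```java
import java.net.URL;

import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// In practice this interface would be generated from the WSDL rather than written by hand.
@WebService(targetNamespace = "http://example.com/orders")
interface OrderPort {
    String submitOrder(String orderXml);
}

public class OrderClient {
    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://example.com/orders?wsdl");                       // hypothetical WSDL location
        QName serviceName = new QName("http://example.com/orders", "OrderService"); // hypothetical service name
        OrderPort port = Service.create(wsdl, serviceName).getPort(OrderPort.class);
        System.out.println(port.submitOrder("<order id=\"1\"/>"));
    }
}
```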
On the downside, as pointed out by S.Lott, JMS gives you functionality that you throw away using the (stateless) HTTP protocol: guaranteed ordering & reliability; monitoring; scalability; etc. Are you sure you don't need these, and won't need these going forward?
Great question, btw.
I think it's really dependent on the situation. Where I work, we support Remoting, JMS, MQ, HTTP, and sFTP. We are implementing a middleware appliance that speaks Remoting, JMS, MQ, and HTTP, and a software middleware component that speaks JMS, MQ, and HTTP.
As sgreeve alluded to above, standards help us become flexible, but proprietary formats allow more functionality.
In a nutshell, I'd say use HTTP for stateless calls (which could end up meeting almost all of your needs), and whatever proprietary formats you need for stateful calls. If you work in a big enterprise, a hardware appliance is usually a great fit as middleware: Lightning fast compression, encryption, transformation, and translation, with very low total cost of ownership.
I don't know enough about your requirements, but you may be overlooking Manageability, Flexibility and Performance.
JMS allows you to monitor and manage the queue. These are features HTTP lacks, and you'd have to build rather than buy from a vendor.
Also, there are queues and topics in JMS, allowing multiple subscribers to a single publisher. Not possible in HTTP.
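For example, here is a minimal topic subscriber using the javax.jms API in Java (the ActiveMQ broker, topic name and handler are assumptions for illustration); any number of such subscribers can attach to the same topic without the publisher changing.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PriceFeedSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("prices.updated"); // illustrative topic name

        // Every subscriber to this topic receives its own copy of each published message.
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("Received: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start(); // deliveries only begin once the connection is started
    }
}
```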
While you may not need those things in release 1.0, you might want them in the future.
Also, JMS may be able to use other transport mechanisms like named sockets, which reduces the overheads if there isn't all that socket negotiation going on with (almost) every request.
If you go down the HTTP route and you want to support more than one machine or some kind of reliability, you are going to need a load balancer capable of discovering the available web servers and spreading requests across them - then failing over to another web server if a particular box/process dies. Clients making HTTP requests are also going to have to deal with servers failing and retrying operations in some loop.
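As a sketch of the kind of retry loop each HTTP client ends up carrying in that setup (Java 11's built-in HttpClient, with a placeholder endpoint and a crude linear backoff):

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RetryingSender {
    private static final int MAX_ATTEMPTS = 5;

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://backend.example.com/orders")) // placeholder
                .POST(HttpRequest.BodyPublishers.ofString("{\"orderId\": 42}"))
                .build();

        // Without a broker in the middle, every client has to own this failure handling itself.
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() < 500) {
                    return; // delivered, or rejected for a reason a retry won't fix
                }
            } catch (IOException e) {
                // Connection refused / server down: fall through and retry.
            }
            Thread.sleep(1000L * attempt); // crude linear backoff
        }
        throw new IllegalStateException("Gave up after " + MAX_ATTEMPTS + " attempts");
    }
}
```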
This is one of the main features of a message queue - reliable load balancing with failover and loose coupling among the producers and consumers without them having to include retry logic - so your client or server code doesn't have to worry about this kinda thing. This is totally separate to whether or not you want message persistence or want to use ACID transactions to produce/consume messages (which can be very handy BTW).
If you focus just on the server side using Java - whether Servlets or MessageListener/MDBs - they are kinda similar either way really. The difference is the load balancer.
So maybe the question should really be - is a JMS broker easier to setup & work with than setting up your DNS/NAT/IP/HTTP load balancer infrastructure?
I suppose it depends on what you mean by real-time... Neither JMS nor HTTP in my opinion support "real-time" applications well, meaning they cannot offer predictable/deterministic performance nor properly prioritize flows in the presence of contention.
Part of it is that these technologies are built on top of TCP, which serializes all traffic into a single FIFO, meaning that different traffic flows cannot be easily prioritized. Moreover, TCP timers are not easily controlled, resulting in unpredictable blocking and timeouts... For this reason many streaming applications use UDP instead of TCP as an underlying protocol.
Another problem with JMS is that typical implementations use a broker that centralizes message dispatch. This is not the best architecture to get deterministic performance.
If you are looking for middleware that can offer you the kind of reliability guarantees and publish-subscribe semantics you get with JMS, but that was developed to fit the real-time application domain, I recommend you take a look at the OMG Data-Distribution Service (DDS). See dds.omg.org and this article I wrote arguing why DDS is the best middleware to implement a real-time SOA: http://soa.sys-con.com/node/467488