Why is DBLookup a mediator rather than a custom endpoint? - wso2

The DBLookup construct is implemented as a mediator in WSO2. Is there a reason it wasn't implemented as a custom endpoint instead?

One way to think about endpoints is to consider them the final destination for your data, while mediators are intermediate stops where messages are modified and/or enriched.
The DBLookup mediator, in particular, was primarily thought of as a way to enrich a given message with data retrieved from a database (hence its name).
In theory, one could write a custom endpoint to send received messages directly to a database. However, WSO2 has its DSS product, which covers this kind of scenario and is much more flexible.
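To illustrate the enrichment role, a minimal DBLookup configuration might look like the sketch below. The element structure follows the standard Synapse dblookup syntax, but the database URL, table, column, and XPath expression are all hypothetical:

```xml
<dblookup>
  <connection>
    <pool>
      <driver>org.h2.Driver</driver>
      <url>jdbc:h2:file:./dbs/customers</url>
      <user>wso2</user>
      <password>wso2</password>
    </pool>
  </connection>
  <statement>
    <sql>select name from customers where id = ?</sql>
    <!-- pull the lookup key out of the in-flight message -->
    <parameter expression="//customerId" type="VARCHAR"/>
    <!-- store the looked-up value as a message-context property -->
    <result name="customerName" column="name"/>
  </statement>
</dblookup>
```

Note that the looked-up value lands in a property rather than replacing the message, which is exactly the "intermediate stop" behaviour described above.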

POST Request to REST API with Apache Beam

Our use case is that we're pulling messages from PubSub, and the idea is to POST those messages to the PowerBI REST API. We want to create a Live Report using the PushDatasets feature.
The main idea should be something like this:
PubSub -> Apache Beam -> POST REST API -> PowerBI Dashboard
I haven't found any example of making a POST request inside an Apache Beam job (the runner is not a problem right now), only a GET request inside a DoFn. I don't even know if this is possible.
Has anyone done something like this? Or is there another framework/tool that may be more helpful?
Thanks.
Sending POST requests to an external API is certainly possible, but requires some care. It could be as simple as making the POST inside the body of a DoFn, but be aware that this can lead to duplicates: elements within your pipeline are processed in bundles, and the Beam model allows entire bundles to be reprocessed in case of worker failures, exceptions, etc.
There is some advice in the Beam docs on grouping elements for efficient external service calls.
Choosing the best course of action here largely depends on the details of the API you're calling. Does it take message IDs that can be used for deduplication on the PowerBI side? Can the API accept batches of messages? Is there rate limiting?
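A minimal sketch of the batching and deduplication side, in plain Python (inside a real pipeline this logic would live in a DoFn; the `BATCH_SIZE`, the `rowId` field name, and the commented-out push URL are assumptions for illustration, not PowerBI specifics):

```python
import itertools
import json
import uuid
from typing import Iterable, List

# Assumed batch size; tune to the API's rate limits (an assumption, not a PowerBI constant).
BATCH_SIZE = 50

def make_batches(messages: Iterable[dict], size: int = BATCH_SIZE) -> List[List[dict]]:
    """Group messages into fixed-size batches so each batch becomes one POST."""
    it = iter(messages)
    batches = []
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return batches
        batches.append(batch)

def to_payload(batch: List[dict]) -> str:
    """Serialize one batch as the JSON body of a POST.

    A deterministic rowId per element lets the receiving side deduplicate
    if Beam reprocesses the bundle; the field name is an assumption.
    """
    rows = [{**msg, "rowId": msg.get("id", str(uuid.uuid4()))} for msg in batch]
    return json.dumps({"rows": rows})

# Inside a DoFn's process() one would then issue, per batch, something like:
#   requests.post(POWERBI_PUSH_URL, data=to_payload(batch),
#                 headers={"Content-Type": "application/json"})
```

The key design point is that each POST carries ids derived from the input elements, so a retried bundle re-sends the same ids instead of minting new rows.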

Difference between Data Mapper Mediator and Payload Factory Mediator

Besides the syntax, what is the core difference between the Data Mapper and Payload Factory mediators? Both can convert/transform data from one format to another.
I have used the Data Mapper only a few times (you stick with what you know). In my opinion, both mediators provide mostly the same functionality (as does the XSLT mediator), but the underlying technology and, above all, the development method are radically different.
datamapper provides a graphical way of transforming messages. It uses existing output and input messages to seed the transformation, so it is strong when you have the output of service A and the input of service B and just need to map the data from A to B.
payloadFactory is able to quickly build messages. I use it mostly to create requests where only a few fields need to be mapped from the original request to the new request.
xslt is a versatile and powerful way of transforming messages, but it requires some experience. A lot of third-party tooling is available to assist with the transformation.
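For comparison, a payloadFactory configuration in the style described above, mapping just two fields into a new request (the element names and XPath expressions are made up for illustration; the structure follows the standard Synapse payloadFactory syntax):

```xml
<payloadFactory media-type="xml">
  <format>
    <!-- the new request, with placeholders for the mapped fields -->
    <order xmlns="">
      <customerId>$1</customerId>
      <total>$2</total>
    </order>
  </format>
  <args>
    <!-- each arg fills the matching $n placeholder from the original message -->
    <arg evaluator="xml" expression="//customer/id"/>
    <arg evaluator="xml" expression="//invoice/amount"/>
  </args>
</payloadFactory>
```

This is exactly the "only a few fields need to be mapped" case: the whole target message is spelled out in the format block, so it stays readable but would get unwieldy for large mappings, which is where the Data Mapper or XSLT fit better.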

Querying DynamoDB on the mobile client vs. backend query and response via API?

I am matching a list of my contacts (primary keys) against DynamoDB to see if any are using my service.
I have two options to go about this:
1) Client side: I call the AWS SDK directly on my mobile device and handle the response accordingly.
2) Via API Gateway: I send a JSON list of my contacts to my backend (AWS Lambda), which computes the match off the client and responds with JSON.
I am wondering what are the pros and cons of each, or if one is clearly better?
Thanks
Like many things, it depends. I don't think one is clearly better than the other.
#1, the client-side SDK, is good because it's probably the easiest and quickest way to get going, with less to build/configure/maintain.
#2, API Gateway, is good because it will probably be easier to call your Lambda from different clients (browsers, other services, etc.), and those clients wouldn't need to depend on the SDK; they could just use RESTful calls if that's how you set it up. You would also be able to support different content types, such as XML or YAML, with a mapping template.
It really comes down to your use case, style, and plans for reuse in the near future. You could probably start with #1 and migrate to #2 if you find you need more of the API Gateway features.
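For the API Gateway + Lambda option, the Lambda-side matching might be sketched like this in plain Python. The request-shaping part below is runnable; the table name and the `phone` key attribute are assumed schema details, and the actual boto3 call is only indicated in a comment:

```python
from typing import List

# DynamoDB's BatchGetItem accepts at most 100 keys per request.
BATCH_GET_LIMIT = 100

def build_batch_requests(table: str, phone_numbers: List[str]) -> List[dict]:
    """Split the contact list into BatchGetItem-sized RequestItems payloads.

    Assumes the table's partition key is a string attribute named 'phone'
    (a hypothetical schema for illustration).
    """
    requests = []
    for i in range(0, len(phone_numbers), BATCH_GET_LIMIT):
        chunk = phone_numbers[i:i + BATCH_GET_LIMIT]
        requests.append({table: {"Keys": [{"phone": {"S": n}} for n in chunk]}})
    return requests

# In the Lambda handler, each payload would then be sent with:
#   boto3.client("dynamodb").batch_get_item(RequestItems=payload)
# collecting the matched keys into the JSON response for the client.
```

One advantage of doing this server-side is that the 100-key batching and any retry of unprocessed keys stays in one place instead of being reimplemented in every mobile client.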

How to persist a runtime parameter of a service call, then use it as a parameter for the next service call (WSO2 ESB)

I am seeking advice on the most appropriate method for the following use case.
I have created a number of services using the WSO2 Data Services Server which I want to run periodically, passing in the last run date as a parameter; i.e., each data service takes start and end dates to run its SQL against.
I plan to create a service within WSO2 ESB to mediate the execution of these services and combine the results to pass on to another web service. I think I can manage this ;-) I will use a scheduled task to start it at a predefined interval.
Where I am seeking advice is on how to keep track of the last successful run time, as I need to use it as a parameter for the data services.
My options as I see them:
1. create a config table in my database and another data services web service to retrieve and persist these values
2. use the VFS transport and somehow persist these values to a text file as XML, CSV or JSON
3. use some other way, like property values in the ESB sequence, and somehow persist those
4. any other?
With my current knowledge, option 1 seems easiest, but it doesn't feel right: I would need write access to the database, something I possibly wouldn't have when architecting a solution like this in the future. Option 2 looks like it could work with my limited knowledge of WSO2 ESB to date, but is option 3 the best choice? As you can see from the detail above, this is where I start to flounder.
Any suggestions would be most welcome
I do not have much experience with the ESB. However, I also feel that your first option would be easier to implement.
A related topic was also discussed in the WSO2 architecture mailing list recently with the subject "[Architecture] Allow ESB to put and update registry properties".
Introducing a registry mediator was discussed, but I'm not sure it will be implemented soon.
I hope this helps.
As of now, there is no direct method to save content to the registry through the ESB. But you can always write a custom mediator to do that, or use the script mediator to achieve it.
The following is a snippet for the script mediator:
<script language="js"><![CDATA[
    importPackage(Packages.org.apache.synapse.config);
    /* create a new resource */
    mc.getConfiguration().getRegistry().newResource("conf:/store/myStore", false);
    /* update the resource with the property value */
    mc.getConfiguration().getRegistry().updateResource(
        "conf:/store/myStore", mc.getProperty("myProperty").toString());
]]></script>
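To read the stored value back at the start of the next run, one option is the property mediator with the registry scope of get-property (available in recent ESB releases; verify against your version). The registry path matches the resource created above:

```xml
<!-- load the persisted timestamp into a message property -->
<property name="lastRunDate"
          expression="get-property('registry', 'conf:/store/myStore')"/>
```

The property can then be fed into the data service call as the start-date parameter.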
I've written a blog post on how to do this in ESB 4.8.1. You can find it here

Simple ETL: Smooks or ETL product

I am fairly new to the subject and doing some research.
I have an ESB (using WSO2 ESB) and want to extract master data from the passing messages (like Customers, Orders, etc) and store them in DB to keep as a reference data. Source data is in XML coming from web services.
So there needs to be a component that maintains the master data: insert new objects, delete old ones, and update changed ones (it would also be nice to have data events so the ESB can route data accordingly). Basically, the logic will be similar for any entity type, and it might be a good idea to autogenerate it for all new entity types...
Options as I see them now:
Use Smooks with either SQLExecutor or Hibernate for persistence, with all matching logic written either in the Smooks config or in DAO annotations
Use an open source ETL tool (like Talend, Kettle, Clover, etc.), so the data is passed to the ETL and all transformation logic is defined there. This could also accommodate future scenarios as they appear, or it could be overkill...
I would appreciate it if you shared your thoughts and pointed me in the right direction.
You'd be better off leaving the database part to another tool.
If you have a fair amount of database interaction in your message flow, you can expect a serious decrease in performance.
However, you do not need an ETL tool for the use case you described. You can simply do it with WSO2 DSS by creating services to insert or update your data in the database.
We have been using this for message logging purposes (into a DB) alongside the ESB and are happy with it. It's best to invoke these as non-blocking, fire-and-forget web services in your message flow within the ESB. Hope this helps.
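The fire-and-forget pattern mentioned above is typically a matter of marking the call out-only before sending, roughly like this (the endpoint address and service name are hypothetical; OUT_ONLY is the standard Synapse property for this):

```xml
<!-- no response expected: mark the call out-only so the flow doesn't block -->
<property name="OUT_ONLY" value="true"/>
<send>
  <endpoint>
    <address uri="http://localhost:9763/services/MasterDataService"/>
  </endpoint>
</send>
```

With OUT_ONLY set, the mediation sequence continues immediately after handing the message to the DSS service instead of waiting for a reply.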