We started using WSO2 recently as an integration layer to communicate with different systems, but we hit a problem while calling an existing stored procedure in a SQL Server database. This stored procedure takes a user-defined table type (UDTT), and a single SP call can pass thousands of rows in the UDTT.
We've tried enabling batch requests, but when the DSS API is called with multiple rows, a separate database call is made for each row, which defeats the purpose of having a UDTT as input. We would like to know whether it's really possible to pass multiple rows in a single database call.
We reached out to WSO2 support and we've been informed that it's not possible as of now.
It would be really nice to have this feature: WSO2 is used in enterprise-level systems where data volumes are large and can't be sent to the database row by row without increasing traffic to the database server. We really wish we had this feature; for now we have to search for alternatives.
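For anyone hitting the same wall, the kind of single-call alternative we are looking at outside of DSS looks roughly like the sketch below: the Microsoft JDBC driver supports table-valued parameters directly, so the whole UDTT can be sent to the stored procedure in one round trip. The procedure name, UDTT name, columns and connection string here are placeholders, not our real ones.

    // Sketch: passing a whole UDTT to a stored procedure in one database call,
    // using the Microsoft JDBC driver's table-valued parameter support.
    // dbo.usp_LoadOrders, dbo.OrderTableType and the columns are made up.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;
    import com.microsoft.sqlserver.jdbc.SQLServerCallableStatement;
    import com.microsoft.sqlserver.jdbc.SQLServerDataTable;

    public class TvpCallSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sqlserver://dbhost:1433;databaseName=Sales;user=app;password=secret")) {

                // Build the in-memory table that mirrors the user-defined table type.
                SQLServerDataTable rows = new SQLServerDataTable();
                rows.addColumnMetadata("OrderId", Types.INTEGER);
                rows.addColumnMetadata("Amount", Types.DECIMAL);
                for (int i = 1; i <= 5000; i++) {
                    rows.addRow(i, new java.math.BigDecimal("10.00"));
                }

                // One call, thousands of rows.
                try (SQLServerCallableStatement cs = (SQLServerCallableStatement)
                        con.prepareCall("{call dbo.usp_LoadOrders(?)}")) {
                    cs.setStructured(1, "dbo.OrderTableType", rows);
                    cs.execute();
                }
            }
        }
    }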
I've just started messing around with AWS DynamoDB in my iOS app and I have a few questions.
Currently, I have my app communicating directly to my DynamoDB database. I've been reading around lately and people are saying this isn't the proper way to go about getting data from my database.
By this I mean I just have a function in my code that queries my DynamoDB database and returns the result.
The way I do it works, but is there a better way I should be going about this?
Amazon DynamoDB itself is a highly scalable service, and standing up another server in front of it means that server also has to be scaled in line with the RCU/WCU configured for your tables, which you can and should avoid.
If your mobile application doesn't need a backend server and you can perform all the business functions from the mobile device, then you should probably think about:
Using the AWS DynamoDB SDK for iOS to write the client application that runs on the mobile device.
Using the AWS Token Vending Machine to authenticate your mobile users and grant them credentials for operations on DynamoDB tables.
Controlling access (i.e. which operations are allowed on which tables) using IAM policies.
HTH.
From what you say, I gather you are asking about a way to distribute data to many clients (iOS apps).
There are a few integration patterns (a very good book on this: Enterprise Integration Patterns), one of which is called Shared Database. It is essentially about using a common database that multiple clients share. The main drawback of that pattern (in your case) is that every client makes assumptions about what the database schema looks like, which can cause headaches maintaining the schema in the future if your business logic changes.
The more advanced approach would be to send an event on every change in your data instead of writing changes to the database directly from the client apps. This way you can add extra processing to the events before the data they carry is written to the database. For example, you may want to change the event format in a new version of your app but still support legacy users, so you add a translation step that transforms both types of events into the format that fits the database schema. It's basically a question of working with diffs vs. snapshots.
You should be aware of the added complexity of working with events; it can be overkill if your app is simple and schema changes are unlikely.
Also consider that you can do data preprocessing using DynamoDB Streams, which gives you some of the advantages of events while keeping the implementation simple.
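As a rough illustration of that option, a stream consumer can be a small AWS Lambda function like the sketch below (Java, using the aws-lambda-java-events classes). The processing inside the loop is hypothetical; it only marks where your translation or routing logic would go.

    // Sketch of a DynamoDB Streams consumer as an AWS Lambda handler (Java).
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
    import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;
    import java.util.Map;

    public class ChangeEventHandler implements RequestHandler<DynamodbEvent, Void> {

        @Override
        public Void handleRequest(DynamodbEvent event, Context context) {
            for (DynamodbStreamRecord record : event.getRecords()) {
                String eventName = record.getEventName(); // INSERT, MODIFY or REMOVE
                Map<String, AttributeValue> newImage = record.getDynamodb().getNewImage();
                if (newImage == null) {
                    continue; // REMOVE events carry no new image
                }
                // Here you would translate the change into whatever downstream
                // format you need (audit log, legacy schema, notification, ...).
                context.getLogger().log(eventName + ": " + newImage.keySet());
            }
            return null;
        }
    }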
I am fairly new to the subject and doing some research.
I have an ESB (using WSO2 ESB) and want to extract master data from the passing messages (like Customers, Orders, etc.) and store it in a DB to keep as reference data. The source data is XML coming from web services.
So there needs to be a component that can maintain the master data: insert new objects, delete old ones and update changed ones (it would also be nice to have data events so the ESB can route data accordingly). Basically, the logic will be similar for any entity type, and it might be a good idea to autogenerate it for new entity types...
Options as I see them now:
Use Smooks with either SQLExecutor or Hibernate for persistence, with all the matching logic written either in the Smooks config or in DAO annotations.
Use some open-source ETL tool (like Talend, Kettle, Clover, etc.), so the data is passed to the ETL and all transformation logic is defined there. This could also accommodate future scenarios as they appear, or it could be overkill.
I would appreciate it if you could share your thoughts and point me in the right direction.
You'd be better off leaving the database part to another tool.
If you have a fair amount of database interaction in your message flow, you can expect a serious drop in performance.
However, you do not need an ETL tool for the use case you described. You can simply do it with WSO2 DSS by creating services that insert or update your data in the database.
We have been using this for message logging (into a DB) alongside the ESB and are happy with it. It's best to call the services as non-blocking, fire-and-forget web services in your message flow within the ESB. Hope this helps.
I am just getting started with web services in Lotus Notes. What I would like to do is create a web service that generates a sequential number. The code to generate the number is based on existing code we have used for some time within our databases (just straight LotusScript, no web services). Basically, there is a document that stores the next number; the next number is returned and the document is updated for the next call. Save conflicts are detected, and the number is retried if there was an issue saving it.
I thought I might use a web service to generate the number. So are web services processed sequentially or in parallel? Because if they are serial, then I won't need to deal with two people trying to save the number at the same time.
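For reference, the retry-on-save-conflict idea described above looks roughly like the sketch below when expressed with the Domino Java back-end classes (the same logic applies in LotusScript). The view and item names ("Counters", "NextNumber") and the retry limit are made up for illustration.

    // Sketch of the "next number" retry loop with the lotus.domino Java classes.
    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.NotesException;
    import lotus.domino.View;

    public class NextNumberSketch {

        // Returns the allocated number, or -1 if every attempt hit a save conflict.
        public static int nextNumber(Database db) throws NotesException {
            View counters = db.getView("Counters");
            for (int attempt = 0; attempt < 10; attempt++) {
                Document counter = counters.getFirstDocument();
                int candidate = counter.getItemValueInteger("NextNumber");
                counter.replaceItemValue("NextNumber", Integer.valueOf(candidate + 1));
                // save(false, false): don't force the save; it returns false if
                // someone else saved the document since we read it (a save
                // conflict), in which case we loop and try again with a fresh copy.
                boolean saved = counter.save(false, false);
                counter.recycle();
                if (saved) {
                    return candidate;
                }
            }
            return -1;
        }
    }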
Web services are a way for two systems to communicate with each other when they do not share a common language.
For example, a LotusScript agent connecting to a .NET server.
When creating a web service provider (server) on Domino you can code it in LotusScript or Java. The server then provides a WSDL file for the consumer (client) to write the code required to talk to that web service.
This tutorial should explain it better for you:
http://www-10.lotus.com/ldd/ddwiki.nsf/dx/Creating_your_first_Web_Service_provider_and_consumer_in_LotusScript_and_Java.
Now, as for Domino: web services run in the order they are requested from the server. However, there is no control to say "don't start until web service X has finished".
You could also code this into an application, but you run a serious risk of deadlocks and memory/performance issues for other users unless you account for that.
The Domino server can also be set to not run web services/agents in parallel. But again you risk the same issues.
If all you need is a unique ID, then you could go by the UNID of the document you create from the web service. Or you can use @Unique via an Evaluate, but both only return text.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.designer.domino.main.doc/H_UNIQUE.html
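In the Java back-end classes, for example, those two values would be retrieved as in the small sketch below; the point is simply that both come back as text, not numbers.

    // Both calls return unique *text* values, not numbers.
    import java.util.Vector;
    import lotus.domino.Document;
    import lotus.domino.NotesException;
    import lotus.domino.Session;

    public class UniqueIdSketch {
        static String uniqueTextFor(Session session, Document doc) throws NotesException {
            String unid = doc.getUniversalID();          // 32-character hex UNID of the note
            Vector result = session.evaluate("@Unique"); // random unique text value
            return unid + " / " + (String) result.firstElement();
        }
    }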
From the Lotus Designer Documentation:
To enable concurrent Web services on a server, you must enable concurrent Web Agents on that server. Open the Server document you want to edit. Click the Internet Protocols - Domino Web Engine tab. Enable Run Web Agents concurrently.
The maximum number of concurrent Web service calls is determined by the "Max concurrent agents" setting. From the Lotus Administration Documentation:
Max concurrent agents Specifies the number of agents allowed to run concurrently. Valid values are 1 through 10. Default values are 1 for daytime and 2 for nighttime. Enabling a higher number of concurrent agents can relieve a heavily loaded Agent Manager, but also reduces the resources available to run other server tasks.
Lotus Notes Domino Version 8.5.x
Yes, web services will run in parallel. But since you wrote that your code deals with save conflicts, you should NOT have a problem.
It is just like standard Notes access by two users: the first gets the doc, then the second gets the doc and saves (the speedy one of the two), and then the first will get a save conflict.
In conclusion: yes, it's parallel, BUT it's not a problem.
I would have thought that they run sequentially by default, since asynchronous web agents are off unless you switch them on. So although it's a good design pattern to allocate a 'safe' sequential number, if you only allocate a number via the web service and you haven't changed the asynchronous setting, then you'll be fine.
Let me also add: employ document locking to ensure number uniqueness in a sequential document-numbering solution.
There is a simple solution that avoids synchronicity considerations.
You should generate a temporary number using @Unique, then use a scheduled agent to assign sequential numbers in order of document creation, selecting only unprocessed documents using a properly constituted view. If you're not concerned about the order in which documents were created and only that all numbers are unique, a view is not necessary, and you can just trigger the agent on unprocessed documents.
The temporary number can be used for reference temporarily until a proper sequential number is assigned.
When the scheduled agent runs, it should send authors confirmation with the correct reference number.
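A scheduled agent along those lines could look like the sketch below (Java here; a LotusScript agent would be equivalent). The view name, item names and the counter profile document are all hypothetical.

    // Sketch of the scheduled numbering agent: walk a view of unprocessed
    // documents in creation order and stamp a final sequential number on each.
    import lotus.domino.AgentBase;
    import lotus.domino.Database;
    import lotus.domino.Document;
    import lotus.domino.Session;
    import lotus.domino.View;

    public class AssignNumbersAgent extends AgentBase {
        public void NotesMain() {
            try {
                Session session = getSession();
                Database db = session.getAgentContext().getCurrentDatabase();

                // Profile document holding the last number handed out.
                Document counter = db.getProfileDocument("NumberCounter", null);
                int next = counter.getItemValueInteger("LastNumber") + 1;

                // View sorted by creation date, selecting documents without SeqNumber.
                View unprocessed = db.getView("UnprocessedDocs");
                unprocessed.setAutoUpdate(false); // keep the index stable while we loop
                Document doc = unprocessed.getFirstDocument();
                while (doc != null) {
                    doc.replaceItemValue("SeqNumber", Integer.valueOf(next));
                    doc.save(true, false); // only this agent writes SeqNumber
                    // (optionally mail the author a confirmation with 'next' here)
                    next++;
                    Document tmp = unprocessed.getNextDocument(doc);
                    doc.recycle();
                    doc = tmp;
                }

                counter.replaceItemValue("LastNumber", Integer.valueOf(next - 1));
                counter.save(true, false);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }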
Or, you could export to DXL and get the sequence= attribute of the tag. This only works if you're accessing a single instance of the database, though. And the DXL export/XML import is a huge amount of overhead.
Unfortunately, I can't see a way to easily get the sequence number of the note from LotusScript NotesDocument. If you have an active support contract, you could open a Problem Management Report for a software enhancement request ("APAR", in IBM's parlance, though I do not know what its acronym expands to).
Good luck!
I've recently been toying with data migration into Microsoft Dynamics CRM using MS SQL Server Integration Services. First, the basic problem domain:
I have an exported flat file from a previous homebrew CRM system; the goal is to clean up the data efficiently and then move it into Dynamics CRM. I've decided to load one entity at a time in order to keep the orchestrations simple. There is currently an attribute in CRM that contains the primary key we used in the old CRM. The basic process in my head is: import the flat file into SSIS using the Excel adapter, then make a connection to the Microsoft Dynamics database in order to query for data related to the import. Since I'm not updating the database in any way, I figure this is fine. Once I have my list of account GUIDs and foreign keys, I will then compare the list of Excel rows to the list from the CRM database and create a new derived column containing the GUID, indicating that the operation should be an update and that the GUID to use is the one in that row.
I then create a script object and make a call out to the CRM web service; I go down the Excel file row by row, and if a row has a value in the derived column, it updates CRM, else it just creates a new entity.
If all goes well I'll package the SSIS and execute it from the SQL server.
Is there any gaping flaw in this logic? I'm sure there are ways to make it faster, but I can't think of any that would make a drastic difference. Any thoughts?
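For what it's worth, the decision logic being described boils down to something like the sketch below. It is only an illustration of the lookup-then-branch step; CrmGateway and LegacyRow are stand-in types, not part of any real SDK, and in practice this code would live in the SSIS script object and call the CRM web service.

    // Illustration only: decide per row whether to update an existing CRM
    // record (legacy key already mapped to a GUID) or create a new one.
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;

    public class MigrationDecisionSketch {

        interface CrmGateway {
            void update(UUID accountId, LegacyRow row); // stand-in for a CRM web service update
            void create(LegacyRow row);                 // stand-in for a CRM web service create
        }

        record LegacyRow(String legacyKey, Map<String, Object> values) {}

        // existingByLegacyKey: legacy primary key -> CRM account GUID,
        // built beforehand from the read-only query against the CRM database.
        static void migrate(List<LegacyRow> excelRows,
                            Map<String, UUID> existingByLegacyKey,
                            CrmGateway crm) {
            for (LegacyRow row : excelRows) {
                UUID existingId = existingByLegacyKey.get(row.legacyKey());
                if (existingId != null) {
                    crm.update(existingId, row); // derived column had a GUID -> update
                } else {
                    crm.create(row);             // no match -> create a new entity
                }
            }
        }
    }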
Your design is good. In fact, the specialized CRM integration product Scribe (and probably others too) works very much this way with most of its adapters: direct database access for reads, and web service calls for insert/update/delete and other operations.
I just wonder whether this complexity is actually necessary. It depends on the amount of data you have to import; I usually deal with data that gets imported overnight.
Sounds good to me - by getting the GUIDs directly from the database, you are reducing the number of necessary web service calls.
CozyRoc has recently released a new version, which includes Dynamics CRM integration components. Check the official release announcement here.
We have two ColdFusion applications that share a common database. There are three instances of each application. (One instance of each application runs on each of three servers.)
I can see that the three instances of a given application should share a client variable store. (Load-balancing can cause a single user session to bounce between the three instances.) My question is: Is there any danger to having all instances of both applications share the same data store? Or should only one application be pointing at a given data store?
You can use the same client data store. The CDATA table has an 'app' column that stores the ColdFusion application name. That column will keep your data unique to each application.
I'm working at an enterprise level ColdFusion shop with multiple CF applications running on the same server that are all pointed at the same client variable store. The only concern within the organization is how the client variable store affects regular backups, and that falls under the data team's purview. We don't have any problems with the different apps actually using the same client variable storage.
Related, from the ColdFusion documentation:
Some browsers allow only 20 cookies to be set from a particular host. ColdFusion uses two of these cookies for the CFID and CFToken identifiers, and also creates a cookie named cfglobals to hold global data about the client, such as HitCount, TimeCreated, and LastVisit. This limits you to 17 unique applications per client-host pair.
I guess this deals more with how many applications you actually run rather than whether you have them all share the same client data store, but it does suggest that there may be some kind of hard limit on the total number of apps you can run at once, although I'd recommend splitting across hosts (or just using a different domain name) if you're planning on more than 16 apps!
As Eric stated above, running multiple apps off of one datasource is fine. What I would warn you about is that these databases can fill up fast if you're not careful to block spiders and search engines from using them. Because CF creates client variables on each request for a new session, a search engine will get a new set every time, since it never sends back its old credentials/cookies, so CF thinks it's a new user who needs a new set of client variables. Also, be absolutely certain to check "Disable global client variable updates" in the CF admin. This will save you a lot of unnecessary overhead.
I would think that multiple applications sharing the same data store would open up the possibility of users from one application having access to the other applications. While it may not be likely, the possibility could exist. (I don't have any facts to back that up; it just seems like a logical conclusion.)
The question is, are you comfortable with that possibility, or do you have to absolutely make sure each application is secure?