I am just getting started in web services using Lotus Notes. What I would like to be able to do is create a web service that generates a sequential number. The code to generate the number is based on existing code we have used for some time within our databases (just straight LotusScript, no web services). Basically, there is a document that stores the next number; the next number is returned and the document is updated for the next call. Save conflicts are detected, and the number generation is retried if there was an issue saving the document.
I thought I might use a web service to generate the number. So are web services processed sequentially or in parallel? Because if they are processed sequentially, then I won't need to deal with two people trying to save the number at the same time.
Web services are a way for two systems to communicate where they would otherwise not have a common language.
For example LotusScript agent connecting to a .Net server.
When creating a web service provider (server) on Domino you can code it in LotusScript or Java. The server then provides a WSDL file for the consumer (client) to write the code required to talk to that web service.
This tutorial should explain it better for you:
http://www-10.lotus.com/ldd/ddwiki.nsf/dx/Creating_your_first_Web_Service_provider_and_consumer_in_LotusScript_and_Java.
Now as for Domino: web services run in the order they are requested from the server. However, there is no control to say "Don't start until web service X has finished."
You could also code this into an application, but you run the serious risk of deadlocks or memory/performance issues for other users unless you account for that.
The Domino server can also be set to not run web services/agents in parallel. But again you risk the same issues.
If it is a unique ID you need, then you could go by the UNID of the document you create from the web service. Or you can use @Unique via an Evaluate statement, but both only return text.
http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/topic/com.ibm.designer.domino.main.doc/H_UNIQUE.html
From the Lotus Designer Documentation:
To enable concurrent Web services on a server, you must enable concurrent Web Agents on that server. Open the Server document you want to edit. Click the Internet Protocols - Domino Web Engine tab. Enable Run Web Agents concurrently.
The maximum number of concurrent Web service calls is determined by the "Max concurrent agents" setting. From the Lotus Administration Documentation:
Max concurrent agents Specifies the number of agents allowed to run concurrently. Valid values are 1 through 10. Default values are 1 for daytime and 2 for nighttime. Enabling a higher number of concurrent agents can relieve a heavily loaded Agent Manager, but also reduces the resources available to run other server tasks.
Lotus Notes Domino Version 8.5.x
Yes, web services will run in parallel. But since you wrote that your code deals with save conflicts, you should NOT have a problem.
As with standard Notes access by two users: the first gets the document, then the second gets the document and saves (the speedy one wins), and then the first will get a save conflict on its own save.
In conclusion: yes, it's parallel, BUT it's not a problem.
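The read-increment-save-retry loop described in the question can be sketched as follows. This is a minimal Python sketch (not LotusScript): `CounterStore`, `SaveConflict`, and the version check are all illustrative in-memory stand-ins for the Notes counter document and its save-conflict detection.

```python
import random
import time

class SaveConflict(Exception):
    """Raised when another writer updated the counter document first."""

class CounterStore:
    # In-memory stand-in for the Notes document holding the next number.
    # save() fails if the document changed since it was read
    # (optimistic locking, mimicking a Notes save conflict).
    def __init__(self, start=1):
        self._value = start
        self._version = 0

    def read(self):
        return self._value, self._version

    def save(self, new_value, expected_version):
        if expected_version != self._version:
            raise SaveConflict()
        self._value = new_value
        self._version += 1

def next_number(store, max_retries=10):
    """Allocate the next sequential number, retrying on save conflicts."""
    for _ in range(max_retries):
        value, version = store.read()
        try:
            store.save(value + 1, version)
            return value
        except SaveConflict:
            time.sleep(random.uniform(0, 0.05))  # brief backoff before retrying
    raise RuntimeError("could not allocate a number after retries")
```

Because every caller retries until its save succeeds against the version it read, two parallel callers can never walk away with the same number.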
I would have thought that they run sequentially by default, as asynchronous web agents are off unless you switch them on. So although it's a good design pattern to allocate numbers 'safely' in sequence, if you only allocate a number via the web service and you haven't changed the asynchronous setting, then you'll be fine.
Let me also add:
Employ document locking to ensure number uniqueness in a sequential document-numbering solution.
There is a simple solution that avoids concurrency considerations altogether.
You should generate a temporary number using @Unique, then use a scheduled agent to assign sequential numbers in order of document creation, selecting only unprocessed documents using a properly constituted view. If you're not concerned about the order in which documents were created, and only that all numbers are unique, a view is not necessary, and you can just run the agent on unprocessed documents.
The temporary number can be used for reference temporarily until a proper sequential number is assigned.
When the scheduled agent runs, it should send authors confirmation with the correct reference number.
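A minimal sketch of that scheduled-agent approach, in Python rather than LotusScript: `make_document` and `assign_numbers` are hypothetical names, and the in-memory dicts stand in for Notes documents.

```python
import uuid
from datetime import datetime, timezone

def make_document(created):
    # A document gets a temporary @Unique-style reference at creation time;
    # number=None marks it as unprocessed.
    return {"temp_ref": uuid.uuid4().hex[:8], "created": created, "number": None}

def assign_numbers(documents, next_number):
    """Scheduled-agent sketch: assign sequential numbers to unprocessed
    documents in order of creation (the role of the sorted view)."""
    unprocessed = sorted(
        (d for d in documents if d["number"] is None),
        key=lambda d: d["created"],
    )
    for doc in unprocessed:
        doc["number"] = next_number
        next_number += 1
    return next_number  # persist this counter for the next scheduled run
```

Since only one instance of the scheduled agent runs at a time, no two documents can ever receive the same number, which is what makes the approach immune to concurrency issues.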
Or, you could export to DXL and read the note's sequence= attribute. This only works if you're accessing a single instance of the database, though. And the DXL export/XML import is a huge amount of overhead.
Unfortunately, I can't see a way to easily get the sequence number of the note from a LotusScript NotesDocument. If you have an active support contract, you could open a Problem Management Report for a software enhancement request (an "APAR" — Authorized Program Analysis Report — in IBM's parlance).
Good luck!
I have a few questions about server-cost estimations.
How do you decide what type of instance is required for X concurrent users? Is it totally based on experience, or is there a certain rule you follow?
I was using JMeter for load testing, and I was wondering, how do you test POST APIs with separate data for each user? Or is there any other platform that you use?
In the case of POST API calls, do we need to create a separate DB for load testing (which I think we should)? If yes, should we create a test DB in the same DB instance (i.e., in the same AWS RDS)? And does it need to have some data present in it? That might change its performance, right?
How do you load test a workflow? Suppose we need to load test a case where we want 5,000 users to hit the Auth API. It will consist of two APIs: one to request an OTP and the other to use that OTP to get the token.
Please help me out on this, as I am quite new to scaling and was just wondering if someone with experience in this can help.
Thanks.
This doesn't look like a single "question" to me; going forward, you might want to split it into 4 different ones.
Just measure it; I don't think it's possible to predict resource usage. Start the load test with 1 virtual user and gradually increase the load to the anticipated number of users, at the same time watching resource consumption in AWS CloudWatch or another monitoring solution like the JMeter PerfMon Plugin. If you detect that CPU or RAM is the bottleneck, switch to a higher instance type and repeat the test.
There are multiple ways of doing parameterization in JMeter tests; the most commonly used approach is the CSV Data Set Config, so each user reads the next line from the CSV file containing the test data on each iteration.
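The CSV Data Set Config idea — each virtual user pulling the next row of test data, wrapping around at end of file — can be sketched in Python. This is an illustration of the concept, not JMeter code; `make_user_feeder` and the inline CSV are hypothetical stand-ins.

```python
import csv
import io
from itertools import cycle

# Stand-in for a users.csv file: one line of credentials per virtual user.
CSV_DATA = """username,password
alice,s3cret
bob,hunter2
carol,pa55word
"""

def make_user_feeder(csv_text):
    """Return a callable that hands out the next CSV row on each call,
    wrapping around at the end (like CSV Data Set Config's recycle mode)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    it = cycle(rows)
    return lambda: next(it)
```

In JMeter itself you would simply point the CSV Data Set Config at the file and reference the columns as `${username}` and `${password}` in the POST request body.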
The DB should live on a separate host: if you place it on the same machine as the application server, they will interfere with each other and you might face race conditions. With regards to the database size: if possible, make a clone of the production data.
You should simulate real usage of the application with 100% accuracy: if a user needs to authorize before making an API call, your load test script should do the same.
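For the two-step OTP workflow from the question, each virtual user has to chain the two calls in order. Here's a Python sketch with hypothetical endpoint paths and payload fields; the HTTP client is injected so the flow can be exercised without a real server.

```python
def run_auth_workflow(http_post, phone):
    """Simulate one virtual user's auth flow: request an OTP, then
    exchange it for a token. `http_post(path, payload)` stands in for
    whatever real HTTP client the load tool provides."""
    # Step 1: request an OTP for this user (endpoint name is illustrative).
    otp_resp = http_post("/auth/request-otp", {"phone": phone})
    otp = otp_resp["otp"]
    # Step 2: exchange the OTP for a token, reusing the value from step 1.
    token_resp = http_post("/auth/token", {"phone": phone, "otp": otp})
    return token_resp["token"]
```

In JMeter the same chaining is done by extracting the OTP from the first response (e.g. with a JSON Extractor) and referencing it as a variable in the second request.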
I have a requirement to publish Siebel inbound web service with only one port, at the same time WS has to receive three different operations.
My WS's are based on workflow.
As I read in the Bookshelf, only one operation can be added to a port of a WS based on a WF:
https://docs.oracle.com/cd/E14004_01/books/CRMWeb/CRMWeb_Overview12.html
(see p.5)
However, I've found a vanilla WS that looks like what I need:
FinancialAssetService
Could anyone give me some tips on how to create such a WS?
Is it possible to receive different IOs via different operations of this WS?
Thanks in advance!
Well, if your web service provides 3 operations, you must be invoking 3 different workflows, right? (It says so in the page you linked: a workflow corresponds to a single Web service operation). Then, yes, you'll need to define 3 "service ports" in your web service.
However, I don't see why that would be a problem at all. I've never done this myself, but you can define the same endpoint URL and HTTP port for each one of the 3 service ports. The external application consuming your service would never notice any difference.
As for your second question, yes, having 3 different workflows would obviously allow you to choose different integration objects for each operation.
On the other hand, if you only have one workflow and need 3 operations because you want it to accept different input structures, then you might want to rethink your solution. Perhaps create 3 tiny workflows (or a BS with 3 operations) to just transform the data to a common IO (using Siebel data mappings), and then pass it to your existing WF.
I just installed Sitecore Experience Platform and configured it according to the Sitecore scaling recommendations for processing servers.
But I want to know the following things:
1. How can I use the Sitecore processing server?
2. How can I check whether the processing server is working fine?
3. How is the collection DB data processed and sent to the reporting server?
The processing server is a piece of the whole analytics (xDB) part of the Sitecore solution. More info can be found here.
Snippet:
"The processing and aggregation component extracts information from captured, raw analytics data and transforms it into a form suitable for use in reporting applications. It also performs specific tasks on the collection database that involve mass updates.
You implement processing and aggregation on a Sitecore application server connected to both the collection and reporting databases. A processing server can run independently on a dedicated server, or on the same server together with other Sitecore components. By implementing multiple processing or aggregation servers, it is possible to achieve higher performance on high-traffic solutions."
In short: the processing server aggregates the data in Mongo and processes it into the reporting database. This can be put on a separate server in order to spare resources on your other servers. I'm not quite sure what it all does behind the scenes, or how to check exactly and only that part of the process, but you could check the reporting tools in the Sitecore backend, like Experience Analytics. If those are working, you are probably fine. Also, check the logs on the processing server - they will give you an indication of what it is doing and whether any errors occur.
I want to create a weather app for my android phone but now I got stuck on the backend part of the app.
I have found a weather service where I can, for free, get detailed information about a certain location through their web service. But they have stated in their rules that I am not allowed to poll their service at a high frequency. So I thought that I could create a web service of my own that retrieves weather information from the service I found and then makes it available through my web service, so that my app only makes calls to my service.
the communication will be like below
MyApp <--> MyWebService <--> commercial webservice
the android app talks to MyWebService. And my webservice talks to the commercial service.
So I want MyWebService to do two things:
retrieve information from the commercial webservice once every hour and update my database
handle requests from my androidApp
My problem is that I know too little about web applications and web services. I don't really know what language to choose for the web service.
PHP with soap or REST looks like a good candidate for the second task. But I can't find any sample on how to handle the first task. Is there any easy way to tell the server to run my script once every hour?
I have been looking a little into C# as well, which would suit me a little better as I am more used to C#. But the same question arises here: how do I handle the first task?
This is something that I have wanted to write for a long time, but I feel totally lost here.
Doing things "once an hour" (or more generally, scheduling tasks) from a web-only application is tricky for a number of reasons. It is much better in general to use the built-in mechanism of the operating system to perform scheduled tasks (e.g. cron under Linux, or Scheduled Tasks under Windows), or to write a service/daemon process that handles the updates.
Having said that, there is a fairly straightforward way to meet your requirement. You can cache the result of the commercial web service in your web application tier, along with a timestamp of the last time you retrieved the information. When a request comes into your web service from your app, first check the timestamp of the cache. If the timestamp is less than one hour old, just return the cached weather data. If the timestamp is more than an hour old, call the commercial web service directly from there, write the result and the current time into your cache, and return the data you just got to the app.
PHP is certainly well-suited to this kind of task. Detailed instructions on how to do that are beyond the scope of a Stack Overflow question. Google for PHP and caching, try out some examples, and ask detailed follow-up questions if you get stuck.
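The timestamp-cache logic described above can be sketched as follows — shown in Python for brevity, though the same structure maps directly to PHP or C#. `fetch_remote` is a stand-in for the call to the commercial weather service, and the clock is injectable so the expiry logic is easy to exercise.

```python
import time

CACHE_TTL_SECONDS = 3600  # one hour, per the provider's polling rules

# Module-level cache: last fetched payload plus the time it was fetched.
_cache = {"data": None, "fetched_at": 0.0}

def get_weather(fetch_remote, now=time.time):
    """Return cached weather if it is under an hour old; otherwise call the
    commercial service (fetch_remote), refresh the cache, and return fresh data."""
    if _cache["data"] is not None and now() - _cache["fetched_at"] < CACHE_TTL_SECONDS:
        return _cache["data"]
    data = fetch_remote()
    _cache["data"] = data
    _cache["fetched_at"] = now()
    return data
```

A nice property of this lazy design is that the commercial service is never called more than once per hour, yet you need no cron job or scheduler at all — the first request after expiry does the refresh.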
Nowadays a lot of web applications are providing API for other applications to use.
I am new to the usage of API so I want to understand the use cases for it.
Lets take Basecamp as an example.
What are the use cases for using their API in my web application?
For inserting current data in my web application into a newly created Basecamp account instead of inserting everything manually which could take days or weeks if the data is huge?
For updating my application data when the user changes something in Basecamp. If so, how do I know, for example, when a user adds/edits/removes a contact in Basecamp? Do I make a request and check every minute from the backend?
For making backup of the Basecamp data so I can move it to other applications if necessary?
Are all the above examples good use cases for the usage of API?
Are there more use cases?
I want to have a clear picture of why it's good to use another web service's API and how I can leverage that in my application.
Thanks.
I've found the biggest reason to use and provide web services is to be able to programmatically drive the application with another process. This allows the coupling of different actions in different applications driven by one event/process/trigger.
For example, I could use a web service provided by Basecamp, my bug-tracking database, and the continuous integration server. I could tie all those things together and kick them off from a commit hook script.
I can have a monitor in production automatically open a ticket in our ticket tracker. This could trigger an autoremediation process from the ticket tracker which logs into the box remotely and restarts the service.
The other major reason I've seen to use and provide web services is to reduce double entry. If you do change management in your production environment, that usually means you create Change tickets. The changes that occur may also need to be reflected in the Change Management Database, which is usually a model of how production is supposed to look. Most of these systems don't automatically update your configuration items with the data from the change. Using web services, you can stitch them together to eliminate the double (manual) entry that would normally occur.
APIs are used any time you want to get data to/from an application without using the default interface.
- I'd bet there's a mobile app that uses the Basecamp API.
- You could use the API to pull information from Basecamp into another application (like project manager software or an individual's todo webpage).
- The geekiest of us may prefer to update Basecamp from a script/command line rather than interrupting our workflow to open a web page and click around.