I am completely new to the LoadRunner performance testing tool.
I have an application in which a practitioner performs an assessment for a patient.
The steps to perform the assessment in that application are below:
1. Log in to the application.
2. Answer the security question.
3. Go to the members page by clicking on the Members tab.
4. Select the status for the assessment as Not Started.
5. Select one member with a Not Started assessment.
6. Begin the assessment.
7. Submit the assessment.
8. Log out.
Between steps 6 and 7 there are roughly 5 to 6 steps, all of which are client-side operations.
After I begin the assessment, I go through the following sections:
1. Clinical profile.
2. Subjective section.
3. Review of the system and physical examination.
4. General questions.
5. Current Diagnostics.
6. Preventive care.
7. Assessment and plan.
In each section I will have different controls like text boxes, check boxes, buttons, etc.
After I finish filling in one section, I have to click the Next button to go to the next section. The Next button, like any other action within the sections mentioned above, does not hit the server.
The Silverlight protocol traffic starts when I begin the assessment.
When I begin the assessment, I receive the data (text boxes, check boxes, buttons) for all of the sections mentioned above in an XML file. This XML file is encrypted.
After I submit the assessment, the responses for the filled-in sections are populated into the XML file, and the XML file is sent as part of the request in encrypted form.
When I recorded this in LoadRunner, I observed that the response to the Begin Assessment request is entirely encrypted. In order to submit the assessment, I have to capture the data from the Begin Assessment response, add the answers to the questions in each section to the XML file, and send that XML file as part of the submit request.
Please help me out with my question.
LoadRunner (LR) is more akin to API testing than UI (browser) testing. That is, all that LR is really concerned about is that final POST. LR is oblivious to performance in the end user's browser and couldn't care less about Silverlight, Ajax, or any other browser-side load. About the only impact Silverlight has on LR is to increase the recorded think time. LR wants to measure the impact of that final POST on the network, the web server tier (including load balancers and application servers), and the DB servers. When you use VuGen to create the script, all you should really see is that final POST and the generated XML in the body. You can parameterize the XML values sent in the POST using parameter lists (tables) to vary the responses across Vusers. That is up to you.
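The parameterization itself is just substitution of one row of a table into the recorded XML body per Vuser iteration. VuGen does this for you with parameter files, but purely to illustrate the idea, here is a sketch (in Java; the template file and placeholder names are hypothetical):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Illustration only: substitute one row of a parameter table into a
// recorded XML template before posting it. All names are hypothetical.
public class XmlParameterizer {

    static String buildBody(String templatePath, String[] row) throws Exception {
        String xml = new String(Files.readAllBytes(Paths.get(templatePath)));
        // {diagnosis} and {plan} are assumed placeholders in the template.
        return xml.replace("{diagnosis}", row[0]).replace("{plan}", row[1]);
    }

    public static void main(String[] args) throws Exception {
        // One row per Vuser, as would be loaded from a CSV-style parameter list.
        List<String[]> table = List.of(
                new String[] {"Hypertension", "Follow up in 30 days"},
                new String[] {"Diabetes", "Adjust medication"});
        for (String[] row : table) {
            String body = buildBody("assessment-template.xml", row);
            // The submit step would send this body as the final POST.
        }
    }
}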
For my master's thesis I have to develop a web service that constantly monitors user interaction in a web application (recording, for example, the most-clicked buttons or the most-visited pages). The main goal of this project is to generate a set of behavioral profiles.
I have some experience with web services, but I feel I need some sort of starting point with this kind of project (similar projects, existing thesis or articles). Hope someone can help.
Assuming you are talking about RESTful web services, a good starting point would be to list all the APIs you think you will need, in terms of GET, POST, PUT, and DELETE methods.
For example, you might first need a list of all the buttons, links, and pages in the website you are supposed to monitor (let's say getAllItems). Then, for each individual item, you need code to capture the 'onclick', 'onbuttonpress', etc. events (let's say getOnClickForButtonA). You will have to save all this information periodically in a database.
When you have enough information in the database, you can write code to read this and generate some stats out of it.
So, in short:
A set of services to gather all the info.
A set of services to store all this info.
A set of services to analyze the gathered info.
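To make this concrete, a minimal Jersey (JAX-RS) resource for the gathering part might look like the sketch below; the paths, field names, and the EventStore DAO are all hypothetical.

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical endpoint the monitored web application posts one
// interaction event to (e.g. from an 'onclick' handler).
@Path("/events")
public class EventResource {

    // Simple event payload; field names are assumptions.
    public static class InteractionEvent {
        public String page;      // e.g. "/products"
        public String elementId; // e.g. "buttonA"
        public String type;      // e.g. "click"
        public long timestamp;   // client-side epoch millis
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response record(InteractionEvent event) {
        // EventStore is a hypothetical DAO persisting events for the
        // later analysis/statistics services.
        EventStore.save(event);
        return Response.status(Response.Status.CREATED).build();
    }
}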
Hope this helps!
To start off, I am extremely sorry if my question is not clear, but I have very little knowledge of web services in general, and the vast amount of varying information available has driven me crazy over the past few weeks. So please do bear with me.
Summary: I want to create a live score update app for Android. (I haven't added Android as a tag because I do know how to retrieve data from, say, Twitter's JSON API.) However, like the Twitter JSON API, I want to be able to add (POST, maybe?) data to the Apache 7.0 service that I have running. I then want the app to be able to retrieve this data that I have posted.
I had asked a more generic question earlier and was told that I should look up some APIs. I did that, but I have still not been able to make a breakthrough.
So my questions are:
Is setting up an API on my local web service the correct way to do this?
If so, how can I set up an API that will return JSON objects to the Android app? Also, I would need to be able to constantly update this API with new data.
Additionally, would I also need to set up a database for all this?
Any links to well-explained material would be appreciated too.
Note: I would like to carry this out using a RESTful web service through Jersey and use JSON objects during retrieval.
Again, I am sorry about my terrible knowledge of web services in general, despite trying my best to research a lot. The best I could do was get my RESTful web service to respond to a GET request with some pre-defined text that I had set in Eclipse.
Thanks.
If I understand you correctly, what you are trying to do is something like this:
There will be a match, or multiple matches, of some sort. Whenever a team/player scores, someone (i.e. you) will use the app to update the score. People who previously subscribed to the match will be notified and see the updated score.
Even though I'm not familiar with Java-based backends, the implementation should be fairly similar in other programming languages.
First of all, a few words about REST in general. REST is generally needed when you need to share information between multiple devices and/or users, which seems to be the case here. To implement REST you are going to need an API of some sort. On the web, APIs are implemented by web servers answering certain predefined HTTP requests.
Thus, setting up an API on a web server is the correct way to go.
Next, a few words on databases. A database is generally needed if you want to store information persistently. This might or might not be what you are planning to do. If there are just going to be a few matches at the same time and you don't care about persistence of the data, you can use Java to store a collection of match objects in memory. I'm just saying it is possible, not that it is a good idea: once your server crashes, or you run out of memory for whatever reason, the data is going to be lost. (Of course, within the actual implementation you will want to cache data for current matches in some way, and keeping objects in memory is one way to do so.)
I'd recommend using a database.
Within the database, you can then store and access information about the matches like the score, which users subscribed, who played, etc.
JSON is just a way to represent the data/objects that will be shared between the server and the client. You can use JSON to encode request and response data/bodies.
The user has to be informed about the updated score. There are two basic ways to do so: push or pull. With pull, the client will check for updated scores at fixed intervals or on certain actions. With push, the server will notify the client about changed scores, which will cause it to update the information. Since you are planning a live application and are using Java anyway, push seems to be the better way to go.
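If you do go with push, one possible mechanism on the Java side is server-sent events, which JAX-RS supports since version 2.1. The following is only a rough sketch: the paths and names are hypothetical, and a single broadcaster is shared across matches for brevity.

import javax.inject.Singleton;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.sse.Sse;
import javax.ws.rs.sse.SseBroadcaster;
import javax.ws.rs.sse.SseEventSink;

// Hypothetical push endpoint: clients subscribe once and then receive
// score events whenever the server broadcasts them.
@Singleton
@Path("/api/matches/{id}/observe")
public class ScorePushResource {

    private volatile Sse sse;
    private volatile SseBroadcaster broadcaster;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public void observe(@PathParam("id") String matchId,
                        @Context Sse sse, @Context SseEventSink sink) {
        if (broadcaster == null) {
            this.sse = sse;
            this.broadcaster = sse.newBroadcaster();
        }
        broadcaster.register(sink); // the connection stays open for pushes
    }

    // Called after the administrator's update request changes a score.
    // A real implementation would keep one broadcaster per match.
    public void publish(String scoreJson) {
        if (broadcaster != null) {
            broadcaster.broadcast(sse.newEvent("score", scoreJson));
        }
    }
}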
Last but not least let's have a look at a possible implementation using
Webserver (API endpoints + database)
Administrator (keeps score updated)
User (receives updates)
We assume that the server will respond to HTTP requests (e.g. POST # /api/my-endpoint) with JSON objects.
Possible flow
1)
First the administrator creates a match
REQUEST
POST # /api/matches
body: team1=someteam&team2=someotherteam
The server will now create a match object and store it in the database. The response will contain information about the object and whether the action was successful.
2)
The user asks for a list of matches
REQUEST
GET # /api/matches/current
The response will be a JSON object containing a list of current matches.
RESPONSE
{
"matches": [
{id: 1, teams:...}, ...
]
}
3)
(If push)
A user subscribes to a match
REQUEST
GET # /api/matches/SOME_MATCH_ID/observe
The user will now be added as an observer for the match. Again, the response contains information about whether the action was successful or not.
4)
The administrator updates a score
REQUEST
PUT # /api/matches/SOME_MATCH_ID
body: team1scored...
The score now gets updated on the server (in memory/database) and the user will be notified about the updated score.
5)
The user gets the updated score
REQUEST
GET # /api/matches/SOME_MATCH_ID
RESPONSE
... (Updated score in some way)
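Since you mentioned Jersey, a skeletal JAX-RS resource backing this flow might look like the following sketch. The Match POJO and MatchStore DAO are assumptions, JSON serialization is assumed to be handled by a provider such as Jackson, and the observe endpoint from step 3 is covered by the push sketch above.

import java.util.List;
import javax.ws.rs.Consumes;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource backing the flow above. Match is an assumed POJO
// serialized to JSON; MatchStore is an assumed DAO wrapping the database.
@Path("/api/matches")
@Produces(MediaType.APPLICATION_JSON)
public class MatchResource {

    // 1) The administrator creates a match.
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Match create(@FormParam("team1") String team1,
                        @FormParam("team2") String team2) {
        return MatchStore.create(team1, team2);
    }

    // 2) The user asks for the list of current matches.
    @GET
    @Path("/current")
    public List<Match> current() {
        return MatchStore.findCurrent();
    }

    // 4) The administrator updates a score.
    @PUT
    @Path("/{id}")
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Match updateScore(@PathParam("id") long id,
                             @FormParam("team1scored") boolean team1Scored) {
        return MatchStore.updateScore(id, team1Scored);
    }

    // 5) The user pulls the current state of one match.
    @GET
    @Path("/{id}")
    public Match get(@PathParam("id") long id) {
        return MatchStore.find(id);
    }
}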
I have a real-time web analytics problem to address, and I'm wondering if some of the WSO2 products might be an appropriate solution.
An ecommerce web site shows pages of products to a browser user, and the web site vendor wants to collect details of which products were viewed in a list, which products were selected from the list for more info, which products were put into the basket, and which products were actually purchased, all in real time. I can use web page tagging to generate logging events for the four states (i.e. in list, view detail, in basket, purchased). The web site vendor wants to see results summarized by product and by rolling time band (e.g. last hour, last 6 hours, last 24 hours, last 72 hours) for the four product states.
As a complete WSO2 newbie, I'm hoping somebody can help with some pointers on how to address this. I've been reading about the BAM module for capturing events. Is that a good place to start? Also, can anybody suggest a good in-memory data store to hold the event data aggregated by event type and rolling time period?
TIA
Yes. BAM is more of a batch processing, monitoring, and complex event processing engine; using it you can capture data, process it, and then present it. From an architectural point of view, the product states that are changed by the browser user will be captured by the web server and published to the BAM server.
A good place to start is learning about data publishing. Once you define the data to be published (in BAM this is known as a stream definition), you can write a Hive script to process and present it. You can pump all the data to BAM, use a Hive script to process and store it in the form you want, and later retrieve and present it.
We need our Sitecore web application to process 60-80 web requests per second. We are using Sitecore 7.0. We have tried a deployment with one web server and one database server, but it only processes 20-25 requests per second; the web server queues up all the other requests in memory, and as we increase the load, memory fills up. (We applied all the recommended Sitecore performance enhancements.) We need 4x performance to reach the goal :).
Will it be possible to achieve this goal by upgrading the existing server, or do we have to add more web servers to the production environment?
Note: We are using Lucene indexing as well.
Here are some things you can consider without changing the overall architecture of your deployment:
CDN to offload media and static asset requests
This leaves your content delivery server available to handle important content queries and display logic.
Example: www.cloudflare.com
Configure and use Sitecore's built-in caching
This is from the guide:
Investigation and configuration of the Sitecore Caches is broken down into multiple tasks. This way each task is more focused and simplified. The focus is on configuration and tuning of the Sitecore Database Caches (prefetch, data, and item caches).
For configuration of the output rendering caching properties, the customer should be made aware of both the Sitecore Cache Configuration Reference and the Sitecore Presentation Component Reference as to how to properly enable these caches and the properties used to expire them.
Check out the Sitecore Tuning Guide
Find Slow Queries or Controls
It sounds like your application follows Sitecore best practices, but I leave this note in for anyone that might find this answer. Use Sitecore's built-in Debug mode to identify the slowest running controls and sublayouts. Additionally, if you have Analytics set up there is a "Slow Pages" report that might give you some information on where your application is slowing down.
Those things being said, if you're prepared to provision additional servers and set up a load-balanced environment then read on.
Separate Content Delivery and Content Management
To me the first logical step before load-balancing content delivery servers is to separate the content management from the equation. This is pretty easy and the Scaling Guide walks you through getting the HistoryEngine set up to keep those Lucene indexes up to date.
Set up Load Balancer with 2 or more Content Delivery servers
Once you've done the first step, this can be as easy as cloning your content delivery server and adding it to your load balancer "pool". There are a couple of things to consider here. Does your web application allow users to log in? If so, you'll need to worry about sticky sessions or machine keys. Does your web application use file media instead of blob media? I haven't had to deal with this, but I understand that's another consideration.
Scale your SQL solution
I've seen applications with up to four load balanced content delivery servers and the SQL Server did not have a problem - I think this will be unique to each case depending on a lot of factors: horsepower and tuning of SQL Server, content model of your application, complexity of your queries, caching configuration on content delivery servers, etc. Again, the Scaling Guide covers SQL Mirroring and Failover, so that is going to be your first stop on getting that going.
Finally, I would say contact Sitecore. These guys have probably seen more of what's gone right and what's gone wrong with installations and could get you on the right path. Good luck!
This answer written from a Sitecore developer perspective:
Bottom line: You need to figure out exactly where your performance bottleneck is. That is going to take some digging, but will be very worthwhile. You should definitely be able to serve 60-80 requests/s without any trouble... but of course that makes a lot of assumptions about the nature of your site and the requests.
For my site, I found Sitecore's caching implementation to be sub-par... I created some very simple and aggressive application-specific caches in my app and this made all the difference in the world. For instance, we have 900+ "Partner" items where our sites' advertisements live... and simply putting all these objects in an array in the Application object sped up page requests significantly. Finding an object in a Hashtable indexed by its Item.Name or ID is going to be a lot faster than Sitecore.Context.Database.GetItem("/itempath") or a SelectItems() call (at least, that's my experience). If your architecture and data set will allow this strategy, we've had good experience with it.
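In outline the idea is: preload the items once, then answer every later lookup from memory. Here is a rough sketch of that pattern (in Java for illustration; a Sitecore solution would do the same in C# with the ASP.NET Application object, and the Partner class here is hypothetical):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the aggressive application-level cache described above:
// load all "Partner" items once, then answer lookups from memory instead
// of querying the content database on every request.
public class PartnerCache {

    private static final Map<String, Partner> BY_NAME = new ConcurrentHashMap<>();

    // Called once at application start-up (analogous to populating the
    // ASP.NET Application object).
    public static void preload(Iterable<Partner> allPartners) {
        for (Partner p : allPartners) {
            BY_NAME.put(p.getName(), p);
        }
    }

    // O(1) hash lookup, much cheaper than a per-request database query.
    public static Partner byName(String name) {
        return BY_NAME.get(name);
    }
}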
Another thing to watch out for is XSLT renderings. Personally, I avoid them completely in favor of ASP.NET UserControls. XSLT renderings are just slow, as much as 10x slower than a native UserControl rendering the same HTML. So if you have a few of these, replace them with some custom code and you'll see a world of difference.
Say I want to develop a web application that will have registered users and will be registered as a Twitter app (allowing users to give it permission to view their timelines and post on their behalf). The sole function of the application will be to re-tweet tweets from users' timelines according to users' settings and desires.
I understand that the website for this app will use common technologies like HTML, CSS, and JS on the client side. The server side (where the user defines what kind of tweets the application should retweet) will have to be coded in PHP/Python/Perl/..., backed by a database such as MySQL/PostgreSQL/...
What I don't understand, and would really appreciate your help with, is where the real "business logic" will be coded. For example, what technology should I use to code the function that will sit on my server, contacting the Twitter server every 5 minutes, reading the timeline of every user I have, checking whether there are tweets worth retweeting (according to what the user has defined), and sending Twitter the necessary commands to retweet the chosen tweets on behalf of my users?
All that will happen offline from the user's perspective, and will be an ongoing, cyclic process - but what technology should I use to code it?
Thanks!
I have heard about this API for PHP. It is actually the only one that I have heard of for PHP, though. I know that there are some good Python libraries out there, but I don't know about Perl.
I am actually working on a new API for C# (won't be a good fit for you, as you're clearly not using Windows Servers), and started building it while working on an enterprise web application that prompted several questions similar to your own.
Here is what you are going to have to do:
Before you start, you are going to have to get in touch with one of Twitter's data partners (I believe that you may contact Twitter for the reference)
The reason for this is that you are going to need many more requests than you think
The time interval used for Twitter's rate cap is 900 seconds (15 minutes)
With the general rate limit, if you are querying each user's timeline only once per rate-limit window, you are limiting the number of visitors on your site to 300 at a time
Here's where it gets tricky - if every user makes one Tweet (meaning you send the Tweet - not rate limited - and then refresh the timeline - rate limited - so that they can see the updated tweet) you have now dropped your maximum number of active users at any given time to 150
Factor in the company's own timeline (-1 visitor), plus the number of visitors who leave their browsers open (now you need more logic, and you have to either kick them off or simply keep track of whose timeline you won't be refreshing), the number of users who make more than one tweet (-1 visitor for each Tweet), etc.
Moral of the story: contact one of their data partners and get yourself either unlimited requests, or at least a significant enough amount to accommodate your number of visitors/users (plus a bit of padding)
If you adhere to this advice, skip steps 2 and 3; otherwise, skip step 4
(Note: Steps 2 and 3 are only for rate-capped implementations) Using your desired language, make a service that runs on the server and makes the queries to Twitter
Based on the information that you gave, I suggest that you use Python to make this service
The service will run at all times and keep its own clock on which to base the 5-minute intervals between requests
You will have to use a caching or a database system for storing the data
(Note: Steps 2 and 3 are only for rate-capped implementations) Add the necessary code to make a request to the service that you created for the data and perform these requests every 5 minutes
I suggest that the clock used for making these requests to the service be running a little bit behind the clock used for the service to account for instances of slow data transfer, etc.
You will also have to call some methods on the service for adding/removing users from the queue
(Note: Step 4 is only for unlimited request implementations) Forget about the service and simply include the request code directly in the page that the user is on.
The user's timeline will be updated based on when they visited the site or when their timeline was last refreshed (if a Tweet was made)
The only caveat to this implementation is that you will have to pay for the unlimited/larger data rate limit
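Whatever language you pick, the service from steps 2 and 3 boils down to a scheduled loop over a queue of users. A rough sketch follows (in Java; TwitterClient and TimelineCache are hypothetical helpers, and the 5-minute interval comes from the question):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Rough sketch of the timeline-polling service from steps 2 and 3.
public class TimelinePoller {

    private final Set<String> queue = ConcurrentHashMap.newKeySet(); // users to refresh
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Runs on its own clock: one pass over the queue every 5 minutes.
        scheduler.scheduleAtFixedRate(this::refreshAll, 0, 5, TimeUnit.MINUTES);
    }

    // Methods the web tier calls to add/remove users from the queue.
    public void subscribe(String userId)   { queue.add(userId); }
    public void unsubscribe(String userId) { queue.remove(userId); }

    private void refreshAll() {
        for (String userId : queue) {
            // Each fetch counts against the rate limit, so the queue size
            // must stay within the limits discussed above.
            String timelineJson = TwitterClient.fetchTimeline(userId);
            TimelineCache.store(userId, timelineJson); // database or cache
        }
    }
}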