Hardware sizing

Basically I am looking for guidelines on how to approach hardware sizing for a given set of requirements. What parameters need to be considered to arrive at sizing decisions?
It would be really great if anybody could help with identifying sizing details for the app server, web server, database, etc.

Well, usually what you do in such cases is follow the hardware sizing sheet supplied by the product vendor for the particular app server or database. Some of the common parameters you will need are the availability window, the number of concurrent users, the disk space consumed, and the application's memory requirements.
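To make that concrete, here is a rough, illustrative back-of-envelope calculation (a sketch in Java; every constant in it is a made-up assumption you would replace with numbers from your own measurements and the vendor's sizing sheet):

    public class SizingEstimate {
        public static void main(String[] args) {
            // All values below are hypothetical placeholders, not recommendations.
            int concurrentUsers  = 500;   // from your requirements
            int mbPerUserSession = 2;     // measured per-session memory footprint
            int baselineHeapMb   = 1024;  // app server + application baseline
            double headroom      = 1.5;   // spare capacity for peaks and failover

            double heapMb = (baselineHeapMb + (double) concurrentUsers * mbPerUserSession) * headroom;
            System.out.printf("App server heap estimate: %.0f MB%n", heapMb);

            long rowsPerDay   = 200_000;  // expected daily write volume
            int bytesPerRow   = 512;      // average row size incl. index overhead
            int retentionDays = 365;      // driven by your availability/retention window
            double diskGb = (double) rowsPerDay * bytesPerRow * retentionDays / (1024 * 1024 * 1024);
            System.out.printf("Database disk estimate: %.1f GB/year%n", diskGb);
        }
    }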

Problem loading large amounts of geoJSON data into a Leaflet map

I'm doing some tests with an HTML map built with Leaflet. Server side, I have a Ruby Sinatra app serving JSON markers fetched from a MySQL table. What are the best practices for working with 2k-5k (and potentially more) markers? Should I:
Load all the markers up front and then delegate everything to Leaflet.markercluster.
Load the markers every time the map viewport changes, sending the southWest and northEast points to the server, do the clipping server side, and then sync the client-side marker buffer with the server-fetched entries (what I'm doing right now).
Use a mix of the two approaches above.
Thanks,
Luca
A few months have passed since I originally posted the question and I made it through!
As @Brett DeWoody correctly noted, the right approach depends strictly on the number of DOM elements on the screen (mainly markers, in my case). The faster the device (especially its CPU), the more elements you can afford. Since the app I was developing targets both desktop and tablet devices, CPU was a relevant factor, just like the marker density of different geo-areas.
I decided to separate database querying/fetching from map representation/display. Basically, the user adjusts controls/inputs to filter the whole dataset; then the matching records are fetched and Leaflet.markercluster does the job of representation. When a filter is modified, the cycle starts over. Users can choose the zoom level at which clusterization kicks in, depending on their CPU power.
In my particular scenario, this proved to be the best approach (verified with console.time). I found that the viewport optimization was only worthwhile in lower marker-density areas (a pity).
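For reference, the viewport clipping mentioned above boils down to a bounding-box query. A minimal sketch of the idea (shown in Java/JDBC purely as an illustration; table and column names are made up):

    import java.sql.*;
    import java.util.*;

    public class MarkerClipping {
        // Returns only the markers inside the client's current viewport.
        // Note: a box crossing the antimeridian would need special handling,
        // since "lng BETWEEN swLng AND neLng" no longer holds there.
        public static List<double[]> markersIn(Connection db, double swLat,
                double swLng, double neLat, double neLng) throws SQLException {
            String sql = "SELECT lat, lng FROM markers "
                       + "WHERE lat BETWEEN ? AND ? AND lng BETWEEN ? AND ?";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setDouble(1, swLat);
                ps.setDouble(2, neLat);
                ps.setDouble(3, swLng);
                ps.setDouble(4, neLng);
                List<double[]> markers = new ArrayList<>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        markers.add(new double[] { rs.getDouble(1), rs.getDouble(2) });
                    }
                }
                return markers;
            }
        }
    }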
Hope it may be helpful.
Cheers,
Luca
Try the options and optimize when you actually see issues, rather than optimizing early. You can probably get away with just Leaflet.markercluster unless your markers have lots of data attached to them.

Separation of Runtime and History

I would like to use separate databases for runtime and history data without implementing a custom HistoryEventHandler. Does anyone know whether this is possible?
I read the Camunda user guides, but they did not help much because they only hint at the custom-implementation route.
Currently, every time I query history data (about 2 million activity entries) the performance of the system drops, as it kind of blocks the runtime too. I'd like to avoid this without losing the ability to query historic data.
That would be a really cool feature, but it is currently not supported. You will have to disable the default history and implement a custom handler.
Camunda BPM offers Optimize, which pulls history data from the engine into an Elasticsearch database. If you are using the Enterprise version, that may be a way to solve it.
(Based on your comments to other answers, it appears that you're interested in learning more about custom HistoryEventHandler implementations. Thus, I'm adding this answer in the hope that it will help.)
Implementing a custom History Event Handler isn't difficult, but there are a few important points to keep in mind:
Unless you want to skip storing history in the standard Camunda history tables, you'll want to use Camunda's CompositeHistoryEventHandler, which simply gives you the ability to register multiple HistoryEventHandler implementations (see the sketch after this list).
Any HistoryEventHandler implementations will complete in the same threads as the ones executing process instances; thus, you will want to be cognizant of the performance impacts your custom HistoryEventHandler will have.
You may want to consider publishing your history events through a message bus or messaging system to allow for reliable delivery without impacting Camunda workflow instance performance.
Finally, it may make sense to use your custom HistoryEventHandler along with Camunda's default HistoryEventHandler and their functionality for deleting process instances after a period of time. This would allow you to use their querying capabilities for some period of time without having the history stack up (and thus slowing down your system).
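To make this concrete, here's a minimal sketch of a custom handler combined with the default one. The forwarding/queueing parts are hypothetical placeholders; the Camunda types come from the org.camunda.bpm.engine.impl.history packages:

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    import org.camunda.bpm.engine.impl.history.event.HistoryEvent;
    import org.camunda.bpm.engine.impl.history.handler.HistoryEventHandler;

    // Hypothetical handler that hands history events off to a separate store.
    public class ForwardingHistoryEventHandler implements HistoryEventHandler {

        // Placeholder async buffer; in practice this might be a message bus
        // (per point 3 above), so the engine thread never waits on a history DB.
        private static final BlockingQueue<HistoryEvent> QUEUE = new LinkedBlockingQueue<>();

        @Override
        public void handleEvent(HistoryEvent event) {
            // Runs on the same thread as the executing process instance
            // (point 2 above), so keep it cheap: enqueue and return.
            QUEUE.offer(event);
        }

        @Override
        public void handleEvents(List<HistoryEvent> events) {
            for (HistoryEvent event : events) {
                handleEvent(event);
            }
        }
    }

    // Wiring (in your ProcessEngineConfigurationImpl): keep the default DB
    // handler alongside the custom one so Camunda's history tables still work:
    //
    //   configuration.setHistoryEventHandler(new CompositeHistoryEventHandler(
    //       new DbHistoryEventHandler(), new ForwardingHistoryEventHandler()));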

C++ Usage Statistics

I have made an application in C++ and would like to know how to go about implementing a usage statistics system so that I may gather some data regarding how users use the program.
E.g. IP address, number of hours spent in the application, and OS used.
In theory I know I can code this myself if I must, but I was wondering if there is a framework available to make this easier. Unfortunately I was unable to find anything on Google.
Though there is no such framework that I know of, you can reduce the work needed to retrieve all this information by using the approaches and techniques I have tried to describe below. Please, anybody, feel free to correct me.
Let's summarise what groups of information we need to complete the task:
User environment information. I suggest you look at Microsoft's WMI infrastructure, in particular the WMI classes: Desktop, File System, Networking, etc. Using these classes in your application can help you retrieve almost every kind of system information. If that does not satisfy your needs, see #2.
Application and system performance. By this I mean overall system performance, processor count, processes running in the OS, etc. To retrieve this data you can use the NtQuerySystemInformation function. With its help you get access to detailed SystemProcessInformation and SystemProcessorPerformanceInformation (per-processor) data, and much more.
User-related information. It's hard to find a framework for such things, so I suggest you simply start writing code with your requirements in mind:
counting how many times each button was pressed, each text field was changed, etc.
measuring the delay between consecutive actions in predefined sequences (for example, if you have a settings GUI form and expect the user to fill in all the required text fields quickly, time-delay measurements can tell you whether the user acted as expected or stalled after TextBox2 for five minutes).
anything else that could be of interest to you.
So, how could you implement the requirements of the last item (user-related information)? Personally, I'd do something like the following (some of these may seem hard to implement or pointless):
- create a kind of base Counter class and derive some controls (buttons, edits, etc.) from it.
- use Windows hooks for the mouse or keyboard while getting a child handle (to recognize a control, for example).
- use a Callback class that does all the "dirty" work (counting, measuring, performing additional actions).
You could store all this information in a text file, an SQLite database, or wherever you prefer.
I would recommend taking a look at DeskMetrics. This StackOverflow post summarizes the issue.
Building your own framework could take you months of development (apart from maintenance). With something like Trackerbird Software Analytics you can integrate a DLL with your app, start tracking within 30 minutes, and get all the cool real-time visualizations.
Disclaimer: I am affiliated with the company.

Windows Phone 7 - Best Practices for Speeding up Data Fetch

I have a Windows Phone 7 app that (currently) calls an OData service to get data and throws the data into a listbox. It is horribly slow right now. The first cause I can think of is that OData returns far more data than I actually need.
What are some suggestions/best practices for speeding up the fetching of data in a Windows Phone 7 app? Is there anything I could be doing in the app to speed up the retrieval of data and put it in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need (see the OData query examples after this list).
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but it generally makes the app feel faster).
For large text transfers, make sure that the content is being zipped for transfer (this should happen at the HTTP level)
Be careful that the XAML rendering the data isn't too bloated - a large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
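On the first two points: since the service is OData, the query options it already supports can do the trimming for you. Illustrative requests only - the service path, entity, and field names below are made up:

    GET /MyService.svc/Products?$select=Name,Price                    (only the fields you bind)
    GET /MyService.svc/Products?$select=Name,Price&$top=20&$skip=40   (page the results)
    GET /MyService.svc/Products?$filter=Category eq 'Books'&$top=20   (filter on the server)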
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are three areas - communication, parsing, and UI. Measure them separately:
Do just the communication with the processing switched off.
Measure the parsing of a fixed OData-formatted string.
Believe it or not, it can also be the UI.
For example, bad usage of a ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.

In which layer should i18n/multilingual support be handled?

The project we are working on consists of three tiers: the presentation tier, the business logic tier, and the data tier; I will call them the front, mid, and back here.
The front is written in PHP and communicates with the mid via web services (XML-RPC, SOAP, etc.). Users can also write their own clients to talk to the mid. The mid is developed in Java; it performs the business logic and provides data to the front, and it may also throw exceptions to the front.
The question I have is: if I want to have multilingual support in the future, where should I implement i18n? It makes sense for it to be at the front, because of all the text there, but what about exceptions and other messages coming from the mid?
If a user develops their own client and the mid has multilingual support, the messages coming from it (like the exceptions mentioned above) can be in the user's selected language. That's the advantage I'm seeing. I just don't like the idea of having two layers with i18n code, and of having to handle i18n while I'm handling an exception.
It depends a lot on your application.
If you are thinking about UI localization, the presentation tier is definitely affected.
I would say that the middle tier should not generate any messages.
Exceptions are intended for developers, not for users. So capture the exception in the presentation tier and present it to the user in a localized way, saying something like "Fatal error 12313 occurred, please send this report to ...".
(Maybe even nicer: don't show the exception text at all; offer a "Send a crash report" button, with a "Show report" button so the user can see that you are not sending any private data.)
But if you think about things beyond the UI, then the other tiers might also be affected.
The business logic might be affected (for instance, tax systems work differently from country to country). And that is independent of the UI (Canada and Australia have different tax systems than the US, even if the UI is still in English).
So you might want to design this layer to be very modular; a sketch of one way to do that follows.
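A minimal sketch of what "modular" could look like here (in Java, since the mid tier is Java; all the names and rates below are hypothetical):

    import java.math.BigDecimal;

    // Country-specific rules live behind one interface, so the business tier
    // can swap tax logic per market without touching the UI language at all.
    public interface TaxPolicy {
        BigDecimal taxFor(BigDecimal netAmount);
    }

    class UsSalesTax implements TaxPolicy {
        public BigDecimal taxFor(BigDecimal net) {
            return net.multiply(new BigDecimal("0.07")); // illustrative rate only
        }
    }

    class CanadaGstHst implements TaxPolicy {
        public BigDecimal taxFor(BigDecimal net) {
            return net.multiply(new BigDecimal("0.13")); // illustrative rate only
        }
    }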
The content of the database might also be affected. Imagine you have products that are not available (or are banned) in certain countries. So you might need extra fields (or tables) to carry that info.
So in the end the answer is "you have to think about i18n at every level!" and keep asking yourself "what if?"
I would ask you a question: would the i18n data be handled in the back layer (data tier)?
If you say yes, then you've got your answer; if you say no, then I would put it in the mid layer (business tier), because medium and large projects tend to interact with i18n everywhere (exceptions, currencies, message formats, time zones, charsets, etc.).
I would put it in the front layer (presentation tier) for smaller projects.
Regards.
If you want to be completely internationalized, exceptions and other messages from the middle tier should not include text. Instead, specify a code that the client must look up in a table to produce a localized message.
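A minimal sketch of that idea, assuming the Java mid tier described in the question (the exception class, bundle name, and message keys are all hypothetical):

    import java.text.MessageFormat;
    import java.util.Locale;
    import java.util.ResourceBundle;

    // Thrown by the mid tier: carries a stable code plus raw parameters,
    // but no user-facing text.
    class BusinessException extends RuntimeException {
        private final String errorCode;
        private final Object[] args;

        BusinessException(String errorCode, Object... args) {
            super(errorCode);
            this.errorCode = errorCode;
            this.args = args;
        }

        String errorCode() { return errorCode; }
        Object[] args()    { return args; }
    }

    class ClientSide {
        // Any client (the PHP front, or a user-written one) maps the code to
        // localized text, e.g. via a resource bundle such as
        // errors_de.properties containing a line like:
        //   ORDER_LIMIT_EXCEEDED=Sie können höchstens {0} Artikel bestellen.
        static String localize(BusinessException ex, Locale locale) {
            ResourceBundle bundle = ResourceBundle.getBundle("errors", locale);
            return MessageFormat.format(bundle.getString(ex.errorCode()), ex.args());
        }
    }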