Sitecore - reusing an application-wide index searcher

Sitecore.NET 6.6.0 (rev. 130404)
In our project we are using Sitecore.Search.IndexSearchContext to perform all of our queries. Specifically, we use IndexSearchContext.Searcher to get access to the underlying Lucene searcher and pass Lucene queries to it.
I have found out (via web articles and experimentation) that if we reuse the same IndexSearchContext instance to perform all of our queries, it's significantly faster than creating and destroying an IndexSearchContext for each query that gets executed.
I have also read that an IndexSearchContext does not see index updates made after it was created. Because of this, I dispose of the shared IndexSearchContext and create a new one every 30 seconds, so queries return the latest results with at most a 30-second delay. This approach requires me to carefully handle the thread safety of creating and disposing the shared index searcher.
Is this a safe approach? Is reusing an application-wide index searcher discouraged in Sitecore?
Thanks

I would suggest you hook into the "publish:end" and "publish:end:remote" events (the latter in a multi-server environment) and drop your IndexSearchContext when they fire. Ultimately you're in a Sitecore environment, and your index should only become out of date when new content is published. Admittedly that's a slightly simplified version of the truth, as I don't know the full extent of the application you're running.
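A minimal sketch of what such a handler could look like (the class, method and SharedSearchContext names are placeholders; the handler is registered under the <events> section of web.config):

// Hypothetical handler that drops the shared search context when a publish finishes.
// Registered in web.config roughly like:
//   <event name="publish:end">
//     <handler type="MyProject.Search.SearcherInvalidator, MyProject" method="OnPublishEnd" />
//   </event>
//   <event name="publish:end:remote">
//     <handler type="MyProject.Search.SearcherInvalidator, MyProject" method="OnPublishEnd" />
//   </event>
using System;

namespace MyProject.Search
{
    public class SearcherInvalidator
    {
        public void OnPublishEnd(object sender, EventArgs args)
        {
            // SharedSearchContext is whatever holds your cached IndexSearchContext;
            // dropping it forces the next query to open a fresh one against the updated index.
            SharedSearchContext.Invalidate();
        }
    }
}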

To be honest, I haven't seen any performance issues with spawning many IndexSearchContext instances. Unless you use them extremely heavily and need a highly optimized environment, I would advise against sharing one: I have seen a lot of problems with locked indexes, and you might also run into HTML cache issues (if that cache is used).
All in all it sounds a bit like premature optimization. However, I don't know your complete setup, so I may be wrong.

I have tried the approach and can confirm it works. I'm recreating the index searcher every 10 seconds, and that has significantly improved the number of concurrent requests we can handle. However, Sitecore's IndexSearchContext cannot be shared like this (it is intended to be created and disposed on a single thread), so what I did instead was instantiate a raw Lucene IndexSearcher and share that across the application.
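For illustration, a minimal sketch of that approach, assuming the Lucene.Net 2.x version shipped with Sitecore 6.6; the index path, refresh interval and the naive disposal of the old searcher are simplifications (a real implementation must make sure no thread is still using the old instance before closing it):

using System;
using System.IO;
using Lucene.Net.Search;
using Lucene.Net.Store;

public static class SharedSearcher
{
    private static readonly object SyncRoot = new object();
    private static readonly TimeSpan RefreshInterval = TimeSpan.FromSeconds(10);
    private static IndexSearcher _searcher;
    private static DateTime _lastRefresh = DateTime.MinValue;

    // Returns a searcher that is recreated at most every RefreshInterval,
    // so new index content shows up with a bounded delay.
    public static IndexSearcher Get(string indexPath)
    {
        lock (SyncRoot)
        {
            if (_searcher == null || DateTime.UtcNow - _lastRefresh > RefreshInterval)
            {
                IndexSearcher old = _searcher;
                _searcher = new IndexSearcher(
                    FSDirectory.Open(new DirectoryInfo(indexPath)), true); // read-only
                _lastRefresh = DateTime.UtcNow;

                // Naive: closes the old searcher immediately. Reference-count or
                // otherwise wait for in-flight queries in a production version.
                if (old != null)
                {
                    old.Close();
                }
            }
            return _searcher;
        }
    }
}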

Related

Memory usage in EF 1 web service application keeps growing on each call - query cache issues?

Hoping some of you clever people can help me out here!
We have an ASP.NET web service app using Entity Framework 1 and EFPocoAdapter. The memory usage of the app pool running this web service keeps growing on every call; we currently monitor it and recycle the app pool once it gets over 1 GB to free the memory.
We instantiate the object context in each web method inside a 'using' statement, so we don't leave object contexts open (verified with EFProf).
So I used ANTS Memory Profiler 7 to track what's going on: after the first call to the web service (at which point EF generates its views, etc.) I took a snapshot, then made the same call and took another snapshot. ANTS shows that the new objects created since the last snapshot are pretty much all related to System.Data.Common.QueryCache.QueryCacheManager.
I know the point of the cache is to improve performance, but in our case I think we need to NOT cache every query plan as the likelihood of repeating those calls is minimal due to the nature of our main app / business.
So, my question: is there a way of turning off this caching, or am I barking up the wrong tree here and there's something else going on that I'm unaware of?
I've searched all over the web for an answer, and all I can find is the MergeOption property, which seems to be more about entity tracking for speed/performance improvements.
If you only select the data and never modify it, you can simply turn off object change tracking:
datacontext.ObjectTrackingEnabled = false;
It worked for me in a similar situation with Linq2SQL.
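For Entity Framework 1 itself, the rough equivalent is to run read-only queries with MergeOption.NoTracking so the ObjectContext doesn't track the materialized entities (the context, entity type/set and property names below are placeholders; note this addresses tracking rather than the query plan cache itself):

using System.Data.Objects;
using System.Linq;

public static class CustomerQueries
{
    // "MyEntities", "Customer"/"Customers" and "IsActive" are placeholders for your
    // generated context, entity type/set and a property on the entity.
    public static System.Collections.Generic.List<Customer> GetActiveCustomers()
    {
        using (var context = new MyEntities())
        {
            // ObjectQuery<T>.MergeOption controls change tracking for this entity set.
            context.Customers.MergeOption = MergeOption.NoTracking;

            return context.Customers
                          .Where(c => c.IsActive)
                          .ToList();
        }
    }
}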

SQL Query minimizing/caching in a C++ application

I'm writing a project in C++/Qt, and it is able to connect to any type of SQL database supported by QtSQL (http://doc.qt.nokia.com/latest/qtsql.html). This includes local servers and external ones.
However, when the database in question is external, the speed of the queries starts to become a problem (slow UI, ...). The reason: Every object that is stored in the database is lazy-loaded and as such will issue a query every time an attribute is needed. On average about 20 of these objects are to be displayed on screen, each of them showing about 5 attributes. This means that for every screen that I show about 100 queries get executed. The queries execute quite fast on the database server itself, but the overhead of the actual query running over the network is considerable (measured in seconds for an entire screen).
I've been thinking about a few ways to solve the issue, the most important approaches seem to be (according to me):
Make fewer queries
Make queries faster
Tackling (1)
I could find some way to delay the actual fetching of the attributes (start a transaction), and then when the programmer writes endTransaction() the database tries to fetch everything in one go (with a SQL UNION, a loop, ...). This would probably require quite a bit of modification to the way the lazy objects work, but if people think it's a decent solution I believe it could be worked out elegantly. If this speeds things up enough, an elaborate caching scheme might not even be necessary, saving a lot of headaches.
I could try pre-loading attribute data by fetching it all in one query for all the objects that are requested, effectively making them non-lazy. Of course in that case I will have to worry about stale data. How would I detect stale data without sending at least one query to the external db? (Note: sending a query to check for stale data on every attribute check would give no performance increase in the best case and a 2x performance decrease in the worst case, when the data actually turns out to be stale.)
Tackling (2)
Queries could, for example, be made faster by keeping a local synchronized copy of the database. However, I can't realistically run exactly the same database engine on the client machines as on the server, so the local copy would, for example, be an SQLite database. This also means I couldn't use a db-vendor-specific solution. What are my options here? What has worked well for people in these kinds of situations?
Worries
My primary worries are:
Stale data: there are plenty of queries imaginable that change the db in such a way that it prohibits an action that would seem possible to a user with stale data.
Maintainability: How loosely can I couple in this new layer? It would obviously be preferable if it didn't have to know everything about my internal lazy object system and about every object and possible query
Final question
What would be a good way to minimize the cost of making a query? Good meaning some combination of: maintainable, easy to implement, and not too application-specific. If it comes down to picking any two, then so be it. I'd like to hear about people's experiences and what they did to solve it.
As you can see, I've thought of some problems and ways of handling them, but I'm at a loss for what would constitute a sensible approach. Since it will probably involve quite a lot of work and intensive changes to many layers of the program (hopefully as few as possible), I thought I'd ask the experts here before making a final decision. It's also possible I'm just overlooking a very simple solution, in which case a pointer to it would be much appreciated!
Assuming all relevant server-side tuning has been done (for example: MySQL cache, best possible indexes, ...)
(Note: I've checked questions from users with similar problems that didn't entirely answer mine, for example "Suggestion on a replication scheme for my use-case?" and "Best practice for a local database cache?".)
If any additional information is necessary to provide an answer, please let me know and I will duly update my question. Apologies for any spelling/grammar errors; English is not my native language.
Note about "lazy"
A small example of what my code looks like (simplified of course):
QList<MyObject> myObjects = database->getObjects(20, 40); // fetch and construct object 20 to 40 from the db
// ...some time later
// screen filling time!
foreach (const MyObject& o, myObjects) {
    o.getInt("status", 0);                 // == db request
    o.getString("comment", "no comment!"); // == db request
    // about 3 more of these
}
At first glance it looks like you have two conflicting goals: query speed, but always up-to-date data. Your actual requirements should drive the decision here.
1) Your database is nearly static compared to how the application uses it. In this case, use your option 1b and preload all the data. If there's a slim chance that the data may change underneath you, just give the user an option to refresh the cache (fully or for a particular subset of data). That way the slow access is in the user's hands.
2) The database changes fairly frequently. In this case "perhaps" an SQL database isn't right for your needs; you may need a higher-performance database that pushes updates rather than requiring a pull, so your application gets notified when the underlying data changes and can respond quickly. If that doesn't work, you want to construct your queries to minimize the number of DB library and I/O calls. For example, if you execute a sequence of select statements, your results should contain all the appropriate data in the order you requested it; you just have to keep track of which select statement each result corresponds to. Alternatively, if you can use looser query criteria so that a single query returns more than one row, that ought to help performance as well.

Optimisation tips when migrating data into Sitecore CMS

I am currently faced with the task of importing around 200K items from a custom CMS implementation into Sitecore. I have created a simple import page which connects to an external SQL database using Entity Framework and I have created all the required data templates.
During a test import of about 5K items I realized that I needed to find a way to make the import run a lot faster, so I set about finding information on optimizing Sitecore for this purpose. I have concluded that there is not much specific information out there, so I'd like to share what I've found and open the floor for others to contribute further optimizations. My aim is to create some kind of maintenance mode for Sitecore that can be used when importing large volumes of data.
The most useful information I found was on Mark Cassidy's blogpost http://intothecore.cassidy.dk/2009/04/migrating-data-into-sitecore.html. At the bottom of this post he provides a few tips for when you are running an import.
If migrating large quantities of data, try and disable as many Sitecore event handlers and whatever else you can get away with.
Use BulkUpdateContext()
Don't forget your target language
If you can, make the fields shared and unversioned. This should help migration execution speed.
The first thing I noticed in this list was the BulkUpdateContext class, as I had never heard of it. I quickly understood why, since a search on the SDN forum and in the PDF documentation returned no hits. So imagine my surprise when I actually tested it out and found that it improves item creation/deletion speed at least tenfold!
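For anyone else who hasn't come across it, this is roughly how it is used - a sketch only, with the template ID, parent path, field name and SourceRow type standing in for your real import data:

using Sitecore.Data;
using Sitecore.Data.Items;

public class ItemImporter
{
    public void Import(System.Collections.Generic.IEnumerable<SourceRow> rows)
    {
        Database master = Sitecore.Configuration.Factory.GetDatabase("master");
        Item parent = master.GetItem("/sitecore/content/Imported");
        var template = new TemplateID(new ID("{11111111-1111-1111-1111-111111111111}")); // your data template

        using (new BulkUpdateContext()) // skips much of the per-item event/index overhead
        {
            foreach (SourceRow row in rows)
            {
                Item item = parent.Add(row.ItemName, template); // ItemName must be a valid item name
                item.Editing.BeginEdit();
                item["Title"] = row.Title;
                item.Editing.EndEdit();
            }
        }
    }
}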
The next thing I looked at was the first point, where he basically suggests creating a version of web.config that contains only the bare essentials needed to perform the import. So far I have removed all events related to creating, saving and deleting items and versions. I have also removed the history engine and system index declarations from the master database element in web.config, as well as any custom events, schedules and search configurations. I expect there are a lot of other things I could remove or disable to increase performance. Pipelines? Schedules?
What optimization tips do you have?
Incidentally, BulkUpdateContext() is a very misleading name - it really improves item creation speed, not item update speed. But as you also point out, it improves your import speed massively :-)
Since I wrote that post, I've added a few new things to my normal routines when doing imports.
Regularly shrink your databases. They tend to grow large and bulky. To do this, first go to Sitecore Control Panel -> Database and select "Clean Up Database". After this, do a regular ShrinkDB on your SQL Server.
Disable indexes, especially if importing into the "master" database. For reference, see http://intothecore.cassidy.dk/2010/09/disabling-lucene-indexes.html
Try not to import into "master", however; you will usually find that imports into "web" are a lot faster, mostly because this database isn't (by default) connected to the HistoryManager or other gadgets
And if you're really adventurous, there's one more thing you could try that I'd been considering trying out myself but never got around to. It might work, but I can't guarantee that it will :-)
Try removing all your field types from App_Config/FieldTypes.config. The theory here is, that this should essentially disable all of Sitecore's special handling of the content of these fields (like updating the LinkDatabase and so on). You would need to manually trigger a rebuild of the LinkDatabase when done with the import, but that's a relatively small price to pay
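If you do go that route, the manual link database rebuild afterwards should (to my knowledge) be just a couple of lines, something along these lines:

// Rebuild the link database for the database you imported into,
// once the import has finished.
Sitecore.Data.Database master = Sitecore.Configuration.Factory.GetDatabase("master");
Sitecore.Globals.LinkDatabase.Rebuild(master);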
Hope this helps a bit :-)
I'm guessing you've already hit this, but putting the code inside a SecurityDisabler() block may speed things up also.
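For illustration, a rough sketch of combining it with the BulkUpdateContext discussed above (ImportItems() is a placeholder for the actual import routine):

using (new Sitecore.SecurityModel.SecurityDisabler())
using (new Sitecore.Data.BulkUpdateContext())
{
    ImportItems(); // your import loop goes here
}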
I'd be a lot more worried about how Sitecore performs with this much data... assuming you only do the import once, who cares how long that process takes. Is this going to be a regular occurrence?

Windows Phone 7 - Best Practices for Speeding up Data Fetch

I have a Windows Phone 7 app that (currently) calls an OData service to get data and throws the data into a listbox. It is horribly slow right now. The first thing I can think of is that OData returns way more data than I actually need.
What are some suggestions/best practices for speeding up data fetching in a Windows Phone 7 app? Is there anything I could be doing in the app to speed up retrieval and get the data in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need (see the sketch after this list).
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but generally makes the app feel faster)
For large text transfers, make sure that the content is being zipped for transfer (this should happen at the HTTP level)
Be careful that the XAML rendering the data isn't too bloated - large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
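On that first point, a minimal sketch of trimming the payload with OData query options before it ever leaves the server - the service URL, entity set and field names are placeholders:

using System;
using System.Net;

public class ProductLoader
{
    public void LoadTopProducts()
    {
        // Ask the service for only the fields the list binds to, and only one page.
        var uri = new Uri("http://example.com/MyService.svc/Products" +
                          "?$select=Id,Name,Price&$top=20&$orderby=Name");

        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error != null)
            {
                return; // report the error in a real app
            }
            // Parse e.Result (the ATOM/JSON feed) and bind the items to the ListBox.
        };
        client.DownloadStringAsync(uri);
    }
}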
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are three areas - communication, parsing, UI. Measure them separately:
Do just the communication, with the processing switched off.
Measure the parsing of a fixed OData-formatted string.
Believe it or not, the UI can also be the culprit.
For example, bad usage of a ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.
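To make that last point concrete, a rough sketch of keeping the heavy work off the UI thread; the parsing logic and the ListBox are placeholders:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Controls;

public class ProductFetcher
{
    private readonly ListBox _productList; // the ListBox on your page

    public ProductFetcher(ListBox productList)
    {
        _productList = productList;
    }

    public void FetchAndBind(Uri uri)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.BeginGetResponse(ar =>
        {
            using (var response = request.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string payload = reader.ReadToEnd();

                // HttpWebRequest callbacks arrive on a worker thread, so the
                // expensive parsing stays off the UI thread.
                List<string> items = ParseProducts(payload);

                // Only the final, cheap assignment is marshalled back to the UI thread.
                Deployment.Current.Dispatcher.BeginInvoke(() =>
                {
                    _productList.ItemsSource = items;
                });
            }
        }, null);
    }

    private static List<string> ParseProducts(string payload)
    {
        // Placeholder: parse the feed into your model objects here.
        return new List<string> { payload };
    }
}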

Make my desktop app appear to load/quit faster

I currently have a GUI single-threaded application in C++ and Qt. It takes a good 1 minute to load (read from disk) and ~5 seconds to close (saving settings, finalize connections, ...).
What can I do to make my application appear to be faster?
My first thought was to have a server component of the app that does all the work, while the GUI component only handles display, communicating via a socket, pipe or memory map. That seems like overkill (in terms of development effort) since my application is only used by a handful of people.
The first step is to start profiling. Use an actual, low-overhead profiling tool (e.g. on Linux you could use OProfile), not guesswork. What is your app doing in that one minute it takes to start up? Can any of that work be deferred until later, or perhaps skipped entirely?
For example, if you're loading, say, a list of document templates, you could defer that until the user tells you to create a new document. If you're scanning the system for a list of fonts, load a cached list from last startup and use that until you finish updating the font list in a separate thread. These are just examples - use a profiler to figure out where the time's actually going, and then attack the code starting with the largest time figures.
In any case, some of the more effective approaches to keep in mind:
Skip work until needed. If you're doing initialization for some feature that's used infrequently, skip it until that feature is actually used.
Defer work until after startup. You can take care of a lot of things on a separate thread while the UI is responsive. If you are collecting information that changes infrequently but is needed immediately, consider caching the value from a previous run, then updating it in the background.
For your shutdown time, hide your GUI instantly, and then spend those five seconds shutting down in the background. As long as the user doesn't notice the work, it might as well be instantaneous.
You could employ the standard trick of showing something interesting while you load, the way many games nowadays show a tip or two on their loading screens.
It looks to me like you're only guessing at where all this time is being burned. "Read from disk" would not be high on my list of candidates. Learn more about what's really going on.
Use a decent profiler.
Profiling is a given, of course.
Most likely you'll find that I/O - reading in your startup files - is substantial. As bdonlan notes, deferring work is a standard technique; google 'lazy evaluation'.
You can also consider caching data that does not change. Save a cache in a faster format, such as binary. This is most useful if you happen to have a large static data set read into something like an array.