Sitecore Database Cleanup Fails

Sitecore 6.6
I'm speaking with Sitecore Support about this as well, but thought I'd reach out to the community too.
We have a custom agent that syncs media on the file system with the media library. It's a new agent and we made the mistake of not monitoring the database size. It should be importing about 8 gigs of data, but the database ballooned to 713 GB in a pretty short amount of time. Turns out the "Blobs" table in both "master" and "web" databases is holding pretty much all of this space.
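For reference, a per-table size check along these lines will confirm where the space is going; a minimal sketch using the standard sp_spaceused procedure, with a placeholder database name:

-- Check how much space the Blobs table occupies in each database
USE [Sitecore_Master]; -- placeholder: run against both master and web
EXEC sp_spaceused N'dbo.Blobs';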
I attempted to use the "Clean Up Databases" tool from the Control Panel. I only selected one of the databases. This ran for 6 hours before it bombed due to consuming all the available locks on the SQL Server:
Exception: System.Data.SqlClient.SqlException
Message: The instance of the SQL Server Database Engine cannot obtain a LOCK
resource at this time. Rerun your statement when there are fewer active users.
Ask the database administrator to check the lock and memory configuration for
this instance, or to check for long-running transactions.
It then rolled everything back. Note: I increased the SQL and DataProvider timeouts to infinity.
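For anyone hitting the same lock exhaustion, the configuration the error message points at can be inspected with standard SQL Server commands; a minimal sketch (sp_configure and sys.dm_tran_locks are stock SQL Server features):

-- Show the configured lock limit (0 = SQL Server allocates locks dynamically)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks';

-- Count the locks currently held, per database
SELECT resource_database_id, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_database_id;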
Anyone else deal with something like this? It would be good if I could 'clean up' the databases in smaller chunks to avoid overwhelming the SQL Server.
Thanks!

Thanks for the responses, guys.
I also spoke with support and they were able to provide a SQL script that will clean the Blobs table:
-- Collect the blob IDs that are still referenced by media fields
DECLARE @UsableBlobs table(
    ID uniqueidentifier
);
INSERT INTO @UsableBlobs
SELECT CONVERT(uniqueidentifier, [Value]) AS ID
FROM [Fields]
WHERE [Value] != ''
  AND (FieldId = '{40E50ED9-BA07-4702-992E-A912738D32DC}' OR FieldId = '{DBBE7D99-1388-4357-BB34-AD71EDF18ED3}')

-- Delete orphaned blob rows in batches to keep lock usage down
DELETE TOP (1000) FROM [Blobs]
WHERE [BlobId] NOT IN (SELECT * FROM @UsableBlobs)
The only change I made to the script was to add the "top (1000)" so that it deleted in smaller chunks. I eventually upped that number to 200,000 and it would run for about an hour at a time.
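If it helps anyone else, the batched delete can also be wrapped in a loop so it keeps going until no orphaned rows remain, while each pass stays small; a minimal sketch, assuming it runs in the same batch as the DECLARE/INSERT above:

-- Repeat the batched delete until nothing is left; each pass is a
-- small transaction, so the lock footprint stays bounded.
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM [Blobs]
    WHERE [BlobId] NOT IN (SELECT * FROM @UsableBlobs);

    IF @@ROWCOUNT = 0 BREAK; -- no orphaned blobs remain
END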
Regarding cause, we're not quite sure yet. We believe our custom agent was running too frequently, causing the inserts to stack on top of each other.
Also note that there was a Sitecore update, 6.6 Update 3, that apparently addressed a problem with the Blobs table growing out of control.

I faced such a problem previously, and we contacted Sitecore Support.
They gave us a Sitecore Support DLL and suggested a web.config change for the DataProvider: switching the main type="Sitecore.Data.$(database).$(database)DataProvider, Sitecore.Kernel" to the new type from the support assembly.
The reason I am posting on your question is that most of our time was spent cleaning up blobs, and they gave us this DLL to speed up the blob cleanup. So I think it might help you too.
Hence, I would suggest that you request Sitecore Support's help in this case; I am sure you will get the best solution for your situation.
Hope this helps you!
Regards,
Varun Shringarpure

If you have a staging environment, I would recommend taking a copy of the database and trying to shrink it. Part of the database size might also be in the transaction log.
If you have a DBA, please get them involved.
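For example, the log usage can be checked and the file shrunk on the staging copy with standard commands; a minimal sketch, with placeholder database and logical file names (query sys.database_files for the real ones):

-- See how full each database's transaction log is
DBCC SQLPERF(LOGSPACE);

-- Find the log file's logical name, then shrink it on the staging copy
USE [Sitecore_Master]; -- placeholder database name
SELECT name, type_desc FROM sys.database_files;
DBCC SHRINKFILE (N'Sitecore_Master_log', 1024); -- placeholder name; target size in MB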

Related

Identify AWS service for fast retrieval of data

I have one generic question; actually, I am hunting for a solution to a problem.
Currently, we generate reports directly from the Oracle database. From a performance perspective, we want to migrate the data from Oracle to a specific AWS service that could perform better. We would then pass data from that AWS service to our reporting software.
Could you please suggest which service would be ideal for this?
Thanks,
Vishwajeet
To answer well, additional info is needed:
How much data is needed to generate a report?
Are there any transformed/computed values needed?
What is good performance? 1 second? 30 seconds?
What is the current query time on Oracle, and what kind of queries are they? Joins, aggregations, etc.

Sitecore returns incorrect items in multi server farm

I have a Sitecore website deployed in multi server environment. When I make some changes to Sitecore items sometimes they are shown correctly, but sometimes it shows old data.
I understand that Sitecore caches items, but it sometimes shows wrong data and sometimes it's fine. If it's caching, it should at least always show the same data.
For example:
Sitecore.Globalization.Translate.TextByDomain("MyDictionary", "Category");
Sometimes it returns the correct data and sometimes the wrong data, i.e. the value from before I changed the item.
I am using Sitecore 8.0
Items get cached on the individual servers in memory, and these are not cleared unless you activate event queues. Further content might be cached in the output cache, which needs to be cleared after you publish.
Here is a guide on how to activate event queues, and here is also a good description.
Here is how to make your sites clear output cache after publish
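For reference, both pieces are plain Sitecore configuration; a minimal sketch of the relevant web.config/include entries (the setting and the HtmlCacheClearer handler are standard Sitecore pieces, but the site list is a placeholder for your own sites):

<!-- Enable event queues so CD servers pick up remote cache-clearing events -->
<setting name="EnableEventQueues" value="true" />

<!-- Clear the HTML (output) cache after each publish completes -->
<event name="publish:end">
  <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
    <sites hint="list">
      <site>website</site> <!-- placeholder: list every site to clear -->
    </sites>
  </handler>
</event>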
Thanks Jens for your help. The links really helped me with my understanding of Sitecore farm.
But the issue turned out to be rather silly: for some reason, on one content delivery server the application pool account didn't have permission on the virtual directory.

WHM Cpanel / AWS hosting, this partition is full, how?

I've got this hosting account with AWS, and I had a partition that was getting really full. So I hired someone to create a new partition and he showed me how to migrate a few accounts into the new partition.
I thought this would bring down the amount of data that was in the near full partition, but it hasn't.
As you can see from the screenshot, the partition called 'home3' is totally full, and 'home4' (the new partition) is slowly filling up.
I'm assuming home3 is full of backups or something.
How do I clean up home3 without messing with command line tools? Or, do I need to hire a real pro to do this? Is there something in WHM that allows me to do a clean up? Because if there is I can't find it.
If you are not sure what is stored in the /home3 partition, you should hire a server admin to check and perform the necessary migration for you. Before you delete any data, you should know whether that data is important or not.
Also, in some cases, a server reboot is required after deleting or moving data from a full partition. The actual free space will be reflected after the reboot.

Sync Framework 2.1 - Large data over WCF

I have implemented data sync using MS Sync Framework 2.1 over WCF to sync multiple SQL Express databases with a central SQL Server. Syncing happens every three minutes through a Windows service. Recently, we noticed that huge amounts of data are being exchanged over the network (~100 MB every 15 minutes). When I checked using Fiddler, the client calls the service with a GetKnowledge request four times in a session, and each response is around 6 MB in size, although there are no changes at all in either database. This does not seem normal. How do I optimize the system to reduce such heavy traffic? Please help.
I have defined two scopes: the first has 15 tables, all download-only; the second has 3 tables, upload-only.
The XML response has a huge number of <range> tags under the coreFragments/coreFragment/ranges tag, which contributes the major portion of the response size.
Let me know if any additional information is required.
It must be the sync knowledge. Do you do lots of deletes, or do you have lots of replicas? Try running a metadata cleanup and see if it compacts the sync knowledge.
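As a rough check for the deletes theory, the tracking tables that SqlSyncProvider provisions can be queried directly; a minimal sketch, assuming the default _tracking table naming and the sync_row_is_tombstone column that provisioning creates (the table name is a placeholder):

-- Count tombstone (deleted-row) metadata for one synced table;
-- a large count suggests metadata cleanup would compact the knowledge.
SELECT COUNT(*) AS tombstone_rows
FROM [MyTable_tracking] -- placeholder: tracking table for one scope member
WHERE sync_row_is_tombstone = 1;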
Creating one-to-one scopes and re-provisioning fixed the issue. I am still not sure what caused the original issue.
Do you happen to have any join tables and use an ORM? If you do, then this post might help:
https://kumarkrish.wordpress.com/2015/01/07/microsoft-sync-frameworks-heavy-traffic/

Limit number of business connector users in AX2009

Background
We provide some web services to export and import data to a website. Unfortunately, the programmers of that website don't seem to understand, or don't want to understand, that if they try three times and get three errors, the 1,000,000th attempt will also give an error.
So they constantly open new requests to the web service, which results in a constant flow of new business connector users. The problem is that they create database blocks, and the database cannot resolve them, because by the time a block times out there are a few thousand new business connector users waiting to block that process all over again. This morning the whole server was unresponsive, and a reboot of the AOS took about 32 minutes to complete (normally it takes 2 minutes).
Question
I was searching for a way to limit the number of business connector users. The only related post I found was this one:
http://www.archivum.info/microsoft.public.axapta.programming/2010-01/00045/RE-.NET-business-connector-amp-Web-Services.html
Unfortunately, there is no answer to their question, and I couldn't find more topics. Does anyone have an idea how I could solve this?
Any help or pointers in the right direction would be greatly appreciated. :)
It sounds as if the problem is with the web service. Can you rework it so that it does not cause blocking?
Meanwhile, look into the MaxConcurrentBCSessions setting.
See http://msdn.microsoft.com/en-us/library/aa569637(v=ax.10).aspx