I am using vtiger 6.4 and I want to add more options to a picklist field, but currently it's not possible to add more than 20. Can somebody help me increase this limit to 100?
Thanks in advance.
I have one generic question; actually, I am hunting for a solution to a problem.
Currently, we generate reports directly from an Oracle database. From a performance perspective, we want to migrate the data from Oracle to whichever AWS service would perform better, and then pass the data from that AWS service to our reporting software.
Could you please help me decide which service would be ideal for this?
Thanks,
Vishwajeet
To answer well, additional info is needed:
How much data is needed to generate a report?
Are there any transformed/computed values needed?
What is good performance? 1 second? 30 seconds?
What is the current query time on Oracle, and what kind of queries are involved? Joins, aggregations, etc.?
I have published an API with two GET methods:
/api/1/questions
/api/1/complaints
where /api/1 is the context. Currently, DAS shows the hit count per context. I want separate stats for these two methods. Could you please help me configure this?
I think this is what you get in the "API Usage by Resource Path" report under All Statistics in the Publisher:
See https://docs.wso2.com/display/AM1100/Viewing+API+Statistics for details.
There is a JIRA for this issue, we will fix this in our next release.
We have just upgraded Sitecore from 6.6 to 7.2, and ECM from 1.3 to 2.1, and the Speak UI for ECM is now extremely slow. Every operation that fetches data takes several minutes. For example, when retrieving the list of recipient lists, an AJAX POST request to http://[domain]/speak/EmailCampaign/TaskPages/AdHockTaskPage?type=Adhoc&id={FD1B449B-C3EA-4820-A10E-E7976A897B8F}&sc_speakcontentlang=nb-NO&ff=1 takes minutes to complete.
This happens in all our environments (test and production), and we can't get anything from the logs even when we set debug=true on every config setting we can find.
As a side note, we previously used a module to segment the emails (https://marketplace.sitecore.net/en/Modules/Sitecore_EmailCampaign_Segment.aspx) that is deprecated for our version, so we've tried to clean up its items and files, without anything getting better.
Has anyone had this issue?
I've had some issues in the past with performance. The general advice is to go through the performance tuning document first:
https://sdn.sitecore.net/upload/sdn5/products/ecm/200/ecm_tuning_guide_20-usletter.pdf
There are settings described in the document that can fix issues with high CPU usage, so you should check whether your CPU is maxing out. The document also describes some of the tools you can use to measure performance. I found this tool quite useful for getting an idea of what's going on -
/sitecore/admin/dispatchsummary.aspx.
Failing that, you're best off contacting support.
For anyone with this problem in the future:
This was a bug in the upgrade from 1.3 -> 2.1. Sitecore support filed it as a bug and sent me a dll which partly replaced some rendering logic for an ECM Speak component. So get in touch with Sitecore support and they should be able to help you!
Sitecore 6.6
I'm speaking with Sitecore Support about this as well, but thought I'd reach out to the community too.
We have a custom agent that syncs media on the file system with the media library. It's a new agent and we made the mistake of not monitoring the database size. It should be importing about 8 gigs of data, but the database ballooned to 713 GB in a pretty short amount of time. Turns out the "Blobs" table in both "master" and "web" databases is holding pretty much all of this space.
I attempted to use the "Clean Up Databases" tool from the Control Panel. I only selected one of the databases. This ran for 6 hours before it bombed due to consuming all the available locks on the SQL Server:
Exception: System.Data.SqlClient.SqlException
Message: The instance of the SQL Server Database Engine cannot obtain a LOCK
resource at this time. Rerun your statement when there are fewer active users.
Ask the database administrator to check the lock and memory configuration for
this instance, or to check for long-running transactions.
It then rolled everything back. Note: I increased the SQL and DataProvider timeouts to infinity.
Anyone else deal with something like this? It would be good if I could 'clean up' the databases in smaller chunks to avoid overwhelming the SQL Server.
Thanks!
Thanks for the responses, guys.
I also spoke with support and they were able to provide a SQL script that will clean the Blobs table:
DECLARE @UsableBlobs table(
    ID uniqueidentifier
);

INSERT INTO @UsableBlobs
SELECT CONVERT(uniqueidentifier, [Value]) AS ID
FROM [Fields]
WHERE [Value] != ''
AND (FieldId = '{40E50ED9-BA07-4702-992E-A912738D32DC}' OR FieldId = '{DBBE7D99-1388-4357-BB34-AD71EDF18ED3}')

DELETE TOP (1000) FROM [Blobs]
WHERE [BlobId] NOT IN (SELECT ID FROM @UsableBlobs)
The only change I made to the script was to add the "top (1000)" so that it deleted in smaller chunks. I eventually upped that number to 200,000 and it would run for about an hour at a time.
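If you want the whole cleanup to run unattended instead of re-running the statement by hand, the batched delete can be wrapped in a loop. This is a sketch only: it assumes the UsableBlobs table variable from the script above has already been declared and populated in the same batch, and the batch size should be tuned to your server.

```sql
-- Sketch: delete orphaned blob rows in batches until none remain,
-- keeping each delete small enough to avoid exhausting SQL Server locks.
-- Assumes @UsableBlobs was declared and populated earlier in this batch.
DECLARE @deleted int = 1;
WHILE @deleted > 0
BEGIN
    DELETE TOP (1000) FROM [Blobs]
    WHERE [BlobId] NOT IN (SELECT ID FROM @UsableBlobs);
    SET @deleted = @@ROWCOUNT;
END
```

Each iteration commits its own small delete, so a failure part-way through loses far less work than one giant transaction.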
Regarding cause, we're not quite sure yet. We believe our custom agent was running too frequently, causing the inserts to stack on top of each other.
Also note that there was a Sitecore update that apparently addressed a problem with the Blobs table getting out of control. The update was 6.6, Update 3.
I faced such a problem previously, and we contacted Sitecore Support.
They gave us a Sitecore Support DLL and suggested a Web.config change for the data provider -- from the main type="Sitecore.Data.$(database).$(database)DataProvider, Sitecore.Kernel" to the new one.
The reason I am posting on this question of yours is that most of the time taken in our case was in cleaning up blobs, and the DLL they gave us was specifically to speed up the blob cleanup. So I think it might help you too.
Hence, I would suggest you contact Sitecore Support in this case; I am sure you will get the best solution for your situation.
Hope this helps you!
Regards,
Varun Shringarpure
If you have a staging environment, I would recommend taking a copy of the database and trying to shrink it. Part of the database size might also be in the transaction log.
If you have a DBA, please get them involved.
I am using Mezzanine + Cartridge for a shopping cart. I want to add the concept of reward points (like a discount coupon): users who have enough reward points and want to use them should be able to pay with them, in which case the payment details step at checkout should be disabled for those users and the order placed.
So I just want to know how I can disable this payment step in Cartridge. Can anybody who has used Cartridge in their projects tell me whether this is feasible, and if so, HOW?
There is a discount coupon module in Cartridge, but it does not fulfil my requirements, so I am not using it.
Thanks.
This is completely feasible. Cartridge is just a Django application.
Is there a one-to-one ratio of reward points to your local currency?
I might start by extending the Account model with a cartridge.shop.fields.MoneyField called "reward_points". Next, add the necessary fields and logic to a customized version of cartridge.shop.forms.OrderForm.
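To make the idea concrete, here is a minimal, framework-free sketch of the underlying decision logic, assuming a 1:1 points-to-currency ratio. The function names are hypothetical illustrations, not part of Cartridge's API; in a real project this logic would live in your customized OrderForm or checkout handlers.

```python
from decimal import Decimal


def apply_reward_points(order_total: Decimal, reward_points: Decimal):
    """Split an order total into the amount covered by points and the remainder.

    Assumes a 1:1 ratio of reward points to currency.
    """
    points_used = min(order_total, reward_points)
    return points_used, order_total - points_used


def payment_step_required(order_total: Decimal, reward_points: Decimal) -> bool:
    """The payment details step is only needed when points don't cover the total."""
    _, remainder = apply_reward_points(order_total, reward_points)
    return remainder > 0
```

A checkout customization would then skip (or hide the fields of) the payment step whenever payment_step_required returns False, and record the points spent against the account.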
Your best options for getting timely and accurate answers to questions about Mezzanine and Cartridge are the #mezzanine channel on Freenode and the mezzanine-users Google Groups mailing list.