I cannot find where the logs of an updated ticket are stored in vtiger open source 6.5 - vtiger

I want to know the average time between when a ticket is created and when it is closed, so as I understand it I need the ticket's update log.
Where can I find the table for this? I cannot find a trouble ticket table - it does not exist!

Changes are logged in vtiger_modtracker_basic and vtiger_modtracker_detail.
In vtiger_modtracker_basic you can find the crmid, who made the change, when it was made, and what kind of change it was (the status).
Here are the status values:
0 = updated
1 = deleted
2 = created
3 = restored
4 = linked
5 = unlinked
In vtiger_modtracker_detail you can find which field has changed and the new and previous value.
So for your need, you should join both tables and calculate the delay between the creation time and the date when the ticket changed its status from Open to Closed.
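As a rough sketch (untested; the column names below come from a standard vtiger 6.x schema, and the status value 'Closed' is assumed to match your picklist), the average resolution time could be computed like this:

```sql
-- Hypothetical query: average hours from ticket creation to the update
-- that set ticketstatus to 'Closed' (field and status names assumed).
SELECT AVG(TIMESTAMPDIFF(HOUR, c.createdtime, b.changedon)) AS avg_hours_to_close
FROM vtiger_crmentity c
JOIN vtiger_modtracker_basic b  ON b.crmid = c.crmid
JOIN vtiger_modtracker_detail d ON d.id = b.id
WHERE c.setype = 'HelpDesk'          -- trouble tickets are stored as HelpDesk entities
  AND b.status = 0                   -- 0 = updated
  AND d.fieldname = 'ticketstatus'
  AND d.postvalue = 'Closed';
```

If a ticket can be reopened and closed several times, you would additionally need to pick only the last such change per crmid.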

Map reduce returns old data

I have the following map:
from doc in docs
select new { Name = doc.Name, Count = 1 }
and reduce:
from result in results
group result by new { result.Name } into g
select new {
    Name = g.Key.Name,
    Count = Enumerable.Sum(g, x => (int)x.Count)
}
If I put a lock on the index folder, then save a document, delete it, and re-save it to trigger a re-index, the old document still appears in the index query results even though the index is reported as up to date. The last-indexed date is also older than the date the document was updated, so the index should not contain any old results.
Any ideas what's going on? This is actually part of a larger problem I've discovered on a production system. I'm not clear why it's happening, but I've been able to reproduce a similar situation by locking the index, so I suspect there's some process causing the lock. It means the index results return projections that are old.
How can I get the reduce to filter out results that are old?
If you disable the index and documents are then updated or deleted, you'll get outdated results from the map-reduce index. This can happen even when the index isn't disabled.
The reason is that indexes are eventually consistent. You can read about it here:
https://ravendb.net/docs/article-page/3.5/Csharp/users-issues/understanding-eventual-consistency
You can use WaitForNonStaleResultsAsOfLastWrite:
https://ravendb.net/docs/article-page/2.5/Csharp/client-api/querying/stale-indexes#setting-cut-off-point
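For example, the cut-off can be applied per query via Customize (a sketch against the RavenDB 3.x client; the Item type and the index name are assumed, not taken from your code):

```csharp
// Hypothetical query: wait until the index has caught up with the last
// write on this session before returning results.
var results = session.Query<Item>("Items/ByName")
    .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
    .Where(i => i.Name == "foo")
    .ToList();
```

This only waits; it does not fix whatever is holding the lock on the index folder, which you should still track down.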
What you're describing is a stale index: you update/create/delete a document and immediately query for it, but the query returns stale results.
The recommended way to fix this is by calling .WaitForIndexesAfterSaveChanges() during your create/update/delete calls:
// Inform Raven you'll wait for indexes when calling .SaveChanges
session.Advanced.WaitForIndexesAfterSaveChanges(
timeout: TimeSpan.FromSeconds(30),
throwOnTimeout: false);
// Do your update.
session.Store(new Employee
{
FirstName = "John",
LastName = "Doe"
});
// This won't return until affected indexes are updated.
session.SaveChanges();
// Now you can run a query against your index, and it will return the updated data.
...
This way, .SaveChanges will block until the indexes are updated. Run your query immediately after .SaveChanges and you'll see the updated results as expected.

How to update only the queried objects in django

I have a table ActivityLog to which new data is added every second.
I am querying this table every 5 seconds through an API in the following way:
logs = ActivityLog.objects.prefetch_related('activity').filter(login=login_obj, read_status=0)
Now let's say that when I queried this table at 13:20:05 I got 5 objects in logs, and after my query 5 more rows were added to the table at 13:20:06. When I then update the queried logs queryset using logs.update(read_status=1), it also updates the newly added rows: instead of updating 5 objects it updates 10. How can I update only the 5 objects that I queried, without looping through them?
Take a look at select_for_update. Just be aware that the rows will be locked in the meantime.
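If the goal is simply to update exactly the rows you fetched (rather than lock them), a common pattern is to pin the queryset to the primary keys it matched at query time. A sketch, with the model and field names taken from the question:

```python
# Evaluate the queryset NOW and snapshot the matching primary keys,
# so rows inserted after this point are excluded from the update.
ids = list(
    ActivityLog.objects.filter(login=login_obj, read_status=0)
                       .values_list("pk", flat=True)
)

# This single UPDATE touches only the snapshotted rows, without looping.
ActivityLog.objects.filter(pk__in=ids).update(read_status=1)
```

The trade-off is one extra query to fetch the ids; for a few rows every 5 seconds that is usually negligible.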

Increment Number OnInsert()

I am trying to increment a number field whenever a new row is added to my table. First I created a variable lastItem, specified as a Record with its Subtype set to my table. Then I added the following code to the OnInsert() trigger:
lastItem.FINDLAST;
ItemNo := lastItem.ItemNo + 10;
The above code does not work on the OnInsert() trigger, but it works for one row when I enter it on the ItemNo - OnValidate() trigger.
Any ideas how to get an increasing number on every new row in my table?
Are you sure that's Dynamics CRM? That is Dynamics NAV C/AL code, and you seem to be talking about the Item table. In that case, let NAV give you the next number from the No. Series properly.
You can use the same approach in any other table: related pattern
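As a sketch, the standard No. Series pattern looks roughly like this in C/AL (the setup record and field names below are the ones the stock Item table uses; adapt them for your own table and setup):

```
// OnInsert() trigger - assign the next number from the configured series.
IF "No." = '' THEN BEGIN
  InvtSetup.GET;
  InvtSetup.TESTFIELD("Item Nos.");
  NoSeriesMgt.InitSeries(InvtSetup."Item Nos.",xRec."No. Series",0D,"No.","No. Series");
END;
```

Unlike FINDLAST on OnInsert, the number series handles concurrency and gaps for you, which is why FINDLAST appears to work only for a single row.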
You should stay away from doing direct SQL updates and adding triggers to the DB when using Dynamics CRM as it's not supported.
The appropriate way would be to use a plug-in which reads the last value and then does the increment. You would register this to run when a new record is created in the system.
You can find some example source code on this CodePlex project: CRM 2011 Autonumbering Solution
You should use the AutoIncrement property of the field. That way the field is incremented by one on every new row.

How to get FBA Fee and commission using Amazon MWS

I am going to extract order details from Amazon and store them in a database. I am getting all data except the FBA fee and commission of an order.
Can anyone please guide me on how to get the FBA fee and commission?
The commission is part of the settlement reports you'll receive every fortnight. I'm not using FBA, but I would assume FBA fees would be included there as well where applicable. Two of those reports are automatically created whenever Amazon is preparing a payout. You can get a list of these reports (they seem to be stored forever) using the GetReportList() call. Their report types are _GET_FLAT_FILE_PAYMENT_SETTLEMENT_DATA_ and _GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_. The two reports cover the same settlement in different formats.
Edit: More details on how to do this:
Call GetReportList using the following parameters:
'Acknowledged' = 'false'
'ReportTypeList.Type.1' = '_GET_FLAT_FILE_PAYMENT_SETTLEMENT_DATA_'
'ReportTypeList.Type.2' = '_GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_'
Please note: you might want to pick just one of the two report types.
Also: Acknowledged=false is not actually needed, but I recommend acknowledging the reports you have already processed, so you'll only get a list of new reports to work on, see step 5 below.
You'll get a list of reports back (a "GetReportListResult"). You'll need each report's ReportId for the next step.
Call GetReport using the ReportId from step 2.
Parse the response. It is a CSV file ("flat file" in Amazon terminology) with all your orders within two weeks prior to the report generation.
Upon successful processing, call UpdateReportAcknowledgements with ReportIdList.Id.1 = the ReportId from step 2 to acknowledge the report. This ensures that the next call to GetReportList (step 1) does not return the same data again.
You should get an UpdateReportAcknowledgementsResult back when Amazon has set that flag.
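The whole loop might look roughly like this with the MWS C# client library (a sketch only: credential setup, error handling, and the throttling back-off MWS requires are omitted, and the `client` / `merchantId` identifiers are assumed):

```csharp
// Hypothetical sketch of steps 1-5 against the MWS Reports API.
var listResponse = client.GetReportList(new GetReportListRequest {
    Merchant = merchantId,
    Acknowledged = false,   // only reports we have not processed yet
    ReportTypeList = new TypeList {
        Type = new List<string> { "_GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_" }
    }
});

foreach (var info in listResponse.GetReportListResult.ReportInfo)
{
    // Step 3: download the flat file for this report id.
    using (var file = File.Open(info.ReportId + ".csv", FileMode.Create))
    {
        client.GetReport(new GetReportRequest {
            Merchant = merchantId,
            ReportId = info.ReportId,
            Report = file            // response body is written to this stream
        });
    }

    // Step 4: parse the CSV and store the fee/commission columns... (omitted)

    // Step 5: acknowledge, so the next GetReportList call skips this report.
    client.UpdateReportAcknowledgements(new UpdateReportAcknowledgementsRequest {
        Merchant = merchantId,
        ReportIdList = new IdList { Id = new List<string> { info.ReportId } },
        Acknowledged = true
    });
}
```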
There is a newer report type, _GET_FBA_ESTIMATED_FBA_FEES_TXT_DATA_:
request = new RequestReportRequest();
request.MarketplaceIdList = new IdList();
request.Merchant = amznAccess.merchantId();
request.MarketplaceIdList.Id.Add(amznAccess.marketplaceId());
request.ReportType = "_GET_FBA_ESTIMATED_FBA_FEES_TXT_DATA_";
Don't forget to set the request start date (e.g. the last 30 days).

Entity Framework DB First: Timestamp column not working

Using the DB-first approach, I want my application to throw a concurrency exception whenever I try to update an (out-of-date) entity whose corresponding row in the database has already been updated by another application/user/session.
I am using Entity Framework 5 on .Net 4.5. The corresponding table has a Timestamp column to maintain row version.
I have done this in the past by adding a timestamp field to the table on which you wish to perform a concurrency check (in my example I added a column called ConcurrencyCheck).
There are two types of concurrency mode here, depending on your needs:
1. Concurrency Mode: Fixed
Re-add/refresh your table in your model. For fixed concurrency, make sure you set the Concurrency Mode of the timestamp column to Fixed when you import the table into your model.
Then to trap this:
try
{
    context.SaveChanges();
}
catch (OptimisticConcurrencyException ex)
{
    // handle your exception here...
}
2. Concurrency Mode: None
If you wish to handle your own concurrency checking, i.e. raise a validation message informing the user and not even allow a save to occur, then you can set Concurrency Mode to None.
1. Ensure you change the ConcurrencyMode property of the new column you just added to "None".
2. To use this in your code, I would create a variable to store the current timestamp for the screen on which you wish to check a save:
private byte[] CurrentRecordTimestamp
{
get
{
return (byte[])Session["currentRecordTimestamp"];
}
set
{
Session["currentRecordTimestamp"] = value;
}
}
3. On page load (assuming you're using ASP.NET WebForms rather than MVC/Razor; you don't mention which), or when you populate the screen with the data you wish to edit, pull the ConcurrencyCheck value of the record under edit into the variable you created:
this.CurrentRecordTimestamp = currentAccount.ConcurrencyCheck;
Then if the user leaves the record open and someone else changes it in the meantime, and they then also attempt to save, you can compare the timestamp value you saved earlier with the concurrency value as it is now:
if (Convert.ToBase64String(accountDetails.ConcurrencyCheck) != Convert.ToBase64String(this.CurrentRecordTimestamp))
{
}
After reviewing many posts here and on the web explaining concurrency and timestamps in Entity Framework 5, I came to the conclusion that it is basically impossible to get a concurrency exception when the model is generated from an existing database.
One workaround is modifying the generated entities in the .edmx file and setting the "Concurrency Mode" of the entity's timestamp property to "Fixed". Unfortunately, if the model is repeatedly re-generated from the database this modification may be lost.
However, there is one tricky workaround:
Initialize a transaction scope with isolation level of Repeatable Read or higher
Get the timestamp of the row
Compare the new timestamp with the old one
Not equal --> Exception
Equal --> Commit the transaction
The isolation level is important to prevent concurrent modifications from interfering.
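The steps above can be sketched as follows (entity and column names are assumed for illustration; requires System.Transactions, System.Linq and System.Data):

```csharp
// Hypothetical sketch: re-read the row version inside a Repeatable Read
// transaction and refuse to save if it changed since the entity was loaded.
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
{
    // Step 2: get the current timestamp of the row (names assumed).
    byte[] current = context.Database
        .SqlQuery<byte[]>("SELECT RowVersion FROM Accounts WHERE Id = @p0",
                          entity.Id)
        .Single();

    // Steps 3-4: compare with the version loaded earlier; not equal -> exception.
    if (!current.SequenceEqual(entity.RowVersion))
        throw new OptimisticConcurrencyException("Row was modified by another user.");

    // Step 5: equal -> save and commit; Repeatable Read keeps the row
    // protected from concurrent modification until the scope completes.
    context.SaveChanges();
    scope.Complete();
}
```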
PS:
Erikset's solution seems to be a fine way to overcome regenerating the model file.
EF detects a concurrency conflict when no rows were affected. So if you use stored procedures to delete and update, you can manually add the timestamp value to the WHERE clause:
UPDATE | DELETE ... WHERE PKfield = PkValue and Rowversionfield = rowVersionValue
Then, if the row has been deleted or modified by anyone else, the SQL statement affects 0 rows and EF interprets that as a concurrency conflict.