vTiger Custom Module suddenly stopped storing data

My custom module has suddenly stopped storing data in the base table and no filters are showing up either.

Try deleting the module using the following script, as well as via console.php.
Place the script in your vTiger root directory and run it from a browser:
<?php
include_once('vtlib/Vtiger/Module.php');

$Vtiger_Utils_Log = true;
$MODULENAME = 'ModuleName'; // name of the module to delete

$moduleInstance = Vtiger_Module::getInstance($MODULENAME);
if ($moduleInstance) {
    $moduleInstance->delete();
}
// DB adjustment still needs to be done -- running this script alone is not enough.
echo "Success";
Remember that when deleting a module, everything belonging to it must be deleted. After removing the module with the script above and console.php, check the following tables:
vtiger_field
vtiger_relatedlists
vtiger_<YourModuleName>
vtiger_<YourModuleName>_user_field
vtiger_<YourModuleName>cf
The first two tables are shared with other modules: search them for rows referencing your module and its fields, and delete only those rows, never the tables themselves. The last three tables belong exclusively to your module and should be removed completely.
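
For illustration, here is a minimal sketch of that cleanup in Python using the mysql-connector-python package (an assumption; any MySQL client or phpMyAdmin works just as well). The module name, credentials, and exact table names are placeholders to adapt, and you should back up the database before running anything like this:

import mysql.connector  # assumption: mysql-connector-python is installed

MODULE = 'ModuleName'  # placeholder: the module you are removing

conn = mysql.connector.connect(user='root', password='secret', database='vtiger')
cur = conn.cursor()

# Look up the module's tab id, which the shared tables reference.
cur.execute("SELECT tabid FROM vtiger_tab WHERE name = %s", (MODULE,))
row = cur.fetchone()
if row:
    tabid = row[0]
    # Remove only the rows belonging to this module from the shared tables.
    cur.execute("DELETE FROM vtiger_field WHERE tabid = %s", (tabid,))
    cur.execute(
        "DELETE FROM vtiger_relatedlists WHERE tabid = %s OR related_tabid = %s",
        (tabid, tabid))
    cur.execute("DELETE FROM vtiger_tab WHERE tabid = %s", (tabid,))

# Drop the module-specific tables entirely.
for table in ('vtiger_' + MODULE.lower(),
              'vtiger_' + MODULE.lower() + '_user_field',
              'vtiger_' + MODULE.lower() + 'cf'):
    cur.execute("DROP TABLE IF EXISTS " + table)

conn.commit()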

Trouble with the Django loaddata command

I am writing some free software based on Django.
I have a class Item which describes a pricing plan (such as "subscription, $10 per week without a trial period").
My code often creates new items based on existing ones. For example, a new item created from the item above might be: "subscription, $10 per week with a 10-day trial period" (for the case where a customer has already paid for 10 days).
Now there are two kinds of items:
predefined items (as in the first example);
modified items (based on another item, as in the second example).
Now the trouble:
I create predefined items using the ./manage.py loaddata ... command, which loads the items from a JSON file.
I create modified items in my Python code.
If I add a new item to the JSON file and run ./manage.py loaddata ... again, then (as I understand it) the loaddata command may overwrite one of the modified items (created later by my Python code).
What can I do to avoid overwriting modified items with new predefined items? More generally, how do I keep predefined and modified items distinct, so the code can tell which items are predefined and which are not?
Dumpdata and loaddata should not be used to create "modified items"; treat these commands more like "backup" and "restore". If you want to load newly created items from a JSON file, write a custom management command:
Custom Management Commands
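As a minimal sketch of such a command (the app name yourapp, the file layout, and the JSON field names are assumptions; it relies on the PredefinedItem model declared below):

# yourapp/management/commands/load_predefined.py -- hypothetical sketch
import json

from django.core.management.base import BaseCommand

from yourapp.models import PredefinedItem  # the model declared below


class Command(BaseCommand):
    help = "Load predefined items from JSON without touching modified items."

    def add_arguments(self, parser):
        parser.add_argument("path", help="path to the JSON file of predefined items")

    def handle(self, *args, **options):
        with open(options["path"]) as f:
            items = json.load(f)
        for data in items:
            # Only PredefinedItem rows are created or updated here, so
            # ModifiedItem rows created later in code are never overwritten.
            PredefinedItem.objects.update_or_create(
                pk=data["id"], defaults={"name": data["name"]})

Run it with ./manage.py load_predefined items.json.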
First declare an abstract model:
from django.db import models

class ItemBase(models.Model):
    class Meta:
        abstract = True

    name = models.CharField(max_length=100)
    # ...
Then declare:
from django.db.transaction import atomic
from django.forms.models import model_to_dict

class PredefinedItem(ItemBase):
    pass

class ModifiedItem(ItemBase):
    base = models.OneToOneField(PredefinedItem, null=True, on_delete=models.SET_NULL)

    @staticmethod
    @atomic
    def obtain_predefined(id):
        try:
            return ModifiedItem.objects.get(base_id=id)
        except ModifiedItem.DoesNotExist:
            predefined = PredefinedItem.objects.get(pk=id)
            return ModifiedItem.objects.create(
                base=predefined,
                **model_to_dict(predefined,
                                fields=[f.name for f in ItemBase._meta.fields]))
obtain_predefined() makes a copy of a predefined object, and that copy is then used in place of the predefined object itself. As a result we need not worry about loaddata overwriting modified objects: it only ever rewrites PredefinedItem rows.
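A quick usage sketch (the item id and field value are placeholders):

# Fetch a working copy of predefined item 1; later loaddata runs will
# update PredefinedItem rows but leave this copy untouched.
item = ModifiedItem.obtain_predefined(1)
item.name = "subscription, $10 per week with a 10-day trial"
item.save()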
Note: based on the answer at https://stackoverflow.com/a/52787554/856090.

Error while saving transformation in pentaho spoon

I am getting the error below when I save a transformation in Pentaho Spoon:
Error saving transformation to repository!
Error updating batch
Cannot insert duplicate key row in object 'dbo.R_STEP_ATTRIBUTE' with unique index 'IDX_RSAT'. The duplicate key value is (2314, PARTITIONING_SCHEMA, 0).
Everything was working fine until I ran a job that creates multiple Excel files. While this job was running, a memory issue occurred and the job was aborted. After that I tried to save my file, but it was deleted during the save rather than saved, so I lost the job I created.
Please help me understand the cause.
The last save of the directory did not end gracefully.
There is a small chance that you can repair it by erasing the db cache file in the .kettle directory.
If that does not work, create a new repository and copy the current one into it, using the global repository export/import. Then erase the old repository and repeat the export/import from the freshly rebuilt repository.
The intermediary repository may be file-based rather than database-based.
If it is the first time you are doing this, plan for one to two hours.
There is an easy way to recover from this.
As AlainD says, the problem occurs when you save or delete a transformation and suddenly lose the connection, or Kettle runs into a problem.
When that happens, orphaned step records are left in the table R_STEP_ATTRIBUTE. In the error shown, [ID_TRANSFORMATION] = 2314.
So if you check the table R_TRANSFORMATION for [ID_TRANSFORMATION] = 2314, you will probably find no transformation with that id.
Once you have checked that, you can delete all the records related to that [ID_TRANSFORMATION], for example:
delete from R_STEP_ATTRIBUTE where ID_TRANSFORMATION=2314
We just solved this issue by executing the following SQL statement:
DELETE
FROM R_STEP_ATTRIBUTE
WHERE ID_STEP NOT IN (SELECT ID_STEP FROM R_STEP)

If a module was not created properly, which tables' data must be removed to reinstall it in vTiger?

I am new to vTiger. Could anyone tell me which tables store module-related data? Suppose I create a module ABC for some purpose. It will create two tables, (1) vtiger_ABC and (2) vtiger_ABCcf, and vtiger_crmentity is shared. My questions are:
1) Besides these three tables, which additional tables are required?
2) If the module was not created properly, which tables' data must be removed to reinstall the module in vTiger 7? Please tell me the table names.
1) Those three tables are the only ones strictly needed (vtiger_yourmodule, vtiger_yourmodulecf and vtiger_crmentity). Of course you can create additional tables, but only if you have a special need; for basic entity modules you only need those three.
2) You should run a script for module uninstallation:
<?php
require_once 'vtlib/Vtiger/Module.php';

$Vtiger_Utils_Log = true;
$MODULENAME = 'yourmodule';

$moduleInstance = Vtiger_Module::getInstance($MODULENAME);
if ($moduleInstance) {
    echo "Module is present - deleting module instance...\n";
    $moduleInstance->delete();
} else {
    echo "Module not found...\n";
}
Put it in your vTiger root folder and execute it through a browser. The script deletes the module's entries from several related tables. You can also manually drop the tables vtiger_yourmodule and vtiger_yourmodulecf and delete your module's folder at vtiger_root/modules/yourmodule.

Rails 4 ActiveRecord Sql Server - Unable to save binary into image column

We are working to upgrade our application to a more current version of Ruby & Rails. Our app integrates with a legacy database (SQL Server 2008 R2) that has a table with a column of image data type (we are unable to change this column to varbinary(max)). Previously we were able to save a binary into the image column. However now we are getting conversion errors.
We are working to upgrade to the following (among others):
Rails 4.2.1
ActiveRecord_SQLServer_Adapter (4.2.4)
tiny_tds (0.6.3.rc1)
FreeTDS (v0.91.112)
When we now attempt to save into the image column, we get errors similar to:
TinyTds::Error: Unclosed quotation mark after the character string
Researching various issues in tiny_tds and activerecord-sqlserver-adapter, we decided to create a second table that matched the first but changed the data type from image to varbinary(max). We can save a binary into that column.
The code causing the challenge is in a background job where we grab images from S3, store them locally, and then push the image into the database. Again, we don't control the legacy database and thus can't change the data type (or address why we are storing the image in the db in the first place).
...
@d = Doc.new
...
open("#{Rails.root}/cache/pictures/image.png", "wb") do |file|
  file << open(r.image.url).read
end
@d.document = File.binread("#{Rails.root}/cache/pictures/image.png")
@d.save!
Given that the upgrade broke saving images, we are trying to figure out the best fix. We could obviously roll back until we find a version that works, but we hope to find a proper fix. Anyone have any ideas?
Update:
We added the following configuration because there are triggers on the table being inserted into: ActiveRecord::ConnectionAdapters::SQLServerAdapter.use_output_inserted = true
When we remove this configuration we get the following error:
TinyTds::Error: The target table 'doc' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause.
Note: We are unable to make any modifications to the triggers.
Per feedback on the ActiveRecord_SQLServer_Adapter site, we rolled back to 4.1.11 and we are now able to save into the image column.
We also had to add this snippet to overcome the issue with the triggers.

Loading from pickled data causes database error with new saves

In order to save time moving data, I pickled some models and dumped them to a file. I then reloaded them into another database using the exact same model. The save worked fine and the objects kept their old ids, which is what I wanted. However, when saving new objects I run into nextval errors.
Not being very adept with Postgres, I'm not sure how to fix this so I can keep old records with their existing ids while being able to continue adding new data.
Thanks,
Thomas
There is actually a Django management command that prints out sequence-reset SQL, called sqlsequencereset:
$ python manage.py sqlsequencereset issues
BEGIN;
SELECT setval('"issues_project_id_seq"', coalesce(max("id"), 1), max("id") IS NOT null) FROM "issues_project";
COMMIT;
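You can apply the generated SQL directly by piping it into the database shell, e.g. ./manage.py sqlsequencereset issues | ./manage.py dbshell (the app label issues is a placeholder for your own app).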
I think you are talking about the sequence that is used for autoincrementing your id fields.
The easiest solution here is, in a psql shell:
select max(id)+1 from YOURAPP_YOURMODEL;
and then use that value in this command:
alter sequence YOURAPP_YOURMODEL_id_seq restart with MAX_ID_FROM_PREV_STATEMENT;
That should do the trick.
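
If you would rather do the same from Django code, here is a minimal sketch using a raw cursor (the table issues_project and its sequence name are placeholders for your own model's table; the statement mirrors what sqlsequencereset generates):

from django.db import connection

# Point the sequence at the current maximum id; for an empty table the
# third argument (is_called = false) makes the next id start at 1.
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT setval('issues_project_id_seq', "
        "COALESCE(MAX(id), 1), MAX(id) IS NOT NULL) FROM issues_project")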