Get Module data of a user from Vtiger CRM

I am new to vTiger and need to fetch all data from the "Project" module for a specific user, like:
SELECT * FROM Projects WHERE assigned_user_id='9'
I could not find an "assigned_user_id" field in the vtiger_project table, so how can I get this?
My Project fields are as below:
https://imgur.com/a/s4zzSqJ
Please check and suggest how to get all the project data for a particular user.

Try the following:
SELECT * FROM vtiger_project INNER JOIN vtiger_crmentity ON projectid = crmid AND deleted = 0 where smownerid = 1;
The Assigned To field does not exist in the vtiger_project table; instead it is stored in the vtiger_crmentity table (that is true for all vtiger modules that have records).
The deleted = 0 condition makes sure the record has not been (logically) deleted in vtiger.

Update: I've misunderstood the question. My answer below is valid when you are using the REST API.
According to the result from ListTypes the name of the module is Project, without the 's'.
IDs in Vtiger are prefixed with the module/object ID and an x, so your 9 would actually be:
19x9
This assumes 19 is the object ID for the user module; it is 19 in my Vtiger V7. You can make sure by calling Describe.
If you need to find the name of the assigned user field, call Describe for the Project module. AFAIK assigned_user_id is correct.
Your query:
SELECT * FROM Project WHERE assigned_user_id='19x9'
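If it helps, here is a minimal sketch of running that query against Vtiger's webservice from Python. It assumes you have already completed the getchallenge/login handshake and hold a valid sessionName; the URL, session value, and printed field names are placeholders you should verify with Describe:

import requests

# Hypothetical values -- replace with your CRM's URL and the sessionName
# returned by the webservice login operation.
CRM_URL = "https://your-crm.example.com/webservice.php"
SESSION_NAME = "your-session-name"

params = {
    "operation": "query",
    "sessionName": SESSION_NAME,
    # Webservice queries must end with a semicolon.
    "query": "SELECT * FROM Project WHERE assigned_user_id='19x9';",
}
data = requests.get(CRM_URL, params=params).json()
if data.get("success"):
    for project in data["result"]:
        print(project.get("projectname"), project.get("assigned_user_id"))
else:
    print("query failed:", data.get("error"))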

Related

How Django knows to update or insert

I'm reading the Django docs to understand how Django knows whether to do an update or an insert when calling save(). The docs say:
If the object’s primary key attribute is set to a value that evaluates to True (i.e. a value other than None or the empty string), Django executes an UPDATE.
If the object’s primary key attribute is not set or if the UPDATE didn’t update anything, Django executes an INSERT.
But in practice, when I create a new instance of a Model and set its "id" property to a value that already exists in my database records, this doesn't happen. For example, I have a Model class named "User" with a property named "name", like below:
class User(models.Model):
    name = models.CharField(max_length=100)
Then I create a new User and save it:
user = User(name="xxx")
user.save()
now in my database table, a record like id=1, name="xxx" exists.
Then I create a new User and just set the property id=1:
newuser = User(id=1)
newuser.save()
Unlike what the doc says, when I had done this I found two records in my database table: one with id=1 and another with id=2.
So, can anyone explain this to me? I'm confused. Thanks!
That's because in newer versions of Django (1.6 and later), Django does not check whether the id is already in the database, so this can depend on the database: Django attempts an UPDATE, and if the database reports that no row was updated, it does an INSERT. Check the doc -
In Django 1.5 and earlier, Django did a SELECT when the primary key attribute was set. If the SELECT found a row, then Django did an UPDATE, otherwise it did an INSERT. The old algorithm results in one more query in the UPDATE case. There are some rare cases where the database doesn’t report that a row was updated even if the database contains a row for the object’s primary key value. An example is the PostgreSQL ON UPDATE trigger which returns NULL. In such cases it is possible to revert to the old algorithm by setting the select_on_save option to True.
https://docs.djangoproject.com/en/1.8/ref/models/instances/#how-django-knows-to-update-vs-insert
If you want the old behavior, set the select_on_save option to True.
You might want to try force_update if that is required:
https://docs.djangoproject.com/en/1.8/ref/models/instances/#forcing-an-insert-or-update
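As a minimal sketch of both options (the model mirrors the question):

from django.db import models

class User(models.Model):
    name = models.CharField(max_length=100)

    # Revert to the pre-1.6 SELECT-then-UPDATE/INSERT algorithm.
    select_on_save = True

# Or force Django to issue an UPDATE rather than letting it decide:
newuser = User(id=1, name="xxx")
newuser.save(force_update=True)  # raises DatabaseError if no row with pk=1 exists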

How to interact with an existing database with a Model through a template function in Django

I have an existing table called empname in my postgres database, with columns (Projectid, empid, name, Location) and rows:
(1, 101, Raj, India)
(2, 201, David, USA)
So the app console will have the following:
1) Projectid = textbox
2) Ops = (View, Insert, Edit) dropdown
Case 1:
If I enter project id 1 and select View, it displays all the records for Projectid = 1 (here, 1 record).
Case 2:
If I enter projectid 3 and select Insert, it asks for all the inputs (empid, name, location) and inserts them into the table.
Case 3:
If I enter projectid 2 and select Edit, it shows all the fields for that id, and the user can edit any column and save, which updates the record in the existing table in the backend.
If no data is found for the given project id, it displays "no records found".
Please help me with this, as I am stuck on the models.
Once you have your models created, the next task should be the form models. I can identify at least three form classes that you will need to create: one to display the information (case 1), another to collect information (case 2), and a last one to edit the information. Wire the forms up to the views and add the URLs.
A good reference could be a Django user registration form, since it has all three cases taken care of: http://www.tangowithdjango.com/book17/chapters/login.html
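To get you started, here is a minimal sketch of case 1, assuming the existing table is exposed as an unmanaged model and that Projectid can serve as the primary key (the class, view, and template names are illustrative):

from django import forms
from django.db import models
from django.shortcuts import render

class Empname(models.Model):
    projectid = models.IntegerField(primary_key=True)
    empid = models.IntegerField()
    name = models.CharField(max_length=100)
    location = models.CharField(max_length=100)

    class Meta:
        managed = False      # Django won't try to create or alter the existing table
        db_table = 'empname'

class LookupForm(forms.Form):
    projectid = forms.IntegerField()

def view_project(request):
    records = None
    form = LookupForm(request.GET or None)
    if form.is_valid():
        records = Empname.objects.filter(projectid=form.cleaned_data['projectid'])
    # The template can render "no records found" when records is empty.
    return render(request, 'project.html', {'form': form, 'records': records})

The Insert and Edit cases follow the same pattern with a ModelForm over Empname.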

Sphinx Search powered app - wrong design?

I'm building a web app using Django and Sphinx for free-text search. I need to apply additional restrictions before making the request to searchd. Consider 2 tables:
Entity
id
title
description
created_by_id
updated_by_id
created_date
updated_date
and
EntityUser
id
entity_id [FK to the table above]
joining_user_id
is_approved
created_by_id
updated_by_id
created_date
updated_date
I've built an RT index for the main table Entity and all works fine, but now I want to query only those entities the user has joined, i.e. where for a specific user_id and entity_id there is a record in EntityUser with is_approved=1. The problem is that I can't index EntityUser, because there are no string fields - this table only holds integers/timestamps, as you see. I'm not sure I could write a SphinxQL query containing a subquery against another index, even if I could build an index for that table. Knowing that Sphinx has been used in quite big projects with great success, I doubt this is a limitation of Sphinx - is it bad DB/application design, or a lack of knowledge of how to build a proper RT index? Can I somehow extend the existing index so that I can apply the restriction above?
I was thinking I could apply the additional restrictions on the MySQL side after Sphinx returns the IDs of matching records, but that's not going to work: the N records with the highest weight would be returned, and after applying the additional restrictions the result could be empty. So I need to narrow the search area first and then query only those entities the user can possibly see.
Adapting the example from http://sphinxsearch.com/docs/current.html#attributes, you might be able to use something like this in your conf:
...
sql_query = \
    SELECT app_entity.id AS id, \
           app_entity.title AS title, \
           app_entity.description AS description, \
           app_entityuser.id AS userid \
    FROM app_entity, app_entityuser \
    WHERE app_entity.id = app_entityuser.entity_id \
      AND app_entityuser.is_approved = 1
# the first column is taken as the implicit document ID, so only userid needs declaring
sql_attr_uint = userid
...
I should provide a disclaimer: I have not tried this.
I did find a related SO post, but it doesn't look like they quite solved it: Django-sphinx result filtering using attributes?
Good luck!
Actually I've found the answer, and it has nothing to do with the design of the application or the DB.
In fact it's simple - I just need to use an MVA for the RT index, as I would for a plain one (rt_attr_multi or rt_attr_multi_64). In the configuration file I have to do something like this:
...
rt_attr_multi = entity_users
}
and then populate it with the IDs of users who have joined the Entity and been approved. The problem was that I couldn't understand how to use an MVA with an RT index, but now it's clear. There aren't enough real-world examples with RT indexes and MVAs, I think, so I've shared this to help solve similar problems.
UPDATE: I was fighting for the last hour to generate the RT index, always getting "unknown column: 'entity_users'". I finally found the reason - if you add an MVA to an RT index (I don't know if the same holds for a plain one), you have to not only restart the searchd daemon (service), but also DELETE everything in your "data" folder (or wherever you have stored your index)!
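For reference, here is a sketch of populating and filtering the MVA over SphinxQL from Python. It assumes searchd's MySQL-protocol listener on the default port 9306, and the index name entity_rt is illustrative:

import pymysql

# searchd speaks the MySQL protocol, by default on port 9306.
conn = pymysql.connect(host='127.0.0.1', port=9306, user='')
cur = conn.cursor()

# Populate the RT index; the MVA takes a parenthesized list of user IDs.
cur.execute(
    "REPLACE INTO entity_rt (id, title, description, entity_users) "
    "VALUES (1, 'some title', 'some description', (5, 9, 12))"
)

# An MVA filter matches when ANY element equals the value, so this returns
# only entities that the user with ID 9 has joined and been approved for.
cur.execute("SELECT id FROM entity_rt WHERE MATCH('search terms') AND entity_users = 9")
print(cur.fetchall())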

Django AutoField not returning new primary_key

We've got a small problem with a Django project we're working on and our postgresql database.
The project we're working on is a site/db conversion from a PHP site to a Django site, so we used inspectdb to generate the models from the current PHP backend.
It gave us this, and we added primary_key=True and unique=True:
class Company(models.Model):
    companyid = models.IntegerField(primary_key=True, unique=True)
    ...
    ...
That didn't seem to be working when we finally got to saving a new Company entry. It would return a not-null constraint error, so we migrated to an AutoField like below:
class Company(models.Model):
    companyid = models.AutoField(primary_key=True)
    ...
    ...
This saves the Company entry fine but the problem is when we do
result = form.save()
We can't do
result.pk or result.companyid
to get the newly assigned primary key (yet we can see that it has been given a proper companyid in the database).
We are at a loss for what is happening. Any ideas or answers would be greatly appreciated, thanks!
I just ran into the same thing, but during a django upgrade of a project with a lot of history. What a pain...
Anyway, the problem seems to result from the way Django's postgresql backend gets the primary key for a newly created object: it uses pg_get_serial_sequence to resolve the sequence for a table's primary key. In my case, the id column wasn't created with a serial type, but rather with an integer, which means that my sequence isn't properly connected to the table.column.
The following is based on a table with the create statement below; you'll have to adjust the table names, columns and sequence names to your situation:
CREATE TABLE "mike_test" (
    "id" integer NOT NULL PRIMARY KEY,
    "somefield" varchar(30) NOT NULL UNIQUE
);
The solution if you're using postgresql 8.3 or later is pretty easy:
ALTER SEQUENCE mike_test_id_seq OWNED BY mike_test.id;
If you're using 8.1 though, things are a little muckier. I recreated my column with the following (simplest) case:
ALTER TABLE mike_test ADD COLUMN temp_id serial NOT NULL;
UPDATE mike_test SET temp_id = id;
ALTER TABLE mike_test DROP COLUMN id;
ALTER TABLE mike_test ADD COLUMN id serial NOT NULL PRIMARY KEY;
UPDATE mike_test SET id = temp_id;
ALTER TABLE mike_test DROP COLUMN temp_id;
SELECT setval('mike_test_id_seq', (SELECT MAX(id) FROM mike_test));
If your column is involved in any other constraints, you'll have even more fun with it.
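As a quick check that the fix took, you can ask PostgreSQL the same question Django's backend asks (a sketch with hypothetical connection details):

import psycopg2

# Hypothetical connection parameters -- adjust for your database.
conn = psycopg2.connect(dbname="yourdb", user="youruser")
cur = conn.cursor()

# This is the lookup Django's postgresql backend relies on to find the
# sequence behind the primary key; after the fix it should not return NULL.
cur.execute("SELECT pg_get_serial_sequence('mike_test', 'id')")
print(cur.fetchone()[0])  # expect something like 'public.mike_test_id_seq'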

Automate the generation of natural keys

I'm studying a way to serialize part of the data in database A and deserialize it in database B (a sort of save/restore between different installations), and I've had a look at Django natural keys to avoid problems due to duplicated IDs.
The only issue is that I would have to add a custom manager and a new method to all my models. Is there a way to make Django automatically generate natural keys by looking at unique=True or unique_together fields?
Please note this answer has nothing to do with Django, but hopefully it gives you another alternative to think about.
You didn't mention your database; however, in SQL Server there is a BINARY_CHECKSUM() function you can use to compute a checksum over the data held in the row. Think of it as a hash against all the fields in the row.
This checksum method can be used to update one database from another by checking whether the local row checksum <> the remote row checksum.
The SQL below will update a local database from a remote database. It won't insert new rows; for that you use insert ... where id > @MaxLocalID.
SELECT delivery_item_id, BINARY_CHECKSUM(*) AS bc
INTO #DI
FROM [REMOTE.NETWORK.LOCAL].YourDatabase.dbo.delivery_item di
SELECT delivery_item_id, BINARY_CHECKSUM(*) AS bc
INTO #DI_local
FROM delivery_item di
-- Get rid of items that already match
DELETE FROM #DI_local
WHERE delivery_item_id IN (SELECT l.delivery_item_id
FROM #DI x, #DI_local l
WHERE l.delivery_item_id = x.delivery_item_id
AND l.bc = x.bc)
DROP TABLE #DI
UPDATE DI
SET engineer_id = X.engineer_id,
... -- Set other fields here
FROM delivery_item DI,
[REMOTE.NETWORK.LOCAL].YourDatabase.dbo.delivery_item x,
#DI_local L
WHERE x.delivery_item_id = L.delivery_item_id
AND DI.delivery_item_id = L.delivery_item_id
DROP TABLE #DI_local
For the above to work, you will need a linked server between your local database and the remote database:
-- Create linked server if you don't have one already
IF NOT EXISTS ( SELECT srv.name
                FROM sys.servers srv
                WHERE srv.server_id != 0
                  AND srv.name = N'REMOTE.NETWORK.LOCAL' )
BEGIN
    EXEC master.dbo.sp_addlinkedserver @server = N'REMOTE.NETWORK.LOCAL',
        @srvproduct = N'SQL Server'
    EXEC master.dbo.sp_addlinkedsrvlogin
        @rmtsrvname = N'REMOTE.NETWORK.LOCAL',
        @useself = N'False', @locallogin = NULL,
        @rmtuser = N'your user name',
        @rmtpassword = 'your password'
END
GO
In that case you should use a GUID as your key. The database can generate these for you automatically; google "uniqueidentifier". We have 50+ warehouses all inserting data remotely and sending their data up to our primary database using SQL Server replication. They all use a GUID as the primary key, as this is guaranteed to be unique. It works very well.
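On the Django side, the equivalent is a UUID primary key; a minimal sketch (the model name just mirrors the earlier examples):

import uuid
from django.db import models

class Company(models.Model):
    # Generated client-side by Django, so values are unique across
    # installations without coordinating sequences between databases.
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    name = models.CharField(max_length=100)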
My solution has nothing to do with natural keys but uses pickle/unpickle.
It's not the most efficient way, but it's simple and easy to adapt to your code. I don't know if it works with a complex db structure, but if that's not your case, give it a try!
when connected to db A:
import pickle
records_a = your_model.objects.filter(...)
f = open("pickled.records_a.txt", 'wb')
pickle.dump(records_a, f)
f.close()
then move the file and when connected to db B run:
import pickle
records_a = pickle.load(open('pickled.records_a.txt', 'rb'))  # open in binary mode
for r in records_a:
    r.id = None  # clear the pk so save() inserts a new row in db B
    r.save()
Hope this helps
Make a custom base model by extending the models.Model class, write your generic manager inside it along with a custom .save() method, then edit your models to extend the custom base model. This will have no side effect on your db table structure or old saved data, except when you update some old rows; if you have old data, do a fake update on all your records.
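A minimal sketch of that idea, building the natural key from the model's unique_together fields (the class names are illustrative; natural_key() and get_by_natural_key() are the hooks Django's serializers look for):

from django.db import models

class NaturalKeyManager(models.Manager):
    # The deserializer calls this with the tuple returned by natural_key().
    def get_by_natural_key(self, *args):
        fields = self.model._meta.unique_together[0]
        return self.get(**dict(zip(fields, args)))

class NaturalKeyModel(models.Model):
    objects = NaturalKeyManager()

    class Meta:
        abstract = True

    # Build the natural key from the first unique_together declaration.
    def natural_key(self):
        fields = self._meta.unique_together[0]
        return tuple(getattr(self, f) for f in fields)

# An illustrative concrete model:
class Book(NaturalKeyModel):
    title = models.CharField(max_length=200)
    author = models.CharField(max_length=200)

    class Meta(NaturalKeyModel.Meta):
        unique_together = (('title', 'author'),)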