Apollo Client Cache: How to read all queries that have executed

I have a getStudents query that accepts several optional variables (firstName, lastName, dob, standardId, classId) and fetches the list of students matching the search criteria. I want to know whether there is a way, using the Apollo cache, to get all getStudents queries that have been executed, along with the variables each was executed with.
For example, suppose we have run 3 getStudents queries so far:
getStudents(variables: {firstName: "abc"})
getStudents(variables: {firstName: "abc", lastName: "def"})
getStudents(variables: {firstName: "abc", standardId: 1})
I need all of the above queries, along with the variables each was executed with. I cannot find any such option in the Apollo documentation.
Can someone please guide me here?
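One possible approach (a sketch relying on cache internals, not a documented API): Apollo Client's InMemoryCache normalizes results under ROOT_QUERY, keyed by field name plus the serialized variables, so reading cache.extract() lets you recover every getStudents call still in the cache. The key format is internal and could change between versions.
// A minimal sketch, assuming Apollo Client 3 with the default InMemoryCache.
// ROOT_QUERY field keys look like: getStudents({"firstName":"abc"})
const root = client.cache.extract().ROOT_QUERY || {};

const executedGetStudentsVariables = Object.keys(root)
    .filter(key => key.startsWith('getStudents('))
    .map(key => JSON.parse(key.slice('getStudents('.length, -1)));

console.log(executedGetStudentsVariables);
// e.g. [{ firstName: "abc" }, { firstName: "abc", lastName: "def" }, ...]
Note this only reflects queries whose results are still cached; evicted entries will not appear.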

Related

How to get the price and the id from request.POST.getlist()?

I am trying to get two values from request.POST.getlist(). What I am doing is:
for price in request.POST.getlist('price'):
    print(price)
What if I want to get two values with two keys, i.e. both the price and the id?
for price, id in request.POST.getlist('price', 'id')  # something like that???
I am trying to submit the data to a form:
for prix in request.POST.getlist('prix'):
    na = AssociationForm({'Prix_Unitaire': str(round(float(prix), 2)),
                          'Quantite': request.POST['quantite']},
                         instance=Association())
    na.save()
If your POST data looks like this:
{"price": [2, 3, 4], "id": [1, 2, 3]}
then you can iterate over both lists in parallel with zip():
for price, id in zip(request.POST.getlist('price'), request.POST.getlist('id')):
    # do your business here

Concatenate attributes in search expression

I am trying to build a FilterExpression to query data in DynamoDB.
var params = {
    TableName: "ContactsTable",
    ExpressionAttributeNames: {
        "#lastName": "LastName",
        "#firstName": "FirstName",
        "#contactType": "ContactType"
    },
    FilterExpression: "contains(#lastName, :searchedName) or contains(#firstName, :searchedName)",
    ExpressionAttributeValues: {
        ":companyContactType": event.query.companyContactType,
        ":searchedName": event.query.searchedValue
    },
    KeyConditionExpression: "#contactType = :companyContactType"
};
Users generally search for "LastName, FirstName" (appending a comma to the last name is a common search pattern). However, the data is stored in separate LastName and FirstName attributes so that users can also search by either field on its own.
Is there a way to dynamically concatenate these two fields, something like contains(#lastName<append comma>#firstName, :searchedName)?
You should remove the comma from the user input, split it into words and, for each word, check whether it is contained in either attribute (first name or last name), OR-ing everything together. You could even use begins_with instead of contains.
For example, "john smith" will result in:
contains(#lastName, "john") or
contains(#lastName, "smith") or
contains(#firstName, "john") or
contains(#firstName, "smith")
Also, contains() is case sensitive as far as I know, so you might want to store lowercase copies of the first and last names and lowercase the user's search term as well.
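A minimal sketch of how such a filter could be assembled dynamically (the helper name buildNameFilter and the :wordN placeholders are made up for illustration):
// Turn a search term like "Smith, John" into an OR of contains() clauses.
function buildNameFilter(searchedValue) {
    // Drop commas, lowercase, and split on whitespace.
    var words = searchedValue.replace(/,/g, ' ')
        .toLowerCase()
        .split(/\s+/)
        .filter(Boolean);

    var clauses = [];
    var values = {};
    words.forEach(function (word, i) {
        var placeholder = ':word' + i;
        values[placeholder] = word;
        clauses.push('contains(#lastName, ' + placeholder + ')');
        clauses.push('contains(#firstName, ' + placeholder + ')');
    });

    return {
        FilterExpression: clauses.join(' or '),
        ExpressionAttributeValues: values
    };
}
The result would still need :companyContactType merged into its ExpressionAttributeValues before running the query, and it assumes the lowercase copies of the name attributes mentioned above.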

How can one create a users field like the "Assigned To" field in vtigercrm?

I have a module ABC in which I need to create another users picklist, separate from the "Assigned To" field that already exists for every module. I tried to create it by copying the existing "Assigned To" row in the vtiger_field table, but it is not working; nothing new appears in module ABC.
tabid : ABC(tabid)
columnname : doneby
tablename : vtiger_crmentity
generatedtype : 1
uitype : 53
fieldname : doneby
fieldlabel : Inspected By
readonly : 1
presence : 2
maximumlength : 100
block : <confused> (need clarification)
typeofdata : V~M
summaryfield : 1
I am really confused by the block and typeofdata fields.
Can anybody help me create this users list field?
You can create it by changing tablename to vtiger_abc. The block value should be the one already used by the existing fields of the same module; identify it from the neighbouring rows in vtiger_field and reuse it.
You also need to create a column with the same name in the vtiger_abc table, which must be a varchar of suitable size.
Change typeofdata to V~O, which makes the field optional. The "Assigned To" entity field is mandatory, so if you really need yours to be mandatory too, leave it as V~M.
Update these two fields:
tablename : vtiger_abc
block : <the existing block id used by other fields of the same module>

Couchbase view using “WHERE” clause dynamically

I have JSON documents in the following format:
{
    "Name": "...",
    "Class": "...",
    "City": "...",
    "Type": "...",
    "Age": "...",
    "Level": "...",
    "Mother": "...",
    "Father": "..."
}
I have a map function like this:
function (doc, meta) {
    emit([doc.Name, doc.Age, doc.Type, doc.Level], null);
}
What I can do is supply "Name" and filter the results on it, but I also want to be able to supply only "Age" and filter on that. Couchbase does not provide a way to skip the "Name" key, so I would have to create a new map function with "Age" as the first key; likewise I also need to query on only the "Level" key, and so on. Creating a separate map function for every field is obviously not feasible, so is there anything I can do, apart from creating new map functions, to achieve this kind of functionality?
I can't use N1QL because I have 150 million documents, so it would take a lot of time.
First of all, that is not a very good map function:
it does not have any filtering
the function header should be function (doc, meta)
if you have a mixture of JSON and binary objects, add a meta.type == "json" check
Now for the things you can do:
If you are using v4 and above (v4.1 is much more recommended) you can use N1QL, which is very similar to SQL. (I didn't understand why you can't use N1QL.)
You can also emit multiple entries per document, in multiple orders;
i.e. if I have a doc in the format of
{
    "name": "Roi",
    "age": 31
}
I can emit two values to the index:
function (doc, meta) {
    if (meta.type == "json") {
        emit(doc.name, null);
        emit(doc.age, null);
    }
}
Now I can query by either of the 2 values. This is much better than creating 2 views.
Anyway, if you have something to filter by, it is always recommended to do so. A refinement of this idea is sketched below.
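One caveat with emitting raw values from several fields into a single index is that lookups can collide across fields (a numeric Age could equal a numeric Level, for example). A common refinement, sketched here as an assumption rather than something from the original answer, is to tag each emitted key with its field name:
// Compound [field, value] keys let each field be queried independently
// from one view, with no collisions between fields.
function (doc, meta) {
    if (meta.type == "json") {
        emit(["name", doc.Name], null);
        emit(["age", doc.Age], null);
        emit(["level", doc.Level], null);
    }
}
Querying the view with key=["age", 31] then filters on Age alone, and key=["level", 2] on Level alone, all from a single map function.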

How to store the result of a query in the current table without changing the table schema?

I have a structure:
{
    "id": "123",
    "scans": [{
        "scanid": "123",
        "status": "sleep"
    }]
},
{
    "id": "123",
    "scans": [{
        "scanid": "123",
        "status": "sleep"
    }]
}
Query to remove duplicates:
SELECT *
FROM (
    SELECT
        *,
        ROW_NUMBER() OVER (PARTITION BY id) AS row_number
    FROM table1
)
WHERE row_number = 1
I specified the destination table as table1.
Here I have made scans a repeated record, with scanid and status both strings. But when I run a query (the deduplication query above) and overwrite the existing table, the table schema changes: it becomes scans_scanid (string) and scans_status (string), i.e. the scans record schema is flattened. Please suggest where I am going wrong.
It is known that NEST() is not compatible with unflattened results output and is mostly used for intermediate results in a subquery.
Try the workaround below.
Note: I use INTEGER for id and scanid. If they should be STRING, you need to
a. change the output schema section, and
b. remove the parseInt() calls in t = {scanid: parseInt(x[0]), status: x[1]}
SELECT id, scans.scanid, scans.status
FROM JS(
    ( // input table
        SELECT id, NEST(CONCAT(STRING(scanid), ',', STRING(status))) AS scans
        FROM (
            SELECT id, scans.scanid, scans.status
            FROM (
                SELECT id, scans.scanid, scans.status,
                    ROW_NUMBER() OVER (PARTITION BY id) AS dup
                FROM table1
            ) WHERE dup = 1
        ) GROUP BY id
    ),
    id, scans, // input columns
    "[{'name': 'id', 'type': 'INTEGER'},  // output schema
      {'name': 'scans', 'type': 'RECORD',
       'mode': 'REPEATED',
       'fields': [
           {'name': 'scanid', 'type': 'INTEGER'},
           {'name': 'status', 'type': 'STRING'}
       ]
      }
    ]",
    "function(row, emit) {  // function
        var c = [];
        for (var i = 0; i < row.scans.length; i++) {
            var x = row.scans[i].toString().split(',');
            var t = {scanid: parseInt(x[0]), status: x[1]};
            c.push(t);
        }
        emit({id: row.id, scans: c});
    }"
)
Here I use BigQuery user-defined functions. They are extremely powerful, yet they still have some limits and limitations to be aware of. Also keep in mind that they are strong candidates for being qualified as expensive high-compute queries:
Complex queries can consume extraordinarily large computing resources relative to the number of bytes processed. Typically, such queries contain a very large number of JOIN or CROSS JOIN clauses or complex user-defined functions.
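The trickiest part of this workaround is the pack/unpack round trip inside the UDF, so here is a minimal sketch (not part of the original answer) for sanity-checking that logic as plain JavaScript with a stubbed emit() before pasting it into BigQuery:
// The same row-transforming logic as in the query above, runnable in any
// JavaScript engine. The sample row mimics what the NEST/CONCAT subquery
// produces: one packed "scanid,status" string per scan.
function udf(row, emit) {
    var c = [];
    for (var i = 0; i < row.scans.length; i++) {
        var x = row.scans[i].toString().split(',');
        c.push({ scanid: parseInt(x[0], 10), status: x[1] });
    }
    emit({ id: row.id, scans: c });
}

udf({ id: 123, scans: ['123,sleep', '456,awake'] }, function (out) {
    console.log(JSON.stringify(out));
    // {"id":123,"scans":[{"scanid":123,"status":"sleep"},{"scanid":456,"status":"awake"}]}
});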
1) If you run the query in the web UI, the result is automatically flattened; that's why you see the schema change.
You need to run your query and write the output to a destination table; the web UI has options for this as well.
2) If you don't run your query in the web UI but still see the schema change, you should write explicit selects so the schema is retained for you, e.g.:
select 'foo' as scans.scanid
This creates a record-like output for you, but it won't be a repeated record; for that, please read further.
3) For some use cases you may need to use the NEST(expr) function, which:
Aggregates all values in the current aggregation scope into a repeated field. For example, the query "SELECT x, NEST(y) FROM ... GROUP BY x" returns one output record for each distinct x value, and contains a repeated field for all y values paired with x in the query input. The NEST function requires a GROUP BY clause.
BigQuery automatically flattens query results, so if you use the NEST function on the top level query, the results won't contain repeated fields. Use the NEST function when using a subselect that produces intermediate results for immediate use by the same query.