Vtiger: change query limit

The vtiger wiki says:
Query always limits its output to 100 records, client application can use limit operator to get different records.
This query does not work:
doQuery("select * from Leads limit='200';")
How to specify the operator in a query?

The "limit" clause only works if the number given is lower than 100. You can't get more records than 100 using "limit" with 1 request.
To get more than 100 records from vTiger services you need to make various request using the "offset" in the "limit" clause.
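For example (a sketch; as I understand it, vtiger's query language accepts an optional offset before the row count in the limit clause):
select * from Leads limit 0, 100;
select * from Leads limit 100, 100;
select * from Leads limit 200, 100;
Each doQuery() call then returns the next page of at most 100 records.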

If you read the wiki documentation carefully, you'll see that you need to use:
select *
from Leads
limit 200;
Drop the unnecessary single quotes ('200') - limit expects a numeric value, so there's no point in turning it into a string - and drop the equals sign too; it isn't shown anywhere in the docs.


Importrange + Query + Matches + Regexp

I am trying to filter data from a different sheet for a specific account number. However, the formula below doesn't return any results:
=query(importrange(Setup!B1,"Sheet1!A2:F"),"Select * where Col4 matches '\d\d-\d\d\d\d-[14-7]\d\d\d'",1)
This is supposed to return all accounts where the 1st digit in the 3rd group of numbers is 1, 4, 5, 6 or 7. The account numbers all have the same width, following the format xx-xxxx-xxxx.
Try using this as your match criteria:
\d{2}-\d{4}-[14-7]\d{3}
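Dropped back into your original formula, that would look like this (a sketch, assuming the header row you requested actually exists):
=query(importrange(Setup!B1,"Sheet1!A2:F"),"Select * where Col4 matches '\d{2}-\d{4}-[14-7]\d{3}'",1)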
Also, while I can't see your data, make sure you actually have a header in the first row of your IMPORTRANGE results (which you've requested with the 1 at the end of the QUERY). If you don't actually have headers, the 1 will leave you with one more result than you want; if that is the case, just remove the ,1 from the end of the QUERY.
If this doesn't produce the results you want, it may be due to mixed data types in your raw data that are being filtered out by the QUERY. In that case, you can try using FILTER and REGEXMATCH instead:
=ArrayFormula(FILTER(IMPORTRANGE(Setup!B1,"Sheet1!A2:F"),REGEXMATCH(IMPORTRANGE(Setup!B1,"Sheet1!D2:D"),"\d{2}-\d{4}-[14-7]\d{3}")))
It is always hard to write complex formulas sight unseen. If none of these solutions (which work in my local sheet) produce the results you expect, I encourage you to share a link to both of your sheets. The raw data sheet being called by IMPORTRANGE can be "View Only"; but you'll want to set the Share permission on the second sheet (the one with the IMPORTRANGE formula itself) to "Anyone with the link..." and "Editor," so that those here can access it to test.

AWS IoT Analytics Delta Window

I am having real problems getting the AWS IoT Analytics Delta Window (docs) to work.
I am trying to set it up so that a query is run every hour to get only the last 1 hour of data. According to the docs, the schedule feature can be used to run the query on a cron expression (in my case every hour), and the delta window should restrict my query to only the records that fall within the specified time window (in my case the last hour).
The SQL query I am running is simply SELECT * FROM dev_iot_analytics_datastore, and if I don't include any delta window I get the records as expected. Unfortunately, when I include a delta expression I get nothing (ever). I have left the data accumulating for about 10 days now, so there are a couple of million records in the database. Since I was unsure what the optimal format would be, I have included the following temporal fields in the entries:
datetime : 2019-05-15T01:29:26.509
(A string formatted using ISO Local Date Time)
timestamp_sec : 1557883766
(A unix epoch expressed in seconds)
timestamp_milli : 1557883766509
(A unix epoch expressed in milliseconds)
There is also a value automatically added by AWS called __dt, which uses the same format as my datetime except that it seems to be truncated to the day, i.e. all values entered within a given day have the same value (e.g. 2019-05-15 00:00:00.00).
I have tried a range of expressions (including the one suggested by AWS) from both standard SQL and Presto, as I'm not sure which is being used for this query. I know they use a subset of Presto for the analytics, so it makes sense that they would use it for the delta too, but the docs simply say '... any valid SQL expression'.
Expressions I have tried so far with no luck:
from_unixtime(timestamp_sec)
from_unixtime(timestamp_milli)
cast(from_unixtime(timestamp_sec) as date)
cast(from_unixtime(timestamp_milli) as date)
date_format(from_unixtime(timestamp_sec), '%Y-%m-%dT%h:%i:%s')
date_format(from_unixtime(timestamp_milli), '%Y-%m-%dT%h:%i:%s')
from_iso8601_timestamp(datetime)
What are the offset and time expression parameters that you are using?
Since delta windows are effectively filters inserted into your SQL, you can troubleshoot them by manually inserting the filter expression into your data set's query.
Namely, applying a delta window filter with -3 minute (negative) offset and 'from_unixtime(my_timestamp)' time expression to a 'SELECT my_field FROM my_datastore' query translates to an equivalent query:
SELECT my_field FROM
(SELECT * FROM "my_datastore" WHERE
(__dt between date_trunc('day', iota_latest_succeeded_schedule_time() - interval '1' day)
and date_trunc('day', iota_current_schedule_time() + interval '1' day)) AND
iota_latest_succeeded_schedule_time() - interval '3' minute < from_unixtime(my_timestamp) AND
from_unixtime(my_timestamp) <= iota_current_schedule_time() - interval '3' minute)
Try using a similar query (with no delta time filter) with correct values for the offset and time expression and see what you get. The (__dt between ...) clause is just an optimization for limiting the scanned partitions; you can remove it for the purposes of troubleshooting.
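For instance, substituting made-up schedule times and this dataset's timestamp_sec field into the template above gives a plain query you can run directly (a sketch; the timestamp literals are placeholders for your actual schedule times):
SELECT * FROM dev_iot_analytics_datastore WHERE
timestamp '2019-05-15 00:00:00' - interval '1' hour < from_unixtime(timestamp_sec) AND
from_unixtime(timestamp_sec) <= timestamp '2019-05-15 01:00:00' - interval '1' hour
If this returns rows but the delta-window version returns nothing, the offset or time expression in the dataset configuration is the likely culprit.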
Please try the following:
Set query to SELECT * FROM dev_iot_analytics_datastore
Data selection filter:
Data selection window: Delta time
Offset: -1 Hours
Timestamp expression: from_unixtime(timestamp_sec)
Wait for dataset content to run for a bit, say 15 minutes or more.
Check contents
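With those settings, each run should effectively apply a filter equivalent to the following (a sketch based on the delta-filter template in the previous answer; the iota_*() placeholders are filled in by the service at run time):
SELECT * FROM dev_iot_analytics_datastore WHERE
iota_latest_succeeded_schedule_time() - interval '1' hour < from_unixtime(timestamp_sec) AND
from_unixtime(timestamp_sec) <= iota_current_schedule_time() - interval '1' hour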
After several weeks of testing and trying all the suggestions in this post, along with many more, it appears that the extremely technical answer was to 'switch it off and back on'. I deleted the whole analytics stack, rebuilt everything with different names, and it now seems to be working!
It's important to note that even though I have flagged this as the correct answer, because it was the actual resolution, the answers provided by #Populus and #Roger would both have been correct had my deployment been functioning as expected.
I found by chance that changing SELECT * FROM datastore to SELECT id1, id2, ... FROM datastore solved the problem.

How to set page numbers in XSLT with multiple simple-page-masters? This is for dynamically inserting page numbers in a PDF

I have an XSLT file with multiple simple-page-masters, used to set different page heights and widths.
The issue I am facing is that the page numbers for each simple-page-master start at 1.
I used <fo:page-number/> to dynamically generate the page numbers.
I also want to get the total number of pages, since I have to write page numbers as
Page 1 of 20
I need the page numbers in one continuous sequence.
How can I solve this?
Hmm, without more samples it's hard to attempt an answer...
I'll gamble on one, but quite blindly:
You should have an <fo:page-sequence> somewhere for each part of the document. Add a pagenumber param to the template that generates it, and output the page number via that param. To get the total number of simple-page-masters, add another param or use the XPath count(//simple-page-master) (better to count once and pass the result around in params).
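For what it's worth, XSL-FO itself can handle both requirements without counting anything in XSLT: page numbering continues across <fo:page-sequence> elements when initial-page-number is "auto", and the grand total can be printed with <fo:page-number-citation> pointing at an id on the document's very last block. A minimal sketch (master names, page sizes and placeholder content are made up):
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <!-- Two masters with different dimensions, as in the question -->
    <fo:simple-page-master master-name="a4-portrait" page-height="29.7cm" page-width="21cm">
      <fo:region-body margin-bottom="1.5cm"/>
      <fo:region-after extent="1cm"/>
    </fo:simple-page-master>
    <fo:simple-page-master master-name="a4-landscape" page-height="21cm" page-width="29.7cm">
      <fo:region-body margin-bottom="1.5cm"/>
      <fo:region-after extent="1cm"/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <!-- First sequence starts the numbering at 1 -->
  <fo:page-sequence master-reference="a4-portrait" initial-page-number="1">
    <fo:static-content flow-name="xsl-region-after">
      <fo:block>Page <fo:page-number/> of <fo:page-number-citation ref-id="last-page"/></fo:block>
    </fo:static-content>
    <fo:flow flow-name="xsl-region-body">
      <fo:block>Portrait content...</fo:block>
    </fo:flow>
  </fo:page-sequence>
  <!-- "auto" continues the numbering instead of restarting at 1 -->
  <fo:page-sequence master-reference="a4-landscape" initial-page-number="auto">
    <fo:static-content flow-name="xsl-region-after">
      <fo:block>Page <fo:page-number/> of <fo:page-number-citation ref-id="last-page"/></fo:block>
    </fo:static-content>
    <fo:flow flow-name="xsl-region-body">
      <fo:block>Landscape content...</fo:block>
      <!-- Empty block at the very end of the document carries the id cited in the footers -->
      <fo:block id="last-page"/>
    </fo:flow>
  </fo:page-sequence>
</fo:root>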

Querying geonames for rows 1000-1200

I have been querying Geonames for parks per state. Mostly there are under 1000 parks per state, but I just queried Connecticut, and there are just under 1200 parks there.
I already got the 1-1000 with this query:
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=1000
But increasing maxRows to 1200 gives an error that I am querying for too many rows at once. Is there a way to query for rows 1000-1200?
I don't really see how to do it with their API.
Thanks!
You should be using the startRow parameter in the query to page results. The documentation notes that it takes an integer value (0-based indexing) and should be
Used for paging results. If you want to get results 30 to 40, use startRow=30 and maxRows=10. Default is 0.
So to get the next 1000 data points (1000-1999), you should change your query to
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=1000&startRow=1000
I'd suggest reducing the maxRows to something manageable as well - something that will put less of a load on their servers and make for quicker responses to your queries.
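For example, to pull Connecticut's roughly 1200 parks in batches of 400, keep your other parameters as-is and step startRow by the batch size (a sketch; three requests cover rows 0-1199):
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=400&startRow=0
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=400&startRow=400
http://api.geonames.org/search?featureCode=PRK&username=demo&country=US&style=full&adminCode1=CT&maxRows=400&startRow=800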

Coldfusion 8 - Problems with indexing large data using verity

I am currently running ColdFusion 8 with Verity running on a K2 server. I am using a query to index several different columns from my table using cfindex. One of the columns is a large varchar type.
It seems that when the data is indexed, only the first 30KB is stored, so no results come back if I search for anything beyond that point. I tried moving several different phrases and words further up in the data, within the first 30KB, and those results then appear.
I then carried out more Verity tests using the browse command at the command prompt to see what's actually in the collection.
i.e. Coldfusion8\verity\collections\\parts browse 0000001.ddd
I found out that the body being indexed (CF_BODY) never exceeds the size of 32000.
Can anyone tell me if there is a fixed index size per document for verity?
Many thanks,
Richard
Punch line
Version 6 has operator limits:
up to 32,764 children in one "topic" for the ANY operator
up to 64 children for the NEAR operator
Exceeding these values doesn't necessarily produce an error message. Are you certain your searches don't exceed them?
Source
The Verity documentation, Appendix B: Query limits, says there are two kinds of limitations: search-time limits and operator limits. The quote below is the whole section on the latter, straight from the book.
Verity Query Language and Topic Guide, Version 6.0:
Note the following limits on the use of operators:
There can be a maximum of 32,764 children for the ANY operator. If a topic exceeds
this limit, the search engine does not always return an error message.
The NEAR operator can evaluate only 64 children. If a topic exceeds this limit, the
search engine does not return an error message.
For example, assume you have created a large topic that uses the ACCRUE operator with
8365 children. This topic exceeds the 1024 limit for any ACCRUE-class topic and the
16000/3 limit for the total number of nodes.
In this case, you cannot substitute ANY for ACCRUE, because that would cause the topic
to exceed the 8,000 limit for the maximum number of children for the ANY operator.
Instead, you can build a deeper tree structure by grouping topics and creating some
named subnodes.
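To make the operator limits concrete, here is a sketch in VQL syntax (the terms are made up):
<NEAR>(worda, wordb, wordc)
A <NEAR> like this evaluates normally; give it more than 64 comma-separated children and, per the quote above, the engine exceeds its limit without returning an error message, so the failure is silent.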