Slow distance query in GeoDjango with PostGIS - django

I am using GeoDjango with Postgres 10 and PostGIS. I have two models as follows:
class Postcode(models.Model):
    name = models.CharField(max_length=8, unique=True)
    location = models.PointField(geography=True)

class Transaction(models.Model):
    transaction_id = models.CharField(max_length=60)
    price = models.IntegerField()
    date_of_transfer = models.DateField()
    postcode = models.ForeignKey(Postcode, on_delete=models.CASCADE)
    property_type = models.CharField(max_length=1, blank=True)
    street = models.CharField(blank=True, max_length=200)

    class Meta:
        indexes = [models.Index(fields=['-date_of_transfer']),
                   models.Index(fields=['price'])]
Given a particular postcode, I would like to find the nearest transactions within a specified distance. To do this, I am using the following code:
transactions = Transaction.objects.filter(price__gte=min_price) \
    .filter(postcode__location__distance_lte=(pc.location, D(mi=distance))) \
    .annotate(distance=Distance('postcode__location', pc.location)) \
    .order_by('distance')[0:25]
The query runs slowly, taking about 20-60 seconds (depending on the filter criteria) on a Windows PC (i5 2500K, 16 GB RAM). If I order by date_of_transfer instead, it runs in under 1 second for larger distances (over 1 mile) but is still slow for small distances (e.g. 45 seconds for a distance of 0.1 miles).
So far I have tried:
* changing the location field from Geometry to Geography
* using dwithin instead of distance_lte (see the sketch below)
Neither of these had more than a marginal impact on the speed of the query.
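For reference, the dwithin variant was roughly as follows (a sketch rather than the exact code; it assumes that dwithin on a geography column is given a plain distance in metres, hence D(mi=distance).m):

from django.contrib.gis.db.models.functions import Distance
from django.contrib.gis.measure import D

# pc, min_price and distance are the same variables as in the query above
transactions = (Transaction.objects
                .filter(price__gte=min_price)
                .filter(postcode__location__dwithin=(pc.location, D(mi=distance).m))
                .annotate(distance=Distance('postcode__location', pc.location))
                .order_by('distance')[:25])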
The SQL generated by GeoDjango for the current version is:
SELECT "postcodes_transaction"."id",
"postcodes_transaction"."transaction_id",
"postcodes_transaction"."price",
"postcodes_transaction"."date_of_transfer",
"postcodes_transaction"."postcode_id",
"postcodes_transaction"."street",
ST_Distance("postcodes_postcode"."location",
ST_GeogFromWKB('\x0101000020e6100000005471e316f3bfbf4ad05fe811c14940'::bytea)) AS "distance"
FROM "postcodes_transaction" INNER JOIN "postcodes_postcode"
ON ("postcodes_transaction"."postcode_id" = "postcodes_postcode"."id")
WHERE ("postcodes_transaction"."price" >= 50000
AND ST_Distance("postcodes_postcode"."location", ST_GeomFromEWKB('\x0101000020e6100000005471e316f3bfbf4ad05fe811c14940'::bytea)) <= 1609.344
AND "postcodes_transaction"."date_of_transfer" >= '2000-01-01'::date
AND "postcodes_transaction"."date_of_transfer" <= '2017-10-01'::date)
ORDER BY "distance" ASC LIMIT 25
On the postcodes table, there is an index on the location field as follows:
CREATE INDEX postcodes_postcode_location_id
ON public.postcodes_postcode
USING gist
(location);
The transaction table has 22 million rows and the postcode table has 2.5 million rows. Any suggestions on what approaches I can take to improve the performance of this query?
Here is the query plan for reference:
"Limit (cost=2394838.01..2394840.93 rows=25 width=76) (actual time=19028.400..19028.409 rows=25 loops=1)"
" Output: postcodes_transaction.id, postcodes_transaction.transaction_id, postcodes_transaction.price, postcodes_transaction.date_of_transfer, postcodes_transaction.postcode_id, postcodes_transaction.street, (_st_distance(postcodes_postcode.location, '0101 (...)"
" -> Gather Merge (cost=2394838.01..2893397.65 rows=4273070 width=76) (actual time=19028.399..19028.407 rows=25 loops=1)"
" Output: postcodes_transaction.id, postcodes_transaction.transaction_id, postcodes_transaction.price, postcodes_transaction.date_of_transfer, postcodes_transaction.postcode_id, postcodes_transaction.street, (_st_distance(postcodes_postcode.location, (...)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Sort (cost=2393837.99..2399179.33 rows=2136535 width=76) (actual time=18849.396..18849.449 rows=387 loops=3)"
" Output: postcodes_transaction.id, postcodes_transaction.transaction_id, postcodes_transaction.price, postcodes_transaction.date_of_transfer, postcodes_transaction.postcode_id, postcodes_transaction.street, (_st_distance(postcodes_postcode.loc (...)"
" Sort Key: (_st_distance(postcodes_postcode.location, '0101000020e6100000005471e316f3bfbf4ad05fe811c14940'::geography, '0'::double precision, true))"
" Sort Method: quicksort Memory: 1013kB"
" Worker 0: actual time=18615.809..18615.948 rows=577 loops=1"
" Worker 1: actual time=18904.700..18904.721 rows=576 loops=1"
" -> Hash Join (cost=699247.34..2074281.07 rows=2136535 width=76) (actual time=10705.617..18841.448 rows=5573 loops=3)"
" Output: postcodes_transaction.id, postcodes_transaction.transaction_id, postcodes_transaction.price, postcodes_transaction.date_of_transfer, postcodes_transaction.postcode_id, postcodes_transaction.street, _st_distance(postcodes_postcod (...)"
" Inner Unique: true"
" Hash Cond: (postcodes_transaction.postcode_id = postcodes_postcode.id)"
" Worker 0: actual time=10742.668..18608.763 rows=5365 loops=1"
" Worker 1: actual time=10749.748..18897.838 rows=5522 loops=1"
" -> Parallel Seq Scan on public.postcodes_transaction (cost=0.00..603215.80 rows=6409601 width=68) (actual time=0.052..4214.812 rows=5491618 loops=3)"
" Output: postcodes_transaction.id, postcodes_transaction.transaction_id, postcodes_transaction.price, postcodes_transaction.date_of_transfer, postcodes_transaction.postcode_id, postcodes_transaction.street"
" Filter: ((postcodes_transaction.price >= 50000) AND (postcodes_transaction.date_of_transfer >= '2000-01-01'::date) AND (postcodes_transaction.date_of_transfer <= '2017-10-01'::date))"
" Rows Removed by Filter: 2025049"
" Worker 0: actual time=0.016..4226.643 rows=5375779 loops=1"
" Worker 1: actual time=0.016..4188.138 rows=5439515 loops=1"
" -> Hash (cost=682252.00..682252.00 rows=836667 width=36) (actual time=10654.921..10654.921 rows=1856 loops=3)"
" Output: postcodes_postcode.location, postcodes_postcode.id"
" Buckets: 131072 Batches: 16 Memory Usage: 1032kB"
" Worker 0: actual time=10692.068..10692.068 rows=1856 loops=1"
" Worker 1: actual time=10674.101..10674.101 rows=1856 loops=1"
" -> Seq Scan on public.postcodes_postcode (cost=0.00..682252.00 rows=836667 width=36) (actual time=5058.685..10651.176 rows=1856 loops=3)"
" Output: postcodes_postcode.location, postcodes_postcode.id"
" Filter: (_st_distance(postcodes_postcode.location, '0101000020e6100000005471e316f3bfbf4ad05fe811c14940'::geography, '0'::double precision, true) <= '1609.344'::double precision)"
" Rows Removed by Filter: 2508144"
" Worker 0: actual time=5041.442..10688.265 rows=1856 loops=1"
" Worker 1: actual time=5072.242..10670.215 rows=1856 loops=1"
"Planning time: 0.538 ms"
"Execution time: 19065.962 ms"

Related

VB6 - Adding and searching for an item in sorted list - better performance

I need to maintain an old system written in VB6.
Currently in this system there is a sorted list of integers with approximately 1 million items. To locate an item, I use binary search to obtain the shortest response time.
I need to add new items to this list at runtime, and these items are added to the end of the list; however, binary search requires that the list is sorted.
Currently the list is a user-defined type, and the ordering is by the Code field:
Private Type ProductDetails
    ProdID As String
    ProdName As String
    Code As Double
End Type
The main requirement is the response time to find an item, whatever position it is in.
Any idea or tip on this implementation will be very welcome.
Thanks,
Nadia
The fastest way is with a Collection. This is the output from the test sub:
creating 1.000.000 long product list
**done in 16,6219940185547 seconds**
searching list
fastest way (instant) if you know index (if you have ID)
ID0500000
500000 ID0500000 This is product 500000
**done in 0 seconds**
searching by name (find item 500.000 which is in the middle of the list)
500000 ID0500000 This is product 500000
**done in 0,425003051757813 seconds**
listing all items that have '12345' in their name
12345 ID0012345 This is product 12345
112345 ID0112345 This is product 112345
123450 ID0123450 This is product 123450
123451 ID0123451 This is product 123451
123452 ID0123452 This is product 123452
123453 ID0123453 This is product 123453
123454 ID0123454 This is product 123454
123455 ID0123455 This is product 123455
123456 ID0123456 This is product 123456
123457 ID0123457 This is product 123457
123458 ID0123458 This is product 123458
123459 ID0123459 This is product 123459
212345 ID0212345 This is product 212345
312345 ID0312345 This is product 312345
412345 ID0412345 This is product 412345
512345 ID0512345 This is product 512345
612345 ID0612345 This is product 612345
712345 ID0712345 This is product 712345
812345 ID0812345 This is product 812345
912345 ID0912345 This is product 912345
**done in 2,05299377441406 seconds**
So, if you load the list into memory at the start of the app, it should search fast enough.
'the following code goes in a module:
Option Explicit

Private Type ProductDetails
    ProdID As String
    ProdName As String
    Code As Double
End Type

Dim pdList As New Collection

Function genRndNr(nrPlaces) 'must be more than 10
    Dim prefix As String
    Dim suffix As String
    Dim pon As Integer
    prefix = Right("0000000000" + CStr(DateDiff("s", "2020-01-01", Now)), 10)
    suffix = Space(nrPlaces - 10)
    For pon = 1 To Len(suffix)
        Randomize
        Randomize Rnd * 1000000
        Mid(suffix, pon, 1) = CStr(Int(Rnd * 10))
    Next
    genRndNr = prefix + suffix
End Function

Sub test()
    Dim pd As ProductDetails
    Dim pdData As Variant
    Dim query As String
    Dim pon As Long
    Dim startT As Long

    Debug.Print "creating 1.000.000 long product list"
    startT = Timer
    For pon = 1 To 1000000
        pd.Code = pon
        pd.ProdID = "ID" + Right("0000000000" + CStr(pon), 7) 'id = from ID0000001 to ID1000000
        pd.ProdName = "This is product " + CStr(pon)          'product name
        pdData = Array(pd.Code, pd.ProdID, pd.ProdName)       'create array
        pdList.Add pdData, pd.ProdID                          'add array to collection (pd.ProdID = key)
    Next
    Debug.Print "done in " + CStr(Timer - startT) + " seconds"
    Debug.Print
    Debug.Print

    Debug.Print "searching list"
    startT = Timer
    Debug.Print "fastest way (instant) if you know index (if you have ID)"
    query = "ID0500000"
    pdData = pdList(query)
    pd.Code = pdData(0)
    pd.ProdID = pdData(1)
    pd.ProdName = pdData(2)
    Debug.Print query
    Debug.Print pd.Code, pd.ProdID, pd.ProdName
    Debug.Print "done in " + CStr(Timer - startT) + " seconds"
    Debug.Print
    Debug.Print

    Debug.Print "searching by name (find item 500.000 which is in the middle of the list)"
    startT = Timer
    query = "This is product 500000"
    For Each pdData In pdList
        If query = pdData(2) Then
            pd.Code = pdData(0)
            pd.ProdID = pdData(1)
            pd.ProdName = pdData(2)
            Exit For
        End If
    Next
    Debug.Print pd.Code, pd.ProdID, pd.ProdName
    Debug.Print "done in " + CStr(Timer - startT) + " seconds"
    Debug.Print
    Debug.Print

    Debug.Print "listing all items that have '12345' in their name"
    startT = Timer
    query = "*12345*"
    For Each pdData In pdList
        If pdData(2) Like query Then
            pd.Code = pdData(0)
            pd.ProdID = pdData(1)
            pd.ProdName = pdData(2)
            Debug.Print pd.Code, pd.ProdID, pd.ProdName
        End If
    Next
    Debug.Print "done in " + CStr(Timer - startT) + " seconds"
    Debug.Print
    Debug.Print

    'release the collection
    Set pdList = Nothing
End Sub

Siddhi Send Performance Issues - Embedded

We are evaluating Siddhi as an embedded CEP processor for our application. While scale testing we found that as you increase the number of rules, the time it takes to insert an event increases significantly for each unique ID. For example:
Create 10 rules (using windows and a partition by id).
Load 1000 unique entries at a time and track the timing. Note that insert times grow from milliseconds to many seconds as you approach 100K unique entries. The more rules you have, the more this time increases.
Now load the "next" time for each record - insertion time remains constant regardless of ID.
Here is a code file which reproduces this:
public class ScaleSiddhiTest {
private SiddhiManager siddhiManager = new SiddhiManager();
@Test
public void testWindow() throws InterruptedException {
String plan = "@Plan:name('MetricPlan') \n" +
"define stream metricStream (id string, timestamp long, metric1 double,metric2 double); \n" +
"partition with (id of metricStream) begin \n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule0' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule1' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule2' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule3' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule4' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule5' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule6' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule7' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule8' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule9' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule10' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule11' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule12' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule13' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule14' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule15' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule16' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule17' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric1) as value, 'Metric1-rule18' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"\n" +
"from metricStream#window.externalTime(timestamp, 300000) \n" +
"select id, avg(metric2) as value, 'Metric2-rule19' as ruleName\n" +
"having value>-1.000000 \n" +
"insert into outputStream;\n" +
"end ;";
// Generating runtime
ExecutionPlanRuntime executionPlanRuntime = siddhiManager.createExecutionPlanRuntime(plan);
AtomicInteger counter = new AtomicInteger();
// Adding callback to retrieve output events from query
executionPlanRuntime.addCallback("outputStream", new StreamCallback() {
@Override
public void receive(Event[] events) {
counter.addAndGet(events.length);
}
});
// Starting event processing
executionPlanRuntime.start();
// Retrieving InputHandler to push events into Siddhi
InputHandler inputHandler = executionPlanRuntime.getInputHandler("metricStream");
int numOfUniqueItems = 10000;
IntStream.range(0, 2).forEach(curMinute->{
long iterationStartTime = System.currentTimeMillis();
AtomicLong lastStart = new AtomicLong(System.currentTimeMillis());
IntStream.range(0, numOfUniqueItems).forEach(id->{
try {
inputHandler.send(TimeUnit.MINUTES.toMillis(curMinute), new Object[]{id, TimeUnit.MINUTES.toMillis(curMinute), 10.0, 20.0});
if( id > 0 && id % 1000 == 0 ){
long ls = lastStart.get();
long curTime = System.currentTimeMillis();
lastStart.set(curTime);
System.out.println("It took " + (curTime - ls) + " ms to load the last 1000 entities. Num Alarms So Far: " + counter.get());
}
} catch (Exception e ){
throw new RuntimeException(e);
}
});
System.out.println("It took " + (System.currentTimeMillis() - iterationStartTime) + "ms to load the last " + numOfUniqueItems);
});
// Shutting down the runtime
executionPlanRuntime.shutdown();
siddhiManager.shutdown();
}
}
Here are my questions:
Are we doing anything incorrect here that may be leading to the initial load performance issues?
Any recommendations to work around this problem?
UPDATE:
Per a suggested answer below, I updated the test to use group by instead of partitions. The same growth is shown for the initial load of each object, except it is even worse:
Specifically, I changed the rules to:
@Plan:name('MetricPlan')
define stream metricStream (id string, timestamp long, metric1 double,metric2 double);
from metricStream#window.externalTime(timestamp, 300000)
select id, avg(metric1) as value, 'Metric1-rule0' as ruleName
group by id
having value>-1.000000
insert into outputStream;
...
Here are the result outputs for the Group By vs Partition By. Both show the growth for the initial load.
Group By Load Results
Load 10K Items - Group By
It took 3098 ms to load the last 1000 entities. Num Alarms So Far: 20020
It took 2507 ms to load the last 1000 entities. Num Alarms So Far: 40020
It took 5993 ms to load the last 1000 entities. Num Alarms So Far: 60020
It took 4878 ms to load the last 1000 entities. Num Alarms So Far: 80020
It took 6079 ms to load the last 1000 entities. Num Alarms So Far: 100020
It took 8466 ms to load the last 1000 entities. Num Alarms So Far: 120020
It took 11840 ms to load the last 1000 entities. Num Alarms So Far: 140020
It took 12634 ms to load the last 1000 entities. Num Alarms So Far: 160020
It took 14779 ms to load the last 1000 entities. Num Alarms So Far: 180020
It took 87053ms to load the last 10000
Load Same 10K Items - Group By
It took 31 ms to load the last 1000 entities. Num Alarms So Far: 220020
It took 22 ms to load the last 1000 entities. Num Alarms So Far: 240020
It took 19 ms to load the last 1000 entities. Num Alarms So Far: 260020
It took 19 ms to load the last 1000 entities. Num Alarms So Far: 280020
It took 17 ms to load the last 1000 entities. Num Alarms So Far: 300020
It took 20 ms to load the last 1000 entities. Num Alarms So Far: 320020
It took 17 ms to load the last 1000 entities. Num Alarms So Far: 340020
It took 18 ms to load the last 1000 entities. Num Alarms So Far: 360020
It took 18 ms to load the last 1000 entities. Num Alarms So Far: 380020
It took 202ms to load the last 10000
Partition By Load Results
Load 10K Items - Partition By
It took 1148 ms to load the last 1000 entities. Num Alarms So Far: 20020
It took 1870 ms to load the last 1000 entities. Num Alarms So Far: 40020
It took 1393 ms to load the last 1000 entities. Num Alarms So Far: 60020
It took 1745 ms to load the last 1000 entities. Num Alarms So Far: 80020
It took 2040 ms to load the last 1000 entities. Num Alarms So Far: 100020
It took 2108 ms to load the last 1000 entities. Num Alarms So Far: 120020
It took 3068 ms to load the last 1000 entities. Num Alarms So Far: 140020
It took 2798 ms to load the last 1000 entities. Num Alarms So Far: 160020
It took 3532 ms to load the last 1000 entities. Num Alarms So Far: 180020
It took 23363ms to load the last 10000
Load Same 10K Items - Partition By
It took 39 ms to load the last 1000 entities. Num Alarms So Far: 220020
It took 21 ms to load the last 1000 entities. Num Alarms So Far: 240020
It took 30 ms to load the last 1000 entities. Num Alarms So Far: 260020
It took 22 ms to load the last 1000 entities. Num Alarms So Far: 280020
It took 35 ms to load the last 1000 entities. Num Alarms So Far: 300020
It took 26 ms to load the last 1000 entities. Num Alarms So Far: 320020
It took 25 ms to load the last 1000 entities. Num Alarms So Far: 340020
It took 34 ms to load the last 1000 entities. Num Alarms So Far: 360020
It took 48 ms to load the last 1000 entities. Num Alarms So Far: 380020
It took 343ms to load the last 10000
This type of growth almost seems to imply that when an ID that has not been seen before is loaded, it is compared against every other ID instead of being looked up via a hash or similar, hence the linear growth we see as the number of unique IDs increases.
Yes, the behavior is as expected.
When you partition by ID, a new instance of the partition is created for each new ID you push, so if your partition is big it may take more time to create it. The second time around, the partition already exists for that unique ID, so the event is processed faster.
In your case, I don't think using a partition is an ideal solution. Partitions are only useful if you have inner streams or when you use non-time-based windows.
E.g.
partition with (id of metricStream)
begin
from metricStream ...
insert into #TempStream ;
from #TempStream ....
select ...
insert into outputStream;
end;
If you just want to group time-based aggregations, then use the group by keyword:
from metricStream#window.externalTime(timestamp, 300000)
select id, avg(metric1) as value, 'Metric1-rule18' as ruleName
group by id
having value>-1.000000
insert into outputStream;

GqlQuery comparing datetime objects

Here is the relevant code copied from my application on GAE.
today = datetime.datetime.strptime(date_variable, "%d/%m/%Y")
yesterday = ref_today - datetime.timedelta(days=1)
tomorrow = ref_today + datetime.timedelta(days=1)
logging.info('%s : %s : %s', yesterday, today, tomorrow)
#2016-02-19 00:00:00 : 2016-02-20 00:00:00 : 2016-02-21 00:00:00
records = db.GqlQuery("SELECT * FROM ProgrammeQueue"
                      " WHERE scheduledFrom < :1 AND scheduledFrom > :2 "
                      " ORDER BY scheduledFrom DESC",
                      tomorrow, yesterday)
Problem statement:
Output: all records of 19/02/2016 and 20/02/2016
Expected: records = all records of 20/02/2016
What am I doing wrong?
Your query states:
WHERE scheduledFrom < :tomorrow AND scheduledFrom > :yesterday
where tomorrow and yesterday are datetimes. The time is set to 00:00:00, so the results will include records from 19/02/2016 whose time is greater than 00:00:00.
Maybe your query should be rewritten to use date objects rather than datetime objects (depending on your model definition), or maybe you need to rewrite it to something like this:
records = db.GqlQuery("SELECT * FROM ProgrammeQueue"
                      " WHERE scheduledFrom < :1 AND scheduledFrom >= :2 "
                      " ORDER BY scheduledFrom DESC",
                      tomorrow, today)

QSqlQuery ignoring sort order for SQLite DB

The results returned by my QSqlQuery are always in the same order, regardless of the ORDER BY state:
void Sy_loggingModel::reload()
{
    auto query = d_->buildQuery();
    query.setForwardOnly( true );
    if ( !query.exec() ) {
        throw Sy_exception( QObject::tr( "Failed to query logging data: " ) +
                            query.lastError().text() );
    }
    beginResetModel();
    qDebug() << query.lastQuery()
             << d_->filter_        // First ? param
             << d_->sortedColumn_; // Second ? param
    d_->entries_.clear();
    while ( query.next() ) {
        auto timestamp = query.value( 1 ).toLongLong();
        auto level = query.value( 2 ).toInt();
        d_->entries_ << Sy_loggingModel_d::Entry{
            query.value( 0 ).toLongLong(),
            QDateTime::fromMSecsSinceEpoch( timestamp ).toString(),
            static_cast< Sy_loggerInterface::DebugLevel >( level ),
            query.value( 3 ).toString() };
        qDebug() << "\t" << query.value( 0 ).toLongLong()
                 << timestamp
                 << level
                 << query.value( 3 ).toString();
    }
    endResetModel();
}
Produces this output when alternating between sort orders:
"SELECT rowid, timestamp, debugLevel, message FROM Sy_logger WHERE rowid >= ? AND debugLevel IN ( 0, 1, 2 ) ORDER BY ? DESC;" 0 1
1 1415399097350 0 "Opened database ./logs/Syren2.log"
2 1415399097382 1 "Listening on port 23000"
3 1415399418377 2 "New log rotation settings received, Metric: 0, Interval: 720"
4 1416178611851 2 "Opened database ./logs/Syren2.log"
5 1416178611852 2 "Listening on port 23000"
6 1416178612776 2 "New log rotation settings received, Metric: 0, Interval: 720"
"SELECT rowid, timestamp, debugLevel, message FROM Sy_logger WHERE rowid >= ? AND debugLevel IN ( 0, 1, 2 ) ORDER BY ? ASC;" 0 1
1 1415399097350 0 "Opened database ./logs/Syren2.log"
2 1415399097382 1 "Listening on port 23000"
3 1415399418377 2 "New log rotation settings received, Metric: 0, Interval: 720"
4 1416178611851 2 "Opened database ./logs/Syren2.log"
5 1416178611852 2 "Listening on port 23000"
6 1416178612776 2 "New log rotation settings received, Metric: 0, Interval: 720"
The SQL statement returns the expected result set when used from the command line. Any suggestions? I'm using Qt v5.3.2.
The documentation says:
If the ORDER BY expression is a constant integer K then the expression is considered an alias for the K-th column of the result set.
However, parameters are not considered constants, so the value you use for this parameter is treated as an expression that happens to be the same for all rows.
If you want to sort by different columns, you have to construct the SQL statement dynamically.
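For instance, a minimal sketch of that approach (the function name, parameters and column whitelist below are illustrative, not taken from the question's code): the ? placeholder is kept for the rowid value, while the validated column name and sort direction are spliced into the statement text, since placeholders can only be bound as values, never as identifiers.

#include <QSqlQuery>
#include <QStringList>

QSqlQuery buildLoggerQuery( qlonglong minRowId, int sortColumn, bool ascending )
{
    // Whitelist of sortable columns; anything unknown falls back to rowid,
    // so user input can never inject arbitrary SQL.
    static const QStringList sortableColumns =
        { "rowid", "timestamp", "debugLevel", "message" };

    const QString column = sortableColumns.value( sortColumn, "rowid" );
    const QString order  = ascending ? "ASC" : "DESC";

    QSqlQuery query;
    query.prepare( "SELECT rowid, timestamp, debugLevel, message FROM Sy_logger "
                   "WHERE rowid >= ? AND debugLevel IN ( 0, 1, 2 ) "
                   "ORDER BY " + column + " " + order );
    query.addBindValue( minRowId ); // the rowid filter stays a bound parameter
    return query;
}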

How do I take already calculated totals that are in a loop and add them together?

I created this program in Python 2.7.3
I did this in my Computer Science class. He assigned it in two parts. For the first part we had to create a program to calculate a monthly cell phone bill for five customers. The user inputs the number of texts, minutes, and data used. Additionally, there are overage fees: $10 for every GB of data over the limit, $0.40 per minute over the limit, and $0.20 per text sent over the limit. The plan allows 500 text messages, 750 minutes, and 2 GB of data.
For part 2 of the assignment, I have to calculate the total tax collected, the total charges (each customer's bill added together), the total government fees collected, the total number of customers who had overages, etc.
Right now all I want help with is adding the customer bills together. As I said earlier, when you run the program it prints the total bill for each of the 5 customers. I don't know how to assign those separate totals to a variable, add them together, and then eventually print them as one combined total.
TotalBill = 0
monthly_charge = 69.99
data_plan = 30
minute = 0
tax = 1.08
govfees = 9.12
Finaltext = 0
Finalminute = 0
Finaldata = 0
Finaltax = 0
TotalCust_ovrtext = 0
TotalCust_ovrminute = 0
TotalCust_ovrdata = 0
TotalCharges = 0
for i in range(1, 6):
    print "Calculate your cell phone bill for this month"
    text = input("Enter texts sent/received ")
    minute = input("Enter minutes used ")
    data = input("Enter Data used ")

    # overage charges: $10 per GB over 2 GB, $0.40 per minute over 750, $0.20 per text over 500
    if data > 2:
        data = (data - 2) * 10
        TotalCust_ovrdata = TotalCust_ovrdata + 1
    elif data <= 2:
        data = 0
    if minute > 750:
        minute = (minute - 750) * .4
        TotalCust_ovrminute = TotalCust_ovrminute + 1
    elif minute <= 750:
        minute = 0
    if text > 500:
        text = (text - 500) * .2
        TotalCust_ovrtext = TotalCust_ovrtext + 1
    elif text <= 500:
        text = 0

    # this customer's bill (recalculated and overwritten on every pass through the loop)
    TotalBill = ((monthly_charge + data_plan + text + minute + data) * (tax)) + govfees
    print ("Your Total Bill is... " + str(round(TotalBill, 2)))

print "The total number of Customers who went over their minutes usage limit is... ", TotalCust_ovrminute
print "The total number of Customers who went over their texting limit is... ", TotalCust_ovrtext
print "The total number of Customers who went over their data limit is... ", TotalCust_ovrdata
Some of the variables created are not used in the program. Please overlook them.
As Preet suggested, create another variable like TotalBill, i.e.:
AccumulatedBill = 0
Then at the end of your loop put:
AccumulatedBill += TotalBill
This will add each TotalBill to AccumulatedBill. Then simply print out the result at the end:
print "Total for all customers is: %s" % (AccumulatedBill)
Note: variable names don't normally start with an uppercase letter. Use either camelCase or underscore_separated names.
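Putting it together, a minimal sketch of where the accumulation fits (same variable names as the question; the per-customer overage calculation is collapsed to zeros here just to keep the sketch self-contained and runnable):

AccumulatedBill = 0
monthly_charge = 69.99
data_plan = 30
tax = 1.08
govfees = 9.12
for i in range(1, 6):
    # text, minute and data stand for the already-computed overage charges
    # from the original program; zeros are placeholders for this sketch
    text = minute = data = 0
    TotalBill = ((monthly_charge + data_plan + text + minute + data) * tax) + govfees
    print("Your Total Bill is... " + str(round(TotalBill, 2)))
    AccumulatedBill += TotalBill  # add this customer's bill to the running total
print("Total for all customers is: %s" % round(AccumulatedBill, 2))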