AWS DynamoDB | How to query timestamps between two dates?

I have a table with users, where I wish to query all users created in, for example, April.
When the user is being created, a timestamp is automatically created for that user.
I made an index in my table, with timestamp as partition key and id as sort key.
The timestamp is in Unix milliseconds.
This is my code for this query:
GetUsersOnTimestamp(): Promise<any> {
  return new Promise( (resolve, reject) => {
    const _dynamoDB = new AWS.DynamoDB.DocumentClient();
    const startDate = 1554069600000;
    const endDate = 1556661600000;
    const params = {
      TableName: 'user-table',
      IndexName: 'timestamp-id-index',
      KeyConditionExpression: '#timestamp = :hkey BETWEEN :sdate AND :edate',
      ExpressionAttributeNames: {
        '#timestamp': 'timestamp'
      },
      ExpressionAttributeValues: {
        ':hkey': 'timestamp',
        ':sdate': startDate,
        ':edate': endDate,
      }
    };
    // execute the query; this call returns the error shown below
    _dynamoDB.query(params, (err, data) => {
      if (err) { reject(err); return; }
      resolve(data.Items);
    });
  });
}
I get the following error:
ExpressionAttributeValues contains invalid key: Syntax error; key: "hkey"

You can't conditionally query for your partition key. You have to specify a full partition key value without any condition. The BETWEEN comparison operator is only available for querying the sort key conditionally.
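Concretely, with the existing timestamp-id-index the only valid key condition is an exact match on the timestamp partition key, which is rarely useful for millisecond values. A minimal sketch of the valid form (values are illustrative):
const params = {
  TableName: 'user-table',
  IndexName: 'timestamp-id-index',
  KeyConditionExpression: '#timestamp = :ts', // equality only on the partition key
  ExpressionAttributeNames: { '#timestamp': 'timestamp' },
  ExpressionAttributeValues: { ':ts': 1554069600000 }
};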
From the DynamoDB documentation:
You must specify the partition key name and value as an equality condition.
You can optionally provide a second condition for the sort key (if present). The sort key condition must use one of the following comparison operators:
a = b — true if the attribute a is equal to the value b
a < b — true if a is less than b
a <= b — true if a is less than or equal to b
a > b — true if a is greater than b
a >= b — true if a is greater than or equal to b
a BETWEEN b AND c — true if a is greater than or equal to b, and less than or equal to c.
The following function is also supported:
begins_with (a, substr) — true if the value of attribute a begins with a particular substring.
Querying for a range of timestamps is not straightforward to achieve with DynamoDB. One solution would be to add an additional field to your items which contains just the year and month of your timestamp. You could then create a global secondary index (GSI) with the year-month field as partition key and the full timestamp as sort key. With this approach you could query all users created in a given month.
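Under that design the BETWEEN condition moves to the sort key of the new index. A minimal sketch, assuming a hypothetical GSI named 'yearMonth-timestamp-index' whose partition key is a yearMonth attribute (e.g. '2019-04') and whose sort key is the existing timestamp in Unix milliseconds:
const docClient = new AWS.DynamoDB.DocumentClient();
const params = {
  TableName: 'user-table',
  IndexName: 'yearMonth-timestamp-index', // hypothetical GSI
  KeyConditionExpression: '#ym = :ym AND #ts BETWEEN :sdate AND :edate',
  ExpressionAttributeNames: {
    '#ym': 'yearMonth',
    '#ts': 'timestamp'
  },
  ExpressionAttributeValues: {
    ':ym': '2019-04',
    ':sdate': 1554069600000, // start of April 2019
    ':edate': 1556661600000  // end of April 2019
  }
};
docClient.query(params, (err, data) => {
  if (err) { console.error(err); return; }
  console.log(data.Items); // users created in April 2019
});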

Related

advance filter expression in dynamodb

I couldn't find a solution yet, which is why I'm asking...
I have a table with two fields, categoryID_A and categoryID_B, and some category IDs like 'df23a5', '234za2', 'k0f9e3'. I need to filter the rows of that table so that either categoryID_A or categoryID_B matches one of those IDs.
...
FilterExpression: 'createdAt >= :time_within AND createdAt <= :from_time AND (categoryID_A IN (:m1,:m2,:m3) OR categoryID_B IN (:m1,:m2,:m3))',
ExpressionAttributeValues: {
':from_time': '2022-10-10T02:44:12.481Z',
':time_within': '2022-11-11T02:44:12.481Z',
':m1': 'df23a5',
':m2': '234za2',
':m3': 'k0f9e3'
}
...
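For reference, IN and OR are both valid in a FilterExpression, so a snippet like the above would slot into a scan roughly as sketched below (the table name is hypothetical, and the two timestamp bounds in the snippet above look reversed, so they are named explicitly here):
const docClient = new AWS.DynamoDB.DocumentClient();
const params = {
  TableName: 'my-table', // hypothetical
  FilterExpression: 'createdAt >= :start AND createdAt <= :end AND (categoryID_A IN (:m1, :m2, :m3) OR categoryID_B IN (:m1, :m2, :m3))',
  ExpressionAttributeValues: {
    ':start': '2022-10-10T02:44:12.481Z',
    ':end': '2022-11-11T02:44:12.481Z',
    ':m1': 'df23a5',
    ':m2': '234za2',
    ':m3': 'k0f9e3'
  }
};
const data = await docClient.scan(params).promise(); // inside an async function; the filter is applied after items are read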

Convert a table into a function that can act like a Table.SelectRows condition

I have a table of Project:
that I would like to filter by the FIELD, OPERATOR, and VALUE columns contained in the Project Group table:
The Power Query M to apply this filter would be:
let
    Source = #"Project",
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Projectid", Int64.Type}}),
    #"Filtered Rows" = Table.SelectRows(#"Changed Type", each [Projectid] >= 100000 and [Projectid] <= 500000)
in
    #"Filtered Rows"
Results (need to remove the error row):
How do I convert the FIELD, OPERATOR, and VALUE columns into a function that can be used as a condition for the SelectRows function?
If you need to do comparisons, it might be best to first change the types of the columns (in both tables) that are being compared, preferably to type number.
The code below assumes that:
the OPERATOR column of the Project Group table can only contain > or <, and these values should be interpreted as >= and <= respectively.
the column in the Project table that needs to be compared can change, and its name will be in the FIELD column of the Project Group table. It's assumed that the name matches exactly; if this is not the case, you might need to standardise things (or at least perform a case-insensitive match) to ensure values can be mapped to column names correctly.
Based on the assumptions above, here's one approach:
let
    // Dummy table for example purposes
    project = Table.FromColumns({
        {0..10},
        {5..15}
    }, type table [projectId = number, name = number]),
    // Dummy table for example purposes
    projectGroup = Table.FromColumns({
        {"projectId", "projectId"},
        {">", "<"},
        {5, 7}
    }, type table [FIELD = text, OPERATOR = text, VALUE = number]),
    // Should take in a row from "Project" table and return a boolean
    // representing whether said row matches the criteria contained
    // within "Project Group" table.
    selectorFunc = (projectRow as record) as logical =>
        let
            shouldKeepProjectRow = Table.MatchesAllRows(projectGroup, (projectGroupRow as record) =>
                let
                    fieldNameToCheck = projectGroupRow[FIELD],
                    valueFromProjectRow = Record.Field(projectRow, fieldNameToCheck),
                    compared = if projectGroupRow[OPERATOR] = ">" then
                        valueFromProjectRow >= projectGroupRow[VALUE]
                    else
                        valueFromProjectRow <= projectGroupRow[VALUE]
                in compared
            )
        in shouldKeepProjectRow,
    selectedRows = Table.SelectRows(project, selectorFunc)
in
    selectedRows
The main function used is Table.MatchesAllRows (https://learn.microsoft.com/en-us/powerquery-m/table-matchesallrows).
Another approach could potentially be Expression.Evaluate: https://learn.microsoft.com/en-us/powerquery-m/expression-evaluate. However, I've not used it, so I'm not sure whether there are any gotchas or implications to be aware of.

DynamoDbException: Conditions can be of length 1 or 2 only

I am from a SQL background and very new to DynamoDB. I have a single table like this:
---------------------------------------------------------
| dbId | genId | colData | deviceId | updateOn | params |
---------------------------------------------------------
|      |       |         |          |          |        |
---------------------------------------------------------
Here dbId is the primary key and genId is the sort key. I have created two local secondary indexes on deviceId and updateOn. In SQL I can query the table like this:
String dbId = "151/2020-21";
int deviceId = 1001;
long updOn = 1608456211308;
String query = "select * from tableName where dbId = '"+dbId+"' and deviceId != "+deviceId+" and updateOn > " + updOn;
In DynamoDB my KeyConditionExpression is: dbId = :dbId and updateOn > :updateOn and deviceId != :deviceId. It gives me this error:
DynamoDbException: Invalid KeyConditionExpression: Syntax error; token: "!", near: "deviceId !="
I removed the '!' and changed it to: dbId = :dbId and updateOn > :updateOn and deviceId = :deviceId. It gives me this error:
DynamoDbException: Conditions can be of length 1 or 2 only
How can I perform my desired query in Dynamodb? How should I design my Dynamodb table (I mean, primary key, indexes etc) so that I get the same sql like result?
The reason you are facing the error 'Conditions can be of length 1 or 2 only' is that you are specifying three conditions within KeyConditionExpression. Specify only the partition key and the sort key.
Lastly, you don't need to create an LSI for this purpose. You can perform the same operation on the original table, or add a global secondary index; either way, the key condition can cover at most two attributes.
As mentioned, the attributes included in KeyConditionExpression must match the key schema of the table or index you are querying: the hash key with an equality condition and, optionally, the sort key. Querying on all three attributes in the key condition is not possible, as the AWS docs make clear.
When it comes to query() in DynamoDB, only the partition key and sort key can appear in KeyConditionExpression.
So if you want to perform a query() with more conditions, you should consider a FilterExpression. For example:
const params = {
  TableName: 'table-name',
  KeyConditionExpression: 'pk = :pk AND sk = :sk',
  FilterExpression: 'dbId = :dbId and updateOn > :updateOn and deviceId <> :deviceId', // DynamoDB expressions use <> for not-equal, not !=
  ExpressionAttributeValues: {
    ':pk': '12345',
    ':sk': 'lorem-ipsum',
    ':dbId': '0987654321',
    ':updateOn': '2022-06-25',
    ':deviceId': '123',
  },
}
const docClient = new AWS.DynamoDB.DocumentClient();
const result = await docClient.query(params).promise();
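Applied to the schema in the question, a rough sketch (assuming the local secondary index on updateOn is named dbId-updateOn-index) would keep dbId and updateOn in the key condition and push the deviceId inequality into the filter:
const docClient = new AWS.DynamoDB.DocumentClient();
const params = {
  TableName: 'tableName',
  IndexName: 'dbId-updateOn-index', // hypothetical name for the LSI on updateOn
  KeyConditionExpression: 'dbId = :dbId AND updateOn > :updateOn',
  FilterExpression: 'deviceId <> :deviceId', // <> is DynamoDB's not-equal operator
  ExpressionAttributeValues: {
    ':dbId': '151/2020-21',
    ':updateOn': 1608456211308,
    ':deviceId': 1001
  }
};
const result = await docClient.query(params).promise(); // inside an async function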

Access the previous record to compare the value in DAX POWER BI

I need to access the previous record of the DTH_REFER_PEDID column to make the IF comparison (DTH_REFER_PEDID-1 <> "A").
That is, while I'm reading index X, I need to compare it with index X-1.
Addition_Stats = VAR Atendido_OV = PR_HIST_MOVIM_PEDID[OVITEM_Hist]
VAR linha_anterior2 = CALCULATE(values(PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL]);filter(PR_HIST_MOVIM_PEDID;EARLIER(PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID])))
Return
if(PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Month]<PR_HIST_MOVIM_PEDID[DAT_MAIOR_PLANE].[Month];"Atraso mês ant";
if(PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL] = "A" && PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Day]<=PR_HIST_MOVIM_PEDID[DAT_MAIOR_PLANE].[Day];"Atendido no Prazo";
if((PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL]="P"||PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL]="L") && PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Day]<= PR_HIST_MOVIM_PEDID[DAT_MAIOR_PLANE].[Day];"Planejado no prazo";
if(PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL]<>"A" && PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Day]>PR_HIST_MOVIM_PEDID[DAT_MAIOR_PLANE].[Day];"Em atraso";
if(PR_HIST_MOVIM_PEDID[STA_ITEM_PEDCL] = "A"
&& linha_anterior2 <>"A"
&& PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Day]>PR_HIST_MOVIM_PEDID[DAT_MAIOR_PLANE].[Day];"Atend fora Prazo"
;IF((PR_HIST_MOVIM_PEDID[OVITEM_Hist]=Atendido_OV)&&(PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID]>FIRSTDATE(PR_HIST_MOVIM_PEDID[DTH_REFER_PEDID].[Date]));"A retido";"NA")
)
)
)
)
)
//)
The error displayed is: A circular dependency has been detected: PR_HIST_MOVIM_PEDID [Addition_Stats].
How do I compare DTH_REFER_PEDID-1 <> "A"?
An easy way to work with previous or next records is:
Make sure your data is in a table with a primary key (=ID)
Make a query with all the fields from your table and add one column with ID+1 (or ID-1).
Make another query that joins the table with the query above on ID = ID+1 (or ID-1). Include all the fields of the table and of the first query, and you end up with all the values in one record. This way you can work with the previous or next values.

Why does Relation.size sometimes return a Hash in Rails 4

I can run a query in two different ways to return a Relation.
When I interrogate the size of the Relation, one query gives a Fixnum as expected; the other gives a Hash whose keys are the values of the Relation's GROUP BY column and whose values are the number of occurrences of each.
In Rails 3 I assume it always returned a Fixnum, as I never had a problem, whereas with Rails 4 it sometimes returns a Hash, and a statement like Rel.size.zero? gives the error:
undefined method `zero?' for {}:Hash
Am I best just using the .blank? method to check for zero records to be sure of avoiding unexpected errors?
Here is a snippet of code with logging statements for the two queries, and the resulting log.
CODE:
assessment_responses1=AssessmentResponse.select("process").where("client_id=? and final = ?",self.id,false).group("process")
logger.info("-----------------------------------------------------------")
logger.info("assessment_responses1.class = #{assessment_responses1.class}")
logger.info("assessment_responses1.size.class = #{assessment_responses1.size.class}")
logger.info("assessment_responses1.size value = #{assessment_responses1.size}")
logger.info("............................................................")
assessment_responses2=AssessmentResponse.select("distinct process").where("client_id=? and final = ?",self.id,false)
logger.info("assessment_responses2.class = #{assessment_responses2.class}")
logger.info("assessment_responses2.size.class = #{assessment_responses2.size.class}")
logger.info("assessment_responses2.size values = #{assessment_responses2.size}")
logger.info("-----------------------------------------------------------")
LOG
-----------------------------------------------------------
assessment_responses1.class = ActiveRecord::Relation::ActiveRecord_Relation_AssessmentResponse
(0.5ms) SELECT COUNT(`assessment_responses`.`process`) AS count_process, process AS process FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0) GROUP BY process
assessment_responses1.size.class = Hash
CACHE (0.0ms) SELECT COUNT(`assessment_responses`.`process`) AS count_process, process AS process FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0) GROUP BY process
assessment_responses1.size value = {"6 Month Review(1)"=>3, "Assessment(1)"=>28, "Assessment(2)"=>28}
............................................................
assessment_responses2.class = ActiveRecord::Relation::ActiveRecord_Relation_AssessmentResponse
(0.5ms) SELECT COUNT(distinct process) FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0)
assessment_responses2.size.class = Fixnum
CACHE (0.0ms) SELECT COUNT(distinct process) FROM `assessment_responses` WHERE `assessment_responses`.`organisation_id` = 17 AND (client_id=43932 and final = 0)
assessment_responses2.size values = 3
-----------------------------------------------------------
size on an ActiveRecord::Relation object translates to count, because the former tries to get the count of the Relation. But when you call count on a grouped Relation object, you receive a hash.
The keys of this hash are the grouped column's values; the values of this hash are the respective counts.
AssessmentResponse.group(:client_id).count # this will return a Hash
AssessmentResponse.group(:client_id).size # this will also return a Hash
This is true for the following methods: count, sum, average, maximum, and minimum.
If you want to check for rows being present or not, simply use exists? i.e. do the following:
AssessmentResponse.group(:client_id).exists?
Instead of this:
AssessmentResponse.group(:client_id).count.zero?