DynamoDbException: Conditions can be of length 1 or 2 only - amazon-web-services

I am from an SQL background and very new to DynamoDB. I have a single table like this:
---------------------------------------------------------
| dbId | genId | colData | deviceId | updateOn | params |
---------------------------------------------------------
|      |       |         |          |          |        |
---------------------------------------------------------
Here dbId is the partition key and genId is the sort key. I have created two local secondary indexes on deviceId and updateOn. In SQL I can query the table like this:
String dbId = "151/2020-21";
int deviceId = 1001;
long updOn = 1608456211308;
String query = "select * from tableName where dbId = '"+dbId+"' and deviceId != "+deviceId+" and updateOn > " + updOn;
In DynamoDB my KeyConditionExpression is: dbId = :dbId and updateOn > :updateOn and deviceId != :deviceId. It gives me this error:
DynamoDbException: Invalid KeyConditionExpression: Syntax error; token: "!", near: "deviceId !="
I removed the '!' and changed it to: dbId = :dbId and updateOn > :updateOn and deviceId = :deviceId. It gives me this error:
DynamoDbException: Conditions can be of length 1 or 2 only
How can I perform my desired query in DynamoDB? How should I design my DynamoDB table (primary key, indexes, etc.) so that I get the same SQL-like result?

The reason you are facing the error 'Conditions can be of length 1 or 2 only' is that you are specifying three conditions within KeyConditionExpression. Specify only the partition key and, optionally, the sort key.
Also, you don't need to create an LSI for this purpose. You can perform the same operation on the original table, or add a Global Secondary Index, but either way the key condition can cover two conditions at most.
As mentioned, the attributes in KeyConditionExpression must match the key schema of the table or index you query: the hash (partition) key with an equality condition, plus optionally the sort key. Querying on all three attributes inside the key condition is not possible, because there is never more than one sort key, as the AWS docs point out.
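A minimal sketch of that approach with the JavaScript DocumentClient, assuming the question's table (dbId as partition key) and the LSI on updateOn; the index name updateOn-index and the literal values are assumptions, not taken from the question. The key condition keeps only dbId and updateOn, and the deviceId != condition moves into a FilterExpression, written as <> because != is not valid in DynamoDB expressions:
// Sketch only: 'updateOn-index' is a hypothetical LSI name (dbId as partition key,
// updateOn as sort key); adjust it to your actual index.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: 'tableName',
  IndexName: 'updateOn-index',
  KeyConditionExpression: 'dbId = :dbId AND updateOn > :updateOn',
  FilterExpression: 'deviceId <> :deviceId', // '<>' is DynamoDB's not-equal operator
  ExpressionAttributeValues: {
    ':dbId': '151/2020-21',
    ':updateOn': 1608456211308,
    ':deviceId': 1001,
  },
};

docClient.query(params).promise()
  .then(data => console.log(data.Items))
  .catch(err => console.error(err));
Note that the filter is applied after the key condition, so items rejected by deviceId <> :deviceId still count toward read capacity.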

When it comes to query() in DynamoDB, you can only reference the partition key and (optionally) the sort key in KeyConditionExpression.
So if you want to perform the query() with more conditions, you should consider FilterExpression, e.g.:
// Assumes AWS SDK v2 and an enclosing async function.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: 'table-name',
  KeyConditionExpression: 'pk = :pk AND sk = :sk',
  // DynamoDB expressions use '<>' for not-equal, not '!='
  FilterExpression: 'dbId = :dbId and updateOn > :updateOn and deviceId <> :deviceId',
  ExpressionAttributeValues: {
    ':pk': '12345',
    ':sk': 'lorem-ipsum',
    ':dbId': '0987654321',
    ':updateOn': '2022-06-25',
    ':deviceId': '123',
  },
};

const result = await docClient.query(params).promise();

Related

mariaDB extract only the digits from {"user":"128"}

From the string {"user":"128"}, I expect to get only 128.
It could also be {"user":"8"} or {"user":"12"} or {"user":"128798"}.
In this fiddle example the matched value should be 3.
Schema:
CREATE TABLE test1 (
  id INT AUTO_INCREMENT,
  userid INT(11),
  PRIMARY KEY (id)
);

INSERT INTO test1 (userid)
VALUES
  ('126'),
  ('2457'),
  ('3'),
  ('40');

CREATE TABLE test2 (
  id INT AUTO_INCREMENT,
  code VARCHAR(1024),
  PRIMARY KEY (id)
);

INSERT INTO test2 (code)
VALUES
  ('{"user":"128"}'),
  ('{"user":"2459"}'),
  ('{"user":"3"}'),
  ('{"user":"46"}');
Query:
(SELECT sub.`userid`, rev.`code` REGEXP '[0-9]'
FROM `test1` AS sub, `test2` AS rev
WHERE sub.`userid`= rev.`code`
)
I have tried REGEXP '[0-9]' and also '[[:digit:]]', but no luck.
I have also tried concat('', value * 1) = value and concat(value * 1) = value.
SOLUTION: fiddle
REGEXP only returns 0 or 1 depending on whether the regexp matches the string.
You should have a look at REGEXP_SUBSTR:
https://dev.mysql.com/doc/refman/8.0/en/regexp.html#function_regexp-substr
select REGEXP_SUBSTR(code, '"[0-9]+"') from test2
This works for me, after selecting the MySQL 8 version in your fiddle.
It returns:
"128"
"2459"
null
"46"
or
select REGEXP_SUBSTR(code, '[0-9]+') from test2
if you want 3 instead of null for "k3".
Akina's suggestion of using JSON_EXTRACT is way better, but it does require that you redefine the column as a JSON datatype instead of VARCHAR.
Use the function REGEXP_SUBSTR:
MariaDB [(none)]> SELECT REGEXP_SUBSTR('{"user":"128"}', '\\d+');
+-----------------------------------------+
| REGEXP_SUBSTR('{"user":"128"}', '\\d+') |
+-----------------------------------------+
| 128                                      |
+-----------------------------------------+
1 row in set (0.000 sec)

Query condition missed key schema element: source_transaction_id in dynamoDB Scala

I am trying to execute a query with a secondary index as follows:
val valMap = new ValueMap()
  .withString(":v_source_transaction_id", "843f45ad-cb1d-4f41-9ede-366c9304e447")
  //.withString(":source_transaction_trace_id", "843f45ad-cb1d-4f41-9ede-366c9304e443")

println(valMap)

// If search is defined then extract dates from search, else continue with the simple logic.
val keyConditionExpression = """source_transaction_id =
                               | :v_source_transaction_id""".stripMargin

val spec = new QuerySpec()
  .withProjectionExpression("source_transaction_id, transaction_date")
  .withKeyConditionExpression(keyConditionExpression)
  .withValueMap(valMap)
  .withMaxResultSize(2)

case class DataItems(transaction_date: String)

val itemList = new ListBuffer[DataItems]
val items = table.getIndex("gsi-settlement").query(spec)
println(table.getIndex("gsi-settlement"))

val iterator = items.iterator()
while (iterator.hasNext) {
  val next = iterator.next()
  itemList += DataItems(next.getString("transaction_date"))
}
itemList.foreach(println)
Here gsi-settlement is the secondary index and source_transaction_id is its primary key, and I am getting the following error:
AmazonDynamoDBException: Query condition missed key schema element: source_transaction_id
Try this:
val keyConditionExpression = """source_transaction_id = :v_source_transaction_id""".stripMargin
Currently your keyConditionExpression comes out as
source_transaction_id =
 :v_source_transaction_id
because stripMargin only strips the margin up to the '|' and keeps the embedded newline.
The error AmazonDynamoDBException: Query condition missed key schema element: source_transaction_id is telling you that the GSI key is missing from your query.

AWS DynamoDB | How to query timestamps between two dates?

I have a table with users, where I wish to query all users created in, for example, April.
When a user is created, a timestamp is automatically stored for that user.
I made an index on my table, with timestamp as partition key and id as sort key.
The timestamp is in Unix milliseconds.
This is my code for this query:
GetUsersOnTimestamp(): Promise<any> {
  return new Promise((resolve, reject) => {
    const _dynamoDB = new AWS.DynamoDB.DocumentClient();
    const startDate = 1554069600000;
    const endDate = 1556661600000;
    const params = {
      TableName: 'user-table',
      IndexName: 'timestamp-id-index',
      KeyConditionExpression: '#timestamp = :hkey BETWEEN :sdate AND :edate',
      ExpressionAttributeNames: {
        '#timestamp': 'timestamp'
      },
      ExpressionAttributeValues: {
        ':hkey': 'timestamp',
        ':sdate': startDate,
        ':edate': endDate,
      }
    };
    _dynamoDB.query(params, (err, data) => err ? reject(err) : resolve(data));
  });
}
I get the following error:
ExpressionAttributeValues contains invalid key: Syntax error; key: "hkey"
You can't conditionally query for your partition key. You have to specify a full partition key value without any condition. The BETWEEN comparison operator is only available for querying the sort key conditionally.
From the DynamoDB documentation:
You must specify the partition key name and value as an equality condition.
You can optionally provide a second condition for the sort key (if present). The sort key condition must use one of the following comparison operators:
a = b — true if the attribute a is equal to the value b
a < b — true if a is less than b
a <= b — true if a is less than or equal to b
a > b — true if a is greater than b
a >= b — true if a is greater than or equal to b
a BETWEEN b AND c — true if a is greater than or equal to b, and less than or equal to c.
The following function is also supported:
begins_with (a, substr) — true if the value of attribute a begins with a particular substring.
Getting the ability to query for a range of timestamps is not straightforward in DynamoDB. One solution would be to add an additional field to your items which contains just the year and month of the timestamp. You could then create a global secondary index (GSI) with that year-month field as partition key and the full timestamp as sort key. With this approach you could query all users created in a given month.
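A minimal sketch of that approach with the DocumentClient, assuming a hypothetical attribute createdMonth (e.g. '2019-04') and a hypothetical GSI named created-month-index with createdMonth as partition key and timestamp as sort key:
// Sketch only: 'createdMonth' and 'created-month-index' are assumed names,
// not part of the original table.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: 'user-table',
  IndexName: 'created-month-index',
  KeyConditionExpression: 'createdMonth = :month AND #ts BETWEEN :sdate AND :edate',
  ExpressionAttributeNames: { '#ts': 'timestamp' }, // 'timestamp' is a reserved word
  ExpressionAttributeValues: {
    ':month': '2019-04',
    ':sdate': 1554069600000,
    ':edate': 1556661600000,
  },
};

docClient.query(params).promise()
  .then(data => console.log(data.Items))
  .catch(err => console.error(err));
A range that spans several months would need one query per month value, with the results merged afterwards.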

How to do a cross join / cartesian product in RavenDB?

I have a web application that uses RavenDB on the backend and allows the user to keep track of inventory. The three entities in my domain are:
public class Location
{
    string Id
    string Name
}

public class ItemType
{
    string Id
    string Name
}

public class Item
{
    string Id
    DenormalizedRef<Location> Location
    DenormalizedRef<ItemType> ItemType
}
On my web app, there is a page for the user to see a summary breakdown of the inventory they have at the various locations. Specifically, it shows the location name, item type name, and then a count of items.
The first approach I took was a map/reduce index on InventoryItems:
this.Map = inventoryItems =>
    from inventoryItem in inventoryItems
    select new
    {
        LocationName = inventoryItem.Location.Name,
        ItemTypeName = inventoryItem.ItemType.Name,
        Count = 1
    };

this.Reduce = indexEntries =>
    from indexEntry in indexEntries
    group indexEntry by new
    {
        indexEntry.LocationName,
        indexEntry.ItemTypeName,
    } into g
    select new
    {
        g.Key.LocationName,
        g.Key.ItemTypeName,
        Count = g.Sum(entry => entry.Count),
    };
That is working fine but it only displays rows for Location/ItemType pairs that have a non-zero count of items. I need to have it show all Locations and for each location, all item types even those that don't have any items associated with them.
I've tried a few different approaches but no success so far. My thought was to turn the above into a Multi-Map/Reduce index and just add another map that would give me the cartesian product of Locations and ItemTypes but with a Count of 0. Then I could feed that into the reduce and would always have a record for every location/itemtype pair.
this.AddMap<object>(docs =>
    from itemType in docs.WhereEntityIs<ItemType>("ItemTypes")
    from location in docs.WhereEntityIs<Location>("Locations")
    select new
    {
        LocationName = location.Name,
        ItemTypeName = itemType.Name,
        Count = 0
    });
This isn't working though so I'm thinking RavenDB doesn't like this kind of mapping. Is there a way to get a cross join / cartesian product from RavenDB? Alternatively, any other way to accomplish what I'm trying to do?
EDIT: To clarify, Locations, ItemTypes, and Items are documents in the system that the user of the app creates. Without any Items in the system, if the user enters three Locations "London", "Paris", and "Berlin" along with two ItemTypes "Desktop" and "Laptop", the expected result is that when they look at the inventory summary, they see a table like so:
| Location | Item Type | Count |
|----------|-----------|-------|
| London   | Desktop   | 0     |
| London   | Laptop    | 0     |
| Paris    | Desktop   | 0     |
| Paris    | Laptop    | 0     |
| Berlin   | Desktop   | 0     |
| Berlin   | Laptop    | 0     |
Here is how you can do this with all the empty locations as well:
this.AddMap<InventoryItem>(inventoryItems =>
    from inventoryItem in inventoryItems
    select new
    {
        LocationName = inventoryItem.Location.Name,
        Items = new[]
        {
            new { ItemTypeName = inventoryItem.ItemType.Name, Count = 1 }
        }
    });

this.AddMap<Location>(locations =>
    from location in locations
    select new
    {
        LocationName = location.Name,
        Items = new object[0]
    });

this.Reduce = results =>
    from result in results
    group result by result.LocationName into g
    select new
    {
        LocationName = g.Key,
        Items = from item in g.SelectMany(x => x.Items)
                group item by item.ItemTypeName into gi
                select new
                {
                    ItemTypeName = gi.Key,
                    Count = gi.Sum(x => x.Count)
                }
    };

Regex / subString to extract all matching patterns / groups

I get this as a response to an API hit.
1735 Queries
Taking 1.001303 to 31.856310 seconds to complete
SET timestamp=XXX;
SELECT * FROM ABC_EM WHERE last_modified >= 'XXX' AND last_modified < 'XXX';
38 Queries
Taking 1.007646 to 5.284330 seconds to complete
SET timestamp=XXX;
show slave status;
6 Queries
Taking 1.021271 to 1.959838 seconds to complete
SET timestamp=XXX;
SHOW SLAVE STATUS;
2 Queries
Taking 4.825584, 18.947725 seconds to complete
use marketing;
SET timestamp=XXX;
SELECT * FROM ABC WHERE last_modified >= 'XXX' AND last_modified < 'XXX';
I have extracted this out of the response HTML and have it as a string now. I need to retrieve values as concisely as possible, so that I get a map of the form Map(query -> "T1 to T2 seconds"). Basically, this is the status of all the slow queries running on a MySQL slave server, and I am building an alert system over it. So from this entire paragraph, in the form of a String, I need to separate out the queries and save the corresponding time range with each of them.
1.001303 to 31.856310 is a time range, and against that time range the corresponding query is:
SET timestamp=XXX; SELECT * FROM ABC_EM WHERE last_modified >= 'XXX' AND last_modified < 'XXX';
I was hoping to save this information in a Map in Scala, a Map of the form (query: String -> timeRange: String).
Another example:
("use marketing; SET timestamp=XXX; SELECT * FROM ABC WHERE last_modified >= 'XXX' AND last_modified xyz ;"->"4.825584 to 18.947725 seconds")
"""###(.)###(.)\n\n(.*)###""".r.findAllIn(reqSlowQueryData).matchData foreach {m => println("group0"+m.group(1)+"next group"+m.group(2)+m.group(3)}
I am using the above statement to extract the the repeating cells to do my manipulations on it later. But it doesnt seem to be working;
THANKS IN ADvance! I know there are several ways to do this but all the ones striking me are inefficient and tedious. I need Scala to do the same! Maybe I can extract recursively using the subString method ?
If you want to use Scala, try this:
val regex = """(\d+).(\d+).*(\d+).(\d+) seconds""".r // extract range
val txt = """
|1735 Queries
|
|Taking 1.001303 to 31.856310 seconds to complete
|
|SET timestamp=XXX; SELECT * FROM ABC_EM WHERE last_modified >= 'XXX' AND last_modified < 'XXX';
|
|38 Queries
|
|Taking 1.007646 to 5.284330 seconds to complete
|
|SET timestamp=XXX; show slave status;
|
|6 Queries
|
|Taking 1.021271 to 1.959838 seconds to complete
|
|SET timestamp=XXX; SHOW SLAVE STATUS;
|
|2 Queries
|
|Taking 4.825584, 18.947725 seconds to complete
|
|use marketing; SET timestamp=XXX; SELECT * FROM ABC WHERE last_modified >= 'XXX' AND last_modified < 'XXX';
""".stripMargin
def logToMap(txt: String) = {
  val (_, map) = txt.lines.foldLeft[(Option[String], Map[String, String])]((None, Map.empty)) {
    (acc, el) =>
      val (taking, map) = acc // taking contains the current range
      taking match {
        case Some(range) if el.trim.nonEmpty => // Some contains the range
          (None, map + (el -> range)) // add to map
        case None =>
          regex.findFirstIn(el) match { // extract the range
            case Some(range) => (Some(range), map)
            case _ => (None, map)
          }
        case _ => (taking, map) // probably an empty line
      }
  }
  map
}
Modified ajozwik's answer to work for SQL commands that span multiple lines:
val regex = """(\d+).(\d+).*(\d+).(\d+) seconds""".r // extract range
def logToMap(txt: String) = {
  val (_, map) = txt.lines.foldLeft[(Option[String], Map[String, String])]((None, Map.empty)) {
    (accumulator, element) =>
      val (taking, map) = accumulator
      taking match {
        case Some(range) if element.trim.nonEmpty => {
          if (element.contains("Queries"))
            (None, map)
          else
            (Some(range), map + (range -> (map.getOrElse(range, "") + element)))
        }
        case None =>
          regex.findFirstIn(element) match {
            case Some(range) => (Some(range), map)
            case _ => (None, map)
          }
        case _ => (taking, map)
      }
  }
  println(map)
  map
}