I have a table in DynamoDB that looks like this:
I added a global secondary index on "Category" to the table, and it worked fine, showing the number of items in the table under Item Count.
I then realized that I actually needed to be able to search within a particular "Category", but sorted by "UserRating".
So I deleted the GSI and made a new one like this:
This all worked fine, I thought; the names were correct, and the types, (string) for Category and (number) for UserRating, were correct.
But after it finished creating the GSI, the console showed an item count of 0, even though there should be 13 items in this testing table, as pictured below:
Thanks for your help.
As per the Amazon documentation, this value is updated approximately every six hours:
ItemCount - The number of items in the global secondary index. DynamoDB updates this value approximately every six hours. Recent changes might not be reflected in this value.
In my case, even though the console was still showing an ItemCount of zero and returning no results for a scan/query of the index, I was able to successfully query it from my code.
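For instance, a minimal boto3 sketch of such a query; the table name, index name, and category value here are placeholders (the index name follows the console's default hash-range naming):

import boto3
from boto3.dynamodb.conditions import Key

# Placeholder names: adjust the table name, index name, and category value.
table = boto3.resource("dynamodb").Table("MyTable")

resp = table.query(
    IndexName="Category-UserRating-index",
    KeyConditionExpression=Key("Category").eq("Books"),
    ScanIndexForward=False,  # highest UserRating first
)
print(resp["Count"], resp["Items"])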
I think this is more likely to happen when you have a simple type mismatch between the key-schema types and the concrete item types.
From Managing Global Secondary Indexes -
Note
In some cases, DynamoDB will not be able to write data from the table to the index due to index key violations. This can occur if the data type of an attribute value does not match the data type of an index key schema data type, or if the size of an attribute exceeds the maximum length for an index key attribute. Index key violations do not interfere with global secondary index creation; however, when the index becomes ACTIVE, the violating keys will not be present in the index.
DynamoDB provides a standalone tool for finding and resolving these issues. For more information, see Detecting and Correcting Index Key Violations.
By Example
Item looks like:
"Items": [
{
"Timestamp": {
"N": "1542475507"
},
"DevID": {
"S": "slfhioh1234oi23lk23kl4h235pjpo235lnsfvuwerfj2roin2l3rn9fj9f8hwen"
},
"UID": {
"S": "1"
}
}
],
Index looks like:
"GlobalSecondaryIndexes": [
{
"IndexName": "UID-Timestamp-index",
"Projection": {
"ProjectionType": "KEYS_ONLY"
},
"ProvisionedThroughput": {
"WriteCapacityUnits": 1,
"ReadCapacityUnits": 1
},
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "UID"
},
{
"KeyType": "RANGE",
"AttributeName": "Timestamp"
}
]
}
]
Table has the attribute definitions:
"AttributeDefinitions": [
{
"AttributeName": "Timestamp",
"AttributeType": "S"
},
{
"AttributeName": "UID",
"AttributeType": "S"
}
]
That item will NOT appear in your new index.
It is completely possible to have a mismatch in type (in this case "S" != "N") without it being flagged at creation time. This makes sense: you may want to do this sort of thing on purpose, but when you do it accidentally, it's not great.
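If you suspect this, a rough boto3 sketch along these lines (with "MyTable" as a placeholder table name) can flag offending items before reaching for the standalone tool:

import boto3

# Compare each item's concrete attribute types against the table's
# declared AttributeDefinitions and report any mismatches.
client = boto3.client("dynamodb")

declared = {
    d["AttributeName"]: d["AttributeType"]
    for d in client.describe_table(TableName="MyTable")["Table"]["AttributeDefinitions"]
}

for page in client.get_paginator("scan").paginate(TableName="MyTable"):
    for item in page["Items"]:
        for name, want in declared.items():
            if name in item:
                got = next(iter(item[name]))  # type descriptor, e.g. "S" or "N"
                if got != want:
                    print(f"{name}: item has type {got}, key schema expects {want}")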
I also had strange behavior (no results) when the index name contained dashes, as in the OP's screenshots. Replacing the dashes with underscores fixed my problem.
Found the answer. I had my read/write capacity set to 1 unit for everything while testing; as soon as I increased it, the error was fixed and I could see the items.
This question is a follow-up to another SO question.
Summary: I have an API returning a nested JSON array. Data is being extracted via APEX REST Data Sources. The Row Selector in the Data Profile is set to "." (to select the "root node").
The lines array has been manually added as a column (LINES) in the Data Profile, with the data type set to JSON Document and lines as the selector.
SAMPLE JSON RESPONSE FROM API
[ {
"order_number": "so1223",
"order_date": "2022-07-01",
"full_name": "Carny Coulter",
"email": "ccoulter2#ovh.net",
"credit_card": "3545556133694494",
"city": "Myhiya",
"state": "CA",
"zip_code": "12345",
"lines": [
{
"product": "Beans - Fava, Canned",
"quantity": 1,
"price": 1.99
},
{
"product": "Edible Flower - Mixed",
"quantity": 1,
"price": 1.50
}
]
},
{
"order_number": "so2244",
"order_date": "2022-12-28",
"full_name": "Liam Shawcross",
"email": "lshawcross5#exblog.jp",
"credit_card": "6331104669953298",
"city": "Humaitá",
"state": "NY",
"zip_code": "98670",
"lines": [
{
"order_id": 5,
"product": "Beans - Green",
"quantity": 2,
"price": 4.33
},
{
"order_id": 1,
"product": "Grapefruit - Pink",
"quantity": 5,
"price": 5.00
}
]
}
]
The order attributes have been synchronized to a local table (Table name: SOTEST_LOCAL)
The table has the correct data. As you can see below, the LINES column contains the JSON array.
I then created an ORDER_LINES child table to extract the JSON from the LINES column of the SOTEST_LOCAL table. (Sorry for the table names; I should've named them ORDERS_LOCAL and ORDER_LINES_LOCAL.)
CREATE TABLE "SOTEST_ORDER_LINES_LOCAL"
( "LINE_ID" NUMBER,
"ORDER_ID" NUMBER,
"LINE_NUMBER" NUMBER,
"PRODUCT" VARCHAR2(200) COLLATE "USING_NLS_COMP",
"QUANTITY" NUMBER,
"PRICE" NUMBER,
CONSTRAINT "SOTEST_ORDER_LINES_LOCAL_PK" PRIMARY KEY ("LINE_ID")
USING INDEX ENABLE
) DEFAULT COLLATION "USING_NLS_COMP"
/
ALTER TABLE "SOTEST_ORDER_LINES_LOCAL" ADD CONSTRAINT "SOTEST_ORDER_LINES_LOCAL_FK" FOREIGN KEY ("ORDER_ID")
REFERENCES "SOTEST_LOCAL" ("ORDER_ID") ON DELETE CASCADE ENABLE
/
QuickSQL version..
SOTEST_ORDER_LINES_LOCAL
LINE_ID /pk
ORDER_ID /fk SOTEST_LOCAL references ORDER_ID
LINE_NUMBER
PRODUCT
QUANTITY
PRICE
So, per Carsten's answer to the previous question, I can write SQL to extract the JSON array from the LINES column of the SOTEST_LOCAL table into the child table SOTEST_ORDER_LINES_LOCAL.
My question is two parts.
Where exactly do I write the SQL? Would I write it in SQL Workshop in SQL Commands?
The REST source is synchronized every hour. So would I need to write a function that runs every time new data is merged?
There are multiple options for this:
Create a trigger on the local synchronization table
You could create a trigger on your orders table (SOTEST_LOCAL), which runs AFTER INSERT, UPDATE or DELETE and maintains the LINES child table. The nice thing about this approach is that the maintenance of the child table is independent of APEX and the REST Synchronization; it would also work if you just inserted rows with plain SQL*Plus.
Here's some pseudo-code showing what the trigger could look like.
create or replace trigger tr_maintain_lines
after insert or update or delete on SOTEST_LOCAL
for each row
begin
    if inserting then
        insert into SOTEST_ORDER_LINES_LOCAL
            ( order_id, line_id, line_number, product, quantity, price )
        select :new.order_id,
               seq_lines.nextval,  -- assumes a sequence for LINE_ID values
               j.line#,
               j.product,
               j.quantity,
               j.price
          from json_table(
                   :new.lines,
                   '$[*]' columns (
                       line#    for ordinality,
                       product  varchar2(255) path '$.product',
                       quantity number        path '$.quantity',
                       price    number        path '$.price' ) ) j;
    elsif deleting then
        delete SOTEST_ORDER_LINES_LOCAL
         where order_id = :old.order_id;
    elsif updating then
        -- handle the update case here;
        -- the simplest approach is to delete and re-insert the LINES rows.
        delete SOTEST_ORDER_LINES_LOCAL
         where order_id = :old.order_id;
        insert into SOTEST_ORDER_LINES_LOCAL
            ( order_id, line_id, line_number, product, quantity, price )
        select :new.order_id,
               seq_lines.nextval,
               j.line#,
               j.product,
               j.quantity,
               j.price
          from json_table(
                   :new.lines,
                   '$[*]' columns (
                       line#    for ordinality,
                       product  varchar2(255) path '$.product',
                       quantity number        path '$.quantity',
                       price    number        path '$.price' ) ) j;
    end if;
end;
Handle child table maintenance in APEX itself.
You could turn off the schedule of your REST Source synchronization, and have it run only when called with APEX_REST_SOURCE_SYNC.SYNCHRONIZE_DATA (https://docs.oracle.com/en/database/oracle/apex/22.1/aeapi/SYNCHRONIZE_DATA-Procedure.html#GUID-660DE4D1-4BAF-405A-A871-6B8C201969C9).
Then create an APEX Automation, which runs on your desired schedule and has two Actions: one would be the REST Source synchronization, the other would call PL/SQL code to maintain the child tables.
Have a look at this blog posting, which talks a bit about more complex synchronization scenarios (although it doesn't exactly fit your scenario): https://blogs.oracle.com/apex/post/synchronize-parent-child-rest-sources
I hope this helps.
I want to query a DDB GSI with a key condition and apply a filter to the returned results using the contains function.
Data I have in DDB table:
{
"lookupType": "PRODUCT_GROUP",
"name": "Spring framework study set",
"structureJson": {
"productIds": [
"FBCUPOQsrp",
"Y4LDaiViLY",
"J6N3UWq9CK"
]
},
"ownerId": "mT9R9y6zGO"
}
{
"lookupType": "PRODUCT_GROUP",
"name": "Relational databases study set",
"structureJson": {
"productIds": [
"QWQWQWQWQW",
"XZXZXZXZXZ"
]
},
"ownerId": "mT9R9y6zGO"
}
...
I have a compound GSI (ownerId - HASH, lookupType - RANGE).
When I try to query DDB (the query structure is in the "2" field), I get the following result (in the "3" field):
{
"0":[
],
"2":{
"TableName":"Products",
"IndexName":"ProductsOwnerIdLookupTypeIndex",
"KeyConditionExpression":"#ownerId = :ownerId and #lookupType = :lookupType",
"FilterExpression":"contains(#structureMember, :memberId)",
"ExpressionAttributeNames":{
"#ownerId":"ownerId",
"#lookupType":"lookupType",
"#structureMember":"structureJson.productIds"
},
"ExpressionAttributeValues":{
":ownerId":"mT9R9y6zGO",
":lookupType":"PRODUCT_GROUP",
":memberId":"FBCUPOQsrp"
}
},
"3":{
"Items":[
],
"Count":0,
"ScannedCount":2
}
}
The returned result set is empty, even though I have data with the given field values.
How I see the query (or what I want to achieve):
When I query the GSI with ownerId = mT9R9y6zGO and lookupType = PRODUCT_GROUP, it will find 2 items, the Spring and Relational databases study sets.
As the second step, DDB will scan the returned query results with the contains(structureJson.productIds, FBCUPOQsrp) filter expression, and it should return one result to me, but I get an empty set.
Is something wrong with the query, or am I missing some point in the DDB query workflow?
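For comparison, here is the same query as a boto3 sketch. One detail worth checking: DynamoDB treats each ExpressionAttributeNames placeholder as exactly one attribute name, so a nested document path such as structureJson.productIds has to be split into one placeholder per path segment rather than mapped from a single dotted name:

import boto3

client = boto3.client("dynamodb")

resp = client.query(
    TableName="Products",
    IndexName="ProductsOwnerIdLookupTypeIndex",
    KeyConditionExpression="#ownerId = :ownerId AND #lookupType = :lookupType",
    # Each placeholder stands for exactly one attribute name, so the nested
    # path is written as two placeholders joined by a dot.
    FilterExpression="contains(#structureJson.#productIds, :memberId)",
    ExpressionAttributeNames={
        "#ownerId": "ownerId",
        "#lookupType": "lookupType",
        "#structureJson": "structureJson",
        "#productIds": "productIds",
    },
    ExpressionAttributeValues={
        ":ownerId": {"S": "mT9R9y6zGO"},
        ":lookupType": {"S": "PRODUCT_GROUP"},
        ":memberId": {"S": "FBCUPOQsrp"},
    },
)
print(resp["Count"], resp["Items"])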
I have an AWS DynamoDB cart table with the following item structure -
{
"cart_id": "5e4d0f9f-f08c-45ae-986a-f1b5ac7b7c13",
"user_id": 1234,
"type": "OTHER",
"currency": "INR",
"created_date": 132432423,
"expiry": 132432425,
"total_amount": 90000,
"total_quantity": 2,
"items": [
{
"amount": 90000,
"category": "Laptops",
"name": "Apple MacBook Pro",
"quantity": 1
}
]
}
-
{
"cart_id": "12340f9f-f08c-45ae-986a-f1b5ac7b1234",
"user_id": 1234,
"type": "SPECIAL",
"currency": "INR",
"created_date": 132432423,
"expiry": 132432425,
"total_amount": 1000,
"total_quantity": 2,
"items": [
{
"amount": 1000,
"category": "Special",
"name": "Special Item",
"quantity": 1
}
]
}
The table will have cart_id as Primary key,
user_id as an Index or GSI,
type as an Index or GSI.
I want to be able to query the cart table,
to find the items which have user_id = 1234 AND type != "SPECIAL".
I don't know if this means the query would be -
--key-condition-expression "user_id = 1234 AND type != 'SPECIAL'"
I understand that an AWS DynamoDB table cannot be queried using multiple indexes at the same time.
I came across the following question; it has a similar use case, and the answer recommends creating a composite key:
Querying with multiple local Secondary Index Dynamodb
Does it mean that while putting a new item in the table,
I will need to maintain another column like user_id_type,
with its value as 1234SPECIAL, and create an Index / GSI for user_id_type?
Sample item structure -
{
"cart_id": "5e4d0f9f-f08c-45ae-986a-f1b5ac7b7c13",
"user_id": 1234,
"type": "OTHER",
"user_id_type" : "1234OTHER",
"currency": "INR",
"created_date": 132432423,
"expiry": 132432425,
"total_amount": 90000,
"total_quantity": 2,
"items": [
{
"amount": 90000,
"category": "Laptops",
"name": "Apple MacBook Pro",
"quantity": 1
}
]
}
References -
1. Querying with multiple local Secondary Index Dynamodb
2. Is there a way to query multiple hash keys in DynamoDB?
Your assumption is correct. Maybe you can add a delimiter into that, field1_field2, or hash the values if either of them is too big in size: hashOfField1_hashOfField2.
That means spending some more processing power on your side, however, as DynamoDB does not natively support it.
Composite key in DynamoDB with more than 2 columns?
Dynamodb: query using more than two attributes
Additional info on your use case
A KeyConditionExpression requires an equality check on the hash key and does not support the != operator.
You can put the != check in the FilterExpression instead.
Why is there no not equal comparison in DynamoDB queries?
Does it mean that while putting a new item in the table,
I will need to maintain another column like user_id_type,
with its value as 1234SPECIAL and create an Index / GSI for user_id_type?
The answer is: it depends on how many columns you need (DynamoDB is schemaless; by "column" I mean data field) and whether you are happy with two round trips to the DB.
Your query:
user_id = 1234 AND type != "SPECIAL"
1- If you need all the information in the cart and you are happy with two round trips:
Solution: Create a GSI with user_id (HASH) and type (RANGE), then add cart_id (base table Hash key) as projection.
Explanation: you need one query on the index table to get the cart_id(s) for the given user_id:
--key-condition-expression "user_id = :uid"
A != comparison is not supported in a key condition (and a filter expression cannot reference the index's own key attributes), so drop the "SPECIAL" rows on the client side, or run two queries with type < 'SPECIAL' and type > 'SPECIAL'. Then use the cart_id(s) from the result to make another query against the base table (see the sketch below).
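Here is a rough boto3 sketch of these two round trips; the table name "cart" and the index name "UserIdTypeIndex" are assumptions, not something from your setup:

import boto3
from boto3.dynamodb.conditions import Key

# Assumed names: a table "cart" and a GSI "UserIdTypeIndex" with user_id as
# HASH and type as RANGE, projecting cart_id (the base table hash key is
# always projected into a GSI).
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cart")

# Round trip 1: query the index by user_id and exclude "SPECIAL" client-side.
resp = table.query(
    IndexName="UserIdTypeIndex",
    KeyConditionExpression=Key("user_id").eq(1234),
)
cart_ids = [i["cart_id"] for i in resp["Items"] if i["type"] != "SPECIAL"]

# Round trip 2: fetch the full items from the base table (batch_get_item
# accepts up to 100 keys per request).
if cart_ids:
    keys = [{"cart_id": cid} for cid in cart_ids]
    full = dynamodb.batch_get_item(RequestItems={"cart": {"Keys": keys}})
    carts = full["Responses"]["cart"]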
2- If you do not need all of the cart information:
Solution: create a GSI with user_id as HASH and type as RANGE, and add the columns you need to the projection.
Explanation: the projection is the set of additional columns you want to have in your index table. So add some extra columns, the ones most likely to be needed in query results, to avoid an extra round trip to the base table.
Note: adding too many extra columns can double your costs, as any update on the base table results in updates to the GSI's projected fields.
3- If you want just one round trip and you need all the data:
Then you need to manage it yourself, and your suggestion can be applied.
One possible answer is to create a single index keyed on user_id (keeping type out of the index key so it can be used in a filter). Then you can do this:
{
    TableName: "...",
    IndexName: "UserIdIndex",
    KeyConditionExpression: "user_id = :user_id",
    FilterExpression: "#type <> :type",
    ExpressionAttributeNames: {
        "#type": "type"
    },
    ExpressionAttributeValues: {
        ":user_id": 1234,
        ":type": "SPECIAL"
    }
}
You can build a GraphQL schema with AWS AppSync from your DynamoDB table and then query it in your app with GraphQL.
I defined the following schema in BigQuery:
[
{
"mode": "REQUIRED",
"name": "customer_id",
"type": "STRING"
},
{
"mode": "REPEATED",
"name": "segments",
"type": "RECORD",
"fields": [
{
"mode": "REQUIRED",
"name": "segment_id",
"type": "STRING"
}
]
}
]
I'm trying to insert a new segment_id for specific customer_ids, something like this:
#standardSQL
UPDATE `sample-project.customer_segments.segments`
SET segments = ARRAY(
SELECT segment FROM UNNEST(segments) AS segment
UNION ALL
SELECT STRUCT('NEW_SEGMENT')
)
WHERE customer_id IN ('0000000000', '0000000001', '0000000002')
Is it possible to assign more than 10 thousand customer_ids to an IN query in BigQuery?
Is it possible to assign more than 10 thousand customer_ids to an IN query in BigQuery?
Assuming (based on the example in your question) that each customer_id is around 10 chars, plus three chars for the apostrophes and comma, you will end up with roughly an extra 130 KB, which is within the 256 KB query-length limit (see more in Quotas & Limits).
So you should be fine with 10K customer_ids, and you can easily calculate the upper bound: at about 13 bytes per value, 256 KB allows around 19K of them.
Just to clarify:
I meant the limitations below (mostly the first one):
Maximum unresolved query length — 256 KB
Maximum resolved query length — 12 MB
When working with a long list of possible values, it's a good idea to use a query parameter instead of inlining the entire list into the query, assuming you are working with the command-line client or API. For example:
#standardSQL
UPDATE `sample-project.customer_segments.segments`
SET segments = ARRAY(
SELECT segment FROM UNNEST(segments) AS segment
UNION ALL
SELECT STRUCT('NEW_SEGMENT')
)
WHERE customer_id IN UNNEST(@customer_ids)
Here you would create a query parameter of type ARRAY<STRING> containing the customer IDs.
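With the Python client, for example, a sketch of this could look as follows (assuming the google-cloud-bigquery library and your list of IDs in customer_ids):

from google.cloud import bigquery

client = bigquery.Client()

customer_ids = ["0000000000", "0000000001", "0000000002"]  # your 10K+ values

# Bind the list as an ARRAY<STRING> query parameter instead of inlining it.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ArrayQueryParameter("customer_ids", "STRING", customer_ids)
    ]
)

query = """
UPDATE `sample-project.customer_segments.segments`
SET segments = ARRAY(
  SELECT segment FROM UNNEST(segments) AS segment
  UNION ALL
  SELECT STRUCT('NEW_SEGMENT')
)
WHERE customer_id IN UNNEST(@customer_ids)
"""

client.query(query, job_config=job_config).result()  # wait for the DML to finish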
I think I'm misunderstanding DynamoDB. I would like to query for all items where a child field of the JSON matches an identifier I'm passing. The structure is something like -
{
"messageId": "ced96cab-767e-509198be5-3d2896a3efeb",
"identifier": {
"primary": "9927fd47-5d33-4f51-a5bb-f292a0c733b1",
"secondary": "none",
"tertiary": "cfd96cab-767e-5091-8be5-3d2896a3efeb"
},
"attributes": {
"MyID": {
"Type": "String",
"Value": "9927fd47-5c33-4f51-a5bb-f292a0c733b1"
}
}
}
I would like to query for all items in DynamoDB that have a MyID value equal to the one I'm passing. Everything I've read seems to say you need to use the key, which in my case is messageId; that is unique for each entry and not a value I can use.
Hope this makes sense.
The DynamoDB Query API can be used only if you know the value of the partition key. Otherwise, you may need to scan the whole table using a FilterExpression to find the items.
Scanning tables
You can create a GSI on scalar attributes only. In the above case, attributes is a document data type (i.e. a MAP), so a GSI can't be created on the nested MyID value.
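As a rough sketch of the scan approach with boto3 (the table name "Messages" is a placeholder), note that a nested path works fine in a FilterExpression even though it can't be indexed:

import boto3
from boto3.dynamodb.conditions import Attr

# "Messages" is a placeholder table name; adjust to your own.
table = boto3.resource("dynamodb").Table("Messages")

target = "9927fd47-5c33-4f51-a5bb-f292a0c733b1"

# Scan with a filter on the nested MyID value, following pagination.
# The filter is applied after items are read, so the scan still consumes
# read capacity for every item it examines.
items = []
kwargs = {"FilterExpression": Attr("attributes.MyID.Value").eq(target)}
while True:
    resp = table.scan(**kwargs)
    items.extend(resp["Items"])
    if "LastEvaluatedKey" not in resp:
        break
    kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]
print(items)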