How to update an attribute in a nested array object in DynamoDB (AWS)

I want to update the choices attribute directly through AWS API Gateway.
{
  "id" : "1",
  "general" : {
    "questions" : [
      { "choices" : ["1", "2", "3"] }
    ]
  }
}
Here is my resolver mapping template:
#set($inputRoot = $input.path('$'))
{
  "TableName" : "models",
  "Key" : {
    "accountId" : { "S" : "$inputRoot.accountId" },
    "category" : { "S" : "model" }
  },
  "UpdateExpression" : "SET general.questions = :questions",
  "ExpressionAttributeValues" : {
    ":questions" : {
      "L" : [
        #foreach($elem in $inputRoot.questions)
        {
          "M" : {
            "choices" : {
              "L" : [
                #foreach($elem1 in $elem.choices)
                {"S" : "$elem1"}
                #if(foreach.hasNext),#end
                #end
              ]
            }
          }
        }
        #if($foreach.hasNext),#end
        #end
      ]
    }
  }
}
But I get a 500 Internal Server Error on execution.
Gateway response body: {"message": "Internal server error"}
Does DynamoDB support this update expression, or is there an error in my mapping template? If so, what should the mapping template be for the object I am trying to update?
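For what it's worth, UpdateItem does support SET on a nested document path like general.questions (provided the general map already exists), so the 500 most likely comes from the template itself: the inner #if(foreach.hasNext) is missing its $ sign (it should be #if($foreach.hasNext)), so any question with more than one choice renders invalid JSON. As a sanity check, here is a rough Python sketch (table and attribute names taken from the question) of the attribute-value document the template is trying to build:

```python
import json

def to_attr(questions):
    """Marshal [{"choices": ["1", "2", "3"]}, ...] into DynamoDB's typed
    AttributeValue form, mirroring what the VTL template should emit."""
    return {
        "L": [
            {"M": {"choices": {"L": [{"S": str(c)} for c in q["choices"]]}}}
            for q in questions
        ]
    }

payload = {"questions": [{"choices": ["1", "2", "3"]}]}

# The full UpdateItem request body the template is aiming for.
request = {
    "TableName": "models",
    "Key": {"accountId": {"S": "1"}, "category": {"S": "model"}},
    "UpdateExpression": "SET general.questions = :questions",
    "ExpressionAttributeValues": {":questions": to_attr(payload["questions"])},
}
print(json.dumps(request["ExpressionAttributeValues"], indent=2))
```

Comparing this output against what the template actually renders (API Gateway logs show the rendered body) usually pinpoints the malformed JSON quickly.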

Related

TRC20 curl for getaccount/getbalance

What is the correct curl format to view the TRC20 balance of an account? I tried the command below, but the output shows no balance.
curl -X POST http://127.0.0.1:8090/wallet/triggersmartcontract -d \
'{
  "contract_address" : "TCFLL5dx5ZJdKnWuesXxi1VPwjLVmWZZy9",
  "address" : "TUT5SVvKmnxKpKdHi2tXMzPfffQNg7e3MU",
  "function_selector" : "balanceOf(address)",
  "owner_address" : "TUT5SVvKmnxKpKdHi2tXMzPfffQNg7e3MU",
  "visible" : true
}'
output:
{
  "result" : {
    "result" : true
  },
  "transaction" : {
    "raw_data" : {
      "ref_block_hash" : "0d9745f14e11d7fa",
      "expiration" : 1605942390000,
      "ref_block_bytes" : "9c45",
      "contract" : [
        {
          "type" : "TriggerSmartContract",
          "parameter" : {
            "type_url" : "type.googleapis.com/protocol.TriggerSmartContract",
            "value" : {
              "contract_address" : "TCFLL5dx5ZJdKnWuesXxi1VPwjLVmWZZy9",
              "owner_address" : "TUT5SVvKmnxKpKdHi2tXMzPfffQNg7e3MU",
              "data" : "70a08231"
            }
          }
        }
      ],
      "timestamp" : 1605942331560
    },
    "txID" : "a3bdcb595a94f9805301fb74b33f2b536d3a6bb5050b7eb7b12808bb1e36fcd7",
    "visible" : true,
    "ret" : [
      {}
    ],
    "raw_data_hex" : "0a029c4522080d9745f14e11d7fa40f0d980cdde2e5a6d081f12690a31747970652e676f6f676c65617069732e636f6d2f70726f746f636f6c2e54726967676572536d617274436f6e747261637412340a1541cab799601a50938457902e1a31d3faa26ca1d76012154118fd0626daf3af02389aef3ed87db9c33f638ffa220470a0823170a891fdccde2e"
  },
  "constant_result" : [
    "0000000000000000000000000000000000000000000000000000000000000000"
  ]
}
I use the default main_net_config with supportConstant = true. Do I have to enable anything else in my config file?
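One thing that stands out in the output: the call's data is just 70a08231, i.e. the bare balanceOf selector with no address argument, which would explain the all-zero constant_result. As far as I can tell, the triggersmartcontract endpoint expects the ABI-encoded arguments in a separate "parameter" field: the target address minus its 0x41 mainnet prefix, left-padded to 64 hex characters. A rough Python sketch of building that value (with a hand-rolled base58check decoder; treat the field name and encoding as assumptions to verify against your node's docs):

```python
# Base58 alphabet used by TRON (same as Bitcoin's base58check).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode_check(addr: str) -> bytes:
    """Decode a base58check string and strip the trailing 4-byte checksum."""
    n = 0
    for ch in addr:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[:-4]

def abi_address_param(addr: str) -> str:
    """Build the 32-byte ABI argument for balanceOf(address): drop the
    0x41 prefix byte, left-pad the 20-byte body to 64 hex characters."""
    body = b58decode_check(addr)[1:]
    return body.hex().rjust(64, "0")

param = abi_address_param("TUT5SVvKmnxKpKdHi2tXMzPfffQNg7e3MU")
print(param)  # pass this as "parameter" in the triggersmartcontract body
```

With the parameter attached, data should become the selector plus the padded address, and constant_result should carry the actual balance.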

MongoDB db.collection.distinct() on AWS DocumentDB doesn't use index

I am transitioning to the new AWS DocumentDB service. Currently I am on Mongo 3.2. When I run db.collection.distinct("FIELD_NAME") it returns the results really quickly. I did a database dump to AWS DocumentDB (Mongo 3.6 compatible) and this simple query just gets stuck.
Here are my .explain() output and the indexes on the working instance versus AWS DocumentDB:
Explain on the working instance:
> db.collection.explain().distinct("FIELD_NAME")
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "db.collection",
    "indexFilterSet" : false,
    "parsedQuery" : {
      "$and" : [ ]
    },
    "winningPlan" : {
      "stage" : "PROJECTION",
      "transformBy" : {
        "_id" : 0,
        "FIELD_NAME" : 1
      },
      "inputStage" : {
        "stage" : "DISTINCT_SCAN",
        "keyPattern" : {
          "FIELD_NAME" : 1
        },
        "indexName" : "FIELD_INDEX_NAME",
        "isMultiKey" : false,
        "isUnique" : false,
        "isSparse" : false,
        "isPartial" : false,
        "indexVersion" : 1,
        "direction" : "forward",
        "indexBounds" : {
          "FIELD_NAME" : [
            "[MinKey, MaxKey]"
          ]
        }
      }
    },
    "rejectedPlans" : [ ]
  },
Explain on AWS DocumentDB (not working):
rs0:PRIMARY> db.collection.explain().distinct("FIELD_NAME")
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "db.collection",
    "winningPlan" : {
      "stage" : "AGGREGATE",
      "inputStage" : {
        "stage" : "HASH_AGGREGATE",
        "inputStage" : {
          "stage" : "COLLSCAN"
        }
      }
    }
  },
}
Index on both of these instances:
{
  "v" : 1,
  "key" : {
    "FIELD_NAME" : 1
  },
  "name" : "FIELD_INDEX_NAME",
  "ns" : "db.collection"
}
Also, this database has a couple million documents, but there are only about 20 distinct values for that "FIELD_NAME". Any help would be appreciated.
I tried it with .hint("index_name") and that didn't work. I tried clearing the plan cache, but I get Feature not supported: planCacheClear.
COLLSCAN and IXSCAN don't differ much in this case; either way, every document or index entry has to be scanned.
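Since distinct() falls back to a COLLSCAN plus HASH_AGGREGATE plan here, the same "unique values" query can be expressed explicitly as an aggregation. A minimal sketch of the equivalent pipeline (plain data structures only; the pymongo call in the comment is an assumed usage, not verified against DocumentDB):

```python
def distinct_pipeline(field):
    """Build an aggregation pipeline equivalent to distinct(field):
    one output document per unique value of the field."""
    return [
        {"$group": {"_id": "$" + field}},
    ]

pipeline = distinct_pipeline("FIELD_NAME")
# With pymongo (assumed): values = [d["_id"] for d in coll.aggregate(pipeline)]
print(pipeline)
```

Whether DocumentDB's planner drives this $group from the index is worth checking with .explain() on the aggregate itself.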

AppSync to DynamoDB update query mapping error

I have the following DynamoDB mapping template, to update an existing DynamoDB item:
{
  "version" : "2017-02-28",
  "operation" : "UpdateItem",
  "key" : {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.application.id),
    "tenant": $util.dynamodb.toDynamoDBJson($ctx.identity.claims['http://domain/tenant'])
  },
  "update" : {
    "expression" : "SET #sourceUrl = :sourceUrl, #sourceCredential = :sourceCredential, #instanceSize = :instanceSize, #users = :users",
    "expressionNames" : {
      "#sourceUrl" : "sourceUrl",
      "#sourceCredential" : "sourceCredential",
      "#instanceSize" : "instanceSize",
      "#users" : "users"
    },
    "expressionValues" : {
      ":sourceUrl" : $util.dynamodb.toDynamoDbJson($ctx.args.application.sourceUrl),
      ":sourceCredential" : $util.dynamodb.toDynamoDbJson($ctx.args.application.sourceCredential),
      ":instanceSize" : $util.dynamodb.toDynamoDbJson($ctx.args.application.instanceSize),
      ":users" : $util.dynamodb.toDynamoDbJson($ctx.args.application.users)
    }
  },
  "condition" : {
    "expression" : "attribute_exists(#id) AND attribute_exists(#tenant)",
    "expressionNames" : {
      "#id" : "id",
      "#tenant" : "tenant"
    }
  }
}
But I'm getting the following error:
message: "Unable to parse the JSON document: 'Unrecognized token '$util': was expecting ('true', 'false' or 'null')↵ at [Source: (String)"{↵ "version" : "2017-02-28",↵ "operation" : "UpdateItem",↵ "key" : {↵ "id": {"S":"abc-123"},↵ "tenant": {"S":"test"}↵ },↵ "update" : {↵ "expression" : "SET #sourceUrl = :sourceUrl, #sourceCredential = :sourceCredential, #instanceSize = :instanceSize, #users = :users",↵ "expressionNames" : {↵ "#sourceUrl" : "sourceUrl",↵ "#sourceCredential" : "sourceCredential",↵ "#instanceSize" : "instanceSize",↵ "#users" : "users"↵ }"[truncated 400 chars]; line: 17, column: 29]'"
I've tried removing parts, and it seems to be related to the expressionValues, but I can't see anything wrong with the syntax.
It seems you misspelled the toDynamoDBJson method.
Replace
$util.dynamodb.toDynamoDbJson($ctx.args.application.sourceUrl)
with
$util.dynamodb.toDynamoDBJson($ctx.args.application.sourceUrl)
Note the uppercase B in toDynamoDBJson.

How can I update a boolean field in DynamoDB with a mapping template?

I have the following code to update an item in the DB with a mapping template:
$!{expSet.put("available", ":available")}
$!{expValues.put(":available", { "BOOL": $ctx.args.available })}
When I send available = false it's OK, but if available = true I get the error:
"Unable to parse the JSON document: 'Unexpected character (':' (code 58)): was expecting double-quote to start field name"
Schema in GraphQL:
type Item {
....
available: Boolean!
....
}
What am I doing wrong?
Your UpdateItem request mapping template should look something like:
{
  "version" : "2017-02-28",
  "operation" : "UpdateItem",
  "key" : {
    "id" : { "S" : "${context.arguments.id}" }
  },
  "update" : {
    "expression" : "SET #available = :available",
    "expressionNames" : {
      "#available" : "available"
    },
    "expressionValues" : {
      ":available" : { "BOOL" : ${context.arguments.available} }
    }
  }
}

Is it possible to create an array pipeline object in AWS Data Pipeline via CloudFormation?

When creating a data pipeline via API / CLI that creates an EmrCluster, I can specify multiple steps using an array structure:
{ "objects" : [
    { "id" : "myEmrCluster",
      "terminateAfter" : "1 hours",
      "schedule" : { "ref" : "theSchedule" },
      "step" : ["some.jar,-param1,val1", "someOther.jar,-foo,bar"] },
    { "id" : "theSchedule", "period" : "1 days" }
] }
I can call put-pipeline-definition referencing the file above to create a number of steps for the EMR cluster.
Now if I want to create the pipeline using CloudFormation, I can use the PipelineObjects property in an AWS::DataPipeline::Pipeline resource type to configure the pipeline. However, pipeline object fields can only be of type StringValue or RefValue. How can I create an array pipeline object field?
Here's a corresponding cloudformation template:
"Resources" : {
  "MyEMRCluster" : {
    "Type" : "AWS::DataPipeline::Pipeline",
    "Properties" : {
      "Name" : "MyETLJobs",
      "Activate" : "true",
      "PipelineObjects" : [
        {
          "Id" : "myEmrCluster",
          "Fields" : [
            { "Key" : "terminateAfter", "StringValue" : "1 hours" },
            { "Key" : "schedule", "RefValue" : "theSchedule" },
            { "Key" : "step", "StringValue" : "some.jar,-param1,val1" }
          ]
        },
        {
          "Id" : "theSchedule",
          "Fields" : [
            { "Key" : "period", "StringValue" : "1 days" }
          ]
        }
      ]
    }
  }
}
With the above template, step is a StringValue, equivalent to:
"step" : "some.jar,-param1,val1"
and not an array like the desired config.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-datapipeline-pipeline-pipelineobjects-fields.html shows that only StringValue and RefValue are valid keys. Is it possible to create an array of steps via CloudFormation?
Thanks in advance.
Ah, I'm not sure where I saw that steps could be configured as an array. The documentation makes no mention of it; instead, it specifies that to define multiple steps you add multiple step entries:
{
  "Id" : "myEmrCluster",
  "Fields" : [
    { "Key" : "terminateAfter", "StringValue" : "1 hours" },
    { "Key" : "schedule", "RefValue" : "theSchedule" },
    { "Key" : "step", "StringValue" : "some.jar,-param1,val1" },
    { "Key" : "step", "StringValue" : "someOther.jar,-foo,bar" }
  ]
}
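If the step list is dynamic, the repeated entries are easy to generate before feeding them to the template. A small sketch (pipeline object names taken from the question; duplicate "step" keys are fine here because Fields is a list, not a map):

```python
import json

def emr_cluster_object(obj_id, steps):
    """Build a DataPipeline pipeline object with one 'step' field per step,
    since CloudFormation's Fields entries cannot hold array values."""
    fields = [
        {"Key": "terminateAfter", "StringValue": "1 hours"},
        {"Key": "schedule", "RefValue": "theSchedule"},
    ]
    fields += [{"Key": "step", "StringValue": s} for s in steps]
    return {"Id": obj_id, "Fields": fields}

obj = emr_cluster_object(
    "myEmrCluster",
    ["some.jar,-param1,val1", "someOther.jar,-foo,bar"],
)
print(json.dumps(obj, indent=2))
```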