Handle an if/else condition in a DynamoDB update

I have the following scenario to handle:
If a is not present in DynamoDB, update a with the received value and leave b unchanged.
If a is present in DynamoDB, update b with the received value and leave a unchanged.
Can someone tell me how this can be handled?

You can use Condition Expressions in DynamoDB for this. The Amazon documentation covers them better than any other reference:
Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html
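
For illustration, a minimal sketch in Python with boto3 (the table name "my_table", key "pk", and the try-then-fallback structure are my assumptions, not from the original post): try to set a with a condition that a does not exist yet; if that condition fails, a is already there, so update b instead.

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder table/key names: "my_table" with partition key "pk".
    table = boto3.resource("dynamodb").Table("my_table")

    def update_a_or_b(pk, received_value):
        try:
            # Sets a only when a is absent; b is not touched.
            table.update_item(
                Key={"pk": pk},
                UpdateExpression="SET a = :v",
                ConditionExpression="attribute_not_exists(a)",
                ExpressionAttributeValues={":v": received_value},
            )
        except ClientError as e:
            if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
            # a already exists, so update b and leave a untouched.
            table.update_item(
                Key={"pk": pk},
                UpdateExpression="SET b = :v",
                ExpressionAttributeValues={":v": received_value},
            )

Note that a single condition expression cannot branch between two different updates, which is why the sketch falls back to a second UpdateItem when the conditional check fails.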

Error trying to create a GCP Pub/Sub BigQuery subscription

I am trying to create a GCP Pub/Sub BigQuery subscription using the console: https://cloud.google.com/pubsub/docs/bigquery
However, I get the following error message:
API returned error: ‘Request contains an invalid argument.’
Any help would be appreciated.
NOTE:
1. When the BigQuery table does not exist, I get the following error:
2. The Pub/Sub schema was not deleted.
Update:
Actually, despite the generic error message, this turns out to be a straightforward issue.
To use the "write metadata" option the BigQuery table must previously have the following field structure:
subscription_name (string),
message_id (string)
publish_time (timestamp)
data (bytes, string or json)
attributes (string or json)
they are better described in the docs here
Once they are created, this option works fine.
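
In case it is useful, here is a rough sketch of creating a table with that field structure via the google-cloud-bigquery Python client (the project, dataset, and table names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    # The metadata fields Pub/Sub expects; for "data" and "attributes",
    # the BYTES/STRING/JSON variants from the list above are also possible.
    schema = [
        bigquery.SchemaField("subscription_name", "STRING"),
        bigquery.SchemaField("message_id", "STRING"),
        bigquery.SchemaField("publish_time", "TIMESTAMP"),
        bigquery.SchemaField("data", "STRING"),
        bigquery.SchemaField("attributes", "STRING"),
    ]

    table = bigquery.Table("my-project.my_dataset.my_table", schema=schema)
    client.create_table(table)  # raises Conflict if the table already exists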
I believe the problem with the "Use topic schema" option is also related to the schema used in the topic, since the table must already have the same structure (but you will need to check that in your configuration). If your topic follows an Avro schema, this might help: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#avro_conversions
---------- previous answer
Not a definitive answer, but I am having the same problem and figured out that it is somewhat related to the following options:
Use topic schema
Write metadata
Unchecking them makes it work.
The same thing also happens when using Terraform to try to set up the infrastructure.
I am still investigating whether it is a bug or perhaps an error in my schema definition, but maybe it can be a starting point for you as well.
A 404 error would indicate that either the BigQuery table does not exist or the Pub/Sub schema associated with the topic was deleted. For the former, ensure that the project, data set, and table names all match the names of the existing table to which you want to write data. For the latter, you could look at the topic details page and make sure that the schema name is not _deleted-schema_.

AWS Step Functions - Read Message from SQS Queue and Save to DynamoDB - Can't Save Message Attributes

I am trying to implement a Step Function that reads from SQS and saves the message to DynamoDB. I followed the tutorial on AWS and it works.
However, when I add message attributes to the message, these aren't stored in the DB.
My question is: how do I get Step Functions to find and save the message attributes?
Link to tutorial - https://docs.aws.amazon.com/step-functions/latest/dg/sample-map-state.html
Thanks,
Sean
Figured it out, in case anyone else has the same problem:
Go to the Lambda console, edit the auto-generated function, and add the following to the parameters of the SQS ReceiveMessage call in the Node.js code:
    MessageAttributeNames: ["All"],
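
For anyone writing the Lambda in Python rather than Node.js, a rough boto3 equivalent of the same fix (the queue URL is a placeholder):

    import boto3

    sqs = boto3.client("sqs")

    resp = sqs.receive_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        MaxNumberOfMessages=10,
        # Without this, SQS omits message attributes from the response.
        MessageAttributeNames=["All"],
    )
    for msg in resp.get("Messages", []):
        print(msg.get("MessageAttributes"))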
Thanks to anyone who took a look at the question

AWS DynamoDB scan for unindexed nested attributes

Does anyone know how to run a scan command on an AWS DynamoDB table,
and get just the items where someNestedElement == 'foo' (this nested element is not indexed)?
Preferably in Ruby with the aws-sdk, or via the AWS console.
Thanks
So indeed, as #mircea commented, I succeeded with:
    dynamodb.scan(table_name: "my_table", filter_expression: "some.nested.attribute = :name", expression_attribute_values: { ":name" => "my_name" })
I had a problem due to using reserved words such as "Data" and "Raw".
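
For that reserved-word issue, ExpressionAttributeNames placeholders are the usual fix. A sketch in Python/boto3 (the Ruby SDK takes an analogous expression_attribute_names option; the table and attribute names here are illustrative):

    import boto3

    dynamodb = boto3.client("dynamodb")

    resp = dynamodb.scan(
        TableName="my_table",
        # "#d" and "#r" stand in for the reserved words "Data" and "Raw".
        FilterExpression="#d.#r.attribute = :name",
        ExpressionAttributeNames={"#d": "Data", "#r": "Raw"},
        ExpressionAttributeValues={":name": {"S": "my_name"}},
    )
    items = resp["Items"]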
Does anyone know how to aggregate and group the items returned by the scan?

Is it possible to get a time for state transition for an Amazon EC2 instance?

I'm accessing EC2 with the aws-sdk for Ruby. I have an array of instances from describe_instances().
This provides me with the state of the instances and even a state transition reason. But how can I get a time for the state transition?
Edit
So I have:
client = Aws::EC2::Client.new
resp = client.describe_instances(filters: [...])
and I would need
resp.reservations[0].instances[0].state_transition_time #=> Time
similar to
resp.reservations[0].instances[0].state_transition_reason #=> String
This information is not available via the Amazon EC2 API at this time. The aws-sdk gem returns all of the information available from the DescribeInstances operation as documented here: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
The State Transition Reason is not always populated with a date and time and may not even be populated at all per the documentation. I have not found any hints in the documentation that specify the conditions in which you DO get a date/time, but in my experience, the date/time are present in the State Transition Reason for between 30 and 90 days. After that, the reason seems to persist, but the date is dropped from the string.
All of the documentation that I can find is listed here:
Attribute Definition
EC2 API - Ruby
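
If you need the time while it is still embedded in the reason string, a best-effort parse is about all that is available. A Python sketch (the "(YYYY-MM-DD hh:mm:ss GMT)" suffix is the format I have observed, not a documented contract):

    import re
    from datetime import datetime, timezone

    def transition_time(reason):
        # e.g. "User initiated (2016-01-30 20:35:21 GMT)"
        m = re.search(r"\((\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) GMT\)", reason or "")
        if m is None:
            return None  # reason not populated, or the date has been dropped
        return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

    print(transition_time("User initiated (2016-01-30 20:35:21 GMT)"))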

Get a permanent BrowseNodeId from the Amazon Product API

I'm using the Amazon Product API to get new products of a certain category every day.
Sometimes the BrowseNodeId changes. Let's say I want to get new books about 'Python'; at the moment I'm calling the Amazon API like this:
get_new_products(BrowseNodeId=36848)
but sometimes the BrowseNodeId changes and I get this error:
InvalidParameterValue: AWS.InvalidParameterValue: 36848 is not a valid value for BrowseNodeId. Please change this value and retry your request.
What can I do to keep the BrowseNodeId of Python updated? Which API should I use to get the BrowseNodeId passing the string 'Python'?
I need something like getBrowseNodeId('Python') to get the correct, up-to-date BrowseNodeId and avoid the error posted above.
Any advice or workflow that I can follow?
Thanks in advance.
Best regards.
Amazon does seem to change the IDs every so often. The recommended way is to take a product and go through its associated BrowseNodes.
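
As a sketch of that approach in Python, using the third-party bottlenose wrapper (the credentials and the sample ASIN are placeholders): look up one known Python book with the BrowseNodes response group, then walk its browse nodes for the one named 'Python'.

    import bottlenose
    import xml.etree.ElementTree as ET

    amazon = bottlenose.Amazon("ACCESS_KEY", "SECRET_KEY", "ASSOCIATE_TAG")

    # Look up one known Python book and ask for its browse nodes.
    xml_response = amazon.ItemLookup(ItemId="0596158068", ResponseGroup="BrowseNodes")
    root = ET.fromstring(xml_response)

    def local(tag):
        return tag.rsplit("}", 1)[-1]  # drop the XML namespace prefix

    # Walk every BrowseNode (ancestors included) and pick the one named 'Python'.
    for node in root.iter():
        if local(node.tag) == "BrowseNode":
            fields = {local(c.tag): c.text for c in node}
            if fields.get("Name") == "Python":
                print(fields.get("BrowseNodeId"))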