AWS: How to get the list of triggers for a DynamoDB table using the Java SDK

There is a DynamoDB table that shows triggers in its Triggers tab. How can I get the list of those triggers for a given table using the AWS Java SDK?
I have gone through the Java SDK but so far have not been able to find any API that returns the list.

The triggers shown in the DynamoDB console are Lambda event source mappings on the table's stream, so they are listed through the Lambda API rather than the DynamoDB API:
https://docs.aws.amazon.com/cli/latest/reference/lambda/list-event-source-mappings.html
Java:
// List the event source mappings for the table's stream ARN;
// each mapping corresponds to one trigger shown in the console.
AWSLambda client = AWSLambdaClientBuilder.standard().build();
ListEventSourceMappingsRequest request = new ListEventSourceMappingsRequest()
        .withEventSourceArn("your table stream arn");
ListEventSourceMappingsResult result = client.listEventSourceMappings(request);
List<EventSourceMappingConfiguration> eventSourceMappings = result.getEventSourceMappings();

I believe these operations are part of DynamoDB Streams - see listStreams.
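For completeness, a minimal sketch of listing a table's streams with the DynamoDB Streams client; the table name is a placeholder, and each returned stream ARN can then be passed to listEventSourceMappings above:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreams;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBStreamsClientBuilder;
import com.amazonaws.services.dynamodbv2.model.ListStreamsRequest;
import com.amazonaws.services.dynamodbv2.model.Stream;

// List the streams attached to the table and print their ARNs.
AmazonDynamoDBStreams streamsClient = AmazonDynamoDBStreamsClientBuilder.standard().build();
ListStreamsRequest listStreamsRequest = new ListStreamsRequest().withTableName("your-table-name");
for (Stream stream : streamsClient.listStreams(listStreamsRequest).getStreams()) {
    System.out.println(stream.getStreamArn());
}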

Related

AWS-CDK - DynamoDB Initial Data

I'm using the AWS CDK for a serverless project but I've hit a sticking point. My project deploys a DynamoDB table which I need to populate with data prior to my Lambda function executing.
The data that needs to be loaded is generated by making API calls and isn't static data that can be loaded by a .json file or something simple.
Any ideas on how to approach this requirement for a production workload?
You can use AwsCustomResource in order to make a PutItem call to the table.
AwsSdkCall initializeData = AwsSdkCall.builder()
        .service("DynamoDB")
        .action("putItem")
        .physicalResourceId(PhysicalResourceId.of(tableName + "_initialization"))
        .parameters(Map.ofEntries(
                Map.entry("TableName", tableName),
                Map.entry("Item", Map.ofEntries(
                        Map.entry("id", Map.of("S", "0")),
                        Map.entry("data", Map.of("S", data))
                )),
                Map.entry("ConditionExpression", "attribute_not_exists(id)")
        ))
        .build();

AwsCustomResource tableInitializationResource = AwsCustomResource.Builder.create(this, "TableInitializationResource")
        .policy(AwsCustomResourcePolicy.fromStatements(List.of(
                PolicyStatement.Builder.create()
                        .effect(Effect.ALLOW)
                        .actions(List.of("dynamodb:PutItem"))
                        .resources(List.of(table.getTableArn()))
                        .build()
        )))
        .onCreate(initializeData)
        .onUpdate(initializeData)
        .build();

tableInitializationResource.getNode().addDependency(table);
The PutItem operation is triggered when the stack is created or when the table is updated (the tableName is expected to change in that case). If that does not work for some reason, you can set physicalResourceId to a random value, e.g. a UUID, to trigger the operation on every stack update (the operation is idempotent thanks to the ConditionExpression).
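For example, the random physicalResourceId variant mentioned above could look like this:

// Runs the call on every deployment; safe because the ConditionExpression
// makes the PutItem idempotent.
.physicalResourceId(PhysicalResourceId.of(java.util.UUID.randomUUID().toString()))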
CustomResource allows you to write custom provisioning logic. In this case you could use something like an AWS Lambda function behind a custom resource to read in the custom JSON and update DynamoDB.
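A minimal sketch of that approach with the CDK Java bindings; the handler jar, handler class, and construct IDs are illustrative assumptions:

import software.amazon.awscdk.CustomResource;
import software.amazon.awscdk.customresources.Provider;
import software.amazon.awscdk.services.lambda.Code;
import software.amazon.awscdk.services.lambda.Function;
import software.amazon.awscdk.services.lambda.Runtime;

// A Lambda function that fetches the data from the external APIs and
// writes it to the table; it backs a custom resource provider.
Function seedHandler = Function.Builder.create(this, "SeedHandler")
        .runtime(Runtime.JAVA_11)
        .handler("com.example.SeedHandler::handleRequest")
        .code(Code.fromAsset("seed-handler.jar"))
        .environment(Map.of("TABLE_NAME", table.getTableName()))
        .build();
table.grantWriteData(seedHandler);

Provider seedProvider = Provider.Builder.create(this, "SeedProvider")
        .onEventHandler(seedHandler)
        .build();
CustomResource.Builder.create(this, "SeedResource")
        .serviceToken(seedProvider.getServiceToken())
        .build();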

Facing an issue with the DataSourceId parameter of the CreateDataSource method in the QuickSight Java SDK

So I am using the QuickSight Java SDK to integrate S3 with QuickSight, and for that I am using CreateDataSource. While using this method, I have to pass the parameter DataSourceId. The description of this parameter in the AWS documentation reads: "An ID for the data source. This ID is unique per AWS Region for each AWS account." I do not know how to get it; I have to obtain this parameter programmatically. The type of this parameter is String.
getClient().createDataSource(new CreateDataSourceRequest()
        .withDataSourceId("DataSourceID")
        .withAwsAccountId("AWS ACCOUNT")
        .withName("display name of data source")
        .withType(DataSourceType.S3));
getClient() is a client for using the QuickSight API.
And yes, for the integration of S3 and QuickSight I cannot do this through the AWS console; I have to do it programmatically.
I do not know how to get it; I have to obtain this parameter programmatically
This is something that you set yourself, for example:
DataSourceId = "my-first-test-data-source-id"
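If you would rather not invent the value by hand, one common pattern (an assumption on my part, not something the API requires) is to generate the ID, e.g. with java.util.UUID, and store it for later DescribeDataSource/UpdateDataSource calls:

// Hypothetical: generate a unique DataSourceId and reuse the request
// shape from the question; account ID and name are placeholders.
String dataSourceId = java.util.UUID.randomUUID().toString();
getClient().createDataSource(new CreateDataSourceRequest()
        .withDataSourceId(dataSourceId)
        .withAwsAccountId("AWS ACCOUNT")
        .withName("display name of data source")
        .withType(DataSourceType.S3));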

Moving data from SQS to S3

I have a queue in SQS into which many notifications with data are pushed (more than 9M notifications a day).
I'd like to know if there is a way to create S3 objects from the messages in SQS (the path is given in an attribute of the message).
I prefer an out-of-the-box solution without coding.
If such a solution doesn't exist, would you recommend a Lambda function to do that instead of a process (code running on EC2)?
Thanks,
I found an official document describing something similar to what you want to achieve; it is mainly recommended when you have big/large messages. As of today, the feature is only available in the Java SDK. Example:
/*
 * Set the Amazon SQS extended client configuration with large payload
 * support enabled. Payloads above the threshold are stored in the S3
 * bucket, and the queue carries a reference to the S3 object.
 */
final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
final ExtendedClientConfiguration extendedClientConfig =
        new ExtendedClientConfiguration()
                .withLargePayloadSupportEnabled(s3, S3_BUCKET_NAME);
final AmazonSQS sqsExtended =
        new AmazonSQSExtendedClient(AmazonSQSClientBuilder
                .defaultClient(), extendedClientConfig);
Example reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/working-java-example-using-s3-for-large-sqs-messages.html
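On the second part of the question: if an out-of-the-box integration does not fit, a Lambda function triggered by the queue is generally easier to operate than a long-running process on EC2 at this volume. A minimal sketch, assuming a "path" message attribute as described above and a placeholder bucket name:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class SqsToS3Handler implements RequestHandler<SQSEvent, Void> {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // The object key comes from the "path" message attribute,
            // as mentioned in the question; the bucket is an assumption.
            String key = message.getMessageAttributes().get("path").getStringValue();
            s3.putObject("your-target-bucket", key, message.getBody());
        }
        return null;
    }
}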

How to get tags for an Amazon DynamoDB table in the Java SDK

I am trying to get the tags associated with a DynamoDB table. I could not find anything in the API. Can anyone guide me?
Here is the class and method.
Class: AmazonDynamoDBClient
Method: ListTagsOfResourceResult listTagsOfResource(ListTagsOfResourceRequest listTagsOfResourceRequest)
Lists all tags on an Amazon DynamoDB resource.
Sample code:
ListTagsOfResourceRequest listTagsOfResourceRequest = new ListTagsOfResourceRequest()
.withResourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/Movies");
return dynamoDBClient.listTagsOfResource(listTagsOfResourceRequest);
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/index.html
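Note that the response can be paginated. A small sketch, reusing the request object above, that follows NextToken and prints every tag:

// Page through all tags on the resource.
ListTagsOfResourceResult result;
String nextToken = null;
do {
    result = dynamoDBClient.listTagsOfResource(listTagsOfResourceRequest.withNextToken(nextToken));
    for (Tag tag : result.getTags()) {
        System.out.println(tag.getKey() + " = " + tag.getValue());
    }
    nextToken = result.getNextToken();
} while (nextToken != null);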

Writing to Kinesis stream using AWS Lambda Function

Can we create a Lambda function that gets executed when we write a record to a DynamoDB table, and have that record written to a Kinesis stream? Basically, can we write to a Kinesis stream from a Lambda function? If yes, please share sample code for that. I also want to know how it works. Thank you.
Yes. You can create a Dynamo Trigger backed by a Lambda function, and have that Lambda Function write to a stream.
Here is a walk through that shows how to create a trigger:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
In the body of your Lambda function you can then call the Kinesis "putRecord" function. Here's info on "putRecord":
http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html
If you are implementing your Lambda function in Node.js, here's a link to the SDK docs for Kinesis:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Kinesis.html#putRecord-property
Similarly here is a link for the Java SDK (if you are using java):
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/kinesis/AmazonKinesis.html#putRecord(com.amazonaws.services.kinesis.model.PutRecordRequest)
And a link to the Boto docs (if you are using python):
http://boto.cloudhackers.com/en/latest/ref/kinesis.html
The doc links should have all the info you need.
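If you are implementing the Lambda function in Java, a minimal sketch of such a handler (the stream name and partition key choice are placeholders, not from the walkthrough):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;

public class DynamoToKinesisHandler implements RequestHandler<DynamodbEvent, Void> {
    private final AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) {
            // Forward the new image of each changed item to Kinesis,
            // partitioned by the item's key attributes.
            String payload = String.valueOf(record.getDynamodb().getNewImage());
            kinesis.putRecord("your-kinesis-stream",
                    ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)),
                    record.getDynamodb().getKeys().toString());
        }
        return null;
    }
}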
Not answering the question directly, but adding the latest solution to this part of it:
"when we write a record to Dynamo DB table & that record is written to Kinesis stream"
You can now enable Kinesis Data Streams directly on a DynamoDB table: https://aws.amazon.com/about-aws/whats-new/2020/11/now-you-can-use-amazon-kinesis-data-streams-to-capture-item-level-changes-in-your-amazon-dynamodb-table/