Can we develop streams in C++ using the AWS SDK for C++?

I created a table with streams enabled. How can I get the stream records whenever an update or insertion happens? What are the steps involved? Or is it not possible in C++?

You should be able to do it using the AWS SDK for C++.
Here is an example:
https://aws.amazon.com/blogs/developer/using-a-thread-pool-with-the-aws-sdk-for-c/
The AWS blog has many more examples: https://aws.amazon.com/blogs/developer/category/cpp/
Check the links below (look at aws-cpp-sdk-kinesis) for more info:
https://github.com/aws/aws-sdk-cpp
https://aws.amazon.com/blogs/aws/aws-sdk-for-c-now-ready-for-production-use/
https://aws.amazon.com/blogs/aws/introducing-the-aws-sdk-for-c/
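At a high level, the consumer side is: DescribeStream to list the shards, GetShardIterator for each shard, then a loop on GetRecords that follows the NextShardIterator until the shard closes. A minimal sketch of that polling loop, with a stand-in fetch function in place of the SDK's GetRecords call so it is self-contained (the struct and function names below are mine, not SDK types):

```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

// A page of stream records plus the iterator for the next page.
// In the real SDK this corresponds to a GetRecords result, whose
// record list and NextShardIterator fields play these roles.
struct RecordPage {
    std::vector<std::string> records;         // stand-in for stream records
    std::optional<std::string> nextIterator;  // empty once the shard is closed
};

// Generic polling loop: follow NextShardIterator until the shard closes.
// `fetch` is a stand-in for calling GetRecords on the streams client,
// so this sketch needs no AWS credentials to run.
int DrainShard(std::string iterator,
               const std::function<RecordPage(const std::string&)>& fetch) {
    int processed = 0;
    while (true) {
        RecordPage page = fetch(iterator);
        processed += static_cast<int>(page.records.size());  // handle records here
        if (!page.nextIterator) break;  // closed shard: no further pages
        iterator = *page.nextIterator;
    }
    return processed;
}
```

With the real SDK, `fetch` would wrap the DynamoDB Streams client (there is a dynamodbstreams module alongside aws-cpp-sdk-kinesis in the GitHub repo above), and production code would also poll for newly split shards rather than draining a single shard once.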

Related

Is it possible to create an AWS lambda layer for C++ runtime?

We have a few C++ workloads in our organization that need to be migrated to the cloud, and we are trying to go serverless.
As a PoC, I am trying to create a sample Lambda function that runs C++ code. This uses the C++ runtime.
I was able to create a zip file that packages a C++ hello world function, along with the C++ runtime, by following this AWS blog.
I wanted to isolate the C++ runtime in a Lambda layer for the obvious advantages that a layer provides, but I was not able to find any documentation that helps with this.
My question is: is it possible to create a Lambda layer for the C++ runtime?
If so, what should my bootstrap file look like?
Any pointers on this would really help! Thanks!
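For reference, with a custom runtime Lambda simply executes a file named bootstrap (from the function package, or from /opt when it comes from a layer). A minimal sketch of such a bootstrap, assuming your compiled handler binary is named my-handler (a placeholder, not a name from the AWS blog):

```shell
#!/bin/sh
# Lambda runs this file for custom runtimes; it must exec the real handler.
# "my-handler" is a placeholder for your compiled C++ executable.
set -eu
exec "$LAMBDA_TASK_ROOT/my-handler"
```

If the runtime pieces live in a layer, they are unpacked under /opt, so the exec path would point there instead.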

Python AWS SDK is missing the Transcribe streaming API

I checked the GitHub code for Transcribe streaming options, and it looks like there is no mention of Transcribe streaming in either the docs or the config file: src/botocore/botocore/data/transcribe/2017-10-26/service-2.json.
But I see documentation for Ruby: https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/TranscribeStreamingService.html
This is why I believe it makes sense to do it using a scripting (not compiled) language.
Are there any plans to add it in the future? If yes, when approximately?
Or am I simply missing it?
I saw that the documentation describes the low-level communication for this kind of API, but I want to save dev time by reusing the lib.
The Python SDK for Transcribe streaming is not available yet.

How can I create a rule for a thing using the aws iot java sdk?

I am trying to create a rule without using the UI provided by AWS, but using the Java SDK. Looking over the code of the Java SDK, I have not seen any code that creates a rule. Any help would be appreciated.
After painful research and posting questions in the AWS forum, I have found out that there is no way to create a custom rule GUI by installing it from here; you have to follow this approach instead. You can find out how here.
Here are some docs that may help you as well
link 1
link 2
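For what it's worth, rules can be created programmatically through the CreateTopicRule API (exposed by the AWS IoT SDKs, and by the CLI as aws iot create-topic-rule). A sketch of assembling the rule payload JSON that the API expects — the helper method, topic names, and role ARN below are illustrative placeholders of mine, not SDK identifiers:

```java
// Builds the JSON payload shape accepted by the create-topic-rule API:
// an IoT SQL statement plus a list of actions (here, a republish action).
// buildTopicRulePayload and all argument values are hypothetical examples.
public class TopicRulePayloadSketch {
    static String buildTopicRulePayload(String sql, String republishTopic, String roleArn) {
        return "{"
            + "\"sql\":\"" + sql + "\","
            + "\"ruleDisabled\":false,"
            + "\"actions\":[{\"republish\":{"
            + "\"topic\":\"" + republishTopic + "\","
            + "\"roleArn\":\"" + roleArn + "\"}}]"
            + "}";
    }

    public static void main(String[] args) {
        // Example: republish matching messages to another topic.
        String payload = buildTopicRulePayload(
            "SELECT * FROM 'sensors/+/temperature'",
            "alerts/temperature",
            "arn:aws:iam::123456789012:role/iot-republish-role");
        System.out.println(payload);
    }
}
```

With the Java SDK you would pass the equivalent of this payload to the IoT client's createTopicRule call rather than building raw JSON by hand; the sketch just shows what information a rule needs.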

Is it possible to extend the WebJobs SDK?

Is there a way to extend the Azure WebJobs SDK if I want something other than a queue, blob, or table to trigger my job function?
As of the 1.1.0 release of the WebJobs SDK you can now write your own binding extensions. See the azure-webjobs-sdk-extensions repo for more information. That repo contains several extensions built on the new extensibility model that you can use in your applications (e.g. TimerTrigger, FileTrigger, SendGrid, etc.) Those extensions also serve as a demonstration of how to author your own extensions. The Binding Extensions Overview section of the Wiki walks you through the process of creating your own extension, starting from a sample template.
Sorry, that's not possible yet. However, you can always write your own event, use JobHost.Call inside it to invoke the function(s), and get all the benefits of the WebJobs SDK (logging on the dashboard, bindings, etc.).

jar containing org.apache.hadoop.hive.dynamodb

I was trying to programmatically load a DynamoDB table into HDFS (via Java, not Hive). I couldn't find examples online of how to do it, so I thought I'd download the jar containing org.apache.hadoop.hive.dynamodb and reverse engineer the process.
Unfortunately, I couldn't find the jar either.
Could someone answer the following questions for me (listed in order of priority)?
1. A Java example that loads a DynamoDB table into HDFS (one that can be passed to a mapper as a table input format).
2. The jar containing org.apache.hadoop.hive.dynamodb.
Thanks!
It's in hive-bigbird-handler.jar. Unfortunately, AWS doesn't provide any source, or even Javadoc, for it. But you can find the jar on any node of an EMR cluster:
/home/hadoop/.versions/hive-0.8.1/auxlib/hive-bigbird-handler-0.8.1.jar
You might want to checkout this Article:
Amazon DynamoDB Part III: MapReducin’ Logs
Unfortunately, Amazon haven't released the sources for hive-bigbird-handler.jar, which is a shame considering its usefulness. Of particular note, it seems it also includes built-in support for Hadoop's Input and Output formats, so one can write straight MapReduce Jobs, writing directly into DynamoDB.
Tip: search for hive-bigbird-handler.jar to get to the interesting parts... ;-)
1- I am not aware of any such example, but you might find this library useful. It provides InputFormats, OutputFormats, and Writable classes for reading and writing data to Amazon DynamoDB tables.
2- I don't think they have made it available publicly.