Convert object to ByteBuffer

My situation: we're developing on spring.boot.version = 1.4.2 and can't upgrade our Boot version (our service is too big).
I need to use Kafka in our service.
So I implemented this feature using spring-cloud-stream-binder-kafka.
spring-cloud-stream-binder-kafka:1.1.2.RELEASE supports Spring Boot 1.4.6, so I was able to implement the feature.
So far, so good.
But we run our services on AWS, and as far as I know there is no Kafka offering in AWS.
So I tried spring-cloud-stream-binder-kinesis:1.0.0.RELEASE.
Unfortunately, that version requires Spring Boot 2.0.0 or later.
So I have to implement this feature using the Kinesis Producer Library.
(I'm referring to https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java)
I have to publish a Java object to Kinesis, so I need to pass it as the data argument of KinesisProducer.addUserRecord, which takes a ByteBuffer.
So, how can I convert a Java object to a ByteBuffer?

You need to convert it to a byte[] first, then call ByteBuffer.wrap() on that array.
You could use Java serialization for this, but I strongly recommend some form of JSON serialization instead. That makes the records easily usable by other consumers, which is one of the reasons to use something like Kinesis in the first place.
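As a minimal sketch of that approach (the order fields here are made up for illustration; in practice you would likely use a JSON library such as Jackson or Gson rather than building the string by hand):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RecordConverter {
    // Serialize the object's fields to JSON text, encode it as UTF-8 bytes,
    // and wrap those bytes in a ByteBuffer suitable for addUserRecord().
    public static ByteBuffer toByteBuffer(String orderId, int quantity) {
        String json = String.format("{\"orderId\":\"%s\",\"quantity\":%d}", orderId, quantity);
        return ByteBuffer.wrap(json.getBytes(StandardCharsets.UTF_8));
    }
}
```

You would then pass the result as the data argument of KinesisProducer.addUserRecord, and any consumer that can parse JSON can decode the record.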
Also, AWS does provide a managed Kafka service. I haven't used it, so I can't compare it to a self-managed Kafka cluster, and I don't know whether it's available in all regions. But if you already have the tools and experience to use Kafka, it might be the better choice for you.

Related

Is there any equivalent feature to BPMN UserTask available in AWS Step functions?

We have an old Camunda-based Spring Boot application, currently deployed to Kubernetes running on an AWS EC2 instance. This application acts as the backend for an Angular-based UI application.
Now we need to develop a new application, similar to the one above, which also needs to interact with the UI.
Our process flow will contain some BPMN UserTasks, which wait until a manual interaction is performed by a human user via the Angular UI.
We are evaluating whether AWS Step Functions could be used instead of Camunda.
I googled but was unable to find a concrete answer.
Does AWS Step Functions have any feature similar to BPMN/Camunda's UserTask?
Short answer: No.
====================================
Long answer:
After a whole day of study, I decided to continue with Camunda BPM, for the reasons below.
AWS Step Functions has no equivalent of the BPMN UserTask.
Step Functions supports minimal human intervention by sending emails/messages using AWS SQS (Simple Queue Service) and AWS SNS (Simple Notification Service).
Refer to this link for a full example. This manual interaction is based on a 'task token', so it is limited to a basic conversational style.
Step Functions does not come with built-in database & data-management support; the developer has to design the database schema, create the tables and their relationships, and so on.
Camunda, on the other hand, takes care of creating the tables and their relationships, and of saving & fetching data.
There is no GUI modeler in Step Functions; instead you describe the workflow in a JSON-based language, which becomes very difficult once the workflow gets complex.
Drawing a workflow in Camunda is just drag-and-drop with its Modeler.
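For reference, the task-token interaction mentioned above looks roughly like this in Amazon States Language (the queue URL and state name are illustrative). The `.waitForTaskToken` suffix pauses the execution until something calls SendTaskSuccess with the token, which is the closest Step Functions gets to a UserTask:

```json
{
  "StartAt": "WaitForHumanApproval",
  "States": {
    "WaitForHumanApproval": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
      "Parameters": {
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/approvals",
        "MessageBody": { "token.$": "$$.Task.Token" }
      },
      "End": true
    }
  }
}
```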

Using DynamoDB from CakePHP 3 installed to Elastic Beanstalk

I have installed CakePHP 3 using directions from this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-cakephp-tutorial.html
It is working perfectly, and the installation was actually quite easy. PHP, CakePHP and MySQL are all working, and I noticed that the newest AWS SDK is installed as a whole in the vendor directory, so I am fully set up to use DynamoDB as a data source as well. You might ask why I would use DynamoDB when I am already using MySQL/MariaDB: we have an application in production that uses DynamoDB, and we need to be able to write an admin application on top of it using CakePHP. This is not a technical decision but one coming from the business side.
I found a good tutorial by StarTutorial on using DynamoDB as a session handler in CakePHP 3:
https://www.startutorial.com/articles/view/using-amazon-dynamodb-as-session-handler-in-cakephp-3
Well, it can't be far from there to using DynamoDB for putting data, getting data and doing scans, can it? Do you have any simple example of how to do it: how to write data to DynamoDB, or how to do a scan?
I have also read the article:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.PHP.html
and this works fine, no problem. But I would like to keep all the advantages of CakePHP 3 (templating, security and so on): thousands of hours saved thanks to well-written code, and a very fast start on coding, for example, an admin console :)
Thank you,
You could create a Lambda function (in case you want to go serverless) or any other microservice to abstract communication with your DynamoDB. This will definitely simplify your PHP code. You may call Lambda functions directly (via API Gateway), or post messages to SQS for better decoupling. I would recommend the use of SQS -- you'll need some kind of microservice anyway to consume messages and deal with your DynamoDB in a CQRS fashion. Hope it helps!
Thank you for your answer, but I was looking for an example of how to use the AWS SDK for DynamoDB without adding more complexity to this environment than it already has. Your suggestion would mean creating yet another layer instead of using the SDK that already exists. Can you please give a working example of how the AWS SDK is used from CakePHP 3, so that DynamoDB can serve as a data source for its applications without losing CakePHP's own resources and capabilities (MVC, security, etc.)?
Thank you,
After some hard debugging and bug hunting, I was able to get it working using only the AWS SDK in CakePHP 3.

how to integrate AWS services for language without sdk

AWS provides SDKs only for some languages. How can I integrate AWS services into an application written in a language for which no official SDK is provided, e.g. C or Scala or Rust? I know that some AWS SDK projects are available for Scala, but as they are individual contributions (and not AWS releases), I am reluctant to use them.
All the SDKs do is wrap a minimal interface around the API calls made to the AWS servers. For any service you wish to integrate into your application, just head over to its API documentation and write your own code/wrappers.
For example, this link takes you to the API reference for the EC2 service.
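The hardest part of hand-rolling such a wrapper is usually request signing. As a sketch, the Signature Version 4 signing-key derivation described in the AWS documentation can be written in plain Java like this (a full client would additionally build the canonical request and string-to-sign before applying this key):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class SigV4 {
    // HMAC-SHA256 of a UTF-8 string under the given key.
    static byte[] hmac(byte[] key, String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Derives the SigV4 signing key: chained HMACs over the date
    // (yyyyMMdd), region, service name, and the fixed terminator.
    public static byte[] signingKey(String secret, String date, String region, String service) {
        byte[] kDate = hmac(("AWS4" + secret).getBytes(StandardCharsets.UTF_8), date);
        byte[] kRegion = hmac(kDate, region);
        byte[] kService = hmac(kRegion, service);
        return hmac(kService, "aws4_request");
    }
}
```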
In the early days of AWS, we needed an SDK for C++. At that time an SDK for C++ did not exist, so I wrote one based on the REST API. This is no easy task, as the Amazon API is huge, and by the time you finish coding one service you have to go back and catch up with all of the AWS feature improvements and changes. It seems like a never-ending loop.
Several of the AWS SDKs were started by third-party developers and then contributed to Amazon as open source projects. If there is a popular language that you feel others could benefit from, start an open source project and get everyone involved. It could become an official project if there is enough demand. Ping me if you do, as I might be interested in contributing.

AWS Serverless application load time with the Spring framework

I am building a web application in AWS using the serverless architecture.
The purpose of the application is to expose a public API to upload files from around the world.
I use AWS API-Gateway and Lambda to execute my code and S3 as storage.
I know that it is very much possible and well supported (even by 3rd parties like the Serverless framework) to use Java Spring framework to write the code that I deploy in my Lambda function.
However, is it really recommended? Spring applications usually take 30 seconds or more to load completely, whereas a Lambda should run immediately.
How come this option is even supported by AWS (since it sounds like a very bad idea)?
Java is one of the programming languages supported by AWS Lambda. It is possible to run a Spring application on it; you just have to take the warm-up time into consideration. If that fits your use case, then use it. You could also use SNS with a hook to your Lambda to keep it warm when you are not receiving requests.
Using Java with AWS Lambda is perfectly fine, but Lambdas are functions, not applications!
So you should avoid using a framework like Spring, because you don't need it.
The question is: what do you want to achieve in your function, and why do you need a framework to execute such a small amount of code?
What's your use case?
Personally, I would AVOID using the Java runtime for AWS Lambda as much as possible. I understand that it is very tempting to use Java when you are looking to migrate an existing implementation into microservices, but you will always pay the penalty of slow warm-up time compared to other runtimes. You may also miss out on Java compiler optimisations, as the Lambda may not be invoked enough times to trigger C1 and C2 compilation.
My preference would be to use Java for Lambda only if you are planning to write a lean implementation, meaning no Spring, Hibernate, etc.
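If you do stick with Java, a lean Lambda needs no framework at all: Lambda can invoke a plain POJO handler method directly. The class below is an illustrative sketch (the class name and the event field are made up; a real handler would do the actual upload work):

```java
import java.util.Map;

// A framework-free Lambda handler: the runtime instantiates the class
// and calls handleRequest with the deserialized event.
public class UploadHandler {
    public String handleRequest(Map<String, Object> event) {
        Object key = event.get("objectKey"); // hypothetical event field
        return "received " + key;
    }
}
```

Because there is no application context to bootstrap, cold starts are dominated only by JVM startup rather than by framework initialisation.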

implementing MQTT using Google cloud pub sub

I want to implement MQTT using the Pub/Sub API of Google App Engine in Python. How can I run the Pub/Sub library in the standard environment? If I am required to run an older version of this API, can anyone provide a sample? One other issue is that the latest library is an alpha version. Later on I will connect the MQTT client using the GCP IoT protocol.
I would strongly advise against it. Not only are you wasting your time and energy, you are also trying to use something in a way it is not meant to be used. In the end, the cost is going to be huge compared to deploying an MQTT broker on your own instance.
If you are looking for a fully managed solution on GCP, you might be interested in trying out Cloud IoT Core, which is currently in private beta. More details here: https://cloud.google.com/iot-core/
I second checking out Google IoT Core.
If you have a special use case, you could always connect Google Pub/Sub to another MQTT-enabled IoT platform, such as Losant. Here is an example:
https://docs.losant.com/applications/integrations/#google-pubsub
Then, as you receive messages from Pub/Sub, you can publish to MQTT topics, and vice versa.
Disclaimer: I work for Losant.