How to deploy and run an Amazon Kinesis application on the Amazon Kinesis service

I am trying to understand how to deploy an Amazon Kinesis client application that was built using the Kinesis Client Library (KCL).
I found this, but it only states:
You can follow your own best practices for deploying code to an Amazon EC2 instance when you deploy an Amazon Kinesis application. For example, you can add your Amazon Kinesis application to one of your Amazon EC2 AMIs.
which does not give me the broader picture.
These examples use an Ant script to run the Java program. Is this the best practice to follow?
Also, I understand that even before running the EC2 instances I need to make sure that:
The compiled code (JAR, WAR, or any other format) is on the EC2 instance.
The EC2 instance already has the required environment (such as Ant) set up to execute the program.
Could someone please add some more detail on this?

Amazon Kinesis is responsible for ingesting data, not for running your application. You can run your application anywhere, but it is a good idea to run it in EC2, as you are probably going to use other AWS services such as S3 or DynamoDB (the Kinesis Client Library uses DynamoDB to track shard leases and checkpoints, for example).
To understand Kinesis better, I'd recommend that you launch the Kinesis Data Visualization Sample. When you launch this app, use the provided CloudFormation template. It will create a stack with a Kinesis stream and an EC2 instance running the application, which uses the Kinesis Client Library and is a fully working example to start from.
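To give a feel for what such an application looks like before worrying about deployment, here is a minimal sketch of a KCL 1.x consumer; the application name, stream name, and processing logic are placeholders, not anything from the sample above:

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;

public class MinimalKinesisConsumer {

    public static void main(String[] args) {
        // The application name ("my-kcl-app") is also the name of the DynamoDB
        // table the KCL creates to track shard leases and checkpoints.
        KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
                "my-kcl-app",                       // application name (placeholder)
                "my-stream",                        // Kinesis stream name (placeholder)
                new DefaultAWSCredentialsProviderChain(),
                "worker-" + UUID.randomUUID());

        IRecordProcessorFactory factory = () -> new IRecordProcessor() {
            @Override
            public void initialize(InitializationInput input) {
                System.out.println("Processing shard " + input.getShardId());
            }

            @Override
            public void processRecords(ProcessRecordsInput input) {
                input.getRecords().forEach(r ->
                        System.out.println(StandardCharsets.UTF_8.decode(r.getData())));
                try {
                    input.getCheckpointer().checkpoint(); // persist progress in DynamoDB
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void shutdown(ShutdownInput input) {
                // On TERMINATE you would checkpoint here so the lease hands over cleanly.
            }
        };

        new Worker.Builder().recordProcessorFactory(factory).config(config).build().run();
    }
}
```

However you build it (Ant, Maven, or Gradle), the deployable artifact is just a runnable JAR; copying it to the instance and running java -jar is the entire deployment, which is why the documentation leaves the mechanics to your own practices.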

The best way I have found to host a consumer program is using EMR, but not as a Hadoop cluster. Package your program as a JAR and place it in S3. Launch an EMR cluster and have it run your JAR (see the sketch below). Using Data Pipeline, you can schedule this job flow to run at regular intervals. You can also scale the EMR cluster, or use an actual EMR job to process the stream if you want to go high-tech.
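As an illustration, here is a sketch of launching such a cluster with the AWS Java SDK (v1); the release label, instance type, roles, JAR path, and arguments are all assumptions to replace with your own values:

```java
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.*;

public class RunJarOnEmr {

    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // Step that runs a custom JAR previously uploaded to S3 (placeholder path).
        StepConfig jarStep = new StepConfig()
                .withName("run-consumer-jar")
                .withActionOnFailure(ActionOnFailure.TERMINATE_CLUSTER)
                .withHadoopJarStep(new HadoopJarStepConfig()
                        .withJar("s3://my-bucket/consumer.jar")
                        .withArgs("--stream", "my-stream"));

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("consumer-run")
                .withReleaseLabel("emr-5.36.0")         // assumed release label
                .withServiceRole("EMR_DefaultRole")     // assumes the default EMR roles exist
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withSteps(jarStep)
                .withInstances(new JobFlowInstancesConfig()
                        .withMasterInstanceType("m5.xlarge")
                        .withInstanceCount(1)
                        // Shut the cluster down once the step finishes.
                        .withKeepJobFlowAliveWhenNoSteps(false));

        System.out.println("Started cluster: " + emr.runJobFlow(request).getJobFlowId());
    }
}
```

Data Pipeline (or a scheduled Lambda, as discussed further down) can then run this launch logic at whatever interval you need.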

You can also use Elastic Beanstalk. I believe this article is highly useful.

Related

Is it a good practice to have an AWS EMR standing cluster always running structured streaming?

I have a Spark Structured Streaming job which takes data as input from AWS MSK (Kafka) and writes to AWS S3. Is it a good idea to have a standing AWS EMR cluster always running it? Or are there better ways to manage this infrastructure?
Please let me know if you need further details.
You need some worker pool that is consuming and writing; a minimal sketch of such a job follows below.
Your other options include using EKS instead of YARN on EMR to run Spark, or not using Spark at all and running the Kafka Connect S3 Sink connector on an EC2/EKS cluster.
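For context, the consume-and-write job itself is small. Here is a sketch of a Structured Streaming job reading from Kafka/MSK and writing Parquet to S3; the broker addresses, topic, and bucket paths are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.Trigger;

public class KafkaToS3 {

    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("kafka-to-s3").getOrCreate();

        // Read raw Kafka records; bootstrap servers and topic are placeholders.
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
                .option("subscribe", "my-topic")
                .load()
                .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Write Parquet to S3; the checkpoint location lets a restarted job
        // resume where it left off, which matters if the cluster is transient.
        stream.writeStream()
                .format("parquet")
                .option("path", "s3://my-bucket/output/")
                .option("checkpointLocation", "s3://my-bucket/checkpoints/")
                .trigger(Trigger.ProcessingTime("1 minute"))
                .start()
                .awaitTermination();
    }
}
```

Because progress is checkpointed to S3, this can also run on a scheduled, transient cluster instead of a standing one, at the cost of higher end-to-end latency.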

What's the most efficient way to export files from EC2 to S3 on timed intervals?

Working on a problem at the moment where I want to export a file from an EC2 instance running a Windows AMI to an S3 bucket at four-hour intervals. Currently, the architecture I'm thinking of is as follows.
1. CloudWatch Events rule using scheduled trigger
2. Rule triggers Lambda function to run
3. Lambda function would use some form of the AWS CLI on the Windows EC2 instance to extract (sync, cp, etc.) the file
4. File is placed in the S3 bucket
Does anyone see a path that's more efficient than this one? I want to ensure that I'm handling this in the most straightforward manner. Thanks in advance for any input!
It is quite difficult to have external code (e.g. an AWS Lambda function) cause something to execute on a Windows computer. You could use Systems Manager Run Command, but that's a rather complex solution.
It would be much simpler to have the Windows computer push the files to Amazon S3:
Create a scheduled task in Windows
Use aws s3 cp or aws s3 sync to copy the files to Amazon S3
Done!
Your solution seems solid. Alternatively, you may want to write a daemon-like service (background process) that runs on each EC2 instance and transfers the data from that instance to S3. What I like about your solution is that you can centrally control the scheduling easily. For my distributed alternative, you could have the processes read from a central config, but that seems more complicated than the CloudWatch/Lambda solution.
For the EC2 process solution, this may be useful: How to mount Amazon S3 Bucket as a Windows Drive. That said, it should be easy (and more scalable) to just use the AWS SDK instead to talk to S3, as in the sketch below.
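A minimal sketch of that SDK-based upload with the AWS Java SDK (v1); the bucket, key, and local path are placeholders, and credentials are expected to come from the instance profile attached to the EC2 instance:

```java
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class UploadToS3 {

    public static void main(String[] args) {
        // Picks up region and credentials from the instance profile / environment.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Bucket name, key, and local file path are placeholders.
        s3.putObject("my-export-bucket",
                "exports/report.csv",
                new File("C:\\exports\\report.csv"));
    }
}
```

A Windows scheduled task can invoke this every four hours just as easily as it can invoke the AWS CLI.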

What is the difference between AWS Elastic MapReduce and AWS Kinesis Data Analytics?

I'm executing a Flink job with these tools.
I think both can do exactly the same thing with the proper configuration. Does Kinesis Data Analytics do something that EMR cannot do, or vice versa?
Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time.
Amazon Elastic Map Reduce provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR.
The major difference is maintainability and management from your side.
If you want more independent management and more control, then I would say go for AWS EMR, where it is your responsibility to manage the EMR infrastructure as well as the Apache Flink cluster on it.
But if you want less control and more focus on application development, and you need to deliver faster (tight deadline), then KDA is the way to go. Here AWS provides all the bells and whistles you need for running your application. It also sets up easily with AWS S3 as the code source and provides a bare-minimum configuration management UI.
It scales automatically as well (you need to understand KPUs, the Kinesis Processing Units it is billed in, though).
It provides the same Flink dashboard where you can monitor your application, plus AWS CloudWatch integration for debugging it.
Please go through this nice presentation and let me know if it helps:
https://www.youtube.com/watch?v=c_LswkrwOvk
One major difference between the two is that Kinesis Data Analytics does not provide a hosted Hadoop service, unlike Elastic MapReduce (now EMR).
I had this same question. This video was helpful in explaining a real architecture scenario, and the AWS explanation here tries to show how Kinesis and EMR can fit together, with possible use cases.

On AWS, run an AWS CLI command daily

I have an AWS CLI invocation (in this case, to launch a configured EMR cluster to do some steps and then shut down) but I'm not sure how to go about running it daily.
I guess one way to do it is an EC2 micro instance running a cron job, or an ECS task in a micro instance that launches the command, but that all seems like it might be overkill. It looks like there's also a way to do it in Lambda, but from what I can tell it'd be kludgy.
This doesn't have to be a good long-term solution; something that's suitable until I can do it right (Data Pipeline) would work just fine.
Suggestions?
If using the AWS CLI is not a strict requirement, you can make the call programmatically from a scheduled Lambda function using one of the AWS SDKs (a sketch follows this list):
Schedule a CloudWatch Events rule using a cron expression
When triggered, the rule will invoke a Lambda function
Implement a Lambda function that calls EMR using one of the supported SDKs (e.g. the EMR class in the AWS JavaScript SDK)
Make sure that you have the IAM configuration in place
A full example is available in Schedule AWS Lambda Functions Using CloudWatch Events
Kludgy? Yes, some configuration is needed; however, if you take into account the amount of work required to launch EC2 or ECS (and to make sure it re-launches in the event of failure), I'd say it evens out.
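To make the Lambda step concrete, here is a sketch of a handler in Java (the list above mentions the JavaScript SDK; this uses the AWS Java SDK instead, and buildRequest() is a hypothetical helper standing in for your cluster configuration):

```java
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Invoked daily by the CloudWatch Events rule; launches the configured EMR cluster.
public class DailyEmrLauncher implements RequestHandler<Object, String> {

    private final AmazonElasticMapReduce emr =
            AmazonElasticMapReduceClientBuilder.defaultClient();

    @Override
    public String handleRequest(Object event, Context context) {
        return emr.runJobFlow(buildRequest()).getJobFlowId();
    }

    private RunJobFlowRequest buildRequest() {
        // Hypothetical helper: assemble the same release label, instance config,
        // and steps you currently pass to the CLI, with
        // keepJobFlowAliveWhenNoSteps(false) so the cluster shuts itself down.
        return new RunJobFlowRequest();
    }
}
```

Note that the function's execution role needs permission to call elasticmapreduce:RunJobFlow and to pass the EMR service roles (iam:PassRole).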
I'm not sure about the whole task you are doing, but to avoid running it manually, and to avoid setting up yet another set of AWS resources (as you mentioned), I would create a simple scheduled job in a Continuous Integration (CI) server like Jenkins, Bamboo, or CircleCI (the list can go on). I would assume you might already have a CI server running, so why not use it?

Continuous Integration on AWS EMR

We have a long running EMR cluster that has multiple libraries installed on it using bootstrap actions. Some of these libraries are under continuous development and their codebase is on GitHub.
I've been looking to plug Travis CI with AWS EMR in a similar way to Travis and CodeDeploy. The idea is to get the code on GitHub tested and deployed automatically to EMR while using bootstrap actions to install the updated libraries on all EMR's nodes.
A solution I came up with is to use an EC2 instance in the middle, where Travis and CodeDeploy can first be used to deploy the code on the instance. After that, a launch script on the instance is triggered to create a new EMR cluster with the updated libraries.
However, the above solution means we need to create a new EMR cluster every time we deploy a new version of the system.
Any other suggestions?
You definitely don't want to maintain an EC2 instance to orchestrate a CI/CD process like that. It introduces a number of challenges: you need to manage an entire server instance, keep it maintained, deal with networking, and set up monitoring and alerts for availability issues, and even then you won't have availability guarantees, which may cause other issues. Most of all, maintaining an EC2 instance for a purpose like that is simply unnecessary.
I recommend that you investigate using AWS CodePipeline together with a Lambda-driven Step Functions workflow.
The Step Functions workflow can be used to orchestrate the provisioning of your EMR cluster in a fully serverless environment. With CodePipeline, you can set up a webhook into your GitHub repo to pull your code and spin up a new deployment automatically whenever changes are committed to your master branch (or whatever branch you specify). You can use EMRFS to sync an S3 bucket or folder to your EMR file system for your cluster, and then obtain the security benefits of IAM as well as the additional consistency guarantees that come with EMRFS. With Lambda, you also get seamless integration with other services, such as Kinesis, DynamoDB, and CloudWatch, among many others, which will simplify many administrative and development tasks and enable more sophisticated automation with minimal effort.
There are some great resources and tutorials for using CodePipeline with EMR, as well as in general. Here are some examples:
https://aws.amazon.com/blogs/big-data/implement-continuous-integration-and-delivery-of-apache-spark-applications-using-aws/
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
https://chalice-workshop.readthedocs.io/en/latest/index.html
There are also great tutorials for orchestrating applications with Lambda Step Functions, including the use of EMR. Here are some examples:
https://aws.amazon.com/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/
https://aws.amazon.com/blogs/big-data/orchestrate-multiple-etl-jobs-using-aws-step-functions-and-aws-lambda/
https://github.com/DavidWells/serverless-workshop/tree/master/lessons-code-complete/events/step-functions
https://github.com/aws-samples/lambda-refarch-imagerecognition
https://github.com/aws-samples/aws-serverless-workshops
In the very worst case, if all of those options fail (for example, if you need very strict control over the startup process on the EMR cluster after it completes its bootstrapping), you can always create a Java JAR that is loaded as a final step and use it either to execute a shell script or to run your provisioning commands through the various Amazon Java libraries (sketched below). Even in this case, you still have no need to maintain your own EC2 instance for orchestration purposes (which, in my opinion, would be hard to justify even if it were running in a Docker container in Kubernetes), because you can easily maintain that deployment process with a fully serverless approach as well.
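For completeness, a sketch of that last-resort pattern: a tiny JAR run as the final EMR step, which shells out to a provisioning script (the script path is a placeholder):

```java
import java.io.IOException;

// Run as the final EMR step; executes a provisioning script on the master node.
public class ProvisioningStep {

    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("bash", "/home/hadoop/provision.sh");
        pb.inheritIO(); // stream the script's output into the step logs
        int exitCode = pb.start().waitFor();
        // A non-zero exit fails the step, which EMR then surfaces in the console.
        if (exitCode != 0) {
            System.exit(exitCode);
        }
    }
}
```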
There are many great videos from the Amazon re:Invent conferences that you may want to watch to get a jump start before you dive into the workshops. For example:
https://www.youtube.com/watch?v=dCDZ7HR7dms
https://www.youtube.com/watch?v=Xi_WrinvTnM&t=1470s
Many more such videos are available on YouTube.
Travis CI also supports Lambda deployment, as mentioned here: https://docs.travis-ci.com/user/deployment/lambda/