Amazon EMR and Spark streaming

Amazon EMR, Apache Spark 2.3, Apache Kafka, ~10 million records per day.
Apache Spark processes the events in 5-minute batches. About once per day the worker nodes die, and AWS automatically reprovisions them. Reviewing the log messages, it looks like the nodes ran out of disk space, but they have about 1 TB of storage each.
Has anyone had storage-space issues in cases where there should be more than enough?
My suspicion is that log aggregation is failing to copy the logs to the S3 bucket, which as far as I can tell should be done automatically by the Spark process.
What information should I provide to help resolve this issue?
Thank you in advance!

I had a similar issue with a Structured Streaming app on EMR, with disk usage rapidly increasing to the point of stalling/crashing the application.
In my case, the fix was to disable the Spark event log by setting spark.eventLog.enabled to false:
http://queirozf.com/entries/spark-streaming-commong-pitfalls-and-tips-for-long-running-streaming-applications#aws-emr-only-event-logs-under-hdfs-var-log-spark-apps-when-using-a-history-server
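For reference, here is a minimal sketch of applying that setting from a PySpark application itself (the app name is a placeholder); the same property can also be passed to spark-submit as --conf spark.eventLog.enabled=false:

```python
from pyspark.sql import SparkSession

# Disable the Spark event log so it cannot fill up local disk on
# long-running streaming jobs (the app name is illustrative).
spark = (
    SparkSession.builder
    .appName("streaming-app")
    .config("spark.eventLog.enabled", "false")
    .getOrCreate()
)
```

Note that with the event log disabled, the Spark History Server will have nothing to display for the job.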

I believe I fixed the issue by using a custom log4j.properties: on deploy to Amazon EMR, I replaced /etc/spark/log4j.properties and then ran spark-submit with my streaming application.
Now it's working well.
https://gist.github.com/oivoodoo/d34b245d02e98592eff6a83cfbc401e3
Also, it could be helpful for anyone who is running a streaming application and needs to roll out updates with a graceful stop:
https://gist.github.com/oivoodoo/4c1ef67544b2c5023c249f21813392af
https://gist.github.com/oivoodoo/cb7147a314077e37543fdf3020730814
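For anyone who can't open the gists, below is a minimal sketch of one common marker-file pattern for a graceful stop (the paths, topic, and bucket names are illustrative and not taken from the gists): the driver polls for a marker file instead of blocking in awaitTermination(), stops the query when the marker appears, and the checkpoint lets the redeployed application resume where it left off.

```python
import os
import time

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-app").getOrCreate()

query = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .writeStream.format("parquet")
    .option("path", "s3://my-bucket/output/")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/")
    .start()
)

# Poll for a shutdown marker; when it appears, stop the query so the
# new version of the application can take over from the checkpoint.
while query.isActive:
    if os.path.exists("/tmp/shutdown-marker"):
        query.stop()
    time.sleep(10)
```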

Related

S3 write concurrency using AWS Glue

I suspect we are hitting an S3 write-concurrency issue with an AWS Glue job. I am testing 10 DPUs writing 10k objects of 1 MB each (~10 GB total), and it is taking 2+ hours for just the write stage of the job. It seems like across 10 DPUs I should be able to distribute the writes well enough to get much better throughput. I am writing to several different bucket prefixes, so I don't think I'm being throttled by S3.
I see that my job is using EMRFS (the default S3 FileSystem API implementation for Glue), which from my understanding should be good for write throughput. I found some suggestions that say to adjust fs.s3.maxConnections and hive.mv.files.threads, and to set hive.blobstore.use.blobstore.as.scratchdir = false.
Where can I see the current values of these settings in my Glue jobs, and how can I configure them? While I see many settings and configurations in the Spark UI logs I can generate from my jobs, I'm not finding these.
How can I see what actual S3 write concurrency I'm getting in each worker in the job? Is this something I can see in the Spark UI logs, or is there another metric somewhere that would show this?
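For context, one way to read or override Hadoop-level properties like these from inside a Glue script is through the Spark context's Hadoop configuration; a sketch under that assumption (the value shown is illustrative):

```python
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
hadoop_conf = sc._jsc.hadoopConfiguration()

# Print the current value (None if unset), then override it.
print(hadoop_conf.get("fs.s3.maxConnections"))
hadoop_conf.set("fs.s3.maxConnections", "100")
```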

Writing data from AWS RDS SQL Server to S3 bucket

According to this: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/User.SQLServer.Options.S3-integration.html we should be able to write a file from RDS to S3. But when we try, it fails with:
blocked because RDS is a managed service with SLA and guard rails to help deliver it.
Does anyone know a way around this?
It seems that running your task would interfere with the SLAs, so this is directly related to the Note in the link you shared:
S3 integration tasks share the same queue as native backup and restore tasks. At maximum, you can have only two tasks in progress at any time in this queue. Therefore, two running native backup and restore tasks will block any S3 integration tasks.
There must be some automated backup activity going on when you run your task.
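Since at most two tasks can be in progress at once, it may help to check what is currently queued before retrying. A sketch using pyodbc to call the task-status procedure from Python (the connection details are placeholders):

```python
import pyodbc

# Connect to the RDS SQL Server instance (all credentials are placeholders).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxx.us-east-1.rds.amazonaws.com;"
    "DATABASE=msdb;UID=admin;PWD=changeme"
)

# List the status of queued/running native backup and S3 integration tasks.
for row in conn.execute("exec msdb.dbo.rds_task_status"):
    print(row)
```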

What is the difference between AWS Glue ETL Job and AWS EMR?

If I had to perform ETL on a huge dataset (say 1 TB) stored in S3 as CSV files, both an AWS Glue ETL job and AWS EMR steps could be used. How is AWS Glue different from AWS EMR, and which is the better solution in this case?
Most of the differences are already listed, so I'll focus on the use cases.
When to choose AWS Glue
Data size is huge but structured, i.e. it is in table form and in a known format (CSV, Parquet, ORC, JSON).
Lineage is required: if you need a data lineage graph while developing your ETL job, prefer building the ETL with Glue's native libraries.
The developers don't need to tweak performance parameters like the number of executors, per-executor memory, and so on.
You don't want the overhead of managing a large cluster, and you want to pay only for what you use.
When to use EMR
Data is huge but semi-structured or unstructured, so you can't take any benefit from the Glue Data Catalog.
You only care about the outputs, and lineage is not required.
You need to define more memory per executor, depending on the type of your job and its requirements.
You can manage the cluster easily, or you have many jobs that can run concurrently on the cluster, saving you money.
For structured data, use EMR when you want more Hadoop-ecosystem capabilities like Hive or Presto for further analytics.
So it depends on your use case. Both are great services.
Glue allows you to submit ETL scripts directly in PySpark/Python/Scala without managing an EMR cluster; all setup and tear-down of infrastructure is handled for you.
There are also a few other managed components, like Crawlers and the Glue Data Catalog, which make it easier to work with your data.
You could use either for your use case; Glue would be faster to get going with, but you may not have the flexibility you get with EMR.
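To illustrate the "no cluster to manage" point, a minimal sketch of a Glue job script for the CSV-in-S3 use case (the bucket paths are placeholders):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the CSV files from S3 into a DynamicFrame.
dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://my-bucket/input/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Transformations would go here; then write the result back as Parquet.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://my-bucket/output/"},
    format="parquet",
)

job.commit()
```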
Glue uses EMR under the hood. This is evident when you SSH into the driver of your Glue dev endpoint.
Since Glue is a managed Spark environment (or, say, a managed EMR environment), it comes with reduced flexibility. The worker types you can choose are limited. The language libraries you can use in your Spark code are limited; Glue did not support packages like pandas and numpy until recently. Applications like Presto can't be integrated with Glue, although Athena is a good alternative to a separate Presto installation.
The main issue, however, is that Glue jobs have a cold-start time of anywhere between 1 minute and 15 minutes.
EMR is a good choice for exploratory data analysis, but for a production environment with CI/CD, Glue seems to be the better choice.
EDIT: Glue jobs no longer have a cold-start wait time.
From the AWS Glue FAQ:
AWS Glue works on top of the Apache Spark environment to provide a scale-out execution environment for your data transformation jobs. AWS Glue infers, evolves, and monitors your ETL jobs to greatly simplify the process of creating and maintaining jobs.
Amazon EMR provides you with direct access to your Hadoop environment, affording you lower-level access and greater flexibility in using tools beyond Spark.
Source: https://aws.amazon.com/glue/faqs/
AWS Glue is an ETL service from AWS. AWS Glue will generate ETL code in Scala or Python to extract data from the source, transform the data to match the target schema, and load it into the target.
AWS EMR is a service for processing large amounts of data; it is a supporting big-data platform. It supports Hadoop, Spark, Flink, Presto, Hive, etc. You could spin up EC2 instances with the software listed above and build a similar ecosystem yourself.
In your case, you want to process 1 TB of data. If you want to do computations on the same data, you can use EMR, and if you want to run analytics on the transformed data, use Glue.
The following is something I compiled after working on analytics projects (though a lot of it depends on the use case). Generally speaking:

| Criteria | Glue | EMR |
| --- | --- | --- |
| Costs | Comparatively costlier | Much cheaper due to Spot Instance support; there have been cases of savings of up to 50% compared to Glue costs, or even more depending on the use case |
| Orchestration | Built in (Glue Workflows and Triggers) | Through CloudWatch triggers and Step Functions |
| Infra work required | No infra setup; just select a worker type (roles and permissions are still needed) | Identify the type of node needed, set up autoscaling rules, etc. |
| Cluster resiliency & robustness | Highly resilient (AWS managed) | If Spot Instances are used, interruptions may occur with 2 minutes' notice (the system recovers automatically, but job times may lengthen) |
| Skill sets needed | PySpark and intermediate AWS knowledge | DevOps to set up and manage EMR; intermediate knowledge of orchestration via CloudWatch and Step Functions; PySpark |
| Applicable use cases | Attractive when: 1. you are not worried about costs but need highly resilient infra; 2. batch setups where the job completes in a fixed time; 3. short real-time streaming jobs that need to run for, say, a few hours a day | 1. Volatile clusters, mostly used for batch processing (day-minus scenarios), making it a cost-effective solution for batch jobs; 2. attractive for 24/7 Spark streaming programs; 3. you need a Hadoop ecosystem and related tools (HDFS, Hive, Hue, Impala, etc.); 4. you need to run Flink programs; 5. you need control over the infra and its tuning parameters |
Also, going back to the OP's use case of processing 1 TB of data: if it's one-time processing, Glue should suffice; if it's a once-daily batch, EMR and Glue will both be good (depending on how the job is tuned, Glue can be an attractive option); if the job runs multiple times a day, EMR is the better option, considering the balance of performance and cost.

What to use: AWS Fargate or AWS Beanstalk?

I have a Java application that reads from an SQS queue, does some business processing, and finally writes to a datastore. As the SQS queue grows, I want to be able to scale to read more messages and process them. Each SQS message takes about 15 to 20 minutes to process. I was looking at a service like AWS Fargate or AWS Beanstalk to deploy my application. Money is not a concern, but usability is. What would be the best platform?
Fargate would be an ideal solution, as it has the following advantages over Beanstalk:
It's serverless.
More fine-grained control for custom application architectures.
No need to write EB extensions.
Build and test the image locally, then promote the same image to Fargate.
With application auto scaling, you can scale on the go.
Pricing is per second with a 1-minute minimum.
FAQ:
https://aws.amazon.com/fargate/faqs/
Pricing:
https://aws.amazon.com/fargate/pricing/
I've had a very similar use case to this, and I used Batch (which was not available in 2014, when the question was asked).
https://aws.amazon.com/batch/
In my case I was processing audio and video files from the queue.
You can set a Lambda to fire on the SQS queue and have it drop the job onto Batch for processing, as sketched below.
If you set the minimum cluster size to zero, you will have no servers running when there is no work to do, but the cluster can autoscale to process as much work as you require when the jobs come in.
The advantage compared to Lambda is that the code that executes can be any container, with as much resource as you want to throw at it.
For your use case it will be perfect, but for anything that completes processing in a few seconds or a minute, it's worth making each job process more than one task per execution, or all of the time will be spent firing up and shutting down containers.
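A minimal sketch of the SQS-to-Batch Lambda glue using boto3 (the job queue, job definition, and message shape are illustrative):

```python
import json

import boto3

batch = boto3.client("batch")

def handler(event, context):
    # Triggered by SQS: submit one Batch job per queued message.
    # jobQueue/jobDefinition names and the "s3_key" field are placeholders.
    for record in event["Records"]:
        body = json.loads(record["body"])
        batch.submit_job(
            jobName="process-media",
            jobQueue="media-processing-queue",
            jobDefinition="media-processor:1",
            parameters={"inputKey": body["s3_key"]},
        )
```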

Run a scheduled task in AWS without cron

Currently I have a single server in Amazon where I put all my cron jobs. I want to eliminate this single point of failure and expose all my tasks as web services. I'd like to expose the services behind a VPC ELB to a few servers that will run the tasks when called.
Is there some service that Amazon (AWS) offers that can run a recurring job (really, call a web service) at scheduled intervals? I'd really like to keep the cron functionality in terms of time/day specification, but farm out the HA of the driver (the thing that calls endpoints at the right time) to AWS.
I like how SQS offers web endpoint(s), but from what I can tell you can't schedule them. SWF doesn't seem to be a good fit either.
AWS announced support for scheduled functions in Lambda at its 2015 re:Invent conference. With this feature, users can execute Lambda functions on a scheduled basis using a cron-like syntax. The Lambda docs show an example of using Python to perform scheduled events.
Currently, the minimum resolution that a scheduled Lambda can run at is 1 minute (the same as cron, but not as fine-grained as systemd timers).
The Lambder project helps to simplify the use of scheduled functions on Lambda.
λ Gordon's cron example has perhaps the simplest interface for deploying scheduled lambda functions.
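If you prefer wiring the schedule up programmatically, a minimal sketch using the CloudWatch Events API via boto3 (the rule name and function ARN are placeholders; the Lambda additionally needs a resource permission allowing events.amazonaws.com to invoke it):

```python
import boto3

events = boto3.client("events")

# Fire at 02:00 UTC every day; rule name and target ARN are placeholders.
events.put_rule(
    Name="nightly-job",
    ScheduleExpression="cron(0 2 * * ? *)",
)
events.put_targets(
    Rule="nightly-job",
    Targets=[{
        "Id": "nightly-job-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:nightly-job",
    }],
)
```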
Original answer, saved for posterity.
As Eric Hammond and others have stated, there is no native AWS service for scheduled tasks, only the workarounds and half-solutions mentioned in the other answers.
To recap the current options:
The single-instance auto scaling group that starts and stops on a schedule, as described by Eric Hammond.
Using a Simple Workflow Service timer, which is not at all intuitive. This case study mentions that JPL used SWF to build a distributed cron, but there are no implementation details. There is also a reference to a code example buried in the SWF code samples.
Run it yourself using something like cronlock.
Use something like the Unreliable Town Clock (UTC) to run Lambda functions on a schedule. Remember that Lambda cannot currently access resources within a VPC.
Hopefully a better solution will come along soon.
Introducing Events in Amazon CloudWatch
You can schedule by the minute, hourly, or daily, or use a cron expression, all from the console and without Lambda or any programming.
I just scheduled my ASP.NET Web API (HTTP POST) using an SNS HTTP endpoint to execute every minute, and it's working perfectly.
Is there some service that Amazon (AWS) offers that can run a recurring job at scheduled intervals?
This is one of a few single points of failure that people (including me) keep mentioning when designing architectures with AWS. Until Amazon solves it with a service, here's a hack I've published that is actively used by some companies.
AWS Auto Scaling can run and terminate instances using a recurring schedule specified in cron format.
http://docs.amazonwebservices.com/AutoScaling/latest/APIReference/API_PutScheduledUpdateGroupAction.html
You can have the instance automatically run a process on startup.
If you don't know how long the job will last, you can set things up so that your job terminates the instance when it has completed.
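A minimal sketch of the two scheduled actions via boto3 (the group name and times are illustrative): scale the group to one instance for the job window, then back to zero.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Start one instance at 02:00 UTC every day...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="cron-job-asg",
    ScheduledActionName="start-cron-job",
    Recurrence="0 2 * * *",
    MinSize=1, MaxSize=1, DesiredCapacity=1,
)

# ...and scale back to zero an hour later (or let the job terminate
# its own instance when it finishes, as described above).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="cron-job-asg",
    ScheduledActionName="stop-cron-job",
    Recurrence="0 3 * * *",
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)
```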
Here's an article I wrote that walks through exact commands needed to set this up:
Running EC2 Instances on a Recurring Schedule with Auto Scaling
http://alestic.com/2011/11/ec2-schedule-instance
Starting a whole instance just to kick off a set of jobs may seem like overkill, but if it's a t1.micro, it only costs a couple of pennies.
That t1.micro doesn't have to do the actual work, either. Your instance could inject messages into SQS or publish through SNS so that other, redundant servers pick up the tasks.
This is a hosted third-party site that can regularly call scheduled scripts on your domain.
This will not work if you need your script to run in the shell rather than under Apache.
Sounds like this might be useful to you:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
Task Runner is a task agent application that polls AWS Data Pipeline for scheduled tasks and executes them on Amazon EC2 instances, Amazon EMR clusters, or other computational resources, reporting status as it does so. Depending on your application, you may choose to:
Allow AWS Data Pipeline to install and manage one or more Task Runner applications for you on computational resources that it manages automatically. In this case, you do not need to install or configure Task Runner as described in this section. This is the recommended configuration.
Manually install and configure Task Runner on a computational resource such as a long-running EC2 instance or a physical server. To do so, use the procedures in this section.
Develop and install a custom task agent instead of Task Runner. The procedures for doing so will depend on the implementation of the custom task agent.
Amazon introduced Lambda last year for Node.js; yesterday, Amazon added Scheduled Functions, VPC support, and Python support.
By leveraging Scheduled Functions, a proper replacement for cron can be attained.
More info: http://aws.amazon.com/lambda/details/
As of August 2020, Amazon has moved the Lambda/CloudWatch Events scheduling functionality to a service called EventBridge (https://aws.amazon.com/eventbridge/). It was launched in July 2019, after most of the answers to this question were written.
Looks like this is a relatively new option from AWS Elastic Beanstalk:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-periodictasks
Basically, they act like regular SQS receivers, but they're called on a cron schedule instead of in response to an SQS message.
SWF is a web service from AWS that can be used to schedule tasks. Most of the work goes into specifying what a task and a schedule are.
http://milindparikh.blogspot.com/2015/07/introducing-diksha-aws-lambda-function.html is a scalable scheduler written against SWF.
CloudWatch Events are great, but there is a limit on their number. If you need scale and are willing to sacrifice precision, you could use DynamoDB's TTL as a timer.
The idea is to put items into a DynamoDB table with a TTL set to the time you need to run a task. DynamoDB will delete those items somewhere around the specified time (within 48 hours of expiration). The deleted items then appear in the DynamoDB stream associated with the table, and a Lambda function listening to the stream can take the appropriate action upon each deletion.
Read more in "DynamoDB TTL as an ad-hoc scheduling mechanism" by theburningmonk.com.
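A minimal sketch of both halves, assuming a hypothetical scheduled-tasks table whose TTL attribute is run_at:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Enqueue a "task" due roughly one hour from now. The table's TTL feature
# must be enabled on the run_at attribute (all names are illustrative).
dynamodb.put_item(
    TableName="scheduled-tasks",
    Item={
        "task_id": {"S": "send-report-42"},
        "run_at": {"N": str(int(time.time()) + 3600)},
    },
)

def handler(event, context):
    # Stream-triggered Lambda: TTL deletions arrive as REMOVE records.
    for record in event["Records"]:
        if record["eventName"] == "REMOVE":
            task_id = record["dynamodb"]["Keys"]["task_id"]["S"]
            print(f"running task {task_id}")
```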
The AWS Elastic Load Balancer will ping your instances to check that they're healthy. You can add your cron-like tasks to the script that the ELB pings, and they will execute very regularly.
You'd want to add some logic so that each task is executed the right number of times and at the right intervals, but this could be accomplished with a database table that tracks executions. Each time the ELB pings your server, the server checks the database to see whether any job is pending, and then executes it.
The ELB will time out if the script takes too long to execute, so it's important not to create a situation where the ELB health check takes many seconds to process the cron tasks. To overcome this, you can employ the AWS Simple Notification Service: your ELB health-check script can simply publish a message to an SNS topic, and that topic can deliver the message via an HTTP request to your web server.
In other words:
The ELB pings your EC2 instance...
The EC2 instance checks for pending jobs and sends a message to SNS if any are found...
SNS notifies your app via HTTP...
The HTTP call from SNS is what actually processes the cron job.
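A minimal sketch of the health-check half of that flow, assuming Flask and a hypothetical jobs_pending() check against the execution-tracking table (the topic ARN is a placeholder):

```python
import boto3
from flask import Flask

app = Flask(__name__)
sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pending-jobs"  # placeholder

def jobs_pending():
    # Hypothetical: query the execution-tracking table for due jobs.
    return False

@app.route("/health")
def health():
    # Keep this fast: only flag that work exists; the SNS-notified
    # endpoint does the actual processing.
    if jobs_pending():
        sns.publish(TopicArn=TOPIC_ARN, Message="run-pending-jobs")
    return "OK"
```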