AWS EMR Spark Application Logging to CloudWatch

It is not clear to me whether application logging from inside a Spark application itself,
running on AWS EMR and executed via spark-shell or EMR Steps,
will end up in CloudWatch Logs for reporting if the CloudWatch agent is installed on the EMR cluster.
Will it or not?
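As far as I can tell, the CloudWatch agent only ships the log files it is explicitly configured to collect, so the Spark driver/executor output would only reach CloudWatch Logs if the agent config points at the YARN container log directories on the cluster nodes. A minimal sketch of what I mean, written as a small script that could run from a bootstrap action; the container log path, log group name and agent config location are assumptions to verify on your own cluster:

    import json

    # Hypothetical agent config: tail YARN container logs (where a Spark app's
    # stdout/stderr usually ends up on EMR) and ship them to a CloudWatch Logs group.
    config = {
        "logs": {
            "logs_collected": {
                "files": {
                    "collect_list": [
                        {
                            # Assumed path for YARN container logs on EMR nodes
                            "file_path": "/var/log/hadoop-yarn/containers/application_*/container_*/std*",
                            "log_group_name": "/emr/yarn-containers",
                            "log_stream_name": "{instance_id}"
                        }
                    ]
                }
            }
        }
    }

    # Default config location expected by the CloudWatch agent on Amazon Linux
    with open("/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w") as f:
        json.dump(config, f, indent=2)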

Related

How to send OS logs and container logs from AWS EKS to CloudWatch?

My understanding is that the CloudWatch agent is available both as a Linux binary and as a Kubernetes DaemonSet.
I am aware that EKS container logs can be forwarded to CloudWatch using the CloudWatch agent running as an EKS DaemonSet.
My question is how to send OS logs from the EKS nodes to CloudWatch. Would the CloudWatch agent DaemonSet be able to send the OS logs to CloudWatch, or does the Linux binary have to run on the EKS nodes to send OS logs?
I believe that either can be used independently.
You can set up Fluentd or Fluent Bit as a DaemonSet to send logs to CloudWatch Logs.
Alternatively, you can install the CloudWatch agent on the Amazon EKS nodes using SSM Distributor and State Manager. You may also consider including it in the launch template to automate installation, because the EKS-optimized AMI does not include the agent by default.
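If you go the Distributor route, the install can be scripted. A rough sketch with boto3; the node tag used for targeting and the region are assumptions, and the nodes need the SSM agent plus an instance profile that allows SSM:

    import boto3

    ssm = boto3.client("ssm", region_name="eu-west-1")

    # AWS-ConfigureAWSPackage is the SSM document Distributor uses to install packages;
    # AmazonCloudWatchAgent is the published package name for the agent.
    response = ssm.send_command(
        Targets=[
            # Assumed tag on the worker-node EC2 instances - adjust to your node group
            {"Key": "tag:kubernetes.io/cluster/my-eks-cluster", "Values": ["owned"]},
        ],
        DocumentName="AWS-ConfigureAWSPackage",
        Parameters={
            "action": ["Install"],
            "name": ["AmazonCloudWatchAgent"],
        },
    )
    print(response["Command"]["CommandId"])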

How to get log streams from EMR steps to CloudWatch?

We use Airflow to orchestrate AWS ETL jobs. Some of these jobs start an EMR cluster and add EMR steps to that cluster. Airflow pulls logs from CloudWatch. Logs created during an EMR step are not directly available in CloudWatch, but are only found buried in the cluster logs. If an EMR step fails, the logs needed to identify the error are therefore not readily available. We would like logs created by EMR steps to be displayed in CloudWatch. We have tried installing the CloudWatch agent, but that does not seem to work. How can we get log streams from EMR steps into CloudWatch?
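One workaround we have been considering is to copy a step's stderr out of the cluster's S3 log URI into a CloudWatch Logs stream once the step finishes, so Airflow can pull it from there. A sketch with boto3, assuming the usual <log-prefix>/<cluster-id>/steps/<step-id>/stderr.gz layout and an already-created log group; bucket, prefix and group names here are placeholders:

    import gzip
    import time
    import boto3

    s3 = boto3.client("s3")
    logs = boto3.client("logs")

    def ship_step_stderr(cluster_id, step_id, log_bucket, log_prefix, log_group="/emr/steps"):
        # EMR pushes step logs to S3 (gzipped, with a few minutes of delay)
        key = f"{log_prefix}/{cluster_id}/steps/{step_id}/stderr.gz"
        body = s3.get_object(Bucket=log_bucket, Key=key)["Body"].read()
        text = gzip.decompress(body).decode("utf-8", errors="replace")

        stream = f"{cluster_id}/{step_id}"
        try:
            logs.create_log_stream(logGroupName=log_group, logStreamName=stream)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass

        now = int(time.time() * 1000)
        events = [{"timestamp": now, "message": line} for line in text.splitlines() if line]
        # put_log_events accepts at most 10,000 events / ~1 MB per call; fine for a sketch
        logs.put_log_events(logGroupName=log_group, logStreamName=stream, logEvents=events)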

Check for any running Spark jobs on EMR using the CLI or boto3

I am developing a module to delete the EMR cluster, but before deletion I need to check whether any Spark jobs are running in the cluster. Is there a way to check this with a boto3 API call or a CLI command? Basically, I don't want any YARN applications to be in a running state. In the console this can be found on the 'Application user interfaces' tab, under the high-level application history.
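I don't think the EMR API exposes YARN application state directly, but you can either ask the YARN ResourceManager REST API on the master node, or check EMR for active steps as a proxy. A sketch, assuming the default ResourceManager port 8088; the master DNS and cluster id are placeholders:

    import boto3
    import requests

    def has_running_yarn_apps(master_dns):
        # YARN ResourceManager REST API on the master node (default port 8088)
        url = f"http://{master_dns}:8088/ws/v1/cluster/apps"
        resp = requests.get(url, params={"states": "RUNNING"}, timeout=10)
        resp.raise_for_status()
        apps = resp.json().get("apps") or {}
        return bool(apps.get("app"))

    def has_active_steps(cluster_id):
        # Proxy check via EMR itself: any step still pending or running?
        emr = boto3.client("emr")
        steps = emr.list_steps(ClusterId=cluster_id, StepStates=["PENDING", "RUNNING"])
        return bool(steps["Steps"])

    master = "ec2-xx-xx-xx-xx.compute.amazonaws.com"   # placeholder master public DNS
    if not has_running_yarn_apps(master) and not has_active_steps("j-XXXXXXXXXXXXX"):
        print("safe to terminate")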

Monitoring a Spark cluster on AWS EMR without the Spark UI

I am running a Spark cluster on AWS EMR. How do I get all the details of the jobs and executors that are running on AWS EMR without using the Spark UI? I am going to use it for monitoring and optimization.
You can check out Nagios or Ganglia for cluster health, but you can't see the jobs running on Spark with these tools.
If you are using AWS EMR you can do that using the lynx text browser, something like below.
Log in to the master node of the cluster.
Try the command below:
lynx http://localhost:4040
Note: before you type the command, make sure you are running a job (the UI on port 4040 only exists while an application is active).
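The same information is also available without lynx: the Spark UI exposes a JSON REST API under /api/v1 on the same port, so jobs and executors can be pulled programmatically from the master node while an application is running (localhost and port 4040 are the defaults, adjust if your setup differs):

    import requests

    base = "http://localhost:4040/api/v1"

    # List running applications, then pull their jobs and executors
    for app in requests.get(f"{base}/applications", timeout=10).json():
        app_id = app["id"]
        jobs = requests.get(f"{base}/applications/{app_id}/jobs", timeout=10).json()
        executors = requests.get(f"{base}/applications/{app_id}/executors", timeout=10).json()
        print(app_id, f"{len(jobs)} jobs, {len(executors)} executors")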

How to prevent EMR Spark step from retrying?

I have an AWS EMR cluster (emr-4.2.0, Spark 1.5.2) where I am submitting steps via the AWS CLI. My problem is that if the Spark application fails, YARN tries to run the application again (under the same EMR step).
How can I prevent this?
I tried setting --conf spark.yarn.maxAppAttempts=1, which shows up correctly under Environment/Spark Properties, but it does not prevent YARN from restarting the application.
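For reference, this is roughly how the step is submitted, passing the conf on the spark-submit command line (boto3 equivalent of the CLI call; the cluster id and S3 path are placeholders). The YARN-side setting yarn.resourcemanager.am.max-attempts, configured through the cluster's yarn-site classification, is the other knob that bounds ApplicationMaster retries and may be worth checking alongside the Spark property.

    import boto3

    emr = boto3.client("emr")

    emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",          # placeholder cluster id
        Steps=[
            {
                "Name": "my-spark-step",
                "ActionOnFailure": "CONTINUE",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "--deploy-mode", "cluster",
                        # Ask YARN for a single application attempt
                        "--conf", "spark.yarn.maxAppAttempts=1",
                        "s3://my-bucket/my-app.py",   # placeholder application
                    ],
                },
            }
        ],
    )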