After deploying my Spark Streaming job on a standalone Spark cluster, I ran into some problems with checkpointing. The console log yields a hint:
WARN ReliableCheckpointRDD: Error writing partitioner org.apache.spark.HashPartitioner#2 to hdfs://10.254.25.21:8020/path/1914a5db-96be-4634-b2ce-ee867119fd95/rdd-18129
I am using the default HashPartitioner, which divides the data into two partitions. I set my HDFS checkpoint directory to my Spark master's host and the HDFS port, as follows:
ssc.checkpoint("hdfs://10.254.25.21:8020/path")
In my job I never manually call .checkpoint(duration) on any DStream myself, but I have many stateful streams resulting from PairDStream's mapWithState() invocations. The code that catches the exception can be found in ReliableCheckpointRDD, line 209ff. Unfortunately, I could not find any references to this error on the web.
In my job, the exception is thrown for every stateful DStream whenever checkpointing is triggered.
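For context, here is a simplified sketch of how the job is set up (the source and the mapping function are placeholders for my actual code):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}

object StatefulJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StatefulJob")
    val ssc = new StreamingContext(conf, Seconds(10))

    // checkpoint directory on HDFS (Spark master host, HDFS port)
    ssc.checkpoint("hdfs://10.254.25.21:8020/path")

    // placeholder source producing (key, value) pairs
    val pairs = ssc.socketTextStream("localhost", 9999).map(line => (line, 1))

    // stateful stream via mapWithState; I never call .checkpoint(duration) on it,
    // the state RDDs get checkpointed automatically
    val mappingFunc = (key: String, value: Option[Int], state: State[Int]) => {
      val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
      state.update(sum)
      (key, sum)
    }
    val stateful = pairs.mapWithState(StateSpec.function(mappingFunc))

    stateful.print()
    ssc.start()
    ssc.awaitTermination()
  }
}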
Any help is appreciated!
Edit #1
This does not affect the correctness of the results, but I wonder whether it degrades performance, as I am doing some performance analysis.
Related
I have an AWS S3 folder where a large number of JSON files is stored. I need to ETL these files with AWS EMR over Spark and store the transformed data in AWS RDS.
I have implemented the Spark job for this purpose in Scala and everything is working fine. I plan to execute this job once a week.
From time to time external logic can add new files to the AWS S3 folder, so the next time my Spark job starts I'd like to process only the new (unprocessed) JSON files.
Right now I don't know where to store the information about the processed JSON files so that the Spark job can decide which files/folders to process. Could you please advise me on the best practice (and how) to track these changes with Spark/AWS?
If it is a Spark Streaming job, checkpointing is what you are looking for; it is discussed here.
Checkpointing stores the state information (i.e. offsets etc.) in an HDFS/S3 bucket, so when the job is started again, Spark picks up only the unprocessed files. Checkpointing also offers better fault tolerance in case of failures, as the state is handled automatically by Spark itself.
Again, checkpointing only works when the Spark job runs in streaming mode.
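If you do run it as a streaming job, a rough sketch of the setup could look like this (Scala; the paths, batch interval and transformation are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object JsonEtlStream {
  val checkpointDir = "s3://my-bucket/checkpoints/json-etl" // placeholder

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("JsonEtlStream")
    val ssc = new StreamingContext(conf, Seconds(60))
    ssc.checkpoint(checkpointDir) // state/offset information is written here

    // textFileStream only picks up files that appear after the stream starts
    val json = ssc.textFileStream("s3://my-bucket/input-json/") // placeholder path
    json.foreachRDD { rdd =>
      // transform and write to RDS here (placeholder)
    }
    ssc
  }

  def main(args: Array[String]): Unit = {
    // resume from the checkpoint if it exists, otherwise build a fresh context
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}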
I am using s3distcp to copy a 500GB dataset into my EMR cluster. It's a 12-node r4.4xlarge cluster, each node with a 750GB disk. It uses the EMR release label emr-5.13.0, and I'm adding Hadoop: Amazon 2.8.3, Ganglia: 3.7.2 and Spark 2.3.0. I'm using the following command to copy the data into the cluster:
s3-dist-cp --src=s3://bucket/prefix/ --dest=hdfs:///local/path/ --groupBy=.*(part_).* --targetSize=128 --outputCodec=none
When I look at the disk usage in either Ganglia or the NameNode UI (port 50070 on the EMR cluster), I can see that one node has most of its disk filled while the others sit at a similar, lower percentage. Clicking through a lot of the files (~50), I can see that a replica of each file always appears on the full node.
I'm using Spark to transform this data, write it to HDFS, and then copy it back to S3. I'm having trouble with this dataset because my tasks are being killed, though I'm not certain this imbalance is the cause. I don't need to copy the data locally, nor decompress it. Initially I thought the BZIP2 codec was not splittable and that decompressing would help gain parallelism in my Spark jobs, but I was wrong: it is splittable. I have also discovered the hdfs balancer command, which I'm using to redistribute the replicas and see if this solves my Spark problems.
However, now that I've seen what I think is odd behaviour, I would like to understand: is it normal for s3distcp/HDFS to always create a replica of the files on one node?
s3distcp is closed source; I can't comment in detail about its internals.
When HDFS creates replicas of data, it tries to save one block on the local machine and then two more elsewhere (assuming replication == 3). Whichever host is running the distcp worker processes ends up holding a copy of the entire file, so if only one host is used for the copy, that host fills up.
FWIW, I don't believe you need that distcp at all, not if you can read and filter the data straight off S3 and save the result to HDFS. Your Spark workers will do the filtering and write their blocks back to the machines running those workers and to the other hosts in the chain. For short-lived clusters, you could also try lowering the HDFS replication factor (to 2?) to save HDFS space across the cluster, at the cost of having one less place for Spark to schedule work adjacent to the data.
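A sketch of that approach (paths are placeholders; the replication setting only affects files written by this job):

import org.apache.spark.sql.SparkSession

object FilterFromS3 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("FilterFromS3").getOrCreate()

    // for a short-lived cluster, lower HDFS replication for this job's output
    spark.sparkContext.hadoopConfiguration.setInt("dfs.replication", 2)

    // read and filter straight off S3 instead of distcp'ing into HDFS first
    val df = spark.read.text("s3://bucket/prefix/") // placeholder source
    val filtered = df.filter(df("value").isNotNull) // stand-in for the real filtering

    // the workers write their own blocks, so the data spreads across the cluster
    filtered.write.parquet("hdfs:///local/path/filtered/") // placeholder destination
  }
}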
I've been struggling to find out what is wrong with my Spark job, which hangs indefinitely when I try to write it out (~100GB of data in Parquet format) to either S3 or HDFS.
The line that causes the hang:
spark_df.write.save(MY_PATH,format='parquet',mode='append')
I have tried this in overwrite as well as append mode, and tried saving to HDFS and S3, but the job will hang no matter what.
In the Hadoop Resource Manager GUI, the state of the Spark application shows as "RUNNING", but nothing actually seems to be happening, and when I look at the Spark UI there are no jobs running.
The one thing that has gotten it to work is increasing the size of the cluster while it is in this hung state (I'm on AWS). Oddly, it doesn't matter whether I start the cluster with 6 workers and increase to 7, or start with 7 and increase to 8. The cluster is using all of the available memory in both cases, but I am not getting memory errors.
Any ideas on what could be going wrong?
Thanks for the help all. I ended up figuring out the problem was actually a few separate issues. Here's how I understand them:
When I was saving directly to S3, it was related to the issue that Steve Loughran mentioned, where renames on S3 are incredibly slow (so it looked like my cluster was doing nothing). On a write to S3, all the data is copied to temporary files and then "renamed" on S3; the problem is that these renames don't happen the way they do on a filesystem and actually take O(n) time. So all of my data was copied to S3, and then all of the remaining time was spent renaming the files.
The other problem I faced was with saving my data to HDFS and then moving it to S3 via s3-dist-cp. All of my cluster's resources were being used by Spark, so when the Application Master tried to hand out resources to move the data via s3-dist-cp, it couldn't. The data couldn't be moved because of Spark, and Spark wouldn't shut down because my program was still trying to copy data to S3, so the two were effectively deadlocked.
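Given that, the pattern that avoids the lock-up is to keep the copy to S3 out of the Spark application entirely: write only to HDFS, stop Spark so its resources are released, and let s3-dist-cp run as its own EMR step afterwards. Roughly (paths are placeholders):

import org.apache.spark.sql.SparkSession

object WriteThenCopy {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("WriteThenCopy").getOrCreate()
    val df = spark.read.parquet("hdfs:///input/") // placeholder

    // write only to HDFS from Spark; do not copy to S3 inside this application
    df.write.mode("append").parquet("hdfs:///output/staged/")

    // release the cluster resources so a later s3-dist-cp step can get containers
    spark.stop()
    // the copy to S3 then runs as a separate EMR step, e.g.:
    //   s3-dist-cp --src=hdfs:///output/staged/ --dest=s3://bucket/output/
  }
}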
Hope this can help someone else!
I am new to GAE and I am trying to quickly find a way to retrieve logs from the Datastore, clean them to my specs, and then save them to a table to be queried later for a reports view in my app. I was thinking of using Google Dataflow and creating batch jobs (the app is Python/Django), but the documentation does not seem to fit my use case, so maybe Dataflow is not the answer. I could create a Python script with BigQuery and schedule it through cron, but then I would have to contend with errors, and it seems there should be a faster way to solve this problem.
Any help/thoughts/suggestions are always greatly appreciated.
You can use the Dataflow/Beam Python SDK to develop a pipeline that reads entities from Datastore [1], transforms the data, and writes a table to BigQuery [2]. To schedule this job to run regularly you'll have to use a third-party mechanism such as a cron job. Note that Dataflow performs automatic scaling and retries to handle errors, so you are not expected to deal with these complexities manually.
[1] https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/datastore/v1/datastoreio.py
[2] https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/gcp/bigquery.py
I'm running a PySpark 2 job on EMR 5.1.0 as a step. Even after the script is done, with a _SUCCESS file written to S3 and the Spark UI showing the job as completed, EMR still shows the step as "Running". I've waited for over an hour to see if Spark was just trying to clean itself up, but the step never shows as "Completed". The last thing written in the logs is:
INFO MultipartUploadOutputStream: close closed:false s3://mybucket/some/path/_SUCCESS
INFO DefaultWriterContainer: Job job_201611181653_0000 committed.
INFO ContextCleaner: Cleaned accumulator 0
I didn't have this problem with Spark 1.6. I've tried a bunch of different hadoop-aws and aws-java-sdk jars to no avail.
I'm using the default Spark 2.0 configurations so I don't think anything else like metadata is being written. Also the size of the data doesn't seem to have an impact on this problem.
If you aren't already, you should close your spark context.
sc.stop()
Also, if you are watching the Spark Web UI via a browser, you should close that as well, as it sometimes keeps the Spark context alive. I recall seeing this on the Spark dev mailing list, but can't find the JIRA for it.
We experienced this problem and resolved it by running the job in cluster deploy mode using the following spark-submit option:
spark-submit --deploy-mode cluster
It has something to do with the fact that, when running in client mode, the driver runs on the master instance and the spark-submit process gets stuck even though the Spark context has been closed. This causes the instance controller to keep polling for the process, since it never receives the completion signal. Running the driver on one of the instance nodes via the option above doesn't seem to have this problem. Hope this helps.
I experienced the same issue with Spark on AWS EMR and solved it by calling sys.exit(0) at the end of my Python script. The same worked for a Scala program with System.exit(0).
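For the Scala case that just means ending main with an explicit stop and exit; a minimal sketch (the object name is a placeholder):

import org.apache.spark.sql.SparkSession

object MyEmrJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MyEmrJob").getOrCreate()
    try {
      // ... the actual job goes here ...
    } finally {
      spark.stop()   // close the Spark context first
      System.exit(0) // then force the JVM to exit so the EMR step is marked complete
    }
  }
}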