I am trying to set up a Hadoop environment using the latest Hadoop release (2.6.0) and Java SDK 1.7.0 on my Ubuntu desktop. I configured Hadoop with the necessary environment parameters, and all of its processes are up and running, as shown by the following jps command:
nandu@nandu-Desktop:~$ jps
2810 NameNode
3149 SecondaryNameNode
3416 NodeManager
3292 ResourceManager
2966 DataNode
4805 Jps
I can also see the above information, plus the DFS files, through the Firefox browser. However, when I try to run a simple WordCount MapReduce job, it hangs and does not produce any output or show any error messages. After a while I killed the job using the "hadoop job -kill " command. Can you please guide me in finding the cause of this issue and how to resolve it? The job start and kill (end) output is shown below.
If you need additional information, please let me know.
Your help will be highly appreciated.
Thanks,
===================================================================
nandu@nandu-Desktop:~/dev$ hadoop jar wc.jar WordCount /user/nandu/input /user/nandu/output
15/02/27 10:35:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/27 10:35:20 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/02/27 10:35:21 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/02/27 10:35:21 INFO input.FileInputFormat: Total input paths to process : 2
15/02/27 10:35:21 INFO mapreduce.JobSubmitter: number of splits:2
15/02/27 10:35:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1425048764581_0003
15/02/27 10:35:22 INFO impl.YarnClientImpl: Submitted application application_1425048764581_0003
15/02/27 10:35:22 INFO mapreduce.Job: The url to track the job: http://nandu-Desktop:8088/proxy/application_1425048764581_0003/
15/02/27 10:35:22 INFO mapreduce.Job: Running job: job_1425048764581_0003
==================== at this point the job was killed ===================
15/02/27 10:38:23 INFO mapreduce.Job: Job job_1425048764581_0003 running in uber mode : false
15/02/27 10:38:23 INFO mapreduce.Job: map 0% reduce 0%
15/02/27 10:38:23 INFO mapreduce.Job: Job job_1425048764581_0003 failed with state KILLED due to: Application killed by user.
15/02/27 10:38:23 INFO mapreduce.Job: Counters: 0
I encountered a similar problem while running the MapReduce sample provided in the Hadoop package. In my case it was hanging due to low disk space on my VM (only about 1.5 GB was free). Once I freed some disk space it ran fine. Also, please check that the other system resource requirements are met.
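If you want to verify that disk space is the problem, a quick check (a diagnostic sketch for a single-node setup like the one above; the 90% figure is YARN's default per-disk utilization threshold) is:
df -h /                  # free space on the partition holding the HDFS and YARN local dirs
hdfs dfsadmin -report    # "DFS Remaining" per DataNode
yarn node -list -all     # a node reported as UNHEALTHY usually means its local dirs are over 90% full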
Related
I have a 2-node EMR (version 4.6.0) cluster (1 master (m4.large), 1 core (r4.xlarge)) with HBase installed. I'm using the default EMR configurations. I want to export HBase tables using
hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.mapreduce.include.deleted.rows=true Table_Name hdfs:/full_backup/Table_Name 1
I get the following output:
2022-04-04 11:29:20,626 INFO [main] util.RegionSizeCalculator: Calculating region sizes for table "Table_Name".
2022-04-04 11:29:20,900 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2022-04-04 11:29:20,900 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x17ff27095680070
2022-04-04 11:29:20,903 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x17ff27095680070
2022-04-04 11:29:20,904 INFO [main] zookeeper.ZooKeeper: Session: 0x17ff27095680070 closed
2022-04-04 11:29:20,980 INFO [main] mapreduce.JobSubmitter: number of splits:1
2022-04-04 11:29:20,994 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2022-04-04 11:29:21,192 INFO [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1649071534731_0002
2022-04-04 11:29:21,424 INFO [main] impl.YarnClientImpl: Submitted application application_1649071534731_0002
2022-04-04 11:29:21,454 INFO [main] mapreduce.Job: The url to track the job: http://ip-10-0-2-244.eu-west-1.compute.internal:20888/proxy/application_1649071534731_0002/
2022-04-04 11:29:21,455 INFO [main] mapreduce.Job: Running job: job_1649071534731_0002
2022-04-04 11:29:28,541 INFO [main] mapreduce.Job: Job job_1649071534731_0002 running in uber mode : false
2022-04-04 11:29:28,542 INFO [main] mapreduce.Job: map 0% reduce 0%
The job stays stuck at this point and never makes progress. However, when I add a task node and rerun the same command, it finishes within seconds.
Based on the documentation (https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-master-core-task-nodes.html), the core node itself should handle tasks as well. What could be going wrong?
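One way to narrow this down (a diagnostic sketch only; the application id is the one from the log above) is to ask YARN whether the export ever left the ACCEPTED state and whether the core node has any containers to offer while the job sits at 0%:
yarn application -status application_1649071534731_0002   # ACCEPTED with no progress usually means no container could be allocated
yarn node -list -all                                      # check the core node's health and its available memory/vCores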
I am attempting to export a Hive database table into a MySQL database table on an Amazon AWS cluster using the command:
sqoop export --connect jdbc:mysql://database_hostname/universities --table 19_20 --username admin -P --export-dir '/final/hive/19_20'
I am exporting from the folder '/final/hive/19_20', which is the Hive output directory, into the MySQL database 'universities', table '19_20'.
In response I get:
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/aws/redshift/jdbc/redshift-jdbc42-1.2.37.1061.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
21/04/11 01:42:13 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Enter password:
21/04/11 01:42:18 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
21/04/11 01:42:18 INFO tool.CodeGenTool: Beginning code generation
21/04/11 01:42:19 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `19_20` AS t LIMIT 1
21/04/11 01:42:19 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `19_20` AS t LIMIT 1
21/04/11 01:42:19 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
/tmp/sqoop-hadoop/compile/8aac2b94e7d11dc02d064c8213465c05/_19_20.java:37: warning: Can't initialize javac processor due to (most likely) a class loader problem: java.lang.NoClassDefFoundError: com/sun/tools/javac/processing/JavacProcessingEnvironment
public class _19_20 extends SqoopRecord implements DBWritable, Writable {
^
at lombok.javac.apt.LombokProcessor.getJavacProcessingEnvironment(LombokProcessor.java:411)
at lombok.javac.apt.LombokProcessor.init(LombokProcessor.java:91)
at lombok.core.AnnotationProcessor$JavacDescriptor.want(AnnotationProcessor.java:124)
at lombok.core.AnnotationProcessor.init(AnnotationProcessor.java:177)
at lombok.launch.AnnotationProcessorHider$AnnotationProcessor.init(AnnotationProcessor.java:73)
at com.sun.tools.javac.processing.JavacProcessingEnvironment$ProcessorState.<init>(JavacProcessingEnvironment.java:508)
at com.sun.tools.javac.processing.JavacProcessingEnvironment$DiscoveredProcessors$ProcessorStateIterator.next(JavacProcessingEnvironment.java:605)
at com.sun.tools.javac.processing.JavacProcessingEnvironment.discoverAndRunProcs(JavacProcessingEnvironment.java:698)
at com.sun.tools.javac.processing.JavacProcessingEnvironment.access$1800(JavacProcessingEnvironment.java:91)
at com.sun.tools.javac.processing.JavacProcessingEnvironment$Round.run(JavacProcessingEnvironment.java:1043)
at com.sun.tools.javac.processing.JavacProcessingEnvironment.doProcessing(JavacProcessingEnvironment.java:1184)
at com.sun.tools.javac.main.JavaCompiler.processAnnotations(JavaCompiler.java:1170)
at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:856)
at com.sun.tools.javac.main.Main.compile(Main.java:523)
at com.sun.tools.javac.api.JavacTaskImpl.doCall(JavacTaskImpl.java:129)
at com.sun.tools.javac.api.JavacTaskImpl.call(JavacTaskImpl.java:138)
at org.apache.sqoop.orm.CompilationManager.compile(CompilationManager.java:224)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:63)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: com.sun.tools.javac.processing.JavacProcessingEnvironment
at java.lang.ClassLoader.findClass(ClassLoader.java:523)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at lombok.launch.ShadowClassLoader.loadClass(ShadowClassLoader.java:530)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 26 more
Note: /tmp/sqoop-hadoop/compile/8aac2b94e7d11dc02d064c8213465c05/_19_20.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning
21/04/11 01:42:24 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/8aac2b94e7d11dc02d064c8213465c05/19_20.jar
21/04/11 01:42:24 INFO mapreduce.ExportJobBase: Beginning export of 19_20
21/04/11 01:42:24 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
21/04/11 01:42:26 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
21/04/11 01:42:26 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
21/04/11 01:42:26 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
21/04/11 01:42:26 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-6-179.ec2.internal/172.31.6.179:8032
21/04/11 01:42:26 INFO client.AHSProxy: Connecting to Application History server at ip-172-31-6-179.ec2.internal/172.31.6.179:10200
21/04/11 01:42:28 INFO input.FileInputFormat: Total input files to process : 1
21/04/11 01:42:29 INFO input.FileInputFormat: Total input files to process : 1
21/04/11 01:42:29 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
21/04/11 01:42:29 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 3fb854bbfdabadafad1fa2cca072658fa097fd67]
21/04/11 01:42:29 INFO mapreduce.JobSubmitter: number of splits:4
21/04/11 01:42:29 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
21/04/11 01:42:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1618090360850_0017
21/04/11 01:42:29 INFO conf.Configuration: resource-types.xml not found
21/04/11 01:42:29 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
21/04/11 01:42:29 INFO resource.ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
21/04/11 01:42:29 INFO resource.ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
21/04/11 01:42:29 INFO impl.YarnClientImpl: Submitted application application_1618090360850_0017
21/04/11 01:42:29 INFO mapreduce.Job: The url to track the job: http://ip-172-31-6-179.ec2.internal:20888/proxy/application_1618090360850_0017/
21/04/11 01:42:29 INFO mapreduce.Job: Running job: job_1618090360850_0017
21/04/11 01:42:37 INFO mapreduce.Job: Job job_1618090360850_0017 running in uber mode : false
21/04/11 01:42:37 INFO mapreduce.Job: map 0% reduce 0%
21/04/11 01:43:00 INFO mapreduce.Job: map 100% reduce 0%
21/04/11 01:43:01 INFO mapreduce.Job: Job job_1618090360850_0017 failed with state FAILED due to: Task failed task_1618090360850_0017_m_000002
Job failed as tasks failed. failedMaps:1 failedReduces:0
21/04/11 01:43:01 INFO mapreduce.Job: Counters: 12
Job Counters
Failed map tasks=3
Killed map tasks=1
Launched map tasks=4
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=3779136
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=78732
Total vcore-milliseconds taken by all map tasks=78732
Total megabyte-milliseconds taken by all map tasks=120932352
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
21/04/11 01:43:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
21/04/11 01:43:01 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 34.8867 seconds (0 bytes/sec)
21/04/11 01:43:01 INFO mapreduce.ExportJobBase: Exported 0 records.
21/04/11 01:43:01 ERROR mapreduce.ExportJobBase: Export job failed!
21/04/11 01:43:01 ERROR tool.ExportTool: Error during export:
Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:931)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Please let me know whether this can be fixed and, if so, how.
I was not able to fully resolve Sqoop exports on AWS; however, I stopped receiving the Lombok errors by downgrading to the prior version of EMR.
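As a rough sketch of the workaround (the release label and application list below are assumptions; substitute whichever earlier EMR release worked for your cluster):
aws emr create-cluster \
  --name "sqoop-export-cluster" \
  --release-label emr-5.36.0 \
  --applications Name=Hadoop Name=Hive Name=Sqoop \
  --instance-type m4.large \
  --instance-count 2 \
  --use-default-roles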
I hope this helps anyone else experiencing this issue.
I have submitted a Sqoop job via a GCP Dataproc cluster with the --as-avrodatafile configuration argument set, but it is failing with the error below:
19/08/12 22:34:34 INFO impl.YarnClientImpl: Submitted application application_1565634426340_0021
19/08/12 22:34:34 INFO mapreduce.Job: The url to track the job: http://sqoop-gcp-ingest-mzp-m:8088/proxy/application_1565634426340_0021/
19/08/12 22:34:34 INFO mapreduce.Job: Running job: job_1565634426340_0021
19/08/12 22:34:40 INFO mapreduce.Job: Job job_1565634426340_0021 running in uber mode : false
19/08/12 22:34:40 INFO mapreduce.Job: map 0% reduce 0%
19/08/12 22:34:45 INFO mapreduce.Job: Task Id : attempt_1565634426340_0021_m_000000_0, Status : FAILED
Error: org.apache.avro.reflect.ReflectData.addLogicalTypeConversion(Lorg/apache/avro/Conversion;)V
19/08/12 22:34:50 INFO mapreduce.Job: Task Id : attempt_1565634426340_0021_m_000000_1, Status : FAILED
Error: org.apache.avro.reflect.ReflectData.addLogicalTypeConversion(Lorg/apache/avro/Conversion;)V
19/08/12 22:34:55 INFO mapreduce.Job: Task Id : attempt_1565634426340_0021_m_000000_2, Status : FAILED
Error: org.apache.avro.reflect.ReflectData.addLogicalTypeConversion(Lorg/apache/avro/Conversion;)V
19/08/12 22:35:00 INFO mapreduce.Job: map 100% reduce 0%
19/08/12 22:35:01 INFO mapreduce.Job: Job job_1565634426340_0021 failed with state FAILED due to: Task failed task_1565634426340_0021_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
19/08/12 22:35:01 INFO mapreduce.Job: Counters: 11
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=4
Total time spent by all maps in occupied slots (ms)=41976
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=13992
Total vcore-milliseconds taken by all map tasks=13992
Total megabyte-milliseconds taken by all map tasks=42983424
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
19/08/12 22:35:01 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
19/08/12 22:35:01 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 30.5317 seconds (0 bytes/sec)
19/08/12 22:35:01 INFO mapreduce.ImportJobBase: Retrieved 0 records.
19/08/12 22:35:01 DEBUG util.ClassLoaderStack: Restoring classloader: sun.misc.Launcher$AppClassLoader@61baa894
19/08/12 22:35:01 ERROR tool.ImportTool: Import failed: Import job failed!
19/08/12 22:35:01 DEBUG manager.OracleManager$ConnCache: Caching released connection for jdbc:oracle:thin:@10.25.42.52:1521/uataca.aaamidatlantic.com/GCPREADER
Job output is complete
Without the --as-avrodatafile argument, it works fine.
To fix this issue you need to set the mapreduce.job.classloader property to true when submitting your job:
gcloud dataproc jobs submit hadoop --cluster="${CLUSTER_NAME}" \
--class="org.apache.sqoop.Sqoop" \
--properties="mapreduce.job.classloader=true" \
. . .
-- \
--as-avrodatafile \
. . .
I have created a runnable JAR file and am executing it in Hadoop, but I am not getting the output. The code itself works fine: I checked it in Eclipse with the Hadoop JAR files added and got exactly the right output.
hduser@Strawhats:~$ hadoop jar /home/hduser/Desktop/project.jar /user/hduser/input /user/hduser/output
17/02/20 19:18:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/20 19:18:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/02/20 19:18:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/02/20 19:18:05 INFO mapred.FileInputFormat: Total input paths to process : 1
17/02/20 19:18:05 INFO mapreduce.JobSubmitter: number of splits:2
17/02/20 19:18:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1487596891791_0003
17/02/20 19:18:06 INFO impl.YarnClientImpl: Submitted application application_1487596891791_0003
17/02/20 19:18:06 INFO mapreduce.Job: The url to track the job: http://Strawhats:8088/proxy/application_1487596891791_0003/
17/02/20 19:18:06 INFO mapreduce.Job: Running job: job_1487596891791_0003
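When the client log stops at "Running job:" like this, the application is usually still waiting for YARN to allocate containers rather than actually running. Two commands that can show where it is stuck (a diagnostic sketch; the application id is the one printed above, and yarn logs requires log aggregation to be enabled):
yarn application -status application_1487596891791_0003
yarn logs -applicationId application_1487596891791_0003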
Since I don't have space to run CDH or the Sandbox, I downloaded Hadoop 2.6.0 and the Hadoop streaming JAR from here.
I ran the following command:
bin/hadoop jar contrib/hadoop-streaming-2.6.0.jar \
-file ${HADOOP_HOME}/py_mapred/mapper.py -mapper ${HADOOP_HOME}/py_mapred/mapper.py \
-file ${HADOOP_HOME}/py_mapred/reducer.py -reducer ${HADOOP_HOME}/py_mapred/reducer.py \
-input /input/davinci/* -output /input/davinci-output
Here, the downloaded streaming JAR is stored in ${HADOOP_HOME}/contrib, and the other files are in py_mapred. I also used copyFromLocal to copy the input to the /input directory on HDFS. Now, when I run the command, the following lines show up:
15/08/14 17:35:45 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
15/08/14 17:35:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/usr/local/cellar/hadoop/2.6.0/py_mapred/mapper.py, /usr/local/cellar/hadoop/2.6.0/py_mapred/reducer.py, /var/folders/c5/4xfj65v15g91f71c_b9whnpr0000gn/T/hadoop-unjar3313567263260134566/] [] /var/folders/c5/4xfj65v15g91f71c_b9whnpr0000gn/T/streamjob9165494241574343777.jar tmpDir=null
15/08/14 17:35:47 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 17:35:47 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/08/14 17:35:48 INFO mapred.FileInputFormat: Total input paths to process : 1
15/08/14 17:35:48 INFO mapreduce.JobSubmitter: number of splits:2
15/08/14 17:35:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1439538212023_0002
15/08/14 17:35:49 INFO impl.YarnClientImpl: Submitted application application_1439538212023_0002
15/08/14 17:35:49 INFO mapreduce.Job: The url to track the job: http://Jonathans-MacBook-Pro.local:8088/proxy/application_1439538212023_0002/
15/08/14 17:35:49 INFO mapreduce.Job: Running job: job_1439538212023_0002
It looks like the command has been accepted. I checked on localhost:8088 and the job does register. However, it's not running, despite the message "Running job: job_1439538212023_0002". Is there something wrong with my command? Is it due to a permission setting? Why isn't the job running?
Thank you
Here is the right way to invoke streaming:
bin/hadoop jar contrib/hadoop-streaming-2.6.0.jar \
    -file ${HADOOP_HOME}/py_mapred/mapper.py -mapper '/usr/bin/python mapper.py' \
    -file ${HADOOP_HOME}/py_mapred/reducer.py -reducer '/usr/bin/python reducer.py' \
    -input /input/davinci/* -output /input/davinci-output