
Hadoop WordCount program


Hello,

I tried the Hadoop WordCount sample with Syncfusion by following your YouTube video, but I am getting the exceptions below. Please help.


java.lang.Exception: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in localfetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.FileNotFoundException: C:/tmp/hadoop-Anish%20balajee/mapred/local/localRunner/Anish%20balajee/jobcache/job_local51755170_0001/attempt_local51755170_0001_m_000000_0/output/file.out.index
at org.apache.hadoop.fs.RawLocalFileSystem.open(RawLocalFileSystem.java:198)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
at org.apache.hadoop.io.SecureIOUtils.openFSDataInputStream(SecureIOUtils.java:156)
at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:70)
at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62)
at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:57)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.copyMapOutput(LocalFetcher.java:123)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.doCopy(LocalFetcher.java:101)
at org.apache.hadoop.mapreduce.task.reduce.LocalFetcher.run(LocalFetcher.java:84)
2016-05-14 23:02:11,190 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) - Job job_local51755170_0001 failed with state FAILED due to: NA
2016-05-14 23:02:11,216 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 26
File System Counters
FILE: Number of bytes read=150
FILE: Number of bytes written=870124
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=3291641
HDFS: Number of bytes written=0
HDFS: Number of read operations=5
HDFS: Number of large read operations=0
HDFS: Number of write operations=1
Map-Reduce Framework
Map input records=65007
Map output records=566317
Map output bytes=5478301
Map output materialized bytes=630528
Input split bytes=91
Combine input records=566317
Combine output records=42065
Spilled Records=42065
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=242
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
Total committed heap usage (bytes)=182452224
File Input Format Counters 
Bytes Read=3291641
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at com.syncfusion.Wordcount.run(Wordcount.java:132)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.syncfusion.Wordcount.main(Wordcount.java:142)
:run FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':run'.
> Process 'command 'C:\Syncfusion\BigData\2.11.0.92\BigDataSDK\Java\jdk1.7.0_51\bin\java.exe'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 29.502 secs
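A likely culprit, judging by the FileNotFoundException above: the failing path contains "Anish%20balajee", i.e. a Windows user name with a space, which the local job runner's map-output path handling is known to trip over. One common workaround (a sketch, not an official fix) is to point hadoop.tmp.dir at a space-free directory in core-site.xml, since the failing path C:/tmp/hadoop-Anish%20balajee/... follows the default /tmp/hadoop-${user.name} layout:

```xml
<!-- core-site.xml: move Hadoop's local scratch space to a path with no spaces.
     The directory below is only an example; any space-free path works. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>C:/hadoop/tmp</value>
</property>
```

Alternatively, running the job under a Windows account whose name contains no spaces avoids the problem without any configuration change.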


1 Reply

Madhan Kumar S (Syncfusion Team) — May 17, 2016 02:39 PM UTC

Hi Balaji, 

We are afraid we are unable to reproduce the issue you reported in our WordCount sample. Could you please share the following information? It will help us reproduce the issue at our end and provide a better solution:

1. Did you run the WordCount sample on a pseudo-node cluster or a remote cluster?
2. Could you please share the sample program (C# or Java) with which you reproduced the issue?

Regards, 
Madhan Kumar S 

