
Notes on fixing java.io.IOException: Bad connect ack with firstBadLink as 192.168.X.X during DataNode file reads and writes

Today I noticed that two jobs on the cluster had failed:

    Err log:

In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
java.io.IOException: Bad connect ack with firstBadLink as 192.168.44.57:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
Job Submission failed with exception 'java.io.IOException(Bad connect ack with firstBadLink as 192.168.44.57:50010)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 16  Reduce: 8   Cumulative CPU: 677.62 sec   HDFS Read: 1794135673   HDFS Write: 136964041  SUCCESS
Stage-Stage-5: Map: 16  Reduce: 8   Cumulative CPU: 864.95 sec   HDFS Read: 1794135673   HDFS Write: 120770083  SUCCESS
Stage-Stage-4: Map: 70  Reduce: 88  Cumulative CPU: 5431.46 sec  HDFS Read: 22519878178  HDFS Write: 422001541  SUCCESS
Total MapReduce CPU Time Spent: 0 days 1 hours 56 minutes 14 seconds 30 msec
task BFD_JOB_TASK_521_20150721041704 is complete.

     What the error roughly means: while the job was running its map phase, just as the map output was about to be written out, it threw:

               java.io.IOException: Bad connect ack with firstBadLink as 192.168.44.57:50010

     The cause:

              When a DataNode writes to HDFS, it actually writes files to the local Linux filesystem through an intermediate service called xcievers. These xcievers are simply the threads that handle reading and writing block files between the DataNode and its local disks. The more blocks a DataNode holds, the more of these threads it needs. The problem is that the thread count has an upper limit (4096 in the default configuration). So when a DataNode carries too many blocks, some block files cannot get a thread to handle their reads and writes, and the error above appears (the block write fails).
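
     To check what limit a cluster is actually running with, you can read the value out of the client configuration. Below is a minimal sketch, assuming hadoop-common is on the classpath and hdfs-site.xml is available as a classpath resource (the class name CheckTransferThreads is just for illustration):

         import org.apache.hadoop.conf.Configuration;

         public class CheckTransferThreads {
             public static void main(String[] args) {
                 // new Configuration() loads core-default.xml / core-site.xml;
                 // pull in hdfs-site.xml explicitly so the DataNode setting is visible
                 Configuration conf = new Configuration();
                 conf.addResource("hdfs-site.xml");

                 // 4096 is the shipped default described above
                 int max = conf.getInt("dfs.datanode.max.transfer.threads", 4096);
                 System.out.println("dfs.datanode.max.transfer.threads = " + max);
             }
         }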

     The solution:

              Add the following to hdfs-site.xml:

 <property>
     <name>dfs.datanode.max.transfer.threads</name>
     <value>16000</value>
 </property>
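
 Note: dfs.datanode.max.transfer.threads is the current name of this property; on older Hadoop releases the same limit is set through the misspelled legacy key dfs.datanode.max.xcievers, so use that key there instead. The DataNodes typically need a restart for the new value to take effect, after which the effective setting can be confirmed with hdfs getconf -confKey dfs.datanode.max.transfer.threads.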

 

    Tips:

           The same limitation can also cause this error on the read path:

                                 10/12/08 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
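
           This is the read-path flavor of the same problem: every DataNode holding a replica has exhausted its transfer threads, so the client cannot fetch the block from any of them and keeps asking the NameNode for new locations. Raising the thread limit as above should clear this variant as well.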

Author: 天天吃火锅
Source: https://www.cnblogs.com/hortonworks/p/6388191.html
