Common Hadoop Problems

Error 1: is group-writable, and the group is not root. Its permissions are 0775

When starting the DataNode, the log reports:

“xxxx” is group-writable, and the group is not root.  Its permissions are 0775, and it is owned by gid 3245.  Please fix this or select a different socket path.

The error indicates that the permissions in the Hadoop directory tree are wrong (the DataNode validates the owner and permissions of every directory on the domain-socket path), so reset the directory permissions:

chmod 755 hadoop/

Restart the DataNode and check the log again; the same error comes back:

xxx  is group-writable, and the group is not root.  Its permissions are 0775, and it is owned by gid 3245.  Please fix this or select a different socket path

No luck, so I generated the socket file in a different directory, set its permissions to 755, and the DataNode started normally.
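For reference, a minimal sketch of what that change looks like, assuming the socket path is configured through the standard dfs.domain.socket.path property in hdfs-site.xml (the new directory below is only an illustrative example, not the path actually used):

<property>
    <name>dfs.domain.socket.path</name>
    <!-- illustrative new location; the directory must be owned by the DataNode user (or root) and must not be group-writable -->
    <value>/app/log4x/apps/hadoop/run/DN_PORT</value>
</property>

After changing the value, create the directory with mode 755 and restart the DataNode.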

Error 2: we cannot start a localDataXceiverServer because libhadoop cannot be loaded.

java.lang.RuntimeException: Although a UNIX domain socket path is configured as /app/log4x/apps/hadoop/etc/DN_PORT, we cannot start a localDataXceiverServer because libhadoop cannot be loaded.

1. Check the host

# hadoop checknative -a
23/11/07 21:13:08 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
23/11/07 21:13:08 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
23/11/07 21:13:08 DEBUG util.NativeCodeLoader: java.library.path=:/app/log4x/apps/hadoop/lib/native/Linux-amd64-64/:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
23/11/07 21:13:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/11/07 21:13:08 DEBUG util.Shell: setsid exited with exit code 0
Native library checking:
hadoop:  false 
zlib:    false 
snappy:  false 
lz4:     false 
bzip2:   false 
openssl: false 
23/11/07 21:13:08 INFO util.ExitUtil: Exiting with status 1

2. Add configuration

Add the following to ~/.bash_profile to point at the native library path:

export JAVA_LIBRARY_PATH=/app/log4x/apps/hadoop/lib/native

source ~/.bash_profile

# hadoop checknative -a
23/11/08 16:06:00 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
23/11/08 16:06:00 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /app/log4x/apps/hadoop/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /lib64/libsnappy.so.1
lz4:     true revision:99
bzip2:   false 
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
23/11/08 16:06:00 INFO util.ExitUtil: Exiting with status 1
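An alternative to per-user ~/.bash_profile settings (my own suggestion, not what was done here) is to point the daemons at the native libraries in hadoop-env.sh, typically under $HADOOP_HOME/etc/hadoop/, so every Hadoop process picks it up:

export JAVA_LIBRARY_PATH=/app/log4x/apps/hadoop/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/app/log4x/apps/hadoop/lib/native"

The remaining openssl: false line only means libcrypto.so is not installed on the host; it disables the native OpenSSL codec but does not prevent the DataNode from starting.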

Error 3: IPC's epoch 1 is not the current writer epoch 0

org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 3/5. 4 exceptions thrown:
10.255.33.120:43001: IPC's epoch 1 is not the current writer epoch  0

Strange; I hit the same error, so I looked at how others had solved it:

1. Googling the key message "IPC's epoch is less than the last promised epoch" shows that most answers attribute it to network problems.
2. Consistent with that, the logs show that every time the other NameNode is started it probes port 8485 on the three JournalNodes and the probe fails, so a network problem is the most likely cause (a quick port check is sketched after this list). Checks performed:
   - ifconfig -a to look for packet loss on the NICs;
   - /etc/sysconfig/selinux should contain SELINUX=disabled;
   - /etc/init.d/iptables status to see whether the firewall is running. Our Hadoop runs on an internal network and the firewall was turned off at deployment time, so this looked like the culprit;
   - /etc/init.d/iptables stop;
   - the firewall turned out to be inexplicably running on all three JournalNode servers, so it was stopped on each;
   - after restarting both NameNodes, the logs were normal again.
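A quick connectivity check along those lines, assuming the JournalNode RPC port 8485 from the quoted write-up (in this cluster it is 43001, as the logs below show) and hypothetical hostnames jn1..jn3:

for h in jn1 jn2 jn3; do
  timeout 3 bash -c "exec 3<>/dev/tcp/$h/8485" && echo "$h:8485 reachable" || echo "$h:8485 NOT reachable"
done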

I then checked my own environment:

1. SELINUX is disabled on every host

# ansible -i hosts tt -m shell -a "sudo cat /etc/sysconfig/selinux  | grep SELINUX"
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
host_121 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 
host_118 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 
host_119 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 
host_122 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 
host_120 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 
host_126 | CHANGED | rc=0 >>
# SELINUX= can take one of these three values:
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
SELINUXTYPE=targeted 

2. Check the firewall status

Some firewalls were stopped and some were running. I suspected the firewall at first, but another Hadoop deployment runs on the same hosts under a different user with no problem, so the firewall cannot be affecting only the current user.

# ansible -i hosts tt -m shell -a "systemctl status iptables |grep  Active:"
host_118 | CHANGED | rc=0 >>
   Active: failed (Result: exit-code) since ? 2022-04-26 11:04:51 CST; 1 years 6 months ago
host_121 | CHANGED | rc=0 >>
   Active: active (exited) since ? 2022-04-26 11:05:40 CST; 1 years 6 months ago
host_119 | CHANGED | rc=0 >>
   Active: active (exited) since ? 2022-04-26 10:59:41 CST; 1 years 6 months ago
host_122 | CHANGED | rc=0 >>
   Active: active (exited) since ? 2021-03-18 17:13:00 CST; 2 years 7 months ago
host_120 | CHANGED | rc=0 >>
   Active: failed (Result: exit-code) since ? 2022-04-26 11:05:06 CST; 1 years 6 months ago
host_126 | CHANGED | rc=0 >>
   Active: active (exited) since ? 2021-03-18 15:58:10 CST; 2 years 7 months ago

3. Check the JournalNode log

2023-11-08 11:46:08,640 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting journal Storage Directory /app/log4x/apps/hadoop/jn/log4x-hcluster with nsid: 1531841238
2023-11-08 11:46:08,642 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /app/log4x/apps/hadoop/jn/log4x-hcluster/in_use.lock acquired by nodename 3995@hkcrmlog04
2023-11-08 11:49:37,343 INFO org.apache.hadoop.hdfs.qjournal.server.Journal: Updating lastPromisedEpoch from 0 to 1 for client /10.255.33.121
2023-11-08 11:49:37,345 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 43001, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 10.255.33.121:51086 Call#65 Retry#0
java.io.IOException: IPC's epoch 1 is not the current writer epoch  0
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:445)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:342)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:148)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

So it really had nothing to do with the firewall. After formatting the NameNode, I created the socket file at the socket path configured in hdfs-site.xml.

# ll
total 4
srwxr-xr-x 1 log4x log4x    0 Nov  8 15:05 DN_PORT
drwxr-xr-x 2 log4x log4x 4096 Nov  8 14:47 hadoop
[log4x@hkcrmlog04 etc]$ pwd
/app/log4x/apps/hadoop/etc
[log4x@hkcrmlog04 etc]$ grep -ir 'DN_PORT' hadoop/
hadoop/hdfs-site.xml.bajk:              <value>/app/ailog4x/hadoop/etc/DN_PORT</value>
hadoop/hdfs-site.xml.1107bak:           <value>/app/log4x/apps/hadoop/etc/DN_PORT</value>
hadoop/hdfs-site.xml:           <value>/app/log4x/apps/hadoop/etc/DN_PORT</value>

4. Notes

4.1 nc -Ul DN_PORT creates the socket file. If the command does not exist, install it with yum install -y nc.

4.2 chmod 666 DN_PORT gives the socket file mode 666.

4.3 chmod -R 755 hadoop/ gives the whole hadoop directory tree mode 755 (see the verification sketch below).
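To double-check the result against the rule from Error 1 (no group-writable directories on the socket path), a small verification sketch using the paths from the listing above:

namei -l /app/log4x/apps/hadoop/etc/DN_PORT    # owner and mode of every path component
ls -l /app/log4x/apps/hadoop/etc/DN_PORT       # first character should be 's' (socket), mode 666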

Error 4: Operation category JOURNAL is not supported in state standby

Operation category JOURNAL is not supported in state standby ...

 Call From hkcrmlog03/10.255.33.120 to hkcrmlog04:40101 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
        at org.apache.hadoop.ipc.Client.call(Client.java:1474)
        at org.apache.hadoop.ipc.Client.call(Client.java:1401)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy15.rollEditLog(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:271)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:313)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:412)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1523)
        at org.apache.hadoop.ipc.Client.call(Client.java:1440)

1. Check the NameNode states

# bin/hdfs haadmin -getServiceState l4xnn2
standby
[log4x@hkcrmlog03 hadoop]$ 
[log4x@hkcrmlog03 hadoop]$ bin/hdfs haadmin -getServiceState l4xnn1
standby

# bin/hdfs haadmin -transitionToActive --forcemanual l4xnn1
You have specified the forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.

It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.

You may abort safely by answering 'n' or hitting ^C now.

Are you sure you want to continue? (Y or N) y
23/11/08 15:46:00 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at /10.255.33.121:40101
23/11/08 15:46:00 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at /10.255.33.120:40101
[log4x@hkcrmlog03 hadoop]$ 
[log4x@hkcrmlog03 hadoop]$ 
[log4x@hkcrmlog03 hadoop]$ 
[log4x@hkcrmlog03 hadoop]$ bin/hdfs haadmin -getServiceState l4xnn1             
active
[log4x@hkcrmlog03 hadoop]$ bin/hdfs haadmin -getServiceState l4xnn2             
standby

The NameNode logs are back to normal.
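Since the warnings above say automatic failover is enabled, a less risky alternative (my own suggestion, not what was done here) is to let the failover controllers handle the transition instead of --forcemanual, roughly:

sbin/hadoop-daemon.sh start zkfc                 # make sure the ZKFC is running on each NameNode host
bin/hdfs haadmin -failover l4xnn2 l4xnn1         # fail over from l4xnn2 to l4xnn1 via the controllers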

Error 5: Error replaying edit log at offset 0. Expected transaction ID was 1

org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 0.  Expected transaction ID was 1

On the node, I re-ran:

# $HADOOP_PREFIX/bin/hdfs namenode -format

Then started the NameNode:

# $HADOOP_PREFIX/sbin/hadoop-daemon.sh --script hdfs start namenode
$ bin/hdfs haadmin -getServiceState l4xnn1
active
[log4x@hkcrmlog03 hadoop]$ 
[log4x@hkcrmlog03 hadoop]$ bin/hdfs haadmin -getServiceState l4xnn2
standby

The NameNode is back to normal.
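Note that hdfs namenode -format wipes the namespace, so it is only an option when the data can be rebuilt. On the other (standby) NameNode the usual follow-up is to resync its metadata instead of formatting it as well; a sketch, run on the standby host with the paths used above:

$HADOOP_PREFIX/bin/hdfs namenode -bootstrapStandby
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --script hdfs start namenode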

Error 6: journal Storage Directory

journal Storage Directory /app/log4x/apps/hadoop/jn/log4x-hcluster: NameNode

Roughly speaking, the metadata stored by the JournalNodes is inconsistent with the NameNode's; 2 of the 3 machines reported this error.

Start the JournalNode on nn1, then run hdfs namenode -initializeSharedEdits so that the JournalNodes are brought back in line with the NameNode. After that, restarting the NameNode works without problems.

Then check the state of both NameNodes (the full sequence is sketched below).
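A minimal sketch of that sequence, assuming it is run from $HADOOP_PREFIX on nn1 and using the service IDs seen earlier (l4xnn1 / l4xnn2):

sbin/hadoop-daemon.sh start journalnode          # bring the local JournalNode up first
bin/hdfs namenode -initializeSharedEdits         # re-seed the shared edits from the NameNode metadata
sbin/hadoop-daemon.sh --script hdfs start namenode
bin/hdfs haadmin -getServiceState l4xnn1
bin/hdfs haadmin -getServiceState l4xnn2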

 

Error 7: Decided to synchronize log to startTxId: 1

Decided to synchronize log to startTxId: 1

The NameNode metadata is corrupted and needs to be repaired.

Fix: recover the NameNode:

hadoop namenode -recover
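A short usage note (my assumption about the procedure, not from the original post): stop the NameNode first, run the recovery as the HDFS user on the NameNode host, and answer the interactive prompts; the hdfs entry point does the same thing without the deprecation warning:

$HADOOP_PREFIX/sbin/hadoop-daemon.sh --script hdfs stop namenode
$HADOOP_PREFIX/bin/hdfs namenode -recover        # interactive; choose how to handle the bad edits
$HADOOP_PREFIX/sbin/hadoop-daemon.sh --script hdfs start namenode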

 
