(Repost) Hadoop: Recovering the NameNode from the SecondaryNameNode

Tags: hadoop secondarynamenode namenode | Posted: 2014-10-16 18:11 | Author: rainbow_小春
Source: http://www.iteye.com

Simulate a NameNode failure
1) Kill the NameNode process

[hadoop@hadoop bin]$ kill -9 13481
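The PID passed to kill (13481 above) can be found with jps, the JDK tool that lists running Java processes. A sample from a single-node setup; the daemon names are the usual Hadoop 1.x processes, but the PIDs are illustrative:

[hadoop@hadoop bin]$ jps
13481 NameNode
13602 DataNode
13755 SecondaryNameNode
13850 JobTracker
13970 TaskTracker
14102 Jps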

2) Delete the directory that dfs.name.dir points to, here /home/hadoop/hdfs/name (a quick way to confirm this path is shown at the end of this step).

 

[hadoop@hadoop name]$ ls
current  image  in_use.lock  previous.checkpoint
[hadoop@hadoop name]$ rm -rf *

 

Remove everything under the name directory, but make sure the name directory itself still exists.
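If you are not sure where dfs.name.dir points, a minimal check is to read it straight from hdfs-site.xml (assuming the usual Hadoop 1.x layout with configuration under $HADOOP_HOME/conf; if the property is not set explicitly, it falls back to ${hadoop.tmp.dir}/dfs/name). The output below is illustrative:

[hadoop@hadoop ~]$ grep -A 1 "dfs.name.dir" $HADOOP_HOME/conf/hdfs-site.xml
  <name>dfs.name.dir</name>
  <value>/home/hadoop/hdfs/name</value>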
3) Copy the SecondaryNameNode metadata from its namesecondary directory into the NameNode's name directory.

   My SecondaryNameNode metadata directory:

/home/hadoop/tmp/dfs/namesecondary

   The copy itself:

[hadoop@hadoop name]$ cp -R /home/hadoop/tmp/dfs/namesecondary/* .
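For completeness: Hadoop 1.x also has a built-in recovery path that avoids the manual copy. With dfs.name.dir pointing at an empty directory and fs.checkpoint.dir pointing at the namesecondary directory, the NameNode can be started once in the foreground with -importCheckpoint and it will load the checkpoint itself. A sketch of that alternative (not the method used in this post):

[hadoop@hadoop bin]$ ./hadoop namenode -importCheckpoint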

4) Start the NameNode

[hadoop@hadoop bin]$ ./hadoop-daemon.sh start namenode
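Before moving on, it is worth confirming that the NameNode process is actually up and that it loaded the restored image without errors. A quick check; the log file name below follows the usual hadoop-<user>-namenode-<hostname>.log pattern, so the exact name on your machine may differ, and grep -w avoids also matching SecondaryNameNode:

[hadoop@hadoop bin]$ jps | grep -w NameNode
[hadoop@hadoop bin]$ tail -n 50 $HADOOP_HOME/logs/hadoop-hadoop-namenode-hadoop.log  # file name may differ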

5) Verify
Use the hadoop fsck command to check the integrity of the file blocks (here it is run against the root path /):

[hadoop@hadoop bin]$ hadoop fsck /
Warning: $HADOOP_HOME is deprecated.

FSCK started by hadoop from /192.168.0.101 for path / at Sun Dec 22 23:04:31 CST 2013
...................................
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222119_0001/job.jar:  Under replicated blk_-8571652065964704775_1020. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222119_0002/job.jar:  Under replicated blk_-5947701456602696019_1021. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222119_0003/job.jar:  Under replicated blk_8214183112681524571_1022. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222142_0001/job.jar:  Under replicated blk_4805420250921446015_1024. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222142_0002/job.jar:  Under replicated blk_7913185784171356584_1027. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222142_0004/job.jar:  Under replicated blk_-8411847042533891069_1035. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222142_0005/job.jar:  Under replicated blk_2163772543235273521_1036. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310222142_0007/job.jar:  Under replicated blk_-3491660194168043022_1044. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242056_0002/job.jar:  Under replicated blk_5280511346594851641_1270. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242056_0003/job.jar:  Under replicated blk_5588149584508213931_1271. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242056_0004/job.jar:  Under replicated blk_-1846184614352398688_1272. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242056_0005/job.jar:  Under replicated blk_8253537375261552577_1273. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242100_0001/job.jar:  Under replicated blk_-6858089306760733073_1275. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242100_0002/job.jar:  Under replicated blk_-630176777256891004_1276. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242100_0003/job.jar:  Under replicated blk_3453389521553623867_1277. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242100_0004/job.jar:  Under replicated blk_-4262000880964323956_1278. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242105_0001/job.jar:  Under replicated blk_-5324801167724976561_1280. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242105_0002/job.jar:  Under replicated blk_3284342834321881345_1281. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242105_0004/job.jar:  Under replicated blk_5174401550469241860_1295. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242105_0009/job.jar:  Under replicated blk_6390129220783606015_1327. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201310242105_0010/job.jar:  Under replicated blk_8995477665353821346_1328. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201311292212_0007/job.jar:  Under replicated blk_-6447241034801532571_1699. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201312082210_0001/job.jar:  Under replicated blk_-187920261151639503_1741. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201312082210_0002/job.jar:  Under replicated blk_1912732980088631445_1742. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/staging/hadoop/.staging/job_201312092348_0001/job.jar:  Under replicated blk_448639237400606735_1953. Target Replicas is 10 but found 1 replica(s).
.
/home/hadoop/tmp/mapred/system/jobtracker.info: CORRUPT block blk_-4973841422235657473

/home/hadoop/tmp/mapred/system/jobtracker.info: MISSING 1 blocks of total size 4 B.
Status: CORRUPT
 Total size:        367257 B
 Total dirs:        83
 Total files:        60
 Total blocks (validated):        57 (avg. block size 6443 B)
  ********************************
  CORRUPT FILES:        1
  MISSING BLOCKS:        1
  MISSING SIZE:                4 B
  CORRUPT BLOCKS:         1
  ********************************
 Minimally replicated blocks:        56 (98.24561 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:        25 (43.85965 %)
 Mis-replicated blocks:                0 (0.0 %)
 Default replication factor:        1
 Average block replication:        0.98245615
 Corrupt blocks:                1
 Missing replicas:                225 (401.7857 %)
 Number of data-nodes:                1
 Number of racks:                1
FSCK ended at Sun Dec 22 23:04:31 CST 2013 in 89 milliseconds
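Two remarks on the report above. The under-replicated job.jar entries are expected on a single-node cluster: job jars are written with a high target replication (mapred.submit.replication defaults to 10), which a single DataNode can never satisfy, so they are not a symptom of the recovery. The corrupt /home/hadoop/tmp/mapred/system/jobtracker.info is most likely data that changed after the last checkpoint was taken; anything written after that checkpoint cannot be recovered this way. fsck can move such files to /lost+found or delete them outright, for example:

[hadoop@hadoop bin]$ hadoop fsck / -move
[hadoop@hadoop bin]$ hadoop fsck / -delete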

Recovery is complete; now check the data in HDFS.
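Beyond fsck, a quick sanity check is to list a few known paths and read back a file you expect to exist (the file path below is a placeholder, not something from this cluster):

[hadoop@hadoop bin]$ hadoop fs -ls /
[hadoop@hadoop bin]$ hadoop fs -cat /user/hadoop/some-known-file | head  # placeholder: any file you know exists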

 

Original link: Hadoop: Recovering the NameNode from the SecondaryNameNode
http://www.aboutyun.com/thread-6196-1-1.html
(Source: about云开发)

Original author: lzw


