First of all, dfs.replication is a client-side parameter, i.e. a node-level parameter: it needs to be set on every node (datanode) from which files are written.
In practice, the default of 3 replicas is already enough; setting it higher rarely buys you anything.
A file keeps whichever replication factor was specified when it was uploaded to HDFS; changing dfs.replication afterwards has no effect on files that were already uploaded. You can also specify the replication factor at upload time:
hadoop dfs -D dfs.replication=1 -put 70M logs/2
To change the replication factor of files that have already been uploaded, run:
hadoop fs -setrep -R 3 /
To check the current replication status of HDFS:
hadoop fsck / -locations
FSCK started by hadoop from /172.18.6.112 for path / at Thu Oct 27 13:24:25 CST 2011
....................Status: HEALTHY
Total size: 4834251860 B
Total dirs: 21
Total files: 20
Total blocks (validated): 82 (avg. block size 58954290 B)
Minimally replicated blocks: 82 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Thu Oct 27 13:24:25 CST 2011 in 10 milliseconds
The filesystem under path '/' is HEALTHY
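The key numbers in the report above can also be pulled out programmatically. A minimal Python sketch, using a few lines of the fsck output shown above as sample input (the exact field names may vary slightly across Hadoop versions):

```python
import re

# A few lines copied from the `hadoop fsck` report above.
fsck_report = """\
Total size: 4834251860 B
Default replication factor: 3
Average block replication: 3.0
Under-replicated blocks: 0 (0.0 %)
"""

def fsck_field(report, name):
    """Return the first token following 'name:' in an fsck report."""
    m = re.search(rf"^{re.escape(name)}:\s*(\S+)", report, re.MULTILINE)
    return m.group(1) if m else None

total_bytes = int(fsck_field(fsck_report, "Total size"))
avg_replication = float(fsck_field(fsck_report, "Average block replication"))

# Physical storage consumed is roughly logical size times replication.
physical_bytes = int(total_bytes * avg_replication)
print(avg_replication)  # 3.0
print(physical_bytes)   # 14502755580
```

Note how replication multiplies raw storage cost: the ~4.8 GB of logical data above occupies ~14.5 GB of physical disk across the cluster.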
The replication factor of an individual file can be seen in its ls listing:
hadoop dfs -ls
-rw-r--r-- 3 hadoop supergroup 153748148 2011-10-27 16:11 /user/hadoop/logs/201108/impression_witspixel2011080100.thin.log.gz
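The second field of that listing (the `3` right after the permission bits) is the file's replication factor. A small Python sketch parsing it, using the line above as sample input (for directories, HDFS prints `-` in this column instead of a number):

```python
# Sample line from `hadoop dfs -ls`, copied from above.
ls_line = ("-rw-r--r--   3 hadoop supergroup  153748148 2011-10-27 16:11 "
           "/user/hadoop/logs/201108/impression_witspixel2011080100.thin.log.gz")

fields = ls_line.split()
permissions = fields[0]           # "-rw-r--r--"
replication = int(fields[1])      # replication factor of this file
path = fields[-1]                 # full HDFS path

print(replication)  # 3
```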
If you have only 3 datanodes but specify a replication factor of 4, the fourth replica cannot actually be created, because each datanode stores at most one replica of a given block.
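The arithmetic behind that rule can be sketched with a hypothetical helper (`placed_replicas` is illustrative, not a Hadoop API): the number of replicas actually placed is capped by the number of datanodes, and anything beyond that stays missing until more nodes join.

```python
def placed_replicas(requested, num_datanodes):
    """Each datanode holds at most one replica of a given block,
    so at most num_datanodes replicas can actually be placed."""
    placed = min(requested, num_datanodes)
    missing = requested - placed   # replicas HDFS still wants but cannot place
    return placed, missing

print(placed_replicas(4, 3))  # (3, 1) -> block stays under-replicated
print(placed_replicas(3, 3))  # (3, 0) -> fully replicated
```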
Reference: http://blog.csdn.net/lskyne/article/details/8898666
posted on 2018-11-26 11:52 by xzc, filed under: hadoop