Keep the future in mind, create the future!
http://www.aygfsteel.com/bacoo/category/35981.html

Learning InputFormat
so true, Wed, 07 Jan 2009
http://www.aygfsteel.com/bacoo/archive/2009/01/07/250221.html

An InputFormat exists to obtain a set of splits (InputSplit[]) from a JobConf, and then to pair each split with a suitable RecordReader (via getRecordReader) that reads the data stored in that split.
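
For reference, the contract looks roughly like this in the old org.apache.hadoop.mapred API; this is a sketch written from memory against the 0.x interfaces, not the verbatim Hadoop source:

import java.io.IOException;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Sketch of the two responsibilities described above.
public interface InputFormatSketch<K, V> {
    // carve the job's input into splits
    InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
    // hand back a reader bound to one particular split
    RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                       Reporter reporter) throws IOException;
}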

InputSplit extends the Writable interface, so a concrete InputSplit actually carries four methods: the read/write pair (readFields and write), getLength, which reports how much data the split covers, and getLocations, which reports the hosts the split's data lives on (via blkLocations[blkIndex].getHosts()). One point worth spelling out: a block corresponds to either one split or several splits, never the reverse, so every split can take its host list from the block that contains it. I would also guess that the block size has to be an integer multiple of the split size; otherwise a split could straddle two blocks.
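
Written out as code, the split contract is roughly the following sketch; the last two of the "four" methods come from Writable:

import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Sketch of the old InputSplit contract.
public interface InputSplitSketch extends Writable {
    long getLength() throws IOException;        // bytes of data in this split
    String[] getLocations() throws IOException; // hosts where that data lives
    // inherited from Writable:
    //   void write(DataOutput out)    -- serialize the split
    //   void readFields(DataInput in) -- deserialize it
}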

As for RecordReader, this interface essentially exists to hand back a stream of <K,V> pairs. Any class implementing it is expected to have a constructor of the form "(Configuration conf, Class<? extends InputSplit> split)", because a RecordReader is purpose-built: it reads one particular kind of split, so it has to be bound to that split type. The most important method in the interface is next. To read a K and a V with next, you first create the key and value objects with createKey and createValue, then pass them into next as arguments, and next fills in the data members of the objects you supplied.
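
The typical read loop therefore looks something like this sketch; consume is a hypothetical placeholder for whatever the caller does with each pair:

import java.io.IOException;
import org.apache.hadoop.mapred.RecordReader;

public class ReadLoopSketch {
    static <K, V> void drain(RecordReader<K, V> reader) throws IOException {
        K key = reader.createKey();       // allocate the key object once
        V value = reader.createValue();   // allocate the value object once
        while (reader.next(key, value)) { // next() refills the same two objects
            consume(key, value);
        }
        reader.close();
    }
    static <K, V> void consume(K key, V value) { /* application logic */ }
}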

A file (FileStatus) is stored as multiple blocks (BlockLocation[]), each of a fixed size (file.getBlockSize()). The framework first works out how large each split should be (computeSplitSize(goalSize, minSize, blockSize)), then carves the file of length file.getLen() into splits of that size; the leftover tail that does not fill a whole split gets a split of its own. Finally the accumulated result for this file is returned (return splits.toArray(new FileSplit[splits.size()])).
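
Put into code, the carving step is roughly this simplified sketch; the real FileInputFormat also applies a slop factor before cutting the last full-size split, and hostsFor here is a hypothetical stand-in for the blkLocations[blkIndex].getHosts() lookup:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileSplit;

class SplitSketch {
    static FileSplit[] split(Path path, long length, long blockSize,
                             long minSize, long goalSize) {
        // goalSize is totalSize / numSplits; clamp between minSize and blockSize
        long splitSize = Math.max(minSize, Math.min(goalSize, blockSize));
        List<FileSplit> splits = new ArrayList<FileSplit>();
        long bytesRemaining = length;
        while (bytesRemaining > splitSize) {        // full-size splits
            splits.add(new FileSplit(path, length - bytesRemaining, splitSize,
                                     hostsFor(length - bytesRemaining)));
            bytesRemaining -= splitSize;
        }
        if (bytesRemaining != 0) {                  // the leftover tail split
            splits.add(new FileSplit(path, length - bytesRemaining,
                                     bytesRemaining,
                                     hostsFor(length - bytesRemaining)));
        }
        return splits.toArray(new FileSplit[splits.size()]);
    }
    static String[] hostsFor(long offset) { return new String[0]; } // placeholder
}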

When a job starts, it reads the configured input paths (conf.get("mapred.input.dir", "")) and turns them into a Path[]. For each Path it obtains a FileSystem (FileSystem fs = p.getFileSystem(job)) and then a FileStatus[] (FileStatus[] matches = fs.globStatus(p, inputFilter)). Each FileStatus is then inspected: if it is a directory, each child (FileStatus stat : fs.listStatus(globStat.getPath(), inputFilter)) is added to the final result set; if it is a plain file, it goes straight into the result set. Put briefly: a job collects every file under input.dir, and each file is recorded as a FileStatus.
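
As a sketch of that walk (one directory level, as described above; error handling omitted):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapred.JobConf;

class ListInputsSketch {
    static List<FileStatus> list(JobConf job, Path[] dirs, PathFilter inputFilter)
            throws IOException {
        List<FileStatus> result = new ArrayList<FileStatus>();
        for (Path p : dirs) {
            FileSystem fs = p.getFileSystem(job);
            FileStatus[] matches = fs.globStatus(p, inputFilter); // expand globs
            for (FileStatus globStat : matches) {
                if (globStat.isDir()) {   // directory: take its children
                    for (FileStatus stat
                            : fs.listStatus(globStat.getPath(), inputFilter)) {
                        result.add(stat);
                    }
                } else {                  // plain file: straight into the result
                    result.add(globStat);
                }
            }
        }
        return result;
    }
}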

The official description of MultiFileSplit is: "A sub-collection of input files. Unlike {@link FileSplit}, MultiFileSplit class does not represent a split of a file, but a split of input files into smaller sets. The atomic unit of split is a file." In other words, one MultiFileSplit holds several small files, each of which should belong to exactly one block; getLocations returns the getHosts of the blocks behind all of those small files, and getLength returns the total size of all the files.

As for MultiFileInputFormat, its getSplits returns a collection of MultiFileSplits, that is, bundles of small files. A simple example makes it clear. Suppose the job contains 5 small files, the first of size 2, and we want 3 splits in total. First compute double avgLengthPerSplit = ((double)totLength) / numSplits, which in this example comes out to 5; cutting along that average gives three bundles: {file1, file2}, {file3}, {file4, file5}. With a different size distribution over the same five files we would instead get four bundles: {file1}, {file2}, {file3, file4}, {file5}. Note also that getRecordReader is still an abstract method in this class, so subclasses must implement it themselves.
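
A greedy packing in that spirit, reconstructed from the behavior described above rather than copied from the Hadoop source, could look like the sketch below; file sizes 2, 3, 5, 1, 4 with numSplits = 3 give avgLengthPerSplit = 5 and the bundles {file1, file2}, {file3}, {file4, file5}:

import java.util.ArrayList;
import java.util.List;

class MultiFilePackSketch {
    // returns bundles of file indices; the atomic unit is a whole file
    static List<List<Integer>> pack(long[] lengths, int numSplits) {
        long totLength = 0;
        for (long l : lengths) {
            totLength += l;
        }
        double avgLengthPerSplit = ((double) totLength) / numSplits;

        List<List<Integer>> bundles = new ArrayList<List<Integer>>();
        List<Integer> current = new ArrayList<Integer>();
        long cumulative = 0;
        for (int i = 0; i < lengths.length; i++) {
            current.add(i);
            cumulative += lengths[i];
            // close the bundle once we cross the next multiple of the average
            if (i < lengths.length - 1
                    && cumulative >= avgLengthPerSplit * (bundles.size() + 1)) {
                bundles.add(current);
                current = new ArrayList<Integer>();
            }
        }
        if (!current.isEmpty()) {
            bundles.add(current);   // whatever is left forms the last bundle
        }
        return bundles;
    }
}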

so true, 2009-01-07 09:40
Things to watch out for with SSH when configuring distributed Hadoop
so true, Fri, 14 Nov 2008
http://www.aygfsteel.com/bacoo/archive/2008/11/15/240625.html

Configuring passwordless SSH access:
Say A is the server and B is the client, and B wants to ssh into A without a password: then B's public key has to be added to A's authorized_keys file.

1. First, A must allow this access mode. In A's /etc/ssh/sshd_config, set these two options as follows:
RSAAuthentication yes
PubkeyAuthentication yes

2. B generates id_rsa.pub (ssh-keygen produces it), and the contents of that file are appended with ">>" to the end of A's authorized_keys file.

3. On B, running ssh with A's IP or hostname now logs you into A without a password.

There is, however, a precondition to all of this that many people overlook, and they waste a lot of effort without success; I did exactly that, and it took me a long time to find the problem.
Machines A and B each have many accounts. When you type the ssh command on B without specifying which account on A to connect to, the unwritten default rule is this: whatever account you are currently using on B, say haha, is taken as the account you expect to log into on A. You can name the target account explicitly with ssh -l haha [hostname] or ssh haha@[hostname]; under the implicit rule, the system simply uses your current account on B as the account to be connected on A.
So the precondition for passwordless access is that A and B carry exactly the same account name, identical down to letter case. (This is what stumped me: I was connecting from cygwin on Windows to a Linux machine, the first letter of my Windows account name was uppercase while the Linux one was lowercase, and it took me ages to spot the culprit.) Incidentally, this is also why a distributed Hadoop setup requires every machine to have an account with exactly the same name.

Since we are on these caveats, one more reminder: in step 2 above, the id_rsa.pub file must be generated under that same matching account; otherwise it still will not work.

so true, 2008-11-15 01:25
A simple shell script
so true, Fri, 14 Nov 2008
http://www.aygfsteel.com/bacoo/archive/2008/11/15/240624.html

Writing a shell script like this one actually took no great effort, so what excites me is not that I finally got a result after repeated setbacks; it is that I can finally settle down to do the work I enjoy and concentrate on learning what I ought to learn. I used to study all sorts of scattered things, and with no clear goal it was painful. At the root of it, I did not know what job I wanted: I only knew I wanted to work in IT, with no concrete direction, so I felt I had to learn everything, and that process was miserable for someone like me who likes to do things solidly: either don't do it, or do it well. I once read a superb article about restlessness among programmers, and many of the restless habits it described I recognized in myself, which depressed me. Now the job is settled, I know what to study, and my focus is set. Life is good.

To borrow a line from Steve Jobs:

The only way to be truly satisfied is to do what you believe is great work, and the only way to do great work is to love what you do!

I think anyone who can live this way is truly fortunate. Work hard, keep striving, realize your own worth, be satisfied with your own performance: that is something I often say to myself.

As for me now, my job is settled and my girlfriend is settled, which is to say my wife is settled; all that is left for me is to strive, to work hard, to fight.

I am grateful to have met such a wife, who supports me and cares for me. I do not know whether I will be very successful later on, but I do know that with this support behind me I can do anything with a steady heart. With her I am very fortunate, and I will certainly bring her happiness in return. I promise!

 

All right, here is the code, heh:

#!/bin/sh
# Merge all Hadoop daemon logs into one file, tagging every line with the
# node type taken from the log file name (hadoop-<user>-<nodetype>-<host>.log),
# then sort the result so the lines from all daemons interleave chronologically.

cd /hadoop/logs

var="`ls *.log`"
cur=""
name=""
file=log_name.txt

# start from a clean output file
if [ -e $file ]; then
 rm $file
fi

for cur in $var
do
 # the third dash-separated field of the file name is the node type
 name=`echo $cur | cut -d'-' -f3`

 # keep only the timestamped lines and append "[nodetype]" to each
 cat $cur | grep ^2008 | sed "s/^.*$/&[$name]/" >> $file
done

# sort in place (the leading timestamp makes plain sort chronological)
cp $file __temp.txt
sort __temp.txt >$file
rm __temp.txt

Running it produces:

2008-11-14 10:08:47,671 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG: [namenode]
2008-11-14 10:08:48,140 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000[namenode]
2008-11-14 10:08:48,171 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: bacoo/192.168.1.34:9000[namenode]
2008-11-14 10:08:48,171 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null[namenode]
2008-11-14 10:08:48,234 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.dfs.FSNamesystemMetrics: Initializing FSNamesystemMeterics using context object:org.apache.hadoop.metrics.spi.NullContext[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=Zhaoyb,None,root,Administrators,Users,Debugger,Users[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup[namenode]
2008-11-14 10:08:48,890 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Edits file edits of size 4 edits # 0 loaded in 0 seconds.[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Image file of size 80 loaded in 0 seconds.[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Number of files = 0[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Number of files under construction = 0[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading FSImage in 657 msecs[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* Leaving safe mode after 0 secs.[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks[namenode]
2008-11-14 10:08:49,609 INFO org.mortbay.util.Credential: Checking Resource aliases[namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][namenode]
2008-11-14 10:08:54,656 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@17f11fb[namenode]
2008-11-14 10:08:55,453 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][namenode]
2008-11-14 10:08:55,468 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at: 0.0.0.0:50070[namenode]
2008-11-14 10:08:55,468 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50070[namenode]
2008-11-14 10:08:55,468 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@61a907[namenode]
2008-11-14 10:08:55,484 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting[namenode]
2008-11-14 10:08:55,484 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting[namenode]
2008-11-14 10:08:56,015 INFO org.apache.hadoop.dfs.NameNode.Secondary: STARTUP_MSG: [secondarynamenode]
2008-11-14 10:08:56,156 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SecondaryNameNode, sessionId=null[secondarynamenode]
2008-11-14 10:08:56,468 WARN org.apache.hadoop.dfs.Storage: Checkpoint directory \tmp\hadoop-SYSTEM\dfs\namesecondary is added.[secondarynamenode]
2008-11-14 10:08:56,546 INFO org.mortbay.util.Credential: Checking Resource aliases[secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][secondarynamenode]
2008-11-14 10:08:56,953 INFO org.mortbay.jetty.servlet.XMLConfiguration: No WEB-INF/web.xml in file:/E:/cygwin/hadoop/webapps/secondary. Serving files and default/dynamic servlets only[secondarynamenode]
2008-11-14 10:08:56,953 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@b1a4e2[secondarynamenode]
2008-11-14 10:08:57,062 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][secondarynamenode]
2008-11-14 10:08:57,078 INFO org.apache.hadoop.dfs.NameNode.Secondary: Secondary Web-server up at: 0.0.0.0:50090[secondarynamenode]
2008-11-14 10:08:57,078 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50090[secondarynamenode]
2008-11-14 10:08:57,078 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@18a8ce2[secondarynamenode]
2008-11-14 10:08:57,078 WARN org.apache.hadoop.dfs.NameNode.Secondary: Checkpoint Period   :3600 secs (60 min)[secondarynamenode]
2008-11-14 10:08:57,078 WARN org.apache.hadoop.dfs.NameNode.Secondary: Log Size Trigger    :67108864 bytes (65536 KB)[secondarynamenode]
2008-11-14 10:08:59,828 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG: [jobtracker]
2008-11-14 10:09:00,015 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=9001[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting[jobtracker]
2008-11-14 10:09:00,125 INFO org.mortbay.util.Credential: Checking Resource aliases[jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][jobtracker]
2008-11-14 10:09:02,312 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@1cd280b[jobtracker]
2008-11-14 10:09:08,359 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001[jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030[jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=[jobtracker]
2008-11-14 10:09:08,375 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50030[jobtracker]
2008-11-14 10:09:08,375 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@16a9b9c[jobtracker]
2008-11-14 10:09:12,984 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING[jobtracker]
2008-11-14 10:09:56,894 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG: [datanode]
2008-11-14 10:10:02,516 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: [tasktracker]
2008-11-14 10:10:08,768 INFO org.apache.hadoop.dfs.Storage: Formatting ...[datanode]
2008-11-14 10:10:08,768 INFO org.apache.hadoop.dfs.Storage: Storage directory /hadoop/hadoopfs/data is not formatted.[datanode]
2008-11-14 10:10:11,343 INFO org.apache.hadoop.dfs.DataNode: Registered FSDatasetStatusMBean[datanode]
2008-11-14 10:10:11,347 INFO org.apache.hadoop.dfs.DataNode: Opened info server at 50010[datanode]
2008-11-14 10:10:11,352 INFO org.apache.hadoop.dfs.DataNode: Balancing bandwith is 1048576 bytes/s[datanode]
2008-11-14 10:10:16,430 INFO org.mortbay.util.Credential: Checking Resource aliases[tasktracker]
2008-11-14 10:10:17,976 INFO org.mortbay.util.Credential: Checking Resource aliases[datanode]
2008-11-14 10:10:20,068 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[datanode]
2008-11-14 10:10:20,089 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][datanode]
2008-11-14 10:10:20,089 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][datanode]
2008-11-14 10:10:20,725 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[tasktracker]
2008-11-14 10:10:20,727 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][tasktracker]
2008-11-14 10:10:20,727 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][tasktracker]
2008-11-14 10:10:27,078 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/localhost[jobtracker]
2008-11-14 10:10:32,171 INFO org.apache.hadoop.dfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.1.167:50010 storage DS-1556534590-127.0.0.1-50010-1226628640386[namenode]
2008-11-14 10:10:32,187 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.167:50010[namenode]
2008-11-14 10:13:57,171 WARN org.apache.hadoop.dfs.Storage: Checkpoint directory \tmp\hadoop-SYSTEM\dfs\namesecondary is added.[secondarynamenode]
2008-11-14 10:13:57,187 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 5 Total time for transactions(ms): 0 Number of syncs: 3 SyncTimes(ms): 4125 [namenode]
2008-11-14 10:13:57,187 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit Log from 192.168.1.34[namenode]
2008-11-14 10:13:57,953 INFO org.apache.hadoop.dfs.NameNode.Secondary: Downloaded file fsimage size 80 bytes.[secondarynamenode]
2008-11-14 10:13:57,968 INFO org.apache.hadoop.dfs.NameNode.Secondary: Downloaded file edits size 288 bytes.[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=Zhaoyb,None,root,Administrators,Users,Debugger,Users[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Edits file edits of size 288 edits # 5 loaded in 0 seconds.[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Number of files = 0[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Number of files under construction = 0[secondarynamenode]
2008-11-14 10:13:58,718 INFO org.apache.hadoop.dfs.Storage: Image file of size 367 saved in 0 seconds.[secondarynamenode]
2008-11-14 10:13:58,796 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0 [secondarynamenode]
2008-11-14 10:13:58,921 INFO org.apache.hadoop.dfs.NameNode.Secondary: Posted URL 0.0.0.0:50070putimage=1&port=50090&machine=192.168.1.34&token=-16:145044639:0:1226628551796:1226628513000[secondarynamenode]
2008-11-14 10:13:59,078 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0 [namenode]
2008-11-14 10:13:59,078 INFO org.apache.hadoop.fs.FSNamesystem: Roll FSImage from 192.168.1.34[namenode]
2008-11-14 10:13:59,265 WARN org.apache.hadoop.dfs.NameNode.Secondary: Checkpoint done. New Image Size: 367[secondarynamenode]
2008-11-14 10:29:02,171 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 0 time(s).[secondarynamenode]
2008-11-14 10:29:04,187 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 1 time(s).[secondarynamenode]
2008-11-14 10:29:06,109 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 2 time(s).[secondarynamenode]
2008-11-14 10:29:08,015 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 3 time(s).[secondarynamenode]
2008-11-14 10:29:10,031 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 4 time(s).[secondarynamenode]
2008-11-14 10:29:11,937 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 5 time(s).[secondarynamenode]
2008-11-14 10:29:13,843 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 6 time(s).[secondarynamenode]
2008-11-14 10:29:15,765 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 7 time(s).[secondarynamenode]
2008-11-14 10:29:17,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 8 time(s).[secondarynamenode]
2008-11-14 10:29:19,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 9 time(s).[secondarynamenode]
2008-11-14 10:29:21,078 ERROR org.apache.hadoop.dfs.NameNode.Secondary: Exception in doCheckpoint: [secondarynamenode]
2008-11-14 10:29:21,171 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException: Call failed on local exception[secondarynamenode]
2008-11-14 10:34:23,156 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 0 time(s).[secondarynamenode]
2008-11-14 10:34:25,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 1 time(s).[secondarynamenode]
2008-11-14 10:34:27,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 2 time(s).[secondarynamenode]
2008-11-14 10:34:29,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 3 time(s).[secondarynamenode]
2008-11-14 10:34:31,000 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 4 time(s).[secondarynamenode]
2008-11-14 10:34:32,906 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 5 time(s).[secondarynamenode]
2008-11-14 10:34:34,921 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 6 time(s).[secondarynamenode]
2008-11-14 10:34:36,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 7 time(s).[secondarynamenode]
2008-11-14 10:34:38,640 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 8 time(s).[secondarynamenode]
2008-11-14 10:34:40,546 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 9 time(s).[secondarynamenode]
2008-11-14 10:34:41,468 ERROR org.apache.hadoop.dfs.NameNode.Secondary: Exception in doCheckpoint: [secondarynamenode]
2008-11-14 10:34:41,468 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException: Call failed on local exception[secondarynamenode]
2008-11-14 10:38:43,359 INFO org.apache.hadoop.dfs.NameNode.Secondary: SHUTDOWN_MSG: [secondarynamenode]

I believe this puts the generated logs back into proper chronological order, and every step now carries the node type it came from.

so true, 2008-11-15 01:23