HDFS Java API operation examples (repost)

Reposted from: http://blog.csdn.net/wang382758656/article/details/5771332

1. Copy a file from the local file system to HDFS
The srcFile variable holds the full name (path + file name) of the file on the local file system, and the dstFile variable holds the desired full name of the file in the Hadoop file system.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);
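FileSystem.get(config) picks up the cluster address from the Hadoop configuration files on the classpath (e.g. core-site.xml). To point the client at a particular cluster from code instead, you can set the NameNode URI yourself; a minimal sketch with a hypothetical host and port (fs.default.name is the configuration key used by the Hadoop 0.20/1.x releases this article targets):

Configuration config = new Configuration();
// Hypothetical NameNode address - adjust host and port to your cluster
config.set("fs.default.name", "hdfs://namenode-host:9000");
FileSystem hdfs = FileSystem.get(config);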



2. Create an HDFS file
The fileName variable contains the file name and path in the Hadoop file system.
The content of the file is the buff variable, an array of bytes.

// byte[] buff - the content of the file

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FSDataOutputStream outputStream = hdfs.create(path);
outputStream.write(buff, 0, buff.length);
outputStream.close(); // close the stream so the data is flushed to the cluster
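If the content starts out as a string, the buffer can be prepared like this (the text here is a made-up example):

String content = "hello hdfs\n";   // hypothetical content
byte[] buff = content.getBytes();  // platform default charset; pass a charset name to be explicit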


3. Rename an HDFS file
To rename a file in the Hadoop file system, we need the full name (path + name) of
the file we want to rename. The rename method returns true if the file was renamed, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path fromPath = new Path(fromFileName);
Path toPath = new Path(toFileName);
boolean isRenamed = hdfs.rename(fromPath, toPath);
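Note that rename signals failure (for example, when the source is missing) by returning false rather than throwing, so the return value should always be checked:

if (!isRenamed) {
    System.err.println("rename from " + fromFileName + " to " + toFileName + " failed");
}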



4. Delete an HDFS file
To delete a file in the Hadoop file system, we need the full name (path + name)
of the file we want to delete. The delete method returns true if the file was deleted, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, false);

Recursive delete:

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, true);
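Because delete(path, true) removes an entire directory tree, a cautious variant (a sketch reusing the path above) checks what the path refers to before choosing the flag:

// A non-recursive delete fails on a non-empty directory,
// so pick the flag based on what the path actually is
if (hdfs.isFile(path)) {
    hdfs.delete(path, false);
} else {
    hdfs.delete(path, true);
}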


           
            
5. Get an HDFS file's last modification time
To get the last modification time of a file in the Hadoop file system,
we need the full name (path + name) of the file.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime();
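The returned value is measured in milliseconds since the epoch, so it converts directly into a java.util.Date for display:

Date d = new Date(modificationTime); // import java.util.Date
System.out.println(d);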


            
6. Check if a file exists in HDFS
To check the existence of a file in the Hadoop file system,
we need the full name (path + name) of the file we want to check.
The exists method returns true if the file exists, otherwise false.

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isExists = hdfs.exists(path);


            
7. Get the locations of a file in the HDFS cluster
A file can exist on more than one node in the Hadoop file system cluster, for two reasons:
Based on the HDFS cluster configuration, Hadoop saves parts (blocks) of files on different nodes in the cluster.
Based on the HDFS cluster configuration, Hadoop saves more than one copy of each block on different nodes for redundancy (the default is three).
           

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
// Note: the original article passed `path` as the first argument by mistake;
// getFileBlockLocations takes the FileStatus.
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    // Do something with the block hosts
}
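As a concrete stand-in for the "do something" placeholder above, the loop body could print which hosts hold each block; java.util.Arrays is used here only for readable formatting:

for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    // Each block may be replicated, so several hosts can appear per block
    System.out.println("block " + i + ": " + Arrays.toString(hosts));
}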


8. Get the host names of all the nodes in the HDFS cluster

This method casts the FileSystem object to a DistributedFileSystem object.
It therefore works only when Hadoop is configured as a cluster; running Hadoop
on the local machine in a non-cluster configuration makes the cast throw an
exception (see the defensive sketch after the code).

Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
    names[i] = dataNodeStats[i].getHostName();
}
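If the program might also run against a plain local file system (for example during testing), the cast above throws a ClassCastException. A defensive sketch, using the same objects as above, checks the type first:

Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
if (fs instanceof DistributedFileSystem) {
    DistributedFileSystem hdfs = (DistributedFileSystem) fs;
    DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
    // ... collect host names as above
} else {
    System.err.println("not connected to an HDFS cluster");
}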


            
            
Example program

/*
 * Demonstrates the HDFS Java API operations shown above.
 */



import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.*;
import org.apache.hadoop.hdfs.protocol.*;
import java.util.Arrays;
import java.util.Date;

public class DFSOperator {

    /**
     * @param args
     */
    public static void main(String[] args) {

        Configuration conf = new Configuration();

        try {
            // Get the host names of all the nodes in the HDFS cluster
            FileSystem fs = FileSystem.get(conf);
            DistributedFileSystem hdfs = (DistributedFileSystem) fs;
            DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
            String[] names = new String[dataNodeStats.length];
            System.out.println("list of all the nodes in HDFS cluster:");
            for (int i = 0; i < dataNodeStats.length; i++) {
                names[i] = dataNodeStats[i].getHostName();
                System.out.println(names[i]);
            }

            Path f = new Path("/user/cluster/dfs.txt");

            // Check if the file exists in HDFS
            boolean isExists = fs.exists(f);
            System.out.println("The file exists? [" + isExists + "]");

            // If the file exists, delete it
            if (isExists) {
                boolean isDeleted = hdfs.delete(f, false); // false: not recursive
                if (isDeleted) System.out.println("now delete " + f.getName());
            }

            // Create the file and write to it
            System.out.println("create and write [" + f.getName() + "] to hdfs:");
            FSDataOutputStream os = fs.create(f, true);
            for (int i = 0; i < 10; i++) {
                os.writeBytes("test hdfs ");
            }
            os.writeBytes("\n");
            os.close();

            // Get the locations of the file in the HDFS cluster
            System.out.println("locations of file in HDFS:");
            FileStatus filestatus = fs.getFileStatus(f);
            BlockLocation[] blkLocations = fs.getFileBlockLocations(filestatus, 0, filestatus.getLen());
            int blkCount = blkLocations.length;
            for (int i = 0; i < blkCount; i++) {
                String[] hosts = blkLocations[i].getHosts();
                // Print the hosts holding each block
                System.out.println(Arrays.toString(hosts));
            }

            // Get the file's last modification time
            long modificationTime = filestatus.getModificationTime(); // milliseconds since the epoch
            Date d = new Date(modificationTime);
            System.out.println(d);

            // Read the file back from HDFS
            System.out.println("read [" + f.getName() + "] from hdfs:");
            FSDataInputStream dis = fs.open(f);
            byte[] content = new byte[(int) filestatus.getLen()];
            dis.readFully(content);
            System.out.println(new String(content));
            dis.close();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


Posted on 2011-07-28 12:03 by 哈哈的日子

Feedback:
# re: HDFS Java API operation examples (repost) 2011-07-29 09:05 tongxing
Hadoop is deployed on Linux, and right now every program has to be packaged into a jar and copied over to run. Here is the problem I am considering: a website we run receives uploads of up to 50 GB, so could the website talk to Hadoop's DFS directly and upload straight into it? That way the data would not have to be staged on a single machine first. I just do not know whether it can be done.
It also seems my cnblogs account is different from this one? Frustrating!

# re: HDFS Java API operation examples (repost) 2011-07-29 09:52 哈哈的日子
@tongxing
The program can connect to the cluster remotely, so you should not need to deploy it onto the cluster.
Getting the data onto HDFS is fairly easy, but performance needs some thought.
What is certain is that small files perform poorly.
