Java FSImageFormatProtobuf Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf. If you are wondering what FSImageFormatProtobuf does, how to use it, or what calling it looks like in practice, the curated class examples below should help.



FSImageFormatProtobuf belongs to the org.apache.hadoop.hdfs.server.namenode package. Nine code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; that feedback helps the site recommend better Java examples.
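For orientation before the examples: FSImageFormatProtobuf defines the on-disk layout of an fsimage as a series of named sections listed in a trailing FileSummary, and its nested SectionName enum maps those recorded names back to constants. Below is a minimal, self-contained sketch of that lookup pattern; the enum values here are an abridged stand-in, not Hadoop's full constant list.

```java
// Illustrative sketch: how FSImageFormatProtobuf.SectionName.fromString
// resolves the section names recorded in an fsimage's FileSummary. The
// enum here is an abridged stand-in; the real constants live in
// org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf.
public class SectionNameSketch {
  enum SectionName {
    STRING_TABLE, INODE, INODE_REFERENCE, INODE_DIR, SNAPSHOT_DIFF;

    // Returns null for unrecognized names so callers can skip sections
    // they do not understand (the examples below rely on this).
    static SectionName fromString(String name) {
      for (SectionName s : values()) {
        if (s.name().equals(name)) {
          return s;
        }
      }
      return null;
    }
  }

  public static void main(String[] args) {
    System.out.println(SectionName.fromString("INODE"));           // INODE
    System.out.println(SectionName.fromString("NO_SUCH_SECTION")); // null
  }
}
```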

Example 1: serializeSnapshotDiffSection

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
/**
 * save all the snapshot diff to fsimage
 */
public void serializeSnapshotDiffSection(OutputStream out)
    throws IOException {
  INodeMap inodesMap = fsn.getFSDirectory().getINodeMap();
  final List<INodeReference> refList = parent.getSaverContext()
      .getRefList();
  int i = 0;
  Iterator<INodeWithAdditionalFields> iter = inodesMap.getMapIterator();
  while (iter.hasNext()) {
    INodeWithAdditionalFields inode = iter.next();
    if (inode.isFile()) {
      serializeFileDiffList(inode.asFile(), out);
    } else if (inode.isDirectory()) {
      serializeDirDiffList(inode.asDirectory(), refList, out);
    }
    ++i;
    if (i % FSImageFormatProtobuf.Saver.CHECK_CANCEL_INTERVAL == 0) {
      context.checkCancelled();
    }
  }
  parent.commitSection(headers,
      FSImageFormatProtobuf.SectionName.SNAPSHOT_DIFF);
}
 
Developer: naver | Project: hadoop | Lines: 26 | Source: FSImageFormatPBSnapshot.java
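Example 1 checks for cancellation only every FSImageFormatProtobuf.Saver.CHECK_CANCEL_INTERVAL inodes, keeping a potentially long serialization loop responsive without paying the check on every iteration. A self-contained sketch of that pattern follows; the Context interface, CancelledException, and the interval value 4096 are illustrative stand-ins for Hadoop's SaveNamespaceContext machinery, not its actual API.

```java
import java.util.Collections;
import java.util.List;

// Sketch of Example 1's cancellation pattern: poll a cheap flag only
// every CHECK_CANCEL_INTERVAL items instead of on each iteration.
public class CancelCheckSketch {
  static class CancelledException extends RuntimeException {}

  // Illustrative stand-in for Hadoop's SaveNamespaceContext.
  interface Context {
    boolean isCancelled();

    default void checkCancelled() {
      if (isCancelled()) {
        throw new CancelledException();
      }
    }
  }

  // Illustrative interval; the real value is a constant on
  // FSImageFormatProtobuf.Saver.
  static final int CHECK_CANCEL_INTERVAL = 4096;

  static int process(List<String> inodes, Context context) {
    int i = 0;
    for (String inode : inodes) {
      // ... serialize the inode here ...
      ++i;
      if (i % CHECK_CANCEL_INTERVAL == 0) {
        context.checkCancelled(); // at most one check per interval
      }
    }
    return i;
  }

  public static void main(String[] args) {
    System.out.println(process(Collections.nCopies(10_000, "inode"), () -> false));
    // prints 10000
  }
}
```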


Example 2: Saver

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
public Saver(FSImageFormatProtobuf.Saver parent,
    FileSummary.Builder headers, SaveNamespaceContext context,
    FSNamesystem fsn) {
  this.parent = parent;
  this.headers = headers;
  this.context = context;
  this.fsn = fsn;
}
 
Developer: naver | Project: hadoop | Lines: 9 | Source: FSImageFormatPBSnapshot.java


Example 3: Loader

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
public Loader(FSNamesystem fsn, FSImageFormatProtobuf.Loader parent) {
  this.fsn = fsn;
  this.fsDir = fsn.getFSDirectory();
  this.snapshotMap = new HashMap<Integer, Snapshot>();
  this.parent = parent;
}
 
Developer: naver | Project: hadoop | Lines: 7 | Source: FSImageFormatPBSnapshot.java


Example 4: visit

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
public void visit(RandomAccessFile file) throws IOException {
  Configuration conf = new Configuration();
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FileSummary summary = FSImageUtil.loadSummary(file);

  try (FileInputStream fin = new FileInputStream(file.getFD())) {
    InputStream is;
    ArrayList<FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
              FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    ImmutableList<Long> refIdList = null;
    for (FileSummary.Section section : sections) {
      fin.getChannel().position(section.getOffset());
      is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              fin, section.getLength())));
      switch (SectionName.fromString(section.getName())) {
      case STRING_TABLE:
        LOG.info("Loading string table");
        stringTable = FSImageLoader.loadStringTable(is);
        break;
      case INODE_REFERENCE:
        // Load INodeReference so that all INodes can be processed.
        // Snapshots are not handled and will just be ignored for now.
        LOG.info("Loading inode references");
        refIdList = FSImageLoader.loadINodeReferenceSection(is);
        break;
      default:
        break;
      }
    }

    loadDirectories(fin, sections, summary, conf);
    loadINodeDirSection(fin, sections, summary, conf, refIdList);
    metadataMap.sync();
    output(conf, summary, fin, sections);
  }
}
 
Developer: naver | Project: hadoop | Lines: 60 | Source: PBImageTextWriter.java


Example 5: load

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
/**
 * Load fsimage into the memory.
 * @param inputFile the filepath of the fsimage to load.
 * @return FSImageLoader
 * @throws IOException if failed to load fsimage.
 */
static FSImageLoader load(String inputFile) throws IOException {
  Configuration conf = new Configuration();
  RandomAccessFile file = new RandomAccessFile(inputFile, "r");
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FsImageProto.FileSummary summary = FSImageUtil.loadSummary(file);


  try (FileInputStream fin = new FileInputStream(file.getFD())) {
    // Map to record INodeReference to the referred id
    ImmutableList<Long> refIdList = null;
    String[] stringTable = null;
    byte[][] inodes = null;
    Map<Long, long[]> dirmap = null;

    ArrayList<FsImageProto.FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FsImageProto.FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
                             FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    for (FsImageProto.FileSummary.Section s : sections) {
      fin.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
          fin, s.getLength())));

      if (LOG.isDebugEnabled()) {
        LOG.debug("Loading section " + s.getName() + " length: "
            + s.getLength());
      }
      switch (FSImageFormatProtobuf.SectionName.fromString(s.getName())) {
        case STRING_TABLE:
          stringTable = loadStringTable(is);
          break;
        case INODE:
          inodes = loadINodeSection(is);
          break;
        case INODE_REFERENCE:
          refIdList = loadINodeReferenceSection(is);
          break;
        case INODE_DIR:
          dirmap = loadINodeDirectorySection(is, refIdList);
          break;
        default:
          break;
      }
    }
    return new FSImageLoader(stringTable, inodes, dirmap);
  }
}
 
Developer: naver | Project: hadoop | Lines: 75 | Source: FSImageLoader.java
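Examples 4 through 9 all repeat the same anonymous Comparator that orders sections by SectionName ordinal. Note that the quoted comparator returns -1 in both mixed-null branches, which technically violates comparator symmetry; on Java 8+ the intended "unknown names first" ordering can be expressed consistently with Comparator combinators. The sketch below is self-contained, with an abridged stand-in enum rather than Hadoop's actual FSImageFormatProtobuf.SectionName.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SectionSortSketch {
  // Abridged stand-in for FSImageFormatProtobuf.SectionName.
  enum SectionName { STRING_TABLE, INODE, INODE_REFERENCE, INODE_DIR }

  // Like SectionName.fromString: null for names this reader does not know.
  static SectionName fromString(String name) {
    try {
      return SectionName.valueOf(name);
    } catch (IllegalArgumentException e) {
      return null;
    }
  }

  // Sorts section names by enum ordinal, with unknown (null) names first.
  static List<String> sortSections(List<String> names) {
    Comparator<SectionName> byOrdinal = Comparator.comparingInt(SectionName::ordinal);
    Comparator<String> bySection =
        Comparator.comparing(SectionSortSketch::fromString, Comparator.nullsFirst(byOrdinal));
    List<String> sorted = new ArrayList<>(names);
    sorted.sort(bySection);
    return sorted;
  }

  public static void main(String[] args) {
    System.out.println(sortSections(
        Arrays.asList("INODE_DIR", "UNKNOWN", "STRING_TABLE", "INODE")));
    // prints [UNKNOWN, STRING_TABLE, INODE, INODE_DIR]
  }
}
```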


Example 6: visit

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
public void visit(RandomAccessFile file) throws IOException {
  Configuration conf = new Configuration();
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FileSummary summary = FSImageUtil.loadSummary(file);

  try (FileInputStream fin = new FileInputStream(file.getFD())) {
    InputStream is;
    ArrayList<FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
              FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    for (FileSummary.Section section : sections) {
      fin.getChannel().position(section.getOffset());
      is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              fin, section.getLength())));
      switch (SectionName.fromString(section.getName())) {
      case STRING_TABLE:
        stringTable = FSImageLoader.loadStringTable(is);
        break;
      default:
        break;
      }
    }

    loadDirectories(fin, sections, summary, conf);
    loadINodeDirSection(fin, sections, summary, conf);
    metadataMap.sync();
    output(conf, summary, fin, sections);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 52 | Source: PBImageTextWriter.java


Example 7: load

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
/**
 * Load fsimage into the memory.
 * @param inputFile the filepath of the fsimage to load.
 * @return FSImageLoader
 * @throws IOException if failed to load fsimage.
 */
static FSImageLoader load(String inputFile) throws IOException {
  Configuration conf = new Configuration();
  RandomAccessFile file = new RandomAccessFile(inputFile, "r");
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FsImageProto.FileSummary summary = FSImageUtil.loadSummary(file);


  try (FileInputStream fin = new FileInputStream(file.getFD())) {
    // Map to record INodeReference to the referred id
    ImmutableList<Long> refIdList = null;
    String[] stringTable = null;
    byte[][] inodes = null;
    Map<Long, long[]> dirmap = null;

    ArrayList<FsImageProto.FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FsImageProto.FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
                             FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    for (FsImageProto.FileSummary.Section s : sections) {
      fin.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
          fin, s.getLength())));

      if (LOG.isDebugEnabled()) {
        LOG.debug("Loading section " + s.getName() + " length: "
            + s.getLength());
      }
      switch (FSImageFormatProtobuf.SectionName.fromString(s.getName())) {
        case STRING_TABLE:
          stringTable = loadStringTable(is);
          break;
        case INODE:
          inodes = loadINodeSection(is);
          break;
        case INODE_REFERENCE:
          refIdList = loadINodeReferenceSection(is);
          break;
        case INODE_DIR:
          dirmap = loadINodeDirectorySection(is, refIdList);
          break;
        default:
          break;
      }
    }
    return new FSImageLoader(stringTable, inodes, dirmap);
  }
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 75 | Source: FSImageLoader.java


Example 8: load

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
/**
 * Load fsimage into the memory.
 * @param inputFile the filepath of the fsimage to load.
 * @return FSImageLoader
 * @throws IOException if failed to load fsimage.
 */
static FSImageLoader load(String inputFile) throws IOException {
  Configuration conf = new Configuration();
  RandomAccessFile file = new RandomAccessFile(inputFile, "r");
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FsImageProto.FileSummary summary = FSImageUtil.loadSummary(file);


  try (FileInputStream fin = new FileInputStream(file.getFD())) {
    // Map to record INodeReference to the referred id
    ImmutableList<Long> refIdList = null;
    String[] stringTable = null;
    byte[][] inodes = null;
    Map<Long, long[]> dirmap = null;

    ArrayList<FsImageProto.FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FsImageProto.FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
                             FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    for (FsImageProto.FileSummary.Section s : sections) {
      fin.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
          fin, s.getLength())));

      LOG.debug("Loading section " + s.getName() + " length: "
          + s.getLength());
      switch (FSImageFormatProtobuf.SectionName.fromString(s.getName())) {
        case STRING_TABLE:
          stringTable = loadStringTable(is);
          break;
        case INODE:
          inodes = loadINodeSection(is);
          break;
        case INODE_REFERENCE:
          refIdList = loadINodeReferenceSection(is);
          break;
        case INODE_DIR:
          dirmap = loadINodeDirectorySection(is, refIdList);
          break;
        default:
          break;
      }
    }
    return new FSImageLoader(stringTable, inodes, dirmap);
  }
}
 
Developer: yncxcw | Project: big-c | Lines: 73 | Source: FSImageLoader.java


Example 9: load

import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf; // import the required package/class
/**
 * Load fsimage into the memory.
 * @param inputFile the filepath of the fsimage to load.
 * @return FSImageLoader
 * @throws IOException if failed to load fsimage.
 */
static FSImageLoader load(String inputFile) throws IOException {
  Configuration conf = new Configuration();
  RandomAccessFile file = new RandomAccessFile(inputFile, "r");
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FsImageProto.FileSummary summary = FSImageUtil.loadSummary(file);
  FileInputStream fin = null;

  try {
    // Map to record INodeReference to the referred id
    ImmutableList<Long> refIdList = null;
    String[] stringTable = null;
    byte[][] inodes = null;
    Map<Long, long[]> dirmap = null;

    fin = new FileInputStream(file.getFD());

    ArrayList<FsImageProto.FileSummary.Section> sections =
        Lists.newArrayList(summary.getSectionsList());
    Collections.sort(sections,
        new Comparator<FsImageProto.FileSummary.Section>() {
          @Override
          public int compare(FsImageProto.FileSummary.Section s1,
                             FsImageProto.FileSummary.Section s2) {
            FSImageFormatProtobuf.SectionName n1 =
                FSImageFormatProtobuf.SectionName.fromString(s1.getName());
            FSImageFormatProtobuf.SectionName n2 =
                FSImageFormatProtobuf.SectionName.fromString(s2.getName());
            if (n1 == null) {
              return n2 == null ? 0 : -1;
            } else if (n2 == null) {
              return -1;
            } else {
              return n1.ordinal() - n2.ordinal();
            }
          }
        });

    for (FsImageProto.FileSummary.Section s : sections) {
      fin.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
          fin, s.getLength())));

      LOG.debug("Loading section " + s.getName() + " length: "
          + s.getLength());
      switch (FSImageFormatProtobuf.SectionName.fromString(s.getName())) {
        case STRING_TABLE:
          stringTable = loadStringTable(is);
          break;
        case INODE:
          inodes = loadINodeSection(is);
          break;
        case INODE_REFERENCE:
          refIdList = loadINodeReferenceSection(is);
          break;
        case INODE_DIR:
          dirmap = loadINodeDirectorySection(is, refIdList);
          break;
        default:
          break;
      }
    }
    return new FSImageLoader(stringTable, inodes, dirmap);
  } finally {
    IOUtils.cleanup(null, fin);
  }
}
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 77 | Source: FSImageLoader.java



Note: The org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. The snippets come from projects contributed by the open-source community; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's license. Please do not reproduce this article without permission.

