
Java FileSummary Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary. If you have been wondering what the FileSummary class does, how to use it, or what real-world usage looks like, the curated code examples below may help.



The FileSummary class belongs to the org.apache.hadoop.hdfs.server.namenode.FsImageProto package. A total of 18 FileSummary code examples are shown below, sorted by popularity by default.
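
Before the individual examples, here is a minimal sketch of how a FileSummary is typically read back from an fsimage file, assembled only from the APIs that appear in the examples below (FSImageUtil.checkFileFormat, FSImageUtil.loadSummary, and the generated getters on FileSummary). The class name FileSummaryQuickLook and the fsimage path are placeholders for illustration, not part of the Hadoop code base.

import java.io.IOException;
import java.io.RandomAccessFile;

import org.apache.hadoop.hdfs.server.namenode.FSImageUtil;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary;

public class FileSummaryQuickLook {
  public static void main(String[] args) throws IOException {
    // Hypothetical path; point this at a real NameNode checkpoint file.
    try (RandomAccessFile file = new RandomAccessFile("/tmp/fsimage_0000000000000000000", "r")) {
      if (!FSImageUtil.checkFileFormat(file)) {
        throw new IOException("Unrecognized FSImage");
      }
      // loadSummary parses the FileSummary record stored at the end of the
      // image (see Example 16 below for its implementation).
      FileSummary summary = FSImageUtil.loadSummary(file);
      System.out.println("ondiskVersion=" + summary.getOndiskVersion()
          + " layoutVersion=" + summary.getLayoutVersion()
          + " codec=" + summary.getCodec());
      // Each Section entry records the name, byte offset and length of one
      // section inside the image file.
      for (FileSummary.Section s : summary.getSectionsList()) {
        System.out.println(s.getName() + " offset=" + s.getOffset()
            + " length=" + s.getLength());
      }
    }
  }
}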

Example 1: saveNameSystemSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveNameSystemSection(FileSummary.Builder summary)
    throws IOException {
  final FSNamesystem fsn = context.getSourceNamesystem();
  OutputStream out = sectionOutputStream;
  BlockIdManager blockIdManager = fsn.getBlockIdManager();
  NameSystemSection.Builder b = NameSystemSection.newBuilder()
      .setGenstampV1(blockIdManager.getGenerationStampV1())
      .setGenstampV1Limit(blockIdManager.getGenerationStampV1Limit())
      .setGenstampV2(blockIdManager.getGenerationStampV2())
      .setLastAllocatedBlockId(blockIdManager.getLastAllocatedBlockId())
      .setTransactionId(context.getTxId());

  // We use the non-locked version of getNamespaceInfo here since
  // the coordinating thread of saveNamespace already has read-locked
  // the namespace for us. If we attempt to take another readlock
  // from the actual saver thread, there's a potential of a
  // fairness-related deadlock. See the comments on HDFS-2223.
  b.setNamespaceId(fsn.unprotectedGetNamespaceInfo().getNamespaceID());
  if (fsn.isRollingUpgrade()) {
    b.setRollingUpgradeStartTime(fsn.getRollingUpgradeInfo().getStartTime());
  }
  NameSystemSection s = b.build();
  s.writeDelimitedTo(out);

  commitSection(summary, SectionName.NS_INFO);
}
 
Developer: naver, Project: hadoop, Lines: 27, Source: FSImageFormatProtobuf.java


Example 2: output

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void output(Configuration conf, FileSummary summary,
    FileInputStream fin, ArrayList<FileSummary.Section> sections)
    throws IOException {
  InputStream is;
  long startTime = Time.monotonicNow();
  out.println(getHeader());
  for (FileSummary.Section section : sections) {
    if (SectionName.fromString(section.getName()) == SectionName.INODE) {
      fin.getChannel().position(section.getOffset());
      is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              fin, section.getLength())));
      outputINodes(is);
    }
  }
  long timeTaken = Time.monotonicNow() - startTime;
  LOG.debug("Time to output inodes: {}ms", timeTaken);
}
 
Developer: naver, Project: hadoop, Lines: 19, Source: PBImageTextWriter.java


Example 3: loadDirectories

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
/** Load the directories in the INode section. */
private void loadDirectories(
    FileInputStream fin, List<FileSummary.Section> sections,
    FileSummary summary, Configuration conf)
    throws IOException {
  LOG.info("Loading directories");
  long startTime = Time.monotonicNow();
  for (FileSummary.Section section : sections) {
    if (SectionName.fromString(section.getName())
        == SectionName.INODE) {
      fin.getChannel().position(section.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              fin, section.getLength())));
      loadDirectoriesInINodeSection(is);
    }
  }
  long timeTaken = Time.monotonicNow() - startTime;
  LOG.info("Finished loading directories in {}ms", timeTaken);
}
 
Developer: naver, Project: hadoop, Lines: 21, Source: PBImageTextWriter.java


Example 4: loadINodeDirSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void loadINodeDirSection(
    FileInputStream fin, List<FileSummary.Section> sections,
    FileSummary summary, Configuration conf, List<Long> refIdList)
    throws IOException {
  LOG.info("Loading INode directory section.");
  long startTime = Time.monotonicNow();
  for (FileSummary.Section section : sections) {
    if (SectionName.fromString(section.getName())
        == SectionName.INODE_DIR) {
      fin.getChannel().position(section.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(
              new LimitInputStream(fin, section.getLength())));
      buildNamespace(is, refIdList);
    }
  }
  long timeTaken = Time.monotonicNow() - startTime;
  LOG.info("Finished loading INode directory section in {}ms", timeTaken);
}
 
Developer: naver, Project: hadoop, Lines: 20, Source: PBImageTextWriter.java


Example 5: visit

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
void visit(RandomAccessFile file) throws IOException {
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FileSummary summary = FSImageUtil.loadSummary(file);
  try (FileInputStream in = new FileInputStream(file.getFD())) {
    for (FileSummary.Section s : summary.getSectionsList()) {
      if (SectionName.fromString(s.getName()) != SectionName.INODE) {
        continue;
      }

      in.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              in, s.getLength())));
      run(is);
      output();
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 22, Source: FileDistributionCalculator.java


Example 6: output

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void output(Configuration conf, FileSummary summary,
    FileInputStream fin, ArrayList<FileSummary.Section> sections)
    throws IOException {
  InputStream is;
  long startTime = Time.monotonicNow();
  for (FileSummary.Section section : sections) {
    if (SectionName.fromString(section.getName()) == SectionName.INODE) {
      fin.getChannel().position(section.getOffset());
      is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              fin, section.getLength())));
      outputINodes(is);
    }
  }
  long timeTaken = Time.monotonicNow() - startTime;
  LOG.debug("Time to output inodes: {}ms", timeTaken);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 18, Source: PBImageTextWriter.java


Example 7: loadINodeDirSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void loadINodeDirSection(
    FileInputStream fin, List<FileSummary.Section> sections,
    FileSummary summary, Configuration conf)
    throws IOException {
  LOG.info("Loading INode directory section.");
  long startTime = Time.monotonicNow();
  for (FileSummary.Section section : sections) {
    if (SectionName.fromString(section.getName())
        == SectionName.INODE_DIR) {
      fin.getChannel().position(section.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(
              new LimitInputStream(fin, section.getLength())));
      buildNamespace(is);
    }
  }
  long timeTaken = Time.monotonicNow() - startTime;
  LOG.info("Finished loading INode directory section in {}ms", timeTaken);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 20, Source: PBImageTextWriter.java


Example 8: saveNameSystemSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveNameSystemSection(FileSummary.Builder summary)
    throws IOException {
  final FSNamesystem fsn = context.getSourceNamesystem();
  OutputStream out = sectionOutputStream;
  NameSystemSection.Builder b = NameSystemSection.newBuilder()
      .setGenstampV1(fsn.getGenerationStampV1())
      .setGenstampV1Limit(fsn.getGenerationStampV1Limit())
      .setGenstampV2(fsn.getGenerationStampV2())
      .setLastAllocatedBlockId(fsn.getLastAllocatedBlockId())
      .setTransactionId(context.getTxId());

  // We use the non-locked version of getNamespaceInfo here since
  // the coordinating thread of saveNamespace already has read-locked
  // the namespace for us. If we attempt to take another readlock
  // from the actual saver thread, there's a potential of a
  // fairness-related deadlock. See the comments on HDFS-2223.
  b.setNamespaceId(fsn.unprotectedGetNamespaceInfo().getNamespaceID());
  if (fsn.isRollingUpgrade()) {
    b.setRollingUpgradeStartTime(fsn.getRollingUpgradeInfo().getStartTime());
  }
  NameSystemSection s = b.build();
  s.writeDelimitedTo(out);

  commitSection(summary, SectionName.NS_INFO);
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 26, Source: FSImageFormatProtobuf.java


Example 9: visit

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
void visit(RandomAccessFile file) throws IOException {
  if (!FSImageUtil.checkFileFormat(file)) {
    throw new IOException("Unrecognized FSImage");
  }

  FileSummary summary = FSImageUtil.loadSummary(file);
  FileInputStream in = null;
  try {
    in = new FileInputStream(file.getFD());
    for (FileSummary.Section s : summary.getSectionsList()) {
      if (SectionName.fromString(s.getName()) != SectionName.INODE) {
        continue;
      }

      in.getChannel().position(s.getOffset());
      InputStream is = FSImageUtil.wrapInputStreamForCompression(conf,
          summary.getCodec(), new BufferedInputStream(new LimitInputStream(
              in, s.getLength())));
      run(is);
      output();
    }
  } finally {
    IOUtils.cleanup(null, in);
  }
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 26, Source: FileDistributionCalculator.java


Example 10: commitSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
public void commitSection(FileSummary.Builder summary, SectionName name)
    throws IOException {
  long oldOffset = currentOffset;
  flushSectionOutputStream();

  // Re-wrap the underlying stream (with compression if a codec is configured)
  // so the next section starts on a fresh stream.
  if (codec != null) {
    sectionOutputStream = codec.createOutputStream(underlyingOutputStream);
  } else {
    sectionOutputStream = underlyingOutputStream;
  }
  // Record the section's name, offset and on-disk length in the FileSummary.
  long length = fileChannel.position() - oldOffset;
  summary.addSections(FileSummary.Section.newBuilder().setName(name.name)
      .setLength(length).setOffset(currentOffset));
  currentOffset += length;
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: FSImageFormatProtobuf.java


Example 11: saveFileSummary

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private static void saveFileSummary(OutputStream out, FileSummary summary)
    throws IOException {
  summary.writeDelimitedTo(out);
  // Append the summary's on-disk size as a 4-byte big-endian int at the very
  // end of the image, so readers can locate the summary by seeking backwards
  // from the end of the file (see loadSummary in Example 16).
  int length = getOndiskTrunkSize(summary);
  byte[] lengthBytes = new byte[4];
  ByteBuffer.wrap(lengthBytes).asIntBuffer().put(length);
  out.write(lengthBytes);
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: FSImageFormatProtobuf.java


Example 12: saveInodes

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveInodes(FileSummary.Builder summary) throws IOException {
  FSImageFormatPBINode.Saver saver = new FSImageFormatPBINode.Saver(this,
      summary);

  saver.serializeINodeSection(sectionOutputStream);
  saver.serializeINodeDirectorySection(sectionOutputStream);
  saver.serializeFilesUCSection(sectionOutputStream);
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: FSImageFormatProtobuf.java


Example 13: saveSnapshots

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveSnapshots(FileSummary.Builder summary) throws IOException {
  FSImageFormatPBSnapshot.Saver snapshotSaver = new FSImageFormatPBSnapshot.Saver(
      this, summary, context, context.getSourceNamesystem());

  snapshotSaver.serializeSnapshotSection(sectionOutputStream);
  snapshotSaver.serializeSnapshotDiffSection(sectionOutputStream);
  snapshotSaver.serializeINodeReferenceSection(sectionOutputStream);
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: FSImageFormatProtobuf.java


Example 14: saveSecretManagerSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveSecretManagerSection(FileSummary.Builder summary)
    throws IOException {
  final FSNamesystem fsn = context.getSourceNamesystem();
  DelegationTokenSecretManager.SecretManagerState state = fsn
      .saveSecretManagerState();
  state.section.writeDelimitedTo(sectionOutputStream);
  for (SecretManagerSection.DelegationKey k : state.keys)
    k.writeDelimitedTo(sectionOutputStream);

  for (SecretManagerSection.PersistToken t : state.tokens)
    t.writeDelimitedTo(sectionOutputStream);

  commitSection(summary, SectionName.SECRET_MANAGER);
}
 
Developer: naver, Project: hadoop, Lines: 15, Source: FSImageFormatProtobuf.java


Example 15: saveStringTableSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveStringTableSection(FileSummary.Builder summary)
    throws IOException {
  OutputStream out = sectionOutputStream;
  StringTableSection.Builder b = StringTableSection.newBuilder()
      .setNumEntry(saverContext.stringMap.size());
  b.build().writeDelimitedTo(out);
  for (Entry<String, Integer> e : saverContext.stringMap.entrySet()) {
    StringTableSection.Entry.Builder eb = StringTableSection.Entry
        .newBuilder().setId(e.getValue()).setStr(e.getKey());
    eb.build().writeDelimitedTo(out);
  }
  commitSection(summary, SectionName.STRING_TABLE);
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: FSImageFormatProtobuf.java


Example 16: loadSummary

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
public static FileSummary loadSummary(RandomAccessFile file)
    throws IOException {
  final int FILE_LENGTH_FIELD_SIZE = 4;
  long fileLength = file.length();
  file.seek(fileLength - FILE_LENGTH_FIELD_SIZE);
  int summaryLength = file.readInt();

  if (summaryLength <= 0) {
    throw new IOException("Negative length of the file");
  }
  file.seek(fileLength - FILE_LENGTH_FIELD_SIZE - summaryLength);

  byte[] summaryBytes = new byte[summaryLength];
  file.readFully(summaryBytes);

  FileSummary summary = FileSummary
      .parseDelimitedFrom(new ByteArrayInputStream(summaryBytes));
  if (summary.getOndiskVersion() != FILE_VERSION) {
    throw new IOException("Unsupported file version "
        + summary.getOndiskVersion());
  }

  if (!NameNodeLayoutVersion.supports(Feature.PROTOBUF_FORMAT,
      summary.getLayoutVersion())) {
    throw new IOException("Unsupported layout version "
        + summary.getLayoutVersion());
  }
  return summary;
}
 
Developer: naver, Project: hadoop, Lines: 30, Source: FSImageUtil.java


Example 17: Saver

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
public Saver(FSImageFormatProtobuf.Saver parent,
    FileSummary.Builder headers, SaveNamespaceContext context,
    FSNamesystem fsn) {
  this.parent = parent;
  this.headers = headers;
  this.context = context;
  this.fsn = fsn;
}
 
Developer: naver, Project: hadoop, Lines: 9, Source: FSImageFormatPBSnapshot.java


Example 18: saveNameSystemSection

import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary; // import the required package/class
private void saveNameSystemSection(FileSummary.Builder summary)
    throws IOException {
  final FSNamesystem fsn = context.getSourceNamesystem();
  OutputStream out = sectionOutputStream;
  BlockIdManager blockIdManager = fsn.getBlockIdManager();
  NameSystemSection.Builder b = NameSystemSection.newBuilder()
      .setGenstampV1(blockIdManager.getGenerationStampV1())
      .setGenstampV1Limit(blockIdManager.getGenerationStampV1Limit())
      .setGenstampV2(blockIdManager.getGenerationStampV2())
      .setLastAllocatedBlockId(blockIdManager.getLastAllocatedContiguousBlockId())
      .setLastAllocatedStripedBlockId(blockIdManager.getLastAllocatedStripedBlockId())
      .setTransactionId(context.getTxId());

  // We use the non-locked version of getNamespaceInfo here since
  // the coordinating thread of saveNamespace already has read-locked
  // the namespace for us. If we attempt to take another readlock
  // from the actual saver thread, there's a potential of a
  // fairness-related deadlock. See the comments on HDFS-2223.
  b.setNamespaceId(fsn.unprotectedGetNamespaceInfo().getNamespaceID());
  if (fsn.isRollingUpgrade()) {
    b.setRollingUpgradeStartTime(fsn.getRollingUpgradeInfo().getStartTime());
  }
  NameSystemSection s = b.build();
  s.writeDelimitedTo(out);

  commitSection(summary, SectionName.NS_INFO);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 28, Source: FSImageFormatProtobuf.java



Note: the org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary examples in this article were collected from source-code and documentation platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and any distribution or use should follow the corresponding project's license. Please do not republish without permission.

