Java CacheDirectiveStats Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.CacheDirectiveStats. If you are unsure what CacheDirectiveStats is for or how to use it, the selected class examples below should help.



The CacheDirectiveStats class belongs to the org.apache.hadoop.hdfs.protocol package. Seven code examples of the class are collected below, ordered roughly by popularity.
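
Before the examples, here is a minimal standalone sketch (not taken from any of the projects below) of how the class itself is typically built and read: CacheDirectiveStats is an immutable value object assembled through its nested Builder, with one getter per counter. The setter and getter names follow the ones used in the examples that follow.

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats;

public class CacheDirectiveStatsDemo {
  public static void main(String[] args) {
    // Build an immutable stats object; the setters mirror the fields used
    // throughout the examples below.
    CacheDirectiveStats stats = new CacheDirectiveStats.Builder()
        .setBytesNeeded(1024L)
        .setBytesCached(512L)
        .setFilesNeeded(2L)
        .setFilesCached(1L)
        .setHasExpired(false)
        .build();

    // Read the counters back through the getters.
    System.out.println("bytes needed: " + stats.getBytesNeeded());
    System.out.println("bytes cached: " + stats.getBytesCached());
    System.out.println("files needed: " + stats.getFilesNeeded());
    System.out.println("files cached: " + stats.getFilesCached());
    System.out.println("expired:      " + stats.hasExpired());
  }
}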

Example 1: checkLimit

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
/**
 * Throws an exception if the CachePool does not have enough capacity to
 * cache the given path at the replication factor.
 *
 * @param pool CachePool where the path is being cached
 * @param path Path that is being cached
 * @param replication Replication factor of the path
 * @throws InvalidRequestException if the pool does not have enough capacity
 */
private void checkLimit(CachePool pool, String path,
    short replication) throws InvalidRequestException {
  CacheDirectiveStats stats = computeNeeded(path, replication);
  if (pool.getLimit() == CachePoolInfo.LIMIT_UNLIMITED) {
    return;
  }
  if (pool.getBytesNeeded() + (stats.getBytesNeeded() * replication) > pool
      .getLimit()) {
    throw new InvalidRequestException("Caching path " + path + " of size "
        + stats.getBytesNeeded() / replication + " bytes at replication "
        + replication + " would exceed pool " + pool.getPoolName()
        + "'s remaining capacity of "
        + (pool.getLimit() - pool.getBytesNeeded()) + " bytes.");
  }
}
 
Developer: naver | Project: hadoop | Lines: 25 | Source: CacheManager.java
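
To make the comparison in checkLimit concrete, here is a tiny standalone sketch with made-up numbers (not part of CacheManager): the bytes the pool already needs, plus the new path's size multiplied by its replication factor, must not exceed the pool limit.

public class PoolLimitCheckSketch {
  public static void main(String[] args) {
    // Hypothetical numbers, purely to illustrate the check made in checkLimit.
    long poolLimit       = 100L * 1024 * 1024;  // pool limit: 100 MiB
    long poolBytesNeeded =  30L * 1024 * 1024;  // bytes the pool already needs
    long pathBytes       =  40L * 1024 * 1024;  // size of the new path (one copy)
    short replication    = 2;

    // 30 MiB + 40 MiB * 2 = 110 MiB > 100 MiB, so this request would be rejected.
    boolean exceedsLimit = poolBytesNeeded + pathBytes * replication > poolLimit;
    System.out.println("would exceed pool limit: " + exceedsLimit);
  }
}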


Example 2: computeNeeded

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
/**
 * Computes the needed number of bytes and files for a path.
 * @return CacheDirectiveStats describing the needed stats for this path
 */
private CacheDirectiveStats computeNeeded(String path, short replication) {
  FSDirectory fsDir = namesystem.getFSDirectory();
  INode node;
  long requestedBytes = 0;
  long requestedFiles = 0;
  CacheDirectiveStats.Builder builder = new CacheDirectiveStats.Builder();
  try {
    node = fsDir.getINode(path);
  } catch (UnresolvedLinkException e) {
    // We don't cache through symlinks
    return builder.build();
  }
  if (node == null) {
    return builder.build();
  }
  if (node.isFile()) {
    requestedFiles = 1;
    INodeFile file = node.asFile();
    requestedBytes = file.computeFileSize();
  } else if (node.isDirectory()) {
    INodeDirectory dir = node.asDirectory();
    ReadOnlyList<INode> children = dir
        .getChildrenList(Snapshot.CURRENT_STATE_ID);
    requestedFiles = children.size();
    for (INode child : children) {
      if (child.isFile()) {
        requestedBytes += child.asFile().computeFileSize();
      }
    }
  }
  return new CacheDirectiveStats.Builder()
      .setBytesNeeded(requestedBytes)
      .setFilesCached(requestedFiles)
      .build();
}
 
Developer: naver | Project: hadoop | Lines: 40 | Source: CacheManager.java


Example 3: convert

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
public static CacheDirectiveStatsProto convert(CacheDirectiveStats stats) {
  CacheDirectiveStatsProto.Builder builder = 
      CacheDirectiveStatsProto.newBuilder();
  builder.setBytesNeeded(stats.getBytesNeeded());
  builder.setBytesCached(stats.getBytesCached());
  builder.setFilesNeeded(stats.getFilesNeeded());
  builder.setFilesCached(stats.getFilesCached());
  builder.setHasExpired(stats.hasExpired());
  return builder.build();
}
 
Developer: naver | Project: hadoop | Lines: 11 | Source: PBHelper.java


Example 4: convert

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
public static CacheDirectiveStats convert(CacheDirectiveStatsProto proto) {
  CacheDirectiveStats.Builder builder = new CacheDirectiveStats.Builder();
  builder.setBytesNeeded(proto.getBytesNeeded());
  builder.setBytesCached(proto.getBytesCached());
  builder.setFilesNeeded(proto.getFilesNeeded());
  builder.setFilesCached(proto.getFilesCached());
  builder.setHasExpired(proto.getHasExpired());
  return builder.build();
}
 
Developer: aliyun-beta | Project: aliyun-oss-hadoop-fs | Lines: 10 | Source: PBHelperClient.java
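
Examples 3 and 4 are inverse conversions between CacheDirectiveStats and its protobuf message. Below is a hedged sketch of how such a pair is typically exercised in a round trip; the import paths for the proto class and PBHelper are assumptions based on the Hadoop 2.x source layout.

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.CacheDirectiveStatsProto;
import org.apache.hadoop.hdfs.protocolPB.PBHelper;

public class StatsProtoRoundTripSketch {
  public static void main(String[] args) {
    CacheDirectiveStats original = new CacheDirectiveStats.Builder()
        .setBytesNeeded(4096L)
        .setBytesCached(2048L)
        .setFilesNeeded(4L)
        .setFilesCached(2L)
        .setHasExpired(false)
        .build();

    // Serialize to the protobuf message (Example 3) and back (Example 4).
    CacheDirectiveStatsProto proto = PBHelper.convert(original);
    CacheDirectiveStats restored = PBHelper.convert(proto);

    // Every counter should survive the round trip unchanged.
    System.out.println(restored.getBytesNeeded() == original.getBytesNeeded());
    System.out.println(restored.getFilesCached() == original.getFilesCached());
    System.out.println(restored.hasExpired() == original.hasExpired());
  }
}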


Example 5: run

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  String pathFilter = StringUtils.popOptionWithArgument("-path", args);
  if (pathFilter != null) {
    builder.setPath(new Path(pathFilter));
  }
  String poolFilter = StringUtils.popOptionWithArgument("-pool", args);
  if (poolFilter != null) {
    builder.setPool(poolFilter);
  }
  boolean printStats = StringUtils.popOption("-stats", args);
  String idFilter = StringUtils.popOptionWithArgument("-id", args);
  if (idFilter != null) {
    builder.setId(Long.parseLong(idFilter));
  }
  if (!args.isEmpty()) {
    System.err.println("Can't understand argument: " + args.get(0));
    return 1;
  }
  TableListing.Builder tableBuilder = new TableListing.Builder().
      addField("ID", Justification.RIGHT).
      addField("POOL", Justification.LEFT).
      addField("REPL", Justification.RIGHT).
      addField("EXPIRY", Justification.LEFT).
      addField("PATH", Justification.LEFT);
  if (printStats) {
    tableBuilder.addField("BYTES_NEEDED", Justification.RIGHT).
                addField("BYTES_CACHED", Justification.RIGHT).
                addField("FILES_NEEDED", Justification.RIGHT).
                addField("FILES_CACHED", Justification.RIGHT);
  }
  TableListing tableListing = tableBuilder.build();
  try {
    DistributedFileSystem dfs = AdminHelper.getDFS(conf);
    RemoteIterator<CacheDirectiveEntry> iter =
        dfs.listCacheDirectives(builder.build());
    int numEntries = 0;
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveInfo directive = entry.getInfo();
      CacheDirectiveStats stats = entry.getStats();
      List<String> row = new LinkedList<String>();
      row.add("" + directive.getId());
      row.add(directive.getPool());
      row.add("" + directive.getReplication());
      String expiry;
      // This is effectively never, round for nice printing
      if (directive.getExpiration().getMillis() >
          Expiration.MAX_RELATIVE_EXPIRY_MS / 2) {
        expiry = "never";
      } else {
        expiry = directive.getExpiration().toString();
      }
      row.add(expiry);
      row.add(directive.getPath().toUri().getPath());
      if (printStats) {
        row.add("" + stats.getBytesNeeded());
        row.add("" + stats.getBytesCached());
        row.add("" + stats.getFilesNeeded());
        row.add("" + stats.getFilesCached());
      }
      tableListing.addRow(row.toArray(new String[row.size()]));
      numEntries++;
    }
    System.out.print(String.format("Found %d entr%s%n",
        numEntries, numEntries == 1 ? "y" : "ies"));
    if (numEntries > 0) {
      System.out.print(tableListing);
    }
  } catch (IOException e) {
    System.err.println(AdminHelper.prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: naver | Project: hadoop | Lines: 78 | Source: CacheAdmin.java
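
The listing this CacheAdmin command prints can also be driven programmatically. Below is a minimal sketch, assuming fs.defaultFS points at an HDFS cluster with caching enabled and using a made-up pool name as the filter; the listCacheDirectives and CacheDirectiveEntry calls are the same ones used in the example above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats;

public class ListDirectivesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // Filter directives by pool, mirroring the "-pool" option handled above.
    CacheDirectiveInfo filter = new CacheDirectiveInfo.Builder()
        .setPool("examplePool")   // hypothetical pool name
        .build();

    RemoteIterator<CacheDirectiveEntry> iter = dfs.listCacheDirectives(filter);
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveStats stats = entry.getStats();
      System.out.println(entry.getInfo().getPath() + ": "
          + stats.getBytesCached() + "/" + stats.getBytesNeeded()
          + " bytes cached");
    }
  }
}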


Example 6: waitForCacheDirectiveStats

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
private static void waitForCacheDirectiveStats(final DistributedFileSystem dfs,
    final long targetBytesNeeded, final long targetBytesCached,
    final long targetFilesNeeded, final long targetFilesCached,
    final CacheDirectiveInfo filter, final String infoString)
          throws Exception {
  LOG.info("Polling listCacheDirectives " + 
      ((filter == null) ? "ALL" : filter.toString()) + " for " +
      targetBytesNeeded + " targetBytesNeeded, " +
      targetBytesCached + " targetBytesCached, " +
      targetFilesNeeded + " targetFilesNeeded, " +
      targetFilesCached + " targetFilesCached");
  GenericTestUtils.waitFor(new Supplier<Boolean>() {
    @Override
    public Boolean get() {
      RemoteIterator<CacheDirectiveEntry> iter = null;
      CacheDirectiveEntry entry = null;
      try {
        iter = dfs.listCacheDirectives(filter);
        entry = iter.next();
      } catch (IOException e) {
        fail("got IOException while calling " +
            "listCacheDirectives: " + e.getMessage());
      }
      Assert.assertNotNull(entry);
      CacheDirectiveStats stats = entry.getStats();
      if ((targetBytesNeeded == stats.getBytesNeeded()) &&
          (targetBytesCached == stats.getBytesCached()) &&
          (targetFilesNeeded == stats.getFilesNeeded()) &&
          (targetFilesCached == stats.getFilesCached())) {
        return true;
      } else {
        LOG.info(infoString + ": " +
            "filesNeeded: " +
            stats.getFilesNeeded() + "/" + targetFilesNeeded +
            ", filesCached: " + 
            stats.getFilesCached() + "/" + targetFilesCached +
            ", bytesNeeded: " +
            stats.getBytesNeeded() + "/" + targetBytesNeeded +
            ", bytesCached: " + 
            stats.getBytesCached() + "/" + targetBytesCached);
        return false;
      }
    }
  }, 500, 60000);
}
 
Developer: naver | Project: hadoop | Lines: 46 | Source: TestCacheDirectives.java
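
A call site for a polling helper like this usually follows a cache mutation in the same test. The sketch below is hypothetical: it assumes a running test cluster, a cache pool named "testPool" created earlier, and that waitForCacheDirectiveStats refers to the helper shown in Example 6 (a stub is included here only so the sketch reads standalone).

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

public class WaitForStatsUsageSketch {
  static void cacheAndWait(DistributedFileSystem dfs) throws Exception {
    Path file = new Path("/cached/data.bin");         // hypothetical test file
    long fileSize = dfs.getFileStatus(file).getLen();

    // Ask the NameNode to cache the file at replication 1.
    long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
        .setPath(file)
        .setPool("testPool")
        .setReplication((short) 1)
        .build());

    // Poll until the file is fully cached:
    // bytesNeeded == bytesCached == fileSize, filesNeeded == filesCached == 1.
    waitForCacheDirectiveStats(dfs, fileSize, fileSize, 1, 1,
        new CacheDirectiveInfo.Builder().setId(id).build(),
        "waiting for " + file + " to be fully cached");
  }

  // Placeholder; the real polling logic is the helper shown in Example 6.
  static void waitForCacheDirectiveStats(DistributedFileSystem dfs,
      long bytesNeeded, long bytesCached, long filesNeeded, long filesCached,
      CacheDirectiveInfo filter, String info) throws Exception {
    // see Example 6
  }
}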


Example 7: run

import org.apache.hadoop.hdfs.protocol.CacheDirectiveStats; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  String pathFilter = StringUtils.popOptionWithArgument("-path", args);
  if (pathFilter != null) {
    builder.setPath(new Path(pathFilter));
  }
  String poolFilter = StringUtils.popOptionWithArgument("-pool", args);
  if (poolFilter != null) {
    builder.setPool(poolFilter);
  }
  boolean printStats = StringUtils.popOption("-stats", args);
  String idFilter = StringUtils.popOptionWithArgument("-id", args);
  if (idFilter != null) {
    builder.setId(Long.parseLong(idFilter));
  }
  if (!args.isEmpty()) {
    System.err.println("Can't understand argument: " + args.get(0));
    return 1;
  }
  TableListing.Builder tableBuilder = new TableListing.Builder().
      addField("ID", Justification.RIGHT).
      addField("POOL", Justification.LEFT).
      addField("REPL", Justification.RIGHT).
      addField("EXPIRY", Justification.LEFT).
      addField("PATH", Justification.LEFT);
  if (printStats) {
    tableBuilder.addField("BYTES_NEEDED", Justification.RIGHT).
                addField("BYTES_CACHED", Justification.RIGHT).
                addField("FILES_NEEDED", Justification.RIGHT).
                addField("FILES_CACHED", Justification.RIGHT);
  }
  TableListing tableListing = tableBuilder.build();
  try {
    DistributedFileSystem dfs = getDFS(conf);
    RemoteIterator<CacheDirectiveEntry> iter =
        dfs.listCacheDirectives(builder.build());
    int numEntries = 0;
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveInfo directive = entry.getInfo();
      CacheDirectiveStats stats = entry.getStats();
      List<String> row = new LinkedList<String>();
      row.add("" + directive.getId());
      row.add(directive.getPool());
      row.add("" + directive.getReplication());
      String expiry;
      // This is effectively never, round for nice printing
      if (directive.getExpiration().getMillis() >
          Expiration.MAX_RELATIVE_EXPIRY_MS / 2) {
        expiry = "never";
      } else {
        expiry = directive.getExpiration().toString();
      }
      row.add(expiry);
      row.add(directive.getPath().toUri().getPath());
      if (printStats) {
        row.add("" + stats.getBytesNeeded());
        row.add("" + stats.getBytesCached());
        row.add("" + stats.getFilesNeeded());
        row.add("" + stats.getFilesCached());
      }
      tableListing.addRow(row.toArray(new String[0]));
      numEntries++;
    }
    System.out.print(String.format("Found %d entr%s%n",
        numEntries, numEntries == 1 ? "y" : "ies"));
    if (numEntries > 0) {
      System.out.print(tableListing);
    }
  } catch (IOException e) {
    System.err.println(prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: Nextzero | Project: hadoop-2.6.0-cdh5.4.3 | Lines: 78 | Source: CacheAdmin.java



Note: The org.apache.hadoop.hdfs.protocol.CacheDirectiveStats examples in this article were collected from open-source projects hosted on GitHub and other source/documentation platforms. Copyright of the code snippets remains with the original authors; consult each project's license before redistributing or reusing them, and do not reproduce this compilation without permission.

