Java Expiration Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration. If you are wondering what the Expiration class does, how to use it, or what it looks like in practice, the selected code examples below may help.



Expiration is a nested class of org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo. Nine code examples of the Expiration class are shown below, sorted by popularity by default. A minimal construction sketch follows, before the examples.
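The sketch below shows how Expiration values are typically constructed. It relies only on the factory methods and constants that appear in the examples later in this article (newRelative, newAbsolute, NEVER, MAX_RELATIVE_EXPIRY_MS); the surrounding class and the concrete values are illustrative.

import java.util.Date;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration;

public class ExpirationSketch {
  public static void main(String[] args) {
    // Relative expiration: the directive expires this many milliseconds from "now".
    Expiration oneHour = Expiration.newRelative(60L * 60 * 1000);

    // Absolute expiration: the directive expires at a fixed wall-clock time.
    Expiration tomorrow =
        Expiration.newAbsolute(new Date(System.currentTimeMillis() + 24L * 60 * 60 * 1000));

    // Sentinel meaning "effectively never"; compare with MAX_RELATIVE_EXPIRY_MS below.
    Expiration never = Expiration.NEVER;

    System.out.println(oneHour.isRelative() + " " + oneHour.getMillis());
    System.out.println(tomorrow.isRelative() + " " + tomorrow.getMillis());
    System.out.println(never.getMillis() + " <= " + Expiration.MAX_RELATIVE_EXPIRY_MS);
  }
}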

Example 1: validate

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
public static void validate(CachePoolInfo info) throws IOException {
  if (info == null) {
    throw new InvalidRequestException("CachePoolInfo is null");
  }
  if ((info.getLimit() != null) && (info.getLimit() < 0)) {
    throw new InvalidRequestException("Limit is negative.");
  }
  if (info.getMaxRelativeExpiryMs() != null) {
    long maxRelativeExpiryMs = info.getMaxRelativeExpiryMs();
    if (maxRelativeExpiryMs < 0l) {
      throw new InvalidRequestException("Max relative expiry is negative.");
    }
    if (maxRelativeExpiryMs > Expiration.MAX_RELATIVE_EXPIRY_MS) {
      throw new InvalidRequestException("Max relative expiry is too big.");
    }
  }
  validateName(info.poolName);
}
 
Developer: naver, Project: hadoop, Lines: 19, Source: CachePoolInfo.java
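As a usage note, a caller would typically build a CachePoolInfo and pass it through validate() before creating or modifying a cache pool. The sketch below is hypothetical: the setter names are assumed to mirror the getLimit()/getMaxRelativeExpiryMs() accessors used in the example above.

// Hypothetical caller of CachePoolInfo.validate(); setter names are assumed
// from the accessors used in the example above.
CachePoolInfo info = new CachePoolInfo("analytics-pool");
info.setLimit(10L * 1024 * 1024 * 1024);               // pool may cache at most 10 GB
info.setMaxRelativeExpiryMs(7L * 24 * 60 * 60 * 1000); // directives may live at most 7 days
CachePoolInfo.validate(info);                          // throws InvalidRequestException if invalid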


Example 2: validateExpiryTime

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
/**
 * Calculates the absolute expiry time of the directive from the
 * {@link CacheDirectiveInfo.Expiration}. This converts a relative Expiration
 * into an absolute time based on the local clock.
 * 
 * @param info to validate.
 * @param maxRelativeExpiryTime of the info's pool.
 * @return the expiration time, or the pool's max absolute expiration if the
 *         info's expiration was not set.
 * @throws InvalidRequestException if the info's Expiration is invalid.
 */
private static long validateExpiryTime(CacheDirectiveInfo info,
    long maxRelativeExpiryTime) throws InvalidRequestException {
  LOG.trace("Validating directive {} pool maxRelativeExpiryTime {}", info,
      maxRelativeExpiryTime);
  final long now = new Date().getTime();
  final long maxAbsoluteExpiryTime = now + maxRelativeExpiryTime;
  if (info == null || info.getExpiration() == null) {
    return maxAbsoluteExpiryTime;
  }
  Expiration expiry = info.getExpiration();
  if (expiry.getMillis() < 0l) {
    throw new InvalidRequestException("Cannot set a negative expiration: "
        + expiry.getMillis());
  }
  long relExpiryTime, absExpiryTime;
  if (expiry.isRelative()) {
    relExpiryTime = expiry.getMillis();
    absExpiryTime = now + relExpiryTime;
  } else {
    absExpiryTime = expiry.getMillis();
    relExpiryTime = absExpiryTime - now;
  }
  // Need to cap the expiry so we don't overflow a long when doing math
  if (relExpiryTime > Expiration.MAX_RELATIVE_EXPIRY_MS) {
    throw new InvalidRequestException("Expiration "
        + expiry.toString() + " is too far in the future!");
  }
  // Fail if the requested expiry is greater than the max
  if (relExpiryTime > maxRelativeExpiryTime) {
    throw new InvalidRequestException("Expiration " + expiry.toString()
        + " exceeds the max relative expiration time of "
        + maxRelativeExpiryTime + " ms.");
  }
  return absExpiryTime;
}
 
Developer: naver, Project: hadoop, Lines: 47, Source: CacheManager.java


Example 3: parseExpirationString

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
private static CacheDirectiveInfo.Expiration parseExpirationString(String ttlString)
    throws IOException {
  CacheDirectiveInfo.Expiration ex = null;
  if (ttlString != null) {
    if (ttlString.equalsIgnoreCase("never")) {
      ex = CacheDirectiveInfo.Expiration.NEVER;
    } else {
      long ttl = DFSUtil.parseRelativeTime(ttlString);
      ex = CacheDirectiveInfo.Expiration.newRelative(ttl);
    }
  }
  return ex;
}
 
Developer: naver, Project: hadoop, Lines: 14, Source: CacheAdmin.java
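Here ttlString is the value of the -ttl option on the cache admin command line. As a rough illustration (assuming DFSUtil.parseRelativeTime accepts suffixed durations such as "30m" or "4d", which is not shown in this article), typical inputs would map to Expiration values as follows:

// Hypothetical inputs for parseExpirationString(); the accepted duration
// suffixes are an assumption about DFSUtil.parseRelativeTime.
parseExpirationString(null);    // -> null, no expiration requested
parseExpirationString("never"); // -> CacheDirectiveInfo.Expiration.NEVER
parseExpirationString("30m");   // -> Expiration.newRelative(30 * 60 * 1000)
parseExpirationString("4d");    // -> Expiration.newRelative(4 * 24 * 60 * 60 * 1000)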


Example 4: parseExpirationString

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
private static Expiration parseExpirationString(String ttlString)
    throws IOException {
  Expiration ex = null;
  if (ttlString != null) {
    if (ttlString.equalsIgnoreCase("never")) {
      ex = CacheDirectiveInfo.Expiration.NEVER;
    } else {
      long ttl = DFSUtil.parseRelativeTime(ttlString);
      ex = CacheDirectiveInfo.Expiration.newRelative(ttl);
    }
  }
  return ex;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 14, Source: CacheAdmin.java


Example 5: validateExpiryTime

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
/**
 * Calculates the absolute expiry time of the directive from the
 * {@link CacheDirectiveInfo.Expiration}. This converts a relative Expiration
 * into an absolute time based on the local clock.
 * 
 * @param info to validate.
 * @param maxRelativeExpiryTime of the info's pool.
 * @return the expiration time, or the pool's max absolute expiration if the
 *         info's expiration was not set.
 * @throws InvalidRequestException if the info's Expiration is invalid.
 */
private static long validateExpiryTime(CacheDirectiveInfo info,
    long maxRelativeExpiryTime) throws InvalidRequestException {
  if (LOG.isTraceEnabled()) {
    LOG.trace("Validating directive " + info
        + " pool maxRelativeExpiryTime " + maxRelativeExpiryTime);
  }
  final long now = new Date().getTime();
  final long maxAbsoluteExpiryTime = now + maxRelativeExpiryTime;
  if (info == null || info.getExpiration() == null) {
    return maxAbsoluteExpiryTime;
  }
  Expiration expiry = info.getExpiration();
  if (expiry.getMillis() < 0l) {
    throw new InvalidRequestException("Cannot set a negative expiration: "
        + expiry.getMillis());
  }
  long relExpiryTime, absExpiryTime;
  if (expiry.isRelative()) {
    relExpiryTime = expiry.getMillis();
    absExpiryTime = now + relExpiryTime;
  } else {
    absExpiryTime = expiry.getMillis();
    relExpiryTime = absExpiryTime - now;
  }
  // Need to cap the expiry so we don't overflow a long when doing math
  if (relExpiryTime > Expiration.MAX_RELATIVE_EXPIRY_MS) {
    throw new InvalidRequestException("Expiration "
        + expiry.toString() + " is too far in the future!");
  }
  // Fail if the requested expiry is greater than the max
  if (relExpiryTime > maxRelativeExpiryTime) {
    throw new InvalidRequestException("Expiration " + expiry.toString()
        + " exceeds the max relative expiration time of "
        + maxRelativeExpiryTime + " ms.");
  }
  return absExpiryTime;
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 49, Source: CacheManager.java


Example 6: run

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  String pathFilter = StringUtils.popOptionWithArgument("-path", args);
  if (pathFilter != null) {
    builder.setPath(new Path(pathFilter));
  }
  String poolFilter = StringUtils.popOptionWithArgument("-pool", args);
  if (poolFilter != null) {
    builder.setPool(poolFilter);
  }
  boolean printStats = StringUtils.popOption("-stats", args);
  String idFilter = StringUtils.popOptionWithArgument("-id", args);
  if (idFilter != null) {
    builder.setId(Long.parseLong(idFilter));
  }
  if (!args.isEmpty()) {
    System.err.println("Can't understand argument: " + args.get(0));
    return 1;
  }
  TableListing.Builder tableBuilder = new TableListing.Builder().
      addField("ID", Justification.RIGHT).
      addField("POOL", Justification.LEFT).
      addField("REPL", Justification.RIGHT).
      addField("EXPIRY", Justification.LEFT).
      addField("PATH", Justification.LEFT);
  if (printStats) {
    tableBuilder.addField("BYTES_NEEDED", Justification.RIGHT).
                addField("BYTES_CACHED", Justification.RIGHT).
                addField("FILES_NEEDED", Justification.RIGHT).
                addField("FILES_CACHED", Justification.RIGHT);
  }
  TableListing tableListing = tableBuilder.build();
  try {
    DistributedFileSystem dfs = AdminHelper.getDFS(conf);
    RemoteIterator<CacheDirectiveEntry> iter =
        dfs.listCacheDirectives(builder.build());
    int numEntries = 0;
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveInfo directive = entry.getInfo();
      CacheDirectiveStats stats = entry.getStats();
      List<String> row = new LinkedList<String>();
      row.add("" + directive.getId());
      row.add(directive.getPool());
      row.add("" + directive.getReplication());
      String expiry;
      // This is effectively never, round for nice printing
      if (directive.getExpiration().getMillis() >
          Expiration.MAX_RELATIVE_EXPIRY_MS / 2) {
        expiry = "never";
      } else {
        expiry = directive.getExpiration().toString();
      }
      row.add(expiry);
      row.add(directive.getPath().toUri().getPath());
      if (printStats) {
        row.add("" + stats.getBytesNeeded());
        row.add("" + stats.getBytesCached());
        row.add("" + stats.getFilesNeeded());
        row.add("" + stats.getFilesCached());
      }
      tableListing.addRow(row.toArray(new String[row.size()]));
      numEntries++;
    }
    System.out.print(String.format("Found %d entr%s%n",
        numEntries, numEntries == 1 ? "y" : "ies"));
    if (numEntries > 0) {
      System.out.print(tableListing);
    }
  } catch (IOException e) {
    System.err.println(AdminHelper.prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: naver, Project: hadoop, Lines: 78, Source: CacheAdmin.java
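This run() implementation backs the listDirectives subcommand of the HDFS cache admin tool. As a usage note, an invocation along the following lines should print the table built here; the filter options match the argument parsing above, while the subcommand name is an assumption about how CacheAdmin is wired up:

hdfs cacheadmin -listDirectives -pool pool1 -path /mypath -stats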


Example 7: testExpiry

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
@Test(timeout=120000)
public void testExpiry() throws Exception {
  String pool = "pool1";
  dfs.addCachePool(new CachePoolInfo(pool));
  Path p = new Path("/mypath");
  DFSTestUtil.createFile(dfs, p, BLOCK_SIZE*2, (short)2, 0x999);
  // Expire after test timeout
  Date start = new Date();
  Date expiry = DateUtils.addSeconds(start, 120);
  final long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
      .setPath(p)
      .setPool(pool)
      .setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(expiry))
      .setReplication((short)2)
      .build());
  waitForCachedBlocks(cluster.getNameNode(), 2, 4, "testExpiry:1");
  // Change it to expire sooner
  dfs.modifyCacheDirective(new CacheDirectiveInfo.Builder().setId(id)
      .setExpiration(Expiration.newRelative(0)).build());
  waitForCachedBlocks(cluster.getNameNode(), 0, 0, "testExpiry:2");
  RemoteIterator<CacheDirectiveEntry> it = dfs.listCacheDirectives(null);
  CacheDirectiveEntry ent = it.next();
  assertFalse(it.hasNext());
  Date entryExpiry = new Date(ent.getInfo().getExpiration().getMillis());
  assertTrue("Directive should have expired",
      entryExpiry.before(new Date()));
  // Change it back to expire later
  dfs.modifyCacheDirective(new CacheDirectiveInfo.Builder().setId(id)
      .setExpiration(Expiration.newRelative(120000)).build());
  waitForCachedBlocks(cluster.getNameNode(), 2, 4, "testExpiry:3");
  it = dfs.listCacheDirectives(null);
  ent = it.next();
  assertFalse(it.hasNext());
  entryExpiry = new Date(ent.getInfo().getExpiration().getMillis());
  assertTrue("Directive should not have expired",
      entryExpiry.after(new Date()));
  // Verify that setting a negative TTL throws an error
  try {
    dfs.modifyCacheDirective(new CacheDirectiveInfo.Builder().setId(id)
        .setExpiration(Expiration.newRelative(-1)).build());
  } catch (InvalidRequestException e) {
    GenericTestUtils
        .assertExceptionContains("Cannot set a negative expiration", e);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 46, Source: TestCacheDirectives.java


Example 8: run

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  String pathFilter = StringUtils.popOptionWithArgument("-path", args);
  if (pathFilter != null) {
    builder.setPath(new Path(pathFilter));
  }
  String poolFilter = StringUtils.popOptionWithArgument("-pool", args);
  if (poolFilter != null) {
    builder.setPool(poolFilter);
  }
  boolean printStats = StringUtils.popOption("-stats", args);
  String idFilter = StringUtils.popOptionWithArgument("-id", args);
  if (idFilter != null) {
    builder.setId(Long.parseLong(idFilter));
  }
  if (!args.isEmpty()) {
    System.err.println("Can't understand argument: " + args.get(0));
    return 1;
  }
  TableListing.Builder tableBuilder = new TableListing.Builder().
      addField("ID", Justification.RIGHT).
      addField("POOL", Justification.LEFT).
      addField("REPL", Justification.RIGHT).
      addField("EXPIRY", Justification.LEFT).
      addField("PATH", Justification.LEFT);
  if (printStats) {
    tableBuilder.addField("BYTES_NEEDED", Justification.RIGHT).
                addField("BYTES_CACHED", Justification.RIGHT).
                addField("FILES_NEEDED", Justification.RIGHT).
                addField("FILES_CACHED", Justification.RIGHT);
  }
  TableListing tableListing = tableBuilder.build();
  try {
    DistributedFileSystem dfs = getDFS(conf);
    RemoteIterator<CacheDirectiveEntry> iter =
        dfs.listCacheDirectives(builder.build());
    int numEntries = 0;
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveInfo directive = entry.getInfo();
      CacheDirectiveStats stats = entry.getStats();
      List<String> row = new LinkedList<String>();
      row.add("" + directive.getId());
      row.add(directive.getPool());
      row.add("" + directive.getReplication());
      String expiry;
      // This is effectively never, round for nice printing
      if (directive.getExpiration().getMillis() >
          Expiration.MAX_RELATIVE_EXPIRY_MS / 2) {
        expiry = "never";
      } else {
        expiry = directive.getExpiration().toString();
      }
      row.add(expiry);
      row.add(directive.getPath().toUri().getPath());
      if (printStats) {
        row.add("" + stats.getBytesNeeded());
        row.add("" + stats.getBytesCached());
        row.add("" + stats.getFilesNeeded());
        row.add("" + stats.getFilesCached());
      }
      tableListing.addRow(row.toArray(new String[0]));
      numEntries++;
    }
    System.out.print(String.format("Found %d entr%s%n",
        numEntries, numEntries == 1 ? "y" : "ies"));
    if (numEntries > 0) {
      System.out.print(tableListing);
    }
  } catch (IOException e) {
    System.err.println(prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 78, Source: CacheAdmin.java


Example 9: run

import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration; // import the required package/class
@Override
public int run(Configuration conf, List<String> args) throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  String pathFilter = StringUtils.popOptionWithArgument("-path", args);
  if (pathFilter != null) {
    builder.setPath(new Path(pathFilter));
  }
  String poolFilter = StringUtils.popOptionWithArgument("-pool", args);
  if (poolFilter != null) {
    builder.setPool(poolFilter);
  }
  boolean printStats = StringUtils.popOption("-stats", args);
  if (!args.isEmpty()) {
    System.err.println("Can't understand argument: " + args.get(0));
    return 1;
  }
  TableListing.Builder tableBuilder = new TableListing.Builder().
      addField("ID", Justification.RIGHT).
      addField("POOL", Justification.LEFT).
      addField("REPL", Justification.RIGHT).
      addField("EXPIRY", Justification.LEFT).
      addField("PATH", Justification.LEFT);
  if (printStats) {
    tableBuilder.addField("BYTES_NEEDED", Justification.RIGHT).
                addField("BYTES_CACHED", Justification.RIGHT).
                addField("FILES_NEEDED", Justification.RIGHT).
                addField("FILES_CACHED", Justification.RIGHT);
  }
  TableListing tableListing = tableBuilder.build();
  try {
    DistributedFileSystem dfs = getDFS(conf);
    RemoteIterator<CacheDirectiveEntry> iter =
        dfs.listCacheDirectives(builder.build());
    int numEntries = 0;
    while (iter.hasNext()) {
      CacheDirectiveEntry entry = iter.next();
      CacheDirectiveInfo directive = entry.getInfo();
      CacheDirectiveStats stats = entry.getStats();
      List<String> row = new LinkedList<String>();
      row.add("" + directive.getId());
      row.add(directive.getPool());
      row.add("" + directive.getReplication());
      String expiry;
      // This is effectively never, round for nice printing
      if (directive.getExpiration().getMillis() >
          Expiration.MAX_RELATIVE_EXPIRY_MS / 2) {
        expiry = "never";
      } else {
        expiry = directive.getExpiration().toString();
      }
      row.add(expiry);
      row.add(directive.getPath().toUri().getPath());
      if (printStats) {
        row.add("" + stats.getBytesNeeded());
        row.add("" + stats.getBytesCached());
        row.add("" + stats.getFilesNeeded());
        row.add("" + stats.getFilesCached());
      }
      tableListing.addRow(row.toArray(new String[0]));
      numEntries++;
    }
    System.out.print(String.format("Found %d entr%s\n",
        numEntries, numEntries == 1 ? "y" : "ies"));
    if (numEntries > 0) {
      System.out.print(tableListing);
    }
  } catch (IOException e) {
    System.err.println(prettifyException(e));
    return 2;
  }
  return 0;
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 74, Source: CacheAdmin.java



Note: The org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo.Expiration examples in this article were collected from source and documentation hosting platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their authors, and copyright remains with the original authors; consult the corresponding project's license before redistributing or using the code. Do not reproduce without permission.

