
Java StoreUtils Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hbase.regionserver.StoreUtils. If you are unsure what StoreUtils is for, how to use it, or what it looks like in practice, the curated class examples below should help.



StoreUtils belongs to the org.apache.hadoop.hbase.regionserver package. Twenty code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
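As a quick orientation before the examples, here is a minimal sketch of the StoreUtils helpers they exercise. This sketch is illustrative, not from the original article: it assumes an hbase-server dependency and a collection of HStoreFile handles obtained inside a region server, and it follows the HBase 2.x signatures used in the apache/hbase examples below (older examples use StoreFile and a nullable Integer seed instead). Note that StoreUtils is an internal, Private-audience API, not a public client API; treat the exact signatures as assumptions to check against your HBase version.

import java.io.IOException;
import java.util.Collection;
import java.util.OptionalInt;
import java.util.OptionalLong;

import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.StoreUtils;

public class StoreUtilsSketch {
  // storeFiles is assumed to come from an HStore inside a running region server.
  static void inspect(Collection<HStoreFile> storeFiles) throws IOException {
    // True if any file is a reference left behind by a region split.
    boolean afterSplit = StoreUtils.hasReferences(storeFiles);
    // Oldest file-creation timestamp; compaction policies compare it against
    // the major compaction period to decide whether a major compaction is due.
    long lowestTimestamp = StoreUtils.getLowestTimestamp(storeFiles);
    // Highest sequence id across the files (empty when there are no files).
    OptionalLong maxSeqId = StoreUtils.getMaxSequenceIdInList(storeFiles);
    // Seed derived from the file names, so compaction jitter is deterministic
    // across restarts.
    OptionalInt seed = StoreUtils.getDeterministicRandomSeed(storeFiles);
    System.out.printf("afterSplit=%s lowestTs=%d maxSeqId=%s seed=%s%n",
        afterSplit, lowestTimestamp, maxSeqId, seed);
  }
}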

Example 1: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public CompactionRequest selectCompaction(Collection<StoreFile> candidateFiles,
    List<StoreFile> filesCompacting, boolean isUserCompaction, boolean mayUseOffPeak,
    boolean forceMajor) throws IOException {
  
  if(forceMajor){
    LOG.warn("Major compaction is not supported for FIFO compaction policy. Ignore the flag.");
  }
  boolean isAfterSplit = StoreUtils.hasReferences(candidateFiles);
  if(isAfterSplit){
    LOG.info("Split detected, delegate selection to the parent policy.");
    return super.selectCompaction(candidateFiles, filesCompacting, isUserCompaction, 
      mayUseOffPeak, forceMajor);
  }
  
  // Nothing to compact
  Collection<StoreFile> toCompact = getExpiredStores(candidateFiles, filesCompacting);
  CompactionRequest result = new CompactionRequest(toCompact);
  return result;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 21 | Source: FIFOCompactionPolicy.java
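Note that FIFO compaction never rewrites data: except right after a region split (when reference files force a delegation to the parent policy), it only selects files whose TTL has already expired, so the resulting request simply lets compaction delete them.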


Example 2: getNextMajorCompactTime

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param filesToCompact the files being considered for compaction
 * @return when the next major compaction should run
 */
public long getNextMajorCompactTime(final Collection<StoreFile> filesToCompact) {
  // default = 24hrs
  long ret = comConf.getMajorCompactionPeriod();
  if (ret > 0) {
    // default = 20% = +/- 4.8 hrs
    double jitterPct = comConf.getMajorCompactionJitter();
    if (jitterPct > 0) {
      long jitter = Math.round(ret * jitterPct);
      // deterministic jitter avoids a major compaction storm on restart
      Integer seed = StoreUtils.getDeterministicRandomSeed(filesToCompact);
      if (seed != null) {
        // Synchronized to ensure one user of random instance at a time.
        double rnd = -1;
        synchronized (this) {
          this.random.setSeed(seed);
          rnd = this.random.nextDouble();
        }
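        // With rnd in [0, 1), this shifts ret by a uniform offset in (-jitter, +jitter].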
        ret += jitter - Math.round(2L * jitter * rnd);
      } else {
        ret = 0; // If seed is null, then no storefiles == no major compaction
      }
    }
  }
  return ret;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 30 | Source: RatioBasedCompactionPolicy.java
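With the defaults noted in the comments above (a 24-hour period and 20% jitter, i.e. ±4.8 hours), the next major compaction lands uniformly between roughly 19.2 and 28.8 hours out; because the seed is derived from the store files, the schedule is reproducible across region server restarts.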


Example 3: getNextMajorCompactTime

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public long getNextMajorCompactTime(final Collection<StoreFile> filesToCompact) {
  // default = 24hrs
  long ret = comConf.getMajorCompactionPeriod();
  if (ret > 0) {
    // default = 20% = +/- 4.8 hrs
    double jitterPct = comConf.getMajorCompactionJitter();
    if (jitterPct > 0) {
      long jitter = Math.round(ret * jitterPct);
      // deterministic jitter avoids a major compaction storm on restart
      Integer seed = StoreUtils.getDeterministicRandomSeed(filesToCompact);
      if (seed != null) {
        double rnd = (new Random(seed)).nextDouble();
        ret += jitter - Math.round(2L * jitter * rnd);
      } else {
        ret = 0; // no storefiles == no major compaction
      }
    }
  }
  return ret;
}
 
Developer: tenggyut | Project: HIndex | Lines: 21 | Source: RatioBasedCompactionPolicy.java


Example 4: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public CompactionRequestImpl selectCompaction(Collection<HStoreFile> candidateFiles,
    List<HStoreFile> filesCompacting, boolean isUserCompaction, boolean mayUseOffPeak,
    boolean forceMajor) throws IOException {
  if(forceMajor){
    LOG.warn("Major compaction is not supported for FIFO compaction policy. Ignore the flag.");
  }
  boolean isAfterSplit = StoreUtils.hasReferences(candidateFiles);
  if(isAfterSplit){
    LOG.info("Split detected, delegate selection to the parent policy.");
    return super.selectCompaction(candidateFiles, filesCompacting, isUserCompaction, 
      mayUseOffPeak, forceMajor);
  }

  // Nothing to compact
  Collection<HStoreFile> toCompact = getExpiredStores(candidateFiles, filesCompacting);
  CompactionRequestImpl result = new CompactionRequestImpl(toCompact);
  return result;
}
 
Developer: apache | Project: hbase | Lines: 20 | Source: FIFOCompactionPolicy.java


Example 5: getNextMajorCompactTime

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public long getNextMajorCompactTime(final List<StoreFile> filesToCompact) {
  // default = 24hrs
  long ret = comConf.getMajorCompactionPeriod();
  if (ret > 0) {
    // default = 20% = +/- 4.8 hrs
    double jitterPct = comConf.getMajorCompactionJitter();
    if (jitterPct > 0) {
      long jitter = Math.round(ret * jitterPct);
      // deterministic jitter avoids a major compaction storm on restart
      Integer seed = StoreUtils.getDeterministicRandomSeed(filesToCompact);
      if (seed != null) {
        double rnd = (new Random(seed)).nextDouble();
        ret += jitter - Math.round(2L * jitter * rnd);
      } else {
        ret = 0; // no storefiles == no major compaction
      }
    }
  }
  return ret;
}
 
Developer: daidong | Project: DominoHBase | Lines: 21 | Source: CompactionPolicy.java


Example 6: isMajorCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public boolean isMajorCompaction(Collection<StoreFile> filesToCompact) throws IOException {
  boolean isAfterSplit = StoreUtils.hasReferences(filesToCompact);
  if(isAfterSplit){
    LOG.info("Split detected, delegate to the parent policy.");
    return super.isMajorCompaction(filesToCompact);
  }
  return false;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 10 | Source: FIFOCompactionPolicy.java


Example 7: needsCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public boolean needsCompaction(Collection<StoreFile> storeFiles, 
    List<StoreFile> filesCompacting) {  
  boolean isAfterSplit = StoreUtils.hasReferences(storeFiles);
  if(isAfterSplit){
    LOG.info("Split detected, delegate to the parent policy.");
    return super.needsCompaction(storeFiles, filesCompacting);
  }
  return hasExpiredStores(storeFiles);
}
 
Developer: fengchen8086 | Project: ditb | Lines: 11 | Source: FIFOCompactionPolicy.java


Example 8: needsCompactions

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public boolean needsCompactions(StripeInformationProvider si, List<StoreFile> filesCompacting) {
  // Approximation on whether we need compaction.
  return filesCompacting.isEmpty()
      && (StoreUtils.hasReferences(si.getStorefiles())
        || (si.getLevel0Files().size() >= this.config.getLevel0MinFiles())
        || needsSingleStripeCompaction(si));
}
 
Developer: fengchen8086 | Project: ditb | Lines: 8 | Source: StripeCompactionPolicy.java


Example 9: needEmptyFile

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
private boolean needEmptyFile(CompactionRequestImpl request) {
  // if we are going to compact the last N files, then we need to emit an empty file to retain the
  // maxSeqId if we haven't written out anything.
  OptionalLong maxSeqId = StoreUtils.getMaxSequenceIdInList(request.getFiles());
  OptionalLong storeMaxSeqId = store.getMaxSequenceId();
  return maxSeqId.isPresent() && storeMaxSeqId.isPresent() &&
      maxSeqId.getAsLong() == storeMaxSeqId.getAsLong();
}
 
Developer: apache | Project: hbase | Lines: 9 | Source: DateTieredCompactor.java


Example 10: getNextMajorCompactTime

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param filesToCompact the files being considered for compaction
 * @return when the next major compaction should run
 */
public long getNextMajorCompactTime(Collection<HStoreFile> filesToCompact) {
  /** Default to {@link org.apache.hadoop.hbase.HConstants#DEFAULT_MAJOR_COMPACTION_PERIOD}. */
  long period = comConf.getMajorCompactionPeriod();
  if (period <= 0) {
    return period;
  }

  /**
   * Default to {@link org.apache.hadoop.hbase.HConstants#DEFAULT_MAJOR_COMPACTION_JITTER},
   * that is, +/- 3.5 days (7 days * 0.5).
   */
  double jitterPct = comConf.getMajorCompactionJitter();
  if (jitterPct <= 0) {
    return period;
  }

  // deterministic jitter avoids a major compaction storm on restart
  OptionalInt seed = StoreUtils.getDeterministicRandomSeed(filesToCompact);
  if (seed.isPresent()) {
    // Synchronized to ensure one user of random instance at a time.
    double rnd;
    synchronized (this) {
      this.random.setSeed(seed.getAsInt());
      rnd = this.random.nextDouble();
    }
    long jitter = Math.round(period * jitterPct);
    return period + jitter - Math.round(2L * jitter * rnd);
  } else {
    return 0L;
  }
}
 
Developer: apache | Project: hbase | Lines: 36 | Source: SortedCompactionPolicy.java
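Note the API evolution visible across these examples: the older forks (ditb, HIndex, DominoHBase) get a nullable Integer back from StoreUtils.getDeterministicRandomSeed and operate on StoreFile collections, while current apache/hbase returns OptionalInt and uses HStoreFile throughout.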


Example 11: shouldPerformMajorCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public boolean shouldPerformMajorCompaction(Collection<HStoreFile> filesToCompact)
  throws IOException {
  boolean isAfterSplit = StoreUtils.hasReferences(filesToCompact);
  if(isAfterSplit){
    LOG.info("Split detected, delegate to the parent policy.");
    return super.shouldPerformMajorCompaction(filesToCompact);
  }
  return false;
}
 
Developer: apache | Project: hbase | Lines: 11 | Source: FIFOCompactionPolicy.java


Example 12: needsCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public boolean needsCompaction(Collection<HStoreFile> storeFiles,
    List<HStoreFile> filesCompacting) {
  boolean isAfterSplit = StoreUtils.hasReferences(storeFiles);
  if(isAfterSplit){
    LOG.info("Split detected, delegate to the parent policy.");
    return super.needsCompaction(storeFiles, filesCompacting);
  }
  return hasExpiredStores(storeFiles);
}
 
Developer: apache | Project: hbase | Lines: 11 | Source: FIFOCompactionPolicy.java


Example 13: needsCompactions

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public boolean needsCompactions(StripeInformationProvider si, List<HStoreFile> filesCompacting) {
  // Approximation on whether we need compaction.
  return filesCompacting.isEmpty()
      && (StoreUtils.hasReferences(si.getStorefiles())
        || (si.getLevel0Files().size() >= this.config.getLevel0MinFiles())
        || needsSingleStripeCompaction(si));
}
 
Developer: apache | Project: hbase | Lines: 8 | Source: StripeCompactionPolicy.java


Example 14: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param candidateFiles candidate files, ordered from oldest to newest
 * @return subset copy of candidate list that meets compaction criteria
 * @throws java.io.IOException
 */
public CompactSelection selectCompaction(List<StoreFile> candidateFiles,
    boolean isUserCompaction, boolean forceMajor)
  throws IOException {
  // Preliminary compaction subject to filters
  CompactSelection candidateSelection = new CompactSelection(candidateFiles);
  long cfTtl = this.storeConfig.getStoreFileTtl();
  if (!forceMajor) {
    // If there are expired files, only select them so that compaction deletes them
    if (comConf.shouldDeleteExpired() && (cfTtl != Long.MAX_VALUE)) {
      CompactSelection expiredSelection = selectExpiredStoreFiles(
        candidateSelection, EnvironmentEdgeManager.currentTimeMillis() - cfTtl);
      if (expiredSelection != null) {
        return expiredSelection;
      }
    }
    candidateSelection = skipLargeFiles(candidateSelection);
  }

  // Force a major compaction if this is a user-requested major compaction,
  // or if we do not have too many files to compact and this was requested
  // as a major compaction.
  // Or, if there are any references among the candidates.
  boolean majorCompaction = (
    (forceMajor && isUserCompaction)
    || ((forceMajor || isMajorCompaction(candidateSelection.getFilesToCompact()))
        && (candidateSelection.getFilesToCompact().size() < comConf.getMaxFilesToCompact()))
    || StoreUtils.hasReferences(candidateSelection.getFilesToCompact())
    );

  if (!majorCompaction) {
    // we're doing a minor compaction, let's see what files are applicable
    candidateSelection = filterBulk(candidateSelection);
    candidateSelection = applyCompactionPolicy(candidateSelection);
    candidateSelection = checkMinFilesCriteria(candidateSelection);
  }
  candidateSelection =
      removeExcessFiles(candidateSelection, isUserCompaction, majorCompaction);
  return candidateSelection;
}
 
Developer: daidong | Project: DominoHBase | Lines: 45 | Source: CompactionPolicy.java


Example 15: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param candidateFiles candidate files, ordered from oldest to newest. All files in store.
 * @return subset copy of candidate list that meets compaction criteria
 * @throws java.io.IOException
 */
public CompactionRequest selectCompaction(Collection<StoreFile> candidateFiles,
    final List<StoreFile> filesCompacting, final boolean isUserCompaction,
    final boolean mayUseOffPeak, final boolean forceMajor) throws IOException {
  // Preliminary compaction subject to filters
  ArrayList<StoreFile> candidateSelection = new ArrayList<StoreFile>(candidateFiles);
  // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
  // able to compact more if stuck and compacting, because ratio policy excludes some
  // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
  int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
  boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
      >= storeConfigInfo.getBlockingFileCount();
  candidateSelection = getCurrentEligibleFiles(candidateSelection, filesCompacting);
  LOG.debug("Selecting compaction from " + candidateFiles.size() + " store files, " +
      filesCompacting.size() + " compacting, " + candidateSelection.size() +
      " eligible, " + storeConfigInfo.getBlockingFileCount() + " blocking");

  // If we can't have all files, we cannot do major anyway
  boolean isAllFiles = candidateFiles.size() == candidateSelection.size();
  if (!(forceMajor && isAllFiles)) {
    candidateSelection = skipLargeFiles(candidateSelection, mayUseOffPeak);
    isAllFiles = candidateFiles.size() == candidateSelection.size();
  }

  // Try a major compaction if this is a user-requested major compaction,
  // or if we do not have too many files to compact and this was requested as a major compaction
  boolean isTryingMajor = (forceMajor && isAllFiles && isUserCompaction)
      || (((forceMajor && isAllFiles) || isMajorCompaction(candidateSelection))
        && (candidateSelection.size() < comConf.getMaxFilesToCompact()));
  // Or, if there are any references among the candidates.
  boolean isAfterSplit = StoreUtils.hasReferences(candidateSelection);
  if (!isTryingMajor && !isAfterSplit) {
    // We're not compacting all files; let's see which files are applicable
    candidateSelection = filterBulk(candidateSelection);
    candidateSelection = applyCompactionPolicy(candidateSelection, mayUseOffPeak, mayBeStuck);
    candidateSelection = checkMinFilesCriteria(candidateSelection);
  }
  candidateSelection = removeExcessFiles(candidateSelection, isUserCompaction, isTryingMajor);
  // Now we have the final file list, so we can determine if we can do major/all files.
  isAllFiles = (candidateFiles.size() == candidateSelection.size());
  CompactionRequest result = new CompactionRequest(candidateSelection);
  result.setOffPeak(!candidateSelection.isEmpty() && !isAllFiles && mayUseOffPeak);
  result.setIsMajor(isTryingMajor && isAllFiles, isAllFiles);
  return result;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 50 | Source: RatioBasedCompactionPolicy.java


Example 16: isMajorCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
@Override
public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact)
    throws IOException {
  boolean result = false;
  long mcTime = getNextMajorCompactTime(filesToCompact);
  if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
    return result;
  }
  // TODO: Use better method for determining stamp of last major (HBASE-2990)
  long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
  long now = System.currentTimeMillis();
  if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
    // Major compaction time has elapsed.
    long cfTtl = this.storeConfigInfo.getStoreFileTtl();
    if (filesToCompact.size() == 1) {
      // Single file
      StoreFile sf = filesToCompact.iterator().next();
      Long minTimestamp = sf.getMinimumTimestamp();
      long oldest = (minTimestamp == null)
          ? Long.MIN_VALUE
          : now - minTimestamp.longValue();
      if (sf.isMajorCompaction() &&
          (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
        float blockLocalityIndex = sf.getHDFSBlockDistribution().getBlockLocalityIndex(
            RSRpcServices.getHostname(comConf.conf, false)
        );
        if (blockLocalityIndex < comConf.getMinLocalityToForceCompact()) {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Major compaction triggered on only store " + this +
                "; to make hdfs blocks local, current blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
          result = true;
        } else {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping major compaction of " + this +
                " because one (major) compacted file only, oldestTime " +
                oldest + "ms is < ttl=" + cfTtl + " and blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
        }
      } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
        LOG.debug("Major compaction triggered on store " + this +
          ", because keyvalues outdated; time since last major compaction " +
          (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Major compaction triggered on store " + this +
            "; time since last major compaction " + (now - lowTimestamp) + "ms");
      }
      result = true;
    }
  }
  return result;
}
 
Developer: fengchen8086 | Project: ditb | Lines: 60 | Source: RatioBasedCompactionPolicy.java


Example 17: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param candidateFiles candidate files, ordered from oldest to newest. All files in store.
 * @return subset copy of candidate list that meets compaction criteria
 * @throws java.io.IOException
 */
public CompactionRequest selectCompaction(Collection<StoreFile> candidateFiles,
    final List<StoreFile> filesCompacting, final boolean isUserCompaction,
    final boolean mayUseOffPeak, final boolean forceMajor) throws IOException {
  // Preliminary compaction subject to filters
  ArrayList<StoreFile> candidateSelection = new ArrayList<StoreFile>(candidateFiles);
  // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
  // able to compact more if stuck and compacting, because ratio policy excludes some
  // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
  int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
  boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
      >= storeConfigInfo.getBlockingFileCount();
  candidateSelection = getCurrentEligibleFiles(candidateSelection, filesCompacting);
  LOG.debug("Selecting compaction from " + candidateFiles.size() + " store files, " +
      filesCompacting.size() + " compacting, " + candidateSelection.size() +
      " eligible, " + storeConfigInfo.getBlockingFileCount() + " blocking");

  // If we can't have all files, we cannot do major anyway
  boolean isAllFiles = candidateFiles.size() == candidateSelection.size();
  if (!(forceMajor && isAllFiles)) {
    candidateSelection = skipLargeFiles(candidateSelection);
    isAllFiles = candidateFiles.size() == candidateSelection.size();
  }

  // Try a major compaction if this is a user-requested major compaction,
  // or if we do not have too many files to compact and this was requested as a major compaction
  boolean isTryingMajor = (forceMajor && isAllFiles && isUserCompaction)
      || (((forceMajor && isAllFiles) || isMajorCompaction(candidateSelection))
        && (candidateSelection.size() < comConf.getMaxFilesToCompact()));
  // Or, if there are any references among the candidates.
  boolean isAfterSplit = StoreUtils.hasReferences(candidateSelection);
  if (!isTryingMajor && !isAfterSplit) {
    // We're not compacting all files; let's see which files are applicable
    candidateSelection = filterBulk(candidateSelection);
    candidateSelection = applyCompactionPolicy(candidateSelection, mayUseOffPeak, mayBeStuck);
    candidateSelection = checkMinFilesCriteria(candidateSelection);
  }
  candidateSelection = removeExcessFiles(candidateSelection, isUserCompaction, isTryingMajor);
  // Now we have the final file list, so we can determine if we can do major/all files.
  isAllFiles = (candidateFiles.size() == candidateSelection.size());
  CompactionRequest result = new CompactionRequest(candidateSelection);
  result.setOffPeak(!candidateSelection.isEmpty() && !isAllFiles && mayUseOffPeak);
  result.setIsMajor(isTryingMajor && isAllFiles, isAllFiles);
  return result;
}
 
Developer: grokcoder | Project: pbase | Lines: 50 | Source: RatioBasedCompactionPolicy.java


Example 18: isMajorCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact)
    throws IOException {
  boolean result = false;
  long mcTime = getNextMajorCompactTime(filesToCompact);
  if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
    return result;
  }
  // TODO: Use better method for determining stamp of last major (HBASE-2990)
  long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
  long now = System.currentTimeMillis();
  if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
    // Major compaction time has elapsed.
    long cfTtl = this.storeConfigInfo.getStoreFileTtl();
    if (filesToCompact.size() == 1) {
      // Single file
      StoreFile sf = filesToCompact.iterator().next();
      Long minTimestamp = sf.getMinimumTimestamp();
      long oldest = (minTimestamp == null)
          ? Long.MIN_VALUE
          : now - minTimestamp.longValue();
      if (sf.isMajorCompaction() &&
          (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
        float blockLocalityIndex = sf.getHDFSBlockDistribution().getBlockLocalityIndex(
            RSRpcServices.getHostname(comConf.conf)
        );
        if (blockLocalityIndex < comConf.getMinLocalityToForceCompact()) {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Major compaction triggered on only store " + this +
                "; to make hdfs blocks local, current blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
          result = true;
        } else {
          if (LOG.isDebugEnabled()) {
            LOG.debug("Skipping major compaction of " + this +
                " because one (major) compacted file only, oldestTime " +
                oldest + "ms is < ttl=" + cfTtl + " and blockLocalityIndex is " +
                blockLocalityIndex + " (min " + comConf.getMinLocalityToForceCompact() +
                ")");
          }
        }
      } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
        LOG.debug("Major compaction triggered on store " + this +
          ", because keyvalues outdated; time since last major compaction " +
          (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Major compaction triggered on store " + this +
            "; time since last major compaction " + (now - lowTimestamp) + "ms");
      }
      result = true;
    }
  }
  return result;
}
 
Developer: grokcoder | Project: pbase | Lines: 59 | Source: RatioBasedCompactionPolicy.java


Example 19: selectCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
/**
 * @param candidateFiles candidate files, ordered from oldest to newest
 * @return subset copy of candidate list that meets compaction criteria
 * @throws java.io.IOException
 */
public CompactionRequest selectCompaction(Collection<StoreFile> candidateFiles,
    final List<StoreFile> filesCompacting, final boolean isUserCompaction,
    final boolean mayUseOffPeak, final boolean forceMajor) throws IOException {
  // Preliminary compaction subject to filters
  ArrayList<StoreFile> candidateSelection = new ArrayList<StoreFile>(candidateFiles);
  // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
  // able to compact more if stuck and compacting, because ratio policy excludes some
  // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
  int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
  boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
      >= storeConfigInfo.getBlockingFileCount();
  candidateSelection = getCurrentEligibleFiles(candidateSelection, filesCompacting);
  LOG.debug("Selecting compaction from " + candidateFiles.size() + " store files, " +
      filesCompacting.size() + " compacting, " + candidateSelection.size() +
      " eligible, " + storeConfigInfo.getBlockingFileCount() + " blocking");

  long cfTtl = this.storeConfigInfo.getStoreFileTtl();
  if (!forceMajor) {
    // If there are expired files, only select them so that compaction deletes them
    if (comConf.shouldDeleteExpired() && (cfTtl != Long.MAX_VALUE)) {
      ArrayList<StoreFile> expiredSelection = selectExpiredStoreFiles(
          candidateSelection, EnvironmentEdgeManager.currentTimeMillis() - cfTtl);
      if (expiredSelection != null) {
        return new CompactionRequest(expiredSelection);
      }
    }
    candidateSelection = skipLargeFiles(candidateSelection);
  }

  // Force a major compaction if this is a user-requested major compaction,
  // or if we do not have too many files to compact and this was requested
  // as a major compaction.
  // Or, if there are any references among the candidates.
  boolean majorCompaction = (
    (forceMajor && isUserCompaction)
    || ((forceMajor || isMajorCompaction(candidateSelection))
        && (candidateSelection.size() < comConf.getMaxFilesToCompact()))
    || StoreUtils.hasReferences(candidateSelection)
    );

  if (!majorCompaction) {
    // we're doing a minor compaction, let's see what files are applicable
    candidateSelection = filterBulk(candidateSelection);
    candidateSelection = applyCompactionPolicy(candidateSelection, mayUseOffPeak, mayBeStuck);
    candidateSelection = checkMinFilesCriteria(candidateSelection);
  }
  candidateSelection = removeExcessFiles(candidateSelection, isUserCompaction, majorCompaction);
  CompactionRequest result = new CompactionRequest(candidateSelection);
  result.setOffPeak(!candidateSelection.isEmpty() && !majorCompaction && mayUseOffPeak);
  return result;
}
 
Developer: tenggyut | Project: HIndex | Lines: 57 | Source: RatioBasedCompactionPolicy.java


Example 20: isMajorCompaction

import org.apache.hadoop.hbase.regionserver.StoreUtils; // import the required package/class
public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact)
    throws IOException {
  boolean result = false;
  long mcTime = getNextMajorCompactTime(filesToCompact);
  if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
    return result;
  }
  // TODO: Use better method for determining stamp of last major (HBASE-2990)
  long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
  long now = System.currentTimeMillis();
  if (lowTimestamp > 0L && lowTimestamp < (now - mcTime)) {
    // Major compaction time has elapsed.
    long cfTtl = this.storeConfigInfo.getStoreFileTtl();
    if (filesToCompact.size() == 1) {
      // Single file
      StoreFile sf = filesToCompact.iterator().next();
      Long minTimestamp = sf.getMinimumTimestamp();
      long oldest = (minTimestamp == null)
          ? Long.MIN_VALUE
          : now - minTimestamp.longValue();
      if (sf.isMajorCompaction() &&
          (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Skipping major compaction of " + this +
              " because one (major) compacted file only and oldestTime " +
              oldest + "ms is < ttl=" + cfTtl);
        }
      } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
        LOG.debug("Major compaction triggered on store " + this +
          ", because keyvalues outdated; time since last major compaction " +
          (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Major compaction triggered on store " + this +
            "; time since last major compaction " + (now - lowTimestamp) + "ms");
      }
      result = true;
    }
  }
  return result;
}
 
Developer: tenggyut | Project: HIndex | Lines: 44 | Source: RatioBasedCompactionPolicy.java



Note: The org.apache.hadoop.hbase.regionserver.StoreUtils class examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use should follow each project's license. Do not republish without permission.

