Java HdfsConstants Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.server.common.HdfsConstants. If you have been wondering what HdfsConstants is, what it is for, or how to use it, the curated class examples below should help.



The HdfsConstants class belongs to the org.apache.hadoop.hdfs.server.common package. The twenty code examples below demonstrate its typical uses, sorted by popularity by default.
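Before the examples, here is a minimal sketch of the two HdfsConstants members used most often below: the READ_TIMEOUT socket timeout for DataNode connections and the INVALID_TXID sentinel for edit-log transaction ids. The host name and port are hypothetical placeholders, not values taken from the examples.

import java.net.InetSocketAddress;
import java.net.Socket;

import org.apache.hadoop.hdfs.server.common.HdfsConstants;

public class HdfsConstantsSketch {
  public static void main(String[] args) throws Exception {
    // READ_TIMEOUT bounds both the connect and the read on a DataNode socket.
    Socket s = new Socket();
    s.connect(new InetSocketAddress("datanode.example.com", 50010), // hypothetical address
        HdfsConstants.READ_TIMEOUT);
    s.setSoTimeout(HdfsConstants.READ_TIMEOUT);
    s.close();

    // INVALID_TXID marks an edit-log segment with no valid last transaction.
    long lastTxId = HdfsConstants.INVALID_TXID;
    if (lastTxId == HdfsConstants.INVALID_TXID) {
      System.out.println("segment has no transactions yet");
    }
  }
}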

Example 1: replaceBlock

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination, int namespaceId) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
  out.writeInt(namespaceId);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  short status = reply.readShort();
  if(status == DataTransferProtocol.OP_STATUS_SUCCESS) {
    return true;
  }
  return false;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 26, Source: TestBlockReplacement.java


Example 2: validateEditLog

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
public static FSEditLogLoader.EditLogValidation validateEditLog(
    LedgerHandleProvider ledgerProvider,
    EditLogLedgerMetadata ledgerMetadata) throws IOException {
  BookKeeperEditLogInputStream in;
  try {
    in = new BookKeeperEditLogInputStream(ledgerProvider,
        ledgerMetadata.getLedgerId(), 0, ledgerMetadata.getFirstTxId(),
        ledgerMetadata.getLastTxId(), ledgerMetadata.getLastTxId() == -1);
  } catch (LedgerHeaderCorruptException e) {
    LOG.warn("Log at ledger id" + ledgerMetadata.getLedgerId() +
        " has no valid header", e);
    return new FSEditLogLoader.EditLogValidation(0,
        HdfsConstants.INVALID_TXID, HdfsConstants.INVALID_TXID, true);
  }

  try {
    return FSEditLogLoader.validateEditLog(in);
  } finally {
    IOUtils.closeStream(in);
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 22, Source: BookKeeperEditLogInputStream.java


Example 3: validateAndGetEndTxId

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
long validateAndGetEndTxId(EditLogLedgerMetadata ledger, boolean fence)
    throws IOException {
  FSEditLogLoader.EditLogValidation val;
  if (!fence) {
    val = BookKeeperEditLogInputStream.validateEditLog(this, ledger);
  } else {
    val = BookKeeperEditLogInputStream.validateEditLog(
        new FencingLedgerHandleProvider(), ledger);
  }
  InjectionHandler.processEvent(InjectionEvent.BKJM_VALIDATELOGSEGMENT,
      val);
  if (val.getNumTransactions() == 0) {
    return HdfsConstants.INVALID_TXID; // Ledger is corrupt
  }
  return val.getEndTxId();
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 17, Source: BookKeeperJournalManager.java


Example 4: getJournalInputStreamDontCheckLastTxId

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
static EditLogInputStream getJournalInputStreamDontCheckLastTxId(
    JournalManager jm, long txId) throws IOException {
  List<EditLogInputStream> streams = new ArrayList<EditLogInputStream>();
  jm.selectInputStreams(streams, txId, true, false);
  if (streams.size() < 1) {
    throw new IOException("Cannot obtain stream for txid: " + txId);
  }
  Collections.sort(streams, JournalSet.EDIT_LOG_INPUT_STREAM_COMPARATOR);

  if (txId == HdfsConstants.INVALID_TXID) {
    return streams.get(0);
  }

  for (EditLogInputStream elis : streams) {
    if (elis.getFirstTxId() == txId) {
      return elis;
    }
  }
  throw new IOException("Cannot obtain stream for txid: " + txId);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 21, Source: TestBookKeeperJournalManager.java


Example 5: createCompression

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Create a compression instance based on the user's configuration in the given
 * Configuration object.
 * @throws IOException if the specified codec is not available.
 */
static FSImageCompression createCompression(Configuration conf, boolean forceUncompressed)
  throws IOException {
  boolean compressImage = (!forceUncompressed) && conf.getBoolean(
    HdfsConstants.DFS_IMAGE_COMPRESS_KEY,
    HdfsConstants.DFS_IMAGE_COMPRESS_DEFAULT);

  if (!compressImage) {
    return createNoopCompression();
  }

  String codecClassName = conf.get(
    HdfsConstants.DFS_IMAGE_COMPRESSION_CODEC_KEY,
    HdfsConstants.DFS_IMAGE_COMPRESSION_CODEC_DEFAULT);
  return createCompression(conf, codecClassName);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 21, Source: FSImageCompression.java
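As a hedged sketch of how the configuration keys read by createCompression above might be set, assuming the call is made from code in the same package (the method has default access in this example) and that the standard Hadoop DefaultCodec is on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.common.HdfsConstants;

Configuration conf = new Configuration();
// Turn fsimage compression on and choose a codec; both keys appear in the example above.
conf.setBoolean(HdfsConstants.DFS_IMAGE_COMPRESS_KEY, true);
conf.set(HdfsConstants.DFS_IMAGE_COMPRESSION_CODEC_KEY,
    "org.apache.hadoop.io.compress.DefaultCodec"); // assumed codec; any installed CompressionCodec should work
FSImageCompression compression = FSImageCompression.createCompression(conf, false);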


Example 6: getInputStream

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Get input stream from the given journal starting at txid.
 * Does not perform validation of the streams.
 * 
 * This should only be used for tailing inprogress streams!!!
 */
public static EditLogInputStream getInputStream(JournalManager jm, long txid)
    throws IOException {
  List<EditLogInputStream> streams = new ArrayList<EditLogInputStream>();
  jm.selectInputStreams(streams, txid, true, false);
  if (streams.size() < 1) {
    throw new IOException("Cannot obtain stream for txid: " + txid);
  }
  Collections.sort(streams, JournalSet.EDIT_LOG_INPUT_STREAM_COMPARATOR);
  
  // we want the "oldest" available stream
  if (txid == HdfsConstants.INVALID_TXID) {
    return streams.get(0);
  }
  
  // we want a specific stream
  for (EditLogInputStream elis : streams) {
    if (elis.getFirstTxId() == txid) {
      return elis;
    }
  }
  // we cannot obtain the stream
  throw new IOException("Cannot obtain stream for txid: " + txid);
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 30, Source: JournalSet.java


Example 7: validateEditLog

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
static FSEditLogLoader.EditLogValidation validateEditLog(File file) throws IOException {
  EditLogFileInputStream in;
  try {
    in = new EditLogFileInputStream(file);
    in.getVersion();
  } catch (LogHeaderCorruptException corrupt) {
    // If it's missing its header, this is equivalent to no transactions
    FSImage.LOG.warn("Log at " + file + " has no valid header",
        corrupt);
    return new FSEditLogLoader.EditLogValidation(0, HdfsConstants.INVALID_TXID, 
                                                 HdfsConstants.INVALID_TXID, true);
  }
  
  try {
    return FSEditLogLoader.validateEditLog(in);
  } finally {
    IOUtils.closeStream(in);
  }
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 20, Source: EditLogFileInputStream.java


Example 8: EditLogFile

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
EditLogFile(File file, long firstTxId, 
            long lastTxId, boolean isInProgress) {
  boolean checkTxIds = true;
  checkTxIds &= ((lastTxId == HdfsConstants.INVALID_TXID && isInProgress)
    || (lastTxId != HdfsConstants.INVALID_TXID && lastTxId >= firstTxId));
  checkTxIds &= ((firstTxId > -1) || (firstTxId == HdfsConstants.INVALID_TXID));
  if (!checkTxIds)
    throw new IllegalArgumentException("Illegal transaction ids: "
        + firstTxId + ", " + lastTxId + " in progress: " + isInProgress);
  if(file == null)
    throw new IllegalArgumentException("File can not be NULL");
  
  this.firstTxId = firstTxId;
  this.lastTxId = lastTxId;
  this.file = file;
  this.isInProgress = isInProgress;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 18, Source: FileJournalManager.java


Example 9: scanStorageForLatestEdits

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Scan the local storage directory, and return the segment containing
 * the highest transaction.
 * @return the EditLogFile with the highest transactions, or null
 * if no files exist.
 */
private synchronized EditLogFile scanStorageForLatestEdits() throws IOException {
  if (!fjm.getStorageDirectory().getCurrentDir().exists()) {
    return null;
  }
  
  LOG.info("Scanning storage " + fjm);
  List<EditLogFile> files = fjm.getLogFiles(0);
  
  while (!files.isEmpty()) {
    EditLogFile latestLog = files.remove(files.size() - 1);
    latestLog.validateLog();
    LOG.info("Latest log is " + latestLog);
    if (latestLog.getLastTxId() == HdfsConstants.INVALID_TXID) {
      // the log contains no transactions
      LOG.warn("Latest log " + latestLog + " has no transactions. " +
          "moving it aside and looking for previous log");
      latestLog.moveAsideEmptyFile();
    } else {
      return latestLog;
    }
  }
  
  LOG.info("No files in " + fjm);
  return null;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 32, Source: Journal.java


Example 10: getSegmentInfo

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * @return the current state of the given segment, or null if the
 * segment does not exist.
 */
private SegmentStateProto getSegmentInfo(long segmentTxId)
    throws IOException {
  EditLogFile elf = fjm.getLogFile(segmentTxId);
  if (elf == null) {
    return null;
  }
  if (elf.isInProgress()) {
    elf.validateLog();
  }
  if (elf.getLastTxId() == HdfsConstants.INVALID_TXID) {
    LOG.info("Edit log file " + elf + " appears to be empty. " +
        "Moving it aside...");
    elf.moveAsideEmptyFile();
    return null;
  }
  SegmentStateProto ret = new SegmentStateProto(segmentTxId, elf.getLastTxId(), elf.isInProgress());
  LOG.info("getSegmentInfo(" + segmentTxId + "): " + elf + " -> " + ret);
  return ret;
}
 
Developer: rhli, Project: hadoop-EAR, Lines: 24, Source: Journal.java


Example 11: accessBlock

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * try to access a block on a data node. If fails - throws exception
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  DFSClient.BlockReader blockReader = null; 
  Block block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getName());
    
  s = new Socket();
  s.connect(targetAddr, HdfsConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsConstants.READ_TIMEOUT);

  blockReader = 
    DFSClient.BlockReader.newBlockReader(s, targetAddr.toString() + ":" + 
        block.getBlockId(), block.getBlockId(), lblock.getBlockToken(),
        block.getGenerationStamp(), 0, -1, 4096);

  // nothing to do here - if access fails, the call above throws an exception
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 27, Source: TestDataNodeVolumeFailure.java


Example 12: replaceBlock

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  BlockTokenSecretManager.DUMMY_TOKEN.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  short status = reply.readShort();
  if(status == DataTransferProtocol.OP_STATUS_SUCCESS) {
    return true;
  }
  return false;
}
 
Developer: Seagate, Project: hadoop-on-lustre, Lines: 26, Source: TestBlockReplacement.java


Example 13: testInternalReleaseLease_UNKNOWN_COMM

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Mocks FSNamesystem instance, adds an empty file, sets status of last two
 * blocks to non-defined and UNDER_CONSTRUCTION and invokes lease recovery
 * method. IOException is expected for releasing a create lock on a 
 * closed file. 
 * @throws IOException as the result
 */
@Test(expected=IOException.class)
public void testInternalReleaseLease_UNKNOWN_COMM () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));    
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));
  
  mockFileBlocks(2, null, 
    HdfsConstants.BlockUCState.UNDER_CONSTRUCTION, file, dnd, ps, false);
  
  fsn.internalReleaseLease(lm, file.toString(), null);
  assertTrue("FSNamesystem.internalReleaseLease suppose to throw " +
    "IOException here", false);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 27, Source: TestNNLeaseRecovery.java


Example 14: testInternalReleaseLease_COMM_COMM

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Mocks FSNamesystem instance, adds an empty file, sets status of last two
 * blocks to COMMITTED and COMMITTED and invokes lease recovery
 * method. AlreadyBeingCreatedException is expected.
 * @throws AlreadyBeingCreatedException as the result
 */
@Test(expected=AlreadyBeingCreatedException.class)
public void testInternalReleaseLease_COMM_COMM () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));

  mockFileBlocks(2, HdfsConstants.BlockUCState.COMMITTED, 
    HdfsConstants.BlockUCState.COMMITTED, file, dnd, ps, false);

  fsn.internalReleaseLease(lm, file.toString(), null);
  assertTrue("FSNamesystem.internalReleaseLease suppose to throw " +
    "AlreadyBeingCreatedException here", false);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 26, Source: TestNNLeaseRecovery.java


Example 15: testInternalReleaseLease_1blocks

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Mocks FSNamesystem instance, adds an empty file with 1 block
 * and invokes lease recovery method. 
 * AlreadyBeingCreatedException is expected.
 * @throws AlreadyBeingCreatedException as the result
 */
@Test(expected=AlreadyBeingCreatedException.class)
public void testInternalReleaseLease_1blocks () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));

  mockFileBlocks(1, null, HdfsConstants.BlockUCState.COMMITTED, file, dnd, ps, false);

  fsn.internalReleaseLease(lm, file.toString(), null);
  assertTrue("FSNamesystem.internalReleaseLease suppose to throw " +
    "AlreadyBeingCreatedException here", false);
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 25, Source: TestNNLeaseRecovery.java


Example 16: testInternalReleaseLease_COMM_CONSTRUCTION

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Mocks FSNamesystem instance, adds an empty file, sets status of last two
 * blocks to COMMITTED and UNDER_CONSTRUCTION and invokes lease recovery
 * method. <code>false</code> is expected as the result
 * @throws IOException in case of an error
 */
@Test
public void testInternalReleaseLease_COMM_CONSTRUCTION () throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  LeaseManager.Lease lm = mock(LeaseManager.Lease.class);
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));
  
  mockFileBlocks(2, HdfsConstants.BlockUCState.COMMITTED, 
    HdfsConstants.BlockUCState.UNDER_CONSTRUCTION, file, dnd, ps, false);
      
  assertFalse("False is expected in return in this case",
    fsn.internalReleaseLease(lm, file.toString(), null));
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 25, Source: TestNNLeaseRecovery.java


Example 17: testCommitBlockSynchronization_BlockNotFound

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
@Test
public void testCommitBlockSynchronization_BlockNotFound () 
  throws IOException {
  if(LOG.isDebugEnabled()) {
    LOG.debug("Running " + GenericTestUtils.getMethodName());
  }
  long recoveryId = 2002;
  long newSize = 273487234;
  Path file = 
    spy(new Path("/" + GenericTestUtils.getMethodName() + "_test.dat"));
  DatanodeDescriptor dnd = mock(DatanodeDescriptor.class);
  PermissionStatus ps =
    new PermissionStatus("test", "test", new FsPermission((short)0777));
  
  mockFileBlocks(2, HdfsConstants.BlockUCState.COMMITTED, 
    HdfsConstants.BlockUCState.UNDER_CONSTRUCTION, file, dnd, ps, false);
  
  BlockInfo lastBlock = fsn.dir.getFileINode(anyString()).getLastBlock(); 
  try {
    fsn.commitBlockSynchronization(lastBlock,
      recoveryId, newSize, true, false, new DatanodeID[1]);
  } catch (IOException ioe) {
    assertTrue(ioe.getMessage().startsWith("Block (="));
  }
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 26, Source: TestNNLeaseRecovery.java


Example 18: accessBlock

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * try to access a block on a data node. If fails - throws exception
 * @param datanode
 * @param lblock
 * @throws IOException
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  Socket s = null;
  BlockReader blockReader = null; 
  Block block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getName());
    
  s = new Socket();
  s.connect(targetAddr, HdfsConstants.READ_TIMEOUT);
  s.setSoTimeout(HdfsConstants.READ_TIMEOUT);

  String file = BlockReader.getFileName(targetAddr, block.getBlockId());
  blockReader = 
    BlockReader.newBlockReader(s, file, block, lblock
      .getBlockToken(), 0, -1, 4096);

  // nothing to do here - if access fails, the call above throws an exception
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 27, Source: TestDataNodeVolumeFailure.java


Example 19: replaceBlock

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  REPLACE_BLOCK.write(out);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  BlockTokenSecretManager.DUMMY_TOKEN.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  return DataTransferProtocol.Status.read(reply) == SUCCESS;
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 22, Source: TestBlockReplacement.java


Example 20: getBlockReader

import org.apache.hadoop.hdfs.server.common.HdfsConstants; // import the required package/class
/**
 * Get a BlockReader for the given block.
 */
public BlockReader getBlockReader(LocatedBlock testBlock, int offset, int lenToRead)
    throws IOException {
  InetSocketAddress targetAddr = null;
  Socket sock = null;
  Block block = testBlock.getBlock();
  DatanodeInfo[] nodes = testBlock.getLocations();
  targetAddr = NetUtils.createSocketAddr(nodes[0].getName());
  sock = new Socket();
  sock.connect(targetAddr, HdfsConstants.READ_TIMEOUT);
  sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);

  return BlockReader.newBlockReader(
    sock, targetAddr.toString()+ ":" + block.getBlockId(), block,
    testBlock.getBlockToken(), 
    offset, lenToRead,
    conf.getInt("io.file.buffer.size", 4096));
}
 
Developer: cumulusyebl, Project: cumulus, Lines: 21, Source: BlockReaderTestUtil.java



Note: the org.apache.hadoop.hdfs.server.common.HdfsConstants class examples in this article were compiled from source-code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Consult each project's license before using or redistributing the code, and do not reproduce this article without permission.

