Java DataTransferProtocol Class Code Examples


This article collects and summarizes typical usage examples of the Java class org.apache.hadoop.hdfs.protocol.DataTransferProtocol. If you have been wondering what DataTransferProtocol is for, or how to use it in practice, the curated examples below should help.



The DataTransferProtocol class belongs to the org.apache.hadoop.hdfs.protocol package. Twenty code examples are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
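
Before diving into the examples, it may help to see the wire framing they all share: the legacy (pre-protobuf) DataTransferProtocol writes a 2-byte protocol version, then a 1-byte opcode, then the operation-specific fields. Below is a minimal, self-contained sketch of that framing; the DataTransferFramingSketch class, its hard-coded constant values, and the frameReplaceBlock helper are illustrative stand-ins for this article, not part of the real Hadoop API.

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class DataTransferFramingSketch {
  // Illustrative stand-ins for the real DataTransferProtocol constants;
  // actual values vary by Hadoop release, so treat these as assumptions.
  static final int DATA_TRANSFER_VERSION = 19;
  static final byte OP_REPLACE_BLOCK = (byte) 83;

  // Frame a minimal replace-block request: version, opcode, then payload.
  static byte[] frameReplaceBlock(long blockId, long generationStamp)
      throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(buf);
    out.writeShort(DATA_TRANSFER_VERSION); // 2-byte protocol version first
    out.writeByte(OP_REPLACE_BLOCK);       // 1-byte operation code
    out.writeLong(blockId);                // op-specific fields follow
    out.writeLong(generationStamp);
    out.flush();
    return buf.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] frame = frameReplaceBlock(42L, 7L);
    System.out.println("framed " + frame.length + " bytes"); // 2 + 1 + 8 + 8 = 19
  }
}

The real operations append more fields after the opcode (namespace id, storage id, access tokens, and so on), as Examples 1, 5, 7, and 8 show.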

Example 1: replaceBlock

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination, int namespaceId) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
  out.writeInt(namespaceId);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  short status = reply.readShort();
  if(status == DataTransferProtocol.OP_STATUS_SUCCESS) {
    return true;
  }
  return false;
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 26 | Source: TestBlockReplacement.java


Example 2: closeBlockReader

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/**
 * Close the given BlockReader and cache its socket.
 */
private void closeBlockReader(BlockReader reader, boolean reuseConnection) 
    throws IOException {
  if (reader.hasSentStatusCode()) {
    Socket oldSock = reader.takeSocket();
    if (dfsClient.getDataTransferProtocolVersion() < 
        DataTransferProtocol.READ_REUSE_CONNECTION_VERSION ||
        !reuseConnection) {
        // close the sock for old datanode.
      if (oldSock != null) {
        IOUtils.closeSocket(oldSock);
      }
    } else {
      socketCache.put(oldSock);
    }
  }
  reader.close();
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 21 | Source: DFSInputStream.java


Example 3: readBlockSizeInfo

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/**
 * Read the block length information from data stream
 * 
 * @throws IOException
 */
private synchronized void readBlockSizeInfo() throws IOException {
  if (!transferBlockSize) {
    return;
  }
  blkLenInfoUpdated = true;
  isBlockFinalized = in.readBoolean();
  updatedBlockLength = in.readLong();
  if (dataTransferVersion >= DataTransferProtocol.READ_PROFILING_VERSION) {
    readDataNodeProfilingData();
  }
  
  if (LOG.isDebugEnabled()) {
    LOG.debug("ifBlockComplete? " + isBlockFinalized + " block size: "
        + updatedBlockLength);
  }      
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 22 | Source: BlockReader.java


Example 4: createLocatedBlocks

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
LocatedBlocks createLocatedBlocks(List<LocatedBlock> blocks,
    BlockMetaInfoType type,int namespaceid, int methodsFingerprint) {
  switch (type) {
  case VERSION_AND_NAMESPACEID:
    return new LocatedBlocksWithMetaInfo(
        computeContentSummary().getLength(), blocks,
        isUnderConstruction(), DataTransferProtocol.DATA_TRANSFER_VERSION,
        namespaceid, methodsFingerprint);
  case VERSION:
    return new VersionedLocatedBlocks(computeContentSummary().getLength(), blocks,
      isUnderConstruction(), DataTransferProtocol.DATA_TRANSFER_VERSION);
  default:
    return new LocatedBlocks(computeContentSummary().getLength(), blocks,
      isUnderConstruction());
  }
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 17 | Source: INode.java


Example 5: replaceBlock

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  BlockTokenSecretManager.DUMMY_TOKEN.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  short status = reply.readShort();
  if(status == DataTransferProtocol.OP_STATUS_SUCCESS) {
    return true;
  }
  return false;
}
 
Developer: Seagate | Project: hadoop-on-lustre | Lines: 26 | Source: TestBlockReplacement.java


Example 6: testWrite

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private void testWrite(Block block, BlockConstructionStage stage, long newGS,
    String description, Boolean eofExcepted) throws IOException {
  sendBuf.reset();
  recvBuf.reset();
  DataTransferProtocol.Sender.opWriteBlock(sendOut, block, 0, stage, newGS,
      block.getNumBytes(), block.getNumBytes(), "cl", null,
      new DatanodeInfo[1], BlockTokenSecretManager.DUMMY_TOKEN);
  if (eofExcepted) {
    ERROR.write(recvOut);
    sendRecvData(description, true);
  } else if (stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) {
    //ok finally write a block with 0 len
    SUCCESS.write(recvOut);
    Text.writeString(recvOut, ""); // first bad node
    sendRecvData(description, false);
  } else {
    writeZeroLengthPacket(block, description);
  }
}
 
Developer: cumulusyebl | Project: cumulus | Lines: 20 | Source: TestDataTransferProtocol.java


Example 7: replaceBlock

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  REPLACE_BLOCK.write(out);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  BlockTokenSecretManager.DUMMY_TOKEN.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  return DataTransferProtocol.Status.read(reply) == SUCCESS;
}
 
Developer: cumulusyebl | Project: cumulus | Lines: 22 | Source: TestBlockReplacement.java


Example 8: replaceBlock

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private boolean replaceBlock( Block block, DatanodeInfo source,
    DatanodeInfo sourceProxy, DatanodeInfo destination) throws IOException {
  Socket sock = new Socket();
  sock.connect(NetUtils.createSocketAddr(
      destination.getName()), HdfsConstants.READ_TIMEOUT);
  sock.setKeepAlive(true);
  // sendRequest
  DataOutputStream out = new DataOutputStream(sock.getOutputStream());
  out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
  out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
  out.writeLong(block.getBlockId());
  out.writeLong(block.getGenerationStamp());
  Text.writeString(out, source.getStorageID());
  sourceProxy.write(out);
  out.flush();
  // receiveResponse
  DataInputStream reply = new DataInputStream(sock.getInputStream());

  short status = reply.readShort();
  if(status == DataTransferProtocol.OP_STATUS_SUCCESS) {
    return true;
  }
  return false;
}
 
Developer: thisisvoa | Project: hadoop-0.20 | Lines: 25 | Source: TestBlockReplacement.java


Example 9: register

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
void register() throws IOException {
  // get versions from the namenode
  nsInfo = nameNode.versionRequest();
  dnRegistration.setStorageInfo(new DataStorage(nsInfo, "", null), "");
  String storageId = DataNode.createNewStorageId(dnRegistration.getPort());
  dnRegistration.setStorageID(storageId);
  // register datanode
  dnRegistration = nameNode.register(dnRegistration,
      DataTransferProtocol.DATA_TRANSFER_VERSION);
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 11 | Source: NNThroughputBenchmark.java


Example 10: sendRequest

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/**
 * Send a block replace request to the output stream
 */
private void sendRequest(DataOutputStream out) throws IOException {
  ReplaceBlockHeader header = new ReplaceBlockHeader(new VersionAndOpcode(
      dataTransferProtocolVersion, DataTransferProtocol.OP_REPLACE_BLOCK));
  header.set(namespaceId, block.getBlock().getBlockId(), block.getBlock()
      .getGenerationStamp(), source.getStorageID(), proxySource);
  header.writeVersionAndOpCode(out);
  header.write(out);
  out.flush();
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 13 | Source: BlockMover.java


Example 11: receiveResponse

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/**
 * Receive a block copy response from the input stream
 */
private void receiveResponse(DataInputStream in) throws IOException {
  short status = in.readShort();
  if (status != DataTransferProtocol.OP_STATUS_SUCCESS) {
    throw new IOException("block move failed");
  }
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 10 | Source: BlockMover.java


Example 12: register

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/** 
 * Register standby with this primary
 */
@Override
public int register() throws IOException {
  enforceActive("Standby can only register with active namenode");
  verifyCheckpointerAddress();
  return DataTransferProtocol.DATA_TRANSFER_VERSION;
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 10 | Source: AvatarNode.java


Example 13: updateDataTransferProtocolVersionIfNeeded

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
void updateDataTransferProtocolVersionIfNeeded(int remoteDataTransferVersion) {
  int newDataTransferVersion = 0;
  if (remoteDataTransferVersion < DataTransferProtocol.DATA_TRANSFER_VERSION) {
    // client is newer than server
    newDataTransferVersion = remoteDataTransferVersion;
  } else {
    // client is older or the same as server
    newDataTransferVersion = DataTransferProtocol.DATA_TRANSFER_VERSION;
  }
  synchronized (dataTransferVersion) {
    if (dataTransferVersion != newDataTransferVersion) {
      dataTransferVersion = newDataTransferVersion;
    }
  }    
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 16 | Source: DFSClient.java


Example 14: getOutPacketVersion

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
int getOutPacketVersion() throws IOException {
  if (ifPacketIncludeVersion()) {
    return this.preferredPacketVersion;
  } else {
    // If the server side runs on an older version that doesn't support
    // packet versions, fall back to the older format, in which the
    // checksum comes first.
    return DataTransferProtocol.PACKET_VERSION_CHECKSUM_FIRST;
  }
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 12 | Source: DFSClient.java


Example 15: getHeartbeatPacket

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
static DFSOutputStreamPacket getHeartbeatPacket(
    DFSOutputStream dfsOutputStream, boolean includePktVersion,
    int packetVersion) throws IOException {
  if (packetVersion == DataTransferProtocol.PACKET_VERSION_CHECKSUM_FIRST) {
    return new DFSOutputStreamPacketNonInlineChecksum(dfsOutputStream);
  } else if (!includePktVersion) {
    throw new IOException(
        "Older version doesn't support inline checksum packet format.");
  } else {
    return new DFSOutputStreamPacketInlineChecksum(dfsOutputStream);
  }
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 13 | Source: DFSOutputStreamPacketFactory.java


Example 16: getPacket

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
static DFSOutputStreamPacket getPacket(DFSOutputStream dfsOutputStream,
    boolean includePktVersion, int packetVersion, int pktSize,
    int chunksPerPkt, long offsetInBlock, WritePacketClientProfile profile)
    throws IOException {
  if (packetVersion == DataTransferProtocol.PACKET_VERSION_CHECKSUM_FIRST) {
    return new DFSOutputStreamPacketNonInlineChecksum(dfsOutputStream,
        pktSize, chunksPerPkt, offsetInBlock, profile);
  } else if (!includePktVersion) {
    throw new IOException(
        "Older version doesn't support inline checksum packet format.");
  } else {
    return new DFSOutputStreamPacketInlineChecksum(dfsOutputStream, pktSize,
        chunksPerPkt, offsetInBlock, profile);
  }
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 16 | Source: DFSOutputStreamPacketFactory.java


Example 17: read

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
@Override
public synchronized int read(byte[] buf, int off, int len)
                             throws IOException {

  //for the first read, skip the extra bytes at the front.
  if (lastChunkLen < 0 && startOffset > firstChunkOffset) {
    // Skip these bytes. But don't call this.skip()!
    int toSkip = (int)(startOffset - firstChunkOffset);
    if ( skipBuf == null ) {
      skipBuf = new byte[bytesPerChecksum];
    }
    if ( super.read(skipBuf, 0, toSkip) != toSkip ) {
      // should never happen
      throw new IOException("Could not skip required number of bytes");
    }
    updateStatsAfterRead(toSkip);
  }

  boolean eosBefore = eos;
  int nRead = super.read(buf, off, len);
  
  // if gotEOS was set in the previous read, send a status code to the DN:
  if (dnSock != null && eos && !eosBefore && nRead >= 0) {
    if (needChecksum()) {
      sendReadResult(dnSock, DataTransferProtocol.OP_STATUS_CHECKSUM_OK);
    } else {
      sendReadResult(dnSock, DataTransferProtocol.OP_STATUS_SUCCESS);
    }
  }
  updateStatsAfterRead(nRead);
  return nRead;
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 33 | Source: BlockReader.java


Example 18: BlockReader

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private BlockReader( String file, long blockId, DataInputStream in,
                     DataChecksum checksum, boolean verifyChecksum,
                     long startOffset, long firstChunkOffset,
                     Socket dnSock, long bytesToCheckReadSpeed,
                     long minSpeedBps,
                     long dataTransferVersion,
                     FSClientReadProfilingData cliData) {
  super(new Path("/blk_" + blockId + ":of:" + file)/*too non path-like?*/,
        1, verifyChecksum,
        checksum.getChecksumSize() > 0? checksum : null,
        checksum.getBytesPerChecksum(),
        checksum.getChecksumSize());

  this.dnSock = dnSock;
  this.in = in;
  this.checksum = checksum;
  this.startOffset = Math.max( startOffset, 0 );
  this.dataTransferVersion = dataTransferVersion;
  this.transferBlockSize =
      (dataTransferVersion >= DataTransferProtocol.SEND_DATA_LEN_VERSION);      
  this.firstChunkOffset = firstChunkOffset;
  this.pktIncludeVersion =
      (dataTransferVersion >= DataTransferProtocol.PACKET_INCLUDE_VERSION_VERSION);
  lastChunkOffset = firstChunkOffset;
  lastChunkLen = -1;

  bytesPerChecksum = this.checksum.getBytesPerChecksum();
  checksumSize = this.checksum.getChecksumSize();
  
  this.bytesRead = 0;
  this.timeRead = 0;
  this.minSpeedBps = minSpeedBps;
  this.bytesToCheckReadSpeed = bytesToCheckReadSpeed;
  this.slownessLoged = false;
  this.cliData = cliData;
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 37 | Source: BlockReader.java


Example 19: doGet

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
/** {@inheritDoc} */
public void doGet(HttpServletRequest request, HttpServletResponse response
    ) throws ServletException, IOException {
  final UnixUserGroupInformation ugi = getUGI(request);
  final PrintWriter out = response.getWriter();
  final String filename = getFilename(request, response);
  final XMLOutputter xml = new XMLOutputter(out, "UTF-8");
  xml.declaration();

  Configuration daemonConf = (Configuration) getServletContext()
    .getAttribute(HttpServer.CONF_CONTEXT_ATTRIBUTE);
  final Configuration conf = (daemonConf == null) ? new Configuration()
    : new Configuration(daemonConf);
  final int socketTimeout = conf.getInt("dfs.socket.timeout", HdfsConstants.READ_TIMEOUT);
  final SocketFactory socketFactory = NetUtils.getSocketFactory(conf, ClientProtocol.class);
  UnixUserGroupInformation.saveToConf(conf,
      UnixUserGroupInformation.UGI_PROPERTY_NAME, ugi);
  final ProtocolProxy<ClientProtocol> nnproxy =
    DFSClient.createRPCNamenode(conf);

  try {
    final MD5MD5CRC32FileChecksum checksum = DFSClient.getFileChecksum(
        DataTransferProtocol.DATA_TRANSFER_VERSION,
        filename, nnproxy.getProxy(), nnproxy, socketFactory, socketTimeout);
    MD5MD5CRC32FileChecksum.write(xml, checksum);
  } catch(IOException ioe) {
    new RemoteException(ioe.getClass().getName(), ioe.getMessage()
        ).writeXml(filename, xml);
  }
  xml.endDocument();
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 32 | Source: FileChecksumServlets.java


Example 20: sendRequest

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol; // import the required package/class
private void sendRequest(DataOutputStream out) throws IOException {
  /* Write the header */
  ReplaceBlockHeader replaceBlockHeader = new ReplaceBlockHeader(
      DataTransferProtocol.DATA_TRANSFER_VERSION, namespaceId,
      block.getBlock().getBlockId(), block.getBlock().getGenerationStamp(),
      source.getStorageID(), proxySource.getDatanode());
  replaceBlockHeader.writeVersionAndOpCode(out);
  replaceBlockHeader.write(out);
  out.flush();
}
 
Developer: rhli | Project: hadoop-EAR | Lines: 11 | Source: Balancer.java



Note: The org.apache.hadoop.hdfs.protocol.DataTransferProtocol examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs, with snippets selected from open-source projects contributed by their developers. Copyright of the source code remains with the original authors; consult each project's license before redistributing or using it. Do not reproduce without permission.

