
Java ClientContext Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.ClientContext. If you are wondering what ClientContext is for, how to use it, or what real-world usage looks like, the curated class code examples below should help.



The ClientContext class belongs to the org.apache.hadoop.hdfs package. Thirteen code examples of the class are shown below, sorted by popularity by default.
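Most of the examples share one pattern: obtain the shared ClientContext for a Configuration via ClientContext.getFromConf(conf), then query getDisableLegacyBlockReaderLocal() to check whether a failed legacy short-circuit read has disabled the feature for the whole client context. The following is a minimal sketch of that pattern, not a definitive implementation: it assumes only a Hadoop client on the classpath and a default Configuration, the class name ClientContextSketch is illustrative, and the two ClientContext calls are taken directly from the examples below.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.ClientContext;

public class ClientContextSketch {
  public static void main(String[] args) {
    // A default client configuration; real code would load hdfs-site.xml etc.
    Configuration conf = new Configuration();

    // getFromConf returns the shared, cached ClientContext for this
    // configuration (contexts are keyed by the client context name).
    ClientContext clientContext = ClientContext.getFromConf(conf);

    // After a legacy short-circuit read failure, HDFS silently falls back to
    // remote reads and flips this flag for the whole ClientContext; the tests
    // below assert on it in both directions.
    System.out.println("legacy short-circuit reads disabled: "
        + clientContext.getDisableLegacyBlockReaderLocal());
  }
}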

Example 1: testLegacyScrAfterEviction

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
@Test
public void testLegacyScrAfterEviction()
    throws IOException, InterruptedException, TimeoutException {
  getClusterBuilder().setUseScr(true)
                     .setUseLegacyBlockReaderLocal(true)
                     .build();
  doShortCircuitReadAfterEvictionTest();

  // In the implementation of legacy short-circuit reads, any failure is
  // trapped silently, reverts back to a remote read, and also disables all
  // subsequent legacy short-circuit reads in the ClientContext.
  // Assert that it didn't get disabled.
  ClientContext clientContext = client.getClientContext();
  Assert.assertFalse(clientContext.getDisableLegacyBlockReaderLocal());
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 16, Source: TestScrLazyPersistFiles.java


Example 2: checkFileContent

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/** Check file content, reading as user {@code readingUser} */
static void checkFileContent(URI uri, Path name, byte[] expected,
    int readOffset, String readingUser, Configuration conf,
    boolean legacyShortCircuitFails)
    throws IOException, InterruptedException {
  // Ensure short circuit is enabled
  DistributedFileSystem fs = getFileSystem(readingUser, uri, conf);
  ClientContext getClientContext = ClientContext.getFromConf(conf);
  if (legacyShortCircuitFails) {
    assertFalse(getClientContext.getDisableLegacyBlockReaderLocal());
  }
  
  FSDataInputStream stm = fs.open(name);
  byte[] actual = new byte[expected.length-readOffset];
  stm.readFully(readOffset, actual);
  checkData(actual, readOffset, expected, "Read 2");
  stm.close();
  // Now read using a different API.
  actual = new byte[expected.length-readOffset];
  stm = fs.open(name);
  IOUtils.skipFully(stm, readOffset);
  //Read a small number of bytes first.
  int nread = stm.read(actual, 0, 3);
  nread += stm.read(actual, nread, 2);
  //Read across chunk boundary
  nread += stm.read(actual, nread, 517);
  checkData(actual, readOffset, expected, nread, "A few bytes");
  //Now read rest of it
  while (nread < actual.length) {
    int nbytes = stm.read(actual, nread, actual.length - nread);
    if (nbytes < 0) {
      throw new EOFException("End of file reached before reading fully.");
    }
    nread += nbytes;
  }
  checkData(actual, readOffset, expected, "Read 3");
  
  if (legacyShortCircuitFails) {
    assertTrue(getClientContext.getDisableLegacyBlockReaderLocal());
  }
  stm.close();
}
 
Developer: naver, Project: hadoop, Lines: 43, Source: TestShortCircuitLocalRead.java


Example 3: checkFileContentDirect

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/** Check the file content, reading as user {@code readingUser} */
static void checkFileContentDirect(URI uri, Path name, byte[] expected,
    int readOffset, String readingUser, Configuration conf,
    boolean legacyShortCircuitFails)
    throws IOException, InterruptedException {
  // Ensure short circuit is enabled
  DistributedFileSystem fs = getFileSystem(readingUser, uri, conf);
  ClientContext clientContext = ClientContext.getFromConf(conf);
  if (legacyShortCircuitFails) {
    assertTrue(clientContext.getDisableLegacyBlockReaderLocal());
  }
  
  HdfsDataInputStream stm = (HdfsDataInputStream)fs.open(name);

  ByteBuffer actual = ByteBuffer.allocateDirect(expected.length - readOffset);

  IOUtils.skipFully(stm, readOffset);

  actual.limit(3);

  //Read a small number of bytes first.
  int nread = stm.read(actual);
  actual.limit(nread + 2);
  nread += stm.read(actual);

  // Read across chunk boundary
  actual.limit(Math.min(actual.capacity(), nread + 517));
  nread += stm.read(actual);
  checkData(arrayFromByteBuffer(actual), readOffset, expected, nread,
      "A few bytes");
  //Now read rest of it
  actual.limit(actual.capacity());
  while (actual.hasRemaining()) {
    int nbytes = stm.read(actual);

    if (nbytes < 0) {
      throw new EOFException("End of file reached before reading fully.");
    }
    nread += nbytes;
  }
  checkData(arrayFromByteBuffer(actual), readOffset, expected, "Read 3");
  if (legacyShortCircuitFails) {
    assertTrue(clientContext.getDisableLegacyBlockReaderLocal());
  }
  stm.close();
}
 
Developer: naver, Project: hadoop, Lines: 47, Source: TestShortCircuitLocalRead.java


Example 4: tryRead

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: naver, Project: hadoop, Lines: 67, Source: TestBlockTokenWithDFS.java


Example 5: accessBlock

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/**
 * Try to access a block on a datanode; throws an exception on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to access
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: naver, Project: hadoop, Lines: 50, Source: TestDataNodeVolumeFailure.java


Example 6: doShortCircuitReadAfterEvictionTest

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
private void doShortCircuitReadAfterEvictionTest() throws IOException,
    InterruptedException {
  final String METHOD_NAME = GenericTestUtils.getMethodName();
  Path path1 = new Path("/" + METHOD_NAME + ".01.dat");
  Path path2 = new Path("/" + METHOD_NAME + ".02.dat");

  final int SEED = 0xFADED;
  makeRandomTestFile(path1, BLOCK_SIZE, true, SEED);

  // Verify short-circuit read from RAM_DISK.
  ensureFileReplicasOnStorageType(path1, RAM_DISK);
  File metaFile = cluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() <= BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // Sleep for a short time to allow the lazy writer thread to do its job.
  Thread.sleep(3 * LAZY_WRITER_INTERVAL_SEC * 1000);

  // Verify short-circuit read from RAM_DISK once again.
  ensureFileReplicasOnStorageType(path1, RAM_DISK);
  metaFile = cluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() <= BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // Create another file with a replica on RAM_DISK, which evicts the first.
  makeRandomTestFile(path2, BLOCK_SIZE, true, SEED);
  Thread.sleep(3 * LAZY_WRITER_INTERVAL_SEC * 1000);
  triggerBlockReport();

  // Verify short-circuit read still works from DEFAULT storage.  This time,
  // we'll have a checksum written during lazy persistence.
  ensureFileReplicasOnStorageType(path1, DEFAULT);
  metaFile = cluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() > BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // In the implementation of legacy short-circuit reads, any failure is
  // trapped silently, reverts back to a remote read, and also disables all
  // subsequent legacy short-circuit reads in the ClientContext.  If the test
  // uses legacy, then assert that it didn't get disabled.
  ClientContext clientContext = client.getClientContext();
  if (clientContext.getUseLegacyBlockReaderLocal()) {
    Assert.assertFalse(clientContext.getDisableLegacyBlockReaderLocal());
  }
}
 
Developer: naver, Project: hadoop, Lines: 49, Source: TestScrLazyPersistFiles.java


Example 7: checkFileContent

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/** Check file content, reading as user {@code readingUser} */
static void checkFileContent(URI uri, Path name, byte[] expected,
    int readOffset, String readingUser, Configuration conf,
    boolean legacyShortCircuitFails)
    throws IOException, InterruptedException {
  // Ensure short circuit is enabled
  DistributedFileSystem fs = getFileSystem(readingUser, uri, conf);
  ClientContext getClientContext = ClientContext.getFromConf(conf);
  if (legacyShortCircuitFails) {
    assertFalse(getClientContext.getDisableLegacyBlockReaderLocal());
  }

  FSDataInputStream stm = fs.open(name);
  byte[] actual = new byte[expected.length-readOffset];
  stm.readFully(readOffset, actual);
  checkData(actual, readOffset, expected, "Read 2");
  stm.close();
  // Now read using a different API.
  actual = new byte[expected.length-readOffset];
  stm = fs.open(name);
  IOUtils.skipFully(stm, readOffset);
  //Read a small number of bytes first.
  int nread = stm.read(actual, 0, 3);
  nread += stm.read(actual, nread, 2);
  //Read across chunk boundary
  nread += stm.read(actual, nread, 517);
  checkData(actual, readOffset, expected, nread, "A few bytes");
  //Now read rest of it
  while (nread < actual.length) {
    int nbytes = stm.read(actual, nread, actual.length - nread);
    if (nbytes < 0) {
      throw new EOFException("End of file reached before reading fully.");
    }
    nread += nbytes;
  }
  checkData(actual, readOffset, expected, "Read 3");

  if (legacyShortCircuitFails) {
    assertTrue(getClientContext.getDisableLegacyBlockReaderLocal());
  }
  stm.close();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 43, Source: TestShortCircuitLocalRead.java


Example 8: checkFileContentDirect

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/** Check the file content, reading as user {@code readingUser} */
static void checkFileContentDirect(URI uri, Path name, byte[] expected,
    int readOffset, String readingUser, Configuration conf,
    boolean legacyShortCircuitFails)
    throws IOException, InterruptedException {
  // Ensure short circuit is enabled
  DistributedFileSystem fs = getFileSystem(readingUser, uri, conf);
  ClientContext clientContext = ClientContext.getFromConf(conf);
  if (legacyShortCircuitFails) {
    assertTrue(clientContext.getDisableLegacyBlockReaderLocal());
  }

  HdfsDataInputStream stm = (HdfsDataInputStream)fs.open(name);

  ByteBuffer actual = ByteBuffer.allocateDirect(expected.length - readOffset);

  IOUtils.skipFully(stm, readOffset);

  actual.limit(3);

  //Read a small number of bytes first.
  int nread = stm.read(actual);
  actual.limit(nread + 2);
  nread += stm.read(actual);

  // Read across chunk boundary
  actual.limit(Math.min(actual.capacity(), nread + 517));
  nread += stm.read(actual);
  checkData(arrayFromByteBuffer(actual), readOffset, expected, nread,
      "A few bytes");
  //Now read rest of it
  actual.limit(actual.capacity());
  while (actual.hasRemaining()) {
    int nbytes = stm.read(actual);

    if (nbytes < 0) {
      throw new EOFException("End of file reached before reading fully.");
    }
    nread += nbytes;
  }
  checkData(arrayFromByteBuffer(actual), readOffset, expected, "Read 3");
  if (legacyShortCircuitFails) {
    assertTrue(clientContext.getDisableLegacyBlockReaderLocal());
  }
  stm.close();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 47, Source: TestShortCircuitLocalRead.java


Example 9: tryRead

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
protected void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DfsClientConf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr,
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setTracer(FsTracer.get(conf)).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr,
              Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);
              peer = DFSUtilClient.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 68, Source: TestBlockTokenWithDFS.java


Example 10: accessBlock

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/**
 * Try to access a block on a datanode; throws an exception on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to access
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DfsClientConf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setTracer(FsTracer.get(conf)).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr,
          Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);
          peer = DFSUtilClient.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 51, Source: TestDataNodeVolumeFailure.java


Example 11: doShortCircuitReadAfterEvictionTest

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
private void doShortCircuitReadAfterEvictionTest() throws IOException,
    InterruptedException {
  final String METHOD_NAME = GenericTestUtils.getMethodName();
  Path path1 = new Path("/" + METHOD_NAME + ".01.dat");
  Path path2 = new Path("/" + METHOD_NAME + ".02.dat");

  final int SEED = 0xFADED;
  makeRandomTestFile(path1, BLOCK_SIZE, true, SEED);

  // Verify short-circuit read from RAM_DISK.
  ensureFileReplicasOnStorageType(path1, RAM_DISK);
  File metaFile = MiniDFSCluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() <= BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // Sleep for a short time to allow the lazy writer thread to do its job.
  Thread.sleep(3 * LAZY_WRITER_INTERVAL_SEC * 1000);

  // Verify short-circuit read from RAM_DISK once again.
  ensureFileReplicasOnStorageType(path1, RAM_DISK);
  metaFile = MiniDFSCluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() <= BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // Create another file with a replica on RAM_DISK, which evicts the first.
  makeRandomTestFile(path2, BLOCK_SIZE, true, SEED);
  Thread.sleep(3 * LAZY_WRITER_INTERVAL_SEC * 1000);
  triggerBlockReport();

  // Verify short-circuit read still works from DEFAULT storage.  This time,
  // we'll have a checksum written during lazy persistence.
  ensureFileReplicasOnStorageType(path1, DEFAULT);
  metaFile = MiniDFSCluster.getBlockMetadataFile(0,
      DFSTestUtil.getFirstBlock(fs, path1));
  assertTrue(metaFile.length() > BlockMetadataHeader.getHeaderSize());
  assertTrue(verifyReadRandomFile(path1, BLOCK_SIZE, SEED));

  // In the implementation of legacy short-circuit reads, any failure is
  // trapped silently, reverts back to a remote read, and also disables all
  // subsequent legacy short-circuit reads in the ClientContext.  If the test
  // uses legacy, then assert that it didn't get disabled.
  ClientContext clientContext = client.getClientContext();
  if (clientContext.getUseLegacyBlockReaderLocal()) {
    Assert.assertFalse(clientContext.getDisableLegacyBlockReaderLocal());
  }
}
 
Developer: yncxcw, Project: FlexMap, Lines: 49, Source: TestScrLazyPersistFiles.java


Example 12: tryRead

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
private static void tryRead(final Configuration conf, LocatedBlock lblock,
    boolean shouldSucceed) {
  InetSocketAddress targetAddr = null;
  IOException ioe = null;
  BlockReader blockReader = null;
  ExtendedBlock block = lblock.getBlock();
  try {
    DatanodeInfo[] nodes = lblock.getLocations();
    targetAddr = NetUtils.createSocketAddr(nodes[0].getXferAddr());

    blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
        setFileName(BlockReaderFactory.getFileName(targetAddr, 
                      "test-blockpoolid", block.getBlockId())).
        setBlock(block).
        setBlockToken(lblock.getBlockToken()).
        setInetSocketAddress(targetAddr).
        setStartOffset(0).
        setLength(-1).
        setVerifyChecksum(true).
        setClientName("TestBlockTokenWithDFS").
        setDatanodeInfo(nodes[0]).
        setCachingStrategy(CachingStrategy.newDefaultStrategy()).
        setClientCacheContext(ClientContext.getFromConf(conf)).
        setConfiguration(conf).
        setRemotePeerFactory(new RemotePeerFactory() {
          @Override
          public Peer newConnectedPeer(InetSocketAddress addr)
              throws IOException {
            Peer peer = null;
            Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
            try {
              sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
              sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
              peer = TcpPeerServer.peerFromSocket(sock);
            } finally {
              if (peer == null) {
                IOUtils.closeSocket(sock);
              }
            }
            return peer;
          }
        }).
        build();
  } catch (IOException ex) {
    ioe = ex;
  } finally {
    if (blockReader != null) {
      try {
        blockReader.close();
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }
  if (shouldSucceed) {
    Assert.assertNotNull("OP_READ_BLOCK: access token is invalid, "
          + "when it is expected to be valid", blockReader);
  } else {
    Assert.assertNotNull("OP_READ_BLOCK: access token is valid, "
        + "when it is expected to be invalid", ioe);
    Assert.assertTrue(
        "OP_READ_BLOCK failed due to reasons other than access token: ",
        ioe instanceof InvalidBlockTokenException);
  }
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 66, Source: TestBlockTokenWithDFS.java


Example 13: accessBlock

import org.apache.hadoop.hdfs.ClientContext; // import the required package/class
/**
 * Try to access a block on a datanode; throws an exception on failure.
 * @param datanode the datanode hosting the block
 * @param lblock the located block to access
 * @throws IOException if the block cannot be read
 */
private void accessBlock(DatanodeInfo datanode, LocatedBlock lblock)
  throws IOException {
  InetSocketAddress targetAddr = null;
  ExtendedBlock block = lblock.getBlock(); 
 
  targetAddr = NetUtils.createSocketAddr(datanode.getXferAddr());

  BlockReader blockReader = new BlockReaderFactory(new DFSClient.Conf(conf)).
    setInetSocketAddress(targetAddr).
    setBlock(block).
    setFileName(BlockReaderFactory.getFileName(targetAddr,
                  "test-blockpoolid", block.getBlockId())).
    setBlockToken(lblock.getBlockToken()).
    setStartOffset(0).
    setLength(-1).
    setVerifyChecksum(true).
    setClientName("TestDataNodeVolumeFailure").
    setDatanodeInfo(datanode).
    setCachingStrategy(CachingStrategy.newDefaultStrategy()).
    setClientCacheContext(ClientContext.getFromConf(conf)).
    setConfiguration(conf).
    setRemotePeerFactory(new RemotePeerFactory() {
      @Override
      public Peer newConnectedPeer(InetSocketAddress addr)
          throws IOException {
        Peer peer = null;
        Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        try {
          sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
          sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
          peer = TcpPeerServer.peerFromSocket(sock);
        } finally {
          if (peer == null) {
            IOUtils.closeSocket(sock);
          }
        }
        return peer;
      }
    }).
    build();
  blockReader.close();
}
 
Developer: Seagate, Project: hadoop-on-lustre2, Lines: 49, Source: TestDataNodeVolumeFailure.java



Note: The org.apache.hadoop.hdfs.ClientContext examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. The snippets were selected from code contributed by open-source developers, and copyright remains with the original authors; consult each project's License before using or redistributing the code. Do not republish without permission.

