
Java SecurityTestUtil Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil. If you are wondering what SecurityTestUtil is for, or how to use it in your own code, the curated class examples below should help.



The SecurityTestUtil class belongs to the org.apache.hadoop.hdfs.security.token.block package. Eight code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
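For readers without a Hadoop build on hand, the expiry-polling pattern that all eight examples below rely on can be sketched with plain JDK types. Note that `ExpiringToken` and `waitUntilExpired` are hypothetical stand-ins for Hadoop's `Token<BlockTokenIdentifier>` and the `SecurityTestUtil.isBlockTokenExpired` wait loop, not real Hadoop APIs:

```java
// Hypothetical stand-in for Hadoop's block token: carries only an expiry time.
final class ExpiringToken {
    private final long expiryMillis;

    ExpiringToken(long lifetimeMillis) {
        this.expiryMillis = System.currentTimeMillis() + lifetimeMillis;
    }

    boolean isExpired() {
        return System.currentTimeMillis() >= expiryMillis;
    }
}

public class TokenExpiryDemo {
    // Mirrors the busy-wait loop used throughout the examples below:
    // poll every 10 ms until the token reports itself expired.
    static void waitUntilExpired(ExpiringToken token) {
        while (!token.isExpired()) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) {
        ExpiringToken token = new ExpiringToken(100L); // 100 ms lifetime
        waitUntilExpired(token);
        System.out.println("expired=" + token.isExpired());
    }
}
```

In the real tests, the same loop runs against a token whose lifetime was shortened with `SecurityTestUtil.setBlockTokenLifetime`, so the wait completes in about a second.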

Example 1: waitTokenExpires

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
private void waitTokenExpires(FSDataOutputStream out) throws IOException {
  Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(out);
  while (!SecurityTestUtil.isBlockTokenExpired(token)) {
    try {
      Thread.sleep(10);
    } catch (InterruptedException ignored) {
    }
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 10, Source: TestDFSStripedOutputStreamWithFailure.java
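One caveat with the loop in waitTokenExpires: if the token never expires (say, the shortened lifetime was not applied), the test spins forever. A bounded variant with a deadline can be sketched in plain JDK code; `awaitCondition` below is a hypothetical helper, not part of Hadoop or SecurityTestUtil:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

public class BoundedWait {
    // Polls `condition` every `pollMillis` until it becomes true,
    // or throws TimeoutException once `timeoutMillis` has elapsed.
    static void awaitCondition(BooleanSupplier condition,
                               long timeoutMillis,
                               long pollMillis)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                throw new TimeoutException(
                    "condition not met within " + timeoutMillis + " ms");
            }
            Thread.sleep(pollMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 80 ms; well inside the 2 s budget.
        awaitCondition(() -> System.currentTimeMillis() - start >= 80, 2000, 10);
        System.out.println("done");
    }
}
```

In a test, the condition argument would be `() -> SecurityTestUtil.isBlockTokenExpired(token)` wrapped to handle its checked IOException; the timeout then turns a hung test into a clear failure.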


Example 2: testAppend

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that APPEND operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testAppend() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToAppend = new Path(FILE_TO_APPEND);
    FileSystem fs = cluster.getFileSystem();

    // write a one-byte file
    FSDataOutputStream stm = writeFile(fs, fileToAppend,
        (short) numDataNodes, BLOCK_SIZE);
    stm.write(rawData, 0, 1);
    stm.close();
    // open the file again for append
    stm = fs.append(fileToAppend);
    int mid = rawData.length - 1;
    stm.write(rawData, 1, mid - 1);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // append the rest of the file
    stm.write(rawData, mid, rawData.length - mid);
    stm.close();
    // check if append is successful
    FSDataInputStream in5 = fs.open(fileToAppend);
    assertTrue(checkFile1(in5));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 61, Source: TestBlockTokenWithDFS.java


Example 3: testWrite

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that WRITE operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testWrite() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToWrite = new Path(FILE_TO_WRITE);
    FileSystem fs = cluster.getFileSystem();

    FSDataOutputStream stm = writeFile(fs, fileToWrite, (short) numDataNodes,
        BLOCK_SIZE);
    // write a partial block
    int mid = rawData.length - 1;
    stm.write(rawData, 0, mid);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // write the rest of the file
    stm.write(rawData, mid, rawData.length - mid);
    stm.close();
    // check if write is successful
    FSDataInputStream in4 = fs.open(fileToWrite);
    assertTrue(checkFile1(in4));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: naver, Project: hadoop, Lines: 57, Source: TestBlockTokenWithDFS.java


Example 4: testAppend

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that APPEND operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testAppend() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToAppend = new Path(FILE_TO_APPEND);
    FileSystem fs = cluster.getFileSystem();
    byte[] expected = generateBytes(FILE_SIZE);
    // write a one-byte file
    FSDataOutputStream stm = writeFile(fs, fileToAppend,
        (short) numDataNodes, BLOCK_SIZE);
    stm.write(expected, 0, 1);
    stm.close();
    // open the file again for append
    stm = fs.append(fileToAppend);
    int mid = expected.length - 1;
    stm.write(expected, 1, mid - 1);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // append the rest of the file
    stm.write(expected, mid, expected.length - mid);
    stm.close();
    // check if append is successful
    FSDataInputStream in5 = fs.open(fileToAppend);
    assertTrue(checkFile1(in5, expected));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 61, Source: TestBlockTokenWithDFS.java


Example 5: testWrite

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that WRITE operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testWrite() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToWrite = new Path(FILE_TO_WRITE);
    FileSystem fs = cluster.getFileSystem();

    byte[] expected = generateBytes(FILE_SIZE);
    FSDataOutputStream stm = writeFile(fs, fileToWrite, (short) numDataNodes,
        BLOCK_SIZE);
    // write a partial block
    int mid = expected.length - 1;
    stm.write(expected, 0, mid);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // write the rest of the file
    stm.write(expected, mid, expected.length - mid);
    stm.close();
    // check if write is successful
    FSDataInputStream in4 = fs.open(fileToWrite);
    assertTrue(checkFile1(in4, expected));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 58, Source: TestBlockTokenWithDFS.java


Example 6: isBlockTokenExpired

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
protected boolean isBlockTokenExpired(LocatedBlock lb) throws IOException {
  return SecurityTestUtil.isBlockTokenExpired(lb.getBlockToken());
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 4, Source: TestBlockTokenWithDFS.java


Example 7: testAppend

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that APPEND operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testAppend() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToAppend = new Path(FILE_TO_APPEND);
    FileSystem fs = cluster.getFileSystem();

    // write a one-byte file
    FSDataOutputStream stm =
        writeFile(fs, fileToAppend, (short) numDataNodes, BLOCK_SIZE);
    stm.write(rawData, 0, 1);
    stm.close();
    // open the file again for append
    stm = fs.append(fileToAppend);
    int mid = rawData.length - 1;
    stm.write(rawData, 1, mid - 1);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // append the rest of the file
    stm.write(rawData, mid, rawData.length - mid);
    stm.close();
    // check if append is successful
    FSDataInputStream in5 = fs.open(fileToAppend);
    assertTrue(checkFile1(in5));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 62, Source: TestBlockTokenWithDFS.java


Example 8: testWrite

import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil; // import the required package/class
/**
 * testing that WRITE operation can handle token expiration when
 * re-establishing pipeline is needed
 */
@Test
public void testWrite() throws Exception {
  MiniDFSCluster cluster = null;
  int numDataNodes = 2;
  Configuration conf = getConf(numDataNodes);

  try {
    cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(numDataNodes).build();
    cluster.waitActive();
    assertEquals(numDataNodes, cluster.getDataNodes().size());

    final NameNode nn = cluster.getNameNode();
    final BlockManager bm = nn.getNamesystem().getBlockManager();
    final BlockTokenSecretManager sm = bm.getBlockTokenSecretManager();

    // set a short token lifetime (1 second)
    SecurityTestUtil.setBlockTokenLifetime(sm, 1000L);
    Path fileToWrite = new Path(FILE_TO_WRITE);
    FileSystem fs = cluster.getFileSystem();

    FSDataOutputStream stm =
        writeFile(fs, fileToWrite, (short) numDataNodes, BLOCK_SIZE);
    // write a partial block
    int mid = rawData.length - 1;
    stm.write(rawData, 0, mid);
    stm.hflush();

    /*
     * wait till token used in stm expires
     */
    Token<BlockTokenIdentifier> token = DFSTestUtil.getBlockToken(stm);
    while (!SecurityTestUtil.isBlockTokenExpired(token)) {
      try {
        Thread.sleep(10);
      } catch (InterruptedException ignored) {
      }
    }

    // remove a datanode to force re-establishing pipeline
    cluster.stopDataNode(0);
    // write the rest of the file
    stm.write(rawData, mid, rawData.length - mid);
    stm.close();
    // check if write is successful
    FSDataInputStream in4 = fs.open(fileToWrite);
    assertTrue(checkFile1(in4));
  } finally {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
 
Developer: hopshadoop, Project: hops, Lines: 58, Source: TestBlockTokenWithDFS.java



Note: The org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil examples in this article were collected from open-source projects hosted on GitHub and similar code and documentation platforms. Copyright in the code snippets remains with the original authors; consult each project's license before redistributing or reusing them. Do not reproduce this article without permission.

