
Java BlockPoolTokenSecretManager Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager. If you are wondering what BlockPoolTokenSecretManager is for, or how it is used in practice, the selected examples below may help.



The BlockPoolTokenSecretManager class belongs to the org.apache.hadoop.hdfs.security.token.block package. Nine code examples of the class are shown below, ordered by popularity.

Example 1: getEncryptionKeyFromUserName

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * Given a secret manager and a username encoded as described above, determine
 * the encryption key.
 * 
 * @param blockPoolTokenSecretManager to determine the encryption key.
 * @param userName containing the keyId, blockPoolId, and nonce.
 * @return secret encryption key.
 * @throws IOException
 */
private static byte[] getEncryptionKeyFromUserName(
    BlockPoolTokenSecretManager blockPoolTokenSecretManager, String userName)
    throws IOException {
  String[] nameComponents = userName.split(NAME_DELIMITER);
  if (nameComponents.length != 3) {
    throw new IOException("Provided name '" + userName + "' has " +
        nameComponents.length + " components instead of the expected 3.");
  }
  int keyId = Integer.parseInt(nameComponents[0]);
  String blockPoolId = nameComponents[1];
  byte[] nonce = Base64.decodeBase64(nameComponents[2]);
  return blockPoolTokenSecretManager.retrieveDataEncryptionKey(keyId,
      blockPoolId, nonce);
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 24 | Source: DataTransferEncryptor.java
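The method above expects a username built from exactly three components: a keyId, a blockPoolId, and a Base64-encoded nonce, joined by NAME_DELIMITER. As a rough sketch of the round trip, the following self-contained class encodes and decodes such a name. Note that the delimiter value and the helper names here are assumptions for illustration; Hadoop defines its own NAME_DELIMITER constant in DataTransferEncryptor, and this sketch uses the JDK's Base64 rather than the Commons Codec class used above.

```java
import java.util.Base64;

public class UserNameCodec {
    // Assumed delimiter for illustration; Hadoop defines its own NAME_DELIMITER.
    static final String NAME_DELIMITER = " ";

    // Joins keyId, blockPoolId, and a Base64-encoded nonce into one username string.
    static String encode(int keyId, String blockPoolId, byte[] nonce) {
        return keyId + NAME_DELIMITER + blockPoolId + NAME_DELIMITER
            + Base64.getEncoder().encodeToString(nonce);
    }

    // Mirrors the parsing logic of getEncryptionKeyFromUserName above:
    // split on the delimiter, insist on exactly 3 parts, decode each component.
    static Object[] decode(String userName) {
        String[] parts = userName.split(NAME_DELIMITER);
        if (parts.length != 3) {
            throw new IllegalArgumentException("Provided name '" + userName + "' has "
                + parts.length + " components instead of the expected 3.");
        }
        return new Object[] {
            Integer.parseInt(parts[0]),          // keyId
            parts[1],                            // blockPoolId
            Base64.getDecoder().decode(parts[2]) // nonce
        };
    }

    public static void main(String[] args) {
        String name = encode(42, "BP-1234", new byte[] {1, 2, 3});
        Object[] parts = decode(name);
        System.out.println(parts[0] + " " + parts[1]);
    }
}
```

The real method then hands the three decoded components to retrieveDataEncryptionKey on the secret manager, which is why the strict three-component check matters: a malformed name must fail before any key lookup happens.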


Example 2: startDataNode

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * This method starts the data node with the specified conf.
 * 
 * @param conf - the configuration
 *  if conf's CONFIG_PROPERTY_SIMULATED property is set
 *  then a simulated storage based data node is created.
 * 
 * @param dataDirs - only for a non-simulated storage data node
 * @throws IOException
 */
void startDataNode(Configuration conf, 
                   AbstractList<File> dataDirs,
                  // DatanodeProtocol namenode,
                   SecureResources resources
                   ) throws IOException {
  if(UserGroupInformation.isSecurityEnabled() && resources == null) {
    if (!conf.getBoolean("ignore.secure.ports.for.testing", false)) {
      throw new RuntimeException("Cannot start secure cluster without "
          + "privileged resources.");
    }
  }

  // settings global for all BPs in the Data Node
  this.secureResources = resources;
  this.dataDirs = dataDirs;
  this.conf = conf;
  this.dnConf = new DNConf(conf);

  storage = new DataStorage();
  
  // global DN settings
  registerMXBean();
  initDataXceiver(conf);
  startInfoServer(conf);

  // BlockPoolTokenSecretManager is required to create ipc server.
  this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
  initIpcServer(conf);

  metrics = DataNodeMetrics.create(conf, getDisplayName());

  blockPoolManager = new BlockPoolManager(this);
  blockPoolManager.refreshNamenodes(conf);

  // Create the ReadaheadPool from the DataNode context so we can
  // exit without having to explicitly shutdown its thread pool.
  readaheadPool = ReadaheadPool.getInstance();
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 49 | Source: DataNode.java
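In the method above, a single BlockPoolTokenSecretManager is created before the IPC server because the DataNode may serve several block pools (one per namespace), and token checks must be routed to the right pool. A simplified, self-contained model of that per-pool dispatch is sketched below; the class and method names are illustrative and this is not Hadoop's actual API surface, only the registry pattern it follows.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of a per-block-pool secret-manager registry.
// The real BlockPoolTokenSecretManager dispatches token and key
// operations to a per-pool manager in a similar way.
public class PoolRegistry<M> {
    private final Map<String, M> managers = new ConcurrentHashMap<>();

    // Called once per block pool, e.g. when the DataNode learns
    // about a new namenode via refreshNamenodes.
    public void addBlockPool(String bpid, M manager) {
        managers.put(bpid, manager);
    }

    // Looks up the manager for a pool; fails fast if the pool is unknown,
    // so a token for an unregistered pool can never be validated.
    public M get(String bpid) {
        M m = managers.get(bpid);
        if (m == null) {
            throw new IllegalArgumentException("Block pool " + bpid + " is not registered");
        }
        return m;
    }

    public static void main(String[] args) {
        PoolRegistry<String> reg = new PoolRegistry<>();
        reg.addBlockPool("BP-1", "manager-1");
        System.out.println(reg.get("BP-1"));
    }
}
```

This also explains the ordering in startDataNode: the empty registry must exist before initIpcServer, and pools are filled in later when blockPoolManager.refreshNamenodes runs.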


Example 3: getEncryptionKeyFromUserName

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * Given a secret manager and a username encoded as described above,
 * determine the encryption key.
 *
 * @param blockPoolTokenSecretManager
 *     to determine the encryption key.
 * @param userName
 *     containing the keyId, blockPoolId, and nonce.
 * @return secret encryption key.
 * @throws IOException
 */
private static byte[] getEncryptionKeyFromUserName(
    BlockPoolTokenSecretManager blockPoolTokenSecretManager, String userName)
    throws IOException {
  String[] nameComponents = userName.split(NAME_DELIMITER);
  if (nameComponents.length != 3) {
    throw new IOException("Provided name '" + userName + "' has " +
        nameComponents.length + " components instead of the expected 3.");
  }
  int keyId = Integer.parseInt(nameComponents[0]);
  String blockPoolId = nameComponents[1];
  byte[] nonce = Base64.decodeBase64(nameComponents[2]);
  return blockPoolTokenSecretManager
      .retrieveDataEncryptionKey(keyId, blockPoolId, nonce);
}
 
Developer: hopshadoop | Project: hops | Lines: 27 | Source: DataTransferEncryptor.java


Example 4: startDataNode

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * This method starts the data node with the specified conf.
 *
 * @param conf
 *     - the configuration
 *     if conf's CONFIG_PROPERTY_SIMULATED property is set
 *     then a simulated storage based data node is created.
 * @param dataDirs
 *     - only for a non-simulated storage data node
 * @throws IOException
 */
void startDataNode(Configuration conf, AbstractList<File> dataDirs,
    // DatanodeProtocol namenode,
    SecureResources resources) throws IOException {
  if (UserGroupInformation.isSecurityEnabled() && resources == null) {
    if (!conf.getBoolean("ignore.secure.ports.for.testing", false)) {
      throw new RuntimeException(
          "Cannot start secure cluster without " + "privileged resources.");
    }
  }

  // settings global for all BPs in the Data Node
  this.secureResources = resources;
  this.dataDirs = dataDirs;
  this.conf = conf;
  this.dnConf = new DNConf(conf);

  storage = new DataStorage();
  
  // global DN settings
  registerMXBean();
  initDataXceiver(conf);
  startInfoServer(conf);

  // BlockPoolTokenSecretManager is required to create ipc server.
  this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
  initIpcServer(conf);

  metrics = DataNodeMetrics.create(conf, getDisplayName());

  blockPoolManager = new BlockPoolManager(this);
  blockPoolManager.refreshNamenodes(conf);

  // Create the ReadaheadPool from the DataNode context so we can
  // exit without having to explicitly shutdown its thread pool.
  readaheadPool = ReadaheadPool.getInstance();
}
 
Developer: hopshadoop | Project: hops | Lines: 48 | Source: DataNode.java


Example 5: getEncryptedStreams

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * Factory method for DNs, where the nonce, keyId, and encryption key are not
 * yet known. The nonce and keyId will be sent by the client, and the DN
 * will then use those pieces of info and the secret key shared with the NN
 * to determine the encryptionKey used for the SASL handshake/encryption.
 * 
 * Establishes a secure connection assuming that the party on the other end
 * has the same shared secret. This does a SASL connection handshake, but not
 * a general-purpose one. It's specific to the MD5-DIGEST SASL mechanism with
 * auth-conf enabled. In particular, it doesn't support an arbitrary number of
 * challenge/response rounds, and we know that the client will never have an
 * initial response, so we don't check for one.
 *
 * @param underlyingOut output stream to write to the other party
 * @param underlyingIn input stream to read from the other party
 * @param blockPoolTokenSecretManager secret manager capable of constructing
 *        encryption key based on keyId, blockPoolId, and nonce
 * @return a pair of streams which wrap the given streams and encrypt/decrypt
 *         all data read/written
 * @throws IOException in the event of error
 */
public static IOStreamPair getEncryptedStreams(
    OutputStream underlyingOut, InputStream underlyingIn,
    BlockPoolTokenSecretManager blockPoolTokenSecretManager,
    String encryptionAlgorithm) throws IOException {
  
  DataInputStream in = new DataInputStream(underlyingIn);
  DataOutputStream out = new DataOutputStream(underlyingOut);
  
  Map<String, String> saslProps = Maps.newHashMap(SASL_PROPS);
  saslProps.put("com.sun.security.sasl.digest.cipher", encryptionAlgorithm);
  
  if (LOG.isDebugEnabled()) {
    LOG.debug("Server using encryption algorithm " + encryptionAlgorithm);
  }
  
  SaslParticipant sasl = new SaslParticipant(Sasl.createSaslServer(MECHANISM,
      PROTOCOL, SERVER_NAME, saslProps,
      new SaslServerCallbackHandler(blockPoolTokenSecretManager)));
  
  int magicNumber = in.readInt();
  if (magicNumber != ENCRYPTED_TRANSFER_MAGIC_NUMBER) {
    throw new InvalidMagicNumberException(magicNumber);
  }
  try {
    // step 1
    performSaslStep1(out, in, sasl);
    
    // step 2 (server-side only)
    byte[] remoteResponse = readSaslMessage(in);
    byte[] localResponse = sasl.evaluateChallengeOrResponse(remoteResponse);
    sendSaslMessage(out, localResponse);
    
    // SASL handshake is complete
    checkSaslComplete(sasl);
    
    return sasl.createEncryptedStreamPair(out, in);
  } catch (IOException ioe) {
    if (ioe instanceof SaslException &&
        ioe.getCause() != null &&
        ioe.getCause() instanceof InvalidEncryptionKeyException) {
      // This could just be because the client is long-lived and hasn't gotten
      // a new encryption key from the NN in a while. Upon receiving this
      // error, the client will get a new encryption key from the NN and retry
      // connecting to this DN.
      sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
    } else {
      sendGenericSaslErrorMessage(out, ioe.getMessage());
    }
    throw ioe;
  }
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 73 | Source: DataTransferEncryptor.java
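Before any SASL step runs, the method above reads a single int and rejects the connection if it is not the expected magic number, so a plaintext client talking to an encrypted port fails immediately with InvalidMagicNumberException rather than producing a confusing SASL error. The framing itself is just a 4-byte big-endian int; a minimal self-contained sketch of that check follows (the constant value here is illustrative, not Hadoop's actual ENCRYPTED_TRANSFER_MAGIC_NUMBER):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MagicNumberCheck {
    // Illustrative value only; Hadoop defines its own magic number.
    static final int MAGIC = 0xCAFEBABE;

    // Client side: write the 4-byte header before the SASL exchange.
    static void writeHeader(DataOutputStream out) throws IOException {
        out.writeInt(MAGIC);
    }

    // Server side: reject anything that doesn't start with the header,
    // mirroring the magic-number check in getEncryptedStreams above.
    static void checkHeader(DataInputStream in) throws IOException {
        int magic = in.readInt();
        if (magic != MAGIC) {
            throw new IOException("Bad magic number: 0x" + Integer.toHexString(magic));
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeHeader(new DataOutputStream(buf));
        checkHeader(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("handshake header OK");
    }
}
```

The error handling after the handshake makes a similar early/late distinction: an InvalidEncryptionKeyException cause is reported to the client as a retryable "stale key" error, while everything else becomes a generic SASL failure.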


Example 6: SaslServerCallbackHandler

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
public SaslServerCallbackHandler(BlockPoolTokenSecretManager
    blockPoolTokenSecretManager) {
  this.blockPoolTokenSecretManager = blockPoolTokenSecretManager;
}
 
Developer: ict-carch | Project: hadoop-plus | Lines: 5 | Source: DataTransferEncryptor.java


Example 7: getEncryptedStreams

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * Factory method for DNs, where the nonce, keyId, and encryption key are not
 * yet known. The nonce and keyId will be sent by the client, and the DN
 * will then use those pieces of info and the secret key shared with the NN
 * to determine the encryptionKey used for the SASL handshake/encryption.
 * <p/>
 * Establishes a secure connection assuming that the party on the other end
 * has the same shared secret. This does a SASL connection handshake, but not
 * a general-purpose one. It's specific to the MD5-DIGEST SASL mechanism with
 * auth-conf enabled. In particular, it doesn't support an arbitrary number of
 * challenge/response rounds, and we know that the client will never have an
 * initial response, so we don't check for one.
 *
 * @param underlyingOut
 *     output stream to write to the other party
 * @param underlyingIn
 *     input stream to read from the other party
 * @param blockPoolTokenSecretManager
 *     secret manager capable of constructing
 *     encryption key based on keyId, blockPoolId, and nonce
 * @return a pair of streams which wrap the given streams and encrypt/decrypt
 * all data read/written
 * @throws IOException
 *     in the event of error
 */
public static IOStreamPair getEncryptedStreams(OutputStream underlyingOut,
    InputStream underlyingIn,
    BlockPoolTokenSecretManager blockPoolTokenSecretManager,
    String encryptionAlgorithm) throws IOException {
  
  DataInputStream in = new DataInputStream(underlyingIn);
  DataOutputStream out = new DataOutputStream(underlyingOut);
  
  Map<String, String> saslProps = Maps.newHashMap(SASL_PROPS);
  saslProps.put("com.sun.security.sasl.digest.cipher", encryptionAlgorithm);
  
  if (LOG.isDebugEnabled()) {
    LOG.debug("Server using encryption algorithm " + encryptionAlgorithm);
  }
  
  SaslParticipant sasl = new SaslParticipant(
      Sasl.createSaslServer(MECHANISM, PROTOCOL, SERVER_NAME, saslProps,
          new SaslServerCallbackHandler(blockPoolTokenSecretManager)));
  
  int magicNumber = in.readInt();
  if (magicNumber != ENCRYPTED_TRANSFER_MAGIC_NUMBER) {
    throw new InvalidMagicNumberException(magicNumber);
  }
  try {
    // step 1
    performSaslStep1(out, in, sasl);
    
    // step 2 (server-side only)
    byte[] remoteResponse = readSaslMessage(in);
    byte[] localResponse = sasl.evaluateChallengeOrResponse(remoteResponse);
    sendSaslMessage(out, localResponse);
    
    // SASL handshake is complete
    checkSaslComplete(sasl);
    
    return sasl.createEncryptedStreamPair(out, in);
  } catch (IOException ioe) {
    if (ioe instanceof SaslException &&
        ioe.getCause() != null &&
        ioe.getCause() instanceof InvalidEncryptionKeyException) {
      // This could just be because the client is long-lived and hasn't gotten
      // a new encryption key from the NN in a while. Upon receiving this
      // error, the client will get a new encryption key from the NN and retry
      // connecting to this DN.
      sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
    } else {
      sendGenericSaslErrorMessage(out, ioe.getMessage());
    }
    throw ioe;
  }
}
 
Developer: hopshadoop | Project: hops | Lines: 78 | Source: DataTransferEncryptor.java


Example 8: SaslServerCallbackHandler

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
public SaslServerCallbackHandler(
    BlockPoolTokenSecretManager blockPoolTokenSecretManager) {
  this.blockPoolTokenSecretManager = blockPoolTokenSecretManager;
}
 
Developer: hopshadoop | Project: hops | Lines: 5 | Source: DataTransferEncryptor.java


Example 9: SaslDataTransferServer

import org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager; // import the required package/class
/**
 * Creates a new SaslDataTransferServer.
 *
 * @param dnConf configuration of DataNode
 * @param blockPoolTokenSecretManager used for checking block access tokens
 *   and encryption keys
 */
public SaslDataTransferServer(DNConf dnConf,
    BlockPoolTokenSecretManager blockPoolTokenSecretManager) {
  this.blockPoolTokenSecretManager = blockPoolTokenSecretManager;
  this.dnConf = dnConf;
}
 
Developer: naver | Project: hadoop | Lines: 13 | Source: SaslDataTransferServer.java



Note: the org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager examples in this article were collected from open-source projects hosted on GitHub and similar platforms. Copyright of the source code remains with the original authors; consult each project's license before redistributing or reusing it.

