This article collects typical usage examples of the Java class org.apache.hadoop.crypto.CryptoOutputStream. If you are wondering what CryptoOutputStream is for and how to use it, the selected code examples below may help.
The CryptoOutputStream class belongs to the org.apache.hadoop.crypto package. Ten code examples of the class are shown below, sorted by popularity by default.
Example 1: createStreamPair
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Create IOStreamPair of {@link org.apache.hadoop.crypto.CryptoInputStream}
* and {@link org.apache.hadoop.crypto.CryptoOutputStream}
*
* @param conf the configuration
* @param cipherOption negotiated cipher option
* @param out underlying output stream
* @param in underlying input stream
* @param isServer whether this is the server side
* @return IOStreamPair the stream pair
* @throws IOException for any error
*/
public static IOStreamPair createStreamPair(Configuration conf,
CipherOption cipherOption, OutputStream out, InputStream in,
boolean isServer) throws IOException {
if (LOG.isDebugEnabled()) {
LOG.debug("Creating IOStreamPair of CryptoInputStream and " +
"CryptoOutputStream.");
}
CryptoCodec codec = CryptoCodec.getInstance(conf,
cipherOption.getCipherSuite());
byte[] inKey = cipherOption.getInKey();
byte[] inIv = cipherOption.getInIv();
byte[] outKey = cipherOption.getOutKey();
byte[] outIv = cipherOption.getOutIv();
InputStream cIn = new CryptoInputStream(in, codec,
isServer ? inKey : outKey, isServer ? inIv : outIv);
OutputStream cOut = new CryptoOutputStream(out, codec,
isServer ? outKey : inKey, isServer ? outIv : inIv);
return new IOStreamPair(cIn, cOut);
}
Developer: naver, Project: hadoop, Lines: 32, Source: DataTransferSaslUtil.java
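For orientation, here is a minimal client-side sketch of how createStreamPair might be invoked once the cipher option has been negotiated. The wrapper class, method name, socket streams and the choice of AES/CTR/NoPadding are illustrative assumptions and are not part of the example above; only DataTransferSaslUtil.createStreamPair itself comes from the snippet.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CipherOption;
import org.apache.hadoop.crypto.CipherSuite;
import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil;

public class ClientStreamWrapping {
  /** Hypothetical helper: wrap raw socket streams once key/IV material is negotiated. */
  public static IOStreamPair wrapClientStreams(Configuration conf,
      byte[] inKey, byte[] inIv, byte[] outKey, byte[] outIv,
      InputStream socketIn, OutputStream socketOut) throws IOException {
    // Assumes the CipherOption constructor that carries both key/IV pairs.
    CipherOption option = new CipherOption(
        CipherSuite.AES_CTR_NOPADDING, inKey, inIv, outKey, outIv);
    // isServer = false: on the client the key/IV roles are mirrored relative to
    // the server side, which createStreamPair (above) handles internally.
    return DataTransferSaslUtil.createStreamPair(
        conf, option, socketOut, socketIn, false);
  }
}

The returned pair's in field decrypts data read from socketIn, and its out field is the CryptoOutputStream that encrypts data written to socketOut.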
Example 2: createWrappedOutputStream
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Wraps the stream in a CryptoOutputStream if the underlying file is
* encrypted.
*/
public HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos,
FileSystem.Statistics statistics, long startPos) throws IOException {
final FileEncryptionInfo feInfo = dfsos.getFileEncryptionInfo();
if (feInfo != null) {
// File is encrypted, wrap the stream in a crypto stream.
// Currently only one version, so no special logic based on the version #
getCryptoProtocolVersion(feInfo);
final CryptoCodec codec = getCryptoCodec(conf, feInfo);
KeyVersion decrypted = decryptEncryptedDataEncryptionKey(feInfo);
final CryptoOutputStream cryptoOut =
new CryptoOutputStream(dfsos, codec,
decrypted.getMaterial(), feInfo.getIV(), startPos);
return new HdfsDataOutputStream(cryptoOut, statistics, startPos);
} else {
// No FileEncryptionInfo present so no encryption.
return new HdfsDataOutputStream(dfsos, statistics, startPos);
}
}
Developer: naver, Project: hadoop, Lines: 23, Source: DFSClient.java
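From application code this wrapping is transparent: creating a file inside an HDFS encryption zone already returns a stream whose wrapped stream is a CryptoOutputStream. A minimal sketch, assuming an existing encryption zone at the hypothetical path /zone:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EncryptionZoneWriteDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // "/zone" is assumed to be an existing encryption zone; the stream returned
    // here is an HdfsDataOutputStream wrapping a CryptoOutputStream, built by
    // createWrappedOutputStream as shown above.
    try (FSDataOutputStream out = fs.create(new Path("/zone/secret.txt"))) {
      out.write("sensitive bytes".getBytes(StandardCharsets.UTF_8));
    }
  }
}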
Example 3: createStreamPair
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Create IOStreamPair of {@link org.apache.hadoop.crypto.CryptoInputStream}
* and {@link org.apache.hadoop.crypto.CryptoOutputStream}
*
* @param conf the configuration
* @param cipherOption negotiated cipher option
* @param out underlying output stream
* @param in underlying input stream
* @param isServer whether this is the server side
* @return IOStreamPair the stream pair
* @throws IOException for any error
*/
public static IOStreamPair createStreamPair(Configuration conf,
CipherOption cipherOption, OutputStream out, InputStream in,
boolean isServer) throws IOException {
LOG.debug("Creating IOStreamPair of CryptoInputStream and "
+ "CryptoOutputStream.");
CryptoCodec codec = CryptoCodec.getInstance(conf,
cipherOption.getCipherSuite());
byte[] inKey = cipherOption.getInKey();
byte[] inIv = cipherOption.getInIv();
byte[] outKey = cipherOption.getOutKey();
byte[] outIv = cipherOption.getOutIv();
InputStream cIn = new CryptoInputStream(in, codec,
isServer ? inKey : outKey, isServer ? inIv : outIv);
OutputStream cOut = new CryptoOutputStream(out, codec,
isServer ? outKey : inKey, isServer ? outIv : inIv);
return new IOStreamPair(cIn, cOut);
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 30, Source: DataTransferSaslUtil.java
Example 4: hsync
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Sync buffered data to DataNodes (flush to disk devices).
*
* @param syncFlags
* Indicates the detailed semantics and actions of the hsync.
* @throws IOException
* @see FSDataOutputStream#hsync()
*/
public void hsync(EnumSet<SyncFlag> syncFlags) throws IOException {
OutputStream wrappedStream = getWrappedStream();
if (wrappedStream instanceof CryptoOutputStream) {
((CryptoOutputStream) wrappedStream).flush();
wrappedStream = ((CryptoOutputStream) wrappedStream).getWrappedStream();
}
((DFSOutputStream) wrappedStream).hsync(syncFlags);
}
Developer: naver, Project: hadoop, Lines: 17, Source: HdfsDataOutputStream.java
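A usage sketch for this hsync overload; the file path and payload are placeholders, and SyncFlag.UPDATE_LENGTH is one of the flags defined on HdfsDataOutputStream:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;

public class HsyncDemo {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/hsync-demo.txt"))) {
      out.write("record".getBytes(StandardCharsets.UTF_8));
      if (out instanceof HdfsDataOutputStream) {
        // UPDATE_LENGTH also updates the file length metadata on the NameNode;
        // for an encrypted file the CryptoOutputStream is flushed first (see above).
        ((HdfsDataOutputStream) out).hsync(EnumSet.of(SyncFlag.UPDATE_LENGTH));
      } else {
        out.hsync();
      }
    }
  }
}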
Example 5: HdfsDataOutputStream
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
public HdfsDataOutputStream(CryptoOutputStream out,
FileSystem.Statistics stats, long startPosition) throws IOException {
super(out, stats, startPosition);
Preconditions.checkArgument(
out.getWrappedStream() instanceof DFSOutputStream,
"CryptoOutputStream should wrap a DFSOutputStream");
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 8, Source: HdfsDataOutputStream.java
Example 6: hsync
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Sync buffered data to DataNodes (flush to disk devices).
*
* @param syncFlags
* Indicates the detailed semantics and actions of the hsync.
* @throws IOException
* @see FSDataOutputStream#hsync()
*/
public void hsync(EnumSet<SyncFlag> syncFlags) throws IOException {
OutputStream wrappedStream = getWrappedStream();
if (wrappedStream instanceof CryptoOutputStream) {
wrappedStream.flush();
wrappedStream = ((CryptoOutputStream) wrappedStream).getWrappedStream();
}
((DFSOutputStream) wrappedStream).hsync(syncFlags);
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 17, Source: HdfsDataOutputStream.java
Example 7: CryptoFSDataOutputStream
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
public CryptoFSDataOutputStream(FSDataOutputStream out, CryptoCodec codec,
int bufferSize, byte[] key, byte[] iv) throws IOException {
super(new CryptoOutputStream(out, codec, bufferSize, key, iv,
out.getPos()), null, out.getPos());
this.fsOut = out;
}
Developer: nucypher, Project: hadoop-oss, Lines: 7, Source: CryptoFSDataOutputStream.java
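A sketch of using this constructor to encrypt data written through any Hadoop FileSystem with caller-managed key material; the 16-byte key/IV values, the 8192-byte buffer size, and the output path are illustrative assumptions:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CryptoCodec;
import org.apache.hadoop.crypto.CryptoFSDataOutputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CryptoFSDataOutputStreamDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    FSDataOutputStream raw = fs.create(new Path("/tmp/encrypted.bin"));
    CryptoCodec codec = CryptoCodec.getInstance(conf); // AES/CTR/NoPadding by default
    byte[] key = new byte[16]; // placeholder AES-128 key material
    byte[] iv = new byte[16];  // placeholder IV; must match when decrypting
    try (CryptoFSDataOutputStream out =
             new CryptoFSDataOutputStream(raw, codec, 8192, key, iv)) {
      out.write("plaintext in, ciphertext on disk".getBytes(StandardCharsets.UTF_8));
    }
  }
}

Reading the data back requires a CryptoFSDataInputStream (or CryptoInputStream) built with the same codec, key and IV.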
Example 8: HdfsDataOutputStream
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
public HdfsDataOutputStream(CryptoOutputStream out, FileSystem.Statistics stats,
long startPosition) throws IOException {
super(out, stats, startPosition);
Preconditions.checkArgument(out.getWrappedStream() instanceof DFSOutputStream,
"CryptoOutputStream should wrap a DFSOutputStream");
}
Developer: naver, Project: hadoop, Lines: 7, Source: HdfsDataOutputStream.java
Example 9: getCurrentBlockReplication
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Get the actual number of replicas of the current block.
*
* This can be different from the designated replication factor of the file
* because the namenode does not maintain replication for the blocks which are
* currently being written to. Depending on the configuration, the client may
* continue to write to a block even if a few datanodes in the write pipeline
* have failed, or the client may add new datanodes once a datanode has
* failed.
*
* @return the number of valid replicas of the current block
*/
public synchronized int getCurrentBlockReplication() throws IOException {
OutputStream wrappedStream = getWrappedStream();
if (wrappedStream instanceof CryptoOutputStream) {
wrappedStream = ((CryptoOutputStream) wrappedStream).getWrappedStream();
}
return ((DFSOutputStream) wrappedStream).getCurrentBlockReplication();
}
Developer: naver, Project: hadoop, Lines: 20, Source: HdfsDataOutputStream.java
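A sketch of calling getCurrentBlockReplication from application code while a file is still being written; the path, payload and replication factor are illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;

public class CurrentReplicationCheck {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/repl-check.bin"), (short) 3)) {
      out.write(new byte[1024]);
      out.hflush(); // ensure a block and write pipeline actually exist
      if (out instanceof HdfsDataOutputStream) {
        int live = ((HdfsDataOutputStream) out).getCurrentBlockReplication();
        // May be lower than the requested factor of 3 if pipeline datanodes failed.
        System.out.println("Live replicas in the current pipeline: " + live);
      }
    }
  }
}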
Example 10: getCurrentBlockReplication
import org.apache.hadoop.crypto.CryptoOutputStream; // import the required package/class
/**
* Get the actual number of replicas of the current block.
*
* This can be different from the designated replication factor of the file
* because the namenode does not maintain replication for the blocks which are
* currently being written to. Depending on the configuration, the client may
* continue to write to a block even if a few datanodes in the write pipeline
* have failed, or the client may add new datanodes once a datanode has
* failed.
*
* @return the number of valid replicas of the current block
*/
public synchronized int getCurrentBlockReplication() throws IOException {
OutputStream wrappedStream = getWrappedStream();
if (wrappedStream instanceof CryptoOutputStream) {
wrappedStream = ((CryptoOutputStream) wrappedStream).getWrappedStream();
}
return ((DFSOutputStream) wrappedStream).getCurrentBlockReplication();
}
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 20, Source: HdfsDataOutputStream.java
Note: The org.apache.hadoop.crypto.CryptoOutputStream examples in this article were collected from GitHub, MSDocs and other source-code and documentation hosting platforms. The snippets come from open-source projects contributed by many developers; copyright of the source code belongs to the original authors, and distribution or use should follow the corresponding project's license. Do not reproduce without permission.