This article collects typical usage examples of the Java class org.apache.cassandra.io.util.DataOutputStreamPlus. If you have been wondering what DataOutputStreamPlus is for, or how to use it in practice, the curated class examples below may help.
The DataOutputStreamPlus class belongs to the org.apache.cassandra.io.util package. Twenty code examples are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Java code examples.
Example 1: flushBf
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
/**
 * Closes the index and bloom filter, making the public state of this writer valid for consumption.
 */
void flushBf()
{
    if (components.contains(Component.FILTER))
    {
        String path = descriptor.filenameFor(Component.FILTER);
        try (HadoopFileUtils.HadoopFileChannel hos = HadoopFileUtils.newFilesystemChannel(path, descriptor.getConfiguration());
             DataOutputStreamPlus stream = new BufferedDataOutputStreamPlus(hos))
        {
            // bloom filter
            FilterFactory.serialize(bf, stream);
            stream.flush();
            //SyncUtil.sync(hos);
        }
        catch (IOException e)
        {
            logger.info(e.getMessage());
            throw new FSWriteError(e, path);
        }
    }
}
Developer: Netflix, Project: sstable-adaptor, Lines: 26, Source: BigTableWriter.java
Example 2: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public synchronized void serialize(DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    if (completed)
    {
        return;
    }
    CompressionInfo compressionInfo = FileMessageHeader.serializer.serialize(header, out, version);
    final SSTableReader reader = ref.get();
    StreamWriter writer = compressionInfo == null ?
                          new StreamWriter(reader, header.sections, session) :
                          new CompressedStreamWriter(reader, header.sections, compressionInfo, session);
    writer.write(out);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 17, Source: OutgoingFileMessage.java
Example 3: sendInitMessage
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
@SuppressWarnings("resource")
public void sendInitMessage(Socket socket, boolean isForOutgoing) throws IOException
{
    StreamInitMessage message = new StreamInitMessage(FBUtilities.getBroadcastAddress(),
                                                      session.sessionIndex(),
                                                      session.planId(),
                                                      session.description(),
                                                      isForOutgoing,
                                                      session.keepSSTableLevel(),
                                                      session.isIncremental());
    ByteBuffer messageBuf = message.createMessage(false, protocolVersion);
    DataOutputStreamPlus out = getWriteChannel(socket);
    out.write(messageBuf);
    out.flush();
}
Developer: scylladb, Project: scylla-tools-java, Lines: 17, Source: ConnectionHandler.java
Example 4: hugeBFSerialization
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
static void hugeBFSerialization(boolean oldBfHashOrder) throws IOException
{
    ByteBuffer test = ByteBuffer.wrap(new byte[] {0, 1});
    File file = FileUtils.createTempFile("bloomFilterTest-", ".dat");
    BloomFilter filter = (BloomFilter) FilterFactory.getFilter(((long) Integer.MAX_VALUE / 8) + 1, 0.01d, true, oldBfHashOrder);
    filter.add(FilterTestHelper.wrap(test));
    DataOutputStreamPlus out = new BufferedDataOutputStreamPlus(new FileOutputStream(file));
    FilterFactory.serialize(filter, out);
    filter.bitset.serialize(out);
    out.close();
    filter.close();
    DataInputStream in = new DataInputStream(new FileInputStream(file));
    BloomFilter filter2 = (BloomFilter) FilterFactory.deserialize(in, true, oldBfHashOrder);
    Assert.assertTrue(filter2.isPresent(FilterTestHelper.wrap(test)));
    FileUtils.closeQuietly(in);
    filter2.close();
}
Developer: scylladb, Project: scylla-tools-java, Lines: 20, Source: BloomFilterTest.java
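Example 4's core pattern is a serialize-then-deserialize round-trip followed by a membership check. As a self-contained illustration of that same round-trip — not Cassandra's FilterFactory or BloomFilter, but a deliberately simplified, hypothetical BitSet-based filter using plain Java serialization — the pattern can be sketched as:

```java
import java.io.*;
import java.util.BitSet;

// Simplified, hypothetical bloom filter used only to illustrate the
// serialize -> deserialize -> isPresent round-trip from Example 4.
public class SimpleBloomFilter implements Serializable
{
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public SimpleBloomFilter(int size, int hashes)
    {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k bit indexes from the element's hashCode (double hashing).
    private int index(Object element, int i)
    {
        int h1 = element.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(Object element)
    {
        for (int i = 0; i < hashes; i++)
            bits.set(index(element, i));
    }

    public boolean isPresent(Object element)
    {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(element, i)))
                return false;
        return true;
    }

    public static void main(String[] args) throws Exception
    {
        File file = File.createTempFile("bloomFilterTest-", ".dat");
        SimpleBloomFilter filter = new SimpleBloomFilter(1 << 16, 3);
        filter.add("key-1");

        // Write the filter out, mirroring FilterFactory.serialize(filter, out).
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file)))
        {
            out.writeObject(filter);
        }

        // Read it back and check membership, mirroring deserialize + isPresent.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file)))
        {
            SimpleBloomFilter filter2 = (SimpleBloomFilter) in.readObject();
            if (!filter2.isPresent("key-1"))
                throw new AssertionError("element lost in round-trip");
        }
        file.delete();
    }
}
```

Note the round-trip here relies on java.io.Serializable for brevity; Cassandra instead writes the hash count and the bitset explicitly, which is what makes cross-version flags like oldBfHashOrder necessary in the real test.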
Example 5: testEstimatedHistogramWrite
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private static void testEstimatedHistogramWrite() throws IOException
{
    EstimatedHistogram hist0 = new EstimatedHistogram();
    EstimatedHistogram hist1 = new EstimatedHistogram(5000);
    long[] offsets = new long[1000];
    long[] data = new long[offsets.length + 1];
    for (int i = 0; i < offsets.length; i++)
    {
        offsets[i] = i;
        data[i] = 10 * i;
    }
    data[offsets.length] = 100000;
    EstimatedHistogram hist2 = new EstimatedHistogram(offsets, data);
    try (DataOutputStreamPlus out = getOutput("utils.EstimatedHistogram.bin"))
    {
        EstimatedHistogram.serializer.serialize(hist0, out);
        EstimatedHistogram.serializer.serialize(hist1, out);
        EstimatedHistogram.serializer.serialize(hist2, out);
    }
}
Developer: scylladb, Project: scylla-tools-java, Lines: 22, Source: SerializationsTest.java
Example 6: rewriteSSTableMetadata
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private void rewriteSSTableMetadata(Descriptor descriptor, Map<MetadataType, MetadataComponent> currentComponents) throws IOException
{
    String filePath = descriptor.tmpFilenameFor(Component.STATS);
    try (DataOutputStreamPlus out = new BufferedDataOutputStreamPlus(new FileOutputStream(filePath)))
    {
        serialize(currentComponents, out, descriptor.version);
        out.flush();
    }
    // we can't move a file on top of another file on Windows:
    if (FBUtilities.isWindows)
        FileUtils.delete(descriptor.filenameFor(Component.STATS));
    FileUtils.renameWithConfirm(filePath, descriptor.filenameFor(Component.STATS));
}
Developer: Netflix, Project: sstable-adaptor, Lines: 15, Source: MetadataSerializer.java
Example 7: saveSummary
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
/**
 * Save index summary to the Summary.db file.
 */
public static void saveSummary(Descriptor descriptor, DecoratedKey first, DecoratedKey last, IndexSummary summary)
{
    String filePath = descriptor.filenameFor(Component.SUMMARY);
    //TODO: add a retry here on deletion
    HadoopFileUtils.deleteIfExists(filePath, descriptor.getConfiguration());
    //TODO: will make the retry nicer
    int attempt = 0;
    int maxAttempt = 5;
    boolean isSuccess = false;
    while (!isSuccess) {
        if (attempt > 0)
            FBUtilities.sleepQuietly((int) Math.round(Math.pow(2, attempt)) * 1000);
        try (HadoopFileUtils.HadoopFileChannel hos = HadoopFileUtils.newFilesystemChannel(filePath, descriptor.getConfiguration());
             DataOutputStreamPlus oStream = new BufferedDataOutputStreamPlus(hos)) {
            IndexSummary.serializer.serialize(summary, oStream, descriptor.version.hasSamplingLevel());
            if (first != null && last != null) {
                ByteBufferUtil.writeWithLength(first.getKey(), oStream);
                ByteBufferUtil.writeWithLength(last.getKey(), oStream);
            }
            isSuccess = true;
        } catch (Throwable e) {
            logger.trace("Cannot save SSTable Summary: ", e);
            // possibly corrupted, so delete it and let it be rebuilt on load
            HadoopFileUtils.deleteIfExists(filePath, descriptor.getConfiguration());
            attempt++;
            if (attempt == maxAttempt) //TODO: do we need to record all retried exceptions here, or assume they're the same?
                throw new RuntimeException("Have retried " + maxAttempt + " times but still failed!", e);
        }
    }
}
Developer: Netflix, Project: sstable-adaptor, Lines: 41, Source: SSTableReader.java
Example 8: write
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
@Override
public void write(DataOutputStreamPlus out) throws IOException
{
    long totalSize = totalSize();
    logger.debug("[Stream #{}] Start streaming file {} to {}, repairedAt = {}, totalSize = {}", session.planId(),
                 sstable.getFilename(), session.peer, sstable.getSSTableMetadata().repairedAt, totalSize);
    try (ChannelProxy fc = sstable.getDataChannel().sharedCopy())
    {
        long progress = 0L;
        // calculate chunks to transfer. we want to send contiguous chunks together.
        List<Pair<Long, Long>> sections = getTransferSections(compressionInfo.chunks);
        int sectionIdx = 0;
        // stream each of the required sections of the file
        for (final Pair<Long, Long> section : sections)
        {
            // length of the section to stream
            long length = section.right - section.left;
            logger.trace("[Stream #{}] Writing section {} with length {} to stream.", session.planId(), sectionIdx++, length);
            // tracks write progress
            long bytesTransferred = 0;
            while (bytesTransferred < length)
            {
                final long bytesTransferredFinal = bytesTransferred;
                final int toTransfer = (int) Math.min(CHUNK_SIZE, length - bytesTransferred);
                limiter.acquire(toTransfer);
                long lastWrite = out.applyToChannel((wbc) -> fc.transferTo(section.left + bytesTransferredFinal, toTransfer, wbc));
                bytesTransferred += lastWrite;
                progress += lastWrite;
                session.progress(sstable.descriptor, ProgressInfo.Direction.OUT, progress, totalSize);
            }
        }
        logger.debug("[Stream #{}] Finished streaming file {} to {}, bytesTransferred = {}, totalSize = {}",
                     session.planId(), sstable.getFilename(), session.peer, progress, totalSize);
    }
}
Developer: scylladb, Project: scylla-tools-java, Lines: 40, Source: CompressedStreamWriter.java
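The inner loop of Example 8 streams each file section in CHUNK_SIZE-capped slices via FileChannel.transferTo, accumulating progress until the section is fully sent. A minimal, self-contained sketch of that same chunked transferTo loop — without the rate limiter, progress callbacks, or Cassandra types; class and method names here are hypothetical — looks like this:

```java
import java.io.*;
import java.nio.channels.*;
import java.nio.file.*;

// Stdlib sketch of Example 8's inner loop: stream a byte range of a file
// to a WritableByteChannel in bounded chunks, tracking bytes transferred.
public class ChunkedSectionWriter
{
    static final int CHUNK_SIZE = 64 * 1024; // stand-in for the real chunk cap

    // Transfer [offset, offset + length) from src to out, chunk by chunk.
    public static long writeSection(FileChannel src, long offset, long length,
                                    WritableByteChannel out) throws IOException
    {
        long bytesTransferred = 0;
        while (bytesTransferred < length)
        {
            // transferTo may move fewer bytes than requested; loop until done.
            int toTransfer = (int) Math.min(CHUNK_SIZE, length - bytesTransferred);
            long lastWrite = src.transferTo(offset + bytesTransferred, toTransfer, out);
            if (lastWrite <= 0)
                throw new EOFException("unexpected end of section");
            bytesTransferred += lastWrite; // a progress callback would go here
        }
        return bytesTransferred;
    }

    public static void main(String[] args) throws IOException
    {
        Path tmp = Files.createTempFile("section", ".bin");
        byte[] data = new byte[200_000];
        for (int i = 0; i < data.length; i++)
            data[i] = (byte) i;
        Files.write(tmp, data);

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (FileChannel fc = FileChannel.open(tmp, StandardOpenOption.READ))
        {
            long n = writeSection(fc, 100, 150_000, Channels.newChannel(sink));
            System.out.println("transferred " + n + " bytes");
        }
        Files.delete(tmp);
    }
}
```

Capping each transferTo call is what lets the real code call limiter.acquire(toTransfer) per chunk, so throughput can be throttled mid-section rather than only between sections.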
Example 9: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public void serialize(PrepareMessage message, DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    // requests
    out.writeInt(message.requests.size());
    for (StreamRequest request : message.requests)
        StreamRequest.serializer.serialize(request, out, version);
    // summaries
    out.writeInt(message.summaries.size());
    for (StreamSummary summary : message.summaries)
        StreamSummary.serializer.serialize(summary, out, version);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 12, Source: PrepareMessage.java
Example 10: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public static void serialize(StreamMessage message, DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    ByteBuffer buff = ByteBuffer.allocate(1);
    // message type
    buff.put(message.type.type);
    buff.flip();
    out.write(buff);
    message.type.outSerializer.serialize(message, out, version, session);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 10, Source: StreamMessage.java
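Example 10 frames each stream message by writing a single type byte first, then delegating the payload to that type's serializer; the reader dispatches on the same byte. A minimal stdlib sketch of this one-byte-tag framing — the message types and payload formats below are hypothetical, not Cassandra's stream protocol — can be written as:

```java
import java.io.*;

// Stdlib sketch of Example 10's framing: a 1-byte message-type tag,
// followed by a payload whose format the tag selects on the read side.
public class TaggedMessageCodec
{
    static final byte TYPE_TEXT = 1;
    static final byte TYPE_INT  = 2;

    public static void serialize(byte type, Object payload, DataOutputStream out) throws IOException
    {
        out.writeByte(type); // the type byte, like buff.put(message.type.type)
        switch (type)
        {
            case TYPE_TEXT: out.writeUTF((String) payload); break;
            case TYPE_INT:  out.writeInt((Integer) payload); break;
            default: throw new IOException("unknown type " + type);
        }
    }

    public static Object deserialize(DataInputStream in) throws IOException
    {
        byte type = in.readByte(); // dispatch on the tag, like type.outSerializer
        switch (type)
        {
            case TYPE_TEXT: return in.readUTF();
            case TYPE_INT:  return in.readInt();
            default: throw new IOException("unknown type " + type);
        }
    }

    public static void main(String[] args) throws IOException
    {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        serialize(TYPE_TEXT, "prepare", out);
        serialize(TYPE_INT, 42, out);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(deserialize(in)); // prints "prepare"
        System.out.println(deserialize(in)); // prints 42
    }
}
```

The tag-first layout is why Example 10 allocates a one-byte buffer, flips it, and writes it before anything else: the receiver must be able to pick a deserializer from the very first byte on the wire.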
Example 11: getWriteChannel
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
@SuppressWarnings("resource")
protected static DataOutputStreamPlus getWriteChannel(Socket socket) throws IOException
{
    WritableByteChannel out = socket.getChannel();
    // socket channel is null when encrypted (SSL)
    if (out == null)
        return new WrappedDataOutputStreamPlus(new BufferedOutputStream(socket.getOutputStream()));
    return new BufferedDataOutputStreamPlus(out);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 10, Source: ConnectionHandler.java
Example 12: rewriteSSTableMetadata
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private void rewriteSSTableMetadata(Descriptor descriptor, Map<MetadataType, MetadataComponent> currentComponents) throws IOException
{
    String filePath = descriptor.tmpFilenameFor(Component.STATS);
    try (DataOutputStreamPlus out = new BufferedDataOutputStreamPlus(new FileOutputStream(filePath)))
    {
        serialize(currentComponents, out, descriptor.version);
        out.flush();
    }
    // we can't move a file on top of another file on Windows:
    if (FBUtilities.isWindows())
        FileUtils.delete(descriptor.filenameFor(Component.STATS));
    FileUtils.renameWithConfirm(filePath, descriptor.filenameFor(Component.STATS));
}
Developer: scylladb, Project: scylla-tools-java, Lines: 15, Source: MetadataSerializer.java
Example 13: testBloomFilterWrite
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private static void testBloomFilterWrite(boolean offheap, boolean oldBfHashOrder) throws IOException
{
    IPartitioner partitioner = Util.testPartitioner();
    try (IFilter bf = FilterFactory.getFilter(1000000, 0.0001, offheap, oldBfHashOrder))
    {
        for (int i = 0; i < 100; i++)
            bf.add(partitioner.decorateKey(partitioner.getTokenFactory().toByteArray(partitioner.getRandomToken())));
        try (DataOutputStreamPlus out = getOutput(oldBfHashOrder ? "2.1" : "3.0", "utils.BloomFilter.bin"))
        {
            FilterFactory.serialize(bf, out);
        }
    }
}
Developer: scylladb, Project: scylla-tools-java, Lines: 14, Source: SerializationsTest.java
Example 14: testBloomFilterWrite1000
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private static void testBloomFilterWrite1000(boolean offheap, boolean oldBfHashOrder) throws IOException
{
    try (IFilter bf = FilterFactory.getFilter(1000000, 0.0001, offheap, oldBfHashOrder))
    {
        for (int i = 0; i < 1000; i++)
            bf.add(Util.dk(Int32Type.instance.decompose(i)));
        try (DataOutputStreamPlus out = getOutput(oldBfHashOrder ? "2.1" : "3.0", "utils.BloomFilter1000.bin"))
        {
            FilterFactory.serialize(bf, out);
        }
    }
}
Developer: scylladb, Project: scylla-tools-java, Lines: 13, Source: SerializationsTest.java
Example 15: testEndpointStateWrite
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
private void testEndpointStateWrite() throws IOException
{
    DataOutputStreamPlus out = getOutput("gms.EndpointState.bin");
    HeartBeatState.serializer.serialize(Statics.HeartbeatSt, out, getVersion());
    EndpointState.serializer.serialize(Statics.EndpointSt, out, getVersion());
    VersionedValue.serializer.serialize(Statics.vv0, out, getVersion());
    VersionedValue.serializer.serialize(Statics.vv1, out, getVersion());
    out.close();
    // test serializedSize
    testSerializedSize(Statics.HeartbeatSt, HeartBeatState.serializer);
    testSerializedSize(Statics.EndpointSt, EndpointState.serializer);
    testSerializedSize(Statics.vv0, VersionedValue.serializer);
    testSerializedSize(Statics.vv1, VersionedValue.serializer);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 16, Source: SerializationsTest.java
Example 16: getOutput
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
@SuppressWarnings("resource")
protected static DataOutputStreamPlus getOutput(String version, String name) throws IOException
{
    File f = new File("test/data/serialization/" + version + '/' + name);
    f.getParentFile().mkdirs();
    return new BufferedDataOutputStreamPlus(new FileOutputStream(f).getChannel());
}
Developer: scylladb, Project: scylla-tools-java, Lines: 8, Source: AbstractSerializationsTester.java
Example 17: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public File serialize(Map<MetadataType, MetadataComponent> metadata, MetadataSerializer serializer, Version version)
        throws IOException, FileNotFoundException
{
    // Serialize to tmp file
    File statsFile = File.createTempFile(Component.STATS.name, null);
    try (DataOutputStreamPlus out = new BufferedDataOutputStreamPlus(new FileOutputStream(statsFile)))
    {
        serializer.serialize(metadata, out, version);
    }
    return statsFile;
}
Developer: scylladb, Project: scylla-tools-java, Lines: 12, Source: MetadataSerializerTest.java
Example 18: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public void serialize(RetryMessage message, DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    UUIDSerializer.serializer.serialize(message.cfId, out, MessagingService.current_version);
    out.writeInt(message.sequenceNumber);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 6, Source: RetryMessage.java
Example 19: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public void serialize(ReceivedMessage message, DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    UUIDSerializer.serializer.serialize(message.cfId, out, MessagingService.current_version);
    out.writeInt(message.sequenceNumber);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 6, Source: ReceivedMessage.java
Example 20: serialize
import org.apache.cassandra.io.util.DataOutputStreamPlus; // import the required package/class
public void serialize(IncomingFileMessage message, DataOutputStreamPlus out, int version, StreamSession session) throws IOException
{
    throw new UnsupportedOperationException("Not allowed to call serialize on an incoming file");
}
Developer: scylladb, Project: scylla-tools-java, Lines: 5, Source: IncomingFileMessage.java
Note: The org.apache.cassandra.io.util.DataOutputStreamPlus examples in this article were collected from open-source projects hosted on GitHub and similar code/documentation platforms. The source code is copyrighted by its original authors; consult each project's License before distributing or reusing it. Please do not republish without permission.