This article collects typical usage examples of the Java class org.rocksdb.FlushOptions. If you are unsure what the FlushOptions class is for or how to use it, the selected class code examples below may help.
The FlushOptions class belongs to the org.rocksdb package. Five code examples of the class are shown below, ordered by popularity by default.
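Before the examples, here is a minimal, self-contained sketch of the basic FlushOptions pattern: open a database, write a key, and force a blocking flush of the memtable. This snippet is illustrative only and is not taken from any of the projects below; the database path is a placeholder, and it assumes a RocksDB Java version in which the option classes implement AutoCloseable.

import org.rocksdb.FlushOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class FlushOptionsDemo {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/flush-options-demo"); // placeholder path
             FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
            db.put("key".getBytes(), "value".getBytes());
            // Block until the memtable contents have been written to SST files on disk.
            db.flush(flushOptions);
        }
    }
}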
Example 1: checkpoint
import org.rocksdb.FlushOptions; // import the required package/class
/**
 * Flush the data in the RocksDB memtable to disk, then create a checkpoint.
 *
 * @param batchId id of the batch the checkpoint is created for
 */
@Override
public void checkpoint(long batchId) {
    long startTime = System.currentTimeMillis();
    try {
        rocksDb.flush(new FlushOptions());
        Checkpoint cp = Checkpoint.create(rocksDb);
        cp.createCheckpoint(getLocalCheckpointPath(batchId));
    } catch (RocksDBException e) {
        LOG.error("Failed to create checkpoint for batch-" + batchId, e);
        throw new RuntimeException(e.getMessage());
    }
    if (JStormMetrics.enabled)
        rocksDbFlushAndCpLatency.update(System.currentTimeMillis() - startTime);
}
Developer: alibaba, Project: jstorm, Lines: 21, Source: RocksDbHdfsState.java
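One detail worth noting in example 1: the FlushOptions instance passed to rocksDb.flush is never closed, so its native handle is left to the finalizer. A hedged variant of the same flush-then-checkpoint step, reusing the names from the snippet above and assuming a RocksDB version with AutoCloseable option classes, might look like this sketch (not the project's actual code):

try (FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
    // Persist the memtable before snapshotting so the checkpoint contains all accepted writes.
    rocksDb.flush(flushOptions);
    Checkpoint cp = Checkpoint.create(rocksDb);
    cp.createCheckpoint(getLocalCheckpointPath(batchId));
} catch (RocksDBException e) {
    LOG.error("Failed to create checkpoint for batch-" + batchId, e);
    throw new RuntimeException(e);
}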
Example 2: dbShutdownHandler
import org.rocksdb.FlushOptions; // import the required package/class
@Override
protected void dbShutdownHandler()
    throws Exception
{
    log.alarm("RocksDB: flushing");
    FlushOptions fl = new FlushOptions();
    fl.setWaitForFlush(true);
    db.flush(fl);
    log.alarm("RocksDB: flush complete");
}
Developer: fireduck64, Project: jelectrum, Lines: 13, Source: JRocksDB.java
Example 3: rocksDbTest
import org.rocksdb.FlushOptions; // import the required package/class
private static void rocksDbTest(RocksDB db, List<ColumnFamilyHandle> handlers) {
    try {
        ColumnFamilyHandle handler1 = null;
        ColumnFamilyHandle handler2 = null;
        if (handlers.size() > 0) {
            // skip the default column family
            handler1 = handlers.get(1);
            handler2 = handlers.get(2);
        } else {
            handler1 = db.createColumnFamily(new ColumnFamilyDescriptor("test1".getBytes()));
            handler2 = db.createColumnFamily(new ColumnFamilyDescriptor("test2".getBytes()));
        }
        int startValue1 = getStartValue(db, handler1);
        int startValue2 = getStartValue(db, handler2);
        Checkpoint cp = Checkpoint.create(db);
        if (isCompaction) {
            db.compactRange();
            LOG.info("Compaction!");
        }
        long flushWaitTime = System.currentTimeMillis() + flushInterval;
        for (int i = 0; i < putNum || putNum == -1; i++) {
            db.put(handler1, String.valueOf(i % 1000).getBytes(), String.valueOf(startValue1 + i).getBytes());
            db.put(handler2, String.valueOf(i % 1000).getBytes(), String.valueOf(startValue2 + i).getBytes());
            if (isFlush && flushWaitTime <= System.currentTimeMillis()) {
                db.flush(new FlushOptions());
                if (isCheckpoint) {
                    cp.createCheckpoint(cpPath + "/" + i);
                }
                flushWaitTime = System.currentTimeMillis() + flushInterval;
            }
        }
    } catch (RocksDBException e) {
        LOG.error("Failed to put or flush", e);
    }
}
Developer: alibaba, Project: jstorm, Lines: 39, Source: RocksDbUnitTest.java
Example 4: FeatureStoreRocksDb
import org.rocksdb.FlushOptions; // import the required package/class
FeatureStoreRocksDb(MetricsContext metricsContext, File dbPath) {
    MetricRegistry metrics = metricsContext.metrics();
    String context = metricsContext.context();
    putTimer = metrics.timer(MetricRegistry.name(
        context + "." + METRICS_PATH, "putTimer"));
    putMeter = metrics.meter(MetricRegistry.name(
        context + "." + METRICS_PATH, "putMeter"));
    this.loadAllTimer = metrics.timer(MetricRegistry.name(
        context + "." + METRICS_PATH, "loadAllTimer"));
    this.loadAllMeter = metrics.meter(MetricRegistry.name(
        context + "." + METRICS_PATH, "loadAllMeter"));
    this.findAllTimer = metrics.timer(MetricRegistry.name(
        context + "." + METRICS_PATH, "findAllTimer"));
    this.findAllMeter = metrics.meter(MetricRegistry.name(
        context + "." + METRICS_PATH, "findAllMeter"));
    BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
    tableConfig.setBlockCacheSize(BLOCK_CACHE_SIZE);
    tableConfig.setBlockSize(BLOCK_SIZE);
    options = new Options();
    options.setTableFormatConfig(tableConfig);
    options.setWriteBufferSize(WRITE_BUFFER_SIZE);
    options.setCompressionType(COMPRESSION_TYPE);
    options.setCompactionStyle(COMPACTION_STYLE);
    options.setMaxWriteBufferNumber(MAX_WRITE_BUFFER_NUMBER);
    options.setCreateIfMissing(CREATE_IF_MISSING);
    options.setErrorIfExists(ERROR_IF_EXISTS);
    writeOptions = new WriteOptions();
    writeOptions.setDisableWAL(DISABLE_WAL);
    writeOptions.setSync(true);
    readOptions = new ReadOptions();
    readOptions.setVerifyChecksums(true);
    readOptions.setFillCache(true);
    flushOptions = new FlushOptions();
    flushOptions.setWaitForFlush(WAIT_FOR_FLUSH);
    final File parent = new File("/tmp", "outland");
    rocksDir = new File(dbPath, "feature-store");
    //noinspection ResultOfMethodCallIgnored
    rocksDir.getParentFile().mkdirs();
    rocks = initializeRocksDb(); // todo: move this out?
}
Developer: dehora, Project: outland, Lines: 47, Source: FeatureStoreRocksDb.java
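The constructor above only builds flushOptions; example 4 does not show where it is used. A plausible shutdown path, flushing with the pre-built options and then releasing the native objects, could look like the sketch below. The close method and its cleanup ordering are assumptions for illustration, not code from the outland project, and it presumes a RocksDB version where the database and option classes implement AutoCloseable.

void close() {
    try {
        // When WAIT_FOR_FLUSH is true, this call blocks until the memtable
        // contents are durable before the native handles are released.
        rocks.flush(flushOptions);
    } catch (RocksDBException e) {
        throw new RuntimeException("Failed to flush RocksDB feature store", e);
    } finally {
        rocks.close();
        options.close();
        writeOptions.close();
        readOptions.close();
        flushOptions.close();
    }
}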
Example 5: openDB
import org.rocksdb.FlushOptions; // import the required package/class
@SuppressWarnings("unchecked")
public void openDB(ProcessorContext context) {
    // initialize the default rocksdb options
    final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
    tableConfig.setBlockCacheSize(BLOCK_CACHE_SIZE);
    tableConfig.setBlockSize(BLOCK_SIZE);
    options = new Options();
    options.setTableFormatConfig(tableConfig);
    options.setWriteBufferSize(WRITE_BUFFER_SIZE);
    options.setCompressionType(COMPRESSION_TYPE);
    options.setCompactionStyle(COMPACTION_STYLE);
    options.setMaxWriteBufferNumber(MAX_WRITE_BUFFERS);
    options.setCreateIfMissing(true);
    options.setErrorIfExists(false);
    options.setInfoLogLevel(InfoLogLevel.ERROR_LEVEL);
    // this is the recommended way to increase parallelism in RocksDb
    // note that the current implementation of setIncreaseParallelism affects the number
    // of compaction threads but not flush threads (the latter remains one). Also,
    // the parallelism value needs to be at least two because the code in
    // https://github.com/facebook/rocksdb/blob/62ad0a9b19f0be4cefa70b6b32876e764b7f3c11/util/options.cc#L580
    // subtracts one from the value passed to determine the number of compaction threads
    // (this could be a bug in the RocksDB code and their devs have been contacted).
    options.setIncreaseParallelism(Math.max(Runtime.getRuntime().availableProcessors(), 2));
    wOptions = new WriteOptions();
    wOptions.setDisableWAL(true);
    fOptions = new FlushOptions();
    fOptions.setWaitForFlush(true);
    final Map<String, Object> configs = context.appConfigs();
    final Object configSetterValue = configs.get(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG);
    final Class<RocksDBConfigSetter> configSetterClass = (Class<RocksDBConfigSetter>) ConfigDef.parseType(
        StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
        configSetterValue,
        ConfigDef.Type.CLASS);
    if (configSetterClass != null) {
        final RocksDBConfigSetter configSetter = Utils.newInstance(configSetterClass);
        configSetter.setConfig(name, options, configs);
    }
    // we need to construct the serde while opening DB since
    // it is also triggered by windowed DB segments without initialization
    this.serdes = new StateSerdes<>(
        ProcessorStateManager.storeChangelogTopic(context.applicationId(), name),
        keySerde == null ? (Serde<K>) context.keySerde() : keySerde,
        valueSerde == null ? (Serde<V>) context.valueSerde() : valueSerde);
    this.dbDir = new File(new File(context.stateDir(), parentDir), this.name);
    try {
        this.db = openDB(this.dbDir, this.options, TTL_SECONDS);
    } catch (IOException e) {
        throw new StreamsException(e);
    }
}
Developer: YMCoding, Project: kafka-0.11.0.0-src-with-comment, Lines: 57, Source: RocksDBStore.java
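Example 5 likewise only constructs fOptions; in the Kafka Streams store, the options are later handed to db.flush when the store is flushed. A hedged sketch of what that flush path might look like follows; the method name flushInternal and the exception wrapping are assumptions for illustration, not a verbatim excerpt from the Kafka source.

private void flushInternal() {
    if (db == null) {
        return;
    }
    try {
        // fOptions was built with setWaitForFlush(true), so the call returns only
        // after the memtable has been persisted to SST files.
        db.flush(fOptions);
    } catch (RocksDBException e) {
        throw new ProcessorStateException("Error while executing flush from store " + name, e);
    }
}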
Note: The org.rocksdb.FlushOptions class examples in this article were collected from GitHub, MSDocs, and other source code and documentation platforms, and the snippets were selected from open-source projects contributed by their respective developers. The source code copyright belongs to the original authors; please refer to each project's license before distributing or using the code. Do not republish without permission.