This article collects typical usage examples of the Java class org.apache.cassandra.dht.Range. If you are unsure what the Range class does, how to use it, or want to see it in real code, the curated examples below may help.
The Range class belongs to the org.apache.cassandra.dht package. Twenty code examples of the Range class are shown below, sorted by popularity by default.
Example 1: testNoDifference
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* When there is no difference between the two trees, LocalSyncTask should return stats with 0 differences.
*/
@Test
public void testNoDifference() throws Throwable
{
final InetAddress ep1 = InetAddress.getByName("127.0.0.1");
final InetAddress ep2 = InetAddress.getByName("127.0.0.1");
Range<Token> range = new Range<>(partirioner.getMinimumToken(), partirioner.getRandomToken());
RepairJobDesc desc = new RepairJobDesc(UUID.randomUUID(), UUID.randomUUID(), KEYSPACE1, "Standard1", Arrays.asList(range));
MerkleTrees tree1 = createInitialTree(desc);
MerkleTrees tree2 = createInitialTree(desc);
// difference the trees
// note: we reuse the same endpoint which is bogus in theory but fine here
TreeResponse r1 = new TreeResponse(ep1, tree1);
TreeResponse r2 = new TreeResponse(ep2, tree2);
LocalSyncTask task = new LocalSyncTask(desc, r1, r2, ActiveRepairService.UNREPAIRED_SSTABLE);
task.run();
assertEquals(0, task.get().numberOfDifferences);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 26, Source: LocalSyncTaskTest.java
Example 2: deserialize
import org.apache.cassandra.dht.Range; // import the required package/class
public StreamRequest deserialize(DataInputPlus in, int version) throws IOException
{
String keyspace = in.readUTF();
long repairedAt = in.readLong();
int rangeCount = in.readInt();
List<Range<Token>> ranges = new ArrayList<>(rangeCount);
for (int i = 0; i < rangeCount; i++)
{
Token left = Token.serializer.deserialize(in, MessagingService.globalPartitioner(), version);
Token right = Token.serializer.deserialize(in, MessagingService.globalPartitioner(), version);
ranges.add(new Range<>(left, right));
}
int cfCount = in.readInt();
List<String> columnFamilies = new ArrayList<>(cfCount);
for (int i = 0; i < cfCount; i++)
columnFamilies.add(in.readUTF());
return new StreamRequest(keyspace, ranges, columnFamilies, repairedAt);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 19, Source: StreamRequest.java
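For readers reconstructing the wire format implied by this deserializer, here is a minimal sketch of the mirror-image serializer. It assumes the StreamRequest fields match the constructor arguments used above (keyspace, ranges, columnFamilies, repairedAt) and that DataOutputPlus and Token.serializer offer the corresponding write methods; the actual StreamRequest.Serializer in Cassandra may differ.
public void serialize(StreamRequest request, DataOutputPlus out, int version) throws IOException
{
    // Field order must mirror the deserializer above:
    // keyspace (UTF), repairedAt (long), range count + (left, right) token pairs, cf count + names.
    out.writeUTF(request.keyspace);
    out.writeLong(request.repairedAt);
    out.writeInt(request.ranges.size());
    for (Range<Token> range : request.ranges)
    {
        Token.serializer.serialize(range.left, out, version);
        Token.serializer.serialize(range.right, out, version);
    }
    out.writeInt(request.columnFamilies.size());
    for (String cf : request.columnFamilies)
        out.writeUTF(cf);
}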
Example 3: LeveledScanner
import org.apache.cassandra.dht.Range; // import the required package/class
public LeveledScanner(Collection<SSTableReader> sstables, Collection<Range<Token>> ranges)
{
this.ranges = ranges;
// add only sstables that intersect our range, and estimate how much data that involves
this.sstables = new ArrayList<>(sstables.size());
long length = 0;
for (SSTableReader sstable : sstables)
{
this.sstables.add(sstable);
long estimatedKeys = sstable.estimatedKeys();
double estKeysInRangeRatio = 1.0;
if (estimatedKeys > 0 && ranges != null)
estKeysInRangeRatio = ((double) sstable.estimatedKeysForRanges(ranges)) / estimatedKeys;
length += sstable.uncompressedLength() * estKeysInRangeRatio;
}
totalLength = length;
Collections.sort(this.sstables, SSTableReader.sstableComparator);
sstableIterator = this.sstables.iterator();
assert sstableIterator.hasNext(); // caller should check intersecting first
currentScanner = sstableIterator.next().getScanner(ranges, CompactionManager.instance.getRateLimiter());
}
Developer: scylladb, Project: scylla-tools-java, Lines: 26, Source: LeveledCompactionStrategy.java
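To make the length estimate in the constructor above concrete, here is a small numeric illustration with hypothetical figures (none of these numbers come from the snippet): an sstable whose keys fall one quarter inside the requested ranges contributes one quarter of its uncompressed size to totalLength.
// Hypothetical numbers illustrating the per-sstable estimate above.
long estimatedKeys = 200_000L;                // sstable.estimatedKeys()
long estimatedKeysInRanges = 50_000L;         // sstable.estimatedKeysForRanges(ranges)
double estKeysInRangeRatio = (double) estimatedKeysInRanges / estimatedKeys;   // 0.25
long uncompressedLength = 1_000_000_000L;     // sstable.uncompressedLength()
long contribution = (long) (uncompressedLength * estKeysInRangeRatio);         // 250,000,000 bytes added to totalLength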
Example 4: testGetNeighborsPlusOneInLocalDC
import org.apache.cassandra.dht.Range; // import the required package/class
@Test
public void testGetNeighborsPlusOneInLocalDC() throws Throwable
{
TokenMetadata tmd = StorageService.instance.getTokenMetadata();
// generate rf+1 nodes, and ensure that all nodes are returned
Set<InetAddress> expected = addTokens(1 + Keyspace.open(keyspaceName).getReplicationStrategy().getReplicationFactor());
expected.remove(FBUtilities.getBroadcastAddress());
// remove remote endpoints
TokenMetadata.Topology topology = tmd.cloneOnlyTokenMap().getTopology();
HashSet<InetAddress> localEndpoints = Sets.newHashSet(topology.getDatacenterEndpoints().get(DatabaseDescriptor.getLocalDataCenter()));
expected = Sets.intersection(expected, localEndpoints);
Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(keyspaceName);
Set<InetAddress> neighbors = new HashSet<InetAddress>();
for (Range<Token> range : ranges)
{
neighbors.addAll(ActiveRepairService.getNeighbors(keyspaceName, range, Arrays.asList(DatabaseDescriptor.getLocalDataCenter()), null));
}
assertEquals(expected, neighbors);
}
Developer: vcostet, Project: cassandra-kmean, Lines: 22, Source: AntiEntropyServiceTestAbstract.java
Example 5: testMoveRight
import org.apache.cassandra.dht.Range; // import the required package/class
@Test
public void testMoveRight() throws UnknownHostException
{
// Moves to the right: last part to fetch, nothing to stream
int movingNodeIdx = 1;
BigIntegerToken newToken = new BigIntegerToken("35267647932558653966460912964485513216");
BigIntegerToken[] tokens = initTokens();
BigIntegerToken[] tokensAfterMove = initTokensAfterMove(tokens, movingNodeIdx, newToken);
Pair<Set<Range<Token>>, Set<Range<Token>>> ranges = calculateStreamAndFetchRanges(tokens, tokensAfterMove, movingNodeIdx);
assertEquals("No data should be streamed", ranges.left.size(), 0);
assertEquals(ranges.right.iterator().next().left, tokens[movingNodeIdx]);
assertEquals(ranges.right.iterator().next().right, tokensAfterMove[movingNodeIdx]);
}
Developer: vcostet, Project: cassandra-kmean, Lines: 17, Source: OldNetworkTopologyStrategyTest.java
Example 6: write
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* If the key is to be associated with a valid value, a mutation is created
* for it with the given column family and columns. In the event the value
* in the column is missing (i.e., null), then it is marked for
* {@link Deletion}. Similarly, if the entire value for a key is missing
* (i.e., null), then the entire key is marked for {@link Deletion}.
*
* @param keybuff
* the key to write.
* @param value
* the value to write.
* @throws IOException
*/
@Override
public void write(ByteBuffer keybuff, List<Mutation> value) throws IOException
{
Range<Token> range = ringCache.getRange(keybuff);
// get the client for the given range, or create a new one
RangeClient client = clients.get(range);
if (client == null)
{
// haven't seen keys for this range: create new client
client = new RangeClient(ringCache.getEndpoint(range));
client.start();
clients.put(range, client);
}
for (Mutation amut : value)
client.put(Pair.create(keybuff, amut));
progressable.progress();
}
Developer: pgaref, Project: ACaZoo, Lines: 34, Source: ColumnFamilyRecordWriter.java
Example 7: SSTableScanner
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* @param sstable SSTable to scan; must not be null
* @param tokenRanges A set of token ranges to scan
* @param limiter background i/o RateLimiter; may be null
*/
private SSTableScanner(SSTableReader sstable, Collection<Range<Token>> tokenRanges, RateLimiter limiter)
{
assert sstable != null;
this.dfile = limiter == null ? sstable.openDataReader() : sstable.openDataReader(limiter);
this.ifile = sstable.openIndexReader();
this.sstable = sstable;
this.dataRange = null;
List<AbstractBounds<RowPosition>> boundsList = new ArrayList<>(tokenRanges.size());
for (Range<Token> range : Range.normalize(tokenRanges))
addRange(range.toRowBounds(), boundsList);
this.rangeIterator = boundsList.iterator();
}
Developer: vcostet, Project: cassandra-kmean, Lines: 21, Source: SSTableScanner.java
Example 8: testGetScannerForNoIntersectingRanges
import org.apache.cassandra.dht.Range; // import the required package/class
/** see CASSANDRA-5407 */
@Test
public void testGetScannerForNoIntersectingRanges() throws Exception
{
Keyspace keyspace = Keyspace.open(KEYSPACE1);
ColumnFamilyStore store = keyspace.getColumnFamilyStore("Standard1");
partitioner = store.getPartitioner();
new RowUpdateBuilder(store.metadata, 0, "k1")
.clustering("xyz")
.add("val", "abc")
.build()
.applyUnsafe();
store.forceBlockingFlush();
boolean foundScanner = false;
for (SSTableReader s : store.getLiveSSTables())
{
try (ISSTableScanner scanner = s.getScanner(new Range<Token>(t(0), t(1)), null))
{
scanner.next(); // throws exception pre 5407
foundScanner = true;
}
}
assertTrue(foundScanner);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 27, Source: SSTableReaderTest.java
Example 9: RepairJob
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* Create repair job to run on specific columnfamily
*/
public RepairJob(UUID sessionId, String keyspace, String columnFamily, Range<Token> range, boolean isSequential)
{
this.desc = new RepairJobDesc(sessionId, keyspace, columnFamily, range);
this.isSequential = isSequential;
this.treeRequests = new RequestCoordinator<InetAddress>(isSequential)
{
public void send(InetAddress endpoint)
{
ValidationRequest request = new ValidationRequest(desc, gcBefore);
MessagingService.instance().sendOneWay(request.createMessage(), endpoint);
}
};
this.differencers = new RequestCoordinator<Differencer>(isSequential)
{
public void send(Differencer d)
{
StageManager.getStage(Stage.ANTI_ENTROPY).execute(d);
}
};
}
Developer: pgaref, Project: ACaZoo, Lines: 24, Source: RepairJob.java
Example 10: getHelper
import org.apache.cassandra.dht.Range; // import the required package/class
TreeRange getHelper(Hashable hashable, Token pleft, Token pright, byte depth, Token t)
{
if (hashable instanceof Leaf)
{
// we've reached a hash: wrap it up and deliver it
return new TreeRange(this, pleft, pright, depth, hashable);
}
// else: node.
Inner node = (Inner)hashable;
if (Range.contains(pleft, node.token, t))
// left child contains token
return getHelper(node.lchild, pleft, node.token, inc(depth), t);
// else: right child contains token
return getHelper(node.rchild, node.token, pright, inc(depth), t);
}
Developer: pgaref, Project: ACaZoo, Lines: 17, Source: MerkleTree.java
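The recursion above walks from the root toward the leaf whose range covers token t, narrowing the covered interval at each inner node. Purely as an illustration (not Cassandra's implementation), the same descent can be written as a loop over the Hashable/Inner/Leaf types and the inc(depth) helper used above:
TreeRange getHelperIterative(Hashable hashable, Token pleft, Token pright, byte depth, Token t)
{
    // Walk down until we reach a leaf, narrowing the (pleft, pright) interval at each inner node.
    while (!(hashable instanceof Leaf))
    {
        Inner node = (Inner) hashable;
        if (Range.contains(pleft, node.token, t))
        {
            // left child covers the token: shrink the right bound
            hashable = node.lchild;
            pright = node.token;
        }
        else
        {
            // right child covers the token: shrink the left bound
            hashable = node.rchild;
            pleft = node.token;
        }
        depth = inc(depth);
    }
    return new TreeRange(this, pleft, pright, depth, hashable);
}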
Example 11: getPendingRangesMM
import org.apache.cassandra.dht.Range; // import the required package/class
public Multimap<Range<Token>, InetAddress> getPendingRangesMM(String keyspaceName)
{
Multimap<Range<Token>, InetAddress> map = HashMultimap.create();
PendingRangeMaps pendingRangeMaps = this.pendingRanges.get(keyspaceName);
if (pendingRangeMaps != null)
{
for (Map.Entry<Range<Token>, List<InetAddress>> entry : pendingRangeMaps)
{
Range<Token> range = entry.getKey();
for (InetAddress address : entry.getValue())
{
map.put(range, address);
}
}
}
return map;
}
Developer: scylladb, Project: scylla-tools-java, Lines: 20, Source: TokenMetadata.java
Example 12: forceRepairAsync
import org.apache.cassandra.dht.Range; // import the required package/class
public int forceRepairAsync(String keyspace, RepairParallelism parallelismDegree, Collection<String> dataCenters, Collection<String> hosts, Collection<Range<Token>> ranges, boolean fullRepair, String... columnFamilies)
{
if (ranges.isEmpty() || Keyspace.open(keyspace).getReplicationStrategy().getReplicationFactor() < 2)
return 0;
int cmd = nextRepairCommand.incrementAndGet();
if (ranges.size() > 0)
{
if (FBUtilities.isWindows() && parallelismDegree != RepairParallelism.PARALLEL)
{
logger.warn("Snapshot-based repair is not yet supported on Windows. Reverting to parallel repair.");
parallelismDegree = RepairParallelism.PARALLEL;
}
new Thread(createRepairTask(cmd, keyspace, ranges, parallelismDegree, dataCenters, hosts, fullRepair, columnFamilies)).start();
}
return cmd;
}
Developer: vcostet, Project: cassandra-kmean, Lines: 18, Source: StorageService.java
Example 13: getSplits
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* @return list of Token ranges (_not_ keys!) together with estimated key count,
* breaking up the data this node is responsible for into pieces of roughly keysPerSplit
*/
public List<Pair<Range<Token>, Long>> getSplits(String keyspaceName, String cfName, Range<Token> range, int keysPerSplit)
{
Keyspace t = Keyspace.open(keyspaceName);
ColumnFamilyStore cfs = t.getColumnFamilyStore(cfName);
List<DecoratedKey> keys = keySamples(Collections.singleton(cfs), range);
long totalRowCountEstimate = cfs.estimatedKeysForRange(range);
// splitCount should be much smaller than number of key samples, to avoid huge sampling error
int minSamplesPerSplit = 4;
int maxSplitCount = keys.size() / minSamplesPerSplit + 1;
int splitCount = Math.max(1, Math.min(maxSplitCount, (int)(totalRowCountEstimate / keysPerSplit)));
List<Token> tokens = keysToTokens(range, keys);
return getSplits(tokens, splitCount, cfs);
}
Developer: scylladb, Project: scylla-tools-java, Lines: 21, Source: StorageService.java
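To see how the split count is chosen, here is the same arithmetic with hypothetical numbers (none of them come from the snippet):
// Hypothetical illustration of the splitCount calculation above.
long totalRowCountEstimate = 1_000_000L;  // cfs.estimatedKeysForRange(range)
int keysPerSplit = 100_000;
int sampledKeys = 64;                     // keys.size() from keySamples(...)
int minSamplesPerSplit = 4;
int maxSplitCount = sampledKeys / minSamplesPerSplit + 1;                              // 17
int splitCount = Math.max(1, Math.min(maxSplitCount,
                                      (int) (totalRowCountEstimate / keysPerSplit)));  // min(17, 10) -> 10 splits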
Example 14: transfer
import org.apache.cassandra.dht.Range; // import the required package/class
private void transfer(SSTableReader sstable, List<Range<Token>> ranges) throws Exception
{
StreamPlan streamPlan = new StreamPlan("StreamingTransferTest").transferFiles(LOCAL, makeStreamingDetails(ranges, Refs.tryRef(Arrays.asList(sstable))));
streamPlan.execute().get();
verifyConnectionsAreClosed();
//cannot add files after stream session is finished
try
{
streamPlan.transferFiles(LOCAL, makeStreamingDetails(ranges, Refs.tryRef(Arrays.asList(sstable))));
fail("Should have thrown exception");
}
catch (RuntimeException e)
{
//do nothing
}
}
Developer: scylladb, Project: scylla-tools-java, Lines: 18, Source: StreamingTransferTest.java
Example 15: pendingRangeChanges
import org.apache.cassandra.dht.Range; // import the required package/class
/** @return the number of nodes bootstrapping into source's primary range */
public int pendingRangeChanges(InetAddress source)
{
int n = 0;
Collection<Range<Token>> sourceRanges = getPrimaryRangesFor(getTokens(source));
lock.readLock().lock();
try
{
for (Token token : bootstrapTokens.keySet())
for (Range<Token> range : sourceRanges)
if (range.contains(token))
n++;
}
finally
{
lock.readLock().unlock();
}
return n;
}
Developer: scylladb, Project: scylla-tools-java, Lines: 20, Source: TokenMetadata.java
Example 16: getScanners
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* Returns a list of KeyScanners given sstables and a range on which to scan.
* The default implementation simply grabs one SSTableScanner per sstable, but overriding this method
* allows for a more memory-efficient solution if we know the sstables don't overlap (see
* LeveledCompactionStrategy for instance).
*/
public ScannerList getScanners(Collection<SSTableReader> sstables, Range<Token> range)
{
RateLimiter limiter = CompactionManager.instance.getRateLimiter();
ArrayList<ISSTableScanner> scanners = new ArrayList<ISSTableScanner>();
try
{
for (SSTableReader sstable : sstables)
scanners.add(sstable.getScanner(range, limiter));
}
catch (Throwable t)
{
try
{
new ScannerList(scanners).close();
}
catch (Throwable t2)
{
t.addSuppressed(t2);
}
throw t;
}
return new ScannerList(scanners);
}
Developer: vcostet, Project: cassandra-kmean, Lines: 30, Source: AbstractCompactionStrategy.java
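A hedged usage sketch for the method above: it assumes getScanners is invoked on an AbstractCompactionStrategy instance and that ScannerList exposes its scanners in a public scanners field alongside the close() method seen in the error path; those details are inferred from the snippet, not verified against a specific Cassandra version. The names strategy, sstables and range are placeholders supplied by the caller.
AbstractCompactionStrategy.ScannerList scannerList = strategy.getScanners(sstables, range);
try
{
    for (ISSTableScanner scanner : scannerList.scanners)
    {
        while (scanner.hasNext())
            scanner.next(); // consume each partition (real code would also close what next() returns)
    }
}
finally
{
    scannerList.close(); // mirrors the cleanup getScanners() performs on its own error path
}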
Example 17: LeveledScanner
import org.apache.cassandra.dht.Range; // import the required package/class
public LeveledScanner(Collection<SSTableReader> sstables, Range<Token> range)
{
this.range = range;
// add only sstables that intersect our range, and estimate how much data that involves
this.sstables = new ArrayList<SSTableReader>(sstables.size());
long length = 0;
for (SSTableReader sstable : sstables)
{
this.sstables.add(sstable);
long estimatedKeys = sstable.estimatedKeys();
double estKeysInRangeRatio = 1.0;
if (estimatedKeys > 0 && range != null)
estKeysInRangeRatio = ((double) sstable.estimatedKeysForRanges(Collections.singleton(range))) / estimatedKeys;
length += sstable.uncompressedLength() * estKeysInRangeRatio;
}
totalLength = length;
Collections.sort(this.sstables, SSTable.sstableComparator);
sstableIterator = this.sstables.iterator();
assert sstableIterator.hasNext(); // caller should check intersecting first
currentScanner = sstableIterator.next().getScanner(range, CompactionManager.instance.getRateLimiter());
}
Developer: pgaref, Project: ACaZoo, Lines: 26, Source: LeveledCompactionStrategy.java
Example 18: getExpectedCompactedFileSize
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* Calculate expected file size of SSTable after compaction.
*
* If operation type is {@code CLEANUP} and we're not dealing with an index sstable,
* then we calculate the expected file size by checking which token ranges will be eliminated.
*
* Otherwise, we just add up all the files' size, which is the worst case file
* size for compaction of all the list of files given.
*
* @param sstables SSTables to calculate expected compacted file size
* @param operation Operation type
* @return Expected file size of SSTable after compaction
*/
public long getExpectedCompactedFileSize(Iterable<SSTableReader> sstables, OperationType operation)
{
if (operation != OperationType.CLEANUP || isIndex())
{
return SSTableReader.getTotalBytes(sstables);
}
// cleanup size estimation only counts bytes for keys local to this node
long expectedFileSize = 0;
Collection<Range<Token>> ranges = StorageService.instance.getLocalRanges(keyspace.getName());
for (SSTableReader sstable : sstables)
{
List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(ranges);
for (Pair<Long, Long> position : positions)
expectedFileSize += position.right - position.left;
}
double compressionRatio = metric.compressionRatio.getValue();
if (compressionRatio > 0d)
expectedFileSize *= compressionRatio;
return expectedFileSize;
}
Developer: scylladb, Project: scylla-tools-java, Lines: 37, Source: ColumnFamilyStore.java
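A quick numeric illustration of the CLEANUP branch above, using hypothetical figures: one sstable whose locally owned ranges map to two on-disk position spans, with a compression ratio of 0.4 reported by the metric.
// Hypothetical illustration of the cleanup estimate above.
long expectedFileSize = 0;
expectedFileSize += 40_000_000L - 10_000_000L;  // first  (left, right) position pair -> 30,000,000 bytes
expectedFileSize += 95_000_000L - 80_000_000L;  // second (left, right) position pair -> 15,000,000 bytes
double compressionRatio = 0.4;                  // metric.compressionRatio.getValue()
if (compressionRatio > 0d)
    expectedFileSize *= compressionRatio;       // 45,000,000 * 0.4 = 18,000,000 bytes expected on disk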
Example 19: getScanner
import org.apache.cassandra.dht.Range; // import the required package/class
/**
* Direct I/O SSTableScanner over a defined collection of ranges of tokens.
*
* @param ranges the token ranges to cover
* @param limiter background i/o RateLimiter; may be null
* @return A Scanner for seeking over the rows of the SSTable.
*/
public ISSTableScanner getScanner(Collection<Range<Token>> ranges, RateLimiter limiter)
{
if (ranges != null)
return BigTableScanner.getScanner(this, ranges, limiter);
else
return getScanner(limiter);
}
Developer: Netflix, Project: sstable-adaptor, Lines: 14, Source: BigTableReader.java
Example 20: getScanner
import org.apache.cassandra.dht.Range; // import the required package/class
public static ISSTableScanner getScanner(SSTableReader sstable, Collection<Range<Token>> tokenRanges, RateLimiter limiter)
{
// We want to avoid allocating an SSTableScanner if the ranges don't overlap the sstable (#5249)
List<Pair<Long, Long>> positions = sstable.getPositionsForRanges(tokenRanges);
if (positions.isEmpty())
return new EmptySSTableScanner(sstable);
return new BigTableScanner(sstable, ColumnFilter.all(sstable.metadata), null, limiter, false, makeBounds(sstable, tokenRanges).iterator());
}
Developer: Netflix, Project: sstable-adaptor, Lines: 10, Source: BigTableScanner.java
Note: the org.apache.cassandra.dht.Range examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms; the snippets come from open-source projects contributed by their respective authors, who retain copyright. Please consult each project's License before redistributing or reusing the code; do not republish without permission.