
Java MemorySegment Class Code Examples


This article collects typical usage examples of the Java class org.apache.flink.core.memory.MemorySegment. If you are wondering what the MemorySegment class is for and how to use it, the curated examples below may help.



The MemorySegment class belongs to the org.apache.flink.core.memory package. The 20 code examples below are ordered by popularity by default.
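
Before working through the examples, the following minimal sketch shows the core MemorySegment operations that recur below: allocating a segment, wrapping an existing byte array, and the absolute put/get accessors. The sizes and values are illustrative assumptions, not taken from any particular example.

import org.apache.flink.core.memory.MemorySegment;
import org.apache.flink.core.memory.MemorySegmentFactory;

public class MemorySegmentBasics {
	public static void main(String[] args) {
		// allocate an unpooled segment backed by a fresh (zeroed) byte array
		MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(64);

		// absolute writes at explicit byte offsets
		segment.putInt(0, 42);
		segment.putLongBigEndian(4, 123456789L); // big-endian variant, as used for normalized keys

		// absolute reads
		int i = segment.getInt(0);            // 42
		long l = segment.getLongBigEndian(4); // 123456789

		// wrap an existing byte array without copying it
		byte[] backing = new byte[16];
		MemorySegment wrapped = MemorySegmentFactory.wrap(backing);
		wrapped.put(0, (byte) 0x7F); // single-byte write, visible through 'backing'

		System.out.println(i + " " + l + " " + backing[0]);
	}
}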

Example 1: nextSegment

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
@Override
protected MemorySegment nextSegment(MemorySegment current, int bytesUsed) throws IOException {
	finalizeSegment(current, bytesUsed);
	
	final MemorySegment next;
	if (this.writer == null) {
		this.targetList.add(current);
		next = this.memSource.nextSegment();
	} else {
		this.writer.writeBlock(current);
		try {
			next = this.writer.getReturnQueue().take();
		} catch (InterruptedException iex) {
			throw new IOException("Hash Join Partition was interrupted while grabbing a new write-behind buffer.");
		}
	}
	
	this.currentBlockNumber++;
	return next;
}
 
Developer: axbaretto, Project: flink, Lines: 21, Source: HashPartition.java


Example 2: close

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
private void close(boolean delete) throws IOException {
	try {
		// send off the last segment, if we have not been closed before
		MemorySegment current = getCurrentSegment();
		if (current != null) {
			writeSegment(current, getCurrentPositionInSegment());
		}

		clear();
		if (delete) {
			writer.closeAndDelete();
		} else {
			writer.close();
		}
	}
	finally {
		memManager.release(memory);
	}
}
 
Developer: axbaretto, Project: flink, Lines: 20, Source: FileChannelOutputView.java


Example 3: buildBloomFilterForBucketsInPartition

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
final protected void buildBloomFilterForBucketsInPartition(int partNum, HashPartition<BT, PT> partition) {
	// Find all the buckets which belong to this partition, and build a bloom filter for each bucket (including its overflow buckets).
	final int bucketsPerSegment = this.bucketsPerSegmentMask + 1;

	int numSegs = this.buckets.length;
	// go over all segments that are part of the table
	for (int i = 0, bucket = 0; i < numSegs && bucket < numBuckets; i++) {
		final MemorySegment segment = this.buckets[i];
		// go over all buckets in the segment
		for (int k = 0; k < bucketsPerSegment && bucket < numBuckets; k++, bucket++) {
			final int bucketInSegmentOffset = k * HASH_BUCKET_SIZE;
			byte partitionNumber = segment.get(bucketInSegmentOffset + HEADER_PARTITION_OFFSET);
			if (partitionNumber == partNum) {
				byte status = segment.get(bucketInSegmentOffset + HEADER_STATUS_OFFSET);
				if (status == BUCKET_STATUS_IN_MEMORY) {
					buildBloomFilterForBucket(bucketInSegmentOffset, segment, partition);
				}
			}
		}
	}
}
 
Developer: axbaretto, Project: flink, Lines: 22, Source: MutableHashTable.java


Example 4: buildBloomFilterForBucket

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * Uses all the bucket memory except the bucket header as the bit set of the bloom filter, and
 * builds the filter from the hash codes of the build-side records.
 */
final void buildBloomFilterForBucket(int bucketInSegmentPos, MemorySegment bucket, HashPartition<BT, PT> p) {
	final int count = bucket.getShort(bucketInSegmentPos + HEADER_COUNT_OFFSET);
	if (count <= 0) {
		return;
	}

	int[] hashCodes = new int[count];
	// The hash codes and the bloom filter occupy the same bytes, so read all the hash codes out first and then write the bloom filter back over them.
	for (int i = 0; i < count; i++) {
		hashCodes[i] = bucket.getInt(bucketInSegmentPos + BUCKET_HEADER_LENGTH + i * HASH_CODE_LEN);
	}
	this.bloomFilter.setBitsLocation(bucket, bucketInSegmentPos + BUCKET_HEADER_LENGTH);
	for (int hashCode : hashCodes) {
		this.bloomFilter.addHash(hashCode);
	}
	buildBloomFilterForExtraOverflowSegments(bucketInSegmentPos, bucket, p);
}
 
Developer: axbaretto, Project: flink, Lines: 22, Source: MutableHashTable.java
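
Examples 3 and 4 point Flink's org.apache.flink.runtime.operators.util.BloomFilter at a region of bucket memory via setBitsLocation and then feed it record hash codes. The standalone sketch below shows that same pattern outside the hash table; the constructor arguments (expected entries, byte size) and the hash values are illustrative assumptions.

import org.apache.flink.core.memory.MemorySegment;
import org.apache.flink.core.memory.MemorySegmentFactory;
import org.apache.flink.runtime.operators.util.BloomFilter;

public class BloomFilterSketch {
	public static void main(String[] args) {
		final int byteSize = 128; // size of the bit-set region, in bytes (assumed)
		MemorySegment segment = MemorySegmentFactory.allocateUnpooledSegment(byteSize);

		// assumed sizing: ~100 expected entries over a 128-byte bit set
		BloomFilter filter = new BloomFilter(100, byteSize);
		filter.setBitsLocation(segment, 0); // the bit set lives at offset 0 of the segment

		// add the hash codes of the "build side" records, as in example 4
		for (int hash : new int[] {1234, 5678, 91011}) {
			filter.addHash(hash);
		}

		System.out.println(filter.testHash(1234)); // true: bloom filters have no false negatives
		System.out.println(filter.testHash(4321)); // most likely false, though false positives are possible
	}
}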


Example 5: putNormalizedKey

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
@Override
public void putNormalizedKey(T iValue, MemorySegment target, int offset, int numBytes) {
	int value = iValue.ordinal() - Integer.MIN_VALUE;
	
	// see IntValue for an explanation of the logic
	if (numBytes == 4) {
		// default case, full normalized key
		target.putIntBigEndian(offset, value);
	}
	else if (numBytes <= 0) {
	}
	else if (numBytes < 4) {
		for (int i = 0; numBytes > 0; numBytes--, i++) {
			target.put(offset + i, (byte) (value >>> ((3-i)<<3)));
		}
	}
	else {
		target.putIntBigEndian(offset, value); // write the 4-byte int key; the loop below zero-pads the remainder
		for (int i = 4; i < numBytes; i++) {
			target.put(offset + i, (byte) 0);
		}
	}
}
 
Developer: axbaretto, Project: flink, Lines: 24, Source: EnumComparator.java
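
The offset by Integer.MIN_VALUE in putNormalizedKey exists so that an unsigned byte-wise comparison of the written keys agrees with the signed comparison of the original values, which is what the sort code compares. A small sketch of that property (the key values here are made up):

import org.apache.flink.core.memory.MemorySegment;
import org.apache.flink.core.memory.MemorySegmentFactory;

public class NormalizedKeyDemo {
	public static void main(String[] args) {
		byte[] bytes = new byte[8];
		MemorySegment seg = MemorySegmentFactory.wrap(bytes);

		// shift both keys by -Integer.MIN_VALUE and write them big-endian
		int a = -5, b = 3;
		seg.putIntBigEndian(0, a - Integer.MIN_VALUE);
		seg.putIntBigEndian(4, b - Integer.MIN_VALUE);

		// compare the two 4-byte keys as unsigned bytes, like assertNormalizableKey in example 8 below
		for (int i = 0; i < 4; i++) {
			int cmp = (bytes[i] & 0xFF) - (bytes[4 + i] & 0xFF);
			if (cmp != 0) {
				// both print -1: the byte-wise order matches the signed int order
				System.out.println(Integer.signum(cmp) + " == " + Integer.signum(Integer.compare(a, b)));
				return;
			}
		}
		System.out.println("keys are equal");
	}
}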


Example 6: putNormalizedKeyDate

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
public static void putNormalizedKeyDate(Date record, MemorySegment target, int offset, int numBytes) {
	final long value = record.getTime() - Long.MIN_VALUE;

	// see IntValue for an explanation of the logic
	if (numBytes == 8) {
		// default case, full normalized key
		target.putLongBigEndian(offset, value);
	}
	else if (numBytes < 8) {
		for (int i = 0; numBytes > 0; numBytes--, i++) {
			target.put(offset + i, (byte) (value >>> ((7-i)<<3)));
		}
	}
	else {
		target.putLongBigEndian(offset, value);
		for (int i = 8; i < numBytes; i++) {
			target.put(offset + i, (byte) 0);
		}
	}
}
 
Developer: axbaretto, Project: flink, Lines: 21, Source: DateComparator.java


Example 7: getHashJoin

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
@Override
public <BT, PT> MutableHashTable<BT, PT> getHashJoin(
		TypeSerializer<BT> buildSideSerializer,
		TypeComparator<BT> buildSideComparator,
		TypeSerializer<PT> probeSideSerializer, TypeComparator<PT> probeSideComparator,
		TypePairComparator<PT, BT> pairComparator,
		MemoryManager memManager, IOManager ioManager,
		AbstractInvokable ownerTask,
		double memoryFraction,
		boolean useBitmapFilters) throws MemoryAllocationException {
	
	final int numPages = memManager.computeNumberOfPages(memoryFraction);
	final List<MemorySegment> memorySegments = memManager.allocatePages(ownerTask, numPages);
	
	return new ReOpenableMutableHashTable<BT, PT>(buildSideSerializer, probeSideSerializer,
			buildSideComparator, probeSideComparator, pairComparator,
			memorySegments, ioManager, useBitmapFilters);
}
 
Developer: axbaretto, Project: flink, Lines: 19, Source: NonReusingBuildSecondReOpenableHashJoinIterator.java


Example 8: assertNormalizableKey

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
@SuppressWarnings("unchecked")
private <T extends Comparable<T>> void assertNormalizableKey(NormalizableKey<T> key1, NormalizableKey<T> key2, int len) {
	
	byte[] normalizedKeys = new byte[32];
	MemorySegment wrapper = MemorySegmentFactory.wrap(normalizedKeys);
	
	key1.copyNormalizedKey(wrapper, 0, len);
	key2.copyNormalizedKey(wrapper, len, len);
	
	for (int i = 0; i < len; i++) {
		int comp;
		int normKey1 = normalizedKeys[i] & 0xFF;
		int normKey2 = normalizedKeys[len + i] & 0xFF;
		
		if ((comp = (normKey1 - normKey2)) != 0) {
			if (Math.signum(key1.compareTo((T) key2)) != Math.signum(comp)) {
				Assert.fail("Normalized key comparison differs from actual key comparision");
			}
			return;
		}
	}
	if (key1.compareTo((T) key2) != 0 && key1.getMaxNormalizedKeyLen() <= len) {
		Assert.fail("Normalized key was not able to distinguish keys, " +
				"although it should as the length of it sufficies to uniquely identify them");
	}
}
 
Developer: axbaretto, Project: flink, Lines: 27, Source: NormalizableKeyTest.java


Example 9: testRequestBuffersWithRemoteInputChannel

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * Tests that input gate requests and assigns network buffers for remote input channel.
 */
@Test
public void testRequestBuffersWithRemoteInputChannel() throws Exception {
	final SingleInputGate inputGate = new SingleInputGate(
		"t1",
		new JobID(),
		new IntermediateDataSetID(),
		ResultPartitionType.PIPELINED_CREDIT_BASED,
		0,
		1,
		mock(TaskActions.class),
		UnregisteredMetricGroups.createUnregisteredTaskMetricGroup().getIOMetricGroup());

	RemoteInputChannel remote = mock(RemoteInputChannel.class);
	inputGate.setInputChannel(new IntermediateResultPartitionID(), remote);

	final int buffersPerChannel = 2;
	NetworkBufferPool network = mock(NetworkBufferPool.class);
	// Trigger requests of segments from global pool and assign buffers to remote input channel
	inputGate.assignExclusiveSegments(network, buffersPerChannel);

	verify(network, times(1)).requestMemorySegments(buffersPerChannel);
	verify(remote, times(1)).assignExclusiveSegments(anyListOf(MemorySegment.class));
}
 
Developer: axbaretto, Project: flink, Lines: 27, Source: SingleInputGateTest.java


Example 10: HashPartition

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * Constructor creating a partition from a spilled partition file that could be read in one pass because it
 * was known to fit completely into memory.
 * 
 * @param buildSideAccessors The data type accessors for the build side data-type.
 * @param probeSideAccessors The data type accessors for the probe side data-type.
 * @param partitionNumber The number of the partition.
 * @param recursionLevel The recursion level of the partition.
 * @param buffers The memory segments holding the records.
 * @param buildSideRecordCounter The number of records in the buffers.
 * @param segmentSize The size of the memory segments.
 */
HashPartition(TypeSerializer<BT> buildSideAccessors, TypeSerializer<PT> probeSideAccessors,
		int partitionNumber, int recursionLevel, List<MemorySegment> buffers,
		long buildSideRecordCounter, int segmentSize, int lastSegmentLimit)
{
	super(0);
	
	this.buildSideSerializer = buildSideAccessors;
	this.probeSideSerializer = probeSideAccessors;
	this.partitionNumber = partitionNumber;
	this.recursionLevel = recursionLevel;
	
	this.memorySegmentSize = segmentSize;
	this.segmentSizeBits = MathUtils.log2strict(segmentSize);
	this.finalBufferLimit = lastSegmentLimit;
	
	this.partitionBuffers = (MemorySegment[]) buffers.toArray(new MemorySegment[buffers.size()]);
	this.buildSideRecordCounter = buildSideRecordCounter;
	
	this.overflowSegments = new MemorySegment[2];
	this.numOverflowSegments = 0;
	this.nextOverflowBucket = 0;
}
 
Developer: axbaretto, Project: flink, Lines: 35, Source: HashPartition.java


Example 11: getMatchesFor

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
public HashBucketIterator<BT, PT> getMatchesFor(PT record) throws IOException {
	final TypeComparator<PT> probeAccessors = this.probeSideComparator;
	final int hash = hash(probeAccessors.hash(record), this.currentRecursionDepth);
	final int posHashCode = hash % this.numBuckets;
	
	// get the bucket for the given hash code
	final int bucketArrayPos = posHashCode >> this.bucketsPerSegmentBits;
	final int bucketInSegmentOffset = (posHashCode & this.bucketsPerSegmentMask) << NUM_INTRA_BUCKET_BITS;
	final MemorySegment bucket = this.buckets[bucketArrayPos];
	
	// get the basic characteristics of the bucket
	final int partitionNumber = bucket.get(bucketInSegmentOffset + HEADER_PARTITION_OFFSET);
	final HashPartition<BT, PT> p = this.partitionsBeingBuilt.get(partitionNumber);
	
	// for an in-memory partition, set up the bucket iterator over the matches; spilled partitions are not supported here
	if (p.isInMemory()) {
		this.recordComparator.setReference(record);
		this.bucketIterator.set(bucket, p.overflowSegments, p, hash, bucketInSegmentOffset);
		return this.bucketIterator;
	}
	else {
		throw new IllegalStateException("Method is not applicable to partially spilled hash tables.");
	}
}
 
Developer: axbaretto, Project: flink, Lines: 25, Source: MutableHashTable.java


Example 12: insert

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * Inserts the given record into the hash table.
 * Note: this method doesn't care about whether a record with the same key is already present.
 * @param record The record to insert.
 * @throws IOException (EOFException specifically, if memory ran out)
 */
@Override
public void insert(T record) throws IOException {
	if (closed) {
		return;
	}

	final int hashCode = MathUtils.jenkinsHash(buildSideComparator.hash(record));
	final int bucket = hashCode & numBucketsMask;
	final int bucketSegmentIndex = bucket >>> numBucketsPerSegmentBits; // which segment contains the bucket
	final MemorySegment bucketSegment = bucketSegments[bucketSegmentIndex];
	final int bucketOffset = (bucket & numBucketsPerSegmentMask) << bucketSizeBits; // offset of the bucket in the segment
	final long firstPointer = bucketSegment.getLong(bucketOffset);

	try {
		final long newFirstPointer = recordArea.appendPointerAndRecord(firstPointer, record);
		bucketSegment.putLong(bucketOffset, newFirstPointer);
	} catch (EOFException ex) {
		compactOrThrow();
		insert(record);
		return;
	}

	numElements++;
	resizeTableIfNecessary();
}
 
Developer: axbaretto, Project: flink, Lines: 32, Source: InPlaceMutableHashTable.java
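
The bucket addressing in insert (and in getMatchesFor from example 11) is pure bit arithmetic: the hash selects a bucket, the high bits of the bucket index pick the memory segment, and the low bits give the bucket's byte offset inside that segment. A self-contained sketch with assumed table dimensions:

// Standalone sketch of the bucket addressing used in examples 11 and 12.
// All table dimensions below are assumptions for illustration only.
public class BucketAddressing {
	public static void main(String[] args) {
		final int numBuckets = 1 << 10;         // 1024 buckets in total (must be a power of two)
		final int numBucketsMask = numBuckets - 1;
		final int numBucketsPerSegmentBits = 6; // 64 buckets per memory segment
		final int numBucketsPerSegmentMask = (1 << numBucketsPerSegmentBits) - 1;
		final int bucketSizeBits = 7;           // each bucket occupies 128 bytes

		int hashCode = 0xCAFEBABE;              // assume this came from the comparator

		int bucket = hashCode & numBucketsMask;                                   // which bucket
		int segmentIndex = bucket >>> numBucketsPerSegmentBits;                   // which segment holds it
		int bucketOffset = (bucket & numBucketsPerSegmentMask) << bucketSizeBits; // byte offset in that segment

		System.out.println("bucket=" + bucket + " segment=" + segmentIndex + " offset=" + bucketOffset);
	}
}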


Example 13: initBackChannel

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * The iteration head prepares the backchannel: it allocates memory, instantiates a {@link BlockingBackChannel} and
 * hands it to the iteration tail via a {@link Broker} singleton.
 **/
private BlockingBackChannel initBackChannel() throws Exception {

	/* get the size of the memory available to the backchannel */
	int backChannelMemoryPages = getMemoryManager().computeNumberOfPages(this.config.getRelativeBackChannelMemory());

	/* allocate the memory available to the backchannel */
	List<MemorySegment> segments = new ArrayList<MemorySegment>();
	int segmentSize = getMemoryManager().getPageSize();
	getMemoryManager().allocatePages(this, segments, backChannelMemoryPages);

	/* instantiate the backchannel */
	BlockingBackChannel backChannel = new BlockingBackChannel(new SerializedUpdateBuffer(segments, segmentSize,
		getIOManager()));

	/* hand the backchannel over to the iteration tail */
	Broker<BlockingBackChannel> broker = BlockingBackChannelBroker.instance();
	broker.handIn(brokerKey(), backChannel);

	return backChannel;
}
 
Developer: axbaretto, Project: flink, Lines: 25, Source: IterationHeadTask.java


Example 14: ReadEnd

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
private ReadEnd(MemorySegment firstMemSegment, LinkedBlockingQueue<MemorySegment> emptyBufferTarget,
								Deque<MemorySegment> fullBufferSource, BlockChannelReader<MemorySegment> spilledBufferSource,
								List<MemorySegment> emptyBuffers, int numBuffersSpilled)
	throws IOException {
	super(firstMemSegment, firstMemSegment.getInt(0), HEADER_LENGTH);

	this.emptyBufferTarget = emptyBufferTarget;
	this.fullBufferSource = fullBufferSource;

	this.spilledBufferSource = spilledBufferSource;

	requestsRemaining = numBuffersSpilled;
	this.spilledBuffersRemaining = numBuffersSpilled;

	// send the first requests
	while (requestsRemaining > 0 && emptyBuffers.size() > 0) {
		this.spilledBufferSource.readBlock(emptyBuffers.remove(emptyBuffers.size() - 1));
		requestsRemaining--;
	}
}
 
Developer: axbaretto, Project: flink, Lines: 21, Source: SerializedUpdateBuffer.java


Example 15: close

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
public List<MemorySegment> close() throws IOException {
	if (LOG.isDebugEnabled()) {
		LOG.debug("Spilling Resettable Iterator closing. Stored " + this.elementCount + " records.");
	}

	this.inView = null;
	
	final List<MemorySegment> memory = this.buffer.close();
	memory.addAll(this.memorySegments);
	this.memorySegments.clear();
	
	if (this.releaseMemoryOnClose) {
		this.memoryManager.release(memory);
		return Collections.emptyList();
	} else {
		return memory;
	}
}
 
Developer: axbaretto, Project: flink, Lines: 19, Source: SpillingResettableMutableObjectIterator.java


Example 16: set

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
void set(MemorySegment bucket, MemorySegment[] overflowSegments, HashPartition<BT, PT> partition,
		int searchHashCode, int bucketInSegmentOffset)
{
	this.bucket = bucket;
	this.originalBucket = bucket;
	this.overflowSegments = overflowSegments;
	this.partition = partition;
	this.searchHashCode = searchHashCode;
	this.bucketInSegmentOffset = bucketInSegmentOffset;
	this.originalBucketInSegmentOffset = bucketInSegmentOffset;
	this.posInSegment = this.bucketInSegmentOffset + BUCKET_HEADER_LENGTH;
	this.countInSegment = bucket.getShort(bucketInSegmentOffset + HEADER_COUNT_OFFSET);
	this.numInSegment = 0;
}
 
Developer: axbaretto, Project: flink, Lines: 15, Source: MutableHashTable.java


Example 17: createBuffer

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
private static BufferOrEvent createBuffer(int channel, int size) {
	byte[] bytes = new byte[size];
	RND.nextBytes(bytes);

	MemorySegment memory = MemorySegmentFactory.allocateUnpooledSegment(PAGE_SIZE);
	memory.put(0, bytes);

	Buffer buf = new NetworkBuffer(memory, FreeingBufferRecycler.INSTANCE);
	buf.setSize(size);

	// retain an additional time so it does not get disposed after being read by the input gate
	buf.retainBuffer();

	return new BufferOrEvent(buf, channel);
}
 
Developer: axbaretto, Project: flink, Lines: 16, Source: BarrierBufferAlignmentLimitTest.java


Example 18: clearAllMemory

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
@Override
public void clearAllMemory(List<MemorySegment> target) {
	if (initialBuildSideChannel != null) {
		try {
			this.initialBuildSideWriter.closeAndDelete();
		} catch (IOException ioex) {
			throw new RuntimeException("Error deleting the partition files. Some temporary files might not be removed.");
		}
	}
	super.clearAllMemory(target);
}
 
Developer: axbaretto, Project: flink, Lines: 12, Source: ReOpenableHashPartition.java


Example 19: allocateSegments

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
/**
 * Attempts to allocate the specified number of segments; should only be used by the compaction partition.
 * Fails silently if not enough segments are available, since the next compaction could still succeed.
 * 
 * @param numberOfSegments allocation count
 */
public void allocateSegments(int numberOfSegments) {
	while (getBlockCount() < numberOfSegments) {
		MemorySegment next = this.availableMemory.nextSegment();
		if (next != null) {
			this.partitionPages.add(next);
		} else {
			return;
		}
	}
}
 
Developer: axbaretto, Project: flink, Lines: 17, Source: InMemoryPartition.java


Example 20: ChannelReaderInputViewIterator

import org.apache.flink.core.memory.MemorySegment; // import the required package/class
public ChannelReaderInputViewIterator(BlockChannelReader<MemorySegment> reader, LinkedBlockingQueue<MemorySegment> returnQueue,
		List<MemorySegment> segments, List<MemorySegment> freeMemTarget, TypeSerializer<E> accessors, int numBlocks)
throws IOException
{
	this.accessors = accessors;
	this.freeMemTarget = freeMemTarget;
	this.inView = new ChannelReaderInputView(reader, segments, numBlocks, false);
}
 
Developer: axbaretto, Project: flink, Lines: 9, Source: ChannelReaderInputViewIterator.java



Note: the org.apache.flink.core.memory.MemorySegment examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar source-code and documentation platforms. The snippets were selected from community-contributed open-source projects; copyright remains with the original authors, and any use or redistribution must follow the corresponding project's license. Do not reproduce without permission.

