
Java AdaptiveRecvByteBufAllocator Class Code Examples


This article collects typical usage examples of the Java class io.netty.channel.AdaptiveRecvByteBufAllocator. If you are wondering what AdaptiveRecvByteBufAllocator is for, or how to use it in practice, the curated class code examples below may help.



The AdaptiveRecvByteBufAllocator class belongs to the io.netty.channel package. Nine code examples of the class are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
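
The examples share a common pattern: construct an AdaptiveRecvByteBufAllocator and register it on a bootstrap via ChannelOption.RCVBUF_ALLOCATOR. As a quick orientation before the examples, here is a minimal, self-contained sketch of that pattern. It assumes Netty 4.x on the classpath; the port number (8080) and the empty ChannelInitializer are placeholders. The three constructor arguments are the minimum, initial, and maximum receive-buffer sizes in bytes; the allocator starts reads at the initial size and grows or shrinks the buffer within the given bounds based on the sizes of previous reads.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class AdaptiveAllocatorSketch {

    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);
        NioEventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    // receive buffer starts at 16 KiB and adapts between 1 KiB and 1 MiB
                    .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                            new AdaptiveRecvByteBufAllocator(1024, 16 * 1024, 1024 * 1024))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // application handlers would be added to ch.pipeline() here
                        }
                    });
            // placeholder port; replace with the real service port
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}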

Example 1: startServer

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
/**
 * Starts the server to handle discovery requests from client channels.
 *
 * @throws Exception
 */
public void startServer() throws Exception {

    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    bootstrap.group(acceptorGroup, workerGroup);
    bootstrap.childOption(ChannelOption.TCP_NODELAY, true);
    bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
            new AdaptiveRecvByteBufAllocator(1024, 16 * 1024, 1 * 1024 * 1024));
    bootstrap.channel(EventLoopUtil.getServerSocketChannelClass(workerGroup));
    EventLoopUtil.enableTriggeredMode(bootstrap);

    bootstrap.childHandler(new ServiceChannelInitializer(this, config, false));
    // Bind and start to accept incoming connections.
    bootstrap.bind(config.getServicePort()).sync();
    LOG.info("Started Pulsar Discovery service on port {}", config.getServicePort());

    if (config.isTlsEnabled()) {
        ServerBootstrap tlsBootstrap = bootstrap.clone();
        tlsBootstrap.childHandler(new ServiceChannelInitializer(this, config, true));
        tlsBootstrap.bind(config.getServicePortTls()).sync();
        LOG.info("Started Pulsar Discovery TLS service on port {}", config.getServicePortTls());
    }
}
 
Developer ID: apache, Project: incubator-pulsar, Lines of code: 29, Source: DiscoveryService.java


Example 2: start

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
public void start() throws Exception {
    this.producerNameGenerator = new DistributedIdGenerator(pulsar.getZkClient(), producerNameGeneratorPath,
            pulsar.getConfiguration().getClusterName());

    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    bootstrap.group(acceptorGroup, workerGroup);
    bootstrap.childOption(ChannelOption.TCP_NODELAY, true);
    bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
            new AdaptiveRecvByteBufAllocator(1024, 16 * 1024, 1 * 1024 * 1024));

    bootstrap.channel(EventLoopUtil.getServerSocketChannelClass(workerGroup));
    EventLoopUtil.enableTriggeredMode(bootstrap);

    ServiceConfiguration serviceConfig = pulsar.getConfiguration();

    bootstrap.childHandler(new PulsarChannelInitializer(this, serviceConfig, false));
    // Bind and start to accept incoming connections.
    bootstrap.bind(new InetSocketAddress(pulsar.getBindAddress(), port)).sync();
    log.info("Started Pulsar Broker service on port {}", port);

    if (serviceConfig.isTlsEnabled()) {
        ServerBootstrap tlsBootstrap = bootstrap.clone();
        tlsBootstrap.childHandler(new PulsarChannelInitializer(this, serviceConfig, true));
        tlsBootstrap.bind(new InetSocketAddress(pulsar.getBindAddress(), tlsPort)).sync();
        log.info("Started Pulsar Broker TLS service on port {}", tlsPort);
    }

    // start other housekeeping functions
    this.startStatsUpdater();
    this.startInactivityMonitor();
    this.startMessageExpiryMonitor();
    this.startBacklogQuotaChecker();
    // register listener to capture zk-latency
    ClientCnxnAspect.addListener(zkStatsListener);
    ClientCnxnAspect.registerExecutor(pulsar.getExecutor());
}
 
Developer ID: apache, Project: incubator-pulsar, Lines of code: 38, Source: BrokerService.java


Example 3: start

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
public void start() throws Exception {
    localZooKeeperConnectionService = new LocalZooKeeperConnectionService(getZooKeeperClientFactory(),
            proxyConfig.getZookeeperServers(), proxyConfig.getZookeeperSessionTimeoutMs());
    localZooKeeperConnectionService.start(new ShutdownService() {
        @Override
        public void shutdown(int exitCode) {
            LOG.error("Lost local ZK session. Shutting down the proxy");
            Runtime.getRuntime().halt(-1);
        }
    });

    discoveryProvider = new BrokerDiscoveryProvider(this.proxyConfig, getZooKeeperClientFactory());
    this.configurationCacheService = new ConfigurationCacheService(discoveryProvider.globalZkCache);
    ServiceConfiguration serviceConfiguration = PulsarConfigurationLoader.convertFrom(proxyConfig);
    authenticationService = new AuthenticationService(serviceConfiguration);
    authorizationManager = new AuthorizationManager(serviceConfiguration, configurationCacheService);

    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    bootstrap.group(acceptorGroup, workerGroup);
    bootstrap.childOption(ChannelOption.TCP_NODELAY, true);
    bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
            new AdaptiveRecvByteBufAllocator(1024, 16 * 1024, 1 * 1024 * 1024));

    bootstrap.channel(EventLoopUtil.getServerSocketChannelClass(workerGroup));
    EventLoopUtil.enableTriggeredMode(bootstrap);

    bootstrap.childHandler(new ServiceChannelInitializer(this, proxyConfig, false));
    // Bind and start to accept incoming connections.
    bootstrap.bind(proxyConfig.getServicePort()).sync();
    LOG.info("Started Pulsar Proxy at {}", serviceUrl);

    if (proxyConfig.isTlsEnabledInProxy()) {
        ServerBootstrap tlsBootstrap = bootstrap.clone();
        tlsBootstrap.childHandler(new ServiceChannelInitializer(this, proxyConfig, true));
        tlsBootstrap.bind(proxyConfig.getServicePortTls()).sync();
        LOG.info("Started Pulsar TLS Proxy on port {}", proxyConfig.getWebServicePortTls());
    }
}
 
Developer ID: apache, Project: incubator-pulsar, Lines of code: 40, Source: ProxyService.java


Example 4: testOptionsHaveCorrectTypes

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
@Test
public void testOptionsHaveCorrectTypes() throws Exception {
    final ServerBootstrap bootstrap = new ServerBootstrap();
    final ChannelOptions options = new ChannelOptions();

    options.setAllocator(new PooledByteBufAllocator());
    options.setRecvBufAllocator(new AdaptiveRecvByteBufAllocator());
    options.setConnectTimeout(1);
    options.setWriteSpinCount(1);
    options.setWriteBufferWaterMark(new WriteBufferWaterMark(8192, 32768));
    options.setAllowHalfClosure(true);
    options.setAutoRead(true);
    options.setSoBroadcast(true);
    options.setSoKeepAlive(true);
    options.setSoReuseAddr(true);
    options.setSoSndBuf(8192);
    options.setSoRcvBuf(8192);
    options.setSoLinger(0);
    options.setSoBacklog(0);
    options.setSoTimeout(0);
    options.setIpTos(0);
    options.setIpMulticastAddr(getLoopbackAddress());
    options.setIpMulticastIf(getNetworkInterfaces().nextElement());
    options.setIpMulticastTtl(300);
    options.setIpMulticastLoopDisabled(true);
    options.setTcpNodelay(true);

    final Map<ChannelOption, Object> channelOptionMap = options.get();
    for (final Map.Entry<ChannelOption, Object> entry : channelOptionMap.entrySet()) {
        bootstrap.option(entry.getKey(), entry.getValue());
        bootstrap.childOption(entry.getKey(), entry.getValue());
    }
}
 
Developer ID: kgusarov, Project: spring-boot-netty, Lines of code: 34, Source: ChannelOptionsTest.java


Example 5: FileMsgSender

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
public FileMsgSender(String zkAddrs, String zkNode) {
    this.zkNode = zkNode;
    this.zkClient = ZKClientCache.get(zkAddrs);
    this.bootstrap = new Bootstrap();
    bootstrap.group(GROUP);
    bootstrap.option(ChannelOption.RCVBUF_ALLOCATOR, AdaptiveRecvByteBufAllocator.DEFAULT);
    bootstrap.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    bootstrap.option(ChannelOption.TCP_NODELAY, true);
    bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
    bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 1000);
    bootstrap.channel(NioSocketChannel.class);
    bootstrap.handler(new FileMsgSendInitializer());

}
 
Developer ID: classtag, Project: scratch_zookeeper_netty, Lines of code: 15, Source: FileMsgSender.java


Example 6: bindUdpPort

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
private boolean bindUdpPort(final InetAddress addr, final int port) {

        EventLoopGroup group = new NioEventLoopGroup();
        bootstrapUDP = new Bootstrap();
        bootstrapUDP.group(group).channel(NioDatagramChannel.class)
                .handler(new DatagramHandler(this, Transport.UDP));

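        // minimum and initial sizes are pinned to 1500 bytes (a typical Ethernet MTU, i.e. one UDP datagram); the buffer may grow up to RECV_BUFFER_SIZE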
        bootstrapUDP.option(ChannelOption.RCVBUF_ALLOCATOR, new AdaptiveRecvByteBufAllocator(1500, 1500, RECV_BUFFER_SIZE));
        bootstrapUDP.option(ChannelOption.SO_RCVBUF, RECV_BUFFER_SIZE);
        bootstrapUDP.option(ChannelOption.SO_SNDBUF, SEND_BUFFER_SIZE);
        // bootstrap.setOption("trafficClass", trafficClass);
        // bootstrap.setOption("soTimeout", soTimeout);
        // bootstrap.setOption("broadcast", broadcast);
        bootstrapUDP.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, CONNECT_TIMEOUT_MS);
        bootstrapUDP.option(ChannelOption.SO_REUSEADDR, true);

        try {
            InetSocketAddress iAddr = new InetSocketAddress(addr, port);
            udpChannel = (DatagramChannel) bootstrapUDP.bind(iAddr).sync().channel();

            //addLocalSocket(iAddr, c);
            logger.info("Successfully bound to ip:port {}:{}", addr, port);
        } catch (InterruptedException e) {
            logger.error("Problem when trying to bind to {}:{}", addr.getHostAddress(), port);
            return false;
        }

        return true;
    }
 
Developer ID: kompics, Project: kompics, Lines of code: 30, Source: NettyNetwork.java


Example 7: init

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
@Override
public void init(Object object) {
    logger.info("初始化Netty 开始");
    PropertiesWrapper propertiesWrapper = (PropertiesWrapper) object;

    int defaultValue = Runtime.getRuntime().availableProcessors() * 2;

    bossGroupNum = propertiesWrapper.getIntProperty(SystemEnvironment.NETTY_BOSS_GROUP_NUM, defaultValue);
    workerGroupNum = propertiesWrapper.getIntProperty(SystemEnvironment.NETTY_WORKER_GROUP_NUM, defaultValue);
    backlog = propertiesWrapper.getIntProperty(SystemEnvironment.NETTY_BACKLOG, BACKLOG);

    name = propertiesWrapper.getProperty(SystemEnvironment.NETTY_SERVER_NAME, "NETTY_SERVER");
    int port = propertiesWrapper.getIntProperty(SystemEnvironment.TCP_PROT, D_PORT);
    Thread thread = new Thread(new Runnable() {
        public void run() {
            bootstrap = new ServerBootstrap();
            bossGroup = new NioEventLoopGroup(bossGroupNum);
            workerGroup = new NioEventLoopGroup(workerGroupNum);
            bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, backlog)
                    .option(ChannelOption.SO_REUSEADDR, Boolean.valueOf(true))
                    // .option(ChannelOption.TCP_NODELAY,
                    // Boolean.valueOf(true))
                    // .option(ChannelOption.SO_KEEPALIVE,
                    // Boolean.valueOf(true))
                    .childOption(ChannelOption.TCP_NODELAY, true).childOption(ChannelOption.SO_KEEPALIVE, true)
                    .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
                    .childOption(ChannelOption.RCVBUF_ALLOCATOR, AdaptiveRecvByteBufAllocator.DEFAULT)
                    .handler(new LoggingHandler(LogLevel.INFO)).childHandler(new NettyServerInitializer());
            ChannelFuture f;
            try {
                f = bootstrap.bind(port).sync();
                f.channel().closeFuture().sync();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }, "Netty-Start-Thread");
    thread.start();
    logger.info("初始化Netty 线程启动");
    super.setActive();
}
 
Developer ID: zerosoft, Project: CodeBroker, Lines of code: 43, Source: NettyNetService.java


Example 8: HubManager

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
private HubManager(final Properties properties) {
		Runtime.getRuntime().addShutdownHook(new Thread(){
			public void run() { try { close(); } catch (Exception x) {/* No Op */} }
		});
		log.info(">>>>> Initializing HubManager...");
		metricMetaService = new MetricsMetaAPIImpl(properties);
		tsdbEndpoint = TSDBEndpoint.getEndpoint(metricMetaService.getSqlWorker());
		for(String url: tsdbEndpoint.getUpServers()) {
			final URL tsdbUrl = URLHelper.toURL(url);
			tsdbAddresses.add(new InetSocketAddress(tsdbUrl.getHost(), tsdbUrl.getPort()));
		}
		endpointCount = tsdbAddresses.size(); 
		endpointSequence = new AtomicInteger(endpointCount);
		group = new NioEventLoopGroup(Runtime.getRuntime().availableProcessors() * 2, metricMetaService.getForkJoinPool());
		bootstrap = new Bootstrap();
		bootstrap
			.handler(channelInitializer)
			.group(group)
			.channel(NioSocketChannel.class)
			.option(ChannelOption.RCVBUF_ALLOCATOR, new AdaptiveRecvByteBufAllocator())
			.option(ChannelOption.ALLOCATOR, BufferManager.getInstance());
		final ChannelPoolHandler poolHandler = this;
		poolMap = new AbstractChannelPoolMap<InetSocketAddress, SimpleChannelPool>() {
		    @Override
		    protected SimpleChannelPool newPool(final InetSocketAddress key) {
				final Bootstrap b = new Bootstrap().handler(channelInitializer)
				.group(group)
				.remoteAddress(key)
				.channel(NioSocketChannel.class)
				.option(ChannelOption.RCVBUF_ALLOCATOR, new AdaptiveRecvByteBufAllocator())
				.option(ChannelOption.ALLOCATOR, BufferManager.getInstance());		    	
		        return new SimpleChannelPool(b, poolHandler);
		    }
		};
		eventExecutor = new DefaultEventExecutor(metricMetaService.getForkJoinPool());
		channelGroup = new DefaultChannelGroup("MetricHubChannelGroup", eventExecutor);
		
//		tsdbAddresses.parallelStream().forEach(addr -> {
//			final Set<Channel> channels = Collections.synchronizedSet(new HashSet<Channel>(3));
//			IntStream.of(1,2,3).parallel().forEach(i -> {
//				final ChannelPool pool = poolMap.get(addr); 
//				try {channels.add(pool.acquire().awaitUninterruptibly().get());
//				} catch (Exception e) {}
//				log.info("Acquired [{}] Channels", channels.size());
//				channels.parallelStream().forEach(ch -> pool.release(ch));
//			});
//		});
		log.info("<<<<< HubManager Initialized.");
	}
 
Developer ID: nickman, Project: HeliosStreams, Lines of code: 54, Source: HubManager.java


Example 9: connectAndSend

import io.netty.channel.AdaptiveRecvByteBufAllocator; // import the required package/class
private void connectAndSend(Integer command, String message) {
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.option(ChannelOption.MAX_MESSAGES_PER_READ,
                    CommonConstants.MAX_MESSAGES_PER_READ); // (4)
//            b.option(ChannelOption.RCVBUF_ALLOCATOR,
//                    new FixedRecvByteBufAllocator(CommonConstants.MAX_RECEIVED_BUFFER_SIZE));
            b.option(ChannelOption.RCVBUF_ALLOCATOR,
                    new AdaptiveRecvByteBufAllocator(
                            CommonConstants.MIN_RECEIVED_BUFFER_SIZE,
                            CommonConstants.RECEIVED_BUFFER_SIZE,
                            CommonConstants.MAX_RECEIVED_BUFFER_SIZE));
            if (command == null || message == null) {
                b.handler(new ChannelInitializerImpl());
            } else {
                b.handler(new ChannelInitializerImpl(command, message));
            }

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)
            // Wait until the connection is closed.
            //f.channel().closeFuture().sync();
            boolean completed = f.channel().closeFuture().await(timeout, TimeUnit.SECONDS); // (6)
            if (!completed) {
                String PRSCODE_PREFIX = "<" + PRS_CODE + ">";
                String PRSCODE_AFFIX = "</" + PRS_CODE + ">";

                String prsCode = "[Unkonwn prscode]";
                if (message == null) {
                } else {
                    int start = message.indexOf(PRSCODE_PREFIX);
                    int end = message.indexOf(PRSCODE_AFFIX) + PRSCODE_AFFIX.length();
                    prsCode = (start == -1 || end == -1) ? prsCode : message.substring(start, end);
                }
                Logger.getLogger(P2PTunnel.class.getName()).log(Level.WARNING,
                        "[{0}] operation exceeds {1}seconds and is timeout, channel is forcily closed",
                        new Object[]{prsCode, timeout});
                //forcily close channel to avoid connection leak
                //acutally if no forcible channel close calling, connection still will be closed.
                //but for comprehensive consideration, I call channel close again
                f.channel().close().sync();
            }
        } catch (InterruptedException ex) {
            Logger.getLogger(P2PTunnel.class.getName()).log(Level.SEVERE,
                    "channel management hits a problem, due to\n{0}", ex);
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
 
Developer ID: toyboxman, Project: yummy-xml-UI, Lines of code: 54, Source: P2PTunnel.java



Note: The io.netty.channel.AdaptiveRecvByteBufAllocator class examples in this article were collected from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets come from open-source projects contributed by various developers, and copyright remains with the original authors; for distribution and use, please refer to the corresponding project's license. Do not reproduce without permission.

