Java ColumnFamilySplit Class Code Examples


This article collects typical usage examples of the Java class org.apache.cassandra.hadoop.ColumnFamilySplit. If you are wondering what ColumnFamilySplit is for, how to use it, or want to see concrete usage, the curated class code examples below should help.



ColumnFamilySplit belongs to the org.apache.cassandra.hadoop package. Seven code examples of the class are shown below, ordered by popularity by default.
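
Before diving in, here is a minimal sketch of the ColumnFamilySplit surface the examples rely on. The three-argument constructor mirrors Example 3 and the getters mirror Examples 1 and 7; the token strings and host names are made-up placeholders, and this is an illustration rather than the full class.

import org.apache.cassandra.hadoop.ColumnFamilySplit;

public class SplitSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical replica hosts and Murmur3 token boundaries.
        String[] dataNodes = { "10.0.0.1", "10.0.0.2" };
        ColumnFamilySplit split =
            new ColumnFamilySplit("-9223372036854775808", "0", dataNodes);

        System.out.println(split.getStartToken());       // start of the token range
        System.out.println(split.getEndToken());         // end of the token range
        System.out.println(split.getLocations().length); // hosts for data locality
        System.out.println(split.getLength());           // estimated row count for the range
    }
}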

Example 1: initialize

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException
{
    this.split = (ColumnFamilySplit) split;
    Configuration conf = HadoopCompat.getConfiguration(context);
    totalRowCount = (this.split.getLength() < Long.MAX_VALUE)
                  ? (int) this.split.getLength()
                  : ConfigHelper.getInputSplitSize(conf);
    cfName = ConfigHelper.getInputColumnFamily(conf);
    keyspace = ConfigHelper.getInputKeyspace(conf);
    partitioner = ConfigHelper.getInputPartitioner(conf);
    inputColumns = CqlConfigHelper.getInputcolumns(conf);
    userDefinedWhereClauses = CqlConfigHelper.getInputWhereClauses(conf);

    try
    {
        if (cluster != null)
            return;

        // create a Cluster instance
        String[] locations = split.getLocations();
        cluster = CqlConfigHelper.getInputCluster(locations, conf);
    }
    catch (Exception e)
    {
        throw new RuntimeException(e);
    }

    if (cluster != null)
        session = cluster.connect(quote(keyspace));

    if (session == null)
        throw new RuntimeException("Can't create connection session");

    //get negotiated serialization protocol
    nativeProtocolVersion = cluster.getConfiguration().getProtocolOptions().getProtocolVersion();

    // If the user provides a CQL query then we will use it without validation
    // otherwise we will fall back to building a query using the:
    //   inputColumns
    //   whereClauses
    cqlQuery = CqlConfigHelper.getInputCql(conf);
    // validate that the user hasn't tried to give us a custom query along with input columns
    // and where clauses
    if (StringUtils.isNotEmpty(cqlQuery) && (StringUtils.isNotEmpty(inputColumns) ||
                                             StringUtils.isNotEmpty(userDefinedWhereClauses)))
    {
        throw new AssertionError("Cannot define a custom query with input columns and / or where clauses");
    }

    if (StringUtils.isEmpty(cqlQuery))
        cqlQuery = buildQuery();
    logger.debug("cqlQuery {}", cqlQuery);

    rowIterator = new RowIterator();
    logger.debug("created {}", rowIterator);
}
 
Developer: vcostet, Project: cassandra-kmean, Lines: 58, Source: CqlRecordReader.java
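
Example 1's reader only consumes settings that the job driver wrote into the Configuration beforehand. A minimal sketch of that configuration side follows, assuming the CqlConfigHelper setters (setInputColumns, setInputWhereClauses, setInputCql) mirror the getters used above; the keyspace, table, host, and column names are placeholders.

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;
import org.apache.hadoop.conf.Configuration;

public class InputConfigSketch {
    static Configuration configure() {
        Configuration conf = new Configuration();
        ConfigHelper.setInputColumnFamily(conf, "my_keyspace", "my_table");
        ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
        ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner");
        // Either restrict columns and add WHERE clauses...
        CqlConfigHelper.setInputColumns(conf, "id,value");
        CqlConfigHelper.setInputWhereClauses(conf, "value > 0");
        // ...or supply a full query instead. Example 1 treats the two as
        // mutually exclusive and throws AssertionError if both are set:
        // CqlConfigHelper.setInputCql(conf, "SELECT id, value FROM my_keyspace.my_table");
        return conf;
    }
}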


Example 2: initialize

This variant, from scylla-tools-java, is nearly identical to Example 1; it differs only in calling toInt() on the negotiated protocol version and in logging at trace rather than debug level.

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException
{
    this.split = (ColumnFamilySplit) split;
    Configuration conf = HadoopCompat.getConfiguration(context);
    totalRowCount = (this.split.getLength() < Long.MAX_VALUE)
                  ? (int) this.split.getLength()
                  : ConfigHelper.getInputSplitSize(conf);
    cfName = ConfigHelper.getInputColumnFamily(conf);
    keyspace = ConfigHelper.getInputKeyspace(conf);
    partitioner = ConfigHelper.getInputPartitioner(conf);
    inputColumns = CqlConfigHelper.getInputcolumns(conf);
    userDefinedWhereClauses = CqlConfigHelper.getInputWhereClauses(conf);

    try
    {
        if (cluster != null)
            return;

        // create a Cluster instance
        String[] locations = split.getLocations();
        cluster = CqlConfigHelper.getInputCluster(locations, conf);
    }
    catch (Exception e)
    {
        throw new RuntimeException(e);
    }

    if (cluster != null)
        session = cluster.connect(quote(keyspace));

    if (session == null)
        throw new RuntimeException("Can't create connection session");

    //get negotiated serialization protocol
    nativeProtocolVersion = cluster.getConfiguration().getProtocolOptions().getProtocolVersion().toInt();

    // If the user provides a CQL query then we will use it without validation
    // otherwise we will fall back to building a query using the:
    //   inputColumns
    //   whereClauses
    cqlQuery = CqlConfigHelper.getInputCql(conf);
    // validate that the user hasn't tried to give us a custom query along with input columns
    // and where clauses
    if (StringUtils.isNotEmpty(cqlQuery) && (StringUtils.isNotEmpty(inputColumns) ||
                                             StringUtils.isNotEmpty(userDefinedWhereClauses)))
    {
        throw new AssertionError("Cannot define a custom query with input columns and / or where clauses");
    }

    if (StringUtils.isEmpty(cqlQuery))
        cqlQuery = buildQuery();
    logger.trace("cqlQuery {}", cqlQuery);

    rowIterator = new RowIterator();
    logger.trace("created {}", rowIterator);
}
 
Developer: scylladb, Project: scylla-tools-java, Lines: 58, Source: CqlRecordReader.java


Example 3: HiveCassandraStandardSplit

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
public HiveCassandraStandardSplit() {
  super((Path) null, 0, 0, (String[]) null);
  columnMapping = "";
  split = new ColumnFamilySplit(null, null, null);
}
 
Developer: dvasilen, Project: Hive-Cassandra, Lines: 6, Source: HiveCassandraStandardSplit.java


Example 4: getSplit

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
public ColumnFamilySplit getSplit() {
  return split;
}
 
Developer: dvasilen, Project: Hive-Cassandra, Lines: 4, Source: HiveCassandraStandardSplit.java
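
Examples 3 and 4 come from a FileSplit wrapper around ColumnFamilySplit. The no-argument constructor exists because Hadoop instantiates splits reflectively and then repopulates them through readFields. The rest of HiveCassandraStandardSplit is not shown in this excerpt, so the following write/readFields delegation is only an assumption about the wrapper's likely shape, not the actual Hive-Cassandra source.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.cassandra.hadoop.ColumnFamilySplit;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileSplit;

public class WrappedSplitSketch extends FileSplit {
    private ColumnFamilySplit split;
    private String columnMapping;

    public WrappedSplitSketch() {
        // Hadoop creates the instance reflectively, then calls readFields.
        super((Path) null, 0, 0, (String[]) null);
        columnMapping = "";
        split = new ColumnFamilySplit(null, null, null);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // super.write is skipped: the wrapped FileSplit carries a null path.
        out.writeUTF(columnMapping);
        split.write(out); // delegate the token range and locations
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        columnMapping = in.readUTF();
        split.readFields(in);
    }
}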


Example 5: getSplits

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
@Override
public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
  String ks = jobConf.get(AbstractColumnSerDe.CASSANDRA_KEYSPACE_NAME);
  String cf = jobConf.get(AbstractColumnSerDe.CASSANDRA_CF_NAME);
  int slicePredicateSize = jobConf.getInt(AbstractColumnSerDe.CASSANDRA_SLICE_PREDICATE_SIZE,
      AbstractColumnSerDe.DEFAULT_SLICE_PREDICATE_SIZE);
  int sliceRangeSize = jobConf.getInt(
      AbstractColumnSerDe.CASSANDRA_RANGE_BATCH_SIZE,
      AbstractColumnSerDe.DEFAULT_RANGE_BATCH_SIZE);
  int splitSize = jobConf.getInt(
      AbstractColumnSerDe.CASSANDRA_SPLIT_SIZE,
      AbstractColumnSerDe.DEFAULT_SPLIT_SIZE);
  String cassandraColumnMapping = jobConf.get(AbstractColumnSerDe.CASSANDRA_COL_MAPPING);
  int rpcPort = jobConf.getInt(AbstractColumnSerDe.CASSANDRA_PORT, 9160);
  String host = jobConf.get(AbstractColumnSerDe.CASSANDRA_HOST);
  String partitioner = jobConf.get(AbstractColumnSerDe.CASSANDRA_PARTITIONER);

  if (cassandraColumnMapping == null) {
    throw new IOException("cassandra.columns.mapping required for Cassandra Table.");
  }

  SliceRange range = new SliceRange();
  range.setStart(new byte[0]);
  range.setFinish(new byte[0]);
  range.setReversed(false);
  range.setCount(slicePredicateSize);
  SlicePredicate predicate = new SlicePredicate();
  predicate.setSlice_range(range);

  ConfigHelper.setInputRpcPort(jobConf, "" + rpcPort);
  ConfigHelper.setInputInitialAddress(jobConf, host);
  ConfigHelper.setInputPartitioner(jobConf, partitioner);
  ConfigHelper.setInputSlicePredicate(jobConf, predicate);
  ConfigHelper.setInputColumnFamily(jobConf, ks, cf);
  ConfigHelper.setRangeBatchSize(jobConf, sliceRangeSize);
  ConfigHelper.setInputSplitSize(jobConf, splitSize);

  Job job = new Job(jobConf);
  JobContext jobContext = new JobContext(job.getConfiguration(), job.getJobID());

  Path[] tablePaths = FileInputFormat.getInputPaths(jobContext);
  List<org.apache.hadoop.mapreduce.InputSplit> splits = getSplits(jobContext);
  InputSplit[] results = new InputSplit[splits.size()];

  for (int i = 0; i < splits.size(); ++i) {
    HiveCassandraStandardSplit csplit = new HiveCassandraStandardSplit(
        (ColumnFamilySplit) splits.get(i), cassandraColumnMapping, tablePaths[0]);
    csplit.setKeyspace(ks);
    csplit.setColumnFamily(cf);
    csplit.setRangeBatchSize(sliceRangeSize);
    csplit.setSplitSize(splitSize);
    csplit.setHost(host);
    csplit.setPort(rpcPort);
    csplit.setSlicePredicateSize(slicePredicateSize);
    csplit.setPartitioner(partitioner);
    csplit.setColumnMapping(cassandraColumnMapping);
    results[i] = csplit;
  }
  return results;
}
 
Developer: dvasilen, Project: Hive-Cassandra, Lines: 61, Source: HiveCassandraStandardColumnInputFormat.java
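
Example 5 targets the Hadoop 1.x API: org.apache.hadoop.mapreduce.JobContext was still a concrete class there, so `new JobContext(...)` compiles. Against Hadoop 2 and later, where JobContext became an interface, the equivalent construction would go through JobContextImpl; a sketch under that assumption:

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.task.JobContextImpl;

public class JobContextSketch {
    static JobContext bridge(Job job) {
        // JobContextImpl(Configuration, JobID) replaces the direct
        // `new JobContext(conf, jobId)` call used in Example 5.
        return new JobContextImpl(job.getConfiguration(), job.getJobID());
    }
}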


Example 6: initialize

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException {
  this.split = (ColumnFamilySplit) split;
  Configuration conf = context.getConfiguration();
  predicate = ConfigHelper.getInputSlicePredicate(conf);
  if (!isSliceRangePredicate(predicate)) {
    throw new AssertionError("Wide rows require a slice range");
  }

  totalRowCount = ConfigHelper.getInputSplitSize(conf);
  Log.info("total rows = " + totalRowCount);
  batchRowCount = 1;
  rowPageSize = predicate.getSlice_range().getCount();
  startSlicePredicate = predicate.getSlice_range().start;
  cfName = ConfigHelper.getInputColumnFamily(conf);
  consistencyLevel = ConsistencyLevel.valueOf(ConfigHelper.getReadConsistencyLevel(conf));

  keyspace = ConfigHelper.getInputKeyspace(conf);

  try {
    // only need to connect once
    if (socket != null && socket.isOpen()) {
      return;
    }

    // create connection using thrift
    String location = getLocation();
    socket = new TSocket(location, ConfigHelper.getInputRpcPort(conf));
    TBinaryProtocol binaryProtocol = new TBinaryProtocol(new TFramedTransport(socket));
    client = new Cassandra.Client(binaryProtocol);
    socket.open();

    // log in
    client.set_keyspace(keyspace);
    if (ConfigHelper.getInputKeyspaceUserName(conf) != null) {
      Map<String, String> creds = new HashMap<String, String>();
      creds.put(IAuthenticator.USERNAME_KEY, ConfigHelper.getInputKeyspaceUserName(conf));
      creds.put(IAuthenticator.PASSWORD_KEY, ConfigHelper.getInputKeyspacePassword(conf));
      AuthenticationRequest authRequest = new AuthenticationRequest(creds);
      client.login(authRequest);
    }
  } catch (Exception e) {
    throw new RuntimeException(e);
  }

  iter = new WideRowIterator();
}
 
Developer: dvasilen, Project: Hive-Cassandra, Lines: 50, Source: ColumnFamilyWideRowRecordReader.java
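
Example 6 calls a helper isSliceRangePredicate that the excerpt does not show; presumably it just checks that the Thrift predicate carries a slice range rather than an explicit column-name list. A plausible shape, offered purely as an assumption:

import org.apache.cassandra.thrift.SlicePredicate;

// Assumed shape of the helper used in Example 6; not part of the excerpt.
// Thrift-generated beans expose isSet* checks for each optional field.
final class PredicateCheckSketch {
    static boolean isSliceRangePredicate(SlicePredicate predicate) {
        return predicate.isSetSlice_range() && !predicate.isSetColumn_names();
    }
}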


Example 7: getSplits

import org.apache.cassandra.hadoop.ColumnFamilySplit; // import the required package/class
@Override
public InputSplit[] getSplits(JobConf jobConf, int numSplits) throws IOException {
	final String ks = jobConf.get(AbstractCassandraSerDe.CASSANDRA_KEYSPACE_NAME);
	final String cf = jobConf.get(AbstractCassandraSerDe.CASSANDRA_CF_NAME);
	final int splitSize = jobConf.getInt(AbstractCassandraSerDe.CASSANDRA_SPLIT_SIZE, AbstractCassandraSerDe.DEFAULT_SPLIT_SIZE);
	final int rpcPort = jobConf.getInt(AbstractCassandraSerDe.CASSANDRA_PORT, Integer.parseInt(AbstractCassandraSerDe.DEFAULT_CASSANDRA_PORT));
	final String host = jobConf.get(AbstractCassandraSerDe.CASSANDRA_HOST);
	final String partitionerString = jobConf.get(AbstractCassandraSerDe.CASSANDRA_PARTITIONER);
	final String cassandraColumnMapping = jobConf.get(AbstractCassandraSerDe.CASSANDRA_COL_MAPPING);

	if (cassandraColumnMapping == null) {
		throw new IOException("cassandra.columns.mapping required for Cassandra Table.");
	}

	final Path dummyPath = new Path(ks + "/" + cf);

	SliceRange range = new SliceRange();
	range.setStart(new byte[0]);
	range.setFinish(new byte[0]);
	range.setReversed(false);
	range.setCount(Integer.MAX_VALUE);
	SlicePredicate predicate = new SlicePredicate();
	predicate.setSlice_range(range);

	ConfigHelper.setInputPartitioner(jobConf, partitionerString);
	ConfigHelper.setInputColumnFamily(jobConf, ks, cf);
	ConfigHelper.setInputSplitSize(jobConf, splitSize);
	ConfigHelper.setInputInitialAddress(jobConf, host);
	ConfigHelper.setInputSlicePredicate(jobConf, predicate);
	ConfigHelper.setInputRpcPort(jobConf, Integer.toString(rpcPort));

	ColumnFamilyInputFormat cfif = new ColumnFamilyInputFormat();
	InputSplit[] cfifSplits = cfif.getSplits(jobConf, numSplits);
	InputSplit[] results = new InputSplit[cfifSplits.length];
	for (int i = 0; i < cfifSplits.length; i++) {
		ColumnFamilySplit cfSplit = (ColumnFamilySplit) cfifSplits[i];
		SSTableSplit split = new SSTableSplit(cassandraColumnMapping, cfSplit.getStartToken(), cfSplit.getEndToken(), cfSplit.getLocations(), dummyPath);
		split.setKeyspace(ks);
		split.setColumnFamily(cf);
		split.setEstimatedRows(cfSplit.getLength());
		split.setPartitioner(partitionerString);
		results[i] = split;
		logger.debug("Created split: {}", split);
	}

	return results;
}
 
Developer: richardalow, Project: cassowary, Lines: 48, Source: SSTableInputFormatImpl.java



Note: The org.apache.cassandra.hadoop.ColumnFamilySplit examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets come from open-source projects and copyright remains with their original authors; consult each project's license before using or redistributing the code. Do not reproduce without permission.

