
Java WrappedMapper Class Code Examples


This article compiles typical usage examples of the Java class org.apache.hadoop.mapreduce.lib.map.WrappedMapper. If you have been wondering what exactly the WrappedMapper class does, how to use it, or where to find usage examples, the curated class code examples below may help.



The WrappedMapper class belongs to the org.apache.hadoop.mapreduce.lib.map package. Ten code examples of the class are shown below, sorted by popularity by default.
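All ten examples share one pattern: construct a low-level MapContext (usually a MapContextImpl), then call WrappedMapper#getMapContext to wrap it in the Mapper.Context facade that user code expects. Here is a minimal sketch of that pattern; the class and method names are illustrative, and the null arguments stand in for the RecordReader, RecordWriter, OutputCommitter, StatusReporter, and InputSplit that a real task would supply:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.MapContext;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.lib.map.WrappedMapper;
import org.apache.hadoop.mapreduce.task.MapContextImpl;

public class WrappedMapperSketch {
  static Mapper<Text, Text, Text, Text>.Context newMapperContext(Configuration conf) {
    // Build the low-level map context; nulls are placeholders for the
    // reader, writer, committer, reporter, and split of a real task.
    MapContext<Text, Text, Text, Text> mapContext =
        new MapContextImpl<Text, Text, Text, Text>(
            conf, new TaskAttemptID(), null, null, null, null, null);
    // Wrap it so user code only ever sees the Mapper.Context facade.
    return new WrappedMapper<Text, Text, Text, Text>().getMapContext(mapContext);
  }
}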

Example 1: testLoadMapper

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
@SuppressWarnings({"rawtypes", "unchecked"})
@Test (timeout=10000)
public void testLoadMapper() throws Exception {

  Configuration conf = new Configuration();
  conf.setInt(JobContext.NUM_REDUCES, 2);

  CompressionEmulationUtil.setCompressionEmulationEnabled(conf, true);
  conf.setBoolean(MRJobConfig.MAP_OUTPUT_COMPRESS, true);

  TaskAttemptID taskId = new TaskAttemptID();
  RecordReader<NullWritable, GridmixRecord> reader = new FakeRecordReader();

  LoadRecordGkGrWriter writer = new LoadRecordGkGrWriter();

  OutputCommitter committer = new CustomOutputCommitter();
  StatusReporter reporter = new TaskAttemptContextImpl.DummyReporter();
  LoadSplit split = getLoadSplit();

  MapContext<NullWritable, GridmixRecord, GridmixKey, GridmixRecord> mapContext = new MapContextImpl<NullWritable, GridmixRecord, GridmixKey, GridmixRecord>(
          conf, taskId, reader, writer, committer, reporter, split);
  // wrap the MapContext so the mapper sees a Mapper.Context
  Context ctx = new WrappedMapper<NullWritable, GridmixRecord, GridmixKey, GridmixRecord>()
          .getMapContext(mapContext);

  reader.initialize(split, ctx);
  ctx.getConfiguration().setBoolean(MRJobConfig.MAP_OUTPUT_COMPRESS, true);
  CompressionEmulationUtil.setCompressionEmulationEnabled(
          ctx.getConfiguration(), true);

  LoadJob.LoadMapper mapper = new LoadJob.LoadMapper();
  // runs setup(), map() for every input record, then cleanup()
  mapper.run(ctx);

  Map<GridmixKey, GridmixRecord> data = writer.getData();
  // check result
  assertEquals(2, data.size());

}
 
Developer: yncxcw, Project: big-c, Lines: 40, Source: TestGridMixClasses.java


Example 2: buildProxyMapperContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
/**
 * Utility to generate dummy Mapper#Context for use in Giraph internals.
 * This is the "key hack" to inject MapReduce-related data structures
 * containing YARN cluster metadata (and our GiraphConf from the AppMaster)
 * into our Giraph BSP task code.
 * @param tid the TaskAttemptID to construct this Mapper#Context from.
 * @return sort of a Mapper#Context if you squint just right.
 */
private Context buildProxyMapperContext(final TaskAttemptID tid) {
  MapContext mc = new MapContextImpl<Object, Object, Object, Object>(
    conf, // our Configuration, populated back at the GiraphYarnClient.
    tid,  // our TaskAttemptId, generated w/YARN app, container, attempt IDs
    null, // RecordReader here will never be used by Giraph
    null, // RecordWriter here will never be used by Giraph
    null, // OutputCommitter here will never be used by Giraph
    new TaskAttemptContextImpl.DummyReporter() { // goes in task logs for now
      @Override
      public void setStatus(String msg) {
        LOG.info("[STATUS: task-" + bspTaskId + "] " + msg);
      }
    },
    null); // Input split setting here will never be used by Giraph

  // now, we wrap our MapContext ref so we can produce a Mapper#Context
  WrappedMapper<Object, Object, Object, Object> wrappedMapper
    = new WrappedMapper<Object, Object, Object, Object>();
  return wrappedMapper.getMapContext(mc);
}
 
Developer: renato2099, Project: giraph-gora, Lines: 29, Source: GiraphYarnTask.java


Example 3: StubContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
public StubContext(Configuration conf, RecordReader<Text, CopyListingFileStatus> reader, int taskId)
    throws IOException, InterruptedException {

  WrappedMapper<Text, CopyListingFileStatus, Text, Text> wrappedMapper = new WrappedMapper<>();

  MapContextImpl<Text, CopyListingFileStatus, Text, Text> contextImpl = new MapContextImpl<>(conf,
      getTaskAttemptID(taskId), reader, writer, null, reporter, null);

  this.reader = reader;
  mapperContext = wrappedMapper.getMapContext(contextImpl);
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 12, Source: StubContext.java


Example 4: createMapContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
/**
 * Create a map context that is based on ChainMapContext and the given record
 * reader and record writer
 */
private <KEYIN, VALUEIN, KEYOUT, VALUEOUT> 
Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context createMapContext(
    RecordReader<KEYIN, VALUEIN> rr, RecordWriter<KEYOUT, VALUEOUT> rw,
    TaskInputOutputContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> context,
    Configuration conf) {
  MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT> mapContext = 
    new ChainMapContextImpl<KEYIN, VALUEIN, KEYOUT, VALUEOUT>(
      context, rr, rw, conf);
  Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context mapperContext = 
    new WrappedMapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>()
      .getMapContext(mapContext);
  return mapperContext;
}
 
Developer: naver, Project: hadoop, Lines: 18, Source: Chain.java


Example 5: testCloneMapContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
@Test
public void testCloneMapContext() throws Exception {
  TaskID taskId = new TaskID(jobId, TaskType.MAP, 0);
  TaskAttemptID taskAttemptid = new TaskAttemptID(taskId, 0);
  MapContext<IntWritable, IntWritable, IntWritable, IntWritable> mapContext =
  new MapContextImpl<IntWritable, IntWritable, IntWritable, IntWritable>(
      conf, taskAttemptid, null, null, null, null, null);
  Mapper<IntWritable, IntWritable, IntWritable, IntWritable>.Context mapperContext = 
    new WrappedMapper<IntWritable, IntWritable, IntWritable, IntWritable>().getMapContext(
        mapContext);
  ContextFactory.cloneMapContext(mapperContext, conf, null, null);
}
 
Developer: naver, Project: hadoop, Lines: 13, Source: TestContextFactory.java


Example 6: StubContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
public StubContext(Configuration conf,
    RecordReader<Text, CopyListingFileStatus> reader, int taskId)
    throws IOException, InterruptedException {

  WrappedMapper<Text, CopyListingFileStatus, Text, Text> wrappedMapper
          = new WrappedMapper<Text, CopyListingFileStatus, Text, Text>();

  MapContextImpl<Text, CopyListingFileStatus, Text, Text> contextImpl
          = new MapContextImpl<Text, CopyListingFileStatus, Text, Text>(conf,
          getTaskAttemptID(taskId), reader, writer,
          null, reporter, null);

  this.reader = reader;
  this.mapperContext = wrappedMapper.getMapContext(contextImpl);
}
 
Developer: naver, Project: hadoop, Lines: 16, Source: StubContext.java


Example 7: testSleepMapper

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
@SuppressWarnings({"unchecked", "rawtypes"})
@Test (timeout=30000)
public void testSleepMapper() throws Exception {
  SleepJob.SleepMapper test = new SleepJob.SleepMapper();

  Configuration conf = new Configuration();
  conf.setInt(JobContext.NUM_REDUCES, 2);

  CompressionEmulationUtil.setCompressionEmulationEnabled(conf, true);
  conf.setBoolean(MRJobConfig.MAP_OUTPUT_COMPRESS, true);
  TaskAttemptID taskId = new TaskAttemptID();
  FakeRecordLLReader reader = new FakeRecordLLReader();
  LoadRecordGkNullWriter writer = new LoadRecordGkNullWriter();
  OutputCommitter committer = new CustomOutputCommitter();
  StatusReporter reporter = new TaskAttemptContextImpl.DummyReporter();
  SleepSplit split = getSleepSplit();
  MapContext<LongWritable, LongWritable, GridmixKey, NullWritable> mapcontext = new MapContextImpl<LongWritable, LongWritable, GridmixKey, NullWritable>(
          conf, taskId, reader, writer, committer, reporter, split);
  Context context = new WrappedMapper<LongWritable, LongWritable, GridmixKey, NullWritable>()
          .getMapContext(mapcontext);

  long start = System.currentTimeMillis();
  LOG.info("start:" + start);
  LongWritable key = new LongWritable(start + 2000);
  LongWritable value = new LongWritable(start + 2000);
  // should sleep for 2 seconds
  test.map(key, value, context);
  LOG.info("finish:" + System.currentTimeMillis());
  assertTrue(System.currentTimeMillis() >= (start + 2000));

  test.cleanup(context);
  assertEquals(1, writer.getData().size());
}
 
Developer: naver, Project: hadoop, Lines: 34, Source: TestGridMixClasses.java


Example 8: StubContext

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
public StubContext(Configuration conf, RecordReader<Text, FileStatus> reader,
                   int taskId) throws IOException, InterruptedException {

  WrappedMapper<Text, FileStatus, Text, Text> wrappedMapper
          = new WrappedMapper<Text, FileStatus, Text, Text>();

  MapContextImpl<Text, FileStatus, Text, Text> contextImpl
          = new MapContextImpl<Text, FileStatus, Text, Text>(conf,
          getTaskAttemptID(taskId), reader, writer,
          null, reporter, null);

  this.reader = reader;
  this.mapperContext = wrappedMapper.getMapContext(contextImpl);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 15, Source: StubContext.java


Example 9: testSleepMapper

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
@SuppressWarnings({"unchecked", "rawtypes"})
@Test (timeout=10000)
public void testSleepMapper() throws Exception {
  SleepJob.SleepMapper test = new SleepJob.SleepMapper();

  Configuration conf = new Configuration();
  conf.setInt(JobContext.NUM_REDUCES, 2);

  CompressionEmulationUtil.setCompressionEmulationEnabled(conf, true);
  conf.setBoolean(MRJobConfig.MAP_OUTPUT_COMPRESS, true);
  TaskAttemptID taskId = new TaskAttemptID();
  FakeRecordLLReader reader = new FakeRecordLLReader();
  LoadRecordGkNullWriter writer = new LoadRecordGkNullWriter();
  OutputCommitter committer = new CustomOutputCommitter();
  StatusReporter reporter = new TaskAttemptContextImpl.DummyReporter();
  SleepSplit split = getSleepSplit();
  MapContext<LongWritable, LongWritable, GridmixKey, NullWritable> mapcontext = new MapContextImpl<LongWritable, LongWritable, GridmixKey, NullWritable>(
          conf, taskId, reader, writer, committer, reporter, split);
  Context context = new WrappedMapper<LongWritable, LongWritable, GridmixKey, NullWritable>()
          .getMapContext(mapcontext);

  long start = System.currentTimeMillis();
  LOG.info("start:" + start);
  LongWritable key = new LongWritable(start + 2000);
  LongWritable value = new LongWritable(start + 2000);
  // should sleep for 2 seconds
  test.map(key, value, context);
  LOG.info("finish:" + System.currentTimeMillis());
  assertTrue(System.currentTimeMillis() >= (start + 2000));

  test.cleanup(context);
  assertEquals(1, writer.getData().size());
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 34, Source: TestGridMixClasses.java


Example 10: setup

import org.apache.hadoop.mapreduce.lib.map.WrappedMapper; // import the required package/class
@SuppressWarnings("unchecked")
@Override
protected void setup(final Context context) throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    @SuppressWarnings("unchecked")
    Class<Mapper>[] mappersClass = (Class<Mapper>[]) conf.getClasses(CONF_KEY);
    mappers = new ArrayList<Mapper>(mappersClass.length);
    cleanups = new ArrayList<Method>(mappersClass.length);
    maps = new ArrayList<Method>(mappersClass.length);
    WrappedMapper wrappedMapper = new WrappedMapper();
    contexts = Lists.newArrayList();
    int[] redirectToReducer = context.getConfiguration().getInts(MultiJob.REDIRECT_TO_REDUCER);
    for (int i = 0; i < mappersClass.length; i++) {
        Class<Mapper> mapperClass = mappersClass[i];
        final int finalI = redirectToReducer[i];
        WrappedMapper.Context myContext = wrappedMapper.new Context(context) {
            @Override
            public void write(Object k, Object v) throws IOException, InterruptedException {
                context.write(new PerMapperOutputKey(finalI, k),
                        new PerMapperOutputValue(finalI, v));
            }
        };
        contexts.add(myContext);
        Mapper mapper = ReflectionUtils.newInstance(mapperClass, conf);
        mappers.add(mapper);
        Methods.invoke(Methods.get(mapperClass, "setup", Context.class), mapper, myContext);
        cleanups.add(Methods.get(mapperClass, "cleanup", Context.class));
        maps.add(Methods.getWithNameMatches(mapperClass, "map"));
    }
}
 
Developer: elazarl, Project: multireducers, Lines: 31, Source: MultiMapper.java
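
To round out Example 10: the setup method above only wires each delegate mapper to a redirecting WrappedMapper.Context. Here is a hypothetical sketch, not taken from the multireducers source, of how the matching map method might dispatch each record to every delegate; the field names (mappers, maps, contexts) and the Methods helper are assumed from the setup() snippet above:

// Hypothetical sketch: feed each incoming record to every delegate mapper
// through the redirecting context built in setup().
@Override
protected void map(Object key, Object value, Context context)
        throws IOException, InterruptedException {
    for (int i = 0; i < mappers.size(); i++) {
        // maps.get(i) is the delegate's map(KEYIN, VALUEIN, Context) method;
        // contexts.get(i) rewrites writes into PerMapperOutputKey/Value pairs.
        Methods.invoke(maps.get(i), mappers.get(i), key, value, contexts.get(i));
    }
}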



Note: The org.apache.hadoop.mapreduce.lib.map.WrappedMapper class examples in this article were compiled from source-code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Please consult the corresponding project's License before distributing or using it, and do not repost without permission.

