
Java DistributedCache Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.mapreduce.filecache.DistributedCache. If you are wondering what the DistributedCache class does, how to use it, or what real-world usage looks like, the selected code examples below may help.



The DistributedCache class belongs to the org.apache.hadoop.mapreduce.filecache package. Fifteen code examples of the class are collected below, sorted by popularity by default.
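Before the collected examples, here is a minimal driver-side sketch of how DistributedCache is typically populated before a MapReduce job is submitted. It is illustrative only: the class name DistributedCacheDriver, the cache file hdfs:///config/stopwords.txt, and the jar path /libs/extra-lib.jar are hypothetical placeholders and do not come from the projects below.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.filecache.DistributedCache;

public class DistributedCacheDriver {

  @SuppressWarnings("deprecation")
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Register a side file; the "#stopwords" fragment becomes the symlink name
    // under each task's working directory. (Hypothetical HDFS path.)
    DistributedCache.addCacheFile(
        new URI("hdfs:///config/stopwords.txt#stopwords"), conf);

    // Put an additional jar on every task's classpath. (Hypothetical path.)
    DistributedCache.addFileToClassPath(new Path("/libs/extra-lib.jar"), conf);

    // The Job copies the configuration, so the cache entries above are carried
    // into the submitted job.
    Job job = Job.getInstance(conf, "distributed-cache-demo");
    // ... configure mapper, reducer, input and output paths, then submit:
    // job.waitForCompletion(true);
  }
}

On current Hadoop versions the same effect is usually achieved with Job.addCacheFile and Job.addFileToClassPath, since DistributedCache itself is deprecated; the class nonetheless still appears widely in the code below.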

Example 1: setupDistributedCache

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public static void setupDistributedCache(Configuration conf,
    Map<String, LocalResource> localResources) throws IOException {

  // Cache archives
  parseDistributedCacheArtifacts(conf, localResources, LocalResourceType.ARCHIVE,
      DistributedCache.getCacheArchives(conf), DistributedCache.getArchiveTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_ARCHIVES_SIZES),
      DistributedCache.getArchiveVisibilities(conf));

  // Cache files
  parseDistributedCacheArtifacts(conf, localResources, LocalResourceType.FILE,
      DistributedCache.getCacheFiles(conf), DistributedCache.getFileTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_FILES_SIZES),
      DistributedCache.getFileVisibilities(conf));
}
 
Developer ID: Tencent, Project: angel, Lines: 17, Source: AngelApps.java


Example 2: setupDistributedCache

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
public static void setupDistributedCache( 
    Configuration conf, 
    Map<String, LocalResource> localResources) 
throws IOException {
  
  // Cache archives
  parseDistributedCacheArtifacts(conf, localResources,  
      LocalResourceType.ARCHIVE, 
      DistributedCache.getCacheArchives(conf), 
      DistributedCache.getArchiveTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_ARCHIVES_SIZES), 
      DistributedCache.getArchiveVisibilities(conf));
  
  // Cache files
  parseDistributedCacheArtifacts(conf, 
      localResources,  
      LocalResourceType.FILE, 
      DistributedCache.getCacheFiles(conf),
      DistributedCache.getFileTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_FILES_SIZES),
      DistributedCache.getFileVisibilities(conf));
}
 
Developer ID: naver, Project: hadoop, Lines: 23, Source: MRApps.java


Example 3: testSetupDistributedCacheConflictsFiles

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public void testSetupDistributedCacheConflictsFiles() throws Exception {
  Configuration conf = new Configuration();
  conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
  
  URI mockUri = URI.create("mockfs://mock/");
  FileSystem mockFs = ((FilterFileSystem)FileSystem.get(mockUri, conf))
      .getRawFileSystem();
  
  URI file = new URI("mockfs://mock/tmp/something.zip#something");
  Path filePath = new Path(file);
  URI file2 = new URI("mockfs://mock/tmp/something.txt#something");
  Path file2Path = new Path(file2);
  
  when(mockFs.resolvePath(filePath)).thenReturn(filePath);
  when(mockFs.resolvePath(file2Path)).thenReturn(file2Path);
  
  DistributedCache.addCacheFile(file, conf);
  DistributedCache.addCacheFile(file2, conf);
  conf.set(MRJobConfig.CACHE_FILE_TIMESTAMPS, "10,11");
  conf.set(MRJobConfig.CACHE_FILES_SIZES, "10,11");
  conf.set(MRJobConfig.CACHE_FILE_VISIBILITIES, "true,true");
  Map<String, LocalResource> localResources = 
    new HashMap<String, LocalResource>();
  MRApps.setupDistributedCache(conf, localResources);
  
  assertEquals(1, localResources.size());
  LocalResource lr = localResources.get("something");
  //First one wins
  assertNotNull(lr);
  assertEquals(10l, lr.getSize());
  assertEquals(10l, lr.getTimestamp());
  assertEquals(LocalResourceType.FILE, lr.getType());
}
 
Developer ID: naver, Project: hadoop, Lines: 35, Source: TestMRApps.java


Example 4: setupDistributedCache

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public static void setupDistributedCache( 
    Configuration conf, 
    Map<String, LocalResource> localResources) 
throws IOException {
  
  // Cache archives
  parseDistributedCacheArtifacts(conf, localResources,  
      LocalResourceType.ARCHIVE, 
      DistributedCache.getCacheArchives(conf), 
      DistributedCache.getArchiveTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_ARCHIVES_SIZES), 
      DistributedCache.getArchiveVisibilities(conf));
  
  // Cache files
  parseDistributedCacheArtifacts(conf, 
      localResources,  
      LocalResourceType.FILE, 
      DistributedCache.getCacheFiles(conf),
      DistributedCache.getFileTimestamps(conf),
      getFileSizes(conf, MRJobConfig.CACHE_FILES_SIZES),
      DistributedCache.getFileVisibilities(conf));
}
 
Developer ID: hopshadoop, Project: hops, Lines: 24, Source: MRApps.java


Example 5: setClasspath

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public static void setClasspath(Map<String, String> environment, Configuration conf)
    throws IOException {
  String classpathEnvVar = Environment.CLASSPATH.name();
  Apps.addToEnvironment(environment, classpathEnvVar, Environment.PWD.$());
  Apps.addToEnvironment(environment, classpathEnvVar, Environment.PWD.$() + Path.SEPARATOR + "*");
  // a * in the classpath will only find a .jar, so we need to filter out
  // all .jars and add everything else
  addToClasspathIfNotJar(DistributedCache.getFileClassPaths(conf),
      DistributedCache.getCacheFiles(conf), conf, environment, classpathEnvVar);
  addToClasspathIfNotJar(DistributedCache.getArchiveClassPaths(conf),
      DistributedCache.getCacheArchives(conf), conf, environment, classpathEnvVar);
  
  AngelApps.setAngelFrameworkClasspath(environment, conf);
}
 
Developer ID: Tencent, Project: angel, Lines: 16, Source: AngelApps.java


Example 6: runJob

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
static boolean runJob(JobConf conf, Path inDir, Path outDir, int numMaps, 
                         int numReds) throws IOException, InterruptedException {

  FileSystem fs = FileSystem.get(conf);
  if (fs.exists(outDir)) {
    fs.delete(outDir, true);
  }
  if (!fs.exists(inDir)) {
    fs.mkdirs(inDir);
  }
  String input = "The quick brown fox\n" + "has many silly\n"
      + "red fox sox\n";
  for (int i = 0; i < numMaps; ++i) {
    DataOutputStream file = fs.create(new Path(inDir, "part-" + i));
    file.writeBytes(input);
    file.close();
  }

  DistributedCache.addFileToClassPath(TestMRJobs.APP_JAR, conf, fs);
  conf.setOutputCommitter(CustomOutputCommitter.class);
  conf.setInputFormat(TextInputFormat.class);
  conf.setOutputKeyClass(LongWritable.class);
  conf.setOutputValueClass(Text.class);

  FileInputFormat.setInputPaths(conf, inDir);
  FileOutputFormat.setOutputPath(conf, outDir);
  conf.setNumMapTasks(numMaps);
  conf.setNumReduceTasks(numReds);

  JobClient jobClient = new JobClient(conf);
  
  RunningJob job = jobClient.submitJob(conf);
  return jobClient.monitorAndPrintJob(conf, job);
}
 
Developer ID: naver, Project: hadoop, Lines: 35, Source: TestMROldApiJobs.java


Example 7: testCombinerShouldUpdateTheReporter

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@Test
public void testCombinerShouldUpdateTheReporter() throws Exception {
  JobConf conf = new JobConf(mrCluster.getConfig());
  int numMaps = 5;
  int numReds = 2;
  Path in = new Path(mrCluster.getTestWorkDir().getAbsolutePath(),
      "testCombinerShouldUpdateTheReporter-in");
  Path out = new Path(mrCluster.getTestWorkDir().getAbsolutePath(),
      "testCombinerShouldUpdateTheReporter-out");
  createInputOutPutFolder(in, out, numMaps);
  conf.setJobName("test-job-with-combiner");
  conf.setMapperClass(IdentityMapper.class);
  conf.setCombinerClass(MyCombinerToCheckReporter.class);
  //conf.setJarByClass(MyCombinerToCheckReporter.class);
  conf.setReducerClass(IdentityReducer.class);
  DistributedCache.addFileToClassPath(TestMRJobs.APP_JAR, conf);
  conf.setOutputCommitter(CustomOutputCommitter.class);
  conf.setInputFormat(TextInputFormat.class);
  conf.setOutputKeyClass(LongWritable.class);
  conf.setOutputValueClass(Text.class);

  FileInputFormat.setInputPaths(conf, in);
  FileOutputFormat.setOutputPath(conf, out);
  conf.setNumMapTasks(numMaps);
  conf.setNumReduceTasks(numReds);
  
  runJob(conf);
}
 
Developer ID: naver, Project: hadoop, Lines: 29, Source: TestMRAppWithCombiner.java


Example 8: testSetupDistributedCacheConflicts

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public void testSetupDistributedCacheConflicts() throws Exception {
  Configuration conf = new Configuration();
  conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
  
  URI mockUri = URI.create("mockfs://mock/");
  FileSystem mockFs = ((FilterFileSystem)FileSystem.get(mockUri, conf))
      .getRawFileSystem();
  
  URI archive = new URI("mockfs://mock/tmp/something.zip#something");
  Path archivePath = new Path(archive);
  URI file = new URI("mockfs://mock/tmp/something.txt#something");
  Path filePath = new Path(file);
  
  when(mockFs.resolvePath(archivePath)).thenReturn(archivePath);
  when(mockFs.resolvePath(filePath)).thenReturn(filePath);
  
  DistributedCache.addCacheArchive(archive, conf);
  conf.set(MRJobConfig.CACHE_ARCHIVES_TIMESTAMPS, "10");
  conf.set(MRJobConfig.CACHE_ARCHIVES_SIZES, "10");
  conf.set(MRJobConfig.CACHE_ARCHIVES_VISIBILITIES, "true");
  DistributedCache.addCacheFile(file, conf);
  conf.set(MRJobConfig.CACHE_FILE_TIMESTAMPS, "11");
  conf.set(MRJobConfig.CACHE_FILES_SIZES, "11");
  conf.set(MRJobConfig.CACHE_FILE_VISIBILITIES, "true");
  Map<String, LocalResource> localResources = 
    new HashMap<String, LocalResource>();
  MRApps.setupDistributedCache(conf, localResources);
  
  assertEquals(1, localResources.size());
  LocalResource lr = localResources.get("something");
  //Archive wins
  assertNotNull(lr);
  assertEquals(10l, lr.getSize());
  assertEquals(10l, lr.getTimestamp());
  assertEquals(LocalResourceType.ARCHIVE, lr.getType());
}
 
Developer ID: naver, Project: hadoop, Lines: 38, Source: TestMRApps.java


Example 9: testSetupDistributedCacheConflicts

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
@Test(timeout = 120000, expected = InvalidJobConfException.class)
public void testSetupDistributedCacheConflicts() throws Exception {
  Configuration conf = new Configuration();
  conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
  
  URI mockUri = URI.create("mockfs://mock/");
  FileSystem mockFs = ((FilterFileSystem)FileSystem.get(mockUri, conf))
      .getRawFileSystem();
  
  URI archive = new URI("mockfs://mock/tmp/something.zip#something");
  Path archivePath = new Path(archive);
  URI file = new URI("mockfs://mock/tmp/something.txt#something");
  Path filePath = new Path(file);
  
  when(mockFs.resolvePath(archivePath)).thenReturn(archivePath);
  when(mockFs.resolvePath(filePath)).thenReturn(filePath);
  
  DistributedCache.addCacheArchive(archive, conf);
  conf.set(MRJobConfig.CACHE_ARCHIVES_TIMESTAMPS, "10");
  conf.set(MRJobConfig.CACHE_ARCHIVES_SIZES, "10");
  conf.set(MRJobConfig.CACHE_ARCHIVES_VISIBILITIES, "true");
  DistributedCache.addCacheFile(file, conf);
  conf.set(MRJobConfig.CACHE_FILE_TIMESTAMPS, "11");
  conf.set(MRJobConfig.CACHE_FILES_SIZES, "11");
  conf.set(MRJobConfig.CACHE_FILE_VISIBILITIES, "true");
  Map<String, LocalResource> localResources = 
    new HashMap<String, LocalResource>();
  MRApps.setupDistributedCache(conf, localResources);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 31, Source: TestMRApps.java


Example 10: testSetupDistributedCacheConflictsFiles

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
@Test(timeout = 120000, expected = InvalidJobConfException.class)
public void testSetupDistributedCacheConflictsFiles() throws Exception {
  Configuration conf = new Configuration();
  conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
  
  URI mockUri = URI.create("mockfs://mock/");
  FileSystem mockFs = ((FilterFileSystem)FileSystem.get(mockUri, conf))
      .getRawFileSystem();
  
  URI file = new URI("mockfs://mock/tmp/something.zip#something");
  Path filePath = new Path(file);
  URI file2 = new URI("mockfs://mock/tmp/something.txt#something");
  Path file2Path = new Path(file2);
  
  when(mockFs.resolvePath(filePath)).thenReturn(filePath);
  when(mockFs.resolvePath(file2Path)).thenReturn(file2Path);
  
  DistributedCache.addCacheFile(file, conf);
  DistributedCache.addCacheFile(file2, conf);
  conf.set(MRJobConfig.CACHE_FILE_TIMESTAMPS, "10,11");
  conf.set(MRJobConfig.CACHE_FILES_SIZES, "10,11");
  conf.set(MRJobConfig.CACHE_FILE_VISIBILITIES, "true,true");
  Map<String, LocalResource> localResources = 
    new HashMap<String, LocalResource>();
  MRApps.setupDistributedCache(conf, localResources);
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 28, Source: TestMRApps.java


Example 11: copyLog4jPropertyFile

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
private void copyLog4jPropertyFile(Job job, Path submitJobDir,
    short replication) throws IOException {
  Configuration conf = job.getConfiguration();

  String file =
      validateFilePath(
          conf.get(MRJobConfig.MAPREDUCE_JOB_LOG4J_PROPERTIES_FILE), conf);
  LOG.debug("default FileSystem: " + jtFs.getUri());
  FsPermission mapredSysPerms =
      new FsPermission(JobSubmissionFiles.JOB_DIR_PERMISSION);
  if (!jtFs.exists(submitJobDir)) {
    throw new IOException("Cannot find job submission directory! "
        + "It should just be created, so something wrong here.");
  }

  Path fileDir = JobSubmissionFiles.getJobLog4jFile(submitJobDir);

  // first copy local log4j.properties file to HDFS under submitJobDir
  if (file != null) {
    FileSystem.mkdirs(jtFs, fileDir, mapredSysPerms);
    URI tmpURI = null;
    try {
      tmpURI = new URI(file);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException(e);
    }
    Path tmp = new Path(tmpURI);
    Path newPath = copyRemoteFiles(fileDir, tmp, conf, replication);
    DistributedCache.addFileToClassPath(new Path(newPath.toUri().getPath()),
        conf);
  }
}
 
Developer ID: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 34, Source: JobResourceUploader.java


Example 12: copyLog4jPropertyFile

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
private void copyLog4jPropertyFile(Job job, Path submitJobDir,
    short replication) throws IOException {
  Configuration conf = job.getConfiguration();

  String file = validateFilePath(
      conf.get(MRJobConfig.MAPREDUCE_JOB_LOG4J_PROPERTIES_FILE), conf);
  LOG.debug("default FileSystem: " + jtFs.getUri());
  FsPermission mapredSysPerms = 
    new FsPermission(JobSubmissionFiles.JOB_DIR_PERMISSION);
  if (!jtFs.exists(submitJobDir)) {
    throw new IOException("Cannot find job submission directory! " 
        + "It should just be created, so something wrong here.");
  }
  
  Path fileDir = JobSubmissionFiles.getJobLog4jFile(submitJobDir);

  // first copy local log4j.properties file to HDFS under submitJobDir
  if (file != null) {
    FileSystem.mkdirs(jtFs, fileDir, mapredSysPerms);
    URI tmpURI = null;
    try {
      tmpURI = new URI(file);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException(e);
    }
    Path tmp = new Path(tmpURI);
    Path newPath = copyRemoteFiles(fileDir, tmp, conf, replication);
    DistributedCache.addFileToClassPath(new Path(newPath.toUri().getPath()), conf);
  }
}
 
Developer ID: Nextzero, Project: hadoop-2.6.0-cdh5.4.3, Lines: 32, Source: JobSubmitter.java


Example 13: testSetupDistributedCacheConflictsFiles

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public void testSetupDistributedCacheConflictsFiles() throws Exception {
  Configuration conf = new Configuration();
  conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
  
  URI mockUri = URI.create("mockfs://mock/");
  FileSystem mockFs = ((FilterFileSystem)FileSystem.get(mockUri, conf))
      .getRawFileSystem();
  
  URI file = new URI("mockfs://mock/tmp/something.zip#something");
  Path filePath = new Path(file);
  URI file2 = new URI("mockfs://mock/tmp/something.txt#something");
  Path file2Path = new Path(file2);
  
  when(mockFs.resolvePath(filePath)).thenReturn(filePath);
  when(mockFs.resolvePath(file2Path)).thenReturn(file2Path);
  
  DistributedCache.addCacheFile(file, conf);
  DistributedCache.addCacheFile(file2, conf);
  conf.set(MRJobConfig.CACHE_FILE_TIMESTAMPS, "10,11");
  conf.set(MRJobConfig.CACHE_FILES_SIZES, "10,11");
  conf.set(MRJobConfig.CACHE_FILE_VISIBILITIES, "true,true");
  Map<String, LocalResource> localResources = 
    new HashMap<String, LocalResource>();
  MRApps.setupDistributedCache(conf, localResources);
  
  assertEquals(1, localResources.size());
  LocalResource lr = localResources.get("something");
  //First one wins
  assertNotNull(lr);
  assertEquals(10l, lr.getSize());
  assertEquals(10l, lr.getTimestamp());
  assertEquals(LocalResourceType.FILE, lr.getType());
}
 
Developer ID: ict-carch, Project: hadoop-plus, Lines: 35, Source: TestMRApps.java


Example 14: setupDistCacheInputs

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
public static void setupDistCacheInputs(JobConf job, String indices, String pathsString, ArrayList<String> paths) {
	job.set(DISTCACHE_INPUT_INDICES, indices);
	job.set(DISTCACHE_INPUT_PATHS, pathsString);
	Path p = null;
	
	for(String spath : paths) {
		p = new Path(spath);
		
		DistributedCache.addCacheFile(p.toUri(), job);
		DistributedCache.createSymlink(job);
	}
}
 
Developer ID: apache, Project: systemml, Lines: 13, Source: MRJobConfiguration.java


Example 15: addClasspathToEnv

import org.apache.hadoop.mapreduce.filecache.DistributedCache; // import the required package/class
@SuppressWarnings("deprecation")
public static void addClasspathToEnv(Map<String, String> environment,
    String classpathEnvVar, Configuration conf) throws IOException {
  MRApps.addToEnvironment(
      environment,
      classpathEnvVar,
      MRJobConfig.JOB_JAR + Path.SEPARATOR + MRJobConfig.JOB_JAR, conf);
  MRApps.addToEnvironment(
      environment,
      classpathEnvVar,
      MRJobConfig.JOB_JAR + Path.SEPARATOR + "classes" + Path.SEPARATOR,
      conf);

  MRApps.addToEnvironment(
      environment,
      classpathEnvVar,
      MRJobConfig.JOB_JAR + Path.SEPARATOR + "lib" + Path.SEPARATOR + "*",
      conf);

  MRApps.addToEnvironment(
      environment,
      classpathEnvVar,
      crossPlatformifyMREnv(conf, Environment.PWD) + Path.SEPARATOR + "*",
      conf);

  // a * in the classpath will only find a .jar, so we need to filter out
  // all .jars and add everything else
  addToClasspathIfNotJar(DistributedCache.getFileClassPaths(conf),
      DistributedCache.getCacheFiles(conf),
      conf,
      environment, classpathEnvVar);
  addToClasspathIfNotJar(DistributedCache.getArchiveClassPaths(conf),
      DistributedCache.getCacheArchives(conf),
      conf,
      environment, classpathEnvVar);
}
 
Developer ID: hopshadoop, Project: hops, Lines: 37, Source: MRApps.java



Note: The org.apache.hadoop.mapreduce.filecache.DistributedCache examples in this article were collected from source code and documentation platforms such as GitHub and MSDocs. The code snippets come from open-source projects contributed by their authors; copyright remains with the original authors, and distribution and use must follow the corresponding project licenses. Please do not reproduce without permission.

