
Java CopyListing Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.tools.CopyListing. If you are wondering what the CopyListing class does, or how and where to use it, the curated examples below should help.



The CopyListing class belongs to the org.apache.hadoop.tools package. Twelve code examples of the class are presented below, sorted by popularity by default.
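Before diving into the examples, here is a minimal sketch of the pattern most of them follow: obtain a CopyListing via its factory method and build a listing file. This assumes a Hadoop 2.x client on the classpath and a reachable file system; the `/src`, `/dest`, and listing paths are placeholders, not values from any of the projects below.

```java
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.tools.CopyListing;
import org.apache.hadoop.tools.DistCpOptions;

public class CopyListingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Credentials credentials = new Credentials();

    // Source paths and target path (placeholders for illustration).
    DistCpOptions options = new DistCpOptions(
        Arrays.asList(new Path("/src")), new Path("/dest"));

    // The factory method selects a listing implementation
    // (e.g. GlobbedCopyListing) based on the options/configuration.
    CopyListing listing = CopyListing.getCopyListing(conf, credentials, options);

    // Writes a SequenceFile enumerating the files to copy.
    listing.buildListing(new Path("/tmp/fileList.seq"), options);
  }
}
```

The examples below show the same two calls — `getCopyListing(...)` and `buildListing(...)` — in the context of input-split and commit testing, plus one production use inside CopyCommitter.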

Example 1: setAsCopyListingClass

import org.apache.hadoop.tools.CopyListing; // import the required package/class
static void setAsCopyListingClass(Configuration conf) {
  conf.setClass(CONF_LABEL_COPY_LISTING_CLASS, CircusTrainCopyListing.class, CopyListing.class);
}
 
Developer: HotelsDotCom, Project: circus-train, Lines: 4, Source: CircusTrainCopyListing.java


Example 2: testGetSplits

import org.apache.hadoop.tools.CopyListing; // import the required package/class
public void testGetSplits(int nMaps) throws Exception {
  DistCpOptions options = getOptions(nMaps);
  Configuration configuration = new Configuration();
  configuration.set("mapred.map.tasks",
                    String.valueOf(options.getMaxMaps()));
  Path listFile = new Path(cluster.getFileSystem().getUri().toString()
      + "/tmp/testGetSplits_1/fileList.seq");
  CopyListing.getCopyListing(configuration, CREDENTIALS, options).
      buildListing(listFile, options);

  JobContext jobContext = new JobContextImpl(configuration, new JobID());
  UniformSizeInputFormat uniformSizeInputFormat = new UniformSizeInputFormat();
  List<InputSplit> splits
          = uniformSizeInputFormat.getSplits(jobContext);

  int sizePerMap = totalFileSize/nMaps;

  checkSplits(listFile, splits);

  int doubleCheckedTotalSize = 0;
  int previousSplitSize = -1;
  for (int i=0; i<splits.size(); ++i) {
    InputSplit split = splits.get(i);
    int currentSplitSize = 0;
    RecordReader<Text, CopyListingFileStatus> recordReader =
      uniformSizeInputFormat.createRecordReader(split, null);
    StubContext stubContext = new StubContext(jobContext.getConfiguration(),
                                              recordReader, 0);
    final TaskAttemptContext taskAttemptContext
       = stubContext.getContext();
    recordReader.initialize(split, taskAttemptContext);
    while (recordReader.nextKeyValue()) {
      Path sourcePath = recordReader.getCurrentValue().getPath();
      FileSystem fs = sourcePath.getFileSystem(configuration);
      FileStatus fileStatus [] = fs.listStatus(sourcePath);
      if (fileStatus.length > 1) {
        continue;
      }
      currentSplitSize += fileStatus[0].getLen();
    }
    Assert.assertTrue(
         previousSplitSize == -1
             || Math.abs(currentSplitSize - previousSplitSize) < 0.1*sizePerMap
             || i == splits.size()-1);

    doubleCheckedTotalSize += currentSplitSize;
  }

  Assert.assertEquals(totalFileSize, doubleCheckedTotalSize);
}
 
Developer: naver, Project: hadoop, Lines: 51, Source: TestUniformSizeInputFormat.java


Example 3: testPreserveStatus

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testPreserveStatus() {
  TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
  JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
      taskAttemptContext.getTaskAttemptID().getJobID());
  Configuration conf = jobContext.getConfiguration();


  String sourceBase;
  String targetBase;
  FileSystem fs = null;
  try {
    OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
    fs = FileSystem.get(conf);
    FsPermission sourcePerm = new FsPermission((short) 511);
    FsPermission initialPerm = new FsPermission((short) 448);
    sourceBase = TestDistCpUtils.createTestSetup(fs, sourcePerm);
    targetBase = TestDistCpUtils.createTestSetup(fs, initialPerm);

    DistCpOptions options = new DistCpOptions(Arrays.asList(new Path(sourceBase)),
        new Path("/out"));
    options.preserve(FileAttribute.PERMISSION);
    options.appendToConf(conf);
    options.setTargetPathExists(false);
    
    CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
    Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
    listing.buildListing(listingFile, options);

    conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);

    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

    //Test for idempotent commit
    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

  } catch (IOException e) {
    LOG.error("Exception encountered while testing for preserve status", e);
    Assert.fail("Preserve status failure");
  } finally {
    TestDistCpUtils.delete(fs, "/tmp1");
    conf.unset(DistCpConstants.CONF_LABEL_PRESERVE_STATUS);
  }

}
 
Developer: naver, Project: hadoop, Lines: 52, Source: TestCopyCommitter.java


Example 4: testDeleteMissing

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testDeleteMissing() {
  TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
  JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
      taskAttemptContext.getTaskAttemptID().getJobID());
  Configuration conf = jobContext.getConfiguration();

  String sourceBase;
  String targetBase;
  FileSystem fs = null;
  try {
    OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
    fs = FileSystem.get(conf);
    sourceBase = TestDistCpUtils.createTestSetup(fs, FsPermission.getDefault());
    targetBase = TestDistCpUtils.createTestSetup(fs, FsPermission.getDefault());
    String targetBaseAdd = TestDistCpUtils.createTestSetup(fs, FsPermission.getDefault());
    fs.rename(new Path(targetBaseAdd), new Path(targetBase));

    DistCpOptions options = new DistCpOptions(Arrays.asList(new Path(sourceBase)),
        new Path("/out"));
    options.setSyncFolder(true);
    options.setDeleteMissing(true);
    options.appendToConf(conf);

    CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
    Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
    listing.buildListing(listingFile, options);

    conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
    conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

    committer.commitJob(jobContext);
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, targetBase, sourceBase)) {
      Assert.fail("Source and target folders are not in sync");
    }
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, sourceBase, targetBase)) {
      Assert.fail("Source and target folders are not in sync");
    }

    //Test for idempotent commit
    committer.commitJob(jobContext);
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, targetBase, sourceBase)) {
      Assert.fail("Source and target folders are not in sync");
    }
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, sourceBase, targetBase)) {
      Assert.fail("Source and target folders are not in sync");
    }
  } catch (Throwable e) {
    LOG.error("Exception encountered while testing for delete missing", e);
    Assert.fail("Delete missing failure");
  } finally {
    TestDistCpUtils.delete(fs, "/tmp1");
    conf.set(DistCpConstants.CONF_LABEL_DELETE_MISSING, "false");
  }
}
 
Developer: naver, Project: hadoop, Lines: 56, Source: TestCopyCommitter.java


Example 5: testDeleteMissingFlatInterleavedFiles

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testDeleteMissingFlatInterleavedFiles() {
  TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
  JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
      taskAttemptContext.getTaskAttemptID().getJobID());
  Configuration conf = jobContext.getConfiguration();


  String sourceBase;
  String targetBase;
  FileSystem fs = null;
  try {
    OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
    fs = FileSystem.get(conf);
    sourceBase = "/tmp1/" + String.valueOf(rand.nextLong());
    targetBase = "/tmp1/" + String.valueOf(rand.nextLong());
    TestDistCpUtils.createFile(fs, sourceBase + "/1");
    TestDistCpUtils.createFile(fs, sourceBase + "/3");
    TestDistCpUtils.createFile(fs, sourceBase + "/4");
    TestDistCpUtils.createFile(fs, sourceBase + "/5");
    TestDistCpUtils.createFile(fs, sourceBase + "/7");
    TestDistCpUtils.createFile(fs, sourceBase + "/8");
    TestDistCpUtils.createFile(fs, sourceBase + "/9");

    TestDistCpUtils.createFile(fs, targetBase + "/2");
    TestDistCpUtils.createFile(fs, targetBase + "/4");
    TestDistCpUtils.createFile(fs, targetBase + "/5");
    TestDistCpUtils.createFile(fs, targetBase + "/7");
    TestDistCpUtils.createFile(fs, targetBase + "/9");
    TestDistCpUtils.createFile(fs, targetBase + "/A");

    DistCpOptions options = new DistCpOptions(Arrays.asList(new Path(sourceBase)), 
        new Path("/out"));
    options.setSyncFolder(true);
    options.setDeleteMissing(true);
    options.appendToConf(conf);

    CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
    Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
    listing.buildListing(listingFile, options);

    conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);
    conf.set(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH, targetBase);

    committer.commitJob(jobContext);
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, targetBase, sourceBase)) {
      Assert.fail("Source and target folders are not in sync");
    }
    Assert.assertEquals(fs.listStatus(new Path(targetBase)).length, 4);

    //Test for idempotent commit
    committer.commitJob(jobContext);
    if (!TestDistCpUtils.checkIfFoldersAreInSync(fs, targetBase, sourceBase)) {
      Assert.fail("Source and target folders are not in sync");
    }
    Assert.assertEquals(fs.listStatus(new Path(targetBase)).length, 4);
  } catch (IOException e) {
    LOG.error("Exception encountered while testing for delete missing", e);
    Assert.fail("Delete missing failure");
  } finally {
    TestDistCpUtils.delete(fs, "/tmp1");
    conf.set(DistCpConstants.CONF_LABEL_DELETE_MISSING, "false");
  }

}
 
Developer: naver, Project: hadoop, Lines: 66, Source: TestCopyCommitter.java


Example 6: testGetSplits

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testGetSplits() throws Exception {
  DistCpOptions options = getOptions();
  Configuration configuration = new Configuration();
  configuration.set("mapred.map.tasks",
                    String.valueOf(options.getMaxMaps()));
  CopyListing.getCopyListing(configuration, CREDENTIALS, options).buildListing(
          new Path(cluster.getFileSystem().getUri().toString()
                  +"/tmp/testDynInputFormat/fileList.seq"), options);

  JobContext jobContext = new JobContextImpl(configuration, new JobID());
  DynamicInputFormat<Text, CopyListingFileStatus> inputFormat =
      new DynamicInputFormat<Text, CopyListingFileStatus>();
  List<InputSplit> splits = inputFormat.getSplits(jobContext);

  int nFiles = 0;
  int taskId = 0;

  for (InputSplit split : splits) {
    RecordReader<Text, CopyListingFileStatus> recordReader =
         inputFormat.createRecordReader(split, null);
    StubContext stubContext = new StubContext(jobContext.getConfiguration(),
                                              recordReader, taskId);
    final TaskAttemptContext taskAttemptContext
       = stubContext.getContext();
    
    recordReader.initialize(splits.get(0), taskAttemptContext);
    float previousProgressValue = 0f;
    while (recordReader.nextKeyValue()) {
      CopyListingFileStatus fileStatus = recordReader.getCurrentValue();
      String source = fileStatus.getPath().toString();
      System.out.println(source);
      Assert.assertTrue(expectedFilePaths.contains(source));
      final float progress = recordReader.getProgress();
      Assert.assertTrue(progress >= previousProgressValue);
      Assert.assertTrue(progress >= 0.0f);
      Assert.assertTrue(progress <= 1.0f);
      previousProgressValue = progress;
      ++nFiles;
    }
    Assert.assertTrue(recordReader.getProgress() == 1.0f);

    ++taskId;
  }

  Assert.assertEquals(expectedFilePaths.size(), nFiles);
}
 
Developer: naver, Project: hadoop, Lines: 48, Source: TestDynamicInputFormat.java


Example 7: testGetSplits

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testGetSplits() throws Exception {
  DistCpOptions options = getOptions();
  Configuration configuration = new Configuration();
  configuration.set("mapred.map.tasks",
                    String.valueOf(options.getMaxMaps()));
  CopyListing.getCopyListing(configuration, CREDENTIALS, options).buildListing(
          new Path(cluster.getFileSystem().getUri().toString()
                  +"/tmp/testDynInputFormat/fileList.seq"), options);

  JobContext jobContext = new JobContextImpl(configuration, new JobID());
  DynamicInputFormat<Text, CopyListingFileStatus> inputFormat =
      new DynamicInputFormat<Text, CopyListingFileStatus>();
  List<InputSplit> splits = inputFormat.getSplits(jobContext);

  int nFiles = 0;
  int taskId = 0;

  for (InputSplit split : splits) {
    StubContext stubContext = new StubContext(jobContext.getConfiguration(),
                                              null, taskId);
    final TaskAttemptContext taskAttemptContext
       = stubContext.getContext();

    RecordReader<Text, CopyListingFileStatus> recordReader =
        inputFormat.createRecordReader(split, taskAttemptContext);
    stubContext.setReader(recordReader);
    recordReader.initialize(splits.get(0), taskAttemptContext);
    float previousProgressValue = 0f;
    while (recordReader.nextKeyValue()) {
      CopyListingFileStatus fileStatus = recordReader.getCurrentValue();
      String source = fileStatus.getPath().toString();
      System.out.println(source);
      Assert.assertTrue(expectedFilePaths.contains(source));
      final float progress = recordReader.getProgress();
      Assert.assertTrue(progress >= previousProgressValue);
      Assert.assertTrue(progress >= 0.0f);
      Assert.assertTrue(progress <= 1.0f);
      previousProgressValue = progress;
      ++nFiles;
    }
    Assert.assertTrue(recordReader.getProgress() == 1.0f);

    ++taskId;
  }

  Assert.assertEquals(expectedFilePaths.size(), nFiles);
}
 
Developer: aliyun-beta, Project: aliyun-oss-hadoop-fs, Lines: 49, Source: TestDynamicInputFormat.java


Example 8: testGetSplits

import org.apache.hadoop.tools.CopyListing; // import the required package/class
public void testGetSplits(int nMaps) throws Exception {
  DistCpOptions options = getOptions(nMaps);
  Configuration configuration = new Configuration();
  configuration.set("mapred.map.tasks",
                    String.valueOf(options.getMaxMaps()));
  Path listFile = new Path(cluster.getFileSystem().getUri().toString()
      + "/tmp/testGetSplits_1/fileList.seq");
  CopyListing.getCopyListing(configuration, CREDENTIALS, options).
      buildListing(listFile, options);

  JobContext jobContext = new JobContextImpl(configuration, new JobID());
  UniformSizeInputFormat uniformSizeInputFormat = new UniformSizeInputFormat();
  List<InputSplit> splits
          = uniformSizeInputFormat.getSplits(jobContext);

  int sizePerMap = totalFileSize/nMaps;

  checkSplits(listFile, splits);

  int doubleCheckedTotalSize = 0;
  int previousSplitSize = -1;
  for (int i=0; i<splits.size(); ++i) {
    InputSplit split = splits.get(i);
    int currentSplitSize = 0;
    RecordReader<Text, FileStatus> recordReader = uniformSizeInputFormat.createRecordReader(
            split, null);
    StubContext stubContext = new StubContext(jobContext.getConfiguration(),
                                              recordReader, 0);
    final TaskAttemptContext taskAttemptContext
       = stubContext.getContext();
    recordReader.initialize(split, taskAttemptContext);
    while (recordReader.nextKeyValue()) {
      Path sourcePath = recordReader.getCurrentValue().getPath();
      FileSystem fs = sourcePath.getFileSystem(configuration);
      FileStatus fileStatus [] = fs.listStatus(sourcePath);
      Assert.assertEquals(fileStatus.length, 1);
      currentSplitSize += fileStatus[0].getLen();
    }
    Assert.assertTrue(
         previousSplitSize == -1
             || Math.abs(currentSplitSize - previousSplitSize) < 0.1*sizePerMap
             || i == splits.size()-1);

    doubleCheckedTotalSize += currentSplitSize;
  }

  Assert.assertEquals(totalFileSize, doubleCheckedTotalSize);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 49, Source: TestUniformSizeInputFormat.java


Example 9: testPreserveStatus

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testPreserveStatus() {
  TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
  JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
      taskAttemptContext.getTaskAttemptID().getJobID());
  Configuration conf = jobContext.getConfiguration();


  String sourceBase;
  String targetBase;
  FileSystem fs = null;
  try {
    OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
    fs = FileSystem.get(conf);
    FsPermission sourcePerm = new FsPermission((short) 511);
    FsPermission initialPerm = new FsPermission((short) 448);
    sourceBase = TestDistCpUtils.createTestSetup(fs, sourcePerm);
    targetBase = TestDistCpUtils.createTestSetup(fs, initialPerm);

    DistCpOptions options = new DistCpOptions(Arrays.asList(new Path(sourceBase)),
        new Path("/out"));
    options.preserve(FileAttribute.PERMISSION);
    options.appendToConf(conf);

    CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
    Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
    listing.buildListing(listingFile, options);

    conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);

    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

    //Test for idempotent commit
    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

  } catch (IOException e) {
    LOG.error("Exception encountered while testing for preserve status", e);
    Assert.fail("Preserve status failure");
  } finally {
    TestDistCpUtils.delete(fs, "/tmp1");
  }

}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 50, Source: TestCopyCommitter.java


Example 10: testGetSplits

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testGetSplits() throws Exception {
  DistCpOptions options = getOptions();
  Configuration configuration = new Configuration();
  configuration.set("mapred.map.tasks",
                    String.valueOf(options.getMaxMaps()));
  CopyListing.getCopyListing(configuration, CREDENTIALS, options).buildListing(
          new Path(cluster.getFileSystem().getUri().toString()
                  +"/tmp/testDynInputFormat/fileList.seq"), options);

  JobContext jobContext = new JobContextImpl(configuration, new JobID());
  DynamicInputFormat<Text, FileStatus> inputFormat =
      new DynamicInputFormat<Text, FileStatus>();
  List<InputSplit> splits = inputFormat.getSplits(jobContext);

  int nFiles = 0;
  int taskId = 0;

  for (InputSplit split : splits) {
    RecordReader<Text, FileStatus> recordReader =
         inputFormat.createRecordReader(split, null);
    StubContext stubContext = new StubContext(jobContext.getConfiguration(),
                                              recordReader, taskId);
    final TaskAttemptContext taskAttemptContext
       = stubContext.getContext();
    
    recordReader.initialize(splits.get(0), taskAttemptContext);
    float previousProgressValue = 0f;
    while (recordReader.nextKeyValue()) {
      FileStatus fileStatus = recordReader.getCurrentValue();
      String source = fileStatus.getPath().toString();
      System.out.println(source);
      Assert.assertTrue(expectedFilePaths.contains(source));
      final float progress = recordReader.getProgress();
      Assert.assertTrue(progress >= previousProgressValue);
      Assert.assertTrue(progress >= 0.0f);
      Assert.assertTrue(progress <= 1.0f);
      previousProgressValue = progress;
      ++nFiles;
    }
    Assert.assertTrue(recordReader.getProgress() == 1.0f);

    ++taskId;
  }

  Assert.assertEquals(expectedFilePaths.size(), nFiles);
}
 
Developer: ict-carch, Project: hadoop-plus, Lines: 48, Source: TestDynamicInputFormat.java


Example 11: deleteMissing

import org.apache.hadoop.tools.CopyListing; // import the required package/class
private void deleteMissing(Configuration conf) throws IOException {
  LOG.info("-delete option is enabled. About to remove entries from " +
      "target that are missing in source");

  // Sort the source-file listing alphabetically.
  Path sourceListing = new Path(conf.get(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH));
  FileSystem clusterFS = sourceListing.getFileSystem(conf);
  Path sortedSourceListing = DistCpUtils.sortListing(clusterFS, conf, sourceListing);

  // Similarly, create the listing of target-files. Sort alphabetically.
  Path targetListing = new Path(sourceListing.getParent(), "targetListing.seq");
  CopyListing target = new GlobbedCopyListing(new Configuration(conf), null);

  List<Path> targets = new ArrayList<Path>(1);
  Path targetFinalPath = new Path(conf.get(DistCpConstants.CONF_LABEL_TARGET_FINAL_PATH));
  targets.add(targetFinalPath);
  Path resultNonePath = Path.getPathWithoutSchemeAndAuthority(targetFinalPath)
      .toString().startsWith(DistCpConstants.HDFS_RESERVED_RAW_DIRECTORY_NAME)
      ? DistCpConstants.RAW_NONE_PATH : DistCpConstants.NONE_PATH;
  DistCpOptions options = new DistCpOptions(targets, resultNonePath);
  //
  // Set up options to be the same from the CopyListing.buildListing's perspective,
  // so to collect similar listings as when doing the copy
  //
  options.setOverwrite(overwrite);
  options.setSyncFolder(syncFolder);
  options.setTargetPathExists(targetPathExists);
  
  target.buildListing(targetListing, options);
  Path sortedTargetListing = DistCpUtils.sortListing(clusterFS, conf, targetListing);
  long totalLen = clusterFS.getFileStatus(sortedTargetListing).getLen();

  SequenceFile.Reader sourceReader = new SequenceFile.Reader(conf,
                               SequenceFile.Reader.file(sortedSourceListing));
  SequenceFile.Reader targetReader = new SequenceFile.Reader(conf,
                               SequenceFile.Reader.file(sortedTargetListing));

  // Walk both source and target file listings.
  // Delete all from target that doesn't also exist on source.
  long deletedEntries = 0;
  try {
    CopyListingFileStatus srcFileStatus = new CopyListingFileStatus();
    Text srcRelPath = new Text();
    CopyListingFileStatus trgtFileStatus = new CopyListingFileStatus();
    Text trgtRelPath = new Text();

    FileSystem targetFS = targetFinalPath.getFileSystem(conf);
    boolean srcAvailable = sourceReader.next(srcRelPath, srcFileStatus);
    while (targetReader.next(trgtRelPath, trgtFileStatus)) {
      // Skip sources that don't exist on target.
      while (srcAvailable && trgtRelPath.compareTo(srcRelPath) > 0) {
        srcAvailable = sourceReader.next(srcRelPath, srcFileStatus);
      }

      if (srcAvailable && trgtRelPath.equals(srcRelPath)) continue;

      // Target doesn't exist at source. Delete.
      boolean result = (!targetFS.exists(trgtFileStatus.getPath()) ||
          targetFS.delete(trgtFileStatus.getPath(), true));
      if (result) {
        LOG.info("Deleted " + trgtFileStatus.getPath() + " - Missing at source");
        deletedEntries++;
      } else {
        throw new IOException("Unable to delete " + trgtFileStatus.getPath());
      }
      taskAttemptContext.progress();
      taskAttemptContext.setStatus("Deleting missing files from target. [" +
          targetReader.getPosition() * 100 / totalLen + "%]");
    }
  } finally {
    IOUtils.closeStream(sourceReader);
    IOUtils.closeStream(targetReader);
  }
  LOG.info("Deleted " + deletedEntries + " from target: " + targets.get(0));
}
 
Developer: hopshadoop, Project: hops, Lines: 76, Source: CopyCommitter.java


Example 12: testPreserveStatus

import org.apache.hadoop.tools.CopyListing; // import the required package/class
@Test
public void testPreserveStatus() {
  TaskAttemptContext taskAttemptContext = getTaskAttemptContext(config);
  JobContext jobContext = new JobContextImpl(taskAttemptContext.getConfiguration(),
      taskAttemptContext.getTaskAttemptID().getJobID());
  Configuration conf = jobContext.getConfiguration();


  String sourceBase;
  String targetBase;
  FileSystem fs = null;
  try {
    OutputCommitter committer = new CopyCommitter(null, taskAttemptContext);
    fs = FileSystem.get(conf);
    FsPermission sourcePerm = new FsPermission((short) 511);
    FsPermission initialPerm = new FsPermission((short) 448);
    sourceBase = TestDistCpUtils.createTestSetup(fs, sourcePerm);
    targetBase = TestDistCpUtils.createTestSetup(fs, initialPerm);

    DistCpOptions options = new DistCpOptions(Arrays.asList(new Path(sourceBase)),
        new Path("/out"));
    options.preserve(FileAttribute.PERMISSION);
    options.appendToConf(conf);

    CopyListing listing = new GlobbedCopyListing(conf, CREDENTIALS);
    Path listingFile = new Path("/tmp1/" + String.valueOf(rand.nextLong()));
    listing.buildListing(listingFile, options);

    conf.set(DistCpConstants.CONF_LABEL_TARGET_WORK_PATH, targetBase);

    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

    //Test for idempotent commit
    committer.commitJob(jobContext);
    if (!checkDirectoryPermissions(fs, targetBase, sourcePerm)) {
      Assert.fail("Permission don't match");
    }

  } catch (IOException e) {
    LOG.error("Exception encountered while testing for preserve status", e);
    Assert.fail("Preserve status failure");
  } finally {
    TestDistCpUtils.delete(fs, "/tmp1");
    conf.unset(DistCpConstants.CONF_LABEL_PRESERVE_STATUS);
  }

}
 
Developer: chendave, Project: hadoop-TCP, Lines: 51, Source: TestCopyCommitter.java



Note: the org.apache.hadoop.tools.CopyListing examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright of each snippet remains with its original authors; consult the corresponding project's license before reusing or redistributing the code. Do not republish without permission.

