
Java ArArchiveEntry Class Code Examples


This article collects typical usage examples of the Java class org.apache.commons.compress.archivers.ar.ArArchiveEntry. If you are wondering what the ArArchiveEntry class is for, how to use it, or where to find working examples, the selected code samples below should help.



The ArArchiveEntry class belongs to the org.apache.commons.compress.archivers.ar package. Thirteen code examples of the class are shown below, sorted by popularity by default.
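
Before the project-level examples, here is a minimal sketch of the typical write path: construct an ArArchiveOutputStream over any OutputStream, describe each member with an ArArchiveEntry (name plus length), write its bytes, and close the entry before adding the next one. This is only an illustrative sketch, assuming commons-compress is on the classpath; the file name example.ar and entry name hello.txt are placeholders, not taken from the examples below.

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.compress.archivers.ar.ArArchiveEntry;
import org.apache.commons.compress.archivers.ar.ArArchiveOutputStream;

public class ArArchiveEntrySketch {
    public static void main(String[] args) throws IOException {
        byte[] payload = "hello\n".getBytes(StandardCharsets.UTF_8);
        // "example.ar" is a placeholder output path used only for this sketch.
        try (ArArchiveOutputStream ar = new ArArchiveOutputStream(new FileOutputStream("example.ar"))) {
            // An entry records the member name and its length before the bytes are written.
            ar.putArchiveEntry(new ArArchiveEntry("hello.txt", payload.length));
            ar.write(payload);
            ar.closeArchiveEntry(); // must be closed before the next entry is added
            ar.finish();
        }
    }
}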

Example 1: DebianPackageWriter

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
public DebianPackageWriter ( final OutputStream stream, final GenericControlFile packageControlFile, final TimestampProvider timestampProvider ) throws IOException
{
    this.packageControlFile = packageControlFile;
    this.timestampProvider = timestampProvider;
    if ( getTimestampProvider () == null )
    {
        throw new IllegalArgumentException ( "'timestampProvider' must not be null" );
    }
    BinaryPackageControlFile.validate ( packageControlFile );

    this.ar = new ArArchiveOutputStream ( stream );

    this.ar.putArchiveEntry ( new ArArchiveEntry ( "debian-binary", this.binaryHeader.length, 0, 0, AR_ARCHIVE_DEFAULT_MODE, getTimestampProvider ().getModTime () / 1000 ) );
    this.ar.write ( this.binaryHeader );
    this.ar.closeArchiveEntry ();

    this.dataTemp = File.createTempFile ( "data", null );

    this.dataStream = new TarArchiveOutputStream ( new GZIPOutputStream ( new FileOutputStream ( this.dataTemp ) ) );
    this.dataStream.setLongFileMode ( TarArchiveOutputStream.LONGFILE_GNU );
}
 
Developer: eclipse, Project: neoscada, Lines of code: 22, Source: DebianPackageWriter.java


Example 2: DebianPackageWriter

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
public DebianPackageWriter ( final OutputStream stream, final BinaryPackageControlFile packageControlFile, final Supplier<Instant> timestampSupplier ) throws IOException
{
    Objects.requireNonNull ( timestampSupplier );

    this.timestampSupplier = timestampSupplier;
    this.packageControlFile = packageControlFile;
    BinaryPackageControlFile.validate ( packageControlFile );

    this.ar = new ArArchiveOutputStream ( stream );

    this.ar.putArchiveEntry ( new ArArchiveEntry ( "debian-binary", this.binaryHeader.length, 0, 0, AR_ARCHIVE_DEFAULT_MODE, timestampSupplier.get ().getEpochSecond () ) );
    this.ar.write ( this.binaryHeader );
    this.ar.closeArchiveEntry ();

    this.dataTemp = File.createTempFile ( "data", null );

    this.dataStream = new TarArchiveOutputStream ( new GZIPOutputStream ( new FileOutputStream ( this.dataTemp ) ) );
    this.dataStream.setLongFileMode ( TarArchiveOutputStream.LONGFILE_GNU );
}
 
Developer: eclipse, Project: packagedrone, Lines of code: 20, Source: DebianPackageWriter.java


Example 3: compressData

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
/**
 * Compress data
 * 
 * @param fileCompressor
 *            FileCompressor object
 * @return
 * @throws Exception
 */
@Override
public byte[] compressData(FileCompressor fileCompressor) throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ArArchiveOutputStream aos = new ArArchiveOutputStream(baos);
    try {
        for (BinaryFile binaryFile : fileCompressor.getMapBinaryFile()
                .values()) {
            ArArchiveEntry entry = new ArArchiveEntry(
                    binaryFile.getDesPath(), binaryFile.getActualSize());
            aos.putArchiveEntry(entry);
            aos.write(binaryFile.getData());
            aos.closeArchiveEntry();
        }
        aos.flush();
        aos.finish();
    } catch (Exception e) {
        FileCompressor.LOGGER.error("Error on compress data", e);
    } finally {
        aos.close();
        baos.close();
    }
    return baos.toByteArray();
}
 
Developer: espringtran, Project: compressor4j, Lines of code: 32, Source: ArProcessor.java


Example 4: create

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
/**
 * Detects the type of the given ArchiveEntry and returns an appropriate AttributeAccessor for it.
 * 
 * @param entry the adaptee
 * @return a new attribute accessor instance
 */
public static AttributeAccessor<?> create(ArchiveEntry entry) {
    if (entry instanceof TarArchiveEntry) {
        return new TarAttributeAccessor((TarArchiveEntry) entry);
    } else if (entry instanceof ZipArchiveEntry) {
        return new ZipAttributeAccessor((ZipArchiveEntry) entry);
    } else if (entry instanceof CpioArchiveEntry) {
        return new CpioAttributeAccessor((CpioArchiveEntry) entry);
    } else if (entry instanceof ArjArchiveEntry) {
        return new ArjAttributeAccessor((ArjArchiveEntry) entry);
    } else if (entry instanceof ArArchiveEntry) {
        return new ArAttributeAccessor((ArArchiveEntry) entry);
    }

    return new FallbackAttributeAccessor(entry);
}
 
Developer: thrau, Project: jarchivelib, Lines of code: 22, Source: AttributeAccessor.java


Example 5: addArFile

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
private void addArFile ( final File file, final String entryName ) throws IOException
{
    final ArArchiveEntry entry = new ArArchiveEntry ( entryName, file.length (), 0, 0, AR_ARCHIVE_DEFAULT_MODE, timestampProvider.getModTime () / 1000 );
    this.ar.putArchiveEntry ( entry );

    ByteStreams.copy ( new FileInputStream ( file ), this.ar );

    this.ar.closeArchiveEntry ();
}
 
Developer: eclipse, Project: neoscada, Lines of code: 10, Source: DebianPackageWriter.java


Example 6: getThing

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
@Test
public void getThing(){
	ArArchiveEntry e = new ArArchiveEntry(
								new File("src/test/resources/build-essential_11.6ubuntu6_amd64.deb"),
								"control.tar.gz");
	System.out.println(e.getLength());
}
 
Developer: tcplugins, Project: tcDebRepository, Lines of code: 8, Source: ArStreamerTest.java


Example 7: addArFile

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
private void addArFile ( final File file, final String entryName, final Supplier<Instant> timestampSupplier ) throws IOException
{
    final ArArchiveEntry entry = new ArArchiveEntry ( entryName, file.length (), 0, 0, AR_ARCHIVE_DEFAULT_MODE, timestampSupplier.get ().getEpochSecond () );
    this.ar.putArchiveEntry ( entry );

    IOUtils.copy ( new FileInputStream ( file ), this.ar );

    this.ar.closeArchiveEntry ();
}
 
Developer: eclipse, Project: packagedrone, Lines of code: 10, Source: DebianPackageWriter.java


Example 8: read

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
/**
 * Read from compressed file
 * 
 * @param srcPath
 *            path of compressed file
 * @param fileCompressor
 *            FileCompressor object
 * @throws Exception
 */
@Override
public void read(String srcPath, FileCompressor fileCompressor)
        throws Exception {
    long t1 = System.currentTimeMillis();
    byte[] data = FileUtil.convertFileToByte(srcPath);
    ByteArrayInputStream bais = new ByteArrayInputStream(data);
    ArArchiveInputStream ais = new ArArchiveInputStream(bais);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try {
        byte[] buffer = new byte[1024];
        int readByte;
        ArArchiveEntry entry = ais.getNextArEntry();
        while (entry != null && entry.getSize() > 0) {
            long t2 = System.currentTimeMillis();
            baos = new ByteArrayOutputStream();
            readByte = ais.read(buffer);
            while (readByte != -1) {
                baos.write(buffer, 0, readByte);
                readByte = ais.read(buffer);
            }
            BinaryFile binaryFile = new BinaryFile(entry.getName(),
                    baos.toByteArray());
            fileCompressor.addBinaryFile(binaryFile);
            LogUtil.createAddFileLog(fileCompressor, binaryFile, t2,
                    System.currentTimeMillis());
            entry = ais.getNextArEntry();
        }
    } catch (Exception e) {
        FileCompressor.LOGGER.error("Error on get compressor file", e);
    } finally {
        baos.close();
        ais.close();
        bais.close();
    }
    LogUtil.createReadLog(fileCompressor, srcPath, data.length, t1,
            System.currentTimeMillis());
}
 
Developer: espringtran, Project: compressor4j, Lines of code: 47, Source: ArProcessor.java


Example 9: thinArchivesDoNotContainAbsolutePaths

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
@Test
public void thinArchivesDoNotContainAbsolutePaths() throws IOException {
  CxxPlatform cxxPlatform =
      CxxPlatformUtils.build(new CxxBuckConfig(FakeBuckConfig.builder().build()));
  BuildRuleResolver ruleResolver =
      new SingleThreadedBuildRuleResolver(
          TargetGraph.EMPTY, new DefaultTargetNodeToBuildRuleTransformer());
  assumeTrue(cxxPlatform.getAr().resolve(ruleResolver).supportsThinArchives());
  ProjectWorkspace workspace =
      TestDataHelper.createProjectWorkspaceForScenario(this, "cxx_library", tmp);
  workspace.setUp();
  Path archive =
      workspace.buildAndReturnOutput("-c", "cxx.archive_contents=thin", "//:foo#default,static");

  // NOTE: Replace the thin header with a normal header just so the commons compress parser
  // can parse the archive contents.
  try (OutputStream outputStream =
      Files.newOutputStream(workspace.getPath(archive), StandardOpenOption.WRITE)) {
    outputStream.write(ObjectFileScrubbers.GLOBAL_HEADER);
  }

  // Now iterate the archive and verify it contains no absolute paths.
  try (ArArchiveInputStream stream =
      new ArArchiveInputStream(new FileInputStream(workspace.getPath(archive).toFile()))) {
    ArArchiveEntry entry;
    while ((entry = stream.getNextArEntry()) != null) {
      if (!entry.getName().isEmpty()) {
        assertFalse(
            "found absolute path: " + entry.getName(),
            workspace.getDestPath().getFileSystem().getPath(entry.getName()).isAbsolute());
      }
    }
  }
}
 
Developer: facebook, Project: buck, Lines of code: 35, Source: CxxLibraryIntegrationTest.java


Example 10: thatGeneratedArchivesAreDeterministic

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
@Test
@SuppressWarnings("PMD.AvoidUsingOctalValues")
public void thatGeneratedArchivesAreDeterministic() throws IOException, InterruptedException {
  assumeTrue(Platform.detect() == Platform.MACOS || Platform.detect() == Platform.LINUX);
  ProjectFilesystem filesystem = TestProjectFilesystems.createProjectFilesystem(tmp.getRoot());
  CxxPlatform platform =
      CxxPlatformUtils.build(new CxxBuckConfig(FakeBuckConfig.builder().build()));

  // Build up the paths to various files the archive step will use.
  BuildRuleResolver ruleResolver =
      new SingleThreadedBuildRuleResolver(
          TargetGraph.EMPTY, new DefaultTargetNodeToBuildRuleTransformer());
  SourcePathResolver sourcePathResolver =
      DefaultSourcePathResolver.from(new SourcePathRuleFinder(ruleResolver));
  Archiver archiver = platform.getAr().resolve(ruleResolver);
  Path output = filesystem.getPath("output.a");
  Path input = filesystem.getPath("input.dat");
  filesystem.writeContentsToPath("blah", input);
  Preconditions.checkState(filesystem.resolve(input).toFile().setExecutable(true));

  // Build an archive step.
  ArchiveStep archiveStep =
      new ArchiveStep(
          filesystem,
          archiver.getEnvironment(sourcePathResolver),
          archiver.getCommandPrefix(sourcePathResolver),
          ImmutableList.of(),
          getArchiveOptions(false),
          output,
          ImmutableList.of(input),
          archiver,
          filesystem.getPath("scratchDir"));
  FileScrubberStep fileScrubberStep =
      new FileScrubberStep(filesystem, output, archiver.getScrubbers());

  // Execute the archive step and verify it ran successfully.
  ExecutionContext executionContext = TestExecutionContext.newInstanceWithRealProcessExecutor();
  TestConsole console = (TestConsole) executionContext.getConsole();
  int exitCode = archiveStep.execute(executionContext).getExitCode();
  assertEquals("archive step failed: " + console.getTextWrittenToStdErr(), 0, exitCode);
  exitCode = fileScrubberStep.execute(executionContext).getExitCode();
  assertEquals("archive scrub step failed: " + console.getTextWrittenToStdErr(), 0, exitCode);

  // Now read the archive entries and verify that the timestamp, UID, and GID fields are
  // zero'd out.
  try (ArArchiveInputStream stream =
      new ArArchiveInputStream(new FileInputStream(filesystem.resolve(output).toFile()))) {
    ArArchiveEntry entry = stream.getNextArEntry();
    assertEquals(
        ObjectFileCommonModificationDate.COMMON_MODIFICATION_TIME_STAMP, entry.getLastModified());
    assertEquals(0, entry.getUserId());
    assertEquals(0, entry.getGroupId());
    assertEquals(String.format("0%o", entry.getMode()), 0100644, entry.getMode());
  }
}
 
Developer: facebook, Project: buck, Lines of code: 56, Source: ArchiveStepIntegrationTest.java


Example 11: inputDirs

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
@Test
public void inputDirs() throws IOException, InterruptedException {
  assumeTrue(Platform.detect() == Platform.MACOS || Platform.detect() == Platform.LINUX);
  ProjectFilesystem filesystem = TestProjectFilesystems.createProjectFilesystem(tmp.getRoot());
  CxxPlatform platform =
      CxxPlatformUtils.build(new CxxBuckConfig(FakeBuckConfig.builder().build()));

  // Build up the paths to various files the archive step will use.
  BuildRuleResolver ruleResolver =
      new SingleThreadedBuildRuleResolver(
          TargetGraph.EMPTY, new DefaultTargetNodeToBuildRuleTransformer());
  SourcePathResolver sourcePathResolver =
      DefaultSourcePathResolver.from(new SourcePathRuleFinder(ruleResolver));
  Archiver archiver = platform.getAr().resolve(ruleResolver);
  Path output = filesystem.getPath("output.a");
  Path input = filesystem.getPath("foo/blah.dat");
  filesystem.mkdirs(input.getParent());
  filesystem.writeContentsToPath("blah", input);

  // Build an archive step.
  ArchiveStep archiveStep =
      new ArchiveStep(
          filesystem,
          archiver.getEnvironment(sourcePathResolver),
          archiver.getCommandPrefix(sourcePathResolver),
          ImmutableList.of(),
          getArchiveOptions(false),
          output,
          ImmutableList.of(input.getParent()),
          archiver,
          filesystem.getPath("scratchDir"));

  // Execute the archive step and verify it ran successfully.
  ExecutionContext executionContext = TestExecutionContext.newInstanceWithRealProcessExecutor();
  TestConsole console = (TestConsole) executionContext.getConsole();
  int exitCode = archiveStep.execute(executionContext).getExitCode();
  assertEquals("archive step failed: " + console.getTextWrittenToStdErr(), 0, exitCode);

  // Now read the archive entries and verify that the timestamp, UID, and GID fields are
  // zero'd out.
  try (ArArchiveInputStream stream =
      new ArArchiveInputStream(new FileInputStream(filesystem.resolve(output).toFile()))) {
    ArArchiveEntry entry = stream.getNextArEntry();
    assertThat(entry.getName(), Matchers.equalTo("blah.dat"));
  }
}
 
Developer: facebook, Project: buck, Lines of code: 47, Source: ArchiveStepIntegrationTest.java


Example 12: thinArchives

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
@Test
public void thinArchives() throws IOException, InterruptedException {
  assumeTrue(Platform.detect() == Platform.MACOS || Platform.detect() == Platform.LINUX);
  ProjectFilesystem filesystem = TestProjectFilesystems.createProjectFilesystem(tmp.getRoot());
  CxxPlatform platform =
      CxxPlatformUtils.build(new CxxBuckConfig(FakeBuckConfig.builder().build()));

  // Build up the paths to various files the archive step will use.
  BuildRuleResolver ruleResolver =
      new SingleThreadedBuildRuleResolver(
          TargetGraph.EMPTY, new DefaultTargetNodeToBuildRuleTransformer());
  SourcePathResolver sourcePathResolver =
      DefaultSourcePathResolver.from(new SourcePathRuleFinder(ruleResolver));
  Archiver archiver = platform.getAr().resolve(ruleResolver);

  assumeTrue(archiver.supportsThinArchives());

  Path output = filesystem.getPath("foo/libthin.a");
  filesystem.mkdirs(output.getParent());

  // Create a really large input file so it's obvious that the archive is thin.
  Path input = filesystem.getPath("bar/blah.dat");
  filesystem.mkdirs(input.getParent());
  byte[] largeInputFile = new byte[1024 * 1024];
  byte[] fillerToRepeat = "hello\n".getBytes(StandardCharsets.UTF_8);
  for (int i = 0; i < largeInputFile.length; i++) {
    largeInputFile[i] = fillerToRepeat[i % fillerToRepeat.length];
  }
  filesystem.writeBytesToPath(largeInputFile, input);

  // Build an archive step.
  ArchiveStep archiveStep =
      new ArchiveStep(
          filesystem,
          archiver.getEnvironment(sourcePathResolver),
          archiver.getCommandPrefix(sourcePathResolver),
          ImmutableList.of(),
          getArchiveOptions(true),
          output,
          ImmutableList.of(input),
          archiver,
          filesystem.getPath("scratchDir"));

  // Execute the archive step and verify it ran successfully.
  ExecutionContext executionContext = TestExecutionContext.newInstanceWithRealProcessExecutor();
  TestConsole console = (TestConsole) executionContext.getConsole();
  int exitCode = archiveStep.execute(executionContext).getExitCode();
  assertEquals("archive step failed: " + console.getTextWrittenToStdErr(), 0, exitCode);

  // Verify that the thin header is present.
  assertThat(filesystem.readFirstLine(output), Matchers.equalTo(Optional.of("!<thin>")));

  // Verify that even though the archived contents is really big, the archive is still small.
  assertThat(filesystem.getFileSize(output), Matchers.lessThan(1000L));

  // NOTE: Replace the thin header with a normal header just so the commons compress parser
  // can parse the archive contents.
  try (OutputStream outputStream =
      Files.newOutputStream(filesystem.resolve(output), StandardOpenOption.WRITE)) {
    outputStream.write(ObjectFileScrubbers.GLOBAL_HEADER);
  }

  // Now read the archive entries and verify that the timestamp, UID, and GID fields are
  // zero'd out.
  try (ArArchiveInputStream stream =
      new ArArchiveInputStream(new FileInputStream(filesystem.resolve(output).toFile()))) {
    ArArchiveEntry entry = stream.getNextArEntry();

    // Verify that the input names are relative paths from the outputs parent dir.
    assertThat(
        entry.getName(), Matchers.equalTo(output.getParent().relativize(input).toString()));
  }
}
 
Developer: facebook, Project: buck, Lines of code: 74, Source: ArchiveStepIntegrationTest.java


Example 13: ArAttributeAccessor

import org.apache.commons.compress.archivers.ar.ArArchiveEntry; // import the required package/class
public ArAttributeAccessor(ArArchiveEntry entry) {
    super(entry);
}
 
Developer: thrau, Project: jarchivelib, Lines of code: 4, Source: AttributeAccessor.java



Note: the org.apache.commons.compress.archivers.ar.ArArchiveEntry examples in this article were collected from source code hosted on platforms such as GitHub, and the snippets were selected from open source projects contributed by their authors. Copyright remains with the original authors; distribution and use are subject to each project's license. Do not reproduce without permission.

