This article collects typical usage examples of the Java class htsjdk.samtools.util.BlockCompressedOutputStream. If you are wondering what BlockCompressedOutputStream is for, how to use it, or where to find usage examples, the curated class examples below should help.
The BlockCompressedOutputStream class belongs to the htsjdk.samtools.util package. Twenty code examples are presented below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
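Before the examples, here is a minimal, self-contained sketch of the basic round trip (the file name is hypothetical): BlockCompressedOutputStream writes BGZF data, which is itself a valid gzip stream, and BlockCompressedInputStream reads it back.
import htsjdk.samtools.util.BlockCompressedInputStream;
import htsjdk.samtools.util.BlockCompressedOutputStream;

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BgzfRoundTrip {
    public static void main(String[] args) throws IOException {
        File file = new File("example.txt.gz"); // hypothetical path
        // Write BGZF-compressed data; close() also appends the BGZF EOF terminator block.
        try (BlockCompressedOutputStream out = new BlockCompressedOutputStream(file)) {
            out.write("hello\tbgzf\n".getBytes(StandardCharsets.UTF_8));
        }
        // Read the data back through the block-compressed reader.
        try (BlockCompressedInputStream in = new BlockCompressedInputStream(file)) {
            System.out.println(in.readLine());
        }
    }
}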
Example 1: makeTabixCompressedIndex
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
private void makeTabixCompressedIndex(final File sourceFile, final File indexFile, final AsciiFeatureCodec codec,
final TabixFormat format) throws IOException {
TabixIndexCreator indexCreator = new TabixIndexCreator(format);
try (
BlockCompressedInputStream inputStream = new BlockCompressedInputStream(
new FileInputStream(sourceFile));
LittleEndianOutputStream outputStream = new LittleEndianOutputStream(
new BlockCompressedOutputStream(indexFile))
) {
long p = 0;
String line = inputStream.readLine();
while (line != null) {
//add the feature to the index
Feature decode = codec.decode(line);
if (decode != null) {
indexCreator.addFeature(decode, p);
}
// read the next line if available
p = inputStream.getFilePointer();
line = inputStream.readLine();
}
// write the index to a file
Index index = indexCreator.finalizeIndex(p);
// VERY important! Either use Index.writeBasedOnFeatureFile(...) or pass the LittleEndianOutputStream a BGZF stream
index.write(outputStream);
}
}
Author: epam, Project: NGB, Lines: 31, Source: FileManager.java
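A note on the comment above: a Tabix index is itself a BGZF-compressed file, so it must either be serialized with Index.writeBasedOnFeatureFile(...) (which derives the index path and compression from the feature file, as in Example 14 below) or written through a LittleEndianOutputStream that wraps a BlockCompressedOutputStream, as done here. Writing the index to a plain stream produces a file that tabix-aware readers cannot open.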
Example 2: makeIndex
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
public void makeIndex(final String plusFile, final String path) {
try {
final File tempFile = new File(plusFile + BinIndexWriter.TEMP);
BlockCompressedInputStream reader = BasicUtils.checkStream(new File(path));
LittleEndianOutputStream los = new LittleEndianOutputStream(new BlockCompressedOutputStream(tempFile));
readBGZInputstream(reader, los);
los.writeShort(BinIndexWriter.PLUS_FILE_END);
writeOthers(los);
los.close();
boolean success = tempFile.renameTo(new File(plusFile));
if (!success) {
System.err.println("Make index has completed. But rename from '" + tempFile.getAbsolutePath() + "' to '" + plusFile + "' with error. ");
System.exit(1);
}
} catch (IOException e) {
e.printStackTrace();
}
}
Author: mulinlab, Project: vanno, Lines: 20, Source: AppendixFileWriter.java
Example 3: makeIndex
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
public void makeIndex() {
try {
final String outputPath = path + IntervalIndex.PLUS_EXTENSION;
final File outputFile = new File(outputPath + IntervalIndex.TEMP);
BlockCompressedInputStream reader = checkStream(new File(path));
LittleEndianOutputStream los = new LittleEndianOutputStream(new BlockCompressedOutputStream(outputFile));
readBGZInputstream(reader, los);
los.writeInt(IntervalIndex.PLUS_FILE_END);
writeOthers(los);
los.close();
boolean success = outputFile.renameTo(new File(outputPath));
if (!success) {
    System.err.println("Make index has completed, but renaming from '" + outputFile.getAbsolutePath() + "' to '" + outputPath + "' failed.");
}
} catch (IOException e) {
e.printStackTrace();
}
}
Author: mulinlab, Project: vanno, Lines: 24, Source: CollectionFileWriter.java
Example 4: makeGenesFileWriter
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Creates a {@code BufferedWriter} to write genes to file, determined by a {@code GeneFile} object
*
* @param geneFeatureClass {@code Class<? extends GeneFeature>} defines GeneFeature type that will be
* written, and therefore, gene file extension.
* @param geneFile {@code GeneFile} that represents a file in the system
* @return {@code BufferedWriter} to write genes
* @throws IOException
*/
public BufferedWriter makeGenesFileWriter(Class<? extends GeneFeature> geneFeatureClass, GeneFile geneFile,
GeneFileType type) throws IOException {
final Map<String, Object> params = new HashMap<>();
params.put(DIR_ID.name(), geneFile.getId());
params.put(USER_ID.name(), geneFile.getCreatedBy());
String extension = getGeneFileExtension(geneFeatureClass, geneFile);
params.put(GENE_EXTENSION.name(), extension);
File file = createGeneFileByType(type, params);
if (type.equals(GeneFileType.ORIGINAL)) {
geneFile.setPath(file.getAbsolutePath());
}
return geneFile.getCompressed() ?
new BufferedWriter(new OutputStreamWriter(new BlockCompressedOutputStream(file),
Charset.defaultCharset())) :
new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file), Charset.defaultCharset()));
}
Author: react-dev26, Project: NGB-master, Lines: 30, Source: FileManager.java
Example 5: makeGeneBlockCompressedOutputStream
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Creates a {@code BlockCompressedOutputStream} to write gene file of specified GeneFileType
* @param gffType a type of gene file
* @param geneFile a {@code GeneFile} whose data will be written
* @param type a {@code GeneFileType} of the helper file to create
* @return a {@code BlockCompressedOutputStream} to write gene file of specified GeneFileType
* @throws FileNotFoundException
*/
public BlockCompressedOutputStream makeGeneBlockCompressedOutputStream(
GffCodec.GffType gffType, GeneFile geneFile, GeneFileType type)
throws FileNotFoundException {
final Map<String, Object> params = new HashMap<>();
params.put(DIR_ID.name(), geneFile.getId());
params.put(USER_ID.name(), geneFile.getCreatedBy());
String extension = gffType.getExtensions()[0];
params.put(GENE_EXTENSION.name(), extension);
File file = createGeneFileByType(type, params);
if (type.equals(GeneFileType.ORIGINAL)) {
geneFile.setPath(file.getAbsolutePath());
}
return new BlockCompressedOutputStream(file);
}
Author: react-dev26, Project: NGB-master, Lines: 27, Source: FileManager.java
Example 6: makeMafFileWriter
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Creates a writer for a specified MafFile
*
* @param mafFile a MafFile to create writer for
* @return a {@code BufferedWriter} to write the MAF file
* @throws IOException
*/
public BufferedWriter makeMafFileWriter(MafFile mafFile) throws IOException {
final Map<String, Object> params = new HashMap<>();
params.put(DIR_ID.name(), mafFile.getId());
params.put(USER_ID.name(), mafFile.getCreatedBy());
File file = new File(toRealPath(substitute(MAF_FILE, params)));
Assert.isTrue(file.createNewFile());
LOGGER.debug("Writing MAF file at {}", file.getAbsolutePath());
mafFile.setPath(file.getAbsolutePath());
mafFile.setCompressed(true);
return new BufferedWriter(new OutputStreamWriter(
new BlockCompressedOutputStream(file), Charset.defaultCharset()));
}
Author: react-dev26, Project: NGB-master, Lines: 24, Source: FileManager.java
Example 7: initOutputStream
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* For now, htsjdk only supports BlockCompressedOutputStream on the local file system, not HDFS.
*/
@Override
public void initOutputStream(String filePath, Configuration conf) {
OutputType typeTobuild = determineOutputTypeFromFile(filePath);
try {
switch (typeTobuild) {
case VCF:
os = new FileOutputStream(new File(filePath));
break;
case BLOCK_COMPRESSED_VCF:
os = new BlockCompressedOutputStream(new File(filePath));
break;
}
} catch (IOException e) {
throw new RuntimeEOFException(e);
}
}
Author: BGI-flexlab, Project: SOAPgaea, Lines: 21, Source: VCFLocalWriter.java
Example 8: printSettings
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Output a curated set of important settings to the logger.
*
* May be overridden by subclasses to specify a different set of settings to output.
*/
protected void printSettings() {
if ( VERBOSITY != Log.LogLevel.DEBUG ) {
logger.info("HTSJDK Defaults.COMPRESSION_LEVEL : " + Defaults.COMPRESSION_LEVEL);
logger.info("HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : " + Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS);
logger.info("HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : " + Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS);
logger.info("HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : " + Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE);
}
else {
// At DEBUG verbosity, print all the HTSJDK defaults:
Defaults.allDefaults().entrySet().stream().forEach(e->
logger.info("HTSJDK " + Defaults.class.getSimpleName() + "." + e.getKey() + " : " + e.getValue())
);
}
// Log the configuration options:
ConfigFactory.logConfigFields(ConfigFactory.getInstance().getGATKConfig(), Log.LogLevel.DEBUG);
final boolean usingIntelDeflater = (BlockCompressedOutputStream.getDefaultDeflaterFactory() instanceof IntelDeflaterFactory && ((IntelDeflaterFactory)BlockCompressedOutputStream.getDefaultDeflaterFactory()).usingIntelDeflater());
logger.info("Deflater: " + (usingIntelDeflater ? "IntelDeflater": "JdkDeflater"));
final boolean usingIntelInflater = (BlockGunzipper.getDefaultInflaterFactory() instanceof IntelInflaterFactory && ((IntelInflaterFactory)BlockGunzipper.getDefaultInflaterFactory()).usingIntelInflater());
logger.info("Inflater: " + (usingIntelInflater ? "IntelInflater": "JdkInflater"));
logger.info("GCS max retries/reopens: " + BucketUtils.getCloudStorageConfiguration(NIO_MAX_REOPENS).maxChannelReopens());
logger.info("Using google-cloud-java patch 6d11bef1c81f885c26b2b56c8616b7a705171e4f from https://github.com/droazen/google-cloud-java/tree/dr_all_nio_fixes");
}
Author: broadinstitute, Project: gatk, Lines: 31, Source: CommandLineProgram.java
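Example 8 only inspects which deflater factory is active. For completeness, here is a hedged sketch of how the defaults are swapped at startup, assuming the Intel GKL library (com.intel.gkl) is on the classpath; the same setters appear in Example 19 below, where they force the JDK implementations instead.
import com.intel.gkl.compression.IntelDeflaterFactory;
import com.intel.gkl.compression.IntelInflaterFactory;
import htsjdk.samtools.util.BlockCompressedOutputStream;
import htsjdk.samtools.util.BlockGunzipper;

public class DeflaterSetup {
    public static void main(String[] args) {
        // Install the native Intel implementations as JVM-wide defaults;
        // every BlockCompressedOutputStream created afterwards will use them.
        BlockCompressedOutputStream.setDefaultDeflaterFactory(new IntelDeflaterFactory());
        BlockGunzipper.setDefaultInflaterFactory(new IntelInflaterFactory());
    }
}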
Example 9: createSAMWriter
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Create a common SAMFileWriter for use with Picard tools.
*
* @param outputFile - if this file has a .cram extension then a reference is required. Can not be null.
* @param referenceFile - the reference source to use. Can not be null if a output file has a .cram extension.
* @param header - header to be used for the output writer
* @param preSorted - if true then the records must already be sorted to match the header sort order
* @return SAMFileWriter
*/
public SAMFileWriter createSAMWriter(
final File outputFile,
final File referenceFile,
final SAMFileHeader header,
final boolean preSorted)
{
BlockCompressedOutputStream.setDefaultCompressionLevel(COMPRESSION_LEVEL);
SAMFileWriterFactory factory = new SAMFileWriterFactory()
.setCreateIndex(CREATE_INDEX)
.setCreateMd5File(CREATE_MD5_FILE);
if (MAX_RECORDS_IN_RAM != null) {
factory = factory.setMaxRecordsInRam(MAX_RECORDS_IN_RAM);
}
return ReadUtils.createCommonSAMWriterFromFactory(factory, outputFile, referenceFile, header, preSorted);
}
Author: broadinstitute, Project: gatk, Lines: 28, Source: PicardCommandLineProgram.java
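Note that setDefaultCompressionLevel is a static, JVM-wide setting: it affects every BlockCompressedOutputStream created afterwards, not just the writer returned by this method. Levels follow java.util.zip.Deflater semantics (0 = no compression, 9 = best compression; htsjdk's default is 5), so lowering the level trades file size for write speed.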
Example 10: main
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
public static void main(String[] args) {
try {
String inFile = "/psychipc01/disk2/references/1000Genome/release/20130502_v5a/ALL.chr1.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz";
String outFile = "/psychipc01/disk2/references/1000Genome/release/20130502_v5a/ALL.chr1.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes1.vcf.gz";
BlockCompressedInputStream br = new BlockCompressedInputStream(new File(inFile));
BlockCompressedOutputStream bw = new BlockCompressedOutputStream(new File(outFile));
String line = null;
while ((line = br.readLine()) != null) {
    line = line.trim();
    if (line.isEmpty()) {
continue;
}
bw.write(line.replaceAll("[|]", "/").getBytes());
bw.write("\n".getBytes());
}
bw.close();
br.close();
} catch (Exception ex) {
ex.printStackTrace();
}
}
Author: mulinlab, Project: vanno, Lines: 32, Source: LocalFile.java
Example 11: createGeneCompressedIndex
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
private void createGeneCompressedIndex(File indexFile, File file, GffCodec.GffType gffType) throws IOException {
AsciiFeatureCodec<GeneFeature> codec = new GffCodec(gffType);
TabixIndexCreator indexCreator = new TabixIndexCreator(TabixFormat.GFF);
try (
BlockCompressedInputStream inputStream = new BlockCompressedInputStream(new FileInputStream(file));
LittleEndianOutputStream outputStream = new LittleEndianOutputStream(
new BlockCompressedOutputStream(indexFile))
) {
long p = 0;
String line = inputStream.readLine();
while (line != null) {
//add the feature to the index
GeneFeature decode = codec.decode(line);
if (decode != null) {
indexCreator.addFeature(decode, p);
}
// read the next line if available
p = inputStream.getFilePointer();
line = inputStream.readLine();
}
// write the index to a file
Index index = indexCreator.finalizeIndex(p);
// VERY important! Either use Index.writeBasedOnFeatureFile(...) or pass the LittleEndianOutputStream a BGZF stream
index.write(outputStream);
}
}
Author: react-dev26, Project: NGB-master, Lines: 30, Source: FileManager.java
Example 12: run
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* @param maxMemory - in megabytes
* @throws IOException
*
*/
public void run(int maxMemory) throws IOException {
try (
PrintWriter writer = NgbFileUtils.isGzCompressed(outputFile.getName()) ?
new PrintWriter(new OutputStreamWriter(new BlockCompressedOutputStream(outputFile), UTF_8)) :
new PrintWriter(outputFile, UTF_8);
AsciiLineReader reader = NgbFileUtils.isGzCompressed(inputFile.getName()) ?
new AsciiLineReader(new BlockCompressedInputStream(inputFile)) :
new AsciiLineReader(new FileInputStream(inputFile))
) {
SortableRecordCodec codec = new SortableRecordCodec();
SortingCollection cltn = SortingCollection.newInstance(
SortableRecord.class, codec, comparator, maxMemory * 1024 * 1024 / ESTIMATED_RECORD_SIZE, tmpDir);
Parser parser = getParser();
String firstDataRow = writeHeader(reader, writer);
if (firstDataRow != null && !firstDataRow.isEmpty()) {
cltn.add(parser.createRecord(firstDataRow));
}
SortableRecord next;
while ((next = parser.readNextRecord(reader)) != null) {
cltn.add(next);
}
CloseableIterator<SortableRecord> iter = cltn.iterator();
while (iter.hasNext()) {
SortableRecord al = iter.next();
writer.println(al.getText());
}
iter.close();
}
}
Author: react-dev26, Project: NGB-master, Lines: 42, Source: AbstractFeatureSorter.java
Example 13: openFileForWriting
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
public static OutputStream openFileForWriting(final File file) throws IOException
{
if (file.getName().endsWith(".vcf.gz"))
{
return new BlockCompressedOutputStream(file);
}
else if (file.getName().endsWith(".gz"))
{
return new GZIPOutputStream(new FileOutputStream(file));
}
else
{
return new FileOutputStream(file);
}
}
Author: dariober, Project: ASCIIGenome, Lines: 16, Source: IOUtils.java
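The extension dispatch here is deliberate: a .vcf.gz file is written as BGZF so it can later be tabix-indexed and accessed randomly, while any other .gz file gets a plain GZIPOutputStream. Since BGZF is itself a valid gzip stream, both outputs can be decompressed with ordinary gunzip; only the BGZF one supports virtual-offset seeking.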
Example 14: blockCompressAndIndex
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Block-compresses the input file and creates the associated tabix index. The newly created file and index
* are deleted on exit if deleteOnExit is true.
* @throws IOException
* @throws InvalidRecordException
* */
private void blockCompressAndIndex(String in, String bgzfOut, boolean deleteOnExit) throws IOException, InvalidRecordException {
File inFile= new File(in);
File outFile= new File(bgzfOut);
LineIterator lin= utils.IOUtils.openURIForLineIterator(inFile.getAbsolutePath());
BlockCompressedOutputStream writer = new BlockCompressedOutputStream(outFile);
long filePosition= writer.getFilePointer();
TabixIndexCreator indexCreator=new TabixIndexCreator(TabixFormat.GFF);
while(lin.hasNext()){
String line = lin.next();
GtfLine gtf= new GtfLine(line.split("\t"));
writer.write(line.getBytes());
writer.write('\n');
indexCreator.addFeature(gtf, filePosition);
filePosition = writer.getFilePointer();
}
writer.flush();
File tbi= new File(bgzfOut + TabixUtils.STANDARD_INDEX_EXTENSION);
if(tbi.exists() && tbi.isFile()){
writer.close();
throw new RuntimeException("Index file exists: " + tbi);
}
Index index = indexCreator.finalizeIndex(writer.getFilePointer());
index.writeBasedOnFeatureFile(outFile);
writer.close();
if(deleteOnExit){
outFile.deleteOnExit();
File idx= new File(outFile.getAbsolutePath() + TabixUtils.STANDARD_INDEX_EXTENSION);
idx.deleteOnExit();
}
}
Author: dariober, Project: ASCIIGenome, Lines: 44, Source: UcscFetch.java
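A note on the getFilePointer() values used above: BGZF file pointers are virtual offsets, not plain byte positions. The upper 48 bits address the compressed block and the lower 16 bits the position inside the uncompressed block, which is what lets Tabix and BAM indexes seek precisely. A minimal illustrative helper (the names are ours, not htsjdk's):
/** Illustrative only: split a BGZF virtual file offset into its two components. */
static long[] decodeVirtualOffset(final long virtualOffset) {
    final long blockAddress = virtualOffset >>> 16;   // byte offset of the compressed BGZF block
    final long blockOffset = virtualOffset & 0xFFFF;  // offset within the uncompressed block (0-65535)
    return new long[] {blockAddress, blockOffset};
}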
Example 15: run
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* @param maxMemory - in megabytes
* @throws IOException
*
*/
public void run(int maxMemory) throws IOException {
try (
PrintWriter writer = NgbFileUtils.isGzCompressed(outputFile.getName()) ?
new PrintWriter(new OutputStreamWriter(new BlockCompressedOutputStream(outputFile), UTF_8)) :
new PrintWriter(outputFile, UTF_8);
AsciiLineReader reader = NgbFileUtils.isGzCompressed(inputFile.getName()) ?
new AsciiLineReader(new BlockCompressedInputStream(inputFile)) :
new AsciiLineReader(new FileInputStream(inputFile))
) {
SortableRecordCodec codec = new SortableRecordCodec();
SortingCollection cltn = SortingCollection.newInstance(
SortableRecord.class, codec, comparator, maxMemory * 1024 * 1024 / ESTIMATED_RECORD_SIZE, tmpDir);
Parser parser = getParser();
String firstDataRow = writeHeader(reader, writer);
if (firstDataRow != null && !firstDataRow.isEmpty()) {
cltn.add(parser.createRecord(firstDataRow));
}
SortableRecord next;
while ((next = parser.readNextRecord(reader)) != null) {
cltn.add(next);
}
CloseableIterator<SortableRecord> iter = cltn.iterator();
while (iter.hasNext()) {
SortableRecord al = iter.next();
writer.println(al.getText());
}
iter.close();
}
}
Author: epam, Project: NGB, Lines: 42, Source: AbstractFeatureSorter.java
Example 16: init
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
private void init(
OutputStream output, SAMFileHeader header, boolean writeHeader)
throws IOException
{
origOutput = output;
compressedOut = new BlockCompressedOutputStream(origOutput, null);
binaryCodec = new BinaryCodec(compressedOut);
recordCodec = new BAMRecordCodec(header);
recordCodec.setOutputStream(compressedOut);
if (writeHeader)
this.writeHeader(header);
}
Author: HadoopGenomics, Project: Hadoop-BAM, Lines: 16, Source: BAMRecordWriter.java
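Passing null as the second constructor argument here tells htsjdk that no physical file backs the stream, which is what allows BAM output to be written to an arbitrary OutputStream. Closing the stream still emits the 28-byte BGZF EOF block that BAM readers use to detect truncated files.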
Example 17: writeBAMHeaderToStream
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
/**
* Private helper method for {@link #convertHeaderlessHadoopBamShardToBam} that takes a SAMFileHeader and writes it
* to the provided `OutputStream`, correctly encoded for the BAM format and preceded by the BAM magic bytes.
*
* @param samFileHeader SAM header to write
* @param outputStream stream to write the SAM header to
*/
private static void writeBAMHeaderToStream( final SAMFileHeader samFileHeader, final OutputStream outputStream ) {
final BlockCompressedOutputStream blockCompressedOutputStream = new BlockCompressedOutputStream(outputStream, null);
final BinaryCodec outputBinaryCodec = new BinaryCodec(new DataOutputStream(blockCompressedOutputStream));
final String headerString;
final Writer stringWriter = new StringWriter();
new SAMTextHeaderCodec().encode(stringWriter, samFileHeader, true);
headerString = stringWriter.toString();
outputBinaryCodec.writeBytes(ReadUtils.BAM_MAGIC);
// calculate and write the length of the SAM file header text and the header text
outputBinaryCodec.writeString(headerString, true, false);
// write the sequence dictionary in binary; this is redundant with the text header
outputBinaryCodec.writeInt(samFileHeader.getSequenceDictionary().size());
for (final SAMSequenceRecord sequenceRecord: samFileHeader.getSequenceDictionary().getSequences()) {
outputBinaryCodec.writeString(sequenceRecord.getSequenceName(), true, true);
outputBinaryCodec.writeInt(sequenceRecord.getSequenceLength());
}
try {
blockCompressedOutputStream.flush();
} catch (final IOException ioe) {
throw new RuntimeIOException(ioe);
}
}
Author: broadinstitute, Project: gatk, Lines: 35, Source: SparkUtils.java
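The redundancy noted in the comment is mandated by the BAM specification: after the plain-text SAM header, a BAM file stores a binary sequence dictionary (reference count, then name and length for each reference), and readers rely on the binary copy. Calling flush() rather than close() at the end is also deliberate: the headerless shard data is appended afterwards, and close() would write the BGZF terminator block prematurely.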
Example 18: instanceMain
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
@Override
public Object instanceMain(final String[] argv) {
// First, we parse the commandline arguments, then we set important statics like VALIDATION_STRINGENCY, and
// finally, we call into the normal instance main (post arg-parsing). If don't start with the argument parsing
// we always get default values for VALIDATION_STRINGENCY, COMPRESSION_LEVEL, etc.
if (!parseArgs(argv)) {
//an information only argument like help or version was specified, just exit
return 0;
}
// set general SAM/BAM parameters
SamReaderFactory.setDefaultValidationStringency(VALIDATION_STRINGENCY);
BlockCompressedOutputStream.setDefaultCompressionLevel(COMPRESSION_LEVEL);
if (MAX_RECORDS_IN_RAM != null) {
SAMFileWriterImpl.setDefaultMaxRecordsInRam(MAX_RECORDS_IN_RAM);
}
if (CREATE_INDEX){
SAMFileWriterFactory.setDefaultCreateIndexWhileWriting(true);
}
SAMFileWriterFactory.setDefaultCreateMd5File(CREATE_MD5_FILE);
// defer to parent to finish the initialization and starting the program.
return instanceMainPostParseArgs();
}
Author: broadinstitute, Project: gatk, Lines: 28, Source: PicardCommandLineProgram.java
Example 19: testIntelInflaterDeflaterWithPrintReads
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
@Test(dataProvider = "JdkFlags")
public void testIntelInflaterDeflaterWithPrintReads(final boolean use_jdk_inflater, final boolean use_jdk_deflater) throws Exception {
if (!isIntelInflaterDeflaterSupported()) {
throw new SkipException("IntelInflater/IntelDeflater not available on this platform");
}
final File ORIG_BAM = new File(largeFileTestDir, INPUT_FILE);
final File outFile = GATKBaseTest.createTempFile(INPUT_FILE, ".bam");
final ArrayList<String> args = new ArrayList<>();
args.add("--input"); args.add(ORIG_BAM.getAbsolutePath());
args.add("--output"); args.add(outFile.getAbsolutePath());
args.add("--use-jdk-inflater"); args.add(String.valueOf(use_jdk_inflater));
args.add("--use-jdk-deflater"); args.add(String.valueOf(use_jdk_deflater));
// store current default factories, so they can be restored later
InflaterFactory currentInflaterFactory = BlockGunzipper.getDefaultInflaterFactory();
DeflaterFactory currentDeflaterFactory = BlockCompressedOutputStream.getDefaultDeflaterFactory();
// set default factories to jdk version
// because PrintReads cannot change the factory to Jdk if it was already set to Intel
BlockGunzipper.setDefaultInflaterFactory(new InflaterFactory());
BlockCompressedOutputStream.setDefaultDeflaterFactory(new DeflaterFactory());
// run PrintReads
runCommandLine(args);
// restore default factories
BlockGunzipper.setDefaultInflaterFactory(currentInflaterFactory);
BlockCompressedOutputStream.setDefaultDeflaterFactory(currentDeflaterFactory);
// validate input and output files are the same
SamAssertionUtils.assertSamsEqual(outFile, ORIG_BAM);
}
Author: broadinstitute, Project: gatk, Lines: 35, Source: IntelInflaterDeflaterIntegrationTest.java
Example 20: apply
import htsjdk.samtools.util.BlockCompressedOutputStream; // import the required package/class
@Override
public OrderedByteArray apply(OrderedByteArray object) {
if (object == null)
throw new NullPointerException();
log.debug("processing container " + object.order);
Container container;
try {
container = ContainerIO.readContainer(header.getVersion(), new ByteArrayInputStream(object.bytes));
if (container.isEOF())
return null;
ArrayList<CramCompressionRecord> records = new ArrayList<CramCompressionRecord>(container.nofRecords);
parser.getRecords(container, records, ValidationStringency.SILENT);
n.normalize(records, null, 0, container.header.substitutionMatrix);
ByteArrayOutputStream bamBAOS = new ByteArrayOutputStream();
BlockCompressedOutputStream os = new BlockCompressedOutputStream(bamBAOS, null);
codec.setOutputStream(os);
for (CramCompressionRecord record : records) {
SAMRecord samRecord = f.create(record);
codec.encode(samRecord);
}
os.flush();
OrderedByteArray bb = new OrderedByteArray();
bb.bytes = bamBAOS.toByteArray();
bb.order = object.order;
log.debug(String.format("Converted OBA %d, records %d", object.order, records.size()));
return bb;
} catch (IOException | IllegalArgumentException | IllegalAccessException e) {
throw new RuntimeException(e);
}
}
Author: enasequence, Project: cramtools, Lines: 34, Source: CramToBam_OBA_Function.java
Note: the htsjdk.samtools.util.BlockCompressedOutputStream examples in this article were collected from GitHub/MSDocs and other source-code and documentation hosting platforms, and the snippets were selected from open-source projects contributed by the community. The source code is copyrighted by its original authors; before redistributing or using it, refer to the license of the corresponding project. Do not reproduce without permission.