This article collects typical usage examples of the Java type org.supercsv.io.ICsvMapWriter. If you are unsure what ICsvMapWriter is for or how to use it, the curated examples below should help.
ICsvMapWriter is an interface in the org.supercsv.io package. Five code examples are shown below, sorted by popularity by default.
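Before diving into the project examples, here is a minimal, self-contained sketch (not taken from any of the projects below) of the typical ICsvMapWriter workflow: open a CsvMapWriter over a java.io.Writer, write a header row, write each record as a Map keyed by column name, and close the writer. The file name, header array, and row values are placeholders.

import java.io.FileWriter;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.supercsv.io.CsvMapWriter;
import org.supercsv.io.ICsvMapWriter;
import org.supercsv.prefs.CsvPreference;

public class MapWriterSketch {
    public static void main(String[] args) throws IOException {
        final String[] header = {"id", "name"};
        // try-with-resources flushes and closes the underlying FileWriter
        try (ICsvMapWriter writer = new CsvMapWriter(new FileWriter("people.csv"),
                CsvPreference.STANDARD_PREFERENCE)) {
            writer.writeHeader(header);
            Map<String, Object> row = new HashMap<>();
            row.put("id", 1);
            row.put("name", "Ada");
            // the name mapping decides which map keys are written and in what order
            writer.write(row, header);
        }
    }
}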
Example 1: main
import org.supercsv.io.ICsvMapWriter; // import the required package/class
public static void main(String[] args) throws IOException, ParsingException
{
// pipe-delimited output: quote character '"', delimiter '|', Unix line endings
CsvPreference csvPrefs = new CsvPreference('"', '|', "\n");
ICsvMapWriter out = new CsvMapWriter(new FileWriter(args[1]), csvPrefs);
// write the column header row before any records
out.writeHeader(FIELDS);
WikiReader wikiReader = new WikiReader();
wikiReader.setSkipRedirects(true);
FhlcExporter fe = new FhlcExporter(out);
wikiReader.addWikiPageParser(fe);
InputStream in = new FileInputStream(args[0]);
wikiReader.read(in);
in.close();
out.close();
}
Developer: werelate, Project: wikidata, Lines: 16, Source: FhlcExporter.java
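Two notes on the snippet above. First, the three-argument CsvPreference constructor it uses only exists in Super CSV 1.x; in 2.x the equivalent preference is built with CsvPreference.Builder. Second, FIELDS is not shown in the snippet; it is presumably a String[] of column names declared elsewhere in FhlcExporter. The sketch below illustrates both points with hypothetical column names that are not from the original project.

import org.supercsv.prefs.CsvPreference;

class FhlcExporterSketch {
    // Hypothetical stand-in for the FIELDS constant referenced above;
    // the real column names live in the original FhlcExporter class.
    private static final String[] FIELDS = {"title", "place", "year"};

    // Super CSV 2.x equivalent of the pipe-delimited preference built in main():
    // quote char '"', delimiter '|', Unix line endings.
    private static final CsvPreference CSV_PREFS =
            new CsvPreference.Builder('"', '|', "\n").build();
}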
Example 2: getCSVMapWriter
import org.supercsv.io.ICsvMapWriter; // import the required package/class
public ICsvMapWriter getCSVMapWriter(String fileToWrite) {
try {
return new CsvMapWriter(new FileWriterWithEncoding(fileToWrite,"UTF-8", true),
new CsvPreference.Builder(CsvPreference.EXCEL_PREFERENCE)
.useEncoder(new DefaultCsvEncoder())
.build() );
} catch (IOException e) {
logger.error("Error in creating CSV map writer!", e);
}
return null;
}
Developer: qcri-social, Project: AIDR, Lines: 12, Source: ReadWriteCSV.java
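In the FileWriterWithEncoding constructor above, the third argument (true) opens the target file in append mode, and EXCEL_PREFERENCE gives comma-delimited output. Below is a hedged usage sketch of a caller placed in the same ReadWriteCSV class; the file path and column names are placeholders, and the caller null-checks the result and is responsible for closing the writer.

// Hypothetical caller in the same class; path and columns are illustrative only.
public void appendRow() throws IOException {
    ICsvMapWriter writer = getCSVMapWriter("/tmp/aidr_output.csv");
    if (writer != null) {
        try {
            String[] columns = {"id", "text"};
            Map<String, Object> row = new HashMap<>();
            row.put("id", "1");
            row.put("text", "example item");
            // no writeHeader() call here because the file is opened for appending
            writer.write(row, columns);
        } finally {
            writer.close(); // flushes and closes the underlying FileWriterWithEncoding
        }
    }
}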
Example 3: doWork
import org.supercsv.io.ICsvMapWriter; // import the required package/class
/**
* Executes the job.query and creates a data file that will contain the records from the job.from to job.to positions.
*/
private void doWork(final DownloadFileWork work) throws IOException {
final DatasetUsagesCollector datasetUsagesCollector = new DatasetUsagesCollector();
try (ICsvMapWriter csvMapWriter = new CsvMapWriter(new FileWriterWithEncoding(work.getJobDataFileName(),
Charsets.UTF_8),
CsvPreference.TAB_PREFERENCE)) {
SolrQueryProcessor.processQuery(work, new Predicate<Integer>() {
@Override
public boolean apply(@Nullable Integer occurrenceKey) {
try {
org.apache.hadoop.hbase.client.Result result = work.getOccurrenceMapReader().get(occurrenceKey);
Map<String, String> occurrenceRecordMap = buildOccurrenceMap(result, DownloadTerms.SIMPLE_DOWNLOAD_TERMS);
if (occurrenceRecordMap != null) {
//collect usages
datasetUsagesCollector.collectDatasetUsage(occurrenceRecordMap.get(GbifTerm.datasetKey.simpleName()),
occurrenceRecordMap.get(DcTerm.license.simpleName()));
//write results
csvMapWriter.write(occurrenceRecordMap, COLUMNS);
return true;
} else {
LOG.error(String.format("Occurrence id %s not found!", occurrenceKey));
}
} catch (Exception e) {
throw Throwables.propagate(e);
}
return false;
}
});
} finally {
// Release the lock
work.getLock().unlock();
LOG.info("Lock released, job detail: {} ", work.toString());
}
getSender().tell(new Result(work, datasetUsagesCollector.getDatasetUsages(),
datasetUsagesCollector.getDatasetLicenses()), getSelf());
}
Developer: gbif, Project: occurrence, Lines: 42, Source: SimpleCsvDownloadActor.java
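Examples 3 and 5 share the same pattern: a CsvMapWriter opened in a try-with-resources block over a UTF-8 FileWriterWithEncoding, CsvPreference.TAB_PREFERENCE for tab-separated output, and each record written as a map restricted to a fixed column array. Here is a minimal, self-contained sketch of that pattern, using placeholder column names rather than the project's COLUMNS constant.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.io.output.FileWriterWithEncoding;
import org.supercsv.io.CsvMapWriter;
import org.supercsv.io.ICsvMapWriter;
import org.supercsv.prefs.CsvPreference;

public class TabSeparatedSketch {
    public static void main(String[] args) throws IOException {
        // Placeholder column subset; only these keys are written, in this order.
        String[] columns = {"gbifID", "datasetKey"};
        try (ICsvMapWriter writer = new CsvMapWriter(
                new FileWriterWithEncoding("occurrence.txt", "UTF-8"),
                CsvPreference.TAB_PREFERENCE)) {
            Map<String, String> record = new HashMap<>();
            record.put("gbifID", "123");
            record.put("datasetKey", "abc");
            record.put("license", "CC0_1_0"); // present in the map but not in columns, so skipped
            writer.write(record, columns);
        }
    }
}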
Example 4: FhlcExporter
import org.supercsv.io.ICsvMapWriter; // import the required package/class
public FhlcExporter(ICsvMapWriter out) {
this.out = out;
}
Developer: werelate, Project: wikidata, Lines: 4, Source: FhlcExporter.java
Example 5: doWork
import org.supercsv.io.ICsvMapWriter; // import the required package/class
/**
* Executes the job.query and creates a data file that will contain the records from the job.from to job.to positions.
*/
public void doWork(final DownloadFileWork work) throws IOException {
final DatasetUsagesCollector datasetUsagesCollector = new DatasetUsagesCollector();
try (
ICsvMapWriter intCsvWriter = new CsvMapWriter(new FileWriterWithEncoding(work.getJobDataFileName()
+ TableSuffixes.INTERPRETED_SUFFIX,
Charsets.UTF_8),
CsvPreference.TAB_PREFERENCE);
ICsvMapWriter verbCsvWriter = new CsvMapWriter(new FileWriterWithEncoding(work.getJobDataFileName()
+ TableSuffixes.VERBATIM_SUFFIX,
Charsets.UTF_8),
CsvPreference.TAB_PREFERENCE);
ICsvBeanWriter multimediaCsvWriter = new CsvBeanWriter(new FileWriterWithEncoding(work.getJobDataFileName()
+ TableSuffixes.MULTIMEDIA_SUFFIX,
Charsets.UTF_8),
CsvPreference.TAB_PREFERENCE)) {
SolrQueryProcessor.processQuery(work, new Predicate<Integer>() {
@Override
public boolean apply(@Nullable Integer occurrenceKey) {
try {
// Writes the occurrence record obtained from HBase as Map<String,Object>.
org.apache.hadoop.hbase.client.Result result = work.getOccurrenceMapReader().get(occurrenceKey);
Map<String, String> occurrenceRecordMap = OccurrenceMapReader.buildInterpretedOccurrenceMap(result);
Map<String, String> verbOccurrenceRecordMap = OccurrenceMapReader.buildVerbatimOccurrenceMap(result);
if (occurrenceRecordMap != null) {
datasetUsagesCollector.incrementDatasetUsage(occurrenceRecordMap.get(GbifTerm.datasetKey.simpleName()));
intCsvWriter.write(occurrenceRecordMap, INT_COLUMNS);
verbCsvWriter.write(verbOccurrenceRecordMap, VERB_COLUMNS);
writeMediaObjects(multimediaCsvWriter, result, occurrenceKey);
return true;
} else {
LOG.error(String.format("Occurrence id %s not found!", occurrenceKey));
}
} catch (Exception e) {
throw Throwables.propagate(e);
}
return false;
}
});
} finally {
// Unlock the assigned lock.
work.getLock().unlock();
LOG.info("Lock released, job detail: {} ", work.toString());
}
getSender().tell(new Result(work, datasetUsagesCollector.getDatasetUsages()), getSelf());
}
Developer: gbif, Project: occurrence, Lines: 51, Source: DownloadDwcaActor.java
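Example 5 also opens an ICsvBeanWriter alongside the two map writers for the multimedia file; unlike ICsvMapWriter, it populates each row from a JavaBean's getters rather than from map keys. Below is a hedged sketch of that contrast, with a hypothetical MediaRecord bean standing in for the project's multimedia type.

import java.io.FileWriter;
import java.io.IOException;
import org.supercsv.io.CsvBeanWriter;
import org.supercsv.io.ICsvBeanWriter;
import org.supercsv.prefs.CsvPreference;

public class BeanWriterSketch {
    // Hypothetical bean; the real multimedia type belongs to the GBIF project.
    public static class MediaRecord {
        private String identifier;
        private String format;
        public String getIdentifier() { return identifier; }
        public void setIdentifier(String identifier) { this.identifier = identifier; }
        public String getFormat() { return format; }
        public void setFormat(String format) { this.format = format; }
    }

    public static void main(String[] args) throws IOException {
        // nameMapping entries must match the bean's property names
        String[] mapping = {"identifier", "format"};
        try (ICsvBeanWriter beanWriter = new CsvBeanWriter(new FileWriter("multimedia.txt"),
                CsvPreference.TAB_PREFERENCE)) {
            MediaRecord media = new MediaRecord();
            media.setIdentifier("http://example.org/image.jpg");
            media.setFormat("image/jpeg");
            beanWriter.write(media, mapping);
        }
    }
}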
Note: the org.supercsv.io.ICsvMapWriter examples in this article were collected from open-source code hosted on platforms such as GitHub/MSDocs. The snippets are taken from open-source projects and remain the copyright of their original authors; when reusing or redistributing them, please follow the corresponding project's license. Do not republish without permission.