This article collects typical usage examples of the Java class org.apache.clerezza.rdf.core.serializedform.SupportedFormat. If you are wondering what the SupportedFormat class is for, how to use it, or where to find examples of its use, the selected code examples below may help.
The SupportedFormat class belongs to the org.apache.clerezza.rdf.core.serializedform package. Ten code examples of the class are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Java code examples.
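Before the collected examples, here is a minimal, self-contained round-trip sketch (not taken from the projects below; it assumes a Clerezza parsing and serializing provider for Turtle and N-Triples is on the classpath) showing how the SupportedFormat constants are passed to Clerezza's Parser and Serializer as format identifiers:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.clerezza.rdf.core.TripleCollection;
import org.apache.clerezza.rdf.core.serializedform.Parser;
import org.apache.clerezza.rdf.core.serializedform.Serializer;
import org.apache.clerezza.rdf.core.serializedform.SupportedFormat;

public class SupportedFormatRoundTrip {

    public static void main(String[] args) throws Exception {
        String turtle = "<http://example.org/a> "
                + "<http://www.w3.org/2002/07/owl#sameAs> "
                + "<http://example.org/b> .";

        // parse Turtle input, using the SupportedFormat.TURTLE constant as the format identifier
        TripleCollection graph = Parser.getInstance().parse(
                new ByteArrayInputStream(turtle.getBytes(StandardCharsets.UTF_8)),
                SupportedFormat.TURTLE);

        // re-serialize the same graph as N-Triples
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Serializer.getInstance().serialize(out, graph, SupportedFormat.N_TRIPLE);
        System.out.println(out.toString("UTF-8"));
    }
}

As the examples below suggest, the constants double as MIME type strings (they are compared directly against Content-Type header values), so the same value can serve both as an HTTP media type and as the parser/serializer format identifier.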
Example 1: generateRdf

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

@Override
protected TripleCollection generateRdf(HttpRequestEntity entity) throws IOException {
    String rdfDataFormat = SupportedFormat.TURTLE;
    InputStream configIn = null;
    String queryString = entity.getRequest().getQueryString();
    log.info("Query string: " + queryString);
    //String configUri = getRequestParamValue(queryString, "config");
    String configUri = entity.getRequest().getParameter("config");
    log.info("Config file URI: " + configUri);
    if (configUri != null) {
        configIn = getRemoteConfigFile(configUri);
    }
    final InputStream inputRdfData = entity.getData();
    TripleCollection duplicates = findSameEntities(inputRdfData, rdfDataFormat, configIn);
    return duplicates;
}

Developer ID: fusepoolP3, Project: p3-silkdedup, Lines of code: 19, Source: DuplicatesTransformer.java
Example 2: testSilkRdfTurtle

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

@Test
public void testSilkRdfTurtle() throws IOException {
    Response response = RestAssured.given().header("Accept", "text/turtle")
            .contentType("text/turtle;charset=UTF-8")
            .content(ttlData)
            .expect()
            .statusCode(HttpStatus.SC_OK)
            .content(new StringContains("http://www.w3.org/2002/07/owl#sameAs"))
            .header("Content-Type", SupportedFormat.TURTLE)
            .when()
            .post();
    Graph graph = Parser.getInstance().parse(response.getBody().asInputStream(), "text/turtle");
    Iterator<Triple> typeTriples = graph.filter(null, OWL.sameAs, null);
    Assert.assertTrue("No equivalent entities found", typeTriples.hasNext());
}

Developer ID: fusepoolP3, Project: p3-silkdedup, Lines of code: 17, Source: DuplicatesTransformerTest.java
Example 3: testRemoteConfig

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

@Test
public void testRemoteConfig() {
    // Set up a service in the mock server to respond to the GET request that the transformer
    // must send to fetch the Silk config file
    stubFor(get(urlEqualTo("/fusepoolp3/silk-config-file.xml"))
            .willReturn(aResponse()
                    .withStatus(HttpStatus.SC_OK)
                    .withHeader("Content-Type", "text/xml")
                    .withBody(silkconf)));
    // The response object acts as a client of the transformer. It sends a POST request
    // to the transformer with the URL of the Silk config file and the data to be interlinked,
    // and gets the result back from the transformer.
    Response response = RestAssured.given().header("Accept", "text/turtle")
            .contentType("text/turtle")
            .content(ttlData)
            .expect().statusCode(HttpStatus.SC_OK).when()
            .post("/?config=http://localhost:" + mockPort + "/fusepoolp3/silk-config-file.xml");
    Graph graph = Parser.getInstance().parse(response.getBody().asInputStream(), SupportedFormat.TURTLE);
    Iterator<Triple> typeTriples = graph.filter(null, OWL.sameAs, null);
    Assert.assertTrue("No equivalent entities found", typeTriples.hasNext());
}

Developer ID: fusepoolP3, Project: p3-silkdedup, Lines of code: 26, Source: SilkConfigTest.java
Example 4: uploadRdf

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

/**
 * Loads RDF data sent by HTTP POST. Use the "Dataset" custom header
 * to address the dataset in which to store the RDF data.
 * Use this service with the following curl command:
 * curl -X POST -u admin: -H "Content-Type: application/rdf+xml"
 *      -H "Dataset: mydataset" -T <rdf_file> http://localhost:8080/dlcupload/rdf
 */
@POST
@Path("rdf")
@Produces("text/plain")
public String uploadRdf(@Context final UriInfo uriInfo,
        @HeaderParam("Content-Type") String mediaType,
        @HeaderParam("Dataset") String dataset,
        final InputStream stream) throws Exception {
    AccessController.checkPermission(new AllPermission());
    final MGraph graph = new SimpleMGraph();
    String message = "";
    if (mediaType.equals(SupportedFormat.RDF_XML)) {
        parser.parse(graph, stream, SupportedFormat.RDF_XML);
    } else {
        message = "Add header Content-Type: application/rdf+xml ";
    }
    return message + "Added " + graph.size() + " triples to dataset " + dataset + "\n";
}

Developer ID: fusepool, Project: datalifecycle, Lines of code: 30, Source: DlcUploader.java
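A hedged variant of the handler above (not part of the original DlcUploader): since each SupportedFormat constant is the corresponding MIME type string, the Content-Type dispatch can be widened by matching the header against several constants and reusing the header value as the parser format identifier:

// Sketch of an assumed variant: accept Turtle in addition to RDF/XML by matching
// the Content-Type header against the SupportedFormat constants and passing the
// header value straight to the parser as the format identifier.
if (SupportedFormat.RDF_XML.equals(mediaType) || SupportedFormat.TURTLE.equals(mediaType)) {
    parser.parse(graph, stream, mediaType);
} else {
    message = "Add header Content-Type: application/rdf+xml or text/turtle ";
}

Note that, as in the original, an exact string comparison is assumed, so a Content-Type carrying a charset parameter (e.g. text/turtle;charset=UTF-8) would not match.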
Example 5: initTest

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

@BeforeClass
public static void initTest() throws Exception {
    // read the fise:Enhancements used by the test and validate that they
    // conform to the FISE enhancement structure
    InputStream in = Fise2FamEngineTest.class.getClassLoader().getResourceAsStream(TEST_ENHANCEMENTS);
    Assert.assertNotNull("Unable to load resource '" + TEST_ENHANCEMENTS + "' via Classpath", in);
    origEnhancements = new IndexedMGraph();
    rdfParser.parse(origEnhancements, in, SupportedFormat.TURTLE, null);
    in.close();
    Assert.assertFalse(origEnhancements.isEmpty());
    // parse the ID of the ContentItem from the enhancements
    Iterator<Triple> it = origEnhancements.filter(null, Properties.ENHANCER_EXTRACTED_FROM, null);
    Assert.assertTrue(it.hasNext());
    Resource id = it.next().getObject();
    Assert.assertTrue(id instanceof UriRef);
    ciUri = (UriRef) id;
    // validate that the enhancements in the file are valid
    EnhancementStructureHelper.validateAllTextAnnotations(origEnhancements, CONTENT, null);
    // init the ContentItem for testing Keyword and Sentiment annotations
    // read the content for the KS test
    in = Fise2FamEngineTest.class.getClassLoader().getResourceAsStream(TEST_KEY_SENT_FILE);
    Assert.assertNotNull("Unable to load resource '" + TEST_KEY_SENT_FILE + "' via Classpath", in);
    ksContent = IOUtils.toString(in, "UTF-8");
    in.close();
    // read the RDF enhancements for the KS test
    in = Fise2FamEngineTest.class.getClassLoader().getResourceAsStream(TEST_KEY_SENT_ENHANCEMENTS);
    Assert.assertNotNull("Unable to load resource '" + TEST_KEY_SENT_ENHANCEMENTS + "' via Classpath", in);
    ksOrigEnhancements = new IndexedMGraph();
    rdfParser.parse(ksOrigEnhancements, in, SupportedFormat.TURTLE, null);
    in.close();
    Assert.assertFalse(ksOrigEnhancements.isEmpty());
    // parse the ID of the ContentItem from the enhancements
    it = ksOrigEnhancements.filter(null, Properties.ENHANCER_EXTRACTED_FROM, null);
    Assert.assertTrue(it.hasNext());
    id = it.next().getObject();
    Assert.assertTrue(id instanceof UriRef);
    ksCiUri = (UriRef) id;
}

Developer ID: fusepoolP3, Project: p3-stanbol-engine-fam, Lines of code: 41, Source: Fise2FamEngineTest.java
Example 6: logRdf

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

private static void logRdf(String title, TripleCollection graph) {
    if (log.isDebugEnabled()) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        rdfSerializer.serialize(out, graph, SupportedFormat.TURTLE);
        try {
            log.debug("{} {}", title == null ? "RDF:\n" : title, out.toString("UTF8"));
        } catch (UnsupportedEncodingException e) { /* ignore */ }
    }
}

Developer ID: fusepoolP3, Project: p3-stanbol-engine-fam, Lines of code: 10, Source: Fise2FamEngineTest.java
Example 7: logEnhancements

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

/**
 * Logs the enhancements as TURTLE on DEBUG level
 * @param ci the ContentItem
 */
private void logEnhancements(ContentItem ci) {
    if (LOG.isDebugEnabled()) {
        ByteArrayOutputStream bout = new ByteArrayOutputStream();
        serializer.serialize(bout, ci.getMetadata(), SupportedFormat.TURTLE);
        LOG.debug("Enhancements of {}", ci.getUri().getUnicodeString());
        LOG.debug(new String(bout.toByteArray(), Charset.forName("UTF8")));
    }
}

Developer ID: michelemostarda, Project: machinelinking-stanbol-enhancement-engine, Lines of code: 13, Source: MLLanguageIdentifierEnhancementEngineTest.java
Example 8: findSameEntities

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

/**
 * The client RDF data is always used as the source data source, of type file, for the comparisons with a target data source.
 * The target data source can be of type file or a SPARQL endpoint. If the target data source in the Silk config file
 * is set to type file, the same client data is used and the task is a deduplication task (Silk works only with local files).
 * The updated configuration file, the input RDF data and the output files are stored in the /tmp/ folder.
 * @param inputRdf
 * @return
 * @throws IOException
 */
protected TripleCollection findSameEntities(InputStream inputRdf, String rdfFormat, InputStream configIn) throws IOException {
    // default Silk config file
    File configFile = null;
    if (configIn != null) {
        configFile = FileUtil.inputStreamToFile(configIn, "silk-config-", ".xml");
    } else {
        configFile = FileUtil.inputStreamToFile(getClass().getResourceAsStream("silk-config-file.xml"), "silk-config-", ".xml");
    }
    // file with the original data serialized in N-TRIPLE format
    File ntFile = File.createTempFile("input-rdf", ".nt");
    // file containing the equivalences
    File outFile = File.createTempFile("output-", ".nt");
    // update the config file with the paths of the source data source and output files and the format;
    // if the type of the target data source is "file", update its path as well (deduplication)
    SilkConfigFileParser silkParser = new SilkConfigFileParser(configFile.getAbsolutePath());
    silkParser.updateOutputFile(outFile.getAbsolutePath());
    silkParser.updateSourceDataSourceFile(ntFile.getAbsolutePath(), "N-TRIPLE");
    if (silkParser.getTargetDataSourcetype().equals("file")) {
        silkParser.updateTargetDataSourceFile(ntFile.getAbsolutePath(), "N-TRIPLE"); // deduplication
    }
    silkParser.saveChanges();
    // convert the input data into N-TRIPLE format
    Parser parser = Parser.getInstance();
    TripleCollection origGraph = parser.parse(inputRdf, rdfFormat);
    Serializer serializer = Serializer.getInstance();
    serializer.serialize(new FileOutputStream(ntFile), origGraph, SupportedFormat.N_TRIPLE);
    // interlink the entities
    Silk.executeFile(configFile, null, 1, true);
    log.info("Interlinking task completed.");
    TripleCollection equivalences = parseResult(outFile);
    // add the equivalences to the input RDF data to be sent back to the client
    TripleCollection resultGraph = new SimpleMGraph();
    resultGraph.addAll(origGraph);
    resultGraph.addAll(equivalences);
    // remove all temporary files
    configFile.delete();
    ntFile.delete();
    outFile.delete();
    // return the result to the client
    return resultGraph;
}

Developer ID: fusepoolP3, Project: p3-silkdedup, Lines of code: 59, Source: DuplicatesTransformer.java
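As a usage sketch (assumed to run inside a subclass of DuplicatesTransformer or a test in the same package, since findSameEntities is protected; ttlData is a hypothetical Turtle string), the method can be driven with Turtle input and the bundled default config by passing null for configIn, and the result checked for owl:sameAs links just as the tests above do:

// Usage sketch (assumed context, hypothetical ttlData): run findSameEntities with
// Turtle input and the bundled default Silk config, then look for owl:sameAs links.
InputStream turtleIn = new ByteArrayInputStream(ttlData.getBytes(StandardCharsets.UTF_8));
TripleCollection result = findSameEntities(turtleIn, SupportedFormat.TURTLE, null);
Iterator<Triple> sameAsLinks = result.filter(null, OWL.sameAs, null);
Assert.assertTrue("No equivalent entities found", sameAsLinks.hasNext());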
Example 9: testGetPoint

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

@Test
public void testGetPoint() throws IOException {
    TripleCollection graph = Parser.getInstance().parse(getClass().getResourceAsStream(TEST_DATASET), SupportedFormat.TURTLE);
    WGS84Point point = jenas.getPoint(graph);
    Assert.assertTrue(point != null);
}

Developer ID: fusepoolP3, Project: p3-geo-enriching-transformer, Lines of code: 7, Source: JenaSpatialTest.java
Example 10: parseResult

import org.apache.clerezza.rdf.core.serializedform.SupportedFormat; // import the required package/class

/**
 * Reads the Silk output (N-Triples) and returns the owl:sameAs statements
 * as the result
 *
 * @param file
 * @return
 * @throws IOException
 */
public TripleCollection parseResult(File file) throws IOException {
    Parser parser = Parser.getInstance();
    return parser.parse(new FileInputStream(file), SupportedFormat.N_TRIPLE);
}

Developer ID: fusepoolP3, Project: p3-silkdedup, Lines of code: 13, Source: DuplicatesTransformer.java
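For illustration (hypothetical file content, with parseResult called on an instance of DuplicatesTransformer whose construction is omitted): the Silk output consumed here is a plain N-Triples file of owl:sameAs statements, so a minimal round trip looks like this:

// Sketch (hypothetical content): write a single owl:sameAs statement as N-Triples,
// read it back through parseResult and check that exactly one triple was parsed.
File out = File.createTempFile("output-", ".nt");
String nt = "<http://example.org/a> <http://www.w3.org/2002/07/owl#sameAs> <http://example.org/b> .\n";
java.nio.file.Files.write(out.toPath(), nt.getBytes(StandardCharsets.UTF_8));
TripleCollection equivalences = parseResult(out);
Assert.assertEquals(1, equivalences.size());
out.delete();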
Note: the org.apache.clerezza.rdf.core.serializedform.SupportedFormat class examples in this article were collected from source code and documentation platforms such as GitHub/MSDocs. The code snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. Please observe the corresponding project's license when distributing or using the code; do not reproduce without permission.