
Java Linear Class Code Examples


This article collects and summarizes typical usage examples of the Java class de.bwaldvogel.liblinear.Linear. If you have been wondering what the Linear class does, how to use it, or where to find usage examples, the curated code samples below may help.



The Linear class belongs to the de.bwaldvogel.liblinear package. A total of 20 code examples of the Linear class are shown below, sorted by popularity by default.
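
Before diving into the individual examples, here is a minimal, self-contained sketch of the typical liblinear-java workflow that the snippets below build on: assemble a sparse Problem, train with Linear.train, and predict with Linear.predict. The toy data and the class name LinearQuickStart are hypothetical, chosen only for illustration.

import de.bwaldvogel.liblinear.Feature;
import de.bwaldvogel.liblinear.FeatureNode;
import de.bwaldvogel.liblinear.Linear;
import de.bwaldvogel.liblinear.Model;
import de.bwaldvogel.liblinear.Parameter;
import de.bwaldvogel.liblinear.Problem;
import de.bwaldvogel.liblinear.SolverType;

public class LinearQuickStart {

	public static void main(String[] args) {
		// Hypothetical toy training set: two sparse instances with two features each.
		Feature[][] x = {
				{ new FeatureNode(1, 1.0), new FeatureNode(2, 0.5) },
				{ new FeatureNode(1, -1.0), new FeatureNode(2, 2.0) }
		};
		double[] y = { 1, 2 }; // one class label per instance

		Problem problem = new Problem();
		problem.l = x.length; // number of training instances
		problem.n = 2;        // number of features
		problem.x = x;        // sparse feature vectors
		problem.y = y;        // target labels

		// L2-regularized logistic regression with cost C = 1.0 and stopping tolerance eps = 0.01
		Parameter parameter = new Parameter(SolverType.L2R_LR, 1.0, 0.01);

		Linear.disableDebugOutput(); // silence the solver's console output
		Model model = Linear.train(problem, parameter);

		// Predict the label of a new sparse instance.
		Feature[] instance = { new FeatureNode(1, 0.3), new FeatureNode(2, 1.2) };
		double prediction = Linear.predict(model, instance);
		System.out.println("Predicted label: " + prediction);
	}
}

Most of the examples that follow are variations on this pattern: building the Problem from application-specific feature encodings, choosing a SolverType, and post-processing the predictions or probability estimates.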

Example 1: learn

import de.bwaldvogel.liblinear.Linear; // import the required package/class
/** Learns a new SVM model with the LibLinear package. */
@Override
public Model learn(ExampleSet exampleSet) throws OperatorException {

	Parameter params = getParameters(exampleSet);

	if (exampleSet.size() < 2) {
		throw new UserError(this, 110, 2);
	}

	Linear.resetRandom();
	Linear.disableDebugOutput();
	Problem problem = getProblem(exampleSet);
	de.bwaldvogel.liblinear.Model model = Linear.train(problem, params);

	return new FastMarginModel(exampleSet, model, getParameterAsBoolean(PARAMETER_USE_BIAS));
}
 
Developer: transwarpio, Project: rapidminer, Lines: 18, Source: FastLargeMargin.java


Example 2: getFeatureImportance

import de.bwaldvogel.liblinear.Linear; // import the required package/class
/**
 * @param gatherer gatherer providing the training examples
 * @param features candidate feature IDs to rank
 * @return an array of feature IDs (>=1), ordered by feature importance, without zero-importance features.
 */
private static <T extends Serializable, G extends Serializable> int[] getFeatureImportance(ExampleGatherer<T, G> gatherer,
        int[] features) {
	ZScoreFeatureNormalizer scaleFn = ZScoreFeatureNormalizer.fromGatherer(gatherer);
	Parameter param = new Parameter(SolverType.L2R_L2LOSS_SVR, 0.01, 0.001);
	Problem problem = gatherer.generateLibLinearProblem(features, scaleFn);
	Model m = Linear.train(problem, param);
	double[] weights = m.getFeatureWeights();

	int[] ftrImportance = Arrays.stream(features).boxed().sorted(new Comparator<Integer>() {
		@Override
		public int compare(Integer fId0, Integer fId1) {
			return Double.compare(Math.abs(weights[ArrayUtils.indexOf(features, fId0)]),
					Math.abs(weights[ArrayUtils.indexOf(features, fId1)]));
		}
	}).filter(fId -> weights[ArrayUtils.indexOf(features, fId)] != 0.0).mapToInt(fId -> fId.intValue()).toArray();

	return ftrImportance;
}
 
Developer: marcocor, Project: smaph, Lines: 23, Source: TuneModelLibSvm.java


Example 3: serialize

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public void serialize(OutputStream out) throws IOException {
  
  DataOutputStream ds = new DataOutputStream(out);
  
  ByteArrayOutputStream modelBytes = new ByteArrayOutputStream();
  Linear.saveModel(new OutputStreamWriter(modelBytes, LIBLINEAR_MODEL_ENCODING), model);

  ds.writeInt(modelBytes.size());
  ds.write(modelBytes.toByteArray());
  
  // write string array
  // write label count
  ds.writeInt(outcomeLabels.length);
  
  // write each label
  for (String outcomeLabel : outcomeLabels) {
    ds.writeUTF(outcomeLabel);
  }

  // write entry count
  ds.writeInt(predMap.size());
  for (Map.Entry<String, Integer> entry : predMap.entrySet()) {
    ds.writeUTF(entry.getKey());
    ds.writeInt(entry.getValue());
  }
}
 
Developer: apache, Project: opennlp-addons, Lines: 27, Source: LiblinearModel.java


Example 4: trainSvm

import de.bwaldvogel.liblinear.Linear; // import the required package/class
/**
 * Train SVM model. Return alpha and w matrix.
 */
public StoreAlphaWeight trainSvm(File saveModel) throws Exception {
	StoreAlphaWeight saww = new StoreAlphaWeight();
	this.modelFile = saveModel;
	Problem problem = new Problem();
	problem.l = train;
	problem.n = dimensions;
	problem.x = vectrain;
	problem.y = trainattr;
	SolverType s = SolverType.MCSVM_CS;
	Parameter parameter = new Parameter(s, C, eps);
	Model modelg = Linear.train(problem, parameter, saww);
	try {
		modelg.save(saveModel);
	} catch (IOException e) {
		e.printStackTrace();
	}
	return saww;
}
 
Developer: thunlp, Project: MMDW, Lines: 24, Source: Evaluate_SVM.java


Example 5: evaluateSvm

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public double[] evaluateSvm() throws Exception {
	int right = 0;
	Model model = Model.load(modelFile);
	for (int t = 0; t < test; t++) {
		double prediction = Linear.predict(model, vectest[t]);
		if (prediction == testattr[t]) {
			right++;
		}
	}
	double precision = (double) right / test;
	System.err.println("*************Precision = " + precision * 100 + "%*************");
	double[] storeResult = new double[3];
	storeResult[0] = right;
	storeResult[1] = test;
	storeResult[2] = precision;
	return storeResult;
}
 
Developer: thunlp, Project: MMDW, Lines: 18, Source: Evaluate_SVM.java


Example 6: train

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public void train(List<Pair<CounterInterface<Integer>,Integer>> trainSet) {
	Problem problem = new Problem();
	FeatureNode[][] x = new FeatureNode[trainSet.size()][];
	double[] y = new double[trainSet.size()];
	int maxFeature = 0;
	for (int i=0; i<x.length; ++i) {
		CounterInterface<Integer> features = trainSet.get(i).getFirst();
		for (Map.Entry<Integer, Double> feat : features.entries()) {
			maxFeature = Math.max(feat.getKey()+1, maxFeature);
		}
		x[i] = convertToFeatureNodes(features);
		y[i] = trainSet.get(i).getSecond();
	}
	
	problem.l = trainSet.size();
	problem.n = maxFeature;
	problem.x = x;
	problem.y = y;
	problem.bias = 0.0;
	
	Parameter parameter = new Parameter(solverType, C, eps);
	model = Linear.train(problem, parameter);
}
 
Developer: tberg12, Project: murphy, Lines: 24, Source: LibLinearWrapper.java


Example 7: predictOne

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public Matrix predictOne(Feature[] x) {
	Matrix result = null;
	if (model.isProbabilityModel()) {
		double[] probabilities = new double[model.getNrClass()];
		Linear.predictProbability(model, x, probabilities);
		result = Matrix.Factory.zeros(1, model.getNrClass());
		for (int i = 0; i < probabilities.length; i++) {
			int label = model.getLabels()[i];
			result.setAsDouble(probabilities[i], 0, label);
		}
	} else {
		double classId = Linear.predict(model, x);
		result = Matrix.Factory.zeros(1, Math.max(model.getNrClass(), (int) (classId + 1)));
		result.setAsDouble(1.0, 0, (int) classId);
	}

	return result;
}
 
Developer: jdmp, Project: java-data-mining-package, Lines: 19, Source: LibLinearClassifier.java


Example 8: score

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public Map<OUTCOME_TYPE, Double> score(List<Feature> features) throws CleartkProcessingException {
  FeatureNode[] encodedFeatures = this.featuresEncoder.encodeAll(features);
  
  // get score for each outcome
  int[] encodedOutcomes = this.model.getLabels();
  double[] scores = new double[encodedOutcomes.length];
  if (this.model.isProbabilityModel()) {
    Linear.predictProbability(this.model, encodedFeatures, scores);
  } else {
    Linear.predictValues(this.model, encodedFeatures, scores);
  }
  
  // handle 2-class model, which is special-cased by LIBLINEAR to only return one score
  if (this.model.getNrClass() == 2 && scores[1] == 0.0) {
    scores[1] = -scores[0];
  }
  
  // create scored outcome objects
  Map<OUTCOME_TYPE, Double> scoredOutcomes = Maps.newHashMap();
  for (int i = 0; i < encodedOutcomes.length; ++i) {
    OUTCOME_TYPE outcome = this.outcomeEncoder.decode(encodedOutcomes[i]);
    scoredOutcomes.put(outcome, scores[i]);
  }
  return scoredOutcomes;
}
 
Developer: ClearTK, Project: cleartk, Lines: 27, Source: GenericLibLinearClassifier.java


Example 9: testLinearModel

import de.bwaldvogel.liblinear.Linear; // import the required package/class
private static Prediction[] testLinearModel(LibLINEARModel model, Feature[][] problem) {
	Prediction[] pred = new Prediction[problem.length];		
	for (int i = 0; i < problem.length; i++) {
		double[] decVal = new double[(model.getModel().getNrClass() <= 2) ? 1 : model.getModel().getNrClass()];
		if (!model.hasProbabilities()) {
			pred[i] = new Prediction(Linear.predictValues(model.getModel(), problem[i], decVal), i);
			pred[i].setProbabilities(false);
		} else {
			pred[i] = new Prediction(Linear.predictProbability(model.getModel(), problem[i], decVal), i);
			pred[i].setProbabilities(true);
		}
		pred[i].setDecisionValue(decVal);
		pred[i].setClassLabels(model.getModel().getLabels());
		pred[i].setPairWise(false); // LibLINEAR does not do pairwise multiclass prediction, but 1 vs all
		pred[i].setUsedKernel(model.getKernelSetting());
	}
	return pred;
}
 
Developer: Data2Semantics, Project: mustard, Lines: 19, Source: LibLINEAR.java


Example 10: train

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public static void train() throws IOException, InvalidInputDataException{
	String file = "output\\svm/book_svm.svm";
	Problem problem = Problem.readFromFile(new File(file),-1);

	SolverType solver = SolverType.L2R_LR; // -s 0
	double C = 1.0;    // cost of constraints violation
	double eps = 0.01; // stopping criteria

	Parameter parameter = new Parameter(solver, C, eps);
	Model model = Linear.train(problem, parameter);
	File modelFile = new File("output/model");
	model.save(modelFile);
	System.out.println(modelFile.getAbsolutePath());
	// load model or use it directly
	model = Model.load(modelFile);

	Feature[] instance = { new FeatureNode(1, 4), new FeatureNode(2, 2) };
	double prediction = Linear.predict(model, instance);
	System.out.println(prediction);
	int nr_fold = 10;
    double[] target = new double[problem.l];
	Linear.crossValidation(problem, parameter, nr_fold, target);
}
 
Developer: laozhaokun, Project: sentimentclassify, Lines: 24, Source: Main.java


Example 11: predict2

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Deprecated
public static int[] predict2(Model model, Feature[][] data, int[] labels) {

	int N = data.length;
	int[] pre_label = new int[N];

	for ( int i = 0; i < N; i ++ ) {
		pre_label[i] = (int) Linear.predict(model, data[i]);
	}

	if (labels != null) {
		int cnt_correct = 0;
		for ( int i = 0; i < N; i ++ ) {
			if ( pre_label[i] == labels[i] )
				cnt_correct ++;
		}
		double accuracy = (double)cnt_correct / (double)N;
		System.out.println(String.format("Accuracy: %.2f%%\n", accuracy * 100));
	}

	return pre_label;

}
 
Developer: MingjieQian, Project: JML, Lines: 24, Source: MultiClassSVM.java


Example 12: analyze

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public Analysis analyze(Analyzable a) throws AnalyzerFailureException {
    if (a == null) return null;

    if (!(a instanceof IdentifiableTextContent)) {
        throw new AnalyzerFailureException("Analyzable not identifiable. This analyzer requires an IdentifiableTextContent.");
    }

    IdentifiableTextContent tc = (IdentifiableTextContent) a;

    try {
        Feature[] vector = LibLinearUtils.toLibLinear(representer.represent(tc.getText()).toSvmNodes());
        double[] probs = new double[labels.size()];
        Linear.predictProbability(model, vector, probs);
        ClassificationAnalysis analysis = new ClassificationAnalysis(tc.getId());
        for (int i = 0; i < labelIndeces.length; ++i) {
            analysis.addClassification(labels.get(labelIndeces[i]), probs[i] >= threshold ? probs[i] : 0d);
        }
        return analysis;
    } catch (Exception e) {
        throw new AnalyzerFailureException("Classifier failed on record " + tc.getId(), e);
    }
}
 
Developer: groupon, Project: nakala, Lines: 24, Source: LibLinearTextClassifier.java


Example 13: getLabel

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public String getLabel(JCas cas) {
    Vector<Feature[]> instanceFeatures = applyFeatures(cas, features);
    Feature[] instance = combineInstanceFeatures(instanceFeatures);
    probEstimates = new double[model.getNrClass()];
    Double prediction;
    if (model.getSolverType().isLogisticRegressionSolver()) {
        prediction = Linear.predictProbability(model, instance, probEstimates);
        score = probEstimates[prediction.intValue()];
    } else {
        prediction = Linear.predict(model, instance);
    }
    label = labelMappings.get(prediction);
    return label;
}
 
Developer: uhh-lt, Project: LT-ABSA, Lines: 16, Source: LinearClassifier.java


Example 14: getLabel

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public String getLabel(JCas cas) {
    Vector<Feature[]> instanceFeatures = applyFeatures(cas, features);

    Feature[] instance = combineInstanceFeatures(instanceFeatures);
    probEstimates = new double[model.getNrClass()];
    Double prediction = Linear.predictProbability(model, instance, probEstimates);

    label = labelMappings.get(prediction);
    score = probEstimates[prediction.intValue()];
    return label;
}
 
Developer: uhh-lt, Project: GermEval2017-Baseline, Lines: 13, Source: LinearClassifier.java


Example 15: LiblinearModel

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public LiblinearModel(InputStream in) throws IOException {
  
  DataInputStream di = new DataInputStream(in);
  
  int modelByteLength = di.readInt();
  
  // TODO: We should have a fixed memory limit here ...
  
  byte modelBytes[] = new byte[modelByteLength];
  di.readFully(modelBytes); // readFully guarantees the whole serialized model is read
  
  int outcomeLabelLength = di.readInt();
  
  outcomeLabels = new String[outcomeLabelLength];
  for (int i = 0; i < outcomeLabelLength; i++) {
    outcomeLabels[i] = di.readUTF();
  }
  
  predMap = new HashMap<String, Integer>();
  
  int predMapSize = di.readInt();
  for (int i = 0; i < predMapSize; i++) {
    String key = di.readUTF();
    int value = di.readInt();
    predMap.put(key, value);
  }
  
  model = Linear.loadModel(new InputStreamReader(new ByteArrayInputStream(modelBytes), LIBLINEAR_MODEL_ENCODING));
}
 
Developer: apache, Project: opennlp-addons, Lines: 30, Source: LiblinearModel.java


Example 16: predict

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
protected double[] predict(Model model, FeatureNode[] featuresNodes) {

	double[] temp, probs;
	int classes, label;
	double sum = 0;

	classes = model.getNrClass();
	temp = new double[classes];
	Linear.predictValues(model, featuresNodes, temp);

	// squash each decision value with a logistic function, then normalize to pseudo-probabilities
	for (int i = 0; i < classes; i++) {
		temp[i] = 1 / (1 + Math.exp(-temp[i]));
		sum += temp[i];
	}

	probs = new double[classes];

	for (int i = 0; i < classes; i++) {
		label = model.getLabels()[i];
		if (label > 0)
			probs[label - 1] = temp[i] / sum;
	}

	return probs;
}
 
Developer: SI3P, Project: supWSD, Lines: 29, Source: LibLinearClassifier.java


Example 17: crossValidate

import de.bwaldvogel.liblinear.Linear; // import the required package/class
private static Prediction[] crossValidate(Problem prob, Parameter linearParams, int folds) {
	double[] prediction = new double[prob.l];
	Linear.crossValidation(prob, linearParams, folds, prediction);
	Prediction[] pred2 = new Prediction[prob.l];

	for (int i = 0; i < pred2.length; i++) {
		pred2[i] = new Prediction(prediction[i], i);
	}
	return pred2;
}
 
Developer: Data2Semantics, Project: mustard, Lines: 11, Source: LibLINEAR.java


Example 18: performPrediction

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public ExampleSet performPrediction(ExampleSet exampleSet, Attribute predictedLabel) throws OperatorException {
	FastExample2SparseTransform ripper = new FastExample2SparseTransform(exampleSet);
	Attribute label = getLabel();

	Attribute[] confidenceAttributes = null;
	if (label.isNominal() && label.getMapping().size() >= 2) {
		confidenceAttributes = new Attribute[linearModel.label.length];
		for (int j = 0; j < linearModel.label.length; j++) {
			String labelName = label.getMapping().mapIndex(linearModel.label[j]);
			confidenceAttributes[j] = exampleSet.getAttributes()
					.getSpecial(Attributes.CONFIDENCE_NAME + "_" + labelName);
		}
	}

	Iterator<Example> i = exampleSet.iterator();
	while (i.hasNext()) {
		Example e = i.next();

		// set prediction
		FeatureNode[] currentNodes = FastLargeMargin.makeNodes(e, ripper, this.useBias);

		double predictedClass = Linear.predict(linearModel, currentNodes);
		e.setValue(predictedLabel, predictedClass);

		// use simple calculation for binary cases...
		if (label.getMapping().size() == 2) {
			double[] functionValues = new double[linearModel.nr_class];
			Linear.predictValues(linearModel, currentNodes, functionValues);
			double prediction = functionValues[0];
			if (confidenceAttributes != null && confidenceAttributes.length > 0) {
				e.setValue(confidenceAttributes[0], 1.0d / (1.0d + java.lang.Math.exp(-prediction)));
				if (confidenceAttributes.length > 1) {
					e.setValue(confidenceAttributes[1], 1.0d / (1.0d + java.lang.Math.exp(prediction)));
				}
			}
		}

	}
	return exampleSet;
}
 
Developer: transwarpio, Project: rapidminer, Lines: 42, Source: FastMarginModel.java


Example 19: predictScore

import de.bwaldvogel.liblinear.Linear; // import the required package/class
public double predictScore(FeaturePack<T> fp, FeatureNormalizer fn) {
	return Linear.predict(model, featureMapToFeatures(fn.ftrToNormalizedFtrArray(fp)));
}
 
Developer: marcocor, Project: smaph, Lines: 4, Source: LibLinearModel.java


Example 20: classify

import de.bwaldvogel.liblinear.Linear; // import the required package/class
@Override
public Integer classify(SparseInstance instance) {
    Feature[] feat = getFeatureArray(instance);
    return (int) Linear.predict(model, feat);
}
 
Developer: clearwsd, Project: clearwsd, Lines: 6, Source: LibLinearClassifier.java



Note: The de.bwaldvogel.liblinear.Linear class examples in this article were compiled from source-code and documentation hosting platforms such as GitHub/MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors; please consult the corresponding project's license before redistributing or using the code. Do not reproduce without permission.

