This article collects typical usage examples of the Java class org.encog.neural.networks.training.propagation.resilient.ResilientPropagation. If you are wondering what the ResilientPropagation class does, how to use it, or what real-world usage looks like, the curated code examples below should help.
The ResilientPropagation class belongs to the org.encog.neural.networks.training.propagation.resilient package. Seven code examples of the class are shown below, sorted by popularity by default.
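Before the individual examples, here is a minimal sketch of the typical workflow, assuming Encog 3.x: build a feed-forward network, wrap the training data in a BasicMLDataSet, and run ResilientPropagation until an error target is reached. The XOR data, the 0.01 error target and the class name RPropSketch are illustrative and are not taken from any of the projects below.
import org.encog.Encog;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;
import org.encog.util.simple.EncogUtility;

public class RPropSketch {
    public static void main(String[] args) {
        double[][] xorInput = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
        double[][] xorIdeal = { {0}, {1}, {1}, {0} };
        // 2 inputs, one hidden layer of 3 neurons, 1 output, sigmoid activations
        BasicNetwork network = EncogUtility.simpleFeedForward(2, 3, 0, 1, false);
        MLDataSet trainingSet = new BasicMLDataSet(xorInput, xorIdeal);
        // RPROP needs no hand-tuned learning rate or momentum
        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        EncogUtility.trainToError(train, 0.01); // iterate until the error drops below 0.01
        train.finishTraining();
        Encog.getInstance().shutdown();
    }
}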
Example 1: trainAndStore
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
@Test
public void trainAndStore() {
    BasicMLDataSet dataSet = getData();
    // Create network
    BasicNetwork network = getNetwork();
    // Train
    System.out.println("Training network...");
    Train train = new ResilientPropagation(network, dataSet);
    for (int i = 0; i < TRAIN_ITERATIONS; i++) {
        train.iteration();
    }
    System.out.println("Training finished, error: " + train.getError());
    // Save to file
    System.out.println("Saving to file...");
    saveToFile(network);
    System.out.println("Done");
}
Developer: Ignotus, Project: torcsnet, Lines of code: 21, Source file: EncogMLPTrainingTest.java
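Example 1 depends on helpers that are not part of the snippet (getData(), getNetwork(), saveToFile()). As a rough idea only, saveToFile could be written with Encog's persistence API; the helper below is hypothetical and the file name network.eg is purely illustrative.
import java.io.File;
import org.encog.neural.networks.BasicNetwork;
import org.encog.persist.EncogDirectoryPersistence;

// Hypothetical helper: persists the trained network in Encog's EG format.
private void saveToFile(BasicNetwork network) {
    EncogDirectoryPersistence.saveObject(new File("network.eg"), network);
}

// Loading it back later:
// BasicNetwork restored = (BasicNetwork) EncogDirectoryPersistence.loadObject(new File("network.eg"));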
Example 2: main
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
/**
 * The main method.
 * @param args No arguments are used.
 */
public static void main(final String args[]) {
    // create a neural network, without using a factory
    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(null, true, 2));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
    network.getStructure().finalizeStructure();
    network.reset();
    // create training data
    MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
    // train the neural network
    final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
    int epoch = 1;
    do {
        train.iteration();
        System.out.println("Epoch #" + epoch + " Error:" + train.getError());
        epoch++;
    } while (train.getError() > 0.01);
    train.finishTraining();
    // test the neural network
    System.out.println("Neural Network Results:");
    for (MLDataPair pair : trainingSet) {
        final MLData output = network.compute(pair.getInput());
        System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
                + ", actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
    }
    Encog.getInstance().shutdown();
}
Developer: neo4j-contrib, Project: neo4j-ml-procedures, Lines of code: 40, Source file: XORHelloWorld.java
Example 3: main
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
/**
 * The main method.
 * @param args No arguments are used.
 */
public static void main(final String args[]) {
    // create a neural network, without using a factory
    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(null, true, 2));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
    network.getStructure().finalizeStructure();
    network.reset();
    // create training data
    MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
    // train the neural network
    final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
    int epoch = 1;
    do {
        train.iteration();
        System.out.println("Epoch #" + epoch + " Error:" + train.getError());
        epoch++;
    } while (train.getError() > 0.01);
    train.finishTraining();
    // test the neural network
    System.out.println("Neural Network Results:");
    for (MLDataPair pair : trainingSet) {
        final MLData output = network.compute(pair.getInput());
        System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
                + ", actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
    }
    Encog.getInstance().shutdown();
}
Developer: encog, Project: encog-sample-java, Lines of code: 40, Source file: HelloWorld.java
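In Examples 2 and 3, the manual result-printing loop can also be replaced by Encog's convenience helper EncogUtility.evaluate (in org.encog.util.simple), which prints input, ideal and actual values for every pair. A one-line sketch, assuming the network and trainingSet variables from the examples:
import org.encog.util.simple.EncogUtility;

// Prints input, ideal and actual output for every pair in the training set.
EncogUtility.evaluate(network, trainingSet);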
Example 4: getTrain
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
private Train getTrain(NeuralDataSet trainingSet, BasicNetwork network) {
    // final Train train =
    //     new ManhattanPropagation(network, trainingSet, 0.001);
    // Train the neural network; we use resilient propagation
    final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
    train.setThreadCount(0);
    // Reset if improvement is less than 1% over 5 cycles
    train.addStrategy(new RequiredImprovementStrategy(DEFAULT_SELECTION_LIMIT));
    return train;
}
Developer: taochen, Project: ssascaling, Lines of code: 16, Source file: EncogFeedForwardNeuralNetwork.java
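Two details in Example 4 are worth spelling out: setThreadCount(0) lets the propagation trainer choose the number of worker threads automatically, and RequiredImprovementStrategy resets the weights when the error stops improving. A minimal sketch with an explicit cycle count (the value 5 mirrors the comment in the example; DEFAULT_SELECTION_LIMIT is a constant of that project, and the network and trainingSet variables are assumed to exist):
import org.encog.ml.train.strategy.RequiredImprovementStrategy;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
train.setThreadCount(0); // 0 = let Encog pick the thread count from the available cores
// Reset the weights if the error has not improved enough within 5 iterations.
train.addStrategy(new RequiredImprovementStrategy(5));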
Example 5: test
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
public static void test(double[][] inputValues, double[][] outputValues)
{
    NeuralDataSet trainingSet = new BasicNeuralDataSet(inputValues, outputValues);
    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 4));
    network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1000));
    network.addLayer(new BasicLayer(new ActivationLinear(), false, 1));
    network.getStructure().finalizeStructure();
    network.reset();
    final Train train = new ResilientPropagation(network, trainingSet);
    int epoch = 1;
    do
    {
        train.iteration();
        System.out.println("Epoch #" + epoch + " Error:" + train.getError());
        epoch++;
    }
    while (epoch < 10000);
    System.out.println("Neural Network Results:");
    for (MLDataPair pair : trainingSet)
    {
        final MLData output = network.compute(pair.getInput());
        System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1) + ", actual="
                + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
    }
}
Developer: santjuan, Project: dailyBot, Lines of code: 28, Source file: NeuralNetworkAnalysis.java
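Note that Example 5 always runs 10,000 epochs regardless of how small the error already is, and its result loop prints only the first two of the four inputs. A common variant is to combine the epoch cap with an error target so training stops on whichever condition is reached first; a minimal sketch (the 0.001 threshold is illustrative):
do
{
    train.iteration();
    epoch++;
}
while (train.getError() > 0.001 && epoch < 10000); // stop on error target or epoch cap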
Example 6: withResilieant
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
private MLRegression withResilieant() {
    final MLTrain train = new ResilientPropagation(EncogUtility.simpleFeedForward(400, 100, 0, 10, false),
            this.training);
    EncogUtility.trainToError(train, 0.01515);
    return (MLRegression) train.getMethod();
}
Developer: openimaj, Project: openimaj, Lines of code: 7, Source file: HandWritingNeuralNetENCOG.java
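Example 6 builds the whole network in a single call. If the EncogUtility.simpleFeedForward parameters are read as (inputs, first hidden layer, second hidden layer, outputs, tanh flag), the call above creates a 400-100-10 feed-forward network with sigmoid activations (false selects sigmoid rather than tanh), and trainToError then iterates RPROP until the error drops below 0.01515. A rough, hedged sketch of the roughly equivalent explicit construction:
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(null, true, 400));                    // 400 inputs
network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 100)); // 100 hidden neurons
network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 10)); // 10 outputs
network.getStructure().finalizeStructure();
network.reset();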
Example 7: train
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation; // import the required package/class
public void train(final ArrayList<DataPoint> dataHistory) {
    if (isTraining()) {
        throw new IllegalStateException();
    }
    setTrainerThread(new Thread() {
        public void run() {
            // Clean and normalize the data history
            ArrayList<DataPoint> cleanedDataHistory = cleanDataHistory(dataHistory);
            ArrayList<DataPoint> normalizedDataHistory = normalizeDataHistory(cleanedDataHistory);
            // Create a new neural network and data set
            BasicNetwork neuralNetwork = EncogUtility.simpleFeedForward(2, getHiddenLayerNeurons(0),
                    getHiddenLayerNeurons(1), 5, true);
            MLDataSet dataSet = new BasicMLDataSet();
            // Add all points of the data history to the data set
            for (DataPoint dataPoint : normalizedDataHistory) {
                MLData input = new BasicMLData(2);
                input.setData(0, dataPoint.getX());
                input.setData(1, dataPoint.getY());
                // The button is one-hot encoded over the five outputs:
                // if getButton() is 0, the ideal output is 1, 0, 0, 0, 0
                // if getButton() is 2, the ideal output is 0, 0, 1, 0, 0
                // if getButton() is 4, the ideal output is 0, 0, 0, 0, 1
                MLData ideal = new BasicMLData(5);
                for (int i = 0; i <= 4; i++) {
                    ideal.setData(i, (dataPoint.getButton() == i) ? 1 : 0);
                }
                MLDataPair pair = new BasicMLDataPair(input, ideal);
                dataSet.add(pair);
            }
            // Create a training method
            MLTrain trainingMethod = new ResilientPropagation((ContainsFlat) neuralNetwork, dataSet);
            long startTime = System.currentTimeMillis();
            int timeLeft = getMaxTrainingTime();
            int iteration = 0;
            // Train the network using multiple iterations on the training method
            do {
                trainingMethod.iteration();
                timeLeft = (int) ((startTime + getMaxTrainingTime()) - System.currentTimeMillis());
                iteration++;
                sendNeuralNetworkIteration(iteration, trainingMethod.getError(), timeLeft);
            } while (trainingMethod.getError() > getMaxTrainingError() && timeLeft > 0
                    && !trainingMethod.isTrainingDone());
            trainingMethod.finishTraining();
            // Return the neural network to all listeners
            sendNeuralNetworkTrainerResult(neuralNetwork);
        }
    });
    getTrainerThread().start();
}
Developer: bsmulders, Project: StepManiaSolver, Lines of code: 58, Source file: NeuralNetworkTrainer.java
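Example 7 encodes the pressed button as a one-hot vector over the five outputs. To turn the trained network's output back into a button index, the usual approach is an argmax over the five output neurons; a minimal sketch (input and neuralNetwork follow the example, predictedButton is an illustrative name):
// Pick the output neuron with the highest activation as the predicted button.
MLData output = neuralNetwork.compute(input);
int predictedButton = 0;
for (int i = 1; i < output.size(); i++) {
    if (output.getData(i) > output.getData(predictedButton)) {
        predictedButton = i;
    }
}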
Note: the org.encog.neural.networks.training.propagation.resilient.ResilientPropagation examples in this article were collected from GitHub and similar source-code and documentation platforms. The snippets come from open-source projects contributed by their respective authors; copyright remains with the original authors, and distribution and use are subject to each project's license. Do not republish without permission.