
Java GradientNormalization Class Code Examples


This article collects typical usage examples of the Java class org.deeplearning4j.nn.conf.GradientNormalization. If you have been wondering what GradientNormalization is for and how to use it in practice, the curated examples below should help.



The GradientNormalization class belongs to the org.deeplearning4j.nn.conf package. A total of 20 code examples of the class are shown below, sorted by popularity by default.

Example 1: softMaxRegression

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
private static MultiLayerNetwork softMaxRegression(int seed,
		int iterations, int numRows, int numColumns, int outputNum) {
	MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
			.seed(seed)
			.gradientNormalization(
					GradientNormalization.ClipElementWiseAbsoluteValue)
			.gradientNormalizationThreshold(1.0)
			.iterations(iterations)
			.momentum(0.5)
			.momentumAfter(Collections.singletonMap(3, 0.9))
			.optimizationAlgo(OptimizationAlgorithm.CONJUGATE_GRADIENT)
			.list(1)
			.layer(0,
					new OutputLayer.Builder(
							LossFunction.NEGATIVELOGLIKELIHOOD)
							.activation("softmax")
							.nIn(numColumns * numRows).nOut(outputNum)
							.build()).pretrain(true).backprop(false)
			.build();

	MultiLayerNetwork model = new MultiLayerNetwork(conf);

	return model;
}
 
Developer: PacktPublishing, Project: Machine-Learning-End-to-Endguide-for-Java-developers, Lines: 25, Source: NeuralNetworks.java
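Example 1 above selects GradientNormalization.ClipElementWiseAbsoluteValue with a threshold of 1.0, which clips each gradient element independently to the range [-threshold, threshold]. As a conceptual aside (plain Java, not DL4J API; the class and method names here are illustrative only), the element-wise clipping can be sketched as:

```java
// Conceptual sketch of element-wise absolute-value clipping: every gradient
// element is clamped into [-threshold, threshold]; elements already inside
// the range pass through unchanged. Not DL4J code -- illustrative only.
public class ClipSketch {
    static double[] clipElementWise(double[] gradient, double threshold) {
        double[] out = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++) {
            // clamp gradient[i] into [-threshold, threshold]
            out[i] = Math.max(-threshold, Math.min(threshold, gradient[i]));
        }
        return out;
    }

    public static void main(String[] args) {
        double[] g = {0.5, -2.0, 3.7, -0.1};
        double[] clipped = clipElementWise(g, 1.0);
        for (double v : clipped) {
            System.out.println(v);
        }
    }
}
```

With a threshold of 1.0, an element of -2.0 becomes -1.0 and 3.7 becomes 1.0, while 0.5 and -0.1 pass through unchanged; this bounds the magnitude of each update without changing the sign of any element.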


Example 2: getConfiguration

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Override
protected MultiLayerConfiguration getConfiguration() {
    return new NeuralNetConfiguration.Builder().seed(parameters.getSeed())
            .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
            .gradientNormalizationThreshold(1.0).iterations(parameters.getIterations()).momentum(0.5)
            .momentumAfter(Collections.singletonMap(3, 0.9))
            .optimizationAlgo(OptimizationAlgorithm.CONJUGATE_GRADIENT).list(4)
            .layer(0, new AutoEncoder.Builder().nIn(parameters.getInputSize()).nOut(500).weightInit(WeightInit.XAVIER)
                    .lossFunction(LossFunction.RMSE_XENT).corruptionLevel(0.3).build())
            .layer(1, new AutoEncoder.Builder().nIn(500).nOut(250).weightInit(WeightInit.XAVIER)
                    .lossFunction(LossFunction.RMSE_XENT).corruptionLevel(0.3).build())
            .layer(2, new AutoEncoder.Builder().nIn(250).nOut(200).weightInit(WeightInit.XAVIER)
                    .lossFunction(LossFunction.RMSE_XENT).corruptionLevel(0.3).build())
            .layer(3, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).activation("softmax").nIn(200)
                    .nOut(parameters.getOutputSize()).build())
            .pretrain(true).backprop(false).build();
}
 
Developer: amrabed, Project: DL4J, Lines: 23, Source: StackedAutoEncoderModel.java


Example 3: getConfiguration

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Override
protected MultiLayerConfiguration getConfiguration() {
    final ConvulationalNetParameters parameters = (ConvulationalNetParameters) this.parameters;
    final MultiLayerConfiguration.Builder builder = new NeuralNetConfiguration.Builder().seed(parameters.getSeed())
            .iterations(parameters.getIterations())
            .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer)
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).list(3)
            .layer(0, new ConvolutionLayer.Builder(10, 10).stride(2, 2).nIn(parameters.getChannels()).nOut(6)
                    .weightInit(WeightInit.XAVIER).activation("relu").build())
            .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX, new int[] { 2, 2 }).build())
            .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                    .nOut(parameters.getOutputSize()).weightInit(WeightInit.XAVIER).activation("softmax").build())
            .backprop(true).pretrain(false);

    new ConvolutionLayerSetup(builder, parameters.getRows(), parameters.getColumns(), parameters.getChannels());

    return builder.build();
}
 
Developer: amrabed, Project: DL4J, Lines: 21, Source: ConvolutionalNetModel.java
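Example 3 uses GradientNormalization.RenormalizeL2PerLayer instead of clipping: the gradients of each layer are rescaled by dividing them by the L2 norm of all gradients in that layer. A conceptual, stdlib-only sketch of that rescaling (the class and method names are illustrative, not DL4J API):

```java
// Conceptual sketch of per-layer L2 renormalization: divide every gradient
// element of a layer by the L2 norm of the layer's gradient vector, so the
// rescaled gradient has unit L2 norm. Not DL4J code -- illustrative only.
public class RenormSketch {
    static double[] renormalizeL2(double[] gradient) {
        double sumSq = 0.0;
        for (double v : gradient) {
            sumSq += v * v;
        }
        double norm = Math.sqrt(sumSq);
        if (norm == 0.0) {
            return gradient.clone(); // nothing to rescale for a zero gradient
        }
        double[] out = new double[gradient.length];
        for (int i = 0; i < gradient.length; i++) {
            out[i] = gradient[i] / norm;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] g = {3.0, 4.0}; // L2 norm is 5.0
        double[] r = renormalizeL2(g);
        System.out.println(r[0] + " " + r[1]);
    }
}
```

Unlike element-wise clipping, this preserves the direction of the layer's gradient vector and only normalizes its magnitude.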


Example 4: method

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@OptionMetadata(
  displayName = "gradient normalization method",
  description = "The gradient normalization method (default = None).",
  commandLineParamName = "gradientNormalization",
  commandLineParamSynopsis = "-gradientNormalization <specification>",
  displayOrder = 22
)
public GradientNormalization getGradientNormalization() {
  return this.gradientNormalization;
}
 
Developer: Waikato, Project: wekaDeeplearning4j, Lines: 11, Source: NeuralNetConfiguration.java


Example 5: testImdbClassification

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void testImdbClassification() throws Exception {

  // Init data
  data = DatasetLoader.loadImdb();

  // Define layers
  LSTM lstm1 = new LSTM();
  lstm1.setNOut(3);
  lstm1.setActivationFunction(new ActivationTanH());

  RnnOutputLayer rnnOut = new RnnOutputLayer();

  // Network config
  NeuralNetConfiguration nnc = new NeuralNetConfiguration();
  nnc.setL2(1e-5);
  nnc.setUseRegularization(true);
  nnc.setGradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue);
  nnc.setGradientNormalizationThreshold(1.0);
  nnc.setLearningRate(0.02);

  // Config classifier
  clf.setLayers(lstm1, rnnOut);
  clf.setNeuralNetConfiguration(nnc);
  clf.settBPTTbackwardLength(20);
  clf.settBPTTforwardLength(20);
  clf.setQueueSize(0);

  // Randomize data
  data.randomize(new Random(42));

  // Reduce datasize
  RemovePercentage rp = new RemovePercentage();
  rp.setPercentage(95);
  rp.setInputFormat(data);
  data = Filter.useFilter(data, rp);

  TestUtil.holdout(clf, data, 50, tii);
}
 
Developer: Waikato, Project: wekaDeeplearning4j, Lines: 40, Source: RnnSequenceClassifierTest.java


Example 6: testAngerRegression

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void testAngerRegression() throws Exception {
  // Define layers
  LSTM lstm1 = new LSTM();
  lstm1.setNOut(32);
  lstm1.setActivationFunction(new ActivationTanH());

  RnnOutputLayer rnnOut = new RnnOutputLayer();
  rnnOut.setLossFn(new LossMSE());
  rnnOut.setActivationFunction(new ActivationIdentity());

  // Network config
  NeuralNetConfiguration nnc = new NeuralNetConfiguration();
  nnc.setL2(1e-5);
  nnc.setUseRegularization(true);
  nnc.setGradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue);
  nnc.setGradientNormalizationThreshold(1.0);
  nnc.setLearningRate(0.02);

  tii.setTruncateLength(80);
  // Config classifier
  clf.setLayers(lstm1, rnnOut);
  clf.setNeuralNetConfiguration(nnc);
  clf.settBPTTbackwardLength(20);
  clf.settBPTTforwardLength(20);
  //    clf.setQueueSize(4);
  clf.setNumEpochs(3);
  final EpochListener l = new EpochListener();
  l.setN(1);
  clf.setIterationListener(l);
  data = DatasetLoader.loadAnger();
  // Randomize data
  data.randomize(new Random(42));
  TestUtil.holdout(clf, data, 33);
}
 
Developer: Waikato, Project: wekaDeeplearning4j, Lines: 36, Source: RnnSequenceClassifierTest.java


Example 7: getModel

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
public static MultiLayerNetwork getModel(int numInputs) {
    MultiLayerConfiguration conf =  new NeuralNetConfiguration.Builder()
            .seed(seed)
            .iterations(iterations)
            .gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue)
            .gradientNormalizationThreshold(1.0)
            .regularization(true)
            .dropOut(Config.DROPOUT)
            .updater(Config.UPDATER)
            .adamMeanDecay(0.5)
            .adamVarDecay(0.5)
            .weightInit(WeightInit.XAVIER)
            .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
            .list()
            .layer(0, new RBM.Builder(RBM.HiddenUnit.BINARY, RBM.VisibleUnit.GAUSSIAN)
                    .nIn(numInputs).nOut(2750).dropOut(0.75)
                    .activation(Activation.RELU).build())
            .layer(1, new RBM.Builder(RBM.HiddenUnit.BINARY, RBM.VisibleUnit.BINARY)
                    .nIn(2750).nOut(2000)
                    .activation(Activation.RELU).build())
            .layer(2, new RBM.Builder(RBM.HiddenUnit.BINARY, RBM.VisibleUnit.BINARY)
                    .nIn(2000).nOut(1000)
                    .activation(Activation.RELU).build())
            .layer(3, new RBM.Builder(RBM.HiddenUnit.BINARY, RBM.VisibleUnit.BINARY)
                    .nIn(1000).nOut(200)
                    .activation(Activation.RELU).build())
            .layer(4, new OutputLayer.Builder(Config.LOSS_FUNCTION)
                    .nIn(200).nOut(Config.NUM_OUTPUTS).updater(Config.UPDATER)
                    .adamMeanDecay(0.6).adamVarDecay(0.7)
                    .build())
            .pretrain(true).backprop(true)
            .build();
    return new MultiLayerNetwork(conf);
}
 
Developer: madeleine789, Project: dl4j-apr, Lines: 35, Source: DBN.java


Example 8: resetLayerDefaultConfig

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
/**
 * Reset the learning related configs of the layer to default. When instantiated with a global neural network configuration
 * the parameters specified in the neural network configuration will be used.
 * For internal use with the transfer learning API. Users should not have to call this method directly.
 */
public void resetLayerDefaultConfig() {
    //clear the learning related params for all layers in the origConf and set to defaults
    this.setIUpdater(null);
    this.setWeightInit(null);
    this.setBiasInit(Double.NaN);
    this.setDist(null);
    this.setL1(Double.NaN);
    this.setL2(Double.NaN);
    this.setGradientNormalization(GradientNormalization.None);
    this.setGradientNormalizationThreshold(1.0);
    this.iUpdater = null;
    this.biasUpdater = null;
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 19, Source: BaseLayer.java


Example 9: regressionTestLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/080/080_ModelSerializer_Regression_LSTM_1.zip")
                    .getTempFileFromArchive();

    MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(f, true);

    MultiLayerConfiguration conf = net.getLayerWiseConfigurations();
    assertEquals(3, conf.getConfs().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) conf.getConf(0).getLayer();
    assertTrue(l0.getActivationFn() instanceof ActivationTanH);
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 = (GravesBidirectionalLSTM) conf.getConf(1).getLayer();
    assertTrue(l1.getActivationFn() instanceof ActivationSoftSign);
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) conf.getConf(2).getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertTrue(l2.getActivationFn() instanceof ActivationSoftmax);
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 35, Source: RegressionTest080.java


Example 10: regressionTestCGLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestCGLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/080/080_ModelSerializer_Regression_CG_LSTM_1.zip")
                    .getTempFileFromArchive();

    ComputationGraph net = ModelSerializer.restoreComputationGraph(f, true);

    ComputationGraphConfiguration conf = net.getConfiguration();
    assertEquals(3, conf.getVertices().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) ((LayerVertex) conf.getVertices().get("0")).getLayerConf().getLayer();
    assertTrue(l0.getActivationFn() instanceof ActivationTanH);
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 =
                    (GravesBidirectionalLSTM) ((LayerVertex) conf.getVertices().get("1")).getLayerConf().getLayer();
    assertTrue(l1.getActivationFn() instanceof ActivationSoftSign);
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) ((LayerVertex) conf.getVertices().get("2")).getLayerConf().getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertTrue(l2.getActivationFn() instanceof ActivationSoftmax);
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 36, Source: RegressionTest080.java


Example 11: regressionTestLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/071/071_ModelSerializer_Regression_LSTM_1.zip")
                    .getTempFileFromArchive();

    MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(f, true);

    MultiLayerConfiguration conf = net.getLayerWiseConfigurations();
    assertEquals(3, conf.getConfs().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) conf.getConf(0).getLayer();
    assertEquals("tanh", l0.getActivationFn().toString());
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 = (GravesBidirectionalLSTM) conf.getConf(1).getLayer();
    assertEquals("softsign", l1.getActivationFn().toString());
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) conf.getConf(2).getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertEquals("softmax", l2.getActivationFn().toString());
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 35, Source: RegressionTest071.java


Example 12: regressionTestCGLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestCGLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/071/071_ModelSerializer_Regression_CG_LSTM_1.zip")
                    .getTempFileFromArchive();

    ComputationGraph net = ModelSerializer.restoreComputationGraph(f, true);

    ComputationGraphConfiguration conf = net.getConfiguration();
    assertEquals(3, conf.getVertices().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) ((LayerVertex) conf.getVertices().get("0")).getLayerConf().getLayer();
    assertEquals("tanh", l0.getActivationFn().toString());
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 =
                    (GravesBidirectionalLSTM) ((LayerVertex) conf.getVertices().get("1")).getLayerConf().getLayer();
    assertEquals("softsign", l1.getActivationFn().toString());
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) ((LayerVertex) conf.getVertices().get("2")).getLayerConf().getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertEquals("softmax", l2.getActivationFn().toString());
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 36, Source: RegressionTest071.java


Example 13: regressionTestLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/060/060_ModelSerializer_Regression_LSTM_1.zip")
                    .getTempFileFromArchive();

    MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(f, true);

    MultiLayerConfiguration conf = net.getLayerWiseConfigurations();
    assertEquals(3, conf.getConfs().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) conf.getConf(0).getLayer();
    assertEquals("tanh", l0.getActivationFn().toString());
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 = (GravesBidirectionalLSTM) conf.getConf(1).getLayer();
    assertEquals("softsign", l1.getActivationFn().toString());
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) conf.getConf(2).getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertEquals("softmax", l2.getActivationFn().toString());
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 35, Source: RegressionTest060.java


Example 14: regressionTestCGLSTM1

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void regressionTestCGLSTM1() throws Exception {

    File f = new ClassPathResource("regression_testing/060/060_ModelSerializer_Regression_CG_LSTM_1.zip")
                    .getTempFileFromArchive();

    ComputationGraph net = ModelSerializer.restoreComputationGraph(f, true);

    ComputationGraphConfiguration conf = net.getConfiguration();
    assertEquals(3, conf.getVertices().size());

    assertTrue(conf.isBackprop());
    assertFalse(conf.isPretrain());

    GravesLSTM l0 = (GravesLSTM) ((LayerVertex) conf.getVertices().get("0")).getLayerConf().getLayer();
    assertEquals("tanh", l0.getActivationFn().toString());
    assertEquals(3, l0.getNIn());
    assertEquals(4, l0.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l0.getGradientNormalization());
    assertEquals(1.5, l0.getGradientNormalizationThreshold(), 1e-5);

    GravesBidirectionalLSTM l1 =
                    (GravesBidirectionalLSTM) ((LayerVertex) conf.getVertices().get("1")).getLayerConf().getLayer();
    assertEquals("softsign", l1.getActivationFn().toString());
    assertEquals(4, l1.getNIn());
    assertEquals(4, l1.getNOut());
    assertEquals(GradientNormalization.ClipElementWiseAbsoluteValue, l1.getGradientNormalization());
    assertEquals(1.5, l1.getGradientNormalizationThreshold(), 1e-5);

    RnnOutputLayer l2 = (RnnOutputLayer) ((LayerVertex) conf.getVertices().get("2")).getLayerConf().getLayer();
    assertEquals(4, l2.getNIn());
    assertEquals(5, l2.getNOut());
    assertEquals("softmax", l2.getActivationFn().toString());
    assertTrue(l2.getLossFn() instanceof LossMCXENT);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 36, Source: RegressionTest060.java


Example 15: doBefore

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Before
public void doBefore() {
    NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                    .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer).seed(123)
                    .layer(new LocalResponseNormalization.Builder().k(2).n(5).alpha(1e-4).beta(0.75).build())
                    .build();

    layer = new LocalResponseNormalization().instantiate(conf, null, 0, null, false);
    activationsActual = layer.activate(x);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 11, Source: LocalResponseTest.java


Example 16: testRegularization

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void testRegularization() {
    // Confirm a structure with regularization true will not throw an error

    NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                    .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer).l1(0.2)
                    .l2(0.1).seed(123)
                    .layer(new LocalResponseNormalization.Builder().k(2).n(5).alpha(1e-4).beta(0.75).build())
                    .build();
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 11, Source: LocalResponseTest.java


Example 17: getUpsampling1DLayer

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
private Layer getUpsampling1DLayer() {
    NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                    .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer).seed(123)
                    .layer(new Upsampling1D.Builder(size).build()).build();
    return conf.getLayer().instantiate(conf, null, 0,
            null, true);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 8, Source: Upsampling1DTest.java


Example 18: getSubsamplingLayer

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
private Layer getSubsamplingLayer(SubsamplingLayer.PoolingType pooling) {
    NeuralNetConfiguration conf = new NeuralNetConfiguration.Builder()
                    .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer).seed(123)
                    .layer(new SubsamplingLayer.Builder(pooling, new int[] {2, 2}).build()).build();

    return conf.getLayer().instantiate(conf, null, 0, null, true);
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 8, Source: SubsamplingLayerTest.java


Example 19: testLfwModel

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
@Test
public void testLfwModel() throws Exception {
    final int numRows = 28;
    final int numColumns = 28;
    int numChannels = 3;
    int outputNum = LFWLoader.NUM_LABELS;
    int numSamples = LFWLoader.NUM_IMAGES;
    int batchSize = 2;
    int seed = 123;
    int listenerFreq = 1;

    LFWDataSetIterator lfw = new LFWDataSetIterator(batchSize, numSamples,
                    new int[] {numRows, numColumns, numChannels}, outputNum, false, true, 1.0, new Random(seed));

    MultiLayerConfiguration.Builder builder = new NeuralNetConfiguration.Builder().seed(seed)
                    .gradientNormalization(GradientNormalization.RenormalizeL2PerLayer)
                    .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).list()
                    .layer(0, new ConvolutionLayer.Builder(5, 5).nIn(numChannels).nOut(6)
                                    .weightInit(WeightInit.XAVIER).activation(Activation.RELU).build())
                    .layer(1, new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX, new int[] {2, 2})
                                    .stride(1, 1).build())
                    .layer(2, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                                    .nOut(outputNum).weightInit(WeightInit.XAVIER).activation(Activation.SOFTMAX)
                                    .build())
                    .setInputType(InputType.convolutionalFlat(numRows, numColumns, numChannels)).backprop(true)
                    .pretrain(false);

    MultiLayerNetwork model = new MultiLayerNetwork(builder.build());
    model.init();

    model.setListeners(new ScoreIterationListener(listenerFreq));

    model.fit(lfw.next());

    DataSet dataTest = lfw.next();
    INDArray output = model.output(dataTest.getFeatureMatrix());
    Evaluation eval = new Evaluation(outputNum);
    eval.eval(dataTest.getLabels(), output);
    System.out.println(eval.stats());
}
 
Developer: deeplearning4j, Project: deeplearning4j, Lines: 41, Source: DataSetIteratorTest.java


Example 20: deepBeliefNetwork

import org.deeplearning4j.nn.conf.GradientNormalization; // import the required package/class
private static MultiLayerNetwork deepBeliefNetwork(int seed,
		int iterations, int numRows, int numColumns, int outputNum) {
	MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
			.seed(seed)
			.gradientNormalization(
					GradientNormalization.ClipElementWiseAbsoluteValue)
			.gradientNormalizationThreshold(1.0)
			.iterations(iterations)
			.momentum(0.5)
			.momentumAfter(Collections.singletonMap(3, 0.9))
			.optimizationAlgo(OptimizationAlgorithm.CONJUGATE_GRADIENT)
			.list(4)
			.layer(0,
					new RBM.Builder().nIn(numRows * numColumns).nOut(500)
							.weightInit(WeightInit.XAVIER)
							.lossFunction(LossFunction.RMSE_XENT)
							.visibleUnit(RBM.VisibleUnit.BINARY)
							.hiddenUnit(RBM.HiddenUnit.BINARY).build())
			.layer(1,
					new RBM.Builder().nIn(500).nOut(250)
							.weightInit(WeightInit.XAVIER)
							.lossFunction(LossFunction.RMSE_XENT)
							.visibleUnit(RBM.VisibleUnit.BINARY)
							.hiddenUnit(RBM.HiddenUnit.BINARY).build())
			.layer(2,
					new RBM.Builder().nIn(250).nOut(200)
							.weightInit(WeightInit.XAVIER)
							.lossFunction(LossFunction.RMSE_XENT)
							.visibleUnit(RBM.VisibleUnit.BINARY)
							.hiddenUnit(RBM.HiddenUnit.BINARY).build())
			.layer(3,
					new OutputLayer.Builder(
							LossFunction.NEGATIVELOGLIKELIHOOD)
							.activation("softmax").nIn(200).nOut(outputNum)
							.build()).pretrain(true).backprop(false)
			.build();

	MultiLayerNetwork model = new MultiLayerNetwork(conf);

	return model;
}
 
Developer: PacktPublishing, Project: Machine-Learning-End-to-Endguide-for-Java-developers, Lines: 42, Source: NeuralNetworks.java



Note: The org.deeplearning4j.nn.conf.GradientNormalization examples above were collected from open-source projects hosted on GitHub and similar platforms. Copyright of each snippet belongs to its original authors; consult the corresponding project's license before redistributing or reusing the code. Do not republish without permission.

