
Python gen_nn_ops._batch_norm_with_global_normalization_grad function code examples


This article collects typical usage examples of the Python function tensorflow.python.ops.gen_nn_ops._batch_norm_with_global_normalization_grad. If you are wondering what this function does and how to call it, the curated examples below should help.



Two code examples of _batch_norm_with_global_normalization_grad are shown below, ordered by popularity by default.

Example 1: _BatchNormWithGlobalNormalizationGrad

# From nn_grad.py; the module imports ops (tensorflow.python.framework)
# and gen_nn_ops (tensorflow.python.ops), and registers this function as
# the gradient for the BatchNormWithGlobalNormalization op.
@ops.RegisterGradient("BatchNormWithGlobalNormalization")
def _BatchNormWithGlobalNormalizationGrad(op, grad):
  """Return the gradients for the 5 inputs of BatchNormWithGlobalNormalization.

  We do not backprop anything for the mean and var intentionally as they are
  not being trained with backprop in the operation.

  Args:
    op: The BatchNormOp for which we need to generate gradients.
    grad: Tensor.  The gradients passed to the BatchNormOp.

  Returns:
    dx: Backprop for input, which is (grad * (g * rsqrt(v + epsilon)))
    dm: Backprop for mean, which is
        sum_over_rest(grad * g) * (-rsqrt(v + epsilon))
    dv: Backprop for variance, which is
        sum_over_rest(grad * g * (x - m)) * (-1/2) * (v + epsilon) ^ (-3/2)
    db: Backprop for beta, which is grad reduced in all except the
        last dimension.
    dg: Backprop for gamma, which is (grad * ((x - m) * rsqrt(v + epsilon)))
  """
  dx, dm, dv, db, dg = gen_nn_ops._batch_norm_with_global_normalization_grad(
      op.inputs[0],  # x
      op.inputs[1],  # m (mean)
      op.inputs[2],  # v (variance)
      op.inputs[4],  # gamma; op.inputs[3] is beta, which the grad op ignores
      grad,
      op.get_attr("variance_epsilon"),
      op.get_attr("scale_after_normalization"))
  return dx, dm, dv, db, dg
Author: adsar; project: tensorflow; lines: 30; source: nn_grad.py
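The closed-form gradients listed in the docstring can be sketched in plain NumPy. This is a minimal illustration, not TensorFlow's kernel; the function names are my own, and the mean gradient is written with the mathematically correct factor -rsqrt(v + eps):

```python
import numpy as np

def batch_norm_forward(x, m, v, beta, gamma, eps):
    # y = (x - m) * rsqrt(v + eps) * gamma + beta, broadcast over the last axis.
    return (x - m) / np.sqrt(v + eps) * gamma + beta

def batch_norm_grads(x, m, v, gamma, grad, eps):
    # Analytic gradients of loss = sum(grad * y); "sum_over_rest" reduces
    # over every axis except the last (the channel axis).
    rsqrt = 1.0 / np.sqrt(v + eps)
    rest = tuple(range(x.ndim - 1))
    dx = grad * gamma * rsqrt
    dm = np.sum(grad * gamma, axis=rest) * -rsqrt
    dv = np.sum(grad * gamma * (x - m), axis=rest) * -0.5 * (v + eps) ** -1.5
    db = np.sum(grad, axis=rest)
    dg = np.sum(grad * (x - m) * rsqrt, axis=rest)
    return dx, dm, dv, db, dg
```

A finite-difference check of the loss sum(grad * y) against each parameter confirms the formulas.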


Example 2: testBatchNormGradImpl

# Test method from a tf.test.TestCase subclass in nn_test.py; it assumes
# numpy (np), constant_op, gradients, and gen_nn_ops are imported at module
# level, and that self._opsBatchNorm is a composed-ops reference forward pass.
def testBatchNormGradImpl(self):
    x_shape = [7, 5, 4, 6]
    param_shape = [6]
    np.random.seed(1)  # Make it reproducible.
    x_val = np.random.random_sample(x_shape).astype(np.float32)
    m_val = np.random.random_sample(param_shape).astype(np.float32)
    v_val = np.random.random_sample(param_shape).astype(np.float32)
    beta_val = np.random.random_sample(param_shape).astype(np.float32)
    gamma_val = np.random.random_sample(param_shape).astype(np.float32)
    backprop_val = np.random.random_sample(x_shape).astype(np.float32)
    for use_gpu in [False, True]:
        with self.test_session(use_gpu=use_gpu) as sess:
            x = constant_op.constant(x_val, name="x")
            m = constant_op.constant(m_val, name="m")
            v = constant_op.constant(v_val, name="v")
            beta = constant_op.constant(beta_val, name="beta")
            gamma = constant_op.constant(gamma_val, name="gamma")
            backprop = constant_op.constant(backprop_val, name="backprop")
            epsilon = 0.001
            for scale_after_normalization in [True, False]:
                # Fused gradient kernel under test.
                dx, dm, dv, db, dg = (
                    gen_nn_ops._batch_norm_with_global_normalization_grad(
                        x, m, v, gamma, backprop, epsilon,
                        scale_after_normalization))
                # Reference: autodiff through the composed-ops forward pass.
                on = self._opsBatchNorm(
                    x, m, v, beta, gamma, epsilon, scale_after_normalization)
                odx, odm, odv, odb, odg = gradients.gradients(
                    [on], [x, m, v, beta, gamma], [backprop])
                if scale_after_normalization:
                    all_grads = sess.run(
                        [dx, dm, dv, db, dg, odx, odm, odv, odb, odg])
                    to_check = ["dx", "dm", "dv", "db", "dg"]
                else:
                    # gamma is unused in this branch, so dg is not compared.
                    all_grads = sess.run([dx, dm, dv, db, odx, odm, odv, odb])
                    to_check = ["dx", "dm", "dv", "db"]
                for i, n in enumerate(to_check):
                    print(n)
                    self.assertAllClose(
                        all_grads[i + len(to_check)], all_grads[i],
                        atol=0.000001)
Author: adam-erickson; project: tensorflow; lines: 34; source: nn_test.py
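Example 2 checks the fused gradient kernel against autodiff through a composed-ops forward pass (self._opsBatchNorm, defined elsewhere in nn_test.py). A rough NumPy analogue of that composed forward, assuming the standard global-normalization formula and a hypothetical function name, looks like:

```python
import numpy as np

def ops_batch_norm(x, m, v, beta, gamma, epsilon, scale_after_normalization):
    # Normalize, optionally scale by gamma, then shift by beta. When
    # scale_after_normalization is False the op ignores gamma entirely,
    # which is why the test above skips the dg comparison in that branch.
    y = (x - m) / np.sqrt(v + epsilon)
    if scale_after_normalization:
        y = y * gamma
    return y + beta
```

With m = 0, v = 1, beta = 0, and epsilon = 0, the scaled variant is exactly gamma times the unscaled one, which makes the flag's effect easy to see.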



Note: the tensorflow.python.ops.gen_nn_ops._batch_norm_with_global_normalization_grad examples in this article were collected by VimSky from source code and documentation platforms such as GitHub and MSDocs, and the snippets were selected from contributors' open-source projects. Copyright of the source code remains with the original authors; consult each project's license before distributing or using it. Do not reproduce without permission.

