Python tensorflow.lgamma Function Code Examples


This article collects typical usage examples of the tensorflow.lgamma function in Python. If you are wondering what lgamma does, how to call it, or what real-world uses look like, the curated code examples below should help.



Twenty code examples of the lgamma function are shown below, sorted by popularity by default. You can upvote the examples you find useful; your feedback helps the system recommend better Python code examples.
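As a quick orientation before the examples: tf.lgamma(x) returns the natural logarithm of the absolute value of the Gamma function, log|Γ(x)|, elementwise. Because Γ(n + 1) = n! for non-negative integers, tf.lgamma(x + 1) is the standard way to compute log-factorials without overflow, which is why it shows up in so many of the log-probability computations below. The minimal sketch that follows assumes TensorFlow 1.x graph mode, where the op is exposed as tf.lgamma (in TensorFlow 2.x it lives at tf.math.lgamma):

import numpy as np
import tensorflow as tf

# log-Gamma; for an integer n, lgamma(n + 1) == log(n!)
x = tf.constant([1.0, 2.0, 5.0, 10.0])
log_factorial = tf.lgamma(x + 1.0)

# Typical use: a Poisson log-pmf, log p(k) = k*log(lam) - lam - log(k!)
lam = 3.0
k = tf.constant([0.0, 1.0, 2.0, 3.0])
poisson_log_pmf = k * np.log(lam) - lam - tf.lgamma(k + 1.0)

with tf.Session() as sess:
    print(sess.run(log_factorial))    # [0.  0.693...  4.787...  15.104...]
    print(sess.run(poisson_log_pmf))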

Example 1: beta_log_prob

def beta_log_prob(self, val):
  conc0 = self.parameters['concentration0']
  conc1 = self.parameters['concentration1']
  result = (conc1 - 1.0) * tf.log(val)
  result += (conc0 - 1.0) * tf.log(1.0 - val)
  result += -tf.lgamma(conc1) - tf.lgamma(conc0) + tf.lgamma(conc1 + conc0)
  return result
Developer: ekostem, Project: edward, Lines: 7, Source: conjugate_log_probs.py
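For reference, the Beta log-density this snippet assembles (with α = concentration1 attached to log(val) and β = concentration0 attached to log(1 - val)) is

\log p(x \mid \alpha, \beta) = (\alpha - 1)\log x + (\beta - 1)\log(1 - x) + \log\Gamma(\alpha + \beta) - \log\Gamma(\alpha) - \log\Gamma(\beta)

where the three lgamma terms together form the negative log of the Beta function B(α, β).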


Example 2: logpmf

    def logpmf(self, x, n, p):
        """Log of the probability mass function.

        Parameters
        ----------
        x : tf.Tensor
An n-D tensor for n > 1, where the inner (right-most)
            dimension represents the multivariate dimension. Each
            element is the number of outcomes in a bucket and not a
            one-hot.
        n : tf.Tensor
            A tensor of one less dimension than ``x``,
            representing the number of outcomes, equal to sum x[i]
            along the inner (right-most) dimension.
        p : tf.Tensor
            A tensor of one less dimension than ``x``, representing
            probabilities which sum to 1.

        Returns
        -------
        tf.Tensor
            A tensor of one dimension less than the input.
        """
        x = tf.cast(x, dtype=tf.float32)
        n = tf.cast(n, dtype=tf.float32)
        p = tf.cast(p, dtype=tf.float32)
        multivariate_idx = len(get_dims(x)) - 1
        if multivariate_idx == 0:
            return tf.lgamma(n + 1.0) - \
                   tf.reduce_sum(tf.lgamma(x + 1.0)) + \
                   tf.reduce_sum(x * tf.log(p))
        else:
            return tf.lgamma(n + 1.0) - \
                   tf.reduce_sum(tf.lgamma(x + 1.0), multivariate_idx) + \
                   tf.reduce_sum(x * tf.log(p), multivariate_idx)
Developer: TalkingData, Project: edward, Lines: 35, Source: distributions.py
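Both branches compute the same multinomial log-pmf; the only difference is whether the sums run over the single dimension (scalar case) or the inner-most dimension (batched case):

\log p(x \mid n, p) = \log\Gamma(n + 1) - \sum_i \log\Gamma(x_i + 1) + \sum_i x_i \log p_i

with the lgamma terms giving the log of the multinomial coefficient n! / (x_1! \cdots x_K!).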


Example 3: logpdf

    def logpdf(self, x, df, loc=0, scale=1):
        """Log of the probability density function.

        Parameters
        ----------
        x : tf.Tensor
An n-D tensor.
        df : tf.Tensor
            A tensor of same shape as ``x``, and with all elements
            constrained to :math:`df > 0`.
        loc : tf.Tensor
            A tensor of same shape as ``x``.
        scale : tf.Tensor
            A tensor of same shape as ``x``, and with all elements
            constrained to :math:`scale > 0`.

        Returns
        -------
        tf.Tensor
            A tensor of same shape as input.
        """
        x = tf.cast(x, dtype=tf.float32)
        df = tf.cast(df, dtype=tf.float32)
        loc = tf.cast(loc, dtype=tf.float32)
        scale = tf.cast(scale, dtype=tf.float32)
        z = (x - loc) / scale
        return tf.lgamma(0.5 * (df + 1.0)) - tf.lgamma(0.5 * df) - \
               0.5 * (tf.log(np.pi) + tf.log(df)) - tf.log(scale) - \
               0.5 * (df + 1.0) * tf.log(1.0 + (1.0/df) * tf.square(z))
Developer: TalkingData, Project: edward, Lines: 29, Source: distributions.py
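Written out, this is the location-scale Student-t log-density with z = (x - loc) / scale:

\log p(x \mid \nu, \mu, \sigma) = \log\Gamma\!\left(\tfrac{\nu + 1}{2}\right) - \log\Gamma\!\left(\tfrac{\nu}{2}\right) - \tfrac{1}{2}\left(\log\pi + \log\nu\right) - \log\sigma - \tfrac{\nu + 1}{2}\log\!\left(1 + \tfrac{z^2}{\nu}\right)

where ν = df, μ = loc and σ = scale.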


Example 4: beta

def beta(alpha, beta, y):
    # need to clip y, since log of 0 is nan...
    y = tf.clip_by_value(y, 1e-6, 1-1e-6)
    return (alpha - 1.) * tf.log(y) + (beta - 1.) * tf.log(1. - y) \
        + tf.lgamma(alpha + beta)\
        - tf.lgamma(alpha)\
        - tf.lgamma(beta)
Developer: blutooth, Project: dgp, Lines: 7, Source: densities.py


Example 5: beta

def beta(x, alpha, beta):
    # need to clip x, since log of 0 is nan...
    x = tf.clip_by_value(x, 1e-6, 1-1e-6)
    return (alpha - 1.) * tf.log(x) + (beta - 1.) * tf.log(1. - x) \
        + tf.lgamma(alpha + beta)\
        - tf.lgamma(alpha)\
        - tf.lgamma(beta)
Developer: vincentadam87, Project: GPflow, Lines: 7, Source: logdensities.py


Example 6: multinomial_log_prob

def multinomial_log_prob(self, val):
  n = self.parameters['total_count']
  probs = self.parameters['probs']
  f_n = tf.cast(n, tf.float32)
  f_val = tf.cast(val, tf.float32)
  result = tf.reduce_sum(tf.log(probs) * f_val, -1)
  result += tf.lgamma(f_n + 1) - tf.reduce_sum(tf.lgamma(f_val + 1), -1)
  return result
Developer: ekostem, Project: edward, Lines: 8, Source: conjugate_log_probs.py


Example 7: student_t

def student_t(x, mean, scale, deg_free):
    const = tf.lgamma(tf.cast((deg_free + 1.) * 0.5, tf.float64))\
        - tf.lgamma(tf.cast(deg_free * 0.5, tf.float64))\
        - 0.5*(tf.log(tf.square(scale)) + tf.cast(tf.log(deg_free), tf.float64)
               + np.log(np.pi))
    const = tf.cast(const, tf.float64)
    return const - 0.5*(deg_free + 1.) * \
        tf.log(1. + (1. / deg_free) * (tf.square((x - mean) / scale)))
Developer: blutooth, Project: dgp, Lines: 8, Source: densities.py


Example 8: binomial_log_prob

def binomial_log_prob(self, val):
  n = self.parameters['total_count']
  probs = self.parameters['probs']
  f_n = tf.cast(n, tf.float32)
  f_val = tf.cast(val, tf.float32)
  result = f_val * tf.log(probs) + (f_n - f_val) * tf.log(1.0 - probs)
  result += tf.lgamma(f_n + 1) - tf.lgamma(f_val + 1) - \
      tf.lgamma(f_n - f_val + 1)
  return result
Developer: ekostem, Project: edward, Lines: 9, Source: conjugate_log_probs.py
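Here the three lgamma terms are the log of the binomial coefficient, so the full expression is the binomial log-pmf:

\log p(x \mid n, p) = \log\Gamma(n + 1) - \log\Gamma(x + 1) - \log\Gamma(n - x + 1) + x\log p + (n - x)\log(1 - p)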


Example 9: log_nb_positive

def log_nb_positive(x, mu, theta, eps=1e-8):
    """
    Log likelihood (scalar) of a minibatch according to a negative binomial (NB) model.
    
    Variables:
    mu: mean of the negative binomial (has to be positive support) (shape: minibatch x genes)
    theta: inverse dispersion parameter (has to be positive support) (shape: minibatch x genes)
    eps: numerical stability constant
    """    
    res = tf.lgamma(x + theta) - tf.lgamma(theta) - tf.lgamma(x + 1) + x * tf.log(mu + eps) \
                                - x * tf.log(theta + mu + eps) + theta * tf.log(theta + eps) \
                                - theta * tf.log(theta + mu + eps)
    return tf.reduce_sum(res, axis=-1)
Developer: ssehztirom, Project: scVI-reproducibility, Lines: 13, Source: scVI.py
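Grouping the terms, this is the negative binomial log-pmf in the mean/inverse-dispersion (mu, theta) parameterization, with eps added only for numerical stability:

\log \mathrm{NB}(x \mid \mu, \theta) = \log\Gamma(x + \theta) - \log\Gamma(\theta) - \log\Gamma(x + 1) + \theta\log\frac{\theta}{\theta + \mu} + x\log\frac{\mu}{\theta + \mu}

summed over the gene dimension to give one value per row of the minibatch.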


Example 10: chi2_log_prob

def chi2_log_prob(self, val):
  df = self.parameters['df']
  eta = 0.5 * df - 1
  # chi^2(df) log-density: (df/2 - 1)*log(x) - x/2 - lgamma(df/2) - (df/2)*log(2)
  result = eta * tf.log(val)
  result -= 0.5 * val
  result -= tf.lgamma(eta + 1) + (eta + 1) * tf.log(2.0)
  return result
Developer: ekostem, Project: edward, Lines: 7, Source: conjugate_log_probs.py
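In standard form, the chi-squared log-density with k = df degrees of freedom (so eta = k/2 - 1) is

\log p(x \mid k) = \left(\tfrac{k}{2} - 1\right)\log x - \tfrac{x}{2} - \log\Gamma\!\left(\tfrac{k}{2}\right) - \tfrac{k}{2}\log 2

which matches the accumulation above, since lgamma(eta + 1) = log Γ(k/2) and (eta + 1) log 2 = (k/2) log 2.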


Example 11: gamma_log_prob

def gamma_log_prob(self, val):
  conc = self.parameters['concentration']
  rate = self.parameters['rate']
  result = (conc - 1.0) * tf.log(val)
  result -= rate * val
  result += -tf.lgamma(conc) + conc * tf.log(rate)
  return result
Developer: ekostem, Project: edward, Lines: 7, Source: conjugate_log_probs.py
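This is the Gamma log-density in the shape/rate (concentration/rate) parameterization:

\log p(x \mid \alpha, \beta) = \alpha\log\beta - \log\Gamma(\alpha) + (\alpha - 1)\log x - \beta x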


Example 12: inverse_gamma_log_prob

def inverse_gamma_log_prob(self, val):
  conc = self.parameters['concentration']
  rate = self.parameters['rate']
  result = -(conc + 1) * tf.log(val)
  result -= rate * tf.reciprocal(val)
  result += -tf.lgamma(conc) + conc * tf.log(rate)
  return result
Developer: ekostem, Project: edward, Lines: 7, Source: conjugate_log_probs.py
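Relative to the Gamma case, only the sign of the x-exponent and the reciprocal in the rate term change, giving the Inverse-Gamma log-density:

\log p(x \mid \alpha, \beta) = \alpha\log\beta - \log\Gamma(\alpha) - (\alpha + 1)\log x - \beta / x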


Example 13: testPoissonLogPmfContinuousRelaxation

  def testPoissonLogPmfContinuousRelaxation(self):
    batch_size = 12
    lam = tf.constant([3.0] * batch_size)
    x = np.array([-3., -0.5, 0., 2., 2.2, 3., 3.1, 4., 5., 5.5, 6., 7.]).astype(
        np.float32)
    poisson = self._make_poisson(rate=lam,
                                 interpolate_nondiscrete=True)

    expected_continuous_log_pmf = (x * poisson.log_rate - tf.lgamma(1. + x)
                                   - poisson.rate)
    neg_inf = tf.fill(
        tf.shape(expected_continuous_log_pmf),
        value=np.array(-np.inf,
                       dtype=expected_continuous_log_pmf.dtype.as_numpy_dtype))
    expected_continuous_log_pmf = tf.where(x >= 0.,
                                           expected_continuous_log_pmf,
                                           neg_inf)
    expected_continuous_pmf = tf.exp(expected_continuous_log_pmf)

    log_pmf = poisson.log_prob(x)
    self.assertEqual(log_pmf.get_shape(), (batch_size,))
    self.assertAllClose(self.evaluate(log_pmf),
                        self.evaluate(expected_continuous_log_pmf))

    pmf = poisson.prob(x)
    self.assertEqual(pmf.get_shape(), (batch_size,))
    self.assertAllClose(self.evaluate(pmf),
                        self.evaluate(expected_continuous_pmf))
Developer: asudomoeva, Project: probability, Lines: 28, Source: poisson_test.py


Example 14: Poisson

def Poisson(lambda_, name=None):
    k = tf.placeholder(config.int_dtype, name=name)

    Distribution.logp = k*tf.log(lambda_) - lambda_ - tf.lgamma(k+1)  # Poisson log-pmf: k*log(lambda) - lambda - log(k!)

    # TODO Distribution.integral = ...

    return k
Developer: ibab, Project: tensorprob, Lines: 8, Source: poisson.py
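Both this snippet and the tensorflow_probability Poisson examples in this article rely on the same identity: extending k! to real arguments via the Gamma function, the Poisson log-pmf is

\log p(k \mid \lambda) = k\log\lambda - \lambda - \log\Gamma(k + 1)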


Example 15: actual_hypersphere_volume

 def actual_hypersphere_volume(dims, radius):
   # https://en.wikipedia.org/wiki/Volume_of_an_n-ball
   # Using tf.lgamma because we'd have to otherwise use SciPy which is not
   # a required dependency of core.
   radius = np.asarray(radius)
   dims = tf.cast(dims, dtype=radius.dtype)
   return tf.exp((dims / 2.) * np.log(np.pi) - tf.lgamma(1. + dims / 2.) +
                 dims * tf.log(radius))
Developer: asudomoeva, Project: probability, Lines: 8, Source: test_util.py
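The expression inside tf.exp is the log of the closed-form n-ball volume from the linked Wikipedia page:

V_n(r) = \frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2} + 1\right)}\, r^{n}, \qquad \log V_n(r) = \tfrac{n}{2}\log\pi - \log\Gamma\!\left(\tfrac{n}{2} + 1\right) + n\log r

using tf.lgamma so the test utility does not need SciPy's gamma functions.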


Example 16: _log_unnormalized_prob

 def _log_unnormalized_prob(self, x):
   # The log-probability at negative points is always -inf.
   # Catch such x's and set the output value accordingly.
   safe_x = tf.maximum(x if self.interpolate_nondiscrete else tf.floor(x), 0.)
   y = safe_x * self.log_rate - tf.lgamma(1. + safe_x)
   is_supported = tf.broadcast_to(tf.equal(x, safe_x), tf.shape(y))
   neg_inf = tf.fill(tf.shape(y),
                     value=np.array(-np.inf, dtype=y.dtype.as_numpy_dtype))
   return tf.where(is_supported, y, neg_inf)
Developer: asudomoeva, Project: probability, Lines: 9, Source: poisson.py


Example 17: variational_expectations

 def variational_expectations(self, Fmu, Fvar, Y):
     if self.invlink is tf.exp:
         return (
             -self.shape * Fmu
             - tf.lgamma(self.shape)
             + (self.shape - 1.0) * tf.log(Y)
             - Y * tf.exp(-Fmu + Fvar / 2.0)
         )
     else:
         return Likelihood.variational_expectations(self, Fmu, Fvar, Y)
Developer: GPflow, Project: GPflow, Lines: 10, Source: likelihoods.py
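A sketch of where the closed form in the tf.exp branch comes from, assuming (as the shape/invlink names suggest) a Gamma likelihood with shape k and scale e^{f}, i.e. rate e^{-f}:

\log p(y \mid f) = -k f - \log\Gamma(k) + (k - 1)\log y - y\,e^{-f}

Taking the expectation under q(f) = \mathcal{N}(F_\mu, F_\sigma^2), only the last term is nonlinear in f, and the log-normal moment \mathbb{E}[e^{-f}] = e^{-F_\mu + F_\sigma^2 / 2} yields exactly the returned expression.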


Example 18: _log_normalization

  def _log_normalization(self, name='log_normalization'):
    """Returns the log normalization of an LKJ distribution.

    Args:
      name: Python `str` name prefixed to Ops created by this function.

    Returns:
      log_z: A Tensor of the same shape and dtype as `concentration`, containing
        the corresponding log normalizers.
    """
    # The formula is from D. Lewandowski et al [1], p. 1999, from the
    # proof that eqs 16 and 17 are equivalent.
    with tf.name_scope('log_normalization_lkj', name, [self.concentration]):
      logpi = np.log(np.pi)
      ans = tf.zeros_like(self.concentration)
      for k in range(1, self.dimension):
        ans += logpi * (k / 2.)
        ans += tf.lgamma(self.concentration + (self.dimension - 1 - k) / 2.)
        ans -= tf.lgamma(self.concentration + (self.dimension - 1) / 2.)
      return ans
Developer: asudomoeva, Project: probability, Lines: 20, Source: lkj.py
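Unrolling the loop, the normalizer the method returns is

\log Z(\eta, d) = \sum_{k=1}^{d-1}\left[\tfrac{k}{2}\log\pi + \log\Gamma\!\left(\eta + \tfrac{d - 1 - k}{2}\right) - \log\Gamma\!\left(\eta + \tfrac{d - 1}{2}\right)\right]

with η = concentration and d = dimension, broadcast over the batch shape of concentration.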


Example 19: __init__

    def __init__(self, train_set, test_set, dictparam):
        super(Elosplit, self).__init__(train_set, test_set, dictparam)

        k = tf.constant(map(lambda x: float(x), range(10)))
        last_vect = tf.expand_dims(ToolBox.last_vector(10),0)
        win_vector = ToolBox.win_vector(10)

        # Define parameters
        self.elo_atk = tf.Variable(tf.zeros([self.nb_teams, self.nb_times]))
        self.elo_def = tf.Variable(tf.zeros([self.nb_teams, self.nb_times]))

        # Define the model
        for key, proxy in [('train', self.train_data), ('test', self.test_data)]:
            elo_atk_h = ToolBox.get_elomatch(proxy['team_h'], proxy['time'], self.elo_atk)
            elo_def_h = ToolBox.get_elomatch(proxy['team_h'], proxy['time'], self.elo_def)
            elo_atk_a = ToolBox.get_elomatch(proxy['team_a'], proxy['time'], self.elo_atk)
            elo_def_a = ToolBox.get_elomatch(proxy['team_a'], proxy['time'], self.elo_def)
            lambda_h = tf.expand_dims(tf.exp(self.param['goals_bias'] + elo_atk_h - elo_def_a), 1)
            lambda_a = tf.expand_dims(tf.exp(self.param['goals_bias'] + elo_atk_a - elo_def_h), 1)
            score_h = tf.exp(-lambda_h + tf.log(lambda_h) * k - tf.lgamma(k + 1))
            score_a = tf.exp(-lambda_a + tf.log(lambda_a) * k - tf.lgamma(k + 1))
            score_h += tf.matmul(tf.expand_dims((1. - tf.reduce_sum(score_h, reduction_indices=[1])), 1), last_vect)
            score_a += tf.matmul(tf.expand_dims((1. - tf.reduce_sum(score_a, reduction_indices=[1])), 1), last_vect)

            self.score[key] = tf.batch_matmul(tf.expand_dims(score_h, 2), tf.expand_dims(score_a, 1))
            self.res[key] = tf.reduce_sum(self.score[key] * win_vector, reduction_indices=[1,2])

        # Define the costs
        self.init_cost()
        for key in ['train', 'test']:
            for proxy in [self.elo_atk, self.elo_def]:
                cost = ToolBox.get_raw_elo_cost(self.param['metaparam0'], self.param['metaparam1'], proxy, self.nb_times)
                self.regulizer[key].append(cost)

                cost = ToolBox.get_timediff_elo_cost(self.param['metaparam2'], proxy, self.nb_times)
                self.regulizer[key].append(cost)

        # Finish the initialization
        super(Elosplit, self).finish_init()
Developer: Merlin69, Project: JBMSoccer, Lines: 39, Source: Elosplit.py
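The lgamma calls in this model serve the same purpose as in the Poisson examples above: for each side, the score vector holds a truncated goal distribution

P(\text{goals} = k \mid \lambda) = \exp\!\left(k\log\lambda - \lambda - \log\Gamma(k + 1)\right), \qquad k = 0, \dots, 9

and the leftover probability mass 1 - \sum_k P(k) is redistributed through last_vect (presumably onto the final bucket) so that each row still sums to one.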


Example 20: GumbelSoftmaxLogDensity

def GumbelSoftmaxLogDensity(y, p, tau):
    # EPS = tf.constant(1e-10)
    k = tf.shape(y)[-1]
    k = tf.cast(k, tf.float32)
    # y = y + EPS
    # y = tf.divide(y, tf.reduce_sum(y, -1, keep_dims=True))
    y = normalize_to_unit_sum(y)
    sum_p_over_y = tf.reduce_sum(tf.divide(p, tf.pow(y, tau)), -1)
    logp = tf.lgamma(k)
    logp = logp + (k - 1) * tf.log(tau)
    logp = logp - k * tf.log(sum_p_over_y)
    logp = logp + sum_p_over_y
    return logp
Developer: QianQQ, Project: Voice-Conversion, Lines: 13, Source: layers.py



Note: The tensorflow.lgamma examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors; copyright remains with the original authors, and redistribution or use should follow the corresponding project licenses. Please do not repost without permission.

