Python math_ops.real Function Code Examples


This article collects typical code examples of the Python function tensorflow.python.ops.math_ops.real. If you are wondering how real is used in practice, what exactly it does, or what real-world calls look like, the curated examples below should help.



The sections below present 20 code examples of the real function, sorted by popularity by default. You can upvote any example you like or find useful; your feedback helps the system recommend better Python code examples.
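
For orientation, here is a minimal sketch of what the op computes. It is not one of the collected examples; it assumes a TensorFlow 2.x installation and uses tf.math.real, the public name exported for math_ops.real.

import tensorflow as tf

# real() extracts the real part of a complex tensor, element-wise.
x = tf.constant([2.25 + 4.75j, 3.25 + 5.75j])
print(tf.math.real(x))   # [2.25, 3.25]
print(tf.math.imag(x))   # [4.75, 5.75]

# For a tensor that is already real, real() returns it unchanged.
y = tf.constant([1.0, 2.0])
print(tf.math.real(y))   # [1.0, 2.0]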

Example 1: svd

def svd(tensor, full_matrices=False, compute_uv=True, name=None):
  """Computes the singular value decompositions of one or more matrices.

  Computes the SVD of each inner matrix in `tensor` such that
  `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :,
  :])`

  ```prettyprint
  # a is a tensor.
  # s is a tensor of singular values.
  # u is a tensor of left singular vectors.
  # v is a tensor of right singular vectors.
  s, u, v = svd(a)
  s = svd(a, compute_uv=False)
  ```

  Args:
    tensor: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and
      `N`.
    full_matrices: If true, compute full-sized `u` and `v`. If false
      (the default), compute only the leading `P` singular vectors.
      Ignored if `compute_uv` is `False`.
    compute_uv: If `True` then left and right singular vectors will be
      computed and returned in `u` and `v`, respectively. Otherwise, only the
      singular values will be computed, which can be significantly faster.
    name: string, optional name of the operation.

  Returns:
    s: Singular values. Shape is `[..., P]`. The values are sorted in reverse
      order of magnitude, so s[..., 0] is the largest value, s[..., 1] is the
      second largest, etc.
    u: Left singular vectors. If `full_matrices` is `False` (default) then
      shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
      `[..., M, M]`. Not returned if `compute_uv` is `False`.
    v: Right singular vectors. If `full_matrices` is `False` (default) then
      shape is `[..., N, P]`. If `full_matrices` is `True` then shape is
      `[..., N, N]`. Not returned if `compute_uv` is `False`.

  @compatibility(numpy)
  Mostly equivalent to numpy.linalg.svd, except that the order of output
  arguments here is `s`, `u`, `v` when `compute_uv` is `True`, as opposed to
  `u`, `s`, `v` for numpy.linalg.svd.
  @end_compatibility
  """
  # pylint: disable=protected-access
  s, u, v = gen_linalg_ops._svd(
      tensor, compute_uv=compute_uv, full_matrices=full_matrices)
  # pylint: enable=protected-access
  if compute_uv:
    return math_ops.real(s), u, v
  else:
    return math_ops.real(s)
Developer: cameronphchen | Project: tensorflow | Lines: 52 | Source: linalg_ops.py
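
A short usage sketch (not taken from the source above; it assumes TF 2.x) of tf.linalg.svd, the public wrapper for the svd function shown here, illustrating the s, u, v return order and the compute_uv flag:

import tensorflow as tf

a = tf.random.normal([4, 3])                  # M=4, N=3, so P=3
s, u, v = tf.linalg.svd(a)                    # s: [3], u: [4, 3], v: [3, 3]
s_only = tf.linalg.svd(a, compute_uv=False)   # singular values only, faster

# Reconstruct a from its factors: a ~= u @ diag(s) @ transpose(v)
a_rebuilt = tf.matmul(u, tf.matmul(tf.linalg.diag(s), v, adjoint_b=True))
print(tf.reduce_max(tf.abs(a - a_rebuilt)))   # close to 0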


Example 2: _compareRealImag

 def _compareRealImag(self, cplx, use_gpu):
   np_real, np_imag = np.real(cplx), np.imag(cplx)
   np_zeros = np_real * 0
   with self.test_session(use_gpu=use_gpu,
                          force_gpu=use_gpu and test_util.is_gpu_available()):
     inx = ops.convert_to_tensor(cplx)
     tf_real = math_ops.real(inx)
     tf_imag = math_ops.imag(inx)
     tf_real_real = math_ops.real(tf_real)
     tf_imag_real = math_ops.imag(tf_real)
     self.assertAllEqual(np_real, self.evaluate(tf_real))
     self.assertAllEqual(np_imag, self.evaluate(tf_imag))
     self.assertAllEqual(np_real, self.evaluate(tf_real_real))
     self.assertAllEqual(np_zeros, self.evaluate(tf_imag_real))
Developer: bunbutter | Project: tensorflow | Lines: 14 | Source: cwise_ops_test.py


Example 3: _compareRealImag

  def _compareRealImag(self, cplx, use_gpu):
    np_real, np_imag = np.real(cplx), np.imag(cplx)
    np_zeros = np_real * 0

    with test_util.device(use_gpu=use_gpu):
      inx = ops.convert_to_tensor(cplx)
      tf_real = math_ops.real(inx)
      tf_imag = math_ops.imag(inx)
      tf_real_real = math_ops.real(tf_real)
      tf_imag_real = math_ops.imag(tf_real)
      self.assertAllEqual(np_real, self.evaluate(tf_real))
      self.assertAllEqual(np_imag, self.evaluate(tf_imag))
      self.assertAllEqual(np_real, self.evaluate(tf_real_real))
      self.assertAllEqual(np_zeros, self.evaluate(tf_imag_real))
Developer: Wajih-O | Project: tensorflow | Lines: 14 | Source: cwise_ops_test.py


Example 4: _trace

  def _trace(self):
    # The diagonal of the [[nested] block] circulant operator is the mean of
    # the spectrum.
    # Proof:  For the [0,...,0] element, this follows from the IDFT formula.
    # Then the result follows since all diagonal elements are the same.

    # Therefore, the trace is the sum of the spectrum.

    # Get shape of diag along with the axis over which to reduce the spectrum.
    # We will reduce the spectrum over all block indices.
    if self.spectrum.get_shape().is_fully_defined():
      spec_rank = self.spectrum.get_shape().ndims
      axis = np.arange(spec_rank - self.block_depth, spec_rank, dtype=np.int32)
    else:
      spec_rank = array_ops.rank(self.spectrum)
      axis = math_ops.range(spec_rank - self.block_depth, spec_rank)

    # Real diag part "re_d".
    # Suppose spectrum.shape = [B1,...,Bb, N1, N2]
    # self.shape = [B1,...,Bb, N, N], with N1 * N2 = N.
    # re_d_value.shape = [B1,...,Bb]
    re_d_value = math_ops.reduce_sum(math_ops.real(self.spectrum), axis=axis)

    if not self.dtype.is_complex:
      return math_ops.cast(re_d_value, self.dtype)

    # Imaginary part, "im_d".
    if self.is_self_adjoint:
      im_d_value = 0.
    else:
      im_d_value = math_ops.reduce_sum(math_ops.imag(self.spectrum), axis=axis)

    return math_ops.cast(math_ops.complex(re_d_value, im_d_value), self.dtype)
Developer: JonathanRaiman | Project: tensorflow | Lines: 33 | Source: linear_operator_circulant.py
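
A small NumPy check of the fact used above (illustrative only, not part of the operator code): every diagonal entry of a circulant matrix equals the mean of its spectrum, so the trace equals the sum of the spectrum.

import numpy as np

n = 5
h = np.random.randn(n)                                      # real convolution kernel
spectrum = np.fft.fft(h)                                    # eigenvalues of the circulant matrix
circ = np.stack([np.roll(h, k) for k in range(n)], axis=1)  # circulant matrix with first column h
print(np.allclose(np.trace(circ), np.sum(spectrum)))        # True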


Example 5: _operator_and_matrix

  def _operator_and_matrix(self, build_info, dtype, use_placeholder):
    shape = build_info.shape
    # For this test class, we are creating Hermitian spectrums.
    # We also want the spectrum to have eigenvalues bounded away from zero.
    #
    # pre_spectrum is bounded away from zero.
    pre_spectrum = linear_operator_test_util.random_uniform(
        shape=self._shape_to_spectrum_shape(shape), minval=1., maxval=2.)
    pre_spectrum_c = _to_complex(pre_spectrum)

    # Real{IFFT[pre_spectrum]}
    #  = IFFT[EvenPartOf[pre_spectrum]]
    # is the IFFT of something that is also bounded away from zero.
    # Therefore, FFT[pre_h] would be a well-conditioned spectrum.
    pre_h = math_ops.ifft2d(pre_spectrum_c)

    # A spectrum is Hermitian iff it is the DFT of a real convolution kernel.
    # So we will make spectrum = FFT[h], for real valued h.
    h = math_ops.real(pre_h)
    h_c = _to_complex(h)

    spectrum = math_ops.fft2d(h_c)

    lin_op_spectrum = spectrum

    if use_placeholder:
      lin_op_spectrum = array_ops.placeholder_with_default(spectrum, shape=None)

    operator = linalg.LinearOperatorCirculant2D(
        lin_op_spectrum, input_output_dtype=dtype)

    mat = self._spectrum_to_circulant_2d(spectrum, shape, dtype=dtype)

    return operator, mat
Developer: ThunderQi | Project: tensorflow | Lines: 34 | Source: linear_operator_circulant_test.py


Example 6: logdet

def logdet(matrix, name=None):
  """Computes log of the determinant of a hermitian positive definite matrix.

  ```python
  # Compute the determinant of a matrix while reducing the chance of over- or
  # underflow:
  A = ... # shape 10 x 10
  det = tf.exp(tf.logdet(A))  # scalar
  ```

  Args:
    matrix:  A `Tensor`. Must be `float16`, `float32`, `float64`, `complex64`,
      or `complex128` with shape `[..., M, M]`.
    name:  A name to give this `Op`.  Defaults to `logdet`.

  Returns:
    The natural log of the determinant of `matrix`.

  @compatibility(numpy)
  Equivalent to numpy.linalg.slogdet, although no sign is returned since only
  hermitian positive definite matrices are supported.
  @end_compatibility
  """
  # This uses the property that the log det(A) = 2*sum(log(real(diag(C))))
  # where C is the cholesky decomposition of A.
  with ops.name_scope(name, 'logdet', [matrix]):
    chol = gen_linalg_ops.cholesky(matrix)
    return 2.0 * math_ops.reduce_sum(
        math_ops.log(math_ops.real(array_ops.matrix_diag_part(chol))),
        reduction_indices=[-1])
Developer: AndrewTwinz | Project: tensorflow | Lines: 30 | Source: linalg_impl.py
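
A usage sketch of tf.linalg.logdet, the public wrapper for the function above (assumes TF 2.x), checked against the naive log(det(A)) on a well-conditioned SPD matrix:

import tensorflow as tf

x = tf.random.normal([10, 10])
spd = tf.matmul(x, x, transpose_b=True) + 10.0 * tf.eye(10)  # SPD by construction
print(tf.linalg.logdet(spd))            # stable: 2 * sum(log(real(diag(cholesky(spd)))))
print(tf.math.log(tf.linalg.det(spd)))  # should agree up to float error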


Example 7: test_defining_spd_operator_by_taking_real_part

  def test_defining_spd_operator_by_taking_real_part(self):
    with self.cached_session() as sess:
      # S is real and positive.
      s = linear_operator_test_util.random_uniform(
          shape=(10, 2, 3, 4), dtype=dtypes.float32, minval=1., maxval=2.)

      # Let S = S1 + S2, the Hermitian and anti-hermitian parts.
      # S1 = 0.5 * (S + S^H), S2 = 0.5 * (S - S^H),
      # where ^H is the Hermitian transpose of the function:
      #    f(n0, n1, n2)^H := ComplexConjugate[f(N0-n0, N1-n1, N2-n2)].
      # We want to isolate S1, since
      #   S1 is Hermitian by construction
      #   S1 is real since S is
      #   S1 is positive since it is the sum of two positive kernels

      # IDFT[S] = IDFT[S1] + IDFT[S2]
      #         =      H1  +      H2
      # where H1 is real since it is Hermitian,
      # and H2 is imaginary since it is anti-Hermitian.
      ifft_s = fft_ops.ifft3d(math_ops.cast(s, dtypes.complex64))

      # Throw away H2, keep H1.
      real_ifft_s = math_ops.real(ifft_s)

      # This is the perfect spectrum!
      # spectrum = DFT[H1]
      #          = S1,
      fft_real_ifft_s = fft_ops.fft3d(
          math_ops.cast(real_ifft_s, dtypes.complex64))

      # S1 is Hermitian ==> operator is real.
      # S1 is real ==> operator is self-adjoint.
      # S1 is positive ==> operator is positive-definite.
      operator = linalg.LinearOperatorCirculant3D(fft_real_ifft_s)

      # Allow for complex output so we can check operator has zero imag part.
      self.assertEqual(operator.dtype, dtypes.complex64)
      matrix, matrix_t = sess.run([
          operator.to_dense(),
          array_ops.matrix_transpose(operator.to_dense())
      ])
      operator.assert_positive_definite().run()  # Should not fail.
      np.testing.assert_allclose(0, np.imag(matrix), atol=1e-6)
      self.assertAllClose(matrix, matrix_t)

      # Just to test the theory, get S2 as well.
      # This should create an imaginary operator.
      # S2 is anti-Hermitian ==> operator is imaginary.
      # S2 is real ==> operator is self-adjoint.
      imag_ifft_s = math_ops.imag(ifft_s)
      fft_imag_ifft_s = fft_ops.fft3d(
          1j * math_ops.cast(imag_ifft_s, dtypes.complex64))
      operator_imag = linalg.LinearOperatorCirculant3D(fft_imag_ifft_s)

      matrix, matrix_h = sess.run([
          operator_imag.to_dense(),
          array_ops.matrix_transpose(math_ops.conj(operator_imag.to_dense()))
      ])
      self.assertAllClose(matrix, matrix_h)
      np.testing.assert_allclose(0, np.real(matrix), atol=1e-7)
Developer: JonathanRaiman | Project: tensorflow | Lines: 60 | Source: linear_operator_circulant_test.py


Example 8: _compareMulGradient

  def _compareMulGradient(self, data):
    # data is a float matrix of shape [n, 4].  data[:, 0], data[:, 1],
    # data[:, 2], data[:, 3] are real parts of x, imaginary parts of
    # x, real parts of y and imaginary parts of y.
    with self.cached_session():
      inp = ops.convert_to_tensor(data)
      xr, xi, yr, yi = array_ops.split(value=inp, num_or_size_splits=4, axis=1)

      def vec(x):  # Reshape to a vector
        return array_ops.reshape(x, [-1])

      xr, xi, yr, yi = vec(xr), vec(xi), vec(yr), vec(yi)

      def cplx(r, i):  # Combine to a complex vector
        return math_ops.complex(r, i)

      x, y = cplx(xr, xi), cplx(yr, yi)
      # z is x times y in complex plane.
      z = x * y
      # Defines the loss function as the sum of all coefficients of z.
      loss = math_ops.reduce_sum(math_ops.real(z) + math_ops.imag(z))
      epsilon = 0.005
      jacob_t, jacob_n = gradient_checker.compute_gradient(
          inp, list(data.shape), loss, [1], x_init_value=data, delta=epsilon)
    self.assertAllClose(jacob_t, jacob_n, rtol=epsilon, atol=epsilon)
Developer: Wajih-O | Project: tensorflow | Lines: 25 | Source: cwise_ops_test.py


Example 9: _ComplexGrad

def _ComplexGrad(op, grad):
  """Returns the real and imaginary components of 'grad', respectively."""
  x = op.inputs[0]
  y = op.inputs[1]
  sx = array_ops.shape(x)
  sy = array_ops.shape(y)
  rx, ry = gen_array_ops._broadcast_gradient_args(sx, sy)
  return (array_ops.reshape(math_ops.reduce_sum(math_ops.real(grad), rx), sx),
          array_ops.reshape(math_ops.reduce_sum(math_ops.imag(grad), ry), sy))
Developer: neuroradiology | Project: tensorflow | Lines: 9 | Source: math_grad.py
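
An illustrative eager-mode sketch (TF 2.x, tf.GradientTape; not from the source above) of the rule _ComplexGrad encodes: the gradient arriving at complex(r, i) is split into its real part, routed to r, and its imaginary part, routed to i.

import tensorflow as tf

r = tf.Variable([1.0, 2.0])
i = tf.Variable([3.0, 4.0])
with tf.GradientTape() as tape:
  z = tf.complex(r, i)
  # The upstream gradient of z for this loss is 1 + 2j per element.
  loss = tf.reduce_sum(tf.math.real(z) + 2.0 * tf.math.imag(z))
dr, di = tape.gradient(loss, [r, i])
print(dr)  # [1., 1.]  == Re(1 + 2j)
print(di)  # [2., 2.]  == Im(1 + 2j)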


Example 10: dct

def dct(input, type=2, n=None, axis=-1, norm=None, name=None):  # pylint: disable=redefined-builtin
  """Computes the 1D [Discrete Cosine Transform (DCT)][dct] of `input`.

  Currently only Type II is supported. Implemented using a length `2N` padded
  @{tf.spectral.rfft}, as described here: https://dsp.stackexchange.com/a/10606

  @compatibility(scipy)
  Equivalent to scipy.fftpack.dct for the Type-II DCT.
  https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.fftpack.dct.html
  @end_compatibility

  Args:
    input: A `[..., samples]` `float32` `Tensor` containing the signals to
      take the DCT of.
    type: The DCT type to perform. Must be 2.
    n: For future expansion. The length of the transform. Must be `None`.
    axis: For future expansion. The axis to compute the DCT along. Must be `-1`.
    norm: The normalization to apply. `None` for no normalization or `'ortho'`
      for orthonormal normalization.
    name: An optional name for the operation.

  Returns:
    A `[..., samples]` `float32` `Tensor` containing the DCT of `input`.

  Raises:
    ValueError: If `type` is not `2`, `n` is not `None`, `axis` is not `-1`, or
      `norm` is not `None` or `'ortho'`.

  [dct]: https://en.wikipedia.org/wiki/Discrete_cosine_transform
  """
  _validate_dct_arguments(type, n, axis, norm)
  with _ops.name_scope(name, "dct", [input]):
    # We use the RFFT to compute the DCT and TensorFlow only supports float32
    # for FFTs at the moment.
    input = _ops.convert_to_tensor(input, dtype=_dtypes.float32)

    axis_dim = input.shape[-1].value or _array_ops.shape(input)[-1]
    axis_dim_float = _math_ops.to_float(axis_dim)
    scale = 2.0 * _math_ops.exp(_math_ops.complex(
        0.0, -_math.pi * _math_ops.range(axis_dim_float) /
        (2.0 * axis_dim_float)))

    # TODO(rjryan): Benchmark performance and memory usage of the various
    # approaches to computing a DCT via the RFFT.
    dct2 = _math_ops.real(
        rfft(input, fft_length=[2 * axis_dim])[..., :axis_dim] * scale)

    if norm == "ortho":
      n1 = 0.5 * _math_ops.rsqrt(axis_dim_float)
      n2 = n1 * _math_ops.sqrt(2.0)
      # Use tf.pad to make a vector of [n1, n2, n2, n2, ...].
      weights = _array_ops.pad(
          _array_ops.expand_dims(n1, 0), [[0, axis_dim - 1]],
          constant_values=n2)
      dct2 *= weights

    return dct2
Developer: AndrewTwinz | Project: tensorflow | Lines: 57 | Source: spectral_ops.py
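
A usage sketch comparing the public DCT wrapper (tf.signal.dct in TF 2.x; tf.spectral.dct in the older releases this snippet comes from) against scipy.fftpack.dct for the Type-II transform described above. It assumes TF 2.x, NumPy, and SciPy are installed:

import numpy as np
import scipy.fftpack
import tensorflow as tf

signal = np.random.rand(16).astype(np.float32)
tf_dct = tf.signal.dct(tf.constant(signal), type=2, norm='ortho')
scipy_dct = scipy.fftpack.dct(signal, type=2, norm='ortho')
print(np.allclose(tf_dct.numpy(), scipy_dct, atol=1e-4))  # True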


Example 11: _AngleGrad

def _AngleGrad(op, grad):
  """Returns -grad / (Im(x) + iRe(x))"""
  x = op.inputs[0]
  with ops.control_dependencies([grad]):
    re = math_ops.real(x)
    im = math_ops.imag(x)
    z = math_ops.reciprocal(math_ops.complex(im, re))
    zero = constant_op.constant(0, dtype=grad.dtype)
    complex_grad = math_ops.complex(grad, zero)
    return -complex_grad * z
Developer: neuroradiology | Project: tensorflow | Lines: 10 | Source: math_grad.py
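
An illustrative eager-mode check (TF 2.x; not from the source above) of the math behind _AngleGrad: since angle(complex(r, i)) == atan2(i, r), the gradients with respect to the real components are d/dr = -i / (r^2 + i^2) and d/di = r / (r^2 + i^2).

import tensorflow as tf

r = tf.Variable(3.0)
i = tf.Variable(4.0)
with tf.GradientTape() as tape:
  theta = tf.math.angle(tf.complex(r, i))
dr, di = tape.gradient(theta, [r, i])
print(dr.numpy(), di.numpy())  # -0.16 (= -4/25), 0.12 (= 3/25)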


Example 12: svd

def svd(tensor, full_matrices=False, compute_uv=True, name=None):
  """Computes the singular value decompositions of one or more matrices.

  Computes the SVD of each inner matrix in `tensor` such that
  `tensor[..., :, :] = u[..., :, :] * diag(s[..., :, :]) * transpose(v[..., :,
  :])`

  ```prettyprint
  # a is a tensor.
  # s is a tensor of singular values.
  # u is a tensor of left singular vectors.
  # v is a tensor of right singular vectors.
  s, u, v = svd(a)
  s = svd(a, compute_uv=False)
  ```

  Args:
    tensor: `Tensor` of shape `[..., M, N]`. Let `P` be the minimum of `M` and
      `N`.
    full_matrices: If true, compute full-sized `u` and `v`. If false
      (the default), compute only the leading `P` singular vectors.
      Ignored if `compute_uv` is `False`.
    compute_uv: If `True` then left and right singular vectors will be
      computed and returned in `u` and `v`, respectively. Otherwise, only the
      singular values will be computed, which can be significantly faster.
    name: string, optional name of the operation.

  Returns:
    s: Singular values. Shape is `[..., P]`.
    u: Left singular vectors. If `full_matrices` is `False` (default) then
      shape is `[..., M, P]`; if `full_matrices` is `True` then shape is
      `[..., M, M]`. Not returned if `compute_uv` is `False`.
    v: Right singular vectors. If `full_matrices` is `False` (default) then
      shape is `[..., N, P]`. If `full_matrices` is `True` then shape is
      `[..., N, N]`. Not returned if `compute_uv` is `False`.
  """
  # pylint: disable=protected-access
  s, u, v = gen_linalg_ops._svd(
      tensor, compute_uv=compute_uv, full_matrices=full_matrices)
  if compute_uv:
    return math_ops.real(s), u, v
  else:
    return math_ops.real(s)
Developer: curtiszimmerman | Project: tensorflow | Lines: 43 | Source: linalg_ops.py


Example 13: _assert_positive_definite

 def _assert_positive_definite(self):
   # This operator has the action  Ax = F^H D F x,
   # where D is the diagonal matrix with self.spectrum on the diag.  Therefore,
   # <x, Ax> = <Fx, DFx>,
   # Since F is bijective, the condition for positive definite is the same as
   # for a diagonal matrix, i.e. real part of spectrum is positive.
   message = (
       "Not positive definite:  Real part of spectrum was not all positive.")
   return check_ops.assert_positive(
       math_ops.real(self.spectrum), message=message)
Developer: JonathanRaiman | Project: tensorflow | Lines: 10 | Source: linear_operator_circulant.py


Example 14: _checkGradReal

  def _checkGradReal(self, func, x, use_gpu=False):
    with self.test_session(use_gpu=use_gpu):
      inx = ops.convert_to_tensor(x)
      # func is a forward RFFT function (batched or unbatched).
      z = func(inx)
      # loss = sum(|z|^2)
      loss = math_ops.reduce_sum(math_ops.real(z * math_ops.conj(z)))
      x_jacob_t, x_jacob_n = test.compute_gradient(
          inx, list(x.shape), loss, [1], x_init_value=x, delta=1e-2)

    self.assertAllClose(x_jacob_t, x_jacob_n, rtol=1e-2, atol=1e-2)
Developer: lldavuull | Project: tensorflow | Lines: 11 | Source: fft_ops_test.py


Example 15: _assert_positive_definite

  def _assert_positive_definite(self):
    if self.dtype.is_complex:
      message = (
          "Diagonal operator had diagonal entries with non-positive real part, "
          "thus was not positive definite.")
    else:
      message = (
          "Real diagonal operator had non-positive diagonal entries, "
          "thus was not positive definite.")

    return check_ops.assert_positive(
        math_ops.real(self._diag),
        message=message)
Developer: AliMiraftab | Project: tensorflow | Lines: 13 | Source: linear_operator_tril.py


Example 16: _checkGrad

 def _checkGrad(self, func, x, y, use_gpu=False):
   with self.test_session(use_gpu=use_gpu):
     inx = ops.convert_to_tensor(x)
     iny = ops.convert_to_tensor(y)
     # func is a forward or inverse FFT function (batched or unbatched)
     z = func(math_ops.complex(inx, iny))
     # loss = sum(|z|^2)
     loss = math_ops.reduce_sum(math_ops.real(z * math_ops.conj(z)))
     ((x_jacob_t, x_jacob_n),
      (y_jacob_t, y_jacob_n)) = gradient_checker.compute_gradient(
          [inx, iny], [list(x.shape), list(y.shape)],
          loss, [1],
          x_init_value=[x, y],
          delta=1e-2)
   self.assertAllClose(x_jacob_t, x_jacob_n, rtol=1e-2, atol=1e-2)
   self.assertAllClose(y_jacob_t, y_jacob_n, rtol=1e-2, atol=1e-2)
Developer: kdavis-mozilla | Project: tensorflow | Lines: 16 | Source: fft_ops_test.py


Example 17: _compareGradient

 def _compareGradient(self, x):
   # x[:, 0] is real, x[:, 1] is imag.  We combine real and imag into
   # complex numbers. Then, we extract real and imag parts and
   # computes the squared sum. This is obviously the same as sum(real
   # * real) + sum(imag * imag). We just want to make sure the
   # gradient function is checked.
   with self.cached_session():
     inx = ops.convert_to_tensor(x)
     real, imag = array_ops.split(value=inx, num_or_size_splits=2, axis=1)
     real, imag = array_ops.reshape(real, [-1]), array_ops.reshape(imag, [-1])
     cplx = math_ops.complex(real, imag)
     cplx = math_ops.conj(cplx)
     loss = math_ops.reduce_sum(math_ops.square(
         math_ops.real(cplx))) + math_ops.reduce_sum(
             math_ops.square(math_ops.imag(cplx)))
     epsilon = 1e-3
     jacob_t, jacob_n = gradient_checker.compute_gradient(
         inx, list(x.shape), loss, [1], x_init_value=x, delta=epsilon)
   self.assertAllClose(jacob_t, jacob_n, rtol=epsilon, atol=epsilon)
Developer: Wajih-O | Project: tensorflow | Lines: 19 | Source: cwise_ops_test.py


Example 18: _checkGradComplex

  def _checkGradComplex(self, func, x, y, result_is_complex=True,
                        rtol=1e-2, atol=1e-2):
    with self.cached_session(use_gpu=True):
      inx = ops.convert_to_tensor(x)
      iny = ops.convert_to_tensor(y)
      # func is a forward or inverse, real or complex, batched or unbatched FFT
      # function with a complex input.
      z = func(math_ops.complex(inx, iny))
      # loss = sum(|z|^2)
      loss = math_ops.reduce_sum(math_ops.real(z * math_ops.conj(z)))

      ((x_jacob_t, x_jacob_n),
       (y_jacob_t, y_jacob_n)) = gradient_checker.compute_gradient(
           [inx, iny], [list(x.shape), list(y.shape)],
           loss, [1],
           x_init_value=[x, y],
           delta=1e-2)

    self.assertAllClose(x_jacob_t, x_jacob_n, rtol=rtol, atol=atol)
    self.assertAllClose(y_jacob_t, y_jacob_n, rtol=rtol, atol=atol)
Developer: ziky90 | Project: tensorflow | Lines: 20 | Source: fft_ops_test.py


Example 19: _operator_and_mat_and_feed_dict

  def _operator_and_mat_and_feed_dict(self, build_info, dtype, use_placeholder):
    shape = build_info.shape
    # For this test class, we are creating Hermitian spectrums.
    # We also want the spectrum to have eigenvalues bounded away from zero.
    #
    # pre_spectrum is bounded away from zero.
    pre_spectrum = linear_operator_test_util.random_uniform(
        shape=self._shape_to_spectrum_shape(shape), minval=1., maxval=2.)
    pre_spectrum_c = _to_complex(pre_spectrum)

    # Real{IFFT[pre_spectrum]}
    #  = IFFT[EvenPartOf[pre_spectrum]]
    # is the IFFT of something that is also bounded away from zero.
    # Therefore, FFT[pre_h] would be a well-conditioned spectrum.
    pre_h = math_ops.ifft2d(pre_spectrum_c)

    # A spectrum is Hermitian iff it is the DFT of a real convolution kernel.
    # So we will make spectrum = FFT[h], for real valued h.
    h = math_ops.real(pre_h)
    h_c = _to_complex(h)

    spectrum = math_ops.fft2d(h_c)

    if use_placeholder:
      spectrum_ph = array_ops.placeholder(dtypes.complex64)
      # Evaluate here because (i) you cannot feed a tensor, and (ii)
      # it is random and we want the same value used for both mat and feed_dict.
      spectrum = spectrum.eval()
      operator = linalg.LinearOperatorCirculant2D(
          spectrum_ph, input_output_dtype=dtype)
      feed_dict = {spectrum_ph: spectrum}
    else:
      operator = linalg.LinearOperatorCirculant2D(
          spectrum, input_output_dtype=dtype)
      feed_dict = None

    mat = self._spectrum_to_circulant_2d(spectrum, shape, dtype=dtype)

    return operator, mat, feed_dict
Developer: Huoxubeiyin | Project: tensorflow | Lines: 39 | Source: linear_operator_circulant_test.py


Example 20: _assert_positive_definite

 def _assert_positive_definite(self):
   return check_ops.assert_positive(
       math_ops.real(self.multiplier),
       message="LinearOperator was not positive definite.")
Developer: AliMiraftab | Project: tensorflow | Lines: 4 | Source: linear_operator_identity.py



Note: The tensorflow.python.ops.math_ops.real examples in this article were compiled by 纯净天空 from source code and documentation platforms such as GitHub and MSDocs. The snippets are drawn from open-source projects contributed by their respective authors, and copyright remains with the original authors; consult each project's license before distributing or reusing the code. Do not reproduce this compilation without permission.

