Python tensorflow.op_scope Function Code Examples


This article compiles typical usage examples of the tensorflow.op_scope function in Python. If you have been wondering how exactly to use op_scope, how it works, or what real example code looks like, the curated code samples below may help.



Twenty code examples of the op_scope function are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
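
Before diving into the examples, here is a minimal sketch of how tf.op_scope is typically wrapped around an op-building function. This is an illustrative example rather than code from any of the projects cited below, and the helper name my_add is hypothetical. In pre-1.0 TensorFlow, tf.op_scope(values, name, default_name) opens a name scope around the ops a function creates and yields the resulting scope string; in TensorFlow 1.0 and later the function was removed in favor of tf.name_scope(name, default_name, values), which takes the same arguments in a different order.

import tensorflow as tf

def my_add(a, b, name=None):
    # op_scope groups the ops created below under a common name scope;
    # `name` overrides the default scope name "my_add" when provided.
    with tf.op_scope([a, b], name, "my_add") as scope:
        a = tf.convert_to_tensor(a, name="a")
        b = tf.convert_to_tensor(b, name="b")
        return tf.add(a, b, name=scope)

# Rough equivalent in TensorFlow >= 1.0, where tf.op_scope no longer exists:
#     with tf.name_scope(name, "my_add", [a, b]) as scope:
#         ...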

Example 1: rnn_decoder

def rnn_decoder(decoder_inputs, initial_state, cell, scope=None):
    """RNN Decoder that creates training and sampling sub-graphs.

    Args:
        decoder_inputs: Inputs for decoder, list of tensors.
                        This is used only in the training sub-graph.
        initial_state: Initial state for the decoder.
        cell: RNN cell to use for decoder.
        scope: Scope to use; if None, a new one will be created.

    Returns:
        List of tensors for outputs and states for training and sampling sub-graphs.
    """
    with tf.variable_scope(scope or "dnn_decoder"):
        states, sampling_states = [initial_state], [initial_state]
        outputs, sampling_outputs = [], []
        with tf.op_scope([decoder_inputs, initial_state], "training"):
            for i, inp in enumerate(decoder_inputs):
                if i > 0:
                    tf.get_variable_scope().reuse_variables()
                output, new_state = cell(inp, states[-1])
                outputs.append(output)
                states.append(new_state)
        with tf.op_scope([initial_state], "sampling"):
            for i, _ in enumerate(decoder_inputs):
                if i == 0:
                    sampling_outputs.append(outputs[i])
                    sampling_states.append(states[i])
                else:
                    sampling_output, sampling_state = cell(
                        sampling_outputs[-1], sampling_states[-1])
                    sampling_outputs.append(sampling_output)
                    sampling_states.append(sampling_state)
    return outputs, states, sampling_outputs, sampling_states
Developer: 4chin | Project: tensorflow | Lines: 34 | Source: seq2seq_ops.py


Example 2: dot

def dot(a, b):
    with tf.op_scope([a, b], 'dot'):
        # TODO: implement an N-dimensional dot product consistent with NumPy.
        a_shape = a.get_shape().as_list()
        a_dims = len(a_shape)
        b_shape = b.get_shape().as_list()
        b_dims = len(b_shape)

        # scalar dot scalar, scalar dot tensor or tensor dot scalar: just do element-wise multiply.
        if a_dims == 0 or b_dims == 0:
            return a * b

        # vector dot vector, where we can just perform element-wise prod, and then sum them all.
        if a_dims == 1 and b_dims == 1:
            return tf.reduce_sum(a * b)

        # vector dot matrix or matrix dot vector, where we should expand the vector to matrix, and then squeeze result.
        if a_dims <= 2 and b_dims <= 2:
            if a_dims == 1:
                a = tf.expand_dims(a, dim=0)
            if b_dims == 1:
                b = tf.expand_dims(b, dim=1)
            ret = tf.matmul(a, b)
            if a_dims == 1:
                ret = tf.squeeze(ret, [0])
            if b_dims == 1:
                ret = tf.squeeze(ret, [1])
            return ret

    # Otherwise, raise: we do not know how to handle this combination of shapes.
    raise TypeError('Tensor dot between shape %r and %r is not supported.' % (a_shape, b_shape))
Developer: korepwx | Project: ipwxlearn | Lines: 31 | Source: op.py


Example 3: l1_orthogonal_regularizer

def l1_orthogonal_regularizer(logits_to_normalize, l1_alpha_loss_factor = 10, name = None):

  '''Motivation for this loss function comes from: https://redd.it/3wx4sr
  Thanks to spurious_recollectio and harponen on reddit for suggesting this approach.'''

  '''Will add an L1 loss linearly to the softmax cost function.


    Returns:
  final_reg_loss: One Scalar Value representing the loss averaged across the batch'''

  '''this differs from the unitary approach because it is an orthogonal matrix approximation -- it will
  suffer on sequences longer than ~500 timesteps and requires more compute, on the order of O(n^3)'''

  with tf.op_scope(logits_to_normalize, name, "rnn_l2_loss"): #need to have this for tf to work

    '''the l1 equation is: alpha * T.abs(T.dot(W, W.T) - (1.05) ** 2 * T.identity_like(W))'''
    Weights_for_l1_loss = tf.get_variable("linear")

    matrix_dot_product= tf.matmul(Weights_for_l1_loss, Weights_for_l1_loss, transpose_a = True)

    #we need to check here that we have the right dimension -- should it be 0 or the 1 dim?
    identity_matrix = lfe.identity_like(Weights_for_l1_loss)

    matrix_minus_identity = matrix_dot_product - 2*1.05*identity_matrix

    absolute_cost = tf.abs(matrix_minus_identity)

    final_l1_loss = l1_alpha_loss_factor*(absolute_cost/batch_size)

  return final_l1_loss
Developer: dmakian | Project: TwitchRNNBot | Lines: 31 | Source: seq2seq_enhanced.py


Example 4: U_t_variance

def U_t_variance(timestep_outputs_matrix, total_timesteps, gamma = 5):

	with tf.op_scope([timestep_outputs_matrix, total_timesteps, gamma], "U_t_variance"):

		G_i_matrix = G_i_piecewise_variance(timestep_outputs_matrix, total_timesteps)
		# NOTE: the remainder of this snippet is unfinished in the source project: the
		# tf.mul(...) call below is missing its second argument, timestep_outputs_matrix_with_g
		# is never defined, and the function returns nothing.
		tf.mul(timestep_outputs_matrix, )
		tf.reduce_prod(timestep_outputs_matrix_with_g)
Developer: dmakian | Project: TwitchRNNBot | Lines: 7 | Source: decoding_enhanced.py


Example 5: sampled_sequence_loss

def sampled_sequence_loss(inputs, targets, weights, loss_function,
                          average_across_timesteps=True,
                          average_across_batch=True, name=None):
  """Weighted cross-entropy loss for a sequence of logits, batch-collapsed.

  Args:
    inputs: List of 2D Tensors of shape [batch_size x hid_dim].
    targets: List of 1D batch-sized int32 Tensors of the same length as inputs.
    weights: List of 1D batch-sized float-Tensors of the same length as inputs.
    loss_function: Sampled softmax function (inputs, labels) -> loss
    average_across_timesteps: If set, divide the returned cost by the total
      label weight.
    average_across_batch: If set, divide the returned cost by the batch size.
    name: Optional name for this operation, defaults to 'sequence_loss'.

  Returns:
    A scalar float Tensor: The average log-perplexity per symbol (weighted).

  Raises:
    ValueError: If len(inputs) is different from len(targets) or len(weights).
  """
  with tf.op_scope(inputs + targets + weights, name, 'sampled_sequence_loss'):
    cost = tf.reduce_sum(sequence_loss_by_example(
        inputs, targets, weights, loss_function,
        average_across_timesteps=average_across_timesteps))
    if average_across_batch:
      batch_size = tf.shape(targets[0])[0]
      return cost / tf.cast(batch_size, tf.float32)
    else:
      return cost
Developer: Peratham | Project: models | Lines: 30 | Source: seq2seq_lib.py


Example 6: sequence_loss_by_example

def sequence_loss_by_example(inputs, targets, weights, loss_function,
                             average_across_timesteps=True, name=None):
  """Sampled softmax loss for a sequence of inputs (per example).

  Args:
    inputs: List of 2D Tensors of shape [batch_size x hid_dim].
    targets: List of 1D batch-sized int32 Tensors of the same length as logits.
    weights: List of 1D batch-sized float-Tensors of the same length as logits.
    loss_function: Sampled softmax function (inputs, labels) -> loss
    average_across_timesteps: If set, divide the returned cost by the total
      label weight.
    name: Optional name for this operation, default: 'sequence_loss_by_example'.

  Returns:
    1D batch-sized float Tensor: The log-perplexity for each sequence.

  Raises:
    ValueError: If len(inputs) is different from len(targets) or len(weights).
  """
  if len(targets) != len(inputs) or len(weights) != len(inputs):
    raise ValueError('Lengths of logits, weights, and targets must be the same '
                     '%d, %d, %d.' % (len(inputs), len(weights), len(targets)))
  with tf.op_scope(inputs + targets + weights, name,
                   'sequence_loss_by_example'):
    log_perp_list = []
    for inp, target, weight in zip(inputs, targets, weights):
      crossent = loss_function(inp, target)
      log_perp_list.append(crossent * weight)
    log_perps = tf.add_n(log_perp_list)
    if average_across_timesteps:
      total_size = tf.add_n(weights)
      total_size += 1e-12  # Just to avoid division by 0 for all-0 weights.
      log_perps /= total_size
  return log_perps
Developer: Peratham | Project: models | Lines: 34 | Source: seq2seq_lib.py


Example 7: seq2seq_inputs

def seq2seq_inputs(X, y, input_length, output_length, sentinel=None, name=None):
    """Processes inputs for Sequence to Sequence models.

    Args:
        X: Input Tensor [batch_size, input_length, embed_dim].
        y: Output Tensor [batch_size, output_length, embed_dim].
        input_length: length of input X.
        output_length: length of output y.
        sentinel: optional first input to decoder and final output expected.
                  if sentinel is not provided, zeros are used.
                  Because y is not available at sampling time, the shape
                  of the sentinel will be inferred from X.

    Returns:
        Encoder input from X, and decoder inputs and outputs from y.
    """
    with tf.op_scope([X, y], name, "seq2seq_inputs"):
        in_X = array_ops.split_squeeze(1, input_length, X)
        y = array_ops.split_squeeze(1, output_length, y)
        if not sentinel:
            # Set to zeros of shape of y[0], using X for batch size.
            sentinel_shape = tf.pack([tf.shape(X)[0], y[0].get_shape()[1]])
            sentinel = tf.zeros(sentinel_shape)
            sentinel.set_shape(y[0].get_shape())
        in_y = [sentinel] + y
        out_y = y + [sentinel]
        return in_X, in_y, out_y
Developer: 4chin | Project: tensorflow | Lines: 27 | Source: seq2seq_ops.py


Example 8: FullyConnectedLayer

def FullyConnectedLayer(tensor, size, weight_init=None, bias_init=None,
                        name=None):
  """Fully connected layer.

  Args:
    tensor: Input tensor.
    size: Number of nodes in this layer.
    weight_init: Weight initializer.
    bias_init: Bias initializer.
    name: Name for this op. Defaults to 'fully_connected'.

  Returns:
    A new tensor representing the output of the fully connected layer.

  Raises:
    ValueError: If input tensor is not 2D.
  """
  if len(tensor.get_shape()) != 2:
    raise ValueError('Dense layer input must be 2D, not %dD'
                     % len(tensor.get_shape()))
  if weight_init is None:
    num_features = tensor.get_shape()[-1].value
    weight_init = tf.truncated_normal([num_features, size], stddev=0.01)
  if bias_init is None:
    bias_init = tf.zeros([size])

  with tf.op_scope([tensor], name, 'fully_connected'):
    w = tf.Variable(weight_init, name='w')
    b = tf.Variable(bias_init, name='b')
    return tf.nn.xw_plus_b(tensor, w, b)
Developer: skearnes | Project: deepchem | Lines: 30 | Source: model_ops.py


Example 9: MultitaskLogits

def MultitaskLogits(features, num_tasks, num_classes=2, weight_init=None,
                    bias_init=None, dropout=None, name=None):
  """Create a logit tensor for each classification task.

  Args:
    features: A 2D tensor with dimensions batch_size x num_features.
    num_tasks: Number of classification tasks.
    num_classes: Number of classes for each task.
    weight_init: Weight initializer.
    bias_init: Bias initializer.
    dropout: Float giving dropout probability for weights (NOT keep
      probability).
    name: Name for this op. Defaults to 'multitask_logits'.

  Returns:
    A list of logit tensors; one for each classification task.
  """
  logits = []
  with tf.name_scope('multitask_logits'):
    for task_idx in range(num_tasks):
      with tf.op_scope([features], name,
                       ('task' + str(task_idx).zfill(len(str(num_tasks))))):
        logits.append(
            Logits(features, num_classes, weight_init=weight_init,
                   bias_init=bias_init, dropout=dropout))
  return logits
Developer: skearnes | Project: deepchem | Lines: 26 | Source: model_ops.py


Example 10: inference

def inference(data, num_classes, scope):
  with tf.op_scope([data], scope):
    with scopes.arg_scope([ops.conv2d, ops.fc, ops.dropout], is_training=True):
      with tf.variable_scope('fc1'):
        fc1 = ops.fc(
            data,
            num_units_out=2048,
            activation=tf.nn.sigmoid)
      with tf.variable_scope('fc2'):
        fc2 = ops.fc(
            fc1,
            num_units_out=2048,
            activation=tf.nn.sigmoid)
      with tf.variable_scope('fc3'):
        fc3 = ops.fc(
            fc2,
            num_units_out=2048,
            activation=tf.nn.sigmoid)
      with tf.variable_scope('fc4'):
        fc4 = ops.fc(
            fc3,
            num_units_out=2048,
            activation=tf.nn.sigmoid)
      with tf.variable_scope('fc5'):
        fc5 = ops.fc(
            fc4,
            num_units_out=num_classes,
            activation=None)
  return fc5
Developer: houcy | Project: models | Lines: 29 | Source: msr_ffn.py


Example 11: norm_stabilizer_loss

def norm_stabilizer_loss(logits_to_normalize, norm_regularizer_factor = 50, name = None):
  '''Will add a Norm Stabilizer Loss 

    Args:
  logits_to_normalize: This can be output logits or hidden states. The state of each decoder cell in each time-step. This is a list
    with length len(decoder_inputs) -- one item for each time-step.
    Each item is a 2D Tensor of shape [batch_size x cell.state_size] (or it can be [batch_size x output_logits])

  norm_regularizer_factor: The factor required to apply norm stabilization. Keep 
    in mind that a larger factor will allow you to achieve a lower loss, but it will take
    many more epochs to do so!

    Returns:
  final_reg_loss: One Scalar Value representing the loss averaged across the batch'''

  with tf.op_scope(logits_to_normalize, name, "norm_stabilizer_loss"): #need to have this for tf to work
    batch_size = tf.shape(logits_to_normalize[0])[0] #batch size inferred from the first timestep

    squared_sum = tf.zeros((batch_size), tf.float32) #per-example accumulator of zeros
    for q in xrange(len(logits_to_normalize)-1): #this represents the summation part from t to T
      '''Author's notes: you can't take the sqrt of a negative number, which needs to be handled first.
      The Euclidean norm of the value is needed here -- a built-in tf op for this was not found.
      For an Amn matrix, m runs down the rows and n across the columns, so we reduce-sum on axis 1.'''
      difference = tf.sub(lfe.frobenius_norm(logits_to_normalize[q+1], reduction_indicies = 1),
                          lfe.frobenius_norm(logits_to_normalize[q], reduction_indicies = 1))
      '''the difference has the dimensions of [batch_size]'''

      squared_sum = tf.add(squared_sum, tf.square(difference))
    #We want to average across batch sizes and divide by T
    #(tf.reduce_sum is used since squared_sum is a single tensor, not a list suitable for tf.add_n)
    final_reg_loss = norm_regularizer_factor*(tf.reduce_sum(squared_sum)/((len(logits_to_normalize))*(batch_size)))
    return final_reg_loss
Developer: ml-lab | Project: Seq2Seq_Upgrade_TensorFlow | Lines: 32 | Source: seq2seq_enhanced.py


Example 12: std_forward

def std_forward(a, weights, bias_weights, name=None):
  with tf.op_scope([a, weights, bias_weights], name, 'std_forward') as scope:
    a = tf.convert_to_tensor(a, dtype=tf.float32, name='input')
    weights = tf.convert_to_tensor(weights, dtype=tf.float32, name='weights')
    bias_weights = tf.convert_to_tensor(bias_weights, dtype=tf.float32, name='bias_weights')
    biased = tf.concat(1, (weights, bias_weights), name='biased')
    return tf.matmul(biased, a, name=scope)
Developer: rgobbel | Project: rntn | Lines: 7 | Source: tf_rntn.py


Example 13: my_model_with_buckets

def my_model_with_buckets(encoder_inputs, decoder_inputs, targets, weights,
                          buckets, seq2seq, softmax_loss_function=None,
                          per_example_loss=False, name=None):
    """Improved version of model_with_buckets, to take the states
    """
    if len(encoder_inputs) < buckets[-1][0]:
        raise ValueError("Length of encoder_inputs (%d) must be at least that of la"
                         "st bucket (%d)." % (len(encoder_inputs), buckets[-1][0]))
    if len(targets) < buckets[-1][1]:
        raise ValueError("Length of targets (%d) must be at least that of last"
                         "bucket (%d)." % (len(targets), buckets[-1][1]))
    if len(weights) < buckets[-1][1]:
        raise ValueError("Length of weights (%d) must be at least that of last"
                         "bucket (%d)." % (len(weights), buckets[-1][1]))

    all_inputs = encoder_inputs + decoder_inputs + targets + weights
    losses = []
    outputs = []
    states = []
    with tf.op_scope(all_inputs, name, "my_model_with_buckets"):
        for j, bucket in enumerate(buckets):
            with tf.variable_scope(tf.get_variable_scope(), reuse=True if j > 0 else None):
                bucket_outputs, _, bucket_enc_state = seq2seq(encoder_inputs[:bucket[0]], decoder_inputs[:bucket[1]])
                outputs.append(bucket_outputs)
                states.append(bucket_enc_state)
                if per_example_loss:
                    losses.append(tf.nn.seq2seq.sequence_loss_by_example(
                        outputs[-1], targets[:bucket[1]], weights[:bucket[1]],
                        softmax_loss_function=softmax_loss_function))
                else:
                    losses.append(tf.nn.seq2seq.sequence_loss(
                        outputs[-1], targets[:bucket[1]], weights[:bucket[1]],
                        softmax_loss_function=softmax_loss_function))

    return outputs, losses, states
Developer: ManasMahanta | Project: misc | Lines: 35 | Source: stock_model.py


Example 14: batch_sample_with_temperature_old

def batch_sample_with_temperature_old(arr, temperature=1.0):
    """
    Samples from something resembling a multinomial distribution.
    Works by multiplying the probabilities of each value by a 
    random uniform number and then selecting the max.

    Where arr is of shape (batch_size, vocab_size)
    Returns the index of the item that was sampled in each row.

    source: https://github.com/tensorflow/tensorflow/issues/456
    """
    batch_size, vocab_size = arr.get_shape()

    with tf.op_scope([arr, temperature], "batch_sample_with_temperature"):


        # subtract by the largest value in each batch to improve stability
        c = tf.reduce_max(arr, reduction_indices=1, keep_dims=True)
        softmax = tf.nn.softmax(arr - c) + 1e-6
        x = tf.log(softmax) # / temperature

        # softmax again
        x = tf.nn.softmax(x) / temperature

        # perform the sampling
        u = tf.random_uniform(tf.shape(arr), minval=1e-6, maxval=1)
        sampled_idx = tf.argmax(tf.sub(x, -tf.log(-tf.log(u))), dimension=1) 
        
    return sampled_idx, x
Developer: wulfebw | Project: adversarial_rl | Lines: 29 | Source: learning_utils.py


Example 15: loss

def loss(logits, one_hot_labels, batch_size, scope):
  with tf.op_scope([logits, one_hot_labels], scope, 'CrossEntropyLoss'):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(
        logits,
        one_hot_labels,
        name='xentropy')
  return cross_entropy
Developer: houcy | Project: models | Lines: 7 | Source: vgg.py


Example 16: cross_entropy_loss

def cross_entropy_loss(logits, one_hot_labels, label_smoothing=0,
                       weight=1.0, scope=None):
  """Define a Cross Entropy loss using softmax_cross_entropy_with_logits.

  It can scale the loss by weight factor, and smooth the labels.

  Args:
    logits: [batch_size, num_classes] logits outputs of the network .
    one_hot_labels: [batch_size, num_classes] target one_hot_encoded labels.
    label_smoothing: if greater than 0 then smooth the labels.
    weight: scale the loss by this factor.
    scope: Optional scope for op_scope.

  Returns:
    A tensor with the softmax_cross_entropy loss.
  """
  logits.get_shape().assert_is_compatible_with(one_hot_labels.get_shape())
  with tf.op_scope([logits, one_hot_labels], scope, 'CrossEntropyLoss'):
    num_classes = one_hot_labels.get_shape()[-1].value
    one_hot_labels = tf.cast(one_hot_labels, logits.dtype)
    if label_smoothing > 0:
      smooth_positives = 1.0 - label_smoothing
      smooth_negatives = label_smoothing / num_classes
      one_hot_labels = one_hot_labels * smooth_positives + smooth_negatives
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits,
                                                            one_hot_labels,
                                                            name='xentropy')
    weight = tf.convert_to_tensor(weight,
                                  dtype=logits.dtype.base_dtype,
                                  name='loss_weight')
    loss = tf.mul(weight, tf.reduce_mean(cross_entropy), name='value')
    tf.add_to_collection(LOSSES_COLLECTION, loss)
    return loss
Developer: paengs | Project: Net2Net | Lines: 33 | Source: losses.py


Example 17: sequence_classifier

def sequence_classifier(decoding, labels, sampling_decoding=None, name=None):
    """Returns predictions and loss for sequence of predictions.

    Args:
        decoding: List of Tensors with predictions.
        labels: List of Tensors with labels.
        sampling_decoding: Optional, List of Tensor with predictions to be used
                           in sampling, e.g. they shouldn't have a dependency on outputs.
                           If not provided, decoding is used.

    Returns:
        Predictions and losses tensors.
    """
    with tf.op_scope([decoding, labels], name, "sequence_classifier"):
        predictions, xent_list = [], []
        for i, pred in enumerate(decoding):
            xent_list.append(
                tf.nn.softmax_cross_entropy_with_logits(
                    pred, labels[i], name="sequence_loss/xent_raw{0}".format(i)))
            if sampling_decoding:
                predictions.append(tf.nn.softmax(sampling_decoding[i]))
            else:
                predictions.append(tf.nn.softmax(pred))
        xent = tf.add_n(xent_list, name="sequence_loss/xent")
        loss = tf.reduce_sum(xent, name="sequence_loss")
        return array_ops.expand_concat(1, predictions), loss
Developer: 4chin | Project: tensorflow | Lines: 26 | Source: seq2seq_ops.py


Example 18: l2_orthogonal_regularizer

def l2_orthogonal_regularizer(logits_to_normalize, l2_alpha_loss_factor = 10, name = None):

  '''Motivation for this loss function comes from: https://www.reddit.com/r/MachineLearning/comments/3uk2q5/151106464_unitary_evolution_recurrent_neural/
  Thanks to spurious_recollectio on reddit for suggesting this approach.'''

  '''Will add an L2 loss linearly to the softmax cost function.


    Returns:
  final_reg_loss: One Scalar Value representing the loss averaged across the batch'''

  '''this differs from the unitary approach because it is an orthogonal matrix approximation -- it will
  suffer on sequences longer than ~500 timesteps and requires more compute, on the order of O(n^3)'''

  with tf.op_scope(logits_to_normalize, name, "rnn_l2_loss"): #need to have this for tf to work

    '''somehow we need to get the Weights from the rnns right here....i don't know how! '''
    '''the l1 equation is: alpha * T.abs(T.dot(W, W.T) - (1.05) ** 2 * T.identity_like(W))'''
    '''The equation of the cost is: loss += alpha * T.sum((T.dot(W, W.T) - (1.05) ** 2 * T.identity_like(W)) ** 2)'''
    Weights_for_l2_loss = tf.get_variable("linear")

    matrix_dot_product= tf.matmul(Weights_for_l2_loss, Weights_for_l2_loss, transpose_a = True)

    #we need to check here that we have the right dimension -- should it be 0 or the 1 dim?
    identity_matrix = lfe.identity_like(Weights_for_l2_loss)

    matrix_minus_identity = matrix_dot_product - 2*1.05*identity_matrix

    square_the_loss = tf.square(matrix_minus_identity)

    final_l2_loss = l2_alpha_loss_factor*(tf.reduce_sum(square_the_loss)/(batch_size))
  return final_l2_loss
Developer: dmakian | Project: TwitchRNNBot | Lines: 32 | Source: seq2seq_enhanced.py


Example 19: unzip

def unzip(x, split_dim, current_length, num_splits=2, name=None):
  """Splits a tensor by unzipping along the split_dim.

  For example the following array split into 2 would be:
      [1, 2, 3, 4, 5, 6] -> [1, 3, 5], [2, 4, 6]
  and by 3:
      [1, 2, 3, 4] -> [1, 4], [2], [3]

  Args:
    x: The tensor to split.
    split_dim: The dimension to split along.
    current_length: Current length along the split_dim.
    num_splits: The number of splits.
    name: Optional name for this op.
  Returns:
    A length num_splits sequence.
  """
  with tf.op_scope([x], name, 'unzip') as scope:
    x = tf.convert_to_tensor(x, name='x')
    # There is probably a more efficient way to do this.
    all_splits = tf.split(split_dim, current_length, x, name=scope)
    splits = [[] for _ in xrange(num_splits)]
    for i in xrange(current_length):
      splits[i % num_splits].append(all_splits[i])
    return [tf.concat(split_dim, s) for s in splits]
Developer: Dapid | Project: prettytensor | Lines: 25 | Source: functions.py


Example 20: sequence_loss

def sequence_loss(logits, targets, weights, num_decoder_symbols,
                  average_across_timesteps=True, average_across_batch=True,
                  softmax_loss_function=None, name=None):
  """Weighted cross-entropy loss for a sequence of logits, batch-collapsed.

  Args:
    logits: list of 2D Tensors of shape [batch_size x num_decoder_symbols].
    targets: list of 1D batch-sized int32-Tensors of the same length as logits.
    weights: list of 1D batch-sized float-Tensors of the same length as logits.
    num_decoder_symbols: integer, number of decoder symbols (output classes).
    average_across_timesteps: If set, divide the returned cost by the total
      label weight.
    average_across_batch: If set, divide the returned cost by the batch size.
    softmax_loss_function: function (inputs-batch, labels-batch) -> loss-batch
      to be used instead of the standard softmax (the default if this is None).
    name: optional name for this operation, defaults to "sequence_loss".

  Returns:
    A scalar float Tensor: the average log-perplexity per symbol (weighted).

  Raises:
    ValueError: if len(logits) is different from len(targets) or len(weights).
  """
  with tf.op_scope(logits + targets + weights, name, "sequence_loss"):
    cost = tf.reduce_sum(sequence_loss_by_example(
        logits, targets, weights, num_decoder_symbols,
        average_across_timesteps=average_across_timesteps,
        softmax_loss_function=softmax_loss_function))
    if average_across_batch:
      batch_size = tf.shape(targets[0])[0]
      return cost / tf.cast(batch_size, tf.float32)
    else:
      return cost
Developer: adeelzaman | Project: tensorflow | Lines: 33 | Source: seq2seq.py



Note: The tensorflow.op_scope examples in this article were compiled by 纯净天空 (VimSky) from source-code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors; for distribution and use, please follow the corresponding project's license. Do not reproduce without permission.

