You can incorporate class weights into the loss function by multiplying the logits.
Regular cross-entropy loss is:
loss(x, class) = -log(exp(x[class]) / (sum_j exp(x[j])))
= -x[class] + log(sum_j exp(x[j]))
In the weighted case (logits multiplied element-wise by the class weights):
loss(x, class) = -weights[class] * x[class] + log(sum_j exp(weights[j] * x[j]))
So by multiplying the logits, you re-scale each class's prediction by its class weight before the softmax is taken.
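A quick numeric sanity check of the identity above, using NumPy (the numbers are made up for illustration): the weighted-logits formula gives the same value as ordinary softmax cross-entropy applied to the pre-multiplied logits.

```python
import numpy as np

x = np.array([2.0, 0.5])   # logits for one example (illustrative values)
w = np.array([0.3, 1.0])   # class weights (illustrative values)
c = 0                      # true class index

# Loss from the weighted formula:
# -weights[class] * x[class] + log(sum_j exp(weights[j] * x[j]))
loss_formula = -w[c] * x[c] + np.log(np.sum(np.exp(w * x)))

# Same thing: regular softmax cross-entropy on the weighted logits
wx = w * x
loss_softmax = -np.log(np.exp(wx[c]) / np.sum(np.exp(wx)))

assert np.isclose(loss_formula, loss_softmax)
```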
For example:
ratio = 31.0 / (500.0 + 31.0)
class_weight = tf.constant([ratio, 1.0 - ratio])
logits = ...  # shape [batch_size, 2]
weighted_logits = tf.multiply(logits, class_weight)  # shape [batch_size, 2]
xent = tf.nn.softmax_cross_entropy_with_logits(
    labels=labels, logits=weighted_logits, name="xent_raw")
There is now a standard loss function that supports per-example weights:
tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits, weights=weights)
Here weights must be converted from per-class weights to a per-example weight (shape [batch_size]); see the tf.losses documentation for details.
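One way to build that per-example weight tensor is to index the class-weight vector by each example's label (in TensorFlow, `tf.gather(class_weight, labels)` does this). A NumPy sketch of the same mapping, with illustrative array values:

```python
import numpy as np

class_weight = np.array([0.06, 0.94])  # one weight per class (illustrative)
labels = np.array([0, 1, 1, 0, 1])     # one integer label per example

# Per-example weights: look up each example's class weight by its label
weights = class_weight[labels]         # shape [batch_size]
# → array([0.06, 0.94, 0.94, 0.06, 0.94])
```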