tf.stop_gradient(tensor)
might be what you are looking for. The tensor is treated as a constant during gradient computation, so you can create two losses, each with a different part of the graph held constant.
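For example, a minimal sketch (the variables A, B and the quadratic losses here are made up for illustration; substitute your own):

import tensorflow as tf

A = tf.Variable(1.0, name="A")
B = tf.Variable(2.0, name="B")

# loss_a sees B as a constant, so its gradients only flow into A
loss_a = tf.square(A * tf.stop_gradient(B) - 5.0)
# loss_b sees A as a constant, so its gradients only flow into B
loss_b = tf.square(tf.stop_gradient(A) * B - 5.0)

train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a)  # updates A only
train_b = tf.train.GradientDescentOptimizer(0.1).minimize(loss_b)  # updates B only

Variables whose gradient comes back as None (because of stop_gradient) are simply skipped by the optimizer.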
The other (and often better) option is to create two optimizers and have each one explicitly optimize only a subset of the variables, e.g.
train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A])
train_b = tf.train.GradientDescentOptimizer(0.1).minimize(loss_b, var_list=[B])
and you can alternate between them when running updates.
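Putting it together, a minimal end-to-end sketch (again, the losses are placeholders for your actual objectives):

import tensorflow as tf

A = tf.Variable(1.0)
B = tf.Variable(2.0)

loss_a = tf.square(A + B - 3.0)
loss_b = tf.square(A * B - 2.0)

# each optimizer only ever touches the variables in its var_list
train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A])
train_b = tf.train.GradientDescentOptimizer(0.1).minimize(loss_b, var_list=[B])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        sess.run(train_a)  # updates A only; B stays fixed
        sess.run(train_b)  # updates B only; A stays fixed

Unlike the stop_gradient approach, var_list restricts which variables are updated but still backpropagates through the full graph, which is usually what you want for alternating optimization schemes.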