This is not an exact answer, but I want to share my observations. I downloaded the data from Kaggle and ran the same example three times. Here is how I defined the metrics:
import keras

metrics = [
    keras.metrics.FalseNegatives(name="fn"),
    keras.metrics.FalsePositives(name="fp"),
    keras.metrics.TrueNegatives(name="tn"),
    keras.metrics.TruePositives(name="tp"),
    keras.metrics.Precision(name="precision"),
    keras.metrics.Recall(name="recall"),
]
I also tried tf.keras.metrics, but both produced the same results, so I stuck with the one used in the tutorial.
Since counts reported as floating-point numbers whose sum did not seem to make sense, I decided to check it myself with the same data. For training I have 227846 samples.
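For completeness, here is a minimal sketch of how such a metrics list is plugged into compile(); the tiny model below is only a placeholder I made up, not the tutorial's actual architecture:

# Minimal sketch: passing the metrics list to compile().
# The model is a placeholder, not the tutorial's architecture.
import keras

# Same metrics list as above, repeated so this snippet runs on its own.
metrics = [
    keras.metrics.FalseNegatives(name="fn"),
    keras.metrics.FalsePositives(name="fp"),
    keras.metrics.TrueNegatives(name="tn"),
    keras.metrics.TruePositives(name="tp"),
    keras.metrics.Precision(name="precision"),
    keras.metrics.Recall(name="recall"),
]

model = keras.Sequential([
    keras.Input(shape=(30,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=keras.optimizers.Adam(1e-2),
    loss="binary_crossentropy",
    metrics=metrics,  # tf.keras.metrics classes behave the same way here
)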
Part 1)
Epoch 2/30
fn: 40.0000 - fp: 5700.0000 - tn: 221729.0000 - tp: 377.0000 - precision: 0.0620 - recall: 0.9041
Summing these up: 40 + 5700 + 221729 + 377 = 227846, which is exactly the number of training samples.
Part 2)
Epoch 25/30
fn: 9.0000 - fp: 3501.0000 - tn: 223928.0000 - tp: 408.0000 - precision: 0.1044 - recall: 0.9784
Summing them all again: 9 + 3501 + 223928 + 408 = 227846.
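Just to make the arithmetic explicit, this is the kind of check I ran, with the counts copied from the two epochs above:

# Sanity check: fn + fp + tn + tp should add up to the number of training samples.
train_samples = 227846

epoch_2  = {"fn": 40, "fp": 5700, "tn": 221729, "tp": 377}  # Part 1
epoch_25 = {"fn": 9,  "fp": 3501, "tn": 223928, "tp": 408}  # Part 2

for label, counts in [("epoch 2", epoch_2), ("epoch 25", epoch_25)]:
    total = sum(counts.values())
    print(label, total, total == train_samples)  # both sums equal 227846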
The same thing applies to validation as well (Part 1 and Part 2).
Part 3)
I also expected them to be whole numbers, like 9.0000 rather than something like 9.248.
An original example from Keras' docs.
The sums of both epochs are equal to each other. Since I could not rule out that something was wrong with the data or its processing in the tutorial, I decided to download the data from the link given in the tutorial:
Epoch 8/10
fn: 22.0000 - fp: 5191.0000 - tn: 222238.0000 - tp: 395.0000
I applied the same process to both datasets to see whether there is something fishy about the data pre-processing in the tutorial. My training set has shape (227846, 30); in the tutorial it is (182276, 29). The 182276 comes from a different split ratio, but the 29 features (versus my 30) is the real difference.
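If anyone wants to track down the extra column, one quick way is to inspect the raw CSV; the file name and the guess that the Time column is the one being dropped are assumptions on my part:

# Inspect the raw columns of the credit card fraud CSV.
# "creditcard.csv" and the guess that "Time" is the dropped column are assumptions;
# adjust to whatever your local file actually contains.
import pandas as pd

df = pd.read_csv("creditcard.csv")
print(df.shape)          # expected (n_rows, 31) if the columns are Time, V1..V28, Amount, Class
print(list(df.columns))

features_30 = df.drop(columns=["Class"])           # 30 feature columns -> matches my shape
features_29 = features_30.drop(columns=["Time"])   # 29 columns -> would match the tutorial's shape
print(features_30.shape[1], features_29.shape[1])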
I conclude there might be something about the data processing in the tutorial that I cannot see; the more I look at it, the more blind I become, and I am unable to tell whether anything in the processing is actually wrong.
These are my results. I also looked into the metrics themselves, and all of the counts were integers.
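One last observation that may explain the .0000 formatting: as far as I can tell, Keras keeps these confusion-matrix counts in floating-point variables, so even a whole-number count comes back as a float tensor. A minimal check (the labels and predictions here are made up):

# The metric accumulates a whole-number count, but result() is a float tensor,
# which is why the progress bar prints values like "tp: 377.0000".
import numpy as np
import keras

m = keras.metrics.TruePositives()
m.update_state(np.array([0, 1, 1, 1]), np.array([0.1, 0.9, 0.8, 0.7]))
print(m.result())  # a scalar float tensor holding 3.0 (an integer value stored as a float)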