I'm covering some legacy code with tests, and the initial goal was to get line coverage close to 100%. Such high coverage is good to have, but I feel like I'm falling into an XY problem. First of all, some lines of code are impossible to test without proper mocking, and mocking is not reasonable in that case. For example, if a function just logs to an external log service, there is no need to test the exact format of the log message.
Personally, I think that the only thing really worth covering with tests is a requirement. A requirement does not have to be written down anywhere; it is a hyperplane that divides the space of possible implementations into valid and invalid ones.
I'm thinking about implementing something like a database of requirements, where each test case would register the requirements it covers. After running the build in Jenkins, I could compare the list of requirements registered by the executed tests against the full list of requirements. This would only make sense if I could expose this metric in Jenkins Cobertura alongside the class/line/conditional coverage.
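To make the registration side concrete, here is a minimal sketch of what I have in mind, assuming JUnit 5. The `@Requirement` annotation, the `RequirementCollector` extension, and the `covered-requirements.txt` file name are all hypothetical, not an existing API; the idea is just that each test declares the requirement IDs it covers and the build collects them into a file that a later Jenkins step could diff against the full requirements list.

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.api.extension.ExtensionContext;

import java.lang.annotation.*;
import java.nio.file.*;
import java.util.Arrays;
import java.util.List;

// Hypothetical annotation: marks which requirement IDs a test case covers.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Requirement {
    String[] value();
}

// JUnit 5 extension that appends the covered requirement IDs to a report file
// after each test, so a later Jenkins step can compare them against the
// complete requirements list.
class RequirementCollector implements AfterEachCallback {
    @Override
    public void afterEach(ExtensionContext context) throws Exception {
        Requirement req = context.getRequiredTestMethod().getAnnotation(Requirement.class);
        if (req == null) {
            return;
        }
        List<String> coveredIds = Arrays.asList(req.value());
        Files.write(Paths.get("covered-requirements.txt"), coveredIds,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}

// Example usage: a test case declares the requirements it verifies.
@ExtendWith(RequirementCollector.class)
class InvoiceTest {
    @Test
    @Requirement({"REQ-101", "REQ-205"})
    void totalIncludesTax() {
        // ... actual assertions against the legacy code under test
    }
}
```

The missing piece is the reporting side: turning the diff between `covered-requirements.txt` and the master list into a metric that Jenkins Cobertura can display, which leads to the question below.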
Is there any [easy] way to customize the list of metrics and add a new one to Jenkins Cobertura?
question from:
https://stackoverflow.com/questions/65851057/add-custom-jenkins-cobertura-metrics