OK, so a lot of code review is fairly routine. But occasionally there are changes that broadly impact existing complex, fragile code.

In this situation, the amount of time it would take to verify the safety of the changes, the absence of regressions, and so on could be enormous, perhaps even exceeding the time the development itself took.

What to do in this situation?

- Merge and hope nothing slips through? (Not advocating that!)
- Do the best one can and try only to spot any obvious flaws? (Perhaps this is the most code review should aim for anyway.)
- Merge and test extra-thoroughly, as a better alternative to doing code review at all?

This is not specifically a question about whether testing should be done as part of a code review. It is a question about what the best options are in the situation as described, especially with a pressing deadline, no comprehensive suite of unit tests available, and unit tests not viable for the fragmented code that has changed.

EDIT: I get the impression that a few of the answers/comments so far have picked up on my phrase "broadly impact" and taken it to mean that the change involved a large number of lines of code. I can understand that interpretation, but it wasn't really my intention. By "broadly impact", I mean that the potential for regression is high because of the interconnectedness of the codebase, or because of the scope of knock-on effects, not necessarily that the change itself is a large one. For example, a developer might find a way to fix a bug with a single line by calling an existing high-level routine that cascades calls to many lower-level routines. Testing and verifying that the bug fix worked is easy; manually validating (via code review) the impact of all the knock-on effects is much more difficult.
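To make the "broadly impacting one-liner" concrete, here is a minimal hypothetical sketch (all class and method names are invented for illustration, not taken from any real codebase). The fix itself is a single added call, but the routine it invokes cascades into layout recalculation, cache invalidation, and arbitrary observer callbacks registered elsewhere:

```python
# Hypothetical illustration: a one-line bug fix whose call cascades widely.
class Document:
    def __init__(self):
        self.text = ""
        self.layout_recalcs = 0
        self.cache_flushes = 0
        self.observers = []          # callbacks registered by other modules

    # --- lower-level routines the fix indirectly reaches ---
    def recalculate_layout(self):
        self.layout_recalcs += 1     # touches shared layout state

    def invalidate_caches(self):
        self.cache_flushes += 1      # discards cached derived data

    # --- existing high-level routine the fix calls ---
    def refresh_all(self):
        self.recalculate_layout()
        self.invalidate_caches()
        for notify in self.observers:
            notify(self)             # every observer is a potential regression site

    def apply_edit(self, text):
        self.text = text
        # The "one-line fix": the view used to go stale after an edit,
        # so we now refresh everything. Verifying that the fix works is
        # easy; reviewing every cascaded effect is not.
        self.refresh_all()
```

A reviewer can confirm in seconds that `apply_edit` now refreshes the view, but assessing whether the extra layout pass, cache flush, and observer notifications are safe requires knowing every caller and callback in the system, which is exactly the review burden the question is about.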