We did the Gilded Rose kata at one of our local Python user group meetings.
What I remember most about it, other than the complicated logic, was using 'coverage' in branch mode to find two places where the manually developed and pretty comprehensive unit test suite was incomplete.
How does one pick the test cases for something like the GildedRose? Emily Bache, who provided the test cases for this essay and who ran the Python user group kata that I participated in, developed the test cases manually.
Are those provided test cases sufficient? As this essay shows, they are not.
The solution shown here "recorded bunch of input examples and output results from the program we wanted to refactor." However, are those recorded examples themselves sufficient?
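For concreteness, a minimal golden-master sketch in Python might look like the following; the golden_master.txt name, the 30-day run, and the texttest_fixture.py entry point are my own assumptions rather than anything prescribed by the essay:

```python
import subprocess
import sys
from pathlib import Path

GOLDEN = Path("golden_master.txt")   # recorded once from the known-good, unrefactored code
DAYS = "30"                          # arbitrary number of simulated days

def run_fixture() -> str:
    # Assumes a fixture script that prints every item's state for each day.
    result = subprocess.run(
        [sys.executable, "texttest_fixture.py", DAYS],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def test_output_matches_golden_master():
    # First run records the master; every later run must reproduce it byte for byte.
    if not GOLDEN.exists():
        GOLDEN.write_text(run_fixture())
    assert run_fixture() == GOLDEN.read_text()
```

The point of the comparison is only to pin down current behavior; it says nothing about whether the recorded inputs reach every path.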
Another solution is to use branch coverage testing to identify untested paths, and write unit tests which specifically target those paths. When I did this using Bache's unit test suite, I found that two branches were not exercised.
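Roughly, the workflow with coverage.py looks like this (the test file name may differ in your copy of the kata):

```
coverage run --branch -m pytest test_gilded_rose.py
coverage report -m    # the "Missing" column lists partial branches such as 23->26
coverage html         # htmlcov/index.html highlights the untaken branch arms
```

Each partial branch points at a condition whose true or false side the suite never takes; writing a test that forces that side closes the gap.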
Even coverage analysis may not be sufficient. I mention it to suggest an alternate technique that I found useful in working with this kata.
Coverage-assisted test development is, in my opinion, an underused technique.
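Purely as an illustration (not necessarily one of the two branches I actually found), a targeted test for an easily missed path might look like this, assuming the Item and GildedRose names from the standard Python version of the kata:

```python
from gilded_rose import GildedRose, Item  # module/class names per the standard Python kata

def test_backstage_pass_drops_to_zero_after_concert():
    # Exercises the branch where a backstage pass's quality is zeroed once sell_in goes negative.
    items = [Item("Backstage passes to a TAFKAL80ETC concert", 0, 10)]
    GildedRose(items).update_quality()
    assert items[0].quality == 0
```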
"How does one pick the test cases for something like the GildedRose? Emily Bache, who provided the test cases for this essay and who ran the Python user group kata that I participated in, developed the test cases manually."
I created my tests based on the empty example from Emily:
https://github.com/emilybache/GildedRose-Refactoring-Kata/bl...
and right now I assume my own tests cover all the edge cases, because I discovered the edge cases with the golden master technique and added the missing tests to the gilded_rose_spec.rb file.