OpenUP: defect trend analysis/metrics at the end of an iteration [message #49178]
Fri, 11 April 2008 08:41
Roman Smirak
Hi,
I would like to share an observation with you: I have noticed that test/quality
results (number of defects, defect trends, ..., coverage, other code quality
metrics) provide a great basis for an iteration/sprint retrospective.
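To make that concrete, here is a minimal sketch of the kind of per-iteration
defect trend I have in mind (this is purely my own illustration, not anything
from OpenUP or RUP; the defect log and its "iteration"/"status" fields are
assumptions):

# Minimal sketch: summarize defects opened and still open per iteration
# from a hypothetical defect log, so the trend can be discussed in the
# retrospective. Field names ("iteration", "status") are assumptions.
from collections import defaultdict

defect_log = [
    {"id": 1, "iteration": "I1", "status": "closed"},
    {"id": 2, "iteration": "I1", "status": "open"},
    {"id": 3, "iteration": "I2", "status": "open"},
    {"id": 4, "iteration": "I2", "status": "open"},
]

opened = defaultdict(int)
still_open = defaultdict(int)
for defect in defect_log:
    opened[defect["iteration"]] += 1
    if defect["status"] == "open":
        still_open[defect["iteration"]] += 1

for iteration in sorted(opened):
    print(f"{iteration}: opened={opened[iteration]}, "
          f"still open={still_open[iteration]}")

Even a simple table like this, put on the wall at the retrospective, tends to
trigger a useful discussion about where the defects come from.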
If I check the Assess Results task or the Conduct Project Retrospective
guideline, all I find are abstract or vague statements like: "Encourage the team
to capture all information (project data, opinions, and so on) by using various
tools (white boards, charts, timelines) that provide a visual representation so
that the team can identify relationships and emerging patterns."
In my experience, people usually don't analyze defects at the end of an
iteration, and they don't get that practice from the Scrum book either.
If I could vote, I would put more emphasis on defect/quality metrics analysis
in the context of the iteration assessment. Or is that against some principle?
AFAIK, the standard RUP Iteration Assessment template guides you to analyze
this data.
Regards,
Roman