Walkthroughs and Inspections
This appendix provides a brief overview of a technique known as a walkthrough. You may find it useful to conduct walkthroughs of the specifications developed during a systems analysis project. In order to use the concept, though, you need to know what a walkthrough is, why walkthroughs are held, who participates in a walkthrough, and what the procedures are.
After you have finished reading this appendix, you may need additional information. Two possible references are Structured Walkthroughs, 4th edition, by Edward Yourdon (Englewood Cliffs, NJ: Prentice Hall, 1989), and Handbook of Walkthroughs, Inspections, and Technical Reviews, 3rd edition, by Daniel Freedman and Gerald Weinberg (New York: Dorset House, 1990). If you’re interested in more formal “inspections,” consult Software Inspection, by Tom Gilb and Dorothy Graham (Reading, MA: Addison-Wesley, 1993) or Software Inspection: An Industry Best Practice, by David A. Wheeler (IEEE Press, 1996).
WHAT IS A WALKTHROUGH?
As the term is used in the systems development industry, a walkthrough is a peer group review of any technical product. This means that the review involves other systems analysts who are working with you, as well as users, programmers, systems designers, operations people, and others who may be involved in various aspects of the project you are working on. But a walkthrough, under normal conditions, does not include your boss, or the head of the department, or the vice-president from the user organization.
Obviously, these eminent persons will want some opportunity to review various aspects of the project, including the specifications you are working on. But they are generally less involved in the technical details than your peers, and may not be able to offer any detailed suggestions. And politics is usually a factor in such high-powered reviews. This does not mean that such reviews are bad or that they are irrelevant, merely that they are different from the walkthroughs discussed in this appendix. The danger of allowing management in a peer-group review is that politics will generally intrude, and/or the walkthrough will turn into a performance evaluation of a person, rather than a technical review of a product.
Note that there can be many different types of walkthroughs in a typical project:
- Analysis walkthroughs
- Design walkthroughs
- Code walkthroughs
- Test walkthroughs
Since the subject of this book is systems analysis, we will concentrate here on analysis walkthroughs. In practical terms, this means that a group of systems analysts, together with other interested parties, gathers to review use-case diagrams, object models, dataflow diagrams, entity-relationship diagrams, state-transition diagrams, data dictionary entries, and process specifications -- i.e., all the technical products developed by the systems analyst.
WHY DO WE HAVE WALKTHROUGHS?
The primary reason for conducting a walkthrough is to spot errors as quickly and economically as possible. As mentioned earlier in this book, it is generally much cheaper to find and remove errors as early as possible, rather than waiting until a product has been finished and sent on to the next stage of development.
There are other ways of finding errors besides a walkthrough: the person who produced the product (e.g., a DFD) can review it himself and try to find his own errors. But common sense and many years of experience in the data processing field tell us that this is often a very uneconomical way to do things. People are often unable to find their own errors, no matter how much they study their work. This is true whether one is reading a text document for typographical errors, or reading a computer program to look for bugs, or reading a DFD to look for errors. A group of knowledgeable, interested peers can often find errors much more quickly.
Another way of finding errors is to use a CASE tool of the sort discussed in Appendix A; this is roughly analogous to using a compiler to find syntax errors in a program, rather than desk-checking the program listing. If you have a CASE tool, by all means use it to identify all of the syntax errors that it is capable of finding. But just as a compiler does not find all the errors in a computer program (e.g., it does not find run-time errors and logic errors, because it performs a static analysis rather than a dynamic analysis of the program), so a CASE tool does not pretend to find all the errors in a set of specification models. A walkthrough is still a useful complement to any mechanical tools that are available.
Indeed, one of the things that the analyst workstation is highly unlikely to do is comment on the style of the products; this is something that people are eminently well qualified to do. Thus, when examining a DFD, the human reviewers may ask such questions as:
- Are there too many bubbles in the diagram?
- Have the process names been chosen in a meaningful way?
- Has the diagram been drawn so that it is esthetically pleasing? Is it likely that the user will actually read the diagram, or is he likely to be confused by it?
- Have dataflows been packaged in a meaningful way from one level to another?
- Have elementary activities been packaged together in an intelligent way to form higher-level bubbles?
In addition, there are other benefits that organizations generally find with a walkthrough approach: training and insurance. A peer group review process is an excellent vehicle for teaching new, junior members of the project team (as well as older, burned-out members of the team!) details about the application, about systems analysis, or about the details of dataflow diagram notation. And because all the members of the review group become somewhat familiar with the product (and often intimately familiar with it), the walkthrough becomes an insurance policy against the unexpected, untimely departure of the producer from the project team; someone else should be able to pick up the work done by the producer and carry it forward.
The big danger with all this is that the producer may not agree with the benefits and may consider the entire walkthrough process to be an invasion of his privacy. If the producer considers the DFDs to be his property (as opposed to a corporate asset), then he may resent having to show them to someone else. If his notion of style differs widely from that of his peers, there may be violent arguments in the walkthrough. And if the producer is opposed to the notion of training and insurance, he may well reject the concept of a walkthrough.
In general, walkthroughs succeed in an environment where the notion of a team is already accepted and in place. In a typical project, this means that each individual must understand that a serious failure or error in his or her work could endanger the success of the entire project; the potential for errors in his or her work is therefore a legitimate concern of the other team members. For more on the notion of teams, especially so-called “egoless teams,” you should read Gerald Weinberg’s classic The Psychology of Computer Programming, Silver Anniversary Edition (New York: Dorset House, 1996).
WHEN TO HAVE A WALKTHROUGH
A walkthrough can take place at virtually any point in the development of a technical product -- from the moment that it first represents a gleam in the eye of the producer, to the point where the producer is absolutely convinced the product is finished and ready to be turned over to the customer (or to the next stage of the development process). In general, it is preferable to have a walkthrough as early as possible, but not so early that the product is incomplete or riddled with many trivial errors that the author could have removed.
Let’s take the example of a dataflow diagram to illustrate this point. The producer will typically go through several iterations of (1) discussing the relevant area of the system with the user; (2) mentally visualizing a DFD; (3) sketching various incomplete versions of the DFD on paper napkins and the back of envelopes; (4) sketching a relatively clean version of the DFD on a clean sheet of paper; (5) entering the details about the DFD into an automated analyst workstation of the sort discussed in Appendix A; (6) conducting whatever error-checking operations are available in the workstation product to remove syntax errors; (7) printing out a final version of the DFD on a plotter or laser printer; and (8) delivering the final DFD to the boss with a triumphant announcement that the task was finished ahead of schedule.
In this case, it’s too early to have a meaningful walkthrough at stage (1), (2) or (3); the walkthrough can be conducted effectively at stage (4), (5) or (6); and stages (7) and (8) are too late. Precisely when one has the walkthrough will depend on how much automated support is available, how quickly it can be made available (i.e., does the analyst have to wait for four days to get access to the CASE tool, which is being shared with 27 other analysts?), and how much it costs to use the automated support.
The primary reason for avoiding a walkthrough at a late stage is that the producer will have invested so much of his ego in the product that he will generally be reluctant to make any changes, other than corrections of gross errors (and sometimes not even that!). Then, too, the producer may have needlessly wasted a lot of time removing errors from his product, when the review team could have done it more quickly and economically if they had seen the product at an earlier point.
And, finally, one must remember the psychology of the reviewers themselves: they are spending their own time to participate in finding someone else’s errors, and they will feel this way to some extent, no matter how much of an egoless team they claim to be. Given this perception, the reviewers should not be shown a sloppy, incomplete product; but they should also not be shown a finished, frozen, perfect product.
If you are going to spend an hour of your time reviewing your colleague’s DFD, it’s nice to know that you’ve done something useful by finding an error that the producer had not seen on his or her own. If, on the other hand, you spend an hour staring at a product without finding anything to comment on, there is a natural human tendency to view the effort as somewhat of a waste of time and to be less available the next time you are asked to participate in a walkthrough.
ROLES IN A WALKTHROUGH
Many organizations conduct walkthroughs with no more training or formalism than what has been described above. But many have found that it helps to introduce some formalism or structure to the review; hence the common term structured walkthrough. One common characteristic of a structured walkthrough is a set of formal roles that are played by the reviewers. Different reviewers can play different roles from one walkthrough to the next; and in some cases a reviewer might play more than one role. Here are the common roles found in a walkthrough:
- Presenter, the person who explains to the reviewing group what the product does, what assumptions were made when it was created, and so on. In many cases, the presenter is the producer, but not always. Some organizations feel that if the producer presents his or her own product, then (1) the product may be so cryptic that it would never stand on its own if the producer were not immediately available to explain it, and (2) the producer may subtly (and presumably innocently) brainwash the reviewing audience into making the same oversights and errors of omission and commission that he or she did.
- Chairperson, the person who organizes the meeting and runs it. His or her purpose is to keep the discussion moving along in an orderly, constructive way, and to prevent both tangential discussions and personal criticism of the presenter. For obvious reasons, there is a temptation to let the project manager serve as the chairperson; but for reasons described earlier in this appendix, a manager’s presence in the peer group review often changes the tenor of the review in a very negative way.
- Scribe, the person who makes a written record of the significant events of the review. Aside from such trivial things as recording when the walkthrough took place, whose product was being reviewed, and who attended the walkthrough, the scribe writes notes about significant technical questions that were raised, bugs that were found, and suggestions for improvement or modification made by the reviewers. Depending on the technology available, this person might take notes on a pad of paper, or type information into a word processor, or perhaps even enter information into a formal “issue management” tool.
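Whatever the scribe's medium, each entry typically captures the same few facts. The record below is a hypothetical sketch of such an entry; the field names and the WalkthroughIssue type are assumptions made for this example, not a standard schema or a real tool's format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of one issue a scribe might record during a
# walkthrough. Field names are illustrative assumptions only.
@dataclass
class WalkthroughIssue:
    product: str          # what was reviewed (e.g., "order-entry DFD")
    raised_by: str        # reviewer who raised the issue
    description: str      # the technical question, bug, or suggestion
    kind: str = "bug"     # "bug", "question", or "style"
    raised_on: date = field(default_factory=date.today)
    resolved: bool = False  # resolved by the producer after the meeting

issue = WalkthroughIssue(
    product="Order-entry DFD, level 1",
    raised_by="J. Reviewer",
    description="Dataflow 'invoice' appears on the child diagram "
                "but not on the parent.",
)
print(issue.kind, issue.resolved)
```

Note that the record deliberately has no "solution" field: as discussed below under procedures, issues are raised in the walkthrough but resolved afterward by the producer.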
- Maintenance oracle, a reviewer whose main orientation is the long-term maintenance of the product. He will generally be concerned that the product may be too idiosyncratic or insufficiently documented. He is likely to play a larger role in design walkthroughs and code walkthroughs than in systems analysis walkthroughs.
- Standards bearer, the person who ensures that the product is consistent with the overall standards that have been adopted by the project and/or the organization. Sometimes the primary role of this person is to advise the producer (and other members of the team) as to whether the product will ultimately be judged acceptable (in terms of adherence to standards) by a formal quality assurance group.
There are two final points to make about these roles: first, keep in mind that the roles can change from one walkthrough to the next. And, second, if the user is an active participant in the project, remember to include him or her among the reviewers.
WALKTHROUGH PROCEDURES
As indicated in the previous section, successful walkthroughs are generally characterized by a set of formal roles and procedures. The procedures vary from organization to organization, but the following list is typical:
- Schedule the walkthrough a day or two in advance. Make sure that you distribute appropriate materials to the reviewers. If the walkthrough is scheduled without sufficient advance warning, the reviewers will not have the chance to study the product before they arrive at the walkthrough.
- Ensure that the reviewers have indeed spent some time reviewing the product. One easy way of doing this is to ask each reviewer to bring to the walkthrough at least one positive comment and one negative comment about the product. The danger here is that some of the reviewers may be so busy or so uninterested in the product that they will do no advance work and just sit silently in the walkthrough without making any contribution.
- Ask the presenter to make a brief presentation of the product. This is often done using flipcharts, overhead transparency projectors, and the like. This is where the group literally walks through the product.
- Solicit comments from the reviewers. This is normally orchestrated by the chairperson, who may decide to go around the room, asking each reviewer in turn to point out a bug or make a comment about the product.
- Ensure that issues are presented, but not resolved, in the walkthrough. This is especially important if a nontrivial bug has been pointed out; let the producer figure out how to solve it on his own time, rather than allowing an unstructured brainstorming session to ensue. This is also an important procedure when issues of style are raised: the producer may disagree with the comments, and it’s better for him to consider them after the meeting (or talk separately with the person who made the style suggestion).
- Keep the walkthrough relatively brief -- no more than an hour. People cannot be expected to maintain a high level of concentration for more than about an hour; their attention will wander, and there is a serious danger that the walkthrough will degenerate into a bull session.
- Take a vote on the results of the walkthrough meeting. The typical recommendations made by the walkthrough reviewers are (1) “We think the product is OK as it stands,” (2) “We think some errors should be fixed and some minor style issues should be addressed, but we trust the producer to make the appropriate changes without any further reviews,” or (3) “We have found a sufficient number of bugs and/or style issues that we would like to have another walkthrough when the producer has made all the appropriate changes.” Depending on the nature of the team and the way in which people are assumed to take responsibility for their work in the organization, this vote may be binding, or it may simply represent an optional suggestion made by the reviewers.
Though the walkthrough approach is a simple and straightforward reviewing process, it is not used as widely as you might think. One reason for this is the apparent increase in time required to conduct the walkthroughs: it can take up as much as 5% to 10% of the total project time. On the other hand, most organizations that have used walkthroughs have reported dramatic reductions in the number of errors that go undetected. Perhaps the most important reason for not using walkthroughs is that some programmers and systems analysts continue to regard their programs and their dataflow diagrams as their personal property, rather than a corporate asset. Thus, they prefer not to show their work to others and strongly resist any criticisms or suggestions for improvement. This is a dangerous point of view; more and more organizations are beginning to realize that they must introduce some form of peer group review if they are to succeed at improving the quality of the systems they produce.