2.1 Introduction
The purpose of this dissertation is to
increase understanding of how experienced practitioners as individuals
evaluate diagrammatic models in Formal Technical Review (FTR). In this
research, those aspects of FTR relating to evaluation of an artifact by
practitioners as individuals are referred to as Practitioner Evaluation (PE).
The relevant FTR literature is reviewed for theory and research applicable to
PE. However, FTR developed pragmatically without relation to underlying
cognitive theory, and the literature consists primarily of case studies with a
very limited number of controlled experiments.
Other work on the evaluation of
diagrams and graphs is also reviewed for possible theoretical models that could
be used in the current research. Human-Computer Interaction (HCI) is an
Information Systems area that has drawn extensively on cognitive science to
develop and evaluate Graphical User Interfaces (GUIs). A brief overview of
cognitive-based approaches utilized in HCI is presented. One of these
approaches, the Human Information Processing System model, in which the
human mind is treated as an information-processing system, provides the cognitive
theoretical model for this research and is discussed separately because of its
importance. Work on attention and the comprehension of graphics is also briefly
reviewed.
Two further areas are identified as
necessary for the development of the research task and tools: (1) types of
diagrammatic models and (2) types of software defects. Relevant work in each of
these areas is briefly reviewed and, since typologies appropriate to this
research were not located, appropriate typologies are developed.
2.2 Formal
Technical Review
Software review as a technique to
detect software defects is not new -- it has been used since the earliest days
of programming. For example, Babbage and von Neumann regularly asked colleagues
to examine their programs [Freedman and Weinberg 1990], and in the 1950s and
1960s, large software projects often included some type of software review
[Knight and Myers 1993]. However, the first significant formalization of
software review practice is generally considered to be the development by
Michael Fagan [1976] of a species of FTR that he called "inspection."
Following Tjahjono [1996, 2], Formal
Technical Review may be defined as any "evaluation technique that involves
the bringing together of a group of technical [and sometimes non-technical]
personnel to analyze a software artifact, typically with the goal of
discovering errors or other anomalies." As such, FTR has the following
distinguishing characteristics:
1. Formal
process.
2. Use of groups
or teams. Most FTR techniques involve real
groups, but nominal groups are used as well.
3. Review by
knowledgeable individuals or practitioners.
4. Focus on
detection of defects.
2.2.1 Types of
Formal Technical Review
While the focus of this research is on
the individual evaluation aspects of reviews, for context several other FTR
techniques are discussed as well. Among the most common forms of FTR are the
following:
1. Desk
Checking, or reading over a program by hand while sitting at
one's desk, is the oldest software review technique [Adrion et al. 1982].
Strictly speaking, desk checking is not a form of FTR since it does not involve
a formal process or a group. Moreover, desk checking is generally perceived as
ineffective and unproductive due to (a) its lack of discipline and (b) the general
ineffectiveness of people in detecting their own errors. To correct for the
second problem, programmers often swap programs and check each other's work.
Since desk checking is an individual process that does not involve group dynamics, research in this area would have been relevant; however, no such research applicable to the current study was found.
It should be noted that Humphrey
[1995] has developed a review method, called Personal Review (PR), which
is similar to desk checking. In PR, each programmer examines his own products
to find as many defects as possible utilizing a disciplined process in
conjunction with Humphrey's Personal Software Process (PSP) to improve his own
work. The review strategy includes the use of checklists to guide the review process,
review metrics to improve the process, and defect causal analysis to prevent
the same defects from recurring in the future. The approach taken in developing
the Personal Review process is an engineering one; no reference is made in
Humphrey [1995] to cognitive theory.
2. Peer Rating is a technique in which anonymous programs are
evaluated in terms of their overall quality, maintainability, extensibility,
usability and clarity by selected programmers who have similar backgrounds
[Myers 1979]. Shneiderman [1980] suggests that peer ratings of programs are
productive, enjoyable, and non-threatening experiences. The technique is often
referred to as Peer Reviews [Shneiderman 1980], but some authors use the term
peer reviews for generic review methods involving peers [Paulk et al. 1993;
Humphrey 1989].
3. Walkthroughs are presentation reviews in which a review participant,
usually the software author, narrates a description of the software and the
other members of the review group provide feedback throughout the presentation
[Freedman and Weinberg 1990; Gilb and Graham 1993]. It should be noted that the
term "walkthrough" has been used in the literature variously. Some
authors unite it with "structured" and treat it as a disciplined,
formal review process [Myers 1979; Yourdon 1989; Adrion et al. 1982]. However,
the literature generally describes walkthrough as an undisciplined process
without advance preparation on the part of reviewers and with the meeting focus
on education of participants [Fagan 1976].
4. Round-robin Review is an evaluation process in which a copy of the review materials is made available and routed to each participant; each reviewer writes comments and questions concerning the materials and passes them on to another reviewer, with the materials and accumulated comments eventually reaching the moderator or author [Hart 1982].
5. Inspection was developed by
Fagan [1976, 1986] as a well-planned and well-defined group review process to detect
software defects – defect repair occurs outside the scope of the process. The
original Fagan Inspection (FI) is the most cited review method in the
literature and is the source for a variety of similar inspection techniques
[Tjahjono 1996]. Among the FI-derived techniques are Active Design Review
[Parnas and Weiss 1987], Phased Inspection [Knight and Myers 1993], N-Fold
Inspection [Schneider et al. 1992], and FTArm [Tjahjono 1996]. Unlike the
review techniques previously discussed, inspection is often used to control the
quality and productivity of the development process.
A Fagan
Inspection consists of six well-defined phases:
i. Planning. Participants are selected and the materials to be
reviewed are prepared and checked for review suitability.
ii. Overview. The author
educates the participants about the review materials through a presentation.
iii. Preparation. The participants
learn the materials individually.
iv. Meeting. The reader (a
participant other than the author) narrates or paraphrases the review materials
statement by statement, and the other participants raise issues and questions.
Questions continue on a point only until an error is recognized or the item is
deemed correct.
v. Rework. The author fixes
the defects identified in the meeting.
vi. Follow-up. The
"corrected" products are reinspected.
Practitioner
Evaluation is primarily associated with the Preparation phase.
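The ordered, gated nature of the process can be summarized in a short sketch. The following is a minimal illustration only (the phase names follow Fagan's list above; the encoding and the treatment of re-inspection are this sketch's assumptions, not Fagan's notation):

```python
from enum import Enum, auto

class FaganPhase(Enum):
    """The six phases of a Fagan Inspection, in execution order."""
    PLANNING = auto()
    OVERVIEW = auto()
    PREPARATION = auto()  # individual study: the locus of Practitioner Evaluation
    MEETING = auto()
    REWORK = auto()
    FOLLOW_UP = auto()    # "corrected" products are reinspected

def next_phase(current: FaganPhase) -> FaganPhase | None:
    """Advance through the phases in order; Follow-up ends the cycle
    (a failed follow-up starts a new inspection of the reworked product)."""
    members = list(FaganPhase)
    i = members.index(current)
    return members[i + 1] if i + 1 < len(members) else None
```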
In addition to
classification by technique-type, FTR may also be classified on other
dimensions, including the following:
A. Small vs. Large Team Reviews. Siy [1996] classifies reviews into those conducted by small
(1-4 reviewers) [Bisant and Lyle 1996] and large (more than 4 reviewers) [Fagan
1976, 1986] teams. If each reviewer depends on different expertise and
experiences, a large team should allow a wider variety of defects to be
detected and thus better coverage. However, a large team requires more effort
due to more individuals inspecting the artifact, generally involves greater
scheduling problems [Ballman and Votta 1994], and may make it more difficult
for all participants to participate fully.
B. No vs. Single vs. Multiple
Session Reviews. The traditional Fagan Inspection provided for one
session to inspect the software artifact, with the possibility of a follow-up
session to inspect corrections. However, variants have been suggested.
Humphrey [1989] comments
that three-quarters of the errors found in well-run inspections are found
during preparation. Based on an economic analysis of a series of inspections at
AT&T, Votta [1993] argues that inspection meetings are generally not
economic and should be replaced with depositions, where the author and
(optionally) the moderator meet separately with inspectors to collect their
results.
On the other
hand, some authors [Knight and Myers 1993; Schneider et al. 1992] have argued
for multiple sessions, conducted either in series or parallel. Gilb and Graham
[1993] do not use multiple inspection sessions but add a root cause analysis
session immediately after the inspection meeting.
C. Nonsystematic vs. Systematic Defect-Detection
Technique Reviews. The most frequently used detection methods (ad hoc and
checklist) rely on nonsystematic techniques, and reviewer responsibilities are
general and not differentiated for single session reviews [Siy 1996]. However,
some methods employ more prescriptive techniques, such as questionnaires
[Parnas and Weiss 1987] and correctness proofs [Britcher 1988].
D. Single Site vs.
Multiple Site Reviews. The traditional FTR techniques have assumed that the
group-meeting component would occur face-to-face at a single site. However,
with improved telecommunications, and especially with computer support (see
item F below), it has become increasingly feasible to conduct even the group
meeting from multiple sites.
E. Synchronous vs. Asynchronous
Reviews. The traditional FTR techniques have also assumed that
the group meeting component would occur in real-time; i.e., synchronously.
However, some newer techniques that eliminate the group meeting or are based on
computer support utilize asynchronous reviews.
F. Manual vs. Computer-supported
Reviews. In recent years, several computer supported review
systems have been developed [Brothers et al. 1990; Johnson and Tjahjono 1993;
Gintell et al. 1993; Mashayekhi et al. 1994]. The type of support varies from
simple augmentation of the manual practices [Brothers et al. 1990; Gintell et
al. 1993] to totally new review methods [Johnson and Tjahjono 1993].
2.2.2 Economic
Analyses of Formal Technical Review
Wheeler et al. [1996], after reviewing
a number of studies that support the economic benefit of FTR, conclude that
inspections reduce the number of defects throughout development, cause defects
to be found earlier in the development process where they are less expensive to
correct, and uncover defects that would be difficult or impossible to discover by
testing. They also note "these benefits are not without their costs,
however. Inspections require an investment of approximately 15 percent of the
total development cost early in the process [p. 11]."
In discussing overall economic
effects, Wheeler et al. cite Fagan [1986] to the effect that investment in
inspections has been reported to yield a 25-to-35 percent overall increase in
productivity. They also reproduce a graphical analysis from Boehm [1987] that
indicates inspections reduce total development cost by approximately 30%.
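To put these percentages on a common footing, consider a back-of-the-envelope calculation (the baseline budget is an arbitrary assumption, and the sketch further assumes Boehm's 30% reduction is measured net of the inspection investment, which the sources do not state explicitly):

```python
# Illustrative arithmetic using the figures cited above.
baseline_cost = 1_000_000   # assumed development cost without inspections ($)
cost_reduction = 0.30       # ~30% total-cost reduction (Boehm [1987])
inspection_share = 0.15     # inspections ~15% of development cost
                            # (Wheeler et al. [1996])

cost_with_inspections = baseline_cost * (1 - cost_reduction)
inspection_spend = inspection_share * cost_with_inspections

print(f"cost with inspections:  ${cost_with_inspections:,.0f}")  # $700,000
print(f"  spent on inspections: ${inspection_spend:,.0f}")       # $105,000
print(f"net saving:             ${baseline_cost - cost_with_inspections:,.0f}")
```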
The Wheeler et al. [1996] analysis
does not specify the relative value of Practitioner Evaluation to FTR, but two
recent economic analyses provide indications.
· Votta [1993]. After analyzing
data collected from 13 traditional inspections conducted at AT&T, Votta
reports that the approximately 4% increase in faults found at collection
meetings (synergy) does not economically justify the development delays caused
by the need to schedule meetings and the additional developer time associated
with the actual meetings. He also argues that it is not cost-effective to use
the collection meeting to reduce the number of items incorrectly identified as
defective prior to the meeting ("false positives"). Based on these
findings, he concludes that almost all inspection meetings requiring all
reviewers to be present should be replaced with Depositions, which are
three-person meetings with only the author, moderator, and one reviewer present. (Votta's arithmetic is restated in the sketch at the end of this section.)
· Siy [1996]. In his analysis
of the factors driving inspection costs and benefits, Siy reports that changes
in FTR structural elements, such as group size, number of sessions, and
coordination of multiple sessions, were largely ineffective in improving the
effectiveness of inspections. Instead, inputs into the process (reviewers and
code units) accounted for more outcome variation than structural factors. He
concludes by stating "better techniques by which reviewers detect
defects, not better process structures, are the key to improving inspection
effectiveness [Abstract, p. 2]." (emphasis added)
Votta's analysis effectively
attributes most of the economic benefit of FTR to PE, and Siy's explicitly
states that better PE techniques "are the key to improving inspection
effectiveness." These findings, if supported by additional research, would
further support the contention that a better understanding of Practitioner
Evaluation is necessary.
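Votta's argument can be restated numerically. In the sketch below, only the approximately 4% synergy figure comes from Votta [1993]; the fault count, team size, and meeting length are assumptions chosen purely to make the arithmetic concrete:

```python
# Cost of the extra (synergy) faults found only at the collection meeting.
faults_found_in_preparation = 100  # assumed, across all reviewers
meeting_synergy = 0.04             # ~4% more faults found at the meeting (Votta)
reviewers = 5                      # assumed team size
meeting_hours_each = 2             # assumed meeting length per reviewer

extra_faults = meeting_synergy * faults_found_in_preparation  # 4 faults
meeting_effort = reviewers * meeting_hours_each               # 10 hours

print(f"reviewer-hours per synergy fault: {meeting_effort / extra_faults:.1f}")
# To this direct effort Votta adds the calendar delay of finding a meeting
# slot common to all reviewers -- the cost a deposition avoids.
```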
2.2.3 Psychological Aspects of FTR
Work on the psychological aspects of FTR can be
categorized into four groups.
1. Egoless Programming. Gerald Weinberg
[1971] began the examination of psychological issues associated with software
review in his work on egoless programming. According to Weinberg, programmers
are often reluctant to allow their programs to be read by other programmers
because the programs are often considered to be an extension of the self and
errors discovered in the programs to be a challenge to one's self-image. Two
implications of this theory are as follows:
i. The ability of a programmer to find errors in his own
work tends to be impaired since he tends to justify his own actions, and it is
therefore more effective to have other people check his work.
ii. Each
programmer should detach himself from his own work. The work should be
considered a public property where other people can freely criticize, and thus,
improve its quality; otherwise, one tends to become defensive, and reluctant to
expose one's own failures.
These two concepts have provided the justification for FTR groups, as well as for the establishment, in many software organizations, of independent quality assurance groups that specialize in finding software defects [Humphrey 1989].
2. Role of
Management. Another
psychological aspect of FTR that has been examined is the recording of data and
its dissemination to management. According to Dobbins [1987], this must be done
in such a way that individual programmers will not feel intimidated or
threatened.
3. Positive
Psychological Impacts. Hart [1982] observes that reviews can make one more
careful in writing programs (e.g., double checking code) in anticipation of
having to present or share the programs with other participants. Thus, errors
are often eliminated even before the actual review sessions.
4. Group Process. Most FTR methods
are implemented using small groups. Therefore, several key issues from small
group theory apply to FTR, such as groupthink (the tendency to suppress dissent in
the interests of group harmony), group deviants (influence by minority), and
domination of the group by a single member. Other key issues include social
facilitation (presence of others boosts one's performance) and social loafing
(one member free rides on the group's effort) [Myers 1990]. The issue of
moderator domination in inspections is also documented in the literature
[Tjahjono 1996].
Perhaps the most
interesting research from the perspective of the current study is that of Sauer
et al. [2000]. This research is unusual in that it has an explicit theoretical
basis and outlines a behaviorally motivated program of research into the
effectiveness of software development technical reviews. The finding that most
of the variation in effectiveness of software development technical reviews is
the result of variations in expertise among the participants provides
additional motivation for developing a solid understanding of Formal Technical
Review at the individual level.
It should be noted that this work, while based on psychological theory, does not address the question of how practitioners actually evaluate software artifacts.
2.3 Approaches to
the Evaluation of Diagrammatic Models
The focus of this dissertation is the
exploration of how practitioners as individuals evaluate diagrammatic models
for semantic errors that would cause the resulting system not to meet the functionality,
performance, security, usability, maintainability, testability
or other requirements necessary to the purposes of the system [Bass et al.
1998; Boehm et al. 1978].
2.3.1 General
Approaches
Information Systems is an applied
discipline that traditionally adapts concepts and techniques from reference
disciplines such as management, psychology, and engineering to solve
information systems problems. In searching for a theoretical model that could
be used in the current research, three separate approaches were explored.
1. Computer Aided
Design (CAD). Since CAD uses diagrams to specify the design and
construction of physical entities [Yoshikawa and Warman 1987], it seemed
reasonable to assume that techniques developed to evaluate CAD diagrams might
be adapted for the evaluation of diagrams used to specify software systems.
However, a review of the literature found relatively little on the evaluation of CAD diagrams, and what was found pertained to the formal (i.e., "mathematical") evaluation of circuit designs. Discussion with
William Miller of the University
of South Florida Engineering
faculty supported this conclusion [Miller 2000], and this approach was
abandoned.
2. Radiological
Images. While x-rays are not technically diagrams and do not
specify a system, they are visual artifacts and do convey information.
Therefore, it was reasoned that rules for reading radiological images might
provide insights into the evaluation of software diagrammatic models. Review of
the literature found nothing appropriate. More importantly, as further
conceptual work was done regarding the purposes of evaluating software
diagrammatic models, it became apparent that the reading of x-rays was not an
appropriate analog. This approach was therefore also abandoned.
3. Human-Computer
Interaction (HCI). In reviewing the HCI literature, the following facts
were noted:
· The language,
concepts, and purposes of HCI are very similar to those of information systems,
and it is arguable that HCI is a part of information systems. (See, for
example, the Huber [1983] and Robey [1983] debate on cognitive style and DSS
design.)
· HCI is solidly
rooted in psychology, a traditional information systems reference discipline.
· Computer
user-interfaces almost always have a visual component and are increasingly
diagrammatic in design.
· User-interfaces
can be and are evaluated in terms of the semantic error criteria described
above; i.e., defects in functionality, performance, efficiency, etc.
Based on these
facts, a decision was made to attempt to identify an HCI evaluation technique
that could be adapted for evaluation of software diagrammatic models.
2.3.2
Human-Computer Interaction
Human-computer interaction (HCI) has
been defined as "the processes, dialogues . . . and actions that a user
employs to interact with a computer environment [Baecker and Buxton
1987, 40]."
2.3.2.1 HCI
Evaluation Techniques
Mack and Nielsen [1994] identify eight
usability inspection techniques:
1. Heuristic
Evaluation. Heuristic evaluation is an informal method that involves
having usability specialists judge whether each dialogue element conforms to
established usability principles or heuristics. Nielsen, the author of the
technique, recommends that evaluators go through the interface twice and notes
that "[t]his two-pass approach is similar in nature to the phased
inspection method for code inspection (Knight and Myers 1993) [Nielsen 1994,
29]."
2. Guideline
Reviews. Guideline reviews are inspections where an interface is
checked for conformance with a comprehensive list of guidelines. Nielsen and
Mack note that "since guideline documents contain on the order of 1,000
guidelines, guideline reviews require a high degree of expertise and are fairly
rare in practice [Nielsen and Mack 1994, 5]."
3. Pluralistic
Walkthroughs. A pluralistic walkthrough is a meeting in which users,
developers, and human factors experts step through a scenario, discussing
usability issues associated with dialogue elements involved in the scenario
steps.
4. Consistency
Inspections. Consistency inspections have designers representing
multiple projects inspect an interface to see whether it is consistent with other
interfaces in the "family" of products.
5. Standards
Inspections. In a standards inspection, an expert on some interface
standard checks the interface for compliance with that standard.
6. Cognitive
Walkthroughs. Cognitive walkthroughs use an explicitly detailed
procedure to simulate a user's problem-solving process at each step in the
human-computer dialog, checking to see if the simulated user's goals and memory
for actions can be assumed to lead to the next correct action.
7. Formal
Usability Inspections. Formal usability inspections are designed to be very
similar to the Fagan Inspection used in code reviews.
8. Feature
Inspections. In feature inspections the focus is on the
functionality provided by the software system being inspected; i.e., whether
the function as designed meets the needs of the intended end users.
These HCI evaluation techniques are
clearly similar to FTR in that they involve the use of knowledgeable
individuals to detect defects in a software artifact; most also involve a
formal process and a group.
2.3.2.2 Cognitive
Psychology and HCI
To assist in the design of better dialogues,
HCI researchers have attempted to apply the findings of cognitive psychology
since, all other factors being equal, an interface that demands fewer short-term memory resources, or that can be manipulated more quickly because it requires fewer cognitive steps, should be superior. The following is a brief
overview of cognitive-based approaches utilized in HCI.
· Human Information
Processing System (HIPS). During the 1960s and 1970s, the main paradigm in
cognitive psychology was to characterize humans as information processors
that processed information much like a computer. While some of the assumptions
of the original model proved to be overly restrictive and other approaches have
become popular, updated HIPS models continue to be useful for HCI research.
Given the importance of this model for this research, a more complete treatment
is provided in Section 2.4.1 below.
· Computational approaches also adopt the
computer metaphor as a theoretical framework but conceptualize the cognitive
system in terms of the goals, planning, and action involved in
task performance. Tasks are analyzed not in terms of the amount of information
processed in the various stages but in terms of how the system deals with new
information [Preece et al. 1994].
· Connectionist approaches simulate behavior
through neural network or Parallel Distributed Processing (PDP) models in which
cognition is represented as a web of interconnected nodes. Connectionist models
have become increasingly accepted in cognitive psychology [Ashcraft 1994], and
this fact has been reflected in HCI research [Preece et al. 1994].
· Human
Factors/Actors. Bannon [1991, 28] argues that the term human factors
should be replaced with the term human actors to indicate "emphasis
is placed on the person as an autonomous agent that has the capacity to
regulate and coordinate his or her behavior, rather than being a simple passive
element in a human-machine system." The change is supposed to facilitate
focusing on the way people act in real work settings instead of viewing them as
information processors.
· Distributed
Cognition. An emerging theoretical framework is distributed
cognition. The goal of distributed cognition is to conceptualize cognitive
activities as embodied and situated within the work context in which they occur
[Hutchins 1990; Hutchins and Klausen 1992].
The human factors/actors
and distributed cognition models are not appropriate to the
current study. The connectionist models show great promise but are not
yet sufficiently developed to be useful for this research. The information
processor models are, however, appropriate and sufficiently mature; they provide the primary cognitive theoretical base for the dissertation. Computational
approaches are also utilized in that the study analyzes the cognitive system in
terms of the task planning involved in task performance.
2.4 Human
Information Processing System (HIPS) Models and Related Topics
2.4.1 General
Model
One of the major paradigms in
cognitive science is the Human Information Processing System model. In
this model, humans are characterized as information processors, in which
information enters the mind, is processed in a series of ordered stages, and
then exits [Preece et al. 1994]. Figure 2.1 summarizes one version of the basic
model [Barber 1988].
An early attempt to apply the model
was Card et al.'s The Psychology of Human-Computer Interaction [1983].
In that work, the authors stated that the human mind is also an information-processing
system and developed a simplified model of it that they called the Model Human
Processor. Based on this model, they made predictions about the usability of
various user interfaces, performed experiments, and reported their findings. The
results were equivocal, and subsequent cognitive psychology research has shown
that the serial stage approach to cognition of the original model is overly
simplistic.
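The style of prediction Card et al. used can be illustrated with their nominal stage times. The cycle-time values below are the middle-of-range figures reported by Card et al. [1983]; treating a simple reaction as one cycle of each processor follows their classic worked example, simplified here:

```python
# Nominal cycle times of the Model Human Processor (Card et al. [1983];
# each value is the midpoint of a reported range).
TAU_PERCEPTUAL_MS = 100  # perceptual processor cycle
TAU_COGNITIVE_MS = 70    # cognitive processor cycle
TAU_MOTOR_MS = 70        # motor processor cycle

# Simple reaction time (detect a signal, decide, press a key) is predicted
# by chaining one cycle of each serial stage:
predicted_reaction_ms = TAU_PERCEPTUAL_MS + TAU_COGNITIVE_MS + TAU_MOTOR_MS
print(f"predicted simple reaction time: {predicted_reaction_ms} ms")  # 240 ms
```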
The original model also did not
include memory and attention. Later versions do include these processes, and
Cowan [1995], in his exhaustive examination of the intersection of memory and
attention, discusses a number of these. Figure 2.2 summarizes a model that does
include memory and attention [Barber 1988].
HIPS models, such as Anderson's ACT-R [1993], continue to
be developed and are useful. Further, the information processing approach
has recently been described as the primary metatheory of cognitive psychology
[Ashcraft 1994].
2.4.2 Coping with
Attention as a Limited Resource
One of the earliest psychological
definitions of attention is that of William James [1890, vol. 1, 403-404]:
Everyone knows
what attention is. It is the taking possession of the mind, in clear and vivid
form, of one out of what seem several simultaneously possible objects or trains
of thought. Focalization, concentration of consciousness are of its essence. It
implies withdrawal from some things in order to deal more effectively with
others . . . (emphasis added)
This appeal to intuition explicitly
states that attention is a limited resource.
In reaction to the introspection
methodology of James, the Behaviorist movement asserted that the study of
internal representations and processes was unscientific. Since behaviorists
dominated American psychological thought during the first half of the Twentieth
Century, little or no work was done on attention in America during this period. In Europe, Gestalt psychology became dominant at this
time and that school, while not actively hostile to attention studies, did not
encourage work in the area. World War II however led to a rethinking of
psychological approaches and acceptance of using the experimental techniques
developed by the behaviorists to study internal states and processes [Cowan
1995].
An example of this rethinking is the
work of Broadbent [1952] and Cherry [1953]. They used a technique to study
attention in which different spoken messages are presented to a subject's two
ears at the same time. Their research shows that subjects are able to attend to
one message if the messages are distinguished by physical (rather than merely
semantic) cues, but recall almost nothing of the nonattended channel. In 1956,
Miller reviewed a series of experiments that utilized a different methodology
and noted that, across many domains, subjects could keep in mind no more than
about seven "chunks" simultaneously. These findings were among the
first experimental evidence that attentional capacity is a limited
resource.
More recent
experimental work continues to indicate that attention is a limited resource
[Cowan 1995]. Even those cognitive psychologists who have recently challenged
the very concept of attention assume their "attention" analog is
limited. Examples include Allport [1980] and Wickens [1984], who
argue that the concept of attention should be replaced with the concept of
multiple limited processing resources.
Based on an examination of the
exhaustive review by Cowan [1995] of the intersection of memory and attention,
the Shiffrin [1988, 739] definition appears to be representative of
contemporary thought:
Attention has been used to refer to all those aspects of human
cognition that the subject can control . . . and to all aspects of cognition
having to do with limited resources or capacity, and methods of dealing
with such constraints. (emphasis added)
Since human cognitive resources are limited, cognitively
complex tasks may overload these resources and decrease the quality and/or
quantity of outputs. Various approaches to measuring the cognitive complexity
of tasks have been developed. In HCI, an informal view of complexity is often
utilized. For example, Grant [1990, sec.
1.3] defines a complex task as “one for which there are a large number of
potential practical strategies.” This definition is not inconsistent with the measure assumed by Simon [1962] in his paper on the use of hierarchical decomposition to decrease the complexity of problem solving.
Simon [1990] argues that humans
develop mechanisms to enable them to deal with complex, real-life situations
despite their limited cognitive resources. One such mechanism is task planning.
According to Fredericksen and Breuleaux [1990], task planning is a cognitive
bargain in which the time and effort spent working with an abstract, and
therefore, smaller problem space during planning minimizes actual work on the
task in the original, detailed problem space.
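The arithmetic behind this bargain is easy to illustrate. In the sketch below, the branching factor and depth are arbitrary assumptions; the point is the standard search-space calculation on which Simon's hierarchical-decomposition argument rests:

```python
# Search-space sizes with and without planning in an abstract space.
branching = 4  # assumed alternatives considered at each step
depth = 10     # assumed steps needed to complete the task

detailed_space = branching ** depth           # plan all steps at once
print(f"detailed space: {detailed_space:,}")  # 1,048,576 paths

# Plan first over 2 abstract subgoals, then solve each 5-step
# subproblem in the detailed space:
abstract_space = branching ** 2
subproblem_spaces = 2 * branching ** 5
print(f"with decomposition: {abstract_space + subproblem_spaces:,}")  # 2,064
```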
Earley and Perry [1987, 279] define a
task plan as "a cognitively based routine for attaining a particular
objective and consists of multiple steps." Newell and Simon [1972]
identify planning from verbal protocols as those passages in which:
1. a person is considering
abstract specifications of the action/information transformations required to
achieve goals;
2. a person considers sequences of
two or more such actions or transformations; and
3. after developing the sequences,
some or all of them are actually performed.
Two further items should be noted
regarding planning:
1. Not all planning is original. Successful plans
learned from others or by experience may be stored in memory or externally
[Newell and Simon 1972; Wood and Locke 1990]. Without the recall, modification,
and use of previous plans, the development of expertise would be impossible.
2. Planning is
not complete before action. Both theory and
analysis of verbal protocols indicate that periods of planning are interleaved
with action [McDermott 1978; Newell and Simon 1972]. In other words,
practitioners will often plan a response to part of a task, complete some or
all of the actions specified in that plan, plan a new response incorporating
information acquired during prior action period(s), complete the new actions,
etc.
2.4.3
Application of the HIPS Model to This Research
In the HIPS model, the nature and
amount of stimuli impact both information processing and output. This research
uses a key concept of the HIPS model,
attention, in two ways:
1.
Attention is a critical and limited
resource, and when attention is overloaded, outputs decrease in quality and
quantity; therefore, a meta-cognitive strategy such as task planning that
minimizes attentional load should improve outputs.
2.
Patterns are another meta-cognitive
strategy for minimizing attentional load; therefore, understanding which
patterns better support the cognitive processing associated with evaluation of
diagrammatic models may allow individuals to be trained to use these better
patterns, thus lessening their attentional load and improving their outputs.
2.5 Research On
the Comprehension of Graphics
Larkin and Simon [1987] consider why
diagrams can be superior to a verbal description for solving problems, and
suggest the following reasons:
· Diagrams can
group together all information that is used together, thus avoiding large
amounts of search for the elements needed to make a problem-solving inference.
· Diagrams
typically use location to group information about a single element, avoiding
the need to match symbolic labels.
· Diagrams
automatically support a large number of perceptual inferences, which are
extremely easy for humans.
As noted in Chapter 1, two of these
depend on spatial patterns.
Winn [1994] presents an overview of
how the symbol system of graphics interacts with the viewers' perceptual and
cognitive processes, which is summarized in figure 2.3. In his description, the
graphical symbol system consists of two elements: (1) Symbols that bear an
unambiguous one-to-one relationship to objects in the domain of reference, and
(2) The spatial relations of the symbols to each other. Thus, how symbols
are configured spatially will affect the way viewers understand how the
associated objects are related and interact. For the purposes of this
dissertation, a particularly interesting finding is that biases based on
reading direction (left-to-right for English) affect the interpretation of
graphics.
Zhang [1997] proposes a theoretical framework for external-representation-based problem solving. An experiment she conducted using a Tic-Tac-Toe board and its logical isomorphs showed that Tic-Tac-Toe behavior is determined by the configuration of the board.
External representations are thus shown to be more than just memory aids and a
representational determinism is suggested. This last point is particularly
relevant to this dissertation since it states that the form of representation
determines what information can be perceived in a diagram.
2.6 Types of
Diagrammatic Models
Selection of diagrammatic models to be
included in the research task requires an appropriate typology. Two
diagrammatic model typologies were examined, Wieringa [1998] and Visible
Systems [1999].
2.6.1 Wieringa
1998
Wieringa, in his discussion of
graphical structures or models that may be used in software specification
techniques, lists four general classes:
1. Decomposition
Specification Techniques. These represent
the conceptual structure of data in a database system. Examples include Entity-Relationship
Diagrams (ERDs) and such ERD extensions as OO class diagrams.
2. Communication
Specification Techniques. These show how
the conceptual components interact to realize external system interactions.
Examples include Dataflow Diagrams (DFDs), Context Diagrams, SADT
Activity Diagrams, Object Communication Diagrams, SDL Block
Diagrams, Sequence Diagrams, and Collaboration Diagrams.
3. Function
Specification Techniques. These specify
the external functions of a system or the functions of system components.
Examples include Function Refinement Trees, Event-Response Specifications,
and Use Case Diagrams.
4. Behavior
Specification Techniques. These show how
functions of a system or its components are ordered in time. Examples include Process
Graphs, JSD Process Structure Diagrams, Finite (and Extended
Finite) State Diagrams, Mealy Machines, Moore Machines, Statecharts,
and Process Dependency Diagrams.
2.6.2 Visible
Systems
The methods listing in Visible Systems
[1999] was examined as a representative of practitioner-oriented,
CASE-tool-based typologies. Seven models are listed; of these, six are
diagrammatic in nature.
1. Functional
Decomposition Model. Shows the
business functions and the processes they support drawn in a hierarchical
structure; also known as the Business Model. This type of model is of a
high-level functional nature and specifically applies to functions and not to
the data that those functions use. It is generally appropriate for defining the
overall functioning of an enterprise, not for individual projects.
2. Data Model. Shows the data
entities of an application and the relationships between the entities. Entities
and relationships can be selected in subsets to produce views of the data model.
The diagramming technique normally used to depict the data model graphically is the Entity-Relationship Diagram (ERD), and the model is sometimes
referred to as the Entity-Relationship Model.
3. Process Model. Shows how things
occur in the organization via a sequence of processes, actions, stores, inputs
and outputs. Processes are decomposed into more detail, producing a layered
hierarchical structure. The diagramming technique used for process modeling in
structured analysis is the Data Flow Diagram (DFD). Several notations
are available for representing process modeling, with the most widely used
being Yourdon/DeMarco and Gane & Sarson.
4. Product Model. Shows a
hierarchical, top-down design map of how the application is to be programmed,
built, integrated, and tested. The modeling technique used in structured design
is the structure chart. It is a tree or hierarchical diagram that
defines the overall architecture of a program or system by showing the program
modules and their interrelationships.
5. State
Transition Model (Real Time Model). Shows how
objects transition to and from various states or conditions and the events or
triggers that cause them to change between the different states.
6. Object Class
Model. Shows classes of objects, subclasses,
aggregations and inheritance and defines structures and packaging of data for
an application.
2.6.3 Evaluation
of Typologies in Prior Work
In evaluating these two typologies for
this research, two problems were noted:
1. Neither classification scheme
includes diagrammatic representations of Graphical User Interfaces (GUIs).
While such representations are not technically graphs (and thus not discussed
by Wieringa) and are not listed in Visible Systems, they may be used to specify
parts of a system and are therefore appropriate to this research.
2. Wieringa's work is based on the
theoretical characteristics of graphs while Visible Analyst is representative
of practitioner-oriented, CASE-tool-based typologies. Neither is appropriate to
the research of this dissertation since neither captures factors likely to
affect the cognitive processing of practitioners in evaluating software
diagrammatic models.
While it would be relatively easy to
add diagrammatic representations of GUIs to Wieringa or Visible Analyst, it was
concluded that the second problem disqualified them for the purposes of this
research. Further review of several leading systems analysis and design texts
[Fertuck 1995; Hoffer et al. 1998; Kendall and Kendall 1995] did not yield an
appropriate typology of diagrammatic models, and it was therefore deemed
necessary to develop one specifically for this dissertation.
2.6.4
Diagrammatic Model Typology Development
The first step in the development
process was to consult several systems analysis and design and structured
techniques texts for classification insights and to derive lists of commonly
used diagrammatic models. These included Fertuck [1995], Hoffer et al. [1998],
Kendall and Kendall [1995], and Martin and McClure [1985].
Martin and McClure make a major
distinction between hierarchical diagrams (i.e., those having one
overall node or root and which do not remerge) and mesh or network
diagrams (i.e., those not having a single overall node or root or which do
remerge). For the purposes of this research, this distinction is
operationalized as the categorical variable hierarchical/not hierarchical.
Martin and McClure also make a major
distinction between diagrams showing sequence and those that do not. Sequence
usually implies temporal directionality; for this dissertation, the distinction
is broadened to include the possibility of logical and other forms of
directionality and is operationalized as the categorical variable directional/not
directional.
A distinction found in all texts referenced
is between data-oriented and process-oriented diagrams. Inspection of diagram
types shows that the distinction is actually a data/process orientation
continuum. For the purposes of this dissertation, this continuum is collapsed
into the categorical variable data/hybrid/process oriented.
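Together the three variables define a 2 x 2 x 3 space of twelve possible categories. A minimal sketch of the scheme follows (the encoding is an illustration only, not part of the dissertation's instruments; the example classifications mirror table 2.1 below):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class DiagramType:
    """One diagrammatic model classified on the three categorical variables."""
    name: str
    hierarchical: bool  # single overall root, no remerging (Martin and McClure)
    directional: bool   # temporal, logical, or other directionality
    orientation: Literal["data", "hybrid", "process"]

examples = [
    DiagramType("Entity-Relationship", hierarchical=False,
                directional=False, orientation="data"),
    DiagramType("Structure Chart", hierarchical=True,
                directional=True, orientation="hybrid"),
    DiagramType("Flow Chart", hierarchical=False,
                directional=True, orientation="process"),
]
```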
As a test of the feasibility of the
classification scheme, twenty diagram types from Martin and McClure, UML
diagrams from Harmon and Watson [1998], and a model of a "typical"
GUI were then categorized. The results of this categorization are shown in
table 2.1.
Table 2.1 Diagrammatic Model Types

I. Hierarchical / Directional / Data: Warnier-Orr (Data); Michael Jackson Data-Structure
II. Hierarchical / Directional / Hybrid: Functional Decomposition II; Structure Charts; HIPO (Overview); HIPO (Detail); Michael Jackson System Network; Action II
III. Hierarchical / Directional / Process: Functional Decomposition I; HIPO (VTC); Warnier-Orr (Process); Michael Jackson Program-Structure; Nassi-Shneiderman Charts; Action I
IV. Hierarchical / Not Directional / Data: (none)
V. Hierarchical / Not Directional / Hybrid: (none)
VI. Hierarchical / Not Directional / Process: (none)
VII. Not Hierarchical / Directional / Data: (none)
VIII. Not Hierarchical / Directional / Hybrid: Data Flow; Data Navigation; UML Sequence; UML Collaboration; UML Activity
IX. Not Hierarchical / Directional / Process: Flow Charts
X. Not Hierarchical / Not Directional / Data: Data Analysis; Entity-Relationship; Inverted-L
XI. Not Hierarchical / Not Directional / Hybrid: “Typical” GUI; UML Use Case; UML Class
XII. Not Hierarchical / Not Directional / Process: (none)
Inspection of table 2.1 shows that
only seven of the twelve (2 x 2 x 3) possible categories are actually
populated. Table 2.2 shows the categorization of the diagram types after
collapsing unpopulated categories.
Table 2.2 Diagrammatic Model Types (Collapsed)

I. Hierarchical / Directional / Data: Warnier-Orr (Data); Michael Jackson Data-Structure
II. Hierarchical / Directional / Hybrid: Functional Decomposition II; Structure Charts; HIPO (Overview); HIPO (Detail); Michael Jackson System Network; Action II
III. Hierarchical / Directional / Process: Functional Decomposition I; HIPO (VTC); Warnier-Orr (Process); Michael Jackson Program-Structure; Nassi-Shneiderman Charts; Action I
VIII. Not Hierarchical / Directional / Hybrid: Data Flow; Data Navigation; UML Sequence; UML Collaboration; UML Activity
IX. Not Hierarchical / Directional / Process: Flow Charts
X. Not Hierarchical / Not Directional / Data: Data Analysis; Entity-Relationship; Inverted-L
XI. Not Hierarchical / Not Directional / Hybrid: “Typical” GUI; UML Use Case; UML Class