Abstract

Providing Feedback in Computer-based Instruction: What the Research Tells Us

B. Jean Mason and Roger Bruning

Center for Instructional Innovation

University of Nebraska-Lincoln

Because computer-based instruction typically provides a self-contained learning environment, designers need to pay especially close attention to the kinds of feedback they incorporate into their programming. Research in traditional learning settings, while clearly demonstrating feedback's importance, has not shown any particular kind of feedback to be universally superior. This paper examines the literature on feedback in computer-based instruction, grouping feedback into seven clusters: knowledge-of-response, knowledge-of-correct-response, answer-until-correct, topic-contingent, response-contingent, bug-related, and attribute-isolation. A theoretical framework based on the research is provided to assist designers, developers, and instructors in creating effective feedback in computer-based instruction appropriate to a variety of conditions. Variables to be considered in determining type of feedback and level of elaboration include student achievement, task complexity, timing of feedback, prior knowledge, and learner control.


 

Providing Feedback in Computer-based Instruction: What the Research Tells Us

B. Jean Mason and Roger Bruning

Center for Instructional Innovation

University of Nebraska-Lincoln

The use of computers in education is growing at a rapid rate, with use of computers for educational purposes more than doubling between 1984 (36.2%) and 1997 (84%) (National Center for Education Statistics, 1999). One of the main advantages of computer-based education is the ability to provide immediate feedback on individual responses. In general terms, feedback is any message generated in response to a learner’s action. Among the most important outcomes of feedback are helping learners identify errors and become aware of misconceptions. Feedback is also a significant factor in motivating further learning. As described by Cohen (1985), "this component (feedback) is one of the more instructionally powerful and least understood features in instructional design" (p. 33).

Computer-provided feedback would seem to have several important advantages. First of all, once the requisite programming is in place, computers can tirelessly provide feedback in response to student work. Unlike feedback from an instructor or tutor, this feedback can remain unbiased, accurate, and nonjudgmental, irrespective of student characteristics or the nature of the student response. In addition, the interactive ability of computer-based instruction has the potential for enhancing the quality and type of feedback that can be implemented, limited only by the ingenuity and energy of course designers. Thus, computer-based feedback can, at least theoretically, be adapted to the learning styles and needs of each individual student, a goal that seldom is attained in a traditional classroom. Because a prototypical computer-based instruction (CBI) application typically provides a learning environment in which the student works individually with little human interaction, attention to feedback is likely to be even more critical in CBI than in traditional classroom instruction. In short, the success of computer-based instruction depends not only on what is presented or encountered, but on the quality and appropriateness of feedback provided to learners.

The current paper summarizes and analyzes recent research in computer-based feedback and proposes a theory-based framework to assist educators, programmers, and instructional design specialists in incorporating effective feedback into educational software and programs. In our review, we have attempted to consider all available literature relating to the use of feedback in computer-based instruction. Excluded from this review, however, is the considerable literature focusing on use of computer-based feedback in assessment and the research on postfeedback timing delays, neither of which typically relates to characteristics of instructional design. In addition, because relatively little of the research has been done in CBI settings, this paper does not address the growing body of literature surrounding the use of motivational, process-oriented, and goal-directed feedback (feedback that provides learners information about their progress toward a desired goal as opposed to feedback on discrete responses).

Prior research on feedback

Feedback research has a long, well-documented history (see Kulhavy & Stock, 1989, for an excellent review) that dates back to the era of programmed instruction (e.g., Pressey, 1950; Skinner, 1968) and indeed to the early days of psychology itself (e.g., Thorndike, 1913). Most of this early research on feedback was conceptualized within an associationistic or behavioral framework in which feedback was regarded as a contingent event that strengthened or weakened responses. In this view, positive feedback strengthened correct responses, while errors, which received no positive feedback, were weakened. This view of feedback’s functioning, although mechanistic, did emphasize positive consequences for successful performance and helped move educators toward a more positive instructional stance. What this approach did not do was provide a systematic means for error correction; it thus represented a somewhat limited conception of the role feedback might play in learning.

The emergence of information-processing theory in the 1970s and 1980s provided a new, cognitive framework for understanding feedback’s role in learning. From this perspective, errors are not viewed so much as mistakes as they are a source of information about students’ cognitive processes (see Bruning, Schraw & Ronning, 1999). Thus, errors not only are an expected part of learning, but are an important resource for learning and teaching. Feedback helps learners determine performance expectations, judge their level of understanding, and become aware of misconceptions. It also provides clues about the best approaches for correcting mistakes and improving performance. This information-processing view, highlighting the informational role of feedback, now represents the mainstream theoretical perspective for analyzing the role of feedback in instruction (for excellent discussions of information processing and earlier perspectives on feedback, see reviews by Kulhavy & Stock, 1989, and Mory, 1992). Most current research in feedback now builds on information-processing theory in attempting to determine the kinds of feedback that are most effective for instruction.

According to Kulhavy and Stock (1989), effective feedback provides the learner with two types of information: verification and elaboration. Verification is the simple judgment of whether an answer is correct or incorrect, while elaboration is the informational component providing relevant cues to guide the learner toward a correct answer. Most researchers now share the view that successful feedback (feedback that facilitates the greatest gains in learning) must include both verification and elaboration. This combination can highlight response errors, give correct response options, and provide information that both strengthens correct responses and makes them memorable (e.g., Bangert-Drowns, Kulik, Kulik, & Morgan, 1991).

While virtually all feedback in CBI provides response verification (although there are exceptions; see, for example, Vye, 1999), there are considerable differences in types of elaboration. Feedback elaboration is typically informational, topic-specific, or response-specific. Informational elaboration does not specifically address individual responses, but provides a framework of relevant information from which the correct answer can be drawn. Topic-specific elaboration, in contrast, provides more specific information about the target question or topic. Topic-specific elaboration leads the learner through the correct answer, but it does not address incorrect responses. The most specific and direct form of feedback is response-specific. Response-specific elaboration addresses both the correct answer and incorrect response choices; if a learner selects an incorrect response, response-specific feedback explains why the selected response is incorrect and provides information about what the correct answer should be. Teaching effectiveness research conducted in classroom settings (e.g., Whyte, Karolick, Neilsen, Elder, & Hawley, 1995) suggests that response-specific feedback enhances student achievement more than the other, more general forms of feedback.
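The three kinds of elaboration can be illustrated with a single multiple-choice item. The following is a minimal sketch; the item model, field names, and wording are our own illustrative assumptions, not an implementation drawn from the research reviewed here.

```python
# Hypothetical item model (ours, not from the paper) illustrating the three
# kinds of elaboration for one multiple-choice question.

ITEM = {
    "question": "Which feedback component is the simple right/wrong judgment?",
    "correct": "verification",
    "topic_note": ("Effective feedback pairs verification, the right/wrong "
                   "judgment, with elaboration, cues that guide the learner "
                   "toward the correct answer."),
    "choice_notes": {
        "verification": "Correct: verification is the right/wrong judgment.",
        "elaboration": ("Incorrect: elaboration provides guiding cues; the "
                        "right/wrong judgment is verification."),
    },
}

def informational(item):
    # Informational elaboration: background from which the answer can be
    # drawn, with no reference to the learner's particular response.
    return item["topic_note"]

def topic_specific(item):
    # Topic-specific elaboration: walks through the correct answer but does
    # not address incorrect responses.
    return f"The correct answer is '{item['correct']}'. {item['topic_note']}"

def response_specific(item, response):
    # Response-specific elaboration: addresses the selected choice directly,
    # explaining why it is right or wrong.
    return item["choice_notes"][response]
```

Note that only the last function needs the learner's actual response as input; the progression from informational to response-specific elaboration is a progression in how much of the learner's behavior the feedback message takes into account.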

Feedback can take on many forms depending on the levels of verification and elaboration incorporated into the item response. Overall, the literature (see Gilman, 1969; Kulhavy & Stock, 1989; Merrill, 1987; Overbaugh, 1994; Pressey, 1950; and Schimmel, 1988) has focused on eight commonly used levels of feedback:

1. No-feedback: the learner responds but receives no message about the response.
2. Knowledge-of-response: simple verification that the answer was correct or incorrect.
3. Answer-until-correct: verification, with the learner required to remain on the question until it is answered correctly.
4. Knowledge-of-correct-response: identification of the correct answer.
5. Topic-contingent: elaboration about the target topic from which the learner can derive the correct answer.
6. Response-contingent: elaboration specific to the learner's response, explaining why an incorrect answer is wrong and why the correct answer is right.
7. Bug-related: elaboration addressing the specific systematic errors, or "bugs," known to underlie incorrect responses.
8. Attribute-isolation: elaboration highlighting the essential attributes of the concept being taught.

In general, research on feedback can be described in terms of these eight categories, with researchers drawing on these approaches both individually and in combination. In the present paper, they provide the framework for examining current research on factors influencing the effectiveness of feedback within the context of CBI. We now turn to this research.
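For courseware that must route items to different feedback routines, the eight categories can be captured in a simple enumeration. This is a minimal sketch; the class and member identifiers are ours, chosen to match the labels used in this review.

```python
from enum import Enum

class FeedbackLevel(Enum):
    # Ordered roughly from least to most elaborate; identifiers are ours,
    # matching the labels used in this review.
    NO_FEEDBACK = "no-feedback"
    KNOWLEDGE_OF_RESPONSE = "knowledge-of-response"
    ANSWER_UNTIL_CORRECT = "answer-until-correct"
    KNOWLEDGE_OF_CORRECT_RESPONSE = "knowledge-of-correct-response"
    TOPIC_CONTINGENT = "topic-contingent"
    RESPONSE_CONTINGENT = "response-contingent"
    BUG_RELATED = "bug-related"
    ATTRIBUTE_ISOLATION = "attribute-isolation"
```

Tagging each item or condition with one of these values makes it straightforward for a program to combine levels (e.g., knowledge-of-correct-response plus response-contingent elaboration), as several of the studies reviewed below did.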

Research on feedback in computer-based instruction

As is true of research on feedback in traditional settings, there is little disagreement about the efficacy of incorporating active responding and knowledge of results into computer-based instructional units (Cyboran, 1995; Zemke & Armstrong, 1997). Again, however, results of research seeking to determine the most effective type of feedback have been mixed. This research has nonetheless identified several key dimensions that may influence feedback’s effectiveness: elaboration, student achievement levels, depth of understanding, attitude toward feedback, learner control, response certitude, and timing.

Elaboration. While several studies (e.g., Clark, 1993; Hodes, 1984-85; Merrill, 1987; Mory, 1994; Park & Gittelman, 1992) have found that providing elaborative feedback within a CBI unit did not influence students’ knowledge of the material, a larger body of research shows enhanced learning in response to more elaborate feedback (e.g., Clariana, 1990, 1992; Gilman, 1969; Pridemore & Klein, 1991, 1995; Morrison et al., 1995; Roper, 1977; Waldrop, Justen, & Adams, 1986; Whyte et al., 1995). Since the findings concerning the effects of feedback elaboration are contradictory, it is important to examine individual studies to identify differences that may have impacted the results.

Representative of studies finding that the level of feedback elaboration did not influence learning is Mory (1994), who examined the effects of adaptive feedback (adjusting the amount of feedback based on the student’s confidence in his or her answer) within computer-based practice questions on learning outcomes and learning efficiency. The results showed no significant learning differences between instruction that incorporated adaptive feedback and instruction that incorporated non-adaptive feedback. Citing some methodological concerns (e.g., poor design of the feedback screens), however, the author cautions against dismissing adaptive feedback as an effective instructional tool.

Park and Gittelman (1992) obtained a similar lack of effect for elaboration in a study that examined learning differences in completing a skill-application CBI unit (specifically, troubleshooting electric circuits). They compared explanatory (topic-contingent) and knowledge-of-response feedback with natural feedback (which showed the effect of the learner’s action or response on the electrical circuit). On a computer-based posttest, there were no significant differences among the feedback groups in scores. Students in the knowledge-of-response condition were significantly less efficient, however, in solving posttest problems than students in the other feedback conditions. The authors concluded that the type of feedback did not impact learning outcomes, but cautioned against broad generalization of the results due to the nature of the topic and skill being measured.

Research by Hodes (1984-85) compared learning outcomes after students received either corrective (knowledge-of-response plus additional information) or noncorrective (knowledge-of-response with no additional information) feedback. While no overall feedback effects were found, a follow-up analysis dividing subjects by gender found a significant gender-by-feedback interaction: boys receiving corrective feedback scored significantly higher than girls receiving noncorrective feedback. This result hinted at sex-related factors in computer-based feedback, leading the author to recommend further research to examine possible sex biases.

Merrill (1987) examined the effects of attribute-isolation and corrective feedback on student learning. While no learning differences were found as a function of receiving either type of feedback, there was a trend for higher order questions to produce increased learning (regardless of feedback type). In a similar study focusing on low ability learners, Clark (1993) also reported no learning differences after students received either answer-until-correct feedback (which the researcher referred to as knowledge-of-correct-response) or inductive feedback (a combination of bug-related and topic-contingent).

In contrast to these findings of no differences, a somewhat larger body of studies (e.g., Clariana, 1990, 1992; Gilman, 1969; Pridemore & Klein, 1991, 1995; Morrison et al., 1995; Roper, 1977; Waldrop, Justen, & Adams, 1986; Whyte et al., 1995) has found significant learning gains in response to various types of feedback used within CBI. A meta-analysis of the effects of feedback in CBI (Azevedo & Bernard, 1995) showed that achievement outcomes (immediate and delayed) generally are greater for students receiving CBI that utilizes active responding and computer-based feedback than for comparison groups receiving no feedback. While this meta-analysis is supportive of the use of computerized feedback, it does not provide insight into the specific type of feedback that is most effective.

Early CBI research by Gilman (1969), using teletypewriter terminals equipped with random-access slide projectors, compared student learning after receiving one of five types of feedback: no-feedback, knowledge-of-response, knowledge-of-correct-response, response-contingent, or a combination of knowledge-of-correct-response and response-contingent feedback. The results indicated that simple knowledge-of-response feedback did not improve learning above the no-feedback condition, but that knowledge-of-correct-response, response-contingent, and the combination of these types of feedback could significantly improve student understanding of the material. Research by Waldrop, Justen, and Adams (1986) yielded similar findings: feedback containing more elaborative information produced increased understanding. Specifically, response-contingent feedback was significantly more effective than knowledge-of-response feedback, while answer-until-correct feedback was not significantly different from either response-contingent or knowledge-of-response feedback. Roper (1977) also found that "with increasing information content of the feedback in the program, subjects scored higher in the post-test" (p. 47). Roper theorized that the increased amount of feedback information provided students with enhanced knowledge from which they could correct misunderstandings. All of these studies provide evidence of increased learning in response to CBI programs incorporating elaborated feedback.

The research by Whyte et al. (1995) also showed that the greatest learning gains in response to CBI came with the highest (most elaborate) levels of feedback. Specifically, students who received a CBI unit with knowledge-of-correct-response plus additional information (previously referred to as response-contingent) feedback scored significantly higher on concept acquisition tests than students receiving other forms of feedback (knowledge-of-response, knowledge-of-response plus additional information, and knowledge-of-correct-response). Similarly, research by Pridemore and Klein (1991) indicated that student learning was higher after receiving elaboration feedback than when receiving verification feedback. Presumably, the extra information available in elaboration feedback allows students to correct conceptual errors; simple judgments about the correctness of the answer, in contrast, did not have much utility for learning.

Follow-up research by Pridemore and Klein (1995) also found learning differences in response to various forms of computer-based feedback, but identified a different trend. Learning and retention for students who received a CBI unit with elaboration feedback were not significantly different from those of students who received no feedback. Students in both of these conditions, however, demonstrated higher performance than those receiving knowledge-of-correct-response feedback. While these findings seem somewhat contradictory, the authors suggested that the elaboration feedback condition likely gave students additional information, while the no-feedback condition may have motivated students to seek additional information. The knowledge-of-correct-response feedback, in contrast, was not particularly functional because it did not force students to seek the correct answer on their own.

Morrison et al. (1995) found that knowledge-of-correct-response and delayed (providing feedback at the end of the testing session) feedback within a CBI unit produced greater learning than answer-until-correct or no-feedback for lower level questions (verbatim posttest questions). For higher-level questions (paraphrased or transformed posttest questions), however, there were no learning differences in response to the various forms of feedback.

Clariana (1992) also examined the effect of various forms of feedback (no-feedback, answer-until-correct, knowledge-of-correct-response, and delayed) on learning outcomes in CBI using a variety of question types (verbatim, inferential, transformed, paraphrased, and transformed-paraphrased). Similar to Morrison et al. (1995), the results of this study showed that knowledge-of-correct-response and delayed feedback were superior to no-feedback on identical questions. In contrast to Morrison et al. (1995), however, answer-until-correct feedback was equivalent to knowledge-of-correct-response and delayed feedback and was significantly more effective than no-feedback. Earlier research by Clariana (1990), however, had shown that knowledge-of-correct-response was more effective than answer-until-correct feedback regardless of question type.

In summary, while the research does not identify a "best" type of feedback elaboration in CBI, there is a trend for increased learning in response to more elaborate feedback. This trend, however, appears to be mediated by other variables such as student achievement level, depth of understanding, and the nature of the learning task. Thus, it becomes important to examine the impact of these other variables on feedback effectiveness.

Student achievement levels. Students’ ability to effectively utilize different kinds of feedback may depend on their level of expertise in the content area. Research by Gaynor (1981) and Roper (1977) has indicated that low achieving or low mastery students may benefit from more immediate feedback, while high achieving students may better utilize delayed feedback. These findings may relate to the ability of high achieving students to draw from their previous information base and rethink incorrect information; delayed feedback thus may provide them time to actively process the information. On the other hand, low achieving students may possess a less accurate understanding of the basic information; thus immediate feedback allows them to correct conceptual errors (Roper, 1977).

In the previously mentioned research, Clariana (1990) examined differences in the use of knowledge-of-correct-response and answer-until-correct feedback for low ability learners. The results of this study indicated that low ability students benefited more from knowledge-of-correct-response than from answer-until-correct feedback, even when response choices were narrowed. It is possible that low ability learners benefit more from knowledge-of-correct-response because they do not have the prerequisite knowledge to effectively reexamine and evaluate the options available during answer-until-correct feedback. As a consequence, answer-until-correct can become a frustrating task with little educational benefit. In contrast to these findings, however, research by Clark (1993) found no learning differences between low ability learners receiving either answer-until-correct (which the author termed knowledge-of-correct-response) or inductive feedback.

Together, these findings indicate that there may be differences in the extent to which lower and higher ability students effectively utilize various types of feedback. While lower ability students may benefit from more immediate, specific forms of feedback, higher ability students may gain more knowledge from feedback that allows for active processing by the student.

Depth of knowledge. A subset of studies on the effects of feedback in CBI has examined the influence of feedback on various levels of student understanding. Morrison et al. (1995) found that delayed and knowledge-of-correct-response feedback may be more beneficial than answer-until-correct or no-feedback for lower level learning (as indicated by verbatim and paraphrased items), but that feedback effects become weaker when higher order understanding (as shown through transformed questions) is the learning goal. Clariana (1992) also found that feedback effects were weaker for higher order learning (as shown through the decreased similarity between initial and posttest questions). Interestingly, while overall comprehension decreased as the level of learning increased, the results of this study suggested, consistent with Morrison et al.'s findings, that answer-until-correct feedback may be more effective for higher order learning than for lower level processing. While answer-until-correct feedback was not significantly different from no-feedback for lower level learning, it was equivalent to knowledge-of-correct-response and delayed feedback for higher order questions. Learners may engage in increased information processing when utilizing answer-until-correct feedback (e.g., the learner must reconsider the question and answer again), in contrast to a presumably more passive learning stance generated by knowledge-of-correct-response and delayed feedback. Further research is obviously needed in this area to clarify conflicting findings.

Attitude toward feedback. Research (e.g., Pridemore & Klein, 1991, 1995) has shown that students’ attitude toward feedback does not necessarily impact learning outcomes. However, while a student’s attitude toward an approach to providing feedback may not bear directly on its measured effectiveness, it may still be important to incorporate feedback that is perceived as beneficial. Several studies (Pridemore & Klein, 1991, 1995; Waddick, 1994) have revealed a desire by students to receive more elaborative, immediate feedback. In Waddick's case study, students liked the constant, immediate feedback on progress (knowledge-of-response feedback automatically given, option to access knowledge-of-correct-response) and rated this type of feedback as more helpful than classroom instruction. Pridemore and Klein (1991) likewise found that students receiving verification feedback were more likely to desire additional information than students receiving elaboration feedback. Additional research by Pridemore and Klein (1995) corroborated this finding, indicating that students receiving no feedback desired more information about the correct answer, even though there were no attitude differences between students receiving knowledge-of-correct-response or elaborative feedback. In contrast to these findings, however, research by Morrison et al. (1995) found no attitudinal differences between students receiving no-feedback, answer-until-correct, knowledge-of-correct-response, and delayed feedback.

Learner control. A much less researched aspect of feedback is the influence of the learners’ control over the feedback message. In Waddick's case study (1994), knowledge-of-response feedback was provided in a CBI unit where learners were given the opportunity to access knowledge-of-correct-response at their own discretion. While no statistical data were collected, the author reported increased mastery of the material and positive views from the students.

In Pridemore and Klein’s (1991) research, the authors examined the effect of allowing learners to select whether they would like to receive any feedback. The results of this study failed to find any learning or attitude differences between students with control over the feedback compared to students who automatically received feedback. It is important to note, however, that most learners in the verification (74%) and elaboration (91%) feedback groups did opt to view feedback.

Schimmel (1988) recommends allowing high ability learners to select desired feedback types because they have extensive prior knowledge and enhanced metacognitive skills, such as the ability to self-monitor task difficulty, that allow them to guide their own learning. In contrast, since low ability learners tend to be less confident in their own academic skills and less aware of their metacognitive processes, they may be inclined to select feedback that provides them with the correct answer as opposed to the type of feedback that promotes the greatest learning.

Response certitude. Another factor that appears to influence feedback effectiveness in CBI is students’ confidence in selected responses. This metacognitive estimate of the learner’s understanding of the material, combined with their understanding of the question and response possibilities, is termed response certitude (Kulhavy & Stock, 1989). The importance of response certitude lies in the fact that a correct response may represent levels of knowledge ranging from complete understanding to a guess. Similarly, an incorrect answer may result from any of a variety of factors, ranging from careless error to lack of comprehension. Depending on the cause of the error, learners will utilize feedback differently.

Research on response certitude shows three basic patterns: (1) high certitude-correct answer, (2) high certitude-incorrect answer, and (3) low certitude-correct/incorrect answer. When certitude is low, regardless of the correctness of the answer, learners typically do not have the knowledge or skills to effectively benefit from the feedback. They therefore spend little time on feedback (Kulhavy, Yekovich, & Dyer, 1976). When there is high certitude and a correct response, learners will spend very little time on feedback since simple verification is all that is required (Kulhavy, 1977; Kulhavy, Yekovich, & Dyer, 1976). When there is high certitude and an incorrect response, however, feedback may be most effective. In this situation, students are motivated to find the source of the error, with elaborative feedback helping learners eliminate the incorrect response and substitute the correct one (Kulhavy, White, Topp, Chan, & Adams, 1985).
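These three patterns suggest a simple adaptive rule for courseware. The following is a sketch of our own construction, not a procedure taken from these studies; the certitude scale, the 0.7 threshold, and the emphasis labels are illustrative assumptions.

```python
# A sketch (ours, not from these studies) of adapting feedback to the three
# certitude patterns: low certitude, high certitude-correct, and high
# certitude-incorrect.

def select_feedback(certitude: float, correct: bool, threshold: float = 0.7) -> str:
    """Return a feedback emphasis for a response with the given certitude
    (0.0-1.0; the 0.7 cutoff is an arbitrary illustration) and correctness."""
    if certitude < threshold:
        # Low certitude, correct or not: learners tend to spend little time
        # on feedback, so verification plus a pointer back to the relevant
        # instruction may be as much as can be used.
        return "verification-plus-review"
    if correct:
        # High certitude and correct: simple verification suffices.
        return "verification-only"
    # High certitude and incorrect: the case where elaborative feedback is
    # most effective at dislodging the error.
    return "elaborative"
```

In practice, certitude would have to be collected alongside each response (e.g., a confidence rating), which is how the studies cited above obtained it.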

Mory (1994) examined response certitude in CBI, replicating for high certitude responses the feedback study times reported in earlier studies (Kulhavy, 1977; Kulhavy & Stock, 1989; Kulhavy et al., 1985; Kulhavy, Yekovich, & Dyer, 1976) but finding a different pattern for low certitude responses: feedback study times for low certitude responses were significantly higher than those for high certitude errors. While there were differences in the amount of feedback study time, the study did not find a significant learning effect for feedback tailored to response certitude and correctness.

Timing. Historically, one of the most researched areas of feedback has been the effects of immediate versus delayed feedback; here the results are quite mixed. Some researchers argue that immediate feedback is needed to correct student errors prior to the error being encoded into memory; others argue that delayed feedback reduces proactive interference, which allows the initial error to be forgotten and the correct information to be encoded with no interference (Kulhavy & Anderson, 1972).

The meta-analysis of feedback in computer-based instruction by Azevedo and Bernard (1995) states that "immediate delivery of a feedback message provides the best instructional advantage to the student" (p. 122). An earlier meta-analysis of feedback (not computer-based) in verbal learning (Kulik & Kulik, 1988) paralleled this finding. It showed that immediate feedback is more effective than delayed in applied studies and list-learning, but is less effective for acquisition of test content (for more information see discussion by Brackbill, Bravos & Starr (1962) on the delayed retention effect). While these meta-analyses lend support for the superiority of immediate feedback, closer examination indicates that the effectiveness of feedback timing may be mediated by other factors.

Research has shown, for example, that immediate feedback can be more effective for decision-making and novel information tasks (Jonassen & Hannum, 1987) as well as for lower level, knowledge-based tasks (Gaynor, 1981). For higher level tasks, such as abstract concepts and application/comprehension skills, delayed feedback has proven more effective (Gaynor, 1981; Jonassen & Hannum, 1987). Other research (e.g., Gaynor, 1981; Roper, 1977) indicates that concept acquisition is facilitated through immediate feedback, while long-term retention is enhanced with delayed feedback (Bardwell, 1981). It is important to keep in mind, however, that both immediate and delayed feedback may be utilized within a single instructional program. Roper (1977) recommends combining immediate verification feedback with delayed informational feedback. In this manner, students can have immediate knowledge of the correctness of their response, but still have time to think about the error before informational feedback is given.
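Roper's combination of immediate verification with delayed informational feedback can be sketched as a simple session driver. The item format, function name, and presentation details below are our illustrative assumptions, not part of Roper's study.

```python
# A sketch of combining immediate verification with delayed informational
# feedback: right/wrong is reported at once, while explanations for missed
# items are held until the end of the session.

def run_session(items, answers):
    """items: list of (question, correct_answer, explanation) tuples;
    answers: the learner's responses, in the same order."""
    deferred = []
    for (question, correct, explanation), given in zip(items, answers):
        if given == correct:
            print(f"{question}: correct.")             # immediate verification
        else:
            print(f"{question}: incorrect.")           # immediate verification
            deferred.append((question, explanation))   # elaboration deferred
    # Delayed informational feedback: displayed only after the learner has
    # had time to reconsider the missed items.
    return deferred
```

The deferral gives the learner the interval Roper describes, in which the error is known but can still be reconsidered before the explanation arrives.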

Conclusions and recommendations

Consistent with earlier research on feedback in conventional learning settings, the findings on feedback in CBI indicate that there is no clear-cut "best" type of feedback in computer-based instruction for all learners and learning outcomes. The challenge therefore is to identify the type of feedback that is most effective in specific educational settings. As previously highlighted, there are several factors to consider when designing computer-based feedback: student achievement levels, nature of the learning task, and prior knowledge. In addition, designers need to make decisions concerning the amount of learner control, attitudes toward feedback, and demands for efficiency. Figure 1 provides a framework for decision-making about feedback in CBI to aid designers in selecting the most appropriate feedback strategy. It represents our judgment, based on the research, that providing effective feedback in CBI requires consideration of students’ achievement levels, prior knowledge, and the nature of the learning task in order to determine the most effective feedback timing and elaboration.

Figure 1. Decision-making framework for the provision of feedback timing and elaboration in CBI based upon student achievement level, nature of the learning task, and prior knowledge.

The first step in designing effective feedback is to consider the achievement level of the students who will be utilizing the program as well as the nature of the learning task; based on these factors, the designer can determine the most effective timing of the feedback. As highlighted previously, lower ability students are likely to have a more limited knowledge base from which to actively process incorrect information and self-correct errors. Therefore, low achieving students may benefit more from immediate feedback irrespective of the learning task. For these students, this immediate feedback not only corrects initial errors, but provides verification to help solidify correct knowledge. Higher achieving students, on the other hand, may not need the verification provided by immediate feedback, as they may be more able to self-monitor their own responses.

In addition to determining students’ achievement levels, it is important to tailor feedback to the nature of the learning task. If the goal of instruction is to teach novel information or facilitate concept acquisition, it probably is more beneficial to incorporate immediate feedback, which will assist in correcting initial errors in understanding and help prevent inaccurate information from being encoded. If the instruction aims at developing higher order skills such as comprehension, application, or abstract reasoning, however, delayed or end-of-session feedback is likely to be most effective. Students’ involvement in a complex task presupposes that they have some prerequisite knowledge from which to correct their own errors. Thus, delaying the feedback should give students the opportunity to think about their answers and self-correct misconceptions before receiving feedback.

After determining the timing, the next step is to consider the students’ level of prior knowledge so that the most effective type of verification or elaboration can be implemented. The research indicates, in general, that it is disadvantageous for students to receive knowledge-of-response or answer-until-correct feedback. Providing students with a simple judgment of the correctness of their response typically does not provide enough information for students to correct misunderstandings or to learn from their errors. Similarly, answer-until-correct feedback does not provide students with any elaborative information; thus the task may turn into a guessing activity, especially if they lack a sophisticated knowledge base and the skills to reexamine their response. The multiple responses required in answer-until-correct feedback may confuse the learner and prevent the encoding of correct information.

Ideally, the feedback designed into courseware should provide knowledge about the correctness of a response in conjunction with more elaborative types of information. While there is considerable discrepancy in the research concerning the use of knowledge-of-correct-response feedback, there is a general belief that "more is better" in terms of elaborative information. In this sense, it may be beneficial to combine various types of feedback to not only verify correct answers but to provide the learner with information that can support and clarify reasoning.

As indicated in Figure 1, both lower and higher ability students may benefit from knowledge-of-correct-response in conjunction with response-contingent or topic-contingent feedback. If students have relatively little prior knowledge, knowledge-of-correct-response with response-contingent feedback will help them identify the correct answer and will assist them in determining why the selected answer was incorrect. Because they have little prior knowledge, this combination allows them to concentrate on the specific error as opposed to more general misunderstandings. On the other hand, if students have a solid knowledge base, knowledge-of-correct-response with topic-contingent feedback not only will identify the correct answer but will allow them to review relevant material so that they may determine where their error was made. To the extent that students possess relevant prior knowledge, they are more likely to make effective use of the instructional material, since they have a base of information to which they can relate and apply it.

As students’ achievement levels increase and the learning tasks become more difficult, less specific feedback may be necessary. Higher achieving students with larger knowledge bases may benefit more from feedback that provides general information and allows them to reevaluate their own answers. Higher achieving students working on lower level learning tasks may be able to use knowledge-of-response with topic-contingent feedback effectively because they possess the metacognitive skills needed to identify errors and actively seek the correct information. In the same vein, these students working on more complex learning tasks may benefit from knowledge-of-response with delayed knowledge-of-correct-response plus response-contingent feedback. In this situation, they are likely to have the skills to identify errors and seek correct information even if their prior knowledge is low. Finally, if the task is a higher order one and students have the requisite knowledge, answer-until-correct with delayed topic-contingent feedback may give them the opportunity to identify and correct mistakes before receiving additional information.
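The decision steps described above can be summarized as a simple selection procedure. The sketch below is our illustrative rendering of one possible reading of the Figure 1 framework, written in Python; it is not part of the original framework, and the function and value names are hypothetical.

```python
def select_feedback(achievement, task_level, prior_knowledge):
    """Illustrative mapping of the Figure 1 framework (names hypothetical).

    achievement: "low" or "high"; task_level: "lower_order" or "higher_order";
    prior_knowledge: "low" or "high". Returns (timing, feedback_type).
    """
    # Timing: low achievers benefit from immediate feedback irrespective of
    # the task; high achievers get delayed feedback on higher order tasks
    # so they can self-correct first.
    if achievement == "low":
        timing = "immediate"
    else:
        timing = "delayed" if task_level == "higher_order" else "immediate"

    # Elaboration: high achievers on harder tasks need less specific feedback;
    # otherwise prior knowledge determines response- vs. topic-contingent.
    if achievement == "high" and task_level == "lower_order":
        fb = "knowledge-of-response + topic-contingent"
    elif achievement == "high" and task_level == "higher_order":
        if prior_knowledge == "high":
            fb = "answer-until-correct + delayed topic-contingent"
        else:
            fb = ("knowledge-of-response + delayed "
                  "knowledge-of-correct-response + response-contingent")
    elif prior_knowledge == "low":
        fb = "knowledge-of-correct-response + response-contingent"
    else:
        fb = "knowledge-of-correct-response + topic-contingent"
    return timing, fb
```

A real implementation would of course treat these variables as continuous and assess them dynamically; the point of the sketch is only that the framework's recommendations can be expressed as explicit branching logic within a CBI program.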

A potential criticism of extensive elaborative feedback is reduced efficiency of the instructional unit. This problem can be effectively addressed, however, by providing increased learner control over the type and elaboration of the feedback. As discussed by Waddick (1994), it is possible to provide knowledge-of-response and then allow students to choose whether they would like to receive additional feedback. This option avoids the presentation of unnecessary feedback yet allows students to receive more clarification as desired. In addition, allowing students the option of receiving more feedback regardless of the correctness of their response may help develop understanding when a correct answer was simply a guess.
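This learner-control option can be sketched as a minimal routine in which verification is always shown and elaboration is shown only on request. The Python below is a hypothetical illustration of that interaction pattern, not code from Waddick's (1994) study.

```python
def give_feedback(correct, wants_more, elaboration):
    """Return the feedback messages to display to the learner.

    correct: whether the response was correct (knowledge-of-response).
    wants_more: whether the learner asked for additional feedback.
    elaboration: the elaborative message prepared for this item.
    """
    # Verification is always provided.
    messages = ["Correct." if correct else "Incorrect."]
    # Elaboration is offered regardless of correctness, since a correct
    # answer may have been a guess.
    if wants_more:
        messages.append(elaboration)
    return messages
```

The design point is that elaboration is gated on the learner's request rather than on correctness, so efficiency-minded students can move on while uncertain students can drill down.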

It is important for designers to keep in mind that the capabilities of current technology do not necessitate selecting and implementing only a single type of feedback. As Cohen (1985) observed, "few programs fully utilize the new technologies’ capabilities" (p. 36). Even today, with vastly more flexible technologies, most CBI programs do not reach the ideal of providing a personalized, interactive, dynamic educational experience. Today, as was true a decade and a half ago, most programs fail to provide a comprehensive, broad instructional program that can adapt to individual learner needs and respond effectively to learner input. Given the extensive time and costs involved in designing computer-based instructional units, designers need to take advantage of the possibilities for adaptive, interactive programming available in CBI so that programs may be used effectively by a variety of audiences. The unique interactive potential of CBI allows instructional designers to incorporate various types of feedback within a single CBI unit that can be modified and adapted to fit the needs of a variety of learners.

The present empirically based guidelines provide a starting point to assist instructional designers in creating feedback strategies that will maximize the educational benefits of CBI. Ideally, computer-based feedback can be designed to be adaptive to the needs of the user, with a variety of levels of elaborative information incorporated into a single computer-based program and accessed according to the achievement level of the student and the nature of the task for that particular individual. "As adaptive systems become more powerful and capable of offering tailor-made feedback based on dynamic assessment, effects due to the inclusion of feedback in computer-based instruction should continue to raise achievement levels in on-line learning environments" (Azevedo & Bernard, 1995, p. 122). As computer technology advances, so do the opportunities to include new kinds of feedback. Recent feedback research (e.g., Lesgold, 1994; Locke & Latham, 1990; Vye, 1999), for example, has begun moving from feedback oriented toward individual test items to more motivational, process-oriented feedback interventions. This type of feedback is not focused on specific learner responses, but provides learners with information about their progress toward broader educational goals or allows them to judge their responses in relation to those of peers or experts. While such goal-directed feedback has been only minimally explored in computer-based applications, future developments seem likely to include a shift in the focus of computer-based feedback toward these higher-order educational goals.

References

Anderson, R. C., Kulhavy, R. W., & Andre, T. (1972). Conditions under which feedback facilitates learning from programmed lessons. Journal of Educational Psychology, 63(3), 186 — 188.

Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111 — 127.

Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. (1991). The instructional effects of feedback in test-like events. Review of Educational Research, 61(2), 213 — 238.

Bardwell, R. (1981). Feedback: How does it function? Journal of Experimental Education, 50, 4 — 9.

Brackbill, Y., Bravos, A., & Starr, R. H. (1962). Delayed-improved retention of a difficult task. Journal of Comparative and Physiological Psychology, 55, 947 — 952.

Bruning, R., Schraw, G., & Ronning, R. (1999). Cognitive psychology and instruction. Columbus, OH: Merrill Prentice Hall.

Clark, K. A. (1993). The effects of feedback on the problem solving ability of academically at-risk students. (ERIC Document Reproduction Service No. ED 362 137).

Clariana, R. B. (1990). A comparison of answer-until-correct feedback and knowledge-of-correct-response feedback under two conditions of contextualization. Journal of Computer-Based Instruction, 17(4), 125 — 129.

Clariana, R. B. (1992). The effects of different feedback strategies using computer-administered multiple-choice questions as instruction. Proceedings of Selected Research and Development Presentations at the Convention of the Association for Educational Communications and Technology, (ERIC Document Reproduction Service No. ED 347 983).

Cohen, V. B. (1985, January). A reexamination of feedback in computer-based instruction: Implications for instructional design. Educational Technology, 33 — 37.

Cyboran, V. (1995). Designing feedback for computer-based training. Performance and Instruction, 34(5), 18 — 23.

Gaynor, P. (1981). The effect of feedback delay on retention of computer-based mathematical material. Journal of Computer-Based Instruction, 8(2), 28 — 34.

Gilman, D. A. (1969). Comparison of several feedback methods for correcting errors by computer-assisted instruction. Journal of Educational Psychology, 60(6), 503 — 508.

Hodes, C. L. (1984-85). Relative effectiveness of corrective and noncorrective feedback in computer assisted instruction on learning and achievement. Journal of Educational Technology Systems, 13(4), 249 — 254.

Jonassen, D. H., & Hannum, W. C. (1987, December). Research-based principles for designing computer software. Instructional Technology, 7 — 14.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254-284.

Kulhavy, R. W., & Anderson, R. C. (1972). Delay-retention effect with multiple-choice tests. Journal of Educational Psychology, 63(5), 505 — 512.

Kulhavy, R. W., & Stock, W. A. (1989). Feedback in written instruction: The place of response certitude. Educational Psychology Review, 1(4), 279 — 308.

Kulhavy, R. W., White, M. T., Topp, B. W., Chan, A. L., & Adams, J. (1985). Feedback complexity and corrective efficiency. Contemporary Educational Psychology, 10, 285 — 291.

Kulhavy, R. W., Yekovich, F. R., & Dyer, J. W. (1976). Feedback and response confidence. Journal of Educational Psychology, 68(5), 522 — 528.

Kulik, J. A., & Kulik, C. C. (1988, Spring). Timing of feedback and verbal learning. Review of Educational Research, 58(1), 79 — 97.

Lesgold, A. (1994). Assessment of intelligent training technology. In E. Baker & H. O’Neil (Eds.), Technology Assessment in Education and Training (pp. 97-117). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Locke, E. A., & Latham, G. P. (1990). A Theory of Goal Setting and Task Performance. Englewood Cliffs, NJ: Prentice Hall.

Merrill, J. (1987). Levels of questioning and forms of feedback: Instructional factors in courseware design. Journal of Computer-Based Instruction, 14(1), 18 — 22.

Morrison, G. R., Ross, S. M., Gopalakrishnan, M., & Casey, J. (1995). The effects of feedback and incentives on achievement in computer-based instruction. Contemporary Educational Psychology, 20, 32 — 50.

Mory, E. H. (1992). The use of informational feedback in instruction: Implications for future research. Educational Technology Research and Development, 40(3), 5 — 20.

Mory, E. H. (1994). Adaptive feedback in computer-based instruction: Effects of response certitude on performance, feedback-study time, and efficiency. Journal of Educational Computing Research, 11(3), 263 — 290.

National Center for Education Statistics. (1999). The Condition of Education (NCES Publication No. 1999-022). Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.

Overbaugh, R. C. (1994). Research-based guidelines for computer-based instruction development. Journal of Research on Computing in Education, 27(1), 29 — 47.

Park, O., & Gittelman, S. S. (1992). Selective use of animation and feedback in computer-based instruction. Educational Technology Research and Development, 40(4), 27 — 38.

Pressey, S. L. (1950). Development and appraisal of devices providing immediate automatic scoring of objective tests and concomitant self-instruction. The Journal of Psychology, 29, 417 — 447.

Pridemore, D. R., & Klein, J. D. (1991). Control of feedback in computer-assisted instruction. Educational Technology Research and Development, 39(4), 27 — 32.

Pridemore, D. R., & Klein, J. D. (1995). Control of practice and level of feedback in computer-based instruction. Contemporary Educational Psychology, 20, 444-450.

Roper, W. J. (1977). Feedback in computer assisted instruction. Programmed Learning and Educational Technology, 14(1), 43 — 49.

Schimmel, B. J. (1988). Providing meaningful feedback in courseware. In D. H. Jonassen (Ed.), Instructional designs for microcomputer courseware (pp. 183 — 196). Hillsdale, NJ: Lawrence Erlbaum Associates.

Skinner, B. F. (1968). The Technology of Teaching. New York: Meredith Corporation.

Thorndike, E. L. (1913). Educational Psychology. Volume 1: The Original Nature of Man. New York: Columbia University, Teachers College.

Vye, N. (1999, August). Performance Assessments of Learner Outcomes: Lessons from Nashville’s Schools for Thought Project. Paper presented at the OERI Technology Evaluation Institute, Ann Arbor, MI.

Waddick, J. (1994). Case study: The creation of a computer learning environment as an alternative to traditional lecturing methods in chemistry. Educational and Training Technology International, 31(2), 98 — 103.

Waldrop, P. B., Justen, J. E., & Adams, T. M. (1986). A comparison of three types of feedback in a computer-assisted instruction task. Educational Technology, 43 — 45.

Whyte, M. M., Karolick, D. M., Neilsen, M. C., Elder, G. D., & Hawley, W. T. (1995). Cognitive styles and feedback in computer-assisted instruction. Journal of Educational Computing Research, 12(2), 195 — 203.