Australasian Journal of Educational Technology
2009, 25(5), 666-682.

The effect of assessment on the outcomes of asynchronous online discussion as perceived by instructors

Chris Klisc, Tanya McGill and Valerie Hobbs
Murdoch University

Asynchronous online discussion is used in a variety of ways, both in online learning environments and in traditional teaching environments where, increasingly frequently, a blended approach is adopted. However, the anticipated benefits of this tool in improving student learning outcomes are still being debated. One of the many factors affecting the outcomes of asynchronous online discussion is assessment. This study investigated the influence of assessment of discussion postings on the achievement of discussion outcomes as perceived by instructors. The findings indicate that the incorporation of assessment results in higher levels of discussion outcomes than if no assessment were used. The use of a subsequent assessment based on the online discussion was also examined, but the results were inconclusive.


Introduction

The incorporation of asynchronous online discussion into tertiary education is well established, with its many potential benefits much discussed in the literature. When online discussions were first introduced, there was much enthusiasm about the possibility of electronic discussion replacing traditional tutorials, where the less confident student has little opportunity for expression and where the temporal nature of the conversational dialogue limits in-depth discussion (Foley & Schuck, 1998; Greenlaw & DeLoach, 2003; Hara, Bonk & Angeli, 2000). Furthermore, it was anticipated that asynchronous online discussions would help to improve student communication skills, develop their critical thinking, broaden their appreciation of divergent viewpoints, and help students synthesise and evaluate material from multiple perspectives (Hara et al., 2000; Rodrigues, 1999; Wu & Hiltz, 2004). In short, asynchronous online discussion would promote "interactivity and collaboration among learners" (McKenzie & Murphy, 2000, p. 239) in a way not possible before. These outcomes of communication skills, critical thinking and collaborative learning have been identified in different studies as important indicators of success in asynchronous online discussion. These indicators may be valued differently by different instructors, depending on the aims of the course and of the instructor. In this study the success of asynchronous online discussion is conceptualised broadly to cover this wide range of aims.

Assessment is an important part of the learning process, both for students and instructors. However, there is a lack of consensus about the need for assessment of asynchronous online discussions. For example, Williams (2002) argues that assessment of contributions is essential, whereas O'Reilly and Newton (2001) hold a contrary view. This suggests that further research is needed to ascertain what role assessment plays in the success of asynchronous online discussion. Therefore this study investigates the role of assessment in the achievement of a range of discussion outcomes as perceived by instructors.

Literature review

The research to date has covered many different aspects of asynchronous online discussion; these include the promotion of higher levels of cognitive processing in online discussion (Birch & Volkov, 2007; Greenlaw & DeLoach, 2003; McKenzie & Murphy, 2000; Schellens & Valcke, 2006), moderation used to manage the discussion forums (Curtin, 2002; Rodrigues, 1999), and methods for keeping discussion on track (Beaudin, 1999; MacKinnon, 2000; Picciano, 2002). In addition, Muilenburg and Berge (2000) looked at the type of questions necessary to continue topic discussion in this environment, Rourke and Anderson (2002) investigated the use of peer led discussion, and other studies (Palmer, Holt & Bray, 2008; Picciano, 2002) have examined the relationship between student performance and student interaction and participation online.

Many current studies, however, take the form of recommendations and 'how to' information, without providing the pedagogical basis needed to validate the recommendations. Hara, Bonk and Angeli (2000) discuss the need for studies that delve into the 'cognitive processes and products of student electronic interchanges', rather than focusing narrowly on accessibility and the impact of the technology on student attitudes. Zhang, Zhao, Zhou and Nunamaker (2004) argue that within an online environment, we need to 'integrate appropriate pedagogical methods, to enhance system interactivity and personalization, to better engage learners'. Although current studies make significant contributions to the body of knowledge, there is a consistent call for empirical evidence to substantiate the conclusions often only hinted at in many of the studies (Alavi, Marakas & Yoo, 2002; Arbaugh & Hiltz, 2005; Dennen, 2008; Kienle & Ritterskamp, 2007; McKenzie & Murphy, 2000; Schellens & Valcke, 2006).

One of the most discussed potential outcomes of asynchronous online discussion is its promotion of critical thought in students. The acts of reading the postings of other students in order to understand their meaning, and creating written responses to support one's stance, are believed to stimulate more thought about the topic under discussion. This in turn should help students synthesise and evaluate material from multiple perspectives, thus assisting in the development of critical thought (Hara et al., 2000; Rodrigues, 1999; Wu & Hiltz, 2004).

Research using content analysis of online discussions has provided some evidence of deeper levels of student thinking (Gunawardena, Lowe & Anderson, 1997; Thomas, 2002; Williams, 2002). However, Sringham and Geer (2000) found in their study of 200 first year education students that discussion did not go beyond surface levels, with little evidence of any critical thinking. They suggested that students may have made little effort because the contributions were not assessed. Likewise, a study of 20 students enrolled in a Master of Education program, in which participation in the online discussion was optional, found that critical thinking and problem resolution were not demonstrated (Ng & Murphy, 2005).

Asynchronous online discussion is also believed to promote interactivity and collaboration among learners in a way not possible before, with the suggestion that interactivity has great potential to impact learning (Harasim, 1989). The literature on the role of asynchronous online discussion in supporting collaborative learning has, however, been inconclusive. For example, whilst Hiltz (1994) found that "group learning" led to increased perceptions of learning outcomes, Anderson and Kanuka (1998) found little evidence of collaboration, and many other studies point to limited achievement of collaborative learning. Biesenbach-Lucas (2004) found that collaboration can be promoted through the provision of structures such as the incorporation of student initiated prompts, the assignment of posting responsibilities, making connections to course materials, and the inclusion of self evaluation of the discussion. Similarly, other studies stress the need for student support and instructor intervention (Curtis & Lawson, 2001; Lambert, 2003; Lee, 2003; Sringham & Geer, 2000; Taradi & Taradi, 2004) to help students attain collaborative construction of knowledge. Schellens and Valcke (2006) found that group size is a significant factor influencing interaction and that discussion in smaller groups produces higher levels of knowledge construction.

The improvement of student communication skills has also been suggested as a desirable outcome of online discussion. The literature suggests that the acts of writing and reading improve student communication skills (Applebee, 1984; Kienle & Ritterskamp, 2007; MacKinnon, 2000), though the connection between the online discussion activities and the measurement of the improvement of communication skills has not been extensively examined. A recent study (Birch & Volkov, 2007) asked 70 distance education students if they felt that the online discussion contributed to the development of their communication skills, and 85% reported that it did.

Online asynchronous discussions are a common implementation for the adoption of a constructivist approach to learning, and the discussion forum is viewed as an ideal medium for the collaborative construction of knowledge through the active sharing and exchanging of ideas (Anderson & Kanuka, 1998; Leidner & Jarvenpaa, 1995; Moore & Marra, 2005; Weasenforth, Biesenbach-Lucas & Maloni, 2002). As student construction of knowledge via collaboration is quite different from the instructor centred approach adopted in a traditional teaching environment, it is necessary to develop different strategies of teaching and learning (Hazari, 2004; Williams, 2002). Vonderwell, Liang and Alderman (2007) argue that online learning "requires the reconstruction of student and instructor roles, relations and practices" (Vonderwell et al., 2007, p. 31). One such practice is that of assessment, and in developing effective assessment models for use in asynchronous discussion the uniqueness of the online environment needs to be taken into consideration (Bothel, 2002).

Assessment is an important part of the learning process, both for students and instructors. Summative assessment is used for the purposes of grading and is characterised as assessment of learning. Formative assessment is used to adapt teaching and learning to meet student needs, and can be seen as assessment for learning (Vonderwell et al., 2007). The design of assessment generally needs to take into account the purpose of the assessment, what is actually being measured, and how this can best be measured. Though assessment in the traditional environment has been extensively researched, there are additional aspects of assessment in the online environment, such as flexibility, collaboration, self assessment and authenticity, which require further research. Successful online assessment models need to incorporate these additional aspects of the online environment, while continuing to meet the summative and formative assessment needs of both instructors and students.

The many studies on online discussions have made significant contributions to the development of formative assessment. In particular, qualitative research using content analysis has helped researchers to understand how students learn in this virtual environment, by examining what happens in an online discussion (Mason, 1992). Henri's (1992) content analysis schema has formed the basis for subsequent studies (Hara et al., 2000; McKenzie & Murphy, 2000; Newman, Webb & Cochrane, 1995; Ng & Murphy, 2005; Stacey & Gerbic, 2003), some of which have adapted and added to Henri's schema, while other schemas investigate online discussion from different perspectives. Gunawardena, Lowe and Anderson (1997) propose an interaction analysis model for examining the negotiation of meaning and co-construction of knowledge in collaborative computer conferencing environments, while Newman, Webb and Cochrane (1995) have developed a schema to detect critical thinking in online discussions.

A number of other studies have combined a simplistic form of content analysis with some form of quantitative measure to assess the achievement of an identified learning outcome. These include studies that have investigated the achievement of collaborative learning (Biesenbach-Lucas, 2004), the quality of interaction (Corich, Kinshuk & Hunt, 2004), evidence of critical thinking (Garrison, Anderson & Archer, 2000; Greenlaw & DeLoach, 2003), or the construction of knowledge (Kaur, 2004). Though these studies imply that the adopted form of measurement could be used for graded assessment, the development of summative assessment instruments was not their aim.

The studies that have specifically attempted to develop assessment instruments for summative purposes have had limited success. These studies, like those above, have used a combination of content analysis (adapting one or more of the well acknowledged content analysis schemas) and quantitative measurements such as message count, message length, word count and even keyword searches (Chen & Wu, 2004; Hazari, 2004; MacKinnon, 2000; Magnuson, 2005; Vonderwell et al., 2007). Performing content analysis of discussion postings has proved burdensome, especially for large student numbers where the number of postings may be in the hundreds, and hence content analysis has not been used extensively for summative purposes. McKenzie and Murphy (2000) suggest that the reason there is reluctance to use content analysis "may be the time and labour-intensive nature of such an undertaking" (p. 242), and Dennen (2008) suggests that "such extensive message-by-message grading might rapidly become overwhelming for instructors to implement" (p. 7). On the other hand, though the use of quantitative measurements can be very efficient, on its own it fails to reveal what actually happens in the online discussion, as quantitative counts tend to measure participation rather than any actual learning (Dennen, 2008).
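
To make this contrast concrete, the sketch below computes the kind of quantitative participation measures described above. It is illustrative only, not drawn from any of the cited studies; the data structure and field names are assumptions made for the example.

```python
# Illustrative sketch (hypothetical data, not from the cited studies):
# simple participation metrics of the kind described above, namely message
# count, word count and average message length per student.
from collections import defaultdict

def participation_metrics(postings):
    """Summarise per-student participation from (student, text) pairs."""
    stats = defaultdict(lambda: {"messages": 0, "words": 0})
    for student, text in postings:
        stats[student]["messages"] += 1
        stats[student]["words"] += len(text.split())
    return {
        student: {**counts,
                  "avg_words_per_message": counts["words"] / counts["messages"]}
        for student, counts in stats.items()
    }

postings = [
    ("alice", "I think the reading overstates the benefits of assessment."),
    ("bob", "Agreed, although the second study offers a counter-example."),
    ("alice", "Good point, that changes my initial position."),
]
print(participation_metrics(postings))
```

As the paragraph above notes, such counts are cheap to compute but measure participation rather than learning.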

Formal assessment within virtual environments is without a doubt important; however, its place and form within asynchronous online discussion remains unclear, with debate continuing about whether assessment of the discussion postings themselves is essential for successful learning outcomes (Dennen, 2008; Geer, 2003; Hazari, 2004; Palmer et al., 2008; Vonderwell et al., 2007). Williams (2002) notes that where discussion is not assessed there appears little effort to participate:

It has been found that students tend not to participate in the electronic learning environment unless they have to (i.e. it affects their assessment) or need to (i.e. they have no other way of taking the course, or of communicating). Making conferencing or websites available as optional extras does not seem to work as students perceive this as work 'on top' of normal requirements, and do not engage with them in ways which promote more effective learning. (p. 268)

McKenzie and Murphy (2000) stress the need for assessment of online discussion, claiming that its absence will result in students neither visiting the discussion forum nor participating in the discussion. Their study, which did not include any assessment, indicated that 74% of postings were made by only nine of the thirty enrolled students. On the other hand, O'Reilly and Newton (2001) suggest that assessment may not be necessary, arguing that students have an intrinsic motivation to participate in asynchronous online discussion regardless of assessment. MacKinnon (2000) found that assessment stifled spontaneous discussion, and suggests that if unstructured and unprompted postings are the major goal of the discussion, then perhaps assessment should not be used. This lack of consensus about the need for assessment of asynchronous online discussions suggests that further research in the area is needed. This study addresses that issue.

If assessment of asynchronous online discussions is shown to be of value, a further issue needing investigation is what should be assessed. Several studies have suggested that a more effective strategy may be to employ an alternative assessment based on the discussion in some way, rather than directly assessing the individual contributions. Dennen (2008) differentiates between the 'process of learning' and the 'products of learning', stating that a student's discussion posts represent the 'process of learning' but not necessarily the 'products of learning'. The discussion postings reflect what is discussed, but do not reveal what a student has actually learnt. Dennen suggests that having students produce a reflection paper about their discussion experience "serves as a product documenting what the learner has perceived as his or her own process of learning through the act of discussion" (Dennen, 2008, p. 8). Greenlaw and DeLoach (2003) suggest the need for a post discussion exercise, stating that this will assist students in evaluating the contributions and, in the process, help develop their critical thinking skills. Likewise Geer (2003) did not assess the contributions themselves, but required subsequent submission of a 300-400 word response to each discussed topic over the prescribed weeks. Lea (2001) also advocates a post discussion assessment, suggesting that if students use the online discussion to gather information and then incorporate it into a reflective essay, this will also help them develop their writing skills.

Use of a post discussion assessment may be a sensible and practical approach from an instructor's perspective, as evidence indicates that reading and grading discussion postings is a very time consuming activity (Lazarus, 2003). As Brookhart notes, having "an assessment that will take more time than you have ... is not much help" (Brookhart, 2004, p. 11). In his study analysing the daily time logs maintained by instructors of online courses, DiBiase (2004) found that communication via threaded discussions and email consumed the most time.

Research aims

As can be seen from the above discussion of studies on the role of assessment in the success of asynchronous online discussion, further research is required. Success of asynchronous online discussion is defined in this study as the achievement of discussion outcomes, the major outcome being to promote more thought among students, in order to help them synthesise material and develop their critical thinking skills. Successful online discussions also promote interactivity and collaboration among learners, as well as supporting the improvement of communication skills.

The aim of this study was twofold. The first aim was to investigate how discussion outcomes are affected by the incorporation of assessment. The research question to be answered was:

How does having student contributions assessed affect the success of online discussion outcomes?

Secondly, the study investigated the extent to which post discussion assessments are adopted and their effect on the achievement of discussion outcomes. The research question to be answered was:

How does having a subsequent assessment based on the online discussion affect the success of discussion outcomes?

Method

The study reported in this paper was part of a broader project investigating the use of asynchronous online discussion. Only those aspects of the project relating to assessment and the success of online discussion are included in this paper. In order to answer the research questions a survey methodology was adopted. The data was collected via an online survey of academics who had used asynchronous online discussion in their teaching.

Participants were recruited via their membership of educational and information systems listservs, including ASCILITE (Australasian Society for Computers in Learning in Tertiary Education), ODLAA (Open and Distance Learning Association of Australia), IRMA (Information Resources Management Association), AIS (Association for Information Systems) and the Murdoch University learning management system list. This open form of recruitment allowed the inclusion of instructors teaching both fully online and blended courses. An email request was sent to all members of the targeted listservs, inviting them to participate in the online questionnaire by following the link provided. Completion of the questionnaire was entirely voluntary and participants were assured of anonymity. The development of the questionnaire is described below.

The questionnaire

Questions were developed to determine the types of assessment used by each respondent, and their perceptions of their success at achieving outcomes that have been claimed for asynchronous online discussion. The unit of analysis was the most recent course that the respondent had taught using online discussion of assigned topics. Assigned topic discussion was defined as consisting of some or all of the following three elements: a discussion theme, a series of questions, and a set of readings. A discussion theme may contain several sentences describing an issue, controversy or concept. The second element consists of a series of open ended questions designed to stimulate and initiate thought and conversation. Finally, a set of readings may be provided to give students information about the topic.

Both assessment and evaluation were investigated in this study. Assessment was defined in the questionnaire as a form of summative assessment, where a mark contributing to the student's final mark for the course is given. When using asynchronous online discussion, student postings may be assessed, a post discussion exercise assessed, or no form of assessment at all may be used. Evaluation was defined in the questionnaire as the examination of student postings to determine whether the discussion objectives were met and to provide feedback for teaching purposes, with no mark contributing to a student's grade. In some circumstances a mixture of evaluation and assessment may be used. Survey participants were asked to indicate which of these alternatives they had used in their teaching. The question relating to the use of assessment consisted of the above two definitions together with the alternatives shown in Table 1, and respondents could tick all that applied.

Table 1: Assessment/evaluation alternatives

The discussion contributions were neither assessed nor evaluated
The discussion contributions were evaluated (ie. feedback obtained but not assessed)
The discussion contributions were assessed
The discussion contributions form the basis for subsequent assessment - please describe

Desirable outcomes for asynchronous online discussion have been identified in the literature, including critical thought, deeper levels of student thinking, interactivity and collaboration resulting in the construction of knowledge, and communication skills. The achievement of these outcomes can be seen as a measure of successful discussions. Many of these outcomes are based on the learning objectives of Bloom's Taxonomy (Bloom, Engelhart, Furst, Hill & Krathwohl, 1956), which have been used in many previous studies investigating success in online discussion (Christopher, Thomas & Tallent-Runnels, 2004; Gilbert & Dabbagh, 2005; Gunawardena et al., 1997; Schrire, 2006) and so were adopted for the current study. As this study was exploratory in nature, seeking instructor feedback, discussion success was defined here as the instructor's perception of the achievement of the discussion outcomes. Instructors were asked to rate the achievement of the outcomes (listed in Table 2) on a scale of 1 to 7, where 1 indicated 'not successful' and 7 corresponded to 'highly successful'. Respondents could also choose a "this was not a discussion aim" alternative if they felt the outcome was not relevant to their situation.

Table 2: Discussion outcomes

Improved student communication skills
Promoted more thought about the topic under discussion
Increased student awareness of differing perspectives
Enhanced deeper levels of student thinking
Developed critical analysis and reflection in students
Improved student learning through the collaborative construction of knowledge

The final section of the questionnaire collected background information about the participants, and included age, gender, computer competency, possession of a teaching qualification, number of years of teaching, and level of professional development, both for the use of online discussion and for the use of software for online discussion.

Results and discussion

The study described in this paper uses information from 79 respondents who used the online discussion tool for discussion of assigned topics and completed the questionnaire between August and October 2006. Table 3 summarises the background characteristics of the participants. Fifty two percent of the participants were male and 48% were female. Ages ranged from 23 to 66 years, with an average of 46.16 years. Thirty percent indicated they had a formal teaching qualification, while 70% did not; all participants came from tertiary education. Participants had a very wide range of backgrounds in terms of teaching experience and professional development, and in general had relatively high levels of computer skills. Courses taught by the participants included business studies, computer science, information systems, education, environmental studies, health studies, humanities, legal studies, library studies, science and veterinary studies, with no one discipline being over represented. Respondents taught in a range of countries, with 44 instructors (55.7%) teaching in Australia or New Zealand and 26 (32.9%) in the United States of America and Canada. Two respondents taught in Hong Kong, while Finland, Italy, Jordan, Sweden and Uganda each had one participant (two respondents did not specify the country in which they taught).

Table 3: Background characteristics of respondents

                                                                Mean   Min   Max     SD
Age (years)                                                    46.16    23    66  10.63
Teaching in tertiary education (years)                         11.76     1    35   9.16
Teaching in schools (years)                                     3.71   0.2    33   6.55
Skill at computer use (/7)                                      6.18     3     7   0.89
Level of professional development in using online
  discussion software (/7)                                      3.65     1     7   1.73
Level of professional development in using online discussion
  in order to improve student learning outcomes (/7)            3.11     1     7   1.83

Table 4: Use of assessment and evaluation by the participants

                                                                       No.    %
The discussion contributions were neither assessed nor evaluated       17   22
The discussion contributions were evaluated (ie. feedback obtained
  but not assessed)                                                    12   15
The discussion contributions only were assessed                        38   48
Subsequent assessment only was used                                     4    5
Both discussion contributions and subsequent assessment were used       8   10

Table 4 summarises the use of assessment and evaluation by the participants. Seventeen participants (22%) did not assess or evaluate the discussion contributions of their students. This substantiates the suggestion in the literature that assessment frameworks which are not hugely time consuming are needed, as the ones currently available do not appear to be extensively used (Dennen, 2008; McKenzie & Murphy, 2000). Twelve participants (15%) evaluated the discussion contributions for formative purposes, but did not undertake any assessment contributing to the students' results. Fifty respondents (63%) in total used some form of assessment: thirty eight (48%) had assessed discussion contributions only, four (5%) had used subsequent assessment only, and eight (10%) had both assessed discussion contributions and used a form of subsequent assessment.

How does having the student contributions assessed affect the success of online discussion outcomes?

In order to answer the first research question, respondents' perceptions of their success in achieving each of the discussion outcomes were compared between those who had assessed the discussion contributions and those who had not. Table 5 presents the average perceived level of success for each discussion outcome for both the unassessed and assessed discussion groups. The average success of each discussion outcome was compared between unassessed discussion and assessed discussion using independent samples t-tests. The results are summarised next for each outcome.

Table 5: Comparison of discussion outcomes of assessed versus unassessed discussion

                                                       Assessed discussion   Unassessed discussion
Discussion outcomes                                     N   Mean    SD        N   Mean    SD     Significance
Improved student communication skills                  45   5.49  1.33       26   4.65  1.81        0.046
Promoted more thought about the topic under
  discussion                                           50   6.28  0.97       28   5.18  1.72        0.004
Increased student awareness of differing
  perspectives                                         49   6.18  1.05       26   4.88  1.66        0.001
Enhanced deeper levels of student thinking             50   5.98  1.02       27   4.59  1.65       <0.001
Developed critical analysis and reflection in
  students                                             49   5.65  1.15       26   4.19  1.74       <0.001
Improved student learning through the collaborative
  construction of knowledge                            50   5.78  1.33       27   4.63  1.93        0.008
Note: All outcomes were measured on a 7 point scale with 1 indicating 'not successful' and 7 corresponding to 'highly successful'.
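
The authors' analysis scripts are not published, but the comparisons in Table 5 can be re-derived from the summary statistics shown. As a minimal sketch, assuming an unequal-variance (Welch) t-test, which reproduces the reported degrees of freedom, the first outcome can be checked as follows.

```python
# Minimal sketch (not the authors' code): reproducing the Table 5 comparison
# for 'Improved student communication skills' from its summary statistics.
# A Welch (unequal-variance) t-test gives df of about 41, matching the
# reported t(41) = 2.056; small discrepancies reflect rounding of the
# published means and standard deviations.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=5.49, std1=1.33, nobs1=45,  # assessed discussion group
    mean2=4.65, std2=1.81, nobs2=26,  # unassessed discussion group
    equal_var=False,                  # Welch's correction for unequal variances
)
print(f"t = {t:.3f}, p = {p:.3f}")    # approx. t = 2.07, p = 0.045
```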

Improved student communication skills is often claimed as a potential benefit of asynchronous online discussion (Birch & Volkov, 2007; Kienle & Ritterskamp, 2007; MacKinnon, 2000), but there has been little evidence that this potential benefit has been realised. In this study, the instructor ratings of the achievement of improved student communication skills were significantly higher for the assessed discussion group than for the unassessed discussion group (5.49 versus 4.65, t(41) = 2.056, p=0.046). This result is consistent with what has been proposed in the literature (Applebee, 1984; Dennen, 2008; Garrison et al., 2000) and supports the suggestion that students take more care in articulating their contributions when they are being assessed, knowing that what they post on the forum will contribute to their final grade.

The instructor ratings of the achievement of promoted more thought about the topic under discussion were also significantly higher in the assessed discussion group than in the unassessed discussion group (6.28 versus 5.18, t(37) = 3.118, p=0.004). This result is consistent with the suggestion that not only is online discussion a highly suitable medium for facilitating thought, but that the incorporation of assessment is an incentive for students to make an extra effort (Dennen, 2008; Vonderwell et al., 2007; Williams, 2002).

Assessment of discussions was associated with significantly higher instructor ratings of the achievement of increased student awareness of differing perspectives (6.18 versus 4.88, t(36) = 3.627, p<0.001). This result is consistent with what has been proposed in the literature suggesting that assessed contributions result in more careful reading of peer postings and hence more student awareness of the opinions expressed in the postings (Dennen, 2008; Newman et al., 1995).

Significantly higher instructor ratings were reported in the assessed discussion group, compared to the unassessed discussion group, for the achievement of enhanced deeper levels of student thinking (5.98 versus 4.59, t(37) = 3.984, p<0.001). This result is consistent with many findings in the literature where content analysis of online discussion has found evidence of deeper levels of student thinking (Gunawardena et al., 1997; Williams, 2002). Thus assessment appears to encourage more involvement in the discussion and in turn foster deeper thought in students. Consistent with this, the ratings for the achievement of critical analysis and reflection in students were also significantly higher in the assessed discussion group than in the unassessed discussion group (5.65 versus 4.19, t(37) = 3.852, p<0.001).

The literature on the role of asynchronous online discussion in supporting collaborative learning has found that the online discussion environment can facilitate collaborative learning; however collaborative activity does not happen automatically or spontaneously. Much research stresses the need for student support, instructor intervention and thoughtful structuring and integration within the subject matter in order to facilitate collaborative learning (Biesenbach-Lucas, 2004; Curtis & Lawson, 2001; Lambert, 2003; Lee, 2003; Schellens & Valcke, 2006; Sringham & Geer, 2000; Taradi & Taradi, 2004; Weasenforth et al., 2002). In the current study, instructors in the assessed discussion group rated the achievement of improved student learning through the collaborative construction of knowledge by their students significantly higher than did those in the unassessed discussion group (5.78 versus 4.63, t(40) = 2.770, p=0.008). The incorporation of assessment introduces motivation and structure into the discussion and so the result from this study is consistent with the literature.

How does having a subsequent assessment based on the online discussion affect the success of discussion outcomes?

The second research question considers what effect a subsequent assessment has on the success of discussion outcomes. Only the discussion outcomes for which assessment provided significant improvements are considered. Table 6 and Figure 1 show the mean rating of the achievement of each of these discussion outcomes for the four categories of assessment: no assessment, assessment of the discussion contributions only, a subsequent assessment only, or assessment of both the discussion contributions and a subsequent exercise. ANOVA was used to compare the different assessment groupings. In cases where ANOVA indicated significant differences, the LSD (least significant difference) test was used to perform pairwise comparisons to determine the exact nature of the difference.
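
The raw ratings underlying these analyses are not published, so the sketch below only illustrates the procedure just described: a one-way ANOVA across the four assessment groupings, followed by unadjusted pairwise comparisons, which is essentially what Fisher's LSD amounts to once the overall F test is significant (strictly, LSD uses the pooled ANOVA error term; per-pair t-tests are a common approximation). The ratings are invented for illustration.

```python
# Illustrative sketch (invented ratings, not the study's data): one-way ANOVA
# over the four assessment groupings, then LSD-style pairwise comparisons if
# the overall F is significant. Unadjusted per-pair t-tests approximate
# Fisher's LSD, which strictly uses the pooled ANOVA error term.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

groups = {
    "no assessment": [4, 5, 3, 5, 4, 6],
    "contributions only": [6, 5, 7, 6, 6],
    "subsequent only": [6, 7, 6, 7],
    "contributions + subsequent": [7, 6, 7, 7],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:  # LSD is only 'protected' by a significant overall F
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t, p = ttest_ind(a, b)  # unadjusted pairwise t-test
        print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.3f}")
```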

Consistent with the answers to the first research question, there were significant differences in ratings of achievement of all considered outcomes across the assessment groups: improved student communication skills (F=3.17, p=0.030), the promotion of thought about the discussion topic (F=5.08, p=0.003), increased student awareness of differing perspectives (F=6.75, p=0.001), enhanced deeper levels of student thinking (F=8.84, p<0.001), development of critical analysis and thinking (F=6.63, p<0.001), and improved student learning through the collaborative construction of knowledge (F=4.43, p=0.006). However, in all cases except two, the significant differences were between those who had not used any assessment and those who had used some form of assessment. The two exceptions, where there were differences between the different types of assessment, were the outcomes of improved student communication skills and enhanced deeper levels of student thinking, for which a significant difference was found between those who had assessed discussion contributions only and those who had used both assessment of discussion contributions and a subsequent assessment. The lack of any other differences between the assessment groups may be attributable to the low number of instructors who used a subsequent assessment (n=4), and further research is required before drawing any conclusions regarding the usefulness of post-discussion assessment.

Table 6: Discussion outcomes for the different groupings of assessment
(each cell shows N, Mean, SD)

                                              No              Contribution    Subsequent     Contribution +
Discussion outcome                            assessment      assess. only    assess. only   subsequent assess.    F    Signif.  Pairwise
Improved student communication skills         26, 4.65, 1.81  34, 5.29, 1.34  4, 5.25, 1.26  7, 6.57, 0.79       3.17    0.030   N_CS, CO_CS
Promoted more thought about the topic
  under discussion                            28, 5.18, 1.72  38, 6.13, 1.02  4, 6.75, 0.50  8, 6.75, 0.71       5.08    0.003   N_CO, N_CS
Increased student awareness of differing
  perspectives                                26, 4.88, 1.66  37, 6.03, 1.12  4, 6.25, 0.96  8, 6.88, 0.35       6.75   <0.001   N_CO, N_CS
Enhanced deeper levels of student thinking    27, 4.59, 1.65  38, 5.76, 1.05  4, 6.50, 0.58  8, 6.75, 0.46       8.84   <0.001   N_CO, N_CS, N_SO, CO_CS
Developed critical analysis and reflection
  in students                                 26, 4.19, 1.74  37, 5.57, 1.26  4, 5.50, 0.58  8, 6.13, 0.64       6.63    0.001   N_CO, N_SO, N_CS
Improved student learning through the
  collaborative construction of knowledge     27, 4.63, 1.93  38, 5.55, 1.41  4, 6.25, 0.50  8, 6.63, 0.74       4.43    0.006   N_CO, N_CS

N_CO: Significant difference in means (p<0.05) between the no assessment and discussion contribution assessment only groups
N_CS: Significant difference in means (p<0.05) between the no assessment group and the group with both assessment of discussion contributions and subsequent assessment
N_SO: Significant difference in means (p<0.05) between the no assessment and subsequent assessment only groups
CO_CS: Significant difference in means (p<0.05) between the discussion contribution assessment only group and the group with both assessment of discussion contributions and subsequent assessment

Figure 1: Discussion outcomes for the different groupings of assessment [figure not reproduced]

It is interesting, however, that so few participants had introduced subsequent assessment relating to the discussion postings into their courses. This raises the question of how well asynchronous online discussions are integrated into the content and assessment of courses. As discussed earlier, Dennen (2008) stresses the need to assess the 'products of learning' rather than the 'process of learning', and argues that the discussion contributions represent the latter, whereas post-discussion exercises are a better representation of the 'products of learning'. Several studies that examined the assessment of discussion contributions have reached similar conclusions and advocate the use of a reflective piece of writing that assists students in evaluating and synthesising the information presented in the discussion contributions, thereby developing critical thinking skills (Clark, 2000; Geer, 2003; Greenlaw & DeLoach, 2003; Lea, 2001). Further research into the use of a post-discussion assessment will help to clarify the benefit of this type of exercise.

It may also be useful to investigate whether a post-discussion assessment alone is better for student learning than assessing the discussion postings, or whether a post-discussion assessment combined with a simplified assessment of the discussion contributions may be best. However, any suggestion of assessing the contributions needs to take into account the time consuming nature of reading and grading discussion postings (Brookhart, 2004; DiBiase, 2004; Lazarus, 2003).

Limitations of the study

This study has a number of limitations that should be considered when interpreting the results. Firstly, as participants were recruited via a number of educational and information systems listservs on a voluntary basis, it is reasonable to conclude that those completing the questionnaire may represent the enthusiasts for online discussion rather than instructors in general (Atkinson, 2007; Carbonaro, Bainbridge & Wolodko, 2002). The results would, therefore, be a reflection of the experience of these instructors rather than a representation of all instructors.

A second limitation relates to the time and effort involved in assessing the online discussions. It is possible that instructors who assess the discussion may rate the achievement of the discussion outcomes more highly than those not assessing the discussion contributions, due to the time and effort investment. Measurement of actual rather than perceived impacts of assessment of online discussion on students' learning outcomes in future research would clarify this issue.

Conclusion

Asynchronous online discussion is widely used in both totally online learning environments and blended environments, but its benefits are still being debated. This study provides empirical evidence which clearly demonstrates the value of assessment associated with asynchronous online discussion, by studying the influence of assessment of discussion postings on the achievement of discussion outcomes as perceived by instructors. The study compared perceptions of the achievement of discussion outcomes between academics who assessed discussion postings and those who did not. The findings indicate that the incorporation of assessment had a significant positive impact on a number of discussion outcomes, including communication skills, amount of thought about the topic under discussion, awareness of differing perspectives, depth of thinking, critical analysis and reflection, and learning through the collaborative construction of knowledge. Finally, the study investigated whether instructors had used a post-discussion assessment and, if so, how they perceived its effect on discussion outcomes. The results of this, however, were inconclusive.

This study has shown that there are very clear benefits in using assessment with asynchronous online discussion. However, in order to maximise these benefits, more research is needed to find an effective and time efficient method, based on sound pedagogical principles, of assessing discussion contributions, especially if large undergraduate courses are to have their online discussion contributions assessed. The majority of participants using assessment in this study reported that they assessed the discussion postings, despite the potentially overwhelming burden of reading and marking these contributions (Dennen, 2008; McKenzie & Murphy, 2000; Palmer et al., 2008). However, some research suggests that assessing the contributions may not be the best indicator of student learning and that alternative forms of assessment should be investigated (Dennen, 2008; Greenlaw & DeLoach, 2003; Lea, 2001). The current study attempted to determine whether having a subsequent assessment based on the online discussion was of value, but the results were inconclusive. A post-discussion assessment has the potential to ease the marking burden compared with assessing discussion contributions, and given that research suggests this form of assessment may be a better indicator of student learning (Dennen, 2008), more research is needed in this area.

References

Alavi, M., Marakas, G. M. & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13(4), 404-415.

Anderson, T. & Kanuka, H. (1998). Online social interchange, discord, and knowledge construction. Journal of Distance Education, 13(1), 57-74.

Applebee, A. N. (1984). Writing and reasoning. Review of Educational Research, 54(4), 557-596.

Arbaugh, J. B. & Hiltz, S. R. (2005). Improving quantitative research on ALN effectiveness. In S. R. Hiltz & R. Goldman (Eds.), Learning together online: Research on asynchronous learning networks (pp. 81-102). Mahwah, New Jersey: Lawrence Erlbaum.

Atkinson, R. (2007). Can we trust web based surveys? HERDSA News, 29(3). http://www.roger-atkinson.id.au/pubs/herdsa-news/29-3.html

Beaudin, B. P. (1999). Keeping online asynchronous discussions on topic. Journal of Asynchronous Learning Networks, 3(2), 41-53. [verified 26 Oct 2009] http://www.aln.org/publications/jaln/v3n2/pdf/v3n2_beaudin.pdf

Biesenbach-Lucas, S. (2004). Asynchronous web discussions in teacher training courses: Promoting collaborative learning - or not? Association for the Advancement of Computing in Education Journal, 12(2), 155-170. [verified 26 Oct 2009] http://www.aace.org/pubs/aacej/temp/03lucas155-170.pdf

Birch, D. & Volkov, M. (2007). Assessment of online reflections: Engaging English second language (ESL) students. Australasian Journal of Educational Technology, 23(3), 291-306. http://www.ascilite.org.au/ajet/ajet23/birch.html

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives - The classification of educational goals, Handbook 1, Cognitive domain. London: Longman Group.

Bothel, R. T. (2002). Epilogue: A cautionary note about on-line assessment. New Directions for Teaching and Learning, 91(Fall), 99-104.

Brookhart, S. M. (2004). Assessment theory for college classrooms. New Directions for Teaching and Learning, 100(Winter), 5-14.

Carbonaro, M., Bainbridge, J. & Wolodko, B. (2002). Using Internet surveys to gather research data from teachers: Trials and tribulations. Australian Journal of Educational Technology, 18(3), 275-292. http://www.ascilite.org.au/ajet/ajet18/carbonaro.html

Chen, X. & Wu, B. (2004). Assessing student learning through keyword density analysis of online class messages. In Proceedings of the Tenth Americas Conference on Information Systems (pp. 2984-2990). New York, New York. [verified 31 Oct 2009] http://web.njit.edu/~wu/publication/SIGED02-1556.pdf

Christopher, M. M., Thomas, J. A. & Tallent-Runnels, M. K. (2004). Raising the bar: Encouraging high level thinking in online discussion forums. Roeper Review, 26(3), 166-171. [verified 31 Oct 2009] http://www.thefreelibrary.com/Raising+the+bar:+encouraging+high+level+thinking+in+online+discussion...-a0116187156

Clark, M. (2000). Getting participation through discussion. ACM SIGCSE Bulletin, 32(1), 129-133.

Corich, S., Kinshuk & Hunt, L. M. (2004). Assessing discussion forum participation: In search of quality [Electronic Version]. International Journal of Instructional Technology and Distance Learning, 1(12). [viewed 12 Dec 2005, verified 31 Oct 2009] http://www.itdl.org/Journal/Dec_04/article01.htm

Curtin, J. (2002). WebCT and online tutorials: New possibilities for student interaction. Australian Journal of Educational Technology, 18(1), 110-126. http://www.ascilite.org.au/ajet/ajet18/curtin.html

Curtis, D. D. & Lawson, M. J. (2001). Exploring collaborative online learning. Journal of Asynchronous Learning Networks, 5(1), 21-34. [verified 31 Oct 2009] http://www.aln.org/publications/jaln/v5n1/pdf/v5n1_curtis.pdf

Dennen, V. P. (2008). Looking for evidence of learning: Assessment and analysis methods for online discourse. Computers in Human Behavior, 24(2), 205-219.

DiBiase, D. (2004). The impact of increasing enrollment on faculty workload and student satisfaction over time. Journal of Asynchronous Learning Networks, 8(2), 45-60. [verified 31 Oct 2009] http://www.sloan-c.org/publications/jaln/v8n2/pdf/v8n2_dibiase.pdf

Foley, G. & Schuck, S. (1998). Web-based conferencing: Pedagogical asset or constraint? Australian Journal of Educational Technology, 14(2), 122-140. http://www.ascilite.org.au/ajet/ajet14/foley.html

Garrison, D. R., Anderson, T. & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

Geer, R. (2003). Initial communicating styles and their impact on further interactions in computer conferences. In Interact Integrate Impact: Proceedings ASCILITE Adelaide 2003 (pp. 194-202). http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/194.pdf

Gilbert, P. K. & Dabbagh, N. (2005). How to structure online discussions for meaningful discourse: A case study. British Journal of Educational Technology, 36(1), 5-18.

Greenlaw, S. A. & DeLoach, S. B. (2003). Teaching critical thinking with electronic discussion. Journal of Economic Education, 34(1), 36-52. [verified 31 Oct 2009] http://www.journalofeconed.org/pdfs/winter2003/4greenlawwinter03.pdf

Gunawardena, C. N., Lowe, C. A. & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.

Hara, N., Bonk, C. J. & Angeli, C. (2000). Content analysis of online discussion in an applied educational psychology course. Instructional Science, 28(2), 115-152.

Harasim, L. (1989). Online education: A new domain. In R. Mason & A. Kaye (Eds.), Mindweave: Communication, computers and distance education (pp. 50-62). Oxford: Pergamon Press.

Hazari, S. (2004). Strategy for assessment of online course discussions. Journal of Information Systems Education, 15(4), 349-355.

Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden Papers (pp. 115-136). New York: Springer.

Hiltz, S. R. (1994). The virtual classroom: Learning without limits via computer networks. Norwood NJ: Ablex Publishing Corp.

Kaur, A. (2004). A study of social construction of knowledge and its relationship to academic achievement using asynchronous conferencing tool. Unpublished EdD, Columbia University, Columbia.

Kienle, A., & Ritterskamp, C. (2007). Facilitating asynchronous discussions in learning communities: The impact of moderation strategies. Behaviour & Information Technology, 26(1), 73-80.

Lambert, S. (2003). Collaborative design projects: Evaluating students' online discussions. In Interact, Integrate, Impact: Proceedings ASCILITE Adelaide 2003. http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/293.pdf

Lazarus, B. D. (2003). Teaching courses online: How much time does it take? Journal of Asynchronous Learning Networks, 7(3), 47-54. [verified 31 Oct 2009] http://www.sloan-c.org/publications/jaln/v7n3/pdf/v7n3_lazarus.pdf

Lea, M. (2001). Computer conferencing and assessment: New ways of writing in higher education. Studies in Higher Education, 26(2), 163-181.

Lee, M. C. (2003). Impacts of cognitive structuring methods on students' critical thinking enhancement in on-line collaborative learning. Unpublished PhD, University of Illinois at Chicago, Chicago.

Leidner, D. E. & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291.

MacKinnon, G. R. (2000). The dilemma of evaluating electronic discussion groups. Journal of Research on Computing in Education, 33(2), 125-131.

Magnuson, C. (2005). Experiential learning and the discussion board: A strategy, a rubric, and management techniques. Distance Learning, 2(2), 15-20.

Mason, R. (1992). Evaluation methodologies for computer conferencing applications. In A. R. Kaye (Ed.), Collaborative learning through computer conferencing: The Najaden Papers (Vol. 90, pp. 105-116). New York: Springer.

McKenzie, W. & Murphy, D. (2000). I hope this goes somewhere: Evaluation of an online discussion group. Australian Journal of Educational Technology, 16(3), 239-257. http://www.ascilite.org.au/ajet/ajet16/mckenzie.html

Moore, J. L. & Marra, R. M. (2005). A comparative analysis of online discussion participation protocols. Journal of Research on Technology in Education, 38(2), 191-212.

Muilenburg, L. & Berge, Z. L. (2000). A framework for designing questions for online learning. Distance Education Online Symposium NEWS, 10(2). [viewed 27 Jan 2008, verified 31 Oct 2009] http://www.ed.psu.edu/acsde/deos/deosnews/deosnews10_2.asp

Newman, D., Webb, B. & Cochrane, C. (1995). A content analysis method to measure critical thinking in face-to-face and computer support group learning [Electronic Version]. Interpersonal Computing and Technology, 3(2), 56-77. [viewed 5 Apr 2009, verified 31 Oct 2009] http://www.emoderators.com/ipct-j/1995/n2/newman.html

Ng, K. C. & Murphy, D. (2005). Evaluating interactivity and learning in computer conferencing using content analysis techniques. Distance Education, 26(1), 89-109.

O'Reilly, M. & Newton, D. (2001). Why interact if it's not assessed? Academic Exchange, Winter, 70-76.

Palmer, S., Holt, D. & Bray, S. (2008). Does the discussion help? The impact of a formally assessed online discussion on final student results. British Journal of Educational Technology, 39(5), 847-858.

Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21-40. [verified 31 Oct 2009] http://www.sloan-c.org/publications/jaln/v6n1/pdf/v6n1_picciano.pdf

Rodrigues, S. (1999). Evaluation of an online masters course in science teacher education. Journal of Education for Teaching, 25(3), 263-270.

Rourke, L. & Anderson, T. (2002). Using peer teams to lead online discussions. Journal of Interactive Media in Education, 52(1), 5-18.

Schellens, T. & Valcke, M. (2006). Fostering knowledge construction in university students through asynchronous discussion groups. Computers & Education, 46(4), 349-370.

Schrire, S. (2006). Knowledge building in asynchronous discussion groups: Going beyond quantitative analysis. Computers & Education, 46, 49-70.

Sringham, C. & Geer, R. (2000). An investigation of an instrument for analysis of student-led electronic discussions. In Learning to Choose, Choosing to Learn: Proceedings ASCILITE Coffs Harbour 2000 (pp. 81-91). http://www.ascilite.org.au/conferences/coffs00/papers/chinawong_sringam.pdf

Stacey, E. & Gerbic, P. (2003). Investigating the impact of computer conferencing: Content analysis as a manageable research tool. In Interact, Integrate, Impact: Proceedings ASCILITE Adelaide 2003 (pp. 495-504). http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/495.pdf

Taradi, S. K. & Taradi, M. (2004). Expanding the traditional physiology class with asynchronous online discussions and collaborative projects. Advances in Physiology Education, 28(June), 73-78.

Thomas, M. (2002). Learning within incoherent structures: The space of online discussion forums. Journal of Computer Assisted Learning, 18, 351-366.

Vonderwell, S., Liang, X. & Alderman, K. (2007). Asynchronous discussions and assessment in online learning. Journal of Research on Technology in Education, 39(3), 309-328.

Weasenforth, D., Biesenbach-Lucas, S. & Maloni, C. (2002). Realizing constructivist objectives through collaborative technologies: Threaded discussions. Language Learning & Technology, 6(3), 58-86. [verified 31 Oct 2009] http://llt.msu.edu/vol6num3/weasenforth/

Williams, C. (2002). Learning on-line: A review of recent literature in a rapidly expanding field. Journal of Further and Higher Education, 26(3), 263-272.

Wu, D. & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions. Journal of Asynchronous Learning Networks, 8(2), 139-152. [verified 31 Oct 2009] http://www.sloan-c.org/publications/jaln/v8n2/pdf/v8n2_wu.pdf

Zhang, D., Zhao, J. L., Zhou, L. & Nunamaker, J. F. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 75-79.

Authors: Chris Klisc, Lecturer
Dr Tanya McGill, Associate Professor
Dr Valerie Hobbs, Senior Lecturer
School of Information Technology, Murdoch University, Murdoch WA 6150, Australia
Email: c.klisc@murdoch.edu.au, t.mcgill@murdoch.edu.au, v.hobbs@murdoch.edu.au
Web: http://www.it.murdoch.edu.au/

Please cite as: Klisc, C., McGill, T. & Hobbs, V. (2009). The effect of assessment on the outcomes of asynchronous online discussion as perceived by instructors. Australasian Journal of Educational Technology, 25(5), 666-682. http://www.ascilite.org.au/ajet/ajet25/klisc.html

