Australian Journal of Educational Technology
2000, 16(3), 239-257.
This article outlines the application of a particular model of content analysis (Henri, 1992; 1993) to the evaluation of an online discussion group. The discussion group was part of the learning environment for a subject of the Graduate Certificate in Higher Education offered by the Centre for Higher Education at Monash University, Australia. Henri's model focuses on the level of participation and interaction in the discussion group, as well as analysing the content of the messages according to a cognitive view of learning. Overall, the analysis confirmed the success of the discussion group, and provided a useful conceptual lens with which to study the online environment.
Advocates of the use of online discussion groups in education are enthusiastic about the benefits this opportunity for interaction affords the learner. Harasim (1989) describes the part that online interaction can play in collaborative learning environments, emphasising the positive effects of being actively engaged in learning, sharing information and perspectives through interaction with other learners. The asynchronous nature of the communication affords extra advantages in terms of promoting reflective thinking, as well as more practically offering increased flexibility of time and place of learning (eg, Bates, 1995; Harasim et al., 1995). As well as allowing flexibility, online discussion groups can help to reduce the isolation of learning at a distance, and play an important role in the social aspects of learning (Harasim et al., 1995, Mason & Weller, 2000).
Developments in methods for evaluating the quality of online discussion have trailed the enthusiastic adoption of computer mediated communication technologies, ranging from highly quantitative approaches, such as the use of tracking software (eg, Pitman et al., 1999), to more qualitative methods, such as discourse analysis (Owen, 2000). Just as the increasing availability of online discussion groups offers challenges to teachers designing learning environments, this technology also offers unique opportunities for evaluation. This article outlines the analysis of an online discussion group that operated as part of a flexible learning subject offered over one semester. In particular, the tool used for evaluation was Henri's (1992; 1993) method for analysing the content of discussion transcripts. This approach was found to provide a useful conceptual lens for coming to an understanding of the success of the group.
Integrated with the web materials for the subject is an online discussion group, using Netscape Communicator. The decision to use this, rather than more specialised and elaborate software, was based on the fact that it was Monash University's default application: academics had ready access to it, it could be very simply set up, and it was likely to be the software they would use for their own teaching.
In planning for use of the discussion group, one aim was to ensure that it was not just an 'add on', but an integral part of the learning environment. At the same time, it was felt that there should be sufficient flexibility in its use so that participants would use it proactively. Prompting for use of the discussion group was provided by links in the online materials, specific requirements of some online activities and a weekly global email from the subject coordinator. So, although members of the teaching team did post messages to the group, it was most often in response to questions or issues raised by participants. It is interesting that even though there was an introductory message from a teacher, it was not the first message to the group; this came from a student who was keen to get started.
The key role that the teacher or moderator can play in an online environment is a topic of particular interest in the literature, with a key recent contribution being that of Salmon (2000). Based on her extensive experience and research, Salmon has developed a five stage, research based model for what she calls 'e-moderation'. This model of teaching and learning online includes a range of suggestions for teaching in a variety of online contexts, ideas for the evaluation of online discussion groups, along with a list of useful questions for evaluators to explore (Salmon, 2000, p. 121).
As mentioned, in the current context, the discussion group was not the only means of interactivity within the online materials. Use of the InterLearn software, which is based on a database structure, enabled participants to view each other's online activities and at times collaborate on such activities, thus encouraging a constructivist perspective of learning. In this case most of the activities focussed on their practice as teachers in higher education, and how the material they studied could be applied in the learning environments for which they were responsible. It was also envisaged that, by spending time both posting and searching the online activities, participants would be stimulated to use the discussion group to follow up on issues that arose and raise points of debate and special interest. To aid navigation, a link to the discussion group was included as a standard icon on each page of the participants' individual 'worksites', and reminders to visit the group regularly were interspersed in the materials. Overall, though, the group was not 'teacher-led', but was a forum where participants shared their experiences, concerns and opinions. In this way the online environment deviated somewhat from that propounded by Salmon (2000), as in this context (notably that the participants were all academic staff) it was considered that the online moderator would play a lesser role.
The analysis of the online discussion group that follows was part of an overall evaluation strategy for the subject. Other aspects included focus groups, discussions at face to face tutorials and both qualitative and quantitative questionnaires. It should also be noted that participation in the discussion group was not part of the formal assessment of the subject. However, seven of the activities submitted via the online worksite did contribute to the final mark (an ungraded assessment of 'unsatisfactory/ satisfactory').
The transparency of online discussion, the fact that all communication is easily organised, stored and retrieved, suggests that analyses of the discussion records themselves would be a useful approach. Despite the wealth of data that is available in transcript analysis, few researchers attempt an in-depth analysis of the content of the discussion record (Gunawardena, Lowe, & Anderson, 1998; Romiszowski & Mason, 1996). One reason for this reluctance may be the time and labour-intensive nature of such an undertaking. Another reason, perhaps, is the lack of availability of tried and tested frameworks for evaluating the effectiveness of online discussion using transcripts.
Of the models that have been proposed, the preferred method of analysis varies according to the purpose of the evaluation and the interests of the researchers. For example, Levin, Kim, and Riel (1990) focus on the need to understand the nature of the interaction among participants. This analytical approach leads to the construction of 'message maps' that represent the flow of communication within the group, but are not concerned with message content. Other researchers are primarily interested in evaluating the effectiveness of online discussion in terms of the learning process. These researchers take a similar approach to analysing the discussion record, first breaking the transcript down into small units and then classifying these units according to content. In some cases the categories are defined retrospectively, and are tailored to capture the flavour of a particular forum (eg, Mowrer, 1996). Others have taken a more theoretical perspective, where the categories for analysis are designed a priori to reflect evidence about the learning process in which the participants are engaged. It is this level of analysis, Henri (1992) argues, that is needed to evaluate and guide the use of online discussion environments. In Henri's model (1992, 1993) the transcripts are analysed according to five dimensions, these being participative, interactive, social, cognitive and metacognitive. Her approach is grounded in a cognitive view of learning, focusing on the level of knowledge and skills evident in the learners' communications. Aspects of this model have been taken up and expanded upon by others interested in comparing the level of critical thinking in face to face seminars and computer conferences (eg, Newman, Johnson, Webb, & Cochrane, 1997).
Henri's approach, however, will not be suited to all evaluation purposes. For example, Gunawardena et al. (1998) judged Henri's model to be inappropriate for their analysis of an online debate. It was suggested that Henri's analysis of interaction did not reflect the 'gestalt' of the entire online discussion, but rather focused on links between specific messages. A gestaltist approach to analysing the interaction of the entire online conference was central to Gunawardena et al.'s purpose to evaluate evidence for the social construction of knowledge. Their own preferred method of content analysis was developed to capture the progression of ideas as they were reflected at different phases of the debate: sharing/comparing information; identifying areas of disagreement; negotiating meaning and co-construction of knowledge; testing and modification of proposed synthesis, and agreement statements. As the nature of the online discussion group that is the subject of our analysis was more an informal sharing of ideas and experiences, Gunawardena et al.'s approach did not seem to fit our purpose.
Therefore, in the current context, the Henri (1992; 1993) framework was chosen to evaluate the effectiveness of the online discussion group because it allowed for analysis of a range of aspects of an online discussion: the level of participation in the form of usage statistics, the nature of the interaction between contributors, and an indication of the learning process through an analysis of the cognitive activity evident in the message content.
Each message unit was classified according to the categories defined by Henri's (1992; 1993) model. This approach yields both quantitative and qualitative data, according to five broad dimensions: participation, social, interaction, cognitive and metacognitive aspects. Some alterations to this method of analysis were made to include additional information in some categories. The categories used to describe level of participation (including social) and interaction are described in Table 1. Table 2 outlines the categories used to classify the message content relevant to the cognitive and metacognitive dimensions.
Participation / social
The participation dimension included measures of the level of participation, structure and type of participation. The level of participation was indicated by the number of messages, number of message units, and length of each message unit. The structure of the online discussion was observed by recording the day of the week and time of day for each message. The analysis also allowed tracking of the 'threads' of the discussion according to the subject heading of the messages.
Table 1: Summary of the classifications used in the transcript analysis to measure participation and interaction, based on a slightly modified version of Henri's model (1992; 1993).
Level of participation
  - Number of messages
  - Number of message units
  - Length of message unit (lines)

Structure
  - Day and time of posting

Type of participation
  - Administrative (A): question about submission of work
  - Technical (T): technical problems with access
  - Social (S)
      Self: 'Hi, my name is X and .....'
      Other: 'Hope you all have a good Easter ...'
  - Content (C)
      Direct: 'The reading on learning outcomes...'
      Indirect: 'ideas to help with noisy students'

Type of interaction
  - Explicit interaction
      Direct response (DR): 'Hello X, In response to your question about...'
      Direct commentary (DC): 'I agree with X that ...'
  - Implicit interaction
      Indirect response (IR): 'I think that the answer might be ...'
      Indirect commentary (IC): 'I agree that students ...'
  - Independent statement (IS): relating to the subject under discussion, but not in reference to a prior contribution
An indication of the type of participation was gained by coding message units as either referring to some aspect of course administration (A), the use of the technology to access and use the online site (T), or a reference to the content of the subject (C). A further category was added to distinguish between content messages (C) that drew on subject material specific to the course (direct), or relevant to the topic of teaching and learning in general (indirect). The fourth type of participation reflected a social purpose (S), and is equivalent to Henri's second dimension (1992; 1993). The social category was further classified as social expression about one's self (ie, personal introduction), or an expression of sociability directed toward others (eg, asking about others' well-being).
Henri's (1992; 1993) interactivity dimension differentiates between contributions to the online discussion that are explicit, implicit or independent. Explicit interactions can be either in response to a question posed (DR) or a commentary on someone else's message (DC). In explicit interactions the person to whom the communication is directed is indicated in the message. For these explicit interactions a record of the sender and the person to which the message is directed was noted so that patterns of communication between participants might be observed. Implicit interactions were defined as including a response to (IR) or commentary on (IC) a prior message, but without indicating specifically to which message the contribution referred. The independent statement (IS) category was reserved for cases where a message contained new ideas not connected to others previously expressed in the discussion forum.
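The coding scheme described above amounts to tallying category labels over message units. As a minimal sketch, the tally could be computed as follows; the data, field names and counts below are purely illustrative, not drawn from the study's transcripts.

```python
from collections import Counter

# Hypothetical coded message units following the scheme above: participation
# type (A = administrative, T = technical, C = content, S = social) and
# interaction type (DR, DC, IR, IC, IS). The data are illustrative only.
coded_units = [
    {"participation": "C", "interaction": "DR"},
    {"participation": "C", "interaction": "IS"},
    {"participation": "A", "interaction": "IS"},
    {"participation": "S", "interaction": "DC"},
    {"participation": "C", "interaction": "IR"},
]

# Tally each dimension separately, as the analysis reports them separately.
participation = Counter(u["participation"] for u in coded_units)
interaction = Counter(u["interaction"] for u in coded_units)

print(participation)  # counts per participation type
print(interaction)    # counts per interaction type
```

Keeping each dimension as its own tally mirrors the way the results below report participation and interaction percentages independently of one another.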
Cognitive and metacognitive dimensions
The message units defined as relevant to the content of the subject (C) were then classified according to Henri's cognitive and metacognitive dimensions. Examples of how each of these categories was interpreted in the current analysis are given in Table 2 (see Henri, 1992; 1993).
The cognitive skills dimension is based on a taxonomy of cognitive processes and skills thought to reflect the nature of the learning process (see Henri, 1993). The first classification outlines five levels of critical thinking: elementary clarification (introducing a problem and its parts); in depth clarification (analysis indicating added insight and understanding of the nature of the problem); inference (evidence of inductive or deductive reasoning); judgement (making a judgement, summing up); and strategies (proposing what is needed to implement a solution).
Table 2: Summary of the classifications used in the transcript analysis to assess the cognitive and metacognitive dimensions of Henri's model (1992; 1993).
Critical thinking skills
  - Elementary clarification: introduce a problem; pose a question; pass on information
  - In depth clarification: analyse a problem; identify assumptions
  - Inference: concluding based on evidence from prior statements
  - Judgement: expresses a judgement about an inference, or the relevance of an argument, theory, or solution
  - Strategies: proposes a solution; outlines what is needed to implement a solution

Information processing
  - Surface processing: repetition without adding new information; statement without justification; suggesting a solution without explanation
  - In depth processing: brings in new information; shows links; solutions proposed with analysis of possible consequences; evidence of justification; presents a wider view

Metacognitive knowledge
  - Person: comparing self to others as a cognitive being, eg, student perspective vs. teacher perspective
  - Task: showing an awareness of one's approach to a cognitive task, eg, preparing a lecture
  - Strategies: comment on strategies used to reach an objective and assess progress, eg, 'I find I do X when trying to ....'

Metacognitive skills
  - Evaluation: question about the value of one's ideas or way of going about a task, eg, 'I do not have a good understanding of ....'
  - Planning: evidence of organising steps needed and prediction of what is likely to happen
  - Regulation: evidence of implementing a strategy and assessing progress
  - Self-awareness: eg, 'I know I feel .....' / 'I found learning about ... interesting'
The second aspect of cognitive skill to be evaluated according to Henri's (1992; 1993) method was the level of information processing evident in the message content, classified according to the dichotomy of surface versus deep processing. In depth processing reflects organisation and critical evaluation of information, the opposite of this being surface processing indicated by repetition and the absence of evidence of elaboration and justification.
Evidence of participants' metacognitive processes was observed by including Henri's (1992; 1993) categories of metacognitive knowledge and metacognitive skills. Metacognitive knowledge refers to declarative knowledge about the person (what is known about the person as a 'cognitive being'); the task (appreciation of the task and information available); and strategies used (how a cognitive task is successfully completed). Expression of metacognitive skills reflects knowing how to assess one's knowledge, skills and strategies (evaluation), predict and organise what is needed to complete a cognitive task (planning); initiate and supervise progress toward reaching one's objectives (regulation); and recognise and understand one's feelings and thoughts about the task (self-awareness).
The first author completed the initial coding of data. A reliability analysis was undertaken, with a random sample of one-third of the messages being coded by an independent researcher. A comparison of the results showed the level of agreement between the two scorers was 95% on type of participation, 76% on type of interaction, 44% on critical thinking skills, and 95% on information processing. The reliability of classifying message content for the five levels of critical thinking was quite poor. On closer inspection the majority of discrepancies were between neighbouring categories, in particular between levels of clarification (elementary and in-depth), and between the categories showing evidence of drawing conclusions (inference and judgement). A reanalysis of the scoring collapsing the categories into three (clarification, conclusion, strategy) resulted in 68% agreement between scorers. The number of messages showing evidence of metacognitive aspects in the sample was very small, and even for these messages there was poor agreement. Limitations of this approach are taken up further in the discussion.
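The reliability check described above, including the reanalysis with collapsed categories, can be sketched as simple percent agreement between two coders. The category labels and ratings below are illustrative, not the actual study data.

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of units on which two coders assigned the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must rate the same set of units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical five-level critical thinking codes: EC/IC = elementary /
# in-depth clarification, INF = inference, JUD = judgement, STR = strategies.
coder_a = ["EC", "IC", "INF", "JUD", "STR", "EC", "IC", "INF"]
coder_b = ["IC", "IC", "JUD", "JUD", "STR", "EC", "EC", "INF"]

# Collapse neighbouring categories into the three broader ones used in the
# reanalysis (clarification, conclusion, strategy).
collapse = {"EC": "clarification", "IC": "clarification",
            "INF": "conclusion", "JUD": "conclusion",
            "STR": "strategy"}

before = percent_agreement(coder_a, coder_b)
after = percent_agreement([collapse[c] for c in coder_a],
                          [collapse[c] for c in coder_b])
print(before, after)  # -> 62.5 100.0
```

In this toy example every disagreement falls between neighbouring categories, so collapsing them removes all discrepancies; in the actual analysis collapsing raised agreement from 44% only to 68%, since some disagreements crossed the broader boundaries as well.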
A total of 44 new subject threads were posted to the discussion group, but 27% of these were never referenced further. Eight threads received five or more referents and account for almost 50% of the total discussion. The most discussed subjects were Bloom's taxonomy (n = 6), Learning Outcomes (n = 6), Comparing responses (n = 6), Humanistic theories (n = 7), Teachers' stories (n = 9), Kolb's Learning cycle (n = 9), Objectives (n = 13), and Phenomenography (n = 21). These subjects track quite closely the subjects listed in the module outline for the course, with most topics being obviously relevant to the weekly activities being conducted in the InterLearn environment. It is worth noting that writing about the phenomenography topic in the online discussion forum was listed as one of the online learning module activities.
The majority of the message units (76%) were related to the content of the course. Of these content message units, 75% were judged to be relevant to the course content, directly addressing the subject material covered in the learning modules. Most of the indirect messages brought in other work-related problems (eg, latecomers to classes) or items such as an interesting newspaper article.
For those explicit interactions, an analysis of "who was responding to whom" was undertaken to determine if there were any distinctive patterns of communication (ie, either between staff and students or between students themselves). These interactions were fairly evenly spread across participants, with the most active participants responding to a range of other participants. Five students had between 6 and 13 referents, and three of these students were in the top six contributors to the conference. In other words, some students were reinforced for their contributions more than others were. As might be expected, the more often participants contributed, the more reinforcement they received (although this was not always the case). A few trends emerged in terms of some students responding quite often to a particular student's messages and vice versa (creating 'interaction dyads'). The teaching staff appeared to respond to a range of participants, to the most active more often, but also seemed to make an effort to respond to new participants. On only four occasions did students directly address a staff comment.
Levels of critical thinking
Table 3 shows a breakdown of these message units into the levels of critical thinking and information processing categories, expressed as a percentage of the total number of content message units. Approximately one half of the message units were classified as either elementary or in-depth clarification. This result reflects the way in which the participants used the online discussion forum to bring in examples from their own teaching and other sources, to pose problems and discuss their experiences. The inference and judgement categories mainly reflect responses that were summative in nature, or that stated the author's view on a particular issue, rather than exploring a problem. These messages tended to communicate the author's evaluation of the usefulness or validity of a particular theoretical approach or concept. The strategies category largely captured practical solutions offered by participants to others on the problems raised in their own teaching. The breakdown of classifications is representative of both staff and students' contributions.
Levels of information processing

This category was used to classify responses as examples of either superficial or in depth processing (see Table 3). In the context of the present analysis this classification was not found to be very discriminating. Given the advanced level of discussion evident in the forum, the surface processing category (22%) was used much less often than the in depth processing category (67%). Messages classified as evidence of surface level processing involved mostly examples where participants contributed information about extra resources without elaboration. Messages demonstrating deeper levels of processing involved relating new information to their experiences, critically evaluating ideas, and exploring strategies.
Metacognitive knowledge and skills
As Henri (1993) observed, metacognitive processes are difficult to assess with more traditional teaching methods, but with online discussion it is possible that participants will contribute more of their reflections on their own learning. Some evidence of this kind of metacognitive activity was seen in the discussion forum, although relatively infrequently (16%). The majority of metacognitive statements occurred in the categories for knowledge about the person (n = 12) or evaluation of skills (n = 23). Those responses that recognised the person as a cognitive agent in comparison to others tended to be reflections on personal learning style. Evaluation statements communicated the assessment of the person's approach to a task and the efficacy of that approach (eg, reflections on the effectiveness of teaching strategies). The self-awareness category (n = 9) represented participants' reflection on how much they felt they were learning with the activities, and their feelings associated with these learning experiences. However, the low incidence of messages reflecting metacognitive awareness, and the poor reliability in scoring these aspects, suggest that this part of the analysis should be viewed with caution.
The analysis of participation levels indicated that the discussion forum was used often by a core group of students who contributed regularly. If engaging in discussion using this forum had been part of the formal assessment, then this level of participation would have been more of a concern (eg, 11 contributors, including two staff members, posted 80% of the messages). It was interesting that directions to contribute to the forum about Phenomenography as part of the InterLearn online activities led to this subject being the most frequently discussed. Therefore, one way of increasing the interaction in the discussion forum may be to introduce more of these explicit links between the online activities and the discussion forum.
Another influence on the level of activity may have been the stance adopted by the teachers of the subject, which as mentioned was more reactive than proactive. That is, teachers for the most part responded to issues and queries raised by students, including both content and administrative matters. In fact, as the discussion group progressed, the teachers decided to usually delay posting their messages, especially to content matters, as these sometimes had the effect of closing off a topic rather than feeding it.
The lack of participation by some students may have been due to technical difficulties, and there was some indication that a few students gained access to the discussion forum for the first time quite late in the semester, with access from home being a problem for a small minority. It is also possible that, given the structure of the discussion closely followed the module timetable, lack of participation may have been due to some students finding it hard to keep pace with this timetable, and therefore the subject under discussion. It is interesting that a message from a staff member that it was appropriate to post to an old topic quite late in the semester led to a spurt of 'revisiting' old ground. Also, there were a few occasions where students would log on and respond to quite a few messages at once, in a manner of 'catching up' with the discussion.
It is worth noting that feedback from a quantitative evaluation exercise conducted at the conclusion of the subject indicated high levels of student satisfaction. In particular, over three quarters of the participants reported that the online learning materials stimulated their interest most or almost all of the time, with a slightly higher proportion agreeing that 'The way this subject is taught is appropriate for the material'. Further, over 60 per cent reported that 'The interactivity of the online materials fostered the sense of a learning community' for most or almost all the time.
Even though the process of transcript analysis can provide useful data for exploring the way in which participants are contributing to an online conference, it is not without its problems. The classification of message content, in particular, is necessarily subjective. In this case, low reliability in making finer distinctions between levels of critical thinking and metacognitive aspects as defined by Henri (1992; 1993), limits the conclusions that can be drawn. That one scorer was familiar with the content of the subject whereas the other was not may have contributed to the discrepancies. Neither person was involved in the teaching of the subject, nor was a participant in the forum itself. These issues highlight the need to consider the purpose of the analysis, the experience and role of the persons analysing the transcripts, and the suitability of the framework chosen for analysis, as potentially impacting on the interpretation of the results. On the latter point, it is possible that the cognitive focus of Henri's model is more easily applied to an online environment where the discussion focuses on structured problem solving activities, compared to the unstructured nature of this forum.
Although the discussion group was itself unstructured, it needs emphasising that it was part of a highly structured study environment. Thus, for example, some of the messages that might in other contexts be posted by the moderator to the group were in this context sent to participants in a weekly global email. This was designed to overcome a basic potential weakness of discussion groups, in that participants have to first of all visit them to begin interaction! However, most people regularly check their email, and thus this method of communication provides an excellent means of motivating and encouraging students to visit the group, and alerts them to key aspects of the discussion. Other parts of the overall evaluation confirmed the value of the weekly global email messages. The other aspect of the context that needs to be emphasised was that the discussion group was what might be called a second level of online interaction. The primary (and assessable) component was the shared online activities - in general the discussion group was there for participants to discuss issues that arose in attempting the online activities.
The lesson from this is the clear reminder that a discussion group, to be effective, must be a key and integral part of the learning environment. Students will visit and use such a group only if they perceive that it helps their learning and adds value to their course of study. This may be even more important than, or at least of equal importance to, the role of the moderator of the group. If students are not motivated to visit an online discussion group, the moderator who relies solely on posting to the group will not be 'heard'.
Overall, analysing the transcripts from the discussion forum provided useful feedback to the course organisers in the ongoing improvement of the subject. Importantly, it affirmed the impression that the discussion group had been an effective part of the learning environment. It also helped in the process of making adjustments to the subject for its second intake. These adjustments included modifications to the print based study guide as well as changes to the online environment. Improvements to InterLearn are also taking place, on the basis of both student feedback and staff experience. Finally, the analysis became part of an evaluation database available to the developers of the other subjects in the course.
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1998). Transcript analysis of computer-mediated conferences as a tool for testing constructivist and social-constructivist learning theories. In Distance Learning 1998: Proceedings of the Annual Conference on Distance Teaching & Learning, (pp. 139-145). August 5-7, Madison, WI.
Harasim, L. (1989). Online education: A new domain. In R. Mason & A. Kaye (Eds), Mindweave: Communication, computers and distance education, 50-57. Oxford: Pergamon Press. http://www-icdl.open.ac.uk/mindweave/chap4.html [verified 11 Nov 2000]
Harasim, L., Hiltz, S. R., Teles, L., & Turoff, M. (1995). Learning networks: A field guide to teaching and learning online. Cambridge, Massachusetts: The MIT Press.
Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed), Collaborative learning through computer conferencing: The Najaden Papers, 117-136. Berlin: Springer-Verlag.
Henri, F. (1993). The Virtual University: Collaborative learning through computer conferencing. Workshop, Monash University, July, 1993.
Howell-Richardson, C. & Mellar, H. (1996). A methodology for the analysis of patterns of interactions of participation within computer mediated communication courses. Instructional Science, 24, 47-69.
Levin, J. A., Kim, H. & Riel, M. M. (1990). Analyzing instructional interactions on electronic message networks. In L. M. Harasim (Ed.), Online Education: Perspectives on a new environment, 185-213. New York: Praeger.
Mason, R. (1992). Evaluation methodologies for computer conferencing applications. In A. R. Kaye (Ed), Collaborative learning through computer conferencing: The Najaden Papers, 105-116. Berlin: Springer-Verlag.
Mason, R. & Weller, M. (2000). Factors affecting students' satisfaction on a web course. Australian Journal of Educational Technology, 16(2), 173-200. http://www.ascilite.org.au/ajet/ajet16/mason.html
Mowrer, D. E. (1996). A content analysis of student/instructor communication via computer conferencing. Higher Education, 32, 217-241.
Murphy, D. & Webster, L. (1999). Partnership in learning: An interactive online software tool. Paper presented at the CREAD Conference on Education and Partnerships, University of British Columbia, Canada, September 21-23. http://cread.cstudies.ubc.ca/proceedi.htm [viewed 8 Jan 2000, verified 11 Nov 2000]
Newman, D. R., Johnson, C., Webb, B. & Cochrane, C. (1997). Evaluating the quality of learning in computer supported cooperative learning. Journal of the American Society of Information Science, 48, 484-495.
Owen, M. (2000). Structure and discourse in a telematic learning environment. Educational Technology and Society, 3, 3. [verified 11 Nov 2000] http://ifets.ieee.org/periodical/vol_3_2000/b04.html
Pitman, A. J., Gosper, M. & Rich, D. C. (1999). Internet based teaching in geography at Macquarie University: An analysis of student use. Australian Journal of Educational Technology, 15(2), 167-187. http://www.ascilite.org.au/ajet/ajet15/pitman.html
Romiszowski, A. J. & Mason, R. (1996). Computer-mediated communication. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology, 438-456. New York: Simon & Schuster Macmillan.
Salmon, G. (2000). E-moderating: The key to teaching and learning online. London: Kogan Page.
Authors: Wendy McKenzie, Department of Psychology, Monash University
David Murphy, Centre for Higher Education Development, Monash University
Please cite as: McKenzie, W. and Murphy, D. (2000). "I hope this goes somewhere": Evaluation of an online discussion group. Australian Journal of Educational Technology, 16(3), 239-257. http://www.ascilite.org.au/ajet/ajet16/mckenzie.html