Australian Journal of Educational Technology
2003, 19(2), 241-259.
Recent developments in learning theory have emphasised the importance of context and social interaction. In this vein, the notion of a learning community is gaining momentum. With the advent of asynchronous online discussion forums, learning communities need no longer be confined to specific geographical locations, as people can now interact with one another at any place and time convenient to them. In this paper, we describe appropriate models for evaluating these online learning communities. We examine pertinent issues including learner-learner interaction, learner-teacher interaction, the thinking skills of the learners, the levels of information processing exhibited by learners in the online discussion, and the roles played by the online moderator. A practical example is also provided to illustrate how these models can be used. Finally, we discuss some drawbacks of each model and ways of overcoming them.
Nonetheless, providing learners with opportunities to engage in authentic learning experiences is a challenge in most traditional learning environments (Bielaczyc & Collins, 1999). Squire and Johnson (2000) argued that this is because learning communities tend to be "distributed across time and space, making them mostly inaccessible to the educator located in a traditional classroom environment" (p. 23). One way to bridge this gap of time and space is to use an asynchronous online discussion forum. Such forums allow members of a learning community to interact easily with one another, at any place and time convenient to them. They can also promote student centered learning (Harasim, 1989) and critical thinking processes such as reasoning and evaluation (Newman, Webb & Cochrane, 1997).
Although the use of asynchronous discussion forums can afford online learning communities with unprecedented learning opportunities, educators are often faced with difficulties in how to evaluate such online communities. As Gunawardena, Carabajal and Lowe (2001) noted,
The development of appropriate methodologies for evaluating the myriad, ever changing forms of online learning presents a critical challenge to distance educators. The open ended nature of online learning, the multiple threads of conversation, and fluid participation patterns call for new ways of looking at evaluation. (p. 3)

This paper aims to help educators in the evaluation of online learning communities. We refer to an online learning community as a group of people who participate in an asynchronous online discussion forum with a common objective or interest, in order to learn from one another. We find it useful to adopt activity theory (Jonassen, 2002) as a guide to first help us identify the various evaluation issues that an educator might face when dealing with an online learning community. Once the evaluation issues have been identified, the appropriate models for examining these issues can be delineated.
In the following sections, we describe activity theory, followed by a discussion on evaluation issues that an educator might face when dealing with an online learning community. Subsequently, we discuss appropriate models for addressing each of the mentioned evaluation issues.
Figure 1: Processes within an activity
(adapted from Cole & Engeström, 1993)
From Figure 1, tools can be perceived as mediating the processes between the subject and object; rules mediate the processes between the subject and the community; while roles mediate the processes between the community and object. Pang & Hung (2001) wrote:
In other words, tools are used by subjects to achieve an object; there need to be rules set up between subjects and the other members in the community in order to achieve the goals; and between members of the community, there needs to be a division of labor in order to achieve the object. (p. 36)

In an asynchronous discussion forum, the subject is a member of an online learning community comprising peer learners, educators or external subject matter experts. The member makes use of tools (e.g. an asynchronous discussion forum such as Blackboard or Knowledge Forum) to exchange ideas and insights with other online members in learning some knowledge or skills (the object). While acquiring the knowledge or skills, there are certain rules to be adhered to by the group of learners (such as the use of non-offensive messages) and roles played by various members of the online community (such as the discussion moderator) in order to create a meaningful and memorable online learning experience.
Activity theory provides educators with a practical and holistic approach to the evaluation of an online learning community. By considering the various triads of nodes taken from Figure 1, we can form a possible structure for analysis. Due to space constraints, only two of these triads are considered in this paper.
Figure 2: Subject-community-object triad
The triad of subject-community-object describes how the participant of an asynchronous online discussion forum and the surrounding learning community collaborate to act on the object. Looking at this triad, an educator may want to address some possible relevant evaluation questions such as the following:
The next triad is subject-community-roles, which examines the roles played by the members of an online learning community in relation to the object of the activity. In an asynchronous online discussion forum, the role of the moderator is widely acknowledged as an important factor that may affect the success of the discussion (Ahern, Peck & Laycock, 1992). Typically, the roles of an online moderator can be classified into three types: organisational, social or intellectual (Paulsen, 1995). Organisational roles include activities such as explaining the requirements and procedures of the online discussion, and spurring on participation when it is lagging. Social roles, on the other hand, involve making participants comfortable in an online environment and valuing their contributions. Intellectual roles include bringing up issues that participants have missed, highlighting the important ones and pursuing them further. In this paper, we will present an appropriate model that can evaluate the intellectual roles of the online moderator.
Figure 3: Subject-community-roles triad
According to Henri (1992), explicit interactions are messages that are either in response to a question posed, or a commentary on someone else's message. In explicit interactions, the person to whom the communication is directed is indicated in the message. An example of an explicit interaction type of message is:
Hi Susan! I agree with you and Uma that having 2 teachers in the computer lab is ideal, other than the use of colour cups as mentioned by James, to indicate to the teacher that a student needs help.

Implicit interactions, on the other hand, are messages that include a response to or commentary on a prior message, but without indicating specifically which message the contribution refers to. Finally, independent statements are messages that contain new ideas, not connected to others previously expressed in the online discussion. By differentiating between explicit, implicit and independent online messages, an educator can observe the relationships or patterns of communication between participants. These patterns, however, offer little insight into the contribution individual messages make to the emerging totality of constructed knowledge (Gunawardena et al., 2001). This leads us to the next issue.
Even as learners interact and construct knowledge with one another using asynchronous online discussion forums, one area of concern for educators is the high forum dropout rate due to the physical separation of learners (Rovai, 2002). Tinto (1993) emphasised the importance of community in reducing dropouts when he theorised that learners would increase their levels of satisfaction and the likelihood of persisting in the discussion if they feel involved in the learning community and develop relationships with other learners. Accordingly, the next evaluation issue explores the social presence in an online learning community.
Phase I: Sharing and comparing of information. For example: statements of agreement or corroborating examples from one or more other participants.

Phase II: Discovery and exploration of dissonance or inconsistency among the ideas or statements advanced by different participants. For example: identifying and stating areas of disagreement, or asking and answering questions to clarify the source and extent of the disagreements.

Phase III: Negotiation of meaning. For example: negotiation of the meaning of terms, or identification of areas of agreement or overlap among conflicting concepts.

Phase IV: Testing and modification of the proposed synthesis or co-construction. For example: testing the proposed synthesis against formal data collected or against contradictory information from the literature.

Phase V: Statement or application of newly constructed knowledge. For example: summarising of agreements, or students' self reflective statements illustrating that their knowledge or ways of thinking have changed as a result of the online interaction.
Cognitive processes of the learners
Two specific evaluation questions pertaining to the cognitive processes of online learners will be addressed in this paper:
Aspects of Henri's (1992) critical thinking model have been taken up and expanded upon by others (e.g. Newman, Johnson, Webb & Cochrane, 1997). Newman et al. (1997) developed ten paired indicators of critical versus uncritical thinking in their model. Each pair of opposites represents a surface level of information processing (i.e. uncritical thinking) and an in depth level of information processing (i.e. critical thinking). The ten indicators are: relevance; importance; novelty (new information, ideas, solutions); bringing outside knowledge/experience to bear on the problem; ambiguity and clarity; linking ideas and interpretation; justification; critical assessment; practical utility; and width of understanding. Some examples of the different thinking skills and levels of information processing are provided below.
Critical thinking - surface level:
"I find that there are too many empty (white) spaces on the presentation slides." (This was classified as critical thinking - surface level of information processing since the author made his conclusion without giving any justification as to why it was not good to have too many empty spaces on a presentation slide)
Critical thinking - in depth level:
"I feel that the choice of your illustrations are quite well chosen, except for the birds. I feel that the birds are distracting because of their movements and they don't blend well with the other illustrations." (This was coded as critical thinking - in depth level of information processing because the author expressed a judgment and provided a plausible argument as to why his judgment was valid.)
Roles played by the online moderator or instructor of the online learning community
Kirkley, Savery, and Grabner-Hagen (1998) focused on the intellectual roles of the online moderator or instructor, by evaluating the different means of assistance to support learning that an online moderator and instructor can render to the learners. Seven means are described:
The models may be grouped by the purpose of evaluation: to describe the nature of the learner-learner and learner-teacher interactions; to examine the cognitive processes; and to analyse the moderator's and learners' online roles. Each model is summarised below.

Henri's Interactivity dimension (1992)
Unit of analysis: thematic unit
This model distinguishes between interactive versus non-interactive messages, and between explicit versus implicit interaction. Explicit and implicit interactions are defined as a three step process: a) communication of information; b) a first response to this information; and c) a second answer relating to the first.

Rourke, Anderson, Garrison and Archer (1999)
Unit of analysis: combination of thematic and syntactic units
This model assesses the social presence of an online learning community. It distinguishes between three broad categories of responses: affective, interactive, and cohesive.

Gunawardena, Lowe & Anderson (1997)
Unit of analysis: whole message
This model evaluates the social construction of knowledge in an online discussion forum. It distinguishes between five phases of knowledge construction: sharing and comparing of information; discovery and exploration of dissonance; negotiation of meaning; testing and modification of the proposed synthesis; and statement or application of newly constructed knowledge.

Henri's Cognitive dimension (1992)
Unit of analysis: thematic unit
This model evaluates the critical thinking of online learners. It distinguishes five types of critical thinking skills: elementary clarification, in depth clarification, inference, judgment, and strategies. Each of the five types is classified according to the dichotomy of surface versus in depth level information processing.

Newman, Johnson, Webb & Cochrane (1997)
Unit of analysis: thematic unit
This model measures the level of critical thinking by expanding on Henri's (1992) model. It includes ten indicators: relevance; importance; novelty; bringing outside knowledge/experience to bear on the problem; ambiguity and clarity; linking ideas and interpretation; justification; critical assessment; practical utility; and width of understanding. Each indicator has its own pair of opposites, one indicating surface level processing and one indicating in depth processing. For example, "irrelevant statements or diversions" versus "relevant statements".

Kirkley, Savery & Grabner-Hagen (1998)
Unit of analysis: instructional content of each individual sentence
This model evaluates the different means of learning assistance that an online moderator may render to the learners.
Before the commencement of the online discussion, the students were first briefed, in a face to face environment, on the task they were to do. The hard copies of the students' online postings were printed from Blackboard at the end of the discussion. The actual analysis of the postings would be carried out in two parts. In the first part, the online postings would be read and divided into the appropriate units of analysis (See the following section for a more in depth discussion of units of analysis). The second part involves the use of the models on the identified units of analysis.
The aforementioned models offer educators the means to evaluate a host of different issues pertaining to the learners' online discussion. Thus, depending on an educator's evaluation aims, the appropriate models can be chosen and utilised. For example, in order to evaluate the extent to which the students are responding to one another (i.e. learner-learner interaction), Henri's (1992) model would be used. Based on this model, all the identified units of analysis would be examined to determine whether they are explicit, implicit, or independent statements (Henri, 1992). To better capture and show the pattern of connection among the units of analysis, a visual mapping of all the units can also be done. Explicit and implicit interactions would reveal to the educator whether the students are commenting and responding to each other's ideas. A preponderance of independent statements, on the other hand, would suggest little real discussion or debate in which students take sides on issues, negotiate, or arrive at a compromise. Educators, armed with such knowledge, can then take the necessary steps (e.g. giving encouragement) to promote interaction among the students.
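As one way of operationalising this classification and the visual mapping idea, the hand-coded units can be tallied and linked programmatically. The following is a minimal sketch in Python, using entirely hypothetical messages and codes; under Henri's model the coding itself remains a human judgment, and the field names here are our own:

```python
from collections import Counter, defaultdict

# Hypothetical units from a discussion transcript, hand-coded as
# "explicit", "implicit", or "independent" following Henri's (1992)
# interactivity dimension. "refers_to" is the id of the prior unit
# a message responds to, where one can be identified.
units = [
    {"id": 1, "author": "Susan", "type": "independent", "refers_to": None},
    {"id": 2, "author": "Uma",   "type": "explicit",    "refers_to": 1},
    {"id": 3, "author": "James", "type": "implicit",    "refers_to": 1},
    {"id": 4, "author": "Tom",   "type": "independent", "refers_to": None},
]

def interaction_summary(units):
    """Tally interaction types and build a simple reply map
    (unit id -> ids of units that respond to it)."""
    counts = Counter(u["type"] for u in units)
    replies = defaultdict(list)
    for u in units:
        if u["refers_to"] is not None:
            replies[u["refers_to"]].append(u["id"])
    return counts, dict(replies)

counts, replies = interaction_summary(units)
print(counts)   # distribution of explicit / implicit / independent units
print(replies)  # which units each message attracted responses from
```

A reply map like this can then be drawn as a simple node-and-arrow diagram, giving the visual mapping of connections among units mentioned above.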
Educators who wish to go beyond studying the mechanistic relationships among units of analysis, into evaluating the extent of knowledge construction among the learners, would find the model by Gunawardena et al. (1997) helpful. This is because this model reveals the stages each unit of analysis has attained in terms of constructivist knowledge creation. An example of an actual Phase I unit of analysis is given below.
I concur with Sharon that there were no buttons that allowed learners to move from one slide to another. Just a suggestion... you might want to include some appropriate navigation buttons.

Educators would be interested to know that the movement from Phase I to Phase V indicates progress from the lower to the higher mental functions, and reveals how learners contribute toward the construction of knowledge (Gunawardena, 1999).
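Once each message has been assigned a phase under Gunawardena et al.'s (1997) model, the distribution of phases gives a quick picture of how far knowledge construction has progressed. The sketch below uses hypothetical phase codings purely for illustration:

```python
from collections import Counter

# Hypothetical phase codes (I-V) assigned by a human coder to whole
# messages, following Gunawardena, Lowe & Anderson's (1997) model.
coded_messages = ["I", "I", "II", "I", "III", "II", "I", "V"]

phase_counts = Counter(coded_messages)

# Proportion of messages that moved beyond Phase I (sharing and
# comparing of information) into the higher phases.
higher = sum(n for phase, n in phase_counts.items() if phase != "I")
share_higher = higher / len(coded_messages)

print(phase_counts)           # distribution across the five phases
print(f"{share_higher:.0%}")  # share of messages beyond Phase I
```

A discussion stuck almost entirely in Phase I would signal to the educator that little negotiation or co-construction of knowledge is taking place.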
The aforementioned unit of analysis can also be classified, by educators evaluating the extent of student involvement in the online community, as an interactive type of social presence (Rourke et al., 1999) since it expresses mutual attention and awareness, by referring explicitly to the contents of messages by others. Evaluating the social presence of a learning community would give educators an idea of how the relationships among the community members are developing. This allows educators to step in, for example, to help develop the relationships by making the group interactions appealing and fun to all.
To evaluate the students' thinking skills and levels of information processing, the Henri (1992) and Newman et al. (1997) models would be used. These models indicate whether the thinking skills exhibited by the students represent a surface or an in depth level of information processing. If the thinking is at a surface level, educators can also discover the reasons for it from the two models. Some of the reasons for surface level thinking include students not justifying their judgments or comments, or proposing solutions with few details or explanations. Henri's (1992) and Newman et al.'s (1997) models thus offer educators a valuable tool to diagnose and help improve their students' quality of thinking.
The first common drawback is the unreliable use of the unit of analysis (Rourke et al., 1999). Krippendorff (1980) described the unit of analysis as a discrete element of text that is observed, recorded, and thereafter considered data. One approach is to take the learners' online message postings and analyse each posting in turn, with reference to the threads of discussion topics (as in Gunawardena et al.'s, 1997, model). In this case, the messages are the units of analysis. This method, though simple to use, is not without problems, as online postings usually contain more than one idea or thought. An alternative is the "thematic unit", which is defined by Budd, Thorp, and Donohue (1967) as "a single thought unit or idea unit that conveys a single item of information extracted from a segment of content" (p. 34). Thematic units, as adopted in Henri's (1992) model, reflect the logic of the indicators, but resist reliable and consistent identification (Howell-Richardson & Mellar, 1996). Yet another alternative (Rourke et al., 1999) is to combine the flexibility of the thematic unit with the identification attributes of a syntactical unit (e.g. a sentence, phrase or paragraph). Nonetheless, although many units of analysis have been experimented with, none has been sufficiently reliable, valid and efficient to achieve pre-eminence (Rourke et al., 1999). Krippendorff (1980) concedes that, ultimately, the choice of the unit of analysis "involves considerable compromise" (p. 64) between meaningfulness, productivity, efficiency, and reliability.
The second common drawback in using these models is the high degree of subjectivity involved in discriminating the data and putting them into the correct categories. For example, it is difficult to distinguish one type of cognitive or metacognitive data from the other using Henri's cognitive model, because of the ambiguities and overlaps in the indicators of the cognitive skills (Bullen, 1997; Gunawardena et al., 1997). As a result, it becomes both very time consuming to analyse the online discussion transcripts using the models, and difficult to achieve high reliability (the consistency of results for the same data at different times or under different conditions, such as when coded by different people).
Since high reliability is desirable, how should one go about attaining it? We propose one possible means: reproducibility (inter-coder reliability). Inter-coder reliability can be defined as "the extent to which different coders, each coding the same content, come to the same coding decisions" (Rourke, Anderson, Garrison & Archer, 2001). Two educators should do the analysis independently and have the results cross examined by one another. But prior to doing the actual analysis, we recommend that the educators do a "sample exercise" on other messages to familiarise themselves with the models. Once the educators are comfortable with the models, they can then code and categorise the actual messages independently. The results may then be compared and the inter-coder reliability reported using two common methods: the percent agreement statistic and Cohen's kappa. The former refers to the number of agreements per total number of coding decisions (Rourke et al., 2001). It is calculated using Holsti's (1969) coefficient of reliability:

CR = 2m / (n1 + n2)

where
m = the number of coding decisions on which the two coders agree
n1 = the number of coding decisions made by the first coder
n2 = the number of coding decisions made by the second coder
Cohen's kappa, on the other hand, is a chance corrected measure of inter-coder reliability that assumes two coders, n cases, and m mutually exclusive and exhaustive nominal categories (Capozzoli, McSweeney, & Sinha, 1999). The formula for it is:

kappa = (Fo - Fc) / (N - Fc)

where
N = the total number of judgments made by each coder
Fo = the number of judgments on which the coders agree
Fc = the number of judgments for which agreement is expected by chance.
(See Capozzoli et al., 1999; and Cohen, 1960 for further discussion).
For percent agreement figures, Riffe, Lacy, and Fico (1998) stated that "a minimum level of 80% is usually the standard" (p. 128). For Cohen's kappa, values exceeding 0.75 suggest strong agreement above chance, values in the range of 0.40 to 0.75 indicate fair agreement above chance, and values below 0.40 indicate poor agreement above chance levels (Fleiss, 1981). Any discrepancies in coding decisions should be discussed and negotiated by the coders until mutual agreement is reached.
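Both statistics are straightforward to compute once the two coders' decisions are recorded. The following is a minimal sketch, assuming the decisions are stored as two parallel lists of nominal codes; the data and function names are our own illustrations:

```python
from collections import Counter

def percent_agreement(codes1, codes2):
    """Holsti's coefficient of reliability: CR = 2m / (n1 + n2),
    where m is the number of decisions the coders agree on and
    n1, n2 are the numbers of decisions made by each coder."""
    m = sum(a == b for a, b in zip(codes1, codes2))
    return 2 * m / (len(codes1) + len(codes2))

def cohens_kappa(codes1, codes2):
    """Chance corrected agreement: kappa = (Fo - Fc) / (N - Fc)."""
    n = len(codes1)
    fo = sum(a == b for a, b in zip(codes1, codes2))
    c1, c2 = Counter(codes1), Counter(codes2)
    # Fc: agreement expected by chance, from each coder's marginal
    # category counts.
    fc = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n
    return (fo - fc) / (n - fc)

# Hypothetical coding decisions by two educators on five units.
coder_a = ["explicit", "implicit", "independent", "explicit", "implicit"]
coder_b = ["explicit", "implicit", "explicit",    "explicit", "independent"]

print(percent_agreement(coder_a, coder_b))  # 0.6
print(cohens_kappa(coder_a, coder_b))       # 0.375
```

In this hypothetical case both figures fall below the thresholds cited above, so the coders would need to discuss and renegotiate their category definitions before proceeding.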
The third drawback associated with the use of the aforementioned models is the inability of these models to evaluate the interactions, cognitive processes and roles of "passive learners". Passive learners, as found in a study by Sutton (2000), do not participate often in the discussion but consider themselves to have learned a lot from reading and reflecting on the comments and responses posted by others. Nonetheless, there are alternative means to evaluate the interactions, cognitive processes and roles of passive learners.
One possible method to evaluate the interactions and roles of passive learners is to use an asynchronous discussion forum (such as the Knowledge Community course management software) that can capture the number of times these learners have read the messages posted by others. By using this feature, an educator will know whether these learners are actively reading about the issues presented in the online discussion, albeit in a quiet way, or are truly uninvolved. Educators can then take appropriate steps to encourage their participation. To evaluate the cognitive processes of passive learners, educators may want to use other forms of assessment, e.g. projects, assignments, and interviews.
Barab, S. A. & Duffy, T. (2000). Architecting participatory learning environments. In D. Jonassen & S. Land (Eds.), Theoretical foundations of learning environments. Hillsdale, NJ: Lawrence Erlbaum Associates.
Bielaczyc, K. & Collins, A. (1999). Learning communities in classrooms: A reconceptualization of educational practice. In C. Reigeluth (Ed), Instructional design theories and models: Volume II (pp. 269-292). Hillsdale, NJ: Lawrence Erlbaum Associates.
Budd, R., Thorp, R. & Donohue, L. (1967). Content analysis of communication. New York: Macmillan.
Capozzoli, M., McSweeney, L. & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics, 27(1), 3-23.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.
Cole, M. & Engeström, Y. (1993). A cultural-historical approach to distributed cognition. In G. Salomon (Ed), Distributed Cognitions: Psychological and Educational Considerations (pp. 1-46). Cambridge University Press, New York.
Collins, A., Brown, J. S. & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Erlbaum.
Gunawardena, C. N., Lowe, C. A. & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.
Gunawardena, C. N. (1999). The challenge of designing and evaluating 'interaction' in web-based distance education. (ERIC Document Reproduction Service No. ED448718)
Gunawardena, C., Carabajal, K. & Lowe, C. A. (2001). Critical analysis of models and methods used to evaluate online learning networks. (ERIC Document Reproduction Service No. ED456159)
Harasim, L. (1989). On-line education as a new domain. In R. D. Mason & A. R. Kay (Eds), Mindweave: Communication, Computers and Distance Education. Oxford: Pergamon Press, 1989. http://icdl.open.ac.uk/literaturestore/mindweave/chap4.html
Henri, F. (1992). Computer conferencing and content analysis. In A.R. Kaye (Ed). Collaborative learning through computer conferencing: The Najaden papers, 117-136. Berlin: Springer-Verlag.
Holsti, O. (1969). Content analysis for the social sciences and humanities. Don Mills: Addison-Wesley Publishing Company.
Howell-Richardson, C. & Mellar, H. (1996). A methodology for the analysis of patterns of participation within computer mediated communication courses. Instructional Science, 24, 47-69.
Hsu, J. F., Chen, D. & Hung, D. (2000). Learning theories and IT: The computer as a tutor. In M. D. Williams (Ed), Integrating technology into teaching and learning (pp. 71-92). Prentice-Hall, Singapore.
Hung, D. & Chen, D. (2000). Appropriating and negotiating knowledge: Technologies for a community of learners. Educational Technology, 40(3), 29-32.
Hung, D. & Wong, A. (2000). Activity theory as a framework for project work in learning environments. Educational Technology, 40(2), 33-37.
Jonassen, D. H. (2002). Learning as activity. Educational Technology, 42(2), 45-51.
Kanuka, H. & Anderson, T. (1998). On-line social interchange, discord and knowledge construction. Journal of Distance Education, 13(1), 57-74. http://cade.icaap.org/vol13.1/kanuka.html
Kirkley, S. E., Savery, J.R. & Grabner-Hagen, M. M. (1998). Electronic teaching: Extending classroom dialogue and assistance through e-mail communication. In C. J. Bonk & K. S. King (Eds.), Electronic Collaborators: Learner-Centered Technologies for Literacy, Apprenticeship, and Discourse (pp. 209-232). Mahwah, NJ: Lawrence Erlbaum Associates, Publishers.
Krippendorff, K. (1980). Content analysis: An introduction to its methodology. Beverly Hills: Sage Publications.
Kuutti, K. (1996). Activity theory as a potential framework for human-computer interaction research. In B. A. Nardi (Ed), Context and consciousness: Activity theory and human-computer interaction (pp.17-44). Cambridge, MA: MIT Press.
Moore, M. G. (1989). Editorial: Three types of interaction. American Journal of Distance Education, 3(2), 1-6.
Newman, D. R., Johnson, C., Webb, B. & Cochrane, C. (1997). Evaluating the quality of learning in computer supported cooperative learning. Journal of the American Society for Information Science, 48, 484-495.
Pang, M. N. & Hung, D. (2001). Activity theory as a framework for analyzing CBT and E-learning environments. Educational Technology, 41(4), 36-42.
Paulsen, M. F. (1995). Moderating educational computer conferences. In Z. L. Berge and M. P. Collins (Eds), Computer mediated communication and the online classroom: Vol. 3. Distance Learning (pp. 81-89). Cresskill, NJ: Hampton Press, Inc.
Riffe, D., Lacy, S. & Fico, F. (1998). Analyzing media messages: Quantitative content analysis. New Jersey: Lawrence Erlbaum Associates, Inc.
Roehler, L. R. & Cantlon, D. J. (1997). Scaffolding: A powerful tool in social constructivist classrooms. In K. Hogan & M. Pressley (Eds), Scaffolding student learning (pp. 6-42). Cambridge, Massachusetts: Brookline Books.
Rourke, L., Anderson, T., Garrison, D. R. & Archer, W. (1999). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2). [viewed 9 May 2003, verified 27 Jul 2003] http://cade.icaap.org/vol14.2/rourke_et_al.html
Rourke, L., Anderson, T., Garrison, D. R. & Archer, W. (2001). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 12, 8-22.
Rovai, A. P. (2002). Development of an instrument to measure classroom community. Internet and Higher Education, 5, 197-211.
Squire, K. D. & Johnson, C. B. (2000). Supporting distributed communities of practice with interactive television. Educational Technology Research and Development, 48(1), 23-43.
Sutton, L.A. (2000). Vicarious interaction in a course enhanced through the use of computer-mediated communication. Unpublished PhD dissertation. Arizona State University.
Tinto, V. (1993). Leaving college: Rethinking the causes and cures of college attrition. (2nd ed.). Chicago, IL: University of Chicago Press.
Authors: Khe Foon Hew, firstname.lastname@example.org
Nanyang Technological University, Singapore
Wing Sum Cheung, email@example.com