Australasian Journal of Educational Technology
2006, 22(4), 474-494.
This article reports on a large scale implementation of personal response units in three introductory science courses at the University of Western Ontario in Canada. An online survey of students was conducted to gather their perceptions on the uses of the devices, triangulated by participant observation of the classes and email interviews with the instructors. Although the students' perceptions were generally favourable, problems associated with implementation were widespread. Advantages and disadvantages of the technology are discussed along with suggestions for its use.
However, despite recent rhetoric, the use of interactive electronic devices in the classroom is not new. Judson and Sawada (2002) wrote about the long history of such devices which were first used in the 1960s, particularly in science classrooms. They claim that, apart from enhanced technical features such as the display of histograms of students' answers and easier record keeping, the devices have changed little in the intervening 40 or so years - with the multiple choice question format remaining the stable basis of interaction throughout that period. One of the more interesting historical points is that scientists in general (Bessler & Nisbet, 1971; Casanova, 1971; Shapiro, 1997; Wood, 2004) and physicists in particular (Abrahamson, 1999; Hake, 1998; Perkins & Wieman, 2004; Poulis, Massen, Robens & Gilbert, 1998) seem to have conducted more research on clickers in post secondary educational settings than other educators. However, there is also literature in accounting (Carnaghan & Webb, 2005) and computer science (Dufresne, Gerace, Leonard, Mestre & Wenk, 1996; Littauer, 1972).
Most students and lecturers who use clickers like them (Draper & Brown, 2002; Learning Media Unit, 2001). There have also been studies of the effectiveness of interactive devices in the lecture hall (Roschelle, Penuel & Abrahamson, 2004). Segovia (2004), for example, found that clickers improved the class average by nearly 12% in an introductory accounting class and attributed the increase to greater self competitiveness among students from the immediate feedback they received, whether positive or negative. In a follow up study to their earlier report, Draper and Brown (2004) found that the greatest benefits to learning accrued when clicker based questions were used by instructors to start peer discussions (interactive engagement) and when they used diagnostic assessment during class to alter their teaching strategies. Indeed, Judson and Sawada (2002) noted that interactive peer learning was the key to effective use of the technology.
It is easy to speculate why these devices might positively impact lectures to large groups. The nature of the lecture hall has always been problematic for good teaching practice (Laurillard, 1993). The one to many (lecturer to students) format makes many kinds of educative interactions more difficult, simply because the teacher cannot interact individually with each student in the time allotted for a class session. Clickers are, like tutorials, a tool that provides for interactivity. However, it is unlikely that the large scale lecture format will change easily because it is economically efficient.
Despite the best lecturing skills, the basis of the lecture method is, at heart, transmissive because of its one to many nature. Thus, the method steers some lecturers towards a way of thinking that knowledge can be something to be passed from the lecturer to the student. Indeed, clicker technology supports behaviourism in that one of its major attractive qualities is the provision of swift feedback to students. In some ways the interactional pattern functions like a simple stimulus-response system. However, in social constructivist classrooms, personal knowing is created by students in interaction with the material, other students, the instructor, their memories, and the world in general (Garrison & Archer, 2000; Vygotsky, 1978), and clickers can support more sophisticated practices. For example, Draper, Cargill and Cutts (2002) described five pedagogic uses for clickers: formative and summative 'practice' assessment, formative feedback for learning and teaching, peer assessment and community building, research on human responses, and discussion initiation.
Discourse about clickers in the wider world is much less critical. One article (Associated Press, 2005) suggested that these devices were new and revolutionary. It was claimed that the devices alter the very basis of classroom dynamics by giving students, in large lecture based classes, the power of individual feedback and motivation. Another news story (Gilbert, 2005) estimated that over 700 institutions used clickers made by the largest manufacturer. It also noted that well over one million individual clickers had been sold in 2004. Instructors, it was said, can make up questions, apparently on the fly. The students click their answers and the systems "instantly [emphasis added] gather responses."
The proponents of these devices claim that the privacy of student-device interaction takes away feelings of embarrassment felt by shy students when answering. Clickers are claimed to increase interaction with other students and course content. The benefits accrue not only to students but also to instructors and institutions. The devices increase lecture attendance and automatically enter grades, saving costs. It has been claimed that clickers can be used to teach more effectively when lecturers are more learner centred (National Research Council (USA), 1999).
Much of the current popular literature seems to come from media feeds from the product manufacturers such as eInstruction (eInstruction, 2006), Turning Technologies (Turning Technologies, 2006) or institutions such as Purdue University (Schenke, 2005) that use the devices, all of which have vested interests in the success of the devices. The author could find only one current academic book on personal response units (Duncan, 2005).
However, there are rumblings on the Internet of discontent about clickers. One blogger (Brookshier, 2005) wrote:
Clickers are not always cheap or even well made and many schools put the burdon [sic] of cost and care to the student. It is inevitable that a student would see that there might be a better way.

With such polarity, the research questions took on added interest.
Data collected by the receiver from the clickers have to be decoded in two ways: by user and by response. Those data then have to be matched against the names of students enrolled in the course.
The clicker system in our study came with software that could be programmed to work with WebCT to input the results into each course's electronic grade book. The grade book also received enrolment data from the University's enterprise software package. Hence, for a clicker session to work successfully, each clicker had to be registered to a particular student. Upon successful registration of their device, each student received a pad number for every course in which she or he was enrolled through the online teaching platform.
Once successfully registered with an assigned pad number for their course, students had to 'sign in' their devices at each individual class session (called 'joining') by turning their devices on and entering the class channel (a two digit number projected by the instructor onto a screen). When a clicker successfully joined, the pad number of that clicker in that course appeared on a projected grid image so that each student knew that her or his clicker was functioning properly in the class.
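The registration and joining sequence described above can be expressed as a simplified sketch. The class, method, and field names below are hypothetical, chosen only to illustrate the one-time registration, per-course pad numbers, and per-session channel join; they do not reflect the actual vendor software, which was not inspected:

```python
class ClickerRegistry:
    """Minimal, hypothetical model of clicker registration and session joins."""

    def __init__(self):
        self.owners = {}       # clicker serial -> student id (registered once)
        self.pad_numbers = {}  # (course, student id) -> pad number in that course

    def register(self, serial, student_id):
        # A clicker is registered to a student exactly once.
        if serial in self.owners:
            raise ValueError("clicker already registered")
        self.owners[serial] = student_id

    def assign_pad(self, course, student_id):
        # Each course in which the student is enrolled issues its own pad number.
        pad = len([k for k in self.pad_numbers if k[0] == course]) + 1
        self.pad_numbers[(course, student_id)] = pad
        return pad

    def join(self, serial, course, channel, session_channel):
        # Joining succeeds only when the student enters the two digit
        # channel projected by the instructor; on success, the pad number
        # would appear on the projected grid.
        if channel != session_channel:
            return None
        student = self.owners.get(serial)
        return self.pad_numbers.get((course, student))
```

For example, a student who registers serial "SN123", is assigned pad 1 in a biology course, and enters the correct channel would see pad 1 confirmed on the grid; entering the wrong channel yields no join at all.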
All students in three first year science courses (named Biology 022, Biology 023, and Physics 028) were invited to participate. Although the precise number of students in the courses could not be ascertained due to privacy legislation (some students took two of the courses), it was estimated that there were most likely between 1,200 and 1,400 unique student members in the three classes. Some 560 students responded to the survey, a response rate of between 40 and 50 percent. Instructors of these large classes were the first to volunteer to use clickers.
The study was, first and foremost, qualitative and dealt with the perceptions of the students. These perceptions were triangulated with email interviews with the instructors and extensive field notes taken while attending as participant-observers for every lecture in every course from the first one in September until instructors finished teaching their sections in late October. Thus, this study covers the first two months of implementation.
The original intention had been to study the effectiveness of the technology by correlating student achievement with measures of attitude and the kinds and levels of questions that the instructors used. However, most students had not been able to use their clickers successfully by the date the study started and it was decided to change the focus of the project. Rather than using an online survey in which students were personally identifiable and their grades compared to their experience of using the clickers, students were asked anonymously about their experience with the devices, what problems they thought they faced, and what advantages they perceived.
The online survey was conducted using a secure, in house survey tool (Survey-in-a-Box). All survey questions were open ended and students could answer the questions in any way, and at whatever length, they chose (to a maximum of 1,600 characters). The survey consisted of 13 questions, some of which were directed to specific courses (see Appendix A for the survey questions). However, since the two biology courses were anti-requisites, students taking both biology and physics faced at most 11 questions. Those students who took only one of the biology or physics courses faced nine questions.
In the analysis, patterns in the answers were examined to create categories. The unit of analysis was the thought. As each answer was analysed, the major opinion(s) or perception(s) expressed in it were coded. Categories were added or changed as necessary and earlier responses continuously re-coded. Once all answers had been coded, the responses were analysed as a body of answers. Overlapping categories were renamed until each was unique and some categories were subsumed into others. Then all responses were recoded again. To provide some inter-rater reliability, a second rater coded the responses independently. The two raters negotiated their coding and category structures until both agreed that the categories conclusively represented every student response, i.e., that the set of categories captured every response meaningfully, that together they represented a coherent set of categories and that no categories were orphaned (left unrelated to the conceptual framework). At that point, the instructors were asked about the findings through individual email messages and finally, confirmatory evidence was sought in the field notes.
Once coding was completed, basic statistical analysis was performed. Descriptive statistics (N=560 overall) were obtained and the percentage of responses in each category was calculated. However, no tests of significance were used. It was our contention that the results of any qualitative survey tend to underestimate rates: students who might have selected a factor had they been given a list from which to choose, or Likert scales to score, might simply not have thought to write about that factor in the opinion they were expressing.
The classes were held in one of two modern, large scale lecture halls, each capable of holding over 700 students. Each lecture hall had two giant projection screens, multimedia equipment, and an instructor's bench containing a laptop dock with access to the University's intranet and the Internet. Students were comfortably, if closely, seated, auditorium style and all seats had direct line of sight to the projection screens. Acoustic characteristics appeared to be quite good when the researchers sat in various locations around the halls.
From the beginning, problems arose. Many students had trouble registering their clickers. Those who successfully registered them often had trouble joining them in class. Figure 1 illustrates the sequence and pattern of registering and using a clicker. The shaded boxes represent the process that works and the unshaded boxes illustrate ways that the clicker process could become derailed. The starburst shapes represent actions or inactions of the system. Data to support this diagram came from descriptions provided by the instructors at several regular instructor meetings called by the team leader.
Students purchased access codes to register their clickers for one or more academic terms. However, when students first purchased their clickers at the University bookstore, the devices came packaged with a set of instructions from the manufacturer that differed from those actually required in this specific implementation. Instructors told their students to go to their course websites to register their clickers. However, some students used the written instructions enclosed with their devices, went to the clicker manufacturer's website, and attempted to register there. The manufacturer's website asked for payment for access that had already been purchased at the bookstore. If students paid online, their clickers were registered, but with the manufacturer and not with the University. If they then realised their error and subsequently used the correct instructions on the course website, they received a message that their clicker had already been registered (which it had)! Students' only relief occurred when Information Technology staff at the University de-registered their clickers and students followed the correct instructions.
Another set of problems arose due to the typeface used with the registration codes. A small Arial font used identical symbols for the number zero and the letter "O", and for the number "1" and the letter "L". Such symbols made correct interpretation of the registration code very difficult for some students. If a code contained just one "1" (or "L") and one "0" (or "O"), a student might have to try up to four times before entering the correct code. The more 0's, O's, 1's, and L's, the greater the number of possible permutations of the code.
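The combinatorics here are simple to illustrate: each ambiguous glyph doubles the number of strings the printed code could represent. The short Python sketch below (hypothetical; the actual registration system was not inspected) enumerates every candidate a student might have to try:

```python
from itertools import product

# Pairs of glyphs that were visually identical in the printed font.
AMBIGUOUS = {"0": "0O", "O": "0O", "1": "1L", "L": "1L"}

def candidate_codes(printed):
    """Return every string the printed code could plausibly represent."""
    options = [AMBIGUOUS.get(ch, ch) for ch in printed]
    return ["".join(combo) for combo in product(*options)]

# A code with one 0/O and one 1/L yields 2 * 2 = 4 candidates.
print(candidate_codes("A0B1"))   # ['A0B1', 'A0BL', 'AOB1', 'AOBL']
```

A code containing k ambiguous characters therefore yields 2^k candidates, which matches the students' experience that codes rich in 0's, O's, 1's and L's required many more registration attempts.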
Figure 1: Registration and use of clickers
Yet another set of problems occurred when the manufacturer unfortunately printed and sent duplicate registration codes that were subsequently sold to students by the University bookstore. Those students had to return their codes and get new ones.
The outcome in each of the preceding cases was a simple failure to register, but students received no clear indication why the process had failed. In some cases the clickers simply did not work while in others students received a message saying their clicker was already registered. Instructors set up clinics to problem solve with students in a one on one fashion. By the end of the study almost all students had successfully registered their clickers.
In addition to the registration problems, the clickers did not always work reliably. A number of specific problems were identified. For example, sometimes the batteries enclosed with the clickers were worn out, despite being new. At other times, there appeared to be a software glitch in the system and no students could join. At other times, the clickers would go into 'sleep' mode to preserve battery life and would not rejoin the class. At yet other times, the problem remained unknown and the clicker had to be replaced.
When any of these problems occurred, students found that their clickers would not join. Instructors noted that large numbers of students were not joining. However, there was not a lot they could do about it. The biology instructors had already decided to give students a 5% bonus if they answered 80% of the questions, right or wrong, and to take any technical difficulties into account. The physics instructor decided that since his course outline described online tests, he would have to use them, but he also provided scan sheets for those whose clickers were not working.
                             Responses rated positive     Responses rated negative
                             Bio X    Bio Y    Phys Z     Bio X    Bio Y    Phys Z
Responses by all students    1015     243      552        342      132      384
However, students did experience a large number of problems in connecting their clickers. Only 37.9% reported having no trouble at all. Nearly half (47.8%) said that they had trouble registering their clickers or getting a pad number and some 17.9% noted that they had suffered more than one problem. One of the primary causes of these multiple problem situations was created when students tried to re-register their clickers for a second course (11.4%). In fact, the clickers had to be registered only once; students simply had to acquire a pad number for each course.
When students were asked for their views about the sources of the problems they had faced, I found that, again, the number who reported no problems (35.9%) was very similar to the result from the previous question, giving some confidence that the group that faced no problems comprised slightly more than one-third of the overall group. Students perceived three main sources of the problems with their clickers. The most highly reported source (17.8%) was problems coming from either the clicker software or the hardware itself. The second most commonly reported source (14.8%) was the font used to register the clicker. It was described as both too small and with a problematic typeface. The final major source of the problem was self: 12.6% of students said that they were the source of the problem. Presumably, that meant that they made one or more errors in registering or using their clickers. Despite this, some 9.4% of the students said that they did not know the source of the problems that they had faced.
When students explained why they liked using clickers, the most common reason they gave (36.2%), was receiving feedback on how well they understood the material which they were studying. Some 22.9% said that they enjoyed the interactivity during lectures. The third most popular answer given (20.7%) was peer comparison, that is knowing how well they were doing versus their classmates. Other reasons that students said they liked the clickers were: feeling more involved (15.4%), getting exam hints (14.9%), and better learning (11.6%). No other reasons were mentioned by more than 10% of the students. Only 4.5% volunteered that they had no reason to like them.
There were slight differences between the courses. In Biology 022, as in the other courses, the most commonly reported reason for liking clickers was getting feedback on understanding (42.7%), with peer comparison (29.9%) as the second, and interactivity the third most common reason (25.8%). In Biology 023, feedback on understanding was the most commonly cited (35.0%), with interactivity the second (21.9%). The use of the clicker for taking attendance was the third most commonly stated reason in this group (12.4%), with many students saying they were happy that other class members were forced to attend by the necessity to use the clicker in class. In Physics 028, getting feedback on understanding was again the most commonly cited reason (36.2%) while using the clicker for testing (22.0%) was the second. Unlike either biology instructor, the physics instructor gave clicker based tests in which questions were projected onto the screen and students could choose their answers by clicker, or if it was not working that day, by entering the answer on a scan sheet. It is interesting that students perceived the metacognitive benefits as the most useful aspect of clickers in all three courses, whereas variations between the courses appeared in the second and third most commonly perceived benefits.
When it came to reasons why students did not like to use clickers, the largest overall group (38.5%) said that they had no reason to dislike the technology. Technical reasons comprised the second largest group (24.0%), and poor use of the technology in class made up the third (15.2%). The only other reason that appeared in more than 10% of responses was time wastage in class due to setting up and using the technology (12.0%).
Figure 2: Reasons why students liked clickers
Figure 3: Reasons why students disliked clickers
When the courses were compared on this basis there were, again, slight variations. In Biology 022, no negative feeling against clickers was mentioned in 45.2% of the comments. The second and third largest groups were technical problems (26.1%) and poor application of the technology (14.2%). In Biology 023, technical problems were mentioned the most often (39.6%), with no problems a close second (34.1%). This result was quite startling because our field notes indicated that the Biology 023 instructor was the most comfortable and knowledgeable using the clicker technology in class and the class had fewer overall technical glitches than the other courses. The third reason cited was that the clickers went into sleep mode (25.3%) and this may have accounted for the perception of greater technical difficulties. Sleep mode problems were not reported in large numbers in either of the other courses (2.9% and 5.2%) despite our observations that sleep mode problems were endemic in all three courses. Perhaps the easier implementation in Biology 023 had an effect on what students considered the problems to be. In courses where there were many more serious problems in registering the devices and joining the class, a smaller proportion of the students may have connected for a long enough period to face the sleep problem. In Physics 028, there were more overall problems reported by students. The first two reasons fit the overall pattern (no problem 31.2%, technical problems 23.6%) but then students noted what they considered multiple other problems (use in tests 19.0%, poor application 15.2%, answering tests in lockstep 11.8%, time wastage 11.8%).
The next set of questions asked students if there were ways that the clickers should or should not be used, compared to their perception of what was actually happening in their classes. Almost one-half of the students (47.3%) could not see any other way to use the clickers than the way they were being used. The only real suggestion was that the clickers should be used in more courses (36.4%). No other reason even reached the 6% mark. When students were challenged to tell us about ways the clickers should not be used but were being used, a clear majority said there were not any (61.9%). However, a clear group of 14.1% thought that clickers should not be used for testing. The next two questions asked students about their perceptions of the ways that clickers related to their learning. In general, the comments closely mirrored those from the previous questions and did not produce any new insights. The reasons that students gave for liking clickers were almost equivalent to the reasons they gave to explain the reasons clickers help them to learn. In the same way, the reasons why students said they didn't like clickers corresponded exactly to their perceptions about clickers handicapping their learning. Thus, reporting statistics and providing quotations from the answers to these two questions is almost completely redundant.
The last question asked students who were taking two courses to compare them and to choose which type of clicker use they thought was better. There did not appear to be overwhelming support for any one of the courses over the others. All courses had their advocates and their detractors in approximately the same ratios. In fact, the most common response was that the use of the clickers was equivalent in the science courses they were taking (20.5%). The major difference noted was that the clickers were used for testing in physics and there was a ratio of 1.5:1 of students who disliked the tests to those who liked them. However, the total number of students who raised the issue (N=50) was only a small fraction of those who answered the physics questions (N=273). Students did raise other issues such as question timing (they prefer questions throughout a class rather than strictly at the beginning) and question types used in class (preferring conceptual and diagnostic questions as opposed to factual questions). However, the numbers of students raising these points was small (6.4% and 8.9%). Over 42% of the answers were coded as not applicable, in that students were only taking one course or it was not clear to which course or courses students were referring.
Figure 4: Perceived advantages and disadvantages of clickers
At this stage the numbers of responses were not important because, as stated earlier, this qualitative survey likely under-reported rates that would have been obtained by a quantitative survey. Upcoming research will use a quantitative survey design. This diagram simply presents a coherent framework to understand what the students had written overall.
All responses were classified as advantages or disadvantages. Within the advantages there were three main types. These were attitudinal, interactional and pedagogical. The categories within each type are not hierarchical; the boxes in the diagram are drawn for convenience.
Attitudinally, fun ("almost like a game"), convenience ("a very convenient way to answer test questions") and the bonus marks ("so we can get an extra 5%") all helped promote the students' attitudes towards their courses. These results are strikingly similar to the attitudes of students reported by Sharma, Khachan, Chan and O'Byrne (2005). Most people enjoy having fun, using conveniences in their lives, and getting something for nothing.
Using the interactional lens, students told us that the reception of feedback through the clicker's testing function and their involvement in class, their attendance at class and the use of non-graded surveys were all ways that helped them cope with the dehumanising aspects of being in such a large lecture hall. One student wrote, "it makes it a better learning experience and it gets us involved in the discussions and lets us know whether we are on the right track or not depending on the questions asked and our knowledge." Perhaps these students, who had just left high school classes of around 25 to 30 students, welcomed greater involvement than the typical lecture hall experience provides. Indeed as Williams (2003) reported, interactivity is one of the important benefits of clickers and clicker like technologies because they can create the opportunity for more meaningful learning.
When it came to pedagogical advantages, students said that their learning was helped in three ways: increased metacognition, better learning, and testing. Metacognitively, the clicker based questions helped students better understand their own misconceptions, "when there is a misconception in a question, the professor clears things up" and their normative place in the class, "being able to answer with the clicker shows how well you are doing when compared to the rest of the class." In some ways this is similar to Elliott's (2003) finding that students were aware of their increasing levels of concentration. Their learning was also helped because the clicker based questions acted as signposts to the important course content, "it lets me know where I should be in this course and what I need to improve on" and the testing function provided review of material and hints for future tests and examinations, "they give a good idea as to what the questions on exams are going to be like."
However, the clickers were also perceived by students to have disadvantages. These were pedagogical, technical and financial.
Pedagogically, the time spent by instructors fiddling around to make the clicker system work, took time away from dealing with course matters, "I dislike how it took so long to get everybody up and running, and how much class time was wasted explaining how the clicker works and what to do to get help, etc." Clickers were also perceived as wasting student time when they felt required to attend all classes whether or not they had other pressing work to do, "I really dislike clickers because I figure that I pay for my education and at this level I should not be graded for my participation. Some may call it easy marks but I'm just fundamentally against it, seeing as university is a time for independence and personal management." Many students claimed that a good percentage of the questions posed in class were irrelevant or unhelpful, that cheating on clicker tests was rampant and that the kinds of questions that could be posed by instructors were limited by the nature of computerisation.
Another prominent disadvantage was technical. The students reported many technical problems that they had faced. They also noted that clickers were limited to particular, narrowly circumscribed uses. This technology can do little else than allow a student's choice to be transmitted and received. For example, clickers, themselves, cannot import and display content and they cannot provide continuous output about a student's changing level of understanding. The technology also limits the conduct of tests in that it creates test situations in which students are presented with the same question(s) simultaneously and are required to answer within the same time frame. Students cannot answer the easiest questions first or return to earlier questions as they would in a paper based test.
Finally, there is a financial disadvantage. Students are required to purchase a clicker and a registration code, whether they are taking one or many clicker based courses, in one or more academic terms. This purchase, along with the use of an online platform to answer student questions, effectively transfers some course costs from the departmental budget (for teaching assistants) onto students. Cost data were not collected in this study and no claim is made that such a shift has actually occurred; the claim is simply that the devices could be used to shift expenses.
Figure 5: Recommendations for implementing clickers
These recommendations are related to implementation in three areas: pedagogical, technical and interactional.
Pedagogy is always an important component of technological implementation. When implementing a new technology such as personal response units, it is crucial that the instructions received by instructors and students are clear, accurate, simple, consistent, comprehensive and fail safe. Steps should be taken to eliminate any possibility that students or instructors could receive instructions that are, in any way, confusing. Toward that end, instructors should receive advanced training in the use of any new technology, along with problem solving strategies relevant to that technology, prior to their use of it. Instructors should not have to waste class time because they do not know what to do when a problematic situation arises. Instructors should have enough training to know immediately whether a problem is solvable or not and whether use of the technology should be abandoned for that class session. In addition, students should also receive, at the very beginning of the term, specific technological instruction in 'walk in' training clinics to provide them with timely, focused, one on one help.
One of the most disconcerting aspects of this implementation was the lack of technical reliability. Students should not be required to purchase a device or system that has not been thoroughly tested or which contains software that has not passed beta testing. Control over, and hence accountability for technical reliability, should rest with one person in the organisation conducting the implementation. It is problematic when a technological system is purchased from a commercial organisation that requires its own equipment, such as servers, to be used for the technology to function properly. In this instance, for example, clickers had to be registered and this process had to be carried out with, and through, the manufacturer's servers. The university should have controlled the registration software on its own servers. In that light, pilot studies should be carried out at the same scale as the implementation being considered and students should not be required to purchase the technology in the pilot study.
Finally, there are the interactional aspects of question timing. When clickers are used in class, better interactions occur when clicker use is varied throughout the class. Although battery life is shortened when clickers remain on, steps should be taken to ensure they do not go into sleep mode. Perhaps better manufacturing processes or the use of more sophisticated batteries will allow for longer battery life. A key strength of the clicker is that it can help to keep students engaged, because questions may be asked at any time.
Instructors can decide whether they want to use the technology for testing or not, and whether they want the students to engage others in discussions about the questions. If no grades, other than participation, are awarded, then discussion about the questions and answers can be a pedagogical strength, reinforced by display of the class answers. Carefully designed questions, related to known areas of student misunderstanding, can help students become more metacognitive in their learning. In the same way, review questions can ascertain what, and how much, content students have retained from their prior instruction. Finally, questions can be designed to diagnose areas of weakness in the overall class so that instructors can reformulate their lessons to ensure those areas are better covered.
There is no doubt that students believe that clickers represent a useful tool for helping them to learn better in the large lecture hall environment.
Finally, the author would like to note the perseverance of the instructors who were observed, talked to, and interviewed by email. The implementation process was time consuming and at times frustrating, yet these instructors maintained their professionalism and often went far beyond the call of duty in helping their students. That kind of teaching is commendable.
Adekoya, A. A., Eyob, E., Ikem, F. M., Omojokun, E. O. & Quay, A. M. (2005). Dynamics of information technology (IT) successful implementation in developing countries: A Nigerian case study. The Journal of Computer Information Systems, 45(3), 107.
Associated Press (2005). Interactive 'clickers' changing classrooms: Teachers get instant feedback from clicker-wielding students. [viewed 28 Aug 2006] http://www.msnbc.msn.com/id/7844477/
Bessler, W. C. & Nisbet, J. J. (1971). The use of an electronic response system in teaching biology. Science Education, 3, 275-284.
Bondarouk, T. & Sikkel, K. (2005). Explaining IT implementation through group learning. Information Resources Management Journal, 18(1), 42.
Boudreau, M.-C. & Seligman, L. (2005). Quality of use of a complex technology: A learning-based model. Journal of Organizational and End User Computing, 17(4), 1-22.
Brookshier, D. (2005). New projects in the Education and Learning Community. [viewed 28 Aug 2006, verified 10 Oct 2006] http://weblogs.java.net/blog/turbogeek/archive/2005/10/new_projects_in_14.html
Carnaghan, C. & Webb, A. (2005). Investigating the effects of group response systems on learning outcomes and satisfaction in accounting education. Waterloo, ON: University of Waterloo. [verified 10 Oct 2006] http://www.learning.uwaterloo.ca/LIF/responsepad_june20051.pdf
Casanova, J. (1971). An instructional experiment in organic chemistry, the use of a student response system. Journal of Chemical Education, 48(7), 453-455.
Cooper, R. B. & Zmud, R. W. (1990). Information technology implementation research: A technological diffusion approach. Management Science, 36(2), 123-140.
Cuban, L. (1986). Teachers and machines: The classroom use of technology since 1920. New York: Teachers College Press.
Draper, S. W. & Brown, M. I. (2002). Use of the PRS (Personal Response System) handsets at Glasgow University: Interim evaluation report. [viewed 30 Dec 2005, verified 10 Oct 2006] http://www.psy.gla.ac.uk/~steve/ilig/interim.html
Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20(2), 81-94.
Draper, S. W., Cargill, J. & Cutts, Q. (2002). Electronically enhanced classroom interaction. Australian Journal of Educational Technology, 18(1), 13-23. http://www.ascilite.org.au/ajet/ajet18/draper.html
Dufresne, R. J., Gerace, W. J., Leonard, W. J., Mestre, J. P. & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7(2), 3-47.
Duncan, D. (2005). Clickers in the classroom: How to enhance science teaching using classroom response systems. San Francisco: Pearson/Addison Wesley.
eInstruction (2006). eInstruction's Classroom Performance System. [viewed 28 Aug 2006] http://www.einstruction.com/
Elliott, C. (2003). Using a personal response system in economics teaching. International Review of Economics Education, 1(1), 80-86.
Garrison, D. R. & Archer, W. (2000). A transactional perspective on teaching and learning: A framework for adult and higher education. Oxford, UK: Pergamon.
Gilbert, A. (2005). New for back-to-school: 'Clickers'. [viewed 28 Aug 2006, verified 10 Oct 2006] http://news.com.com/New+for+back-to-school+clickers/2100-1041_3-5819171.html
Goodyear, P. (2005). Educational design and networked learning: Patterns, pattern languages and design practice. Australasian Journal of Educational Technology, 21(1), 82-101. http://www.ascilite.org.au/ajet/ajet21/goodyear.html
Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64-74.
Judson, E. & Sawada, D. (2002). Learning from past and present: Electronic response systems in college lecture halls. Journal of Computers in Mathematics and Science Teaching, 21(2), 167-182.
Kirkwood, A. & Price, L. (2005). Learners and learning in the twenty-first century: What do we know about students' attitudes towards and experiences of information and communication technologies that will help us design courses? Studies in Higher Education, 30(3), 257-274.
Lapointe, L. & Rivard, S. (2005). A multilevel model of resistance to information technology implementation. MIS Quarterly, 29(3), 461-492.
Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London: Routledge.
Learning Media Unit (2001). Student and staff feedback on using an electronic group response system in a Mechanical Engineering lecture at the University of Sheffield. Learning Media Unit Evaluation Report Project 39. Sheffield, UK: University of Sheffield. [verified 10 Oct 2006] http://www.shef.ac.uk/learningmedia/evalreports/pdfs/P39-Diprose_evaluation_report.pdf
Littauer, R. (1972). Instructional implications of a low-cost electronic student response system. Educational Technology: Teacher and Technology Supplement, 12(10), 69-71.
National Research Council (USA). (1999). How people learn: Brain, mind, experience and school. Washington, DC: National Academy Press. [verified 10 Oct 2006] http://newton.nap.edu/html/howpeople1/
Perkins, K. & Wieman, C. (2004). Revitalizing your class through research-based innovation: Clickers and beyond. Paper presented at the American Astronomical Society Meeting #204, 3 June, Denver, CO. [abstract only, verified 10 Oct 2006] http://www.aas.org/publications/baas/v36n2/aas204/237.htm
Poulis, J., Massen, C., Robens, E. & Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66, 439-441.
Roschelle, J., Penuel, W. R. & Abrahamson, L. (2004). Classroom response and communication systems: Research review and theory. [viewed 1 Jan 2006] http://www.ubiqcomputing.org/CATAALYST_AERA_Proposal.pdf
Schenke, J. (2005). Students zap their way to improved education. [viewed 28 Aug 2006, verified 10 Oct 2006] http://news.uns.purdue.edu/UNS/html3month/2005/050812.T-Evans.clickers.html
Segovia, J. R. (2004). Who wants to learn accounting? The use of personal response systems in Introductory Accounting. Poster Session. Paper presented at the Annual Meeting of the American Accounting Association, 8-11 August, Orlando, FL.
Shapiro, J. A. (1997). Electronic student response found feasible in large science lecture hall: Inexpensive, homemade system sparks student attention and participation. Journal of College Science Teaching, 26, 408-412.
Sharma, M. D., Khachan, J., Chan, B. & O'Byrne, J. (2005). An investigation of the effectiveness of electronic classroom communication systems in large lecture classes. Australasian Journal of Educational Technology, 21(2), 137-154. http://www.ascilite.org.au/ajet/ajet21/sharma.html
Turning Technologies (2006). Turning Technologies Audience Response Systems. [viewed 28 Aug 2006] http://www.turningtechnologies.com/
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Michael Cole (Ed.). Cambridge: Harvard University Press.
Williams, J. B. (2003). 'Learning by remote control': Exploring the use of an audience response system as a vehicle for content delivery. In G. Crisp, D. Thiele, I. Scholten, S. Barker & J. Baron (Eds.), Interact, Integrate, Impact: Proceedings of the 20th ASCILITE Conference. Adelaide, Australia, 7-10 December. http://www.ascilite.org.au/conferences/adelaide03/docs/pdf/739.pdf
Wood, W. B. (2004). Clickers: A teaching gimmick that works. Developmental Cell, 7, 796-798.
|Author: Dr John Barnett, Assistant Professor, Science and Online Education, Faculty of Education, The University of Western Ontario, 1137 Western Rd, London, Ontario, Canada. Email: email@example.com Web:
Please cite as: Barnett, J. (2006). Implementation of personal response units in very large lecture classes: Student perceptions. Australasian Journal of Educational Technology, 22(4), 474-494. http://www.ascilite.org.au/ajet/ajet22/barnett.html