|Australasian Journal of Educational Technology
2009, 25(2), 235-249.
A strategic assessment of audience response systems used in higher education
Robin H. Kay and Ann LeSage
University of Ontario Institute of Technology
An audience response system (ARS) permits students to respond to multiple choice questions using a remote control device. All responses are instantly displayed, usually in chart form, and subsequently reviewed and discussed by the instructor and the class. This paper offers a comprehensive review of teaching strategies used with ARS and includes a discussion of general, motivational, assessment based, and learning based approaches. Several promising strategies have been identified, particularly collecting formative assessment feedback and peer based instruction. More systematic, detailed research in a broader range of contexts is recommended.
ARSs were first introduced at Stanford and Cornell Universities in the mid 1960s, but did not become commercially available until 1992 (Abrahamson, 2006; Judson & Sawada, 2002). In 1999, a new generation of more affordable ARSs was created, with widespread use emerging in 2003. Today, numerous colleges and universities use ARSs (Abrahamson, 2006).
The purpose of the following review is to provide a current, comprehensive analysis of strategies used with ARSs in higher education.
Fies & Marshall (2006) completed an extensive analysis of methods used to assess ARSs; however, they examined only 16 peer reviewed studies, of which only two were published after 2004. Therefore, some of their findings and conclusions are incomplete. For example, they noted that few studies reported the use of ARSs for formative assessment purposes; however, since 2004, 16 new studies have been completed in which formative assessment was examined. The authors also claimed that ARSs were used primarily for individual, not collaborative, interactions, a practice that has changed markedly since 2004.
A more recent review by Simpson & Oliver (2007) analysed more than 40 papers; however, only 17 of the cited articles were from peer reviewed journals, and the majority of the results were based on five references. Finally, Caldwell (2007) analysed 25 peer reviewed articles, most published after 2000, with 10 studies completed after 2004. The principal foci of Caldwell's work were to discern the primary users of ARSs, determine the rationale for using ARSs, explore questioning techniques utilised during ARS lessons, and identify best practices associated with ARS classroom use. However, few details were offered concerning the specific teaching strategies used and their impact on student learning.
The articles selected for this review focus solely on strategies used with ARSs in the higher education domain. Twenty-six labels (see Appendix A in Kay, 2008a) were used to search for relevant articles. A total of 52 papers and chapters were analysed. Given that previous literature reviews included no more than 25 peer reviewed papers, one can be reasonably confident that this review accurately reflects the state of current research on ARSs. Of the 52 studies analysed, 50 were conducted between 2000 and 2007, with 37 articles published since 2004.
Each of the studies included in this review was analysed based on the following categories: rationale or theory for using ARS, context of use, benefits and challenges associated with using ARS, strategies and pedagogical use, and individual differences in the use of ARS. See Appendix B (see Kay, 2008b) for a detailed description of the coding of variables used in this study. See Appendix C (Kay, 2008c) for a description of all the articles reviewed for this study.
In summary, the conclusions from the current review reflect the attitudes and learning efforts of large classes of undergraduate students who were studying mathematics or science based subject areas.
Four categories of strategies were examined in the current review. General strategies are those referring to the preparation and process involved in using an ARS. Motivational strategies involve explicit attempts to involve or engage students. Assessment strategies refer to the use of an ARS to guide instruction or evaluate progress. Finally, learning based strategies include specific techniques designed to increase learning performance.
|Strategy||Description||Sample references|
|General strategies|
|Explain||Explain to the class why the ARS is being used||Beatty, 2004; Caldwell, 2007; Dufresne & Gerace, 2004; Trees & Jackson, 2007|
|Preparation||Planning required to develop effective questions||Allen & Tanner, 2005; Beatty, 2004; Beatty et al., 2006; Boyle, 2006; Caldwell, 2007; McCabe, 2006; Poulis et al., 1998; Stuart et al., 2004|
|Type||Types of questions that work with an ARS||Beatty, 2004; Beatty et al., 2006; Brewer, 2004; Caldwell, 2007; Crouch & Mazur, 2001; Cutts, 2006; Dufresne & Gerace, 2004; Fies & Marshall, 2006; Horowitz, 2006; Kennedy & Cutts, 2005; McCabe, 2006; Miller et al., 2006; Poulis et al., 1998|
|Format||Format in which questions are offered||Caldwell, 2007; Cutts, 2006; Horowitz, 2006; McCabe, 2006; Robertson, 2000; Simpson & Oliver, 2007; Uhari et al., 2003|
|Motivational strategies|
|Attendance||Students go to class more||Burnstein & Lederman, 2001; Caldwell, 2007; Greer & Heaney, 2004|
|Engagement||Students are more engaged in class||Bergtrom, 2006; Caldwell, 2007; Draper & Brown, 2004; Latessa & Mouw, 2005; Preszler et al., 2007; Siau, Sheng & Nah, 2006; Simpson & Oliver, 2007|
|Participation||Students participate more with peers in class to solve problems||Bullock et al., 2002; Caldwell, 2007; Draper & Brown, 2004; Greer & Heaney, 2004; Jones et al., 2001; Siau, Sheng & Nah, 2006; Stuart et al., 2004; Uhari et al., 2003; Van Dijk et al., 2001|
|Assessment strategies|
|Formative||Assessment that improves student understanding and the quality of teaching||Beatty, 2004; Bergtrom, 2006; Brewer, 2004; Bullock et al., 2002; Caldwell, 2007; Draper & Brown, 2004; Dufresne & Gerace, 2004; Elliott, 2003; Greer & Heaney, 2004; Hatch et al., 2005; Jackson et al., 2005; Siau, Sheng & Nah, 2006; Simpson & Oliver, 2007; Stuart et al., 2004|
|Contingent teaching||Adjusting the teaching method based on feedback from the class||Beatty, 2004; Brewer, 2004; Draper & Brown, 2004; Elliott, 2003; Greer & Heaney, 2004; Jackson et al., 2005; Kennedy & Cutts, 2005; Poulis et al., 1998; Simpson & Oliver, 2007|
|Summative||Use of the ARS for graded tests||Draper et al., 2002; Fies & Marshall, 2006; Simpson & Oliver, 2007|
|Learning based strategies|
|Attention||Students are more focused in class||Bergtrom, 2006; Burnstein & Lederman, 2001; Caldwell, 2007; d'Inverno et al., 2003; Draper & Brown, 2004; Elliott, 2003; Jackson et al., 2005; Jones et al., 2001; Latessa & Mouw, 2005; Siau, Sheng & Nah, 2006; Slain et al., 2004|
|Interaction||Students interact more with peers to discuss ideas||Beatty, 2004; Bergtrom, 2006; Caldwell, 2007; Elliott, 2003; Freeman et al., 2007; Kennedy et al., 2006; Sharma et al., 2005; Siau, Sheng & Nah, 2006; Slain et al., 2004; Stuart et al., 2004; Trees & Jackson, 2007; Van Dijk et al., 2001|
|Peer based instruction||Promote peer discussion and resolution of problems||Brewer, 2004; Bullock et al., 2002; Burnstein & Lederman, 2001; Caldwell, 2007; Crouch & Mazur, 2001; Draper & Brown, 2004; Jones et al., 2001; Kennedy & Cutts, 2005; Miller et al., 2006; Nicol & Boyle, 2003|
|Student preparation||Students read materials ahead of class||Beatty, 2004; Bergtrom, 2006; d'Inverno et al., 2003; El-Rady, 2006; Uhari et al., 2003|
|Class based discussion||Promote class discussion and resolution of problems||Beatty et al., 2006; Caldwell, 2007; d'Inverno et al., 2003; Nicol & Boyle, 2003; Reay et al., 2005; Sharma et al., 2005|
|Case studies||Present and solve case studies||Jones et al., 2001|
|Experiments||Present and solve experiments||Draper et al., 2002; Simpson & Oliver, 2007|
Because of the potential problems that could arise when introducing an ARS, a number of researchers recommend that instructors explain why this new technology is being used. Beatty (2004) suggests that students will be more comfortable when they understand the rationale for a specific classroom practice. Caldwell (2007) and Trees & Jackson (2007) add that if teachers expect to garner full student support for this new method, they need to explain why ARSs are being used and what they expect to gain by using the technology. Finally, although ARSs are reportedly easy to use (d'Inverno et al., 2003; Elliott, 2003; Hinde & Hunt, 2006; Jones, Connolly, Gear & Read, 2001; Pradhan, Sparano, & Ananth, 2005; Sharma, Khachan, Chan & O'Byrne, 2005; Siau et al., 2006), it might be wise to offer practice questions to allow students to become familiar with the technology (Caldwell, 2007). To date, the need for explanation and practice has not been systematically studied; however, it seems to be a reasonable approach for addressing some of the challenges noted earlier. For example, if students understand why they are using an ARS, they may be more accepting of this new way of learning, more willing to put forth the extra effort required to discuss and solve problems in class, and less sensitive when they respond incorrectly to the questions presented.
While setting up and using an ARS is a relatively simple and quick task, creating effective multiple choice questions is a challenging and time consuming process (Allen & Tanner, 2005; Boyle, 2006). A number of recommendations have been tendered with respect to preparing effective questions. It has been argued that every question should have an explicit pedagogical purpose (Beatty, 2004; Beatty, Leonard, Gerace & Dufresne, 2006; Caldwell, 2007; Poulis, Massen, Robens & Gilbert, 1998). In addition, McCabe (2006) notes that questions should be thoughtfully linked together. Boyle (2006) adds that even when questions have been prepared and used, there is a need for continual refinement. Stuart et al. (2004) suggest that spontaneous questions based on student feedback work well; however, most research supports a more systematic, carefully planned approach (Allen & Tanner, 2005; Beatty, 2004; Beatty et al., 2006; Caldwell, 2007; McCabe, 2006; Poulis et al., 1998). Current research, though, is lacking with respect to concrete evidence on the impact of various question preparation strategies. In addition, there are few classroom-ready collections of ARS questions available in most fields. Thus it is incumbent upon the instructor to develop original questions, and many find this task extremely time consuming (Allen & Tanner, 2005; Beatty et al., 2006; El-Rady, 2006; Fagan et al., 2002; Freeman, Comerton-Forder, Pickering & Blayney, 2007; Paschal, 2002).
Allen & Tanner (2005) maintain that the cognitive benefits of ARSs are only as strong as the questions asked. Beatty et al. (2006) add that the critical challenge is to create questions that cultivate productive classroom interaction and discourse. A wide range of suggestions has been offered regarding the most effective types of ARS questions.
Once a question is developed, decisions have to be made about the number of questions to ask, the number of options to provide within a multiple choice question, and how long to take when working with a specific question. Most researchers agree that questions should be sprinkled judiciously throughout a lecture at the rate of two to five questions per 50 minute time period (Allen & Tanner, 2005; Burton, 2006; Caldwell, 2007; Preszler, Dawe, Shuster, & Shuster, 2007; Robertson, 2000). The main reason given for limiting the number of questions is to maintain student interest and enthusiasm (Allen & Tanner, 2005; Burton, 2006; Preszler et al., 2007; Robertson, 2000). Given that thoughtful ARS questions take 5-10 minutes to display, discuss, and resolve (Cutts, 2006), it would be challenging to present more than one question every 10-15 minutes. Finally, because many teachers are concerned about reduced content coverage when using an ARS, limiting the number of questions asked is probably a reasonable strategy (e.g., Beatty, 2004; Beatty et al., 2006; Caldwell, 2007; Cutts, 2006; Draper & Brown, 2004).
Many researchers suggest that no more than four or five options be offered when asking an ARS question (Caldwell, 2007; Cutts, 2006; Robertson, 2000; Uhari, Renko & Soini, 2003). This suggestion is based on the experiences of the researchers, yet no specific reasons are given for this restriction. Several researchers have suggested that to accurately monitor whole class understanding, an "I don't know" option be included as a potential response (McCabe, 2006; Simpson & Oliver, 2007).
Little research has been published on the optimum amount of time to take when asking an ARS question. Beatty et al. (2006) anecdotally reported that when the noise level drops in a class, it is time to stop peer discussion and move on. Time allotted for discussion of an ARS question may be related to subject area, level of thinking required, question difficulty level, and the pedagogical goals of the instructor. To date, limited data has been collected comparing the efficacy of different question formats.
The strategy of using an ARS to increase attendance is not universally popular, however, and sometimes has unexpected consequences. For example, Greer & Heaney (2004) noted that students were displeased about being forced to attend class in order to gain academic credit for ARS participation. An unfortunate consequence of this attendance monitoring practice is that 20% to 58% of students observed their peers bringing multiple remote devices to class to record attendance for missing classmates (Caldwell, 2007). Ideally, students should want to attend class for intrinsic, not extrinsic, reasons: an ARS should provide an inherent learning incentive so that students want to attend. Attaching a grade to ARS monitored attendance may foster resistance and undermine the goal of developing a student centred environment.
Many instructors and learning theorists would agree that students need to participate actively in the learning process. One of the main reasons an ARS is used is to increase participation. Substantial evidence indicates that using an ARS increases student participation when compared to participation rates in classrooms where an ARS was not used (Bullock et al., 2002; Caldwell, 2007; Draper & Brown, 2004; Greer & Heaney, 2004; Jones et al., 2001; Siau, Sheng & Nah, 2006; Stuart et al., 2004; Uhari et al., 2003; Van Dijk et al., 2001). For example, one study observed that "shy" students participated more in classrooms using ARSs (Greer & Heaney, 2004). Bullock et al. (2002) added that when a portion of students' grades was assigned to ARS use, participation markedly increased. Another study reported that ARSs were more effective when case studies were employed (Jones et al., 2001). Still, other researchers have noted that students were more involved when ARSs were used in groups as opposed to individually (Jones et al., 2001; Van Dijk et al., 2001).
An implicit rationale for using ARSs is their engagement value: if students are engaged, it is argued, they are more likely to actively construct knowledge. In general, students in ARS based classes report being more interested or engaged in the concepts presented and discussed (Bergtrom, 2006; Preszler et al., 2007; Simpson & Oliver, 2007). It is important to note, though, that detailed data on the reasons why students are engaged have not been collected to date. For example, students may be more engaged because they are actively involved in the learning process. An alternative explanation might involve the novelty of the technology - it may simply be fun to use a remote control device in class and observe other students' responses. More comprehensive, qualitative research is required to explore plausible explanations for increased student engagement with ARS use.
With contingent teaching, the material that is presented and discussed in class is largely dependent on student feedback from the ARS. The instructor is presenting ideas, gathering formative assessment data, and adjusting content and teaching strategies based on how well students understand the concepts (Brewer, 2004; Cutts, 2006; Draper & Brown, 2004; Elliott, 2003; Greer & Heaney, 2004; Hinde & Hunt, 2006; Jackson et al., 2005; Kennedy & Cutts, 2005; Poulis et al., 1998; Stuart et al., 2004). There is some evidence to suggest that this approach has been successful (Brewer, 2004; Greer & Heaney, 2004), although Abrahamson (2006) speculates that success may be dependent on an instructor's experience level and ability to instantly address problems and misconceptions.
Fies & Marshall (2006) reported that most higher education instructors regularly employ a summative assessment strategy when using an ARS. However, limited evidence exists to support this claim - most studies report formative use of ARSs (e.g., Abrahamson, 2006; Beatty, 2004; Caldwell, 2007; Elliott, 2003; Jackson et al., 2005). It has been suggested that summative assessment encourages rote learning and cannot be used to shape instruction in a dynamic fashion (Dufresne & Gerace, 2004). Indeed, in traditional, lecture based classes, how well students understand concepts often remains a mystery until after the first exam (Bullock et al., 2002). There is also some evidence that higher education students do not enjoy using an ARS for grades (Caldwell, 2007). While using an ARS in formal testing situations might be attractive to an instructor in terms of quick, easy marking, it may not be the most appropriate pedagogical choice. More research is needed examining the benefits and challenges of using ARSs for summative assessment.
It could be argued that participation is a necessary, but not sufficient component for learning. The quality of participatory effort is perhaps more important. Students need to be interacting with each other, the instructor and new concepts being introduced. They must also be "cognitively" engaged (Van Dijk et al., 2001). Numerous studies suggest that frequent and positive interaction occurs with ARSs (Beatty, 2004; Bergtrom, 2006; Caldwell, 2007; Elliott, 2003; Freeman et al., 2007; Kennedy, Cutts & Draper, 2006; Sharma et al., 2005; Siau et al., 2006; Slain et al., 2004; Stuart et al., 2004; Trees & Jackson, 2007). When an ARS is used, researchers have reported greater articulation of student thinking (Beatty, 2004), more probing questions and an increased focus on student needs (Beatty, 2004; Siau, et al., 2006), effective peer to peer discussions (Bergtrom, 2006; Caldwell, 2007; Kennedy et al., 2006), and active learning (Elliott, 2003; Kennedy et al., 2006; Slain et al., 2004; Stuart et al., 2004).
Peer based learning
One of the most common and successful strategies used with an ARS is peer based instruction, which involves displaying a higher level question that could identify a misconception, asking students to click in a response, giving students time to discuss and defend their answers with two to four peers, taking a re-vote on the original question, and having the instructor provide a brief summary (Brewer, 2004; Bullock et al., 2002; Burnstein & Lederman, 2001; Crouch & Mazur, 2001; Cutts, 2006; Draper & Brown, 2004; Hinde & Hunt, 2006; Jones et al., 2001; Kennedy & Cutts, 2005; Miller et al., 2006; Nicol & Boyle, 2003). Crouch & Mazur (2001) presented 10 years of evidence suggesting that peer based instruction yields significant gains in student learning performance. Nicol & Boyle (2003) added that peer instruction is central to the development of student conceptual understanding.
One of the main concerns about using an ARS on a regular basis is coverage of content. Abundant research suggests that teachers, and occasionally students, feel that less content is addressed when using an ARS as opposed to a more traditional lecture format (Beatty, 2004; Beatty et al., 2006; Burnstein & Lederman, 2006; Caldwell, 2007; d'Inverno, et al., 2003; Burton, 2006; Cutts, 2006; Draper & Brown, 2004; Fagan et al., 2002; Freeman et al., 2007; Hatch et al., 2005; Sharma et al., 2005; Siau et al., 2006; Slain et al., 2004; Steinert & Snell, 1999; Stuart et al., 2004). Responding to and discussing knowledge centred questions that identify and target misconceptions can take considerably more time than simply presenting material in a lecture. Some researchers have noted, though, that what is covered in a traditional lecture may not be understood as well as concepts learned with an ARS (Beatty et al., 2006; Caldwell, 2007). However, the fact remains that curriculum coverage is a reality that many lecturers have to face. One way to compensate for material not covered in class is to require students to do more reading and class preparation outside of the lecture (Bergtrom, 2006; Bullock et al., 2002; Burnstein & Lederman, 2001; Caldwell, 2007; Slain et al., 2004). This strategy is discussed next.
Require student preparation before class
A number of researchers have suggested that a good strategy for addressing reduced content coverage is to require students to read materials prior to class so that class time can be devoted to refining and extending student thinking and knowledge (Beatty, 2004; Bergtrom, 2006; d'Inverno et al., 2003). D'Inverno et al. (2003) add that providing notes to augment a lecture was a popular instructional practice. Student attitudes toward the extra effort required for class preparation have not been measured, nor has the impact of this strategy on the quality of learning.
A variation of the peer instruction approach augments the role of the entire class. A multiple choice question is displayed, students are immediately asked to discuss possible solutions with their peers before voting, responses are collected and displayed, and then volunteers are asked to explain the answers that they selected. At some point, the instructor stops the class discussion and summarises the results (Beatty et al., 2006; d'Inverno et al., 2003; Nicol & Boyle, 2003; Reay et al., 2005; Sharma et al., 2005). One research study suggested that students prefer peer based instruction to classroom discussion (Nicol & Boyle, 2003) - students wanted to think about their answers on their own before discussing them with peers. A discussion first approach tended to mute conversation, with some students simply conceding to the most dominant student because they did not have sufficient time to formulate a response. Other researchers have suggested that the classroom discussion approach may be better suited to smaller classes (d'Inverno et al., 2003).
Occasionally, significant problems emerge when ARS questions are presented for classroom discussion. Some students may dominate group discussions (Nicol & Boyle, 2003) or discussion of different viewpoints may precipitate student confusion (Nicol & Boyle, 2003, Reay et al., 2005). Occasionally, students feel that ARS use distracts them from the lesson (Draper & Brown, 2004) or view class discussion as intimidating and a source of anxiety (Nicol & Boyle, 2003). While these problems have not been reported extensively, instructors and researchers need more information about creating effective discussion that is focussed, non-threatening, and efficient.
Jones et al. (2001) used ARSs successfully with university students by incorporating case study questions in a peer based instruction format. The case studies appeared to make classes far more animated, and students talked considerably more with peers instead of working on their own. The applicability of this approach, however, may depend on the subject area being covered. For example, it may be more difficult to develop reasonable case studies for mathematics or physics. Clearly more research is needed to gain a comprehensive picture of the effectiveness of case studies.
While this approach has not been tested rigorously, conducting psychology experiments in class and requiring students to predict outcomes has been examined by Simpson & Oliver (2007). Clearly more research is needed to determine the validity of this approach, but, again, success may be partially dependent on subject area. Science, for example, may be a good fit for an experimental approach, whereas history or English may not.
Nonetheless, a number of significant ARS strategy based problems have yet to be examined. Overall, the results of the current review are limited to mathematics and science based subject areas; more research is needed on the use of ARSs in a broader range of subject areas. Regarding general strategies, formal evaluation comparing the efficacy of specific question types and formats is needed. With respect to motivational strategies, it is unclear why students are more engaged when ARSs are used. Anecdotal explanations have included the novelty effect of using a new technology, the fun of clicking in answers, and increased commitment to learning the concepts being presented, but more in depth research is needed to explore and confirm plausible explanations for increased student engagement. Concerning assessment strategies, formative assessment appears to be a productive technique; yet more research is necessary focussing on the benefits and challenges of using ARSs for summative assessment. Finally, when examining learning based strategies, learning performance needs to be assessed to determine the effectiveness of specific strategies, such as the impact of increased attention levels, extra student preparation before class, experiments, and case studies.
Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy, and implications. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 1-25). Hershey, PA: Information Science Publishing.
Albon, R. & Jewels, T. (2007). The impact of audience response systems in a multicultural Asian context. In ICT: Providing choices for learners and learning. Proceedings ascilite Singapore 2007. http://www.ascilite.org.au/conferences/singapore07/procs/albon.pdf
Allen, D. & Tanner, K. (2005). Infusing active learning into the large-enrolment Biology class: Seven strategies, from the simple to complex. Cell Biology Education, 4, 262-268. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1305885
Beatty, I. (2004). Transforming student learning with classroom communication systems. EDUCAUSE Research Bulletin, 2004(3), 1-13. [viewed 3 Nov 2007] http://www.educause.edu/ir/library/pdf/ERB0403.pdf
Beatty, I. D., Leonard, W. J., Gerace, W. J. & Dufresne, R. J. (2006). Question driven instruction: Teaching science (well) with an audience response system. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 96-115). Hershey, PA: Information Science Publishing.
Bergtrom, G. (2006). Clicker sets as learning objects. Interdisciplinary Journal of Knowledge and Learning Objects, 2. [viewed 3 Nov 2007] http://ijklo.org/Volume2/v2p105-110Bergtrom.pdf
Boyle, J. (2006). Eight years of asking questions. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 289-304). Hershey, PA: Information Science Publishing.
Bransford, J. D., Brown, A. L. & Cocking, R. R. (Eds.) (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. [verified 8 May 2009] http://www.nap.edu/openbook.php?record_id=6160
Brewer, C. A. (2004). Near real-time assessment of student learning and understanding in biology courses. BioScience, 54(11), 1034-1039.
Bullock, D. W., LaBella, V. P., Clinghan, T., Ding, Z., Stewart, G. & Thibado, P. M. (2002). Enhancing the student-instructor interaction frequency. The Physics Teacher, 40, 30-36.
Burnstein, R. A. & Lederman, L. M. (2001). Using wireless keypads in lecture classes. The Physics Teacher, 39(1), 8-11.
Burnstein, R. A. & Lederman, L. M. (2006). The use and evolution of an audience response system. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 40-52). Hershey, PA: Information Science Publishing.
Burton, K. (2006). The trial of an audience response system to facilitate problem-based learning in legal education. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 265-276). Hershey, PA: Information Science Publishing.
Caldwell, J. E. (2007). Clickers in the large classroom: Current research and best-practice tips. Life Sciences Education, 6(1), 9-20. http://www.lifescied.org/cgi/reprint/6/1/9.pdf
Carnaghan, C. & Webb, A. (2007). Investigating the effects of group response systems on student satisfaction, learning, and engagement in accounting education. Issues in Accounting Education, 22(3), 391-409.
Chemistry Concept Tests (2008). [viewed 28 Nov 2008] http://jchemed.chem.wisc.edu/JCEDLib/QBank/collection/ConcepTests/
Cornell Mathematics Database (2008). [viewed 28 Nov 2008] http://www.math.cornell.edu/~GoodQuestions/GQbysubject_pdfversion.pdf
Crouch, C. H. & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.
Cutts, Q. (2006). Practical lessons from four years of using an ARS in every lecture of a large class. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 65-79). Hershey, PA: Information Science Publishing.
D'Inverno, R., Davis, H. & White, S. (2003). Using a personal response system for promoting student interaction. Teaching Mathematics and Its Applications, 22(4), 163-169.
Draper, S. W. & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20(2), 81-94.
Dufresne, R. J. & Gerace, W. J. (2004). Assessing-to-learn: Formative assessment in physics instruction. The Physics Teacher, 42, 428-433.
Elliott, C. (2003). Using a personal response system in economics teaching. International Review of Economics Education, 1(1). [viewed 3 Nov 2007] http://www.economicsnetwork.ac.uk/iree/i1/elliott.htm
El-Rady, J. (2006). To click or not to click: That's the question. Innovate Journal of Online Education, 2(4). [viewed 3 Nov 2007] http://www.innovateonline.info/index.php?view=article&id=171
Fagan, A. P., Crouch, C. H. & Mazur, E. (2002). Peer Instruction: Results from a range of classrooms. The Physics Teacher, 40(4), 206-209.
Fies, C. & Marshall, J. (2006). Classroom response systems: A review of the literature. Journal of Science Education and Technology, 15(1), 101-109.
Freeman, M., Bell, A., Comerton-Forder, C., Pickering, J. & Blayney, P. (2007). Factors affecting educational innovation with in class electronic response systems. Australasian Journal of Educational Technology, 23(2), 149-170. http://www.ascilite.org.au/ajet/ajet23/freeman.html
Greer, L. & Heaney, P. J. (2004). Real-time analysis of student comprehension: An assessment of electronic student response technology in an introductory earth science course. Journal of Geoscience Education, 52(4), 345-351.
Hatch, J., Jensen, M. & Moore, R. (2005). Manna from heaven or clickers from hell. Journal of College Science Teaching, 34(7), 36-39.
Hinde, K. & Hunt, A. (2006). Using the personal response system to enhance student learning: Some evidence from teaching economics. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 140-154). Hershey, PA: Information Science Publishing.
Horowitz, H. M. (2006). ARS evolution: Reflections and recommendations. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 53-63). Hershey, PA: Information Science Publishing.
Jackson, M., Ganger, A. C., Bridge, P. D. & Ginsburg, K. (2005). Wireless handheld computers in the undergraduate medical curriculum. Medical Education Online, 10(5). [viewed 3 Nov 2007] http://www.med-ed-online.org/pdf/t0000062.pdf
Jones, C., Connolly, M., Gear, A. & Read, M. (2001). Group integrative learning with group process support technology. British Journal of Educational Technology, 32(5), 571-581.
Judson, E. & Sawada, D. (2002). Learning from past and present: Electronic response systems in college lecture halls. Journal of Computers in Mathematics and Science Teaching, 21(2), 167-181.
Kay, R. H. (2008a). Appendix A - Labels used to describe audience response systems. http://faculty.uoit.ca/kay/papers/arsrev/AppendixA_Labels.pdf
Kay, R. H. (2008b). Appendix B - Coding of research papers reviewed for ARS strategy paper. http://faculty.uoit.ca/kay/papers/arsrev/AppendixB_Coding.pdf
Kay, R. H. (2008c). Appendix C - List of studies reviewed for ARS strategy paper. http://faculty.uoit.ca/kay/papers/arsrev/AppendixC_ListOfPapers.pdf
Kennedy, G. E. & Cutts, Q. I. (2005). The association between students' use of electronic voting systems and their learning outcomes. Journal of Computer Assisted Learning, 21(4), 260-268.
Kennedy, G. E., Cutts, Q. & Draper, S. W. (2006). Evaluating electronic voting systems in lectures: Two innovative methods. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 155-174). Hershey, PA: Information Science Publishing.
Latessa, R. & Mouw, D. (2005). Use of audience response system to augment interactive learning. Family Medicine, 37(1), 12-14. [viewed 3 Nov 2007] http://www.stfm.org/fmhub/fm2005/January/Robyn12.pdf
McCabe, M. (2006). Live assessment by questioning in an interactive classroom. In D. A. Banks (Ed.), Audience response systems in higher education (pp. 276-288). Hershey, PA: Information Science Publishing.
Miller, R. L., Santana-Vega, E. & Terrell, M. S. (2006). Can good questions and peer discussion improve calculus instruction? PRIMUS, 16(3), 1-9. [verified 8 May 2009; preprint] http://www.math.cornell.edu/~maria/mathfest_eduction/preprint.pdf
Nicol, D. J. & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4), 457-473.
Paschal, C. B. (2002). Formative assessment in physiology teaching using a wireless classroom communication system. Advances in Physiology Education, 26(4), 299-308.
Poulis, J., Massen, C., Robens, E. & Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66(5), 439-441.
Pradhan, A., Sparano, D. & Ananth, C. V. (2005). The influence of an audience response system on knowledge retention: An application to resident education. American Journal of Obstetrics and Gynecology, 193(5), 1827-1830.
Preszler, R. W., Dawe, A., Shuster, C. B. & Shuster, M. (2007). Assessment of the effects of student response systems on student learning and attitudes over a broad range of biology courses. CBE-Life Sciences Education, 6(1), 29-41. http://www.lifescied.org/cgi/content/full/6/1/29
Reay, N. W., Bao, L., Li, P., Warnakulasooriya, R. & Baugh, G. (2005). Toward the effective use of voting machines in physics lectures. American Journal of Physics, 73(6), 554-558.
Richards, D., Braiding, K. & Vaughan, A. (2006). Fun and feedback at the press of a button. In Who's learning? Whose technology? Proceedings ascilite Sydney 2006. http://www.ascilite.org.au/conferences/sydney06/proceeding/pdf_papers/p151.pdf
Robertson, L. J. (2000). Twelve tips for using a computerised interactive audience response system. Medical Teacher, 22(3), 237-239.
Sharma, M. D., Khachan, J., Chan, B. & O'Byrne, J. (2005). An investigation of the effectiveness of electronic classroom communication systems in large lectures. Australasian Journal of Educational Technology, 21(2), 137-154. http://www.ascilite.org.au/ajet/ajet21/sharma.html
Siau, K., Sheng, H. & Nah, F. (2006). Use of classroom response systems to enhance classroom interactivity. IEEE Transactions on Education, 49(3), 398-403.
Simpson, V. & Oliver, M. (2007). Electronic voting systems for lectures then and now: A comparison of research and practice. Australasian Journal of Educational Technology, 23(2), 187-208. http://www.ascilite.org.au/ajet/ajet23/simpson.html
Slain, D., Abate, M., Hodges, B. M., Stamatakis, M. K. & Wolak, S. (2004). An interactive response system to promote active learning in the doctor of pharmacy curriculum. American Journal of Pharmaceutical Education, 68(5), 1-9.
Steinert, Y. & Snell, L. S. (1999). Interactive lecturing: Strategies for increasing participation in large group presentations. Medical Teacher, 21(1), 37-42.
Stuart, S. A. J., Brown, M. I. & Draper, S. W. (2004). Using an electronic voting system in logic lectures: One practitioner's application. Journal of Computer Assisted Learning, 20(2), 95-102.
Trees, A. R. & Jackson, M. H. (2007). The learning environment in clicker classrooms: Student processes of learning and involvement in large university-level courses using student response systems. Learning, Media and Technology, 32(1), 21-40.
Uhari, M., Renko, M. & Soini, H. (2003). Experiences of using an interactive audience response system in lectures. BMC Medical Education, 3(12), 1-6. http://www.biomedcentral.com/1472-6920/3/12
Van Dijk, L. A., Van Den Berg, G. C. & Van Keulen, H. (2001). Interactive lectures in engineering education. European Journal of Engineering Education, 26(1), 15-28.
Authors: Dr Robin H. Kay
University of Ontario Institute of Technology
Faculty of Education, 2000 Simcoe St. North
Oshawa, Ontario L1H 7L7, Canada
Email: Robin.Kay@uoit.ca Web: http://faculty.uoit.ca/kay/home/
Dr Ann LeSage, University of Ontario Institute of Technology, Faculty of Education
Please cite as: Kay, R. H. & LeSage, A. (2009). A strategic assessment of audience response systems used in higher education. Australasian Journal of Educational Technology, 25(2), 235-249. http://www.ascilite.org.au/ajet/ajet25/kay.html