Australasian Journal of Educational Technology
2011, 27(1), 66-88.
Using video annotation to reflect on and evaluate physical education pre-service teaching practice
This case study examined the integration of a media annotation tool (MAT) into the learning and assessment activities of an undergraduate teaching (physical education) course. The media form, or artefact, for annotation was video recordings of individual learners' teaching practice. The learners categorised (marked sections of) and annotated their videos and received peer and teacher feedback within the tool. Their use of MAT was analysed to determine whether this learning environment was effective for the main learning purpose of the case: to critically reflect upon and evaluate pre-service teaching practice. The research site was RMIT University, Melbourne, and data were collected from the pilot users of MAT, a third year class and their teacher/key academic, using pre- and post-surveys and interactive process interviews (combined sessions of direct observation and semi-structured interviews). The data indicated MAT was effective for the main learning purpose of the case, but also identified some areas for further consideration.
The first learner cohort to formally pilot MAT was a third year class in an undergraduate physical education teaching program, studying the subject 'Methods of teaching physical and sport education'. This subject lists among its learning outcomes the ability to critically reflect on and evaluate physical education teaching practice. Achieving reflective practice is recognised as significant across teacher training, including physical education (Chorney, 2006). The artefacts uploaded into MAT were each learner's video recording of individual teaching episodes recorded during their professional experience placements. Access to each other's video recordings within MAT was limited to peer groups of five to six learners, plus the class teacher. The learners analysed their teaching videos, and received peer and teacher feedback within MAT. This was expected to promote critical reflection and evaluation, in a collaborative learning approach.
Figure 1 provides an example (from a test site) of how a video timeline was marked up at various points of analysis with a range of coloured, categorised 'Markers'. The screen shows the first 'red' Marker under review, where red denotes an 'Introductory activity' marker type, or analysis category. It has been named 'test marker' and is anchored to the first 30 seconds of video. The only annotation attached to this Marker, in the 'Notes' panel, is the word 'testing'. The remaining annotation areas can be opened by clicking on the various panels.
Figure 1: A MAT test site (additional yellow labels support introductory text)
There is still a sense of frontier about online annotation tools in tertiary education. Zahn, Pea, Hesse and Rosen expect that researching learning affordances "of advanced video tools [e.g., for segmenting, editing and annotating, compared to general video playing tools on computer] will remain an exciting and challenging field in the learning sciences" (2010:434-35). This research adds to the limited body of knowledge related to the learning effectiveness of annotation tools. Specifically, it focuses on learning stimulated by a video-based artefact, supported in an electronic environment with a structured learning cycle of reflection, annotation, discussion and feedback, in an undergraduate teaching course to critique teaching.
To facilitate constructivist learning online, educational technologies need to enable active learning with some learner-control, and learners should work with meaningful materials and construct their own meaning from them, writes Ally (2004), drawing from multiple learning theorists. Ally adds that this should not remain a single person experience; learners need to interact with other learners and there should be guidance from the teacher, providing points of testing and confirmation of newly developed ideas. Learners must have socio-constructive interaction in the form of dialogue plus reflection, and new educational technologies should enable this (Garrison & Anderson, 2003; Laurillard, 2002; Lin, Hmelo, Kinzer & Secules, 1999). Fahy (2004) acknowledges "the contribution online media often make to constructivist teaching ... [by] expanding the range and variety of experiences usually available in classroom-based learning" enabling learning with real world situations (p.149). Such experiences can be provided for today's learner via an array of contemporary artefacts. Representations of real world situations in digital media, such as video, can form a basis for focused or converging discussion.
Discernible in teacher education literature are three key factors to promote critical reflection:
Feedback in video format is recommended in teacher education - indeed, it has provided teachers alternate access to classroom interplay in (USA) teacher education since the 1960s (Sherin & Han, 2004). It is recommended for benefits such as enabling visualisation, facilitating reflection and improving performance, as Table 1 supports.
Lin et al (1999) discuss the design of technology to support reflection, advocating in-built features that provide scaffolds to support reflective thinking, and "that technology, properly designed and used, enables us to realise reflective learning environments that were not previously possible" (p.44). Whipp (2003) found from a range of studies that "online discussions need to be carefully structured to support high levels of reflection" and her own research concluded that "[i]t may be helpful to offer students a particular framework for critical reflection" (p.331).
Table 1: Benefits of video in teacher education

| Enable visualisation | to allow "preservice teachers to see and hear their teaching behaviours" (King, 2008:22) |
| | to provide a view that the person teaching does not generally have because "self-observation is limited" (Fanselow, 1990:189) |
| Facilitate reflection | to enable repeat viewing to "conduct a rigorous self-assessment, and reflect profoundly upon that assessment" and provide "an effective stimulus to successfully drive preservice teachers to deeper levels of reflection" (King, 2008:28) |
| | to base reflection on "accurate video recorded data from teaching practice activities" (Kong, Shroff & Hung, 2009:547) |
| Improve performance | to facilitate "honest and open self-critique ... [to potentially] yield very important and revealing evidence that teachers can use to improve their craft" (Chorney, 2006:25) |
| | "to self-evaluate their teaching from filmed lessons, and to modify their teaching behaviours to be more mastery involving" (Morgan & Kingston, 2007:127) |
Electronic annotation of video has been in limited use in education (Butler, Zapart & Li, 2006), despite recognition that video is rendered "more effective as learning resources when segmented and integrated with annotations from other media types", thereby enabling 'active consuming' and knowledge construction (Jayawardana, Hewagamage & Hirakawa, 2001). However, there are recent examples of video annotation emerging across the teacher education sector, particularly for peer review of teaching. Rich and Hannafin (2009) report on several that are currently available for use, including Video Traces, which emerged in interactive museum activities (see Stevens & Hall, 1997) before use in teaching analysis. Other examples include a 'video case library prototype' for in-service teacher review (So, Lossman, Lim & Jacobson, 2009), and FD Commons Annotator that facilitates video and electronic hand-written feedback for academics (Tsukahara et al., 2009).
A limitation in this research was the small number of possible participants (MAT users). However, case study research is not based on population sampling; rather, the case(s) and their units of analysis are carefully chosen. Selection of the MAT participant group was purposive, where "cases are hand-picked for a specific reason such as use of a new product" (Lewin, 2005:219), and where "the processes being studied are most likely to occur" (Denzin & Lincoln, 2000:370, cited in Silverman, 2005:129). This class provided a significant opportunity to capture data from inaugural or pilot users of a new learning tool, enabling a baseline study to be conducted within a specific learning context. A further limitation was the choice to use volunteers from within the participant cohort as key informants for the more intensive data collection processes of observations and interviews. This 'non-probability sampling' (Lewin, 2005) from within the purposive case relied on those willing and able to commit to a half-hour, mutually arranged appointment, and therefore held no guarantee that they were representative of the wider case.
The post-survey (part 2), administered at semester end, included questions related to the learners' use of MAT, and their perspectives of how effective MAT was in their learning. Questioning included whether MAT helped them to learn, collaborate, and reflect on and evaluate their teaching practice. Like the pre-survey, each of the five Likert-styled question sets provided a free form space for additional comments, but the post-survey also culminated with several open ended questions asking, for example, what about MAT was most/least helpful to their learning and advice for others.
In each IPI, the observation component featured first. The participant was instructed to use MAT as they normally would and to 'think aloud' as they performed their activities. The interview phase directly followed, providing an opportunity for the participant to discuss MAT from their own perspective, prompted by open-ended questions where needed for topic coverage. The IPIs were audio-recorded with participant permission. The activities under observation included methods of MAT use and strategies to achieve their learning aims. While the participants were asked to think aloud to capture audio data at the point of use, there is some noted difficulty in effectively completing an activity and simultaneously reporting on it accurately (Stratman & Hamp-Lyons, 1994, in Matsuta, n.d.). Think-aloud protocol in various adaptations has been used in other educational technology research despite such recognised limitations (e.g., Kozma & Russell, 1997).
The aim of the interview phase was to harness each participant's perception via reflections on:
While almost all respondents reported satisfaction with their previous methods of reflecting on and evaluating their teaching practice (Figure 2), responses to other questions suggest they were generally ready to investigate the use of a tool like MAT in their learning.
Respondents were generally positive towards viewing and examining teaching performance via video (both their own teaching and their peers'), and towards discussing theory as related to their own and their peers' teaching practice. As illustrated in Figure 3, the students wanted to give and receive feedback on teaching, although they were marginally more sensitive about receiving feedback from their peers than giving it.
Figure 2: Perspective on previous methods of analysing teaching practice
Figure 3: Study preferences: receiving and giving feedback
The students' anticipation of a new tool to facilitate reflection and evaluation of teaching practice was also largely positive (see Figure 4).
Figure 4: Anticipated effectiveness of MAT
Note: The features of MAT are referred to regularly in this sub-section. Figure 1 (see introduction) provides a screen capture of MAT, which helps illustrate these features.

By analysing both the observation record sheets of the students' use of MAT and transcripts of the IPIs (then coding and re-coding the data), typical methods of use emerged. Overall, the learners took a generally similar approach to critically reflecting on and evaluating their teaching practice; divergence was evident in only a few parts of the process. Influencing factors included both the framework of MAT and guidance by the teacher, but how others (peers and teacher) used this learning environment also affected the learners' approach (see Colasante, 2010). Although some used MAT slightly differently, the common approach is collated and presented in overview in Figure 5, then further explained.
The activity overview is presented in Figure 5 as an artefact-centred model, given that student work in MAT revolved around their videos - both their own and their peers' - in their MAT small groups. Each of the steps in their process is explained further, as follows.
A: Access video in MAT
Once each learner's teaching practice video (pre-test) was uploaded into MAT, they accessed it via the MAT website, in their own 'Group' area.
B: Annotate own video
Learners watched their own video of teaching practice, then marked it with the pre-titled (and colour coded) 'Marker' types provided. The titles, or categories of analysis, were pre-determined by the teacher as key 'beginner teaching factors': 'Introductory activity', 'Demonstrations', 'Checking for understanding', 'Transition', 'General feedback', 'Specific feedback', 'ALT-PE' (academic learning time) and 'Teacher position'. A Marker was created at each learner-selected point of the video and dragged along the timeline to set the length of the segment captured. Learners then typed in a sub-title/tag name of their choice for each (e.g., the name of a specific PE activity, or any other sub-title, such as 'Test Marker' in Figure 1). Some created more than 20 Markers.
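The Marker structure described above can be sketched as a simple data model. This is an illustrative sketch only, with hypothetical field and class names; it is not MAT's actual implementation:

```python
from dataclasses import dataclass, field

# Teacher-defined analysis categories (the 'beginner teaching factors');
# each is colour coded in MAT (e.g. red for 'Introductory activity').
CATEGORIES = [
    "Introductory activity", "Demonstrations", "Checking for understanding",
    "Transition", "General feedback", "Specific feedback",
    "ALT-PE", "Teacher position",
]

@dataclass
class Marker:
    """One marked segment on a video timeline (hypothetical model)."""
    category: str          # one of CATEGORIES
    tag: str               # learner-chosen sub-title, e.g. "Test Marker"
    start: float           # seconds into the video
    duration: float        # length captured by dragging along the timeline
    notes: str = ""        # learner's own annotation ('Notes' panel)
    comments: list = field(default_factory=list)  # peer feedback ('Comments')
    conclusion: str = ""   # learner's closing annotation ('Conclusion')

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

# e.g. the 'test marker' of Figure 1: first 30 seconds, red category
m = Marker("Introductory activity", "Test Marker", 0.0, 30.0, notes="testing")
```

In MAT itself each Marker carries the colour of its category and anchors the Notes, Comments and Conclusion panels to that segment of the timeline.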
The first annotation panel anchored to each Marker was 'Notes'. Here the learners typically entered detail based on what was happening in that video segment. For example,
[I wrote in Notes] how that benefits my teaching in the hope that someone else would comment back to me (Renee).
Figure 5: Learning cycle of activities within MAT
C: Comment on peer's video
The learners waited one to three weeks for the peers in their MAT group to analyse their videos. They then watched a marked segment of a peer's video (or watched the video in full), then read the anchored Notes (or vice versa), and decided whether to provide feedback to their peer in the second annotation panel, 'Comments'. They repeated this for other marked segments of the video, then for other peers in their group.
D: Write conclusions
Eventually, the learners returned to their own video of teaching practice and looked for feedback from group members in the 'Comments' panels. The number of comments received ranged from none to three per Marker. An example of three responses received for one issue under analysis is:
[My peers] gave me some feedback on how they thought I could improve the game. So, three people commented on this, they said it's a "great idea; the activity lets the student practice the skills they've learnt" and suggested another rule I could put into the game. Another person said "they were on task and enjoying the activity" which was really good to hear. And, the last person said "the students were active" and "they looked like they were having fun" and they suggested another way I could do the game by dividing them up into teams or giving them more space (Donna).

Once the Notes and Comments panels were closed in MAT, the learners could enter conclusions for each of their marked-up video segments. Not all wrote a conclusion for each Marker, but when doing so, they typically used the 'Conclusion' panel to do one of three things: summarise; respond to feedback; or link forward to how they would apply this learning in future teaching. Some students explicitly enjoyed reading conclusions by others, for example:
I'm just reading one of ... [a peer's] conclusions. He's talking about using worksheets, which is a good idea, helps reinforce what the students have learnt, and you can check it afterwards and you can work out what they do and do not know and if there's a pattern. So, this one's really good! He's done quite a lot of Conclusions so that gives everybody something to read, gives him something to read too, when he's trying to improve his skills ...; he can learn that I used to make this mistake and now I don't (Brett).

E: Review; read teacher's feedback
As the 'Final Reflection' panel was not yet available to the learners, they were shown this feature via a test site. They responded that their use of it would depend on other factors, such as the importance of the teaching issue under focus and/or the specific feedback they received from their teacher. For example,
if they're [the teacher is] just saying 'agree with all the above', I don't think there's a reason for you to say 'ok', that you agree as well. But if they're giving more constructive feedback other than what the people in my group did then, yeah, I think it would be a good thing to respond to (Desi).

F: Record post-test video
A resourceful use of MAT by three observed learners was the visual use of Markers across the timeline, representing a summary or graph of the given lesson. For example,
you can actually see how long you spend on a certain part of your lesson ... and that indicates to me what I've written on my lesson plans that I've planned weeks before, I'm sticking to my time management (April).
[regarding] 'General feedback', I gave some at the start of the lesson and I can see I gave a fair bit at the end, and then in the middle it was very sparse and ... I need to concentrate on giving feedback of any kind, all the way through the lesson (Brett).
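The 'lesson graph' reading of the timeline described above amounts to tallying the time covered by Markers of each category. A minimal sketch, using invented example data rather than any learner's actual markers:

```python
# Each tuple is (category, start in s, duration in s) for one Marker.
# These values are hypothetical examples, not recorded data.
markers = [
    ("General feedback", 0, 20),
    ("Demonstrations", 60, 90),
    ("General feedback", 700, 40),
]

def time_per_category(markers):
    """Sum the time covered by Markers of each analysis category."""
    totals = {}
    for category, _start, duration in markers:
        totals[category] = totals.get(category, 0) + duration
    return totals

print(time_per_category(markers))
# {'General feedback': 60, 'Demonstrations': 90}
```

A sparse middle stretch for a category such as 'General feedback' then becomes visible at a glance, much as Brett observes above; MAT itself presents this visually as coloured Markers along the timeline.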
Figure 6: Most valued MAT features, ranked first and second
Another open-ended survey question asked what was most helpful to their learning with MAT (Figure 7). Note that the most frequent grouping of responses, viewing teaching, comprised a mix of their own, collective ('our'), and unspecified teaching. Multi-themed responses from individuals were each counted, resulting in more than 23 responses.
Similarly, during interviews four out of seven learners nominated video of their own teaching as the most valued aspect of MAT. Some caution also emerged; for example, that uniformly positive feedback from using MAT may not necessarily provide a true picture of one's teaching, in that:
maybe it was just that I had a good lesson when it was filmed, but ... watching my video and looking at the Comments, there's nothing for me to really work on, from this lesson ... [and] Whereas if I was being videoed more often or in a different school or different classes, then I'd be able to pick up a lot more for me to work on (Desi).

Another learner offered the conundrum of watching herself teach as both the best and worst thing about MAT, expressing some of the vulnerability she felt in watching herself, then adding:
[deep intake of breath] Not that I like looking at myself, but I liked being able to assess myself, I liked being able to pick up on where I could improve, or what I was doing OK, because I find that hard to see sometimes ...; to disassociate when I'm teaching and see myself teach, so this was a good way of doing that. And it was private, I didn't have to show - oh well it wasn't - we had to work in groups, but it could be private, it could just be between you and the lecturer and not out there for everyone to see (Nicole).
Figure 7: Most helpful factor, by open question
While peer feedback was appreciated (Figure 6), the quality of peer feedback was also raised as a barrier issue by all but one interviewee. For example:
a lot of them [peer Comments] weren't very constructive, they were just agreeing with what I'd said rather than giving me help (Brittany).

Additionally, when asked to offer advice for future students using MAT, almost all students interviewed raised improving the quality of peer feedback. For example:
just try to be as specific as you can in your feedback to other group members and really be open-minded about other people's feedback that they're giving you ... 'cause it will make you a better teacher in the future if you do (April).

While this was a strong theme across the interviewed sample, only a minority of the wider post-survey cohort (four out of 23) raised quality of comments against the open question on what about MAT was least helpful to their learning; other responses were mainly technical in nature (an issue also raised across the interviewee sample) or were left unanswered.
Post-survey responses revealed that most learners perceived MAT to be an excellent tool for viewing and annotating media; however, a minority were unsatisfied using MAT (Figure 8).
Figure 8: Overall perceptions
Figure 9: Learning enabled by MAT
A majority agreed MAT allowed them to be challenged in an interesting way, and none disagreed that MAT allowed them to view the role modelling of others as they worked through issues, or to construct meaning from the learning experiences (Figure 9).
Most learners surveyed considered the amount of interaction with their peers in MAT appropriate, while approximately one third agreed there was an appropriate amount of interaction with their teacher (Figure 10), possibly reflecting the late availability of the teacher feedback panel due to technical start-up issues (see 'E' in the learners' use of MAT section).
Figure 10: Amount of interaction
Figure 11: Effectiveness for the key learning outcome
The major intended learning outcome for activities involving MAT was to critically reflect on and evaluate teaching practice; Figure 11 illustrates an almost unanimous positive response to this from the learners surveyed.
Further confirmation of effectiveness for the intended learning outcome was received during interviews, where all seven students interviewed agreed MAT helped them to reflect on and evaluate their teaching practice, supporting this with comments such as:
it's perfect for it [to critically reflect and evaluate teaching] ... particularly because the lecturers can see it as well, so that means that you need to be more honest (Nicole).
I was able to learn what my strengths were and what my weaknesses were ... I found it really helpful (Brett).
better really than anything we've really done; we've had to do a similar assignment, but we didn't have MAT, and it was hard to look at your teaching through a video without having any Markers ... whereas now I've got direction for what we are looking for (Renee).
I enjoyed it because when you finish teaching and the supervisor says "oh you did this and that" you sort of think "did I really?" ... when you can actually watch it and laugh at how bad some of the stuff is that you are doing it really sticks in your mind 'cause you have a visual of watching yourself walk through the middle of an activity and you know not to do it again (Brittany).
it was really good, because, in other methods you forget ... But in this you can see, you can actually go back ... [to] certain parts in the lesson and you might notice things you didn't notice when you were teaching ... critically analysing what you've done and when other people analyse it too, that's even better because they might pick up other things as well, so instead of just you analysing it, you've got five other people helping you out as well (Donna).
I think they would find it helps their reflection (Carl).

Carl further qualified this with comments such as:
the visual stuff's very powerful. ... a lot of them have come up to me and said 'didn't realise I did that; I was amazed' [and]
most of them thought they were teaching a lot better than what they actually saw themselves as (Carl).
I'm not sure they see the total benefit at the moment, because of the way it's gone [start-up technological issues], ... I think the future groups certainly will ... [benefit] more (Carl).

Carl recommended MAT for other academics to use, and nominated the Markers as the most helpful feature of MAT, which aided learning focus and teacher monitoring.
A largely common method of MAT use emerged from the observation and interview data. The learners accessed their own video of recorded teaching practice - within their peer group area in MAT - to commence reflection and evaluation. They viewed their own video, selecting and marking sections that they categorised with the pre-determined teacher categories (and further titled with their own tag names). They wrote about the teaching represented in each video segment in the 'Notes' annotation panel. Later they viewed their group members' teaching videos, read their linked annotations, and decided whether to provide peer feedback via the 'Comments' annotation panels. Eventually returning to their own video, they read feedback received from peers. They further annotated some or all of their video segments by writing replies, summaries, or theory-to-practice type annotations in the 'Conclusion' panel. From here they sought feedback from their teacher via the 'Teacher feedback' annotation panel. They reviewed entire analysis cycles in their own and their peers' videos. They did not enter 'Final reflections', as this area was not yet available. The learners anticipated following a similar process for their second recorded teaching episode.
Responses regarding interaction in MAT were largely positive, although there was stronger satisfaction with the amount of peer feedback compared to quality, and the teacher feedback panel requires further trialling. There was support for MAT in learning via enabling construction of meaning from learning experiences and viewing role modelling of others. The learners overwhelmingly agreed that MAT was effective in facilitating reflection on and evaluation of their teaching practice; therefore, MAT seems to have been an appropriate intervention to support this intended learning outcome. In the context of the case, the teacher was largely satisfied with the effectiveness of MAT.
The two things learners most valued about MAT were feedback from others (despite issues raised regarding quality) and viewing their own teaching. However, emergent issues relating to both use of media and collaboration in MAT involve learning and teaching issues worthy of further consideration.
The importance of recording across multiple contexts was made explicit in one interview. Uneasy about her analysis, the interviewee recommended multiple recordings, as no weaknesses were exposed in her first (pre-test) teaching episode. Note that recorded sessions may nonetheless not reveal all contextual issues, e.g., "environmental factors that are not captured, or generally elements that are difficult to reassess" or re-analyse later without the full context of 'being there' (Butler et al., 2006:21). Multiple sampling can provide a range of contexts, even if not every contextual factor is captured. In this pilot case, the teacher originally intended the learners to each record and analyse three teaching episodes; this was reduced to two due to technical delays.
The need for personal versus shared annotations in MAT should be determined per learning activity, by weighing the benefits of others reading and collaborating against the inhibitors. Inhibitions to collaborative annotation could stem from a "fear of saying wrong things thereby showing their misunderstanding of the issue at hand" (Lapique & Regev, 1998:3), or "feelings of vulnerability which follow from exposing one's beliefs to others, with a tendency for self blame for any perceived weaknesses uncovered through reflection" (Wildman & Niles, 1987, in Hatton & Smith, 1994:13). However, Marshall and Brush (2002) found that people apply more effort to annotations intended for sharing, compared to more spontaneous personal annotations. Promoting a 'safe' environment for peers to contribute in MAT is important; as with video recorded practice, vulnerability needs to be reduced as a barrier to communication by promoting non-judgmental peer discussion groups (Thomas, 2003).
The case under examination used MAT for learner analysis of video recorded teaching practice. The tool provided a structured learning cycle that explicitly enabled annotations by learner, peers and teacher to promote interaction. It promoted active learning with meaningful materials to construct meaning from them. While learner perception of MAT was largely positive in this study, there was minority dissatisfaction in some areas.
The study illustrated that integrating MAT was an effective intervention for the case, that is, for the main purpose of reflection on and evaluation of teaching practice. There was particular appreciation for the ability to view one's own teaching and receive feedback from others. The learners appreciated being able to analyse their videos of teaching practice: to categorise the video and anchor annotations to segments of it in cycles of notation and feedback. Issues were raised of vulnerability in viewing and analysing performance, and regarding the quality of collaborative input from peers. The data illustrate that the framework for collaborative annotations in MAT provided the opportunity for socio-constructivist learning in peer to peer networks, but more so where learners made a concerted effort to contribute constructively.
Baker, M. & Lund, K. (1996). Flexibly structuring the interaction in a CSCL environment. Paper presented at the EuroAIED Conference, Lisbon. [verified 20 Feb 2011] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108.5419&rep=rep1&type=pdf
Butler, M., Zapart, T. & Li, R. (2006). Video annotation - improving assessment of transient educational events. Paper presented at the 2006 Informing Science and IT Education Joint Conference, Salford, UK, 25-28 June. [verified 20 Feb 2011] http://informingscience.org/proceedings/InSITE2006/ProcButl168.pdf
Chorney, D. (2006). Teacher development and the role of reflection. Physical & Health Education Journal, 72(3), 22-25.
Colasante, M. (2010). Future-focused learning via online anchored discussion, connecting learners with digital artefacts, other learners, and teachers. In Curriculum, technology & transformation for an unknown future. Proceedings ascilite Sydney 2010. http://www.ascilite.org.au/conferences/sydney10/Ascilite%20conference%20proceedings%202010/Colasante-full.pdf
Colasante, M. & Fenn, J. (2009). 'MAT': A new media annotation tool with an interactive learning cycle for application in tertiary education. Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-MEDIA) 2009, Honolulu, 22-26 June.
Dawley, L. (2007). The tools for successful online teaching. [viewed 8 April 2008; available from several sources] http://ebookee.org/The-Tools-for-Successful-Online-Teaching_324424.html
Dwyer, N. & Suthers, D. (2006). Consistent practices in artifact-mediated collaboration. Computer-Supported Collaborative Learning, 1, 481-511. [verified 20 Feb 2011; open access] http://www.springerlink.com/content/20885327pk7x1141/fulltext.pdf
Fahy, P. (2004). Media characteristics and online learning technology. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 137-171). Canada: Athabasca University. http://cde.athabascau.ca/online_book/
Fanselow, J. (1990). "Let's see": Contrasting conversations about teaching. In J. Richards & D. Nunan (Eds.), Second language teacher education (pp. 182-). New York: Cambridge University Press.
Fowler, F. (2009). Survey research methods (4th ed.). Thousand Oaks, California: SAGE Publications, Inc.
Garrison, D. R. & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. Milton Park, Oxon: Routledge.
Hatton, N. & Smith, D. (1994). Facilitating reflection: Issues and research. Paper presented at the 24th Conference of the Australian Teacher Education Association, Brisbane, 3-6 July. http://www.eric.ed.gov:80/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED375110
Jayawardana, C., Hewagamage, K. & Hirakawa, M. (2001). Personalization tools for active learning in digital libraries. The Journal of Academic Media Librarianship, 8(1). [verified 20 Feb 2011] http://wings.buffalo.edu/publications/mcjrnl/v8n1/active.pdf
Jonassen, D., Howland, J., Moore, J. & Marra, R. (2003). Learning to solve problems with technology: A constructivist perspective (2nd ed.). Upper Saddle River, New Jersey: Pearson Education, Inc.
Kienle, A. (2006). Integration of knowledge management and collaborative learning by technical supported communication processes. Education and Information Technologies, 11(2), 161-185.
King, S. (2008). Inspiring critical reflection in preservice teachers. Physical Educator, 65(1), 21-29.
Kong, S. C., Shroff, R. H. & Hung, H. K. (2009). A web enabled video system for self reflection by student teachers using a guiding framework. Australasian Journal of Educational Technology, 25(4), 544-558. http://www.ascilite.org.au/ajet/ajet25/kong.html
Kozma, R. & Russell, J. (1997). Multimedia and understanding: Expert and novice responses to different representations of chemical phenomena. Journal of Research in Science Teaching, 34(9), 949-968.
Lapique, F. & Regev, G. (1998). An experiment using document annotations in education. Paper presented at the WebNet 98 World Conference of the WWW, Internet and Intranet Proceedings. http://www.eric.ed.gov:80/PDFS/ED427713.pdf
Laurillard, D. (2002). Rethinking university teaching: A framework for the effective use of learning technologies (2nd ed.). London: RoutledgeFalmer.
Lewin, C. (2005). Elementary quantitative methods. In B. Somekh & C. Lewin (Eds.), Research methods in the social sciences. London: SAGE Publications.
Lin, X., Hmelo, C., Kinzer, C. & Secules, T. (1999). Designing technology to support reflection. Educational Technology Research & Development, 47(3), 43-62.
Marshall, C. & Brush, A. (2002). From personal to shared annotations. Paper presented at the Conference on Human Factors in Computing Systems, Minneapolis, Minnesota, USA.
Matsuta, K. (n.d.). Think-aloud protocols: A means of observing cognitive processes of language learners, pp. 67-74. [viewed 15 May 2009, verified 21 Feb 2011] http://www.lib.yamagata-u.ac.jp/you-campus/koeki/kiyou-koeki/3/3-pA67-74.pdf
Morgan, K. & Kingston, K. (2007). Development of a self-observation mastery intervention programme for teacher education. Physical Education & Sport Pedagogy, 13(1), 109-129.
Orland-Barak, L. (2005). Portfolios as evidence of reflective practice: What remains 'untold'. Educational Research, 47(1), 25-44.
Rich, P. J. & Hannafin, M. (2009). Video annotation tools: Technologies to scaffold, structure, and transform teacher reflection. Journal of Teacher Education, 60(1), 52-67.
Richards, L. (2005). Handling qualitative data: A practical guide. London: SAGE Publications Ltd.
Rodriguez, Y., Sjostrom, B. & Alvarez, I. (1998). Critical reflective teaching: A constructivist approach to professional development in student teaching. Paper presented at the Annual Meeting of the American Association of Colleges for Teacher Education, New Orleans, Louisiana, 25-28 February. http://www.eric.ed.gov:80/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED418054
Sherin, M. G. & Han, S. Y. (2004). Teacher learning in the context of a video club. Teaching and Teacher Education, 20(2), 163-183.
Silverman, D. (2005). Doing qualitative research (2nd ed.). London: SAGE Publications.
Shih, R.-C. (2010). Blended learning using video-based blogs: Public speaking for English as a second language students. Australasian Journal of Educational Technology, 26(6), 883-897. http://www.ascilite.org.au/ajet/ajet26/shih.html
So, H.-J., Lossman, H., Lim, W.-Y. & Jacobson, M. (2009). Designing online video based platform for teacher learning in Singapore. Australasian Journal of Educational Technology, 25(3), 440-457. http://www.ascilite.org.au/ajet/ajet25/so-2.html
So, W. W., Hung, V. H. & Yip, W. Y. (2008). The digital video database: A virtual learning community for teacher education. Australasian Journal of Educational Technology, 24(1), 73-90. http://www.ascilite.org.au/ajet/ajet24/so.html
Stevens, R. & Hall, R. (1997). Seeing tornado: How video traces mediate visitor understandings of (natural?) phenomena in a science museum. Science Education, 81(6), 735-748.
Suthers, D., Vatrapu, R., Medina, R. & Dwyer, S. (2008). Beyond threaded discussion: Representational guidance in asynchronous collaborative learning environments. Computers & Education, 50(4), 1103-1127.
Thomas, H. (2003). Transcripts for reflective teaching. Paper presented at the 16th English Australia Education Conference, Melbourne, 2-4 October. http://www.englishaustralia.com.au/ea_conference03/proceedings/pdf/035F_Thomas.pdf
Tsukahara, W., Hori, S., Kato, Y., Egi, H., Terada, T. & Nakagawa, M. (2009). Comparison of peer reviewing between paper-and-pen based review sheet and an annotation support system "FD Commons Annotator". Paper presented at the World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-MEDIA) 2009, Honolulu, 22-26 June.
van der Pol, J., Admiraal, W. & Simons, P. (2010). Peer evaluation in online anchored discussion for an increased local relevance of replies. Computers in Human Behavior, 26(3), 288-295.
Victorian Institute of Teaching (2005-2011). Professional standards. [viewed Oct 2009, Jan 2011] http://www.vit.vic.edu.au/standardsandlearning/Pages/professional-standards.aspx
Whipp, J. (2003). Scaffolding critical reflection in online discussions: Helping prospective teachers think deeply about field experiences in urban schools. Journal of Teacher Education, 54(4), 321-333.
Wood, C. (2000). Methodology for field work: Interactive interviews. The IAI/UM Summer Institute on Interdisciplinary Science in the Americas. [viewed 16 Apr 2008, verified 21 Feb 2011] http://www.rsmas.miami.edu/IAI/Inst2000/lectures/wood_jul20/reading/qual_appr_1.pdf
Yin, R. (2003). Case study research: Design and methods (3rd ed.). California: SAGE Publications.
Yost, D., Sentner, S. & Forlenza-Bailey, A. (2000). An examination of the construct of critical reflection: Implications for teacher education programming in the 21st century. Journal of Teacher Education, 51(1), 39-49.
Zahn, C., Pea, R., Hesse, F. W. & Rosen, J. (2010). Comparing simple and advanced video tools as supports for complex collaborative design processes. The Journal of the Learning Sciences, 19(3), 403-440.
Author: Meg Colasante
Academic Development Group
College of Science, Engineering and Health, RMIT University
GPO Box 2476, Melbourne 3001, Australia
Email: email@example.com Web: http://www.rmit.edu.au/
Please cite as: Colasante, M. (2011). Using video annotation to reflect on and evaluate physical education pre-service teaching practice. Australasian Journal of Educational Technology, 27(1), 66-88. http://www.ascilite.org.au/ajet/ajet27/colasante.html