Friday, April 17, 2015

Entry 55 IDT1415 - Assessment of and for learning - Balancing Assessment with Web 2.0 Tools

Hello everybody,

Here's my full contribution to this task on assessment of and for learning. After reading this week's articles and materials, I created a table which aims to assess these tools as suggested in the task. I designed the table (assessment instrument) around a generalised form of assessment criteria (Moon 2002:103), assigning a happy or unhappy face as the equivalent of a Yes/No whenever a tool met the criterion as defined by Hounsell et al. (2007) for each of the four strategies: feedforward assessments, cumulative coursework, better understood expectations and standards, and speedier feedback.
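For illustration only, here is a minimal sketch of the table's layout; the tools and faces below are placeholders rather than my actual ratings, which are in the table linked further down.

Tool (placeholder) | Feedforward assessments | Cumulative coursework | Better understood expectations and standards | Speedier feedback
Blogs | :-) | :-) | :-( | :-)
Wikis | :-( | :-) | :-) | :-(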



I must confess that I found Hounsell et al. (2007) and Carless (2007) the most interesting of the four resources, while I also found myself going back to last week's Knight (2001). The CambridgeTV video was not very informative - maybe because I watched it after doing all the readings :-) - and I found Huang's (2012) article a bit boring. Here's a link to the table.



References


Carless, D., 2007. Learning-oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), pp.57–66.


Hounsell, D., Xu, R. & Tai, C.M., 2007. Balancing assessment of and assessment for learning: Guide no. 2. Higher Education, (2), p.15.


Huang, J., 2012. The Implementation of Portfolio Assessment in Integrated English Course. English Language and Literature Studies, 2(4), pp.15–21.


Knight, P., 2001. A Briefing on Key Concepts – Formative and summative, criterion & norm-referenced assessment, pp.1–32.
 

Entry 54 IDT1415 - Assessment Types and Criteria. Am I doing it right?

While reading Biggs (2003), Moon (2002) and Knight (2001), I wondered about three things and thought I'd share them with you here. First, how free are we in our contexts with regard to assessment methods? Second, how do you assess your students in general, e.g. do you apply an established method or do you have your own systems which complement the established procedure set by your institution? Third, how viable is a shift from summative-reliant assessment to a more balanced, or, even better, a formative-reliant approach?

In my own experience, I have come to realise that my Learning Outcomes (LOs) to date are a mix of Assessment Criteria (AC) and LOs, as I tend to use a mix of tentative and definite language as well as a mix of low- and high-level verbs, with a stronger tendency towards the latter. This realisation means I will try to tidy them up from now on so that they are consistently in line with LO or AC definitions for the specific context, e.g. LOs for input sessions and AC when measuring achievement. I was pleasantly surprised to see that Biggs's (2003) four steps for constructive alignment are to a great extent already present in my current practice as a teacher trainer, as follows:

1. Define ILOs - Intended Learning Outcomes (These are an integral part of my design procedure for input sessions and are presented and briefly discussed with candidates at the beginning of each session.)

2. Choose/design activities which will lead to the ILOs (The materials for the input sessions are designed to adhere to the premises of Loop Input (Woodward 2003), a concept that struck me when I first came across it in 2000 and which has informed my approach ever since.)

3. Assess students' learning outcomes to see how closely they match what was intended (With specific reference to University of Cambridge Teaching Awards, e.g. CELTA, DELTA and the YL Extension to CELTA, this is more easily done as the LOs and Assessment Criteria are already drawn up. Because assessment on these courses is continuous and integrated, it then becomes a matter of matching examples of achievement of these LOs to the criteria in an ongoing triangulation exercise throughout the course, via reflection on aspects such as Teaching Practice (TP), self-reflection on TP and peer assessment (Knight 2001), written assignments and overall performance. On these courses there is an emphasis on 'thinking about learning, teaching and assessment' (Knight 2001:8) via the input sessions, and also through the formative assessment which takes place in discussions of theory (input sessions) and practice (TP), including peer and self-assessment. Peers observe one another in TP, candidates complete a written self-reflection immediately after TP, and in TP feedback the tutor moderates candidates' experiences, self-reflections and peer assessment contributions.)

4. Arrive at a final grade (Grade assessment criteria (Moon 2002:95) are then applied: candidates are allocated a grade (Fail, Pass, Pass B, Pass A) according to their continuous and integrated assessment performance, matching their Teaching Practice, Written Assessment and Overall Personal Performance against the criteria included in the CELTA Syllabus and, from 2014, against specific band descriptors which support the allocation of any given grade.)

I believe the above gives tutors a fair degree of freedom as to how the criteria are applied: the criteria are clearly set out and are further developed by examples for each criterion, as can be seen in the CELTA5 candidate booklet available on the internet. Please note the link leads to the 2007 version, so it is not up to date; however, the criteria and the examples for each criterion are still valid.

References

Biggs, J., 2003. Aligning teaching for constructing learning, pp.1–4.

Knight, P., 2001. A Briefing on Key Concepts – Formative and summative, criterion & norm-referenced assessment, pp.1–32.

Moon, J., 2002. Writing and using assessment criteria. The Module and Programme Development Handbook: A Practical Guide to Linking Levels, Outcomes and Assessment Criteria, pp.79–106.

Woodward, T., 2003. Loop input. ELT Journal, 57(3), pp.301–304. Available at: http://marvin.ibeu.org.br/ibeudigital/images/7/77/ELT_J-2003-Woodward-301-4.pdf.

Entry 53 IDT1415 - Thoughts on Collaboration

Task: Look for a case study in which some form of group work is part of a language course. Reflect on its design and how it was integrated with the rest of the course. Also consider whether technology played any role in the success (or not) of the activity. The point of this task is not to examine the benefits of collaboration for learning (we did that last semester) - we want to focus on its implications for course design.


Abstract (quoted from the article): This research tries to analyze the way student groups interacted and answered the proposed task in the different work groups. Beyond that, it was our objective to acknowledge how these same students evaluate the teacher’s performance in the seminar monitoring. The results of this study indicated different interaction and organization levels in the same task. Those differences had implications in the way of leading the task and in the final result. About the teacher, the students considered she had a good participation, providing the support asked, being the “facilitator” which was the more valued skill.

Description & Review of Article

The title of this study caught my attention because using forums is something we do here regularly, and something I also have to do on an online course I moderate, so I thought it was contextually relevant. Unfortunately, the poor quality of the written English in this article, and the fact that it was nonetheless published on ScienceDirect, came as an unpleasant surprise. Goulão's (2012) study lacks precision and, generally speaking, detail, thus failing, in my opinion, to give the reader a clear picture of a study that would otherwise have been very beneficial and informative.

The study sought to explore the interaction between the groups involved (six teams divided between two main themes) and how they carried out the task assigned. Unfortunately, neither the themes nor the task itself is ever defined, which makes it difficult for the reader to 'see' the whole picture. A second aim was to record the students' assessment of the teacher's monitoring during the seminar mentioned. However, yet again, there is only superficial information as to how this was done, with no acknowledgement of the potential for bias or for the 'halo' (Thorndike, 1920) and Hawthorne (Dornyei, 2007:53) effects in the participants' responses.

The project

The 11 participants were randomly assigned to six teams (Goulão 2012:673) to carry out a task that is not defined in the article. The second aim, assessing the teacher's monitoring, was addressed through a questionnaire given to the participants to complete. The period of the study is not defined either and can only be inferred to be confined to the duration of 'an eLearning Master's Degree seminar' (p.673).

Criticisms

The analysis of the participants' behaviour and self-organisation led to the identification of three models of interaction, which are interesting but, again, poorly and superficially described (p.674):

1. A participant takes a leading role and organises the work.

2. A participant again initiates the work but then takes a step back, and the group carries out the task.

3. There is no organisation of the work: although the group carries out the task in the end, no roles are either assigned or taken for its execution.

As regards the analysis of the participants' responses concerning the monitoring work carried out by the teacher, I would argue the results are contradictory, or at the very least incongruent with the information provided. For instance, it is reported that 77.8% of the respondents thought the 'teacher created and encouraged the learning environment', yet there is no evidence in the article to support this, nor, as mentioned earlier, any acknowledgement of the potential for bias.

Sullivan Palincsar & Herrenkohl's (2002) idea of creating a shared social context in which to engage in collaborative learning is missing, as is the provision of explicit guidelines (Galton 2010:4). While it is true that the aim of the project was to 'analyse the way student groups interacted and answered a proposed task', determining group membership (op. cit.) would have provided clarity for both the participants and the article's readers. As reported, it is not clear whether the groups were left to their own devices for the sake of the project, especially when looking at the results of the assessment of the teacher's performance (Goulão 2012:676), which point towards teacher involvement in the creation of a learning environment, the management of online discussion, the establishment of clear guidelines for learning, and so on. In addition, there is no indication of any attempt to create interdependence, to dedicate time to developing teamwork skills, or to build individual accountability (CarnegieMellon 2015), which I would argue could have been done implicitly, and to some extent as part of the guidelines, even if the aim of the project was to find out how student groups interacted when carrying out a task. In other words, more information about the type of group work and teamwork skills development these students had previously been exposed to, as well as their understanding of individual accountability, would have a bearing on the interpretation of the results offered.

As regards assessment, it is not clear whether the approach adopted was 'Product' or 'Process' oriented (Galton 2010:5), as Goulão reports both that all the groups accomplished the task and how they did it. However, the information on how they completed the task is used only to determine the models identified, rather than to examine the process or any learning taking place. Unfortunately, the information provided does not allow the reader to determine whether there was any level of intellectual engagement as described by Sullivan Palincsar & Herrenkohl (2002). Along the same lines, there is no reference to the criteria for assessing the tasks completed by the participants, no indication of who applied any such criteria, and no mention of alternative forms of assessment (Galton 2010:6-7).

Conclusion

Sullivan Palincsar & Herrenkohl's (2002) work on the design of collaborative learning contexts, Galton's (2010) article on assessing group work, and the best practices for designing group projects suggested by the CarnegieMellon Eberly Center Teaching Excellence & Educational Innovation (2015) site do not seem to have informed this study in any way.

On a more personal note, I believe that, in line with learning theory and how memory works, this poorly written article has helped me better understand the importance of the work mentioned here: it has provided (or rather forced on me) a good opportunity to analyse, evaluate and synthesise collaboration theory, making me use higher-order thinking skills.

References

CarnegieMellon Eberly Center - Teaching Excellence & Educational Innovation, 2015. [online]. Last accessed 2 April 2015 at: http://www.cmu.edu/teaching/designteach/design/instructionalstrategies/groupprojects/design.html

Dornyei, Z., 2007. Research Methods in Applied Linguistics. Oxford: Oxford University Press.

Galton, M., 2010. Assessing group work. International Encyclopedia of Education, pp.342–347.

Goulão, M.D.F., 2012. The Use of Forums and Collaborative Learning: A Study Case. Procedia - Social and Behavioral Sciences, 46, pp.672–677. Available at: http://dx.doi.org/10.1016/j.sbspro.2012.05.180.

Sullivan Palincsar, A. & Herrenkohl, L.R., 2002. Designing Collaborative Learning Contexts. Theory Into Practice, 41(1), pp.26–32.

Thorndike, E.L., 1920. The Constant Error in Psychological Ratings. Journal of Applied Psychology, 4, pp.25–29. Cited in: Cherry, K., 2015. What is the halo effect? [online]. Last accessed 2 April 2015 at: http://psychology.about.com/od/socialpsychology/f/halo-effect.htm

And my reflection on the questions posed...

Consider whether technology played any role in the success (or not) of the activity. 

The Goulão (2012) case study could not have been implemented without technology, as the participants had to use forums to complete the assigned task, and the forum exchanges constituted the basis for the observation of behaviours. In this sense it could be argued that the study was successful, since the participants completed the tasks, as reported in the article. Unfortunately, the amount of information provided does not allow the reader to form a clear picture of which platform was used, for how long, the type of forums, the type of task, or the guidelines, if any, given to the participants.

Reflect on its design and how it was integrated with the rest of the course.

As above, clarity as regards the design of the study is wanting, as very little detail is given. The project involved students completing a seminar that formed part of a module in an eLearning Master's Degree. It is known that there were 11 participants aged between 29 and 52, but there is no indication of their level of IT proficiency, their background or their course of studies, other than that 'they attended the Intercultural Social Psychology subject'. In addition, it is not clear how this study fits into the overall course of studies or timetable, as the contextual information given is very limited. Likewise, it is not clear whether the results of the study informed the researcher's current or future practice, course design or learning outcomes.

Focus on implications of collaboration for course design.

The implications of collaboration were at the heart of the paper, as the researcher's main aim was to 'analyse the way student groups interacted and completed the proposed task'. However, this case study seems to position itself at the beginning of an exploration of collaborative behaviours, aiming to understand and identify them rather than to ground course design in the implications of collaboration. Nonetheless, the introduction to the article would seem to indicate an attempt by the author to provide theoretical grounds for the study, an attempt which falls short as it offers a report on collaboration theory rather than an academic argument for the study.