The impact of formative assessment on student learning
[2006] 3 Web JCLI
Alison Bone, Head of Law, Brighton University
Copyright © Alison Bone 2006
First published in Web Journal of Current Legal Issues.
This paper examines the use of formative assessment in UK law schools, using the law of contract as a typical undergraduate law subject to illustrate current practice. The UK Centre for Legal Education (UKCLE) provided research funding for the project, and a full report exploring some of the literature on assessment in more depth will be available on the UKCLE website (http://www.ukcle.ac.uk/) in 2006. Students and lecturers at a number of old and new universities were interviewed to establish whether any formal formative assessment was set, what form it took and how valuable it was perceived to be in helping students to learn. There were too many variables to conclude that undertaking formative assessment contributed to an improved pass rate in the final summative assessment, but students indicated that they appreciated the opportunity to be given feedback at an early stage on work that did not count towards their final grade.
Assessment of students is a time-consuming exercise, and with student numbers increasing, anecdotal evidence suggests that there has been a corresponding decrease in formative assessment. Formative assessment usually does not count towards a student’s overall grade and is intended primarily as a learning experience (East 2005). There are of course many ways of giving students feedback other than by ‘assessing’ them through a set piece of work: a good tutorial, for example, especially if the numbers are small, can be a powerful forum for exploring the understanding of the participants. As will be seen, however, the growth in student numbers has reduced the opportunities for providing such relatively informal feedback. This article sets out the findings of a piece of research which explored the use of formal formative assessment across a sample of UK law schools.
There has been a great deal of practical work done on the variety of ways in which it is possible to give students feedback. The Student Enhanced Learning through Effective Feedback (SENLEF) project (http://www.heacademy.ac.uk/senlef.htm) has developed a conceptual model of feedback and identified (to date) seven broad principles of good feedback practice.
More detail on the model and the seven principles can be found in a recent article by Nicol and Macfarlane-Dick (2006).
The SENLEF project gathered a number of case studies from across Scottish Higher Education Institutions (HEIs) which illustrate effective and/or innovative feedback practices. A wide range of examples is provided on the website, including the use of personal response systems, best described as a form of ‘ask the audience’ as used in “Who wants to be a millionaire?”. There are also many other ‘in-class’ techniques, as well as electronic virtual learning environments that make use of online peer and self-assessment software, feedback pro-formas and new takes on traditional ‘tutor dialogue’.
Some of the case studies develop ideas that may well already be used by law tutors, e.g. criteria-related marking grids. They all contain sections on perceived benefits for students and staff, as well as issues and challenges - again for both staff and students - and provide valuable hints on adapting the ideas. The case studies can be sorted by discipline, but a search for ‘law’ produces no case studies submitted by law tutors, although it is recognised that many of the techniques used in other disciplines are transferable.
Assessment practices vary according to the structure and length of the course. Many qualifying law degrees, after experimenting with semester-based courses, have reverted to the ‘year-through’ model, and others (mainly ‘old’ university courses) have never deviated from that format.
The University of London External LLB, which has international standing, still operates by means of summative assessment alone: a traditional closed-book examination in May/June each year carrying a weighting of 100%. Modern law courses recognise that, to test the wide range of learning outcomes (including, for example, the development of legal research skills), other forms of assessment - primarily coursework - need to be set in addition to examinations, and this coursework is viewed by many as the main - and sometimes only - opportunity to give students formal feedback on their progress.
Until very recently there had been surprisingly little research on assessment in law schools, either in the UK or internationally (see e.g. Bermingham and Hodgson 2005), although many of the findings of authors in other fields are relevant and are referred to here.
The student new to law faces a dilemma. The subject is not commonly studied at ‘A’ level prior to commencing undergraduate study, and many find that it requires learning a whole new language, with new concepts and constructs - a predicament neatly captured by R D Laing in Knots (1970).
Coursework ordinarily contributes towards the final mark awarded to a student on completion of their study of a particular topic, i.e. it is summative as well as formative, and there is evidence to suggest that students are primarily interested in their mark rather than in any constructive comments. Feedback should promote learning and facilitate improvement (Quality Assurance Agency (QAA) 2001), but with time at a premium some tutors find it difficult to give more than a few cursory comments on coursework and are frequently more concerned with the grading/summative aspect of their students’ work. Other ways of providing feedback, such as in-class tests and computer quizzes, have been looked at, but there is very little evidence, from this study at least, that law tutors use such methods to any extent.
Giving feedback is acknowledged to be a central skill of assessment. As Brown (1997) states, ‘when people are trying out new approaches, they may be insecure and vulnerable. Supportive, constructive feedback is particularly important in these circumstances’. ‘New approaches’ of course includes new subjects, or a new level of study.
UKCLE provided funding for the research which involved collecting data during the academic year 2004/2005.
The purpose of the research was to explore the different forms of feedback - including formal formative assessment - provided to students studying Contract law across a selection of HEIs, and to evaluate their perceived effectiveness in enhancing student learning.
Contract law was chosen as a ‘typical’ undergraduate law subject. Unlike Legal Method/Skills courses, which are invariably taught at the start of a degree to fledgling law students, Contract may be taught at level 1, 2 or, occasionally, 3. As it happened, at all the HEIs that participated in this research Contract was taught in Year 1. Nevertheless, as will be seen, the nature and type of feedback varied considerably between HEIs.
Several heads of law schools were contacted in the autumn of 2004 and invited to participate in the research. Most were helpful and enthusiastic and followed through, i.e. agreed that the author could contact the tutor responsible for the Contract course and either gave contact details or e-mailed the tutor directly indicating their support. Others were wary: one said that the staff were too busy to participate in research (before knowing that it would take at most an hour of their time); another said that ‘assessment practices were not something the staff here would be that interested in’.
Because of resource restrictions, data had to be gathered in a condensed period early in 2005. In all, eleven UK universities participated, including at least one HEI from England, Wales, Scotland and Northern Ireland; both pre- and post-1992 universities were represented. Interviews were held with twelve subject tutors (at three HEIs the lectures were divided between two or more lecturers), but usually only the person with overall responsibility for the subject was interviewed. Discussion covered the nature of feedback mechanisms, their rationale and their perceived effectiveness. A total of sixty-five students took part in focus groups to discuss how and when they received feedback on their learning at their HEI and the nature of any assessment they undertook. They were also encouraged to discuss its impact - if any - on their learning.
Five of the six ‘old’ universities set formative assessment. This was something of a surprise: generally the pre-1992 universities can command higher entrance qualifications and thus justifiably have higher expectations of their students.
In most of these universities the practice had evolved over time, but in at least one HEI it had been praised in a QAA(1) report and as a result had been extended to cover all first-year subjects. One university (the only one in this sample) uses formative assessment across all years of the law course as a matter of school policy. Unsurprisingly, those who use formative assessment generally think it is good practice and did not question its validity: a common remark was that ‘it was only fair to let the students practise what they would be given marks for later’.
Of the five ‘new’ universities, only one set a piece of formal formative assessment, and that was marked by the students themselves against provided marking criteria. As will be seen from the students’ comments reproduced below, this was not considered particularly helpful. Two other ‘new’ universities used to set formative assessment but abandoned it when numbers increased. Ironically, the total number of first-year law students at both of these universities was far lower than at other institutions where such assessment was practised.
Those HEIs that did not set formal formative assessment (five out of the sample) gave a variety of reasons. Large student numbers were mentioned twice, but the most common rationale was that, as first-year marks did not count towards the final degree classification, all first-year assessment could be said to be formative. Every tutor who did not set formal formative coursework said that at some stage during the year students were invited, ‘if they so wished’, to submit work - usually a seminar/tutorial question that could be written up - for scrutiny by a tutor who would give them feedback on it. It will surprise no one that students did not champ at the bit to avail themselves of this opportunity: only four students out of the sample of 65 had done so, and three of these had English as a second language. Tutors freely admitted that if any significant number of students did submit work, there were no resources built into their timetables to allow them to mark it and give feedback.
Of the six universities in the sample that set formal formative assessment, two set one piece of work and three set two separate pieces. The sixth set two pieces of work at the same time, and students could choose to do one or both. Those who set two separate pieces of work all had year-through courses. Two courses enabled students to practise different forms of assessed work by setting an essay for one piece and a problem question for the other.
The work was labelled ‘compulsory’ in three universities, although the label (given the nature of formative assessment) had little meaning compared with other (summative) assessment. When asked what was meant by ‘compulsory’, tutors were often unsure themselves, and responses varied from “I would chase [them] and ask why they had not submitted it” to “a note would be made that they had not done it”; there appeared to be no sanctions, e.g. it was not a prerequisite of submitting summative assessment that formative assessment had been completed. The ‘compulsory’ label was probably necessary to ensure the work was done: there is considerable evidence (Bermingham and Hodgson 2005) that students will usually only do work that ‘counts’ or, to be blunt, carries sanctions for non-submission.
Two universities gave students a choice of questions, with the intention that students would choose the piece of work that would challenge them or enable them to improve in an area in which they lacked confidence. Students’ reactions to this were mixed: given that they were all first-year undergraduates, most had never been given a choice of assessment topic before, and one said it was ‘like being given a choice of torture methods – since we’d never experienced any of them we did not know which would hurt least!’
Samples of all the formative coursework briefs were provided, but it is outside the scope of this paper to analyse their form and content: suffice it to say that they covered a range of subjects and required pieces of work that were usually relatively short (a word limit of 1,500 words was typical). The most surprising thing about the assignment briefs was that none of them provided any assessment criteria at all. When lecturers were asked why no criteria were given (and not all were asked, as it was felt to be an intrusive query), the general response was that no criteria were given for any piece of assessed work. One tutor said:
There’s not much point me spelling out what they have to do in advance. I do give some general guidance in the lecture and they can always e-mail me if they are really unsure but they just get on with it.
One course provided a model answer on the course website when the work was returned, and this was greatly appreciated. Comments included “it was really helpful to see the level of detail required”; “I wasn’t quite sure how to use cases to illustrate my argument until I read [the answer]”; and “it boosts your confidence to compare what you wrote with the model answer”. Another university (which used peer assessment as formative assessment) gave students detailed assessment criteria after the work was complete so that they could mark each other’s work. This had a mixed response: students felt it was useful to see the criteria but were not confident in applying them to the work. One said “it was a complete waste of time”.
Tutors were asked how they gave feedback to the students on the formal formative assessment and students were asked if – and how - it was useful to them. There was no written generic feedback provided by anyone. At one university the students said that the lecturer “ran through the key points she had expected to find [in the work] during the lecture, but it was all rather quick and not very helpful”.
All students who did formal formative assessment were given individual feedback and in all but one university a pro-forma was used. This was not in itself good or bad - students were mainly interested in the quality of the feedback and this varied from tutor to tutor even within the same course, regardless of whether a form was used or not.
One such form had tick boxes with headings such as ‘Structure’, ‘Writing style’ and ‘Quality of argument’, and grades/comments such as ‘Good’, ‘Satisfactory’ and ‘Needs attention’. These were not associated with any numbers, so it was not possible for a student to see how the ticks combined to produce the final mark. There was also sometimes a disparity between the ticks and the grade, which was commented on by the students:
I could not see the relationship between the comments [on the work] and the boxes that were ticked on the front. Sometimes the comments were positive and encouraging but the corresponding tick box score was less than satisfactory. When it was looked at as a whole I did not know which to believe. I don’t know why but I always seem to take more notice of the negative score than the positive comment!
Timing of feedback is continually stressed as crucial, both by ‘experts’(2) and by the students themselves:
We had to do a piece of work in our first term for all four of our subjects which did not count. We got it back really late – often just before the hand-in date of the [summative] assignment so we could not really make use of the comments on it. There was no form, just comments all over the work. For contract the feedback was really useful – for [another subject] it was a waste of time – just one line.
We got the work back a week after we handed it in which was brilliant. Whether we had a mark or not depended on who had looked at it – only one lecturer automatically gave a mark, the others just gave comments. I did not think a mark was really necessary as we did the work with all our books around us so it wasn’t like an exam. The feedback was really useful – it covered both structure and content.
At this university five tutors marked around 300 pieces of work between them, so the turn-round time was impressive. This student also appreciated the comments made. At other institutions, however, it was the mark that was felt to be crucial:
I think the mark is very important. I want comments but if I had a choice I would rather have a number because I want to know how I am doing across subjects and compared with other people. We don’t get quantitative feedback so I don’t know how many people did better than me or what the top mark was, but most of us talk about our marks so we know how most people in our [seminar] group did.
The mark is just as important as the comments. I need the number to gauge performance against my own ability and also comparatively. It also boosts my confidence.
These comments echo the findings of Bermingham and Hodgson (2005) that although many students appear to be in greater need of qualitative feedback, they valued the grade more.
Tutors very rarely asked students to come and see them as a result of the formal formative assessment: usually they would do so only if the work was particularly poor, and if the students did not come they were not usually chased. Good students sometimes requested one-to-one discussions with the tutor, and these were almost always granted; often these were international students. Where English is not a student’s first language, a one-to-one discussion is particularly appreciated:
We were set two pieces of work and we got plenty of comments both on the form at the front and on the work itself. I got lots of feedback about structure which I found really helpful with other pieces of work… I went to see [my tutor] later to discuss the feedback. I do this with every piece of work… A comment such as ‘insufficient academic analysis’ needs explaining and we talked through that.
Where students were set both formal formative and summative assessment in the form of coursework, they particularly valued the feedback given on the formative work, recognising that it could help them to improve. For this reason they wanted the feedback as soon as possible, and in time to use it in later work that was due to be assessed. The feedback element was of much less importance in relation to the later summative coursework: here students concentrated on the mark/grade.
At five institutions there was no formal formative assessment, so the only feedback students received was on their summative coursework, which contributed to their overall grade (varying from 20% to 50% of the final mark, combined with an examination).
Students and tutors were asked similar questions about the nature and value of feedback in this summative assessment.
At one ‘old’ university students were critical of the lack of information they received prior to submitting their work:
After we’d done the work and got it back we seemed to spend every tutorial talking about the structure of the assignment – we hardly ever talked about content. What was really annoying was that we did not realise until we got the feedback from our assignment that we had to quote cases to support our arguments – nobody had mentioned it before!
Some comments were also seen as unhelpful – again confirming the findings of Bermingham and Hodgson (2005):
We had the feedback on a form and there was not very much of it. Mine did not explain what I’d done wrong – it just said ‘good attempt’, but I got 57% so I needed to know what I should have done to get a mark over 60% which was what I was aiming for.
Timing of feedback was also mentioned by this group:
We had to hand in three pieces of coursework [contract and two other subjects] all at the same time and we got them back all at the same time. Often there was similar feedback which made me cross – I made the same error in relation to referencing three times and got penalised on all three!
Because all the work done by these groups of students is summative, many seemed unconcerned about the comments - they just noted the mark. In three universities the work was not returned to the students at all but was retained for ‘quality control’ purposes (presumably, amongst other reasons, to show a sample to the external examiner). At one HEI (a new university) students could request to see their work, which was kept in the school office, but an administrator confirmed that hardly anybody ever did. The author was shown samples of this work; at least one had a whole page of feedback comments that the student in all likelihood never read. When the student group was asked why they had not looked at the marked work, the general view was that there was no point ‘as we’ve finished Contract now’. (In fact the examination they sat at the end of their course covered both Contract and Tort, but examinations were perceived to be different.)
At one university, formal formative assessment was one of the measures introduced to deal with what was perceived to be a high failure rate (20%) at first sitting. New materials were written, the textbook was changed to one considered less weighty, and two pieces of formative assessment were offered. The failure rate dropped considerably and now stands at around 2% - the lowest in the sample - but of course this cannot be attributed solely to the introduction of formative assessment.
Students who were given formal formative assessment valued the comments they received but commented particularly on the grade. Because they were all first-year students they were still ‘finding their feet’, and the Contract assignment was one of the first major pieces of work many of them had done. The mark gave them a benchmark of the standard required for undergraduate law study, and all the groups mentioned that it helped boost their self-confidence.
Much has been written on the purpose of assessment:
Assessment should be formative. Assessment is a time-consuming process for all concerned, so it seems like a wasted opportunity if it is not used as a means of letting students know how they are doing, and how they can improve. Assessment that is primarily summative in its function (for example when only a number or grade is given) gives students very little information, other than frequently confirming their own prejudices about themselves. (Brown and others 1996).
That the primary beneficiary of assessment should be the learner or student is repeatedly asserted in the literature… In other words, assessment is viewed as having a primarily formative function. (Maclellan 2004)
The good news is that this is understood by the majority of HEIs that participated in this project: six out of eleven universities set at least one piece of formal formative assessment. Students generally appreciated this opportunity and the time taken by tutors to mark and comment on their work, although sometimes it was felt that the comments were too brief or vague to be useful. Prompt return of the work was particularly appreciated. Students said that getting early feedback, even if it included some negative comment, boosted their self-confidence and helped motivate them to improve.
Because of the large number of variables it is not possible to conclude that students who complete formal formative assessment are more successful in their final assessments, be they examination or a combination of coursework and examination.
The bad news is that five of the eleven universities that participated in this project did not give students any opportunity to demonstrate their understanding of the law of contract in written form before completing summative work. In three universities even this (final) work was not returned to the students as a matter of course but was retained by the university, although students were usually able to request to see it. This meant that even where comments were written on such work with the primary intention of helping the student, the feedback was rarely communicated.
As this report was being compiled, the first National Student Survey results were published on the TQI website (http://www1.tqi.ac.uk/sites/tqi/home/index.cfm). One of the topics covered by the survey was ‘Assessment and Feedback’, and various comments were made in the press to the effect that, although students were overwhelmingly satisfied with their lecturers and courses, they were ‘less happy with the quality of assessment and feedback’ (Baty 2005). Most of the law schools used in this research (Scotland did not participate in the National Student Survey) nevertheless had good assessment and feedback scores (over 4 on a 5-point scale), but of course it is not known what was at the forefront of students’ minds when they answered the survey, since assessment and feedback cover so many different aspects.
Whether or not an individual law school decides to use formal formative assessment, it is important that we make our assessment processes as transparent as possible to our students. As Smyth says:
Building students’ knowledge of how and why assessment takes the form it does, raising awareness of ongoing as well as final processes, teaching students how they can be self- and peer assessors, and revealing how critical thinking about assessment is an integral part of the learning process, should be a primary aim of all university tutors. Such aims can be achieved in a number of ways. Of most importance is the involvement of students in the rationale behind assessment practices. (Smyth 2004)
Many thanks to the staff and students who participated in the research for this project and to everyone at UKCLE for their enthusiasm and support.
Baty, P (2005) ‘Students satisfied - official’, Times Higher Education Supplement, 9 September, p 1
Bermingham, V and Hodgson, J (2005) http://www.ukcle.ac.uk/research/projects/hodgson.html (accessed November 2005)
Brown, S, Race, P and Smith, B (1996) 500 Tips on Assessment, Kogan Page, p 9
Brown, G with Bull, J and Pendlebury, M (1997) Assessing Student Learning in Higher Education, London, Routledge, p 5
East, R (2005) ‘Formative vs Summative Assessment’, http://www.ukcle.ac.uk/resources/assessment/formative.html (accessed November 2005)
Laing, R D (1970) Knots, Tavistock Publications
Maclellan, E (2004) ‘How convincing is alternative assessment for use in higher education?’, Assessment and Evaluation in Higher Education, Vol 29, No 3, pp 311-321
Nicol, D J and Macfarlane-Dick, D (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, Vol 31, No 2, pp 199-218
Quality Assurance Agency (2001) Code of Practice: Assessment of Students - General Principles - Quality and Standards in HE, available at http://www.qaa.ac.uk/academicinfrastructure/codeOfPractice/section6/default.asp (accessed April 2006)
SENLEF project, http://www.heacademy.ac.uk/senlef.htm (accessed November 2005)
Smyth, K (2004) ‘The benefits of student learning about critical evaluation rather than being summatively judged’, Assessment and Evaluation in Higher Education, Vol 29, No 3, pp 369-378
Teaching Quality Information website, http://www1.tqi.ac.uk/sites/tqi/home/index.cfm (accessed December 2005)
1 Quality Assurance Agency: until recently all UK HEIs were subject to a quality assurance process which operated at subject level.
2 It is one of the listed ‘Principles for effective practice’ as set out in the SENLEF project: http://www.heacademy.ac.uk/806.htm#4 (accessed November 2004)