The student—an invaluable source of information in the writing process.

Chris Alexander, Bristol University

 

 

Introduction

 

The aim of this paper is to analyse the kinds of things students are concerned about when writing, and then to compare these concerns with what a teacher feels is wrong. For many years as an EAP teacher, I believed the best way to help a student was to underline mistakes and to write in a corrected version, laborious as this was. In this paper I would like to question the effectiveness and usefulness of this technique when it is applied without resorting to the student as a source of information; that is, we can learn a lot about student concerns by getting students to underline areas of concern while they are writing, and then by asking them, if necessary, to clarify these sections at a later stage. I would also like to show that students are not only concerned about things that are incorrect; they also seem to be concerned about things that are ‘correct’.

The data sample was elicited from students of English during one of their timed writing classes. Students highlighted or underlined anything they were not sure about. Two students were chosen at random immediately after completing the task to give retrospective accounts of these highlighted areas. An interplay of qualitative data (the retrospective accounts) and quantitative data (the teacher- and student-underlined sections) was created by comparing student data with post-task teacher error assessment. I chose to analyse the above because the timed writing of expository paragraphs is an integral part of the EAP course in my professional context, and improving methods of error correction by collaborating with the student is an area of personal and professional interest.

Part one is a rationale. Parts two and three will comprise analyses of the paradigms underpinning my research; in parts four and five I will classify and discuss the implications of my findings.

 

 

  1. Rationale

 

To what degree are the things students think might be ‘wrong’, ‘right’? And how can explicit post-task teacher feedback address the dilemmas a student may face when producing such acceptable chunks of L2 writing? If both teacher and student were to underline areas of concern, would there be any mismatch regarding perceived writing concerns? ‘Concern’ for the teacher-assessor meant areas of incorrectness, whereas for the student it related to things they were not sure about. Charles (1990), cited in Jordan (2000, 174), states that students who underline parts of a text with which they are dissatisfied are ‘actively involved in the process of correcting and so are likely to be more receptive to the teacher’s comments’. Charles (ibid.) holds that ‘a big advantage of the technique is that it reveals the concerns of the student, which may be very different from the teacher’. As a researcher, I would like to know whether there actually is a difference of opinion between the teacher and student. This, I believe, would be pertinent in light of what many researchers claim regarding the drawbacks of attending only to surface features of writing and the need to actively involve students in the writing process.

Several researchers have drawn attention to the drawbacks of explicit teacher error correction as the main method of teacher feedback. Jordan (2000, 172), for instance, comments that there is some evidence to indicate that underlining/crossing out errors and simply writing in the correction above or below, though it saves time and ensures that the student has a corrected version, is not an effective method, as it does not actively involve the student and often does not prevent the mistake from being repeated. Ferris and Roberts (2001, 161-184) also take this view: they maintain that the controversy continues as to whether error feedback helps L2 student writers to improve the accuracy and overall quality of their writing. Lee (1998, 61-76) even asserts that most teachers primarily attend to grammar in their evaluation of their students’ writing. The idea of actively involving students in the writing process is echoed by: (1) Cresswell (2000, 235-244), who holds that in giving learners control over the initiation of feedback, student self-monitoring is a valuable way of increasing the element of autonomy in the learning of writing (NB self-correction/monitoring is a scheme in which students tell the teacher the kind of help they want); (2) Frankenberg-Garcia (1999, 100-106), who states that there are limitations to what text-based feedback can do, and that the best moment for responding to student writing is before any draft is completed, by providing writers with pre-text feedback (Zamel 1985 also takes this view); and (3) Dyer (1996, 315), who holds that locating and correcting grammar errors in composition may leave learners with the impression that local errors are of primary importance; this viewpoint is also held by Chenoweth (1987, 25).

 

2. Design of my study

 

The data sample was elicited from eight participants (i.e. Polish university students of EAP) during one of their timed writing classes. Students were asked to write an expository paragraph on a given topic and to underline things they thought might be wrong. Once this stage of the research had been completed, two students were interviewed to obtain some retrospective information regarding their underlined sections; the main purpose of these interviews was to confirm the underlined sections, and to find out whether a student could explain why a particular chunk was worrying. The student work was at a later stage assessed by a teacher; the teacher underlined sections that were thought to be incorrect. The main question with regard to the two quantitative sets of data (i.e. the student- and teacher-underlined sections) was: would the teacher-underlined sections coincide with the student-underlined sections? To this end I analysed the student data in three ways: (1) I listed the number of areas where teacher concern did not relate to student concern because the section had been underlined by the teacher and not by the student; (2) the number of areas where both teacher and student underlined the same chunk (which was incorrect); (3) the number of areas where student concern was not related to teacher concern, i.e. the chunk was thought to be correct by the teacher, but had been underlined by the student. For practical reasons, I could only elicit a relatively small data sample; it may therefore be difficult to make generalisations from this research. The findings apply only to advanced-level (i.e. CAE-level) students (NB it would, however, be interesting to research whether there are any differences between lower-level learners and advanced-level students). Another possible weakness in my research was that I did not consider the degree to which students felt embarrassed about admitting that they had problems/concerns.
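The three-way analysis described above amounts to a simple decision rule over two observations per chunk: whether the teacher underlined it (the teacher underlined only chunks judged incorrect) and whether the student underlined it. A minimal sketch in Python (the function name and return convention are mine, not part of the original study):

```python
def classify(teacher_underlined, student_underlined):
    """Map one chunk to the three analysis categories of this study.

    Category 1: underlined by the teacher only (an incorrect chunk the
                student did not flag, i.e. an 'unexpected error').
    Category 2: underlined by both (an incorrect chunk the student was
                also unsure about).
    Category 3: underlined by the student only (a chunk the teacher
                judged correct, but which worried the student).
    Chunks underlined by neither party fall outside the analysis.
    """
    if teacher_underlined and not student_underlined:
        return 1
    if teacher_underlined and student_underlined:
        return 2
    if student_underlined:
        return 3
    return None
```

Counting each student's chunks under this rule yields the per-student totals reported in Table one below.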

Two types of feedback were elicited from the student: (1) their while-writing concerns, i.e. the sections they were concerned about; (2) two retrospective accounts elicited immediately after the writing task. With regard to the above, two questions seem pertinent: (a) why did I not use verbal protocol to elicit data from students? (b) why did I only get students to underline areas of concern, i.e. why did students not use a code? I chose not to use a code, e.g. as in White and Arndt (1991) (students put a code in an appropriate place in the margin of their writing to indicate that they are not sure whether a particular structure is acceptable, appropriate or correct), because such a code would have to be thoroughly explained to, and understood by, the students. I chose not to use concurrent verbal reporting because: (1) it would have been impracticable in the context of my research, i.e. it would have been annoying for the other students doing the test; (2) even though the recording of verbal protocol, involving the subjects thinking aloud as they are composing, is a common introspective data-elicitation technique, some research questions the validity and reliability of protocol research (concurrent verbal reporting). Kowel and O’Connell (1987, 125), for instance, argue that thinking aloud while composing may interfere (negatively) with the process of writing by involving a second medium, i.e. speech (also noted in Nunan 1992, 115, and McDonough and McDonough 1997, 191). Nisbett and Wilson (1977, 124) warn that reports about ‘our mental life may be too flawed, among other things, too influenced by what we believe we want others to hear’. I hold that another problem with concurrent verbal reporting is low or high yield, i.e. some students may be shy or feel threatened, while others may be overly gregarious. I maintain that introspective methods produce interesting data, though such data are time-consuming to transcribe from a recording.

With regard to the retrospective interview, I held that the reliability of the data would be enhanced by ensuring that they were collected as soon as possible after the task (i.e. by reducing the time-lag). I also did not inform the two candidates, who were chosen at random, that they would be interviewed, because some evidence suggests that this may affect the data (e.g. Ericsson and Simon 1984). I explained the purpose of the retrospective interview to each candidate. The retrospective ‘one-to-one’ interview served several purposes: (1) to confirm the highlighted chunks and to find out whether a student could express levels of concern regarding a highlighted/underlined chunk, i.e. was there anything they were particularly worried about in the underlined sections of the writing task?; (2) to evaluate whether the student could actually explain what was problematic about the structure.

So as to avoid any unease about the possibly threatening nature of this task, which could have adversely affected the data, the students were told the essays would not be graded and that their work would remain anonymous.

 

3. Teacher error assessment

 

The chunks the assessor considered ‘incorrect’ or ‘unacceptable’ were deviations from the norms of contemporary ‘mainstream’ British English i.e. misspellings (British or American), obvious punctuation errors, cohesion-related errors, semantic errors and stylistic errors (i.e. whether word chunks collocated acceptably). A descriptive (i.e. actual usage) rather than prescriptive (correct usage) approach was taken to grammatical errors. I realised that there could be some overlap between these general ways of grouping errors, but did not feel that this was relevant with regard to my research objective, because I simply wanted the teacher-assessor to identify (underline) errors. An unacceptable ‘chunk’ could be a single word, several words or an entire sentence.

Corder (1981, 10-11) distinguishes between an error and a mistake: errors are related to ‘transitional competence’ (not knowing; a learner cannot self-correct an error), whereas mistakes are ‘errors of performance’ (e.g. random slips caused by fatigue) and therefore ‘not significant to the process of language learning’. Corder (ibid.) notes, however, that determining whether a learner has made a mistake or an error is difficult. The teacher-assessor in my research did not attempt to distinguish between a mistake and an error; the terms are used synonymously in this paper. However, the issue of whether some of the sections underlined by the teacher and not by the student may have been student slips is relevant, and could be a weakness of my research. As post-assessment interviews with all the students were not practicable in this research context, I could not assess whether something was a ‘mistake’ or an ‘error’.

A number of researchers have noted that writing teachers give too much error feedback and not enough ‘content’-related feedback, e.g. Chenoweth (1987, 25) and Dyer (1996, 315); it should be noted that content feedback is of great importance in the ‘process’ approach to writing. Chenoweth (1987, 25) even argues that teachers should avoid correcting all grammatical and spelling mistakes, as the idea is to give students a clear message that they should concentrate on content and on expressing their thoughts clearly. In my research, however, I did not wish to analyse ‘content’-related issues (e.g. thesis statement, ideas, arguments, sufficient detail, relevance, logical order, appropriate style/register, organisation, paragraph structure, good flow, conclusion); the teacher assessed only non-content-related errors. The assessor also did not comment on schemata, exophoric referencing or pragmatics. The teacher-assessor was a TESOL-trained, British native speaker. None of the students suffered from dyslexia and all the students were asked to write legibly.

 

 


4. Classification of findings

 

The data in table one below are presented in three different ways: (1) teacher concern that was not related to student concern because the chunk was incorrect and had been underlined only by the teacher; (2) areas where teacher and student concern coincided i.e. both teacher and student underlined the same chunk (which was incorrect); (3) sections that students underlined, but were not underlined by the teacher-assessor i.e. these sections caused concern for students but were actually deemed to be correct by the teacher-assessor.

Table one: Data Classification

 

Column one: the number of areas where teacher concern was not related to student concern, i.e. the chunk was incorrect but was not underlined by the student

Column two: the number of areas where both teacher and student underlined the same chunk (which was incorrect)

Column three: the number of areas where student concern was not related to teacher concern, i.e. the chunk was correct

Student   Column one   Column two   Column three
1         2            2            2
2         5            7            0
3         4            4            4
4         9            3            2
5         6            3            6
6         5            2            2
7         6            3            3
8         1            5            3

 
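Pooling the eight students’ counts in Table one gives the headline proportion discussed in section five. The sketch below (a recomputation of mine, with the row tuples transcribing Table one, not part of the original study) derives the share of student-underlined chunks that the teacher-assessor judged correct:

```python
# Each row transcribes Table one: (student, teacher_only, both, student_only)
# teacher_only  = incorrect chunks underlined by the teacher alone
# both          = incorrect chunks underlined by both teacher and student
# student_only  = chunks underlined by the student but judged correct
rows = [
    (1, 2, 2, 2),
    (2, 5, 7, 0),
    (3, 4, 4, 4),
    (4, 9, 3, 2),
    (5, 6, 3, 6),
    (6, 5, 2, 2),
    (7, 6, 3, 3),
    (8, 1, 5, 3),
]

both = sum(r[2] for r in rows)          # incorrect chunks the student also flagged
student_only = sum(r[3] for r in rows)  # correct chunks the student flagged
student_underlined = both + student_only  # all chunks students were unsure of

# Share of student-underlined chunks that were in fact correct:
correct_share = 100 * student_only / student_underlined
print(round(correct_share))  # → 43
```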

There was a significant mismatch between teacher and student concerns in the context of this research; please note the following points, which emerged during the short retrospective interviews:

 

5. Implications

The findings presented in section four seem to suggest that a proportion (in my research, 43%) of what (advanced) students think is wrong is in fact right; i.e. some of the things students think they may have written incorrectly are actually correct. The question of whether teacher correction per se actually addresses the dilemmas students face when writing such acceptable chunks of L2 is an interesting one. In one sense, it does give the student some information, i.e. that the chunk is correct, though the teacher could only guess (even if the chunk were underlined) which element of the chunk the student found difficult. Consider the following examples taken from student data:

The underlined sections in the examples below caused concern for the students, though these sections were deemed correct by the teacher. But what exactly were the students concerned about in these sentences, and is such information useful for teachers? I believe such information is invaluable; consider example one below:

 

(1) Take a baby for example, it only needs parents’ care, nutritious food and many hours of sleep

What was the student concerned about in sentence one above? In this example the student was not sure (this retrospective data was elicited in the post-task interview) whether the ‘it’ (anaphoric referential cohesion) could be used to refer to a ‘baby’ i.e. should ‘she’ or ‘he’ be used instead of ‘it’. If the student had not underlined the ‘it’, the teacher would not have had any information from the student regarding difficulties; the fact that this chunk was correct would not address the dilemma i.e. that ‘it’ can be used to refer to a child or baby whose sex is not known or is not relevant. I therefore hold that getting students to underline chunks gives the teacher useful information when assessing work because if a chunk is correct, and the teacher is not sure what the student is worried about, he/she can ask the student to explain.

(2) she claims it’s ageism

In example two above, the student underlined ‘claims’ and ‘ageism’, which were deemed to be correct by the teacher-assessor, but what was the student concerned about here (spelling, collocation, usage, transfer)? This student, however, did not take part in a retrospective interview, and a teacher could only speculate about what help the student needed. This example also suggests that, apart from getting students to underline areas of concern, teachers may have to ask students to explain what they are worried about. As the students in their retrospective interviews could explain in detail what they were concerned about in all their underlined sections (NB one of the students could even grade concern, i.e. some underlined chunks were more ‘worrying’ than others), I hold that a more negotiated, interactive approach to writing could be helpful for student and teacher.

 

(3) Our lifestyle is strongly affected by our financial status

(4) What is more, the young, physically, are at their peak

 

Sentences 3 and 4 above add weight to the claim that teachers cannot know on the basis of student underlined sections alone what a student is worried about; teachers therefore should ask students to explain why they have underlined a chunk. In these examples the students were worried about the spelling and not the position of adverbs (NB this data was elicited in the retrospective interviews).

If 55% of what the teacher underlined as incorrect did not relate to student concern (i.e. these could be termed ‘unexpected errors’), it would be interesting to research how such teacher feedback motivates or possibly demotivates students; after all, it might be traumatic for a student to find out that some of the things he or she thought were ‘right’ were ‘wrong’. Consider the following sentences:

(5) The youth is the time widely considered as the most joyful period in one’s life

(6) Although most of people in their thirties or fourties start to think carefully about their health

(7) I belive that it is rather our outlook on life

In sentences 5, 6 and 7 above, the underlined sections are correct, though there are ‘unexpected errors’ in these sentences. In sentence 5, for example, the teacher crossed out ‘the’ (NB ‘joyful’ was correct); in sentence 6, the teacher crossed out ‘of’ and ‘fourties’ (‘thirties’ was correct); in sentence 7, the teacher crossed out ‘belive’ (‘on’ was correct).

How would a learner feel about feedback that did not address his or her real concern (i.e. the underlined section) and actually drew attention to unexpected chunks? Worried? Shocked? Threatened? Demoralised? Happy? Interested? Motivated? Grateful? Embarrassed? Demotivated?

The question as to whether error feedback helps L2 writers to improve the accuracy and overall quality of their writing (mentioned in section one; Jordan 2000, and Ferris and Roberts 2001) is interesting, though I hold that it is important to assess the degree to which error feedback addresses student dilemmas (i.e. concerning correct and incorrect chunks), and whether feedback on ‘unexpected errors’ is seen as helpful and motivating by the student. Dyer (1996) and Chenoweth (1987), see section one, draw attention to how locating grammar errors in composition may leave learners with the impression that local errors are of primary importance; this may very well be the case, but what psychological effect does this have on the learner-writer if the correction is not related to an area of concern? Giving learners control over the initiation of feedback, i.e. student self-monitoring (discussed in Cresswell 2000), or providing writers with pre-text feedback (noted in Frankenberg-Garcia 1999 and Zamel 1985) could therefore, in my opinion, be effective and ‘sensitive’ methodologies.

Correcting what is incorrect as the sole assessment technique may not help a teacher to find out whether any students employ avoidance tactics for things they cannot do; some research suggests that this is the case, e.g. Schachter (1974), Kleinmann (1978) and Kellerman (1977). It may, however, be difficult to find a balance between the need to develop English writing skills and not using up too much valuable lesson time on in-class writing.

With regard to the errors that were underlined by the teacher-assessor, there were very few regular patterns (i.e. examples of identical mistakes). As mistakes on the whole were unique and student-specific, would writing samples of student errors on the board for the class as a whole to correct actually be ‘helpful’ for the whole class? Would this be an effective way of drawing students’ attention to a mistake, i.e. how much lesson time would this take up, and would it be relevant to all the students?

My retrospective interviews suggested that a student may be able to grade concern (i.e. some chunks worried the student more than others). It might be useful to research this further from both the teacher’s and the student’s perspective. Some research noted in Davies (1983, 304), for instance, states that ‘the average teacher may never have received any specific evaluation error training, yet may nevertheless have quite clear intuitions about the relative gravity of different types of errors’. There may be differences between NSs and NNSs: e.g. James (1977) and Hughes and Lascaratou (1982), noted in Davies (ibid.), found that non-native speakers were more severe in their evaluations of learner errors than native speakers. Lococo (1976) maintains that NS judges tend to judge lexical errors as more serious than grammatical errors. Burt (1975, 53-63) holds that NS judges tend to assess global grammatical errors as more likely to interfere with comprehension than local errors (NB global errors affect overall sentence organisation, and local errors affect single elements in a sentence). With regard to the replicability of this research, I hold that who the teacher-assessor is, is a relevant research variable.

Students, on the other hand, may be concerned about specific things in their writing, and grading/analysing such concerns may help throw some light on (1) the student’s awareness of the supposed limitations of his or her interlanguage (i.e. students may know what they ‘think’ they have difficulties with); (2) previous teaching experiences (i.e. the teachers a student had in the past may have an influence on what the student chooses to concentrate on during a writing task); (3) learning strategies (i.e. a student’s past learning experience may influence what he or she chooses to concentrate on during a writing task). Consider the following examples taken from my data:

  1. Three out of the five chunks this student underlined related to spelling (I elicited this data informally after the work had been assessed): the words were (1) passivly; (2) realy; (3) thirties (which was spelt correctly). Would this suggest that this learner seems to pay particular attention to spelling? This issue requires further research.
  2. In the student-one retrospective interview data, the student thought that four out of the six underlined chunks might be incorrect because of interference/negative transfer, NB three of these chunks were correct, and the other was stilted, though not necessarily ‘transfer’-related. Was this student overly concerned about transfer or could this be related to the student’s past teaching experiences? I would argue a lot can be learned about the student’s interlanguage awareness, writing strategies and past teaching (whether it be good or bad), by trying to find patterns in a student’s underlined chunks and then following this up with retrospective interviews.

In conclusion, the belief that only ‘ritualistically’ crossing out mistakes and correcting them helps students needs challenging: it is helpful if what the student is concerned about is incorrect (i.e. the teacher can write in a corrected version), but if what the student is concerned about is correct, I argue that a different approach is required. Another important question relates to the ‘psychological’ effect such post-task error correction may have on the student: is it perceived as helpful if it does not relate to student concern (i.e. much of what the teacher underlined as incorrect in my research did not relate to student concern)? A lot can be learned from analysing learner concerns/dilemmas and grading levels of concern; my data suggested that some students concentrated on (i.e. were concerned about) particular things in their writing, e.g. spelling or transfer (NB in some cases what they were concerned about was correct).

 

 

References

 

 

Burt, M. (1975). Error analysis in the adult EFL classroom. TESOL Quarterly 9: 53-63

Charles, M. (1990). Responding to problems in written English using a student self-monitoring technique. ELT Journal. 44/4

Chaudron, C. (1995). Second Language Classrooms. Cambridge: CUP

Chenoweth, N. (1987). The need to teach writing. ELT Journal 41/1 : 25-29

Corder, S. P. (1981). Error analysis and Interlanguage. Oxford: OUP

Cresswell, A. (2000). Self-monitoring in student writing: developing learner responsibility. ELT Journal 54/3 :235-244

Davies, E. (1983). Error evaluation: the importance of viewpoint. ELT Journal. 37/4: 304-311

Dyer, B. (1996). L1 and L2 composition theories: in Hillocks ‘environmental mode’ and ‘task-based language teaching’. ELT Journal 50/4: 312-317

Ericsson, K. A., and Simon, H. A. (1984). Protocol Analysis: Verbal Reports as Data. Cambridge, Mass.: MIT Press

Ferris, D., and Roberts, B. (2001). Error feedback in L2 writing classes. How explicit does it need to be? Journal of Second Language Writing. 10/3: 161-184

Frankenberg-Garcia, A. (1999). Providing student writers with pre-text feedback. ELT Journal 53/2: 100-106

Hughes, A., and Lascaratou, C. (1982). Competing criteria for error gravity. ELT Journal 36/3: 175-182

James, C. (1977). Judgement of error gravity. English Language Teaching Journal 31: 116-24

Jordan, R. R. (2000). English for Academic Purposes. Cambridge: CUP

Kellerman, E. (1977). Towards a characterisation of the strategies of transfer in second language learning. Interlanguage Studies Bulletin 2: 58-145

Kleinmann, H. (1978). The strategy of avoidance in adult second language acquisition. In Ritchie (ed.) 1978 Second Language acquisition research. New York: Academic Press

Kowel, S., and O’Connell, D. C. (1987). Writing as language behaviour: myths, models, methods. In A. Matsuhashi (ed.). Writing in Real Time: Modelling Production Processes. Norwood, NJ: Ablex

Lee, I. (1998). Writing in the Hong Kong secondary classroom: teachers’ beliefs and practices. Hong Kong Journal of Applied Linguistics 3/1 : 61-76

Lococo, V. (1976). A comparison of three methods for the collection of L2 data: free composition, translation and picture description. Working Papers on Bilingualism 8: 59-86

McDonough, J., and McDonough, S. (1997). Research Methods for English Language Teachers. London: Arnold

Nisbett, R., and Wilson, D. (1977). Telling more than we can know: verbal reports on mental processes. Psychological Review 84: 231-259

Nunan, D. (1992). Research Methods in Language Learning. Cambridge: CUP

Schachter, J. (1974). An error in error analysis. Language Learning 27: 205-14

White, R., and Arndt, V. (1991). Process Writing. London: Longman

Zamel, V. (1985). Responding to student writing. TESOL Quarterly 19/1