Abstract
This dissertation examines how intra-task variations in performance conditions affect task difficulty in a semi-direct speaking test, the GEPT-I. The variables identified for investigation are: the linguistic demand of the task input, in relation to code complexity; the amount of time allowed for performance, in relation to communicative demand; the type of pre-task planning, in relation to communicative demand; and test-takers’ familiarity with the non-verbal propositional content of the task input, in relation to cognitive complexity.
The dissertation takes a multi-dimensional approach to measuring the effects of these variables, comparing data collected from performances on a controlled task and an experimental task. The comparison draws on three sources of data: task scores; test-takers’ responses to post-task questionnaires eliciting their views of difficulty; and interlanguage measures of accuracy, fluency, complexity, and lexical density. In addition, learners’ English proficiency is treated as a moderator variable in order to investigate the extent to which it interacts with these effects.
A preliminary study was first carried out to demonstrate, both quantitatively and qualitatively, that the controlled and experimental tasks were equivalent before the variables were manipulated. A total of 239 Taiwanese learners participated in the main studies, and their performances were analysed. The results indicate that the variations in performance conditions altered the degree of difficulty as measured in some or all dimensions of the comparison. The results also confirm that in some cases the effect was moderated by learners’ English proficiency; in general, the more proficient learners reacted to the variables more strongly than the less proficient learners.
Significant implications for language testing, learning, and research in general are suggested. In particular, recommendations are made to exam boards concerning reliability (i.e., parallel tasks/forms and inter- and intra-rater reliability) and content validity.