Abstract
Recently developed tools that quickly and reliably quantify vocabulary use on a range of measures open up new possibilities for understanding the construct of vocabulary sophistication. To take this work forward, we need to understand how these different measures relate to each other and to human readers’ perceptions of texts. This study applied 356 quantitative measures of vocabulary use generated by an automated vocabulary analysis tool (Kyle & Crossley, 2015) to a large corpus of assignments written for First-Year Composition courses at a university in the United States. Results suggest that the majority of measures can be reduced to a much smaller set without substantial loss of information. However, distinctions need to be retained between measures based on content vs. function words and between different measures of collocational strength. Overall, correlations with grades are reliable but weak.
| Original language | English |
| --- | --- |
| Pages (from-to) | 33-66 |
| Number of pages | 34 |
| Journal | International Journal of Corpus Linguistics |
| Volume | 24 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jul 2019 |
| Externally published | Yes |
Keywords
- academic writing
- First-Year Composition
- vocabulary
- vocabulary sophistication
- writing assessment