An increasing number of school districts and higher education institutions are adopting Automated Essay Scoring (AES) to assess students’ writing for placement or accountability purposes (Shermis & Burstein, 2003b; Vantage Learning, 2001b). The College Board and ACT testing companies have used Vantage Learning’s AES tool IntelliMetric™ to rate the WritePlacer Plus test and the e-Write test, respectively (Haswell, 2004). The Educational Testing Service (ETS) has used its AES tool “e-rater” to replace one of the two human graders on the writing portion of the Graduate Management Admission Test (GMAT) since 1999 (Herrington & Moran, 2001).

The purpose of the current study was to analyze the relationship between AES and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters’ performance, and Spearman rank correlation coefficient tests were used for the data analyses.

Results showed no statistically significant correlation between the overall holistic scores assigned by the AES tool and those assigned by faculty human raters or by human raters who scored another standardized writing test. There was, however, a significant correlation between the scores assigned by the two teams of human raters. A significant correlation was also present between AES and faculty human scoring in Dimension 4 (Sentence Structure), but no significant correlations existed in the other dimensions. These findings do not corroborate previous findings on AES tools. For English educators, they suggest that AES tools have limited capability at this point and that more reliable assessment measures, such as writing portfolios and conferencing, still need to be part of the methods repertoire.
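To illustrate the analysis method named above, the sketch below shows how a Spearman rank correlation between AES and human holistic scores might be computed. It is a minimal illustration only: the score values and variable names are hypothetical, and this is not the study’s actual analysis code or data.

```python
# Illustrative sketch only: hypothetical scores, not the study's data.
# Spearman's rank correlation suits holistic essay scores because they
# are ordinal ratings rather than interval-scale measurements.
from scipy.stats import spearmanr

# Hypothetical overall holistic scores for the same ten essays
aes_scores = [4, 3, 5, 2, 4, 3, 5, 2, 3, 4]      # AES tool
faculty_scores = [3, 4, 3, 2, 5, 2, 4, 3, 4, 3]  # faculty human raters

rho, p_value = spearmanr(aes_scores, faculty_scores)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")

# A non-significant result (e.g., p > .05) would parallel the study's
# finding of no statistically significant AES-human correlation.
```

The same computation would be repeated for each rater pairing (AES vs. faculty raters, AES vs. the other test’s raters, human team vs. human team) and for each scoring dimension.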