Validating holistic scoring for writing assessment

There is no post-secondary composition curriculum into which students can be placed.

But in the US, where open admissions stands as an ideal and sometimes as a reality (over 60% of two-year institutions currently endorse it), where millions of students are enrolled each year, and where writing courses usually form some sort of instructional sequence, placement into writing courses is the norm.

Ten years later the outcomes were no better, and Harvard moved its sophomore forensic course to the first year, turning it into a remedial writing course required of every student who did not exempt out of it.

The College Entrance Examination Board began its work in regularizing the certification of applicants in 1900.

As documented by a number of fine studies touching upon the history of writing placement, the rest of the century saw testing firms grow ever more influential and departments of English grow ever more divided over whether to buy ready-made tests, run their own placement examinations, or forgo placement altogether (Elliott, 2005; Mizell, 1994; Russell, 2002; Soliday, 2002; Trachsel, 1992; Wechsler, 1977).

Testing firms are not about to find evidence that they need to pay for more raters to achieve a valid score, nor are colleges that administer local tests eager to give more of them (although students allowed to retest show a high rate of reversal in their placement decisions; Freedman and Robinson, 1982).

On the other hand, the issue of indirect versus direct testing of writing, of even longer heritage, is still unsettled.