Assessing Student Writing Competence

In response to a state mandate in 2000, Mason's Office of the Provost convened the Writing Assessment Group (WAG), comprising faculty representing each of the colleges and schools with undergraduate majors and co-chaired by the WAC director and the Associate Provost for Institutional Effectiveness. The motivation for convening the group was threefold: 1) a sense that many of our upper-division students were not writing at a level satisfactory to faculty; 2) the approval of the new general education synthesis requirement [a part of which involved assessing writing competence]; and 3) a mandate from the State Council of Higher Education in Virginia (SCHEV) for all Virginia institutions to submit a plan defining standards of writing competency and describing how they would measure competency, collect data, and report results. In 2002, WAG became a committee in the Provost's office with the charge of proposing and implementing a process for assessing students' writing competence.

The Process

To assess the writing competence of their undergraduate majors, departments follow a process modeled on an initial pilot workshop conducted in 2003. After meeting initially with a member of the Writing Assessment Group, faculty choose an assignment from the department's writing-intensive course from which to collect papers (a minimum of 20 and up to 60). Faculty from the department are then guided through a workshop in which they use some of the collected papers as samples to create a rubric identifying the traits valued in writing in that discipline, often beyond the particular assignment. Once the traits have been identified and refined, they are grouped into larger categories. The rubric is then used to score the remaining papers. The faculty who create the rubric are the ones who score the papers, thus calibrating the scoring based on a common experience.

The assessment process occurs at the departmental level and not only gives us valuable information about our students' writing abilities but also, perhaps more importantly, provides a venue for faculty to talk about their goals and expectations for student writers. In addition, our process has been recognized as a model program by the Council of Writing Program Administrators and the National Council of Teachers of English, one that “reflects consistent principles … enacted through questions and methods that are appropriate for the institution, the department, and the program.”

Background: 2002-2007

From 2002 to 2007, the following four-part assessment strategy was put into place:  

  1. A faculty survey.  In 2002, the committee designed and administered a Faculty Survey on Student Writing, with results informing university writing and writing assessment efforts.
  2. A cross-disciplinary holistic scoring workshop pilot.  In 2003, faculty liaisons from across the disciplines attended a workshop that modeled the process they would use with their departmental colleagues to determine departmental standards for student writing by developing a rubric and scoring papers.
  3. Scoring in the majors.  Starting in 2002, undergraduate departments began holding assessment workshops and scoring sessions, using an assignment from their writing-intensive course or one typical of writing in their particular major.
  4. Reporting results.  Departments report the results of their scoring sessions to the Office of Institutional Assessment. Those reports are not publicly available.

[A more comprehensive description of each of these parts, including documentation, can be found here. The WAG co-chairs also wrote an article about this process: Zawacki, Terry Myers, and Gentemann, Karen M. “Merging a Culture of Writing with a Culture of Assessment: Embedded, Discipline-based Writing Assessment.” In Assessment of Writing, eds. Marie C. Paretti and Katrina Powell. Assessment in the Disciplines Series, Vol. 4. Tallahassee: Association for Institutional Research, 2009. 49-64.]

Assessing Writing: 2008-present

In 2007, SCHEV issued a new mandate for "value-added assessment measures [that] indicate progress, or lack thereof, as a consequence of the student's institutional experience." In response, WAC and the Office of Institutional Assessment incorporated the ongoing assessment procedures into a new plan, which includes a common definition of overall written competence to be used for both pre- and post-assessments of student writing. The pre-assessment took place in fall 2008, using random samples of research-based essays written in introductory composition classes (English 100/101; see rubric below). “Voices at the Table,” an article detailing this process, appeared in the December 2009 issue of Across the Disciplines.

Department Rubrics


Assessing WAC: