Evaluating Writing Graduate Students Teaching Writing

Assessing Student Writing across the Curriculum: A literature review of assignment and rubric design for writing-intensive courses


By Dr. Steven J. Corbett

What do we know about assessing student writing across the disciplines? In terms of designing effective writing assignments and scoring guides—from the cross-curricular research and practice of teacher-scholars across the country—we know quite a bit. And we are learning more every day . . . Writing instructors and program administrators have a lot to think about when it comes to designing curricula that work best for their students’ learning. This literature review on the connections between academic writing and assignment design and assessment offers some of the options we all have to choose from for our writing-intensive courses. Part One introduces the complicated notion of academic writing in relation to the importance of assignment design and assessment. Part Two offers some detailed considerations and guidelines for designing effective writing assignments. Part Three ties assignment design to important scoring and assessment considerations. Finally, a section of works cited and suggestions for further reading (several of them available online) is provided.

Part One: Academic Writing: Why “good” writing assessment is complicated
Students writing in their first (and usually second) years of college are writing primarily in pre-disciplinary forms and environments. Most schools across the country have some sort of general education, required first-year writing course. Yet soon students will face more discipline-specific writing nuances in their college careers as well (Beaufort; Carroll; Thaiss and Zawacki; Driscoll), often in courses designated writing-intensive (Bean; Anson, WAC; Townsend). As students move their way through their majors, writing instructors are expected to teach students what they need to know about reading and communicating in their fields—the “ways of knowing and doing” intricately tied to disciplinary fields and subfields (Carter). Instructors—especially those teaching courses designated as writing-intensive—do their best to teach the nuances of their field’s discourse conventions, while simultaneously learning themselves what factors influence students’ abilities to navigate the writing process of a particular course. One of the key ingredients in this teaching and learning process is writing assignments. What makes a “good” or “effective” writing assignment? Further, how can instructors from across the disciplines not only design good writing assignments, but also develop “good” or “effective” assessment tools that work well for their students, their programs, and themselves?

Assignment design and assessment are so tricky because of the inherently complex nature of what it means to write for the academy. Thaiss and Zawacki list five socio-cultural factors instructors must consider in assignment design, factors that affect and influence how students negotiate assignments from course to course: the academic; the disciplinary; the sub-disciplinary; the local or institutional; and the idiosyncratic or personal. The authors explain that the massive scope of factors that can influence how students negotiate a writing task leads students through three rough stages in their development as disciplinary writers: a first stage where students base their sense of a field’s writing on a few criteria generalized into “rules”; a second stage in which students experience a variety of writing exigencies in a variety of courses, leading them to interpret the differences as teacher idiosyncrasy; and a third stage where students come to understand and internalize differences as part of a more nuanced sense of the field. This complex relationship between how students perceive what it means to write at the college level and how instructors go about facilitating this learning has led writing studies scholars for the past thirty years to link the importance of reflective and metacognitive practice to writing assessment, especially holistic assessment (Yancey; Huot and O’Neill; Beaufort 79-82; Carroll 120-26). Efforts to holistically assess the “whole” of writing, even as we simultaneously try to feasibly and reliably inventory its parts for instruction (Huot), include helping students see and metacognitively negotiate the big picture of their development as disciplinary writers. Constructing effective writing assignments is an integral part of this learning process.

Part Two: Assignment Design: Importance and some key considerations
Yet the art of designing effective writing assignments is indeed a complex process. It involves the sort of multifarious cognitive and social negotiations we discussed in Part One. Several writing specialists (in the field of composition and rhetoric and the subfields of writing across the curriculum, or WAC, and writing centers) have reported on faculty members’—and their students’—perceptions of negotiating assignment expectations and design (Anson, WAC; Harris).

In a 2002 collection, Anson compiles several chapters that speak to some of the nuances of assignment design. Jolliffe reports how he developed what he thought was a great assignment for a quantitative reasoning course, but how fellow instructors and students felt both confused and confined by the assignment parameters. Anson (“Trudy”) reports on a philosophy instructor who gave what he deemed an “open assignment”; one student, who decided to take a risk with the assignment by providing illustrations, received a low grade. Farris describes how three instructors from across the disciplines evaluated three political science essays in very different ways. Mullin analyzes how instructors from across the disciplines, and writing tutors working with their students, noticed “pieces missing” from their assignments—pieces that students could have used to better negotiate the assignment. Finally, Bishop offers a focused view of a student, Preetha, who wants to study physical therapy, trying to understand the seemingly divergent expectations of her three writing-intensive courses (among the total of six courses she is taking).

In another collection, Harris touches on some of these same issues, and others, that can contribute to what she labels “assignments from hell” (or AFHs). According to Harris, AFHs include assignments that value grammar and other sentence- and word-level issues over ideas, over-emphasize formatting over content, offer too many directions or questions to consider, or use intimidating lexicon or technical jargon. In response to their understandings of how teachers and students from across the disciplines negotiate writing assignments, scholars have outlined some of the key criteria for effective assignment design (White; Gardner; Harris). All of these authors recognize that good assignments should:

  1. Lay out the content, scope, goals, and purposes of the assignment as clearly as possible, including linking the assignment to any specific course goals and objectives the instructor wishes students to practice, and trying to make the assignment sheet as visually easy to read and comprehend as possible;
  2. Provide enough process and development scaffolding so students know how to draw on their knowledge and experiences in the course to negotiate the assignment, including possible successful models, and anticipate whether students have enough skill and time to complete the assignment satisfactorily;
  3. Provide students with an understanding of the possible choices they have in negotiating the assignment, including choices involving topic, form, and tone. For example, if there are many questions listed in the assignment sheet, do students know they can choose from among the questions? (For an example of an annotated assignment from the Framework for Success in Postsecondary Writing Assignment Database, see Corbett.)

In addition to these three criteria, White and Harris add the important element of assessment: do students understand how they will be evaluated, whether they have the opportunity to write more than one draft, and what constitutes a successful response to the assignment in terms of some sort of grading-criteria hierarchy? This final consideration of assessment leads us to the question of grading rubrics or scoring guides.

Part Three: Scoring Guides: Importance and some key considerations
White’s notion of the importance of scoring guides suggests why and how instructors can move toward better coordination between writing assignments and assessing student writing. White argues that well-designed scoring guides can move students toward the sort of metacognitive awareness we discussed above by providing them with fair, consistent, public, clear and responsible feedback. Others have made compelling claims for the value of having students peer review their own and each other’s papers with the same or similar rubric as their teachers (Corbett, LaFrance, and Decker) or including students in the rubric design process (Anson, Davis, and Vilhotti; Inoue; LaFrance). Scholars in WAC and their disciplinary partners have reported success in developing cross-curricular cohorts that closely collaborate in efforts to design effective writing rubrics (Broad et al.; Yancey et al.; Anson et al., “Big”; Soliday). Broad et al., for example, offer several instances of how instructors can collaboratively create meaningful rubrics that begin from collectively identifying attributes of the writing they liked and did not like, and why. Instructors can then move on to creating categories or headings to organize these criteria.

Finally, rubrics are designed based on a clearly delineated grading hierarchy—from very high quality (high passing) writing to very low quality (low passing) writing. Two essays from the same 2012 volume of the Journal of Writing Assessment speak to the complexity of rubric design in relation to instructors’ laudable attempts at aligning teacher expectations with student understanding of assessment criteria. Covill draws on social-cognitive and cognitive theories, as well as studies of the effects of writing rubrics on writing quality and student attitudes toward writing tasks, to frame her study of sixty students enrolled in two sections of a 200-level “Early Child Development” writing-intensive psychology course. Covill randomly assigned students to use one of three assessment tools while they wrote their five-page papers: a long rubric with eleven criteria, a short rubric with five criteria, or an open-ended assessment. Students were assessed on quality of writing, on self-efficacy, and on how they used their respective tools in their writing practices. Covill reports that there was no statistically significant difference among tool users in quality of writing or in self-efficacy.

However, she reports some differences in the writing practices of students who used the long rubric. These students reported that the long rubric aided their initial drafts and then again as they revised for their final drafts, and that it helped them (metacognitively) negotiate how to write a good paper for that class and for other classes as well. This leads Covill to speculate that long rubrics may have a more powerful influence on student thinking and writing practices than short rubrics. Yet when examined closely, both the long and the short rubrics actually contain what might be considered primarily generic criteria that could be applied to just about any essay.

Anson et al. (“Big”) argue compellingly in their study that it is futile to attempt to design generic, one-size-fits-all writing rubrics. Informed by extensive experience collaborating with faculty, students, and administrators, the authors advocate the use of contextually derived assessment and the abandonment of generic rubrics. They write: “Analyses of survey results, meeting transcripts, collected assignments, and samples of student writing show that even where faculty members across the disciplines seem to agree, they don’t” (Online). For example, in a survey, participants identified the “essay” and the “research paper” as the most frequently assigned types of writing. But the authors illustrate how these seemingly generic terms can take on very different forms: the research paper looks very different in nursing than in political science.

Even within the field of philosophy, the essay can look very different, from short answers to comprehension questions to much longer pieces that support some sort of innovative interpretation or logic-based explication. The authors report on a cohort of political science faculty who gathered to collectively analyze, discuss, and revise student writing in relation to assignment and rubric design. Between 2007 and 2009 the authors report significant gains in student abilities to summarize claims, analyze evidence, and connect various perspectives. The authors attribute this success to the revised ways that political science faculty began to imagine their genre- and discipline-specific expectations in their assignments and scoring guides. They were consequently better able to make clear to students how political science analysis works differently from biological or literary analysis in form and function. So is it more important for disciplinary writing instructors to be patient and begin with the baby steps necessary to start the process of collaboratively negotiating how to design effective disciplinary assignments, or to move as quickly as possible toward the giant leaps needed to ensure students receive a true picture of the highly nuanced nature of writing for a specific field or subfield? Anson et al. (“Big”), while ultimately holding fast to their argument against generic rubrics, acknowledge that “generic criteria provide a starting point by providing language whose heuristic value compels faculty in the disciplines to think about general but sometimes unconsidered concepts such as ‘rhetorical purpose’ or ‘style appropriate to the defined or invoked audience’” (Online). Perhaps, then, it’s OK to start more generally, while keeping in mind ways to move to more specificity in this or that assignment.

Works Cited and Suggested Readings

Anson, Chris M. “Trudy Does Comics.” Anson 28-32.

Anson, Chris M., ed. The WAC Casebook: Scenes for Faculty Reflection and Program Development. New York: Oxford UP, 2002. Print.

Anson, Chris M., Deanna P. Dannels, Pamela Flash, and Amy L. Housley Gaffney. “Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools across Diverse Instructional Contexts.” Journal of Writing Assessment 5.1 (2012). Web. 15 Feb. 2014.

Anson, Chris M., Matthew Davis, and Domenica Vilhotti. “‘What Do We Want in this Paper?’ Generating Criteria Collectively.” Harris, Miles, and Paine 35-45.

Bean, John C. Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking, and Active Learning in the Classroom. 2nd ed. San Francisco: Jossey-Bass, 2011. Print.

Beaufort, Anne. College Writing and Beyond: A New Framework for University Writing Instruction. Logan, UT: Utah State UP, 2007. Print.

Bishop, Wendy. “In the Writing-Intensive Univers(ity).” Anson 53-60.

Broad, Bob, Linda Adler-Kassner, Barry Alford, Jane Detweiler, Heidi Estrem, Susanmarie Harrington, Maureen McBride, Eric Stalions, and Scott Weeden. Organic Writing Assessment: Dynamic Criteria Mapping in Action. Logan: Utah State UP, 2009. Print.

Carroll, Lee Ann. Rehearsing New Roles: How College Students Develop as Writers. Carbondale: Southern Illinois UP, 2002. Print.

Carter, Michael. “Ways of Knowing, Doing, and Writing in the Disciplines.” College Composition and Communication 58.3 (2007): 385-418. Print.

Corbett, Steven J. “Communicating an Important Topic or Idea in Your Field.” Framework for Success in Postsecondary Writing Assignment Database. Council of Writing Program Administrators, NCTE, and National Writing Project. (Spring 2011). Web. 15 Feb. 2014.

Corbett, Steven J., Michelle LaFrance, and Teagan Decker, eds. Peer Pressure, Peer Power: Theory and Practice in Peer Review and Response for the Writing Classroom. Southlake, TX: Fountainhead Press (In Press).

Covill, Amy E. “College Students’ Use of a Writing Rubric: Effect on Quality of Writing, Self-Efficacy, and Writing Practices.” Journal of Writing Assessment 5.1 (2012). Web. 15 Feb. 2014.

Driscoll, Dana Lynn. “Connected, Disconnected, or Uncertain: Student Attitudes about Future Writing Contexts and Perceptions of Transfer from First Year Writing to the Disciplines.” Across the Disciplines 8.2 (2011). Web. 15 Feb. 2014.

Farris, Christine. “Who Has the Power?” Anson 33-39.

Gardner, Traci. Designing Writing Assignments. Urbana, IL: NCTE, 2008. Web. 15 Feb. 2014.

Harris, Joseph, John D. Miles, and Charles Paine, eds. Teaching with Student Texts: Essays toward an Informed Practice. Logan: Utah State UP, 2010. Print.

Harris, Muriel. “Assignments from Hell: The View from the Writing Center.” What Is ‘College Level’ Writing? Volume 2: Assignments, Readings, and Student Writing Samples. Ed. Patrick Sullivan, Howard Tinberg, and Sheridan Blau. Urbana, IL: NCTE, 2010. 183-206. Print.

Huot, Brian. “Towards a New Theory of Writing Assessment.” College Composition and Communication 47 (1996): 549-566. Print.

Huot, Brian, and Peggy O’Neill, eds. Assessing Writing: A Critical Sourcebook. Urbana, IL: NCTE, 2009. Print.

Inoue, Asao B. “Teaching the Rhetoric of Writing Assessment.” Harris, Miles, and Paine 46-57.

Jolliffe, David A. “Great Assignment, but Nobody’s Happy.” Anson 25-27.

LaFrance, Michelle. “An Example of Guided Peer Review.” Corbett, LaFrance, and Decker.

Mullin, Joan. “Pieces Missing: Assignments and Expectations.” Anson 40-48.

Soliday, Mary. Everyday Genres: Writing Assignments across the Disciplines. Carbondale and Edwardsville: Southern Illinois UP, 2011. Print.

Thaiss, Chris, and Terry Myers Zawacki. Engaged Writers and Dynamic Disciplines: Research on the Academic Writing Life. Portsmouth, NH: Boynton/Cook, 2006. Print.

Townsend, Martha A. “Writing Intensive Courses and WAC.” WAC for the New Millennium: Strategies for Continuing Writing-across-the-Curriculum Programs. Ed. Susan H. McLeod, Eric Miraglia, Margot Soven, and Chris Thaiss. Urbana, IL: NCTE, 2001. 233-58. Print.

White, Edward M. Assigning, Responding, Evaluating: A Writing Teacher’s Guide. 4th ed. Boston: Bedford/St. Martin’s, 2007. Print.

Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment.” College Composition and Communication 50.3 (Feb. 1999): 483-503. Print.

Yancey, Kathleen Blake, Emily Baker, Scott Gage, Ruth Kistler, Rory Lee, Natalie Syzmanski, Kara Taczak, and Jill Taylor (The Florida State University Editorial Collective), eds. Special Issue of Across the Disciplines, “Writing Across the Curriculum and Assessment: Activities, Programs, and Insights at the Intersection.” (December 3, 2009) Web. 15 Feb. 2014.

Steven Corbett (Ph.D., University of Washington-Seattle, 2008) is an Assistant Professor at George Mason University, where he teaches courses in writing and rhetoric. He is currently working on projects involving studies and stories of course-based writing tutoring, peer review and response across the curriculum, and writing in the performing and visual arts. Some of his recent publications include: Peer Pressure, Peer Power: Theory and Practice in Peer Review and Response for the Writing Classroom (primary editor, with Michelle LaFrance and Teagan Decker; Southlake, TX: Fountainhead Press, forthcoming, expected spring 2014); “Learning Disability and Response-Ability: Reciprocal Caring in Developmental Peer Response Writing Groups and Beyond,” in a special issue of Pedagogy: Critical Approaches to Teaching Literature, Language, Composition, and Culture (forthcoming, 15.3 [Summer 2014]); and “It’s the Little Things that Count in Teaching: Attention to the Less ‘Serious’ Aspects Can Make You a More Effective Instructor” (primary author, with Michelle LaFrance), in The Chronicle of Higher Education (September 9, 2013).