Undergraduate introductory STEM courses often include technical writing training, but that training is usually secondary to content coverage, and so is treated as an incidental skill each student develops “along the way.” Where writing instruction does exist, it is often haphazardly designed and implemented. Adding to the challenge, the people helping students develop writing skills frequently are not faculty: graduate teaching assistants, or even advanced undergraduates, typically teach technical writing. These TAs may have under-developed writing skills themselves, lack the pedagogical content knowledge needed to guide students effectively, or never have been explicitly trained to teach writing.

The first step towards improving writing training is widespread adoption of evidence-based strategies. Equally important is reducing the practical barriers that keep faculty from routinely including scientific writing in post-secondary STEM classes.

In 2014 we launched the SAWHET lab report collection system to systematically collect report documents and metadata from student writers and graduate instructors. Using these data, we are mapping key features of student writing and the connections and correlations between those features. These insights are informing both how we teach technical writing and how we train graduate instructors.

Currently our structured database contains >5,000 laboratory reports written by undergraduate students for introductory biology courses, stored in standardized .docx and .txt formats. Additional data and metadata available for each report include (a hypothetical sketch of one record follows the list):

  • All comments that the TA made on the report
  • Context: Was it the first or second report of the semester? An initial submission or a revision? What were the course and topic prompt? What was the author’s year in college?
  • Instructor information: Which TA provided feedback? What score was assigned? What was the rationale for the score?
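
For concreteness, one report record might be organized as below. This is a minimal sketch in Python; the field and class names are our own illustration, not the actual SAWHET schema.

```python
# Hypothetical sketch of one SAWHET report record. Field names are
# illustrative only; the real database schema is not shown here.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TAComment:
    text: str                     # verbatim comment the TA left on the report
    anchor: Optional[str] = None  # passage of student text the comment targets

@dataclass
class ReportRecord:
    report_id: str
    course: str            # course the report was written for
    topic_prompt: str      # assignment prompt
    sequence: int          # 1 = first report of the semester, 2 = second, ...
    is_revision: bool      # False = initial submission, True = revision
    author_year: int       # author's year in college
    ta_id: str             # which TA provided feedback
    score: str             # score assigned under the bins-based protocol
    score_rationale: str   # TA's stated rationale for the score
    comments: List[TAComment] = field(default_factory=list)
```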

Our first goal is to better understand student and instructor behaviors within the technical writing instructional space. Our second and more practical goal is to develop techniques for teaching technical writing that:

  • Embody best practices from 40+ years of research on “writing in the disciplines”;
  • Emphasize student development through holistic coaching;
  • Shift student and instructor attention away from copy-correction towards larger writing issues; and
  • Can be implemented at scale by less experienced teaching assistants.

 

Context & Prior Work

From prior studies (see the Literature Review for details), we know students make greater gains as writers if instructors avoid telling students what to correct (“copy editing”) and use a coaching-oriented approach instead (Anson 2001; Bazerman 1994; Bazerman and Herrington 2006; Breidenbach 2006; Gottschalk 2003; Graves 2011; Harris 1979; R. Haswell 2006; R. H. Haswell 2006; Kiefer and Neufeld 2002; National Council of Teachers of English and National Writing Project 2011; Perelman 2009; Reynolds et al. 2009; Ruegg 2015; Swilky 1991; Szymanski 2014; Trupiano 2006; Underwood and Tregidgo 2006; WPA Council 2011).

To that end, we train graduate TAs to comment on undergraduate technical writing by addressing global issues, that is, the biggest problems, first. Ideally, TAs provide feedback within these parameters:

  • Spend less total time by writing fewer but more substantive comments, and provide at least two separate rounds of feedback on each document.
    • In routine practice, limit feedback to 3-5 comments per text page. Students cannot process more than this easily.
    • Make recommendations that would result in the greatest improvement first.
    • As students’ writing improves, TAs can shift their focus to the next most-significant errors or issues.
  • Provide comments that encourage student reflection and self-correction rather than prescribe specific changes.
    • Students tend to focus on the smallest, easiest corrections first, and avoid larger, more complex revisions.
    • Eliminating copy corrections leaves students with only the complex revisions to work on.

Our bins-based report grading protocol (Nilson 2014) reinforces this feedback strategy.

This holistic, coaching-oriented approach is well supported by prior research and is recommended as a best practice by (among others) the Writing Across the Curriculum Clearinghouse. Yet we are uncertain how faithfully TAs implement the recommended strategy. We have repeatedly asked ourselves:

“What do TAs ACTUALLY spend most of their time and effort making comments about when they grade student writing?”

Self-reported TA effort is likely to be unreliable; a more direct measure is needed.

In 2018, we began analyzing TAs’ comments on student papers directly. We extracted and tabulated all comments from ~500 pairs (initial submission plus revised version) of short lab reports written by undergraduates in 3 different general biology lab courses during a single semester; a sketch of the extraction step appears below. After initial data cleaning (removing duplicates, splitting comments that addressed multiple independent writing issues, and discarding comments lacking any context for judging intent), we had a working dataset of ~11,000 separate comments.
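
The extraction step itself is mechanically simple because a .docx file is a ZIP archive whose reviewer comments live in word/comments.xml. The sketch below shows one way to pull them out in Python; the cleaning pass shown is illustrative, not our exact pipeline.

```python
# Sketch: extract reviewer (TA) comments from a .docx file.
# A .docx is a ZIP archive; comments are stored in word/comments.xml.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_comments(docx_path: str) -> list[str]:
    """Return the text of every comment in a .docx, one string per comment."""
    with zipfile.ZipFile(docx_path) as zf:
        try:
            xml_bytes = zf.read("word/comments.xml")
        except KeyError:
            return []  # this document has no comments part
    root = ET.fromstring(xml_bytes)
    comments = []
    for c in root.iter(f"{W}comment"):
        # A single comment's text is split across w:t runs; join them.
        text = "".join(t.text or "" for t in c.iter(f"{W}t")).strip()
        if text:
            comments.append(text)
    return comments

def dedupe(comments: list[str]) -> list[str]:
    """Example cleaning pass: drop exact duplicates while preserving order."""
    seen: set[str] = set()
    return [c for c in comments if not (c in seen or seen.add(c))]
```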

We next used a subset of 110 randomly selected comments to build an initial qualitative codebook (Elliott 2018; Saldaña 2015) for classifying comments by subject and structure. This initial codebook was tested and revised against blocks of 1,100 additional randomly selected comments until the categories and criteria stabilized. Using the final codebook, we then sorted all ~11,000 comments into separate categories.

TA comments were coded and sorted according to two dimensions (a minimal encoding in code follows the list):

  • Subject: did the comment focus mainly on Basic Criteria (minimum pass/fail requirements), Technical Issues, Writing Quality, Logic and Thinking, or Other issues?
  • Structure: did the comment simply Point to an error without providing other help, provide only Copy Correction, provide declarative General or Specific Information, or provide Holistic Coaching that fosters student thinking and long-term improvement?
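
For concreteness, the two coding dimensions can be written down as simple enumerations. The category names come from the codebook described above; the encoding itself is our illustration.

```python
# The two coding dimensions from the codebook, expressed as enumerations.
# Category names follow the text above; the encoding is illustrative.
from enum import Enum

class Subject(Enum):
    BASIC_CRITERIA = "Basic Criteria"        # minimum pass/fail requirements
    TECHNICAL_ISSUES = "Technical Issues"
    WRITING_QUALITY = "Writing Quality"
    LOGIC_AND_THINKING = "Logic and Thinking"
    OTHER = "Other"

class Structure(Enum):
    POINTING = "Pointing"                    # flags an error without other help
    COPY_CORRECTION = "Copy Correction"      # supplies the fix directly
    GENERAL_INFORMATION = "General Information"    # declarative, general
    SPECIFIC_INFORMATION = "Specific Information"  # declarative, specific
    HOLISTIC_COACHING = "Holistic Coaching"  # fosters thinking and long-term growth
```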

Raw counts of comments for each Subject/Structure pair were then converted to fractional frequencies. The final frequency table (excerpted below) provided us with direct estimates of the most common types of comments TAs made on student work, and of which elements of writing TAs emphasize most often.
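
Computing those fractional frequencies is a one-liner once each comment carries its two codes. The pandas sketch below illustrates the arithmetic; column names and example rows are hypothetical.

```python
# Sketch: turn raw Subject x Structure counts into fractional frequencies.
# Column names and the three example rows are hypothetical.
import pandas as pd

coded = pd.DataFrame({
    "subject":   ["Writing Quality", "Basic Criteria", "Writing Quality"],
    "structure": ["Copy Correction", "Pointing", "Holistic Coaching"],
})

# normalize=True divides every cell by the grand total, so each cell becomes
# the fraction of ALL comments falling in that Subject/Structure pair.
freq_table = pd.crosstab(coded["subject"], coded["structure"], normalize=True)
print(freq_table.round(3))
```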

[Table: excerpted data from frequency table for hand-coded TA comments on student reports]

 

The Challenge

First, analyzing TA comments this way is informative, but coding by hand is unsustainable: developing the codebook and then rating the initial set of ~11,000 comments required nearly 80 hours of investigator effort. Given that 10,000-12,000 new comments are generated EACH SEMESTER, a faster classification method is essential for this NSF project to succeed.

Second, even with the codebook in hand, individual coders need training to achieve sufficient accuracy, and there is significant risk of “coding drift” (changes in how a single rater interprets code features over time). Automated classification would eliminate the need for coder training and minimize the potential for coding drift.
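
One plausible direction, offered here as a sketch rather than the project’s settled method, is to train a standard supervised text classifier on the hand-coded comments and let it label new ones. The example below uses scikit-learn’s TF-IDF features with logistic regression.

```python
# Baseline sketch: learn comment categories from the hand-coded dataset,
# then classify new comments automatically. Uses scikit-learn; the split
# and model choices are illustrative, not the project's final method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def train_comment_classifier(comments: list[str], labels: list[str]):
    """comments: one string per TA comment; labels: hand-coded category names."""
    X_train, X_test, y_train, y_test = train_test_split(
        comments, labels, test_size=0.2, stratify=labels, random_state=0)
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word + bigram features
        LogisticRegression(max_iter=1000),
    )
    model.fit(X_train, y_train)
    # Held-out performance indicates how close the model gets to hand-coding.
    print(classification_report(y_test, model.predict(X_test)))
    return model
```

If held-out agreement were to approach the agreement between trained human coders, a classifier like this could handle routine semesters while humans audit samples for drift.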

 


Works Cited

Anson, C. M. 2001. “Talking About Writing: A Classroom-Based Study of Students’ Reflections on Their Drafts.” In Self-Assessment and Development in Writing: A Collaborative Inquiry, edited by Jane Bowman Smith and Kathleen Blake Yancey, 59–74. Hampton Press.

Bazerman, C. 1994. “Systems of Genres and the Enactment of Social Intentions.” In Genre and the New Rhetoric, edited by Aviva Freedman and Peter Medway, 1st ed., 79–101. Taylor & Francis.

Bazerman, C., and A. Herrington. 2006. “Circles of Interest: The Growth of Research Communities in WAC and WID/WIP.” In Composing a Community: A History of Writing Across the Curriculum, edited by Susan H. McLeod and Margot Iris Soven, 49–66. Parlor Press. http://www.parlorpress.com/composing.html.

Breidenbach, C. 2006. “Practical Guidelines for Writers and Teachers.” In Revision: History, Theory, and Practice, edited by Alice Horning and Anne Becker, 197–219. WAC Clearinghouse.

Elliott, V. 2018. “Thinking About the Coding Process in Qualitative Data Analysis.” The Qualitative Report 23 (11): 2850–61.

Gottschalk, K.K. 2003. “The Ecology of Response to Student Essays.” ADE Bulletin 134-135: 49–56.

Graves, H. 2011. “Rhetoric, Knowledge and the ‘Brute Facts of Nature’ in Science Research.” In Writing in Knowledge Societies, edited by Doreen Starke-Meyerring, Anthony Paré, Natasha Artemeva, Miriam Horne, and Larissa Yousoubova, 179–92. WAC Clearinghouse.

Harris, M. 1979. “The Overgraded Paper: Another Case of More Is Less.” In How to Handle the Paper Load: Classroom Practices in Teaching English 1979-1980, edited by G. Stanford, 91–94. National Council of Teachers of English.

Haswell, R. 2006. “The Complexities of Responding to Student Writing; or, Looking for Shortcuts via the Road of Excess.” In Across the Disciplines, 3:91–94. WAC Clearinghouse.

Haswell, R. H. 2006. “Automatons and Automated Scoring: Drudges, Black Boxes, and Dei Ex Machina.” In Machine Scoring of Student Essays: Truth and Consequences, edited by P. F. Ericsson and R. H. Haswell, 57–78. Utah State University Press.

Kiefer, K., and J. Neufeld. 2002. “Making the Most of Response: Reconciling Coaching and Evaluating Roles for Teachers Across the Curriculum.” In Academic Writing. Vol. 3. WAC Clearinghouse.

National Council of Teachers of English and National Writing Project. 2011. “Framework for Success in Postsecondary Writing.” Report. NCTE/NWP.

Nilson, L. B. 2014. Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time. Sterling, VA: Stylus Publishing.

Perelman, L. 2009. “Data Driven Change Is Easy: Assessing and Maintaining It Is the Hard Part.” In Across the Disciplines, 6:91–94. WAC Clearinghouse.

Reynolds, J., R. Smith, C. Moskovitz, and A. Sayle. 2009. “BioTAP: A Systematic Approach to Teaching Scientific Writing and Evaluating Undergraduate Theses.” Bioscience 59 (10): 896–903.

Ruegg, R. 2015. “Differences in the Uptake of Peer and Teacher Feedback.” RELC Journal 46 (2): 131–45.

Saldaña, Johnny. 2015. The Coding Manual for Qualitative Researchers. 3rd ed. Sage Publications Ltd. https://us.sagepub.com/en-us/nam/the-coding-manual-for-qualitative-researchers/book243616.

Swilky, J. 1991. “Cross-Curricular Writing Instruction: Can Writing Instructors Resist Institutional Resistance?” Report. ERIC Clearinghouse.

Szymanski, E. A. 2014. “Instructor Feedback in Upper-Division Biology Courses: Moving from Spelling and Syntax to Scientific Discourse.” Across the Disciplines 11. http://wac.colostate.edu/atd/articles/szymanski2014.cfm.

Trupiano, C. 2006. “Best Classroom Practices.” In Revision: History, Theory, and Practice, edited by A. Horning and A. Becker, 177–97. WAC Clearinghouse/Parlor Press.

Underwood, J. S., and A. P. Tregidgo. 2006. “Improving Student Writing Through Effective Feedback: Best Practices and Recommendations.” Journal of Teaching Writing 22 (2): 73–97.

WPA Council. 2011. “Framework for Success in Postsecondary Writing.” Council of Writing Program Administrators, National Council of Teachers of English, & National Writing Project. http://wpacouncil.org/files/framework-for-success-postsecondary-writing.pdf.


Copyright © 2019 A. Daniel Johnson. All rights reserved.