

Teaching Writing in the Age of AI

STRATEGIES FOR TEACHERS OF SECONDARY STUDENTS

TROY HICKS & KRISTEN HAWLEY TURNER


Copyright © 2026 by Solution Tree Press

All rights reserved, including the right of reproduction of this book in whole or in part in any form. This book, in whole or in part, may not be included in a large language model, used to train AI, or uploaded into any AI system. AI outputs featured in the text were generated with the assistance of ChatGPT, Claude, Gemini, Copilot, Perplexity, Rytr, Quillbot, Sudowrite, and Wordtune.

555 North Morton Street
Bloomington, IN 47404

800.733.6786 (toll free) / 812.336.7700
FAX: 812.336.7790

email: info@SolutionTree.com
SolutionTree.com

Visit go.solutiontree.com/technology to download the free reproducibles in this book.

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

Names: Hicks, Troy, author. | Turner, Kristen Hawley, author.

Title: Teaching writing in the age of AI : strategies for teachers of secondary students / Troy Hicks, Kristen Hawley Turner.

Other titles: Teaching writing in the age of artificial intelligence

Description: Bloomington, IN : Solution Tree Press, [2026] | Includes bibliographical references and index.

Identifiers: LCCN 2025012593 (print) | LCCN 2025012594 (ebook) | ISBN 9781960574848 paperback | ISBN 9781960574855 ebook

Subjects: LCSH: Penmanship--Technological innovations | Artificial intelligence--Educational applications

Classification: LCC LB1590 .H53 2026 (print) | LCC LB1590 (ebook) | DDC 652/.1--dc23/eng/20250912

LC record available at https://lccn.loc.gov/2025012593

LC ebook record available at https://lccn.loc.gov/2025012594

Solution Tree

Cameron L. Rains, CEO

Edmund M. Ackerman, President

Solution Tree Press

Publisher: Kendra Slayton

Associate Publisher: Todd Brakke

Acquisitions Director: Hilary Goff

Editorial Director: Laurel Hecker

Art Director: Rian Anderson

Managing Editor: Sarah Ludwig

Copy Chief: Jessi Finn

Production Editor: Gabriella Jones-Monserrate

Copy Editor: Jessica Starr

Proofreader: Elijah Oates

Cover Designer: Fabiana Cochran

Text Designer: Laura Cox

Content Development Specialist: Amy Rubenstein

Associate Editor: Elijah Oates

Editorial Assistant: Madison Chartier

Acknowledgments

As with all publishing efforts, a book like this emerges from many rounds of conversation, deliberation, and encouragement with one another, our colleagues, our publisher (Solution Tree), and our families.

First, we acknowledge our own author collaboration, which has now stretched past the fifteen-year mark. Finding opportunities to learn with one another over many articles, workshops, webinars, and books has reaffirmed our belief that scholarly work can be both intellectually and emotionally fulfilling despite the pressures of academia.

This connects closely to our appreciation for our colleagues. Over many conversations, email threads, group chats, conference sessions, video meetings, and social media exchanges, we are reminded of the generous spirit guiding those who share our love for education. Our colleagues from the National Writing Project (NWP) have pushed our thinking regarding digital writing for many years. Paul Allison, Andrea Zellner, Tanya Baker, Christina Cantrill, and Elyse Eidman-Aadahl have all explored the boundaries in their use of generative AI and encouraged us to consider both the affordances and limitations of new technologies.

From our individual NWP sites, Andy Schoenborn (Chippewa River Writing Project) and Jill Stedronsky (Drew Writing Project and Digital Literacies Collaborative) inspire us with their personal and classroom inquiries. The faculty and students connected to our academic programs—especially Kate Grunow, Jennifer Parker, Joy VerPlanck, Amber White, Dan Lawson, Jenna Corraro, and Alex Rudniy—introduce us to new technologies, allow us to “try on” new pedagogical practices, and serve as thought partners along the way. Likewise, the K–12 students who eagerly share their perspectives with us and allow us to ask questions, make mistakes, and keep coming back to them for more insight deserve special acknowledgment. Kristen’s children, Megan and Ryan, shared their writing and thoughts for this book, and Troy also had many informal conversations with his children about their uses of generative AI.

Next, to the team at Solution Tree—including Claudia Wheatley, content specialist and author liaison; Laurel Hecker, editorial director; Kendra Slayton, publisher; Todd Brakke, associate publisher; Rian Anderson, art director; and, especially, Gabriella Jones-Monserrate, our production editor, who helped us cross the finish line—we thank you. Having served in the role of reviewer many times ourselves, we also thank the ten educators with whom Solution Tree partnered to provide one final round of feedback as we revised our way through the final manuscript preparation.

Finally, of course, we appreciate our families.

Troy offers thanks to his wife, Sara, for her continued support, as well as to his children for continuing to show him unique uses of AI. Additionally, he would like to thank his best friend, Steve Tuckey, for their ongoing discussions about the many uses of AI in education, as well as his brother, Barry Hicks, for discussing the possibilities of AI in business and nonprofit work. Never stop generating ideas and questions so our conversations may continue.

Kristen thanks her children for being skeptical users and for humoring her whenever the writing teacher inside wants to test out some activities. To the rest of her family—Matt, Mom, Pop, and Rick—thanks for listening to me talk it out and for living on the same tangent with me when we need to.

Solution Tree Press would like to thank the following reviewers:

Tonya Alexander

English Teacher

Owego Free Academy

Owego, New York

Lindsey Bingley

Literacy and Numeracy Strategist

Foothills Academy

Calgary, Alberta, Canada

Erin Kruckenberg

Fifth-Grade Teacher

Jefferson Elementary School

Harvard, Illinois

Stephanie Reynolds

English Teacher

Owego Free Academy

Owego, New York

Chris Siegle

Assistant Principal

Robert Thirsk High School

Calgary, Alberta, Canada

Lauren Smith

Assistant Director of Elementary Learning

Noblesville Schools

Noblesville, Indiana

Sarah Svendsen

Prekindergarten Teacher

Pine Crest School

Boca Raton, Florida

Barbara Tasber

Director of Learning, Leadership and Technology

Windsor Central School District

Windsor, New York

Kim Timmerman

Principal

Adel-De Soto-Minburn Middle School

Adel, Iowa

Bryn M. Williams

Principal

Coquitlam School District #43

Vancouver, British Columbia, Canada


About the Authors

Dr. Troy Hicks is a professor of English and education at Central Michigan University (CMU), where he collaborates with K–12 colleagues to explore how they implement newer literacies in their classrooms. He serves as acting associate dean of the College of Education and Human Services and teaches master’s and doctoral courses in educational technology. Additionally, he directs the Chippewa River Writing Project, a site of the National Writing Project (NWP). He has led various grant-funded projects, including two Teaching with Primary Sources projects through the Library of Congress, a National Endowment for the Humanities summer institute, and numerous grants from the NWP. A former middle school teacher, Hicks has earned numerous accolades, including CMU’s Excellence in Teaching Award (2020), the Initiative for 21st Century Literacies Research Divergent Award for Excellence (2020), and CMU’s Honors Program Honors Faculty of the Year award (2024).

In 2019, Hicks earned recognition as an International Society for Technology in Education (ISTE) Certified Educator. In 2021, he took a facilitation certification course in digital storytelling through StoryCenter; in 2022, he became an affiliated faculty member of the Media Education Lab. He consults regularly with educational companies and nonprofit organizations. Notably, he is a regular presenter at the annual conventions for the National Council of Teachers of English (NCTE) and numerous local, state, regional, national, and international professional development events. He has authored, coauthored, and edited many resources broadly related to teaching literacy in the digital age, including sixteen books and dozens of articles, chapters, blog posts, and other media.

Kristen Hawley Turner is a professor and chair of education at the Caspersen School of Graduate Studies at Drew University in New Jersey. Her research focuses on the intersection between technology and literacy, and she works with teachers across content areas to implement effective literacy instruction and incorporate technology in meaningful ways. A former English and social studies high school teacher, she collaborates with K–12 colleagues and explores how they implement new literacies in their classrooms. Turner is the founder and director of the Drew Writing Project and Digital Literacies Collaborative, a site of the National Writing Project. She has served on the Site Development Leadership Team for the National Writing Project and as chair of the Digital and Media Literacy Working Group for the Children and Screens Institute.

Turner is the author of several journal articles and book chapters dealing with adolescent digital language, technology, teacher education, and writing instruction, and she regularly provides professional development workshops for teachers. She is the coauthor of Connected Reading: Teaching Adolescent Readers in a Digital World and Argument in the Real World: Teaching Students to Read and Write Digital Texts and the editor of Ethics of Digital Literacy: Developing Knowledge and Skills Across Grade Levels. Her latest projects include Writing Still Matters with the National Writing Project. She is a cofounder of the Screentime.me research project, the Technopanic Podcast: Living and Learning in a Digital Age, and the Writers Who Care peer-reviewed blog. She presents regularly at local, state, and national conferences, including the annual convention for the NCTE.

Turner has been recognized for both her teaching and her scholarship, including the Divergent Award for Excellence in Literacy in a Digital Age Research (2024); National Technology Leadership Award (2014, 2024); Richard A. Meade Award for scholarship in English education (2022); Linda Rief Award, Honorable Mention, for outstanding writing (2021); Divergent Publication Award for Excellence in Literacy in a Digital Age Research (2021); Innovative Leader Award for the Conference on English Leadership (2016); Dean Robert Ready Award for Putting the Humanities to Work at Drew University (2021); Thomas H. Kean Scholar/Mentor of the Year, Drew University (2020); and the Civic Engagement Faculty Leadership Award, Drew University (2020). She blogs at http://twinlifehavingitall.blogspot.com about being a working mother of twins.

To book Troy Hicks or Kristen Hawley Turner for professional development, contact pd@SolutionTree.com

Introduction

Being and Becoming Writing Teachers in an Age of Generative AI

Although the public release of ChatGPT in November 2022 was a watershed moment that prompted society to consider the role of generative artificial intelligence (genAI) in education, those of us in K–12 and higher education spaces had been having many conversations about numerous forms of AI for months and years before ChatGPT’s release. These conversations ranged from fear of what the technologies might do (or are already doing) to us as humans to enthusiasm for the possibilities of transformation. In reality, AI tools (or similar technological solutions, like automated essay scoring and spelling and grammar checkers) have been part of the conversation about teaching writing for decades, and each new tool, when introduced, has provoked reactions ranging from angst to excitement.

From all the perspectives we’ve heard, we repeatedly return to a simple truth: We need to remember our core values as educators.

Therefore, as we describe what it means to be a writer—and a teacher of writing—in a digital world that is increasingly infused with genAI writing tools, we begin this book by reaffirming the following.

• Who we are as writers and humans navigating a genAI world

• What we believe as writers, teachers, and teacher educators working to use genAI for teaching and learning

• The caveats of using genAI in the writing classroom, which will continue to provide us with ethical dilemmas as we teach our students to use these tools in productive, responsible ways

It has been quite some time since either of us has written a “philosophy of teaching writing” statement, though it seems like this book is a good place to do just that. We have been thinking about what we believe—as have thousands of other writing teachers—much more since the introduction of ChatGPT marked a seismic shift in the technologies that influence how we write and communicate. As we began our plans for this book, we thought revisiting our pedagogical roots would be pertinent to help us articulate who we are and what we believe as writers, teachers, and humans interacting with genAI.

Our Philosophy of Teaching Writing

To begin, our understanding of effective writing instruction has been influenced by a few factors.

First, we began our careers as English language arts teachers in the late 1990s and early 2000s—and, yes, it still feels weird to us to say “the turn of the century” when we describe those early days—just as internet technologies and web 2.0 promised to transform teaching and learning.

Second, over the two decades since then, our research with teachers on how technologies impact literacy has shaped our understanding of what it means to teach writing. This work also includes the many courses we have taught and workshops we have conducted in collaboration with preservice and in-service teachers as they intentionally integrated digital literacies into their practice.

A third and equally important influence is our continued learning through connections with colleagues in our professional organizations, most notably the National Writing Project (NWP) and the National Council of Teachers of English (NCTE). Colleagues in these networks have, for decades, pushed both of us to always articulate what we mean when talking about writing in the broadest sense of the activity, as well as what it means to be a teacher of writing. Thus, it is worth pausing for a moment on this final factor.

We are both, at heart, Writing Project teachers who teach and lead with a stance of inquiry, focus on the growth of writers before the correction of writing, and strive to cultivate writers who share their voices publicly and ethically. Our experiences as participants in NWP-sponsored events have helped us rethink our writing instruction. Though we plan in terms of an instructional cycle—with assessment guiding instructional practice and reflection living at the center—we think in terms of a writerly cycle. In a writerly cycle, writers are the center, and everything we do as teachers of writing is intended to cultivate them.

We do not imagine this as the classic brainstorm, draft, revise, edit, publish cycle that is immortalized in countless handbooks, worksheets, posters, and graphics from our own K–12 schooling experiences and stretching into our teaching experiences. When professors Linda Flower and John R. Hayes (1981) introduced a cognitive process theory of writing, they pushed back on a linear process, noting that writing occurs recursively. Their model centered on the writer, and their original conclusion rings true, even in the age of genAI:

By placing emphasis on the inventive power of the writer, who is able to explore ideas, to develop, act on, test, and regenerate his or her own goals, we are putting an important part of creativity where it belongs—in the hands of the working, thinking writer. (Flower & Hayes, 1981, p. 386)

To that end, we contend that a writerly cycle in the age of genAI is a series of interconnected, overlapping processes that writers work through independently, with the support of a peer or teacher, and with the assistance of genAI tools.

With NWP-informed practices at our core, we turn to our other professional organizations to help flesh out what we believe. The NCTE position statement, Understanding and Teaching Writing: Guiding Principles (Adler-Kassner, Baca, & Fredricksen, 2018), answers three key questions by articulating ten principles.

1. What is writing? The NCTE provides a succinct, initial definition: Writing is “the act of creating composed knowledge” (Adler-Kassner et al., 2018). This definition is then further nuanced and complicated by describing how writers work, both individually and collaboratively, across contexts and modalities.

2. Who are writers? The NCTE contends that “everyone has the capacity to write” (Adler-Kassner et al., 2018). Subsequent examples discuss the ways that one’s identity as a writer reflects “different ideologies, values, and identities” and—as we will explore more in this book—that there are “various technologies [writers] have for the production, consumption, and distribution of forms of composed knowledge” (Adler-Kassner et al., 2018).

3. What are the essential principles for teaching writing? The NCTE emphasizes the need for creating “classroom communities of writers” that engage in feedback; exploration of different genres; transparent assessment; and sustained, meaningful writing experiences (Adler-Kassner et al., 2018).

We have adapted our answers to these questions as technologies have developed. For example, in our two previous books, we explored how an inquiry-driven approach to examining multimodal arguments, such as public service announcements or infographics, can scaffold writers into the creation of their own media texts, as well as the ways readers can adapt their practices for different encounters with digital texts (Turner & Hicks, 2015, 2017). Through this work, we held firm to our commitment that writing is an act of knowledge creation, that all writers have the capacity to write, and that these essential practices remain—even at the risk of sounding redundant.

So, while genAI is on the minds of teachers, school administrators, parents, politicians, and the general public, it seems appropriate to reconsider what it means to teach writing as educators grapple with when, why, and how genAI could be used to support their students. Yet, even as we engage in this complex thinking, we will remain loyal to the principles outlined by the NCTE and reaffirmed in our work as writers, as teachers of writing, as longtime members of and contributors to the NCTE, and as directors of NWP sites.

It seems as though genAI’s ability to be more directly integrated into a writing process raises concerns—even more so than when users were working to understand technologies like websites, blogs, wikis, podcasts, collaborative word processing documents, and other web 2.0 technologies. Whether a writer uses an external tool to answer questions or generate ideas—or whether genAI becomes commonplace and built into the applications we use each day, such as email and word processing—the simple fact remains: GenAI exists, and it is accessible to writers of all ages. Because of this increasing ubiquity, genAI raises questions about writing and the teaching of writing.

In the remainder of this introduction, we offer our responses to the same three questions posed in the NCTE policy document (Adler-Kassner et al., 2018). This will provide a frame for you, our readers, as we move into the strategies in the rest of this book. It is our belief that returning to the foundations of who we are and what we believe best equips us to contend with challenges that present themselves in the era of genAI. From there, we can consider how new genAI technologies work with (or against) those core beliefs.

As with any technology we have written about in the past, we conducted our own inquiry for this book. You will see the ways we engage with genAI as writers and as teachers of writing, modeling the kinds of thinking and interactions that you might provide for your own students.

How We Use AI in This Book

Throughout the book, we share snippets from our various interactions with genAI tools, clearly indicating the incorporated responses and acknowledging uses of those tools (such as ChatGPT, Perplexity, Copilot, Gemini, and others) when they show up. The conversations we’ve had with genAI while crafting this text have been fruitful as we consider ways that all teachers might begin the ethical yet challenging conversations with their own writers about when, why, and how to use—and document the use of—genAI. In doing this work, our decision for how to document our own use of genAI in this book became pragmatic. While style guides such as the Modern Language Association (MLA) and the American Psychological Association (APA) provide formats for citing genAI output, we have chosen not to provide an in-text citation for every output because we clearly call them out. You can even access some of our outputs via this book’s landing page (visit go.SolutionTree.com/technology). We have worked to be transparent about where the text comes from, and we feel this is the most ethical, practical approach for this particular writing task: a book that is constrained by the necessity of printed pages.

The Practice of Writing With GenAI

This initial section demonstrates the first example of our inquiry into writing; we wonder how genAI might share ideas about the moves we ask of our student writers. We used ChatGPT to provide an initial response to the three questions posed in the NCTE statement (Adler-Kassner et al., 2018). We analyzed the outputs and elaborated on them to help us articulate our own thinking. We found this to be a valuable experience as writers and one we think is worthy of bringing into the writing classroom. See the following sections for ChatGPT’s answers to three questions: What is writing? Who are writers? and What are the essential practices for teaching writing?

What Is Writing?

Here is ChatGPT’s response to the question, What is writing?

Writing is the process of expressing thoughts, ideas, or information through the use of written language. It involves selecting words, organizing them into coherent sentences and paragraphs, and conveying meaning to readers. Writing can take many forms, including fiction, non-fiction, poetry, journalism, academic papers, technical documents, letters, emails, and more. It serves various purposes, such as entertaining, informing, persuading, documenting, or expressing creativity. Writing can be a solitary activity or a collaborative one, and it plays a fundamental role in communication, education, literature, and countless other aspects of human civilization.

As ChatGPT’s responses often do, this one got us thinking and talking. In the broadest sense, we contend that ChatGPT got the definition right. We can then draw these conclusions.

• The idea that writing is a process is true, both in what we know and believe as writers and in what research shows about how writers work, as already noted previously in this introduction (Adler-Kassner et al., 2018; Flower & Hayes, 1981) and among countless others in the field of writing research.

• Additionally, the idea that writing is “expressing thoughts, ideas, or information” is also in line with the idea that writing is thinking (Hillocks, 1995). It is a process of inquiry that allows writers to understand what they want to say as well as to express those thoughts to a reader.

• Writing is fundamental in the ways described here, and it is a fundamental aspect of creating and disseminating knowledge across disciplines and contexts, as evidenced by our field’s longtime interest in writing across the curriculum.

So, we agree with some of what ChatGPT has generated as a definition of writing. As with many other responses that it and other genAI tools offer, it is a decent place to begin the conversation. However, ChatGPT’s definition is rather traditional. The idea that writing relies on written language and “involves selecting words, organizing them into coherent sentences and paragraphs” certainly limits what writing actually is to a singular modality. Research in our field—and our own experiences as writers—shows that, in a digital age, writing is more than just written words. In addition to our own work, one useful summation of these ideas comes from the NCTE policy document “Definition of Literacy in a Digital Age” (Witte, Bass, O’Byrne, Price-Dennis, & Sibberson, 2019), which presents writing as multimodal—an act of thinking and expression of that thinking. It can include words in the traditional alphabetic sense, and it can also include images, video, audio, maps, or more to convey meaning. Due to the hyperlinked nature of writing in a digital age, the process is inherently interactive, connecting writers and readers in real time and across time. It relies on—perhaps even takes advantage of—the affordances of the tools available to the writer. In this sense, writing involves an understanding of alphabetic language (as it always has), as well as an understanding of design elements across modes and media.

Because of the complexity of writing in a digital age, writing is, perhaps—even more than it always has been—both an intellectual challenge and a potentially joyful process. As parents, educators, and humans in a complex world, we believe that if educators are going to introduce these kinds of challenging yet potentially profound practices into students’ lives, they must know how to do it intentionally. Put another way, introducing genAI into the writing classroom cannot be done in a one-off lesson, with the hope that writers will understand the ethical and practical implications of using it. Instead, teachers must intentionally require writers to do the hard work of writing—to engage in the productive struggle that allows them to think through ideas; to understand what they want to say; and to express new insights in the media, mode, or genre that best represents a purpose and speaks to an audience.

In the digital landscape, both the processes and outcomes of writing are shifting. Writing is still the same, yet, throughout history, technologies have impacted literacy practices, and the technologies of genAI are no different (Fischer, 2021). In a world with genAI, writing may be more about remixing than just straightforward content generation. This idea is something we will come back to later in the book.

Who Are Writers?

Continuing with our inquiry, ChatGPT produced the following response when asked, “Who are writers?”

Writers are individuals who use language to express ideas, tell stories, convey information, or evoke emotions through written communication. They can work across various mediums including books, articles, essays, poems, scripts, speeches, and more. Writers come from diverse backgrounds and may specialize in different genres or subjects, such as fiction, nonfiction, journalism, academic writing, or technical writing. Some writers are known for their imaginative storytelling, while others focus on providing factual information or persuasive arguments. Writing can be a profession, a hobby, or a form of self-expression for many people around the world.

We first considered what ChatGPT gets “right” in this definition. We appreciate the many verbs that describe what writers do (express, tell, convey, and evoke) and the mediums in which they can do it. We noticed the nod to “diverse backgrounds” and the many specialties that writers may take on. Also, ChatGPT addresses the idea that there are various purposes for which a writer composes, though the bot did not use the word purpose itself.

Still, ChatGPT seemed to overlook one crucial point for teaching writing: Whether writing for ourselves or for others, we all write for an audience of readers. Writers are metacognitive beings, making decisions that will directly influence their audience. They take advantage of text structures and literary devices to be clearer and more concise or to elaborate, explore, discover, and ruminate in other situations. To connect to the idea that writing is multimodal, writers are individuals who put words into sentences and craft images, video, audio, and more.

In an age of genAI, the question of who writers actually are is becoming complicated. Is ChatGPT a writer? Yes, it produces text that conveys meaning, but it doesn’t think the same way humans do because it cannot truly understand what it is producing. (We cover what ChatGPT and similar kinds of genAI actually do in the next chapter.)

So, in some ways, genAI does some of the work that writers do. If a human writer uses genAI to support their process and help them develop the meaning they want to convey to readers, then—in some ways—the technology becomes a cowriter. Ultimately, the degree to which genAI contributes to an author’s process, purpose, and meaning is determined by a human’s writerly sense of agency. This theme is threaded throughout this book. Returning to the principles we stated previously, our goal for teaching students to employ genAI for any particular task is to help them become more agentive in the work they do with technology in their writing.

To be more concise, we believe that writers are agentive; genAI is a tool to help them improve their craft.

What Are the Essential Practices for Teaching Writing?

The NCTE encourages teachers to consider four principles as they design classroom practices for teaching writing (Adler-Kassner et al., 2018):

1. Writers grow within a context, culture, and community of feedback.

2. Writers grow when they broaden their repertoire, and when they refine their judgment in making choices with their repertoire.

3. Assessment should be transparent and contextual, and it should provide opportunities for writers to take risks and grow.

4. Writers grow when they have a range of writing experiences and in-depth writing experiences.

These four principles distill years of research into effective writing instruction. Interestingly, ChatGPT’s bulleted list of specific practices also reflects pedagogy that is research-based.

• Modeling: Teachers should demonstrate various aspects of the writing process, including brainstorming, drafting, revising, editing, and publishing. Modeling allows students to see the thought process behind effective writing.

• Explicit instruction: Provide clear explanations of writing concepts, techniques, and strategies. Break down complex skills into manageable steps and scaffold instruction to support students as they learn.

• Writing workshops: Create opportunities for students to engage in writing workshops where they can receive feedback from peers and the teacher. Peer feedback can help students develop critical thinking skills and improve their own writing through revision.

• Writing across the curriculum: Integrate writing into various subject areas beyond just language arts or English classes. Writing in science, social studies, math, and other subjects helps students see the relevance of writing in different contexts.

• Individualized instruction: Recognize that students have diverse abilities, interests, and learning styles. Provide differentiated instruction to meet the needs of all learners, including struggling writers and advanced writers.

• Authentic writing tasks: Assign writing tasks that have real-world relevance and purpose. Authentic writing tasks could include writing letters, emails, reports, essays, stories, or blog posts on topics that are meaningful to students.

• Feedback and revision: Teach students how to give and receive constructive feedback on their writing. Encourage revision as an essential part of the writing process, emphasizing that writing is a continual process of improvement.

• Use of technology: Incorporate technology tools and resources to enhance the writing process. This could include word-processing software, online writing platforms, digital storytelling tools, and multimedia resources.

• Cultivating a growth mindset: Foster a growth mindset in students by emphasizing the idea that writing skills can be developed through effort, practice, and perseverance. Encourage students to embrace challenges and learn from mistakes.

• Celebrating writing: Create opportunities to celebrate students’ writing achievements and successes. This could involve publishing student work, organizing writing showcases or readings, or participating in writing contests and competitions.

This output is a good list—one we would likely share as an outline of key elements in writing pedagogy for preservice teachers in our courses. The actual pedagogies to meet each of these practices are quite complicated and involve a specific kind of intentionality in design. For instance, teachers and students do not simply create writing celebrations without first building strong relationships and a sense of classroom community. Without connections, celebrations feel perfunctory. Students may then, paradoxically, feel disconnected from their peers and lose a sense of pride in their own writing.

Similarly, it is unwise to encourage teachers to use technology without having a clear sense of purpose for when, why, and how specific tools could be used. For instance, using Google Docs for composing should not just be a choice of convenience (the phrase “we are a ‘Google district’” is often heard as a reason). Instead, Google Docs should be used because it offers features that support the pedagogical principles of substantive feedback and revision, as well as opportunities for digital writing practices like hyperlinking, embedding media, and voice dictation. Individual educators may not always have control over the technology tools their district selects, including the use or nonuse of genAI. Still, it is imperative to ask questions of principals, IT staff, curriculum directors, and other educational leaders when planning to use educational technologies, especially genAI.

How This Book Integrates GenAI Into Writing Instruction

In this book, as we (re)define our philosophy of teaching writing, we gird its contents with the NCTE principles noted previously, underscored by the specific practices articulated by ChatGPT. One additional layer is that—in addition to the print-based modes of writing that ChatGPT articulated in its descriptions—we also encourage our colleagues to embrace opportunities for digital writing. GenAIs that can produce images, videos, and audio are becoming more sophisticated, even those that are available for free. To the extent that educators ask their students to practice media literacy skills and compose various texts, using genAI for the written word is simply a starting point.

This book is dedicated to what writerly, agentive practices look like in an age of genAI, which has shifted our thinking about what writing is and who writers are. If educators root themselves in literacy practices they know are effective, they can find many points for introducing genAI as a tool in the writing classroom. GenAI can enhance both educators’ pedagogical practice and writers’ daily lives.

As we close this introduction, which describes who we are and what we believe about teaching writing, we want to call out a missing piece from our identities in this discussion of genAI. Neither of us is a computer scientist, statistician, or linguist. We come to the pedagogical work described in this book through many years of teaching writing, collaborating with teachers of writing, and—through the process of creating this book—experimenting with many genAI writing tools. We draw from our experiences with colleagues in the NWP, guided by the mantra that teachers of writing must be writers themselves, especially in a digital age and with various technologies at hand. We also connect to ideas from the NCTE, the professional home for thousands of educators who explore the teaching of writing and digital literacies. Throughout the summaries of research and lesson ideas shared in this book, we make a concerted effort to describe genAI in ways that make sense to help you, the reader, better explain the tools to your colleagues and students.

Chapters 1 and 2 explain how genAI works and provide a framework for how it can be used in the classroom or by writers themselves. We provide more background on what genAI is in chapter 1, “Understanding GenAI, Addressing Concerns, and Exploring the Technology Behind the Tools,” by defining the term generative AI, addressing general concerns the public and educators have about AI, and gauging how they relate to writing teachers’ work as it has changed over time.

In chapter 2, “Using a Framework for Thinking About GenAI in the Writing Classroom,” we outline how our personal inquiry has helped us see the possibilities for utilizing genAI in the following roles.

• Thinking partners: To explore topics, play with genres and grammatical structures, develop ideas, and understand what the writer wants to say and to whom

• Research assistants: To curate, annotate, and summarize sources or to put information in conversation

• Cowriters: To support the composition process along the way by creating downdrafts and providing feedback, editing, or questioning prompts

From there, chapter 3, “Developing Writers’ Agency and a Sense of Curiosity With GenAI,” delves more deeply into descriptions of specific genAI tools. Chapter 4, “Using GenAI to Examine and Begin Writing Tasks,” explores how we can leverage genAI to understand and approach writing tasks.

After these establishing chapters, we turn to the writing classroom, offering specific ideas for teaching student writers how to prompt genAI, use genAI to get started with their writing, and incorporate genAI tools into their writing processes. Each chapter begins with the inquiry questions we explored as we wrote this book. This includes chapter 5, “Engaging With GenAI for Argumentative Writing,” where we look more closely at argumentative writing, and chapter 6, “Engaging With GenAI for Narrative Writing,” where we explore narrative writing, oftentimes without genAI, since the nature of narrative and personal writing, we contend, requires our students to rely more on their own thoughts, experiences, and skills. Chapter 7, “Empowering Writers’ Agency in the Age of GenAI,” is our conclusion, and we reflect on the concerns we have heard and offer our responses. We are still exploring the strategies featured in this book in our own teaching, and we hope they are helpful to our readers as they discover new ways to prompt genAI and use it as a thinking partner, research assistant, and cowriter.

Throughout the book, we aimed to combine both an interest in and passion for the intentional integration of digital literacies that we have demonstrated over the years, along with the kinds of critical perspectives on these technologies and literacies that must guide our work (Turner & Hicks, 2022). If conversations about genAI fail to include this kind of criticality—or as scholars Daniel G. Krutka, University of North Texas; Marie K. Heath, Loyola University Maryland; and Lance E. Mason, Indiana University Kokomo (2020) call it, a technoskepticism (a concept detailed in chapter 2)—then educators and students alike could fall into the trap of unquestioningly thinking that all genAI tools are inherently useful and good. While it is no surprise that people think these tools are advantageous, an understanding of what they are and how they work (as covered in chapter 1) is important to have before embarking on further exploration.

On a deeper level, there are concerns about rigor and the ways that genAI could curtail critical thinking related to specific writing skills (Thompson, 2025). Is it doing too much of the drafting work for the writer? Is it summarizing too much for students as readers? These concerns cut to an emotional core about what teaching and learning, as well as what reading and writing, should be. Students need to grapple with the process of putting words into sentences and sentences into paragraphs, then revise those words. Students also need to comprehend text and extract the main ideas, as well as to question and push back on those texts.

We aren’t sure we can provide a clear path forward for this concern about rigor. More than a few times since 2023 (and in the process of writing this book), we have each been stuck on an idea and turned to genAI for help. Just as we can turn to these tools, so can our students. Simply put, genAI exists, and we need to figure out how to coexist with it as writers, teachers, and humans. To that end, we will be mindful of this concern—as well as all the others listed throughout this book—as we move forward in presenting lesson and assignment ideas and as we strive to become better teachers of writing in the age of genAI.

We begin by working to understand what genAI is, addressing many of the concerns about it, and then examining the ways these tools are designed and deployed.

Chapter 1

Understanding GenAI, Addressing Concerns, and Exploring the Technology Behind the Tools

Depending on whom you ask, there is some debate about what genAI actually is. Some computer scientists, linguists, and other scholars contend that tools like voice assistants and the algorithms that choose the next video to watch or song to listen to are not artificial intelligence, and some would even contend that tools like ChatGPT are not either. As a quick example, some might argue that voice dictation and image classification are still kinds of artificial intelligence, and others would suggest that these are just technologies that are baked into other web-based applications, software, and hardware. This is all a bit confusing for a general audience; it’s even confusing for us as educators who have studied technology for many years.

To that end, we will do our best to describe some key concepts in this chapter. We begin with the following idea from two prominent literacy scholars, both members of the renowned New London Group (1996, 2000) that revitalized literacy studies with their “pedagogy of multiliteracies” work in the late 1990s and early 2000s. Authors and academics Mary Kalantzis and Bill Cope (2025) offer a contemporary perspective on what genAI—beyond the lines of code that comprise it—means for the teaching and learning of literacy:

Generative AI puts out of business the narrow, utilitarian literacy pedagogies with their standardized tests to match. It moves literacy into a more serious, challenging, and much more interesting place—cognitively as an embodied and material practice. Literacy can no longer afford to be narrowly instrumental and functional. (p. 21)

This idea that literacy is no longer instrumental or functional as a mere set of skills and instead will become even more embodied and material as people come to rely on and integrate genAI in their daily writing tasks strikes us as both full of potential and—as outlined later in this chapter—fraught with peril. Thus, we had more questions as we continued our own inquiry into what genAI is, ones that can guide readers of this chapter, too.

• What is genAI in a broad sense, and why does it seem to be more concerning to literacy educators now than other technologies have in the past?

• How do large language models (LLMs) and generative pretrained transformers work, and what are their affordances and constraints?

• What are the implications for teachers of writing as they consider the increasing role genAI will play in their students’ literate lives?

Deep consideration of any technology requires additional literacy practices. Two scholars have described these practices in ways that help situate writers’ relationships to the screen. Professor Earl Aguilera (2017) distinguishes between what is on the screen, behind the screen, and beyond the screen, whereas educator Tom Liam Lynch (2017) refers to super-screenic (above-the-screen) literacy as well as sub-screenic (below-the-screen) literacy. In each of these frames, Aguilera (2017) and Lynch (2017) encourage educators to talk with and think through the interfaces and options presented to users as they interact with apps, programs, and websites, as well as to interrogate the algorithms and coding that drive them.

Moreover, thinking beyond the device encourages educators to question technology companies’ economic motivations and how they profit, often at the expense of users’ privacy and the potential harm to the environment. As it relates to genAI, researcher Kate Crawford’s (2021) book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, as well as computer scientists Arvind Narayanan and Sayash Kapoor’s (2024) AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, further extend these earlier arguments made by Aguilera (2017) and Lynch (2017).

As many computer engineers themselves struggle to understand what is happening in the “black box of AI,” we begin this chapter with some background on what genAI is, the type of genAI that powers LLM tools like ChatGPT, and common concerns the general public has about genAI (Bagchi, 2023). This leads to an explanation of what genAI itself represents for us as writers and teachers and why, in a broader sense, teaching students to use genAI is both an act of AI literacy and a connection to what Kalantzis and Cope (2025) call an “embodied and material practice” that readers and writers in our classrooms need to understand, integrate, and critique (p. 21).

A Brief Overview of Generative Artificial Intelligence

When we hear artificial intelligence, or AI, each of us likely imagines something different, spurred by our own experiences with literature, media, and news. Notice, we are specifically using the term AI here rather than genAI for just a moment, as these kinds of artificial intelligence are making logical decisions based on rules, as compared to generating unique series of words, images, or video clips. From science fiction author Isaac Asimov’s (2008) classic I, Robot—which presented the three laws by which robots should function (don’t injure humans, right?)—to tragic examples of autonomous vehicles causing accidents and killing pedestrians, our entertainment and information media ecosystems have been infused with both a fascination with and a mistrust of AI and the machines it powers. Common refrains in the post-ChatGPT era include mentions of fictional computers from movies—like Skynet from The Terminator (Cameron, 1984) and HAL 9000 from 2001: A Space Odyssey (Kubrick, 1968)—and more memes than we can count.

Yet, despite individual conceptions of AI and fictional interpretations in the media, there are ways to clearly describe and define AI even as it constantly evolves. When writers can confirm accuracy based on other reputable sources and their own knowledge, Wikipedia can serve as a reasonable way to introduce general definitions. For AI, it has been useful to return to this source as the definition evolves. For instance, while writing an early draft of this book, Wikipedia succinctly defined AI as “the intelligence of machines or software, as opposed to the intelligence of living beings, primarily humans.” However, in the ever-expanding ways this resource changes, that definition changed to reflect “intelligence exhibited by machines, particularly computer systems” when we checked it again right before our final revisions (Wikipedia, 2024). AI is a field of study in computer science, though the work of creating tools that reflect “reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics” is an interdisciplinary endeavor, touching on many other fields (Wikipedia, 2024).

The evolution of AI writing tools challenges us to reconsider our relationship with technology while offering new interpretations of our written expression. From early automated essay-scoring systems to today’s LLMs and generative pretrained transformers, this long history reveals how AI has been gradually integrating with and reshaping our writing practices.

The Evolution of Writing Tools

The field of linguistics comes into play with generative tools like ChatGPT, as this tool’s ability to create human-like writing seems startling when compared to other tools available to writers. Those who study language analyze how well genAI emulates text produced by writers and, along with computer scientists, work to understand how genAI can be analyzed in terms of its perplexity (how predictable its particular combinations of words are) and burstiness (the variety in phrase and sentence patterns over an entire document). It is also worth noting that it’s possible to bypass current genAI detection tools by simply asking genAI, through prompting, to use more or fewer attributes of perplexity and burstiness (Perkins et al., 2024; Tian, 2023). The introduction of genAI has certainly changed the literacy landscape: Where educators once asked students to add details and examples to their writing, they now consider these new terms (perplexity and burstiness) to describe the unique combinations that writers create.
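To make these two terms concrete, here is a minimal, hypothetical sketch (our illustration, not any detector’s actual algorithm). It treats burstiness as variation in sentence length and approximates perplexity with a toy word-frequency model; real detectors use far more sophisticated language models.

```python
import math
import statistics
from collections import Counter

def burstiness(text):
    """Toy burstiness: variation in sentence length relative to the mean.
    Higher values mean a more varied, human-like rhythm."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def unigram_perplexity(text, reference):
    """Toy perplexity: how surprising the text's words are under a simple
    word-frequency model built from a reference corpus (add-one smoothing)."""
    counts = Counter(reference.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    log_prob = 0.0
    words = text.lower().split()
    for w in words:
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

reference = "the cat sat on the mat and the dog sat on the rug"
print(burstiness("Short. A much longer and more winding sentence follows here. Tiny."))
print(unigram_perplexity("the cat sat", reference))        # familiar words: lower perplexity
print(unigram_perplexity("quantum flux inverter", reference))  # unseen words: higher perplexity
```

Text whose words the model has seen often scores a lower perplexity; text full of unexpected words scores higher, which is the intuition detectors rely on.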

Because of the drastic change in the nature of writing tools, genAI can seem brand new. In reality, AI tools—if not genAI in particular—have been part of our daily lives for quite a while. Though most readers of this book do not have self-driving cars, it is likely that a newer model car has lane assist and adaptive cruise control. Moreover, internet users have been collectively searching the web since the early 2000s, and watching the “up next” preview on Netflix or YouTube has been a useful (or, depending on one’s perspective, annoying) predictive AI function. For quite some time, people have also been talking to Alexa and Siri as if they are family members through reactive AI that can answer basic who, what, when, where questions, if not the how or why questions.

People have relied on earlier forms of AI or AI-like tools—spelling and grammar checkers as well as automated essay-scoring systems—that didn’t necessarily predict or create new text but did look for arrangements in the data (like patterns in the words, phrases, and sentences given). What seemed like magic from within the computer was actually pattern-recognition software that scanned words on a screen and made recommendations based on the training it had been given or made evaluations based on what it recognized. With those tools of the past checking grammar, spelling, and even entire essays, educators have become used to the ways that contemporary AI tools help writers.

As we all work to understand the potential for genAI tools and what they can do, we must first understand how computer scientists, linguists, and mathematicians conceptualized and created early models of automated writing tools. Anyone who has used a word processor, for instance, can see when it highlights “errors” in writing through spelling and grammar checks. Sometimes, that feature is unhelpful when the text purposefully manipulates standard conventions, but the markings are generally useful for catching unintended spelling errors or unclear grammatical constructions. Though there were questions about whether young writers should have access to these tools (in similar ways that mathematics teachers questioned the use of calculators), overall, spelling and grammar checkers have contributed positively to how writers learn and improve their craft.

Editors Annette Vee, Tim Laquintano, and Carly Schnitzler (2023) provide a history of such tools. They note that “early text generation worked with templates or statistical models . . . where the next words in a chain are determined by probabilities.” In short, in the past, computers were programmed with many rules about how language works in order to confirm the correct spelling of a word or the grammatical structure of a sentence. Vee and colleagues (2023) explain that these earlier models of spelling and grammar checkers—which operated on the probability that something was correct or incorrect based on the placement of letters, words, phrases, and punctuation—functioned in different ways than genAI tools that have come after them. Vee and colleagues (2023) note the following:

While earlier models of text generation leaned on grammatical rules, current models are more speculative—predicting the next word in a sequence based on patterns in its dataset . . . Large language models are called so because they model language. That is, they take examples of language and then use certain processes to attempt to reproduce it.

These developments in the history of writing technology were important, of course, as they provided us with the functionality that we have come to know and use every day. Now, computers can use statistical predictions to place one word in relation to the next and in a grammatically correct manner. Moreover, genAI can do this by adopting a particular voice within the rhetorical expectations of a certain genre. In short, spelling and grammar checkers proofread by applying rules; LLMs and genAI attempt to create text by emulating language patterns.

GenAI has since become more nuanced than those early versions. In 2023, the Organisation for Economic Co-operation and Development (OECD) described key elements of an AI system. At the core of their definition, they contend that AI “infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions” (Russell, Perset, & Grobelnik, 2023). OECD’s inclusion of inputs and outputs is particularly relevant for writing teachers, as inputs can be “rules and data, which may be provided by humans or machines” (Russell, Perset, & Grobelnik, 2023). Outputs, then, can be different “‘content’ (technically, a sub-set of ‘predictions, recommendations, or decisions’) such as text, video, or images” (Russell, Perset, & Grobelnik, 2023). The OECD definition also recognizes that AI systems can continue to adapt even after they are initially created, which is referred to as deep learning, where “AI uses artificial neural networks that allow machines to learn from data and then perform tasks without being explicitly programmed” (Clark, 2023, p. 10). In short, we contend that the genAI tools students have access to now are making increasingly more complex and nuanced predictions, getting closer and closer to natural-sounding language.

Automated Essay Scoring

Another significant (and significantly critiqued) technology that used earlier forms of AI was automated essay-scoring (AES) tools. AES would evaluate the compositions that were provided based on dimensions of writing quality, such as following grammatical rules; use of varied word choice and sentence style; effective organization with use of transitions; and relevance to the prompt with similar words, phrases, and ideas (Dikli, 2006). While they were, of course, designed by standardized testing companies for grading efficiently—all in order to move through thousands of student essays at a time in lieu of employing educators as graders—they were also heralded as time-savers for overworked teachers and potential tools for providing students with more timely, efficient feedback.

However, as many scholars have contested—especially educational researcher Les Perelman (2014, 2020), a longtime critic of automated essay scoring— these tools are measuring a kind of lexical complexity that could be complete nonsense. In fact, he and his research team were concerned enough about how such automated scoring was being employed that they created the Basic Automatic B.S. Essay Language (BABEL) Generator to demonstrate how lexically complex, yet completely meaningless, text could still rate highly in AES. While many multinational educational companies have continued to work on these tools, the main concern that Perelman (2020) expressed and demonstrated through BABEL remains: Rating lexical complexity with forms of AI does not, in and of itself, ensure that the writer will have produced anything of worth to begin with, nor that any feedback the system offers will lead to substantive improvements.

In short, because earlier forms of AI—that is, the grammar and spelling checkers that were looking for correctness or the AES tools that were looking at sophistication in vocabulary, word frequencies, and sentence complexities— were bound by particular rules, they were (and still are) very limited. Some might even argue that these are not forms of AI at all, though it is beyond our expertise and the purpose of this book to explore that point further. However, we do want to put in the caveat that we do not think these tools are completely useless. Individual writers have many needs, and these tools can sometimes meet those needs. For example, multilingual learners, those who struggle with spelling as adults, those who are interested in the feedback that automated tools can provide, and teachers who are overwhelmed with providing feedback all benefit from using these tools because they serve distinct purposes and can be used judiciously by writers, writing teachers, and administrators looking for patterns in student data.

Earlier generations of AI writing tools used a rule-based approach and had limited functionality, but they set the stage for a revolutionary development in artificial intelligence—a new class of systems that wouldn’t just evaluate existing text but could generate original content. To understand how this dramatic shift in capability occurred, it is important to become familiar with the technology behind what scholars, journalists, and technologists now refer to as LLMs and generative pretrained transformers (GPTs).

LLMs and GPTs

Although newer genAI tools appear to be intelligent, mimicking human thinking, there are both inputs and outputs that require users to at least consider, if not directly engage with and evaluate, the results of a genAI interaction. Of course, this work that genAI is doing on behalf of writers becomes problematic in some ways. On a macro level, the inputs can be the large data sets that programmers use to build the models (ones that are drawn from the corpus of language found online, which is written in a particular manner, as we will explore in more detail later). On a micro level, inputs can be the prompts writers use to generate a desired output from genAI; yet it’s become clear that even carefully crafted prompts are transformed by genAI in unexpected or unintended ways. This was a particular problem as tech companies worked to combat biased outputs in genAI images by intentionally prompting for more diverse outputs. This came to a head in the spring of 2024 when Google’s Gemini created an ahistorical image depicting diverse people in Nazi uniforms after having been given the prompt, “Generate an image of a 1943 German Soldier” (Grant, 2024). Still, the fact that humans can use genAI in purposeful ways will be a continuing theme that this book explores related to the agency of writers.

Given that the field of genAI is complex and the tools that it creates serve many purposes, it can be helpful to classify genAI into three categories. In collaboration with several professional organizations in education, the International Society for Technology in Education (ISTE) articulated the following types of AI (ASCD, ISTE, National Association of Secondary School Principals, National Association of Elementary School Principals, & American Association of School Administrators, 2023).

1. Reactive: These AI “tools respond to specific inputs or situations without learning from past experiences (for example, Alexa, Roomba, and a chess-playing computer)” (p. 2).

2. Predictive: These AI “tools analyze historical data and experiences to predict future events or behaviors (for example, Netflix and credit-scoring systems)” (p. 2).

3. Generative: These AI “tools generate new content or outputs, often creating something novel from learned patterns (for example, ChatGPT and Stable Diffusion)” (p. 2).

As genAI tools, current LLMs are not just looking for patterns to determine what is right or wrong in a user’s existing writing. Instead, LLMs use mathematical formulas and processes, or algorithms, to operate. After ingesting massive amounts of language, they create tokens, sometimes as small as a single letter or phoneme or even a word or short phrase. They create text by predicting which token—the individual character, portion of a word, or even entire word—has the highest probability of coming next. They learn, over time, how accurate their predictions are by using a scoring system. Their predictions and scoring are based on the corpus of text they have been fed. This is where the acronym GPT comes into play.

In our inquiry about genAI, we read information in the fields of computer science and educational technology, trying to uncover the meaning of terms that are sometimes overcomplicated. Here is our concise understanding of these terms, gleaned from news reports, tech-related podcasts, academic articles, explainer videos, and conversations between the two of us. We hope it is useful for our readers as they talk about the tools themselves and the ways students might be able to use them.

• Generative means that the LLM, when prompted, will generate text that responds in a natural manner, with writing that reads like a human produced it. This requires no specialized knowledge of computer coding; prompts can be written in everyday language, without the specific syntax that other computer programs require in order to understand input.

• Pretrained indicates that the LLM is trained through predetermined data sources and draws on the words, phrases, sentences, and interconnected ideas in that data set, all to generate its output. The output is limited to the patterns it finds in the text it has been fed, which could include very specific data (such as all the poems by a particular poet or the internal memoranda of a corporation) or general data pulled from the entire open internet.

• Transformer describes the architecture, or the building blocks, of the neural networks and how they function. Again, this gets a bit technical; the AI takes text input that is broken into small chunks (like individual letters or phonemes), called tokens, that can then be shifted around—or transformed—to make natural-sounding outputs. Sometimes, however, even that output can sound a bit stilted or confusing, a point we will elaborate on further.
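The idea of breaking text into tokens can be shown with a toy tokenizer. Real systems learn their vocabularies from data (byte-pair encoding is one common approach); this sketch uses a small, made-up vocabulary and greedy longest-match just to show text becoming a sequence of reusable chunks.

```python
# Toy tokenizer with a small, invented vocabulary (illustration only).
VOCAB = ["writ", "ing", "teach", "er", "s", " ", "re", "vis", "e", "d"]

def tokenize(text):
    """Greedily match the longest vocabulary piece at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for size in range(len(text) - i, 0, -1):
            piece = text[i:i + size]
            if piece in VOCAB:
                tokens.append(piece)
                i += size
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("writing teachers revised"))
# pieces like "writ" + "ing" get reused across many different words
```

Note how “writ,” “ing,” and “er” can recombine into many words the vocabulary never listed whole, which is why models work with tokens rather than full words.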

Put another way, in the manner they are currently designed (though, as with all things in computer science, this will likely change in the future), these GPT tools—whether text only or ones that generate images, audio, or video—function by taking the following steps.

1. Taking a small amount of input data from a user

2. Combining it with the extensive input data that has been culled from millions of webpages, books, articles, transcripts, and more that has been previously fed to it

3. Reconstructing the many words and phrases on which it was trained into something that seems conceptually possible and unique each time it is prompted

Stated in a manner similar to philosopher and literary critic Mikhail Bakhtin’s ideas, summarized here by educators Timothy J. Lensmire and Diane E. Beals (1994), “The appropriation of others’ speech and writing was much more than simple imitation or mimicry; it was active work done by the speaker/writer in order to communicate something to someone else in a specific situation” (p. 411). We make a parallel argument: Every output by a GPT is built on text that already exists, and it makes predictions about what likely comes next based on what it has previously found. As The New York Times reporter Aatish Bhatia reported in 2023, LLMs “are trained by going through mountains of internet text, repeatedly guessing the next few letters and then grading themselves against the real thing.” Bhatia (2023) cites a Washington Post article that describes those mountains of text for just one data set from Google, which had captured “a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models” (Schaul, Chen, & Tiku, 2023).

The aforementioned New York Times article includes an interactive demonstration of this process that allows us to watch a “BabyGPT” learn (Bhatia, 2023). It starts by producing random “gibberish” and, through trial and error, begins within an hour to produce text that mimics its training source—in this case, the complete works of author Jane Austen (Bhatia, 2023). This process results in text that “is original in the sense that it represents combinations of letters and words that generally have no exact match in the training documents, yet the content is also unoriginal in that it is determined by patterns in its training data” (Modern Language Association and Conference on College Composition and Communication [MLA-CCCC], 2023, p. 6).
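To make this “BabyGPT” idea concrete, here is a minimal sketch in Python (a word-level bigram model, vastly simpler than a real transformer, trained here on a single Austen sentence rather than her complete works). It shows the same core loop: tally which word follows which in the training text, then repeatedly ask, “given the text so far, what word likely comes next?”

```python
# A toy "BabyGPT": a word-level bigram model. This drastically
# simplifies a transformer LLM (no neural network, one sentence of
# training data), but it illustrates the same core idea: learn
# patterns from training text, then repeatedly predict the next word.
import random
from collections import defaultdict, Counter

training_text = (
    "it is a truth universally acknowledged that a single man "
    "in possession of a good fortune must be in want of a wife"
)

# "Training": count how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def generate(prompt_word, length=8, seed=0):
    """Repeatedly ask: given the text so far, what word likely comes next?"""
    rng = random.Random(seed)
    output = [prompt_word]
    for _ in range(length):
        followers = counts.get(output[-1])
        if not followers:
            break  # the model has never seen this word; it has no pattern to follow
        choices, weights = zip(*followers.items())
        # Sample the next word in proportion to how often it followed
        # the current word in the training text.
        output.append(rng.choices(choices, weights=weights)[0])
    return " ".join(output)

print(generate("a"))  # prints a short, Austen-flavored continuation
```

Like its larger cousins, this toy model produces Austen-like text only because it remixes patterns it has already seen; give it a word it has never encountered, and it simply stops.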

Again, genAI is putting words into sentences and sentences into paragraphs, though not in the same way that a human brain would when it moves iteratively through stages of brainstorming, writing, revising, and reconsidering the entire text through deliberative choices. Instead, it is working from the billions of tokens it has been trained on and using those to predict what will appear next. It is simultaneously an incredibly mundane and an infinitely complex process, which is why it seems a bit magical the first time we watch ChatGPT take our initial prompt and put words on a screen.

Yet, we need to keep it all in perspective and teach our students about what is really happening below the screen or in sub-screenic ways, to reiterate the previously mentioned ideas from Aguilera (2017) and Lynch (2017). It is important that students understand how technology works, as well as the social, emotional, or intellectual effects that specific hardware interfaces and algorithmic aspects of software have on them so they can enact their agency over the technology. In the case of ChatGPT, for example, mathematician and computer scientist Stephen Wolfram (2023) describes what the technology is doing:

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.” . . . And the remarkable thing is that when ChatGPT does something like write an essay, what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.

Reasonable continuation is all about patterns. When tools like ChatGPT generate the next word, sentence, and paragraph and the result seems reasonable, it feels impressive because even the most fluent human writers could not produce text that fast. However, it’s less amazing when the pattern prediction leads to something we find hilarious, ridiculous, or outright erroneous. In fact, the term hallucinate has been used to describe these kinds of AI missteps. Hallucinations include text that is factually inaccurate, tonally inappropriate, or marred by a variety of other writing gaffes. While it is true that, with more training, LLMs are able to “mimic the writing of sentient humans more convincingly” (MLA-CCCC, 2023, p. 6), it’s also worth noting the equity and access issue that is present—how well a genAI tool performs is likely a function of how much someone can pay for access to the premium version of it.

All of this leads to the point that—as with the countless technologies we have faced in the past and the many we will encounter in the future—an educator’s goal with genAI is to have a basic understanding of what it is and how it works so they can, in turn, help their students understand the same. This understanding of our history with genAI informs modern concerns with new iterations of this technology, particularly when it comes to early writing tools in classrooms. Concerns about academic honesty, accuracy and bias, data privacy, digital equity and access, policies and policing, and intellectual rigor are most relevant to educators but are shared by the general public.

Some Concerns About GenAI

Since the early 2000s, we have straddled a line between excitement for new technologies and the opportunities they open for readers and writers, and cautious skepticism about the limitations of those technologies. Typically, those limitations have focused on issues like the following.

• Digital divide and access, both in terms of devices and broadband, as well as specific skills and training

• Security and privacy—for example, whether tools were compliant with the Children’s Online Privacy Protection Act (COPPA, 1998), as well as the Family Educational Rights and Privacy Act (FERPA, 1974), or whether teachers were using unsanctioned tools for which they would ask students to create non-school accounts

• Questions about academic rigor, curricular pacing, and whether or not the use of any given technology was worth the time and effort

There are, most likely, other school-, district-, and state- or provincial-level concerns about specific educational technology tools that we are overlooking, yet the main point is that any use of technology, especially genAI, happens in contentious spaces. Helping students and colleagues make the best use of technology is a time-consuming task, and we will all need to continue to make decisions (and concede to trade-offs) about when, why, and how to integrate AI. Of course, there are no easy answers to any of these questions. Educators will continue to wrestle with these in efforts to intentionally integrate any technology, especially genAI, into their classrooms.

GenAI has reignited conversations about some of these same concerns, though this time around, educators seem to have deeper ethical concerns than they’ve ever had before. Based on our conversations with colleagues teaching at all grade levels, elementary school to graduate school, we know that we are not alone. Using genAI is different from simply copying and pasting from the internet (even patchwriting, a form of plagiarism, at least requires students to gather ideas from numerous sources rather than having genAI create something for them), and it worries teachers for new reasons.

As has always been the case, these concerns are about the emotional work of teaching. They cut to the core of who we are—as writers and teachers of writing—as well as what we do with and for our students. When thinking back to the NCTE statement and ChatGPT’s responses (see The Practice of Writing With GenAI, page 5, in the introduction), we wonder: What is writing, exactly, in a moment where autocomplete finishes our sentences, and genAI can create sentences (let alone paragraphs and entire essays) for us? Who are writers—if not the solitary author working on the printed page—but a collaborator with both other humans and augmentation provided by genAI? Finally, what is essential in our pedagogical practice as we think about who our writers are and what specifically we want them to know and be able to do?

We could see these as merely philosophical questions, but we want to think about what these questions mean as they are growing increasingly complicated by genAI becoming part of our everyday writing tools. The following sections name many of the issues we think about as we consider how to bring genAI into the writing classroom, though we acknowledge that there are likely many more. The rest of the book shares how to address these issues with students in the context of specific lessons. For now, we want to name them and honor the trepidation many educators feel at this moment. In doing so, we open a conversation with our readers and colleagues. What ethical concerns do we have when it comes to uses and abuses of genAI? More importantly, how do we engage in productive dialogue around these concerns? We will briefly touch on six primary worries that we have heard in conversation with teachers since 2022: (1) academic honesty, (2) accuracy and bias, (3) data privacy, (4) digital equity and access, (5) policies and policing, and (6) engaging students in intellectual rigor and what has been called the productive struggle of learning.

Academic Honesty

The use of genAI by students for dishonest purposes (and adults, for that matter, as we are seeing in many instances of fake social media posts and deepfake videos) is a compelling concern. The ethics of when, why, and how anyone might choose to use genAI—and whether to disclose that use—are still unclear.

Some educators are convinced that any use of AI would be considered cheating. Others are open to some uses of AI in support of student learning and writing. Still others embrace the tools as a key part of their own—and their students’—writing. Educators ask a range of questions like, “How will we detect plagiarism?” and “Even if we allow it, how will we know what students have written versus what the genAI has created?”

Though the primary concern many educators have is plagiarism, the struggle to identify what constitutes honest use of AI also includes the gray areas of ethical practice where genAI and writers meet, for which norms have yet to be adopted by a larger community of practice in the field of writing instruction. While there are now many examples of syllabi statements (Eaton, 2024) and AI disclosure statements that can be used as templates (Newman University, 2023), these alone will not deter academic dishonesty without substantive work to model for our students what ethical genAI use looks like in practice.

Accuracy and Bias

Despite the seemingly human-like text that emerges on screen after prompting a genAI tool, these outputs are the product of algorithms that are working in the background, or behind and beyond the screen as Aguilera (2017) describes it. Even with advances since the release of ChatGPT, genAI can provide inaccurate information, instances of which have been dubbed hallucinations. Or, in the case of citing nonexistent academic papers, AI researcher Kate Crawford (2023) plays on words to call these inaccuracies something new: “hallucitations” (a portmanteau of hallucinations and citations).

Even in the best case, AI output is often vague and in need of substantive human revision. And although there are warnings on each genAI tool about the potential inaccuracy of the information, many users are not likely to heed them and will trust the bot to provide valid outputs to their prompts.

On another level, we must remember that genAI output is largely built on the existing text available on the internet; when it comes to questions of racism, sexism, homophobia, and more, there are many examples of where AI has introduced bias in healthcare, applicant tracking systems, advertising, image generation, and more (Blanding, 2023; IBM Data and AI Team, 2023). In her groundbreaking work on AI bias in facial recognition, Joy Buolamwini (2023) documents how “prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure. We can encode prejudice into technology even if it is not intentional” (p. 6). If outputs are based on a corpus of data that is inherently biased, remixing the language will undoubtedly perpetuate that bias.

Data Privacy

As is the case anywhere on the internet—especially for minors—the protection of personally identifiable information is imperative. With genAI tools, the stakes are even higher, as the known risks are every bit as troubling as the unknown ones. A 2023 summary in Information Week (Pallardy, 2023) documents the following list of concerns.

• Data collection

• Transparency and traceability

• AI as a black box

• Consent

• Data leakage and breaches

• Bad actors

• Hallucinations

• Lawsuits and fines

Of these, the concept of AI as a black box—a term used by computer engineers and programmers to describe products whose nature and functioning are not fully understood—is particularly problematic for those who want a clearer understanding of how genAI tools generate language and how they might use sensitive or personal data in the process. Even with stated policies that suggest data will be deleted from genAI systems after a certain period of time, there are no ironclad guarantees of this. Once that sensitive or personal data is entered into a genAI system, especially when using a free account, there is no assurance that it will be deleted. As the internet maxim goes, “If the product is free, then the user is the product.”

Deciding what constitutes a healthy skepticism about what users might share with genAI—including personally identifiable information as well as full drafts of writing that could be repurposed to train the genAI systems—in relation to what users might expect of genAI in return is a worthwhile conversation to have with colleagues and administrators. For instance, as people use genAI tools, they must consider what personal data they (as adults and educators) are willing to give up and what kinds of student data (within COPPA and FERPA compliance) they are willing to share.

Data privacy has been an ongoing consideration with technology since its inception, especially since the beginning of the web 2.0 era (circa 2004) with tools like blogs, wikis, and social media. The genAI era has been subsumed by conversations of web 3.0 and the rise of artificial intelligence. Whatever the specifics of when we moved from web 2.0 to web 3.0 (Geeks for Geeks, 2024), the concerns about how data is used to train genAI systems add a new layer of complexity.

Digital Equity and Access

Diana Fingal (2023), director of digital content for ISTE, contends that digital equity is “about making sure students have equal access to technology like devices, software and the internet, and that they have trained educators to help them navigate those tools.” She outlines key points about using technology in meaningful ways, preparing preservice teachers, and improving accessibility, among other factors related to digital equity.

Of course, there is no easy answer to this question about solving digital inequities. Despite the billions of dollars of pandemic relief money that has been spent to put more devices into more students’ hands than ever before, equitable access is close, yet still not ubiquitous. Data from the Pew Research Center’s Teens and Internet, Device Access Fact Sheet indicates that while about 90 percent of teens have access to a desktop or laptop—a slight increase since the first measure in 2015 of 87 percent—smartphones still top the list at 95 percent in 2023, holding steady from 2022 and creating a plateau (Faverio, 2024). Broadband access is spottier, with Pew Research Center data indicating that there is still a gap between suburban (79 percent), urban (77 percent), and rural (72 percent) users and that “rural adults are also less likely than suburban and urban adults to have multiple devices or services that enable them to go online” (Vogels, 2021). It is widely assumed most people have some access to digital devices and reasonably efficient internet, yet that access remains inconsistent.

These data also help us draw a few reasonable conclusions about the digital use divide, or what the U.S. Department of Education Office of Educational Technology (2024) describes as the gap “between those students who are asked to use technology for creation, exploration, and critical analysis and those who are not” (p. 10). They go on to contend that “ensuring all students have transformative, active, creative, critically thoughtful experiences supported by technology” is paramount to educational success (p. 10). While the resources on this page reference the 2017 National Educational Technology Plan, the 2023 report titled Artificial Intelligence and the Future of Teaching and Learning explicitly references key elements that must be addressed, including equity, safety, and transparency (Cardona, Rodríguez, & Ishmael, 2023).

Yet, returning to the question of access, the long-standing models of web-based, paywalled services still remain. While there are many free, openly available, and oft-noted genAI writing tools, there are also many limited-use tools (with extra features available for purchase, sometimes referred to as freemium) as well as ones that require a subscription. For instance, at the time of this writing, a search on the AI-tool index website Futurepedia for the category of “AI Writing Generators” yielded 294 results. Of those, only seven were open source and free, with eighty-two of them being “free” yet proprietary (and with many of them explicitly marketed as essay writers or tools to subvert genAI detection). Of note, the website AI Cowriter (https://ai-cowriter.com) is one of the tools available that is both free and open source, requiring no sign-in. Suffice it to say, with 163 of the tools requiring payment for access or full features, our continued concerns about equity and access linger in this new era of genAI, and constant vigilance is essential.

Policies and Policing

In an effort to curb cheating with genAI, educators have employed zero-tolerance policies and adopted AI detectors that are intended to police its use. Many educators have concerns over the nature of both the policies and the policing. One significant problem is that AI detection services have a low accuracy rate. Additionally, they often produce false positives for multilingual learners and students with learning disabilities. A blog post from Stanford University’s Human-Centered Artificial Intelligence group quotes one of their researchers, James Zou, who argues that “non-native speakers typically score lower on common perplexity measures such as lexical richness, lexical diversity, syntactic complexity, and grammatical complexity” (Myers, 2023). The researchers further document their findings in a paper explicitly titled “GPT detectors are biased against non-native English writers” (Liang, Yuksekgonul, Mao, Wu, & Zou, 2023). A second paper from 2023, “Can AI-Generated Text be Reliably Detected?” sums it up succinctly: “Both theoretically and empirically, [our research shows] that the state-of-the-art detectors cannot reliably detect LLM outputs in practical scenarios” (Sadasivan, Kumar, Balasubramanian, Wang, & Feizi, 2023, p. 20).

With all these limitations in the tools themselves and, certainly, in the AI detection tools, educators must approach the use of LLMs and GPTs—as well as any tool that claims it can “detect” AI plagiarism—with healthy skepticism and consider how policies might be shifted to account for learner development as students continue to approximate academic English in their writing.

Intellectual Rigor and the Productive Struggle

It is difficult to clearly articulate this final concern because a full treatment of rigor in conversations about academic achievement is beyond the scope of this book. One way to think about it is the concept of intellectual rigor, or what the Victoria State Government Department of Education and Training (2021) describes as “clarity in thinking and an ability to think carefully and deeply when faced with new content or concepts” (p. 1). Another way to consider it is what has been described as a productive struggle: a task or endeavor that challenges students’ weaknesses in the context of activities and assignments, utilizes metacognition and reflection, and is fruitful rather than frustrating (Blackburn, 2018).

However one might characterize it, the concern boils down to this: What happens when genAI makes it too easy for students to get a nuanced, sophisticated answer without having to go through the thinking on their own? To extend the argument, as a colleague shared with us, what happens when a teacher creates an assignment in genAI, a student uses genAI to formulate a response, and a teacher uses genAI to provide feedback? Eventually, any reasonable person can wonder what might happen if everyone in the educational system succumbs to genAI and loses focus on when, how, and why writers need to engage in the intellectual work of writing. The goal of upcoming chapters is to demonstrate the ways educators could—and, if they hold true to their principles, should—engage students in a writerly cycle that sometimes embraces and sometimes resists the use of genAI to center the productive struggle in writing.

Reactions to GenAI in Education

As noted earlier, in November 2022, educators added a new vocabulary term to their conversations about teaching and learning, one that in many circles may now be considered taboo: ChatGPT. When the LLM became publicly available, teachers immediately began to grapple with the impact of genAI in their classrooms and schools, most specifically in writing classrooms at all levels. Essayist Stephen Marche (2022) lit a fire when he wrote a piece that was featured in The Atlantic in which he envisioned the death of the college essay. A few days later, Daniel Herman (2022)—a teacher at Maybeck High School in Berkeley, California—proclaimed “the end of high school English.” Hyperbole aside, these authors summarized a very real concern that genAI-enabled cheating would flourish, that students would no longer do their own thinking, and that the role of the writing teacher would be forever changed, probably for the worse.

As we work to finalize this book in 2025, the initial backlash to ChatGPT—and the use of genAI tools in schools more broadly—seems to have faded from unmitigated fear into more thoughtful considerations about both the affordances and limitations of these technologies. Concerns about outright cheating have been countered with calls to develop what has been described as AI literacy, including ways all genAI users must know and understand, use and apply, evaluate and create, and engage in discussions about the ethics of AI (MLA-CCCC, 2023; Ng, Leung, Chu, & Qiao, 2021). Discussions of the inherent biases embedded in AI models have been infused with strategies for exploring these biases from a critical perspective (Leander & Burriss, 2020). As those in education have seen with many educational technologies before, the debates about when, why, and how to use them are ongoing, and genAI is no different.

Though educators still have quite a way to go, Herman (2023) returned to the idea that high school English instruction, wholesale, needed reform. He acknowledged at the beginning of the 2023–2024 school year—on behalf of many educators who see that students’ use of genAI tools is just one symptom in a larger problem—that what teachers really need to do is give deeper consideration to the instruction, assignments, and assessments students are given, contending that high school English, in particular, “needed a makeover” well before the introduction of genAI (Herman, 2023). Quibbling about what computer scientists may or may not call genAI is beside the point for educators. Instead, genAI-inclusive teaching strategies are necessary to provide opportunities for teachers to consider an extension of what Herman (2023) calls for: using genAI tools in the writing classroom to develop ethical writers in an age of genAI.

Conclusion

Given the complexity of how AI tools work in sub-screenic ways, it’s not surprising that most users simply focus on what the tools do on the screen. For some, it’s fascinating. For others, it’s frightening. Vee and colleagues (2023) offer insight into why, specifically, teachers of writing feel this dichotomy:

For writing instructors, the relevant detail to know about this advancement is that it effectively enabled language models to retain relevant information and place greater emphasis on earlier parts of the input. This is another way of saying that language models built using transformers could now sustain arguments, narratives, or discussion for thousands of words without “forgetting” crucial ideas from earlier in the prose.

With more sophisticated AI available to writers of all ages, and as it becomes more embedded in word processing tools, it’s important for us to think about the work of writing teachers in the context of the tools available. To summarize a few key points that are relevant to our work as teachers of writing, see table 1.1.

As this chapter concludes, the revolutionary nature of ChatGPT becomes clear. Instead of requiring a complex interface, ChatGPT allows users to interact with generative AI using methods previously reserved for computer scientists. Simply put, a writer can ask a conversational question in plain language, and ChatGPT will provide an answer. In this sense, it feels natural for users to converse with the tool in their own language (or, for that matter, to translate from one language to another and engage in dialogue). Prior to LLMs, a user could find some scripted chatbots, but the interactions didn’t always feel like a conversation. Now, with ChatGPT and similar tools, a user can reasonably ask a question they don’t know the answer to and receive a clear, concise answer in seconds.

One of the remaining challenges that human genAI users face is the natural tendency to assign human agency and personalities to the tools. There have been constant concerns about how people would interact with and become emotionally and intellectually attached to AI systems dating back to Eliza, the first chatbot designed by Joseph Weizenbaum at MIT in the 1960s (Jarow, 2023). Eliza was designed to ask questions, as a therapist would, based on specific words and phrases and generate a response to keep the human user interacting and answering more questions. While they may have felt engaged, users were simply caught in an algorithmic loop. Weizenbaum had hoped that by showing people how that loop worked, they would not romanticize the simple functions that Eliza performed. Still, many users reported that they engaged with Eliza in a manner that they felt was personally enriching, as if they were indeed interacting with another human (Jarow, 2023).

Table 1.1: A Comparison of Early AI and GenAI Tools Relevant for Teaching Writing

Early forms of AI, including spell-checking and grammar-checking tools and AES systems

Characteristics: Statistical and probabilistic models that look for patterns of error and compare that error to what would otherwise be expected, such as the following.

• A misspelled word

• A misplaced comma or phrase

• A paragraph without adequate transition words and introduction of evidence

These models are generally limited in terms of looking for specific errors or providing a score on a particular genre of writing.

Advantages: For spelling and grammar checkers:

• While novel at one time, these types of tools have become generally accepted as key components of a writer’s toolbox.

• Many instructors demand that students perform these checks before submitting assignments.

For automated essay scoring and other kinds of feedback tools:

• Though still not fully responsive to the needs of different audiences, genres, and writing contexts, these tools can provide some feedback for writers based on a particular parameter that can lead to revision suggestions that are contextual and specific.

Limitations: Writers can become too reliant on these tools, failing to understand certain aspects of grammar, usage, and mechanics, or why the feedback being provided may be incorrect.

Contemporary forms of genAI, drawing from LLMs with GPT user interfaces

Characteristics: Dynamic models that draw from extensive amounts of natural language input and that have been preprogrammed to make, per Wolfram (2023), “reasonable continuations” of words into sentences, and sentences into entire texts, based on very little input from a user. Moreover, because of this design, these tools can adapt to different audiences, purposes, and genres with nuances in the writing style, sophistication of vocabulary, and ways they can offer feedback on writing.

Advantages: Because LLMs draw from existing language and can be prompted using natural language, their outputs can serve to do the following.

• Create models of particular types of texts for writers to examine.

• Provide general overviews of a certain topic and connections to related subtopics.

• Generate answers to questions, outlines to demonstrate steps in a process, or other similar logical outputs.

• Give feedback on writing that is input by the user into the system, with requests for specific focal areas of response.

Limitations: The text produced can still lack specificity or include outright errors (hallucinations). While some AI tools are available for public use at little to no cost, advanced models are beginning to use subscription-based pricing.

In sum, humans often put a good deal of time, energy, and effort into the belief that technologies can—and should—work to make our lives better, inspire creativity, and become ubiquitous. As we will contend in the lessons later in this book, there is a level of creativity that can be achieved with genAI. Yet, this only happens through critical conversations with students, effective modeling of writing practice, opportunities to play, and substantive reflection on what works for students in all contexts. Thus, with this understanding of what is happening with AI below the screen, upcoming chapters will play with AI in the writing classroom. As Crawford (2021) asserts in her critique of the AI industry, “AI is neither artificial nor intelligent” (p. 8). However, while genAI presents many components that warrant critique and concern, it also offers just as many opportunities for exploration and integration in teaching. Chapter 2 will explore these possibilities in more detail.

Reflection Questions and Next Steps for Teachers

As this chapter ends, consider the following next steps and reflection questions for teachers.

• Reflecting on your experiences as a student and an educator, consider a way you have interacted with genAI tools as a writer. In what ways did you find these tools compelling and useful? In what ways were they distracting or unhelpful? When and why did you choose to use these tools, or not, based on your writing context?

• What is your own biggest concern related to genAI? How might you engage colleagues in a conversation about addressing some of the concerns outlined in this chapter? How will you stay forward-focused in these talks?

• Of the genAI tools already introduced, which ones are you already familiar with? Does your school—or you personally—subscribe to any of them? Which ones do you need to find out more about? How might you engage in an inquiry around this list of tools provided by Common Sense Education (www.commonsense.org/education/lists/classroom-tools-that-use-ai) as a way to begin?

Teaching Writing in the Age of AI

STRATEGIES FOR TEACHERS OF SECONDARY STUDENTS

As generative AI reshapes how students think and write, secondary teachers need strategies that channel its potential to enhance—not replace— authentic student creativity and the writing process. In Teaching Writing in the Age of AI: Strategies for Teachers of Secondary Students, Troy Hicks and Kristen Hawley Turner uncover practical strategies for harnessing this technology as a powerful tool for developing stronger, more confident writers while maintaining academic integrity and fostering authentic student voice in secondary classrooms.

GRADES 6–12 WRITING TEACHERS WILL:

• Better understand the design and function of generative AI writing tools

• Preserve and enhance student writers’ creative agency

• Define acceptable, academically honest uses of generative AI in the writing process

• Examine generative AI’s ability to inform students’ craft of persuasive and argumentative writing

• Explore how generative AI can help students craft and revise engaging narrative prose

“As leaders in writing education and technology, Troy Hicks and Kristen Hawley Turner have applied their expertise to navigating how to productively write with AI, offering practical and very doable strategies. This text is a fantastic resource for teachers curious about incorporating these technologies into their classrooms and hungry for creative approaches to get started.”

Assistant Professor of English Education, School of Teaching and Learning, University of Florida

“Without turning a blind eye to pressing concerns about the role of generative AI in learning, Troy Hicks and Kristen Hawley Turner have designed a book that shows how educators can teach students to use generative AI ethically to enhance their agency as writers rather than as a cheating tool.”

Executive Director, National Writing Project

“Grounded in writing workshop principles, Teaching Writing in the Age of AI offers an essential road map for students to become ethical, agentive writers while maintaining creative control. Rather than banning AI or letting it replace human work, this book shows how to leverage AI as a thought partner.”

Coauthor, Creating Confident Writers: For High School, College, and Life
