Cybersecurity in Academic Libraries: Addressing Vulnerabilities and Creating Solutions
Guest Edited by Roger Strong (JoVE), Matthew Ragucci (Wiley), John Felts (Coastal Carolina Univ.), and Michael Meth (San Jose State Univ.)
Begins on Page 11
Highlights from Katina’s Tea Time
Column Editor: Caroline Goldsmith (Associate Director, The Charleston Hub) <caroline@charlestonlibraryconference.com>
Column Editor’s Note: “Tea Time With Katina and Leah” is a weekly series of articles published on the Charleston Hub. In this column, Katina Strauch (Founder, Charleston Conference, and Editor Emerita, Against the Grain) shares news snippets, science articles, and opinion pieces that she reads throughout the week and finds interesting, along with some updates from Katina herself! Here are some recent highlights from the series, and you can find all of the “Tea Time” articles here on our site. — CG
From October 5, 2025:
Katina here. I have a sort-of antiquing story. I used to travel around SC setting up libraries in hospitals. I always gravitated to the family-run shops and would sometimes buy things that tickled my fancy. I bought a set of household dishes that I am taking with me when we move to Winston-Salem. So – this article from Garden & Gun caught my eye!...
Streamline Your Library Workflows
with the GOBI Library Technical Services Guide
Spend less time on repetitive processes and more time on what truly matters: fostering a love of reading and learning in your patrons. GOBI Library Solutions offers a wide range of services to help optimize your library’s workflow.
Download your free copy of the GOBI Library Technical Services Guide today.
Download our guide to discover:
How GOBI can simplify your workflows with services like:
• Book ordering
• Cataloging services
• Physical processing
The benefits of outsourcing technical services:
• Increased efficiency
• Cost savings
• Improved accuracy
How to leverage GOBI to meet your library’s unique needs:
• Tailored solutions
• Seamless integration
Against the Grain (ISSN: 1043-2094), Copyright 2026, is published five times a year in February, April, June, September, and November by Annual Reviews. Mailing Address: Annual Reviews, PO Box 10139, Palo Alto, CA 94303-0139. Subscribe online at https://www.charleston-hub.com/membership-options/
Editor Emerita:
Katina Strauch (College of Charleston, Retired)
Editor:
Leah Hinds (Charleston Hub)
Manager:
Caroline Goldsmith (Charleston Hub)
Research Editor:
Judy Luther (Informed Strategies)
International Editor:
Rossana Morriello (Politecnico di Torino)
Contributing Editors:
Glenda Alvin (Tennessee State University)
Rick Anderson (Brigham Young University)
Sever Bordeianu (U. of New Mexico)
Todd Carpenter (NISO)
Ashley Krenelka Chase (Stetson Univ. College of Law)
Eleanor Cook (East Carolina University)
Kyle K. Courtney (Harvard University)
Cris Ferguson (Murray State)
Michelle Flinchbaugh (U. of MD Baltimore County)
Dr. Sven Fund (Fullstopp)
Tom Gilson (College of Charleston, Retired)
Michael Gruenberg (Gruenberg Consulting, LLC)
Bob Holley (Wayne State University, Retired)
Matthew Ismail (Charleston Briefings)
Donna Jacobs (MUSC, Retired)
Ramune Kubilius (Northwestern University)
Myer Kutz (Myer Kutz Associates, Inc.)
Tom Leonhardt (Retired)
Stacey Marien (American University)
Jack Montgomery (Retired)
Lesley Rice Montgomery (Tulane University)
Alayne Mundt (American University)
Bob Nardini (Retired)
Jim O’Donnell (Arizona State University)
Ann Okerson (Center for Research Libraries)
David Parker (Lived Places Publishing)
Genevieve Robinson (IGI Global)
Steve Rosato (OverDrive Academic)
Jared Seay (College of Charleston)
Corey Seeman (University of Michigan)
Bruce Strauch (The Citadel, Emeritus)
Lindsay Wertman (IGI Global)
Graphics:
Bowles & Carver, Old English Cuts & Illustrations. Grafton, More Silhouettes. Ehmcke, Graphic Trade Symbols By German Designers. Grafton, Ready-to-Use Old-Fashioned Illustrations. The Chap Book Style.
Publisher:
Annual Reviews, PO Box 10139 Palo Alto, CA 94303-0139
Production & Ad Sales: Toni Nix, Just Right Group, LLC., P.O. Box 412, Cottageville, SC 29435, phone: 843-835-8604
<justwrite@lowcountry.com>
Advertising Information: Toni Nix, phone: 843-835-8604 <justwrite@lowcountry.com>
Send correspondence, press releases, etc., to: Leah Hinds, Editor, Against the Grain <leah@charlestonlibraryconference.com>
Explore 120 years of groundbreaking research with The Physics Archive—granting your institution perpetual access to discipline-shaping research charting every theory, application, and advancement, on any device, forever.
AN EXTENSIVE COLLECTION: Access over 120 years of groundbreaking physics research, spanning a wide range of topics and sub-disciplines.
PERPETUAL ACCESS: With a one-time purchase, your institution can secure perpetual access to The Physics Archive, ensuring continuous availability of valuable resources.
NO DRM RESTRICTIONS: Unlock researchers’ needs with industry-leading permissive contract language and user permissions. No digital rights management.
TRUE ONE-TIME PURCHASE: Enjoy the simplicity of a true one-time purchase with no annual or maintenance fees, providing your institution with uninterrupted access without recurring costs.
UNLIMITED USE FOR UNLIMITED USERS: Benefit from unlimited access for an unlimited number of users, allowing researchers, students, and faculty to explore the archive from anywhere.
Ready to Get Started? Contact us
Letters to the Editor
Send letters to <editors@against-the-grain.com>, or you can also send a letter to the editor from the Charleston Hub at http://www.charleston-hub.com/contact-us/.
Dear Editor:
I’m trying to get a copy of the article I published in ATG for the Biz of Digital column (February 2025) titled “Data as Collections at UC Berkeley: Acquisition workflows and an Open Source Infrastructure Solution.”
I’d like to include the publisher formatted pdf in my upcoming promotion review, but I don’t have access to Against the Grain. Would it be possible for you to send me the pdf of the article?
Thanks so much for considering, Anna
Anna Sackmann (Data Services Librarian, Library Data Services Program, University of California, Berkeley) <asackmann@berkeley.edu>
Dear Anna:
We’d be delighted to share a PDF file of your article pages from this issue. We appreciate your contribution and wish you all the best with your upcoming promotion. If we can assist further, please just ask.
Highlights from Katina’s Tea Times continued from page 1
I wonder if Jiminy Cricket knew this? “Want to Know the Temperature? Just Count the Cricket Chirps.” If you don’t have plans tonight, you will after reading this story. Over the weekend, CBS News Sunday Morning shared a fascinating fact: “A chorus of crickets isn’t just soothing — the sound can actually serve as a natural thermometer, no weather app or fancy gadget necessary.”
From December 15, 2025:
I was sad to learn of the death of Porter Anderson! I emailed with him rarely, but read all of his musings on Publishing Perspectives! In my dreams, I mused about inviting him as a Charleston Conference keynote. A missed opportunity! Oh well. May he rest in peace!
Just learned of the acquisition of DeltaThink by Impelsys. “This is our first acquisition and a meaningful step in shaping the next chapter,” said Sameer Shariff, Impelsys Founder and CEO.
AGAINST THE GRAIN ADVERTISING DEADLINES
VOLUME 38 — 2026
Issue Ad Reservation Camera-Ready
February 2026 01/08/26 01/22/26
April 2026 02/19/26 03/12/26
June 2026 04/16/26 05/07/26
September 2026 06/11/26 07/09/26
November 2026 08/20/26 09/10/26
FOR MORE INFORMATION CONTACT
Toni Nix
<justwrite@lowcountry.com>
Phone: 843-835-8604
Against the Grain / February 2026
Also, it’s easy to become a member of the Charleston Hub and/or an ATG subscriber; visit our website and sign up today!
Thank you, Toni Nix (Advertising Manager, Against the Grain) <justwrite@lowcountry.com>
From January 24, 2026:
Copyright fight continues! Hachette and Cengage have asked permission to join a class action lawsuit against Google alleging mass copyright infringement in the training of its Gemini LLM.
“Google has objected to efforts by plaintiffs to include publishers as part of the class in the class action lawsuit, claiming that adding publishers would introduce intra-class complexities. But the AAP asserts, pointing to the success that authors and publishers had in the Bartz v. Anthropic class action lawsuit, that direct participation by publishers would instead help address various issues so that the case can move forward…”
From February 12, 2026:
A Super Bowl first. Last Sunday, Bad Bunny took the stage for his highly anticipated Super Bowl halftime performance — and Celimar Rivera Cosme was nearby making history. Rivera Cosme signed the halftime show in LSPR, or Puerto Rican Sign Language, marking the first multilingual signing program at a Super Bowl.
Also! Do not miss the charming Puppy Bowl! Puppy Bowl 2026! And lastly, Straight from the Horse’s Mouth:
Gosh! I am so flattered to still be involved with Leah, Caroline, Toni, and everybody on the Charleston Hub team!! I got this scrumptious flower arrangement from everybody yesterday!! I love and miss you all constantly! Really and truly! Btw Bruce and I have started a fledgling publishing company we are calling Benya Books! Send us submissions if you can please! Love, Katina
We’ll be featuring more highlights from “Tea Time” in upcoming issues of Against the Grain. You can read all of these articles (and see each new article posted every week) in their entirety here. Thanks, y’all!
<https://www.charleston-hub.com/media/atg/>
About Katina
Katina is a digital publication that uniquely addresses the value of librarians to society and elevates their role as trusted stewards of knowledge. Named after Katina Strauch, the visionary founder of the Charleston Conference, it is written by and for the international communities of librarians, vendors, and publishers.
Katina’s mission is to improve library and information science and inspire a sense of fulfillment among its worldwide professionals. It delivers content that is easy to understand, engaging, informative, and accurate, covering key topics, emerging trends, transformative technologies and products, and the ongoing evolution of library practice, and it offers guidance on career and organizational development. By celebrating librarian contributions to open science, scholarship, and the enrichment of society, Katina aims to provide a springboard for community discussion and engagement through three content sections.
Tiered institutional subscription pricing for 2026 is:
$595 for MLIS schools and very high research output institutions
$395 for all other PhD, Masters, and 4-year institutions
$295 for 2-year institutions
Katina Features and Editors:
Editor-in-Chief: Curtis Brundy
Dean of University Libraries, University of Massachusetts Amherst cbrundy@umass.edu
The Future of Work: Offers practical insight into the technologies, skills, and strategies shaping library practice in a rapidly changing world.
Resource Reviews: Builds on the groundwork set by The Charleston Advisor to provide critical reviews of products and resources for the information industry.
Senior Editor: Jill Emery
Collection Development and Management Librarian, Portland State University
Katina-Reviews@annualreviews.org
Open Knowledge: Addresses the evolving roles of libraries and librarians and their contributions towards an open knowledge ecosystem.
Senior Editor: Kate McCready
Program Director for Open Publishing, Big Ten Academic Alliance
Katina-Open@annualreviews.org
Leah Hinds, Managing Editor and Executive Director, lhinds@annualreviews.org
Elizabeth Weiss, Associate Managing Editor, eweiss@annualreviews.org
KATINA IS PUBLISHING UNDER THE SUBSCRIBE TO OPEN (S2O) PROGRAM
Bet You Missed It — Press Clippings — In the News
Carefully Selected by Your Crack Staff of News Sleuths
Column Editor: Bruce Strauch (The Citadel, Emeritus) <bruce.strauch@gmail.com>
Birth of the Family Sitcom
Gertrude Berg’s “The Rise of the Goldbergs” first aired on radio in 1929 and rapidly became a national hit. Through the Depression and WWII, this story of a Jewish immigrant family kept the nation laughing. Gertrude wrote thousands of episodes, had an advice column, a comic strip, a cookbook and a line of dresses for full-figured women. Polls had her the second most admired woman behind Eleanor Roosevelt.
In 1945, she moved to CBS TV, flacked Sanka decaf, and won an Emmy for best actress. But in 1950, Philip Loeb, who played her husband, was identified as a Red. Berg fought for three years to keep him, at last gave in, but had to move to the marginal DuMont Network with a new sponsor. The last show was in 1956.
Her Monday night slot was taken by “I Love Lucy.”
This is a typical exhaustive New Yorker article and worth reading in its entirety.
See: Emily Nussbaum, “The Forgotten Inventor of the Sitcom,” The New Yorker, June 9, 2025 (https://www.newyorker.com/magazine/2025/06/16/the-forgotten-inventor-of-the-sitcom). Emily is the author of “Cue The Sun!: The Invention of Reality TV.”
Scottish Tea Plantations. Wha …?
Thomas Robinson’s Wee Tea Plantation business claimed to have mastered tea raising in Scotland and sold big quantities to Fortnum and Mason and high-end hotels like the Balmoral in Edinburgh. And he had won top prize in the “Salon de The” awards competing against the best of the global industry.
When a fraud officer was put on the case, the sleuth initially thought it absurd: it’s too cold and rainy in Scotland to grow tea. Yet he discovered some 30 legitimate tea plantations above the River Tweed. But Robinson’s Wee Tea Plantation was selling far more tea than it could produce.
When asked for paperwork, Robinson vowed it was lost in a fire and then changed it to a flood. Robinson’s claimed university qualifications in agriculture turned out to be bogus.
In fact, he was buying tea from Sri Lanka and other tea climes and selling it at a big mark-up. He got three and a half years for fraud.
Obit of Note
Frederick Forsyth (1936-2025) flew fighters for the RAF, went to work for Reuters, and was sent to Paris in 1961 because he knew French. It was a time of turmoil with de Gaulle ditching the colony in Algeria and infuriating the army and settlers. This would become material for his first literary success.
Sent to Nigeria for the Biafran War, he contradicted the government’s prediction of an easy win, and the complaints of rebel-bias got him fired. He tried freelance and achieved personal bankruptcy.
He sat down to write a novel, a “no-hope-in-hell” way of making money. Thus was born The Day of the Jackal, which became a global best-seller. In 1972, he followed up with The Odessa File. Both were made into movies.
1974’s The Dogs of War nearly got him killed by an arms dealer. Again, it became a movie. And more best-sellers followed over five decades: 25 books that sold over 75 million copies.
He was repeatedly asked if he worked for MI6, and in fact he did the same as so many other journalists — he would sit for an interview about what he had seen in a world hot-spot.
See: Richard Lea, “Frederick Forsyth, Day of the Jackal author and former MI6 agent, dies aged 86,” https://www. theguardian.com/books/2025/jun/09/frederick-forsyth-day-ofthe-jackal-author-and-former-mi6-agent-dies-aged-86
Ironically, Robinson is now considered the godfather of the Scottish tea industry, having sparked an interest in tea cultivation on the shores of the lochs.

See: Alastair MacDonald, “The Great Scottish Tea Swindle,” The Wall Street Journal, June 26, 2025, p.A1.

The Joy of Faded Paperbacks

The author is invited out of hellish hot NYC for a weekend in upstate NY. Better than the shade, lake, and deer was finding a room chock-a-block with 500-plus faded, spine-creased paperbacks from the ’80s and ’90s by such authors as Gail Godwin, Pat Conroy, Dorothea Benton Frank, and Anne Rivers Siddons. She reveled in the summer reads of her youth.

“They reminded me of a time when I had the time, or made the time, or passed the time …”

Once, she had co-founded a Classic Trashy Book Club dedicated to the likes of Nora Roberts, Danielle Steel, and Sidney Sheldon. One Halloween, she dressed as Jackie Collins.

“… remind me of a time when the world wasn’t at our fingertips. When the world was created by the fingertips of authors.”
For information on bulk orders or discounts, please contact:
Jamie Jones
Director of Sales, Marketing, and Outreach at Michigan Publishing jojamie@umich.edu
DOING THE CHARLESTON
My Personal History of Scholarly Communication
Katina Strauch with Darrell W. Gunter
In 1980, Katina Strauch started the Charleston Conference: Issues in Book and Serial Acquisition to bring together librarians, publishers, and vendors to discuss issues shared by the three groups. The meeting has continued annually and boasts over 3,000 attendees in person and virtually. This memoir is Katina’s diary and story of the Charleston Conference and its development, concurrent with her career as a professional librarian. Over the last 45 years, there have been massive changes in scholarly communication, changes that Katina has been at the heart of. Where and what will the library and publishing professions develop next? The sky’s the limit to reimagining! Let’s go!
Sometimes A Great Notion
In 1954, Hugh Beaver was arguing with another hunter about the fastest game bird. He searched in vain for the answer and decided to create his own record book, one that would settle arguments in pubs.
A year later, the first issue was published, financed as a promotional for the Guinness Brewery with laminated pages to deflect beer spillage. It was called the Guinness Book of Records. It hit best-seller status in the UK and by 1964 had sold a million copies.
See: History Facts, July 11, 2025, https://historyfacts.com/arts-culture/fact/guinness-book-of-records-beer-arguments-in-pubs/
And Speaking of Scotland …
The phrase “scot-free” has nothing to do with Scotland. It comes from Old Norse — skot — meaning payment. You’re out of debt. And the Norse did invade Britain, melding their language into all the others.
Old English segued into Middle and then Modern English. The phrase was spelled “scott fre” and “scotchfree,” meaning getting away with anything: taxes, crimes, whatever. Those spellings fell away, and now it means getting away with a crime without punishment.
See: Bennett Kleinman, Word Smarts, July 11, 2025, https://wordsmarts.com/scot-free/
Good Grief! Can’t They Just Jackpot an ATM?
As if struggling authors didn’t have enough problems, AI-edited rip-offs of their work are being put on Amazon to confuse buyers and siphon off profits. As soon as your book goes up, some thieving creep uses AI to create a slightly reworked title and a skewed cover image. The author screams and yells at Amazon, and by the time something is done about it, the bandit has simply copied another book and mounted it.
The article goes into detail about how to detect the copies, which is actually pretty easy, and how this is much of what’s behind the author lawsuits against AI companies.
See: Joanna Summer, “AI Generated Books on Amazon Are Hurting Authors and the Publishing Industry,” Inside Hook, Aug. 6, 2025, https://www.insidehook.com/books/ai-generated-books-amazon-authors-publishing-industry.
Insult Resource
Old Bill Shakespeare didn’t just give us “bated breath,” “in a pickle,” and “wild goose chase.” He is a great source of creative rudeness.
“Three-inch fool,” in The Taming of the Shrew, manages to cover small stature, intelligence, and the size of Grumio’s manhood.
“Villain, I have done thy mother,” in Titus Andronicus is pretty self-explanatory.
Henry IV, Part 2 gives us “Away, you scullion, you rampallian, you fustilarian! I’ll tickle your catastrophe,” which covers a lot of ground. Scullion = low person. Rampallian = wretch. Fustilarian comes from “fustilugs,” a ponderous, clumsy oaf. Tickle your catastrophe = smack on the butt.
See the article for many more.
See: Tony Dunnell, “The Best Shakespearean Insults,” Word Smarts, Aug. 18, 2025, https://wordsmarts.com/shakespearean-insults/. You can find Tony at tonydunnell.com. He writes speculative fiction and lives on the edge of the Amazon River.
Cookbook Plugs
The ever colorful and entertaining Garden & Gun features five new must-have books on fusion cooking: (1) Ryan Rondeno, My Creole-Cali Kitchen (author fled New Orleans after Katrina, landed in CA, and began mixing in new influences); (2) Jessica B. Harris, Braided Heritage (American cuisine was never separate strands but always braided together); (3) Sandra A. Gutierrez, The New Southern-Latino Table (a Guatemalan North Carolinian lays out the fusion of New and Nuevo South); (4) Arnie Segovia, Arnie-Tex (think “Salsa Tuétano slickened with roasted bone marrow”); (5) Kevin West, The Cook’s Garden (starts with planning and tending a garden, progresses to the delights of sautéed “fresh corn over a whisper of heat in butter until barely cooked, then gilded with a schmear of crème fraîche and basil chiffonade”).
See: Jonathan Miles, “All Together Now,” Garden & Gun, Aug.-Sept., 2025, p.42.
This issue marks a moment of thanks and transition as we say farewell to our long-time “Bet You Missed It” column editor Bruce Strauch, who is stepping away from the role after many years. His contributions to both the column and its community have meant a great deal to Against the Grain, and we’re deeply grateful for everything he’s given over the years!
As we celebrate that legacy, we’re also looking ahead. We’ll be on the lookout for a new editor to take on the column, and we’d love to hear from anyone who might be interested or who knows someone who would be a great fit. Email us at <editors@against-the-grain.com>!
Cybersecurity in Academic Libraries: Addressing Vulnerabilities and Creating Solutions
By Roger Strong Jr. (Director of Sales, North America, JoVE) <roger.strong@jove.com>
and Matthew Ragucci (Director, Institutional Product Marketing, Wiley) <mragucci@wiley.com>
and John Felts (Head of Library Technology & Collections, Coastal Carolina University) <jfelts@coastal.edu>
and Michael Meth (Dean, University Library, San Jose State University) <michael.meth@sjsu.edu>
Introduction
The digital transformation of scholarly communication has fundamentally reshaped the cybersecurity landscape for all stakeholders in the academic ecosystem. What was once a straightforward matter of protecting physical materials from theft or damage has evolved into a complex, multifaceted challenge involving sophisticated threats that target systems, data, and access infrastructure at unprecedented scale. Today, content providers face billions of cyberattacks quarterly, academic libraries navigate vulnerabilities in authentication systems while maintaining seamless access, and AI-powered bots strain institutional infrastructure in ways that blur the line between legitimate research activity and security incidents.
These challenges cannot be addressed in isolation or within stakeholder silos. Publishers, vendors, libraries, and institutions are increasingly recognizing that cybersecurity is a shared responsibility requiring cross-sectoral collaboration, technical innovation, and ongoing dialogue about how to balance robust protection with the open access principles that underpin academic research. This special issue brings together perspectives from across the scholarly communication field to examine the current state of cybersecurity. Together, these contributions demonstrate that while the threats may be growing, so too are the partnerships, preparedness, and solutions necessary to protect the scholarly ecosystem while preserving its foundational commitment to the free flow of knowledge.
Safeguarding Scholarly Content: The Role of Cybersecurity in Academic Publishing
Cybersecurity challenges continue to evolve for content providers serving academic libraries as the complexity, scale, and sophistication of threats increase over time. In the early days of content publishing, protecting print and microform materials was relatively straightforward and largely limited to single-user access. Readers checked out books, read them at their leisure, or spent time in library microform rooms scrolling through reels of archived content. The most common threats during this era were non-returned books, stolen microfilm reels, or torn pages from manuscript documents.
With the proliferation of electronic resources, transforming physical pages into digital images enriched with metadata and stored in the cloud, content providers now face a vastly different risk landscape. Publishers must protect not only against internal
vulnerabilities but also against external threats that exploit multiple access points and delivery pathways. Ensuring content integrity while maintaining reliable, “always-on” access has become a core responsibility. This perspective focuses primarily on the internal responsibilities of content providers to prevent cyberattacks from an operational and organizational standpoint.
As discussed during our presentation1 at the 2024 Charleston Conference, large publishers experience billions of attempted cyberattacks each quarter. According to a recent report2 by Demandsage, the education and research sector faces the highest volume of attacks, averaging more than 3,300 per week. While many incidents stem from relatively simple phishing attempts, ransomware attacks are increasingly common and far more disruptive.
To mitigate these risks, information technology teams must work closely with human resources and other internal departments to develop ongoing, role-based cybersecurity training for all employees. Effective programs3 emphasize awareness, prevention, and adherence to internal protocols, recognizing that breaches often originate from seemingly minor actions such as opening a malicious email or inadvertently sharing sensitive data. Hands-on simulations and scenario-based learning are particularly effective in reinforcing best practices.
Having led sales, marketing, and customer success teams that work directly with libraries to license and deliver content, I have seen firsthand the importance of cross-functional collaboration. These teams must work closely with product and IT departments to ensure clear, consistent communication with content consumers around privacy, security, and data protection policies. Open, ongoing conversations with customers about the due diligence a publisher undertakes to protect user data while preserving seamless access to content are essential for building trust and safeguarding long-term revenue.
As artificial intelligence continues to be integrated into organizational workflows, content providers also have an opportunity to leverage these technologies to enhance employee training, strengthen monitoring capabilities, and improve content creation and management processes. Used responsibly, AI can support more proactive and adaptive approaches to cybersecurity preparedness.
From a publisher’s perspective, cybersecurity is no longer solely a technical concern but a strategic business imperative. Protecting content, user data, and access reliability is foundational to maintaining customer trust, meeting
institutional expectations, and ensuring sustainable growth. Publishers that invest in strong internal governance, cross-departmental alignment, and transparent communication will be best positioned to navigate an increasingly complex digital ecosystem while continuing to serve the academic community effectively.
Academic Library’s Perspective on Cybersecurity
Academic libraries function within a complex set of constraints and circumstances. That is broadly true of the academic library, but it is particularly so with respect to cybersecurity. Academic libraries run complex systems that aim to provide seamless access to information, services, spaces, digital collections, and more. This infrastructure is built on a software stack that tries to mesh together systems that were never designed to be integrated. Some of these systems are provided by well-established organizations that manage their own cybersecurity, while others may be homegrown, open source, or legacy. As a result, academic libraries are potentially exposed to many vulnerabilities. As if this were not already enough of a challenge, academic libraries typically also function within campus IT governance frameworks that set many (if not all) of the IT policies. Ideally, an academic library benefits from the “halo effect” created by a well-resourced campus IT department: a strong cybersecurity group and specialists dedicated to protecting the entire campus from vulnerabilities and attacks.
Further cybersecurity challenges arise at the user level. Libraries enable students and faculty to access any number of resources integrated into the library’s online offering from a wide variety of devices, each of which is a potential vulnerability. Some library services can be accessed without authentication (e.g., websites, open access digital collections) while others require it (e.g., database access). Both access models can create cybersecurity vulnerabilities, in part because libraries try to strike a balance between state-of-the-art security measures to authenticate access and allowing as much freedom of access as possible. In striking that balance, decisions and compromises are made that introduce risk.
A further challenge is resourcing. Library budgets are limited, and libraries typically apply them to maintaining and expanding services and access to resources, maximizing benefit for the scholarly community. This means making investments that are not focused on optimizing security measures.
Academic libraries need to pay attention to cybersecurity not only because trusted library data is at risk: an attack aimed at disrupting services and access, and the subsequent erosion of trust, can be far more costly than the attack itself.
In speaking with campus IT departments, it is clear that campus IT infrastructure is constantly probed by hackers and bots. Fortunately, most of the defenses in place are successful; however, it is also worth noting that our community typically only hears of significant incidents. There does not appear to be a shared repository of attacks, how they were resolved, and at what cost. The academic library community could benefit from more collaboration and information sharing so that institutions can learn from each other.
From Discovery to Denial: The Cybersecurity Consequences of AI Scraping in Libraries
Academic libraries have long facilitated automated access to scholarly content for legitimate purposes such as indexing, preservation, and research-driven text and data mining. However, the emergence of AI systems capable of scraping content at unprecedented scale has altered the risk landscape. AI bots may operate without clear identification, leverage compromised credentials, or exploit permissive access mechanisms intended for human users. As a result, activities that resemble conventional web crawling can escalate into cybersecurity incidents with operational, legal, and reputational consequences for libraries.
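Where bots do declare themselves, a first-line mitigation is simply to recognize them at the front door. The sketch below is illustrative only: the marker list is an assumption (published crawler tokens change frequently), and undeclared bots can spoof or omit the header entirely, which is exactly the harder problem described above.

```python
# Illustrative, not exhaustive: user-agent substrings published by
# large-scale AI crawlers. Because bots can spoof or omit user-agents,
# this is only a first-line filter, not a complete defense.
AI_CRAWLER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider", "Google-Extended")

def is_declared_ai_crawler(user_agent: str) -> bool:
    """Return True if the request declares itself as a known AI crawler."""
    ua = (user_agent or "").lower()
    return any(marker.lower() in ua for marker in AI_CRAWLER_MARKERS)

# A proxy or repository front end could throttle or deny such requests
# while serving ordinary browsers normally.
print(is_declared_ai_crawler("Mozilla/5.0 AppleWebKit/537.36; GPTBot/1.2"))  # True
print(is_declared_ai_crawler("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))   # False
```

A real deployment would pair this with robots.txt directives and network-level controls, since a declared-crawler check only catches well-behaved bots.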
One of the most immediate cybersecurity implications of AI bot scraping is stress on library-managed systems. High-volume automated requests can overwhelm discovery platforms, institutional repositories, and proxy services. These traffic patterns may trigger denial-of-service-like conditions, degrading performance for legitimate users or causing temporary outages. In cloud-hosted environments, excessive bot activity can also lead to unexpected costs related to bandwidth and compute usage. Because many library systems were designed for predictable human interaction rather than continuous machine querying, they are particularly vulnerable to such strain.
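A minimal sketch of the kind of safeguard this implies is a per-client token bucket, which permits short bursts while capping the sustained request rate. This is illustrative only: production systems usually enforce limits at a gateway or CDN rather than in application code, and the rate and burst values here are assumptions, not recommendations.

```python
import time

class TokenBucket:
    """Per-client token bucket: permits short bursts but caps the
    sustained request rate, shedding excess traffic before it can
    overwhelm a discovery platform, repository, or proxy."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill for the time elapsed since the last request, capped.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller would respond with HTTP 429 or similar

# One bucket per client, keyed by IP or session in a real deployment.
bucket = TokenBucket(rate_per_sec=5.0, burst=10)
results = [bucket.allow() for _ in range(15)]
# Rapid-fire requests: the burst allowance passes, the rest are throttled.
```

The design point is that refilling happens continuously, so legitimate human browsing (a few requests per second at most) never notices the limit, while sustained machine querying is shed quickly.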
Also, academic libraries frequently rely on federated authentication, proxy-based access, and shared network trust models to enable seamless user access to licensed resources. AI bot scraping often exploits these mechanisms by operating through compromised student or faculty accounts, leaked proxy credentials, or misconfigured authentication endpoints. This creates multiple cybersecurity risks: credential-stuffing attacks against library login portals, unauthorized reuse of institutional access by third parties, and large-scale exfiltration of subscription content. When vendors detect such activity, they may suspend institutional access entirely, placing libraries in the difficult position of balancing security remediation with continuity of service.
Institutional repositories are central to libraries’ support of open access and research dissemination. Although they are one of the more heavily utilized systems in an academic library’s infrastructure, they often operate on open-source platforms with limited security hardening. AI bots may aggressively harvest repository content via public interfaces or metadata harvesting protocols, sometimes capturing embargoed works or student theses containing sensitive information. Additionally, the ingestion of repository content into AI training pipelines introduces supply chain risks: malicious files, poisoned data, or adversarial content may be propagated downstream, potentially affecting other systems and users. Libraries must therefore consider not only outbound scraping but also the integrity and security of inbound submissions.
AI bot scraping of scholarly content presents complex cybersecurity challenges for academic libraries, rooted in their unique role as access intermediaries and stewards of knowledge. Infrastructure strain, access control vulnerabilities, licensing risks, repository security, and privacy concerns all demand sustained attention. But being open doesn’t mean being passive. Addressing these issues will require not only technical safeguards but also policy development, cross-institutional collaboration, and ongoing professional education. By proactively engaging with these challenges, libraries can better protect their systems, users, and scholarly ecosystems while continuing to support innovation and access in the age of artificial intelligence.
<https://www.charleston-hub.com/media/atg/>
Shared Cybersecurity Responsibilities:
Cases For Collaboration, and Maintaining the Balance Between Security and Access
Content providers, namely publishers and vendors, face an ongoing cybersecurity challenge that affects both their operations and their institutional partners: finding the right balance between robust security measures and seamless user authentication and access. Content providers are tasked with implementing protections sufficient to prevent breaches and content theft, while institutions work to eliminate the friction that impedes legitimate research access and usage.
Although content providers and institutions have a long history4 of challenges and collaboration around authentication and access, recent cybersecurity incidents have exposed vulnerabilities that threaten both parties. For instance, when institutional user credentials are phished or stolen, the consequences can extend far beyond content theft: compromised credentials can also be exploited to access personally identifying information or cause other institutional harm. These cases are reportedly on the rise.5 Although many institutions understandably choose not to report such incidents publicly, content providers remain deeply concerned about the threats to their institutional partners and the users they serve, and they understand the cross-stakeholder impact.
This multi-stakeholder reality underscores that cybersecurity is a shared responsibility rather than a unilateral concern. Publishers and vendors maintain sophisticated abuse monitoring systems designed to protect not only their hosted content but also institutions and users who may be unaware of compromises. When these protective measures function effectively alongside institutional safeguards, they prevent several disruptive outcomes: temporary IP blocks that shut off access for entire campuses,6 potential ransomware attacks, and (as mentioned earlier) stunted or inflated usage statistics7 that distort collection development decisions.
Recognizing that disruption to institutional partners is counterproductive, several enterprising authentication and access solution providers have developed innovative approaches to minimize these impacts. Institutions have been moving away from IP-based authentication8 for some time now, toward more secure login systems such as federated authentication.9 While methods like single sign-on and federated authentication are not entirely immune to breaches, they do offer several advantages. Most notably, when a breach occurs, shutting down a single compromised account is far less disruptive than blocking an entire IP range and cutting off hundreds or thousands of users. An article in this special issue calls out a real-life example: because the library at the University of Nebraska Medical Center was using updated, more modern authentication protocols, it managed to avoid critical disruption during a cyberattack.
Authentication tools like EZproxy have advanced this approach further with the adoption of pseudonymous identifiers.10 This technology shields user identities from content providers while still sharing sufficient data with hosting institutions to enable rapid response when breaches occur. Technological advances like these better represent the ideal balance, providing institutions with the information needed to take swift action without widespread disruption. Content providers are also becoming more supportive of these authentication methods, making considerable investments in their implementation and development.
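To make the idea concrete, here is a minimal sketch of how a pseudonymous identifier can work. This is not EZproxy's actual implementation; it illustrates the general HMAC-based technique behind per-provider pairwise identifiers, and the secret value and function names are assumptions for the example.

```python
import hashlib
import hmac

# Hypothetical institutional secret; a real deployment would store and
# rotate this securely, never hard-code it.
INSTITUTION_SECRET = b"rotate-and-store-securely"

def pseudonymous_id(user_id: str, provider_id: str) -> str:
    """Derive a stable, per-provider pseudonym for a user.

    The content provider sees a consistent opaque ID, so it can report
    abuse tied to that ID back to the institution, but it cannot recover
    the real identity or correlate the same user across providers.
    """
    message = f"{user_id}|{provider_id}".encode()
    return hmac.new(INSTITUTION_SECRET, message, hashlib.sha256).hexdigest()
```

Because the institution holds the secret, it alone can map a reported pseudonym back to the compromised account and disable it, which is exactly the "swift action without widespread disruption" described above.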
New cybersecurity threats have also presented opportunities for better cross-industry collaboration. Initiatives such as SNSI,11 GetFTR,12 and SeamlessAccess13 have proven that not only is this a shared stakeholder concern, but that when stakeholders come together they can address critical needs and achieve a balance in which content providers and institutions both fulfill their missions while minimizing disruption during security incidents. While this balance may seem elusive, it will only be realized through a shared sense of responsibility and sustained commitment from all parties.
Conclusion
All of us working in the information and knowledge space have a stake in protecting against and preventing cybersecurity incidents. As content providers, we must carefully maintain a balance between educating our internal staff, ensuring secure and reliable access to content for the customers we serve, and continuing to drive growth and profitability.
Academic libraries face additional challenges in operating complex, deeply rooted systems that provide broad access while often working within limited budgets to balance investments in security and infrastructure. They are also increasingly challenged by AI bot scraping, a problem uniquely tied to their role as access intermediaries and stewards of knowledge.
Ultimately, cybersecurity is not an issue that should be siloed among the individual players in our space. Whether you are a commercial content provider, a nonprofit organization, or an academic library, collaboration and communication are essential to protecting the scholarly information ecosystem. Forums such as SNSI (The Scholarly Networks Security Initiative), Code4Lib, and others mentioned here play an important role in fostering that cooperation. It is up to all of us to share stories, reflect on experiences, break down partnership barriers, and continue to learn, upskill, and prepare for the road ahead.
Endnotes
1. Data, Disruption, and Defense: A Collaborative Approach to Cybersecurity in Academic Libraries, Charleston Conference, 2024. https://www.youtube.com/watch?v=feYXZ88wtXI
2. 83 Cybersecurity Statistics 2026 (Worldwide Data & Trends). https://www.demandsage.com/cybersecurity-statistics/
4. Schonfeld, Roger C. “Rethinking Authentication, Revamping the Business.” The Scholarly Kitchen, June 22, 2016. https://scholarlykitchen.sspnet.org/2016/06/22/rethinking-authentication/
5. Kwon, Diana. “Cyberattacks’ Harm to Universities is Growing — and So Are Their Effects on Research.” Nature 648, no. 8092 (2025): 13. https://www.nature.com/articles/d41586-025-03484-9
6. “Security Audit Inspires Library to Discontinue Proxy Service.” EBSCO. Accessed January 30, 2026. https://about.ebsco.com/resources/success-story/security-audit-inspires-library-discontinue-proxy-service-replace-with-openathens
7. Hughes, Michael P. “How I Stopped Worrying and Learned to Love the Usage Data.” College & Research Libraries 81, no. 7 (2020): 1168. https://crl.acrl.org/index.php/crl/article/view/24676/32497
8. Hinchliffe, Lisa Janicke. “What Will You Do When They Come for Your Proxy Server?” The Scholarly Kitchen, January 16, 2018. https://scholarlykitchen.sspnet.org/2018/01/16/what-will-you-do-when-they-come-for-your-proxy-server-ra21/
9. Tracy, Jody. “Making Federation Work for Libraries: Four Experts on Privacy, Trust, and Quick Wins.” InCommon, November 14, 2025. https://incommon.org/news/making-federation-work-for-libraries/
11. “Cybersecurity Resources for Librarians & Publishers.” Scholarly Networks Security Initiative. Accessed January 30, 2026. https://www.snsi.info/librarian-resources/
12. “Get Full Text Research.” GetFTR. Accessed January 30, 2026. https://www.getfulltextresearch.com/
13. “SeamlessAccess: Streamlining Access to Scholarly Content and Digital Resources for Research and Education.” SeamlessAccess. Accessed January 30, 2026. https://seamlessaccess.org/
Against the Grain / February 2026
Bots in the Stacks: Managing Automated Cyber Threats in Academic Libraries
By Roger Strong Jr. (Director of North American Sales, JoVE) <roger.strong@jove.com>
Since the dawn of the digital age, academic libraries have served as stewards of digital information and its delivery at their institutions. In recent years, this role has become even more complex with the advent of large language models (LLMs). The creators of some of these models use scraping techniques to target library information systems and the content of library catalogs, gathering material to train generative AI programs.
One interesting example of this occurred in late 2024 at the University of North Carolina at Chapel Hill’s library, where bots overran and disrupted the library’s online catalog. Detail on how they dealt with the attack can be found in UNC Director of Library Communications Judy Panitch’s article1 published last summer.
Thanks to Judy, I had a chance to interview two of her colleagues who were instrumental in identifying and mitigating the attack: Jason Casden, Head of Software Development at the University Library, and Tim Shearer, Associate University Librarian for Digital Strategies and Information Technology. This article reflects their experience with the bot attack and offers guidance to other libraries on preparing for attacks that are not Distributed Denial of Service (DDoS) attacks but can have a similarly damaging impact.
According to Jason, the proliferation of super-aggressive, naïve web crawlers was manageable until recently, but with the development of LLMs the complexity of crawlers suddenly changed. “We don’t always know what the origin is, or that it’s LLMs, but it certainly seems to be closely related.” This web traffic has now become heavy enough to potentially take a library catalog down.
UNC’s online catalog, like many, lets users narrow search results by selecting facets such as language or publication date.
“A human seldom includes more than a few facets per search. Suddenly, there were hundreds of thousands of searches per day with fifteen to twenty-five facets selected.” Because library catalogs are extremely link-rich, combining millions of records with a faceted browse interface, they create a maze of nearly infinite link combinations, effectively trapping these bots on catalog servers for days.
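The facet-count signal Jason describes lends itself to a simple heuristic. The sketch below is not UNC's actual detection code; the query parameter name (`f[]`) and the threshold are assumptions, chosen only to illustrate flagging searches whose facet count is implausible for a human.

```python
from urllib.parse import parse_qsl, urlparse

# Hypothetical tuning value: humans rarely combine more than a handful
# of facets per search, while the bots UNC saw selected 15-25.
MAX_HUMAN_FACETS = 8

def count_facets(url: str, facet_param: str = "f[]") -> int:
    """Count facet selections in a catalog search URL's query string."""
    query = urlparse(url).query
    return sum(1 for key, _ in parse_qsl(query) if key == facet_param)

def looks_automated(url: str, facet_param: str = "f[]") -> bool:
    """Flag searches whose facet count is implausible for a human."""
    return count_facets(url, facet_param) > MAX_HUMAN_FACETS
```

A check like this is cheap enough to run on every incoming request, which is why facet-based throttling can sit in front of the far more expensive catalog search itself.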
As Jason and his team watched the searches unfold, they noticed several odd things. First, he notes, “We would see strange results — like tens of thousands of searches for Finnish music, which we have almost none of in our collection.”
Another aspect they observed was an “evasive” distribution of crawler traffic. Many of the bot attacks came from all over the world at the same time, some through consumer IP addresses. Blocking a certain geographic area would only cause the bots to shift, over the next few hours, to attacking from a different part of the world. Another challenge the team encountered was residential proxy networks: individual computers programmed, through browser extensions or installed software, to receive and relay requests. These networks made it extremely difficult to block the traffic.
Tim Shearer confirms that they determined this was not a DDoS attack, but that the impact was similar. Earlier 2024 attacks on some of UNC’s special collections data were easier to diagnose and block, he says. Once the more disruptive catalog attacks started in late fall of 2024, “that’s when the delivery mechanism became hugely sophisticated.”
Tim has seen a difference in the types of attacks campus IT operations experience versus those impacting the library IT community. Campus IT contends most often with classic takedowns or other malicious behavior. What Tim and library colleagues were experiencing is particular to libraries and archives, which he attributes to the unique original materials they hold. “Our strong guess is that in trying to build their LLMs, they ran out of newspapers. And who has oral histories, photographs, movies, letters, diaries, and maps? It is the memory institutions.”
Tim notes that UNC-Chapel Hill’s libraries were among the first to be hit in this fashion, but he soon learned of similar attacks at other organizations where original materials are shared and openly online, available to people for their use. “Presumably LLM vendors have found them and are starting to do the same thing to them that they are doing to us. Finally, there are some folks on campus that know what we are going through and have shared their experiences with us.”
As we discussed how Tim and Jason are collaborating with campus IT to prepare and prevent future attacks, Jason describes it as an amazing experience: “The solution that we have come to at this point is a multi-layer one.” Jason notes that they are solving this as if it were a DDoS attack, regardless of intent. “Once we realized what was going on, Tim arranged for a meeting with campus IT and security to discuss the problem. They took the problem very seriously, even though at that point they had not seen it occurring in other parts of the institution.”
The application Tim, Jason, and the team eventually implemented is a Web Application Firewall (WAF), a commercial solution available from various vendors, which serves as a proxy between web applications and the internet. “The hope was that in working through a vendor campus IT had an existing relationship with, we could essentially pass the traffic through a bottleneck, do some commercial bot detection and filtering, and get our servers back online,” says Jason. “It was quite effective for about a month, and then it ended up not being enough.”
After that, Jason, Tim, and the team developed custom methods to detect non-human behaviors and use them as a blocking trigger. Elements from the toolbox included continuing to throttle faceted searches as needed, banning certain IP blocks, and, most importantly, adding a CAPTCHA-like service, which has been one of the most effective ways to detect non-human searches.
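One way such a blocking trigger can fit together is sketched below. This is an illustrative escalation ladder under assumed thresholds, not the team's implementation: a client is allowed, then challenged (handed to a CAPTCHA-like check) when overall volume spikes, then blocked outright when its faceted-search volume is clearly non-human.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only; real deployments tune these per service.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120   # beyond this, demand proof of humanity
MAX_FACETED_PER_WINDOW = 20     # faceted searches are far costlier

class BlockingTrigger:
    """Escalates a client: allow -> challenge (CAPTCHA) -> block."""

    def __init__(self):
        self.requests = defaultdict(deque)  # ip -> request timestamps
        self.faceted = defaultdict(deque)   # ip -> faceted-search timestamps
        self.blocked = set()

    def _prune(self, dq, now):
        # Drop timestamps that have aged out of the sliding window.
        while dq and now - dq[0] > WINDOW_SECONDS:
            dq.popleft()

    def decide(self, ip, is_faceted_search, now=None):
        if ip in self.blocked:
            return "block"
        now = time.monotonic() if now is None else now
        self.requests[ip].append(now)
        self._prune(self.requests[ip], now)
        if is_faceted_search:
            self.faceted[ip].append(now)
            self._prune(self.faceted[ip], now)
        if len(self.faceted[ip]) > MAX_FACETED_PER_WINDOW:
            self.blocked.add(ip)            # clearly non-human volume
            return "block"
        if len(self.requests[ip]) > MAX_REQUESTS_PER_WINDOW:
            return "challenge"              # hand off to a CAPTCHA-like check
        return "allow"
```

The layering mirrors the team's toolbox: cheap counters run on every request, and only suspicious clients ever see the CAPTCHA, preserving a frictionless experience for ordinary patrons.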
Tim says that a multi-layered solution was necessary due to the shifting origin of attacks. “First it was coming from commercial IP ranges similar to a DDoS attack, and then it all moved to China, then it moved to South America, then it moved to Eastern Europe.” Even as Jason and his team found ways to intercept the problematic search requests, the traffic evolved adaptive patterns to evade their detection systems. Tim notes, “We did some tuning; we threw more resources at the servers and then we put it behind the WAF, and now it’s more or less ‘prove you’re human’ before starting a search.”
While this has deterred these types of attacks, Jason notes two ongoing challenges. “One is that we can’t put this in front of everything. We offer things like file downloads that this approach doesn’t work very well with. We also have content that we want crawlers to get through, like our institutional repository on Google Scholar for instance. If we put the WAF in front of our repository, Google can’t get to it.”
The unique aspects of library infrastructure — the sophistication and complexity of the information systems, the unique content, and a deep cultural ethos towards openness and access to information — make it a challenging balance between defensive lockdown and information-sharing to serve users.
Jason importantly states, “For organizations like commercial websites or some online newspapers, you put up a tougher paywall, you put up more hurdles, and you focus on your subscribers. For libraries, we are very, very reluctant to do something that we think will restrict access to information. I think that made us slower to move on this, thinking we could manage it through different methods. Luckily, it hasn’t been as disruptive as we thought it might be, but it’s still sort of a balancing act for our entire field of figuring out how to balance the need to provide service to our core users and our charge to providing open access to information.”
What recommendations do Tim and Jason have for other libraries in preparing for and countering bot attacks?
1. Do not suffer in silence. Become involved in key communities that continue to share and evolve solutions, such as the Code4Lib2 community or the BOTS community in Slack.
2. Explore the implementation of some sort of CAPTCHA solution.
3. Collaborate on campus with your IT colleagues to share the library’s experience and make connections. Working together will only make the solution stronger, as well as provide a deeper understanding of the diverse types of challenges related to cybersecurity.
Jason and Tim encourage libraries and their IT staff not to be afraid to venture off the island of library technology, which can be unique and specialized compared with more widely distributed campus IT. The relationships they have built, the conversations they have had, and the networks that have been established continue to flourish, providing resources that strengthen their ability to defend and support the library and its data.
For those interested in reading more on Jason and Tim’s experience, including a deeper dive into the incident, I encourage you to check out their recent Code4Lib article.3
When the Bots Come for Our Stuff: How An Emerging Conversation Sparked a Support Network for Technologists in Crisis
By Arran Griffith (Fedora Program Manager, Lyrasis) <Arran.Griffith@lyrasis.org> ORCiD: https://orcid.org/0009-0001-8530-6214
Recently, we have all watched as the use of AI models has accelerated in both development and adoption. With this increased usage come heated discussions around policy, ethics, and data governance. But here in the libraries, archives, museums (LAM), and cultural heritage ecosystem, we watch with rising concern as these AI models, scraping for large language model (LLM) training, consume our resources. All of our resources. Inside our institutions, where information is meant to be shared, used, and easily accessed, we face a different kind of issue: keeping our systems up and operational for the very patrons we serve.
This is a story of how a “rag-tag” group of technologists, repository stakeholders, and technology managers came together under the threat of AI bots attacking their systems to help one another navigate the ever-changing landscape of aggressive harvesting behaviors. This isn’t a technical deep dive on AI bots and how they work but a story of the power of community. Our work has attracted attention across our industry. Across borders. And it has allowed those working in the LAM and cultural heritage industry to find support across our ecosystems.
As the Program Manager for Fedora, the digital repository platform used in long-term preservation, building community is in my nature. My position centers on connecting individuals and bringing them together to share, support, and learn from each other. Fedora, as a piece of software, is often considered middleware and rarely operates alone. And that is how I approach growing our community: building intentional and deliberate bridges between the different technologies that Fedora relies on, so that all users can benefit. It is this outlook that helped shape how this working group was formed and how we operate today.
Getting Started — Early Observations
A little over a year ago, Fedora Governance had a brief conversation about some odd and unusual behavior they’d been experiencing in repository activity at their institutions. Users were observing massive spikes in traffic coming from uncharacteristic or wide ranges of foreign IP addresses. This is not entirely uncommon for repositories holding open access data; however, these traffic spikes were different. Something in the frequency and behavioral characteristics was raising red flags. The unusual activity was causing systems to slow to a crawl or was taking them down entirely. Repository managers could not predict or explain exactly what they were experiencing.
IT professionals are familiar with typical distributed denial of service (DDoS) attacks and can handle them relatively quickly through IP address blocking, rate limiting, and other intervention techniques. But in these instances, these bots are working with a different goal in mind. They want to consume the content, not take the system down. And that’s what makes them
tricky to identify. Often, the bots appear as legitimate users, making requests that an actual user would. So identifying them as malicious becomes a challenge.
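Because each individual request looks legitimate, one practical heuristic is to look at breadth of access rather than any single request. The sketch below is an assumed example, not drawn from any institution's tooling: it flags sessions that touch far more distinct records in a window than a browsing human plausibly would, which is characteristic of content-harvesting bots.

```python
from collections import defaultdict

# Hypothetical threshold: a human browsing session touches a handful of
# records; a harvesting bot sweeps breadth-first across thousands.
DISTINCT_RECORD_LIMIT = 200

def harvesting_sessions(access_log):
    """Identify sessions whose breadth of access suggests harvesting.

    access_log: iterable of (session_id, record_id) pairs, e.g. parsed
    from a repository's access log for one time window.
    Returns the set of session ids exceeding the distinct-record limit.
    """
    seen = defaultdict(set)
    for session_id, record_id in access_log:
        seen[session_id].add(record_id)
    return {sid for sid, records in seen.items()
            if len(records) > DISTINCT_RECORD_LIMIT}
```

Unlike rate limiting, this catches bots that deliberately pace their requests to stay under volume thresholds, because it measures what they consume rather than how fast they consume it.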
It became clear very quickly that what we were witnessing was part of a much larger pattern: aggressive scraping of digital collections by AI bots feeding LLM training pipelines. And crucially, while the phenomenon had technical symptoms, the real challenge was the downstream impacts. Those responsible for handling the brunt of the work to stop, impede, or mitigate these scraping behaviors had few resources on how to go about it. Proprietary solutions were, and are, costly. Homegrown solutions were few and far between, and above all, people were simply overwhelmed, with nowhere to turn.
But there is one thing I will say about open source: community is what we do best. And so we turned to our peers.
Building Community One Therapy Session at a Time
Cross-community conversations are something I am extremely familiar with, so the first course of action was to engage with Fedora’s partner communities, Islandora and Samvera, as they were likely encountering similar issues. Initial calls were held, centered on open dialogue about this unusual repository activity, and it was immediately clear that we in the Fedora community were not alone. This was a widespread phenomenon, and people desperately wanted to talk about it. It also became very clear that people didn’t know where to come together to have these conversations. So I began facilitating a monthly series of open discussion forums, open to anyone from any community, to come together and share their experiences.
The response was overwhelming.
People showed up from all corners of the library, archive, museum, and cultural heritage ecosystem. Academic institutions, repositories, museums, institutional archives, libraries, and independent technologists were all represented, each bringing their own experiences to the conversation. Many had been managing these disruptions in isolation, uncertain whether what they were seeing was unique to their institution or part of a broader pattern. Some arrived openly frustrated; others were seeking confirmation that their observations were valid. Nearly all were looking for a connection to others who understood the challenges they were facing and could speak from shared experience. So we unofficially became the AI Discussions Working Group.
A key factor in the success of these early sessions was the intentional way the discussion forums were structured. We took no minutes and did not record the meetings. This approach fostered a strong sense of trust and enabled participants to
share information that might otherwise have felt sensitive or risky, without fear of judgment or unintended consequences. By prioritizing openness and confidentiality, the group quickly became a trusted space. One where technologists could speak candidly, support one another, and begin to make sense of a rapidly evolving reality. Word spread quickly and broadly about our work, and our audience continued to grow.
We affectionately referred to our monthly sessions as “group therapy,” where participants could both come and voice their frustrations and concerns and seek support from their peers at the same time. We were crowd-sourcing solutions for institutions of all shapes and sizes, and people were eager for more.
Conversation to Action
As months of discussion accumulated, the group collectively recognized that we needed to take our informal knowledge-sharing and shape it into actionable, structured outputs. We originally formed three subgroups, each intended to tackle one of the core needs that had surfaced repeatedly throughout our conversations:
1. White Paper — The development of a White Paper to help define the issue and its impacts.
2. Solutions Showcase Series — The organization of a series of virtual “show-and-tell”-style presentations, in which participants would share the mitigation strategies and solutions they had implemented at their own institutions.
3. Metrics — Identifying a set of impact metrics that could help institutions better understand and communicate the full range of system, user, and staffing impacts associated with these disruptions.
For several months, the sub-groups collaboratively worked on their charges while continuing to meet as a larger group to share updates and new findings. As with anything driven by volunteers and diverse communities, capacity and reality sometimes force you to redefine your scope and pivot. This happened early on for the Metrics sub-group. After defining a set of impact metrics, we quickly realized that gathering the information we wanted would require significant time and resources, both from our committee members and from those we would want to hear from. So we sunset the sub-group, allowing us to focus more closely on other priority work.
White Paper Sub-Group
During our initial conversations, many administrators, policymakers, and senior leaders knew AI was “a thing,” but few understood how LLM scraping was impacting repositories at a system level. The White Paper subgroup sought to change that.
Their mandate was to:
• define the problem in plain, administrative-level language
• delineate the difference between malicious DDoS attacks and AI harvesting bots
• articulate the tangible impacts on systems, staff time, and user experience
• document what technologists were doing to keep systems operational
The subgroup’s foundational work informed broader public scholarship, including a widely read Scholarly Kitchen article by Kate Dohe, Manager of Digital Programs and Initiatives at the University of Maryland Libraries. Kate’s article, titled “Have You Proved You’re Human Today?” Open Content and Web Harvesting in the AI Era, explored the shared experiences of the AI Discussions Working Group, sought to define the behaviours we were seeing, and outlined the broad range of impacts these scraping behaviors are having.
Another notable publication that rose from early working group conversations came from The University of North Carolina at Chapel Hill. In one candid article, Library IT vs the AI bots, Judy Panitch, Director of Library Communications at UNC Chapel Hill, shared the experiences of an AI bot attack on the Library’s online catalog and the resulting mitigation approach they used.
While the white paper itself is still in development, the influence of this work is visible across our ecosystem.
Solutions Showcase Sub-Group
One of the AI Discussions Working Group’s strongest shared values has been meeting people where they are and remaining open, accessible, and supportive for all users, especially those who are under-resourced, understaffed, or working alone. The Solutions Showcase Series grew from the desire to give individuals concrete technical, actionable solutions they could implement immediately.
I conceptualized a free virtual showcase series in which volunteers present short overviews of the mitigation strategies they’ve implemented, responding to a standardized set of questions to help attendees compare approaches. These sessions are geared toward technically focused individuals, but the idea is to demo the solutions and provide a behind-the-scenes look at implementation strategies so that others can hopefully find a useful solution for themselves.
Recordings are shared only with registered attendees by request and are not posted publicly, to avoid exposing the defensive strategies to AI bots or scraper pipelines. While this may seem an overly protective approach, volunteer presenters are extremely appreciative of the decision, stating that they are more comfortable presenting in-depth details knowing the material can’t be “used against them” by the very bots they are trying to block.
The success of the Solutions Showcase Series has made it one of our most valuable outputs. We routinely see registrations reach capacity, with attendance conversion rates between 60% and 75%. The series empowers local staff, strengthens cross-institutional collaboration, and normalizes proactive response rather than emergency scramble.
We plan to continue holding the Solutions Showcase Series until we no longer have volunteers reaching out to present. All past recordings are available upon request, so please reach out to me <Arran.Griffith@lyrasis.org> if you are interested in any of the past sessions.
Expanding Our Audience and Moving Forward
Along with our sub-group outputs, we’ve seen noted interest from national and international organizations about our work. We were invited to present at the Canadian Association of Research Libraries (CARL) Community of Practice sessions, the Coalition for Networked Information (CNI) Fall Meeting in 2025, and now Against the Grain. Our informal placeholder wiki at Lyrasis continues to receive attention from administrators,
librarians, and technologists seeking guidance. And word continues to spread as institutions confront new manifestations of the problem. Each new challenge reinforces the need for community-driven intelligence and coordinated action.
What began as a conversation within Fedora Governance has become a cross-ecosystem movement. The AI Discussions Working Group now includes a diverse set of participants from a wide variety of institutional types and sizes. With the rapid development of AI learning and training, our job of mitigating its impacts on our digital infrastructures, the staff who manage them, and the patrons who rely on them becomes increasingly difficult. But that doesn’t mean we will toss in the towel. As stewards of highly curated digital information, we must continue these conversations around respectful use while adhering to our mandate of maintaining open access.
We do not expect the pressures from AI harvesting to disappear. If anything, they will intensify as models become more sophisticated and more business models depend on large-scale scraping of public digital content.
But we also believe strongly in the power of community. Our Working Group is proof of that. We did not wait for a mandate or a grant or an external directive. We came together because the need was immediate, the impacts were real, and no one else was going to convene us.
Community is one of the most powerful tools we have to weather this storm. It keeps us informed, supported, and connected. It ensures that frontline technologists are not isolated in their struggles. And, perhaps most importantly, it reaffirms what so many of us already know: openness is not free. It requires care, stewardship, and the collective labor of people who are committed to preserving access for the long term.
If there is one message the working group hopes to carry forward, it is this: “We can’t be open if we are not up.” We cannot continue to support open scholarship, open science, or open culture if the systems that host our content are rendered inoperable by unchecked AI harvesting.
Additional Resources for Consideration:
Mitigating Aggressive Crawler Traffic in the Age of Generative AI: A Collaborative Approach from the University of North Carolina at Chapel Hill Libraries — https://journal.code4lib.org/articles/18489
Are AI Bots Knocking Cultural Heritage Offline? — https://www.glamelab.org/products/are-ai-bots-knocking-cultural-heritage-offline/
Cybersecurity in Academic Libraries 2025: What Librarians and CISOs Really Think — and What We Should Do Next
By Susie Winter (VP Communications, Springer Nature, SNSI Communications Working Group, Co-chair) <susie.winter@springernature.com>
Introduction
Cybersecurity in higher education, and the threats posed by everything from pirate websites to attacks on data protection and the scholarly record, has long been framed as an IT problem at the edge of campus networks, with solutions focusing on firewalls, patching cycles, and the latest malware signatures. In 2021 the Scholarly Networks Security Initiative (SNSI) commissioned Shift Insight to help us better understand how aware librarians and Chief Information Security Officers (CISOs) were of the threats posed by pirate websites, the wider impact cybercrime was having on their institutions, and, crucially, what steps they would take if their network had been compromised.
Last year we repeated this work to see whether, and how, attitudes had changed. Drawing on an international survey of 287 librarians and 20 in-depth interviews with CISOs, the 2025 study paints a picture of a sector that is becoming more alert and has better tools, yet remains conflicted about pirate websites while confronting both continuing and evolving threats, from AI and credential abuse on library platforms to questions of research integrity and large language models (LLMs).
The message, however, is clear: the security of scholarly infrastructures can no longer be “someone else’s job.” It is a shared day-to-day responsibility that sits at the intersection of collections, access, identity, and ethics.
Main Findings
85% of librarians surveyed say cyber-threats to universities are rising. CISOs agree, describing a threat environment that has both scaled and matured. The following reasons were identified as to why this may be the case:
• Universities make attractive targets: Universities hold vast troves of sensitive data: student records, research outputs, IP tied to patents and commercialization, and health or personal data in some disciplines.
• There is a strong sharing culture in higher education: Higher education thrives on openness and collaboration — shared workstations, public spaces, and frequent guest access — all things which increase the likelihood of attack.
• Libraries have many legacy and complex systems: Outdated systems and uneven funding create vulnerabilities, even as institutions modernize.
Ransomware looms largest for CISOs, who describe it as an “existential” threat capable of shutting down campus operations, corrupting or exfiltrating data, and damaging reputation. But human-factor vulnerabilities, such as poor password hygiene, multi-factor authentication (MFA) fatigue, and inadvertent data sharing, remain the most common risks. Students, with high turnover and variable engagement, are a particular concern, while staff risks vary by department and digital literacy.
Encouragingly, CISOs describe meaningful improvements since 2021, including wider deployment of MFA, even given the fatigue noted above. Vulnerability scanning and patching are now more routine, and incident response more mature. Many institutions now require mandatory annual security training, and several are experimenting with gamified learning to engage students.
Conversely, the study found no single narrative about the library’s risk profile. While 44% of librarians consider library risk roughly equivalent to that of other departments, 28% said it was higher, citing third-party vendor dependencies and the personal and sensitive data libraries handle (e.g., patron data, usage logs, access credentials). Some CISOs, by contrast, view libraries as relatively low risk, crediting tech-savvy staff who collaborate well with them.
What is consistent is librarians’ security awareness, with 93% reporting at least some understanding of cybercrime and data security and 19% going so far as to say they have a high level of understanding. This awareness correlates with practical readiness: when asked how they would respond to a suspected network compromise, 96% would contact the security team and IT, mirroring 2021 figures and suggesting durable incident-reporting habits.
Pirate Sites, Perceptions, and Research Integrity
The research showed that awareness of Sci-Hub and other pirate websites is widespread: 69% of librarians name Sci-Hub unprompted, and awareness of LibGen and Z-Library has grown since 2021. However, over a third of librarians believe fewer than 20% of students and staff use pirate sites, though estimates differ by region and funding levels.
Attitudes amongst this group are also nuanced:
• 43% of librarians agree pirate sites can be useful to learners, often citing their usability and convenience compared with what can be complex legitimate platforms.
• 45% worry about negative impacts on research integrity.
• 55% believe use of such sites puts institutional networks at risk.
And all of this is colored by attitudes to access, with a large majority (76%) agreeing that public access to research should be free. This reflects the ongoing tension between access ideals and the realities of licensing.
CISOs, for their part, typically rank pirate websites as lower risk relative to ransomware and phishing, describing theft of publisher content as a “nuisance crime,” but several do report connected incidents of credential compromise and mass
harvesting of licensed content. The more profound concern emerging from the new research in 2025 is the issue of LLMs (large language models) being trained on pirated content. 71% of librarians believe this is happening and 95% of those are concerned over the implications for research integrity.
What’s Working
The research highlights a maturing control environment with multi-factor authentication now mainstream, faster detection and containment of malware and credential abuse, and routine scanning and “lessons learnt” documentation. This is operating alongside increased behavioral interventions to stop threats at source including mandatory training, targeted awareness campaigns, and phishing simulations.
But for all this to be effective, the need for collaboration comes across loud and clear. Across the research, CISOs and librarians advocate joint risk assessments, regular communication, shared training, and explicit involvement of libraries in policy development and governance. Librarians specifically request bidirectional education, in which security teams learn how library workflows and vendor integrations really function and librarians gain deeper practical security training. In short: security literacy for librarians, and library literacy for security teams.
AI Governance
AI was a new topic for the 2025 survey as we were keen to better understand librarians’ and CISOs’ views and concerns in this space and how they felt it interacted with cybersecurity, data privacy, and content protection.
According to CISOs, nascent AI governance now sits alongside privacy and security reviews. This includes registering AI tools, defining permitted uses, integrating AI topics into security awareness, and monitoring data to reduce leakage of proprietary or personal information into third-party models. One CISO reported identifying 162 AI-related tools in use across their institution, indicating an explosion of shadow AI that raises concerns over where data goes and who is responsible for retaining it.
It was also felt that the librarian’s traditional role — teaching information literacy — naturally extends to AI literacy including how to interrogate outputs, check sources, and understand model limitations and biases.
A Shared Action Plan for 2026
Drawing together signals from the study, we believe a pragmatic, cross-stakeholder roadmap can be developed.
For Libraries
1. Make access security part of service design: Review the full identity journey students take from discovery to document, work with IT to rationalize admin accounts, partner with vendors such as publishers to make legitimate access less complex and improve off-campus experience.
2. Improve AI literacy: Co-author campus guidance on responsible AI use, integrate source evaluation into instruction, and curate approved tools for teaching and research.
3. Exercise the “incident muscle”: Run joint tabletop exercises with security and publishers focused on mass download alerts, compromised credentials, and rapid entitlement revocation.
For CISOs and IT Security
1. Treat libraries as strategic allies by including library leadership in governance forums; schedule regular briefings on threat trends affecting content platforms.
2. Prioritize third-party risk: Mandate security attestations, RFP security addenda, and incident notification SLAs for library-critical vendors.
3. Scale training by cohort: Tailor interventions for students (micro-learning in the LMS), researchers (data handling and IP), and library staff (admin account hygiene).
For Publishers and Scholarly Platforms
1. Operationalize partnership: Offer named security contacts, API-based alerting, and clear remediation steps when anomalies occur.
2. Close the convenience gap: Promote services such as GetFTR and other “you already have access” cues.
3. Support sector education: Co-create rebrandable training modules (as several CISOs suggested), focused on credential safety, spotting phishing, and responsible use.
Mind the Paradoxes
Perhaps the most striking finding is the honest and human contradictions the study surfaces. Many librarians simultaneously affirm the illegality of pirate sites and acknowledge their usefulness to learners; they want more open access and worry about integrity when pirated content trains AI systems. CISOs may consider Sci-Hub a lower operational risk even as they trace stolen credentials to mass download events. None of this is hypocrisy; it reflects the frictions built into today’s scholarly access ecosystem, but as the research highlights, solving these frictions requires collaboration.
Conclusion: Security as a Scholarly Value
The SNSI 2025 study shows a community that is more prepared than three years ago and candid about the work ahead. The shift is cultural as much as technical: from treating security as a perimeter to treating it as a practice, woven into how we license, authenticate, discover, teach, and evaluate knowledge.
Readiness, Tested: Leading Through a Cyber Attack at an Academic Medical Center Library
By Emily J. Glenn (McGoogan Health Sciences Library, University of Nebraska Medical Center) <emily.glenn@unmc.edu>
In 2020, the University of Nebraska Medical Center detected unusual IT network activity that escalated into a significant and disruptive cyber attack. In the span of a few hours, incident response protocols were activated across the campus, IT systems were isolated, and many units’ focus shifted from everyday operations to continuity under pressure. As the then Associate Dean of the Academic Medical Center Library and Head of Technology, I was suddenly experiencing the leadership imperative of activating a business continuity plan. From then on, I would reflect that change readiness isn’t a binder on a shelf; it’s the way you lead when it’s time to enact the “preparedness” part of your business continuity plan.
The information security issue was not localized to the library but did impact multiple library systems. We leapt into questioning: Could we still use this system? Does this mean there is an issue here? What should we tell users? What should we be looking for? Are these files gone forever? This marked the beginning of the human side of the breach, when we realized our library’s mission could be severely compromised by an external attack. In addition to the technical disruption, we experienced emotions such as surprise, fear, frustration, and all five stages of grief. We were both hindered and siloed, but knew that we had to act.
As the library moved through the earliest stages of the incident, we found ourselves naturally progressing through the SANS Institute’s PICERL model for incident response: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned.1 Although we did not consciously invoke the model at the time, our experience echoed its phases: detecting anomalies, containing affected systems, working toward recovery, and eventually turning to reflection and improvement.
We reported the issues we were seeing. We re-reported them using another format. We waited. We didn’t have enough information or authority about any parts to change the course from the library, so we enacted our business continuity plan. The “easy” part was shifting to backup service plans, taking our integrated library system (ILS) offline, and explaining to our users that some services were halted. The hardest parts were supporting staff who had discovered that their work was suddenly inaccessible, stymying their ability to participate in the library’s mission.
Our library was “lucky” in a few ways. This attack occurred when the library was close to completing a new ILS implementation, which was coordinated with a local consortium of University of Nebraska libraries. Together we had already built out most of the new public-facing catalog. Our decision to decommission the old ILS was not for its security issues, which were then unknown to us; we chose to migrate for better system-wide coordination and user experience. Our old ILS was taken down in October. We had set up a test version of our new ILS that summer and brought it up a couple of weeks after our old ILS was taken down, still ahead of our anticipated go-live date. Our good documentation and
comprehensive planning contributed to the limited user impact: we ended up with no public interface to our catalog for a matter of days, and only WorldCat access to about 6,000 print book records during that time. Because we were in the midst of COVID-19, there was limited demand for print books.
Another crucial factor in our resilience was the transition of our authentication system to modern authentication protocols. This change was made to facilitate user access, and its benefits came at a critical time of the pandemic, when demand for remote access had never been higher. The project to transition to modern authentication protocols had included many library resources, and those resources remained available within the new system through the entire incident. Readiness, in this case, wasn’t a contingency, it was a design choice that paid off when we needed it most.
Aside from our licensed resources, our library staff working files were mostly intact. The university had been improving enterprise document management and team communication systems. Although many library teams used those improved spaces, one department used other file space for processing digital files. Since we had been focusing closely on our digital preservation processes, we knew that we were seeing some unexpected activity. We had to leave many files and folders untouched during the triage led by campus IT. It took time to assess losses, and we could not provide access to or modify existing digital files during this time. This event hastened the need to move files to a more secure location, which we eventually did as spaces were provisioned.
From the outside view of campus IT, there was rapid action: providing guidance, shutting down vulnerable services, engaging experts, triage, and increased communication. Soon after the initial attack, the university issued a public message acknowledging the incident, outlining steps toward resolution, and reaffirming our mission. Transparency mattered. It helped reassure stakeholders, but equally, it steadied our teams who were operating in ambiguity.
Where We Adapted: Continuity in Motion
In the wake of the attack, the university expanded its information security review procedures, requiring new assessments for all software, including library resources. This process required direct contact with vendors, which often posed challenges for smaller publishers. The library stepped in to advocate for practical pathways that allowed these vendors to comply. We eventually secured broad approval for online resources connected to our identity-based authentication system. This resolution provided immeasurable peace of mind. Since then, the university has revised its software approval and information security processes, and the library now participates as both a business owner and a partner, contributing to every review of library resources.
While the attack initially interrupted the momentum of our digital preservation stewardship, it also forced us to adapt our plans and rethink our assumptions. Before the attack, we expected IT to advise on strategy for selecting major resources and making a final decision on implementation. What we learned after the attack was that their role was to enable our work, not direct it. What we perceived as a lack of attention to resource acquisitions was, in reality, outside the scope of their mandate. Engaging in the granular work of recovery made it clear that we needed to take a more active role in setting our technology direction, with IT as a partner.
Once we identified a viable resource and initiated the security review, we felt we had reached a milestone in learning how to solve our problems as a partner, not a permission-dependent unit. This was a key learning moment: library staff expectations of IT did not always match the IT department’s own mandate. We shifted our team toward a model of leading, recommending, and enacting — releasing the expectation that IT would provide substantial attention during the initial stages of tool exploration and recommendation.
Readiness meant acknowledging foundational gaps while investing the necessary financial resources to close them. Investments in readiness and leadership may be invisible during normal operations, but during disruption they protect the library’s core mission by sustaining access, reducing downtime, and minimizing the impact on users who rely on library services to learn, teach, and provide care. Our recovery included reassessing the gaps, adjusting budgets, and gaining a clearer understanding of roles. In 2022, two years after the attack, we implemented a repository and resumed our digital preservation work with a clearer understanding of roles and responsibilities.
Change Readiness as Culture
A crisis reveals not just systems, but culture. Through this experience, we learned several essential lessons about leadership and resilience.
First, we learned to assume ownership rather than wait for rescue. External teams cannot suddenly solve library-specific problems without our partnership and advocacy. It’s our responsibility to ensure the right people are at the table and that library needs remain visible in the enterprise response. Building connections with IT (and other) partners before an emergency strikes is critical, and the relationships forged in calm times become lifelines during chaos.
Communication proved equally vital. Staff deserve clarity even when answers are provisional. We committed to providing regular updates, sharing what we knew, what we were deciding, and where issues should be escalated. This consistent cadence helped maintain trust and reduce anxiety during uncertain times.
We also recognized that disruption is exhausting, and our role extended beyond technical problem-solving to caring for our people. We honored the strain and ambiguity staff experienced while grounding our decisions in pre-established plans. During sudden and intense disruptions, you will be challenged to do the right things in the right order, quickly. In those moments, you fall to the level of your training. That’s why it’s essential to train yourself and enable your team to support people, no matter what issue arises.
Finally, we learned to prioritize what truly matters. We re-centered our efforts on digital stewardship, high-risk dependencies affecting access, and critical user services vulnerable to future disruption. Not everything is equally urgent
in a crisis. Leaders must help discern the most urgent items based on dependencies and mission criticality, not emotional reactions from staff.
What I Carry Forward
Change readiness isn’t heroics; it’s a pattern. Looking back more than five years later, the sequence of events surrounding the cyber attack is deeply layered with how it made us feel. We had practices for communicating, planning, and reflecting, but we could have done better. Five-plus years on, we reflect on how we did, where we rose to the occasion, and what was really hard. We are now more aware of the importance of spending time on lessons learned and bringing those insights back into our practice.
The attack was not an isolated event; we were many months into COVID-19, maintaining a tightly coordinated remote schedule that stripped away our usual human connections. We had also just moved into a newly renovated library after a long stint in temporary office space. Even though the new space was beautiful, our familiar library “home” had changed. As anyone who has endured a renovation knows, moving back in is only the beginning of settling in. Our leadership team had occasions where we would simply stare at each other and burst out laughing — what else could you do? We felt the frustration of our teams and were trying to keep it together. How do you categorize a cyber attack as “just one more thing” when you are a library? It is a level of business disruption that stands apart from the rest.
In hindsight, many people on the library team had witnessed the university maturing its information security environment over many years. The library, like other units, is now required to become more transparent about every resource, from procurement through implementation. This was a non-negotiable business shift that, following the attack, intensified the divide between “before” (more end-to-end library management) and “after” (less end-to-end library management) where our technology resources were subject to different mandates.
For those interested in better preparing their libraries for cyber security breaches, I recommend reflecting on how you lead in these areas:
• Design for resilience. Use modern authentication and focus on recoverability. Beyond simple backups, this means building a sound and well-understood library technology infrastructure using appropriate tools. Make sure that your permission and access processes are up to date.
• Keep backups and fallbacks viable. Define the “minimal viable library” for your setting. Refresh staff on alternative processes for services and create offline SOPs for essential services. In an era of total digital dependency, knowing how to execute core library services manually, or how to arrange for partner library support, is essential. Backups can be other libraries: formalize service continuity agreements with regional partners or consortia.
• Know your dependencies and test them. Modern libraries sit at the center of a complex web of vendor, campus, and cloud-based connections. Regularly audit these links to understand which failures are manageable and which are catastrophic, ensuring you aren’t discovering a hidden dependency for the first time in the heat of a crisis. You might test how
stakeholders access e-resources from on-campus or off-campus, determine if you have accessible copies of Service Level Agreements for the most used resources, affirm who has access and training to recover library systems, and check that your team is prepared to be an intermediary for offline requests if the ticketing system fails.
• Practice communicating with intent. Focus on cadence, clarity, and checking in. When disruption happens, you will inevitably fall to your level of training and established habits. If you have built a culture of transparency and care during normal operations, your staff will trust the brevity and directness required when you are in the “granular work” of recovery. Developing a “game plan” for crisis communication, in partnership with your institution’s public relations or emergency management teams, ensures you have the right allies and protocols in place before they are needed.
• Advocate with perspective. The library needs to be aligned with the enterprise to weather the disruption. This means shifting away from a “permissiondependent” mindset and instead positioning the library as a strategic partner that understands universitywide mandates, such as security reviews, while still protecting and embodying the library’s specific mission.
• Reflect on lessons learned. Use post-incident reviews that lead to real changes in training, practice, and resources for the library. True resilience is a pattern of behavior, not a one-time act. Use the “aftermath” of this crisis to ground your team, turning the pain of an attack or disruption into a catalyst for smoother governance and stronger, more proactive partnerships. Become familiar with the PICERL model for technology incident responses.
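The dependency audit recommended above can be partially automated. The sketch below is a minimal illustration, not a production tool; the dependency names and URLs are hypothetical placeholders, and the probe function would in practice be an HTTP health check with a short timeout against each vendor or campus endpoint.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    url: str
    critical: bool  # True if failure is catastrophic rather than manageable

def audit(deps, probe):
    """Classify dependencies by probing each endpoint.

    `probe` is any callable url -> bool (e.g., an HTTP HEAD check);
    a stub can be injected during a tabletop exercise so no real
    network traffic is needed.
    """
    report = {"ok": [], "degraded": [], "outage": []}
    for d in deps:
        if probe(d.url):
            report["ok"].append(d.name)
        elif d.critical:
            report["outage"].append(d.name)
        else:
            report["degraded"].append(d.name)
    return report

# Hypothetical inventory for a tabletop exercise (URLs are placeholders).
deps = [
    Dependency("ILS / catalog", "https://ils.example.edu/status", critical=True),
    Dependency("EZproxy / auth", "https://login.example.edu/status", critical=True),
    Dependency("Ticketing system", "https://help.example.edu/status", critical=False),
]
```

Running the audit with a stubbed probe during an exercise separates failures staff can work around from those that require immediate escalation, which is exactly the distinction between manageable and catastrophic failures described above.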
Preparedness is not abstract work; it is an investment that safeguards user access, sustains trust, and allows the library to fulfill its mission even when systems fail. In 2026, there are threats to libraries and universities seemingly around every corner. It’s challenging to balance the “what if” with the reality of where we might be within our institution. As libraries face growing and persistent threats, we can’t eliminate every risk. But we can reduce fragility, shorten recovery, and strengthen reliability so that when the next disruption arrives, we’re not simply reacting; we’re ready to lead.
Endnotes
1. SANS Institute, “Incident Response Cycle,” accessed January 27, 2026, https://www.sans.org/media/score/504incident-response-cycle.pdf
Probability of Data Breaches in Libraries and Recommendations to Prevent Them
By Tonia San Nicolas-Rocca, PhD, MBA (Faculty Associate Dean, San Jose State University) <tonia.sannicolas-rocca@sjsu.edu>
and Thomas Lee, PhD (Chief Executive Officer, VivoSecurity) <ThomasL@mail.vivosecurity.com>
Abstract
Libraries worldwide have become attractive targets for cyberattacks, exposing patron data and disrupting library operations. This article describes how empirical regression modeling links cybersecurity staffing to organizational size, measured by full-time employees, and how this relationship affects cybersecurity performance. Staffing and other recommendations for libraries are provided.
Introduction
Libraries serve as access points for information, technology, and digital resources. Beyond checking out books and other resources, libraries provide patrons with access to the internet, online databases, computers and other specialized equipment (e.g., scanners, printers, 3D printers, fax machines), as well as digital literacy classes, and other technology services. A wide range of patrons depend on these resources, including students, families, job seekers, researchers, immigrants, and the homeless. This reliance increases libraries’ exposure to cybersecurity threats and vulnerabilities, as these high-traffic information resources and services attract threat actors seeking unauthorized access to network and patron data. As such, the need to protect the confidentiality, integrity, and availability of library data and digital resources is essential.
Libraries collect, store, and manage a wide range of information, including personally identifiable information (PII) such as names, addresses, and dates of birth, as well as highly sensitive data including social security numbers, financial information, military identification numbers, driver’s license numbers, and health information (Privacy Rights Clearing House, 2026; Maryland Attorney General, 2026). Libraries also maintain reading histories, research records, and login credentials used to access electronic resources and digital services. The combination and sensitivity of these data types increase the risk of cybersecurity incidents or breaches, which in turn, increases the need for information security controls to protect patron privacy.
Many libraries operate with minimal information technology (IT) staff, and even fewer employ staff with cybersecurity experience or training. This limitation makes it challenging for libraries to conduct effective cybersecurity risk management and assessment, leaving them vulnerable to cybersecurity threats and incidents. Without properly trained cybersecurity staff, security policies may be ineffective. Between 2017 and the time of this publication, approximately fifty-two libraries worldwide have been targeted by
a cybersecurity attack, with seventy-three percent of these incidents occurring since 2021. These attacks exposed sensitive data, took down network and digital resources, temporarily closed libraries, and impacted the availability of services that communities rely on from libraries.
Probability of a Data Breach
Recent empirical regression modeling of the differences between organizations that did and did not experience a data breach has revealed that the probability of a data breach for any organization can be accurately calculated from its cybersecurity staffing levels (VivoSecurity). Specifically, organizations employing a sufficient number of Certified Information Systems Security Professionals (CISSP) are substantially (10-50 times) less likely to experience a data breach. It is not surprising that staffing levels serve as a strong predictor, given that workforce benchmarks are commonly used to assess broader business performance (IBM, 2025). In the context of cybersecurity, “performance” is best understood as the reduction in the likelihood of a significant security incident. This relationship is expected, as cybersecurity staff are responsible for risk management and assessment activities (NIST SP 800-37 R2). These practitioners prepare, categorize, select, implement, assess, authorize, and monitor the controls outlined in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Revision 5, Security and Privacy Controls for Information Systems and Organizations (NIST SP 800-53 R5). Their presence and expertise directly influence an organization’s ability to manage cyber risks effectively.
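The staffing-to-breach-probability relationship described above can be pictured with a simple logistic model. The coefficients below are illustrative assumptions, not fitted values from the VivoSecurity study; the sketch only shows how staffing level could map to breach probability and how a 10x-plus reduction can arise.

```python
import math

def breach_probability(cissp_count, intercept=-1.0, coef=-1.0):
    """Toy logistic model: P(breach) = 1 / (1 + exp(-(b0 + b1 * n))),
    where n is the number of CISSP-level staff. The coefficients are
    illustrative placeholders, not figures from the study."""
    z = intercept + coef * cissp_count
    return 1.0 / (1.0 + math.exp(-z))

# With these made-up coefficients, adding three CISSP-level staff
# lowers the modeled breach probability by more than a factor of ten.
reduction = breach_probability(0) / breach_probability(3)
```

With real fitted coefficients, the same shape of model is what lets a single staffing number translate into a concrete probability estimate.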
Recommended Cybersecurity Staffing in Libraries
Regression analysis confirms that cybersecurity is an ongoing operational activity requiring risk assessment, effective technical controls, continuous monitoring, governance, and compliance. Table 1 has been developed specifically for libraries, based upon the average IT size as a function of employees seen across libraries. The table presents recommended cybersecurity staffing levels (shown in green) based on the number of Full-Time Employees (FTE, shown in blue). For smaller libraries, staffing is recommended as outsourced consulting hours, while larger libraries are advised to employ full-time cybersecurity professionals. Regardless of whether cybersecurity services are provided internally or through external providers, staffing should reflect CISSP-level expertise or an equivalent qualification. These recommendations are based on average cybersecurity staffing benchmarks drawn from thousands of B2B organizations and are intended to help library leaders make informed, risk-based staffing decisions.

Table 1. Recommended Cybersecurity Staffing Levels for Libraries

<https://www.charleston-hub.com/media/atg/>
According to Table 1, for libraries with fewer than 1,000 full-time employees, outsourcing cybersecurity is an acceptable and practical option. We recommend interpolating between table values to estimate the number of outsourced consulting hours per month. For example, for a library with 400 full-time employees, we recommend between 50 and 100 hours of cybersecurity consulting per month. Because this is not a precise calculation, a visual interpolation between table values is sufficient: since 400 is a little more than 320, 60 or 70 hours per month may be adequate. Outsourced hours should be performed at a CISSP-equivalent level. Table 2 lists other certifications and qualifications equivalent to the CISSP.
For libraries above 1,000 full-time employees, we recommend hiring one or more full-time cybersecurity employees, also with a CISSP or an equivalent certification or qualification. Organizations of that size that rely on outsourced cybersecurity services account for a disproportionate share of data breaches. For example, for a library with 1,500 full-time employees, we recommend one to two full-time cybersecurity employees with CISSP or equivalent certifications or qualifications.
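The interpolation between table rows described above can be sketched in a few lines. Note that Table 1 itself is not reproduced in this text, so the rows below are hypothetical stand-ins chosen only to match the article's worked example (a 400-FTE library falling between a 320-FTE row and a larger row); only the method, linear interpolation between adjacent FTE rows, comes from the article.

```python
def recommended_hours(fte, table):
    """Estimate monthly outsourced cybersecurity consulting hours by
    linear interpolation between adjacent rows of a staffing table.

    `table` is a list of (library FTE, hours per month) pairs.
    """
    table = sorted(table)
    if fte <= table[0][0]:          # below the smallest row: use its hours
        return float(table[0][1])
    if fte >= table[-1][0]:         # above the largest row: use its hours
        return float(table[-1][1])
    for (f0, h0), (f1, h1) in zip(table, table[1:]):
        if f0 <= fte <= f1:         # interpolate between the bracketing rows
            return h0 + (h1 - h0) * (fte - f0) / (f1 - f0)

# Hypothetical stand-in rows (Table 1 is not reproduced in the text):
TABLE1 = [(160, 25), (320, 50), (800, 100)]

print(round(recommended_hours(400, TABLE1)))  # → 58
```

With these stand-in rows, a 400-FTE library lands near 58 hours per month, in the same range as the article's visual estimate of 60 to 70; actual estimates should of course use the real Table 1 values.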
Rationale for Cybersecurity Staffing Ratios

We find that the average number of CISSP-certified employees can be predicted, with a very strong R-squared of 84%, by regression analysis of IT size across thousands of B2B organizations. This relationship makes sense when IT is viewed as the primary vulnerability and the number of employees as the magnitude of the threat that must be mitigated: the more employees and systems an organization has, the larger the attack surface that must be protected. Because the definition of an IT employee can vary across organizations, we established an average IT staffing ratio for libraries as a function of FTE (one IT employee per 35 FTE overall) and used this to calculate an average for libraries based upon B2B organizations.

Why Average Staffing

Staffing levels across organizations, and the resulting breach probabilities, follow a bell-curve distribution. The recommended benchmark is the average staffing level, not only because it reflects the most common point on that curve and is therefore practical to achieve, but also because it drives the annual probability of a large data breach down to 0.066%, or about once every 1,500 years on average. Put differently, organizations operating well below this average staffing level account for most large breaches.

A low probability, however, is not the same as zero. The 0.066% figure applies only to large breaches; small breaches occur orders of magnitude more frequently. Moreover, even with a low annual probability, large breaches will still occur across the sector. If 1,500 libraries each had a 0.066% chance of a large breach, we would expect one such breach somewhere every year on average. That scenario is hypothetical, but with more than 400,000 public libraries worldwide and 365,825 libraries with internet access (IFLA, 2026), the implication is clear: reaching at least the average staffing level is essential to reducing the global incidence of large breaches.

Other Recommendations

Cybersecurity Policy

Develop and implement a cybersecurity policy that clearly defines the library’s approach to data protection and governance in accordance with the NIST Cybersecurity Framework. The policy should ensure compliance with applicable laws and regulations that protect library data (e.g., COPPA, CIPA, FERPA, HIPAA). It should also outline risk management procedures and incident response protocols, including clear steps for responding to cyberattacks and minimizing loss. The NIST Cybersecurity Framework Policy Template Guide (Center for Internet Security, 2024) may serve as a valuable resource for library leaders.
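The expected-frequency arithmetic above is easy to verify. This sketch simply restates the article's own numbers (a 0.066% annual probability of a large breach per library at average staffing); the final line extends the same arithmetic to the IFLA internet-access count, which the article implies but does not compute.

```python
p_large = 0.00066  # 0.066% annual probability of a large breach per library

# Per-library mean time between large breaches, in years
mean_years_between = 1 / p_large              # ≈ 1,515 ("about once every 1,500 years")

# Expected large breaches per year across the article's hypothetical 1,500 libraries
expected_hypothetical = 1500 * p_large        # ≈ 0.99, i.e. about one per year

# Same arithmetic applied to the 365,825 internet-connected libraries (IFLA, 2026)
expected_connected = 365_825 * p_large

print(round(mean_years_between))              # → 1515
print(round(expected_hypothetical, 2))        # → 0.99
print(round(expected_connected))              # → 241
```

Even at average staffing, the sector-wide expectation is not zero; the point is that below-average staffing multiplies these figures many times over.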
Security Education, Training, and Awareness Programs for Employees and Patrons
Given the sensitivity of the data libraries manage and the limited information security resources available to them, it is essential for all library staff to develop the skills necessary to identify risks and implement mitigation strategies to protect the information and systems they are entrusted with (San Nicolas-Rocca and Burkhard, 2019). Cybersecurity training should begin at onboarding and continue through ongoing training and awareness programs that address common threats (e.g., social engineering, ransomware attacks) and data protection practices such as multi-factor authentication and data minimization. Training should also communicate procedures for reporting vulnerabilities, including internal policies. Libraries should extend cybersecurity training to patrons as well. Patron training could focus on recognizing phishing attempts, creating strong passwords, using multi-factor authentication, and protecting personal information.

Table 2. Cybersecurity Certifications and Qualifications
Managing Third-Party Risks
Libraries rely on third-party providers for various services, including digital resources (e.g., databases, journals, eBooks), cataloging systems, library management systems (LMS), artificial intelligence tools, and cloud computing infrastructure. These providers may access and collect patron data in order to deliver library services. Before signing contracts or entering into any agreements, library leaders should thoroughly evaluate potential providers’ data collection and retention practices, risk mitigation strategies, and privacy standards. Privacy by design should be a priority, and ongoing communication with providers should be maintained to ensure that security controls remain integrated throughout the duration of the contract.
Conclusion
Over the past five years, libraries worldwide have experienced ransomware and other cyberattacks that have exposed patron data and disrupted access to library resources. As a result, effective cybersecurity has become an ongoing, multi-faceted responsibility for libraries. Regression analysis indicates that effective cybersecurity performance depends heavily on qualified staffing, particularly staff with CISSP-level expertise. Like all organizations, libraries face a non-zero probability of data exposure. When breaches occur, compromised information is often exploited for financial gain and fraud, causing real harm to patrons and communities. To comply with the American Library Association Privacy Guidelines (2020) and protect patron information, libraries must prioritize cybersecurity as a core operational requirement. Effective cybersecurity is not optional. It is essential, and it begins with sufficient staffing.
References
American Library Association. “Library Privacy Guidelines for Vendors.” Approved June 29, 2015. Revised January 26, 2020. https://www.ala.org/advocacy/privacy/guidelines/vendors
American Library Association. “Library Privacy Guidelines for Data Exchange Between Networked Devices and Services.” Approved June 24, 2016. Amended January 26, 2020. https://www.ala.org/advocacy/privacy/guidelines/dataexchange

Center for Internet Security. NIST Cybersecurity Framework Policy Template Guide. 2024. https://www.cisecurity.org/-/media/project/cisecurity/cisecurity/data/media/files/uploads/2024/08/cis-ms-isac-nist-cybersecurity-framework-policy-template-guide-2024.pdf

DataBreaches.Net. Accessed January 10, 2026 from https://databreaches.net/

IBM. Cost of a Data Breach Report 2025: The AI Oversight Gap. 2025. https://www.ibm.com/downloads/documents/us-en/131cf87b20b31c91

International Federation of Library Associations and Institutions (IFLA). IFLA Library Map of the World. Accessed January 15, 2026 from https://librarymap.ifla.org/datalab.

Joint Task Force. NIST SP 800-37 Rev. 2, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. National Institute of Standards and Technology, December 2018. https://doi.org/10.6028/NIST.SP.800-37r2

Joint Task Force. NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations. National Institute of Standards and Technology, September 2020. https://doi.org/10.6028/NIST.SP.800-53r5

Lucas, Rick and Thomas Lee. “The Quantified Value of CISSP and CISA Certified Employees: A Strategic Approach to Cybersecurity Hiring and Risk Reduction, Part I.” VivoSecurity. Accessed January 18, 2026 from https://www.vivosecurity.com/download-cissp-cisa-white-paper.

Maryland Office of the Attorney General. “Security Breach Notices.” 2026. https://oag.maryland.gov/resources-info/Pages/security-breach-notices.aspx
Privacy Rights Clearing House. “Data Breach Chronology.” 2026. https://privacyrights.org/data-breaches
San Nicolas-Rocca, Tonia, and Richard J. Burkhard. 2019. “Information Security in Libraries: Examining the Effects of Knowledge Transfer.” Information Technology and Libraries 38 (2): 58–71. https://doi.org/10.6017/ital.v38i2.10973
About the Authors
Tonia San Nicolas-Rocca is the Faculty Associate Dean in the College of Information, Data and Society at San Jose State University, and serves as the Co-Director of San Jose State University’s Center for AI and Cybersecurity. Her research interests include cybersecurity, artificial intelligence, health information systems, and knowledge management. Dr. San Nicolas-Rocca has published her work in peer-reviewed research journals and conference proceedings. She earned her doctoral degree in Information Systems and Technology from Claremont Graduate University.
Dr. Thomas Lee is the CEO of VivoSecurity, a Silicon Valley–based startup specializing in data collection and regression modeling to bring predictability to the inherent randomness of cybersecurity incidents. He has developed quantitative models to forecast online banking fraud and the probability of PII data breaches, and to project litigation likelihood and cost exposure following a breach. Dr. Lee holds multiple patents and has published extensively in peer-reviewed science and engineering journals. He earned dual BS degrees in Physics and Electrical Engineering from the University of Washington, and a PhD in Biophysics from the University of Chicago.
Column Editor: Corey Seeman (Director, Kresge Library Services, Ross School of Business, University of Michigan) <cseeman@umich.edu> Visit him at https://www.squirreldude.com/
Column Editor’s Note: Finding the beauty in our imperfect world is a photo away.
As I have shared previously, I will take some of my favorite photographs and attach a haiku to capture the image, or what I am thinking, or possibly what I am dreading. The above haiku accompanied my picture that is included in this column. If you want to see more (shameless promotion), you can follow me on Bluesky (https://bsky.app/profile/cseeman.bsky.social).
We are all living in very strange times. On the one hand, we have a serious disruption of the higher education system in the United States, impacting everything from research to programs to access. On the other hand, we work in a time when the demand for efficiency and productivity drowns out any other impulse we might have. The need to answer a question quickly wins every time over the slower work of learning how to answer it.
So I am thinking about this in the context of trying to learn a new language over the past year. (Spoiler alert: it’s German.) SO many questions here. Am I too old to learn a new language? Probably. Shouldn’t I use the great translation tools to just figure out what something says in English? Sure. However, if I use these tools, I will never learn the language. And as a fan of puzzles, that is essentially what a new language is: a puzzle with more applicable uses.
One important thing — the German haiku is wrong. It has eight syllables, not seven, in the middle line.
But a friend of mine helped me figure out how to drop an “e” from a word to shave a syllable. It makes it work in the haiku and it is artistic license. The more that I learn, the easier it will become to write creatively. And that is really the way that learning anything works. It can be slow, but it pays dividends later. But alas, our AI world waits for no one (so I am told).
Why am I trying to learn German? I have family from there — my grandmother, great aunt and great grandmother spoke German around me — but I never took the opportunity to learn the language as a child. So while I can translate letters and genealogical references easily with the abundant and free tools, I really would like to learn the language. Sure, I cannot tell where the verbs go in a sentence (I am more convinced I am lucky rather than knowledgeable in this regard). And I wonder if there are many witches in Germany, as Duolingo seems to use them in practice sentences more than I expected.
If we are too busy looking down, we might miss out on the squirrels posing for us. Taken on a cold Winter’s day in Ann Arbor at the University of Michigan — Thursday, February 5th, 2026.
We have another shorter column with three reviews. And unlike many of my columns, this one is more about the core work of librarians than about reference works that we might purchase for our community. And the topics of these three works are spot on in requiring good resources and our attention as we bask in all that 2026 has to offer us. With the complete drain that seems to be our AI-charged world, Chernow’s Beyond the Internet: Successful Research Strategies helps us understand where our users are coming from in higher ed (especially). Gonzales’ Library Website Design and Development: Trends and Best Practices helps us look at our websites and interfaces with the community, especially as we push towards meeting the accessibility needs of our electronic resources this year. Finally, the very nature of our work comes into focus in the third book: Organize Your Library!: Developing the Collective Power of Library Workers, in which authors Angelo Moreno, Kelly McElroy, Meredith Kahn, and Emily Drabinski offer a timely and useful guide to organizing the workforce in a library. Taken collectively, these three works represent the key issues facing us day in and day out.
So while I was not working on reviews for this column (and other things), I was trying something new. Writing a haiku in German. It accompanied this image (https://flic.kr/p/2rTUa3Q).
Wenn das wetter ist schlecht, Ich glaube Deutsch geräusche besser als Englisch.
English Translation: When the weather is bad, I think German sounds better than English.
Special thanks to our reviewers, who take the time to explore these works to see if they are appropriate for libraries. My reviewers for this issue are Joshua Hutchinson (University of Southern California), Jennifer Matthews (Dartmouth College), and Katherine Swart Van Hof (Calvin University). As always, I thank them for bringing this column together.
If you would like to be a reviewer for Against the Grain, please write me at <cseeman@umich.edu>. If you are a publisher and have a book you would like to see reviewed in a future column, please also write me directly. You can also find out more about the Reader’s Roundup here — https://www.squirreldude.com/atg-readers-roundup.
Happy reading and be nutty! — Corey
Chernow, Barbara A. Beyond the Internet: Successful Research Strategies (2nd ed.). Lanham, MD: Bernan Press, 2024. 979889205032, 172 pages. $59.00.
Reviewed by Katherine Swart Van Hof (Collection Development Librarian, Hekman Library, Calvin University) <kswart20@calvin.edu>
Ask Google how Gen Z college students do research, and Gemini AI will reply that students use online sources and social media when doing research. They also seek help from family, friends, and teachers. No mention of libraries or librarians. Ouch.
Still, it’s no surprise to university librarians that students rely on the Internet for doing the majority of their research. Even parents at freshmen orientation will ask why libraries are still needed when everything is available online. The myth that one can find the answer to any question by using Google or AI is pervasive and may even feel threatening.
Academic librarians know it’s a myth, but it’s hard to muster the energy to explain once again that not every book has an eBook equivalent. Not all primary sources are digitized and freely available online. Not all statistics are neatly arranged in a free online report that will solve a student’s research question.
Barbara A. Chernow understands this, too. Though not a librarian, she is a history scholar with an impressive resume. She was the associate editor of The Papers of Alexander Hamilton for Columbia University Press and editor-in-chief of the fifth edition of the Columbia Encyclopedia. An author, editor, and teacher, Chernow has had a career doing in-depth research that today’s students would marvel at.
In the first edition of Beyond the Internet (2007), Chernow set out to explain her philosophy of research, give examples of the types of research she has done, and prove that not everything is on the Internet. With this second edition of the book, Chernow ostensibly updates her sources and adds a chapter about artificial intelligence.

Guide to the ATG Reviewer Ratings

The ATG Reviewer Rating is included for each book reviewed. Corey devised this rating to reflect our collaborative collections and resource-sharing environment, and he thinks it will help classify the importance of these books.

• I need this book on my nightstand. (This book is so good that I want a copy close at hand when I am in bed.)

• I need this on my desk. (This book is so valuable that I want my own copy at my desk that I will share with no one.)

• I need this in my library. (I want to be able to get up from my desk and grab this book off the shelf, if it’s not checked out.)

• I need this available somewhere in my shared network. (I probably do not need this book, but it would be nice to get it within three to five days via my network catalog.)

• I’ll use my money elsewhere. (Just not sure this is a useful book for my library or my network.)
Chernow certainly seems to enjoy making her argument and recounting her adventures in research. However, the more I read, the more I wondered, “Who is her audience?” Academic librarians will resonate with Part I, where she presents her philosophy of research and reveals her research algorithm. Unfortunately, none of this will be earth-shattering to anyone who has gone through graduate school and done research of their own.
Part II seems more geared toward students, as Chernow details different types of research resources. Librarians are already familiar with these sources, but it may not have occurred to students that they could consult a government document, conduct an interview, or read a primary source from an archive. That’s good advice, but then Chernow digs deeper. She suggests that the reader page through old newspapers, read the diaries of historical figures, and visit private collections. And in Chapter 7, she actually suggests that the reader buy microfilm reels of documents they cannot find online.
Therein lies the problem: Would the average Gen Z student actually take Chernow’s advice? What’s more, does the typical college paper require the depth of research Chernow seems to promote?
Students will not easily be dissuaded from using AI by Chernow’s account in the last chapter. Seemingly out of her element, she tries only one AI tool, ChatGPT, asking it questions about herself and her own reputation. She then tries to tell ChatGPT about herself when it doesn’t know who she is, missing the point that ChatGPT is primarily a predictive text generator piecing together an answer from the data it was trained on. She says AI chatbots are unreliable because the information they use comes from the Internet, which often contains misinformation. But she would have been better off using a search engine for that kind of lookup; this is a matter of using the right tool for the job, not a fault of ChatGPT. Moreover, there are countless other AI tools that are rapidly improving. Chernow seems to dismiss all of AI based on this one experience.
So who is the audience? CHOICE Reviews (July 2008) places the 2007 edition in Z710 — Library guides and aids. This is where OCLC and most libraries catalog it, too. Reference & Research Book News (February 2008) labels the 2007 edition as LB2369 — Higher education — preparation of theses. I think the latter is perhaps the better classification. I would recommend this book to a Gen Z graduate student in history. I would hesitate to recommend it to anyone else.
ATG Reviewer Rating: I need this available somewhere in my shared network. (I probably do not need this book, but it would be nice to get it within three to five days via my network catalog.)
Gonzales, Brighid M. Library Website Design and Development: Trends and Best Practices. Lanham, MD: Rowman & Littlefield, 2025. 9781538192344, 176 pages. $92.26.
Reviewed by Jennifer Matthews (Head of Acquisitions and Collection Development, Dartmouth College) <jennifer.k.matthews@dartmouth.edu>
Website design is most likely something that many of us take for granted. We visit a website, peruse it for what we need, spend
a few minutes or a few hours there, and then move on to our next task. But for some, the design of that website is critical, and for those charged with renewing a website, following best practices can make the difference between a successful refresh and a failed one. Brighid Gonzales’ book Library Website Design and Development: Trends and Best Practices seeks to provide these guidelines for those charged with updating their library’s online presence.
Over the course of ten chapters, Gonzales writes in an approachable style; readers do not need a computer science degree to understand the topics under discussion. Each chapter is set out so that the reader can easily learn the terms associated with its topic, and each includes a list of additional resources plus a bibliography for further information. Some chapters also list additional tools that could be used for a particular task, such as determining accessibility.
As an example of the writing style, the chapter on website security uses specific examples of security issues, explained plainly and without jargon, so the reader understands what can happen if such issues are not properly prevented. This leads to a discussion of the need for strong passwords, security testing, and data privacy, including conducting data privacy audits and writing a data privacy policy. In similar works on website redesign, the technical terminology in such a discussion would be difficult for anyone without a computing background, but Gonzales writes so fluidly that there is no need for overly technical language. Definitions are provided in layman’s terms, and each new section builds upon the last, with examples to help as needed.
Library Website Design and Development discusses everything in website design, from colors and fonts to content management systems, information architecture, and usability. Additionally, there is discussion of what to do after the new website has been launched (e.g., assessing the site, scheduling updates, ensuring it stays current). With such coverage, it seems to meet the author’s goal of being an “inclusive guide to all topics and steps inherent in the process of website design and development, while also providing a focused guide on the unique needs of websites” (Gonzales, p. vii).
If your library is considering a website refresh soon, this book is worth a read to determine if you have thought about all of the important angles that such a project would entail. The chapters plus the additional resources are worth having on hand during an intensive website review.
ATG Reviewer Rating: I need this available somewhere in my shared network. (I probably do not need this book, but it would be nice to get it within three to five days via my network catalog.)
Moreno, Angelo, with Kelly McElroy, Meredith Kahn and Emily Drabinski. Organize Your Library!: Developing the Collective Power of Library Workers. Chicago: ALA Neal-Schuman, 2025. 9798892555227 (paper), 9798892553230 (pdf), 118 pages. $44.99 (softcover).
Reviewed by Joshua Hutchinson (Director, Technical Services, University of Southern California) <joshuah8@usc.edu>
Organizing your colleagues in a library to join a labor union is hard work. It requires collaboration, courage, and dedication.
It also requires a lot of knowledge on the part of all library staff — those organizing the unionization drive along with all other library workers. This important new book prepares those interested in organizing a union in their workplace by providing strategies for organizing and planning, examples from successful union drives around the country, and information on how to make unionization succeed. Organized in nine chapters, the book covers the basics. It opens with “What Is a Union and Why Do I Need One?,” describing the theory and law of unions and the day-to-day reality of negotiating as a union member. Later chapters, “Campaigns” and “Contracts,” explain how to make those negotiations successful. It ends with two chapters that look forward (“Our Vision for the Future,” which partly describes why this book was written and who its audience is) and speak directly to supervisors and managers (“Bonus: If You’re Reading This and You’re the Boss”).
This book aims to cover all aspects of the organizing process, but it does not aim to be comprehensive. It’s a brief volume, and intentionally so: “… this book is intended as a starting place, not an ending to your research” (p. xvii). Each of the first eight chapters ends with further readings and an action plan — it truly is written as a guidebook or handbook for those interested in pursuing unionization. Each of the chapters also defines key terms, such as types of bargaining (including oppositional bargaining and interest-based bargaining), and usually follows them up with several illustrative examples from unionized workplaces that demonstrate the theory in a practical context. This structure ensures that the book is of use for library workers with all levels of knowledge and experience of unions — those who are novices are given the knowledge they need to understand the benefits of unions and how unionized workplaces operate.
The four authors all have wide and varied experience in libraries and with library unions in both public and academic libraries, along with library worker advocacy (Drabinski was ALA president from 2023-2024). Each of the authors explains in some depth their personal experience working with labor unions in the introduction. The underlying theory of this book is that “… the general principles and goals of organized labor can bring us closer to our patrons and our coworkers and can improve the overall strength of our libraries” (p. 9). They do make it clear, though, that “[s]tarting a union will not solve all the problems…” (p. 9) and provide some tips on how to use the strategies in this book going forward in a unionized workplace.
One small note: this book was written at a time of expansion of the powers and rights of labor unions, driven by President Biden and his National Labor Relations Board. Under President Trump, workers’ rights and the rights of labor unions have been curtailed, with a much less positive outlook for the near future. Despite this sobering context, however, this book is an important one for anyone interested in organizing a union, playing a more active role in an existing union, or simply understanding what a union looks like in a library setting.
ATG Reviewer Rating: I need this on my desk. (This book is so valuable, that I want my own copy at my desk that I will share with no one.)
Booklover — Period
Column Editor: Donna Jacobs (Retired, Medical University of South Carolina, Charleston, SC 29425) <donna.jacobs55@gmail.com>
László Krasznahorkai, the 2025 winner of the Nobel Prize in Literature, published the novel Herscht 07769, which includes sentences, paragraphs and pages with all types of punctuation, but not a period. How novel, except that in 2023 when Jon Fosse won the Nobel Prize in Literature, I read The Other Name: Septology I-II and was curious about the absence of periods in this piece. Now it becomes more than a curiosity that the works of two Nobel Laureates in Literature have avoided the use of the period. Is this a trend? Creative sentence structure, use of punctuation and book layout give one a view into an author’s mind. Does the lack of a period indicate stream of consciousness? Is that really the intent or is something else afoot? Maybe the title of this column should be – “The Use of a Period in Modern Writing.”
I chose not to complete the one-sentence composition entitled Herscht 07769, which quickly identified itself as a tour de force of dark subject matter offered in the difficult format of a single sentence. Instead, I found that The World Goes On not only includes periods but also showcases the reason for the Nobel Committee’s choice — “for his compelling and visionary oeuvre that, in the midst of apocalyptic terror, reaffirms the power of art.”
Born in 1954 in the small town of Gyula, Hungary, near the Romanian border, László Krasznahorkai specialized in Latin in high school, played piano in jazz and beat ensembles, completed his required military service, and briefly studied law before roaming through various odd jobs. He then returned to legal studies at Eötvös Loránd University but decided instead to finish a thesis in the Faculty of Humanities focusing on the writer Sándor Márai. Once his studies were finally completed, he worked as a freelance writer.
“When I am not reading Kafka I am thinking about Kafka. When I am not thinking about Kafka I miss thinking about him. Having missed thinking about him for a while, I take him out and read him again. That’s how it works.” — László Krasznahorkai
Sátántangó (1985) debuted to great success, elevating him in the world of Hungarian literature and capturing the attention of critics like Susan Sontag, who crowned Krasznahorkai the “master of the apocalypse” in Hungarian literature.
Sidebar: Understanding the use of the word apocalypse to describe Krasznahorkai seemed important. Most people identify it with the “end of the world” definition, but a Google search produced a variety of definitions, some of which stem from the etymology of the word: the Greek apokálypsis, meaning “a revelation,” is the genesis of apocalypse. “A revelation” seems to be an appropriate approach to this book. Even László Krasznahorkai explains: “Each text is about drawing our attention away from this world, speeding our body toward annihilation, and immersing ourselves in a current of thought or a narrative ….” This might explain why one reviewer described Krasznahorkai as “a devoted practitioner of purposeful obscurity.”
In a book entitled The World Goes On, there is a story (or maybe a thought process) entitled The World Goes On, one of 21 pieces in this collection, in which Krasznahorkai presents the impossibility of comprehending what transpired on September 11, 2001: “….I knew at once, watching the flaming, tumbling Towers, and then envisioning them again and again, and I knew that without a brand-new language it was impossible to understand this brand-new era in which, along with everyone else, I suddenly found myself, I brooded and pondered, tormented myself for days on end, after which I had to admit that no, I had no chance of suddenly learning a new language, I was, along with the others, too much a prisoner of the old, and there was no recourse, I concluded, but to abandon all hope of ever understanding ….”
I can’t help but respond with the revelation that … the world goes on.
Maybe it goes on with a dose of humor, as humor plays a role in moving forward from the tough stuff of the world. And maybe it is just me, but I took a dose of humor from Krasznahorkai’s The Swan of Istanbul (79 paragraphs on blank pages) with subsequent “Notes.” It truly is blank pages followed by “Notes.” This is where the humor exists for me: because of the blank pages, one doesn’t know what he is referring to, except that there is a reference to a lot of forgetting … my mind just went blank:
“Page 287. suddenly forgot: after the kind personal communication of Attila Golyo Gulyas-Kovacs (Rockefeller Institute, New York) 9.30.2011.”
“Page 287. the rapid forgetting of details: after the kind personal communications of Balint Lasztoczi (Columbia University, New York) 9.30.2011.”
“Page 287. he was aware that he was forgetting, that some kind of confusion had developed between himself and the world, in this case between him and the …: David S. Martin: “Man’s Rare Ability May Unlock Secret of Memory.” CNN, May 2008.”
I couldn’t help but laugh out loud and know that the world will go on.
<https://www.charleston-hub.com/media/atg/>
LEGAL ISSUES
Questions & Answers — Copyright Column
Column Editor: Kyle K. Courtney, Esq. (Director of Copyright & Information Policy, Harvard Library) <kyle_courtney@harvard.edu>
FROM A LIBRARIAN TRYING TO KEEP UP WITH AI DEVELOPMENTS: Last year I wrote about whether “training” AI models on copyrighted books might be fair use. But now I keep seeing headlines about enormous settlements and allegations of “pirated libraries.” If training might be fair use, why is the legal risk still so intense?
ANSWER: Because the legal fight is no longer only about what AI systems do with texts. It’s also, often more decisively, about how those texts were obtained.
In other words, even if a court is willing to see certain kinds of AI training as transformative, non-expressive “learning,” that doesn’t mean the data pipeline is automatically lawful. Fair use is a defense to copyright infringement; it is not a magic eraser for a bad fact pattern. If the books came from pirate sources, were acquired through contract breaches, or were accessed by circumventing controls, the courts can shift from a nuanced fair-use debate to something much simpler: unauthorized acquisition and storage at massive scale.
Libraries have dealt with a close cousin of this problem for decades: you can have a socially valuable purpose, but it’s helpful to have a lawful acquisition story. Provenance matters. And in the AI era, “data provenance” may be the most important phrase for librarians to learn.
I’m still stuck on the headline version: “If training might be fair use, why are AI companies paying huge settlements?”
ANSWER: Because the biggest exposure may not be “training” in the abstract. It may be the inputs: where the books came from, how they were copied, and what was retained.
A concrete example is the class action involving Anthropic. Judge William Alsup’s June 23, 2025 ruling is frequently described as a major fair use decision about training. But it was also, importantly, a decision that separated two different things: (1) training as a transformative process, and (2) the alleged storage and use of large quantities of pirated books. Public reporting on the ruling emphasizes that the court could accept fair use for training while still treating the retention of pirated libraries as infringement.
Then came the settlement news. In late 2025, a federal judge approved (at least preliminarily, per reporting) a roughly $1.5 billion settlement tied to allegations that hundreds of thousands of books were pirated for training. Whatever you think about the merits, the magnitude tells you something: the “how we got the books” story can dominate.
This is the key lesson: a fair use defense doesn’t “launder” a dirty dataset. Even if training can be framed as learning,
it is much harder to frame “downloaded a shadow library” as an authorized reproduction and acquisition.
When people say “data provenance,” what does that mean in AI training, and why should librarians care?
ANSWER: Provenance is the chain of custody: documentation that answers, in plain terms, “Where did this text come from and under what constraints?”
Librarians already speak this language fluently. For rare books and archives, provenance is the story of ownership and handling. For licensed databases, provenance is contract terms, access rules, and permitted uses. For digitization, provenance includes scanning logs, preservation files, rights review, and access controls.
In AI, provenance is the “receipt file” for training data. At minimum, it should document:
• Source category: open web, public domain, Creative Commons, licensed corpora, purchased print books scanned in-house, user-provided content, etc.
• Permissions and restrictions: TDM allowed? internal use only? no derivative reuse? retention limits? deletion requirements?
• Governance: who approved inclusion, under what policy, and with what auditing or review?
Why does this matter? Because provenance is what you can defend. A company — or a library — may say “we trained on books,” but litigation asks sharper questions: Which books? From where? With what proof? With what retention? With what controls? The ability to answer those questions with credible documentation can be the difference between a complicated fair-use case and an extremely expensive settlement conversation.
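For illustration only, the minimum “receipt file” fields described above could be captured in a small record like the following sketch. Every name and value here is hypothetical — this is not a standard schema, just one way a library or vendor might structure a per-source provenance entry it can defend:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """One 'receipt file' entry for a single training-data source (illustrative)."""
    source: str            # where the text came from, e.g., a named licensed corpus
    source_category: str   # open web, public domain, licensed corpora, scanned print, etc.
    permissions: dict      # TDM allowed? internal use only? retention/deletion limits?
    approved_by: str       # who approved inclusion, under what policy
    review_log: list = field(default_factory=list)  # auditing and review notes

    def is_defensible(self) -> bool:
        # A defensible record answers: which source, in what category,
        # under what constraints, approved by whom.
        return bool(self.source and self.source_category
                    and self.permissions and self.approved_by)

# Hypothetical example entry
record = ProvenanceRecord(
    source="Vendor X licensed corpus",
    source_category="licensed corpora",
    permissions={"tdm_allowed": True, "internal_use_only": True},
    approved_by="Collections policy 2025-04",
)
print(record.is_defensible())  # True: every field is documented
```

The point of the sketch is the shape, not the code: each field corresponds to a question litigation will ask, and an empty field is a question you cannot answer.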
Isn’t all of this just “copying”? Why are courts and plaintiffs treating “piracy” differently from “copying for training”?
ANSWER: Because “piracy” is often a different legal and moral fact pattern than “copying for analysis.”
Training cases that survive early stages often do so by emphasizing non-expressive internal use: the system learns statistical patterns; it does not distribute the books or output them verbatim. Courts may then weigh classic fair-use
considerations: transformative purpose, necessity of copying, and whether outputs substitute for the originals.
But “piracy” usually means something more straightforward: unauthorized copying and retention from sources that were not lawful in the first place. That shifts the conversation from “transformative use” to “how did you get this many copyrighted books?” It’s not that fair use becomes irrelevant; there are many cases where works might have been acquired in an unauthorized way and still qualify as fair use (see NXIVM Corp. v. Ross Institute, where the Second Circuit affirmed that using copyrighted, confidential materials for critical commentary constitutes fair use, even if the material was obtained in violation of a nondisclosure agreement). But the defendant may have a much harder time presenting itself as a good-faith actor engaged in socially beneficial learning when the acquisition story looks like a shadow library.
That is why Judge Alsup’s posture in the Anthropic litigation, accepting fair use for some training theories while still treating the storage of pirated books as a serious issue, feels like a roadmap for future cases.
Here’s the librarian translation: good purpose doesn’t always excuse bad intake. Libraries know this instinctively. Our mission is public-serving, but we still don’t get to build collections through theft. AI developers face the same basic principle, just at a scale that makes the consequences headline-sized.
Okay, so what exactly is the “risk menu” here? What can rightsholders claim besides plain copyright infringement?
ANSWER: The AI disputes we’re watching are multi-front wars, not single-issue debates. Even if “training” sits under a fair use umbrella in one case, plaintiffs can still pursue other theories depending on the facts.
Copyright infringement is the classic claim. Unauthorized copies were made and retained, and the battleground becomes purpose, transformation, market harm, and, increasingly, the role of outputs.
Secondary liability theories can appear depending on how models and datasets are shared, especially if downstream uses are alleged to cause infringing outputs or distribution.
Contract claims are the sleeper issue for libraries and vendors because licenses and terms of service often govern access and reuse, and a fair use story may not defeat a “you-breached-the-license” story.
Access-control and circumvention theories sometimes arise when plaintiffs point to technical restrictions and argue that access controls were bypassed. These claims are fact-specific, but the general lesson is stable: how content was accessed can create liability independent of whether training is transformative.
And equitable or state-law claims, such as unjust enrichment, may appear depending on pleading and jurisdiction. They don’t always survive, but they shape settlement pressure and discovery.
The key point is not that every claim will succeed. It’s that provenance problems spawn multiple lines of legal attack. When the intake pipeline is messy, the defendant fights on too many fronts at once.
I saw recent reporting about companies physically scanning huge numbers of print books, sometimes destructively. Does buying books and scanning them solve the provenance problem?
ANSWER: It can improve the optics and sometimes the legal posture, but it doesn’t turn the situation into a free-for-all.
Public reporting has described a “race for text” in which companies sought massive corpora not only from the web but also through large-scale acquisition and scanning of print books. The instinct here is understandable: print copies are widely available and buying them feels “cleaner” than downloading a shadow library.
But from a copyright perspective, owning a physical copy does not automatically include the right to digitize it for any purpose. Libraries know this very well: our ability to digitize at scale has historically depended on carefully bounded purposes (preservation, accessibility, indexing, research) and on the legal frameworks (including fair use) that support those purposes.
So, what facts matter?
• Purpose and character: internal research tools and non-expressive analysis look different from building a commercial product that competes in the expressive market.
• Access controls: a locked-down internal corpus is not the same as a public-facing system that can output long passages.
• Retention and governance: are copies stored indefinitely, are they secured, and is there a deletion policy?
• Outputs and substitution: if outputs can substitute for books through summaries, chapter-like rewrites, or verbatim passages, risk rises dramatically.
Even if purchasing and scanning books looks more responsible than downloading pirate libraries, the legal question does not disappear. It becomes: Is the digitization and use of those texts justified under the applicable doctrine and facts? The answer may be “yes” for some uses, “no” for others, and “it depends” for most. That’s not a dodge; it’s the honest structure of fair use.
What’s going on with the Anthropic settlement process and why should librarians pay attention to the details?
ANSWER: Because the settlement is more than a headline number. It is a public, operational example of what “provenance liability” looks like at scale, and it shows how these disputes translate into real-world administration.
The settlement website lays out concrete deadlines and procedures, including a claims deadline of March 30, 2026, and other key dates in the process. That means the legal system is not merely “debating” principles; it is distributing money based on allegedly unlawful acquisition and copying practices.
For librarians, the most important takeaway is not to become settlement administrators. It’s to recognize how quickly the conversation can shift. One day it’s “Training is transformative; maybe fair use.” The next day it’s “Show me your dataset sources and your retention.” And the following month: “Here is a claims portal and a class notice.”
Even if you think training should generally be fair use, you should still want clean provenance, because provenance determines whether a dispute is doctrinal and winnable, or reputational and combustible.
How does this change what libraries should ask for when we buy AI tools, or when vendors want to train on content we license?
ANSWER: This is where librarians can be very practical. And also quietly confident, because we start from a stronger place than most commercial actors: libraries have, and will continue to have, lawful access to their collections.
That is the good news in a provenance-driven world. Libraries acquire books, journals, and media through established legal channels, including purchase, license, deposit, and donation. We maintain records of acquisition. We negotiate terms. We steward access. That means that when the conversation shifts from abstract fair use arguments to the concrete question of where the text came from, libraries often have an answer that is both boring and powerful: it came from our collection, obtained lawfully, and documented as such.
This does not mean that every form of digitization or every model-training project is automatically permitted. But it does mean that libraries are in a much better position to explore mission-driven AI projects with cleaner facts. We can consider training or fine-tuning models on corpora drawn from our holdings in ways that prioritize research, teaching, accessibility, and discovery, while keeping sensible controls around access, security, and outputs. In other words, we can design systems that learn from the collection without turning the collection into a substitute product.
And that stronger footing should shape both our procurement posture and our internal ambitions. When we buy AI tools, we should expect vendors to meet the provenance standards we already meet. We should be able to ask, plainly, what content was used, under what rights, and with what documentation, and to get answers that do not rely on hand-waving. When vendors want to train on content we license, we should insist that training terms be explicit, limited, and consistent with the rights we negotiated in the first place. If “no training” is the default, that should be written clearly. If training is proposed, it should be bounded, disclosed, and accountable, and it should not turn licensed access into uncompensated extraction.
At the same time, we should recognize the strategic opportunity here. If fair use survives training, it will survive best when the provenance is clean and the use is genuinely tied to scholarship and research. Libraries can build that record. We can create careful, documented corpora. We can align model development with public-interest goals. We can reduce the legal and ethical volatility that comes from opaque pipelines. The
point is not that libraries should rush headlong into building models. The point is that we are positioned, better than most, to do it responsibly.
This is not about being anti-AI. It is about insisting that the next generation of AI tools be built on lawful access, transparent provenance, and the same values of stewardship that libraries have practiced all along.
What’s the big takeaway for librarians? Be cautious or bold?
ANSWER: Be informed! And be bold in the way libraries have always been bold: mission-forward, ethically grounded, and legally literate.
The AI copyright conversation often gets framed as a single question: “Is training fair use?” That question matters. But the more immediate and actionable questions might be: Is the dataset clean? Does it have provenance you can defend? Are your contracts aligned with your mission and risk tolerance?
Libraries are unusually well positioned here. We already know how to do stewardship: documentation, governance, access control, and careful boundary-setting. We have experience building transformative tools, search, indexing, preservation, accessibility, without turning ourselves into distributors of substitute copies. We know that rights and access can coexist.
If fair use survives training, it will survive best when the facts are good: lawful acquisition, and transparent documentation. If the law moves toward demanding licensing in some contexts, libraries will still be indispensable voices in shaping what “responsible” looks like, because responsibility cannot be defined solely by the loudest commercial actors.
The simplest phrase to carry forward is this: fair use can’t always sanitize a dirty dataset acquired by an AI company. Libraries can help the field remember this lesson by insisting on lawful provenance, transparent documentation, and responsible design, whether we are building tools ourselves, purchasing them from vendors, or shaping the norms that will govern AI in the years ahead.
Still Seeking Columnist for Legally Speaking Column!
Are you a law librarian, or do you have experience with legal issues in libraries? We’re looking for a new editor for the “Legally Speaking” column. In this role, the column editor would write on legal issues in librarianship and scholarly communication, solicit articles from other authors, or a combination of both. There are five issues per year (February, April, June, September, November), each with its own editorial deadline. Please contact us at editors@against-the-grain.com if you’re interested in finding out more, or if you have suggestions or nominations for someone else who would be a good fit. Thank you!
Seeing the Whole Board — Charleston Conference 2025 in Three Sessions: Name, Claim, Frame
In November I took part in the 2025 Charleston Conference in three ways: a pre-conference sponsored by Emerald Publishing and two conference sessions, including a Neapolitan session. The common theme across the sessions was bringing issues to light, naming them; making the connections between current issues and academic libraries, claiming them; and providing context and suggesting actions, framing them.
The pre-conference session was called “Safeguarding Scholarly Communications: A Strategic Leadership Workshop for Academic Librarians.” More than 50 people attended the workshop, which was led by Lucy Santos-Green, the director of the School of Library and Information Science at the University of Iowa.
Starting with the premise that “Academic librarians play a critical role in the scholarly communication enterprise, curating and providing access to resources that foster informed and engaged research, teaching, and learning communities within their institutions and beyond,” the workshop focused on what library professionals can do to develop strategies to highlight the value that academic libraries bring to their institutions and the research cycle. “Librarians are well positioned to be powerful advocates for the value of scholarly communication and importance of intellectual freedom and academic integrity.” The workshop explained the importance of articulating what makes a healthy academic ecosystem and posited ways library professionals can defend that system.
Given the impact of politics and policies on academic institutions and scholarly communications, some of the current issues facing librarians and their institutions helped provide the framework for the half-day program. The goal was to give practical advice that those in attendance could apply to their own current challenges and/or to prepare for any future issues.
As I usually do, I spelled out some of the policies, executive orders, and online rhetoric that had been occurring since the new administration began implementing its policies or making demands. I also highlighted the games afoot, the logical fallacies being used as tactics, including: Ad Hominem Attacks, False Equivalences, Ambiguity, Red Herrings, Oversimplifications, Impossible Expectations, Moving Goalposts, Data Cherry-Picking. They are all tactics that can be broken down and understood AND countered. The first place to start is to name them.
One other tactic that is important for people to keep in mind is that feeling of being overwhelmed. It is intentional. Once you realize it is a tactic, that it is something being done to you, it is easier to respond. “Us versus them” is a way to isolate people. The firehose of headlines, statements, executive orders, social media posts, speeches, and threats are designed to be unsettling, to overwhelm. In a world designed to be overwhelming, to convince us it is inevitable and there is nothing we can do, to rig the discourse, to make it easy to
give up in advance, what can we do? The best advice is to pick your lane, choose your area of expertise, and get involved.
Once I set the scene, it was time for the real library professional to take over. Dr. Lucy Santos-Green walked the attendees through a checklist of institutions and systems (e.g., resources, policies, community-wide issues, statewide issues) and broke each of those down into categories. Resources included branding and marketing, university general counsel, government relations teams, compliance and ethics documentation, budget, civil rights compliance, strategic communications, advancement, and training. Policies included governing board, institutional, library, departmental and other policies that might be of interest (e.g., undergraduate and graduate college policies). For community-wide issues, she listed K-12 partnerships, service-learning programs, publicly accessible programs and lectures, workforce development, and civic engagement. Statewide issues included economy, environment, political and social issues (e.g., brain drain), healthcare, workforce development, innovation, arts and culture, civic engagement, and infrastructure goals.
Dr. Green suggested attendees list out any links, documents, or collections that matched up with these areas along with key contacts they know or need to know. She also focused on considering how the library interacts with or supports these areas and any improvements the library is part of. Attendees were also asked to indicate areas they were unfamiliar or uncomfortable with. The goal was for attendees to select an item to work on as a priority project.
For the interactive part, the attendees broke up into groups and identified gaps, defined their current and desired state, and outlined the gaps in performance and opportunity. They were instructed to select one institution and a set of issues and review the purpose of the library and the value it brings to the institution, the community, and the state. They then looked at the strategies — how is the library bringing that value and how is the value being communicated to the institution, community, and state — and then outline objectives to bring that value to those institutions and stakeholders. This led to an examination of component tasks and work processes, interdependencies, staff capabilities, workforce hierarchies, and the library’s culture.
My first full conference session was “Twenty Percent of an Idea. If We Are Confronting a New Reality, How Can We Start from Zero and Move Forward.” I presented with Gary Price, co-editor and creator of ARL Day in Review, and Tony Roche, Chief Content Officer at Emerald Publishing. Our goal was to begin a discussion with attendees who were ready to talk about what to do to “move on from the fear and the anger and … start to discuss what comes next.”
We started by outlining something simple that everyone in the room could do starting that day. Gary Price explained the importance of contributing to the public archive. If information
is disappearing or being taken down from websites, each visit to a website is an opportunity to contribute to the public archive. It is easy to save a webpage, an article, or a website, by saving it to the Internet Archive using the Wayback Machine.
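As a concrete sketch of that workflow: the Wayback Machine exposes a “Save Page Now” address of the form https://web.archive.org/save/<URL>, and visiting that address (in a browser, or via an HTTP request) asks the Internet Archive to capture a snapshot of the page. A minimal helper might simply build that address; the page URL in the example is hypothetical:

```python
from urllib.parse import quote

SAVE_ENDPOINT = "https://web.archive.org/save/"  # Wayback Machine "Save Page Now"

def save_page_now_url(page_url: str) -> str:
    # Build the capture-request address, keeping URL structure characters intact.
    return SAVE_ENDPOINT + quote(page_url, safe=":/?=&")

# Hypothetical page to archive; opening this address triggers the capture.
print(save_page_now_url("https://example.org/vanishing-report.html"))
# → https://web.archive.org/save/https://example.org/vanishing-report.html
```

Actually performing the HTTP request on that address is what triggers the capture; the sketch stops short of the network call, since the point is only how simple the contribution is.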
Once everyone had that simple task in their toolbelt, we started to discuss how to fill out the next 80 percent of the idea of what we can do if everything we know is left in the rubble. We were interested in mapping out what the future of academic research, scholarly publishing, and communication would look like. We wanted to outline some of the core components of generating and reviewing information; publishing and disseminating research; acquiring and preserving content; and discovering, accessing, and assimilating that knowledge. This outline covered the cycle from authors, to publishers and platforms, to libraries, and the user.
We listed out areas of collaboration and asked the audience what we were missing. Here is what we started with:
• AI Literacy, Education, and Training
• Copyright, Fair Use, and Licensing
• Research Integrity, Ethical Disclosure, and Misconduct Prevention
• AI Ethics, Bias Mitigation, and Privacy/Security Policy
• Data Management, Metadata Production, and Technical Application of AI
• Environmental Impact of AI and Data Centers
• Digital Preservation, Digitization, and Archiving
• Data Curation, Data Management, and Data Policies
• Open Access, Open Science, and Open Scholarship
• Access to Government Information, Federal Data, and Public Trust
• Innovation and Futurescape Planning
• Assessment, Scholarly Impact, and Peer Review
• Workforce Transformation and Leadership Development
We were positing that if everything we know is being destroyed, what would we do differently? What would we abandon? What would we fight for? What would we create from scratch? There is more to it than just what. We needed to consider the who, as in where would the resistance be? Who are our natural or unexpected allies? Who do we need to add to the conversation? For instance, we weren’t thinking about NGOs and where they fit in the scholarly communications lifecycle. One of the attendees brought that to our attention, and it was a perfect substantiation for why these kinds of conversations need to take place. There is so much collective knowledge. We need to harness it and add it to the mix. This idea of what we will do next is something I am starting to see more people talk about in broader terms of society, politics, culture, and global relationships. It is also an action people can take to move beyond the tactics at play; if the goal is to make what is happening seem inevitable and there is nothing that can be done, what negates that better than starting to plan out what we will do when faced with change?
The final session I participated in at Charleston 2025 was a Neapolitan Session called “Reclaiming the Library Narrative: Strategic Communication for Academic Leaders.” I presented along with Alex Hodges, a librarian, director of the Gutman Library, and lecturer on education at the Harvard Graduate School of Education; Lyda Fontes McCartin, professor and director of the School of Information Science at the University of South Carolina; and my pre-conference co-presenter, Lucy Santos Green.
The goal of the session was to bring together library leaders and administrators and provide them with examples and perspectives allowing them to advocate and engage with campus leadership and external stakeholders on behalf of the library. We focused on de-politicizing the atmosphere. There are times when everyone wants you to jump in and respond, but the best response is often to be the voice of reason, to avoid the emotions and focus on information. It’s less satisfying but more successful. One way to do that is to make sure the people speaking for your institution understand the role the library or the library school plays in core institutional and state issues. For example, what role does the library play in research and grant proposals, in returning graduates into jobs in the state and combating “brain drain,” and in supporting students to increase retention? How does your institution benchmark itself within the state? Who do they compete against for faculty, research dollars, students?
Given the highly politicized atmosphere surrounding higher education, library professionals should know who lobbies for their institution. Librarians may not be able to lobby, but they can make sure the people speaking on their behalf know how the library contributes, how many library school graduates stay in the state to start their careers, and any talking points that highlight library wins. Reaching out to them doesn’t mean you are advocating or lobbying; you are offering them information about the library. Knowing how to write a one-sheet so lobbyists, legislative staffers, or administrators have information that can differentiate the library and the institution from others is a simple way to provide talking points for those doing the talking.
Librarians and staff can go a step further and make sure they know key federal, state and local legislators from their institution’s city, town, or district and which committees they are on. They may want to know whether any legislators or their staff are graduates and make sure they understand the role academic libraries play in the institution’s success.
Having been the person in the room lobbying against a rule change or legislation, I think about being the voice of reason, being an information provider, which often means being the person who points out the unintended consequences, which, ironically, is what a communications person typically does in a business. These are emotional issues; if part of the tactic is to overwhelm people, then part of the response needs to be to avoid being overwhelmed, to understand the best ways to respond, and to bring information to the places where it can do the most good.
Kathleen McEvoy is a long-time communications executive with direct experience in crisis communications, media and public relations, and public affairs. She has lobbied and created strategies to address legislation in multiple U.S. states and has met directly with state executives and legislators to call out the unintended consequences of legislation. Kathleen is a Senior Policy Fellow at the EveryLibrary Institute and board member of EveryLibrary, the national political action committee for libraries. Kathleen has also co-chaired a task force on intellectual freedom as part of the American Library Association’s United for Libraries division, where she is a board member and serves on the Intellectual Freedom, Public Policy, and Advocacy Committee. She is a member of the American Library Association’s Committee on Legislation.
<https://www.charleston-hub.com/media/atg/>
And They Were There — Reports of Meetings 2025 Charleston Conference
Column Editor: Caroline Goldsmith (Associate Director, The Charleston Hub) <caroline@charlestonlibraryconference.com>
Column Editor’s Note: Thanks to the Charleston Conference attendees, both those who attended in person and those who attended virtually, who agreed to write brief reports highlighting their 2025 Charleston Conference experience. Our in-person event was held November 3-7, 2025 in historic downtown Charleston, with the virtual event following on November 17-21, 2025. The virtual event included recorded presentations from the in-person event, live Q&A sessions with all of the speakers, as well as exclusive “virtual only” content. There were more Charleston Conference sessions than there were volunteer reporters for Against the Grain, so the coverage is just a snapshot.
There are many ways to learn more about the 2025 conference. Please visit the Charleston Conference YouTube site, https://www.youtube.com/user/CharlestonConference/videos?app=desktop, for selected interviews and videos, and the conference site, https://www.charleston-hub.com/the-charleston-conference/, for links to conference information. The 2025 Charleston Conference Proceedings will be published in 2026, in partnership with University of Michigan Press.
In this issue, we have the first installment of Conference reports, including a preconference workshop, some lessons learned at the vendor showcase, and a report on the Wednesday and Thursday opening plenary sessions. Thank you again to all of our volunteer reporters! — CG
PRECONFERENCE WORKSHOP
Safeguarding Scholarly Communication: A Strategic Leadership Workshop for Academic Librarians
Reported by Lashonda Campbell (Collections Development Librarian, University of Arkansas at Monticello (UAM) – Taylor Library) <CampbellL@uamont.edu>
The Charleston Conference 2025 workshop, “Safeguarding Scholarly Communication: A Strategic Leadership Workshop for Academic Librarians,” was timely and directly relevant to my work in academic librarianship. I appreciated that the presenters — Lucy Santos-Green, Terri Teleen, and Kathleen McEvoy — framed scholarly communication challenges not only as technical issues but as leadership and advocacy opportunities that require strategic messaging and relationship-building across campus. The workshop met my expectations by moving beyond general discussion and creating space for participants to share real challenges and brainstorm practical approaches for demonstrating library value in a changing environment. One of the most memorable takeaways was the emphasis on advocacy as an ongoing practice rather than a one-time effort, supported by clear evidence, consistent communication, and alignment with institutional priorities. Interacting with library professionals from other academic institutions provided insight and practical ways to advocate for libraries as essential
divisions within the institutional framework. The session reinforced the importance of positioning the library as a key partner in research, teaching, and academic integrity, especially amid shifting expectations and increasing pressures on resources.
Top Three Lessons Learned from the Vendor Showcase
Reported by Ramune K. Kubilius (Northwestern University Feinberg School of Medicine) <r-kubilius@northwestern.edu>
The 2025 vendor showcase, held, as usual, the day before the Charleston Conference, enjoyed good foot traffic. Some attendees also attended brief vendor updates. Preconference attendees may or may not have made it to the showcase. Before informally talking to vendors and publishers, one couldn’t always guess which ones would have representatives staying for the conference (hopefully, where possible, some did)… What could be learned from a stroll through the 2025 vendor showcase? Here are three thoughts:
1) The 2025 showcase featured a nice group of first-time exhibitors who felt the need (or were encouraged) to promote products and services. “Thank you’s” were expressed verbally, but sometimes “Where have you been?” remained unvoiced, as some were obviously a good Charleston audience fit. Though first-time exhibitors were not prominently marked as such (something showcase organizers might consider noting visually?), conversations revealed enthusiasm and curiosity — about being in Charleston and the vendor showcase, and, for those staying, anticipation of the conference. Attendees appreciated vendors travelling to Charleston from afar: from western Canada, all over Europe, and elsewhere! It was nice to encounter “freelancers” staffing vendor tables for colleagues unable to be in Charleston.
2) The start-up stories of vendors and publishers are interesting. Sometimes products “born” in Charleston make an appearance. In 2025, one example was an academic faculty member who, with specialist colleagues, founded a technology platform and textbook add-on for patient simulation use in health education.
3) The vendor showcase day is a snapshot in time. Even as vendors and representatives set up tables and display “swag,” the landscape continues to evolve. Each year, showcase visitors find out about products so new that flyers are barely ready. Expect follow-up emails. Companies may announce structural or ownership changes days before, or during, vendor showcase day, leaving reps to field questions quite different from what they probably expected.
Katina is a digital publication that addresses the value of librarians to society and elevates their role as trusted stewards of knowledge. Named after Katina Strauch, the visionary founder of the Charleston Conference, it is written by and for the international communities of librarians, vendors, and publishers and covers content across three core sections:
Resource Reviews, which provides critical reviews of products and resources for the information industry.
Open Knowledge, which addresses the evolving roles of libraries and librarians and their contributions toward an open knowledge ecosystem.
The Future of Work, which offers practical insight into library careers and organizational development.
www.katinamagazine.org
Article Proposals
Katina welcomes proposals from authors interested in contributing to any of our three sections. We encourage prospective authors to familiarize themselves with the content of the publication before submitting a proposal. Our readership is broad and varied in their level of experience in the industry, from early career to advanced management level and beyond. Our goal is to provide easy-to-understand, engaging, informative, and accurate content that will be of general interest to the entire library and scholarly communications community.
CHARLESTON 2025 PLENARY REPORTS
Opening Keynote: Career Preparation, LIS Schools and Libraries
Reported by Marci Cohen (Head, Music Library, Boston University Libraries (retired)) <rockhackcohen@yahoo.com>
Presented by Lorcan Dempsey (Librarian, Consultant, Writer, Advisor)
Dempsey started with a prelude, noting that he was focusing on formal library education, particularly in ALA-accredited programs. He then discussed his broad background and the various aspects of his career, in both public and academic libraries in different countries and in many years at OCLC. This led to his current role of Professor of Practice and Distinguished Practitioner in Residence at the Information School, University of Washington. He acknowledged the strengths and weaknesses of his outsider perspective, since he did not attend a U.S. ALA-accredited school, but recognized that libraries are focused on values and that they are responsive, relational, and dynamic. He then tied this to his main point: library students of today are looking for experiential learning to complement their classroom learning, recognizing that this is what employers seek and what libraries need. He advocated for library professional organizations to create experiential learning opportunities for LIS students.
Plenary Panel: Leading in a Time of Crisis
Reported by Marci Cohen (Head, Music Library, Boston University Libraries (retired)) <rockhackcohen@yahoo.com>
Presented by Xan Arch (Dean, University Library, Portland State University); Judith Russell (Dean of University Libraries, University of Florida Libraries (retired)); Del R. Hornbuckle (Executive Director, Howard University Libraries, Howard University). Moderated by Jim O’Donnell (University Librarian, Arizona State University).
O’Donnell opened by acknowledging that both libraries and larger institutions are currently facing challenges and asked the panel about the specific challenges they are dealing with. Hornbuckle, whose university is in a city dealing with National Guard troops, encouraged students to carry ID at all times and be polite. Arch discussed the student occupation in their library a year earlier and the resulting and lingering fear among library staff. Russell dealt with staff shortages and job candidates unwilling to move to Florida. O’Donnell addressed lost research and IMLS grant funding. Among the solutions to addressing the challenges, Hornbuckle acknowledged that Howard, like all HBCUs, has historically been underfunded, so they are accustomed to this problem. She stressed emotional intelligence with staff, investing in staff to allow them to grow, and situational leadership, recognizing that different people need different things. Arch, who joined Portland State after the student occupation, makes a point of meeting with all staff, paying “relational attention,” and advocating for staff when she meets with higher-level people. Russell used the institution’s centennial to celebrate all campus libraries and found ways to lift spirits and engage people in positive ways. To deal with pushback against DEI policies, O’Donnell has focused on internalizing the ASU charter, literally carved in stone, that says success is “measured not by whom it excludes, but by whom it includes.” The panel closed by citing some particular successes at their institutions.
This concludes the first installment of reports from the 2025 Charleston Conference. Make sure to check out the next batch of reports in the April 2026 issue to see some “favorites” roundups and reports on the poster sessions! You can view recordings of conference sessions, podcast interviews and our new Charleston Conference leadership interview series on our YouTube channel. Thank you again to our volunteer reporters!
Access Granted — Getting Started
Column Editor: Michelle K. Hahn (Disability Advocate and Librarian) <michellekhahn@gmail.com>
Accessibility isn’t a checklist — compliance is only the floor, and disability inclusion means going beyond the minimum. This column explores how we get there, one win at a time.
Welcome to a new column in Against the Grain. I’d like to take the opportunity to introduce myself and my vision for this column and extend an invitation to join me in this critical work.
I am Michelle. I am a 40-something woman with blue eyes, brown hair with blonde highlights, usually wearing glasses and a hearing aid. I have been a librarian for nearly two decades, serving primarily on the technical services side of librarianship but with broad operational experience across library functions. But my path to developing this column isn’t just professional.
I have a story for you — about the day I unexpectedly joined the disability community and learned that being a librarian can be dangerous! It happened at a national conference, right after dinner with friends across the street from the hotel. As we walked back, using designated crosswalks and adhering to pedestrian signals … I got hit by a car. The driver ran the red light at the intersection where we were crossing and continued on his way. The result was a Traumatic Brain Injury and one shattered heel bone that is the basis of my permanent mobility impairment. With a bit of gallows humor, I like to say I became an Audi hood ornament and got to fly without wings!
For added context, there were approximately 400 attendees in a tight-knit professional community, so it was like everyone experienced the trauma together. It happened the first night of the conference, so everyone had to carry on without me, and I’m told I really killed the vibe for the rest of the week! Though I will probably never remember it, the friends who were with me will never forget it. But for me, I was in the right place at the right time, as the hotel next to ours was hosting a medical convention, so sharing the crosswalk with us were a number of doctors. I was essentially in the emergency room on the pavement before the EMTs even arrived!
That experience changed more than my physical ability; it changed my perspective. It changed how I move throughout the world and what I advocate for. Before my injury, I was what disability advocates call “temporarily able-bodied.” Now
I like to say I am a recovering ableist. I remember life fully abled, so I know what’s different and what never came to mind in the before times.
Since then, I’ve served on accessibility commissions and committees in three cities and will continue to do so wherever I land. I’ve organized walk audits, led training, planned awareness events, and presented on accessibility in libraries and beyond. What I’ve learned is that accessibility doesn’t have to be an all-or-nothing proposition. Organizations get paralyzed by six-figure projects and complex compliance matrices, and too often, that paralysis becomes an excuse for inaction. But meaningful change doesn’t always require major construction. Sometimes the most powerful wins are hiding in plain sight.
That’s what this column is all about: disability access in libraries, explored from multiple angles.
Sometimes that will mean practical tips: the low-cost, easy changes that often go unnoticed. Sometimes it will mean awareness: helping readers see barriers they’ve walked past a hundred times. Sometimes it will mean other voices from the disability community and the profession, because no single perspective captures the whole picture.
What ties it together is a commitment to keeping disability inclusion at the center, specifically and intentionally.
The disability motto is “Nihil de nobis, sine nobis” — nothing about us without us. I’m inviting you into that work. Notice the barriers. Question your reflexes. And let’s start putting some changes in the win column together.
If there is a particular topic you would like to see covered or you’d like to share an experience improving accessibility in your corner of the world, please get in touch! I could fill every issue with my own perspective, but that would miss the point. We’re all in this, whether disabled or temporarily able-bodied. So let’s make something incredible together.
Leading with Trust: Supporting Ethical AI Use Through Collaboration, Not Accusation
With contributions from Frances Alvarado-Albertorio, PhD (Assistant Professor, Graduate Initiatives & Engagement Librarian, Oklahoma State University)
“I didn’t think using AI was cheating.”
“I spend more time policing assignments than teaching.”
“We need policies that can keep up.”
These aren’t just isolated voices; they reflect the growing tension echoing across classrooms, staff rooms, and boardrooms in today’s academic landscape. As generative AI tools become easily and freely accessible to students, they’ve brought both opportunity and upheaval. Some students use AI to support their learning, while many end up using these tools as shortcuts. This makes it difficult for faculty to define legitimate use, while institutions scramble to update policies not built for this pace of change.
Amid this, a deeper crisis of trust has emerged in this post-GPT era. The shared values of academic effort, originality, and mentorship are being tested. And navigating this new era, one not of misconduct alone but of misunderstanding, requires more than just AI detectors or classroom bans. It calls for systemic reflection, shared language, and an educational framework equipped for nuance.
Major Issues Facing Academia in the Post-GPT Era
The rise of generative AI has profoundly reshaped the academic landscape, raising urgent questions not just about misconduct but about the responsible and ethical use of AI tools. According to AI in education statistics from AIPRM, 44% of students actively engage with generative AI, with 54% using it for schoolwork and/or homework. For students, faculty, and institutions alike, the challenge has shifted from simply detecting wrongdoing to navigating the gray areas of appropriate AI use. Traditional norms around authorship, effort, and originality are being redefined in real time, often without clear guidance.
Student Agency and Ethical Uncertainty
AI tools are now deeply embedded in students’ academic life. A 2024 survey found that 86% of students use AI in their studies, with 54% using them daily or weekly. Students primarily use AI for information search (69%), grammar check (42%), paraphrasing (28%), summarizing documents (33%) and creating a first draft (24%).
Despite this high adoption, ethical uncertainty persists:
• 54% of students believe AI use in assignments is cheating
• 21% disagree
• 25% remain undecided
This ethical uncertainty, combined with a limited understanding of AI’s limitations and academic norms, turns a potential learning opportunity into a source of mistrust and unintended misuse, exacerbated by the fear of false accusations from flawed AI detection systems.
There is also a clear demand for AI literacy training: 71% of students say they want to be included in AI-related decision-making processes. This data highlights that students, often confused rather than careless, are eager to move from passive targets of suspicion to active participants in shaping ethical AI use. They are calling for clarity, transparent frameworks, and educational guidance that empower them to use AI ethically and effectively.
Policing Drama and the Erosion of Trust
Faculty members are increasingly being thrust into the roles of investigator and enforcer in addition to focusing on pedagogy and mentorship. They are expected to identify and validate suspected AI-generated content, often without proper institutional training or clear guidelines. This has led to a “cat and mouse” dynamic in which faculty chase evolving AI use while students test the limits of detection.
Today, 68% of educators rely on AI detection tools. But these tools are far from foolproof, frequently producing false positives that flag human-written content as AI-generated. The consequences are serious: students risk academic penalties, scholarship loss, and long-term reputational harm. These tools are also more likely to misclassify work by non-native English speakers, compounding existing educational inequities.
On the flip side, false negatives, in which AI-generated content escapes detection, are increasingly common due to AI “humanizers” and prompt-engineering techniques. This fuels an arms race between content generation and detection, eroding trust and creating uncertainty for faculty, who are often reluctant to accuse students without clear evidence. Understandably, this environment contributes to burnout, stress, and reduced job satisfaction among educators.
What’s needed is not just better detection, but better direction. Faculty deserve training not only in how to spot AI misuse, but how to model and teach responsible AI use. The shift from reactive enforcement to proactive education and AI literacy can reframe AI as a learning aid, not a threat, building trust rather than tension in the classroom.
Institutional Policy Lag
Institutions are equally struggling to keep up. While nearly 9 in 10 academic leaders believe AI will bring irreversible changes to higher education, the policy infrastructure remains fragmented. A recent study showed that around 49% of universities in the UK have not published official guidelines on their websites. Another study reported that more than one-third of universities in the US had unclear or undecided policies on AI use. This patchwork approach creates inconsistency, making it harder for students and faculty to navigate expectations around integrity and permissible AI use.
Redefining Authorship, Originality, and Assessment in the Post-GPT Era
Perhaps the most profound disruption brought by generative AI is its challenge to the very concepts of authorship and originality, which serve as cornerstones of academic identity. When algorithms generate coherent essays, analyze datasets, create visual content, and even simulate scholarly tone, we are led to an inevitable question: what does authorship mean in the current world? It does not stop there. For educators, it also becomes important to ask how we should fairly evaluate originality in a world where creation is increasingly collaborative between human and AI.
For decades, academia has prioritized the final product — the paper, the project, the polished presentation, all serving as proxies for effort and understanding. But in the post-GPT era, it’s the process — the ideation, revision, engagement, and reflection — that most authentically reveals a student’s intellectual labour, now often rooted in co-writing, prompt engineering, and AI-augmented ideation. This calls for a broader, more nuanced understanding of originality, one that differentiates between the originality of idea, expression, and execution.
Assessment, therefore, must move beyond the binary of “AI-written” or “AI-free.” Instead, it must account for how assistance was used, why it was used, and whether it contributed meaningfully to the learning process. That’s not just an integrity conversation; it is a pedagogical one.
Progressive educators are already responding by designing assessments that emphasize:
• Draft and revision history
• Reflective annotations on AI use
• Live oral presentations and defences
• Time-bound, in-class tasks
• Collaborative outputs with individual accountability
These approaches not only make misuse of AI tools more difficult, they foster deeper engagement. They reposition students as active contributors, thinkers who reflect, iterate, question, and defend their ideas, rather than passive generators of polished content.
Frances Alvarado-Albertorio, PhD, Assistant Professor, Graduate Initiatives & Engagement Librarian, emphasizes the value of process and intentionality. “Instead of focusing on the product, we should examine the process. Writing is a process, and GenAI tools can support it. But we must be intentional about how we use them,” she reminds, underscoring the importance of maintaining the student’s voice and authenticity.
Academic Librarians at the Forefront of AI Integration
Academic librarians, long positioned as ethical stewards of information, bring a distinctive perspective to AI integration in higher education. Evidence from surveys shows both optimism and caution in fully embracing AI. Nearly half of librarians in a recent study acknowledged AI’s potential to improve discovery, automate routine tasks, and support advanced research workflows. In another survey, more than 60% of academic libraries reported plans to implement, or were already implementing, AI in their services. Yet this enthusiasm is tempered by major challenges: 56.8% of librarians cited lack of expertise, and 56.4% pointed to budget constraints, as barriers to adopting and/or scaling AI technologies.
These findings highlight a central tension: libraries are expected to lead in building campus-wide AI literacy, but many professionals lack sufficient training or resources. Librarians emphasize that AI must complement, not replace, the human judgment central to research and learning. They are calling for comprehensive, flexible training programs focused on critical evaluation, ethical use, and privacy protections, alongside collaborative frameworks for policy development. In keeping with the ethos of “leading with trust,” librarians occupy a distinctive position not as passive adopters of AI but as proactive guides, ensuring that AI is integrated in ways that strengthen academic integrity rather than erode it.
Studies show that academic librarians are increasingly being tasked with fostering AI literacy among students and faculty, representing a natural evolution of their information literacy mission. Librarians are now emerging not only as service providers but as architects of a culture of responsible AI use, positioning themselves at the intersection of ethics, pedagogy, and technology in higher education.
Building a Resilient Framework for AI-Integrated Academia
The goal of an AI-integrated academic infrastructure is not to ban AI, nor to embrace it without limits, but to build a thoughtful platform where technology enhances learning without undermining the values at the heart of education — curiosity, metacognitive thinking, integrity, and genuine effort.
AI Literacy Across the Academic Spectrum
Literacy is not about drawing rigid lines between permissible and impermissible use; it’s about building judgment and contextual awareness. AI literacy must become a core competency for all learners and educators. Tiered training should empower every stakeholder to:
• Understand how generative models function and where they fall short
• Identify biases, hallucinations, and ethical red flags
• Know when and how to disclose AI use, both formally and informally
Transparent and Inclusive Policy Design
Measuring learning processes and empowerment is critical, yet these remain difficult to quantify and are rarely prioritized by policymakers. To address this, academic institutions must move beyond generic rules and adopt clear, adaptable, and inclusive policies that reflect how AI is actually used across disciplines. Policy frameworks must be:
• Clearly articulated and accessible to all
• Flexible enough to adapt to evolving tools and workflows
• Developed through dialogue with students, faculty, and researchers
• Sensitive to disciplinary norms (e.g., differing expectations in humanities, STEM, or design)
Assessment That Honours Process Over Product
In the post-AI era, assessment design must evolve to prioritize the learning journey over polished outputs. This helps educators assess genuine engagement and thinking, rather than just outcomes. Assignments should:
• Capture multiple stages of student engagement including outlines, drafts, feedback loops
• Integrate self-reflection and ethical justification of tool use
• Emphasize synthesis and critique over replication
• Reward transparency over concealment
Faculty Development and Ethical Leadership
As faculty are the frontline interpreters of this transition, they need much more than just policy updates. They need consistent guidance and resources that would help them uphold ethics and integrity while guiding students through this complex landscape. Faculty training should include:
• Understanding the role of AI in modern pedagogy
• Addressing AI misuse through inclusive, non-punitive strategies
• Designing AI-informed assignments
• Modelling responsible AI use in research and teaching
Equally important is fostering collaborative exploration between teachers and students, creating spaces where both can learn, experiment, and define responsible AI integration together.
Student Agency and Empowerment
Frances highlights the importance of intentional use, stating, “Writing is a process, and GenAI tools can support it. But we must be intentional about how we use them. Simply copying and pasting from GenAI without critical evaluation and editing means we lose our voice and authenticity.”
Students should be engaged as active participants in defining responsible use of AI. When students understand the boundaries and the reasoning behind them, they’re more likely to act ethically and learn meaningfully. This change of perspective invites:
• Honest discussions about ambiguity and ethics
• Active participation in shaping acceptable practices
• A culture of inquiry, not fear
Tools That Support Trust, Not Just Detection
Institutions must also critically evaluate the technologies they adopt to uphold integrity. The tools of the post-GPT era must move beyond simplistic detection and toward multidimensional insight. They must offer deeper insights, acting as cognitive partners that help educators understand how a piece of work was created, not just whether AI was involved. This partnership empowers faculty to tailor feedback, guide students in refining their skills, and foster critical reflection on tool use. In essence, such tools amplify human judgment rather than replacing it, supporting educators in nurturing ethical, informed, and confident learners.
Tools should also support transparency in a way that facilitates constructive conversations. Transparent tools provide educators with nuanced information such as how and where AI assistance was used, patterns of student engagement, and levels of original input. This will help them engage students in meaningful dialogue about their learning process. Ultimately, when tools illuminate rather than accuse, they empower both educators and students to build trust, reflect critically, and grow together.
Additionally, such tools must be adaptable, aligning with each institution’s evolving policies, pedagogical goals, and disciplinary contexts. Adopting such tools isn’t about catching wrongdoing; it’s about reinforcing integrity through transparency.
Collective Action for the Future
Future discussions and institutional strategies must address three critical areas for a more complete and actionable framework. There’s a need to develop detailed implementation roadmaps for proposed solutions, such as specific curricula for AI literacy and practical examples of adaptable AI policies. Moreover, establishing robust mechanisms for deeper student engagement is vital, moving past mere dialogue to foster student-led initiatives and ongoing feedback loops that genuinely empower students and faculty in shaping AI policy and pedagogy together.
Frances stresses that faculty must retain agency: “Agency includes the right to opt out when a human-centered approach better serves learning goals,” underscoring the need for institutional flexibility and respect for pedagogical diversity.
The post-GPT era demands nothing short of reimagining education itself. We must move from asking “How do we stop students from using AI?” to “How do we help them use it wisely?”
From “How do we detect AI use?” to “How do we cultivate human insight?” From “How do we maintain the old standards?” to “How do we define new ones worthy of our highest aspirations?”
The stakes couldn’t be higher. Get this right, and we will graduate students who can think critically in an AI-augmented world, who understand both the power and peril of these tools, who can navigate ethical complexity with confidence. Get it wrong, and we risk creating a generation that either fears technology or surrenders their intellectual agency to it.
The choice is ours. We can cling to detection and suspicion, letting fear drive our decisions until trust erodes completely. Or we can step boldly into transparency and education, building bridges between human wisdom and artificial intelligence that honour both.
The future of learning isn’t about humans versus AI. It’s about humans with AI, guided by educators and librarians who understand that our greatest competitive advantage has never been our ability to generate content. It’s been our capacity to question, to synthesize, to imagine, and to care about the consequences of our choices.
That capacity is irreplaceably human. And it’s exactly what our students need us to teach them now, more than ever.
References
AIPRM. (n.d.). AI in Education Statistics. Retrieved December 3, 2025, from https://www.aiprm.com/ai-in-education-statistics/
Clarivate. (2024). Pulse of the Library 2024. https://clarivate.com/pulse-of-the-library/
Elkhatat, A. M., Elsaid, K., & Almeer, S. (2023). Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. International Journal for Educational Integrity, 19(1), 17. https://doi.org/10.1007/s40979-023-00140-5
Hirsch, A. (2024, December 12). AI detectors: An ethical minefield. Northern Illinois University Center for Innovative Teaching and Learning. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
Kelly, R. (2024, August 28). Survey: 86% of Students Already Use AI in Their Studies. Campus Technology. Retrieved September 3, 2025, from https://campustechnology.com/articles/2024/08/28/survey-86-of-students-already-use-ai-in-their-studies.aspx
Kumar, R. (2023). Faculty members’ use of artificial intelligence to grade student papers: a case of implications. International Journal for Educational Integrity, 19(1), 9. https://doi.org/10.1007/s40979-023-00130-7
Lo, L. (2024). Evaluating AI Literacy in Academic Libraries: A Survey Study with a Focus on U.S. Employees. College & Research Libraries, 85(5), 635. https://doi.org/10.5860/crl.85.5.635
Moxley, J. (2025, January 31). Universities must compel students to detail how they use AI. Times Higher Education. https://www.timeshighereducation.com/depth/universities-must-compel-students-detail-how-they-use-ai-assignments
Myers, A. (2023, May 15). AI-Detectors Biased Against Non-Native English Writers. Stanford HAI. Retrieved September 3, 2025, from https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers
Nam, J. (2023, November 22). 56% of College Students Have Used AI on Assignments or Exams. Best Colleges. Retrieved September 3, 2025, from https://www.bestcolleges.com/research/most-college-students-have-used-ai-survey/
Prothero, A. (2024, April 5). More Teachers Are Using AI-Detection Tools. Here’s Why That Might Be a Problem. Education Week. https://www.edweek.org/technology/more-teachers-are-using-ai-detection-tools-heres-why-that-might-be-a-problem/2024/04
Sage. (2025, May 20). New Technology from Sage Report Explores Librarian Leadership in the Age of AI. Retrieved September 3, 2025, from https://www.sagepub.com/explore-our-content/press-office/press-releases/2025/05/20/new-technology-from-sage-report-explores-librarian-leadership-in-the-age-of-ai
Samruddhi & Brown, M. (2025, December 30). AI in Education Statistics 2026 — Latest Data & Trends. https://www.open2study.com/statistics/ai-in-education/
Shalwa. (2025, February 23). AI Plagiarism Statistics 2025: Transforming Academic Integrity. Artsmart. https://artsmart.ai/blog/ai-plagiarism-statistics/
Sharma, S. (2024, October 30). Using AI in Higher Education: When Does It Become Plagiarism? Times of India. Retrieved September 3, 2025, from https://timesofindia.indiatimes.com/education/news/using-ai-in-higher-education-when-does-it-become-plagiarism/articleshow/114776015.cms
Thompson, E. (n.d.). 49% of UK universities lack AI usage guidelines according to StuRents study. EdTech Innovation Hub. Retrieved September 3, 2025, from https://www.edtechinnovationhub.com/news/49-of-uk-universities-lack-ai-usage-guidelines
Von Garrel, J., & Mayer, J. (2023). Artificial Intelligence in studies — use of ChatGPT and AI-based tools among students in Germany. Humanities and Social Sciences Communications, 10(1), 799. https://doi.org/10.1057/s41599-023-02304-7
Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. https://doi.org/10.1016/j.caeai.2024.100326
Synergies — Ontology in Artificial Intelligence, Information Science and Theory, Philosophical Roots, and Implications for Library Leadership
Column Editor: Antje Mays (Collection Analysis Librarian, University of Kentucky Libraries) <antjemays@uky.edu>
Column Editor’s Note: Ontology in artificial intelligence (AI) is gaining favorable attention as a safeguard against generative AI’s confabulations, fabrications, and AI slop. This article examines ontology’s philosophical roots, its core tenets, and its footprints in information science and information theory. The article further notes library expertise with practical and ethical dimensions of knowledge management — strong qualifications for libraries’ leading voices in strategic AI deliberations. — AM
Ontology in Artificial Intelligence (AI)
Recent literature highlights ontology in artificial intelligence and its rising prominence as a structured guardrail against widely reported AI slop and generative AI outputs’ pervasive inconsistencies, hallucinations, confabulations, and fabrications.
Ontology in artificial intelligence, rooted in the philosophy of knowledge,1 constitutes a formal, structured representation of knowledge: it assigns meaning to data elements reflecting the concepts in a domain, their interrelationships, and knowledge-based rules, serving as the semantic backbone of intelligent systems. Such structured knowledge representation grounds AI systems’ associative mechanisms in domain knowledge, in contrast to the probabilistic word-association pattern predictions of computational statistics prevalent in large language models (LLMs).2
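To make the contrast concrete, here is a minimal, hypothetical sketch of what an AI ontology encodes: a tiny library-domain concept hierarchy with a knowledge-based rule, against which statements can be checked deterministically rather than predicted statistically. This is our own illustration, not drawn from the article or any cited system; every name in it (the concepts, `SUBCLASS_OF`, `property_allowed`) is invented for the example.

```python
# Concepts and "is-a" (subclass) relationships for a toy library domain.
SUBCLASS_OF = {
    "Monograph": "Book",
    "Book": "InformationResource",
    "Journal": "Serial",
    "Serial": "InformationResource",
}

# Knowledge-based rule: which properties each concept declares.
# A concept inherits the properties of all its ancestors.
VALID_PROPERTIES = {
    "Book": {"hasISBN", "hasAuthor"},
    "Serial": {"hasISSN"},
    "InformationResource": {"hasTitle"},
}

def ancestors(concept):
    """Walk the subclass hierarchy from a concept up to the root."""
    chain = [concept]
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        chain.append(concept)
    return chain

def property_allowed(concept, prop):
    """A property is allowed if any concept on the is-a chain declares it."""
    return any(prop in VALID_PROPERTIES.get(c, set()) for c in ancestors(concept))

print(property_allowed("Monograph", "hasISBN"))  # True: a Monograph is-a Book
print(property_allowed("Monograph", "hasISSN"))  # False: ISSNs belong to serials
```

The point of the sketch is that the second answer is *derivable* from the modeled hierarchy, not merely statistically likely, which is exactly the kind of grounding an LLM’s word-association mechanism lacks.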
Ontology is gaining favorable attention due to concerns over, and an erosion of trust arising from, the phenomenon of AI slop and fabrications: ontology-based approaches aim to counteract the widespread phenomenon of AI confabulations. LLMs’ hallucinated outputs appear plausible on the surface, but their fabrications and factual falsehoods are exposed when humans appraise them against domain knowledge. Hallucinations are inevitable results of transformer-based LLMs’ output mechanisms: because these models rely on mathematical pattern recognition of language structures and word collocations, they lack conceptual associations tied to knowledge domains. Hallucinations are thus structural flaws rather than incidental inconsistencies.3 These probabilistic mechanisms contribute to AI-generated misinformation, raising concerns over LLMs’ inability to distinguish between factual knowledge and plausible-sounding but unfounded fabrications.4
Ontology as guardrail: Generative AI’s tendencies toward hallucinated outputs are tied to its mechanisms’ reliance on the statistical likelihood of language patterns rather than conceptual content knowledge. Ontologies function as structured, logic-based guardrails against such confabulations by grounding the output mechanisms in external knowledge references5 and hard-coding conceptual relationships into the process of generating AI outputs.6 Foundational research on artificial intelligence and natural language processing substantiates the refining power of knowledge-based concept associations, as these enable reasoning drawn from facts rather than probabilistic co-occurrences of words alone.7 Ontologies’ characteristic concept-reference modeling, concept-sense reasoning, knowledge-based explanation structure, and intelligible logic also foster understanding of how and why a system produces its results. This in turn promotes algorithmic transparency and explainable AI.8
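One way to picture the guardrail idea is a verification pass: candidate subject-predicate-object statements extracted from a model’s draft are checked against a curated knowledge base, and anything ungrounded is flagged as a possible confabulation. The sketch below is a hypothetical illustration of that pattern, not an implementation from the cited research; the triples and the `guard` function are invented for the example.

```python
# A curated knowledge base of accepted facts, stored as triples.
KNOWLEDGE_BASE = {
    ("Dewey Decimal Classification", "isA", "ClassificationScheme"),
    ("MARC 21", "isA", "MetadataStandard"),
    ("Dublin Core", "isA", "MetadataStandard"),
}

def guard(candidate_triples):
    """Partition model output into grounded facts and flagged claims."""
    grounded, flagged = [], []
    for triple in candidate_triples:
        (grounded if triple in KNOWLEDGE_BASE else flagged).append(triple)
    return grounded, flagged

# Imagine these triples were extracted from an LLM's draft answer:
output = [
    ("MARC 21", "isA", "MetadataStandard"),      # verifiable against the KB
    ("MARC 21", "isA", "ClassificationScheme"),  # plausible-sounding but wrong
]
grounded, flagged = guard(output)
print(len(grounded), len(flagged))  # 1 1
```

Real ontology-backed systems do far more (subsumption reasoning, consistency checking, provenance), but the core move is the same: outputs are constrained by explicit domain knowledge rather than accepted on linguistic plausibility alone.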
Ontology: Philosophical Roots and Their Bearing on AI Ontology
Ontology in philosophy examines the nature and meaning of being,9 knowledge-based reality, entities and concepts, and how concepts can be logically grouped and categorized. Ontology is rooted in metaphysics, phenomenology, and epistemology10 — branches of philosophy concerned with reality and existence, categorization, concrete reality and abstract analysis, and truth and knowledge.11 Metaphysics examines the fundamental nature of reality, essence and substance, identity and its relation to change, and the “why” behind observed concepts.12,13 Phenomenology examines human consciousness, the structures of experience, the distinction between experiencing and the content or object being experienced, and the all-enveloping world which undergirds the experiencing of life and the pursuit of scientific and philosophical inquiry.14 Epistemology focuses on the study of knowledge, definitions of belief and objective reality, and methods of inquiry and reasoning.15
Ontology in philosophy and in AI share focus on structures of knowledge and reality, while also diverging in aims, scope, methods of reasoning and inquiry, and approaches to knowledge: Both in philosophy and AI, ontology aims to anchor reasoning mechanisms in knowledge classifications and conceptual categorizations, clear conceptual definitions, and logical relationships between knowledge elements. Yet divergences prevail in aims, knowledge scope, and inquiry methods: Ontology in philosophy aims for metaphysical and conceptual understanding of truth, existence, and being. Conversely, ontology in AI aims for improved computation through knowledge-anchoring. For philosophical ontology, the knowledge scope is universal; AI ontology’s scope is bounded within hard-coded references to specific knowledge domains. In practical terms, knowledge domains reflect areas of expertise including, for example, library theory and practice, information retrieval, finance, robotics, and more. In philosophy, ontological methods of inquiry and knowledge are rooted in logic and argumentation; AI ontology’s inquiry methods are rooted
in computational logic. Ontology in philosophy approaches knowledge by seeking truth through metaphysical inquiry; ontology in AI seeks truth through data quality and the bounds of its knowledge references. AI ontologies’ structured knowledge representations improve the prospect of explainable AI behaviors by constraining probabilistic generative mechanisms’ symptomatic fabrications,16 despite the complexities of mapping ontologies at scale.17
Ontology and Structured Knowledge: Information Science and Information Theory
Ontology in information science addresses the meaning of information by providing formal knowledge structures including conceptual definitions, subject headings, thesauri, groupings and classifications, conceptual interrelations, knowledge hierarchies, and controlled vocabularies to enable congruity in knowledge representation and consistent findability for information retrieval.18 These structures underpin cataloging, metadata schemas and encoding standards, descriptions of objects and archives, semantic search and linked-data ecosystems, knowledge graphs, data governance, systems design, and data sharing across institutions — core tenets of library practice.19
Ontology in information theory addresses the structure of information20 and its quantification and transmission. Efficiency and efficacy in retrieval draw on search algorithm design, relevancy ranking, indexing models, and noise reduction to promote efficient information transmission between users, information structures, and retrieval systems — aims strongly related to library systems’ goals of improving search accuracy and prioritizing relevant results. Concurrently, cataloging and controlled vocabularies provide the structure for decoding users’ encoded queries.21
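The retrieval mechanics sketched above — indexing, then ranking documents by how well they match a user’s encoded query — can be illustrated with a toy inverted index and term-overlap scoring. This is a simplified sketch of our own, far cruder than production relevance models such as TF-IDF or BM25, and the sample records are invented.

```python
from collections import defaultdict

# Three invented catalog records, reduced to bags of words.
docs = {
    1: "ontology for knowledge representation in libraries",
    2: "controlled vocabularies and subject headings",
    3: "knowledge graphs link library metadata across institutions",
}

# Inverted index: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def rank(query):
    """Score each document by the number of query terms it contains,
    then return matching document ids, best first."""
    scores = defaultdict(int)
    for term in query.split():
        for doc_id in index.get(term, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(rank("knowledge in libraries"))  # [1, 3] — doc 1 matches all three terms
```

Even at this scale the division of labor is visible: the index is the structured representation, and the ranking function is the transmission-efficiency piece that prioritizes relevant results.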
Ontologies, AI, and Implications for Library Leadership
Ontologies in library practice embody a convergence of human education and research, systematized knowledge structures, and technology-infused interactions with knowledge:22 Libraries foster the human endeavor of research, discovery, and knowledge production, while also serving as systematic and ethical stewards of knowledge and cultural heritage.23 Library expertise in knowledge organization and structured knowledge-sharing is firmly established. AI ontologies expand libraries into an ecosystem of machine-readable semantics, subject assignment and associations, linking across knowledge domains, conceptually linked inferences and recommendations for end-user searches, unification of myriad descriptive schemas, interoperability, and integration across multiple institutions.24
AI in libraries builds on these longstanding foundations and key library strengths. Thoughtful AI integration takes shape in AI-assisted services including classification, authority control, predictive analytics, and virtual reference for scaled-up support for widely shared standardized information needs.25 Through this sophisticated knowledge-engineering experience, libraries are keenly aware of the need for ethical considerations in data stewardship: The combined power of computational capacity and ontologies magnifies the scale of information management and discoverability — factors requiring ethical commitment to fairness in knowledge representation, respectful stewardship of data provenance, and transparency in data practices.26
Libraries are natural partners and leading voices in bridging the gap between knowledge, reasoning, information architecture, and AI considerations in meaningful contexts: As established early technology adopters, libraries have navigated successive information-management and analytical tools and applied them thoughtfully to information management and data stewardship. Libraries have long championed high-quality knowledge structures, open and interoperable data ecosystems, origination and stewardship of high-quality metadata, and ethical safeguarding of provenance — strengths compatible with the growing attention to grounding AI systems in reliable knowledge structures. Libraries are uniquely qualified as trustworthy stewards of structured knowledge in an AI-infused information environment, bringing skills for meaningful pilot projects and discerning insights for wider AI deliberations.27
endnotes on page 48
Endnotes
1. Saurabh Pal. Handbook of Metadata, Semantics and Ontologies. First edition. Burlington, ON: Arcler Press, 2024.
2. Rikard Rosenbacke, Carl Rosenbacke, Victor Rosenbacke, and Martin McKee. Beyond Hallucinations: The Illusion of Understanding in Large Language Models. 2025. https://doi.org/10.48550/arxiv.2510.14665
3. Richard Ackermann and Simeon Emanuilov. Teaching a Language Model to Speak the Language of Tools: Incentives or Ontology? A Structural Rebuttal to OpenAI’s Hallucination Thesis. San Diego, CA: RA Software, and Sofia, Bulgaria: Sofia University, 2025. Research Report. https://arxiv.org/pdf/2512.14801
4. Anqi Shao. “New Sources of Inaccuracy? A Conceptual Framework for Studying AI Hallucinations.” HKS Misinformation Review (August 27, 2025). https://misinforeview.hks.harvard.edu/article/new-sources-of-inaccuracy-a-conceptual-framework-for-studying-ai-hallucinations/
5. Ackermann and Emanuilov. Teaching a Language Model to Speak, 2025.
6. Hamed Babaei Giglou, Simon Burbach, Francesco Compagno, Muhammad Ismail, Gunjan Singh, and Sebastian Rudolph. “Beyond Statistical Parroting: Hard-Coding Truth into LLMs via Ontologies.” Proceedings of the Second International Workshop on Retrieval-Augmented Generation Enabled by Knowledge Graphs (RAGE-KG 2025) co-located with the 24th International Semantic Web Conference (ISWC 2025). Nara, Japan, November 2-6, 2025. CEUR Workshop Proceedings 4079:120-131. https://ceur-ws.org/Vol-4079/paper10.pdf
7. Stephan Grimm, Pascal Hitzler, and Andreas Abecker. “Knowledge Representation and Ontologies,” in Semantic Web Services, eds. Rudi Studer, Stephan Grimm, Andreas Abecker (Berlin: Springer, 2007), 51-105.
8. Roberto Confalonieri and Giancarlo Guizzardi. “On the Multiple Roles of Ontologies in Explanations for Neuro-symbolic AI.” Neurosymbolic AI 1 (2025): 1-15. doi:10.3233/NAI-240754
9. James R. Mensch. The Question of Being in Husserl’s Logical Investigations. The Hague: Martinus Nijhoff, 1981.
10. David John Chalmers, David Manley, and Ryan Wasserman, eds. Metametaphysics: New Essays on the Foundations of Ontology. Oxford: Clarendon Press, 2023.
11. Edmund Husserl. Formal and Transcendental Logic. Translated by Dorion Cairns. The Hague: Martinus Nijhoff, 1969.
12. Jean Grondin. Introduction to Metaphysics: From Parmenides to Levinas. New York: Columbia University Press, 2012.
13. Robert Pasnau. Metaphysical Themes, 1274-1671. Oxford: Oxford University Press, 2011.
14. Dan Zahavi, ed. The Oxford Handbook of the History of Phenomenology. Oxford: Oxford University Press, 2018.
15. Paul K. Moser. The Oxford Handbook of Epistemology. Oxford: Oxford University Press, 2002.
16. Nicola Jones. “AI Hallucinations Can’t Be Stopped — but These Techniques Can Limit Their Damage.” Nature (London) 637, no. 8047 (2025): 778–780. https://doi.org/10.1038/d41586-025-00068-5
17. The Common Core Ontologies and Artificial Intelligence. Gaithersburg, MD: National Institute of Standards and Technology. May 29, 2019. Online report. https://www.nist.gov/system/files/documents/2021/10/14/nist-ai-rfi-cubrc_inc_001.pdf
18. Daniel N. Joudrey and Emily Baldoni. The Organization of Information. Fifth edition. Bloomsbury Libraries Unlimited, 2025.
19. Saurabh Pal. Handbook of Metadata. (Burlington, ON: Arcler Press, 2024), 68-69.
20. Claude E. Shannon and Warren Weaver. The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949.
21. Colleen Cool and Nicholas J. Belkin, “A Classification of Interactions with Information,” in Emerging frameworks and methods. Proceedings of the Fourth International Conference on Conceptions of Library and Information Science (COLIS4), eds. Harry Bruce, Raya Fidel, Peter Ingwersen, Pertti Vakkari (Greenwood Village, CO: Libraries Unlimited, 2002), 1-15.
22. Maria Teresa Biagetti. “Ontologies as Knowledge Organization Systems.” Knowledge Organization 48, no. 2 (2021): 152–76. https://doi.org/10.5771/0943-7444-2021-2-152
23. Tiziana Pasciuto, Riccardo Albertoni, and Roberta Maggi. “Metadata Schemas in Cataloguing the GLAM Domain: An Overview.” Journal of Information Science (October 2025): 1-47. https://doi.org/10.1177/01655515251365580
24. David Stuart. Practical Ontologies for Information Professionals. London: Facet, 2016.
25. Martin Frické. Artificial Intelligence and Librarianship: Notes for Teaching. 3rd edition. Tucson, Arizona: SoftOption, 2024.
26. James O. Hodonu-Wusu. “The Rise of Artificial Intelligence in Libraries: The Ethical and Equitable Methodologies, and Prospects for Empowering Library Users.” AI Ethics 5, 755–765 (2025). https://doi.org/10.1007/s43681-024-00432-7
27. Sandy Hervieux and Amanda Wheatley, eds. The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries. Chicago, Illinois: Association of College and Research Libraries, 2022.
Wandering the Web — Online Fashion Resources, Part III: A Look at Denim History and Fashion
Column Editor: Lesley Rice Montgomery, MLIS (Catalog Librarian II, Tulane University Libraries’ Technical Services Department)
with Guest Editor: Roxanne Myers Spencer (Retired, Associate Professor and Coordinator, Beulah Winchel Education Library, Western Kentucky University)
Introduction
This month’s “Wandering the Web” is the final installment of a three-part series on online fashion resources. Earlier Against the Grain issues included an outline of the fascinating history of fashion and a discussion of online educational and informational websites for those interested in learning about this field or breaking into this challenging but rewarding profession. Associate Professor Roxanne Spencer concludes the series with numerous relevant and thoughtfully analyzed websites on the subject of blue jeans — both their history and their place as a sociocultural phenomenon. My coeditor from the earlier “Wandering the Web” columns hopes you enjoy her review of online reference resources as much as she enjoyed researching and writing about this unique topic! — Lesley
A Look at Denim History and Fashion
The history of denim blue jeans is long, and their popularity hard-wearing. Levi’s or Wranglers? Rag & Bone or Madewell? Blue jeans or dungarees; baggy or skinny; distressed or embellished; light, medium, or dark — whatever the brand or style, the world’s most popular pants are alive and kicking. Here is a glimpse into the origins and sensation of the humble denim jean.
What Are the Origins of Denim?
Blue denim fabric, once composed of an inexpensive blend of cotton, wool, and silk, is believed to have originated in the 16th and 17th centuries in France (as “serge de Nîmes”) and in Genoa (Gênes in French), Italy, as durable sailors’ and laborers’ workwear, according to the Arizona State University FIDM Museum (asufidmmuseum.asu.edu/learn/articles/blue-jeans). Recent research in Emma McClendon’s book Denim: Fashion’s Frontier (Yale University Press, 2016; archive.org/details/denimfashionsfro0000fash), discussed in the History.com article “How Blue Jeans Began — And Then Conquered U.S. Closets” (www.history.com/articles/history-denim-blue-jeans), indicates that an originally English fabric may have evolved into early denim. For a well-illustrated overview of the development of blue jeans, check out sustainable clothing company Sanvt’s timeline (sanvt.com/blogs/journal/history-of-jeans/).
A much starker picture of the development of denim is presented in a short video from PBS Learning Media’s American Experience, Indigo: The Devil’s Dye (thinktv.pbslearningmedia.org/resource/amex34rthoj-socraceslaveryjeans/race-slavery-and-blue-jeans-american-experience/, 2017). Indigo and cotton were important and profitable crops in the early American South. Enslaved Africans brought knowledge of the uses of both crops, and their forced
labor enriched the white plantation owners. Indigo-dyed cotton was made into a coarse cloth, often known as “slave cloth” — which only the enslaved people wore.
A major turning point in the development of denim as a popular commodity came on May 20, 1873, as recounted in History.com’s This Day in History piece, “Levi Strauss and Jacob Davis Receive Patent for Blue Jeans” (www.history.com/this-day-in-history/may-20/levi-strauss-and-jacob-davis-receive-patent-for-blue-jeans; 2009, 2025). San Francisco businessman Levi Strauss and Reno, Nevada, tailor Jacob Davis, both immigrants to the U.S., were awarded a patent for cotton twill work pants reinforced with copper rivets — the hard-wearing denim that became a worldwide clothing staple.
How Did Denim Become a Global Fashion Phenomenon?
From durable sailor, laborer, and cowboy workwear, denim became influential in Europe and Japan in the 1940s as American GIs wore jeans while off duty during World War II. According to UK clothing manufacturer Hawthorn’s History of Denim blog post (www.hawthornintl.com/history-of-denim), 1950s cinematic rebels such as Marlon Brando in The Wild One and James Dean in Rebel Without a Cause wore denim in their bad-boy roles. In the 1960s and 1970s, jeans were adopted and adapted by students and branched into the hip-hugger bell-bottoms so popular with hippies and rockers.
Vintage and second-hand clothing retailer ThriftTale (thrifttale.com/blogs/thrifttalk/history-of-jeans-through-the-decades/), based in the Netherlands, offers a fun timeline showcasing styles of jeans by decade. Remember the brightly colored jeans of the 1980s? The baggy skater jeans of the 1990s? In 2023, The Guardian newspaper even weighed in on the popularity of tight jeans with “A Symbol of Disenchanted Youth: The History of Skinny Jeans from the 4th Century to Gucci” (www.theguardian.com/fashion/2023/may/23/skinny-jeans-fashion-history). And in case you are curious, Business Insider has you, ahem, covered with this 2018 piece, “The Most Popular Denim Trend the Year You Were Born” (www.businessinsider.com/jeans-over-the-years-2018-8).
Modern Denim Consumption
Trends in the jeans industry include sustainable manufacturing, recycling/upcycling, vintage looks, and high-fashion designs, according to “Outlook ’25: Denim Industry Braces for Tariffs, Green Legislation” in Sourcing Journal: Denim (sourcingjournal.com/denim/denim-business/2025-outlook-denim-industry-braces-for-challenges-tonello-lenzing-bluesign-adriano-goldschmied-1234729122/). Among the challenges of the enduring popularity of jeans are the unsustainable volume produced by inexpensive,
outsourced manufacturing and by online retailers offering trendy, cheap clothing, as well as the bulk of unsold garments in used/thrift clothing markets. Some textile-repurposing innovations help keep used clothing from being dumped in landfills. According to Grandview Research’s “Global Denim Jeans Market Size & Outlook, 2024-2030,” the humble blue jean is an $86.65 billion global industry and is likely to reach nearly $121.5 billion by 2030 (Grandview Research, 2025, grandviewresearch.com/horizon/outlook/denim-jeans-market-size/global). Clearly, jeans are here to stay.
Bonus: Jeans Fashion Tips
If you’re looking for the next jeans trend, check out these denim fashion trends for 2026:
• Photo gallery for the Fall/Winter 2025-2026 Milan Fashion Week from Sourcing Journal/WWD (sourcingjournal.com/gallery/denim/denim-trends/fall-winter-2025-2026-denim-at-milan-fashion-week-1234739455/)
• YouTube channels: ParisianVibe offers “French Girl’s Guide: How to Elevate Your Jeans Look” (www.youtube.com/watch?v=UFGpgTagBlM); Real Men Real Style offers tips for finding the right fit in “Stop Wearing Your Jeans Wrong! (7 Tips for PERFECT Fit)” (www.youtube.com/watch?v=IR9EdmY_mjI); and for those of us inching toward middle age, Heather Anderson of Fashion Over 40 shows women “7 of the Hottest Denim Trends of 2026” (www.youtube.com/watch?v=ZR8xUCO-GtY). — Roxanne
The earlier installments are available on the Charleston Hub: Part I appeared in ATG v.37#4, September 2025. Part II appeared in ATG, v.37#5, November 2025.
The Digital Toolbox — The Modern Student Playbook: What Undergraduates Really Want in Digital Learning
Column Editor: Steve Rosato (Director of Digital Book Services for Academic Libraries, OverDrive, Cleveland, OH 44125) <srosato@overdrive.com>
Introduction
Academic libraries are at the center of the digital learning ecosystem, yet student expectations are evolving rapidly. A recent survey of 500 undergraduates reveals a clear trend: students are moving beyond passive learning and demanding dynamic, inclusive, and experiential education. For librarians, this shift signals both challenges and opportunities to reimagine resource delivery, discovery, and engagement.
Methodology and Sample Snapshot
The findings stem from an online quantitative survey of 500 U.S. undergraduate students conducted in September 2025. Respondents represent diverse institutions, disciplines, and demographics.
This diversity underscores the broad applicability of the insights for academic libraries serving varied student populations.
The New Student Mindset: Expectations for Learning
Undergraduates increasingly demand learning environments that are diverse, interactive, and multimodal. 90% expect course materials to reflect varied voices and perspectives, positioning libraries as leaders in curating inclusive collections and expanding media formats. Traditional essays are losing ground — 89% of students prefer alternative assessments like projects and presentations — creating opportunities for libraries to support creative and applied learning.
Multimedia is central to this shift. 87% view films as credible academic resources, and podcasts and social platforms rank nearly as high. Libraries can respond by strengthening streaming media offerings and integrating them into discovery systems. Yet awareness remains a challenge: students typically learn about digital tools from faculty or peers, while librarians account for only 15%. Closing this gap requires proactive outreach and deeper integration into course workflows.
Despite the rise of digital tools, human interaction still matters. 88% value discussion and debate, underscoring the need for libraries to maintain collaborative spaces — both physical and virtual — where ideas can thrive.
Student Learning Archetypes: Implications for Resource Design
The survey shows that students overwhelmingly lean toward visual or kinesthetic learning styles, which together represent over two-thirds of the student body. Here are the key archetypes that emerge:
1. Visual Vloggers (37%)
a. Prefer videos, diagrams, and real-life examples.
b. Libraries can prioritize video collections, infographics, and visual learning aids.
2. Kinesthetic Constructors (35%)
a. Thrive on experiential learning and case studies.
b. Opportunities: Makerspaces, lab partnerships, and interactive simulations.
3. Textual Traditionalists (19%)
a. Favor textbooks, eBooks, and summaries.
b. Libraries remain essential for curated text-based resources and annotation tools.
4. Auditory Anchors (10%)
a. Rely on audio lectures and audiobooks.
b. Libraries can expand audiobook offerings and promote text-to-speech features.
5. Feature Finders (30%)
a. Seek efficiency tools like AI summaries and personalized recommendations.
b. Libraries can integrate AI-driven discovery and recommendation engines into catalogs.
Opportunities for Academic Libraries
Students expect powerful, flexible digital tools, creating opportunities for libraries to lead in access and innovation. A major gap exists in multimedia resources — only 44% of schools provide digital film or documentary tools — while 30% of students want personalized recommendations, signaling demand for AI-driven discovery. Baseline expectations include offline access and cross-device syncing, making mobile-friendly platforms essential. Yet faculty dominate tool awareness, so libraries must embed resources into LMS platforms, collaborate on course design, and market services effectively.
Faculty Integration: A Key Partnership
According to the surveyed students, faculty are actively utilizing digital tools to build a more flexible and dynamic classroom, perfectly aligning with student demands.
• Active Learning is the Focus: Top uses include built-in discussion forums, project-based work, and interactive activities, confirming that technology is primarily used to drive application and collaboration.
• Asynchronous is Key: High usage of pre-recording lectures and discussion forums signals that faculty expect flexible, on-demand content and structured, asynchronous interaction.
• Integration Points: Digital tools are most frequently embedded into assignments and used during class. This means reliability and device sync are critical, as students depend on these tools to complete coursework outside the lecture hall.
• The Big Ask for Providers: Tools must optimize asynchronous delivery (stable media, easy linking) and robustly support collaborative workflows, making it simple for faculty to embed media and utilize LMS features.
Takeaways for Library Strategy
1. Go Beyond Text: Invest in multimedia collections and experiential learning resources.
2. Make Discovery Intuitive: Implement recommendation engines and AI summaries.
3. Prioritize Access and Equity: Ensure offline functionality, accessibility features, and inclusive content.
4. Strengthen Faculty Partnerships: Position the library as a co-designer of digital learning experiences.
In short, today’s undergraduates are rewriting the rules of engagement in higher education. They want learning that moves, speaks, and connects — the kind that reflects the real world and gives them agency in how they learn. The next generation of educational tools will need to meet that challenge head-on, blending flexibility, personalization, and discovery in ways that feel intuitive and empowering.
The future of learning belongs to those who make education not just available, but alive.
Biz of Digital — Integrating Generative AI in Graduate Instruction using the IDEA Instructional Model: A Collaborative Librarian-Faculty Approach
By Lorena Jordan (Policy and Government Librarian, Fenwick Library, George Mason University, 4400 University Dr. Fairfax, VA 22030; Phone: 703-993-9864) <ljordan9@gmu.edu>
and Trevor Watkins (Teaching and Outreach Librarian, Fenwick Library, George Mason University, 4400 University Dr. Fairfax, VA 22030; Phone: 703-993-2244) <twatkin8@gmu.edu>
Column Editor: Michelle Flinchbaugh (Digital Scholarship Services Librarian, Albin O. Kuhn Library & Gallery, University of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250; Phone: 410-455-3544) <flinchba@umbc.edu>
Introduction
As most of us know, in November 2022, OpenAI released ChatGPT, a generative artificial intelligence (GenAI) tool that changed how students conduct research and retrieve sources (De Angelis et al., 2023). In my previous role as an Access Services Specialist at The College of Charleston, students were beginning to see how powerful the tool could be for finding instant sources, whether genuine or not. After starting my appointment at George Mason University (GMU) as the Policy and Government Librarian in 2023, it became evident that artificial intelligence (AI) would be a significant research area for me, as ChatGPT and other GenAI tools were being heavily discussed among the faculty that I serve at the Schar School of Policy and Government. During the fall 2024 semester, I was asked to instruct public administration students on the ethical use of AI and how it is being used within actual government settings. Seeking a partner who also focuses on AI instruction and has more experience, I asked my colleague, Trevor Watkins, the Teaching and Outreach Librarian at GMU, to join me in delivering this lesson. Before the release of ChatGPT, Trevor had already been involved in designing and building AI systems, teaching AI techniques, promoting AI literacy in the classroom, and conducting library carpentry workshops.
IDEA and Other Instructional Models
The IDEA Model (Interview, Design, Embed, Assess) was created by Mullins (2014, 2016) to integrate information literacy into academic instruction, with a focus on collaboration between the library instructor or co-instructors and the course instructor. Although some general instructional models, such as ADDIE, ASSURE, Dick and Carey, and Gerlach and Ely, are adequate, we chose the IDEA model for two reasons. First, we believe that information literacy (IL), digital literacy (DL), and media literacy (MeL) are prerequisites for AI literacy, and that IDEA provides a blueprint for bridging these literacies to AI literacy, especially if one of the learning outcomes of the instructional session is for learners to be able to compare or interrogate traditional research methods with GenAI-aided research. Additionally, the Association of College and Research Libraries (ACRL) developed library-specific AI competencies that define the knowledge, skills, and dispositions required of library workers (Free, 2025). However, the competencies do not tell us how to structure a course or instructional session, and they will obviously evolve as AI continues to advance. Second, IDEA is rooted in both behavioral and cognitive theory (Mullins, 2014, 2016). Some of the biggest issues we see from students are cognitive, meaning they have misconceptions about AI because of a lack of prior knowledge, including its history and application in society.
Adapting the IDEA Model
Interview
Every semester, the Policy and Government Librarian emails faculty within the Schar School of Policy and Government to determine whether library instruction sessions can be integrated into their courses. During the fall 2024 semester, a professor for Public Administration (PUAD) 502, a graduate course in GMU’s Schar School of Policy and Government, responded and informed us that they were already discussing AI in class and wanted to learn how to use generative AI ethically in their assignments. The course provides an overview of public administration and is a requirement for the Master of Public Administration (MPA) degree; assignments include a policy memo, paper, and an article review. Students also wanted to explore library resources relevant to their research, how to cite GenAI-generated content, and when to use AI tools like ChatGPT, QuillBot, and Hemingway. Both the professor and students wanted to know the acceptable, scholarly use of artificial intelligence. Additional conversations led to selecting a policy memo assignment as the focus for one of the first discussions about AI-generated content. After a quick debrief of the interview, we transitioned to the design phase of the IDEA model.
Design
The first step in designing the instruction session involved gathering information from the instructor on current GenAI areas of interest and their use in coursework. We provided the instructor with a 16-question survey created by the Teaching and Outreach Librarian and modified by the Policy and Government Librarian to elicit more specific GenAI interests for the public administration instruction course. For the complete survey, see Appendix A. Questions were based on four areas of AI use within the classroom: course context, AI literacy needs, instructional design, and collaboration and support. Survey results indicated that the instructor’s teaching methods were a combination of traditional lectures, thinking-based learning, and active learning. These responses, along with the results from our student survey, were combined into a two-hour lesson plan in which we wanted to demonstrate how to conduct traditional research using library resources and services specific to public administration, how to use the databases, how to understand current AI use within government and administrative organizations, and how to determine ethical means of using GenAI within assignments. In addition, we wanted to demonstrate the use of the GenAI tools ChatGPT, Quillbot, and Hemingway, as requested by student respondents in the survey. The design phase allowed us to plan and create learning outcomes for the instruction, including which AI and IL activities and assessments to use in the two-hour session.
Embed
As Mullins notes, during the embed phase, it is crucial to develop strategies for integrating IL into the course materials. We focused on integrating information literacy and AI literacy into the existing structure of the graduate course for this session, rather than presenting them as separate, stand-alone workshops or one-shot instruction.
The Instruction
The two-hour instructional session included a lecture with PowerPoint slides, hands-on activities, and discussions about AI, its history, the use of AI tools in academia, and use cases in public administration. The Policy and Government Librarian first presented a brief overview and lecture on how to locate sources within the George Mason University Libraries’ databases and open-access databases using basic and advanced search strategies, as well as when and how to use the Policy and Government library guide. This was the traditional research portion of the session. Before transitioning to using GenAI in research, we ended the first session with an activity that compared policy memos. We used a January 2025 policy memo on student loan debt, retrieved from Harvard Kennedy School’s Policy Memo Database, and asked ChatGPT to generate a similar-styled report on the same topic. Both reports were then edited in Microsoft Word to look similar in presentation and style. Students were given both versions simultaneously and asked which memo they believed was AI-generated and which was not. This was followed by a brief lecture on how AI is currently being implemented within federal and state government agencies, as well as the benefits and concerns of AI within government. In the second half of the session, the Teaching and Outreach Librarian first led a mind-mapping exercise (as shown in Figure 1), in which both students and the professor participated equally. In this exercise, we learned what the class knew about AI and what they did not, as well as how to use GenAI tools in research. We covered iterative prompt engineering and how to use the tools ethically. We also discussed how to cite the use of GenAI tools and the importance of copyright.
Assess
At the end of the session, students were given a QR code to complete an anonymous 8-question survey assessing their engagement with the material and its relevance to their assignments. Of the 14 students in the course, 4 completed the survey. All respondents mentioned learning how to use AI as a key takeaway from the session. Humata and Gemini were mentioned as programs to use in future assignments. Three respondents mentioned the usefulness of librarian instruction on accessing databases and the library website. Post-instruction communication with the instructor indicated that the students felt all their questions were answered and that the presentation was excellent and well received. Follow-up emails from students who sought consultations on research assistance with GenAI tools for their current and future assignments also showed that this session was well received. Within two weeks of the fall 2024 session’s end, the instructor asked that the same presentation be given during their spring 2025 semester.
Conclusion
During our one-on-one debrief, we documented some of our challenges. One of those challenges was not accounting for the possibility of using different AI tools than those initially planned. For example, we did not cover Quillbot and Hemingway in the session. We instead covered Humata AI, Grammarly, and Gemini, adapting to the students’ needs in real time, which is a responsive and learner-centered teaching method. We didn’t factor this into the lesson plan, but the Teaching and Outreach Librarian was able to make the adjustment based on previous experience. However, it encroached on the time we allotted in the lesson plan, and we had to shorten the Q&A. We’ve already modified the lesson plan to account for and mitigate such challenges in the future. Through outreach by the Policy and Government Librarian, a professor requested a virtual session, and we were able to convert this face-to-face instruction to a virtual format. We plan to follow up on how we did it, with details about the conversion and the session.
References
De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1166120. https://doi.org/10.3389/fpubh.2023.1166120
Free, D. (2025). AI competencies for academic library workers draft review. College & Research Libraries News, 86(3), 93.
Mullins, K. (2016). IDEA model from theory to practice: Integrating information literacy in academic courses. The Journal of Academic Librarianship, 42(1), 55–64. https://doi.org/10.1016/j.acalib.2015.10.008
Mullins, K. (2014). Good IDEA: Instructional design model for integrating information literacy. The Journal of Academic Librarianship, 40(3–4), 339–349. https://doi.org/10.1016/j.acalib.2014.04.012
Appendix A: Faculty Survey and Responses
Interview Form for Integrating AI Literacy in the Classroom
Course: PUAD 502
Date: 10/26/2024
1. Course Context:
The learning objectives of the course are as follows:
• Develop an understanding of the administration of public and non-profit organizations.
• Explain the structure and purpose of the government and its agencies.
• Appreciate the role and interplay of the public, nonprofit, and private sectors in modern governance.
• Gain an understanding of public service production’s policymaking and policy-implementation aspects.
• Become familiar with key aspects of the implementation process.
What are the typical assignments and assessments in your course?
The students are required to write a policy memo (this is already completed), to conduct an article review (they just turned this assignment in) and to complete a final paper that is a literature review.
Do you currently address any aspects of AI in your course? If so, how?
Yes, we have discussed the application of AI in public management in general. Our class has covered how governments use engagement tools and models to aid public managers in connecting with members of the public. We have reviewed a couple of applications that use AI, such as Zencity, and discussed the implications of the use of AI in local government.
Students have discussed ways to leverage AI in their coursework during the class but seem uncomfortable with knowing what is acceptable and what is not acceptable.
What are your thoughts on AI’s potential benefits and challenges in education?
I believe AI can and should be used as a tool for students, but we need to be careful that we are not using AI to think for us. I believe that we need to be using AI as it continually evolves and improves; otherwise, students will not be comfortable applying AI as an effective tool in their learning.
What are your thoughts on AI’s potential benefits and challenges in public policy?
One of the ways that I am exploring the use of AI in public policy is to aid in comparing legislation. As a full-time city manager, I find that it is very cumbersome to sort through many different pieces of legislation and ensure that I am explaining all components of the legislation to various stakeholders in the community. Additionally, it is challenging to take a dense piece of legislation that most citizens will not want to read and come up with education and outreach for the public. Using AI for public policy in the same way someone would use it for a 30-day marketing campaign for a product or service can really help you think through ways to hit the key points of the legislation so that citizens can engage, understand it, and provide feedback. Oftentimes, when you have been buried in the drafting of legislation, you are not able to look at it through the lens of a citizen; AI can help with that.
What citation style and edition are required for your course?
APA 7th ed.
2. AI Literacy Needs:
What specific AI literacy skills would be most beneficial for your students to develop in this course?
1. How to identify problems with the use of AI.
2. How to identify benefits with the use of AI.
3. How to properly cite the use of AI.
Are there any particular AI tools or technologies you think students should be familiar with?
The awareness in the class is very broad; therefore, I would start with the basics at a high level. The students are very engaged, so I believe that if you ask them some further questions, they will let you know whether they are able to dig deeper into the content.
How do you envision AI literacy supporting or enhancing the existing learning objectives of your course?
This course is an introduction to the MPA program. The students are generally not familiar with writing article reviews or literature reviews prior to this course. I envision AI literacy supporting them as they enter the MPA program and furthering their ability to formalize their thinking processes into academic and practical outputs.
3. Instructional Design:
What teaching methods do you typically employ in your course?
I use a combination of traditional lecture, thinking-based, and active learning.
Are there any specific resources or materials you currently use that could be adapted to incorporate AI literacy?
I will need to think on this question a bit further.
How much class time are you willing to dedicate to AI literacy instruction?
Our class runs from 4:30 – 7:00 PM. I am open to however much time you need, especially if we can do an exercise where students could apply AI to their research topic for their literature review.
What are your preferred methods for assessing student learning?
I prefer to assess students’ learning by their work product (i.e., papers and assignments) and their engagement in the classroom.
4. Collaboration and Support:
What is your level of familiarity with AI concepts and technologies?
I am moderately familiar with AI concepts and technologies.
What kind of support would you find most helpful in integrating AI literacy into your course?
Examples of how AI can be used in the classroom, suggestions, etc.
Are there any concerns or challenges you anticipate in integrating AI literacy?
I believe integrating AI literacy will generate more questions for students, but I do not think that is a concern.
Created by: Trevor Watkins, Teaching and Outreach Librarian Last Updated August 15, 2024 (Lorena Jordan).
ATG Interviews Leo Lo
Dean of Libraries and Advisor for AI Literacy, University of Virginia
Interview Conducted By Erin Gallagher (Chair of Acquisitions & Discovery Services at the University of Florida and Charleston Conference Director) <gallaghere@ufl.edu>
The following is an edited transcript of an episode of ATG the Podcast. This episode is part of the 2025 Charleston Conference Leadership Interview Series, featuring leaders and innovators who are shaping scholarly publishing, libraries, and the broader information landscape.
EG: Hi everybody, welcome again to another Charleston Leadership interview. I’m Erin Gallagher, Chair of Acquisitions and Collection Services at the University of Florida and a Charleston Conference Director. I’m so pleased today to be here with Dr. Leo Lo, the new University Librarian and Dean of Libraries at the University of Virginia. So Leo, we’re going to get right into this. What is your library origin story, and what was your relationship to libraries before you started working in them?
LL: Well, hello, Erin. Hello, everybody. Thank you for having me here. It’s funny, I didn’t even know librarianship could be a career when I was younger. I originally wanted to be a film scholar and film preservationist. I studied film and was very interested in preserving cultural history through cinema. When I began exploring graduate programs in film preservation, I realized how niche the field was. A professor suggested I look at library science programs, since some offered preservation specializations.
Once I started exploring those programs, I became more interested in librarianship itself. This was when the Internet was exploding and transforming everything, including libraries. It felt like an inflection point. I saw librarianship as the intersection between honoring the past and actively shaping the future, and that idea has guided me throughout my career.
I earned my degree at Florida State University (nice to meet another FSU alum), and I’m grateful for that program that helped get me into this career. Today, I think we’re at another inflection point with AI. Having lived through the digital transformation of the Internet era, I’ve learned from both the successes and the mistakes. Now we can move forward with AI more thoughtfully and confidently.
EG: I’ve done these interviews for years, and no one has ever said they wanted to be a librarian as a child. It’s often something we discover through intersecting passions.
LL: That’s one of the things I love about our profession. It welcomes people with diverse skill sets and experiences, and we’re able to use those strengths in meaningful ways. Many of us discovered librarianship by accident — and that diversity makes the profession stronger.
EG: Well, you mentioned Florida State University. Leo and I both got our master’s degrees in library science from Florida State, go ’Noles. It does seem like it’s getting farther and farther in my rear-view mirror now, but I still think pretty frequently about what’s going on in library science master’s programs nowadays. What can library science programs do to better prepare future librarians for a landscape where AI will play a large and evolving role?
LL: Yeah, so like you, I am not super involved with library science programs, though I’ve given some talks at library schools. Mostly I’m speaking from an employer’s perspective: what I want from students coming out of these programs. We’ve adapted before; digitization reshaped libraries, and we developed new roles and skills. AI is the next transformation. But the stakes are higher because AI can generate, decide, execute, and persuade at scale.
Library schools shouldn’t treat AI as just another tool. Tools will change quickly. Students need to understand the underlying technology and how it shapes workflows, evaluation, and decision-making.
AI is fundamentally a prediction machine. It recognizes patterns and generates outputs based on statistical probabilities. But prediction alone isn’t enough. Human judgment is essential. AI doesn’t understand nuance the way people do. It doesn’t possess lived experience or ethical reasoning.
To make sound decisions, you need both prediction and judgment. Library education should make that distinction visible. We need to train students in human-AI collaboration, how to design workflows, evaluate outputs, and exercise informed judgment. We may also need to rethink management training. In the future, librarians may manage both people and AI agents. That’s a new competency.
Finally, libraries are trusted institutions. In a world shaped by AI-generated misinformation and disinformation, that trust becomes even more important. Library education should help students think strategically about how to maintain and strengthen that trust.
EG: Well, I really do appreciate the focus on humans in this. I think that’s something that we don’t talk about enough. And this also bleeds into my next question: you’re in a unique role at UVA. You serve not only as Dean of Libraries but also as Advisor to the Provost for AI Literacy, and that’s a role that I haven’t heard of a dean of libraries taking on before. First of all, congratulations. Secondly, and you’ve already touched on this a bit, how can libraries help students strengthen critical thinking in a world where AI can make thinking feel optional?
LL: When ChatGPT launched in November 2022, within two weeks students were asking our staff for books and articles that didn’t exist. But we were not ready. Librarians were not ready, staff were not ready. We didn’t yet understand what was happening. Now we know: the system was fabricating citations. Those could have been teachable moments, but we weren’t ready. That experience made me rethink what it means to be literate today.
Libraries already teach information literacy — how to evaluate sources, verify information, and assess credibility. AI literacy builds on that foundation. I actually appreciate that AI isn’t perfect. Its flaws create opportunities for us to teach verification and critical evaluation.
As ACRL president, I established a task force to develop AI competencies for library workers, building on my AI literacy framework. Many librarians want to learn about AI, but they’re doing so in ad hoc ways. Competencies create shared language and structured learning pathways.
We’re also moving from a search world to an answer world. With Google, we received links to evaluate. With AI, we receive synthesized answers. That changes how we teach evaluation and verification. This is a strategic opportunity for libraries to lead.
EG: Well, and you know, the research process isn’t perfect. AI is flawed, but so is the entire research process. When I was doing library instruction in the past, it was always important not to show some sort of perfect “point A to point Z” because that’s not going to be their experience. When they get out of the library and they start doing their own research, it’s not going to be, “I typed in two keywords and I found the perfect article” and then that was it. That’s just not going to happen. So I think being realistic about that is just helping our students. Absolutely.
You’ve been very active in national organizations. What would you say to early career librarians who aren’t sure about the value of professional involvement?
LL: Professional organizations can be transformative at any stage of your career. I joined ALA as a student at FSU and got involved early. For me, AI is the area where I want to have that kind of impact and help people. So, for early career professionals, I recommend that you find something you care about. You can learn by going to conferences, webinars, and workshops. You can join committees where you can contribute. You can network with people who share very similar interests and learn from each other. And if you keep doing that, you can actually have an influence. You can make a meaningful contribution to the profession while learning and building networks.
I’ve gained mentors through professional service; people who offered advice, opened doors, and even nominated me for opportunities. Service can also support promotion and tenure, depending on your institution. But ultimately, it’s about impact. Being able to influence your profession in areas you care deeply about is incredibly rewarding.
EG: And it just feels good to get more involved in something you care about. You know, I think a lot of us get into libraries because it’s a good fit. We have skills that transfer well into library work. But we also get into it because we’re passionate about literacy. We’re passionate about education. We’re passionate about reading. And so having a passion and being able to find some sort of a professional network that allows you to care while making a difference, that’s pretty rare.
Switching gears a little bit, I did enjoy looking at your personal website and especially your mission, which was to live a purposeful, joyful, and extraordinary life and to make a meaningful contribution to the world by inspiring and empowering others to do the same. So how are you finding joy these days?
LL: When I turned 40, I created a “45 before 45” list — 45 goals to complete before age 45. I shared it publicly to hold myself accountable; some studies say that when you set goals and share them widely, that keeps you accountable. So I shared it really widely, and I think it resonated with and inspired a lot of people, which made me feel really good as I tried to accomplish my life goals. A lot of other people said, “I want to do that too,” and they followed along and did it themselves. That’s why I set that mission, or vision, for my life: it made me feel really good, and I feel like I can make a positive impact on other people’s lives.
My work with AI also brings me joy, and that passion connects to a childhood experience. I’ll share a quick personal story for when people ask, “Why did you care so much about AI so early on?” If we rewind back many, many years, when I was about 10 years old in Hong Kong, I was a student in a science class, and my teacher was doing some kind of chemistry experiment at the front of the class, mixing chemicals to produce different reactions. I was very shy; I didn’t like asking questions, but I got really curious. How come these chemicals did not react with the glass containers, the test tubes or whatever glass vessels they were? For people in Hong Kong, asking a question was a big deal. So I summoned up all my courage, raised my hand, and asked that question. And the teacher, maybe he was in a bad mood, maybe he didn’t like me, said that’s a stupid question. And all the kids laughed. I was traumatized. I never liked chemistry again. I never liked asking questions in class again.
So fast forward to 2022 November when ChatGPT came out, I asked it that same question. It gave me a textbook answer, and then I asked it to explain it like I was a 10-year-old who liked Spider-Man. It did. I became emotional. I thought, “I wish I had this as a child.” No one should feel ashamed for wanting to learn.
That’s why I care about AI literacy. If we use it thoughtfully, AI can support personalized learning. It can help people who are shy, who have disabilities, or who are not native English speakers. The technology isn’t perfect, but it has powerful potential.
EG: I’m so glad that AI didn’t come back and say, that’s a stupid question. And my personal mantra always, and I share this with all of my coworkers, is asking questions is a strength. It’s not a weakness. It is a strength to have the courage to ask, to articulate your question in a way that makes sense to who you’re asking. So yeah, that’s a really good point about AI.
Switching us back to the Charleston Conference a bit, what do you most look forward to at the Charleston Conference?
LL: Well, first, I like Charleston, the place. Good food. The weather is always amazing at that time of year, so I like going there. Also, I think it’s one of the few places where librarians, publishers, and vendors can all share the same room, talk, get each other’s perspectives, and get to meet each other and talk a little bit more. I love that part. And a lot of the time the presentations have all three perspectives, and I really find that that is important.
For the past two, maybe three conferences, I’ve been involved with presenting on AI alongside vendors, librarians, and publishers. And I love getting the questions when people hear from all these perspectives. I work with vendors a lot on developing AI, giving them advice and other things. So that’s a venue where we can make that happen, or have more of that happen. I think that’s really necessary. We should not be working in silos. And so that’s what I love about Charleston. I also love all the other add-ons, you know, the publications and blogs and all of that.
EG: Yeah, I agree. I’ve been going to the Charleston Conference for 14 years now and it was my first professional conference right out of library school. It was also my first conference that gave me the opportunity to present. So it’s always really, you know, near and dear to my heart. But yeah, I love the place. It feels like I’m going home almost every year because I’m so used to it now and the food, you just can’t even get me started on the food. It’s so good.
So my favorite question to ask is always my final question. And normally it’s, “What’s the best book you read over the past year?” But Leo, because you also have a history in screenwriting, I’ll give you a couple of options. What’s the best book you read over the past year or the best movie you saw over the past year?
LL: I am a bit embarrassed to even admit that I did not complete a single entire book last year. I’ve changed my reading habits. I read portions of many books rather than finishing them cover to cover. Two I found particularly compelling were Who Can You Trust? by Rachel Botsman, which examines how trust is built and maintained, and Empire of AI by Karen Hao, a critical look at tech companies and AI development.
As for movies, I rewatched Die Hard, I firmly believe it’s a Christmas movie. But one I really enjoy, and my wife and I watch every few years because it’s four hours long, is a Taiwanese movie called Yi Yi. So recently Criterion released a 35th year anniversary remastered version and we watched it. It’s a quiet movie about families and all of that. So that’s another one that I highly recommend.
EG: My second goal behind asking this question is that I get good recommendations too. And you won’t get any argument from me about Die Hard, definitely a Christmas movie. I watched it again, and Die Hard 2 as well. I also love that one, another great Christmas movie.
Well, Leo, thank you so much for talking with me today. It was great getting to know you more and thanks for being a part of this interview series.
LL: Thank you very much.
<https://www.charleston-hub.com/media/atg/>
Librarian Luminaries — David Banush
Dean of Libraries, University Libraries, University of South Carolina
Column Editor: Caroline Goldsmith (Associate Director, The Charleston Hub) <caroline@charlestonlibraryconference.com>
Academic librarians and library staff are a bridge between the vast world of knowledge and the needs of students, faculty, and researchers. The “Librarian Luminaries” column features a different librarian each month who has had a recent notable achievement, implemented a new idea or approach in their library, blazed a new trail, or is an overall exemplary model of service, scholarship, and innovation.
We’re happy to share this interview with David Banush, Dean of Libraries, University Libraries, University of South Carolina, who was featured for our October 2025 Librarian Luminaries column!
ATG: Hi David! Can you share a little bit with us about your background and education?
DB: Hi, and thank you for the opportunity to speak with you!
About me: I grew up in suburban Detroit and received my undergraduate degree at the University of Michigan in Ann Arbor, where I majored in English. My original plan had been to study economics, but my experience with calculus quickly revealed that it wasn’t really my forte. I had always done well with languages and literature and eventually chose English because of that. Working for a few years after graduation made me miss campus life and formal study, so I decided I would pursue a PhD in English. I got a fellowship to attend New York University, so I left Ann Arbor for Manhattan. I enjoyed the program and experience very much, but even with funding, New York was extremely expensive, and I realized that my job prospects were going to be limited. After two years, I decided to take my MA as a consolation prize and turned down a job in publishing to work at the NYU libraries full time. Although I was paid modestly, I enjoyed the people and the work. I had great supervisors and mentors who encouraged me, and I understood that the only way to progress in the field was to earn an MLIS. I returned to Michigan and earned my degree from Wayne State University by working half-time and going to school full-time for 18 months.
ATG: What made you decide to go to library school or drew you towards librarianship?
DB: I sometimes say that I was fated to be a librarian, even though I certainly did not feel that way when I was younger. But my first real paid job was in the public library of my hometown. I was a page, as they were called at the time, which sounds very medieval and courtly, but really meant I shelved books. To get the job, there was a gauntlet of tests to narrow the applicant pool: a spelling test; a timed shelving test; and finally, an interview with the librarian. The interview was yet another test: I was asked to recite the alphabet — backwards. Although taken aback, I did it and got the job. (I can still recite the alphabet backwards, though I’ve never found it an especially useful skill.)
I think what drew me to academic librarianship particularly was the ability to be part of university life without being
either a student or teaching faculty member. Academic libraries offered a wide variety of specializations, and I began my support staff work in cataloging. I really enjoyed the process of describing a work and making it discoverable. I am also rather introverted, like many in the field, and the behind-the-scenes aspect of technical services suited me well.
ATG: We see that you were named Dean of University Libraries at University of South Carolina in 2022. You’ve been a leader in the library community and have served in various leadership roles in different professional organizations over the years, such as ACRL. Can you tell us about the evolution of your career and how you came to be in your current position?
DB: After graduating with my MLIS, I was extremely fortunate to begin my professional career as a cataloger at Cornell University, which was a fantastic learning environment with a substantial number of opportunities. Performance expectations at Cornell were high, and Cornell librarians were required to be active outside of the institution, to write and present, and to be engaged in initiatives. That was demanding, but I really thrived and was able to participate in a lot of things that helped me grow professionally and personally. I am as much an alumnus of the Cornell University Library as any place from which I earned a degree.
I left Cornell as head of cataloging, with about 30 staff, and moved to Brown University as an Associate University Librarian, with about twice as many people. At Brown, I had all of technical services in my portfolio as well as access services, but after one year, I also took on responsibilities for collection management and collection strategy. I had been a materials selector at Cornell, but this was a quite different level. I was now overseeing the whole collection budget at Brown, so the process of resource selection, description, circulation, and storage of materials, as well as e-resource licensing and purchases, of course, became my responsibility. It was challenging and I was well supported by terrific colleagues and the University Librarian. She encouraged me to take the next step toward being a director. It was not anything I had envisioned for myself, but I became a fellow in the ARL Leadership Fellows Program in 2013, which was a fantastic experience. After two years of that program, and the accumulated experiences I had, I felt ready to take on new challenges.
I went to Tulane as Dean of Libraries in 2015, ten years after that institution and its host city suffered the major catastrophe of Hurricane Katrina. The Tulane libraries had been heavily damaged by the flooding, and recovery was slow, painful, and difficult. Much of that recovery work was concluded when I arrived. The library was at an important inflection point, making it a very appealing opportunity. I was a bit nervous about living in New Orleans; the heat, the threat of another storm, and the distance from my comfort zones in the Midwest and Northeast made it intimidating. But it was a wonderful place to be, and I feel that, with my colleagues, I was able to advance the library, position it for the future, and have an influence on the institution.
The University of South Carolina was another such opportunity. The longtime dean was retiring and there was new university leadership. That was appealing. Since my arrival, I have found a place that reveres its library and recognizes how important our work of instruction and stewardship is, particularly in a world increasingly marked by distrust and chaos. My colleagues are doing outstanding work with our world-class distinctive collections, with instruction and outreach, and with research support services ranging from data management to digital humanities, AI, and more. I am extremely happy to be in an R1 institution that is growing and offers me a real voice in shaping the research and teaching mission.
ATG: Feeling that you have a voice is so important, and it sounds like you have some amazing support in your university. We see that the University of South Carolina is using a new platform, Manifold, for Open Carolina, “an open-access repository of diverse and authoritative content made available to the public by the University of South Carolina Press,” to expand publication options for faculty and students. Can you share with us how this move has helped with accessibility, expanded publication options, and better supported students, faculty, and researchers?
DB: You are referring to our recent implementation of Manifold, an open-source digital publishing platform used at many other libraries and university presses. Open Carolina, our Open Access partnership with the University of South Carolina Press, has been the primary catalyst behind its selection and implementation. We needed a better way to host OA monographs published under the partnership that met accessibility requirements. But we also wanted something that could support future developments, whether as part of the Open Carolina initiative or outside of that framework. We are hopeful that Manifold will help us support more multimodal, open-ended kinds of long-form scholarship that still have trusted peer review but also take advantage of the full capabilities of a digital publishing platform, not simply to present texts digitally and without paywalls, important as that is. The potential that Manifold offers is exciting, and we are already seeing increased use of those works now hosted on the platform from readers and scholars around the world.
ATG: We understand that some of your work has involved educating incoming students about what the library has to offer. This can be a daunting task! Can you tell us about some successful initiatives or programs in your library to help connect with students and help them to discover what the library offers and what librarians can do?
DB: When I first came to USC, as part of a planning process, I invited stakeholders to meet with the entire library staff and answer some questions about their work, their challenges, and what they really wished the university could support. I did not want them to talk about the library per se but rather to focus on any areas where they felt they needed more support. One consistent finding from the sessions was a pronounced level of ignorance about the many services we offer, among everyone from first-year students to advanced researchers and even campus administrators. One of the strategic priorities had to be reaching our audiences more effectively. As it turned out, we
had had a large turnover in our communications group, so I was able to hire a new director and restructure the operations. We now have much more engaging social media, which has helped us gain a greater student following, and we have been able to partner effectively with other offices, like our teaching center, the graduate school, and the research office, to embed our events, workshops, and resources in their communications. We have spoken at faculty senate meetings about our offerings, and I routinely share items with my fellow deans for distribution. We have launched a series of newsletters for different audiences and have had success getting new students to sign up during orientation (it helps to have their parents there too — they are important to include in outreach to undergrads). Although we have more work to do, our metrics clearly show that we’re reaping the benefits through increased visits, more requests for instruction sessions, higher attendance at workshops, and more interest in our programs, such as our SHARPgrads program that teaches graduate students about data management, AI, data visualization, and other research skills critical to their studies and future success.
ATG: We saw that, over the summer, your university hosted the 2025 Best Practices Exchange Conference for information professionals which focused on digital information management and preservation. What are some things that you do to foster honest, respectful conversation at this event? Also, what are some things that you feel helped to achieve the goal of focusing on the non-digital, human side of digital stewardship?
DB: Katie Hoskins, digital collections librarian here at USC, was the organizing force behind bringing the Best Practices Exchange (BPE) to our campus this past summer. The BPE is approaching its 20th anniversary and has long been noted for its low-key, intimate, and informal atmosphere. Its Un-Steering Committee provides continuity and has developed key tenets and standards that guide planning and convening. The group’s tenets include respect, open discussion, and the acknowledgement that we all have something to learn; this is emphasized throughout BPE events. Perhaps the most distinctive and sacred BPE tenet is confidentiality, which is facilitated through several means: no vendors, no recordings, capped attendance, and frequent reminders to share out what we have learned, but to do so without naming people or institutions. This created a safe environment for attendees to engage in honest conversations about both successes and challenges. Katie and her colleagues across the BPE made that happen.
1920s dancing at the Gatsby exhibit opening
Harvard University’s Head of Digital Preservation, Stephen Abrams, kicked off the event with the keynote address “Preservation, People, Practice.” Abrams raised thought-provoking questions about preserving the context of cultural heritage materials and the ways we communicate their meaning to future generations. He proposed a new continuum-based model, as opposed to the lifecycle model we are accustomed to in digital preservation discourse, one that allows us to incorporate the many processes influenced by intentional decisions made by human practitioners. The keynote themes set the tone for the conference, inviting attendees to think critically about the “how” and “why” of preservation.
In the planning stage, the Un-Steering Committee called for proposals that would highlight human-centered themes such as invisible labor, collaboration, and advocacy. The community delivered with standout presentations on topics such as human intervention in AI workflows, peer learning and assessment, managing student labor, and effective communication with colleagues and administration. BPE’s tradition of emphasizing practice over theory — wanting to know what really happens, what really works — lent itself well to this theme. The feedback regarding the theme was overwhelmingly positive and is shaping the 2026 meeting at Indiana University Indianapolis.
ATG: This year, your library had a really cool exhibit, the Great Gatsby 100-year anniversary exhibit. Can you tell us about what this exhibit featured, what some of the interactive elements were, and how this exhibit helped provide a unique opportunity to experience the world of F. Scott Fitzgerald and The Great Gatsby up close?
DB: Our Irvin Department of Rare Books and Special Collections has some of the largest collections of 20th-century American literature in the country. The Fitzgerald collection has its genesis in the work of longtime USC English professor Matthew Bruccoli, a Fitzgerald scholar who also collected Fitzgerald materials; the library acquired these materials primarily through him. Scholars from around the world regularly consult them, and certain items, like Fitzgerald’s ledger, where he recorded all of the short stories he sold (and for how much), are among our “greatest hits.” The Gatsby
centennial gave us the opportunity to showcase some of these materials (and only some — there are thousands of items in the collection) while also bringing in more classes (from both across campus and from local K-12 schools) and, of course, the public. Michael Weisenburg, director of the Irvin Department, did a fantastic job not only as curator of the exhibit but as the prime mover behind several events around it. We had an opening event featuring USC student dancers and their instructors doing period dances (with audience participation); a USC jazz band playing 1920s music; a multi-disciplinary faculty panel with scholarly talks; and a community read-aloud of the whole novel in front of the library last April. Our programs received coverage from local and national news outlets (including the New York Times). These kinds of interactive, open events are great ways to reinforce the library’s role as convener and connector, bringing students, faculty, alumni, and community members together.
ATG: What’s a major challenge you have faced in your role in librarianship and how did you overcome it?
DB: It’s difficult to pinpoint one challenge, but there is one that is increasingly common: the effect of climate change and extreme weather on our operations. I can’t say that I have overcome it, or that anyone could, but it is something I faced in New Orleans during the aftermath of Hurricane Ida in 2021. The storm, which hit on the anniversary of Hurricane Katrina, did not result in devastating flooding, though there was substantial wind damage that knocked the power grid out for days and destroyed cell towers. And of course it came during the ongoing pandemic. The Tulane email listservs were down. Communicating with staff to ensure that everyone was safe and accounted for was very difficult. I had evacuated at the last minute and was able to see news reports and learn about places people could go for water, groceries, cooling, and charging, and I tried to share that information with my staff. I sent regular updates throughout the week as I could. You realize in these moments that many things we take for granted daily, like access to email and phone service, let alone food and potable water, can disappear overnight. And you are responsible for some buildings but, more importantly, for people. You need to focus on them and on doing what you can to help them through. These events are coming more frequently and with greater intensity. Our responsibility as leaders is to worry about our colleagues first.
ATG: I understand this, given what we went through here with Hurricane Helene last year and its devastating effects. I can only imagine dealing with that coupled with the pandemic, and how difficult that must have been for you, along with the responsibility you felt for helping your local colleagues find resources. What does “Librarianship Elevated” mean to you?
DB: Librarianship Elevated, to me, means being an active player in solving the dilemmas we face in our institutions and in society, generally. It means having highly skilled people from a variety of backgrounds helping our communities navigate the complicated environment we face today. It absolutely means being present at the point of users’ needs, wherever that may be in whatever context.
These are interesting times for libraries and librarians. Many of our values are under threat, and so of course is our funding. AI has incredible potential, but it is also an enormous challenge to the integrity of information, which has already been under siege for some time. For me, leveraging the trust that many people continue to place in libraries is essential not just to our survival, but to our growth and prosperity. I do think we have great opportunities to work with people to help them understand, to the best of our ability, how to find information that is relevant and dependable. To do so, we will have to increase our mastery of tools and techniques that may fall outside the traditional boundaries of the profession.
Student at the Great Gatsby read-a-thon
ATG: Interesting times, for sure. Is there something that you enjoy doing in your “spare time” or a hobby that you enjoy that you want to share with us?
DB: I enjoy reading quite a bit and try to do that as widely as I can. Travel, especially to cities, is another thing I enjoy,
as are cultural activities — museums particularly. I enjoy old films, especially film noir, and watch them regularly, and while I am not a big sports person, I do follow the Detroit Tigers and Michigan Wolverines.
ATG: Thank you so much, David, for taking the time to talk with us, and for sharing insight into your experiences at your library. We know you are very busy. We really appreciate it!
If you would like to nominate a librarian or library staff to be featured in this column, please reach out to us at info@charlestonhub.com
Innovator’s Saga — An Interview with Kent Anderson and Joy Moore
Column Editor: Darrell W. Gunter (President & CEO, Gunter Media Group) <d.gunter@guntermediagroup.com>
Column Editor’s Note: This transcript captures a conversation from Leadership with Darrell W. Gunter, produced at Seton Hall University’s Marconi Award-winning WSOU 89.5 FM and distributed globally via wsou.net and major podcast platforms. The dialogue has been lightly edited for clarity and continuity without altering the substance of the discussion. — DG
DARRELL W. GUNTER: We are so happy to have two of the very important people who are keeping the scholarly publishing industry safe, secure, and precise. Today’s conversation goes to the heart of trust: trust in science, trust in institutions, and trust in the systems we rely on to distinguish fact from fiction. We’re living in an era of unprecedented scientific output — more papers, more data, more platforms, more speed, and, quite frankly, more junk. And yet, paradoxically, trust in the scientific record itself is under growing strain. Retractions are rising, incentives are misaligned, volume is often rewarded over verification, and technology, once expected to be the great stabilizer, may in some cases be amplifying the very problems it was meant to solve.
My guests today are two of the most credible and pragmatic voices examining this issue head-on. Kent Anderson is the founder of The Scholarly Kitchen and one of the most respected critics and analysts of modern scholarly publishing. His work has consistently challenged the industry to confront uncomfortable truths about incentives, governance, and editorial responsibility. Joy Moore is a senior industry leader with deep experience at the intersection of scholarly publishing platforms, technology, and business models. She brings an operator’s perspective to how systems, economics, and scale shape behavior, often in unintended ways.
Together, Kent and Joy are the authors of the forthcoming book How the Internet Disrupted Science, to be published in July 2026 by Globe Pequot/Prometheus with distribution by Simon & Schuster; you’re getting an early glimpse of what the book will be about. The book examines how digital scale, economic pressures, and technological acceleration have reshaped, and in some cases destabilized, the scientific record itself.
Today, we’re going to talk candidly about what we are now calling a crisis in the scientific record: how we got here, why it matters beyond academia, and what must change if trust in science is to be restored. Kent, Joy, welcome to the program.
KENT ANDERSON & JOY MOORE: Thank you so much.
DG: So, let’s start with an icebreaker for each of you. In one sentence, what is most broken right now in the way science is communicated to the world? Joy, ladies first.
JM: I’m going to say the fact that we are even having to ask this question means that the communication around science is broken.
KA: And I’d add that the scientific information space, as you alluded to, has become infected with bad information. A lot of this is due to the habits we’ve assumed out of Silicon Valley: a reliance on attention metrics, quantity over quality, speed over care, pay-to-play business models, and now, most recently, “AI slop.”
DG: Wow. “AI slop,” yes. So, Kent, you’ve argued that scientific publishing once acted as a circuit breaker, slowing things down, enforcing standards. Can you go deeper into what has changed?
KA: Sure, Darrell. I’d love to. It’s a whole host of things, and we’ve been observing it gradually over the past 20 to 25 years; it’s why we wrote the book. But basically, scientific publication leaders, publishing leaders, editors, and business leaders have allowed a shift in digital distribution to radically redefine their industry’s sense of purpose. Instead of remaining focused on high-quality content for specific communities, and taking the time to ensure that the experts in those communities received accurate and relevant information, the industry became focused on scale, which is a Silicon Valley conceit, using a producer-pays business model with gold open access and other things like that, which is very much akin to the Silicon Valley advertising business model. And it created some of the very particular kinds of fraud and corruption we see in online advertising, and the kind of bias we see on social media and general web platforms that drives people to extremes.
DG: Wow. So, Joy, from the platform and business side, how do incentive structures, especially open access and other author-pays models, shape behavior, and how could they be made to shape it more productively?
JM: Well, the publishers with the large platforms understand that individual papers, and the data they’re able to collect on how people use those papers, are monetizable one click at a time. So where we used to focus on journals as the branded products, now the papers — the individual articles — are the products, the journal name is the packaging, and the platform is the brand. And so big publishers can extend their portfolios to accommodate papers of varying degrees of quality or scope to serve the growing market of authors of all stripes who need to satisfy their academic or commercial requirements to publish. And these papers act as currency, regardless of whether the findings and claims are useful to other researchers. So we’re seeing a lot of pop-up publishing that these platforms enable: stand up a new journal, or do a special issue. And at the same time, we’re seeing the smaller, more focused publishers having to join up with the big platforms simply to compete and exist.
DG: So, Kent, at what point did this technology disruption become an economic one?
KA: Yeah, I think it became an economic one when the argument was made that information needed to be free. People started to assume that the public was an audience for scientific information, and then the only way to make that work was to have authors pay for it. And the argument was that because the government funded scientific research, taxpayers therefore needed to get access to the reports of that research for free.
And I think if you look at analogies to that, I think you see how that argument doesn’t make sense. I subsidize farmers to get good food. I don’t want to read their crop reports, right? I pay for my water bill to have drinking water and a good sewage system. I don’t need to read the reports from the wastewater plant. I want those things, but I don’t need to read the reports.
But there’s this conflation of economic, epistemic, and technological ideas at the same time. There was also the idea that the marginal cost of distributing a scientific finding was near zero with the internet, which did not take into account that the fixed costs of running those platforms, of staffing those technologies, and of having all the security systems and the analytic systems and everything else were far greater than the variable costs of producing print issues and distributing them once a month or once every couple of weeks. So there is a lot of confusion around this, but ultimately it became a philosophical disruption, because everybody became confused about what the purpose of scientific publishing was, which, historically, has always been to serve specific communities in specific ways with high-quality, relevant information.
DG: Wow. And Joy, in your point of view, do you feel that publishers are trapped by these incentives or are they complicit in them? And is there a realistic alternative to this current commercial model?
JM: That’s a good question. Every publisher is different, and one of the things Kent and I want to get across is that every publisher has its own community, its own stakeholders it is serving, for different specialties but also for different economic reasons. So one of the things we try to do is not lump everyone together. That said, what they all have in common is that they’re making their business decisions based on who they’re aiming to serve and what is paying their bills. It seems that a large part of the publishing community, though not everyone, has lost the overarching objective of producing quality scientific research.
DG: Kent, you’ve been at this. I’ve known you the whole time I’ve been in this industry since ’98.
KA: Same here. We’ve been lucky that way.
DG: And you have always been an advocate for good science. How do we get back to that?
KA: Well, I think, you know, some people have said slow science is better science, and I would agree with that. Other experts have said less science is better science as far as the amount produced and put into the publication streams. I think that we’ve kind of lost our way.
Look, I mean, consider all the norms that we’ve allowed to be broken. “Peer review? Just post your preprints, it doesn’t matter.” Peer review doesn’t matter to us anymore. Conflicts of interest about who’s paying for the research and who’s paying to have the research posted? That doesn’t matter to us anymore; we’re not going to ask you to disclose it. That has recently started to change, which is a good thing, so we’re starting to have some disclosure about who’s paying gold open access APCs. We don’t really care about what goes into large databases like PubMed Central and things like that; predatory publishers have been caught getting in, and preprints have been indexed there.
All of these things, I think, call for bringing back a set of standards, but also for monitoring incentives; I think that’s a huge thing. So, if somebody is paying to publish something, why are they doing that? It is a very fair question to ask: why did you pay somebody to publish this? Why was it not good enough for the publisher to assume all the risk of publishing it, using the quality and the importance of your findings to further burnish their brand and to attract a higher level of science from that community? A lot more scrutiny of these things, I think, will help a lot.
DG: And so, what prompted the two of you to come together to publish the book How the Internet Disrupted Science, and what can the reader expect when it is available?
KA: Yeah. So, we started talking a little over a year ago. I’ve been covering this space, as you know, Darrell, since 2007 or 2008, and I started to cover it as an enthusiast. I talked about Physician 2.0 and Publisher 2.0, and I loved the disruption mindset and all of that, but I thought the norms would hold. And the norms didn’t hold. The norms kept getting twisted, and I kept finding stories of corruption, stories of insider trading, duplicitousness, and deceit that just didn’t make sense. And there was also this drive to create a monoculture.
So when I reconnected with Joy, we both found ourselves on the same page: what’s gone wrong here? This is not how it felt when we discovered scientific and scholarly publishing. It was a breath of fresh air then, because it was a group of people who were all devoted to sorting out fact from fiction, to taking really important findings, refining them, and getting them out to other scientists so they could use them. You felt like you were building something great. And right now, it feels like we’re destroying something great, which is the opposite.
And so, we started to try to find a way to bring this concern to a wider audience. I’ve been bringing it to the scientific publishing audience productively for a long time now, 15 or 20 years. I write The Geyser, publish it every day, and have thousands of people read it. But we want to have hundreds of thousands of people read it. So, we tried various things, and then we realized that the best way to do it was to put all our ideas together in a book.
And so, we explore everything from the historical norms of publishing and how publishers came about, and then we dive into
<https://www.charleston-hub.com/media/atg/>
all the things that have happened since then — how technology vendors have become the default go-to players in scientific publishing. If you look at any scientific publishing meeting, it’s all dominated by technology vendors; it’s not dominated by editorial voices. We discuss how copyright came under attack and all of the various issues, and we wrap it up with a concern about what happens if all these things line up, considering what’s going on with RFK Jr. and the anti-science devotees in the government. It doesn’t seem like scientific publishers are screaming from the balconies about the problems. They seem to be more watching their bottom line and trying to stay out of trouble. So, we think that’s where we can step in and say, “No, you should actually make some good trouble.”
DG: Exactly. And Joy, we met some years ago when you were with the Seed, I believe.
JM: Mhm.
DG: And you’ve been at it as well. What are your thoughts in regards to how we can turn this industry around?
JM: We definitely can. And that’s why Kent and I, when we reconnected, realized we could connect the dots on the technology front: Why do publisher sites work the way they do? Why are they throwing off unintended consequences? How are people making money through the technology? And also, how is the outside world using publisher platforms to take scientific papers and use them for their own means? Taking that inside-out, outside-in perspective is where we have laid the through-lines, and there are very simple things that could be done to counteract them if publishers wanted to take back control of their content for their core audiences.
DG: You know, interestingly enough, when I was at Elsevier and launched Science Direct, Karen Hunter, she was the Senior VP of Strategy. She actually came up with the idea for Science Direct — for Elsevier, not for the industry, but for Elsevier. And she realized early on that in trying to get other publishers’ content abstracts and indexes on the science platform so people could connect, she would have to do a license for each individual one. So the industry came together and created Crossref. Is this another opportunity for the industry to come together for the greater good to put in standards around publishing?
KA: I think that’s one possible tool that could come about. I think that if the industry said that peer review is nonnegotiable — and good peer review, not just the hand-waving kind that some publishers do, but really strong peer review — and if standards like careful evaluation by certifying bodies were restored, that would help.
But I think the biggest thing to me is one of our biggest recommendations. I think copyright is a democratic principle. Copyright is in the Constitution for the U.S. It’s among the natural rights cited by a lot of the Western world for its citizens, and I think that denigrating copyright should be off-limits.
Crossref, actually, one of the more interesting anecdotes people might remember was when OMICS was a big predatory publisher, and the Federal Trade Commission fined them $55 million because of their consistent predatory publishing practices in order to help drive them out of business. Crossref took them out of their membership and said, “OMICS, you’re not going to be able to use Crossref anymore.” Crossref is an interesting example because they have a pivotal role in this, right? They are the de facto scholarly scientific publisher registry. And if they had standards that were higher and said, “You know, somebody’s paying to place that article, there’s a financial incentive that we don’t like,” or “If this group has this
many retractions or they’re closing down a brand or something like that,” there are signals that they could use to evaluate whether those DOIs should be granted in the first place, whether they have the kind of qualifications and community focus and community uptake and all of that in order to justify getting a Crossref DOI. So, I think there are bodies like that that could do a lot more with their place in the technological realm than just dish out technology.
JM: And I think that’s a really good example of the premise for our book: the unintended consequences. You started with something that looked like technology, and it turned into a certification that turned into a business model that could be exploited in one direction or the other. And so, our recommendation is for the leaders in scholarly publishing to take a step back and look at how these things are connected, and to re-examine some policies, especially the ones who are in control of making decisions about who gets a DOI and things like that.
DG: You know, this discussion reminds me of the former organization AAP/PSP. It still exists, but they disbanded the group where the publishers would sit around the table and discuss these very significant issues. At the recent annual STM Frankfurt conference, they actually had the different publishing organizations together discussing something for the first time. Why is it that the leaders of our various publisher organizations have not come together to say, “Let’s tackle the industry’s serious problems”? Kent, what are your thoughts?
KA: Well, my thoughts are, one of the things that the presence of technology and the pursuit of scale led to was a massive consolidation of the industry, right? So, where there used to be — and Joy and I remember these days, you know, the HighWire Press meetings, or some of the other publisher meetings, PSP is another example — there used to be dozens or hundreds of publishers, and a lot of people with executive or C-level titles, and people would turn to them and listen to them. Now we have at these meetings maybe a dozen people like that. And so there are fewer minds, and a lot of the contractual relationships make it so that those 12 people have huge influence on what’s possible or can be blockers to change. And so you have this concentrated brain power that is more and more commercially oriented and less and less scientifically oriented, because a lot of those people actually don’t seem to have a lot of background in scientific publishing either.
So, I think from a leadership standpoint, we’ve seen a change in leaders where they’ve been tagged to lead these organizations either because they have technology chops ...
JM: Mhm.
KA: ... because they believe in the routines of scale and speed, and they don’t seem to have a lot of experience dealing with scientific publishing and the editorial and ethical aspects of that. So, I think that’s kind of a crucial missing piece right now on the leadership front.
JM: A lot of the largest publishers have rebranded themselves in recent years as being more about being data brokers or knowledge content providers than actual publishers. And I think, to what Kent was just saying, that’s part and parcel of how they’re operating and focusing more on scale and moving content through the system, as opposed to traditional publishing, which has been about what not to publish. Right now, it’s about “more is more,” where it used to be about curation, and “less is more.” And so, I think that’s why we don’t really get them at the table talking about core publishing decisions, because that
would curtail the growth that operating platforms, brokering, and data at scale require.
DG: In our last five-minute section that we have — time just flies by — what is it that you hope the reader will come away with from reading your book, How the Internet Disrupted Science? What are two or three things that you hope will strike a chord with them?
KA: Yeah, I hope people understand how high the stakes are in all of this. This is an area maybe they didn’t even think about, right? Scientific publishing. They take it for granted. We hear it all the time with Scott Galloway and other academics saying, “You know, well, it was peer-reviewed science and it’s good,” and we want to kind of make people aware that that doesn’t mean what you think it means anymore. That people are abusing that term and getting away with it. That there are reasons for that; there are things that happened that really weakened what that means and diluted the power of the industry.
And the stakes are high because of where this could lead. You know, we’ve already seen the very first inklings of this, right? Where you have a lawyer at the head of Health and Human Services in the U.S. making vaccine policy that has led to one, if not two, maybe three countries losing their measles-free label, to deaths and quarantines, and the resurgence of whooping cough and all of this. And that’s because they’ve used scientific information planted specifically to provide fake justification — what we call “gaslight science” — in order to drive policy changes and fool the public. If there’s not a good scientific information space that people can trust and that scientists are protecting, then that kind of thing can happen. So, we want people to be aware that this exists, it’s important, and the stakes are high if it goes wrong.
JM: And we also want to connect the dots between some ideologies that have been put out there that no longer hold up. Speaking of science, we want to use a scientific brain to look back on how do we get from there to here? And so, one of the most fundamental issues is the idea that more science available to more people immediately is better. And what we found through the research and what we discuss in the book is that it’s actually not. And it’s better to control copyright and to curate content for the people who need it in a controlled way. And that’s what we want to get back to.
KA: Right. And what “controlled” means is community-based, right? Each community can decide what that means.
Like in botany, maybe you have a seed database that you make available to countries that can’t afford to do genomic testing on their seeds or order the right seeds for their environment and for their agriculture. And so maybe that’s how you get your information out, but a monoculture and a one-size-fits-all, one-model-fits-all approach doesn’t make sense. So, we need to return it to communities and then encourage them to have long-term interests in the content through copyright, and a long-term interest in and relevance to the community by being selective about what goes to that community, making sure it’s the best information, however that community sizes it up.
DG: I know this is a tough one. 30 seconds each. What are your thoughts about AI and how it can help us solve this problem?
KA: I’ll take one second. It can’t.
JM: Ditto.
KA: And AI is about as anti-science as you can get. At scale, there are small, well-controlled inference engines that can work in specific situations, and that’s about it. I think what people have gotten wrong is that they think AI is AI. AI is modest technology that does modest things fairly well, and Sam Altman is conning everybody. That’s my take.
JM: There’s nothing wrong with using machine learning to solve problems, but using the term “AI will help us” is delaying action on things that we can solve with our regular human brains and normal computers.
DG: Wow. Kent and Joy, this has been awesome. Ladies and gentlemen, their book comes out this July, How the Internet Disrupted Science. Make sure that you get your orders placed. It’s published by Globe Pequot/Prometheus with distribution by Simon & Schuster. Really, thank you for your time today, Kent and Joy, and we look forward to having future conversations.
KA: Thanks, Darrell.
JM: Thanks so much. This has been fun.
DG: That wraps it up for this week on Leadership with Darrell W. Gunter on WSOU 89.5 FM, which can be found on all of your popular podcast platforms. We want to wish you a very good day and a happy holiday season, but remember, leadership begins with you.
ATG PROFILES ENCOURAGED
Jason Casden
Head, Software Development
University of North Carolina at Chapel Hill University Libraries
CB #3900, 209 Davis Library, University of North Carolina at Chapel Hill
Chapel Hill, NC 27515-8890
Phone: (919) 962-7554
Fax: (919) 843-8936 (yes, we still have a fax number)
<jason.casden@unc.edu>
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: I think that libraries and archives will be more important than ever. In a world of increasingly convincing AI-produced false documents, our emphasis on information literacy and provenance will be essential.
John Felts
Head of Information Technology and Collections, University Libraries
BORN AND LIVED: Born in the mountains of SW Virginia and lived mainly in the Carolinas.
EARLY LIFE: Jazz trumpet player; lovable rogue.
PROFESSIONAL CAREER AND ACTIVITIES: I’m currently the Head of Information Technology and Collections at Coastal Carolina University. I’ve worked in academic library technology for over 30 years and am a patent holder and co-founder of Journal Finder, the first OpenURL Resolver and knowledge base to go into production in the United States. I’m currently Chair of the SeamlessAccess Outreach Committee and a member of the Scholarly Network Security Initiative (SNSI) University Relations Group.
FAMILY: My daughter Sydney, my son John, and my two furry feline freeloaders: Olive and Oliver.
IN MY SPARE TIME: I listen to jazz, read, swim, and enjoy watching movies, TV, and sports.
FAVORITE BOOKS: Slough House series, The World According to Garp, and The Right Stuff
PET PEEVES: People who meticulously back into parking spots while the world watches and waits.
PHILOSOPHY: Suit up. Show up. Swing hard.
MOST MEMORABLE CAREER ACHIEVEMENT: Working with Tim Bucknall to create and develop Journal Finder, the first OpenURL Resolver and knowledge base to go into production in the United States.
GOAL I HOPE TO ACHIEVE FIVE YEARS FROM NOW: Stop thinking about The Sopranos ending.
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: We are racing beyond information retrieval into an era of knowledge navigation, contextual intelligence, and evidence synthesis at scale. As AI reshapes the scholarly landscape, libraries will remain essential as long as we continue to focus on what matters most: nurturing relationships, building trust, and keeping people at the heart of all that we do.
Against the Grain / February 2026
Emily Glenn
Dean, McGoogan Health Sciences Library
University of Nebraska Medical Center
Omaha, NE
Phone: (402) 559-4085
<Emily.glenn@unmc.edu> www.unmc.edu/library
BORN AND LIVED: Born in the San Francisco Bay Area, lived in Washington State, North Carolina, and Nebraska.
EARLY LIFE: I spent a lot of time at Powell’s Books in Portland, OR as a kid. That helped open my eyes to, well, everything you could find in that amazing bookstore! The people who worked there were so kind and welcoming, too.
PROFESSIONAL CAREER AND ACTIVITIES: I began my library career as a work-study student at the University of Oregon Knight Library, eventually becoming the university’s first Orbis Cascade Alliance lending coordinator. While in grad school at UNC Chapel Hill, a fellowship with the EPA RTP library solidified my interest in supporting biomedical research scientists. Following roles at the Duke University Medical Center Archives and in Seattle with computer and infectious disease researchers, I transitioned into outreach as a public health information coordinator with the Network of the National Library of Medicine.
During my decade at UNMC, I have focused on education, research services, and technology. As dean, I lead a team of about 30 people to fulfill the missions of the library and the Wigton Heritage Center museum. Beyond these roles, I have led campus AI initiatives and facilitated design thinking and improv-based communication workshops. I currently volunteer with the Medical Library Association and Association of Academic Health Sciences Libraries, focusing on developing future library leaders.
IN MY SPARE TIME: I started geocaching this year, which combines trekking, solving puzzles, and learning local lore. I feel like I’m on a constant sidequest with unknown co-conspirators. I also enjoy traveling and going on adventures with my terrier mutt.
PET PEEVES: People who don’t wait their turn while unloading an airplane.
PHILOSOPHY: Be kind, even on your bad days.
MOST MEMORABLE CAREER ACHIEVEMENT: Putting together a plan to collaborate on information literacy instruction in Rwanda, embedding med students in the project, and working with local librarians to make it meaningful for students and professionals.
GOAL I HOPE TO ACHIEVE FIVE YEARS FROM NOW: I hope to have earned a doctoral degree and achieved master gardener certification.
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: In five years, AI will have fundamentally changed what it means to produce and consume information. Consequently, we will see a heightened focus on the ethics and tools that integrate critical information literacy into education and research workflows. Librarians will be at the forefront creating and auditing systems that go beyond detecting AI or signaling the presence of incorrect information. While open access and open data will be more accessible, the library’s value will have shifted noticeably from discovery toward reproducibility. Librarians’ work will be more relevant than ever, even as the library evolves into a distributed network of specialized services across the institution.
Arran Griffith
Fedora Program Manager
Fedora
<fedora@lyrasis.org> https://fedorarepository.org
BORN AND LIVED: Born in Toronto, ON Canada. Currently reside in Dieppe, NB Canada.
PROFESSIONAL CAREER AND ACTIVITIES: I have a BSc in Biology from Mount Allison University in Sackville, NB Canada. I spent the first 12 years of my career as a large volume, big-box retail manager before transitioning over to community outreach in open-source library technology. My first experience in this industry was when I joined the Fedora Program as a part-time outreach coordinator in 2021.
Shortly after, I stepped into the role of Acting Program Coordinator in 2022, and later that year became the full-time Fedora Program Manager. In this capacity, I represent Fedora through conference participation and speaking engagements, and lead ongoing program operations, outreach, and community engagement efforts.
IN MY SPARE TIME: I am an avid long-distance runner and have completed over five marathons and 20+ half marathons. Pre-COVID, I qualified to run the Boston Marathon but never made it to Boylston St. because of race cancellations due to COVID restrictions. My goal is to qualify one more time and have the opportunity to run the Boston Marathon IN Boston.
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: I think this is a challenging question to answer simply. I expect that this industry will be shaped by rapid technological change, especially as AI and large-scale data extraction continue to influence how digital content and collections are accessed, used, and protected. Under that lens, data ownership and data sovereignty will be more important than ever and should be the primary focus of advocacy.
I do hope there will be stronger recognition of open source software as critical infrastructure across libraries, archives, and cultural heritage organizations. Community-led platforms, like Fedora, help institutions maintain control over the data they steward and make values-driven decisions about access and sustainability. But open source can only succeed if we collectively invest in its long-term future. I am, and always will be, a strong advocate for each and every one of the maintainers and contributors who make this work possible, and for the communities that drive engagement and adoption and ultimately provide the collaborative support networks that are the cornerstone of community-guided principles. And I will continue to be this advocate now, and into the future, as long as I am able.
Matthew Ragucci
Director of Institutional Product Marketing
Wiley
East Brunswick, NJ
<mragucci@wiley.com> https://www.linkedin.com/in/matthew-ragucci/
BORN AND LIVED: I like to tell people I was born in New York City, but technically it’s Staten Island. I grew up in New Jersey, spent some time in New York state before coming back home to Jersey.
PROFESSIONAL CAREER AND ACTIVITIES: My path to scholarly communications wasn’t a straight line; I didn’t discover my passion for scholarship and libraries until college. As a liberal arts graduate facing a difficult economic recession, I found myself somewhat directionless.
Then I stumbled upon a New York Times article titled “A Hipper Crowd of Shushers” about the aging demographics of the library profession, and something clicked. I knew I was going to become a librarian. I soon enrolled in graduate school and spent the following years building diverse experience across public libraries, an academic library, and even a presidential library and archives. While I no longer work in a library full-time — I still moonlight as a reference and instruction librarian on weekends — I’ve retained my librarian identity at my core. I like to think I’m still a library advocate, just now serving from the publisher side of the table. This unique perspective, understanding both the library and publisher viewpoints, has proven invaluable throughout my career.
Beyond my day-to-day marketing responsibilities at Wiley, I’ve embraced several professional membership and service opportunities that keep me connected to the broader scholarly communications community. My current commitments include:
• Serving on the Board of the North American Serials Interest Group (NASIG), where I also coordinate social media efforts
• Participating in the NISO Information Discovery & Interchange Topic Committee, with liaison responsibilities to the CREC Working Group
• Contributing as a member of the COUNTER Executive Committee
These roles allow me to stay engaged with critical industry conversations around standards, discoverability, and usage metrics — issues that sit at the intersection of library and publisher interests.
IN MY SPARE TIME: Lately I’ve been getting into jigsaw puzzles, big 1,000-piece ones that keep me away from the screen. My public library has a great collection of them, and I usually take one home every other week. Other hobbies include guitar (I’m just good enough to entertain myself), vegetable gardening and composting, and traveling.
FAVORITE BOOKS: Hard to pick favorites, but some recent ones I’ve enjoyed are Homegoing by Yaa Gyasi, Elevation by Stephen King, Tonight in Jungeland: The Making of Born to Run by Peter Ames Carlin, and Lovecraft Country by Matt Ruff.
PET PEEVES: People who interrupt or don’t give others an opportunity to speak.
PHILOSOPHY: When in doubt, do something.
MOST MEMORABLE CAREER ACHIEVEMENT: I mentioned that one of my hobbies is traveling. One thing I really enjoy about my current role is the ability to travel. I’ve gotten to visit three different continents as part of my work, and while I don’t travel as much as I used to, it’s been a real highlight for me.
Which brings me to my career achievement. My Latin America colleagues at Wiley were asked by a Mexican consortium to deliver an on-site, half-day workshop, and they in turn asked if I was interested. I immediately said yes, not really thinking about how much work it would entail. My overwhelming sense of perfectionism meant that the weeks leading up to preparing and ultimately running the workshop were riddled with anxiety. I’m not a native Spanish speaker, so in addition to preparing several hours of workshop content, I also needed to translate everything into Spanish, including my delivery. But it was all worth it, and I couldn’t have been happier with the end result. Although I was invited to return, I didn’t receive any accolades, just a pat on the back.
But this stands out as an achievement for me because that experience taught me something important about myself. Once I’m committed to delivering something, even against seemingly impossible odds, I’m able to persevere and succeed. It was a watershed moment for me, and a memory I will never forget.
GOAL I HOPE TO ACHIEVE FIVE YEARS FROM NOW: I used to run long distance when I was younger, and it is something I’d like to revisit now. I’ve done half marathons before, but would like to get a whole one underfoot within the next five years.
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: There’s no question that AI will fundamentally reshape our industry. It’s already embedded throughout the research lifecycle, from literature discovery and manuscript preparation to peer review assistance and content analysis. Publishers, institutions, and researchers are all racing to develop and deploy AI solutions. This proliferation will only accelerate. However, what’s critically needed are robust guardrails and ethical frameworks to guide responsible adoption. We’re already witnessing both the promise and the pitfalls: AI-generated papers flooding submission systems, concerns about bias in automated peer review, questions about authorship attribution, and the integrity of AI-assisted research. While standards won’t solve every challenge, they provide essential stability during uncertainty. I expect the industry to coalesce around collaborative initiatives — developing standards, best practices, and shared principles that balance innovation with research integrity. Organizations like STM, NISO, and COPE will play crucial roles in establishing these guardrails.
The traditional read or read-and-publish paradigm is evolving into something more. Publishers are increasingly positioning themselves as research enablement platforms rather than content gatekeepers. The future model appears to be service-centric: analytics dashboards, data management tools, collaboration platforms, AI-powered research assistants, and reproducibility services bundled into subscription packages. This shift reflects publishers’ efforts to embed themselves more deeply into institutional workflows while also demonstrating tangible value beyond content access.
Market consolidation will continue, driven by the capital requirements of technology development and the competitive pressure to offer thorough service ecosystems. I anticipate further acquisitions and mergers, particularly as larger publishers acquire specialized technology companies, data analytics firms, and workflow solution providers. These consolidated entities will package these capabilities as “value-added” subscription services, creating more vertically integrated offerings. This trend raises important questions about vendor lock-in, data ownership, and market competition that the community will need to address.
Beyond these core trends, I’m also watching the growing tension between open science mandates and commercial sustainability, the emergence of preprint servers as serious competitive alternatives, and the increasing role of institutions as publishers themselves. The next five years will test the industry’s adaptability, but those who thoughtfully navigate the AI transformation, embrace new service models, and maintain ethical standards will be well-positioned for the road ahead.
Tim Shearer
Associate University Librarian (AUL) for Digital Strategies and IT, University of North Carolina at Chapel Hill – University Library
CB #3900, 209 Davis Library, University of North Carolina at Chapel Hill Chapel Hill, NC 27515-8890
Phone: (919) 962-7554
Fax: (919) 843-8936 (yes, we still have a fax number) • <tim_shearer@unc.edu> https://library.unc.edu/
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: Pundits claiming, as though they were the first, that libraries are on the cusp of irrelevance. Ubiquitous network and compute power have changed the tools yet again.
Roger Strong
Director of Sales, North America
JoVE
625 Massachusetts Avenue
Cambridge, MA 02139
Phone: (302) 573-1235
<roger.strong@jove.com> www.jove.com
BORN AND LIVED: Born in Norfolk, VA, grew up in New Jersey, and currently live in Delaware.
EARLY LIFE: Roger grew up near Virginia Beach as the oldest of four children before moving to the Philadelphia area in second grade, where he became a lifelong Phillies and Eagles fan. He balanced sports and the arts throughout school, playing saxophone, performing in theater, and competing in soccer, basketball, baseball, and golf. In high school, he served as Drum Major of the marching band and played the lead role of Felix Unger in The Odd Couple. At Rutgers University, Roger continued developing his leadership skills as student government president and was chosen as the senior speaker at graduation.
PROFESSIONAL CAREER AND ACTIVITIES: My first job out of college was with a small microform and textbook publisher, Scholarly Resources. It was a great first job, as I had a West Coast territory and the opportunity to travel to see and visit great libraries and campuses in California and the Pacific Northwest. Eventually I became their Global Sales Manager. Realizing my passion for building and leading teams, I became a Regional HR Manager at RR Donnelley, recruiting and building sales teams in the printing industry. I returned to academic publishing with Gale (Cengage) after Thomson Learning acquired my former company’s microform division, and over the next 20 years helped launch Gale’s strategic account program while ultimately leading Americas sales and customer success teams. I have been an active member of ACRL, ALA, and CNI, and value the professional relationships built along the way. Inspired by years of campus travel and supporting my own children through admissions, I volunteered with ScholarMatch to assist first-generation students in the college application process. Today, I serve as Director of Sales, North America at JoVE, continuing my commitment to supporting libraries and their communities.
FAMILY: Wife: Sharon + 6 kids: David, Ben, Hannah, Molly, Grace, and Theo + Family Dog Arlo.
IN MY SPARE TIME: I like to spend time as a USA Swimming Official volunteering at my son’s swim meets. I also like to read a good non-fiction or historical fiction book and explore smoking and BBQ with my 24” Weber Smokey Mountain (a COVID hobby that has now flourished).
FAVORITE BOOKS: Historical Fantasy — The Historian by Elizabeth Kostova; Non-Fiction — Impact Without Authority (How to Leverage Internal Resources to Create Customer Value) by Barbara Geraghty and Jane Helsing.
PHILOSOPHY: Stay passionate and stay persistent.
MOST MEMORABLE CAREER ACHIEVEMENT: Visiting close to 400 libraries and higher ed institutions across the Americas.
HOW/WHERE DO I SEE THE INDUSTRY IN FIVE YEARS: The academic publishing industry will continue to evolve, consolidate, and adapt to a few critical issues facing users of scholarly content. Most importantly, partnerships between publishers and educational institutions will focus on how to close the student learning gaps that have become more apparent post-COVID, particularly in math and STEM fields. The next generation of students will need to be engaged in ways that blend agentic AI and human interaction while delivering tools and content that are timely and relevant, in formats that students are comfortable engaging with. Librarians, faculty, and content providers will need to continue to evolve how they support this engagement to improve learning outcomes.
Susie Winter
Co-chair, SNSI Communications Working Group; Vice President Communications, Springer Nature
SNSI / Springer Nature
<Susie.winter@springernature.com> www.snsi.info
BORN AND LIVED: Born and brought up in Surrey, UK.
FAMILY: Married (nearly 25 years!) with two kids now adults (one 21, the other 18).
IN MY SPARE TIME: In the winter, going skiing; in the summer, country walks and lying on a beach in a hot country.
FAVORITE BOOKS: My all-time favourite book is Rebecca by Daphne du Maurier. It has the all-time greatest opening line of any book.
PET PEEVES: People walking slowly in London.
PHILOSOPHY: Be kind to yourself and to others.
MOST MEMORABLE CAREER ACHIEVEMENT: Steering a private members bill through the UK parliament which increased the penalties for copyright theft.
GOAL I HOPE TO ACHIEVE FIVE YEARS FROM NOW: Gosh — to still be making a (small) difference in helping people better understand what publishers do and the valuable role they play in the research ecosystem.
LIBRARY PROFILES ENCOURAGED
Leon S. McGoogan Health Sciences Library
University of Nebraska Medical Center
986705 Nebraska Medical Center Omaha, NE 68198-6705
Phone: (402) 559-6221
https://www.unmc.edu/library
BACKGROUND/HISTORY: The mission of the Leon S. McGoogan Health Sciences Library reflects its support of the academic, research, and patient care programs of The University of Nebraska Medical Center (UNMC): “Connecting the past, informing the present, building the future. Inspiring excellence in education, research, and patient care through information.”
The Leon S. McGoogan Health Sciences Library, one of the nation’s major health science libraries, serves the information needs of UNMC students, faculty, and staff, as well as licensed Nebraska health professionals and residents of the state. The library provides timely access to high quality collections of print and electronic materials, promotes the development of information management skills that support lifelong learning, and promotes the integration of quality information in UNMC education and research areas.
Beginning in May 2019, McGoogan Library’s physical home of 55,696 square feet underwent an extensive renovation. The library reopened on August 18, 2020, and now offers 24/7 access, 53 individual and group study rooms, a maker studio, an EZ Studio, conference rooms and classrooms equipped with advanced technology, and student and public workstations. The library also houses UNMC’s E-Learning Lab, the Writing Center, College of Allied Health Professions specialized simulation rooms, Faculty Commons, the Interprofessional Academy of Educators, IT Educational Technology, Faculty Development, and the Inclusion Corner and Brave Space, hosted by the Office of Inclusion.
Developed and managed by experts in health sciences collections, the McGoogan Library’s resources include over 40,000 print volumes and an extensive collection of anatomical models. The library website serves as the gateway to electronic information resources. Online journals, books, bibliographic and other databases are available, and many resources may be accessed using mobile devices. Online resources include more than 58,000 journal titles and more than 66,000 full-text books.
Librarians and archivists within the Special Collections and Archives Department of the Leon S. McGoogan Health Sciences Library collect materials on the history of the health sciences in Nebraska and the history of the UNMC campus community. It is Nebraska’s repository for health sciences-related archival materials, photographs, artifacts, ephemera, manuscripts, and rare books dating to the 1490s. In June 2021, the McGoogan Library opened the 13,110-square-foot Wigton Heritage Center. The Center tells UNMC’s story through gallery and digital exhibit space; showcases the McGoogan Library’s vast special collections, artifacts, archives and rare books; and encloses University Hospital’s historic façade and iconic columns within an atrium that serves as a welcoming space for alumni, visitors, and prospective students.
Librarians are available to assist with the use of the library and its collections, including assistance developing search strategies for the online databases, completing online searches, preparation of systematic reviews and other research projects, retrieving factual information and verifying citations, and managing citations using bibliographic management software. In addition, the library offers one-on-one and group instruction in locating and managing information. Requests for these services may be submitted in person, by telephone or email, or sent via text or chat. Library faculty provide instruction in information literacy, self-directed learning, and research skills within UNMC’s five colleges and to partner organizations. Additionally, the library offers health information services to Nebraska residents and Nebraska Medicine patients and their families.
NUMBER OF STAFF AND RESPONSIBILITIES: 30
Education and research services: 9
Access services, reference, interlibrary loan: 3
Special collections and archives: 10
Collections and technology: 3
Administration: 5
OVERALL LIBRARY BUDGET: $4.5 million
TYPES OF MATERIALS YOU BUY: Databases, e-books, e-journals, print books, rare books, anatomical models.
DOES YOUR LIBRARY HAVE AN ILS OR ARE YOU PART OF A COLLABORATIVE ILS? We are part of a collaborative ILS through the University of Nebraska Consortium of Libraries.
DO YOU HAVE A DISCOVERY SYSTEM? Yes. We use Primo.
DOES YOUR LIBRARY HAVE A COLLECTION DEVELOPMENT OR SIMILAR DEPARTMENT? Yes.
IF SO, WHAT IS YOUR BUDGET AND WHAT TYPES OF MATERIALS ARE YOU PURCHASING? PRINT OR ELECTRONIC OR BOTH? Our collections budget is about $2 million. We mostly purchase e-journals.
WHAT PROPORTION OF YOUR MATERIALS ARE LEASED AND NOT OWNED? About 98%.
WHAT DO YOU THINK YOUR LIBRARY WILL BE LIKE IN FIVE YEARS? The library is our people, and our future is inextricably linked to the transformations occurring across our institution and with our hospital system partners. A primary engine of this growth is Project Health, a state-of-the-art hospital that will integrate with a new innovation district. While we are in the planning stages now, we expect to be reflecting on tangible progress of this partnership.
Our services and instruction will reflect shifts toward health sciences education pathway programs and streamlined undergrad-to-professional enrollment. While we already teach in high school and undergraduate programs, I imagine that we will focus on younger students as demographics shift. Further, as UNMC expands in a new Kearney location about two hours away, the library will be fully integrated into programs, partnerships, and outreach there.
We will have forged stronger connections with statewide digital humanities initiatives, broadening the interdisciplinary reach of our collections and expertise. Notably, our museum, the Wigton Heritage Center, will celebrate its 10th anniversary of sharing histories of health in Nebraska. As the research enterprise and university-industry partnerships evolve, we will also find our place supporting diverse information needs. We will be more nimble in our data science support and education, evolving alongside our campus and health system partners to support researchers at every level.
Above all, our staff will have five additional years of expertise, providing us with a deeper bench of talent. I think we will be reflecting on many successes and creating new opportunities for the years to follow.
WHAT EXCITES OR FRIGHTENS YOU ABOUT THE NEXT FIVE YEARS? Access to reliable health information is a core issue in health equity. I am concerned by the way AI-enabled technology can exacerbate health-related misinformation and disinformation, contributing to poorer healthcare decision-making for individuals and communities. On a larger scale, waning trust in science has threatened the credibility of our most reliable health information entities. I am deeply concerned for the future of public health and the erosion of access to healthcare, and I look forward to the opportunities for libraries to act as stabilizers and leaders in facilitating access to critical information.
University Library, University of North Carolina at Chapel Hill
208 Raleigh Street
Chapel Hill, NC 27515-8890
Phone: (919) 962-1301
library.unc.edu
BACKGROUND/HISTORY: Library of the nation’s first public university and an intellectual and creative partner in the mission of the University, the success of every Tar Heel, and the vitality of North Carolina.
NUMBER OF STAFF AND RESPONSIBILITIES: 230 FTE + 175 student employees
DOES YOUR LIBRARY HAVE AN ILS? The University Library uses III from Clarivate.
DO YOU HAVE A DISCOVERY SYSTEM? Yes, based on the open source platform Blacklight.
Back Talk continued from page 74
OK, I understand that if I am a newspaper struggling for every nickel of revenue I can scrape up, my impulse is going to be to hang on to my content and mediate it out carefully myself. And I really do want those newspapers to survive and thrive and shape the discourse of our society. And sure, I’m glad to cash whatever check Mr. Anthropic sends me. But at
the end of the day, I think it’s in my interest and our interest to use all the tools at our disposal as resourcefully as we can and to live up to the commitment we all have, in some form, to free and open information for all.
None of this is easy, is it?
COMPANY PROFILES ENCOURAGED
JoVE
7700 Windrose Avenue, Suite G300
Plano, TX 75024 U.S. https://www.jove.com
OFFICERS: Dr. Moshe Pritsker, Co-Founder & Chief Executive Officer
ASSOCIATION MEMBERSHIPS, ETC.: Indexed in PubMed (for relevant journal content). Supporter and funder of Friends of Research4Life.
KEY PRODUCTS AND SERVICES: JoVE publishes peer-reviewed research and educational content presented in video format, including:
• JoVE Education
• JoVE Research
• JoVE Business
• JoVE Lab Manual
• JoVE Encyclopedia of Experiments
• JoVE Visualize
• JoVE Co-Pilot
CORE MARKETS / CLIENTELE:
• Academic libraries
• Universities and research institutions
• Faculty, researchers, and students
• Teaching and learning centers
• Institutions across life sciences, engineering, chemistry, and related fields
NUMBER OF EMPLOYEES: 450+ employees
HISTORY AND BRIEF DESCRIPTION OF THE COMPANY / PUBLISHING PROGRAM: Dr. Moshe Pritsker co-founded JoVE in 2006 to address longstanding challenges in research reproducibility. Recognizing that written methods alone often fail to capture critical experimental details, JoVE pioneered the use of video to document and communicate scientific protocols alongside peer-reviewed articles. Today, JoVE is the world’s leading producer and provider of science videos, serving millions of researchers, educators, and students worldwide. Its platforms span both research and education, with over 25,000 videos available in more than 14 languages and accessible globally, 24/7. JoVE Research includes peer-reviewed videos of experiments filmed at leading research universities, as well as the Encyclopedia of Experiments, enabling scientists to learn new techniques more efficiently, reduce training time and costs, and improve research productivity. In parallel, JoVE Education supports teaching and learning through visual resources designed for classroom instruction, laboratory courses, and student understanding and retention. Across its portfolio, JoVE helps standardize how complex concepts and methods are taught and learned, supporting more consistent understanding and application of scientific knowledge across institutions worldwide.
IS THERE ANYTHING ELSE THAT WOULD BE OF INTEREST TO AGAINST THE GRAIN READERS? JoVE works closely with academic libraries as long-term partners in supporting research, teaching, and training across institutions. Libraries use JoVE not only as a content resource but as a practical tool to support teaching, research, and understanding of complex concepts.
As libraries continue to evaluate collections through the lens of usage, impact, and institutional value, JoVE offers a scalable visual model that supports all users, from undergraduates to professors to PIs, and aligns with libraries’ evolving role in supporting research integrity, modern teaching approaches, and sustainable collection strategies.
Lyrasis
We are a fully remote company. Our mailing address is 3390 Peachtree Road NE, Suite 400, Atlanta, GA 30326.
Phone: (800) 999-8558
https://lyrasis.org
OFFICERS: John Wilkin, Chief Executive Officer; Erin Tripp, Chief Operating Officer; Takeya McLaurin, Chief Human Resources Officer.
KEY PRODUCTS AND SERVICES: ArchivesSpace (Org Home), BiblioBoard, the Lyrasis Catalyst Fund, CollectionSpace, DataCite (US Community), DSpace (Org Home), Fedora (Org Home), the Indie Author Project, Lyrasis Consulting, Lyrasis Hosting Services (ArchivesSpace, CollectionSpace, DSpaceDirect and Duracloud), Lyrasis Learning, ORCID (US Community), The Palace Project, Scholarly Communications Initiatives, VIVO (Org Home)
CORE MARKETS/CLIENTELE: Libraries, archives, museums and other cultural heritage institutions.
NUMBER OF EMPLOYEES: 110
HISTORY AND BRIEF DESCRIPTION OF YOUR COMPANY/ PUBLISHING PROGRAM: Lyrasis is a 501(c)(3) nonprofit membership organization whose mission empowers libraries, archives and museums through content services, open technologies and community-based solutions that expand access to information, preserve cultural heritage, and advance the shared goals of our members and the communities we serve. Collaboratively, we build a future that is inclusive, equitable, accessible and sustainable. Our story began in 1936 with the establishment of PALINET, eventually evolving through the landmark merger with SOLINET and others to become the organization we lead today. Through strategic growth and shared vision, Lyrasis has matured into more than just a nonprofit; we are a vibrant community of over 1,100 members and over 3,000 customers across the globe.
IS THERE ANYTHING ELSE THAT YOU THINK WOULD BE OF INTEREST TO OUR READERS? Lyrasis provides a full range of programs and services to help your library, archive or museum maximize your resources through cost savings, time savings and one-stop access. Our nationwide and international membership enables us to bring together communities around shared interests in ways unmatched by any other organization. Lyrasis was created by its members to help tackle wide-reaching challenges with a collective strength and our goal is to help your institution scale and leverage better through your membership as part of our broader community.
To follow our work, please sign up for our newsletter or join us on LinkedIn: @lyrasis, Facebook: @wearelyrasis, Instagram: @wearelyrasis or Bluesky: @wearelyrasis.
Back Talk — The LLMs Are Hungry
Column Editor: Jim O’Donnell (University Librarian, Arizona State University) <jod@asu.edu>
It was about 2000 when I heard my friend and fellow classicist Greg Crane, who did more to kick classicists into the digital age than anyone else, offer this aphorism: “If it’s not on the net, it’s not information.” In those days, he drew long thoughtful stares. Now I would think that what he said was just plain true.
Times have changed and I’m writing this piece to tiptoe towards the point at which I might imitate Greg and say, “If it’s not in the LLMs, it’s not information.” Bear with me.
I was triggered thinking this way by this article from Harvard’s Nieman Lab: https://www.niemanlab.org/2026/01/news-publishers-limit-internet-archive-access-due-to-ai-scraping-concerns/
The story they tell, very clearly, very objectively, and with considerable detail, is how major news organizations are beginning to erect barriers to the work of our friends at the Internet Archive by blocking the fundamental archival activity of the Wayback Machine. The fear is that once the Archive has the content, it will be scraped by omnivorous LLMs to incorporate into their ever-expanding oceans of data. It’s not a simple story, and I think the Nieman folks do it justice.
But it has made me think.
A few months ago, I got a letter out of the blue telling me the status of the lawsuit known by shorthand as Bartz v. Anthropic. That suit alleged that the AI enterprise Anthropic had populated its LLM with, inter alia, an enormous number of books whose digital contents Anthropic acquired by downloading them from “pirate libraries” on the dark net. Anthropic agreed last summer to settle the suit: details here https://www.anthropiccopyrightsettlement.com . As a consequence, authors of books that had been sucked into the LLM were eligible for cash settlements in the amount of $3,000 in return for filing a simple claim.
There are some distracting fine points about this settlement. (1) Many authors will share the $3,000 with their publishers. (2) Because the case was not litigated, it is not proven that
ADVERTISER’S INDEX
2 ACS Publications
5 American Physical Society
76 Against the Grain
25 ATGthePodcast
21 Charleston Briefings
9 Doing the Charleston
19 Fiesole Retreat 2026
3 GOBI Library Solutions
7 Katina Publication (About)
39 Katina Publication (Article Proposals)
75 Knowable Magazine
FOR ADVERTISING INFORMATION CONTACT
Toni Nix, Advertising Manager, Against the Grain, Charleston Hub <justwrite@lowcountry.com> • Phone: 843-835-8604
Anthropic acted as alleged and there are representations that after thinking about using pirate materials, Anthropic went out and bought a lot of print books and scanned them destructively. The Washington Post ran an article on 27 January ( https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/ ) quoting just-released court documents from the lawsuit that, to say the least, put Anthropic’s position in some doubt — but of course the Post and its owner are not uninterested parties in all this.
But let’s let those details ride. The broader question of the future of information in the age of AI remains. So I’ll tell my story.
I got that letter last summer about the lawsuit and used the tool it provided to look to see what had become of my books. Bingo: five of them are on the list. I filled out the claim, mentally multiplying $3,000 by five, and feeling doubly satisfied. Satisfied first because I can use the money, ok? But satisfied second because I was happy that my books would now be read and interpreted and quoted and used by one of the most influential thought leaders of our time — that Anthropic LLM itself. “If it’s not in the LLMs, . . .”??
I start a new paragraph here so that those readers outraged by what I just said can catch a breath. But hear me out. There’s vastly too much information out there and the quantity is growing hugely. (I never like it when people say “growing exponentially” because they’re usually both exaggerating and showing they don’t understand exponents. Even I might be tempted to use it in this case.) If I want to explore and use that information, then I — the generic I, meaning by that you and I gentle reader, and our children and students and colleagues — are all going to be using the most compendious tools available to do so. I use them now myself, most often when I want to know a serious answer to a question of professional interest but outside my own area of special competence. Tonight I asked it to tell me things to read to explore the question whether there should ever have been a Revolutionary War or not. I’m getting the excellent Maya Jasanoff’s Liberty’s Exiles to help me assuage that curiosity, but I learned a lot from ChatGPT about the eighteenth century literature of the subject and the modern editions of works from that period. Specialist in the fourth century CE that I am, I would never have gotten that kind of guidance from our instance of Ex Libris and certainly never from the library card catalog of happy memory, and never by any means so quickly and compendiously.
My point is that the future invasion of society by AI tools is already history. This is the world we do in fact live in. And if I care about my own petty contributions to the store of human knowledge, it’s in my interest to take advantage of the best emerging tools — and their successors — that I can.
continued on page 72
Against the Grain wants your support!
About Against the Grain
TO ADVERTISE IN ATG
Contact Toni Nix at <justwrite@lowcountry.com>
Click the links below for information on how to Subscribe, Submit Content, or Contact Us.
Against the Grain (ISSN: 1043-2094) is your key to the latest news about libraries, publishers, book jobbers, and subscription agents. Our goal is to link publishers, vendors, and librarians by reporting on the issues, literature, and people that impact the world of books and journals. ATG’s eJournal, with an open rate of over 48%, is published five times a year (February, April, June, September, and November) and distributed to ATG subscribers, Charleston Library Conference attendees, and registered members on the Charleston Hub.
Find ATG on the Charleston Hub at www.charleston-hub.com