When I speak with library professionals in the U.S. and abroad, they commonly voice frustration at trying to plan for AI implementation in such an uncertain environment. With that in mind, here are some tips to help you get started with artificial intelligence at your library.
What’s the Current Regulatory Environment?
As it stands right now, copyright questions abound. The U.S. Copyright Office recently took public comments on artificial intelligence and has previously pointed to “human authorship” as a necessary requirement for copyright protection, while copyright infringement lawsuits filed by artists and authors against generative AI tools are slowly adjudicated.
Looking more broadly, U.S. President Joe Biden has issued an executive order calling for “new safety assessments, equity and civil rights guidance and research on AI’s impact on the labor market,” but some of the most potentially impactful aspects of this EO will come from various federal agencies (Commerce, Education) up to a year from now. Turning to the European Union, there is a good chance that the first comprehensive AI legislation will arrive at some point in 2024.
Shouldn’t We Just Wait?
Knowing that we’ll have more clarity in a year’s time, it can be tempting to sit back and wait before engaging with AI at the organizational level. This would be a mistake! Like it or not, our library staff and our users already have broad access to AI tools. Furthermore, conversational search, generative AI, and other iterations of the technology are being integrated into Windows 11, Google Workspace, and other common software applications! Traditional library database vendors will be approaching us with AI-driven solutions that we will need to critically evaluate. It is therefore imperative that we provide the guidance and training necessary to interact with AI safely and effectively. As we wait for the regulatory environment to take shape, we should spend our time creating what certainty we can within our organizations and demystifying the technology for our staff and public!

Creating Certainty Through Policy
While you wait for legislative action, engage in policy review. To what degree does your existing policy, however indirectly, address artificial intelligence? Sections on the confidentiality of patron records and intellectual freedom are a good place to start! In a previous post, I provided some potential policy foundations. While you cannot govern the environment beyond the library, effective policy, followed by procedure, can give staff some rules of engagement and safeguard patron privacy.
It is also important that we do not view future legislation as a cure-all! Legal and ethical are two different matters: our professional ethics may stand in contrast to, or go unaddressed by, the law. If future copyright law allows generative AI to train on copyrighted material, do we reassess our relationship with some tools? Do we only use platforms that are “ethically” sourced, and how do we define that? These are important, value-driven policy conversations that must take place, and now is the time to hold them!

Demystifying AI
We cannot plan for a future with AI if we do not understand the tools of the trade. Experiential learning is necessary. To that end, I highly recommend forming an AI user group at the library. By way of example, this is the group we have formed at the South Huntington Public Library:
- Composition: 16 part-time and full-time librarians and support staff members from Computer Services, Circulation, Reference, Teen Services, Youth Services, and Administration. Since we will all interact with AI differently, it was essential that the group include a wide array of perspectives and job responsibilities.
- Purpose: To better understand artificial intelligence and its potential uses in the library. To identify concerning aspects to be avoided. To give staff a voice as we grapple with a new technology. To stress test existing policy. To identify potential tools for investment and integration. To train future trainers.
- Format: Every two weeks, we unveil a popular tool with broad functionality. Examples include ChatGPT for large language models (LLMs), Bing Chat for conversational search, and DALL-E 3 for text-to-image generation. A staff member gives an overview of the tool, on occasion we watch an online tutorial, and then we hold a Q&A. Staff are given broad “homework assignments” in which they apply the tool to an aspect of their job. For example, our Youth Services librarian used ChatGPT to brainstorm program titles. At the next meeting, we open with a discussion and evaluation of the previous tool.
As our user group becomes more comfortable with the technology, our organization stands to benefit in several ways.
- We can discuss and plan around the technology in an informed way, relying on experience rather than conjecture.
- We can identify appropriate and inappropriate use cases and develop library guardrails to match.
- When formal training occurs, we will have newly minted trainers who can seed their departments of origin with specific, relevant knowledge.
- We develop in-house staff who can design and teach AI programs for our patrons.
That final bullet point is essential! We are learning so that we can position ourselves as AI navigators within our community, able to speak confidently about the technology. Our patrons are coming to us with questions in hand; turning these new skills outward is crucial to continuing to provide digital equity in our communities.
