While artificial intelligence holds tremendous potential for libraries, it also comes with significant concerns and the potential for harm. We find ourselves sailing uncertain waters; there are few guardrails governing AI's use. Even as we acknowledge this truth, we must also note that library staff are already experimenting with AI chatbots (most commonly ChatGPT), generative image tools (like Stable Diffusion), and other variations of AI technology. In short, we have potential innovations, pitfalls, and a total lack of clarity. Only through the thoughtful development of policy, procedure, and professionals can we hope to articulate a vision for the ethical use of AI in our libraries.
Policy & Procedure Considerations
Whether we develop new policy specific to AI or view AI through the lens of existing library policy, several considerations stand out: data privacy, copyright, and transparency.
Simply put, if something is free, then you're the product. AI models are constantly collecting data as they interact with users, train, and evolve. This can have dire consequences, as Samsung recently discovered: Samsung engineers had been using ChatGPT to check source code, only for those trade secrets to leak outside the company. One can easily see library staff leveraging the same or similar technology as a writing assistant, brainstorming tool, and so on, only to inadvertently reveal confidential patron information.
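One practical safeguard against that risk is scrubbing obvious identifiers before any text leaves the building. Below is a minimal, hypothetical sketch in Python; the patterns and labels are my own assumptions about what a library might flag, not a complete privacy solution.

```python
import re

# Hypothetical patterns for patron identifiers. A real policy would
# define exactly what counts as confidential and how it is detected.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b\d{13,16}\b"),  # e.g. a barcode-style library card number
}

def redact(text: str) -> str:
    """Replace anything matching a known identifier pattern with a label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

# A staff member could run drafts through redact() before pasting them
# into a chatbot or writing assistant.
print(redact("Patron jane.doe@example.com (card 21234567890123) asked about holds."))
```

This is only a first line of defense; pattern matching will miss names, addresses, and context clues, which is exactly why policy and training matter alongside tooling.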
There has been a proliferation of AI tools that take text prompts from users and generate images (and other media). Canva, design software already quite popular in libraries, includes a "Text to Image" tool. To date, the US Copyright Office has determined that because AI art is not the product of "human authorship," it is ineligible for copyright protection. While that may give some clarity (for the time being) on the output of generative AI tools, the trickier ethical piece involves how these models are built. That process includes training the model on hundreds of millions (even billions) of images scraped from the web, often without the consent of the original artists. To be clear, legal does not necessarily equate to ethical!
When we utilize the products of AI, should we let our patrons know? How do we do so? Transparency for an AI-generated image might mean citing the program used, along with the specific text prompts employed. Do we acknowledge the use (and extent) of AI-generated copy in our newsletter? That particular distinction, while important early on, may soon seem quaint as generative AI is built directly into tools like Microsoft Office.
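One way to operationalize that kind of transparency is a standard credit line. Here is a small, hypothetical Python sketch of generating such a disclosure; the wording and fields (tool, prompt, date) are assumptions on my part, not an established citation standard.

```python
# A hypothetical helper that builds a standard disclosure line for
# AI-generated media: which tool was used, with what prompt, and when.
def ai_credit(tool: str, prompt: str, date: str) -> str:
    return f'Image generated with {tool} using the prompt "{prompt}" ({date}).'

# Example credit line for a newsletter or display graphic:
print(ai_credit("Canva Text to Image", "watercolor of a cozy library reading nook", "June 2023"))
```

Whatever format a library settles on, the key is consistency: the same disclosure, in the same place, every time AI-generated media is used.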
Investing in AI
Once we establish some ground rules for the ethical use of AI in our libraries, we can then determine which specific technologies and platforms to invest in. One aspect of investment is financial; many services that are initially free (again, with you being the product) will grow in sophistication and eventually cost money. Others are freemium; ChatGPT's free tier comes without a guarantee of uptime during high-traffic hours and offers access to a less powerful set of features. In a work setting, you will need more certainty than freemium can provide. Aside from investing money in AI, you must also invest in your staff, namely through training. Such training is multi-faceted. It should cover:
- How does the tool work?
- What are some use cases?
- How can it be used in a way that conforms with library policy (including data retention)?
Until Next Time!
AI is a truly disruptive technology, and it is moving at a speed I cannot say I have seen before! A feeling of bewilderment is understandable; I feel it too! That said, the time has come to take a deep breath and get to work laying a foundation of policy guidelines and staff development as we navigate the uncertain road ahead. As always, if you're looking for a speaker for your event, feel free to reach out! I cover emerging technologies, staff training, library tech trends, tech on a budget, change management, and more! You can also check out this list of recent and upcoming events.