Information and Library Professionals’ Responsibilities in an AI-Augmented World

Gary Marchionini

University of North Carolina at Chapel Hill

It is always hard to predict the long-term impact of any newly hyped technical development — to estimate which innovation is just part of the caravan of hype cycles and which will continue to grow in impact long after the cycle peaks. In 2023, it is hard to attend a meeting or consume media without encountering Generative AI (GenAI) presented as the revolutionary technology that will change everything. For baby boomers, this is the third iteration of ‘AI will change the world,’ and so there is some skepticism about whether we are indeed in another technical revolution or simply another incremental information technology iteration. My estimation is that the confluence of carefully layered deterministic and probabilistic algorithms, running on networks of very high-performance processors with access to enormous volumes of structured and unstructured inputs, is going to be ‘capital S’ significant — another step in humankind amplifying and augmenting physical and mental existence. GenAI is less about one specific technical advance (e.g., machine learning, neural nets, or large language models [LLMs]) than about the natural and inexorable advance of our species’ ability to use tools and knowledge to amplify and augment human capabilities and activities. We are experiencing the latest turn in what Douglas Engelbart observed more than 60 years ago: electronic tools can augment the human intellect. The ‘generative’ modifier for AI speaks to new capabilities that output novel products (e.g., text or images), paralleling advances in systems that produce designed products and outcomes (e.g., additive 3D printing, self-assembling molecules, autonomous robots and weapons, products of synthetic biology).

Big changes bring excitement and fear. There are examples of GenAI applications that advance science, health, and education — each day brings new augmentations of human endeavor, such as support for protein engineering or on-the-fly clinical decision-making during radiation treatment. Likewise, each day brings examples of consequential errors (so-called ‘hallucinations’) or tragic effects (e.g., autonomous vehicle accidents). Additionally, there are serious overarching concerns about the impact of GenAI on work, entertainment, politics, personal identity, the global economy, military power, social well-being, and human dignity. The excitement and fear bring enormous investments of resources and human talent, breathless debates, and increasing attention and action from government agencies.[1] Given this context for GenAI and related technologies, what are the roles and responsibilities of information and library science professionals?

Why ILS Professionals?

The responsibilities discussed here apply to information professionals in corporate, government, or other settings; however, I use the term ‘information and library science professional’ rather than the more general ‘information professional’ because so many of these responsibilities flow from the historical values and practices of librarians, archivists, and information theorists. All information professionals, whatever the setting, will assume these roles and responsibilities within their specific work context. Information and Library Science (ILS) professionals have skills, perspectives, and practices that are essential to human interests in the AI-augmented world. We are especially crucial because of our human-centered approach to information and to the impacts of technology. For ILS professionals, human interest is not an add-on or afterthought but rather the reason we do the work at all — it is fundamentally baked into information generation, management, and use. We are clearly positioned to approach an AI-augmented world from a human-centered perspective (Shneiderman, 2022).

· ILS professionals are concerned with information integrity and trust so that humanity can advance with confidence, justice, and equality.

· ILS professionals have long traditions of managing large-scale information streams and repositories.

· ILS professionals have been active participants in pioneering information retrieval research and evaluation since the post-WWII era that gave rise to search engines, LLMs, and recommender systems.

· ILS professionals take a fundamentally human-centric approach to the needs of patrons, clients, and the public.

· ILS professionals have been active partners in advancing human-centered design and evaluation research and practice.

· ILS professionals have long accepted responsibility for managing institutions and services devoted to preserving the products of human memory and activity, and defending these institutions and services as public goods.

· ILS professionals are committed to a culture of sharing and equitable access — librarians, archivists, and curators operate cooperative loans (e.g., interlibrary loan) and distributed digital libraries.

What are the critical roles of ILS professionals in an AI-augmented world? Certainly, ILS professionals will learn to leverage AI augmentations to better serve their clients, organizations, and humanity’s information needs. However, given the developments and challenges that the latest wave of technology brings, together with the skills, critical perspectives, and cultures of ILS, there are several key responsibilities that ILS professionals must accept to ensure that humanity progresses with dignity and equanimity. Six types of responsibility follow, categorized as inputs, use, outputs, evaluation, education, and reflection[2].

· 1. INPUTS. Training set selection and validation (collection development policy, workflow and data genesis documentation).

o Librarians, archivists, and curators create collection development policies that align patron communities with extant and new information products and knowledge representations. These policies determine how acquisition resources are invested; they therefore generate scrutiny and debate and demand careful specification and justification. ILS professionals are well-suited to consider the quality and appropriateness of materials for different stakeholders and communities. The selection of training sets for large language models and other generative algorithms is crucial; although early models used large samples of convenience (e.g., harvests of swaths of the public WWW), more specialized and vetted selection and validation policies will surely be needed. ILS professionals have important roles to play in the validation, curation, and selection of training data. To do so, we must build upon traditional collection development strategies to make the structure and provenance of data sets transparent or at least documentable. Just as importantly, we must work to develop understanding of the nuances of different algorithms so that training sets may be well-suited to the problem domains to which the generative tools are applied.[3]
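
To make the idea of documentable provenance concrete, here is a minimal, hypothetical sketch of a provenance record for a training corpus, loosely modeled on collection development documentation. The schema, field names, and example values are assumptions made for illustration, not an established metadata standard.

```python
# Hypothetical sketch: a minimal provenance record for a training corpus,
# loosely modeled on collection development documentation. Field names and
# example values are illustrative, not an established metadata standard.
from dataclasses import dataclass, asdict
import json


@dataclass
class TrainingSetProvenance:
    title: str             # human-readable name of the corpus
    source: str            # where the material was harvested or acquired
    date_collected: str    # ISO 8601 date of the harvest or acquisition
    selection_policy: str  # why this material was included (the collection policy)
    license_terms: str     # rights and reuse conditions, as far as they are known
    known_gaps: str        # communities, languages, or periods under-represented
    curator: str           # who vetted the material and can answer questions


record = TrainingSetProvenance(
    title="Public-domain state government reports, 1990-2010 (hypothetical)",
    source="Harvest of a state documents repository",
    date_collected="2023-06-15",
    selection_policy="Official reports only; drafts and press releases excluded",
    license_terms="Public domain (U.S. government works)",
    known_gaps="Few documents before 1995 were digitized",
    curator="Government documents librarian, example institution",
)
print(json.dumps(asdict(record), indent=2))
```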

· 2. USE. Prompt construction and refinement.

o Sometimes called prompt engineering, one of the current challenges of GenAI systems entails translating human needs and desires into signals that a complex, black-box computational model can understand. Information professionals help people articulate their information needs. Librarians learn how to conduct reference interviews that help people explicate their information needs and clarify and refine their queries through conversation and iterative consideration of preliminary results. As researchers and designers of interactive information retrieval systems, ILS professionals investigate ways to incorporate relevance feedback in system processes and guide how people reformulate queries, with an eye toward how interactive query formulation and revision improve search efficiency and effectiveness[4]. Not only are ILS professionals well prepared to adapt these skills to help people develop and hone prompts, but we can also leverage our experience to create systematic investigations of how generative models change as millions of people interact with them over time. This requires attention to how human iterations interact with base models that were trained on extant data. In essence, we can study how human interaction affects the way generative systems learn and evolve over time.
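
As a small illustration of how a reference-interview sensibility might carry over into prompt refinement, the sketch below iterates a prompt with a user's clarifications. The generate function is a placeholder for whatever generative service is in use; nothing here reflects a particular vendor's interface.

```python
# Hypothetical sketch: an iterative prompt-refinement loop modeled on the
# reference interview. The generate() function is a stand-in for any GenAI
# service; it is an assumption, not a specific product's interface.

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model output for a prompt of {len(prompt)} characters]"


def refine_prompt(initial_need: str, max_rounds: int = 3) -> str:
    """Iteratively fold a patron's clarifications back into the prompt."""
    prompt = f"Task: {initial_need}\nRespond concisely and cite your sources."
    for round_number in range(1, max_rounds + 1):
        draft = generate(prompt)
        print(f"Draft {round_number}:\n{draft}\n")
        feedback = input("What is missing or off-target? (blank to accept): ").strip()
        if not feedback:
            break
        # Mirror the reference interview: each clarification narrows the need.
        prompt += f"\nClarification: {feedback}"
    return prompt


if __name__ == "__main__":
    final_prompt = refine_prompt("Find recent studies on rural broadband access")
    print("Final prompt:\n" + final_prompt)
```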

· 3. OUTPUTS. Model and output documentation, workflows, disclosure, declaration, and citation.

o Librarians, records managers, data stewards, and knowledge managers have deep expertise in collecting, classifying, and preserving physical and digital resources. Our expertise in meeting the challenges of curating and managing digital libraries of dynamic assets, such as active code repositories or social media streams, will be essential to adding metadata to training sets and workflows as well as context to outputs, recommendations, and decisions in an AI-augmented world. ILS professionals have been leaders in developing standards and procedures for collecting and preserving research data and in working with publishers to formalize conflict of interest statements and other disclosures. This expertise will be crucial to ensuring that AI augmentation is documented and consequently trusted. Whether the aim is to build upon existing knowledge to create new knowledge or to protect intellectual property, ILS professionals have crucial roles to play in documenting the workflows and context of GenAI and other technologies. Documentation for all types of human endeavor is essential to scientific, legal, economic, and historical progress. Scholarship and human progress depend on understanding the genesis, flow, and application of ideas. Public and private libraries, archives, databases, and repositories are essential to research and innovation, and sophisticated patent and copyright regimes have evolved to reward novel applications of knowledge. Twenty-first century technologies and techniques demand expanded and extended documentation efforts to continue human progress[5]. GenAI outputs that are used for decision-making, to drive manufacturing, or to manage large-scale systems are difficult to document due to complexity (e.g., the number of parameters), opacity (e.g., ‘hidden’ layers of reinforcement learning), and economic control (e.g., proprietary processes). It is especially important that ILS professionals engage in documenting workflows, design decisions, inputs, outputs, applications, and other contextual factors so that trust, additional progress, and history are served by these technologies.
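
For illustration only, the short sketch below turns a handful of documented workflow details into a plain-language disclosure statement that could accompany a GenAI-assisted document. The field names and wording are assumptions for this example; they do not follow any published disclosure or citation standard.

```python
# Hypothetical sketch: turning a few disclosure fields into a plain-language
# statement that could accompany a GenAI-assisted document. The fields and
# wording are illustrative, not a published disclosure or citation standard.

def disclosure_statement(model_name: str, model_version: str, used_for: str,
                         prompt_summary: str, date_used: str,
                         human_review: str) -> str:
    """Compose a human-readable disclosure from documented workflow details."""
    return (
        f"Portions of this work were produced with the assistance of "
        f"{model_name} (version: {model_version}) on {date_used}. "
        f"The tool was used to {used_for}. Prompt summary: {prompt_summary}. "
        f"Human review: {human_review}."
    )


print(disclosure_statement(
    model_name="example-llm",                  # hypothetical model name
    model_version="unknown (hosted service)",
    used_for="draft a first-pass summary of patron survey comments",
    prompt_summary="summarize open-ended comments by theme",
    date_used="2023-10-25",
    human_review="a librarian verified every quotation against the transcripts",
))
```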

· 4. EVALUATION. AI system development and evaluation that amends traditional models and metrics with new models and metrics appropriate to interactive generation of content, objects, and organisms.

o For more than 30 years, the ILS community has collaborated with the computer science and computational linguistics communities to formalize evaluation competitions for retrieval systems. These collaborations laid the foundations for model and system assessments, including the creation of metrics beyond traditional recall/precision.[6] ILS professionals have been active in expanding evaluation metrics beyond system performance to human performance variables and concerns (e.g., satisfaction, engagement, fatigue, learning). GenAI will require new use cases, test tasks, and outcome metrics. Already, some groups have initiated this work, and ILS professionals must join in to ensure that authentic task environments and human interaction metrics are included, and that specifications use consistent vocabularies for evaluations and publication[7].
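
As an invented example of pairing the two kinds of measures, the following sketch computes precision and recall for a single test task alongside a simple self-reported satisfaction score. The judged documents, system results, and ratings are made up for illustration.

```python
# Hypothetical sketch: pairing a traditional system metric (precision/recall)
# with a human-centered measure (self-reported satisfaction) for one test task.
# The judged documents, system results, and ratings are invented for illustration.

relevant = {"doc1", "doc3", "doc7"}           # documents judged relevant for the task
retrieved = {"doc1", "doc2", "doc3", "doc9"}  # what the system (or model) returned

true_positives = len(relevant & retrieved)
precision = true_positives / len(retrieved) if retrieved else 0.0
recall = true_positives / len(relevant) if relevant else 0.0

satisfaction_ratings = [4, 5, 3, 4]  # 1-5 scale reported by study participants
mean_satisfaction = sum(satisfaction_ratings) / len(satisfaction_ratings)

print(f"precision={precision:.2f}, recall={recall:.2f}, "
      f"mean satisfaction={mean_satisfaction:.2f}")
```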

· 5. EDUCATION. Inspire, educate, and foster critical understanding and access to stored and generated information products and streams.

o ILS professionals have long been champions of open access to information and proponents of literacy. Electronic information environments bring new challenges to free expression and new media for human expression. AI augmentation will surely accelerate challenges to free speech, trust, and evidence of expertise and authority. We must shun becoming the authority stamp for ‘truth’ and instead strive to provide clearly delineated imprimaturs for information products, along with guides and tools people can use in making their own determinations about what is disinformation, misinformation, or credible information. ILS professionals’ experience in advocating for and providing literacy training to all, our incorporation of new media in public collections, our opposition to censorship, our expertise in critical assessment of knowledge assets, and our devotion to equal access have prepared us to take leadership roles as new products and outcomes of GenAI applications affect individuals and society. We have important responsibilities to be active participants in policy discussions and regulatory actions. To carry out these responsibilities, we must work to understand both the applications and limitations of GenAI and design clear educational programming appropriate to diverse populations of learners[8]. In addition to being advocates for responsible and human-centric applications of GenAI, we are also responsible for ensuring that people understand the importance of materiality to human existence. ILS professionals have long experience managing and preserving the physical artifacts of human creativity and endeavor. We understand why physical materials have inherent value that cannot be digitized or computed — that knowledge artifacts have sensual characteristics (feel, smell, damage) that carry meaning. We understand that people are, and know themselves to be, embodied in temporal and physical spaces. It is our responsibility to ensure that fully intelligent and actualized people appreciate this materiality and can differentiate it from the outputs of the artificial.

· 6. REFLECTION. Promote and advocate for equitable individual and social access to powerful AI augmentation tools and products.

o ILS professionals are inherently interdisciplinary bridge builders. We work across different scholarly and practitioner communities and have developed trust from both privileged and underserved populations. It is imperative that we use this expertise and perspective to ensure that an AI-augmented world serves all people rather than only those with power and privilege. Just as we have created shared libraries of resources and tools for our constituents, open access collections of GenAI tools and services are needed. Additionally, just as we advocate for open access to ideas and information resources today, we must advocate for open AI systems, policies, and practices to serve the public good.

We cannot know today whether in 15 years everyone on our planet will have customizable and personalized AI augmentations immediately at hand on their phones, watches, or implants[9], or whether these augmentations will be limited to government or large corporate entities. What we do know is that there are enormous new opportunities and challenges in an AI-augmented world. Be prepared to be awed and disappointed — be realistic. ILS professionals are prepared to play crucial roles in ensuring that all of humanity prospers and benefits. However, this is not automatic; it will require courage and energy to accept the responsibilities and challenges the coming decades will bring. We are up to the challenge.

Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). https://doi.org/10.1145/3442188.3445922

Kapoor, S., & Narayanan, A. AI Snake Oil (Substack newsletter and book). https://www.aisnakeoil.com/

Shneiderman, B. (2022). Human-Centered AI. Oxford University Press. See also the human-centered-ai Google Group (human-centered-ai@googlegroups.com).

U.S. Senate Judiciary Committee, Subcommittee on Privacy, Technology, and the Law. Hearing: “Oversight of A.I.: Rules for Artificial Intelligence.” https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai/

European Union AI Act https://artificialintelligenceact.eu/the-act/

Liang, P., et al. (2023). Holistic evaluation of language models. Transactions on Machine Learning Research, October 2023. https://arxiv.org/pdf/2211.09110.pdf. See also the Center for Research on Foundation Models: https://crfm.stanford.edu/helm/latest/

Watson, J.L., Juergens, D., Bennett, N.R. et al. De novo design of protein structure and function with RFdiffusion. Nature 620, 1089–1100 (2023). https://doi.org/10.1038/s41586-023-06415-8

[1] Examples include: California’s bot disclosure law (effective 2019) requires bots to disclose that they are automated processes rather than human actors. The far-reaching European Union AI Act, proposed in 2021, aims to ensure that “AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.” Numerous U.S. hearings and meetings have been held, with a variety of laws and regulations proposed or making their way through the legislative process (e.g., the U.S. Senate Judiciary Committee Subcommittee on Privacy, Technology and the Law hosted a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence”; https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai/).

[2] This list is not meant to be exhaustive but represents key challenges today. One overarching responsibility not made explicit is for us to continue to learn and discover new ways to serve humanity’s information needs.

[3] For example, diffusion models are used to identify promising new proteins by leveraging the structures in curated protein databases, reducing the number of lab trials by orders of magnitude (Watson et al., 2023). This amplification of manual trial-and-error procedures is possible because biochemists and information specialists have carefully curated protein databases over decades. This is not the case for many other kinds of generative problems that lack carefully curated databases. ILS professionals are well-prepared to work with disciplinary experts, first, to determine which kinds of approach will work under different data conditions, and second, to help create curated data that lends itself to specific AI amplifications or augmentations.

[4] Perhaps ILS professionals would want to map classical search strategies such as successive fractions, building block, and citation pearl growing to emerging prompting strategies such as least-to-most, tree-of-thought, chain-of-thought, and majority voting.

[5] Other documentation challenges include managing records of manufacturing and supply chain events in complex, global production (e.g., Apple’s micro-etching of QR codes on iPhone screens, starting at glass sheet production and continuing through assembly operations executed on other continents); managing records of 3D printed objects and organisms (e.g., the Stanford BASE Lab’s additive printing of layers of cells to generate a human heart); and documentation systems for the inevitable development of bit-level watermarks.

[6] The Text REtrieval Conference (TREC) has run ongoing evaluations for more than 30 years, expanding from early text retrieval evaluations to multimedia, cross-language, question answering, social media search, and other topics (https://trec.nist.gov/).

[7] Liang et al. (2023) have created an ambitious framework for evaluating LLMs, including sample cases that illustrate approaches for 16 core scenarios (e.g., IR, sentiment analysis, summarization, question answering) using 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency). See also the Center for Research on Foundation Models: https://crfm.stanford.edu/helm/latest/

[8] There is a plethora of courses, conference workshops, and papers from credible sources that explain the principles of large language models, deep learning, and other elements of GenAI (e.g., Coursera alone offers dozens of courses, some with more than 100,000 students enrolled). There is also a host of papers, blogs, and conference workshops that critically discuss the technical limitations and practical implications of GenAI (e.g., Bender et al., 2021; Kapoor and Narayanan’s AI Snake Oil, https://www.aisnakeoil.com/). These resources are dwarfed by the relentless flow of popular press and social media stories about magical applications or apocalyptic futures, and it is information professionals’ responsibility to work to organize and curate resources for patrons and the public.

[9] We can imagine that the computational power and data resources will be cheap and ubiquitous so that anyone can easily call upon a model that draws on all public data as well as all personal data from emails, social media, and other systems to answer or guide life activities on the fly (and plug and play different proprietary or specialized data to generate ‘what if’ scenarios and outputs). I suggest that all the responsibilities discussed here continue to obtain.
